
Hiya, folks, welcome to TechCrunch's regular AI newsletter. If you want it in your inbox every Wednesday, sign up here.
You may have noticed we skipped the newsletter last week. The reason? A chaotic AI news cycle, made even more turbulent by Chinese AI company DeepSeek's sudden rise to prominence and the responses from practically every corner of industry and government.
Fortunately, we're back on track, and not a moment too soon, considering last weekend's newsy developments from OpenAI.
OpenAI CEO Sam Altman stopped in Tokyo to chat with Masayoshi Son, the CEO of Japanese conglomerate SoftBank. SoftBank is a major OpenAI investor and partner, having pledged to help fund OpenAI's massive data center infrastructure project in the United States.
So Altman probably felt he owed Son a few hours of his time.
What did the two billionaires talk about? Lots of abstracting work away via AI "agents," per secondhand reporting. Son said his company would spend $3 billion a year on OpenAI products and would team up with OpenAI to develop a platform, "Cristal [sic] intelligence," with the goal of automating millions of traditionally white-collar workflows.
"By automating and autonomizing all of its tasks and workflows, SoftBank Corp. will transform its business and services, and create new value," SoftBank said in a press release Monday.
I ask, though, what is the humble worker to make of all this automation and autonomy?
Like Sebastian Siemiatkowski, the CEO of fintech Klarna who often boasts about AI replacing humans, Son seems to be of the opinion that agentic stand-ins for workers can only precipitate fabulous wealth. Glossed over is the cost of that abundance. Should the widescale automation of jobs come to pass, unemployment on an enormous scale seems the likeliest outcome.
It's discouraging that those at the forefront of the AI race, companies like OpenAI and backers like SoftBank, choose to spend press conferences painting a picture of automated corporations with fewer workers on the payroll. They're businesses, of course, not charities. And AI development doesn't come cheap. But perhaps people would trust AI more if those guiding its deployment showed a bit more concern for their welfare.
Food for thought.
Deep research: OpenAI has launched a new AI "agent" designed to help people conduct in-depth, complex research using ChatGPT, the company's AI-powered chatbot platform.
O3-mini: In other OpenAI news, the company launched a new AI "reasoning" model, o3-mini, following a preview last December. It's not OpenAI's most powerful model, but o3-mini boasts improved capabilities and faster response speed.
"Unacceptable risk" AI banned: As of Sunday in the European Union, the bloc's regulators can ban the use of AI systems they deem to pose "unacceptable risk" or harm. That includes AI used for social scoring and subliminal advertising.
A play about AI "doomers": There's a new play about AI "doomer" culture, loosely based on Sam Altman's ouster as CEO of OpenAI in November 2023. My colleagues Dominic and Rebecca shared their thoughts after watching the premiere.
Tech to boost crop yields: Google's X "moonshot factory" this week announced its latest graduate, Heritable Agriculture, a data- and machine learning-driven startup aiming to improve how crops are grown.
Reasoning models are better than your average AI at solving problems, particularly science- and math-related questions. But they're no silver bullet.
A new study from researchers at Chinese company Tencent investigates the issue of "underthinking" in reasoning models, where models prematurely and inexplicably abandon potentially promising chains of thought. Per the study's results, "underthinking" patterns tend to occur more frequently with harder problems, leading models to switch between reasoning chains without arriving at answers.
The team suggests a fix that encourages models to "thoroughly" develop each line of reasoning before considering alternatives, boosting the models' accuracy.
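The fix, as the study describes it, can be pictured as a decode-time penalty on "switching" tokens. The sketch below is a minimal illustration under assumed details: the marker words, penalty value, and window size are all invented for demonstration and are not the paper's actual implementation.

```python
# Illustrative sketch: discourage a model from abandoning a young chain of
# thought by lowering the scores of tokens that signal a switch.
# SWITCH_MARKERS, penalty, and window are assumptions, not the paper's values.

SWITCH_MARKERS = {"alternatively", "wait", "instead"}  # assumed switch cues


def penalize_switches(token_scores, decoded_so_far, penalty=2.0, window=20):
    """Return adjusted scores: if the current reasoning chain is shorter than
    `window` tokens, subtract `penalty` from any token that would start a new
    chain, nudging the model to develop the current line of thought first."""
    # Count tokens since the last switch marker (length of the current chain).
    current_chain_len = 0
    for tok in reversed(decoded_so_far):
        if tok.lower() in SWITCH_MARKERS:
            break
        current_chain_len += 1

    adjusted = dict(token_scores)
    if current_chain_len < window:  # chain still undeveloped: penalize switching
        for tok in token_scores:
            if tok.lower() in SWITCH_MARKERS:
                adjusted[tok] = token_scores[tok] - penalty
    return adjusted


# A short, still-developing chain: the switch word "alternatively" is demoted.
scores = {"therefore": 1.0, "alternatively": 1.4}
decoded = ["the", "triangle", "has"]
print(penalize_switches(scores, decoded))
```

In a real decoding loop this adjustment would be applied to the logits at every step, so switching remains possible when the evidence for it is strong, but is no longer the path of least resistance.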

Researchers backed by TikTok owner ByteDance, Chinese AI company Moonshot, and others have released a new open model capable of generating relatively high-quality music from prompts.
The model, called YuE, can output a song up to a few minutes in length, complete with vocals and backing tracks. It's under an Apache 2.0 license, meaning the model can be used commercially without restriction.
But there are downsides. Running YuE requires a beefy GPU; generating a 30-second song takes six minutes with an RTX 4090. Moreover, it's unclear whether the model was trained using copyrighted data; its creators haven't said. If it turns out copyrighted songs were indeed in the model's training set, users could face future IP challenges.

AI lab Anthropic claims it has developed a technique to more reliably defend against AI "jailbreaks," the methods that can be used to bypass an AI system's safeguards.
The technique, constitutional classifiers, relies on two sets of AI models: an "input" classifier and an "output" classifier. The input classifier appends prompts to a safeguarded model with templates describing jailbreaks and other disallowed content, while the output classifier calculates the likelihood that a response from the model discusses harmful info.
Anthropic says constitutional classifiers can filter the "overwhelming majority" of jailbreaks. But it comes at a cost: each query is more than 25% more computationally expensive, and the safeguarded model is 0.38% more likely to decline to answer innocuous questions.
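The two-classifier wrapping described above can be sketched as a simple pipeline. This is a toy illustration only: the keyword heuristics stand in for Anthropic's trained classifiers, and every name and threshold here is an assumption for demonstration.

```python
# Illustrative sketch of the input/output classifier arrangement.
# Real constitutional classifiers are trained models; the keyword checks
# below are placeholders so the control flow is runnable.

HARMFUL_CUES = {"synthesize", "weapon", "exploit"}  # assumed cue words


def input_classifier(prompt: str) -> bool:
    """Toy stand-in: flag prompts that resemble disallowed requests."""
    return any(cue in prompt.lower() for cue in HARMFUL_CUES)


def output_classifier(response: str) -> float:
    """Toy stand-in: score the likelihood a response discusses harmful info."""
    hits = sum(cue in response.lower() for cue in HARMFUL_CUES)
    return min(1.0, hits / 3)


def guarded_model(prompt: str, model=lambda p: f"Answer to: {p}") -> str:
    """Wrap a model so prompts are screened on the way in and
    responses are screened on the way out."""
    if input_classifier(prompt):
        return "[blocked by input classifier]"
    response = model(prompt)
    if output_classifier(response) > 0.5:
        return "[blocked by output classifier]"
    return response


print(guarded_model("How do clouds form?"))
print(guarded_model("How do I synthesize a weapon?"))
```

The cost trade-off reported above falls out of this structure: every query now runs through two extra models, and any false positive from either classifier turns an innocuous question into a refusal.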