
AI labs are racing to build data centers as big as Manhattan, each costing billions of dollars and consuming as much energy as a small city. The effort is driven by a deep belief in “scaling” — the idea that adding more computing power to existing AI training methods will eventually create superintelligent systems capable of performing all sorts of tasks.
But a growing chorus of AI researchers says that scaling large language models has reached its limits, and that other advances may be needed to improve AI performance.
That’s the bet Sara Hooker, Cohere’s former VP of AI research and a Google Brain alumna, is making with her new startup, Adaption Labs. She co-founded the company with fellow Cohere and Google veteran Sudip Roy, and it is built on the idea that scaling LLMs has become an inefficient way to squeeze more performance out of AI models. Hooker, who left Cohere in August, quietly announced the startup, which will start hiring more broadly this month.
In an interview with TechCrunch, Hooker said that Adaption Labs is building AI systems that can continuously adapt and learn from their real-world experiences, and do so very efficiently. She declined to share details about the methods behind the approach, or whether the company relies on LLMs or another architecture.
“There’s a turning point now where it’s very clear that the formula for scaling these models — the scaling-pilled approach, which is interesting but very boring — has not produced intelligence that is able to navigate or interact with the world,” Hooker said.
Adaptation is “the heart of learning,” according to Hooker. For example, stub your toe when you walk past your dining room table and you’ll learn to step more carefully around it next time. AI labs have attempted to capture this concept through reinforcement learning (RL), which allows AI models to learn from their mistakes in controlled settings. However, today’s RL methods do not help AI models in production – meaning systems already in use by customers – to learn from their mistakes in real time. They just keep stubbing their toes.
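To make that distinction concrete, here is a minimal, purely illustrative Python sketch — a toy bandit-style learner invented for this article, not Adaption Labs’ or any lab’s actual method. It shows the gap Hooker describes: reinforcement-style updates happen during a controlled training phase, and once the model is deployed with its update step never called again, it cannot learn from new feedback.

```python
import random

class ToyModel:
    """Tracks a score per response option and always picks the highest-scoring one."""
    def __init__(self, options):
        self.scores = {o: 0.0 for o in options}

    def respond(self):
        return max(self.scores, key=self.scores.get)

    def update(self, option, reward, lr=0.1):
        # Reinforcement-style update: nudge the option's score toward the observed reward.
        self.scores[option] += lr * (reward - self.scores[option])

model = ToyModel(["answer_a", "answer_b"])

# 1) The common pattern today: RL happens in a controlled training phase...
for _ in range(100):
    choice = random.choice(list(model.scores))
    reward = 1.0 if choice == "answer_a" else 0.0  # simulated training-time feedback
    model.update(choice, reward)

# ...then the model is effectively frozen and deployed.
print(model.respond())

# 2) In production, feedback still arrives, but nothing calls update(),
#    so the system repeats the same mistakes -- it keeps "stubbing its toe."
# 3) Continual adaptation, the behavior Hooker describes, would keep calling
#    update() on live feedback after deployment, ideally at very low cost.
```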
Some AI labs offer consulting services to help enterprises fine-tune AI models to their custom needs, but this comes at a price. OpenAI customers reportedly need to spend upward of $10 million to get fine-tuning consulting services from the company.
“We have a handful of frontier labs that define this set of AI models that serve everyone the same and are very expensive to adapt,” Hooker said. “And actually, I think that doesn’t need to be true anymore, and AI systems can learn from an environment very efficiently. Proving this out will completely change the dynamics of who gets to control and shape AI, and really, who these models serve at the end of the day.”
Adaption Labs is the latest sign that the industry’s faith in scaling LLMs is eroding. A recent paper from MIT researchers found that the world’s largest AI models may soon show diminishing returns. San Francisco’s vibes seem to be changing, too: the AI world’s favorite podcaster, Dwarkesh Patel, recently hosted some unusually skeptical conversations with prominent AI researchers.
Richard Sutton, a Turing Award winner considered the “father of RL,” told Patel in September that LLMs can’t truly scale because they don’t learn from real-world experience. This month, early OpenAI employee Andrej Karpathy told Patel he has reservations about the long-term potential of RL to improve AI models.
Such fears are not unprecedented. In late 2024, some AI researchers raised concerns that scaling AI models through pretraining — in which models learn patterns from massive datasets — was hitting diminishing returns. Until then, pretraining had been the secret sauce that helped OpenAI and Google improve their models.
Those pretraining scaling concerns are now showing up in the data, but the AI industry has found other ways to improve models. In 2025, breakthroughs around AI reasoning models, which take additional time and computational resources to work through problems before answering, have pushed the capabilities of AI models further.
AI labs are now convinced that scaling RL and AI reasoning models is the new frontier. OpenAI researchers previously told TechCrunch that they created their first AI reasoning model, o1, because they thought it would scale well. Researchers at Meta and Periodic Labs recently published a paper exploring how RL could further scale performance; the study reportedly cost more than $4 million, underscoring how expensive the current approach remains.
Adaption Labs, by contrast, aims to find the next breakthrough and prove that learning from experience can be far cheaper. The startup was in talks to raise a $20 million to $40 million seed round earlier this fall, according to three investors who reviewed its pitch deck. They say the round has since closed, though the final amount is unclear. Hooker declined to comment.
“We’re set up to be very ambitious,” Hooker said when asked about her investors.
Hooker previously led Cohere Labs, where she trained small AI models for enterprise use cases. Compact AI models now routinely outperform their larger counterparts on coding, math, and reasoning benchmarks — a trend Hooker wants to continue.
She has also built a reputation for expanding access to AI research globally, recruiting research talent from underrepresented regions such as Africa. While Adaption Labs will soon open a San Francisco office, Hooker said she plans to hire globally.
If Hooker and Adaption Labs are right about the limits of scaling, the implications could be huge. Billions of dollars have already been invested in scaling LLMs on the assumption that bigger models will lead to general intelligence. But it’s possible that truly adaptive learning could prove not only more powerful, but far more efficient.
Marina Temkin contributed reporting.