
    Why Cohere’s ex-AI research lead is betting against the scaling race

    The race among AI labs to build massive data centers, some as expansive as Manhattan, is escalating, with each facility costing billions of dollars and consuming as much energy as a small city. At the heart of this push is “scaling,” the belief that adding ever more computational power to AI training will eventually produce superintelligent systems capable of a wide range of complex tasks.

    However, many AI researchers are voicing concerns that the scaling of large language models (LLMs) may be approaching its limits, suggesting more innovative breakthroughs are essential for enhancing AI performance.

    That is the bet of Sara Hooker, former VP of AI Research at Cohere and a Google Brain alumna, who has co-founded a new startup, Adaption Labs. She founded the company with Sudip Roy, another Cohere and Google veteran, on the belief that scaling LLMs has become an ineffective way to improve AI model performance. After leaving Cohere in August, Hooker quietly announced the startup’s launch this month as she looks to broaden recruiting.

    In a conversation with TechCrunch, Hooker described how Adaption Labs aims to build AI systems that continually adapt and learn from real-world experience, and do so highly efficiently. She declined to say what methods the company is using, or whether it relies on LLMs or another architecture, but her vision is clear.

    “We’ve reached a pivotal moment now where it’s apparent that simply scaling these models—what I call ‘scaling-pilled’ approaches, which are alluring yet remarkably mundane—hasn’t yielded intelligence capable of navigating or engaging with the world effectively,” Hooker explained.

    According to Hooker, adapting is the “core of learning.” For instance, if you accidentally stub your toe on the dining room table, you’ll learn to maneuver more carefully around it in the future. AI labs have attempted to embody this concept through reinforcement learning (RL), enabling AI models to learn from errors within controlled environments. However, these current RL techniques fall short for AI models in production—those actively utilized by customers—leaving them to encounter the same mistakes repeatedly.

    While some AI labs offer consulting services to help businesses optimize their AI models to fit their specific needs, these services often come with a steep price tag. Reports indicate that OpenAI may require customers to spend upwards of $10 million to access its consulting services for model fine-tuning.

    “There’s a handful of leading labs dictating the AI models that are served uniformly, and they come at a significant cost to adapt,” Hooker stated. “However, this doesn’t have to be the norm. AI systems can learn efficiently from their environments. Demonstrating this will fundamentally alter who controls and shapes AI and, ultimately, who these models will serve.”

    Adaption Labs arrives as confidence in scaling LLMs wavers across the industry. A recent study from MIT researchers found that the largest AI models may soon show diminishing returns. Reflecting that shift, popular AI podcaster Dwarkesh Patel has recently hosted strikingly skeptical conversations with well-known AI researchers.

    Richard Sutton, a Turing Award recipient widely regarded as “the father of RL,” told Patel in September that LLMs cannot truly scale because they do not learn from real-world experience. This month, early OpenAI contributor Andrej Karpathy told Patel he has reservations about how far RL can take AI models in the long run.

    These concerns are not new. By late 2024, numerous AI researchers voiced worries that scaling AI models via pretraining—where models learn patterns from vast datasets—was hitting a plateau. Until then, pretraining had been the secret weapon for both OpenAI and Google in enhancing their models.

    Those concerns about pretraining scaling have since shown up in the data, but the AI industry has found other ways to improve models. Breakthroughs in 2025 around AI reasoning models, which take additional time and computational resources to work through problems before answering, have pushed the capabilities of AI systems further.

    AI labs appear convinced that advancing scaling in RL and AI reasoning models represents the new frontier. OpenAI researchers previously informed TechCrunch that their initial AI reasoning model, o1, was conceived under the belief it would scale effectively. Meanwhile, researchers from Meta and Periodic Labs have recently published a paper investigating how RL could potentially enhance performance even further—a study reportedly costing over $4 million, highlighting the hefty expenses associated with current methodologies.

    Adaption Labs, by contrast, seeks to find the next major breakthrough and demonstrate that learning from experience can be far more economical. The startup was reportedly in talks earlier this fall to raise a seed round of between $20 million and $40 million, according to three investors familiar with its pitch. The round has since closed, though the final amount remains undisclosed. Hooker declined to comment on the specifics.

    “We’re determined to be highly ambitious,” Hooker asserted when questioned about her investors.

    In her previous role at Cohere Labs, Hooker focused on training smaller AI models for enterprise applications. Compact AI systems are already outperforming their larger counterparts in areas like coding, mathematics, and reasoning, a trend Hooker is eager to build on.

    She has also established a strong commitment to expanding global access to AI research, actively recruiting talent from underrepresented regions including Africa. Although Adaption Labs is set to launch an office in San Francisco, Hooker intends to hire talent from around the globe.

    If Hooker and Adaption Labs can demonstrate the limits of scaling, the implications could be profound. Enormous sums have been invested in scaling up LLMs on the premise that bigger models yield better general intelligence. The reality may be that truly adaptive learning proves not only more powerful but also far more efficient.

    Marina Temkin contributed reporting.
