
The limits of intelligence — Why AI advancement could be slowing down

The logos of Google Gemini, ChatGPT, Microsoft Copilot, Claude by Anthropic, Perplexity, and Bing apps are displayed on the screen of a smartphone in Reno, United States, on November 21, 2024.
Jaque Silva | Nurphoto | Getty Images

Generative artificial intelligence has developed so quickly over the past two years that massive breakthroughs seemed more a question of when than if. But in recent weeks, Silicon Valley has become increasingly concerned that advancements are slowing.

One early indication is the lack of progress between models released by the biggest players in the space. The Information reports that OpenAI is seeing a significantly smaller boost in quality from its next model, GPT-5, while Anthropic has delayed the release of its most powerful model, Opus, according to wording that was removed from the company's website. Even at tech giant Google, Bloomberg reports that an upcoming version of Gemini is not living up to internal expectations.

"Remember, ChatGPT came out at the end of 2022, so now it's been close to two years," said Dan Niles, founder of Niles Investment Management. "You had initially a huge ramp up in terms of what all these new models can do, and what's happening now is you really trained all these models and so the performance increases are kind of leveling off."

If progress is plateauing, it would call into question a core assumption that Silicon Valley has treated as religion: scaling laws. The idea is that adding more computing power and more training data will keep producing better models, with no clear ceiling. But recent developments suggest scaling may be more theory than law.
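For context, researchers often describe these scaling laws with a simple empirical power law: a model's predicted error keeps falling as parameters and training data grow, but each additional order of magnitude buys a smaller improvement. The sketch below is illustrative only, using a commonly cited power-law form and made-up constants rather than figures from OpenAI, Anthropic, or Google, to show how the gains shrink as scale goes up.

    # Minimal sketch of a scaling-law curve, assuming the common power-law form
    # L(N, D) = E + A / N**alpha + B / D**beta, where N is parameter count and
    # D is training tokens. All constants here are illustrative assumptions.

    def predicted_loss(n_params: float, n_tokens: float,
                       E: float = 1.7, A: float = 400.0, B: float = 410.0,
                       alpha: float = 0.34, beta: float = 0.28) -> float:
        """Estimate model loss from parameters (N) and training tokens (D)."""
        return E + A / n_params**alpha + B / n_tokens**beta

    # Each tenfold increase in scale still lowers the predicted loss,
    # but by a smaller and smaller amount:
    for scale in (1e9, 1e10, 1e11, 1e12):
        print(f"N = D = {scale:.0e}: predicted loss = {predicted_loss(scale, scale):.3f}")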

The key problem could be that AI companies are running out of data for training models, hitting what experts call the "data wall." Instead, they're turning to synthetic data, or AI-generated data. But that's a band-aid solution, according to Scale AI founder Alexandr Wang.  

 "AI is an industry which is garbage in, garbage out," Wang said. "So if you feed into these models a lot of AI gobbledygook, then the models are just going to spit out more AI gobbledygook."  

But some leaders in the industry are pushing back on the idea that the rate of improvement is hitting a wall.  

"Foundation model pre-training scaling is intact and it's continuing," Nvidia CEO Jensen Huang said on the chipmaker's latest earnings call. "As you know, this is an empirical law, not a fundamental physical law. But the evidence is that it continues to scale."

OpenAI CEO Sam Altman posted on X simply, "there is no wall." 

OpenAI and Anthropic didn't respond to requests for comment. Google says it's pleased with its progress on Gemini and has seen meaningful performance gains in capabilities like reasoning and coding. 

If AI acceleration is tapped out, the next phase of the race is the search for use cases – consumer applications that can be built on top of existing technology without the need for further model improvements. The development and deployment of AI agents, for example, is expected to be a game-changer. 

"I think we're going to live in a world where there are going to be hundreds of millions, billions of AI agents, eventually probably more AI agents than there are people in the world," Meta CEO Mark Zuckerberg said in a recent podcast interview.  

