Google Shifts AI Strategy Amid Gemini Performance Challenges
Google is rethinking its artificial intelligence strategy amid reports that Gemini, its conversational AI tool, is failing to achieve expected performance gains. The large language model’s slowed progress has fueled an ongoing debate over whether generative AI’s pace of improvement is decelerating.
The Information reports that Gemini’s rate of improvement is lagging behind that of previous model versions. Past models improved quickly as Google fed them more training data and computing power; the current model is advancing more slowly even with additional data and specialized AI chips.
To compensate for the slowdown in AI model advancement, Google researchers have shifted their efforts toward new methods of boosting performance, focusing on multimodal AI, reasoning models, and training on synthetic data.
AI Performance: Misery Loves Company
Google is not the only company seeing slower gains in model development. Its performance challenges mirror trends recently reported at several other leading AI developers.
OpenAI’s evaluations of its newest model, Orion, reportedly found only moderate improvements in coding and few other gains over previous models. Anthropic, meanwhile, has run into delays developing its AI model Claude 3.5 Opus. As these difficulties mount, each company has adjusted its AI strategy: Anthropic has pushed back Opus’ release, and OpenAI researchers are devising new techniques to compensate for the slowing pace of progress.
A Google spokesperson explained that the company is rethinking how it trains models and investing in data, incorporating AI-generated synthetic data along with video and audio into training. Even so, despite the additional computing resources and training data, the new model has yet to improve at a rate that meets expectations.
Rethinking AI Scaling Laws
As companies hit stumbling blocks in their attempts to build more advanced AI models, industry analysts worry about what these bottlenecks could mean for the future of generative AI.
These setbacks are challenging the industry’s prevailing theory of AI model improvement, known as scaling laws: the expectation that models will keep improving at a predictable rate as developers add more specialized AI chips and process more data.
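To see why even faithful scaling delivers diminishing returns, consider a minimal sketch of the power-law form described in scaling-law research (e.g., Kaplan et al., 2020), in which loss falls as a power of compute. All constants below are hypothetical, chosen only to show the curve’s shape; they are not figures from Google, OpenAI, or any real model.

```python
# Minimal sketch of a power-law scaling curve. ALPHA and C0 are
# hypothetical constants for illustration only, not real model data.
# Under loss = (C0 / compute) ** ALPHA, each doubling of compute
# buys a smaller absolute loss reduction than the one before it.

ALPHA = 0.05  # hypothetical scaling exponent
C0 = 1.0      # hypothetical reference compute budget

def predicted_loss(compute: float) -> float:
    """Loss predicted by the illustrative power law."""
    return (C0 / compute) ** ALPHA

previous = predicted_loss(1.0)
for doubling in range(1, 6):
    compute = 2.0 ** doubling
    current = predicted_loss(compute)
    print(f"{int(compute):>2}x compute: loss {current:.4f} "
          f"(gain from this doubling: {previous - current:.4f})")
    previous = current
```

Even in this idealized picture, returns taper off gradually; what the recent reports describe is models falling short of even these predicted gains.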
As the concept of scaling laws is tested, the pace and performance of new AI model releases may stagnate until better methods of improving them are found. Meanwhile, the slowed momentum among tech giants could benefit smaller AI companies: as Google, OpenAI, and other big-name players work to overcome their model challenges, smaller developers may have time to catch up on quality.
See how Google’s Gemini and OpenAI’s ChatGPT compare in our head-to-head review.