
New LLM model from Google could undermine the case for computer chips

March 12, 2025 at 03:15PM

Earlier today Google announced a new large language model that will compete with ChatGPT, DeepSeek and others. The release highlights a trend of large efficiency gains, with newer models needing fewer computer chips to do the same work as their predecessors.

It’s called Gemma 3 and is actually a collection of models of various sizes that can be run locally or on a single Nvidia H100 GPU.

“Gemma 3 delivers state-of-the-art performance for its size, outperforming Llama-405B, DeepSeek-V3 and o3-mini in preliminary human preference evaluations on LMArena’s leaderboard. This helps you to create engaging user experiences that can fit on a single GPU or TPU host,” the release says.
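
To give a concrete sense of what “running locally” means here, below is a minimal sketch of loading a small Gemma 3 variant with the Hugging Face transformers library. The specific model ID, library version, and prompt are assumptions for illustration, not details from Google's announcement; the smallest variants are meant to fit on a single consumer GPU or even a CPU, while larger ones target a single H100-class accelerator.

```python
# Minimal sketch (illustrative, not from the article): local inference with a
# small Gemma 3 variant via Hugging Face transformers.
# Assumptions: transformers >= 4.50, the "google/gemma-3-1b-it" checkpoint is
# available and its license has been accepted on the Hub; accelerate is
# installed if device_map="auto" is used.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-3-1b-it",  # assumed model ID; larger variants exist (4B, 12B, 27B)
    device_map="auto",             # places weights on a GPU if one is available
)

# A plain string prompt keeps the example simple; for best results with an
# instruction-tuned model you would normally format input with its chat template.
prompt = "Explain briefly why smaller language models could reduce demand for GPUs."
result = generator(prompt, max_new_tokens=128)
print(result[0]["generated_text"])
```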

The nightmare scenario for Nvidia and others is that throwing ever-larger amounts of compute at LLMs yields only marginal improvements, as appears to be the case with GPT-4.5. Meanwhile, much cheaper and smaller models are proving capable of doing most of the things that users want.

That could mean a cliff in demand for GPUs and a decline in Nvidia revenues. Shares are up 4.3% today but were up as much as 7% earlier, and have tumbled to $113 from a high of $153 in early January.

This article was written by Adam Button at www.forexlive.com.
