Training AI models might not need enormous data centres
Once, the world’s richest men competed over yachts, jets and private islands. Now, the size-measuring contest of choice is clusters. Just 18 months ago, OpenAI trained GPT-4, its then state-of-the-art large language model (LLM), on a network of around 25,000 then state-of-the-art graphics processing units (GPUs) made by Nvidia. Now Elon Musk and Mark Zuckerberg, bosses of X and Meta respectively, are waving their chips in the air: Mr Musk says he has 100,000 GPUs in one data centre and plans to buy 200,000. Mr Zuckerberg says he’ll get 350,000.
This contest to build ever-bigger computing clusters for ever-more-powerful artificial-intelligence (AI) models cannot continue indefinitely. Each extra chip adds processing power, but it also adds to the organisational burden of keeping the whole cluster synchronised. The more chips there are, the more time they spend shuttling data around rather than doing useful work. Simply increasing the number of GPUs will provide diminishing returns.
Computer scientists are therefore looking for cleverer, less resource-intensive ways to train future AI models. The solution could lie in ditching the enormous bespoke computing clusters (and their associated upfront costs) altogether and, instead, distributing the task of training between many smaller data centres. This, say some experts, could be the first step towards an even more ambitious goal—training AI models without the need for any dedicated hardware at all.
Training a modern AI system involves ingesting data—sentences, say, or the structure of a protein—that has had some sections hidden. The model makes a guess at what the hidden sections might contain. If it makes the wrong guess, the model is tweaked by a mathematical process called backpropagation so that, the next time it tries the same prediction, it will be infinitesimally closer to the correct answer.
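For readers who want to see what that loop looks like in code, here is a minimal sketch using PyTorch. The tiny vocabulary, model and random "sentences" are hypothetical stand-ins for a real LLM and its training data, not the setup of any lab mentioned here: some tokens are hidden, the model guesses them, and backpropagation nudges its weights towards better guesses.

```python
# A minimal sketch of the hide-and-predict training loop described above,
# written with PyTorch. The tiny model, vocabulary and random "sentences"
# are hypothetical stand-ins for a real LLM and its training data.
import torch
import torch.nn as nn

VOCAB, MASK_ID = 100, 0                           # token ids; 0 marks a hidden token
model = nn.Sequential(nn.Embedding(VOCAB, 32), nn.Linear(32, VOCAB))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

tokens = torch.randint(1, VOCAB, (8, 16))         # a batch of token sequences
mask = torch.rand(tokens.shape) < 0.15            # hide roughly 15% of positions
inputs = tokens.masked_fill(mask, MASK_ID)        # the model sees gaps...

logits = model(inputs)                            # ...and guesses what fills them
loss = nn.functional.cross_entropy(logits[mask], tokens[mask])  # score the guesses
loss.backward()                                   # backpropagation works out the tweaks
opt.step()                                        # apply them: infinitesimally closer
opt.zero_grad()
```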
I knew you were trouble
The problems come when you want to be able to work “in parallel”—to have two, or 200,000, GPUs working on backpropagation at the same time. After each step, the chips share data about the changes they have made. If they didn’t, you wouldn’t have a single training run; you’d have 200,000 chips training 200,000 models on their own. That data-sharing process starts with “checkpointing”, in which a snapshot of the training so far is created. This can get complicated fast. There is only one link between two chips, but 190 between 20 chips and almost 20bn for 200,000 chips. The time it takes to checkpoint and share data grows commensurately. For big training runs, around half the time can often be spent on these non-training steps.
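The arithmetic behind those figures is simple combinatorics: with n chips, every pair needs a path for sharing data, and the number of pairs grows roughly with the square of n. A few lines of Python reproduce the numbers above (in practice, training frameworks use collective operations such as all-reduce rather than maintaining every pairwise link, but the synchronisation cost still balloons as clusters grow).

```python
# The combinatorics behind the figures above: with n chips, the number of
# pairwise links is n * (n - 1) / 2, which grows roughly as the square of n.
def pairwise_links(n: int) -> int:
    return n * (n - 1) // 2

print(pairwise_links(2))        # 1
print(pairwise_links(20))       # 190
print(pairwise_links(200_000))  # 19,999,900,000 -- "almost 20bn"
```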
All that wasted time gave Arthur Douillard, an engineer at Google DeepMind, an idea. Why not just do fewer checkpoints? In late 2023, he and his colleagues published a method for “Distributed Low-Communication Training of Language Models”, or DiLoCo. Rather than training on 100,000 GPUs, all of which speak to each other at every step, DiLoCo describes how to distribute training across different “islands”, each still a sizeable data centre. Within the islands, checkpointing continues as normal, but across them, the communication burden drops 500-fold.
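In outline, DiLoCo is a two-level loop: ordinary training inside each island, and an occasional, much cheaper synchronisation across islands. The sketch below is an illustrative simplification with made-up numbers and a plain averaging step as the outer update; it is not DeepMind's actual implementation, which uses more sophisticated inner and outer optimisers.

```python
# An illustrative simplification of DiLoCo's two-level loop (not DeepMind's
# actual implementation): each "island" trains on its own for many steps,
# and only then are the accumulated changes merged across islands.
import numpy as np

N_ISLANDS, INNER_STEPS, OUTER_ROUNDS = 4, 500, 10
global_weights = np.zeros(1_000)                  # the shared model parameters

def local_training(weights, steps):
    """Stand-in for `steps` ordinary gradient updates on one island's data."""
    w = weights.copy()
    for _ in range(steps):
        w -= 0.01 * np.random.randn(*w.shape)     # placeholder for a real gradient step
    return w

for _ in range(OUTER_ROUNDS):
    # No cross-island communication during the inner steps...
    local = [local_training(global_weights, INNER_STEPS) for _ in range(N_ISLANDS)]
    # ...communication happens only here, once per outer round: average the
    # islands' accumulated changes and fold them into the shared weights.
    outer_grad = global_weights - np.mean(local, axis=0)
    global_weights = global_weights - outer_grad   # plain SGD as the outer update
```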
There are trade-offs. Models trained this way seem to struggle to hit the same peak performance as those trained in monolithic data centres. But interestingly, that impact seems to exist only when the models are rated on the same tasks they are trained on: predicting the missing data.
When they are turned to predictions that they’ve never been asked to make before, they seem to generalise better. Ask them to answer a reasoning question in a form not in the training data, and pound for pound they may outclass the traditionally trained models. That could be an artefact of each island of compute being slightly freer to spiral off in its own direction between checkpointing runs, when they get hauled back on task. Like studious undergraduates forming their own research groups rather than being lectured to en masse, the models end up slightly less focused on the task at hand but with much wider experience.
Vincent Weisser, founder of Prime Intellect, an open-source AI lab, has taken DiLoCo and run with it. In November 2024 his team finished training Intellect-1, a 10bn-parameter LLM comparable to Meta’s centrally trained Llama 2, which was state-of-the-art when released in 2023.
Mr Weisser’s team built OpenDiLoCo, a lightly modified version of Mr Douillard’s original, and set it to work training a new model using 30 GPU clusters in eight cities across three continents. In his trials, the GPUs ended up actively working for 83% of the time—that’s compared with 100% in the baseline scenario, in which all the GPUs were in the same building. When training was limited to data centres in America, they were actively working for 96% of the time. Instead of checkpointing every training step, Mr Weisser’s approach checkpoints only every 500 steps. And instead of sharing all the information about every change, it “quantises” the changes, dropping the least significant three-quarters of the data.
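One straightforward way to drop the least significant three-quarters of the data is to quantise 32-bit floating-point updates down to 8-bit integers before sending them, keeping a single scale factor so they can be approximately reconstructed on arrival. The sketch below illustrates that idea; Prime Intellect's actual compression scheme may differ in its details.

```python
# A sketch of quantising an update before it is shared between clusters,
# assuming simple int8 quantisation (8 of every 32 bits kept). This is an
# illustration of the idea, not Prime Intellect's exact scheme.
import numpy as np

def quantise(update: np.ndarray):
    """Map 32-bit floats onto 256 integer levels plus one scale factor."""
    scale = float(np.abs(update).max()) / 127.0
    scale = scale if scale > 0 else 1.0
    return np.round(update / scale).astype(np.int8), scale

def dequantise(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

update = np.random.randn(1_000_000).astype(np.float32)   # a batch of weight changes
q, scale = quantise(update)

print(update.nbytes, "bytes before,", q.nbytes, "bytes after")   # 4x smaller
print("worst-case error:", np.abs(update - dequantise(q, scale)).max())
```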
For the most advanced labs, with monolithic data centres already built, there is no pressing reason to make the switch to distributed training yet. But, given time, Mr Douillard thinks that his approach will become the norm. The advantages are clear, and the downsides—at least, those illustrated by the small training runs that have been completed so far—seem to be fairly limited.
For an open-source lab like Prime Intellect, the distributed approach has other benefits. Data centres big enough to train a 10bn-parameter model are few and far between. That scarcity drives up prices to access their compute—if it is even available on the open market at all, rather than hoarded by the companies that have built them. Smaller clusters are readily available, however. Each of the 30 clusters Prime Intellect used was a rack of just eight GPUs, with up to 14 of the clusters online at any given time. This resource is a thousand times smaller than the data centres used by frontier labs, but neither Mr Weisser nor Mr Douillard sees any reason why their approach would not scale.
For Mr Weisser, the motivation for distributing training is also to distribute power—and not just in the electrical sense. “It’s extremely important that it’s not in the hands of one nation, one corporation,” he says. The approach is hardly a free-for-all, though—one of the eight-GPU clusters he used in his training run costs $600,000; the total network deployed by Prime Intellect would cost $18m to buy. But his work is a sign, at least, that training capable AI models does not have to cost billions of dollars.
And what if the costs could drop further still? The dream for developers pursuing truly decentralised AI is to drop the need for purpose-built training chips entirely. Measured in teraflops, a count of how many trillions of operations a chip can perform each second, one of Nvidia’s most capable chips is roughly as powerful as 300 or so top-end iPhones. But there are a lot more iPhones in the world than GPUs. What if they (and other consumer computers) could all be put to work, churning through training runs while their owners sleep?
The trade-offs would be enormous. The advantage of working with dedicated high-performance chips is that, even when they are distributed around the world, they are at least the same model operating at the same speed. That would be lost. Worse, not only would the training progress need to be aggregated and redistributed at each checkpoint step, so would the training data itself, since typical consumer hardware is unable to store the terabytes of data that go into a cutting-edge LLM. New computing breakthroughs would be required, says Nic Lane of Flower, one of the labs trying to make that approach a reality.
The gains, though, could add up, and the approach could even lead to better models, reckons Mr Lane. In the same way that distributed training makes models better at generalising, models trained on “sharded” datasets, where only portions of the training data are given to each GPU, could perform better when confronted with unexpected input in the real world. All that would leave the billionaires needing something else to compete over.
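The comparison rests on rough orders of magnitude rather than precise benchmarks. With illustrative ballpark figures (assumed here, not taken from any spec sheet), the ratio comes out at around 300 phones per data-centre GPU:

```python
# Back-of-the-envelope arithmetic for the comparison above. Both figures are
# assumed, illustrative orders of magnitude, not official specifications.
DATACENTRE_GPU_TFLOPS = 1_000   # roughly, a top Nvidia accelerator at low precision
IPHONE_TFLOPS = 3.3             # roughly, a recent iPhone's chip

print(round(DATACENTRE_GPU_TFLOPS / IPHONE_TFLOPS))   # ~300 phones per GPU
```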
© 2025, The Economist Newspaper Limited. All rights reserved. From The Economist, published under licence. The original content can be found on www.economist.com