

Large language models are getting bigger and better
That hunger for the new has only accelerated. In March Anthropic launched Claude 3, which bested the previous top models from OpenAI and Google on various leaderboards. On April 9th OpenAI reclaimed the crown (on some measures) by tweaking its model. On April 18th Meta released Llama 3, which early results suggest is the most capable open model to date. OpenAI is likely to make a splash sometime this year when it releases GPT-5, which may have capabilities beyond any current large language model (LLM). If the rumours are to be believed, the next generation of models will be even more remarkable—able to perform multi-step tasks, for instance, rather than merely responding to prompts, or analysing complex questions carefully instead of blurting out the first algorithmically available answer.
For those who believe that this is the usual tech hype, consider this: investors are deadly serious about backing the next generation of models. GPT-5 and other next-gen models are expected to cost billions of dollars to train. OpenAI is also reportedly partnering with Microsoft, a tech giant, to build a new $100bn data centre. Based on the numbers alone, it seems as though the future will hold limitless exponential growth. This chimes with a view shared by many AI researchers called the “scaling hypothesis”, namely that the architecture of current LLMs is on the path to unlocking phenomenal progress. All that is needed to exceed human abilities, according to the hypothesis, is more data and more powerful computer chips.
Look closer at the technical frontier, however, and some daunting hurdles become evident.
Beauty’s not enough
Data may well present the most immediate bottleneck. Epoch AI, a research outfit, estimates the well of high-quality textual data on the public internet will run dry by 2026. This has left researchers scrambling for ideas. Some labs are turning to the private web, buying data from brokers and news websites. Others are turning to the internet’s vast quantities of audio and visual data, which could be used to train ever-bigger models for decades. Video can be particularly useful in teaching AI models about the physics of the world around them. If a model can observe a ball flying through the air, it might more easily work out the mathematical equation that describes the projectile’s motion. Leading models like GPT-4 and Gemini are now “multimodal”, capable of dealing with various types of data.
When data can no longer be found, it can be made. Companies like Scale AI and Surge AI have built large networks of people to generate and annotate data, including PhD researchers solving problems in maths or biology. One executive at a leading AI startup estimates this is costing AI labs hundreds of millions of dollars per year. A cheaper approach involves generating “synthetic data”, in which one LLM makes billions of pages of text to train a second model. That method can run into trouble, though: models trained this way can lose past knowledge and generate uncreative responses. A more fruitful way to train AI models on synthetic data is to have them learn through collaboration or competition. Researchers call this “self-play”. In 2017 Google DeepMind, the search giant’s AI lab, developed a model called AlphaGo that, after training against itself, beat the human world champion in the game of Go. Google and other firms now use similar techniques on their latest LLMs.
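The core of self-play can be seen in a toy setting. In the sketch below, which is purely illustrative and bears no relation to how DeepMind or the big labs actually train their models, two copies of the same policy play a tiny game of Nim against each other and the winner's moves are reinforced:

```python
# A minimal, illustrative self-play loop (not any lab's actual method):
# two copies of the same policy play a toy game of Nim against each other,
# and moves made by the winning side are reinforced.
import random
from collections import defaultdict

# preference[(pile, take)] counts how often a move appeared in a winning game
preference = defaultdict(int)

def choose_move(pile):
    """Pick how many stones to take (1-3), favouring moves that won before."""
    legal = list(range(1, min(3, pile) + 1))
    weights = [1 + preference[(pile, m)] for m in legal]
    return random.choices(legal, weights=weights)[0]

def play_one_game(pile=10):
    """Both players share the same policy; return the winner and the move log."""
    history, player = [], 0
    while pile > 0:
        move = choose_move(pile)
        history.append((player, pile, move))
        pile -= move
        player = 1 - player
    winner = 1 - player  # the player who took the last stone wins
    return winner, history

for _ in range(20000):          # self-play: generate games, then learn from them
    winner, history = play_one_game()
    for player, pile, move in history:
        if player == winner:    # reinforce only the winner's moves
            preference[(pile, move)] += 1

print(choose_move(10))  # after training, the policy should prefer taking 2 (leaving 8)
```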
Extending ideas like self-play to new domains is a hot topic of research. But most real-world problems—from running a business to being a good doctor—are more complex than a game, without clear-cut winning moves. This is why, for such complex domains, data to train models is still needed from people who can tell good responses from bad ones. This in turn slows things down.
More silicon, but make it fashion
Better hardware is another route to more powerful models. Graphics-processing units (GPUs), originally designed for video-gaming, have become the go-to chip for most AI programmers thanks to their ability to run intensive calculations in parallel. One way to unlock new capabilities may lie in using chips designed specifically for AI models. Cerebras, a chipmaker based in Silicon Valley, released a product in March containing 50 times as many transistors as the largest GPU. Model-building is usually hampered by data needing to be continuously loaded on and off the GPUs as the model is trained. Cerebras’s giant chip, by contrast, has memory built in.
New models that can take advantage of these advances will be more reliable and better at handling tricky requests from users. One way this may happen is through larger “context windows”, the amount of text, image or video that a user can feed into a model when making requests. Enlarging context windows to allow users to upload additional relevant information also seems to be an effective way of curbing hallucination, the tendency of AI models to confidently answer questions with made-up information.
But while some model-makers race for more resources, others see signs that the scaling hypothesis is running into trouble. Physical constraints—insufficient memory, say, or rising energy costs—place practical limitations on bigger model designs. More worrying, it is not clear that expanding context windows will be enough for continued progress. Yann LeCun, a star AI boffin now at Meta, is one of many who believe the limitations in the current AI models cannot be fixed with more of the same.
Some scientists are therefore turning to a long-standing source of inspiration in the field of AI—the human brain. The average adult can reason and plan far better than the best LLMs, despite using less power and much less data. “AI needs better learning algorithms, and we know they’re possible because your brain has them,” says Pedro Domingos, a computer scientist at the University of Washington.
One problem, he says, is the algorithm by which LLMs learn, called backpropagation. All LLMs are neural networks arranged in layers, which receive inputs and transform them to predict outputs. When the LLM is in its learning phase, it compares its predictions against the version of reality available in its training data. If these diverge, the algorithm makes small tweaks to each layer of the network to improve future predictions. That makes it computationally intensive and incremental.
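The procedure can be sketched at toy scale. The example below is a minimal illustration with invented data and two small layers in numpy, nothing like a production LLM, but it shows the forward pass, the comparison against the training data and the small per-layer tweaks that backpropagation prescribes:

```python
# A minimal sketch of backpropagation on a tiny two-layer network.
# The data, sizes and learning rate are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))            # 64 toy examples, 3 input features
y = (X.sum(axis=1, keepdims=True) > 0)  # target the network should predict

W1 = rng.normal(scale=0.5, size=(3, 8)) # first-layer weights
W2 = rng.normal(scale=0.5, size=(8, 1)) # second-layer weights
lr = 0.1

for step in range(500):
    # forward pass: each layer receives inputs and transforms them
    h = np.tanh(X @ W1)                     # hidden layer
    p = 1 / (1 + np.exp(-(h @ W2)))         # predicted probability
    loss = np.mean((p - y) ** 2)            # how far predictions diverge from the data

    # backward pass: propagate the error back through each layer...
    grad_p = 2 * (p - y) / len(X)
    grad_logits = grad_p * p * (1 - p)
    grad_W2 = h.T @ grad_logits
    grad_h = grad_logits @ W2.T
    grad_W1 = X.T @ (grad_h * (1 - h ** 2))

    # ...and make small tweaks to every layer's weights
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

print(f"final loss: {loss:.4f}")  # should be close to zero after training
```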
The neural networks in today’s LLMs are also inefficiently structured. Since 2017 most AI models have used a type of neural-network architecture known as a transformer (the “T” in GPT), which allowed them to establish relationships between bits of data that are far apart within a data set. Previous approaches struggled to make such long-range connections. If a transformer-based model were asked to write the lyrics to a song, for example, it could, in its coda, riff on lines from many verses earlier, whereas a more primitive model would have forgotten all about the start by the time it had got to the end of the song. Transformers can also be run on many processors at once, significantly reducing the time it takes to train them.
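The trick behind those long-range connections is self-attention, in which every token in the input scores its relevance to every other token. A stripped-down sketch of a single attention step, with toy dimensions and random weights standing in for a trained model's, looks like this:

```python
# A minimal sketch of the self-attention step at the heart of a transformer:
# every token scores every other token, so the end of a song can still "see"
# a line from the first verse. Dimensions and weights are toy values.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 6, 4                     # 6 tokens, 4-dimensional embeddings
x = rng.normal(size=(seq_len, d))     # toy token embeddings

Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv      # queries, keys, values

scores = Q @ K.T / np.sqrt(d)         # every token compared with every other token
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # row-wise softmax
output = weights @ V                  # each output mixes information from the whole sequence

print(weights.shape)   # (6, 6): an all-pairs table that grows quadratically with context
```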
Albert Gu, a computer scientist at Carnegie Mellon University, nevertheless thinks the transformers’ time may soon be up. Scaling up their context windows is highly computationally inefficient: as the input doubles, the amount of computation required to process it quadruples. Alongside Tri Dao of Princeton University, Dr Gu has come up with an alternative architecture called Mamba. If, by analogy, a transformer reads all of a book’s pages at once, Mamba reads them sequentially, updating its worldview as it progresses. This is not only more efficient, but also more closely approximates the way human comprehension works.
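By contrast with the all-pairs table in the attention sketch above, a sequential model keeps one fixed-size state and updates it token by token, so the cost of reading grows only linearly with the input. The sketch below is a plain recurrent update offered purely as an illustration of that reading pattern; Mamba's actual selective state-space machinery is considerably more elaborate:

```python
# A minimal sketch of sequential reading: one fixed-size state is updated
# token by token, so cost grows linearly with input length. This is a plain
# recurrent update for illustration only, not Mamba's real architecture.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 6, 4
x = rng.normal(size=(seq_len, d))       # the same kind of toy token embeddings

A = rng.normal(scale=0.1, size=(d, d))  # how the running state carries over
B = rng.normal(scale=0.1, size=(d, d))  # how each new token updates the state

state = np.zeros(d)
for token in x:                          # read the "pages" in order
    state = np.tanh(A @ state + B @ token)   # update the worldview as reading progresses

print(state.round(3))  # one fixed-size summary of everything read so far
```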
LLMs also need help getting better at reasoning and planning. Andrej Karpathy, a researcher formerly at OpenAI, explained in a recent talk that current LLMs are only capable of “system 1” thinking. In humans, this is the automatic mode of thought involved in snap decisions. In contrast, “system 2” thinking is slower, more conscious and involves iteration. For AI systems, that may require algorithms capable of something called search—an ability to outline and examine many different courses of action before selecting the best one. This would be similar in spirit to how game-playing AI models can choose the best moves after exploring several options.
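In miniature, search means laying out several candidate plans, looking ahead to where each one leads and keeping the best, instead of committing to the first plausible move. The toy example below, an invented reach-the-target puzzle rather than anything a lab actually uses, captures the shape of the idea:

```python
# A minimal sketch of "system 2"-style search: enumerate candidate courses of
# action, look a few steps ahead, and keep the best one. The task (reach a
# target number with +3 / *2 steps) is invented purely for illustration.
from itertools import product

START, TARGET, DEPTH = 1, 22, 4
ACTIONS = {"+3": lambda n: n + 3, "*2": lambda n: n * 2}

def score(plan):
    """Lower is better: how far this course of action lands from the target."""
    n = START
    for name in plan:
        n = ACTIONS[name](n)
    return abs(TARGET - n)

# outline every plan up to DEPTH moves, examine each, then select the best
candidates = [plan for depth in range(1, DEPTH + 1)
              for plan in product(ACTIONS, repeat=depth)]
best = min(candidates, key=score)
print(best, score(best))   # e.g. ('+3', '*2', '+3', '*2') reaches 22 exactly
```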
Advanced planning via search is the focus of much current effort. Meta’s Dr LeCun, for example, is trying to program the ability to reason and make predictions directly into an AI system. In 2022 he proposed a framework called “Joint Embedding Predictive Architecture” (JEPA), which is trained to predict larger chunks of text or images in a single step than current generative-AI models can. That lets it focus on global features of a data set. When analysing animal images, for example, a JEPA-based model may more quickly focus on size, shape and colour rather than individual patches of fur. The hope is that by abstracting things out JEPA learns more efficiently than generative models, which get distracted by irrelevant details.
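A heavily simplified sketch of that idea, with a toy linear encoder and predictor standing in for Meta's actual architecture, predicts the embedding of a hidden chunk and measures error in embedding space rather than over raw pixels or tokens:

```python
# A toy sketch of the JEPA idea described above: rather than generating the
# missing chunk pixel-by-pixel or token-by-token, predict its *embedding* and
# compare in embedding space. Encoder, predictor and data are invented
# stand-ins, not Meta's actual architecture.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_emb = 32, 8

W_enc = rng.normal(scale=0.2, size=(d_in, d_emb))    # shared encoder (toy: one linear map)
W_pred = rng.normal(scale=0.2, size=(d_emb, d_emb))  # predictor: context embedding -> target embedding
lr = 0.02

for step in range(3000):
    context = rng.normal(size=d_in)                   # e.g. the visible part of an image
    target = context + 0.1 * rng.normal(size=d_in)    # the hidden chunk (toy: a noisy copy)

    z_ctx, z_tgt = context @ W_enc, target @ W_enc
    z_hat = z_ctx @ W_pred                 # predict the target's embedding in one step
    err = z_hat - z_tgt                    # error lives in embedding space, not pixel space

    W_pred -= lr * np.outer(z_ctx, err)    # gradient step on the predictor only

print(float(np.mean(err ** 2)))            # error on the last sample; shrinks as training proceeds
```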
Experiments with approaches like Mamba or JEPA remain the exception. Until data and computing power become insurmountable hurdles, transformer-based models will stay in favour. But as engineers push them into ever more complex applications, human expertise will remain essential in the labelling of data. This could mean slower progress than before. For a new generation of AI models to stun the world as ChatGPT did in 2022, fundamental breakthroughs may be needed.
© 2024, The Economist Newspaper Limited. All rights reserved.
From The Economist, published under licence. The original content can be found on www.economist.com