

Researchers are figuring out how large language models work
LLMs are built using a technique called deep learning, in which a network of billions of neurons, simulated in software and modelled on the structure of the human brain, is exposed to trillions of examples of something to discover inherent patterns. Trained on text strings, LLMs can hold conversations, generate text in a variety of styles, write software code, translate between languages and more besides.
Models are essentially grown, rather than designed, says Josh Batson, a researcher at Anthropic, an AI startup. Because LLMs are not explicitly programmed, nobody is entirely sure why they have such extraordinary abilities. Nor do they know why LLMs sometimes misbehave, or give wrong or made-up answers, known as “hallucinations”. LLMs really are black boxes. This is worrying, given that they and other deep-learning systems are starting to be used for all kinds of things, from offering customer support to preparing document summaries to writing software code.
It would be helpful to be able to poke around inside an LLM to see what is going on, just as it is possible, given the right tools, to do with a car engine or a microprocessor. Being able to understand a model’s inner workings in bottom-up, forensic detail is called “mechanistic interpretability”. But it is a daunting task for networks with billions of internal neurons. That has not stopped people trying, including Dr Batson and his colleagues. In a paper published in May, they explained how they have gained new insight into the workings of one of Anthropic’s LLMs.
One might think individual neurons inside an LLM would correspond to specific words. Unfortunately, things are not that simple. Instead, individual words or concepts are associated with the activation of complex patterns of neurons, and individual neurons may be activated by many different words or concepts. This problem was pointed out in earlier work by researchers at Anthropic, published in 2022. They proposed—and subsequently tried—various workarounds, achieving good results on very small language models in 2023 with a so-called “sparse autoencoder”. In their latest results they have scaled up this approach to work with Claude 3 Sonnet, a full-sized LLM.
A sparse autoencoder is, essentially, a second, smaller neural network that is trained on the activity of an LLM, looking for distinct patterns in activity when “sparse” (ie, very small) groups of its neurons fire together. Once many such patterns, known as features, have been identified, the researchers can determine which words trigger which features. The Anthropic team found individual features that corresponded to specific cities, people, animals and chemical elements, as well as higher-level concepts such as transport infrastructure, famous female tennis players, or the notion of secrecy. They performed this exercise three times, identifying 1m, 4m and, on the last go, 34m features within the Sonnet LLM.
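The core idea can be expressed in a few lines of code. Below is a minimal, untrained sketch in Python with NumPy: the autoencoder maps an LLM activation vector into a much larger "dictionary" of candidate features, and its loss rewards reconstructing the activation from as few active features as possible. The dimensions, weights and coefficient here are made up for illustration; in practice the weights are learned from millions of recorded activations.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model = 16      # width of the LLM's activation vector (toy size)
d_features = 64   # overcomplete dictionary of candidate features

# Encoder/decoder weights (randomly initialised here; normally trained).
W_enc = rng.normal(0, 0.1, (d_model, d_features))
b_enc = np.zeros(d_features)
W_dec = rng.normal(0, 0.1, (d_features, d_model))
b_dec = np.zeros(d_model)

def encode(x):
    # ReLU keeps feature activations non-negative; the L1 penalty in the
    # loss below pushes most of them to exactly zero, i.e. "sparse".
    return np.maximum(0.0, x @ W_enc + b_enc)

def decode(f):
    return f @ W_dec + b_dec

def sae_loss(x, l1_coeff=0.01):
    f = encode(x)
    x_hat = decode(f)
    reconstruction = np.mean((x - x_hat) ** 2)  # reproduce the activation...
    sparsity = np.mean(np.abs(f))               # ...using few active features
    return reconstruction + l1_coeff * sparsity

# A batch of fake "LLM activations" standing in for the real thing.
batch = rng.normal(size=(8, d_model))
print(sae_loss(batch))
```

Once trained, each of the `d_features` dictionary entries is a candidate "feature", and the researchers inspect which inputs make it fire.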
The result is a sort of mind-map of the LLM, showing a small fraction of the concepts it has learned about from its training data. Places in the San Francisco Bay Area that are close geographically are also “close” to each other in the concept space, as are related concepts, such as diseases or emotions. “This is exciting because we have a partial conceptual map, a hazy one, of what’s happening,” says Dr Batson. “And that’s the starting point—we can enrich that map and branch out from there.”
Focus the mind
As well as seeing parts of the LLM light up, as it were, in response to specific concepts, it is also possible to change its behaviour by manipulating individual features. Anthropic tested this idea by “spiking” (ie, turning up) a feature associated with the Golden Gate Bridge. The result was a version of Claude that was obsessed with the bridge, and mentioned it at any opportunity. When asked how to spend $10, for example, it suggested paying the toll and driving over the bridge; when asked to write a love story, it made up one about a lovelorn car that could not wait to cross it.
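Mechanically, "spiking" a feature amounts to adding a scaled copy of that feature's learned direction to the model's internal activations on every step. The sketch below illustrates the arithmetic with a made-up unit vector standing in for a learned feature direction; it is not Anthropic's code, just the steering idea in miniature.

```python
import numpy as np

rng = np.random.default_rng(1)
d_model = 16

# Hypothetical decoder direction for one learned feature (e.g. the
# "Golden Gate Bridge" feature in Anthropic's experiment), normalised
# to unit length.
feature_direction = rng.normal(size=d_model)
feature_direction /= np.linalg.norm(feature_direction)

def steer(activation, direction, scale):
    """Clamp a feature high by adding its direction to the activation."""
    return activation + scale * direction

activation = rng.normal(size=d_model)
spiked = steer(activation, feature_direction, scale=10.0)

# The steered activation points much more strongly along the feature.
print(spiked @ feature_direction, activation @ feature_direction)
```

A negative `scale` would suppress the feature instead, which is the mechanism behind discouraging a model from particular topics.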
That may sound silly, but the same principle could be used to discourage the model from talking about particular topics, such as bioweapons production. “AI safety is a major goal here,” says Dr Batson. It can also be applied to behaviours. By tuning specific features, models could be made more or less sycophantic, empathetic or deceptive. Might a feature emerge that corresponds to the tendency to hallucinate? “We didn’t find a smoking gun,” says Dr Batson. Whether hallucinations have an identifiable mechanism or signature is, he says, a “million-dollar question”. And it is one addressed, by another group of researchers, in a new paper in Nature.
Sebastian Farquhar and colleagues at the University of Oxford used a measure called “semantic entropy” to assess whether a statement from an LLM is likely to be a hallucination or not. Their technique is quite straightforward: essentially, an LLM is given the same prompt several times, and its answers are then clustered by “semantic similarity” (ie, according to their meaning). The researchers’ hunch was that the “entropy” of these answers—in other words, the degree of inconsistency—corresponds to the LLM’s uncertainty, and thus the likelihood of hallucination. If all its answers are essentially variations on a theme, they are probably not hallucinations (though they may still be incorrect).
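The measure itself is just Shannon entropy computed over meaning-clusters of sampled answers. Here is a toy sketch: the paper clusters answers by bidirectional entailment between them, which is stubbed out below with a crude first-word bucketer, and the answer lists are invented for illustration.

```python
import math
from collections import Counter

def semantic_entropy(answers, cluster_fn):
    """Shannon entropy over clusters of answers grouped by meaning."""
    counts = Counter(cluster_fn(a) for a in answers)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Stub clusterer: the paper checks bidirectional entailment between
# answers; here we crudely bucket by the first word, stripped of punctuation.
cluster_by_meaning = lambda s: s.lower().split()[0].strip(".,!?")

consistent = ["Portugal.", "Portugal, of course.", "Portugal is the home of fado."]
scattered = ["Lipid transfer.", "Signal transduction.", "DNA repair."]

print(semantic_entropy(consistent, cluster_by_meaning))  # 0.0: variations on a theme
print(semantic_entropy(scattered, cluster_by_meaning))   # ~1.585: likely confabulation
```

Low entropy means the model keeps saying the same thing in different words; high entropy means its answers disagree in meaning, the signature the Oxford group associates with confabulation.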
In one example, the Oxford group asked an LLM which country is associated with fado music, and it consistently replied that fado is the national music of Portugal—which is correct, and not a hallucination. But when asked about the function of a protein called StarD10, the model gave several wildly different answers, which suggests hallucination. (The researchers prefer the term “confabulation”, a subset of hallucinations they define as “arbitrary and incorrect generations”.) Overall, this approach was able to distinguish between accurate statements and hallucinations 79% of the time, ten percentage points better than previous methods. This work is complementary, in many ways, to Anthropic’s.
Others have also been lifting the lid on LLMs: the “superalignment” team at OpenAI, maker of GPT-4 and ChatGPT, released its own paper on sparse autoencoders in June, though the team has now been dissolved after several researchers left the firm. But the OpenAI paper contained some innovative ideas, says Dr Batson. “We are really happy to see groups all over, working to understand models better,” he says. “We want everybody doing it.”
© 2024, The Economist Newspaper Limited. All rights reserved. From The Economist, published under licence. The original content can be found on www.economist.com