

When Will AI Be Smarter Than Humans? Don’t Ask
(Bloomberg Opinion) — If you’ve heard the term artificial general intelligence, or AGI, it probably makes you think of a humanish intelligence, like the honey-voiced AI love interest in the movie Her, or a superhuman one, like Skynet from The Terminator. At any rate, something science-fictional and far off.
But now a growing number of people in the tech industry and even outside it are prophesying AGI or “human-level” AI in the very near future.
These people may believe what they are saying, but it is at least partly hype designed to get investors to throw billions of dollars at AI companies. Yes, big changes are almost certainly on the way, and you should be preparing for them. But for most of us, calling them AGI is at best a distraction and at worst deliberate misdirection. Business leaders and policymakers need a better way to think about what’s coming. Fortunately, there is one.
Sam Altman of OpenAI, Dario Amodei of Anthropic and Elon Musk of xAI (the thing he’s least famous for) have all said recently that AGI, or something like it, will arrive within a couple of years. More measured voices like Google DeepMind’s Demis Hassabis and Meta’s Yann LeCun see it being at least five to 10 years out. More recently, the meme has gone mainstream, with journalists including the New York Times’ Ezra Klein and Kevin Roose arguing that society should get ready for something like AGI in the very near future.
I say “something like” because oftentimes, these people flirt with the term AGI and then retreat to a more equivocal phrasing like “powerful AI.” And what they may mean by it varies enormously — from AI that can do almost any individual cognitive task as well as a human but might still be quite specialized (Klein, Roose), to doing Nobel Prize-level work (Amodei, Altman), to thinking like an actual human in all respects (Hassabis), to operating in the physical world (LeCun), or simply being “smarter than the smartest human” (Musk).
So, are any of these “really” AGI?
The truth is, it doesn’t matter. If there is even such a thing as AGI — which, I will argue, there isn’t — it’s not going to be a sharp threshold we cross. To the people who tout it, AGI is now simply shorthand for the idea that something very disruptive is imminent: software that can’t merely code an app, draft a school assignment, write bedtime stories for your children or book a holiday — but might throw lots of people out of work, make major scientific breakthroughs, and provide frightening power to hackers, terrorists, corporations and governments.
This prediction is worth taking seriously, and calling it AGI does have a way of making people sit up and listen. But instead of talking about AGI or human-level AI, let’s talk about different types of AI, and what they will and won’t be able to do.
Some form of human-level intelligence has been the goal ever since the AI race kicked off 70 years ago. For decades, the best that could be done was “narrow AI” like IBM’s chess-winning Deep Blue, or Google’s AlphaFold, which predicts protein structures and won its creators (including Hassabis) a share of the chemistry Nobel last year. Both were far beyond human-level, but only for one highly specific task.
If AGI now suddenly seems closer, it’s because the large language models underlying ChatGPT and its ilk appear to be both more humanlike and more general-purpose.
LLMs interact with us in plain language. They can give at least plausible-looking answers to most questions. They write pretty good fiction, at least when it’s very short. (For longer stories, they lose track of characters and plot details.) They’re scoring ever higher on benchmark tests of skills like coding, medical or bar exams, and math problems. They’re getting better at step-by-step reasoning and more complex tasks. When the most gung-ho AI folks talk about AGI being around the corner, it’s basically a more advanced form of these models they’re talking about.
It’s not that LLMs won’t have big impacts. Some software companies already plan to hire fewer engineers. Most tasks that follow a similar process every time — making medical diagnoses, drafting legal dockets, writing research briefs, creating marketing campaigns and so on — will be things a human worker can at least partly outsource to AI. Some already are.
That will make those workers more productive, which could lead to the elimination of some jobs. Though not necessarily: Geoffrey Hinton, the Nobel Prize-winning computer scientist known as the godfather of AI, infamously predicted that AI would soon make radiologists obsolete. Today, there’s a shortage of them in the US.
But in an important sense, LLMs are still “narrow AI.” They can ace one job while being lousy at a seemingly adjacent one — a phenomenon known as the jagged frontier.
For example, an AI might pass a bar exam with flying colors but botch turning a conversation with a client into a legal brief. It may answer some questions perfectly, but regularly “hallucinate” (i.e. invent facts) on others. LLMs do well with problems that can be solved using clear-cut rules, but in some newer tests where the rules were more ambiguous, models that scored 80% or more on other benchmarks struggled even to reach single figures.
And even if LLMs started to beat these tests, too, they would still be narrow. It’s one thing to tackle a defined, limited problem, however difficult. It’s quite another to take on what people actually do in a typical workday.
Even a mathematician doesn’t just spend all day doing math problems. People do countless things that can’t be benchmarked because they aren’t bounded problems with right or wrong answers. We weigh conflicting priorities, ditch failing plans, make allowances for incomplete knowledge, develop workarounds, act on hunches, read the room and, above all, interact constantly with the highly unpredictable and irrational intelligences that are other human beings.
Indeed, one argument against LLMs ever being able to do Nobel Prize-level work is that the most brilliant scientists aren’t those who know the most, but those who challenge conventional wisdom, propose unlikely hypotheses and ask questions nobody else has thought to ask. That’s pretty much the opposite of an LLM, which is designed to find the likeliest consensus answer based on all the available information.
So we might one day be able to build an LLM that can do almost any individual cognitive task as well as a human. It might be able to string together a whole series of tasks to solve a bigger problem. By some definitions, it would be human-level AI. But it would still be as dumb as a brick if you put it to work in an office.
Human Intelligence Isn’t ‘General’
A core problem with the idea of AGI is that it’s based on a highly anthropocentric notion of what intelligence is.
Most AI research treats intelligence as a more or less linear measure. It assumes that at some point, machines will reach human-level or “general” intelligence, and then perhaps “superintelligence,” at which point they either become Skynet and destroy us or turn into benevolent gods who take care of all our needs.
But there’s a strong argument that human intelligence is not in fact “general.” Our minds have evolved for the very specific challenge of being us. Our body size and shape, the kinds of food we can digest, the predators we once faced, the size of our kin groups, the way we communicate, even the strength of gravity and the wavelengths of light we perceive have all gone into determining what our minds are good at. Other animals have many forms of intelligence we lack: A spider can distinguish predators from prey in the vibrations of its web, an elephant can remember migration routes thousands of miles long, and in an octopus, each tentacle literally has a mind of its own.
In a 2017 essay for Wired, Kevin Kelly argued that we should think of human intelligence not as being at the top of some evolutionary tree, but as just one point within a cluster of Earth-based intelligences that itself is a tiny smear in a universe of all possible alien and machine intelligences. This, he wrote, blows apart the “myth of a superhuman AI” that can do everything far better than us. Rather, we should expect “many hundreds of extra-human new species of thinking, most different from humans, none that will be general purpose, and none that will be an instant god solving major problems in a flash.”
This is a feature, not a bug. For most needs, specialized intelligences will, I suspect, be both cheaper and more reliable than a jack-of-all-trades that resembles us as closely as possible. Not to mention that they’re less likely to rise up and demand their rights.
None of this is to dismiss the huge leaps we can expect from AI in the next few years.
One leap that’s already begun is “agentic” AI. Agents are still based on LLMs, but instead of merely analyzing information, they can carry out actions like making a purchase or filling in a web form. Zoom, for example, soon plans to launch agents that can scour a meeting transcript to create action items, draft follow-up emails and schedule the next meeting. So far, the performance of AI agents is mixed, but as with LLMs, expect it to dramatically improve to the point where quite sophisticated processes can be automated.
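To make “agentic” a bit more concrete, here is a minimal, hypothetical sketch of the loop such systems run: a model proposes an action, a harness executes it with the matching tool, and the result is fed back into the model’s context. Everything here (fake_llm, schedule_meeting, the action format) is an illustrative stand-in, not any vendor’s actual API.

```python
# A toy "agent loop": the model (stubbed out here) chooses an action,
# the harness runs the matching tool, and the result goes back into the
# history the model sees. Names and formats are illustrative only.

def fake_llm(history):
    """Stand-in for a language model: returns the next action as a dict."""
    already_booked = any(step["action"] == "schedule_meeting" for step in history)
    if not already_booked:
        return {"action": "schedule_meeting", "args": {"when": "Friday 10:00"}}
    return {"action": "done", "args": {}}

def schedule_meeting(when):
    # A real agent would call a calendar API here; this just reports success.
    return f"Meeting booked for {when}"

TOOLS = {"schedule_meeting": schedule_meeting}

def run_agent(max_steps=5):
    history = []
    for _ in range(max_steps):
        decision = fake_llm(history)
        if decision["action"] == "done":
            break
        result = TOOLS[decision["action"]](**decision["args"])
        history.append({"action": decision["action"], "result": result})
    return history

if __name__ == "__main__":
    print(run_agent())
```

Real agents replace the stub with an actual model call and plug in many more tools, which is where both the promised productivity gains and the mixed performance described above come from.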
Some may claim this is AGI. But once again, that’s more confusing than enlightening. Agents won’t be “general,” but more like personal assistants with extremely one-track minds. You might have dozens of them. Even if they make your productivity skyrocket, managing them will be like juggling dozens of different software apps — much like you’re already doing. Perhaps you’ll get an agent to manage all your agents, but it too will be restricted to whatever goals you set it.
And what will happen when millions or billions of agents are interacting together online is anybody’s guess. Perhaps, just as trading algorithms have set off inexplicable market “flash crashes,” they’ll trigger one another in unstoppable chain reactions that paralyze half the internet. More worryingly, malicious actors could mobilize swarms of agents to sow havoc.
Still, LLMs and their agents are just one type of AI. Within a few years, we may have fundamentally different kinds. LeCun’s lab at Meta, for instance, is one of several that are trying to build what’s called embodied AI.
The theory is that by putting AI in a robot body in the physical world, or in a simulation, it can learn about objects, location and motion — the building blocks of human understanding from which higher concepts can flow. By contrast, LLMs, trained purely on vast amounts of text, ape human thought processes on the surface but show no evidence that they actually have them, or even that they think in any meaningful sense.
Will embodied AI lead to truly thinking machines, or just very dexterous robots? Right now, that’s impossible to say. Even if it’s the former, though, it would still be misleading to call it AGI.
To go back to the point about evolution: Just as it would be absurd to expect a human to think like a spider or an elephant, it would be absurd to expect an oblong robot with six wheels and four arms that doesn’t sleep, eat or have sex — let alone form friendships, wrestle with its conscience or contemplate its own mortality — to think like a human. It might be able to carry Grandma from the living room to the bedroom, but it will both conceive of and perform the task utterly differently from the way we would.
Many of the things AI will be capable of, we can’t even imagine today. The best way to track and make sense of that progress will be to stop trying to compare it to humans, or to anything from the movies, and instead just keep asking: What does it actually do?
This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
Gideon Lichfield is the former editor-in-chief of Wired magazine and MIT Technology Review. He writes Futurepolis, a newsletter on the future of democracy.
More stories like this are available on bloomberg.com/opinion