When Will AI Be Smarter Than Humans? Don’t Ask
(Bloomberg Opinion) — If you’ve heard the term artificial general intelligence, or AGI, it probably makes you think of a humanish intelligence, like the honey-voiced AI love interest in the movie Her, or a superhuman one, like Skynet from The Terminator. At any rate, something science-fictional and far off.
But now a growing number of people in the tech industry and even outside it are prophesying AGI or “human-level” AI in the very near future.
These people may believe what they are saying, but it is at least partly hype designed to get investors to throw billions of dollars at AI companies. Yes, big changes are almost certainly on the way, and you should be preparing for them. But for most of us, calling them AGI is at best a distraction and at worst deliberate misdirection. Business leaders and policymakers need a better way to think about what’s coming. Fortunately, there is one.
Sam Altman of OpenAI, Dario Amodei of Anthropic and Elon Musk of xAI (the thing he’s least famous for) have all said recently that AGI, or something like it, will arrive within a couple of years. More measured voices like Google DeepMind’s Demis Hassabis and Meta’s Yann LeCun see it being at least five to 10 years out. More recently, the meme has gone mainstream, with journalists including the New York Times’ Ezra Klein and Kevin Roose arguing that society should get ready for something like AGI in the very near future.
I say “something like” because oftentimes, these people flirt with the term AGI and then retreat to a more equivocal phrasing like “powerful AI.” And what they may mean by it varies enormously — from AI that can do almost any individual cognitive task as well as a human but might still be quite specialized (Klein, Roose), to doing Nobel Prize-level work (Amodei, Altman), to thinking like an actual human in all respects (Hassabis), to operating in the physical world (LeCun), or simply being “smarter than the smartest human” (Musk).
So, are any of these “really” AGI?
The truth is, it doesn’t matter. If there is even such a thing as AGI — which, I will argue, there isn’t — it’s not going to be a sharp threshold we cross. To the people who tout it, AGI is now simply shorthand for the idea that something very disruptive is imminent: software that can not merely code an app, draft a school assignment, write bedtime stories for your children or book a holiday, but might throw lots of people out of work, make major scientific breakthroughs, and provide frightening power to hackers, terrorists, corporations and governments.
This prediction is worth taking seriously, and calling it AGI does have a way of making people sit up and listen. But instead of talking about AGI or human-level AI, let’s talk about different types of AI, and what they will and won’t be able to do.
Some form of human-level intelligence has been the goal ever since the AI race kicked off 70 years ago. For decades, the best that could be done was “narrow AI” like IBM’s chess-winning Deep Blue, or Google’s AlphaFold, which predicts protein structures and won its creators (including Hassabis) a share of the chemistry Nobel last year. Both were far beyond human-level, but only for one highly specific task.
If AGI now suddenly seems closer, it’s because the large-language models underlying ChatGPT and its ilk appear to be both more humanlike and more general-purpose.
LLMs interact with us in plain language. They can give at least plausible-looking answers to most questions. They write pretty good fiction, at least when it’s very short. (For longer stories, they lose track of characters and plot details.) They’re scoring ever higher on benchmark tests of skills like coding, medical or bar exams, and math problems. They’re getting better at step-by-step reasoning and more complex tasks. When the most gung-ho AI folks talk about AGI being around the corner, it’s basically a more advanced form of these models they’re talking about.
It’s not that LLMs won’t have big impacts. Some software companies already plan to hire fewer engineers. Most tasks that follow a similar process every time — making medical diagnoses, drafting legal dockets, writing research briefs, creating marketing campaigns and so on — will be things a human worker can at least partly outsource to AI. Some already are.
That will make those workers more productive, which could lead to the elimination of some jobs. Though not necessarily: Geoffrey Hinton, the Nobel Prize-winning computer scientist known as the godfather of AI, infamously predicted that AI would soon make radiologists obsolete. Today, there’s a shortage of them in the US.
But in an important sense, LLMs are still “narrow AI.” They can ace one job while being lousy at a seemingly adjacent one — a phenomenon known as the jagged frontier.
For example, an AI might pass a bar exam with flying colors but botch turning a conversation with a client into a legal brief. It may answer some questions perfectly, but regularly “hallucinate” (i.e. invent facts) on others. LLMs do well with problems that can be solved using clear-cut rules, but in some newer tests where the rules were more ambiguous, models that scored 80% or more on other benchmarks struggled even to reach single figures.
And even if LLMs started to beat these tests, too, they would still be narrow. It’s one thing to tackle a defined, limited problem, however difficult. It’s quite another to take on what people actually do in a typical workday.
Even a mathematician doesn’t just spend all day doing math problems. People do countless things that can’t be benchmarked because they aren’t bounded problems with right or wrong answers. We weigh conflicting priorities, ditch failing plans, make allowances for incomplete knowledge, develop workarounds, act on hunches, read the room and, above all, interact constantly with the highly unpredictable and irrational intelligences that are other human beings.
Indeed, one argument against LLMs ever being able to do Nobel Prize-level work is that the most brilliant scientists aren’t those who know the most, but those who challenge conventional wisdom, propose unlikely hypotheses and ask questions nobody else has thought to ask. That’s pretty much the opposite of an LLM, which is designed to find the likeliest consensus answer based on all the available information.
So we might one day be able to build an LLM that can do almost any individual cognitive task as well as a human. It might be able to string together a whole series of tasks to solve a bigger problem. By some definitions, it would be human-level AI. But it would still be as dumb as a brick if you put it to work in an office.
Human Intelligence Isn’t ‘General’
A core problem with the idea of AGI is that it’s based on a highly anthropocentric notion of what intelligence is.
Most AI research treats intelligence as a more or less linear measure. It assumes that at some point, machines will reach human-level or “general” intelligence, and then perhaps “superintelligence,” at which point they either become Skynet and destroy us or turn into benevolent gods who take care of all our needs.
But there’s a strong argument that human intelligence is not in fact “general.” Our minds have evolved for the very specific challenge of being us. Our body size and shape, the kinds of food we can digest, the predators we once faced, the size of our kin groups, the way we communicate, even the strength of gravity and the wavelengths of light we perceive have all gone into determining what our minds are good at. Other animals have many forms of intelligence we lack: A spider can distinguish predators from prey in the vibrations of its web, an elephant can remember migration routes thousands of miles long, and in an octopus, each tentacle literally has a mind of its own.
In a 2017 essay for Wired, Kevin Kelly argued that we should think of human intelligence not as being at the top of some evolutionary tree, but as just one point within a cluster of Earth-based intelligences that itself is a tiny smear in a universe of all possible alien and machine intelligences. This, he wrote, blows apart the “myth of a superhuman AI” that can do everything far better than us. Rather, we should expect “many hundreds of extra-human new species of thinking, most different from humans, none that will be general purpose, and none that will be an instant god solving major problems in a flash.”
This is a feature, not a bug. For most needs, specialized intelligences will, I suspect, be both cheaper and more reliable than a jack-of-all-trades that resembles us as closely as possible. Not to mention that they’re less likely to rise up and demand their rights.
None of this is to dismiss the huge leaps we can expect from AI in the next few years.
One leap that’s already begun is “agentic” AI. Agents are still based on LLMs, but instead of merely analyzing information, they can carry out actions like making a purchase or filling in a web form. Zoom, for example, soon plans to launch agents that can scour a meeting transcript to create action items, draft follow-up emails and schedule the next meeting. So far, the performance of AI agents is mixed, but as with LLMs, expect it to dramatically improve to the point where quite sophisticated processes can be automated.
Some may claim this is AGI. But once again, that’s more confusing than enlightening. Agents won’t be “general,” but more like personal assistants with extremely one-track minds. You might have dozens of them. Even if they make your productivity skyrocket, managing them will be like juggling dozens of different software apps — much like you’re already doing. Perhaps you’ll get an agent to manage all your agents, but it too will be restricted to whatever goals you set it.
And what will happen when millions or billions of agents are interacting together online is anybody’s guess. Perhaps, just as trading algorithms have set off inexplicable market “flash crashes,” they’ll trigger one another in unstoppable chain reactions that paralyze half the internet. More worryingly, malicious actors could mobilize swarms of agents to sow havoc.
Still, LLMs and their agents are just one type of AI. Within a few years, we may have fundamentally different kinds. LeCun’s lab at Meta, for instance, is one of several that are trying to build what’s called embodied AI.
The theory is that by putting AI in a robot body in the physical world, or in a simulation, it can learn about objects, location and motion — the building blocks of human understanding from which higher concepts can flow. By contrast, LLMs, trained purely on vast amounts of text, ape human thought processes on the surface but show no evidence that they actually have them, or even that they think in any meaningful sense.
Will embodied AI lead to truly thinking machines, or just very dexterous robots? Right now, that’s impossible to say. Even if it’s the former, though, it would still be misleading to call it AGI.
To go back to the point about evolution: Just as it would be absurd to expect a human to think like a spider or an elephant, it would be absurd to expect an oblong robot with six wheels and four arms that doesn’t sleep, eat or have sex — let alone form friendships, wrestle with its conscience or contemplate its own mortality — to think like a human. It might be able to carry Grandma from the living room to the bedroom, but it will both conceive of and perform the task utterly differently from the way we would.
Many of the things AI will be capable of, we can’t even imagine today. The best way to track and make sense of that progress will be to stop trying to compare it to humans, or to anything from the movies, and instead just keep asking: What does it actually do?
This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
Gideon Lichfield is the former editor-in-chief of Wired magazine and MIT Technology Review. He writes Futurepolis, a newsletter on the future of democracy.
