

How to worry wisely about artificial intelligence
In particular, new “large language models” (LLMs)—the sort that powers ChatGPT, a chatbot made by OpenAI, a startup—have surprised even their creators with their unexpected talents as they have been scaled up. Such “emergent” abilities include everything from solving logic puzzles and writing computer code to identifying films from plot summaries written in emoji.
These models stand to transform humans’ relationship with computers, knowledge and even with themselves. Proponents of AI argue for its potential to solve big problems by developing new drugs, designing new materials to help fight climate change, or untangling the complexities of fusion power. To others, the fact that AIs’ capabilities are already outrunning their creators’ understanding risks bringing to life the science-fiction disaster scenario of a machine that outsmarts its inventor, often with fatal consequences.
This bubbling mixture of excitement and fear makes it hard to weigh the opportunities and risks. But lessons can be learned from other industries, and from past technological shifts. So what has changed to make AI so much more capable? How scared should you be? And what should governments do?
In this article we explore the workings of LLMs and their future direction. The first wave of modern AI systems, which emerged a decade ago, relied on carefully labeled training data. Once exposed to a sufficient number of labeled examples, they could learn to do things like recognize images or transcribe speech. Today’s systems do not require pre-labeling, and as a result can be trained using much larger data sets taken from online sources. LLMs can, in effect, be trained on the entire internet—which explains their capabilities, good and bad.
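To make that distinction concrete, here is a deliberately toy sketch in Python (an illustration only, not how any real LLM is built): a couple of hand-labeled examples of the kind first-wave systems required, alongside a tiny bigram "language model" that learns word-to-word statistics from raw, unlabeled text by treating each next word as its own training target.

```python
from collections import Counter, defaultdict

# 1) Supervised learning (first-wave systems): every training example
#    must be paired with a human-provided label in advance.
labeled_examples = [
    ("the film was wonderful", "positive"),
    ("the film was dreadful", "negative"),
]

# 2) Self-supervised learning (LLM-style): raw, unlabeled text supplies
#    its own targets: the "label" for each word is simply the word
#    that follows it.
raw_text = "the cat sat on the mat and the cat slept"
tokens = raw_text.split()

next_word_counts = defaultdict(Counter)
for current, following in zip(tokens, tokens[1:]):
    next_word_counts[current][following] += 1

# The toy "model" can now suggest a plausible continuation with no
# hand-labeling at all.
print(next_word_counts["the"].most_common(1))  # -> [('cat', 2)]
```

Real LLMs replace the word counts with a very large neural network over tokens, but the training signal is the same idea: predict what comes next in unlabeled text gathered from the internet.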
Those capabilities became apparent to a wider public when ChatGPT was released in November. A million people had used it within a week; 100m within two months. It was soon being used to generate school essays and wedding speeches. ChatGPT’s popularity, and Microsoft’s move to incorporate it into Bing, its search engine, prompted rival firms to release chatbots too.
Some of these produced strange results. Bing Chat suggested to a journalist that he should leave his wife. ChatGPT has been accused of defamation by a law professor. LLMs produce answers that have the patina of truth, but often contain factual errors or outright fabrications. Even so, Microsoft, Google and other tech firms have begun to incorporate LLMs into their products, to help users create documents and perform other tasks.
The recent acceleration in both the power and visibility of AI systems, and growing awareness of their abilities and defects, have raised fears that the technology is now advancing so quickly that it cannot be safely controlled. Hence the recent open letter, signed by tech grandees including Elon Musk, calling for a pause in advanced AI development, and growing concern that AI could threaten not just jobs, factual accuracy and reputations, but the existence of humanity itself.
Extinction? Rebellion?
The fear that machines will steal jobs is centuries old. But so far new technology has created new jobs to replace the ones it has destroyed. Machines tend to be able to perform some tasks, not others, increasing demand for people who can do the jobs machines cannot. Could this time be different? A sudden dislocation in job markets cannot be ruled out, even if so far there is no sign of one. Previous technology has tended to replace unskilled tasks, but LLMs can perform some white-collar tasks, such as summarizing documents and writing code.
The degree of existential risk posed by AI has been hotly debated. Experts are divided. In a survey of AI researchers carried out in 2022, 48% thought there was at least a 10% chance that AI’s impact would be “extremely bad (eg, human extinction)”. But 25% said the risk was 0%; the median researcher put the risk at 5%. The nightmare is that an advanced AI causes harm on a massive scale, by creating poisons or viruses, or persuading humans to commit terrorist acts. It need not have evil intent: researchers worry that future AIs may have goals that do not align with those of their human creators.
Such scenarios should not be dismissed. But all involve a huge amount of guesswork, and a leap from today’s technology. And many imagine that future AIs will have unfettered access to energy, money and computing power, which are real constraints today, and could be denied to a rogue AI in the future. Furthermore, experts tend to overstate the risks in their area, compared with other forecasters. (And Mr Musk, who is launching his own AI startup, has an interest in his rivals downing tools.) Imposing heavy regulation, or indeed a pause, today seems an over-reaction. A pause would also be unenforceable.
Regulation is needed, but for more mundane reasons than saving humanity. Existing AI systems raise real concerns about bias, privacy and intellectual-property rights. As the technology advances, other problems could become apparent. The key is to balance the promise of AI with an assessment of the risks, and to be ready to adapt.
So far governments are taking three different approaches. At one end of the spectrum is Britain, which has proposed a “light-touch” approach with no new rules or regulatory bodies, instead applying existing regulations to AI systems. The aim is to boost investment and turn Britain into an “AI superpower”. America has taken a similar approach, though the Biden administration is now seeking public views on what a rulebook might look like.
The EU is taking a tougher line. Its proposed law categorizes different uses of AI by the degree of risk, and requires increasingly stringent monitoring and disclosure as the degree of risk rises from, say, music recommendation to self-driving cars. Some uses of AI are banned altogether, such as subliminal advertising and remote biometrics. Firms that break the rules will be fined. To some critics, these regulations are too stifling.
But others say an even sterner approach is needed. Governments should treat AI like medicines, with a dedicated regulator, strict testing and pre-approval before public release. China is doing some of this, requiring firms to register AI products and undergo a security review before release. But safety may be less of a motive than politics: a key requirement is that AI’s output reflects the “core value of socialism”.
What to do? The light-touch approach is unlikely to be sufficient. If AI is as important a technology as cars, planes and medicines—and there is good reason to believe that it is—then, like them, it will need new regulations. Accordingly, the EU’s model is closest to the mark, although its classification system is overwrought and a principles-based approach would be more flexible. Compelling disclosure about how systems are trained, how they operate and how they are monitored, and requiring inspections, would be comparable to similar rules in other industries.
This could allow for tighter regulation over time, if needed. A dedicated regulator may then seem appropriate; so too may intergovernmental treaties, similar to those that govern nuclear weapons, should plausible evidence emerge of existential risk. To monitor that risk, governments could form a body modeled on CERN, a particle-physics laboratory, that could also study AI safety and ethics—areas where companies lack incentives to invest as much as society might wish.
This powerful technology poses new risks, but also offers extraordinary opportunities. Balancing the two means treading carefully. A measured approach today can provide the foundations on which further rules can be added in the future. But the time to start building those foundations is now.
© 2023, The Economist Newspaper Limited. All rights reserved. From The Economist, published under license. The original content can be found on www.economist.com