

AI needs regulation, but what kind, and how much?
Perhaps the best-known risk is embodied by the killer robots in the “Terminator” films—the idea that AI will turn against its human creators. The tale of the hubristic inventor who loses control of his own creation is centuries old. And in the modern era people are, observes Chris Dixon, a venture capitalist, “trained by Hollywood from childhood to fear artificial intelligence”. A version of this thesis, which focuses on the existential risks (or “x-risks”) to humanity that might someday be posed by AI, was fleshed out by Nick Bostrom, a Swedish philosopher, in a series of books and papers starting in 2002. His arguments have been embraced and extended by others including Elon Musk, boss of Tesla, SpaceX and, regrettably, X.
Those in this “AI safety” camp, also known as “AI doomers”, worry that it could cause harm in a variety of ways. If AI systems are able to improve themselves, for example, there could be sudden “take off” or “explosion” where AIs beget more powerful AIs in quick succession. The resulting “superintelligence” would far outsmart humans, doomers fear, and might have very different motivations from its human creators. Other doomer scenarios involve AIs carrying out cyber-attacks, helping with the creation of bombs and bioweapons and persuading humans to commit terrorist acts or deploy nuclear weapons.
After the release of ChatGPT in November 2022 highlighted the growing power of AI, public debate was dominated by AI-safety concerns. In March 2023 a group of tech grandees, including Mr Musk, called for a moratorium of at least six months on AI development. The following November a group of 100 world leaders and tech executives met at an AI-safety summit at Bletchley Park in England, declaring that the most advanced (“frontier”) AI models have the “potential for serious, even catastrophic, harm”.
This focus has since provoked something of a backlash. Critics make the case that x-risks are still largely speculative, and that bad actors who want to build bioweapons can already look for advice on the internet. Instead of worrying about theoretical, long-term risks posed by AI, they argue, the focus should be on real risks posed by AI that exist today, such as bias, discrimination, AI-generated disinformation and violation of intellectual-property rights. Prominent advocates of this position, known as the “AI ethics” camp, include Emily Bender, of the University of Washington, and Timnit Gebru, who was fired from Google after she co-wrote a paper about such dangers.
Examples abound of real-world risks posed by AI systems going wrong. An image-labelling feature in Google Photos tagged black people as gorillas; facial-recognition systems trained on mostly white faces misidentify people of colour; an AI resumé-scanning system built to identify promising job candidates consistently favoured men, even when names and genders of applicants were hidden; algorithms used to estimate reoffending rates, allocate child benefits or determine who qualifies for bank loans have displayed racial bias. AI tools can be used to create “deepfake” videos, including pornographic ones, to harass people online or misrepresent the views of politicians. And AI firms face a growing number of lawsuits from writers, artists and musicians who claim that the use of their intellectual property to train AI models is illegal.
When world leaders and tech executives met in Seoul in May 2024 for another AI gathering, the talk was less about far-off x-risks and more about such immediate problems—a trend likely to continue at the next AI-safety summit, if it is still called that, in France in 2025. The AI-ethics camp, in short, now has the ear of policymakers. This is unsurprising, because when it comes to making laws to regulate AI, a process now under way in much of the world, it makes sense to focus on attending to existing harms—for example by criminalising deepfakes—or on requiring audits of AI systems used by government agencies.
Even so, politicians have questions to answer. How broad should rules be? Is self-regulation sufficient, or are laws needed? Does the technology itself require rules, or only its applications? And what is the opportunity cost of regulations that reduce the scope for innovation? Governments have begun to answer these questions, each in their own way.
At one end of the spectrum are countries which rely mostly on self-regulation, including the Gulf states and Britain (although the new Labour government may change this). The leader of this pack is America. Members of Congress talk about AI risks but no law is forthcoming. This makes President Joe Biden’s executive order on AI, signed in October 2023, the country’s most important legal directive for the technology.
The order requires firms that use more than 10²⁶ computational operations to train an AI model, a threshold at which models are considered a potential risk to national security and public safety, to notify authorities and share the results of safety tests. This threshold will affect only the very largest models. For the rest, voluntary commitments and self-regulation reign supreme. Lawmakers worry that overly strict regulation could stifle innovation in a field where America is a world leader; they also fear that regulation could allow China to pull ahead in AI research.
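To get a feel for what the 10²⁶-operation threshold means in practice, here is a minimal sketch using the widely cited 6ND rule of thumb (roughly six floating-point operations per parameter per training token). The rule of thumb, the model sizes and the token counts below are illustrative assumptions for the sake of the example; they are not figures from the executive order or from any real model.

```python
# Reporting threshold in the October 2023 executive order: 1e26 operations.
THRESHOLD_FLOPS = 1e26

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute via the common 6*N*D heuristic:
    ~6 floating-point operations per parameter per training token."""
    return 6 * params * tokens

# Hypothetical frontier-scale run: 1 trillion parameters, 20 trillion tokens.
big_run = training_flops(1e12, 20e12)    # 1.2e26 FLOPs -> above the threshold

# Hypothetical mid-size run: 70 billion parameters, 2 trillion tokens.
mid_run = training_flops(70e9, 2e12)     # 8.4e23 FLOPs -> well below it

print(big_run >= THRESHOLD_FLOPS)   # True
print(mid_run >= THRESHOLD_FLOPS)   # False
```

On these assumed numbers, only a run at the very largest scale crosses the line, which is consistent with the article's point that the rule touches only a handful of frontier models.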
China’s government is taking a much tougher approach. It has proposed several sets of AI rules. The aim is less to protect humanity, or to shield Chinese citizens and companies, than it is to control the flow of information. AI models’ training data and outputs must be “true and accurate”, and reflect “the core values of socialism”. Given the propensity of AI models to make things up, these standards may be difficult to meet. But that may be what China wants: when everyone is in violation of the regulations, the government can selectively enforce them however it likes.
Europe sits somewhere in the middle. In May, the European Union passed the world’s first comprehensive legislation, the AI Act, which came into force on August 1st and which cemented the bloc’s role as the setter of global digital standards. But the law is mostly a product-safety document which regulates applications of the technology according to how risky they are. An AI-powered writing assistant needs no regulation, for instance, whereas a service that assists radiologists does. Some uses, such as real-time facial recognition in public spaces, are banned outright. Only the most powerful models have to comply with strict rules, such as mandates both to assess the risks that they pose and to take measures to mitigate them.
A new world order?
A grand global experiment is therefore under way, as different governments take different approaches to regulating AI. As well as introducing new rules, this also involves setting up some new institutions. The EU has created an AI Office to ensure that big model-makers comply with its new law. By contrast, America and Britain will rely on existing agencies in areas where AI is deployed, such as in health care or the legal profession. But both countries have created AI-safety institutes. Other countries, including Japan and Singapore, intend to set up similar bodies.
Meanwhile, three separate efforts are under way to devise global rules and a body to oversee them. One is the AI-safety summits and the various national AI-safety institutes, which are meant to collaborate. Another is the “Hiroshima Process”, launched in the Japanese city in May 2023 by the G7 group of rich democracies and increasingly taken over by the OECD, a larger club of mostly rich countries. A third effort is led by the UN, which has created an advisory body that is producing a report ahead of a summit in September.
These three initiatives will probably converge and give rise to a new international organisation. There are many views on what form it should take. OpenAI, the startup behind ChatGPT, says it wants something like the International Atomic Energy Agency, the world’s nuclear watchdog, to monitor x-risks. Microsoft, a tech giant and OpenAI’s biggest shareholder, prefers a less imposing body modelled on the International Civil Aviation Organisation, which sets rules for aviation. Academic researchers argue for an AI equivalent of the European Organisation for Nuclear Research, or CERN. A compromise, supported by the EU, would create something akin to the Intergovernmental Panel on Climate Change, which keeps the world abreast of research into global warming and its impact.
In the meantime, the picture is messy. Worried that a re-elected Donald Trump would do away with the executive order on AI, America’s states have moved to regulate the technology—notably California, with more than 30 AI-related bills in the works. One in particular, to be voted on in late August, has the tech industry up in arms. Among other things, it would force AI firms to build a “kill switch” into their systems. In Hollywood’s home state, the spectre of “Terminator” continues to loom large over the discussion of AI.
© 2024, The Economist Newspaper Ltd. All rights reserved. From The Economist, published under licence. The original content can be found on www.economist.com