
Mint Explainer: What the US exec order, G7 norms on AI mean for big tech cos
While about 60 countries, including India, already have national AI strategies, US President Joe Biden on Monday released a new executive order on AI, even as the Group of Seven (G7, comprising Canada, France, Germany, Italy, Japan, the UK and the US) introduced guiding principles and a code of conduct.
Further, the UK will host a two-day AI Safety Summit on Wednesday, even as Britain’s tech secretary Michelle Donelan has indicated the UK government’s reluctance to set up a global regulator for AI.
That said, the existing guidelines, reports, whitepapers, and working groups on the subject of AI regulation can be overwhelming. Canada, for instance, has drafted the Artificial Intelligence and Data Act (AIDA), while the US has the AI Bill of Rights and various state initiatives. China’s draft on ‘Administrative Measures for Generative AI Services’ is open for public consultation, while Brazil and Japan, too, have draft regulations in place. India is a founding member and also the Council Chair of the Global Partnership on Artificial Intelligence (GPAI), which includes countries such as the US, the UK, the EU, Australia, Canada, France, Germany, Italy, Japan, Mexico, New Zealand, South Korea, and Singapore.
The US administration, on its part, published the blueprint for an AI Bill of Rights in October 2022, and issued an Executive Order directing agencies to combat algorithmic discrimination this February. So why did it release another document on the same subject? In its defence, the US government says that over the past several months, it has engaged with many countries, including India, to understand their AI governance frameworks before releasing this new executive order.
What the new executive order mandates
Among other things, the new order has directed developers of “the most powerful AI systems” to share their “safety test results and other critical information with the U.S. government”. The order refers to AI models that are “…trained on tens of thousands of GPUs (~ 50K H100s or $50M+)…on any cluster with 10^20 FLOPs…”
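The cluster figure in that quote can be sanity-checked with simple arithmetic: the ~50K-H100 parenthetical and the 10^20 FLOP/s cluster capacity are consistent with each other. A minimal sketch, assuming an H100 delivers roughly 2×10^15 dense BF16 FLOP/s (a vendor spec figure, not from the order itself):

```python
import math

# Back-of-the-envelope check: how many GPUs does a cluster need before it
# reaches the 10^20 FLOP/s capacity quoted in the order? The per-GPU peak
# throughput below is an assumption, not a number from the executive order.
H100_PEAK_FLOPS = 2e15      # assumed peak throughput per H100, FLOP/s
CLUSTER_THRESHOLD = 1e20    # cluster capacity quoted in the order, FLOP/s

def gpus_to_reach_threshold(per_gpu_flops: float = H100_PEAK_FLOPS) -> int:
    """Number of GPUs needed for a cluster to reach the quoted capacity."""
    return math.ceil(CLUSTER_THRESHOLD / per_gpu_flops)

print(gpus_to_reach_threshold())  # 50000 -> matches the "~50K H100s" quoted
```

At the assumed per-GPU throughput, the arithmetic lands exactly on the ~50,000 H100s the order's paraphrase mentions.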
While the US government has not named any company, the directives are likely to apply to foundation models and large language models (LLMs) that have been built by big tech companies such as Microsoft-backed OpenAI, Microsoft, Google (which has also invested in OpenAI rival, Anthropic), Meta, and Hugging Face.
The new directive mandates that companies developing any foundation model must notify the US government when training it, and “must share the results of all red-team safety tests”. A red team would identify areas where a model could potentially pose a serious risk to national security, national economic security, or national public health and safety.
However, some AI experts have pointed out that this could add to the bureaucracy of decision-making. Bindu Reddy, CEO and co-founder of Abacus.AI, believes the AI Executive Order “is a bit ridiculous and pretty hard to enforce”. She asked on microblogging site X (formerly Twitter): “How do you determine if something is a ‘serious risk to national security’?”
There’s another issue that merits attention in this context. While the new executive order aims to protect the privacy and security of the US government, its agencies, and citizens, it is not clear what it would mean for enterprises around the world, including in India, that have begun building solutions on application programming interfaces (APIs) exposed by foundation models and LLMs built by US-based companies. Simply put, will APIs built on foundation models and LLMs that protect US interests suit companies in other countries too?
Meanwhile, according to the executive order, the standards will be set by the National Institute of Standards and Technology (NIST) and applied by the Department of Homeland Security to critical infrastructure sectors. It also mandates that agencies funding life-science projects would have to establish these standards as a condition of federal funding in a bid to prevent the use of the models to engineer biological materials.
There’s good reason for this move since advances in deep learning and molecular biology are speeding up drug discovery and also giving companies the potential to build, among other things, AI systems that can “discover the mechanisms regulating RNA processing, predict the effects of genetic variants, and design therapeutic molecules that restore RNA and protein”.
The US Department of Commerce, meanwhile, will develop guidance for content authentication and watermarking to clearly label AI-generated content. The idea is to stem the spread of fake news that can seem authoritative. Reddy countered on X: “We may as well kill vision AI, if we actually enforced that. Are Enterprises allowed to use AI to generate images and use them in their marketing?”
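The Commerce Department has not yet published a concrete scheme, but the general shape of content authentication can be illustrated with a toy metadata-signing sketch. Everything here (the key, the record fields, the function names) is hypothetical, chosen only to show the idea of attaching a verifiable “AI-generated” label to content:

```python
import hashlib
import hmac
import json

# Toy provenance-labelling sketch (NOT an official scheme): attach a signed
# record to a piece of content so a downstream consumer can verify both the
# "AI-generated" label and that the content has not been swapped out.
SECRET_KEY = b"demo-signing-key"  # assumption: shared key, for illustration only

def label_content(content: bytes, generator: str) -> dict:
    """Build a signed provenance record for AI-generated content."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),  # binds label to bytes
        "generator": generator,
        "ai_generated": True,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hmac"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_label(content: bytes, record: dict) -> bool:
    """Check the content hash and the signature over the record."""
    claimed = {k: v for k, v in record.items() if k != "hmac"}
    if claimed.get("sha256") != hashlib.sha256(content).hexdigest():
        return False  # content was altered or substituted
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["hmac"])
```

A real deployment would use public-key signatures and standardized metadata rather than a shared secret, but the verification flow, i.e. bind a label to the content bytes and sign it, is the core of any watermarking or authentication guidance.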
The US government order has also mandated the development of a National Security Memorandum, a document that is aimed at ensuring that the US military and intelligence community use AI “safely, ethically, and effectively in their missions, and will direct actions to counter adversaries’ military use of AI”.
The so-called “weaponization” of AI is becoming a reality, with many countries working on autonomous weapon systems (AWS). While many countries have autonomous drones, the US is testing AI bots that can fly a modified F-16 fighter jet. It also has a “secretive” Air Force programme called Next Generation Air Dominance, which involves about 1,000 drone “wingmen”, called collaborative combat aircraft, operating alongside 200 piloted planes. Russia, meanwhile, is experimenting with autonomous tank-like vehicles. Likewise, China is developing AI-run weapon systems.
The US government has also called on Congress to pass bipartisan data privacy legislation to protect all Americans, especially children. It has also mandated that the Department of Health and Human Services establish a safety programme to “receive reports of – and act to remedy – harms or unsafe healthcare practices involving AI”, even as it has undertaken to develop resources to support educators deploying AI-enabled educational tools, such as personalized tutoring in schools. It has also asked for a report on AI’s potential labour-market impacts.
The G7’s guiding principles on AI
On Monday, the G7 also released its International Guiding Principles on Artificial Intelligence and a voluntary Code of Conduct for AI developers under the Hiroshima AI Process, which was launched during the G7 Summit on 19 May 2023. It aims to advocate global safeguards for advanced AI systems. This effort is one component of broader international conversations about establishing guidelines for AI, occurring within organizations such as the OECD and the Global Partnership on Artificial Intelligence (GPAI), as well as in the framework of the EU-U.S. Trade and Technology Council and the EU’s Digital Partnerships.
Among other things, the G7 note exhorts organizations not to develop or deploy advanced AI systems “in ways that undermine democratic values, are particularly harmful to individuals or communities, facilitate terrorism, promote criminal misuse, or pose substantial risks to safety, security and human rights, and are thus not acceptable”. It also suggests that while testing, developers should seek to enable traceability in relation to datasets, processes, and decisions made during system development.
The document adds that these measures should be documented and supported by regularly updated technical documentation. It also highlights the need for organisations to publish transparency reports that contain “meaningful information” for all new significant releases of advanced AI systems. It also encourages organisations to collaborate with each other across the AI lifecycle to share and report relevant information to the public with a view to advancing safety, security and trustworthiness of advanced AI systems.
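The traceability and documentation the G7 code of conduct asks for can be pictured as a per-training-run record of datasets, processes, and decisions, serialized into the kind of regularly updated technical documentation the note describes. A minimal sketch; the class and field names are hypothetical, not drawn from the G7 document:

```python
import datetime
import json
from dataclasses import asdict, dataclass, field

# Illustrative traceability record: one entry per training run, capturing the
# datasets used, the processing applied, and notable decisions taken along the
# way, exportable as JSON for a transparency report.
@dataclass
class TrainingRunRecord:
    model_name: str
    datasets: list            # e.g. names/versions of corpora used
    preprocessing_steps: list  # e.g. "dedup", "PII filter"
    decisions: list = field(default_factory=list)
    timestamp: str = field(default_factory=lambda: datetime.datetime.now(
        datetime.timezone.utc).isoformat())

    def log_decision(self, note: str) -> None:
        """Append a development-time decision to the audit trail."""
        self.decisions.append(note)

    def to_json(self) -> str:
        """Serialize the record for inclusion in technical documentation."""
        return json.dumps(asdict(self), indent=2)

# Usage: record a run, log a decision, export the documentation entry.
rec = TrainingRunRecord("demo-llm", ["corpus-v1"], ["dedup", "PII filter"])
rec.log_decision("excluded dataset pending licence review")
print(rec.to_json())
```

In practice such records would feed model cards and the “regularly updated technical documentation” the G7 note calls for; the point of the sketch is only that traceability is an append-only log kept alongside the training process.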
Meanwhile, the UK is gearing up for its two-day AI Safety Summit beginning Wednesday, which is aimed at addressing long-term risks associated with AI technologies. The UK government is expected to showcase its “Frontier AI Taskforce”, an advisory panel reporting directly to the prime minister that is in talks with big AI companies including OpenAI, Anthropic and Google DeepMind to gain access to their models and evaluate risks.
But what about India? As noted above, India is the Council Chair of GPAI. In a July editorial, ‘Striking the right balance when regulating AI’, Mint argued that the Telecom Regulatory Authority of India’s recommendation to set up an independent statutory body, the Artificial Intelligence and Data Authority of India (AIDAI), to act as both regulator and recommendatory body while playing an advisory role across all AI-related sectors, is a step in the right direction for more than one reason.
Yet, a divided house
Global business leaders including Elon Musk and Masayoshi Son, and AI experts including Geoffrey Hinton and Yoshua Bengio, believe the phenomenal growth of generative AI models indicates that machines will soon think and act like humans, a milestone referred to as artificial general intelligence, or AGI.
They argue, rightly, that researchers are unable to fully understand how these unsupervised algorithms (which train on humongous amounts of data and learn on their own, without explicit human programming) perform tasks like creating new content, including audio, code, images, text, simulations, and videos. Further, these models can plagiarize, be biased, potentially replace thousands of routine jobs, and also pose security and privacy risks.
The fear is that if we are unable to fully understand the workings of these unsupervised networks, they could evolve into Skynet-like machines that achieve AI singularity, or AGI.
Yet, an equally accomplished group of experts, including Yann LeCun, Fei-Fei Li, and Andrew Ng, believes AI is nowhere close to becoming sentient. They underscore that AI’s benefits, such as powering smartphones, driverless vehicles, low-cost satellites, chatbots, and providing flood forecasts and warnings, far outweigh its perceived risks.
Governments and policymakers, however, can ill afford to wait for a consensus on foundation models, LLMs, and AGI before putting guardrails in place, as we pointed out in an editorial this month. In this context, the US executive order and the G7 guidelines are sensible moves, even though they will require continuous refinement.