
Metaverse
Mint Explainer: What the US exec order, G7 norms on AI mean for big tech cos – Crypto News
While about 60 countries, including India, already have national AI Strategies, US President Joe Biden on Monday released a new executive order on AI, even as the Group of Seven (G7 comprises Canada, France, Germany, Italy, Japan, the UK and the US) introduced guiding principles and a code of conduct.
Further, the UK will host a two-day AI Safety Summit on Wednesday, even as Britain’s tech secretary Michelle Donelan has indicated the UK government’s reluctance to set up a global regulator for AI.
That said, the existing guidelines, reports, whitepapers, and working groups on the subject of AI regulation can be overwhelming. Canada, for instance, has drafted the Artificial Intelligence and Data Act (AIDA) while the US has the AI Bill of Rights and State Initiatives. China’s draft on ‘Administrative Measures for Generative AI Services’ is open for public consultation, while Brazil and Japan, too, have draft regulations in place. India is a founding member and also the Council Chair of the Global Partnership on Artificial Intelligence (GPAI), which includes countries such as the US, the UK, EU, Australia, Canada, France, Germany, Italy, Japan, Mexico, New Zealand, South Korea, and Singapore.
The US administration, on its part, published the blueprint for an AI Bill of Rights in October 2022, and issued an Executive Order directing agencies to combat algorithmic discrimination this February. So why did it release another document on the same subject? In its defence, the US government says that over the past several months, it has engaged with many countries, including India, to understand their AI governance frameworks before releasing this new executive order.
What the new executive order rules
Among other things, the new order has directed developers of “the most powerful AI systems” to share their “safety test results and other critical information with the U.S. government”. The order refers to AI models that are “…trained on tens and thousands of GPUs (~ 50K H100s or $50M+)…on any cluster with 10^20 FLOPs…”
While the US government has not named any company, the directives are likely to apply to foundation models and large language models (LLMs) that have been built by big tech companies such as Microsoft-backed OpenAI, Microsoft, Google (which has also invested in OpenAI rival, Anthropic), Meta, and Hugging Face.
The new directive mandates that companies developing any foundation model must notify the US government when training it, and “must share the results of all red-team safety tests”. A red team would identify areas where a model could potentially pose a serious risk to national security, national economic security, or national public health and safety.
However, some AI experts are pointing out that this could add to the bureaucracy of decision-making. Bindu Reddy, CEO and co-founder of Abacus.AI, believes the AI Executive Order “is a bit ridiculous and pretty hard to enforce”. She asked on microblogging site ‘X’ (formerly Twitter): “How do you determine if something is a “serious risk to national security!”?”
There’s another issue that merits attention in this context. While the new executive order aims at protecting the privacy and security of the US government, its agencies, and citizens, it’s not clear what it would mean for enterprises around the world, including India, that have begun building solutions based on application programming interfaces (APIs) provided by foundation AI models and LLMs built by US-based companies. In simple words, will APIs based on foundation models and LLMs that protect US interests be suitable to companies in other countries too?
Meanwhile, according to the executive order, the standards will be set by the National Institute of Standards and Technology (NIST) and applied by the Department of Homeland Security to critical infrastructure sectors. It also mandates that agencies funding life-science projects would have to establish these standards as a condition of federal funding in a bid to prevent the use of the models to engineer biological materials.
There’s good reason for this move since advances in deep learning and molecular biology are speeding up drug discovery and also giving companies the potential to build, among other things, AI systems that can “discover the mechanisms regulating RNA processing, predict the effects of genetic variants, and design therapeutic molecules that restore RNA and protein”.
The US Department of Commerce, meanwhile, will develop guidance for content authentication and watermarking to clearly label AI-generated content. The idea is to stem the spread of fake news that can seem authoritative. Reddy countered on X, “We may as well kill vision AI, if we actually enforced that. Are Enterprises allowed to use AI to generate images and use them in their marketing?
The US government order has also mandated the development of a National Security Memorandum, a document that is aimed at ensuring that the US military and intelligence community use AI “safely, ethically, and effectively in their missions, and will direct actions to counter adversaries’ military use of AI”.
The so-called ”Weaponization” of AI is becoming a reality with many countries working on Autonomous Weapon Systems (AWS). While many countries have autonomous drones, the US is testing AI bots that can fly a modified F-16 fighter jet. It also has a “secretive” Air Force programme called the Next Generation Air Dominance, which involves about 1,000 drone “wingmen” called collaborative combat aircraft, operating alongside 200 piloted planes. Russia, meanwhile, is experimenting with autonomous tank-like vehicles. Likewise, China is developing AI-run weapon systems.
The US government has also called on Congress to pass a bipartisan data privacy legislation to protect all Americans, especially children. It has also mandated that the Department of Health and Human Services establish a safety programme to “receive reports of—and act to remedy – harms or unsafe healthcare practices involving AI”, even as it has undertaken to develop resources to support educators deploying AI-enabled educational tools, such as personalized tutoring in schools. It has also asked for a report on AI’s potential labour-market impacts.
G7 guiding principles on AI too
On Monday, the G7 also released its International Guiding Principles on Artificial Intelligence and a voluntary Code of Conduct for AI developers under the Hiroshima AI process. The G7 Hiroshima AI Process was launched during the G7 Summit on 19 May 2023. It aims at advocating global safeguards for advanced AI systems. This effort is one component of broader international conversations about establishing guidelines for AI, occurring within organizations such as the OECD, the Global Partnership on Artificial Intelligence (GPAI), as well as in the framework of the EU-U.S. Trade and Technology Council and the EU’s Digital Partnerships.
Among other things, the G7 note exhorts organizations not to develop or deploy advanced AI systems “in ways that undermine democratic values, are particularly harmful to individuals or communities, facilitate terrorism, promote criminal misuse, or pose substantial risks to safety, security and human rights, and are thus not acceptable”. It also suggests that while testing, developers should seek to enable traceability in relation to datasets, processes, and decisions made during system development.
The document adds that these measures should be documented and supported by regularly updated technical documentation. It also highlights the need for organisations to publish transparency reports that contain “meaningful information” for all new significant releases of advanced AI systems. It also encourages organisations to collaborate with each other across the AI lifecycle to share and report relevant information to the public with a view to advancing safety, security and trustworthiness of advanced AI systems.
Meanwhile, the UK is gearing up for a two-day AI Summit beginning Wednesday, which is aimed at addressing long-term risks associated with AI technologies. The UK government is expected to showcase its “Frontier AI Taskforce”, an advisory panel reporting directly to the prime minister that is in talks with big AI companies including OpenAI, Anthropic and Google DeepMind to gain access to their models and evaluate risks. However, Britain’s tech secretary Michelle Donelan has indicated the UK government’s reluctance to set up a global regulator for AI.
But what about India? As pointed out above, India is the Council Chair of GPAI. In a July editorial, ‘Striking the right balance when regulating AI’, Mint pointed out that the Telecom Regulatory Authority of India’s recommendation to set up an independent statutory Artificial Intelligence and Data Authority of India (AIDAI) body, which will act both as a regulator and recommendatory body while playing an advisory role for all AI-related sectors, is a step in the right direction for more than one reason.
Yet, a divided house
Chiefs of global companies including Elon Musk and Masayoshi Son, and AI experts including Geoffery Hinton and Yoshua Bengio believe the phenomenal growth of generative AI models indicates that machines will soon think and act like humans, a trend we refer to as artificial general intelligence, or AGI.
They argue rightly that researchers are unable to fully understand how these unsupervised algorithms (that train on humungous amounts of data but they learn on their own with no human programming) perform tasks like creating new content including audio, code, images, text, simulations, and videos. Further, these models can plagiarize, be biased, potentially replace thousands of routine jobs, and also pose security and privacy risks.
The fear is that if we are unable to fully understand the workings of these unsupervised networks, they could automatically evolve into Syknet-like machines that achieve AI Singularity or AGI.
Yet, an equally accomplished group of experts, including Yann LeCun, Fei-Fei Li, and Andrew Ng, believes AI is nowhere close to becoming sentient. They underscore that AI’s benefits, such as powering smartphones, driverless vehicles, low-cost satellites, chatbots, and providing flood forecasts and warnings, far outweigh its perceived risks.
Governments and policymakers, however, can ill-afford to wait for a consensus on foundation models, LLMs, and AGI to put guardrails in place as we pointed out in an editorial this month. In this context, the US executive order and the G7 guidelines are sensible moves even though they will require continuous refinement.
-
Technology1 week ago
Chip Designer Arm Plans to Become Chip Manufacturer – Crypto News
-
Cryptocurrency3 days ago
SUI eyes 24% rally as bullish price action gains strength – Crypto News
-
others6 days ago
Japanese Yen remains depressed amid modest USD strength; downside seems limited – Crypto News
-
Technology1 week ago
MacBook Air M3 15-inch model gets a ₹12,000 price drop on Amazon: Deal explained – Crypto News
-
Cryptocurrency2 days ago
Coinbase scores major win as SEC set to drop lawsuit – Crypto News
-
others1 week ago
Japan Foreign Investment in Japan Stocks declined to ¥-384.4B in February 7 from previous ¥-315.2B – Crypto News
-
Technology1 week ago
Perplexity takes on ChatGPT and Gemini with new Deep Research AI that completes most tasks in under 3 minutes – Crypto News
-
Technology1 week ago
Lava Pro Watch X with 1.44-inch AMOLED display, in-built GPS launched in India at ₹4,499 – Crypto News
-
Blockchain6 days ago
XRP Set To Outshine Gold? Analyst Predicts 1,000% Surge – Crypto News
-
Cryptocurrency1 week ago
Advisers on crypto: Takeaways from another survey – Crypto News
-
others1 week ago
Remains subdued below 1.4200 near falling wedge’s lower threshold – Crypto News
-
Cryptocurrency1 week ago
0xLoky Introduces AI-powered Intel for Crypto Data & On-chain Insights – Crypto News
-
Technology1 week ago
Factbox-China’s AI firms take spotlight with deals, low-cost models – Crypto News
-
Technology1 week ago
Massive price drops on Samsung Galaxy devices: Up to ₹10000 discount on Watch Ultra, Tab S10 Plus, and more – Crypto News
-
Cryptocurrency1 week ago
Tether Acquires a Minority Stake in Italian Football Giant Juventus – Crypto News
-
Blockchain1 week ago
XRP To 3 Digits? The ‘Signs’ That Could Confirm It, Basketball Analyst Says – Crypto News
-
others1 week ago
Australian Dollar jumps to highs since December on USD weakness – Crypto News
-
Technology1 week ago
Weekly Tech Recap: JioHotstar launched, Sam Altman vs Elon Musk feud intensifies, Perplexity takes on ChatGPT and more – Crypto News
-
Technology1 week ago
What will it take for India to become a global data centre hub? – Crypto News
-
Technology1 week ago
ChatGPT vs Perplexity: Sam Altman praises Aravind Srinivas’ Deep Research AI; ‘Proud of you’ – Crypto News
-
Blockchain1 week ago
NEAR Breaks Below Parallel Channel: Key Levels To Watch – Crypto News
-
Blockchain7 days ago
Will BTC Rebound Or Drop To $76,000? – Crypto News
-
Blockchain7 days ago
XRP Price Settles After Gains—Is a Fresh Upside Move Coming? – Crypto News
-
Metaverse6 days ago
How AI will divide the best from the rest – Crypto News
-
Business6 days ago
What Will be KAITO Price At Launch? – Crypto News
-
Business6 days ago
Elon Musk’s DOGE Launches Probe into US SEC, Ripple Lawsuit To End? – Crypto News
-
Blockchain6 days ago
XRP Price Pulls Back From Highs—Are Bulls Still in Control? – Crypto News
-
Business5 days ago
Whales Move From Shiba Inu to FXGuys – Here’s Why – Crypto News
-
Technology1 week ago
Best phones under ₹20,000 in February 2025: Poco X7, Motorola Edge 50 Neo and more – Crypto News
-
Blockchain1 week ago
Popular Investor Says Memecoin More Superior With ‘World’s Best Chart’ – Crypto News
-
Cryptocurrency1 week ago
Who is Satoshi Nakamoto, The Creator of Bitcoin? – Crypto News
-
Technology1 week ago
Grok 3 is coming! Elon Musk announces launch date, promises ‘smartest AI on Earth’ – Crypto News
-
Technology7 days ago
Union Minister Ashwini Vaishnaw to launch India AI Mission portal soon, 10 companies set to provide 14,000 GPUs – Crypto News
-
Business6 days ago
These 3 Altcoins Will Help You Capitalize on Stellar’s Recent DIp – Crypto News
-
others6 days ago
Forex Today: What if the RBA…? – Crypto News
-
Cryptocurrency5 days ago
Hayden Davis crypto scandal deepens as LIBRA memecoin faces fraud allegations – Crypto News
-
Technology5 days ago
Luminious inverters for your home to never see darkness again – Crypto News
-
Technology3 days ago
Stellantis Debuts System to Handle ‘Routine Driving Tasks’ – Crypto News
-
Metaverse1 week ago
Strange Love: why people are falling for their AI companions – Crypto News
-
Technology1 week ago
Former Google CEO warns of ‘Bin Laden scenario’ for AI: ‘They could misuse it and do real harm’ – Crypto News
-
Cryptocurrency1 week ago
Yap-to-earn takes over Twitter – Blockworks – Crypto News
-
Cryptocurrency1 week ago
Someone Just Won $100K in Bitcoin From a $50 Pack of Trading Cards – Crypto News
-
Technology1 week ago
Cyber fraud alert: Doctor duped of ₹15.50 lakh via fake trading app; here’s what happened – Crypto News
-
Cryptocurrency1 week ago
Crypto narratives as we await next market move – Crypto News
-
Business1 week ago
How Will It Affect Pi Coin Price? – Crypto News
-
Cryptocurrency1 week ago
GameStop Stock Price Pumps After Report of Bitcoin Buying Plans – Crypto News
-
Blockchain1 week ago
XRP Bullish Pennant Targets $15-$17 But Confirmation Is Required – Crypto News
-
Business6 days ago
Why Ethereum (ETH) Price Revival Could Start Soon After Solana Mess? – Crypto News
-
Business6 days ago
Market Veteran Predicts XRP Price If Ripple Completes Cup and Handle Pattern – Crypto News
-
Cryptocurrency6 days ago
Bitcoin Sees $430M in Outflows as Market Responds to Fed’s Hawkish Stance – Crypto News