Mint Explainer: What the US exec order, G7 norms on AI mean for big tech cos

While about 60 countries, including India, already have national AI strategies, US President Joe Biden on Monday released a new executive order on AI, even as the Group of Seven (G7, which comprises Canada, France, Germany, Italy, Japan, the UK and the US) introduced guiding principles and a code of conduct.

Further, the UK will host a two-day AI Safety Summit beginning Wednesday, even as Britain’s tech secretary Michelle Donelan has indicated the UK government’s reluctance to set up a global regulator for AI.

That said, the existing guidelines, reports, whitepapers, and working groups on AI regulation can be overwhelming. Canada, for instance, has drafted the Artificial Intelligence and Data Act (AIDA), while the US has the AI Bill of Rights and state-level initiatives. China’s draft ‘Administrative Measures for Generative AI Services’ is open for public consultation, while Brazil and Japan, too, have draft regulations in place. India is a founding member and the current Council Chair of the Global Partnership on Artificial Intelligence (GPAI), which includes the US, the UK, the EU, Australia, Canada, France, Germany, Italy, Japan, Mexico, New Zealand, South Korea, and Singapore.

The US administration, on its part, published the blueprint for an AI Bill of Rights in October 2022, and issued an Executive Order directing agencies to combat algorithmic discrimination this February. So why did it release another document on the same subject? In its defence, the US government says that over the past several months, it has engaged with many countries, including India, to understand their AI governance frameworks before releasing this new executive order.

What the new executive order mandates

Among other things, the new order has directed developers of “the most powerful AI systems” to share their “safety test results and other critical information with the U.S. government”. The order refers to AI models that are “…trained on tens of thousands of GPUs (~50K H100s or $50M+)…on any cluster with 10^20 FLOPs…”
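
To put that scale in perspective, here is a rough, hypothetical back-of-the-envelope sketch in Python. The per-GPU throughput, utilization, training duration and reporting threshold below are illustrative assumptions, not figures taken from the executive order; only the ~50,000-GPU count echoes the description quoted above.

```python
# Illustrative estimate of total training compute for a large AI model.
# All numeric values are assumptions for the sake of the example.

def training_flops(num_gpus: int, flops_per_gpu: float,
                   utilization: float, days: float) -> float:
    """Estimate total floating-point operations for a training run."""
    seconds = days * 24 * 60 * 60
    return num_gpus * flops_per_gpu * utilization * seconds

if __name__ == "__main__":
    estimate = training_flops(
        num_gpus=50_000,          # "~50K H100s", as quoted in the article
        flops_per_gpu=1e15,       # assumed ~1 PFLOP/s per accelerator
        utilization=0.4,          # assumed average hardware utilization
        days=90,                  # assumed length of the training run
    )
    ASSUMED_THRESHOLD = 1e26      # hypothetical reporting threshold
    print(f"Estimated training compute: {estimate:.2e} FLOPs")
    if estimate > ASSUMED_THRESHOLD:
        print("Such a run would cross the assumed reporting threshold.")
```

On these assumptions, the estimate works out to roughly 1.5 x 10^26 operations, which suggests why clusters of tens of thousands of accelerators are the ones regulators are focusing on.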

While the US government has not named any company, the directives are likely to apply to foundation models and large language models (LLMs) built by big tech companies such as Microsoft-backed OpenAI, Microsoft, Google (which has also invested in OpenAI rival Anthropic), Meta, and Hugging Face.

The new directive mandates that companies developing any foundation model must notify the US government when training it, and “must share the results of all red-team safety tests”. A red team would identify areas where a model could potentially pose a serious risk to national security, national economic security, or national public health and safety.

However, some AI experts point out that this could add to the bureaucracy of decision-making. Bindu Reddy, CEO and co-founder of Abacus.AI, believes the AI Executive Order “is a bit ridiculous and pretty hard to enforce”. She asked on microblogging site X (formerly Twitter): “How do you determine if something is a ‘serious risk to national security!’?”

There’s another issue that merits attention in this context. While the new executive order aims to protect the privacy and security of the US government, its agencies, and citizens, it’s not clear what it would mean for enterprises around the world, including in India, that have begun building solutions on application programming interfaces (APIs) exposed by foundation models and LLMs built by US-based companies. Put simply, will APIs based on foundation models and LLMs that protect US interests be suitable for companies in other countries too?

Meanwhile, according to the executive order, the standards will be set by the National Institute of Standards and Technology (NIST) and applied by the Department of Homeland Security to critical infrastructure sectors. The order also mandates that agencies funding life-science projects establish these standards as a condition of federal funding, in a bid to prevent the use of AI models to engineer dangerous biological materials.

There’s good reason for this move since advances in deep learning and molecular biology are speeding up drug discovery and also giving companies the potential to build, among other things, AI systems that can “discover the mechanisms regulating RNA processing, predict the effects of genetic variants, and design therapeutic molecules that restore RNA and protein”.

The US Department of Commerce, meanwhile, will develop guidance for content authentication and watermarking to clearly label AI-generated content. The idea is to stem the spread of fake news that can seem authoritative. Reddy countered on X: “We may as well kill vision AI, if we actually enforced that. Are Enterprises allowed to use AI to generate images and use them in their marketing?”
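
The order does not spell out how such labelling would work technically. One simplified way to think about content authentication is cryptographically signed provenance metadata attached to a generated file. The sketch below is only an illustration of that idea, with a hypothetical key and record format; it is not the scheme the Commerce Department will specify, and real approaches (such as C2PA-style manifests or pixel-level watermarks) are considerably more involved.

```python
# A much-simplified illustration of content provenance: attach a keyed
# signature to a generated file's bytes so downstream consumers can verify
# who produced it and that it is declared AI-generated.
import hashlib
import hmac
import json

SECRET_KEY = b"publisher-signing-key"  # hypothetical key held by the generator

def label_content(content: bytes, generator: str) -> dict:
    """Produce a provenance record declaring the content as AI-generated."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "ai_generated": True,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_label(content: bytes, record: dict) -> bool:
    """Check that the record matches the content and was signed with the key."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    if hashlib.sha256(content).hexdigest() != claimed["sha256"]:
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

if __name__ == "__main__":
    image_bytes = b"...generated image bytes..."
    rec = label_content(image_bytes, generator="example-image-model")
    print(verify_label(image_bytes, rec))        # True
    print(verify_label(b"tampered bytes", rec))  # False
```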

The US government order has also mandated the development of a National Security Memorandum, a document that is aimed at ensuring that the US military and intelligence community use AI “safely, ethically, and effectively in their missions, and will direct actions to counter adversaries’ military use of AI”.

The so-called “weaponization” of AI is becoming a reality, with many countries working on autonomous weapon systems (AWS). While many countries have autonomous drones, the US is testing AI bots that can fly a modified F-16 fighter jet. It also has a “secretive” Air Force programme, Next Generation Air Dominance, which involves about 1,000 drone “wingmen”, called collaborative combat aircraft, operating alongside 200 piloted planes. Russia, meanwhile, is experimenting with autonomous tank-like vehicles, while China is developing AI-run weapon systems.

The US government has also called on Congress to pass bipartisan data privacy legislation to protect all Americans, especially children. It has also mandated that the Department of Health and Human Services establish a safety programme to “receive reports of – and act to remedy – harms or unsafe healthcare practices involving AI”, even as it has undertaken to develop resources to support educators deploying AI-enabled educational tools, such as personalized tutoring in schools. It has also asked for a report on AI’s potential labour-market impacts.

G7’s guiding principles on AI

On Monday, the G7 also released its International Guiding Principles on Artificial Intelligence and a voluntary Code of Conduct for AI developers under the Hiroshima AI Process, which was launched during the G7 Summit on 19 May 2023 and aims to promote global safeguards for advanced AI systems. This effort is one component of broader international conversations about establishing guidelines for AI, taking place within organisations such as the OECD and GPAI, as well as in the framework of the EU-U.S. Trade and Technology Council and the EU’s Digital Partnerships.

Among other things, the G7 note exhorts organisations not to develop or deploy advanced AI systems “in ways that undermine democratic values, are particularly harmful to individuals or communities, facilitate terrorism, promote criminal misuse, or pose substantial risks to safety, security and human rights, and are thus not acceptable”. It also suggests that, while testing, developers should seek to enable traceability in relation to datasets, processes, and decisions made during system development.

The document adds that these measures should be documented and supported by regularly updated technical documentation. It also highlights the need for organisations to publish transparency reports that contain “meaningful information” for all new significant releases of advanced AI systems. It also encourages organisations to collaborate with each other across the AI lifecycle to share and report relevant information to the public with a view to advancing safety, security and trustworthiness of advanced AI systems.

Meanwhile, the UK is gearing up for the two-day AI Safety Summit beginning Wednesday, which is aimed at addressing long-term risks associated with AI technologies. The UK government is expected to showcase its “Frontier AI Taskforce”, an advisory panel reporting directly to the prime minister that is in talks with big AI companies including OpenAI, Anthropic and Google DeepMind to gain access to their models and evaluate risks. Donelan, however, has reiterated the government’s reluctance to set up a global regulator for AI.

But what about India? As pointed out above, India is the Council Chair of GPAI. In a July editorial, ‘Striking the right balance when regulating AI’, Mint argued that the Telecom Regulatory Authority of India’s recommendation to set up the Artificial Intelligence and Data Authority of India (AIDAI), an independent statutory body that would act as both a regulator and a recommendatory and advisory body for all AI-related sectors, is a step in the right direction for more than one reason.

Yet, a divided house

Chiefs of global companies, including Elon Musk and Masayoshi Son, and AI experts, including Geoffrey Hinton and Yoshua Bengio, believe the phenomenal growth of generative AI models indicates that machines will soon think and act like humans, a capability referred to as artificial general intelligence, or AGI.

They argue, rightly, that researchers do not fully understand how these unsupervised algorithms, which train on humongous amounts of data and learn on their own without explicit human programming, perform tasks like creating new content, including audio, code, images, text, simulations, and videos. Further, these models can plagiarize, be biased, potentially replace thousands of routine jobs, and also pose security and privacy risks.

The fear is that if we are unable to fully understand the workings of these unsupervised networks, they could evolve on their own into Skynet-like machines that achieve AI singularity, or AGI.

Yet, an equally accomplished group of experts, including Yann LeCun, Fei-Fei Li, and Andrew Ng, believes AI is nowhere close to becoming sentient. They underscore that AI’s benefits, such as powering smartphones, driverless vehicles, low-cost satellites and chatbots, and providing flood forecasts and warnings, far outweigh its perceived risks.

Governments and policymakers, however, can ill afford to wait for a consensus on foundation models, LLMs, and AGI before putting guardrails in place, as we pointed out in an editorial this month. In this context, the US executive order and the G7 guidelines are sensible moves, even though they will require continuous refinement.
