Mint Primer: What if someone uses ChatGPT for military work?

Until recently, OpenAI wouldn’t allow its models to be used for activity that had “high risk of physical harm, including weapons development, military and warfare.” Now it has removed the words ‘military’ and ‘warfare’. Is this a routine update, or should we be worried?

What is the policy change at OpenAI?

Until recently, OpenAI had explicitly banned the use of its models for weapons development, military and warfare. But on 10 January, it updated its policy. It continues to prohibit the use of its service "to harm yourself or others", citing "develop or use weapons" as an example, but has removed the words 'military' and 'warfare', as first pointed out by The Intercept. With OpenAI already working with the Pentagon on a number of projects, including cybersecurity initiatives, there is concern in some quarters that this softening of stance could result in the misuse of GPT-4 and ChatGPT for military and warfare purposes.

What is the context of this policy change?

Microsoft Corp., OpenAI’s biggest investor, already has software contracts with the armed forces and other government branches in the US. In September 2023, the US Defense Advanced Research Projects Agency (DARPA) said it would collaborate with Anthropic, Google, Microsoft and OpenAI to help develop “state-of-the-art cybersecurity systems”. Anna Makanju, OpenAI’s vice-president of global affairs, said in an interview at the World Economic Forum in Davos on 16 January that the company had initiated talks with the US government about methods to assist in preventing veteran suicides.

What does OpenAI have to say on this?

OpenAI told TechCrunch that while it does not allow its platform to be used to harm people, develop weapons, conduct communications surveillance, or injure others or destroy property, there are "national security use cases that align with our mission", such as its partnership with DARPA to develop cybersecurity tools.

Can AI be used to make weapons?

Some fear AI-enabled 'robot generals' may eventually be able to launch nuclear weapons. The WEF ranks adverse outcomes of AI among the top risks in its Global Risks Report 2024. On 16 May 2023, OpenAI CEO Sam Altman told a US Senate subcommittee, "If this technology goes wrong, it can go quite wrong…" On 25 July 2023, Dario Amodei, CEO of Anthropic, told another Senate subcommittee that AI models could be misused to make bioweapons if appropriate guardrails are not put in place.

Do we need more laws to address the issue?

Some experts believe AI-powered machines will eventually think and act like humans, a stage known as artificial general intelligence (AGI), and may even surpass them, termed artificial super intelligence (ASI). There is broad agreement that the draft risk-based European Union AI Act, the guiding principles and AI code of conduct by the G-7, and the US Blueprint for an AI Bill of Rights are steps in the right direction. India, too, is expected to introduce the Digital India Act soon, with guardrails to regulate AI and intermediaries. That said, fear of AI should not stifle innovation.
