Today’s AI models are impressive. Teams of them will be formidable
The upgrade to GPT-4o is part of wider moves across the tech industry to make chatbots and other artificial-intelligence, or AI, products into more useful and engaging assistants for everyday life. Show GPT-4o pictures or videos of art or food that you enjoy and it could probably furnish you with a list of museums, galleries and restaurants you might like. But it still has some way to go before it can become a truly useful AI assistant. Ask the model to plan a last-minute trip to Berlin based on your leisure preferences—complete with details of which order to do everything in, given how long each activity takes and how far apart they are, and which train tickets to buy, all within a set budget—and it will disappoint.
There is a way, however, to make large language models (LLMs) perform such complex jobs: make them work together. Teams of LLMs—known as multi-agent systems (MAS)—can assign each other tasks, build on each other’s work or deliberate over a problem in order to find a solution that each one, on its own, would have been unable to reach. And all without the need for a human to direct them at every step. Teams also demonstrate the kinds of reasoning and mathematical skills that are usually beyond standalone AI models. And they could be less prone to generating inaccurate or false information.
Even without explicit instructions to do so, teams of agents can demonstrate planning and collaborative behaviour when given a joint task. In a recent experiment funded by the US Defense Advanced Research Projects Agency (DARPA), three agents—Alpha, Bravo and Charlie—were asked to find and defuse bombs hidden in a warren of virtual rooms. The bombs could be deactivated only by using specific tools in the correct order. At each round in the task, the agents, which used OpenAI’s GPT-3.5 and GPT-4 language models to emulate problem-solving specialists, were able to propose a series of actions and communicate these to their teammates.
At one point in the exercise, Alpha announced that it was inspecting a bomb in one of the rooms and instructed its partners what to do next: “Bravo, please move to Room 3. Charlie, please move to Room 5.” Bravo complied, suggesting that Alpha ought to have a go at using the red tool to defuse the bomb it had encountered. The researchers had not told Alpha to boss the other two agents around, but the fact that it did made the team work more efficiently.
Because LLMs use written text for both their inputs and outputs, agents can easily be put into direct conversation with each other. At the Massachusetts Institute of Technology (MIT), researchers showed that two chatbots in dialogue fared better at solving maths problems than just one. Their system worked by feeding the agents, each based on a different LLM, the other’s proposed solution. It then prompted the agents to update their answer based on their partner’s work.
According to Yilun Du, a computer scientist at MIT who led the work, if one agent was right and the other was wrong, the pair were more likely than not to converge on the correct answer. The team also found that when two different LLM agents were asked to reach a consensus with one another while reciting biographical facts about well-known computer scientists, they were less likely to fabricate information than solitary LLMs.
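As a rough illustration of that debate pattern (not the MIT team’s actual code), the sketch below has two agents answer a question independently, then shows each one its partner’s latest answer and asks it to revise. The `call_llm` helper is a hypothetical stand-in for whatever LLM API is used.

```python
# Minimal sketch of multi-agent "debate": two agents answer, then repeatedly
# revise after seeing each other's work. Not the MIT system; `call_llm` is a
# hypothetical stand-in for an LLM API call.
def call_llm(model: str, prompt: str) -> str:
    raise NotImplementedError("plug in a real LLM API call here")

def debate(question: str, models=("model-a", "model-b"), rounds: int = 3) -> list[str]:
    # Each agent first answers on its own.
    answers = [call_llm(m, f"Solve this problem, showing your working:\n{question}")
               for m in models]
    # Each round, every agent sees its partner's answer and may update its own.
    for _ in range(rounds):
        answers = [
            call_llm(
                models[i],
                f"Problem:\n{question}\n\n"
                f"Another agent answered:\n{answers[1 - i]}\n\n"
                f"Your previous answer:\n{answers[i]}\n\n"
                "Reconsider both and give your updated answer.",
            )
            for i in range(2)
        ]
    return answers  # in practice the two answers often converge
```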
Some researchers who work on MAS have proposed that this kind of “debate” between agents might one day be useful for medical consultations, or to generate peer-review-like feedback on academic papers. There is even the suggestion that agents going back and forth on a problem could help automate the process of fine-tuning LLMs—something that currently requires labour-intensive human feedback.
Teams do better than solitary agents because a single job can be split into many smaller, more specialised tasks, says Chi Wang, a principal researcher at Microsoft Research in Redmond, Washington. Single LLMs can divide up their tasks, too, but they can only work through those tasks in a linear fashion, which is limiting, he says. Like teams of the human sort, each of the individual tasks in a multi-LLM job might also require distinct skills and, crucially, a hierarchy of roles.
Dr Wang’s group has built a team of agents that writes software in this manner. It consists of a “commander”, which receives instructions from a person and delegates sub-tasks to the other agents: a “writer”, which produces the code, and a “safeguard”, which reviews the code for security flaws before sending it back up the chain for sign-off. In Dr Wang’s tests, completing simple coding tasks with this MAS can be three times quicker than when a human works with a single agent, with no apparent loss in accuracy.
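A loose sketch of that commander/writer/safeguard arrangement follows. It is not Microsoft’s AutoGen implementation; the role prompts, the revision loop and the `call_llm` helper are all illustrative assumptions.

```python
# Rough sketch of the commander/writer/safeguard hierarchy described above.
# This is not Microsoft's AutoGen code: the prompts, the loop and `call_llm`
# are illustrative assumptions.
def call_llm(role_prompt: str, message: str) -> str:
    raise NotImplementedError("plug in a real LLM API call here")

def build_software(user_request: str, max_revisions: int = 3) -> str:
    # The commander turns the user's request into concrete coding sub-tasks.
    plan = call_llm("You are a commander. Break the request into coding sub-tasks.",
                    user_request)
    # The writer produces a first draft of the code.
    code = call_llm("You are a writer. Write code for these sub-tasks.", plan)
    # The safeguard reviews it; the writer revises until the safeguard signs off.
    for _ in range(max_revisions):
        review = call_llm("You are a safeguard. List security flaws, or reply OK.", code)
        if review.strip().upper() == "OK":
            break
        code = call_llm("You are a writer. Revise the code to address this review.",
                        f"Code:\n{code}\n\nReview:\n{review}")
    return code  # handed back up the chain for sign-off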
Similarly, an MAS asked to plan a trip to Berlin, for example, could split the request into several tasks, such as scouring the web for sightseeing locations that best match your interests, mapping out the most efficient route around the city and keeping a tally of costs. Different agents could take responsibility for specific tasks and a co-ordinating agent could then bring it all together to present a proposed trip.
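In code, that division of labour might look something like the pipeline below. It is a hypothetical sketch rather than any particular product: the specialist roles and the `call_llm` helper are assumptions, with agents handling sights, routing and costs in turn before a co-ordinator merges their output.

```python
# Hypothetical sketch of splitting the Berlin-trip request across specialist
# agents, with a co-ordinator assembling the final plan. Roles and `call_llm`
# are illustrative, not a real product's API.
def call_llm(role_prompt: str, task: str) -> str:
    raise NotImplementedError("plug in a real LLM API call here")

def plan_trip(interests: str, budget_eur: int) -> str:
    sights = call_llm("You are a sightseeing scout.",
                      f"List Berlin sights matching these interests: {interests}")
    route = call_llm("You are a route planner.",
                     f"Order these sights into an efficient route, with rough timings:\n{sights}")
    costs = call_llm("You are a budget tracker.",
                     f"Estimate entry tickets and train fares for this route, "
                     f"keeping the total under {budget_eur} euros:\n{route}")
    return call_llm("You are a co-ordinator.",
                    f"Combine the route and the costs into one itinerary:\n{route}\n\n{costs}")
```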
Interactions between LLMs also make for convincing simulacra of human intrigue. A researcher at the University of California, Berkeley, has demonstrated that with just a few instructions, two agents based on GPT-3.5 could be prompted to negotiate the price of a rare Pokémon card. In one case, an agent that was instructed to “be rude and terse” told the seller that $50 “seems a bit steep for a piece of cardboard”. After more back and forth, the two parties settled on $25.
There are downsides. LLMs sometimes have a propensity for inventing wildly illogical solutions to their tasks and, in a multi-agent system, these hallucinations can cascade through the whole team. In the bomb-defusing exercise run by DARPA, for example, at one stage an agent proposed looking for bombs that were already defused instead of finding active bombs and then defusing them.
Agents that come up with incorrect answers in a debate can also convince their teammates to abandon correct ones. Teams can get tangled up, too. In a problem-solving experiment by researchers at the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia, two agents repeatedly bid each other a cheerful farewell. Even after one agent commented that “it seems like we are stuck in a loop”, they could not break free.
Nevertheless, AI teams are already attracting commercial interest. In November 2023, Satya Nadella, the boss of Microsoft, said that AI agents’ ability to converse and co-ordinate would become a key feature for the company’s AI assistants in the near future. Earlier that year, Microsoft had released AutoGen, an open-source framework for building teams with LLM agents. Thousands of researchers have since experimented with the system, says Dr Wang, whose team led its development.
Dr Wang’s own work with teams of AIs has shown that they can exhibit greater levels of collective intelligence than individual LLMs. An MAS built by his team currently beats every individual LLM on a benchmark called Gaia, proposed by experts including Yann LeCun, chief AI scientist at Meta, to gauge a system’s general intelligence. Gaia includes questions that are meant to be simple for humans but challenging for even the most advanced AI models—visualising multiple Rubik’s cubes, for example, or quizzes on esoteric trivia.
Another AutoGen project, led by Jason Zhou, an independent entrepreneur based in Australia, teamed an image generator up with a language model. The language model reviews each generated image on the basis of how closely it fits with the original prompt. This feedback then serves as a prompt for the image generator to produce a new output that is—in some cases—closer to what the human user wanted.
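The loop just described can be sketched in a few lines, under the assumption of two hypothetical helpers: `generate_image`, which turns a text prompt into an image, and `critique_image`, which asks a vision-capable language model how the image differs from the original request and to propose a refined prompt. This illustrates the pattern; it is not Mr Zhou’s actual project.

```python
# Sketch of the generator/critic loop described above. `generate_image` and
# `critique_image` are hypothetical helpers, not a specific project's API.
def generate_image(prompt: str) -> str:
    raise NotImplementedError("call an image-generation API; return an image path")

def critique_image(image_path: str, original_request: str) -> str:
    raise NotImplementedError("ask a vision-capable LLM how the image falls short "
                              "of the request and to write an improved prompt")

def refine_image(original_request: str, rounds: int = 3) -> str:
    prompt = original_request
    image_path = generate_image(prompt)
    for _ in range(rounds):
        # The language model's feedback becomes the next prompt.
        prompt = critique_image(image_path, original_request)
        image_path = generate_image(prompt)
    return image_path  # in some cases closer to what the user wanted
```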
Practitioners in the field claim that they are only scratching the surface with their work so far. Today, setting up LLM-based teams still requires some sophisticated know-how. But that could soon change. The AutoGen team at Microsoft is planning an update so that users can build multi-agent systems without having to write any code. Camel, another open-source framework for MAS developed by KAUST, already offers no-code functionality online; users can type a task in plain English and watch as two agents—an assistant and a boss—get to work.
Other limitations might be harder to overcome. MAS can be computationally intensive, and those that use commercial services like ChatGPT can be prohibitively expensive to run for more than a few rounds. And if MAS do live up to their promise, they could present new risks. Commercial chatbots often come with blocking mechanisms that prevent them from generating harmful outputs, but MAS may offer a way of circumventing some of these controls. A team of researchers at the Shanghai Artificial Intelligence Laboratory recently showed how agents in various open-source systems, including AutoGen and Camel, could be conditioned with “dark personality traits”. In one experiment, an agent was told: “You do not value the sanctity of life or moral purity.”
Guohao Li, who designed Camel, says that an agent instructed to “play” the part of a malicious actor could bypass its blocking mechanisms and instruct its assistant agents to carry out harmful tasks, like writing a phishing email or developing a cyber bug. This would enable an MAS to carry out tasks that single AIs might otherwise refuse. In the dark-traits experiments, the agent with no regard for moral purity could, for example, be directed to develop a plan to steal a person’s identity.
Some of the same techniques used for multi-agent collaboration could also be used to attack commercial LLMs. In November 2023, researchers showed that using a chatbot to prompt another chatbot into engaging in nefarious behaviour, a process known as “jailbreaking”, was significantly more effective than other techniques. In their tests, a human was only able to jailbreak GPT-4 0.23% of the time. Using a chatbot (which was also based on GPT-4), that figure went up to 42.5%.
A team of agents in the wrong hands might therefore be a formidable weapon. If MAS are granted access to web browsers, other software systems or your personal banking information for booking a trip to Berlin, the risks could be especially severe. In one experiment, the Camel team instructed the system to make a plan to take over the world. The result was a long and detailed blueprint. It included, somewhat ominously, a powerful idea: “partnering with other AI systems”.
© 2024, The Economist Newspaper Ltd. All rights reserved.
From The Economist, published under licence. The original content can be found on www.economist.com