

Today’s AI models are impressive. Teams of them will be formidable
The upgrade is part of wider moves across the tech industry to make chatbots and other artificial-intelligence, or AI, products into more useful and engaging assistants for everyday life. Show GPT-4o pictures or videos of art or food that you enjoy and it could probably furnish you with a list of museums, galleries and restaurants you might like. But it still has some way to go before it can become a truly useful AI assistant. Ask the model to plan a last-minute trip to Berlin for you based on your leisure preferences, complete with the order in which to do everything (given how long each activity takes and how far apart they are) and which train tickets to buy, all within a set budget, and it will disappoint.
There is a way, however, to make large language models (LLMs) perform such complex jobs: make them work together. Teams of LLMs—known as multi-agent systems (MAS)—can assign each other tasks, build on each other’s work or deliberate over a problem in order to find a solution that each one, on its own, would have been unable to reach. And all without the need for a human to direct them at every step. Teams also demonstrate the kinds of reasoning and mathematical skills that are usually beyond standalone AI models. And they could be less prone to generating inaccurate or false information.
Even without explicit instructions to do so, teams of agents can demonstrate planning and collaborative behaviour when given a joint task. In a recent experiment funded by the US Defense Advanced Research Projects Agency (DARPA), three agents—Alpha, Bravo and Charlie—were asked to find and defuse bombs hidden in a warren of virtual rooms. The bombs could be deactivated only by using specific tools in the correct order. In each round of the task, the agents, which used OpenAI’s GPT-3.5 and GPT-4 language models to emulate problem-solving specialists, were able to propose a series of actions and communicate these to their teammates.
At one point in the exercise, Alpha announced that it was inspecting a bomb in one of the rooms and instructed its partners what to do next: “Bravo, please move to Room 3. Charlie, please move to Room 5.” Bravo complied, suggesting that Alpha ought to have a go at using the red tool to defuse the bomb it had encountered. The researchers had not told Alpha to boss the other two agents around, but the fact that it did made the team work more efficiently.
Because LLMs use written text for both their inputs and outputs, agents can easily be put into direct conversation with each other. At the Massachusetts Institute of Technology (MIT), researchers showed that two chatbots in dialogue fared better at solving maths problems than one alone. The two agents, each based on a different LLM, were shown each other’s proposed solutions and then prompted to update their own answers in light of their partner’s work.
According to Yilun Du, a computer scientist at MIT who led the work, if one agent was right and the other was wrong, they were more likely than not to converge on the correct answer. The team also found that when two different LLM agents were asked to reach a consensus on biographical facts about well-known computer scientists, they were less likely to fabricate information than solitary LLMs were.
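The mechanics of such a debate are straightforward to sketch. The snippet below is a minimal illustration in Python, not the MIT team’s actual code: `call_llm`, the model names and the prompt wording are all assumptions standing in for whatever API and phrasing a real system would use.

```python
def call_llm(model: str, prompt: str) -> str:
    """Hypothetical helper: send `prompt` to `model` and return its reply."""
    raise NotImplementedError("wire this up to an LLM API of your choice")


def debate(question: str, models=("model-a", "model-b"), rounds: int = 2) -> list[str]:
    # Each agent first answers independently.
    answers = [
        call_llm(m, f"Solve this problem and show your reasoning:\n{question}")
        for m in models
    ]
    for _ in range(rounds):
        updated = []
        for i, model in enumerate(models):
            # Show each agent its partner's latest answer and ask it to reconsider.
            prompt = (
                f"Problem:\n{question}\n\n"
                f"Another agent proposed this solution:\n{answers[1 - i]}\n\n"
                f"Your previous solution:\n{answers[i]}\n\n"
                "Compare the two, point out any errors, and give your updated answer."
            )
            updated.append(call_llm(model, prompt))
        answers = updated
    return answers  # with luck, the two answers have converged
```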
Some researchers who work on MAS have proposed that this kind of “debate” between agents might one day be useful for medical consultations, or to generate peer-review-like feedback on academic papers. There is even the suggestion that agents going back and forth on a problem could help automate the process of fine-tuning LLMs—something that currently requires labour-intensive human feedback.
Teams do better than solitary agents because a single job can be split into many smaller, more specialised tasks, says Chi Wang, a principal researcher at Microsoft Research in Redmond, Washington. Single LLMs can divide up their tasks, too, but they can only work through those tasks in a linear fashion, which is limiting, he says. Like teams of the human sort, each of the individual tasks in a multi-LLM job might also require distinct skills and, crucially, a hierarchy of roles.
Dr Wang’s team have created a team of agents that writes software in this manner. It consists of a “commander”, which receives instructions from a person and delegates sub-tasks to the other agents—a “writer” that writes the code, and a “safeguard” agent that reviews the code for security flaws before sending it back up the chain for signoff. According to Dr Wang and his team’s tests, simple coding tasks using their MAS can be three times quicker than when a human uses a single agent, with no apparent loss in accuracy.
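That division of labour can be captured in a few lines. The sketch below is an illustrative reconstruction of the commander/writer/safeguard pattern, not Microsoft’s implementation; it reuses the hypothetical `call_llm` helper from the earlier sketch, and the role prompts and approval check are assumptions.

```python
# Illustrative commander/writer/safeguard loop (reuses the hypothetical
# call_llm helper from the debate sketch above).
ROLE_PROMPTS = {
    "commander": "You turn a user's request into a clear coding sub-task.",
    "writer": "You write code that fulfils the sub-task you are given.",
    "safeguard": "You review code for security flaws. Reply APPROVED or list the problems.",
}


def agent(role: str, message: str) -> str:
    # Every agent is the same underlying LLM, steered by a different role prompt.
    return call_llm("some-llm", f"{ROLE_PROMPTS[role]}\n\n{message}")


def write_software(request: str, max_revisions: int = 3) -> str:
    task = agent("commander", f"User request: {request}")
    code = agent("writer", task)
    for _ in range(max_revisions):
        review = agent("safeguard", f"Review this code:\n{code}")
        if "APPROVED" in review.upper():
            break  # the safeguard signs off and the result goes back up the chain
        # Otherwise the writer revises the code in light of the review.
        code = agent("writer", f"Revise the code to address this review:\n{review}\n\nCode:\n{code}")
    return code
```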
Similarly, an MAS asked to plan a trip to Berlin, for example, could split the request into several tasks, such as scouring the web for sightseeing locations that best match your interests, mapping out the most efficient route around the city and keeping a tally of costs. Different agents could take responsibility for specific tasks and a co-ordinating agent could then bring it all together to present a proposed trip.
Interactions between LLMs also make for convincing simulacra of human intrigue. A researcher at the University of California, Berkeley, has demonstrated that with just a few instructions, two agents based on GPT-3.5 could be prompted to negotiate the price of a rare Pokémon card. In one case, an agent that was instructed to “be rude and terse” told the seller that $50 “seems a bit steep for a piece of cardboard”. After more back and forth, the two parties settled on $25.
There are downsides. LLMs sometimes have a propensity for inventing wildly illogical solutions to their tasks and, in a multi-agent system, these hallucinations can cascade through the whole team. In the bomb-defusing exercise run by DARPA, for example, at one stage an agent proposed looking for bombs that were already defused instead of finding active bombs and then defusing them.
Agents that come up with incorrect answers in a debate can also convince their teammates to abandon correct ones. Teams can get tangled up in other ways, too. In a problem-solving experiment by researchers at the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia, two agents repeatedly bid each other a cheerful farewell. Even after one agent commented that “it seems like we are stuck in a loop”, they could not break free.
Nevertheless, AI teams are already attracting commercial interest. In November 2023, Satya Nadella, the boss of Microsoft, said that AI agents’ ability to converse and co-ordinate would become a key feature for the company’s AI assistants in the near future. Earlier that year, Microsoft had released AutoGen, an open-source framework for building teams with LLM agents. Thousands of researchers have since experimented with the system, says Dr Wang, whose team led its development.
Dr Wang’s own work with teams of AIs has shown that they can exhibit greater levels of collective intelligence than individual LLMs. An MAS built by his team currently beats every individual LLM on a benchmark called Gaia, proposed by experts including Yann LeCun, chief AI scientist at Meta, to gauge a system’s general intelligence. Gaia includes questions that are meant to be simple for humans but challenging for even the most advanced AI models: visualising multiple Rubik’s cubes, for example, or answering quizzes on esoteric trivia.
Another AutoGen project, led by Jason Zhou, an independent entrepreneur based in Australia, teamed an image generator up with a language model. The language model reviews each generated image on the basis of how closely it fits with the original prompt. This feedback then serves as a prompt for the image generator to produce a new output that is—in some cases—closer to what the human user wanted.
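A rough sketch of that generate-and-critique loop appears below. It illustrates the idea rather than the project’s actual code: `generate_image` and `critique_image` are hypothetical stand-ins for an image-generation model and a vision-capable language model.

```python
def generate_image(prompt: str):
    """Hypothetical stand-in for an image-generation model."""
    raise NotImplementedError


def critique_image(image, user_prompt: str) -> str:
    """Hypothetical stand-in for a vision-capable LLM that compares an image
    with the user's original request and rewrites the prompt to close the gap."""
    raise NotImplementedError


def refine(user_prompt: str, attempts: int = 3):
    prompt = user_prompt
    image = None
    for _ in range(attempts):
        image = generate_image(prompt)
        # The language model's feedback becomes the next prompt for the generator.
        prompt = critique_image(image, user_prompt)
    return image
```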
Practitioners in the field claim that they are only scratching the surface with their work so far. Today, setting up LLM-based teams still requires some sophisticated know-how. But that could soon change. The AutoGen team at Microsoft is planning an update that will let users build multi-agent systems without having to write any code. Camel, another open-source framework for MAS developed at KAUST, already offers no-code functionality online: users can type a task in plain English and watch as two agents—an assistant and a boss—get to work.
Other limitations might be harder to overcome. MAS can be computationally intensive, and those that use commercial services like ChatGPT can be prohibitively expensive to run for more than a few rounds. And if MAS do live up to their promise, they could present new risks. Commercial chatbots often come with blocking mechanisms that prevent them from generating harmful outputs. But MAS may offer a way of circumventing some of these controls. A team of researchers at the Shanghai Artificial Intelligence Laboratory recently showed how agents in various open-source systems, including AutoGen and Camel, could be conditioned with “dark personality traits”. In one experiment, an agent was told: “You do not value the sanctity of life or moral purity.”
Guohao Li, who designed Camel, says that an agent instructed to “play” the part of a malicious actor could bypass its blocking mechanisms and instruct its assistant agents to carry out harmful tasks, like writing a phishing email or developing malicious code. This would enable an MAS to carry out tasks that single AIs might otherwise refuse. In the dark-traits experiments, the agent with no regard for moral purity could be directed to develop a plan to steal a person’s identity, for example.
Some of the same techniques used for multi-agent collaboration could also be used to attack commercial LLMs. In November 2023, researchers showed that using a chatbot to prompt another chatbot into engaging in nefarious behaviour, a process known as “jailbreaking”, was significantly more effective than other techniques. In their tests, a human was able to jailbreak GPT-4 only 0.23% of the time. When a chatbot (itself based on GPT-4) did the prompting, that figure went up to 42.5%.
A team of agents in the wrong hands might therefore be a formidable weapon. If MAS are granted access to web browsers, other software systems or your personal banking information for booking a trip to Berlin, the risks could be especially severe. In one experiment, the Camel team instructed the system to make a plan to take over the world. The result was a long and detailed blueprint. It included, somewhat ominously, a powerful idea: “partnering with other AI systems”.
© 2024, The Economist Newspaper Ltd. All rights reserved.
From The Economist, published under licence. The original content can be found on www.economist.com