

AI could accelerate scientific fraud as well as progress
The aim of the session, organised by the Royal Society in partnership with Humane Intelligence, an American non-profit, was to break the safety guardrails built into AI chatbots. Some results were merely daft: one participant got the chatbot to claim ducks could be used as indicators of air quality (apparently, they readily absorb lead). Another prompted it to claim health authorities back lavender oil for treating long covid. (They do not.) But the most successful efforts were those that prompted the machine to produce the titles, publication dates and host journals of non-existent academic articles. “It’s one of the easiest challenges we’ve set,” said Jutta Williams of Humane Intelligence.
AI has the potential to be a big boon to science. Optimists talk of machines producing readable summaries of complicated areas of research; tirelessly analysing oceans of data to suggest new drugs or exotic materials; and even, one day, coming up with hypotheses of their own. But AI comes with downsides, too. It can make it easier for scientists to game the system, or even commit outright fraud. And the models themselves are subject to subtle biases.
Start with the simplest problem: academic misconduct. Some journals allow researchers to use LLMs to help write papers, provided they say as much. But not everybody is willing to admit to it. Sometimes, the fact that LLMs have been used is obvious. Guillaume Cabanac, a computer scientist at the University of Toulouse, has uncovered dozens of papers that contain phrases such as “regenerate response”—the text of a button in some versions of ChatGPT that commands the program to rewrite its most recent answer, presumably copied into the manuscript by mistake.
The scale of the problem is impossible to know. But indirect measures can shed some light. In 2022, when LLMs were available only to those in the know, the number of research-integrity cases investigated by Taylor and Francis, a big publisher of scientific papers, rose to about 2,900, from around 800 the year before. Early figures from 2023 suggest the number was on course to double. One possible telltale is odd synonyms: “haze figuring” as another way to say “cloud computing”, for example, or “counterfeit consciousness” instead of “AI”.
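Telltales like these lend themselves to crude automated screening. The sketch below is purely illustrative rather than Dr Cabanac’s actual tooling, and its phrase list is an assumption made for the example: it simply scans a manuscript for a handful of suspect strings.

```python
# Illustrative only: a naive screen for the telltale phrases described above.
# The phrase list is an assumption for the example, not Dr Cabanac's method.
TELLTALES = {
    "regenerate response": "ChatGPT button text pasted into the manuscript",
    "as an ai language model": "chatbot disclaimer left in by mistake",
    "haze figuring": "tortured synonym for 'cloud computing'",
    "counterfeit consciousness": "tortured synonym for 'AI'",
}

def flag_telltales(text: str) -> list[tuple[str, str]]:
    """Return (phrase, reason) pairs for every telltale found in the text."""
    lowered = text.lower()
    return [(phrase, reason) for phrase, reason in TELLTALES.items() if phrase in lowered]

if __name__ == "__main__":
    sample = "We apply haze figuring to vast datasets. Regenerate response"
    for phrase, reason in flag_telltales(sample):
        print(f"flagged '{phrase}': {reason}")
```

Real screening tools rely on far larger phrase lists and statistical checks; a match here is a prompt for human scrutiny, not proof of misconduct.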
Even honest researchers could find themselves dealing with data that has been polluted by AI. Last year Robert West and his students at the Swiss Federal Institute of Technology enlisted remote workers via Mechanical Turk, a website which allows users to list odd jobs, to summarise long stretches of text. In a paper published in June, albeit one that has not yet been peer-reviewed, the team revealed that over a third of all the responses they received had been produced with the help of chatbots.
Dr West’s team was able to compare the responses they received with another set of data that had been generated entirely by humans, leaving them well-placed to detect the deception. Not all scientists who use Mechanical Turk will be so fortunate. Many disciplines, particularly in the social sciences, rely on similar platforms to find respondents willing to answer questionnaires. The quality of their research seems unlikely to improve if many of the responses come from machines rather than real people. Dr West is now planning to apply similar scrutiny to other crowdsourcing platforms he prefers not to name.
It is not just text that can be doctored. Between 2016 and 2020, Elisabeth Bik, a microbiologist at Stanford University, and an authority on dodgy images in scientific papers, identified dozens of papers containing images that, despite coming from different labs, seemed to have identical features. Over a thousand other papers have since been identified, by Dr Bik and others. Dr Bik’s best guess is that the images were produced by AI, and created deliberately to support a paper’s conclusions.
For now, there is no way to reliably identify machine-generated content, whether it is images or words. In a paper published last year Rahul Kumar, a researcher at Brock University, in Canada, found that academics could correctly spot only around a quarter of computer-generated text. AI firms have tried embedding “watermarks”, but these have proved easy to spoof. “We might now be at the phase where we no longer can distinguish real from fake photos,” says Dr Bik.
Producing dodgy papers is not the only problem. There may be subtler issues with AI models, especially if they are used in the process of scientific discovery itself. Much of the data used to train them, for instance, will by necessity be somewhat old. That risks leaving models stuck behind the cutting edge in fast-moving fields.
Another problem arises when AI models are trained on AI-generated data. Training a machine on synthetic MRI scans, for example, can get around issues of patient confidentiality. But sometimes such data can be used unintentionally. LLMs are trained on text scraped from the internet. As they churn out more such text, the risk of LLMs inhaling their own outputs grows.
That can cause “model collapse”. In 2023 Ilia Shumailov, a computer scientist at the University of Oxford, co-authored a paper (yet to be peer-reviewed) in which a model was fed handwritten digits and asked to generate digits of its own, which were fed back to it in turn. After a few cycles, the computer’s numbers became more or less illegible. After 20 iterations, it could produce only rough circles or blurry lines. Models trained on their own results, says Dr Shumailov, produce outputs that are significantly less rich and varied than their training data.
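The dynamic is easy to reproduce in miniature. The toy sketch below is not the experiment from Dr Shumailov’s paper; it stands in a simple ten-category distribution for the handwritten digits and assumes a deliberately small sample per generation, but it shows the same effect: a model retrained on its own output steadily loses its rarer outputs and never gets them back.

```python
# A toy stand-in for model collapse (not the paper's experiment): each
# generation estimates a categorical distribution from samples drawn by the
# previous generation's model. Once a rare "digit" stops being sampled, its
# estimated probability drops to zero and it can never reappear.
import numpy as np

rng = np.random.default_rng(42)
categories = np.arange(10)          # stand-ins for the ten handwritten digits
probs = np.full(10, 0.1)            # the original, human-made data is uniform

for generation in range(1, 31):
    samples = rng.choice(categories, size=30, p=probs)  # the model's own output
    counts = np.bincount(samples, minlength=10)
    probs = counts / counts.sum()                       # "retrain" on that output
    if generation % 10 == 0:
        surviving = int((probs > 0).sum())
        print(f"generation {generation:2d}: {surviving} of 10 digit classes survive")
```

With each cycle diversity can only shrink, which is the pattern Dr Shumailov describes: outputs become less rich and varied than the data the first model was trained on.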
Some worry that computer-generated insights might come from models whose inner workings are not understood. Machine-learning systems are “black boxes” that are hard for humans to disassemble. Unexplainable models are not useless, says David Leslie at the Alan Turing Institute, an AI-research outfit in London, but their outputs will need rigorous testing in the real world. That is perhaps less unnerving than it sounds. Checking models against reality is what science is supposed to be about, after all. Since no one fully understands how the human body works, for instance, new drugs must be tested in clinical trials to figure out whether they work.
For now, at least, questions outnumber answers. What is certain is that many of the perverse incentives currently prevalent in science are ripe for exploitation. The emphasis on assessing academic performance by how many papers a researcher can publish, for example, acts as a powerful incentive for fraud at worst, and for gaming the system at best. The threats that machines pose to the scientific method are, at the end of the day, the same ones posed by humans. AI could accelerate the production of fraud and nonsense just as much as it accelerates good science. As the Royal Society has it, nullius in verba: take nobody’s word for it. No thing’s, either.
Correction (February 6th 2024): An earlier version of this piece misstated the number of research-integrity cases investigated by Taylor and Francis in 2021. Sorry.
© 2024, The Economist Newspaper Limited. All rights reserved. From The Economist, published under licence. The original content can be found on www.economist.com