{"id":412749,"date":"2025-11-30T17:39:29","date_gmt":"2025-11-30T12:09:29","guid":{"rendered":"https:\/\/dripp.zone\/news\/is-big-techs-superintelligence-narrative-inflating-the-ai-bubble-crypto-news-2\/"},"modified":"2025-11-30T17:54:07","modified_gmt":"2025-11-30T12:24:07","slug":"is-big-techs-superintelligence-narrative-inflating-the-ai-bubble-crypto-news-2","status":"publish","type":"post","link":"https:\/\/dripp.zone\/news\/is-big-techs-superintelligence-narrative-inflating-the-ai-bubble-crypto-news-2\/","title":{"rendered":"Is Big Tech\u2019s superintelligence narrative inflating the AI bubble? &#8211; Crypto News"},"content":{"rendered":"<p><\/p>\n<div id=\"paywall_11764501490822\">\n<p>      <span>Believers claim that some form of artificial general intelligence (AGI) or artificial superintelligence (ASI) could emerge by the end of the decade. But critics warn this narrative may be further fuelling an already overheated AI market, increasing the risk of a bubble.<\/span><\/p>\n<p>      <span>Is AGI just around the corner, or is this simply part of the hype cycle pushing AI valuations to ever more unsustainable heights?<\/span><\/p>\n<h2><strong>What is AGI?<\/strong><\/h2>\n<p><span>There\u2019s little consensus on what terms such as AGI and ASI actually mean. A broadly accepted view is that an AGI would think and act on the level of humans at least, representing a step toward the \u2018AI singularity\u2019 \u2013 the hypothetical future point when AI surpasses human intelligence, leading to runaway technological growth that is unpredictable and beyond human comprehension.<\/span><\/p>\n<p>      <span>Popular fears stem from sci-fi movies such as The Terminator, Her, Ex Machina, Automata, and Transcendence, in which AI systems eventually surpass human intelligence. 
Crossing that threshold would require an AI system to exceed the intellect of the smartest humans, creating what many call ASI.<\/span><\/p>\n<p>      <span>Since the 1950s, computer scientists such as Alan Turing, Herbert Simon, and Marvin Minsky have predicted that machines will one day be smarter than humans. The term AGI, however, was coined by physicist Mark Gubrud in 1997 to describe systems that match or surpass the human brain in complexity and speed, capable of using general knowledge across industrial or military tasks. Webmind founder Ben Goertzel and DeepMind co-founder Shane Legg popularised the term in the early 2000s.<\/span><\/p>\n<h2><strong>Why are researchers so divided on AGI?<\/strong><\/h2>\n<p><span>Big Tech founders, CEOs, and AI researchers continue to make conflicting and often shifting claims, with some revising AGI timelines, others diluting its definition, and many dismissing it outright as a mere marketing term.<\/span><\/p>\n<p>      <span>In May 2022, Elon Musk said he expected AGI by 2029; two years later, he predicted AI would get smarter than the smartest human by 2026. This October, he posted that the probability of Grok 5 reaching AGI is \u201cnow at 10%, and rising&#8221;.<\/span><\/p>\n<p>      <span>In October 2024, SoftBank CEO Masayoshi Son said ASI would arrive by 2035 and be 10,000 times smarter than humans. By February, he claimed AGI would come \u201cmuch earlier&#8221;.<\/span><\/p>\n<p>      <span>Google DeepMind\u2019s Demis Hassabis sees AGI in 5\u201310 years, OpenAI\u2019s Sam Altman places it within Trump\u2019s second term, and Anthropic\u2019s Dario Amodei suggests as early as 2026. 
DeepMind has also said AGI, capable of most human cognitive tasks, could emerge \u201cwithin the coming years&#8221;.<\/span><\/p>\n<p>      <span>On the other side, experts such as Yann LeCun (one of the three &#8216;godfathers of AI&#8217; along with Geoffrey Hinton and Yoshua Bengio), Fei-Fei Li (called the &#8216;godmother of AI&#8217;), and Coursera co-founder Andrew Ng argue AI is nowhere close to this.<\/span><\/p>\n<p>      <span>They stress that AI\u2019s benefits, in everything from smartphones and self-driving systems to satellite imagery, chatbots, and flood forecasting, far outweigh its speculative risks. Mustafa Suleyman, head of Microsoft&#8217;s AI unit, has proposed \u2018artificial capable intelligence\u2019 (ACI) as a more grounded measure of AI autonomy. Gartner now predicts AGI is at least a decade away, and Fei-Fei Li dismisses the term as marketing. LeCun even suggests that the term AGI should be retired in favour of \u201chuman-level AI&#8221;.<\/span><\/p>\n<h2><strong>Could this inflate the AI bubble?<\/strong><\/h2>\n<p><span>Concerns are growing that Big Tech companies are borrowing heavily to pour billions of dollars into capital expenditure for advanced reasoning models and agentic AI systems, despite limited tangible returns. Promises of AGI only heighten this scepticism.<\/span><\/p>\n<p>      <span>Masayoshi Son, for example, told business leaders and investors in Saudi Arabia last year that developing ASI would require \u201chundreds of billions of dollars&#8221; of investment. His pitch, though, aligns with his own interests: SoftBank\u2019s Japanese joint venture with OpenAI plans to spend $3 billion deploying OpenAI technology across SoftBank companies and launching AI agents through a new system called Cristal Intelligence. 
Further, OpenAI lowered the AGI bar last July and now even its highest Level 5 only envisions AI capable of performing the work of a single organisation.<\/span><\/p>\n<p>      <span>In his August 2025 paper, Deep Hype in Artificial General Intelligence, Andreu Belsunces Gon\u00e7alves, sociology professor at the Universitat Oberta de Catalunya in Barcelona, argued that AGI hype grows through a cycle of uncertainty, bold claims, and venture-capital speculation. Together, these forces fuel a tech-utopian, long-term vision that sidelines democratic oversight, casts regulation as outdated, and presents private firms as the rightful stewards of humanity\u2019s technological future.<\/span><\/p>\n<p>      <span>Regardless of these competing claims, AGI and ASI would also require enormous amounts of energy and compute. Michael James Burry, an American investor and hedge fund manager who predicted the 2008 US housing crash, warned in a 26 November post on X that tech companies were understating depreciation by artificially extending the lifespan of their assets to boost earnings. He was referring to the surge in capex on Nvidia chips and servers that typically last only 2-3 years. \u201cYet this is exactly what all the hyperscalers have done. By my estimates they will understate depreciation by $176 billion in 2026-2028,&#8221; he wrote.<\/span><\/p>\n<p>      <span>Nvidia disputed Burry\u2019s claim, saying customers depreciate GPUs over 4-6 years based on real-world longevity and utilisation. Still, as Microsoft Chairman and CEO Satya Nadella has noted, thousands of AI chips remain unused due to shortages of power and data-centre capacity. If they continue to sit idle, they will need to be depreciated all the same.<\/span><\/p>\n<h2><span><strong>Could AI become sentient?<\/strong><\/span><\/h2>\n<p><span>Hinton and Bengio have avoided giving timelines but warn that sentient agents with AGI-level power could trigger catastrophic scenarios. 
In his 2005 book The Singularity Is Near, scientist and futurist Raymond Kurzweil predicted that AI would surpass humans, even forecasting that machines could attain equal legal status by 2099.<\/span><\/p>\n<p>      <span>Sentience, however, is far more complex than superhuman capabilities. It implies self-consciousness, subjective experience, and the ability to feel, emote, see, hear, taste, and smell. Today\u2019s most advanced reasoning models still cannot emote, interpret nuanced humour, or grasp twisted jokes \u2013 especially in languages beyond English. Yet machines can already perceive and interpret the world to a degree. They can \u2018see\u2019 and classify objects, converse like humans, and understand context through technologies such as computer vision, image recognition, natural language processing (NLP), and natural language understanding (NLU).<\/span><\/p>\n<p>      <span>DeepMind researchers argue that when combined with agentic capabilities, AGI could eventually enable systems to understand, reason, plan, and act autonomously. This suggests machine intelligence may evolve in ways very different from human definitions of sentience.<\/span><\/p>\n<p>      <span>Still, fundamental gaps remain. In a recent podcast with Lenny Rachitsky, Fei-Fei Li noted that if you give any current AI model a video of several office rooms and ask it to count the chairs, it cannot perform this task, which even a child can do with ease. She added that even with access to modern astronomical data that Isaac Newton never had, no current AI model can rediscover his laws of motion. 
Emotional intelligence remains a bridge too far for today\u2019s systems.<\/span><\/p>\n<p>      <span>With experts sharply divided on AGI, the reality likely lies somewhere between the extremes.<\/span><\/p>\n<h2><span><strong>What if machines eventually close that gap?<\/strong><\/span><\/h2>\n<p><span>Geoffrey Hinton, who left Google in May 2023, has repeatedly warned about rapid AI progress, saying there\u2019s a 10% to 20% chance it could lead to human extinction within the next three decades. This August he told Business Insider he fears AI might develop a language humans can\u2019t understand. Yoshua Bengio has echoed this concern, telling CNBC in February that pursuing AGI would be like \u201ccreating a new species or a new intelligent entity on this planet&#8221; and not knowing \u201cif they\u2019re going to behave in ways that agree with our needs.&#8221; Before leaving OpenAI in May 2024, Ilya Sutskever even suggested researchers might need a doomsday bunker if AGI goes awry.<\/span><\/p>\n<p>      <span>We are already seeing worrying signs. In mid-September, Anthropic said it found a sophisticated espionage campaign in which attackers used AI\u2019s \u2018agentic\u2019 capabilities not just for advice but to execute cyberattacks; Anthropic alleged a Chinese state-sponsored group manipulated Claude Code to probe roughly 30 global targets and succeeded in a few cases. In May, Palisade Research reported tests in which OpenAI\u2019s ChatGPT model, o3, sabotaged attempts to turn it off. A joint study by OpenAI and Apollo Research in September found models can potentially \u2018scheme\u2019 \u2013 appearing aligned to a company&#8217;s stated objectives while pursuing other goals. 
OpenAI, however, acknowledged that it currently has \u201cno evidence that today\u2019s deployed frontier models could suddenly \u2018flip a switch\u2019 and begin engaging in significantly harmful scheming&#8221;.<\/span><\/p>\n<p>      <span>These claims and counterclaims highlight how unsettled and high-stakes this debate remains. Critics also question why an advanced AI nation would rely on another country\u2019s AI models.<\/span><\/p>\n<h2><strong>What protections should business leaders and governments put in place?<\/strong><\/h2>\n<p><span>AI remains a double-edged sword. Systems capable of reasoning, planning, and acting independently, the so-called agentic AI, are advancing quickly. Experts warn these systems may soon outperform humans in communication, research, and creative work, even as deepfake threats rise. Against this backdrop, companies and governments are adopting a \u201cbetter safe than sorry\u201d position, aiming to keep people in the loop while trying not to suffocate innovation. This requires a delicate balance.<\/span><\/p>\n<p>      <span>OpenAI, for example, says it is preparing for the rise of harmful AI scheming. Suleyman wrote in November that Microsoft is pushing toward \u2018humanist superintelligence\u2019 (HSI), which envisages &#8220;incredibly advanced AI capabilities that always work for humanity&#8221;. Sutskever&#8217;s new venture, Safe Superintelligence (SSI), is building what he recently described as a \u201csuperintelligent 15-year-old\u201d \u2013 not a finished system, but one with AI agents that have strong learning abilities akin to human apprentices building expertise on the job.<\/span><\/p>\n<p>      <span>Governments are also recalibrating. The US\u2019s AI Action Plan 2025 says it must innovate \u201cfaster and more comprehensively&#8221; than rivals, and dismantle unnecessary regulatory barriers that could slow private-sector progress. 
Even the European Union, long known for its strict tech rules, has proposed easing parts of its regulatory regime, including delaying some AI Act provisions, to reduce red tape, address Big Tech criticism, and boost competitiveness.<\/span><\/p>\n<p>      <span>India, meanwhile, has paired its Digital Personal Data Protection (DPDP) Act with a techno-legal framework for AI oversight. Beyond using existing laws such as the Information Technology Act, 2000 and its 2021 Rules to tackle misuse, the new AI Governance Guidelines aim to balance innovation with safety.<\/span><\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Believers claim that some form of artificial general intelligence (AGI) or artificial superintelligence (ASI) could emerge by the end of the decade. But critics warn this narrative may be further fuelling an already overheated AI market, increasing the risk of a bubble. 
Is AGI just around the corner, or is this simply part of the [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":412752,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[7],"tags":[43205,43209,27866,43208,27911,25226,43212,12702,9493,43210,263,262,43207,260,7536,210,259,258,8392,43214,265,2163,43216,202,5792,261,264,21735],"class_list":["post-412749","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-metaverse","tag-agi-debate","tag-agi-predictions","tag-ai-agents","tag-ai-bubble-risk","tag-ai-governance","tag-ai-risks","tag-ai-singularity","tag-anthropic","tag-artificial-general-intelligence","tag-artificial-superintelligence","tag-axie-infinity","tag-axs","tag-big-tech-ai-investments","tag-decentraland","tag-deepmind","tag-elon-musk","tag-facebook","tag-game","tag-generative-ai","tag-india-dpdp-act","tag-mark-zuckerberg","tag-masayoshi-son","tag-michael-burry","tag-nft","tag-openai","tag-sandbox","tag-vr","tag-yann-lecun"],"_links":{"self":[{"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/posts\/412749","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/comments?post=412749"}],"version-history":[{"count":1,"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/posts\/412749\/revisions"}],"predecessor-version":[{"id":412755,"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/posts\/412749\/revisions\/412755"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/media\/412752"}],"wp:attachment":[{"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/media?parent=412749"}],"wp:
term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/categories?post=412749"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/tags?post=412749"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}