Why the AI race may never have a clear winner
AI, LLM, GPT, GPU: regardless of how you juggle these acronyms, it’s not wise to bet on any one company in the AI or generative AI race, simply because there are too many variables. New partnerships, acquisitions and investments; the emergence of new, disruptive technologies; and global regulations can all stifle the growth of promising technologies and scuttle mega deals.

Consider these developments. On 25 September, Amazon.com Inc. said it would invest up to $4 billion in Anthropic, making it a minority shareholder in the company and catapulting it into the generative AI race, dominated by the likes of OpenAI, Microsoft, Google, Meta, and Nvidia. Anthropic was founded in 2021 by Dario Amodei and others who were previously involved in the development of OpenAI’s GPT-3 language model. It recently debuted its new AI chatbot named Claude 2.

Last year, Google reportedly invested around $300 million in Anthropic, though the exact figure was never publicly disclosed. The investment gave Google a 10% stake in Anthropic and allowed the startup to scale its AI computing systems using Google Cloud and to train and deploy its AI models on Google’s infrastructure.

A few hours after Amazon’s investment announcement, OpenAI – not to be outdone – said it was starting to roll out new voice and image capabilities in ChatGPT.

And just a week ago, on 20 September, Amazon said its large language model (LLM) would make Alexa “more conversational with a new level of smart home intelligence”, a day after Google announced a series of updates to Bard that would give the chatbot access to its suite of tools including YouTube, Google Drive, and Google Flights.

Meta, meanwhile, is already working on a generative AI chatbot called ‘Gen AI Personas’ for younger users on Instagram and Facebook. It’s expected to be unveiled this week at the company’s two-day annual ‘Meta Connect’ event, which kicks off on Wednesday, according to The Wall Street Journal. Microsoft, likewise, has announced plans to embed its generative AI assistant ‘Copilot’ in many of its products.

The race to grab a slice of generative AI is critical for big tech companies, and with good reason. Generative AI models create new content, including text, images, audio, video, code, and simulations, in response to natural language ‘prompts’. One-third of respondents to McKinsey’s August Global Survey said their organizations already use generative AI in at least one business function, and 40% said their organizations will increase their overall investment in AI because of advances in generative AI.

Nigel Green, CEO of deVere Group, a financial consultancy, said investors should act now to have the “early advantage”. “Getting in early allows investors to establish a competitive advantage over latecomers. They can secure favourable entry points and lower purchase prices, maximizing their potential profits. This tech has the potential to disrupt existing industries or create entirely new ones. Early investors are likely to benefit from the exponential growth that often accompanies the adoption of such technologies. As these innovations gain traction, their valuations could skyrocket, resulting in significant returns on investment,” he noted.

Green cautioned, though, that while “AI is the big story currently, investors should, as always, remain diversified across asset classes, sectors and regions in order to maximise returns per unit of risk (volatility) incurred”.

That said, change appears to be the only constant in AI, which makes betting on any one company a futile exercise.

Google, for instance, was ideally positioned to win the AI race: its transformer architecture, which predicts the next word, sentence or even paragraph, is the foundation of all large language models, or LLMs. But when Microsoft partnered with OpenAI, many began to write off Google, a company whose stated mission was “AI first”. OpenAI’s generative pre-trained transformer (GPT) and the GPT-powered chatbot ChatGPT garnered more than 100 million users within two months of ChatGPT’s launch on 30 November 2022. That Bard was making blunder after blunder only added to Google’s woes and helped ChatGPT’s cause.

But just when many thought Google would fall behind in the AI race, the company said it would combine its AI research units, Google Brain and DeepMind. Google has also rejuvenated Bard and made it available in 180 countries, including India. Bard uses the Language Model for Dialogue Applications (LaMDA), which is built on the transformer architecture Google invented in 2017. It learns by “reading” trillions of words, which helps it pick up on the patterns that make up human language. Gemini, which is still in training, is being touted as Google’s “next-generation foundation model”.
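At the heart of every LLM, from LaMDA to GPT, is the same objective: given the words so far, predict the next one. The toy sketch below illustrates the idea with a simple bigram frequency model; it is purely illustrative and a far cry from a transformer, which learns such patterns across billions of parameters rather than in a lookup table.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: a bigram frequency model.
# Real LLMs use transformers trained on trillions of words, but the
# training objective is the same: predict the next token.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in training."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else "<unk>"

print(predict_next("the"))  # -> 'cat', the most frequent follower of 'the'
```

A transformer replaces this frequency table with learned attention over the entire preceding context, which is what lets it produce coherent sentences and paragraphs rather than single likely words.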

Amazon, too, is back in the limelight with the Anthropic deal, which, among other things, will make Amazon Web Services (AWS) the primary cloud provider for Anthropic. According to Andy Jassy, Amazon’s CEO, “Customers are quite excited about Amazon Bedrock, AWS’s new managed service that enables companies to use various foundation models to build generative AI applications on top of, as well as AWS Trainium, AWS’s AI training chip, and our collaboration with Anthropic should help customers get even more value from these two capabilities.”

“We are excited to use AWS’s Trainium chips to develop future foundation models,” said Dario Amodei, co-founder and CEO of Anthropic. AWS offers these custom chips, Trainium for training and Inferentia for inference, to its customers as an alternative to running their LLMs on Nvidia’s graphics processing units (GPUs), which are becoming increasingly expensive and difficult to procure.

To be sure, Amazon had already joined Microsoft and Google in the generative AI race with Bedrock, AWS’s managed service that helps companies use various foundation models to build generative AI applications on top of them. For instance, travel media company Lonely Planet is developing a generative AI solution on AWS “to help customers plan epic trips and create life-changing experiences with personalized travel itineraries”, according to Chris Whyde, senior vice president of Engineering and Data Science at Lonely Planet.

“By building with Claude 2 on Amazon Bedrock, we reduced itinerary generation costs by nearly 80% when we quickly created a scalable, secure AI platform that organizes our book content in minutes to deliver cohesive, highly accurate travel recommendations. Now we can re-package and personalize our content in various ways on our digital platforms, based on customer preference, all while highlighting trusted local voices—just like Lonely Planet has done for 50 years,” he added.

Likewise, Bridgewater Associates, an asset management firm for institutional investors, has partnered with the AWS Generative AI Innovation Center to use Amazon Bedrock and Anthropic’s Claude model “to create a secure large language model-powered Investment Analyst Assistant that will be able to generate elaborate charts, compute financial indicators, and create summaries of the results, based on both minimal and complex instructions”, according to Greg Jensen, co-CIO at Bridgewater Associates.

Amazon SageMaker, too, lets developers build, train, and deploy AI models, and lets customers add AI capabilities such as image recognition, forecasting, and intelligent search to applications with a simple application programming interface (API) call. Amazon Bedrock, in turn, makes LLMs from AI21 Labs, Anthropic, Stability AI, and Amazon itself accessible via an API. Further, while GitHub’s code-completion tool Copilot offers complete code snippets based on context, Amazon has announced a preview of Amazon CodeWhisperer, its own AI coding companion.
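In practice, Bedrock’s API access amounts to a few lines of code. The sketch below is a minimal, illustrative example of invoking a Claude model through Bedrock with the boto3 runtime client; the model ID, request schema, and region reflect what AWS and Anthropic documented at launch and may change, so treat the specifics as assumptions rather than a definitive integration.

```python
import json

import boto3

# Minimal sketch: invoke Anthropic's Claude through Amazon Bedrock.
# Assumes AWS credentials are configured and Bedrock model access has
# been granted in this region; the model ID and body schema follow the
# launch-era Claude format and may differ in current deployments.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "prompt": "\n\nHuman: Suggest a three-day Lisbon itinerary.\n\nAssistant:",
    "max_tokens_to_sample": 256,
})

response = client.invoke_model(
    modelId="anthropic.claude-v2",   # one of the Bedrock-hosted LLMs
    contentType="application/json",
    accept="application/json",
    body=body,
)

print(json.loads(response["body"].read())["completion"])
```

Swapping in a model from AI21 Labs, Stability AI, or Amazon’s own family is, in principle, a change of `modelId` and request schema rather than a new integration, which is Bedrock’s main selling point.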

Microsoft, for its part, has already invested $10 billion in OpenAI. It is building AI-powered ‘Copilots’ (its term for AI-powered assistants) to make coding more efficient with GitHub, increase work productivity with Microsoft 365, and improve search with Bing and Edge. Microsoft will extend the reach of its copilots to the next Windows 11 update and to apps such as Paint, Photos, and Clipchamp.

Bing will add support for OpenAI’s latest DALL-E 3 model, deliver more personalized answers based on search history, introduce a new AI-powered shopping experience, and roll out updates to Bing Chat Enterprise that make it more mobile and visual. Microsoft 365 Copilot will be available for enterprise customers from 1 November, along with a new AI assistant called Microsoft 365 Chat.

AI has made rapid progress over the past five years, primarily due to three factors: better algorithms, more high-quality data, and a phenomenal rise in computing power. Nvidia has benefitted from the third factor, powering AI models with its GPUs, which were traditionally used in gaming. OpenAI, for instance, used the H100’s predecessor, Nvidia’s A100 GPU, to train and run ChatGPT, and will use Nvidia GPUs on Microsoft’s Azure supercomputer to power its continuing AI research.

Meta, too, is a key technology partner of Nvidia and built Grand Teton, its Hopper-based AI supercomputing system, around Nvidia’s GPUs. Stability AI, a text-to-image generative AI startup, uses the H100 to accelerate its video, 3D and multimodal models.

Central processing units (CPUs) are also used to train AI models, but the parallel computing design of GPUs allows them to run many calculations simultaneously. The training of AI models involves millions of calculations, and parallel computing dramatically speeds up the process. This has transformed Nvidia from a gamer’s delight into the poster boy of AI and generative AI. It is now the darling of investors, who valued it at about $1.13 trillion as of 8 September, pegging the net worth of co-founder and CEO Jensen Huang at a little over $40 billion.
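The difference parallel hardware makes is easy to demonstrate. The illustrative sketch below computes the same matrix product two ways: one multiply-add at a time in Python loops, and in a single vectorized call that the underlying linear algebra library executes in parallel. GPUs push the same idea to thousands of concurrent operations.

```python
import time

import numpy as np

# Illustrative comparison: one multiply-add at a time versus a single
# vectorized call executed in parallel by the underlying BLAS library.
# GPUs extend this to thousands of concurrent operations, which is why
# they dominate AI training.
n = 100
a = np.random.rand(n, n)
b = np.random.rand(n, n)

# Sequential: three nested loops, one scalar multiply-add per step.
start = time.perf_counter()
c = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        for k in range(n):
            c[i, j] += a[i, k] * b[k, j]
loop_time = time.perf_counter() - start

# Parallel-friendly: the whole product in one vectorized call.
start = time.perf_counter()
c_fast = a @ b
vec_time = time.perf_counter() - start

assert np.allclose(c, c_fast)
print(f"loops: {loop_time:.3f}s, vectorized: {vec_time:.5f}s")
```

Training an AI model strings together an enormous number of such calculations, so hardware that runs them concurrently, as Nvidia’s GPUs do, shortens training from months to days.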

Intel does not want to be left behind in the AI race either, as was evident at Intel Innovation 2023, which began on 19 September in San Jose, California. And while Nvidia’s Huang has been promoting ‘accelerated computing’, a term that blends CPUs, GPUs and other processors, Intel CEO Pat Gelsinger is pushing ‘Siliconomy’, a term he coined to describe “an evolving economy enabled by the magic of silicon where semiconductors are essential to maintaining and enabling modern economies”.

That said, Nvidia is a fabless company that does not manufacture its own chips, while Intel operates foundries that make its own. Nevertheless, both terms carry the same message: AI is here to stay, and the companies designing or making chips will leave no stone unturned to get a bigger slice of the AI pie.

Microsoft, too, is reportedly working on AI chips that can be used to train LLMs and avoid relying on Nvidia. For now, though, Nvidia has stolen a march in this space. According to a 27 May report by investment bank JPMorgan, the company could garner about 60% of the AI market this year on the back of its GPUs and networking products.

Given these rapid developments, picking a clear winner only gets harder.
