Google’s AI edge set to catch rivals
New Delhi: Google on Wednesday announced its new foundational artificial intelligence (AI) model, Gemini, replacing its Pathways Language Model (PaLM), which has so far powered all of its generative AI applications and offerings.

The new offering comes in three sizes: Nano, Pro and Ultra. It debuted through Google’s AI chatbot, Bard, in English across 170 countries, including India.

However, at a media roundtable ahead of the announcement, Google executives, while claiming that the new AI model “exceeds state-of-the-art” AI models, did not reveal the size of the model in terms of the number of parameters, or how it compares with OpenAI’s Generative Pre-trained Transformer (GPT)-4 model that underpins the latest versions of ChatGPT.

Eli Collins, vice-president of product at Google DeepMind, said Gemini is “more efficient” than previous AI models developed by Google.

However, while Google said during the roundtable that Gemini outperforms OpenAI’s GPT-3.5, it did not say whether the model would outperform GPT-4 as well.

Gemini Nano will be the smallest, lightest version of the AI model, and will be used in on-device applications.

A version of Gemini Nano was piloted on Wednesday on Google’s newest smartphone, the Pixel 8 Pro, which is also available in India. Bard will be powered by Gemini Pro, but will support only English; support for other languages will be added subsequently.

Other products such as Search, Ads and Chrome will continue to use PaLM for the time being, and will be switched to Gemini eventually. Google also said a new version of its Bard chatbot, Bard Advanced, will be launched “early next year” and will use the Gemini Ultra model.

In a blog post, Demis Hassabis, chief executive of Google DeepMind, the company’s AI research division, said the company is already experimenting with Gemini in the Google Search Generative Experience (SGE), for which the company introduced Indic language support in August. Hassabis said Gemini has reduced the latency of search results by “40%” compared with PaLM-powered SGE outputs.

At the press briefing, Collins said the Gemini model was built to be natively multi-modal, meaning it “understands” text, images and audio as inputs and outputs by default. The new AI model is also trained to natively address hallucinations and issues of bias, factors that have plagued generative AI applications ever since OpenAI’s ChatGPT brought generative AI into the limelight.
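
For illustration, the snippet below is a minimal sketch of how a single prompt can mix text and an image when calling a natively multi-modal model. It assumes the publicly available google-generativeai Python SDK; the model name, file name and API key shown are illustrative placeholders, not details from the announcement.

import google.generativeai as genai   # Google's generative AI SDK (assumed available)
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")             # placeholder API key
model = genai.GenerativeModel("gemini-pro-vision")  # illustrative multi-modal model name

# A single prompt mixes plain text with an image, reflecting the "natively
# multi-modal" input described above; "chart.png" is a hypothetical local file.
image = Image.open("chart.png")
response = model.generate_content(
    ["Summarize what this chart shows in two sentences.", image]
)
print(response.text)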

Calling it “our most capable and general model yet,” Sundar Pichai, chief executive of Google and its parent firm Alphabet, added, “These are the first models of the Gemini era and the first realization of the vision we had when we formed Google DeepMind earlier this year. This new era of models represents one of the biggest science and engineering efforts we’ve undertaken as a company.”

Google claimed that Gemini’s overall performance “exceeds current state-of-the-art results on 30 of the 32 widely-used academic benchmarks used in LLM R&D.”

OpenAI’s GPT-4 model, however, continues to be regarded as the industry-leading standard. The model, which powers OpenAI’s enterprise offerings as well as ChatGPT, is widely seen as the most popular AI model in the market to date. Meta, too, unveiled its latest-generation AI model, Llama-2, in July this year. Microsoft is a significant investor in OpenAI, and has also partnered with Meta for Llama-2, making it a key strategic rival to Google in the generative AI race.

The launch of Gemini comes as industry stakeholders expect generative AI to mature in business use cases in the coming year. Last week, John Roese, global chief technology officer at Dell, said that more businesses will deploy mature generative AI applications, while the general hype and pilot projects around the technology could die down.

Google on Wednesday also announced the v5p, its latest-generation Tensor Processing Unit (TPU), a custom chip designed in-house by Google and specialized for AI use cases. The company called it the “most powerful, efficient and scalable TPU system to date.”

 
