{"id":392785,"date":"2025-05-22T01:07:18","date_gmt":"2025-05-21T19:37:18","guid":{"rendered":"https:\/\/dripp.zone\/news\/google-supercharges-gemini-with-new-ai-features-at-i-o-2025-13-updates-you-should-not-miss-crypto-news\/"},"modified":"2025-05-22T01:11:38","modified_gmt":"2025-05-21T19:41:38","slug":"google-supercharges-gemini-with-new-ai-features-at-i-o-2025-13-updates-you-should-not-miss-crypto-news","status":"publish","type":"post","link":"https:\/\/dripp.zone\/news\/google-supercharges-gemini-with-new-ai-features-at-i-o-2025-13-updates-you-should-not-miss-crypto-news\/","title":{"rendered":"Google supercharges Gemini with new AI features at I\/O 2025: 13 Updates you should not miss &#8211; Crypto News"},"content":{"rendered":"<p><\/p>\n<div>\n<div id=\"article-index-0\" class=\"storyParagraph\">\n<p>Google has rolled out a set of major updates to its Gemini app, aiming to widen its appeal and offer users more ways to interact with AI. Announced at the annual Google I\/O 2025 event on Tuesday, the updates include tools for visual help, media generation, and research \u2014 now available on both Android and iOS. Here is every AI update announced by the American tech giant:<\/p>\n<\/div>\n<div id=\"article-index-1\" class=\"storyParagraph\">\n<h2>Gemini Live<\/h2>\n<p>Gemini Live, which lets users share their camera feed or screen during conversations, is now available for free. The feature is designed to help users show what they mean instead of typing out questions. 
According to Google, conversations using <a rel=\"nofollow\" target=\"_blank\" class=\"backlink\" href=\"https:\/\/www.livemint.com\/technology\/tech-news\/io-2025-google-introduces-major-ai-upgrades-to-search-brings-ai-mode-to-users-11747764773367.html\" data-vars-page-type=\"story\" data-vars-link-type=\"Manual\">Gemini Live<\/a> tend to be longer than text-only chats, which the company attributes to the more interactive format.<\/p>\n<\/div>\n<div id=\"article-index-2\" class=\"storyParagraph\">\n<p>Google plans to integrate the tool more deeply into its wider ecosystem in the coming weeks. Users will be able to link Gemini Live with apps like Maps, Calendar, Tasks, and Keep. For example, asking about restaurant options may link directly to Google Maps, or a group chat could lead to an event being added to Calendar.<\/p>\n<\/div>\n<div id=\"article-index-3\" class=\"storyParagraph\">\n<h2>Imagen 4<\/h2>\n<p>Gemini now includes <a rel=\"nofollow\" target=\"_blank\" class=\"backlink\" href=\"https:\/\/www.livemint.com\/technology\/tech-news\/io-2025-google-introduces-major-ai-upgrades-to-search-brings-ai-mode-to-users-11747764773367.html\" data-vars-page-type=\"story\" data-vars-link-type=\"Manual\">Imagen 4<\/a>, a new image generation model that supports improved visual detail and better text rendering within images. Users can create graphics and visuals for various uses, including presentations and social media posts.<\/p>\n<\/div>\n<div id=\"article-index-4\" class=\"storyParagraph\">\n<h2>Veo 3<\/h2>\n<p>For video creation, <i>Veo 3<\/i> is being introduced. It supports text-to-video generation and can also add ambient sounds and basic character dialogue. 
Veo 3 is currently available only to <a rel=\"nofollow\" target=\"_blank\" class=\"backlink\" href=\"https:\/\/www.livemint.com\/technology\/tech-news\/google-io-event-2025-live-updates-latest-expected-releases-developer-conference-android-16-gemini-sundar-pichai-ai-11747752941230.html\" data-vars-page-type=\"story\" data-vars-link-type=\"Manual\"><i>Google AI Ultra<\/i><\/a> subscribers in the U.S., limiting access for international users or those on the free plan.<\/p>\n<\/div>\n<div id=\"article-index-5\" class=\"storyParagraph\">\n<h2>Deep Research<\/h2>\n<p>The Deep Research feature now allows users to upload personal files \u2014 such as PDFs or images \u2014 to be included in AI-generated reports. The goal is to provide more personalised and context-rich results by combining private and public data sources. <a rel=\"nofollow\" target=\"_blank\" class=\"backlink\" href=\"https:\/\/www.livemint.com\/technology\/tech-news\/google-io-event-2025-live-updates-latest-expected-releases-developer-conference-android-16-gemini-sundar-pichai-ai-11747752941230.html\" data-vars-page-type=\"story\" data-vars-link-type=\"Manual\">Google<\/a> has announced plans to expand this functionality to include content from Google Drive and Gmail in the near future.<\/p>\n<\/div>\n<div id=\"article-index-6\" class=\"storyParagraph\">\n<h2>Project Astra<\/h2>\n<p>Project Astra showcases the real-time capabilities of Google\u2019s Gemini models, with the initial features now integrated into <a rel=\"nofollow\" target=\"_blank\" class=\"backlink\" href=\"https:\/\/www.livemint.com\/technology\/tech-news\/google-unveils-ai-ultra-and-ai-pro-new-ai-subscription-plans-at-i-o-2025-price-benefits-availability-and-more-11747770742467.html\" data-vars-page-type=\"story\" data-vars-link-type=\"Manual\">Gemini Live<\/a>. This upgraded version can actively use a device\u2019s camera to interpret on-screen content in real time. 
Among the latest improvements are a more natural, expressive voice powered by native audio generation, enhanced memory functionality, and advanced computer control features.<\/p>\n<\/div>\n<div id=\"article-index-7\" class=\"storyParagraph\">\n<p>During the keynote at Google I\/O 2025, a live demonstration highlighted Gemini Live\u2019s ability to interact fluidly with users\u2014responding with expressive speech, handling interruptions seamlessly, and continuing conversations without losing context. It also showcased multitasking abilities such as making business calls, scrolling through documents, and browsing the web, all in <a rel=\"nofollow\" target=\"_blank\" class=\"backlink\" href=\"https:\/\/www.livemint.com\/technology\/tech-news\/google-unveils-ai-ultra-and-ai-pro-new-ai-subscription-plans-at-i-o-2025-price-benefits-availability-and-more-11747770742467.html\" data-vars-page-type=\"story\" data-vars-link-type=\"Manual\">real time<\/a>.<\/p>\n<\/div>\n<div id=\"article-index-8\" class=\"storyParagraph\">\n<h2>Google Flow<\/h2>\n<p>Google Flow is an AI-powered filmmaking tool designed for creatives to effortlessly generate cinematic videos. It combines Google&#8217;s advanced models \u2014 Veo (video generation), Imagen (image generation), and Gemini (natural language understanding) \u2014 to help users turn everyday language prompts into high-quality visual scenes. Flow enables consistent character and scene creation, allowing seamless integration across multiple clips. 
It&#8217;s built to make storytelling faster, more intuitive, and visually stunning.<\/p>\n<\/div>\n<div id=\"article-index-9\" class=\"storyParagraph\">\n<h2>Agent Mode<\/h2>\n<p>At the Google I\/O 2025 event, <a rel=\"nofollow\" target=\"_blank\" class=\"backlink\" href=\"https:\/\/www.livemint.com\/technology\/tech-news\/io-2025-google-introduces-major-ai-upgrades-to-search-brings-ai-mode-to-users-11747764773367.html\" data-vars-page-type=\"story\" data-vars-link-type=\"Manual\">Sundar Pichai<\/a> unveiled a new feature called Agent Mode for the Gemini app. This upcoming experimental tool, initially available to subscribers, is designed to handle complex tasks and planning on the user\u2019s behalf. With Agent Mode, Gemini moves beyond simple responses to take on more autonomous actions\u2014organising, scheduling, and executing multi-step tasks. Google also announced that these agentic AI capabilities will extend to Chrome, Search, and the Gemini platform, marking a significant step toward AI that can proactively manage tasks rather than just react to prompts.<\/p>\n<\/div>\n<div id=\"article-index-10\" class=\"storyParagraph\">\n<h2>Google Jules<\/h2>\n<p>Jules is an autonomous, agentic coding assistant that works directly with your codebase. Unlike traditional code-completion tools, Jules clones your repository into a secure <a rel=\"nofollow\" target=\"_blank\" class=\"backlink\" href=\"https:\/\/www.livemint.com\/technology\/tech-news\/io-2025-google-introduces-major-ai-upgrades-to-search-brings-ai-mode-to-users-11747764773367.html\" data-vars-page-type=\"story\" data-vars-link-type=\"Manual\">Google Cloud VM<\/a>, understands your project context, and independently handles tasks like writing tests, fixing bugs, building features, and more. It works asynchronously, so you can focus elsewhere while it completes tasks and returns with a detailed plan, reasoning, and code changes. 
Jules is now in public beta and prioritises privacy, keeping your code secure and isolated.<\/p>\n<\/div>\n<div id=\"article-index-11\" class=\"storyPage_alsoRead__ZE9yL\"><strong>Also Read<\/strong> | <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/www.livemint.com\/technology\/io-2025-5-new-gemini-features-google-announced-all-you-need-to-know-11747769334774.html\">Google Gemini\u2019s top 5 new features from I\/O 2025<\/a><\/div>\n<div id=\"article-index-11\" class=\"storyPage_alsoRead__ZE9yL\"><strong>Also Read<\/strong> | <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/www.livemint.com\/technology\/tech-news\/io-2025-google-introduces-major-ai-upgrades-to-search-brings-ai-mode-to-users-11747764773367.html\">Google introduces major AI upgrades to Search, brings \u2018AI Mode\u2019<\/a><\/div>\n<div id=\"article-index-12\" class=\"storyParagraph\">\n<h2>AI Mode in Search<\/h2>\n<p>Google is rolling out a new feature called AI Mode, aimed at people who want a more advanced and interactive search experience. First tested in Labs, AI Mode is now available to everyone in the U.S., with a wider global rollout expected later. A new tab for AI Mode will soon appear in the Google app and on desktop.<\/p>\n<\/div>\n<div id=\"article-index-13\" class=\"storyParagraph\">\n<p><a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/www.livemint.com\/technology\/tech-news\/google-io-event-2025-live-updates-latest-expected-releases-developer-conference-android-16-gemini-sundar-pichai-ai-11747752941230.html\" data-vars-anchor-text=\"AI Mode\">AI Mode<\/a> uses something called a \u201cquery fan-out\u201d system. This means it breaks down your question into smaller parts and runs many searches at once, helping it dig deeper and return more useful and detailed answers from across the internet. 
It also uses Gemini 2.5, Google\u2019s most advanced AI model yet.<\/p>\n<\/div>\n<div id=\"article-index-14\" class=\"storyParagraph\">\n<p>With AI Mode, users can ask follow-up questions, get interactive links, and even use images or live video to search in real time. It is not just about answering questions anymore; Google wants to help people <i>do things<\/i>, from booking tickets to comparing data.<\/p>\n<\/div>\n<div id=\"article-index-15\" class=\"storyParagraph\">\n<h2>Real-time AI speech translation in Google Meet<\/h2>\n<p>Google has introduced a groundbreaking AI-powered speech translation feature in Google Meet, enabling real-time audio-to-audio translation during calls. Built on DeepMind\u2019s advanced AudioLM technology and integrated with the Gemini AI model, this system translates spoken language into a listener&#8217;s preferred language\u2014while preserving the speaker\u2019s original voice, tone, and emotional expression.<\/p>\n<\/div>\n<div id=\"article-index-16\" class=\"storyParagraph\">\n<p>Unlike traditional caption-based translation, this feature directly transforms speech, delivering natural-sounding audio in real time. Users hear the translated voice with subtle overlays of the original, enhancing clarity and maintaining conversational context. Though there is a slight delay for processing, the experience closely mimics having a live interpreter on the call.<\/p>\n<\/div>\n<div id=\"article-index-17\" class=\"storyParagraph\">\n<h2>Google Beam<\/h2>\n<p>Google Beam is a new 3D video communication platform that transforms regular 2D video calls into immersive 3D experiences. Announced at Google I\/O 2025, it uses multiple cameras and AI to create realistic, real-time 3D visuals with depth and eye contact. Powered by Google Cloud and designed for enterprise use, Beam also supports precise head tracking and is expected to feature real-time speech translation. 
It will roll out on HP devices later this year.<\/p>\n<\/div>\n<div id=\"article-index-18\" class=\"storyParagraph\">\n<h2>Gemma 3n<\/h2>\n<p>Gemma 3n is Google&#8217;s first open AI model designed specifically for on-device use, bringing fast, multimodal intelligence to phones, tablets, and laptops. Built on a new architecture developed with partners like Qualcomm, MediaTek, and Samsung, it powers real-time, private AI experiences without relying on the cloud. Gemma 3n also forms the foundation for the next generation of Gemini Nano, enabling developers to explore advanced AI directly on everyday devices.<\/p>\n<\/div>\n<div id=\"article-index-19\" class=\"storyParagraph\">\n<h2>Try-on<\/h2>\n<p>The &#8220;Try On&#8221; feature in Google Search allows users to see how clothes like shirts, dresses, pants, and skirts would look on them by uploading a full-length photo. Available through Search Labs in the U.S., the tool uses AI to generate a visual of the outfit on the user. It also lets users save or share the images for feedback before making a purchase.<\/p>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Google has rolled out a set of major updates to its Gemini app, aiming to widen its appeal and offer users more ways to interact with AI. 
Announced at the annual Google I\/O 2025 event on Tuesday, the updates include tools for visual help, media generation, and research \u2014 now available on both Android and [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":392790,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[8],"tags":[35871,35864,440,30753,35863,35877,35860,35855,35868,35856,27548,35854,27372,35875,35870,35865,188,183,185,186,35859,35867,35852,35881,35853,14932,35873,35882,35879,35880,35862,35850,29226,35866,33685,35857,35872,187,184,35874,34926,35861,35851,35869,189,35878,150,182,27578,190,35858,35876],"class_list":["post-392785","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-technology","tag-3d-video-google-beam","tag-agent-mode-gemini","tag-ai","tag-ai-coding-assistant","tag-ai-filmmaking-tool","tag-ai-in-fashion-shopping","tag-ai-in-google-drive","tag-ai-media-generation","tag-ai-mode-in-search","tag-ai-research-tools","tag-ai-updates","tag-ai-visual-help","tag-ai-powered-search","tag-ai-powered-try-on","tag-audiolm-deepmind","tag-autonomous-ai-agent","tag-blockchain-tech","tag-blockchain-technology","tag-crypto-technology","tag-cryptocurrency-technology","tag-deep-research-google","tag-gemini-2-5","tag-gemini-app-updates","tag-gemini-ecosystem-integration","tag-gemini-live","tag-gemini-nano","tag-gemma-3n","tag-google-ai-announcements","tag-google-ai-tools-2025","tag-google-ai-ultra-subscription","tag-google-flow","tag-google-gemini-app","tag-google-i-o-2025","tag-google-jules","tag-image-generation","tag-imagen-4","tag-immersive-video-calls","tag-metaverse-technology","tag-nft-technology","tag-on-device-ai","tag-project-astra","tag-real-time-ai-assistant","tag-real-time-translation","tag-real-time-translation-google-meet","tag-soul-bound-token","tag-sundar-pichai-ai-keynote","tag-tech","tag-technology","tag-text-to-video","tag-token-technology
","tag-veo-3","tag-virtual-try-on-google"],"_links":{"self":[{"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/posts\/392785","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/comments?post=392785"}],"version-history":[{"count":1,"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/posts\/392785\/revisions"}],"predecessor-version":[{"id":392791,"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/posts\/392785\/revisions\/392791"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/media\/392790"}],"wp:attachment":[{"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/media?parent=392785"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/categories?post=392785"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/tags?post=392785"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}