{"id":162836,"date":"2023-09-26T09:52:55","date_gmt":"2023-09-26T04:22:55","guid":{"rendered":"https:\/\/dripp.zone\/news\/?p=162836"},"modified":"2023-09-26T09:52:55","modified_gmt":"2023-09-26T04:22:55","slug":"chatgpt-can-now-talk-to-you-heres-how-to-use-the-newly-released-features-by-openai-crypto-news","status":"publish","type":"post","link":"https:\/\/dripp.zone\/news\/chatgpt-can-now-talk-to-you-heres-how-to-use-the-newly-released-features-by-openai-crypto-news\/","title":{"rendered":"ChatGPT can now talk to you. Here&#8217;s how to use the newly released features by OpenAI &#8211; Crypto News"},"content":{"rendered":"<p><\/p>\n<div id=\"mainArea\">\n<div class=\"FirstEle\">\n<p>      Microsoft-backed startup <a rel=\"nofollow noopener\" target=\"_blank\" class=\"manualbacklink\" href=\"https:\/\/www.livemint.com\/topic\/openai\">OpenAI<\/a> has announced new features for its generative AI-based chatbot <a rel=\"nofollow noopener\" target=\"_blank\" class=\"manualbacklink\" href=\"https:\/\/www.livemint.com\/topic\/chatgpt\">ChatGPT<\/a>. The chatbot now gets voice and image capabilities that will allow users to hear responses from ChatGPT in five different voices and to get responses about images they submit.<\/p>\n<\/p><\/div>\n<div class=\"paywall\" id=\"paywall_11695698817189\">\n<p>   Announcing the new features for the viral chatbot in a post on X, OpenAI wrote, \u201cChatGPT can now see, hear, and speak. 
Rolling out over next two weeks, Plus users will be able to have voice conversations with ChatGPT (iOS &amp; Android) and to include images in conversations (all platforms).&#8221;<\/p>\n<p>   \u201cThey (voice and image capabilities) offer a new, more intuitive type of interface by allowing you to have a voice conversation or show ChatGPT what you\u2019re talking about,&#8221; the Sam Altman-led company said in a subsequent blog post.<\/p>\n<h2>ChatGPT can now talk to you<\/h2>\n<p>ChatGPT can now answer queries in five different voices, which users can select according to their preferences. OpenAI says it worked with professional voice actors to create each voice, while using the company&#8217;s own speech recognition system, Whisper, to transcribe spoken words into text.<\/p>\n<p>   The new voice abilities of ChatGPT are powered by a new text-to-speech model that OpenAI claims can generate human-like audio from just text and a few seconds of sample speech, opening doors to many \u2018creative and accessibility-focused applications\u2019.<\/p>\n<p>   OpenAI is also collaborating with other companies to leverage this new technology. Spotify has partnered with the AI startup to translate podcasts into additional languages in the podcaster&#8217;s own voice.<\/p>\n<h2>ChatGPT can see<\/h2>\n<p>OpenAI is using the multimodal abilities of GPT-3.5 and GPT-4 to power ChatGPT&#8217;s image understanding. Users can now upload one or more images and ask ChatGPT questions about them, for example, to explore the contents of their fridge to plan a meal or to analyze a complex graph of work-related data.<\/p>\n<p>\n\t\tUpdated: 26 Sep 2023, 09:51 AM IST\n\t<\/p>\n<\/p><\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Microsoft-backed startup OpenAI announced new features for its generative AI-based chatbot ChatGPT. 
The chatbot now gets voice and image capabilities that will allow users to hear responses from ChatGPT in five different voices and to get responses about images they submit. In a post on X, informing about the new features for [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":162837,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[7],"tags":[263,262,5834,12700,12699,260,259,258,8392,265,202,5792,7124,261,264],"class_list":["post-162836","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-metaverse","tag-axie-infinity","tag-axs","tag-chatgpt","tag-chatgpt-voice-ai","tag-chatgpt-voice-assistant","tag-decentraland","tag-facebook","tag-game","tag-generative-ai","tag-mark-zuckerberg","tag-nft","tag-openai","tag-openai-chatgpt","tag-sandbox","tag-vr"],"_links":{"self":[{"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/posts\/162836","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/comments?post=162836"}],"version-history":[{"count":2,"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/posts\/162836\/revisions"}],"predecessor-version":[{"id":162839,"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/posts\/162836\/revisions\/162839"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/media\/162837"}],"wp:attachment":[{"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/media?parent=162836"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/categories?post=162836"},{"taxonomy":"post_tag","embeddable":true,"href":"ht
tps:\/\/dripp.zone\/news\/wp-json\/wp\/v2\/tags?post=162836"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}