OpenAI is rolling out long-awaited ‘advanced voice’ feature

2024-09-25 15:16

Tom Mitchelhill

OpenAI’s new Advanced Voice feature introduces five new voices, capable of remembering previous conversations and changing the topic of discussion mid-conversation.

Artificial intelligence firm OpenAI has begun rolling out its long-awaited “Advanced Voice” feature for select ChatGPT users.


“Advanced Voice is rolling out to all Plus and Team users in the ChatGPT app over the course of the week,” OpenAI said in a Sept. 24 post to X.


“While you’ve been patiently waiting, we’ve added Custom Instructions, Memory, five new voices, and improved accents,” the company added. “It can also say ‘Sorry I’m late’ in over 50 languages,” a nod to the delay in releasing the voice feature, which had originally been planned for earlier in the year.


Advanced Voice Mode is an upgrade built on ChatGPT’s latest model, GPT-4o. It allows for faster, more intuitive communication with the model and includes several more humanlike conversational improvements.


As part of the new feature, OpenAI unveiled five new voices, Arbor, Maple, Sol, Spruce and Vale, which join the existing Breeze, Juniper, Cove and Ember voice options.


Users on the ChatGPT Plus and Team tiers will gain staggered access to the new voices, which are designed to make conversations more humanlike, including by allowing users to interrupt the chatbot and switch topics mid-conversation.


The rollout also brings custom instructions and “memories” to voice conversations. Users can set custom instructions that tailor the chatbot to their preferences, and the chatbot can learn from and “remember” important details from previous audio conversations.


It doesn’t always work as intended, though. In an FAQ, OpenAI admitted that the conversation experience isn’t yet optimized for use with in-car Bluetooth or a speakerphone, and that ChatGPT can be interrupted by background noise.


Related:OpenAI’s current business model is ‘untenable’ — Report


Still, some users on X complained that the new voice options paled in comparison to the company’s controversial “Sky” voice, which was pulled after a legal dispute with actress Scarlett Johansson.


Johansson said OpenAI CEO Sam Altman approached her in 2023 to be the voice of ChatGPT, but she declined for “personal reasons.” Following the release of Sky as a voice for GPT-4o, she said she was “shocked, angered, and in disbelief” over the chatbot’s eerily similar vocal features.


Altman later scrapped the voice and has maintained that any resemblance was purely coincidental, despite having made a single-word post to X that read “Her,” a direct reference to the 2013 Spike Jonze film in which Johansson voiced an artificially intelligent operating system.


AI Eye: AI drone ‘hellscape’ plan for Taiwan, LLMs too dumb to destroy humanity
