OpenAI’s new GPT-4o lets people interact using voice or video in the same model
OpenAI unveiled GPT-4 Omni (GPT-4o) during its Spring Update on Monday morning in San Francisco. Chief Technology Officer Mira Murati and OpenAI staff showcased the company’s newest flagship model, which is capable of real-time verbal conversations with a friendly AI chatbot that convincingly speaks like a human. At times, GPT-4 Omni struggled to understand users’ intentions, but the model was fairly graceful in navigating the slip-ups. ChatGPT is multimodal, meaning users can prompt the chatbot with images and voice. ChatGPT Voice, available on iOS and Android phones, lets users hold conversations with ChatGPT, which can respond in one of five AI-generated voices.
People have expressed concerns about AI chatbots replacing or atrophying human intelligence. For example, chatbots can write an entire essay in seconds, raising concerns about students cheating and never learning how to write properly. These fears even led some school districts to block access when ChatGPT initially launched. There is a subscription option, ChatGPT Plus, that costs $20 per month. The paid tier offers extra perks, such as priority access to GPT-4o, DALL-E 3, and the latest upgrades.
OpenAI Launches GPT-4o and More Features for ChatGPT – CNET
Posted: Fri, 17 May 2024 07:00:00 GMT [source]
The result, the company’s demonstration suggests, is a conversational assistant much in the vein of Siri or Alexa but capable of fielding much more complex prompts. OpenAI CTO Mira Murati led the live demonstration of the new release one day before Google is expected to unveil its own AI advancements at its flagship I/O conference on Tuesday, May 14. In addition, although GPT-4o will generally be more cost-effective for new deployments, IT teams looking to manage existing setups might find it more economical to continue using GPT-4.
Controversy over GPT-4o’s voice capabilities
Free users will face message limits for GPT-4o, and after hitting those caps, they’ll be switched to GPT-4o mini. ChatGPT Plus users will have higher message limits than free users, and those on Team and Enterprise plans will have even fewer restrictions. Finally, GPT-5’s release could mean that GPT-4 will become more accessible and cheaper to use. As mentioned earlier, GPT-4’s high cost has turned away many potential users.
OpenAI is giving users their first access to GPT-4o’s updated, realistic audio responses. The alpha version is now available to a small group of ChatGPT Plus users, and the company says the feature will gradually roll out to all Plus users in the fall of 2024. The release follows controversy surrounding the voice’s similarity to Scarlett Johansson’s, which led OpenAI to delay its release. While GPT-4o’s audio abilities are impressive, Omni works across several mediums.
The full version of GPT-4o, used in ChatGPT Plus, responds faster than previous versions of GPT, is more accurate, and includes features such as advanced data analysis. GPT-4o can also create more detailed responses and is faster at tasks such as describing photos and writing image captions. And while GPT-3.5 was only trained on data up to January 2022, GPT-4o has been trained on data up to October 2023. Claude’s responses also tend to be more reserved than ChatGPT’s, reflecting Anthropic’s safety-centric ethos.
Former OpenAI Staffer Says the Company Is Breaking Copyright Law and Destroying the Internet
Google just unveiled its own Gemini Live AI assistant, which is multimodal with impressive voice and video capabilities. Check out our GPT-4o vs Gemini Live preview to see how these supercharged AI helpers stack up. Working in a similar way to human translators at global summits, ChatGPT acts as the middleman between two people speaking completely different languages.
GPT-4’s training data cuts off in September 2021, but GPT-4 Turbo is up to date as of April 2023. It can also take images and speech as input, making it multimodal. ChatGPT also includes an API that developers can use to integrate OpenAI’s LLMs into third-party software. It lacks a Save button, but users can copy and paste answers from ChatGPT into another application. It does have an Archive button that lists previous responses in ChatGPT’s left-hand pane for quick retrieval.
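As a minimal sketch of that API integration, the snippet below assembles the JSON body that OpenAI’s Chat Completions endpoint expects. The endpoint URL, model name, and message format follow OpenAI’s published API documentation; the helper name `build_chat_request` is our own, and no request is actually sent here.

```python
import json

# Minimal sketch of a Chat Completions request. In a real integration the
# payload would be POSTed to API_URL with an Authorization: Bearer <key> header.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(prompt, model="gpt-4o"):
    """Assemble the JSON body the Chat Completions endpoint expects."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

body = build_chat_request("Summarize GPT-4o's launch in one sentence.")
payload = json.dumps(body)  # serialized request body, ready to send
```

Official SDKs wrap this same request shape, so the raw-HTTP view is mainly useful for understanding what any third-party integration ultimately sends.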
Despite ChatGPT’s extensive abilities, other chatbots have advantages that might be better suited to your use case, including Copilot, Claude, Perplexity, Jasper, and more. Although ChatGPT gets the most buzz, other options are just as good, and might even be better suited to your needs. ZDNET has created a list of the best chatbots, all of which we have tested to identify the best tool for your requirements. Generative AI models of this type are trained on vast amounts of information from the internet, including websites, books, news articles, and more. With a subscription to ChatGPT Plus, you can access GPT-4, GPT-4o mini, or GPT-4o. Plus users also have priority access to GPT-4o, even at capacity, while free users get booted down to GPT-4o mini.
Media outlets had speculated that the launch would be a new AI-powered search product to rival Google, but Altman clarified that the release would not include a search engine. “Not GPT-5, not a search engine, but we’ve been hard at work on some new stuff we think people will love!” he wrote. The company said in its announcement that GPT-4o is 50% cheaper and twice as fast as GPT-4 Turbo. It’s making the new model available to all users, bringing “GPT-4 class intelligence” to free customers.
Although Claude has sufficient vision capabilities to analyze uploaded files, including images and PDFs, it does not support image generation, voice interaction or web browsing.
Both models are trained to generate natural-sounding text in response to users’ prompts, and they can engage in interactive, back-and-forth conversations, retaining memory and context to inform future responses. New features are coming to ChatGPT’s voice mode as part of the new model. The app will be able to act as a Her-like voice assistant, responding in real time and observing the world around you. The current voice mode is more limited, responding to one prompt at a time and working with only what it can hear.
AI models with more parameters have demonstrated greater information-processing ability, though in practice far fewer than 1.8 trillion parameters are actually in use at any one time. OpenAI plans to explore areas like developing more device-friendly model sizes, incorporating additional modalities (e.g., audio, video), and further investment in the agent platform layer. GPT-4 hasn’t performed particularly well in recent benchmark tests against newer models, including Claude 3 Opus and Google’s Gemini. In another step toward making AI more accessible, OpenAI announced a “refreshed” UI, which includes the ability to interact with ChatGPT on a more conversational level, as well as to share videos as a starting point. In his review of ChatGPT 4, Khan says it’s “noticeably smarter than its free counterpart. And for those who strive for accuracy and ask questions requiring greater computational dexterity, it’s a worthy upgrade.”
However, Altman believes that GPT-5 will significantly outperform its predecessor. Barret Zoph and Mark Chen, both researchers at OpenAI, walked through a number of applications for the new model. You could interrupt the model during its responses, and it would stop, listen, and adjust course. That said, some users may still prefer GPT-4, especially in business contexts. Because GPT-4 has been available for over a year now, it’s well tested and already familiar to many developers and businesses.
Voice will be more capable and will have some additional abilities
The singing voice was impressive and could be used to provide vocals for songs as part of an AI music model in the future. The hype is real, and there are nearly 40,000 people watching the live stream on YouTube, so hopefully we get something interesting. Little is known, but we do know it’s in the later stages of testing, with the possible plan being to pair ChatGPT with a web-crawler-based search. Ryan Morrison provides some great insight into what OpenAI will need to do to beat Google at its own game, including making it available as part of the free plan.
ChatGPT-5: Expected release date, price, and what we know so far – ReadWrite
Posted: Mon, 09 Sep 2024 07:00:00 GMT [source]
In another demo of the ChatGPT Voice upgrade, they demonstrated the ability to make ChatGPT’s voice sound not just natural but dramatic and emotional. They started by asking it to create a story and had it attempt different voices, including a robotic sound, a singing voice, and intense drama. My bet would be on us seeing a new Sora video, potentially the Shy Kids balloon-head video posted on Friday to the OpenAI YouTube channel. We may even see Figure, the AI robotics company OpenAI has invested in, bring out one of its GPT-4-powered robots to talk to Altman. Sora has probably been the most high-profile product announcement since ChatGPT itself, but it remains restricted to a handful of selected users outside of OpenAI.
OpenAI once offered plugins for ChatGPT to connect to third-party applications and access real-time information on the web. The plugins expanded ChatGPT’s abilities, allowing it to assist with many more activities, such as planning a trip or finding a place to eat. OpenAI has since replaced plugins with GPTs, which are easier for developers to build. GPT-4 is OpenAI’s language model, much more advanced than its predecessor, GPT-3.5.
GPT-4 outperforms GPT-3.5 in a series of simulated benchmark exams and produces fewer hallucinations. In January 2023, OpenAI released a free tool to detect AI-generated text. Unfortunately, OpenAI’s classifier tool could only correctly identify 26% of AI-written text with a «likely AI-written» designation. Furthermore, it provided false positives 9% of the time, incorrectly identifying human-written work as AI-produced. AI models can generate advanced, realistic content that can be exploited by bad actors for harm, such as spreading misinformation about public figures and influencing elections. Instead of asking for clarification on ambiguous questions, the model guesses what your question means, which can lead to poor responses.
In short, the answer is no, not because people haven’t tried, but because none do it efficiently. The AI assistant can identify inappropriate submissions to prevent unsafe content generation. If you are looking for a platform that can explain complex topics in an easy-to-understand manner, then ChatGPT might be what you want. If you want the best of both worlds, plenty of AI search engines combine both. Microsoft is a major investor in OpenAI thanks to multiyear, multi-billion dollar investments. Elon Musk was an investor when OpenAI was first founded in 2015 but has since completely severed ties with the startup and created his own AI chatbot, Grok.
According to Anthropic, the models can handle up to 1 million tokens for certain applications, but interested users need to contact Anthropic for details. And although all models can analyze user-uploaded images and documents, they lack image generation, voice, and internet-browsing capabilities. Moving forward, GPT-4o will power the free version of ChatGPT, with GPT-4o and GPT-4o mini replacing GPT-3.5. GPT-4 will remain available only to those on a paid plan, including ChatGPT Plus, Team, and Enterprise, which start at $20 per month. OpenAI is launching GPT-4o, an iteration of the GPT-4 model that powers its hallmark product, ChatGPT.
OpenAI plans to release its next big AI model by December
The organization works to identify and minimize tech harms to young people and previously flagged ChatGPT as lacking in transparency and privacy. OpenAI is opening a new office in Tokyo and has plans for a GPT-4 model optimized specifically for the Japanese language. The move underscores how OpenAI will likely need to localize its technology to different languages as it expands. OpenAI announced new updates for easier data analysis within ChatGPT. Users can now upload files directly from Google Drive and Microsoft OneDrive, interact with tables and charts, and export customized charts for presentations.
While OpenAI hasn’t publicly released the architecture of its recent models, including GPT-4 and GPT-4o, various experts have made estimates. In June 2023, just a few months after GPT-4 was released, Hotz publicly explained that GPT-4 consisted of roughly 1.8 trillion parameters. More specifically, the architecture consisted of eight models, with each internal model made up of 220 billion parameters.
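Hotz’s figures are easy to sanity-check. The sketch below assumes a standard mixture-of-experts layout in which only a subset of experts is active per token; the “two active experts” number is our illustrative assumption, not something Hotz confirmed.

```python
# Back-of-the-envelope check of the reported GPT-4 estimate:
# eight expert models of 220 billion parameters each.
experts = 8
params_per_expert = 220e9

total_params = experts * params_per_expert  # 1.76e12, i.e. roughly 1.8 trillion

# In a mixture-of-experts model only a subset of experts runs per token.
# Assuming two active experts per token (an illustrative guess):
active_params = 2 * params_per_expert       # 4.4e11, far fewer than the total
```

This is why “far fewer than 1.8 trillion parameters are actually being used at any one time”: the headline count sums all experts, while each token only flows through a fraction of them.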
Several tools claim to detect ChatGPT-generated text, but in our tests, they’re inconsistent at best. However, users have noted that there are some character limitations after around 500 words. Due to the nature of how these models work, they don’t know or care whether something is true, only that it looks true. That’s a problem when you’re using it to do your homework, sure, but when it accuses you of a crime you didn’t commit, that may well at this point be libel. ChatGPT is AI-powered and utilizes LLM technology to generate text from a prompt.
Google Gemini is multimodal — it understands audio, video and computer code as well as text. Google has paused Gemini’s image generation feature because of inaccuracies, however. Google’s statement disclosing the pause pledged to re-release an improved image generation feature soon. Gemini is Google’s GenAI model that was built by the Google DeepMind AI research library. The Gemini AI model powered Google’s Bard GenAI tool that launched in March 2023. Google rebranded Bard as Gemini in February 2024, several months after launching Gemini Advanced based on its new Ultra 1.0 LLM foundation.
He also said that OpenAI would focus on building better reasoning capabilities as well as the ability to process videos. The current-gen GPT-4 model already offers speech and image functionality, so video is the next logical step. The company also showed off a text-to-video AI tool called Sora in the following weeks. GPT-4’s impressive skill set and ability to mimic humans sparked fear in the tech community, prompting many to question the ethics and legality of it all. Some notable personalities, including Elon Musk and Steve Wozniak, have warned about the dangers of AI and called for a pause on training models “more advanced than GPT-4”.
“An important part of our mission is being able to make our advanced AI tools available to everyone for free,” including removing the need to sign up for ChatGPT. Rumors also point to a 3D and improved image model, so the question is whether, in addition to the updates to GPT-4 and ChatGPT, we’ll get a look at Sora, Voice Engine, and more. The company also has an ElevenLabs competitor in Voice Engine, which is likewise held back behind safety research and capable of cloning a voice in seconds. Time will tell, but we’ve got some educated guesses as to what these could mean, based on what features are already present and looking at the direction OpenAI has taken. And let’s not forget how important generative AI has been for giving humanoid robots a brain.
ChatGPT and GPT-4 could get a sweet upgrade this fall with ‘strawberry’