- By Ashish Singh
- Tue, 14 May 2024 09:07 AM (IST)
- Source: JND
Sam Altman's OpenAI introduced GPT-4o, a new version of its powerful large language model GPT-4, at an event on Monday. The 'o' in GPT-4o stands for 'omni', and the model is expected to roll out across OpenAI's products in the coming weeks. For those unaware, the new model can process and generate a combination of text, audio and images.
The new model will be free for all users, while paid users will get up to five times higher usage limits than free users. The company also claimed that GPT-4o is twice as fast as, and half the cost of, GPT-4 Turbo. It will be rolled out in phases, with support for 50 languages worldwide.
During a livestream event, the company stated, "We are making GPT-4o available in the free tier, and to Plus users with up to 5 times higher message limits." According to OpenAI, a new version of Voice Mode powered by GPT-4o will soon be available in alpha within ChatGPT.
"GPT-4o, our newest model, is the best we've ever made. It is fast, intelligent, and multimodal by nature," Altman wrote on X. "Every ChatGPT user has access to it, even those on the free plan! GPT-4 class models have only been accessible to subscribers paying a monthly fee thus far. This is crucial to our goal; we want everyone to have access to excellent AI tools," he said.
GPT-4o can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is comparable to human response times in conversation. Compared with earlier versions, GPT-4o is notably better at visual and audio understanding. In the coming weeks, the company plans to make GPT-4o's additional audio and video capabilities available through the API to a small group of trusted partners.
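For developers, text access to the model goes through OpenAI's publicly documented chat completions endpoint. The following is a minimal sketch of such a request, assuming the API model name `gpt-4o` and a standard API key; the helper function names are our own, and the request is only sent if a key is present in the environment:

```python
import json
import os
import urllib.request

# OpenAI's publicly documented REST endpoint for chat models.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_gpt4o_request(prompt: str) -> dict:
    """Build the JSON payload for a single-turn text request to GPT-4o."""
    return {
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": prompt}],
    }

def ask_gpt4o(prompt: str, api_key: str) -> str:
    """Send the request and return the assistant's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_gpt4o_request(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    key = os.environ.get("OPENAI_API_KEY")
    if key:
        print(ask_gpt4o("Say hello in one word.", key))
    else:
        # No key set: just show the payload that would be sent.
        print(build_gpt4o_request("Say hello in one word."))
```

The audio and video modalities mentioned above are not part of this sketch; per the announcement, those API capabilities were initially limited to selected partners.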
With GPT-4o, the company trained a single new model end-to-end across text, vision and audio, meaning the same neural network handles all inputs and outputs.
"Because GPT-4o is our first model combining all of these modalities, we are still just scratching the surface of exploring what the model can do and its limitations," said OpenAI.
Additionally, the company released a ChatGPT desktop application for Mac. OpenAI also revealed at the event that its GPT Store is now free for all users. Through the GPT Store, users can build and share their own custom chatbots, known as GPTs.