OpenAI introduces GPT-4o, a faster and cheaper AI model
During OpenAI's live event on Monday, the company unveiled an upgrade to the GPT-4 model, which is now more than a year old. The new large language model was trained on vast amounts of data from the internet, and OpenAI predicts it will be better at handling text as well as processing audio and images in real time.
The company claims that the system can answer questions within milliseconds, enabling more fluid and efficient communication. In a demonstration of the model, OpenAI researchers and Chief Technology Officer Mira Murati talked to the new ChatGPT using only their voices, showing that the model can respond naturally. During the presentation, the chatbot was also shown translating speech from one language to another almost instantly, and at one point it even sang part of a story on demand.
Murati told Bloomberg News that this is the first time OpenAI has made such a big leap in interaction and ease of use. "It really does seem that you can collaborate with tools like ChatGPT," she added.
The update will bring many features to free users that were previously limited to paid subscribers. These include the ability to search the web for answers to queries, to talk to the chatbot and hear answers in different tones of voice, and to instruct it to save details that it can recall in the future.
The release of GPT-4o is likely to usher in a new chapter of AI development, in which GPT-4 has so far remained the gold standard. A growing number of start-ups and large tech companies, including Anthropic, Cohere, and Alphabet's Google, have recently introduced their own AI models that they claim surpass GPT-4 on certain benchmarks.
In a blog post on Monday, OpenAI CEO Sam Altman said that while the original version of ChatGPT showed how people could interact with computers using entirely everyday language, GPT-4o feels like a big step forward.