The extremely interesting Spring Update conference, during which OpenAI presented a number of new features, is now behind us. Let's take a look at them.
Summary of key points:
I have to admit that when I watched the recording shortly after the premiere, I "had to pick my jaw up off the floor". 🙂 If you haven't seen it yet, I encourage you to spend half an hour watching it. I've included a link to the full recording below.
The GPT-4o presentation was one of the highlights of the Spring Update conference, which took place on May 13. Recording of the main conference.
The name of the model, and specifically the letter "o" in it, refers to the Latin "omni", meaning "all" or "every". I think it perfectly captures the idea behind the model. The new OpenAI flagship has the same level of intelligence as GPT-4, but:
OpenAI employees demonstrated a number of capabilities of the new model. Below are the most interesting ones:
There are many more videos showing the capabilities of the new version of GPT. You can find them all on the OpenAI YouTube channel.
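Although this article focuses on the ChatGPT app, the same GPT-4o model was also made available through the OpenAI API at launch. For the curious, here is a minimal sketch of sending a combined text-and-image prompt to it with the official openai Python SDK; the image URL is a placeholder, and the snippet assumes an OPENAI_API_KEY environment variable is set (API access is billed separately from the ChatGPT plans).

```python
from openai import OpenAI

# The client reads the OPENAI_API_KEY environment variable by default.
client = OpenAI()

# A single multimodal request: text and an image in one user message.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What emotion does the person in this photo express?"},
                # Placeholder URL; any publicly reachable image works here.
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```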
It's worth noting here that not all changes will be visible in GPT-4o right away. As the publication on the OpenAI blog explains, some of them will be rolled out in phases (e.g. the new voice mode).
We plan to launch the new voice mode and these new features in alpha in the coming weeks, with early access for Plus users as we roll out more broadly.
Using GPT-4o, free users will now have access to features such as:
Until now, free users only had access to GPT-3.5; to use GPT-4, you had to buy a Plus subscription for $20 per month. That is now changing.
OpenAI employees have repeatedly emphasized, on their website and during presentations, how much they care about making AI accessible to everyone. For this reason, they have decided that the new "flagship" will be available for free to anyone with an account on their platform. That is the good news. There is, however, also bad news: there will be limits, and quite conservative ones. As we can read on the OpenAI blog:
The number of messages free users can send using GPT-4o will be limited based on usage and demand.
In practice, the average user will not be able to estimate how much longer they can use the "smarter" assistant. Once the limit is exhausted, ChatGPT will automatically switch to the free GPT-3.5 and display an appropriate notification.
I received access to GPT-4o in the free version on Thursday, May 16, 2024. The first thing that caught my eye was how small the daily conversation limits are. I don't know exactly how these limits are calculated, but on the first day I used up mine after about 5-10 minutes.
The new flagship model is not the only news. There are several other, smaller changes related to the accessibility of the assistant. Here are a few of them:
As a huge AI fan, but also as an expert who implements such solutions on a daily basis, I rate the entire conference very positively. I am very impressed by the strategy OpenAI has chosen for developing its key product: small but thoughtful changes that make GPT more and more complete.
In some moments, the recordings show imperfections, e.g. GPT, instead of focusing on the face of the person whose emotions it is supposed to recognize, at first describes a previously sent image (41 seconds into this recording). In my opinion, these moments only add realism and charm to the presentation. This stands in contrast to Google, which (to put it mildly) stretched reality in the recordings presenting the Gemini model at the end of last year.
It's hard not to get the impression that what we're seeing is starting to resemble the reality depicted in the 2013 film "Her". OpenAI CEO Sam Altman himself seems to be quietly hinting at that film as an inspiration. Let's just hope that the creators of GPT, in addition to the inspiration, have also taken its lessons to heart. 😉
And what are your feelings after watching the presentation and reading the list of new features? Or maybe you've already had the opportunity to test GPT-4o? Please share your impressions in the comments below. I'd love to hear your thoughts and experiences. 🙂
Sources:
Title Image Credit: Andrew Neel, Unsplash