OpenAI unveils its most advanced o3 reasoning model on its last day of 'shipmas'
With the holiday season upon us, many companies are finding ways to mark the occasion with deals, promotions, and other campaigns. OpenAI has found a way to participate with its “12 days of OpenAI” event series.
OpenAI announced via an X post that, starting on Dec. 5, it would host 12 days of live streams and release “a bunch of new things, big and small.” The company saved the best for last, sharing its biggest announcement on Friday, Dec. 20, the last day of the series.
Also: I’m a ChatGPT power user – here’s why Canvas is its best productivity feature
Here’s everything you need to know about the campaign, as well as a round-up of every day’s drops.
What are the ’12 days of OpenAI’?
OpenAI CEO Sam Altman shared more details about the event, which kicked off at 10 a.m. PT on Dec. 5 and ran daily on weekdays for 12 days, each with a live stream featuring a launch or demo. The launches included both “big ones” and “stocking stuffers,” according to Altman.
What’s dropped?
Friday, December 20
On the last day of the series, OpenAI unveiled its latest model family, o3, which encompasses o3 and o3 mini.
- As previously reported, the name skips “o2” because of Telefonica’s O2 telecommunications brand, which could cause confusion and trademark issues.
- OpenAI said that the technology will not be available to the general public yet.
- o3 can outperform o1 in a variety of benchmarks, including math and science, as seen in its performance on the AIME 2024, a competition math benchmark, and the GPQA, a Ph.D.-level science benchmark with biology, physics, and chemistry questions.
- o3 also set a new state-of-the-art score on the ARC-AGI benchmark, a significant result because it shows the model is nearing AGI, although, to be clear, it is not there yet.
- o3 mini is a new model in the o3 family that will support three reasoning options: low, medium, and high. The thinking time determines the performance. At the low setting, it performs the same as o1-mini on the Codeforces Competition Code benchmark, but at medium and high, it performs comparably to o1, if not better. This performance remained consistent across other evals.
- In addition to internal safety testing, OpenAI is, for the first time, opening up the o3 models to external safety testing. Safety researchers can get early access to the model by filling out a form on the OpenAI website, which will be open until January 10.
- Sam Altman concluded the live stream by sharing that o3 mini is planned to launch at the end of January, with the full o3 model to follow after that.
- The company also introduced deliberative alignment, “a training paradigm that directly teaches reasoning LLMs the text of human-written and interpretable safety specifications, and trains them to reason explicitly about these specifications before answering,” according to OpenAI.
Thursday, December 19
On the second-to-last day of ’12 days of OpenAI,’ the company focused on releases regarding its macOS desktop app and its interoperability with other apps.
- Users can now use the desktop app on macOS to let ChatGPT see and help automate their work in other apps. There will be more releases of this nature in 2025, but for now, OpenAI is introducing the three features below.
- Using the “Work with Apps” button, users can now work with many more coding apps. The list includes BBEdit, MATLAB, Nova, Script Editor, TextMate, Android Studio, AppCode, CLion, DataGrip, GoLand, IntelliJ IDEA, PhpStorm, PyCharm, RubyMine, RustRover, WebStorm, Prompt, and Warp.
- For users who use ChatGPT for writing, the desktop app now supports Apple Notes, Quip, and Notion.
- Lastly, the desktop app for macOS now supports Advanced Voice Mode while working with other apps.
- These features have already shipped. All you need is the latest version of the macOS app and a Plus, Pro, Team, Enterprise, or Edu subscription.
- To ease privacy concerns, OpenAI says ChatGPT will only work with apps when manually prompted, and when the feature is active, users can see exactly what will be attached to the message.
- “Day 12, we have something super special, so don’t miss it,” teased OpenAI about its upcoming Friday release.
Wednesday, December 18
Have you ever wanted to use ChatGPT without a Wi-Fi connection? Now, all you have to do is place a phone call. Here’s what OpenAI released on the 10th day:
- By dialing 1-800-ChatGPT, you can now access the chatbot via a toll-free number. OpenAI encourages users to save ChatGPT in their contacts for easy access.
- Users can call from anywhere in the US; in other countries, users can message ChatGPT on WhatsApp instead. Users get 15 minutes of free ChatGPT calls per month.
- In WhatsApp, users can enter a prompt via text as they would with any other person in their contacts; the experience is just like any other text message conversation.
- The phone call feature works on any phone, from a smartphone to a flip phone — even a rotary phone.
- The presenters said it is meant to make ChatGPT more accessible to more users.
Tuesday, December 17
The releases on the ninth day, dubbed “Mini Dev Day,” all focused on developer features and updates. These launches include:
- The o1 model is finally out of preview in the API with support for function calling, structured outputs, developer messages, vision capabilities, and lower latency, according to the company.
- o1 in the API also features a new parameter: “reasoning effort.” This parameter lets developers tell the model how much effort to put into formulating an answer, which helps with cost efficiency.
- OpenAI also introduced WebRTC support for the Realtime API, which makes it easier for developers “to build and scale real-time voice products across platforms.”
- The Realtime API also got a 60% audio token price drop, support for GPT-4o mini, and more control over responses.
- The fine-tuning API now supports Preference Fine-Tuning, which allows users to “Optimize the model to favor desired behavior by reinforcing preferred responses and reducing the likelihood of unpreferred ones,” according to OpenAI.
- OpenAI also introduced new Go and Java SDKs in beta.
- An “AMA” (ask me anything) session will be held for an hour after the live stream on the OpenAI GitHub platform with the presenters.
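As a rough illustration of how the new “reasoning effort” knob might be used, the sketch below only assembles a chat-completions request body (no network call is made). The parameter names here (`reasoning_effort`, the `developer` message role) are taken from the announcement and should be treated as assumptions that may change:

```python
# Sketch of an o1 API request with a reasoning-effort hint.
# Parameter names are assumptions based on OpenAI's announcement.

def build_o1_request(prompt: str, effort: str = "medium") -> dict:
    """Assemble a chat-completions request body for o1."""
    if effort not in ("low", "medium", "high"):
        raise ValueError("effort must be 'low', 'medium', or 'high'")
    return {
        "model": "o1",
        "reasoning_effort": effort,  # lower effort -> cheaper, faster answers
        "messages": [
            # o-series models use "developer" messages in place of "system" ones
            {"role": "developer", "content": "Answer concisely."},
            {"role": "user", "content": prompt},
        ],
    }

# A cheap, fast request for a simple question:
request = build_o1_request("What is 12 * 8?", effort="low")
```

The idea is that a trivial query gets a "low" hint to save tokens and latency, while a hard proof or debugging task would warrant "high".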
Monday, December 16
The drops for the second Monday in the 12 days of OpenAI series all focused on Search in ChatGPT.
- The AI search engine is available to all users starting today, including free users, as long as they are signed in, on every platform where ChatGPT is available. The feature was previously only available to ChatGPT Plus users.
- The search experience, which allows users to browse the web from ChatGPT, got faster and better on mobile and now has an enriched map experience. The upgrades include image-rich visual results.
- Search is integrated into Advanced Voice Mode, meaning you can now search as you talk to ChatGPT. To activate this feature, just start Advanced Voice Mode the same way you regularly would and ask your query verbally. It will then answer verbally by pulling from the web.
- OpenAI also teased developers, saying, “Tomorrow is for you,” and calling the upcoming livestream a “mini Dev Day.”
Friday, December 13
One of OpenAI’s most highly requested features has been an organizational feature to better keep track of your conversations. On Friday, OpenAI delivered a new feature called “Projects.”
- Projects is a new way to organize and customize your chats in ChatGPT, part of the company’s ongoing effort to optimize the core ChatGPT experience.
- When creating a Project, you can include a title, a customized folder color, relevant project files, instructions for ChatGPT on how it can best help you with the project, and more in one place.
- Within a Project, you can start a chat and add previous chats from the sidebar. ChatGPT can also answer questions using the Project’s context in a regular chat format. Chats can be saved in the Project, making it easier to pick up a conversation later and know exactly where to find it.
- Projects will roll out to Plus, Pro, and Team users starting today. OpenAI says it’s coming to free users as soon as possible, while Enterprise and Edu users will see it early next year.
Thursday, December 12
When the live stream started, OpenAI addressed the elephant in the room — the fact that the company’s live stream went down the day before. OpenAI apologized for the inconvenience and said its team is working on a post-mortem to be posted later.
Then it got straight into the news, with another highly anticipated announcement:
- Advanced Voice Mode now has screen-sharing and visual capabilities, meaning it can assist with the context of what it is viewing, whether that be from your phone camera or what’s on your screen.
- These capabilities build on what Advanced Voice could already do very well: engaging in casual conversation as a human would. The natural conversations can be interrupted, span multiple turns, and follow non-linear trains of thought.
- In the demo, the user got directions from ChatGPT’s Advanced Voice on how to make a cup of coffee. As the demoer went through the steps, ChatGPT offered verbal insights and directions.
- There’s another bonus for the Christmas season: Users can access a new Santa voice. To activate it, all users have to do is click on the snowflake icon. Santa is rolling out throughout today everywhere that users can access ChatGPT voice mode. The first time you talk to Santa, your usage limits reset, even if you have reached the limit already, so you can have a conversation with him.
- Video and screen sharing are rolling out in the latest mobile apps starting today and throughout next week to all Team users and most Pro and Plus subscribers. Pro and Plus subscribers in Europe will get access “as soon as we can,” and Enterprise and Edu users will get access early next year.
Wednesday, December 11
Apple released iOS 18.2 on Wednesday. The release includes integrations with ChatGPT across Siri, Writing Tools, and Visual Intelligence. As a result, the live stream focused on walking through the integration.
- Siri can now recognize when you ask questions outside its scope that could benefit from being answered by ChatGPT instead. In those instances, it will ask if you’d like to process the query using ChatGPT. Before any request is sent to ChatGPT, a message notifying the user and asking for permission will always appear, placing control in the user’s hands as much as possible.
- Visual Intelligence refers to a new feature for the iPhone 16 lineup that users can access by tapping the Camera Control button. Once the camera is open, users can point it at something and search the web with Google, or use ChatGPT to learn more about what they are viewing or perform other tasks such as translating or summarizing text.
- Writing Tools now features a new “Compose” tool, which allows users to create text from scratch by leveraging ChatGPT. With the feature, users can even generate images using DALL-E.
All of the above features are subject to ChatGPT’s daily usage limits, the same limits users reach on the free version of ChatGPT. Users can choose whether to enable the ChatGPT integration in Settings.
Read more about it here: iOS 18.2 rolls out to iPhones: Try these 6 new AI features today
Tuesday, December 10
- Canvas is coming to all web users, regardless of plan, in GPT-4o, meaning it is no longer just available in beta for ChatGPT Plus users.
- Canvas has been built into GPT-4o natively, meaning you can just call on Canvas instead of having to go to the toggle on the model selector.
- The Canvas interface is the same as what users saw in the ChatGPT Plus beta, with a tab on the left-hand side showing the Q+A exchange and a right-hand tab showing your project, displaying all edits as they happen, as well as shortcuts.
- Canvas can also be used with custom GPTs. It is turned on by default when creating a new one, and there is an option to add Canvas to existing GPTs.
- Canvas can also run Python code directly, allowing ChatGPT to execute coding tasks such as fixing bugs.
Read more about it here: I’m a ChatGPT power user – and Canvas is still my favorite productivity feature a month later
Monday, December 9
OpenAI teased the third-day announcement as “something you’ve been waiting for,” followed by the much-anticipated drop of its video model — Sora. Here’s what you need to know:
- Known as Sora Turbo, the video model is smarter and cheaper than the version previewed in February.
- Access begins rolling out in the US later today; all users need is a ChatGPT Plus or Pro subscription.
- Sora can generate video-to-video, text-to-video, and more.
- ChatGPT Plus users can generate up to 50 videos per month at 480p resolution or fewer videos at 720p. The Pro Plan offers 10x more usage.
- Sora features an explore page where users can view each other’s creations. Users can click on any video to see how it was created.
- A live demo showed the model in use. The presenters entered a prompt and picked the aspect ratio, duration, and even presets. I found the live demo video results to be realistic and stunning.
- OpenAI also unveiled Storyboard, a tool that lets users generate inputs for every frame in a sequence.
Friday, December 6
On the second day of “shipmas,” OpenAI expanded access to its Reinforcement Fine-Tuning Research Program:
- The Reinforcement Fine-Tuning program allows developers and machine learning engineers to fine-tune OpenAI models to “excel at specific sets of complex, domain-specific tasks,” according to OpenAI.
- Reinforcement Fine-Tuning refers to a customization technique in which developers can define a model’s behavior by inputting tasks and grading the output. The model then uses this feedback as a guide to improve, becoming better at reasoning through similar problems, and enhancing overall accuracy.
- OpenAI encourages research institutes, universities, and enterprises to apply to the program, particularly those that perform narrow sets of complex tasks, could benefit from AI assistance, and handle tasks with an objectively correct answer.
- Spots are limited; interested applicants can apply by filling out this form.
- OpenAI aims to make Reinforcement Fine-Tuning publicly available in early 2025.
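The grading idea at the heart of Reinforcement Fine-Tuning can be illustrated with a toy example. This is a hypothetical sketch, not OpenAI’s API: a grader compares a model’s output against a known-correct answer and produces a 0-to-1 score, which serves as the feedback signal that steers the model toward better reasoning on similar tasks. The gene-name task and the partial-credit rule below are invented for illustration:

```python
# Toy illustration of grading for Reinforcement Fine-Tuning.
# Not OpenAI's API: just shows how a task with an objectively
# correct answer can yield a 0-to-1 training signal.

def grade(model_output: str, correct_answer: str) -> float:
    """Return 1.0 for an exact match, 0.5 if the answer is buried
    in extra text, and 0.0 otherwise."""
    output = model_output.strip().lower()
    answer = correct_answer.strip().lower()
    if output == answer:
        return 1.0
    if answer in output:
        return 0.5
    return 0.0

# Each graded example becomes a reward signal during fine-tuning:
examples = [
    ("FGFR3", "FGFR3"),               # exact match
    ("The gene is FGFR3.", "FGFR3"),  # correct but wrapped in prose
    ("TP53", "FGFR3"),                # wrong answer
]
scores = [grade(out, ans) for out, ans in examples]  # [1.0, 0.5, 0.0]
```

Tasks with this shape, where a simple program can verify the answer, are exactly the “objectively correct answer” cases OpenAI says the program is looking for.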
Thursday, December 5
OpenAI started with a bang, unveiling two major upgrades to its chatbot: a new tier of ChatGPT subscription, ChatGPT Pro, and the full version of the company’s o1 model.
The full version of o1:
- Will be better for all kinds of prompts, beyond math and science
- Will make major mistakes about 34% less often than o1-preview, while thinking about 50% faster
- Rolls out today, replacing o1-preview, for all ChatGPT Plus and Pro users
- Lets users input images, as seen in the demo, to provide multi-modal reasoning (reasoning on both text and images)
ChatGPT Pro:
- Is meant for ChatGPT Plus superusers, granting unlimited access to the best OpenAI has to offer, including OpenAI o1-mini, GPT-4o, and Advanced Voice Mode
- Features o1 pro mode, which uses more computing to reason through the hardest science and math problems
- Costs $200 per month
Where can you access the live stream?
The live streams were held on the OpenAI website and posted to the company’s YouTube channel immediately afterward. So if you missed the 12 days of OpenAI and want to catch up, you can rewatch them all on OpenAI’s YouTube channel.