12 days of OpenAI: The Ars Technica recap
Did OpenAI's big holiday event live up to the billing?

Benj Edwards | Dec 20, 2024 5:01 pm

Credit: J Studios via Getty Images

Over the past 12 business days, OpenAI has announced a new product or demoed an AI feature every weekday, calling the PR event "12 days of OpenAI." We've covered some of the major announcements as they happened, but a day-by-day recap might be useful for people seeking a comprehensive look at each day's developments.

The timing and rapid pace of these announcements, particularly in light of Google's competing releases, illustrate the intensifying competition in AI development. What might normally have been spread across months was compressed into just 12 business days, giving users and developers a lot to process as they head into 2025.

Humorously, we asked ChatGPT what it thought about the whole series of announcements, and it was skeptical that the event even took place. "The rapid-fire announcements over 12 days seem plausible," wrote ChatGPT-4o, "but might strain credibility without a clearer explanation of how OpenAI managed such an intense release schedule, especially given the complexity of the features."

But it did happen, and here's a chronicle of what went down on each day.

Day 1: Thursday, December 5

On the first day of OpenAI, the company released its full o1 model, making it available to ChatGPT Plus and Team subscribers worldwide. The company reported that the model operates faster than its preview version and reduces major errors by 34 percent on complex real-world questions.

The o1 model brings new capabilities for image analysis, allowing users to upload and receive detailed explanations of visual content. OpenAI said it plans to expand o1's features to include web browsing and file uploads in ChatGPT, with API access coming soon. The API version will support vision tasks, function calling, and structured outputs for system integration.

OpenAI also launched ChatGPT Pro, a $200 subscription tier that provides "unlimited" access to o1, GPT-4o, and Advanced Voice features. Pro subscribers receive an exclusive version of o1 that uses additional computing power for complex problem-solving. Alongside this release, OpenAI announced a grant program that will provide ChatGPT Pro access to 10 medical researchers at established institutions, with plans to extend grants to other fields.

Day 2: Friday, December 6

Day 2 wasn't as exciting. OpenAI unveiled Reinforcement Fine-Tuning (RFT), a model customization method that will let developers modify "o-series" models for specific tasks. The technique reportedly goes beyond traditional supervised fine-tuning by using reinforcement learning to help models improve their reasoning abilities through repeated iterations. In other words, OpenAI created a new way to train AI models that lets them learn from practice and feedback.

OpenAI says that Berkeley Lab computational researcher Justin Reese tested RFT for researching rare genetic diseases, while Thomson Reuters has created a specialized o1-mini model for its CoCounsel AI legal assistant. The technique requires developers to provide a dataset and evaluation criteria, with OpenAI's platform managing the reinforcement learning process.
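OpenAI hasn't published the full data format, but to make the idea concrete, here's a hypothetical Python sketch of what preparing an RFT-style dataset and evaluation criterion might look like. The JSONL layout, field names, and grading function below are our own illustrative assumptions, not OpenAI's actual schema.

```python
# Hypothetical sketch of preparing data for reinforcement fine-tuning (RFT).
# The JSONL layout, field names, and grading scheme are illustrative
# assumptions for this article, not OpenAI's published schema.
import json

# Each example pairs a prompt with a reference answer that a grader can score.
examples = [
    {
        "messages": [{"role": "user", "content": "Which gene is associated with cystic fibrosis?"}],
        "reference_answer": "CFTR",
    },
    {
        "messages": [{"role": "user", "content": "Which gene is most commonly mutated in Rett syndrome?"}],
        "reference_answer": "MECP2",
    },
]

def grade(model_answer: str, reference_answer: str) -> float:
    """Toy evaluation criterion: full credit if the reference answer appears
    in the model's output, zero otherwise. In RFT, scores like this act as
    the reinforcement signal that shapes the model over repeated iterations."""
    return 1.0 if reference_answer.lower() in model_answer.lower() else 0.0

# Write the dataset as JSONL, one training example per line, for upload.
with open("rft_train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

print(grade("The CFTR gene causes cystic fibrosis.", "CFTR"))  # 1.0
```

The grader is what distinguishes this from supervised fine-tuning: instead of training the model to reproduce fixed target outputs, the platform scores each attempt and uses that score to reinforce better reasoning.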
OpenAI plans to release RFT to the public in early 2025 but currently offers limited access through its Reinforcement Fine-Tuning Research Program for researchers, universities, and companies.

Day 3: Monday, December 9

On day 3, OpenAI released Sora, its text-to-video model, as a standalone product, now accessible through sora.com for ChatGPT Plus and Pro subscribers. The company says the new version operates faster than the research preview shown in February 2024, when OpenAI first demonstrated the model's ability to create videos from text descriptions.

The release moved Sora from research preview to a production service, marking OpenAI's official entry into the video synthesis market. The company published a blog post detailing the subscription tiers and deployment strategy for the service.

Day 4: Tuesday, December 10

On day 4, OpenAI moved its Canvas feature out of beta testing, making it available to all ChatGPT users, including those on free tiers. Canvas provides a dedicated interface for extended writing and coding projects beyond the standard chat format, now with direct integration into the GPT-4o model.

The updated Canvas allows users to run Python code within the interface and includes a text-pasting feature for importing existing content. OpenAI added compatibility with custom GPTs and a "show changes" function that tracks modifications to writing and code. The company said Canvas is now on chatgpt.com for web users and also available through a Windows desktop application, with more features planned for future updates.

Day 5: Wednesday, December 11

On day 5, OpenAI announced that ChatGPT would integrate with Apple Intelligence across iOS, iPadOS, and macOS devices. The integration works on iPhone 16 series phones, iPhone 15 Pro models, iPads with A17 Pro or M1 chips and later, and Macs with M1 processors or newer, running their respective latest operating systems.

The integration lets users access ChatGPT's features (such as they are), including image and document analysis, directly through Apple's system-level intelligence features. The feature works with all ChatGPT subscription tiers and operates within Apple's privacy framework. Apple's iffy message summaries remain unaffected by the additions.

Enterprise and Team account users need administrator approval to access the integration.

Day 6: Thursday, December 12

On the sixth day, OpenAI added two features to ChatGPT's voice capabilities: "video calling" with screen sharing support for ChatGPT Plus and Pro subscribers and a seasonal Santa Claus voice preset.

The new visual Advanced Voice Mode features work through the mobile app, letting users show their surroundings or share their screen with the AI model during voice conversations. While the rollout covers most countries, users in several European nations, including EU member states, Switzerland, Iceland, Norway, and Liechtenstein, will get access at a later date. Enterprise and education users can expect these features in January.

The Santa voice option appears as a snowflake icon in the ChatGPT interface across mobile devices, web browsers, and desktop apps, with conversations in this mode not affecting chat history or memory. Don't expect Santa to remember what you want for Christmas between sessions.
Day 7: Friday, December 13

On day 7, OpenAI introduced Projects, a new organizational feature in ChatGPT that lets users group related conversations and files. The feature works with the company's GPT-4o model and provides a central location for managing resources related to specific tasks or topics, kinda like Anthropic's "Projects" feature.

ChatGPT Plus, Pro, and Team subscribers can currently access Projects through chatgpt.com and the Windows desktop app, with view-only support on mobile devices and macOS. Users can create projects by clicking a plus icon in the sidebar, where they can add files and custom instructions that provide context for future conversations.

OpenAI said it plans to expand Projects in 2025 with support for additional file types, cloud storage integration through Google Drive and Microsoft OneDrive, and compatibility with other models, like o1. Enterprise and education users will receive access to Projects in January.

Day 8: Monday, December 16

On day 8, OpenAI expanded its search features in ChatGPT, extending access to all users with free accounts while reportedly adding speed improvements and mobile optimizations. Basically, you can use ChatGPT like a web search engine, although in practice it doesn't seem to be as comprehensive as Google Search at the moment.

The update includes a new maps interface and integration with Advanced Voice, allowing users to perform searches during voice conversations. The search capability, which previously required a paid subscription, now works across all platforms where ChatGPT operates.

Day 9: Tuesday, December 17

On day 9, OpenAI released its o1 model through its API platform, adding support for function calling, developer messages, and vision processing capabilities. The company also reduced GPT-4o audio pricing by 60 percent and introduced a GPT-4o mini option that costs one-tenth of previous audio rates.

OpenAI also simplified its WebRTC integration for real-time applications and unveiled Preference Fine-Tuning, which provides developers new ways to customize models. Rounding out the day, the company launched beta versions of software development kits for the Go and Java programming languages, expanding its toolkit for developers.
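To show what the new API features look like in practice, here's a minimal sketch using OpenAI's Python SDK that sends o1 a developer message and exposes a function the model can call. The "get_weather" tool and its schema are hypothetical examples we made up for illustration, and the exact model ID may differ from what's shown; check OpenAI's documentation for current names.

```python
# Minimal sketch of calling o1 with a developer message and function calling
# via OpenAI's Python SDK. The get_weather tool is a hypothetical example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1",  # assumed model ID; consult OpenAI's docs for current names
    messages=[
        # Developer messages serve the role system prompts play for GPT-4o
        {"role": "developer", "content": "Answer concisely. Use tools when needed."},
        {"role": "user", "content": "What's the weather in Raleigh, NC right now?"},
    ],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical function we'd implement locally
                "description": "Look up the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
)

# If the model chose to call the function, the arguments arrive as JSON for
# our code to execute; otherwise, message.content holds a plain text reply.
message = response.choices[0].message
if message.tool_calls:
    print(message.tool_calls[0].function.arguments)
else:
    print(message.content)
```

The flow mirrors function calling with GPT-4o: the model returns a structured tool call, your code runs the function, and you send the result back in a follow-up message for the model to incorporate.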
Day 10: Wednesday, December 18

On Wednesday, OpenAI did something a little fun and launched voice and messaging access to ChatGPT through a toll-free number (1-800-CHATGPT), as well as WhatsApp. US residents can make phone calls with a 15-minute monthly limit, while global users can message ChatGPT through WhatsApp at the same number.

OpenAI said the release is a way to reach users who lack consistent high-speed internet access or want to try AI through familiar communication channels, but it's also just a clever hack. As evidence, OpenAI notes that these new interfaces serve as experimental access points with more "limited functionality" than the full ChatGPT service, and it still recommends that existing users continue using their regular ChatGPT accounts for complete features.

Day 11: Thursday, December 19

On Thursday, OpenAI expanded ChatGPT's desktop app integration to include additional coding environments and productivity software. The update added support for JetBrains IDEs like PyCharm and IntelliJ IDEA, VS Code variants including Cursor and VSCodium, and text editors such as BBEdit and TextMate.

OpenAI also included integration with Apple Notes, Notion, and Quip, while adding Advanced Voice Mode compatibility when working with desktop applications. These features require manual activation for each app and remain available to paid subscribers, including Plus, Pro, Team, Enterprise, and Education users, with Enterprise and Education customers needing administrator approval to enable the functionality.

Day 12: Friday, December 20

On Friday, OpenAI concluded its 12 days of announcements by previewing two new simulated reasoning models, o3 and o3-mini, while opening applications for safety and security researchers to test them before public release. Early evaluations show o3 achieving a 2727 rating on Codeforces programming contests and scoring 96.7 percent on AIME 2024 mathematics problems.

The company reports that o3 set performance records on advanced benchmarks, solving 25.2 percent of problems on EpochAI's Frontier Math evaluations and scoring above 85 percent on the ARC-AGI test, which is comparable to human results. OpenAI also published research about "deliberative alignment," a technique used in developing o1. The company has not announced firm release dates for either new o3 model, but CEO Sam Altman said o3-mini might ship in late January.

So what did we learn?

OpenAI's December campaign revealed that the company had a lot of things sitting around that it needed to ship, and it picked a fun theme to unite the announcements. Google responded in kind, as we have covered.

Several trends from the releases stand out. OpenAI is heavily investing in multimodal capabilities. The o1 model's release, Sora's evolution from research preview to product, and the expansion of voice features with video calling all point toward systems that can seamlessly handle text, images, voice, and video.

The company is also focusing heavily on developer tools and customization, so it can continue to have a cloud service business and have its products integrated into other applications. Between the API releases, Reinforcement Fine-Tuning, and expanded IDE integrations, OpenAI is building out its ecosystem for developers and enterprises. And the introduction of o3 shows that OpenAI is still attempting to push technological boundaries, even in the face of diminishing returns in training LLM base models.

OpenAI seems to be positioning itself for a 2025 where generative AI moves beyond text chatbots and simple image generators and finds its way into novel applications that we probably can't even predict yet. We'll have to wait and see what the company and developers come up with in the year ahead.

Benj Edwards, Senior AI Reporter

Benj Edwards is Ars Technica's Senior AI Reporter and founder of the site's dedicated AI beat in 2022. He's also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.