
OpenAI just launched a sequel to its most popular API
www.fastcompany.com
More than three million developers are using OpenAI's APIs as shorthand code to infuse apps and websites with an engine of advanced AI. And today, the company's most popular API, called Chat Completions, is getting a significant sequel called Responses. Eight months in development, it will vastly expand upon and simplify the experience of plugging into OpenAI.

For developers, Responses will mean using less code to pose more complex questions to the AI. A hundred lines of code will turn into just three, as the company courts a wider set of developers who don't consider themselves LLM experts. For consumers, it will mean you'll soon be interacting with AI that's faster, more fluid in using forms of media other than text, and more capable of taking more steps on your behalf.

"Completions were very much designed in a world where you could only put text into it, and you could only get text out of it. But now we have models that can do work across multiple mediums. We can put images in, we can get audio out of the model, and [users] can speak to the model in real time and have it speak to you back," says Steve Coffey, an engineer at OpenAI. "[Completions] is just like the wrong tool for the job, so Responses was designed from the ground up."

What is OpenAI's new Responses API update?

APIs are basically the software gateways to use features from a service or platform inside your own. And to OpenAI, its APIs are as carefully designed as any product, even if we don't tend to consider APIs as designed objects. The iPhone has APIs for apps using its camera and accelerometers, for instance, while Stripe's APIs make it possible for websites and apps to take payments, and in each case, the ease of integrating these APIs has been vital to courting developers and growing a business.

OpenAI created the modern API for AI in 2020 (and Chat Completions in 2023) so developers could plug into its AI platform. Its competitors have since copied OpenAI's approach, making it something of an informal standard across the industry. Thousands of apps, ranging from Perplexity (an AI search engine) to Harvey (an AI for lawyers), currently integrate OpenAI APIs.

Today, OpenAI offers a few different APIs, including one that generates images with DALL-E and another that exclusively works to summarize or write text from scratch. For this release, OpenAI is focusing on Responses, the evolution of its Chat Completions API, the company's popular way for app developers to plug into the core conversational technology behind ChatGPT. The way Chat Completions was designed, developers could only send one query in text at a time and get a single answer in text back. Practically speaking, that meant complicated questions could take several steps, and each new question took time, introducing more latency.

Now, a developer plugs strings of code into the Responses API, crossed with natural-language queries written more or less the way you or I would talk to ChatGPT. (A long user manual helps developers understand what they can and can't do.) OpenAI's API will offer multi-turn conversations that understand context and conversational flow, even when you mix in multimedia like images and, soon to arrive, voice and sound. Responses can also juggle several processes at once, because with a single line of code, you can connect Tools hosted by OpenAI into this process.
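To make that concrete, here is a rough sketch (not from the article) of what a Responses call with one hosted tool attached might look like in OpenAI's Python SDK; the model name and the exact tool identifier are assumptions and may differ from OpenAI's current documentation.

```python
# Minimal sketch of a Responses API call with a hosted tool attached.
# Assumptions: model name and the "web_search_preview" tool type; check
# OpenAI's docs for the current identifiers.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-4o",
    # A single entry here is the "one line of code" that hands the model
    # a hosted tool such as web search.
    tools=[{"type": "web_search_preview"}],
    input="Summarize today's top AI news in two sentences.",
)

print(response.output_text)  # the model's final, tool-grounded answer
```

In this sketch, the single `tools` entry stands in for the one-line tool hookup the article describes; everything else is ordinary request-and-response plumbing.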
These tools will include a web search (so OpenAI's responses can be grounded in more real-time data), a code interpreter to write and test code, and a file search to analyze and summarize files. The new API will also let developers connect to Operator, OpenAI's agentic tool that can analyze screens and actually take actions on the user's behalf, and comes with a new kit of software that helps developers juggle multiple AI agents at the same time.

As the company explains, building APIs requires forecasting, years in advance, the functions developers may want, and if you squint, it's not hard to deconstruct OpenAI's own thesis on the future lurking in the feature set. The API has vastly expanded what's possible when you plug into OpenAI as a developer: embracing fuzzy multimedia inputs, integrating information so responses are current, and, perhaps most notable of all, acting on behalf of the user to save them time and effort.

"I'm very excited for this year because of the agentic behavior that our models will unlock: the model is taking multiple steps on its own volition and giving you an answer," says Atty Eleti, an engineer at OpenAI. "On the far end, [it makes way for] AI engineers, AI designers, AI auditors, AI accountants. Little junior interns that you can instruct and operate and ask them to go up and do these things. And I think we're on the cusp of that becoming a very tangible reality."

Still, these long-term possibilities are grounded in immediate efficiencies. The API updates mean that a simple question, "What's the weather in San Francisco?", goes from taking a hundred lines of code to just three. Adding all of the aforementioned tools requires just one line of code. This means that coding AI apps should be faster for developers. And because many queries hit OpenAI's servers all at once, responses should come faster for end users.

[Image: OpenAI]

The challenge of bringing developers along

Like any tool, APIs have to be designed for ease of use. They are not just about coding capabilities, the OpenAI team argues, but about designing clarity and possibility.

"The education ladder of an API is something that has to be very consciously designed, because our target audience is not people who know how AI works or how LLMs work," says Eleti. "And so we introduce them to AI in a sort of a ladder way, where you do a bit of work and you get some reward out of it. You do a bit more work, you understand some more concepts, and then over time, you can graduate to the more complex functionality."

OpenAI gives this instructive feedback to developers through their own mistakes. Whenever a call generates errors, OpenAI tries to explain what went wrong in plain language, so the developer can actually understand how to improve their technique. The OpenAI team believes that such feedback, coupled with autocomplete coding tools, should make Responses easy for developers to learn.

"I think that really good APIs sort of allow you to start off with the gas pedal and the steering wheel and graduate slowly to the airplane cockpit by exposing more and more functionality in the form of knobs, in the form of like settings and these other things that are hidden from you at first, but exposed over time," says Coffey.

The tricky part of updating an API, however, is not just making it self-explanatory. The API also needs to be backwards compatible, because software that's already been built to connect to OpenAI can't suddenly go dark after an update. So Responses is backwards compatible with software built upon Chat Completions.
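As a hedged illustration of that coexistence (and of the article's hundred-lines-to-three claim), the sketch below shows an existing Chat Completions call that keeps working unchanged next to a compact Responses equivalent. The model names are assumptions, and the real hundred-line version would include the plumbing this sketch leaves out.

```python
# Sketch of old and new styles side by side; both run against the same client.
from openai import OpenAI

client = OpenAI()

# Legacy style: Chat Completions, one text query in, one text answer out.
chat = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in San Francisco?"}],
)
print(chat.choices[0].message.content)

# Responses style: roughly the "three lines" the article cites. Grounding the
# answer in live weather data would mean attaching the hosted web-search tool
# from the earlier sketch.
resp = client.responses.create(
    model="gpt-4o",
    input="What's the weather in San Francisco?",
)
print(resp.output_text)
```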
Furthermore, the Completions API itself will continue working as it always has. OpenAI will continue to support it into the future, offering updates that put Completions as close to feature parity with Responses as it can. (But to use those nifty tools, you'll need to graduate to Responses.)

Over time, the OpenAI API team bets that most of its developers will land on Responses, given the extra capabilities and that it will be price-equivalent to run. Assuming that OpenAI has bet on the right future, AI software is about to become faster, more capable, and more proactive than anything we've seen to date.