OpenAI's deep research gives a preview of the AI agents of the future
Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

OpenAI's deep research gives a preview of the AI agents of the future

OpenAI announced this week its AI research assistant, which it calls deep research. Powered by OpenAI's o3-mini model (which was trained to use trial and error to find answers to complex questions), deep research is one of OpenAI's first attempts at a real agent that's capable of following instructions and working on its own.

OpenAI says deep research is built for people in fields like finance, science, policy, and engineering who need thorough, precise, and reliable research. It can also be useful for big-ticket purchases, like houses or cars. Because the model needs to spin a lot of compute cycles and hold a lot of memory during its task, it uses a lot of computing power on an OpenAI server. That's why only the company's $200-per-month Pro users have access to the tool, and they're limited to 100 searches per month. OpenAI was kind enough to grant me access for a week to try it out. I found a new deep research button just below the prompting window in ChatGPT.

I first asked it to research all the nondrug products that claim to help people with low back pain. I was thinking about consumer tech gadgets, but I hadn't specified that. So ChatGPT was unsure about the scope of my search (and, apparently, so was I), and it asked me if I wanted to include ergonomic furniture and posture correctors. The model researched the question for 6 minutes, cited 20 sources, and returned a 2,000-word essay on all the consumer back pain devices it could find on the internet. It discussed the relative merits of heated vibration belts, contact pad systems, and transcutaneous electrical nerve stimulation (TENS) units. It even generated a grid that displayed all the details and pricing of 10 different devices. Not knowing a great deal about such devices, I couldn't find any gaps in the information or any suspect statements.

I decided to try something a little harder. "I would like an executive overview of the current research into using artificial intelligence to find new cancer treatments or diagnostic tools," I typed. "Please organize your answer so that the treatments that are most promising, and closest to being used on real patients, are given emphasis."

Like DeepSeek's R1 model and Google's Gemini Advanced 2.0 Flash Thinking Experimental, OpenAI's research tool also shows you its chain of thought as it works toward a satisfying answer. While it searched, it telegraphed its process: "I'm working through AI's integration in cancer diagnostics and treatment, covering imaging, pathology, genomics, and radiotherapy planning. Progressing towards a comprehensive understanding." OpenAI also makes a nice UX choice by putting this chain-of-thought flow in a separate pane at the right of the screen, instead of presenting it right on top of the research results. The only problem is that you get just one chance to see it, because it goes away after the agent finishes its research.

I was surprised that OpenAI's deep research tool took only 4 minutes to finish its work, and cited only 18 sources. It created a summary of how AI is being used in cancer research, citing specific studies that validated the AI in clinical settings. It discussed trends in using AI for reading medical imaging, finding cancer risk in genome data, AI-assisted surgery, drug discovery, and radiation therapy planning and dosing.
However, I noticed that many of the studies and FDA approvals cited didn't occur within the past 18 months, and some of the statements in the report sounded outdated. "Notably, several AI-driven tools are nearing real-world clinical use (with some already approved), particularly in diagnostics (imaging and pathology)," it stated. But AI diagnostic tools are already in clinical use.

Before starting the research, I was aware of a new landmark study published two days ago in The Lancet medical journal about AI assisting doctors in reading mammograms (more on that below). The deep research report mentioned this same study, but it outlined preliminary results published in 2023, not the more recent results published this month.

I have full confidence in OpenAI's deep research tool for doing product searches. I'm less confident, though, about scientific research, only because the research it cited in its report wasn't current. It's also possible that my search was overbroad, since AI is now being used on many fronts to fight cancer. And to be clear: Two searches certainly aren't enough to pass judgment on deep research. The number and kinds of searches you can do are practically infinite, so I'll be testing it more while I still have access. On the whole, I'm impressed with OpenAI's new tool; at the very least, it gives you a framework, some sources, and some ideas to start you off on your own research.

AI is working alongside doctors on early breast cancer detection

A study of more than 100,000 breast images from mammography screenings in Sweden found that when an AI system assisted a single doctor in reviewing mammograms, positive detections of cancer increased by 29%. The screenings were coordinated as part of the Swedish national screening program and performed at four screening sites in southwest Sweden.

The AI system, called Transpara, was developed by ScreenPoint Medical in the Netherlands. Normally, two doctors review mammograms together. When the AI steps in for one of them, overall screen-reading time drops by 44.2%, saving lots of time for radiologists. The AI makes no decisions; it merely points out potential problem spots in the image and assigns a risk score. The human doctor then decides how to proceed. With a nearly 30% improvement in early detections of cancer, the AI is quite literally saving lives. Healthcare providers have been using AI image recognition systems in diagnostics since 2017, and with success, but the results of large-scale studies are only now beginning to appear.

Google touts the profitability of its AI search ads

Alphabet announced its quarterly results earlier this week, and hidden among them was some good news about Google's AI search results (called AI Overviews). Some observers feared that Google would struggle to find ad formats that brands like within the new AI results, or that ads around the AI results would cannibalize Google's regular search ads business. But Google may have found the right formats already, because the AI ads are selling well and are profitable, analysts say. "We were particularly impressed by the firm's commentary on AI Overviews' monetization, which is approximately at par with traditional search monetization despite its launch just a few months ago," says Morningstar equity analyst Malik Ahmed Khan in a research brief.

Khan says Google's AI investments paid off in the company's revamped Shopping section within Google Search, which was upgraded last quarter with AI. The Shopping segment yielded 13% more daily active U.S. users in December 2024 compared with the same month a year earlier. Google also says that younger people who are attracted to AI Overviews end up using regular Google Search more, with their usage increasing over time. "This dynamic of AI Overviews being additive to Google Search stands at odds with the market narrative of generative AI being the death knell for traditional search," Khan says.

Google also announced that it intends to spend $75 billion in capital expenditures during 2025, much of which will go toward new cloud capacity and AI infrastructure.

More AI coverage from Fast Company:

Hundreds of rigged votes can skew AI model rankings on Chatbot Arena, study finds
AI might run your next employee training
You can try DeepSeek's R1 through Perplexity, without the security risk
Why this cybersecurity startup wants to watermark everything

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.