• The Army’s Newest Recruits: Tech Execs From Meta, OpenAI and More

    Silicon Valley executives are joining a new innovation corps in the Army Reserve.
    WWW.WSJ.COM
  • Apple WWDC 2025: News and analysis

    Apple’s Worldwide Developers Conference 2025 saw a range of announcements that offered a glimpse into the future of Apple’s software design and artificial intelligence (AI) strategy, highlighted by a new design language called Liquid Glass and by Apple Intelligence news.

    Liquid Glass is designed to add translucency and dynamic movement to Apple’s user interface across iPhones, iPads, Macs, Apple Watches, and Apple TVs. The overhaul aims to make interface elements like buttons and sidebars adapt contextually.

    However, the real news of WWDC could be what we didn’t see. Analysts had high expectations for Apple’s AI strategy, and while Apple Intelligence was talked about, many market watchers reported that it lacked the innovation that has come from Google’s and Microsoft’s generative AI (genAI) rollouts.

    The question of whether Apple is playing catch-up lingered at WWDC 2025, and comments from Apple execs about delays to a significant AI overhaul for Siri were apparently interpreted as a setback by investors, leading to a negative reaction and drop in stock price.

    Follow this page for Computerworld’s coverage of WWDC25.

    WWDC25 news and analysis

    Apple’s AI Revolution: Insights from WWDC

    June 13, 2025: At Apple’s big developer event, developers were served a feast of AI-related updates, including APIs that let them use Apple Intelligence in their apps and ChatGPT augmentation from within Xcode. As a development environment, Apple has secured its future, with Macs among the most computationally performant systems you can affordably purchase for the job.

    For developers, Apple’s tools get a lot better for AI

    June 12, 2025: Apple announced one important AI update at WWDC this week: the introduction of support for third-party large language models (LLMs) such as ChatGPT from within Xcode. It’s a big step that should benefit developers by accelerating app development.

    WWDC 25: What’s new for Apple and the enterprise?

    June 11, 2025: Beyond its new Liquid Glass UI and other major improvements across its operating systems, Apple introduced a raft of changes, tweaks, and enhancements for IT admins at WWDC 2025.

    What we know so far about Apple’s Liquid Glass UI

    June 10, 2025: What Apple has tried to achieve with Liquid Glass is to bring together the optical quality of glass and the fluidity of liquid to emphasize transparency and lighting when using your devices. 

    WWDC first look: How Apple is improving its ecosystem

    June 9, 2025: While the new user interface design Apple execs highlighted at this year’s Worldwide Developers Conference (WWDC) might have been a bit of an eye-candy distraction, Apple’s enterprise users were not forgotten.

    Apple infuses AI into the Vision Pro

    June 8, 2025: Sluggish sales of Apple’s Vision Pro mixed reality headset haven’t dampened the company’s enthusiasm for advancing the device’s 3D computing experience, which now incorporates AI to deliver richer context and experiences.

    WWDC: Apple is about to unlock international business

    June 4, 2025: One of the more exciting pre-WWDC rumors is that Apple is preparing to make language problems go away by implementing focused artificial intelligence in Messages, which will apparently be able to translate incoming and outgoing messages on the fly. 
    WWW.COMPUTERWORLD.COM
  • Why Companies Need to Reimagine Their AI Approach

    Ivy Grant, SVP of Strategy & Operations, Twilio | June 13, 2025 | 5 Min Read | Image: peshkova via Alamy Stock

    Ask technologists and enterprise leaders what they hope AI will deliver, and most will land on some iteration of the “T” word: transformation. No surprise; AI and its “cooler than you” cousin, generative AI (GenAI), have been hyped nonstop for the past 24 months. But therein lies the problem. Many organizations are rushing to implement AI without a grasp on the return on investment, leading to high spend and low impact. Without anchoring AI to clear friction points and acceleration opportunities, companies invite fatigue, anxiety, and competitive risk. Two-thirds of C-suite execs say GenAI has created tension and division within their organizations; nearly half say it’s “tearing their company apart.” Most (71%) report adoption challenges; more than a third call it a massive disappointment. While AI’s potential is irrefutable, companies need to reject the narrative of AI as a standalone strategy or transformational savior. Its true power is as a catalyst to amplify what already works and surface what could. Here are three principles to make that happen.

    1. Start with friction, not function

    Many enterprises struggle with where to start when integrating AI. My advice: Start where the pain is greatest. Identify the processes that create the most friction and work backward from there. AI is a tool, not a solution. By mapping real pain points to AI use cases, you can aim investments at the ripest fruit rather than simply the lowest-hanging. For example, one of our top sources of customer pain was troubleshooting undeliverable messages, which forced users to sift through error code documentation. To solve this, an AI assistant was introduced to detect anomalies, explain causes in natural language, and guide customers toward resolution. We achieved a 97% real-time resolution rate through a blend of conversational AI and live support.

    Most companies have long-standing friction points that support teams routinely explain away, or that you’ve developed organizational calluses over: problems considered “just the cost of doing business.” GenAI allows leaders to revisit these areas and reimagine what’s possible.

    2. The need for (dual) speed

    We hear stories of leaders pushing an “all or nothing” version of AI transformation: Use AI to cut functional headcount or die. Rather than leading with a “stick” through wholesale transformation mandates or threats to budgets, we must recognize AI implementation as a fundamental culture change. Just as you wouldn’t expect to transform your company culture overnight by edict, it’s unreasonable to expect anything different from your AI transformation. Some leaders have a tendency to move faster than the innovation ability or comfort level of their people. Most functional leads aren’t being obstinate in their slow adoption of AI tools or in their long-held beliefs about how to run a process or assess risk. We hired these leaders for their decades of experience in “what good looks like” and deep expertise in incremental improvements; then we expect them to suddenly define a futuristic vision that challenges their own beliefs. As executive leaders, we must give grace, space, and plenty of “carrots” -- incentives, training, and support resources -- to help them reimagine complex workflows with AI. And we must recognize that AI can make progress in ways that may not immediately create cost efficiencies, such as operational improvements that require data cleansing, deep analytics, forecasting, dynamic pricing, and signal sensing. These aren’t the sexy parts of AI, but they’re the types of issues that demand the superhuman intelligence and complex problem-solving AI was made for.

    3. A flywheel of acceleration

    The other transformation AI should support is creating faster and broader “test and learn” cycles. AI implementation is not a linear process with a start here and an end there. Organizations that want to leverage AI as a competitive advantage should establish use cases where AI can break down company silos and act as a catalyst to identify the next opportunity -- and that one the next -- as a flywheel of acceleration. This flywheel builds on accumulated learnings, turning small successes into larger wins while avoiding the costly AI disasters that come from rushed implementation. For example, at Twilio we are building a customer intelligence platform that analyzes thousands of conversations to identify patterns and drive insights. If we see multiple customers mention a competitor’s pricing, it could signal a take-out campaign. What once took weeks to recognize and escalate can now be done in near real time and used for highly coordinated activations across marketing, product, sales, and other teams. With every AI acceleration win, we uncover more places to improve hand-offs, activation speed, and business decision-making. That flywheel of innovation is how true AI transformation begins to drive impactful business outcomes.

    Ideas to Fuel Your AI Strategy

    Organizations can accelerate their AI implementations through these simple shifts in approach:

    • Revisit your long-standing friction points, both customer-facing and internal, across your organization -- particularly the ones you thought were “the cost of doing business”
    • Don’t just look for where AI can reduce manual processes; find the highly complex problems and start experimenting
    • Support your functional experts with AI-driven training, resources, tools, and incentives to help them challenge their long-held beliefs about what works for the future
    • Treat AI implementation as a cultural change that requires time, experimentation, learning, and carrots (not just sticks)
    • Recognize that transformation starts with a flywheel of acceleration, where each new experiment can lead to the next big discovery

    The most impactful AI implementations don’t rush transformation; they strategically accelerate core capabilities and unlock new ones to drive measurable change.

    About the Author: Ivy Grant is Senior Vice President of Strategy & Operations at Twilio, where she leads strategic planning, enterprise analytics, and M&A integration, and is responsible for driving transformational initiatives that enable Twilio to continuously improve its operations. Prior to Twilio, Ivy’s career balanced senior roles in strategy consulting at McKinsey & Company, Edelman, and PwC with customer-centric operational roles at Walmart, Polo Ralph Lauren, and tech startup Eversight Labs. She loves solo international travel, hugging exotic animals, and boxing. Ivy has an MBA from NYU’s Stern School of Business and a BS in Applied Economics from Cornell University.
    WWW.INFORMATIONWEEK.COM
  • Too big, fail too

    Inside Apple’s high-gloss standoff with AI ambition and the uncanny choreography of WWDC 2025

    There was a time when watching an Apple keynote — like Steve Jobs introducing the iPhone in 2007, the masterclass of all masterclasses in product launching — felt like watching a tightrope act. There was suspense. Live demos happened — sometimes they failed, and when they didn’t, the applause was real, not piped through a Dolby mix.

    These days, that tension is gone. Since 2020, in the wake of the pandemic, Apple events have become pre-recorded masterworks: drone shots sweeping over Apple Park, transitions smoother than a Pixar short, and executives delivering their lines like odd, IRL spatial personas. They move like human renderings: poised, confident, and just robotic enough to raise a brow. The kind of people who, if encountered in real life, would probably light up half a dozen red flags before a handshake is even offered. A case in point: the official “Liquid Glass” UI demo — visually stunning, yes, but also uncanny, like a concept reel that forgot it needed to ship.

    That’s the paradox. Not only has Apple trimmed down the content of WWDC, it has also polished the delivery into something almost inhumanly controlled. Every keynote beat feels engineered to avoid risk, reduce friction, and glide past doubt. But in doing so, something vital slips away: the tension, the spontaneity, the sense that the future is being made, not just performed.

    Just one year earlier, WWDC 2024 opened with a cinematic cold open “somewhere over California”: Schiller piloting an Apple-branded plane, iPod in hand, muttering “I’m getting too old for this stuff.” A perfect mix of Lethal Weapon camp and a winking message that yes, Classic-Apple was still at the controls — literally — flying its senior leadership straight toward Cupertino.
Out the hatch, like high-altitude paratroopers of optimism, leapt the entire exec team, with Craig Federighi, always the go-to for Apple’s auto-ironic set pieces, leading the charge, donning a helmet literally resembling his own legendary mane. It was peak-bold, bizarre, and unmistakably Apple. That intro now reads like the final act of full-throttle confidence.This year’s WWDC offered a particularly crisp contrast. Aside from the new intro — which features Craig Federighi drifting an F1-style race car across the inner rooftop ring of Apple Park as a “therapy session”, a not-so-subtle nod to the upcoming Formula 1 blockbuster but also to the accountability for the failure to deliver the system-wide AI on time — WWDC 2025 pulled back dramatically. The new “Apple Intelligence” was introduced in a keynote with zero stumbles, zero awkward transitions, and visuals so pristine they could have been rendered on a Vision Pro. Not only had the scope of WWDC been trimmed down to safer talking points, but even the tone had shifted — less like a tech summit, more like a handsomely lit containment-mode seminar. And that, perhaps, was the problem. The presentation wasn’t a reveal — it was a performance. And performances can be edited in post. Demos can’t.So when Apple in march 2025 quietly admitted, for the first time, in a formal press release addressed to reporters like John Gruber, that the personalized Siri and system-wide AI features would be delayed — the reaction wasn’t outrage. It was something subtler: disillusionment. Gruber’s response cracked the façade wide open. His post opened a slow but persistent wave of unease, rippling through developer Slack channels and private comment threads alike. John Gruber’s reaction, published under the headline “Something is rotten in the State of Cupertino”, was devastating. 
His critique opened the floodgates to a wave of murmurs and public unease among developers and insiders, many of whom had begun to question what was really happening at the helm of key divisions central to Apple’s future.Many still believe Apple is the only company truly capable of pulling off hardware-software integrated AI at scale. But there’s a sense that the company is now operating in damage-control mode. The delay didn’t just push back a feature — it disrupted the entire strategic arc of WWDC 2025. What could have been a milestone in system-level AI became a cautious sidestep, repackaged through visual polish and feature tweaks. The result: a presentation focused on UI refinements and safe bets, far removed from the sweeping revolution that had been teased as the main selling point for promoting the iPhone 16 launch, “Built for Apple Intelligence”.That tension surfaced during Joanna Stern’s recent live interview with Craig Federighi and Greg Joswiak. These are two of Apple’s most media-savvy execs, and yet, in a setting where questions weren’t scripted, you could see the seams. Their usual fluency gave way to something stiffer. More careful. Less certain. And even the absences speak volumes: for the first time in a decade, no one from Apple’s top team joined John Gruber’s Talk Show at WWDC. It wasn’t a scheduling fluke — nor a petty retaliation for Gruber’s damning March article. It was a retreat — one that Stratechery’s Ben Thompson described as exactly that: a strategic fallback, not a brave reset.Meanwhile, the keynote narrative quietly shifted from AI ambition to UI innovation: new visual effects, tighter integration, call screening. Credit here goes to Alan Dye — Apple VP of Human Interface Design and one of the last remaining members of Jony Ive’s inner circle not yet absorbed into LoveFrom — whose long-arc work on interface aesthetics, from the early stages of the Dynamic Island onward, is finally starting to click into place. 
This is classic Apple: refinement as substance, design as coherence. But it was meant to be the cherry on top of a much deeper AI-system transformation — not the whole sundae. All useful. All safe. And yet, the thing that Apple could uniquely deliver — a seamless, deeply integrated, user-controlled and privacy-safe Apple Intelligence — is now the thing it seems most reluctant to show.There is no doubt the groundwork has been laid. And to Apple’s credit, Jason Snell notes that the company is shifting gears, scaling ambitions to something that feels more tangible. But in scaling back the risk, something else has been scaled back too: the willingness to look your audience of stakeholders, developers and users live, in the eye, and show the future for how you have carefully crafted it and how you can put it in the market immediately, or in mere weeks. Showing things as they are, or as they will be very soon. Rehearsed, yes, but never faked.Even James Dyson’s live demo of a new vacuum showed more courage. No camera cuts. No soft lighting. Just a human being, showing a thing. It might have sucked, literally or figuratively. But it didn’t. And it stuck. That’s what feels missing in Cupertino.Some have started using the term glasslighting — a coined pun blending Apple’s signature glassy aesthetics with the soft manipulations of marketing, like a gentle fog of polished perfection that leaves expectations quietly disoriented. It’s not deception. It’s damage control. But that instinct, understandable as it is, doesn’t build momentum. It builds inertia. And inertia doesn’t sell intelligence. It only delays the reckoning.Before the curtain falls, it’s hard not to revisit the uncanny polish of Apple’s speakers presence. One might start to wonder whether Apple is really late on AI — or whether it’s simply developed such a hyper-advanced internal model that its leadership team has been replaced by real-time human avatars, flawlessly animated, fed directly by the Neural Engine. 
Not the constrained humanity of two floating eyes behind an Apple Vision headset, but full-on flawless embodiment — if this is Apple’s augmented AI at work, it may be the only undisclosed and underpromised demo actually shipping.OS30 live demoMeanwhile, just as Apple was soft-pedaling its A.I. story with maximum visual polish, a very different tone landed from across the bay: Sam Altman and Jony Ive, sitting in a bar, talking about the future. stage. No teleprompter. No uncanny valley. Just two “old friends”, with one hell of a budget, quietly sketching the next era of computing. A vision Apple once claimed effortlessly.There’s still the question of whether Apple, as many hope, can reclaim — and lock down — that leadership for itself. A healthy dose of competition, at the very least, can only help.Too big, fail too was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • Build A Rocket Boy execs depart one week before MindsEye launches

    Two executives at Build A Rocket Boy have parted ways with the studio a mere week ahead of the launch of its next title, MindsEye, on June 10.

    Spotted by Eurogamer, Chief Legal Officer Riley Graebner and Chief Financial Officer Paul Bland have departed the company. They announced their departures on LinkedIn.

    Graebner, who was at the studio for three and a half years, said he’s proud of what the team accomplished. He mentioned doubling the size of the company to over 450 people and building a legal team and legal ops infrastructure as highlights of his tenure.

    "There are so many people who have made my time at BARB memorable and I fear forgetting essential names," Graebner wrote. "A heartfelt thank you will have to do."

    A "concerted effort" to "trash" MindsEye and Build A Rocket Boy

    The news comes days after a user in the MindsEye Discord server asked studio co-CEO Mark Gerhard if the people who reacted negatively to the game were financed by someone, to which he replied "100%" (via VG247). He then claimed that there's been a "concerted effort to trash the game and the studio."

    Build A Rocket Boy, which is spearheaded by former Grand Theft Auto V producer Leslie Benzies, is following up on a tumultuous 2024. In January, the studio received $110 million to continue building its immersive game platform Everywhere, announced back in 2022 around vague mentions of a blockchain component, as well as MindsEye and UGC design tools. Then, it laid off an unknown number of staff just over four weeks later.

    In October, Hitman creator IO Interactive announced a partnership as publisher for MindsEye, as well as the Everywhere platform. Two months later, the studio acquired PlayFusion. As part of the deal, Gerhard, who was CEO and CTO at PlayFusion, joined Build A Rocket Boy as co-CEO.

    "We’ve always admired PlayFusion’s creativity and innovative approach to transmedia entertainment, and joining forces with them will ascend Build a Rocket Boy to our next level of excellence," Benzies said at the time.
  • Nintendo is bringing one of its exclusive games to PC claims Microsoft website

    GameCentral

    Published May 31, 2025 3:51pm

    Updated May 31, 2025 4:02pm

    Xenoblade Chronicles X: Definitive Edition – not available on PC

    A listing for a Nintendo first party game has appeared on Microsoft’s Edge Game Assist webpage, and it’s either a mistake or the biggest news in gaming for a decade.
    Although Sony has finally embraced the PC market, it seems impossible to imagine that any Nintendo-made game would ever appear on a modern PC, especially given some of the failed experiments of the 90s, with titles like Mario’s Game Gallery.
    The question must certainly have come up amongst Nintendo’s execs, and you can guarantee that Microsoft has encouraged them to release games on the format, but there’s never been any outward sign that they’ve considered it… until now.
    To be clear, this is almost certainly a mistake of some kind, but nevertheless, the recently re-released Xenoblade Chronicles X is currently listed as one of various ordinary PC games that are ‘enhanced for Microsoft Edge Game Assist.’
    We’ll be honest: we’d never heard of Edge Game Assist until now, and we imagine most other people hadn’t either, but according to Microsoft, ‘for a selection of popular PC games’ it highlights ‘helpful resources whenever you open a new tab. Many popular PC games are already enhanced for Game Assist, with more on the way.’
    While there is an infinitesimally small chance that Microsoft has convinced Nintendo to release games on PC, and that fact has been accidentally revealed early – ahead of the Xbox Games Showcase next Sunday – the much more likely explanation is that this is some kind of error.
    Perhaps it’s the work of a disgruntled employee or intern, but it’s a very odd mistake for a human to make, and yet it seems like exactly the sort of error an AI would make.
    Microsoft is obsessed with AI at the moment, in terms of both using it and selling it to others, and doesn’t seem to care whether it does what it’s supposed to or not – reportedly Xbox uses it for language translations in Europe, even for things as important as the Xbox dashboard, and there are often very obvious mistakes.
    Given how unpopular Xbox is on the Continent, you might have thought they’d learn from that, but it seems not.
    As it is, at time of writing, Xenoblade Chronicles X is still listed amongst the supported games. You can’t see what kind of assistance is being offered though, as you have to start the game first… which doesn’t exist on PC.

    More Trending

    Xenoblade Chronicles X: Definitive Edition was released on Nintendo Switch this March, as the last major Wii U game to be ported to the format.
    That means that all the Xenoblade Chronicles games are now available on Switch, following Nintendo’s move to buy 100% of developer Monolith Soft – who they also use as a support studio for major games such as Zelda: Breath Of the Wild.
    A new Xenoblade game is expected early on in the Switch 2’s lifespan, and Monolith Soft is already working on a new role-playing game of some sort.
    So, the chances of Microsoft teaming up with Nintendo to release Xenoblade, or any other exclusive, on PC seem minuscule. And mistakes like this are only likely to put Nintendo off the idea even more.

    Another AI blunder? (Microsoft)

  • OpenAI wants ChatGPT to be a ‘super assistant’ for every part of your life

    Thanks to the legal discovery process, Google’s antitrust trial with the Department of Justice has provided a fascinating glimpse into the future of ChatGPT.

    An internal OpenAI strategy document titled “ChatGPT: H1 2025 Strategy” describes the company’s aspiration to build an “AI super assistant that deeply understands you and is your interface to the internet.” Although the document is heavily redacted in parts, it reveals that OpenAI aims for ChatGPT to soon develop into much more than a chatbot.

    “In the first half of next year, we’ll start evolving ChatGPT into a super-assistant: one that knows you, understands what you care about, and helps with any task that a smart, trustworthy, emotionally intelligent person with a computer could do,” reads the document from late 2024. “The timing is right. Models like 02 and 03 are finally smart enough to reliably perform agentic tasks, tools like computer use can boost ChatGPT’s ability to take action, and interaction paradigms like multimodality and generative UI allow both ChatGPT and users to express themselves in the best way for the task.”

    The document goes on to describe a “super assistant” as “an intelligent entity with T-shaped skills” for both widely applicable and niche tasks. “The broad part is all about making life easier: answering a question, finding a home, contacting a lawyer, joining a gym, planning vacations, buying gifts, managing calendars, keeping track of todos, sending emails.” It mentions coding as an early example of a more niche task.

    Even when reading around the redactions, it’s clear that OpenAI sees hardware as essential to its future, and that it wants people to think of ChatGPT as not just a tool, but a companion. This tracks with Sam Altman recently saying that young people are using ChatGPT like a “life advisor.”

    “Today, ChatGPT is in our lives through existing form factors — our website, phone, and desktop apps,” another part of the strategy document reads. “But our vision for ChatGPT is to help you with all of your life, no matter where you are. At home, it should help answer questions, play music, and suggest recipes. On the go, it should help you get to places, find the best restaurants, or catch up with friends. At work, it should help you take meeting notes, or prepare for the big presentation. And on solo walks, it should help you reflect and wind down.”

    At the same time, OpenAI finds itself in a wobbly position. Its infrastructure isn’t able to handle ChatGPT’s rising usage, which explains Altman’s focus on building data centers. In a section of the document describing AI chatbot competition, the company writes that “we are leading here, but we can’t rest,” and that “growth and revenue won’t line up forever.” It acknowledges that there are “powerful incumbents who will leverage their distribution to advantage their own products,” and states that OpenAI will advocate for regulation that requires other platforms to allow people to set ChatGPT as the default assistant. (Coincidentally, Apple is rumored to soon let iOS users also select Google’s Gemini for Siri queries. Meta AI just hit one billion users as well, thanks mostly to its many hooks in Instagram, WhatsApp, and Facebook.)

    “We have what we need to win: one of the fastest-growing products of all time, a category-defining brand, a research lead (reasoning, multimodal), a compute lead, a world-class research team, and an increasing number of effective people with agency who are motivated to ship,” the OpenAI document states. “We don’t rely on ads, giving us flexibility on what to build. Our culture values speed, bold moves, and self-disruption. Maintaining these advantages is hard work but, if we do, they will last for a while.”

    Elsewhere

    Apple chickens out: For the first time in a decade, Apple won’t have its execs participate in John Gruber’s annual post-WWDC live podcast. Gruber recently wrote the viral “something is rotten in the state of Cupertino” essay, which was widely discussed in Apple circles. Although he hasn’t publicly connected that critical piece to the company backing out of his podcast, it’s easy to see the throughline. It says a lot about the state of Apple when its leaders don’t even want to participate in what has historically been a friendly forum.

    Elon was high: As Elon Musk attempts to reframe the public’s view of him by doing interviews about SpaceX, The New York Times reports that last year he was taking so much ketamine that it “was affecting his bladder.” He also reportedly “traveled with a daily medication box that held about 20 pills, including ones with the markings of the stimulant Adderall.” Both Musk and the White House have had multiple opportunities to directly refute this report, and they have not. Now, Musk is at least partially stepping away from DOGE along with key lieutenants like Steve Davis. DOGE may be a failure based on Musk’s own stated hopes for spending cuts, but his closeness to Trump has certainly helped rescue X from financial ruin and grown SpaceX’s business. Now, the more difficult work begins: saving Tesla.

    Overheard

    “The way we do ranking is sacrosanct to us.” - Google CEO Sundar Pichai on Decoder, explaining why the company’s search results won’t be changed for President Trump or anyone else.

    “Compared to previous technology changes, I’m a little bit more worried about the labor impact… Yes, people will adapt, but they may not adapt fast enough.” - Anthropic CEO Dario Amodei on CNN, raising the alarm about the technology he is developing.

    “Meta is a very different company than it was nine years ago when they fired me.” - Anduril founder Palmer Luckey, telling Ashlee Vance why he is linking up with Mark Zuckerberg to make headsets for the military.

    Personnel log

    The flattening of Meta’s AI organization has taken effect, with VP Ahmad Al-Dahle no longer overseeing the entire group. Now, he co-leads “AGI Foundations” with Amir Frenkel, VP of engineering, while Connor Hayes runs all AI products. All three men now report to Meta CPO Chris Cox, who has diplomatically framed the changes as a way to “give each org more ownership.”

    Xbox co-founder J Allard is leading a new ‘breakthrough’ devices group at Amazon called ZeroOne. One of the devices will be smart home-related, according to job listings.

    C.J. Mahoney, a former Trump administration official, is being promoted to general counsel at Microsoft, which has also hired Lisa Monaco from the last Biden administration to lead global policy.

    Reed Hastings is joining the board of Anthropic “because I believe in their approach to AI development, and to help humanity progress.” (He’s joining Anthropic’s corporate board, not the supervising board of its public benefit trust that can hire and fire corporate directors.)

    Sebastian Barrios, previously SVP at Mercado Libre, is joining Roblox as SVP of engineering for several areas, including ads, game discovery, and the company’s virtual currency work.

    Fidji Simo’s replacement at Instacart will be chief business officer Chris Rogers, who will become the company’s next CEO on August 15th after she officially joins OpenAI.

    Link list

    If you haven’t already, don’t forget to subscribe to The Verge, which includes unlimited access to Command Line and all of our reporting.

    As always, I welcome your feedback, especially if you have thoughts on this issue or a story idea to share. You can respond here or ping me securely on Signal.

    Thanks for subscribing.
    #openai #wants #chatgpt #super #assistant
    OpenAI wants ChatGPT to be a ‘super assistant’ for every part of your life
    Thanks to the legal discovery process, Google’s antitrust trial with the Department of Justice has provided a fascinating glimpse into the future of ChatGPT.An internal OpenAI strategy document titled “ChatGPT: H1 2025 Strategy” describes the company’s aspiration to build an “AI super assistant that deeply understands you and is your interface to the internet.” Although the document is heavily redacted in parts, it reveals that OpenAI aims for ChatGPT to soon develop into much more than a chatbot. “In the first half of next year, we’ll start evolving ChatGPT into a super-assistant: one that knows you, understands what you care about, and helps with any task that a smart, trustworthy, emotionally intelligent person with a computer could do,” reads the document from late 2024. “The timing is right. Models like 02 and 03 are finally smart enough to reliably perform agentic tasks, tools like computer use can boost ChatGPT’s ability to take action, and interaction paradigms like multimodality and generative UI allow both ChatGPT and users to express themselves in the best way for the task.”The document goes on to describe a “super assistant” as “an intelligent entity with T-shaped skills” for both widely applicable and niche tasks. “The broad part is all about making life easier: answering a question, finding a home, contacting a lawyer, joining a gym, planning vacations, buying gifts, managing calendars, keeping track of todos, sending emails.” It mentions coding as an early example of a more niche task.Even when reading around the redactions, it’s clear that OpenAI sees hardware as essential to its future, and that it wants people to think of ChatGPT as not just a tool, but a companion. This tracks with Sam Altman recently saying that young people are using ChatGPT like a “ life advisor.”“Today, ChatGPT is in our lives through existing form factors — our website, phone, and desktop apps,” another part of the strategy document reads. 
“But our vision for ChatGPT is to help you with all of your life, no matter where you are. At home, it should help answer questions, play music, and suggest recipes. On the go, it should help you get to places, find the best restaurants, or catch up with friends. At work, it should help you take meeting notes, or prepare for the big presentation. And on solo walks, it should help you reflect and wind down.” At the same time, OpenAI finds itself in a wobbly position. Its infrastructure isn’t able to handle ChatGPT’s rising usage, which explains Altman’s focus on building data centers. In a section of the document describing AI chatbot competition, the company writes that “we are leading here, but we can’t rest,” and that “growth and revenue won’t line up forever.” It acknowledges that there are “powerful incumbents who will leverage their distribution to advantage their own products,” and states that OpenAI will advocate for regulation that requires other platforms to allow people to set ChatGPT as the default assistant.“We have what we need to win: one of the fastest-growing products of all time, a category-defining brand, a research lead, a compute lead, a world-class research team, and an increasing number of effective people with agency who are motivated to ship,” the OpenAI document states. “We don’t rely on ads, giving us flexibility on what to build. Our culture values speed, bold moves, and self-disruption. Maintaining these advantages is hard work but, if we do, they will last for a while.”ElsewhereApple chickens out: For the first time in a decade, Apple won’t have its execs participate in John Gruber’s annual post-WWDC live podcast. Gruber recently wrote the viral “something is rotten in the state of Cupertino” essay, which was widely discussed in Apple circles. Although he hasn’t publicly connected that critical piece to the company backing out of his podcast, it’s easy to see the throughline. 
It says a lot about the state of Apple when its leaders don’t even want to participate in what has historically been a friendly forum.Elon was high: As Elon Musk attempts to reframe the public’s view of him by doing interviews about SpaceX, The New York Times reports that last year, he was taking so much ketamine that it “was affecting his bladder.” He also reportedly “traveled with a daily medication box that held about 20 pills, including ones with the markings of the stimulant Adderall.” Both Musk and the White House have had multiple opportunities to directly refute this report, and they have not. Now, Musk is at least partially stepping away from DOGE along with key lieutenants like Steve Davis. DOGE may be a failure based on Musk’s own stated hopes for spending cuts, but his closeness to Trump has certainly helped rescue X from financial ruin and grown SpaceX’s business. Now, the more difficult work begins: saving Tesla. Overheard“The way we do ranking is sacrosanct to us.” - Google CEO Sundar Pichai on Decoder, explaining why the company’s search results won’t be changed for President Trump or anyone else. “Compared to previous technology changes, I’m a little bit more worried about the labor impact… Yes, people will adapt, but they may not adapt fast enough.” - Anthropic CEO Dario Amodei on CNN raising the alarm about the technology he is developing. “Meta is a very different company than it was nine years ago when they fired me.” - Anduril founder Palmer Luckey telling Ashlee Vance why he is linking up with Mark Zuckerberg to make headsets for the military. Personnel logThe flattening of Meta’s AI organization has taken effect, with VP Ahmad Al-Dahle no longer overseeing the entire group. Now, he co-leads “AGI Foundations” with Amir Frenkel, VP of engineering, while Connor Hayes runs all AI products. 
All three men now report to Meta CPO Chris Cox, who has diplomatically framed the changes as a way to “give each org more ownership.”Xbox co-founder J Allard is leading a new ‘breakthrough’ devices group called ZeroOne. One of the devices will be smart home-related, according to job listings.C.J. Mahoney, a former Trump administration official, is being promoted to general counsel at Microsoft, which has also hired Lisa Monaco from the last Biden administration to lead global policy. Reed Hastings is joining the board of Anthropic “because I believe in their approach to AI development, and to help humanity progress.”Sebastian Barrios, previously SVP at Mercado Libre, is joining Roblox as SVP of engineering for several areas, including ads, game discovery, and the company’s virtual currency work.Fidji Simo’s replacement at Instacart will be chief business officer Chris Rogers, who will become the company’s next CEO on August 15th after she officially joins OpenAI.Link listMore to click on:If you haven’t already, don’t forget to subscribe to The Verge, which includes unlimited access to Command Line and all of our reporting.As always, I welcome your feedback, especially if you have thoughts on this issue or a story idea to share. You can respond here or ping me securely on Signal.Thanks for subscribing.See More: #openai #wants #chatgpt #super #assistant
    WWW.THEVERGE.COM
    OpenAI wants ChatGPT to be a ‘super assistant’ for every part of your life
    Thanks to the legal discovery process, Google’s antitrust trial with the Department of Justice has provided a fascinating glimpse into the future of ChatGPT.An internal OpenAI strategy document titled “ChatGPT: H1 2025 Strategy” describes the company’s aspiration to build an “AI super assistant that deeply understands you and is your interface to the internet.” Although the document is heavily redacted in parts, it reveals that OpenAI aims for ChatGPT to soon develop into much more than a chatbot. “In the first half of next year, we’ll start evolving ChatGPT into a super-assistant: one that knows you, understands what you care about, and helps with any task that a smart, trustworthy, emotionally intelligent person with a computer could do,” reads the document from late 2024. “The timing is right. Models like 02 and 03 are finally smart enough to reliably perform agentic tasks, tools like computer use can boost ChatGPT’s ability to take action, and interaction paradigms like multimodality and generative UI allow both ChatGPT and users to express themselves in the best way for the task.”The document goes on to describe a “super assistant” as “an intelligent entity with T-shaped skills” for both widely applicable and niche tasks. “The broad part is all about making life easier: answering a question, finding a home, contacting a lawyer, joining a gym, planning vacations, buying gifts, managing calendars, keeping track of todos, sending emails.” It mentions coding as an early example of a more niche task.Even when reading around the redactions, it’s clear that OpenAI sees hardware as essential to its future, and that it wants people to think of ChatGPT as not just a tool, but a companion. This tracks with Sam Altman recently saying that young people are using ChatGPT like a “ life advisor.”“Today, ChatGPT is in our lives through existing form factors — our website, phone, and desktop apps,” another part of the strategy document reads. 
“But our vision for ChatGPT is to help you with all of your life, no matter where you are. At home, it should help answer questions, play music, and suggest recipes. On the go, it should help you get to places, find the best restaurants, or catch up with friends. At work, it should help you take meeting notes, or prepare for the big presentation. And on solo walks, it should help you reflect and wind down.” At the same time, OpenAI finds itself in a wobbly position. Its infrastructure isn’t able to handle ChatGPT’s rising usage, which explains Altman’s focus on building data centers. In a section of the document describing AI chatbot competition, the company writes that “we are leading here, but we can’t rest,” and that “growth and revenue won’t line up forever.” It acknowledges that there are “powerful incumbents who will leverage their distribution to advantage their own products,” and states that OpenAI will advocate for regulation that requires other platforms to allow people to set ChatGPT as the default assistant. (Coincidentally, Apple is rumored to soon let iOS users also select Google’s Gemini for Siri queries. Meta AI just hit one billion users as well, thanks mostly to its many hooks in Instagram, WhatsApp, and Facebook.) “We have what we need to win: one of the fastest-growing products of all time, a category-defining brand, a research lead (reasoning, multimodal), a compute lead, a world-class research team, and an increasing number of effective people with agency who are motivated to ship,” the OpenAI document states. “We don’t rely on ads, giving us flexibility on what to build. Our culture values speed, bold moves, and self-disruption. Maintaining these advantages is hard work but, if we do, they will last for a while.”ElsewhereApple chickens out: For the first time in a decade, Apple won’t have its execs participate in John Gruber’s annual post-WWDC live podcast. 
Gruber recently wrote the viral “something is rotten in the state of Cupertino” essay, which was widely discussed in Apple circles. Although he hasn’t publicly connected that critical piece to the company backing out of his podcast, it’s easy to see the throughline. It says a lot about the state of Apple when its leaders don’t even want to participate in what has historically been a friendly forum.Elon was high: As Elon Musk attempts to reframe the public’s view of him by doing interviews about SpaceX, The New York Times reports that last year, he was taking so much ketamine that it “was affecting his bladder.” He also reportedly “traveled with a daily medication box that held about 20 pills, including ones with the markings of the stimulant Adderall.” Both Musk and the White House have had multiple opportunities to directly refute this report, and they have not. Now, Musk is at least partially stepping away from DOGE along with key lieutenants like Steve Davis. DOGE may be a failure based on Musk’s own stated hopes for spending cuts, but his closeness to Trump has certainly helped rescue X from financial ruin and grown SpaceX’s business. Now, the more difficult work begins: saving Tesla. Overheard“The way we do ranking is sacrosanct to us.” - Google CEO Sundar Pichai on Decoder, explaining why the company’s search results won’t be changed for President Trump or anyone else. “Compared to previous technology changes, I’m a little bit more worried about the labor impact… Yes, people will adapt, but they may not adapt fast enough.” - Anthropic CEO Dario Amodei on CNN raising the alarm about the technology he is developing. “Meta is a very different company than it was nine years ago when they fired me.” - Anduril founder Palmer Luckey telling Ashlee Vance why he is linking up with Mark Zuckerberg to make headsets for the military. Personnel logThe flattening of Meta’s AI organization has taken effect, with VP Ahmad Al-Dahle no longer overseeing the entire group. 
Now, he co-leads “AGI Foundations” with Amir Frenkel, VP of engineering, while Connor Hayes runs all AI products. All three men now report to Meta CPO Chris Cox, who has diplomatically framed the changes as a way to “give each org more ownership.”Xbox co-founder J Allard is leading a new ‘breakthrough’ devices group at Amazon called ZeroOne. One of the devices will be smart home-related, according to job listings.C.J. Mahoney, a former Trump administration official, is being promoted to general counsel at Microsoft, which has also hired Lisa Monaco from the last Biden administration to lead global policy. Reed Hastings is joining the board of Anthropic “because I believe in their approach to AI development, and to help humanity progress.” (He’s joining Anthropic’s corporate board, not the supervising board of its public benefit trust that can hire and fire corporate directors.)Sebastian Barrios, previously SVP at Mercado Libre, is joining Roblox as SVP of engineering for several areas, including ads, game discovery, and the company’s virtual currency work.Fidji Simo’s replacement at Instacart will be chief business officer Chris Rogers, who will become the company’s next CEO on August 15th after she officially joins OpenAI.Link listMore to click on:If you haven’t already, don’t forget to subscribe to The Verge, which includes unlimited access to Command Line and all of our reporting.As always, I welcome your feedback, especially if you have thoughts on this issue or a story idea to share. You can respond here or ping me securely on Signal.Thanks for subscribing.See More:
  • The Real Life Tech Execs That Inspired Jesse Armstrong’s Mountainhead

    Jesse Armstrong loves to pull fictional stories out of reality. His universally acclaimed TV show Succession, for instance, was inspired by real-life media dynasties like the Murdochs and the Hearsts. Similarly, his newest film Mountainhead centers on characters who share key traits with the tech world's most powerful leaders: Elon Musk, Mark Zuckerberg, Sam Altman, and others.

    Mountainhead, which premieres on HBO on May 31 at 8 p.m. ET, portrays four top tech executives who retreat to a Utah hideaway as the AI deepfake tools newly released by one of their companies wreak havoc across the world. As the believable deepfakes inflame hatred on social media and real-world violence, the comfortably appointed quartet mulls a global governmental takeover, intergalactic conquest, and immortality, before interpersonal conflict derails their plans.

    Armstrong tells TIME in a Zoom interview that he first became interested in writing a story about tech titans after reading books like Michael Lewis' Going Infinite (about Sam Bankman-Fried) and Ashlee Vance's Elon Musk: Tesla, SpaceX, and the Quest for a Fantastic Future, as well as journalistic profiles of Peter Thiel, Marc Andreessen, and others. He then built the story around the interplay between four character archetypes—the father, the dynamo, the usurper, and the hanger-on—and conducted extensive research so that his fictional executives reflected real ones. His characters, he says, aren't one-to-one matches, but "Frankenstein monsters with limbs sewn together."

    These characters are deeply flawed and destructive, to say the least. Armstrong says he did not intend for the film to be a wholly negative depiction of tech leaders and AI development. "I do try to take myself out of it, but obviously my sense of what this tech does and could do infuses the piece. Maybe I do have some anxieties," he says. Armstrong contends that the film is more so channeling fears that AI leaders themselves have warned about. "If somebody who knows the technology better than anyone in the world thinks there's a 1/5th chance that it's going to wipe out humanity—and they're some of the optimists—I think that's legitimately quite unnerving," he says.

    Here's how each of the characters in Mountainhead resembles real-world tech leaders. This article contains spoilers.

    Venis (Cory Michael Smith) is the dynamo.

    Cory Michael Smith in Mountainhead. Macall Polay—HBO

    Venis is Armstrong's "dynamo": the richest man in the world, who has gained his wealth from his social media platform Traam and its 4 billion users. Venis is ambitious, juvenile, and self-centered, even questioning whether other people are as real as him and his friends. Venis' first obvious comp is Elon Musk, the richest man in the real world. Like Musk, Venis is obsessed with going to outer space and with using his enormous war chest to build hyperscale data centers to create powerful anti-woke AI systems. Venis also has a strange relationship with his child, essentially using it as a prop to help him through his own emotional turmoil. Throughout the movie, others caution Venis to shut down his deepfake AI tools, which have led to military conflict and the desecration of holy sites across the world. Venis rebuffs them, saying that people just need to adapt to technological changes and focus on the cool art being made. This argument is similar to those made by Sam Altman, who has argued that OpenAI needs to unveil ChatGPT and other cutting-edge tools as fast as possible in order to show the public the power of the technology. Like Mark Zuckerberg, Venis presides over a massively popular social media platform that some have accused of ignoring harms in favor of growth. Just as Amnesty International accused Meta of having "substantially contributed" to human rights violations perpetrated against Myanmar's Rohingya ethnic group, Venis complains of the UN being "up his ass for starting a race war."

    Randall (Steve Carell) is the father.

    Steve Carell in Mountainhead. Macall Polay—HBO

    The group's eldest member is Randall, an investor and technologist who resembles Marc Andreessen and Peter Thiel in his lofty philosophizing and quest for immortality. Like Andreessen, Randall is a staunch accelerationist who believes that U.S. companies need to develop AI as fast as possible, both to prevent the Chinese from controlling the technology and to ostensibly ignite a new American utopia in which productivity, happiness, and health flourish. Randall's power comes from the fact that he was Venis' first investor, just as Thiel was an early investor in Facebook. While Andreessen pens manifestos about technological advancement, Randall paints his mission in grandiose, historical terms, using anti-democratic, sci-fi-inflected language that resembles that of the philosopher Curtis Yarvin, who has been funded and promoted by Thiel over his career. Randall's justification of murder through utilitarian and Kantian lenses calls to mind Sam Bankman-Fried's extensive philosophizing, which included a declaration that he would roll the dice on killing everyone on earth if there was a 51% chance of creating a second earth. Bankman-Fried's approach—embracing risk and harm in order to reap massive rewards—led to his conviction for massive financial fraud. Randall is also obsessed with longevity, just like Thiel, who has railed for years against the "inevitability of death" and yearns for "super-duper medical treatments" that would render him immortal.

    Jeff (Ramy Youssef) is the usurper.

    Ramy Youssef in Mountainhead. Macall Polay—HBO

    Jeff is a technologist who often serves as the movie's conscience, slinging criticisms about the other characters. But he's also deeply embedded within their world, and he needs their resources, particularly Venis' access to computing power, to thrive. In the end, Jeff sells out his values for his own survival and well-being. AI skeptics have lobbed similar criticisms at the leaders of the main AI labs, including Altman—who started OpenAI as a nonprofit before attempting to restructure the company—as well as Demis Hassabis and Dario Amodei. Hassabis, the CEO of Google DeepMind and a winner of the 2024 Nobel Prize in Chemistry, is a rare scientist surrounded by businessmen and technologists. In order to pursue his AI dreams of curing disease and halting global warming, Hassabis enlisted with Google, inking a contract in 2014 that prohibited Google from using his technology for military applications. But that clause has since disappeared, and the AI systems developed under Hassabis are being sold, via Google, to militaries like Israel's. Another parallel can be drawn between Jeff and Amodei, an AI researcher who defected from OpenAI after becoming worried that the company was cutting back its safety measures, and then formed his own company, Anthropic. Amodei has urged governments to create AI guardrails and has warned about the potentially catastrophic effects of the AI industry's race dynamics. But some have criticized Anthropic for operating similarly to OpenAI, prioritizing scale in a way that exacerbates competitive pressures.

    Souper (Jason Schwartzman) is the hanger-on.

    Jason Schwartzman in Mountainhead. Macall Polay—HBO

    Every quartet needs its Turtle or its Ringo: a clear fourth wheel to serve as a punching bag for the rest of the group's alpha males. Mountainhead's hanger-on is Souper, thus named because he has soup kitchen money compared to the rest (hundreds of millions as opposed to billions of dollars). To prove his worth, he's fixated on getting funding for a meditation startup that he hopes will eventually become an "everything app." No tech exec would want to be compared to Souper, who has a clear inferiority complex. But plenty of tech leaders have emphasized the importance of meditation and mindfulness—including Twitter co-founder and Square CEO Jack Dorsey, who often goes on meditation retreats.

    Armstrong, in his interview, declined to answer specific questions about his characters' inspirations, but conceded that some of the speculations were in the right ballpark. "For people who know the area well, it's a little bit of a fun house mirror in that you see something and are convinced that it's them," he says. "I think all of those people featured in my research. There's bits of Andreessen and David Sacks and some of those philosopher types. It's a good parlor game to choose your Frankenstein limbs."
  • This week was low-key the worst in modern video game history – Reader’s Feature

    Marathon – just one of the problems highlighted this week

    A reader is disturbed by the amount of bad news in the video game world at the moment, especially as most of it involves issues that have been brewing for many months.
    We are in a strange situation right now with video games, where almost all the news is terrible and yet great games continue to be released. This has the side effect of masking the serious issues from many gamers, who either don’t know or don’t care about what’s really going on.
    As long as games as good as Clair Obscur: Expedition 33 and Monster Hunter Wilds are still coming out then everything must be fine, right? Wrong.
    What disturbed me this week, while reading the Metro, is that, apart from job cuts, it had examples of all the biggest problems going on. What I found extra worrying is that not only did they all happen at the same time, but they’re all long-running issues that show absolutely no sign of being fixed.
    Perhaps the most obvious problem was the growing inevitability of £80 becoming the default price for big name games. At this point it’d be a victory if it only increases to £80, because GTA 6 will almost certainly be more.
    What was so awful about this week’s news is that we had two big industry figures telling us that actually we shouldn’t complain, we should get a second job to afford the games and just eat the cost.
    I wouldn’t necessarily expect better from someone like Randy Pitchford, but hearing the ex-Sony guy saying that we shouldn’t complain just shows how out of touch these execs and decision makers are. Increasing costs will lower the number of games people buy and that means a lot of titles and companies are just going to have the door slammed in their faces.
    People’s money is not going to stretch as far as it used to, and that is going to be a big problem for some games. Many are already predicting Marathon will either be a flop or be cancelled before it gets a chance, and it’s not hard to see why. Nothing about it looks appealing, and Sony seems to be looking for any excuse to shut down Bungie completely, at the cost of hundreds of jobs.


    Oh, and the reason for Bungie’s downfall? Corporate greed, according to the people that used to work there. That’s not exactly shocking news but there it is in black and white: all these problems could’ve been avoided if Bungie’s bosses had thought of the company first and not themselves.
    But then we also had the revelation that the boss of Take-Two doesn’t play video games and has no interest in trying out GTA 6, even though he totally could. This is also a massive non-surprise and probably very common in the games industry, where decisions are made on a spreadsheet and not from a place of passion or ambition.
    Sometimes they just seem to lack basic competence though, such as the lack of any plan for when games become too expensive and time consuming to make – a problem they must have seen coming years ago. This was illustrated perfectly this week by Hideo Kojima saying his Metal Gear spiritual sequel won’t be out for five or six years, even though he announced it over a year ago.
    The amount of time it takes to make a game is out of control, but nobody is doing anything about it. And then, to finish, we had the rumour that Sony is only going to have a State of Play this summer, not a full showcase, or possibly nothing at all in terms of not-E3 events, thereby setting us up for another 12 months of no major announcements and only one or two releases.


    I don’t want to get anyone down, but I do think it’s important to point out that just because good games are still coming out it doesn’t mean that it’s not chaos behind the scenes, which ultimately is only going to lead to even greater disaster if none of the problems are dealt with.
    By reader Ollie

    Hideo Kojima will be approaching his 60s by the time Physint comes out

    The reader’s features do not necessarily represent the views of GameCentral or Metro.
    You can submit your own 500 to 600-word reader feature at any time, which if used will be published in the next appropriate weekend slot. Just contact us at gamecentral@metro.co.uk or use our Submit Stuff page and you won’t need to send an email.

    #this #week #was #lowkey #worst
    This week was low-key the worst in modern video game history – Reader’s Feature
    Marathon – just one of the problems highlighted this week (Bungie)

    A reader is disturbed by the amount of bad news in the video game world at the moment, especially as most of it involves issues that have been brewing for many months.

    We are in a strange situation right now with video games, where almost all the news is terrible and yet great games continue to be released. This has the side effect of masking the serious issues from many gamers, who either don’t know or don’t care about what’s really going on. As long as games as good as Clair Obscur: Expedition 33 and Monster Hunter Wilds are still coming out then everything must be fine, right? Wrong.

    What disturbed me this week, while reading the Metro, is that apart from job cuts (although they were implied) it had examples of all the biggest problems going on. What I found extra worrying is that not only did they all happen at the same time but they’re all long-running issues that show absolutely no sign of being fixed.

    Perhaps the most obvious problem was the growing inevitability of £80 becoming the default price for big name games. At this point it’d be a victory if it only increases to £80, because GTA 6 will almost certainly be more. What was so awful about this week’s news is that we had two big industry figures telling us that actually we shouldn’t complain, we should get a second job to afford the games and just eat the cost. I wouldn’t necessarily expect better from someone like Randy Pitchford, but hearing the ex-Sony guy saying that we shouldn’t complain just shows how out of touch these execs and decision makers are.

    Increasing costs will lower the number of games people buy, and that means a lot of titles and companies are just going to have the door slammed in their faces. People’s money is not going to stretch as far as it used to and that is going to be a big problem for some games. Many are already predicting Marathon will either be a flop or just cancelled before it gets a chance, and it’s not hard to see why. Nothing about it looks appealing, and Sony seems to be looking for any excuse to shut down Bungie completely, at the loss of hundreds of jobs.

    Oh, and the reason for Bungie’s downfall? Corporate greed, according to the people that used to work there. That’s not exactly shocking news but there it is in black and white: all these problems could’ve been avoided if Bungie’s bosses had thought of the company first and not themselves.

    But then we also had the revelation that the boss of Take-Two doesn’t play video games and has no interest in trying out GTA 6, even though he totally could. This is also a massive non-surprise and probably very common in the games industry, where decisions are made on a spreadsheet and not from a place of passion or ambition.

    Sometimes they just seem to lack basic competence though, such as the lack of any plan for when games become too expensive and time-consuming to make – a problem they must have seen coming years ago. This was illustrated perfectly this week by Hideo Kojima saying his Metal Gear spiritual sequel won’t be out for five or six years, even though he announced it over a year ago. The amount of time it takes to make a game is out of control, but nobody is doing anything about it.

    And then to finish we had the rumour that Sony is only going to have a State of Play this summer, not a full showcase, or possibly nothing at all in terms of not-E3 events. Thereby setting us up for another 12 months of no major announcements and only one or two releases.

    I don’t want to get anyone down, but I do think it’s important to point out that just because good games are still coming out it doesn’t mean that it’s not chaos behind the scenes, which ultimately is only going to lead to even greater disaster if none of the problems are dealt with.

    By reader Ollie

    Hideo Kojima will be approaching his 60s by the time Physint comes out (Lorne Thomson/Redferns)

    The reader’s features do not necessarily represent the views of GameCentral or Metro. You can submit your own 500 to 600-word reader feature at any time, which if used will be published in the next appropriate weekend slot. Just contact us at gamecentral@metro.co.uk or use our Submit Stuff page and you won’t need to send an email.
  • ChatGPT: Everything you need to know about the AI-powered chatbot

    ChatGPT, OpenAI’s text-generating AI chatbot, has taken the world by storm since its launch in November 2022. What started as a tool to supercharge productivity through writing essays and code with short text prompts has evolved into a behemoth with 300 million weekly active users.
    2024 was a big year for OpenAI, from its partnership with Apple on its generative AI offering, Apple Intelligence, to the release of GPT-4o with voice capabilities and the highly anticipated launch of its text-to-video model Sora.
    OpenAI also faced its share of internal drama, including the notable exits of high-level execs like co-founder and longtime chief scientist Ilya Sutskever and CTO Mira Murati. OpenAI has also been hit with lawsuits from Alden Global Capital-owned newspapers alleging copyright infringement, as well as an injunction from Elon Musk to halt OpenAI’s transition to a for-profit.
    In 2025, OpenAI is battling the perception that it’s ceding ground in the AI race to Chinese rivals like DeepSeek. The company has been trying to shore up its relationship with Washington as it simultaneously pursues an ambitious data center project, and as it reportedly lays the groundwork for one of the largest funding rounds in history.
    Below, you’ll find a timeline of ChatGPT product updates and releases, starting with the latest, which we’ve been updating throughout the year. If you have any other questions, check out our ChatGPT FAQ here.
    To see a list of 2024 updates, go here.
    Timeline of the most recent ChatGPT updates


    May 2025
    OpenAI CFO says hardware will drive ChatGPT’s growth
    OpenAI plans to purchase Jony Ive’s devices startup io for a reported $6.5 billion. Sarah Friar, OpenAI’s CFO, thinks the hardware will significantly enhance ChatGPT and broaden OpenAI’s reach to a larger audience in the future.
    OpenAI’s ChatGPT unveils its AI coding agent, Codex
    OpenAI has introduced its AI coding agent, Codex, powered by codex-1, a version of its o3 AI reasoning model designed for software engineering tasks. OpenAI says codex-1 generates more precise and “cleaner” code than o3. The coding agent may take anywhere from one to 30 minutes to complete tasks such as writing simple features, fixing bugs, answering questions about your codebase, and running tests.
    Sam Altman aims to make ChatGPT more personalized by tracking every aspect of a person’s life
    Asked by an attendee how ChatGPT can become more personalized, Sam Altman, the CEO of OpenAI, said during a recent AI event hosted by VC firm Sequoia that he wants ChatGPT to record and remember every detail of a person’s life.
    OpenAI releases its GPT-4.1 and GPT-4.1 mini AI models in ChatGPT
    OpenAI said in a post on X that it has launched its GPT-4.1 and GPT-4.1 mini AI models in ChatGPT.
    ChatGPT deep research can now analyze GitHub repos
    OpenAI has launched a new feature that lets ChatGPT deep research analyze code repositories on GitHub. The feature is in beta and lets developers connect GitHub to ask questions about codebases and engineering documents. The connector will soon be available for ChatGPT Plus, Pro, and Team users, with support for Enterprise and Education coming shortly, per an OpenAI spokesperson.
    OpenAI launches a new data residency program in Asia
    After introducing a data residency program in Europe in February, OpenAI has now launched a similar program in Asian countries including India, Japan, Singapore, and South Korea. The new program will be accessible to users of ChatGPT Enterprise, ChatGPT Edu, and API. It will help organizations in Asia meet their local data sovereignty requirements when using OpenAI’s products.
    OpenAI to introduce a program to grow AI infrastructure
    OpenAI is unveiling a program called OpenAI for Countries, which aims to develop the local infrastructure needed to better serve international AI clients. The AI startup will work with governments to help increase data center capacity and to customize OpenAI’s products for specific languages and local needs. OpenAI for Countries is part of efforts to support the company’s expansion of its Stargate AI data center project to new locations outside the U.S., per Bloomberg.
    OpenAI promises to make changes to prevent future ChatGPT sycophancy
    OpenAI has announced its plan to make changes to its procedures for updating the AI models that power ChatGPT, following an update that caused the platform to become overly sycophantic for many users.
    April 2025
    OpenAI clarifies the reason ChatGPT became overly flattering and agreeable
    OpenAI has published a post on the recent sycophancy issues with GPT-4o, the default AI model powering ChatGPT, which led the company to revert an update to the model released last week. CEO Sam Altman acknowledged the issue on Sunday and confirmed two days later that the GPT-4o update was being rolled back. OpenAI is working on “additional fixes” to the model’s personality. Over the weekend, users on social media had criticized the new model for making ChatGPT too validating and agreeable, and it quickly became a popular meme.
    OpenAI is working to fix a “bug” that let minors engage in inappropriate conversations
    An issue within OpenAI’s ChatGPT enabled the chatbot to create graphic erotic content for accounts registered by users under the age of 18, as demonstrated by TechCrunch’s testing, a fact later confirmed by OpenAI. “Protecting younger users is a top priority, and our Model Spec, which guides model behavior, clearly restricts sensitive content like erotica to narrow contexts such as scientific, historical, or news reporting,” a spokesperson told TechCrunch via email. “In this case, a bug allowed responses outside those guidelines, and we are actively deploying a fix to limit these generations.”
    ChatGPT search gets improved shopping features
    OpenAI has added new features to ChatGPT search, its web search tool in ChatGPT, to give users an improved online shopping experience. The company says people can ask super-specific questions using natural language and receive customized results. The chatbot provides recommendations, images, and reviews of products in various categories such as fashion, beauty, home goods, and electronics.
    OpenAI wants its AI model to access cloud models for assistance
    OpenAI leaders have been discussing allowing its upcoming open model to link up with OpenAI’s cloud-hosted models to improve its ability to respond to intricate questions, two sources familiar with the situation told TechCrunch.
    OpenAI aims to make its new “open” AI model the best on the market
    OpenAI is preparing to launch an AI system that will be openly accessible, allowing users to download it for free without any API restrictions. Aidan Clark, OpenAI’s VP of research, is spearheading the development of the open model, which is in the very early stages, sources familiar with the situation told TechCrunch.
    OpenAI’s GPT-4.1 may be less aligned than earlier models
    OpenAI released a new AI model called GPT-4.1 in mid-April without the safety report, or system card, that typically accompanies its launches, claiming in a statement to TechCrunch that “GPT-4.1 is not a frontier model, so there won’t be a separate system card released for it.” However, multiple independent tests indicate that the model is less aligned than previous OpenAI releases.
    OpenAI’s o3 AI model scored lower than expected on a benchmark
    Questions have been raised about OpenAI’s transparency and model-testing procedures after first-party and third-party benchmark results for its o3 AI model diverged. OpenAI introduced o3 in December, stating that the model could solve approximately 25% of questions on FrontierMath, a difficult math problem set. Epoch AI, the research institute behind FrontierMath, found that o3 scored approximately 10%, significantly lower than OpenAI’s top reported score.
    OpenAI unveils Flex processing for cheaper, slower AI tasks
    OpenAI has launched a new API feature called Flex processing that allows users to use AI models at a lower cost but with slower response times and occasional resource unavailability. Flex processing is available in beta on the o3 and o4-mini reasoning models for non-production tasks like model evaluations, data enrichment, and asynchronous workloads.
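    Because Flex processing is opted into per request rather than per account, the trade-off shows up as a field on the request body. The sketch below assembles such a body; the field name `service_tier` follows OpenAI’s published API conventions, but treat it and the model/prompt as illustrative assumptions rather than a definitive integration.

    ```python
    # Minimal sketch: a chat request that opts into Flex processing.
    # Flex trades latency (and occasional resource unavailability) for a
    # lower price, so it suits non-production jobs like evals and data
    # enrichment rather than user-facing traffic.

    def build_flex_request(model: str, prompt: str) -> dict:
        """Assemble a request body that asks for the cheaper, slower tier."""
        return {
            "model": model,  # Flex is in beta on the o3 and o4-mini models
            "messages": [{"role": "user", "content": prompt}],
            "service_tier": "flex",  # assumed tier-selection field
        }

    req = build_flex_request("o4-mini", "Label this record as spam or not spam: ...")
    ```

    A caller using a tier like this would also want retry logic, since the article notes Flex requests can occasionally be refused when capacity is tight.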
    OpenAI’s latest AI models now have a safeguard against biorisks
    OpenAI has rolled out a new system to monitor its AI reasoning models, o3 and o4-mini, for biological and chemical threats. The system is designed to prevent the models from giving advice that could potentially lead to harmful attacks, as stated in OpenAI’s safety report.
    OpenAI launches its latest reasoning models, o3 and o4-mini
    OpenAI has released two new reasoning models, o3 and o4-mini, just two days after launching GPT-4.1. The company claims o3 is the most advanced reasoning model it has developed, while o4-mini is said to provide a balance of price, speed, and performance. The new models stand out from previous reasoning models because they can use ChatGPT features like web browsing, coding, and image processing and generation. But they hallucinate more than several of OpenAI’s previous models.
    OpenAI has added a new section to ChatGPT to offer easier access to AI-generated images for all user tiers
    OpenAI introduced a new section called “Library” to make it easier for users to create images on mobile and web platforms, per the company’s X post.
    OpenAI could “adjust” its safeguards if rivals release “high-risk” AI
    OpenAI said on Tuesday that it might revise its safety standards if “another frontier AI developer releases a high-risk system without comparable safeguards.” The move shows how commercial AI developers face more pressure to rapidly implement models due to the increased competition.
    OpenAI is reportedly building its own social media platform
    OpenAI is currently in the early stages of developing its own social media platform to compete with Elon Musk’s X and Mark Zuckerberg’s Instagram and Threads, according to The Verge. It is unclear whether OpenAI intends to launch the social network as a standalone application or incorporate it into ChatGPT.
    OpenAI will remove its largest AI model, GPT-4.5, from the API, in July
    OpenAI will discontinue its largest AI model, GPT-4.5, from its API even though it was just launched in late February. GPT-4.5 will be available in a research preview for paying customers. Developers can use GPT-4.5 through OpenAI’s API until July 14; then, they will need to switch to GPT-4.1, which was released on April 14.
    OpenAI unveils GPT-4.1 AI models that focus on coding capabilities
    OpenAI has launched three models in the GPT-4.1 family — GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano — with a specific focus on coding capabilities. They are accessible via the OpenAI API but not ChatGPT. In the competition to develop advanced programming models, GPT-4.1 will rival AI models such as Google’s Gemini 2.5 Pro, Anthropic’s Claude 3.7 Sonnet, and DeepSeek’s upgraded V3.
    OpenAI will discontinue ChatGPT’s GPT-4 at the end of April
    OpenAI plans to sunset GPT-4, an AI model introduced more than two years ago, and replace it with GPT-4o, the current default model, per its changelog. The change will take effect on April 30. GPT-4 will remain available via OpenAI’s API.
    OpenAI could release GPT-4.1 soon
    OpenAI may launch several new AI models, including GPT-4.1, soon, The Verge reported, citing anonymous sources. GPT-4.1 would be an update of OpenAI’s GPT-4o, which was released last year. On the list of upcoming models are GPT-4.1 and smaller versions like GPT-4.1 mini and nano, per the report.
    OpenAI has updated ChatGPT to use information from your previous conversations
    OpenAI started updating ChatGPT to enable the chatbot to remember previous conversations with a user and customize its responses based on that context. This feature is rolling out to ChatGPT Pro and Plus users first, excluding those in the U.K., EU, Iceland, Liechtenstein, Norway, and Switzerland.
    OpenAI is working on watermarks for images made with ChatGPT
    It looks like OpenAI is working on a watermarking feature for images generated using GPT-4o. AI researcher Tibor Blaho spotted a new “ImageGen” watermark feature in the new beta of ChatGPT’s Android app. Blaho also found mentions of other tools: “Structured Thoughts,” “Reasoning Recap,” “CoT Search Tool,” and “l1239dk1.”
    OpenAI offers ChatGPT Plus for free to U.S., Canadian college students
    OpenAI is offering its $20-per-month ChatGPT Plus subscription tier for free to all college students in the U.S. and Canada through the end of May. The offer will let millions of students use OpenAI’s premium service, which offers access to the company’s GPT-4o model, image generation, voice interaction, and research tools that are not available in the free version.
    ChatGPT users have generated over 700M images so far
    More than 130 million users have created over 700 million images since ChatGPT got the upgraded image generator on March 25, according to COO of OpenAI Brad Lightcap. The image generator was made available to all ChatGPT users on March 31, and went viral for being able to create Ghibli-style photos.
    OpenAI’s o3 model could cost more to run than initial estimate
    The Arc Prize Foundation, which develops the AI benchmark tool ARC-AGI, has updated the estimated computing costs for running OpenAI’s o3 “reasoning” model on ARC-AGI. The organization originally estimated that the best-performing configuration of o3 it tested, o3 high, would cost thousands of dollars to address a single problem. The Foundation now thinks the true cost could be much higher, possibly an order of magnitude more per task.
    OpenAI CEO says capacity issues will cause product delays
    In a series of posts on X, OpenAI CEO Sam Altman said the company’s new image-generation tool’s popularity may cause product releases to be delayed. “We are getting things under control, but you should expect new releases from OpenAI to be delayed, stuff to break, and for service to sometimes be slow as we deal with capacity challenges,” he wrote.
    March 2025
    OpenAI plans to release a new ‘open’ AI language model
    OpenAI intends to release its “first” open language model since GPT-2 “in the coming months.” The company plans to host developer events to gather feedback and eventually showcase prototypes of the model. The first developer event is to be held in San Francisco, with sessions to follow in Europe and Asia.
    OpenAI removes ChatGPT’s restrictions on image generation
    OpenAI made a notable change to its content moderation policies after the success of its new image generator in ChatGPT, which went viral for being able to create Studio Ghibli-style images. The company has updated its policies to allow ChatGPT to generate images of public figures, hateful symbols, and racial features when requested. OpenAI had previously declined such prompts due to the potential controversy or harm they may cause. However, the company has now “evolved” its approach, as stated in a blog post published by Joanne Jang, the lead for OpenAI’s model behavior.
    OpenAI adopts Anthropic’s standard for linking AI models with data
    OpenAI wants to incorporate Anthropic’s Model Context Protocol (MCP) into all of its products, including the ChatGPT desktop app. MCP, an open-source standard, helps AI models generate more accurate and suitable responses to specific queries, and lets developers create bidirectional links between data sources and AI applications like chatbots. The protocol is currently available in the Agents SDK, and support for the ChatGPT desktop app and Responses API will be coming soon, OpenAI CEO Sam Altman said.
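    Under the hood, MCP messages are plain JSON-RPC 2.0, so the "bidirectional link" between a data source and a chatbot boils down to exchanging structured requests. The sketch below builds a client-side tool invocation; the `tools/call` method name follows the published MCP specification, while the tool name and arguments are made-up placeholders.

    ```python
    import json

    def make_tool_call(req_id: int, tool: str, arguments: dict) -> str:
        """Serialize an MCP tool invocation as a JSON-RPC 2.0 request."""
        return json.dumps({
            "jsonrpc": "2.0",
            "id": req_id,                # lets the client match the response
            "method": "tools/call",      # MCP's standard tool-invocation method
            "params": {"name": tool, "arguments": arguments},
        })

    # Hypothetical tool exposed by an MCP server wrapping a document store.
    msg = make_tool_call(1, "search_docs", {"query": "quarterly revenue"})
    decoded = json.loads(msg)
    ```

    In a real integration this message would travel over one of MCP’s transports (stdio or HTTP) to a server that advertises the tool, rather than being parsed locally as it is here.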
    OpenAI’s viral Studio Ghibli-style images could raise AI copyright concerns
    The latest update of the image generator on OpenAI’s ChatGPT has triggered a flood of AI-generated memes in the style of Studio Ghibli, the Japanese animation studio behind blockbuster films like “My Neighbor Totoro” and “Spirited Away.” The burgeoning mass of Ghibli-esque images have sparked concerns about whether OpenAI has violated copyright laws, especially since the company is already facing legal action for using source material without authorization.
    OpenAI expects revenue to triple to $12.7 billion this year
    OpenAI expects its revenue to triple to $12.7 billion in 2025, fueled by the performance of its paid AI software, Bloomberg reported, citing an anonymous source. While the startup doesn’t expect to reach positive cash flow until 2029, it expects revenue to increase significantly again in 2026, the report said.
    ChatGPT has upgraded its image-generation feature
    OpenAI on Tuesday rolled out a major upgrade to ChatGPT’s image-generation capabilities: ChatGPT can now use the GPT-4o model to generate and edit images and photos directly. The feature went live earlier this week in ChatGPT and Sora, OpenAI’s AI video-generation tool, for subscribers of the company’s Pro plan, priced at $200 a month, and will be available soon to ChatGPT Plus subscribers and developers using the company’s API service. The company’s CEO Sam Altman said on Wednesday, however, that the release of the image generation feature to free users would be delayed due to higher demand than the company expected.
    OpenAI announces leadership updates
    Brad Lightcap, OpenAI’s chief operating officer, will lead the company’s global expansion and manage corporate partnerships as CEO Sam Altman shifts his focus to research and products, according to a blog post from OpenAI. Lightcap, who previously worked with Altman at Y Combinator, joined the Microsoft-backed startup in 2018. OpenAI also said Mark Chen would step into the expanded role of chief research officer, and Julia Villagra will take on the role of chief people officer.
    OpenAI’s AI voice assistant now has advanced feature
    OpenAI has updated its AI voice assistant with improved chatting capabilities, according to a video posted on Monday to the company’s official media channels. The update enables real-time conversations, and the AI assistant is said to be more personable and interrupts users less often. Users on ChatGPT’s free tier can now access the new version of Advanced Voice Mode, while paying users will receive answers that are “more direct, engaging, concise, specific, and creative,” a spokesperson from OpenAI told TechCrunch.
    OpenAI and Meta are in talks with Reliance over AI partnerships in India
    OpenAI and Meta have separately engaged in discussions with Indian conglomerate Reliance Industries regarding potential collaborations to enhance their AI services in the country, per a report by The Information. One key topic being discussed is Reliance Jio distributing OpenAI’s ChatGPT. Reliance has also proposed selling OpenAI’s models to businesses in India through an application programming interface (API) so they can incorporate AI into their operations. Meta, meanwhile, plans to bolster its presence in India by constructing a large 3GW data center in Jamnagar, Gujarat. OpenAI, Meta, and Reliance have not yet officially announced these plans.
    OpenAI faces privacy complaint in Europe for chatbot’s defamatory hallucinations
    Noyb, a privacy rights advocacy group, is supporting an individual in Norway who was shocked to discover that ChatGPT was providing false information about him, stating that he had been found guilty of killing two of his children and trying to harm the third. “The GDPR is clear. Personal data has to be accurate,” said Joakim Söderberg, data protection lawyer at Noyb, in a statement. “If it’s not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”
    OpenAI upgrades its transcription and voice-generating AI models
    OpenAI has added new transcription and voice-generating AI models to its APIs: a text-to-speech model, “gpt-4o-mini-tts,” that delivers more nuanced and realistic-sounding speech, as well as two speech-to-text models called “gpt-4o-transcribe” and “gpt-4o-mini-transcribe.” The company claims they are improved versions of what was already there and that they hallucinate less.
    OpenAI has launched o1-pro, a more powerful version of its o1
    OpenAI has introduced o1-pro in its developer API. OpenAI says o1-pro uses more computing than its o1 “reasoning” AI model to deliver “consistently better responses.” It’s only accessible to select developers who have spent at least $5 on OpenAI API services. OpenAI charges $150 for every million tokens fed into the model and $600 for every million tokens the model produces. That’s twice the input price of OpenAI’s GPT-4.5 and 10 times the price of regular o1.
    OpenAI research lead says “reasoning” models could have arrived decades earlier
    Noam Brown, who heads AI reasoning research at OpenAI, thinks certain types of “reasoning” AI models could have been developed 20 years ago if researchers had known the right approach and algorithms.
    OpenAI says it has trained an AI that’s “really good” at creative writing
    OpenAI CEO Sam Altman said, in a post on X, that the company has trained a “new model” that’s “really good” at creative writing. He posted a lengthy sample from the model given the prompt “Please write a metafictional literary short story about AI and grief.” OpenAI has not extensively explored the use of AI for writing fiction; the company has mostly concentrated on challenges in rigid, predictable areas such as math and programming, and a single curated sample doesn’t prove the model is actually good at creative writing.
    OpenAI launches new tools for building AI agents
    OpenAI rolled out new tools designed to help developers and businesses build AI agents — automated systems that can independently accomplish tasks — using the company’s own AI models and frameworks. The tools are part of OpenAI’s new Responses API, which enables enterprises to develop customized AI agents that can perform web searches, scan through company files, and navigate websites, similar to OpenAI’s Operator product. The Responses API effectively replaces OpenAI’s Assistants API, which the company plans to discontinue in the first half of 2026.
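    As a rough illustration of what building on the Responses API looks like, the sketch below assembles a request payload that enables the built-in web search and file search tools. The tool type strings follow OpenAI’s published examples, but the model choice, question, and vector store ID are hypothetical placeholders, not a definitive integration.

    ```python
    # Sketch: a Responses API payload for an agent that can search the web
    # and scan company files. A real call would send this via the OpenAI
    # client; here we only build and inspect the request body.

    def build_agent_request(question: str) -> dict:
        return {
            "model": "gpt-4o",          # illustrative model choice
            "input": question,          # Responses API takes a single input
            "tools": [
                {"type": "web_search_preview"},        # built-in web search
                {"type": "file_search",                # built-in file search...
                 "vector_store_ids": ["vs_example_123"]},  # ...hypothetical store
            ],
        }

    payload = build_agent_request("What did competitors announce this week?")
    ```

    The model decides per request whether to invoke a tool, so the same payload shape serves both quick answers and multi-step agent runs.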
    OpenAI reportedly plans to charge up to $20,000 a month for specialized AI ‘agents’
    OpenAI intends to release several “agent” products tailored for different applications, including sorting and ranking sales leads and software engineering, according to a report from The Information. One, a “high-income knowledge worker” agent, will reportedly be priced at $2,000 a month. Another, a software developer agent, is said to cost $10,000 a month. The most expensive rumored agents, which are said to be aimed at supporting “PhD-level research,” are expected to cost $20,000 per month. The jaw-dropping figures are indicative of how much cash OpenAI needs right now: the company lost roughly $5 billion last year after paying for costs related to running its services and other expenses. It’s unclear when these agentic tools might launch or which customers will be eligible to buy them.
    ChatGPT can directly edit your code
    The latest version of the macOS ChatGPT app allows users to edit code directly in supported developer tools, including Xcode, VS Code, and JetBrains. ChatGPT Plus, Pro, and Team subscribers can use the feature now, and the company plans to roll it out to more users like Enterprise, Edu, and free users.
    ChatGPT’s weekly active users doubled in less than 6 months, thanks to new releases
    According to a new report from VC firm Andreessen Horowitz, OpenAI’s AI chatbot, ChatGPT, experienced solid growth in the second half of 2024. It took ChatGPT nine months to increase its weekly active users from 100 million in November 2023 to 200 million in August 2024, but it only took less than six months to double that number once more, according to the report. ChatGPT’s weekly active users increased to 300 million by December 2024 and 400 million by February 2025. ChatGPT has experienced significant growth recently due to the launch of new models and features, such as GPT-4o, with multimodal capabilities. ChatGPT usage spiked from April to May 2024, shortly after that model’s launch.
    February 2025
    OpenAI cancels its o3 AI model in favor of a ‘unified’ next-gen release
OpenAI has effectively canceled the release of o3 in favor of what CEO Sam Altman is calling a “simplified” product offering. In a post on X, Altman said that, in the coming months, OpenAI will release a model called GPT-5 that “integrates a lot of technology,” including o3, in ChatGPT and its API. As a result of that roadmap decision, OpenAI no longer plans to release o3 as a standalone model.
    ChatGPT may not be as power-hungry as once assumed
    A commonly cited stat is that ChatGPT requires around 3 watt-hours of power to answer a single question. Using OpenAI’s latest default model for ChatGPT, GPT-4o, as a reference, nonprofit AI research institute Epoch AI found the average ChatGPT query consumes around 0.3 watt-hours. However, the analysis doesn’t consider the additional energy costs incurred by ChatGPT with features like image generation or input processing.
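The gap between the two estimates is easy to put in everyday terms. A back-of-envelope calculation, using a purely hypothetical figure of 15 queries per day:

```python
# Per-query energy estimates from the article: the commonly cited 3 Wh
# versus Epoch AI's 0.3 Wh figure for GPT-4o. Queries-per-day is an
# arbitrary illustrative assumption, not a measured number.
WH_OLD, WH_NEW = 3.0, 0.3
QUERIES_PER_DAY = 15  # hypothetical heavy user

def annual_kwh(wh_per_query: float, queries_per_day: int) -> float:
    """Convert a per-query Wh figure into kWh per year of daily use."""
    return wh_per_query * queries_per_day * 365 / 1000  # Wh -> kWh

old_estimate = annual_kwh(WH_OLD, QUERIES_PER_DAY)  # 16.425 kWh/year
new_estimate = annual_kwh(WH_NEW, QUERIES_PER_DAY)  # 1.6425 kWh/year
```

Under the revised figure, a year of heavy use comes to well under 2 kWh, a tenth of the older estimate.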
    OpenAI now reveals more of its o3-mini model’s thought process
    In response to pressure from rivals like DeepSeek, OpenAI is changing the way its o3-mini model communicates its step-by-step “thought” process. ChatGPT users will see an updated “chain of thought” that shows more of the model’s “reasoning” steps and how it arrived at answers to questions.
    You can now use ChatGPT web search without logging in
    OpenAI is now allowing anyone to use ChatGPT web search without having to log in. While OpenAI had previously allowed users to ask ChatGPT questions without signing in, responses were restricted to the chatbot’s last training update. This only applies through ChatGPT.com, however. To use ChatGPT in any form through the native mobile app, you will still need to be logged in.
    OpenAI unveils a new ChatGPT agent for ‘deep research’
    OpenAI announced a new AI “agent” called deep research that’s designed to help people conduct in-depth, complex research using ChatGPT. OpenAI says the “agent” is intended for instances where you don’t just want a quick answer or summary, but instead need to assiduously consider information from multiple websites and other sources.
    January 2025
    OpenAI used a subreddit to test AI persuasion
    OpenAI used the subreddit r/ChangeMyView to measure the persuasive abilities of its AI reasoning models. OpenAI says it collects user posts from the subreddit and asks its AI models to write replies, in a closed environment, that would change the Reddit user’s mind on a subject. The company then shows the responses to testers, who assess how persuasive the argument is, and finally OpenAI compares the AI models’ responses to human replies for that same post. 
    OpenAI launches o3-mini, its latest ‘reasoning’ model
    OpenAI launched a new AI “reasoning” model, o3-mini, the newest in the company’s o family of models. OpenAI first previewed the model in December alongside a more capable system called o3. OpenAI is pitching its new model as both “powerful” and “affordable.”
    ChatGPT’s mobile users are 85% male, report says
    A new report from app analytics firm Appfigures found that over half of ChatGPT’s mobile users are under age 25, with users between ages 50 and 64 making up the second largest age demographic. The gender gap among ChatGPT users is even more significant. Appfigures estimates that across age groups, men make up 84.5% of all users.
    OpenAI launches ChatGPT plan for US government agencies
    OpenAI launched ChatGPT Gov designed to provide U.S. government agencies an additional way to access the tech. ChatGPT Gov includes many of the capabilities found in OpenAI’s corporate-focused tier, ChatGPT Enterprise. OpenAI says that ChatGPT Gov enables agencies to more easily manage their own security, privacy, and compliance, and could expedite internal authorization of OpenAI’s tools for the handling of non-public sensitive data.
    More teens report using ChatGPT for schoolwork, despite the tech’s faults
Younger Gen Zers are embracing ChatGPT for schoolwork, according to a new survey by the Pew Research Center. In a follow-up to its 2023 poll on ChatGPT usage among young people, Pew asked ~1,400 U.S.-based teens ages 13 to 17 whether they’ve used ChatGPT for homework or other school-related assignments. Twenty-six percent said they had, double the share from two years ago. Just over half of the teens polled said they think it’s acceptable to use ChatGPT for researching new subjects. But considering the ways ChatGPT can fall short, the results are possibly cause for alarm.
    OpenAI says it may store deleted Operator data for up to 90 days
    OpenAI says that it might store chats and associated screenshots from customers who use Operator, the company’s AI “agent” tool, for up to 90 days — even after a user manually deletes them. While OpenAI has a similar deleted data retention policy for ChatGPT, the retention period for ChatGPT is only 30 days, which is 60 days shorter than Operator’s.
    OpenAI launches Operator, an AI agent that performs tasks autonomously
    OpenAI is launching a research preview of Operator, a general-purpose AI agent that can take control of a web browser and independently perform certain actions. Operator promises to automate tasks such as booking travel accommodations, making restaurant reservations, and shopping online.
Operator, OpenAI’s agent tool, could be released sooner rather than later
Changes to ChatGPT’s code base suggest that Operator will be available as an early research preview to users on the Pro subscription plan. The changes aren’t yet publicly visible, but a user on X who goes by Choi spotted these updates in ChatGPT’s client-side code. TechCrunch separately identified the same references to Operator on OpenAI’s website.
    OpenAI tests phone number-only ChatGPT signups
    OpenAI has begun testing a feature that lets new ChatGPT users sign up with only a phone number — no email required. The feature is currently in beta in the U.S. and India. However, users who create an account using their number can’t upgrade to one of OpenAI’s paid plans without verifying their account via an email. Multi-factor authentication also isn’t supported without a valid email.
    ChatGPT now lets you schedule reminders and recurring tasks
    ChatGPT’s new beta feature, called tasks, allows users to set simple reminders. For example, you can ask ChatGPT to remind you when your passport expires in six months, and the AI assistant will follow up with a push notification on whatever platform you have tasks enabled. The feature will start rolling out to ChatGPT Plus, Team, and Pro users around the globe this week.
    New ChatGPT feature lets users assign it traits like ‘chatty’ and ‘Gen Z’
    OpenAI is introducing a new way for users to customize their interactions with ChatGPT. Some users found they can specify a preferred name or nickname and “traits” they’d like the chatbot to have. OpenAI suggests traits like “Chatty,” “Encouraging,” and “Gen Z.” However, some users reported that the new options have disappeared, so it’s possible they went live prematurely.
    FAQs:
    What is ChatGPT? How does it work?
ChatGPT, developed by tech startup OpenAI, is a general-purpose chatbot that uses artificial intelligence to generate text in response to a user’s prompt. The chatbot runs on OpenAI’s GPT family of large language models (most recently GPT-4o), which use deep learning to produce human-like text.
    When did ChatGPT get released?
ChatGPT was released for public use on November 30, 2022.
    What is the latest version of ChatGPT?
    Both the free version of ChatGPT and the paid ChatGPT Plus are regularly updated with new GPT models. The most recent model is GPT-4o.
    Can I use ChatGPT for free?
Yes. There is a free version of ChatGPT that only requires a sign-in, in addition to the paid version, ChatGPT Plus.
    Who uses ChatGPT?
    Anyone can use ChatGPT! More and more tech companies and search engines are utilizing the chatbot to automate text or quickly answer user questions/concerns.
    What companies use ChatGPT?
    Multiple enterprises utilize ChatGPT, although others may limit the use of the AI-powered tool.
Most recently, Microsoft announced at its 2023 Build conference that it is integrating its ChatGPT-based Bing experience into Windows 11. Brooklyn-based 3D display startup Looking Glass uses ChatGPT to produce holograms you can converse with. And nonprofit organization Solana officially integrated the chatbot into its network with a ChatGPT plug-in geared toward end users, to help onboard them into the web3 space.
    What does GPT mean in ChatGPT?
    GPT stands for Generative Pre-Trained Transformer.
    What is the difference between ChatGPT and a chatbot?
A chatbot is any software or system that holds a dialogue with a person, but it doesn’t necessarily have to be AI-powered. For example, some chatbots are rules-based, in the sense that they give canned responses to questions.
    ChatGPT is AI-powered and utilizes LLM technology to generate text after a prompt.
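To make the contrast concrete, here is a toy rules-based bot of the kind described above. All keywords and replies are invented for illustration; the point is that, unlike an LLM, it can only return canned responses for patterns it already knows:

```python
# Toy rules-based chatbot: canned responses keyed on keywords.
RULES = {
    "hours": "We are open 9am-5pm, Monday through Friday.",
    "refund": "Refunds are processed within 5 business days.",
}
FALLBACK = "Sorry, I don't understand. Try asking about 'hours' or 'refund'."

def rules_bot(message: str) -> str:
    """Return a canned reply for the first matching keyword, else a fallback."""
    text = message.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    return FALLBACK  # no generation: anything unmatched gets the same fallback

print(rules_bot("What are your hours?"))
```

An LLM-based chatbot, by contrast, generates novel text for arbitrary prompts rather than looking up pre-written replies.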
    Can ChatGPT write essays?
    Yes.
    Can ChatGPT commit libel?
    Due to the nature of how these models work, they don’t know or care whether something is true, only that it looks true. That’s a problem when you’re using it to do your homework, sure, but when it accuses you of a crime you didn’t commit, that may well at this point be libel.
    We will see how handling troubling statements produced by ChatGPT will play out over the next few months as tech and legal experts attempt to tackle the fastest moving target in the industry.
    Does ChatGPT have an app?
    Yes, there is a free ChatGPT mobile app for iOS and Android users.
    What is the ChatGPT character limit?
    It’s not documented anywhere that ChatGPT has a character limit. However, users have noted that there are some character limitations after around 500 words.
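If long inputs do get cut off around that point, one common workaround is to split the text into word-count-bounded chunks and submit them sequentially. A minimal sketch, where the 500-word bound is just the figure users report, not a documented limit:

```python
def chunk_by_words(text: str, max_words: int = 500) -> list[str]:
    """Split text into pieces of at most max_words whitespace-separated words."""
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]

# 1200 words split at the reported ~500-word boundary -> 500 + 500 + 200
chunks = chunk_by_words("lorem " * 1200, max_words=500)
```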
    Does ChatGPT have an API?
    Yes, it was released March 1, 2023.
    What are some sample everyday uses for ChatGPT?
    Everyday examples include programming, scripts, email replies, listicles, blog ideas, summarization, etc.
    What are some advanced uses for ChatGPT?
Advanced use examples include debugging code, explaining programming languages and scientific concepts, complex problem solving, etc.
    How good is ChatGPT at writing code?
    It depends on the nature of the program. While ChatGPT can write workable Python code, it can’t necessarily program an entire app’s worth of code. That’s because ChatGPT lacks context awareness — in other words, the generated code isn’t always appropriate for the specific context in which it’s being used.
    Can you save a ChatGPT chat?
    Yes. OpenAI allows users to save chats in the ChatGPT interface, stored in the sidebar of the screen. There are no built-in sharing features yet.
    Are there alternatives to ChatGPT?
    Yes. There are multiple AI-powered chatbot competitors such as Together, Google’s Gemini and Anthropic’s Claude, and developers are creating open source alternatives.
    How does ChatGPT handle data privacy?
OpenAI has said that individuals in “certain jurisdictions” can object to the processing of their personal information by its AI models by filling out a web form, entitled “OpenAI Personal Data Removal Request.” This includes the ability to request the deletion of AI-generated references about you, although OpenAI notes it may not grant every request, since it must balance privacy requests against freedom of expression “in accordance with applicable laws.”
In its privacy policy, the ChatGPT maker makes a passing acknowledgement of the objection requirements attached to relying on “legitimate interest,” pointing users toward more information about opting out: “See here for instructions on how you can opt out of our use of your information to train our models.”
    What controversies have surrounded ChatGPT?
Recently, Discord announced that it had integrated OpenAI’s technology into its bot named Clyde, and two users subsequently tricked Clyde into providing them with instructions for making the illegal drug methamphetamine and the incendiary mixture napalm.
    An Australian mayor has publicly announced he may sue OpenAI for defamation due to ChatGPT’s false claims that he had served time in prison for bribery. This would be the first defamation lawsuit against the text-generating service.
    CNET found itself in the midst of controversy after Futurism reported the publication was publishing articles under a mysterious byline completely generated by AI. The private equity company that owns CNET, Red Ventures, was accused of using ChatGPT for SEO farming, even if the information was incorrect.
    Several major school systems and colleges, including New York City Public Schools, have banned ChatGPT from their networks and devices. They claim that the AI impedes the learning process by promoting plagiarism and misinformation, a claim that not every educator agrees with.
    There have also been cases of ChatGPT accusing individuals of false crimes.
    Where can I find examples of ChatGPT prompts?
    Several marketplaces host and provide ChatGPT prompts, either for free or for a nominal fee. One is PromptBase. Another is ChatX. More launch every day.
    Can ChatGPT be detected?
    Poorly. Several tools claim to detect ChatGPT-generated text, but in our tests, they’re inconsistent at best.
    Are ChatGPT chats public?
    No. But OpenAI recently disclosed a bug, since fixed, that exposed the titles of some users’ conversations to other people on the service.
    What lawsuits are there surrounding ChatGPT?
    None specifically targeting ChatGPT. But OpenAI is involved in at least one lawsuit that has implications for AI systems trained on publicly available data, which would touch on ChatGPT.
    Are there issues regarding plagiarism with ChatGPT?
    Yes. Text-generating AI models like ChatGPT have a tendency to regurgitate content from their training data.
    ChatGPT: Everything you need to know about the AI-powered chatbot
ChatGPT, OpenAI’s text-generating AI chatbot, has taken the world by storm since its launch in November 2022. What started as a tool to supercharge productivity through writing essays and code with short text prompts has evolved into a behemoth with 300 million weekly active users.

2024 was a big year for OpenAI, from its partnership with Apple for its generative AI offering, Apple Intelligence, to the release of GPT-4o with voice capabilities and the highly anticipated launch of its text-to-video model Sora. OpenAI also faced its share of internal drama, including the notable exits of high-level execs like co-founder and longtime chief scientist Ilya Sutskever and CTO Mira Murati. OpenAI has also been hit with lawsuits from Alden Global Capital-owned newspapers alleging copyright infringement, as well as an injunction from Elon Musk to halt OpenAI’s transition to a for-profit.

In 2025, OpenAI is battling the perception that it’s ceding ground in the AI race to Chinese rivals like DeepSeek. The company has been trying to shore up its relationship with Washington as it simultaneously pursues an ambitious data center project, and as it reportedly lays the groundwork for one of the largest funding rounds in history.

Below, you’ll find a timeline of ChatGPT product updates and releases, starting with the latest, which we’ve been updating throughout the year. If you have any other questions, check out our ChatGPT FAQ here. To see a list of 2024 updates, go here.

Timeline of the most recent ChatGPT updates
May 2025

OpenAI CFO says hardware will drive ChatGPT’s growth
OpenAI plans to purchase Jony Ive’s devices startup io for billion. Sarah Friar, CFO of OpenAI, thinks that the hardware will significantly enhance ChatGPT and broaden OpenAI’s reach to a larger audience in the future.

OpenAI’s ChatGPT unveils its AI coding agent, Codex
OpenAI has introduced its AI coding agent, Codex, powered by codex-1, a version of its o3 AI reasoning model designed for software engineering tasks. OpenAI says codex-1 generates more precise and “cleaner” code than o3. The coding agent may take anywhere from one to 30 minutes to complete tasks such as writing simple features, fixing bugs, answering questions about your codebase, and running tests.

Sam Altman aims to make ChatGPT more personalized by tracking every aspect of a person’s life
Sam Altman, the CEO of OpenAI, said during a recent AI event hosted by VC firm Sequoia that he wants ChatGPT to record and remember every detail of a person’s life, in response to an attendee’s question about how ChatGPT can become more personalized.

OpenAI releases its GPT-4.1 and GPT-4.1 mini AI models in ChatGPT
OpenAI said in a post on X that it has launched its GPT-4.1 and GPT-4.1 mini AI models in ChatGPT.

OpenAI has launched a new feature for ChatGPT deep research to analyze code repositories on GitHub. The ChatGPT deep research feature is in beta and lets developers connect with GitHub to ask questions about codebases and engineering documents. The connector will soon be available for ChatGPT Plus, Pro, and Team users, with support for Enterprise and Education coming shortly, per an OpenAI spokesperson.

OpenAI launches a new data residency program in Asia
After introducing a data residency program in Europe in February, OpenAI has now launched a similar program in Asian countries including India, Japan, Singapore, and South Korea. The new program will be accessible to users of ChatGPT Enterprise, ChatGPT Edu, and the API.
It will help organizations in Asia meet their local data sovereignty requirements when using OpenAI’s products.

OpenAI to introduce a program to grow AI infrastructure
OpenAI is unveiling a program called OpenAI for Countries, which aims to develop the local infrastructure needed to serve international AI clients better. The AI startup will work with governments to assist with increasing data center capacity and customizing OpenAI’s products to meet specific language and local needs. OpenAI for Countries is part of efforts to support the company’s expansion of its AI data center Project Stargate to new locations outside the U.S., per Bloomberg.

OpenAI promises to make changes to prevent future ChatGPT sycophancy
OpenAI has announced its plan to change its procedures for updating the AI models that power ChatGPT, following an update that caused the platform to become overly sycophantic for many users.

April 2025

OpenAI clarifies the reason ChatGPT became overly flattering and agreeable
OpenAI has released a post on the recent sycophancy issues with the default AI model powering ChatGPT, GPT-4o, which led the company to revert an update to the model released last week. CEO Sam Altman acknowledged the issue on Sunday and confirmed two days later that the GPT-4o update was being rolled back. OpenAI is working on “additional fixes” to the model’s personality. Over the weekend, users on social media criticized the new model for making ChatGPT too validating and agreeable. It quickly became a popular meme.

OpenAI is working to fix a “bug” that let minors engage in inappropriate conversations
An issue within OpenAI’s ChatGPT enabled the chatbot to create graphic erotic content for accounts registered by users under the age of 18, as demonstrated by TechCrunch’s testing, a fact later confirmed by OpenAI.
“Protecting younger users is a top priority, and our Model Spec, which guides model behavior, clearly restricts sensitive content like erotica to narrow contexts such as scientific, historical, or news reporting,” a spokesperson told TechCrunch via email. “In this case, a bug allowed responses outside those guidelines, and we are actively deploying a fix to limit these generations.”

OpenAI has added a few features to ChatGPT search, its web search tool, to give users an improved online shopping experience. The company says people can ask super-specific questions using natural language and receive customized results. The chatbot provides recommendations, images, and reviews of products in various categories such as fashion, beauty, home goods, and electronics.

OpenAI wants its AI model to access cloud models for assistance
OpenAI leaders have been discussing allowing the open model to link up with OpenAI’s cloud-hosted models to improve its ability to respond to intricate questions, two sources familiar with the situation told TechCrunch.

OpenAI aims to make its new “open” AI model the best on the market
OpenAI is preparing to launch an AI system that will be openly accessible, allowing users to download it for free without any API restrictions. Aidan Clark, OpenAI’s VP of research, is spearheading the development of the open model, which is in the very early stages, sources familiar with the situation told TechCrunch.

OpenAI’s GPT-4.1 may be less aligned than earlier models
OpenAI released a new AI model called GPT-4.1 in mid-April. However, multiple independent tests indicate that the model is less reliable than previous OpenAI releases.
The company did not publish a safety report for GPT-4.1, claiming in a statement to TechCrunch that “GPT-4.1 is not a frontier model, so there won’t be a separate system card released for it.”

OpenAI’s o3 AI model scored lower than expected on a benchmark
Questions have been raised about OpenAI’s transparency and model-testing procedures after a discrepancy emerged between first- and third-party benchmark results for the o3 AI model. OpenAI introduced o3 in December, stating that the model could solve approximately 25% of questions on FrontierMath, a difficult math problem set. Epoch AI, the research institute behind FrontierMath, found that o3 achieved a score of approximately 10%, significantly lower than OpenAI’s top reported score.

OpenAI unveils Flex processing for cheaper, slower AI tasks
OpenAI has launched a new API feature called Flex processing that lets users run AI models at a lower cost in exchange for slower response times and occasional resource unavailability. Flex processing is available in beta on the o3 and o4-mini reasoning models for non-production tasks like model evaluations, data enrichment, and asynchronous workloads.

OpenAI’s latest AI models now have a safeguard against biorisks
OpenAI has rolled out a new system to monitor its AI reasoning models, o3 and o4-mini, for biological and chemical threats. The system is designed to prevent the models from giving advice that could potentially lead to harmful attacks, as stated in OpenAI’s safety report.

OpenAI launches its latest reasoning models, o3 and o4-mini
OpenAI has released two new reasoning models, o3 and o4-mini, just two days after launching GPT-4.1. The company claims o3 is the most advanced reasoning model it has developed, while o4-mini is said to provide a balance of price, speed, and performance.
The new models stand out from previous reasoning models because they can use ChatGPT features like web browsing, coding, and image processing and generation. But they hallucinate more than several of OpenAI’s previous models.

OpenAI has added a new section to ChatGPT to offer easier access to AI-generated images for all user tiers
OpenAI introduced a new section called “library” to make it easier for users to create images on mobile and web platforms, per the company’s X post.

OpenAI could “adjust” its safeguards if rivals release “high-risk” AI
OpenAI said on Tuesday that it might revise its safety standards if “another frontier AI developer releases a high-risk system without comparable safeguards.” The move shows how commercial AI developers face mounting pressure to ship models quickly amid increased competition.

OpenAI is currently in the early stages of developing its own social media platform to compete with Elon Musk’s X and Mark Zuckerberg’s Instagram and Threads, according to The Verge. It is unclear whether OpenAI intends to launch the social network as a standalone application or incorporate it into ChatGPT.

OpenAI will remove its largest AI model, GPT-4.5, from the API in July
OpenAI will discontinue its largest AI model, GPT-4.5, from its API even though it was just launched in late February. GPT-4.5 will remain available in a research preview for paying customers. Developers can use GPT-4.5 through OpenAI’s API until July 14; after that, they will need to switch to GPT-4.1, which was released on April 14.

OpenAI unveils GPT-4.1 AI models that focus on coding capabilities
OpenAI has launched three members of the GPT-4.1 model family — GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano — with a specific focus on coding capabilities. They’re accessible via the OpenAI API but not ChatGPT. In the competition to develop advanced programming models, GPT-4.1 will rival AI models such as Google’s Gemini 2.5 Pro, Anthropic’s Claude 3.7 Sonnet, and DeepSeek’s upgraded V3.
OpenAI will discontinue ChatGPT’s GPT-4 at the end of April
OpenAI plans to sunset GPT-4, an AI model introduced more than two years ago, and replace it with GPT-4o, the current default model, per its changelog. The change takes effect on April 30. GPT-4 will remain available via OpenAI’s API.

OpenAI could release GPT-4.1 soon
OpenAI may soon launch several new AI models, including GPT-4.1, The Verge reported, citing anonymous sources. GPT-4.1 would be an update of OpenAI’s GPT-4o, which was released last year. On the list of upcoming models are GPT-4.1 and smaller versions like GPT-4.1 mini and nano, per the report.

OpenAI has updated ChatGPT to use information from your previous conversations
OpenAI started updating ChatGPT to enable the chatbot to remember previous conversations with a user and customize its responses based on that context. This feature is rolling out to ChatGPT Pro and Plus users first, excluding those in the U.K., EU, Iceland, Liechtenstein, Norway, and Switzerland.

OpenAI is working on watermarks for images made with ChatGPT
It looks like OpenAI is working on a watermarking feature for images generated using GPT-4o. AI researcher Tibor Blaho spotted a new “ImageGen” watermark feature in the new beta of ChatGPT’s Android app. Blaho also found mentions of other tools: “Structured Thoughts,” “Reasoning Recap,” “CoT Search Tool,” and “l1239dk1.”

OpenAI offers ChatGPT Plus for free to U.S., Canadian college students
OpenAI is offering its -per-month ChatGPT Plus subscription tier for free to all college students in the U.S. and Canada through the end of May. The offer will let millions of students use OpenAI’s premium service, which offers access to the company’s GPT-4o model, image generation, voice interaction, and research tools that are not available in the free version.
ChatGPT users have generated over 700M images so far
More than 130 million users have created over 700 million images since ChatGPT got its upgraded image generator on March 25, according to OpenAI COO Brad Lightcap. The image generator was made available to all ChatGPT users on March 31 and went viral for its ability to create Ghibli-style photos.

OpenAI’s o3 model could cost more to run than initially estimated
The Arc Prize Foundation, which develops the AI benchmark tool ARC-AGI, has updated its estimate of the computing costs for running OpenAI’s o3 “reasoning” model on ARC-AGI. The organization originally estimated that the best-performing configuration of o3 it tested, o3 high, would cost approximately to address a single problem. The Foundation now thinks the cost could be much higher, possibly around per task.

OpenAI CEO says capacity issues will cause product delays
In a series of posts on X, OpenAI CEO Sam Altman said the popularity of the company’s new image-generation tool may cause product releases to be delayed. “We are getting things under control, but you should expect new releases from OpenAI to be delayed, stuff to break, and for service to sometimes be slow as we deal with capacity challenges,” he wrote.

March 2025

OpenAI plans to release a new ‘open’ AI language model
OpenAI intends to release its “first” open language model since GPT-2 “in the coming months.” The company plans to host developer events to gather feedback and eventually showcase prototypes of the model. The first developer event is to be held in San Francisco, with sessions to follow in Europe and Asia.

OpenAI removes ChatGPT’s restrictions on image generation
OpenAI made a notable change to its content moderation policies after the success of its new image generator in ChatGPT, which went viral for being able to create Studio Ghibli-style images.
The company has updated its policies to allow ChatGPT to generate images of public figures, hateful symbols, and racial features when requested. OpenAI had previously declined such prompts due to the potential controversy or harm they could cause. However, the company has now “evolved” its approach, as stated in a blog post published by Joanne Jang, the lead for OpenAI’s model behavior.

OpenAI adopts Anthropic’s standard for linking AI models with data
OpenAI wants to incorporate Anthropic’s Model Context Protocol into all of its products, including the ChatGPT desktop app. MCP, an open-source standard, helps AI models generate more accurate and suitable responses to specific queries, and lets developers create bidirectional links between data sources and AI applications like chatbots. The protocol is currently available in the Agents SDK, and support for the ChatGPT desktop app and Responses API will be coming soon, OpenAI CEO Sam Altman said.

OpenAI’s viral Studio Ghibli-style images could raise AI copyright concerns
The latest update of the image generator on OpenAI’s ChatGPT has triggered a flood of AI-generated memes in the style of Studio Ghibli, the Japanese animation studio behind blockbuster films like “My Neighbor Totoro” and “Spirited Away.” The burgeoning mass of Ghibli-esque images has sparked concerns about whether OpenAI has violated copyright laws, especially since the company is already facing legal action for using source material without authorization.

OpenAI expects revenue to triple to billion this year
OpenAI expects its revenue to triple to billion in 2025, fueled by the performance of its paid AI software, Bloomberg reported, citing an anonymous source. While the startup doesn’t expect to reach positive cash flow until 2029, it expects revenue to increase significantly in 2026 to surpass billion, the report said.
ChatGPT has upgraded its image-generation feature
OpenAI on Tuesday rolled out a major upgrade to ChatGPT’s image-generation capabilities: ChatGPT can now use the GPT-4o model to generate and edit images and photos directly. The feature went live earlier this week in ChatGPT and Sora, OpenAI’s AI video-generation tool, for subscribers of the company’s Pro plan, priced at a month, and will be available soon to ChatGPT Plus subscribers and developers using the company’s API service. The company’s CEO Sam Altman said on Wednesday, however, that the release of the image-generation feature to free users would be delayed due to higher-than-expected demand.

OpenAI announces leadership updates
Brad Lightcap, OpenAI’s chief operating officer, will lead the company’s global expansion and manage corporate partnerships as CEO Sam Altman shifts his focus to research and products, according to a blog post from OpenAI. Lightcap, who previously worked with Altman at Y Combinator, joined the Microsoft-backed startup in 2018. OpenAI also said Mark Chen would step into the expanded role of chief research officer, and Julia Villagra will take on the role of chief people officer.

OpenAI’s AI voice assistant now has advanced features
OpenAI has updated its AI voice assistant with improved chatting capabilities, according to a video posted on Monday to the company’s official media channels. The update enables real-time conversations, and the AI assistant is said to be more personable and to interrupt users less often. Users on ChatGPT’s free tier can now access the new version of Advanced Voice Mode, while paying users will receive answers that are “more direct, engaging, concise, specific, and creative,” a spokesperson from OpenAI told TechCrunch.

OpenAI and Meta have separately engaged in discussions with Indian conglomerate Reliance Industries regarding potential collaborations to enhance their AI services in the country, per a report by The Information.
One key topic being discussed is Reliance Jio distributing OpenAI’s ChatGPT. Reliance has proposed selling OpenAI’s models to businesses in India through an application programming interface (API) so they can incorporate AI into their operations. Meta also plans to bolster its presence in India by constructing a large 3GW data center in Jamnagar, Gujarat. OpenAI, Meta, and Reliance have not yet officially announced these plans. OpenAI faces privacy complaint in Europe for chatbot’s defamatory hallucinations Noyb, a privacy rights advocacy group, is supporting an individual in Norway who was shocked to discover that ChatGPT was providing false information about him, stating that he had been found guilty of killing two of his children and trying to harm the third. “The GDPR is clear. Personal data has to be accurate,” said Joakim Söderberg, data protection lawyer at Noyb, in a statement. “If it’s not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.” OpenAI upgrades its transcription and voice-generating AI models OpenAI has added new transcription and voice-generating AI models to its APIs: a text-to-speech model, “gpt-4o-mini-tts,” that delivers more nuanced and realistic-sounding speech, as well as two speech-to-text models called “gpt-4o-transcribe” and “gpt-4o-mini-transcribe”. The company claims they are improved versions of what was already there and that they hallucinate less. OpenAI has launched o1-pro, a more powerful version of its o1 OpenAI has introduced o1-pro in its developer API. OpenAI says its o1-pro uses more computing than its o1 “reasoning” AI model to deliver “consistently better responses.” It’s only accessible to select developers who meet a minimum spending threshold on OpenAI API services. 
OpenAI charges per million tokens of input into the model and per million tokens the model produces. It costs twice as much as OpenAI’s GPT-4.5 for input and 10 times the price of regular o1. Noam Brown, who heads AI reasoning research at OpenAI, thinks that certain types of AI models for “reasoning” could have been developed 20 years ago if researchers had understood the correct approach and algorithms. OpenAI says it has trained an AI that’s “really good” at creative writing OpenAI CEO Sam Altman said, in a post on X, that the company has trained a “new model” that’s “really good” at creative writing. He posted a lengthy sample from the model given the prompt “Please write a metafictional literary short story about AI and grief.” OpenAI has not extensively explored the use of AI for writing fiction; the company has mostly concentrated on challenges in rigid, predictable areas such as math and programming, so the model might not be that great at creative writing at all. OpenAI rolled out new tools designed to help developers and businesses build AI agents — automated systems that can independently accomplish tasks — using the company’s own AI models and frameworks. The tools are part of OpenAI’s new Responses API, which enables enterprises to develop customized AI agents that can perform web searches, scan through company files, and navigate websites, similar to OpenAI’s Operator product. The Responses API effectively replaces OpenAI’s Assistants API, which the company plans to discontinue in the first half of 2026. OpenAI reportedly plans to charge steep monthly fees for specialized AI ‘agents’ OpenAI intends to release several “agent” products tailored for different applications, including sorting and ranking sales leads and software engineering, according to a report from The Information. One is reportedly a “high-income knowledge worker” agent; another is a software developer agent. Each is said to carry its own monthly price. 
The most expensive rumored agents are said to be aimed at supporting “PhD-level research.” The reported pricing is indicative of how much cash OpenAI needs right now: the company posted a substantial loss last year after paying for costs related to running its services and other expenses. It’s unclear when these agentic tools might launch or which customers will be eligible to buy them. ChatGPT can directly edit your code The latest version of the macOS ChatGPT app allows users to edit code directly in supported developer tools, including Xcode, VS Code, and JetBrains. ChatGPT Plus, Pro, and Team subscribers can use the feature now, and the company plans to roll it out to more users, including Enterprise, Edu, and free tiers. ChatGPT’s weekly active users doubled in less than 6 months, thanks to new releases According to a new report from VC firm Andreessen Horowitz, OpenAI’s AI chatbot, ChatGPT, experienced solid growth in the second half of 2024. It took ChatGPT nine months to increase its weekly active users from 100 million in November 2023 to 200 million in August 2024, but it took less than six months to double that number once more, according to the report. ChatGPT’s weekly active users increased to 300 million by December 2024 and 400 million by February 2025. ChatGPT has experienced significant growth recently due to the launch of new models and features, such as GPT-4o, with multimodal capabilities. ChatGPT usage spiked from April to May 2024, shortly after that model’s launch. February 2025 OpenAI cancels its o3 AI model in favor of a ‘unified’ next-gen release OpenAI has effectively canceled the release of o3 in favor of what CEO Sam Altman is calling a “simplified” product offering. In a post on X, Altman said that, in the coming months, OpenAI will release a model called GPT-5 that “integrates a lot of technology,” including o3, in ChatGPT and its API. 
As a result of that roadmap decision, OpenAI no longer plans to release o3 as a standalone model.  ChatGPT may not be as power-hungry as once assumed A commonly cited stat is that ChatGPT requires around 3 watt-hours of power to answer a single question. Using OpenAI’s latest default model for ChatGPT, GPT-4o, as a reference, nonprofit AI research institute Epoch AI found the average ChatGPT query consumes around 0.3 watt-hours. However, the analysis doesn’t consider the additional energy costs incurred by ChatGPT with features like image generation or input processing. OpenAI now reveals more of its o3-mini model’s thought process In response to pressure from rivals like DeepSeek, OpenAI is changing the way its o3-mini model communicates its step-by-step “thought” process. ChatGPT users will see an updated “chain of thought” that shows more of the model’s “reasoning” steps and how it arrived at answers to questions. You can now use ChatGPT web search without logging in OpenAI is now allowing anyone to use ChatGPT web search without having to log in. While OpenAI had previously allowed users to ask ChatGPT questions without signing in, responses were restricted to the chatbot’s last training update. This only applies through ChatGPT.com, however. To use ChatGPT in any form through the native mobile app, you will still need to be logged in. OpenAI unveils a new ChatGPT agent for ‘deep research’ OpenAI announced a new AI “agent” called deep research that’s designed to help people conduct in-depth, complex research using ChatGPT. OpenAI says the “agent” is intended for instances where you don’t just want a quick answer or summary, but instead need to assiduously consider information from multiple websites and other sources. January 2025 OpenAI used a subreddit to test AI persuasion OpenAI used the subreddit r/ChangeMyView to measure the persuasive abilities of its AI reasoning models. 
OpenAI says it collects user posts from the subreddit and asks its AI models to write replies, in a closed environment, that would change the Reddit user’s mind on a subject. The company then shows the responses to testers, who assess how persuasive the argument is, and finally OpenAI compares the AI models’ responses to human replies for that same post. OpenAI launches o3-mini, its latest ‘reasoning’ model OpenAI launched a new AI “reasoning” model, o3-mini, the newest in the company’s o family of models. OpenAI first previewed the model in December alongside a more capable system called o3. OpenAI is pitching its new model as both “powerful” and “affordable.” ChatGPT’s mobile users are 85% male, report says A new report from app analytics firm Appfigures found that over half of ChatGPT’s mobile users are under age 25, with users between ages 50 and 64 making up the second largest age demographic. The gender gap among ChatGPT users is even more significant. Appfigures estimates that across age groups, men make up 84.5% of all users. OpenAI launches ChatGPT plan for US government agencies OpenAI launched ChatGPT Gov, designed to provide U.S. government agencies an additional way to access the tech. ChatGPT Gov includes many of the capabilities found in OpenAI’s corporate-focused tier, ChatGPT Enterprise. OpenAI says that ChatGPT Gov enables agencies to more easily manage their own security, privacy, and compliance, and could expedite internal authorization of OpenAI’s tools for the handling of non-public sensitive data. More teens report using ChatGPT for schoolwork, despite the tech’s faults Younger Gen Zers are embracing ChatGPT for schoolwork, according to a new survey by the Pew Research Center. In a follow-up to its 2023 poll on ChatGPT usage among young people, Pew asked ~1,400 U.S.-based teens ages 13 to 17 whether they’ve used ChatGPT for homework or other school-related assignments. Twenty-six percent said that they had, double the number two years ago. 
Just over half of teens responding to the poll said they think it’s acceptable to use ChatGPT for researching new subjects. But considering the ways ChatGPT can fall short, the results are possibly cause for alarm. OpenAI says it may store deleted Operator data for up to 90 days OpenAI says that it might store chats and associated screenshots from customers who use Operator, the company’s AI “agent” tool, for up to 90 days — even after a user manually deletes them. While OpenAI has a similar deleted data retention policy for ChatGPT, the retention period for ChatGPT is only 30 days, which is 60 days shorter than Operator’s. OpenAI launches Operator, an AI agent that performs tasks autonomously OpenAI is launching a research preview of Operator, a general-purpose AI agent that can take control of a web browser and independently perform certain actions. Operator promises to automate tasks such as booking travel accommodations, making restaurant reservations, and shopping online. Operator, OpenAI’s agent tool, could be released sooner rather than later. Changes to ChatGPT’s code base suggest that Operator will be available as an early research preview to users on the Pro subscription plan. The changes aren’t yet publicly visible, but a user on X who goes by Choi spotted these updates in ChatGPT’s client-side code. TechCrunch separately identified the same references to Operator on OpenAI’s website. OpenAI tests phone number-only ChatGPT signups OpenAI has begun testing a feature that lets new ChatGPT users sign up with only a phone number — no email required. The feature is currently in beta in the U.S. and India. However, users who create an account using their number can’t upgrade to one of OpenAI’s paid plans without verifying their account via an email. Multi-factor authentication also isn’t supported without a valid email. ChatGPT now lets you schedule reminders and recurring tasks ChatGPT’s new beta feature, called tasks, allows users to set simple reminders. 
For example, you can ask ChatGPT to remind you when your passport expires in six months, and the AI assistant will follow up with a push notification on whatever platform you have tasks enabled. The feature will start rolling out to ChatGPT Plus, Team, and Pro users around the globe this week. New ChatGPT feature lets users assign it traits like ‘chatty’ and ‘Gen Z’ OpenAI is introducing a new way for users to customize their interactions with ChatGPT. Some users found they can specify a preferred name or nickname and “traits” they’d like the chatbot to have. OpenAI suggests traits like “Chatty,” “Encouraging,” and “Gen Z.” However, some users reported that the new options have disappeared, so it’s possible they went live prematurely. FAQs: What is ChatGPT? How does it work? ChatGPT is a general-purpose chatbot that uses artificial intelligence to generate text after a user enters a prompt, developed by tech startup OpenAI. The chatbot uses GPT-4, a large language model that uses deep learning to produce human-like text. When did ChatGPT get released? November 30, 2022 is when ChatGPT was released for public use. What is the latest version of ChatGPT? Both the free version of ChatGPT and the paid ChatGPT Plus are regularly updated with new GPT models. The most recent model is GPT-4o. Can I use ChatGPT for free? There is a free version of ChatGPT that only requires a sign-in in addition to the paid version, ChatGPT Plus. Who uses ChatGPT? Anyone can use ChatGPT! More and more tech companies and search engines are utilizing the chatbot to automate text or quickly answer user questions/concerns. What companies use ChatGPT? Multiple enterprises utilize ChatGPT, although others may limit the use of the AI-powered tool. Most recently, Microsoft announced at its 2023 Build conference that it is integrating its ChatGPT-based Bing experience into Windows 11. 
Brooklyn-based 3D display startup Looking Glass uses ChatGPT to produce holograms you can communicate with. And nonprofit organization Solana officially integrated the chatbot into its network with a ChatGPT plug-in geared toward end users to help onboard into the web3 space. What does GPT mean in ChatGPT? GPT stands for Generative Pre-Trained Transformer. What is the difference between ChatGPT and a chatbot? A chatbot can be any software/system that holds dialogue with a person but doesn’t necessarily have to be AI-powered. For example, there are chatbots that are rules-based in the sense that they’ll give canned responses to questions. ChatGPT is AI-powered and utilizes LLM technology to generate text after a prompt. Can ChatGPT write essays? Yes. Can ChatGPT commit libel? Due to the nature of how these models work, they don’t know or care whether something is true, only that it looks true. That’s a problem when you’re using it to do your homework, sure, but when it accuses you of a crime you didn’t commit, that may well at this point be libel. We will see how handling troubling statements produced by ChatGPT will play out over the next few months as tech and legal experts attempt to tackle the fastest moving target in the industry. Does ChatGPT have an app? Yes, there is a free ChatGPT mobile app for iOS and Android users. What is the ChatGPT character limit? It’s not documented anywhere that ChatGPT has a character limit. However, users have noted that there are some character limitations after around 500 words. Does ChatGPT have an API? Yes, it was released March 1, 2023. What are some sample everyday uses for ChatGPT? Everyday examples include programming, scripts, email replies, listicles, blog ideas, summarization, etc. What are some advanced uses for ChatGPT? Advanced use examples include debugging code, programming languages, scientific concepts, complex problem solving, etc. How good is ChatGPT at writing code? 
It depends on the nature of the program. While ChatGPT can write workable Python code, it can’t necessarily program an entire app’s worth of code. That’s because ChatGPT lacks context awareness — in other words, the generated code isn’t always appropriate for the specific context in which it’s being used. Can you save a ChatGPT chat? Yes. OpenAI allows users to save chats in the ChatGPT interface, stored in the sidebar of the screen. There are no built-in sharing features yet. Are there alternatives to ChatGPT? Yes. There are multiple AI-powered chatbot competitors such as Together, Google’s Gemini, and Anthropic’s Claude, and developers are creating open source alternatives. How does ChatGPT handle data privacy? OpenAI has said that individuals in “certain jurisdictions” can object to the processing of their personal information by its AI models by filling out this form. This includes the ability to make requests for deletion of AI-generated references about you. OpenAI notes, however, that it may not grant every request, since it must balance privacy requests against freedom of expression “in accordance with applicable laws”. The web form for requesting deletion of data about you is entitled “OpenAI Personal Data Removal Request”. In its privacy policy, the ChatGPT maker makes a passing acknowledgement of the objection requirements attached to relying on “legitimate interest”, pointing users toward more information about requesting an opt-out when it writes: “See here for instructions on how you can opt out of our use of your information to train our models.” What controversies have surrounded ChatGPT? Recently, Discord announced that it had integrated OpenAI’s technology into its bot named Clyde, where two users tricked Clyde into providing them with instructions for making the illegal drug methamphetamine and the incendiary mixture napalm. 
An Australian mayor has publicly announced he may sue OpenAI for defamation due to ChatGPT’s false claims that he had served time in prison for bribery. This would be the first defamation lawsuit against the text-generating service. CNET found itself in the midst of controversy after Futurism reported the publication was publishing articles under a mysterious byline completely generated by AI. The private equity company that owns CNET, Red Ventures, was accused of using ChatGPT for SEO farming, even if the information was incorrect. Several major school systems and colleges, including New York City Public Schools, have banned ChatGPT from their networks and devices. They claim that the AI impedes the learning process by promoting plagiarism and misinformation, a claim that not every educator agrees with. There have also been cases of ChatGPT accusing individuals of false crimes. Where can I find examples of ChatGPT prompts? Several marketplaces host and provide ChatGPT prompts, either for free or for a nominal fee. One is PromptBase. Another is ChatX. More launch every day. Can ChatGPT be detected? Poorly. Several tools claim to detect ChatGPT-generated text, but in our tests, they’re inconsistent at best. Are ChatGPT chats public? No. But OpenAI recently disclosed a bug, since fixed, that exposed the titles of some users’ conversations to other people on the service. What lawsuits are there surrounding ChatGPT? None specifically targeting ChatGPT. But OpenAI is involved in at least one lawsuit that has implications for AI systems trained on publicly available data, which would touch on ChatGPT. Are there issues regarding plagiarism with ChatGPT? Yes. Text-generating AI models like ChatGPT have a tendency to regurgitate content from their training data.
    TECHCRUNCH.COM
    ChatGPT: Everything you need to know about the AI-powered chatbot
ChatGPT, OpenAI’s text-generating AI chatbot, has taken the world by storm since its launch in November 2022. What started as a tool to supercharge productivity through writing essays and code with short text prompts has evolved into a behemoth with 300 million weekly active users. 2024 was a big year for OpenAI, from its partnership with Apple for its generative AI offering, Apple Intelligence, to the release of GPT-4o with voice capabilities and the highly anticipated launch of its text-to-video model Sora. OpenAI also faced its share of internal drama, including the notable exits of high-level execs like co-founder and longtime chief scientist Ilya Sutskever and CTO Mira Murati. OpenAI has also been hit with lawsuits from Alden Global Capital-owned newspapers alleging copyright infringement, as well as an injunction from Elon Musk to halt OpenAI’s transition to a for-profit. In 2025, OpenAI is battling the perception that it’s ceding ground in the AI race to Chinese rivals like DeepSeek. The company has been trying to shore up its relationship with Washington as it simultaneously pursues an ambitious data center project, and as it reportedly lays the groundwork for one of the largest funding rounds in history. Below, you’ll find a timeline of ChatGPT product updates and releases, starting with the latest, which we’ve been updating throughout the year. If you have any other questions, check out our ChatGPT FAQ here. To see a list of 2024 updates, go here. Timeline of the most recent ChatGPT updates 
May 2025 OpenAI CFO says hardware will drive ChatGPT’s growth OpenAI plans to purchase Jony Ive’s devices startup io for $6.4 billion. Sarah Friar, CFO of OpenAI, thinks that the hardware will significantly enhance ChatGPT and broaden OpenAI’s reach to a larger audience in the future. OpenAI unveils its AI coding agent, Codex OpenAI has introduced its AI coding agent, Codex, powered by codex-1, a version of its o3 AI reasoning model designed for software engineering tasks. OpenAI says codex-1 generates more precise and “cleaner” code than o3. The coding agent may take anywhere from one to 30 minutes to complete tasks such as writing simple features, fixing bugs, answering questions about your codebase, and running tests. Sam Altman aims to make ChatGPT more personalized by tracking every aspect of a person’s life Sam Altman, the CEO of OpenAI, said during a recent AI event hosted by VC firm Sequoia that he wants ChatGPT to record and remember every detail of a person’s life when one attendee asked about how ChatGPT can become more personalized. OpenAI releases its GPT-4.1 and GPT-4.1 mini AI models in ChatGPT OpenAI said in a post on X that it has launched its GPT-4.1 and GPT-4.1 mini AI models in ChatGPT. OpenAI has launched a new feature for ChatGPT deep research to analyze code repositories on GitHub. The ChatGPT deep research feature is in beta and lets developers connect with GitHub to ask questions about codebases and engineering documents. The connector will soon be available for ChatGPT Plus, Pro, and Team users, with support for Enterprise and Education coming shortly, per an OpenAI spokesperson. OpenAI launches a new data residency program in Asia After introducing a data residency program in Europe in February, OpenAI has now launched a similar program in Asian countries including India, Japan, Singapore, and South Korea. The new program will be accessible to users of ChatGPT Enterprise, ChatGPT Edu, and the API. 
It will help organizations in Asia meet their local data sovereignty requirements when using OpenAI’s products. OpenAI to introduce a program to grow AI infrastructure OpenAI is unveiling a program called OpenAI for Countries, which aims to develop the necessary local infrastructure to serve international AI clients better. The AI startup will work with governments to assist with increasing data center capacity and customizing OpenAI’s products to meet specific language and local needs. OpenAI for Countries is part of efforts to support the company’s expansion of its AI data center Project Stargate to new locations outside the U.S., per Bloomberg. OpenAI promises to make changes to prevent future ChatGPT sycophancy OpenAI has announced its plan to make changes to its procedures for updating the AI models that power ChatGPT, following an update that caused the platform to become overly sycophantic for many users. April 2025 OpenAI clarifies the reason ChatGPT became overly flattering and agreeable OpenAI has released a post on the recent sycophancy issues with the default AI model powering ChatGPT, GPT-4o, leading the company to revert an update to the model released last week. CEO Sam Altman acknowledged the issue on Sunday and confirmed two days later that the GPT-4o update was being rolled back. OpenAI is working on “additional fixes” to the model’s personality. Over the weekend, users on social media criticized the new model for making ChatGPT too validating and agreeable. It became a popular meme fast. OpenAI is working to fix a “bug” that let minors engage in inappropriate conversations An issue within OpenAI’s ChatGPT enabled the chatbot to create graphic erotic content for accounts registered by users under the age of 18, as demonstrated by TechCrunch’s testing, a fact later confirmed by OpenAI. 
“Protecting younger users is a top priority, and our Model Spec, which guides model behavior, clearly restricts sensitive content like erotica to narrow contexts such as scientific, historical, or news reporting,” a spokesperson told TechCrunch via email. “In this case, a bug allowed responses outside those guidelines, and we are actively deploying a fix to limit these generations.” OpenAI has added a few features to ChatGPT search, its web search tool, to give users an improved online shopping experience. The company says people can ask super-specific questions using natural language and receive customized results. The chatbot provides recommendations, images, and reviews of products in various categories such as fashion, beauty, home goods, and electronics. OpenAI wants its open AI model to access cloud models for assistance OpenAI leaders have been talking about allowing the open model to link up with OpenAI’s cloud-hosted models to improve its ability to respond to intricate questions, two sources familiar with the situation told TechCrunch. OpenAI aims to make its new “open” AI model the best on the market OpenAI is preparing to launch an AI system that will be openly accessible, allowing users to download it for free without any API restrictions. Aidan Clark, OpenAI’s VP of research, is spearheading the development of the open model, which is in the very early stages, sources familiar with the situation told TechCrunch. OpenAI’s GPT-4.1 may be less aligned than earlier models OpenAI released a new AI model called GPT-4.1 in mid-April. However, multiple independent tests indicate that the model is less reliable than previous OpenAI releases. 
The company skipped its usual step of publishing a safety report, or system card, for GPT-4.1, claiming in a statement to TechCrunch that “GPT-4.1 is not a frontier model, so there won’t be a separate system card released for it.” OpenAI’s o3 AI model scored lower than expected on a benchmark Questions have been raised regarding OpenAI’s transparency and procedures for testing models after a difference was detected between first- and third-party benchmark results for the o3 AI model. OpenAI introduced o3 in December, stating that the model could solve approximately 25% of questions on FrontierMath, a difficult math problem set. Epoch AI, the research institute behind FrontierMath, discovered that o3 achieved a score of approximately 10%, which was significantly lower than OpenAI’s top-reported score. OpenAI unveils Flex processing for cheaper, slower AI tasks OpenAI has launched a new API feature called Flex processing that allows users to use AI models at a lower cost but with slower response times and occasional resource unavailability. Flex processing is available in beta on the o3 and o4-mini reasoning models for non-production tasks like model evaluations, data enrichment, and asynchronous workloads. OpenAI’s latest AI models now have a safeguard against biorisks OpenAI has rolled out a new system to monitor its AI reasoning models, o3 and o4-mini, for biological and chemical threats. The system is designed to prevent models from giving advice that could potentially lead to harmful attacks, as stated in OpenAI’s safety report. OpenAI launches its latest reasoning models, o3 and o4-mini OpenAI has released two new reasoning models, o3 and o4-mini, just two days after launching GPT-4.1. The company claims o3 is the most advanced reasoning model it has developed, while o4-mini is said to provide a balance of price, speed, and performance. 
The new models stand out from previous reasoning models because they can use ChatGPT features like web browsing, coding, and image processing and generation. But they hallucinate more than several of OpenAI’s previous models. OpenAI has added a new section to ChatGPT to offer easier access to AI-generated images for all user tiers OpenAI introduced a new section called “library” to make it easier for users to create images on mobile and web platforms, per the company’s X post. OpenAI could “adjust” its safeguards if rivals release “high-risk” AI OpenAI said on Tuesday that it might revise its safety standards if “another frontier AI developer releases a high-risk system without comparable safeguards.” The move shows how commercial AI developers face more pressure to rapidly implement models due to the increased competition. OpenAI is currently in the early stages of developing its own social media platform to compete with Elon Musk’s X and Mark Zuckerberg’s Instagram and Threads, according to The Verge. It is unclear whether OpenAI intends to launch the social network as a standalone application or incorporate it into ChatGPT. OpenAI will remove its largest AI model, GPT-4.5, from the API in July OpenAI will discontinue its largest AI model, GPT-4.5, from its API even though it was just launched in late February. GPT-4.5 will remain available in a research preview for paying customers. Developers can use GPT-4.5 through OpenAI’s API until July 14; then, they will need to switch to GPT-4.1, which was released on April 14. OpenAI unveils GPT-4.1 AI models that focus on coding capabilities OpenAI has launched three members of the GPT-4.1 model family — GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano — with a specific focus on coding capabilities. The family is accessible via the OpenAI API but not ChatGPT. In the competition to develop advanced programming models, GPT-4.1 will rival AI models such as Google’s Gemini 2.5 Pro, Anthropic’s Claude 3.7 Sonnet, and DeepSeek’s upgraded V3. 
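Because the GPT-4.1 family is reached through the API rather than ChatGPT, a developer picks a variant by model ID in the request body. Here is a minimal sketch of assembling such a request; the chat-style payload shape is the common convention, but the prompt, token limit, and helper function are illustrative assumptions, and no network call is made:

```python
# Sketch: assembling a chat-style request payload for a GPT-4.1 family
# model. Model IDs come from the announcement above; the payload shape,
# prompt, and max_tokens value are illustrative assumptions only.
GPT41_FAMILY = ("gpt-4.1", "gpt-4.1-mini", "gpt-4.1-nano")

def build_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build the JSON body an API client would send (no network call)."""
    if model not in GPT41_FAMILY:
        raise ValueError(f"not a GPT-4.1 variant: {model}")
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_request("gpt-4.1-mini", "Review this function for bugs.")
```

Given the family's coding focus, the mini and nano variants would be the natural picks for cost-sensitive code-review workloads, with the full model reserved for harder tasks.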
OpenAI will discontinue ChatGPT’s GPT-4 at the end of April OpenAI plans to sunset GPT-4, an AI model introduced more than two years ago, and replace it with GPT-4o, the current default model, per its changelog. The change will take effect on April 30. GPT-4 will remain available via OpenAI’s API. OpenAI could release GPT-4.1 soon OpenAI may launch several new AI models, including GPT-4.1, soon, The Verge reported, citing anonymous sources. GPT-4.1 would be an update of OpenAI’s GPT-4o, which was released last year. On the list of upcoming models are GPT-4.1 and smaller versions like GPT-4.1 mini and nano, per the report. OpenAI has updated ChatGPT to use information from your previous conversations OpenAI started updating ChatGPT to enable the chatbot to remember previous conversations with a user and customize its responses based on that context. This feature is rolling out to ChatGPT Pro and Plus users first, excluding those in the U.K., EU, Iceland, Liechtenstein, Norway, and Switzerland. OpenAI is working on watermarks for images made with ChatGPT It looks like OpenAI is working on a watermarking feature for images generated using GPT-4o. AI researcher Tibor Blaho spotted a new “ImageGen” watermark feature in the new beta of ChatGPT’s Android app. Blaho also found mentions of other tools: “Structured Thoughts,” “Reasoning Recap,” “CoT Search Tool,” and “l1239dk1.” OpenAI offers ChatGPT Plus for free to U.S., Canadian college students OpenAI is offering its $20-per-month ChatGPT Plus subscription tier for free to all college students in the U.S. and Canada through the end of May. The offer will let millions of students use OpenAI’s premium service, which offers access to the company’s GPT-4o model, image generation, voice interaction, and research tools that are not available in the free version. 
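The retirement schedule described above (GPT-4.5 callable through OpenAI's API until July 14, with GPT-4.1 as the designated replacement) is the kind of cutoff a client might encode in a small helper so the switch happens automatically. A hypothetical sketch, assuming 2025 dates and illustrative model IDs, not an official migration tool:

```python
from datetime import date

# Cutoff taken from the timeline above: GPT-4.5 is callable via the API
# through July 14 (read here as 2025); afterward, developers are expected
# to switch to GPT-4.1. Function name and model IDs are illustrative.
GPT45_API_SUNSET = date(2025, 7, 14)

def api_model_for(day: date) -> str:
    """Pick which large-model ID a hypothetical client should request."""
    return "gpt-4.5" if day <= GPT45_API_SUNSET else "gpt-4.1"
```

A date-driven switch like this lets a codebase migrate on the deadline without a same-day deploy, though pinning the new model explicitly once it is validated is the safer long-term choice.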
ChatGPT users have generated over 700M images so far

More than 130 million users have created over 700 million images since ChatGPT got its upgraded image generator on March 25, according to OpenAI COO Brad Lightcap. The image generator was made available to all ChatGPT users on March 31 and went viral for its ability to create Ghibli-style images.

OpenAI's o3 model could cost more to run than initially estimated

The Arc Prize Foundation, which develops the AI benchmark tool ARC-AGI, has updated its estimate of the computing costs for running OpenAI's o3 "reasoning" model on ARC-AGI. The organization originally estimated that the best-performing configuration of o3 it tested, o3 high, would cost approximately $3,000 to address a single problem. The Foundation now thinks the cost could be much higher, possibly around $30,000 per task.

OpenAI CEO says capacity issues will cause product delays

In a series of posts on X, OpenAI CEO Sam Altman said the popularity of the company's new image-generation tool may cause product releases to be delayed. "We are getting things under control, but you should expect new releases from OpenAI to be delayed, stuff to break, and for service to sometimes be slow as we deal with capacity challenges," he wrote.

March 2025

OpenAI plans to release a new 'open' AI language model

OpenAI intends to release its "first" open language model since GPT-2 "in the coming months." The company plans to host developer events to gather feedback and eventually showcase prototypes of the model. The first developer event is to be held in San Francisco, with sessions to follow in Europe and Asia.

OpenAI removes ChatGPT's restrictions on image generation

OpenAI made a notable change to its content moderation policies after the success of its new image generator in ChatGPT, which went viral for being able to create Studio Ghibli-style images.
The company has updated its policies to allow ChatGPT to generate images of public figures, hateful symbols, and racial features when requested. OpenAI had previously declined such prompts because of the potential controversy or harm they might cause. However, the company has now "evolved" its approach, as stated in a blog post published by Joanne Jang, the lead for OpenAI's model behavior.

OpenAI adopts Anthropic's standard for linking AI models with data

OpenAI wants to incorporate Anthropic's Model Context Protocol (MCP) into all of its products, including the ChatGPT desktop app. MCP, an open-source standard, helps AI models generate more accurate and suitable responses to specific queries, and lets developers create bidirectional links between data sources and AI applications like chatbots. The protocol is currently available in the Agents SDK, and support for the ChatGPT desktop app and Responses API will be coming soon, OpenAI CEO Sam Altman said.

OpenAI's viral Studio Ghibli-style images could raise AI copyright concerns

The latest update of the image generator in OpenAI's ChatGPT has triggered a flood of AI-generated memes in the style of Studio Ghibli, the Japanese animation studio behind blockbuster films like "My Neighbor Totoro" and "Spirited Away." The burgeoning mass of Ghibli-esque images has sparked concerns about whether OpenAI has violated copyright laws, especially since the company is already facing legal action for using source material without authorization.

OpenAI expects revenue to triple to $12.7 billion this year

OpenAI expects its revenue to triple to $12.7 billion in 2025, fueled by the performance of its paid AI software, Bloomberg reported, citing an anonymous source. While the startup doesn't expect to reach positive cash flow until 2029, it expects revenue to increase significantly in 2026, surpassing $29.4 billion, the report said.
ChatGPT has upgraded its image-generation feature

OpenAI on Tuesday rolled out a major upgrade to ChatGPT's image-generation capabilities: ChatGPT can now use the GPT-4o model to generate and edit images and photos directly. The feature went live earlier this week in ChatGPT and Sora, OpenAI's AI video-generation tool, for subscribers of the company's Pro plan, priced at $200 a month, and will be available soon to ChatGPT Plus subscribers and developers using the company's API service. CEO Sam Altman said on Wednesday, however, that the rollout of image generation to free users would be delayed because demand was higher than the company expected.

OpenAI announces leadership updates

Brad Lightcap, OpenAI's chief operating officer, will lead the company's global expansion and manage corporate partnerships as CEO Sam Altman shifts his focus to research and products, according to a blog post from OpenAI. Lightcap, who previously worked with Altman at Y Combinator, joined the Microsoft-backed startup in 2018. OpenAI also said Mark Chen would step into the expanded role of chief research officer, and Julia Villagra will take on the role of chief people officer.

OpenAI's AI voice assistant now has upgraded chat features

OpenAI has updated its AI voice assistant with improved chatting capabilities, according to a video posted on Monday (March 24) to the company's official media channels. The update enables real-time conversations, and the AI assistant is said to be more personable and to interrupt users less often. Users on ChatGPT's free tier can now access the new version of Advanced Voice Mode, while paying users will receive answers that are "more direct, engaging, concise, specific, and creative," an OpenAI spokesperson told TechCrunch.

OpenAI and Meta have separately engaged in discussions with Indian conglomerate Reliance Industries regarding potential collaborations to enhance their AI services in the country, per a report by The Information.
One key topic being discussed is Reliance Jio distributing OpenAI's ChatGPT. Reliance has proposed selling OpenAI's models to businesses in India through an application programming interface (API) so they can incorporate AI into their operations. Meta also plans to bolster its presence in India by constructing a large 3GW data center in Jamnagar, Gujarat. OpenAI, Meta, and Reliance have not yet officially announced these plans.

OpenAI faces privacy complaint in Europe over chatbot's defamatory hallucinations

Noyb, a privacy rights advocacy group, is supporting an individual in Norway who was shocked to discover that ChatGPT was providing false information about him, stating that he had been found guilty of killing two of his children and trying to harm the third. "The GDPR is clear. Personal data has to be accurate," said Joakim Söderberg, data protection lawyer at Noyb, in a statement. "If it's not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn't enough. You can't just spread false information and in the end add a small disclaimer saying that everything you said may just not be true."

OpenAI upgrades its transcription and voice-generating AI models

OpenAI has added new transcription and voice-generating AI models to its APIs: a text-to-speech model, "gpt-4o-mini-tts," that delivers more nuanced and realistic-sounding speech, as well as two speech-to-text models called "gpt-4o-transcribe" and "gpt-4o-mini-transcribe." The company claims they are improved versions of its existing models and that they hallucinate less.

OpenAI has launched o1-pro, a more powerful version of its o1 model

OpenAI has introduced o1-pro in its developer API. OpenAI says o1-pro uses more computing than its o1 "reasoning" AI model to deliver "consistently better responses." It's only accessible to select developers who have spent at least $5 on OpenAI API services.
OpenAI charges $150 for every million tokens (about 750,000 words) fed into the model and $600 for every million tokens the model produces. That's twice the input price of OpenAI's GPT-4.5 and 10 times the price of regular o1.

Noam Brown, who heads AI reasoning research at OpenAI, thinks that certain types of "reasoning" AI models could have been developed 20 years ago had researchers understood the correct approach and algorithms.

OpenAI says it has trained an AI that's "really good" at creative writing

OpenAI CEO Sam Altman said, in a post on X, that the company has trained a "new model" that's "really good" at creative writing. He posted a lengthy sample from the model given the prompt "Please write a metafictional literary short story about AI and grief." OpenAI has not extensively explored the use of AI for writing fiction; the company has mostly concentrated on challenges in rigid, predictable areas such as math and programming, and its models might not be that great at creative writing at all.

OpenAI rolled out new tools designed to help developers and businesses build AI agents — automated systems that can independently accomplish tasks — using the company's own AI models and frameworks. The tools are part of OpenAI's new Responses API, which enables enterprises to develop customized AI agents that can perform web searches, scan through company files, and navigate websites, similar to OpenAI's Operator product. The Responses API effectively replaces OpenAI's Assistants API, which the company plans to discontinue in the first half of 2026.

OpenAI reportedly plans to charge up to $20,000 a month for specialized AI 'agents'

OpenAI intends to release several "agent" products tailored for different applications, including sorting and ranking sales leads and software engineering, according to a report from The Information. One, a "high-income knowledge worker" agent, will reportedly be priced at $2,000 a month.
Another, a software developer agent, is said to cost $10,000 a month. The most expensive rumored agents, which are said to be aimed at supporting "PhD-level research," are expected to cost $20,000 per month. The jaw-dropping figures are indicative of how much cash OpenAI needs right now: The company lost roughly $5 billion last year after paying for costs related to running its services and other expenses. It's unclear when these agentic tools might launch or which customers will be eligible to buy them.

ChatGPT can directly edit your code

The latest version of the macOS ChatGPT app allows users to edit code directly in supported developer tools, including Xcode, VS Code, and JetBrains IDEs. ChatGPT Plus, Pro, and Team subscribers can use the feature now, and the company plans to roll it out to Enterprise, Edu, and free users.

ChatGPT's weekly active users doubled in less than 6 months, thanks to new releases

According to a new report from VC firm Andreessen Horowitz (a16z), OpenAI's AI chatbot, ChatGPT, experienced solid growth in the second half of 2024. It took ChatGPT nine months to increase its weekly active users from 100 million in November 2023 to 200 million in August 2024, but less than six months to double that number once more, according to the report. ChatGPT's weekly active users reached 300 million by December 2024 and 400 million by February 2025. ChatGPT has grown significantly thanks to the launch of new models and features, such as GPT-4o with its multimodal capabilities; usage spiked from April to May 2024, shortly after that model's launch.

February 2025

OpenAI cancels its o3 AI model in favor of a 'unified' next-gen release

OpenAI has effectively canceled the release of o3 in favor of what CEO Sam Altman is calling a "simplified" product offering.
In a post on X, Altman said that, in the coming months, OpenAI will release a model called GPT-5 that "integrates a lot of [OpenAI's] technology," including o3, in ChatGPT and its API. As a result of that roadmap decision, OpenAI no longer plans to release o3 as a standalone model.

ChatGPT may not be as power-hungry as once assumed

A commonly cited stat is that ChatGPT requires around 3 watt-hours of power to answer a single question. Using OpenAI's latest default model for ChatGPT, GPT-4o, as a reference, nonprofit AI research institute Epoch AI found the average ChatGPT query consumes around 0.3 watt-hours. However, the analysis doesn't consider the additional energy costs incurred by ChatGPT features like image generation or input processing.

OpenAI now reveals more of its o3-mini model's thought process

In response to pressure from rivals like DeepSeek, OpenAI is changing the way its o3-mini model communicates its step-by-step "thought" process. ChatGPT users will see an updated "chain of thought" that shows more of the model's "reasoning" steps and how it arrived at answers to questions.

You can now use ChatGPT web search without logging in

OpenAI is now allowing anyone to use ChatGPT web search without having to log in. While OpenAI had previously allowed users to ask ChatGPT questions without signing in, responses were restricted to the chatbot's last training update. This only applies through ChatGPT.com, however; to use ChatGPT in any form through the native mobile app, you will still need to be logged in.

OpenAI unveils a new ChatGPT agent for 'deep research'

OpenAI announced a new AI "agent" called deep research that's designed to help people conduct in-depth, complex research using ChatGPT. OpenAI says the "agent" is intended for instances where you don't just want a quick answer or summary, but instead need to assiduously consider information from multiple websites and other sources.
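The gap between the two per-query energy estimates reported above is easy to quantify. A minimal sketch; the daily query volume used to scale the figures is an illustrative assumption, not a number from the article:

```python
# Comparing the commonly cited estimate with Epoch AI's GPT-4o-based figure.
OLD_ESTIMATE_WH = 3.0   # watt-hours per query (commonly cited)
NEW_ESTIMATE_WH = 0.3   # watt-hours per query (Epoch AI)

ratio = OLD_ESTIMATE_WH / NEW_ESTIMATE_WH  # old figure is ~10x higher

# Illustrative assumption: 1 billion queries per day.
QUERIES_PER_DAY = 1_000_000_000
old_mwh_per_day = OLD_ESTIMATE_WH * QUERIES_PER_DAY / 1_000_000  # ~3,000 MWh
new_mwh_per_day = NEW_ESTIMATE_WH * QUERIES_PER_DAY / 1_000_000  # ~300 MWh
```

At that hypothetical volume, the revised estimate cuts the implied daily draw by a factor of ten, though, as noted, it excludes features like image generation.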
January 2025

OpenAI used a subreddit to test AI persuasion

OpenAI used the subreddit r/ChangeMyView to measure the persuasive abilities of its AI reasoning models. OpenAI says it collects user posts from the subreddit and asks its AI models to write replies, in a closed environment, that would change the Reddit user's mind on a subject. The company then shows the responses to testers, who assess how persuasive the arguments are, and finally OpenAI compares the AI models' responses to human replies for the same post.

OpenAI launches o3-mini, its latest 'reasoning' model

OpenAI launched a new AI "reasoning" model, o3-mini, the newest in the company's o family of models. OpenAI first previewed the model in December alongside a more capable system called o3. OpenAI is pitching its new model as both "powerful" and "affordable."

ChatGPT's mobile users are 85% male, report says

A new report from app analytics firm Appfigures found that over half of ChatGPT's mobile users are under age 25, with users between ages 50 and 64 making up the second-largest age demographic. The gender gap among ChatGPT users is even more significant: Appfigures estimates that across age groups, men make up 84.5% of all users.

OpenAI launches ChatGPT plan for US government agencies

OpenAI launched ChatGPT Gov, designed to provide U.S. government agencies an additional way to access the company's technology. ChatGPT Gov includes many of the capabilities found in OpenAI's corporate-focused tier, ChatGPT Enterprise. OpenAI says that ChatGPT Gov enables agencies to more easily manage their own security, privacy, and compliance, and could expedite internal authorization of OpenAI's tools for handling non-public sensitive data.

More teens report using ChatGPT for schoolwork, despite the tech's faults

Younger Gen Zers are embracing ChatGPT for schoolwork, according to a new survey by the Pew Research Center.
In a follow-up to its 2023 poll on ChatGPT usage among young people, Pew asked roughly 1,400 U.S.-based teens ages 13 to 17 whether they've used ChatGPT for homework or other school-related assignments. Twenty-six percent said they had, double the share from two years ago. Just over half of the teens responding said they think it's acceptable to use ChatGPT for researching new subjects. But considering the ways ChatGPT can fall short, the results are possibly cause for alarm.

OpenAI says it may store deleted Operator data for up to 90 days

OpenAI says that it might store chats and associated screenshots from customers who use Operator, the company's AI "agent" tool, for up to 90 days — even after a user manually deletes them. While OpenAI has a similar deleted-data retention policy for ChatGPT, that retention period is only 30 days, 60 days shorter than Operator's.

OpenAI launches Operator, an AI agent that performs tasks autonomously

OpenAI is launching a research preview of Operator, a general-purpose AI agent that can take control of a web browser and independently perform certain actions. Operator promises to automate tasks such as booking travel accommodations, making restaurant reservations, and shopping online.

Earlier, changes to ChatGPT's code base had suggested Operator would arrive sooner rather than later, as an early research preview for users on the $200 Pro subscription plan. The changes weren't publicly visible at the time, but a user on X who goes by Choi spotted the updates in ChatGPT's client-side code, and TechCrunch separately identified the same references to Operator on OpenAI's website.

OpenAI tests phone number-only ChatGPT signups

OpenAI has begun testing a feature that lets new ChatGPT users sign up with only a phone number — no email required. The feature is currently in beta in the U.S. and India.
However, users who create an account using their number can't upgrade to one of OpenAI's paid plans without verifying their account via an email. Multi-factor authentication also isn't supported without a valid email.

ChatGPT now lets you schedule reminders and recurring tasks

ChatGPT's new beta feature, called tasks, allows users to set simple reminders. For example, you can ask ChatGPT to remind you when your passport expires in six months, and the AI assistant will follow up with a push notification on whatever platform you have tasks enabled. The feature will start rolling out to ChatGPT Plus, Team, and Pro users around the globe this week.

New ChatGPT feature lets users assign it traits like 'chatty' and 'Gen Z'

OpenAI is introducing a new way for users to customize their interactions with ChatGPT. Some users found they can specify a preferred name or nickname and "traits" they'd like the chatbot to have. OpenAI suggests traits like "Chatty," "Encouraging," and "Gen Z." However, some users reported that the new options have since disappeared, so it's possible they went live prematurely.

FAQs

What is ChatGPT? How does it work?

ChatGPT is a general-purpose chatbot developed by tech startup OpenAI that uses artificial intelligence to generate text after a user enters a prompt. The chatbot is powered by GPT-4, a large language model that uses deep learning to produce human-like text.

When did ChatGPT get released?

ChatGPT was released for public use on November 30, 2022.

What is the latest version of ChatGPT?

Both the free version of ChatGPT and the paid ChatGPT Plus are regularly updated with new GPT models. The most recent model is GPT-4o.

Can I use ChatGPT for free?

Yes. In addition to the paid ChatGPT Plus, there is a free version of ChatGPT that only requires a sign-in.

Who uses ChatGPT?

Anyone can use ChatGPT! More and more tech companies and search engines are utilizing the chatbot to automate text or quickly answer user questions and concerns.
What companies use ChatGPT?

Multiple enterprises utilize ChatGPT, although others may limit the use of the AI-powered tool. Most recently, Microsoft announced at its 2023 Build conference that it is integrating its ChatGPT-based Bing experience into Windows 11. Brooklyn-based 3D display startup Looking Glass uses ChatGPT to produce holograms you can communicate with. And nonprofit organization Solana officially integrated the chatbot into its network with a ChatGPT plug-in geared toward end users, to help onboard them into the web3 space.

What does GPT mean in ChatGPT?

GPT stands for Generative Pre-Trained Transformer.

What is the difference between ChatGPT and a chatbot?

A chatbot can be any software or system that holds a dialogue with a person, but it doesn't necessarily have to be AI-powered. For example, some chatbots are rules-based in the sense that they give canned responses to questions. ChatGPT is AI-powered and utilizes LLM technology to generate text after a prompt.

Can ChatGPT write essays?

Yes.

Can ChatGPT commit libel?

Due to the nature of how these models work, they don't know or care whether something is true, only that it looks true. That's a problem when you're using it to do your homework, sure, but when it accuses you of a crime you didn't commit, that may well at this point be libel. We will see how handling troubling statements produced by ChatGPT plays out over the next few months as tech and legal experts attempt to tackle the fastest-moving target in the industry.

Does ChatGPT have an app?

Yes, there is a free ChatGPT mobile app for iOS and Android users.

What is the ChatGPT character limit?

ChatGPT's character limit isn't documented anywhere. However, users have noted that there are some character limitations after around 500 words.

Does ChatGPT have an API?

Yes, it was released on March 1, 2023.

What are some sample everyday uses for ChatGPT?
Everyday examples include programming, scripts, email replies, listicles, blog ideas, summarization, and more.

What are some advanced uses for ChatGPT?

Advanced use examples include debugging code, writing in multiple programming languages, explaining scientific concepts, complex problem solving, and more.

How good is ChatGPT at writing code?

It depends on the nature of the program. While ChatGPT can write workable Python code, it can't necessarily program an entire app's worth of code. That's because ChatGPT lacks context awareness — in other words, the generated code isn't always appropriate for the specific context in which it's being used.

Can you save a ChatGPT chat?

Yes. OpenAI allows users to save chats in the ChatGPT interface, stored in the sidebar of the screen. There are no built-in sharing features yet.

Are there alternatives to ChatGPT?

Yes. There are multiple AI-powered chatbot competitors, such as Together, Google's Gemini, and Anthropic's Claude, and developers are creating open-source alternatives.

How does ChatGPT handle data privacy?

OpenAI has said that individuals in "certain jurisdictions" (such as the EU) can object to the processing of their personal information by its AI models by filling out this form. This includes the ability to request deletion of AI-generated references about you, although OpenAI notes it may not grant every request, since it must balance privacy requests against freedom of expression "in accordance with applicable laws." The web form for requesting deletion of data about you is titled "OpenAI Personal Data Removal Request." In its privacy policy, the ChatGPT maker makes a passing acknowledgement of the objection requirements attached to relying on "legitimate interest" (LI), pointing users toward more information about requesting an opt-out: "See here for instructions on how you can opt out of our use of your information to train our models."

What controversies have surrounded ChatGPT?
Recently, Discord announced that it had integrated OpenAI's technology into its bot named Clyde, where two users tricked Clyde into providing them with instructions for making the illegal drug methamphetamine (meth) and the incendiary mixture napalm.

An Australian mayor has publicly announced he may sue OpenAI for defamation over ChatGPT's false claims that he had served time in prison for bribery. This would be the first defamation lawsuit against the text-generating service.

CNET found itself in the midst of controversy after Futurism reported that the publication was publishing articles, under a mysterious byline, that were completely generated by AI. The private equity company that owns CNET, Red Ventures, was accused of using ChatGPT for SEO farming, even when the information was incorrect.

Several major school systems and colleges, including New York City Public Schools, have banned ChatGPT from their networks and devices. They claim that the AI impedes the learning process by promoting plagiarism and misinformation, a claim that not every educator agrees with. There have also been cases of ChatGPT accusing individuals of crimes they did not commit.

Where can I find examples of ChatGPT prompts?

Several marketplaces host and provide ChatGPT prompts, either for free or for a nominal fee. One is PromptBase. Another is ChatX. More launch every day.

Can ChatGPT be detected?

Poorly. Several tools claim to detect ChatGPT-generated text, but in our tests, they're inconsistent at best.

Are ChatGPT chats public?

No. But OpenAI recently disclosed a bug, since fixed, that exposed the titles of some users' conversations to other people on the service.

What lawsuits are there surrounding ChatGPT?

None specifically targeting ChatGPT. But OpenAI is involved in at least one lawsuit that has implications for AI systems trained on publicly available data, which would touch on ChatGPT.

Are there issues regarding plagiarism with ChatGPT?

Yes.
Text-generating AI models like ChatGPT have a tendency to regurgitate content from their training data.
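The FAQ notes that ChatGPT's API launched on March 1, 2023. As a hedged illustration of how a chat-style request to such an API is typically assembled, here is a minimal sketch; the model name, system message, and field values are illustrative assumptions rather than details from this article, and no network call is made:

```python
import json

def build_chat_request(model: str, user_prompt: str) -> dict:
    """Assemble a chat-completion-style request body as a plain dict."""
    return {
        "model": model,
        "messages": [
            # A system message sets the assistant's behavior; the user
            # message carries the actual prompt.
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_prompt},
        ],
    }

payload = build_chat_request("gpt-4o", "Summarize the plagiarism concerns above.")
body = json.dumps(payload)  # serialized form that would be POSTed to the API
```

The messages list is the key design choice: each turn of the conversation is a role-tagged entry, which is why features like ChatGPT's memory can be framed as extra context prepended to this list.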