• Free alternatives to Photoshop, Office, Premiere, and Netflix

    You don't have to go for the paid software options. Image: Timothy Exodus/Unsplash

    Most of us are signed up to plenty of digital subscriptions, covering streaming services, cloud storage, fitness apps, and plenty more. This extends to software subscriptions, too: Both Adobe Photoshop and Microsoft Office (now Microsoft 365) ask for monthly or yearly subscriptions if you want to stay up to date.
    Add up $5 here and $10 there and you can soon find yourself paying out more each week than you want. What you might not know is that for just about every paid software program out there, there’s a perfectly adequate and free replacement—so you can cut your dependency on software subscriptions right down.
    GIMP is an image editor packed with features. Screenshot: GIMP
    The rather oddly named GIMP—it stands for GNU Image Manipulation Program—is a head-on challenger to Adobe Photoshop, with a lot of the same advanced features on offer across object selections and manipulations, layers, and effects. GIMP doesn’t have as much AI stuffed into it as Photoshop does, but you might see that as a benefit.
    Whether you want to touch up and enhance the photos you’ve taken, or you want to create digital art, GIMP can handle it all. Open up the software and you’ll see you get a wealth of tools to play around with; there are plenty of third-party extensions and customizations available too—plus lots of tutorials and more help on the web.
    Download GIMP for Windows or macOS.
    LibreOffice Writer is a solid alternative to Microsoft Word. Screenshot: LibreOffice
    Microsoft Office is now called Microsoft 365, but however you refer to it, it’s anchored by Word, Excel, and PowerPoint. While Microsoft asks for a one-off fee or regular subscription, you can use LibreOffice completely free of charge—including the equivalent apps Writer (documents), Calc (spreadsheets), and Impress (presentations).
    If you have any experience using the Microsoft apps, you’ll feel right at home inside the LibreOffice apps—and they can import and export using Office file formats too. And just because you’re not paying for the software doesn’t mean you’re missing out on features, because these programs come packed with a host of useful options and tools.
    Download LibreOffice for Windows or macOS.
    Watch as much as you want on Tubi, for free. Screenshot: Tubi
    When it comes to movies and shows, there are plenty of services that will charge you a fee for access, including Netflix. Not so Tubi, which is completely funded by ads. Okay, it might not have the latest and greatest selection of titles, but there’s still plenty to watch, completely free. You aren’t going to run out of viewing material anytime soon.
    Tubi is one of a growing number of FAST streaming services, which stands for free ad-supported streaming television; others you might want to check out include Pluto TV and the Roku Channel. While content on these platforms is usually older than on the alternatives, you’ll probably be surprised at how much good stuff there is.
    Watch Tubi on the web, or on Android or iOS.
    Use KeePass as your password manager
    KeePass is a simple, straightforward password manager. Screenshot: KeePass
    We’ve written before about the benefits of using a password manager, but most of them require a subscription to use all of their features. If a password manager offers a free plan at all, it usually restricts how many passwords you can save or how many devices you can sync between, or applies some other limitation.
    KeePass is different, as it’s completely free and open source (so you can look at the source code yourself, if you wish). It comes with plenty of features to keep your passwords private and secure, and while there’s only an official version for Windows, there are several unofficial ports so you can sync your passwords across macOS, Android, and iOS too.
    Download KeePass for Windows.
    Create videos with ease with OpenShot. Screenshot: OpenShot
    We’ll finish where we started, with an alternative to a program from the Adobe Creative Cloud suite. Unless you’re a professional filmmaker who needs the very best in industry-standard tools, OpenShot will give you everything you need in video editing features and options, and it’s capable of some impressive results.
    The extensive list includes support for key frame animations, an unlimited number of tracks, easy-to-use scaling and trimming tools, compositing, image overlays, title creation (including 3D titles), and support for a broad range of video, audio, and image formats. Despite all of those features and more, you won’t find it difficult to use.
    Download OpenShot for Windows or macOS.
  • Unironically the Best Case: Retro Silverstone FLP02 with Turbo Button

    June 6, 2025 | Last Updated: 2025-06-06
    Silverstone made the best case of Computex 2025 -- and it's actually shipping.

    The Highlights:
    • The FLP02 case is Silverstone's latest in its now growing lineup of retro-themed computer cases.
    • The FLP02 will be sold for around $220 if all things go as planned, or just under 200 EUR.
    • It includes modern features, like 360mm radiator support, but also mixes in old throwbacks.

    Intro
    We visited Silverstone’s booth at Computex 2025 and walked away thinking we saw the best case of the show.
    Editor's note: This was originally published on May 21, 2025 as a video. This content has been adapted to written format for this article and is unchanged from the original publication.
    Credits -- Host: Steve Burke. Editing, Camera: Mike Gaglione, Vitalii Makhnovets. Writing, Web Editing: Jimmy Thang.

    Silverstone FLP02
    Our favorite case happens to be Silverstone’s retro-inspired, beige FLP02. Its old theme may look like an April Fool’s joke, but it’s definitely going into mass production. The case evokes the look of computers from the 286 through 486 era, along with some of the early Pentium PCs. The case has a red power switch on the front along with a reset button, which actually follows the front lock. The turbo button, on the other hand, adjusts the fan speed, and the number display indicates how fast the fans are going.
    The FLP02 case is based on existing tooling. Internally, the case is set up pretty normally in some ways: The power supply shroud is present and on the bottom, and it’s punctured on the top for airflow. Back in the olden days, the PSU would be in the top. The FLP02 also has 5.25-inch hard drive cage support.
    The switches on the front of the case, which represent floppy drives, are actually functional. Releasing the lock allows the slot cover to come out. Silverstone tells us the mechanism we saw at Computex is very difficult to manufacture, so the company will probably create a stronger, more resilient mechanism; it showed us a 3D-printed mock-up of one.
    Internally, the back of the case has a 120mm fan, but it can fit a 140mm one. The top of the FLP02 can fit a 360mm radiator. The case also has a vertical GPU mount option -- obviously a more modern feature -- though it’s only for a 2-slot-wide mount, which restricts what kind of card you can put in it.
    For inspiration, Silverstone told us it Googled old computers and chose bits and pieces that it liked for the case’s design. Older computer cases wouldn’t have had a lot of ventilation on the front, but the FLP02 has some ventilation on the front bottom. Its top panel is also ventilated and has a dust filter, and the top of the case has options for multiple radiator sizes. The back side of the case has all of the modern cable management options, so it ends up being a mix of design from both old and new.
    In terms of pricing, Silverstone says it will probably be $220, but that’s based on the current tariff situation. In the European market, the company is looking at just under 200 euros.
    The case is also hiding some more modern features, like the front-panel USB ports, under covers to keep up the immersion that the case is old. We plan on reviewing the case when it comes out.

    Silverstone LD05
    Switching gears, Silverstone’s LD05 is a more modern fish-tank style ATX case that’s trying to hit a $100 price point, which is, again, dependent on the tariff situation. The company plans on providing 3x120mm ARGB fans. In terms of fan-mount locations, there are 2 on the side and a fan on the back, and there’s also space on the top for either 120mm or 140mm fans. The build we saw had 3x120mm ones.
    The case has a heavily ventilated power supply shroud, which also has a hard-drive cage within it, which is also perforated. Speaking of perforations, the back side panel is perforated too. The backside has some cable management space; it’s pretty standard. The LD05 also has white cables that try to match the case itself, though the color isn’t an exact match.

    Silverstone Alta T1
    The Alta T1 is a case we saw at last year’s Computex, and Silverstone tells us it will be over a grand.

    Silverstone Alta T2
    We saw a version of the T2 case last year. In terms of pricing, the T2 will be about $1,000. It has an aluminum shell. When we pulled off its bottom side panel at Computex, it revealed 1 of 2 installed power supplies in the system we looked at; the other PSU is right behind it. The shroud area also has drive mounts in the middle and the front, and the case itself has a ton of drive cage options. The T2 essentially acts like a home-server rendering farm of sorts. It’s got 11 slots for PCIe devices, making it one of the larger cases on the market for PCIe support. The case’s rail system allows you to basically mount whatever you want wherever you want.
    The top front of the case has a canted angle, with a plate that pulls off. There’s also another plate on the front bottom that pulls off and reveals the interior of the case. The T2 we saw also had 180mm fans installed in it.

    Silverstone Home Server Interview
    We also interviewed Tony from Silverstone, where he walked us through some of the company’s home-server style cases. Make sure you check out that interview in our video.
  • Why tech companies are snubbing the London Stock Exchange

    British fintech Wise said this week it would shift its primary listing from London to New York, joining a growing list of firms snubbing the London Stock Exchange.
    UK chip designer Arm opted for a New York IPO in 2023, while food delivery giant Just Eat Takeaway quit the LSE for Amsterdam in November. 
    Sweden’s Klarna has confirmed plans to go public in New York, following in the footsteps of fellow Stockholm-based tech darling Spotify, which listed on the NYSE in 2018. 
    The draw? Bigger valuations, deeper capital, and more appetite for risk.

    “The US economy continues to perform far better than the EU, and valuations are simply higher for companies that can list there,” Victor Basta, managing partner at Artis Partners, told TNW.   
    The numbers back him up. The NYSE boasts a market cap of around $27 trillion — compared to just $3.5 trillion for the LSE. 
    That scale — and the deep-pocketed investors it attracts — pushed Arm to list across the pond. Wise followed for the same reason, according to CEO Kristo Käärmann. 
    Käärmann said the move would tap “the biggest market opportunity in the world for our products today, and enable better access to the world’s deepest and most liquid capital market.” 
    Beyond sheer growth potential, US investors are also known for taking bigger bets on growth-stage tech companies.  
    “US investors understand the whole ‘revenue-before-profit’ strategy,”  Andrey Korchak, a British serial entrepreneur, told TNW. “Meanwhile, in Europe, they often want to see revenue from day one.” 
    That risk aversion, Korchak believes, restricts the growth of startups.
    “Europe just doesn’t have the same density of tech unicorns,” he said. “And when startups here do hit that billion-dollar mark, most still prefer to list in the US.”
    Sean Reddington, co-founder of UK tech firm Thrive, fears that Wise’s New York listing will deepen the problems. 
    “Wise’s move to the US signals a worrying trend,” he said. “It threatens a ‘brain drain’ of capital and talent, making it harder for growth-stage VCs to invest in UK scaleups without a clear US exit plan.”
    He called for urgent government action, including providing “meaningful incentives” for tech firms to list in the UK. 
    “If the ultimate reward of a domestic IPO is diminished, it pushes more companies to consider relocating or listing overseas,” he said.
    Europe’s startup struggles will be a hot topic at TNW Conference, which takes place on June 19-20 in Amsterdam. Tickets for the event are now on sale — use the code TNWXMEDIA2025 at checkout to get 30% off.

    Story by

    Siôn Geschwindt

    Siôn is a freelance science and technology reporter, specialising in climate and energy. From nuclear fusion breakthroughs to electric vehicles, he's happiest sourcing a scoop, investigating the impact of emerging technologies, and even putting them to the test. He has five years of journalism experience and holds a dual degree in media and environmental science from the University of Cape Town, South Africa. When he's not writing, you can probably find Siôn out hiking, surfing, playing the drums or catering to his moderate caffeine addiction. You can contact him at: sion.geschwindt [at] protonmail [dot] com

  • Meta Apps Have Been Covertly Tracking Android Users' Web Activity for Months

    I don't expect Meta to respect my data or my privacy, but the company continues to surprise me with how low they're willing to go in the name of data collection. The latest such story comes to us from a report titled "Disclosure: Covert Web-to-App Tracking via Localhost on Android." In short, Meta and Yandex (a Russian technology company) have been tracking potentially billions of Android users by abusing a security loophole in Android. That loophole allows the companies to access identifying browsing data from your web browser as long as you have their Android apps installed.

    How does this tracking work?
    As the report explains, Android allows any installed app with internet permissions to access the "loopback address," or localhost: an address a device uses to communicate with itself. As it happens, your web browser also has access to the localhost, which allows JavaScript embedded on certain websites to connect to Android apps and share browsing data and identifiers.
    What are those scripts, you might ask? In this case, they're Meta Pixel and Yandex Metrica, scripts that let companies track users on their sites. Trackers are an unfortunate part of the modern internet, but Meta Pixel is only supposed to be able to follow you while you browse the web. This loop lets Meta Pixel scripts send your browsing data, cookies, and identifiers back to installed Meta apps like Facebook and Instagram. The same goes for Yandex with its apps like Maps and Browser.
    You certainly didn't sign up for that when you installed Instagram on your Android device. But once you logged in, the next time you visited a website that embedded Meta Pixel, the script beamed your information back to the app. All of a sudden, Meta had identifying browsing data from your web activity, not via the browsing itself, but from the "unrelated" Instagram app.
    Chrome, Firefox, and Edge were all affected in these findings. DuckDuckGo blocked some but not all of the domains here, so it was "minimally affected." Brave does block requests to the localhost if you don't consent to them, so it did successfully protect users from this tracking.
    Researchers say Yandex has been doing this since February of 2017 on HTTP sites, and May of 2018 on HTTPS sites. Meta Pixel, on the other hand, hasn't been tracking this way for long: It only started in September of 2024 for HTTP, and ended that practice in October. It started via WebSocket and WebRTC STUN in November, and WebRTC TURN in May. Website owners apparently complained to Meta starting in September, asking why Meta Pixel communicates with the localhost. As far as researchers could find, Meta never responded.
    Researchers make it clear that this type of tracking is possible on iOS, as developers can establish localhost connections and apps can "listen in" too. However, they found no evidence of this tracking on iOS devices, and hypothesize that it has to do with how iOS restricts native apps running in the background.

    Meta has officially stopped this tracking
    The good news is, as of June 3, researchers say they have not observed Meta Pixel communicating with the localhost. They didn't say the same for Yandex Metrica, though Yandex told Ars Technica it was "discontinuing the practice." Ars Technica also reports that Google has opened an investigation into these actions that "blatantly violate our security and privacy principles."
    However, even if Meta has stopped this tracking following the report, the damage could be widespread. As highlighted in the report, estimates put Meta Pixel adoption anywhere from 2.4 million to 5.8 million sites. From there, researchers found that just over 17,000 Meta Pixel sites in the U.S. attempt to connect to the localhost, and over 78% of those do so without any user consent needed, including sites like AP News, Buzzfeed, and The Verge. That's a lot of websites that could have been sending your data back to your Facebook and Instagram apps. The report features a tool that you can use to look for affected sites, but notes the list is not exhaustive, and absence doesn't mean a site is safe.
    Meta sent me the following statement in response to my request for comment: “We are in discussions with Google to address a potential miscommunication regarding the application of their policies. Upon becoming aware of the concerns, we decided to pause the feature while we work with Google to resolve the issue.”
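    To make the localhost mechanism described above easier to picture, here is a minimal, hypothetical sketch of how a script running on a web page could hand a browser-side identifier to a native app listening on the device's loopback address. The port number, message format, and cookie name are invented for illustration only; the report describes Meta Pixel using WebSocket and WebRTC (STUN/TURN) channels, and nothing below is Meta's or Yandex's actual code.

```typescript
// Hypothetical illustration of web-to-app tracking over localhost.
// The port (12387), message shape, and "_track_id" cookie are made up;
// they are NOT the real Meta Pixel or Yandex Metrica protocol.
// A native app that binds a WebSocket server to 127.0.0.1 on this port
// would receive whatever the page sends.

function relayToLocalApp(cookieValue: string): void {
  // Browsers let page scripts open connections to the loopback address,
  // which is how a web script can reach an app on the same device.
  const socket = new WebSocket("ws://127.0.0.1:12387/");

  socket.addEventListener("open", () => {
    // Hand a browser-side identifier (e.g. a first-party cookie) to the app,
    // which can join it with the app's own logged-in identity.
    socket.send(JSON.stringify({ type: "browser_id", value: cookieValue }));
    socket.close();
  });

  socket.addEventListener("error", () => {
    // If no app is listening on that port, the connection simply fails.
  });
}

// Example: relay the value of a hypothetical tracking cookie named "_track_id".
const match = document.cookie.match(/(?:^|; )_track_id=([^;]*)/);
if (match) {
  relayToLocalApp(decodeURIComponent(match[1]));
}
```

    This is why browsers that block or gate page-initiated localhost requests, as the report says Brave does, break the channel: the script can still run, but the handoff to the installed app never happens.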
    #meta #apps #have #been #covertly
    Meta Apps Have Been Covertly Tracking Android Users' Web Activity for Months
    I don't expect Meta to respect my data or my privacy, but the company continues to surprise me with how low they're willing to go in the name of data collection. The latest such story comes to us from a report titled "Disclosure: Covert Web-to-App Tracking via Localhost on Android." In short, Meta and Yandexhave been tracking potentially billions of Android users by abusing a security loophole in Android. That loophole allows the companies to access identifying browsing data from your web browser as long as you have their Android apps installed. How does this tracking work?As the report explains, Android allows any installed app with internet permissions to access the "loopback address" or localhost, an address a device uses to communicate with itself. As it happens, your web browser also has access to the localhost, which allows JavaScripts embedded on certain websites to connect to Android apps and share browsing data and identifiers.What are those JavaScripts, you might ask? In this case, that's Meta Pixel and Yandex Metrica, scripts that let companies track users on their sites. Trackers are an unfortunate part of the modern internet, but Meta Pixel is only supposed to be able to follow you while you browse the web. This loop lets Meta Pixel scripts send your browsing data, cookies, and identifiers back to installed Meta apps like Facebook and Instagram. The same goes for Yandex with its apps like Maps and Browser.You certainly didn't sign up for that when you installed Instagram on your Android device. But once you logged in, the next time you visited a website that embedded Meta Pixel, the script beamed your information back to the app. All of a sudden, Meta had identifying browsing data from your web activity, not via the browsing itself, but from the "unrelated" Instagram app. Chrome, Firefox, and Edge were all affected in these findings. DuckDuckGo blocked some but not all of the domains here, so it was "minimally affected." Brave does block requests to the localhost if you don't consent to it, so it did successfully protect users from this tracking.Researchers say Yandex has been doing this since February of 2017 on HTTP sites, and May of 2018 on HTTPS sites. Meta Pixel, on the other hand, hasn't been tracking this way for long: It only started September of 2024 for HTTP, and ended that practice in October. It started via Websocket and WebRTC STUN in November, and WebRTC TURN in May. Website owners apparently complained to Meta starting in September, asking why Meta Pixel communicates with the localhost. As far as researchers could find, Meta never responded.Researchers make it clear that the type of tracking is possible on iOS, as developers can establish localhost connections and apps can "listen in" too. However, they found no evidence of this tracking on iOS devices, and hypothesize that it has to do with how iOS restricts native apps running in the background.Meta has officially stopped this tracking The good news is, as of June 3, researchers say they have not observed Meta Pixel communicating with the localhost. They didn't say the same for Yandex Metrika, though Yandex told Ars Technica it was "discontinuing the practice." Ars Technica also reports that Google has opened an investigation into these actions that "blatantly violate our security and privacy principles."However, even if Meta has stopped this tracking following the report, the damage could be widespread. 
As highlighted in the report, estimates put Meta Pixel adoption anywhere from 2.4 million to 5.8 million sites. From here, researchers found that just over 17,000 Meta Pixel sites in the U.S. attempt to connect to the localhost, and over 78% of those do so without any user consent needed, including sites like AP News, Buzzfeed, and The Verge. That's a lot of websites that could have been sending your data back to your Facebook and Instagram apps. The report features a tool that you can use to look for affected sites, but notes the list is not exhaustive, and absence doesn't mean the site is safe.Meta sent me the following statement in response to my request for comment: “We are in discussions with Google to address a potential miscommunication regarding the application of their policies. Upon becoming aware of the concerns, we decided to pause the feature while we work with Google to resolve the issue.” #meta #apps #have #been #covertly
    LIFEHACKER.COM
    Meta Apps Have Been Covertly Tracking Android Users' Web Activity for Months
    I don't expect Meta to respect my data or my privacy, but the company continues to surprise me with how low they're willing to go in the name of data collection. The latest such story comes to us from a report titled "Disclosure: Covert Web-to-App Tracking via Localhost on Android." In short, Meta and Yandex (a Russian technology company) have been tracking potentially billions of Android users by abusing a security loophole in Android. That loophole allows the companies to access identifying browsing data from your web browser as long as you have their Android apps installed. How does this tracking work?As the report explains, Android allows any installed app with internet permissions to access the "loopback address" or localhost, an address a device uses to communicate with itself. As it happens, your web browser also has access to the localhost, which allows JavaScripts embedded on certain websites to connect to Android apps and share browsing data and identifiers.What are those JavaScripts, you might ask? In this case, that's Meta Pixel and Yandex Metrica, scripts that let companies track users on their sites. Trackers are an unfortunate part of the modern internet, but Meta Pixel is only supposed to be able to follow you while you browse the web. This loop lets Meta Pixel scripts send your browsing data, cookies, and identifiers back to installed Meta apps like Facebook and Instagram. The same goes for Yandex with its apps like Maps and Browser.You certainly didn't sign up for that when you installed Instagram on your Android device. But once you logged in, the next time you visited a website that embedded Meta Pixel, the script beamed your information back to the app. All of a sudden, Meta had identifying browsing data from your web activity, not via the browsing itself, but from the "unrelated" Instagram app. Chrome, Firefox, and Edge were all affected in these findings. DuckDuckGo blocked some but not all of the domains here, so it was "minimally affected." Brave does block requests to the localhost if you don't consent to it, so it did successfully protect users from this tracking.Researchers say Yandex has been doing this since February of 2017 on HTTP sites, and May of 2018 on HTTPS sites. Meta Pixel, on the other hand, hasn't been tracking this way for long: It only started September of 2024 for HTTP, and ended that practice in October. It started via Websocket and WebRTC STUN in November, and WebRTC TURN in May. Website owners apparently complained to Meta starting in September, asking why Meta Pixel communicates with the localhost. As far as researchers could find, Meta never responded.Researchers make it clear that the type of tracking is possible on iOS, as developers can establish localhost connections and apps can "listen in" too. However, they found no evidence of this tracking on iOS devices, and hypothesize that it has to do with how iOS restricts native apps running in the background.Meta has officially stopped this tracking The good news is, as of June 3, researchers say they have not observed Meta Pixel communicating with the localhost. They didn't say the same for Yandex Metrika, though Yandex told Ars Technica it was "discontinuing the practice." Ars Technica also reports that Google has opened an investigation into these actions that "blatantly violate our security and privacy principles."However, even if Meta has stopped this tracking following the report, the damage could be widespread. 
As highlighted in the report, estimates put Meta Pixel adoption anywhere from 2.4 million to 5.8 million sites. From here, researchers found that just over 17,000 Meta Pixel sites in the U.S. attempt to connect to the localhost, and over 78% of those do so without any user consent needed, including sites like AP News, Buzzfeed, and The Verge. That's a lot of websites that could have been sending your data back to your Facebook and Instagram apps. The report features a tool that you can use to look for affected sites, but notes the list is not exhaustive, and absence doesn't mean the site is safe.Meta sent me the following statement in response to my request for comment: “We are in discussions with Google to address a potential miscommunication regarding the application of their policies. Upon becoming aware of the concerns, we decided to pause the feature while we work with Google to resolve the issue.”
  • How microwave tech can help reclaim critical materials from e-waste

    When the computer or phone you’re using right now blinks its last blink and you drop it off for recycling, do you know what happens?

    At the recycling center, powerful magnets will pull out steel. Spinning drums will toss aluminum into bins. Copper wires will get neatly bundled up for resale. But as the conveyor belt keeps rolling, tiny specks of valuable, lesser-known materials such as gallium, indium, and tantalum will be left behind.

    Those tiny specks are critical materials. They’re essential for building new technology, and they’re in short supply in the U.S. They could be reused, but there’s a problem: Current recycling methods make recovering critical minerals from e-waste too costly or hazardous, so many recyclers simply skip them.

    Sadly, most of these hard-to-recycle materials end up buried in landfills or get mixed into products like cement. But it doesn’t have to be this way. New technology is starting to make a difference.

    As demand for these critical materials keeps growing, discarded electronics can become valuable resources. My colleagues and I at West Virginia University are developing a new technology to change how we recycle. Instead of using toxic chemicals, our approach uses electricity, making it safer, cleaner, and more affordable to recover critical materials from electronics.

    How much e-waste are we talking about?

    Americans generated about 2.7 million tons of electronic waste in 2018, according to the latest federal data. When uncounted electronics are included, the U.S. recycles only about 15% of its total e-waste, a United Nations survey suggests.

    Even worse, nearly half the electronics that people in North America send to recycling centers end up shipped overseas. They often land in scrapyards, where workers may use dangerous methods like burning or leaching with harsh chemicals to pull out valuable metals. These practices can harm both the environment and workers’ health. That’s why the Environmental Protection Agency restricts these methods in the U.S.

    The tiny specks matter

    Critical minerals are in most of the technology around you. Every phone screen has a super-thin layer of a material called indium tin oxide. LEDs glow because of a metal called gallium. Tantalum stores energy in tiny electronic parts called capacitors.

    All of these materials are flagged as “high risk” on the U.S. Department of Energy’s critical materials list. That means the U.S. relies heavily on these materials for important technologies, but their supply could easily be disrupted by conflicts, trade disputes, or shortages.

    Right now, just a few countries, including China, control most of the mining, processing, and recovery of these materials, making the U.S. vulnerable if those countries decide to limit exports or raise prices.

    These materials aren’t cheap, either. For example, the U.S. Geological Survey reports that gallium was priced between $220 and $500 per kilogram in 2024. That’s roughly 50 times more expensive than common metals like copper, which sold for $9.48 per kilogram in 2024.
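
    As a quick back-of-the-envelope check on that comparison, the short sketch below divides the quoted gallium prices by the quoted copper price; the "50 times" figure corresponds to the top of the gallium price range.

        # Rough price-ratio check using the 2024 USGS figures quoted above (USD per kilogram).
        gallium_low, gallium_high = 220, 500
        copper = 9.48

        low_ratio = gallium_low / copper    # about 23x
        high_ratio = gallium_high / copper  # about 53x
        print(f"gallium costs roughly {low_ratio:.0f}x to {high_ratio:.0f}x as much as copper")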

    Revolutionizing recycling with microwaves

    At West Virginia University’s Department of Mechanical, Materials, and Aerospace Engineering, materials scientist Edward Sabolsky and I asked a simple question: Could we find a way to heat only specific parts of electronic waste to recover these valuable materials?

    If we could focus the heat on just the tiny specks of critical minerals, we might be able to recycle them easily and efficiently.

    The solution we found: microwaves.

    This equipment isn’t very different from the microwave ovens you use to heat food at home, just bigger and more powerful. The basic science is the same: Electromagnetic waves cause electrons to oscillate, creating heat.

    In our approach, though, we’re not heating water molecules like you do when cooking. Instead, we heat carbon, the black residue that collects around a candle flame or car tailpipe. Carbon heats up much faster in a microwave than water does. But don’t try this at home; your kitchen microwave wasn’t designed for such high temperatures.

    In our recycling method, we first shred the electronic waste, mix it with materials called fluxes that trap impurities, and then heat the mixture with microwaves. The microwaves rapidly heat the carbon that comes from the plastics and adhesives in the e-waste. This causes the carbon to react with the tiny specks of critical materials. The result: a tiny piece of pure, sponge-like metal about the size of a grain of rice.

    This metal can then be easily separated from leftover waste using filters.

    So far, in our laboratory tests, we have successfully recovered about 80% of the gallium, indium, and tantalum from e-waste, at purities between 95% and 97%. We have also demonstrated how it can be integrated with existing recycling processes.

    Why the Department of Defense is interested

    Our recycling technology got its start with help from a program funded by the Defense Department’s Advanced Research Projects Agency, or DARPA.

    Many important technologies, from radar systems to nuclear reactors, depend on these special materials. While the Department of Defense uses smaller quantities of them than the commercial market does, they remain a national security concern.

    We’re planning to launch larger pilot projects next to test the method on smartphone circuit boards, LED lighting parts, and server cards from data centers. These tests will help us fine-tune the design for a bigger system that can recycle tons of e-waste per hour instead of just a few pounds. That could mean producing up to 50 pounds of these critical minerals per hour from every ton of e-waste processed.
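
    For a rough sense of what that yield could be worth, the sketch below converts the 50-pounds-per-ton figure to kilograms and prices it as if it were all gallium at the 2024 prices cited earlier. That is deliberately an upper-bound, illustrative assumption: the recovered material is actually a mix of gallium, indium, and tantalum, and the real split depends on the feedstock.

        # Illustrative upper-bound value of 50 lb of recovered material per ton of e-waste,
        # priced as if it were entirely gallium (it is not; see the caveat above).
        LB_PER_KG = 2.20462

        recovered_lb_per_ton = 50
        recovered_kg = recovered_lb_per_ton / LB_PER_KG       # about 22.7 kg

        gallium_low, gallium_high = 220, 500                  # USD per kg, USGS 2024
        print(f"{recovered_kg:.1f} kg per ton -> roughly "
              f"${recovered_kg * gallium_low:,.0f} to ${recovered_kg * gallium_high:,.0f} "
              f"if it were all gallium")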

    If the technology works as expected, we believe this approach could help meet the nation’s demand for critical materials.

    How to make e-waste recycling common

    One way e-waste recycling could become more common is if Congress held electronics companies responsible for recycling their products and recovering the critical materials inside. Closing loopholes that allow companies to ship e-waste overseas, instead of processing it safely in the U.S., could also help build a reserve of recovered critical minerals.

    But the biggest change may come from simple economics. Once technology becomes available to recover these tiny but valuable specks of critical materials quickly and affordably, the U.S. can transform domestic recycling and take a big step toward solving its shortage of critical materials.

    Terence Musho is an associate professor of engineering at West Virginia University.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.
  • Is generative AI really 'just a tool'?

    "AI is inevitable."That's a phrase that's rattled around my head for a month. Not willingly mind you. It's taken up lodging in my grey matter after hearing it in meetings, reading it in emails, and seeing it buffeted back and forth across Bluesky, LinkedIn, and Discord.It's not a convincing phrase. If you hear it from AI boosters it's easy to brush off as raw hype, and if you hear it from doomsayers it can lull you into a sense of fatalism. But as the philosopher Natasha Bedingfield told us in 2004, today is where the book begins, the rest is still unwritten. Nothing, for better or worse, is inevitable.But in those various calls another phrase—one you may have heard at your studio—has slipped past more unnoticed: "AI is just a tool. It can be used for good or evil, like any other tool."After all this is a business where we use tools for good or evil, right and wrong, correctly and incorrectly. We debate the effectiveness of Unity, Unreal, or Godot. We agonize over whether to use procedural versus hand-crafted content. We debate and discuss the topic so much that Game Developers Conference has a whole Tools Summit dedicated to craft of making game development software.Viewing generative AI through the neutral lens of tool assessment is natural—and I'll go so far as to say admirable—for our community. It's a method we use to get past hype and bombast, to try and take technology on its own terms and see how it fits our purposes. And as the 2025 GDC State of the Industry report tells us, some developers are adopting generative AI, plenty of them not bought in on the hype but through the act of seeking the right tool for the job.Related:But looking at generative AI as 'just a tool' is a deeply flawed lens. That phrase betrays a quiet cynicism. Because nothing—not generative AI, not a firearm, not even a hammer—is "just a tool."The function of tools is influenced by their formConsider two tools found in many American households: the claw hammer and the handgun.Normally Game Developer restricts itself to the craft of making video games but I promise this is relevant. Guns are another tool where neutralizing rhetoric is deployed to downplay a tool's negative effects. I grew up in a gun-owning house in a gun-owning neighborhood in suburban Maryland. There were probably four handguns sitting in lockboxes across two rooms, a few rifles and shotguns in a vault in the basement, and one questionably legal World War I firearm tucked away in a closet. The NRA's mantra of "guns don't kill people, people kill people" was commonplace. A neighbor of mine laughed when I advocated for stronger regulations on gun ownership on the basis of "guns are meant to kill." "Guns aren't meant to kill," I recall him saying. "Cars can kill people. Does that mean cars are meant for killing?"His point boils down to this: The outcome of the tool's use is not worth considering when discussing regulation, only its potential use. A gun is a tool and the user has control over a tool is used.Cars are already tightly regulated and cost thousands of dollars, making his point moot, so we'll break down the construction of the claw hammer instead. We generally refer to hammers as being used to pound nails into wood, but I mainly use mine for hammering anchors into drywall because I'm a theater kid and was taught in crew to trust screws.In either case, the physical shape of the claw hammer dictates its most common purpose. The handle extends into a metal object that is blunt at one end, and clawed on the other. 
    The design follows the swing of the human arm, transferring kinetic energy generated by the bicep, down the elbow, through the wrist, and into the blunt end.

    We also know that claw hammers are not useful for every form of transferring this energy. Variations on hammer design like the ball-peen hammer show how this basic purpose needs to be altered for different tasks. The shape and the material change depending on the purpose. To sell more hammers, companies invest in better materials and affordances like rubber grips to make their use more comfortable.

    Like a firearm, hammers can be used as weapons. That same transference of force can be used to harm another living being. Video games sometimes place hammers in a player's loadout alongside guns, grenades, and weapons of war.

    Neither the hammer nor the firearm is "just" a tool. They are tools that are optimized for a purpose. We can study that purpose, and cast judgments about a tool's safety, merits, and need to be regulated based on it.

    But. The shape of the hammer is not an efficient way to inflict harm. This is supported by the FBI's crime statistics, which compile data filed by participating police departments. "Handgun" is the most common weapon used in homicides, and "knife/cutting instrument" ranks higher than "blunt objects." That's because handguns are an incredibly efficient means of wounding living beings.

    Let's break down the handgun the way we did the hammer. Handguns are assembled from an assortment of components that transfer the squeeze of a trigger into the strike of a hammer against a firing pin, which strikes the primer of a bullet's cartridge and sends the bullet propelling out of the barrel. Though some bullets seen in larger firearms are meant to penetrate metal, a handgun's bullet is envisioned and designed to cut through flesh.

    These constraints make handguns efficient at few other tasks. In a pinch you could use the butt of a handgun as a hammer, but I can't find any data about them being used for that purpose; I can only wander onto a construction site and count the number of firearms in toolboxes as a general sample size.

    Neither the hammer nor the firearm is "just" a tool. They are tools that are optimized for a purpose. We can study that purpose, and cast judgments about a tool's safety, merits, and need to be regulated based on it. Firearm advocates oppose this process through neutralizing language because it's difficult to dispute the correlation between the number of guns in a geographic area and the number of murders and assaults committed with guns there.

    Generative AI proponents sometimes regurgitate that language when defending this new technology. Because, like the gun lobby, they don't want the purpose of generative AI decided by its outcomes, only its potential.

    What is that purpose? It may be the death of truth itself.

    Generative AI is broadly used to deceive through mimicry

    Generative AI is a tool for deception.

    That's not what its biggest backers will tell you. It's broadly pitched as a tool for efficiency. But efficiency is hard to measure and easy to game. Deception is loud and obvious. Students are using it to cheat on papers. Scam calls with AI-generated voices are on the rise. The Department of Health and Human Services published a report, likely generated with AI, that echoes Secretary Kennedy's unfounded health views and cites nonexistent studies.
    There was that cadre of YouTubers creating AI-generated fake movie trailers to attract clicks and make money off people who don't follow entertainment news closely. Apple marketed Apple Intelligence with advertisements showing people deceiving their neighbors, family, and coworkers. Activision Blizzard used generative AI to advertise games that don't exist.

    Now here's the rub: games—and all of entertainment—are also a form of deception. We use the phrase "magic circle" to describe how we attract players into our worlds. We use camera tricks, rendering technology, and even VO barks to simulate digital worlds. People engage with games, film, TV, books, and especially magic shows because on some level they want to be not just deceived, but lied to. AI has also been sold as technology that will let every player make their own perfect experience, tailored for them by generating worlds, visual assets, and audio on the fly. But the best pitches I've heard for AI tend to "hide" the presence of the LLM, only mildly asking the player for prompts in order to accomplish behind-the-scenes computing tasks. These lies can make shared realities, not wholly distinct ones.

    That is the difference between telling lies to make virtual worlds and telling lies to shape the real one. Lies in virtual worlds create shared realities. Lies in the real world tear them down.

    How appropriate that one such "shared reality," the Star Wars show Andor, recently warned us about the price we pay for treating AI as "just a tool." "The loss of an objective reality is perhaps the most dangerous," said the character Mon Mothma in a climactic speech decrying the whitewashing of a carefully executed genocide. "When truth leaves us, when we let it slip away, when it is ripped from our hands, we become vulnerable to the appetite of whatever monster screams the loudest."

    Game Developers Conference and Game Developer are sibling organizations under Informa.
  • ChatGPT: Everything you need to know about the AI-powered chatbot

    ChatGPT, OpenAI’s text-generating AI chatbot, has taken the world by storm since its launch in November 2022. What started as a tool to supercharge productivity through writing essays and code with short text prompts has evolved into a behemoth with 300 million weekly active users.
    2024 was a big year for OpenAI, from its partnership with Apple on its generative AI offering, Apple Intelligence, to the release of GPT-4o with voice capabilities and the highly anticipated launch of its text-to-video model Sora.
    OpenAI also faced its share of internal drama, including the notable exits of high-level execs like co-founder and longtime chief scientist Ilya Sutskever and CTO Mira Murati. OpenAI has also been hit with lawsuits from Alden Global Capital-owned newspapers alleging copyright infringement, as well as an injunction from Elon Musk to halt OpenAI’s transition to a for-profit.
    In 2025, OpenAI is battling the perception that it’s ceding ground in the AI race to Chinese rivals like DeepSeek. The company has been trying to shore up its relationship with Washington as it simultaneously pursues an ambitious data center project, and as it reportedly lays the groundwork for one of the largest funding rounds in history.
    Below, you’ll find a timeline of ChatGPT product updates and releases, starting with the latest, which we’ve been updating throughout the year. If you have any other questions, check out our ChatGPT FAQ here.
    To see a list of 2024 updates, go here.
    Timeline of the most recent ChatGPT updates

    May 2025
    OpenAI CFO says hardware will drive ChatGPT’s growth
    OpenAI plans to purchase Jony Ive’s devices startup io in a multibillion-dollar deal. Sarah Friar, CFO of OpenAI, thinks that the hardware will significantly enhance ChatGPT and broaden OpenAI’s reach to a larger audience in the future.
    OpenAI’s ChatGPT unveils its AI coding agent, Codex
    OpenAI has introduced its AI coding agent, Codex, powered by codex-1, a version of its o3 AI reasoning model designed for software engineering tasks. OpenAI says codex-1 generates more precise and “cleaner” code than o3. The coding agent may take anywhere from one to 30 minutes to complete tasks such as writing simple features, fixing bugs, answering questions about your codebase, and running tests.
    Sam Altman aims to make ChatGPT more personalized by tracking every aspect of a person’s life
    Sam Altman, the CEO of OpenAI, said during a recent AI event hosted by VC firm Sequoia, in response to an attendee’s question about personalization, that he wants ChatGPT to record and remember every detail of a person’s life.
    OpenAI releases its GPT-4.1 and GPT-4.1 mini AI models in ChatGPT
    OpenAI said in a post on X that it has launched its GPT-4.1 and GPT-4.1 mini AI models in ChatGPT.
    OpenAI has launched a new feature for ChatGPT deep research to analyze code repositories on GitHub. The ChatGPT deep research feature is in beta and lets developers connect with GitHub to ask questions about codebases and engineering documents. The connector will soon be available for ChatGPT Plus, Pro, and Team users, with support for Enterprise and Education coming shortly, per an OpenAI spokesperson.
    OpenAI launches a new data residency program in Asia
    After introducing a data residency program in Europe in February, OpenAI has now launched a similar program in Asian countries including India, Japan, Singapore, and South Korea. The new program will be accessible to users of ChatGPT Enterprise, ChatGPT Edu, and API. It will help organizations in Asia meet their local data sovereignty requirements when using OpenAI’s products.
    OpenAI to introduce a program to grow AI infrastructure
    OpenAI is unveiling a program called OpenAI for Countries, which aims to develop the necessary local infrastructure to serve international AI clients better. The AI startup will work with governments to assist with increasing data center capacity and customizing OpenAI’s products to meet specific language and local needs. OpenAI for Countries is part of efforts to support the company’s expansion of its AI data center Project Stargate to new locations outside the U.S., per Bloomberg.
    OpenAI promises to make changes to prevent future ChatGPT sycophancy
    OpenAI has announced its plan to make changes to its procedures for updating the AI models that power ChatGPT, following an update that caused the platform to become overly sycophantic for many users.
    April 2025
    OpenAI clarifies the reason ChatGPT became overly flattering and agreeable
    OpenAI has published a post explaining the recent sycophancy issues with GPT-4o, the default AI model powering ChatGPT, which led the company to revert an update to the model released the previous week. CEO Sam Altman acknowledged the issue on Sunday and confirmed two days later that the GPT-4o update was being rolled back. OpenAI is working on “additional fixes” to the model’s personality. Over the weekend, users on social media criticized the new model for making ChatGPT too validating and agreeable. It quickly became a popular meme.
    OpenAI is working to fix a “bug” that let minors engage in inappropriate conversations
    An issue within OpenAI’s ChatGPT enabled the chatbot to create graphic erotic content for accounts registered by users under the age of 18, as demonstrated by TechCrunch’s testing, a fact later confirmed by OpenAI. “Protecting younger users is a top priority, and our Model Spec, which guides model behavior, clearly restricts sensitive content like erotica to narrow contexts such as scientific, historical, or news reporting,” a spokesperson told TechCrunch via email. “In this case, a bug allowed responses outside those guidelines, and we are actively deploying a fix to limit these generations.”
    OpenAI has added a few features to ChatGPT search, its web search tool, to give users an improved online shopping experience. The company says people can ask super-specific questions using natural language and receive customized results. The chatbot provides recommendations, images, and reviews of products in various categories such as fashion, beauty, home goods, and electronics.
    OpenAI wants its open AI model to access cloud models for assistance
    OpenAI leaders have been talking about allowing the open model to link up with OpenAI’s cloud-hosted models to improve its ability to respond to intricate questions, two sources familiar with the situation told TechCrunch.
    OpenAI aims to make its new “open” AI model the best on the market
    OpenAI is preparing to launch an AI system that will be openly accessible, allowing users to download it for free without any API restrictions. Aidan Clark, OpenAI’s VP of research, is spearheading the development of the open model, which is in the very early stages, sources familiar with the situation told TechCrunch.
    OpenAI’s GPT-4.1 may be less aligned than earlier models
    OpenAI released a new AI model called GPT-4.1 in mid-April. However, multiple independent tests indicate that the model is less reliable than previous OpenAI releases. The company skipped publishing a separate safety report for GPT-4.1, claiming in a statement to TechCrunch that “GPT-4.1 is not a frontier model, so there won’t be a separate system card released for it.”
    OpenAI’s o3 AI model scored lower than expected on a benchmark
    Questions have been raised regarding OpenAI’s transparency and model-testing procedures after a gap emerged between first- and third-party benchmark results for the o3 AI model. OpenAI introduced o3 in December, stating that the model could solve approximately 25% of questions on FrontierMath, a difficult math problem set. Epoch AI, the research institute behind FrontierMath, found that o3 achieved a score of approximately 10%, significantly lower than OpenAI’s top reported score.
    OpenAI unveils Flex processing for cheaper, slower AI tasks
    OpenAI has launched a new API feature called Flex processing that allows users to use AI models at a lower cost but with slower response times and occasional resource unavailability. Flex processing is available in beta on the o3 and o4-mini reasoning models for non-production tasks like model evaluations, data enrichment, and asynchronous workloads.
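    For developers, Flex processing surfaces as a per-request option rather than a separate endpoint. Below is a minimal sketch of what a call might look like with OpenAI's Python SDK; the service_tier parameter and the o4-mini model name follow OpenAI's announcement, but treat the exact call shape as an assumption and check the current API reference before relying on it.

        # Hedged sketch: opting one request into Flex processing for a non-urgent task.
        # Assumes the `openai` Python SDK and an OPENAI_API_KEY in the environment.
        from openai import OpenAI

        client = OpenAI(timeout=900.0)  # Flex requests can be slow, so allow a generous timeout

        response = client.chat.completions.create(
            model="o4-mini",          # Flex is in beta on the o3 and o4-mini reasoning models
            service_tier="flex",      # cheaper, slower processing; capacity is not guaranteed
            messages=[{"role": "user",
                       "content": "Label this support ticket as bug, feature request, or question."}],
        )
        print(response.choices[0].message.content)

    Because capacity under Flex is not guaranteed, production code would also want retry handling for the occasional resource-unavailability errors the feature can return.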
    OpenAI’s latest AI models now have a safeguard against biorisks
    OpenAI has rolled out a new system to monitor its AI reasoning models, o3 and o4-mini, for biological and chemical threats. The system is designed to prevent the models from giving advice that could potentially lead to harmful attacks, as stated in OpenAI’s safety report.
    OpenAI launches its latest reasoning models, o3 and o4-mini
    OpenAI has released two new reasoning models, o3 and o4-mini, just two days after launching GPT-4.1. The company claims o3 is the most advanced reasoning model it has developed, while o4-mini is said to provide a balance of price, speed, and performance. The new models stand out from previous reasoning models because they can use ChatGPT features like web browsing, coding, and image processing and generation. But they hallucinate more than several of OpenAI’s previous models.
    OpenAI has added a new section to ChatGPT to offer easier access to AI-generated images for all user tiers
    OpenAI introduced a new “library” section to give users easier access to their AI-generated images on mobile and web platforms, per the company’s X post.
    OpenAI could “adjust” its safeguards if rivals release “high-risk” AI
    OpenAI said on Tuesday that it might revise its safety standards if “another frontier AI developer releases a high-risk system without comparable safeguards.” The move shows how much pressure commercial AI developers are under to deploy models quickly amid increased competition.
    OpenAI is currently in the early stages of developing its own social media platform to compete with Elon Musk’s X and Mark Zuckerberg’s Instagram and Threads, according to The Verge. It is unclear whether OpenAI intends to launch the social network as a standalone application or incorporate it into ChatGPT.
    OpenAI will remove its largest AI model, GPT-4.5, from the API in July
    OpenAI will discontinue its largest AI model, GPT-4.5, from its API even though it launched only in late February. GPT-4.5 will remain available as a research preview for paying ChatGPT customers. Developers can use GPT-4.5 through OpenAI’s API until July 14; after that, they will need to switch to GPT-4.1, which was released on April 14.
    OpenAI unveils GPT-4.1 AI models that focus on coding capabilities
    OpenAI has launched three members of the GPT-4.1 model family — GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano — with a specific focus on coding capabilities. The models are accessible via the OpenAI API but not ChatGPT. In the competition to develop advanced programming models, GPT-4.1 will rival AI models such as Google’s Gemini 2.5 Pro, Anthropic’s Claude 3.7 Sonnet, and DeepSeek’s upgraded V3.
    OpenAI will discontinue ChatGPT’s GPT-4 at the end of April
    OpenAI plans to sunset GPT-4, an AI model introduced more than two years ago, and replace it with GPT-4o, the current default model, per a changelog update. The change takes effect on April 30. GPT-4 will remain available via OpenAI’s API.
    OpenAI could release GPT-4.1 soon
    OpenAI may launch several new AI models, including GPT-4.1, soon, The Verge reported, citing anonymous sources. GPT-4.1 would be an update of OpenAI’s GPT-4o, which was released last year. On the list of upcoming models are GPT-4.1 and smaller versions like GPT-4.1 mini and nano, per the report.
    OpenAI has updated ChatGPT to use information from your previous conversations
    OpenAI started updating ChatGPT to enable the chatbot to remember previous conversations with a user and customize its responses based on that context. This feature is rolling out to ChatGPT Pro and Plus users first, excluding those in the U.K., EU, Iceland, Liechtenstein, Norway, and Switzerland.
    OpenAI is working on watermarks for images made with ChatGPT
    It looks like OpenAI is working on a watermarking feature for images generated using GPT-4o. AI researcher Tibor Blaho spotted a new “ImageGen” watermark feature in the new beta of ChatGPT’s Android app. Blaho also found mentions of other tools: “Structured Thoughts,” “Reasoning Recap,” “CoT Search Tool,” and “l1239dk1.”
    OpenAI offers ChatGPT Plus for free to U.S., Canadian college students
    OpenAI is offering its ChatGPT Plus subscription tier for free to all college students in the U.S. and Canada through the end of May. The offer will let millions of students use OpenAI’s premium service, which includes access to the company’s GPT-4o model, image generation, voice interaction, and research tools that are not available in the free version.
    ChatGPT users have generated over 700M images so far
    More than 130 million users have created over 700 million images since ChatGPT got the upgraded image generator on March 25, according to OpenAI COO Brad Lightcap. The image generator was made available to all ChatGPT users on March 31, and went viral for being able to create Ghibli-style images.
    OpenAI’s o3 model could cost more to run than initial estimate
    The Arc Prize Foundation, which develops the AI benchmark tool ARC-AGI, has updated its estimate of the computing costs for running OpenAI’s o3 “reasoning” model on ARC-AGI. The organization originally published an estimate of what the best-performing configuration of o3 it tested, o3 high, would cost to solve a single problem, and it now believes the true per-task cost could be much higher than that initial figure.
    OpenAI CEO says capacity issues will cause product delays
    In a series of posts on X, OpenAI CEO Sam Altman said the company’s new image-generation tool’s popularity may cause product releases to be delayed. “We are getting things under control, but you should expect new releases from OpenAI to be delayed, stuff to break, and for service to sometimes be slow as we deal with capacity challenges,” he wrote.
    March 2025
    OpenAI plans to release a new ‘open’ AI language model
    OpenAI intends to release its “first” open language model since GPT-2 “in the coming months.” The company plans to host developer events to gather feedback and eventually showcase prototypes of the model. The first developer event is to be held in San Francisco, with sessions to follow in Europe and Asia.
    OpenAI removes ChatGPT’s restrictions on image generation
    OpenAI made a notable change to its content moderation policies after the success of its new image generator in ChatGPT, which went viral for being able to create Studio Ghibli-style images. The company has updated its policies to allow ChatGPT to generate images of public figures, hateful symbols, and racial features when requested. OpenAI had previously declined such prompts due to the potential controversy or harm they may cause. However, the company has now “evolved” its approach, as stated in a blog post published by Joanne Jang, the lead for OpenAI’s model behavior.
    OpenAI adopts Anthropic’s standard for linking AI models with data
    OpenAI wants to incorporate Anthropic’s Model Context Protocol (MCP) into all of its products, including the ChatGPT desktop app. MCP, an open-source standard, helps AI models generate more accurate and suitable responses to specific queries, and lets developers create bidirectional links between data sources and AI applications like chatbots. The protocol is currently available in the Agents SDK, and support for the ChatGPT desktop app and Responses API will be coming soon, OpenAI CEO Sam Altman said.
    OpenAI’s viral Studio Ghibli-style images could raise AI copyright concerns
    The latest update of the image generator on OpenAI’s ChatGPT has triggered a flood of AI-generated memes in the style of Studio Ghibli, the Japanese animation studio behind blockbuster films like “My Neighbor Totoro” and “Spirited Away.” The burgeoning mass of Ghibli-esque images has sparked concerns about whether OpenAI has violated copyright laws, especially since the company is already facing legal action for using source material without authorization.
    OpenAI expects revenue to triple this year
    OpenAI expects its revenue to roughly triple in 2025, fueled by the performance of its paid AI software, Bloomberg reported, citing an anonymous source. While the startup doesn’t expect to reach positive cash flow until 2029, it expects revenue to climb significantly again in 2026, the report said.
    ChatGPT has upgraded its image-generation feature
    OpenAI on Tuesday rolled out a major upgrade to ChatGPT’s image-generation capabilities: ChatGPT can now use the GPT-4o model to generate and edit images and photos directly. The feature went live earlier this week in ChatGPT and Sora, OpenAI’s AI video-generation tool, for subscribers of the company’s Pro plan, and will be available soon to ChatGPT Plus subscribers and developers using the company’s API service. The company’s CEO, Sam Altman, said on Wednesday, however, that the rollout of the image-generation feature to free users would be delayed due to higher-than-expected demand.
    OpenAI announces leadership updates
    Brad Lightcap, OpenAI’s chief operating officer, will lead the company’s global expansion and manage corporate partnerships as CEO Sam Altman shifts his focus to research and products, according to a blog post from OpenAI. Lightcap, who previously worked with Altman at Y Combinator, joined the Microsoft-backed startup in 2018. OpenAI also said Mark Chen would step into the expanded role of chief research officer, and Julia Villagra will take on the role of chief people officer.
    OpenAI’s AI voice assistant now has advanced features
    OpenAI has updated its AI voice assistant with improved chatting capabilities, according to a video posted on Monday to the company’s official media channels. The update enables real-time conversations, and the AI assistant is said to be more personable and to interrupt users less often. Users on ChatGPT’s free tier can now access the new version of Advanced Voice Mode, while paying users will receive answers that are “more direct, engaging, concise, specific, and creative,” a spokesperson from OpenAI told TechCrunch.
    OpenAI and Meta have separately engaged in discussions with Indian conglomerate Reliance Industries regarding potential collaborations to enhance their AI services in the country, per a report by The Information. One key topic under discussion is Reliance Jio distributing OpenAI’s ChatGPT. Reliance has also proposed selling OpenAI’s models to businesses in India through an application programming interface (API) so they can incorporate AI into their operations. Meta, meanwhile, plans to bolster its presence in India by constructing a large 3GW data center in Jamnagar, Gujarat. OpenAI, Meta, and Reliance have not yet officially announced these plans.
    OpenAI faces privacy complaint in Europe for chatbot’s defamatory hallucinations
    Noyb, a privacy rights advocacy group, is supporting an individual in Norway who was shocked to discover that ChatGPT was providing false information about him, stating that he had been found guilty of killing two of his children and trying to harm the third. “The GDPR is clear. Personal data has to be accurate,” said Joakim Söderberg, data protection lawyer at Noyb, in a statement. “If it’s not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”
    OpenAI upgrades its transcription and voice-generating AI models
    OpenAI has added new transcription and voice-generating AI models to its APIs: a text-to-speech model, “gpt-4o-mini-tts,” that delivers more nuanced and realistic-sounding speech, as well as two speech-to-text models called “gpt-4o-transcribe” and “gpt-4o-mini-transcribe.” The company claims they are improved versions of its existing audio models and that they hallucinate less.
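    For developers, the new models slot into the existing audio endpoints. Here is a minimal sketch assuming the official openai Python SDK; the file names, voice choice, and input text are placeholder assumptions.

    from openai import OpenAI

    client = OpenAI()

    # Speech-to-text: transcribe a local audio file with the newer model.
    with open("meeting.mp3", "rb") as audio_file:  # placeholder file name
        transcript = client.audio.transcriptions.create(
            model="gpt-4o-transcribe",
            file=audio_file,
        )
    print(transcript.text)

    # Text-to-speech: synthesize speech with gpt-4o-mini-tts and save it.
    with client.audio.speech.with_streaming_response.create(
        model="gpt-4o-mini-tts",
        voice="alloy",  # one of the preset voices
        input="The new transcription models are designed to hallucinate less.",
    ) as speech:
        speech.stream_to_file("summary.mp3")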
    OpenAI has launched o1-pro, a more powerful version of its o1 model
    OpenAI has introduced o1-pro in its developer API. OpenAI says o1-pro uses more computing power than its o1 “reasoning” AI model to deliver “consistently better responses.” It’s only accessible to select developers who have met a minimum spend on OpenAI API services. Pricing is set per million input tokens and per million output tokens, at roughly twice GPT-4.5’s input price and 10 times the price of regular o1.
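    As a rough illustration of how a developer would call it, here is a minimal sketch assuming the openai Python SDK and that the model is reachable through the Responses API; the prompt is purely illustrative.

    from openai import OpenAI

    client = OpenAI()

    # o1-pro is exposed through the developer API rather than ChatGPT.
    response = client.responses.create(
        model="o1-pro",
        input="Walk through the trade-offs of using a Bloom filter for deduplication.",
    )

    # output_text concatenates the model's text output for convenience.
    print(response.output_text)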
    Noam Brown, who heads AI reasoning research at OpenAI, thinks that certain types of AI models for “reasoning” could have been developed 20 years ago if researchers had understood the correct approach and algorithms.
    OpenAI says it has trained an AI that’s “really good” at creative writing
    OpenAI CEO Sam Altman said, in a post on X, that the company has trained a “new model” that’s “really good” at creative writing. He posted a lengthy sample from the model given the prompt “Please write a metafictional literary short story about AI and grief.” OpenAI has not extensively explored the use of AI for writing fiction; the company has mostly concentrated on challenges in rigid, predictable areas such as math and programming, and models tuned for those areas might not be that great at creative writing at all.
    OpenAI rolled out new tools designed to help developers and businesses build AI agents — automated systems that can independently accomplish tasks — using the company’s own AI models and frameworks. The tools are part of OpenAI’s new Responses API, which enables enterprises to develop customized AI agents that can perform web searches, scan through company files, and navigate websites, similar to OpenAI’s Operator product. The Responses API effectively replaces OpenAI’s Assistants API, which the company plans to discontinue in the first half of 2026.
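    To give a sense of the shape of the new API, here is a minimal sketch of a Responses API call that lets the model use the built-in web search tool. It assumes the openai Python SDK, and the tool type string reflects the preview naming used around launch, so treat it as an assumption and check the current documentation.

    from openai import OpenAI

    client = OpenAI()

    # Ask the model to ground its answer with the built-in web search tool.
    response = client.responses.create(
        model="gpt-4o",
        tools=[{"type": "web_search_preview"}],  # preview-era tool name; may change
        input="Find two recent articles about AI agents and summarize them.",
    )
    print(response.output_text)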
    OpenAI reportedly plans to charge steep monthly fees for specialized AI ‘agents’
    OpenAI intends to release several “agent” products tailored for different applications, including sorting and ranking sales leads and software engineering, according to a report from The Information. A “high-income knowledge worker” agent and a software developer agent are both reportedly set to carry hefty monthly price tags, while the most expensive rumored agents, said to be aimed at supporting “PhD-level research,” are expected to cost the most. The jaw-dropping figures are indicative of how much cash OpenAI needs right now: The company reportedly lost billions of dollars last year after paying for costs related to running its services and other expenses. It’s unclear when these agentic tools might launch or which customers will be eligible to buy them.
    ChatGPT can directly edit your code
    The latest version of the macOS ChatGPT app allows users to edit code directly in supported developer tools, including Xcode, VS Code, and JetBrains IDEs. ChatGPT Plus, Pro, and Team subscribers can use the feature now, and the company plans to roll it out to Enterprise, Edu, and free users.
    ChatGPT’s weekly active users doubled in less than 6 months, thanks to new releases
    According to a new report from VC firm Andreessen Horowitz, OpenAI’s AI chatbot, ChatGPT, experienced solid growth in the second half of 2024. It took ChatGPT nine months to increase its weekly active users from 100 million in November 2023 to 200 million in August 2024, but less than six months to double that number again, according to the report. ChatGPT’s weekly active users reached 300 million by December 2024 and 400 million by February 2025. ChatGPT has grown significantly thanks to the launch of new models and features with multimodal capabilities, such as GPT-4o; usage spiked from April to May 2024, shortly after that model’s launch.
    February 2025
    OpenAI cancels its o3 AI model in favor of a ‘unified’ next-gen release
    OpenAI has effectively canceled the release of o3 in favor of what CEO Sam Altman is calling a “simplified” product offering. In a post on X, Altman said that, in the coming months, OpenAI will release a model called GPT-5 that “integrates a lot of technology,” including o3, in ChatGPT and its API. As a result of that roadmap decision, OpenAI no longer plans to release o3 as a standalone model.
    ChatGPT may not be as power-hungry as once assumed
    A commonly cited stat is that ChatGPT requires around 3 watt-hours of power to answer a single question. Using OpenAI’s latest default model for ChatGPT, GPT-4o, as a reference, nonprofit AI research institute Epoch AI found the average ChatGPT query consumes around 0.3 watt-hours. However, the analysis doesn’t consider the additional energy costs incurred by ChatGPT with features like image generation or input processing.
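    To put the two per-query figures in perspective, here is a back-of-the-envelope comparison; the daily query volume is a hypothetical round number chosen for illustration, not an OpenAI statistic.

    # Rough comparison of the two per-query energy estimates at an assumed scale.
    OLD_ESTIMATE_WH = 3.0   # commonly cited figure, watt-hours per query
    NEW_ESTIMATE_WH = 0.3   # Epoch AI's estimate for a typical GPT-4o query
    QUERIES_PER_DAY = 1_000_000_000  # hypothetical round number, for illustration only

    for label, wh_per_query in (("3 Wh estimate", OLD_ESTIMATE_WH),
                                ("0.3 Wh estimate", NEW_ESTIMATE_WH)):
        mwh_per_day = wh_per_query * QUERIES_PER_DAY / 1_000_000  # Wh -> MWh
        print(f"{label}: {mwh_per_day:,.0f} MWh per day")
    # Prints roughly 3,000 MWh per day versus 300 MWh per day at that assumed volume.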
    OpenAI now reveals more of its o3-mini model’s thought process
    In response to pressure from rivals like DeepSeek, OpenAI is changing the way its o3-mini model communicates its step-by-step “thought” process. ChatGPT users will see an updated “chain of thought” that shows more of the model’s “reasoning” steps and how it arrived at answers to questions.
    You can now use ChatGPT web search without logging in
    OpenAI is now allowing anyone to use ChatGPT web search without having to log in. While OpenAI had previously allowed users to ask ChatGPT questions without signing in, responses were restricted to the chatbot’s last training update. This only applies through ChatGPT.com, however. To use ChatGPT in any form through the native mobile app, you will still need to be logged in.
    OpenAI unveils a new ChatGPT agent for ‘deep research’
    OpenAI announced a new AI “agent” called deep research that’s designed to help people conduct in-depth, complex research using ChatGPT. OpenAI says the “agent” is intended for instances where you don’t just want a quick answer or summary, but instead need to assiduously consider information from multiple websites and other sources.
    January 2025
    OpenAI used a subreddit to test AI persuasion
    OpenAI used the subreddit r/ChangeMyView to measure the persuasive abilities of its AI reasoning models. OpenAI says it collects user posts from the subreddit and asks its AI models to write replies, in a closed environment, that would change the Reddit user’s mind on a subject. The company then shows the responses to testers, who assess how persuasive the argument is, and finally OpenAI compares the AI models’ responses to human replies for that same post. 
    OpenAI launches o3-mini, its latest ‘reasoning’ model
    OpenAI launched a new AI “reasoning” model, o3-mini, the newest in the company’s o family of models. OpenAI first previewed the model in December alongside a more capable system called o3. OpenAI is pitching its new model as both “powerful” and “affordable.”
    ChatGPT’s mobile users are 85% male, report says
    A new report from app analytics firm Appfigures found that over half of ChatGPT’s mobile users are under age 25, with users between ages 50 and 64 making up the second largest age demographic. The gender gap among ChatGPT users is even more significant. Appfigures estimates that across age groups, men make up 84.5% of all users.
    OpenAI launches ChatGPT plan for US government agencies
    OpenAI launched ChatGPT Gov, a product designed to provide U.S. government agencies an additional way to access the tech. ChatGPT Gov includes many of the capabilities found in OpenAI’s corporate-focused tier, ChatGPT Enterprise. OpenAI says that ChatGPT Gov enables agencies to more easily manage their own security, privacy, and compliance, and could expedite internal authorization of OpenAI’s tools for the handling of non-public sensitive data.
    More teens report using ChatGPT for schoolwork, despite the tech’s faults
    Younger Gen Zers are embracing ChatGPT for schoolwork, according to a new survey by the Pew Research Center. In a follow-up to its 2023 poll on ChatGPT usage among young people, Pew asked ~1,400 U.S.-based teens ages 13 to 17 whether they’ve used ChatGPT for homework or other school-related assignments. Twenty-six percent said that they had, double the share from two years ago. Just over half of teens responding to the poll said they think it’s acceptable to use ChatGPT for researching new subjects. But considering the ways ChatGPT can fall short, the results are possibly cause for alarm.
    OpenAI says it may store deleted Operator data for up to 90 days
    OpenAI says that it might store chats and associated screenshots from customers who use Operator, the company’s AI “agent” tool, for up to 90 days — even after a user manually deletes them. While OpenAI has a similar deleted data retention policy for ChatGPT, the retention period for ChatGPT is only 30 days, which is 60 days shorter than Operator’s.
    OpenAI launches Operator, an AI agent that performs tasks autonomously
    OpenAI is launching a research preview of Operator, a general-purpose AI agent that can take control of a web browser and independently perform certain actions. Operator promises to automate tasks such as booking travel accommodations, making restaurant reservations, and shopping online.
    Operator, OpenAI’s agent tool, could be released sooner rather than later. Changes to ChatGPT’s code base suggest that Operator will be available as an early research preview to users on the Pro subscription plan. The changes aren’t yet publicly visible, but a user on X who goes by Choi spotted these updates in ChatGPT’s client-side code. TechCrunch separately identified the same references to Operator on OpenAI’s website.
    OpenAI tests phone number-only ChatGPT signups
    OpenAI has begun testing a feature that lets new ChatGPT users sign up with only a phone number — no email required. The feature is currently in beta in the U.S. and India. However, users who create an account using their number can’t upgrade to one of OpenAI’s paid plans without verifying their account via an email. Multi-factor authentication also isn’t supported without a valid email.
    ChatGPT now lets you schedule reminders and recurring tasks
    ChatGPT’s new beta feature, called tasks, allows users to set simple reminders. For example, you can ask ChatGPT to remind you when your passport expires in six months, and the AI assistant will follow up with a push notification on whatever platform you have tasks enabled. The feature will start rolling out to ChatGPT Plus, Team, and Pro users around the globe this week.
    New ChatGPT feature lets users assign it traits like ‘chatty’ and ‘Gen Z’
    OpenAI is introducing a new way for users to customize their interactions with ChatGPT. Some users found they can specify a preferred name or nickname and “traits” they’d like the chatbot to have. OpenAI suggests traits like “Chatty,” “Encouraging,” and “Gen Z.” However, some users reported that the new options have disappeared, so it’s possible they went live prematurely.
    FAQs:
    What is ChatGPT? How does it work?
    ChatGPT is a general-purpose chatbot, developed by tech startup OpenAI, that uses artificial intelligence to generate text after a user enters a prompt. The chatbot is powered by GPT-4, a large language model that relies on deep learning to produce human-like text.
    When did ChatGPT get released?
    ChatGPT was released for public use on November 30, 2022.
    What is the latest version of ChatGPT?
    Both the free version of ChatGPT and the paid ChatGPT Plus are regularly updated with new GPT models. The most recent model is GPT-4o.
    Can I use ChatGPT for free?
    Yes. In addition to the paid ChatGPT Plus tier, there is a free version of ChatGPT that only requires a sign-in.
    Who uses ChatGPT?
    Anyone can use ChatGPT! More and more tech companies and search engines are utilizing the chatbot to automate text or quickly answer user questions/concerns.
    What companies use ChatGPT?
    Multiple enterprises utilize ChatGPT, although others may limit the use of the AI-powered tool.
    Most recently, Microsoft announced at its 2023 Build conference that it is integrating its ChatGPT-based Bing experience into Windows 11. Brooklyn-based 3D display startup Looking Glass uses ChatGPT to produce holograms you can converse with, and nonprofit organization Solana officially integrated the chatbot into its network with a ChatGPT plug-in geared toward end users to help onboard them into the web3 space.
    What does GPT mean in ChatGPT?
    GPT stands for Generative Pre-Trained Transformer.
    What is the difference between ChatGPT and a chatbot?
    A chatbot can be any software/system that holds dialogue with you/a person but doesn’t necessarily have to be AI-powered. For example, there are chatbots that are rules-based in the sense that they’ll give canned responses to questions.
    ChatGPT is AI-powered and utilizes LLM technology to generate text after a prompt.
    Can ChatGPT write essays?
    Yes.
    Can ChatGPT commit libel?
    Due to the nature of how these models work, they don’t know or care whether something is true, only that it looks true. That’s a problem when you’re using it to do your homework, sure, but when it accuses you of a crime you didn’t commit, that may well at this point be libel.
    We will see how handling troubling statements produced by ChatGPT will play out over the next few months as tech and legal experts attempt to tackle the fastest moving target in the industry.
    Does ChatGPT have an app?
    Yes, there is a free ChatGPT mobile app for iOS and Android users.
    What is the ChatGPT character limit?
    ChatGPT’s character limit isn’t officially documented anywhere, but users have reported that responses start running into limits after around 500 words.
    Does ChatGPT have an API?
    Yes, it was released March 1, 2023.
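    A minimal call looks like the following sketch, assuming the official openai Python SDK and an API key in your environment; the model name is just an example of a generally available model, so substitute whichever one your account can access.

    from openai import OpenAI

    client = OpenAI()  # uses the OPENAI_API_KEY environment variable

    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": "Explain what the ChatGPT API does in one sentence."},
        ],
    )
    print(completion.choices[0].message.content)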
    What are some sample everyday uses for ChatGPT?
    Everyday examples include programming, scripts, email replies, listicles, blog ideas, summarization, etc.
    What are some advanced uses for ChatGPT?
    Advanced use examples include debugging code, explaining programming languages and scientific concepts, complex problem solving, etc.
    How good is ChatGPT at writing code?
    It depends on the nature of the program. While ChatGPT can write workable Python code, it can’t necessarily program an entire app’s worth of code. That’s because ChatGPT lacks context awareness — in other words, the generated code isn’t always appropriate for the specific context in which it’s being used.
    Can you save a ChatGPT chat?
    Yes. OpenAI allows users to save chats in the ChatGPT interface, stored in the sidebar of the screen. There are no built-in sharing features yet.
    Are there alternatives to ChatGPT?
    Yes. There are multiple AI-powered chatbot competitors such as Together, Google’s Gemini and Anthropic’s Claude, and developers are creating open source alternatives.
    How does ChatGPT handle data privacy?
    OpenAI has said that individuals in “certain jurisdictions” can object to the processing of their personal information by its AI models by filling out this form. This includes the ability to make requests for deletion of AI-generated references about you. However, OpenAI notes it may not grant every request, since it must balance privacy requests against freedom of expression “in accordance with applicable laws”.
    The web form for requesting deletion of data about you is titled “OpenAI Personal Data Removal Request”.
    In its privacy policy, the ChatGPT maker makes a passing acknowledgement of the objection requirements attached to relying on “legitimate interest”, pointing users toward more information about requesting an opt-out: “See here for instructions on how you can opt out of our use of your information to train our models.”
    What controversies have surrounded ChatGPT?
    Recently, Discord announced that it had integrated OpenAI’s technology into its bot named Clyde, and two users subsequently tricked Clyde into providing them with instructions for making the illegal drug methamphetamine and the incendiary mixture napalm.
    An Australian mayor has publicly announced he may sue OpenAI for defamation due to ChatGPT’s false claims that he had served time in prison for bribery. This would be the first defamation lawsuit against the text-generating service.
    CNET found itself in the midst of controversy after Futurism reported the publication was publishing articles under a mysterious byline completely generated by AI. The private equity company that owns CNET, Red Ventures, was accused of using ChatGPT for SEO farming, even if the information was incorrect.
    Several major school systems and colleges, including New York City Public Schools, have banned ChatGPT from their networks and devices. They claim that the AI impedes the learning process by promoting plagiarism and misinformation, a claim that not every educator agrees with.
    There have also been cases of ChatGPT accusing individuals of false crimes.
    Where can I find examples of ChatGPT prompts?
    Several marketplaces host and provide ChatGPT prompts, either for free or for a nominal fee. One is PromptBase. Another is ChatX. More launch every day.
    Can ChatGPT be detected?
    Poorly. Several tools claim to detect ChatGPT-generated text, but in our tests, they’re inconsistent at best.
    Are ChatGPT chats public?
    No. But OpenAI recently disclosed a bug, since fixed, that exposed the titles of some users’ conversations to other people on the service.
    What lawsuits are there surrounding ChatGPT?
    None specifically targeting ChatGPT. But OpenAI is involved in at least one lawsuit that has implications for AI systems trained on publicly available data, which would touch on ChatGPT.
    Are there issues regarding plagiarism with ChatGPT?
    Yes. Text-generating AI models like ChatGPT have a tendency to regurgitate content from their training data.
There are no built-in sharing features yet. Are there alternatives to ChatGPT? Yes. There are multiple AI-powered chatbot competitors such as Together, Google’s Gemini and Anthropic’s Claude, and developers are creating open source alternatives. How does ChatGPT handle data privacy? OpenAI has said that individuals in “certain jurisdictions”can object to the processing of their personal information by its AI models by filling out this form. This includes the ability to make requests for deletion of AI-generated references about you. Although OpenAI notes it may not grant every request since it must balance privacy requests against freedom of expression “in accordance with applicable laws”. The web form for making a deletion of data about you request is entitled “OpenAI Personal Data Removal Request”. In its privacy policy, the ChatGPT maker makes a passing acknowledgement of the objection requirements attached to relying on “legitimate interest”, pointing users towards more information about requesting an opt out — when it writes: “See here for instructions on how you can opt out of our use of your information to train our models.” What controversies have surrounded ChatGPT? Recently, Discord announced that it had integrated OpenAI’s technology into its bot named Clyde where two users tricked Clyde into providing them with instructions for making the illegal drug methamphetamineand the incendiary mixture napalm. An Australian mayor has publicly announced he may sue OpenAI for defamation due to ChatGPT’s false claims that he had served time in prison for bribery. This would be the first defamation lawsuit against the text-generating service. CNET found itself in the midst of controversy after Futurism reported the publication was publishing articles under a mysterious byline completely generated by AI. The private equity company that owns CNET, Red Ventures, was accused of using ChatGPT for SEO farming, even if the information was incorrect. Several major school systems and colleges, including New York City Public Schools, have banned ChatGPT from their networks and devices. They claim that the AI impedes the learning process by promoting plagiarism and misinformation, a claim that not every educator agrees with. There have also been cases of ChatGPT accusing individuals of false crimes. Where can I find examples of ChatGPT prompts? Several marketplaces host and provide ChatGPT prompts, either for free or for a nominal fee. One is PromptBase. Another is ChatX. More launch every day. Can ChatGPT be detected? Poorly. Several tools claim to detect ChatGPT-generated text, but in our tests, they’re inconsistent at best. Are ChatGPT chats public? No. But OpenAI recently disclosed a bug, since fixed, that exposed the titles of some users’ conversations to other people on the service. What lawsuits are there surrounding ChatGPT? None specifically targeting ChatGPT. But OpenAI is involved in at least one lawsuit that has implications for AI systems trained on publicly available data, which would touch on ChatGPT. Are there issues regarding plagiarism with ChatGPT? Yes. Text-generating AI models like ChatGPT have a tendency to regurgitate content from their training data. #chatgpt #everything #you #need #know
    TECHCRUNCH.COM
    ChatGPT: Everything you need to know about the AI-powered chatbot
    ChatGPT, OpenAI’s text-generating AI chatbot, has taken the world by storm since its launch in November 2022. What started as a tool to supercharge productivity through writing essays and code with short text prompts has evolved into a behemoth with 300 million weekly active users. 2024 was a big year for OpenAI, from its partnership with Apple on its generative AI offering, Apple Intelligence, to the release of GPT-4o with voice capabilities and the highly anticipated launch of its text-to-video model Sora. OpenAI also faced its share of internal drama, including the notable exits of high-level execs like co-founder and longtime chief scientist Ilya Sutskever and CTO Mira Murati. OpenAI has also been hit with lawsuits from Alden Global Capital-owned newspapers alleging copyright infringement, as well as an injunction request from Elon Musk to halt OpenAI’s transition to a for-profit. In 2025, OpenAI is battling the perception that it’s ceding ground in the AI race to Chinese rivals like DeepSeek. The company has been trying to shore up its relationship with Washington as it simultaneously pursues an ambitious data center project, and as it reportedly lays the groundwork for one of the largest funding rounds in history. Below, you’ll find a timeline of ChatGPT product updates and releases, starting with the latest, which we’ve been updating throughout the year. If you have any other questions, check out our ChatGPT FAQ here. To see a list of 2024 updates, go here. Timeline of the most recent ChatGPT updates May 2025 OpenAI CFO says hardware will drive ChatGPT’s growth OpenAI plans to purchase Jony Ive’s devices startup io for $6.4 billion. Sarah Friar, CFO of OpenAI, thinks that the hardware will significantly enhance ChatGPT and broaden OpenAI’s reach to a larger audience in the future. OpenAI unveils its AI coding agent, Codex OpenAI has introduced its AI coding agent, Codex, powered by codex-1, a version of its o3 AI reasoning model designed for software engineering tasks. OpenAI says codex-1 generates more precise and “cleaner” code than o3. The coding agent may take anywhere from one to 30 minutes to complete tasks such as writing simple features, fixing bugs, answering questions about your codebase, and running tests. Sam Altman aims to make ChatGPT more personalized by tracking every aspect of a person’s life Sam Altman, the CEO of OpenAI, said during a recent AI event hosted by VC firm Sequoia, when one attendee asked how ChatGPT could become more personalized, that he wants ChatGPT to record and remember every detail of a person’s life. OpenAI releases its GPT-4.1 and GPT-4.1 mini AI models in ChatGPT OpenAI said in a post on X that it has launched its GPT-4.1 and GPT-4.1 mini AI models in ChatGPT. OpenAI has launched a new feature for ChatGPT deep research to analyze code repositories on GitHub. The ChatGPT deep research feature is in beta and lets developers connect with GitHub to ask questions about codebases and engineering documents.
The connector will soon be available for ChatGPT Plus, Pro, and Team users, with support for Enterprise and Education coming shortly, per an OpenAI spokesperson. OpenAI launches a new data residency program in Asia After introducing a data residency program in Europe in February, OpenAI has now launched a similar program in Asian countries including India, Japan, Singapore, and South Korea. The new program will be accessible to users of ChatGPT Enterprise, ChatGPT Edu, and the API. It will help organizations in Asia meet their local data sovereignty requirements when using OpenAI’s products. OpenAI to introduce a program to grow AI infrastructure OpenAI is unveiling a program called OpenAI for Countries, which aims to develop the necessary local infrastructure to serve international AI clients better. The AI startup will work with governments to assist with increasing data center capacity and customizing OpenAI’s products to meet specific language and local needs. OpenAI for Countries is part of efforts to support the company’s expansion of its AI data center Project Stargate to new locations outside the U.S., per Bloomberg. OpenAI promises to make changes to prevent future ChatGPT sycophancy OpenAI has announced its plan to make changes to its procedures for updating the AI models that power ChatGPT, following an update that caused the platform to become overly sycophantic for many users. April 2025 OpenAI clarifies the reason ChatGPT became overly flattering and agreeable OpenAI has released a post on the recent sycophancy issues with the default AI model powering ChatGPT, GPT-4o, leading the company to revert an update to the model released last week. CEO Sam Altman acknowledged the issue on Sunday and confirmed two days later that the GPT-4o update was being rolled back. OpenAI is working on “additional fixes” to the model’s personality. Over the weekend, users on social media criticized the new model for making ChatGPT too validating and agreeable. It became a popular meme fast. OpenAI is working to fix a “bug” that let minors engage in inappropriate conversations An issue within OpenAI’s ChatGPT enabled the chatbot to create graphic erotic content for accounts registered by users under the age of 18, as demonstrated by TechCrunch’s testing, a fact later confirmed by OpenAI. “Protecting younger users is a top priority, and our Model Spec, which guides model behavior, clearly restricts sensitive content like erotica to narrow contexts such as scientific, historical, or news reporting,” a spokesperson told TechCrunch via email. “In this case, a bug allowed responses outside those guidelines, and we are actively deploying a fix to limit these generations.” OpenAI has added a few features to ChatGPT search, its web search tool within ChatGPT, to give users an improved online shopping experience. The company says people can ask super-specific questions using natural language and receive customized results. The chatbot provides recommendations, images, and reviews of products in various categories such as fashion, beauty, home goods, and electronics. OpenAI wants its open AI model to access cloud models for assistance OpenAI leaders have been talking about allowing its upcoming open model to link up with OpenAI’s cloud-hosted models to improve its ability to respond to intricate questions, two sources familiar with the situation told TechCrunch.
OpenAI aims to make its new “open” AI model the best on the market OpenAI is preparing to launch an AI system that will be openly accessible, allowing users to download it for free without any API restrictions. Aidan Clark, OpenAI’s VP of research, is spearheading the development of the open model, which is in the very early stages, sources familiar with the situation told TechCrunch. OpenAI’s GPT-4.1 may be less aligned than earlier models OpenAI released a new AI model called GPT-4.1 in mid-April. However, multiple independent tests indicate that the model is less reliable than previous OpenAI releases. The company skipped its usual step of publishing a safety report (system card) for GPT-4.1, claiming in a statement to TechCrunch that “GPT-4.1 is not a frontier model, so there won’t be a separate system card released for it.” OpenAI’s o3 AI model scored lower than expected on a benchmark Questions have been raised regarding OpenAI’s transparency and procedures for testing models after a discrepancy was detected between first- and third-party benchmark results for the o3 AI model. OpenAI introduced o3 in December, stating that the model could solve approximately 25% of questions on FrontierMath, a difficult math problem set. Epoch AI, the research institute behind FrontierMath, discovered that o3 achieved a score of approximately 10%, which was significantly lower than OpenAI’s top-reported score. OpenAI unveils Flex processing for cheaper, slower AI tasks OpenAI has launched a new API feature called Flex processing that allows users to use AI models at a lower cost but with slower response times and occasional resource unavailability. Flex processing is available in beta on the o3 and o4-mini reasoning models for non-production tasks like model evaluations, data enrichment, and asynchronous workloads (a minimal example request is sketched below). OpenAI’s latest AI models now have a safeguard against biorisks OpenAI has rolled out a new system to monitor its AI reasoning models, o3 and o4-mini, for biological and chemical threats. The system is designed to prevent models from giving advice that could potentially lead to harmful attacks, as stated in OpenAI’s safety report. OpenAI launches its latest reasoning models, o3 and o4-mini OpenAI has released two new reasoning models, o3 and o4-mini, just two days after launching GPT-4.1. The company claims o3 is the most advanced reasoning model it has developed, while o4-mini is said to provide a balance of price, speed, and performance. The new models stand out from previous reasoning models because they can use ChatGPT features like web browsing, coding, and image processing and generation. But they hallucinate more than several of OpenAI’s previous models. OpenAI has added a new section to ChatGPT to offer easier access to AI-generated images for all user tiers OpenAI introduced a new section called “library” to make it easier for users to create images on mobile and web platforms, per the company’s X post. OpenAI could “adjust” its safeguards if rivals release “high-risk” AI OpenAI said on Tuesday that it might revise its safety standards if “another frontier AI developer releases a high-risk system without comparable safeguards.” The move shows how commercial AI developers face more pressure to rapidly deploy models due to increased competition.
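To make the Flex processing description above concrete, here is a rough sketch of what such a request could look like. This is a minimal sketch under stated assumptions, not an official example: it assumes a recent openai Python SDK, that Flex is selected through a service_tier parameter set to "flex", and that o4-mini is an eligible model; the prompt and timeout value are purely illustrative.

```python
# Hypothetical sketch of a Flex processing request (assumptions noted above):
# cheaper, slower calls intended for non-production work such as model
# evaluations or batch data enrichment, per the article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o4-mini",          # one of the models the article says Flex supports in beta
    service_tier="flex",      # assumption: Flex is requested via this parameter
    messages=[
        {"role": "user", "content": "Classify this support ticket as billing, bug, or other: ..."}
    ],
    timeout=900.0,            # Flex jobs may queue, so allow a generous per-request timeout
)
print(response.choices[0].message.content)
```

Because Flex trades latency for cost, it fits the evaluation and enrichment jobs the article mentions far better than anything user-facing.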
OpenAI is currently in the early stages of developing its own social media platform to compete with Elon Musk’s X and Mark Zuckerberg’s Instagram and Threads, according to The Verge. It is unclear whether OpenAI intends to launch the social network as a standalone application or incorporate it into ChatGPT. OpenAI will remove its largest AI model, GPT-4.5, from the API in July OpenAI will discontinue its largest AI model, GPT-4.5, from its API even though it was just launched in late February. GPT-4.5 will remain available as a research preview for paying ChatGPT customers. Developers can use GPT-4.5 through OpenAI’s API until July 14; then, they will need to switch to GPT-4.1, which was released on April 14. OpenAI unveils GPT-4.1 AI models that focus on coding capabilities OpenAI has launched three members of the GPT-4.1 model family — GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano — with a specific focus on coding capabilities. They’re accessible via the OpenAI API but not ChatGPT. In the competition to develop advanced programming models, GPT-4.1 will rival AI models such as Google’s Gemini 2.5 Pro, Anthropic’s Claude 3.7 Sonnet, and DeepSeek’s upgraded V3. OpenAI will discontinue ChatGPT’s GPT-4 at the end of April OpenAI plans to sunset GPT-4, an AI model introduced more than two years ago, and replace it with GPT-4o, the current default model, per its changelog. It will take effect on April 30. GPT-4 will remain available via OpenAI’s API. OpenAI could release GPT-4.1 soon OpenAI may launch several new AI models, including GPT-4.1, soon, The Verge reported, citing anonymous sources. GPT-4.1 would be an update of OpenAI’s GPT-4o, which was released last year. On the list of upcoming models are GPT-4.1 and smaller versions like GPT-4.1 mini and nano, per the report. OpenAI has updated ChatGPT to use information from your previous conversations OpenAI started updating ChatGPT to enable the chatbot to remember previous conversations with a user and customize its responses based on that context. This feature is rolling out to ChatGPT Pro and Plus users first, excluding those in the U.K., EU, Iceland, Liechtenstein, Norway, and Switzerland. OpenAI is working on watermarks for images made with ChatGPT It looks like OpenAI is working on a watermarking feature for images generated using GPT-4o. AI researcher Tibor Blaho spotted a new “ImageGen” watermark feature in the new beta of ChatGPT’s Android app. Blaho also found mentions of other tools: “Structured Thoughts,” “Reasoning Recap,” “CoT Search Tool,” and “l1239dk1.” OpenAI offers ChatGPT Plus for free to U.S., Canadian college students OpenAI is offering its $20-per-month ChatGPT Plus subscription tier for free to all college students in the U.S. and Canada through the end of May. The offer will let millions of students use OpenAI’s premium service, which offers access to the company’s GPT-4o model, image generation, voice interaction, and research tools that are not available in the free version. ChatGPT users have generated over 700M images so far More than 130 million users have created over 700 million images since ChatGPT got the upgraded image generator on March 25, according to OpenAI COO Brad Lightcap. The image generator was made available to all ChatGPT users on March 31, and went viral for being able to create Ghibli-style images. OpenAI’s o3 model could cost more to run than initially estimated The Arc Prize Foundation, which develops the AI benchmark tool ARC-AGI, has updated its estimate of the computing costs for OpenAI’s o3 “reasoning” model on the ARC-AGI benchmark. The organization originally estimated that the best-performing configuration of o3 it tested, o3 high, would cost approximately $3,000 to address a single problem.
The Foundation now thinks the cost could be much higher, possibly around $30,000 per task. OpenAI CEO says capacity issues will cause product delays In a series of posts on X, OpenAI CEO Sam Altman said the company’s new image-generation tool’s popularity may cause product releases to be delayed. “We are getting things under control, but you should expect new releases from OpenAI to be delayed, stuff to break, and for service to sometimes be slow as we deal with capacity challenges,” he wrote. March 2025 OpenAI plans to release a new ‘open’ AI language model OpenAI intends to release its “first” open language model since GPT-2 “in the coming months.” The company plans to host developer events to gather feedback and eventually showcase prototypes of the model. The first developer event is to be held in San Francisco, with sessions to follow in Europe and Asia. OpenAI removes ChatGPT’s restrictions on image generation OpenAI made a notable change to its content moderation policies after the success of its new image generator in ChatGPT, which went viral for being able to create Studio Ghibli-style images. The company has updated its policies to allow ChatGPT to generate images of public figures, hateful symbols, and racial features when requested. OpenAI had previously declined such prompts due to the potential controversy or harm they may cause. However, the company has now “evolved” its approach, as stated in a blog post published by Joanne Jang, the lead for OpenAI’s model behavior. OpenAI adopts Anthropic’s standard for linking AI models with data OpenAI wants to incorporate Anthropic’s Model Context Protocol (MCP) into all of its products, including the ChatGPT desktop app. MCP, an open-source standard, helps AI models generate more accurate and suitable responses to specific queries, and lets developers create bidirectional links between data sources and AI applications like chatbots. The protocol is currently available in the Agents SDK, and support for the ChatGPT desktop app and Responses API will be coming soon, OpenAI CEO Sam Altman said. OpenAI’s viral Studio Ghibli-style images could raise AI copyright concerns The latest update of the image generator on OpenAI’s ChatGPT has triggered a flood of AI-generated memes in the style of Studio Ghibli, the Japanese animation studio behind blockbuster films like “My Neighbor Totoro” and “Spirited Away.” The burgeoning mass of Ghibli-esque images has sparked concerns about whether OpenAI has violated copyright laws, especially since the company is already facing legal action for using source material without authorization. OpenAI expects revenue to triple to $12.7 billion this year OpenAI expects its revenue to triple to $12.7 billion in 2025, fueled by the performance of its paid AI software, Bloomberg reported, citing an anonymous source. While the startup doesn’t expect to reach positive cash flow until 2029, it expects revenue to increase significantly in 2026 to surpass $29.4 billion, the report said. ChatGPT has upgraded its image-generation feature OpenAI on Tuesday rolled out a major upgrade to ChatGPT’s image-generation capabilities: ChatGPT can now use the GPT-4o model to generate and edit images and photos directly. The feature went live earlier this week in ChatGPT and Sora, OpenAI’s AI video-generation tool, for subscribers of the company’s Pro plan, priced at $200 a month, and will be available soon to ChatGPT Plus subscribers and developers using the company’s API service.
The company’s CEO Sam Altman said on Wednesday, however, that the release of the image generation feature to free users would be delayed due to higher demand than the company expected. OpenAI announces leadership updates Brad Lightcap, OpenAI’s chief operating officer, will lead the company’s global expansion and manage corporate partnerships as CEO Sam Altman shifts his focus to research and products, according to a blog post from OpenAI. Lightcap, who previously worked with Altman at Y Combinator, joined the Microsoft-backed startup in 2018. OpenAI also said Mark Chen would step into the expanded role of chief research officer, and Julia Villagra will take on the role of chief people officer. OpenAI’s AI voice assistant now has advanced features OpenAI has updated its AI voice assistant with improved chatting capabilities, according to a video posted on Monday (March 24) to the company’s official media channels. The update enables real-time conversations, and the AI assistant is said to be more personable and interrupts users less often. Users on ChatGPT’s free tier can now access the new version of Advanced Voice Mode, while paying users will receive answers that are “more direct, engaging, concise, specific, and creative,” a spokesperson from OpenAI told TechCrunch. OpenAI and Meta have separately engaged in discussions with Indian conglomerate Reliance Industries regarding potential collaborations to enhance their AI services in the country, per a report by The Information. One key topic being discussed is Reliance Jio distributing OpenAI’s ChatGPT. Reliance has proposed selling OpenAI’s models to businesses in India through an application programming interface (API) so they can incorporate AI into their operations. Meta also plans to bolster its presence in India by constructing a large 3GW data center in Jamnagar, Gujarat. OpenAI, Meta, and Reliance have not yet officially announced these plans. OpenAI faces privacy complaint in Europe for chatbot’s defamatory hallucinations Noyb, a privacy rights advocacy group, is supporting an individual in Norway who was shocked to discover that ChatGPT was providing false information about him, stating that he had been found guilty of killing two of his children and trying to harm the third. “The GDPR is clear. Personal data has to be accurate,” said Joakim Söderberg, data protection lawyer at Noyb, in a statement. “If it’s not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.” OpenAI upgrades its transcription and voice-generating AI models OpenAI has added new transcription and voice-generating AI models to its APIs: a text-to-speech model, “gpt-4o-mini-tts,” that delivers more nuanced and realistic-sounding speech, as well as two speech-to-text models called “gpt-4o-transcribe” and “gpt-4o-mini-transcribe”. The company claims they are improved versions of what was already there and that they hallucinate less. OpenAI has launched o1-pro, a more powerful version of its o1 OpenAI has introduced o1-pro in its developer API. OpenAI says its o1-pro uses more computing than its o1 “reasoning” AI model to deliver “consistently better responses.” It’s only accessible to select developers who have spent at least $5 on OpenAI API services.
OpenAI charges $150 for every million tokens (about 750,000 words) input into the model and $600 for every million tokens the model produces. It costs twice as much as OpenAI’s GPT-4.5 for input and 10 times the price of regular o1. Noam Brown, who heads AI reasoning research at OpenAI, thinks that certain types of AI models for “reasoning” could have been developed 20 years ago if researchers had understood the correct approach and algorithms. OpenAI says it has trained an AI that’s “really good” at creative writing OpenAI CEO Sam Altman said, in a post on X, that the company has trained a “new model” that’s “really good” at creative writing. He posted a lengthy sample from the model given the prompt “Please write a metafictional literary short story about AI and grief.” OpenAI has not extensively explored the use of AI for writing fiction. The company has mostly concentrated on challenges in rigid, predictable areas such as math and programming, so the new model might not be that great at creative writing at all. OpenAI rolled out new tools designed to help developers and businesses build AI agents — automated systems that can independently accomplish tasks — using the company’s own AI models and frameworks. The tools are part of OpenAI’s new Responses API, which enables enterprises to develop customized AI agents that can perform web searches, scan through company files, and navigate websites, similar to OpenAI’s Operator product. The Responses API effectively replaces OpenAI’s Assistants API, which the company plans to discontinue in the first half of 2026. A minimal example of an agent-style Responses API request is sketched below. OpenAI reportedly plans to charge up to $20,000 a month for specialized AI ‘agents’ OpenAI intends to release several “agent” products tailored for different applications, including sorting and ranking sales leads and software engineering, according to a report from The Information. One, a “high-income knowledge worker” agent, will reportedly be priced at $2,000 a month. Another, a software developer agent, is said to cost $10,000 a month. The most expensive rumored agents, which are said to be aimed at supporting “PhD-level research,” are expected to cost $20,000 per month. The jaw-dropping figure is indicative of how much cash OpenAI needs right now: The company lost roughly $5 billion last year after paying for costs related to running its services and other expenses. It’s unclear when these agentic tools might launch or which customers will be eligible to buy them. ChatGPT can directly edit your code The latest version of the macOS ChatGPT app allows users to edit code directly in supported developer tools, including Xcode, VS Code, and JetBrains. ChatGPT Plus, Pro, and Team subscribers can use the feature now, and the company plans to roll it out to more tiers, including Enterprise, Edu, and free users. ChatGPT’s weekly active users doubled in less than 6 months, thanks to new releases According to a new report from VC firm Andreessen Horowitz (a16z), OpenAI’s AI chatbot, ChatGPT, experienced solid growth in the second half of 2024. It took ChatGPT nine months to increase its weekly active users from 100 million in November 2023 to 200 million in August 2024, but it took less than six months to double that number once more, according to the report. ChatGPT’s weekly active users increased to 300 million by December 2024 and 400 million by February 2025. ChatGPT has experienced significant growth recently due to the launch of new models and features, such as GPT-4o, with multimodal capabilities.
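As a rough illustration of the agent-building workflow described above, the sketch below shows the general shape of a Responses API call that lets the model use a built-in web search tool. It is a minimal sketch under assumptions, not OpenAI’s documented example: it assumes a recent openai Python SDK that exposes the Responses API, and the model name, tool type string, and prompt are illustrative.

```python
# Hypothetical sketch of an agent-style request through the Responses API
# (assumptions noted above): a single call in which the model may invoke
# the built-in web search tool before answering.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-4o",                          # illustrative model choice
    tools=[{"type": "web_search_preview"}],  # assumption: built-in web search tool type
    input="Find two recent reviews of foldable laptops and summarize each in one sentence.",
)

# output_text is a convenience accessor that joins the text parts of the output.
print(response.output_text)
```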
ChatGPT usage spiked from April to May 2024, shortly after that model’s launch. February 2025 OpenAI cancels its o3 AI model in favor of a ‘unified’ next-gen release OpenAI has effectively canceled the release of o3 in favor of what CEO Sam Altman is calling a “simplified” product offering. In a post on X, Altman said that, in the coming months, OpenAI will release a model called GPT-5 that “integrates a lot of [OpenAI’s] technology,” including o3, in ChatGPT and its API. As a result of that roadmap decision, OpenAI no longer plans to release o3 as a standalone model.  ChatGPT may not be as power-hungry as once assumed A commonly cited stat is that ChatGPT requires around 3 watt-hours of power to answer a single question. Using OpenAI’s latest default model for ChatGPT, GPT-4o, as a reference, nonprofit AI research institute Epoch AI found the average ChatGPT query consumes around 0.3 watt-hours. However, the analysis doesn’t consider the additional energy costs incurred by ChatGPT with features like image generation or input processing. OpenAI now reveals more of its o3-mini model’s thought process In response to pressure from rivals like DeepSeek, OpenAI is changing the way its o3-mini model communicates its step-by-step “thought” process. ChatGPT users will see an updated “chain of thought” that shows more of the model’s “reasoning” steps and how it arrived at answers to questions. You can now use ChatGPT web search without logging in OpenAI is now allowing anyone to use ChatGPT web search without having to log in. While OpenAI had previously allowed users to ask ChatGPT questions without signing in, responses were restricted to the chatbot’s last training update. This only applies through ChatGPT.com, however. To use ChatGPT in any form through the native mobile app, you will still need to be logged in. OpenAI unveils a new ChatGPT agent for ‘deep research’ OpenAI announced a new AI “agent” called deep research that’s designed to help people conduct in-depth, complex research using ChatGPT. OpenAI says the “agent” is intended for instances where you don’t just want a quick answer or summary, but instead need to assiduously consider information from multiple websites and other sources. January 2025 OpenAI used a subreddit to test AI persuasion OpenAI used the subreddit r/ChangeMyView to measure the persuasive abilities of its AI reasoning models. OpenAI says it collects user posts from the subreddit and asks its AI models to write replies, in a closed environment, that would change the Reddit user’s mind on a subject. The company then shows the responses to testers, who assess how persuasive the argument is, and finally OpenAI compares the AI models’ responses to human replies for that same post.  OpenAI launches o3-mini, its latest ‘reasoning’ model OpenAI launched a new AI “reasoning” model, o3-mini, the newest in the company’s o family of models. OpenAI first previewed the model in December alongside a more capable system called o3. OpenAI is pitching its new model as both “powerful” and “affordable.” ChatGPT’s mobile users are 85% male, report says A new report from app analytics firm Appfigures found that over half of ChatGPT’s mobile users are under age 25, with users between ages 50 and 64 making up the second largest age demographic. The gender gap among ChatGPT users is even more significant. Appfigures estimates that across age groups, men make up 84.5% of all users. OpenAI launches ChatGPT plan for US government agencies OpenAI launched ChatGPT Gov designed to provide U.S. 
government agencies an additional way to access the tech. ChatGPT Gov includes many of the capabilities found in OpenAI’s corporate-focused tier, ChatGPT Enterprise. OpenAI says that ChatGPT Gov enables agencies to more easily manage their own security, privacy, and compliance, and could expedite internal authorization of OpenAI’s tools for the handling of non-public sensitive data. More teens report using ChatGPT for schoolwork, despite the tech’s faults Younger Gen Zers are embracing ChatGPT, for schoolwork, according to a new survey by the Pew Research Center. In a follow-up to its 2023 poll on ChatGPT usage among young people, Pew asked ~1,400 U.S.-based teens ages 13 to 17 whether they’ve used ChatGPT for homework or other school-related assignments. Twenty-six percent said that they had, double the number two years ago. Just over half of teens responding to the poll said they think it’s acceptable to use ChatGPT for researching new subjects. But considering the ways ChatGPT can fall short, the results are possibly cause for alarm. OpenAI says it may store deleted Operator data for up to 90 days OpenAI says that it might store chats and associated screenshots from customers who use Operator, the company’s AI “agent” tool, for up to 90 days — even after a user manually deletes them. While OpenAI has a similar deleted data retention policy for ChatGPT, the retention period for ChatGPT is only 30 days, which is 60 days shorter than Operator’s. OpenAI launches Operator, an AI agent that performs tasks autonomously OpenAI is launching a research preview of Operator, a general-purpose AI agent that can take control of a web browser and independently perform certain actions. Operator promises to automate tasks such as booking travel accommodations, making restaurant reservations, and shopping online. Operator, OpenAI’s agent tool, could be released sooner rather than later. Changes to ChatGPT’s code base suggest that Operator will be available as an early research preview to users on the $200 Pro subscription plan. The changes aren’t yet publicly visible, but a user on X who goes by Choi spotted these updates in ChatGPT’s client-side code. TechCrunch separately identified the same references to Operator on OpenAI’s website. OpenAI tests phone number-only ChatGPT signups OpenAI has begun testing a feature that lets new ChatGPT users sign up with only a phone number — no email required. The feature is currently in beta in the U.S. and India. However, users who create an account using their number can’t upgrade to one of OpenAI’s paid plans without verifying their account via an email. Multi-factor authentication also isn’t supported without a valid email. ChatGPT now lets you schedule reminders and recurring tasks ChatGPT’s new beta feature, called tasks, allows users to set simple reminders. For example, you can ask ChatGPT to remind you when your passport expires in six months, and the AI assistant will follow up with a push notification on whatever platform you have tasks enabled. The feature will start rolling out to ChatGPT Plus, Team, and Pro users around the globe this week. New ChatGPT feature lets users assign it traits like ‘chatty’ and ‘Gen Z’ OpenAI is introducing a new way for users to customize their interactions with ChatGPT. Some users found they can specify a preferred name or nickname and “traits” they’d like the chatbot to have. 
OpenAI suggests traits like “Chatty,” “Encouraging,” and “Gen Z.” However, some users reported that the new options have disappeared, so it’s possible they went live prematurely. FAQs: What is ChatGPT? How does it work? ChatGPT is a general-purpose chatbot that uses artificial intelligence to generate text after a user enters a prompt, developed by tech startup OpenAI. The chatbot uses GPT-4, a large language model that uses deep learning to produce human-like text. When did ChatGPT get released? November 30, 2022 is when ChatGPT was released for public use. What is the latest version of ChatGPT? Both the free version of ChatGPT and the paid ChatGPT Plus are regularly updated with new GPT models. The most recent model is GPT-4o. Can I use ChatGPT for free? There is a free version of ChatGPT that only requires a sign-in in addition to the paid version, ChatGPT Plus. Who uses ChatGPT? Anyone can use ChatGPT! More and more tech companies and search engines are utilizing the chatbot to automate text or quickly answer user questions/concerns. What companies use ChatGPT? Multiple enterprises utilize ChatGPT, although others may limit the use of the AI-powered tool. Most recently, Microsoft announced at its 2023 Build conference that it is integrating its ChatGPT-based Bing experience into Windows 11. A Brooklyn-based 3D display startup Looking Glass utilizes ChatGPT to produce holograms you can communicate with by using ChatGPT.  And nonprofit organization Solana officially integrated the chatbot into its network with a ChatGPT plug-in geared toward end users to help onboard into the web3 space. What does GPT mean in ChatGPT? GPT stands for Generative Pre-Trained Transformer. What is the difference between ChatGPT and a chatbot? A chatbot can be any software/system that holds dialogue with you/a person but doesn’t necessarily have to be AI-powered. For example, there are chatbots that are rules-based in the sense that they’ll give canned responses to questions. ChatGPT is AI-powered and utilizes LLM technology to generate text after a prompt. Can ChatGPT write essays? Yes. Can ChatGPT commit libel? Due to the nature of how these models work, they don’t know or care whether something is true, only that it looks true. That’s a problem when you’re using it to do your homework, sure, but when it accuses you of a crime you didn’t commit, that may well at this point be libel. We will see how handling troubling statements produced by ChatGPT will play out over the next few months as tech and legal experts attempt to tackle the fastest moving target in the industry. Does ChatGPT have an app? Yes, there is a free ChatGPT mobile app for iOS and Android users. What is the ChatGPT character limit? It’s not documented anywhere that ChatGPT has a character limit. However, users have noted that there are some character limitations after around 500 words. Does ChatGPT have an API? Yes, it was released March 1, 2023. What are some sample everyday uses for ChatGPT? Everyday examples include programming, scripts, email replies, listicles, blog ideas, summarization, etc. What are some advanced uses for ChatGPT? Advanced use examples include debugging code, programming languages, scientific concepts, complex problem solving, etc. How good is ChatGPT at writing code? It depends on the nature of the program. While ChatGPT can write workable Python code, it can’t necessarily program an entire app’s worth of code. 
That’s because ChatGPT lacks context awareness — in other words, the generated code isn’t always appropriate for the specific context in which it’s being used. Can you save a ChatGPT chat? Yes. OpenAI allows users to save chats in the ChatGPT interface, stored in the sidebar of the screen. There are no built-in sharing features yet. Are there alternatives to ChatGPT? Yes. There are multiple AI-powered chatbot competitors such as Together, Google’s Gemini and Anthropic’s Claude, and developers are creating open source alternatives. How does ChatGPT handle data privacy? OpenAI has said that individuals in “certain jurisdictions” (such as the EU) can object to the processing of their personal information by its AI models by filling out this form. This includes the ability to make requests for deletion of AI-generated references about you. Although OpenAI notes it may not grant every request since it must balance privacy requests against freedom of expression “in accordance with applicable laws”. The web form for making a deletion of data about you request is entitled “OpenAI Personal Data Removal Request”. In its privacy policy, the ChatGPT maker makes a passing acknowledgement of the objection requirements attached to relying on “legitimate interest” (LI), pointing users towards more information about requesting an opt out — when it writes: “See here for instructions on how you can opt out of our use of your information to train our models.” What controversies have surrounded ChatGPT? Recently, Discord announced that it had integrated OpenAI’s technology into its bot named Clyde where two users tricked Clyde into providing them with instructions for making the illegal drug methamphetamine (meth) and the incendiary mixture napalm. An Australian mayor has publicly announced he may sue OpenAI for defamation due to ChatGPT’s false claims that he had served time in prison for bribery. This would be the first defamation lawsuit against the text-generating service. CNET found itself in the midst of controversy after Futurism reported the publication was publishing articles under a mysterious byline completely generated by AI. The private equity company that owns CNET, Red Ventures, was accused of using ChatGPT for SEO farming, even if the information was incorrect. Several major school systems and colleges, including New York City Public Schools, have banned ChatGPT from their networks and devices. They claim that the AI impedes the learning process by promoting plagiarism and misinformation, a claim that not every educator agrees with. There have also been cases of ChatGPT accusing individuals of false crimes. Where can I find examples of ChatGPT prompts? Several marketplaces host and provide ChatGPT prompts, either for free or for a nominal fee. One is PromptBase. Another is ChatX. More launch every day. Can ChatGPT be detected? Poorly. Several tools claim to detect ChatGPT-generated text, but in our tests, they’re inconsistent at best. Are ChatGPT chats public? No. But OpenAI recently disclosed a bug, since fixed, that exposed the titles of some users’ conversations to other people on the service. What lawsuits are there surrounding ChatGPT? None specifically targeting ChatGPT. But OpenAI is involved in at least one lawsuit that has implications for AI systems trained on publicly available data, which would touch on ChatGPT. Are there issues regarding plagiarism with ChatGPT? Yes. Text-generating AI models like ChatGPT have a tendency to regurgitate content from their training data.
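Since the FAQ above notes that an API has been available since March 1, 2023, here is a minimal sketch of a basic call with the openai Python SDK. The model name and prompt are illustrative assumptions; actual model availability and pricing depend on your account.

```python
# Minimal sketch of calling an OpenAI model through the API mentioned in the FAQ.
# The model name is an illustrative assumption; swap in whatever your account offers.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "In two sentences, explain what a context window is."},
    ],
)
print(completion.choices[0].message.content)
```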
  • Folding the Future: Lenovo ThinkPad X1 Fold 2024 vs. Huawei MateBook Fold Ultimate Design

    Why revisit the Lenovo ThinkPad X1 Fold in 2025? The answer lies in the rapid evolution of foldable computing. When Lenovo introduced its second-generation foldable PC last year, it represented the pinnacle of what was possible in this emerging category. The device combined a versatile 16.3-inch OLED display with robust engineering and the familiar Windows ecosystem. It set benchmarks for build quality, display technology, and adaptability that competitors would need to surpass.
    Designer: Lenovo
    Designer: Huawei
    Fast forward to today, and the landscape has shifted dramatically. Huawei has unveiled its MateBook Fold Ultimate Design, a device that challenges our understanding of what foldable laptops can achieve. With an 18-inch display that folds to a 13-inch form factor, a chassis measuring just 7.3mm when open, and a proprietary operating system built specifically for foldable hardware, Huawei has raised the stakes considerably.
    This comparison arrives at a pivotal moment for foldable computing. The category has matured beyond proof-of-concept to deliver genuinely useful productivity tools. Now that we have seen what Lenovo accomplished with the X1 Fold 2024, let us examine how Huawei’s MateBook Fold Ultimate Design responds and potentially redefines the future of portable computing.

    Design Philosophy and Physical Presence
    The Lenovo ThinkPad X1 Fold 2024 embodies the ThinkPad ethos of reliability and purposeful design. Its magnesium alloy frame and recycled PET woven fabric cover create a device that feels substantial and durable. The fold-flat hinge eliminates gaps when closed, protecting the display while maintaining a clean profile. At 8.6mm when open and 17.4mm when closed, the X1 Fold is not the thinnest laptop available, but its construction inspires confidence. The device weighs approximately 2.9 pounds without accessories, increasing to 4.3 pounds with the keyboard and stand attached. This weight reflects Lenovo’s prioritization of durability over absolute portability.

    Huawei takes a dramatically different approach with the MateBook Fold Ultimate Design. The device measures an astonishing 7.3mm when open and 14.9mm when closed, making it significantly thinner than the X1 Fold. At just 1.16kg for the base unit and 1.45kg with the keyboard, the MateBook Fold is remarkably light for a device with an 18-inch display. This achievement comes from Huawei’s use of carbon fiber reinforcement and a zirconium-based liquid metal hinge. The 285mm “water-drop” hinge design provides smooth folding action and increased durability, with Huawei claiming a 400% improvement in hovering torque compared to conventional designs.
    The most significant physical difference between these devices becomes apparent in their approach to accessories. Lenovo requires a separate kickstand for desk use, adding bulk and complexity to the overall package. Huawei integrates a sturdy kickstand directly into the MateBook Fold, eliminating the need for additional accessories and streamlining the user experience. This built-in solution allows for more versatile positioning and reduces the number of components users need to manage.

    Both devices transform between multiple modes, but their physical dimensions create distinct experiences. When folded, the X1 Fold becomes a 12-inch laptop, which many users find cramped for serious multitasking. The MateBook Fold offers a more generous 13-inch workspace in laptop mode, providing additional screen real estate for productivity tasks. This difference may seem small on paper, but it significantly impacts the practical usability of these devices in their folded configurations.

    The materials chosen for each device reveal different priorities. Lenovo emphasizes sustainability with its recycled PET fabric cover and plastic-free packaging. This approach aligns with growing corporate environmental concerns and provides a tactile warmth that distinguishes the X1 Fold from typical metal-clad laptops. Huawei focuses on premium materials that enable extreme thinness, using advanced alloys and composites throughout the chassis. Both approaches result in distinctive aesthetics that will appeal to different user preferences.
    Display Technology and Visual Experience
    Display technology represents the heart of any foldable device, and both manufacturers have made significant investments in this critical component. The Lenovo ThinkPad X1 Fold features a 16.3-inch OLED panel with a resolution of 2560 x 2024 and a 4:3 aspect ratio. This display delivers 400 nits of brightness for standard content, increasing to 600 nits for HDR material. The panel supports DisplayHDR True Black 600 certification and Dolby Vision, covering 100% of the DCI-P3 color gamut. An anti-smudge coating helps maintain visual clarity during extended use.

    Huawei pushes display technology further with the MateBook Fold Ultimate Design. Its 18-inch LTPO OLED screen boasts a resolution of 3296 x 2472, maintaining the same 4:3 aspect ratio as the Lenovo. However, the MateBook Fold achieves a peak brightness of 1600 nits, more than double that of the X1 Fold. The dual-layer LTPO technology reduces power consumption by 30% compared to standard OLED panels while supporting adaptive refresh rates from 1Hz to 120Hz. This combination of size, brightness, and efficiency creates a visual experience that surpasses the X1 Fold in nearly every measurable aspect.
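    One way to put those resolution figures in perspective is pixel density. The short sketch below derives approximate pixels per inch from the quoted resolutions and advertised diagonal sizes; it assumes flat rectangular panels at exactly those diagonals, so treat the results as estimates rather than measured values.

```python
# Approximate pixel density (PPI) from the quoted specs:
# X1 Fold: 2560 x 2024 at 16.3 inches; MateBook Fold: 3296 x 2472 at 18 inches.
from math import hypot

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch along the panel diagonal."""
    return hypot(width_px, height_px) / diagonal_in

print(f"ThinkPad X1 Fold: {ppi(2560, 2024, 16.3):.0f} PPI")  # roughly 200 PPI
print(f"MateBook Fold:    {ppi(3296, 2472, 18.0):.0f} PPI")  # roughly 229 PPI
```

    By this estimate the Huawei panel is not only larger but also noticeably denser, which is consistent with the article’s verdict that it leads on most measurable display metrics.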
    Both displays exhibit a visible crease at the fold, though the severity varies. Lenovo’s hinge design minimizes the crease when the device is fully open, but it becomes more noticeable at certain viewing angles. Huawei claims its water-drop hinge reduces crease visibility, though independent verification is limited. In practical use, both creases become less distracting over time as users adapt to the form factor.
    Color accuracy and visual impact favor the MateBook Fold, with its higher brightness and contrast ratio of 2,000,000:1 creating more vibrant images and videos. The X1 Fold delivers excellent color reproduction but cannot match the visual punch of Huawei’s display. For creative professionals and media consumers, this difference could be decisive when choosing between these devices.

    The touch response and pen input capabilities of both displays deserve consideration. Lenovo’s display works seamlessly with the Precision Pen, offering pressure sensitivity that makes note-taking and sketching feel natural. The anti-smudge coating balances fingerprint resistance with smooth touch response. Huawei provides similar functionality, though detailed specifications about pressure sensitivity levels and palm rejection capabilities are not yet widely available. Both devices support multi-touch gestures for navigation and manipulation of on-screen elements.
    The 4:3 aspect ratio on both devices proves ideal for productivity applications, providing more vertical space than typical 16:9 laptop displays. This ratio works particularly well for document editing, web browsing, and coding. When watching widescreen video content, both devices display black bars at the top and bottom, but the overall screen size still delivers an immersive viewing experience, especially on the larger MateBook Fold.
    Performance and Hardware Capabilities
    The performance profiles of these devices reflect their different design philosophies. Lenovo equips the ThinkPad X1 Fold with 12th Generation Intel processors, ranging from the Core i5-1230U to the Core i7-1260U vPro. These 10-core, 12-thread chips provide adequate performance for productivity tasks but represent previous-generation technology in 2025. The X1 Fold supports up to 32GB of LPDDR5 RAM and 1TB of PCIe Gen 4 SSD storage. Intel Iris Xe integrated graphics handle visual processing, delivering sufficient power for office applications but struggling with demanding creative workloads.

    Huawei takes a different approach with its Kirin X90 ARM-based chipset. This custom silicon is specifically optimized for HarmonyOS and the foldable form factor. The MateBook Fold includes 32GB of RAM and offers storage options up to 2TB. While direct performance comparisons are difficult due to the different architectures, the Kirin X90 delivers responsive performance for HarmonyOS applications and benefits from tight hardware-software integration.
    Thermal management represents another point of divergence. Lenovo employs a fanless design in the X1 Fold, prioritizing silent operation over sustained performance. This approach leads to thermal throttling during extended workloads, limiting the device’s capabilities for processor-intensive tasks. Huawei incorporates a vapor chamber cooling system with diamond aluminum dual fans in the MateBook Fold, enabling 28W sustained performance without excessive heat or noise. This advanced cooling solution allows the MateBook Fold to maintain peak performance during demanding tasks, despite its thinner profile.

    Battery life reflects both hardware choices and software optimization. The X1 Fold includes a dual-battery design totaling 64Wh, delivering approximately 8 hours and 51 minutes in laptop mode and 7 hours and 27 minutes in tablet mode under real-world conditions. The MateBook Fold features a larger 74.69Wh battery, and its LTPO display technology reduces power consumption significantly. While independent verification of Huawei’s “all-day” battery claims is not yet available, the combination of a larger battery and more efficient display technology suggests the MateBook Fold should offer superior battery life in comparable usage scenarios.
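    Those runtimes can be turned into a rough average power draw, which also gives a hedged way to estimate what the larger battery might deliver. The sketch below assumes, purely for illustration, that the MateBook Fold draws about the same system power as the X1 Fold in laptop use; real-world results will depend on the efficiency claims that have not yet been independently verified.

```python
# Lenovo X1 Fold: 64Wh battery and the measured runtimes quoted above.
laptop_hours = 8 + 51 / 60   # 8 hours 51 minutes
tablet_hours = 7 + 27 / 60   # 7 hours 27 minutes
laptop_draw = 64 / laptop_hours
tablet_draw = 64 / tablet_hours
print(f"X1 Fold average draw: {laptop_draw:.1f}W laptop, {tablet_draw:.1f}W tablet")

# Hypothetical estimate: MateBook Fold's 74.69Wh battery at the same laptop-mode draw.
print(f"MateBook Fold at {laptop_draw:.1f}W: ~{74.69 / laptop_draw:.1f} hours")
```

    Under that naive assumption the MateBook Fold lands above ten hours, and if Huawei’s 30% display-efficiency figure holds it should stretch further; treat these numbers as an estimate rather than a benchmark.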
    The storage subsystems in both devices utilize high-speed solid-state technology, but with different implementations. Lenovo’s PCIe Gen 4 SSD delivers sequential read speeds up to 5,000MB/s, providing quick access to large files and rapid application loading. Huawei has not published detailed storage performance metrics, but contemporary flagship devices typically feature similar high-performance storage solutions. Both devices offer sufficient storage capacity for professional workloads, with options ranging from 256GB to 2TB depending on configuration.
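    To make the 5,000MB/s figure tangible, the sketch below converts it into best-case read times, assuming the drive actually sustains its rated sequential speed with no overhead.

```python
# Best-case sequential read times at the X1 Fold's rated 5,000MB/s.
read_mb_per_s = 5000
for label, size_gb in [("10GB video project", 10), ("256GB drive image", 256)]:
    seconds = size_gb * 1000 / read_mb_per_s
    print(f"{label}: ~{seconds:.0f} seconds")
# -> 10GB video project: ~2 seconds; 256GB drive image: ~51 seconds
```

    Real workloads mix in random access and smaller files, so the everyday difference between two fast NVMe drives usually shows up in application launches rather than raw copy times.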
    Memory configurations play a crucial role in multitasking performance. Both devices offer 32GB in their top configurations, which provides ample headroom for demanding productivity workflows. Neither device allows for user-upgradable memory, as both use soldered RAM to maintain their slim profiles. This limitation means buyers must carefully consider their memory needs at purchase, as future upgrades are not possible.
    Operating Systems and Software Experience
    The most fundamental difference between these devices lies in their operating systems. The Lenovo ThinkPad X1 Fold runs Windows 11 Pro, providing access to the vast Windows software ecosystem and familiar productivity tools. Windows offers broad compatibility with business applications and enterprise management systems, making the X1 Fold a natural choice for corporate environments. However, Windows 11 still struggles with optimization for foldable form factors. Mode switching can be inconsistent, and the operating system sometimes fails to properly scale applications when transitioning between configurations.

    Huawei’s MateBook Fold runs HarmonyOS 5, a proprietary operating system designed specifically for the company’s ecosystem of devices. HarmonyOS offers several advantages for foldable hardware, including faster boot times, more efficient resource management, and seamless integration with other Huawei products. The operating system includes AI-powered features like document summarization, real-time translation, and context-aware suggestions through the Xiaoyi assistant. HarmonyOS also enables advanced multi-device collaboration, allowing users to transfer running apps between Huawei phones, tablets, and the MateBook Fold without interruption.
    The software ecosystem represents a significant consideration for potential buyers. Windows provides access to millions of applications, including industry-standard productivity, creative, and development tools. HarmonyOS currently offers over 1,000 optimized applications, with projections for 2,000+ by the end of 2025. While this number is growing rapidly, it remains a fraction of what Windows provides. Additionally, HarmonyOS and its app ecosystem are primarily focused on the Chinese market, limiting its appeal for international users.

    Security features differ between the platforms as well. Lenovo includes its ThinkShield security suite, Windows Hello facial recognition, and optional Computer Vision human-presence detection for privacy and security. Huawei implements its StarShield architecture, which provides security at the kernel level and throughout the operating system stack. Both approaches offer robust protection, but organizations with established Windows security protocols may prefer Lenovo’s more familiar implementation.

    The multitasking capabilities of each operating system deserve special attention for foldable devices. Windows 11 includes Snap Layouts and multiple virtual desktops, which work well on the X1 Fold’s large unfolded display. However, the interface can become cluttered in laptop mode due to the reduced screen size. HarmonyOS 5 features a multitasking system specifically designed for foldable displays, with intuitive gestures for splitting the screen, floating windows, and quick app switching. This optimization creates a more cohesive experience when transitioning between different device configurations.
    Software updates and long-term support policies differ significantly between these platforms. Windows 11 receives regular security updates and feature enhancements from Microsoft, with a well-established support lifecycle. HarmonyOS is newer, with less predictable update patterns, though Huawei has committed to regular improvements. For business users planning multi-year deployments, Windows offers more certainty regarding future compatibility and security maintenance.
    Keyboard, Input, and Accessory Integration
    The keyboard experience significantly impacts productivity on foldable devices, and both manufacturers take different approaches to this challenge. Lenovo offers the ThinkPad Bluetooth TrackPoint Keyboard Folio as an optional accessory. This keyboard maintains the classic ThinkPad feel with good key travel and includes the iconic red TrackPoint nub. However, the keyboard feels cramped compared to standard ThinkPad models, and the haptic touchpad is smaller than ideal for extended use. The keyboard attaches magnetically to the lower half of the folded display but adds 1.38 pounds to the overall weight.

    Huawei includes a 5mm wireless aluminum keyboard with the MateBook Fold. This ultra-thin keyboard offers 1.5mm of key travel and a responsive touchpad. Weighing just 0.64 pounds, it adds minimal bulk to the package while providing a comfortable typing experience. The keyboard connects wirelessly and can be positioned flexibly, allowing users to create a more ergonomic workspace than the fixed position of Lenovo’s solution.
    Stylus support is available on both devices, with Lenovo offering the Precision Pen for note-taking and drawing. The X1 Fold’s pen attaches magnetically to the display, ensuring it remains available when needed. Huawei provides similar stylus functionality, though detailed specifications for its pen accessory are limited in current documentation.
    The most significant accessory difference is the kickstand implementation. Lenovo requires a separate adjustable-angle kickstand for desk use, adding another component to manage and transport. Huawei integrates the kickstand directly into the MateBook Fold, providing immediate stability without additional accessories. This integrated approach streamlines the user experience and reduces setup time when transitioning between usage modes.
    Virtual keyboard implementations provide another input option when physical keyboards are impractical. Both devices can display touch keyboards on the lower portion of the folded screen, creating a laptop-like experience without additional hardware. Lenovo’s implementation relies on Windows 11’s touch keyboard, which offers reasonable accuracy but lacks haptic feedback. Huawei’s virtual keyboard is deeply integrated with HarmonyOS, providing customizable layouts and adaptive suggestions based on user behavior. Neither virtual keyboard fully replaces a physical keyboard for extended typing sessions, but both provide convenient input options for quick tasks.
    The accessory ecosystem extends beyond keyboards and styluses. Lenovo leverages the ThinkPad’s business heritage with a range of compatible docks, cases, and adapters designed for professional use. Huawei focuses on cross-device accessories that work across its product line, creating a cohesive ecosystem for users invested in multiple Huawei products. This difference reflects the broader positioning of each brand, with Lenovo targeting enterprise customers and Huawei pursuing ecosystem-driven consumer experiences.
    Connectivity and Expansion Options
    Connectivity options reflect the different priorities of these manufacturers. The Lenovo ThinkPad X1 Fold includes two Thunderbolt 4 ports and one USB-C 3.2 Gen 2 port, providing versatile connectivity for peripherals and external displays. The device supports Wi-Fi 6E and Bluetooth 5.2, with optional LTE/5G connectivity for truly mobile productivity. This cellular option represents a significant advantage for professionals who need reliable internet access regardless of Wi-Fi availability.
    The Huawei MateBook Fold offers two USB-C ports, Wi-Fi 6, and Bluetooth 5.2. The device does not include cellular connectivity options, limiting its independence from Wi-Fi networks. The reduced port selection compared to the X1 Fold may require additional adapters for users with multiple peripherals or specialized equipment.

    Audio capabilities favor the MateBook Fold, which includes six speakers compared to the X1 Fold’s three. Both devices feature four-microphone arrays for clear voice capture during video conferences. Camera quality is superior on the MateBook Fold, with an 8MP sensor versus the 5MP camera on the X1 Fold. These differences impact the multimedia experience, particularly for users who frequently participate in video calls or consume media content.
    External display support varies between the devices. Lenovo’s Thunderbolt 4 ports enable connection to multiple high-resolution monitors, supporting sophisticated desktop setups when needed. Huawei’s USB-C ports provide display output capabilities, but with potentially fewer options for multi-monitor configurations. For professionals who regularly connect to external displays, projectors, or specialized peripherals, these connectivity differences could significantly impact workflow efficiency.
    Wireless connectivity standards influence performance in different environments. The X1 Fold’s Wi-Fi 6E support provides access to the less congested 6GHz band, potentially delivering faster and more reliable connections in crowded wireless environments. The MateBook Fold’s Wi-Fi 6 implementation is still capable but lacks access to these additional frequency bands. For users in dense office environments or congested urban areas, this difference could affect day-to-day connectivity performance.
    Future expansion capabilities depend largely on the port selection and standards support. Thunderbolt 4 provides the X1 Fold with a forward-looking connectivity standard that supports a wide range of current and upcoming peripherals. The MateBook Fold’s standard USB-C implementation offers good compatibility but lacks some of the advanced features and bandwidth of Thunderbolt. This distinction may become more relevant as users add peripherals and accessories over the device’s lifespan.
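    The practical gap between the two port standards is easiest to see as transfer time. Thunderbolt 4 carries up to 40Gbps, while a typical USB-C 3.2 Gen 2 port, like the extra port on the X1 Fold, tops out at 10Gbps; Huawei has not published the exact speed of the MateBook Fold’s ports. The sketch below converts those link rates into rough best-case copy times, ignoring protocol overhead and drive limits.

```python
def best_case_copy_seconds(size_gb: float, link_gbps: float) -> float:
    """Ideal transfer time: file size in gigabytes over link rate in gigabits per second."""
    return size_gb * 8 / link_gbps

size_gb = 100  # example: a 100GB project archive
for name, gbps in [("Thunderbolt 4 (40Gbps)", 40), ("USB-C 3.2 Gen 2 (10Gbps)", 10)]:
    print(f"{name}: ~{best_case_copy_seconds(size_gb, gbps):.0f} seconds for {size_gb}GB")
# -> Thunderbolt 4: ~20 seconds; USB-C 3.2 Gen 2: ~80 seconds
```

    Overhead and real drive speeds will slow both in practice, but that four-fold headroom is also what lets Thunderbolt 4 drive multiple high-resolution monitors from a single cable.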
    Price, Availability, and Value Proposition
    The value equation for these devices involves balancing innovation, performance, and accessibility. The Lenovo ThinkPad X1 Fold starts at $2,499 for the base configuration with a Core i5 processor, 16GB of RAM, and 256GB of storage. Fully equipped models with Core i7 processors, 32GB of RAM, and 1TB of storage approach $3,900. These prices typically do not include the keyboard and kickstand accessories, which add approximately $250-300 to the total cost.

    The Huawei MateBook Fold Ultimate Design is priced between CNY 24,000 and 27,000 (approximately $3,300 to $3,700), depending on configuration. This pricing includes the wireless keyboard, making the total package cost comparable to a fully equipped X1 Fold with accessories. However, the MateBook Fold is currently available only in China, with no announced plans for international release. This limited availability significantly restricts its potential market impact outside of Asia.
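    Totaling the quoted figures makes the like-for-like comparison clearer. The sketch below simply adds the accessories to a top-spec X1 Fold and sets that against the MateBook Fold package, using the approximate USD figures above.

```python
# Rough fully equipped package totals in USD, using the figures quoted above.
x1_fold_top = 3900
keyboard_and_kickstand = (250, 300)   # approximate accessory range
x1_total = [x1_fold_top + a for a in keyboard_and_kickstand]

matebook_total = (3300, 3700)         # approx. USD for CNY 24,000-27,000, keyboard included
print(f"X1 Fold fully equipped: ~${x1_total[0]:,}-{x1_total[1]:,}")
print(f"MateBook Fold package:  ~${matebook_total[0]:,}-{matebook_total[1]:,}")
```

    On those numbers a fully specced MateBook Fold undercuts a comparably equipped X1 Fold, though that only matters for buyers who can actually purchase the Huawei device in their region.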
    Global support and service represent another consideration. Lenovo maintains service centers worldwide, providing reliable support for business travelers and international organizations. Huawei’s support network is more limited outside of China, potentially creating challenges for users who experience hardware issues in regions without official service options.
    The target audience for each device influences its value proposition. The X1 Fold appeals to business professionals who prioritize Windows compatibility, global support, and integration with existing enterprise systems. Its ThinkPad branding carries significant weight in corporate environments, where reliability and security take precedence over cutting-edge specifications. The MateBook Fold targets technology enthusiasts and creative professionals who value display quality, design innovation, and ecosystem integration. Its limited availability and HarmonyOS platform make it less suitable for mainstream business adoption but potentially more appealing to users seeking the absolute latest in hardware engineering.
    Financing options and business leasing programs further differentiate these devices in the market. Lenovo offers established enterprise leasing programs that allow organizations to deploy the X1 Fold without significant upfront capital expenditure. These programs typically include service agreements and upgrade paths that align with corporate refresh cycles. Huawei’s business services are less developed outside of China, potentially limiting financing options for international customers interested in the MateBook Fold.
    Conclusion: The Future of Foldable Computing
    The Lenovo ThinkPad X1 Fold 2024 and Huawei MateBook Fold Ultimate Design represent two distinct visions for the future of foldable computing. Lenovo prioritizes durability, Windows compatibility, and global accessibility, creating a device that fits seamlessly into existing business environments. Huawei pushes the boundaries of hardware engineering, delivering a thinner, lighter device with a larger display and custom operating system optimized for the foldable form factor.

    For business users who require Windows compatibility and global support, the X1 Fold remains the more practical choice despite its thicker profile and aging processors. Its proven durability and enterprise-friendly features make it a safer investment for organizations deploying foldable technology. The device excels in versatility, allowing users to switch between tablet, laptop, and desktop modes with minimal compromise.
    Creative professionals and early adopters who prioritize display quality and cutting-edge design may find the MateBook Fold more appealing, provided they can access it in their region and adapt to HarmonyOS. The larger, brighter display and thinner profile create a more futuristic experience, though the limited software ecosystem and regional availability present significant barriers to widespread adoption.
    Looking forward, both devices point toward necessary improvements in the next generation of foldable computers. Future models should incorporate the latest processors with AI acceleration, reduce weight without sacrificing durability, integrate kickstands directly into the chassis, and provide larger, more comfortable keyboards. Display technology should continue to advance, with higher refresh rates, improved crease durability, and enhanced power efficiency. Software must evolve to better support the unique capabilities of foldable hardware, with more intuitive mode switching and optimized multitasking.

    The competition between Lenovo and Huawei benefits consumers by accelerating innovation and highlighting different approaches to solving the challenges of foldable computing. As these technologies mature and prices eventually decrease, foldable devices will transition from executive status symbols to practical tools for a broader range of users. The X1 Fold and MateBook Fold represent important steps in this evolution, each contributing valuable lessons that will shape the next generation of flexible computing devices.
    The ideal foldable device would combine Huawei’s hardware innovations with Lenovo’s software compatibility and global support. It would feature the thinness and display quality of the MateBook Fold, the enterprise security and connectivity options of the X1 Fold, and an operating system that seamlessly adapts to different usage modes. While neither current device achieves this perfect balance, both demonstrate remarkable engineering achievements that push the boundaries of what portable computers can be.

    As we look to the future, the success of foldable computing will depend not just on hardware specifications but on the development of software experiences that truly leverage the unique capabilities of these flexible displays. The device that ultimately dominates this category will be the one that most effectively bridges the gap between technical innovation and practical utility, creating experiences that simply aren’t possible on conventional laptops or tablets. Both Lenovo and Huawei have taken significant steps toward this goal, and their ongoing competition promises to accelerate progress toward truly transformative foldable computers.
    #folding #future #lenovo #thinkpad #fold
    Folding the Future: Lenovo ThinkPad X1 Fold 2024 vs. Huawei MateBook Fold Ultimate Design
    Why revisit the Lenovo ThinkPad X1 Fold in 2025? The answer lies in the rapid evolution of foldable computing. When Lenovo introduced its second-generation foldable PC last year, it represented the pinnacle of what was possible in this emerging category. The device combined a versatile 16.3-inch OLED display with robust engineering and the familiar Windows ecosystem. It set benchmarks for build quality, display technology, and adaptability that competitors would need to surpass. Designer: Lenovo Designer: Huawei Fast forward to today, and the landscape has shifted dramatically. Huawei has unveiled its MateBook Fold Ultimate Design, a device that challenges our understanding of what foldable laptops can achieve. With an 18-inch display that folds to a 13-inch form factor, a chassis measuring just 7.3mm when open, and a proprietary operating system built specifically for foldable hardware, Huawei has raised the stakes considerably. This comparison arrives at a pivotal moment for foldable computing. The category has matured beyond proof-of-concept to deliver genuinely useful productivity tools. Now that we have seen what Lenovo accomplished with the X1 Fold 2024, let us examine how Huawei’s MateBook Fold Ultimate Design responds and potentially redefines the future of portable computing. Design Philosophy and Physical Presence The Lenovo ThinkPad X1 Fold 2024 embodies the ThinkPad ethos of reliability and purposeful design. Its magnesium alloy frame and recycled PET woven fabric cover create a device that feels substantial and durable. The fold-flat hinge eliminates gaps when closed, protecting the display while maintaining a clean profile. At 8.6mm when open and 17.4mm when closed, the X1 Fold is not the thinnest laptop available, but its construction inspires confidence. The device weighs approximately 2.9 pounds without accessories, increasing to 4.3 pounds with the keyboard and stand attached. This weight reflects Lenovo’s prioritization of durability over absolute portability. Huawei takes a dramatically different approach with the MateBook Fold Ultimate Design. The device measures an astonishing 7.3mm when open and 14.9mm when closed, making it significantly thinner than the X1 Fold. At just 1.16kgfor the base unit and 1.45kg with the keyboard, the MateBook Fold is remarkably light for a device with an 18-inch display. This achievement comes from Huawei’s use of carbon fiber reinforcement and a zirconium-based liquid metal hinge. The 285mm “water-drop” hinge design provides smooth folding action and increased durability, with Huawei claiming a 400% improvement in hovering torque compared to conventional designs. The most significant physical difference between these devices becomes apparent in their approach to accessories. Lenovo requires a separate kickstand for desk use, adding bulk and complexity to the overall package. Huawei integrates a sturdy kickstand directly into the MateBook Fold, eliminating the need for additional accessories and streamlining the user experience. This built-in solution allows for more versatile positioning and reduces the number of components users need to manage. Both devices transform between multiple modes, but their physical dimensions create distinct experiences. When folded, the X1 Fold becomes a 12-inch laptop, which many users find cramped for serious multitasking. The MateBook Fold offers a more generous 13-inch workspace in laptop mode, providing additional screen real estate for productivity tasks. 
This difference may seem small on paper, but it significantly impacts the practical usability of these devices in their folded configurations. The materials chosen for each device reveal different priorities. Lenovo emphasizes sustainability with its recycled PET fabric cover and plastic-free packaging. This approach aligns with growing corporate environmental concerns and provides a tactile warmth that distinguishes the X1 Fold from typical metal-clad laptops. Huawei focuses on premium materials that enable extreme thinness, using advanced alloys and composites throughout the chassis. Both approaches result in distinctive aesthetics that will appeal to different user preferences. Display Technology and Visual Experience Display technology represents the heart of any foldable device, and both manufacturers have made significant investments in this critical component. The Lenovo ThinkPad X1 Fold features a 16.3-inch OLED panel with a resolution of 2560 x 2024 and a 4:3 aspect ratio. This display delivers 400 nits of brightness for standard content, increasing to 600 nits for HDR material. The panel supports DisplayHDR True Black 600 certification and Dolby Vision, covering 100% of the DCI-P3 color gamut. An anti-smudge coating helps maintain visual clarity during extended use. Huawei pushes display technology further with the MateBook Fold Ultimate Design. Its 18-inch LTPO OLED screen boasts a resolution of 3296 x 2472, maintaining the same 4:3 aspect ratio as the Lenovo. However, the MateBook Fold achieves a peak brightness of 1600 nits, more than double that of the X1 Fold. The dual-layer LTPO technology reduces power consumption by 30% compared to standard OLED panels while supporting adaptive refresh rates from 1Hz to 120Hz. This combination of size, brightness, and efficiency creates a visual experience that surpasses the X1 Fold in nearly every measurable aspect. Both displays exhibit a visible crease at the fold, though the severity varies. Lenovo’s hinge design minimizes the crease when the device is fully open, but it becomes more noticeable at certain viewing angles. Huawei claims its water-drop hinge reduces crease visibility, though independent verification is limited. In practical use, both creases become less distracting over time as users adapt to the form factor. Color accuracy and visual impact favor the MateBook Fold, with its higher brightness and contrast ratio of 2,000,000:1 creating more vibrant images and videos. The X1 Fold delivers excellent color reproduction but cannot match the visual punch of Huawei’s display. For creative professionals and media consumers, this difference could be decisive when choosing between these devices. The touch response and pen input capabilities of both displays deserve consideration. Lenovo’s display works seamlessly with the Precision Pen, offering pressure sensitivity that makes note-taking and sketching feel natural. The anti-smudge coating balances fingerprint resistance with smooth touch response. Huawei provides similar functionality, though detailed specifications about pressure sensitivity levels and palm rejection capabilities are not yet widely available. Both devices support multi-touch gestures for navigation and manipulation of on-screen elements. The 4:3 aspect ratio on both devices proves ideal for productivity applications, providing more vertical space than typical 16:9 laptop displays. This ratio works particularly well for document editing, web browsing, and coding. 
When watching widescreen video content, both devices display black bars at the top and bottom, but the overall screen size still delivers an immersive viewing experience, especially on the larger MateBook Fold. Performance and Hardware Capabilities The performance profiles of these devices reflect their different design philosophies. Lenovo equips the ThinkPad X1 Fold with 12th Generation Intel processors, ranging from the Core i5-1230U to the Core i7-1260U vPro. These 10-core, 12-thread chips provide adequate performance for productivity tasks but represent previous-generation technology in 2025. The X1 Fold supports up to 32GB of LPDDR5 RAM and 1TB of PCIe Gen 4 SSD storage. Intel Iris Xe integrated graphics handle visual processing, delivering sufficient power for office applications but struggling with demanding creative workloads. Huawei takes a different approach with its Kirin X90 ARM-based chipset. This custom silicon is specifically optimized for HarmonyOS and the foldable form factor. The MateBook Fold includes 32GB of RAM and offers storage options up to 2TB. While direct performance comparisons are difficult due to the different architectures, the Kirin X90 delivers responsive performance for HarmonyOS applications and benefits from tight hardware-software integration. Thermal management represents another point of divergence. Lenovo employs a fanless design in the X1 Fold, prioritizing silent operation over sustained performance. This approach leads to thermal throttling during extended workloads, limiting the device’s capabilities for processor-intensive tasks. Huawei incorporates a vapor chamber cooling system with diamond aluminum dual fans in the MateBook Fold, enabling 28W sustained performance without excessive heat or noise. This advanced cooling solution allows the MateBook Fold to maintain peak performance during demanding tasks, despite its thinner profile. Battery life reflects both hardware choices and software optimization. The X1 Fold includes a dual-battery design totaling 64Wh, delivering approximately 8 hours and 51 minutes in laptop mode and 7 hours and 27 minutes in tablet mode under real-world conditions. The MateBook Fold features a larger 74.69Wh battery, and its LTPO display technology reduces power consumption significantly. While independent verification of Huawei’s “all-day” battery claims is not yet available, the combination of a larger battery and more efficient display technology suggests the MateBook Fold should offer superior battery life in comparable usage scenarios. The storage subsystems in both devices utilize high-speed solid-state technology, but with different implementations. Lenovo’s PCIe Gen 4 SSD delivers sequential read speeds up to 5,000MB/s, providing quick access to large files and rapid application loading. Huawei has not published detailed storage performance metrics, but contemporary flagship devices typically feature similar high-performance storage solutions. Both devices offer sufficient storage capacity for professional workloads, with options ranging from 256GB to 2TB depending on configuration. Memory configurations play a crucial role in multitasking performance. Both devices offer 32GB in their top configurations, which provides ample headroom for demanding productivity workflows. Neither device allows for user-upgradable memory, as both use soldered RAM to maintain their slim profiles. This limitation means buyers must carefully consider their memory needs at purchase, as future upgrades are not possible. 
Operating Systems and Software Experience The most fundamental difference between these devices lies in their operating systems. The Lenovo ThinkPad X1 Fold runs Windows 11 Pro, providing access to the vast Windows software ecosystem and familiar productivity tools. Windows offers broad compatibility with business applications and enterprise management systems, making the X1 Fold a natural choice for corporate environments. However, Windows 11 still struggles with optimization for foldable form factors. Mode switching can be inconsistent, and the operating system sometimes fails to properly scale applications when transitioning between configurations. Huawei’s MateBook Fold runs HarmonyOS 5, a proprietary operating system designed specifically for the company’s ecosystem of devices. HarmonyOS offers several advantages for foldable hardware, including faster boot times, more efficient resource management, and seamless integration with other Huawei products. The operating system includes AI-powered features like document summarization, real-time translation, and context-aware suggestions through the Xiaoyi assistant. HarmonyOS also enables advanced multi-device collaboration, allowing users to transfer running apps between Huawei phones, tablets, and the MateBook Fold without interruption. The software ecosystem represents a significant consideration for potential buyers. Windows provides access to millions of applications, including industry-standard productivity, creative, and development tools. HarmonyOS currently offers over 1,000 optimized applications, with projections for 2,000+ by the end of 2025. While this number is growing rapidly, it remains a fraction of what Windows provides. Additionally, HarmonyOS and its app ecosystem are primarily focused on the Chinese market, limiting its appeal for international users. Security features differ between the platforms as well. Lenovo includes its ThinkShield security suite, Windows Hello facial recognition, and optional Computer Vision human-presence detection for privacy and security. Huawei implements its StarShield architecture, which provides security at the kernel level and throughout the operating system stack. Both approaches offer robust protection, but organizations with established Windows security protocols may prefer Lenovo’s more familiar implementation. The multitasking capabilities of each operating system deserve special attention for foldable devices. Windows 11 includes Snap Layouts and multiple virtual desktops, which work well on the X1 Fold’s large unfolded display. However, the interface can become cluttered in laptop mode due to the reduced screen size. HarmonyOS 5 features a multitasking system specifically designed for foldable displays, with intuitive gestures for splitting the screen, floating windows, and quick app switching. This optimization creates a more cohesive experience when transitioning between different device configurations. Software updates and long-term support policies differ significantly between these platforms. Windows 11 receives regular security updates and feature enhancements from Microsoft, with a well-established support lifecycle. HarmonyOS is newer, with less predictable update patterns, though Huawei has committed to regular improvements. For business users planning multi-year deployments, Windows offers more certainty regarding future compatibility and security maintenance. 
Keyboard, Input, and Accessory Integration The keyboard experience significantly impacts productivity on foldable devices, and both manufacturers take different approaches to this challenge. Lenovo offers the ThinkPad Bluetooth TrackPoint Keyboard Folio as an optional accessory. This keyboard maintains the classic ThinkPad feel with good key travel and includes the iconic red TrackPoint nub. However, the keyboard feels cramped compared to standard ThinkPad models, and the haptic touchpad is smaller than ideal for extended use. The keyboard attaches magnetically to the lower half of the folded display but adds 1.38 pounds to the overall weight. Huawei includes a 5mm wireless aluminum keyboard with the MateBook Fold. This ultra-thin keyboard offers 1.5mm of key travel and a responsive touchpad. Weighing just 0.64 pounds, it adds minimal bulk to the package while providing a comfortable typing experience. The keyboard connects wirelessly and can be positioned flexibly, allowing users to create a more ergonomic workspace than the fixed position of Lenovo’s solution. Stylus support is available on both devices, with Lenovo offering the Precision Pen for note-taking and drawing. The X1 Fold’s pen attaches magnetically to the display, ensuring it remains available when needed. Huawei provides similar stylus functionality, though detailed specifications for its pen accessory are limited in current documentation. The most significant accessory difference is the kickstand implementation. Lenovo requires a separate adjustable-angle kickstand for desk use, adding another component to manage and transport. Huawei integrates the kickstand directly into the MateBook Fold, providing immediate stability without additional accessories. This integrated approach streamlines the user experience and reduces setup time when transitioning between usage modes. Virtual keyboard implementations provide another input option when physical keyboards are impractical. Both devices can display touch keyboards on the lower portion of the folded screen, creating a laptop-like experience without additional hardware. Lenovo’s implementation relies on Windows 11’s touch keyboard, which offers reasonable accuracy but lacks haptic feedback. Huawei’s virtual keyboard is deeply integrated with HarmonyOS, providing customizable layouts and adaptive suggestions based on user behavior. Neither virtual keyboard fully replaces a physical keyboard for extended typing sessions, but both provide convenient input options for quick tasks. The accessory ecosystem extends beyond keyboards and styluses. Lenovo leverages the ThinkPad’s business heritage with a range of compatible docks, cases, and adapters designed for professional use. Huawei focuses on cross-device accessories that work across its product line, creating a cohesive ecosystem for users invested in multiple Huawei products. This difference reflects the broader positioning of each brand, with Lenovo targeting enterprise customers and Huawei pursuing ecosystem-driven consumer experiences. Connectivity and Expansion Options Connectivity options reflect the different priorities of these manufacturers. The Lenovo ThinkPad X1 Fold includes two Thunderbolt 4 ports and one USB-C 3.2 Gen 2 port, providing versatile connectivity for peripherals and external displays. The device supports Wi-Fi 6E and Bluetooth 5.2, with optional LTE/5G connectivity for truly mobile productivity. 
This cellular option represents a significant advantage for professionals who need reliable internet access regardless of Wi-Fi availability. The Huawei MateBook Fold offers two USB-C ports, Wi-Fi 6, and Bluetooth 5.2. The device does not include cellular connectivity options, limiting its independence from Wi-Fi networks. The reduced port selection compared to the X1 Fold may require additional adapters for users with multiple peripherals or specialized equipment. Audio capabilities favor the MateBook Fold, which includes six speakers compared to the X1 Fold’s three. Both devices feature four-array microphones for clear voice capture during video conferences. Camera quality is superior on the MateBook Fold, with an 8MP sensor versus the 5MP camera on the X1 Fold. These differences impact the multimedia experience, particularly for users who frequently participate in video calls or consume media content. External display support varies between the devices. Lenovo’s Thunderbolt 4 ports enable connection to multiple high-resolution monitors, supporting sophisticated desktop setups when needed. Huawei’s USB-C ports provide display output capabilities, but with potentially fewer options for multi-monitor configurations. For professionals who regularly connect to external displays, projectors, or specialized peripherals, these connectivity differences could significantly impact workflow efficiency. Wireless connectivity standards influence performance in different environments. The X1 Fold’s Wi-Fi 6E support provides access to the less congested 6GHz band, potentially delivering faster and more reliable connections in crowded wireless environments. The MateBook Fold’s Wi-Fi 6 implementation is still capable but lacks access to these additional frequency bands. For users in dense office environments or congested urban areas, this difference could affect day-to-day connectivity performance. Future expansion capabilities depend largely on the port selection and standards support. Thunderbolt 4 provides the X1 Fold with a forward-looking connectivity standard that supports a wide range of current and upcoming peripherals. The MateBook Fold’s standard USB-C implementation offers good compatibility but lacks some of the advanced features and bandwidth of Thunderbolt. This distinction may become more relevant as users add peripherals and accessories over the device’s lifespan. Price, Availability, and Value Proposition The value equation for these devices involves balancing innovation, performance, and accessibility. The Lenovo ThinkPad X1 Fold starts at for the base configuration with a Core i5 processor, 16GB of RAM, and 256GB of storage. Fully equipped models with Core i7 processors, 32GB of RAM, and 1TB of storage approach These prices typically do not include the keyboard and kickstand accessories, which add approximately -300 to the total cost. The Huawei MateBook Fold Ultimate Design is priced between CNY 24,000 and 27,000depending on configuration. This pricing includes the wireless keyboard, making the total package cost comparable to a fully equipped X1 Fold with accessories. However, the MateBook Fold is currently available only in China, with no announced plans for international release. This limited availability significantly restricts its potential market impact outside of Asia. Global support and service represent another consideration. Lenovo maintains service centers worldwide, providing reliable support for business travelers and international organizations. 
Huawei’s support network is more limited outside of China, potentially creating challenges for users who experience hardware issues in regions without official service options. The target audience for each device influences its value proposition. The X1 Fold appeals to business professionals who prioritize Windows compatibility, global support, and integration with existing enterprise systems. Its ThinkPad branding carries significant weight in corporate environments, where reliability and security take precedence over cutting-edge specifications. The MateBook Fold targets technology enthusiasts and creative professionals who value display quality, design innovation, and ecosystem integration. Its limited availability and HarmonyOS platform make it less suitable for mainstream business adoption but potentially more appealing to users seeking the absolute latest in hardware engineering. Financing options and business leasing programs further differentiate these devices in the market. Lenovo offers established enterprise leasing programs that allow organizations to deploy the X1 Fold without significant upfront capital expenditure. These programs typically include service agreements and upgrade paths that align with corporate refresh cycles. Huawei’s business services are less developed outside of China, potentially limiting financing options for international customers interested in the MateBook Fold. Conclusion: The Future of Foldable Computing The Lenovo ThinkPad X1 Fold 2024 and Huawei MateBook Fold Ultimate Design represent two distinct visions for the future of foldable computing. Lenovo prioritizes durability, Windows compatibility, and global accessibility, creating a device that fits seamlessly into existing business environments. Huawei pushes the boundaries of hardware engineering, delivering a thinner, lighter device with a larger display and custom operating system optimized for the foldable form factor. For business users who require Windows compatibility and global support, the X1 Fold remains the more practical choice despite its thicker profile and aging processors. Its proven durability and enterprise-friendly features make it a safer investment for organizations deploying foldable technology. The device excels in versatility, allowing users to switch between tablet, laptop, and desktop modes with minimal compromise. Creative professionals and early adopters who prioritize display quality and cutting-edge design may find the MateBook Fold more appealing, provided they can access it in their region and adapt to HarmonyOS. The larger, brighter display and thinner profile create a more futuristic experience, though the limited software ecosystem and regional availability present significant barriers to widespread adoption. Looking forward, both devices point toward necessary improvements in the next generation of foldable computers. Future models should incorporate the latest processors with AI acceleration, reduce weight without sacrificing durability, integrate kickstands directly into the chassis, and provide larger, more comfortable keyboards. Display technology should continue to advance, with higher refresh rates, improved crease durability, and enhanced power efficiency. Software must evolve to better support the unique capabilities of foldable hardware, with more intuitive mode switching and optimized multitasking. 
The competition between Lenovo and Huawei benefits consumers by accelerating innovation and highlighting different approaches to solving the challenges of foldable computing. As these technologies mature and prices eventually decrease, foldable devices will transition from executive status symbols to practical tools for a broader range of users. The X1 Fold and MateBook Fold represent important steps in this evolution, each contributing valuable lessons that will shape the next generation of flexible computing devices. The ideal foldable device would combine Huawei’s hardware innovations with Lenovo’s software compatibility and global support. It would feature the thinness and display quality of the MateBook Fold, the enterprise security and connectivity options of the X1 Fold, and an operating system that seamlessly adapts to different usage modes. While neither current device achieves this perfect balance, both demonstrate remarkable engineering achievements that push the boundaries of what portable computers can be. As we look to the future, the success of foldable computing will depend not just on hardware specifications but on the development of software experiences that truly leverage the unique capabilities of these flexible displays. The device that ultimately dominates this category will be the one that most effectively bridges the gap between technical innovation and practical utility, creating experiences that simply aren’t possible on conventional laptops or tablets. Both Lenovo and Huawei have taken significant steps toward this goal, and their ongoing competition promises to accelerate progress toward truly transformative foldable computers.The post Folding the Future: Lenovo ThinkPad X1 Fold 2024 vs. Huawei MateBook Fold Ultimate Design first appeared on Yanko Design. #folding #future #lenovo #thinkpad #fold
    WWW.YANKODESIGN.COM
    Folding the Future: Lenovo ThinkPad X1 Fold 2024 vs. Huawei MateBook Fold Ultimate Design
    Why revisit the Lenovo ThinkPad X1 Fold in 2025? The answer lies in the rapid evolution of foldable computing. When Lenovo introduced its second-generation foldable PC last year, it represented the pinnacle of what was possible in this emerging category. The device combined a versatile 16.3-inch OLED display with robust engineering and the familiar Windows ecosystem. It set benchmarks for build quality, display technology, and adaptability that competitors would need to surpass. Designer: Lenovo Designer: Huawei Fast forward to today, and the landscape has shifted dramatically. Huawei has unveiled its MateBook Fold Ultimate Design, a device that challenges our understanding of what foldable laptops can achieve. With an 18-inch display that folds to a 13-inch form factor, a chassis measuring just 7.3mm when open, and a proprietary operating system built specifically for foldable hardware, Huawei has raised the stakes considerably. This comparison arrives at a pivotal moment for foldable computing. The category has matured beyond proof-of-concept to deliver genuinely useful productivity tools. Now that we have seen what Lenovo accomplished with the X1 Fold 2024, let us examine how Huawei’s MateBook Fold Ultimate Design responds and potentially redefines the future of portable computing. Design Philosophy and Physical Presence The Lenovo ThinkPad X1 Fold 2024 embodies the ThinkPad ethos of reliability and purposeful design. Its magnesium alloy frame and recycled PET woven fabric cover create a device that feels substantial and durable. The fold-flat hinge eliminates gaps when closed, protecting the display while maintaining a clean profile. At 8.6mm when open and 17.4mm when closed, the X1 Fold is not the thinnest laptop available, but its construction inspires confidence. The device weighs approximately 2.9 pounds without accessories, increasing to 4.3 pounds with the keyboard and stand attached. This weight reflects Lenovo’s prioritization of durability over absolute portability. Huawei takes a dramatically different approach with the MateBook Fold Ultimate Design. The device measures an astonishing 7.3mm when open and 14.9mm when closed, making it significantly thinner than the X1 Fold. At just 1.16kg (2.56 pounds) for the base unit and 1.45kg with the keyboard, the MateBook Fold is remarkably light for a device with an 18-inch display. This achievement comes from Huawei’s use of carbon fiber reinforcement and a zirconium-based liquid metal hinge. The 285mm “water-drop” hinge design provides smooth folding action and increased durability, with Huawei claiming a 400% improvement in hovering torque compared to conventional designs. The most significant physical difference between these devices becomes apparent in their approach to accessories. Lenovo requires a separate kickstand for desk use, adding bulk and complexity to the overall package. Huawei integrates a sturdy kickstand directly into the MateBook Fold, eliminating the need for additional accessories and streamlining the user experience. This built-in solution allows for more versatile positioning and reduces the number of components users need to manage. Both devices transform between multiple modes, but their physical dimensions create distinct experiences. When folded, the X1 Fold becomes a 12-inch laptop, which many users find cramped for serious multitasking. The MateBook Fold offers a more generous 13-inch workspace in laptop mode, providing additional screen real estate for productivity tasks. 
This difference may seem small on paper, but it significantly impacts the practical usability of these devices in their folded configurations. The materials chosen for each device reveal different priorities. Lenovo emphasizes sustainability with its recycled PET fabric cover and plastic-free packaging. This approach aligns with growing corporate environmental concerns and provides a tactile warmth that distinguishes the X1 Fold from typical metal-clad laptops. Huawei focuses on premium materials that enable extreme thinness, using advanced alloys and composites throughout the chassis. Both approaches result in distinctive aesthetics that will appeal to different user preferences. Display Technology and Visual Experience Display technology represents the heart of any foldable device, and both manufacturers have made significant investments in this critical component. The Lenovo ThinkPad X1 Fold features a 16.3-inch OLED panel with a resolution of 2560 x 2024 and a 4:3 aspect ratio. This display delivers 400 nits of brightness for standard content, increasing to 600 nits for HDR material. The panel supports DisplayHDR True Black 600 certification and Dolby Vision, covering 100% of the DCI-P3 color gamut. An anti-smudge coating helps maintain visual clarity during extended use. Huawei pushes display technology further with the MateBook Fold Ultimate Design. Its 18-inch LTPO OLED screen boasts a resolution of 3296 x 2472, maintaining the same 4:3 aspect ratio as the Lenovo. However, the MateBook Fold achieves a peak brightness of 1600 nits, more than double that of the X1 Fold. The dual-layer LTPO technology reduces power consumption by 30% compared to standard OLED panels while supporting adaptive refresh rates from 1Hz to 120Hz. This combination of size, brightness, and efficiency creates a visual experience that surpasses the X1 Fold in nearly every measurable aspect. Both displays exhibit a visible crease at the fold, though the severity varies. Lenovo’s hinge design minimizes the crease when the device is fully open, but it becomes more noticeable at certain viewing angles. Huawei claims its water-drop hinge reduces crease visibility, though independent verification is limited. In practical use, both creases become less distracting over time as users adapt to the form factor. Color accuracy and visual impact favor the MateBook Fold, with its higher brightness and contrast ratio of 2,000,000:1 creating more vibrant images and videos. The X1 Fold delivers excellent color reproduction but cannot match the visual punch of Huawei’s display. For creative professionals and media consumers, this difference could be decisive when choosing between these devices. The touch response and pen input capabilities of both displays deserve consideration. Lenovo’s display works seamlessly with the Precision Pen, offering pressure sensitivity that makes note-taking and sketching feel natural. The anti-smudge coating balances fingerprint resistance with smooth touch response. Huawei provides similar functionality, though detailed specifications about pressure sensitivity levels and palm rejection capabilities are not yet widely available. Both devices support multi-touch gestures for navigation and manipulation of on-screen elements. The 4:3 aspect ratio on both devices proves ideal for productivity applications, providing more vertical space than typical 16:9 laptop displays. This ratio works particularly well for document editing, web browsing, and coding. 
When watching widescreen video content, both devices display black bars at the top and bottom, but the overall screen size still delivers an immersive viewing experience, especially on the larger MateBook Fold. Performance and Hardware Capabilities The performance profiles of these devices reflect their different design philosophies. Lenovo equips the ThinkPad X1 Fold with 12th Generation Intel processors, ranging from the Core i5-1230U to the Core i7-1260U vPro. These 10-core, 12-thread chips provide adequate performance for productivity tasks but represent previous-generation technology in 2025. The X1 Fold supports up to 32GB of LPDDR5 RAM and 1TB of PCIe Gen 4 SSD storage. Intel Iris Xe integrated graphics handle visual processing, delivering sufficient power for office applications but struggling with demanding creative workloads. Huawei takes a different approach with its Kirin X90 ARM-based chipset. This custom silicon is specifically optimized for HarmonyOS and the foldable form factor. The MateBook Fold includes 32GB of RAM and offers storage options up to 2TB. While direct performance comparisons are difficult due to the different architectures, the Kirin X90 delivers responsive performance for HarmonyOS applications and benefits from tight hardware-software integration. Thermal management represents another point of divergence. Lenovo employs a fanless design in the X1 Fold, prioritizing silent operation over sustained performance. This approach leads to thermal throttling during extended workloads, limiting the device’s capabilities for processor-intensive tasks. Huawei incorporates a vapor chamber cooling system with diamond aluminum dual fans in the MateBook Fold, enabling 28W sustained performance without excessive heat or noise. This advanced cooling solution allows the MateBook Fold to maintain peak performance during demanding tasks, despite its thinner profile. Battery life reflects both hardware choices and software optimization. The X1 Fold includes a dual-battery design totaling 64Wh, delivering approximately 8 hours and 51 minutes in laptop mode and 7 hours and 27 minutes in tablet mode under real-world conditions. The MateBook Fold features a larger 74.69Wh battery, and its LTPO display technology reduces power consumption significantly. While independent verification of Huawei’s “all-day” battery claims is not yet available, the combination of a larger battery and more efficient display technology suggests the MateBook Fold should offer superior battery life in comparable usage scenarios. The storage subsystems in both devices utilize high-speed solid-state technology, but with different implementations. Lenovo’s PCIe Gen 4 SSD delivers sequential read speeds up to 5,000MB/s, providing quick access to large files and rapid application loading. Huawei has not published detailed storage performance metrics, but contemporary flagship devices typically feature similar high-performance storage solutions. Both devices offer sufficient storage capacity for professional workloads, with options ranging from 256GB to 2TB depending on configuration. Memory configurations play a crucial role in multitasking performance. Both devices offer 32GB in their top configurations, which provides ample headroom for demanding productivity workflows. Neither device allows for user-upgradable memory, as both use soldered RAM to maintain their slim profiles. This limitation means buyers must carefully consider their memory needs at purchase, as future upgrades are not possible. 
Operating Systems and Software Experience The most fundamental difference between these devices lies in their operating systems. The Lenovo ThinkPad X1 Fold runs Windows 11 Pro, providing access to the vast Windows software ecosystem and familiar productivity tools. Windows offers broad compatibility with business applications and enterprise management systems, making the X1 Fold a natural choice for corporate environments. However, Windows 11 still struggles with optimization for foldable form factors. Mode switching can be inconsistent, and the operating system sometimes fails to properly scale applications when transitioning between configurations. Huawei’s MateBook Fold runs HarmonyOS 5, a proprietary operating system designed specifically for the company’s ecosystem of devices. HarmonyOS offers several advantages for foldable hardware, including faster boot times, more efficient resource management, and seamless integration with other Huawei products. The operating system includes AI-powered features like document summarization, real-time translation, and context-aware suggestions through the Xiaoyi assistant. HarmonyOS also enables advanced multi-device collaboration, allowing users to transfer running apps between Huawei phones, tablets, and the MateBook Fold without interruption. The software ecosystem represents a significant consideration for potential buyers. Windows provides access to millions of applications, including industry-standard productivity, creative, and development tools. HarmonyOS currently offers over 1,000 optimized applications, with projections for 2,000+ by the end of 2025. While this number is growing rapidly, it remains a fraction of what Windows provides. Additionally, HarmonyOS and its app ecosystem are primarily focused on the Chinese market, limiting its appeal for international users. Security features differ between the platforms as well. Lenovo includes its ThinkShield security suite, Windows Hello facial recognition, and optional Computer Vision human-presence detection for privacy and security. Huawei implements its StarShield architecture, which provides security at the kernel level and throughout the operating system stack. Both approaches offer robust protection, but organizations with established Windows security protocols may prefer Lenovo’s more familiar implementation. The multitasking capabilities of each operating system deserve special attention for foldable devices. Windows 11 includes Snap Layouts and multiple virtual desktops, which work well on the X1 Fold’s large unfolded display. However, the interface can become cluttered in laptop mode due to the reduced screen size. HarmonyOS 5 features a multitasking system specifically designed for foldable displays, with intuitive gestures for splitting the screen, floating windows, and quick app switching. This optimization creates a more cohesive experience when transitioning between different device configurations. Software updates and long-term support policies differ significantly between these platforms. Windows 11 receives regular security updates and feature enhancements from Microsoft, with a well-established support lifecycle. HarmonyOS is newer, with less predictable update patterns, though Huawei has committed to regular improvements. For business users planning multi-year deployments, Windows offers more certainty regarding future compatibility and security maintenance. 
Keyboard, Input, and Accessory Integration

The keyboard experience significantly impacts productivity on foldable devices, and both manufacturers take different approaches to this challenge. Lenovo offers the ThinkPad Bluetooth TrackPoint Keyboard Folio as an optional accessory. This keyboard maintains the classic ThinkPad feel with good key travel and includes the iconic red TrackPoint nub. However, the keyboard feels cramped compared to standard ThinkPad models, and the haptic touchpad is smaller than ideal for extended use. The keyboard attaches magnetically to the lower half of the folded display but adds 1.38 pounds to the overall weight.

Huawei includes a 5mm wireless aluminum keyboard with the MateBook Fold. This ultra-thin keyboard offers 1.5mm of key travel and a responsive touchpad. Weighing just 0.64 pounds, it adds minimal bulk to the package while providing a comfortable typing experience. The keyboard connects wirelessly and can be positioned flexibly, allowing users to create a more ergonomic workspace than the fixed position of Lenovo’s solution.

Stylus support is available on both devices, with Lenovo offering the Precision Pen for note-taking and drawing. The X1 Fold’s pen attaches magnetically to the display, ensuring it remains available when needed. Huawei provides similar stylus functionality, though detailed specifications for its pen accessory are limited in current documentation.

The most significant accessory difference is the kickstand implementation. Lenovo requires a separate adjustable-angle kickstand for desk use, adding another component to manage and transport. Huawei integrates the kickstand directly into the MateBook Fold, providing immediate stability without additional accessories. This integrated approach streamlines the user experience and reduces setup time when transitioning between usage modes.

Virtual keyboard implementations provide another input option when physical keyboards are impractical. Both devices can display touch keyboards on the lower portion of the folded screen, creating a laptop-like experience without additional hardware. Lenovo’s implementation relies on Windows 11’s touch keyboard, which offers reasonable accuracy but lacks haptic feedback. Huawei’s virtual keyboard is deeply integrated with HarmonyOS, providing customizable layouts and adaptive suggestions based on user behavior. Neither virtual keyboard fully replaces a physical keyboard for extended typing sessions, but both provide convenient input options for quick tasks.

The accessory ecosystem extends beyond keyboards and styluses. Lenovo leverages the ThinkPad’s business heritage with a range of compatible docks, cases, and adapters designed for professional use. Huawei focuses on cross-device accessories that work across its product line, creating a cohesive ecosystem for users invested in multiple Huawei products. This difference reflects the broader positioning of each brand, with Lenovo targeting enterprise customers and Huawei pursuing ecosystem-driven consumer experiences.

Connectivity and Expansion Options

Connectivity options reflect the different priorities of these manufacturers. The Lenovo ThinkPad X1 Fold includes two Thunderbolt 4 ports and one USB-C 3.2 Gen 2 port, providing versatile connectivity for peripherals and external displays. The device supports Wi-Fi 6E and Bluetooth 5.2, with optional LTE/5G connectivity for truly mobile productivity.
This cellular option represents a significant advantage for professionals who need reliable internet access regardless of Wi-Fi availability. The Huawei MateBook Fold offers two USB-C ports, Wi-Fi 6, and Bluetooth 5.2. The device does not include cellular connectivity options, limiting its independence from Wi-Fi networks. The reduced port selection compared to the X1 Fold may require additional adapters for users with multiple peripherals or specialized equipment.

Audio capabilities favor the MateBook Fold, which includes six speakers compared to the X1 Fold’s three. Both devices feature four-array microphones for clear voice capture during video conferences. Camera quality is superior on the MateBook Fold, with an 8MP sensor versus the 5MP camera on the X1 Fold. These differences impact the multimedia experience, particularly for users who frequently participate in video calls or consume media content.

External display support varies between the devices. Lenovo’s Thunderbolt 4 ports enable connection to multiple high-resolution monitors, supporting sophisticated desktop setups when needed. Huawei’s USB-C ports provide display output capabilities, but with potentially fewer options for multi-monitor configurations. For professionals who regularly connect to external displays, projectors, or specialized peripherals, these connectivity differences could significantly impact workflow efficiency.

Wireless connectivity standards influence performance in different environments. The X1 Fold’s Wi-Fi 6E support provides access to the less congested 6GHz band, potentially delivering faster and more reliable connections in crowded wireless environments. The MateBook Fold’s Wi-Fi 6 implementation is still capable but lacks access to these additional frequency bands. For users in dense office environments or congested urban areas, this difference could affect day-to-day connectivity performance.

Future expansion capabilities depend largely on the port selection and standards support. Thunderbolt 4 provides the X1 Fold with a forward-looking connectivity standard that supports a wide range of current and upcoming peripherals. The MateBook Fold’s standard USB-C implementation offers good compatibility but lacks some of the advanced features and bandwidth of Thunderbolt. This distinction may become more relevant as users add peripherals and accessories over the device’s lifespan.

Price, Availability, and Value Proposition

The value equation for these devices involves balancing innovation, performance, and accessibility. The Lenovo ThinkPad X1 Fold starts at $2,499 for the base configuration with a Core i5 processor, 16GB of RAM, and 256GB of storage. Fully equipped models with Core i7 processors, 32GB of RAM, and 1TB of storage approach $3,900. These prices typically do not include the keyboard and kickstand accessories, which add approximately $250-300 to the total cost.

The Huawei MateBook Fold Ultimate Design is priced between CNY 24,000 and 27,000 (approximately $3,300 to $3,700) depending on configuration. This pricing includes the wireless keyboard, making the total package cost comparable to a fully equipped X1 Fold with accessories. However, the MateBook Fold is currently available only in China, with no announced plans for international release. This limited availability significantly restricts its potential market impact outside of Asia.

Global support and service represent another consideration.
Lenovo maintains service centers worldwide, providing reliable support for business travelers and international organizations. Huawei’s support network is more limited outside of China, potentially creating challenges for users who experience hardware issues in regions without official service options.

The target audience for each device influences its value proposition. The X1 Fold appeals to business professionals who prioritize Windows compatibility, global support, and integration with existing enterprise systems. Its ThinkPad branding carries significant weight in corporate environments, where reliability and security take precedence over cutting-edge specifications. The MateBook Fold targets technology enthusiasts and creative professionals who value display quality, design innovation, and ecosystem integration. Its limited availability and HarmonyOS platform make it less suitable for mainstream business adoption but potentially more appealing to users seeking the absolute latest in hardware engineering.

Financing options and business leasing programs further differentiate these devices in the market. Lenovo offers established enterprise leasing programs that allow organizations to deploy the X1 Fold without significant upfront capital expenditure. These programs typically include service agreements and upgrade paths that align with corporate refresh cycles. Huawei’s business services are less developed outside of China, potentially limiting financing options for international customers interested in the MateBook Fold.

Conclusion: The Future of Foldable Computing

The Lenovo ThinkPad X1 Fold 2024 and Huawei MateBook Fold Ultimate Design represent two distinct visions for the future of foldable computing. Lenovo prioritizes durability, Windows compatibility, and global accessibility, creating a device that fits seamlessly into existing business environments. Huawei pushes the boundaries of hardware engineering, delivering a thinner, lighter device with a larger display and custom operating system optimized for the foldable form factor.

For business users who require Windows compatibility and global support, the X1 Fold remains the more practical choice despite its thicker profile and aging processors. Its proven durability and enterprise-friendly features make it a safer investment for organizations deploying foldable technology. The device excels in versatility, allowing users to switch between tablet, laptop, and desktop modes with minimal compromise.

Creative professionals and early adopters who prioritize display quality and cutting-edge design may find the MateBook Fold more appealing, provided they can access it in their region and adapt to HarmonyOS. The larger, brighter display and thinner profile create a more futuristic experience, though the limited software ecosystem and regional availability present significant barriers to widespread adoption.

Looking forward, both devices point toward necessary improvements in the next generation of foldable computers. Future models should incorporate the latest processors with AI acceleration, reduce weight without sacrificing durability, integrate kickstands directly into the chassis, and provide larger, more comfortable keyboards. Display technology should continue to advance, with higher refresh rates, improved crease durability, and enhanced power efficiency. Software must evolve to better support the unique capabilities of foldable hardware, with more intuitive mode switching and optimized multitasking.
The competition between Lenovo and Huawei benefits consumers by accelerating innovation and highlighting different approaches to solving the challenges of foldable computing. As these technologies mature and prices eventually decrease, foldable devices will transition from executive status symbols to practical tools for a broader range of users. The X1 Fold and MateBook Fold represent important steps in this evolution, each contributing valuable lessons that will shape the next generation of flexible computing devices.

The ideal foldable device would combine Huawei’s hardware innovations with Lenovo’s software compatibility and global support. It would feature the thinness and display quality of the MateBook Fold, the enterprise security and connectivity options of the X1 Fold, and an operating system that seamlessly adapts to different usage modes. While neither current device achieves this perfect balance, both demonstrate remarkable engineering achievements that push the boundaries of what portable computers can be.

As we look to the future, the success of foldable computing will depend not just on hardware specifications but on the development of software experiences that truly leverage the unique capabilities of these flexible displays. The device that ultimately dominates this category will be the one that most effectively bridges the gap between technical innovation and practical utility, creating experiences that simply aren’t possible on conventional laptops or tablets. Both Lenovo and Huawei have taken significant steps toward this goal, and their ongoing competition promises to accelerate progress toward truly transformative foldable computers.

The post Folding the Future: Lenovo ThinkPad X1 Fold 2024 vs. Huawei MateBook Fold Ultimate Design first appeared on Yanko Design.
  • FDA restricts COVID-19 vaccines to older adults and high-risk groups. Here’s what to know

    On May 20, 2025, the Food and Drug Administration announced a new stance on who should receive the COVID-19 vaccine.

    The agency said it would approve new versions of the vaccine only for adults 65 years of age and older as well as for people with one or more risk factors for severe COVID-19 outcomes. These risk factors include medical conditions such as asthma, cancer, chronic kidney disease, heart disease and diabetes.

    However, healthy younger adults and children who fall outside of these groups may not be eligible to receive the COVID-19 shot this fall. Vaccine manufacturers will have to conduct clinical trials to demonstrate that the vaccine benefits low-risk groups.

    FDA Commissioner Martin Makary and the agency’s head of vaccines, Vinay Prasad, described the new framework in an article published in the New England Journal of Medicine and in a public webcast.

    The Conversation U.S. asked Libby Richards, a nursing professor involved in public health promotion, to explain why the changes were made and what they mean for the general public.

    Why did the FDA diverge from past practice?

    Until the May 20 announcement, getting a yearly COVID-19 vaccine was recommended for everyone ages 6 months and older, regardless of their health risk.

    According to Makary and Prasad, the Food and Drug Administration is moving away from these universal recommendations and instead taking a risk-based approach based on its interpretation of public health trends – specifically, the declining COVID-19 booster uptake, a lack of strong evidence that repeated boosters improve health outcomes for healthy people and the fact that natural immunity from past COVID-19 infections is widespread.

    The FDA states it wants to ensure the vaccine is backed by solid clinical trial data, especially for low-risk groups.

    Was this a controversial decision or a clear consensus?

    The FDA’s decision to adopt a risk-based framework for the COVID-19 vaccine aligns with the expected recommendations from the Advisory Committee on Immunization Practices, an advisory group of vaccine experts that guides the Centers for Disease Control and Prevention on vaccine policy and is scheduled to meet in June 2025. But while this advisory committee was also expected to recommend allowing low-risk people to get annual COVID-19 vaccines if they want to, the FDA’s policy will likely make that difficult.

    Although the FDA states that its new policy aims to promote greater transparency and evidence-based decision-making, the change is controversial – in part because it circumvents the usual process for evaluating vaccine recommendations. The FDA is enacting this policy change by limiting its approval of the vaccine to high-risk groups, and it is doing so without any new data supporting its decision. Usually, however, the FDA broadly approves a vaccine based on whether it is safe and effective, and decisions on who should be eligible to receive it are left to the CDC, which receives research-based guidance from the Advisory Committee on Immunization Practices.

    Additionally, FDA officials point to Canada, Australia and some European countries that limit vaccine recommendations to older adults and other high-risk people as a model for its revised framework. But vaccine strategies vary widely, and this more conservative approach has not necessarily proven superior. Also, those countries have universal health care systems and have a track record of more equitable access to COVID-19 care and better COVID-19 outcomes.

    Another question is how health officials’ positions on COVID-19 vaccines affect public perception. Makary and Prasad noted that COVID-19 vaccination campaigns may have actually eroded public trust in vaccination. But some vaccine experts have expressed concerns that limiting COVID-19 vaccine access might further fuel vaccine hesitancy because any barrier to vaccine access can reduce uptake and hinder efforts to achieve widespread immunity.

    What conditions count as risk factors?

    The New England Journal of Medicine article includes a lengthy list of conditions that increase the risk of severe COVID-19 and notes that about 100 million to 200 million people will fall into this category and will thus be eligible to get the vaccine.

    Pregnancy is included. Some items on the list, however, are unclear. For example, the list includes asthma, but the evidence that asthma is a risk factor for severe COVID-19 is scant.

    Also on the list is physical inactivity, which likely applies to a vast swath of Americans and is difficult to define. Studies have found links between regular physical activity and reduced risk of severe COVID-19 infection, but it’s unclear how health care providers will define and measure physical inactivity when assessing a patient’s eligibility for COVID-19 vaccines.

    Most importantly, the list leaves out an important group – caregivers and household members of people at high risk of severe illness from COVID-19 infection. This omission leaves high-risk people more vulnerable to exposure to COVID-19 from healthy people they regularly interact with. Several of the countries the new framework points to as models do include this group.

    Why is the FDA requiring new clinical trials?

    According to the FDA, the benefits of multiple doses of COVID-19 vaccines for healthy adults are currently unproven. It’s true that studies beyond the fourth vaccine dose are scarce. However, multiple studies have demonstrated that the vaccine is effective at preventing the risk of severe COVID-19 infection, hospitalization and death in low-risk adults and children. Receiving multiple doses of COVID-19 vaccines has also been shown to reduce the risk of long COVID.

    The FDA is requiring vaccine manufacturers to conduct additional large randomized clinical trials to further evaluate the safety and effectiveness of COVID-19 boosters for healthy adults and children. These trials will primarily test whether the vaccines prevent symptomatic infections, and secondarily whether they prevent hospitalization and death. Such trials are more complex, costly and time-consuming than the more common approach of testing for immunological response.

    This requirement will likely delay both the timeliness and the availability of COVID-19 vaccine boosters and slow public health decision-making.

    Will low-risk people be able to get a COVID-19 shot?

    Not automatically. Under the new FDA framework, healthy adults who wish to receive the fall COVID-19 vaccine will face obstacles. Health care providers can administer vaccines “off-label,” but insurance coverage is largely based on FDA recommendations. The new, narrower FDA approval will likely reduce both access to COVID-19 vaccines for the general public and insurance coverage for COVID-19 vaccines.

    The FDA’s focus on individual risks and benefits may overlook broader public health benefits. Communities with higher vaccination rates have fewer opportunities to spread the virus.

    What about vaccines for children?

    High-risk children age 6 months and older who have conditions that increase the risk of severe COVID-19 are still eligible for the vaccine under the new framework. As of now, healthy children age 6 months and older without underlying medical conditions will not have routine access to COVID-19 vaccines until further clinical trial data is available.

    Existing vaccines already on the market will remain available, but it is unclear how long they will stay authorized and how the change will affect childhood vaccination overall.

    Libby Richards is a professor of nursing at Purdue University.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.
  • This AI Paper Introduces MathCoder-VL and FigCodifier: Advancing Multimodal Mathematical Reasoning with Vision-to-Code Alignment

    Multimodal mathematical reasoning enables machines to solve problems involving textual information and visual components like diagrams and figures. This requires combining language understanding and visual interpretation to make sense of complex mathematical contexts. Such capabilities are vital in education, automated tutoring, and document analysis, where problems are often presented with a blend of text and images.
    A major obstacle in this area is the lack of high-quality, precise alignment between math images and their textual or symbolic representations. Most datasets used to train large multimodal models are derived from image captions in natural settings, which often miss the detailed elements essential for mathematical accuracy. This creates problems for models that rely on these data sources, making them unreliable when dealing with geometry, figures, or technical diagrams. A model’s performance in mathematical reasoning depends heavily on its ability to correctly interpret and link these visual details with mathematical expressions or instructions.

    In the past, some approaches tried to address this by either enhancing the visual encoders or using manually crafted datasets. However, these methods tend to produce low image diversity, relying on hand-coded or template-based generation, which limits their applicability. Some efforts, like Math-LLaVA and MAVIS, developed synthetic datasets and used templates or predefined categories. Still, they could not dynamically create a wide variety of math-related visuals. This shortfall restricts the learning scope of models and leaves them struggling with more complex or less structured mathematical problems.
    Researchers from the Multimedia Laboratory at The Chinese University of Hong Kong and CPII under InnoHK introduced a novel approach called MathCoder-VL. This method combines a vision-to-code model named FigCodifier and a synthetic data engine. They constructed the ImgCode-8.6M dataset using a model-in-the-loop strategy, which allowed them to iteratively build the largest image-code dataset to date. Further, they developed MM-MathInstruct-3M, a multimodal instruction dataset enriched with newly synthesized images. The MathCoder-VL model is trained in two stages: mid-training on ImgCode-8.6M to improve visual-text alignment and fine-tuning on MM-MathInstruct-3M to strengthen reasoning abilities.
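    To make the model-in-the-loop idea easier to picture, the Python sketch below shows one way such an iterative image-to-code data engine could be structured. Everything here is a hedged illustration: the Image and Code types, the train and render callables, and the round count are placeholders introduced for clarity, not the authors’ released tooling, and the paper’s actual pipeline details may differ.

    from typing import Callable, List, Optional, Tuple

    # Placeholders for illustration: in the real pipeline an Image is a math figure
    # and Code is TikZ or Python plotting code; here both are opaque handles.
    Image = str
    Code = str

    def build_image_code_corpus(
        seed_pairs: List[Tuple[Image, Code]],        # e.g., an initial DaTikZ-style seed set
        unlabeled_figures: List[Image],              # figures gathered from textbooks and papers
        train: Callable[[List[Tuple[Image, Code]]], Callable[[Image], Code]],
        render: Callable[[Code], Optional[Image]],   # returns None when the code fails to render
        rounds: int = 3,
    ) -> List[Tuple[Image, Code]]:
        """Iteratively grow an (image, code) corpus with the model in the loop."""
        corpus = list(seed_pairs)
        for _ in range(rounds):
            predict_code = train(corpus)             # retrain the vision-to-code model on all pairs so far
            for figure in unlabeled_figures:
                code = predict_code(figure)          # the model proposes code for the figure
                rendered = render(code)              # the code must actually produce an image
                if rendered is not None:
                    # Pair the rendered image with the code that produced it,
                    # so image-code alignment is exact by construction.
                    corpus.append((rendered, code))
        return corpus

    The property worth noting is that every kept pair is an image together with the code that generated it, so alignment does not depend on captions or heuristics.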

    The FigCodifier model works by translating mathematical figures into code that can recreate those figures exactly. This code-image pairing ensures strict alignment and accuracy, unlike caption-based datasets. The process begins with 119K image-code pairs from DaTikZ and expands through iterative training using images collected from textbooks, K12 datasets, and arXiv papers. The final dataset includes 8.6 million code-image pairs and covers various mathematical topics. FigCodifier also supports Python-based rendering, which adds variety to image generation. The system filters low-quality data by checking code validity and removing redundant or unhelpful visuals, resulting in 4.3M high-quality TikZ and 4.3M Python-based pairs.
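    The code-validity filter is the most concrete step to sketch. The Python example below is a hedged approximation for the Python-rendered pairs, not the authors’ actual filtering code: it runs a generated matplotlib script headlessly in a subprocess and keeps the pair only if the script exits cleanly and produces a non-empty figure. A production pipeline would additionally sandbox the untrusted code and apply the redundancy checks mentioned above; TikZ candidates would need a LaTeX compile step instead.

    import os
    import subprocess
    import sys
    import tempfile

    def renders_ok(generated_code: str, timeout_s: int = 20) -> bool:
        """Return True if generated matplotlib code runs headlessly and saves a figure."""
        with tempfile.TemporaryDirectory() as tmp:
            script_path = os.path.join(tmp, "candidate.py")
            out_path = os.path.join(tmp, "figure.png")
            # Wrap the candidate code: force a headless backend, then save whatever it drew.
            wrapper = (
                "import matplotlib\n"
                "matplotlib.use('Agg')\n"
                + generated_code + "\n"
                + "import matplotlib.pyplot as plt\n"
                + f"plt.savefig({out_path!r})\n"
            )
            with open(script_path, "w") as f:
                f.write(wrapper)
            try:
                result = subprocess.run(
                    [sys.executable, script_path],
                    capture_output=True,
                    timeout=timeout_s,
                )
            except subprocess.TimeoutExpired:
                return False  # the script hung or ran too long: treat as invalid
            return (
                result.returncode == 0
                and os.path.exists(out_path)
                and os.path.getsize(out_path) > 0
            )

    A candidate figure-code pair would be kept only when renders_ok(code) returns True, which is one simple way to operationalize the kind of filtering the article describes.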
    Performance evaluations show that MathCoder-VL outperforms multiple open-source models. The 8B version achieved 73.6% accuracy on the MathVista Geometry Problem Solving subset, surpassing GPT-4o and Claude 3.5 Sonnet by 8.9% and 9.2%, respectively. It also scored 26.1% on MATH-Vision and 46.5% on MathVerse. In Chinese-language benchmarks, it achieved 51.2% on GAOKAO-MM. On the We-Math benchmark, it solved two-step problems at 58.6%, outperforming GPT-4o’s 58.1%. Its performance on three-step problems reached 52.1%, again exceeding GPT-4o’s 43.6%. Compared to its base model InternVL2-8B, it showed gains of 6.1% on MATH-Vision and 11.6% on MathVista.

    This work clearly defines the problem of insufficient visual-textual alignment in multimodal math reasoning and provides a scalable and innovative solution. The introduction of FigCodifier and synthetic datasets allows models to learn from accurate, diverse visuals paired with exact code, significantly boosting their reasoning abilities. MathCoder-VL represents a practical advancement in this field, demonstrating how thoughtful model design and high-quality data can overcome longstanding limitations in mathematical AI.

    Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.
    Nikhil is an intern consultant at Marktechpost. He is pursuing an integrated dual degree in Materials at the Indian Institute of Technology, Kharagpur. Nikhil is an AI/ML enthusiast who is always researching applications in fields like biomaterials and biomedical science. With a strong background in Material Science, he is exploring new advancements and creating opportunities to contribute.