• The Word is Out: Danish Ministry Drops Microsoft, Goes Open Source

    Key Takeaways

    Denmark’s Ministry of Digitalization will leave the Microsoft ecosystem in favor of Linux and other open-source software.
    Half of the Ministry’s employees are slated to move to Linux and LibreOffice by summer, with the rest following by fall.
    The decision follows similar moves by Copenhagen and Aarhus and is driven by costs, politics, and digital sovereignty.

    Denmark’s Ministry of Digitalization has recently announced that it will leave the Microsoft ecosystem in favor of Linux and other open-source software.
    Minister Caroline Stage Olsen revealed this in an interview with Politiken, the country’s leading newspaper. According to Olsen, the Ministry plans to switch half of its employees to Linux and LibreOffice by summer, and the rest by fall.
    The announcement comes after Denmark’s largest cities – Copenhagen and Aarhus – made similar moves earlier this month.
    Why the Danish Ministry of Digitalization Switched to Open-Source Software
    The three main reasons Denmark is moving away from Microsoft are costs, politics, and security.
    In the case of Aarhus, the city was able to slash its annual costs from 800K kroner to just 225K by replacing Microsoft with a German service provider. 
    The same is a pain point for Copenhagen, which saw its costs on Microsoft balloon from 313M kroner in 2018 to 538M kroner in 2023.
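    Taken together, the reported figures imply strikingly similar percentage swings in both cities. A quick back-of-the-envelope check, using only the kroner amounts cited above (variable names are illustrative):

```python
# Rough sanity check of the kroner figures reported in the article.
aarhus_before, aarhus_after = 800_000, 225_000  # Aarhus annual cost, before/after the switch
copenhagen_2018, copenhagen_2023 = 313_000_000, 538_000_000  # Copenhagen's Microsoft spend

aarhus_savings_pct = (aarhus_before - aarhus_after) / aarhus_before * 100
copenhagen_growth_pct = (copenhagen_2023 - copenhagen_2018) / copenhagen_2018 * 100

print(f"Aarhus cut its annual costs by about {aarhus_savings_pct:.0f}%")
print(f"Copenhagen's Microsoft spend grew about {copenhagen_growth_pct:.0f}% from 2018 to 2023")
```

    By these numbers, Aarhus shed roughly 72% of its annual cost, while Copenhagen’s Microsoft bill grew by roughly the same proportion over five years.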
    The decision is also part of a broader push to increase Denmark’s digital sovereignty. In her LinkedIn post, Olsen explained that the strategy is not about isolation or digital nationalism, adding that Denmark should not turn its back completely on global tech companies like Microsoft.

    Instead, it’s about avoiding overdependence on these companies, which could otherwise limit Denmark’s freedom to act.
    Then there’s politics. Since his reelection earlier this year, US President Donald Trump has repeatedly threatened to take over Greenland, an autonomous territory of Denmark. 
    In May, the Danish Foreign Minister Lars Løkke Rasmussen summoned the US ambassador regarding news that US spy agencies have been told to focus on the territory.
    If the relationship between the two countries continues to erode, Trump could order Microsoft and other US tech companies to cut Denmark off from their services. After all, Microsoft and Meta, Facebook’s parent company, both have close ties to the US president, having each contributed $1M to his inauguration in January.
    Denmark Isn’t Alone: Other EU Countries Are Making Similar Moves
    Denmark is only one of a growing number of European Union (EU) countries taking measures to become more digitally independent.
    Germany’s Federal Digital Minister Karsten Wildberger emphasized the need to be more independent of global tech companies during the re:publica internet conference in May. He added that IT companies in the EU have the opportunity to create tech that is based on the region’s values.

    Meanwhile, Bert Hubert, a technical advisor to the Dutch Electoral Council, wrote in February that ‘it is no longer safe to move our governments and societies to US clouds.’ He said that America is no longer a ‘reliable partner,’ making it risky to have the data of European governments and businesses at the mercy of US-based cloud providers.
    Earlier this month, the chief prosecutor of the International Criminal Court, Karim Khan, experienced a disconnection from his Microsoft-based email account, sparking uproar across the region. 
    Speculation quickly arose that the incident was linked to sanctions previously imposed on the ICC by the Trump administration, an assertion Microsoft has denied.
    Weaning the EU Away from US Tech is Possible, But Challenges Lie Ahead
    Change like this doesn’t happen overnight. Just finding, let alone developing, reliable alternatives to tools that have been part of daily workflows for decades is a massive undertaking.
    It will also take time for users to adapt to these new tools, especially when transitioning to an entirely new ecosystem. In Aarhus, for example, municipal staff initially viewed the shift to open source as a step down from the familiarity and functionality of Microsoft products.
    Overall, these are only temporary hurdles. Momentum is building, with growing calls for digital independence from leaders like Ministers Olsen and Wildberger.
    Initiatives such as the Digital Europe Programme, which seeks to reduce reliance on foreign systems and solutions, further accelerate this push. As a result, the EU’s transition could arrive sooner rather than later.

    As technology continues to evolve—from the return of 'dumbphones' to faster and sleeker computers—seasoned tech journalist Cedric Solidon continues to dedicate himself to writing stories that inform, empower, and connect with readers across all levels of digital literacy.
    With 20 years of professional writing experience, this University of the Philippines Journalism graduate has carved out a niche as a trusted voice in tech media. Whether he's breaking down the latest advancements in cybersecurity or explaining how silicon-carbon batteries can extend your phone’s battery life, his writing remains rooted in clarity, curiosity, and utility.
    Long before he was writing for Techreport, HP, Citrix, SAP, Globe Telecom, CyberGhost VPN, and ExpressVPN, Cedric's love for technology began at home courtesy of a Nintendo Family Computer and a stack of tech magazines.
    Growing up, his days were often filled with sessions of Contra, Bomberman, Red Alert 2, and the criminally underrated Crusader: No Regret. But gaming wasn't his only gateway to tech. 
    He devoured every T3, PCMag, and PC Gamer issue he could get his hands on, often reading them cover to cover. It wasn’t long before he explored the early web in IRC chatrooms, online forums, and fledgling tech blogs, soaking in every byte of knowledge from the late '90s and early 2000s internet boom.
    That fascination with tech didn’t just stick. It evolved into a full-blown calling.
    After graduating with a degree in Journalism, he began his writing career at the dawn of Web 2.0. What started with small editorial roles and freelance gigs soon grew into a full-fledged career.
    He has since collaborated with global tech leaders, lending his voice to content that bridges technical expertise with everyday usability. He’s also written annual reports for Globe Telecom and consumer-friendly guides for VPN companies like CyberGhost and ExpressVPN, empowering readers to understand the importance of digital privacy.
    His versatility spans not just tech journalism but also technical writing. He once worked with a local tech company developing web and mobile apps for logistics firms, crafting documentation and communication materials that brought together user-friendliness with deep technical understanding. That experience sharpened his ability to break down dense, often jargon-heavy material into content that speaks clearly to both developers and decision-makers.
    At the heart of his work lies a simple belief: technology should feel empowering, not intimidating. Even if the likes of smartphones and AI are now commonplace, he understands that there's still a knowledge gap, especially when it comes to hardware or the real-world benefits of new tools. His writing hopes to help close that gap.
    Cedric’s writing style reflects that mission. It’s friendly without being fluffy and informative without being overwhelming. Whether writing for seasoned IT professionals or casual readers curious about the latest gadgets, he focuses on how a piece of technology can improve our lives, boost our productivity, or make our work more efficient. That human-first approach makes his content feel more like a conversation than a technical manual.
    As his writing career progresses, his passion for tech journalism remains as strong as ever. With the growing need for accessible, responsible tech communication, he sees his role not just as a journalist but as a guide who helps readers navigate a digital world that’s often as confusing as it is exciting.
    From reviewing the latest devices to unpacking global tech trends, Cedric isn’t just reporting on the future; he’s helping to write it.

    View all articles by Cedric Solidon

    Our editorial process

    The Tech Report editorial policy is centered on providing helpful, accurate content that offers real value to our readers. We only work with experienced writers who have specific knowledge in the topics they cover, including latest developments in technology, online privacy, cryptocurrencies, software, and more. Our editorial policy ensures that each topic is researched and curated by our in-house editors. We maintain rigorous journalistic standards, and every article is 100% written by real authors.
  • New Court Order in Stratasys v. Bambu Lab Lawsuit

    There has been a new update to the ongoing Stratasys v. Bambu Lab patent infringement lawsuit. 
    Both parties have agreed to consolidate the lead and member cases into a single case under Case No. 2:25-cv-00465-JRG. 
    Industrial 3D printing OEM Stratasys filed the request late last month. According to an official court document, Shenzhen-based Bambu Lab did not oppose the motion. Stratasys argued that this non-opposition amounted to the defendants waiving their right to challenge the request under U.S. patent law 35 U.S.C. § 299.
    On June 2, the U.S. District Court for the Eastern District of Texas, Marshall Division, ordered Bambu Lab to confirm in writing whether it agreed to the proposed case consolidation. The court took this step out of an “abundance of caution” to ensure both parties consented to the procedure before moving forward.
    Bambu Lab submitted its response on June 12, agreeing to the consolidation. The company, along with co-defendants Shenzhen Tuozhu Technology Co., Ltd., Shanghai Lunkuo Technology Co., Ltd., and Tuozhu Technology Limited, waived its rights under 35 U.S.C. § 299. The court will now decide whether to merge the cases.
    This followed U.S. District Judge Rodney Gilstrap’s decision last month to deny Bambu Lab’s motion to dismiss the lawsuits. 
    The Chinese desktop 3D printer manufacturer filed the motion in February 2025, arguing the cases were invalid because its US-based subsidiary, Bambu Lab USA, was not named in the original litigation. However, it agreed that the lawsuit could continue in the Austin division of the Western District of Texas, where a parallel case was filed last year. 
    Judge Gilstrap denied the motion, ruling that the cases properly target the named defendants. He concluded that Bambu Lab USA isn’t essential to the dispute, and that any misnaming should be addressed in summary judgment, not dismissal.       
    A Stratasys Fortus 450mc and a Bambu Lab X1C. Image by 3D Printing Industry.
    Another twist in the Stratasys v. Bambu Lab lawsuit 
    Stratasys filed the two lawsuits against Bambu Lab in the Eastern District of Texas, Marshall Division, in August 2024. The company claims that Bambu Lab’s X1C, X1E, P1S, P1P, A1, and A1 mini 3D printers violate ten of its patents. These patents cover common 3D printing features, including purge towers, heated build plates, tool head force detection, and networking capabilities.
    Stratasys has requested a jury trial. It is seeking a ruling that Bambu Lab infringed its patents, along with financial damages and an injunction to stop Bambu from selling the allegedly infringing 3D printers.
    Last October, Stratasys dropped charges against two of the originally named defendants in the dispute. Court documents showed that Beijing Tiertime Technology Co., Ltd. and Beijing Yinhua Laser Rapid Prototyping and Mould Technology Co., Ltd were removed. Both defendants represent the company Tiertime, China’s first 3D printer manufacturer. The District Court accepted the dismissal, with all claims dropped without prejudice.
    It’s unclear why Stratasys named Beijing-based Tiertime as a defendant in the first place, given the lack of an obvious connection to Bambu Lab. 
    Tiertime and Stratasys have a history of legal disputes over patent issues. In 2013, Stratasys sued Afinia, Tiertime’s U.S. distributor and partner, for patent infringement. Afinia responded by suing uCRobotics, the Chinese distributor of MakerBot 3D printers, also alleging patent violations. Stratasys acquired MakerBot in June 2013. The company later merged with Ultimaker in 2022.
    In February 2025, Bambu Lab filed a motion to dismiss the original lawsuits. The company argued that Stratasys’ claims, focused on the sale, importation, and distribution of 3D printers in the United States, do not apply to the Shenzhen-based parent company. Bambu Lab contended that the allegations concern its American subsidiary, Bambu Lab USA, which was not named in the complaint filed in the Eastern District of Texas.
    Bambu Lab filed a motion to dismiss, claiming the case is invalid under Federal Rule of Civil Procedure 19. It argued that any party considered a “primary participant” in the allegations must be included as a defendant.   
    The court denied the motion on May 29, 2025. In the ruling, Judge Gilstrap explained that Stratasys’ allegations focus on the actions of the named defendants, not Bambu Lab USA. As a result, the official court document called Bambu Lab’s argument “unavailing.” Additionally, the judge stated that, since Bambu Lab USA and Bambu Lab are both owned by Shenzhen Tuozhu, “the interest of these two entities align,” meaning the original cases are valid.
    In the official court document, Judge Gilstrap emphasized that Stratasys can win or lose the lawsuits based solely on the actions of the current defendants, regardless of Bambu Lab USA’s involvement. He added that any potential risk to Bambu Lab USA’s business is too vague or hypothetical to justify making it a required party.
    Finally, the court noted that even if Stratasys named the wrong defendant, this does not justify dismissal under Rule 12(b)(7). Instead, the judge stated it would be more appropriate for the defendants to raise that argument in a motion for summary judgment.
    The Bambu Lab X1C 3D printer. Image via Bambu Lab.
    3D printing patent battles 
    The 3D printing industry has seen its fair share of patent infringement disputes over recent months. In May 2025, 3D printer hotend developer Slice Engineering reached an agreement with Creality over a patent non-infringement lawsuit. 
    The Chinese 3D printer OEM filed the lawsuit in July 2024 in the U.S. District Court for the Northern District of Florida, Gainesville Division. The company claimed that Slice Engineering had falsely accused it of infringing two hotend patents, U.S. Patent Nos. 10,875,244 and 11,660,810. These cover mechanical and thermal features of Slice’s Mosquito 3D printer hotend. Creality requested a jury trial and sought a ruling confirming it had not infringed either patent.
    Court documents show that Slice Engineering filed a countersuit in December 2024. The Gainesville-based company maintained that Creality “has infringed and continues to infringe” on both patents. In the filing, the company also denied allegations that it had harassed Creality’s partners, distributors, and customers, and claimed that Creality had refused to negotiate a resolution.
    The Creality v. Slice Engineering lawsuit has since been dropped following a mutual resolution. Court documents show that both parties have permanently dismissed all claims and counterclaims, agreeing to cover their own legal fees and costs. 
    In other news, large-format resin 3D printer manufacturer Intrepid Automation sued 3D Systems over alleged patent infringement. The lawsuit, filed in February 2025, accused 3D Systems of using patented technology in its PSLA 270 industrial resin 3D printer. The filing called the PSLA 270 a “blatant knock off” of Intrepid’s DLP multi-projection “Range” 3D printer.  
    San Diego-based Intrepid Automation called this alleged infringement the “latest chapter of 3DS’s brazen, anticompetitive scheme to drive a smaller competitor with more advanced technology out of the marketplace.” The lawsuit also accused 3D Systems of corporate espionage, claiming one of its employees stole confidential trade secrets that were later used to develop the PSLA 270 printer.
    3D Systems denied the allegations and filed a motion to dismiss the case. The company called the lawsuit “a desperate attempt” by Intrepid to distract from its own alleged theft of 3D Systems’ trade secrets.
    Who won the 2024 3D Printing Industry Awards?
    Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news. You can also follow us on LinkedIn, and subscribe to the 3D Printing Industry YouTube channel to access more exclusive content. Featured image shows a Stratasys Fortus 450mc (left) and a Bambu Lab X1C (right). Image by 3D Printing Industry.
  • EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments

    Navigating the dense urban canyons of cities like San Francisco or New York can be a nightmare for GPS systems. The towering skyscrapers block and reflect satellite signals, leading to location errors of tens of meters. For you and me, that might mean a missed turn. But for an autonomous vehicle or a delivery robot, that level of imprecision is the difference between a successful mission and a costly failure. These machines require pinpoint accuracy to operate safely and efficiently. Addressing this critical challenge, researchers from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland have introduced a groundbreaking new method for visual localization at CVPR 2025.
    Their new paper, “FG2: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching,” presents a novel AI model that significantly enhances the ability of a ground-level system, like an autonomous car, to determine its exact position and orientation using only a camera and a corresponding aerial (or satellite) image. The new approach has demonstrated a remarkable 28% reduction in mean localization error compared to the previous state-of-the-art on a challenging public dataset.
    Key Takeaways:

    Superior Accuracy: The FG2 model reduces the average localization error by a significant 28% on the VIGOR cross-area test set, a challenging benchmark for this task.
    Human-like Intuition: Instead of relying on abstract descriptors, the model mimics human reasoning by matching fine-grained, semantically consistent features—like curbs, crosswalks, and buildings—between a ground-level photo and an aerial map.
    Enhanced Interpretability: The method allows researchers to “see” what the AI is “thinking” by visualizing exactly which features in the ground and aerial images are being matched, a major step forward from previous “black box” models.
    Weakly Supervised Learning: Remarkably, the model learns these complex and consistent feature matches without any direct labels for correspondences. It achieves this using only the final camera pose as a supervisory signal.

    Challenge: Seeing the World from Two Different Angles
    The core problem of cross-view localization is the dramatic difference in perspective between a street-level camera and an overhead satellite view. A building facade seen from the ground looks completely different from its rooftop signature in an aerial image. Existing methods have struggled with this. Some create a general “descriptor” for the entire scene, but this is an abstract approach that doesn’t mirror how humans naturally localize themselves by spotting specific landmarks. Other methods transform the ground image into a Bird’s-Eye-View (BEV), but are often limited to the ground plane, ignoring crucial vertical structures like buildings.

    FG2: Matching Fine-Grained Features
    The EPFL team’s FG2 method introduces a more intuitive and effective process. It aligns two sets of points: one generated from the ground-level image and another sampled from the aerial map.

    Here’s a breakdown of their innovative pipeline:

    Mapping to 3D: The process begins by taking the features from the ground-level image and lifting them into a 3D point cloud centered around the camera. This creates a 3D representation of the immediate environment.
    Smart Pooling to BEV: This is where the magic happens. Instead of simply flattening the 3D data, the model learns to intelligently select the most important features along the vertical dimension for each point. It essentially asks, “For this spot on the map, is the ground-level road marking more important, or is the edge of that building’s roof the better landmark?” This selection process is crucial, as it allows the model to correctly associate features like building facades with their corresponding rooftops in the aerial view.
    Feature Matching and Pose Estimation: Once both the ground and aerial views are represented as 2D point planes with rich feature descriptors, the model computes the similarity between them. It then samples a sparse set of the most confident matches and uses a classic geometric algorithm called Procrustes alignment to calculate the precise 3-DoF pose (2D position and orientation).
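    The final alignment step above can be sketched in a few lines. The helper below is an illustrative weighted 2-D Procrustes (Kabsch) solver, not the authors’ implementation: given matched BEV points from the ground view and their counterparts on the aerial map (with optional match-confidence weights), it recovers the least-squares rotation, translation, and heading.

```python
import numpy as np

def procrustes_align_2d(ground_pts, aerial_pts, weights=None):
    """Estimate the 3-DoF pose (rotation R, translation t, heading yaw)
    that best maps ground-view BEV points onto matched aerial-map points
    in the weighted least-squares sense (Kabsch/Procrustes)."""
    ground_pts = np.asarray(ground_pts, dtype=float)
    aerial_pts = np.asarray(aerial_pts, dtype=float)
    if weights is None:
        weights = np.ones(len(ground_pts))
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()

    # Weighted centroids of both point sets.
    mu_g = (w[:, None] * ground_pts).sum(axis=0)
    mu_a = (w[:, None] * aerial_pts).sum(axis=0)

    # Weighted cross-covariance of the centered sets.
    H = (ground_pts - mu_g).T @ (w[:, None] * (aerial_pts - mu_a))

    # SVD yields the optimal rotation; the sign term guards against
    # a reflection sneaking in for degenerate configurations.
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T

    t = mu_a - R @ mu_g                   # translation on the map
    yaw = np.arctan2(R[1, 0], R[0, 0])    # heading angle in radians
    return R, t, yaw
```

    With exact correspondences the solver recovers the pose exactly; in practice the weights would come from the model’s match confidences, so poor matches contribute little to the estimate.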

    Unprecedented Performance and Interpretability
    The results speak for themselves. On the challenging VIGOR dataset, which includes images from different cities in its cross-area test, FG2 reduced the mean localization error by 28% compared to the previous best method. It also demonstrated superior generalization capabilities on the KITTI dataset, a staple in autonomous driving research.
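    For context on the headline number: mean localization error is simply the average Euclidean distance between predicted and ground-truth positions, and the 28% figure is the relative reduction of that average versus the prior state of the art. A minimal sketch, with made-up error values used purely for illustration:

```python
import numpy as np

def mean_localization_error(pred_xy, gt_xy):
    """Average Euclidean distance between predicted and ground-truth
    planar positions (same units as the inputs, e.g. meters)."""
    pred_xy, gt_xy = np.asarray(pred_xy), np.asarray(gt_xy)
    return float(np.linalg.norm(pred_xy - gt_xy, axis=1).mean())

def relative_reduction(new_err, old_err):
    """Fractional improvement of new_err over old_err."""
    return (old_err - new_err) / old_err

# Hypothetical numbers: a prior method averaging 5.0 m of error would
# need to reach 3.6 m for a 28% reduction.
assert abs(relative_reduction(3.6, 5.0) - 0.28) < 1e-9
```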

    Perhaps more importantly, the FG2 model offers a new level of transparency. By visualizing the matched points, the researchers showed that the model learns semantically consistent correspondences without being explicitly told to. For example, the system correctly matches zebra crossings, road markings, and even building facades in the ground view to their corresponding locations on the aerial map. This interpretability is extremely valuable for building trust in safety-critical autonomous systems.
    “A Clearer Path” for Autonomous Navigation
    The FG2 method represents a significant leap forward in fine-grained visual localization. By developing a model that intelligently selects and matches features in a way that mirrors human intuition, the EPFL researchers have not only shattered previous accuracy records but also made the decision-making process of the AI more interpretable. This work paves the way for more robust and reliable navigation systems for autonomous vehicles, drones, and robots, bringing us one step closer to a future where machines can confidently navigate our world, even when GPS fails them.

    Check out the Paper. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 100k+ ML SubReddit and subscribe to our Newsletter.
    Jean-marc Mommessin is a successful AI business executive. He leads and accelerates growth for AI-powered solutions and started a computer vision company in 2006. He is a recognized speaker at AI conferences and has an MBA from Stanford.
    #epfl #researchers #unveil #fg2 #cvpr
    EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments
    Navigating the dense urban canyons of cities like San Francisco or New York can be a nightmare for GPS systems. The towering skyscrapers block and reflect satellite signals, leading to location errors of tens of meters. For you and me, that might mean a missed turn. But for an autonomous vehicle or a delivery robot, that level of imprecision is the difference between a successful mission and a costly failure. These machines require pinpoint accuracy to operate safely and efficiently. Addressing this critical challenge, researchers from the École Polytechnique Fédérale de Lausannein Switzerland have introduced a groundbreaking new method for visual localization during CVPR 2025 Their new paper, “FG2: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching,” presents a novel AI model that significantly enhances the ability of a ground-level system, like an autonomous car, to determine its exact position and orientation using only a camera and a corresponding aerialimage. The new approach has demonstrated a remarkable 28% reduction in mean localization error compared to the previous state-of-the-art on a challenging public dataset. Key Takeaways: Superior Accuracy: The FG2 model reduces the average localization error by a significant 28% on the VIGOR cross-area test set, a challenging benchmark for this task. Human-like Intuition: Instead of relying on abstract descriptors, the model mimics human reasoning by matching fine-grained, semantically consistent features—like curbs, crosswalks, and buildings—between a ground-level photo and an aerial map. Enhanced Interpretability: The method allows researchers to “see” what the AI is “thinking” by visualizing exactly which features in the ground and aerial images are being matched, a major step forward from previous “black box” models. Weakly Supervised Learning: Remarkably, the model learns these complex and consistent feature matches without any direct labels for correspondences. 
    WWW.MARKTECHPOST.COM
    EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments
    Navigating the dense urban canyons of cities like San Francisco or New York can be a nightmare for GPS systems. The towering skyscrapers block and reflect satellite signals, leading to location errors of tens of meters. For you and me, that might mean a missed turn. But for an autonomous vehicle or a delivery robot, that level of imprecision is the difference between a successful mission and a costly failure. These machines require pinpoint accuracy to operate safely and efficiently.

    Addressing this critical challenge, researchers from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland have introduced a groundbreaking new method for visual localization at CVPR 2025. Their paper, “FG2: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching,” presents a novel AI model that significantly enhances the ability of a ground-level system, such as an autonomous car, to determine its exact position and orientation using only a camera and a corresponding aerial (or satellite) image. The new approach demonstrated a remarkable 28% reduction in mean localization error compared to the previous state of the art on a challenging public dataset.

    Key Takeaways:
    Superior Accuracy: The FG2 model reduces the average localization error by a significant 28% on the VIGOR cross-area test set, a challenging benchmark for this task.
    Human-like Intuition: Instead of relying on abstract descriptors, the model mimics human reasoning by matching fine-grained, semantically consistent features—like curbs, crosswalks, and buildings—between a ground-level photo and an aerial map.
    Enhanced Interpretability: The method allows researchers to “see” what the AI is “thinking” by visualizing exactly which features in the ground and aerial images are being matched, a major step forward from previous “black box” models.
    Weakly Supervised Learning: Remarkably, the model learns these complex and consistent feature matches without any direct labels for correspondences.
    It achieves this using only the final camera pose as a supervisory signal.

    Challenge: Seeing the World from Two Different Angles
    The core problem of cross-view localization is the dramatic difference in perspective between a street-level camera and an overhead satellite view. A building facade seen from the ground looks completely different from its rooftop signature in an aerial image. Existing methods have struggled with this. Some create a general “descriptor” for the entire scene, but this is an abstract approach that doesn’t mirror how humans naturally localize themselves by spotting specific landmarks. Other methods transform the ground image into a Bird’s-Eye-View (BEV) but are often limited to the ground plane, ignoring crucial vertical structures like buildings.

    FG2: Matching Fine-Grained Features
    The EPFL team’s FG2 method introduces a more intuitive and effective process. It aligns two sets of points: one generated from the ground-level image and another sampled from the aerial map. Here’s a breakdown of their pipeline:

    Mapping to 3D: The process begins by taking the features from the ground-level image and lifting them into a 3D point cloud centered around the camera. This creates a 3D representation of the immediate environment.

    Smart Pooling to BEV: This is where the magic happens. Instead of simply flattening the 3D data, the model learns to intelligently select the most important features along the vertical (height) dimension for each point. It essentially asks, “For this spot on the map, is the ground-level road marking more important, or is the edge of that building’s roof the better landmark?” This selection process is crucial, as it allows the model to correctly associate features like building facades with their corresponding rooftops in the aerial view.
    Feature Matching and Pose Estimation: Once both the ground and aerial views are represented as 2D point planes with rich feature descriptors, the model computes the similarity between them. It then samples a sparse set of the most confident matches and uses a classic geometric algorithm called Procrustes alignment to calculate the precise 3-DoF (x, y, and yaw) pose.

    Unprecedented Performance and Interpretability
    The results speak for themselves. On the challenging VIGOR dataset, which includes images from different cities in its cross-area test, FG2 reduced the mean localization error by 28% compared to the previous best method. It also demonstrated superior generalization capabilities on the KITTI dataset, a staple in autonomous driving research. Perhaps more importantly, the FG2 model offers a new level of transparency. By visualizing the matched points, the researchers showed that the model learns semantically consistent correspondences without being explicitly told to. For example, the system correctly matches zebra crossings, road markings, and even building facades in the ground view to their corresponding locations on the aerial map. This interpretability is extremely valuable for building trust in safety-critical autonomous systems.

    “A Clearer Path” for Autonomous Navigation
    The FG2 method represents a significant leap forward in fine-grained visual localization. By developing a model that intelligently selects and matches features in a way that mirrors human intuition, the EPFL researchers have not only shattered previous accuracy records but also made the decision-making process of the AI more interpretable. This work paves the way for more robust and reliable navigation systems for autonomous vehicles, drones, and robots, bringing us one step closer to a future where machines can confidently navigate our world, even when GPS fails them.

    Check out the Paper. All credit for this research goes to the researchers of this project.
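    As a technical aside, the Procrustes step at the end of the pipeline has a classical closed-form solution. The sketch below is a generic, unweighted 2-D rigid alignment (Kabsch/Procrustes) in NumPy, not the authors’ implementation — FG2 additionally weights the sampled matches by confidence, which is omitted here, and the function name `procrustes_2d` is ours:

```python
import numpy as np

def procrustes_2d(ground_pts, aerial_pts):
    """Least-squares rigid alignment (Kabsch/Procrustes) in 2-D:
    find rotation R and translation t minimizing sum ||R @ g_i + t - a_i||^2."""
    g = np.asarray(ground_pts, dtype=float)
    a = np.asarray(aerial_pts, dtype=float)
    mu_g, mu_a = g.mean(axis=0), a.mean(axis=0)   # centroids of each point set
    H = (g - mu_g).T @ (a - mu_a)                 # 2x2 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_a - R @ mu_g
    yaw = np.arctan2(R[1, 0], R[0, 0])            # heading angle from R
    return R, t, yaw
```

    Given matched BEV and aerial points, the SVD of the 2x2 cross-covariance yields the yaw directly, and the translation follows from the centroids, which is why this step is cheap compared to the learned feature extraction.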
    Jean-marc Mommessin is a successful AI business executive. He leads and accelerates growth for AI-powered solutions and started a computer vision company in 2006. He is a recognized speaker at AI conferences and has an MBA from Stanford.
  • Nier: Automata creators deny characters "were problematic overseas" and blame mistranslated subtitle for censorship rumours

    Nier miss.

    Image credit: Square Enix

    News

    by Vikki Blake
    Contributor

    Published on June 14, 2025

    Nier: Automata producer Yosuke Saito and director Yoko Taro have denied that any of their character designs were restricted for Western audiences.
    As spotted by Automaton, the developers were compelled to comment after a mistranslated Japanese-to-English subtitle intimated that Nier: Automata had been subjected to censorship by Square Enix to meet global standards.

    GODDESS OF VICTORY: NIKKE | Producers' Creative Dialogue Special Livestream. Watch on YouTube
    In the interview above (skip to 28:12 for the segment concerned), Sony executive Shuhei Yoshida asked the developers about their design process.
    "Our concept is always to do something that's 'not like anything else'. What I mean is, if Nier: Replicant had a boy as the main character, Nier: Automata would have a girl protagonist. If Western sci-fi is filled with Marine-like soldiers, we might go in the opposite direction and use Gothic Lolita outfits, for example," Taro said. "We tend to take the contrarian route."
    "There are, of course, certain things that are ethically or morally inappropriate – even if they're just aspects of a character," Saito added, according to the subtitles. "We try to draw a line by establishing rules about what’s acceptable and what’s not.
    "While certain things might be acceptable in Japan, they could become problematic in certain overseas regions, and even characters could become problematic as well. These are the kind of situation[s] we usually try to avoid creating. As a result, there are actually countries where we couldn't officially release Nier: Automata."
    This immediately caused consternation among fans, but as Automaton points out, the phrasing "could be a little tricky to translate, even for an advanced Japanese speaker".
    When asked directly about the claim, Taro denied it, saying on X/Twitter: "I've never heard of such a thing happening". Saito simply said he thought the things he'd mentioned had been mistranslated, and would clarify this in a future livestream.
    In the same interview, former PlayStation exec Shuhei Yoshida called Nier: Automata the "game that changed everything", crediting it with reviving the Japanese games industry on its release. In a recent interview, Yoshida discussed how, during the PS3 era, sales of Japanese games had declined and studios there were increasingly chasing "overseas tastes".
    That changed with NieR: Automata in 2017, released for the PS4. "I think Yoko Taro created it without paying any mind at all to making it sell overseas, but it was a tremendous success," Yoshida said.
    WWW.EUROGAMER.NET
  • How jam jars explain Apple’s success

    We are told to customize, expand, and provide more options, but that might be a silent killer for our conversion rate. Using behavioral psychology and modern product design, this piece explains why brands like Apple use fewer, smarter choices to convert better.

    Image generated using ChatGPT

    Jam-packed decisions
    Imagine standing in a supermarket aisle in front of the jam section. How do you decide which jam to buy? You could go for your usual jam, or maybe this is your first time buying jam. Either way, a choice has to be made. Or does it? You may have seen the vast number of choices, gotten overwhelmed, and walked away. The same scenario was reflected in the findings of a 2000 study by Iyengar and Lepper that explored how the number of choice options can affect decision-making.

    Iyengar and Lepper set up two scenarios: in the first, customers in a random supermarket were offered 24 jams for a free tasting; in the second, they were offered only 6. One would expect the first scenario to see more sales. After all, more variety means a happier customer. However:

    Image created using Canva

    While 60% of customers stopped by for a tasting, only 3% of them ended up making a purchase. On the other hand, when faced with 6 options, 40% of customers stopped by, but 30% of this number ended up making a purchase.

    The implications of the study were evident. While one may think that more choices are better, when faced with them, decision-makers prefer fewer. This phenomenon is known as the Paradox of Choice: more choice leads to less satisfaction because one gets overwhelmed. This analysis paralysis results from humans being cognitive misers; that is, decisions that require deeper thinking feel exhausting and come at a cognitive cost. In such scenarios, we tend not to make a choice, or we choose a default option.
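    A quick back-of-the-envelope check makes the gap concrete. Assuming the quoted percentages compound (the share who stop times the share of stoppers who buy), the effective purchase rate per passer-by works out as follows; this is our own illustration, not a figure from the original study:

```python
# Effective purchase rate per passer-by, assuming the quoted
# percentages compound (stop rate x buy rate among stoppers).
def effective_conversion(stop_rate, buy_rate_among_stoppers):
    return stop_rate * buy_rate_among_stoppers

wide = effective_conversion(0.60, 0.03)    # 24-jam display
narrow = effective_conversion(0.40, 0.30)  # 6-jam display
print(f"24 jams: {wide:.1%} of passers-by buy")    # 1.8%
print(f"6 jams:  {narrow:.1%} of passers-by buy")  # 12.0%
print(f"ratio:   {narrow / wide:.1f}x")            # ~6.7x
```

    Under that reading, the smaller display converts roughly 6.7 times better per passer-by, which is the gap the rest of the article builds on.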
    Even after a decision has been made, in many cases, regret or the thought of whether you have made the ‘right’ choice can linger.

    A sticky situation
    However, a 2010 meta-analysis by Benjamin Scheibehenne was unable to replicate the findings. Scheibehenne questioned whether it was choice overload or information overload that was the issue. Other researchers have argued that it is the lack of meaningful choice that affects satisfaction. Additionally, Barry Schwartz, a renowned psychologist and the author of the book ‘The Paradox of Choice: Why More Is Less,’ later suggested that the paradox of choice diminishes when a person knows the options well and when the choices are presented well. Does that mean the paradox of choice was an overhyped notion? I conducted a mini-study to test this hypothesis.

    From shelves to spreadsheets: testing the jam jar theory
    I created a simple scatterplot in R using a publicly available dataset from the Brazilian e-commerce site Olist, Brazil’s largest department store on marketplaces. After delivery, customers are asked to fill out a satisfaction survey with a rating or comment option. I analysed the relationship between the number of distinct products in a category and the average customer review.

    Scatterplot generated in R using the Olist dataset

    The almost horizontal regression line in the plot above suggests that more choice does not lead to more satisfaction. Furthermore, categories with fewer than 200 products tend to have average review scores between 4.0 and 4.3, whereas categories with more than 1,000 products do not have a higher average satisfaction score, with some even falling below 4.0. This suggests that more choices do not equal more satisfaction and could even reduce satisfaction levels. These findings support the Paradox of Choice, and the dataset helps bring the theory into real-world commerce.
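    The original analysis was done in R on the full Olist export. As a rough illustration of the same aggregation, here is a Python/pandas sketch using a few made-up rows in place of the real tables (the actual Olist column names differ, e.g. `product_category_name`):

```python
import pandas as pd

# Toy rows standing in for the joined Olist order-items + reviews tables.
df = pd.DataFrame({
    "category":     ["toys", "toys", "toys", "decor", "decor", "pcs"],
    "product_id":   ["p1",   "p2",   "p3",   "p4",    "p4",    "p5"],
    "review_score": [4,      5,      4,      3,       4,       5],
})

# One row per category: assortment size vs. average satisfaction.
per_category = (
    df.groupby("category")
      .agg(n_products=("product_id", "nunique"),   # distinct products offered
           avg_review=("review_score", "mean"))    # satisfaction proxy
      .reset_index()
)
print(per_category)
```

    Plotting `avg_review` against `n_products` on the real data and fitting a regression line is what produces the near-flat relationship described above.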
    A curation of fewer, well-presented, and differentiated options could lead to more customer satisfaction.

    Image created using Canva

    Furthermore, the plot suggests a more nuanced perspective: people want more choices, as this gives them autonomy; however, beyond a certain point, excessive choice overwhelms rather than empowers, leaving people dissatisfied. Many product strategies reflect this insight: the goal is to inspire confident decision-making rather than limiting freedom. A powerful example of this shift in thinking comes from Apple’s history.

    Simple tastes, sweeter decisions
    Image source: Apple Insider

    It was 1997, and Steve Jobs had just made his return to Apple. The company at the time offered 40 different products; however, its sales were declining. Jobs made one question the company’s mantra: “What are the four products we should be building?” The following year, Apple returned to profitability after introducing the iMac G3. While its success can be attributed to the introduction of a new product line and increased efficiency, one cannot deny that the reduction in the product line simplified the decision-making process for its consumers. To this day, Apple continues to implement this strategy with a few SKUs and confident defaults. Apple does not just sell premium products; it sells a premium decision-making experience by reducing friction for the consumer.

    Furthermore, a 2015 study analyzing scenarios where fewer choice options led to increased sales found the following mitigating factors in buying choices:
    Time Pressure: Easier and quicker choices led to more sales.
    Complexity of Options: The easier it was to understand what a product was, the better the outcome.
    Clarity of Preference: How easy it was to compare alternatives, and the clarity of one’s preferences.
    Motivation to Optimize: Whether the consumer wanted to put in the effort to find the ‘best’ option.

    Picking the right spread
    While the extent of the validity of the Paradox of Choice is up for debate, its impact cannot be denied. It is still a helpful model that can be used to drive sales and boost customer satisfaction. So, how can you use it as part of your business’s strategy? Remember, what people want isn’t 50 good choices. They want one confident, easy-to-understand decision that they think they will not regret.

    Here are some common mistakes that confuse consumers, and how you can apply the Jam Jar strategy to curate choices instead:

    Image created using Canva

    1. Too many choices lead to decision fatigue. Offering many SKU options usually causes customers to get overwhelmed. Instead, try curating 2–3 strong options that will cover the majority of their needs.
    2. Depending on users to use filters and specifications. When users have to compare specifications themselves, they usually end up doing nothing. Instead, replace filters with clear labels like “Best for beginners” or “Best for oily skin.”
    3. Leaving users to make comparisons by themselves. Too many options can make users overwhelmed. Instead, offer default options to show what you recommend. This instills a sense of confidence when making the final decision.
    4. More transparency does not always mean more trust. Information overload rarely leads to conversions. Instead, create a thoughtful flow that guides users to the right choices.
    5. Users do not aim for optimization. Assuming that users will weigh every detail before making a decision is not rooted in reality. In most cases, they will go with their gut. Instead, highlight emotional outcomes, benefits, and uses instead of numbers.
    6. Not onboarding users is a critical mistake. Hoping that users will easily navigate a sea of products without guidance is unrealistic. Instead, use onboarding tools like starter kits, quizzes, or bundles that act as starting points.
    7. Variety for the sake of variety. Users crave clarity more than they crave variety.
    Instead, focus on simplicity when it comes to differentiation.

    And lastly, remember that while the paradox of choice is a helpful tool in your business strategy arsenal, more choice is not inherently bad. It is the lack of structure in the decision-making process that is the problem. Clear framing will always make decision-making a seamless experience for both your consumers and your business.

    How jam jars explain Apple’s success was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
    UXDESIGN.CC
    How jam jars explain Apple’s success
    We are told to customize, expand, and provide more options, but that might be a silent killer for our conversion rate. Using behavioral psychology and modern product design, this piece explains why brands like Apple use fewer, smarter choices to convert better.

Image generated using ChatGPT

Jam-packed decisions

Imagine standing in a supermarket aisle in front of the jam section. How do you decide which jam to buy? You could go for your usual jam, or maybe this is your first time buying jam. Either way, a choice has to be made. Or does it? You may have seen the vast number of choices, gotten overwhelmed, and walked away.

The same scenario was reflected in the findings of a 2000 study by Iyengar and Lepper that explored how the number of options can affect decision-making. Iyengar and Lepper set up two scenarios: in the first, customers in a supermarket were offered 24 jams for a free tasting; in the second, only 6. One might expect the first scenario to produce more sales. After all, more variety means a happier customer. However:

Image created using Canva

While 60% of customers stopped by for a tasting, only 3% ended up making a purchase. When faced with 6 options, 40% of customers stopped by, but 30% of them ended up making a purchase.

The implications of the study were evident: while one may think that more choices are better, decision-makers actually prefer fewer. This phenomenon is known as the Paradox of Choice: more choice leads to less satisfaction, because one gets overwhelmed. This analysis paralysis results from humans being cognitive misers: decisions that require deeper thinking feel exhausting and come at a cognitive cost. In such scenarios, we tend not to make a choice at all, or to choose a default option. Even after a decision has been made, regret, or the thought of whether you made the ‘right’ choice, can linger.

A sticky situation

However, a 2010 meta-analysis by Benjamin Scheibehenne was unable to replicate the findings. Scheibehenne questioned whether the real issue was choice overload or information overload. Other researchers have argued that it is the lack of meaningful choice that affects satisfaction. Additionally, Barry Schwartz, the renowned psychologist and author of ‘The Paradox of Choice: Why Less Is More,’ later suggested that the paradox diminishes when people know the options well and when the choices are presented well.

Does that mean the paradox of choice was an overhyped notion? I conducted a mini-study to test this hypothesis.

From shelves to spreadsheets: testing the jam jar theory

I created a simple scatterplot in R using a publicly available dataset from the Brazilian e-commerce site Olist, Brazil’s largest department store on marketplaces. After delivery, customers are asked to fill out a satisfaction survey with a rating or comment option. I analysed the relationship between the number of distinct products in a category (choices) and the average customer review (satisfaction).

Scatterplot generated in R using the Olist dataset

Based on the almost horizontal regression line in the plot, more choice evidently does not lead to more satisfaction. Categories with fewer than 200 products tend to have average review scores between 4.0 and 4.3, whereas categories with more than 1,000 products do not score higher, with some even falling below 4.0. This suggests that more choice does not equal more satisfaction, and may even reduce it. These findings support the Paradox of Choice, and the dataset helps bring the theory into real-world commerce.
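The analysis above was done in R; as a rough sketch of the same aggregation in Python, using made-up column layout and toy rows rather than the real Olist CSV files (which must first be joined across orders, items, and reviews), the two axes of the scatterplot can be computed like this:

```python
from collections import defaultdict

def choice_vs_satisfaction(rows):
    """Group (category, product_id, review_score) rows by category,
    returning {category: (distinct_product_count, average_review)}.
    The two values mirror the scatterplot's axes: choices vs. satisfaction."""
    products = defaultdict(set)   # category -> distinct product ids seen
    scores = defaultdict(list)    # category -> all review scores
    for category, product_id, review_score in rows:
        products[category].add(product_id)
        scores[category].append(review_score)
    return {
        c: (len(products[c]), sum(scores[c]) / len(scores[c]))
        for c in products
    }

# Toy sample standing in for the joined Olist order/review tables.
sample = [
    ("toys", "t1", 5), ("toys", "t2", 4), ("toys", "t1", 3),
    ("decor", "d1", 4), ("decor", "d2", 5),
]
print(choice_vs_satisfaction(sample))
# {'toys': (2, 4.0), 'decor': (2, 4.5)}
```

Plotting `distinct_product_count` against `average_review` per category, and fitting a regression line, reproduces the shape of the chart described above.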
A curated set of fewer, well-presented, and differentiated options could lead to more customer satisfaction.

Image created using Canva

Furthermore, the plot suggests a more nuanced perspective: people want choices, because choice gives them autonomy. Beyond a certain point, however, excessive choice overwhelms rather than empowers, leaving people dissatisfied. Many product strategies reflect this insight: the goal is to inspire confident decision-making rather than to limit freedom. A powerful example of this shift in thinking comes from Apple’s history.

Simple tastes, sweeter decisions

Image source: Apple Insider

It was 1997, and Steve Jobs had just made his return to Apple. The company offered 40 different products at the time, yet its sales were declining. Jobs made a single question the company’s mantra: “What are the four products we should be building?” The following year, Apple returned to profitability after introducing the iMac G3. While its success can be attributed to the new product line and increased efficiency, one cannot deny that the smaller product line also simplified decision-making for consumers. To this day, Apple continues this strategy with a few SKUs and confident defaults. Apple does not just sell premium products; it sells a premium decision-making experience, by reducing friction for the consumer.

Furthermore, a 2015 study analyzing scenarios where fewer options led to increased sales found the following mitigating factors in buying choices:

1. Time pressure: easier and quicker choices led to more sales.
2. Complexity of options: the easier it was to understand what a product was, the better the outcome.
3. Clarity of preference: how easy it was to compare alternatives, and how clear one’s preferences were.
4. Motivation to optimize: whether the consumer wanted to put in the effort to find the ‘best’ option.

Picking the right spread

While the extent of the Paradox of Choice’s validity is up for debate, its impact cannot be denied. It remains a helpful model for driving sales and boosting customer satisfaction. So, how can you use it as part of your business’s strategy? Remember: what people want isn’t 50 good choices. They want one confident, easy-to-understand decision they think they will not regret.

Here are some common mistakes that confuse consumers, and how you can apply the Jam Jar strategy to curate choices instead:

Image created using Canva

1. Too many choices lead to decision fatigue. Offering many SKUs usually overwhelms customers. Instead, curate 2–3 strong options that cover the majority of their needs.
2. Depending on users to use filters and specifications. When users have to compare specifications themselves, they usually end up doing nothing. Instead, replace filters with clear labels like “Best for beginners” or “Best for oily skin.”
3. Leaving users to make comparisons by themselves. Too many options overwhelm users. Instead, offer defaults that show what you recommend; this gives them confidence in the final decision.
4. Assuming more transparency means more trust. Information overload rarely leads to conversions. Instead, create a thoughtful flow that guides users to the right choices.
5. Assuming users aim to optimize. Expecting users to weigh every detail before deciding is not rooted in reality; in most cases, they go with their gut. Instead, highlight emotional outcomes, benefits, and uses rather than numbers.
6. Not onboarding users. Hoping users will navigate a sea of products without guidance is unrealistic. Instead, use onboarding tools like starter kits, quizzes, or bundles that act as starting points.
7. Variety for the sake of variety. Users crave clarity more than they crave variety. Instead, focus on simplicity when it comes to differentiation.

And lastly, remember that while the paradox of choice is a helpful tool in your business-strategy arsenal, more choice is not inherently bad. The problem is a lack of structure in the decision-making process. Clear framing will always make decision-making a seamless experience for both your consumers and your business.

How jam jars explain Apple’s success was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • Harassment by Ubisoft executives left female staff terrified, French court hears

    Three former executives at the French video game company Ubisoft used their position to bully or sexually harass staff, leaving women terrified and feeling like pieces of meat, a French court has heard.

The state prosecutor Antoine Haushalter said the trial of three senior game creators for alleged bullying, sexual harassment and, in one case, attempted sexual assault was a “turning point” for the gaming world. It is the first big trial to result from the #MeToo movement in the video games industry, and Haushalter said the case had revealed “overwhelming” evidence of harassment.

In four days of hearings, female former staff members variously described being tied to a chair, forced to do handstands, subjected to constant comments about sex and their bodies, and having to endure sexist and homophobic jokes and drawings of penises stuck to computers. They described a manager who farted in workers’ faces, scribbled on women with marker pens, gave unsolicited shoulder massages and played pornographic films in an open-plan office, and another executive who cracked a whip near people’s heads. The three men deny all charges.

Haushalter said “the world of video games and its subculture” had an element of “systemic” sexism and potential abuse. He said the #MeToo movement in the gaming industry had allowed people to speak out. “It’s not that these actions were not punished by the law before. It’s just that they were silenced, and from now on they will not be silenced,” he said.

Ubisoft is a French family business that rose to become one of the biggest video game creators in the world. It has been behind several blockbusters including Assassin’s Creed, Far Cry and the children’s favourite Just Dance.

The court in Bobigny, in Seine-Saint-Denis, heard that between 2010 and 2020, at Ubisoft’s offices in Montreuil, east of Paris, the three executives created an atmosphere of bullying and sexism that one member of staff likened to a “boys’ club”. One alleged victim told the court: “The sexual remarks and sexual jokes were almost daily.”

Tommy François, 52, a former vice-president of editorial and creative services, is accused of sexual harassment, bullying and attempted sexual assault. He was alleged once to have tied a female member of staff to a chair with tape, pushed the chair into a lift and pressed a button at random. He was also accused of forcing one woman wearing a skirt to do handstands. “He was my superior and I was afraid of him. He made me do handstands. I did it to get it over with and get rid of him,” one woman told the court.

At a 2015 office Christmas party with a Back to the Future theme, François allegedly told a member of staff that he liked her 1950s dress. He then allegedly stepped towards her to kiss her on the mouth as his colleagues restrained her by the arms and back. She shouted and broke free. François denied all allegations.

Another witness told the court that during a video games fair in the US, François “grabbed me by the hair and kissed me by force”. She said no one reacted, and that when she reported it to her human resources manager she was told “don’t make a big thing of it”. The woman said that later, in a key meeting, another unnamed senior figure told staff he had seen her “snogging” François, “even though he knew it had been an assault”. She said François called her into his office to show her pictures of his naked backside on his computers and on a phone. “Once he drew a penis on my arm when I was in a video call with top management,” she said. The woman said these incidents made her feel “stupefied, humiliated and professionally discredited”.

François told the court he denied all charges. He said there had been a “culture of joking around”. He said: “I never tried to harm anyone.”

Serge Hascoët told the court: ‘I have never wanted to harass anyone and I don’t think I have.’ Photograph: Xavier Galiana/AFP/Getty Images

Serge Hascoët, 59, Ubisoft’s former chief creative officer and second-in-command, was accused of bullying and sexual harassment. The court heard how, at a staff away day, he complained about a senior female employee, saying she clearly did not have enough sex and that he would “show how to calm her” by having sex with her in a meeting room in front of everyone. He was alleged to have handed a young female member of staff a tissue in which he had blown his nose, saying: “You can resell it, it’s worth gold at Ubisoft.” The court heard he made guttural noises in the office and talked about sex. Hascoët was also alleged to have bullied assistants by making them carry out personal tasks for him, such as going to his home to wait for parcel deliveries. Hascoët denied all the charges. He said: “I have never wanted to harass anyone and I don’t think I have.”

The former game director Guillaume Patrux, 41, is accused of sexual harassment and bullying. He was alleged to have punched walls, mimed hitting staff, cracked a whip near colleagues’ faces, threatened to carry out an office shooting and played with a cigarette lighter near workers’ faces, setting alight a man’s beard. He denied the charges.

The panel of judges retired to consider their verdict, which will be handed down at a later date.
    WWW.THEGUARDIAN.COM
  • The Orb Will See You Now

    Once again, Sam Altman wants to show you the future. The CEO of OpenAI is standing on a sparse stage in San Francisco, preparing to reveal his next move to an attentive crowd. “We needed some way for identifying, authenticating humans in the age of AGI,” Altman explains, referring to artificial general intelligence. “We wanted a way to make sure that humans stayed special and central.”

The solution Altman came up with is looming behind him. It’s a white sphere about the size of a beach ball, with a camera at its center. The company that makes it, known as Tools for Humanity, calls this mysterious device the Orb. Stare into the heart of the plastic-and-silicon globe and it will map the unique furrows and ciliary zones of your iris. Seconds later, you’ll receive inviolable proof of your humanity: a 12,800-digit binary number, known as an iris code, sent to an app on your phone. At the same time, a packet of cryptocurrency called Worldcoin will be transferred to your digital wallet: your reward for becoming a “verified human.”

Altman co-founded Tools for Humanity in 2019 as part of a suite of companies he believed would reshape the world. Once the tech he was developing at OpenAI passed a certain level of intelligence, he reasoned, it would mark the end of one era on the Internet and the beginning of another, in which AI became so advanced, so human-like, that you would no longer be able to tell whether what you read, saw, or heard online came from a real person. When that happened, Altman imagined, we would need a new kind of online infrastructure: a human-verification layer for the Internet, to distinguish real people from the proliferating number of bots and AI “agents.”

And so Tools for Humanity set out to build a global “proof-of-humanity” network. It aims to verify 50 million people by the end of 2025; ultimately its goal is to sign up every single human being on the planet.
The free crypto serves both as an incentive for users to sign up and as an entry point into what the company hopes will become the world’s largest financial network, through which it believes “double-digit percentages of the global economy” will eventually flow. Even for Altman, these missions are audacious. “If this really works, it’s like a fundamental piece of infrastructure for the world,” Altman tells TIME in a video interview from the passenger seat of a car a few days before his April 30 keynote address.

Internal hardware of the Orb in mid-assembly in March. Davide Monteleone for TIME

The project’s goal is to solve a problem partly of Altman’s own making. In the near future, he and other tech leaders say, advanced AIs will be imbued with agency: the ability to not just respond to human prompting, but to take actions independently in the world. This will enable the creation of AI coworkers that can drop into your company and begin solving problems; AI tutors that can adapt their teaching style to students’ preferences; even AI doctors that can diagnose routine cases and handle scheduling or logistics. The arrival of these virtual agents, their venture capitalist backers predict, will turbocharge our productivity and unleash an age of material abundance.

But AI agents will also have cascading consequences for the human experience online. “As AI systems become harder to distinguish from people, websites may face difficult trade-offs,” says a recent paper by researchers from 25 different universities, nonprofits, and tech companies, including OpenAI. “There is a significant risk that digital institutions will be unprepared for a time when AI-powered agents, including those leveraged by malicious actors, overwhelm other activity online.” On social-media platforms like X and Facebook, bot-driven accounts are amassing billions of views on AI-generated content.
In April, the foundation that runs Wikipedia disclosed that AI bots scraping the site were making the encyclopedia too costly to sustainably run. Later the same month, researchers from the University of Zurich found that AI-generated comments on the subreddit /r/ChangeMyView were up to six times more successful than human-written ones at persuading unknowing users to change their minds.

Photograph by Davide Monteleone for TIME

The arrival of agents won’t only threaten our ability to distinguish between authentic and AI content online. It will also challenge the Internet’s core business model, online advertising, which relies on the assumption that ads are being viewed by humans. “The Internet will change very drastically sometime in the next 12 to 24 months,” says Tools for Humanity CEO Alex Blania. “So we have to succeed, or I’m not sure what else would happen.”

For four years, Blania’s team has been testing the Orb’s hardware abroad. Now the U.S. rollout has arrived. Over the next 12 months, 7,500 Orbs will be arriving in dozens of American cities, in locations like gas stations, bodegas, and flagship stores in Los Angeles, Austin, and Miami. The project’s founders and fans hope the Orb’s U.S. debut will kickstart a new phase of growth. The San Francisco keynote was titled “At Last.”

It’s not clear the public appetite matches the exultant branding. Tools for Humanity has “verified” just 12 million humans since mid-2023, a pace Blania concedes is well behind schedule. Few online platforms currently support the so-called World ID that the Orb bestows upon its visitors, leaving little to entice users to give up their biometrics beyond the lure of free crypto. Even Altman isn’t sure whether the whole thing can work. “I can see this becomes a fairly mainstream thing in a few years,” he says.
“Or I can see that it’s still only used by a small subset of people who think about the world in a certain way.”

Blania and Altman debut the Orb at World’s U.S. launch in San Francisco on April 30, 2025. Jason Henry—The New York Times/Redux

Yet as the Internet becomes overrun with AI, the creators of this strange new piece of hardware are betting that everybody in the world will soon want, or need, to visit an Orb. The biometric code it creates, they predict, will become a new type of digital passport, without which you might be denied passage to the Internet of the future, from dating apps to government services. In a best-case scenario, World ID could be a privacy-preserving way to fortify the Internet against an AI-driven deluge of fake or deceptive content. It could also enable the distribution of universal basic income, a policy Altman has previously touted, as AI automation transforms the global economy.

To examine what this new technology might mean, I reported from three continents, interviewed 10 Tools for Humanity executives and investors, reviewed hundreds of pages of company documents, and “verified” my own humanity.

The Internet will inevitably need some kind of proof-of-humanity system in the near future, says Divya Siddarth, founder of the nonprofit Collective Intelligence Project. The real question, she argues, is whether such a system will be centralized (“a big security nightmare that enables a lot of surveillance”) or privacy-preserving, as the Orb claims to be. Questions remain about Tools for Humanity’s corporate structure, its yoking to an unstable cryptocurrency, and the power it would concentrate in the hands of its owners if successful. Yet it’s also one of the only attempts to solve what many see as an increasingly urgent problem. “There are some issues with it,” Siddarth says of World ID. “But you can’t preserve the Internet in amber.
Something in this direction is necessary.” In March, I met Blania at Tools for Humanity’s San Francisco headquarters, where a large screen displays the number of weekly “Orb verifications” by country. A few days earlier, the CEO had attended a million-per-head dinner at Mar-a-Lago with President Donald Trump, whom he credits with clearing the way for the company’s U.S. launch by relaxing crypto regulations. “Given Sam is a very high profile target,” Blania says, “we just decided that we would let other companies fight that fight, and enter the U.S. once the air is clear.” As a kid growing up in Germany, Blania was a little different from his peers. “Other kids were, like, drinking a lot, or doing a lot of parties, and I was just building a lot of things that could potentially blow up,” he recalls. At the California Institute of Technology, where he was pursuing research for a master’s degree, he spent many evenings reading the blogs of startup gurus like Paul Graham and Altman. Then, in 2019, Blania received an email from Max Novendstern, an entrepreneur who had been kicking around a concept with Altman to build a global cryptocurrency network. They were looking for technical minds to help with the project. Over cappuccinos, Altman told Blania he was certain about three things. First, smarter-than-human AI was not only possible, but inevitable—and it would soon mean you could no longer assume that anything you read, saw, or heard on the Internet was human-created. Second, cryptocurrency and other decentralized technologies would be a massive force for change in the world. And third, scale was essential to any crypto network’s value. The Orb is tested on a calibration rig, surrounded by checkerboard targets to ensure precision in iris detection. Davide Monteleone for TIME. The goal of Worldcoin, as the project was initially called, was to combine those three insights. Altman took a lesson from PayPal, the company co-founded by his mentor Peter Thiel. 
Of its initial funding, PayPal spent less than million actually building its app—but pumped an additional million or so into a referral program, whereby new users and the person who invited them would each receive in credit. The referral program helped make PayPal a leading payment platform. Altman thought a version of that strategy would propel Worldcoin to similar heights. He wanted to create a new cryptocurrency and give it to users as a reward for signing up. The more people who joined the system, the higher the token’s value would theoretically rise. Since 2019, the project has raised million from investors like Coinbase and the venture capital firm Andreessen Horowitz. That money paid for the million cost of designing the Orb, plus maintaining the software it runs on. The total market value of all Worldcoins in existence, however, is far higher—around billion. That number is a bit misleading: most of those coins are not in circulation and Worldcoin’s price has fluctuated wildly. Still, it allows the company to reward users for signing up at no cost to itself. The main lure for investors is the crypto upside. Some 75% of all Worldcoins are set aside for humans to claim when they sign up, or as referral bonuses. The remaining 25% are split between Tools for Humanity’s backers and staff, including Blania and Altman. “I’m really excited to make a lot of money,” Blania says. From the beginning, Altman was thinking about the consequences of the AI revolution he intended to unleash. A future in which advanced AI could perform most tasks more effectively than humans would bring a wave of unemployment and economic dislocation, he reasoned. Some kind of wealth redistribution might be necessary. In 2016, he partially funded a study of basic income, which gave per-month handouts to low-income individuals in Illinois and Texas. But there was no single financial system that would allow money to be sent to everybody in the world. 
Nor was there a way to stop an individual human from claiming their share twice—or to identify a sophisticated AI pretending to be human and pocketing some cash of its own. In 2023, Tools for Humanity raised the possibility of using the network to redistribute the profits of AI labs that were able to automate human labor. “As AI advances,” it said, “fairly distributing access and some of the created value through UBI will play an increasingly vital role in counteracting the concentration of economic power.” Blania was taken by the pitch, and agreed to join the project as a co-founder. “Most people told us we were very stupid or crazy or insane, including Silicon Valley investors,” Blania says. At least until ChatGPT came out in 2022, transforming OpenAI into one of the world’s most famous tech companies and kickstarting a market bull-run. “Things suddenly started to make more and more sense to the external world,” Blania says of the vision to develop a global “proof-of-humanity” network. “You have to imagine a world in which you will have very smart and competent systems somehow flying through the Internet with different goals and ideas of what they want to do, and us having no idea anymore what we’re dealing with.” After our interview, Blania’s head of communications ushers me over to a circular wooden structure where eight Orbs face one another. The scene feels like a cross between an Apple Store and a ceremonial altar. “Do you want to get verified?” she asks. Putting aside my reservations for the purposes of research, I download the World App and follow its prompts. I flash a QR code at the Orb, then gaze into it. A minute or so later, my phone buzzes with confirmation: I’ve been issued my own personal World ID and some Worldcoin. The first thing the Orb does is check if you’re human, using a neural network that takes input from various sensors, including an infrared camera and a thermometer. 
Davide Monteleone for TIME. While I stared into the Orb, several complex procedures had taken place at once. A neural network took inputs from multiple sensors—an infrared camera, a thermometer—to confirm I was a living human. Simultaneously, a telephoto lens zoomed in on my iris, capturing the physical traits within that distinguish me from every other human on Earth. It then converted that image into an iris code: a numerical abstraction of my unique biometric data. Then the Orb checked to see if my iris code matched any it had seen before, using a technique allowing encrypted data to be compared without revealing the underlying information. Before the Orb deleted my data, it turned my iris code into several derivative codes—none of which on its own can be linked back to the original—encrypted them, deleted the only copies of the decryption keys, and sent each one to a different secure server, so that future users’ iris codes can be checked for uniqueness against mine. If I were to use my World ID to access a website, that site would learn nothing about me except that I’m human. The Orb is open-source, so outside experts can examine its code and verify the company’s privacy claims. “I did a colonoscopy on this company and these technologies before I agreed to join,” says Trevor Traina, a Trump donor and former U.S. ambassador to Austria who now serves as Tools for Humanity’s chief business officer. “It is the most privacy-preserving technology on the planet.” Only weeks later, when researching what would happen if I wanted to delete my data, do I discover that Tools for Humanity’s privacy claims rest on what feels like a sleight of hand. The company argues that in modifying your iris code, it has “effectively anonymized” your biometric data. If you ask Tools for Humanity to delete your iris codes, they will delete the one stored on your phone, but not the derivatives. Those, they argue, are no longer your personal data at all. 
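The derivative-code idea can be sketched in miniature. The toy below is an illustrative assumption, not Tools for Humanity’s actual protocol: plain keyed hashes stand in for the real system’s encrypted comparison, and the salts, function names, and three-server setup are invented for the sketch. What it shows is the core property the company relies on: a derivative code cannot be reversed into the biometric, yet the same iris always produces the same derivatives, so a repeat visitor is detected.

```python
import hashlib
import secrets

# Hypothetical per-server secrets (illustrative assumption; in a real
# deployment each server would hold its own key, never shared).
SERVER_SALTS = [secrets.token_bytes(16) for _ in range(3)]

def derive_codes(iris_code: bytes) -> list[str]:
    """Turn one iris code into several derivative codes.

    Each derivative is a keyed hash: it cannot be reversed into the
    iris code, but the same iris always yields the same derivatives,
    which is enough to test uniqueness.
    """
    return [hashlib.sha256(salt + iris_code).hexdigest() for salt in SERVER_SALTS]

# Each "server" stores only the derivative codes it has seen.
seen: list[set[str]] = [set(), set(), set()]

def enroll(iris_code: bytes) -> bool:
    """Return True if this iris is new; False if already enrolled."""
    codes = derive_codes(iris_code)
    if any(code in store for code, store in zip(codes, seen)):
        return False  # this iris was verified before
    for code, store in zip(codes, seen):
        store.add(code)
    return True

assert enroll(b"alice-iris") is True
assert enroll(b"bob-iris") is True
# A returning iris is still recognized, even if the user's own copy
# of the iris code was deleted -- the dispute described in the text.
assert enroll(b"alice-iris") is False
```

The sketch also makes the deletion dispute concrete: nothing in `seen` can be traced back to a person, yet as long as those sets exist, a returning eye is always recognized.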
But if I were to return to an Orb after deleting my data, it would still recognize those codes as uniquely mine. Once you look into the Orb, a piece of your identity remains in the system forever. If users could truly delete that data, the premise of one ID per human would collapse, Tools for Humanity’s chief privacy officer Damien Kieran tells me when I call seeking an explanation. People could delete and sign up for new World IDs after being suspended from a platform. Or claim their Worldcoin tokens, sell them, delete their data, and cash in again. This argument fell flat with European Union regulators in Germany, who recently declared that the Orb posed “fundamental data protection issues” and ordered the company to allow European users to fully delete even their anonymized data. “Just like any other technology service, users cannot delete data that is not personal data,” Kieran said in a statement. “If a person could delete anonymized data that can’t be linked to them by World or any third party, it would allow bad actors to circumvent the security and safety that World ID is working to bring to every human.” On a balmy afternoon this spring, I climb a flight of stairs up to a room above a restaurant in an outer suburb of Seoul. Five elderly South Koreans tap on their phones as they wait to be “verified” by the two Orbs in the center of the room. “We don’t really know how to distinguish between AI and humans anymore,” an attendant in a company t-shirt explains in Korean, gesturing toward the spheres. “We need a way to verify that we’re human and not AI. So how do we do that? Well, humans have irises, but AI doesn’t.” The attendant ushers an elderly woman over to an Orb. It bleeps. “Open your eyes,” a disembodied voice says in English. The woman stares into the camera. Seconds later, she checks her phone and sees that a packet of Worldcoin worth 75,000 Korean won has landed in her digital wallet. Congratulations, the app tells her. 
You are now a verified human. A visitor views the Orbs in Seoul on April 14, 2025. Taemin Ha for TIME. Tools for Humanity aims to “verify” 1 million Koreans over the next year. Taemin Ha for TIME. A couple dozen Orbs have been available in South Korea since 2023, verifying roughly 55,000 people. Now Tools for Humanity is redoubling its efforts there. At an event in a traditional wooden hanok house in central Seoul, an executive announces that 250 Orbs will soon be dispersed around the country—with the aim of verifying 1 million Koreans in the next 12 months. South Korea has high levels of smartphone usage, crypto and AI adoption, and Internet access, while average wages are modest enough for the free Worldcoin on offer to still be an enticing draw—all of which makes it fertile testing ground for the company’s ambitious global expansion. Yet things seem off to a slow start. In a retail space I visited in central Seoul, Tools for Humanity had constructed a wooden structure with eight Orbs facing each other. Locals and tourists wander past looking bemused; few volunteer themselves up. Most who do tell me they are crypto enthusiasts who came intentionally, driven more by the spirit of early adoption than the free coins. The next day, I visit a coffee shop in central Seoul where a chrome Orb sits unassumingly in one corner. Wu Ruijun, a 20-year-old student from China, strikes up a conversation with the barista, who doubles as the Orb’s operator. Wu was invited here by a friend who said both could claim free cryptocurrency if he signed up. The barista speeds him through the process. Wu accepts the privacy disclosure without reading it, and widens his eyes for the Orb. Soon he’s verified. “I wasn’t told anything about the privacy policy,” he says on his way out. “I just came for the money.” As Altman’s car winds through San Francisco, I ask about the vision he laid out in 2019: that AI would make it harder for us to trust each other online. 
To my surprise, he rejects the framing. “I’m much more like: what is the good we can create, rather than the bad we can stop?” he says. “It’s not like, ‘Oh, we’ve got to avoid the bot overrun’ or whatever. It’s just that we can do a lot of special things for humans.” It’s an answer that may reflect how his role has changed over the years. Altman is now the chief public cheerleader of a billion company that’s touting the transformative utility of AI agents. The rise of agents, he and others say, will be a boon for our quality of life—like having an assistant on hand who can answer your most pressing questions, carry out mundane tasks, and help you develop new skills. It’s an optimistic vision that may well pan out. But it doesn’t quite fit with the prophecies of AI-enabled infopocalypse that Tools for Humanity was founded upon. Altman waves away a question about the influence he and other investors stand to gain if their vision is realized. Most holders, he assumes, will have already started selling their tokens—too early, he adds. “What I think would be bad is if an early crew had a lot of control over the protocol,” he says, “and that’s where I think the commitment to decentralization is so cool.” Altman is referring to the World Protocol, the underlying technology upon which the Orb, Worldcoin, and World ID all rely. Tools for Humanity is developing it, but has committed to giving control to its users over time—a process they say will prevent power from being concentrated in the hands of a few executives or investors. Tools for Humanity would remain a for-profit company, and could levy fees on platforms that use World ID, but other companies would be able to compete for customers by building alternative apps—or even alternative Orbs. 
The plan draws on ideas that animated the crypto ecosystem in the late 2010s and early 2020s, when evangelists for emerging blockchain technologies argued that the centralization of power—especially in large so-called “Web 2.0” tech companies—was responsible for many of the problems plaguing the modern Internet. Just as decentralized cryptocurrencies could reform a financial system controlled by economic elites, so too would it be possible to create decentralized organizations, run by their members instead of CEOs. How such a system might work in practice remains unclear. “Building a community-based governance system,” Tools for Humanity says in a 2023 white paper, “represents perhaps the most formidable challenge of the entire project.” Altman has a pattern of making idealistic promises that shift over time. He founded OpenAI as a nonprofit in 2015, with a mission to develop AGI safely and for the benefit of all humanity. To raise money, OpenAI restructured itself as a for-profit company in 2019, but with overall control still in the hands of its nonprofit board. Last year, Altman proposed yet another restructure—one which would dilute the board’s control and allow more profits to flow to shareholders. Why, I ask, should the public trust Tools for Humanity’s commitment to freely surrender influence and power? “I think you will just see the continued decentralization via the protocol,” he says. “The value here is going to live in the network, and the network will be owned and governed by a lot of people.” Altman talks less about universal basic income these days. He recently mused about an alternative, which he called “universal basic compute.” Instead of AI companies redistributing their profits, he seemed to suggest, they could instead give everyone in the world fair access to super-powerful AI. Blania tells me he recently “made the decision to stop talking” about UBI at Tools for Humanity. “UBI is one potential answer,” he says. 
“Just giving access to the latest models and having them learn faster and better is another.” Says Altman: “I still don’t know what the right answer is. I believe we should do a better job of distribution of resources than we currently do.” When I probe the question of why people should trust him, Altman gets irritated. “I understand that you hate AI, and that’s fine,” he says. “If you want to frame it as the downside of AI is that there’s going to be a proliferation of very convincing AI systems that are pretending to be human, and we need ways to know what is really human-authorized versus not, then yeah, I think you can call that a downside of AI. It’s not how I would naturally frame it.” The phrase human-authorized hints at a tension between World ID and OpenAI’s plans for AI agents. An Internet where a World ID is required to access most services might impede the usefulness of the agents that OpenAI and others are developing. So Tools for Humanity is building a system that would allow users to delegate their World ID to an agent, allowing the bot to take actions online on their behalf, according to Tiago Sada, the company’s chief product officer. “We’ve built everything in a way that can be very easily delegatable to an agent,” Sada says. It’s a measure that would allow humans to be held accountable for the actions of their AIs. But it suggests that Tools for Humanity’s mission may be shifting beyond simply proving humanity, and toward becoming the infrastructure that enables AI agents to proliferate with human authorization. World ID doesn’t tell you whether a piece of content is AI-generated or human-generated; all it tells you is whether the account that posted it is a human or a bot. Even in a world where everybody had a World ID, our online spaces might still be filled with AI-generated text, images, and videos. As I say goodbye to Altman, I’m left feeling conflicted about his project. 
If the Internet is going to be transformed by AI agents, then some kind of proof-of-humanity system will almost certainly be necessary. Yet if the Orb becomes a piece of Internet infrastructure, it could give Altman—a beneficiary of the proliferation of AI content—significant influence over a leading defense mechanism against it. People might have no choice but to participate in the network in order to access social media or online services. I thought of an encounter I witnessed in Seoul. In the room above the restaurant, Cho Jeong-yeon, 75, watched her friend get verified by an Orb. Cho had been invited to do the same, but demurred. The reward wasn’t enough for her to surrender a part of her identity. “Your iris is uniquely yours, and we don’t really know how it might be used,” she says. “Seeing the machine made me think: are we becoming machines instead of humans now? Everything is changing, and we don’t know how it’ll all turn out.”—With reporting by Stephen Kim/Seoul. This story was supported by Tarbell Grants. Correction, May 30: The original version of this story misstated the market capitalization of Worldcoin if all coins were in circulation. It is billion, not billion.
    The Orb Will See You Now
Once again, Sam Altman wants to show you the future. The CEO of OpenAI is standing on a sparse stage in San Francisco, preparing to reveal his next move to an attentive crowd. “We needed some way for identifying, authenticating humans in the age of AGI,” Altman explains, referring to artificial general intelligence. “We wanted a way to make sure that humans stayed special and central.” The solution Altman came up with is looming behind him. It’s a white sphere about the size of a beach ball, with a camera at its center. The company that makes it, known as Tools for Humanity, calls this mysterious device the Orb. Stare into the heart of the plastic-and-silicon globe and it will map the unique furrows and ciliary zones of your iris. Seconds later, you’ll receive inviolable proof of your humanity: a 12,800-digit binary number, known as an iris code, sent to an app on your phone. At the same time, a packet of cryptocurrency called Worldcoin, worth approximately will be transferred to your digital wallet—your reward for becoming a “verified human.” Altman co-founded Tools for Humanity in 2019 as part of a suite of companies he believed would reshape the world. Once the tech he was developing at OpenAI passed a certain level of intelligence, he reasoned, it would mark the end of one era on the Internet and the beginning of another, in which AI became so advanced, so human-like, that you would no longer be able to tell whether what you read, saw, or heard online came from a real person. When that happened, Altman imagined, we would need a new kind of online infrastructure: a human-verification layer for the Internet, to distinguish real people from the proliferating number of bots and AI “agents.” And so Tools for Humanity set out to build a global “proof-of-humanity” network. It aims to verify 50 million people by the end of 2025; ultimately its goal is to sign up every single human being on the planet. 
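An iris code is, at bottom, a long binary string, and iris-recognition systems in this family (the classic design is John Daugman’s) decide whether two codes came from the same eye by counting how many bits differ. A minimal sketch of that idea follows; the 0.32 threshold is the textbook value and an assumption here, since the Orb’s actual matching pipeline is not public.

```python
def hamming_fraction(code_a: int, code_b: int, n_bits: int = 12_800) -> float:
    """Fraction of bits that differ between two n-bit iris codes."""
    return bin(code_a ^ code_b).count("1") / n_bits

def same_iris(code_a: int, code_b: int, threshold: float = 0.32) -> bool:
    # Two scans of the same iris disagree on some bits (noise, lighting),
    # but far fewer than two different irises, which differ on roughly
    # half their bits. The 0.32 cutoff is the textbook Daugman value,
    # used here as an illustrative assumption.
    return hamming_fraction(code_a, code_b) < threshold

# Identical codes match; maximally different codes do not.
assert same_iris(0b1010_1010, 0b1010_1010)
assert not same_iris(0, (1 << 12_800) - 1)
```

Because the comparison is a bit count rather than an exact lookup, two noisy scans of the same eye can still be linked, which is what makes a one-person-one-ID guarantee possible in the first place.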
The free crypto serves as both an incentive for users to sign up, and also an entry point into what the company hopes will become the world’s largest financial network, through which it believes “double-digit percentages of the global economy” will eventually flow. Even for Altman, these missions are audacious. “If this really works, it’s like a fundamental piece of infrastructure for the world,” Altman tells TIME in a video interview from the passenger seat of a car a few days before his April 30 keynote address. Internal hardware of the Orb in mid-assembly in March. Davide Monteleone for TIME. The project’s goal is to solve a problem partly of Altman’s own making. In the near future, he and other tech leaders say, advanced AIs will be imbued with agency: the ability to not just respond to human prompting, but to take actions independently in the world. This will enable the creation of AI coworkers that can drop into your company and begin solving problems; AI tutors that can adapt their teaching style to students’ preferences; even AI doctors that can diagnose routine cases and handle scheduling or logistics. The arrival of these virtual agents, their venture capitalist backers predict, will turbocharge our productivity and unleash an age of material abundance. But AI agents will also have cascading consequences for the human experience online. “As AI systems become harder to distinguish from people, websites may face difficult trade-offs,” says a recent paper by researchers from 25 different universities, nonprofits, and tech companies, including OpenAI. “There is a significant risk that digital institutions will be unprepared for a time when AI-powered agents, including those leveraged by malicious actors, overwhelm other activity online.” On social-media platforms like X and Facebook, bot-driven accounts are amassing billions of views on AI-generated content. 
You are now a verified human.A visitor views the Orbs in Seoul on April 14, 2025. Taemin Ha for TIMETools for Humanity aims to “verify” 1 million Koreans over the next year. Taemin Ha for TIMEA couple dozen Orbs have been available in South Korea since 2023, verifying roughly 55,000 people. Now Tools for Humanity is redoubling its efforts there. At an event in a traditional wooden hanok house in central Seoul, an executive announces that 250 Orbs will soon be dispersed around the country—with the aim of verifying 1 million Koreans in the next 12 months. South Korea has high levels of smartphone usage, crypto and AI adoption, and Internet access, while average wages are modest enough for the free Worldcoin on offer to still be an enticing draw—all of which makes it fertile testing ground for the company’s ambitious global expansion. Yet things seem off to a slow start. In a retail space I visited in central Seoul, Tools for Humanity had constructed a wooden structure with eight Orbs facing each other. Locals and tourists wander past looking bemused; few volunteer themselves up. Most who do tell me they are crypto enthusiasts who came intentionally, driven more by the spirit of early adoption than the free coins. The next day, I visit a coffee shop in central Seoul where a chrome Orb sits unassumingly in one corner. Wu Ruijun, a 20-year-old student from China, strikes up a conversation with the barista, who doubles as the Orb’s operator. Wu was invited here by a friend who said both could claim free cryptocurrency if he signed up. The barista speeds him through the process. Wu accepts the privacy disclosure without reading it, and widens his eyes for the Orb. Soon he’s verified. “I wasn’t told anything about the privacy policy,” he says on his way out. “I just came for the money.”As Altman’s car winds through San Francisco, I ask about the vision he laid out in 2019: that AI would make it harder for us to trust each other online. 
To my surprise, he rejects the framing. “I’m much morelike: what is the good we can create, rather than the bad we can stop?” he says. “It’s not like, ‘Oh, we’ve got to avoid the bot overrun’ or whatever. It’s just that we can do a lot of special things for humans.” It’s an answer that may reflect how his role has changed over the years. Altman is now the chief public cheerleader of a billion company that’s touting the transformative utility of AI agents. The rise of agents, he and others say, will be a boon for our quality of life—like having an assistant on hand who can answer your most pressing questions, carry out mundane tasks, and help you develop new skills. It’s an optimistic vision that may well pan out. But it doesn’t quite fit with the prophecies of AI-enabled infopocalypse that Tools for Humanity was founded upon.Altman waves away a question about the influence he and other investors stand to gain if their vision is realized. Most holders, he assumes, will have already started selling their tokens—too early, he adds. “What I think would be bad is if an early crew had a lot of control over the protocol,” he says, “and that’s where I think the commitment to decentralization is so cool.” Altman is referring to the World Protocol, the underlying technology upon which the Orb, Worldcoin, and World ID all rely. Tools for Humanity is developing it, but has committed to giving control to its users over time—a process they say will prevent power from being concentrated in the hands of a few executives or investors. Tools for Humanity would remain a for-profit company, and could levy fees on platforms that use World ID, but other companies would be able to compete for customers by building alternative apps—or even alternative Orbs. 
The plan draws on ideas that animated the crypto ecosystem in the late 2010s and early 2020s, when evangelists for emerging blockchain technologies argued that the centralization of power—especially in large so-called “Web 2.0” tech companies—was responsible for many of the problems plaguing the modern Internet. Just as decentralized cryptocurrencies could reform a financial system controlled by economic elites, so too would it be possible to create decentralized organizations, run by their members instead of CEOs. How such a system might work in practice remains unclear. “Building a community-based governance system,” Tools for Humanity says in a 2023 white paper, “represents perhaps the most formidable challenge of the entire project.”Altman has a pattern of making idealistic promises that shift over time. He founded OpenAI as a nonprofit in 2015, with a mission to develop AGI safely and for the benefit of all humanity. To raise money, OpenAI restructured itself as a for-profit company in 2019, but with overall control still in the hands of its nonprofit board. Last year, Altman proposed yet another restructure—one which would dilute the board’s control and allow more profits to flow to shareholders. Why, I ask, should the public trust Tools for Humanity’s commitment to freely surrender influence and power? “I think you will just see the continued decentralization via the protocol,” he says. “The value here is going to live in the network, and the network will be owned and governed by a lot of people.” Altman talks less about universal basic income these days. He recently mused about an alternative, which he called “universal basic compute.” Instead of AI companies redistributing their profits, he seemed to suggest, they could instead give everyone in the world fair access to super-powerful AI. Blania tells me he recently “made the decision to stop talking” about UBI at Tools for Humanity. “UBI is one potential answer,” he says. 
“Just givingaccess to the latestmodels and having them learn faster and better is another.” Says Altman: “I still don’t know what the right answer is. I believe we should do a better job of distribution of resources than we currently do.” When I probe the question of why people should trust him, Altman gets irritated. “I understand that you hate AI, and that’s fine,” he says. “If you want to frame it as the downside of AI is that there’s going to be a proliferation of very convincing AI systems that are pretending to be human, and we need ways to know what is really human-authorized versus not, then yeah, I think you can call that a downside of AI. It’s not how I would naturally frame it.” The phrase human-authorized hints at a tension between World ID and OpenAI’s plans for AI agents. An Internet where a World ID is required to access most services might impede the usefulness of the agents that OpenAI and others are developing. So Tools for Humanity is building a system that would allow users to delegate their World ID to an agent, allowing the bot to take actions online on their behalf, according to Tiago Sada, the company’s chief product officer. “We’ve built everything in a way that can be very easily delegatable to an agent,” Sada says. It’s a measure that would allow humans to be held accountable for the actions of their AIs. But it suggests that Tools for Humanity’s mission may be shifting beyond simply proving humanity, and toward becoming the infrastructure that enables AI agents to proliferate with human authorization. World ID doesn’t tell you whether a piece of content is AI-generated or human-generated; all it tells you is whether the account that posted it is a human or a bot. Even in a world where everybody had a World ID, our online spaces might still be filled with AI-generated text, images, and videos.As I say goodbye to Altman, I’m left feeling conflicted about his project. 
If the Internet is going to be transformed by AI agents, then some kind of proof-of-humanity system will almost certainly be necessary. Yet if the Orb becomes a piece of Internet infrastructure, it could give Altman—a beneficiary of the proliferation of AI content—significant influence over a leading defense mechanism against it. People might have no choice but to participate in the network in order to access social media or online services.I thought of an encounter I witnessed in Seoul. In the room above the restaurant, Cho Jeong-yeon, 75, watched her friend get verified by an Orb. Cho had been invited to do the same, but demurred. The reward wasn’t enough for her to surrender a part of her identity. “Your iris is uniquely yours, and we don’t really know how it might be used,” she says. “Seeing the machine made me think: are we becoming machines instead of humans now? Everything is changing, and we don’t know how it’ll all turn out.”—With reporting by Stephen Kim/Seoul. This story was supported by Tarbell Grants.Correction, May 30The original version of this story misstated the market capitalization of Worldcoin if all coins were in circulation. It is billion, not billion. #orb #will #see #you #now
    TIME.COM
    The Orb Will See You Now
Once again, Sam Altman wants to show you the future. The CEO of OpenAI is standing on a sparse stage in San Francisco, preparing to reveal his next move to an attentive crowd. “We needed some way for identifying, authenticating humans in the age of AGI,” Altman explains, referring to artificial general intelligence. “We wanted a way to make sure that humans stayed special and central.” The solution Altman came up with is looming behind him. It’s a white sphere about the size of a beach ball, with a camera at its center. The company that makes it, known as Tools for Humanity, calls this mysterious device the Orb. Stare into the heart of the plastic-and-silicon globe and it will map the unique furrows and ciliary zones of your iris. Seconds later, you’ll receive inviolable proof of your humanity: a 12,800-digit binary number, known as an iris code, sent to an app on your phone. At the same time, a packet of cryptocurrency called Worldcoin, worth approximately $42, will be transferred to your digital wallet—your reward for becoming a “verified human.” Altman co-founded Tools for Humanity in 2019 as part of a suite of companies he believed would reshape the world. Once the tech he was developing at OpenAI passed a certain level of intelligence, he reasoned, it would mark the end of one era on the Internet and the beginning of another, in which AI became so advanced, so human-like, that you would no longer be able to tell whether what you read, saw, or heard online came from a real person. When that happened, Altman imagined, we would need a new kind of online infrastructure: a human-verification layer for the Internet, to distinguish real people from the proliferating number of bots and AI “agents.”

And so Tools for Humanity set out to build a global “proof-of-humanity” network. It aims to verify 50 million people by the end of 2025; ultimately its goal is to sign up every single human being on the planet.
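An iris code of the kind described above is typically compared bitwise: in standard iris-recognition schemes (Daugman-style encodings), two scans of the same eye disagree on far fewer bits than scans of different eyes, so matching reduces to a Hamming-distance threshold. The sketch below is illustrative only; the 12,800-bit length is the figure the article cites, while the threshold, noise level, and matching rule are generic assumptions, not Tools for Humanity's actual parameters.

```python
import random

IRIS_CODE_BITS = 12_800   # code length cited in the article
MATCH_THRESHOLD = 0.32    # assumed: a typical Daugman-style cutoff

rng = random.Random(0)

def random_iris_code() -> int:
    """Stand-in for a real encoder's output: 12,800 random bits."""
    return rng.getrandbits(IRIS_CODE_BITS)

def rescan(code: int, flipped_bits: int = 640) -> int:
    """Simulate a noisy second scan of the same eye (~5% bit flips)."""
    for pos in rng.sample(range(IRIS_CODE_BITS), flipped_bits):
        code ^= 1 << pos
    return code

def hamming_fraction(a: int, b: int) -> float:
    """Fraction of bit positions at which two codes differ."""
    return bin(a ^ b).count("1") / IRIS_CODE_BITS

def same_eye(a: int, b: int) -> bool:
    return hamming_fraction(a, b) < MATCH_THRESHOLD

alice = random_iris_code()
bob = random_iris_code()
assert same_eye(alice, rescan(alice))   # a noisy rescan still matches
assert not same_eye(alice, bob)         # unrelated eyes differ on ~50% of bits
```

Two unrelated random codes differ on roughly half their bits, while a rescan of the same eye differs only by sensor noise, which is what makes a fixed threshold workable.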
The free crypto serves as both an incentive for users to sign up and an entry point into what the company hopes will become the world’s largest financial network, through which it believes “double-digit percentages of the global economy” will eventually flow. Even for Altman, these missions are audacious. “If this really works, it’s like a fundamental piece of infrastructure for the world,” Altman tells TIME in a video interview from the passenger seat of a car a few days before his April 30 keynote address.

Internal hardware of the Orb in mid-assembly in March. Davide Monteleone for TIME

The project’s goal is to solve a problem partly of Altman’s own making. In the near future, he and other tech leaders say, advanced AIs will be imbued with agency: the ability to not just respond to human prompting, but to take actions independently in the world. This will enable the creation of AI coworkers that can drop into your company and begin solving problems; AI tutors that can adapt their teaching style to students’ preferences; even AI doctors that can diagnose routine cases and handle scheduling or logistics. The arrival of these virtual agents, their venture capitalist backers predict, will turbocharge our productivity and unleash an age of material abundance.

But AI agents will also have cascading consequences for the human experience online. “As AI systems become harder to distinguish from people, websites may face difficult trade-offs,” says a recent paper by researchers from 25 different universities, nonprofits, and tech companies, including OpenAI. “There is a significant risk that digital institutions will be unprepared for a time when AI-powered agents, including those leveraged by malicious actors, overwhelm other activity online.” On social-media platforms like X and Facebook, bot-driven accounts are amassing billions of views on AI-generated content.
In April, the foundation that runs Wikipedia disclosed that AI bots scraping its site were making the encyclopedia too costly to sustainably run. Later the same month, researchers from the University of Zurich found that AI-generated comments on the subreddit /r/ChangeMyView were up to six times more successful than human-written ones at persuading unknowing users to change their minds.

Photograph by Davide Monteleone for TIME

The arrival of agents won’t only threaten our ability to distinguish between authentic and AI content online. It will also challenge the Internet’s core business model, online advertising, which relies on the assumption that ads are being viewed by humans. “The Internet will change very drastically sometime in the next 12 to 24 months,” says Tools for Humanity CEO Alex Blania. “So we have to succeed, or I’m not sure what else would happen.”

For four years, Blania’s team has been testing the Orb’s hardware abroad. Now the U.S. rollout has arrived. Over the next 12 months, 7,500 Orbs will be arriving in dozens of American cities, in locations like gas stations, bodegas, and flagship stores in Los Angeles, Austin, and Miami. The project’s founders and fans hope the Orb’s U.S. debut will kickstart a new phase of growth. The San Francisco keynote was titled: “At Last.” It’s not clear the public appetite matches the exultant branding. Tools for Humanity has “verified” just 12 million humans since mid-2023, a pace Blania concedes is well behind schedule. Few online platforms currently support the so-called “World ID” that the Orb bestows upon its visitors, leaving little to entice users to give up their biometrics beyond the lure of free crypto. Even Altman isn’t sure whether the whole thing can work. “I can see [how] this becomes a fairly mainstream thing in a few years,” he says.
“Or I can see that it’s still only used by a small subset of people who think about the world in a certain way.”

Blania (left) and Altman debut the Orb at World’s U.S. launch in San Francisco on April 30, 2025. Jason Henry—The New York Times/Redux

Yet as the Internet becomes overrun with AI, the creators of this strange new piece of hardware are betting that everybody in the world will soon want—or need—to visit an Orb. The biometric code it creates, they predict, will become a new type of digital passport, without which you might be denied passage to the Internet of the future, from dating apps to government services. In a best-case scenario, World ID could be a privacy-preserving way to fortify the Internet against an AI-driven deluge of fake or deceptive content. It could also enable the distribution of universal basic income (UBI)—a policy that Altman has previously touted—as AI automation transforms the global economy. To examine what this new technology might mean, I reported from three continents, interviewed 10 Tools for Humanity executives and investors, reviewed hundreds of pages of company documents, and “verified” my own humanity. The Internet will inevitably need some kind of proof-of-humanity system in the near future, says Divya Siddarth, founder of the nonprofit Collective Intelligence Project. The real question, she argues, is whether such a system will be centralized—“a big security nightmare that enables a lot of surveillance”—or privacy-preserving, as the Orb claims to be. Questions remain about Tools for Humanity’s corporate structure, its yoking to an unstable cryptocurrency, and what power it would concentrate in the hands of its owners if successful. Yet it’s also one of the only attempts to solve what many see as an increasingly urgent problem. “There are some issues with it,” Siddarth says of World ID. “But you can’t preserve the Internet in amber.
Something in this direction is necessary.”

In March, I met Blania at Tools for Humanity’s San Francisco headquarters, where a large screen displays the number of weekly “Orb verifications” by country. A few days earlier, the CEO had attended a $1 million-per-head dinner at Mar-a-Lago with President Donald Trump, whom he credits with clearing the way for the company’s U.S. launch by relaxing crypto regulations. “Given Sam is a very high profile target,” Blania says, “we just decided that we would let other companies fight that fight, and enter the U.S. once the air is clear.” As a kid growing up in Germany, Blania was a little different than his peers. “Other kids were, like, drinking a lot, or doing a lot of parties, and I was just building a lot of things that could potentially blow up,” he recalls. At the California Institute of Technology, where he was pursuing research for a master’s degree, he spent many evenings reading the blogs of startup gurus like Paul Graham and Altman. Then, in 2019, Blania received an email from Max Novendstern, an entrepreneur who had been kicking around a concept with Altman to build a global cryptocurrency network. They were looking for technical minds to help with the project. Over cappuccinos, Altman told Blania he was certain about three things. First, smarter-than-human AI was not only possible, but inevitable—and it would soon mean you could no longer assume that anything you read, saw, or heard on the Internet was human-created. Second, cryptocurrency and other decentralized technologies would be a massive force for change in the world. And third, scale was essential to any crypto network’s value.

The Orb is tested on a calibration rig, surrounded by checkerboard targets to ensure precision in iris detection. Davide Monteleone for TIME

The goal of Worldcoin, as the project was initially called, was to combine those three insights. Altman took a lesson from PayPal, the company co-founded by his mentor Peter Thiel.
Of its initial funding, PayPal spent less than $10 million actually building its app—but pumped an additional $70 million or so into a referral program, whereby new users and the person who invited them would each receive $10 in credit. The referral program helped make PayPal a leading payment platform. Altman thought a version of that strategy would propel Worldcoin to similar heights. He wanted to create a new cryptocurrency and give it to users as a reward for signing up. The more people who joined the system, the higher the token’s value would theoretically rise. Since 2019, the project has raised $244 million from investors like Coinbase and the venture capital firm Andreessen Horowitz. That money paid for the $50 million cost of designing the Orb, plus maintaining the software it runs on. The total market value of all Worldcoins in existence, however, is far higher—around $12 billion. That number is a bit misleading: most of those coins are not in circulation and Worldcoin’s price has fluctuated wildly. Still, it allows the company to reward users for signing up at no cost to itself. The main lure for investors is the crypto upside. Some 75% of all Worldcoins are set aside for humans to claim when they sign up, or as referral bonuses. The remaining 25% are split between Tools for Humanity’s backers and staff, including Blania and Altman. “I’m really excited to make a lot of money,” Blania says.

From the beginning, Altman was thinking about the consequences of the AI revolution he intended to unleash. (On May 21, he announced plans to team up with famed former Apple designer Jony Ive on a new AI personal device.) A future in which advanced AI could perform most tasks more effectively than humans would bring a wave of unemployment and economic dislocation, he reasoned. Some kind of wealth redistribution might be necessary. In 2016, he partially funded a study of basic income, which gave $1,000-per-month handouts to low-income individuals in Illinois and Texas.
But there was no single financial system that would allow money to be sent to everybody in the world. Nor was there a way to stop an individual human from claiming their share twice—or to identify a sophisticated AI pretending to be human and pocketing some cash of its own. In 2023, Tools for Humanity raised the possibility of using the network to redistribute the profits of AI labs that were able to automate human labor. “As AI advances,” it said, “fairly distributing access and some of the created value through UBI will play an increasingly vital role in counteracting the concentration of economic power.”

Blania was taken by the pitch, and agreed to join the project as a co-founder. “Most people told us we were very stupid or crazy or insane, including Silicon Valley investors,” Blania says. At least until ChatGPT came out in 2022, transforming OpenAI into one of the world’s most famous tech companies and kickstarting a market bull-run. “Things suddenly started to make more and more sense to the external world,” Blania says of the vision to develop a global “proof-of-humanity” network. “You have to imagine a world in which you will have very smart and competent systems somehow flying through the Internet with different goals and ideas of what they want to do, and us having no idea anymore what we’re dealing with.”

After our interview, Blania’s head of communications ushers me over to a circular wooden structure where eight Orbs face one another. The scene feels like a cross between an Apple Store and a ceremonial altar. “Do you want to get verified?” she asks. Putting aside my reservations for the purposes of research, I download the World App and follow its prompts. I flash a QR code at the Orb, then gaze into it.
A minute or so later, my phone buzzes with confirmation: I’ve been issued my own personal World ID and some Worldcoin.

The first thing the Orb does is check if you’re human, using a neural network that takes input from various sensors, including an infrared camera and a thermometer. Davide Monteleone for TIME

While I stared into the Orb, several complex procedures had taken place at once. A neural network took inputs from multiple sensors—an infrared camera, a thermometer—to confirm I was a living human. Simultaneously, a telephoto lens zoomed in on my iris, capturing the physical traits within that distinguish me from every other human on Earth. It then converted that image into an iris code: a numerical abstraction of my unique biometric data. Then the Orb checked to see if my iris code matched any it had seen before, using a technique allowing encrypted data to be compared without revealing the underlying information. Before the Orb deleted my data, it turned my iris code into several derivative codes—none of which on its own can be linked back to the original—encrypted them, deleted the only copies of the decryption keys, and sent each one to a different secure server, so that future users’ iris codes can be checked for uniqueness against mine. If I were to use my World ID to access a website, that site would learn nothing about me except that I’m human. The Orb is open-source, so outside experts can examine its code and verify the company’s privacy claims. “I did a colonoscopy on this company and these technologies before I agreed to join,” says Trevor Traina, a Trump donor and former U.S. ambassador to Austria who now serves as Tools for Humanity’s chief business officer. “It is the most privacy-preserving technology on the planet.”

Only weeks later, when researching what would happen if I wanted to delete my data, do I discover that Tools for Humanity’s privacy claims rest on what feels like a sleight of hand.
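The pipeline described above (iris code, one-way derivative codes, each stored on a different server and checked for uniqueness without retaining the original) can be caricatured in a few lines. This is a deliberately crude model: it uses salted SHA-256 hashes as the "derivatives," ignores sensor noise between scans, and stands in for the encrypted-comparison machinery the company actually describes. The salts, server layout, and function names are all invented for illustration.

```python
import hashlib

# Invented per-server salts, standing in for keys the Orb is said to discard.
SERVER_SALTS = [b"server-A", b"server-B", b"server-C"]
servers = [set() for _ in SERVER_SALTS]  # each server sees only its own derivative

def derivatives(iris_code: bytes) -> list[bytes]:
    """One one-way derivative per server; none is invertible back to the
    iris code, and no two are linkable without the corresponding salt."""
    return [hashlib.sha256(salt + iris_code).digest() for salt in SERVER_SALTS]

def enroll(iris_code: bytes) -> bool:
    """Accept a new user only if no server has seen this iris before."""
    ds = derivatives(iris_code)
    if any(d in store for d, store in zip(ds, servers)):
        return False  # duplicate: this eye is already in the system
    for d, store in zip(ds, servers):
        store.add(d)
    return True

iris = b"\x01" * 32      # toy iris code; the original can be discarded after enroll
assert enroll(iris)      # first visit succeeds
assert not enroll(iris)  # a returning eye is recognized from derivatives alone
```

The last line is the crux of the deletion dispute the story goes on to describe: even after every copy of the original iris code is gone, the derivatives still recognize a returning user.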
The company argues that in modifying your iris code, it has “effectively anonymized” your biometric data. If you ask Tools for Humanity to delete your iris codes, they will delete the one stored on your phone, but not the derivatives. Those, they argue, are no longer your personal data at all. But if I were to return to an Orb after deleting my data, it would still recognize those codes as uniquely mine. Once you look into the Orb, a piece of your identity remains in the system forever. If users could truly delete that data, the premise of one ID per human would collapse, Tools for Humanity’s chief privacy officer Damien Kieran tells me when I call seeking an explanation. People could delete and sign up for new World IDs after being suspended from a platform. Or claim their Worldcoin tokens, sell them, delete their data, and cash in again. This argument fell flat with European Union regulators in Germany, who recently declared that the Orb posed “fundamental data protection issues” and ordered the company to allow European users to fully delete even their anonymized data. (Tools for Humanity has appealed; the regulator is now reassessing the decision.) “Just like any other technology service, users cannot delete data that is not personal data,” Kieran said in a statement. “If a person could delete anonymized data that can’t be linked to them by World or any third party, it would allow bad actors to circumvent the security and safety that World ID is working to bring to every human.”

On a balmy afternoon this spring, I climb a flight of stairs up to a room above a restaurant in an outer suburb of Seoul. Five elderly South Koreans tap on their phones as they wait to be “verified” by the two Orbs in the center of the room. “We don’t really know how to distinguish between AI and humans anymore,” an attendant in a company t-shirt explains in Korean, gesturing toward the spheres. “We need a way to verify that we’re human and not AI. So how do we do that?
Well, humans have irises, but AI doesn’t.”

The attendant ushers an elderly woman over to an Orb. It bleeps. “Open your eyes,” a disembodied voice says in English. The woman stares into the camera. Seconds later, she checks her phone and sees that a packet of Worldcoin worth 75,000 Korean won (about $54) has landed in her digital wallet. Congratulations, the app tells her. You are now a verified human.

A visitor views the Orbs in Seoul on April 14, 2025. Taemin Ha for TIME

Tools for Humanity aims to “verify” 1 million Koreans over the next year. Taemin Ha for TIME

A couple dozen Orbs have been available in South Korea since 2023, verifying roughly 55,000 people. Now Tools for Humanity is redoubling its efforts there. At an event in a traditional wooden hanok house in central Seoul, an executive announces that 250 Orbs will soon be dispersed around the country—with the aim of verifying 1 million Koreans in the next 12 months. South Korea has high levels of smartphone usage, crypto and AI adoption, and Internet access, while average wages are modest enough for the free Worldcoin on offer to still be an enticing draw—all of which makes it fertile testing ground for the company’s ambitious global expansion. Yet things seem off to a slow start. In a retail space I visited in central Seoul, Tools for Humanity had constructed a wooden structure with eight Orbs facing each other. Locals and tourists wander past looking bemused; few volunteer themselves up. Most who do tell me they are crypto enthusiasts who came intentionally, driven more by the spirit of early adoption than the free coins. The next day, I visit a coffee shop in central Seoul where a chrome Orb sits unassumingly in one corner. Wu Ruijun, a 20-year-old student from China, strikes up a conversation with the barista, who doubles as the Orb’s operator. Wu was invited here by a friend who told him they could both claim free cryptocurrency if he signed up. The barista speeds him through the process.
Wu accepts the privacy disclosure without reading it, and widens his eyes for the Orb. Soon he’s verified. “I wasn’t told anything about the privacy policy,” he says on his way out. “I just came for the money.”
As Altman’s car winds through San Francisco, I ask about the vision he laid out in 2019: that AI would make it harder for us to trust each other online. To my surprise, he rejects the framing. “I’m much more [about] like: what is the good we can create, rather than the bad we can stop?” he says. “It’s not like, ‘Oh, we’ve got to avoid the bot overrun’ or whatever. It’s just that we can do a lot of special things for humans.” It’s an answer that may reflect how his role has changed over the years. Altman is now the chief public cheerleader of a $300 billion company that’s touting the transformative utility of AI agents. The rise of agents, he and others say, will be a boon for our quality of life—like having an assistant on hand who can answer your most pressing questions, carry out mundane tasks, and help you develop new skills. It’s an optimistic vision that may well pan out. But it doesn’t quite fit with the prophecies of AI-enabled infopocalypse that Tools for Humanity was founded upon.
Altman waves away a question about the influence he and other investors stand to gain if their vision is realized. Most holders, he assumes, will have already started selling their tokens—too early, he adds. “What I think would be bad is if an early crew had a lot of control over the protocol,” he says, “and that’s where I think the commitment to decentralization is so cool.” Altman is referring to the World Protocol, the underlying technology upon which the Orb, Worldcoin, and World ID all rely. Tools for Humanity is developing it, but has committed to giving control to its users over time—a process they say will prevent power from being concentrated in the hands of a few executives or investors. 
Tools for Humanity would remain a for-profit company, and could levy fees on platforms that use World ID, but other companies would be able to compete for customers by building alternative apps—or even alternative Orbs. The plan draws on ideas that animated the crypto ecosystem in the late 2010s and early 2020s, when evangelists for emerging blockchain technologies argued that the centralization of power—especially in large so-called “Web 2.0” tech companies—was responsible for many of the problems plaguing the modern Internet. Just as decentralized cryptocurrencies could reform a financial system controlled by economic elites, so too would it be possible to create decentralized organizations, run by their members instead of CEOs. How such a system might work in practice remains unclear. “Building a community-based governance system,” Tools for Humanity says in a 2023 white paper, “represents perhaps the most formidable challenge of the entire project.”
Altman has a pattern of making idealistic promises that shift over time. He founded OpenAI as a nonprofit in 2015, with a mission to develop AGI safely and for the benefit of all humanity. To raise money, OpenAI restructured itself as a for-profit company in 2019, but with overall control still in the hands of its nonprofit board. Last year, Altman proposed yet another restructure—one which would dilute the board’s control and allow more profits to flow to shareholders. Why, I ask, should the public trust Tools for Humanity’s commitment to freely surrender influence and power? “I think you will just see the continued decentralization via the protocol,” he says. “The value here is going to live in the network, and the network will be owned and governed by a lot of people.” Altman talks less about universal basic income these days. 
He recently mused about an alternative, which he called “universal basic compute.” Instead of AI companies redistributing their profits, he seemed to suggest, they could instead give everyone in the world fair access to super-powerful AI. Blania tells me he recently “made the decision to stop talking” about UBI at Tools for Humanity. “UBI is one potential answer,” he says. “Just giving [people] access to the latest [AI] models and having them learn faster and better is another.” Says Altman: “I still don’t know what the right answer is. I believe we should do a better job of distribution of resources than we currently do.” When I probe the question of why people should trust him, Altman gets irritated. “I understand that you hate AI, and that’s fine,” he says. “If you want to frame it as the downside of AI is that there’s going to be a proliferation of very convincing AI systems that are pretending to be human, and we need ways to know what is really human-authorized versus not, then yeah, I think you can call that a downside of AI. It’s not how I would naturally frame it.” The phrase human-authorized hints at a tension between World ID and OpenAI’s plans for AI agents. An Internet where a World ID is required to access most services might impede the usefulness of the agents that OpenAI and others are developing. So Tools for Humanity is building a system that would allow users to delegate their World ID to an agent, allowing the bot to take actions online on their behalf, according to Tiago Sada, the company’s chief product officer. “We’ve built everything in a way that can be very easily delegatable to an agent,” Sada says. It’s a measure that would allow humans to be held accountable for the actions of their AIs. But it suggests that Tools for Humanity’s mission may be shifting beyond simply proving humanity, and toward becoming the infrastructure that enables AI agents to proliferate with human authorization. 
World ID doesn’t tell you whether a piece of content is AI-generated or human-generated; all it tells you is whether the account that posted it is a human or a bot. Even in a world where everybody had a World ID, our online spaces might still be filled with AI-generated text, images, and videos.
As I say goodbye to Altman, I’m left feeling conflicted about his project. If the Internet is going to be transformed by AI agents, then some kind of proof-of-humanity system will almost certainly be necessary. Yet if the Orb becomes a piece of Internet infrastructure, it could give Altman—a beneficiary of the proliferation of AI content—significant influence over a leading defense mechanism against it. People might have no choice but to participate in the network in order to access social media or online services.
I thought of an encounter I witnessed in Seoul. In the room above the restaurant, Cho Jeong-yeon, 75, watched her friend get verified by an Orb. Cho had been invited to do the same, but demurred. The reward wasn’t enough for her to surrender a part of her identity. “Your iris is uniquely yours, and we don’t really know how it might be used,” she says. “Seeing the machine made me think: are we becoming machines instead of humans now? Everything is changing, and we don’t know how it’ll all turn out.”
—With reporting by Stephen Kim/Seoul. This story was supported by Tarbell Grants.
Correction, May 30: The original version of this story misstated the market capitalization of Worldcoin if all coins were in circulation. It is $12 billion, not $1.2 billion.
  • Security Is Not Privacy, Part 1: The Mobile Target

    In technical fields like information technology, definitions are fundamental. They are the building blocks for constructing useful applications and systems. Yet, despite this, it’s easy to assume a term’s definition and wield it confidently before discovering its true meaning. The two closely related cases that stand out to me are “security” and “privacy.”
    I say this with full awareness that, in my many writings on information security, I never adequately distinguished these two concepts. It was only after observing enough conflation of these terms that I resolved to examine my own casual treatment of them.
    So, with the aim of solidifying my own understanding, let’s properly differentiate “information security” and “information privacy.”
    Security vs. Privacy: Definitions That Matter
    In the context of information technology, what exactly are security and privacy?

    Security is the property of preventing unauthorized parties from accessing or altering your data.
    Privacy is the property of preventing the observation of your activities by any third parties to whom you do not expressly consent to observe those activities.

    As you can see, these principles are related, which is one reason why they’re commonly interchanged. The distinction becomes clearer with examples.
    Let’s start with an instance where security applies, but privacy does not.
    Spotify uses digital rights management (DRM) software to keep its media secure but not private. DRM is a whole topic of its own, but it essentially uses cryptography to enforce copyright. In Spotify’s case, it’s what constitutes streaming rather than just downloading: the song’s file is present on your device (at least temporarily) just as if you’d downloaded it, but Spotify’s DRM cryptography prevents you from opening the file without the Spotify application.
    The data on Spotify (audio files) are secure because only users of the application can stream audio, and streamed content can’t be retained, opened, or transmitted to non-users. However, Spotify’s data is not private because nearly anyone with an email address can be a user. Thus, in practice, the company cannot control who exactly can access its data.
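As a toy illustration of this idea (my own sketch, not Spotify's actual scheme, and not real cryptography), a cached file can be stored encrypted so that only the application holding the key can render it. Other software sees only an opaque blob:

```python
import hashlib

# Toy illustration only: real DRM uses vetted ciphers and hardware-backed keys.
# APP_SECRET is hypothetical; in practice a key is never exposed to the user.
APP_SECRET = b"key-held-only-by-the-app"

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random byte stream from the key (SHA-256 in counter mode).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def protect(audio: bytes) -> bytes:
    # The file cached on disk is ciphertext: present on the device, but unreadable.
    return bytes(a ^ b for a, b in zip(audio, keystream(APP_SECRET, len(audio))))

def render(blob: bytes) -> bytes:
    # Only code holding APP_SECRET can turn the cached blob back into audio;
    # XOR with the same keystream inverts the transformation.
    return protect(blob)

song = b"PCM audio bytes..."
cached = protect(song)
assert cached != song          # the on-disk copy is opaque to other apps
assert render(cached) == song  # the app itself can still play it
```

The data is secure (unreadable outside the app) yet not private: anyone who signs up for an account gets an app that holds the key.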
    A more complex example of security without privacy is social media.
    When you sign up for a social media platform, you accept an end-user license agreement (EULA) authorizing the platform to share your data with its partners and affiliates. Your data stored with “authorized parties” on servers controlled by the platform and its affiliates would be considered secure, provided all these entities successfully defend your data against theft by unauthorized parties.
    In other words, if everyone who is allowed (by agreement) to have your data encrypts it in transit and at rest, insulates and segments their networks, etc., then your data is secure no matter how many affiliates receive it. In practice, the more parties that have your data, the more likely it is that any one of them is breached, but in theory, they could all defend your data.

    On the other hand, any data you fork over to the social network is not private because you can’t control who uses your data and how. As soon as your data lands on the platform’s servers, you can’t restrict what they do with it, including sharing your data with other entities, which you also can’t control.
    Both examples illustrate security without privacy. That’s because privacy entails security, but not the reverse. All squares are rectangles, but not all rectangles are squares. If you have privacy, meaning you can completely enforce how any party uses your data (or doesn’t), it is secure by definition because only authorized parties can access your data.
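The two definitions can be sketched as set relationships (a toy model of my own, not from the post): security bounds who can *access* the data, privacy bounds access by whom you actually *consented* to. The social media case is the one where the two come apart:

```python
# Toy model: the parties who can reach your data, versus the parties
# the platform authorized, versus the parties you expressly consented to.

def is_secure(accessors: set, authorized: set) -> bool:
    # Security: nobody outside the authorized set can reach the data.
    return accessors <= authorized

def is_private(accessors: set, consented: set) -> bool:
    # Privacy: nobody you didn't expressly consent to can observe it.
    return accessors <= consented

# Social media: affiliates are "authorized" by the EULA, but you chose none of them.
accessors  = {"platform", "affiliate_a", "affiliate_b"}
authorized = {"platform", "affiliate_a", "affiliate_b"}  # per the EULA
consented  = {"platform"}                                # whom you actually chose

assert is_secure(accessors, authorized)        # security without privacy
assert not is_private(accessors, consented)

# The "all squares are rectangles" direction: if accessors stay within the
# consented set, and only consented parties are authorized, privacy gives
# you security for free.
assert is_private({"platform"}, consented) and is_secure({"platform"}, consented)
```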
    Mobile Devices: Secure but Not Private
    Casually mixing security and privacy can lead people to misunderstand the information security properties that apply to their data in any given scenario. By reevaluating for ourselves whether a given technology affords us security and privacy, we can have a more accurate understanding of how accessible our data really is.
    One significant misconception I’ve noticed concerns mobile devices. I get the impression that the digital privacy content sphere regards mobile devices as not secure because they aren’t private. But while mobile is designed not to be private, it is specifically designed to be secure.
    Why is that?
    Because the value of data is in keeping it in your hands and out of your competitor’s. If you collect data but anyone else can grab your copy, you are not only at no advantage but also at a disadvantage since you’re the only party that spent time and money to collect it from the source.
    With modest scrutiny, we’ll find that every element of a mobile OS that might be marketed as a privacy feature is, in fact, strictly a security feature.
    Cybersecurity professionals have hailed application permissions as a major stride in privacy. But whom are they designed to help? These menus apply to applications that request access to certain hardware, from microphones and cameras to flash memory storage and wireless radios. This access-restriction feature serves the OS developer by letting users lock the developer’s competitors out of data that the OS itself still collects. The mobile OS developer controls the OS with unauditable compiled code. For all you know, permission controls on all the OS’s native apps could be ignored.

    However, even if we assume that the OS developer doesn’t thwart your restrictions on their own apps, the first-party apps still enjoy pride of place. There are more of them; they are preinstalled on your device, facilitate core mobile device features, require more permissions, and often lose core functions when those permissions are denied.
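In principle, a runtime permission gate is just a lookup that defaults to "deny" for third parties. The sketch below (names and structure are mine, not any real OS API) shows the asymmetry the text describes: first-party apps ship with broad grants, while everyone else must ask per resource:

```python
class PermissionManager:
    """Toy runtime permission gate: deny by default, grant per (app, resource)."""

    def __init__(self):
        self.grants = {}  # (app, resource) -> bool

    def grant(self, app: str, resource: str) -> None:
        self.grants[(app, resource)] = True

    def check(self, app: str, resource: str) -> bool:
        # Anything not explicitly granted is denied.
        return self.grants.get((app, resource), False)

pm = PermissionManager()

# First-party apps come preinstalled with broad grants "for core functionality".
for res in ("microphone", "camera", "storage", "location"):
    pm.grant("os_vendor_app", res)

# A third-party app only gets what the user tapped "Allow" on.
pm.grant("third_party_app", "camera")

assert pm.check("os_vendor_app", "microphone")
assert pm.check("third_party_app", "camera")
assert not pm.check("third_party_app", "microphone")  # locked out by default
```

The deny-by-default design is a genuine security control; the author's point is about who benefits most from where the default grants land.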
    Mobile OSes also sandbox every application, forcing each to run in an isolated software environment, oblivious to other applications and the underlying operating system. This, too, benefits the OS vendor. Like the app permission settings, this functionality makes it harder for third parties to grab the same data the OS effortlessly ingests. The OS relies on its own background processes to obtain the most valuable data and walls off every other app from those processes.
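Sandboxing can be pictured as each app being confined to its own private data directory, with any path that escapes it refused. A minimal sketch of that confinement (my own simplification, not how a real mobile sandbox is implemented, which relies on kernel-level user and namespace isolation):

```python
import os
import tempfile

# Toy sandbox: each app gets a private data directory; any attempt to
# resolve a path outside it is refused, mimicking per-app storage isolation.
SANDBOX_ROOT = tempfile.mkdtemp()

def app_dir(app: str) -> str:
    d = os.path.join(SANDBOX_ROOT, app)
    os.makedirs(d, exist_ok=True)
    return d

def sandboxed_open(app: str, path: str, mode: str = "r"):
    # Resolve the requested path and confirm it stays inside the app's own dir.
    base = os.path.realpath(app_dir(app))
    full = os.path.realpath(os.path.join(base, path))
    if not full.startswith(base + os.sep):
        raise PermissionError(f"{app} may not leave its sandbox")
    return open(full, mode)

# app_a writes a file inside its own sandbox: allowed.
with sandboxed_open("app_a", "notes.txt", "w") as f:
    f.write("private to app_a")

# app_b tries to traverse into app_a's directory: blocked.
try:
    sandboxed_open("app_b", "../app_a/notes.txt")
    raise AssertionError("escape should have been blocked")
except PermissionError:
    pass
```

The OS's own background processes, of course, run outside any such confinement.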
    Mobile Security Isn’t Designed With You in Mind
    The most powerful mobile security control is the denial of root privileges to all applications and users (besides, again, the OS itself). While it goes a long way toward keeping the user’s data safe, it is just as effective at subjecting everything and everyone using the device to the dictates of the OS. The security advantage is undeniable: if your user account can’t use root, then any malware that compromises it can’t either.
    By the same token, because you don’t have complete control over the OS, you are unable to reconfigure your device for privacy from the OS vendor.
    I’m not disparaging any of these security controls. All of them reinforce the protection of your data. I’m saying that they are not done primarily for the user’s benefit; that is secondary.
    Those of you familiar with my work might see the scroll bar near the bottom of this page and wonder why I haven’t mentioned Linux yet. The answer is that desktop operating systems, the form in which I prefer to run Linux, deserve their own examination. In a follow-up to this piece, I will discuss the paradox of desktop security and privacy.
    Please stay tuned.
  • Five Tricks Insurance Companies Use to Deny Claims (and How to Avoid Them)

    Insurance is a modern necessity—and it can often be a literal lifesaver. When any sort of disaster strikes, whether it’s to your property, your business, or your health, getting a payment on your claim can mean the difference between getting through the emergency in one piece and being buried under bills for years to come.
    While it might seem pretty straightforward, anyone who’s actually dealt with insurance companies knows that their least favorite thing to do in this world is pay out on claims. In fact, studies have found that 17% of claims made to HealthCare.gov insurers and 18% of claims made to private insurers are denied—with some insurance companies rejecting nearly 50% of all claims. Some insurers employ several legal but underhanded tricks to avoid paying out. Here are five common tricks insurance companies will use against you.
    Moving as slowly as possible
    The simplest trick an insurance company can play is to drag their feet on your claim—there’s a reason the phrase “deny and delay” has become linked to the insurance industry as a whole. This tactic works because your insurer knows you’re desperate for a settlement on your claim; that’s the whole point of insurance. The longer they make you wait, the more likely you’ll accept whatever they offer you, even if you know it’s much less than you’re entitled to.
    Delay tactics vary, but can include:
    Documentation overload. Insurance companies sometimes request huge amounts of documentation that takes a lot of time to procure and organize.
    Personnel swaps. Insurers may change the adjuster and other employees on your case, with each new person requiring some time to review the claim and get up to speed.
    Claiming no response. Your insurer asked for something, and you provided it. Weeks later they claim they never received it, and ask that you submit it again.
    The best way to counter these tactics is meticulous record-keeping. 
    If you feel that your insurer is asking for unnecessary documentation, ask them to explain why they need the documents—this sometimes makes document requests magically vanish. Send all written communications via certified mail and turn on read receipts for emails and other electronic communications.
    Love bombing
    Insurers are often extremely nice to you when you initially start dealing with them. They express sympathy, use the words “I’m sorry” in the context of your injuries or loss, and generally sound warm and friendly. That’s nice, and may even be genuine. But it’s also a tactic they use to delay or eventually find reasons to deny your claim.
    Studies have found that apologies can be weaponized, and that people will tolerate delays and settle for less if they think the other party is taking responsibility or expressing empathy. Being excessively nice to you can fool you into thinking you can rely on the insurer to handle things on your behalf. This lets the company delay as much as possible, stringing you along. Love bombing like this can also get you to say things you shouldn’t say—as we’ll see in the next entry.
    Twisting your words
    Another reason an insurance adjuster or other employee might apologize to you? Because it can gently goad you into expressing regrets of your own, which can then be used against you. Any expression of apology or regret could be interpreted as an admission of fault. And establishing a friendly, sympathetic rapport with you could be a tactic to get you to say more than you should by speculating on causes or your responsibility in a claim.
    For example, while speaking with a friendly adjuster about a car accident you were involved in, you might tell them you wish you’d noticed the other car driving erratically—if you had, you might have avoided the accident. 
This might seem like a simple observation, but it could be twisted into an admittance that you weren’t paying attention—and thus your claim is denied.Insurers will also sometimes demand that you submit a recorded statement, often very quickly after the incident covered by the claim. They will sometimes state that this is best for you, as it will get the claim moving more quickly. But a rushed recorded statement can also lead to a denied claim—if you’re wrong or inaccurate about anything in your statement, this could be used as a basis for denial.To avoid the tactics listed above, follow a few simple rules for dealing with insurers:Lawyer up. Hiring a lawyer to help you deal with insurance claims might seem like overkill, but you should always have a lawyer with you when speaking with insurers. An experienced attorney can ensure you don’t say anything that could hurt your claim.Never record. You’re typically not required to submit recorded statements, so don’t do it. Insist that all communications be done in person or over the phone, with your attorney present.Be suspicious. If an adjuster or other representative of the insurance company is overly friendly, don’t fall for it. Their sympathy might be genuine, but it can still get you into trouble if you let your guard down.RushingIronically, an effective tactic to delay and deny claims is to rush things. You file your claim, and before you can get organized, hire a lawyer, and recover from whatever happened, the insurer is on the phone, offering a settlement. This is often combined with pressure—like sending you a letter stating that you have two weeks to accept the settlement offer, or the company will “close the file.”This is usually designed to reduce the payout—that easy, fast settlement is probably for a lot less than the amount you’re actually entitled to—and to force errors on your part. 
By rushing to pull together information, you’ll probably make mistakes that can then be used to deny the claim.You can defend against this in two simple ways:Slow down. Ignore pressure to respond immediately or by arbitrary deadlines. Take your time, gather information, and respond when you’re ready. An attorney can help by acting as a buffer between you and the company.Know the statute. If your insurer tells you it will close your claim file, let it happen. It’s a meaningless administrative action. Your state’s laws will have a defined Statute of Limitations on your claim—a period of time when you’re legally allowed to pursue the claim. As long as you’re within that time period, you can force the insurer to re-open the claim.LowballingIf your insurer doesn’t think it can totally deny your claim, it may offer you a lowball settlement. You shouldn’t accept an insurance company’s calculations as gospel. In fact, the most important piece of information you can have about the claims process is that you are allowed to hire your own insurance adjuster. A public insurance adjuster works for you, not the insurance company, and will often arrive at a much higher figure for your claim.Hiring your own adjuster also insulates you from internal delays, as you won’t have to wait on your insurer’s processes or play phone tag with their adjuster. In addition to an experienced lawyer, a public adjuster can also ensure that you have all the support and knowledge you need to navigate the often murky world of insurance claims.
    Five Tricks Insurance Companies Use to Deny Claims (and How to Avoid Them)
    LIFEHACKER.COM