• New Court Order in Stratasys v. Bambu Lab Lawsuit

    There has been a new update to the ongoing Stratasys v. Bambu Lab patent infringement lawsuit. 
Both parties have agreed to consolidate the lead and member cases (2:24-CV-00644-JRG and 2:24-CV-00645-JRG) into a single case under Case No. 2:25-cv-00465-JRG.
Industrial 3D printing OEM Stratasys filed the request late last month. According to an official court document, Shenzhen-based Bambu Lab did not oppose the motion. Stratasys argued that this non-opposition amounted to the defendants waiving their right to challenge the request under U.S. patent law 35 U.S.C. § 299(a).
    On June 2, the U.S. District Court for the Eastern District of Texas, Marshall Division, ordered Bambu Lab to confirm in writing whether it agreed to the proposed case consolidation. The court took this step out of an “abundance of caution” to ensure both parties consented to the procedure before moving forward.
Bambu Lab submitted its response on June 12, agreeing to the consolidation. The company, along with co-defendants Shenzhen Tuozhu Technology Co., Ltd., Shanghai Lunkuo Technology Co., Ltd., and Tuozhu Technology Limited, waived its rights under 35 U.S.C. § 299(a). The court will now decide whether to merge the cases.
    This followed U.S. District Judge Rodney Gilstrap’s decision last month to deny Bambu Lab’s motion to dismiss the lawsuits. 
The Chinese desktop 3D printer manufacturer filed the motion in February 2025, arguing the cases were invalid because its U.S.-based subsidiary, Bambu Lab USA, was not named in the original litigation. However, it agreed that the lawsuit could continue in the Austin Division of the Western District of Texas, where a parallel case was filed last year.
    Judge Gilstrap denied the motion, ruling that the cases properly target the named defendants. He concluded that Bambu Lab USA isn’t essential to the dispute, and that any misnaming should be addressed in summary judgment, not dismissal.       
A Stratasys Fortus 450mc (left) and a Bambu Lab X1C (right). Image by 3D Printing Industry.
    Another twist in the Stratasys v. Bambu Lab lawsuit 
    Stratasys filed the two lawsuits against Bambu Lab in the Eastern District of Texas, Marshall Division, in August 2024. The company claims that Bambu Lab’s X1C, X1E, P1S, P1P, A1, and A1 mini 3D printers violate ten of its patents. These patents cover common 3D printing features, including purge towers, heated build plates, tool head force detection, and networking capabilities.
    Stratasys has requested a jury trial. It is seeking a ruling that Bambu Lab infringed its patents, along with financial damages and an injunction to stop Bambu from selling the allegedly infringing 3D printers.
Last October, Stratasys dropped its claims against two of the originally named defendants in the dispute. Court documents showed that Beijing Tiertime Technology Co., Ltd. and Beijing Yinhua Laser Rapid Prototyping and Mould Technology Co., Ltd. were removed. Both defendants represent the company Tiertime, China’s first 3D printer manufacturer. The District Court accepted the dismissal, with all claims dropped without prejudice.
    It’s unclear why Stratasys named Beijing-based Tiertime as a defendant in the first place, given the lack of an obvious connection to Bambu Lab. 
Tiertime and Stratasys have a history of legal disputes over patent issues. In 2013, Stratasys sued Afinia, Tiertime’s U.S. distributor and partner, for patent infringement. Afinia responded by suing uCRobotics, the Chinese distributor of MakerBot 3D printers, also alleging patent violations. Stratasys acquired MakerBot in June 2013, and MakerBot later merged with Ultimaker in 2022.
    In February 2025, Bambu Lab filed a motion to dismiss the original lawsuits. The company argued that Stratasys’ claims, focused on the sale, importation, and distribution of 3D printers in the United States, do not apply to the Shenzhen-based parent company. Bambu Lab contended that the allegations concern its American subsidiary, Bambu Lab USA, which was not named in the complaint filed in the Eastern District of Texas.
The motion claimed the case was invalid under Federal Rule of Civil Procedure 19, arguing that any party considered a “primary participant” in the allegations must be included as a defendant.
The court denied the motion on May 29, 2025. In the ruling, Judge Gilstrap explained that Stratasys’ allegations focus on the actions of the named defendants, not Bambu Lab USA. As a result, the official court document called Bambu Lab’s argument “unavailing.” Additionally, the judge stated that, since Bambu Lab USA and Bambu Lab are both owned by Shenzhen Tuozhu, “the interest of these two entities align,” meaning the original cases are valid.
    In the official court document, Judge Gilstrap emphasized that Stratasys can win or lose the lawsuits based solely on the actions of the current defendants, regardless of Bambu Lab USA’s involvement. He added that any potential risk to Bambu Lab USA’s business is too vague or hypothetical to justify making it a required party.
Finally, the court noted that even if Stratasys named the wrong defendant, this does not justify dismissal under Rule 12(b)(7). Instead, the judge stated it would be more appropriate for the defendants to raise that argument in a motion for summary judgment.
    The Bambu Lab X1C 3D printer. Image via Bambu Lab.
    3D printing patent battles 
The 3D printing industry has seen its fair share of patent infringement disputes in recent months. In May 2025, 3D printer hotend developer Slice Engineering reached an agreement with Creality over a patent non-infringement lawsuit.
    The Chinese 3D printer OEM filed the lawsuit in July 2024 in the U.S. District Court for the Northern District of Florida, Gainesville Division. The company claimed that Slice Engineering had falsely accused it of infringing two hotend patents, U.S. Patent Nos. 10,875,244 and 11,660,810. These cover mechanical and thermal features of Slice’s Mosquito 3D printer hotend. Creality requested a jury trial and sought a ruling confirming it had not infringed either patent.
Court documents show that Slice Engineering filed a countersuit in December 2024. The Gainesville-based company maintained that Creality “has infringed and continues to infringe” on both patents. In the filing, the company also denied allegations that it had harassed Creality’s partners, distributors, and customers, and claimed that Creality had refused to negotiate a resolution.
    The Creality v. Slice Engineering lawsuit has since been dropped following a mutual resolution. Court documents show that both parties have permanently dismissed all claims and counterclaims, agreeing to cover their own legal fees and costs. 
    In other news, large-format resin 3D printer manufacturer Intrepid Automation sued 3D Systems over alleged patent infringement. The lawsuit, filed in February 2025, accused 3D Systems of using patented technology in its PSLA 270 industrial resin 3D printer. The filing called the PSLA 270 a “blatant knock off” of Intrepid’s DLP multi-projection “Range” 3D printer.  
    San Diego-based Intrepid Automation called this alleged infringement the “latest chapter of 3DS’s brazen, anticompetitive scheme to drive a smaller competitor with more advanced technology out of the marketplace.” The lawsuit also accused 3D Systems of corporate espionage, claiming one of its employees stole confidential trade secrets that were later used to develop the PSLA 270 printer.
    3D Systems denied the allegations and filed a motion to dismiss the case. The company called the lawsuit “a desperate attempt” by Intrepid to distract from its own alleged theft of 3D Systems’ trade secrets.
    Who won the 2024 3D Printing Industry Awards?
Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news. You can also follow us on LinkedIn, and subscribe to the 3D Printing Industry YouTube channel to access more exclusive content. Featured image shows a Stratasys Fortus 450mc (left) and a Bambu Lab X1C (right). Image by 3D Printing Industry.
  • Malicious PyPI Package Masquerades as Chimera Module to Steal AWS, CI/CD, and macOS Data

    Jun 16, 2025Ravie LakshmananMalware / DevOps

Cybersecurity researchers have discovered a malicious package on the Python Package Index (PyPI) repository that's capable of harvesting sensitive developer-related information, such as credentials, configuration data, and environment variables, among others.
The package, named chimera-sandbox-extensions, attracted 143 downloads and likely targets users of a service called Chimera Sandbox, which was released by Singaporean tech company Grab last August to facilitate "experimentation and development of [machine learning] solutions."
    The package masquerades as a helper module for Chimera Sandbox, but "aims to steal credentials and other sensitive information such as Jamf configuration, CI/CD environment variables, AWS tokens, and more," JFrog security researcher Guy Korolevski said in a report published last week.
Once installed, it attempts to connect to an external domain whose name is generated using a domain generation algorithm (DGA) in order to download and execute a next-stage payload.
Specifically, the malware first acquires an authentication token from the domain, which is then used to send a request to the same domain and retrieve the Python-based information stealer.
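To make the mechanism concrete, the sketch below shows how a generic DGA plus token-then-payload flow of this kind can work. It is a minimal illustration only: the hashing scheme, seed, domain suffix, and URL paths are invented for this example, since JFrog has not published the package's actual algorithm.

```python
# Illustrative DGA sketch -- NOT the algorithm used by chimera-sandbox-extensions.
# A DGA derives a deterministic, ever-changing rendezvous domain so that
# defenders cannot simply blocklist one hardcoded C2 address.
import hashlib
import datetime

def generate_domain(seed: str, date: datetime.date) -> str:
    """Derive a pseudo-random but reproducible domain from a seed and the date."""
    digest = hashlib.sha256(f"{seed}-{date.isoformat()}".encode()).hexdigest()
    return f"{digest[:16]}.example-cdn.com"  # hypothetical suffix

# The reported flow: contact the generated domain for an auth token,
# then use that token to request the Python-based information stealer.
domain = generate_domain("chimera", datetime.date.today())
print(f"step 1: GET  https://{domain}/token    -> authentication token")
print(f"step 2: POST https://{domain}/payload  -> next-stage stealer (with token)")
```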

The stealer malware is equipped to siphon a wide range of data from infected machines; a defensive sketch after the list below shows how to audit a machine for the same exposure. This includes -

    JAMF receipts, which are records of software packages installed by Jamf Pro on managed computers
    Pod sandbox environment authentication tokens and git information
    CI/CD information from environment variables
    Zscaler host configuration
    Amazon Web Services account information and tokens
    Public IP address
    General platform, user, and host information
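As a rough, defensive way to gauge what a stealer in this class can see, the short sketch below enumerates two of the same categories on the local machine: sensitive-looking environment variables and Jamf receipts. The keyword list and the receipts path are illustrative assumptions, not indicators published by JFrog.

```python
# Defensive sketch: show what a malicious package running as you could read.
import os
from pathlib import Path

# Keywords a stealer might grep environment variables for (assumption).
SENSITIVE = ("AWS", "TOKEN", "SECRET", "KEY", "GIT", "CI")

exposed = [k for k in os.environ if any(w in k.upper() for w in SENSITIVE)]
print(f"{len(exposed)} sensitive-looking environment variables: {exposed}")

# Jamf Pro commonly records installed packages here on managed Macs;
# their mere presence tells an attacker the machine is Jamf-managed.
receipts = Path("/Library/Application Support/JAMF/Receipts")
if receipts.is_dir():
    print("Jamf receipts visible:", [p.name for p in receipts.iterdir()][:10])
```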

    The kind of data gathered by the malware shows that it's mainly geared towards corporate and cloud infrastructure. In addition, the extraction of JAMF receipts indicates that it's also capable of targeting Apple macOS systems.
    The collected information is sent via a POST request back to the same domain, after which the server assesses if the machine is a worthy target for further exploitation. However, JFrog said it was unable to obtain the payload at the time of analysis.
    "The targeted approach employed by this malware, along with the complexity of its multi-stage targeted payload, distinguishes it from the more generic open-source malware threats we have encountered thus far, highlighting the advancements that malicious packages have made recently," Jonathan Sar Shalom, director of threat research at JFrog Security Research team, said.

    "This new sophistication of malware underscores why development teams remain vigilant with updates—alongside proactive security research – to defend against emerging threats and maintain software integrity."
    The disclosure comes as SafeDep and Veracode detailed a number of malware-laced npm packages that are designed to execute remote code and download additional payloads. The packages in question are listed below -

eslint-config-airbnb-compat (676 downloads)
ts-runtime-compat-check (1,588 downloads)
solders (983 downloads)
@mediawave/lib (386 downloads)

All the identified npm packages have since been taken down from npm, but not before they were downloaded hundreds of times from the package registry.
SafeDep's analysis of eslint-config-airbnb-compat found that the JavaScript library has ts-runtime-compat-check listed as a dependency, which, in turn, contacts an external server defined in the former package ("proxy.eslint-proxy[.]site") to retrieve and execute a Base64-encoded string. The exact nature of the payload is unknown.
    "It implements a multi-stage remote code execution attack using a transitive dependency to hide the malicious code," SafeDep researcher Kunal Singh said.
    Solders, on the other hand, has been found to incorporate a post-install script in its package.json, causing the malicious code to be automatically executed as soon as the package is installed.
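Because npm runs preinstall, install, and postinstall hooks automatically, auditing installed packages for them is a cheap line of defense. The script below is a minimal, hypothetical audit that simply flags any dependency declaring such a hook; it is not taken from the Veracode report.

```python
# Flag installed npm packages that declare install-time lifecycle scripts,
# the mechanism the "solders" package abused. Run from a project root.
import json
from pathlib import Path

HOOKS = ("preinstall", "install", "postinstall")  # hooks npm runs on install

# Cover both unscoped (pkg/) and scoped (@scope/pkg/) manifests.
manifests = list(Path("node_modules").glob("*/package.json")) + \
            list(Path("node_modules").glob("@*/*/package.json"))

for manifest in manifests:
    try:
        scripts = json.loads(manifest.read_text(encoding="utf-8")).get("scripts", {})
    except (json.JSONDecodeError, OSError):
        continue  # skip unreadable or malformed manifests
    flagged = {h: scripts[h] for h in HOOKS if h in scripts}
    if flagged:
        # Any hit deserves a manual look; most legitimate packages need no hooks.
        print(f"{manifest.parent.relative_to('node_modules')}: {flagged}")
```

Installing with npm's documented --ignore-scripts flag (or setting ignore-scripts=true in .npmrc) blocks these hooks outright, at the cost of breaking the few packages that legitimately need them.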
    "At first glance, it's hard to believe that this is actually valid JavaScript," the Veracode Threat Research team said. "It looks like a seemingly random collection of Japanese symbols. It turns out that this particular obfuscation scheme uses the Unicode characters as variable names and a sophisticated chain of dynamic code generation to work."
Decoding the script reveals an extra layer of obfuscation, unpacking which reveals its main function: Check if the compromised machine is Windows, and if so, run a PowerShell command to retrieve a next-stage payload from a remote server ("firewall[.]tel").
This second-stage PowerShell script, also obscured, is designed to fetch a Windows batch script from another domain ("cdn.audiowave[.]org") and configure a Windows Defender Antivirus exclusion list to avoid detection. The batch script then paves the way for the execution of a .NET DLL that reaches out to a PNG image hosted on ImgBB ("i.ibb[.]co").
    "is grabbing the last two pixels from this image and then looping through some data contained elsewhere in it," Veracode said. "It ultimately builds up in memory YET ANOTHER .NET DLL."

Furthermore, the DLL is equipped to create task scheduler entries and features the ability to bypass user account control (UAC) using a combination of FodHelper.exe and programmatic identifiers (ProgIDs) to evade defenses and avoid triggering any security alerts to the user.
    The newly-downloaded DLL is Pulsar RAT, a "free, open-source Remote Administration Tool for Windows" and a variant of the Quasar RAT.
    "From a wall of Japanese characters to a RAT hidden within the pixels of a PNG file, the attacker went to extraordinary lengths to conceal their payload, nesting it a dozen layers deep to evade detection," Veracode said. "While the attacker's ultimate objective for deploying the Pulsar RAT remains unclear, the sheer complexity of this delivery mechanism is a powerful indicator of malicious intent."
    Crypto Malware in the Open-Source Supply Chain
    The findings also coincide with a report from Socket that identified credential stealers, cryptocurrency drainers, cryptojackers, and clippers as the main types of threats targeting the cryptocurrency and blockchain development ecosystem.

Examples of these packages include -

    express-dompurify and pumptoolforvolumeandcomment, which are capable of harvesting browser credentials and cryptocurrency wallet keys
    bs58js, which drains a victim's wallet and uses multi-hop transfers to obscure theft and frustrate forensic tracing.
lsjglsjdv, asyncaiosignal, and raydium-sdk-liquidity-init, which function as clippers, monitoring the system clipboard for cryptocurrency wallet strings and replacing them with threat actor‑controlled addresses to reroute transactions to the attackers (a minimal detector sketch follows this list)
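To make the clipper pattern concrete, here is a heuristic, self-contained detector sketch: it polls the clipboard and warns when one wallet-shaped string is replaced by another. The simplified regexes, the 0.5-second polling loop, and the third-party pyperclip dependency are all choices made for this illustration; none of it comes from Socket's report, and a real clipper can swap values faster than a poller samples them.

```python
# Heuristic clipboard-clipper detector (noisy by design; illustration only).
import re
import time
import pyperclip  # third-party: pip install pyperclip

WALLET_RE = re.compile(
    r"^(0x[0-9a-fA-F]{40}"                 # Ethereum-style address
    r"|[13][1-9A-HJ-NP-Za-km-z]{25,34})$"  # legacy Bitcoin-style address
)

last = None
while True:
    clip = pyperclip.paste().strip()
    if WALLET_RE.match(clip):
        # Two different wallet-shaped values in a row is clipper-like behavior
        # (or the user copied a second address -- hence "heuristic").
        if last is not None and clip != last:
            print(f"WARNING: clipboard wallet address changed: {last} -> {clip}")
        last = clip
    time.sleep(0.5)  # polling interval
```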

    "As Web3 development converges with mainstream software engineering, the attack surface for blockchain-focused projects is expanding in both scale and complexity," Socket security researcher Kirill Boychenko said.
    "Financially motivated threat actors and state-sponsored groups are rapidly evolving their tactics to exploit systemic weaknesses in the software supply chain. These campaigns are iterative, persistent, and increasingly tailored to high-value targets."
    AI and Slopsquatting
The rise of artificial intelligence (AI)-assisted coding, also called vibe coding, has unleashed another novel threat in the form of slopsquatting, where large language models (LLMs) can hallucinate non-existent but plausible package names that bad actors can weaponize to conduct supply chain attacks.
    Trend Micro, in a report last week, said it observed an unnamed advanced agent "confidently" cooking up a phantom Python package named starlette-reverse-proxy, only for the build process to crash with the error "module not found." However, should an adversary upload a package with the same name on the repository, it can have serious security consequences.
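One cheap countermeasure is to verify a suggested dependency against the registry before installing it. The sketch below queries PyPI's public JSON API (https://pypi.org/pypi/<name>/json); a 404 means the name is unregistered, and therefore exactly the kind of phantom a slopsquatter could claim. Note the converse does not hold: a package existing proves nothing, since an attacker may already have registered the hallucinated name.

```python
# Check whether an LLM-suggested package name actually exists on PyPI.
import json
import urllib.request
from urllib.error import HTTPError

def pypi_status(name: str) -> str:
    url = f"https://pypi.org/pypi/{name}/json"  # PyPI's public JSON API
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            info = json.load(resp)["info"]
            return f"exists on PyPI: {name} {info['version']}"
    except HTTPError as err:
        if err.code == 404:
            return f"NOT on PyPI: {name!r} is unregistered (slopsquat candidate)"
        raise

print(pypi_status("starlette-reverse-proxy"))  # the phantom name from the report
print(pypi_status("requests"))                 # a well-known, real package
```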

Furthermore, the cybersecurity company noted that advanced coding agents and workflows such as Claude Code CLI, OpenAI Codex CLI, and Cursor AI with Model Context Protocol (MCP)-backed validation can help reduce, but not completely eliminate, the risk of slopsquatting.
    "When agents hallucinate dependencies or install unverified packages, they create an opportunity for slopsquatting attacks, in which malicious actors pre-register those same hallucinated names on public registries," security researcher Sean Park said.
    "While reasoning-enhanced agents can reduce the rate of phantom suggestions by approximately half, they do not eliminate them entirely. Even the vibe-coding workflow augmented with live MCP validations achieves the lowest rates of slip-through, but still misses edge cases."

Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.

  • Archaeologists Stumble Onto Sprawling Ancient Roman Villa During Construction of a Road in France

    Cool Finds

    Located near Auxerre, the grand estate once possessed an exorbitant level of wealth, with thermal baths and heated floors

    Aerial view of the villa, with thermal baths at the bottom right, the garden and fountain in the center, and the agricultural fields expanding to the left
    Ch. Fouquin / INRAP

    In ancient times, all roads led to Rome—or so the saying goes. Nowadays, new roads can lead to Roman ruins.
During construction on an alternative route to D606, a regional road just under two miles outside of Auxerre, in central France, salvage archaeologists unearthed a sprawling Roman villa complete with a stately garden, a fountain and an elaborate system of underfloor heating known as a hypocaust, according to a statement from the French National Institute for Preventive Archaeological Research (INRAP).
While researchers have been aware of the ruins on the outskirts of the Gallo-Roman settlement of Autissiodorum (as Auxerre was once known) since the 19th century, previous excavations have been limited. The most recent dig, in 1966, found a 7,500-square-foot building with ten rooms and amenities that suggested its residents enjoyed great wealth and regional power.

    The site of Sainte-Nitasse, adjacent to a regional highway

    Ch. Fouquin / INRAP

    But until now, the true scale of the villa known as Sainte-Nitasse and its surrounding agricultural estates along the River Yonne was unclear. Archaeologists at INRAP have since discovered a 43,000-square-foot building thought to date to between the first and third centuries C.E. It suggests a previously unimagined level of grandeur.
    INRAP identifies the site as one of the “grand villas of Roman Gaul,” according to the statement. Grand villas are typified by their vast dimensions and sophisticated architectural style. They typically encompass both agricultural and residential portions, known in Latin as pars rustica and pars urbana, respectively. In the pars urbana, grand villas tend to feature stately construction materials like marble; extensive mosaics and frescoes; and amenities like private baths, fountains and gardens.
    So far, the excavations at Sainte-Nitasse have revealed all these features and more.
    The villa’s development is extensive. A 4,800-square-foot garden is enclosed by a fountain to the south and a water basin, or an ornamental pond, to the north. The hypocaust, an ancient system of central heating that circulated hot air beneath the floors of the house, signals a level of luxury atypical for rural estates in Roman Gaul.

    A section of the villa's hypocaust heating system, which circulated hot air beneath the floor

    Ch. Fouquin / INRAP

    “We can imagine it as an ‘aristocratic’ villa, belonging to someone with riches, responsibilities—perhaps municipal, given the proximity to Auxerre—a landowner who had staff on site,” Alexandre Burgevin, the archaeologist in charge of the excavations with INRAP, tells France Info’s Lisa Guyenne.
    Near the banks of the Yonne, a thermal bath site contains several pools where the landowner and his family bathed. On the other side of the garden, workers toiled in the fields of a massive agricultural estate.
Aside from its size and amenities, the villa’s level of preservation also astounded archaeologists. “For a rural site, it’s quite exceptional,” Burgevin tells L’Yonne Républicaine’s Titouan Stücker. “You can walk on floors from the time period, circulate between rooms like the Gallo-Romans did.”

Over time, Autissiodorum grew to become a major city along the Via Agrippa, eventually earning the honor of serving as a provincial Roman capital by the fourth century C.E. As Gaul began slipping away from the Roman Empire around the same time, the prominence of the city fluctuated. INRAP archaeologists speculate that the site was repurposed during medieval times, around the 13th century.
    Burgevin offers several explanations for why the site remained so well preserved in subsequent centuries. The humid conditions along the banks of the river might have prevented excess decay. Since this portion of the River Yonne wasn’t canalized until the 19th century, engineers may have already been aware of the presence of ruins. Or, perhaps the rubble of the villa created “bumpy,” intractable soil that was “not easy to pass over with a tractor,” he tells France Info.
While the site will briefly open to the public on June 15 for European Archaeology Days, an annual event held at sites across the continent, excavations will continue until September, at which time construction on the road will resume. Much work remains to be done, including filling in large gaps in the site’s chronology between the Roman and medieval eras.
    “We have well-built walls but few objects,” says Burgevin, per L’Yonne Républicaine. “It will be necessary to continue digging to understand better.”

    Get the latest stories in your inbox every weekday.
“You can walk on floors from the time period, circulate between rooms like the Gallo-Romans did.”Over time, Autissiodorum grew to become a major city along the Via Agrippa, eventually earning the honor of serving as a provincial Roman capital by the fourth century C.E. As Gaul began slipping away from the Roman Empire around the same time, the prominence of the city fluctuated. INRAP archaeologists speculate that the site was repurposed during medieval times, around the 13th century. Burgevin offers several explanations for why the site remained so well preserved in subsequent centuries. The humid conditions along the banks of the river might have prevented excess decay. Since this portion of the River Yonne wasn’t canalized until the 19th century, engineers may have already been aware of the presence of ruins. Or, perhaps the rubble of the villa created “bumpy,” intractable soil that was “not easy to pass over with a tractor,” he tells France Info. While the site will briefly open to the public on June 15 for European Archaeology Days, an annual event held at sites across the continent, excavations will continue until September, at which time construction on the road will resume. Much work is to be done, including filling in large gaps of the site’s chronology between the Roman and medieval eras. “We have well-built walls but few objects,” says Burgevin, per L’Yonne Républicaine. “It will be necessary to continue digging to understand better.” Get the latest stories in your inbox every weekday.
    Like
    Love
    Wow
    Sad
    Angry
    509
    2 Commenti 0 condivisioni
  • Block’s CFO explains Gen Z’s surprising approach to money management

    One stock recently impacted by a whirlwind of volatility is Block—the fintech powerhouse behind Square, Cash App, Tidal Music, and more. The company’s COO and CFO, Amrita Ahuja, shares how her team is using new AI tools to find opportunity amid disruption and reach customers left behind by traditional financial systems. Ahuja also shares lessons from the video game industry and discusses Gen Z’s surprising approach to money management.  

    This is an abridged transcript of an interview from Rapid Response, hosted by Robert Safian, former editor-in-chief of Fast Company. From the team behind the Masters of Scale podcast, Rapid Response features candid conversations with today’s top business leaders navigating real-time challenges. Subscribe to Rapid Response wherever you get your podcasts to ensure you never miss an episode.

    As a leader, when you’re looking at all of this volatility—the tariffs, consumer sentiment’s been unclear, the stock market’s been all over the place. You guys had a huge one-day drop in early May, and it quickly bounced back. How do you make sense of all these external factors?

    Yeah, our focus is on what we can control. And ultimately, the thing that we are laser-focused on for our business is product velocity. How quickly can we start small with something, launch something for our customers, and then test and iterate and learn so that ultimately, that something that we’ve launched scales into an important product?

    I’ll give you an example. Cash App Borrow, which is a product where our customers can get access to a line of credit, often $100, $200, that bridges them from paycheck to paycheck. We know so many Americans are living paycheck to paycheck. That’s a product that we launched about three years ago and have now scaled to serve 9 million actives with $15 billion in credit supply to our customers in a span of a couple short years.

    The more we can be out testing and launching product at a pace, the more we know we are ultimately delivering value to our customers, and the right things will happen from a stock perspective.

    Block is a financial services provider. You have Square, the point-of-sale system; the digital wallet Cash App, which you mentioned, which competes with Venmo and Robinhood; and a bunch of others. Then you’ve got the buy-now, pay-later leader Afterpay. You chair Square Financial Services, which is Block’s chartered bank. But you’ve said that in the fintech world, Block is only a little bit fin—that comparatively, it’s more tech. Can you explain what you mean by that?

    What we think is unique about us is our ability as a technology company to completely change innovation in the space, such that we can help solve systemic issues across credit, payments, commerce, and banking. What that means ultimately is we use technologies like AI and machine learning and data science, and we use these technologies in a unique way, in a way that’s different from a traditional bank. We are able to underwrite those who are often frankly forgotten by the traditional financial ecosystems.

    Our Square Loans product underwrites women-owned businesses at almost triple the industry rate: 58% of our loans go to women-owned businesses, versus 20% for the industry average. For that Cash App Borrow product I was talking about, 70% of the 9 million actives that we underwrote fell below a 580 FICO score. That’s considered a poor FICO score, and yet 97% of repayments are made on time. And this is because we have unique access to data, and to technologies and tools, that help us underwrite this often forgotten customer base.

    Yeah. I mean, credit—sometimes it’s been blamed for financial excesses. But access to credit is also, as you say, an advantage that’s not available to everyone. Do you have a philosophy between those poles—between risk and opportunity? Or are you saying that the tech you have allows you to avoid that risk?

    That’s right. Let’s start with how the current systems work. They work with inferior data, frankly. It’s more limited data. It’s outdated. Sometimes it’s inaccurate. And it ignores things like someone’s cash flows, the stability of your income, your savings rate, how money moves through your accounts, or how you use alternative forms of credit—like buy now, pay later, which we have in our ecosystem through Afterpay.

    We have a lot of these signals for our 57 million monthly actives on the Cash App side and for the 4 million small businesses on the Square side, plus, frankly, billions of transaction data points on any given day, all paired with new technologies. And we intend to continue to be at the forefront of AI, machine learning, and data science to be able to empower more people into the economy. The combination of superior data and these technologies is what we believe ultimately helps expand access.
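
    Block doesn’t publish how its underwriting models work, so the following is only a toy sketch, in Python, of the idea Ahuja describes: derive cash-flow signals (savings rate, income stability, account activity) from raw transaction history and fold them into a score. Every field name and weight below is invented for illustration; a real model would be learned from repayment data.

        import statistics
        from dataclasses import dataclass

        @dataclass
        class Txn:
            day: int        # day index within the observation window
            amount: float   # positive = inflow, negative = outflow

        def cash_flow_features(txns: list[Txn]) -> dict[str, float]:
            # Signals of the kind Ahuja lists: cash flows, income stability, savings rate.
            inflows = [t.amount for t in txns if t.amount > 0]
            total_in = sum(inflows)
            total_out = sum(-t.amount for t in txns if t.amount < 0)
            return {
                "savings_rate": (total_in - total_out) / total_in if total_in else 0.0,
                "income_stability": 1.0 / (1.0 + statistics.pstdev(inflows)) if inflows else 0.0,
                "activity": float(len(txns)),
            }

        def toy_borrow_score(f: dict[str, float]) -> float:
            # Invented weights, purely illustrative.
            return 0.5 * f["savings_rate"] + 0.4 * f["income_stability"] + 0.001 * f["activity"]

    The contrast with FICO-style inputs is the point: nothing above requires a credit history, only observed money movement.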

    You have a financial background, but not in the financial services industry. Before Block, you were a video game developer at Activision. Are financial businesses and video games similar? Are there things that are similar about them?

    There are. There actually are some things that are similar, I will say. There are many things that are unique to each industry. Each industry is incredibly complex. You find that when big technology companies try to do gaming. They’ve taken over the world in many different ways, but they can’t always crack the nut on putting out a great game. Similarly, some of the largest technology companies have dabbled in fintech but haven’t been able to go as deep, so they’re both very nuanced and complex industries.

    I would say another similarity is that design really matters. Industrial design, the design of products, the interface of products, is absolutely mission-critical to a great game, and it’s absolutely mission-critical to the simplicity and accessibility of our products, be it on Square or Cash App.

    And then maybe the third thing that I would say is that when I was in gaming, at least, the business models were rapidly changing from an intermediary distribution mechanism, like releasing a game once and then selling it through a retailer, to an always-on, direct-to-consumer connection. And similarly with banking, people don’t want to bank from 9 to 5, six days a week. They want 24/7 access to their money and the ability to, again, grow their financial livelihood and move their money around seamlessly. So, some similarities are there in that shift from an intermediary, slower model to an always-on, direct-to-consumer connection.

    Part of your target audience or your target customer base at Block are Gen Z folks. Did you learn things at Activision about Gen Z that have been useful? Are there things that businesses misunderstand about younger generations still?

    What we’ve learned is that Gen Z and millennial customers aren’t going to do things the way their parents did. Some of our stats show that 63% of Gen Z customers have moved away from traditional credit cards, and over 80% are skeptical of them. Which means they’re not using a credit card to manage expenses; they’re using a debit card, but then layering on credit on a transaction-by-transaction basis. Or, again, using tools like buy now, pay later, or Cash App Borrow; these are the means by which they’re managing their consistent cash flows. So that’s an example of how things are changing, and you’ve got to get up to speed with how the next generation of customers expects to manage their money.
  • Why Designers Get Stuck In The Details And How To Stop

    You’ve drawn fifty versions of the same screen — and you still hate every one of them. Begrudgingly, you pick three, show them to your product manager, and hear: “Looks cool, but the idea doesn’t work.” Sound familiar?
    In this article, I’ll unpack why designers fall into detail work at the wrong moment, examining both process pitfalls and the underlying psychological reasons, as understanding these traps is the first step to overcoming them. I’ll also share tactics I use to climb out of that trap.
    Reason #1: You’re Afraid To Show Rough Work
    We designers worship detail. We’re taught that true craft equals razor‑sharp typography, perfect grids, and pixel precision. So the minute a task arrives, we pop open Figma and start polishing long before polish is needed.
    I’ve skipped the sketch phase more times than I care to admit. I told myself it would be faster, yet I always ended up spending hours producing a tidy mock‑up when a scribbled thumbnail would have sparked a five‑minute chat with my product manager. Rough sketches felt “unprofessional,” so I hid them.
    The cost? Lost time, wasted energy — and, by the third redo, teammates were quietly wondering if I even understood the brief.
    The real problem here is the habit: we open Figma and start perfecting the UI before we’ve even solved the problem.
    So why do we hide these rough sketches? It’s not just a bad habit or plain silliness. There are solid psychological reasons behind it. We often just call it perfectionism, but it’s deeper than wanting things neat. Digging into the psychology (like the research by Hewitt and Flett) shows there are a couple of flavors driving this:

    Socially prescribed perfectionism: It’s that nagging feeling that everyone else expects perfect work from you, which makes showing anything rough feel like walking into the lion’s den.
    Self-oriented perfectionism: Where you’re the one setting impossibly high standards for yourself, leading to brutal self-criticism if anything looks slightly off.

    Either way, the result’s the same: showing unfinished work feels wrong, and you miss out on that vital early feedback.
    Back to the design side, remember that clients rarely see architects’ first pencil sketches, but these sketches still exist; they guide structural choices before the 3D render. Treat your thumbnails the same way — artifacts meant to collapse uncertainty, not portfolio pieces. Once stakeholders see the upside, roughness becomes a badge of speed, not sloppiness. So, the key is to consciously make that shift:
    Treat early sketches as disposable tools for thinking and actively share them to get feedback faster.

    Reason #2: You Fix The Symptom, Not The Cause
    Before tackling any task, we need to understand what business outcome we’re aiming for. Product managers might come to us asking to enlarge the payment button in the shopping cart because users aren’t noticing it. The suggested solution itself isn’t necessarily bad, but before redesigning the button, we should ask, “What data suggests they aren’t noticing it?” Don’t get me wrong, I’m not saying you shouldn’t trust your product manager. On the contrary, these questions help ensure you’re on the same page and working with the same data.
    From my experience, here are several reasons why users might not be clicking that coveted button:

    Users don’t understand that this step is for payment.
    They understand it’s about payment but expect order confirmation first.
    Due to incorrect translation, users don’t understand what the button means.
    Lack of trust signals (no security icons, unclear seller information).
    Unexpected additional costs (hidden fees, shipping) that appear at this stage.
    Technical issues (inactive button, page freezing).

    Now, imagine you simply did what the manager suggested. Would you have solved the problem? Hardly.
    Moreover, the responsibility for the unresolved issue would fall on you, as the interface solution lies within the design domain. The product manager actually did their job correctly by identifying a problem: suspiciously few users are clicking the button.
    Psychologically, taking on this bigger role isn’t easy. It means overcoming the fear of making mistakes and the discomfort of exploring unclear problems rather than just doing tasks. This shift means seeing ourselves as partners who create value — even if it means fighting a hesitation to question product managers (which might come from a fear of speaking up or a desire to avoid challenging authority) — and understanding that using our product logic expertise proactively is crucial for modern designers.
    There’s another critical reason why we, designers, need to be a bit like product managers: the rise of AI. I deliberately used a simple example about enlarging a button, but I’m confident that in the near future, AI will easily handle routine design tasks. This worries me, but at the same time, I’m already gladly stepping into the product manager’s territory: understanding product and business metrics, formulating hypotheses, conducting research, and so on. It might sound like I’m taking work away from PMs, but believe me, they undoubtedly have enough on their plates and are usually more than happy to delegate some responsibilities to designers.
    Reason #3: You’re Solving The Wrong Problem
    Before solving anything, ask whether the problem even deserves your attention.
    During a major home‑screen redesign, our goal was to drive more users into paid services. The initial hypothesis — making service buttons bigger and brighter might help returning users — seemed reasonable enough to test. However, even when A/B tests (a method of comparing two versions of a design to determine which performs better) showed minimal impact, we continued to tweak those buttons.
    Only later did it click: the home screen isn’t the place to sell; visitors open the app to start, not to buy. We removed that promo block, and nothing broke. Contextual entry points deeper into the journey performed brilliantly. Lesson learned:
    Without the right context, any visual tweak is lipstick on a pig.
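    As an aside on reading those A/B numbers: the sketch below is a standard two-sided, two-proportion z-test in Python (not our team’s actual tooling, and the tap counts are invented), showing what “minimal impact” looks like when you compute it rather than squint at a dashboard.

        from math import erf, sqrt

        def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
            # Compare conversion rates of variants A and B under the null of no difference.
            p_a, p_b = conv_a / n_a, conv_b / n_b
            p_pool = (conv_a + conv_b) / (n_a + n_b)
            se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
            z = (p_b - p_a) / se
            p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF via erf
            return z, p_value

        # Invented numbers: 380 vs. 400 taps out of 20,000 sessions per variant.
        z, p = two_proportion_z_test(380, 20_000, 400, 20_000)
        print(f"z = {z:.2f}, p = {p:.2f}")  # z = 0.72, p = 0.47: no evidence of lift

    A p-value that high is the cue to question the hypothesis itself rather than keep polishing the buttons.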

    Why did we get stuck polishing buttons instead of stopping sooner? It’s easy to get tunnel vision. Psychologically, it’s likely the good old sunk cost fallacy kicking in: we’d already invested time in the buttons, so stopping felt like wasting that effort, even though the data wasn’t promising.
    It’s just easier to keep fiddling with something familiar than to admit we need a new plan. Perhaps the simple question I should have asked myself when results stalled was: “Are we optimizing the right thing or just polishing something that fundamentally doesn’t fit the user’s primary goal here?” That alone might have saved hours.
    Reason #4: You’re Drowning In Unactionable Feedback
    We all discuss our work with colleagues. But here’s a crucial point: what kind of question do you pose to kick off that discussion? If your go-to is “What do you think?” well, that question might lead you down a rabbit hole of personal opinions rather than actionable insights. While experienced colleagues will cut through the noise, others, unsure what to evaluate, might comment on anything and everything — fonts, button colors, even when you desperately need to discuss a user flow.
    What matters here are two things:

    The question you ask,
    The context you give.

    That means clearly stating the problem, what you’ve learned, and how your idea aims to fix it.
    For instance:
    “The problem is our payment conversion rate has dropped by X%. I’ve interviewed users and found they abandon payment because they don’t understand how the total amount is calculated. My solution is to show a detailed cost breakdown. Do you think this actually solves the problem for them?”

    Here, you’ve stated the problem (conversion drop), shared your insight (user confusion), explained your solution (cost breakdown), and asked a direct question. It’s even better if you prepare a list of specific sub-questions. For instance: “Are all items in the cost breakdown clear?” or “Does the placement of this breakdown feel intuitive within the payment flow?”
    Another good habit is to keep your rough sketches and previous iterations handy. Some of your colleagues’ suggestions might be things you’ve already tried. It’s great if you can discuss them immediately to either revisit those ideas or definitively set them aside.
    I’m not a psychologist, but experience tells me that, psychologically, the reluctance to be this specific often stems from a fear of our solution being rejected. We tend to internalize feedback: a seemingly innocent comment like, “Have you considered other ways to organize this section?” or “Perhaps explore a different structure for this part?” can instantly morph in our minds into “You completely messed up the structure. You’re a bad designer.” Imposter syndrome, in all its glory.
    So, to wrap up this point, here are two recommendations:

    Prepare for every design discussion. A couple of focused questions will yield far more valuable input than a vague “So, what do you think?”
    Actively work on separating feedback on your design from your self-worth. If a mistake is pointed out, acknowledge it, learn from it, and you’ll be less likely to repeat it. This is often easier said than done. For me, it took years of working with a psychotherapist. If you struggle with this, I sincerely wish you strength in overcoming it.

    Reason #5: You’re Just Tired
    Sometimes, the issue isn’t strategic at all — it’s fatigue. Fussing over icon corners can feel like a cozy bunker when your brain is fried. There’s a name for this: decision fatigue. Basically, your brain’s battery for hard thinking is low, so it hides out in the easy, comfy zone of pixel-pushing.
    A striking example comes from a New York Times article titled “Do You Suffer From Decision Fatigue?” It described how judges deciding on release requests were far more likely to grant release early in the day (about 70% of cases) than late in the day (less than 10%), simply because their decision-making energy was depleted. Luckily, designers rarely hold someone’s freedom in their hands, but the example dramatically shows how fatigue can impact our judgment and productivity.
    What helps here:

    Swap tasks. Trade tickets with another designer; novelty resets your focus.
    Talk to another designer. If the NDA permits, ask peers outside the team for a sanity check.
    Step away. Even a ten‑minute walk can do more than a double‑shot espresso.

    By the way, I came up with these ideas while walking around my office. I was lucky to work near a river, and those short walks quickly turned into a helpful habit.

    And one more trick that helps me snap out of detail mode early: if I catch myself making around 20 little tweaks — changing font weight, color, border radius — I just stop. Over time, it turned into a habit. I have a similar one with Instagram: by the third reel, my brain quietly asks, “Wait, weren’t we working?” Funny how that kind of nudge saves a ton of time.
    Four Steps I Use to Avoid Drowning In Detail
    Knowing these potential traps, here’s the practical process I use to stay on track:
    1. Define the Core Problem & Business Goal
    Before anything, dig deep: what’s the actual problem we’re solving, not just the requested task or a surface-level symptom? Ask ‘why’ repeatedly. What user pain or business need are we addressing? Then, state the clear business goal: “What metric am I moving, and do we have data to prove this is the right lever?” If retention is the goal, decide whether push reminders, gamification, or personalized content is the best route. The wrong lever, or tackling a symptom instead of the cause, dooms everything downstream.
    2. Choose the Mechanic
    Once the core problem and goal are clear, lock the solution principle or ‘mechanic’ first. Going with a game layer? Decide if it’s leaderboards, streaks, or badges. Write it down. Then move on. No UI yet. This keeps the focus high-level before diving into pixels.
    3. Wireframe the Flow & Get Focused Feedback
    Now open Figma. Map screens, layout, and transitions. Boxes and arrows are enough. Keep the fidelity low so the discussion stays on the flow, not color. Crucially, when you share these early wires, ask specific questions and provide clear context to get actionable feedback, not just vague opinions.
    4. Polish the Visuals
    I only let myself tweak grids, type scales, and shadows after the flow is validated. If progress stalls, or before a major polish effort, I surface the work in a design critique — again using targeted questions and clear context — instead of hiding in version 47. This ensures detailing serves the now-validated solution.
    Even for something as small as a single button, running these four checkpoints takes about ten minutes and saves hours of decorative dithering.
    Wrapping Up
    Next time you feel the pull to vanish into mock‑ups before the problem is nailed down, pause and ask what you might be avoiding: maybe the fuzzy core problem, or the discomfort of asking for tough feedback. Yes, that can expose an uncomfortable truth, but it also gives you the power to face the real issue head-on. It keeps the project focused on solving the right problem, not just perfecting a flawed solution.
    Attention to detail is a superpower when used at the right moment. Obsessing over pixels too soon, though, is a bad habit and a warning light telling us the process needs a rethink.
    #why #designers #get #stuck #details
    Why Designers Get Stuck In The Details And How To Stop
    You’ve drawn fifty versions of the same screen — and you still hate every one of them. Begrudgingly, you pick three, show them to your product manager, and hear: “Looks cool, but the idea doesn’t work.” Sound familiar? In this article, I’ll unpack why designers fall into detail work at the wrong moment, examining both process pitfalls and the underlying psychological reasons, as understanding these traps is the first step to overcoming them. I’ll also share tactics I use to climb out of that trap. Reason #1 You’re Afraid To Show Rough Work We designers worship detail. We’re taught that true craft equals razor‑sharp typography, perfect grids, and pixel precision. So the minute a task arrives, we pop open Figma and start polishing long before polish is needed. I’ve skipped the sketch phase more times than I care to admit. I told myself it would be faster, yet I always ended up spending hours producing a tidy mock‑up when a scribbled thumbnail would have sparked a five‑minute chat with my product manager. Rough sketches felt “unprofessional,” so I hid them. The cost? Lost time, wasted energy — and, by the third redo, teammates were quietly wondering if I even understood the brief. The real problem here is the habit: we open Figma and start perfecting the UI before we’ve even solved the problem. So why do we hide these rough sketches? It’s not just a bad habit or plain silly. There are solid psychological reasons behind it. We often just call it perfectionism, but it’s deeper than wanting things neat. Digging into the psychologyshows there are a couple of flavors driving this: Socially prescribed perfectionismIt’s that nagging feeling that everyone else expects perfect work from you, which makes showing anything rough feel like walking into the lion’s den. Self-oriented perfectionismWhere you’re the one setting impossibly high standards for yourself, leading to brutal self-criticism if anything looks slightly off. Either way, the result’s the same: showing unfinished work feels wrong, and you miss out on that vital early feedback. Back to the design side, remember that clients rarely see architects’ first pencil sketches, but these sketches still exist; they guide structural choices before the 3D render. Treat your thumbnails the same way — artifacts meant to collapse uncertainty, not portfolio pieces. Once stakeholders see the upside, roughness becomes a badge of speed, not sloppiness. So, the key is to consciously make that shift: Treat early sketches as disposable tools for thinking and actively share them to get feedback faster. Reason #2: You Fix The Symptom, Not The Cause Before tackling any task, we need to understand what business outcome we’re aiming for. Product managers might come to us asking to enlarge the payment button in the shopping cart because users aren’t noticing it. The suggested solution itself isn’t necessarily bad, but before redesigning the button, we should ask, “What data suggests they aren’t noticing it?” Don’t get me wrong, I’m not saying you shouldn’t trust your product manager. On the contrary, these questions help ensure you’re on the same page and working with the same data. From my experience, here are several reasons why users might not be clicking that coveted button: Users don’t understand that this step is for payment. They understand it’s about payment but expect order confirmation first. Due to incorrect translation, users don’t understand what the button means. Lack of trust signals. Unexpected additional coststhat appear at this stage. 
Technical issues. Now, imagine you simply did what the manager suggested. Would you have solved the problem? Hardly. Moreover, the responsibility for the unresolved issue would fall on you, as the interface solution lies within the design domain. The product manager actually did their job correctly by identifying a problem: suspiciously, few users are clicking the button. Psychologically, taking on this bigger role isn’t easy. It means overcoming the fear of making mistakes and the discomfort of exploring unclear problems rather than just doing tasks. This shift means seeing ourselves as partners who create value — even if it means fighting a hesitation to question product managers— and understanding that using our product logic expertise proactively is crucial for modern designers. There’s another critical reason why we, designers, need to be a bit like product managers: the rise of AI. I deliberately used a simple example about enlarging a button, but I’m confident that in the near future, AI will easily handle routine design tasks. This worries me, but at the same time, I’m already gladly stepping into the product manager’s territory: understanding product and business metrics, formulating hypotheses, conducting research, and so on. It might sound like I’m taking work away from PMs, but believe me, they undoubtedly have enough on their plates and are usually more than happy to delegate some responsibilities to designers. Reason #3: You’re Solving The Wrong Problem Before solving anything, ask whether the problem even deserves your attention. During a major home‑screen redesign, our goal was to drive more users into paid services. The initial hypothesis — making service buttons bigger and brighter might help returning users — seemed reasonable enough to test. However, even when A/B testsshowed minimal impact, we continued to tweak those buttons. Only later did it click: the home screen isn’t the place to sell; visitors open the app to start, not to buy. We removed that promo block, and nothing broke. Contextual entry points deeper into the journey performed brilliantly. Lesson learned: Without the right context, any visual tweak is lipstick on a pig. Why did we get stuck polishing buttons instead of stopping sooner? It’s easy to get tunnel vision. Psychologically, it’s likely the good old sunk cost fallacy kicking in: we’d already invested time in the buttons, so stopping felt like wasting that effort, even though the data wasn’t promising. It’s just easier to keep fiddling with something familiar than to admit we need a new plan. Perhaps the simple question I should have asked myself when results stalled was: “Are we optimizing the right thing or just polishing something that fundamentally doesn’t fit the user’s primary goal here?” That alone might have saved hours. Reason #4: You’re Drowning In Unactionable Feedback We all discuss our work with colleagues. But here’s a crucial point: what kind of question do you pose to kick off that discussion? If your go-to is “What do you think?” well, that question might lead you down a rabbit hole of personal opinions rather than actionable insights. While experienced colleagues will cut through the noise, others, unsure what to evaluate, might comment on anything and everything — fonts, button colors, even when you desperately need to discuss a user flow. What matters here are two things: The question you ask, The context you give. That means clearly stating the problem, what you’ve learned, and how your idea aims to fix it. 
For instance: “The problem is our payment conversion rate has dropped by X%. I’ve interviewed users and found they abandon payment because they don’t understand how the total amount is calculated. My solution is to show a detailed cost breakdown. Do you think this actually solves the problem for them?” Here, you’ve stated the problem, shared your insight, explained your solution, and asked a direct question. It’s even better if you prepare a list of specific sub-questions. For instance: “Are all items in the cost breakdown clear?” or “Does the placement of this breakdown feel intuitive within the payment flow?” Another good habit is to keep your rough sketches and previous iterations handy. Some of your colleagues’ suggestions might be things you’ve already tried. It’s great if you can discuss them immediately to either revisit those ideas or definitively set them aside. I’m not a psychologist, but experience tells me that, psychologically, the reluctance to be this specific often stems from a fear of our solution being rejected. We tend to internalize feedback: a seemingly innocent comment like, “Have you considered other ways to organize this section?” or “Perhaps explore a different structure for this part?” can instantly morph in our minds into “You completely messed up the structure. You’re a bad designer.” Imposter syndrome, in all its glory. So, to wrap up this point, here are two recommendations: Prepare for every design discussion.A couple of focused questions will yield far more valuable input than a vague “So, what do you think?”. Actively work on separating feedback on your design from your self-worth.If a mistake is pointed out, acknowledge it, learn from it, and you’ll be less likely to repeat it. This is often easier said than done. For me, it took years of working with a psychotherapist. If you struggle with this, I sincerely wish you strength in overcoming it. Reason #5 You’re Just Tired Sometimes, the issue isn’t strategic at all — it’s fatigue. Fussing over icon corners can feel like a cozy bunker when your brain is fried. There’s a name for this: decision fatigue. Basically, your brain’s battery for hard thinking is low, so it hides out in the easy, comfy zone of pixel-pushing. A striking example comes from a New York Times article titled “Do You Suffer From Decision Fatigue?.” It described how judges deciding on release requests were far more likely to grant release early in the daycompared to late in the daysimply because their decision-making energy was depleted. Luckily, designers rarely hold someone’s freedom in their hands, but the example dramatically shows how fatigue can impact our judgment and productivity. What helps here: Swap tasks.Trade tickets with another designer; novelty resets your focus. Talk to another designer.If NDA permits, ask peers outside the team for a sanity check. Step away.Even a ten‑minute walk can do more than a double‑shot espresso. By the way, I came up with these ideas while walking around my office. I was lucky to work near a river, and those short walks quickly turned into a helpful habit. And one more trick that helps me snap out of detail mode early: if I catch myself making around 20 little tweaks — changing font weight, color, border radius — I just stop. Over time, it turned into a habit. I have a similar one with Instagram: by the third reel, my brain quietly asks, “Wait, weren’t we working?” Funny how that kind of nudge saves a ton of time. 
Four Steps I Use to Avoid Drowning In Detail Knowing these potential traps, here’s the practical process I use to stay on track: 1. Define the Core Problem & Business Goal Before anything, dig deep: what’s the actual problem we’re solving, not just the requested task or a surface-level symptom? Ask ‘why’ repeatedly. What user pain or business need are we addressing? Then, state the clear business goal: “What metric am I moving, and do we have data to prove this is the right lever?” If retention is the goal, decide whether push reminders, gamification, or personalised content is the best route. The wrong lever, or tackling a symptom instead of the cause, dooms everything downstream. 2. Choose the MechanicOnce the core problem and goal are clear, lock the solution principle or ‘mechanic’ first. Going with a game layer? Decide if it’s leaderboards, streaks, or badges. Write it down. Then move on. No UI yet. This keeps the focus high-level before diving into pixels. 3. Wireframe the Flow & Get Focused Feedback Now open Figma. Map screens, layout, and transitions. Boxes and arrows are enough. Keep the fidelity low so the discussion stays on the flow, not colour. Crucially, when you share these early wires, ask specific questions and provide clear contextto get actionable feedback, not just vague opinions. 4. Polish the VisualsI only let myself tweak grids, type scales, and shadows after the flow is validated. If progress stalls, or before a major polish effort, I surface the work in a design critique — again using targeted questions and clear context — instead of hiding in version 47. This ensures detailing serves the now-validated solution. Even for something as small as a single button, running these four checkpoints takes about ten minutes and saves hours of decorative dithering. Wrapping Up Next time you feel the pull to vanish into mock‑ups before the problem is nailed down, pause and ask what you might be avoiding. Yes, that can expose an uncomfortable truth. But pausing to ask what you might be avoiding — maybe the fuzzy core problem, or just asking for tough feedback — gives you the power to face the real issue head-on. It keeps the project focused on solving the right problem, not just perfecting a flawed solution. Attention to detail is a superpower when used at the right moment. Obsessing over pixels too soon, though, is a bad habit and a warning light telling us the process needs a rethink. #why #designers #get #stuck #details
    SMASHINGMAGAZINE.COM
    Why Designers Get Stuck In The Details And How To Stop
    You’ve drawn fifty versions of the same screen — and you still hate every one of them. Begrudgingly, you pick three, show them to your product manager, and hear: “Looks cool, but the idea doesn’t work.” Sound familiar? In this article, I’ll unpack why designers fall into detail work at the wrong moment, examining both process pitfalls and the underlying psychological reasons, because understanding these traps is the first step to overcoming them. I’ll also share tactics I use to climb out of that trap.

    Reason #1: You’re Afraid To Show Rough Work

    We designers worship detail. We’re taught that true craft equals razor‑sharp typography, perfect grids, and pixel precision. So the minute a task arrives, we pop open Figma and start polishing long before polish is needed.

    I’ve skipped the sketch phase more times than I care to admit. I told myself it would be faster, yet I always ended up spending hours producing a tidy mock‑up when a scribbled thumbnail would have sparked a five‑minute chat with my product manager. Rough sketches felt “unprofessional,” so I hid them. The cost? Lost time, wasted energy — and, by the third redo, teammates were quietly wondering if I even understood the brief. The real problem here is the habit: we open Figma and start perfecting the UI before we’ve even solved the problem.

    So why do we hide these rough sketches? It’s not just a bad habit or plain silliness; there are solid psychological reasons behind it. We often just call it perfectionism, but it’s deeper than wanting things neat. Digging into the psychology (like the research by Hewitt and Flett) shows there are a couple of flavors driving this:

    - Socially prescribed perfectionism: that nagging feeling that everyone else expects perfect work from you, which makes showing anything rough feel like walking into the lion’s den.
    - Self-oriented perfectionism: you’re the one setting impossibly high standards for yourself, leading to brutal self-criticism if anything looks slightly off.

    Either way, the result’s the same: showing unfinished work feels wrong, and you miss out on that vital early feedback.

    Back on the design side, remember that clients rarely see architects’ first pencil sketches, but those sketches still exist; they guide structural choices before the 3D render. Treat your thumbnails the same way — artifacts meant to collapse uncertainty, not portfolio pieces. Once stakeholders see the upside, roughness becomes a badge of speed, not sloppiness. So the key is to consciously make that shift: treat early sketches as disposable tools for thinking, and actively share them to get feedback faster.

    Reason #2: You Fix The Symptom, Not The Cause

    Before tackling any task, we need to understand what business outcome we’re aiming for. Product managers might come to us asking to enlarge the payment button in the shopping cart because users aren’t noticing it. The suggested solution itself isn’t necessarily bad, but before redesigning the button, we should ask, “What data suggests they aren’t noticing it?” Don’t get me wrong, I’m not saying you shouldn’t trust your product manager. On the contrary, these questions help ensure you’re on the same page and working with the same data.

    From my experience, here are several reasons why users might not be clicking that coveted button:

    - Users don’t understand that this step is for payment.
    - They understand it’s about payment but expect order confirmation first.
    - Due to incorrect translation, users don’t understand what the button means.
    - Lack of trust signals (no security icons, unclear seller information).
    - Unexpected additional costs (hidden fees, shipping) that appear at this stage.
    - Technical issues (inactive button, page freezing).

    Now, imagine you simply did what the manager suggested. Would you have solved the problem? Hardly. Moreover, the responsibility for the unresolved issue would fall on you, as the interface solution lies within the design domain. The product manager actually did their job correctly by identifying a problem: suspiciously few users are clicking the button.

    Psychologically, taking on this bigger role isn’t easy. It means overcoming the fear of making mistakes and the discomfort of exploring unclear problems rather than just doing tasks. This shift means seeing ourselves as partners who create value — even if it means fighting a hesitation to question product managers (which might come from a fear of speaking up or a desire to avoid challenging authority) — and understanding that using our product-logic expertise proactively is crucial for modern designers.

    There’s another critical reason why we designers need to be a bit like product managers: the rise of AI. I deliberately used a simple example about enlarging a button, but I’m confident that in the near future, AI will easily handle routine design tasks. This worries me, but at the same time, I’m already gladly stepping into the product manager’s territory: understanding product and business metrics, formulating hypotheses, conducting research, and so on. It might sound like I’m taking work away from PMs, but believe me, they undoubtedly have enough on their plates and are usually more than happy to delegate some responsibilities to designers.

    Reason #3: You’re Solving The Wrong Problem

    Before solving anything, ask whether the problem even deserves your attention. During a major home‑screen redesign, our goal was to drive more users into paid services. The initial hypothesis — making service buttons bigger and brighter might help returning users — seemed reasonable enough to test. However, even when A/B tests (a method of comparing two versions of a design to determine which performs better) showed minimal impact, we continued to tweak those buttons. Only later did it click: the home screen isn’t the place to sell; visitors open the app to start, not to buy. We removed that promo block, and nothing broke. Contextual entry points deeper into the journey performed brilliantly. Lesson learned: without the right context, any visual tweak is lipstick on a pig.

    Why did we get stuck polishing buttons instead of stopping sooner? It’s easy to get tunnel vision. Psychologically, it’s likely the good old sunk-cost fallacy kicking in: we’d already invested time in the buttons, so stopping felt like wasting that effort, even though the data wasn’t promising. It’s just easier to keep fiddling with something familiar than to admit we need a new plan. Perhaps the simple question I should have asked myself when results stalled was: “Are we optimizing the right thing, or just polishing something that fundamentally doesn’t fit the user’s primary goal here?” That alone might have saved hours.
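    To make the A/B point concrete: before another round of button polish, a quick significance check tells you whether a variant moved the metric at all. Here is a minimal sketch in plain Python; the function and the conversion counts are made up for illustration, not taken from the experiment described above.

        # Two-proportion z-test: did variant B really out-convert control A?
        # All numbers below are hypothetical.
        from math import sqrt, erf

        def two_proportion_z(conv_a, n_a, conv_b, n_b):
            """Two-sided z-test for the difference between two conversion rates."""
            p_a, p_b = conv_a / n_a, conv_b / n_b
            p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
            se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
            z = (p_b - p_a) / se
            p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided tail
            return z, p_value

        # Control vs. "bigger, brighter buttons" variant (made-up counts):
        z, p = two_proportion_z(conv_a=480, n_a=10_000, conv_b=495, n_b=10_000)
        print(f"z = {z:.2f}, p = {p:.3f}")  # p far above 0.05: no evidence the tweak helped

    A result like this is the data’s way of saying the lever is wrong, which is exactly the moment to stop polishing and re-question the problem.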
    Reason #4: You’re Drowning In Unactionable Feedback

    We all discuss our work with colleagues. But here’s a crucial point: what kind of question do you pose to kick off that discussion? If your go-to is “What do you think?”, that question might lead you down a rabbit hole of personal opinions rather than actionable insights. While experienced colleagues will cut through the noise, others, unsure what to evaluate, might comment on anything and everything — fonts, button colors — even when you desperately need to discuss a user flow.

    What matters here are two things:

    - The question you ask,
    - The context you give.

    That means clearly stating the problem, what you’ve learned, and how your idea aims to fix it. For instance: “The problem is our payment conversion rate has dropped by X%. I’ve interviewed users and found they abandon payment because they don’t understand how the total amount is calculated. My solution is to show a detailed cost breakdown. Do you think this actually solves the problem for them?” Here, you’ve stated the problem (conversion drop), shared your insight (user confusion), explained your solution (cost breakdown), and asked a direct question.

    It’s even better if you prepare a list of specific sub-questions. For instance: “Are all items in the cost breakdown clear?” or “Does the placement of this breakdown feel intuitive within the payment flow?”

    Another good habit is to keep your rough sketches and previous iterations handy. Some of your colleagues’ suggestions might be things you’ve already tried. It’s great if you can discuss them immediately, to either revisit those ideas or definitively set them aside.

    I’m not a psychologist, but experience tells me that the reluctance to be this specific often stems from a fear of our solution being rejected. We tend to internalize feedback: a seemingly innocent comment like “Have you considered other ways to organize this section?” or “Perhaps explore a different structure for this part?” can instantly morph in our minds into “You completely messed up the structure. You’re a bad designer.” Imposter syndrome, in all its glory.

    So, to wrap up this point, here are two recommendations:

    - Prepare for every design discussion. A couple of focused questions will yield far more valuable input than a vague “So, what do you think?”
    - Actively work on separating feedback on your design from your self-worth. If a mistake is pointed out, acknowledge it, learn from it, and you’ll be less likely to repeat it. This is often easier said than done; for me, it took years of working with a psychotherapist. If you struggle with this, I sincerely wish you strength in overcoming it.

    Reason #5: You’re Just Tired

    Sometimes, the issue isn’t strategic at all — it’s fatigue. Fussing over icon corners can feel like a cozy bunker when your brain is fried. There’s a name for this: decision fatigue. Basically, your brain’s battery for hard thinking is low, so it hides out in the easy, comfy zone of pixel-pushing.

    A striking example comes from a New York Times article titled “Do You Suffer From Decision Fatigue?” It described how judges deciding on release requests were far more likely to grant release early in the day (about 70% of cases) than late in the day (less than 10%), simply because their decision-making energy was depleted. Luckily, designers rarely hold someone’s freedom in their hands, but the example dramatically shows how fatigue can impact our judgment and productivity.

    What helps here:

    - Swap tasks. Trade tickets with another designer; novelty resets your focus.
    - Talk to another designer. If your NDA permits, ask peers outside the team for a sanity check.
    - Step away. Even a ten‑minute walk can do more than a double‑shot espresso.

    By the way, I came up with these ideas while walking around my office.
    I was lucky to work near a river, and those short walks quickly turned into a helpful habit. And one more trick that helps me snap out of detail mode early: if I catch myself making around twenty little tweaks — changing font weight, color, border radius — I just stop. Over time, it turned into a habit. I have a similar one with Instagram: by the third reel, my brain quietly asks, “Wait, weren’t we working?” Funny how that kind of nudge saves a ton of time.

    Four Steps I Use To Avoid Drowning In Detail

    Knowing these potential traps, here’s the practical process I use to stay on track:

    1. Define the Core Problem & Business Goal
    Before anything, dig deep: what’s the actual problem we’re solving, not just the requested task or a surface-level symptom? Ask “why” repeatedly. What user pain or business need are we addressing? Then state the clear business goal: “What metric am I moving, and do we have data to prove this is the right lever?” If retention is the goal, decide whether push reminders, gamification, or personalised content is the best route. The wrong lever, or tackling a symptom instead of the cause, dooms everything downstream.

    2. Choose the Mechanic (Solution Principle)
    Once the core problem and goal are clear, lock the solution principle, or “mechanic,” first. Going with a game layer? Decide if it’s leaderboards, streaks, or badges. Write it down. Then move on. No UI yet. This keeps the focus high-level before diving into pixels.

    3. Wireframe the Flow & Get Focused Feedback
    Now open Figma. Map screens, layout, and transitions. Boxes and arrows are enough. Keep the fidelity low so the discussion stays on the flow, not colour. Crucially, when you share these early wires, ask specific questions and provide clear context (as discussed in Reason #4) to get actionable feedback, not just vague opinions.

    4. Polish the Visuals (Mindfully)
    I only let myself tweak grids, type scales, and shadows after the flow is validated. If progress stalls, or before a major polish effort, I surface the work in a design critique — again using targeted questions and clear context — instead of hiding in version 47. This ensures detailing serves the now-validated solution.

    Even for something as small as a single button, running these four checkpoints takes about ten minutes and saves hours of decorative dithering.

    Wrapping Up

    Next time you feel the pull to vanish into mock‑ups before the problem is nailed down, pause and ask what you might be avoiding. Yes, that can expose an uncomfortable truth — maybe a fuzzy core problem, or the need to ask for tough feedback — but naming it gives you the power to face the real issue head-on. It keeps the project focused on solving the right problem, not just perfecting a flawed solution. Attention to detail is a superpower when used at the right moment. Obsessing over pixels too soon, though, is a bad habit and a warning light telling us the process needs a rethink.
  • CD Projekt RED: TW4 has console first development with a 60fps target; 60fps on Series S will be "extremely challenging"

    DriftingSpirit
    Member

    Oct 25, 2017

    18,563

    They note how they usually start with PC and scale down, but they will be doing it the other way around this time to avoid issues with the console versions.

    4:15 for console focus and 60fps
    38:50 for the Series S comment 

    bsigg
    Member

    Oct 25, 2017

    25,153

    [DF] Inside The Witcher 4 Unreal Engine 5 Tech Demo: CD Projekt RED + Epic Deep Dive Interview
    https://www.youtube.com/watch?v=OplYN2MMI4Q
    www.resetera.com

     

    Skot
    Member

    Oct 30, 2017

    645

    720p on Series S incoming
     

    Bulby
    Prophet of Truth
    Member

    Oct 29, 2017

    6,006

    Berlin

    I think any Series S user will be happy with a beautiful 900p 30fps
     

    Chronos
    Member

    Oct 27, 2017

    1,249

    This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation.
     

    HellofaMouse
    Member

    Oct 27, 2017

    8,551

    i wonder if this'll come out before the gen is over?

    good chance itll be a 2077 situation, cross-gen release with a broken ps6 version 

    logash
    Member

    Oct 27, 2017

    6,526

    This makes sense since they want to have good performance on lower end machines and they mentioned that it was easier to scale up than to scale down. They also mentioned their legacy on PC and how they plan on scaling it up high like they usually do on PC.
     

    KRT
    Member

    Aug 7, 2020

    247

    Series S was a mistake
     

    chris 1515
    Member

    Oct 27, 2017

    7,116

    Barcelona Spain

    The game has ray tracing GI and reflections; it will probably be 30 fps, 600p-720p, on Xbox Series S.
     

    bitcloudrzr
    Member

    May 31, 2018

    21,044

    Bulby said:

    I think any Series S user will be happy with a beautiful 900p 30fps

     

    Yuuber
    Member

    Oct 28, 2017

    4,540

    KRT said:

    Series S was a mistake


    Can we stop with these stupid takes? For all we know it sold as much as Series X, helped several games have better optimization on bigger consoles and it will definitely help optimizing newer games to the Nintendo Switch 2. 

    MANTRA
    Member

    Feb 21, 2024

    1,198

    No one who cares about 60fps should be buying a Series S, just make it 30fps.
     

    Roytheone
    Member

    Oct 25, 2017

    6,185

    Chronos said:

    This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation.


    They can just go for 30 fps instead on the Series S. No need for a special deal for that, that's allowed. 

    Matterhorn
    Member

    Feb 6, 2019

    254

    United States

    Hoping for a very nice looking 30fps Switch 2 version.
     

    Universal Acclaim
    Member

    Oct 5, 2024

    2,617

    Maybe off topic, but is a 30fps target not so important anymore for 2027 industry-leading graphics? GTA is mainly doing it for design/physics/etc., which is why the game can't be scaled down to 720-900p/60fps?
     

    chris 1515
    Member

    Oct 27, 2017

    7,116

    Barcelona Spain

    Matterhorn said:

    Hoping for a very nice looking 30fps Switch 2 version.


    It will be a full port a few years later, like The Witcher 3; they don't use software Lumen here. I doubt the Switch 2's ray tracing capacity is high enough to use the same pipeline to produce the Switch 2 version.

    EDIT: And they probably need to redo all the assets.

    https://www.reddit.com/r/FortNiteBR/comments/1l4a1o4/fortnite_on_the_switch_2_looks_great_these_low/

    Fortnite doesn't use Nanite and Lumen on Switch 2. 

    Last edited: Yesterday at 4:18 PM

    bitcloudrzr
    Member

    May 31, 2018

    21,044

    Universal Acclaim said:

    Maybe off topic, but is a 30fps target not so important anymore for 2027 industry-leading graphics? GTA is mainly doing it for design/physics/etc., which is why the graphics can't be scaled down to 720p/60fps?

    Graphics are the part of the game that can be scaled, it is CPU load that is the more difficult part, although devs have actually made cuts in the latter to increase performance mode fps viability. Even with this focus on 60fps performance modes, they are always going to have room to make a higher fidelity 30fps mode. Specifically with UE5 though, performance has been such a disaster all around and Epic seems to be taking it seriously now.
     

    Greywaren
    Member

    Jul 16, 2019

    13,530

    Spain

    60 fps target is fantastic, I wish it was the norm.
     

    julia crawford
    Took the red AND the blue pills
    Member

    Oct 27, 2017

    40,709

    i am very ok with lower fps on the series s, it is far more palatable than severe resolution drops with upscaling artifacts.
     

    Spoit
    Member

    Oct 28, 2017

    5,599

    Chronos said:

    This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation.


    And yet people keep talking about somehow getting PS6 games to work on the sony portable, which is probably going to be like half as powerful as a PS5, like that won't hold games back
     

    PLASTICA-MAN
    Member

    Oct 26, 2017

    29,563

    chris 1515 said:

    The game has ray tracing GI and reflections; it will probably be 30 fps, 600p-720p, on Xbox Series S.

    There is kinda a misconception of how Lumen and the hybrid RT are handled in UE5 titles. AO is also part of the ray-traced pipeline through HW Lumen too.
    Just the shadows are handled separately from the RT system, using VSM, which in the final look behaves quite like RT shadows in shape, similar to how FF16 handled shadows that look like RT ones while they aren't traced.
    UE5 can still trace shadows if they want to push things even further.
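    For reference, the hybrid setup described in the post above maps onto a handful of stock Unreal renderer settings. Below is a minimal, illustrative DefaultEngine.ini sketch of that combination (hardware Lumen backing GI, reflections, and AO, with direct shadows from virtual shadow maps rather than ray tracing). These are generic UE5 toggles, not CD Projekt RED's actual configuration.

        [/Script/Engine.RendererSettings]
        ; Enable the hardware ray tracing pipeline
        r.RayTracing=1
        ; 1 = Lumen for dynamic global illumination and reflections
        r.DynamicGlobalIlluminationMethod=1
        r.ReflectionMethod=1
        ; Back Lumen (GI, reflections, AO) with hardware ray tracing
        r.Lumen.HardwareRayTracing=1
        ; Direct shadows come from virtual shadow maps, not ray tracing
        r.Shadow.Virtual.Enable=1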

    overthewaves
    Member

    Sep 30, 2020

    1,203

    What about the PS5 handheld?
     

    nullpotential
    Member

    Jun 24, 2024

    87

    KRT said:

    Series S was a mistake


    Consoles were a mistake. 

    GPU
    Member

    Oct 10, 2024

    1,075

    I really dont think Series S/X will be much of a factor by the time this game comes out.
     

    Lashley
    <<Tag Here>>
    Member

    Oct 25, 2017

    65,679

    Just make series s 480p 30fps
     

    pappacone
    Member

    Jan 10, 2020

    4,076

    Greywaren said:

    60 fps target is fantastic, I wish it was the norm.


    It pretty much is
     

    Super
    Studied the Buster Sword
    Member

    Jan 29, 2022

    13,601

    I hope they can pull 60 FPS off in the full game.
     

    Theorry
    Member

    Oct 27, 2017

    69,045

    "target"

    Uh huh. We know how that is gonna go. 

    Jakartalado
    Member

    Oct 27, 2017

    2,818

    São Paulo, Brazil

    Skot said:

    720p on Series S incoming


    If the PS5 is internally at 720p up to 900p, I seriously doubt that. 

    Revoltoftheunique
    Member

    Jan 23, 2022

    2,312

    It will be unstable 60fps with lots of stuttering.
     

    defaltoption
    Plug in a controller and enter the Konami code
    The Fallen

    Oct 27, 2017

    12,485

    Austin

    KRT said:

    Series S was a mistake


    With that same attitude in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. Thats why its stupid.
     

    Horns
    Member

    Dec 7, 2018

    3,423

    I hope Microsoft drops the requirement for Series S by the time this comes out.
     

    chris 1515
    Member

    Oct 27, 2017

    7,116

    Barcelona Spain

    PLASTICA-MAN said:

    There is kinda a misconception of how Lumen and the hybrid RT are handled in UE5 titles. AO is also part of the ray-traced pipeline through HW Lumen too.

    Just the shadows are handled separately from the RT system, using VSM, which in the final look behaves quite like RT shadows in shape, similar to how FF16 handled shadows that look like RT ones while they aren't traced.
    UE5 can still trace shadows if they want to push things even further.

    Yes, indirect shadows are handled by hardware Lumen. But in the end it doesn't change my comment. I think the game will be 600-720p at 30 fps on Series S.

    bitcloudrzr
    Member

    May 31, 2018

    21,044

    Spoit said:

    And yet people keep talking about somehow getting PS6 games to work on the sony portable, which is probably going to be like half as powerful as a PS5, like that won't hold games back


    Has it been confirmed that Sony is going to have release requirements like the XS?
     

    Commander Shepherd
    Member

    Jan 27, 2023

    173

    Anyone remember when no load screens was talked about for Witcher 3?
     

    chris 1515
    Member

    Oct 27, 2017

    7,116

    Barcelona Spain

    No, this is probably different from what most games are doing: here the main focus is the 60 fps mode, and afterwards they can create balanced (40 fps) and 30 fps modes.

    This is not the other way around.

    stanman
    Member

    Feb 13, 2025

    235

    defaltoption said:

    With that same attitude in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. Thats why its stupid.


    And your mistake is comparing a PC graphics card to a console. 

    PLASTICA-MAN
    Member

    Oct 26, 2017

    29,563

    chris 1515 said:

    Yes, indirect shadows are handled by hardware Lumen. But in the end it doesn't change my comment. I think the game will be 600-720p at 30 fps on Series S.

    Yes. I am sure the Series S will have the HW solution, but probably at 30 FPS. It would be a miracle if they achieve 60 FPS.

    ArchedThunder
    Uncle Beerus
    Member

    Oct 25, 2017

    21,278

    chris 1515 said:

    It will be a full port a few years later, like The Witcher 3; they don't use software Lumen here. I doubt the Switch 2's ray tracing capacity is high enough to use the same pipeline to produce the Switch 2 version.

    EDIT: And they probably need to redo all the assets.

    https://www.reddit.com/r/FortNiteBR/comments/1l4a1o4/fortnite_on_the_switch_2_looks_great_these_low/

    Fortnite doesn't use Nanite and Lumen on Switch 2.

    Fortnite not using Lumen or Nanite at launch doesn't mean they can't run well on Switch 2. It's a launch port and they prioritized clean IQ and 60fps. I wouldn't be surprised to see them added later. Also it's not like the ray tracing in a Witcher 3 port has to match PS5, there's a lot of scaling back that can be done with ray tracing without ripping out the kitchen sink. Software lumen is also likely to be an option on P.
     

    jroc74
    Member

    Oct 27, 2017

    34,465

    Interesting times ahead....

    bitcloudrzr said:

    Has it been confirmed that Sony is going to have release requirements like the XS?


    You know good n well everything about this rumor has been confirmed.

    /S 

    Derbel McDillet
    ▲ Legend ▲
    Member

    Nov 23, 2022

    25,250

    Chronos said:

    This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation.


    How does this sound like a Cyberpunk issue? They didn't say they can't get it to work on the S.
     

    defaltoption
    Plug in a controller and enter the Konami code
    The Fallen

    Oct 27, 2017

    12,485

    Austin

    stanman said:

    And your mistake is comparing a PC graphics card to a console.


     

    reksveks
    Member

    May 17, 2022

    7,628

    Horns said:

    I hope Microsoft drops the requirement for Series S by the time this comes out.


    why? dev can make it 30 fps on series s and 60 fps on series x if needed.

    if they aren't or don't have to drop it for gta vi, they probably ain't dropping it for tw4. 

    chris 1515
    Member

    Oct 27, 2017

    7,116

    Barcelona Spain

    defaltoption said:

    With that same attitude in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. Thats why its stupid.


    No, the consoles won't hold back your 5090, because the game is created with hardware Lumen, RT reflections, virtual shadow maps, and Nanite plus Nanite vegetation in mind. Maybe Nanite characters too in the final version?

    If the game was made with software Lumen as the base, it would have held back your 5090...

    Your PC will have much better IQ and framerate, and better raytracing with Megalight (direct raytraced shadows with tons of light sources) and better raytracing settings in general.

    bitcloudrzr
    Member

    May 31, 2018

    21,044

    jroc74 said:

    Interesting times ahead....

    You know good n well everything about this rumor has been confirmed.

    /S

    Sony is like the opposite of a platform holder "forcing" adoption, for better or worse.
     

    defaltoption
    Plug in a controller and enter the Konami code
    The Fallen

    Oct 27, 2017

    12,485

    Austin

    chris 1515 said:

    No, the consoles won't hold back your 5090, because the game is created with hardware Lumen, RT reflections, virtual shadow maps, and Nanite plus Nanite vegetation in mind. Maybe Nanite characters too in the final version?

    If the game was made with software Lumen as the base, it would have held back your 5090...

    Your PC will have much better IQ and framerate, and better raytracing with Megalight (direct raytraced shadows with tons of light sources) and better raytracing settings in general.

    Exactly, the Series S is not a "mistake" and isn't holding any version of the game on console or even PC back. That's what I'm saying to the person I replied to; it's stupid to say that.
     

    cursed beef
    Member

    Jan 3, 2021

    998

    Have to imagine MS will lift the Series S parity clause when the next consoles launch. Which will be before/around the time W4 hits, right?
     

    Alvis
    Saw the truth behind the copied door
    Member

    Oct 25, 2017

    12,270

    EU

    Chronos said:

    This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation.


    ? They said that 60 FPS on Series S is challenging, not releasing the game there at all. The game can simply run at 30 FPS on Series S if they can't pull off 60 FPS. Or have a 40 FPS mode in lieu of 60 FPS.

    The CPU and storage speed differences between last gen and current gen were gigantic. This isn't even remotely close to a comparable situation. 

    defaltoption
    Plug in a controller and enter the Konami code
    The Fallen

    Oct 27, 2017

    12,485

    Austin

    misquote post
     

    jroc74
    Member

    Oct 27, 2017

    34,465

    defaltoption said:

    With that same attitude in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. Thats why its stupid.


    Ah yes, clearly 5090 cards are the vast majority of the minimum requirements for PC games.

    How can anyone say this with a straight face anymore when there are now PC games running on a Steam Deck.

    At least ppl saying that about the Series S are comparing it to other consoles.

    That said, it is interesting they are focusing on consoles first, then PC. 
  • Reclaiming Control: Digital Sovereignty in 2025

    Sovereignty has mattered since the invention of the nation state—defined by borders, laws, and taxes that apply within and without. While many have tried to define it, the core idea remains: nations or jurisdictions seek to stay in control, usually to the benefit of those within their borders.
    Digital sovereignty is a relatively new concept, also difficult to define but straightforward to understand. Data and applications don’t understand borders unless they are specified in policy terms, as coded into the infrastructure.
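    As a minimal sketch of what "coded into the infrastructure" can mean in practice, consider a data-residency rule expressed as code. The classifications, region names, and policy map below are illustrative assumptions, not any particular provider's API:
        # Hypothetical data-residency policy: a "border" exists for data only
        # when a rule like this is enforced in the provisioning pipeline.
        ALLOWED_REGIONS = {
            "customer_pii": {"eu-west-1", "eu-central-1"},  # personal data stays in the EU
            "telemetry": {"eu-west-1", "us-east-1"},        # lower-sensitivity data may travel
        }

        def residency_check(data_class: str, target_region: str) -> bool:
            """Return True if storing this class of data in target_region is allowed."""
            allowed = ALLOWED_REGIONS.get(data_class)
            if allowed is None:
                return False  # unclassified data fails closed
            return target_region in allowed

        assert residency_check("customer_pii", "eu-central-1")
        assert not residency_check("customer_pii", "us-east-1")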
    The World Wide Web had no such restrictions at its inception. Communitarian groups such as the Electronic Frontier Foundation, service providers and hyperscalers, non-profits and businesses all embraced a model that suggested data would look after itself.
    But data won’t look after itself, for several reasons. First, data is massively out of control. We generate more of it all the time, and for at least two or three decades (according to historical surveys I’ve run), most organizations haven’t fully understood their data assets. This creates inefficiency and risk—not least, widespread vulnerability to cyberattack.
    Risk is probability times impact—and right now, the probabilities have shot up. Invasions, tariffs, political tensions, and more have brought new urgency. This time last year, the idea of switching off another country’s IT systems was not on the radar. Now we’re seeing it happen—including the U.S. government blocking access to services overseas.
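    To make "risk is probability times impact" concrete, here is a toy expected-loss calculation; every scenario and figure below is invented for illustration:
        # Toy risk scoring: risk = probability (0..1) x impact (cost if it happens).
        # All scenarios and figures are invented for illustration.
        scenarios = {
            "ransomware on unclassified file shares": (0.30, 2_000_000),
            "foreign legal order exposes customer data": (0.05, 10_000_000),
            "provider blocks access to in-country services": (0.10, 5_000_000),
        }
        for name, (probability, impact) in scenarios.items():
            print(f"{name}: expected loss = {probability * impact:,.0f}")
        # Geopolitics raises the probability term, so the risk score rises
        # even when the impact of an event is unchanged.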
    Digital sovereignty isn’t just a European concern, though it is often framed as such. In South America for example, I am told that sovereignty is leading conversations with hyperscalers; in African countries, it is being stipulated in supplier agreements. Many jurisdictions are watching, assessing, and reviewing their stance on digital sovereignty.
    As the adage goes: a crisis is a problem with no time left to solve it. Digital sovereignty was a problem in waiting—but now it’s urgent. It’s gone from being an abstract ‘right to sovereignty’ to becoming a clear and present issue, in government thinking, corporate risk and how we architect and operate our computer systems.
    What does the digital sovereignty landscape look like today?
    Much has changed since this time last year. Unknowns remain, but much of what was unclear is now starting to solidify. Terminology is clearer – for example, talking about classification and localisation rather than generic concepts.
    We’re seeing a shift from theory to practice. Governments and organizations are putting policies in place that simply didn’t exist before. For example, some countries are seeing “in-country” as a primary goal, whereas others (the UK included) are adopting a risk-based approach based on trusted locales.
    We’re also seeing a shift in risk priorities. From a risk standpoint, the classic triad of confidentiality, integrity, and availability is at the heart of the digital sovereignty conversation. Historically, the focus has been much more on confidentiality, driven by concerns about the US Cloud Act: essentially, can foreign governments see my data?
    This year however, availability is rising in prominence, due to geopolitics and very real concerns about data accessibility in third countries. Integrity is being talked about less from a sovereignty perspective, but is no less important as a cybercrime target—ransomware and fraud being two clear and present risks.
    Thinking more broadly, digital sovereignty is not just about data, or even intellectual property, but also the brain drain. Countries don’t want all their brightest young technologists leaving university only to end up in California or some other, more attractive country. They want to keep talent at home and innovate locally, to the benefit of their own GDP.
    How Are Cloud Providers Responding?
    Hyperscalers are playing catch-up, still looking for ways to satisfy the letter of the law whilst ignoring (in the French sense) its spirit. It’s not enough for Microsoft or AWS to say they will do everything they can to protect a jurisdiction’s data, if they are already legally obliged to do the opposite. Legislation, in this case US legislation, calls the shots—and we all know just how fragile this is right now.
    We see hyperscaler progress where they offer technology to be locally managed by a third party, rather than themselves. For example, Google’s partnership with Thales, or Microsoft with Orange, both in France (Microsoft has similar in Germany). However, these are point solutions, not part of a general standard. Meanwhile, AWS’ recent announcement about creating a local entity doesn’t solve for the problem of US over-reach, which remains a core issue.
    Non-hyperscaler providers and software vendors have an increasingly significant play: Oracle and HPE offer solutions that can be deployed and managed locally for example; Broadcom/VMware and Red Hat provide technologies that locally situated, private cloud providers can host. Digital sovereignty is thus a catalyst for a redistribution of “cloud spend” across a broader pool of players.
    What Can Enterprise Organizations Do About It?
    First, see digital sovereignty as a core element of data and application strategy. For a nation, sovereignty means having solid borders, control over IP, GDP, and so on. That’s the goal for corporations as well—control, self-determination, and resilience.
    If sovereignty isn’t seen as an element of strategy, it gets pushed down into the implementation layer, leading to inefficient architectures and duplicated effort. Far better to decide up front what data, applications and processes need to be treated as sovereign, and to define an architecture to support that.
    This sets the scene for making informed provisioning decisions. Your organization may have made some big bets on key vendors or hyperscalers, but multi-platform thinking increasingly dominates: multiple public and private cloud providers, with integrated operations and management. Sovereign cloud becomes one element of a well-structured multi-platform architecture.
    It is not cost-neutral to deliver on sovereignty, but the overall business value should be tangible. A sovereignty initiative should bring clear advantages, not just for itself, but through the benefits that come with better control, visibility, and efficiency.
    Knowing where your data is, understanding which data matters, managing it efficiently so you’re not duplicating or fragmenting it across systems—these are valuable outcomes. In addition, ignoring these questions can lead to non-compliance or be outright illegal. Even if we don’t use terms like ‘sovereignty’, organizations need a handle on their information estate.
    Organizations shouldn’t be thinking everything cloud-based needs to be sovereign, but should be building strategies and policies based on data classification, prioritization and risk. Build that picture and you can solve for the highest-priority items first—the data with the strongest classification and greatest risk. That process alone takes care of 80–90% of the problem space, avoiding making sovereignty another problem whilst solving nothing.
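    As a sketch of building that picture, the following scores hypothetical assets by classification weight and exposure, then works the list top-down. The asset names, weights, and exposure figures are assumptions for illustration, not a prescribed scheme:
        # Hypothetical inventory: classification drives a weight, and
        # weight x exposure gives a priority score for sovereignty treatment.
        CLASS_WEIGHT = {"public": 1, "internal": 2, "confidential": 4, "restricted": 8}

        assets = [
            {"name": "marketing site", "classification": "public", "exposure": 0.9},
            {"name": "HR records", "classification": "restricted", "exposure": 0.4},
            {"name": "customer database", "classification": "confidential", "exposure": 0.7},
        ]

        for asset in assets:
            asset["priority"] = CLASS_WEIGHT[asset["classification"]] * asset["exposure"]

        # Solve for the highest-priority items first.
        for asset in sorted(assets, key=lambda a: a["priority"], reverse=True):
            print(f"{asset['name']}: priority {asset['priority']:.1f}")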
    Where to start? Look after your own organization first
    Sovereignty and systems thinking go hand in hand: it’s all about scope. In enterprise architecture or business design, the biggest mistake is boiling the ocean—trying to solve everything at once.
    Instead, focus on your own sovereignty. Worry about your own organization, your own jurisdiction. Know where your own borders are. Understand who your customers are, and what their requirements are. For example, if you’re a manufacturer selling into specific countries—what do those countries require? Solve for that, not for everything else. Don’t try to plan for every possible future scenario.
    Focus on what you have, what you’re responsible for, and what you need to address right now. Classify and prioritise your data assets based on real-world risk. Do that, and you’re already more than halfway toward solving digital sovereignty—with all the efficiency, control, and compliance benefits that come with it.
    Digital sovereignty isn’t just regulatory, but strategic. Organizations that act now can reduce risk, improve operational clarity, and prepare for a future based on trust, compliance, and resilience.
    The post Reclaiming Control: Digital Sovereignty in 2025 appeared first on Gigaom.
  • Dave Bautista’s Next Franchise Play? Becoming a ‘Cat Assassin’

    After hanging up his daggers as Drax the Destroyer and getting got as Glossu Rabban in Dune: Part Two, Dave Bautista is stepping into video games and animation with a new franchise by the name of Cat Assassin. The wrestler-actor and his production company Dogbone Entertainment will bring to life a new idea from Steve Lerner, who wrote 2022’s feline adventure game Stray. This would-be franchise will comprise a stealth-action video game—influenced by titles such as Assassin’s Creed, Splinter Cell, and Sifu—from developer Titan1Studios (Love is a Roguelike, The Events at Unity Farm) and a “neo-noir adult animated series.” Cat Assassin focuses on Hugh, an expert killer “caught between various cartels and power brokers in a dark and twisted city.” Bautista is part of the enterprise’s “creative vision,” but at the moment, it’s unclear whether that also means he’ll lend his voice to Hugh in either animated or video game form. (His current voice work includes the upcoming Army of the Dead animated series and playing himself in WWE games since 2003.) Titan1 has several TV and game projects in the works, so at the moment, there’s no real window on when to expect Cat Assassin. Still, in a statement on Titan1’s website, Bautista called teaming with the company “a pleasure … Their ability to build worlds through animation has been so impressive and they’ve created a truly unique world in this game that I can’t wait to share with players.”

    While the game is seemingly expected for release in October 2027 for PC and several consoles, including the Nintendo Switch 2, Titan1 said more details on the overall franchise’s future are expected “in the coming months.” Want more io9 news? Check out when to expect the latest Marvel, Star Wars, and Star Trek releases, what’s next for the DC Universe on film and TV, and everything you need to know about the future of Doctor Who.
  • From Networks to Business Models, AI Is Rewiring Telecom

    Artificial intelligence is already rewriting the rules of wireless and telecom — powering predictive maintenance, streamlining network operations, and enabling more innovative services.
    As AI scales, the disruption will be faster, deeper, and harder to reverse than any prior shift in the industry.
    Compared to the sweeping changes AI is set to unleash, past telecom innovations look incremental.
    AI is redefining how networks operate, services are delivered, and data is secured — across every device and digital touchpoint.
    AI Is Reshaping Wireless Networks Already
    Artificial intelligence is already transforming wireless through smarter private networks, fixed wireless access (FWA), and intelligent automation across the stack.
    AI detects and resolves network issues before they impact service, improving uptime and customer satisfaction. It’s also opening the door to entirely new revenue streams and business models.
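    One common pattern behind detecting issues before they impact service is anomaly detection on network telemetry. Below is a minimal rolling z-score sketch; the metric, window, and threshold are invented for illustration and do not describe any operator's actual system:
        from statistics import mean, stdev

        def is_anomalous(history: list[float], latest: float, threshold: float = 3.0) -> bool:
            """Flag `latest` if it sits more than `threshold` std-devs from the recent mean."""
            if len(history) < 2:
                return False
            mu, sigma = mean(history), stdev(history)
            if sigma == 0:
                return latest != mu
            return abs(latest - mu) / sigma > threshold

        # Invented cell-latency samples (ms): a spike is flagged before users notice.
        window = [21.0, 22.5, 20.8, 21.7, 22.1, 21.3, 20.9, 22.0]
        print(is_anomalous(window, 21.5))  # False: within normal variation
        print(is_anomalous(window, 45.0))  # True: candidate for automated remediation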
    Each wireless generation brought new capabilities. AI, however, marks a more profound shift — networks that think, respond, and evolve in real time.
    AI Acceleration Will Outpace Past Tech Shifts
    Many may underestimate the speed and magnitude of AI-driven change.
    The shift from traditional voice and data systems to AI-driven network intelligence is already underway.
    Although predictions abound, the true scope remains unclear.
    It’s tempting to assume we understand AI’s trajectory, but history suggests otherwise.

    Today, AI is already automating maintenance and optimizing performance without user disruption. The technologies we’ll rely on in the near future may still be on the drawing board.
    Few predicted that smartphones would emerge from analog beginnings—a reminder of how quickly foundational technologies can be reimagined.
    History shows that disruptive technologies rarely follow predictable paths — and AI is no exception. It’s already upending business models across industries.
    Technological shifts bring both new opportunities and complex trade-offs.
    AI Disruption Will Move Faster Than Ever
    The same cycle of reinvention is happening now — but with AI, it’s moving at unprecedented speed.
    Despite all the discussion, many still treat AI as a future concern — yet the shift is already well underway.
    As with every major technological leap, there will be gains and losses. The AI transition brings clear trade-offs: efficiency and innovation on one side, job displacement and privacy erosion on the other.
    Unlike past tech waves that unfolded over decades, the AI shift will reshape industries in just a few years — and that wave of change will only keep moving forward.
    AI Will Reshape All Sectors and Companies
    This shift will unfold faster than most organizations or individuals are prepared to handle.
    Today’s industries will likely look very different tomorrow. Entirely new sectors will emerge as legacy models become obsolete — redefining market leadership across industries.
    Telecom’s past holds a clear warning: market dominance can vanish quickly when companies ignore disruption.
    The 1984 breakup of the Bell System split the regional “Baby Bells” off from AT&T: the Baby Bells handled local access, while AT&T kept long-distance. Eventually, the Baby Bells moved into long-distance service, while AT&T remained barred from selling local access — undermining its advantage.
    As the market shifted and competitors gained ground, AT&T lost its dominance and became vulnerable enough that SBC, a former regional Bell, acquired it and took on its name.

    It’s a case study of how incumbents fall when they fail to adapt — precisely the kind of pressure AI is now exerting across industries.
    SBC’s acquisition of AT&T flipped the power dynamic — proof that size doesn’t protect against disruption.
    The once-crowded telecom field has consolidated into just a few dominant players — each facing new threats from AI-native challengers.
    Legacy telecom models are being steadily displaced by faster, more flexible wireless, broadband, and streaming alternatives.
    No Industry Is Immune From AI Disruption
    AI will accelerate the next wave of industrial evolution — bringing innovations and consequences we’re only beginning to grasp.
    New winners will emerge as past leaders struggle to hang on — a shift that will also reshape the investment landscape. Startups leveraging AI will likely redefine leadership in sectors where incumbents have grown complacent.
    Nvidia’s rise is part of a broader trend: the next market leaders will emerge wherever AI creates a clear competitive advantage — whether in chips, code, or entirely new markets.
    The AI-driven future is arriving faster than most organizations are ready for. Adapting to this accelerating wave of change is no longer optional — it’s essential. Companies that act decisively today will define the winners of tomorrow.