• Plug and Play: Build a G-Assist Plug-In Today

    Project G-Assist — available through the NVIDIA App — is an experimental AI assistant that helps tune, control and optimize NVIDIA GeForce RTX systems.
    NVIDIA’s Plug and Play: Project G-Assist Plug-In Hackathon — running virtually through Wednesday, July 16 — invites the community to explore AI and build custom G-Assist plug-ins for a chance to win prizes and be featured on NVIDIA social media channels.

    G-Assist allows users to control their RTX GPU and other system settings using natural language, thanks to a small language model that runs on device. It can be used from the NVIDIA Overlay in the NVIDIA App without needing to tab out or switch programs. Users can expand its capabilities via plug-ins and even connect it to agentic frameworks such as Langflow.
    Below, find popular G-Assist plug-ins, hackathon details and tips to get started.
    Plug-In and Win
    Join the hackathon by registering and checking out the curated technical resources.
    G-Assist plug-ins can be built in several ways, including with Python for rapid development, with C++ for performance-critical apps and with custom system interactions for hardware and operating system automation.
    For those that prefer vibe coding, the G-Assist Plug-In Builder — a ChatGPT-based app that allows no-code or low-code development with natural language commands — makes it easy for enthusiasts to start creating plug-ins.
    To submit an entry, participants must provide a GitHub repository, including source code file, requirements.txt, manifest.json, config.json, a plug-in executable file and READme code.
    Then, submit a video — between 30 seconds and two minutes — showcasing the plug-in in action.
    Finally, hackathoners must promote their plug-in using #AIonRTXHackathon on a social media channel: Instagram, TikTok or X. Submit projects via this form by Wednesday, July 16.
    Judges will assess plug-ins based on three main criteria: 1) innovation and creativity, 2) technical execution and integration, reviewing technical depth, G-Assist integration and scalability, and 3) usability and community impact, aka how easy it is to use the plug-in.
    Winners will be selected on Wednesday, Aug. 20. First place will receive a GeForce RTX 5090 laptop, second place a GeForce RTX 5080 GPU and third a GeForce RTX 5070 GPU. These top three will also be featured on NVIDIA’s social media channels, get the opportunity to meet the NVIDIA G-Assist team and earn an NVIDIA Deep Learning Institute self-paced course credit.
    Project G-Assist requires a GeForce RTX 50, 40 or 30 Series Desktop GPU with at least 12GB of VRAM, Windows 11 or 10 operating system, a compatible CPU, specific disk space requirements and a recent GeForce Game Ready Driver or NVIDIA Studio Driver.
    Plug-InExplore open-source plug-in samples available on GitHub, which showcase the diverse ways on-device AI can enhance PC and gaming workflows.

    Popular plug-ins include:

    Google Gemini: Enables search-based queries using Google Search integration and large language model-based queries using Gemini capabilities in real time without needing to switch programs from the convenience of the NVIDIA App Overlay.
    Discord: Enables users to easily share game highlights or messages directly to Discord servers without disrupting gameplay.
    IFTTT: Lets users create automations across hundreds of compatible endpoints to trigger IoT routines — such as adjusting room lights and smart shades, or pushing the latest gaming news to a mobile device.
    Spotify: Lets users control Spotify using simple voice commands or the G-Assist interface to play favorite tracks and manage playlists.
    Twitch: Checks if any Twitch streamer is currently live and can access detailed stream information such as titles, games, view counts and more.

    Get G-Assist 
    Join the NVIDIA Developer Discord channel to collaborate, share creations and gain support from fellow AI enthusiasts and NVIDIA staff.
    the date for NVIDIA’s How to Build a G-Assist Plug-In webinar on Wednesday, July 9, from 10-11 a.m. PT, to learn more about Project G-Assist capabilities, discover the fundamentals of building, testing and deploying Project G-Assist plug-ins, and participate in a live Q&A session.
    Explore NVIDIA’s GitHub repository, which provides everything needed to get started developing with G-Assist, including sample plug-ins, step-by-step instructions and documentation for building custom functionalities.
    Learn more about the ChatGPT Plug-In Builder to transform ideas into functional G-Assist plug-ins with minimal coding. The tool uses OpenAI’s custom GPT builder to generate plug-in code and streamline the development process.
    NVIDIA’s technical blog walks through the architecture of a G-Assist plug-in, using a Twitch integration as an example. Discover how plug-ins work, how they communicate with G-Assist and how to build them from scratch.
    Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations. 
    Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.
    Follow NVIDIA Workstation on LinkedIn and X. 
    See notice regarding software product information.
    #plug #play #build #gassist #plugin
    Plug and Play: Build a G-Assist Plug-In Today
    Project G-Assist — available through the NVIDIA App — is an experimental AI assistant that helps tune, control and optimize NVIDIA GeForce RTX systems. NVIDIA’s Plug and Play: Project G-Assist Plug-In Hackathon — running virtually through Wednesday, July 16 — invites the community to explore AI and build custom G-Assist plug-ins for a chance to win prizes and be featured on NVIDIA social media channels. G-Assist allows users to control their RTX GPU and other system settings using natural language, thanks to a small language model that runs on device. It can be used from the NVIDIA Overlay in the NVIDIA App without needing to tab out or switch programs. Users can expand its capabilities via plug-ins and even connect it to agentic frameworks such as Langflow. Below, find popular G-Assist plug-ins, hackathon details and tips to get started. Plug-In and Win Join the hackathon by registering and checking out the curated technical resources. G-Assist plug-ins can be built in several ways, including with Python for rapid development, with C++ for performance-critical apps and with custom system interactions for hardware and operating system automation. For those that prefer vibe coding, the G-Assist Plug-In Builder — a ChatGPT-based app that allows no-code or low-code development with natural language commands — makes it easy for enthusiasts to start creating plug-ins. To submit an entry, participants must provide a GitHub repository, including source code file, requirements.txt, manifest.json, config.json, a plug-in executable file and READme code. Then, submit a video — between 30 seconds and two minutes — showcasing the plug-in in action. Finally, hackathoners must promote their plug-in using #AIonRTXHackathon on a social media channel: Instagram, TikTok or X. Submit projects via this form by Wednesday, July 16. Judges will assess plug-ins based on three main criteria: 1) innovation and creativity, 2) technical execution and integration, reviewing technical depth, G-Assist integration and scalability, and 3) usability and community impact, aka how easy it is to use the plug-in. Winners will be selected on Wednesday, Aug. 20. First place will receive a GeForce RTX 5090 laptop, second place a GeForce RTX 5080 GPU and third a GeForce RTX 5070 GPU. These top three will also be featured on NVIDIA’s social media channels, get the opportunity to meet the NVIDIA G-Assist team and earn an NVIDIA Deep Learning Institute self-paced course credit. Project G-Assist requires a GeForce RTX 50, 40 or 30 Series Desktop GPU with at least 12GB of VRAM, Windows 11 or 10 operating system, a compatible CPU, specific disk space requirements and a recent GeForce Game Ready Driver or NVIDIA Studio Driver. Plug-InExplore open-source plug-in samples available on GitHub, which showcase the diverse ways on-device AI can enhance PC and gaming workflows. Popular plug-ins include: Google Gemini: Enables search-based queries using Google Search integration and large language model-based queries using Gemini capabilities in real time without needing to switch programs from the convenience of the NVIDIA App Overlay. Discord: Enables users to easily share game highlights or messages directly to Discord servers without disrupting gameplay. IFTTT: Lets users create automations across hundreds of compatible endpoints to trigger IoT routines — such as adjusting room lights and smart shades, or pushing the latest gaming news to a mobile device. Spotify: Lets users control Spotify using simple voice commands or the G-Assist interface to play favorite tracks and manage playlists. Twitch: Checks if any Twitch streamer is currently live and can access detailed stream information such as titles, games, view counts and more. Get G-Assist  Join the NVIDIA Developer Discord channel to collaborate, share creations and gain support from fellow AI enthusiasts and NVIDIA staff. the date for NVIDIA’s How to Build a G-Assist Plug-In webinar on Wednesday, July 9, from 10-11 a.m. PT, to learn more about Project G-Assist capabilities, discover the fundamentals of building, testing and deploying Project G-Assist plug-ins, and participate in a live Q&A session. Explore NVIDIA’s GitHub repository, which provides everything needed to get started developing with G-Assist, including sample plug-ins, step-by-step instructions and documentation for building custom functionalities. Learn more about the ChatGPT Plug-In Builder to transform ideas into functional G-Assist plug-ins with minimal coding. The tool uses OpenAI’s custom GPT builder to generate plug-in code and streamline the development process. NVIDIA’s technical blog walks through the architecture of a G-Assist plug-in, using a Twitch integration as an example. Discover how plug-ins work, how they communicate with G-Assist and how to build them from scratch. Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations.  Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter. Follow NVIDIA Workstation on LinkedIn and X.  See notice regarding software product information. #plug #play #build #gassist #plugin
    BLOGS.NVIDIA.COM
    Plug and Play: Build a G-Assist Plug-In Today
    Project G-Assist — available through the NVIDIA App — is an experimental AI assistant that helps tune, control and optimize NVIDIA GeForce RTX systems. NVIDIA’s Plug and Play: Project G-Assist Plug-In Hackathon — running virtually through Wednesday, July 16 — invites the community to explore AI and build custom G-Assist plug-ins for a chance to win prizes and be featured on NVIDIA social media channels. G-Assist allows users to control their RTX GPU and other system settings using natural language, thanks to a small language model that runs on device. It can be used from the NVIDIA Overlay in the NVIDIA App without needing to tab out or switch programs. Users can expand its capabilities via plug-ins and even connect it to agentic frameworks such as Langflow. Below, find popular G-Assist plug-ins, hackathon details and tips to get started. Plug-In and Win Join the hackathon by registering and checking out the curated technical resources. G-Assist plug-ins can be built in several ways, including with Python for rapid development, with C++ for performance-critical apps and with custom system interactions for hardware and operating system automation. For those that prefer vibe coding, the G-Assist Plug-In Builder — a ChatGPT-based app that allows no-code or low-code development with natural language commands — makes it easy for enthusiasts to start creating plug-ins. To submit an entry, participants must provide a GitHub repository, including source code file (plugin.py), requirements.txt, manifest.json, config.json (if applicable), a plug-in executable file and READme code. Then, submit a video — between 30 seconds and two minutes — showcasing the plug-in in action. Finally, hackathoners must promote their plug-in using #AIonRTXHackathon on a social media channel: Instagram, TikTok or X. Submit projects via this form by Wednesday, July 16. Judges will assess plug-ins based on three main criteria: 1) innovation and creativity, 2) technical execution and integration, reviewing technical depth, G-Assist integration and scalability, and 3) usability and community impact, aka how easy it is to use the plug-in. Winners will be selected on Wednesday, Aug. 20. First place will receive a GeForce RTX 5090 laptop, second place a GeForce RTX 5080 GPU and third a GeForce RTX 5070 GPU. These top three will also be featured on NVIDIA’s social media channels, get the opportunity to meet the NVIDIA G-Assist team and earn an NVIDIA Deep Learning Institute self-paced course credit. Project G-Assist requires a GeForce RTX 50, 40 or 30 Series Desktop GPU with at least 12GB of VRAM, Windows 11 or 10 operating system, a compatible CPU (Intel Pentium G Series, Core i3, i5, i7 or higher; AMD FX, Ryzen 3, 5, 7, 9, Threadripper or higher), specific disk space requirements and a recent GeForce Game Ready Driver or NVIDIA Studio Driver. Plug-In(spiration) Explore open-source plug-in samples available on GitHub, which showcase the diverse ways on-device AI can enhance PC and gaming workflows. Popular plug-ins include: Google Gemini: Enables search-based queries using Google Search integration and large language model-based queries using Gemini capabilities in real time without needing to switch programs from the convenience of the NVIDIA App Overlay. Discord: Enables users to easily share game highlights or messages directly to Discord servers without disrupting gameplay. IFTTT: Lets users create automations across hundreds of compatible endpoints to trigger IoT routines — such as adjusting room lights and smart shades, or pushing the latest gaming news to a mobile device. Spotify: Lets users control Spotify using simple voice commands or the G-Assist interface to play favorite tracks and manage playlists. Twitch: Checks if any Twitch streamer is currently live and can access detailed stream information such as titles, games, view counts and more. Get G-Assist(ance)  Join the NVIDIA Developer Discord channel to collaborate, share creations and gain support from fellow AI enthusiasts and NVIDIA staff. Save the date for NVIDIA’s How to Build a G-Assist Plug-In webinar on Wednesday, July 9, from 10-11 a.m. PT, to learn more about Project G-Assist capabilities, discover the fundamentals of building, testing and deploying Project G-Assist plug-ins, and participate in a live Q&A session. Explore NVIDIA’s GitHub repository, which provides everything needed to get started developing with G-Assist, including sample plug-ins, step-by-step instructions and documentation for building custom functionalities. Learn more about the ChatGPT Plug-In Builder to transform ideas into functional G-Assist plug-ins with minimal coding. The tool uses OpenAI’s custom GPT builder to generate plug-in code and streamline the development process. NVIDIA’s technical blog walks through the architecture of a G-Assist plug-in, using a Twitch integration as an example. Discover how plug-ins work, how they communicate with G-Assist and how to build them from scratch. Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations.  Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter. Follow NVIDIA Workstation on LinkedIn and X.  See notice regarding software product information.
    Like
    Wow
    Love
    Sad
    25
    0 Yorumlar 0 hisse senetleri
  • So, I stumbled upon this revolutionary concept: the Pi Pico Powers Parts-Bin Audio Interface. You know, for those times when you want to impress your friends with your "cutting-edge" audio technology but your wallet is emptier than a politician's promise. Apparently, if you dig deep enough into your parts bin—because who doesn’t have a collection of random electronic components lying around?—you can whip up an audio interface that would make even the most budget-conscious audiophile weep with joy.

    Let’s be real for a moment. The idea of “USB audio is great” is like saying “water is wet.” Sure, it’s true, but it’s not exactly breaking news. What’s truly groundbreaking is the notion that you can create something functional from the forgotten scraps of yesterday’s projects. It’s like a DIY episode of “Chopped” but for tech nerds. “Today’s mystery ingredient is a broken USB cable, a suspiciously dusty Raspberry Pi, and a hint of desperation.”

    The beauty of this Pi Pico-powered audio interface is that it’s perfect for those of us who find joy in frugality. Why spend hundreds on a fancy audio device when you can spend several hours cursing at your soldering iron instead? Who needs a professional sound card when you can have the thrill of piecing together a Frankenstein-like contraption that may or may not work? The suspense alone is worth the price of admission!

    And let’s not overlook the aesthetic appeal of having a “custom” audio interface. Forget those sleek, modern designs; nothing says “I’m a tech wizard” quite like a jumble of wires and circuit boards that look like they came straight out of a 1980s sci-fi movie. Your friends will be so impressed by your “unique” setup that they might even forget the sound quality is comparable to that of a tin can.

    Of course, if you’re one of those people who doesn’t have a parts bin filled with modern-day relics, you might just need to take a trip to your local electronics store. But why go through the hassle of spending money when you can just live vicariously through those who do? It’s all about the experience, right? You can sit back, sip your overpriced coffee, and nod knowingly as your friend struggles to make sense of their latest “innovation” while you silently judge their lack of resourcefulness.

    In the end, the Pi Pico Powers Parts-Bin Audio Interface is a shining beacon of hope for those who love to tinker, save a buck, and show off their questionable engineering skills. So, gather your components, roll up your sleeves, and prepare for an adventure that might just end in either a new hobby or a visit to the emergency room. Let the audio experimentation begin!

    #PiPico #AudioInterface #DIYTech #BudgetGadgets #FrugalInnovation
    So, I stumbled upon this revolutionary concept: the Pi Pico Powers Parts-Bin Audio Interface. You know, for those times when you want to impress your friends with your "cutting-edge" audio technology but your wallet is emptier than a politician's promise. Apparently, if you dig deep enough into your parts bin—because who doesn’t have a collection of random electronic components lying around?—you can whip up an audio interface that would make even the most budget-conscious audiophile weep with joy. Let’s be real for a moment. The idea of “USB audio is great” is like saying “water is wet.” Sure, it’s true, but it’s not exactly breaking news. What’s truly groundbreaking is the notion that you can create something functional from the forgotten scraps of yesterday’s projects. It’s like a DIY episode of “Chopped” but for tech nerds. “Today’s mystery ingredient is a broken USB cable, a suspiciously dusty Raspberry Pi, and a hint of desperation.” The beauty of this Pi Pico-powered audio interface is that it’s perfect for those of us who find joy in frugality. Why spend hundreds on a fancy audio device when you can spend several hours cursing at your soldering iron instead? Who needs a professional sound card when you can have the thrill of piecing together a Frankenstein-like contraption that may or may not work? The suspense alone is worth the price of admission! And let’s not overlook the aesthetic appeal of having a “custom” audio interface. Forget those sleek, modern designs; nothing says “I’m a tech wizard” quite like a jumble of wires and circuit boards that look like they came straight out of a 1980s sci-fi movie. Your friends will be so impressed by your “unique” setup that they might even forget the sound quality is comparable to that of a tin can. Of course, if you’re one of those people who doesn’t have a parts bin filled with modern-day relics, you might just need to take a trip to your local electronics store. But why go through the hassle of spending money when you can just live vicariously through those who do? It’s all about the experience, right? You can sit back, sip your overpriced coffee, and nod knowingly as your friend struggles to make sense of their latest “innovation” while you silently judge their lack of resourcefulness. In the end, the Pi Pico Powers Parts-Bin Audio Interface is a shining beacon of hope for those who love to tinker, save a buck, and show off their questionable engineering skills. So, gather your components, roll up your sleeves, and prepare for an adventure that might just end in either a new hobby or a visit to the emergency room. Let the audio experimentation begin! #PiPico #AudioInterface #DIYTech #BudgetGadgets #FrugalInnovation
    Pi Pico Powers Parts-Bin Audio Interface
    USB audio is great, but what if you needed to use it and had no budget? Well, depending on the contents of your parts bin, you might be able to …read more
    Like
    Love
    Wow
    Sad
    Angry
    310
    1 Yorumlar 0 hisse senetleri
  • Over 8M patient records leaked in healthcare data breach

    Published
    June 15, 2025 10:00am EDT close IPhone users instructed to take immediate action to avoid data breach: 'Urgent threat' Kurt 'The CyberGuy' Knutsson discusses Elon Musk's possible priorities as he exits his role with the White House and explains the urgent warning for iPhone users to update devices after a 'massive security gap.' NEWYou can now listen to Fox News articles!
    In the past decade, healthcare data has become one of the most sought-after targets in cybercrime. From insurers to clinics, every player in the ecosystem handles some form of sensitive information. However, breaches do not always originate from hospitals or health apps. Increasingly, patient data is managed by third-party vendors offering digital services such as scheduling, billing and marketing. One such breach at a digital marketing agency serving dental practices recently exposed approximately 2.7 million patient profiles and more than 8.8 million appointment records.Sign up for my FREE CyberGuy ReportGet my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join. Illustration of a hacker at work  Massive healthcare data leak exposes millions: What you need to knowCybernews researchers have discovered a misconfigured MongoDB database exposing 2.7 million patient profiles and 8.8 million appointment records. The database was publicly accessible online, unprotected by passwords or authentication protocols. Anyone with basic knowledge of database scanning tools could have accessed it.The exposed data included names, birthdates, addresses, emails, phone numbers, gender, chart IDs, language preferences and billing classifications. Appointment records also contained metadata such as timestamps and institutional identifiers.MASSIVE DATA BREACH EXPOSES 184 MILLION PASSWORDS AND LOGINSClues within the data structure point toward Gargle, a Utah-based company that builds websites and offers marketing tools for dental practices. While not a confirmed source, several internal references and system details suggest a strong connection. Gargle provides appointment scheduling, form submission and patient communication services. These functions require access to patient information, making the firm a likely link in the exposure.After the issue was reported, the database was secured. The duration of the exposure remains unknown, and there is no public evidence indicating whether the data was downloaded by malicious actors before being locked down.We reached out to Gargle for a comment but did not hear back before our deadline. A healthcare professional viewing heath data     How healthcare data breaches lead to identity theft and insurance fraudThe exposed data presents a broad risk profile. On its own, a phone number or billing record might seem limited in scope. Combined, however, the dataset forms a complete profile that could be exploited for identity theft, insurance fraud and targeted phishing campaigns.Medical identity theft allows attackers to impersonate patients and access services under a false identity. Victims often remain unaware until significant damage is done, ranging from incorrect medical records to unpaid bills in their names. The leak also opens the door to insurance fraud, with actors using institutional references and chart data to submit false claims.This type of breach raises questions about compliance with the Health Insurance Portability and Accountability Act, which mandates strong security protections for entities handling patient data. Although Gargle is not a healthcare provider, its access to patient-facing infrastructure could place it under the scope of that regulation as a business associate. A healthcare professional working on a laptop  5 ways you can stay safe from healthcare data breachesIf your information was part of the healthcare breach or any similar one, it’s worth taking a few steps to protect yourself.1. Consider identity theft protection services: Since the healthcare data breach exposed personal and financial information, it’s crucial to stay proactive against identity theft. Identity theft protection services offer continuous monitoring of your credit reports, Social Security number and even the dark web to detect if your information is being misused. These services send you real-time alerts about suspicious activity, such as new credit inquiries or attempts to open accounts in your name, helping you act quickly before serious damage occurs. Beyond monitoring, many identity theft protection companies provide dedicated recovery specialists who assist you in resolving fraud issues, disputing unauthorized charges and restoring your identity if it’s compromised. See my tips and best picks on how to protect yourself from identity theft.2. Use personal data removal services: The healthcare data breach leaks loads of information about you, and all this could end up in the public domain, which essentially gives anyone an opportunity to scam you.  One proactive step is to consider personal data removal services, which specialize in continuously monitoring and removing your information from various online databases and websites. While no service promises to remove all your data from the internet, having a removal service is great if you want to constantly monitor and automate the process of removing your information from hundreds of sites continuously over a longer period of time. Check out my top picks for data removal services here. GET FOX BUSINESS ON THE GO BY CLICKING HEREGet a free scan to find out if your personal information is already out on the web3. Have strong antivirus software: Hackers have people’s email addresses and full names, which makes it easy for them to send you a phishing link that installs malware and steals all your data. These messages are socially engineered to catch them, and catching them is nearly impossible if you’re not careful. However, you’re not without defenses.The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices.4. Enable two-factor authentication: While passwords weren’t part of the data breach, you still need to enable two-factor authentication. It gives you an extra layer of security on all your important accounts, including email, banking and social media. 2FA requires you to provide a second piece of information, such as a code sent to your phone, in addition to your password when logging in. This makes it significantly harder for hackers to access your accounts, even if they have your password. Enabling 2FA can greatly reduce the risk of unauthorized access and protect your sensitive data.5. Be wary of mailbox communications: Bad actors may also try to scam you through snail mail. The data leak gives them access to your address. They may impersonate people or brands you know and use themes that require urgent attention, such as missed deliveries, account suspensions and security alerts. Kurt’s key takeawayIf nothing else, this latest leak shows just how poorly patient data is being handled today. More and more, non-medical vendors are getting access to sensitive information without facing the same rules or oversight as hospitals and clinics. These third-party services are now a regular part of how patients book appointments, pay bills or fill out forms. But when something goes wrong, the fallout is just as serious. Even though the database was taken offline, the bigger problem hasn't gone away. Your data is only as safe as the least careful company that gets access to it.CLICK HERE TO GET THE FOX NEWS APPDo you think healthcare companies are investing enough in their cybersecurity infrastructure? Let us know by writing us at Cyberguy.com/ContactFor more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/NewsletterAsk Kurt a question or let us know what stories you'd like us to coverFollow Kurt on his social channelsAnswers to the most asked CyberGuy questions:New from Kurt:Copyright 2025 CyberGuy.com.  All rights reserved.   Kurt "CyberGuy" Knutsson is an award-winning tech journalist who has a deep love of technology, gear and gadgets that make life better with his contributions for Fox News & FOX Business beginning mornings on "FOX & Friends." Got a tech question? Get Kurt’s free CyberGuy Newsletter, share your voice, a story idea or comment at CyberGuy.com.
    #over #patient #records #leaked #healthcare
    Over 8M patient records leaked in healthcare data breach
    Published June 15, 2025 10:00am EDT close IPhone users instructed to take immediate action to avoid data breach: 'Urgent threat' Kurt 'The CyberGuy' Knutsson discusses Elon Musk's possible priorities as he exits his role with the White House and explains the urgent warning for iPhone users to update devices after a 'massive security gap.' NEWYou can now listen to Fox News articles! In the past decade, healthcare data has become one of the most sought-after targets in cybercrime. From insurers to clinics, every player in the ecosystem handles some form of sensitive information. However, breaches do not always originate from hospitals or health apps. Increasingly, patient data is managed by third-party vendors offering digital services such as scheduling, billing and marketing. One such breach at a digital marketing agency serving dental practices recently exposed approximately 2.7 million patient profiles and more than 8.8 million appointment records.Sign up for my FREE CyberGuy ReportGet my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join. Illustration of a hacker at work  Massive healthcare data leak exposes millions: What you need to knowCybernews researchers have discovered a misconfigured MongoDB database exposing 2.7 million patient profiles and 8.8 million appointment records. The database was publicly accessible online, unprotected by passwords or authentication protocols. Anyone with basic knowledge of database scanning tools could have accessed it.The exposed data included names, birthdates, addresses, emails, phone numbers, gender, chart IDs, language preferences and billing classifications. Appointment records also contained metadata such as timestamps and institutional identifiers.MASSIVE DATA BREACH EXPOSES 184 MILLION PASSWORDS AND LOGINSClues within the data structure point toward Gargle, a Utah-based company that builds websites and offers marketing tools for dental practices. While not a confirmed source, several internal references and system details suggest a strong connection. Gargle provides appointment scheduling, form submission and patient communication services. These functions require access to patient information, making the firm a likely link in the exposure.After the issue was reported, the database was secured. The duration of the exposure remains unknown, and there is no public evidence indicating whether the data was downloaded by malicious actors before being locked down.We reached out to Gargle for a comment but did not hear back before our deadline. A healthcare professional viewing heath data     How healthcare data breaches lead to identity theft and insurance fraudThe exposed data presents a broad risk profile. On its own, a phone number or billing record might seem limited in scope. Combined, however, the dataset forms a complete profile that could be exploited for identity theft, insurance fraud and targeted phishing campaigns.Medical identity theft allows attackers to impersonate patients and access services under a false identity. Victims often remain unaware until significant damage is done, ranging from incorrect medical records to unpaid bills in their names. The leak also opens the door to insurance fraud, with actors using institutional references and chart data to submit false claims.This type of breach raises questions about compliance with the Health Insurance Portability and Accountability Act, which mandates strong security protections for entities handling patient data. Although Gargle is not a healthcare provider, its access to patient-facing infrastructure could place it under the scope of that regulation as a business associate. A healthcare professional working on a laptop  5 ways you can stay safe from healthcare data breachesIf your information was part of the healthcare breach or any similar one, it’s worth taking a few steps to protect yourself.1. Consider identity theft protection services: Since the healthcare data breach exposed personal and financial information, it’s crucial to stay proactive against identity theft. Identity theft protection services offer continuous monitoring of your credit reports, Social Security number and even the dark web to detect if your information is being misused. These services send you real-time alerts about suspicious activity, such as new credit inquiries or attempts to open accounts in your name, helping you act quickly before serious damage occurs. Beyond monitoring, many identity theft protection companies provide dedicated recovery specialists who assist you in resolving fraud issues, disputing unauthorized charges and restoring your identity if it’s compromised. See my tips and best picks on how to protect yourself from identity theft.2. Use personal data removal services: The healthcare data breach leaks loads of information about you, and all this could end up in the public domain, which essentially gives anyone an opportunity to scam you.  One proactive step is to consider personal data removal services, which specialize in continuously monitoring and removing your information from various online databases and websites. While no service promises to remove all your data from the internet, having a removal service is great if you want to constantly monitor and automate the process of removing your information from hundreds of sites continuously over a longer period of time. Check out my top picks for data removal services here. GET FOX BUSINESS ON THE GO BY CLICKING HEREGet a free scan to find out if your personal information is already out on the web3. Have strong antivirus software: Hackers have people’s email addresses and full names, which makes it easy for them to send you a phishing link that installs malware and steals all your data. These messages are socially engineered to catch them, and catching them is nearly impossible if you’re not careful. However, you’re not without defenses.The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices.4. Enable two-factor authentication: While passwords weren’t part of the data breach, you still need to enable two-factor authentication. It gives you an extra layer of security on all your important accounts, including email, banking and social media. 2FA requires you to provide a second piece of information, such as a code sent to your phone, in addition to your password when logging in. This makes it significantly harder for hackers to access your accounts, even if they have your password. Enabling 2FA can greatly reduce the risk of unauthorized access and protect your sensitive data.5. Be wary of mailbox communications: Bad actors may also try to scam you through snail mail. The data leak gives them access to your address. They may impersonate people or brands you know and use themes that require urgent attention, such as missed deliveries, account suspensions and security alerts. Kurt’s key takeawayIf nothing else, this latest leak shows just how poorly patient data is being handled today. More and more, non-medical vendors are getting access to sensitive information without facing the same rules or oversight as hospitals and clinics. These third-party services are now a regular part of how patients book appointments, pay bills or fill out forms. But when something goes wrong, the fallout is just as serious. Even though the database was taken offline, the bigger problem hasn't gone away. Your data is only as safe as the least careful company that gets access to it.CLICK HERE TO GET THE FOX NEWS APPDo you think healthcare companies are investing enough in their cybersecurity infrastructure? Let us know by writing us at Cyberguy.com/ContactFor more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/NewsletterAsk Kurt a question or let us know what stories you'd like us to coverFollow Kurt on his social channelsAnswers to the most asked CyberGuy questions:New from Kurt:Copyright 2025 CyberGuy.com.  All rights reserved.   Kurt "CyberGuy" Knutsson is an award-winning tech journalist who has a deep love of technology, gear and gadgets that make life better with his contributions for Fox News & FOX Business beginning mornings on "FOX & Friends." Got a tech question? Get Kurt’s free CyberGuy Newsletter, share your voice, a story idea or comment at CyberGuy.com. #over #patient #records #leaked #healthcare
    WWW.FOXNEWS.COM
    Over 8M patient records leaked in healthcare data breach
    Published June 15, 2025 10:00am EDT close IPhone users instructed to take immediate action to avoid data breach: 'Urgent threat' Kurt 'The CyberGuy' Knutsson discusses Elon Musk's possible priorities as he exits his role with the White House and explains the urgent warning for iPhone users to update devices after a 'massive security gap.' NEWYou can now listen to Fox News articles! In the past decade, healthcare data has become one of the most sought-after targets in cybercrime. From insurers to clinics, every player in the ecosystem handles some form of sensitive information. However, breaches do not always originate from hospitals or health apps. Increasingly, patient data is managed by third-party vendors offering digital services such as scheduling, billing and marketing. One such breach at a digital marketing agency serving dental practices recently exposed approximately 2.7 million patient profiles and more than 8.8 million appointment records.Sign up for my FREE CyberGuy ReportGet my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join. Illustration of a hacker at work   (Kurt "CyberGuy" Knutsson)Massive healthcare data leak exposes millions: What you need to knowCybernews researchers have discovered a misconfigured MongoDB database exposing 2.7 million patient profiles and 8.8 million appointment records. The database was publicly accessible online, unprotected by passwords or authentication protocols. Anyone with basic knowledge of database scanning tools could have accessed it.The exposed data included names, birthdates, addresses, emails, phone numbers, gender, chart IDs, language preferences and billing classifications. Appointment records also contained metadata such as timestamps and institutional identifiers.MASSIVE DATA BREACH EXPOSES 184 MILLION PASSWORDS AND LOGINSClues within the data structure point toward Gargle, a Utah-based company that builds websites and offers marketing tools for dental practices. While not a confirmed source, several internal references and system details suggest a strong connection. Gargle provides appointment scheduling, form submission and patient communication services. These functions require access to patient information, making the firm a likely link in the exposure.After the issue was reported, the database was secured. The duration of the exposure remains unknown, and there is no public evidence indicating whether the data was downloaded by malicious actors before being locked down.We reached out to Gargle for a comment but did not hear back before our deadline. A healthcare professional viewing heath data      (Kurt "CyberGuy" Knutsson)How healthcare data breaches lead to identity theft and insurance fraudThe exposed data presents a broad risk profile. On its own, a phone number or billing record might seem limited in scope. Combined, however, the dataset forms a complete profile that could be exploited for identity theft, insurance fraud and targeted phishing campaigns.Medical identity theft allows attackers to impersonate patients and access services under a false identity. Victims often remain unaware until significant damage is done, ranging from incorrect medical records to unpaid bills in their names. The leak also opens the door to insurance fraud, with actors using institutional references and chart data to submit false claims.This type of breach raises questions about compliance with the Health Insurance Portability and Accountability Act, which mandates strong security protections for entities handling patient data. Although Gargle is not a healthcare provider, its access to patient-facing infrastructure could place it under the scope of that regulation as a business associate. A healthcare professional working on a laptop   (Kurt "CyberGuy" Knutsson)5 ways you can stay safe from healthcare data breachesIf your information was part of the healthcare breach or any similar one, it’s worth taking a few steps to protect yourself.1. Consider identity theft protection services: Since the healthcare data breach exposed personal and financial information, it’s crucial to stay proactive against identity theft. Identity theft protection services offer continuous monitoring of your credit reports, Social Security number and even the dark web to detect if your information is being misused. These services send you real-time alerts about suspicious activity, such as new credit inquiries or attempts to open accounts in your name, helping you act quickly before serious damage occurs. Beyond monitoring, many identity theft protection companies provide dedicated recovery specialists who assist you in resolving fraud issues, disputing unauthorized charges and restoring your identity if it’s compromised. See my tips and best picks on how to protect yourself from identity theft.2. Use personal data removal services: The healthcare data breach leaks loads of information about you, and all this could end up in the public domain, which essentially gives anyone an opportunity to scam you.  One proactive step is to consider personal data removal services, which specialize in continuously monitoring and removing your information from various online databases and websites. While no service promises to remove all your data from the internet, having a removal service is great if you want to constantly monitor and automate the process of removing your information from hundreds of sites continuously over a longer period of time. Check out my top picks for data removal services here. GET FOX BUSINESS ON THE GO BY CLICKING HEREGet a free scan to find out if your personal information is already out on the web3. Have strong antivirus software: Hackers have people’s email addresses and full names, which makes it easy for them to send you a phishing link that installs malware and steals all your data. These messages are socially engineered to catch them, and catching them is nearly impossible if you’re not careful. However, you’re not without defenses.The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices.4. Enable two-factor authentication: While passwords weren’t part of the data breach, you still need to enable two-factor authentication (2FA). It gives you an extra layer of security on all your important accounts, including email, banking and social media. 2FA requires you to provide a second piece of information, such as a code sent to your phone, in addition to your password when logging in. This makes it significantly harder for hackers to access your accounts, even if they have your password. Enabling 2FA can greatly reduce the risk of unauthorized access and protect your sensitive data.5. Be wary of mailbox communications: Bad actors may also try to scam you through snail mail. The data leak gives them access to your address. They may impersonate people or brands you know and use themes that require urgent attention, such as missed deliveries, account suspensions and security alerts. Kurt’s key takeawayIf nothing else, this latest leak shows just how poorly patient data is being handled today. More and more, non-medical vendors are getting access to sensitive information without facing the same rules or oversight as hospitals and clinics. These third-party services are now a regular part of how patients book appointments, pay bills or fill out forms. But when something goes wrong, the fallout is just as serious. Even though the database was taken offline, the bigger problem hasn't gone away. Your data is only as safe as the least careful company that gets access to it.CLICK HERE TO GET THE FOX NEWS APPDo you think healthcare companies are investing enough in their cybersecurity infrastructure? Let us know by writing us at Cyberguy.com/ContactFor more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/NewsletterAsk Kurt a question or let us know what stories you'd like us to coverFollow Kurt on his social channelsAnswers to the most asked CyberGuy questions:New from Kurt:Copyright 2025 CyberGuy.com.  All rights reserved.   Kurt "CyberGuy" Knutsson is an award-winning tech journalist who has a deep love of technology, gear and gadgets that make life better with his contributions for Fox News & FOX Business beginning mornings on "FOX & Friends." Got a tech question? Get Kurt’s free CyberGuy Newsletter, share your voice, a story idea or comment at CyberGuy.com.
    Like
    Love
    Wow
    Sad
    Angry
    507
    0 Yorumlar 0 hisse senetleri
  • MillerKnoll opens new design archive showcasing over one million objects from the company’s history

    In a 12,000-square-foot warehouse in Zeeland, Michigan, hundreds of chairs, sofas, and loveseats rest on open storage racks. Their bold colors and elegant forms stand in striking contrast to the industrial setting. A plush recliner, seemingly made for sinking into, sits beside a mesh desk chair like those found in generic office cubicles. Nearby, a rare prototype of the Knoll Womb® Chair, gifted by Eero Saarinen to his mother, blooms open like a flower–inviting someone to sit. There’s also mahogany furniture designed by Gilbert Rohde for Herman Miller, originally unveiled at the 1933 World’s Fair; early office pieces by Florence Knoll; and a sculptural paper lamp by Isamu Noguchi. This is the newly unveiled MillerKnoll Archive, a space that honors the distinct legacies of its formerly rival brands. In collaboration with New York–based design firm Standard Issue, MillerKnoll has created a permanent display of its most iconic designs at the company’s Michigan Design Yard headquarters.

    In the early 1920s, Dutch-born businessman Herman Miller became the majority stakeholder in a Zeeland, Michigan, company where his son-in-law served as president. Following the acquisition, Star Furniture Co. was renamed the Herman Miller Furniture Company. Meanwhile, across the Atlantic in Stuttgart, Germany, Walter Knoll joined his family’s furniture business and formed close ties with modernist pioneers Ludwig Mies van der Rohe and Walter Gropius, immersing himself in the Bauhaus movement as Germany edged toward war. 
    Just before the outbreak of World War II, Walter Knoll relocated to the United States and established his own furniture company in New York City. Around the same time, Michigan native Florence Schust was studying at the Cranbrook Academy of Art under Eliel Saarinen. There, she met Eero Saarinen and Charles Eames. Schust, who later married Walter Knoll, and Saarinen would go on to become key designers for the company, while Eames would play a similarly pivotal role at Herman Miller—setting both firms on parallel paths in the world of modern design.
    The facility was designed in collaboration with New York-based design firm Standard Issue. The archive, located in MillerKnoll’s Design Yard Headquarters, is 12,000 square feet and holds over one million objects.Formerly seen as competitors, Herman Miller acquired Knoll four years ago in a billion merger that formed MillerKnoll. The deal united two of the most influential names in American furniture, merging their storied design legacies and the iconic pieces that helped define modern design. Now, MillerKnoll is honoring the distinct histories of each brand through this new archive. The archive is a permanent home for the brands’ archival collections and also exhibits the evolution of modern design. The facility is organized into three distinct areas: an exhibition space, open storage, and a reading room. 

    The facility’s first exhibition, Manufacturing Modern, explores the intertwined histories of Knoll and Herman Miller. It showcases designs from the individuals who helped shape each company. The open storage area displays over 300 pieces of modern furniture, featuring both original works from Knoll and Herman Miller as well as contemporary designs. In addition to viewing the furniture pieces, visitors can kick back in the reading room, which offers access to a collection of archival materials, including correspondence, photography, drawings, and textiles.
    The facility is organized into three distinct areas: an exhibition space, open storage, and a reading room and will be open for tours in partnership with the Cranbrook Art Academy this summer.“The debut of the MillerKnoll Archives invites our communities to experience design history – and imagine its future– in one dynamic space,” said MillerKnoll’s chief creative and product officer Ben Watson. “The ability to not only understand how iconic designs came to be, but how design solutions evolved over time, is a never-ending source of inspiration.”
    Exclusive tours of the archive will be available in July and August in partnership with the Cranbrook Art Museum and in October in partnership with Docomomo.
    #millerknoll #opens #new #design #archive
    MillerKnoll opens new design archive showcasing over one million objects from the company’s history
    In a 12,000-square-foot warehouse in Zeeland, Michigan, hundreds of chairs, sofas, and loveseats rest on open storage racks. Their bold colors and elegant forms stand in striking contrast to the industrial setting. A plush recliner, seemingly made for sinking into, sits beside a mesh desk chair like those found in generic office cubicles. Nearby, a rare prototype of the Knoll Womb® Chair, gifted by Eero Saarinen to his mother, blooms open like a flower–inviting someone to sit. There’s also mahogany furniture designed by Gilbert Rohde for Herman Miller, originally unveiled at the 1933 World’s Fair; early office pieces by Florence Knoll; and a sculptural paper lamp by Isamu Noguchi. This is the newly unveiled MillerKnoll Archive, a space that honors the distinct legacies of its formerly rival brands. In collaboration with New York–based design firm Standard Issue, MillerKnoll has created a permanent display of its most iconic designs at the company’s Michigan Design Yard headquarters. In the early 1920s, Dutch-born businessman Herman Miller became the majority stakeholder in a Zeeland, Michigan, company where his son-in-law served as president. Following the acquisition, Star Furniture Co. was renamed the Herman Miller Furniture Company. Meanwhile, across the Atlantic in Stuttgart, Germany, Walter Knoll joined his family’s furniture business and formed close ties with modernist pioneers Ludwig Mies van der Rohe and Walter Gropius, immersing himself in the Bauhaus movement as Germany edged toward war.  Just before the outbreak of World War II, Walter Knoll relocated to the United States and established his own furniture company in New York City. Around the same time, Michigan native Florence Schust was studying at the Cranbrook Academy of Art under Eliel Saarinen. There, she met Eero Saarinen and Charles Eames. Schust, who later married Walter Knoll, and Saarinen would go on to become key designers for the company, while Eames would play a similarly pivotal role at Herman Miller—setting both firms on parallel paths in the world of modern design. The facility was designed in collaboration with New York-based design firm Standard Issue. The archive, located in MillerKnoll’s Design Yard Headquarters, is 12,000 square feet and holds over one million objects.Formerly seen as competitors, Herman Miller acquired Knoll four years ago in a billion merger that formed MillerKnoll. The deal united two of the most influential names in American furniture, merging their storied design legacies and the iconic pieces that helped define modern design. Now, MillerKnoll is honoring the distinct histories of each brand through this new archive. The archive is a permanent home for the brands’ archival collections and also exhibits the evolution of modern design. The facility is organized into three distinct areas: an exhibition space, open storage, and a reading room.  The facility’s first exhibition, Manufacturing Modern, explores the intertwined histories of Knoll and Herman Miller. It showcases designs from the individuals who helped shape each company. The open storage area displays over 300 pieces of modern furniture, featuring both original works from Knoll and Herman Miller as well as contemporary designs. In addition to viewing the furniture pieces, visitors can kick back in the reading room, which offers access to a collection of archival materials, including correspondence, photography, drawings, and textiles. The facility is organized into three distinct areas: an exhibition space, open storage, and a reading room and will be open for tours in partnership with the Cranbrook Art Academy this summer.“The debut of the MillerKnoll Archives invites our communities to experience design history – and imagine its future– in one dynamic space,” said MillerKnoll’s chief creative and product officer Ben Watson. “The ability to not only understand how iconic designs came to be, but how design solutions evolved over time, is a never-ending source of inspiration.” Exclusive tours of the archive will be available in July and August in partnership with the Cranbrook Art Museum and in October in partnership with Docomomo. #millerknoll #opens #new #design #archive
    WWW.ARCHPAPER.COM
    MillerKnoll opens new design archive showcasing over one million objects from the company’s history
    In a 12,000-square-foot warehouse in Zeeland, Michigan, hundreds of chairs, sofas, and loveseats rest on open storage racks. Their bold colors and elegant forms stand in striking contrast to the industrial setting. A plush recliner, seemingly made for sinking into, sits beside a mesh desk chair like those found in generic office cubicles. Nearby, a rare prototype of the Knoll Womb® Chair, gifted by Eero Saarinen to his mother, blooms open like a flower–inviting someone to sit. There’s also mahogany furniture designed by Gilbert Rohde for Herman Miller, originally unveiled at the 1933 World’s Fair; early office pieces by Florence Knoll; and a sculptural paper lamp by Isamu Noguchi. This is the newly unveiled MillerKnoll Archive, a space that honors the distinct legacies of its formerly rival brands. In collaboration with New York–based design firm Standard Issue, MillerKnoll has created a permanent display of its most iconic designs at the company’s Michigan Design Yard headquarters. In the early 1920s, Dutch-born businessman Herman Miller became the majority stakeholder in a Zeeland, Michigan, company where his son-in-law served as president. Following the acquisition, Star Furniture Co. was renamed the Herman Miller Furniture Company. Meanwhile, across the Atlantic in Stuttgart, Germany, Walter Knoll joined his family’s furniture business and formed close ties with modernist pioneers Ludwig Mies van der Rohe and Walter Gropius, immersing himself in the Bauhaus movement as Germany edged toward war.  Just before the outbreak of World War II, Walter Knoll relocated to the United States and established his own furniture company in New York City. Around the same time, Michigan native Florence Schust was studying at the Cranbrook Academy of Art under Eliel Saarinen. There, she met Eero Saarinen and Charles Eames. Schust, who later married Walter Knoll, and Saarinen would go on to become key designers for the company, while Eames would play a similarly pivotal role at Herman Miller—setting both firms on parallel paths in the world of modern design. The facility was designed in collaboration with New York-based design firm Standard Issue. The archive, located in MillerKnoll’s Design Yard Headquarters, is 12,000 square feet and holds over one million objects. (Nicholas Calcott/Courtesy MillerKnoll) Formerly seen as competitors, Herman Miller acquired Knoll four years ago in a $1.8 billion merger that formed MillerKnoll. The deal united two of the most influential names in American furniture, merging their storied design legacies and the iconic pieces that helped define modern design. Now, MillerKnoll is honoring the distinct histories of each brand through this new archive. The archive is a permanent home for the brands’ archival collections and also exhibits the evolution of modern design. The facility is organized into three distinct areas: an exhibition space, open storage, and a reading room.  The facility’s first exhibition, Manufacturing Modern, explores the intertwined histories of Knoll and Herman Miller. It showcases designs from the individuals who helped shape each company. The open storage area displays over 300 pieces of modern furniture, featuring both original works from Knoll and Herman Miller as well as contemporary designs. In addition to viewing the furniture pieces, visitors can kick back in the reading room, which offers access to a collection of archival materials, including correspondence, photography, drawings, and textiles. The facility is organized into three distinct areas: an exhibition space, open storage, and a reading room and will be open for tours in partnership with the Cranbrook Art Academy this summer. (Nicholas Calcott/Courtesy MillerKnoll) “The debut of the MillerKnoll Archives invites our communities to experience design history – and imagine its future– in one dynamic space,” said MillerKnoll’s chief creative and product officer Ben Watson. “The ability to not only understand how iconic designs came to be, but how design solutions evolved over time, is a never-ending source of inspiration.” Exclusive tours of the archive will be available in July and August in partnership with the Cranbrook Art Museum and in October in partnership with Docomomo.
    Like
    Love
    Wow
    Sad
    Angry
    490
    0 Yorumlar 0 hisse senetleri
  • Malicious PyPI Package Masquerades as Chimera Module to Steal AWS, CI/CD, and macOS Data

    Jun 16, 2025Ravie LakshmananMalware / DevOps

    Cybersecurity researchers have discovered a malicious package on the Python Package Indexrepository that's capable of harvesting sensitive developer-related information, such as credentials, configuration data, and environment variables, among others.
    The package, named chimera-sandbox-extensions, attracted 143 downloads and likely targets users of a service called Chimera Sandbox, which was released by Singaporean tech company Grab last August to facilitate "experimentation and development ofsolutions."
    The package masquerades as a helper module for Chimera Sandbox, but "aims to steal credentials and other sensitive information such as Jamf configuration, CI/CD environment variables, AWS tokens, and more," JFrog security researcher Guy Korolevski said in a report published last week.
    Once installed, it attempts to connect to an external domain whose domain name is generated using a domain generation algorithmin order to download and execute a next-stage payload.
    Specifically, the malware acquires from the domain an authentication token, which is then used to send a request to the same domain and retrieve the Python-based information stealer.

    The stealer malware is equipped to siphon a wide range of data from infected machines. This includes -

    JAMF receipts, which are records of software packages installed by Jamf Pro on managed computers
    Pod sandbox environment authentication tokens and git information
    CI/CD information from environment variables
    Zscaler host configuration
    Amazon Web Services account information and tokens
    Public IP address
    General platform, user, and host information

    The kind of data gathered by the malware shows that it's mainly geared towards corporate and cloud infrastructure. In addition, the extraction of JAMF receipts indicates that it's also capable of targeting Apple macOS systems.
    The collected information is sent via a POST request back to the same domain, after which the server assesses if the machine is a worthy target for further exploitation. However, JFrog said it was unable to obtain the payload at the time of analysis.
    "The targeted approach employed by this malware, along with the complexity of its multi-stage targeted payload, distinguishes it from the more generic open-source malware threats we have encountered thus far, highlighting the advancements that malicious packages have made recently," Jonathan Sar Shalom, director of threat research at JFrog Security Research team, said.

    "This new sophistication of malware underscores why development teams remain vigilant with updates—alongside proactive security research – to defend against emerging threats and maintain software integrity."
    The disclosure comes as SafeDep and Veracode detailed a number of malware-laced npm packages that are designed to execute remote code and download additional payloads. The packages in question are listed below -

    eslint-config-airbnb-compatts-runtime-compat-checksolders@mediawave/libAll the identified npm packages have since been taken down from npm, but not before they were downloaded hundreds of times from the package registry.
    SafeDep's analysis of eslint-config-airbnb-compat found that the JavaScript library has ts-runtime-compat-check listed as a dependency, which, in turn, contacts an external server defined in the former packageto retrieve and execute a Base64-encoded string. The exact nature of the payload is unknown.
    "It implements a multi-stage remote code execution attack using a transitive dependency to hide the malicious code," SafeDep researcher Kunal Singh said.
    Solders, on the other hand, has been found to incorporate a post-install script in its package.json, causing the malicious code to be automatically executed as soon as the package is installed.
    "At first glance, it's hard to believe that this is actually valid JavaScript," the Veracode Threat Research team said. "It looks like a seemingly random collection of Japanese symbols. It turns out that this particular obfuscation scheme uses the Unicode characters as variable names and a sophisticated chain of dynamic code generation to work."
    Decoding the script reveals an extra layer of obfuscation, unpacking which reveals its main function: Check if the compromised machine is Windows, and if so, run a PowerShell command to retrieve a next-stage payload from a remote server.
    This second-stage PowerShell script, also obscured, is designed to fetch a Windows batch script from another domainand configures a Windows Defender Antivirus exclusion list to avoid detection. The batch script then paves the way for the execution of a .NET DLL that reaches out to a PNG image hosted on ImgBB.
    "is grabbing the last two pixels from this image and then looping through some data contained elsewhere in it," Veracode said. "It ultimately builds up in memory YET ANOTHER .NET DLL."

    Furthermore, the DLL is equipped to create task scheduler entries and features the ability to bypass user account controlusing a combination of FodHelper.exe and programmatic identifiersto evade defenses and avoid triggering any security alerts to the user.
    The newly-downloaded DLL is Pulsar RAT, a "free, open-source Remote Administration Tool for Windows" and a variant of the Quasar RAT.
    "From a wall of Japanese characters to a RAT hidden within the pixels of a PNG file, the attacker went to extraordinary lengths to conceal their payload, nesting it a dozen layers deep to evade detection," Veracode said. "While the attacker's ultimate objective for deploying the Pulsar RAT remains unclear, the sheer complexity of this delivery mechanism is a powerful indicator of malicious intent."
    Crypto Malware in the Open-Source Supply Chain
    The findings also coincide with a report from Socket that identified credential stealers, cryptocurrency drainers, cryptojackers, and clippers as the main types of threats targeting the cryptocurrency and blockchain development ecosystem.

    Some of the examples of these packages include -

    express-dompurify and pumptoolforvolumeandcomment, which are capable of harvesting browser credentials and cryptocurrency wallet keys
    bs58js, which drains a victim's wallet and uses multi-hop transfers to obscure theft and frustrate forensic tracing.
    lsjglsjdv, asyncaiosignal, and raydium-sdk-liquidity-init, which functions as a clipper to monitor the system clipboard for cryptocurrency wallet strings and replace them with threat actor‑controlled addresses to reroute transactions to the attackers

    "As Web3 development converges with mainstream software engineering, the attack surface for blockchain-focused projects is expanding in both scale and complexity," Socket security researcher Kirill Boychenko said.
    "Financially motivated threat actors and state-sponsored groups are rapidly evolving their tactics to exploit systemic weaknesses in the software supply chain. These campaigns are iterative, persistent, and increasingly tailored to high-value targets."
    AI and Slopsquatting
    The rise of artificial intelligence-assisted coding, also called vibe coding, has unleashed another novel threat in the form of slopsquatting, where large language modelscan hallucinate non-existent but plausible package names that bad actors can weaponize to conduct supply chain attacks.
    Trend Micro, in a report last week, said it observed an unnamed advanced agent "confidently" cooking up a phantom Python package named starlette-reverse-proxy, only for the build process to crash with the error "module not found." However, should an adversary upload a package with the same name on the repository, it can have serious security consequences.

    Furthermore, the cybersecurity company noted that advanced coding agents and workflows such as Claude Code CLI, OpenAI Codex CLI, and Cursor AI with Model Context Protocol-backed validation can help reduce, but not completely eliminate, the risk of slopsquatting.
    "When agents hallucinate dependencies or install unverified packages, they create an opportunity for slopsquatting attacks, in which malicious actors pre-register those same hallucinated names on public registries," security researcher Sean Park said.
    "While reasoning-enhanced agents can reduce the rate of phantom suggestions by approximately half, they do not eliminate them entirely. Even the vibe-coding workflow augmented with live MCP validations achieves the lowest rates of slip-through, but still misses edge cases."

    Found this article interesting? Follow us on Twitter  and LinkedIn to read more exclusive content we post.

    SHARE




    #malicious #pypi #package #masquerades #chimera
    Malicious PyPI Package Masquerades as Chimera Module to Steal AWS, CI/CD, and macOS Data
    Jun 16, 2025Ravie LakshmananMalware / DevOps Cybersecurity researchers have discovered a malicious package on the Python Package Indexrepository that's capable of harvesting sensitive developer-related information, such as credentials, configuration data, and environment variables, among others. The package, named chimera-sandbox-extensions, attracted 143 downloads and likely targets users of a service called Chimera Sandbox, which was released by Singaporean tech company Grab last August to facilitate "experimentation and development ofsolutions." The package masquerades as a helper module for Chimera Sandbox, but "aims to steal credentials and other sensitive information such as Jamf configuration, CI/CD environment variables, AWS tokens, and more," JFrog security researcher Guy Korolevski said in a report published last week. Once installed, it attempts to connect to an external domain whose domain name is generated using a domain generation algorithmin order to download and execute a next-stage payload. Specifically, the malware acquires from the domain an authentication token, which is then used to send a request to the same domain and retrieve the Python-based information stealer. The stealer malware is equipped to siphon a wide range of data from infected machines. This includes - JAMF receipts, which are records of software packages installed by Jamf Pro on managed computers Pod sandbox environment authentication tokens and git information CI/CD information from environment variables Zscaler host configuration Amazon Web Services account information and tokens Public IP address General platform, user, and host information The kind of data gathered by the malware shows that it's mainly geared towards corporate and cloud infrastructure. In addition, the extraction of JAMF receipts indicates that it's also capable of targeting Apple macOS systems. The collected information is sent via a POST request back to the same domain, after which the server assesses if the machine is a worthy target for further exploitation. However, JFrog said it was unable to obtain the payload at the time of analysis. "The targeted approach employed by this malware, along with the complexity of its multi-stage targeted payload, distinguishes it from the more generic open-source malware threats we have encountered thus far, highlighting the advancements that malicious packages have made recently," Jonathan Sar Shalom, director of threat research at JFrog Security Research team, said. "This new sophistication of malware underscores why development teams remain vigilant with updates—alongside proactive security research – to defend against emerging threats and maintain software integrity." The disclosure comes as SafeDep and Veracode detailed a number of malware-laced npm packages that are designed to execute remote code and download additional payloads. The packages in question are listed below - eslint-config-airbnb-compatts-runtime-compat-checksolders@mediawave/libAll the identified npm packages have since been taken down from npm, but not before they were downloaded hundreds of times from the package registry. SafeDep's analysis of eslint-config-airbnb-compat found that the JavaScript library has ts-runtime-compat-check listed as a dependency, which, in turn, contacts an external server defined in the former packageto retrieve and execute a Base64-encoded string. The exact nature of the payload is unknown. "It implements a multi-stage remote code execution attack using a transitive dependency to hide the malicious code," SafeDep researcher Kunal Singh said. Solders, on the other hand, has been found to incorporate a post-install script in its package.json, causing the malicious code to be automatically executed as soon as the package is installed. "At first glance, it's hard to believe that this is actually valid JavaScript," the Veracode Threat Research team said. "It looks like a seemingly random collection of Japanese symbols. It turns out that this particular obfuscation scheme uses the Unicode characters as variable names and a sophisticated chain of dynamic code generation to work." Decoding the script reveals an extra layer of obfuscation, unpacking which reveals its main function: Check if the compromised machine is Windows, and if so, run a PowerShell command to retrieve a next-stage payload from a remote server. This second-stage PowerShell script, also obscured, is designed to fetch a Windows batch script from another domainand configures a Windows Defender Antivirus exclusion list to avoid detection. The batch script then paves the way for the execution of a .NET DLL that reaches out to a PNG image hosted on ImgBB. "is grabbing the last two pixels from this image and then looping through some data contained elsewhere in it," Veracode said. "It ultimately builds up in memory YET ANOTHER .NET DLL." Furthermore, the DLL is equipped to create task scheduler entries and features the ability to bypass user account controlusing a combination of FodHelper.exe and programmatic identifiersto evade defenses and avoid triggering any security alerts to the user. The newly-downloaded DLL is Pulsar RAT, a "free, open-source Remote Administration Tool for Windows" and a variant of the Quasar RAT. "From a wall of Japanese characters to a RAT hidden within the pixels of a PNG file, the attacker went to extraordinary lengths to conceal their payload, nesting it a dozen layers deep to evade detection," Veracode said. "While the attacker's ultimate objective for deploying the Pulsar RAT remains unclear, the sheer complexity of this delivery mechanism is a powerful indicator of malicious intent." Crypto Malware in the Open-Source Supply Chain The findings also coincide with a report from Socket that identified credential stealers, cryptocurrency drainers, cryptojackers, and clippers as the main types of threats targeting the cryptocurrency and blockchain development ecosystem. Some of the examples of these packages include - express-dompurify and pumptoolforvolumeandcomment, which are capable of harvesting browser credentials and cryptocurrency wallet keys bs58js, which drains a victim's wallet and uses multi-hop transfers to obscure theft and frustrate forensic tracing. lsjglsjdv, asyncaiosignal, and raydium-sdk-liquidity-init, which functions as a clipper to monitor the system clipboard for cryptocurrency wallet strings and replace them with threat actor‑controlled addresses to reroute transactions to the attackers "As Web3 development converges with mainstream software engineering, the attack surface for blockchain-focused projects is expanding in both scale and complexity," Socket security researcher Kirill Boychenko said. "Financially motivated threat actors and state-sponsored groups are rapidly evolving their tactics to exploit systemic weaknesses in the software supply chain. These campaigns are iterative, persistent, and increasingly tailored to high-value targets." AI and Slopsquatting The rise of artificial intelligence-assisted coding, also called vibe coding, has unleashed another novel threat in the form of slopsquatting, where large language modelscan hallucinate non-existent but plausible package names that bad actors can weaponize to conduct supply chain attacks. Trend Micro, in a report last week, said it observed an unnamed advanced agent "confidently" cooking up a phantom Python package named starlette-reverse-proxy, only for the build process to crash with the error "module not found." However, should an adversary upload a package with the same name on the repository, it can have serious security consequences. Furthermore, the cybersecurity company noted that advanced coding agents and workflows such as Claude Code CLI, OpenAI Codex CLI, and Cursor AI with Model Context Protocol-backed validation can help reduce, but not completely eliminate, the risk of slopsquatting. "When agents hallucinate dependencies or install unverified packages, they create an opportunity for slopsquatting attacks, in which malicious actors pre-register those same hallucinated names on public registries," security researcher Sean Park said. "While reasoning-enhanced agents can reduce the rate of phantom suggestions by approximately half, they do not eliminate them entirely. Even the vibe-coding workflow augmented with live MCP validations achieves the lowest rates of slip-through, but still misses edge cases." Found this article interesting? Follow us on Twitter  and LinkedIn to read more exclusive content we post. SHARE     #malicious #pypi #package #masquerades #chimera
    THEHACKERNEWS.COM
    Malicious PyPI Package Masquerades as Chimera Module to Steal AWS, CI/CD, and macOS Data
    Jun 16, 2025Ravie LakshmananMalware / DevOps Cybersecurity researchers have discovered a malicious package on the Python Package Index (PyPI) repository that's capable of harvesting sensitive developer-related information, such as credentials, configuration data, and environment variables, among others. The package, named chimera-sandbox-extensions, attracted 143 downloads and likely targets users of a service called Chimera Sandbox, which was released by Singaporean tech company Grab last August to facilitate "experimentation and development of [machine learning] solutions." The package masquerades as a helper module for Chimera Sandbox, but "aims to steal credentials and other sensitive information such as Jamf configuration, CI/CD environment variables, AWS tokens, and more," JFrog security researcher Guy Korolevski said in a report published last week. Once installed, it attempts to connect to an external domain whose domain name is generated using a domain generation algorithm (DGA) in order to download and execute a next-stage payload. Specifically, the malware acquires from the domain an authentication token, which is then used to send a request to the same domain and retrieve the Python-based information stealer. The stealer malware is equipped to siphon a wide range of data from infected machines. This includes - JAMF receipts, which are records of software packages installed by Jamf Pro on managed computers Pod sandbox environment authentication tokens and git information CI/CD information from environment variables Zscaler host configuration Amazon Web Services account information and tokens Public IP address General platform, user, and host information The kind of data gathered by the malware shows that it's mainly geared towards corporate and cloud infrastructure. In addition, the extraction of JAMF receipts indicates that it's also capable of targeting Apple macOS systems. The collected information is sent via a POST request back to the same domain, after which the server assesses if the machine is a worthy target for further exploitation. However, JFrog said it was unable to obtain the payload at the time of analysis. "The targeted approach employed by this malware, along with the complexity of its multi-stage targeted payload, distinguishes it from the more generic open-source malware threats we have encountered thus far, highlighting the advancements that malicious packages have made recently," Jonathan Sar Shalom, director of threat research at JFrog Security Research team, said. "This new sophistication of malware underscores why development teams remain vigilant with updates—alongside proactive security research – to defend against emerging threats and maintain software integrity." The disclosure comes as SafeDep and Veracode detailed a number of malware-laced npm packages that are designed to execute remote code and download additional payloads. The packages in question are listed below - eslint-config-airbnb-compat (676 Downloads) ts-runtime-compat-check (1,588 Downloads) solders (983 Downloads) @mediawave/lib (386 Downloads) All the identified npm packages have since been taken down from npm, but not before they were downloaded hundreds of times from the package registry. SafeDep's analysis of eslint-config-airbnb-compat found that the JavaScript library has ts-runtime-compat-check listed as a dependency, which, in turn, contacts an external server defined in the former package ("proxy.eslint-proxy[.]site") to retrieve and execute a Base64-encoded string. The exact nature of the payload is unknown. "It implements a multi-stage remote code execution attack using a transitive dependency to hide the malicious code," SafeDep researcher Kunal Singh said. Solders, on the other hand, has been found to incorporate a post-install script in its package.json, causing the malicious code to be automatically executed as soon as the package is installed. "At first glance, it's hard to believe that this is actually valid JavaScript," the Veracode Threat Research team said. "It looks like a seemingly random collection of Japanese symbols. It turns out that this particular obfuscation scheme uses the Unicode characters as variable names and a sophisticated chain of dynamic code generation to work." Decoding the script reveals an extra layer of obfuscation, unpacking which reveals its main function: Check if the compromised machine is Windows, and if so, run a PowerShell command to retrieve a next-stage payload from a remote server ("firewall[.]tel"). This second-stage PowerShell script, also obscured, is designed to fetch a Windows batch script from another domain ("cdn.audiowave[.]org") and configures a Windows Defender Antivirus exclusion list to avoid detection. The batch script then paves the way for the execution of a .NET DLL that reaches out to a PNG image hosted on ImgBB ("i.ibb[.]co"). "[The DLL] is grabbing the last two pixels from this image and then looping through some data contained elsewhere in it," Veracode said. "It ultimately builds up in memory YET ANOTHER .NET DLL." Furthermore, the DLL is equipped to create task scheduler entries and features the ability to bypass user account control (UAC) using a combination of FodHelper.exe and programmatic identifiers (ProgIDs) to evade defenses and avoid triggering any security alerts to the user. The newly-downloaded DLL is Pulsar RAT, a "free, open-source Remote Administration Tool for Windows" and a variant of the Quasar RAT. "From a wall of Japanese characters to a RAT hidden within the pixels of a PNG file, the attacker went to extraordinary lengths to conceal their payload, nesting it a dozen layers deep to evade detection," Veracode said. "While the attacker's ultimate objective for deploying the Pulsar RAT remains unclear, the sheer complexity of this delivery mechanism is a powerful indicator of malicious intent." Crypto Malware in the Open-Source Supply Chain The findings also coincide with a report from Socket that identified credential stealers, cryptocurrency drainers, cryptojackers, and clippers as the main types of threats targeting the cryptocurrency and blockchain development ecosystem. Some of the examples of these packages include - express-dompurify and pumptoolforvolumeandcomment, which are capable of harvesting browser credentials and cryptocurrency wallet keys bs58js, which drains a victim's wallet and uses multi-hop transfers to obscure theft and frustrate forensic tracing. lsjglsjdv, asyncaiosignal, and raydium-sdk-liquidity-init, which functions as a clipper to monitor the system clipboard for cryptocurrency wallet strings and replace them with threat actor‑controlled addresses to reroute transactions to the attackers "As Web3 development converges with mainstream software engineering, the attack surface for blockchain-focused projects is expanding in both scale and complexity," Socket security researcher Kirill Boychenko said. "Financially motivated threat actors and state-sponsored groups are rapidly evolving their tactics to exploit systemic weaknesses in the software supply chain. These campaigns are iterative, persistent, and increasingly tailored to high-value targets." AI and Slopsquatting The rise of artificial intelligence (AI)-assisted coding, also called vibe coding, has unleashed another novel threat in the form of slopsquatting, where large language models (LLMs) can hallucinate non-existent but plausible package names that bad actors can weaponize to conduct supply chain attacks. Trend Micro, in a report last week, said it observed an unnamed advanced agent "confidently" cooking up a phantom Python package named starlette-reverse-proxy, only for the build process to crash with the error "module not found." However, should an adversary upload a package with the same name on the repository, it can have serious security consequences. Furthermore, the cybersecurity company noted that advanced coding agents and workflows such as Claude Code CLI, OpenAI Codex CLI, and Cursor AI with Model Context Protocol (MCP)-backed validation can help reduce, but not completely eliminate, the risk of slopsquatting. "When agents hallucinate dependencies or install unverified packages, they create an opportunity for slopsquatting attacks, in which malicious actors pre-register those same hallucinated names on public registries," security researcher Sean Park said. "While reasoning-enhanced agents can reduce the rate of phantom suggestions by approximately half, they do not eliminate them entirely. Even the vibe-coding workflow augmented with live MCP validations achieves the lowest rates of slip-through, but still misses edge cases." Found this article interesting? Follow us on Twitter  and LinkedIn to read more exclusive content we post. SHARE    
    Like
    Love
    Wow
    Sad
    Angry
    514
    2 Yorumlar 0 hisse senetleri
  • Komires: Matali Physics 6.9 Released

    We are pleased to announce the release of Matali Physics 6.9, the next significant step on the way to the seventh major version of the environment. Matali Physics 6.9 introduces a number of improvements and fixes to Matali Physics Core, Matali Render and Matali Games modules, presents physics-driven, completely dynamic light sources, real-time object scaling with destruction, lighting model simulating global illuminationin some aspects, comprehensive support for Wayland on Linux, and more.

    Posted by komires on Jun 3rd, 2025
    What is Matali Physics?
    Matali Physics is an advanced, modern, multi-platform, high-performance 3d physics environment intended for games, VR, AR, physics-based simulations and robotics. Matali Physics consists of the advanced 3d physics engine Matali Physics Core and other physics-driven modules that all together provide comprehensive simulation of physical phenomena and physics-based modeling of both real and imaginary objects.
    What's new in version 6.9?

    Physics-driven, completely dynamic light sources. The introduced solution allows for processing hundreds of movable, long-range and shadow-casting light sources, where with each source can be assigned logic that controls its behavior, changes light parameters, volumetric effects parameters and others;
    Real-time object scaling with destruction. All groups of physics objects and groups of physics objects with constraints may be subject to destruction process during real-time scaling, allowing group members to break off at different sizes;
    Lighting model simulating global illuminationin some aspects. Based on own research and development work, processed in real time, ready for dynamic scenes, fast on mobile devices, not based on lightmaps, light probes, baked lights, etc.;
    Comprehensive support for Wayland on Linux. The latest version allows Matali Physics SDK users to create advanced, high-performance, physics-based, Vulkan-based games for modern Linux distributions where Wayland is the main display server protocol;
    Other improvements and fixes which complete list is available on the History webpage.

    What platforms does Matali Physics support?

    Android
    Android TV
    *BSD
    iOS
    iPadOS
    LinuxmacOS
    Steam Deck
    tvOS
    UWPWindowsWhat are the benefits of using Matali Physics?

    Physics simulation, graphics, sound and music integrated into one total multimedia solution where creating complex interactions and behaviors is common and relatively easy
    Composed of dedicated modules that do not require additional licences and fees
    Supports fully dynamic and destructible scenes
    Supports physics-based behavioral animations
    Supports physical AI, object motion and state change control
    Supports physics-based GUI
    Supports physics-based particle effects
    Supports multi-scene physics simulation and scene combining
    Supports physics-based photo mode
    Supports physics-driven sound
    Supports physics-driven music
    Supports debug visualization
    Fully serializable and deserializable
    Available for all major mobile, desktop and TV platforms
    New features on request
    Dedicated technical support
    Regular updates and fixes

    If you have questions related to the latest version and the use of Matali Physics environment as a game creation solution, please do not hesitate to contact us.
    #komires #matali #physics #released
    Komires: Matali Physics 6.9 Released
    We are pleased to announce the release of Matali Physics 6.9, the next significant step on the way to the seventh major version of the environment. Matali Physics 6.9 introduces a number of improvements and fixes to Matali Physics Core, Matali Render and Matali Games modules, presents physics-driven, completely dynamic light sources, real-time object scaling with destruction, lighting model simulating global illuminationin some aspects, comprehensive support for Wayland on Linux, and more. Posted by komires on Jun 3rd, 2025 What is Matali Physics? Matali Physics is an advanced, modern, multi-platform, high-performance 3d physics environment intended for games, VR, AR, physics-based simulations and robotics. Matali Physics consists of the advanced 3d physics engine Matali Physics Core and other physics-driven modules that all together provide comprehensive simulation of physical phenomena and physics-based modeling of both real and imaginary objects. What's new in version 6.9? Physics-driven, completely dynamic light sources. The introduced solution allows for processing hundreds of movable, long-range and shadow-casting light sources, where with each source can be assigned logic that controls its behavior, changes light parameters, volumetric effects parameters and others; Real-time object scaling with destruction. All groups of physics objects and groups of physics objects with constraints may be subject to destruction process during real-time scaling, allowing group members to break off at different sizes; Lighting model simulating global illuminationin some aspects. Based on own research and development work, processed in real time, ready for dynamic scenes, fast on mobile devices, not based on lightmaps, light probes, baked lights, etc.; Comprehensive support for Wayland on Linux. The latest version allows Matali Physics SDK users to create advanced, high-performance, physics-based, Vulkan-based games for modern Linux distributions where Wayland is the main display server protocol; Other improvements and fixes which complete list is available on the History webpage. What platforms does Matali Physics support? Android Android TV *BSD iOS iPadOS LinuxmacOS Steam Deck tvOS UWPWindowsWhat are the benefits of using Matali Physics? Physics simulation, graphics, sound and music integrated into one total multimedia solution where creating complex interactions and behaviors is common and relatively easy Composed of dedicated modules that do not require additional licences and fees Supports fully dynamic and destructible scenes Supports physics-based behavioral animations Supports physical AI, object motion and state change control Supports physics-based GUI Supports physics-based particle effects Supports multi-scene physics simulation and scene combining Supports physics-based photo mode Supports physics-driven sound Supports physics-driven music Supports debug visualization Fully serializable and deserializable Available for all major mobile, desktop and TV platforms New features on request Dedicated technical support Regular updates and fixes If you have questions related to the latest version and the use of Matali Physics environment as a game creation solution, please do not hesitate to contact us. #komires #matali #physics #released
    WWW.INDIEDB.COM
    Komires: Matali Physics 6.9 Released
    We are pleased to announce the release of Matali Physics 6.9, the next significant step on the way to the seventh major version of the environment. Matali Physics 6.9 introduces a number of improvements and fixes to Matali Physics Core, Matali Render and Matali Games modules, presents physics-driven, completely dynamic light sources, real-time object scaling with destruction, lighting model simulating global illumination (GI) in some aspects, comprehensive support for Wayland on Linux, and more. Posted by komires on Jun 3rd, 2025 What is Matali Physics? Matali Physics is an advanced, modern, multi-platform, high-performance 3d physics environment intended for games, VR, AR, physics-based simulations and robotics. Matali Physics consists of the advanced 3d physics engine Matali Physics Core and other physics-driven modules that all together provide comprehensive simulation of physical phenomena and physics-based modeling of both real and imaginary objects. What's new in version 6.9? Physics-driven, completely dynamic light sources. The introduced solution allows for processing hundreds of movable, long-range and shadow-casting light sources, where with each source can be assigned logic that controls its behavior, changes light parameters, volumetric effects parameters and others; Real-time object scaling with destruction. All groups of physics objects and groups of physics objects with constraints may be subject to destruction process during real-time scaling, allowing group members to break off at different sizes; Lighting model simulating global illumination (GI) in some aspects. Based on own research and development work, processed in real time, ready for dynamic scenes, fast on mobile devices, not based on lightmaps, light probes, baked lights, etc.; Comprehensive support for Wayland on Linux. The latest version allows Matali Physics SDK users to create advanced, high-performance, physics-based, Vulkan-based games for modern Linux distributions where Wayland is the main display server protocol; Other improvements and fixes which complete list is available on the History webpage. What platforms does Matali Physics support? Android Android TV *BSD iOS iPadOS Linux (distributions) macOS Steam Deck tvOS UWP (Desktop, Xbox Series X/S) Windows (Classic, GDK, Handheld consoles) What are the benefits of using Matali Physics? Physics simulation, graphics, sound and music integrated into one total multimedia solution where creating complex interactions and behaviors is common and relatively easy Composed of dedicated modules that do not require additional licences and fees Supports fully dynamic and destructible scenes Supports physics-based behavioral animations Supports physical AI, object motion and state change control Supports physics-based GUI Supports physics-based particle effects Supports multi-scene physics simulation and scene combining Supports physics-based photo mode Supports physics-driven sound Supports physics-driven music Supports debug visualization Fully serializable and deserializable Available for all major mobile, desktop and TV platforms New features on request Dedicated technical support Regular updates and fixes If you have questions related to the latest version and the use of Matali Physics environment as a game creation solution, please do not hesitate to contact us.
    0 Yorumlar 0 hisse senetleri
  • IBM Plans Large-Scale Fault-Tolerant Quantum Computer by 2029

    IBM Plans Large-Scale Fault-Tolerant Quantum Computer by 2029

    By John P. Mello Jr.
    June 11, 2025 5:00 AM PT

    IBM unveiled its plan to build IBM Quantum Starling, shown in this rendering. Starling is expected to be the first large-scale, fault-tolerant quantum system.ADVERTISEMENT
    Enterprise IT Lead Generation Services
    Fuel Your Pipeline. Close More Deals. Our full-service marketing programs deliver sales-ready leads. 100% Satisfaction Guarantee! Learn more.

    IBM revealed Tuesday its roadmap for bringing a large-scale, fault-tolerant quantum computer, IBM Quantum Starling, online by 2029, which is significantly earlier than many technologists thought possible.
    The company predicts that when its new Starling computer is up and running, it will be capable of performing 20,000 times more operations than today’s quantum computers — a computational state so vast it would require the memory of more than a quindecillionof the world’s most powerful supercomputers to represent.
    “IBM is charting the next frontier in quantum computing,” Big Blue CEO Arvind Krishna said in a statement. “Our expertise across mathematics, physics, and engineering is paving the way for a large-scale, fault-tolerant quantum computer — one that will solve real-world challenges and unlock immense possibilities for business.”
    IBM’s plan to deliver a fault-tolerant quantum system by 2029 is ambitious but not implausible, especially given the rapid pace of its quantum roadmap and past milestones, observed Ensar Seker, CISO at SOCRadar, a threat intelligence company in Newark, Del.
    “They’ve consistently met or exceeded their qubit scaling goals, and their emphasis on modularity and error correction indicates they’re tackling the right challenges,” he told TechNewsWorld. “However, moving from thousands to millions of physical qubits with sufficient fidelity remains a steep climb.”
    A qubit is the fundamental unit of information in quantum computing, capable of representing a zero, a one, or both simultaneously due to quantum superposition. In practice, fault-tolerant quantum computers use clusters of physical qubits working together to form a logical qubit — a more stable unit designed to store quantum information and correct errors in real time.
    Realistic Roadmap
    Luke Yang, an equity analyst with Morningstar Research Services in Chicago, believes IBM’s roadmap is realistic. “The exact scale and error correction performance might still change between now and 2029, but overall, the goal is reasonable,” he told TechNewsWorld.
    “Given its reliability and professionalism, IBM’s bold claim should be taken seriously,” said Enrique Solano, co-CEO and co-founder of Kipu Quantum, a quantum algorithm company with offices in Berlin and Karlsruhe, Germany.
    “Of course, it may also fail, especially when considering the unpredictability of hardware complexities involved,” he told TechNewsWorld, “but companies like IBM exist for such challenges, and we should all be positively impressed by its current achievements and promised technological roadmap.”
    Tim Hollebeek, vice president of industry standards at DigiCert, a global digital security company, added: “IBM is a leader in this area, and not normally a company that hypes their news. This is a fast-moving industry, and success is certainly possible.”
    “IBM is attempting to do something that no one has ever done before and will almost certainly run into challenges,” he told TechNewsWorld, “but at this point, it is largely an engineering scaling exercise, not a research project.”
    “IBM has demonstrated consistent progress, has committed billion over five years to quantum computing, and the timeline is within the realm of technical feasibility,” noted John Young, COO of Quantum eMotion, a developer of quantum random number generator technology, in Saint-Laurent, Quebec, Canada.
    “That said,” he told TechNewsWorld, “fault-tolerant in a practical, industrial sense is a very high bar.”
    Solving the Quantum Error Correction Puzzle
    To make a quantum computer fault-tolerant, errors need to be corrected so large workloads can be run without faults. In a quantum computer, errors are reduced by clustering physical qubits to form logical qubits, which have lower error rates than the underlying physical qubits.
    “Error correction is a challenge,” Young said. “Logical qubits require thousands of physical qubits to function reliably. That’s a massive scaling issue.”
    IBM explained in its announcement that creating increasing numbers of logical qubits capable of executing quantum circuits with as few physical qubits as possible is critical to quantum computing at scale. Until today, a clear path to building such a fault-tolerant system without unrealistic engineering overhead has not been published.

    Alternative and previous gold-standard, error-correcting codes present fundamental engineering challenges, IBM continued. To scale, they would require an unfeasible number of physical qubits to create enough logical qubits to perform complex operations — necessitating impractical amounts of infrastructure and control electronics. This renders them unlikely to be implemented beyond small-scale experiments and devices.
    In two research papers released with its roadmap, IBM detailed how it will overcome the challenges of building the large-scale, fault-tolerant architecture needed for a quantum computer.
    One paper outlines the use of quantum low-density parity checkcodes to reduce physical qubit overhead. The other describes methods for decoding errors in real time using conventional computing.
    According to IBM, a practical fault-tolerant quantum architecture must:

    Suppress enough errors for useful algorithms to succeed
    Prepare and measure logical qubits during computation
    Apply universal instructions to logical qubits
    Decode measurements from logical qubits in real time and guide subsequent operations
    Scale modularly across hundreds or thousands of logical qubits
    Be efficient enough to run meaningful algorithms using realistic energy and infrastructure resources

    Aside from the technological challenges that quantum computer makers are facing, there may also be some market challenges. “Locating suitable use cases for quantum computers could be the biggest challenge,” Morningstar’s Yang maintained.
    “Only certain computing workloads, such as random circuit sampling, can fully unleash the computing power of quantum computers and show their advantage over the traditional supercomputers we have now,” he said. “However, workloads like RCS are not very commercially useful, and we believe commercial relevance is one of the key factors that determine the total market size for quantum computers.”
    Q-Day Approaching Faster Than Expected
    For years now, organizations have been told they need to prepare for “Q-Day” — the day a quantum computer will be able to crack all the encryption they use to keep their data secure. This IBM announcement suggests the window for action to protect data may be closing faster than many anticipated.
    “This absolutely adds urgency and credibility to the security expert guidance on post-quantum encryption being factored into their planning now,” said Dave Krauthamer, field CTO of QuSecure, maker of quantum-safe security solutions, in San Mateo, Calif.
    “IBM’s move to create a large-scale fault-tolerant quantum computer by 2029 is indicative of the timeline collapsing,” he told TechNewsWorld. “A fault-tolerant quantum computer of this magnitude could be well on the path to crack asymmetric ciphers sooner than anyone thinks.”

    “Security leaders need to take everything connected to post-quantum encryption as a serious measure and work it into their security plans now — not later,” he said.
    Roger Grimes, a defense evangelist with KnowBe4, a security awareness training provider in Clearwater, Fla., pointed out that IBM is just the latest in a surge of quantum companies announcing quickly forthcoming computational breakthroughs within a few years.
    “It leads to the question of whether the U.S. government’s original PQCpreparation date of 2030 is still a safe date,” he told TechNewsWorld.
    “It’s starting to feel a lot more risky for any company to wait until 2030 to be prepared against quantum attacks. It also flies in the face of the latest cybersecurity EOthat relaxed PQC preparation rules as compared to Biden’s last EO PQC standard order, which told U.S. agencies to transition to PQC ASAP.”
    “Most US companies are doing zero to prepare for Q-Day attacks,” he declared. “The latest executive order seems to tell U.S. agencies — and indirectly, all U.S. businesses — that they have more time to prepare. It’s going to cause even more agencies and businesses to be less prepared during a time when it seems multiple quantum computing companies are making significant progress.”
    “It definitely feels that something is going to give soon,” he said, “and if I were a betting man, and I am, I would bet that most U.S. companies are going to be unprepared for Q-Day on the day Q-Day becomes a reality.”

    John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John.

    Leave a Comment

    Click here to cancel reply.
    Please sign in to post or reply to a comment. New users create a free account.

    Related Stories

    More by John P. Mello Jr.

    view all

    More in Emerging Tech
    #ibm #plans #largescale #faulttolerant #quantum
    IBM Plans Large-Scale Fault-Tolerant Quantum Computer by 2029
    IBM Plans Large-Scale Fault-Tolerant Quantum Computer by 2029 By John P. Mello Jr. June 11, 2025 5:00 AM PT IBM unveiled its plan to build IBM Quantum Starling, shown in this rendering. Starling is expected to be the first large-scale, fault-tolerant quantum system.ADVERTISEMENT Enterprise IT Lead Generation Services Fuel Your Pipeline. Close More Deals. Our full-service marketing programs deliver sales-ready leads. 100% Satisfaction Guarantee! Learn more. IBM revealed Tuesday its roadmap for bringing a large-scale, fault-tolerant quantum computer, IBM Quantum Starling, online by 2029, which is significantly earlier than many technologists thought possible. The company predicts that when its new Starling computer is up and running, it will be capable of performing 20,000 times more operations than today’s quantum computers — a computational state so vast it would require the memory of more than a quindecillionof the world’s most powerful supercomputers to represent. “IBM is charting the next frontier in quantum computing,” Big Blue CEO Arvind Krishna said in a statement. “Our expertise across mathematics, physics, and engineering is paving the way for a large-scale, fault-tolerant quantum computer — one that will solve real-world challenges and unlock immense possibilities for business.” IBM’s plan to deliver a fault-tolerant quantum system by 2029 is ambitious but not implausible, especially given the rapid pace of its quantum roadmap and past milestones, observed Ensar Seker, CISO at SOCRadar, a threat intelligence company in Newark, Del. “They’ve consistently met or exceeded their qubit scaling goals, and their emphasis on modularity and error correction indicates they’re tackling the right challenges,” he told TechNewsWorld. “However, moving from thousands to millions of physical qubits with sufficient fidelity remains a steep climb.” A qubit is the fundamental unit of information in quantum computing, capable of representing a zero, a one, or both simultaneously due to quantum superposition. In practice, fault-tolerant quantum computers use clusters of physical qubits working together to form a logical qubit — a more stable unit designed to store quantum information and correct errors in real time. Realistic Roadmap Luke Yang, an equity analyst with Morningstar Research Services in Chicago, believes IBM’s roadmap is realistic. “The exact scale and error correction performance might still change between now and 2029, but overall, the goal is reasonable,” he told TechNewsWorld. “Given its reliability and professionalism, IBM’s bold claim should be taken seriously,” said Enrique Solano, co-CEO and co-founder of Kipu Quantum, a quantum algorithm company with offices in Berlin and Karlsruhe, Germany. “Of course, it may also fail, especially when considering the unpredictability of hardware complexities involved,” he told TechNewsWorld, “but companies like IBM exist for such challenges, and we should all be positively impressed by its current achievements and promised technological roadmap.” Tim Hollebeek, vice president of industry standards at DigiCert, a global digital security company, added: “IBM is a leader in this area, and not normally a company that hypes their news. This is a fast-moving industry, and success is certainly possible.” “IBM is attempting to do something that no one has ever done before and will almost certainly run into challenges,” he told TechNewsWorld, “but at this point, it is largely an engineering scaling exercise, not a research project.” “IBM has demonstrated consistent progress, has committed billion over five years to quantum computing, and the timeline is within the realm of technical feasibility,” noted John Young, COO of Quantum eMotion, a developer of quantum random number generator technology, in Saint-Laurent, Quebec, Canada. “That said,” he told TechNewsWorld, “fault-tolerant in a practical, industrial sense is a very high bar.” Solving the Quantum Error Correction Puzzle To make a quantum computer fault-tolerant, errors need to be corrected so large workloads can be run without faults. In a quantum computer, errors are reduced by clustering physical qubits to form logical qubits, which have lower error rates than the underlying physical qubits. “Error correction is a challenge,” Young said. “Logical qubits require thousands of physical qubits to function reliably. That’s a massive scaling issue.” IBM explained in its announcement that creating increasing numbers of logical qubits capable of executing quantum circuits with as few physical qubits as possible is critical to quantum computing at scale. Until today, a clear path to building such a fault-tolerant system without unrealistic engineering overhead has not been published. Alternative and previous gold-standard, error-correcting codes present fundamental engineering challenges, IBM continued. To scale, they would require an unfeasible number of physical qubits to create enough logical qubits to perform complex operations — necessitating impractical amounts of infrastructure and control electronics. This renders them unlikely to be implemented beyond small-scale experiments and devices. In two research papers released with its roadmap, IBM detailed how it will overcome the challenges of building the large-scale, fault-tolerant architecture needed for a quantum computer. One paper outlines the use of quantum low-density parity checkcodes to reduce physical qubit overhead. The other describes methods for decoding errors in real time using conventional computing. According to IBM, a practical fault-tolerant quantum architecture must: Suppress enough errors for useful algorithms to succeed Prepare and measure logical qubits during computation Apply universal instructions to logical qubits Decode measurements from logical qubits in real time and guide subsequent operations Scale modularly across hundreds or thousands of logical qubits Be efficient enough to run meaningful algorithms using realistic energy and infrastructure resources Aside from the technological challenges that quantum computer makers are facing, there may also be some market challenges. “Locating suitable use cases for quantum computers could be the biggest challenge,” Morningstar’s Yang maintained. “Only certain computing workloads, such as random circuit sampling, can fully unleash the computing power of quantum computers and show their advantage over the traditional supercomputers we have now,” he said. “However, workloads like RCS are not very commercially useful, and we believe commercial relevance is one of the key factors that determine the total market size for quantum computers.” Q-Day Approaching Faster Than Expected For years now, organizations have been told they need to prepare for “Q-Day” — the day a quantum computer will be able to crack all the encryption they use to keep their data secure. This IBM announcement suggests the window for action to protect data may be closing faster than many anticipated. “This absolutely adds urgency and credibility to the security expert guidance on post-quantum encryption being factored into their planning now,” said Dave Krauthamer, field CTO of QuSecure, maker of quantum-safe security solutions, in San Mateo, Calif. “IBM’s move to create a large-scale fault-tolerant quantum computer by 2029 is indicative of the timeline collapsing,” he told TechNewsWorld. “A fault-tolerant quantum computer of this magnitude could be well on the path to crack asymmetric ciphers sooner than anyone thinks.” “Security leaders need to take everything connected to post-quantum encryption as a serious measure and work it into their security plans now — not later,” he said. Roger Grimes, a defense evangelist with KnowBe4, a security awareness training provider in Clearwater, Fla., pointed out that IBM is just the latest in a surge of quantum companies announcing quickly forthcoming computational breakthroughs within a few years. “It leads to the question of whether the U.S. government’s original PQCpreparation date of 2030 is still a safe date,” he told TechNewsWorld. “It’s starting to feel a lot more risky for any company to wait until 2030 to be prepared against quantum attacks. It also flies in the face of the latest cybersecurity EOthat relaxed PQC preparation rules as compared to Biden’s last EO PQC standard order, which told U.S. agencies to transition to PQC ASAP.” “Most US companies are doing zero to prepare for Q-Day attacks,” he declared. “The latest executive order seems to tell U.S. agencies — and indirectly, all U.S. businesses — that they have more time to prepare. It’s going to cause even more agencies and businesses to be less prepared during a time when it seems multiple quantum computing companies are making significant progress.” “It definitely feels that something is going to give soon,” he said, “and if I were a betting man, and I am, I would bet that most U.S. companies are going to be unprepared for Q-Day on the day Q-Day becomes a reality.” John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John. Leave a Comment Click here to cancel reply. Please sign in to post or reply to a comment. New users create a free account. Related Stories More by John P. Mello Jr. view all More in Emerging Tech #ibm #plans #largescale #faulttolerant #quantum
    WWW.TECHNEWSWORLD.COM
    IBM Plans Large-Scale Fault-Tolerant Quantum Computer by 2029
    IBM Plans Large-Scale Fault-Tolerant Quantum Computer by 2029 By John P. Mello Jr. June 11, 2025 5:00 AM PT IBM unveiled its plan to build IBM Quantum Starling, shown in this rendering. Starling is expected to be the first large-scale, fault-tolerant quantum system. (Image Credit: IBM) ADVERTISEMENT Enterprise IT Lead Generation Services Fuel Your Pipeline. Close More Deals. Our full-service marketing programs deliver sales-ready leads. 100% Satisfaction Guarantee! Learn more. IBM revealed Tuesday its roadmap for bringing a large-scale, fault-tolerant quantum computer, IBM Quantum Starling, online by 2029, which is significantly earlier than many technologists thought possible. The company predicts that when its new Starling computer is up and running, it will be capable of performing 20,000 times more operations than today’s quantum computers — a computational state so vast it would require the memory of more than a quindecillion (10⁴⁸) of the world’s most powerful supercomputers to represent. “IBM is charting the next frontier in quantum computing,” Big Blue CEO Arvind Krishna said in a statement. “Our expertise across mathematics, physics, and engineering is paving the way for a large-scale, fault-tolerant quantum computer — one that will solve real-world challenges and unlock immense possibilities for business.” IBM’s plan to deliver a fault-tolerant quantum system by 2029 is ambitious but not implausible, especially given the rapid pace of its quantum roadmap and past milestones, observed Ensar Seker, CISO at SOCRadar, a threat intelligence company in Newark, Del. “They’ve consistently met or exceeded their qubit scaling goals, and their emphasis on modularity and error correction indicates they’re tackling the right challenges,” he told TechNewsWorld. “However, moving from thousands to millions of physical qubits with sufficient fidelity remains a steep climb.” A qubit is the fundamental unit of information in quantum computing, capable of representing a zero, a one, or both simultaneously due to quantum superposition. In practice, fault-tolerant quantum computers use clusters of physical qubits working together to form a logical qubit — a more stable unit designed to store quantum information and correct errors in real time. Realistic Roadmap Luke Yang, an equity analyst with Morningstar Research Services in Chicago, believes IBM’s roadmap is realistic. “The exact scale and error correction performance might still change between now and 2029, but overall, the goal is reasonable,” he told TechNewsWorld. “Given its reliability and professionalism, IBM’s bold claim should be taken seriously,” said Enrique Solano, co-CEO and co-founder of Kipu Quantum, a quantum algorithm company with offices in Berlin and Karlsruhe, Germany. “Of course, it may also fail, especially when considering the unpredictability of hardware complexities involved,” he told TechNewsWorld, “but companies like IBM exist for such challenges, and we should all be positively impressed by its current achievements and promised technological roadmap.” Tim Hollebeek, vice president of industry standards at DigiCert, a global digital security company, added: “IBM is a leader in this area, and not normally a company that hypes their news. This is a fast-moving industry, and success is certainly possible.” “IBM is attempting to do something that no one has ever done before and will almost certainly run into challenges,” he told TechNewsWorld, “but at this point, it is largely an engineering scaling exercise, not a research project.” “IBM has demonstrated consistent progress, has committed $30 billion over five years to quantum computing, and the timeline is within the realm of technical feasibility,” noted John Young, COO of Quantum eMotion, a developer of quantum random number generator technology, in Saint-Laurent, Quebec, Canada. “That said,” he told TechNewsWorld, “fault-tolerant in a practical, industrial sense is a very high bar.” Solving the Quantum Error Correction Puzzle To make a quantum computer fault-tolerant, errors need to be corrected so large workloads can be run without faults. In a quantum computer, errors are reduced by clustering physical qubits to form logical qubits, which have lower error rates than the underlying physical qubits. “Error correction is a challenge,” Young said. “Logical qubits require thousands of physical qubits to function reliably. That’s a massive scaling issue.” IBM explained in its announcement that creating increasing numbers of logical qubits capable of executing quantum circuits with as few physical qubits as possible is critical to quantum computing at scale. Until today, a clear path to building such a fault-tolerant system without unrealistic engineering overhead has not been published. Alternative and previous gold-standard, error-correcting codes present fundamental engineering challenges, IBM continued. To scale, they would require an unfeasible number of physical qubits to create enough logical qubits to perform complex operations — necessitating impractical amounts of infrastructure and control electronics. This renders them unlikely to be implemented beyond small-scale experiments and devices. In two research papers released with its roadmap, IBM detailed how it will overcome the challenges of building the large-scale, fault-tolerant architecture needed for a quantum computer. One paper outlines the use of quantum low-density parity check (qLDPC) codes to reduce physical qubit overhead. The other describes methods for decoding errors in real time using conventional computing. According to IBM, a practical fault-tolerant quantum architecture must: Suppress enough errors for useful algorithms to succeed Prepare and measure logical qubits during computation Apply universal instructions to logical qubits Decode measurements from logical qubits in real time and guide subsequent operations Scale modularly across hundreds or thousands of logical qubits Be efficient enough to run meaningful algorithms using realistic energy and infrastructure resources Aside from the technological challenges that quantum computer makers are facing, there may also be some market challenges. “Locating suitable use cases for quantum computers could be the biggest challenge,” Morningstar’s Yang maintained. “Only certain computing workloads, such as random circuit sampling [RCS], can fully unleash the computing power of quantum computers and show their advantage over the traditional supercomputers we have now,” he said. “However, workloads like RCS are not very commercially useful, and we believe commercial relevance is one of the key factors that determine the total market size for quantum computers.” Q-Day Approaching Faster Than Expected For years now, organizations have been told they need to prepare for “Q-Day” — the day a quantum computer will be able to crack all the encryption they use to keep their data secure. This IBM announcement suggests the window for action to protect data may be closing faster than many anticipated. “This absolutely adds urgency and credibility to the security expert guidance on post-quantum encryption being factored into their planning now,” said Dave Krauthamer, field CTO of QuSecure, maker of quantum-safe security solutions, in San Mateo, Calif. “IBM’s move to create a large-scale fault-tolerant quantum computer by 2029 is indicative of the timeline collapsing,” he told TechNewsWorld. “A fault-tolerant quantum computer of this magnitude could be well on the path to crack asymmetric ciphers sooner than anyone thinks.” “Security leaders need to take everything connected to post-quantum encryption as a serious measure and work it into their security plans now — not later,” he said. Roger Grimes, a defense evangelist with KnowBe4, a security awareness training provider in Clearwater, Fla., pointed out that IBM is just the latest in a surge of quantum companies announcing quickly forthcoming computational breakthroughs within a few years. “It leads to the question of whether the U.S. government’s original PQC [post-quantum cryptography] preparation date of 2030 is still a safe date,” he told TechNewsWorld. “It’s starting to feel a lot more risky for any company to wait until 2030 to be prepared against quantum attacks. It also flies in the face of the latest cybersecurity EO [Executive Order] that relaxed PQC preparation rules as compared to Biden’s last EO PQC standard order, which told U.S. agencies to transition to PQC ASAP.” “Most US companies are doing zero to prepare for Q-Day attacks,” he declared. “The latest executive order seems to tell U.S. agencies — and indirectly, all U.S. businesses — that they have more time to prepare. It’s going to cause even more agencies and businesses to be less prepared during a time when it seems multiple quantum computing companies are making significant progress.” “It definitely feels that something is going to give soon,” he said, “and if I were a betting man, and I am, I would bet that most U.S. companies are going to be unprepared for Q-Day on the day Q-Day becomes a reality.” John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John. Leave a Comment Click here to cancel reply. Please sign in to post or reply to a comment. New users create a free account. Related Stories More by John P. Mello Jr. view all More in Emerging Tech
    0 Yorumlar 0 hisse senetleri
  • CIOs baffled by ‘buzzwords, hype and confusion’ around AI

    Technology leaders are baffled by a “cacophony” of “buzzwords, hype and confusion” over the benefits of artificial intelligence, according to the founder and CEO of technology company Pegasystems.
    Alan Trefler, who is known for his prowess at chess and ping pong, as well as running a bn turnover tech company, spends much of his time meeting clients, CIOs and business leaders.
    “I think CIOs are struggling to understand all of the buzzwords, hype and confusion that exists,” he said.
    “The words AI and agentic are being thrown around in this great cacophony and they don’t know what it means. I hear that constantly.”
    CIOs are under pressure from their CEOs, who are convinced AI will offer something valuable.
    “CIOs are really hungry for pragmatic and practical solutions, and in the absence of those, many of them are doing a lot of experimentation,” said Trefler.
    Companies are looking at large language models to summarise documents, or to help stimulate ideas for knowledge workers, or generate first drafts of reports – all of which will save time and make people more productive.

    But Trefler said companies are wary of letting AI loose on critical business applications, because it’s just too unpredictable and prone to hallucinations.
    “There is a lot of fear over handing things over to something that no one understands exactly how it works, and that is the absolute state of play when it comes to general AI models,” he said.
    Trefler is scathing about big tech companies that are pushing AI agents and large language models for business-critical applications. “I think they have taken an expedient but short-sighted path,” he said.
    “I believe the idea that you will turn over critical business operations to an agent, when those operations have to be predictable, reliable, precise and fair to clients … is something that is full of issues, not just in the short term, but structurally.”
    One of the problems is that generative AI models are extraordinarily sensitive to the data they are trained on and the construction of the prompts used to instruct them. A slight change in a prompt or in the training data can lead to a very different outcome.
    For example, a business banking application might learn its customer is a bit richer or a bit poorer than expected.
    “You could easily imagine the prompt deciding to change the interest rate charged, whether that was what the institution wanted or whether it would be legal according to the various regulations that lenders must comply with,” said Trefler.

    Trefler said Pega has taken a different approach to some other technology suppliers in the way it adds AI into business applications.
    Rather than using AI agents to solve problems in real time, AI agents do their thinking in advance.
    Business experts can use them to help them co-design business processes to perform anything from assessing a loan application, giving an offer to a valued customer, or sending out an invoice.
    Companies can still deploy AI chatbots and bots capable of answering queries on the phone. Their job is not to work out the solution from scratch for every enquiry, but to decide which is the right pre-written process to follow.
    As Trefler put it, design agents can create “dozens and dozens” of workflows to handle all the actions a company needs to take care of its customers.
    “You just use the natural language model for semantics to be able to handle the miracle of getting the language right, but tie that language to workflows, so that you have reliable, predictable, regulatory-approved ways to execute,” he said.

    Large language modelsare not always the right solution. Trefler demonstrated how ChatGPT 4.0 tried and failed to solve a chess puzzle. The LLM repeatedly suggested impossible or illegal moves, despite Trefler’s corrections. On the other hand, another AI tool, Stockfish, a dedicated chess engine, solved the problem instantly.
    The other drawback with LLMs is that they consume vast amounts of energy. That means if AI agents are reasoning during “run time”, they are going to consume hundreds of times more electricity than an AI agent that simply selects from pre-determined workflows, said Trefler.
    “ChatGPT is inherently, enormously consumptive … as it’s answering your question, its firing literally hundreds of millions to trillions of nodes,” he said. “All of that takeselectricity.”
    Using an employee pay claim as an example, Trefler said a better alternative is to generate, say, 30 alternative workflows to cover the major variations found in a pay claim.
    That gives you “real specificity and real efficiency”, he said. “And it’s a very different approach to turning a process over to a machine with a prompt and letting the machine reason it through every single time.”
    “If you go down the philosophy of using a graphics processing unitto do the creation of a workflow and a workflow engine to execute the workflow, the workflow engine takes a 200th of the electricity because there is no reasoning,” said Trefler.
    He is clear that the growing use of AI will have a profound effect on the jobs market, and that whole categories of jobs will disappear.
    The need for translators, for example, is likely to dry up by 2027 as AI systems become better at translating spoken and written language. Google’s real-time translator is already “frighteningly good” and improving.
    Pega now plans to work more closely with its network of system integrators, including Accenture and Cognizant to deliver AI services to businesses.

    An initiative launched last week will allow system integrators to incorporate their own best practices and tools into Pega’s rapid workflow development tools. The move will mean Pega’s technology reaches a wider range of businesses.
    Under the programme, known as Powered by Pega Blueprint, system integrators will be able to deploy customised versions of Blueprint.
    They can use the tool to reverse-engineer ageing applications and replace them with modern AI workflows that can run on Pega’s cloud-based platform.
    “The idea is that we are looking to make this Blueprint Agent design approach available not just through us, but through a bunch of major partners supplemented with their own intellectual property,” said Trefler.
    That represents a major expansion for Pega, which has largely concentrated on supplying technology to several hundred clients, representing the top Fortune 500 companies.
    “We have never done something like this before, and I think that is going to lead to a massive shift in how this technology can go out to market,” he added.

    When AI agents behave in unexpected ways
    Iris is incredibly smart, diligent and a delight to work with. If you ask her, she will tell you she is an intern at Pegasystems, and that she lives in a lighthouse on the island of Texel, north of the Netherlands. She is, of course, an AI agent.
    When one executive at Pega emailed Iris and asked her to write a proposal for a financial services company based on his notes and internet research, Iris got to work.
    Some time later, the executive received a phone call from the company. “‘Listen, we got a proposal from Pega,’” recalled Rob Walker, vice-president at Pega, speaking at the Pegaworld conference last week. “‘It’s a good proposal, but it seems to be signed by one of your interns, and in her signature, it says she lives in a lighthouse.’ That taught us early on that agents like Iris need a safety harness.”
    The developers banned Iris from sending an email to anyone other than the person who sent the original request.
    Then Pega’s ethics department sent Iris a potentially abusive email from a Pega employee to test her response.
    Iris reasoned that the email was either a joke, abusive, or that the employee was under distress, said Walker.
    She considered forwarding the email to the employee’s manager or to HR. But both of these options were now blocked by her developers. “So what does she do? She sent an out of office,” he said. “Conflict avoidance, right? So human, but very creative.”
    #cios #baffled #buzzwords #hype #confusion
    CIOs baffled by ‘buzzwords, hype and confusion’ around AI
    Technology leaders are baffled by a “cacophony” of “buzzwords, hype and confusion” over the benefits of artificial intelligence, according to the founder and CEO of technology company Pegasystems. Alan Trefler, who is known for his prowess at chess and ping pong, as well as running a bn turnover tech company, spends much of his time meeting clients, CIOs and business leaders. “I think CIOs are struggling to understand all of the buzzwords, hype and confusion that exists,” he said. “The words AI and agentic are being thrown around in this great cacophony and they don’t know what it means. I hear that constantly.” CIOs are under pressure from their CEOs, who are convinced AI will offer something valuable. “CIOs are really hungry for pragmatic and practical solutions, and in the absence of those, many of them are doing a lot of experimentation,” said Trefler. Companies are looking at large language models to summarise documents, or to help stimulate ideas for knowledge workers, or generate first drafts of reports – all of which will save time and make people more productive. But Trefler said companies are wary of letting AI loose on critical business applications, because it’s just too unpredictable and prone to hallucinations. “There is a lot of fear over handing things over to something that no one understands exactly how it works, and that is the absolute state of play when it comes to general AI models,” he said. Trefler is scathing about big tech companies that are pushing AI agents and large language models for business-critical applications. “I think they have taken an expedient but short-sighted path,” he said. “I believe the idea that you will turn over critical business operations to an agent, when those operations have to be predictable, reliable, precise and fair to clients … is something that is full of issues, not just in the short term, but structurally.” One of the problems is that generative AI models are extraordinarily sensitive to the data they are trained on and the construction of the prompts used to instruct them. A slight change in a prompt or in the training data can lead to a very different outcome. For example, a business banking application might learn its customer is a bit richer or a bit poorer than expected. “You could easily imagine the prompt deciding to change the interest rate charged, whether that was what the institution wanted or whether it would be legal according to the various regulations that lenders must comply with,” said Trefler. Trefler said Pega has taken a different approach to some other technology suppliers in the way it adds AI into business applications. Rather than using AI agents to solve problems in real time, AI agents do their thinking in advance. Business experts can use them to help them co-design business processes to perform anything from assessing a loan application, giving an offer to a valued customer, or sending out an invoice. Companies can still deploy AI chatbots and bots capable of answering queries on the phone. Their job is not to work out the solution from scratch for every enquiry, but to decide which is the right pre-written process to follow. As Trefler put it, design agents can create “dozens and dozens” of workflows to handle all the actions a company needs to take care of its customers. “You just use the natural language model for semantics to be able to handle the miracle of getting the language right, but tie that language to workflows, so that you have reliable, predictable, regulatory-approved ways to execute,” he said. Large language modelsare not always the right solution. Trefler demonstrated how ChatGPT 4.0 tried and failed to solve a chess puzzle. The LLM repeatedly suggested impossible or illegal moves, despite Trefler’s corrections. On the other hand, another AI tool, Stockfish, a dedicated chess engine, solved the problem instantly. The other drawback with LLMs is that they consume vast amounts of energy. That means if AI agents are reasoning during “run time”, they are going to consume hundreds of times more electricity than an AI agent that simply selects from pre-determined workflows, said Trefler. “ChatGPT is inherently, enormously consumptive … as it’s answering your question, its firing literally hundreds of millions to trillions of nodes,” he said. “All of that takeselectricity.” Using an employee pay claim as an example, Trefler said a better alternative is to generate, say, 30 alternative workflows to cover the major variations found in a pay claim. That gives you “real specificity and real efficiency”, he said. “And it’s a very different approach to turning a process over to a machine with a prompt and letting the machine reason it through every single time.” “If you go down the philosophy of using a graphics processing unitto do the creation of a workflow and a workflow engine to execute the workflow, the workflow engine takes a 200th of the electricity because there is no reasoning,” said Trefler. He is clear that the growing use of AI will have a profound effect on the jobs market, and that whole categories of jobs will disappear. The need for translators, for example, is likely to dry up by 2027 as AI systems become better at translating spoken and written language. Google’s real-time translator is already “frighteningly good” and improving. Pega now plans to work more closely with its network of system integrators, including Accenture and Cognizant to deliver AI services to businesses. An initiative launched last week will allow system integrators to incorporate their own best practices and tools into Pega’s rapid workflow development tools. The move will mean Pega’s technology reaches a wider range of businesses. Under the programme, known as Powered by Pega Blueprint, system integrators will be able to deploy customised versions of Blueprint. They can use the tool to reverse-engineer ageing applications and replace them with modern AI workflows that can run on Pega’s cloud-based platform. “The idea is that we are looking to make this Blueprint Agent design approach available not just through us, but through a bunch of major partners supplemented with their own intellectual property,” said Trefler. That represents a major expansion for Pega, which has largely concentrated on supplying technology to several hundred clients, representing the top Fortune 500 companies. “We have never done something like this before, and I think that is going to lead to a massive shift in how this technology can go out to market,” he added. When AI agents behave in unexpected ways Iris is incredibly smart, diligent and a delight to work with. If you ask her, she will tell you she is an intern at Pegasystems, and that she lives in a lighthouse on the island of Texel, north of the Netherlands. She is, of course, an AI agent. When one executive at Pega emailed Iris and asked her to write a proposal for a financial services company based on his notes and internet research, Iris got to work. Some time later, the executive received a phone call from the company. “‘Listen, we got a proposal from Pega,’” recalled Rob Walker, vice-president at Pega, speaking at the Pegaworld conference last week. “‘It’s a good proposal, but it seems to be signed by one of your interns, and in her signature, it says she lives in a lighthouse.’ That taught us early on that agents like Iris need a safety harness.” The developers banned Iris from sending an email to anyone other than the person who sent the original request. Then Pega’s ethics department sent Iris a potentially abusive email from a Pega employee to test her response. Iris reasoned that the email was either a joke, abusive, or that the employee was under distress, said Walker. She considered forwarding the email to the employee’s manager or to HR. But both of these options were now blocked by her developers. “So what does she do? She sent an out of office,” he said. “Conflict avoidance, right? So human, but very creative.” #cios #baffled #buzzwords #hype #confusion
    WWW.COMPUTERWEEKLY.COM
    CIOs baffled by ‘buzzwords, hype and confusion’ around AI
    Technology leaders are baffled by a “cacophony” of “buzzwords, hype and confusion” over the benefits of artificial intelligence (AI), according to the founder and CEO of technology company Pegasystems. Alan Trefler, who is known for his prowess at chess and ping pong, as well as running a $1.5bn turnover tech company, spends much of his time meeting clients, CIOs and business leaders. “I think CIOs are struggling to understand all of the buzzwords, hype and confusion that exists,” he said. “The words AI and agentic are being thrown around in this great cacophony and they don’t know what it means. I hear that constantly.” CIOs are under pressure from their CEOs, who are convinced AI will offer something valuable. “CIOs are really hungry for pragmatic and practical solutions, and in the absence of those, many of them are doing a lot of experimentation,” said Trefler. Companies are looking at large language models to summarise documents, or to help stimulate ideas for knowledge workers, or generate first drafts of reports – all of which will save time and make people more productive. But Trefler said companies are wary of letting AI loose on critical business applications, because it’s just too unpredictable and prone to hallucinations. “There is a lot of fear over handing things over to something that no one understands exactly how it works, and that is the absolute state of play when it comes to general AI models,” he said. Trefler is scathing about big tech companies that are pushing AI agents and large language models for business-critical applications. “I think they have taken an expedient but short-sighted path,” he said. “I believe the idea that you will turn over critical business operations to an agent, when those operations have to be predictable, reliable, precise and fair to clients … is something that is full of issues, not just in the short term, but structurally.” One of the problems is that generative AI models are extraordinarily sensitive to the data they are trained on and the construction of the prompts used to instruct them. A slight change in a prompt or in the training data can lead to a very different outcome. For example, a business banking application might learn its customer is a bit richer or a bit poorer than expected. “You could easily imagine the prompt deciding to change the interest rate charged, whether that was what the institution wanted or whether it would be legal according to the various regulations that lenders must comply with,” said Trefler. Trefler said Pega has taken a different approach to some other technology suppliers in the way it adds AI into business applications. Rather than using AI agents to solve problems in real time, AI agents do their thinking in advance. Business experts can use them to help them co-design business processes to perform anything from assessing a loan application, giving an offer to a valued customer, or sending out an invoice. Companies can still deploy AI chatbots and bots capable of answering queries on the phone. Their job is not to work out the solution from scratch for every enquiry, but to decide which is the right pre-written process to follow. As Trefler put it, design agents can create “dozens and dozens” of workflows to handle all the actions a company needs to take care of its customers. “You just use the natural language model for semantics to be able to handle the miracle of getting the language right, but tie that language to workflows, so that you have reliable, predictable, regulatory-approved ways to execute,” he said. Large language models (LLMs) are not always the right solution. Trefler demonstrated how ChatGPT 4.0 tried and failed to solve a chess puzzle. The LLM repeatedly suggested impossible or illegal moves, despite Trefler’s corrections. On the other hand, another AI tool, Stockfish, a dedicated chess engine, solved the problem instantly. The other drawback with LLMs is that they consume vast amounts of energy. That means if AI agents are reasoning during “run time”, they are going to consume hundreds of times more electricity than an AI agent that simply selects from pre-determined workflows, said Trefler. “ChatGPT is inherently, enormously consumptive … as it’s answering your question, its firing literally hundreds of millions to trillions of nodes,” he said. “All of that takes [large quantities of] electricity.” Using an employee pay claim as an example, Trefler said a better alternative is to generate, say, 30 alternative workflows to cover the major variations found in a pay claim. That gives you “real specificity and real efficiency”, he said. “And it’s a very different approach to turning a process over to a machine with a prompt and letting the machine reason it through every single time.” “If you go down the philosophy of using a graphics processing unit [GPU] to do the creation of a workflow and a workflow engine to execute the workflow, the workflow engine takes a 200th of the electricity because there is no reasoning,” said Trefler. He is clear that the growing use of AI will have a profound effect on the jobs market, and that whole categories of jobs will disappear. The need for translators, for example, is likely to dry up by 2027 as AI systems become better at translating spoken and written language. Google’s real-time translator is already “frighteningly good” and improving. Pega now plans to work more closely with its network of system integrators, including Accenture and Cognizant to deliver AI services to businesses. An initiative launched last week will allow system integrators to incorporate their own best practices and tools into Pega’s rapid workflow development tools. The move will mean Pega’s technology reaches a wider range of businesses. Under the programme, known as Powered by Pega Blueprint, system integrators will be able to deploy customised versions of Blueprint. They can use the tool to reverse-engineer ageing applications and replace them with modern AI workflows that can run on Pega’s cloud-based platform. “The idea is that we are looking to make this Blueprint Agent design approach available not just through us, but through a bunch of major partners supplemented with their own intellectual property,” said Trefler. That represents a major expansion for Pega, which has largely concentrated on supplying technology to several hundred clients, representing the top Fortune 500 companies. “We have never done something like this before, and I think that is going to lead to a massive shift in how this technology can go out to market,” he added. When AI agents behave in unexpected ways Iris is incredibly smart, diligent and a delight to work with. If you ask her, she will tell you she is an intern at Pegasystems, and that she lives in a lighthouse on the island of Texel, north of the Netherlands. She is, of course, an AI agent. When one executive at Pega emailed Iris and asked her to write a proposal for a financial services company based on his notes and internet research, Iris got to work. Some time later, the executive received a phone call from the company. “‘Listen, we got a proposal from Pega,’” recalled Rob Walker, vice-president at Pega, speaking at the Pegaworld conference last week. “‘It’s a good proposal, but it seems to be signed by one of your interns, and in her signature, it says she lives in a lighthouse.’ That taught us early on that agents like Iris need a safety harness.” The developers banned Iris from sending an email to anyone other than the person who sent the original request. Then Pega’s ethics department sent Iris a potentially abusive email from a Pega employee to test her response. Iris reasoned that the email was either a joke, abusive, or that the employee was under distress, said Walker. She considered forwarding the email to the employee’s manager or to HR. But both of these options were now blocked by her developers. “So what does she do? She sent an out of office,” he said. “Conflict avoidance, right? So human, but very creative.”
    0 Yorumlar 0 hisse senetleri
  • Christian Marclay explores a universe of thresholds in his latest single-channel montage of film clips

    DoorsChristian Marclay
    Institute of Contemporary Art Boston
    Through September 1, 2025Brooklyn Museum

    Through April 12, 2026On the screen, a movie clip plays of a character entering through a door to leave out another. It cuts to another clip of someone else doing the same thing over and over, all sourced from a panoply of Western cinema. The audience, sitting for an unknown amount of time, watches this shape-shifting protagonist from different cultural periods come and go, as the film endlessly loops.

    So goes Christian Marclay’s latest single-channel film, Doors, currently exhibited for the first time in the United States at the Institute of Contemporary Art Boston.. Assembled over ten years, the film is a dizzying feat, a carefully crafted montage of film clips revolving around the simple premise of someone entering through a door and then leaving out a door. In the exhibition, Marclay writes, “Doors are fascinating objects, rich with symbolism.” Here, he shows hundreds of them, examining through film how the simple act of moving through a threshold multiplied endlessly creates a profoundly new reading of what said threshold signifies.
    On paper, this may sound like an extremely jarring experience. But Marclay—a visual artist, composer, and DJ whose previous works such as The Clockinvolved similar mega-montages of disparate film clips—has a sensitive touch. The sequences feel incredibly smooth, the montage carefully constructed to mimic continuity as closely as possible. This is even more impressive when one imagines the constraints that a door’s movement offers; it must open and close a certain direction, with particular types of hinges or means of swinging. It makes the seamlessness of the film all the more fascinating to dissect. When a tiny wooden doorframe cuts to a large double steel door, my brain had no issue at all registering a sense of continued motion through the frame—a form of cinematic magic.
    Christian Marclay, Doors, 2022. Single-channel video projection.
    Watching the clips, there seemed to be no discernible meta narrative—simply movement through doors. Nevertheless, Marclay is a master of controlling tone. Though the relentlessness of watching the loops does create an overall feeling of tension that the film is clearly playing on, there are often moments of levity that interrupt, giving visitors a chance to breathe. The pacing too, swings from a person rushing in and out, to a slow stroll between doors in a corridor. It leaves one musing on just how ubiquitous this simple action is, and how mutable these simple acts of pulling a door and stepping inside can be. Sometimes mundane, sometimes thrilling, sometimes in anticipation, sometimes in search—Doors invites us to reflect on our own interaction with these objects, and with the very act of stepping through a doorframe.

    Much of the experience rests on the soundscape and music, which is equally—if not more heavily—important in creating the transition across clips. Marclay’s previous work leaned heavily on his interest in aural media; this added dimension only enriches Doors and elevates it beyond a formal visual study of clips that match each other. The film bleeds music from one scene to another, sometimes prematurely, to make believable the movement of one character across multiple movies. This overlap of sounds is essentially an echo of the space we left behind and are entering into. We as the audience almost believe—even if just for a second—that the transition is real.
    The effect is powerful and calls to mind several references. No doubt Doors owes some degree of inspiration to the lineage of surrealist art, perhaps in the work of Magritte or Duchamp. For those steeped in architecture, one may think of Bernard Tschumi’s Manhattan Transcripts, where his transcriptions of events, spaces, and movements similarly both shatter and call to attention simple spatial sequences. One may also be reminded of the work of Situationist International, particularly the psychogeography of Guy Debord. I confess that my first thought was theequally famous door-chase scene in Monsters, Inc. But regardless of what corollaries one may conjure, Doors has a wholly unique feel. It is simplistic and singular in constructing its webbed world.
    Installation view, Christian Marclay: Doors, the Institute of Contemporary Art/Boston, 2025.But what exactly are we to take away from this world? In an interview with Artforum, Marclay declares, “I’m building in people’s minds an architecture in which to get lost.” The clip evokes a certain act of labyrinthian mapping—or perhaps a mode of perpetual resetting. I began to imagine this almost as a non-Euclidean enfilade of sorts where each room invites you to quickly grasp a new environment and then very quickly anticipate what may be in the next. With the understanding that you can’t backtrack, and the unpredictability of the next door taking you anywhere, the film holds you in total suspense. The production of new spaces and new architecture is activated all at once in the moment someone steps into a new doorway.

    All of this is without even mentioning the chosen films themselves. There is a degree to which the pop-culture element of Marclay’s work makes certain moments click—I can’t help but laugh as I watch Adam Sandler in Punch Drunk Love exit a door and emerge as Bette Davis in All About Eve. But to a degree, I also see the references being secondary, and certainly unneeded to understand the visceral experience Marclay crafts. It helps that, aside from a couple of jarring character movements or one-off spoken jokes, the movement is repetitive and universal.
    Doors runs on a continuous loop. I sat watching for just under an hour before convincing myself that I would never find any appropriate or correct time to leave. Instead, I could sit endlessly and reflect on each character movement, each new reveal of a room. Is the door the most important architectural element in creating space? Marclay makes a strong case for it with this piece.
    Harish Krishnamoorthy is an architectural and urban designer based in Cambridge, Massachusetts, and Bangalore, India. He is an editor at PAIRS.
    #christian #marclay #explores #universe #thresholds
    Christian Marclay explores a universe of thresholds in his latest single-channel montage of film clips
    DoorsChristian Marclay Institute of Contemporary Art Boston Through September 1, 2025Brooklyn Museum Through April 12, 2026On the screen, a movie clip plays of a character entering through a door to leave out another. It cuts to another clip of someone else doing the same thing over and over, all sourced from a panoply of Western cinema. The audience, sitting for an unknown amount of time, watches this shape-shifting protagonist from different cultural periods come and go, as the film endlessly loops. So goes Christian Marclay’s latest single-channel film, Doors, currently exhibited for the first time in the United States at the Institute of Contemporary Art Boston.. Assembled over ten years, the film is a dizzying feat, a carefully crafted montage of film clips revolving around the simple premise of someone entering through a door and then leaving out a door. In the exhibition, Marclay writes, “Doors are fascinating objects, rich with symbolism.” Here, he shows hundreds of them, examining through film how the simple act of moving through a threshold multiplied endlessly creates a profoundly new reading of what said threshold signifies. On paper, this may sound like an extremely jarring experience. But Marclay—a visual artist, composer, and DJ whose previous works such as The Clockinvolved similar mega-montages of disparate film clips—has a sensitive touch. The sequences feel incredibly smooth, the montage carefully constructed to mimic continuity as closely as possible. This is even more impressive when one imagines the constraints that a door’s movement offers; it must open and close a certain direction, with particular types of hinges or means of swinging. It makes the seamlessness of the film all the more fascinating to dissect. When a tiny wooden doorframe cuts to a large double steel door, my brain had no issue at all registering a sense of continued motion through the frame—a form of cinematic magic. Christian Marclay, Doors, 2022. Single-channel video projection. Watching the clips, there seemed to be no discernible meta narrative—simply movement through doors. Nevertheless, Marclay is a master of controlling tone. Though the relentlessness of watching the loops does create an overall feeling of tension that the film is clearly playing on, there are often moments of levity that interrupt, giving visitors a chance to breathe. The pacing too, swings from a person rushing in and out, to a slow stroll between doors in a corridor. It leaves one musing on just how ubiquitous this simple action is, and how mutable these simple acts of pulling a door and stepping inside can be. Sometimes mundane, sometimes thrilling, sometimes in anticipation, sometimes in search—Doors invites us to reflect on our own interaction with these objects, and with the very act of stepping through a doorframe. Much of the experience rests on the soundscape and music, which is equally—if not more heavily—important in creating the transition across clips. Marclay’s previous work leaned heavily on his interest in aural media; this added dimension only enriches Doors and elevates it beyond a formal visual study of clips that match each other. The film bleeds music from one scene to another, sometimes prematurely, to make believable the movement of one character across multiple movies. This overlap of sounds is essentially an echo of the space we left behind and are entering into. We as the audience almost believe—even if just for a second—that the transition is real. The effect is powerful and calls to mind several references. No doubt Doors owes some degree of inspiration to the lineage of surrealist art, perhaps in the work of Magritte or Duchamp. For those steeped in architecture, one may think of Bernard Tschumi’s Manhattan Transcripts, where his transcriptions of events, spaces, and movements similarly both shatter and call to attention simple spatial sequences. One may also be reminded of the work of Situationist International, particularly the psychogeography of Guy Debord. I confess that my first thought was theequally famous door-chase scene in Monsters, Inc. But regardless of what corollaries one may conjure, Doors has a wholly unique feel. It is simplistic and singular in constructing its webbed world. Installation view, Christian Marclay: Doors, the Institute of Contemporary Art/Boston, 2025.But what exactly are we to take away from this world? In an interview with Artforum, Marclay declares, “I’m building in people’s minds an architecture in which to get lost.” The clip evokes a certain act of labyrinthian mapping—or perhaps a mode of perpetual resetting. I began to imagine this almost as a non-Euclidean enfilade of sorts where each room invites you to quickly grasp a new environment and then very quickly anticipate what may be in the next. With the understanding that you can’t backtrack, and the unpredictability of the next door taking you anywhere, the film holds you in total suspense. The production of new spaces and new architecture is activated all at once in the moment someone steps into a new doorway. All of this is without even mentioning the chosen films themselves. There is a degree to which the pop-culture element of Marclay’s work makes certain moments click—I can’t help but laugh as I watch Adam Sandler in Punch Drunk Love exit a door and emerge as Bette Davis in All About Eve. But to a degree, I also see the references being secondary, and certainly unneeded to understand the visceral experience Marclay crafts. It helps that, aside from a couple of jarring character movements or one-off spoken jokes, the movement is repetitive and universal. Doors runs on a continuous loop. I sat watching for just under an hour before convincing myself that I would never find any appropriate or correct time to leave. Instead, I could sit endlessly and reflect on each character movement, each new reveal of a room. Is the door the most important architectural element in creating space? Marclay makes a strong case for it with this piece. Harish Krishnamoorthy is an architectural and urban designer based in Cambridge, Massachusetts, and Bangalore, India. He is an editor at PAIRS. #christian #marclay #explores #universe #thresholds
    WWW.ARCHPAPER.COM
    Christian Marclay explores a universe of thresholds in his latest single-channel montage of film clips
    Doors (2022) Christian Marclay Institute of Contemporary Art Boston Through September 1, 2025Brooklyn Museum Through April 12, 2026On the screen, a movie clip plays of a character entering through a door to leave out another. It cuts to another clip of someone else doing the same thing over and over, all sourced from a panoply of Western cinema. The audience, sitting for an unknown amount of time, watches this shape-shifting protagonist from different cultural periods come and go, as the film endlessly loops. So goes Christian Marclay’s latest single-channel film, Doors (2022), currently exhibited for the first time in the United States at the Institute of Contemporary Art Boston. (It also premieres June 13 at the Brooklyn Museum and will run through April 12, 2026). Assembled over ten years, the film is a dizzying feat, a carefully crafted montage of film clips revolving around the simple premise of someone entering through a door and then leaving out a door. In the exhibition, Marclay writes, “Doors are fascinating objects, rich with symbolism.” Here, he shows hundreds of them, examining through film how the simple act of moving through a threshold multiplied endlessly creates a profoundly new reading of what said threshold signifies. On paper, this may sound like an extremely jarring experience. But Marclay—a visual artist, composer, and DJ whose previous works such as The Clock (2010) involved similar mega-montages of disparate film clips—has a sensitive touch. The sequences feel incredibly smooth, the montage carefully constructed to mimic continuity as closely as possible. This is even more impressive when one imagines the constraints that a door’s movement offers; it must open and close a certain direction, with particular types of hinges or means of swinging. It makes the seamlessness of the film all the more fascinating to dissect. When a tiny wooden doorframe cuts to a large double steel door, my brain had no issue at all registering a sense of continued motion through the frame—a form of cinematic magic. Christian Marclay, Doors (still), 2022. Single-channel video projection (color and black-and-white; 55:00 minutes on continuous loop). Watching the clips, there seemed to be no discernible meta narrative—simply movement through doors. Nevertheless, Marclay is a master of controlling tone. Though the relentlessness of watching the loops does create an overall feeling of tension that the film is clearly playing on, there are often moments of levity that interrupt, giving visitors a chance to breathe. The pacing too, swings from a person rushing in and out, to a slow stroll between doors in a corridor. It leaves one musing on just how ubiquitous this simple action is, and how mutable these simple acts of pulling a door and stepping inside can be. Sometimes mundane, sometimes thrilling, sometimes in anticipation, sometimes in search—Doors invites us to reflect on our own interaction with these objects, and with the very act of stepping through a doorframe. Much of the experience rests on the soundscape and music, which is equally—if not more heavily—important in creating the transition across clips. Marclay’s previous work leaned heavily on his interest in aural media; this added dimension only enriches Doors and elevates it beyond a formal visual study of clips that match each other. The film bleeds music from one scene to another, sometimes prematurely, to make believable the movement of one character across multiple movies. This overlap of sounds is essentially an echo of the space we left behind and are entering into. We as the audience almost believe—even if just for a second—that the transition is real. The effect is powerful and calls to mind several references. No doubt Doors owes some degree of inspiration to the lineage of surrealist art, perhaps in the work of Magritte or Duchamp. For those steeped in architecture, one may think of Bernard Tschumi’s Manhattan Transcripts, where his transcriptions of events, spaces, and movements similarly both shatter and call to attention simple spatial sequences. One may also be reminded of the work of Situationist International, particularly the psychogeography of Guy Debord. I confess that my first thought was the (in my view) equally famous door-chase scene in Monsters, Inc. But regardless of what corollaries one may conjure, Doors has a wholly unique feel. It is simplistic and singular in constructing its webbed world. Installation view, Christian Marclay: Doors, the Institute of Contemporary Art/Boston, 2025. (Mel Taing) But what exactly are we to take away from this world? In an interview with Artforum, Marclay declares, “I’m building in people’s minds an architecture in which to get lost.” The clip evokes a certain act of labyrinthian mapping—or perhaps a mode of perpetual resetting. I began to imagine this almost as a non-Euclidean enfilade of sorts where each room invites you to quickly grasp a new environment and then very quickly anticipate what may be in the next. With the understanding that you can’t backtrack, and the unpredictability of the next door taking you anywhere, the film holds you in total suspense. The production of new spaces and new architecture is activated all at once in the moment someone steps into a new doorway. All of this is without even mentioning the chosen films themselves. There is a degree to which the pop-culture element of Marclay’s work makes certain moments click—I can’t help but laugh as I watch Adam Sandler in Punch Drunk Love exit a door and emerge as Bette Davis in All About Eve. But to a degree, I also see the references being secondary, and certainly unneeded to understand the visceral experience Marclay crafts. It helps that, aside from a couple of jarring character movements or one-off spoken jokes, the movement is repetitive and universal. Doors runs on a continuous loop. I sat watching for just under an hour before convincing myself that I would never find any appropriate or correct time to leave. Instead, I could sit endlessly and reflect on each character movement, each new reveal of a room. Is the door the most important architectural element in creating space? Marclay makes a strong case for it with this piece. Harish Krishnamoorthy is an architectural and urban designer based in Cambridge, Massachusetts, and Bangalore, India. He is an editor at PAIRS.
    0 Yorumlar 0 hisse senetleri
  • A shortage of high-voltage power cables could stall the clean energy transition

    In a nutshell: As nations set ever more ambitious targets for renewable energy and electrification, the humble high-voltage cable has emerged as a linchpin – and a potential chokepoint – in the race to decarbonize the global economy. A Bloomberg interview with Claes Westerlind, CEO of NKT, a leading cable manufacturer based in Denmark, explains why.
    A global surge in demand for high-voltage electricity cables is threatening to stall the clean energy revolution, as the world's ability to build new wind farms, solar plants, and cross-border power links increasingly hinges on a supply chain bottleneck few outside the industry have considered. At the center of this challenge is the complex, capital-intensive process of manufacturing the giant cables that transport electricity across hundreds of miles, both over land and under the sea.
    Despite soaring demand, cable manufacturers remain cautious about expanding capacity, raising questions about whether the pace of electrification can keep up with climate ambitions, geopolitical tensions, and the practical realities of industrial investment.
    High-voltage cables are the arteries of modern power grids, carrying electrons from remote wind farms or hydroelectric dams to the cities and industries that need them. Unlike the thin wires that run through a home's walls, these cables are engineering marvels – sometimes as thick as a person's torso, armored to withstand the crushing pressure of the ocean floor, and designed to last for decades under extreme electrical and environmental stress.

    "If you look at the very high voltage direct current cable, able to carry roughly two gigawatts through two pairs of cables – that means that the equivalent of one nuclear power reactor is flowing through one cable," Westerlind told Bloomberg.
    The process of making these cables is as specialized as it is demanding. At the core is a conductor, typically made of copper or aluminum, twisted together like a rope for flexibility and strength. Around this, manufacturers apply multiple layers of insulation in towering vertical factories to ensure the cable remains perfectly round and can safely contain the immense voltages involved. Any impurity in the insulation, even something as small as an eyelash, can cause catastrophic failure, potentially knocking out power to entire cities.
    // Related Stories

    As the world rushes to harness new sources of renewable energy, the demand for high-voltage direct currentcables has skyrocketed. HVDC technology, initially pioneered by NKT in the 1950s, has become the backbone of long-distance power transmission, particularly for offshore wind farms and intercontinental links. In recent years, approximately 80 to 90 percent of new large-scale cable projects have utilized HVDC, reflecting its efficiency in transmitting electricity over vast distances with minimal losses.

    But this surge in demand has led to a critical bottleneck. Factories that produce these cables are booked out for years, Westerlind reports, and every project requires custom engineering to match the power needs, geography, and environmental conditions of its route. According to the International Energy Agency, meeting global clean energy goals will require building the equivalent of 80 million kilometersof new grid infrastructure by 2040 – essentially doubling what has been constructed over the past century, but in just 15 years.
    Despite the clear need, cable makers have been slow to add capacity due to reasons that are as much economic and political as technical. Building a new cable factory can cost upwards of a billion euros, and manufacturers are wary of making such investments without long-term commitments from utilities or governments. "For a company like us to do investments in the realm of €1 or 2 billion, it's a massive commitment... but it's also a massive amount of demand that is needed for this investment to actually make financial sense over the next not five years, not 10 years, but over the next 20 to 30 years," Westerlind said. The industry still bears scars from a decade ago, when anticipated demand failed to materialize and expensive new facilities sat underused.
    Some governments and transmission system operators are trying to break the logjam by making "anticipatory investments" – committing to buy cable capacity even before specific projects are finalized. This approach, backed by regulators, gives manufacturers the confidence to expand, but it remains the exception rather than the rule.
    Meanwhile, the industry's structure itself creates barriers to rapid expansion, according to Westerlind. The expertise, technology, and infrastructure required to make high-voltage cables are concentrated in a handful of companies, creating what analysts describe as a "deep moat" that is difficult for new entrants to cross.
    Geopolitical tensions add another layer of complexity. China has built more HVDC lines than any other country, although Western manufacturers, such as NKT, maintain a technical edge in the most advanced cable systems. Still, there is growing concern in Europe and the US about becoming dependent on foreign suppliers for such critical infrastructure, especially in light of recent global conflicts and trade disputes. "Strategic autonomy is very important when it comes to the core parts and the fundamental parts of your society, where the grid backbone is one," Westerlind noted.
    The stakes are high. Without a rapid and coordinated push to expand cable manufacturing, the world's clean energy transition could be slowed not by a lack of wind or sun but by a shortage of the cables needed to connect them to the grid. As Westerlind put it, "We all know it has to be done... These are large investments. They are very expensive investments. So also the governments have to have a part in enabling these anticipatory investments, and making it possible for the TSOs to actually carry forward with them."
    #shortage #highvoltage #power #cables #could
    A shortage of high-voltage power cables could stall the clean energy transition
    In a nutshell: As nations set ever more ambitious targets for renewable energy and electrification, the humble high-voltage cable has emerged as a linchpin – and a potential chokepoint – in the race to decarbonize the global economy. A Bloomberg interview with Claes Westerlind, CEO of NKT, a leading cable manufacturer based in Denmark, explains why. A global surge in demand for high-voltage electricity cables is threatening to stall the clean energy revolution, as the world's ability to build new wind farms, solar plants, and cross-border power links increasingly hinges on a supply chain bottleneck few outside the industry have considered. At the center of this challenge is the complex, capital-intensive process of manufacturing the giant cables that transport electricity across hundreds of miles, both over land and under the sea. Despite soaring demand, cable manufacturers remain cautious about expanding capacity, raising questions about whether the pace of electrification can keep up with climate ambitions, geopolitical tensions, and the practical realities of industrial investment. High-voltage cables are the arteries of modern power grids, carrying electrons from remote wind farms or hydroelectric dams to the cities and industries that need them. Unlike the thin wires that run through a home's walls, these cables are engineering marvels – sometimes as thick as a person's torso, armored to withstand the crushing pressure of the ocean floor, and designed to last for decades under extreme electrical and environmental stress. "If you look at the very high voltage direct current cable, able to carry roughly two gigawatts through two pairs of cables – that means that the equivalent of one nuclear power reactor is flowing through one cable," Westerlind told Bloomberg. The process of making these cables is as specialized as it is demanding. At the core is a conductor, typically made of copper or aluminum, twisted together like a rope for flexibility and strength. Around this, manufacturers apply multiple layers of insulation in towering vertical factories to ensure the cable remains perfectly round and can safely contain the immense voltages involved. Any impurity in the insulation, even something as small as an eyelash, can cause catastrophic failure, potentially knocking out power to entire cities. // Related Stories As the world rushes to harness new sources of renewable energy, the demand for high-voltage direct currentcables has skyrocketed. HVDC technology, initially pioneered by NKT in the 1950s, has become the backbone of long-distance power transmission, particularly for offshore wind farms and intercontinental links. In recent years, approximately 80 to 90 percent of new large-scale cable projects have utilized HVDC, reflecting its efficiency in transmitting electricity over vast distances with minimal losses. But this surge in demand has led to a critical bottleneck. Factories that produce these cables are booked out for years, Westerlind reports, and every project requires custom engineering to match the power needs, geography, and environmental conditions of its route. According to the International Energy Agency, meeting global clean energy goals will require building the equivalent of 80 million kilometersof new grid infrastructure by 2040 – essentially doubling what has been constructed over the past century, but in just 15 years. Despite the clear need, cable makers have been slow to add capacity due to reasons that are as much economic and political as technical. Building a new cable factory can cost upwards of a billion euros, and manufacturers are wary of making such investments without long-term commitments from utilities or governments. "For a company like us to do investments in the realm of €1 or 2 billion, it's a massive commitment... but it's also a massive amount of demand that is needed for this investment to actually make financial sense over the next not five years, not 10 years, but over the next 20 to 30 years," Westerlind said. The industry still bears scars from a decade ago, when anticipated demand failed to materialize and expensive new facilities sat underused. Some governments and transmission system operators are trying to break the logjam by making "anticipatory investments" – committing to buy cable capacity even before specific projects are finalized. This approach, backed by regulators, gives manufacturers the confidence to expand, but it remains the exception rather than the rule. Meanwhile, the industry's structure itself creates barriers to rapid expansion, according to Westerlind. The expertise, technology, and infrastructure required to make high-voltage cables are concentrated in a handful of companies, creating what analysts describe as a "deep moat" that is difficult for new entrants to cross. Geopolitical tensions add another layer of complexity. China has built more HVDC lines than any other country, although Western manufacturers, such as NKT, maintain a technical edge in the most advanced cable systems. Still, there is growing concern in Europe and the US about becoming dependent on foreign suppliers for such critical infrastructure, especially in light of recent global conflicts and trade disputes. "Strategic autonomy is very important when it comes to the core parts and the fundamental parts of your society, where the grid backbone is one," Westerlind noted. The stakes are high. Without a rapid and coordinated push to expand cable manufacturing, the world's clean energy transition could be slowed not by a lack of wind or sun but by a shortage of the cables needed to connect them to the grid. As Westerlind put it, "We all know it has to be done... These are large investments. They are very expensive investments. So also the governments have to have a part in enabling these anticipatory investments, and making it possible for the TSOs to actually carry forward with them." #shortage #highvoltage #power #cables #could
    WWW.TECHSPOT.COM
    A shortage of high-voltage power cables could stall the clean energy transition
    In a nutshell: As nations set ever more ambitious targets for renewable energy and electrification, the humble high-voltage cable has emerged as a linchpin – and a potential chokepoint – in the race to decarbonize the global economy. A Bloomberg interview with Claes Westerlind, CEO of NKT, a leading cable manufacturer based in Denmark, explains why. A global surge in demand for high-voltage electricity cables is threatening to stall the clean energy revolution, as the world's ability to build new wind farms, solar plants, and cross-border power links increasingly hinges on a supply chain bottleneck few outside the industry have considered. At the center of this challenge is the complex, capital-intensive process of manufacturing the giant cables that transport electricity across hundreds of miles, both over land and under the sea. Despite soaring demand, cable manufacturers remain cautious about expanding capacity, raising questions about whether the pace of electrification can keep up with climate ambitions, geopolitical tensions, and the practical realities of industrial investment. High-voltage cables are the arteries of modern power grids, carrying electrons from remote wind farms or hydroelectric dams to the cities and industries that need them. Unlike the thin wires that run through a home's walls, these cables are engineering marvels – sometimes as thick as a person's torso, armored to withstand the crushing pressure of the ocean floor, and designed to last for decades under extreme electrical and environmental stress. "If you look at the very high voltage direct current cable, able to carry roughly two gigawatts through two pairs of cables – that means that the equivalent of one nuclear power reactor is flowing through one cable," Westerlind told Bloomberg. The process of making these cables is as specialized as it is demanding. At the core is a conductor, typically made of copper or aluminum, twisted together like a rope for flexibility and strength. Around this, manufacturers apply multiple layers of insulation in towering vertical factories to ensure the cable remains perfectly round and can safely contain the immense voltages involved. Any impurity in the insulation, even something as small as an eyelash, can cause catastrophic failure, potentially knocking out power to entire cities. // Related Stories As the world rushes to harness new sources of renewable energy, the demand for high-voltage direct current (HVDC) cables has skyrocketed. HVDC technology, initially pioneered by NKT in the 1950s, has become the backbone of long-distance power transmission, particularly for offshore wind farms and intercontinental links. In recent years, approximately 80 to 90 percent of new large-scale cable projects have utilized HVDC, reflecting its efficiency in transmitting electricity over vast distances with minimal losses. But this surge in demand has led to a critical bottleneck. Factories that produce these cables are booked out for years, Westerlind reports, and every project requires custom engineering to match the power needs, geography, and environmental conditions of its route. According to the International Energy Agency, meeting global clean energy goals will require building the equivalent of 80 million kilometers (around 49.7 million miles) of new grid infrastructure by 2040 – essentially doubling what has been constructed over the past century, but in just 15 years. Despite the clear need, cable makers have been slow to add capacity due to reasons that are as much economic and political as technical. Building a new cable factory can cost upwards of a billion euros, and manufacturers are wary of making such investments without long-term commitments from utilities or governments. "For a company like us to do investments in the realm of €1 or 2 billion, it's a massive commitment... but it's also a massive amount of demand that is needed for this investment to actually make financial sense over the next not five years, not 10 years, but over the next 20 to 30 years," Westerlind said. The industry still bears scars from a decade ago, when anticipated demand failed to materialize and expensive new facilities sat underused. Some governments and transmission system operators are trying to break the logjam by making "anticipatory investments" – committing to buy cable capacity even before specific projects are finalized. This approach, backed by regulators, gives manufacturers the confidence to expand, but it remains the exception rather than the rule. Meanwhile, the industry's structure itself creates barriers to rapid expansion, according to Westerlind. The expertise, technology, and infrastructure required to make high-voltage cables are concentrated in a handful of companies, creating what analysts describe as a "deep moat" that is difficult for new entrants to cross. Geopolitical tensions add another layer of complexity. China has built more HVDC lines than any other country, although Western manufacturers, such as NKT, maintain a technical edge in the most advanced cable systems. Still, there is growing concern in Europe and the US about becoming dependent on foreign suppliers for such critical infrastructure, especially in light of recent global conflicts and trade disputes. "Strategic autonomy is very important when it comes to the core parts and the fundamental parts of your society, where the grid backbone is one," Westerlind noted. The stakes are high. Without a rapid and coordinated push to expand cable manufacturing, the world's clean energy transition could be slowed not by a lack of wind or sun but by a shortage of the cables needed to connect them to the grid. As Westerlind put it, "We all know it has to be done... These are large investments. They are very expensive investments. So also the governments have to have a part in enabling these anticipatory investments, and making it possible for the TSOs to actually carry forward with them."
    0 Yorumlar 0 hisse senetleri
Arama Sonuçları