• NUS researchers 3D print self-powered photonic skin for underwater communication and safety

    Researchers from the National University of Singapore (NUS) have developed a 3D printed, self-powered mechanoluminescent (ML) photonic skin designed for communication and safety monitoring in underwater environments. The stretchable device emits light in response to mechanical deformation, requires no external power source, and remains functional under conditions such as high salinity and extreme temperatures.
    The findings were published in Advanced Materials by Xiaolu Sun, Shaohua Ling, Zhihang Qin, Jinrun Zhou, Quangang Shi, Zhuangjian Liu, and Yu Jun Tan. The research was conducted at NUS and Singapore’s Agency for Science, Technology and Research (A*STAR).
    Schematic of the 3D printed mechanoluminescent photonic skin showing fabrication steps and light emission under deformation. Image via Sun et al., Advanced Materials.
    3D printing stretchable light-emitting skins with auxetic geometry
    The photonic skin was produced using a 3D printing method called direct-ink-writing (DIW), which involves extruding a specially formulated ink through a fine nozzle to build up complex structures layer by layer. In this case, the ink was made by mixing tiny particles of zinc sulfide doped with copper (ZnS:Cu), a material that glows when stretched, with a flexible silicone rubber. These particles serve as the active ingredient that lights up when the material is deformed, while the silicone acts as a soft, stretchable support structure.
    To make the device more adaptable to movement and curved surfaces, like human skin or underwater equipment, the researchers printed it using auxetic designs. Auxetic structures have a rare mechanical property known as a negative Poisson’s ratio. Unlike most materials, which become thinner when stretched, auxetic designs expand laterally under tension. This makes them ideal for conforming to curved or irregular surfaces, such as joints, flexible robots, or underwater gear, without wrinkling or detaching.
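    The distinction comes down to the sign of the Poisson’s ratio, defined as the negative ratio of lateral to axial strain. The short sketch below uses made-up strain values (not figures from the paper) to show why an auxetic lattice comes out negative:

```python
# Illustrative only: Poisson's ratio from axial and lateral strain.
# The strain values are hypothetical examples, not measurements from the study.

def poisson_ratio(axial_strain: float, lateral_strain: float) -> float:
    """nu = -(lateral strain) / (axial strain); nu < 0 means the structure
    widens sideways when stretched, i.e. it is auxetic."""
    return -lateral_strain / axial_strain

# A conventional elastomer stretched by 10% typically narrows by ~4%:
print(poisson_ratio(axial_strain=0.10, lateral_strain=-0.04))  # +0.4 (non-auxetic)

# An auxetic lattice stretched by 10% instead widens, e.g. by 2%:
print(poisson_ratio(axial_strain=0.10, lateral_strain=0.02))   # -0.2 (auxetic)
```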
    Encapsulating the printed skin in a clear silicone layer further improves performance by distributing mechanical stress evenly. This prevents localized tearing and ensures that the light emission remains bright and uniform, even after 10,000 cycles of stretching and relaxing. In previous stretchable light-emitting devices, uneven stress often led to dimming, flickering, or early material failure.
    Mechanical and optical performance of encapsulated photonic skin across 10,000 stretch cycles. Image via Sun et al., Advanced Materials.
    Underwater signaling, robotics, and gas leak detection
    The team demonstrated multiple applications for the photonic skin. When integrated into wearable gloves, the skin enabled light-based Morse code communication through simple finger gestures. Bending one or more fingers activated the mechanoluminescence, emitting visible flashes that corresponded to messages such as “UP,” “OK,” or “SOS.” The system remained fully functional when submerged in cold water (~7°C), simulating deep-sea conditions.
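    As a rough illustration of how such gesture-driven signalling maps onto Morse code, the hypothetical sketch below (not code from the NUS system) encodes the demonstrated messages as dot/dash pulse patterns, where a short finger bend might stand for a dot and a longer hold for a dash:

```python
# Hypothetical sketch: map status words to Morse flash patterns, where each
# finger bend produces one mechanoluminescent pulse (short bend = dot,
# long bend = dash). Not the actual NUS implementation.
MORSE = {"K": "-.-", "O": "---", "P": ".--.", "S": "...", "U": "..-"}

def to_flashes(word: str) -> str:
    """Return the dot/dash flash pattern for a word, letters separated by '/'."""
    return " / ".join(MORSE[letter] for letter in word.upper())

for message in ("UP", "OK", "SOS"):
    print(f"{message}: {to_flashes(message)}")
# UP: ..- / .--.
# OK: --- / -.-
# SOS: ... / --- / ...
```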
    In a separate test, the skin was applied to a gas tank mock-up to monitor for leaks. A pinhole defect was covered with the printed skin and sealed using stretchable tape. When pressurized air escaped through the leak, the localized mechanical force caused a bright cyan glow at the exact leak site, offering a passive, electronics-free alternative to conventional gas sensors.
    To test performance on soft and mobile platforms, the researchers also mounted the photonic skin onto a robotic fish. As the robot swam through water tanks at different temperatures (24°C, 50°C, and 7°C), the skin continued to light up reliably, demonstrating its resilience and utility for marine robotics.
    Comparison of printed photonic skin structures with different geometries and their conformability to complex surfaces. Image via Sun et al., Advanced Materials.
    Toward electronics-free underwater communication
    While LEDs and optical fibers are widely used in underwater lighting systems, their dependence on rigid form factors and external power makes them unsuitable for dynamic, flexible applications. In contrast, the stretchable ML photonic skin developed by NUS researchers provides a self-powered, adaptable alternative for diver signaling, robotic inspection, and leak detection, potentially transforming the toolkit for underwater communication and safety systems.
    Future directions include enhanced sensory integration and robotic applications, as the team continues exploring robust photonic systems for extreme environments.
    Photonic skin integrated into gloves for Morse code signaling and applied to robotic fish and gas tanks for underwater safety monitoring. Image via Sun et al., Advanced Materials.
    The rise of 3D printed multifunctional materials
    The development of the photonic skin reflects a broader trend in additive manufacturing toward multifunctional materials: structures that serve more than a purely structural role. Researchers are increasingly using multimaterial 3D printing to embed sensing, actuation, and signaling functions directly into devices. For example, recent work by SUSTech and City University of Hong Kong on thick-panel origami structures showed how multimaterial printing can enable large, foldable systems with high strength and motion control. These and other advances, including conductive FDM processes and Lithoz’s multimaterial ceramic tools, mark a shift toward printing entire systems. The NUS photonic skin fits squarely within this movement, combining mechanical adaptability, environmental durability, and real-time optical output into a single printable form.
    Read the full article in Advanced Materials
    Featured image shows a schematic of the 3D printed mechanoluminescent photonic skin showing fabrication steps and light emission under deformation. Image via Sun et al., Advanced Materials.
  • Analysis of job vacancies shows earnings boost for AI skills

    Looker_Studio - stock.adobe.com

    News

    Analysis of job vacancies shows earnings boost for AI skills
    Even when parts of a job are being automated, those who know how to work with artificial intelligence tools can expect higher salaries

    By

    Cliff Saran,
    Managing Editor

    Published: 03 Jun 2025 7:00

    UK workers with skills in artificial intelligence (AI) appear to earn 11% more on average, even in sectors where AI is automating parts of their existing job functions.
    Workers in sectors exposed to AI, where the technology can be deployed for some tasks, are more productive and command higher salaries, according to PwC’s 2025 Global AI Jobs Barometer. The study, which was based on an analysis of almost one billion job adverts, found that wages are rising twice as fast in industries most exposed to AI.
    From a skills perspective, PwC reported that AI is changing the skills required of job applicants. According to PwC, to succeed in the workplace, candidates are more likely to need experience in using AI tools and the ability to demonstrate critical thinking and collaboration.
    Phillippa O’Connor, chief people officer at PwC UK, noted that while degrees are still important for many jobs, a reduction in degree requirements suggests employers are looking at a broader range of measures to assess skills and potential.
    In occupations most exposed to AI, PwC noted that the skills sought by employers are changing 59% faster than in occupations least exposed to AI. “AI is reshaping the jobs market – lowering barriers to entry in some areas, while raising the bar on the skills required in others,” O’Connor added.
    Those with the right AI skills are being rewarded with higher salaries. In fact, PwC found that wages are growing twice as fast in AI-exposed industries. This includes jobs that are classed as “automatable”, which means they contain some tasks that can readily be automated. The highest premiums are attached to occupations requiring AI skills, with an average premium in 2024 of 11% for UK workers in these roles.  

    AI is reshaping the jobs market – lowering barriers to entry in some areas, while raising the bar on the skills required in others

    Phillippa O’Connor PwC UK

    PwC’s analysis shows that sectors exposed to AI experience three times higher growth in the revenue generated by each employee. It also reported that growth in revenue per employee for AI-exposed industries surged when large language models (LLMs) such as generative AI (GenAI) became mainstream.
    Revenue growth per employee has nearly quadrupled in industries most exposed to AI, such as software, rising from 7% between 2018 and 2022, to 27% between 2018 and 2024. In contrast, revenue growth per employee in industries least exposed to AI, such as mining and hospitality, fell slightly, from 10% between 2018 and 2022, to 9% between 2018 and 2024.
    However, since 2018, job postings for occupations with greater exposure to AI have grown at a slower pace than those with lower exposure – and this gap is widening.
    Umang Paw, chief technology officer (CTO) at PwC UK, said: “There are still many unknowns about AI’s potential. AI can provide stardust to those ready to adapt, but risks leaving others behind.”
    Paw believes there needs to be a concerted effort to expand access to technology and training to ensure the benefits of AI are widely shared.
    “In the intelligence age, the fusion of AI with technologies like real-time data analytics – and businesses broadening their products and services – will create new industries and fresh job opportunities,” Paw added.

    Read more about AI skills

    AWS addresses the skills barrier holding back enterprises: The AWS Summit in London saw the public cloud giant appoint itself to take on the task of skilling up hundreds of thousands of UK people in using AI technologies.
    Could generative AI help to fill the skills gap in engineering: The role of artificial intelligence and machine learning in society continues to be hotly debated as the tools promise to revolutionise our lives, but how will they affect the engineering sector?

  • Bioprinted organs ‘10–15 years away,’ says startup regenerating dog skin

    Human organs could be bioprinted for transplants within 10 years, according to Lithuanian startup Vital3D. But before reaching human hearts and kidneys, the company is starting with something simpler: regenerating dog skin.
    Based in Vilnius, Vital3D is already bioprinting functional tissue constructs. Using a proprietary laser system, the startup deposits living cells and biomaterials in precise 3D patterns. The structures mimic natural biological systems — and could one day form entire organs tailored to a patient’s unique anatomy.
    That mission is both professional and personal for CEO Vidmantas Šakalys. After losing a mentor to urinary cancer, he set out to develop 3D-printed kidneys that could save others from the same fate. But before reaching that goal, the company needs a commercial product to fund the long road ahead.
    That product is VitalHeal — the first-ever bioprinted wound patch for pets. Dogs are the initial target, with human applications slated to follow.
    Šakalys calls the patch “a first step” towards bioprinted kidneys. “Printing organs for transplantation is a really challenging task,” he tells TNW after a tour of his lab. “It’s 10 or 15 years away from now, and as a commercial entity, we need to have commercially available products earlier. So we start with simpler products and then move into more difficult ones.”

    The path may be simpler, but the technology is anything but.
    Bioprinting goes to the vet
    VitalHeal is embedded with growth factors that accelerate skin regeneration.
    Across the patch’s surface, tiny pores about one-fifth the width of a human hair enable air circulation while blocking bacteria. Once applied, VitalHeal seals the wound and maintains constant pressure while the growth factors get to work.
    According to Vital3D, the patch can reduce healing time from 10–12 weeks to just four to six. Infection risk can drop from 30% to under 10%, vet visits from eight to two or three, and surgery times by half.
    Current treatments, the startup argues, can be costly, ineffective, and distressing for animals. VitalHeal is designed to provide a safer, faster, and cheaper alternative.
    Vital3D says the market is big — and the data backs up the claim.
    Vital3D’s FemtoBrush system promises high-speed and high-precision bioprinting. Credit: Vital3D
    Commercial prospects
    The global animal wound care market is projected to grow from $1.4bn (€1.24bn) in 2024 to $2.1bn (€1.87bn) by 2030, fuelled by rising pet ownership and demand for advanced veterinary care. Vital3D forecasts an initial serviceable addressable market (ISAM) of €76.5mn across the EU and US. By 2027-2028, the company aims to sell 100,000 units.
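    For context, those endpoints imply annual growth of roughly 7%; the back-of-the-envelope check below is illustrative arithmetic rather than a figure quoted by Vital3D:

```python
# Implied compound annual growth rate from the quoted market sizes:
# $1.4bn in 2024 to $2.1bn in 2030, i.e. six years of growth. Illustrative only.
start, end, years = 1.4, 2.1, 6
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~7.0% per year
```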
    Dogs are a logical starting point. Their size, activity levels, and surgeries raise their risk of wounds. Around half of dogs over age 10 are also affected by cancer, further increasing demand for effective wound care.
    At €300 retail (or €150 wholesale), the patches won’t be cheap. But Vital3D claims they could slash treatment costs for pet owners from €3,000 to €1,500. Production at scale is expected to bring prices down further.
    After strong results in rats, trials on dogs will begin this summer in clinics in Lithuania and the UK — Vital3D’s pilot markets.
    If all goes to plan, a non-degradable patch will launch in Europe next year. The company will then progress to a biodegradable version.
    From there, the company plans to adapt the tech for humans. The initial focus will be wound care for people with diabetes, 25% of whom suffer from impaired healing. Future versions could support burn victims, injured soldiers, and others in need of advanced skin restoration.
    Freshly printed fluids in a bio-ink droplet. Credit: Vital3D
    Vital3D is also exploring other medical frontiers. In partnership with Lithuania’s National Cancer Institute, the startup is building organoids — mini versions of organs — for cancer drug testing. Another project involves bioprinted stents, which are showing promise in early animal trials. But all these efforts serve a bigger mission.
    “Our final target is to move to organ printing for transplants,” says Šakalys.
    Bioprinting organs
    A computer engineer by training, Šakalys has worked with photonic innovations for over 10 years. 
    At his previous startup, Femtika, he harnessed lasers to produce tiny components for microelectronics, medical devices, and aerospace engineering. He realised they could also enable precise bioprinting. 
    In 2021, he co-founded Vital3D to advance the concept. The company’s printing system directs light towards a photosensitive bio-ink. The material is hardened and formed into a structure, with living cells and biomaterials moulded into intricate 3D patterns.
    The shape of the laser beam can be adjusted to replicate complex biological forms — potentially even entire organs.
    But there are still major scientific hurdles to overcome. One is vascularisation, the formation of blood vessels in intricate networks. Another is the diverse variety of cell types in many organs. Replicating these sophisticated natural structures will be challenging.
    “First of all, we want to solve the vasculature. Then we will go into the differentiation of cells,” Šakalys says.
    “Our target is to see if we can print from fewer cells, but try to differentiate them while printing into different types of cells.” 
    If successful, Vital3D could help ease the global shortage of transplantable organs. Fewer than 10% of patients who need a transplant receive one each year, according to the World Health Organisation. In the US alone, around 90,000 people are waiting for a kidney — a shortfall that’s fuelling a thriving black market.
    Šakalys believes that could be just the start. He envisions bioprinting not just creating organs, but also advancing a new era of personalised medicine.
    “It can bring a lot of benefits to society,” he says. “Not just bioprinting for transplants, but also tissue engineering as well.”

    Story by

    Thomas Macaulay

    Managing editor

    Thomas is the managing editor of TNW. He leads our coverage of European tech and oversees our talented team of writers. Away from work, he enjoys playing chess (badly) and the guitar (even worse).

  • Rethinking secure comms: Are encrypted platforms still enough?

    Maksim Kabakou - Fotolia

    Opinion

    Rethinking secure comms: Are encrypted platforms still enough?
    A leak of information on American military operations caused a major political incident in March 2025. The Security Think Tank considers what CISOs can learn from this potentially fatal error.

    By

    Russell Auld, PAC

    Published: 30 May 2025

    In today’s constantly changing cyber landscape, answering the question “what does best practice now look like?” is far from simple. While emerging technologies and AI-driven security tools continue to make the headlines and become the topics of discussion, the real pivot point for modern security lies not just in the technological advancements but in context, people and process. 
    The recent Signal messaging platform incident, in which a journalist was mistakenly added to a group chat, exposing sensitive information, serves as a timely reminder that even the most secure platform is vulnerable to human error. The platform wasn’t breached by malicious actors, nor did a zero-day exploit or a failure of end-to-end encryption come into play; the shortfall here was likely poorly defined acceptable use policies and controls, alongside a lack of training and awareness.
    This incident, if nothing else, highlights a critical truth within cyber security – security tools are only as good as the environment, policies, and people operating them. While it’s tempting to focus on implementing more technical controls to prevent this from happening again, the reality is that many incidents result from a failure of process, governance, or awareness. 
    What does good security look like today? Some key areas include:

    Context over features, for example, whether Signal should have been used in the first place;
    There is no such thing as a silver bullet approach to protect your organisation;
    The importance of your team’s training and education;
    Reviewing and adapting continuously. 

    Security must be context-driven. Business leaders need to consider what their key area of concern is – reputational risk, state-sponsored surveillance, insider threats, or regulatory compliance. Each threat vector requires a different set of controls. For example, an organisation handling official-sensitive or classified data will require not just encryption, but assured platforms, robust access controls, identity validation, and clear auditability.
    Conversely, a commercial enterprise concerned about intellectual property leakage might strategically focus on user training, data loss prevention, and device control. Best practice isn’t picking the platform with the cheapest price tag or the one most commonly used; it’s selecting a platform that supports the controls and policies required, based on a deep understanding of your specific risks and use cases.
    There is no one-size-fits-all solution for your organisation. The security product landscape is filled with vendors offering overlapping solutions that all claim to provide more protection than the rest. And, although some potentially do offer better protection, features or functionality, even the best tool will fail if used incorrectly or implemented without a clear understanding of its limitations. Worse, organisations may gain a false sense of security by relying solely on a supplier’s claims. The priority must be to assess your organisation’s internal capability to manage and operate these tools effectively. Reassessing the threat landscape and taking advantage of the wealth of threat intelligence tools available helps ensure you have the right skills, policies, and processes in place.

    The Computer Weekly Security Think Tank on Signalgate

    Todd Thiemann, ESG: Signalgate: Learnings for CISOs securing enterprise data.
    Javvad Malik, KnowBe4: What CISOs can learn from Signalgate.
    Aditya Sood, Aryaka: Unspoken risk: Human factors undermine trusted platforms.
    Raihan Islam, defineXTEND: When leaders ignore cyber security rules, the whole system weakens.
    Elliot Wilkes, ACDS: Security vs. usability: Why rogue corporate comms are still an issue.
    Mike Gillespie and Ellie Hurst, Advent IM: Signalgate is a signal to revisit security onboarding and training.

    Best practice in 2025 means recognising that many security incidents stem from simple human mistakes: misaddressed emails, poor password hygiene, or sharing access with the wrong person. Investing in continual staff education, security awareness, and skills gap analysis is essential to risk reduction.
    This doesn’t mean showing an annual 10-minute cyber awareness video; you need to identify what will motivate your people and run security campaigns that capture their attention and change behaviour. For example, you could consider engaging nudges such as mandatory phishing alerts on laptops, interactive lock screen campaigns, and quizzes on key policies such as acceptable use and password complexity. Incorporate gamification elements, such as rewards for completing quizzes, and timely reminders to reinforce security best practices and foster a culture of vigilance.
    These campaigns should mix communications that engage people with training the workforce sees as relevant and that meets role-specific needs. Your developers need to understand secure coding practices, while those in front-line operations may need training in how to detect phishing or social engineering attacks. Doing so helps create a better security culture within the organisation and enhances your overall security posture.
    Finally, what’s considered “best practice” today may be outdated by tomorrow. Threats are constantly evolving, regulations change, and your own business operations and strategy may shift. Adopting a cyber security lifecycle that encompasses people, process and technology, supported by continuous improvement activities and a clear vision from senior stakeholders, will be vital. Conducting regular security reviews, red-teaming, and reassessing governance and policies will help ensure that defences remain relevant and proportionate to your organisation’s threats.
    Encryption, however, still matters. As do SSO, MFA, secure coding practices, and access controls. But the real cornerstone of best practice in today’s cyber world is understanding why you need them, and how they’ll be used in practice. Securing your organisation is no longer just about picking the best platform; it’s about creating a holistic view that incorporates people, process, and technology. And that may be the most secure approach, after all.
    Russell Auld is a digital trust and cyber security expert at PA Consulting

  • Fueling seamless AI at scale

    From large language models (LLMs) to reasoning agents, today’s AI tools bring unprecedented computational demands. Trillion-parameter models, workloads running on-device, and swarms of agents collaborating to complete tasks all require a new paradigm of computing to become truly seamless and ubiquitous.

    First, technical progress in hardware and silicon design is critical to pushing the boundaries of compute. Second, advances in machine learning (ML) allow AI systems to achieve increased efficiency with smaller computational demands. Finally, the integration, orchestration, and adoption of AI into applications, devices, and systems is crucial to delivering tangible impact and value.

    Silicon’s mid-life crisis

    AI has evolved from classical ML to deep learning to generative AI. The most recent chapter, which took AI mainstream, hinges on two phases—training and inference—that are data and energy-intensive in terms of computation, data movement, and cooling. At the same time, Moore’s Law, which holds that the number of transistors on a chip doubles every two years, is reaching a physical and economic plateau.

    For the last 40 years, silicon chips and digital technology have nudged each other forward—every step ahead in processing capability frees the imagination of innovators to envision new products, which require yet more power to run. That is happening at light speed in the AI age.

    As models become more readily available, deployment at scale puts the spotlight on inference and the application of trained models for everyday use cases. This transition requires the appropriate hardware to handle inference tasks efficiently. Central processing units (CPUs) have managed general computing tasks for decades, but the broad adoption of ML introduced computational demands that stretched the capabilities of traditional CPUs. This has led to the adoption of graphics processing units (GPUs) and other accelerator chips for training complex neural networks, due to their parallel execution capabilities and high memory bandwidth that allow large-scale mathematical operations to be processed efficiently.

    But CPUs are already the most widely deployed and can be companions to processors like GPUs and tensor processing units (TPUs). AI developers are also hesitant to adapt software to fit specialized or bespoke hardware, and they favor the consistency and ubiquity of CPUs. Chip designers are unlocking performance gains through optimized software tooling, adding novel processing features and data types specifically to serve ML workloads, integrating specialized units and accelerators, and advancing silicon chip innovations, including custom silicon. AI itself is a helpful aid for chip design, creating a positive feedback loop in which AI helps optimize the chips that it needs to run. These enhancements and strong software support mean modern CPUs are a good choice to handle a range of inference tasks.

    Beyond silicon-based processors, disruptive technologies are emerging to address growing AI compute and data demands. The unicorn start-up Lightmatter, for instance, introduced photonic computing solutions that use light for data transmission to generate significant improvements in speed and energy efficiency. Quantum computing represents another promising area in AI hardware. While still years or even decades away, the integration of quantum computing with AI could further transform fields like drug discovery and genomics.

    Understanding models and paradigms

    The developments in ML theories and network architectures have significantly enhanced the efficiency and capabilities of AI models. Today, the industry is moving from monolithic models to agent-based systems characterized by smaller, specialized models that work together to complete tasks more efficiently at the edge—on devices like smartphones or modern vehicles. This allows them to extract increased performance gains, like faster model response times, from the same or even less compute.

    Researchers have developed techniques, including few-shot learning, to train AI models using smaller datasets and fewer training iterations. AI systems can learn new tasks from a limited number of examples to reduce dependency on large datasets and lower energy demands. Optimization techniques like quantization, which lower the memory requirements by selectively reducing precision, are helping reduce model sizes without sacrificing performance. 
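    A minimal numpy sketch of the idea behind quantization, assuming a simple symmetric, per-tensor scheme rather than any particular toolchain: float32 weights are mapped to int8 values plus a single scale factor, cutting memory roughly four-fold while approximating the original values.

        import numpy as np

        # Sketch of symmetric post-training quantization (the scheme is an assumption):
        # store int8 weights plus one float scale instead of full float32 weights.
        weights = np.random.randn(1024, 1024).astype(np.float32)

        scale = np.abs(weights).max() / 127.0                      # one scale for the tensor
        q_weights = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
        dequantized = q_weights.astype(np.float32) * scale         # approximate reconstruction

        print("memory (MB):", weights.nbytes / 1e6, "->", q_weights.nbytes / 1e6)
        print("max abs error:", np.abs(weights - dequantized).max())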

    New system architectures, like retrieval-augmented generation (RAG), have streamlined data access during both training and inference to reduce computational costs and overhead. The DeepSeek R1, an open source LLM, is a compelling example of how more output can be extracted using the same hardware. By applying reinforcement learning techniques in novel ways, R1 has achieved advanced reasoning capabilities while using far fewer computational resources in some contexts.
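    The retrieval step at the heart of RAG can be sketched in a few lines of Python; the bag-of-words embedding, toy documents, and prompt format below are illustrative assumptions rather than any specific system’s API.

        import numpy as np

        # Toy RAG sketch: retrieve the most relevant passage by cosine similarity
        # and prepend it to the prompt, so generation can draw on stored knowledge
        # rather than relying solely on model parameters.
        docs = [
            "Neuromorphic chips combine memory and processing in one component.",
            "Quantization reduces model size by lowering numerical precision.",
            "Photonic computing uses light for fast, efficient data transmission.",
        ]

        def embed(text, dim=64):
            vec = np.zeros(dim)
            for word in text.lower().split():
                vec[hash(word) % dim] += 1.0            # hashed bag-of-words embedding
            return vec / (np.linalg.norm(vec) or 1.0)

        doc_vecs = np.stack([embed(d) for d in docs])

        def retrieve(query, k=1):
            scores = doc_vecs @ embed(query)             # cosine similarity (unit vectors)
            return [docs[i] for i in np.argsort(scores)[::-1][:k]]

        query = "How does quantization shrink models?"
        context = " ".join(retrieve(query))
        prompt = f"Context: {context}\nQuestion: {query}\nAnswer:"
        print(prompt)   # this augmented prompt would then be passed to an LLM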

    The integration of heterogeneous computing architectures, which combine various processing units like CPUs, GPUs, and specialized accelerators, has further optimized AI model performance. This approach allows for the efficient distribution of workloads across different hardware components to optimize computational throughput and energy efficiency based on the use case.
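    A toy dispatcher illustrates the placement idea behind heterogeneous compute; the device names and thresholds are assumptions for illustration, not a real scheduler’s policy.

        # Illustrative sketch of heterogeneous workload placement: route each task to
        # the processor type best suited to its characteristics. Thresholds and
        # device labels are hypothetical.
        def place_workload(task):
            if task.get("parallelism") == "high" and task.get("batch_size", 1) > 16:
                return "GPU"               # large, highly parallel batches
            if task.get("latency_ms", 1000) < 10 and task.get("on_device"):
                return "accelerator"       # tight latency budgets at the edge
            return "CPU"                   # general-purpose and control-heavy work

        tasks = [
            {"name": "batch_image_training", "parallelism": "high", "batch_size": 64},
            {"name": "keyword_spotting", "latency_ms": 5, "on_device": True},
            {"name": "request_routing", "parallelism": "low"},
        ]
        for t in tasks:
            print(t["name"], "->", place_workload(t))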

    Orchestrating AI

    As AI becomes an ambient capability humming in the background of many tasks and workflows, agents are taking charge and making decisions in real-world scenarios. These range from customer support to edge use cases, where multiple agents coordinate and handle localized tasks across devices.

    With AI increasingly used in daily life, the role of user experiences becomes critical for mass adoption. Features like predictive text in touch keyboards, and adaptive gearboxes in vehicles, offer glimpses of AI as a vital enabler to improve technology interactions for users.

    Edge processing is also accelerating the diffusion of AI into everyday applications, bringing computational capabilities closer to the source of data generation. Smart cameras, autonomous vehicles, and wearable technology now process information locally to reduce latency and improve efficiency. Advances in CPU design and energy-efficient chips have made it feasible to perform complex AI tasks on devices with limited power resources. This shift toward heterogeneous compute enhances the development of ambient intelligence, where interconnected devices create responsive environments that adapt to user needs.

    Seamless AI naturally requires common standards, frameworks, and platforms to bring the industry together. Contemporary AI brings new risks. For instance, by adding more complex software and personalized experiences to consumer devices, it expands the attack surface for hackers, requiring stronger security at both the software and silicon levels, including cryptographic safeguards and transforming the trust model of compute environments.

    More than 70% of respondents to a 2024 DarkTrace survey reported that AI-powered cyber threats significantly impact their organizations, while 60% say their organizations are not adequately prepared to defend against AI-powered attacks.

    Collaboration is essential to forging common frameworks. Universities contribute foundational research, companies apply findings to develop practical solutions, and governments establish policies for ethical and responsible deployment. Organizations like Anthropic are setting industry standards by introducing frameworks, such as the Model Context Protocol, to unify the way developers connect AI systems with data. Arm is another leader in driving standards-based and open source initiatives, including ecosystem development to accelerate and harmonize the chiplet market, where chips are stacked together through common frameworks and standards. Arm also helps optimize open source AI frameworks and models for inference on the Arm compute platform, without needing customized tuning. 

    How far AI goes to becoming a general-purpose technology, like electricity or semiconductors, is being shaped by technical decisions taken today. Hardware-agnostic platforms, standards-based approaches, and continued incremental improvements to critical workhorses like CPUs, all help deliver the promise of AI as a seamless and silent capability for individuals and businesses alike. Open source contributions are also helpful in allowing a broader range of stakeholders to participate in AI advances. By sharing tools and knowledge, the community can cultivate innovation and help ensure that the benefits of AI are accessible to everyone, everywhere.

    Learn more about Arm’s approach to enabling AI everywhere.

    This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

    This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
  • The Netherlands is building a leading neuromorphic computing industry

    Our latest and most advanced technologies — from AI to Industrial IoT, advanced robotics, and self-driving cars — share serious problems: massive energy consumption, limited on-edge capabilities, system hallucinations, and significant accuracy gaps.
    One possible solution is emerging in the Netherlands. The country is developing a promising ecosystem for neuromorphic computing, which draws on neuroscience to boost IT efficiencies and performance. Billions of euros are being invested in this new form of computing worldwide. The Netherlands aims to become a leader in the market by bringing together startups, established companies, government organisations, and academics in a neuromorphic computing ecosystem.
    A Dutch mission to the UK
    In March, a Dutch delegation landed in the UK to host an “Innovation Mission” with local tech and government representatives. Top Sector ICT, a Dutch government–supported organisation, led the mission, which sought to strengthen and discuss the future of neuromorphic computing in Europe and the Netherlands. 
    We contacted Top Sector ICT, who connected us with one of their collaborators: Dr Johan H. Mentink, an expert in computational physics at Radboud University in the Netherlands. Dr Mentink spoke about how neuromorphic computing can solve the energy, accuracy, and efficiency challenges of our current computing architectures. 

    “Current digital computers use power-hungry processes to handle data,” Dr Mentink said. 
    “The result is that some modern data centres use so much energy that they even need their own power plant.” 
    Computing today stores data in one place and processes it in another place. This means that a lot of energy is spent on transporting data, Dr Mentink explained.
    In contrast, neuromorphic computing architectures are different at the hardware and software levels. For example, instead of using processors and memories, neuromorphic systems leverage new hardware components such as memristors. These act as both memory and processors. 
    By processing and saving data on the same hardware component, neuromorphic computing removes the energy-intensive and error-prone task of transporting data. Additionally, because data is stored on these components, it can be processed more immediately, resulting in faster decision-making, reduced hallucinations, improved accuracy, and better performance. This concept is being applied to edge computing, Industrial IoT, and robotics to drive faster real-time decision-making. 
    “Just like our brains process and store information in the same place, we can make computers that would combine data storage and processing in one place,” Dr Mentink explained.  
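    An idealised numpy sketch shows why co-locating storage and compute helps: if weights are stored as memristor conductances, applying input voltages yields output currents that are already the dot products, so the multiply-accumulate happens where the data lives. The values below are illustrative.

        import numpy as np

        # Idealised memristor crossbar model: conductances G hold the weights,
        # input voltages V encode activations, and summing currents on each
        # output column gives I = G^T V, i.e. an analog vector-matrix multiply
        # performed in the same place the data is stored.
        rng = np.random.default_rng(0)
        G = rng.uniform(0.0, 1.0, size=(4, 3))   # 4 input rows x 3 output columns
        V = np.array([0.2, 0.0, 0.8, 0.5])       # input voltages (activations)

        I = G.T @ V                               # column currents = dot products
        print("output currents:", I)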
    Early use cases for neuromorphic computing
    Neuromorphic computing is far from just experimental. A great number of new and established technology companies are heavily invested in developing new hardware, edge devices, software, and neuromorphic computing applications.
    Big tech brands such as IBM, NVIDIA, and Intel, with its Loihi chips, are all involved in neuromorphic computing, while companies in the Netherlands, aligned with a 2024 national white paper, are taking a leading regional role. 
    For example, the Dutch company Innatera — a leader in ultra-low power neuromorphic processors — recently secured €15 million in Series-A funding from Invest-NL Deep Tech Fund, the EIC Fund, MIG Capital, Matterwave Ventures, and Delft Enterprises. 
    Innatera is just the tip of the iceberg, as the Netherlands continues to support the new industry through funds, grants, and other incentives.
    Immediate use cases for neuromorphic computing include event-based sensing technologies integrated into smart sensors such as cameras or audio. These neuromorphic devices only process change, which can dramatically reduce power and data load, said Sylvester Kaczmarek, the CEO of OrbiSky Systems, a company providing AI integration for space technology.  
    Neuromorphic hardware and software have the potential to transform how AI runs at the edge, especially on low-power devices such as mobiles, wearables, and IoT hardware.
    Pattern recognition, keyword spotting, and simple diagnostics — such as real-time signal processing of complex sensor data streams for biomedical uses, robotics, or industrial monitoring — are some of the leading use cases, Dr Kaczmarek explained. 
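    The data-reduction effect of event-based sensing can be sketched with a simple delta encoder: emit an event only when the signal changes by more than a threshold. The signal and threshold below are illustrative assumptions.

        import numpy as np

        # Sketch of event-based (delta) encoding, the idea behind neuromorphic
        # sensors: only changes above a threshold are reported, so a mostly
        # static signal produces very little data to process.
        signal = np.concatenate([np.zeros(50), np.linspace(0, 1, 20), np.ones(30)])
        threshold = 0.1

        events, last = [], signal[0]
        for t, value in enumerate(signal):
            if abs(value - last) >= threshold:
                events.append((t, np.sign(value - last)))  # (time, polarity)
                last = value

        print(f"{len(signal)} samples reduced to {len(events)} events")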
    When applied to pattern recognition, classification, or anomaly detection, neuromorphic computing can make decisions very quickly and efficiently.
    Professor Dr Hans Hilgenkamp, Scientific Director of the MESA+ Institute at the University of Twente, agreed that pattern recognition is one of the fields where neuromorphic computing excels. 
    “One may also think about failure prediction in industrial or automotive applications,” he said.
    The gaps creating neuromorphic opportunities
    Despite the recent progress, the road to establishing robust neuromorphic computing ecosystems in the Netherlands is challenging. Globalised tech supply chains and the standardisation of new technologies leave little room for hardware-level innovation. 
    For example, optical networks and optical chips have proven to outperform traditional systems in use today, but the tech has not been deployed globally. Deploying new hardware involves strategic coordination between the public and private sectors. The global rollout of 5G technology provides a good example of the challenges. It required telcos and governments around the world to deploy not only new antennas, but also smartphones, laptops, and a lot of hardware that could support the new standard. 
    On the software side, meanwhile, 5G systems had a pressing need for global standards to ensure integration, interoperability, and smooth deployment. Additionally, established telcos had to move from pure competition to strategic collaboration — an unfamiliar shift for an industry long built on siloed operations.
    Neuromorphic computing ecosystems face similar obstacles. The Netherlands recognises that the entire industry’s success depends on innovation in materials, devices, circuit designs, hardware architecture, algorithms, and applications. 
    These challenges and gaps are driving new opportunities for tech companies, startups, vendors, and partners. 
    Dr Kaczmarek told us that neuromorphic computing requires full-stack integration. This involves expertise that can connect novel materials and devices through circuit design and architectures to algorithms and applications. “Bringing these layers together is crucial but challenging,” he said. 
    On the algorithms and software side, developing new programming paradigms, learning rules, and software tools native to neuromorphic hardware is also a priority.
    “It is crucial to make the hardware usable and efficient — co-designing hardware and algorithms because they are intimately coupled in neuromorphic systems,” said Dr Kaczmarek. 
    Other industries that have pursued or are considering research on neuromorphic computing include healthcare, agri-food, and sustainable energy.
    Neuromorphic computing modules or components can also be integrated with conventional CMOS, photonics, AI, and even quantum technologies. 
    Long-term opportunities in the Netherlands
    We asked Dr Hilgenkamp what expertise or innovations are most needed and offer the greatest opportunities for contribution and growth within this emerging ecosystem.
    “The long-term developments involve new materials and a lot of research, which is already taking place on an academic level,” Dr Hilgenkamp said. 
    He added that the idea of “materials that can learn” brings up completely new concepts in materials science that are exciting for researchers. 
    On the other hand, Dr Mentink pointed to the opportunity to transform our economies, which rely on processing massive amounts of data. 
    “Even replacing a small part of that with neuromorphic computing will lead to massive energy savings,” he said. 
    “Moreover, with neuromorphic computing, much more processing can be done close to where the data is produced. This is good news for situations in which data contains privacy-sensitive information.” 
    Concrete examples, according to Dr Mentink, also include fraud detection for credit card transactions, image analysis by robots and drones, anomaly detection of heartbeats, and processing of telecom data.
    “The most promising use cases are those involving huge data flows, strong demands for very fast response times, and small energy budgets,” said Dr Mentink. 
    As the use cases for neuromorphic computing increase, Dr Mentink expects growth in software toolchains that enable quick adoption of new neuromorphic platforms, along with services to streamline deployment.
    “Longer-term sustainable growth requires a concerted interdisciplinary effort across the whole computing stack to enable seamless integration of foundational discoveries to applications in new neuromorphic computing systems,” Dr Mentink said. 
    The bottom line
    The potential of neuromorphic computing has translated into billions of dollars in investment in the Netherlands and Europe, as well as in Asia and the rest of the world. 
    Businesses that can innovate, develop, and integrate hardware and software-level neuromorphic technologies stand to gain the most.  
    The potential of neuromorphic computing for greater energy efficiency and performance could ripple across industries. Energy, healthcare, robotics, AI, industrial IoT, and quantum tech all stand to benefit if they integrate the technology. And if the Dutch ecosystem takes off, the Netherlands will be in a position to lead the way.
    Supporting Dutch tech is a key mission of TNW Conference, which takes place on June 19-20 in Amsterdam. Tickets are now on sale — use the code TNWXMEDIA2025 at the checkout to get 30% off.

    Story by

    Ray Fernandez

    Ray Fernandez is a journalist with over a decade of experience reporting on technology, finance, science, and natural resources. His work has been published in Bloomberg, TechRepublic, The Sunday Mail, eSecurityPlanet, and many others. He is a contributing writer for Espacio Media Incubator, which has reporters across the US, Europe, Asia, and Latin America.

    Get the TNW newsletter
    Get the most important tech news in your inbox each week.

    Also tagged with
    #netherlands #building #leading #neuromorphic #computing
    The Netherlands is building a leading neuromorphic computing industry
    Our latest and most advanced technologies — from AI to Industrial IoT, advanced robotics, and self-driving cars — share serious problems: massive energy consumption, limited on-edge capabilities, system hallucinations, and serious accuracy gaps.  One possible solution is emerging in the Netherlands. The country is developing a promising ecosystem for neuromorphic computing, which draws on neuroscience to boost IT efficiencies and performance. Billions of euros are being invested in this new form of computing worldwide. The Netherlands aims to become a leader in the market by bringing together startups, established companies, government organisations, and academics in a neuromorphic computing ecosystem. A Dutch mission to the UK In March, a Dutch delegation landed in the UK to host an “Innovation Mission” with local tech and government representatives. Top Sector ICT, a Dutch government–supported organisation, led the mission, which sought to strengthen and discuss the future of neuromorphic computing in Europe and the Netherlands.  We contacted Top Sector ICT, who connected us with one of their collaborators: Dr Johan H. Mentink, an expert in computational physics at Radboud University in the Netherlands. Dr Mentink spoke about how neuromorphic computing can solve the energy, accuracy, and efficiency challenges of our current computing architectures.  Grab that deal “Current digital computers use power-hungry processes to handle data,” Dr Mentink said.  “The result is that some modern data centres use so much energy that they even need their own power plant.”  Computing today stores data in one placeand processes it in another place. This means that a lot of energy is spent on transporting data, Dr Mentink explained.  In contrast, neuromorphic computing architectures are different at the hardware and software levels. For example, instead of using processors and memories, neuromorphic systems leverage new hardware components such as memristors. These act as both memory and processors.  By processing and saving data on the same hardware component, neuromorphic computing removes the energy-intensive and error-prone task of transporting data. Additionally, because data is stored on these components, it can be processed more immediately, resulting in faster decision-making, reduced hallucinations, improved accuracy, and better performance. This concept is being applied to edge computing, Industrial IoT, and robotics to drive faster real-time decision-making.  “Just like our brains process and store information in the same place, we can make computers that would combine data storage and processing in one place,” Dr Mentink explained.   Early use cases for neuromorphic computing Neuromorphic computing is far from just experimental. A great number of new and established technology companies are heavily invested in developing new hardware, edge devices, software, and neuromorphic computing applications. Big tech brands such as IBM, NVIDIA, and Intel, with its Loihi chips, are all involved in neuromorphic computing, while companies in the Netherlands, aligned with a 2024 national white paper, are taking a leading regional role.  For example, the Dutch company Innatera — a leader in ultra-low power neuromorphic processors — recently secured €15 million in Series-A funding from Invest-NL Deep Tech Fund, the EIC Fund, MIG Capital, Matterwave Ventures, and Delft Enterprises.  
    Early use cases for neuromorphic computing
    Neuromorphic computing is far from just experimental. A great number of new and established technology companies are investing heavily in new hardware, edge devices, software, and neuromorphic computing applications. Big tech brands such as IBM, NVIDIA, and Intel, with its Loihi chips, are all involved in neuromorphic computing, while companies in the Netherlands, aligned with a 2024 national white paper, are taking a leading regional role. For example, the Dutch company Innatera — a leader in ultra-low power neuromorphic processors — recently secured €15 million in Series A funding from the Invest-NL Deep Tech Fund, the EIC Fund, MIG Capital, Matterwave Ventures, and Delft Enterprises.
    Innatera is just the tip of the iceberg, as the Netherlands continues to support the new industry through funds, grants, and other incentives. Immediate use cases for neuromorphic computing include event-based sensing technologies integrated into smart sensors such as cameras or audio devices. These neuromorphic devices only process change, which can dramatically reduce power and data load, said Sylvester Kaczmarek, the CEO of OrbiSky Systems, a company providing AI integration for space technology. Neuromorphic hardware and software also have the potential to transform AI running on the edge, especially for low-power devices such as mobiles, wearables, and IoT hardware.
    Pattern recognition, keyword spotting, and simple diagnostics — such as real-time signal processing of complex sensor data streams for biomedical uses, robotics, or industrial monitoring — are some of the leading use cases, Dr Kaczmarek explained. When applied to pattern recognition, classification, or anomaly detection, neuromorphic computing can make decisions very quickly and efficiently. Professor Dr Hans Hilgenkamp, Scientific Director of the MESA+ Institute at the University of Twente, agreed that pattern recognition is one of the fields where neuromorphic computing excels. “One may also think about [for example] failure prediction in industrial or automotive applications,” he said.
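    The event-based sensing Kaczmarek mentions can be sketched in a few lines: rather than streaming every frame, a change-driven sensor reports only the pixels whose brightness shifts beyond a threshold. The example below is a simplified, hypothetical illustration of that data reduction in Python; the frame size, threshold, and values are invented for demonstration and do not come from any of the companies mentioned.

        import numpy as np

        # Simplified illustration of event-based (change-driven) sensing.
        # A conventional camera transmits every pixel of every frame; an
        # event-based sensor emits data only where brightness has changed.
        # The frames and threshold below are synthetic, for demonstration only.
        rng = np.random.default_rng(1)
        prev_frame = rng.integers(0, 256, size=(64, 64))
        next_frame = prev_frame.copy()
        next_frame[10:14, 20:24] += 40            # a small moving object brightens 16 pixels

        THRESHOLD = 25                            # minimum change that counts as an event
        diff = next_frame - prev_frame
        ys, xs = np.nonzero(np.abs(diff) >= THRESHOLD)
        events = list(zip(ys.tolist(), xs.tolist(), diff[ys, xs].tolist()))

        print("Pixels in a full frame:", next_frame.size)
        print("Events emitted:", len(events))     # only the pixels that changed

    Shipping 16 events instead of 4,096 pixel values is the kind of reduction in data and power that makes change-driven sensing attractive for battery-powered edge devices.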
    The gaps creating neuromorphic opportunities
    Despite the recent progress, the road to establishing a robust neuromorphic computing ecosystem in the Netherlands is challenging. Globalised tech supply chains and the standardisation of new technologies leave little room for hardware-level innovation. For example, optical networks and optical chips have proven to outperform traditional systems in use today, but the tech has not been deployed globally. Deploying new hardware requires strategic coordination between the public and private sectors.
    The global rollout of 5G technology provides a good example of the challenges. It required telcos and governments around the world to deploy not only new antennas, but also smartphones, laptops, and a lot of other hardware that could support the new standard. On the software side, meanwhile, 5G systems had a pressing need for global standards to ensure integration, interoperability, and smooth deployment. Additionally, established telcos had to move from pure competition to strategic collaboration — an unfamiliar shift for an industry long built on siloed operations.
    Neuromorphic computing ecosystems face similar obstacles. The Netherlands recognises that the entire industry’s success depends on innovation in materials, devices, circuit designs, hardware architecture, algorithms, and applications. These challenges and gaps are driving new opportunities for tech companies, startups, vendors, and partners.
    Dr Kaczmarek told us that neuromorphic computing requires full-stack integration: expertise that can connect novel materials and devices, through circuit design and architectures, to algorithms and applications. “Bringing these layers together is crucial but challenging,” he said. On the algorithms and software side, developing new programming paradigms, learning rules (beyond standard deep learning backpropagation), and software tools native to neuromorphic hardware are also priorities. “It is crucial to make the hardware usable and efficient — co-designing hardware and algorithms because they are intimately coupled in neuromorphic systems,” said Dr Kaczmarek.
    Other industries that have developed or are considering research on neuromorphic computing include healthcare (brain-computer interfaces and prosthetics), agri-food, and sustainable energy. Neuromorphic computing modules or components can also be integrated with conventional CMOS, photonics, AI, and even quantum technologies.
    Long-term opportunities in the Netherlands
    We asked Dr Hilgenkamp what expertise or innovations are most needed and offer the greatest opportunities for contribution and growth within this emerging ecosystem. “The long-term developments involve new materials and a lot of research, which is already taking place on an academic level,” Dr Hilgenkamp said. He added that the idea of “materials that can learn” brings up completely new concepts in materials science that are exciting for researchers.
    Dr Mentink, meanwhile, pointed to the opportunity to transform our economies, which rely on processing massive amounts of data. “Even replacing a small part of that with neuromorphic computing will lead to massive energy savings,” he said. “Moreover, with neuromorphic computing, much more processing can be done close to where the data is produced. This is good news for situations in which data contains privacy-sensitive information.”
    Concrete examples, according to Dr Mentink, include fraud detection for credit card transactions, image analysis by robots and drones, anomaly detection in heartbeats, and processing of telecom data. “The most promising use cases are those involving huge data flows, strong demands for very fast response times, and small energy budgets,” said Dr Mentink.
    As the use cases for neuromorphic computing grow, Dr Mentink expects to see growth in software toolchains that enable quick adoption of new neuromorphic platforms, along with services that streamline deployment. “Longer-term sustainable growth requires a concerted interdisciplinary effort across the whole computing stack to enable seamless integration of foundational discoveries to applications in new neuromorphic computing systems,” Dr Mentink said.
    The bottom line
    The potential of neuromorphic computing has translated into billions of dollars in investment in the Netherlands and Europe, as well as in Asia and the rest of the world. Businesses that can innovate, develop, and integrate hardware- and software-level neuromorphic technologies stand to gain the most. The technology’s promise of greater energy efficiency and performance could ripple across industries: energy, healthcare, robotics, AI, industrial IoT, and quantum tech all stand to benefit if they integrate it. And if the Dutch ecosystem takes off, the Netherlands will be in a position to lead the way.
    Supporting Dutch tech is a key mission of TNW Conference, which takes place on June 19-20 in Amsterdam. Tickets are now on sale — use the code TNWXMEDIA2025 at the checkout to get 30% off.
    Story by Ray Fernandez. Ray Fernandez is a journalist with over a decade of experience reporting on technology, finance, science, and natural resources. His work has been published in Bloomberg, TechRepublic, The Sunday Mail, eSecurityPlanet, and many others. He is a contributing writer for Espacio Media Incubator, which has reporters across the US, Europe, Asia, and Latin America.
  • Are entangled qubits following a quantum Moore's law?

    Jiuzhang, an advanced photonic quantum computer like the one that entangled a record number of qubits. Image via University of Science and Technology of China.
    The number of qubits that have been entangled in quantum computers has nearly doubled within the past year – the increase is happening so fast, it seems to be following a “quantum Moore’s law”.
    First proposed in 1965 by Gordon Moore, who went on to co-found Intel, Moore’s law states that the power we can get out of a single traditional computer chip doubles at regular intervals: every year at first, then every two years as manufacturing encountered…
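    A doubling trend of this kind can be written as N(t) = N0 × 2^(t/T), where T is the doubling period. The short Python sketch below simply evaluates that formula; the starting count and time horizon are placeholder values, not figures reported by New Scientist.

        # Illustrative only: how a "quantum Moore's law" style doubling compounds.
        # The starting count n0 and the horizon are placeholders, not reported data.
        def doubling(n0: float, years: float, doubling_period: float = 1.0) -> float:
            """Return n0 doubled once per doubling period over the given span."""
            return n0 * 2 ** (years / doubling_period)

        n0 = 50  # hypothetical starting number of entangled qubits
        for years in range(6):
            print(years, "years:", round(doubling(n0, years)))

    Under a one-year doubling period, even a modest starting point grows 32-fold in five years, which is why the comparison to Moore’s law draws attention.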