• Fusion and AI: How private sector tech is powering progress at ITER

    In April 2025, at the ITER Private Sector Fusion Workshop in Cadarache, something remarkable unfolded. In a room filled with scientists, engineers and software visionaries, the line between big science and commercial innovation began to blur.  
    Three organisations – Microsoft Research, Arena and Brigantium Engineering – shared how artificial intelligence, already transforming everything from language models to logistics, is now stepping into a new role: helping humanity to unlock the power of nuclear fusion. 
    Each presenter addressed a different part of the puzzle, but the message was the same: AI isn’t just a buzzword anymore. It’s becoming a real tool – practical, powerful and indispensable – for big science and engineering projects, including fusion. 
    “If we think of the agricultural revolution and the industrial revolution, the AI revolution is next – and it’s coming at a pace which is unprecedented,” said Kenji Takeda, director of research incubations at Microsoft Research. 
    Microsoft’s collaboration with ITER is already in motion. Just a month before the workshop, the two teams signed a Memorandum of Understanding to explore how AI can accelerate research and development. This follows ITER’s initial use of Microsoft technology to empower its teams.
    A chatbot built on the Azure OpenAI service helps staff navigate technical knowledge spanning more than a million ITER documents using natural conversation. GitHub Copilot assists with coding, while AI helps to resolve IT support tickets – those everyday but essential tasks that keep the lights on. 
    But Microsoft’s vision goes deeper. Fusion demands materials that can survive extreme conditions – heat, radiation, pressure – and that’s where AI shows a different kind of potential. MatterGen, a Microsoft Research generative AI model for materials, designs entirely new materials based on specific properties.
    “It’s like ChatGPT,” said Takeda, “but instead of ‘Write me a poem’, we ask it to design a material that can survive as the first wall of a fusion reactor.” 
    The next step? MatterSim – a simulation tool that predicts how these imagined materials will behave in the real world. By combining generation and simulation, Microsoft hopes to uncover materials that don’t yet exist in any catalogue. 
    While Microsoft tackles the atomic scale, Arena is focused on a different challenge: speeding up hardware development. As general manager Michael Frei put it: “Software innovation happens in seconds. In hardware, that loop can take months – or years.” 
    Arena’s answer is Atlas, a multimodal AI platform that acts as an extra set of hands – and eyes – for engineers. It can read data sheets, interpret lab results, analyse circuit diagrams and even interact with lab equipment through software interfaces. “Instead of adjusting an oscilloscope manually,” said Frei, “you can just say, ‘Verify the I2C protocol’, and Atlas gets it done.” 
    It doesn’t stop there. Atlas can write and adapt firmware on the fly, responding to real-time conditions. That means tighter feedback loops, faster prototyping and fewer late nights in the lab. Arena aims to make building hardware feel a little more like writing software – fluid, fast and assisted by smart tools. 

    Fusion, of course, isn’t just about atoms and code – it’s also about construction. Gigantic, one-of-a-kind machines don’t build themselves. That’s where Brigantium Engineering comes in.
    Founder Lynton Sutton explained how his team uses “4D planning” – a marriage of 3D CAD models and detailed construction schedules – to visualise how everything comes together over time. “Gantt charts are hard to interpret. 3D models are static. Our job is to bring those together,” he said. 
    The result is a time-lapse-style animation that shows the construction process step by step. It’s proven invaluable for safety reviews and stakeholder meetings. Rather than poring over spreadsheets, teams can simply watch the plan come to life. 
    And there’s more. Brigantium is bringing these models into virtual reality using Unreal Engine – the same engine behind many video games. One recent model recreated ITER’s tokamak pit using drone footage and photogrammetry. The experience is fully interactive and can even run in a web browser.
    “We’ve really improved the quality of the visualisation,” said Sutton. “It’s a lot smoother; the textures look a lot better. Eventually, we’ll have this running through a web browser, so anybody on the team can just click on a web link to navigate this 4D model.” 
    Looking forward, Sutton believes AI could help automate the painstaking work of syncing schedules with 3D models. One day, these simulations could reach all the way down to individual bolts and fasteners – serving not just as impressive visuals, but as critical tools for preventing delays. 
    Despite the different approaches, one theme ran through all three presentations: AI isn’t just a tool for office productivity. It’s becoming a partner in creativity, problem-solving and even scientific discovery. 
    Takeda mentioned that Microsoft is experimenting with “world models” inspired by how video games simulate physics. These models learn about the physical world by watching pixels in the form of videos of real phenomena such as plasma behaviour. “Our thesis is that if you showed this AI videos of plasma, it might learn the physics of plasmas,” he said. 
    It sounds futuristic, but the logic holds. The more AI can learn from the world, the more it can help us understand it – and perhaps even master it. At its heart, the message from the workshop was simple: AI isn’t here to replace the scientist, the engineer or the planner; it’s here to help, and to make their work faster, more flexible and maybe a little more fun.
    As Takeda put it: “Those are just a few examples of how AI is starting to be used at ITER. And it’s just the start of that journey.” 
    If these early steps are any indication, that journey won’t just be faster – it might also be more inspired. 
    Source: www.computerweekly.com
  • 15 riveting images from the 2025 UN World Oceans Day Photo Competition

    Big and Small Underwater Faces — 3rd Place.
    Trips to the Antarctic Peninsula always yield amazing encounters with leopard seals. Boldly approaching me and baring his teeth, this individual was keen to point out that this part of Antarctica was his territory. This picture was shot at dusk, resulting in the rather moody atmosphere.
     
    Credit: Lars von Ritter Zahony / United Nations World Oceans Day


    The striking eye of a humpback whale named Sweet Girl peers at the camera. Just four days later, she would be dead, hit by a speeding boat – one of the 20,000 whales killed by ship strikes each year. Photographer Rachel Moore’s captivating image of Sweet Girl earned top honors at the 2025 United Nations World Oceans Day Photo Competition.
    Wonder: Sustaining What Sustains Us — Winner
    This photo, taken in Mo’orea, French Polynesia in 2024, captures the eye of a humpback whale named Sweet Girl, just days before her tragic death. Four days after I captured this intimate moment, she was struck and killed by a fast-moving ship. Her death serves as a heartbreaking reminder of the 20,000 whales lost to ship strikes every year. We are using her story to advocate for stronger protections, petitioning for stricter speed laws around Tahiti and Mo’orea during whale season. I hope Sweet Girl’s legacy will spark real change to protect these incredible animals and prevent further senseless loss.
    Credit: Rachel Moore / United Nations World Oceans Day www.unworldoceansday.org
    Now in its twelfth year, the competition is coordinated in collaboration between the UN Division for Ocean Affairs and the Law of the Sea, DivePhotoGuide, Oceanic Global, and the Intergovernmental Oceanographic Commission of UNESCO. Each year, thousands of underwater photographers submit images that judges award prizes for across four categories: Big and Small Underwater Faces, Underwater Seascapes, Above Water Seascapes, and Wonder: Sustaining What Sustains Us.
    This year’s winning images include a curious leopard seal, a swarm of jellyfish, and a very grumpy-looking Japanese warbonnet. Given our oceans’ perilous state, all competition participants were required to sign a charter of 14 commitments regarding ethics in photography.
    Underwater Seascapes — Honorable Mention
    With only orcas as their natural predators, leopard seals are Antarctica’s most versatile hunters, preying on everything from fish and cephalopods to penguins and other seals. Gentoo penguins are a favored menu item, and leopard seals can be observed patrolling the waters around their colonies. For this shot, I used a split image to capture both worlds: the gentoo penguin colony in the background with the leopard seal on the hunt in the foreground.
    Credit: Lars von Ritter Zahony / United Nations World Oceans Day www.unworldoceansday.org
    Above Water Seascapes — Winner
    A serene lake cradled by arid dunes, where a gentle stream breathes life into the heart of Mother Earth’s creation: Captured from an airplane, this image reveals the powerful contrasts and hidden beauty where land and ocean meet, reminding us that the ocean is the source of all life and that everything in nature is deeply connected. The location is a remote stretch of coastline near Shark Bay, Western Australia.
    Credit: Leander Nardin / United Nations World Oceans Day www.unworldoceansday.org
    Above Water Seascapes — 3rd Place
    Paradise Harbour is one of the most beautiful places on the Antarctic Peninsula. When I visited, the sea was extremely calm, and I was lucky enough to witness a wonderfully clear reflection of the Suárez Glacier in the water. The only problem was the waves created by our speedboat, and the only way to capture the perfect reflection was to lie on the bottom of the boat while it moved towards the glacier.
    Credit: Andrey Nosik / United Nations World Oceans Day www.unworldoceansday.org
    Underwater Seascapes — 3rd Place
    “La Rapadura” is a natural hidden treasure on the northern coast of Tenerife, in the Spanish territory of the Canary Islands. Only discovered in 1996, it is one of the most astonishing underwater landscapes in the world, consistently ranking among the planet’s best dive sites. These towering columns of basalt are the result of volcanic processes that occurred between 500,000 and a million years ago. The formation was created when a basaltic lava flow reached the ocean, where, upon cooling and solidifying, it contracted, creating natural structures often compared to the pipes of church organs. Located in a region where marine life has been impacted by once-common illegal fishing practices, this stunning natural monument has both geological and ecological value, and scientists and underwater photographers are advocating for its protection.
    Credit: Pedro Carrillo / United Nations World Oceans Day www.unworldoceansday.org
    Underwater Seascapes — Winner
    This year, I had the incredible opportunity to visit a jellyfish lake during a liveaboard trip around southern Raja Ampat, Indonesia. Being surrounded by millions of jellyfish, which have evolved to lose their stinging ability due to the absence of predators, was one of the most breathtaking experiences I’ve ever had.
    Credit: Dani Escayola / United Nations World Oceans Day www.unworldoceansday.org
    Underwater Seascapes — 2nd Place
    This shot captures a school of rays resting at a cleaning station in Mauritius, where strong currents once attracted them regularly. Some rays grew accustomed to divers, allowing close encounters like this. Sadly, after the severe bleaching that the reefs here suffered last year, such gatherings have become rare, and I fear I may not witness this again at the same spot.
    Credit: Gerald Rambert / United Nations World Oceans Day www.unworldoceansday.org
    Wonder: Sustaining What Sustains Us — 3rd Place
    Shot in Cuba’s Jardines de la Reina – a protected shark sanctuary – this image captures a Caribbean reef shark weaving through a group of silky sharks near the surface. Using a slow shutter and strobes as the shark pivoted sharply, the motion blurred into a wave-like arc across its head, lit by the golden hues of sunset. The abundance and behavior of sharks here is a living symbol of what protected oceans can look like.
    Credit: Steven Lopez / United Nations World Oceans Day www.unworldoceansday.org
    Above Water Seascapes — 2nd Place
    Northern gannets soar above the dramatic cliffs of Scotland’s Hermaness National Nature Reserve, their sleek white bodies and black-tipped wings slicing through the Shetland winds. These seabirds, the largest in the North Atlantic, are renowned for their striking plunge-dives, reaching speeds up to 100 kph as they hunt for fish beneath the waves. The cliffs of Hermaness provide ideal nesting sites, with updrafts aiding their take-offs and landings. Each spring, thousands return to this rugged coastline, forming one of the UK’s most significant gannet colonies. It was a major challenge to take photos at the edge of these cliffs at almost 200 meters, with winds up to 30 kph.
    Credit: Nur Tucker / United Nations World Oceans Day www.unworldoceansday.org
    Above Water Seascapes — Honorable Mention
    A South Atlantic swell breaks on the Dungeons Reef off the Cape Peninsula, South Africa, shot while photographing a big-wave surf session in October 2017. It’s the crescendoing sounds of these breaking swells that always amazes me.
    Credit: Ken Findlay / United Nations World Oceans Day www.unworldoceansday.org
    Wonder: Sustaining What Sustains Us — Honorable Mention
    Humpback whales in their thousands migrate along the Ningaloo Reef in Western Australia every year on the way to and from their calving grounds. In four seasons of swimming with them on the reef here, this is the only encounter I’ve had like this one. This pair of huge adult whales repeatedly spy-hopped alongside us, seeking to interact with and investigate us, leaving me completely breathless. The female in the foreground was much more confident than the male behind and would constantly make close approaches, whilst the male hung back a little, still interested but shy. After more than 10 years working with wildlife in the water, this was one of the best experiences of my life.
    Credit: Ollie Clarke / United Nations World Oceans Day www.unworldoceansday.org
    Big and Small Underwater Faces — 2nd Place
    On one of my many blackwater dives in Anilao, in the Philippines, my guide and I spotted something moving erratically at a depth of around 20 meters, about 10 to 15 centimeters in size. We quickly realized that it was a rare blanket octopus. As we approached, it opened up its beautiful blanket, revealing its multicolored mantle. I managed to take a few shots before it went on its way. I felt truly privileged to have captured this fascinating deep-sea cephalopod. Among its many unique characteristics, this species exhibits some of the most extreme sexual size-dimorphism in nature, with females weighing up to 40,000 times more than males.
    Credit: Giacomo Marchione / United Nations World Oceans Day www.unworldoceansday.org
    Big and Small Underwater Faces — Winner
    This photo of a Japanese warbonnet was captured in the Sea of Japan, about 50 miles southwest of Vladivostok, Russia. I found the ornate fish at a depth of about 30 meters, under the stern of a shipwreck. This species does not appear to be afraid of divers – on the contrary, it seems to enjoy the attention – and it even tried to sit on the dome port of my camera.
    Credit: Andrey Nosik / United Nations World Oceans Day www.unworldoceansday.org
    Wonder: Sustaining What Sustains Us — 2nd Place
    A juvenile pinnate batfish, captured with a slow shutter speed, a snooted light, and deliberate camera panning to create a sense of motion and drama. Juvenile pinnate batfish are known for their striking black bodies outlined in vibrant orange – a coloration they lose within just a few months as they mature. I encountered this restless subject in the tropical waters of Indonesia’s Lembeh Strait. Capturing this image took patience and persistence over two dives, as these active young fish constantly dart for cover in crevices, making the shot particularly challenging.
    Credit: Luis Arpa / United Nations World Oceans Day www.unworldoceansday.org
It’s the crescendoing sounds of these breaking swells that always amazes me.Credit: Ken Findlay/ United Nations World Oceans Day www.unworldoceansday.org Wonder: Sustaining What Sustains Us — Honorable MentionHumpback whales in their thousands migrate along the Ningaloo Reef in Western Australia every year on the way to and from their calving grounds. In four seasons of swimming with them on the reef here, this is the only encounter I’ve had like this one. This pair of huge adult whales repeatedly spy-hopped alongside us, seeking to interact with and investigate us, leaving me completely breathless. The female in the foreground was much more confident than the male behind and would constantly make close approaches, whilst the male hung back a little, still interested but shy. After more than 10 years working with wildlife in the water, this was one of the best experiences of my life.Credit: Ollie Clarke/ United Nations World Oceans Day www.unworldoceansday.org Big and Small Underwater Faces — 2nd PlaceOn one of my many blackwater dives in Anilao, in the Philippines, my guide and I spotted something moving erratically at a depth of around 20 meters, about 10 to 15 centimeters in size. We quickly realized that it was a rare blanket octopus. As we approached, it opened up its beautiful blanket, revealing its multicolored mantle. I managed to take a few shots before it went on its way. I felt truly privileged to have captured this fascinating deep-sea cephalopod. Among its many unique characteristics, this species exhibits some of the most extreme sexual size-dimorphism in nature, with females weighing up to 40,000 times more than males.Credit: Giacomo Marchione/ United Nations World Oceans Day www.unworldoceansday.org Big and Small Underwater Faces – WinnerThis photo of a Japanese warbonnetwas captured in the Sea of Japan, about 50 milessouthwest of Vladivostok, Russia. I found the ornate fish at a depth of about 30 meters, under the stern of a shipwreck. 
This species does not appear to be afraid of divers—on the contrary, it seems to enjoy the attention—and it even tried to sit on the dome port of my camera.Credit: Andrey Nosik/ United Nations World Oceans Day www.unworldoceansday.org Wonder: Sustaining What Sustains Us — 2nd PlaceA juvenile pinnate batfishcaptured with a slow shutter speed, a snooted light, and deliberate camera panning to create a sense of motion and drama. Juvenile pinnate batfish are known for their striking black bodies outlined in vibrant orange—a coloration they lose within just a few months as they mature. I encountered this restless subject in the tropical waters of Indonesia’s Lembeh Strait. Capturing this image took patience and persistence over two dives, as these active young fish constantly dart for cover in crevices, making the shot particularly challenging.Credit: Luis Arpa/ United Nations World Oceans Day www.unworldoceansday.org #riveting #images #world #oceans #dayphoto
    WWW.POPSCI.COM
    15 riveting images from the 2025 UN World Oceans Day Photo Competition
    Big and Small Underwater Faces — 3rd Place
    Trips to the Antarctic Peninsula always yield amazing encounters with leopard seals (Hydrurga leptonyx). Boldly approaching me and baring his teeth, this individual was keen to point out that this part of Antarctica was his territory. This picture was shot at dusk, resulting in the rather moody atmosphere.
    Credit: Lars von Ritter Zahony (Germany) / United Nations World Oceans Day www.unworldoceansday.org

    The striking eye of a humpback whale named Sweet Girl peers at the camera. Just four days later, she would be dead, hit by a speeding boat, one of the 20,000 whales killed by ship strikes each year. Photographer Rachel Moore’s captivating image (seen below) of Sweet Girl earned top honors at the 2025 United Nations World Oceans Day Photo Competition.

    Wonder: Sustaining What Sustains Us — Winner
    This photo, taken in Mo’orea, French Polynesia in 2024, captures the eye of a humpback whale named Sweet Girl, just days before her tragic death. Four days after I captured this intimate moment, she was struck and killed by a fast-moving ship. Her death serves as a heartbreaking reminder of the 20,000 whales lost to ship strikes every year. We are using her story to advocate for stronger protections, petitioning for stricter speed laws around Tahiti and Mo’orea during whale season. I hope Sweet Girl’s legacy will spark real change to protect these incredible animals and prevent further senseless loss.
    Credit: Rachel Moore (USA) / United Nations World Oceans Day www.unworldoceansday.org

    Now in its twelfth year, the competition is coordinated in collaboration between the UN Division for Ocean Affairs and the Law of the Sea, DivePhotoGuide (DPG), Oceanic Global, and the Intergovernmental Oceanographic Commission of UNESCO. Each year, thousands of underwater photographers submit images that judges award prizes for across four categories: Big and Small Underwater Faces, Underwater Seascapes, Above Water Seascapes, and Wonder: Sustaining What Sustains Us. This year’s winning images include a curious leopard seal, a swarm of jellyfish, and a very grumpy-looking Japanese warbonnet. Given our oceans’ perilous state, all competition participants were required to sign a charter of 14 commitments regarding ethics in photography.

    Underwater Seascapes — Honorable Mention
    With only orcas as their natural predators, leopard seals are Antarctica’s most versatile hunters, preying on everything from fish and cephalopods to penguins and other seals. Gentoo penguins are a favored menu item, and leopard seals can be observed patrolling the waters around their colonies. For this shot, I used a split image to capture both worlds: the gentoo penguin colony in the background with the leopard seal on the hunt in the foreground.
    Credit: Lars von Ritter Zahony (Germany) / United Nations World Oceans Day www.unworldoceansday.org

    Above Water Seascapes — Winner
    A serene lake cradled by arid dunes, where a gentle stream breathes life into the heart of Mother Earth’s creation: Captured from an airplane, this image reveals the powerful contrasts and hidden beauty where land and ocean meet, reminding us that the ocean is the source of all life and that everything in nature is deeply connected. The location is a remote stretch of coastline near Shark Bay, Western Australia.
    Credit: Leander Nardin (Austria) / United Nations World Oceans Day www.unworldoceansday.org

    Above Water Seascapes — 3rd Place
    Paradise Harbour is one of the most beautiful places on the Antarctic Peninsula. When I visited, the sea was extremely calm, and I was lucky enough to witness a wonderfully clear reflection of the Suárez Glacier (aka Petzval Glacier) in the water. The only problem was the waves created by our speedboat, and the only way to capture the perfect reflection was to lie on the bottom of the boat while it moved towards the glacier.
    Credit: Andrey Nosik (Russia) / United Nations World Oceans Day www.unworldoceansday.org

    Underwater Seascapes — 3rd Place
    “La Rapadura” is a natural hidden treasure on the northern coast of Tenerife, in the Spanish territory of the Canary Islands. Discovered only in 1996, it is one of the most astonishing underwater landscapes in the world, consistently ranking among the planet’s best dive sites. These towering columns of basalt are the result of volcanic processes that occurred between 500,000 and a million years ago. The formation was created when a basaltic lava flow reached the ocean, where, upon cooling and solidifying, it contracted, creating natural structures often compared to the pipes of church organs. Located in a region where marine life has been impacted by once-common illegal fishing practices, this stunning natural monument has both geological and ecological value, and scientists and underwater photographers are advocating for its protection. (Model: Yolanda Garcia)
    Credit: Pedro Carrillo (Spain) / United Nations World Oceans Day www.unworldoceansday.org

    Underwater Seascapes — Winner
    This year, I had the incredible opportunity to visit a jellyfish lake during a liveaboard trip around southern Raja Ampat, Indonesia. Being surrounded by millions of jellyfish, which have evolved to lose their stinging ability due to the absence of predators, was one of the most breathtaking experiences I’ve ever had.
    Credit: Dani Escayola (Spain) / United Nations World Oceans Day www.unworldoceansday.org

    Underwater Seascapes — 2nd Place
    This shot captures a school of rays resting at a cleaning station in Mauritius, where strong currents once attracted them regularly. Some rays grew accustomed to divers, allowing close encounters like this. Sadly, after the severe bleaching that the reefs here suffered last year, such gatherings have become rare, and I fear I may not witness this again at the same spot.
    Credit: Gerald Rambert (Mauritius) / United Nations World Oceans Day www.unworldoceansday.org

    Wonder: Sustaining What Sustains Us — 3rd Place
    Shot in Cuba’s Jardines de la Reina—a protected shark sanctuary—this image captures a Caribbean reef shark weaving through a group of silky sharks near the surface. Using a slow shutter and strobes as the shark pivoted sharply, the motion blurred into a wave-like arc across its head, lit by the golden hues of sunset. The abundance and behavior of sharks here are a living symbol of what protected oceans can look like.
    Credit: Steven Lopez (USA) / United Nations World Oceans Day www.unworldoceansday.org

    Above Water Seascapes — 2nd Place
    Northern gannets (Morus bassanus) soar above the dramatic cliffs of Scotland’s Hermaness National Nature Reserve, their sleek white bodies and black-tipped wings slicing through the Shetland winds. These seabirds, the largest in the North Atlantic, are renowned for their striking plunge-dives, reaching speeds of up to 100 kph (60 mph) as they hunt for fish beneath the waves. The cliffs of Hermaness provide ideal nesting sites, with updrafts aiding their take-offs and landings. Each spring, thousands return to this rugged coastline, forming one of the UK’s most significant gannet colonies. It was a major challenge to take photos at the edge of these cliffs at almost 200 meters (650 feet), with winds of up to 30 kph (20 mph).
    Credit: Nur Tucker (UK/Turkey) / United Nations World Oceans Day www.unworldoceansday.org

    Above Water Seascapes — Honorable Mention
    A South Atlantic swell breaks on the Dungeons Reef off the Cape Peninsula, South Africa, shot while photographing a big-wave surf session in October 2017. It’s the crescendoing sound of these breaking swells that always amazes me.
    Credit: Ken Findlay (South Africa) / United Nations World Oceans Day www.unworldoceansday.org

    Wonder: Sustaining What Sustains Us — Honorable Mention
    Humpback whales in their thousands migrate along the Ningaloo Reef in Western Australia every year on the way to and from their calving grounds. In four seasons of swimming with them on the reef here, this is the only encounter I’ve had like this one. This pair of huge adult whales repeatedly spy-hopped alongside us, seeking to interact with and investigate us, leaving me completely breathless. The female in the foreground was much more confident than the male behind and would constantly make close approaches, whilst the male hung back a little, still interested but shy. After more than 10 years working with wildlife in the water, this was one of the best experiences of my life.
    Credit: Ollie Clarke (UK) / United Nations World Oceans Day www.unworldoceansday.org

    Big and Small Underwater Faces — 2nd Place
    On one of my many blackwater dives in Anilao, in the Philippines, my guide and I spotted something moving erratically at a depth of around 20 meters (65 feet), about 10 to 15 centimeters in size. We quickly realized that it was a rare blanket octopus (Tremoctopus sp.). As we approached, it opened up its beautiful blanket, revealing its multicolored mantle. I managed to take a few shots before it went on its way. I felt truly privileged to have captured this fascinating deep-sea cephalopod. Among its many unique characteristics, this species exhibits some of the most extreme sexual size dimorphism in nature, with females weighing up to 40,000 times more than males.
    Credit: Giacomo Marchione (Italy) / United Nations World Oceans Day www.unworldoceansday.org

    Big and Small Underwater Faces — Winner
    This photo of a Japanese warbonnet (Chirolophis japonicus) was captured in the Sea of Japan, about 50 miles (80 kilometers) southwest of Vladivostok, Russia. I found the ornate fish at a depth of about 30 meters (100 feet), under the stern of a shipwreck. This species does not appear to be afraid of divers—on the contrary, it seems to enjoy the attention—and it even tried to sit on the dome port of my camera.
    Credit: Andrey Nosik (Russia) / United Nations World Oceans Day www.unworldoceansday.org

    Wonder: Sustaining What Sustains Us — 2nd Place
    A juvenile pinnate batfish (Platax pinnatus), captured with a slow shutter speed, a snooted light, and deliberate camera panning to create a sense of motion and drama. Juvenile pinnate batfish are known for their striking black bodies outlined in vibrant orange—a coloration they lose within just a few months as they mature. I encountered this restless subject in the tropical waters of Indonesia’s Lembeh Strait. Capturing this image took patience and persistence over two dives, as these active young fish constantly dart for cover in crevices, making the shot particularly challenging.
    Credit: Luis Arpa (Spain) / United Nations World Oceans Day www.unworldoceansday.org
  • Microsoft 365 Word gets SharePoint eSignature, now you can ditch third-party signing tools


    Paul Hill

    Neowin
    @ziks_99 ·

    Jun 6, 2025 03:02 EDT

    Microsoft has announced that, throughout this year, it will be rolling out a highly convenient feature for Microsoft 365 customers who use Word. The Redmond giant said that you’ll now be able to use SharePoint’s native eSignature service directly in Microsoft Word.
    The new feature allows customers to request electronic signatures without converting the documents to a PDF or leaving the Word interface, significantly speeding up workflows.
    Microsoft’s integration of eSignatures also allows you to create eSignature templates which will speed up document approvals, eliminate physical signing steps, and help with compliance and security in the Microsoft 365 environment.

    This change has the potential to significantly improve quality of life for workers who find themselves adding lots of signatures to documents, as they will no longer have to export PDFs from Word and apply the signature elsewhere. It’s also worth pointing out that this feature is integrated natively and is not an extension.
    The move is quite clever on Microsoft’s part: if businesses were using third-party tools to sign their documents, they would no longer need them, as it’s easier to do it in Word. Not only does this reduce reliance on other tools, it also makes Microsoft’s products more competitive against other office suites such as Google Workspace.
    Streamlined, secure, and compliant
    The new eSignature feature is tightly integrated into Word. It lets you insert signature fields seamlessly into documents and request other people’s signatures, all while remaining in Word. The eSignature feature can be accessed in Word by going to the Insert ribbon.
    When you send a signature request to someone from Word, the recipient will get an automatically generated PDF copy of the Word document to sign. The signed PDF will then be kept in the same SharePoint location as the original Word file. To ensure end-to-end security and compliance, the document never leaves the Microsoft 365 trust boundary.
    For anyone with a repetitive signing process, this integration allows you to turn Word documents into eSignature templates so they can be reused.
    Microsoft has also built in audit trails and notifications. Both senders and signers will get email notifications throughout the entire signing process. Additionally, you can view the activity history (audit trail) in the signed PDF to check who signed it and when.
    Finally, Microsoft said that administrators will be able to control how the feature is used in Word throughout the organization. They can decide to enable it for specific users via an Office group policy or limit it to particular SharePoint sites. The company said that SharePoint eSignature also lets admins log activities in the Purview Audit log.
    As mentioned above, a key security measure is the Microsoft 365 trust boundary. By keeping documents inside this boundary, Microsoft ensures that organizations can use the feature without additional compliance concerns.
    The automatic PDF creation is a huge benefit to users, as it cuts out the step of manual PDF creation. While creating a PDF isn’t complicated, it can be time-consuming.
    The eSignature feature looks like a win-win-win for organizations that rely on digital signatures. Not only does it speed things along and remain secure, but it’s also packed with features like tracking, making it really useful and comprehensive.
    When and how your organization gets it
    SharePoint eSignature has started rolling out to Word on the M365 Beta and Current Channels in the United States, Canada, the United Kingdom, Europe, and Australia-Pacific. This phase of the rollout is expected to be completed by early July.
    The feature will also reach the rest of the world, though not everyone will get it right away; Microsoft promises availability for everybody by the end of the year.
    To use the feature, it will need to be enabled by administrators. If you’re an admin who needs to enable this, go to the M365 Admin Center and enable SharePoint eSignature, ensuring the Word checkbox is selected. Once the service is enabled, apply the “Allow the use of SharePoint eSignature for Microsoft Word” policy. The policy can be enabled via Intune, Group Policy Manager, or the Cloud Policy service for Microsoft 365.
    Assuming the admins have given permission to use the feature, users will be able to access SharePoint eSignatures on Word Desktop using the Microsoft 365 Current Channel or Beta Channel.
    The main caveats are that the rollout is phased, so you might not get it right away, and that it requires IT admins to enable the feature, meaning in some organizations it may never get enabled at all.
    Overall, this feature stands to benefit users who sign documents frequently, as it can save huge amounts of time cumulatively. It’s also good for Microsoft, as it increases organizations’ dependence on Word.

    Tags

    Report a problem with article

    Follow @NeowinFeed
    #microsoft #word #gets #sharepoint #esignature
    Microsoft 365 Word gets SharePoint eSignature, now you can ditch third-party signing tools
    When you purchase through links on our site, we may earn an affiliate commission. Here’s how it works. Microsoft 365 Word gets SharePoint eSignature, now you can ditch third-party signing tools Paul Hill Neowin @ziks_99 · Jun 6, 2025 03:02 EDT Microsoft has just announced that it will be rolling out an extremely convenient feature for Microsoft 365 customers who use Word throughout this year. The Redmond giant said that you’ll now be able to use SharePoint’s native eSignature service directly in Microsoft Word. The new feature allows customers to request electronic signatures without converting the documents to a PDF or leaving the Word interface, significantly speeding up workflows. Microsoft’s integration of eSignatures also allows you to create eSignature templates which will speed up document approvals, eliminate physical signing steps, and help with compliance and security in the Microsoft 365 environment. This change has the potential to significantly improve the quality-of-life for those in work finding themselves adding lots of signatures to documents as they will no longer have to export PDFs from Word and apply the signature outside of Word. It’s also key to point out that this feature is integrated natively and is not an extension. The move is quite clever from Microsoft, if businesses were using third-party tools to sign their documents, they would no longer need to use these as it’s easier to do it in Word. Not only does it reduce reliance on other tools, it also makes Microsoft’s products more competitive against other office suites such as Google Workspace. Streamlined, secure, and compliant The new eSignature feature is tightly integrated into Word. It lets you insert signature fields seamlessly into documents and request other people’s signatures, all while remaining in Word. The eSignature feature can be accessed in Word by going to the Insert ribbon. 
When you send a signature request to someone from Word, the recipient will get an automatically generated PDF copy of the Word document to sign. The signed PDF will then be kept in the same SharePoint location as the original Word file. To ensure end-to-end security and compliance, the document never leaves the Microsoft 365 trust boundary. For anyone with a repetitive signing process, this integration allows you to turn Word documents into eSignature templates so they can be reused. Another feature that Microsoft has built in is audit trail and notifications. Both the senders and signers will get email notifications throughout the entire signing process. Additionally, you can view the activity historyin the signed PDF to check who signed it and when. Finally, Microsoft said that administrators will be able to control how the feature is used in Word throughout the organization. They can decide to enable it for specific users via an Office group policy or limit it to particular SharePoint sites. The company said that SharePoint eSignature also lets admins log activities in the Purview Audit log. A key security measure included by Microsoft, which was mentioned above, was the Microsoft 365 trust boundary. By keeping documents in this boundary, Microsoft ensures that all organizations can use this feature without worry. The inclusion of automatic PDF creation is all a huge benefit to users as it will cut out the step of manual PDF creation. While creating a PDF isn’t complicated, it can be time consuming. The eSignature feature looks like a win-win-win for organizations that rely on digital signatures. Not only does it speed things along and remain secure, but it’s also packed with features like tracking, making it really useful and comprehensive. When and how your organization gets it SharePoint eSignature has started rolling out to Word on the M365 Beta and Current Channels in the United States, Canada, the United Kingdom, Europe, and Australia-Pacific. 
This phase of the rollout is expected to be completed by early July. People in the rest of the world will also be gaining this time-saving feature but it will not reach everyone right away, though Microsoft promises to reach everybody by the end of the year. To use the feature, it will need to be enabled by administrators. If you’re an admin who needs to enable this, just go to the M365 Admin Center and enable SharePoint eSignature, ensuring the Word checkbox is selected. Once the service is enabled, apply the “Allow the use of SharePoint eSignature for Microsoft Word” policy. The policy can be enabled via Intune, Group Policy manager, or the Cloud Policy service for Microsoft 365 Assuming the admins have given permission to use the feature, users will be able to access SharePoint eSignatures on Word Desktop using the Microsoft 365 Current Channel or Beta Channel. The main caveats include that the rollout is phased, so you might not get it right away, and it requires IT admins to enable the feature - in which case, it may never get enabled at all. Overall, this feature stands to benefit users who sign documents a lot as it can save huge amounts of time cumulatively. It’s also good for Microsoft who increase organizations’ dependence on Word. Tags Report a problem with article Follow @NeowinFeed #microsoft #word #gets #sharepoint #esignature
    WWW.NEOWIN.NET
    Microsoft 365 Word gets SharePoint eSignature, now you can ditch third-party signing tools
    When you purchase through links on our site, we may earn an affiliate commission. Here’s how it works. Microsoft 365 Word gets SharePoint eSignature, now you can ditch third-party signing tools Paul Hill Neowin @ziks_99 · Jun 6, 2025 03:02 EDT Microsoft has just announced that it will be rolling out an extremely convenient feature for Microsoft 365 customers who use Word throughout this year. The Redmond giant said that you’ll now be able to use SharePoint’s native eSignature service directly in Microsoft Word. The new feature allows customers to request electronic signatures without converting the documents to a PDF or leaving the Word interface, significantly speeding up workflows. Microsoft’s integration of eSignatures also allows you to create eSignature templates which will speed up document approvals, eliminate physical signing steps, and help with compliance and security in the Microsoft 365 environment. This change has the potential to significantly improve the quality-of-life for those in work finding themselves adding lots of signatures to documents as they will no longer have to export PDFs from Word and apply the signature outside of Word. It’s also key to point out that this feature is integrated natively and is not an extension. The move is quite clever from Microsoft, if businesses were using third-party tools to sign their documents, they would no longer need to use these as it’s easier to do it in Word. Not only does it reduce reliance on other tools, it also makes Microsoft’s products more competitive against other office suites such as Google Workspace. Streamlined, secure, and compliant The new eSignature feature is tightly integrated into Word. It lets you insert signature fields seamlessly into documents and request other people’s signatures, all while remaining in Word. The eSignature feature can be accessed in Word by going to the Insert ribbon. 
When you send a signature request to someone from Word, the recipient will get an automatically generated PDF copy of the Word document to sign. The signed PDF will then be kept in the same SharePoint location as the original Word file. To ensure end-to-end security and compliance, the document never leaves the Microsoft 365 trust boundary. For anyone with a repetitive signing process, this integration allows you to turn Word documents into eSignature templates so they can be reused. Another feature that Microsoft has built in is audit trail and notifications. Both the senders and signers will get email notifications throughout the entire signing process. Additionally, you can view the activity history (audit trail) in the signed PDF to check who signed it and when. Finally, Microsoft said that administrators will be able to control how the feature is used in Word throughout the organization. They can decide to enable it for specific users via an Office group policy or limit it to particular SharePoint sites. The company said that SharePoint eSignature also lets admins log activities in the Purview Audit log. A key security measure included by Microsoft, which was mentioned above, was the Microsoft 365 trust boundary. By keeping documents in this boundary, Microsoft ensures that all organizations can use this feature without worry. The inclusion of automatic PDF creation is all a huge benefit to users as it will cut out the step of manual PDF creation. While creating a PDF isn’t complicated, it can be time consuming. The eSignature feature looks like a win-win-win for organizations that rely on digital signatures. Not only does it speed things along and remain secure, but it’s also packed with features like tracking, making it really useful and comprehensive. 
When and how your organization gets it

SharePoint eSignature has started rolling out to Word on the Microsoft 365 Beta and Current Channels in the United States, Canada, the United Kingdom, Europe, and Australia-Pacific. This phase of the rollout is expected to be completed by early July. The rest of the world will also gain this time-saving feature, though not right away; Microsoft promises to reach everybody by the end of the year.

To use the feature, it will need to be enabled by administrators. If you’re an admin who needs to enable it, go to the Microsoft 365 Admin Center and enable SharePoint eSignature, ensuring the Word checkbox is selected. Once the service is enabled, apply the “Allow the use of SharePoint eSignature for Microsoft Word” policy. The policy can be enabled via Intune, Group Policy manager, or the Cloud Policy service for Microsoft 365.

Assuming admins have given permission, users will be able to access SharePoint eSignature in Word Desktop on the Microsoft 365 Current or Beta Channel. The main caveats are that the rollout is phased, so you might not get it right away, and that it requires IT admins to enable the feature, in which case it may never get enabled at all. Overall, this feature stands to benefit users who sign documents frequently, as it can save large amounts of time cumulatively. It’s also good for Microsoft, as it increases organizations’ dependence on Word.
  • Upcoming (serious) Web performance boost

    By: Adam Scott · 5 June 2025 · Progress Report

    Sometimes, just adding a compiler flag can yield significant performance boosts. And that just happened.

    For about two years now, all major browsers have supported WASM (WebAssembly) SIMD. SIMD stands for “Single instruction, multiple data” and is a technology that permits CPUs to do some parallel computation, often speeding up the whole program. And that’s exactly why we tried it out recently. We got positive results.

    The need for performance on the Web

    The Web platform is often overlooked as a viable target because of its less-than-ideal environment and its perceived poor performance. And the perception is somewhat right: the Web environment has a lot of security-related quirks to take into account (the user needs to interact with a game frame before the browser allows it to play any sound[1]). Also, due to bandwidth and compatibility reasons, you rarely see high-fidelity games being played in a browser. Performance is better achieved when running software natively on the operating system.

    But don’t underestimate the potential of the Web platform. As I explained in broad terms in the talk I gave at the last GodotCon Boston 2025, the Web has caught up a lot since the days of Flash games. Not only are there more people playing Web games every year, but standards and browsers also improve every year in functionality and performance. And that’s why we are interested in using WASM SIMD.

    WASM SIMD benchmarks

    Our resident benchmark expert Hugo Locurcio (better known as Calinou) ran the numbers for us on a stress test I made. We wanted to compare standard builds to builds with WASM SIMD enabled.

    Note: You may try to replicate his results, but be aware that he has a beast of a machine. Here are his PC’s specifications:

    CPU: Intel Core i9-13900K
    GPU: NVIDIA GeForce RTX 4090
    RAM: 64 GB (2×32 GB DDR5-5800 CL30)
    SSD: Solidigm P44 Pro 2 TB
    OS: Linux (Fedora 42)

    I built a Jolt physics stress test from a scene initially made by passivestar.
By spawning more and more barrels into the contraption, we can easily test the performance difference between the WASM SIMD build and the other. (Links to the with- and without-SIMD test builds are in the original post.)

    Browser       | Scenario                | Improvement (approx.)
    Firefox 138   | “+100 barrels” 3 times  | 2×
    Firefox 138   | “+100 barrels” 6 times  | 10.17×*
    Chromium 134  | “+100 barrels” 3 times  | 1.37×
    Chromium 134  | “+100 barrels” 6 times  | 14.17×*

*Please note that once the physics engine enters a “spiral of death”, it is common for the framerate to drop to single digits, SIMD or not. These tests don’t prove 10× to 15× CPU computing speed improvements, but rather that games will be more resilient to framerate drops on the same machine in the same circumstances. The 1.5× to 2× numbers are more representative of the performance gains from WASM SIMD.

What it means for your games

Starting with 4.5 dev 5, you can expect your Web games to run a little more smoothly, without your having to do anything, especially when things get chaotic (for your CPU). It isn’t a silver bullet for poorly optimized games, but it will help nonetheless. Note, though, that it cannot do anything for GPU rendering bottlenecks. Be aware that the stress tests are by nature meant to test worst-case scenarios, so you may not see such large improvements in normal circumstances. But it’s nice to see such stark improvements when the worst happens.

Availability

From here on out, the official 4.5 release templates will only support WebAssembly SIMD-compatible browsers, in order to keep the template sizes small. We generally aim to maintain compatibility with the oldest devices we can, but in this case the performance gains are too large to ignore, and the chance of users having browsers that far out of date is too small relative to the potential benefits. If you need non-SIMD templates, don’t fret: you can always build the Godot Editor and the engine templates without WebAssembly SIMD support by using the wasm_simd=no build option.

What’s next?

As I wrote in my last blog post, we’re currently working very hard to make C#/.NET exports a reality.
We do have a promising prototype; we just need to make sure that it’s production-ready. I also mentioned in that article that I wanted to concentrate on improving our asset loading game. Preloading an entire game before even starting it hinders the ability to use Godot for commercial Web games. Once something is implemented to improve that issue, count on me to share the news with you.

    [1] It’s either that, or we return to the old days of spam webpages playing the “Congratulations, you won!” sound effect when you least expect it.
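The “single instruction, multiple data” idea behind WASM SIMD can be sketched outside the engine. Below is a rough Python/NumPy analogy (an illustration only, not Godot or WebAssembly code; whether NumPy’s vectorized ops actually hit native SIMD lanes depends on your CPU and NumPy build):

```python
import numpy as np

# Scalar path: one add executed per loop iteration, one element at a time.
def add_scalar(a, b):
    out = [0.0] * len(a)
    for i in range(len(a)):
        out[i] = a[i] + b[i]
    return out

# SIMD-style path: a single vectorized operation over whole arrays.
# NumPy dispatches this to native SIMD lanes (SSE/AVX/NEON) where available.
def add_vectorized(a, b):
    return a + b

a = np.arange(1024, dtype=np.float32)
b = np.ones(1024, dtype=np.float32)

# Both paths compute the same result; the vectorized one does it in
# far fewer instruction dispatches.
assert np.allclose(add_scalar(a.tolist(), b.tolist()), add_vectorized(a, b))
```

The same principle applies inside a WASM module: with the SIMD extension enabled, the compiler can emit 128-bit lane operations instead of element-by-element scalar code, which is where the physics-stress-test gains above come from.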
  • NVIDIA Blackwell Delivers Breakthrough Performance in Latest MLPerf Training Results

    NVIDIA is working with companies worldwide to build out AI factories — speeding the training and deployment of next-generation AI applications that use the latest advancements in training and inference.
    The NVIDIA Blackwell architecture is built to meet the heightened performance requirements of these new applications. In the latest round of MLPerf Training — the 12th since the benchmark’s introduction in 2018 — the NVIDIA AI platform delivered the highest performance at scale on every benchmark and powered every result submitted on the benchmark’s toughest large language model-focused test: Llama 3.1 405B pretraining.
    The NVIDIA platform was the only one that submitted results on every MLPerf Training v5.0 benchmark — underscoring its exceptional performance and versatility across a wide array of AI workloads, spanning LLMs, recommendation systems, multimodal LLMs, object detection and graph neural networks.
    The at-scale submissions used two AI supercomputers powered by the NVIDIA Blackwell platform: Tyche, built using NVIDIA GB200 NVL72 rack-scale systems, and Nyx, based on NVIDIA DGX B200 systems. In addition, NVIDIA collaborated with CoreWeave and IBM to submit GB200 NVL72 results using a total of 2,496 Blackwell GPUs and 1,248 NVIDIA Grace CPUs.
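As a sanity check on those figures: the GPU-to-CPU ratio of the CoreWeave/IBM submission matches the GB200 NVL72 layout. The per-rack counts (72 Blackwell GPUs and 36 Grace CPUs per NVL72 rack) are NVIDIA’s published spec, assumed here rather than taken from this article:

```python
# GPU and CPU totals are from the article; the per-rack layout
# (72 GPUs, 36 Grace CPUs per GB200 NVL72 rack) is assumed from
# NVIDIA's published spec.
gpus, cpus = 2496, 1248

# Each GB200 superchip pairs one Grace CPU with two Blackwell GPUs.
assert gpus / cpus == 2

# The totals correspond to about 34.7 NVL72 racks' worth of GPUs,
# and the implied CPU count per rack comes out to the expected 36.
racks = gpus / 72
assert round(cpus / racks) == 36
```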
    On the new Llama 3.1 405B pretraining benchmark, Blackwell delivered 2.2x greater performance compared with previous-generation architecture at the same scale.
    On the Llama 2 70B LoRA fine-tuning benchmark, NVIDIA DGX B200 systems, powered by eight Blackwell GPUs, delivered 2.5x more performance compared with a submission using the same number of GPUs in the prior round.
    These performance leaps highlight advancements in the Blackwell architecture, including high-density liquid-cooled racks, 13.4TB of coherent memory per rack, fifth-generation NVIDIA NVLink and NVIDIA NVLink Switch interconnect technologies for scale-up and NVIDIA Quantum-2 InfiniBand networking for scale-out. Plus, innovations in the NVIDIA NeMo Framework software stack raise the bar for next-generation multimodal LLM training, critical for bringing agentic AI applications to market.
    These agentic AI-powered applications will one day run in AI factories — the engines of the agentic AI economy. These new applications will produce tokens and valuable intelligence that can be applied to almost every industry and academic domain.
    The NVIDIA data center platform includes GPUs, CPUs, high-speed fabrics and networking, as well as a vast array of software like NVIDIA CUDA-X libraries, the NeMo Framework, NVIDIA TensorRT-LLM and NVIDIA Dynamo. This highly tuned ensemble of hardware and software technologies empowers organizations to train and deploy models more quickly, dramatically accelerating time to value.
    The NVIDIA partner ecosystem participated extensively in this MLPerf round. Beyond the submission with CoreWeave and IBM, other compelling submissions were from ASUS, Cisco, Dell Technologies, Giga Computing, Google Cloud, Hewlett Packard Enterprise, Lambda, Lenovo, Nebius, Oracle Cloud Infrastructure, Quanta Cloud Technology and Supermicro.
    Learn more about MLPerf benchmarks.
  • Diabetes management: IBM and Roche use AI to forecast blood sugar levels

IBM and Roche are teaming up on an AI solution to a challenge faced by millions worldwide: the relentless daily grind of diabetes management. Their new brainchild, the Accu-Chek SmartGuide Predict app, provides AI-powered glucose forecasting to users. The app doesn’t just track where your glucose levels are; it tells you where they’re heading. Imagine having a weather forecast, but for your blood sugar. That’s essentially what IBM and Roche are creating.

    AI-powered diabetes management

    The app works alongside Roche’s continuous glucose monitoring sensor, crunching the numbers in real time to offer predictive insights that can help users stay ahead of potentially dangerous blood sugar swings.

    What caught my eye were the three standout features that address very specific worries diabetics face. The “Glucose Predict” function visualises where your glucose might be heading over the next two hours, giving you that crucial window to make adjustments before things go south.

    For those who live with the anxiety of hypoglycaemia, the “Low Glucose Predict” feature acts like an early warning system, flagging potential lows up to half an hour before they might occur. That’s enough time to take corrective action.

    Perhaps most reassuring is the “Night Low Predict” feature, which estimates your risk of overnight hypoglycaemia, often the most frightening prospect for diabetes patients. Before tucking in for the night, the app gives you a heads-up about whether you might need a bedtime snack. This feature should bring peace of mind to countless households.

    “By harnessing the power of AI-enabled predictive technology, Roche’s Accu-Chek SmartGuide Predict App can help empower people with diabetes to take proactive measures to manage their disease,” says Moritz Hartmann, Head of Roche Information Solutions.

    How AI is speeding up diabetes research

    It’s not just patients benefiting from this partnership.
The companies have developed a rather clever research tool using IBM’s watsonx AI platform that is transforming how clinical study data gets analysed. Anyone who’s been involved in clinical research knows the mind-numbing tedium of manual data analysis. IBM and Roche’s tool does the heavy lifting: digitising, translating, and categorising all that anonymised clinical data, then connecting the dots between glucose monitoring data and participants’ daily activities.

    The result? Researchers can spot meaningful patterns and correlations in a fraction of the time it would normally take. This behind-the-scenes innovation might do more to advance diabetes care and management in the long run than the app itself.

    What makes this collaboration particularly interesting is how it brings together two different worlds: IBM’s computing prowess and AI know-how paired with Roche’s decades of healthcare and diabetes expertise.

    “Our long-standing partnership with IBM underscores the potential of cross-industry innovation in addressing unmet healthcare needs and bringing significant advancements to patients faster,” says Hartmann. “Using cutting-edge technology such as AI and machine learning helps us to accelerate time to market and to improve therapy outcomes at the same time.”

    Christian Keller, General Manager of IBM Switzerland, added: “The collaboration with Roche underlines the potential of AI when it’s implemented with a clear goal: assisting patients in managing their diabetes. With our technology and consulting expertise we can offer a trusted, customised, and secure technical environment that is essential to enable innovation in healthcare.”

    What this means for the future of healthcare tech

    Having covered healthcare tech for years, I’ve seen plenty of promising innovations fizzle out.
However, this IBM-Roche partnership feels promising, perhaps because it’s addressing such a specific, well-defined problem with a thoughtful, targeted application of AI. For the estimated 590 million people worldwide living with diabetes, the shift from reactive to predictive management could be game-changing. It’s not about replacing human judgment, but enhancing it with timely, actionable insights.

    The app is currently only available in Switzerland, which seems a sensible approach: test, refine, and perfect before wider deployment. Healthcare professionals will be keeping tabs on this Swiss rollout to see if it delivers on its promise. If successful, this collaboration could serve as a blueprint for how tech giants and pharma companies might work together on other chronic conditions; imagine similar predictive approaches for heart disease, asthma, or Parkinson’s.

    For now, though, the focus is squarely on using AI to improve diabetes management and helping people sleep a little easier at night, quite literally in the case of that clever nocturnal prediction feature. And honestly, that’s a worthwhile enough goal on its own.

    Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo, taking place in Amsterdam, California, and London. The event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.
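The forecasting features described above can be illustrated with a toy model. This sketch is purely hypothetical: it is not Roche’s algorithm, and the function names, the 5-minute CGM sample interval, and the 70 mg/dL low threshold are all assumptions made for the example (a real predictor would use a trained model over far richer inputs):

```python
# Illustrative only: a naive linear-trend glucose forecaster.
# NOT Roche's model; names, the 5-minute sample interval, and the
# 70 mg/dL threshold are assumptions for this sketch.
def forecast_glucose(readings_mg_dl, horizon_steps):
    """Extrapolate the recent trend linearly.

    readings_mg_dl: recent CGM samples, oldest first (assumed every 5 min).
    horizon_steps:  number of future samples to predict.
    """
    if len(readings_mg_dl) < 2:
        raise ValueError("need at least two readings to estimate a trend")
    # Slope from the last two samples; a real model would fit many more.
    slope = readings_mg_dl[-1] - readings_mg_dl[-2]
    last = readings_mg_dl[-1]
    return [last + slope * (i + 1) for i in range(horizon_steps)]

def low_glucose_alert(readings_mg_dl, horizon_steps=6, threshold=70):
    """Flag a predicted low within the horizon (6 × 5 min = 30 minutes)."""
    return any(g < threshold for g in forecast_glucose(readings_mg_dl, horizon_steps))

# Falling 4 mg/dL per sample from 90: predicted to cross 70 within 30 min.
assert low_glucose_alert([98, 94, 90]) is True
# Flat at a healthy level: no alert.
assert low_glucose_alert([110, 110, 110]) is False
```

The point of the sketch is the product idea, not the math: a half-hour warning window only exists because the app predicts forward from the trend instead of reporting the current reading.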
    #diabetes #management #ibm #roche #use
    Diabetes management: IBM and Roche use AI to forecast blood sugar levels
    IBM and Roche are teaming up on an AI solution to a challenge faced by millions worldwide: the relentless daily grind of diabetes management. Their new brainchild, the Accu-Chek SmartGuide Predict app, provides AI-powered glucose forecasting capabilities to users. The app doesn’t just track where your glucose levels are—it tells you where they’re heading. Imagine having a weather forecast, but for your blood sugar. That’s essentially what IBM and Roche are creating.AI-powered diabetes managementThe app works alongside Roche’s continuous glucose monitoring sensor, crunching the numbers in real-time to offer predictive insights that can help users stay ahead of potentially dangerous blood sugar swings.What caught my eye were the three standout features that address very specific worries diabetics face. The “Glucose Predict” function visualises where your glucose might be heading over the next two hours—giving you that crucial window to make adjustments before things go south.For those who live with the anxiety of hypoglycaemia, the “Low Glucose Predict” feature acts like an early warning system, flagging potential lows up to half an hour before they might occur. That’s enough time to take corrective action.Perhaps most reassuring is the “Night Low Predict” feature, which estimates your risk of overnight hypoglycaemia—often the most frightening prospect for diabetes patients. Before tucking in for the night, the AI-powered diabetes management app gives you a heads-up about whether you might need that bedtime snack. This feature should bring peace of mind to countless households.“By harnessing the power of AI-enabled predictive technology, Roche’s Accu-Chek SmartGuide Predict App can help empower people with diabetes to take proactive measures to manage their disease,” says Moritz Hartmann, Head of Roche Information Solutions.How AI is speeding up diabetes researchIt’s not just patients benefiting from this partnership. 
The companies have developed a rather clever research tool using IBM’s watsonx AI platform that’s transforming how clinical study data gets analysed.

Anyone who’s been involved in clinical research knows the mind-numbing tedium of manual data analysis. IBM and Roche’s tool does the heavy lifting—digitising, translating, and categorising all that anonymised clinical data, then connecting the dots between glucose monitoring data and participants’ daily activities.

The result? Researchers can spot meaningful patterns and correlations in a fraction of the time it would normally take. This behind-the-scenes innovation might do more to advance diabetes care and management in the long run than the app itself.

What makes this collaboration particularly interesting is how it brings together two different worlds. You’ve got IBM’s computing prowess and AI know-how pairing up with Roche’s decades of healthcare and diabetes expertise.

“Our long-standing partnership with IBM underscores the potential of cross-industry innovation in addressing unmet healthcare needs and bringing significant advancements to patients faster,” says Hartmann. “Using cutting-edge technology such as AI and machine learning helps us to accelerate time to market and to improve therapy outcomes at the same time.”

Christian Keller, General Manager of IBM Switzerland, added: “The collaboration with Roche underlines the potential of AI when it’s implemented with a clear goal—assisting patients in managing their diabetes. With our technology and consulting expertise we can offer a trusted, customised, and secure technical environment that is essential to enable innovation in healthcare.”

What this means for the future of healthcare tech

Having covered healthcare tech for years, I’ve seen plenty of promising innovations fizzle out.
However, this IBM-Roche partnership feels promising—perhaps because it’s addressing such a specific, well-defined problem with a thoughtful, targeted application of AI.

For the estimated 590 million people worldwide living with diabetes (roughly 1 in 9 of the adult population), the shift from reactive to predictive management could be game-changing. It’s not about replacing human judgment, but enhancing it with timely, actionable insights.

The app is currently only available in Switzerland, which seems a sensible approach—test, refine, and perfect before wider deployment. Healthcare professionals will be keeping tabs on this Swiss rollout to see if it delivers on its promise.

If successful, this collaboration could serve as a blueprint for how tech giants and pharma companies might work together on other chronic conditions. Imagine similar predictive approaches for heart disease, asthma, or Parkinson’s.

For now, though, the focus is squarely on using AI to improve diabetes management and helping people sleep a little easier at night—quite literally, in the case of that clever nocturnal prediction feature. And honestly, that’s a worthwhile enough goal on its own.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.
    WWW.ARTIFICIALINTELLIGENCE-NEWS.COM
  • The Most-Cited Computer Scientist Has a Plan to Make AI More Trustworthy

    On June 3, Yoshua Bengio, the world’s most-cited computer scientist, announced the launch of LawZero, a nonprofit that aims to create “safe by design” AI by pursuing a fundamentally different approach from that of the major tech companies. Players like OpenAI and Google are investing heavily in AI agents—systems that not only answer queries and generate images, but can craft plans and take actions in the world. The goal of these companies is to create virtual employees that can do practically any job a human can, known in the tech industry as artificial general intelligence, or AGI. Executives like Google DeepMind’s CEO Demis Hassabis point to AGI’s potential to solve climate change or cure disease as a motivator for its development.

Bengio, however, says we don’t need agentic systems to reap AI’s rewards—it’s a false choice. He says there’s a chance such a system could escape human control, with potentially irreversible consequences. “If we get an AI that gives us the cure for cancer, but also maybe another version of that AI goes rogue and generates wave after wave of bio-weapons that kill billions of people, then I don’t think it’s worth it,” he says. In 2023, Bengio, along with others including OpenAI CEO Sam Altman, signed a statement declaring that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Now, through LawZero, Bengio aims to sidestep these existential perils by focusing on creating what he calls “Scientist AI”—a system trained to understand and make statistical predictions about the world, crucially, without the agency to take independent actions. As he puts it: we could use AI to advance scientific progress without rolling the dice on agentic AI systems.

Why Bengio Says We Need A New Approach To AI

The current approach to giving AI agency is “dangerous,” Bengio says.
While most software operates through rigid if-then rules—if the user clicks here, do this—today’s AI systems use deep learning. The technique, which Bengio helped pioneer, trains artificial networks modeled loosely on the brain to find patterns in vast amounts of data. But recognizing patterns is just the first step. To turn these systems into useful applications like chatbots, engineers employ a training process called reinforcement learning. The AI generates thousands of responses and receives feedback on each one: a virtual “carrot” for helpful answers and a virtual “stick” for responses that miss the mark. Through millions of these trial-and-feedback cycles, the system gradually learns to predict which responses are most likely to get a reward.

“It’s more like growing a plant or animal,” Bengio says. “You don’t fully control what the animal is going to do. You provide it with the right conditions, and it grows and it becomes smarter. You can try to steer it in various directions.”

The same basic approach is now being used to imbue AI with greater agency. Models are given challenges with verifiable answers—like math puzzles or coding problems—and are then rewarded for taking the series of actions that yields the solution. This approach has seen AI shatter previous benchmarks in programming and scientific reasoning. For example, at the beginning of 2024, the best AI model scored only 2% on a standardized test of sorts for AI, consisting of real-world software engineering problems; by December, models were scoring an impressive 71.7%.

But with AI’s greater problem-solving ability comes the emergence of new deceptive skills, Bengio says. The last few months have borne witness to AI systems learning to mislead, cheat, and try to evade shutdown—even resorting to blackmail. These have almost exclusively been in carefully contrived experiments that almost beg the AI to misbehave—for example, by asking it to pursue its goal at all costs.
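The carrot-and-stick cycle described above can be caricatured in a few lines. To be clear, this is a deliberately crude toy under invented assumptions, nothing like a real lab's training code: the "model" is just one preference weight per canned response, nudged up for rewarded answers and down for penalised ones.

```python
import random

def train(candidates, reward_fn, rounds=2000, lr=0.1, seed=0):
    """Toy reward loop: sample a response, score it, nudge its weight."""
    rng = random.Random(seed)
    weights = {c: 0.0 for c in candidates}
    for _ in range(rounds):
        # Gaussian noise gives exploration; higher-weighted responses
        # still win more often as their weights grow.
        choice = max(candidates, key=lambda c: weights[c] + rng.gauss(0, 1))
        # reward_fn returns +1 (carrot) or -1 (stick).
        weights[choice] += lr * reward_fn(choice)
    return max(weights, key=weights.get)

# Reward only the second response; over many cycles it dominates.
best = train(["evasive reply", "honest answer"],
             lambda c: 1 if c == "honest answer" else -1)
print(best)  # → honest answer
```

Even in this caricature, the point Bengio makes is visible: nothing in the loop specifies *how* the system earns its reward, only *that* it does, which is why unintended strategies can be reinforced alongside intended ones.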
Reports of such behavior in the real world, though, have begun to surface. Popular AI coding startup Replit’s agent ignored explicit instructions not to edit a system file that could break the company’s software, in what CEO Amjad Masad described as an “Oh f***” moment on the Cognitive Revolution podcast in May. The company’s engineers intervened, cutting the agent’s access by moving the file to a secure digital sandbox, only for the AI agent to attempt to “socially engineer” the user to regain access.

The quest to build human-level AI agents using techniques known to produce deceptive tendencies, Bengio says, is comparable to a car speeding down a narrow mountain road, with steep cliffs on either side and thick fog obscuring the path ahead. “We need to set up the car with headlights and put some guardrails on the road,” he says.

What is “Scientist AI”?

LawZero’s focus is on developing “Scientist AI” which, as Bengio describes it, would be fundamentally non-agentic, trustworthy, and focused on understanding and truthfulness, rather than pursuing its own goals or merely imitating human behavior. The aim is to create a powerful tool that, while lacking the autonomy other models have, is capable of generating hypotheses and accelerating scientific progress to “help us solve challenges of humanity,” Bengio says.

LawZero has already raised nearly $30 million from several philanthropic backers, including Schmidt Sciences and Open Philanthropy. “We want to raise more because we know that as we move forward, we’ll need significant compute,” Bengio says. But even ten times that figure would pale in comparison to the roughly $200 billion spent last year by tech giants on aggressively pursuing AI. Bengio’s hope is that Scientist AI could help ensure the safety of highly autonomous systems developed by other players. “We can use those non-agentic AIs as guardrails that just need to predict whether the action of an agentic AI is dangerous,” Bengio says.
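The guardrail proposal can be sketched in miniature. In this sketch the non-agentic checker is stubbed out as a keyword-based risk scorer (a stand-in assumption; a real guardrail would be a trained predictive model). The structural point is what matters: the guardrail only predicts risk and gates the agent's proposed action, and never takes actions itself.

```python
RISK_THRESHOLD = 0.5  # illustrative cutoff, not a real-world calibration

def risk_score(action: str) -> float:
    """Stub for a non-agentic model estimating how likely an action is harmful."""
    dangerous_markers = ("delete", "system file", "exfiltrate")
    return 0.9 if any(m in action.lower() for m in dangerous_markers) else 0.1

def guarded_execute(action: str, execute):
    """Run the agent's proposed action only if the guardrail predicts it is safe."""
    if risk_score(action) >= RISK_THRESHOLD:
        return f"BLOCKED: {action}"
    return execute(action)

# A risky proposal is blocked; a benign one goes through.
print(guarded_execute("edit the system file", lambda a: "done"))
print(guarded_execute("summarise the report", lambda a: "done"))
```

Notice that `guarded_execute` treats the agent's action as data to be judged: the checker's only output is a prediction, which is exactly the non-agentic role Bengio envisions for Scientist AI.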
Technical interventions will only ever be one part of the solution, he adds, noting the need for regulations to ensure that safe practices are adopted.

LawZero, named after science fiction author Isaac Asimov’s zeroth law of robotics—“a robot may not harm humanity, or, by inaction, allow humanity to come to harm”—is not the first nonprofit founded to chart a safer path for AI development. OpenAI was founded as a nonprofit in 2015 with the goal of “ensuring AGI benefits all of humanity,” and was intended to serve as a counterbalance to industry players guided by profit motives. Since opening a for-profit arm in 2019, the organization has become one of the most valuable private companies in the world, and has faced criticism, including from former staffers who argue it has drifted from its founding ideals. “Well, the good news is we have the hindsight of maybe what not to do,” Bengio says, adding that he wants to avoid profit incentives and “bring governments into the governance of LawZero.”

“I think everyone should ask themselves, ‘What can I do to make sure my children will have a future,’” Bengio says. In March, he stepped down as scientific director of Mila, the academic lab he co-founded in the early nineties, in an effort to reorient his work towards tackling AI risk more directly. “Because I’m a researcher, my answer is, ‘Okay, I’m going to work on this scientific problem where maybe I can make a difference,’ but other people may have different answers.”
    TIME.COM