• European Robot Makers Adopt NVIDIA Isaac, Omniverse and Halos to Develop Safe, Physical AI-Driven Robot Fleets

    In the face of growing labor shortages and the need for greater sustainability, European manufacturers are racing to reinvent their processes to become software-defined and AI-driven.
    To achieve this, robot developers and industrial digitalization solution providers are working with NVIDIA to build safe, AI-driven robots and industrial technologies to drive modern, sustainable manufacturing.
    At NVIDIA GTC Paris at VivaTech, Europe’s leading robotics companies including Agile Robots, Extend Robotics, Humanoid, idealworks, Neura Robotics, SICK, Universal Robots, Vorwerk and Wandelbots are showcasing their latest AI-driven robots and automation breakthroughs, all accelerated by NVIDIA technologies. In addition, NVIDIA is releasing new models and tools to support the entire robotics ecosystem.
    NVIDIA Releases Tools for Accelerating Robot Development and Safety
    NVIDIA Isaac GR00T N1.5, an open foundation model for humanoid robot reasoning and skills, is now available for download on Hugging Face. This update enhances the model’s adaptability and ability to follow instructions, significantly improving its performance in material handling and manufacturing tasks. The NVIDIA Isaac Sim 5.0 and Isaac Lab 2.2 open-source robotics simulation and learning frameworks, optimized for NVIDIA RTX PRO 6000 workstations, are available on GitHub for developer preview.
    In addition, NVIDIA announced that NVIDIA Halos — a full-stack, comprehensive safety system that unifies hardware architecture, AI models, software, tools and services — now expands to robotics, promoting safety across the entire development lifecycle of AI-driven robots.
    The NVIDIA Halos AI Systems Inspection Lab has earned accreditation from the ANSI National Accreditation Board (ANAB) to perform inspections across functional safety for robotics, in addition to automotive vehicles.
    “NVIDIA’s latest evaluation with ANAB verifies the demonstration of competence and compliance with internationally recognized standards, helping ensure that developers of autonomous machines — from automotive to robotics — can meet the highest benchmarks for functional safety,” said R. Douglas Leonard Jr., executive director of ANAB.
    Arcbest, Advantech, Bluewhite, Boston Dynamics, FORT, Inxpect, KION, NexCobot — a NEXCOM company, and Synapticon are among the first robotics companies to join the Halos Inspection Lab, ensuring their products meet NVIDIA safety and cybersecurity requirements.
    To support robotics leaders in strengthening safety across the entire development lifecycle of AI-driven robots, Halos will now provide:

    Safety extension packages for the NVIDIA IGX platform, enabling manufacturers to easily program safety functions into their robots, supported by TÜV Rheinland’s inspection of NVIDIA IGX.
    A robotic safety platform, which includes IGX and NVIDIA Holoscan Sensor Bridge for a unified approach to designing sensor-to-compute architecture with built-in AI safety.
    An outside-in safety AI inspector — an AI-powered agent for monitoring robot operations, helping improve worker safety.

    Europe’s Robotics Ecosystem Builds on NVIDIA’s Three Computers
    Europe’s leading robotics developers and solution providers are integrating the NVIDIA Isaac robotics platform to train, simulate and deploy robots across different embodiments.
    Agile Robots is post-training the GR00T N1 model in Isaac Lab so that its dual-arm manipulator robots, which run on NVIDIA Jetson hardware, can execute a variety of tasks in industrial environments.
    Meanwhile, idealworks has adopted the Mega NVIDIA Omniverse Blueprint for robotic fleet simulation to extend the blueprint’s capabilities to humanoids. Building on the VDA 5050 framework, idealworks contributes to the development of guidance that supports tasks uniquely enabled by humanoid robots, such as picking, moving and placing objects.
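VDA 5050, the framework mentioned above, is an open standard that defines JSON messages exchanged over MQTT between a fleet master controller and AGVs or AMRs. As a rough illustration of the order-message shape the standard defines (the manufacturer, serial number, and node names below are illustrative placeholders, not idealworks' actual implementation):

```python
import json
from datetime import datetime, timezone

def make_order(order_id: str, node_ids: list[str]) -> dict:
    """Compose a minimal VDA 5050-style order message.

    Field names follow the VDA 5050 order schema (headerId, orderId,
    nodes, edges, sequenceId); the values are illustrative placeholders.
    """
    # Nodes take the even sequenceIds; edges between them take the odd ones.
    nodes = [
        {"nodeId": nid, "sequenceId": i * 2, "released": True}
        for i, nid in enumerate(node_ids)
    ]
    edges = [
        {
            "edgeId": f"{a}->{b}",
            "sequenceId": i * 2 + 1,
            "released": True,
            "startNodeId": a,
            "endNodeId": b,
        }
        for i, (a, b) in enumerate(zip(node_ids, node_ids[1:]))
    ]
    return {
        "headerId": 1,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "version": "2.0.0",
        "manufacturer": "ExampleCo",  # placeholder, not a real vendor
        "serialNumber": "AMR-001",    # placeholder robot identity
        "orderId": order_id,
        "orderUpdateId": 0,
        "nodes": nodes,
        "edges": edges,
    }

order = make_order("order-42", ["pick", "transit", "place"])
payload = json.dumps(order)  # what a master controller would publish over MQTT
```

Because every vendor's robots parse the same schema, a mixed fleet can be orchestrated by one controller; extending such guidance to humanoid-specific tasks like pick-and-place is the gap idealworks is helping to fill.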
    Neura Robotics is integrating NVIDIA Isaac to further enhance its robot development workflows. The company is using GR00T-Mimic to post-train the Isaac GR00T N1 robot foundation model for its service robot MiPA. Neura is also collaborating with SAP and NVIDIA to integrate SAP’s Joule agents with its robots, using the Mega NVIDIA Omniverse Blueprint to simulate and refine robot behavior in complex, realistic operational scenarios before deployment.
    Vorwerk is using NVIDIA technologies to power its AI-driven collaborative robots. The company is post-training GR00T N1 models in Isaac Lab with its custom synthetic data pipeline, which is built on Isaac GR00T-Mimic and powered by the NVIDIA Omniverse platform. The enhanced models are then deployed on NVIDIA Jetson AGX, Jetson Orin or Jetson Thor modules for advanced, real-time home robotics.
    Humanoid is using NVIDIA’s full robotics stack, including Isaac Sim and Isaac Lab, to cut its prototyping time down by six weeks. The company is training its vision-language-action models on NVIDIA DGX B200 systems to boost the cognitive abilities of its robots, allowing them to operate autonomously in complex environments using Jetson Thor onboard computing.
    Universal Robots is introducing UR15, its fastest collaborative robot yet, to the European market. Using UR’s AI Accelerator — developed on NVIDIA Isaac’s CUDA-accelerated libraries and AI models, as well as NVIDIA Jetson AGX Orin — manufacturers can build AI applications to embed intelligence into the company’s new cobots.
    Wandelbots is showcasing its NOVA Operating System, now integrated with Omniverse, to simulate, validate and optimize robotic behaviors virtually before deploying them to physical robots. Wandelbots also announced a collaboration with EY and EDAG to offer manufacturers a scalable automation platform on Omniverse that speeds up the transition from proof of concept to full-scale deployment.
    Extend Robotics is using the Isaac GR00T platform to enable customers to control and train robots for industrial tasks like visual inspection and handling radioactive materials. The company’s Advanced Mechanics Assistance System lets users collect demonstration data and generate diverse synthetic datasets with NVIDIA GR00T-Mimic and GR00T-Gen to train the GR00T N1 foundation model.
    SICK is enhancing its autonomous perception solutions by integrating new certified sensor models — including 2D and 3D lidars, safety scanners and cameras — into NVIDIA Isaac Sim. This enables engineers to virtually design, test and validate machines using SICK’s sensing models within Omniverse, supporting processes spanning product development to large-scale robotic fleet management.
    Toyota Material Handling Europe is working with SoftServe to simulate its autonomous mobile robots working alongside human workers, using the Mega NVIDIA Omniverse Blueprint. Toyota Material Handling Europe is testing and simulating a multitude of traffic scenarios — allowing the company to refine its AI algorithms before real-world deployment.
    NVIDIA’s partner ecosystem is enabling European industries to tap into intelligent, AI-powered robotics. By harnessing advanced simulation, digital twins and generative AI, manufacturers are rapidly developing and deploying safe, adaptable robot fleets that address labor shortages, boost sustainability and drive operational efficiency.
    Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech, and explore GTC Paris sessions.
    See notice regarding software product information.
    BLOGS.NVIDIA.COM
  • Aspora gets $50M from Sequoia to build remittance and banking solutions for Indian diaspora

    India has been one of the top recipients of remittances in the world for more than a decade. Inward remittances jumped from $55.6 billion in 2010-11 to $118.7 billion in 2023-24, according to data from the country’s central bank. The bank projects that figure will reach $160 billion in 2029.
    This means there is an increasing market for digitalized banking experiences for non-resident Indians (NRIs), ranging from remittances to investing in different assets back home.
    Aspora (formerly Vance) is trying to build a verticalized financial experience for the Indian diaspora by keeping convenience at the center. While a lot of financial products are on its future roadmap, the company currently focuses largely on remittances.
    “While multiple financial products for non-resident Indians exist, they don’t know about them because there is no digital journey for them. They possibly use the same banking app as residents, which makes it harder for them to discover products catered towards them,” said Parth Garg, Aspora’s founder and CEO.
    In the last year, the company has grown its remittance volume 6x — from $400 million to $2 billion processed annually.
    With this growth, the company has attracted a lot of investor interest. It raised $35 million in Series A funding last December — a round that was previously unreported — led by Sequoia with participation from Greylock, Y Combinator, Hummingbird Ventures, and Global Founders Capital. The round pegged the company’s valuation at $150 million. In the four months following, the company tripled its transaction volume, prompting investors to put in more money.
    The company announced today it has raised $50 million in Series B funding, co-led by Sequoia and Greylock, with Hummingbird, Quantum Light Ventures, and Y Combinator also contributing to the round. The startup said this round values the company at $500 million. The startup has raised over $99 million in funding to date.


    After pivoting from being Pipe.com for India, the company started by offering remittances for NRIs in the U.K. in 2023 and has expanded to other markets, including Europe and the United Arab Emirates. It charges a flat fee for money transfers and offers a competitive exchange rate. It also now allows customers to invest in mutual funds in India. The startup markets its exchange rate as the “Google rate,” since customers often search Google for currency conversion rates, even though those displayed rates may not reflect live transfer rates.
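The flat-fee model described above differs from spread-based pricing, where the provider's margin is hidden in a worse exchange rate. A minimal sketch of the arithmetic, using made-up figures rather than Aspora's actual fee or rates:

```python
def recipient_amount(send_gbp: float, flat_fee_gbp: float, gbp_to_inr: float) -> float:
    """Amount received under flat-fee pricing: deduct the fee once,
    then convert the full remainder at the quoted (mid-market) rate.
    All figures are illustrative, not Aspora's actual pricing."""
    return round((send_gbp - flat_fee_gbp) * gbp_to_inr, 2)

def recipient_amount_spread(send_gbp: float, gbp_to_inr: float, spread: float) -> float:
    """Amount received under spread pricing: no visible fee, but the
    whole amount converts at a rate marked down by `spread` (e.g. 0.02)."""
    return round(send_gbp * gbp_to_inr * (1 - spread), 2)

# Hypothetical: send £1,000 at a 106.50 GBP→INR mid-market rate.
flat = recipient_amount(1000, 2, 106.50)          # £2 flat fee
spread = recipient_amount_spread(1000, 106.50, 0.02)  # 2% hidden spread
```

Under these assumed numbers the flat-fee transfer delivers ₹106,287 versus ₹104,370 with a 2% spread, which is why remittance startups compete on advertising the mid-market rate.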
    The startup is also set to launch in the U.S., one of the biggest remittance corridors to India, next month. Plus, it plans to open up shop in Canada, Singapore, and Australia by the fourth quarter of this year.
    Garg, who grew up in the UAE, said that remittances are just the start, and the company wants to build out more financial tools for NRIs.
    “We want to use remittances as a wedge and build all the financial solutions that the diaspora needs, including banking, investing, insurance, lending in the home country, and products that help them take care of their parents,” he told TechCrunch.
    He added that a large chunk of money that NRIs send home is for wealth creation rather than family sustenance. The startup said that 80% of its users are sending money to their own accounts back home.
    In the next few months, the company is launching a few products to offer more services. This month, it plans to launch a bill payment platform to let users pay for services like rent and utilities. Next month, it plans to launch fixed deposit accounts for non-resident Indians that allow them to park money in foreign currency. By the end of the year, it plans to launch a full-stack banking account for NRIs, which typically takes days to open through traditional channels. While these accounts can help the diaspora maintain their tax status in India, many people use a family member’s account instead because of the cumbersome process, and Aspora wants to simplify this.
    Apart from banking, the company also plans to launch a product that would help NRIs take care of their parents back home by offering regular medical checkups, emergency care coverage, and concierge services for other assistance.
    Besides global competitors like Remitly and Wise, the company also has India-based rivals like Abound, which was spun off from Times Internet.
    Sequoia’s Luciana Lixandru is confident that Aspora’s execution speed and verticalized solution will give it an edge.
    “Speed of execution, for me, is one of the main indicators in the early days of the future success of a company,” she told TechCrunch over a call. “Aspora moves fast, but it is also very deliberate in building corridor by corridor, which is very important in financial services.”
    TECHCRUNCH.COM
    Aspora gets $50M from Sequoia to build remittance and banking solutions for Indian diaspora
    India has been one of the top recipients of remittances in the world for more than a decade. Inward remittances jumped from $55.6 billion in 2010-11 to $118.7 billion in 2023-24, according to data from the country’s central bank. The bank projects that figure will reach $160 billion in 2029. This means there is a growing market for digitalized banking experiences for non-resident Indians (NRIs), ranging from remittances to investing in assets back home.
    Aspora (formerly Vance) is trying to build a verticalized financial experience for the Indian diaspora with convenience at the center. While many financial products are on its roadmap, the company currently focuses largely on remittances. “While multiple financial products for non-resident Indians exist, they don’t know about them because there is no digital journey for them. They possibly use the same banking app as residents, which makes it harder for them to discover products catered towards them,” Garg said.
    In the last year, the company has grown its remittance volume 6x — from $400 million to $2 billion in yearly volume processed. With this growth, the company has attracted significant investor interest. It raised $35 million in Series A funding last December — which was previously unreported — led by Sequoia with participation from Greylock, Y Combinator, Hummingbird Ventures, and Global Founders Capital. The round pegged the company’s valuation at $150 million. In the four months that followed, the company tripled its transaction volume, prompting investors to put in more money. The company announced today it has raised $50 million in Series B funding, co-led by Sequoia and Greylock, with Hummingbird, Quantum Light Ventures, and Y Combinator also contributing to the round. The startup said this round values the company at $500 million; it has raised over $99 million in funding to date.
    After pivoting from being Pipe.com for India, the company started by offering remittances for NRIs in the U.K. in 2023 and has since expanded to other markets, including Europe and the United Arab Emirates. It charges a flat fee for money transfers and offers a competitive rate. It now also allows customers to invest in mutual funds in India. The startup markets its exchange rates as the “Google rate,” as customers often search for currency conversion rates, even though those may not reflect live rates. The startup is also set to launch in the U.S., one of the biggest remittance corridors to India, next month, and plans to open up shop in Canada, Singapore, and Australia by the fourth quarter of this year.
    Garg, who grew up in the UAE, said that remittances are just the start, and the company wants to build out more financial tools for NRIs. “We want to use remittances as a wedge and build all the financial solutions that the diaspora needs, including banking, investing, insurance, lending in the home country, and products that help them take care of their parents,” he told TechCrunch. He added that a large chunk of the money NRIs send home is for wealth creation rather than family sustenance; the startup said 80% of its users send money to their own accounts back home.
    In the next few months, the company is launching several new products. This month, it plans to launch a bill payment platform to let users pay for services like rent and utilities. Next month, it plans to launch fixed deposit accounts for non-resident Indians that allow them to park money in foreign currency. By the end of the year, it plans to launch a full-stack banking account for NRIs, which typically takes days for users to open. While these accounts can help the diaspora maintain their tax status in India, many people use a family member’s account because of the cumbersome process, and Aspora wants to simplify this. Apart from banking, the company also plans to launch a product to help NRIs take care of their parents back home, offering regular medical checkups, emergency care coverage, and concierge services for other assistance.
    Besides global competitors like Remitly and Wise, the company also has India-based rivals like Abound, which was spun off from Times Internet. Sequoia’s Luciana Lixandru is confident that Aspora’s execution speed and verticalized solution will give it an edge. “Speed of execution, for me, is one of the main indicators in the early days of the future success of a company,” she told TechCrunch over a call. “Aspora moves fast, but it is also very deliberate in building corridor by corridor, which is very important in financial services.”
  • Tavernspite housing, Pembrokeshire

    The commission, valued at up to £46,000 (including VAT), will see the appointed architect work closely with ateb’s internal teams to deliver a 30-unit housing development, supporting the group’s mission to create better living solutions for the people and communities of West Wales.
    The two-year contract, running from July 2025 to July 2027, will require the architect to oversee all stages of design, from feasibility through to tender, in line with Welsh Government technical scrutiny and local authority planning requirements.
    The project is part of ateb’s ongoing commitment to respond to local housing need, regenerate communities, and provide a variety of affordable tenures, including social rent, rent to buy, and shared ownership.

    According to the brief: ‘The ateb Group (where ateb means answer or solution in Welsh) is a unique set of companies that collectively has the shared purpose of ‘Creating better living solutions for the people and communities of West Wales’.
    ‘ateb currently has around 3,100 homes, predominantly in Pembrokeshire, that we rent on either a social or intermediate rental basis. ateb works closely with its Local Authority and other partners to develop around 150 new homes every year, to meet affordable housing need through a range of tenures such as for rent, rent to buy or shared ownership.’
    Tavernspite is a small village of around 350 inhabitants located 9.7km southeast of Narberth in Pembrokeshire. Ateb, based in nearby Haverfordwest, is a not-for-profit housing association managing around 3,100 homes across the county.
    The group’s social purpose is supported by its subsidiaries: Mill Bay Homes, which develops homes for sale to reinvest profits into affordable housing, and West Wales Care and Repair, which supports older and vulnerable residents to remain independent in their homes.
    Bids will be assessed 60 per cent on quality and 40 per cent on price, with a strong emphasis on experience in the housing association sector and collaborative working with internal client teams.

    Applicants must hold professional indemnity insurance of at least £2 million and be prepared to attend in-person evaluation presentations as part of the assessment process.

    Competition details
    Project title Provision of Architect Services for Tavernspite Development
    Client
    Contract value Tbc
    First round deadline Midday, 3 July 2025
    Restrictions The contract particularly welcomes submissions from small and medium-sized enterprises (SMEs) and voluntary, community, and social enterprises (VCSEs)
    More information https://www.find-tender.service.gov.uk/Notice/031815-2025
    WWW.ARCHITECTSJOURNAL.CO.UK
  • Aga Khan Award for Architecture 2025 announces 19 shortlisted projects from 15 countries

    19 shortlisted projects for the 2025 Award cycle were revealed by the Aga Khan Award for Architecture. A portion of the $1 million prize, one of the largest in architecture, will be awarded to the winning projects. Out of the 369 projects nominated for the 16th Award Cycle, an independent Master Jury chose the 19 shortlisted projects from 15 countries.
    The nine members of the Master Jury for the 16th Award cycle are Azra Akšamija, Noura Al-Sayeh Holtrop, Lucia Allais, David Basulto, Yvonne Farrell, Kabage Karanja, Yacouba Konaté, Hassan Radoine, and Mun Summ Wong.
    His Late Highness Prince Karim Aga Khan IV created the Aga Khan Award for Architecture in 1977 to recognize and promote architectural ideas that effectively meet the needs and goals of communities in which Muslims have a significant presence. Nearly 10,000 construction projects have been documented since the award’s inception 48 years ago, and 128 projects have received it. The AKAA’s selection process places a strong emphasis on architecture that stimulates and responds to people’s cultural ambitions in addition to meeting their physical, social, and economic needs.
    The Aga Khan Award for Architecture is governed by a Steering Committee chaired by His Highness the Aga Khan. The other members of the Steering Committee are Meisa Batayneh, Principal Architect, Founder, maisam architects and engineers, Amman, Jordan; Souleymane Bachir Diagne, Professor of Philosophy and Francophone Studies, Columbia University, New York, United States of America; Lesley Lokko, Founder & Director, African Futures Institute, Accra, Ghana; Gülru Necipoğlu, Director and Professor, Aga Khan Program for Islamic Architecture, Harvard University, Cambridge, United States of America; Hashim Sarkis, Founder & Principal, Hashim Sarkis Studios, and Dean, School of Architecture and Planning, Massachusetts Institute of Technology, Cambridge, United States of America; and Sarah M. Whiting, Partner, WW Architecture, and Dean and Josep Lluís Sert Professor of Architecture, Graduate School of Design, Harvard University, Cambridge, United States of America. Farrokh Derakhshani is the Director of the Award.
    The Aga Khan Award for Architecture recognizes examples of outstanding architecture in the areas of contemporary design, social housing, community development and improvement, historic preservation, reuse and area conservation, landscape design, and environmental improvement. Special consideration is given to building schemes that creatively use local resources and appropriate technologies, as well as to projects likely to inspire similar initiatives elsewhere. In addition to honoring architects, the Award also recognizes municipalities, builders, clients, master craftspeople, and engineers who have contributed significantly to a project.
    To be eligible for the 2025 Award cycle, projects had to be completed between January 1, 2018, and December 31, 2023, and to have been in use for at least one year. Projects commissioned by His Highness the Aga Khan or any of the Aga Khan Development Network (AKDN) institutions are not eligible.
    The 19 shortlisted projects competing for the 2025 Award cycle, with short project descriptions:
    Bangladesh: Khudi Bari, in various locations, by Marina Tabassum Architects. Khudi Bari, which can be readily disassembled and reassembled to suit the needs of its users, is a replicable solution for displaced communities affected by geographic and climatic change. (Image © Aga Khan Trust for Culture / City Syntax)
    China: West Wusutu Village Community Centre, Hohhot, Inner Mongolia, by Zhang Pengju. In addition to meeting the religious needs of the local Hui Muslims, the centre offers social and cultural spaces for residents and artists. Constructed from recycled bricks, it features multipurpose indoor and outdoor areas that promote communal harmony. (Image © Aga Khan Trust for Culture / Dou Yujun)
    Egypt: Revitalisation of Historic Esna, by Takween Integrated Community Development. Through physical interventions, socioeconomic projects, and creative urban planning, the project tackles the challenges of cultural tourism in Upper Egypt and turns the once-forgotten area around the Temple of Khnum into a thriving historic city. (Image © Aga Khan Trust for Culture / Ahmed Salem)
    Indonesia: The Arc at Green School, in Bali, by IBUKU / Elora Hardy. Created after 15 years of bamboo experimentation at Green School Bali, The Arc is a new community wellness facility built on the foundations of a temporary gym, combining high-precision engineering with regional craftsmanship. (Image © Aga Khan Trust for Culture / Andreas Perbowo Widityawan)
    Indonesia: Islamic Centre Nurul Yaqin Mosque, in Palu, Central Sulawesi, by Dave Orlando and Fandy Gunawan. Built on the site of a previous mosque damaged by the 2018 tsunami, the new Islamic Centre provides a place for worship and assembly. Open to the countryside, it is surrounded by a shallow reflecting pool that can be drained to make room for more visitors. (Image © Aga Khan Trust for Culture / Andreas Perbowo Widityawan)
    Indonesia: Microlibraries, in various cities, by SHAU / Daliana Suryawinata, Florian Heinzelmann. Project initiator Florian Heinzelmann works with stakeholders at all levels to provide high-quality public spaces in Indonesian parks and kampungs. Six microlibraries have been built so far, with 100 planned by 2045. (Image: Microlibrary Warak Kayu © Aga Khan Trust for Culture / Andreas Perbowo Widityawan)
    Iran: Majara Complex and Community Redevelopment, on Hormuz Island, by ZAV Architects / Mohamadreza Ghodousi. Well known for its vibrant domes, the complex offers eco-friendly lodging for visitors to Hormuz’s distinctive landscape. Constructed by highly trained local workers, it also provides new amenities for islanders who come to socialize, pray, or use the library. (Image © Aga Khan Trust for Culture / Deed Studio)
    Iran: Jahad Metro Plaza, in Tehran, by KA Architecture Studio. Built to replace dilapidated older structures, the plaza has turned the site into a beloved pedestrian-friendly landmark. Its arched vaults, clad in locally manufactured brick, vary in height to let air and light into the space they shelter. (Image © Aga Khan Trust for Culture / Deed Studio)
    Israel: Khan Jaljulia Restoration, in Jaljulia, by Elias Khuri. A cost-effective intervention set amid the remnants of a 14th-century khan, the project converts the abandoned historic site into a bustling public space for social gatherings, helping locals rediscover their cultural heritage. (Image © Aga Khan Trust for Culture / Mikaela Burstow)
    Kenya: Campus Startup Lions, in Turkana, by Kéré Architecture. An educational and entrepreneurial centre offering a venue for community involvement, business incubation, and technology-driven education. The design incorporates solar energy, rainwater harvesting, and tall ventilation towers that echo the nearby termite mounds, and was built from local volcanic stone. (Image © Aga Khan Trust for Culture / Christopher Wilton-Steer)
    Morocco: Revitalisation of Lalla Yeddouna Square, in the medina of Fez, by Mossessian Architecture and Yassir Khalil Studio. The project aims to improve pedestrian circulation and reestablish a connection to the waterfront. Existing buildings were retained and new spaces created for the benefit of residents, craftspeople, and visitors from around the world. (Image © Aga Khan Trust for Culture / Amine Houari)
    Pakistan: Vision Pakistan, in Islamabad, by DB Studios / Mohammad Saifullah Siddiqui. A tailoring training centre run by Vision Pakistan, a nonprofit dedicated to empowering underprivileged adolescents. Situated in a dense neighbourhood, the multi-storey building features striking jaalis influenced by Arab and Pakistani crafts, echoing the city’s 1960s architecture. (Image © Aga Khan Trust for Culture / Usman Saqib Zuberi)
    Pakistan: Denso Hall Rahguzar Project, in Karachi, by Heritage Foundation Pakistan / Yasmeen Lari. A heritage-led eco-urban enclave built with low-carbon materials in response to the city’s severe climate, which is prone to heat waves and floods. Handcrafted terracotta cobbles absorb rainwater to irrigate the newly planted “forests” and to cool and purify the air. (Image © Aga Khan Trust for Culture / Usman Saqib Zuberi)
    Palestine: Wonder Cabinet, in Bethlehem, by AAU Anastas. A multifunctional, nonprofit exhibition and production venue. The three-storey concrete building was constructed with the help of regional contractors and artisans and is quickly emerging as a major centre for learning, design, craft, and innovation. (Image © Aga Khan Trust for Culture / Mikaela Burstow)
    Qatar: The Ned Hotel, in Doha, by David Chipperfield Architects. The building, which once housed the Ministry of Interior, is a work of Middle Eastern brutalism meticulously transformed into a 90-room boutique hotel, promoting architectural revitalization in the region. (Image © Aga Khan Trust for Culture / Cemal Emden)
    Saudi Arabia: Shamalat Cultural Centre, in Riyadh, by Syn Architects / Sara Alissa, Nojoud Alsudairi. Created from an old mud house on the outskirts of Diriyah renovated by artist Maha Malluh, the centre seeks to integrate historic places into daily life, offering a sensitive perspective on heritage conservation in the area by contrasting the old and the contemporary. (Image © Aga Khan Trust for Culture / Hassan Al Shatti)
    Senegal: Rehabilitation and Extension of Dakar Railway Station, in Dakar, by Ga2D. To accommodate passengers of a new express train line, Ga2D extended and renovated the station, deliberately contrasting the old and new buildings. The forecourt was reopened to pedestrians, with vehicular traffic limited to the rear of the property. (Image © Aga Khan Trust for Culture / Sylvain Cherkaoui)
    Türkiye: Rami Library, in Istanbul, by Han Tümertekin Design & Consultancy. The largest library in Istanbul occupies the former Rami Barracks, a vast single-storey building constructed in the eighteenth century. A minimal-intervention approach accommodates the new library functions while preserving the structure’s original spatial character. (Image © Aga Khan Trust for Culture / Cemal Emden)
    United Arab Emirates: Morocco Pavilion Expo Dubai 2020, by Oualalou + Choi. Designed to outlast Expo 2020 and be transformed into a cultural centre, the pavilion pioneers large-scale rammed-earth construction. Its passive cooling strategies, which minimize the need for mechanical air conditioning, earned it LEED Gold accreditation. (Image © Aga Khan Trust for Culture / Deed Studio)
    At each project location, independent experts such as architects, conservation specialists, planners, and structural engineers have carried out rigorous on-site evaluations of the nominated projects. This summer, the Master Jury convenes again to review the evaluations and select the final Award winners.
    Top image: The Arc at Green School. Image © Aga Khan Trust for Culture / Andreas Perbowo Widityawan.
    > via Aga Khan Award for Architecture
The freshly planted "forests" are irrigated by the handcrafted terracotta cobbles, which absorb rainfall and cool and purify the air.Wonder Cabinet. Image © Aga Khan Trust for Culture / Mikaela BurstowPalestineWonder Cabinet, in Bethlehem by AAU AnastasThe architects at AAU Anastas established Wonder Cabinet, a multifunctional, nonprofit exhibition and production venue in Bethlehem. The three-story concrete building was constructed with the help of regional contractors and artisans, and it is quickly emerging as a major center for learning, design, craft, and innovation.The Ned. Image © Aga Khan Trust for Culture / Cemal EmdenQatarThe Ned Hotel, in Doha by David Chipperfield ArchitectsThe Ministry of Interior was housed in the Ned Hotel in Doha, which was designed by David Chipperfield Architects. Its Middle Eastern brutalist building was meticulously transformed into a 90-room boutique hotel, thereby promoting architectural revitalization in the region.Shamalat Cultural Centre. Image © Aga Khan Trust for Culture / Hassan Al ShattiSaudi ArabiaShamalat Cultural Centre, in Riyadh, by Syn Architects / Sara Alissa, Nojoud AlsudairiOn the outskirts of Diriyah, the Shamalat Cultural Centre in Riyadh was created by Syn Architects/Sara Alissa, Nojoud Alsudairi. It was created from an old mud home that artist Maha Malluh had renovated. The center, which aims to incorporate historic places into daily life, provides a sensitive viewpoint on heritage conservation in the area by contrasting the old and the contemporary.Rehabilitation and Extension of Dakar Railway Station. Image © Aga Khan Trust for Culture / Sylvain CherkaouiSenegalRehabilitation and Extension of Dakar Railway Station, in Dakar by Ga2DIn order to accommodate the passengers of a new express train line, Ga2D extended and renovated Dakar train Station, which purposefully contrasts the old and modern buildings. 
The forecourt was once again open to pedestrian traffic after vehicular traffic was limited to the rear of the property.Rami Library. Image © Aga Khan Trust for Culture / Cemal EmdenTürkiyeRami Library, by Han Tümertekin Design & ConsultancyThe largest library in Istanbul is the Rami Library, designed by Han Tümertekin Design & Consultancy. It occupied the former Rami Barracks, a sizable, single-story building with enormous volumes that was constructed in the eighteenth century. In order to accommodate new library operations while maintaining the structure's original spatial features, a minimal intervention method was used.Morocco Pavilion Expo Dubai 2020. Image © Aga Khan Trust for Culture / Deed StudioUnited Arab EmiratesMorocco Pavilion Expo Dubai 2020, by Oualalou + ChoiOualalou + Choi's Morocco Pavilion Expo Dubai 2020 is intended to last beyond Expo 2020 and be transformed into a cultural center. The pavilion is a trailblazer in the development of large-scale rammed earth building techniques. Its use of passive cooling techniques, which minimize the need for mechanical air conditioning, earned it the gold LEED accreditation.At each project location, independent professionals such as architects, conservation specialists, planners, and structural engineers have conducted thorough evaluations of the nominated projects. This summer, the Master Jury convenes once more to analyze the on-site evaluations and choose the ultimate Award winners.The top image in the article: The Arc at Green School. Image © Aga Khan Trust for Culture / Andreas Perbowo Widityawan.> via Aga Khan Award for Architecture #aga #khan #award #architecture #announces
    WORLDARCHITECTURE.ORG
    Aga Khan Award for Architecture 2025 announces 19 shortlisted projects from 15 countries
    19 shortlisted projects for the 2025 Award cycle were revealed by the Aga Khan Award for Architecture (AKAA). The winning projects will share the $1 million prize, one of the largest in architecture. An independent Master Jury chose the 19 shortlisted projects, from 15 countries, out of the 369 projects nominated for the 16th Award Cycle (2023-2025).

    The nine members of the Master Jury for the 16th Award cycle are Azra Akšamija, Noura Al-Sayeh Holtrop, Lucia Allais, David Basulto, Yvonne Farrell, Kabage Karanja, Yacouba Konaté, Hassan Radoine, and Mun Summ Wong.

    His Late Highness Prince Karim Aga Khan IV created the Aga Khan Award for Architecture in 1977 to recognize and promote architectural ideas that effectively meet the needs and aspirations of communities in which Muslims have a significant presence. Nearly 10,000 building projects have been documented since the award's inception 48 years ago, and 128 projects have received it. The AKAA's selection process emphasizes architecture that stimulates and responds to people's cultural ambitions in addition to meeting their physical, social, and economic needs.

    The Aga Khan Award for Architecture is governed by a Steering Committee chaired by His Highness the Aga Khan.
The other members of the Steering Committee are Meisa Batayneh, Principal Architect, Founder, maisam architects and engineers, Amman, Jordan; Souleymane Bachir Diagne, Professor of Philosophy and Francophone Studies, Columbia University, New York, United States of America; Lesley Lokko, Founder & Director, African Futures Institute, Accra, Ghana; Gülru Necipoğlu, Director and Professor, Aga Khan Program for Islamic Architecture, Harvard University, Cambridge, United States of America; Hashim Sarkis, Founder & Principal, Hashim Sarkis Studios (HSS); Dean, School of Architecture and Planning, Massachusetts Institute of Technology, Cambridge, United States of America; and Sarah M. Whiting, Partner, WW Architecture; Dean and Josep Lluís Sert Professor of Architecture, Graduate School of Design, Harvard University, Cambridge, United States of America. Farrokh Derakhshani is the Director of the Award.

The Aga Khan Award for Architecture recognizes examples of outstanding architecture in the areas of modern design, social housing, community development and enhancement, historic preservation, reuse and area conservation, landscape design, and environmental enhancement. Special consideration is given to building plans that creatively use local resources and relevant technologies, as well as to initiatives that could inspire similar efforts elsewhere. In addition to honoring architects, the Award also recognizes municipalities, builders, clients, master craftspeople, and engineers who have contributed significantly to a project.

To be eligible for the 2025 Award cycle, projects had to be completed between January 1, 2018, and December 31, 2023, and to have been in use for at least one year. Projects commissioned by His Highness the Aga Khan or by any of the Aga Khan Development Network (AKDN) institutions are not eligible.

The 19 shortlisted projects competing in the 2025 Award cycle, with short project descriptions:

Khudi Bari. Image © Aga Khan Trust for Culture / City Syntax (F. M. Faruque Abdullah Shawon, H. M. Fozla Rabby Apurbo)
Bangladesh: Khudi Bari, in various locations, by Marina Tabassum Architects
Khudi Bari, which can be readily disassembled and reassembled to suit users' needs, is a replicable solution for displaced communities affected by geographic and climatic change.

West Wusutu Village Community Centre. Image © Aga Khan Trust for Culture / Dou Yujun (photographer)
China: West Wusutu Village Community Centre, Hohhot, Inner Mongolia, by Zhang Pengju
In addition to meeting the religious needs of the local Hui Muslims, the centre offers social and cultural spaces for residents and artists. Constructed from recycled bricks, it features multipurpose indoor and outdoor areas that promote communal harmony.

Revitalisation of Historic Esna. Image © Aga Khan Trust for Culture / Ahmed Salem (photographer)
Egypt: Revitalisation of Historic Esna, by Takween Integrated Community Development
Through physical interventions, socioeconomic projects, and creative urban planning, the project tackles the challenges of cultural tourism in Upper Egypt and turns the once-neglected area around the Temple of Khnum into a thriving historic city.

The Arc at Green School. Image © Aga Khan Trust for Culture / Andreas Perbowo Widityawan (photographer)
Indonesia: The Arc at Green School, in Bali, by IBUKU / Elora Hardy
Created after 15 years of bamboo experimentation at the Green School Bali, The Arc is a new community wellness facility built on the foundations of a temporary gym, combining high-precision engineering with regional craftsmanship.

Islamic Centre Nurul Yaqin Mosque. Image © Aga Khan Trust for Culture / Andreas Perbowo Widityawan (photographer)
Indonesia: Islamic Centre Nurul Yaqin Mosque, in Palu, Central Sulawesi, by Dave Orlando and Fandy Gunawan
Built on the site of a mosque damaged by the 2018 tsunami, the new Islamic Centre provides a place for worship and assembly. Surrounded by a shallow reflecting pool that can be drained to make room for more visitors, it opens onto the countryside.

Microlibrary Warak Kayu. Image © Aga Khan Trust for Culture / Andreas Perbowo Widityawan (photographer)
Indonesia: Microlibraries in various cities, by SHAU / Daliana Suryawinata, Florian Heinzelmann
Project initiator Florian Heinzelmann works with stakeholders at all levels to provide high-quality public spaces in Indonesian parks and kampungs. Six microlibraries have been built so far, with 100 planned by 2045.

Majara Residence. Image © Aga Khan Trust for Culture / Deed Studio (photographer)
Iran: Majara Complex and Community Redevelopment, on Hormuz Island, by ZAV Architects / Mohamadreza Ghodousi
Known for its vibrant domes, the complex offers eco-friendly lodging for visitors to Hormuz's distinctive landscape. Constructed by highly trained local laborers, it also provides new amenities for islanders who come to socialize, pray, or use the library.

Jahad Metro Plaza. Image © Aga Khan Trust for Culture / Deed Studio (photographer)
Iran: Jahad Metro Plaza, in Tehran, by KA Architecture Studio
Built to replace dilapidated older structures, the plaza has turned the site into a beloved pedestrian-friendly landmark. Its arched vaults, clad in locally manufactured brick, vary in height to let air and light into the space they shelter.

Khan Jaljulia Restoration. Image © Aga Khan Trust for Culture / Mikaela Burstow (photographer)
Israel: Khan Jaljulia Restoration, in Jaljulia, by Elias Khuri
A cost-effective intervention set amid the remnants of a 14th-century khan, the project converts an abandoned historical site into a bustling public space for social gatherings, helping residents rediscover their cultural heritage.

Campus Startup Lions. Image © Aga Khan Trust for Culture / Christopher Wilton-Steer (photographer)
Kenya: Campus Startup Lions, in Turkana, by Kéré Architecture
An educational and entrepreneurial centre offering a venue for community involvement, business incubation, and technology-driven education. Constructed of local volcanic stone, the design incorporates solar energy, rainwater harvesting, and tall ventilation towers that echo the nearby termite mounds.

Lalla Yeddouna Square. Image © Aga Khan Trust for Culture / Amine Houari (photographer)
Morocco: Revitalisation of Lalla Yeddouna Square, in the medina of Fez, by Mossessian Architecture and Yassir Khalil Studio
The project aims to improve pedestrian circulation and re-establish a connection to the waterfront. Existing buildings were preserved and new spaces created for residents, craftspeople, and visitors from around the world.

Vision Pakistan. Image © Aga Khan Trust for Culture / Usman Saqib Zuberi (photographer)
Pakistan: Vision Pakistan, in Islamabad, by DB Studios / Mohammad Saifullah Siddiqui
A tailoring training centre for Vision Pakistan, a nonprofit dedicated to empowering underprivileged adolescents. Situated in a dense neighborhood, the multi-storey building features striking jaalis influenced by Arab and Pakistani crafts, echoing the city's 1960s architecture.

Denso Hall Rahguzar Project. Image © Aga Khan Trust for Culture / Usman Saqib Zuberi (photographer)
Pakistan: Denso Hall Rahguzar Project, in Karachi, by Heritage Foundation of Pakistan / Yasmeen Lari
A heritage-led eco-urban enclave built with low-carbon materials in response to the city's severe climate, which is prone to heat waves and floods. Handcrafted terracotta cobbles absorb rainfall, cool and purify the air, and irrigate the newly planted "forests."

Wonder Cabinet. Image © Aga Khan Trust for Culture / Mikaela Burstow (photographer)
Palestine: Wonder Cabinet, in Bethlehem, by AAU Anastas
A multifunctional, nonprofit exhibition and production venue established by the architects of AAU Anastas. The three-storey concrete building was constructed with regional contractors and artisans and is quickly emerging as a major centre for learning, design, craft, and innovation.

The Ned. Image © Aga Khan Trust for Culture / Cemal Emden (photographer)
Qatar: The Ned Hotel, in Doha, by David Chipperfield Architects
The building, which formerly housed the Ministry of Interior, is a work of Middle Eastern brutalism meticulously transformed into a 90-room boutique hotel, promoting architectural revitalization in the region.

Shamalat Cultural Centre. Image © Aga Khan Trust for Culture / Hassan Al Shatti (photographer)
Saudi Arabia: Shamalat Cultural Centre, in Riyadh, by Syn Architects / Sara Alissa, Nojoud Alsudairi
Created on the outskirts of Diriyah from an old mud house renovated by artist Maha Malluh, the centre seeks to weave historic places into daily life, offering a sensitive perspective on heritage conservation in the area by contrasting the old and the contemporary.

Rehabilitation and Extension of Dakar Railway Station. Image © Aga Khan Trust for Culture / Sylvain Cherkaoui (photographer)
Senegal: Rehabilitation and Extension of Dakar Railway Station, in Dakar, by Ga2D
Extended and renovated to serve passengers of a new express train line, the station deliberately contrasts the old and modern structures. With vehicular traffic limited to the rear of the property, the forecourt is once again open to pedestrians.

Rami Library. Image © Aga Khan Trust for Culture / Cemal Emden (photographer)
Türkiye: Rami Library, in Istanbul, by Han Tümertekin Design & Consultancy
The largest library in Istanbul, the Rami Library occupies the former Rami Barracks, a vast single-storey eighteenth-century building with enormous volumes. A minimal-intervention approach accommodates new library functions while preserving the structure's original spatial features.

Morocco Pavilion Expo Dubai 2020. Image © Aga Khan Trust for Culture / Deed Studio (photographer)
United Arab Emirates: Morocco Pavilion Expo Dubai 2020, by Oualalou + Choi
Designed to outlast Expo 2020 and be converted into a cultural centre, the pavilion is a trailblazer in large-scale rammed-earth construction. Its passive cooling techniques, which minimize the need for mechanical air conditioning, earned it LEED Gold certification.

At each project location, independent professionals such as architects, conservation specialists, planners, and structural engineers have conducted thorough evaluations of the nominated projects. This summer, the Master Jury convenes again to review the on-site evaluations and select the final Award winners.

Top image: The Arc at Green School. Image © Aga Khan Trust for Culture / Andreas Perbowo Widityawan (photographer).

> via Aga Khan Award for Architecture
  • The journal "Réseaux" examines the powers of algorithms in security matters

    The journal "Réseaux" examines the powers of algorithms in security matters. In its spring issue, the journal dissects this costly technology which, despite its promises, may well not be as effective as expected.

    The review of reviews. For several years, so-called "algorithmic" video surveillance, boosted by artificial intelligence (AI), has spread across France. Initially confined to statistical use, it was extended to security purposes during the Paris 2024 Olympic Games, to detect crowd movements and forgotten luggage. The legislature has decided to renew the experiment until 2027, despite an evaluation report pointing to only limited effectiveness.

    The journal Réseaux takes up this new technology in its May-June issue on "the digital policies of urban security." The article "Who makes the images legible?", by Clément Le Ludec and Maxime Cornet, based on close observation of two algorithmic video surveillance systems, calls the technology's promise of automation into question. The sociologists describe the human labor required to clean and annotate the images used to train the AI to recognize a given situation. This human intervention shapes the "definition of the offense" that the system will look for.

    A perverse effect. This is particularly true in one of the cases studied, an algorithm for detecting shoplifting in supermarkets. Annotators must identify gestures they judge suspicious as portending a theft, a task that leads to a "simplification of reality" that undermines the system's effectiveness. Human intervention even extends beyond training: events are sometimes characterized in real time by the annotators, who are based in Madagascar. The system is then no longer truly automatic.

    Moreover, since using the tool has not reduced the number of thefts, operators fall back on conventional video surveillance. Another system, aimed at traffic offenses, tends to focus on areas already under surveillance, limiting what the tool adds. And to recoup the cost of the system, applications other than those originally planned are even being sought for it.
    WWW.LEMONDE.FR
  • New Zealand’s Email Security Requirements for Government Organizations: What You Need to Know

    The Secure Government Email Common Implementation Framework
    New Zealand’s government is introducing a comprehensive email security framework designed to protect official communications from phishing and domain spoofing. This new framework, which will be mandatory for all government agencies by October 2025, establishes clear technical standards to enhance email security and retire the outdated SEEMail service. 
    Key Takeaways

    All NZ government agencies must comply with new email security requirements by October 2025.
    The new framework strengthens trust and security in government communications by preventing spoofing and phishing.
    The framework mandates TLS 1.2+, SPF, DKIM, DMARC with p=reject, MTA-STS, and DLP controls.
    EasyDMARC simplifies compliance with our guided setup, monitoring, and automated reporting.

    What is the Secure Government Email Common Implementation Framework?
    The Secure Government Email Common Implementation Framework is a new government-led initiative in New Zealand designed to standardize email security across all government agencies. Its main goals are to secure external email communication, reduce domain spoofing in phishing attacks, and replace the legacy SEEMail service.
    Why is New Zealand Implementing New Government Email Security Standards?
    The framework was developed by New Zealand’s Department of Internal Affairs as part of its role in managing ICT Common Capabilities. It leverages modern email security controls via the Domain Name System (DNS) to enable the retirement of the legacy SEEMail service and provide:

    Encryption for transmission security
    Digital signing for message integrity
    Basic non-repudiation
    Domain spoofing protection

    These improvements apply to all emails, not just those routed through SEEMail, offering broader protection across agency communications.
    What Email Security Technologies Are Required by the New NZ SGE Framework?
    The SGE Framework outlines the following key technologies that agencies must implement:

    TLS 1.2 or higher with implicit TLS enforced
    TLS-RPT
    SPF
    DKIM
    DMARC with reporting
    MTA-STS
    Data Loss Prevention (DLP) controls

    These technologies work together to ensure encrypted email transmission, validate sender identity, prevent unauthorized use of domains, and reduce the risk of sensitive data leaks.

    Get in touch

    When Do NZ Government Agencies Need to Comply with this Framework?
    All New Zealand government agencies are expected to fully implement the Secure Government Email (SGE) Common Implementation Framework by October 2025. Agencies should begin their planning and deployment now to ensure full compliance by the deadline.
    The All of Government Secure Email Common Implementation Framework v1.0
    What are the Mandated Requirements for Domains?
    Below are the exact requirements for all email-enabled domains under the new framework.
    TLS: Minimum TLS 1.2. TLS 1.1, 1.0, SSL, or clear-text not permitted.
    TLS-RPT: All email-sending domains must have TLS reporting enabled.
    SPF: Must exist and end with -all.
    DKIM: All outbound email from every sending service must be DKIM-signed at the final hop.
    DMARC: Policy of p=reject on all email-enabled domains. adkim=s is recommended when not bulk-sending.
    MTA-STS: Enabled and set to enforce.
    Implicit TLS: Must be configured and enforced for every connection.
    Data Loss Prevention: Enforce in line with the New Zealand Information Security Manual (NZISM) and Protective Security Requirements (PSR).
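    The per-domain requirements above lend themselves to an automated check. Below is a minimal sketch, in Python, of validating already-fetched DNS record strings against three of the mandated controls; the record values and domain names are illustrative, and in practice you would retrieve the live records with a DNS library rather than hard-code them.

```python
# Sketch: check DNS record strings against a few SGE Framework requirements.
# Record values are illustrative placeholders, not real agency records.

def check_sge_records(spf: str, dmarc: str, mta_sts_mode: str) -> dict:
    """Return a pass/fail map for three of the mandated controls."""
    # Parse "tag=value" pairs out of the DMARC record.
    dmarc_tags = dict(
        tag.strip().split("=", 1)
        for tag in dmarc.split(";")
        if "=" in tag
    )
    return {
        "spf_hard_fail": spf.strip().endswith("-all"),    # SPF must end with -all
        "dmarc_reject": dmarc_tags.get("p") == "reject",  # p=reject on all domains
        "mta_sts_enforced": mta_sts_mode == "enforce",    # MTA-STS set to enforce
    }

results = check_sge_records(
    spf="v=spf1 include:_spf.example.govt.nz -all",
    dmarc="v=DMARC1; p=reject; adkim=s; rua=mailto:reports@example.govt.nz",
    mta_sts_mode="enforce",
)
print(results)
```

    A failing entry in the returned map points directly at the control that needs remediation.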
    Compliance Monitoring and Reporting
    The All of Government Service Delivery (AoGSD) team will be monitoring compliance with the framework. Monitoring will initially cover SPF, DMARC, and MTA-STS settings and will be expanded to include DKIM. Changes to these settings will be monitored, enabling reporting on email security compliance across all government agencies. Ongoing monitoring will highlight changes to domains, ensure new domains are set up with security in place, and monitor the implementation of future email security technologies.
    Should compliance changes occur, such as an agency’s SPF record being changed from -all to ~all, this will be captured so that the AoGSD Security Team can investigate. They will then communicate directly with the agency to determine if an issue exists or if an error has occurred, reviewing each case individually.
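    The -all to ~all example can be sketched as a simple drift check. This is a hypothetical illustration of the kind of comparison the monitoring team would run, with made-up record values:

```python
# Sketch: flag a domain whose SPF record weakened from a hard fail (-all)
# to a soft fail (~all). The record strings are hypothetical.

def spf_qualifier(record: str) -> str:
    """Return the qualifier on the SPF 'all' mechanism: '-', '~', '?', or '+'."""
    last = record.strip().split()[-1]
    if last.endswith("all"):
        return last[:-3] or "+"   # a bare "all" means "+all"
    return ""

previous = "v=spf1 include:_spf.example.govt.nz -all"
current = "v=spf1 include:_spf.example.govt.nz ~all"

# Hard fail before, anything weaker now -> worth investigating.
weakened = spf_qualifier(previous) == "-" and spf_qualifier(current) != "-"
print("investigate" if weakened else "ok")
```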
    Deployment Checklist for NZ Government Compliance

    Enforce TLS 1.2 minimum, implicit TLS, MTA-STS & TLS-RPT
    SPF with -all
    DKIM on all outbound email
    DMARC p=reject 
    adkim=s where suitable
    For non-email/parked domains: SPF -all, empty DKIM, DMARC reject strict
    Compliance dashboard
    Inbound DMARC evaluation enforced
    DLP aligned with NZISM

    Start a Free Trial

    How EasyDMARC Can Help Government Agencies Comply
    EasyDMARC provides a comprehensive email security solution that simplifies the deployment and ongoing management of DNS-based email security protocols like SPF, DKIM, and DMARC with reporting. Our platform offers automated checks, real-time monitoring, and a guided setup to help government organizations quickly reach compliance.
    1. TLS-RPT / MTA-STS audit
    EasyDMARC lets you enable the Managed MTA-STS and TLS-RPT option with a single click. We provide the required DNS records and continuously monitor them for issues, delivering reports on TLS negotiation problems. This helps agencies ensure secure email transmission and quickly detect delivery or encryption failures.

    Note: MTA-STS and TLS Reporting can be deployed by adding just three CNAME records provided by EasyDMARC. It’s recommended to start in “testing” mode, evaluate the TLS-RPT reports, and then gradually switch your MTA-STS policy to “enforce”. The process is simple and takes just a few clicks.

    EasyDMARC parses incoming TLS reports into a centralized dashboard, giving you clear visibility into delivery and encryption issues across all sending sources.
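    Behind the DNS records sits the MTA-STS policy file itself, served over HTTPS at mta-sts.&lt;your-domain&gt;/.well-known/mta-sts.txt. The sketch below builds that file’s contents for the testing-then-enforce progression described above; the MX hostname is an assumed placeholder.

```python
# Sketch: build the contents of an MTA-STS policy file (RFC 8461 format).
# Start with mode "testing", then re-publish with "enforce" once TLS-RPT
# reports look clean. The MX host below is illustrative.

def mta_sts_policy(mode: str, mx_hosts: list[str], max_age: int = 86400) -> str:
    assert mode in ("testing", "enforce", "none")
    lines = ["version: STSv1", f"mode: {mode}"]
    lines += [f"mx: {mx}" for mx in mx_hosts]   # one line per permitted MX
    lines.append(f"max_age: {max_age}")         # policy cache lifetime, seconds
    return "\n".join(lines) + "\n"

print(mta_sts_policy("testing", ["mail.example.govt.nz"]))
```

    When you flip the mode to “enforce”, remember to also bump the id in the _mta-sts DNS TXT record so receivers refetch the policy.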
    2. SPF with “-all”
    In the EasyDMARC platform, you can run the SPF Record Generator to create a compliant record. Publish your v=spf1 record with “-all” to enforce a hard fail for unauthorized senders and prevent spoofed emails from passing SPF checks. This strengthens your domain’s protection against impersonation.

    Note: It is highly recommended to start adjusting your SPF record only after you begin receiving DMARC reports and identifying your legitimate email sources. As we’ll explain in more detail below, both SPF and DKIM should be adjusted after you gain visibility through reports.
    Making changes without proper visibility can lead to false positives, misconfigurations, and potential loss of legitimate emails. That’s why the first step should always be setting DMARC to p=none, receiving reports, analyzing them, and then gradually fixing any SPF or DKIM issues.
    3. DKIM on all outbound email
    DKIM must be configured for all email sources sending emails on behalf of your domain. This is critical, as DKIM plays a bigger role than SPF when it comes to building domain reputation, surviving auto-forwarding, mailing lists, and other edge cases.
    As mentioned above, DMARC reports provide visibility into your email sources, allowing you to implement DKIM accordingly. If you’re using third-party services like Google Workspace, Microsoft 365, or Mimecast, you’ll need to retrieve the public DKIM key from your provider’s admin interface.
    EasyDMARC maintains a backend directory of over 1,400 email sources. We also give you detailed guidance on how to configure SPF and DKIM correctly for major ESPs. 
    Note: At the end of this article, you’ll find configuration links for well-known ESPs like Google Workspace, Microsoft 365, Zoho Mail, Amazon SES, and SendGrid – helping you avoid common misconfigurations and get aligned with SGE requirements.
    If you’re using a dedicated MTA (e.g., Postfix), DKIM must be implemented manually. EasyDMARC’s DKIM Record Generator lets you generate both public and private keys for your server. The private key is stored on your MTA, while the public key must be published in your DNS.
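    The DNS side of that manual setup follows a fixed shape. As a sketch, given a selector and an already-generated base64 public key (a shortened dummy value here, not a real key), the TXT record name and value to publish look like this:

```python
# Sketch: build the DNS TXT record for a manually configured DKIM key.
# The public key below is a truncated dummy, not usable key material.

def dkim_txt_record(selector: str, domain: str, public_key_b64: str) -> tuple[str, str]:
    name = f"{selector}._domainkey.{domain}"      # where receivers look it up
    value = f"v=DKIM1; k=rsa; p={public_key_b64}" # published public key
    return name, value

name, value = dkim_txt_record("s1", "example.govt.nz", "MIIBIjANBgkq...")
print(name)
print(value)
```

    The selector (“s1” here) is an assumption; use whatever selector your signing software is configured with, since it must match the s= tag in outgoing DKIM signatures.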

    4. DMARC p=reject rollout
    As mentioned in previous points, DMARC reporting is the first and most important step on your DMARC enforcement journey. Always start with a p=none policy and configure RUA reports to be sent to EasyDMARC. Use the report insights to identify and fix SPF and DKIM alignment issues, then gradually move to p=quarantine and finally p=reject once all legitimate email sources have been authenticated. 
    This phased approach ensures full protection against domain spoofing without risking legitimate email delivery.
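    The phased rollout above can be sketched as a one-step-at-a-time progression: advance the p= policy only when the aggregate reports show every legitimate source authenticating. A minimal illustration:

```python
# Sketch: the DMARC enforcement ladder, advanced one phase at a time.

PHASES = ["none", "quarantine", "reject"]

def next_policy(current: str, all_sources_authenticated: bool) -> str:
    """Move p= up one phase only once reports show no legitimate failures."""
    i = PHASES.index(current)
    if all_sources_authenticated and i < len(PHASES) - 1:
        return PHASES[i + 1]
    return current  # stay put until the reports are clean

print(next_policy("none", True))         # ready: move to quarantine
print(next_policy("none", False))        # not ready: stay at none
print(next_policy("quarantine", True))   # ready: move to reject
```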

    5. adkim Strict Alignment Check
    This strict alignment check is not always applicable, especially if you’re using third-party bulk ESPs, such as SendGrid, that require you to set DKIM at the subdomain level. You can set adkim=s in your DMARC TXT record, or simply enable strict mode in EasyDMARC’s Managed DMARC settings. This ensures that only emails with a DKIM signature that exactly matches your domain pass alignment, adding an extra layer of protection against domain spoofing. Only do this if you are not a bulk sender.

    6. Securing Non-Email Enabled Domains
    The purpose of deploying email security to non-email-enabled (parked) domains is to prevent messages from being spoofed from those domains. This requirement remains even if the root-level domain has sp=reject set within its DMARC record.
    Under this new framework, you must bulk import and mark parked domains as “Parked.” Crucially, this requires adjusting SPF settings to an empty record, setting DMARC to p=reject, and ensuring an empty DKIM record is in place:

    SPF record: “v=spf1 -all”
    Wildcard DKIM record with empty public key
    DMARC record: “v=DMARC1;p=reject;adkim=s;aspf=s;rua=mailto:…”
    EasyDMARC allows you to add and label parked domains for free. This is important because it helps you monitor any activity from these domains and ensure they remain protected with a strict DMARC policy of p=reject.
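    Since the three parked-domain records are fully determined by the domain name, generating them is mechanical. A sketch, where the reporting address is a placeholder you would replace with your own RUA mailbox:

```python
# Sketch: generate the three lock-down records for a parked domain.
# The domain and rua address are placeholders.

def parked_domain_records(domain: str, rua: str) -> dict[str, str]:
    return {
        domain: "v=spf1 -all",                    # SPF: no permitted senders
        f"*._domainkey.{domain}": "v=DKIM1; p=",  # wildcard DKIM, empty key
        f"_dmarc.{domain}": (
            f"v=DMARC1; p=reject; adkim=s; aspf=s; rua=mailto:{rua}"
        ),
    }

for name, value in parked_domain_records("parked.example.govt.nz",
                                         "reports@example.govt.nz").items():
    print(name, "TXT", value)
```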
    7. Compliance Dashboard
    Use EasyDMARC’s Domain Scanner to assess the security posture of each domain with a clear compliance score and risk level. The dashboard highlights configuration gaps and guides remediation steps, helping government agencies stay on track toward full compliance with the SGE Framework.

    8. Inbound DMARC Evaluation Enforced
    You don’t need to apply any changes if you’re using Google Workspace, Microsoft 365, or other major mailbox providers. Most of them already enforce DMARC evaluation on incoming emails.
    However, some legacy Microsoft 365 setups may still quarantine emails that fail DMARC checks, even when the sending domain has a p=reject policy, instead of rejecting them. This behavior can be adjusted directly from your Microsoft Defender portal. Read more about this in our step-by-step guide on how to set up SPF, DKIM, and DMARC from Microsoft Defender.
    If you’re using a third-party mail provider that doesn’t enforce having a DMARC policy for incoming emails, which is rare, you’ll need to contact their support to request a configuration change.
    9. Data Loss Prevention Aligned with NZISM
    The New Zealand Information Security Manual (NZISM) is the New Zealand Government’s manual on information assurance and information systems security. It includes guidance on data loss prevention (DLP), which must be followed to be aligned with the SGE Framework.
    Need Help Setting up SPF and DKIM for your Email Provider?
    Setting up SPF and DKIM for different ESPs often requires specific configurations. Some providers require you to publish SPF and DKIM on a subdomain, while others only require DKIM, or have different formatting rules. We’ve simplified all these steps to help you avoid misconfigurations that could delay your DMARC enforcement, or worse, block legitimate emails from reaching your recipients.
    Below you’ll find comprehensive setup guides for Google Workspace, Microsoft 365, Zoho Mail, Amazon SES, and SendGrid. You can also explore our full blog section that covers setup instructions for many other well-known ESPs.
    Remember, all this information is reflected in your DMARC aggregate reports. These reports give you live visibility into your outgoing email ecosystem, helping you analyze and fix any issues specific to a given provider.
    Here are our step-by-step guides for the most common platforms:

    Google Workspace

    Microsoft 365

    These guides will help ensure your DNS records are configured correctly as part of the Secure Government Email (SGE) Framework rollout.
    Meet New Government Email Security Standards With EasyDMARC
    New Zealand’s SGE Framework sets a clear path for government agencies to enhance their email security by October 2025. With EasyDMARC, you can meet these technical requirements efficiently and with confidence. From protocol setup to continuous monitoring and compliance tracking, EasyDMARC streamlines the entire process, ensuring strong protection against spoofing, phishing, and data loss while simplifying your transition from SEEMail.
    #new #zealands #email #security #requirements
    New Zealand’s Email Security Requirements for Government Organizations: What You Need to Know
    The Secure Government EmailCommon Implementation Framework New Zealand’s government is introducing a comprehensive email security framework designed to protect official communications from phishing and domain spoofing. This new framework, which will be mandatory for all government agencies by October 2025, establishes clear technical standards to enhance email security and retire the outdated SEEMail service.  Key Takeaways All NZ government agencies must comply with new email security requirements by October 2025. The new framework strengthens trust and security in government communications by preventing spoofing and phishing. The framework mandates TLS 1.2+, SPF, DKIM, DMARC with p=reject, MTA-STS, and DLP controls. EasyDMARC simplifies compliance with our guided setup, monitoring, and automated reporting. Start a Free Trial What is the Secure Government Email Common Implementation Framework? The Secure Government EmailCommon Implementation Framework is a new government-led initiative in New Zealand designed to standardize email security across all government agencies. Its main goal is to secure external email communication, reduce domain spoofing in phishing attacks, and replace the legacy SEEMail service. Why is New Zealand Implementing New Government Email Security Standards? The framework was developed by New Zealand’s Department of Internal Affairsas part of its role in managing ICT Common Capabilities. It leverages modern email security controls via the Domain Name Systemto enable the retirement of the legacy SEEMail service and provide: Encryption for transmission security Digital signing for message integrity Basic non-repudiationDomain spoofing protection These improvements apply to all emails, not just those routed through SEEMail, offering broader protection across agency communications. What Email Security Technologies Are Required by the New NZ SGE Framework? 
The SGE Framework outlines the following key technologies that agencies must implement: TLS 1.2 or higher with implicit TLS enforced TLS-RPTSPFDKIMDMARCwith reporting MTA-STSData Loss Prevention controls These technologies work together to ensure encrypted email transmission, validate sender identity, prevent unauthorized use of domains, and reduce the risk of sensitive data leaks. Get in touch When Do NZ Government Agencies Need to Comply with this Framework? All New Zealand government agencies are expected to fully implement the Secure Government EmailCommon Implementation Framework by October 2025. Agencies should begin their planning and deployment now to ensure full compliance by the deadline. The All of Government Secure Email Common Implementation Framework v1.0 What are the Mandated Requirements for Domains? Below are the exact requirements for all email-enabled domains under the new framework. ControlExact RequirementTLSMinimum TLS 1.2. TLS 1.1, 1.0, SSL, or clear-text not permitted.TLS-RPTAll email-sending domains must have TLS reporting enabled.SPFMust exist and end with -all.DKIMAll outbound email from every sending service must be DKIM-signed at the final hop.DMARCPolicy of p=reject on all email-enabled domains. adkim=s is recommended when not bulk-sending.MTA-STSEnabled and set to enforce.Implicit TLSMust be configured and enforced for every connection.Data Loss PreventionEnforce in line with the New Zealand Information Security Manualand Protective Security Requirements. Compliance Monitoring and Reporting The All of Government Service Deliveryteam will be monitoring compliance with the framework. Monitoring will initially cover SPF, DMARC, and MTA-STS settings and will be expanded to include DKIM. Changes to these settings will be monitored, enabling reporting on email security compliance across all government agencies. 
Ongoing monitoring will highlight changes to domains, ensure new domains are set up with security in place, and monitor the implementation of future email security technologies.  Should compliance changes occur, such as an agency’s SPF record being changed from -all to ~all, this will be captured so that the AoGSD Security Team can investigate. They will then communicate directly with the agency to determine if an issue exists or if an error has occurred, reviewing each case individually. Deployment Checklist for NZ Government Compliance Enforce TLS 1.2 minimum, implicit TLS, MTA-STS & TLS-RPT SPF with -all DKIM on all outbound email DMARC p=reject  adkim=s where suitable For non-email/parked domains: SPF -all, empty DKIM, DMARC reject strict Compliance dashboard Inbound DMARC evaluation enforced DLP aligned with NZISM Start a Free Trial How EasyDMARC Can Help Government Agencies Comply EasyDMARC provides a comprehensive email security solution that simplifies the deployment and ongoing management of DNS-based email security protocols like SPF, DKIM, and DMARC with reporting. Our platform offers automated checks, real-time monitoring, and a guided setup to help government organizations quickly reach compliance. 1. TLS-RPT / MTA-STS audit EasyDMARC enables you to enable the Managed MTA-STS and TLS-RPT option with a single click. We provide the required DNS records and continuously monitor them for issues, delivering reports on TLS negotiation problems. This helps agencies ensure secure email transmission and quickly detect delivery or encryption failures. Note: In this screenshot, you can see how to deploy MTA-STS and TLS Reporting by adding just three CNAME records provided by EasyDMARC. It’s recommended to start in “testing” mode, evaluate the TLS-RPT reports, and then gradually switch your MTA-STS policy to “enforce”. The process is simple and takes just a few clicks. 
As shown above, EasyDMARC parses incoming TLS reports into a centralized dashboard, giving you clear visibility into delivery and encryption issues across all sending sources. 2. SPF with “-all”In the EasyDARC platform, you can run the SPF Record Generator to create a compliant record. Publish your v=spf1 record with “-all” to enforce a hard fail for unauthorized senders and prevent spoofed emails from passing SPF checks. This strengthens your domain’s protection against impersonation. Note: It is highly recommended to start adjusting your SPF record only after you begin receiving DMARC reports and identifying your legitimate email sources. As we’ll explain in more detail below, both SPF and DKIM should be adjusted after you gain visibility through reports. Making changes without proper visibility can lead to false positives, misconfigurations, and potential loss of legitimate emails. That’s why the first step should always be setting DMARC to p=none, receiving reports, analyzing them, and then gradually fixing any SPF or DKIM issues. 3. DKIM on all outbound email DKIM must be configured for all email sources sending emails on behalf of your domain. This is critical, as DKIM plays a bigger role than SPF when it comes to building domain reputation, surviving auto-forwarding, mailing lists, and other edge cases. As mentioned above, DMARC reports provide visibility into your email sources, allowing you to implement DKIM accordingly. If you’re using third-party services like Google Workspace, Microsoft 365, or Mimecast, you’ll need to retrieve the public DKIM key from your provider’s admin interface. EasyDMARC maintains a backend directory of over 1,400 email sources. We also give you detailed guidance on how to configure SPF and DKIM correctly for major ESPs.  
Note: At the end of this article, you’ll find configuration links for well-known ESPs like Google Workspace, Microsoft 365, Zoho Mail, Amazon SES, and SendGrid – helping you avoid common misconfigurations and get aligned with SGE requirements. If you’re using a dedicated MTA, DKIM must be implemented manually. EasyDMARC’s DKIM Record Generator lets you generate both public and private keys for your server. The private key is stored on your MTA, while the public key must be published in your DNS. 4. DMARC p=reject rollout As mentioned in previous points, DMARC reporting is the first and most important step on your DMARC enforcement journey. Always start with a p=none policy and configure RUA reports to be sent to EasyDMARC. Use the report insights to identify and fix SPF and DKIM alignment issues, then gradually move to p=quarantine and finally p=reject once all legitimate email sources have been authenticated.  This phased approach ensures full protection against domain spoofing without risking legitimate email delivery. 5. adkim Strict Alignment Check This strict alignment check is not always applicable, especially if you’re using third-party bulk ESPs, such as Sendgrid, that require you to set DKIM on a subdomain level. You can set adkim=s in your DMARC TXT record, or simply enable strict mode in EasyDMARC’s Managed DMARC settings. This ensures that only emails with a DKIM signature that exactly match your domain pass alignment, adding an extra layer of protection against domain spoofing. But only do this if you are NOT a bulk sender. 6. Securing Non-Email Enabled Domains The purpose of deploying email security to non-email-enabled domains, or parked domains, is to prevent messages being spoofed from that domain. This requirement remains even if the root-level domain has SP=reject set within its DMARC record. 
Under this new framework, you must bulk import and mark parked domains as “Parked.” Crucially, this requires adjusting SPF settings to an empty record, setting DMARC to p=reject, and ensuring an empty DKIM record is in place: • SPF record: “v=spf1 -all”. • Wildcard DKIM record with empty public key.• DMARC record: “v=DMARC1;p=reject;adkim=s;aspf=s;rua=mailto:…”. EasyDMARC allows you to add and label parked domains for free. This is important because it helps you monitor any activity from these domains and ensure they remain protected with a strict DMARC policy of p=reject. 7. Compliance Dashboard Use EasyDMARC’s Domain Scanner to assess the security posture of each domain with a clear compliance score and risk level. The dashboard highlights configuration gaps and guides remediation steps, helping government agencies stay on track toward full compliance with the SGE Framework. 8. Inbound DMARC Evaluation Enforced You don’t need to apply any changes if you’re using Google Workspace, Microsoft 365, or other major mailbox providers. Most of them already enforce DMARC evaluation on incoming emails. However, some legacy Microsoft 365 setups may still quarantine emails that fail DMARC checks, even when the sending domain has a p=reject policy, instead of rejecting them. This behavior can be adjusted directly from your Microsoft Defender portal. about this in our step-by-step guide on how to set up SPF, DKIM, and DMARC from Microsoft Defender. If you’re using a third-party mail provider that doesn’t enforce having a DMARC policy for incoming emails, which is rare, you’ll need to contact their support to request a configuration change. 9. Data Loss Prevention Aligned with NZISM The New Zealand Information Security Manualis the New Zealand Government’s manual on information assurance and information systems security. It includes guidance on data loss prevention, which must be followed to be aligned with the SEG. Need Help Setting up SPF and DKIM for your Email Provider? 
Setting up SPF and DKIM for different ESPs often requires specific configurations. Some providers require you to publish SPF and DKIM on a subdomain, while others only require DKIM, or have different formatting rules. We’ve simplified all these steps to help you avoid misconfigurations that could delay your DMARC enforcement, or worse, block legitimate emails from reaching your recipients. Below you’ll find comprehensive setup guides for Google Workspace, Microsoft 365, Zoho Mail, Amazon SES, and SendGrid. You can also explore our full blog section that covers setup instructions for many other well-known ESPs. Remember, all this information is reflected in your DMARC aggregate reports. These reports give you live visibility into your outgoing email ecosystem, helping you analyze and fix any issues specific to a given provider. Here are our step-by-step guides for the most common platforms: Google Workspace Microsoft 365 These guides will help ensure your DNS records are configured correctly as part of the Secure Government EmailFramework rollout. Meet New Government Email Security Standards With EasyDMARC New Zealand’s SEG Framework sets a clear path for government agencies to enhance their email security by October 2025. With EasyDMARC, you can meet these technical requirements efficiently and with confidence. From protocol setup to continuous monitoring and compliance tracking, EasyDMARC streamlines the entire process, ensuring strong protection against spoofing, phishing, and data loss while simplifying your transition from SEEMail. #new #zealands #email #security #requirements
    EASYDMARC.COM
    New Zealand’s Email Security Requirements for Government Organizations: What You Need to Know
    The Secure Government Email (SGE) Common Implementation Framework New Zealand’s government is introducing a comprehensive email security framework designed to protect official communications from phishing and domain spoofing. This new framework, which will be mandatory for all government agencies by October 2025, establishes clear technical standards to enhance email security and retire the outdated SEEMail service.  Key Takeaways All NZ government agencies must comply with new email security requirements by October 2025. The new framework strengthens trust and security in government communications by preventing spoofing and phishing. The framework mandates TLS 1.2+, SPF, DKIM, DMARC with p=reject, MTA-STS, and DLP controls. EasyDMARC simplifies compliance with our guided setup, monitoring, and automated reporting. Start a Free Trial What is the Secure Government Email Common Implementation Framework? The Secure Government Email (SGE) Common Implementation Framework is a new government-led initiative in New Zealand designed to standardize email security across all government agencies. Its main goal is to secure external email communication, reduce domain spoofing in phishing attacks, and replace the legacy SEEMail service. Why is New Zealand Implementing New Government Email Security Standards? The framework was developed by New Zealand’s Department of Internal Affairs (DIA) as part of its role in managing ICT Common Capabilities. It leverages modern email security controls via the Domain Name System (DNS) to enable the retirement of the legacy SEEMail service and provide: Encryption for transmission security Digital signing for message integrity Basic non-repudiation (by allowing only authorized senders) Domain spoofing protection These improvements apply to all emails, not just those routed through SEEMail, offering broader protection across agency communications. What Email Security Technologies Are Required by the New NZ SGE Framework? 
The SGE Framework outlines the following key technologies that agencies must implement: TLS 1.2 or higher with implicit TLS enforced TLS-RPT (TLS Reporting) SPF (Sender Policy Framework) DKIM (DomainKeys Identified Mail) DMARC (Domain-based Message Authentication, Reporting, and Conformance) with reporting MTA-STS (Mail Transfer Agent Strict Transport Security) Data Loss Prevention controls These technologies work together to ensure encrypted email transmission, validate sender identity, prevent unauthorized use of domains, and reduce the risk of sensitive data leaks. Get in touch When Do NZ Government Agencies Need to Comply with this Framework? All New Zealand government agencies are expected to fully implement the Secure Government Email (SGE) Common Implementation Framework by October 2025. Agencies should begin their planning and deployment now to ensure full compliance by the deadline. The All of Government Secure Email Common Implementation Framework v1.0 What are the Mandated Requirements for Domains? Below are the exact requirements for all email-enabled domains under the new framework. ControlExact RequirementTLSMinimum TLS 1.2. TLS 1.1, 1.0, SSL, or clear-text not permitted.TLS-RPTAll email-sending domains must have TLS reporting enabled.SPFMust exist and end with -all.DKIMAll outbound email from every sending service must be DKIM-signed at the final hop.DMARCPolicy of p=reject on all email-enabled domains. adkim=s is recommended when not bulk-sending.MTA-STSEnabled and set to enforce.Implicit TLSMust be configured and enforced for every connection.Data Loss PreventionEnforce in line with the New Zealand Information Security Manual (NZISM) and Protective Security Requirements (PSR). Compliance Monitoring and Reporting The All of Government Service Delivery (AoGSD) team will be monitoring compliance with the framework. Monitoring will initially cover SPF, DMARC, and MTA-STS settings and will be expanded to include DKIM. 
Changes to these settings will be monitored, enabling reporting on email security compliance across all government agencies. Ongoing monitoring will highlight changes to domains, ensure new domains are set up with security in place, and monitor the implementation of future email security technologies.  Should compliance changes occur, such as an agency’s SPF record being changed from -all to ~all, this will be captured so that the AoGSD Security Team can investigate. They will then communicate directly with the agency to determine if an issue exists or if an error has occurred, reviewing each case individually. Deployment Checklist for NZ Government Compliance Enforce TLS 1.2 minimum, implicit TLS, MTA-STS & TLS-RPT SPF with -all DKIM on all outbound email DMARC p=reject  adkim=s where suitable For non-email/parked domains: SPF -all, empty DKIM, DMARC reject strict Compliance dashboard Inbound DMARC evaluation enforced DLP aligned with NZISM Start a Free Trial How EasyDMARC Can Help Government Agencies Comply EasyDMARC provides a comprehensive email security solution that simplifies the deployment and ongoing management of DNS-based email security protocols like SPF, DKIM, and DMARC with reporting. Our platform offers automated checks, real-time monitoring, and a guided setup to help government organizations quickly reach compliance. 1. TLS-RPT / MTA-STS audit EasyDMARC enables you to enable the Managed MTA-STS and TLS-RPT option with a single click. We provide the required DNS records and continuously monitor them for issues, delivering reports on TLS negotiation problems. This helps agencies ensure secure email transmission and quickly detect delivery or encryption failures. Note: In this screenshot, you can see how to deploy MTA-STS and TLS Reporting by adding just three CNAME records provided by EasyDMARC. It’s recommended to start in “testing” mode, evaluate the TLS-RPT reports, and then gradually switch your MTA-STS policy to “enforce”. 
The process is simple and takes just a few clicks. As shown above, EasyDMARC parses incoming TLS reports into a centralized dashboard, giving you clear visibility into delivery and encryption issues across all sending sources. 2. SPF with “-all”In the EasyDARC platform, you can run the SPF Record Generator to create a compliant record. Publish your v=spf1 record with “-all” to enforce a hard fail for unauthorized senders and prevent spoofed emails from passing SPF checks. This strengthens your domain’s protection against impersonation. Note: It is highly recommended to start adjusting your SPF record only after you begin receiving DMARC reports and identifying your legitimate email sources. As we’ll explain in more detail below, both SPF and DKIM should be adjusted after you gain visibility through reports. Making changes without proper visibility can lead to false positives, misconfigurations, and potential loss of legitimate emails. That’s why the first step should always be setting DMARC to p=none, receiving reports, analyzing them, and then gradually fixing any SPF or DKIM issues. 3. DKIM on all outbound email DKIM must be configured for all email sources sending emails on behalf of your domain. This is critical, as DKIM plays a bigger role than SPF when it comes to building domain reputation, surviving auto-forwarding, mailing lists, and other edge cases. As mentioned above, DMARC reports provide visibility into your email sources, allowing you to implement DKIM accordingly (see first screenshot). If you’re using third-party services like Google Workspace, Microsoft 365, or Mimecast, you’ll need to retrieve the public DKIM key from your provider’s admin interface (see second screenshot). EasyDMARC maintains a backend directory of over 1,400 email sources. We also give you detailed guidance on how to configure SPF and DKIM correctly for major ESPs.  
Note: At the end of this article, you’ll find configuration links for well-known ESPs like Google Workspace, Microsoft 365, Zoho Mail, Amazon SES, and SendGrid, helping you avoid common misconfigurations and get aligned with SGE requirements.

If you’re using a dedicated MTA (e.g., Postfix), DKIM must be implemented manually. EasyDMARC’s DKIM Record Generator lets you generate both public and private keys for your server. The private key is stored on your MTA, while the public key must be published in your DNS (see third and fourth screenshots).

4. DMARC p=reject rollout

As mentioned in previous points, DMARC reporting is the first and most important step on your DMARC enforcement journey. Always start with a p=none policy and configure RUA reports to be sent to EasyDMARC. Use the report insights to identify and fix SPF and DKIM alignment issues, then gradually move to p=quarantine and finally p=reject once all legitimate email sources have been authenticated. This phased approach ensures full protection against domain spoofing without risking legitimate email delivery.

5. adkim Strict Alignment Check

This strict alignment check is not always applicable, especially if you’re using third-party bulk ESPs, such as SendGrid, that require you to set DKIM at the subdomain level. You can set adkim=s in your DMARC TXT record, or simply enable strict mode in EasyDMARC’s Managed DMARC settings. This ensures that only emails with a DKIM signature that exactly matches your domain pass alignment, adding an extra layer of protection against domain spoofing. Only do this if you are not a bulk sender.

6. Securing Non-Email Enabled Domains

The purpose of deploying email security to non-email-enabled domains, or parked domains, is to prevent messages from being spoofed from those domains. This requirement remains even if the root-level domain has sp=reject set within its DMARC record.
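The phased rollout above hinges on the p= tag of the DMARC TXT record. A small parser makes the progression concrete (a sketch; the record contents and mailbox are illustrative):

```python
def parse_dmarc(record: str) -> dict:
    """Parse a DMARC TXT record into a tag -> value dict (tags lowercased)."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")  # split on the first '=' only
            tags[key.strip().lower()] = value.strip()
    return tags

# The rollout moves p= from none -> quarantine -> reject:
monitoring = parse_dmarc("v=DMARC1; p=none; rua=mailto:reports@example.govt.nz")
enforced = parse_dmarc("v=DMARC1; p=reject; adkim=s; rua=mailto:reports@example.govt.nz")
assert monitoring["p"] == "none" and enforced["p"] == "reject"
```

Note that `partition("=")` splits only on the first `=`, so values such as `mailto:` URIs survive intact.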
Under this new framework, you must bulk import and mark parked domains as “Parked.” Crucially, this requires adjusting SPF settings to an empty record, setting DMARC to p=reject, and ensuring an empty DKIM record is in place:

• SPF record: “v=spf1 -all”
• Wildcard DKIM record with an empty public key
• DMARC record: “v=DMARC1;p=reject;adkim=s;aspf=s;rua=mailto:…”

EasyDMARC allows you to add and label parked domains for free. This is important because it helps you monitor any activity from these domains and ensure they remain protected with a strict DMARC policy of p=reject.

7. Compliance Dashboard

Use EasyDMARC’s Domain Scanner to assess the security posture of each domain with a clear compliance score and risk level. The dashboard highlights configuration gaps and guides remediation steps, helping government agencies stay on track toward full compliance with the SGE Framework.

8. Inbound DMARC Evaluation Enforced

You don’t need to apply any changes if you’re using Google Workspace, Microsoft 365, or another major mailbox provider; most already enforce DMARC evaluation on incoming email. However, some legacy Microsoft 365 setups may still quarantine emails that fail DMARC checks, even when the sending domain has a p=reject policy, instead of rejecting them. This behavior can be adjusted directly from your Microsoft Defender portal. Read more about this in our step-by-step guide on how to set up SPF, DKIM, and DMARC from Microsoft Defender. If you’re using a third-party mail provider that doesn’t enforce DMARC policy evaluation for incoming email, which is rare, you’ll need to contact their support to request a configuration change.

9. Data Loss Prevention Aligned with NZISM

The New Zealand Information Security Manual (NZISM) is the New Zealand Government’s manual on information assurance and information systems security. It includes guidance on data loss prevention (DLP), which must be followed to be aligned with the SGE.
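Since the parked-domain record set is identical for every parked domain, it can be generated mechanically when bulk-importing. A sketch (the helper name and sample domain are illustrative, not an EasyDMARC API):

```python
def parked_domain_records(domain: str) -> dict:
    """DNS TXT records that lock down a parked (non-email) domain, per the list above."""
    return {
        f"{domain} TXT": "v=spf1 -all",                # SPF: no host may send
        f"*._domainkey.{domain} TXT": "v=DKIM1; p=",   # wildcard DKIM, empty public key
        f"_dmarc.{domain} TXT": "v=DMARC1;p=reject;adkim=s;aspf=s",
    }

records = parked_domain_records("parked.example.govt.nz")
assert records["parked.example.govt.nz TXT"] == "v=spf1 -all"
```

Publishing `p=` (an empty key) in the wildcard DKIM record explicitly revokes any signature claiming a selector under the domain, completing the deny-everything posture.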
Need Help Setting up SPF and DKIM for your Email Provider?

Setting up SPF and DKIM for different ESPs often requires specific configurations. Some providers require you to publish SPF and DKIM on a subdomain, while others only require DKIM or have different formatting rules. We’ve simplified all these steps to help you avoid misconfigurations that could delay your DMARC enforcement or, worse, block legitimate emails from reaching your recipients.

Below you’ll find comprehensive setup guides for Google Workspace, Microsoft 365, Zoho Mail, Amazon SES, and SendGrid. You can also explore our full blog section, which covers setup instructions for many other well-known ESPs. Remember, all this information is reflected in your DMARC aggregate reports. These reports give you live visibility into your outgoing email ecosystem, helping you analyze and fix any issues specific to a given provider.

Here are our step-by-step guides for the most common platforms:

• Google Workspace
• Microsoft 365

These guides will help ensure your DNS records are configured correctly as part of the Secure Government Email (SGE) Framework rollout.

Meet New Government Email Security Standards With EasyDMARC

New Zealand’s SGE Framework sets a clear path for government agencies to enhance their email security by October 2025. With EasyDMARC, you can meet these technical requirements efficiently and with confidence. From protocol setup to continuous monitoring and compliance tracking, EasyDMARC streamlines the entire process, ensuring strong protection against spoofing, phishing, and data loss while simplifying your transition from SEEMail.
  • How AI is reshaping the future of healthcare and medical research

    Transcript       
    PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the AI equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”
    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.   
    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?    
    In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.  The book passage I read at the top is from “Chapter 10: The Big Black Bag.” 
    In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.   
    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open. 
    As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.  
    Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home. 
    Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.     
    Here’s my conversation with Bill Gates and Sébastien Bubeck. 
    LEE: Bill, welcome. 
    BILL GATES: Thank you. 
    LEE: Seb … 
    SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here. 
    LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening? 
    And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?  
    GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines. 
    And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.  
    And it was only about six months after I challenged them to do that, that they brought an early version of GPT-4 to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness that, you know, we later understood was an area of, weirdly, incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning. 
    LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that? 
    GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, … 
    LEE: Right.  
    GATES: … that is a bit weird.  
    LEE: Yeah. 
    GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training. 
    LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. 
    BUBECK: Yes.  
    LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR to join and start investigating this thing seriously. And the first person I pulled in was you. 
    BUBECK: Yeah. 
    LEE: And so what were your first encounters? Because I actually don’t remember what happened then. 
    BUBECK: Oh, I remember it very well. My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3. 
    I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1. 
    So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts. 
    So this was really, to me, the first moment where I saw some understanding in those models.  
    LEE: So this was, just to get the timing right, that was before I pulled you into the tent. 
    BUBECK: That was before. That was like a year before. 
    LEE: Right.  
    BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4. 
    So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.  
    So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x. 
    And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?  
    LEE: Yeah.
    BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.  
    LEE: One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine. 
    And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.  
    And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.  
    I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book. 
    But the main purpose of this conversation isn’t to reminisce about or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements. 
    But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today? 
    You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.  
    Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork? 
    GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.  
    It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision. 
    But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view. 
    LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. Does that make sense to you? 
    BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong? 
    Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.  
    Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them. 
    And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.  
    Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way. 
    It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine. 
    LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all? 
    GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that. 
    The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa,
    So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.  
    LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking? 
    GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.  
    The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.  
    LEE: Right.  
    GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.  
    LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication. 
    BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI. 
    It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for. 
    LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes. 
    I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?  
    That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. What’s up with that? 
    BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that version of GPT-4o, so now we don’t have the sycophant version out there. 
    Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad. 
    But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model. 
    So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model. 
    LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and … 
    BUBECK: It’s a very difficult, very difficult balance. 
    LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models? 
    GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there. 
    Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?  
    Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there.
    LEE: Yeah.
    GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake. 
    LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on. 
    BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI that kind of knows everything and you can just, you know, explain your own context and it will just get it and understand everything. 
    That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. So it’s … I think it’s an important example to have in mind. 
    LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two? 
    BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights models, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it. 
    LEE: So we have about three hours of stuff to talk about, but our time is actually running low.
    BUBECK: Yes, yes, yes.  
    LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now? 
    GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.  
    The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities. 
    And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period. 
    LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers? 
    GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them. 
    LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.  
    I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why. 
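    [Editor’s note: to make concrete what “checkable for validity” means here, the following is a hypothetical minimal example, not from the conversation. In a proof language like Lean, the proof checker verifies the argument mechanically, so a theorem can be trusted even if no human reads the proof.]

```lean
-- The Lean kernel certifies this proof of commutativity of
-- natural-number addition. A reader need only trust the checker,
-- not follow the argument, which is the property Lee describes:
-- validity is machine-checkable even when the proof itself is
-- too complex (or here, too terse) for a human to study.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```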
    BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see that it produced what you wanted. So I absolutely agree with that.  
    And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.  
    LEE: Yeah. 
    BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.  
    Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not. 
    Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision. 
    LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist … 
    BUBECK: Yeah.
    LEE: … or an endocrinologist might not.
    BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know.
    LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today? 
    BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later. 
    And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …  
    LEE: Will AI prescribe your medicines? Write your prescriptions? 
    BUBECK: I think yes. I think yes. 
    LEE: OK. Bill? 
    GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate?
    And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries. 
    You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that. 
    LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.  
    I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.  
    GATES: Yeah. Thanks, you guys. 
    BUBECK: Thank you, Peter. Thanks, Bill. 
    LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.   
    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.  
    And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.  
    One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.  
    HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings. 
    You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.  
    If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  
    I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.  
    Until next time.  
    How AI is reshaping the future of healthcare and medical research
    Transcript

    PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the AI equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”

    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee. Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?

    In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here. The book passage I read at the top is from “Chapter 10: The Big Black Bag.”

    In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.
    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.

    As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.

    Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.

    Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.

    Here’s my conversation with Bill Gates and Sébastien Bubeck.

    LEE: Bill, welcome.

    BILL GATES: Thank you.
    LEE: Seb …

    SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.

    LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?

    And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?

    GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.

    And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.

    And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness that, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models.
    But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.

    LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?

    GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …

    LEE: Right.

    GATES: … that is a bit weird.

    LEE: Yeah.

    GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.

    LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent.

    BUBECK: Yes.

    LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR to join and start investigating this thing seriously. And the first person I pulled in was you.

    BUBECK: Yeah.

    LEE: And so what were your first encounters? Because I actually don’t remember what happened then.

    BUBECK: Oh, I remember it very well. My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.

    I thought that it was, kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way.
    But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1. So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.

    So this was really, to me, the first moment where I saw some understanding in those models.

    LEE: So this was, just to get the timing right, that was before I pulled you into the tent.

    BUBECK: That was before. That was like a year before.

    LEE: Right.

    BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4. So once I saw this kind of understanding, I thought, OK, fine. It understands concepts, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.

    So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.

    And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?

    LEE: Yeah.

    BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible.
    And just right there, it was shown to be possible.

    LEE: One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine. And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.

    And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb. I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.

    But the main purpose of this conversation isn’t to reminisce about or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements. But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?

    You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.
    Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?

    GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.

    It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.

    But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.

    LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. Does that make sense to you?

    BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous.
    Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong? Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.

    Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.

    And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.

    Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way. It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.
    LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?

    GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that. The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa … so, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.

    LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?

    GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.
    The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.

    LEE: Right.

    GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.

    LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.

    BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmarks. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI. It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.

    LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient—a mythical patient—and the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis.
    And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes. I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?

    That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. What’s up with that?

    BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that version of GPT-4o, so now we don’t have the sycophant version out there.

    Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.

    But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that.
It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model.  So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.  LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …  BUBECK: It’s a very difficult, very difficult balance.  LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?  GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there.  Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? 
What’s my relationship to them?   Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there. LEE: Yeah. GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.  LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.  BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.  That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. 
And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. So it’s … I think it’s an important example to have in mind.  LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?  BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights models, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.  LEE: So we have about three hours of stuff to talk about, but our time is actually running low. BUBECK: Yes, yes, yes.   LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?  GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.   
The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.  And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.  LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?  GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.  LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.   I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why.  BUBECK: Yeah, absolutely. 
I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see that it produced what you wanted. So I absolutely agree with that.   And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.   LEE: Yeah.  BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.   Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.  Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.  LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah. LEE: … or an endocrinologist might not. BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. 
And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? 
And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.   GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.   And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. 
He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? 
What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery of healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.   I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.
    How AI is reshaping the future of healthcare and medical research
Transcript [MUSIC]      [BOOK PASSAGE]   PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the AI equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”   [END OF BOOK PASSAGE]     [THEME MUSIC]     This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.   [THEME MUSIC FADES] The book passage I read at the top is from “Chapter 10: The Big Black Bag.”  In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.    
In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.  As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.   Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.  Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.    [TRANSITION MUSIC]   Here’s my conversation with Bill Gates and Sébastien Bubeck.  LEE: Bill, welcome.  
BILL GATES: Thank you.  LEE: Seb …  SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.  LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?  And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?   GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.  And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.   And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. 
It turned out it was a math weakness [LAUGHTER] that, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.  LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?  GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …  LEE: Right.   GATES: … that is a bit weird.   LEE: Yeah.  GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.  LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. [LAUGHS]  BUBECK: Yes.   LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR [Microsoft Research] to join and start investigating this thing seriously. And the first person I pulled in was you.  BUBECK: Yeah.  LEE: And so what were your first encounters? Because I actually don’t remember what happened then.  BUBECK: Oh, I remember it very well. [LAUGHS] My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.  
I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.  So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. [LAUGHTER] And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.  So this was really, to me, the first moment where I saw some understanding in those models.   LEE: So this was, just to get the timing right, that was before I pulled you into the tent.  BUBECK: That was before. That was like a year before.  LEE: Right.   BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4.  So once I saw this kind of understanding, I thought, OK, fine. It understands concepts, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.   So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.  And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. 
Why don’t we ask this new model this question?   LEE: Yeah. BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.   LEE: [LAUGHS] One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.  And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.   And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.   I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.  But the main purpose of this conversation isn’t to reminisce about [LAUGHS] or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.  But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?  
You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.   Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?  GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.   It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.  But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.  LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. 
[LAUGHTER] Does that make sense to you?  BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong?  Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.   Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.  And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.   Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.  
It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.

LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?

GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that. The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.

LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?

GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education and agriculture, but with more healthcare examples than anything.
And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.

The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.

LEE: Right.

GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.

LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.

BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmarks. Because, you know, you were mentioning the USMLE [United States Medical Licensing Examination], for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI. It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnoses in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.
LEE: OK, so that gives me an excuse to get more now into the core AI tech, because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes.

I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?

That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. [LAUGHTER] What’s up with that?

BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that [LAUGHS] version of GPT-4o, so now we don’t have the sycophant version out there.

Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model?
So, you know, there is this very classical by now technique called RLHF [reinforcement learning from human feedback], where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.

But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophantic model.

So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.

LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …

BUBECK: It’s a very difficult, very difficult balance.

LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?

GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria.
So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have in there. Those two things are actually very straightforward because the additional training time is small.

I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?

Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, having read all the literature of the world about good doctors, bad doctors, the models will understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there.

LEE: Yeah.

GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.

LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on?

BUBECK: Yeah. OK.
So there is a lot to say about everything in the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI [artificial general intelligence] that kind of knows everything, and you can just, you know, explain your own context and it will just get it and understand everything.

That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. [LAUGHTER] So it’s … I think it’s an important example to have in mind.

LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?

BUBECK: Yeah, no, absolutely. I think there is value in the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights models, which we haven’t talked about yet, are really important, because they allow you to provide this broad base to everyone. And then you can specialize on top of it.

LEE: So we have about three hours of stuff to talk about, but our time is actually running low.

BUBECK: Yes, yes, yes.

LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?
GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We’ve focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. Those are, you know, testable-output-type jobs but with still very high value, so I can see, you know, some replacement in those areas before the doctor.

The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.

And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.

LEE: Is there a useful comparison, say, between doctors and computer programmers, or doctors and, I don’t know, lawyers?

GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.

LEE: Yeah.
By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something, but will be so complex that no human mathematician can understand them. I expect that to happen.

I can imagine in some fields, like cellular biology, we could have the same situation in the future, because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where, in the wet lab, we see, oh yeah, this actually works, but no one can understand why.

BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see [if you have] produced what you wanted. So I absolutely agree with that.

And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors, was open for a long time, and o3 was able to reduce the case of three colors to two colors.

LEE: Yeah.

BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect would happen so quickly, and it’s due to those reasoning models.
Now, on the delivery side, I would add something more to it, for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you, when they are confronted with a really new, novel situation, whether they will work or not. Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.

LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …

BUBECK: Yeah.

LEE: … or an endocrinologist might not.

BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you often don’t know.

LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?

BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.
And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …

LEE: Will AI prescribe your medicines? Write your prescriptions?

BUBECK: I think yes. I think yes.

LEE: OK. Bill?

GATES: Well, I think in the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and on the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health, with some ability to escalate? And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, the quality of care, the reduced overload of the doctors, the improvement in the economics will be enough that their voters will be stunned, because they just don’t expect this, and, you know, they could be reelected [LAUGHTER] just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.

You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive, because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable, because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.

LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.
I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.

[TRANSITION MUSIC]

GATES: Yeah. Thanks, you guys.

BUBECK: Thank you, Peter. Thanks, Bill.

LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.

With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.

And then Seb, Sébastien Bubeck, is just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.

One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.
HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.

You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.

If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.

[THEME MUSIC]

I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.

Until next time.

[MUSIC FADES]
  • Premier Truck Rental: Inside Sales Representative - Remote Salt Lake Area

Are you in search of a company that resonates with your proactive spirit and entrepreneurial mindset? Your search ends here with Premier Truck Rental!

Company Overview

At Premier Truck Rental, we provide customized commercial fleet rentals nationwide, helping businesses get the right trucks and equipment to get the job done. Headquartered in Fort Wayne, Indiana, PTR is a family-owned company built on a foundation of integrity, innovation, and exceptional service. We serve a wide range of industries, including construction, utilities, and infrastructure, by delivering high-quality, ready-to-work trucks and trailers tailored to each customer's needs. At PTR, we don't just rent trucks; we partner with our customers to drive efficiency and success on every job site.

Please keep reading! Not sure if you meet every requirement? That's okay! We encourage you to apply if you're passionate, hardworking, and eager to contribute. We know that diverse perspectives and experiences make us stronger, and we want you to be part of our journey.

The Inside Sales Representative at PTR is a friendly, people-oriented, and persuasive steward of the sales process. This role will support our Territory Managers with their sales pipeline while also prospecting and cross-selling PTR products. This support includes driving results by enrolling the commitment and buy-in of other internal departments to achieve sales initiatives. The Inside Sales Representative will also represent PTR's commitment to being our customers' easy button by serving as the main point of contact. They will be the front-line hero, assisting customers in making informed decisions, providing guidance on our rentals, and resolving any issues they might face. We are seeking someone eager to develop their sales skills and grow within our organization.
This role is designed as a stepping stone to a Territory Sales Manager position, providing hands-on experience with customer interactions, lead qualification, and sales process execution. Ideal candidates will demonstrate a strong drive for results, the ability to build relationships, and a proactive approach to learning and development. High-performing ISRs will have the opportunity to be mentored, trained, and considered for promotion into a TSM role as part of their career path at PTR.

COMPENSATION

This position offers a competitive compensation package of base salary plus uncapped commissions = OTE annually.

RESPONSIBILITIES

- Offer top-notch customer service and respond with a sense of urgency for goal achievement in a fast-paced sales environment.
- Build a strong pipeline of customers by qualifying potential leads in your territory. This includes strategic prospecting and sourcing.
- Develop creative ways to engage and build rapport with prospective customers by pitching the Premier Truck Rental value proposition.
- Partner with assigned Territory Managers by assisting with scheduling customer visits, trade shows, new customer hand-offs, and any other travel requested.
- Facilitate in-person meetings and set appointments with prospective customers.
- Qualify and quote inquiries for your prospective territories both online and from the Territory Manager.
- Input data into the system with accuracy and follow up in a timely fashion.
- Facilitate the onboarding of new customers through the credit process.
- Drive collaboration between customers, Territory Managers, Logistics, and internal teams to coordinate On-Rent and Off-Rent notices with excellent attention to detail.
- Identify and arrange the swap of equipment from customers meeting the PTR de-fleeting criteria.
- Manage the sales tools to organize, compile, and analyze data with accuracy for a variety of activities and multiple projects occurring simultaneously.
- Build and develop a new 3-4 state territory!
REQUIREMENTS

MUST HAVE

- 2+ years of strategic prospecting or account manager/sales experience; or an advanced degree or equivalent experience converting prospects into closed sales.
- Tech-forward approach to sales strategy.
- Excellent prospecting, follow-up, and follow-through skills. Committed to seeing deals through to completion.
- Accountability and ownership of the sales process and a strong commitment to results.
- Comfortable with a job that has a variety of tasks and is dynamic and changing.
- Proactive prospecting skills and the ability to overcome objections; driven to establish relationships with new customers.
- Ability to communicate in a clear, logical manner in formal and informal situations.
- Proficiency in CRMs and sales tracking systems.
- Hunter's mindset: someone who thrives on pursuing new business, driving outbound sales, and generating qualified opportunities.
- Prospecting: going on LinkedIn, looking at competitor data, grabbing contacts for the TM; may use technology like Apollo and LinkedIn Sales Navigator.
- Partner closely with the Territory Manager to ensure a unified approach in managing customer relationships, pipeline development, and revenue growth.
- Maintain clear and consistent communication to align on sales strategies, customer needs, and market opportunities, fostering a seamless and collaborative partnership with the Territory Manager.
- Consistently meet and exceed key performance indicators, including rental revenue, upfit revenue, and conversion rates, by actively managing customer accounts and identifying growth opportunities.
- Support the saturation and maturation of the customer base through strategic outreach, relationship management, and alignment with the Territory Manager to drive long-term success.
- Remote in the United States with some travel to trade shows, quarterly travel up to a week at a time, and sales meetings.

NICE TO HAVE

- Rental and/or sales experience in the industry.
- Proficiency in Apollo.io, LinkedIn Sales Navigator, Power BI, MS Dynamics, ChatGPT.
- Established relationships within the marketplace or territory.
- Motivated to grow into an outside territory management position, with relocation.

On Target Earnings:

EMPLOYEE BENEFITS

Wellness & Fitness: Take advantage of our on-site CrossFit-style gym, featuring a full-time personal trainer dedicated to helping you reach your fitness goals. Whether you're into group classes, virtual personal training, personalized workout plans, or nutrition coaching, we've got you covered!

Exclusive Employee Perks: PTR swag and a uniform/boot allowance, on-site micro-markets stocked with snacks and essentials, discounts on phone plans, supplier vehicles, mobile detailing, tools, and equipment, and much more!

Profit Sharing: Your success, rewarded. At PTR, we believe in sharing success through our profit-sharing.

Comprehensive Benefits Starting Day One: Premium healthcare coverage, 401(k) matching and long-term financial planning, paid time off that lets you recharge, life, accidental death, and disability coverage, and ongoing learning and development opportunities.

Training, Growth & Recognition: We partner with Predictive Index to better understand your strengths, ensuring tailored coaching, structured training, and career development. Performance and attitude evaluations every 6 months keep you on track for growth.

Culture & Connection: More than just a job. At PTR, we don't just build relationships with our customers; we build them with each other. Our tech-forward, highly collaborative culture is rooted in our core values.
Connect and engage through:

- PTR Field Days and team events
- The Extra Mile Recognition Program
- PTR text alerts and open communication

Premier Truck Rental Is an Equal Opportunity Employer

We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law. If you need support or accommodation due to a disability, contact us.
    #premier #truck #rental #inside #sales
    Premier Truck Rental: Inside Sales Representative - Remote Salt Lake Area
Are you in search of a company that resonates with your proactive spirit and entrepreneurial mindset? Your search ends here with Premier Truck Rental!

Company Overview
At Premier Truck Rental (PTR), we provide customized commercial fleet rentals nationwide, helping businesses get the right trucks and equipment to get the job done. Headquartered in Fort Wayne, Indiana, PTR is a family-owned company built on a foundation of integrity, innovation, and exceptional service. We serve a wide range of industries, including construction, utilities, and infrastructure, by delivering high-quality, ready-to-work trucks and trailers tailored to each customer's needs. At PTR, we don't just rent trucks; we partner with our customers to drive efficiency and success on every job site.

Please keep reading. Not sure if you meet every requirement? That's okay! We encourage you to apply if you're passionate, hardworking, and eager to contribute. We know that diverse perspectives and experiences make us stronger, and we want you to be part of our journey.

The Inside Sales Representative (ISR) at PTR is a friendly, people-oriented, and persuasive steward of the sales process. This role supports our Territory Managers with their sales pipeline while also prospecting and cross-selling PTR products. This support includes driving results by enrolling the commitment and buy-in of other internal departments to achieve sales initiatives. The Inside Sales Representative will also represent PTR's commitment to being our customers' "easy button" by serving as the main point of contact, assisting them in making informed decisions, providing guidance on our rentals, and resolving any issues they might face. We are seeking someone eager to develop their sales skills and grow within our organization.

This role is designed as a stepping stone to a Territory Sales Manager (TSM) position, providing hands-on experience with customer interactions, lead qualification, and sales process execution. Ideal candidates will demonstrate a strong drive for results, the ability to build relationships, and a proactive approach to learning and development. High-performing ISRs will have the opportunity to be mentored, trained, and considered for promotion into a TSM role as part of their career path at PTR.

COMPENSATION
This position offers a competitive compensation package of base salary ($50,000/yr) plus uncapped commissions, for on-target earnings (OTE) of $85,000 annually.

RESPONSIBILITIES
Offer top-notch customer service and respond with a sense of urgency for goal achievement in a fast-paced sales environment.
Build a strong pipeline of customers by qualifying potential leads in your territory, including strategic prospecting and sourcing.
Develop creative ways to engage and build rapport with prospective customers by pitching the Premier Truck Rental value proposition.
Partner with assigned Territory Managers by assisting with scheduling customer visits, trade shows, new-customer hand-offs, and any other travel requested.
Facilitate in-person meetings and set appointments with prospective customers.
Qualify and quote inquiries for your prospective territories, both online and from the Territory Manager.
Input data into the system with accuracy and follow up in a timely fashion.
Facilitate the onboarding of new customers through the credit process.
Drive collaboration between customers, Territory Managers, Logistics, and internal teams to coordinate On-Rent and Off-Rent notices with excellent attention to detail.
Identify and arrange the swap of equipment from customers meeting the PTR de-fleeting criteria.
Manage the sales tools to organize, compile, and analyze data with accuracy across a variety of activities and multiple simultaneous projects.
Build and develop a new 3-4 state territory!

REQUIREMENTS
MUST HAVE
2+ years of strategic prospecting or account manager/sales experience; or an advanced degree or equivalent experience converting prospects into closed sales.
Tech-forward approach to sales strategy.
Excellent prospecting, follow-up, and follow-through skills; committed to seeing deals through to completion.
Accountability and ownership of the sales process and a strong commitment to results.
Comfortable with a job that has a variety of tasks and is dynamic and changing.
Proactive prospecting skills and the ability to overcome objections; driven to establish relationships with new customers.
Ability to communicate in a clear, logical manner in formal and informal situations.
Proficiency in CRMs and sales tracking systems.
Hunter's mindset: someone who thrives on pursuing new business, driving outbound sales, and generating qualified opportunities.
Prospecting: researching on LinkedIn, reviewing competitor data, and gathering contacts for the TM, possibly using technology like Apollo and LinkedIn Sales Navigator.
Partner closely with the Territory Manager to ensure a unified approach to managing customer relationships, pipeline development, and revenue growth.
Maintain clear and consistent communication to align on sales strategies, customer needs, and market opportunities, fostering a seamless and collaborative partnership with the Territory Manager.
Consistently meet and exceed key performance indicators (KPIs), including rental revenue, upfit revenue, and conversion rates, by actively managing customer accounts and identifying growth opportunities.
Support the saturation and maturation of the customer base through strategic outreach, relationship management, and alignment with the Territory Manager to drive long-term success.
Remote in the United States, with some travel to trade shows and sales meetings, and quarterly travel of up to a week at a time.

NICE TO HAVE
Rental and/or sales experience in the industry.
Proficiency in Apollo.io, LinkedIn Sales Navigator, Power BI, MS Dynamics, ChatGPT.
Established relationships within the marketplace or territory.
Motivated to grow into an outside territory management position, with relocation.
On-Target Earnings: $85,000

EMPLOYEE BENEFITS
Wellness & Fitness: Take advantage of our on-site CrossFit-style gym, featuring a full-time personal trainer dedicated to helping you reach your fitness goals. Whether you're into group classes, virtual personal training, personalized workout plans, or nutrition coaching, we've got you covered!
Exclusive Employee Perks: PTR swag and a uniform/boot allowance, on-site micro-markets stocked with snacks and essentials, discounts on phone plans, supplier vehicles, mobile detailing, tools, and equipment, and much more!
Profit Sharing (Your Success, Rewarded): At PTR, we believe in sharing success.
Comprehensive Benefits Starting Day One: Premium healthcare coverage (medical, dental, vision, mental health & virtual healthcare), 401(k) matching and long-term financial planning, paid time off that lets you recharge, life, accidental death, and disability coverage, and ongoing learning and development opportunities.
Training, Growth & Recognition: We partner with Predictive Index to better understand your strengths, ensuring tailored coaching, structured training, and career development. Performance and attitude evaluations every 6 months keep you on track for growth.
Culture & Connection (More Than Just a Job): At PTR, we don't just build relationships with our customers; we build them with each other. Our tech-forward, highly collaborative culture is rooted in our core values.
Connect and engage through: PTR Field Days & Team Events, The Extra Mile Recognition Program, and PTR Text Alerts & Open Communication.

Premier Truck Rental Is an Equal Opportunity Employer
We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law. If you need support or accommodation due to a disability, contact us at PI6e547fa1c5-
  • Ansys: UX Designer II (Remote - US)

Requisition #: 16391

Our Mission: Powering Innovation That Drives Human Advancement
When visionary companies need to know how their world-changing ideas will perform, they close the gap between design and reality with Ansys simulation. For more than 50 years, Ansys software has enabled innovators across industries to push boundaries by using the predictive power of simulation. From sustainable transportation to advanced semiconductors, from satellite systems to life-saving medical devices, the next great leaps in human advancement will be powered by Ansys. Innovate with Ansys, power your career.

Summary / Role Purpose
The User Experience Designer II creates easy and delightful experiences for users interacting with ANSYS products and services. The UX designer assesses the functional and content requirements of a product, develops storyboards, creates wireframes and task flows based on user needs, and produces visually detailed mockups. A passion for visual design and familiarity with UI trends and technologies are essential in this role, enabling the UX designer to bring fresh and innovative ideas to a project. This is an intermediate role, heavily focused on content production and communication. It is intended to expose the UX professional to the nuts-and-bolts aspects of their UX career while building on the presentation, communication, and usability aspects of the design role.

The User Experience Designer II will contribute to the development of a new web-based, collaborative solution for the ModelCenter and optiSLang product lines. This work will be based on an innovative modeling framework, modern web technologies, microservices, and integrations with Ansys' core products. The User Experience Designer II will contribute to the specification and design of user interactions and workflows for new features. The solution will be used by Ansys customers to design next-generation systems in the most innovative industries (Aerospace and Defense, Automotive, semiconductors, and others).

Location: Can be 100% remote within the US.

Key Duties and Responsibilities
Designs, develops, and evaluates cutting-edge user interfaces.
Reviews UX artifacts created by other UX team members.
Utilizes prototyping tools and UX toolkits.
Creates and delivers usability studies.
Communicates design rationale across product creation disciplines and personnel.
Records usability/UX problems with clear explanations and recommendations for improvement.
Works closely with product managers, development teams, and other designers.

Minimum Education/Certification Requirements and Experience
BS or BA in Human-Computer Interaction, Design Engineering, or Industrial Design with 2 years' experience, or an MS.
Working experience with technical software development, proven by academic, research, or industry projects.
Professional working proficiency in English.

Preferred Qualifications and Skills
Experience with:
UX design and collaboration tools: Figma, Balsamiq, or similar tools.
Tools and technologies for UI implementation: HTML, CSS, JavaScript, Angular, React.
Screen-capture, editing, and video-editing tools.
Adobe Creative Suite.
Ability to:
Smoothly iterate on designs, taking direction, adjusting, and re-focusing toward a converged design.
Organize deliverables for future reflection and current investigations.
Communicate succinctly and professionally via email, chat, remote meetings, usability evaluations, etc.
Prototype rapidly using any tools available.
Knowledge of Model Based System Engineering (MBSE) or optimization is a plus.

Culture and Values
Culture and values are incredibly important to ANSYS. They inform us of who we are and how we act. Values aren't posters hanging on a wall or trite, glib slogans. They aren't about rules and regulations. They can't just be handed down the organization. They are shared beliefs, guideposts that we all follow when we're facing a challenge or a decision. Our values tell us how we live our lives and how we approach our jobs.

Our values are crucial for fostering a culture of winning for our company:
• Customer focus
• Results and Accountability
• Innovation
• Transparency and Integrity
• Mastery
• Inclusiveness
• Sense of urgency
• Collaboration and Teamwork

At Ansys, we know that changing the world takes vision, skill, and each other. We fuel new ideas, build relationships, and help each other realize our greatest potential. We are ONE Ansys. We operate on three key components: our commitments to stakeholders, our values that guide how we work together, and our actions to deliver results. As ONE Ansys, we are powering innovation that drives human advancement.

Our Commitments:
Amaze with innovative products and solutions.
Make our customers incredibly successful.
Act with integrity.
Ensure employees thrive and shareholders prosper.

Our Values:
Adaptability: Be open, welcome what's next.
Courage: Be courageous, move forward passionately.
Generosity: Be generous; share, listen, serve.
Authenticity: Be you, make us stronger.

Our Actions:
We commit to audacious goals.
We work seamlessly as a team.
We demonstrate mastery.
We deliver outstanding results.

VALUES IN ACTION
Ansys is committed to powering the people who power human advancement. We believe in creating and nurturing a workplace that supports and welcomes people of all backgrounds, encouraging them to bring their talents and experience to a workplace where they are valued and can thrive. Our culture is grounded in our four core values of adaptability, courage, generosity, and authenticity. Through our behaviors and actions, these values foster higher team performance and greater innovation for our customers. We're proud to offer programs, available to all employees, to further impact innovation and business outcomes, such as employee networks and learning communities that inform solutions for our globally minded customer base.

WELCOME WHAT'S NEXT IN YOUR CAREER AT ANSYS
At Ansys, you will find yourself among the sharpest minds and most visionary leaders across the globe. Collectively, we strive to change the world with innovative technology and transformational solutions. With a prestigious reputation for working with well-known, world-class companies, standards at Ansys are high, met by those willing to rise to the occasion and meet those challenges head on. Our team is passionate about pushing the limits of world-class simulation technology, empowering our customers to turn their design concepts into successful, innovative products faster and at a lower cost. Ready to feel inspired? Check out some of our recent customer stories. At Ansys, it's about the learning, the discovery, and the collaboration. It's about the "what's next" as much as the "mission accomplished." And it's about the melding of disciplined intellect with strategic direction and results that have, can, and do impact real people in real ways. All this is forged within a working environment built on respect, autonomy, and ethics.

CREATING A PLACE WE'RE PROUD TO BE
Ansys is an S&P 500 company and a member of the NASDAQ-100. We are proud to have been recognized for the following recent awards, although our list goes on: Newsweek's Most Loved Workplace globally and in the U.S., Gold Stevie Award Winner, America's Most Responsible Companies, Fast Company World Changing Ideas, and Great Place to Work Certified. For more information, please visit us online.

Ansys is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, and other protected characteristics. Ansys does not accept unsolicited referrals for vacancies, and any unsolicited referral will become the property of Ansys.
Upon hire, no fee will be owed to the agency, person, or entity.
    #ansys #designer #remote
    Ansys: UX Designer II (Remote - US)
    Requisition #: 16391 Our Mission: Powering Innovation That Drives Human Advancement When visionary companies need to know how their world-changing ideas will perform, they close the gap between design and reality with Ansys simulation. For more than 50 years, Ansys software has enabled innovators across industries to push boundaries by using the predictive power of simulation. From sustainable transportation to advanced semiconductors, from satellite systems to life-saving medical devices, the next great leaps in human advancement will be powered by Ansys. Innovate With Ansys, Power Your Career. Summary / Role Purpose The User Experience Designer II creates easy and delightful experiences for users interacting with ANSYS products and services. The UX designer assesses the functional and content requirements of a product, develops storyboards, creates wireframes and task flows based on user needs, and produces visually detailed mockups. A passion for visual design and familiarity with UI trends and technologies are essential in this role, enabling the UX designer to bring fresh and innovative ideas to a project. This is an intermediate role, heavily focused on content production and communication. It is intended to expose the UX professional to the nuts-and-bolts aspects of their UX career; while building on presentation, communication, and usability aspects of the design role. The User Experience Designer II will contribute to the development of a new web-based, collaborative solution for the ModelCenter and optiSLang product lines. This work will be based on an innovative modeling framework, modern web technologies, micro-services and integrations with Ansys' core products. The User Experience Designer II will contribute to the specification and design of user interactions and workflows for new features. The solution will be used by Ansys customers to design next generation systems in the most innovative industries. 
Location: Can be 100% Remote within US Key Duties and Responsibilities Designs, develops, and evaluates cutting-edge user interfaces Reviews UX artifacts created by other UX team members Utilizes prototyping tools and UX toolkits Creates and delivers usability studies Communicates design rationale across product creation disciplines and personnel Records usability/UX problems with clear explanations and recommendations for improvement Works closely with product managers, development teams, and other designers Minimum Education/Certification Requirements and Experience BS or BA in Human-Computer Interaction, Design Engineering, or Industrial Design with 2 years' experience or MS Working experience with technical software development proven by academic, research, or industry projects. Professional working proficiency in English Preferred Qualifications and Skills Experience with: UX design and collaboration tools: Figma, Balsamiq or similar tools Tools & technologies for UI implementation: HTML, CSS, JavaScript, Angular, React Screen-capture/editing/video-editing tools Adobe Creative Suite Ability to: Smoothly iterate on designs, taking direction, adjusting, and re-focusing towards a converged design Organize deliverables for future reflection and current investigations Communicate succinctly and professionally via email, chat, remote meetings, usability evaluations, etc. Prototype rapidly using any tools available Knowledge of Model Based System Engineeringor optimization is a plus Culture and Values Culture and values are incredibly important to ANSYS. They inform us of who we are, of how we act. Values aren't posters hanging on a wall or about trite or glib slogans. They aren't about rules and regulations. They can't just be handed down the organization. They are shared beliefs - guideposts that we all follow when we're facing a challenge or a decision. Our values tell us how we live our lives; how we approach our jobs. 
Our values are crucial for fostering a culture of winning for our company: • Customer focus • Results and Accountability • Innovation • Transparency and Integrity • Mastery • Inclusiveness • Sense of urgency • Collaboration and Teamwork At Ansys, we know that changing the world takes vision, skill, and each other. We fuel new ideas, build relationships, and help each other realize our greatest potential. We are ONE Ansys. We operate on three key components: our commitments to stakeholders, our values that guide how we work together, and our actions to deliver results. As ONE Ansys, we are powering innovation that drives human advancement Our Commitments:Amaze with innovative products and solutionsMake our customers incredibly successfulAct with integrityEnsure employees thrive and shareholders prosper Our Values:Adaptability: Be open, welcome what's nextCourage: Be courageous, move forward passionatelyGenerosity: Be generous, share, listen, serveAuthenticity: Be you, make us stronger Our Actions:We commit to audacious goalsWe work seamlessly as a teamWe demonstrate masteryWe deliver outstanding resultsVALUES IN ACTION Ansys is committed to powering the people who power human advancement. We believe in creating and nurturing a workplace that supports and welcomes people of all backgrounds; encouraging them to bring their talents and experience to a workplace where they are valued and can thrive. Our culture is grounded in our four core values of adaptability, courage, generosity, and authenticity. Through our behaviors and actions, these values foster higher team performance and greater innovation for our customers. We're proud to offer programs, available to all employees, to further impact innovation and business outcomes, such as employee networks and learning communities that inform solutions for our globally minded customer base. 
WELCOME WHAT'S NEXT IN YOUR CAREER AT ANSYS At Ansys, you will find yourself among the sharpest minds and most visionary leaders across the globe. Collectively, we strive to change the world with innovative technology and transformational solutions. With a prestigious reputation in working with well-known, world-class companies, standards at Ansys are high - met by those willing to rise to the occasion and meet those challenges head on. Our team is passionate about pushing the limits of world-class simulation technology, empowering our customers to turn their design concepts into successful, innovative products faster and at a lower cost. Ready to feel inspired? Check out some of our recent customer stories, here and here . At Ansys, it's about the learning, the discovery, and the collaboration. It's about the "what's next" as much as the "mission accomplished." And it's about the melding of disciplined intellect with strategic direction and results that have, can, and do impact real people in real ways. All this is forged within a working environment built on respect, autonomy, and ethics.CREATING A PLACE WE'RE PROUD TO BEAnsys is an S&P 500 company and a member of the NASDAQ-100. We are proud to have been recognized for the following more recent awards, although our list goes on: Newsweek's Most Loved Workplace globally and in the U.S., Gold Stevie Award Winner, America's Most Responsible Companies, Fast Company World Changing Ideas, Great Place to Work Certified.For more information, please visit us at Ansys is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, and other protected characteristics.Ansys does not accept unsolicited referrals for vacancies, and any unsolicited referral will become the property of Ansys. 
Upon hire, no fee will be owed to the agency, person, or entity.Apply NowLet's start your dream job Apply now Meet JobCopilot: Your Personal AI Job HunterAutomatically Apply to Remote Full-Stack Programming JobsJust set your preferences and Job Copilot will do the rest-finding, filtering, and applying while you focus on what matters. Activate JobCopilot #ansys #designer #remote
    WEWORKREMOTELY.COM
    Ansys: UX Designer II (Remote - US)
    Requisition #: 16391

    Our Mission: Powering Innovation That Drives Human Advancement
    When visionary companies need to know how their world-changing ideas will perform, they close the gap between design and reality with Ansys simulation. For more than 50 years, Ansys software has enabled innovators across industries to push boundaries by using the predictive power of simulation. From sustainable transportation to advanced semiconductors, from satellite systems to life-saving medical devices, the next great leaps in human advancement will be powered by Ansys. Innovate With Ansys, Power Your Career.

    Summary / Role Purpose
    The User Experience Designer II creates easy and delightful experiences for users interacting with Ansys products and services. The UX designer assesses the functional and content requirements of a product, develops storyboards, creates wireframes and task flows based on user needs, and produces visually detailed mockups. A passion for visual design and familiarity with UI trends and technologies are essential in this role, enabling the UX designer to bring fresh and innovative ideas to a project. This is an intermediate role, heavily focused on content production and communication. It is intended to expose the UX professional to the nuts-and-bolts aspects of their UX career while building on the presentation, communication, and usability aspects of the design role.
    The User Experience Designer II will contribute to the development of a new web-based, collaborative solution for the ModelCenter and optiSLang product lines. This work will be based on an innovative modeling framework, modern web technologies, microservices, and integrations with Ansys' core products. The User Experience Designer II will contribute to the specification and design of user interactions and workflows for new features. The solution will be used by Ansys customers to design next-generation systems in the most innovative industries (aerospace and defense, automotive, semiconductors, and others).

    Location: Can be 100% remote within the US

    Key Duties and Responsibilities
    • Designs, develops, and evaluates cutting-edge user interfaces
    • Reviews UX artifacts created by other UX team members
    • Utilizes prototyping tools and UX toolkits
    • Creates and delivers usability studies
    • Communicates design rationale across product creation disciplines and personnel
    • Records usability/UX problems with clear explanations and recommendations for improvement
    • Works closely with product managers, development teams, and other designers

    Minimum Education/Certification Requirements and Experience
    • BS or BA in Human-Computer Interaction, Design Engineering, or Industrial Design with 2 years' experience, or an MS
    • Working experience with technical software development, proven by academic, research, or industry projects
    • Professional working proficiency in English

    Preferred Qualifications and Skills
    Experience with:
    • UX design and collaboration tools: Figma, Balsamiq, or similar
    • Tools and technologies for UI implementation: HTML, CSS, JavaScript, Angular, React
    • Screen-capture, image-editing, and video-editing tools
    • Adobe Creative Suite
    Ability to:
    • Smoothly iterate on designs, taking direction, adjusting, and refocusing toward a converged design
    • Organize deliverables for future reflection and current investigations
    • Communicate succinctly and professionally via email, chat, remote meetings, usability evaluations, etc.
    • Prototype rapidly using any tools available
    Knowledge of Model-Based Systems Engineering (MBSE) or optimization is a plus.

    Culture and Values
    Culture and values are incredibly important to Ansys. They inform us of who we are and how we act. Values aren't posters hanging on a wall, or trite or glib slogans. They aren't about rules and regulations. They can't just be handed down the organization. They are shared beliefs - guideposts that we all follow when we're facing a challenge or a decision. Our values tell us how we live our lives and how we approach our jobs. Our values are crucial for fostering a culture of winning for our company:
    • Customer focus
    • Results and Accountability
    • Innovation
    • Transparency and Integrity
    • Mastery
    • Inclusiveness
    • Sense of urgency
    • Collaboration and Teamwork
    At Ansys, we know that changing the world takes vision, skill, and each other. We fuel new ideas, build relationships, and help each other realize our greatest potential. We are ONE Ansys. We operate on three key components: our commitments to stakeholders, our values that guide how we work together, and our actions to deliver results. As ONE Ansys, we are powering innovation that drives human advancement.
    Our Commitments:
    • Amaze with innovative products and solutions
    • Make our customers incredibly successful
    • Act with integrity
    • Ensure employees thrive and shareholders prosper
    Our Values:
    • Adaptability: Be open, welcome what's next
    • Courage: Be courageous, move forward passionately
    • Generosity: Be generous, share, listen, serve
    • Authenticity: Be you, make us stronger
    Our Actions:
    • We commit to audacious goals
    • We work seamlessly as a team
    • We demonstrate mastery
    • We deliver outstanding results

    VALUES IN ACTION
    Ansys is committed to powering the people who power human advancement. We believe in creating and nurturing a workplace that supports and welcomes people of all backgrounds, encouraging them to bring their talents and experience to a workplace where they are valued and can thrive. Our culture is grounded in our four core values of adaptability, courage, generosity, and authenticity. Through our behaviors and actions, these values foster higher team performance and greater innovation for our customers. We're proud to offer programs, available to all employees, to further impact innovation and business outcomes, such as employee networks and learning communities that inform solutions for our globally minded customer base.

    WELCOME WHAT'S NEXT IN YOUR CAREER AT ANSYS
    At Ansys, you will find yourself among the sharpest minds and most visionary leaders across the globe. Collectively, we strive to change the world with innovative technology and transformational solutions. With a prestigious reputation for working with well-known, world-class companies, standards at Ansys are high - met by those willing to rise to the occasion and meet those challenges head on. Our team is passionate about pushing the limits of world-class simulation technology, empowering our customers to turn their design concepts into successful, innovative products faster and at a lower cost. Ready to feel inspired? Check out some of our recent customer stories, here and here. At Ansys, it's about the learning, the discovery, and the collaboration. It's about the "what's next" as much as the "mission accomplished." And it's about the melding of disciplined intellect with strategic direction and results that have, can, and do impact real people in real ways. All this is forged within a working environment built on respect, autonomy, and ethics.

    CREATING A PLACE WE'RE PROUD TO BE
    Ansys is an S&P 500 company and a member of the NASDAQ-100. We are proud to have been recognized for the following recent awards, although our list goes on: Newsweek's Most Loved Workplace globally and in the U.S., Gold Stevie Award Winner, America's Most Responsible Companies, Fast Company World Changing Ideas, Great Place to Work Certified (China, Greece, France, India, Japan, Korea, Spain, Sweden, Taiwan, and U.K.). For more information, please visit us at
    Ansys is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, and other protected characteristics.
    Ansys does not accept unsolicited referrals for vacancies, and any unsolicited referral will become the property of Ansys. Upon hire, no fee will be owed to the agency, person, or entity.
  • OThink-R1: A Dual-Mode Reasoning Framework to Cut Redundant Computation in LLMs

    The Inefficiency of Static Chain-of-Thought Reasoning in LRMs
    Recent large reasoning models (LRMs) achieve top performance by using detailed chain-of-thought (CoT) reasoning to solve complex tasks. However, many of the simple tasks they handle could be solved by smaller models with far fewer tokens, making such elaborate reasoning unnecessary. This echoes human thinking: we use fast, intuitive responses for easy problems and slower, analytical thinking for complex ones. While LRMs mimic slow, logical reasoning, they generate significantly longer outputs, increasing computational cost. Current methods for reducing reasoning steps lack flexibility, limiting models to a single fixed reasoning style, so there is a growing need for adaptive reasoning that adjusts effort to task difficulty. 
    Limitations of Existing Training-Based and Training-Free Approaches
    Recent research on improving reasoning efficiency in LRMs can be categorized into two main areas: training-based and training-free methods. Training strategies often use reinforcement learning or fine-tuning to limit token usage or adjust reasoning depth, but they tend to follow fixed patterns without flexibility. Training-free approaches utilize prompt engineering or pattern detection to shorten outputs during inference; however, they also lack adaptability. More recent work focuses on variable-length reasoning, where models adjust reasoning depth based on task complexity. Others study “overthinking,” where models over-reason unnecessarily. However, few methods enable dynamic switching between quick and thorough reasoning—something this paper addresses directly. 
    Introducing OThink-R1: Dynamic Fast/Slow Reasoning Framework
    Researchers from Zhejiang University and OPPO have developed OThink-R1, a new approach that enables LRMs to switch intelligently between fast and slow thinking, much as humans do. By analyzing reasoning patterns, they identified which steps are essential and which are redundant. With the help of another model acting as a judge, they trained LRMs to adapt their reasoning style to task complexity. The method reduces unnecessary reasoning by over 23% without losing accuracy. Using a dual-reference loss function and curated fine-tuning datasets, OThink-R1 outperforms previous models in both efficiency and performance on a range of math and question-answering tasks. 
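    The judge-and-prune idea behind the training data can be sketched as follows. This is a toy illustration, not the paper's implementation: the redundancy "judge" here is a keyword heuristic standing in for the separate LLM judge that OThink-R1 uses, and the marker phrases are invented for the example.

```python
# Toy sketch of OThink-R1's data-curation idea: label each reasoning step
# as essential or redundant, then prune redundant steps from the training
# trace. In the paper an LLM acts as the judge; here a keyword heuristic
# (with made-up marker phrases) stands in for it.

REDUNDANT_MARKERS = ("wait,", "let me double-check", "alternatively,", "to verify again")

def judge_step(step: str) -> bool:
    """Return True if a step looks redundant (heuristic stand-in for the LLM judge)."""
    return step.strip().lower().startswith(REDUNDANT_MARKERS)

def prune_trace(steps: list[str]) -> list[str]:
    """Keep only the steps the judge marks as essential."""
    return [s for s in steps if not judge_step(s)]

trace = [
    "12 workers each assemble 7 units, so 12 * 7 = 84 units.",
    "Wait, let me double-check: 12 * 7 is indeed 84.",
    "Adding the 16 units in stock gives 84 + 16 = 100.",
]
pruned = prune_trace(trace)  # the re-verification step is dropped
```

    The pruned traces then serve as fast-thinking fine-tuning targets, while the originals preserve the slow-thinking style.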
    System Architecture: Reasoning Pruning and Dual-Reference Optimization
    The OThink-R1 framework helps LRMs dynamically switch between fast and slow thinking. First, it identifies when LRMs include unnecessary reasoning, like overexplaining or double-checking, versus when detailed steps are truly essential. Using this, it builds a curated training dataset by pruning redundant reasoning and retaining valuable logic. Then, during fine-tuning, a special loss function balances both reasoning styles. This dual-reference loss compares the model’s outputs with both fast and slow thinking variants, encouraging flexibility. As a result, OThink-R1 can adaptively choose the most efficient reasoning path for each problem while preserving accuracy and logical depth. 
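    As a sketch, the dual-reference idea can be written as a weighted sum of two KL terms, one against a fast-thinking reference and one against a slow-thinking reference. The simple weighted sum and the `alpha` parameter are assumptions for illustration; the paper's exact objective may differ.

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions given as probability lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def dual_reference_loss(model, fast_ref, slow_ref, alpha=0.5):
    """Illustrative dual-reference objective: pull the model's next-token
    distribution toward both reference styles. `alpha` and the weighted
    sum are assumptions, not the paper's exact formulation."""
    return alpha * kl_divergence(model, fast_ref) + (1 - alpha) * kl_divergence(model, slow_ref)

# Toy next-token distributions over a 3-token vocabulary
model_p = [0.5, 0.3, 0.2]
fast_p  = [0.6, 0.3, 0.1]
slow_p  = [0.4, 0.3, 0.3]
loss = dual_reference_loss(model_p, fast_p, slow_p)  # positive unless model matches both refs
```

    In practice both terms would be computed token-wise over the vocabulary with a framework such as PyTorch; the point here is only that the loss references two teachers rather than one.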

    Empirical Evaluation and Comparative Performance
    The OThink-R1 model was tested on simpler QA and math tasks to evaluate its ability to switch between fast and slow reasoning. Using datasets such as OpenBookQA, CommonsenseQA, ASDIV, and GSM8K, the model performed strongly, generating fewer tokens while maintaining or improving accuracy. Compared with baselines such as NoThinking and DualFormer, OThink-R1 achieved a better balance between efficiency and effectiveness. Ablation studies confirmed the importance of pruning, the KL constraint, and the LLM judge in achieving optimal results. A case study illustrated that unnecessary reasoning can lead to overthinking and reduced accuracy, highlighting OThink-R1's strength in adaptive reasoning. 
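    The efficiency side of this evaluation reduces to comparing generated-token counts at matched accuracy. A minimal way to compute that style of redundancy-reduction figure (the per-dataset token counts below are invented for illustration, not the paper's measurements):

```python
def redundancy_reduction(baseline_tokens: float, pruned_tokens: float) -> float:
    """Percent reduction in generated tokens relative to the baseline model."""
    return 100.0 * (baseline_tokens - pruned_tokens) / baseline_tokens

# Hypothetical average tokens per answer: (baseline LRM, OThink-R1-style model)
results = {
    "GSM8K":         (512.0, 390.0),
    "OpenBookQA":    (310.0, 235.0),
    "CommonsenseQA": (280.0, 220.0),
}
for name, (base, pruned) in results.items():
    print(f"{name}: {redundancy_reduction(base, pruned):.1f}% fewer tokens")
```

    A reduction of this kind only counts as a win when accuracy on the same split is held steady, which is how the paper frames its roughly 23% figure.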

    Conclusion: Towards Scalable and Efficient Hybrid Reasoning Systems
    OThink-R1 is a large reasoning model that adaptively switches between fast and slow thinking modes to improve both efficiency and performance. It addresses unnecessarily complex reasoning in large models by classifying reasoning steps as essential or redundant, pruning the redundant ones while preserving logical accuracy, and thereby reducing unnecessary computation. It also introduces a dual-reference KL-divergence loss to strengthen hybrid reasoning. Tested on math and QA tasks, it cuts reasoning redundancy by over 23% without sacrificing accuracy, showing promise for building more adaptive, scalable, and efficient AI reasoning systems. 

    Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.
    Sana Hassan, a consulting intern at Marktechpost and dual-degree student at IIT Madras, is passionate about applying technology and AI to address real-world challenges. With a keen interest in solving practical problems, he brings a fresh perspective to the intersection of AI and real-life solutions.
    WWW.MARKTECHPOST.COM
    OThink-R1: A Dual-Mode Reasoning Framework to Cut Redundant Computation in LLMs