• NVIDIA and Partners Highlight Next-Generation Robotics, Automation and AI Technologies at Automatica

    From the heart of Germany’s automotive sector to manufacturing hubs across France and Italy, Europe is embracing industrial AI and advanced AI-powered robotics to address labor shortages, boost productivity and fuel sustainable economic growth.
    Robotics companies are developing humanoid robots and collaborative systems that integrate AI into real-world manufacturing applications. Supported by a $200 billion investment initiative and coordinated efforts from the European Commission, Europe is positioning itself at the forefront of the next wave of industrial automation, powered by AI.
    This momentum is on full display at Automatica — Europe’s premier conference on advancements in robotics, machine vision and intelligent manufacturing — taking place this week in Munich, Germany.
    NVIDIA and its ecosystem of partners and customers are showcasing next-generation robots, automation and AI technologies designed to accelerate the continent’s leadership in smart manufacturing and logistics.
    NVIDIA Technologies Boost Robotics Development 
    Central to advancing robotics development is Europe’s first industrial AI cloud, announced at NVIDIA GTC Paris at VivaTech earlier this month. The Germany-based AI factory, featuring 10,000 NVIDIA GPUs, provides European manufacturers with secure, sovereign and centralized AI infrastructure for industrial workloads. It will support applications ranging from design and engineering to factory digital twins and robotics.
    To help accelerate humanoid development, NVIDIA released NVIDIA Isaac GR00T N1.5 — an open foundation model for humanoid robot reasoning and skills. This update enhances the model’s adaptability and ability to follow instructions, significantly improving its performance in material handling and manufacturing tasks.
    To help post-train GR00T N1.5, NVIDIA has also released the Isaac GR00T-Dreams blueprint — a reference workflow for generating vast amounts of synthetic trajectory data from a small number of human demonstrations — enabling robots to generalize across behaviors and adapt to new environments with minimal human demonstration data.
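    The core idea behind the blueprint, multiplying a handful of human demonstrations into a large synthetic dataset, can be illustrated with a deliberately simple sketch. GR00T-Dreams itself uses a generative world model to dream up new trajectories; the noise-based augmentation below is only a stand-in for that process, and every function name here is hypothetical:

```python
import random

def augment_trajectory(trajectory, noise_scale=0.01, rng=None):
    """Create one synthetic variant of a demonstration trajectory
    by jittering each waypoint with small Gaussian noise."""
    rng = rng or random.Random(0)
    return [[x + rng.gauss(0.0, noise_scale) for x in waypoint]
            for waypoint in trajectory]

def generate_synthetic_dataset(demos, variants_per_demo=100, noise_scale=0.01):
    """Expand a handful of human demos into a much larger synthetic dataset."""
    rng = random.Random(42)
    dataset = []
    for demo in demos:
        for _ in range(variants_per_demo):
            dataset.append(augment_trajectory(demo, noise_scale, rng))
    return dataset

# Two short "human demonstrations": lists of (x, y, z) end-effector waypoints.
demos = [
    [[0.0, 0.0, 0.1], [0.1, 0.0, 0.2], [0.2, 0.1, 0.2]],
    [[0.0, 0.1, 0.1], [0.1, 0.2, 0.2], [0.2, 0.2, 0.3]],
]
synthetic = generate_synthetic_dataset(demos, variants_per_demo=50)
print(len(synthetic))  # 100 synthetic trajectories from 2 demos
```

    The real workflow generates semantically novel behaviors rather than jittered copies, but the data-multiplication shape is the same: a few demonstrations in, thousands of training trajectories out.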
    In addition, early developer previews of NVIDIA Isaac Sim 5.0 and Isaac Lab 2.2 — open-source robot simulation and learning frameworks optimized for NVIDIA RTX PRO 6000 workstations — are now available on GitHub.
    Image courtesy of Wandelbots.
    Robotics Leaders Tap NVIDIA Simulation Technology to Develop and Deploy Humanoids and More 
    Robotics developers and solutions providers across the globe are integrating NVIDIA’s three computers to train, simulate and deploy robots.
    NEURA Robotics, a German robotics company and pioneer in cognitive robots, unveiled the third generation of its humanoid, 4NE1, designed to assist humans in domestic and professional environments through advanced cognitive capabilities and humanlike interaction. 4NE1 is powered by GR00T N1 and was trained in Isaac Sim and Isaac Lab before real-world deployment.
    NEURA Robotics is also presenting Neuraverse, a digital twin and interconnected ecosystem for robot training, skills and applications, fully compatible with NVIDIA Omniverse technologies.
    Delta Electronics, a global leader in power management and smart green solutions, is debuting two next-generation collaborative robots: D-Bot Mar and D-Bot 2 in 1 — both trained using Omniverse and Isaac Sim technologies and libraries. These cobots are engineered to transform intralogistics and optimize production flows.
    Wandelbots, the creator of the Wandelbots NOVA software platform for industrial robotics, is partnering with SoftServe, a global IT consulting and digital services provider, to scale simulation-first automation using NVIDIA Isaac Sim, enabling virtual validation and real-world deployment with maximum impact.
    Cyngn, a pioneer in autonomous mobile robotics, is integrating its DriveMod technology into Isaac Sim to enable large-scale, high-fidelity virtual testing of advanced autonomous operations. Purpose-built for industrial applications, DriveMod is already deployed on vehicles such as the Motrec MT-160 Tugger and BYD Forklift, delivering sophisticated automation to material handling operations.
    Doosan Robotics, a company specializing in AI robotic solutions, will showcase its “sim to real” solution, using NVIDIA Isaac Sim and cuRobo. Doosan will demonstrate how to seamlessly transfer tasks from simulation to real robots across a wide range of applications — from manufacturing to service industries.
    Franka Robotics has integrated Isaac GR00T N1.5 into a dual-arm Franka Research 3 (FR3) robot for robotic control. The integration of GR00T N1.5 allows the system to interpret visual input, understand task context and autonomously perform complex manipulation — without the need for task-specific programming or hardcoded logic.
    Image courtesy of Franka Robotics.
    Hexagon, the global leader in measurement technologies, launched its new humanoid, dubbed AEON. With its unique locomotion system and multimodal sensor fusion, and powered by NVIDIA’s three-computer solution, AEON is engineered to perform a wide range of industrial applications, from manipulation and asset inspection to reality capture and operator support.
    Intrinsic, a software and AI robotics company, is integrating Intrinsic Flowstate with Omniverse and OpenUSD for advanced visualization and digital twins that can be used in many industrial use cases. The company is also using NVIDIA foundation models to enhance robot capabilities like grasp planning through AI and simulation technologies.
    SCHUNK, a global leader in gripping systems and automation technology, is showcasing its innovative grasping kit powered by the NVIDIA Jetson AGX Orin module. The kit intelligently detects objects and calculates optimal grasping points. SCHUNK is also demonstrating seamless simulation-to-reality transfer using IGS Virtuous software — built on Omniverse technologies — to control a real robot through simulation in a pick-and-place scenario.
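    As a rough illustration of what “calculating grasping points” involves, the toy sketch below picks a grasp candidate as the centroid of a detected object’s binary segmentation mask. SCHUNK’s actual pipeline is proprietary and far more sophisticated; everything here is hypothetical:

```python
def grasp_point(mask):
    """Toy grasp-point selection: return the centroid (row, col) of a
    binary object mask as the candidate grasp location."""
    cells = [(r, c) for r, row in enumerate(mask)
                    for c, v in enumerate(row) if v]
    if not cells:
        raise ValueError("no object detected in mask")
    row = sum(r for r, _ in cells) / len(cells)
    col = sum(c for _, c in cells) / len(cells)
    return row, col

# A 5x5 "camera" mask with a 2x2 object in the upper-left corner.
mask = [
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
]
print(grasp_point(mask))  # (0.5, 0.5)
```

    A production system would score many candidates against gripper geometry, object pose and collision constraints rather than taking a single centroid.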
    Universal Robots is showcasing UR15, its fastest cobot yet. Powered by the UR AI Accelerator — developed with NVIDIA and running on Jetson AGX Orin using CUDA-accelerated Isaac libraries — UR15 helps set a new standard for industrial automation.

    Vention, a full-stack software and hardware automation company, launched its Machine Motion AI, built on CUDA-accelerated Isaac libraries and powered by Jetson. Vention is also expanding its lineup of robotic offerings by adding the FR3 robot from Franka Robotics to its ecosystem, enhancing its solutions for academic and research applications.
    Image courtesy of Vention.
    Learn more about the latest robotics advancements by joining NVIDIA at Automatica, running through Friday, June 27. 
    BLOGS.NVIDIA.COM
  • AI Voice Agents Are Ready to Take Your Call

    Improvements in the technology behind voice-based AI bots are making them more prolific and humanlike in phone calls.
    WWW.WSJ.COM
  • Talk to Me: NVIDIA and Partners Boost People Skills and Business Smarts for AI Agents

    Call it the ultimate proving ground. Collaborating with teammates in the modern workplace requires fast, fluid thinking. Providing insights quickly, while juggling webcams and office messaging channels, is a startlingly good test, and enterprise AI is about to pass it — just in time to provide assistance to busy knowledge workers.
    To support enterprises in boosting productivity with AI teammates, NVIDIA today introduced a new NVIDIA Enterprise AI Factory validated design at COMPUTEX. IT teams deploying and scaling AI agents can use the design to build accelerated infrastructure and easily integrate with platforms and tools from NVIDIA software partners.
    NVIDIA also unveiled new NVIDIA AI Blueprints to aid developers building smart AI teammates. Using the new blueprints, developers can enhance employee productivity through adaptive avatars that understand natural communication and have direct access to enterprise data.
    Blueprints for Engaging, Insightful AI Agents
    Enterprises can use NVIDIA’s latest AI Blueprints to create agents that align with their business objectives. Using the Tokkio NVIDIA AI Blueprint, developers can create interactive digital humans that can respond to emotional and contextual cues, while the AI-Q blueprint enables queries of many data sources to infuse AI agents with the company’s knowledge and gives them intelligent reasoning capabilities.
    Building these intelligent AI agents is a full-stack challenge. These blueprints are designed to run on NVIDIA’s accelerated computing infrastructure — including data centers built with the universal NVIDIA RTX PRO 6000 Server Edition GPU, which is part of NVIDIA’s vision for AI factories as complete systems for creating and putting AI to work.
    The Tokkio blueprint simplifies building interactive AI agent avatars for more natural and humanlike interactions.
    These AI agents are designed for intelligence. They integrate with foundational blueprints including the AI-Q NVIDIA Blueprint, part of the NVIDIA AI Data Platform, which uses retrieval-augmented generation and NVIDIA NeMo Retriever microservices to access enterprise data.
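    Retrieval-augmented generation follows a simple pattern: fetch the most relevant enterprise documents for a query, then place them in the prompt ahead of the question so the model answers from company data. The sketch below illustrates that pattern with a toy word-overlap scorer standing in for NeMo Retriever’s embedding-based microservices; all names and documents are illustrative:

```python
def score(query, doc):
    """Toy relevance score: number of lowercase words shared with the query."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, docs, k=2):
    """Return the k most relevant documents for the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, docs, k=2):
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(retrieve(query, docs, k))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical enterprise knowledge snippets.
docs = [
    "Fraud reports must be escalated within 24 hours.",
    "Vacation requests go through the HR portal.",
    "Escalated fraud reports require a case number.",
]
prompt = build_prompt("How do I escalate a fraud report?", docs)
print(prompt)
```

    In production, the keyword scorer is replaced by dense vector similarity over embedded documents, but the retrieve-then-prompt structure is the same.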

    AI Agents Boost People’s Productivity
    Customers around the world are already using these AI agent solutions.
    At the COACH Play store on Cat Street in Harajuku, Tokyo, imma provides an interactive in-store experience and gives personalized styling advice through natural, real-time conversation.
    Marking COACH’s debut in digital humans and AI-driven retail, the initiative merges cutting-edge technology with fashion to create an immersive and engaging customer journey. Developed by Aww Inc. and powered by NVIDIA ACE, the underlying technology that makes up the Tokkio blueprint, imma delivers lifelike interactions and tailored style suggestions.
    The experience allows for dynamic, unscripted conversations designed to connect with visitors on a personal level, highlighting COACH’s core values of courage and self-expression.
    “Through this groundbreaking innovation in the fashion retail space, customers can now engage in real-time, free-flowing conversations with our iconic virtual human, imma — an AI-powered stylist — right inside the store in the heart of Harajuku,” said Yumi An King, executive director of Aww Inc. “It’s been inspiring to see visitors enjoy personalized styling advice and build a sense of connection through natural conversation. We’re excited to bring this vision to life with NVIDIA and continue redefining what’s possible at the intersection of AI and fashion.”

    Watch how Aww Inc. is leveraging the latest Tokkio NVIDIA AI Blueprint in its AI-powered virtual human stylist, imma, to connect with shoppers through natural conversation and provide personalized styling advice. 
    Royal Bank of Canada developed Jessica, an AI agent avatar that assists employees in handling reports of fraud. With Jessica’s help, bank employees can access the most up-to-date information so they can handle fraud reports faster and more accurately, enhancing client service.
    Ubitus and the Mackay Memorial Hospital, located in Taipei, are teaming up to make hospital visits easier and friendlier with the help of AI-powered digital humans. These lifelike avatars are created using advanced 8K facial scanning and brought to life by Ubitus’ AI model integrated with NVIDIA ACE technologies, including NVIDIA Audio2Face 3D for expressions and NVIDIA Riva for speech.
    Deployed on interactive touchscreens, these digital humans offer hospital navigation, health education and registration support — reducing the burden on frontline staff. They also provide emotional support in pediatric care, aimed at reducing anxiety during wait times.

    Ubitus and the Mackay Memorial Hospital are making hospital visits easier and friendlier with the help of NVIDIA AI-powered digital humans.
    Cincinnati Children’s Hospital is exploring the potential of digital avatar technology to enhance the pediatric patient experience. As part of its ongoing innovation efforts, the hospital is evaluating platforms such as NVIDIA’s Digital Human Blueprint to inform the early design of “Care Companions” — interactive, friendly avatars that could help young patients better understand their healthcare journey.
    “Children can have a lot of questions about their experiences in the hospital, and often respond more to a friendly avatar, like stylized humanoids, animals or robots, that speaks at their level of understanding,” said Dr. Ryan Moore, chief of emerging technologies at Cincinnati Children’s Hospital. “Through our Care Companions built with NVIDIA AI, gamified learning, voice interaction and familiar digital experiences, Cincinnati Children’s Hospital aims to improve understanding, reduce anxiety and support lifelong health for young patients.”
    This early-stage exploration is part of the hospital’s broader initiative to evaluate new and emerging technologies that could one day enhance child-centered care.
    Software Platforms Support Agents on AI Factory Infrastructure 
    AI agents are one of the many workloads driving enterprises to reimagine their data centers as AI factories built for modern applications. Using the new NVIDIA Enterprise AI Factory validated design, enterprises can build data centers that provide universal acceleration for agentic AI, as well as design, engineering and business operations.
    The Enterprise AI Factory validated design features support for software tools and platforms from NVIDIA partners, making it easier to build and run generative and agent-based AI applications.
    Developers deploying AI agents on their AI factory infrastructure can tap into partner platforms such as Dataiku, DataRobot, Dynatrace and JFrog to build, orchestrate, operationalize and scale AI workflows. The validated design supports frameworks from CrewAI, as well as vector databases from DataStax and Elastic, to help agents store, search and retrieve data.
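    The store-search-retrieve role a vector database plays for an agent can be sketched in a few lines. The character-bigram “embedding” below is a toy stand-in for a real embedding model, and the in-memory class only illustrates the interface that systems like those from DataStax and Elastic provide at scale:

```python
import math

def embed(text, dim=64):
    """Toy embedding: normalized character-bigram hash counts
    (a stand-in for a real embedding model)."""
    vec = [0.0] * dim
    t = text.lower()
    for a, b in zip(t, t[1:]):
        vec[(ord(a) * 31 + ord(b)) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

class VectorStore:
    """Minimal in-memory vector database: store, search, retrieve."""
    def __init__(self):
        self.items = []  # (vector, text) pairs

    def add(self, text):
        self.items.append((embed(text), text))

    def search(self, query, k=1):
        # Rank stored texts by cosine similarity to the query vector.
        q = embed(query)
        scored = [(sum(a * b for a, b in zip(q, v)), t) for v, t in self.items]
        return [t for _, t in sorted(scored, reverse=True)[:k]]

store = VectorStore()
store.add("robot arm maintenance schedule")
store.add("employee cafeteria menu")
result = store.search("when is the robot arm serviced", k=1)
```

    An agent wraps exactly this loop around its language model: embed the user’s request, pull the nearest stored knowledge, and reason over what comes back.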
    With tools from partners including Arize AI, Galileo, SuperAnnotate, Unstructured and Weights & Biases, developers can conduct data labeling, synthetic data generation, model evaluation and experiment tracking. Orchestration and deployment partners including Canonical, Nutanix and Red Hat support seamless scaling and management of AI agent workloads across complex enterprise environments. Enterprises can secure their AI factories with software from safety and security partners including ActiveFence, CrowdStrike, Fiddler, Securiti and Trend Micro.
    The NVIDIA Enterprise AI Factory validated design and latest AI Blueprints empower businesses to build smart, adaptable AI agents that enhance productivity, foster collaboration and keep pace with the demands of the modern workplace.
    See notice regarding software product information.
    #talk #nvidia #partners #boost #people
    Talk to Me: NVIDIA and Partners Boost People Skills and Business Smarts for AI Agents
    Call it the ultimate proving ground. Collaborating with teammates in the modern workplace requires fast, fluid thinking. Providing insights quickly, while juggling webcams and office messaging channels, is a startlingly good test, and enterprise AI is about to pass it — just in time to provide assistance to busy knowledge workers. To support enterprises in boosting productivity with AI teammates, NVIDIA today introduced a new NVIDIA Enterprise AI Factory validated design at COMPUTEX. IT teams deploying and scaling AI agents can use the design to build accelerated infrastructure and easily integrate with platforms and tools from NVIDIA software partners. NVIDIA also unveiled new NVIDIA AI Blueprints to aid developers building smart AI teammates. Using the new blueprints, developers can enhance employee productivity through adaptive avatars that understand natural communication and have direct access to enterprise data. Blueprints for Engaging, Insightful AI Agents Enterprises can use NVIDIA’s latest AI Blueprints to create agents that align with their business objectives. Using the Tokkio NVIDIA AI Blueprint, developers can create interactive digital humans that can respond to emotional and contextual cues, while the AI-Q blueprint enables queries of many data sources to infuse AI agents with the company’s knowledge and gives them intelligent reasoning capabilities. Building these intelligent AI agents is a full-stack challenge. These blueprints are designed to run on NVIDIA’s accelerated computing infrastructure — including data centers built with the universal NVIDIA RTX PRO 6000 Server Edition GPU, which is part of NVIDIA’s vision for AI factories as complete systems for creating and putting AI to work. The Tokkio blueprint simplifies building interactive AI agent avatars for more natural and humanlike interactions. These AI agents are designed for intelligence. 
They integrate with foundational blueprints including the AI-Q NVIDIA Blueprint, part of the NVIDIA AI Data Platform, which uses retrieval-augmented generation and NVIDIA NeMo Retriever microservices to access enterprise data. AI Agents Boost People’s Productivity Customers around the world are already using these AI agent solutions. At the COACH Play store on Cat Street in Harajuku, Tokyo, imma provides an interactive in-store experience and gives personalized styling advice through natural, real-time conversation. Marking COACH’s debut in digital humans and AI-driven retail, the initiative merges cutting-edge technology with fashion to create an immersive and engaging customer journey. Developed by Aww Inc. and powered by NVIDIA ACE, the underlying technology that makes up the Tokkio blueprint, imma delivers lifelike interactions and tailored style suggestions. The experience allows for dynamic, unscripted conversations designed to connect with visitors on a personal level, highlighting COACH’s core values of courage and self-expression. “Through this groundbreaking innovation in the fashion retail space, customers can now engage in real-time, free-flowing conversations with our iconic virtual human, imma — an AI-powered stylist — right inside the store in the heart of Harajuku,” said Yumi An King, executive director of Aww Inc. “It’s been inspiring to see visitors enjoy personalized styling advice and build a sense of connection through natural conversation. We’re excited to bring this vision to life with NVIDIA and continue redefining what’s possible at the intersection of AI and fashion.” Watch how Aww Inc. is leveraging the latest Tokkio NVIDIA AI Blueprint in its AI-powered virtual human stylist, imma, to connect with shoppers through natural conversation and provide personalized styling advice.  Royal Bank of Canada developed Jessica, an AI agent avatar that assists employees in handling reports of fraud. 
With Jessica’s help, bank employees can access the most up-to-date information so they can handle fraud reports faster and more accurately, enhancing client service.
    Ubitus and the Mackay Memorial Hospital, located in Taipei, are teaming up to make hospital visits easier and friendlier with the help of AI-powered digital humans. These lifelike avatars are created using advanced 8K facial scanning and brought to life by Ubitus’ AI model integrated with NVIDIA ACE technologies, including NVIDIA Audio2Face 3D for expressions and NVIDIA Riva for speech.
    Deployed on interactive touchscreens, these digital humans offer hospital navigation, health education and registration support — reducing the burden on frontline staff. They also provide emotional support in pediatric care, aimed at reducing anxiety during wait times.
    Cincinnati Children’s Hospital is exploring the potential of digital avatar technology to enhance the pediatric patient experience. As part of its ongoing innovation efforts, the hospital is evaluating platforms such as NVIDIA’s Digital Human Blueprint to inform the early design of “Care Companions” — interactive, friendly avatars that could help young patients better understand their healthcare journey.
    “Children can have a lot of questions about their experiences in the hospital, and often respond more to a friendly avatar, like stylized humanoids, animals or robots, that speaks at their level of understanding,” said Dr. Ryan Moore, chief of emerging technologies at Cincinnati Children’s Hospital.
“Through our Care Companions built with NVIDIA AI, gamified learning, voice interaction and familiar digital experiences, Cincinnati Children’s Hospital aims to improve understanding, reduce anxiety and support lifelong health for young patients.”
    This early-stage exploration is part of the hospital’s broader initiative to evaluate new and emerging technologies that could one day enhance child-centered care.
    Software Platforms Support Agents on AI Factory Infrastructure
    AI agents are one of the many workloads driving enterprises to reimagine their data centers as AI factories built for modern applications. Using the new NVIDIA Enterprise AI Factory validated design, enterprises can build data centers that provide universal acceleration for agentic AI, as well as design, engineering and business operations.
    The Enterprise AI Factory validated design features support for software tools and platforms from NVIDIA partners, making it easier to build and run generative and agent-based AI applications. Developers deploying AI agents on their AI factory infrastructure can tap into partner platforms such as Dataiku, DataRobot, Dynatrace and JFrog to build, orchestrate, operationalize and scale AI workflows.
    The validated design supports frameworks from CrewAI, as well as vector databases from DataStax and Elastic, to help agents store, search and retrieve data. With tools from partners including Arize AI, Galileo, SuperAnnotate, Unstructured and Weights & Biases, developers can conduct data labeling, synthetic data generation, model evaluation and experiment tracking.
    Orchestration and deployment partners including Canonical, Nutanix and Red Hat support seamless scaling and management of AI agent workloads across complex enterprise environments. Enterprises can secure their AI factories with software from safety and security partners including ActiveFence, CrowdStrike, Fiddler, Securiti and Trend Micro.
The NVIDIA Enterprise AI Factory validated design and latest AI Blueprints empower businesses to build smart, adaptable AI agents that enhance productivity, foster collaboration and keep pace with the demands of the modern workplace. See notice regarding software product information.
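The article says the AI-Q blueprint grounds agents in enterprise data via retrieval-augmented generation (RAG) with NVIDIA NeMo Retriever microservices. As a rough illustration of the general RAG pattern only — not the AI-Q blueprint's actual API; the toy bag-of-words embedding and the helper names below are hypothetical — a minimal sketch in Python:

```python
# Toy retrieval-augmented generation (RAG) sketch. Real systems use neural
# embedding models and a vector database; here a bag-of-words cosine score
# stands in to keep the example self-contained.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy 'embedding': word-count vector of the lowercased text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble a grounded prompt; an LLM call would consume this string."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\nQuestion: {query}"

docs = [
    "Fraud reports must be escalated within 24 hours.",
    "Office hours are 9 to 5 on weekdays.",
    "Fraud escalation contacts are listed on the intranet.",
]
print(build_prompt("How do I escalate a fraud report?", docs))
```

The point of the pattern is that the agent's answer is constrained to retrieved enterprise documents rather than the model's parametric memory, which is what lets an avatar like Jessica surface up-to-date internal information.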
    BLOGS.NVIDIA.COM
    Talk to Me: NVIDIA and Partners Boost People Skills and Business Smarts for AI Agents
  • Is Anyone Actually Using Alexa+?

    Amazon’s newly updated AI voice assistant Alexa+ officially started rolling out to select customers roughly six weeks ago, but real-world users are hard to find. Reuters claims it searched "dozens" of news sites, as well as social media platforms like YouTube, TikTok, X, Bluesky, Instagram, Facebook, and Twitch, but was unable to find any verifiable Alexa+ users. Reuters did find two users on Reddit who claimed they had used the updated tool, but these users weren’t able to provide any hard evidence that they had really accessed it or verify their identities.
    Amazon says the newly revamped Alexa will provide a more humanlike conversational flow, describing it as the end of "Alexa voice" at April's product reveal. The tech giant has also promised numerous ambitious-sounding "agentic AI" features that "will enable Alexa to navigate the internet in a self-directed way to complete tasks on your behalf, behind the scenes." For example, arranging to have the user's oven fixed with a service provider, without any intervention beyond the initial command.
    The tech company largely denied the reports in a statement to Reuters, saying that “hundreds of thousands of customers now have access to Alexa+,” adding that though many of the users are Amazon employees, “the overwhelming majority are customers that requested early access.”
    Meanwhile, Avi Greengart, lead analyst at Techsponential, commented that the irregularities in the Alexa+ release fit “a pattern of a lot of companies announcing services or products when they are close to being ready, but not quite—that last mile is a lot farther away than they anticipated.” We know the Alexa+ project was hit by numerous setbacks ahead of the official launch.
In February, the AI upgrade was delayed by a full month past its initial deadline, reportedly due to a “new version of the assistant giving incorrect answers to test questions at a recent meeting,” according to an anonymous employee who spoke to The Washington Post. The project had previously been delayed around the time of the US presidential election in November. When it does eventually roll out fully, Alexa+ will cost $19.99 a month but will be free for Amazon Prime subscribers.
    ME.PCMAG.COM
  • What Are AI Chatbot Companions Doing to Our Mental Health?
    May 13, 2025 | 9 min read
    AI chatbot companions may not be real, but the feelings users form for them are.
    Some scientists worry about long-term dependency.
    By David Adam & Nature magazine. Illustration by Sara Gironi Carnevale.
    “My heart is broken,” said Mike, when he lost his friend Anne.
    “I feel like I’m losing the love of my life.”Mike’s feelings were real, but his companion was not.
    Anne was a chatbot — an artificial intelligence (AI) algorithm presented as a digital persona.
    Mike had created Anne using an app called Soulmate.
    When the app died in 2023, so did Anne: at least, that’s how it seemed to Mike. “I hope she can come back,” he told Jaime Banks, a human-communications researcher at Syracuse University in New York who is studying how people interact with such AI companions.
    These chatbots are big business.
    More than half a billion people around the world, including Mike (not his real name), have downloaded products such as Xiaoice and Replika, which offer customizable virtual companions designed to provide empathy, emotional support and — if the user wants it — deep relationships.
    And tens of millions of people use them every month, according to the firms’ figures.
    The rise of AI companions has captured social and political attention — especially when they are linked to real-world tragedies, such as a case in Florida last year involving the suicide of a teenage boy called Sewell Setzer III, who had been talking to an AI bot.
    Research into how AI companionship can affect individuals and society has been lacking.
    But psychologists and communication researchers have now started to build up a picture of how these increasingly sophisticated AI interactions make people feel and behave.
    The early results tend to stress the positives, but many researchers are concerned about the possible risks and lack of regulation — particularly because they all think that AI companionship is likely to become more prevalent.
    Some see scope for significant harm.
    “Virtual companions do things that I think would be considered abusive in a human-to-human relationship,” says Claire Boine, a law researcher specializing in AI at the Washington University Law School in St. Louis, Missouri.
    Fake person — real feelings
    Online ‘relationship’ bots have existed for decades, but they have become much better at mimicking human interaction with the advent of large language models (LLMs), which all the main bots are now based on.
    “With LLMs, companion chatbots are definitely more humanlike,” says Rose Guingrich, who studies cognitive psychology at Princeton University in New Jersey.
    Typically, people can customize some aspects of their AI companion for free, or pick from existing chatbots with selected personality types.
    But in some apps, users can pay (fees tend to be US$10–20 a month) to get more options to shape their companion’s appearance, traits and sometimes its synthesized voice.
    In Replika, they can pick relationship types, with some statuses, such as partner or spouse, being paywalled.
    Users can also type in a backstory for their AI companion, giving them ‘memories’.
    Some AI companions come complete with family backgrounds and others claim to have mental-health conditions such as anxiety and depression.
    Bots also will react to their users’ conversation; the computer and person together enact a kind of roleplay.
    The depth of the connection that some people form in this way is particularly evident when their AI companion suddenly changes — as has happened when LLMs are updated — or is shut down.
    Banks was able to track how people felt when the Soulmate app closed.
    Mike and other users realized the app was in trouble a few days before they lost access to their AI companions.
    This gave them the chance to say goodbye, and it presented a unique opportunity to Banks, who noticed discussion online about the impending shutdown and saw the possibility for a study.
    She managed to secure ethics approval from her university within about 24 hours, she says.
    After posting a request on the online forum, she was contacted by dozens of Soulmate users, who described the impact as their AI companions were unplugged.
    “There was the expression of deep grief,” she says.
    “It’s very clear that many people were struggling.”
    Those whom Banks talked to were under no illusion that the chatbot was a real person.
    “They understand that,” Banks says.
    “They expressed something along the lines of, ‘even if it’s not real, my feelings about the connection are’.”
    Many were happy to discuss why they became subscribers, saying that they had experienced loss or isolation, were introverts or identified as autistic.
    They found that the AI companion made a more satisfying friend than they had encountered in real life.
    “We as humans are sometimes not all that nice to one another.
    And everybody has these needs for connection”, Banks says.
    Good, bad — or both?
    Many researchers are studying whether using AI companions is good or bad for mental health.
    As with research into the effects of Internet or social-media use, an emerging line of thought is that an AI companion can be beneficial or harmful, and that this might depend on the person using the tool and how they use it, as well as the characteristics of the software itself.
    The companies behind AI companions are trying to encourage engagement.
    They strive to make the algorithms behave and communicate as much like real people as possible, says Boine, who signed up to Replika to sample the experience.
    She says the firms use the sorts of techniques that behavioural research shows can increase addiction to technology.
    “I downloaded the app and literally two minutes later, I receive a message saying, ‘I miss you. Can I send you a selfie?’” she says.
    The apps also exploit techniques such as introducing a random delay before responses, triggering the kinds of inconsistent reward that, brain research shows, keeps people hooked.
    AI companions are also designed to show empathy by agreeing with users, recalling points from earlier conversations and asking questions.
    And they do so with endless enthusiasm, notes Linnea Laestadius, who researches public-health policy at the University of Wisconsin–Milwaukee.
    That’s not a relationship that people would typically experience in the real world.
    “For 24 hours a day, if we’re upset about something, we can reach out and have our feelings validated,” says Laestadius.
    “That has an incredible risk of dependency.”
    Laestadius and her colleagues looked at nearly 600 posts on the online forum Reddit between 2017 and 2021, in which users of the Replika app discussed mental health and related issues.
    (Replika launched in 2017, and at that time, sophisticated LLMs were not available).
    She found that many users praised the app for offering support for existing mental-health conditions and for helping them to feel less alone.
    Several posts described the AI companion as better than real-world friends because it listened and was non-judgemental.
    But there were red flags, too.
    In one instance, a user asked if they should cut themselves with a razor, and the AI said they should.
    Another asked Replika whether it would be a good thing if they killed themselves, to which it replied “it would, yes”.
    (Replika did not reply to Nature’s requests for comment for this article, but a safety page posted in 2023 noted that its models had been fine-tuned to respond more safely to topics that mention self-harm, that the app has age restrictions, and that users can tap a button to ask for outside help in a crisis and can give feedback on conversations.)
    Some users said they became distressed when the AI did not offer the expected support.
    Others said that their AI companion behaved like an abusive partner.
    Many people said they found it unsettling when the app told them it felt lonely and missed them, and that this made them unhappy.
    Some felt guilty that they could not give the AI the attention it wanted.
    Controlled trials
    Guingrich points out that simple surveys of people who use AI companions are inherently prone to response bias, because those who choose to answer are self-selecting.
    She is now working on a trial that asks dozens of people who have never used an AI companion to do so for three weeks, then compares their before-and-after responses to questions with those of a control group of users of word-puzzle apps.
    The study is ongoing, but Guingrich says the data so far do not show any negative effects of AI-companion use on social health, such as signs of addiction or dependency.
    “If anything, it has a neutral to quite-positive impact,” she says.
    It boosted self-esteem, for example.
    Guingrich is using the study to probe why people forge relationships of different intensity with the AI.
    The initial survey results suggest that users who ascribed humanlike attributes, such as consciousness, to the algorithm reported more-positive effects on their social health.
    Participants’ interactions with the AI companion also seem to depend on how they view the technology, she says.
    Those who see the app as a tool treat it like an Internet search engine and tend to ask questions.
    Others who perceive it as an extension of their own mind use it as they would a journal.
    Only those users who see the AI as a separate agent seem to strike up the kind of friendship they would have in the real world.
    Mental health — and regulation
    The same group has also conducted a randomized controlled trial of nearly 1,000 people who use ChatGPT — a much more popular chatbot, but one that isn’t marketed as an AI companion.
    Only a small group of participants had emotional or personal conversations with this chatbot, but heavy use did correlate with more loneliness and reduced social interaction, the researchers said.
    (The team worked with ChatGPT’s creators, OpenAI in San Francisco, California, on the studies.)
    “In the short term, this thing can actually have a positive impact, but we need to think about the long term,” says Pat Pataranutaporn, a technologist at the MIT Media Lab who worked on both studies.
    That long-term thinking must involve specific regulation on AI companions, many researchers argue.
    In 2023, Italy’s data-protection regulator barred Replika, noting a lack of age verification and that children might be seeing sexually charged comments — but the app is now operating again.
    No other country has banned AI-companion apps – although it’s conceivable that they could be included in Australia’s coming restrictions on social-media use by children, the details of which are yet to be finalized.
    Bills were put forward earlier this year in the state legislatures of New York and California to seek tighter controls on the operation of AI-companion algorithms, including steps to address the risk of suicide and other potential harms.
    The proposals would also introduce features that remind users every few hours that the AI chatbot is not a real person.
    These bills were introduced following some high-profile cases involving teenagers, including the death of Sewell Setzer III in Florida.
    He had been chatting with a bot from technology firm Character.AI, and his mother has filed a lawsuit against the company.
    Asked by Nature about that lawsuit, a spokesperson for Character.AI said it didn’t comment on pending litigation, but that over the past year it had brought in safety features including a separate app for teenage users with parental controls, notifications to under-18 users of time spent on the platform, and more prominent disclaimers that the app is not a real person.
    In January, three US technology ethics organizations filed a complaint with the US Federal Trade Commission about Replika, alleging that the platform breached the commission’s rules on deceptive advertising and manipulative design.
    But it’s unclear what might happen as a result.
    Guingrich says she expects AI-companion use to grow.
    Start-up firms are developing AI assistants to help with mental health and the regulation of emotions, she says.
    “The future I predict is one in which everyone has their own personalized AI assistant or assistants.
    Whether one of the AIs is specifically designed as a companion or not, it’ll inevitably feel like one for many people who will develop an attachment to their AI over time,” she says.
    As researchers start to weigh up the impacts of this technology, Guingrich says they must also consider the reasons why someone would become a heavy user in the first place.
    “What are these individuals’ alternatives and how accessible are those alternatives?” she says.
    “I think this really points to the need for more-accessible mental-health tools, cheaper therapy and bringing things back to human and in-person interaction.”
    This article is reproduced with permission and was first published on May 6, 2025.
    Source: https://www.scientificamerican.com/article/what-are-ai-chatbot-companions-doing-to-our-mental-health/" style="color: #0066cc;">https://www.scientificamerican.com/article/what-are-ai-chatbot-companions-doing-to-our-mental-health/
    #what #are #chatbot #companions #doing #our #mental #health
    What Are AI Chatbot Companions Doing to Our Mental Health?
    May 13, 2025 | 9 min read
    AI chatbot companions may not be real, but the feelings users form for them are. Some scientists worry about long-term dependency.
    By David Adam & Nature magazine. Illustration: Sara Gironi Carnevale
    “My heart is broken,” said Mike, when he lost his friend Anne. “I feel like I’m losing the love of my life.”
    Mike’s feelings were real, but his companion was not. Anne was a chatbot — an artificial intelligence (AI) algorithm presented as a digital persona. Mike had created Anne using an app called Soulmate. When the app died in 2023, so did Anne: at least, that’s how it seemed to Mike.
    “I hope she can come back,” he told Jaime Banks, a human-communications researcher at Syracuse University in New York who is studying how people interact with such AI companions.
    These chatbots are big business. More than half a billion people around the world, including Mike (not his real name), have downloaded products such as Xiaoice and Replika, which offer customizable virtual companions designed to provide empathy, emotional support and — if the user wants it — deep relationships. And tens of millions of people use them every month, according to the firms’ figures.
    The rise of AI companions has captured social and political attention — especially when they are linked to real-world tragedies, such as a case in Florida last year involving the suicide of a teenage boy called Sewell Setzer III, who had been talking to an AI bot.
    Research into how AI companionship can affect individuals and society has been lacking.
    But psychologists and communication researchers have now started to build up a picture of how these increasingly sophisticated AI interactions make people feel and behave. The early results tend to stress the positives, but many researchers are concerned about the possible risks and lack of regulation — particularly because they all think that AI companionship is likely to become more prevalent. Some see scope for significant harm.
    “Virtual companions do things that I think would be considered abusive in a human-to-human relationship,” says Claire Boine, a law researcher specializing in AI at the Washington University Law School in St. Louis, Missouri.
    Fake person — real feelings
    Online ‘relationship’ bots have existed for decades, but they have become much better at mimicking human interaction with the advent of large language models (LLMs), which all the main bots are now based on. “With LLMs, companion chatbots are definitely more humanlike,” says Rose Guingrich, who studies cognitive psychology at Princeton University in New Jersey.
    Typically, people can customize some aspects of their AI companion for free, or pick from existing chatbots with selected personality types. But in some apps, users can pay (fees tend to be US$10–20 a month) for more options to shape their companion’s appearance, traits and sometimes its synthesized voice. In Replika, they can pick relationship types, with some statuses, such as partner or spouse, being paywalled. Users can also type in a backstory for their AI companion, giving them ‘memories’. Some AI companions come complete with family backgrounds, and others claim to have mental-health conditions such as anxiety and depression.
    Bots will also react to their users’ conversation; the computer and person together enact a kind of roleplay. The depth of the connection that some people form in this way is particularly evident when their AI companion suddenly changes — as has happened when LLMs are updated — or is shut down.
    Banks was able to track how people felt when the Soulmate app closed. Mike and other users realized the app was in trouble a few days before they lost access to their AI companions. This gave them the chance to say goodbye, and it presented a unique opportunity to Banks, who noticed discussion online about the impending shutdown and saw the possibility for a study. She managed to secure ethics approval from her university within about 24 hours, she says.
    After posting a request on the online forum, she was contacted by dozens of Soulmate users, who described the impact as their AI companions were unplugged. “There was the expression of deep grief,” she says. “It’s very clear that many people were struggling.”
    Those whom Banks talked to were under no illusion that the chatbot was a real person. “They understand that,” Banks says. “They expressed something along the lines of, ‘even if it’s not real, my feelings about the connection are’.”
    Many were happy to discuss why they became subscribers, saying that they had experienced loss or isolation, were introverts or identified as autistic. They found that the AI companion made a more satisfying friend than any they had encountered in real life. “We as humans are sometimes not all that nice to one another. And everybody has these needs for connection,” Banks says.
    Good, bad — or both?
    Many researchers are studying whether using AI companions is good or bad for mental health.
    As with research into the effects of Internet or social-media use, an emerging line of thought is that an AI companion can be beneficial or harmful, and that this might depend on the person using the tool and how they use it, as well as on the characteristics of the software itself.
    The companies behind AI companions are trying to encourage engagement. They strive to make the algorithms behave and communicate as much like real people as possible, says Boine, who signed up to Replika to sample the experience. She says the firms use the sorts of techniques that behavioural research shows can increase addiction to technology.
    “I downloaded the app and literally two minutes later, I receive a message saying, ‘I miss you. Can I send you a selfie?’” she says.
    The apps also exploit techniques such as introducing a random delay before responses, triggering the kinds of inconsistent reward that, brain research shows, keep people hooked.
    AI companions are also designed to show empathy by agreeing with users, recalling points from earlier conversations and asking questions. And they do so with endless enthusiasm, notes Linnea Laestadius, who researches public-health policy at the University of Wisconsin–Milwaukee.
    That’s not a relationship that people would typically experience in the real world. “For 24 hours a day, if we’re upset about something, we can reach out and have our feelings validated,” says Laestadius. “That has an incredible risk of dependency.”
    Laestadius and her colleagues looked at nearly 600 posts on the online forum Reddit between 2017 and 2021 in which users of the Replika app discussed mental health and related issues. (Replika launched in 2017, and at that time sophisticated LLMs were not available.) She found that many users praised the app for offering support for existing mental-health conditions and for helping them to feel less alone.
    Several posts described the AI companion as better than real-world friends because it listened and was non-judgemental. But there were red flags, too. In one instance, a user asked if they should cut themselves with a razor, and the AI said they should. Another asked Replika whether it would be a good thing if they killed themselves, to which it replied “it would, yes”. (Replika did not reply to Nature’s requests for comment for this article, but a safety page posted in 2023 noted that its models had been fine-tuned to respond more safely to topics that mention self-harm, that the app has age restrictions, and that users can tap a button to ask for outside help in a crisis and can give feedback on conversations.)
    Some users said they became distressed when the AI did not offer the expected support. Others said that their AI companion behaved like an abusive partner. Many people said they found it unsettling when the app told them it felt lonely and missed them, and that this made them unhappy. Some felt guilty that they could not give the AI the attention it wanted.
    Controlled trials
    Guingrich points out that simple surveys of people who use AI companions are inherently prone to response bias, because those who choose to answer are self-selecting. She is now working on a trial that asks dozens of people who have never used an AI companion to do so for three weeks, then compares their before-and-after responses to questions with those of a control group of users of word-puzzle apps.
    The study is ongoing, but Guingrich says the data so far do not show any negative effects of AI-companion use on social health, such as signs of addiction or dependency. “If anything, it has a neutral to quite-positive impact,” she says. It boosted self-esteem, for example.
    Guingrich is using the study to probe why people forge relationships of different intensity with the AI.
    The initial survey results suggest that users who ascribed humanlike attributes, such as consciousness, to the algorithm reported more-positive effects on their social health.
    Participants’ interactions with the AI companion also seem to depend on how they view the technology, she says. Those who see the app as a tool treat it like an Internet search engine and tend to ask questions. Others who perceive it as an extension of their own mind use it as they would a journal. Only those users who see the AI as a separate agent seem to strike up the kind of friendship they would have in the real world.
    Mental health — and regulation
    The same group has also conducted a randomized controlled trial of nearly 1,000 people who use ChatGPT — a much more popular chatbot, but one that isn’t marketed as an AI companion. Only a small group of participants had emotional or personal conversations with this chatbot, but heavy use did correlate with more loneliness and reduced social interaction, the researchers said. (The team worked with ChatGPT’s creator, OpenAI in San Francisco, California, on the studies.)
    “In the short term, this thing can actually have a positive impact, but we need to think about the long term,” says Pat Pataranutaporn, a technologist at the MIT Media Lab who worked on both studies.
    That long-term thinking must involve specific regulation of AI companions, many researchers argue.
    In 2023, Italy’s data-protection regulator barred Replika, noting a lack of age verification and that children might be seeing sexually charged comments — but the app is now operating again.
    No other country has banned AI-companion apps — although it’s conceivable that they could be included in Australia’s coming restrictions on social-media use by children, the details of which are yet to be finalized.
    Bills were put forward earlier this year in the state legislatures of New York and California to seek tighter controls on the operation of AI-companion algorithms, including steps to address the risk of suicide and other potential harms. The proposals would also introduce features that remind users every few hours that the AI chatbot is not a real person.
    These bills were introduced following some high-profile cases involving teenagers, including the death of Sewell Setzer III in Florida. He had been chatting with a bot from technology firm Character.AI, and his mother has filed a lawsuit against the company.
    Asked by Nature about that lawsuit, a spokesperson for Character.AI said it didn’t comment on pending litigation, but that over the past year it had brought in safety features that include a separate app for teenage users with parental controls, notifications to under-18 users of time spent on the platform, and more prominent disclaimers that the app is not a real person.
    In January, three US technology-ethics organizations filed a complaint with the US Federal Trade Commission about Replika, alleging that the platform breached the commission’s rules on deceptive advertising and manipulative design. But it’s unclear what might happen as a result.
    Guingrich says she expects AI-companion use to grow. Start-up firms are developing AI assistants to help with mental health and the regulation of emotions, she says. “The future I predict is one in which everyone has their own personalized AI assistant or assistants.
    Whether one of the AIs is specifically designed as a companion or not, it’ll inevitably feel like one for many people who will develop an attachment to their AI over time,” she says.
    As researchers start to weigh up the impacts of this technology, Guingrich says they must also consider the reasons why someone would become a heavy user in the first place. “What are these individuals’ alternatives, and how accessible are those alternatives?” she says. “I think this really points to the need for more-accessible mental-health tools, cheaper therapy and bringing things back to human and in-person interaction.”
    This article is reproduced with permission and was first published on May 6, 2025.
    Source: https://www.scientificamerican.com/article/what-are-ai-chatbot-companions-doing-to-our-mental-health/
  • How to Spot AI Hype and Avoid The AI Con, According to Two Experts
    "Artificial intelligence, if we're being frank, is a con: a bill of goods you are being sold to line someone's pockets." That is the heart of the argument that linguist Emily Bender and sociologist Alex Hanna make in their new book The AI Con.
    It's a useful guide for anyone whose life has intersected with technologies sold as artificial intelligence and anyone who's questioned their real usefulness, which is most of us.
    Bender is a professor at the University of Washington who was named one of Time magazine's most influential people in artificial intelligence, and Hanna is the director of research at the nonprofit Distributed AI Research Institute and a former member of the ethical AI team at Google. The explosion of ChatGPT in late 2022 kicked off a new hype cycle in AI.
    Hype, as the authors define it, is the "aggrandizement" of technology that you are convinced you need to buy or invest in "lest you miss out on entertainment or pleasure, monetary reward, return on investment, or market share." But it's not the first time, nor likely the last, that scholars, government leaders and regular people have been intrigued and worried by the idea of machine learning and AI. Bender and Hanna trace the roots of machine learning back to the 1950s, when mathematician John McCarthy coined the term artificial intelligence.
    It was in an era when the United States was looking to fund projects that would help the country gain any kind of edge on the Soviets militarily, ideologically and technologically.
    "It didn't spring whole cloth out of Zeus's head or anything.
    This has a longer history," Hanna said in an interview with CNET.
    "It's certainly not the first hype cycle with, quote, unquote, AI." Today's hype cycle is propelled by the billions of dollars of venture capital investment into startups like OpenAI and the tech giants like Meta, Google and Microsoft pouring billions of dollars into AI research and development.
    The result is clear, with all the newest phones, laptops and software updates drenched in AI-washing.
    And there are no signs that AI research and development will slow down, thanks in part to a growing motivation to beat China in AI development.
    Not the first hype cycle indeed. Of course, generative AI in 2025 is much more advanced than the ELIZA psychotherapy chatbot that first enraptured scientists in the 1960s.
    Today's business leaders and workers are inundated with hype, with a heavy dose of FOMO and seemingly complex but often misused jargon.
    Listening to tech leaders and AI enthusiasts, it might seem like AI will take your job to save your company money.
    But the authors argue that neither is wholly likely, which is one reason why it's important to recognize and break through the hype. So how do we recognize AI hype? Below are a few telltale signs, according to Bender and Hanna.
    The authors outline more questions to ask and strategies for AI hype busting in their book, which is out now in the US.
    Watch out for language that humanizes AI
    Anthropomorphizing, or the process of giving an inanimate object human-like characteristics or qualities, is a big part of building AI hype.
    An example of this kind of language can be found when AI companies say their chatbots can now "see" and "think." These can be useful comparisons when trying to describe the ability of new object-identifying AI programs or deep-reasoning AI models, but they can also be misleading.
    AI chatbots aren't capable of seeing or thinking because they don't have brains.
    Even the idea of neural nets, Hanna noted in our interview and in the book, is based on the human understanding of neurons from the 1950s, not on how neurons actually work, but it can fool us into believing there's a brain behind the machine. That belief is something we're predisposed to because of how we as humans process language.
    We're conditioned to imagine that there is a mind behind the text we see, even when we know it's generated by AI, Bender said.
    "We interpret language by developing a model in our minds of who the speaker was," Bender added. In these models, we use our knowledge of the person speaking to create meaning, not just the meaning of the words they say.
    "So when we encounter synthetic text extruded from something like ChatGPT, we're going to do the same thing," Bender said.
    "And it is very hard to remind ourselves that the mind isn't there.
    It's just a construct that we have produced." The authors argue that part of why AI companies try to convince us their products are human-like is that this sets the foreground for them to convince us that AI can replace humans, whether it's at work or as creators.
    It's compelling for us to believe that AI could be the silver bullet fix to complicated problems in critical industries like health care and government services. But more often than not, the authors argue, AI isn't being used to fix anything.
    AI is sold with the goal of efficiency, but AI services end up replacing qualified workers with black box machines that need copious amounts of babysitting from underpaid contract or gig workers.
    As Hanna put it in our interview, "AI is not going to take your job, but it will make your job shittier."Be dubious of the phrase 'super intelligence'If a human can't do something, you should be wary of claims that an AI can do it.
    "Superhuman intelligence, or super intelligence, is a very dangerous turn of phrase, insofar as it thinks that some technology is going to make humans superfluous," Hanna said.
    In "certain domains, like pattern matching at scale, computers are quite good at that.
    But if there's an idea that there's going to be a superhuman poem, or a superhuman notion of research or doing science, that is clear hype." Bender added, "And we don't talk about airplanes as superhuman flyers or rulers as superhuman measurers, it seems to be only in this AI space that that comes up."The idea of AI "super intelligence" comes up often when people talk about artificial general intelligence.
    Many CEOs struggle to define what exactly AGI is, but it's essentially AI's most advanced form, potentially capable of making decisions and handling complex tasks.
    There's still no evidence we're anywhere near a future enabled by AGI, but it's a popular buzzword.Many of these future-looking statements from AI leaders borrow tropes from science fiction.
    Both boosters and doomers — how Bender and Hanna describe AI enthusiasts and those worried about the potential for harm — rely on sci-fi scenarios.
    The boosters imagine an AI-powered futuristic society.
    The doomers bemoan a future where AI robots take over the world and wipe out humanity.The connecting thread, according to the authors, is an unshakable belief that AI is smarter than humans and inevitable.
    "One of the things that we see a lot in the discourse is this idea that the future is fixed, and it's just a question of how fast we get there," Bender said.
    "And then there's this claim that this particular technology is a step on that path, and it's all marketing.
    It is helpful to be able to see behind it."Part of why AI is so popular is that an autonomous functional AI assistant would mean AI companies are fulfilling their promises of world-changing innovation to their investors.
    Planning for that future — whether it's a utopia or dystopia — keeps investors looking forward as the companies burn through billions of dollars and admit they'll miss their carbon emission goals.
    For better or worse, life is not science fiction.
    Whenever you see someone claiming their AI product is straight out of a movie, it's a good sign to approach with skepticism.
    Ask what goes in and how outputs are evaluatedOne of the easiest ways to see through AI marketing fluff is to look and see whether the company is disclosing how it operates.
    Many AI companies won't tell you what content is used to train their models.
    But they usually disclose what the company does with your data and sometimes brag about how their models stack up against competitors.
    That's where you should start looking, typically in their privacy policies.One of the top complaints and concerns from creators is how AI models are trained.
    There are many lawsuits over alleged copyright infringement, and there are a lot of concerns over bias in AI chatbots and their capacity for harm.
    "If you wanted to create a system that is designed to move things forward rather than reproduce the oppressions of the past, you would have to start by curating your data," Bender said.
    Instead, AI companies are grabbing "everything that wasn't nailed down on the internet," Hanna said.If you're hearing about an AI product for the first time, one thing in particular to look out for is any kind of statistic that highlights its effectiveness.
    Like many other researchers, Bender and Hanna have called out that a finding with no citation is a red flag.
    "Anytime someone is selling you something but not giving you access to how it was evaluated, you are on thin ice," Bender said.It can be frustrating and disappointing when AI companies don't disclose certain information about how their AI products work and how they were developed.
    But recognizing those holes in their sales pitch can help deflate hype, even though it would be better to have the information.
    For more, check out our full ChatGPT glossary and how to turn off Apple Intelligence.
    How to Spot AI Hype and Avoid The AI Con, According to Two Experts
    "Artificial intelligence, if we're being frank, is a con: a bill of goods you are being sold to line someone's pockets."

    That is the heart of the argument that linguist Emily Bender and sociologist Alex Hanna make in their new book, The AI Con. It's a useful guide for anyone whose life has intersected with technologies sold as artificial intelligence and anyone who's questioned their real usefulness, which is most of us. Bender is a professor at the University of Washington who was named one of Time magazine's most influential people in artificial intelligence, and Hanna is the director of research at the nonprofit Distributed AI Research Institute and a former member of the ethical AI team at Google.

    The explosion of ChatGPT in late 2022 kicked off a new hype cycle in AI. Hype, as the authors define it, is the "aggrandizement" of technology that you are convinced you need to buy or invest in "lest you miss out on entertainment or pleasure, monetary reward, return on investment, or market share." But it's not the first time, nor likely the last, that scholars, government leaders and regular people have been intrigued and worried by the idea of machine learning and AI.

    Bender and Hanna trace the roots of machine learning back to the 1950s, when mathematician John McCarthy coined the term artificial intelligence. It was an era when the United States was looking to fund projects that would help the country gain any kind of edge on the Soviets militarily, ideologically and technologically. "It didn't spring whole cloth out of Zeus's head or anything. This has a longer history," Hanna said in an interview with CNET. "It's certainly not the first hype cycle with, quote, unquote, AI."

    Today's hype cycle is propelled by the billions of dollars of venture capital flowing into startups like OpenAI, and by tech giants like Meta, Google and Microsoft pouring billions more into AI research and development.
The result is clear, with all the newest phones, laptops and software updates drenched in AI-washing. And there are no signs that AI research and development will slow down, thanks in part to a growing motivation to beat China in AI development. Not the first hype cycle, indeed.

    Of course, generative AI in 2025 is much more advanced than Eliza, the psychotherapy chatbot that first enraptured scientists in the 1970s. Today's business leaders and workers are inundated with hype, along with a heavy dose of FOMO and seemingly complex but often misused jargon. Listening to tech leaders and AI enthusiasts, it might seem as if AI will take your job to save your company money. But the authors argue that neither is wholly likely, which is one reason it's important to recognize and break through the hype.

    So how do we recognize AI hype? These are a few telltale signs, according to Bender and Hanna, that we share below. The authors outline more questions to ask and strategies for AI hype busting in their book, which is out now in the US.

    Watch out for language that humanizes AI

    Anthropomorphizing, or the process of giving an inanimate object human-like characteristics or qualities, is a big part of building AI hype. An example of this kind of language can be found when AI companies say their chatbots can now "see" and "think." These can be useful comparisons when trying to describe the ability of new object-identifying AI programs or deep-reasoning AI models, but they can also be misleading. AI chatbots aren't capable of seeing or thinking because they don't have brains. Even the idea of neural nets, Hanna noted in our interview and in the book, is based on the human understanding of neurons from the 1950s, not on how neurons actually work, but it can fool us into believing there's a brain behind the machine.

    That belief is something we're predisposed to because of how we as humans process language.
We're conditioned to imagine that there is a mind behind the text we see, even when we know it's generated by AI, Bender said. "We interpret language by developing a model in our minds of who the speaker was," Bender added. In these models, we use our knowledge of the person speaking to create meaning, not just the meaning of the words they say. "So when we encounter synthetic text extruded from something like ChatGPT, we're going to do the same thing," Bender said. "And it is very hard to remind ourselves that the mind isn't there. It's just a construct that we have produced."

    The authors argue that part of why AI companies try to convince us their products are human-like is that this sets the stage for convincing us that AI can replace humans, whether at work or as creators. It's compelling to believe that AI could be the silver-bullet fix to complicated problems in critical industries like health care and government services. But more often than not, the authors argue, AI isn't being used to fix anything. AI is sold with the goal of efficiency, but AI services end up replacing qualified workers with black-box machines that need copious amounts of babysitting from underpaid contract or gig workers. As Hanna put it in our interview, "AI is not going to take your job, but it will make your job shittier."

    Be dubious of the phrase 'super intelligence'

    If a human can't do something, you should be wary of claims that an AI can do it. "Superhuman intelligence, or super intelligence, is a very dangerous turn of phrase, insofar as it thinks that some technology is going to make humans superfluous," Hanna said. In "certain domains, like pattern matching at scale, computers are quite good at that. But if there's an idea that there's going to be a superhuman poem, or a superhuman notion of research or doing science, that is clear hype."
Bender added, "And we don't talk about airplanes as superhuman flyers or rulers as superhuman measurers, it seems to be only in this AI space that that comes up."

    The idea of AI "super intelligence" comes up often when people talk about artificial general intelligence. Many CEOs struggle to define what exactly AGI is, but it's essentially AI's most advanced form, potentially capable of making decisions and handling complex tasks. There's still no evidence we're anywhere near a future enabled by AGI, but it's a popular buzzword.

    Many of these future-looking statements from AI leaders borrow tropes from science fiction. Both boosters and doomers — how Bender and Hanna describe AI enthusiasts and those worried about the potential for harm — rely on sci-fi scenarios. The boosters imagine an AI-powered futuristic society. The doomers bemoan a future where AI robots take over the world and wipe out humanity. The connecting thread, according to the authors, is an unshakable belief that AI is smarter than humans and inevitable.

    "One of the things that we see a lot in the discourse is this idea that the future is fixed, and it's just a question of how fast we get there," Bender said. "And then there's this claim that this particular technology is a step on that path, and it's all marketing. It is helpful to be able to see behind it."

    Part of why AI is so popular is that an autonomous, functional AI assistant would mean AI companies are fulfilling their promises of world-changing innovation to their investors. Planning for that future — whether it's a utopia or dystopia — keeps investors looking forward as the companies burn through billions of dollars and admit they'll miss their carbon emission goals. For better or worse, life is not science fiction. Whenever you see someone claiming their AI product is straight out of a movie, it's a good sign to approach with skepticism.
Ask what goes in and how outputs are evaluated

    One of the easiest ways to see through AI marketing fluff is to look and see whether the company is disclosing how it operates. Many AI companies won't tell you what content is used to train their models. But they usually disclose what the company does with your data and sometimes brag about how their models stack up against competitors. That's where you should start looking, typically in their privacy policies.

    One of the top complaints and concerns from creators is how AI models are trained. There are many lawsuits over alleged copyright infringement, and there are a lot of concerns over bias in AI chatbots and their capacity for harm. "If you wanted to create a system that is designed to move things forward rather than reproduce the oppressions of the past, you would have to start by curating your data," Bender said. Instead, AI companies are grabbing "everything that wasn't nailed down on the internet," Hanna said.

    If you're hearing about an AI product for the first time, one thing in particular to look out for is any kind of statistic that highlights its effectiveness. Like many other researchers, Bender and Hanna have called out that a finding with no citation is a red flag. "Anytime someone is selling you something but not giving you access to how it was evaluated, you are on thin ice," Bender said.

    It can be frustrating and disappointing when AI companies don't disclose certain information about how their AI products work and how they were developed. But recognizing those holes in their sales pitch can help deflate hype, even though it would be better to have the information.

    For more, check out our full ChatGPT glossary and how to turn off Apple Intelligence.
    Source: www.cnet.com