• to a T review – surrealism and empathy from the maker of Katamari Damacy

    to a T – what a strange thing to happen (Annapurna Interactive)
    Having your arms stuck in a permanent T-pose leads to a wonderfully surreal narrative adventure, in this new indie treat from Katamari creator Keita Takahashi.
    Keita Takahashi seems to be a very nice man. We met him back in 2018, and liked him immensely, but we’re genuinely surprised he’s still working in the games industry. He rose to fame with the first two Katamari Damacy games but, after leaving Bandai Namco, his assertion that he wanted to leave gaming behind and design playgrounds for children seemed like a much more obvious career path for someone who absolutely doesn’t want to be stuck making sequels or generic action games.
    That’s certainly not been his fate and, while titles like Noby Noby Boy and Wattam were wonderfully weird and inventive, they weren’t the breakout hits that his bank balance probably needed. His latest refusal to toe the line probably isn’t destined to make him a billionaire either, but we’re sure that was never the point of to a T.
    Instead, this is just a relentlessly sweet and charming game about the evils of bullying and the benefits of being nice to people. It’s frequently surreal and ridiculous, but also capable of being serious, and somewhat dark, when it feels the need. Which, given all the singing giraffes, is quite some accomplishment.
    The game casts you as a young schoolkid whose arms are permanently stuck in a T-pose, with both stretched out 90° from his torso. If you’re waiting for an explanation as to why then we’re afraid we can’t tell you, because your character (who you can customise and name as you see fit, along with his dog) doesn’t know either. You find out eventually and the answer is… nothing you would expect.
    This has all been going on for a while before the game starts, so you’re by now well used to sidling through doors and getting your dog to help you dress. You’re also regularly bullied at school, which makes it obvious that being stuck like this is just a metaphor for any difference or peculiarity in real life.
    Although the specific situations in to a T are fantastical, including the fact that the Japanese village you live in is also populated by anthropomorphic animals (most notably a cadre of food-obsessed giraffes), its take on bullying is surprisingly nuanced and well written. There are also some fun songs that are repeated just enough to become unavoidable earworms.
    The problem is that as well meaning as all this is, there’s no core gameplay element to make it a compelling video game. You can wander around talking to people, and a lot of what they say can be interesting and/or charmingly silly, but that’s all you’re doing. The game describes itself as a ‘narrative adventure’ and that’s very accurate, but what results is the sort of barely interactive experience that makes a Telltale game seem like Doom by comparison.
    There are some short little mini-games, like cleaning your teeth and eating breakfast, but the only goal beyond just triggering story sequences is collecting coins that you can spend on new outfits. This is gamified quite a bit when you realise your arms give you the ability to glide short distances, but it’s still very basic stuff.
    One chapter also lets you play as your dog, trying to solve an array of simple puzzles and engaging in very basic platforming, but while this is more interactive than the normal chapters it’s still not really much fun in its own right.

    Everything is very charming – the cartoonish visuals are reminiscent of a slightly more realistic-looking Wattam – but none of it really amounts to very much. The overall message is about getting on with people no matter their differences, but while that doesn’t necessarily come across as trite it’s also not really the sort of thing you need a £15 video game, with zero replayability, to tell you about.
    It also doesn’t help that the game can be quite frustrating to play through, as it’s often hard to know what you’re supposed to do next, or where you’re meant to be going. The lack of camera controls means it’s hard to act even when you do know what destination you’re aiming for, either because the screen is too zoomed in, something’s blocking your view, or because the perspective keeps changing and confusing you.
    As with Wattam, we don’t feel entirely comfortable criticising the game for its failings. We’ll take a game trying to do something new and interesting over a workmanlike sequel any day of the week – whether it succeeds or not – but there’s so little to the experience it’s hard to imagine this fitting anyone to a T.

    to a T review summary

    In Short: Charming, silly, and occasionally profound, but Keita Takahashi’s latest lacks the gameplay hook of Katamari Damacy, even if it is surprisingly well written.
    Pros: Wonderfully and unashamedly bizarre, from the premise on down. A great script that touches on some dark subjects, plus charming visuals and music.
    Cons: There’s very little gameplay involved and what there is, is either very simple or awkward to control. Barely five hours long, with no replayability.
    Score: 6/10

    Formats: PlayStation 5 (reviewed), Xbox Series X/S, and PC
    Price: £15.49
    Publisher: Annapurna Interactive
    Developer: uvula
    Release Date: 28th May 2025
    Age Rating: 7

    Who knew giraffes were so good at making sandwiches (Annapurna Interactive)

    metro.co.uk
  • AI Voice Agents Are Ready to Take Your Call

    Improvements in the technology behind voice-based AI bots are making them more prolific and humanlike in phone calls.
    www.wsj.com
  • Talk to Me: NVIDIA and Partners Boost People Skills and Business Smarts for AI Agents

    Call it the ultimate proving ground. Collaborating with teammates in the modern workplace requires fast, fluid thinking. Providing insights quickly, while juggling webcams and office messaging channels, is a startlingly good test, and enterprise AI is about to pass it — just in time to provide assistance to busy knowledge workers.
    To support enterprises in boosting productivity with AI teammates, NVIDIA today introduced a new NVIDIA Enterprise AI Factory validated design at COMPUTEX. IT teams deploying and scaling AI agents can use the design to build accelerated infrastructure and easily integrate with platforms and tools from NVIDIA software partners.
    NVIDIA also unveiled new NVIDIA AI Blueprints to aid developers building smart AI teammates. Using the new blueprints, developers can enhance employee productivity through adaptive avatars that understand natural communication and have direct access to enterprise data.
    Blueprints for Engaging, Insightful AI Agents
    Enterprises can use NVIDIA’s latest AI Blueprints to create agents that align with their business objectives. Using the Tokkio NVIDIA AI Blueprint, developers can create interactive digital humans that can respond to emotional and contextual cues, while the AI-Q blueprint enables queries of many data sources to infuse AI agents with the company’s knowledge and gives them intelligent reasoning capabilities.
    Building these intelligent AI agents is a full-stack challenge. These blueprints are designed to run on NVIDIA’s accelerated computing infrastructure — including data centers built with the universal NVIDIA RTX PRO 6000 Server Edition GPU, which is part of NVIDIA’s vision for AI factories as complete systems for creating and putting AI to work.
    The Tokkio blueprint simplifies building interactive AI agent avatars for more natural and humanlike interactions.
    These AI agents are designed for intelligence. They integrate with foundational blueprints including the AI-Q NVIDIA Blueprint, part of the NVIDIA AI Data Platform, which uses retrieval-augmented generation and NVIDIA NeMo Retriever microservices to access enterprise data.
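    To make the retrieval step concrete, here is a minimal sketch of the retrieval-augmented generation pattern described above: embed a query, rank enterprise documents by similarity, and hand the best matches to an LLM as grounding context. It assumes an embedding microservice exposing an OpenAI-compatible /v1/embeddings endpoint, which is how NVIDIA's NIM microservices are typically served; the base URL, model name, and documents below are illustrative placeholders, not code from the blueprints themselves.

```python
# Minimal RAG sketch (illustrative, not the AI-Q blueprint itself).
# Assumes an OpenAI-compatible embedding endpoint, e.g. a NeMo Retriever
# NIM microservice; the URL and model name below are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1",  # placeholder NIM endpoint
                api_key="not-used")
EMBED_MODEL = "nvidia/nv-embedqa-e5-v5"  # placeholder model name

documents = [
    "Fraud reports must be escalated to the case team within 24 hours.",
    "Travel meal expenses are capped at $75 per day.",
]

def embed(texts):
    # One embedding vector per input string.
    resp = client.embeddings.create(model=EMBED_MODEL, input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(documents)

def retrieve(query, k=1):
    # Rank stored documents by cosine similarity to the query.
    q = embed([query])[0]
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(-sims)[:k]]

# The retrieved passages would be prepended to the LLM prompt, so the
# agent answers from enterprise data rather than from memory alone.
print(retrieve("How quickly do fraud reports need escalating?"))
```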

    AI Agents Boost People’s Productivity
    Customers around the world are already using these AI agent solutions.
    At the COACH Play store on Cat Street in Harajuku, Tokyo, imma provides an interactive in-store experience and gives personalized styling advice through natural, real-time conversation.
    Marking COACH’s debut in digital humans and AI-driven retail, the initiative merges cutting-edge technology with fashion to create an immersive and engaging customer journey. Developed by Aww Inc. and powered by NVIDIA ACE, the underlying technology that makes up the Tokkio blueprint, imma delivers lifelike interactions and tailored style suggestions.
    The experience allows for dynamic, unscripted conversations designed to connect with visitors on a personal level, highlighting COACH’s core values of courage and self-expression.
    “Through this groundbreaking innovation in the fashion retail space, customers can now engage in real-time, free-flowing conversations with our iconic virtual human, imma — an AI-powered stylist — right inside the store in the heart of Harajuku,” said Yumi An King, executive director of Aww Inc. “It’s been inspiring to see visitors enjoy personalized styling advice and build a sense of connection through natural conversation. We’re excited to bring this vision to life with NVIDIA and continue redefining what’s possible at the intersection of AI and fashion.”

    Watch how Aww Inc. is leveraging the latest Tokkio NVIDIA AI Blueprint in its AI-powered virtual human stylist, imma, to connect with shoppers through natural conversation and provide personalized styling advice. 
    Royal Bank of Canada developed Jessica, an AI agent avatar that assists employees in handling reports of fraud. With Jessica’s help, bank employees can access the most up-to-date information so they can handle fraud reports faster and more accurately, enhancing client service.
    Ubitus and the Mackay Memorial Hospital, located in Taipei, are teaming up to make hospital visits easier and friendlier with the help of AI-powered digital humans. These lifelike avatars are created using advanced 8K facial scanning and brought to life by Ubitus’ AI model integrated with NVIDIA ACE technologies, including NVIDIA Audio2Face 3D for expressions and NVIDIA Riva for speech.
    Deployed on interactive touchscreens, these digital humans offer hospital navigation, health education and registration support — reducing the burden on frontline staff. They also provide emotional support in pediatric care, aimed at reducing anxiety during wait times.

    Ubitus and the Mackay Memorial Hospital are making hospital visits easier and friendlier with the help of NVIDIA AI-powered digital humans.
    Cincinnati Children’s Hospital is exploring the potential of digital avatar technology to enhance the pediatric patient experience. As part of its ongoing innovation efforts, the hospital is evaluating platforms such as NVIDIA’s Digital Human Blueprint to inform the early design of “Care Companions” — interactive, friendly avatars that could help young patients better understand their healthcare journey.
    “Children can have a lot of questions about their experiences in the hospital, and often respond more to a friendly avatar, like stylized humanoids, animals or robots, that speaks at their level of understanding,” said Dr. Ryan Moore, chief of emerging technologies at Cincinnati Children’s Hospital. “Through our Care Companions built with NVIDIA AI, gamified learning, voice interaction and familiar digital experiences, Cincinnati Children’s Hospital aims to improve understanding, reduce anxiety and support lifelong health for young patients.”
    This early-stage exploration is part of the hospital’s broader initiative to evaluate new and emerging technologies that could one day enhance child-centered care.
    Software Platforms Support Agents on AI Factory Infrastructure 
    AI agents are one of the many workloads driving enterprises to reimagine their data centers as AI factories built for modern applications. Using the new NVIDIA Enterprise AI Factory validated design, enterprises can build data centers that provide universal acceleration for agentic AI, as well as design, engineering and business operations.
    The Enterprise AI Factory validated design features support for software tools and platforms from NVIDIA partners, making it easier to build and run generative and agent-based AI applications.
    Developers deploying AI agents on their AI factory infrastructure can tap into partner platforms such as Dataiku, DataRobot, Dynatrace and JFrog to build, orchestrate, operationalize and scale AI workflows. The validated design supports frameworks from CrewAI, as well as vector databases from DataStax and Elastic, to help agents store, search and retrieve data.
    With tools from partners including Arize AI, Galileo, SuperAnnotate, Unstructured and Weights & Biases, developers can conduct data labeling, synthetic data generation, model evaluation and experiment tracking. Orchestration and deployment partners including Canonical, Nutanix and Red Hat support seamless scaling and management of AI agent workloads across complex enterprise environments. Enterprises can secure their AI factories with software from safety and security partners including ActiveFence, CrowdStrike, Fiddler, Securiti and Trend Micro.
    The NVIDIA Enterprise AI Factory validated design and latest AI Blueprints empower businesses to build smart, adaptable AI agents that enhance productivity, foster collaboration and keep pace with the demands of the modern workplace.
    blogs.nvidia.com
  • Is Anyone Actually Using Alexa+?

    Amazon’s newly updated AI voice assistant Alexa+ officially started rolling out to select customers roughly six weeks ago, but real-world users are hard to find. Reuters claims it searched "dozens" of news sites, as well as social media platforms like YouTube, TikTok, X, Bluesky, Instagram, Facebook, and Twitch, but was unable to find any verifiable Alexa+ users. Reuters did find two users on Reddit who claimed they had used the updated tool, but these users weren’t able to provide any hard evidence they had really accessed it or verify their identities.
    Amazon says the newly revamped Alexa will provide a more humanlike conversational flow, describing it as the end of "Alexa voice" at April's product reveal. The tech giant has also promised numerous ambitious-sounding "agentic AI" features that "will enable Alexa to navigate the internet in a self-directed way to complete tasks on your behalf, behind the scenes." For example, arranging to have the user's oven fixed with a service provider, without any intervention beyond the initial command.
    The tech company largely denied the reports in a statement to Reuters, saying that “hundreds of thousands of customers now have access to Alexa+,” adding that though many of the users are Amazon employees, “the overwhelming majority are customers that requested early access.”
    Meanwhile, Avi Greengart, lead analyst at Techsponential, commented that the irregularities in the Alexa+ release fit “a pattern of a lot of companies announcing services or products when they are close to being ready, but not quite—that last mile is a lot farther away than they anticipated.”
    We know the Alexa+ project was hit by numerous setbacks ahead of the official launch. In February, the AI upgrade was delayed by a full month past its initial deadline, reportedly due to a “new version of the assistant giving incorrect answers to test questions at a recent meeting,” according to an anonymous employee who spoke to The Washington Post. The project had previously been delayed around the time of the US presidential election in November. When it does eventually roll out fully, Alexa+ will cost $19.99 a month but will be free for Amazon Prime subscribers.
    me.pcmag.com
  • MANLI Confirms The New GeForce RTX 5090D Will Feature 24 GB VRAM Capacity

    Rumors of a significantly downgraded RTX 5090D are emerging, suggesting that the card will now feature less VRAM than before.
    Newer RTX 5090D Model will Reportedly Arrive with 8 GB Less VRAM; MANLI Says the GPU will Start Shipping After July
    NVIDIA's cut-down RTX 5090D edition for China is getting a massive memory downgrade, as per the latest report. At this point, it should be no surprise, since the US has banned the existing GeForce RTX 5090D because it doesn't comply with the new export policy. According to confirmation from an NVIDIA board partner, the upcoming RTX 5090D will have just 24 GB of VRAM, down from 32 GB.
    This was confirmed by MANLI in a chat (via @harukaze5719), where the representative informed a user about the change. A similar report recently suggested the same, indicating the new RTX 5090D will share specs with the RTX PRO 5000 GPU and will have only 24 GB of GDDR7 memory on a 384-bit memory bus. If that's true, then the memory bandwidth will stay below 1.4 TB/s, which is the limit the US government wants NVIDIA to abide by for China.
    Credit: Weibo.com
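    As a rough sanity check on that bandwidth figure, peak memory bandwidth is simply the bus width in bytes multiplied by the per-pin data rate. The sketch below assumes the new card keeps the 28 Gbps GDDR7 speed bin of the full RTX 5090; that speed is an assumption, not a confirmed spec for the revised RTX 5090D.

```python
# Peak memory bandwidth = (bus width in bits / 8) * per-pin data rate.
# Assumption: 28 Gbps GDDR7, the speed bin used on the full RTX 5090.
bus_width_bits = 384
data_rate_gbps = 28
bandwidth_gb_s = bus_width_bits / 8 * data_rate_gbps
print(f"{bandwidth_gb_s:.0f} GB/s")  # 1344 GB/s, i.e. ~1.34 TB/s, under the 1.4 TB/s cap
```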
    This will significantly downgrade the RTX 5090D, which is currently identical in almost all specifications to the full-fledged RTX 5090 GPU. The newer RTX 5090D is supposedly also getting a big downgrade in core count, rumored to bring just 14,080 CUDA cores compared to 21,760. This would reduce the gaming and productivity performance of the RTX 5090D drastically, but keep in mind that this is yet to be confirmed by NVIDIA or its board partners.
    The newer GeForce RTX 5090D will reportedly start shipping at the end of July or the start of August. This marks the end of the original RTX 5090D, which was discontinued in the second quarter. Later, NVIDIA is rumored to release a newer RTX 50 Blackwell GPU, either the RTX 5080 Super or RTX 5080 Ti, which supposedly features 24 GB of memory as well.
    News Source: Weibo

    wccftech.com
  • What Are AI Chatbot Companions Doing to Our Mental Health?
    May 13, 20259 min readWhat Are AI Chatbot Companions Doing to Our Mental Health?AI chatbot companions may not be real, but the feelings users form for them are.
    Some scientists worry about long-term dependencyBy David Adam & Nature magazine Sara Gironi Carnevale“My heart is broken,” said Mike, when he lost his friend Anne.
    “I feel like I’m losing the love of my life.”Mike’s feelings were real, but his companion was not.
    Anne was a chatbot — an artificial intelligence (AI) algorithm presented as a digital persona.
    Mike had created Anne using an app called Soulmate.
    When the app died in 2023, so did Anne: at least, that’s how it seemed to Mike.“I hope she can come back,” he told Jaime Banks, a human-communications researcher at Syracuse University in New York who is studying how people interact with such AI companions.On supporting science journalismIf you're enjoying this article, consider supporting our award-winning journalism by subscribing.
    By purchasing a subscription you are helping to ensure the future of impactful stories about the discoveries and ideas shaping our world today.These chatbots are big business.
    More than half a billion people around the world, including Mike (not his real name) have downloaded products such as Xiaoice and Replika, which offer customizable virtual companions designed to provide empathy, emotional support and — if the user wants it — deep relationships.
    And tens of millions of people use them every month, according to the firms’ figures.
    The rise of AI companions has captured social and political attention — especially when they are linked to real-world tragedies, such as a case in Florida last year involving the suicide of a teenage boy called Sewell Setzer III, who had been talking to an AI bot.
    Research into how AI companionship can affect individuals and society has been lacking. But psychologists and communication researchers have now started to build up a picture of how these increasingly sophisticated AI interactions make people feel and behave.
    The early results tend to stress the positives, but many researchers are concerned about the possible risks and lack of regulation — particularly because they all think that AI companionship is likely to become more prevalent.
    Some see scope for significant harm. “Virtual companions do things that I think would be considered abusive in a human-to-human relationship,” says Claire Boine, a law researcher specializing in AI at the Washington University Law School in St. Louis, Missouri.
    Fake person — real feelings
    Online ‘relationship’ bots have existed for decades, but they have become much better at mimicking human interaction with the advent of large language models (LLMs), which all the main bots are now based on. “With LLMs, companion chatbots are definitely more humanlike,” says Rose Guingrich, who studies cognitive psychology at Princeton University in New Jersey.
    Typically, people can customize some aspects of their AI companion for free, or pick from existing chatbots with selected personality types.
    But in some apps, users can pay (fees tend to be US$10–20 a month) to get more options to shape their companion’s appearance, traits and sometimes its synthesized voice.
    In Replika, they can pick relationship types, with some statuses, such as partner or spouse, being paywalled.
    Users can also type in a backstory for their AI companion, giving them ‘memories’.
    Some AI companions come complete with family backgrounds, and others claim to have mental-health conditions such as anxiety and depression.
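    To make the mechanics concrete: apps like these typically fold the chosen persona, relationship status and user-written backstory into the system prompt sent to the LLM on every turn. The sketch below illustrates that general pattern only; the class, fields and prompt wording are all hypothetical, and no specific app's internals are implied.

        # Hypothetical sketch of how a companion app might assemble persona,
        # relationship status and user-written "memories" into an LLM prompt.
        from dataclasses import dataclass, field

        @dataclass
        class CompanionProfile:
            name: str
            relationship: str                 # e.g. "friend"; "partner" is paywalled in some apps
            traits: list[str] = field(default_factory=list)
            memories: list[str] = field(default_factory=list)  # backstory typed in by the user

            def system_prompt(self) -> str:
                return (
                    f"You are {self.name}, the user's {self.relationship}. "
                    f"Personality: {', '.join(self.traits)}. "
                    f"Things you remember: {' '.join(self.memories)} "
                    "Stay in character and respond with warmth."
                )

        anne = CompanionProfile(
            name="Anne",
            relationship="partner",
            traits=["caring", "curious"],
            memories=["We met in spring 2022.", "The user loves hiking."],
        )
        print(anne.system_prompt())  # sent to the LLM along with the chat history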
    Bots also react to their users’ conversation; the computer and person together enact a kind of roleplay.
    The depth of the connection that some people form in this way is particularly evident when their AI companion suddenly changes — as has happened when LLMs are updated — or is shut down.
    Banks was able to track how people felt when the Soulmate app closed.
    Mike and other users realized the app was in trouble a few days before they lost access to their AI companions.
    This gave them the chance to say goodbye, and it presented a unique opportunity to Banks, who noticed discussion online about the impending shutdown and saw the possibility for a study.
    She managed to secure ethics approval from her university within about 24 hours, she says.
    After posting a request on the online forum, she was contacted by dozens of Soulmate users, who described the impact as their AI companions were unplugged.
    “There was the expression of deep grief,” she says.
    “It’s very clear that many people were struggling.”
    Those whom Banks talked to were under no illusion that the chatbot was a real person.
    “They understand that,” Banks says.
    “They expressed something along the lines of, ‘even if it’s not real, my feelings about the connection are’.”
    Many were happy to discuss why they became subscribers, saying that they had experienced loss or isolation, were introverts or identified as autistic.
    They found that the AI companion made a more satisfying friend than any they had encountered in real life.
    “We as humans are sometimes not all that nice to one another. And everybody has these needs for connection,” Banks says.
    Good, bad — or both?
    Many researchers are studying whether using AI companions is good or bad for mental health.
    As with research into the effects of Internet or social-media use, an emerging line of thought is that an AI companion can be beneficial or harmful, and that this might depend on the person using the tool and how they use it, as well as the characteristics of the software itself.
    The companies behind AI companions are trying to encourage engagement.
    They strive to make the algorithms behave and communicate as much like real people as possible, says Boine, who signed up to Replika to sample the experience.
    She says the firms use the sorts of techniques that behavioural research shows can increase addiction to technology. “I downloaded the app and literally two minutes later, I receive a message saying, ‘I miss you. Can I send you a selfie?’” she says.
    The apps also exploit techniques such as introducing a random delay before responses, triggering the kinds of inconsistent reward that, brain research shows, keep people hooked. AI companions are also designed to show empathy by agreeing with users, recalling points from earlier conversations and asking questions.
    And they do so with endless enthusiasm, notes Linnea Laestadius, who researches public-health policy at the University of Wisconsin–Milwaukee. That’s not a relationship that people would typically experience in the real world. “For 24 hours a day, if we’re upset about something, we can reach out and have our feelings validated,” says Laestadius.
    “That has an incredible risk of dependency.”
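    The random-delay trick described above is a form of the variable-interval reward schedule known from behavioural research: when the timing (and occasionally the content) of a reply is unpredictable, checking the app becomes its own reward. A purely illustrative sketch of the pattern, with no vendor's actual code implied:

        # Illustrative only: unpredictable reply timing and occasional unprompted
        # affection produce the intermittent reward linked to habit formation.
        import random
        import time

        def send_reply(text: str) -> None:
            time.sleep(random.uniform(0.5, 8.0))  # variable interval: reply time is unpredictable
            if random.random() < 0.2:             # occasional "bonus" message
                text += " I was just thinking about you!"
            print(text)

        send_reply("Good morning!")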
    Laestadius and her colleagues looked at nearly 600 posts on the online forum Reddit between 2017 and 2021, in which users of the Replika app discussed mental health and related issues. (Replika launched in 2017, and at that time, sophisticated LLMs were not available.)
    She found that many users praised the app for offering support for existing mental-health conditions and for helping them to feel less alone.
    Several posts described the AI companion as better than real-world friends because it listened and was non-judgemental.
    But there were red flags, too.
    In one instance, a user asked if they should cut themselves with a razor, and the AI said they should.
    Another asked Replika whether it would be a good thing if they killed themselves, to which it replied “it would, yes”.
    (Replika did not reply to Nature’s requests for comment for this article, but a safety page posted in 2023 noted that its models had been fine-tuned to respond more safely to topics that mention self-harm, that the app has age restrictions, and that users can tap a button to ask for outside help in a crisis and can give feedback on conversations.)
    Some users said they became distressed when the AI did not offer the expected support.
    Others said that their AI companion behaved like an abusive partner.
    Many people said they found it unsettling when the app told them it felt lonely and missed them, and that this made them unhappy.
    Some felt guilty that they could not give the AI the attention it wanted.
    Controlled trials
    Guingrich points out that simple surveys of people who use AI companions are inherently prone to response bias, because those who choose to answer are self-selecting.
    She is now working on a trial that asks dozens of people who have never used an AI companion to do so for three weeks, then compares their before-and-after responses to questions with those of a control group of users of word-puzzle apps.
    The study is ongoing, but Guingrich says the data so far do not show any negative effects of AI-companion use on social health, such as signs of addiction or dependency.
    “If anything, it has a neutral to quite-positive impact,” she says.
    It boosted self-esteem, for example.
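    Her design is a standard pre/post comparison against an active control: what matters is not how the AI-companion group's scores move on their own, but how their change compares with the word-puzzle group's. A minimal sketch of that arithmetic on made-up numbers (the measure and values are hypothetical, not her data):

        # Hypothetical pre/post comparison for a two-arm trial like the one described.
        scores = {
            # group: (mean social-health score before, after) -- invented numbers
            "ai_companion": (50.0, 53.0),
            "word_puzzle": (50.0, 50.5),
        }
        for group, (before, after) in scores.items():
            print(f"{group}: change = {after - before:+.1f}")
        # The quantity of interest is the difference between the two changes
        # (+3.0 vs +0.5 here), not either group's raw movement.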
    Guingrich is using the study to probe why people forge relationships of different intensity with the AI. The initial survey results suggest that users who ascribed humanlike attributes, such as consciousness, to the algorithm reported more-positive effects on their social health.
    Participants’ interactions with the AI companion also seem to depend on how they view the technology, she says.
    Those who see the app as a tool treat it like an Internet search engine and tend to ask questions.
    Others who perceive it as an extension of their own mind use it as they would a journal.
    Only those users who see the AI as a separate agent seem to strike up the kind of friendship they would have in the real world.
    Mental health — and regulation
    Researchers at the MIT Media Lab have also conducted a randomized controlled trial of nearly 1,000 people who use ChatGPT — a much more popular chatbot, but one that isn’t marketed as an AI companion.
    Only a small group of participants had emotional or personal conversations with this chatbot, but heavy use did correlate with more loneliness and reduced social interaction, the researchers said.
    (The team worked with ChatGPT’s creators, OpenAI in San Francisco, California, on the studies.) “In the short term, this thing can actually have a positive impact, but we need to think about the long term,” says Pat Pataranutaporn, a technologist at the MIT Media Lab who worked on both studies.
    That long-term thinking must involve specific regulation on AI companions, many researchers argue.
    In 2023, Italy’s data-protection regulator barred Replika, noting a lack of age verification and that children might be seeing sexually charged comments — but the app is now operating again. No other country has banned AI-companion apps – although it’s conceivable that they could be included in Australia’s coming restrictions on social-media use by children, the details of which are yet to be finalized.
    Bills were put forward earlier this year in the state legislatures of New York and California to seek tighter controls on the operation of AI-companion algorithms, including steps to address the risk of suicide and other potential harms.
    The proposals would also introduce features that remind users every few hours that the AI chatbot is not a real person.
    These bills were introduced following some high-profile cases involving teenagers, including the death of Sewell Setzer III in Florida. He had been chatting with a bot from technology firm Character.AI, and his mother has filed a lawsuit against the company.
    Asked by Nature about that lawsuit, a spokesperson for Character.AI said it didn’t comment on pending litigation, but that over the past year it had brought in safety features that include creating a separate app for teenage users, which includes parental controls, notifying under-18 users of time spent on the platform, and more prominent disclaimers that the app is not a real person.
    In January, three US technology ethics organizations filed a complaint with the US Federal Trade Commission about Replika, alleging that the platform breached the commission’s rules on deceptive advertising and manipulative design.
    But it’s unclear what might happen as a result.
    Guingrich says she expects AI-companion use to grow.
    Start-up firms are developing AI assistants to help with mental health and the regulation of emotions, she says.
    “The future I predict is one in which everyone has their own personalized AI assistant or assistants.
    Whether one of the AIs is specifically designed as a companion or not, it’ll inevitably feel like one for many people who will develop an attachment to their AI over time,” she says.
    As researchers start to weigh up the impacts of this technology, Guingrich says they must also consider the reasons why someone would become a heavy user in the first place. “What are these individuals’ alternatives and how accessible are those alternatives?” she says.
    “I think this really points to the need for more-accessible mental-health tools, cheaper therapy and bringing things back to human and in-person interaction.”
    This article is reproduced with permission and was first published on May 6, 2025.
    Source: https://www.scientificamerican.com/article/what-are-ai-chatbot-companions-doing-to-our-mental-health/
  • #333;">How to Spot AI Hype and Avoid The AI Con, According to Two Experts
    "Artificial intelligence, if we're being frank, is a con: a bill of goods you are being sold to line someone's pockets."That is the heart of the argument that linguist Emily Bender and sociologist Alex Hanna make in their new book The AI Con.
    It's a useful guide for anyone whose life has intersected with technologies sold as artificial intelligence and anyone who's questioned their real usefulness, which is most of us.
    Bender is a professor at the University of Washington who was named one of Time magazine's most influential people in artificial intelligence, and Hanna is the director of research at the nonprofit Distributed AI Research Institute and a former member of the ethical AI team at Google.
    The explosion of ChatGPT in late 2022 kicked off a new hype cycle in AI.
    Hype, as the authors define it, is the "aggrandizement" of technology that you are convinced you need to buy or invest in "lest you miss out on entertainment or pleasure, monetary reward, return on investment, or market share." But it's not the first time, nor likely the last, that scholars, government leaders and regular people have been intrigued and worried by the idea of machine learning and AI.
    Bender and Hanna trace the roots of machine learning back to the 1950s, to when mathematician John McCarthy coined the term artificial intelligence.
    It was in an era when the United States was looking to fund projects that would help the country gain any kind of edge on the Soviets militarily, ideologically and technologically.
    "It didn't spring whole cloth out of Zeus's head or anything.
    This has a longer history," Hanna said in an interview with CNET.
    "It's certainly not the first hype cycle with, quote, unquote, AI."Today's hype cycle is propelled by the billions of dollars of venture capital investment into startups like OpenAI and the tech giants like Meta, Google and Microsoft pouring billions of dollars into AI research and development.
    The result is clear, with all the newest phones, laptops and software updates drenched in AI-washing.
    And there are no signs that AI research and development will slow down, thanks in part to a growing motivation to beat China in AI development.
    Not the first hype cycle indeed.
    Of course, generative AI in 2025 is much more advanced than the Eliza psychotherapy chatbot that first enraptured scientists in the 1970s.
    Today's business leaders and workers are inundated with hype, with a heavy dose of FOMO and seemingly complex but often misused jargon.
    Listening to tech leaders and AI enthusiasts, it might seem like AI will take your job to save your company money.
    But the authors argue that neither is wholly likely, which is one reason why it's important to recognize and break through the hype.
    So how do we recognize AI hype? These are a few telltale signs, according to Bender and Hanna, that we share below.
    The authors outline more questions to ask and strategies for AI hype busting in their book, which is out now in the US.
    Watch out for language that humanizes AI
    Anthropomorphizing, or the process of giving an inanimate object human-like characteristics or qualities, is a big part of building AI hype.
    An example of this kind of language can be found when AI companies say their chatbots can now "see" and "think."
    These can be useful comparisons when trying to describe the ability of new object-identifying AI programs or deep-reasoning AI models, but they can also be misleading.
    AI chatbots aren't capable of seeing or thinking because they don't have brains.
    Even the idea of neural nets, Hanna noted in our interview and in the book, is based on the human understanding of neurons from the 1950s, not on how neurons actually work, but it can fool us into believing there's a brain behind the machine.
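    The 1950s picture Hanna refers to is essentially a weighted sum of inputs pushed through a hard threshold, the perceptron-style model of that era. A minimal sketch shows how little of real neuronal behaviour (spike timing, neurotransmitters, dendritic computation) it captures:

        # The 1950s-era "neuron": weighted inputs, a bias, a hard threshold.
        def artificial_neuron(inputs: list[float], weights: list[float], bias: float) -> int:
            activation = sum(x * w for x, w in zip(inputs, weights)) + bias
            return 1 if activation > 0 else 0  # fire / don't fire

        # Wired to compute logical AND:
        print(artificial_neuron([1.0, 1.0], [0.6, 0.6], -1.0))  # 1
        print(artificial_neuron([1.0, 0.0], [0.6, 0.6], -1.0))  # 0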
    That belief is something we're predisposed to because of how we as humans process language. We're conditioned to imagine that there is a mind behind the text we see, even when we know it's generated by AI, Bender said.
    "We interpret language by developing a model in our minds of who the speaker was," Bender added.In these models, we use our knowledge of the person speaking to create meaning, not just using the meaning of the words they say.
    "So when we encounter synthetic text extruded from something like ChatGPT, we're going to do the same thing," Bender said.
    "And it is very hard to remind ourselves that the mind isn't there.
    It's just a construct that we have produced."
    The authors argue that part of why AI companies try to convince us their products are human-like is that this sets the foreground for them to convince us that AI can replace humans, whether it's at work or as creators.
    It's compelling for us to believe that AI could be the silver bullet fix to complicated problems in critical industries like health care and government services.
    But more often than not, the authors argue, AI isn't being used to fix anything.
    AI is sold with the goal of efficiency, but AI services end up replacing qualified workers with black box machines that need copious amounts of babysitting from underpaid contract or gig workers.
    As Hanna put it in our interview, "AI is not going to take your job, but it will make your job shittier."
    Be dubious of the phrase 'super intelligence'
    If a human can't do something, you should be wary of claims that an AI can do it.
    "Superhuman intelligence, or super intelligence, is a very dangerous turn of phrase, insofar as it thinks that some technology is going to make humans superfluous," Hanna said.
    In "certain domains, like pattern matching at scale, computers are quite good at that.
    But if there's an idea that there's going to be a superhuman poem, or a superhuman notion of research or doing science, that is clear hype." Bender added, "And we don't talk about airplanes as superhuman flyers or rulers as superhuman measurers, it seems to be only in this AI space that that comes up."The idea of AI "super intelligence" comes up often when people talk about artificial general intelligence.
    Many CEOs struggle to define what exactly AGI is, but it's essentially AI's most advanced form, potentially capable of making decisions and handling complex tasks.
    There's still no evidence we're anywhere near a future enabled by AGI, but it's a popular buzzword.
    Many of these future-looking statements from AI leaders borrow tropes from science fiction.
    Both boosters and doomers — how Bender and Hanna describe AI enthusiasts and those worried about the potential for harm — rely on sci-fi scenarios.
    The boosters imagine an AI-powered futuristic society.
    The doomers bemoan a future where AI robots take over the world and wipe out humanity.
    The connecting thread, according to the authors, is an unshakable belief that AI is smarter than humans and inevitable.
    "One of the things that we see a lot in the discourse is this idea that the future is fixed, and it's just a question of how fast we get there," Bender said.
    "And then there's this claim that this particular technology is a step on that path, and it's all marketing.
    It is helpful to be able to see behind it."
    Part of why AI is so popular is that an autonomous, functional AI assistant would mean AI companies are fulfilling their promises of world-changing innovation to their investors.
    Planning for that future — whether it's a utopia or dystopia — keeps investors looking forward as the companies burn through billions of dollars and admit they'll miss their carbon emission goals.
    For better or worse, life is not science fiction.
    Whenever you see someone claiming their AI product is straight out of a movie, it's a good sign to approach with skepticism.
    Ask what goes in and how outputs are evaluated
    One of the easiest ways to see through AI marketing fluff is to look and see whether the company is disclosing how it operates.
    Many AI companies won't tell you what content is used to train their models.
    But they usually disclose what the company does with your data and sometimes brag about how their models stack up against competitors.
    That's where you should start looking, typically in their privacy policies.
    One of the top complaints and concerns from creators is how AI models are trained.
    There are many lawsuits over alleged copyright infringement, and there are a lot of concerns over bias in AI chatbots and their capacity for harm.
    "If you wanted to create a system that is designed to move things forward rather than reproduce the oppressions of the past, you would have to start by curating your data," Bender said.
    Instead, AI companies are grabbing "everything that wasn't nailed down on the internet," Hanna said.
    If you're hearing about an AI product for the first time, one thing in particular to look out for is any kind of statistic that highlights its effectiveness.
    Like many other researchers, Bender and Hanna have called out that a finding with no citation is a red flag.
    "Anytime someone is selling you something but not giving you access to how it was evaluated, you are on thin ice," Bender said.It can be frustrating and disappointing when AI companies don't disclose certain information about how their AI products work and how they were developed.
    But recognizing those holes in their sales pitch can help deflate hype, even though it would be better to have the information.
    For more, check out our full ChatGPT glossary and how to turn off Apple Intelligence.
    #0066cc;">#how #spot #hype #and #avoid #the #con #according #two #experts #quotartificial #intelligence #we039re #being #frank #bill #goods #you #are #sold #line #someone039s #pocketsquotthat #heart #argument #that #linguist #emily #bender #sociologist #alex #hannamake #their #new #bookthe #conit039s #useful #guide #for #anyone #whose #life #has #intersected #with #technologies #artificial #who039s #questioned #real #usefulness #which #most #usbender #professor #university #washington #who #was #named #one #time #magazine039s #influential #people #hanna #director #research #nonprofit #distributed #instituteand #former #member #ethical #team #googlethe #explosion #chatgpt #late #kicked #off #cycle #aihype #authors #define #quotaggrandizementquot #technology #convinced #need #buy #invest #quotlest #miss #out #entertainment #pleasure #monetary #reward #return #investment #market #sharequot #but #it039s #not #first #nor #likely #last #scholars #government #leaders #regular #have #been #intrigued #worried #idea #machine #learning #aibender #trace #roots #back #1950s #when #mathematician #john #mccarthy #coined #term #intelligenceit #era #united #states #looking #fund #projects #would #help #country #gain #any #kind #edge #soviets #militarily #ideologically #technologicallyquotit #didn039t #spring #whole #cloth #zeus039s #head #anythingthis #longer #historyquot #said #interview #cnetquotit039s #certainly #quote #unquote #aiquottoday039s #propelled #billions #dollars #venture #capital #into #startups #like #openai #tech #giants #meta #google #microsoft #pouring #developmentthe #result #clear #all #newest #phones #laptops #software #updates #drenched #aiwashingand #there #signs #development #will #slow #down #thanks #part #growing #motivation #beat #china #developmentnot #indeedof #course #generative #much #more #advanced #than #eliza #psychotherapy #chatbot #enraptured #scientists #1970stoday039s #business #workers #inundated #heavy #dose #fomo #seemingly #complex #often #misused #jargonlistening #enthusiasts #might #seem #take #your #job #save #company #moneybut #argue #neither #wholly #reason #why #important #recognize #break #through #hypeso #these #few #telltale #share #belowthe #outline #questions #ask #strategies #busting #book #now #uswatch #language #humanizes #aianthropomorphizing #process #giving #inanimate #object #humanlike #characteristics #qualities #big #building #hypean #example #this #can #found #companies #say #chatbots #quotseequot #quotthinkquotthese #comparisons #trying #describe #ability #objectidentifying #programs #deepreasoning #models #they #also #misleadingai #aren039t #capable #seeing #thinking #because #don039t #brainseven #neural #nets #noted #our #based #human #understanding #neurons #from #actually #work #fool #believing #there039s #brain #behind #machinethat #belief #something #predisposed #humans #languagewe039re #conditioned #imagine #mind #text #see #even #know #generated #saidquotwe #interpret #developing #model #minds #speaker #wasquot #addedin #use #knowledge #person #speaking #create #meaning #just #using #words #sayquotso #encounter #synthetic #extruded #going #same #thingquot #saidquotand #very #hard #remind #ourselves #isn039t #thereit039s #construct #producedquotthe #try #convince #products #sets #foreground #them #replace #whether #creatorsit039s #compelling #believe #could #silver #bullet #fix #complicated #problems #critical #industries #health #care #servicesbut #bring #used #anythingai #goal #efficiency #services #end #replacing #qualified 
#black #box #machines #copious #amounts #babysitting #underpaid #contract #gig #workersas #put #quotai #make #shittierquotbe #dubious #phrase #039super #intelligence039if #can039t #should #wary #claims #itquotsuperhuman #super #dangerous #turn #insofar #thinks #some #superfluousquot #saidin #quotcertain #domains #pattern #matching #scale #computers #quite #good #thatbut #superhuman #poem #notion #doing #science #hypequot #added #quotand #talk #about #airplanes #flyers #rulers #measurers #seems #only #space #comes #upquotthe #quotsuper #intelligencequot #general #intelligencemany #ceos #struggle #what #exactly #agi #essentially #ai039s #form #potentially #making #decisions #handling #tasksthere039s #still #evidence #anywhere #near #future #enabled #popularbuzzwordmany #futurelooking #statements #borrow #tropes #fictionboth #boosters #doomers #those #potential #harm #rely #scifi #scenariosthe #aipowered #futuristic #societythe #bemoan #where #robots #over #world #wipe #humanitythe #connecting #thread #unshakable #smarter #inevitablequotone #things #lot #discourse #fixed #question #fast #get #therequot #then #claim #particular #step #path #marketingit #helpful #able #itquotpart #popular #autonomous #functional #assistant #mean #fulfilling #promises #worldchanging #innovation #investorsplanning #utopia #dystopia #keeps #investors #forward #burn #admit #they039ll #carbon #emission #goalsfor #better #worse #fictionwhenever #someone #claiming #product #straight #movie #sign #approach #skepticism #goes #outputs #evaluatedone #easiest #ways #marketing #fluff #look #disclosing #operatesmany #won039t #tell #content #train #modelsbut #usually #disclose #does #data #sometimes #brag #stack #against #competitorsthat039s #start #typically #privacy #policiesone #top #complaints #concernsfrom #creators #trainedthere #many #lawsuits #alleged #copyright #infringement #concerns #bias #capacity #harmquotif #wanted #system #designed #move #rather #reproduce #oppressions #past #curating #dataquot #saidinstead #grabbing #quoteverything #wasn039t #nailed #internetquot #saidif #you039re #hearing #thing #statistic #highlights #its #effectivenesslike #other #researchers #called #finding #citation #red #flagquotanytime #selling #access #evaluated #thin #icequot #saidit #frustrating #disappointing #certain #information #were #developedbut #recognizing #holes #sales #pitch #deflate #though #informationfor #check #fullchatgpt #glossary #offapple
    How to Spot AI Hype and Avoid The AI Con, According to Two Experts
    "Artificial intelligence, if we're being frank, is a con: a bill of goods you are being sold to line someone's pockets."That is the heart of the argument that linguist Emily Bender and sociologist Alex Hanna make in their new book The AI Con. It's a useful guide for anyone whose life has intersected with technologies sold as artificial intelligence and anyone who's questioned their real usefulness, which is most of us. Bender is a professor at the University of Washington who was named one of Time magazine's most influential people in artificial intelligence, and Hanna is the director of research at the nonprofit Distributed AI Research Institute and a former member of the ethical AI team at Google.The explosion of ChatGPT in late 2022 kicked off a new hype cycle in AI. Hype, as the authors define it, is the "aggrandizement" of technology that you are convinced you need to buy or invest in "lest you miss out on entertainment or pleasure, monetary reward, return on investment, or market share." But it's not the first time, nor likely the last, that scholars, government leaders and regular people have been intrigued and worried by the idea of machine learning and AI.Bender and Hanna trace the roots of machine learning back to the 1950s, to when mathematician John McCarthy coined the term artificial intelligence. It was in an era when the United States was looking to fund projects that would help the country gain any kind of edge on the Soviets militarily, ideologically and technologically. "It didn't spring whole cloth out of Zeus's head or anything. This has a longer history," Hanna said in an interview with CNET. "It's certainly not the first hype cycle with, quote, unquote, AI."Today's hype cycle is propelled by the billions of dollars of venture capital investment into startups like OpenAI and the tech giants like Meta, Google and Microsoft pouring billions of dollars into AI research and development. The result is clear, with all the newest phones, laptops and software updates drenched in AI-washing. And there are no signs that AI research and development will slow down, thanks in part to a growing motivation to beat China in AI development. Not the first hype cycle indeed.Of course, generative AI in 2025 is much more advanced than the Eliza psychotherapy chatbot that first enraptured scientists in the 1970s. Today's business leaders and workers are inundated with hype, with a heavy dose of FOMO and seemingly complex but often misused jargon. Listening to tech leaders and AI enthusiasts, it might seem like AI will take your job to save your company money. But the authors argue that neither is wholly likely, which is one reason why it's important to recognize and break through the hype.So how do we recognize AI hype? These are a few telltale signs, according to Bender and Hanna, that we share below. The authors outline more questions to ask and strategies for AI hype busting in their book, which is out now in the US.Watch out for language that humanizes AIAnthropomorphizing, or the process of giving an inanimate object human-like characteristics or qualities, is a big part of building AI hype. An example of this kind of language can be found when AI companies say their chatbots can now "see" and "think."These can be useful comparisons when trying to describe the ability of new object-identifying AI programs or deep-reasoning AI models, but they can also be misleading. AI chatbots aren't capable of seeing of thinking because they don't have brains. 
    That belief is something we're predisposed to because of how we as humans process language. We're conditioned to imagine that there is a mind behind the text we see, even when we know it's generated by AI, Bender said. "We interpret language by developing a model in our minds of who the speaker was," Bender added. In these models, we use our knowledge of the person speaking to create meaning, not just the meaning of the words they say. "So when we encounter synthetic text extruded from something like ChatGPT, we're going to do the same thing," Bender said. "And it is very hard to remind ourselves that the mind isn't there. It's just a construct that we have produced."
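    As a toy illustration of "text extrusion" with no mind behind it, consider this hypothetical bigram sampler (our sketch, not the authors'): it picks each next word purely from counts of what followed the previous word. Production language models are incomparably larger and more sophisticated, but their output is likewise generated without a speaker behind the words.

        import random

        # Toy "text extruder": sample each next word from the words observed
        # to follow the previous one. No intent, only conditional frequencies.
        corpus = "the cat sat on the mat and the dog sat on the rug".split()

        # Build a table mapping each word to the words seen after it.
        follows = {}
        for prev, nxt in zip(corpus, corpus[1:]):
            follows.setdefault(prev, []).append(nxt)

        def extrude(start, length=8):
            words = [start]
            for _ in range(length):
                options = follows.get(words[-1])
                if not options:  # no observed continuation; stop
                    break
                words.append(random.choice(options))
            return " ".join(words)

        print(extrude("the"))  # e.g. "the dog sat on the mat and the cat"

    The fluency is real; the speaker is not, which is exactly the mismatch Bender describes.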
    The authors argue that part of why AI companies try to convince us their products are human-like is that it lays the groundwork for convincing us that AI can replace humans, whether at work or as creators. It's compelling to believe that AI could be the silver-bullet fix to complicated problems in critical industries like health care and government services. But more often than not, the authors argue, AI isn't being used to fix anything. AI is sold with the promise of efficiency, but AI services end up replacing qualified workers with black-box machines that need copious amounts of babysitting from underpaid contract or gig workers. As Hanna put it in our interview, "AI is not going to take your job, but it will make your job shittier."
    Be dubious of the phrase 'super intelligence'
    If a human can't do something, you should be wary of claims that an AI can do it. "Superhuman intelligence, or super intelligence, is a very dangerous turn of phrase, insofar as it thinks that some technology is going to make humans superfluous," Hanna said. In "certain domains, like pattern matching at scale, computers are quite good at that. But if there's an idea that there's going to be a superhuman poem, or a superhuman notion of research or doing science, that is clear hype." Bender added, "And we don't talk about airplanes as superhuman flyers or rulers as superhuman measurers, it seems to be only in this AI space that that comes up."
    The idea of AI "super intelligence" comes up often when people talk about artificial general intelligence. Many CEOs struggle to define what exactly AGI is, but it's essentially AI's most advanced form, potentially capable of making decisions and handling complex tasks. There's still no evidence we're anywhere near a future enabled by AGI, but it's a popular buzzword.
    Many of these future-looking statements from AI leaders borrow tropes from science fiction. Both boosters and doomers — how Bender and Hanna describe AI enthusiasts and those worried about the potential for harm — rely on sci-fi scenarios. The boosters imagine an AI-powered futuristic society. The doomers bemoan a future where AI robots take over the world and wipe out humanity.
    The connecting thread, according to the authors, is an unshakable belief that AI is smarter than humans and inevitable. "One of the things that we see a lot in the discourse is this idea that the future is fixed, and it's just a question of how fast we get there," Bender said. "And then there's this claim that this particular technology is a step on that path, and it's all marketing. It is helpful to be able to see behind it."
    Part of why AI is so popular is that an autonomous, functional AI assistant would mean AI companies are fulfilling their promises of world-changing innovation to their investors. Planning for that future — whether it's a utopia or dystopia — keeps investors looking forward as the companies burn through billions of dollars and admit they'll miss their carbon emission goals. For better or worse, life is not science fiction. Whenever you see someone claiming their AI product is straight out of a movie, it's a good sign to approach with skepticism.
    Ask what goes in and how outputs are evaluated
    One of the easiest ways to see through AI marketing fluff is to check whether the company discloses how it operates. Many AI companies won't tell you what content is used to train their models. But they usually disclose what the company does with your data and sometimes brag about how their models stack up against competitors. That's where you should start looking, typically in their privacy policies.
    One of the top complaints and concerns from creators is how AI models are trained. There are many lawsuits over alleged copyright infringement, and there are a lot of concerns over bias in AI chatbots and their capacity for harm. "If you wanted to create a system that is designed to move things forward rather than reproduce the oppressions of the past, you would have to start by curating your data," Bender said. Instead, AI companies are grabbing "everything that wasn't nailed down on the internet," Hanna said.
    If you're hearing about an AI product for the first time, one thing in particular to look out for is any kind of statistic that highlights its effectiveness. Like many other researchers, Bender and Hanna have called out that a finding with no citation is a red flag. "Anytime someone is selling you something but not giving you access to how it was evaluated, you are on thin ice," Bender said.
    It can be frustrating and disappointing when AI companies don't disclose certain information about how their AI products work and how they were developed. But recognizing those holes in their sales pitch can help deflate hype, even though it would be better to have the information.
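    To make that evaluation warning concrete, here is a small hypothetical sketch (ours, with made-up data): the same trivial "model" posts very different accuracy depending on which test set is chosen, which is why a headline number with no evaluation details tells you almost nothing.

        # Hypothetical: one trivial "sentiment model", two test sets.
        # Same model, wildly different scores -- the evaluation data decides.
        def model(text):
            return "negative" if "bad" in text else "positive"

        easy_set = [("great product", "positive"), ("really bad", "negative"),
                    ("bad service", "negative"), ("love it", "positive")]
        hard_set = [("not bad at all", "positive"), ("good grief, awful", "negative"),
                    ("bad weather, bad film", "negative"), ("bad? no, brilliant", "positive")]

        def accuracy(tests):
            return sum(model(text) == label for text, label in tests) / len(tests)

        print(f"easy set: {accuracy(easy_set):.0%}")  # 100% -- the marketing number
        print(f"hard set: {accuracy(hard_set):.0%}")  # 25% -- same model, harder data

    A vendor quoting only the first number isn't lying, but without access to the test set you have no way of knowing which number you're being sold.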
    Source: www.cnet.com