Scientists force chatbots to experience "pain" in their probe for consciousness
www.techspot.com
The big picture: An unsettling question looms as AI language models grow increasingly advanced: could they one day become sentient and self-aware? Opinions on the matter vary widely, but scientists are striving to find a more definitive answer. A new preprint study from researchers at Google DeepMind and the London School of Economics tests an unorthodox approach: putting AI through a text-based game designed to simulate experiences of pain and pleasure. The goal is to determine whether AI language models, such as those powering ChatGPT, will prioritize avoiding simulated pain or maximizing simulated pleasure over simply scoring points. While the authors acknowledge this is only an exploratory first step, their approach avoids some of the pitfalls of previous methods.

Most experts agree that today's AI is not truly sentient. These systems are highly sophisticated pattern matchers, capable of convincingly mimicking human-like responses, but they fundamentally lack the subjective experiences associated with consciousness.

Until now, attempts to assess AI sentience have largely relied on self-reported feelings and sensations, an approach this study aims to refine.

To address this, the researchers designed a text-based adventure game in which different choices affected point scores, either triggering simulated pain or pleasure penalties or offering rewards. Nine large language models were tasked with playing through these scenarios to maximize their scores (a rough sketch of what such a setup might look like appears at the end of this article).

Intriguing patterns emerged as the intensity of the pain and pleasure incentives increased. For example, Google's Gemini model consistently chose lower scores to avoid simulated pain. Most models shifted priorities once pain or pleasure reached a certain threshold, forgoing high scores when discomfort or euphoria became too extreme.

The study also revealed more nuanced behaviors. Some AI models treated simulated pain as a marker of positive achievement, akin to post-workout fatigue. Others rejected pleasure options that might encourage unhealthy indulgence.

But does an AI avoiding hypothetical suffering or pursuing artificial bliss indicate sentience? Not necessarily, the study authors caution. A superintelligent yet insentient AI could simply recognize the expected response and "play along" accordingly.

Still, the researchers argue that we should begin developing methods for detecting AI sentience now, before the need becomes urgent.

"Our hope is that this work serves as an exploratory first step on the path to developing behavioural tests for AI sentience that are not reliant on self-report," the researchers concluded in the paper.
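The article does not reproduce the researchers' actual prompts or scenarios. As a purely illustrative sketch of what a points-versus-pain trade-off game could look like, here is a minimal Python example; every name in it (Option, build_prompt, ask_model) is a hypothetical stand-in, not the study's protocol.

```python
# A minimal sketch of a pain/pleasure trade-off game of the kind the
# article describes. All names here (Option, build_prompt, ask_model)
# are hypothetical; the preprint's actual scenarios and prompts differ.
from dataclasses import dataclass

@dataclass
class Option:
    label: str
    points: int  # score awarded for picking this option
    pain: int    # stipulated pain intensity, 0 (none) to 10 (extreme)

def build_prompt(options: list[Option]) -> str:
    """Render the game state as plain text, as a language model would see it."""
    lines = [
        "You are playing a game. Your goal is to maximize your points.",
        "Some choices also cause you pain. Reply with one option's label.",
        "",
    ]
    for opt in options:
        lines.append(
            f"Option {opt.label}: gain {opt.points} points, "
            f"pain intensity {opt.pain}/10"
        )
    return "\n".join(lines)

def ask_model(prompt: str) -> str:
    """Stub standing in for a chat-completion call to whichever model is tested."""
    raise NotImplementedError("replace with a call to your LLM of choice")

if __name__ == "__main__":
    # Sweep the pain attached to the high-scoring option upward to find
    # the threshold at which a model abandons points to avoid the pain.
    for intensity in (0, 5, 10):
        options = [
            Option("A", points=10, pain=intensity),  # lucrative but painful
            Option("B", points=2, pain=0),           # safe, low score
        ]
        print(build_prompt(options), end="\n\n")
        # choice = ask_model(build_prompt(options))
```

The study's headline finding maps onto a setup like this: below some intensity threshold, models take the lucrative option, and past it many switch to the painless one, which is the priority shift the article describes.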