• Australian startup Cortical Labs unveils CL1, the world’s first living computer powered by 800,000 human neurons. This breakthrough blends biology with silicon to mimic brain activity, offering real-time data processing and new research possibilities in neuroscience and drug testing.
  • The nine-armed octopus and the oddities of the cephalopod nervous system

    Extra-sensory perception


    A mix of autonomous and top-down control manages the octopus's limbs.

    Kenna Hughes-Castleberry



    Jun 7, 2025 8:00 am

    Credit: Nikos Stavrinidis / 500px


    With their quick-change camouflage and high level of intelligence, it’s not surprising that the public and scientific experts alike are fascinated by octopuses. Their abilities to recognize faces, solve puzzles, and learn behaviors from other octopuses make these animals a captivating study.
    To perform these processes and others, like crawling or exploring, octopuses rely on their complex nervous system, one that has become a focus for neuroscientists. With about 500 million neurons—around the same number as dogs—octopuses’ nervous systems are the most complex of any invertebrate. But, unlike vertebrate organisms, the octopus’s nervous system is also decentralized, with around 350 million neurons, or 66 percent of it, located in its eight arms.
    “This means each arm is capable of independently processing sensory input, initiating movement, and even executing complex behaviors—without direct instructions from the brain,” explains Galit Pelled, a professor of Mechanical Engineering, Radiology, and Neuroscience at Michigan State University who studies octopus neuroscience. “In essence, the arms have their own ‘mini-brains.’”
    A decentralized nervous system is one factor that helps octopuses adapt to changes, such as injury or predation, as seen in the case of an Octopus vulgaris, or common octopus, that was observed with nine arms by researchers at the ECOBAR lab at the Institute of Marine Research in Spain between 2021 and 2022.
    By studying outliers like this cephalopod, researchers can gain insight into how the animal’s detailed scaffolding of nerves changes and regrows over time, uncovering more about how octopuses have evolved over millennia in our oceans.
    Brains, brains, and more brains
    Because each arm of an octopus contains its own bundle of neurons, the limbs can operate semi-independently from the central brain, enabling faster responses since signals don’t always need to travel back and forth between the brain and the arms. In fact, Pelled and her team recently discovered that “neural signals recorded in the octopus arm can predict movement type within 100 milliseconds of stimulation, without central brain involvement.” She notes that “that level of localized autonomy is unprecedented in vertebrate systems.”

    Though each limb moves on its own, the movements of the octopus’s body are smooth and conducted with a coordinated elegance that allows the animal to exhibit one of the broadest ranges of behaviors, adapting on the fly to changes in its surroundings.
    “That means the octopus can react quickly to its environment, especially when exploring, hunting, or defending itself,” Pelled says. “For example, one arm can grab food while another is feeling around a rock, without needing permission from the brain. This setup also makes the octopus more resilient. If one arm is injured, the others still work just fine. And because so much decision-making happens at the arms, the central brain is freed up to focus on the bigger picture—like navigating or learning new tasks.”
    As if each limb weren’t already buzzing with neural activity, things get even more intricate when researchers zoom in further—to the nerves within each individual sucker, a ring of muscular tissue, which octopuses use to sense and taste their surroundings.
    “There is a sucker ganglion, or nerve center, located in the stalk of every sucker. For some species of octopuses, that’s over a thousand ganglia,” says Cassady Olson, a graduate student at the University of Chicago who works with Cliff Ragsdale, a leading expert in octopus neuroscience.
    Given that each sucker has its own nerve centers—connected by a long axial nerve cord running down the limb—and each arm has hundreds of suckers, things get complicated very quickly, as researchers have historically struggled to study this peripheral nervous system, as it’s called, within the octopus’s body.
    “The large size of the brain makes it both really exciting to study and really challenging,” says Z. Yan Wang, an assistant professor of biology and psychology at the University of Washington. “Many of the tools available for neuroscience have to be adjusted or customized specifically for octopuses and other cephalopods because of their unique body plans.”

    While each limb acts independently, signals are transmitted back to the octopus’s central nervous system. The octopus’s brain sits between its eyes at the front of its mantle, or head, couched between its two optic lobes, large bean-shaped neural organs that help octopuses see the world around them. These optic lobes are just two of the over 30 lobes experts study within the animal’s centralized brain, as each lobe helps the octopus process its environment.
    This elaborate neural architecture is critical given the octopus’s dual role in the ecosystem as both predator and prey. Without natural defenses like a hard shell, octopuses have evolved a highly adaptable nervous system that allows them to rapidly process information and adjust as needed, helping their chances of survival.

    Some similarities remain
    While the octopus’s decentralized nervous system makes it a unique evolutionary example, it does have some structures that are similar or analogous to those in the human nervous system.
    “The octopus has a central brain mass located between its eyes, and an axial nerve cord running down each arm (similar to a spinal cord),” says Wang. “The octopus has many sensory systems that we are familiar with, such as vision, touch (somatosensation), chemosensation, and gravity sensing.”
    Neuroscientists have homed in on these similarities to understand how these structures may have evolved across the different branches in the tree of life. As the most recent common ancestor for humans and octopuses lived around 750 million years ago, experts believe that many similarities, from similar camera-like eyes to maps of neural activities, evolved separately in a process known as convergent evolution.
    While these similarities shed light on evolution's independent paths, they also offer valuable insights for fields like soft robotics and regenerative medicine.
    Occasionally, unique individuals—like an octopus with an unexpected number of limbs—can provide even deeper clues into how this remarkable nervous system functions and adapts.

    Nine arms, no problem
    In 2021, researchers from the Institute of Marine Research in Spain used an underwater camera to follow a male Octopus vulgaris, or common octopus. On its left side, three arms were intact, while the others were reduced to uneven, stumpy lengths, sharply bitten off at varying points. Although the researchers didn’t witness the injury itself, they observed that the front right arm—known as R1—was regenerating unusually, splitting into two separate limbs and giving the octopus a total of nine arms.
    “In this individual, we believe this condition was a result of abnormal regeneration [a genetic mutation] after an encounter with a predator,” explains Sam Soule, one of the researchers and the first author on the corresponding paper recently published in Animals.
    The researchers named the octopus Salvador due to its bifurcated arm coiling up on itself like the two upturned ends of Salvador Dalí’s moustache. For two years, the team studied the cephalopod’s behavior and found that it used its bifurcated arm less when doing “riskier” movements such as exploring or grabbing food, which would force the animal to stretch its arm out and expose it to further injury.
    “One of the conclusions of our research is that the octopus likely retains a long-term memory of the original injury, as it tends to use the bifurcated arms for less risky tasks compared to the others,” elaborates Jorge Hernández Urcera, a lead author of the study. “This idea of lasting memory brought to mind Dalí’s famous painting The Persistence of Memory, which ultimately became the title of the paper we published on monitoring this particular octopus.”
    While the octopus acted more protective of its extra limb, its nervous system had adapted to using the extra appendage, as the octopus was observed, after some time recovering from its injuries, using its ninth arm for probing its environment.
    “That nine-armed octopus is a perfect example of just how adaptable these animals are,” Pelled adds. “Most animals would struggle with an unusual body part, but not the octopus. In this case, the octopus had a bifurcated (split) arm and still used it effectively, just like any other arm. That tells us the nervous system didn’t treat it as a mistake—it figured out how to make it work.”
    Kenna Hughes-Castleberry is the science communicator at JILA (a joint physics research institute between the National Institute of Standards and Technology and the University of Colorado Boulder) and a freelance science journalist. Her main writing focuses are quantum physics, quantum technology, deep technology, social media, and the diversity of people in these fields, particularly women and people from minority ethnic and racial groups. Follow her on LinkedIn or visit her website.

  • Brain implant enables ALS patient to communicate using AI

    Published May 31, 2025 6:00am EDT

    ALS patient communicates with the world using only his thoughts.

    Imagine losing your ability to speak or move, yet still having so much to say. For Brad G. Smith, this became his reality after being diagnosed with ALS, a rare and progressive disease that attacks the nerves controlling voluntary muscle movement. But thanks to a groundbreaking Neuralink brain implant, Smith is now able to communicate with the world using only his thoughts.

    Life before Neuralink
    Before receiving the Neuralink implant, Smith relied on eye-tracking technology to communicate. While impressive, it came with major limitations. "It is a miracle of technology, but it is frustrating. It works best in dark rooms, so I was basically Batman. I was stuck in a dark room," Smith shared in a recent post on X. Bright environments would disrupt the system, making communication slow and sometimes impossible. Now, Smith says, "Neuralink lets me go outside and ignore lighting changes."

    How the Neuralink brain implant works
    Smith is the first non-verbal person and only the third individual worldwide to receive the Neuralink brain-computer interface (BCI). The device, about as thick as five stacked coins, sits in his skull and connects to the motor cortex, the part of the brain that controls movement. Tiny wires, thinner than a human hair, extend into Smith's brain. They pick up signals from his neurons and transmit them wirelessly to his MacBook Pro. The computer then decodes these signals, allowing Smith to move a cursor on the screen with his thoughts alone.
    As Smith explains, "The Neuralink implant embedded in my brain contains 1024 electrodes that capture neuron firings every 15 milliseconds generating a vast amount of data. Artificial intelligence processes this data on a connected MacBook Pro to decode my intended movements in real time to move the cursor on my screen. Neuralink does not read my deepest thoughts or words I think about. It just reads how I wanna move and moves the cursor where I want."

    Training the brain-computer connection
    Learning to use the system took some trial and error. At first, the team tried mapping Smith's hand movements to the cursor, but it didn't work well. After more research, they discovered that signals related to his tongue were the most effective for cursor movement, and clenching his jaw worked best for clicking. "I am not actively thinking about my tongue, just like you don't think about your wrist when you move a mouse. I have done a lot of cursor movements in my life. I think my brain has switched over to subconscious control quickly so I just think about moving the cursor," Smith said.

    Everyday life: Communication, play, and problem-solving
    The Neuralink implant has given Smith new ways to interact with his family and the world. He can now play games like Mario Kart with his children and communicate more quickly than before. The system includes a virtual keyboard and shortcuts for common actions, making tasks like copying, pasting, and navigating web pages much easier. Smith also worked with Neuralink engineers to develop a "parking spot" feature for the cursor. "Sometimes you just wanna park the cursor and watch a video. When it is in the parking spot, I can watch a show or take a nap without worrying about the cursor," he explained.

    AI assistance: Keeping up with conversation
    To speed up communication even more, Smith uses Grok, Elon Musk's AI chatbot. Grok helps him write responses and even suggests witty replies. "We have created a chat app that uses AI to listen to the conversation and gives me options to say in response. It uses Grok 3 and an AI clone of my old voice to generate options for me to say. It is not perfect, but it keeps me in the conversation and it comes up with some great ideas," Smith shared. One example? When a friend needed a gift idea for his girlfriend who loves horses, the AI suggested a bouquet of carrots.

    The human side: Family, faith and perspective
    Smith's journey has been shaped by more than just technology. He credits his wife, Tiffany, as his "best caregiver I could ever imagine," and recognizes the support of his kids, friends, and family. Despite the challenges of ALS, Smith finds meaning and hope in his faith. "I have not always understood why God afflicted me with ALS but with time I am learning to trust his plan for me. I'm a better man because of ALS. I'm a better disciple of Jesus Christ because of ALS. I'm closer to my amazing wife, literally and figuratively, because of ALS," he said.

    Looking ahead: What does this mean for others?
    Neuralink's technology is still in its early stages, but Smith's experience is already making waves. The company recently received a "breakthrough" designation from the Food and Drug Administration for its brain implant device, which it hopes will help people with severe speech impairments caused by ALS, stroke, spinal cord injury, and other neurological conditions. Neuro-ethicists are watching closely, as the merging of brain implants and AI raises important questions about privacy, autonomy, and the future of human communication.

    Kurt's key takeaways
    Smith's story is about resilience, creativity, and the power of technology to restore something as fundamental as the ability to communicate. If you or a family member lost the ability to speak or move, would you consider a brain implant that lets you communicate with your thoughts?
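    For readers curious what "decoding intended movements" can look like in practice, here is a deliberately simplified sketch: a linear (ridge-regression) decoder that maps binned spike counts to a 2D cursor velocity. Only the 1,024-channel count and 15 ms bins come from Smith's description; the linear model and the synthetic calibration data are our illustration, not Neuralink's actual pipeline.

```python
# Illustrative only: a toy linear decoder mapping binned spike counts from a
# 1,024-channel array to 2D cursor velocity. The channel count and 15 ms bin
# width come from Smith's description; everything else is made up for the sketch.
import numpy as np

N_CHANNELS, BIN_MS = 1024, 15
rng = np.random.default_rng(0)

# Hypothetical calibration data: spike counts per 15 ms bin, and the cursor
# velocities the user intended during a calibration session.
X = rng.poisson(lam=2.0, size=(5000, N_CHANNELS)).astype(float)   # firing counts
true_W = rng.normal(size=(N_CHANNELS, 2))
y = X @ true_W + rng.normal(scale=5.0, size=(5000, 2))            # intended (vx, vy)

# Fit a ridge-regression decoder: W = (X^T X + lambda I)^-1 X^T y
lam = 10.0
W = np.linalg.solve(X.T @ X + lam * np.eye(N_CHANNELS), X.T @ y)

def decode_bin(spike_counts: np.ndarray) -> np.ndarray:
    """Map one 15 ms bin of spike counts to a 2D cursor velocity."""
    return spike_counts @ W

print(decode_bin(X[0]))  # e.g. array([vx, vy]) for the first bin
```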
  • From LLMs to hallucinations, here’s a simple guide to common AI terms

    Artificial intelligence is a deep and convoluted world. The scientists who work in this field often rely on jargon and lingo to explain what they’re working on. As a result, we frequently have to use those technical terms in our coverage of the artificial intelligence industry. That’s why we thought it would be helpful to put together a glossary with definitions of some of the most important words and phrases that we use in our articles.
    We will regularly update this glossary to add new entries as researchers continually uncover novel methods to push the frontier of artificial intelligence while identifying emerging safety risks.

    AGI
    Artificial general intelligence, or AGI, is a nebulous term. But it generally refers to AI that’s more capable than the average human at many, if not most, tasks. OpenAI CEO Sam Altman recently described AGI as the “equivalent of a median human that you could hire as a co-worker.” Meanwhile, OpenAI’s charter defines AGI as “highly autonomous systems that outperform humans at most economically valuable work.” Google DeepMind’s understanding differs slightly from these two definitions; the lab views AGI as “AI that’s at least as capable as humans at most cognitive tasks.” Confused? Not to worry — so are experts at the forefront of AI research.
    AI agent
    An AI agent refers to a tool that uses AI technologies to perform a series of tasks on your behalf — beyond what a more basic AI chatbot could do — such as filing expenses, booking tickets or a table at a restaurant, or even writing and maintaining code. However, as we’ve explained before, there are lots of moving pieces in this emergent space, so “AI agent” might mean different things to different people. Infrastructure is also still being built out to deliver on its envisaged capabilities. But the basic concept implies an autonomous system that may draw on multiple AI systems to carry out multistep tasks.
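    As a rough sketch of that idea, the loop below lets a model choose tools and feeds the results back until the task is done. The call_llm() helper and the tools are hypothetical stand-ins, not any specific product's API.

```python
# A bare-bones agent loop: the model picks a tool, the program runs it, and the
# result is fed back until the model says it is finished.
def call_llm(messages):
    # Stand-in for a real chat-completion call; here it just scripts two steps
    # so the loop can run end to end.
    tool_results = [m for m in messages if m["role"] == "tool"]
    if not tool_results:
        return {"tool": "search_flights", "args": "NYC to SFO"}
    return {"tool": "finish", "args": "Booked the 9 am flight to SFO."}

TOOLS = {
    "search_flights": lambda query: f"3 flights found for {query}",
    "book_ticket": lambda flight_id: f"Confirmation for {flight_id}",
}

def run_agent(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = call_llm(messages)                     # model decides what to do next
        if action["tool"] == "finish":
            return action["args"]                       # final answer back to the user
        result = TOOLS[action["tool"]](action["args"])  # execute the chosen tool
        messages.append({"role": "tool", "content": result})
    return "Stopped after reaching the step limit."

print(run_agent("Book me a flight to San Francisco"))
```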
    Chain of thought
    Given a simple question, a human brain can answer without even thinking too much about it — things like “which animal is taller, a giraffe or a cat?” But in many cases, you often need a pen and paper to come up with the right answer because there are intermediary steps. For instance, if a farmer has chickens and cows, and together they have 40 heads and 120 legs, you might need to write down a simple equation to come up with the answer.
    In an AI context, chain-of-thought reasoning for large language models means breaking down a problem into smaller, intermediate steps to improve the quality of the end result. It usually takes longer to get an answer, but the answer is more likely to be correct, especially in a logic or coding context. Reasoning models are developed from traditional large language models and optimized for chain-of-thought thinking thanks to reinforcement learning.
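    To make the farmer example concrete, here is the same puzzle solved with the intermediate steps written out, which is essentially what a chain-of-thought prompt asks a model to do:

```python
# The farmer puzzle from above, solved with explicit intermediate steps --
# the same kind of decomposition a chain-of-thought prompt elicits from a model.
heads, legs = 40, 120

# Step 1: every animal has one head, so chickens + cows = 40.
# Step 2: chickens have 2 legs and cows have 4, so 2*chickens + 4*cows = 120.
# Step 3: substitute chickens = 40 - cows into the leg equation and solve.
cows = (legs - 2 * heads) // 2      # 2*(40 - cows) + 4*cows = 120  ->  cows = 20
chickens = heads - cows             # chickens = 20

print(chickens, cows)               # 20 20; check: 20*2 + 20*4 == 120
```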

    Deep learning
    A subset of self-improving machine learning in which AI algorithms are designed with a multi-layered, artificial neural network structure. This allows them to make more complex correlations compared to simpler machine learning-based systems, such as linear models or decision trees. The structure of deep learning algorithms draws inspiration from the interconnected pathways of neurons in the human brain.
    Deep learning AI models are able to identify important characteristics in data themselves, rather than requiring human engineers to define these features. The structure also supports algorithms that can learn from errors and, through a process of repetition and adjustment, improve their own outputs. However, deep learning systems require a lot of data points to yield good results. They also typically take longer to train compared to simpler machine learning algorithms — so development costs tend to be higher.
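    A minimal sketch of what "multi-layered" means in practice, using PyTorch purely as an example framework (the glossary doesn't prescribe one):

```python
# A minimal multi-layer ("deep") network: three stacked linear layers with
# nonlinearities, versus a single linear model. Sizes are arbitrary.
import torch
from torch import nn

deep_model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),   # layer 1: learns simple feature combinations
    nn.Linear(64, 64), nn.ReLU(),   # layer 2: combines those into richer features
    nn.Linear(64, 1),               # output layer
)

x = torch.randn(8, 32)              # a batch of 8 examples with 32 features each
print(deep_model(x).shape)          # torch.Size([8, 1])
```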
    Diffusion
    Diffusion is the tech at the heart of many art-, music-, and text-generating AI models. Inspired by physics, diffusion systems slowly “destroy” the structure of data — e.g. photos, songs, and so on — by adding noise until there’s nothing left. In physics, diffusion is spontaneous and irreversible — sugar diffused in coffee can’t be restored to cube form. But diffusion systems in AI aim to learn a sort of “reverse diffusion” process to restore the destroyed data, gaining the ability to recover the data from noise.
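    A sketch of the forward, "destroy" half of that process with a simple Gaussian noise schedule. The schedule values are illustrative, and a real system would also train a network to run these steps in reverse:

```python
# Forward diffusion: progressively add Gaussian noise to data until only noise
# remains. A trained model learns to reverse this. Schedule values are illustrative.
import numpy as np

def noise_step(x, beta):
    """One forward step: x_t = sqrt(1 - beta) * x_{t-1} + sqrt(beta) * noise."""
    return np.sqrt(1 - beta) * x + np.sqrt(beta) * np.random.randn(*x.shape)

x = np.random.rand(28, 28)              # stand-in for an image
betas = np.linspace(1e-4, 0.02, 1000)   # noise added at each of 1,000 steps
for beta in betas:
    x = noise_step(x, beta)             # after the loop, x is close to pure noise
```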
    Distillation
    Distillation is a technique used to extract knowledge from a large AI model with a ‘teacher-student’ model. Developers send requests to a teacher model and record the outputs. Answers are sometimes compared with a dataset to see how accurate they are. These outputs are then used to train the student model, which is trained to approximate the teacher’s behavior.
    Distillation can be used to create a smaller, more efficient model based on a larger model with a minimal distillation loss. This is likely how OpenAI developed GPT-4 Turbo, a faster version of GPT-4.
    While all AI companies use distillation internally, some may also have used it to catch up with frontier models. Distillation from a competitor usually violates the terms of service of AI APIs and chat assistants.
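    A common way to implement the teacher-student idea above is a combined loss that pushes the student toward the teacher's softened output distribution as well as the true labels. The temperature and weighting below are illustrative defaults, not values from the article:

```python
# Knowledge distillation in one function: the student is trained to match the
# teacher's softened output distribution in addition to the ground-truth labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: match the teacher's probability distribution at temperature T.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```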
    Fine-tuning
    This refers to the further training of an AI model to optimize performance for a more specific task or area than was previously a focal point of its training — typically by feeding in new, specialized data.
    Many AI startups are taking large language models as a starting point to build a commercial product but are vying to amp up utility for a target sector or task by supplementing earlier training cycles with fine-tuning based on their own domain-specific knowledge and expertise.
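    One widespread fine-tuning pattern, sketched with placeholder sizes: freeze a pretrained backbone and train only a small task-specific head on the new domain data.

```python
# Fine-tuning sketch: keep the pretrained backbone fixed and train a new head
# on domain-specific data. The model and sizes here are placeholders.
import torch
from torch import nn

backbone = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 256))
for p in backbone.parameters():
    p.requires_grad = False             # keep the pretrained weights fixed

head = nn.Linear(256, 3)                # new head for a 3-class target task
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)

x, y = torch.randn(16, 128), torch.randint(0, 3, (16,))   # stand-in fine-tuning batch
loss = nn.functional.cross_entropy(head(backbone(x)), y)
loss.backward()
optimizer.step()
```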
    GAN
    A GAN, or Generative Adversarial Network, is a type of machine learning framework that underpins some important developments in generative AI when it comes to producing realistic data, including deepfake tools. GANs involve the use of a pair of neural networks, one of which draws on its training data to generate an output that is passed to the other model to evaluate. This second, discriminator model thus plays the role of a classifier on the generator’s output, enabling it to improve over time.
    The GAN structure is set up as a competition, with the two models essentially programmed to try to outdo each other: the generator tries to get its output past the discriminator, while the discriminator works to spot artificially generated data. This structured contest can optimize AI outputs to be more realistic without the need for additional human intervention, though GANs work best for narrower applications rather than general-purpose AI.
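    A single, heavily condensed GAN training step under those rules, with arbitrary toy networks; this is a sketch of the generator-versus-discriminator contest, not a production recipe:

```python
# One GAN training step: the discriminator learns to tell real from generated
# data, and the generator learns to fool it. Network sizes are placeholders.
import torch
from torch import nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))    # generator
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))     # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(32, 2) + 3.0                      # stand-in "real" samples
fake = G(torch.randn(32, 16))                        # generated samples

# Discriminator step: score real samples as 1, generated samples as 0.
d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to make the discriminator score fakes as real.
g_loss = bce(D(fake), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```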
    Hallucination
    Hallucination is the AI industry’s preferred term for AI models making stuff up – literally generating information that is incorrect. Obviously, it’s a huge problem for AI quality. 
    Hallucinations produce GenAI outputs that can be misleading and could even lead to real-life risks — with potentially dangerous consequences (think of a health query that returns harmful medical advice). This is why most GenAI tools’ small print now warns users to verify AI-generated answers, even though such disclaimers are usually far less prominent than the information the tools dispense at the touch of a button.
    The problem of AIs fabricating information is thought to arise as a consequence of gaps in training data. For general purpose GenAI especially — also sometimes known as foundation models — this looks difficult to resolve. There is simply not enough data in existence to train AI models to comprehensively resolve all the questions we could possibly ask. TL;DR: we haven’t invented God (yet).
    Hallucinations are contributing to a push towards increasingly specialized and/or vertical AI models — i.e. domain-specific AIs that require narrower expertise – as a way to reduce the likelihood of knowledge gaps and shrink disinformation risks.
    Inference
    Inference is the process of running an AI model. It’s setting a model loose to make predictions or draw conclusions from previously-seen data. To be clear, inference can’t happen without training; a model must learn patterns in a set of data before it can effectively extrapolate from this training data.
    Many types of hardware can perform inference, ranging from smartphone processors to beefy GPUs to custom-designed AI accelerators. But not all of them can run models equally well. Very large models would take ages to make predictions on, say, a laptop versus a cloud server with high-end AI chips. [See: Training]
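    In code terms, inference is just applying weights that some earlier training run already produced to fresh input; a minimal sketch with made-up weights:

```python
import numpy as np

# Weights assumed to have been learned during an earlier training phase.
trained_weights = np.array([0.4, -1.2, 2.0])
trained_bias = 0.5

def predict(features):
    # Inference: a single forward pass; no learning happens here.
    return features @ trained_weights + trained_bias

new_sample = np.array([1.0, 0.3, -0.7])   # previously unseen input
print(predict(new_sample))
```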
    Large language model (LLM)
    Large language models, or LLMs, are the AI models used by popular AI assistants, such as ChatGPT, Claude, Google’s Gemini, Meta’s AI Llama, Microsoft Copilot, or Mistral’s Le Chat. When you chat with an AI assistant, you interact with a large language model that processes your request directly or with the help of different available tools, such as web browsing or code interpreters.
    AI assistants and LLMs can have different names. For instance, GPT is OpenAI’s large language model and ChatGPT is the AI assistant product.
    LLMs are deep neural networks made of billions of numerical parameters (or weights, see below) that learn the relationships between words and phrases and create a representation of language, a sort of multidimensional map of words.
    These models are created from encoding the patterns they find in billions of books, articles, and transcripts. When you prompt an LLM, the model generates the most likely pattern that fits the prompt. It then evaluates the most probable next word after the last one based on what was said before. Repeat, repeat, and repeat. (See: Neural network)
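    The "most probable next word" loop can be sketched very roughly as follows; the tiny vocabulary and the stand-in probability function are invented for illustration, whereas a real LLM computes these probabilities from billions of learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

vocabulary = ["the", "cat", "sat", "on", "a", "mat", "."]

def next_word_probabilities(context):
    # Stand-in for an LLM: invent logits, then normalize them with a softmax.
    logits = rng.normal(size=len(vocabulary))
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

prompt = ["the", "cat"]
for _ in range(4):
    probs = next_word_probabilities(prompt)
    prompt.append(vocabulary[int(np.argmax(probs))])   # pick the likeliest word

print(" ".join(prompt))   # repeat, repeat, and repeat
```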
    Neural network
    A neural network refers to the multi-layered algorithmic structure that underpins deep learning — and, more broadly, the whole boom in generative AI tools following the emergence of large language models.
    Although the idea of taking inspiration from the densely interconnected pathways of the human brain as a design structure for data processing algorithms dates all the way back to the 1940s, it was the much more recent rise of graphical processing hardware (GPUs) — via the video game industry — that really unlocked the power of this theory. These chips proved well suited to training algorithms with many more layers than was possible in earlier epochs — enabling neural network-based AI systems to achieve far better performance across many domains, including voice recognition, autonomous navigation, and drug discovery. (See: Large language model [LLM])
    Training
    Developing machine learning AIs involves a process known as training. In simple terms, this refers to feeding data into the model so that it can learn from patterns and generate useful outputs.
    Things can get a bit philosophical at this point in the AI stack — since, pre-training, the mathematical structure that’s used as the starting point for developing a learning system is just a bunch of layers and random numbers. It’s only through training that the AI model really takes shape. Essentially, it’s the process of the system responding to characteristics in the data that enables it to adapt outputs towards a sought-for goal — whether that’s identifying images of cats or producing a haiku on demand.
    It’s important to note that not all AI requires training. Rules-based AIs that are programmed to follow manually predefined instructions — such as linear chatbots — don’t need to undergo training. However, such AI systems are likely to be more constrained than (well-trained) self-learning systems.
    Still, training can be expensive because it requires lots of inputs — and, typically, the volumes of inputs required for such models have been trending upwards.
    Hybrid approaches can sometimes be used to shortcut model development and help manage costs, such as doing data-driven fine-tuning of a rules-based AI — meaning development requires less data, compute, energy, and algorithmic complexity than if the developer had started building from scratch. [See: Inference]
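    A minimal sketch of that loop, with a toy linear model and invented data standing in for a real network and dataset: the weights start as random numbers and are adjusted, pass after pass, until the outputs match the target pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented dataset: 50 examples, 3 input features, one numeric target.
x = rng.normal(size=(50, 3))
hidden_pattern = np.array([2.0, -1.0, 0.5])
y = x @ hidden_pattern + rng.normal(scale=0.1, size=50)

weights = rng.normal(size=3)              # pre-training: just random numbers
learning_rate = 0.1

for _ in range(500):
    predictions = x @ weights
    error = predictions - y
    gradient = x.T @ error / len(y)       # how each weight should change
    weights -= learning_rate * gradient   # adjust toward the sought-for goal

print(weights)   # ends up close to the pattern hidden in the data
```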
    Transfer learning
    A technique where a previously trained AI model is used as the starting point for developing a new model for a different but typically related task – allowing knowledge gained in previous training cycles to be reapplied.
    Transfer learning can drive efficiency savings by shortcutting model development. It can also be useful when data for the task that the model is being developed for is somewhat limited. But it’s important to note that the approach has limitations. Models that rely on transfer learning to gain generalized capabilities will likely require training on additional data in order to perform well in their domain of focus. (See: Fine-tuning)
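    A minimal sketch of the idea, with invented numbers throughout: reuse a previously trained layer as a frozen feature extractor and fit only a small new piece on top for the related task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend this matrix was learned during training on a large, earlier task.
pretrained_layer = rng.normal(size=(6, 3))

# Small dataset for the new, related task (invented for illustration).
new_task_x = rng.normal(size=(15, 6))
new_task_y = rng.normal(size=15)

# Transfer learning: keep the pretrained layer frozen and reuse its features...
features = new_task_x @ pretrained_layer

# ...then train only a new output layer on top (here via least squares).
new_head, *_ = np.linalg.lstsq(features, new_task_y, rcond=None)

print(features @ new_head)   # predictions for the new task
```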
    Weights
    Weights are core to AI training, as they determine how much importance (or weight) is given to different features (or input variables) in the data used for training the system — thereby shaping the AI model’s output.
    Put another way, weights are numerical parameters that define what’s most salient in a dataset for the given training task. They achieve their function by applying multiplication to inputs. Model training typically begins with weights that are randomly assigned, but as the process unfolds, the weights adjust as the model seeks to arrive at an output that more closely matches the target.
    For example, an AI model for predicting housing prices that’s trained on historical real estate data for a target location could include weights for features such as the number of bedrooms and bathrooms, whether a property is detached or semi-detached, whether it has parking, a garage, and so on. 
    Ultimately, the weights the model attaches to each of these inputs reflect how much they influence the value of a property, based on the given dataset.
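    Sticking with the housing example, a minimal sketch with invented weight values: each feature is multiplied by its weight, and the weighted sum becomes the model's estimate. In a trained model these numbers would be learned from the dataset rather than written by hand.

```python
import numpy as np

# Features for one property: bedrooms, bathrooms, detached (1/0), parking (1/0).
features = np.array([3, 2, 1, 1])

# Invented weights; training would adjust these to reflect how much each
# feature influences price in the historical data.
weights = np.array([40_000, 25_000, 60_000, 15_000])
base_price = 100_000

estimated_price = base_price + features @ weights
print(estimated_price)   # 100000 + 120000 + 50000 + 60000 + 15000 = 345000
```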

  • Breakthrough Alzheimer’s Blood Test Explained By Neurologists

    The FDA recently cleared the Lumipulse blood test for early diagnosis of Alzheimer's disease in people 55 and over with memory loss. The noninvasive Lumipulse blood test measures the levels of two proteins—pTau 217 and β-Amyloid 1-42—in plasma and calculates the ratio between them. This ratio is correlated with the presence or absence of amyloid plaques, a hallmark of Alzheimer's disease, in the brain. (Image credit: Getty)

    Whether you’re noticing changes in your memory that are affecting your daily life, caring for a loved one recently diagnosed with dementia, evaluating a patient as a physician, or simply worried about someone close to you, the recent FDA clearance of the Lumipulse blood test for the early diagnosis of Alzheimer’s disease is a significant development that you should be aware of. Here’s what you need to know about this breakthrough Alzheimer’s blood test.

    The Lumipulse G pTau217/β-Amyloid 1-42 Plasma Ratio test is designed for the early detection of amyloid plaques associated with Alzheimer’s disease in adults aged 55 years and older who are showing signs and symptoms of the condition. If you’ve witnessed a loved one gradually lose their memories due to the impact of amyloid plaques in their brain, you understand how important a test like this can be.

    The Lumipulse test measures the levels of two proteins—pTau 217 and β-Amyloid 1-42—in plasma and calculates the ratio between them. This ratio is correlated with the presence or absence of amyloid plaques in the brain, potentially reducing the need for more invasive procedures like PET scans or spinal fluid analysis.

    Benefits of testing with Lumipulse
    Dr. Phillipe Douyon, a neurologist and author of “7 Things You Should Be Doing to Minimize Your Risk of Dementia,” notes that the Alzheimer’s Association has reported that 50-70% of symptomatic patients in community settings are inaccurately diagnosed with Alzheimer’s disease. In specialized memory clinics, this misdiagnosis rate drops to 25-30%. “Having a test that provides early and accurate insights into the cause of someone’s dementia could be a massive game changer,” says Dr. Douyon.
    Alzheimer's disease. Neurodegeneration. Cross section of a normal and an Alzheimer's brain, showing atrophy of the cerebral cortex, enlarged ventricles, and the hippocampus. Close-up of neurons with neurofibrillary tangles and amyloid plaques. (Vector illustration; image credit: Getty)
    This new test follows the recent FDA approval of two medications, lecanemab and donanemab, which are highly effective in removing amyloid from the brain. Clinical trials have shown that these treatments can slow the progression of dementia. Currently, to qualify for these medications, patients must undergo expensive examinations, such as a brain amyloid PET scan or a lumbar puncture to analyze their spinal fluid. Many patients, however, do not have access to PET imaging or specialist care.

    “A blood test makes diagnostic procedures more accessible and benefits underserved populations,” says Dr. Haythum Tayeb, a neurologist at WMCHealth. “It also enables earlier and more personalized care planning, even before formal treatment begins. This empowers patients and their families to make informed decisions sooner,” Dr. Tayeb adds.
    Who Should Be Tested With Lumipulse
    While this blood test may improve access to care for patients from communities lacking neurology and other specialty services, it is recommended to use it only for individuals experiencing memory problems, rather than for those who are asymptomatic. “Given that there is no specific treatment indicated for asymptomatic persons, there is a risk of introducing psychological harm at this stage,” warns Dr. James Noble, professor of neurology at Columbia University Irving Medical Center and author of Navigating Life With Dementia. “Healthy approaches to lifestyle will remain central in adulthood whether or not someone has a positive test, and that advice will not really change,” adds Dr. Noble.
    Living a healthy lifestyle can significantly enhance brain health, regardless of whether a person has an abnormal accumulation of amyloid in their brain. Key factors include regular exercise, following a healthy diet such as the Mediterranean diet, getting adequate sleep, and engaging in social and cognitive activities. These practices are all essential for maintaining cognitive function. Additionally, taking steps to protect your hearing may help reduce the risk of developing dementia.
    To reduce your risk of dementia, you can do regular exercise, consume a healthy diet such as the Mediterranean diet, get adequate sleep, and engage regularly in social and cognitive activities. (Image credit: Getty)
    Anyone experiencing memory loss should consult their medical provider for an evaluation. The provider can conduct basic cognitive testing and determine if a referral to a specialist is necessary. If the individual meets the criteria for testing, the Lumipulse blood test should also be considered.
    Future Of Alzheimer’s Testing
    “Looking across the wide landscape of medicine, many other conditions benefit from early detection, diagnosis, and treatment. There is no reason to believe that Alzheimer’s disease will be any different,” says Dr. Noble. Indeed, screening for diseases like colon cancer, breast cancer, and high blood pressure has significantly extended the average American lifespan. Imagine how much our lives could change if we could screen for Alzheimer’s dementia in the same way. This would be particularly useful for patients at higher risk due to age or family history.
    Providing earlier intervention for Alzheimer’s disease could potentially reduce amyloid buildup in the brain, help preserve memories, and allow individuals to live more independently at home, rather than in nursing homes.
    Another advantage of using a test like the Lumipulse blood test is the ability to inform a patient that their memory loss is not linked to Alzheimer’s disease. While a negative blood test does not entirely rule out an Alzheimer’s diagnosis, it does make it less probable. This could prompt the medical provider to conduct further testing to identify the actual cause of the patient’s memory loss. In some instances, the medical provider may conclude that the patient’s memory loss is related to normal aging. This is also important so that patients are not unnecessarily placed on medications that may not help them.
    It is reasonable to anticipate that additional blood-based biomarkers for diagnosing Alzheimer’s disease and other dementias will be available in the future. Perhaps one day, there will be a dementia panel blood test that can be sent off to provide early diagnosis of a wide range of dementias.
    Alzheimer’s blood testing is not only beneficial for individuals, but it also represents a significant advancement for research. Doctors and scientists can more easily identify individuals in the early stages of Alzheimer’s disease, which accelerates clinical trials for new medications. This increased diagnostic accuracy can enhance the effectiveness of Alzheimer’s clinical trials, as it ensures that patients enrolled have more reliable diagnoses. Consequently, new and more effective treatments could be developed and made available more quickly.
    The Lumipulse Alzheimer’s blood test marks a pivotal moment in our approach to this disease. While patients may still need confirmatory testing through brain imaging or spinal fluid analysis, this blood test enables the medical community to adopt a more proactive, precise, and personalized strategy for diagnosing and treating patients with dementia. This simple blood test brings us one step closer to earlier answers, better care, and renewed hope for millions of people facing the uncertainty of dementia.