This AI scans Reddit for ‘extremist’ terms and plots bot-led intervention
A computer science student is behind a new AI tool designed to track down Redditors showing signs of radicalization and deploy bots to “deradicalize” them through conversation.
First reported by 404 Media, PrismX was built by Sairaj Balaji, a computer science student at SRMIST in Chennai, India. The tool works by analyzing posts for specific keywords and patterns associated with extreme views, giving those users a “radical score.” High scorers are then targeted by AI bots programmed to attempt deradicalization through engaging the user in conversation.
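PrismX's actual code has not been published, but the description above — keyword matching, a per-user score, and a threshold that triggers intervention — maps onto a simple scoring pipeline. As a rough illustration only, here is a minimal Python sketch of that idea; the watchlist terms, weights, and function names are hypothetical and not drawn from PrismX.

```python
# Hypothetical sketch of keyword-based "radical scoring" —
# an illustration of the approach described, not PrismX's actual code.
from dataclasses import dataclass

# Assumed watchlist: a real system would use a curated, regularly
# reviewed lexicon with weights, not a hard-coded dictionary.
WATCHLIST = {"keyword_a": 2.0, "keyword_b": 1.5}

@dataclass
class ScoredUser:
    username: str
    score: float

def radical_score(posts: list[str]) -> float:
    """Sum watchlist weights over a user's posts, averaged per post."""
    if not posts:
        return 0.0
    total = sum(
        weight
        for post in posts
        for term, weight in WATCHLIST.items()
        if term in post.lower()
    )
    return total / len(posts)

def flag_users(users: dict[str, list[str]], threshold: float = 1.0) -> list[ScoredUser]:
    """Return users whose average score exceeds the threshold —
    the point at which a tool like PrismX would deploy a conversation bot."""
    scored = (ScoredUser(name, radical_score(posts)) for name, posts in users.items())
    return [u for u in scored if u.score > threshold]
```

Even at this toy scale, the limitation is visible: plain keyword matching cannot distinguish a user quoting or condemning extremist language from one endorsing it, which is part of why scoring systems of this kind raise the concerns described below.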
According to the federal government, the primary terror threat to the U.S. is now individuals radicalized to violence online through social media. At the same time, fears about surveillance technology and artificial intelligence infiltrating online communities make tools like this an ethical minefield.
Responding to concerns, Balaji clarified in a LinkedIn post that the conversation part of the tool has not been tested on real Reddit users without consent. Instead, the scoring and conversation elements were used in simulated environments for research purposes only.
“The tool was designed to provoke discussion, not controversy,” he explained in the post. “We’re at a point in history where rogue actors and nation-states are already deploying weaponized AI. If a college student can build something like PrismX, it raises urgent questions: Who’s watching the watchers?”
While Balaji doesn’t claim to be an expert in deradicalization, as an engineer, he is interested in the ethical implications of surveillance technology. “Discomfort sparks debate. Debate leads to oversight. And oversight is how we prevent the misuse of emerging technologies,” he continued.
This isn’t the first time Redditors have been used as guinea pigs in recent months. Just last month, researchers from the University of Zurich faced intense backlash after experimenting on an unsuspecting subreddit.
The research involved deploying AI-powered bots into the r/ChangeMyView subreddit, which positions itself as a “place to post an opinion you accept may be flawed,” in an experiment to see whether AI could be used to change people’s minds. When Redditors, and Reddit itself, found out they were being experimented on without their knowledge, they weren’t impressed.
Reddit’s chief legal officer, Ben Lee, wrote in a post that neither Reddit nor the r/ChangeMyView mods knew about the experiment ahead of time. “What this University of Zurich team did is deeply wrong on both a moral and legal level,” Lee wrote. “It violates academic research and human rights norms, and is prohibited by Reddit’s user agreement and rules, in addition to the subreddit rules.”
While PrismX is not currently being tested on real, unconsenting users, it adds to the ever-growing questions about the role of artificial intelligence in human spaces.