OpenAI cracks down on users developing social media surveillance tool using ChatGPT
www.techspot.com
Doh! It's a given that anytime you release something to the World Wide Web, some people (usually a lot of them) will abuse it. So it's probably not surprising that people are abusing ChatGPT in ways that violate OpenAI's policies and privacy laws. The company has difficulty catching everything, but it brings down the ban hammer when it does.

OpenAI recently published a report highlighting attempted misuses of its ChatGPT service. The company caught users in China exploiting ChatGPT's "reasoning" capabilities to develop a tool for surveilling social media platforms. They asked the chatbot to advise them on a business strategy and to check the tool's code.

OpenAI noted that its mission is to build "democratic" AI models, a technology that should benefit everyone, backed by some common-sense rules. The company actively looks for potential misuses or disruptions by various actors and described a couple originating in China.

The most interesting case involves a set of ChatGPT accounts focused on developing a surveillance tool. The accounts used ChatGPT's AI model to generate detailed descriptions and sales pitches for a social media listening tool. The software, powered by non-OpenAI models, would generate real-time reports on Western protests and send them to Chinese security services. The users also employed ChatGPT to debug the tool's code.

OpenAI's policy explicitly prohibits using its AI technology for surveillance tasks, including unauthorized monitoring on behalf of governments and authoritarian regimes. The company banned those accounts for disregarding the platform's rules.

The Chinese actors attempted to conceal their location by using a VPN. They also used remote-access tools such as AnyDesk and VoIP services to appear to be working from the US. However, the accounts followed a time pattern consistent with Chinese business hours, and the users prompted ChatGPT in Chinese.
The surveillance tool they were developing used Meta's Llama AI models to generate documents based on the collected surveillance data.

Another instance of ChatGPT abuse involved Chinese users generating end-of-year performance reports for phishing email campaigns. OpenAI also banned an account that leveraged the LLM in a disinformation campaign against Cai Xia, a Chinese dissident currently living in the US.

OpenAI Threat Intelligence Investigator Ben Nimmo told The New York Times that this was the first time the company had caught people trying to exploit ChatGPT to build an AI-based surveillance tool. However, with millions of users relying on the service for legitimate purposes, cyber-criminal activity remains the exception, not the norm.