**OpenAI Flags Chinese Operatives Misusing ChatGPT for Mass Surveillance**

*By Mudit Dube | Oct 08, 2025, 05:09 PM*

OpenAI says it has identified suspected Chinese government operatives misusing its AI chatbot, ChatGPT. According to the company, these users attempted to develop tools for large-scale monitoring of data collected from social media platforms.

One such user was banned after using ChatGPT to create promotional materials and project plans for an AI-powered social media listening tool intended for a government client. This tool, referred to as a social media “probe,” was designed to scan platforms including X, Facebook, Instagram, Reddit, TikTok, and YouTube, with a focus on detecting extremist speech as well as ethnic, religious, and political content.

In another case, an account believed to be linked to a government entity was banned after using ChatGPT to draft a proposal for a “High-Risk Uyghur-Related Inflow Warning Model.” This model aimed to analyze transport bookings against police records to monitor travel movements within the Uyghur community.

### OpenAI’s Stance and Access Restrictions

OpenAI highlighted that some of these activities appear intended to enable large-scale monitoring of online or even offline traffic. The company emphasized the importance of continued vigilance to prevent potential authoritarian abuses of its technology.

Notably, OpenAI's models are not officially available in China. The company suspects these users accessed its services through VPNs to bypass regional restrictions.

### Broader Misuse: Russian Hackers and Malware Creation

Beyond the Chinese operatives, OpenAI also reported that Russian hackers have exploited its AI models to develop and enhance malware, including remote access trojans and credential stealers. The company noted that persistent threat actors have modified their tactics to obscure recognizable signs of AI involvement in their malware development.

Despite these concerns, OpenAI found no evidence that its models have enabled threat actors to develop new attack techniques or significantly improved their offensive capabilities.

### Positive Usage Trends: Scam Detection

Misuse aside, OpenAI observed that ChatGPT is employed far more often to identify scams than to create them. By the company's estimate, the chatbot is used for scam detection up to three times as often as for scam creation.

Since launching its public threat reporting in February 2024, OpenAI has disrupted and reported over 40 networks violating its usage policies.

OpenAI’s ongoing efforts reflect its commitment to balancing the innovative potential of AI with safeguards against abuse, ensuring that the technology serves positive and ethical purposes worldwide.
Source: https://www.newsbytesapp.com/news/science/chinese-government-operatives-misusing-chatgpt-for-surveillance-openai/story
