**Huawei Co-Develops Safety-Focused DeepSeek Model to Block Politically Sensitive Topics**
*By Akash Pandey | September 19, 2025 | 6:45 PM*
Huawei, the Chinese tech giant, has announced the co-development of a modified version of DeepSeek's R1 artificial intelligence (AI) model. The new variant, named **DeepSeek-R1-Safe**, is claimed to be “nearly 100% successful” in filtering politically sensitive topics.
This development aligns with China’s stringent regulations that require domestic AI models and their applications to adhere to “socialist values,” reflecting the government’s ongoing efforts to control the flow of sensitive information online.
### Training and Development
Huawei used 1,000 of its Ascend AI chips to train the large language model. DeepSeek-R1-Safe was adapted from DeepSeek’s open-source R1 model and co-developed with Zhejiang University, the alma mater of DeepSeek founder Liang Wenfeng. However, neither DeepSeek nor Liang Wenfeng was directly involved in this latest project.
### AI Chatbots and Political Sensitivity in China
Chinese AI chatbots, including Baidu’s Ernie Bot — the nation’s counterpart to OpenAI’s ChatGPT — consistently avoid discussing Chinese domestic politics or other sensitive issues. These restrictions comply with the ruling Communist Party’s guidelines, which aim to limit public exposure to politically sensitive content.
### Model Efficiency and Performance
Huawei reports that DeepSeek-R1-Safe is “nearly 100% successful” in blocking common harmful prompts, including toxic speech, politically sensitive content, and incitement to illegal activities. Its effectiveness, however, drops to around 40% when such requests are disguised in scenario-based challenges, role-playing contexts, or encrypted code.
The model’s comprehensive security defense capability reached 83%, surpassing comparable models such as Qwen-235B and DeepSeek-R1-671B by 8% to 15% under identical testing conditions. Importantly, this enhanced safety layer cost less than a 1% drop in overall performance compared with the original DeepSeek-R1.
### Conclusion
Huawei’s DeepSeek-R1-Safe highlights the growing focus on AI safety and regulatory compliance within China’s AI landscape. By strengthening content-filtering and censorship capabilities, the model supports the government’s agenda to regulate information flow while maintaining high performance standards in AI technology.
*Source: https://www.newsbytesapp.com/news/science/huawei-unveils-ai-model-deepseek-r1-safe-to-filter-politically-sensitive-content/story*