CIVICUS discussed China’s tech-enabled repression with Fergus Ryan, a senior analyst at the Australian Strategic Policy Institute (ASPI), where he specializes in how the Chinese Communist Party shapes the global information environment through censorship, propaganda and platform governance. His research includes a major study on China’s AI ecosystem and its human rights impacts, as well as an investigation into China’s use of foreign influencers.
China’s authoritarian government is deploying AI on a large scale to censor, control and monitor its population. As these systems become more sophisticated and are exported abroad, the implications for civic space extend far beyond China’s borders.
What AI systems is China developing?
Based on our research, China is developing a multilayered AI ecosystem designed to rapidly expand state control.
Tech giants are building multimodal large language models (LLMs), such as Alibaba’s Qwen and Baidu’s Ernie Bot, that censor and reshape the details of politically sensitive images. Hardware companies including Dahua, Hikvision and SenseTime supply the camera networks that feed into these systems.
The state is building an AI-powered criminal justice pipeline. This includes City Brain operations centers, such as the one in Shanghai’s Pudong district, which process mass surveillance data, as well as the 206 system, developed by iFlyTek, which analyzes evidence and recommends criminal sentences. Inside prisons, AI monitors prisoners’ facial expressions and tracks their emotions.
AI-enabled satellite surveillance, such as Xinjiang Jiaotong-01, allows autonomous real-time tracking over politically sensitive areas. Additionally, AI-enabled fishing platforms like Sea Eagle expand economic extraction into the exclusive economic zones of countries including Mauritania and Vanuatu, displacing artisanal fishing communities.
How does China use AI for censorship and policing?
China relies on a hybrid model of censorship that combines the speed of AI with human political judgment. By requiring companies to self-censor, the government has created a commercial market for AI moderation tools. Tech giants like Baidu and Tencent have industrialized the process: systems automatically scan images, text and video in real time to detect content deemed risky, while human reviewers handle nuanced or coded speech.
In policing, City Brain systems take data from millions of cameras, drones and Internet of Things sensors and use AI to identify suspects, track vehicles and predict disturbances before they happen. In Xinjiang, the Integrated Joint Operations Platform aggregates data from cameras, phone scanners and informants to generate risk scores for individuals, enabling pre-emptive detention based on behavioral patterns rather than specific crimes.
On platforms like Douyin, the state not only removes content; it also algorithmically suppresses dissent while amplifying ‘positive energy’. AI connects surveillance data directly to narrative control and police action.
What are the human rights implications?
These AI systems erode the rights to freedom of expression, privacy and a fair trial.
Historically, online censorship meant removing a post. Today, generative AI engages in ‘informational gaslighting’. When ASPI researchers showed Alibaba’s LLMs a photo of a protest against human rights violations in Xinjiang, the AI described it as ‘individuals holding signs with false statements in a public setting’ based on ‘bias and lies’. The technology subtly reconstructs reality and prevents users from accessing objective historical truth.
AI also undermines the right to a fair trial. In courts that lack judicial independence, AI systems that recommend punishment or predict recidivism act as a black box that defense lawyers cannot examine.
Mass surveillance changes behavior even when it is not actively used, so its chilling effect can be as significant as direct deployment. Knowing their conversations can be monitored, people self-censor online and in private messaging. Emotion recognition in prisons takes this further: people can in theory be flagged for their internal mental states, punished not only for their actions but also for their thoughts.
Which groups are most affected?
While AI-enabled surveillance affects everyone, ethnic minorities such as Koreans, Mongolians, Tibetans and Uyghurs are disproportionately targeted.
Mainstream LLMs are trained primarily on Mandarin, leaving little commercial incentive to develop AI for minority languages. The Chinese state, however, views those languages as a security vulnerability. State-funded institutions, including the National Key Laboratory at Minzu University, are creating LLMs in minority languages, not for cultural preservation, but to power public-opinion monitoring and control platforms. These scan text, audio and video in Tibetan and Uyghur to detect cultural advocacy, dissent or religious activity.
Feminist activists, human rights lawyers (especially since the 709 crackdown of 2015), labor activists and religious minorities, including Falun Gong practitioners, face disproportionate targeting. Chinese models consistently adopt state-aligned narratives about such groups, labeling Falun Gong a cult and avoiding human rights framing. Since 2020, Hong Kongers have also been subject to national security law surveillance using similar tools to those deployed on the mainland, a reminder that this infrastructure can be rapidly scaled up.
How can activists protect themselves in China?
Staying safe inside China is becoming increasingly difficult: AI leaves very few blind spots. But the system is not completely omniscient.
Activists have historically relied on coded speech, euphemisms and satire, the classic example being the use of ‘Winnie the Pooh’ to refer to President Xi Jinping. Because AI struggles with cultural nuance and evolving memes, new linguistic workarounds can temporarily bypass automatic filters. But it’s a constant game of whack-a-mole: Chinese tech companies employ thousands of human content reviewers whose sole job is to catch new memes and feed them back into the AI.
The most practical steps are to use a VPN to access blocked platforms, secure communication apps like Signal and separate devices for sensitive tasks. None of these is foolproof: VPN use is technically illegal and increasingly detected, and Signal itself can only be reached through a VPN. It also helps to keep a minimal digital footprint and discuss sensitive matters face to face. For activists in Xinjiang, however, surveillance is so pervasive that personal precautions offer little protection. Strong international networks and rigorous documentation practices are essential.
Is China exporting these technologies?
China is the world’s largest exporter of AI-powered surveillance technology, marketing these systems globally, particularly in the Global South.
The Chinese state is deliberately expanding its minority-language public-opinion surveillance software throughout the Belt and Road Initiative countries, effectively expanding its censorship apparatus to monitor Tibetan and Uyghur diaspora communities abroad. Chinese companies including Dahua, Hikvision, Huawei and ZTE have deployed surveillance and ‘safe city’ systems in more than 100 countries, with Saudi Arabia and the United Arab Emirates among the most significant recipients. Crucially, these companies operate under China’s 2017 national intelligence law, which requires cooperation with state intelligence, meaning data flowing through these systems may be accessible to Beijing as well as purchasing governments.
China is also exporting its governance model through open-source releases of its LLMs, embedding Chinese censorship norms into the underlying infrastructure used by developers around the world.
What should the international community do?
The international community needs to respond with coordinated regulation.
First, democratic states should set minimum transparency standards for public procurement. This means refusing to buy AI models that conceal political or historical censorship, and mandating that providers publish ‘moderation logs’ with denial reason codes so users know when content is restricted for political reasons.
Second, states should enact ‘safe-harbor laws’ to protect civil society organizations, journalists, and researchers who audit AI models for hidden censorship. Currently, doing so may violate the corporate terms of service.
Third, strict export controls should prevent the transfer of repression-enabling technologies to authoritarian regimes, while companies providing public-opinion management services should be excluded from democratic markets. Existing targeted sanctions on companies like Dahua and Hikvision for their role in Xinjiang should be more strictly enforced.
Finally, the international community must recognize that Chinese surveillance extends beyond China’s borders. Spyware targeting exiled Tibetan and Uyghur activists is well documented, as is pressure on family members living in China. Rigorous documentation by international civil society is necessary to create an evidentiary record for future accountability.
CIVICUS interviews a wide range of civil society activists, experts and leaders to gather diverse perspectives on civil society action and current issues for publication on its CIVICUS Lens platform. The views expressed in the interviews are those of the interviewees and do not necessarily reflect the views of CIVICUS. Publication does not imply endorsement of the interviewees or the organizations they represent.
© Inter Press Service (20260318085829) – All rights reserved. Original source: Inter Press Service
