Taiwan’s National Security Bureau (NSB) has publicly identified five Chinese-developed artificial-intelligence tools (DeepSeek, Doubao, Wenxin Yiyan, Tongyi, and Yuanbao) as posing serious cybersecurity and content-bias risks. In the NSB’s review, each model failed multiple inspection indicators: DeepSeek failed 8 of 15 criteria, covering risks such as excessive location-data requests, screenshot harvesting, forced acceptance of privacy terms, and device-parameter collection. The bureau also noted that some output was “strongly biased” or tended toward disinformation or censorship. Taiwan has already barred such tools from government use, citing concerns over data exfiltration, manipulation, and state-influenced narratives. Independent observers point to the action as a growing flash point in the broader U.S.–China strategic competition for AI dominance and influence.
Sources: Epoch Times, Focus Taiwan
Key Takeaways
– Taiwan is now subjecting Chinese-developed generative-AI tools to active security review for both data-privacy risks and content bias, signalling that it treats them not as benign consumer software but as potential strategic risk vectors.
– Taiwan’s ban on these tools in the public sector reflects growing distrust of Chinese tech platforms as conduits for state access, censorship, or influence operations; the implications extend beyond Taiwan into global tech and national-security arenas.
– The public naming of these tools, including DeepSeek, underscores how China’s push in AI is increasingly viewed abroad less as pure innovation and more as entangled with state control, surveillance, data harvesting, and geopolitical leverage.
In-Depth
On the stage of global tech competition, Taiwan has drawn a red line around five Chinese-made artificial-intelligence tools, foremost among them DeepSeek. The island’s National Security Bureau (NSB) completed an inspection of the outputs and underlying data practices of DeepSeek, Doubao, Wenxin Yiyan, Tongyi, and Yuanbao, finding that each falls short of acceptable standards in categories such as user-data collection, access permissions, and content bias. DeepSeek, for instance, failed eight of fifteen key indicators, including those covering location-data access, screenshot harvesting, forced acceptance of broad privacy terms, and device-parameter collection. The NSB also flagged that some generated content tilted heavily toward biased or censored framings rather than neutral, open discourse.
Taiwan has already barred government agencies from using these tools, pre-empting the risk that a state or foreign entity could access sensitive data or propagate influence campaigns through everyday AI tools. The move illustrates how AI has become a strategic frontier: no longer just a matter of consumer convenience or business efficiency, but of national-security posture. Observers note that China’s drive for global AI leadership comes hand in hand with its national-intelligence and surveillance apparatus: commercial models developed under Beijing’s oversight may carry backdoors, data flows to state-linked servers, or mandated censorship aligned with Beijing’s narratives.
For countries like Taiwan, already under persistent geopolitical pressure from Beijing, scrutiny of foreign AI tools is not merely academic. It reflects real-world concerns that these tools can erode data sovereignty, weaponize user data, or serve as vectors for cognitive and informational warfare. The naming of these five tools opens a new front in the tech competition, one where AI adoption is weighed not only on performance but on trust, provenance, control, and alignment with democratic values. As China accelerates its AI deployments, the global community is increasingly asking whether “cheap and effective” models are also safe, private, and unbiased, or whether they carry hidden costs beyond the user interface.