
    Taiwan Flags Five Chinese AI Models for National Risk, Including DeepSeek


    Taiwan’s National Security Bureau (NSB) has publicly identified five Chinese-language artificial-intelligence tools, namely DeepSeek, Doubao, Wenxin Yiyan, Tongyi, and Yuanbao, as posing serious cybersecurity and content-bias risks. The NSB’s review found that each of these models fell short on numerous risk indicators: DeepSeek failed 8 of 15 criteria, covering risks such as excessive requests for location data, screenshot harvesting, forced acceptance of privacy terms, and collection of device parameters. The bureau also noted that some output was “strongly biased” or tended toward disinformation or censorship. Taiwan has already barred such tools from government use, citing concerns over data exfiltration, manipulation, and state-influenced narratives. Independent observers point to the action as a growing flash point in the broader U.S.–China strategic and technological competition for AI dominance and influence.

    Sources: Epoch Times, Focus Taiwan

    Key Takeaways

    – The review shows that Chinese-developed generative-AI tools are under active security scrutiny in Taiwan for both data-privacy risks and content bias, signalling that such tools are treated not as benign consumer software but as potential strategic risk vectors.

    – Taiwan’s ban on these tools in the public sector reflects growing distrust of Chinese tech platforms as conduits for state access, censorship, or influence operations; the implications extend beyond Taiwan into global tech and national-security arenas.

    – The public naming of these tools, including DeepSeek, underscores how China’s push in AI is increasingly viewed abroad less as pure innovation and more as entangled with state control, surveillance, data harvesting, and geopolitical leverage.

    In-Depth

    On the stage of global tech competition, Taiwan has drawn a red line around five Chinese-made artificial-intelligence tools, foremost among them DeepSeek. The island’s National Security Bureau (NSB) completed an inspection of the outputs and underlying data practices of DeepSeek, Doubao, Wenxin Yiyan, Tongyi and Yuanbao, finding each to fall short of acceptable standards in categories such as user-data collection, access permissions, and content bias. DeepSeek, for instance, missed eight of fifteen key indicators, including location-data access, screenshot harvesting, forced acceptance of broad privacy terms, and device-parameter collection. The NSB also flagged that some generative content tilted heavily toward biased or censored framing rather than neutral or open discourse.

    Taiwan has already barred government agencies from using these tools, pre-empting the risk that states or foreign entities gain access to sensitive data, or that influence campaigns propagate through online AI tools. The move illustrates how AI has become a strategic frontier: no longer just a matter of consumer convenience or business efficiency, but of national-security posture. Observers note that China’s drive for global AI leadership comes hand in hand with its national-intelligence and surveillance capabilities: commercial models developed under Beijing’s oversight may carry backdoors, data flows to state-linked servers, or mandated censorship that aligns with Beijing’s narratives.

    For countries like Taiwan, already under persistent geopolitical pressure from Beijing, the scrutiny of foreign AI tools is not just academic. It reflects real-world concerns that these tools can erode data sovereignty, weaponize user data, or become a vector for cognitive or informational warfare. The naming of these five tools marks a new front in the tech competition, one where AI adoption is weighed not only on performance but on trust, provenance, control, and alignment with democratic values. As China accelerates its AI deployments, the global community is increasingly asking whether “cheap and effective” models are also safe, private, and unbiased, or whether they carry hidden costs beyond the user interface.
