
    Startups Race to Offer Encrypted AI as Privacy Concerns Mount


    A growing wave of technology startups is pitching encrypted artificial intelligence as a way to attract privacy-conscious users and enterprises, offering systems that keep data secure even during processing and prevent providers from storing or training on sensitive inputs. NEAR AI’s platform, for example, encrypts prompts and responses so that even model hosts can’t view or retain chats, targeting business customers concerned about traditional AI data exposure. In Europe, Internxt AI launched a conversational AI with full user anonymity and end-to-end encryption, hosted on regional servers and designed to comply with strict data-protection laws. Meanwhile, in the hardware space, Niobium, a Dayton-based startup, has raised more than $23 million to build custom silicon that accelerates fully homomorphic encryption (FHE), which allows computation on encrypted data and aims to make “zero-trust” computing practical at scale as quantum threats grow. Together, these developments reflect a shift toward privacy-first AI as an alternative to models that log or repurpose user data, and a widening field of encrypted AI options for both enterprise and consumer markets.

    Sources: Tech.eu, Ohio Tech News

    Key Takeaways

    – Encrypted AI is becoming a selling point as startups tout advanced privacy protections to win customers uneasy about traditional data practices.

    – European privacy laws and sovereignty goals are shaping AI offerings like Internxt AI that promise zero data storage and compliance with GDPR.

    – Hardware solutions like homomorphic encryption accelerators are drawing substantial investment to address performance limitations and future-proof privacy in AI and computing.

    In-Depth

    In the rapidly evolving world of artificial intelligence, data privacy is increasingly a battleground for startup innovation and user trust. Traditional AI services from major tech companies often rely on collecting, storing, and using user data to improve model performance—a practice that can leave customers feeling exposed and vulnerable, especially in enterprise settings where sensitive information may be involved. In response, a new class of encrypted AI technologies has emerged, blending cryptography and machine learning to deliver privacy-oriented solutions across platforms and use cases.

    At the forefront of this trend is NEAR AI, one of the companies highlighted in Semafor’s coverage of encrypted AI. NEAR’s platform encrypts the inputs and outputs of AI interactions so that the underlying model host cannot view, retain, or use the content for training or other purposes. The scheme encrypts user prompts locally before they are sent for processing; secure hardware decrypts them inside a protected environment, runs inference, and returns an encrypted result. For privacy-minded enterprises, this can reduce the risk of sending sensitive queries to remote AI services. Traditional safeguards like on-device processing and strict storage limits (such as those Apple applies to small tasks or WhatsApp to certain AI-powered messaging features) have their place, but end-to-end encrypted AI promises a higher level of confidentiality by default, minimizing data exposure even in transit or at rest.
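
    To make that flow concrete, here is a minimal Python sketch of the pattern, with hypothetical names throughout; it is not NEAR’s implementation, and the secure enclave is simulated by an ordinary function. In a real deployment the session key would be established with attested secure hardware via remote attestation, not generated locally.

    ```python
    # Client-side encryption of a prompt before it leaves the device.
    # The host relaying this traffic sees only ciphertext in both directions.
    from cryptography.fernet import Fernet

    session_key = Fernet.generate_key()  # in practice: negotiated with the enclave
    client = Fernet(session_key)

    def enclave_inference(encrypted_prompt: bytes) -> bytes:
        """Stand-in for the secure hardware: decrypt, run inference, re-encrypt."""
        enclave = Fernet(session_key)
        prompt = enclave.decrypt(encrypted_prompt)
        answer = b"model output for: " + prompt  # placeholder for real inference
        return enclave.encrypt(answer)

    encrypted = client.encrypt(b"summarize our Q3 financials")
    response = client.decrypt(enclave_inference(encrypted))
    print(response)
    ```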

    Europe is rapidly becoming a hub for privacy-first AI alternatives. Spanish tech company Internxt recently rolled out Internxt AI, a privacy-centric generative AI assistant designed to operate with total user anonymity and strict compliance with European data-protection standards like GDPR. Built entirely on EU infrastructure, the platform uses end-to-end encryption and a zero-knowledge architecture that prevents even the service provider from accessing or storing user data. The emphasis on data sovereignty (keeping information fully under user control and within regional legal boundaries) distinguishes Internxt from American and global competitors that may rely on broader data collection practices. By promising no logs, no tracking, and no linkage to user identities, Internxt’s approach mirrors broader European policy goals that treat privacy not as an optional feature but as a fundamental expectation of digital services.
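
    Zero-knowledge storage typically rests on client-side key derivation: the encryption key is derived from the user’s passphrase on the device, so the provider holds only ciphertext it cannot decrypt. The sketch below shows that general pattern using Python’s cryptography package; it illustrates the technique, not Internxt’s actual code.

    ```python
    # Zero-knowledge pattern: derive the key client-side from a passphrase,
    # so the server stores only salt and ciphertext and can decrypt neither.
    import base64, os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

    passphrase = b"correct horse battery staple"
    salt = os.urandom(16)  # stored alongside the ciphertext, not secret

    kdf = Scrypt(salt=salt, length=32, n=2**14, r=8, p=1)
    key = base64.urlsafe_b64encode(kdf.derive(passphrase))

    ciphertext = Fernet(key).encrypt(b"user chat message")
    # Only someone who knows the passphrase can re-derive the key:
    print(Fernet(key).decrypt(ciphertext))  # -> b'user chat message'
    ```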

    On the infrastructure side, companies like Niobium are tackling the performance challenges that come with advanced encryption techniques. Fully homomorphic encryption (FHE) has long been considered a potential “holy grail” of data privacy because it allows computations to be performed on encrypted data without ever decrypting it. This could fundamentally change how sensitive data is handled in cloud computing, AI training, healthcare analytics, and financial modeling. The technical obstacle has always been performance: FHE computations run orders of magnitude slower on standard hardware, making them impractical for most real-world applications. Niobium’s strategy is to build custom silicon accelerators that dramatically speed up FHE processing, letting encrypted workloads run at speeds closer to those of unencrypted systems. The company’s recent funding, more than $23 million raised in an oversubscribed round, reflects investor confidence that privacy-preserving computing will be a critical foundation for future data security, especially as quantum computing threatens to break existing cryptographic standards.
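
    Production FHE libraries are heavyweight, but the core idea of computing on ciphertexts can be demonstrated with the much simpler Paillier cryptosystem, which is only additively homomorphic (it supports sums, not arbitrary programs). The toy Python sketch below uses insecure, hardcoded small primes purely for illustration.

    ```python
    # Paillier: a server can add numbers it cannot read by multiplying
    # their ciphertexts. FHE generalizes this to arbitrary computation.
    import math, random

    p, q = 1789, 1867                # toy primes; real keys use 1024+ bit primes
    n, n2 = p * q, (p * q) ** 2
    g = n + 1                        # standard choice of generator
    lam = math.lcm(p - 1, q - 1)     # requires Python 3.9+
    mu = pow(lam, -1, n)             # modular inverse of lambda mod n

    def encrypt(m: int) -> int:
        r = random.randrange(2, n)
        while math.gcd(r, n) != 1:   # r must be coprime to n
            r = random.randrange(2, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c: int) -> int:
        return ((pow(c, lam, n2) - 1) // n * mu) % n

    c1, c2 = encrypt(41), encrypt(17)
    c_sum = (c1 * c2) % n2           # computed without ever decrypting
    assert decrypt(c_sum) == 58
    print(decrypt(c_sum))            # -> 58
    ```

    Multiplying the two ciphertexts adds the plaintexts underneath; FHE schemes extend this so that both addition and multiplication, and hence arbitrary circuits, can run over encrypted data, which is exactly the workload Niobium’s accelerators target.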

    Taken together, these developments illustrate a broader shift in the AI industry toward technologies that don’t force users to choose between powerful AI and data privacy. From encrypted conversation interfaces to sovereign AI services and specialized hardware, the innovation ecosystem is responding to user and regulatory demand for stronger, verifiable privacy protections. For enterprises, this means new tools for integrating AI into workflows without exposing proprietary or confidential information. For consumers, it can build trust in AI services that have historically been opaque about how data is used and stored. As these encrypted AI technologies mature, they may well redefine baseline expectations for how AI systems handle personal and sensitive data in the years ahead.
