Switzerland, through a collaboration among EPFL, ETH Zurich, and the Swiss National Supercomputing Centre (CSCS), has launched a groundbreaking, fully open-source large language model named Apertus (Latin for "open") as a sovereign alternative to closed or only partially open AI systems such as ChatGPT, Claude, and Meta's Llama 3. Available in two sizes (8B and 70B parameters), Apertus supports over 1,000 languages, including underrepresented ones like Swiss German and Romansh. It was trained on 15 trillion tokens drawn only from public, opt-out-compliant sources, and its release covers model weights, training data, architecture, documentation, and development details under a permissive license. Accessible via Hugging Face, Swisscom's AI platform, and public inference tools, Apertus is positioned as a blueprint for transparency, trust, and regulatory compliance in AI, aimed at researchers, industry, and public institutions worldwide.
Sources: The Verge, Cyber Insider
Key Takeaways
– Apertus emphasizes complete openness—model weights, training data, architecture, and documentation are fully available under permissive terms for wide-ranging use.
– The model was built to be multilingual (1,000+ languages, 40% non-English training data), publicly sourced, and compliant with Swiss and EU data protection and copyright norms.
– Multi-channel accessibility (Hugging Face, Swisscom, Public AI Inference Utility) positions Apertus as both public-oriented infrastructure and a foundation for innovation in AI.
In-Depth
In a world where big tech often dominates with proprietary models and closed doors, Switzerland has taken a deliberate, prudent step toward a more transparent and sovereign AI infrastructure. Apertus is more than just another language model—it’s a public offering built on principles of openness, compliance, and democratic access.
At its core, Apertus comes in two versions, with 8 billion and 70 billion parameters, making it practical for a range of users, from individual researchers to larger organizations. The multilingual commitment is impressive: over 1,000 languages are supported, and 40% of the training data is non-English. That means communities speaking Swiss German, Romansh, and many other underrepresented languages can finally leverage AI in meaningful ways. It is not just inclusive; it is forward-thinking and respectful of linguistic diversity.
What stands out most is Apertus's rigorous ethical and legal grounding. The development strictly adhered to Swiss and EU regulations, collected data only from public, opt-out-compliant sources, and respected website owners' preferences, even retroactively. No stealth scraping, no hidden datasets. And everything, from the training recipes to documentation and model weights, is fully accessible under a permissive open-source license. This transparency may look like a governance exercise, but it is also smart risk management, and it helps build public trust.
Deployment is equally thoughtful. You can access Apertus via Hugging Face, Swisscom’s sovereign AI platform, or through public inference interfaces. It aligns AI with public infrastructure—think railways or utilities—with governance baked in, not patched on later. In a field often driven by sensational breakthroughs, Apertus is quietly setting a stable baseline for trust, accessibility, and responsible innovation.
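For readers who want to try the model directly, the snippet below sketches one plausible way to load an Apertus checkpoint through the standard Hugging Face transformers API. The repository ID shown is an assumption for illustration and should be verified on the Hub; the calls themselves are ordinary transformers usage.

```python
# Minimal sketch: loading an Apertus checkpoint via Hugging Face transformers.
# The repo ID below is an assumption for illustration; check the Hub for the
# actual Apertus model names published by the Swiss AI collaboration.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "swiss-ai/Apertus-8B-Instruct"  # assumed repo ID, verify on the Hub

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# A simple multilingual prompt to exercise the model's language coverage.
prompt = "Wie heisst die Hauptstadt der Schweiz?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights are openly licensed, the same checkpoint can in principle also be served through Swisscom's platform or a public inference utility rather than run locally.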

