Tigris, a startup founded by engineers behind Uber’s storage platform, has secured a $25 million Series A to develop a distributed storage network built for AI workloads, challenging legacy cloud giants like AWS, Azure, and Google Cloud. The firm’s platform promises low-latency, region-aware storage that “moves with your compute,” replicating data to wherever GPUs run and avoiding costly egress fees, a real pain point for many AI companies. Tigris already serves over 4,000 customers, operates data centers in Virginia, Chicago, and San Jose, and plans to expand to Europe and Asia. SiliconANGLE notes the service is S3-compatible, helping ease adoption. Meanwhile, analysts flag latency and vendor lock-in in existing cloud infrastructure as major constraints for scalable AI deployments.
Sources: SiliconANGLE, TechBuzz
Key Takeaways
– AI workloads are pushing demand for storage architectures that minimize latency and distribution mismatch, creating space for startups like Tigris to offer alternatives to centralized cloud models.
– A major differentiator is eliminating or reducing egress fees (“cloud tax”) and enabling seamless data portability across clouds, weakening vendor lock-in.
– Adoption will depend on compatibility (e.g., S3 compatibility), geographic footprint, reliability, and trust—for regulated industries especially, data sovereignty and resilience matter a lot.
In-Depth
In recent years, cloud computing has largely centralized storage under a few dominant providers. Their architectures and pricing models were optimized for traditional workloads—but artificial intelligence is challenging many of those assumptions. Data-intensive AI tasks often require compute and storage to be co-located, or at least connected with very low latency. When storage sits far from compute, performance degrades; when data crosses provider boundaries, users incur steep “egress” or transfer fees. That’s where Tigris comes in.
Tigris is positioning itself as the storage layer designed for AI’s distributed future. Its model is to build a network of localized data centers that replicate and move data to wherever the compute (e.g. GPUs) lives. The idea is that your code doesn’t have to know where data physically resides, only that it’s fast and accessible. The company has grown quickly, securing more than 4,000 customers—especially generative AI firms—and expanding to major U.S. nodes in Virginia, Chicago, and San Jose. With its $25 million Series A led by Spark Capital, with participation from a16z, it has the backing to push further into Europe and Asia.
One strength of Tigris is its S3 compatibility: developers familiar with Amazon’s interface can adopt it with minimal refactoring, which lowers switching friction. SiliconANGLE reports that the company markets it as an “object storage service optimized for AI workloads.” On the flip side, the incumbents—AWS, Microsoft, Google—already have massive global scale, entrenched ecosystems, and enterprise trust, so replacing or bypassing them isn’t trivial. That said, Tigris’s pitch directly targets the incumbents’ inefficiencies, especially latency and egress fees. For AI firms that shuttle large datasets between clouds or across regions, those costs and bottlenecks aren’t theoretical—they materially affect the bottom line.
If Tigris can reliably deliver on low latency, robust reliability, and global reach, it may become a viable alternative or supplement to Big Cloud for AI-first customers. Whether it succeeds in breaking entrenched dominance will depend on execution, trust, and how much the AI demand curve forces enterprises to rethink where and how they store data.

