In a neat twist, XCENA has unveiled its MX1 "computational memory" chip at FMS 2025, packing thousands of RISC-V cores right next to RAM via CXL to slash CPU-memory data movement and unlock petabyte-scale, SSD-backed memory expansion, with follow-on models MX1P this year and MX1S in 2026 supporting CXL 3.2. Another perspective notes how CXL as a protocol has matured to offer coherent memory access and far greater flexibility in memory expansion for data-center architectures. And from the academic side, one CXL-centric design called CoaXiaL replaces all traditional DDR paths with CXL to boost bandwidth per pin and keep access delays in check under load, delivering up to a 3× throughput improvement on many-core servers.
Key Takeaways
– XCENA’s MX1 could significantly reshape server design by enabling compute right next to memory, reducing latency and increasing scalability.
– The evolving CXL standard underpins these advances, offering coherent memory protocols and expanding memory capacity in data centers.
– Academic research—like the CoaXiaL architecture—shows that CXL-only memory pathways can yield substantial performance gains over traditional DDR interfaces.
In-Depth
XCENA's MX1 is a smart move that blends serious engineering with genuine innovation. By putting thousands of RISC-V cores onto the memory device itself, attached over CXL, it effectively turns the memory expander into a mini compute cluster. Data no longer has to make the long commute back and forth to the CPU, which cuts latency and eases bandwidth bottlenecks, especially when the device is backing petabyte-scale capacity with SSDs. XCENA has already teased MX1P for this year and MX1S for 2026, both supporting CXL 3.2, the newest revision of the standard.
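To make the data-movement argument concrete, here is a minimal, self-contained sketch. It is not an MX1 program (XCENA has not published a host API); it simply models how much traffic crosses the CXL link when a filter runs on the host versus when the work is pushed down to memory-side cores and only matching rows travel back. The row count and selectivity are assumptions chosen for illustration.

```c
/* Illustrative model of the near-memory compute win: compare the bytes
 * that would cross the link when (a) a whole column is shipped to the CPU
 * for filtering versus (b) the filter runs next to memory and only the
 * matching rows come back. Numbers are assumptions, not MX1 measurements. */
#include <stdio.h>

int main(void)
{
    const double rows        = 1e9;    /* one billion 8-byte keys       */
    const double key_bytes   = 8.0;
    const double selectivity = 0.01;   /* assume 1% of rows match       */

    double host_scan_bytes   = rows * key_bytes;               /* (a) */
    double pushed_down_bytes = rows * key_bytes * selectivity; /* (b) */

    printf("host-side scan    : %6.2f GiB over the link\n",
           host_scan_bytes / (1024.0 * 1024.0 * 1024.0));
    printf("near-memory filter: %6.2f GiB over the link\n",
           pushed_down_bytes / (1024.0 * 1024.0 * 1024.0));
    return 0;
}
```

With those assumptions, the pushed-down version moves two orders of magnitude less data across the link, which is the whole point of computing where the data lives.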
Backing it up, the Compute Express Link (CXL) standard is no flash in the pan: it is an open standard, built atop PCIe, that gives you cache-coherent protocols, direct memory access, and real bandwidth scalability. That is a fundamental shift, opening the door to disaggregated memory architectures beyond the limits of what DIMM slots alone can hold.
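For a sense of how this looks from software today, here is a small sketch assuming a Linux machine where a CXL Type 3 memory expander has been onlined as system RAM; the kernel then exposes it as a CPU-less NUMA node, so ordinary NUMA APIs can place data on it. The node number below is an assumption; check numactl --hardware on a real system.

```c
/* Sketch: allocating from a CXL-attached NUMA node on Linux via libnuma.
 * Assumes the expander appears as CPU-less NUMA node 2 (an assumption;
 * the real node id varies by machine). Build with: gcc cxl_alloc.c -lnuma */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "libnuma: NUMA is not available here\n");
        return 1;
    }

    const int    cxl_node = 2;          /* assumed CXL-backed node        */
    const size_t len      = 64UL << 20; /* 64 MiB                         */

    /* Bind the allocation to the CXL node; the CPU keeps coherent
     * load/store access to it just like local DRAM. */
    void *buf = numa_alloc_onnode(len, cxl_node);
    if (buf == NULL) {
        perror("numa_alloc_onnode");
        return 1;
    }

    memset(buf, 0, len);                /* touch pages so they get placed */
    printf("Placed %zu MiB on NUMA node %d\n", (size_t)(len >> 20), cxl_node);

    numa_free(buf, len);
    return 0;
}
```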
And you don't have to take XCENA's word for it. Academics who have thrown their hats in the ring with systems like CoaXiaL propose going all-in on CXL, ditching DDR interfaces entirely, and squeezing roughly 4× more bandwidth per pin out of the package while cutting down on access delays under load. Their evaluations show a 1.5× to 3× throughput boost on many-core server workloads.
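The bandwidth-per-pin claim is easy to sanity-check with back-of-envelope numbers. The pin counts below are rough assumptions (DDR5 command/address pin counts vary by platform, and PCIe sideband pins are ignored), so the exact ratio depends on how you count, but it lands in the same ballpark as the figure CoaXiaL cites.

```c
/* Back-of-envelope bandwidth-per-pin comparison. All pin counts are rough
 * assumptions for illustration, not an exact accounting. */
#include <stdio.h>

int main(void)
{
    /* DDR5-4800 channel: 4800 MT/s * 8 bytes = 38.4 GB/s, carried over
     * roughly 64 data pins plus on the order of 50 command/address/clock
     * pins (assumed). */
    double ddr_gbs  = 38.4;
    double ddr_pins = 64.0 + 50.0;

    /* One PCIe 5.0 lane (the phy under current CXL): 32 GT/s, about
     * 4 GB/s per direction, full duplex, over 4 signal pins. */
    double cxl_gbs  = 4.0 * 2.0;   /* counting both directions */
    double cxl_pins = 4.0;

    double ddr_per_pin = ddr_gbs / ddr_pins;
    double cxl_per_pin = cxl_gbs / cxl_pins;

    printf("DDR5 : %.2f GB/s per pin\n", ddr_per_pin);
    printf("CXL  : %.2f GB/s per pin\n", cxl_per_pin);
    printf("ratio: about %.1fx (3x-6x depending on counting conventions)\n",
           cxl_per_pin / ddr_per_pin);
    return 0;
}
```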
So here’s the bottom line: XCENA’s MX1 is not just incremental—it’s part of a broader shift. With CXL enabling these near-memory compute units, the line between CPU and memory blurs, promising more responsive, scalable, and cost-efficient servers. It’s the kind of practical—and frankly necessary—innovation we’ve been waiting on to power tomorrow’s data-heavy infrastructure.

