Semidynamics CEO Roger Espasa discusses the company’s high-performance, configurable RISC-V IP, which emphasizes memory bandwidth and customization for AI and HPC. He highlights their Gazzillion Misses latency-handling IP, integrated tensor units, and focus on real-world performance over benchmarks. Espasa sees RISC-V gaining ground on Arm, balancing standardization with flexibility, and predicts a shift in AI and chiplet adoption.
The company's solutions range from stand-alone CPU cores to all-in-one designs (including NPUs) built around tensor, vector, and CPU combinations. One of Semidynamics' key differentiators is its emphasis on memory bandwidth and customization. Features like Gazzillion Misses, which keeps large numbers of memory requests in flight to hide latency, and configurable memory subsystems allow designers to adapt processors to specific workloads, particularly for AI and data-intensive applications.
In an interview with JPR’s David Harold, Semidynamics CEO Roger Espasa talks about the company’s RISC-V IP and what he envisions for RISC-V’s future.

Tell me about the history of Semidynamics. Where does it come from, and what is its purpose?
Espasa: We started around 2017. The first two years were focused on services, designing a RISC-V chip for an American start-up. After that, we decided to develop our own IP. By 2020–2022, our technology was ready, and we secured customers worldwide interested in our vector technology.
We were the first to introduce a RISC-V large vector unit and the first to combine an out-of-order vector unit with an out-of-order core. This was particularly relevant in HPC, which has now merged with AI. The boundary between HPC and AI has disappeared—everything is AI now.
Customers transitioning into AI told us they loved our vector unit but needed more operations. So, we developed and open-sourced tensor instructions fully compliant with RISC-V. These instructions are now progressing through a RISC-V working group. Our goal is to help customers deploy AI using a simple, RISC-V-only software stack, ensuring our solution remains viable as AI evolves.
There seems to be a shift toward standardization within RISC-V, with profiles ensuring compatibility, while allowing some customization. Is that how you see it? You’ve contributed tensor instructions to RISC-V International—do you think that’s the typical model moving forward?
Espasa: Customization is key to RISC-V’s success, even though it seems some competitors are hesitant. Profiles are useful—they provide a foundation for software optimization and leverage years of development, much like Linux.
However, if RISC-V were purely standardized, why not just use Arm? RISC-V’s appeal lies in offering extra performance through customization. The trick is to balance standardization with flexibility—leveraging profiles, while adding customer-specific enhancements without breaking compatibility.
You’ve spoken about eliminating a one-size-fits-all approach to CPUs. Your architecture supports multiple configurations, from tensor cores to vector cores. How does that differentiate you from competitors?
Espasa: Many assume computing is limited to laptops, smartphones, or data centers, but processors are everywhere—cars, gateways, TVs. These applications require specialized optimizations, which is where RISC-V shines.
We focus on moving data efficiently rather than chasing peak benchmark scores. Traditional CPU benchmarks like SPECint or Dhrystone are fine, but we prioritize real-world workloads, such as McCalpin's STREAM benchmark, which measures sustainable memory bandwidth.
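For readers unfamiliar with STREAM: its "triad" kernel is a simple loop whose performance is limited by memory bandwidth rather than arithmetic. The official benchmark is written in C/Fortran; the NumPy sketch below only illustrates the idea and uses arbitrary array sizes, not anything specific to Semidynamics' cores.

```python
# Illustrative sketch of the STREAM "triad" kernel: a[i] = b[i] + q * c[i].
# The loop does almost no arithmetic per byte, so its speed reflects how
# fast the memory system can stream data, which is the point of STREAM.
import time
import numpy as np

N = 10_000_000          # array length; large enough to spill out of caches
q = 3.0                 # the triad scalar
b = np.full(N, 2.0)
c = np.full(N, 1.0)

start = time.perf_counter()
a = b + q * c           # triad kernel
elapsed = time.perf_counter() - start

# Triad touches three arrays of 8-byte doubles: read b, read c, write a.
bytes_moved = 3 * N * 8
print(f"Approximate triad bandwidth: {bytes_moved / elapsed / 1e9:.1f} GB/s")
```

The reported figure is only approximate (NumPy creates a temporary for `q * c`), but it shows why a core that sustains more concurrent memory traffic scores higher on this kind of workload than on SPECint.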
That ties into your Gazzillion technology. What makes it different?
Espasa: Gazzillion allows a single core to maximize memory bandwidth. One customer expected to need four cores but was thrilled to achieve the same performance with just one, simplifying design and reducing time to market.
To achieve this, we optimized the entire pipeline—from instruction renaming to memory requests—ensuring the core fully utilizes available bandwidth. If more bandwidth is provided, we scale accordingly.
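The reasoning behind keeping many misses in flight follows from Little's law: sustained bandwidth equals the number of outstanding cache-line misses times the line size, divided by memory latency. A back-of-the-envelope calculation makes the point; the numbers below are illustrative assumptions, not Semidynamics figures.

```python
# Little's law applied to memory traffic: to sustain a target bandwidth,
# a core must keep enough cache-line misses in flight to cover the round-trip
# latency. All numbers here are illustrative assumptions, not vendor specs.

line_size = 64            # bytes per cache line
latency = 100e-9          # assumed DRAM round-trip latency, in seconds
target_bw = 50e9          # assumed target of 50 GB/s sustained

# bandwidth = in_flight * line_size / latency  =>  solve for in_flight
in_flight = target_bw * latency / line_size
print(f"Outstanding misses needed: {in_flight:.0f}")
```

A core that can track only a handful of outstanding misses stalls long before reaching such a target, which is why a single latency-tolerant core can replace several conventional ones in bandwidth-bound designs.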
AI demands massive data movement. How does Gazzillion support AI workloads?
Espasa: There are two schools of thought in RISC-V regarding tensor units—one advocating stand-alone tensor units on a bus and another integrating them within the core. Since we can deliver high bandwidth, we chose integration.
Stand-alone tensor units require complex DMA programming, synchronization, and data transfers. With our approach, the tensor unit sits within the core, streamlining AI workloads and reducing software complexity.
Ventana and others are exploring chiplets, partly for IP hardening. Arm is now discussing monolithic designs as well. Where do you stand on chiplets and your business model?
Espasa: Our business model is classic licensing plus royalties.
Chiplets are an obvious evolution, not necessarily because Arm promotes them, but because they capture more value closer to silicon. Some customers can’t afford a three-year design cycle and prefer a solution they can deploy in 18 months.
Chiplets aren’t a turnkey solution yet. Right now, major companies (like Nvidia) are connecting their own chiplets internally. The next step is for different vendors to integrate their chiplets. Until then, we foresee customers integrating existing chiplets with standard interfaces like PCIe or UCIe. We’re actively working in this space, though nothing to announce yet.
What are you doing to optimize power efficiency?
Espasa: There’s no magic trick—it’s all about careful engineering. You need to optimize flip-flops, clock gating, and clock trees.
New designs have an advantage over older ones that accumulate inefficiencies over decades. Our clean-sheet design allows for leaner power management. Additionally, we implement aggressive power gating—turning off vector units when running tensors, and vice versa.
How do you see Europe’s semiconductor strategy and funding landscape? Is there real demand for sovereign technology?
Espasa: There’s growing interest in AI sovereignty. Recently, Ursula von der Leyen [president of the European Commission] announced a €200 billion investment in AI. If Europe wants true sovereignty, it also needs hardware independence.
Right now, I’d say there’s strong appetite, which will likely turn into demand over the next few years. However, for that demand to materialize, purchasing power needs to translate into real adoption of new technology.
With RISC-V maturing, what are the remaining hurdles to mass adoption?
Espasa: RISC-V International is working hard on the server space, which is complex due to the vast number of specifications required. AI standardization is another key area—we need a unified approach to enable software reusability, while maintaining flexibility.
The microcontroller space is already dominated by RISC-V. Phones and laptops are next, but that depends on major players ensuring software readiness, particularly for Android.
Final question—what should I have asked you?
Espasa: How RISC-V and Arm will compete over the next few years. RISC-V is gaining market share from Arm, and the two ecosystems will eventually reach parity. Arm's centralized control may struggle to keep pace with RISC-V's decentralized innovation. The next four to five years will be fascinating to watch.