An unexpected alliance in the shadow of sanctions. NVIDIA courts RISC-V to keep CUDA relevant in China and beyond.
NVIDIA dropped a bombshell at the 2025 RISC-V Summit in China: its powerhouse CUDA platform, long tethered to x86 and ARM CPUs, is now embracing RISC-V, the open-source instruction set architecture that’s quickly becoming the favorite child of China’s chip industry. With this move, NVIDIA doesn’t just diversify its AI computing ecosystem. It’s sending a message: CUDA will go wherever AI needs it, and where U.S. export rules can’t stop it.
Why RISC-V, and Why Now?
RISC-V has been the underdog of the CPU world: lightweight, flexible, and royalty-free. For years, it’s been the darling of research labs and low-power devices. But thanks to rising geopolitical tensions and tightening U.S. restrictions on AI hardware exports to China, this architecture has suddenly become serious business. Chinese chipmakers are adopting RISC-V en masse, and now NVIDIA appears ready to ride that wave.
At the summit in Beijing, Frans Sijstermans, NVIDIA’s VP of Hardware Engineering (and a RISC-V board member), took the stage to announce CUDA’s native support for RISC-V CPUs. A diagram shown during his keynote painted a bold vision: a compute system in which a RISC-V processor orchestrates CUDA tasks, a DPU handles networking, and an NVIDIA GPU crunches the numbers: a fully heterogeneous architecture for AI and HPC applications.
CUDA Finds a New Host
Traditionally, CUDA, NVIDIA’s secret sauce for GPU-accelerated computing, has required an x86 or ARM CPU to run its drivers and schedule work for the GPU. Now, RISC-V joins the party, and it’s not just a research experiment. CUDA drivers can now run natively on RISC-V CPUs, enabling the full AI stack, including the operating system, application logic, and GPU orchestration, without relying on Intel or ARM silicon.
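What "host orchestration" means in practice is ordinary CUDA runtime code: the CPU allocates device memory, copies data, and launches kernels, and none of that is tied to x86 or ARM. A minimal sketch using only the standard CUDA runtime API (the RISC-V-specific part is just a native toolchain and driver, not anything in the source):

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: executes on the NVIDIA GPU regardless of the host CPU's ISA.
__global__ void scale(float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    const int n = 1 << 20;
    float *host = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    // Host-side orchestration: allocation, copies, kernel launch.
    // This is the code path the CUDA driver now supports on RISC-V CPUs.
    float *dev;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);
    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("x[0] = %f\n", host[0]);
    cudaFree(dev);
    free(host);
    return 0;
}
```

The device code and API calls above are unchanged from any x86 or ARM build; the announcement presumably amounts to the CUDA driver stack and host-side compilation being available for riscv64 Linux targets, though NVIDIA has not yet published toolchain details.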
As NVIDIA’s GB200 and GB300 AI accelerators remain restricted in China, the company is signaling that it won’t abandon its second-largest market. Instead, it’s offering CUDA on local hardware, no proprietary CPU required. CUDA stays relevant, even when the GPU is banned.
Strategic Maneuver
This isn’t just a technology upgrade. It’s a strategic maneuver in an industry where platforms and ecosystems are the real battleground. CUDA dominates the AI development world, but NVIDIA knows it can’t rest on its laurels, especially with AMD’s ROCm growing stronger and pushing open alternatives.
By enabling CUDA to work with RISC-V, NVIDIA is opening the door to more developers, particularly in regions where RISC-V is becoming the national standard. Embedded developers using Jetson, custom data center designs, or edge AI deployments now have the freedom to run CUDA on platforms other than ARM or x86.
It’s also a low-key power play: NVIDIA can now say, “CUDA runs everywhere,” even in places where it can’t sell its top-tier silicon.
A Future Beyond ARM?
Does this mean NVIDIA is preparing to shift away from ARM? Not likely, at least not entirely. ARM remains central to many of NVIDIA’s products, including its Grace CPU line. But it does show that NVIDIA is hedging its bets, preparing for a world where custom, local, and open silicon plays a growing role in AI infrastructure.
NVIDIA isn’t betting the farm on RISC-V; it’s placing a well-timed, high-stakes side bet. And if the open-source CPU boom really takes off, it’ll be ready.
A Political and Technical Pivot
In a world where chip supply chains are fragmented and nationalized, NVIDIA is doing what it does best: adapting faster than the rules can change. CUDA on RISC-V is more than an engineering feat; it’s a geopolitical workaround, a market expansion tool, and a signal to competitors.
CUDA is no longer married to proprietary CPU platforms. It’s going open, just not completely. With RISC-V in the mix, NVIDIA ensures that wherever AI needs to run, CUDA will be there.