China is racing to achieve AI superpower status, but it is running into a problem that has nothing to do with chip design or wafer fabrication. The real bottleneck right now isn't processors but memory: specifically, high-bandwidth memory (HBM), the ultra-fast stacked DRAM that powers the world's most advanced AI accelerators.
With U.S. export controls tightening and Korean suppliers under increasing pressure, Chinese companies are being forced into a pragmatic but imperfect workaround: replacing HBM with DDR5 in AI systems. It’s a stopgap that lets China keep building AI infrastructure, but it comes with clear performance trade-offs.
HBM vs DDR5
To understand what’s happening, you need to see how different these two memory types really are.
| Feature | HBM | DDR5 |
| --- | --- | --- |
| Bandwidth | Extremely high (multi-TB/s aggregate) | Moderate (~50–60 GB/s per module) |
| Architecture | 3D-stacked dies on an interposer, wide bus | Planar design with a much narrower bus |
| Power efficiency | Superior (per bit transferred) | Good, but worse than HBM at high bandwidth |
| Cost | Very expensive | Far more cost-effective |
| Typical use | AI training, HPC, high-end GPUs | General servers, PCs, and some AI inference |
HBM’s combination of massive bandwidth and excellent energy efficiency is why it’s standard in top-end AI accelerators from NVIDIA, AMD, and others. DDR5 can’t match that level of throughput, but it’s cheaper, more available, and easier to manufacture.
And that last part is where China’s strategy kicks in.
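The bandwidth gap in the table can be made concrete with back-of-envelope arithmetic. The figures below are illustrative assumptions, not numbers from this article: roughly 1.2 TB/s for a single HBM3E stack and about 45 GB/s for a DDR5-5600 module.

```python
# Back-of-envelope comparison of HBM vs. DDR5 bandwidth.
# Both figures are approximate, illustrative assumptions.

HBM3E_STACK_GBPS = 1200   # ~1.2 TB/s per HBM3E stack
DDR5_MODULE_GBPS = 45     # ~45 GB/s per DDR5-5600 module

# How many DDR5 modules would it take to match one HBM stack?
modules_per_stack = HBM3E_STACK_GBPS / DDR5_MODULE_GBPS
print(f"~{modules_per_stack:.0f} DDR5 modules per HBM3E stack")

# An accelerator with 8 HBM stacks (a common high-end configuration)
# reaches an aggregate bandwidth no practical DDR5 channel count matches.
stacks = 8
aggregate_gbps = stacks * HBM3E_STACK_GBPS
print(f"8-stack aggregate: {aggregate_gbps / 1000:.1f} TB/s")
```

On these assumed numbers, one HBM3E stack is worth roughly two dozen DDR5 modules, which is why DDR5 substitution changes system design rather than being a drop-in swap.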
The Real Bottleneck: HBM Supply, Not Chip Manufacturing
According to SemiAnalysis, China’s AI semiconductor buildout is no longer primarily limited by foundry capacity. Domestic chipmakers like SMIC can produce a meaningful volume of AI processors. The real choke point is HBM:
- Chinese companies can design and produce AI chips domestically.
- But they cannot secure enough HBM to feed those chips at scale.
Huawei’s Ascend 910C is the perfect example. On paper, the company has enough foundry capacity, via pre-sanctions stockpiling at TSMC and newer capacity at SMIC, to produce around 805,000 units per year. In practice, HBM shortages mean a large part of that theoretical output is stranded capacity.
Before export controls tightened, Chinese firms reportedly stockpiled around 11.4 million HBM stacks from Samsung. That may sound like a lot, but in the context of hyperscale AI buildouts, it’s just a temporary cushion, not a long-term solution. Once those inventories are burned through, China’s AI ramp hits a hard ceiling.
Meanwhile, NVIDIA and AMD enjoy direct access to Korean and U.S.-aligned memory supply chains, giving Western AI infrastructure a sustained advantage even as China pours money into domestic semiconductor development.
The Long Road to Domestic HBM: CXMT, YMTC, and Xtacking
China’s path to reducing its HBM dependency runs through domestic DRAM and memory makers like CXMT and YMTC.
The problem? Converting a standard DRAM line to high-bandwidth memory production is not a simple upgrade:
- HBM requires 3D stacking of DRAM dies (up to 16-high in the latest generations) with extreme precision.
- That demands specialized equipment and hybrid bonding tools, most of which are still controlled by Western or allied countries.
- Any new U.S. or allied export restrictions targeting HBM-manufacturing tools could significantly stretch timelines.
Current projections suggest that if investments continue and no new restrictions are imposed, China could reach competitive HBM3E production around 2026. That timeline, however, assumes:
- Successful technology transfer
- Stable access to key equipment
- No major geopolitical surprises
A key player here is YMTC, which is reportedly preparing to enter DRAM production and could start buying the necessary gear by late 2025. YMTC’s “secret weapon” is its Xtacking technology, one of the most advanced hybrid bonding techniques in the NAND world. That matters because:
- HBM is basically precision 3D stacking of many memory chips, something YMTC already does at scale for NAND.
- If YMTC can adapt Xtacking to DRAM/HBM, it could dramatically accelerate China's roadmap to domestic high-bandwidth memory.
Why China Is Suddenly So Hungry for DDR5
While HBM is under heavy export restrictions, DDR5 is not, and China is taking full advantage.
From July to September, Korea’s memory semiconductor exports to China hit $7.88 billion, up 20% year-on-year. In August alone, shipments surged by more than 33% in value and about 60% in volume. That’s a massive reversal from the first half of 2025, when exports were shrinking almost every month.
What changed?
- The Trump administration’s expanded export controls block not only U.S. companies but also allied suppliers from sending high-end GPUs and advanced HBM to China.
- In response, Beijing has reportedly instructed data centers with less than 30% completion to replace or cancel orders for foreign chips.
- Result: data centers and cloud operators are turning to domestically designed AI chips – and pairing them with massive amounts of DDR5 instead of HBM.
As Jeon Byeong-seo of the China Economic and Financial Research Institute notes, China is “replacing GPUs with DDR5 chips made in China,” where it cannot import foreign accelerators.
A concrete example is Huawei’s Ascend 910B:
- Earlier versions were announced with HBM.
- The latest version, shown at a tech conference in September, uses DDR5 instead of HBM, a clear signal of pivoting to what’s available.
This is precisely what China did with lithography: when it couldn’t access EUV scanners, it pushed deep ultraviolet (DUV) equipment to its absolute limits to produce 7 nm chips. Now it’s applying the same “bite the bullet and push what we have” mentality to memory.
DDR5: The “Good Enough” Alternative for AI, For Now
Using DDR5 instead of HBM in AI systems is not ideal, but it’s good enough for specific workloads, especially when the goal is to keep AI projects alive rather than chase absolute peak performance.
Key advantages for China:
- Availability: DDR5 can be imported (for now) and produced domestically.
- Cost: Much cheaper per GB than HBM.
- Scale: Data centers can compensate for lower per-chip bandwidth by using more memory channels, more servers, or optimized architectures.
The downside is obvious:
- Lower effective bandwidth per accelerator
- Higher power consumption for the same throughput
- A more complex system design to reach the target performance
Still, in a world where HBM is the new choke point, DDR5-based architectures give China a way to keep scaling its AI infrastructure, even if it’s not at cutting-edge efficiency levels.
And it’s not just about imports:
- CXMT has already commercialized DDR5 and is focused on improving yield.
- Industry estimates suggest CXMT’s global DRAM market share could rise from 7% in 2025 to 10% by 2027.
- The company has reportedly started sampling HBM3 to Huawei, signaling that China is already testing high-end memory internally.
Memory Supercycle + China’s Self-Reliance Push = Volatile Future
The global memory market is in a supercycle, driven by the AI boom. DRAM and HBM prices are rising, and demand is outstripping supply. China's aggressive imports of DDR5 – combined with its domestic DRAM ramp – are intensifying that cycle.
At the same time, its push for memory self-reliance is accelerating:
- More investment in DRAM and HBM R&D
- Higher local output and generational upgrades
- Rapid learning curve, backed by massive government and corporate funding
Experts like Kwon Seok-joon from Sungkyunkwan University warn that Chinese firms could reach Korean-level HBM3 capability much sooner than expected, especially if companies like CXMT and YMTC successfully leverage their existing know-how in stacking and advanced packaging.
The Bottom Line
China’s AI chip ambitions are no longer just about making processors that can rival NVIDIA or AMD. The real game is system-level sovereignty: CPUs, GPUs, accelerators, and the memory that feeds them.
Right now:
- HBM scarcity and export controls are slowing China down.
- DDR5 is the tactical workaround – cheaper, widely available, and increasingly made in China.
- Domestic HBM from CXMT and YMTC is the strategic endgame, potentially around mid-decade if tools and politics cooperate.
Until then, the global memory market and the AI race will be shaped as much by who controls HBM and DRAM supply as by who has the fastest GPU.