
SK Hynix and Samsung Compete for Nvidia Memory Chip Contracts

By James Morales

Key Takeaways

  • Nvidia initially relied exclusively on SK Hynix to supply high-bandwidth memory (HBM) chips used in its AI processors.
  • However, with the H20, the company turned to Samsung for the first time.
  • Nvidia’s HBM contracts are a major prize for South Korean rivals.

Nvidia relies on the latest high-bandwidth memory (HBM) chips to build its most powerful graphics processing units (GPUs).

As demand for Nvidia GPUs has increased in recent years, its suppliers have struggled to keep up.

For South Korean memory chip rivals SK Hynix and Samsung, Nvidia contracts represent significant revenue and promise future growth. As they compete for these lucrative deals, both companies are racing to develop better HBM solutions and scale their manufacturing capacity.

The Role of HBM Chips in AI

While Nvidia still uses an alternative memory technology known as graphics double data rate (GDDR) for its smaller GPUs, it favors HBM for the heavyweight processors used in AI training. 

A crucial difference between the two chip families is placement: GDDR memory sits in separate packages on the GPU circuit board, while HBM dies are stacked and mounted in the same package as the processor. This proximity, combined with a much wider memory interface, gives HBM its primary advantage: bandwidth.
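The bandwidth gap comes down to simple arithmetic: peak bandwidth is interface width multiplied by per-pin data rate. The sketch below uses representative figures from published JEDEC-era specs (a 1024-bit HBM3 stack at 6.4 Gb/s per pin versus a 32-bit GDDR6 chip at 16 Gb/s per pin); these are illustrative industry numbers, not Nvidia product data.

```python
# Illustrative comparison of per-device peak memory bandwidth.
# Figures are representative published specs, not Nvidia product data.

def peak_bandwidth_gb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth = interface width x per-pin data rate, converted to GB/s."""
    return bus_width_bits * pin_rate_gbps / 8

hbm3_stack = peak_bandwidth_gb_s(1024, 6.4)  # one HBM3 stack: 1024-bit interface
gddr6_chip = peak_bandwidth_gb_s(32, 16.0)   # one GDDR6 chip: 32-bit interface

print(f"HBM3 stack: ~{hbm3_stack:.0f} GB/s")  # ~819 GB/s
print(f"GDDR6 chip: ~{gddr6_chip:.0f} GB/s")  # ~64 GB/s
```

Even though a GDDR6 pin runs faster, the far wider interface of a stacked HBM device yields more than ten times the bandwidth per device, which is why Nvidia's largest AI processors pair their GPUs with multiple HBM stacks.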

As Nvidia has accelerated GPU development, its memory needs have increased, and the firm has called on suppliers to improve their game. 

For instance, CEO Jensen Huang reportedly asked SK Hynix to bring forward production of its more advanced HBM4 chips by six months.

The Race to HBM4

HBM4 was originally slated to hit mass production in 2026, but SK Hynix reportedly plans to expedite its roadmap and ship the first chips in the second half of 2025, just in time for the anticipated launch of Nvidia’s next-generation Rubin R100 GPU.

Meanwhile, Samsung has also brought forward its HBM4 timeline, with a similar target to hit mass production in late 2025.

Nvidia’s Memory Needs

When it started incorporating HBM into its GPU designs, Nvidia originally favored SK Hynix chips. Media reports suggest Samsung’s chips didn’t meet Nvidia’s standards for use in its AI processors due to problems with heat and power consumption.

However, in July this year, Nvidia reportedly approved Samsung’s HBM3 for use in the H20 processor, a slimmed-down version of its flagship H200 designed for the Chinese market.

Working with Samsung could help Nvidia ramp up GPU production by addressing supply chain bottlenecks. Meanwhile, Samsung investors see an expanded Nvidia partnership as a potential cash cow.

During its recent quarterly earnings call, Samsung reported strong demand for HBM, driven by increased investments in AI infrastructure. 

“The Company plans to actively respond to the demand for high-value-added products for AI and will expand capacity to increase the portion of HBM3E sales,” it said in a statement.


James Morales

Although his background is in crypto and FinTech news, these days, James likes to roam across CCN’s editorial breadth, focusing mostly on digital technology. Having always been fascinated by the latest innovations, he uses his platform as a journalist to explore how new technologies work, why they matter and how they might shape our future.