SK Hynix Places $13B Bet on AI Memory as Chip Shortages Intensify
South Korea’s SK Hynix has announced plans to invest roughly $13 billion in a new advanced chip packaging plant, as the global memory shortage deepens. As AI systems demand ever-greater computing power and faster data movement, memory makers are racing to expand capacity in some of the industry’s most complex and capital-intensive segments.
The South Korean memory giant said it will invest 19 trillion won in the new facility, which will be built in the city of Cheongju. Construction is scheduled to begin in April, with completion expected by the end of 2027.
According to CNBC reporting, the investment comes at a time when shortages of high-performance memory are pushing prices higher and reshaping competitive dynamics among the world’s leading chipmakers.
A major expansion in Cheongju
Cheongju is already a key manufacturing base for SK Hynix, so expanding there lets the company move quickly while building on established infrastructure, an existing workforce, and long-standing supply chain relationships.
Rather than simply adding capacity, the company said the new plant will play a central role in meeting growing customer demand tied to AI, which has surged far faster than demand for memory used in traditional consumer electronics.
Why advanced packaging matters and the role of HBM
Unlike conventional semiconductor plants that focus primarily on fabricating chips, the new SK Hynix facility will specialize in advanced packaging, an approach that involves stacking and connecting multiple memory dies into a single, high-density module. This process increases performance and energy efficiency while keeping physical size in check, an essential combination for modern data centers, where power consumption and floor space are at a premium.
The investment also reflects SK Hynix’s dominance in high-bandwidth memory (HBM), a specialized form of DRAM designed to sit alongside powerful processors and deliver the rapid data transfer speeds that AI workloads require. HBM has become a critical component in AI accelerators, including those used by Nvidia, the leading supplier of data center AI chips.
Demand for HBM has jumped as companies race to train and deploy larger and more capable AI models. Industry projections cited by SK Hynix suggest the HBM market will grow at a compound annual rate of about 33% between 2025 and 2030. That rapid expansion has made HBM one of the most lucrative and strategically important segments in the memory industry.
The company’s aggressive expansion comes as competition in AI memory builds. Samsung Electronics has also pledged to ramp up HBM production, while US-based Micron is investing heavily to close the gap. What was once a relatively niche category of memory has now become one of the most strategic assets for the entire semiconductor ecosystem.
Rising prices and a memory supercycle
The surge in AI-related demand is already driving sharp price increases. TrendForce, a Taipei-based research firm, said it expects average DRAM prices, including HBM, to rise by 50% to 55% from end-2025 levels. While higher prices pose challenges for electronics manufacturers that rely on memory components, they have significantly boosted earnings for memory producers. Samsung recently said its operating profit for the December quarter is expected to nearly triple from a year earlier, underscoring the strength of demand.
Some analysts believe the industry is entering a prolonged “memory supercycle,” with AI investment driving sustained growth over several years rather than a short-lived spike. Unlike previous cycles driven by consumer electronics, this one is being propelled by massive, long-term spending on data centers and cloud infrastructure. For SK Hynix, expanding advanced packaging capacity now could lock in years of strong demand from major AI chip customers.
AI’s unrelenting appetite reshapes the industry
Ultimately, the new Cheongju plant is a response to the relentless demands of AI. Training large AI models and running them at scale requires enormous amounts of memory bandwidth, low latency, and high energy efficiency. As models grow larger, memory bandwidth becomes just as important as raw computing power.
That shift is forcing chipmakers to rethink their strategies. Advanced packaging, once considered a secondary manufacturing step, is now front and center in the race to support AI at scale. Companies that can scale packaging capacity and deliver reliable HBM supply stand to gain long-term advantages in the AI era.
The post SK Hynix Places $13B Bet on AI Memory as Chip Shortages Intensify appeared first on eWEEK.