Intel Teams Up With SoftBank Subsidiary on Next-Gen AI Memory
Meet Saimemory. It may be a name worth remembering.
Saimemory, a subsidiary of multinational investment giant SoftBank, has signed a collaboration agreement with US chipmaker Intel Corp. to commercialize next-generation memory technology.
The partnership, announced today (Feb. 3), underscores intensifying global efforts to address some of AI’s biggest bottlenecks: memory performance, power consumption, and supply constraints.
As AI models grow larger and more complex, conventional memory architectures are struggling to keep pace, prompting chipmakers and investors to explore new designs that could reshape data centers and high-performance computing over the next decade.
Thanks for the memory
The joint effort is currently known as the “Z-Angle Memory program,” or ZAM.
According to SoftBank, the companies plan to produce prototypes by the fiscal year ending Mar. 31, 2028, with commercialization targeted for fiscal 2029. While technical details remain limited, the timeline suggests a long-term development effort rather than an incremental upgrade to existing memory products.
Market reaction to the announcement was positive. Shares of SoftBank rose 3.13% in Tokyo trading, while Intel stock gained about 5% in overnight trading on Robinhood, reflecting investor optimism around Intel’s renewed focus on advanced technologies and SoftBank’s expanding role in the AI hardware ecosystem.
Intel’s government-funded research
Saimemory, which was established in December 2024, will draw on memory technology and expertise developed by Intel through prior research initiatives. In particular, the company plans to use work Intel conducted under the US Department of Energy’s Advanced Memory Technology program.
That government-backed program focused on developing core technologies for advanced memory systems, with Intel’s contributions aimed at improving performance and power efficiency for next-generation dynamic random-access memory (DRAM) used in computers and servers. Such improvements are increasingly critical as AI workloads require faster data access while operating within tight energy and cost constraints.
“Standard memory architectures aren’t meeting AI needs,” Dr. Joshua Fryman, Intel Fellow and chief technology officer of Intel Government Technologies, said in a statement.
He added that Intel has developed a new memory architecture and assembly approach that improves DRAM performance while lowering power use and costs, positioning the technology for broader adoption over the next decade.
Addressing memory shortages in the AI boom
The partnership comes at a time when demand for memory used in AI-related applications has surged far faster than supply. Memory components such as DRAM and high-bandwidth memory (HBM) are essential for training and running large AI models, and shortages have rippled across the semiconductor supply chain.
Major cloud providers and AI developers have been competing for limited supplies of advanced memory, contributing to rising prices and longer lead times. By pursuing alternative architectures and manufacturing approaches, projects like ZAM aim to alleviate some of these constraints and create new capacity for future AI systems.
For Intel, the collaboration also fits into a broader strategy to regain technological leadership after several years of manufacturing delays and intense competition from rivals. Partnering with SoftBank-backed Saimemory allows Intel to extend the commercial potential of research originally developed for government programs into global markets.
Energy efficiency
Beyond performance and supply, energy efficiency is a central theme of the ZAM program. The emphasis reflects growing concerns over the massive energy consumption associated with AI computing, particularly in large-scale data centers.
As AI workloads scale, power usage has become a limiting factor, driving up operational costs and drawing scrutiny from regulators and environmental groups. Memory systems account for a significant share of that consumption, making efficiency gains especially valuable.
By targeting lower power use alongside improved performance, the Intel–SoftBank collaboration could help data center operators manage energy costs while continuing to expand AI capabilities. If successful, the technology may appeal not only to hyperscalers but also to governments and enterprises seeking more sustainable computing infrastructure.
Broader implications
Fujitsu, a Japanese multinational IT equipment and services company, is also reportedly involved in the project, suggesting a broader ecosystem of partners spanning the US and Japan.
For SoftBank, the deal reinforces its strategy of positioning itself at the center of the AI value chain, extending beyond software and platforms into critical hardware components. Through Saimemory, the company is betting that memory innovation will be as crucial to AI’s future as advances in processors and accelerators.
While commercialization remains several years away, the ZAM program highlights how geopolitical considerations, government-funded research, and private capital are increasingly intersecting in the race to build the next generation of AI infrastructure.