Why Is Silicon Valley Spending a Fortune on AI Data Centers?
This week, President Donald Trump and a small group of tech executives — OpenAI CEO Sam Altman, SoftBank CEO Masayoshi Son and Oracle founder Larry Ellison — announced Stargate, a four-year, $500 billion project to build data centers and other artificial intelligence (AI) infrastructure in the U.S. MGX, a UAE AI sovereign fund, is also an equity holder. Technical partners are Nvidia, Arm, Microsoft, Oracle and OpenAI.
Half a trillion dollars is a lot of money, even by Silicon Valley standards. According to IDC, the overall server market is forecast to hit $1.3 trillion by 2028. The biggest builders of data centers include Amazon, Microsoft and Google Cloud, as well as data center companies Digital Realty and Equinix. (AWS, Azure and Google Cloud build data centers as part of their cloud computing business. Meta also builds them, but for its own needs.)
Why are AI data centers needed? Traditional data centers and power grids are struggling to accommodate the intense computational power, data storage and energy required by AI. Running AI models, for instance, is expensive because the underlying algorithms are “extremely computationally hard,” storied VC firm Andreessen Horowitz said in a blog post.
Data Centers Being Built
On Friday (Jan. 24), Meta, which is not part of Stargate, announced its own data center plans. On his Facebook page, CEO Mark Zuckerberg said the company is investing $60 billion to $65 billion in capital expenditures. That’s up from $38 billion in 2024. He said more computing power is needed for Meta AI, the company’s AI assistant that has been deployed in its social media and devices. Meta AI is now serving over a billion people, he said.
More computing power is needed as well to develop Llama 4, the next version of its flagship open-source large language model (LLM), and create an AI engineer that will code alongside Meta’s human engineers in R&D to speed up its AI efforts.
Zuckerberg said he plans to build a data center with a power capacity of more than 2 gigawatts, enough for 2 million homes. This data center is “so large it would cover a significant portion of Manhattan,” he said. In 2025, Meta is also bringing online another 1 gigawatt of power and ending the year with more than 1.3 million GPUs.
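As a rough consistency check on the “2 gigawatts, enough for 2 million homes” claim, the implied average draw per home can be compared against typical U.S. household consumption. The ~10 MWh-per-year figure below is an assumption for illustration, not from the article:

```python
# Implied average power per home if 2 GW serves 2 million homes.
capacity_watts = 2e9
homes = 2_000_000
implied_kw_per_home = capacity_watts / homes / 1000
print(implied_kw_per_home)  # 1.0 kW per home

# Compare with an assumed ~10 MWh/year average U.S. household consumption,
# spread over the 8,760 hours in a year.
annual_mwh_per_home = 10
avg_draw_kw = annual_mwh_per_home * 1000 / 8760
print(round(avg_draw_kw, 2))  # ~1.14 kW
```

The two numbers land in the same ballpark, which is why “1 gigawatt per million homes” is a common rule of thumb in data center reporting.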
“This is a massive effort, and over the coming years it will drive our core products and business, unlock historic innovation, and extend American technology leadership,” Zuckerberg said. “This will be a defining year for AI.”
India is also stepping up. On Thursday (Jan. 23), Bloomberg reported that Reliance Group, a company led by India’s richest man, Mukesh Ambani, will build what could be the world’s largest data center: a 3 gigawatt facility in Jamnagar, India. The data center is expected to be completed in about two years. Currently, many of the largest data centers in operation are less than 1 gigawatt. (Data center capacity is measured in how much power it can supply to computing operations.)
Earlier this month, Microsoft announced plans to invest $3 billion in cloud and AI infrastructure in India, including building new data centers. In total, Microsoft has the highest number of data centers globally at more than 300. AWS comes second with more than 100. Google has around 33 while Meta currently operates 27 and Apple reportedly has at least nine.
How AI Data Centers Are Different
“AI data centers are fundamentally different because they require specialized hardware and infrastructure to handle the massive parallel processing needed for AI workloads,” Deborah Perry Piscione, co-founder of Work3 Institute, an AI and Web3 advisory firm, told PYMNTS.com.
“Traditional data centers focus on storage and basic compute, while AI facilities need dense configurations of GPUs and AI accelerators, like Nvidia’s H100s, designed specifically for the complex matrix calculations that power AI models,” she said.
Demand for computational power will only increase: Training each new generation of AI models requires more processing power than the last. For example, OpenAI’s GPT-3 LLM consumed 1,300 megawatt-hours of electricity to train — enough to power 130 U.S. homes for a year, according to a blog post by the World Economic Forum (WEF). GPT-4 is estimated to have used 50 times more electricity, the international group said.
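The WEF figures above can be checked with simple arithmetic. The ~10 MWh-per-home annual consumption used below is an assumption (a commonly cited U.S. average), not stated in the article:

```python
# Sanity-check the WEF training-energy figures.
gpt3_training_mwh = 1_300
avg_home_mwh_per_year = 10  # assumed average U.S. household consumption

homes_powered_for_a_year = gpt3_training_mwh / avg_home_mwh_per_year
print(homes_powered_for_a_year)  # 130.0 — matches the article's "130 U.S. homes"

# "50 times more electricity" for GPT-4, per the WEF estimate.
gpt4_estimate_mwh = gpt3_training_mwh * 50
print(gpt4_estimate_mwh)  # 65000 MWh, i.e. 65 GWh
```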
WEF said AI’s computational power demand is doubling roughly every 100 days. Currently, data centers supporting AI are consuming around 4% of U.S. electricity, and the figure could double by 2030, WEF added.
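Doubling every 100 days compounds quickly. A short calculation shows what that growth rate implies over a single year (an illustration of the WEF claim, not an additional forecast):

```python
# If demand doubles every 100 days, growth over one year is 2^(365/100).
doubling_period_days = 100
days_in_year = 365

growth_factor = 2 ** (days_in_year / doubling_period_days)
print(round(growth_factor, 1))  # ~12.6x in a single year
```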
Notably, AI processes run continuously, unlike traditional computing with its downtimes. AI models are constantly learning and processing data, which contributes to their high power requirements. “This relentless requirement results in a nonstop drain on energy resources, as systems remain active around the clock,” clean energy provider Bloom Energy wrote in a blog post.
Where to Find Power Sources?
Given their high power usage, AI model training and inference are raising concerns that they will strain the power grid. One solution has been for data centers to build their own power generators. That’s the case with Stargate; Trump said these data centers will have their own power plants.
Last December, Google partnered with Intersect Power and TPG Rise Climate to develop clean power plants next to its data centers. The three will develop industrial parks with gigawatts of data center capacity in the U.S. “co-located with new clean energy plants to power them,” Alphabet President Ruth Porat said in a blog post. The first plant will be online in 2027.
Tech giants have also been looking to nuclear power for their data centers. In March 2024, AWS purchased a 960-megawatt data center next to a 2.5 gigawatt nuclear power plant in northeast Pennsylvania from Talen Energy for $650 million. Last September, Microsoft signed a 20-year deal to buy nuclear energy from Constellation, restarting Unit 1 reactor of Three Mile Island in Pennsylvania. It was Unit 2 that partially melted down in 1979.
Amazon later moved deeper into nuclear, announcing last October that it would invest in small modular reactors. Their smaller size would let them be built closer to the grid, and construction would be faster as well. Two days before Amazon’s announcement, Google made a similar one: It would buy nuclear energy from several small modular reactors to be built by Kairos Power. The first reactor is slated to come online in 2030, with a total capacity of 500 megawatts.
Proposals for locating data centers next to nuclear power plants have cropped up in New Jersey, Texas and Ohio, according to the Institute of Electrical and Electronics Engineers, or IEEE. Sweden is considering using small modular reactors to power its data centers, the organization said.
The post Why Is Silicon Valley Spending a Fortune on AI Data Centers? appeared first on PYMNTS.com.