Matt Garman, the Low-Profile Engineer Driving Amazon’s A.I. Infrastructure Push
On a humid stretch of land in northwest Louisiana, construction crews will soon begin preparing the ground for one of the largest digital infrastructure projects in the U.S. On Feb. 23, Amazon announced plans to invest $12 billion to build a network of data center campuses in Caddo and Bossier Parishes—a cornerstone of its expanding A.I. computing backbone. The project is expected to create 540 direct jobs and another 1,700 indirectly, according to state officials.
Amazon is also strengthening its presence in Northern Virginia. Earlier this month, Amazon Data Services purchased George Washington University’s Ashburn campus for $427 million. The company plans to convert the former research site, located in Loudoun County’s “data center alley,” into an advanced facility to handle high-performance computing and A.I. workloads.
Driving this infrastructure expansion is Matt Garman, the low-profile engineer who became CEO of Amazon Web Services (AWS) in June 2024. AWS powers much of the internet by providing on-demand computing, storage and A.I. tools to companies worldwide.
An industrial engineering graduate of Stanford University with an MBA from Northwestern’s Kellogg School, Garman joined Amazon full-time in 2006 after an internship, becoming one of the first product managers for EC2, the computing service that defined early cloud computing. Over nearly two decades, he climbed the ranks, leading AWS’s compute services business and later its worldwide sales and marketing organization before being named CEO of AWS. He succeeded Adam Selipsky, who had run the division since its creator, Andy Jassy, became Amazon’s CEO in 2021, as the cloud business surpassed $90 billion in annual revenue.
“Matt has an unusually strong set of skills and experiences for his new role,” Jassy said in a company blog post announcing Garman’s appointment. “He knows our customers and business as well as anybody in the world, and has senior leadership experience on both the product and demand generation sides.”
Within the tech world, Garman is known as a technically grounded, customer-obsessed operator rather than a showman. His measured approach contrasts with the industry’s louder A.I. hype. “We are incredibly bullish on the company’s growth,” he told CNBC in February. “A.I. is moving beyond content creation to task completion,” he said, describing a shift toward systems that can “process insurance claims” or handle other complex business functions.
He has also pushed back on alarmist narratives about A.I. replacing human jobs. On a podcast with Matthew Berman, Garman dismissed the idea that companies should replace junior developers with A.I. as “one of the dumbest things I’ve ever heard,” arguing that entry-level engineers are often the most fluent users of new A.I. tools.
Under Garman, AWS is investing heavily in custom silicon to make A.I. training and deployment more efficient and affordable. The company recently introduced new generations of its Graviton processors and its third-generation Trainium chips, designed to speed up large-scale machine learning workloads. Its A.I. platform, Amazon Bedrock, gives businesses access to multiple foundation models from providers such as Anthropic, Meta, Mistral AI and Stability AI.
Unlike Microsoft or Google, which promote their own flagship A.I. models, Amazon is positioning AWS as a neutral platform and a marketplace where enterprises can choose among competing A.I. systems. This strategy echoes Jassy’s 2024 shareholder letter, which cast Amazon’s long-term A.I. investments in chips and data centers as essential to powering the next era of computing. “Generative A.I. is going to reinvent virtually every customer experience we know,” Jassy wrote, “and enable altogether new ones.”
But competition is fierce. Microsoft has surged ahead through its partnership with OpenAI, while Google is rapidly integrating its Gemini models into tools such as Vertex AI. Both companies are racing to deepen control over their A.I. ecosystems through proprietary chips, developer tools and massive infrastructure spending.
Amazon, too, has made its own high-profile bets. In late 2024, it completed an $8 billion investment in Anthropic, which now uses AWS as its primary cloud provider and trains its models on Amazon’s Trainium chips. In late 2025, AWS also signed a multi-year infrastructure deal with OpenAI to run portions of its workloads on Amazon’s cloud.
Behind these moves lies one clear reality: the A.I. race is increasingly being fought over infrastructure. Building and running massive data centers—complete with power systems, cooling networks, and fiber links—has become the central bottleneck for the tech industry’s A.I. ambitions.
Amazon’s Louisiana project illustrates both the scale of the opportunity and the challenge. Data centers require enormous energy and water resources, raising concerns from environmental groups about their impact. The Caddo Lake Institute, a local organization, said it is reviewing how Amazon’s proposed campuses might affect the regional water system.
Amazon has pledged to minimize the impact by relying primarily on outside-air cooling and using water only during extreme heat. The company also plans to invest up to $400 million in local water infrastructure improvements to support nearby communities.
For Garman, these expansions represent the physical foundation of Amazon’s long-term A.I. strategy. As cloud providers increasingly double as power utilities for intelligent software, he’s betting that the companies with the most scalable and efficient infrastructure will define the future of computing.