When Apollo 17 left Earth for the Moon in December 1972, six men in a control room in Houston tracked every movement by hand, cross-checking calculations on paper and calling out numbers across the room. Fifty-three years later, four astronauts lifted off aboard Orion on Wednesday (April 1), headed for the same destination. This time, AI is in the room too, handling much of the heavy lifting.
Commander Reid Wiseman, Pilot Victor Glover, Mission Specialist Christina Koch and Canadian Space Agency astronaut Jeremy Hansen launched at 6:35 p.m. EDT on an approximately 10-day mission around the Moon and back, NASA confirmed on Thursday (April 2).
The mission is designed to validate everything the agency needs before it attempts a lunar landing, including life support, navigation, communication and crew systems in conditions that cannot be fully replicated on Earth.
Running underneath all of it is a layer of machine intelligence that is doing work no Apollo crew ever had available to them, and that is redefining what human spaceflight could look like.
The Spacecraft That Monitors Itself
For most of the journey, the spacecraft's trajectory calculations and life-support monitoring are handled by algorithms rather than human operators. The mission relies on digital twin simulations and AI systems that watch over life support and plot trajectories in real time, a continuous layer of oversight that would have required dozens of additional specialists in the Apollo era, according to WION News.
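Neither NASA nor WION details the algorithms involved, but continuous health monitoring of this kind often starts with simple statistical checks on telemetry streams. The sketch below is a hypothetical example, not NASA's implementation: a rolling z-score detector that flags a life-support reading, here a cabin CO2 level, when it drifts sharply from its recent baseline. All names and thresholds are illustrative.

```python
from collections import deque
import math

class TelemetryMonitor:
    """Flags a sensor reading that deviates sharply from its recent history.

    Illustrative only: real spacecraft health monitoring spans many
    correlated channels, model-based checks, and fault trees.
    """

    def __init__(self, window: int = 60, z_threshold: float = 4.0):
        self.samples = deque(maxlen=window)  # rolling window of recent readings
        self.z_threshold = z_threshold       # how many std devs counts as anomalous

    def update(self, reading: float) -> bool:
        """Add one reading; return True if it looks anomalous."""
        if len(self.samples) >= 10:  # need a baseline before judging
            mean = sum(self.samples) / len(self.samples)
            var = sum((x - mean) ** 2 for x in self.samples) / len(self.samples)
            std = math.sqrt(var) or 1e-9  # guard against flat data
            if abs(reading - mean) / std > self.z_threshold:
                return True
        self.samples.append(reading)
        return False

# Hypothetical usage: stream cabin CO2 readings (mmHg) into the monitor.
monitor = TelemetryMonitor()
for co2 in [3.0, 3.1, 2.9] * 10 + [7.5]:  # sudden spike at the end
    if monitor.update(co2):
        print(f"Anomaly flagged: CO2 reading {co2} mmHg")
```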
Aerospace infrastructure firm Redwire said in a Monday release that its camera system aboard Orion comprises 11 internal and external cameras that automatically feed imagery into a navigation system tracking Orion’s position and velocity relative to Earth.
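The release does not describe the image processing itself, but one classical optical-navigation technique fits the description: estimate range from how large Earth or the Moon appears in the frame, since apparent size shrinks predictably with distance. A minimal geometric sketch, with hypothetical camera parameters:

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def range_from_apparent_size(radius_px: float,
                             image_width_px: int,
                             fov_deg: float,
                             body_radius_km: float = EARTH_RADIUS_KM) -> float:
    """Estimate distance to a body's center from its apparent radius in an image.

    Uses the small-angle pinhole model to convert pixels to an angular
    radius, then d = R / sin(theta). Camera parameters are hypothetical.
    """
    # Approximate angular size of one pixel (valid for narrow fields of view).
    rad_per_px = math.radians(fov_deg) / image_width_px
    theta = radius_px * rad_per_px  # apparent angular radius of the body
    return body_radius_km / math.sin(theta)

# Hypothetical frame: Earth spans a 120-pixel radius in a 2048-pixel-wide,
# 20-degree field of view, consistent with a spacecraft near lunar distance.
print(f"Estimated range: {range_from_apparent_size(120, 2048, 20.0):,.0f} km")
```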
Before launch, engineers and astronauts ran full mission simulations inside a one-to-one replica of the spacecraft, rehearsing failure scenarios they could not otherwise test, Lockheed Martin said in February. The onboard computers are built to keep those systems running even under the high-radiation conditions of deep space, using a cross-checking mechanism in which any affected processor is outvoted by its peers, according to WION News.
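That cross-checking mechanism matches a long-standing fault-tolerance pattern known as modular redundancy with voting: redundant processors compute the same result, and a majority vote discards any single corrupted output. A stripped-down illustration (flight computers implement this in hardware and at much finer granularity):

```python
from collections import Counter

def majority_vote(outputs: list[int]) -> int:
    """Return the value most processors agree on.

    Mirrors the cross-checking described above: if one processor's output
    is corrupted by a radiation upset, its peers outvote it.
    """
    value, count = Counter(outputs).most_common(1)[0]
    if count <= len(outputs) // 2:
        raise RuntimeError("No majority: too many disagreeing processors")
    return value

# Three redundant processors compute the same guidance value; one is hit
# by a bit flip. The voter masks the fault.
healthy = 0x3F2A
corrupted = healthy ^ (1 << 7)  # a single upset bit in one processor's output
print(hex(majority_vote([healthy, healthy, corrupted])))  # -> 0x3f2a
```

With three voters the scheme masks one simultaneous fault; in general, N voters mask up to (N - 1) // 2, which is why deep-space avionics often carry more than the minimum.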
Why the Distance Changes Everything
The deeper shift Artemis II represents is about a constraint Apollo never had to solve at this scale. The farther a crewed spacecraft travels from Earth, the less useful ground control becomes.
During the lunar flyby, the crew will lose contact with Earth for up to 50 minutes. At Mars distances, that gap stretches to hours each way. AI running directly on the spacecraft allows Orion to detect anomalies and respond without waiting for Houston, according to Aeronautics and Defence Technologies.
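The cited coverage does not spell out the decision logic, but onboard autonomy during a communications gap is commonly structured as a table of pre-authorized responses: a fault matching a condition the ground has already approved triggers an immediate action, and anything else is held for Houston. A hypothetical sketch of that pattern, with all fault names and actions invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Fault:
    name: str
    severity: int  # 1 = advisory ... 5 = immediate hazard

# Hypothetical pre-authorized responses: actions the ground has approved
# in advance for use when no real-time human intervention is possible.
PRE_AUTHORIZED = {
    "cabin_pressure_drop": "isolate_leaking_segment",
    "thermal_loop_failure": "switch_to_backup_loop",
}

def handle_fault(fault: Fault, in_contact_with_ground: bool) -> str:
    """Decide whether the spacecraft may act on its own."""
    if in_contact_with_ground:
        return f"{fault.name}: escalate to mission control"
    if fault.name in PRE_AUTHORIZED and fault.severity >= 3:
        # Within the pre-approved envelope: act now, report when contact resumes.
        return f"{fault.name}: execute {PRE_AUTHORIZED[fault.name]}, log for downlink"
    # Outside the envelope: safe the system and wait out the blackout.
    return f"{fault.name}: hold in safe mode pending ground contact"

# During the up-to-50-minute lunar-flyby blackout, a severe known fault is
# handled onboard; a novel one is held for Houston.
print(handle_fault(Fault("cabin_pressure_drop", 4), in_contact_with_ground=False))
print(handle_fault(Fault("unclassified_sensor_glitch", 2), in_contact_with_ground=False))
```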
The Economy Behind the Mission
Artemis II did not arrive at this level of ambition in a vacuum.
The space economy reached a record $613 billion in value in 2024, with McKinsey estimating it could grow to $1.8 trillion by 2035, according to the Brookings Institution. The number of satellites in orbit is approaching 15,000 and is projected to reach 100,000 by 2030, while NASA’s Earth observation archive has already surpassed 100 petabytes. At that volume, no ground team can keep pace without AI doing the processing.
As missions like Artemis II grow in complexity, AI running directly on spacecraft rather than routing decisions back to Earth is becoming foundational to how space exploration operates, HPCwire reported.
The governance questions are moving just as fast. The United Nations Office for Outer Space Affairs has called for frameworks that pre-authorize AI decisions within defined parameters for deep-space missions where real-time human intervention is impossible, according to the Brookings Institution.
The question is no longer whether AI will make decisions in space. It is under what conditions and with what safeguards. What Artemis II is doing above the Moon this week is providing the first full-scale answer, with four lives depending on getting it right.