There is something wonderfully indecent about seeing Apollo 11 software on GitHub. One expects moon software to be stored in a vault beneath a mountain, guarded by men with clipboards and severe haircuts. Instead, there it is, browsing-friendly, forkable, starred, and exposed to the same ecosystem that gave humanity left-pad, YAML indentation crimes, and JavaScript build systems requiring the power budget of a minor principality.
The repository contains the original Apollo 11 Guidance Computer source code for the Command Module, Comanche055, and the Lunar Module, Luminary099. It was digitized by the Virtual AGC project and the MIT Museum, with the stated goal of preserving the original Apollo 11 source in one place. The attribution notes that the material is in the public domain and identifies it as NASA Apollo Guidance Computer code assembled in 1969.
This is already funny before one reads a single instruction. The code that helped land humans on the Moon now lives in the same civic bazaar as experimental todo apps, abandoned cryptocurrency wallets, and “my first Rust compiler.” Somewhere, an Apollo engineer is either smiling or reaching for a slide rule as a weapon.
The Apollo Guidance Computer is a useful insult to modern software complacency. Each Apollo computer contained about 4 kilobytes of read-write memory and 72 kilobytes of read-only memory. The read-only memory was not merely “stored” in the contemporary, invisible silicon sense; it was physically woven into core rope memory. A wire passing through a core meant one thing; a wire bypassing it meant another. The programs were developed at MIT, translated, punched, and then threaded by workers, most of them women, in a process where “deployment” meant, quite literally, weaving software into hardware.
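The wire-through-core encoding can be sketched in a few lines. This is a toy model, not the AGC's actual manufacturing process: the function names, the 15-bit word size choice shown here, and the program words are all illustrative.

```python
# Toy sketch of core rope memory: each program word is encoded by which
# cores its sense wire threads through. A wire passing THROUGH a core
# reads as a 1; a wire bypassing it reads as a 0. "Deployment" means
# physically weaving a new rope.

WORD_BITS = 15  # AGC words were 15 bits plus parity; parity is omitted here

def weave(words):
    """Turn a program into a wiring plan: for every word and every bit
    position (core), record whether the wire goes through or around."""
    return [[(word >> bit) & 1 == 1 for bit in range(WORD_BITS)]
            for word in words]

def read(rope, address):
    """Sense the addressed wire: each threaded core contributes a pulse
    (a set bit); each bypassed core contributes nothing."""
    word = 0
    for bit, threaded in enumerate(rope[address]):
        if threaded:
            word |= 1 << bit
    return word

program = [0o30001, 0o04025, 0o54321]  # arbitrary 15-bit octal words
rope = weave(program)
assert all(read(rope, i) == w for i, w in enumerate(program))
```

The point the sketch makes concrete: the rope is read-only by construction. Fixing a bug is not a write operation; it is a change to the loom.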
Today we complain when CI takes seven minutes.
This is not nostalgia for inconvenience. Nobody should want to debug a lunar descent routine with a memory budget smaller than a modern favicon ecosystem. But the Apollo code reminds us that constraints can produce intellectual hygiene. When memory is scarce, naming matters. When computation is expensive, architecture matters. When power, heat, weight, and reliability are not aesthetic concerns but mission boundaries, software stops being a fog machine and becomes engineering.
The restoration projects make the lesson even sharper. Virtual AGC exists to provide original flight software and emulations of onboard guidance computers so they can be run on ordinary public computers today. Other teams have gone further, restoring or replicating Apollo Guidance Computer hardware, building FPGA replicas, using original schematics, and even flying simulated lunar landings through restored or emulated systems. This is not antiquarian stamp collecting. It is executable archaeology. The past is not just admired; it is powered on, probed, disagreed with, repaired, and made to blink again.
The obvious modern question is whether this software has been used to train AI. The honest answer is: we cannot verify that this specific repository was included in any particular proprietary model’s training run. But it would be naïve to treat the idea as far-fetched. GitHub says Copilot has been trained on natural language and source code from publicly available sources, including public GitHub repositories. Public code datasets such as BigCode’s The Stack have also gathered terabytes of permissively licensed source code for training and evaluating code models, with explicit opt-out mechanisms because many developers care about that use. So the best answer is not “yes” and not “no,” but the very modern “probably possible, not specifically proven.”
And if some model has ingested Apollo source, the irony is delicious. We may now have large neural systems, trained across billions of tokens, glancing statistically at code written for a machine with less writable memory than a long email. The AI may have learned from software whose every byte had to justify its oxygen consumption. This is like sending a decadent prince to survival school.
What should it learn?
First, that resources are moral facts. Speed, memory, energy, and bandwidth are not just implementation details; they shape what kind of civilization software becomes. A system that wastes memory wastes machines. A system that wastes compute wastes electricity. A system that treats every task as an invitation to summon a model the size of a minor deity eventually turns engineering into ritual sacrifice. Apollo’s engineers could not hide inefficiency under another abstraction layer. They had to know what the machine was doing.
Second, that graceful degradation is not a product-management slogan. During Apollo 11’s descent, the Guidance Computer produced the famous 1201 and 1202 alarms. NASA’s account notes that each alarm caused the computer to reboot and restart the important work, such as steering the descent engine and running the DSKY display, while not restarting the erroneously scheduled radar jobs. The mission could proceed because the restart behavior had been extensively tested. That is a beautiful sentence in software architecture: restart the important stuff. Not everything. Not the dashboard. Not the decorative telemetry peacock. The important stuff.
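The "restart the important stuff" discipline can be sketched as a toy priority executive. Everything here is invented for illustration, the class name, the capacity, the job names and priorities; it is not the AGC Executive's actual job tables, only the shape of the idea: on overload, reschedule restart-protected jobs and silently drop the rest.

```python
# Toy sketch of graceful degradation via a software restart: when the
# job table overflows (the analogue of a 1201/1202 alarm), only jobs
# registered as restart-protected are rescheduled.

import heapq

class Executive:
    def __init__(self, capacity):
        self.capacity = capacity   # max concurrent jobs ("core sets")
        self.jobs = []             # heap of (negated priority, name, protected)

    def schedule(self, name, priority, restart_protected):
        heapq.heappush(self.jobs, (-priority, name, restart_protected))
        if len(self.jobs) > self.capacity:
            self.software_restart()

    def software_restart(self):
        # Keep only restart-protected jobs; erroneously scheduled work
        # (like the spurious radar jobs) is simply never restarted.
        self.jobs = [job for job in self.jobs if job[2]]
        heapq.heapify(self.jobs)

ex = Executive(capacity=3)
ex.schedule("descent-guidance", priority=30, restart_protected=True)
ex.schedule("dsky-display", priority=20, restart_protected=True)
ex.schedule("rendezvous-radar", priority=5, restart_protected=False)
ex.schedule("rendezvous-radar", priority=5, restart_protected=False)  # overflow

# After the restart, only the important work remains scheduled.
assert sorted(name for _, name, _ in ex.jobs) == ["descent-guidance", "dsky-display"]
```

The design choice worth noticing is that survival is declared ahead of time, in the restart tables, not decided in the panic of the moment.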
Third, that documentation and preservation are part of engineering, not afterthoughts. The Apollo repository exists because code was preserved, scanned, transcribed, compared, and contextualized. The restoration projects exist because schematics, memories, signals, and interfaces were treated as recoverable knowledge. Modern software often behaves as if the future is someone else’s outage. Dependencies vanish, APIs mutate, containers rot, and institutional memory is stored in the head of a developer who left in 2023 to become a ceramicist.
Fourth, Apollo teaches us that “old” is not the opposite of “advanced.” The Guidance Computer was primitive by raw metrics and sophisticated by design discipline. It is possible to have a slow machine and a sharp system. It is also possible, as we prove daily, to have absurdly fast machines running software that behaves like a confused waiter in a burning restaurant.
The deeper lesson for AI is not that we should run language models on rope memory. That would be cruel to both rope and memory. The lesson is that intelligence without restraint tends toward obesity. The Apollo tradition asks a better question than “How large can we make it?” It asks: What is the smallest, most reliable, most testable system that can do the job?
The Moon was reached by people who knew their machine intimately. They did not have infinite compute. They had judgment, priority scheduling, physical memory, and a very low tolerance for nonsense. Perhaps that is what the Apollo code can still teach our AI age: not how to think bigger, but how to think smaller without becoming small.