The Universe in Silico: Euclid’s Flagship 2 Simulation and the Philosophy of Cosmic Models

Mapping the Immeasurable

In late 2025, the Euclid Consortium — the international collaboration behind the European Space Agency’s Euclid space telescope — announced the release of the most extensive simulation of the universe ever produced. Called Flagship 2, it is not merely a technical achievement but a landmark in our species’ attempt to model the cosmos in its entirety.

The sheer numbers are staggering: the simulation maps 3.4 billion galaxies and tracks the gravitational interactions of more than 4 trillion particles. The result is an extraordinarily detailed “virtual universe” that astrophysicists can now probe for answers to fundamental questions about cosmic structure, dark matter, and dark energy.

Behind this milestone stands an algorithm designed by Joachim Stadel, an astrophysicist at the University of Zürich. In 2019, Stadel harnessed the raw power of Piz Daint, then one of the most powerful supercomputers in the world, to generate this immense dataset. Today, the simulation has matured into a tool not only for the Euclid mission but also for the philosophical imagination.

This article will first explain the scientific and computational background of Flagship 2, then examine its role in cosmology, and finally turn to a deeper question: can a simulation simulate the very thing that enables it to exist?

Euclid and the Quest for Cosmic Clarity

Euclid, launched by the European Space Agency, has a specific mission: to unravel the mysteries of dark matter and dark energy, the unseen components that dominate the mass-energy content of the universe. While only about 5% of the cosmos is made of ordinary matter, approximately 27% is dark matter and nearly 68% is dark energy — elusive phenomena detectable only through their gravitational effects.

By mapping billions of galaxies and their distribution across cosmic time, Euclid aims to measure how structures in the universe have evolved. These measurements, in turn, will allow scientists to test competing theories of cosmic acceleration, structure formation, and gravity itself.

But to interpret the telescope’s data, scientists need a reference model: a simulated universe whose physics is known in exquisite detail. This is where Flagship 2 comes in. It provides a virtual twin of the cosmos against which Euclid’s observations can be compared, calibrated, and understood.

Flagship 2: The Simulation of Simulations

Simulating the universe is unlike simulating a weather system or a traffic network. The scales involved are incomprehensible: billions of galaxies, each containing billions of stars, evolving across 13.8 billion years of cosmic history. Directly modeling every star, planet, or particle of gas is impossible.

Instead, cosmologists use N-body simulations, in which the matter content of the universe is represented by billions or trillions of “particles” that interact gravitationally. Each particle represents not a single atom but a clump of matter — perhaps millions of solar masses of dark matter — whose motion under gravity the simulation tracks.
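
To make the idea concrete, here is a minimal sketch of an N-body step in Python: softened direct summation of Newtonian gravity plus a leapfrog time step, the standard workhorse of the field. All names, units, and parameter values here are invented for illustration; the production code behind Flagship 2 is enormously more sophisticated.

```python
import numpy as np

G = 1.0           # gravitational constant in arbitrary code units (illustrative)
SOFTENING = 0.05  # softening length: keeps forces finite at close encounters

def accelerations(pos, mass):
    """Direct-summation gravity: acceleration on each particle from all others.

    pos  : (N, 3) positions, mass : (N,) masses.
    The cost scales as O(N^2), which is why real codes avoid this at scale.
    """
    diff = pos[None, :, :] - pos[:, None, :]           # diff[i, j] = pos_j - pos_i
    dist2 = (diff ** 2).sum(axis=-1) + SOFTENING ** 2  # softened squared distances
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                      # no self-interaction
    return G * (diff * inv_d3[:, :, None] * mass[None, :, None]).sum(axis=1)

def leapfrog_step(pos, vel, mass, dt):
    """One kick-drift-kick leapfrog step, the standard N-body integrator."""
    vel_half = vel + 0.5 * dt * accelerations(pos, mass)           # half kick
    pos_new = pos + dt * vel_half                                  # drift
    vel_new = vel_half + 0.5 * dt * accelerations(pos_new, mass)   # half kick
    return pos_new, vel_new

# Toy run: 1,000 particles in a unit box (Flagship 2 tracked ~4 trillion)
rng = np.random.default_rng(42)
pos, vel = rng.random((1000, 3)), np.zeros((1000, 3))
mass = np.full(1000, 1.0 / 1000)
for _ in range(10):
    pos, vel = leapfrog_step(pos, vel, mass, dt=1e-3)
```

Leapfrog is preferred over naive Euler stepping because it is symplectic: energy errors stay bounded rather than accumulating, which matters when a run spans billions of simulated years.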

Flagship 2 advances this method to unprecedented scale:

  • 3.4 billion galaxies are tracked.
  • 4 trillion particles interact gravitationally.
  • The simulation volume encompasses a cube several billion light-years wide.

Running such a calculation requires more than raw computing power. It demands clever algorithms that can reduce complexity without sacrificing physical realism. Joachim Stadel’s algorithm, optimized for Piz Daint’s architecture, was key to making this feasible.

The Power of Piz Daint

To appreciate Flagship 2, one must appreciate the machine that gave birth to it. Piz Daint, housed at the Swiss National Supercomputing Centre in Lugano, was at the time Europe’s most powerful supercomputer and among the fastest in the world. Named after a mountain in the Swiss Alps, Piz Daint combined thousands of computing nodes, each equipped with high-performance CPUs and GPUs, to perform tens of quadrillions of calculations per second.

For a simulation like Flagship 2, which must integrate gravitational interactions across trillions of particles over cosmic timescales, such throughput is essential. Each step in the simulation requires computing the forces on every particle from every other particle — a problem that naïvely scales as O(N²), where N is the number of particles. Clever approximations, such as tree codes and particle-mesh methods, allow this calculation to be performed efficiently.
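
To give a flavor of how that quadratic cost is tamed, the sketch below implements a bare-bones particle-mesh step in Python: particles are deposited onto a grid and Poisson’s equation is solved with a fast Fourier transform, bringing the cost down to roughly O(N + M log M) for M grid cells. This is a schematic with a nearest-grid-point deposit, unit constants, and a periodic box, all chosen for brevity; it is not the method actually used for Flagship 2.

```python
import numpy as np

def pm_potential(pos, mass, ngrid, boxsize):
    """Schematic particle-mesh gravity solver for a periodic box.

    1. Deposit particle masses onto a grid (nearest grid point), O(N).
    2. Solve Poisson's equation in Fourier space: phi_k = -4*pi*G*rho_k / k^2.
    3. Return the potential on the grid; forces follow from its gradient.
    """
    G = 1.0  # arbitrary code units, illustrative
    # Step 1: mass deposit
    idx = np.floor(pos / boxsize * ngrid).astype(int) % ngrid
    density = np.zeros((ngrid,) * 3)
    np.add.at(density, (idx[:, 0], idx[:, 1], idx[:, 2]), mass)
    density /= (boxsize / ngrid) ** 3  # mass counts -> density

    # Step 2: FFT-based Poisson solve, O(M log M) for M = ngrid^3 cells
    rho_k = np.fft.rfftn(density)
    k = 2 * np.pi * np.fft.fftfreq(ngrid, d=boxsize / ngrid)
    kz = 2 * np.pi * np.fft.rfftfreq(ngrid, d=boxsize / ngrid)
    k2 = k[:, None, None] ** 2 + k[None, :, None] ** 2 + kz[None, None, :] ** 2
    k2[0, 0, 0] = 1.0                 # avoid division by zero at k = 0
    phi_k = -4 * np.pi * G * rho_k / k2
    phi_k[0, 0, 0] = 0.0              # zero-mean potential

    # Step 3: back to real space
    return np.fft.irfftn(phi_k, s=(ngrid,) * 3)

# Toy usage: 100,000 particles on a 64^3 mesh in a unit box
rng = np.random.default_rng(1)
pos = rng.random((100_000, 3))
phi = pm_potential(pos, np.ones(100_000), ngrid=64, boxsize=1.0)
```

Production codes typically go further, pairing a mesh or multipole expansion for long-range forces with a tree or direct calculation at short range.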

Without such optimizations, even Piz Daint’s enormous capabilities would have been insufficient. The project demonstrates not only the power of hardware but also the ingenuity of human-designed algorithms.

Why Simulate the Universe?

The obvious question is: why spend years of effort and millions of supercomputer hours to create a virtual universe? The answer lies in the role of simulations as cosmic laboratories.

  1. Testing Observations:
    Observational data is messy. Telescopes suffer from noise, selection effects, and instrumental limitations. By comparing real observations to simulated ones, astronomers can disentangle the underlying physics from observational artifacts.
  2. Exploring Hypotheses:
    Simulations allow scientists to tweak fundamental parameters — the amount of dark matter, the nature of dark energy, the laws of gravity — and see how the universe would have evolved differently (a toy calculation after this list illustrates the idea).
  3. Understanding Structure Formation:
    Galaxies, clusters, and filaments emerge from initial quantum fluctuations stretched by cosmic inflation. Simulations let researchers follow this evolution step by step, showing how small fluctuations grew into the cosmic web.
  4. Training AI Models:
    Increasingly, machine learning tools are used in astronomy. They need vast amounts of training data — and simulations like Flagship 2 provide precisely that.
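
As a concrete illustration of point 2, the toy calculation below evaluates the standard linear-theory growth factor of density fluctuations in a flat ΛCDM universe for several values of the matter density Ω_m. Changing that single parameter changes how much cosmic structure has grown by today, which is exactly the kind of what-if question simulations answer at full scale. The formula is a textbook result; the parameter values are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

def growth_factor(a, omega_m):
    """Linear growth factor D(a) in a flat LambdaCDM universe.

    Standard result: D(a) = (5/2) * Om * E(a) * integral_0^a da' / (a' E(a'))^3,
    with E(a) = H(a)/H0, normalized so D(a) ~ a deep in the matter era.
    """
    E = lambda x: np.sqrt(omega_m / x**3 + (1.0 - omega_m))
    integral, _ = quad(lambda x: 1.0 / (x * E(x)) ** 3, 1e-8, a)
    return 2.5 * omega_m * E(a) * integral

# How much have fluctuations grown by today (a = 1) for different Omega_m?
for omega_m in (0.2, 0.3, 0.5, 1.0):
    print(f"Omega_m = {omega_m:.1f}:  D(a=1) = {growth_factor(1.0, omega_m):.3f}")
```

For Ω_m = 1 (a universe with no dark energy) the growth factor at a = 1 is exactly 1; adding dark energy suppresses late-time growth, so lower Ω_m yields smaller values.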

In short, Flagship 2 is not a static artifact but a living environment for testing cosmological theories.

The Limits of Simulation

Despite its grandeur, Flagship 2 is not the universe itself. It is an approximation, subject to limitations:

  • Resolution: Each “particle” represents enormous masses, meaning small-scale physics like star formation or black hole accretion is not directly modeled.
  • Physics Choices: Only gravity is fully simulated. Other processes, such as gas dynamics, magnetic fields, or feedback from stars, must be approximated.
  • Initial Conditions: The simulation must assume a set of starting conditions, often based on the cosmic microwave background, but these themselves depend on theoretical assumptions (see the sketch after this list).
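
To illustrate that last point, here is a schematic recipe in Python: draw a Gaussian random field whose Fourier amplitudes follow a chosen power spectrum P(k). Real initial-condition generators do this in three dimensions with a physical P(k) derived from CMB-constrained theory; the power law used here is a stand-in, and the 2-D grid is purely for brevity.

```python
import numpy as np

def gaussian_field_2d(ngrid, boxsize, spectral_index=-2.0, seed=0):
    """Draw a 2-D Gaussian random field with power spectrum P(k) ~ k^n.

    Illustrative only: cosmological initial conditions use a physical P(k)
    computed from early-universe physics, not a pure power law.
    """
    rng = np.random.default_rng(seed)
    # White noise in real space has a flat power spectrum...
    noise_k = np.fft.rfftn(rng.normal(size=(ngrid, ngrid)))
    # ...so scaling its Fourier modes by sqrt(P(k)) imprints the desired P(k).
    kx = 2 * np.pi * np.fft.fftfreq(ngrid, d=boxsize / ngrid)
    ky = 2 * np.pi * np.fft.rfftfreq(ngrid, d=boxsize / ngrid)
    kmag = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)
    kmag[0, 0] = 1.0                    # avoid 0 ** negative at k = 0
    amplitude = kmag ** (spectral_index / 2.0)
    amplitude[0, 0] = 0.0               # remove the mean (k = 0) mode
    return np.fft.irfftn(noise_k * amplitude, s=(ngrid, ngrid))

# A 256^2 realization: statistically random, yet with correlated structure
field = gaussian_field_2d(ngrid=256, boxsize=1.0, spectral_index=-2.0)
print(field.shape, field.std())
```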

Thus, while the simulation is vast, it is also fragile: a representation, not a reality. It functions as a map, not the territory.

From Virtual Universes to Virtual Philosophies

Here the philosophical reflection begins. When we simulate a universe, we are doing more than predicting numbers — we are exploring the boundaries of representation itself.

The final question we must ask is: Can a simulation simulate the very thing that enables it to function in the first place?

Consider:

  • The simulation requires computers, algorithms, and human minds to design and interpret it.
  • Those computers, algorithms, and minds are themselves part of the universe.
  • Yet the simulation pretends to be the whole universe, including — implicitly — the matter and energy that make the computer possible.

This leads to a paradox: if the simulation were truly complete, it would need to simulate itself, and the simulation of itself, and so on in an infinite regress. In practice, of course, it stops short: it simulates galaxies and particles but not the Zürich computer scientists and their machines.

This reveals something profound: simulations depend on a selective boundary between what is modeled and what is not. That boundary is invisible within the simulation itself but obvious from outside.

The Mirror and the Abyss

Philosophers from Descartes to Baudrillard have wrestled with the idea of worlds within worlds, of simulations that blur with reality. The Flagship 2 simulation does not claim to be reality. But by encompassing so much of the universe, it gestures toward the possibility of total simulation — and forces us to ask whether such a totality is even coherent.

The danger is to mistake the mirror for the thing mirrored. A simulated galaxy is not a galaxy, no matter how accurate the gravitational interactions appear. The abyss opens when we begin to wonder whether our own universe might be a simulation — whether the “outside” we imagine for Flagship 2 also exists for us.

Conclusion: The Function of Cosmic Simulations

Flagship 2 is a triumph of human ingenuity: an unprecedentedly detailed model of the cosmos that will guide the interpretation of Euclid’s observations for years to come. It shows us how galaxies cluster, how matter flows, and how the cosmic web unfolds across billions of years.

But its philosophical importance may be even greater. It demonstrates that every simulation is bounded by the conditions that allow it to exist. A simulation can model the universe, but not the electricity that powers the computer; it can model galaxies, but not the researchers who interpret them.

So, can a simulation simulate the very thing that enables it to function? The answer is no — not fully. It can only ever be partial, because it relies on an external substrate that remains unmodeled.

And yet, perhaps that is the most human thing of all: to attempt to model the infinite, knowing our models will always stop short. The Flagship 2 simulation isn’t the universe, but it showcases our relentless pursuit of understanding and the fascinating recursive beauty of minds within the cosmos simulating the cosmos itself.

