The AI Simulation Stack Is the New Design System
#006: How AI-driven predictive systems are reshaping governance, policy, and the role of design in our collective future.
👋 Welcome to Systems & Signals.
I publish two issues a week:
Tuesdays focus on design leadership — practical insights on teams, orgs, hiring, and navigating modern product environments.
Thursdays zoom out to explore design at higher altitudes — power dynamics, socioeconomic systems, global conflict, and the futures we’re shaping through design.
I’d love to know what’s resonating. You can connect with me on LinkedIn or on Bluesky @delabar.design. You can also learn more about me and my work on my personal site.

In 2016, social platforms revealed their capacity to influence elections through targeted misinformation, microtargeted ads, and automated engagement. Nearly a decade later, we are no longer dealing with crude nudging. We are entering an era where artificial intelligence is used to simulate entire societies before any action is taken.
Today, some of the most powerful simulation systems are owned not by governments but by private corporations. BlackRock’s Aladdin platform, used to assess financial risks across trillions in assets, serves as a simulation engine for global markets. Palantir’s AI-driven platforms power simulations for logistics, healthcare, and policing, often behind closed doors. Meanwhile, Meta and Google simulate human attention and behavior patterns to optimize engagement and ad revenue, making billions by predicting what we will think, click, and buy next.
And according to McKinsey & Co., 75% of large enterprises are actively investing in scaling digital twins (virtual models that mirror real-world systems using real-time and historical data) with AI:
Gen AI not only enhances digital-twin capabilities but also uses real-time data from digital twins as a source of additional context, making inputs (such as prompt engineering) far more dynamic. With stores of robust, contextualized data, digital twins provide a secure environment in which gen AI can “learn” and broaden the scope of prompts and output. Through “what if” simulations run by digital twins, users can fine-tune gen AI, enabling it to conduct predictive modeling, as opposed to the primarily backward-looking view that most LLMs provide.
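The loop the McKinsey excerpt describes (a digital twin supplies live, contextualized state; gen AI runs "what if" prompts against it) can be sketched minimally. Everything here is hypothetical for illustration: the `DigitalTwin` class, the `urban-grid` system, and field names like `load_mw` are assumptions, not any vendor's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """Minimal digital twin: mirrors a real system's state over time."""
    name: str
    state: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def ingest(self, sensor_reading: dict) -> None:
        """Fold a real-time reading into the twin's state, keeping a snapshot."""
        self.history.append(dict(self.state))
        self.state.update(sensor_reading)

    def build_prompt(self, what_if: str) -> str:
        """Turn live state plus a 'what if' question into a grounded prompt."""
        return (
            f"System: {self.name}\n"
            f"Current state: {self.state}\n"
            f"Prior snapshots: {len(self.history)}\n"
            f"What-if scenario: {what_if}\n"
            "Predict the likely outcome."
        )

# The twin supplies dynamic context, so the same question produces a
# different prompt as real-world conditions change.
twin = DigitalTwin("urban-grid")
twin.ingest({"load_mw": 420, "temp_c": 31})
prompt = twin.build_prompt("demand rises 15% during a heat wave")
print(prompt)
```

This is the dynamic the excerpt calls "far more dynamic" prompting: the model's input is regenerated from the twin's live state rather than written once by hand.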
Academic projects echo this shift. The Y Social Network Digital Twin project, for example, built and maintained by AI researchers across several universities, enables realistic simulations of social media interaction by integrating LLM agents.
These tools don’t just analyze the present. They test, predict, and shape the future. In doing so, they are redrawing the boundaries of design, governance, and public agency.
Simulating Society: Not Hypothetical Anymore
These simulation systems are no longer hypothetical research projects or speculative forecasts. They are operational tools embedded in institutional decision-making. The European Commission’s Destination Earth project, for example, is developing a digital twin of the planet to help policymakers visualize and test climate adaptation and mitigation strategies. MIT is doing something similar with its Earth Mission Control VR experience.
The U.S. National Reconnaissance Office's Sentient program analyzes satellite data to predict threats and guide national security decisions. Meanwhile, China's smart city infrastructure uses real-time biometric and behavioral data to optimize urban management and policy enforcement.
From Wikipedia:
Sentient is a classified artificial intelligence (AI)–powered satellite-based intelligence analysis system developed and operated by the National Reconnaissance Office (NRO) of the United States. Described as an artificial brain, Sentient autonomously processes orbital and terrestrial sensor data to detect, track, and forecast activity on and above Earth.
The common thread across these efforts is the shift from observation to intervention. These models are not just passive representations; they actively inform decisions that shape social, environmental, political, and security outcomes.
From Social Platforms to Simulation Infrastructure
This evolution reflects a broader structural change: the rise of the simulation stack. At its foundation are massive data collection systems. These are layered with proprietary AI models that simulate scenarios, and topped with outputs distributed through the platforms that shape how people receive information and make decisions.
The simulation stack is controlled primarily by private actors. OpenAI, Meta, and Google develop the models. Amazon, Apple, and social platforms collect the data. Cloud providers like Microsoft and AWS host the infrastructure. Together, these entities don't just shape digital experiences — they shape the very context in which decisions are made at scale.
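The three layers described above (data collection, proprietary simulation models, distribution platforms) can be expressed as a simple pipeline. This is a toy sketch of the architecture, not any company's real system; the layer functions, the `clicks` signal, and the scoring rule are all invented for illustration.

```python
from typing import Callable

# A hypothetical three-layer "simulation stack":
# data collection -> proprietary model -> distribution platform.
Layer = Callable[[dict], dict]

def collect(raw: dict) -> dict:
    """Layer 1: aggregate behavioral signals into a profile."""
    return {"profile": {k: v for k, v in raw.items() if k != "noise"}}

def simulate(ctx: dict) -> dict:
    """Layer 2: a stand-in predictive model scoring likely behavior."""
    clicks = ctx["profile"].get("clicks", 0)
    ctx["predicted_engagement"] = min(1.0, clicks / 100)
    return ctx

def distribute(ctx: dict) -> dict:
    """Layer 3: the platform decides what the user actually sees."""
    ctx["feed"] = "amplify" if ctx["predicted_engagement"] > 0.5 else "default"
    return ctx

def run_stack(raw: dict, layers: list[Layer]) -> dict:
    out = raw
    for layer in layers:
        out = layer(out)
    return out

result = run_stack({"clicks": 80, "noise": "tracking junk"},
                   [collect, simulate, distribute])
print(result["feed"])  # prints: amplify
```

The point of the sketch is structural: each layer is owned by a different class of actor, yet the output of the whole pipeline is what the public actually experiences.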
Simulation and Regulation: Early Signals of Public Response
Recognizing the expanding influence of AI and simulation systems, some governments are beginning to act. Just last week, California released a 53-page report calling for AI transparency and harm mitigation, especially around high-risk AI systems. New York’s Responsible AI Safety & Education (RAISE) Act was recently passed by the state’s legislature, requiring companies that invest heavily in AI to disclose their models’ potential societal impacts. It remains to be seen whether New York governor Kathy Hochul will sign the act into law amid pressure from private interests to veto it.
These developments indicate a shift in how AI is perceived: no longer just as an engineering challenge, but as a matter of governance, ethics, and social accountability. If predictive models are shaping decisions at scale, public institutions must define guardrails around their deployment while staving off the influence of corporate actors with seemingly bottomless lobbying budgets.
Forecasts That Shape Reality
The need for strong guardrails around simulation becomes clearer when we consider how predictions can become self-fulfilling. This dynamic, which sociologist Robert K. Merton named the "self-fulfilling prophecy," occurs when a forecast itself alters behavior in ways that make the prediction come true.
This is not merely theoretical. During the 2020 George Floyd protests in the U.S., aggressive preemptive policing often escalated tensions, provoking unrest rather than preventing it. Studies have shown similar patterns in Turkey and Burundi, where anticipated protests led to forceful responses that in turn intensified the demonstrations.
On digital platforms, these loops take the form of algorithmic recommendation systems that amplify emotionally charged or polarizing content. The result is the creation of echo chambers that deepen social divides. Research has shown that engagement-driven algorithms can prioritize low-quality or inaccurate content, reinforcing existing biases and creating feedback cycles that skew public discourse.
Academic studies of recommender systems confirm these effects, noting that they often promote popularity bias, reinforce homogeneity in user exposure, and disadvantage minority viewpoints. These systems are not passive observers — they actively shape what users see, think, and believe.
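The rich-get-richer dynamic behind popularity bias can be demonstrated with a toy simulation. This is not any platform's actual ranking algorithm; it is a minimal model in which items are recommended in proportion to past engagement, and being shown is what generates engagement.

```python
import random

random.seed(7)  # fixed seed so the run is reproducible

# Three items start with identical engagement. Each round, one item is
# shown with probability proportional to its score, and being shown
# increments its score: exposure begets engagement.
engagement = {"item_a": 10, "item_b": 10, "item_c": 10}

def recommend(scores: dict, rounds: int = 200) -> dict:
    scores = dict(scores)
    for _ in range(rounds):
        items = list(scores)
        weights = [scores[i] for i in items]
        shown = random.choices(items, weights=weights, k=1)[0]
        scores[shown] += 1
    return scores

final = recommend(engagement)
total = sum(final.values())
shares = {k: round(v / total, 2) for k, v in final.items()}
print(shares)
```

Even though all three items start equal and no item is intrinsically "better," the feedback loop concentrates exposure: whichever item pulls ahead early keeps getting recommended, which is the self-fulfilling pattern the studies above describe.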
Design, in this context, is not just about presenting information; it’s about structuring the interface between prediction and action. If a forecast directs investment, security, or communication strategy, the designer’s role extends to curating the potential realities that become possible — or impossible — in its wake.
Designing for a Simulated World
As AI-enabled predictive simulations become embedded in everything from policy to user experience, the role of design must expand. Designers are not merely stylists or problem-solvers in this new paradigm; they become shapers of the systems that define future options.
Emerging roles could include:
Civic Simulation Designers who co-create visual tools that help governments and communities prototype outcomes.
Model Auditors who interpret and interrogate the embedded values within simulations.
Participatory Foresight Facilitators who help people imagine and evaluate multiple futures, not just the most probable ones.
These are not speculative futures. They are already being tested in policymaking, product design, and public engagement. The challenge is to ensure they are grounded in ethics, equity, and long-term thinking.
Conclusion
The internet began as a network of documents. It evolved into a network of behaviors. Today, it’s becoming a network of simulations.
This shift changes what it means to design, to decide, and to deliberate. As predictive systems expand into every domain, we must ask not only how accurate they are, but how accountable they are. And as designers, technologists, and citizens, we must collectively ask:
"Who is modeling our future, and on whose terms?"