Designing Through the AI Panic
#038: Neither AI anxiety nor indifference is a design strategy.
“If designers don’t engage deeply with these [AI] systems, others will define their behavior, defaults, and values for us. And history suggests those defaults won’t prioritize clarity, agency, restraint, or human dignity unless someone insists on them.”

The anxiety about AI and what it might mean for the future of work, let alone design, is loud. Before we can talk about experience design or interaction patterns, the conversation almost always collapses into a single, unavoidable question: Is this going to take my job?
You can feel it everywhere: in team meetings, office hallways, Slack threads, and late-night messages from colleagues who haven’t slept particularly well since they last had to answer an executive’s question about how AI is being used to “accelerate outcomes.” Even when it’s never said outright, the anxiety dominates the emotional register of AI-related conversations. Curiosity gives way to self-preservation, and exploration gets superseded by threat detection.
That reaction isn’t irrational, especially as we watch software generate layouts, write copy, summarize research, and produce artifacts that used to require real time, skill, and effort. When output starts to look indistinguishable from competence to the untrained eye, it’s not paranoia to wonder whether the ground is shifting beneath you, because it is.
When Anxiety Blocks the Real Conversation
The problem is that AI anxiety has started to consume all available oxygen. When that happens, it blocks a more important conversation that will actually determine whether designers lose relevance or become more essential in the years ahead.
“AI” has collapsed into a single word that absorbs far too much meaning. It gets treated like a force of nature, something like the weather: either it washes over us and we adapt, or it wipes us out entirely. We’re told there’s no agency here, only inevitability. When fear gets aimed at something this vague, there’s no productive response available, leaving only denial or resignation.
The Cost of Treating AI as a Monolith
What’s missing is specificity and proactivity.
AI doesn’t show up as one thing. It shows up as distinct experiences, like any other digital software, and those experiences differ in meaningful, consequential ways, especially in how they distribute authority, agency, and responsibility between humans and machines. Some systems speak, some act on our behalf, and others shape what we see, trust, and decide.
Each of these carries distinct risks that are magnified when human-centered design and research methods aren’t applied. Each demands different kinds of design judgment and fails in different ways when things go wrong.
The Three Assumptions Fueling the Anxiety
Based on my conversations with other designers, a lot of the anxiety seems to trace back to three assumptions — ones designers sometimes internalize themselves, and that leaders and stakeholders often take as gospel:
That because AI can generate artifacts, it must also be capable of judgment.
That because it can speak fluently, it must understand.
That because it can act autonomously, it must be trustworthy.
None of those assumptions actually hold.
What’s Actually Changing
Once you start pulling on that thread, the picture changes. What’s really happening isn’t that design work is disappearing; it’s that the nature of the systems we’re designing is fundamentally shifting.
We’re moving away from largely deterministic software (systems that behave the same way every time) and toward probabilistic systems that operate on likelihoods, inference, and approximation. This shift doesn’t eliminate design work or the core of what makes quality experiences possible, but it should change how we think about applying design expertise.
Where Design Judgment Now Carries the Most Weight
The hardest part of design is no longer producing outputs. It’s deciding how much authority a system should project, where humans must remain in control, and how uncertainty is communicated instead of hidden. Those decisions don’t scale automatically, and they absolutely should not be delegated to a model.
Seen through that lens, the anxiety designers are feeling isn’t a sign that the profession is ending, but that it’s concentrating into fewer, higher-stakes decisions. We’re no longer just designing interfaces. We’re designing how knowledge is framed, how action is taken, and how much control people should retain over otherwise automated systems.
And this is where we have a choice.
Hoping It Goes Away Isn’t a Strategy
It’s tempting to disengage by letting fear harden into skepticism, or to hope that AI will follow the arc of NFTs or the metaverse hype cycle and simply fade away. But sitting back and hoping this all blows over is abdication, not strategy. The more we lean away from AI and its prevailing narrative, the less able we’ll be to clearly define and showcase how critical design is in this new era.
That doesn’t mean we ignore the ethical issues. There are real, unresolved concerns around how large language models have been trained through very direct intellectual property theft. There are clearly unethical and extractive uses of this technology, and we must be vocal about them and advocate for ethical controls in our work. But it’s a mistake to collapse all uses of AI into that critique. When applied within organizations on wholly owned or consented data, many of those ethical concerns shift meaningfully, and the design responsibility only increases.
If designers don’t engage deeply with these systems, others will define their behavior, defaults, and values for us. And history suggests those defaults won’t prioritize clarity, agency, restraint, or human dignity unless someone insists on them.
What This Series Is About
Before we talk about tools, prompts, or workflows, we need a clearer mental model of the kinds of AI experiences actually showing up in products today, as well as what each one asks of designers.
That’s what this new AI series is about.
In the next issue, I’ll start with the most visible category: direct AI interaction, where the interface speaks, confidence is easy to project, and epistemic mistakes get amplified at terrifying scale. From there, we’ll move on to agentic systems that act on our behalf, and finally to AI-enriched experiences that reward restraint and judgment more than novelty.
The anxiety is real, but it isn’t the conclusion. It’s the signal pointing toward where the most critical work of design now lives, and an invitation to engage with it head-on.