Sunday Signals: Figma, the job market, and other things not bouncing back
Doom-and-gloom edition.
Something a little different this morning.
Starting today, I’m adding Open Tabs to the mix — a subscribers-only Sunday brief featuring a curated look at what’s worth paying attention to from the past week. Think of it as a Sunday morning read with your coffee: shorter takes, pointed observations, relevant links. The regular weekly issues aren’t going anywhere — they’re back starting this week, in fact, after a longer-than-intended hiatus. This is additive.
Here’s what caught my attention this week.
Figma’s falling and can’t get up
Figma IPO’d last July at $33/share, peaked at $142 within 48 hours, and is now trading around $21. I’m not great at math, but I’m pretty certain that’s down about 85% from its high. I’ve written previously about Figma’s strategic drift in the lead-up to the IPO, which has resulted in some unfortunate blind spots for CEO Dylan Field.
One of those blind spots — somehow — was Google. This week Figma’s stock dropped another 12% in two days after Google launched Stitch, a “vibe design” tool that “turns natural language prompts into high-fidelity UI and interactive prototypes” for free. Ah, the honeymoon phase before the inevitable platform enshittification commences.
The market’s read is clear: the core value proposition of a design tool is increasingly something AI gives away for free, for now.
Here’s a small sampling of the Figma doom-and-gloom that’s been circulating this week:
Figma’s Stock Drops 12% in Two Days After Google Releases ‘Vibe Design’ Product — CNBC
AI Is Killing Figma: A Capital Structure Story — Dave Friedman
The design job market isn’t bouncing back
Speaking of doom-and-gloom: The share of designers finding a new role within three months dropped from 68% in 2019 to 49.5% in 2024 — and analysts aren’t modeling a bounce back in 2026. This isn’t post-layoff turbulence; the roles being eliminated aren’t coming back in their previous forms.
The UX middle is hollowing out
NNGroup’s 2026 UX Benchmark confirms a split that’s been forming for a while: senior practitioners are gaining more influence at product tables while mid-level roles face sustained pressure from AI tooling that now does execution work well enough. The middle isn’t disappearing slowly; it’s hollowing out faster than most have planned for.
NNGroup: The State of UX in 2026 →
Colorado’s AI liability law lands on designers, not just engineers
Colorado’s AI Act puts UX designers directly in the accountability chain for consequential algorithmic decisions in employment, lending, and healthcare. Most design teams don’t know it’s coming. The compliance frameworks it creates will fundamentally reshape how algorithmic systems get designed and built.
Colorado SB 205 — Artificial Intelligence →
Policy tends to arrive faster than practices do, so if you work on any product that makes automated decisions affecting real people, this is your homework.
Most government digital services aren’t accessible, and the government knows it
A March 2026 GSA report found that only about one-third of the federal government’s most-viewed websites meet legal accessibility requirements, with agencies scoring an average of 1.96 out of 5 across all technology. Half of agencies don’t regularly test for accessibility at all. The kicker: remediation teams are shrinking as AI gets layered on top of these already-broken foundations.
Much of the Government’s Technology Isn’t Accessible, Internal Report Finds — Nextgov/FCW
Government Websites Are Littered With Accessibility Issues — Route Fifty
Support for accessibility is often the difference between a service that works and one that doesn’t for a significant portion of the population. A government self-assessment scoring 1.96 out of 5 is not exactly a rounding error (although with DOGE puppeteering the corpse of the former US Digital Service, who knows really… maybe things are actually worse).
Coming this week: Design systems are becoming a spec that LLMs can leverage to drive consistent, high-quality UI. Issue #039 gets into what that actually means in practice: how tokens and component APIs constrain AI output, and what the teams getting ahead of this are doing differently to make their systems machine-readable.
Until then — Justin