Revenue is accelerating. Responsibility needs to keep up.
What "move fast and break things" got wrong — and why AI is following the same pattern on a compressed timeline.
A decade ago, the defining question for product teams was how fast you could ship. Velocity was the metric, and the assumption was that whatever broke along the way was cheaper to fix than to prevent. For a long time, that was true. A botched UI change, a flaky deploy, a badly scoped feature — each one was painful but reversible. The cost of a mistake was mostly internal.
That logic is breaking down. Not because velocity stopped mattering, but because what we ship has changed. The systems coming out of the last two years — LLM-powered workflows, autonomous agents, generated content at scale — don't fail the way traditional software fails. They fail quietly, confidently, and in public.
The compression problem
Every major software transition of the last thirty years gave teams time to learn how their systems misbehaved in production. The web took a decade to mature. Mobile took five years. Cloud infrastructure took three. In each case, the gap between "early adopters" and "mainstream use" left room for conventions to form — security patterns, failure modes, institutional memory.
AI doesn't have that runway. The distance between a weekend prototype and a customer-facing production system is now measured in days. The failure modes are novel, the tooling is immature, and the people making deployment decisions often don't have a clear mental model of what they've shipped.
The most consequential choices about an AI product are usually made before a user ever encounters it — in the rooms where scope, data, and fallbacks get decided.
What "responsibility" actually looks like
It's easy to talk about responsible AI in the abstract. It's harder to make it operational. In the programs I've built, three things consistently separate the systems that hold up from the ones that quietly degrade:
- Explicit decision logs. Every non-obvious choice — model selection, prompt structure, fallback behavior — written down with the reason. If the person who made the call leaves, the reasoning doesn't go with them.
- Failure rehearsal. Before launch, the team writes out the five ways this system could embarrass us, and builds the guardrails for each one. Not theoretical — specific.
- A named owner for model behavior. Not "the team." One person whose job it is to notice when output quality drifts and to decide what to do about it.
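A decision log doesn't need tooling to exist; the point is that each entry pairs a choice with its reasoning and a named owner. A minimal sketch in Python — the schema and field names here are illustrative assumptions, not a prescribed format:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative decision-log entry. Field names are assumptions,
# not a standard — the only requirement is that the reasoning
# and the owner are written down next to the choice.
@dataclass
class Decision:
    what: str        # the non-obvious choice that was made
    why: str         # the reasoning, so it survives turnover
    owner: str       # the named person accountable for this behavior
    decided_on: date = field(default_factory=date.today)

log = [
    Decision(
        what="Fallback: serve a cached answer when the model errors out",
        why="A stale-but-safe answer is cheaper than a confident failure in public",
        owner="jane",  # hypothetical owner, for illustration
    ),
]

# The log is greppable and reviewable; the reasoning stays
# even if the person who made the call leaves.
for d in log:
    print(f"{d.decided_on} [{d.owner}] {d.what} :: {d.why}")
```

Plain text in the repository works just as well; what matters is that the record outlives the person who made the call.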
Speed is still the goal
None of this is a call to slow down. The companies that will define this decade are the ones that ship AI systems faster than their competitors — and that aren't dragged into retrospectives every six months to clean up decisions that could have been made carefully the first time.
The move-fast-and-break-things era worked because the blast radius was small. It isn't anymore. Responsibility isn't a drag on velocity; at this scale, it is velocity — the version that compounds instead of eroding.
Thanks for reading.