Delivery decoupled from understanding
A topic from The Illegibility Crisis
AI breaks an old proxy. A motivated engineer can now produce code, tests, and docs for parts of the system they barely know. Reviewers see clear code, good abstractions, and strong coverage. The change sails through. The system accepts it, until it does not.
When something goes wrong and you ask why this design, why this ordering, why this tradeoff, you discover there is no model behind the change. There is only a chain of prompts and edits no one recorded. Output now tells you how well someone worked with a tool, not how well they understood the system. Artifact quality can be manufactured at scale without the underlying judgment ever forming, especially when reviews grade artifacts more than reasoning.
This is not a character critique. It is a broken-signal critique.
Everything looks fine until it matters
On paper, things look fine. Dashboards are green. Roadmaps are full. AI initiatives sound impressive in board decks. Then something small and ugly happens.
Imagine a case like this. A person at a pharmacy trying to buy insulin. Card declined. The pharmacist tries to override it and cannot. Support sees fraud risk flagged but cannot explain why, cannot override it, and cannot even tell which data points triggered it. Somewhere in the stack, an AI mediated system made a call. No one in the chain can describe, in plain language, how that call was made, on whose behalf, or who had the authority to say not like this. Swap insulin for any entitlement or access decision and the shape is the same.
That is the moment you find out what you actually built. Not the feature. Not the dashboard. The reality of who can see, who can explain, and who can reverse.
You do not just have a bug. You have an illegibility problem.
What illegibility means
Illegibility is what happens when organizations fill critical systems with AI, vendors, and internal tools faster than they fill them with people who can actually see, explain, and govern what those systems do. It is not a vibe. It is a set of fractures that break the connection between who understands what, who gets to decide what, and who ends up carrying the risk.
You can have perfect observability and still have no legibility.
Underneath, key relationships snap. Output no longer tracks understanding. People who produce impressive work cannot explain it without tools. People who actually understand the systems are not always the ones who look most productive. The social cues you used to rely on become unreliable, and the organization starts rewarding the wrong signals as if they were judgment.
What it looks like inside teams
It shows up as a certain kind of weirdness.
Not downtime. Not clean failure. Not a page that wakes everyone up and forces reality into the room.
Weirdness looks like decisions that cannot be reconstructed. It looks like a chain of changes where nobody can tell you where the judgment lived, what uncertainty was accepted, and who had the authority to reverse it. It looks like work that is easy to ship and hard to explain. It looks like postmortems that read smoothly but do not change behavior.
You will hear it as confessions dressed up as status updates. We cannot explain why it did that. Nobody remembers the tradeoff. The vendor says it is working as designed. The person who knew left. We need a new policy.
Those are not updates. Those are admissions that judgment left the building and nobody wrote down where it went.
Why standard fixes fail
The reflex is old and predictable.
Most governance talk turns into principles that cannot be tested, checklists no one uses, and committees that meet and feel very responsible. Tightening rituals that only grade output does not restore legibility. It just improves the ceremony.
Mandatory architecture reviews that never block are not governance, they are ceremony. Runbooks nobody runs are not preparedness, they are shelf decor. Governance boards with no stop authority do not govern.
They witness.
Meanwhile, the real decisions happen in model version flips inside vendor consoles, threshold changes in hidden configs, whispered ship-it calls under quarterly pressure, and product experiments that quietly alter who gets what. The people who sign the policies are not in those rooms. The people in those rooms are not writing down what they are actually doing or why.
This is how you get trust-us governance without anyone saying the words. The organization produces artifacts that look like control while the decisions migrate to wherever friction is lowest and time is shortest.
Why it happens
AI can help. It also breaks artifact-based trust signals.
Models optimize for plausible artifacts. Tools optimize for reduced friction. Organizations optimize for visible progress. None of those optimize for preserved human understanding.
So output goes up, reconstructability goes down, authority surfaces blur, and accountability stays human anyway.
The result is predictable. You get compliance artifacts, not reconstructable decisions. You get speed that feels like competence right up until the moment you need judgment, and discover the judgment was never captured in a form that can survive turnover, pressure, and time.
When it becomes real
A denial. A breach. A misrouting of care. An access control failure. A regulator asking questions. A board asking sharper ones. The question will not be whether the pipeline passed. The question will be who decided, based on what, with what uncertainty, with what stop rule, and how you reverse it safely.
If you cannot answer those, you have an illegible system. Illegible systems are governed by panic, blame, and paperwork.
What to do instead
You cannot afford to be illegible about your crown jewels. For the systems that can really hurt people if they drift, know who understands them. Know who can change them. Know who can be harmed. Make sure those lines cross on purpose.
The book’s through-line is four verbs.
Map it: name where authority and knowledge actually live.
Probe it: run short drills that force explanation without tools.
Trace it: record the decision chain from prompt to production, with owners.
Teach it: make the reasoning portable so turnover does not erase it.
That is not a slogan. It is instrumentation. It is a way of forcing reality back into the room without pretending you can policy your way out of a visibility problem.
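As a concrete sketch of what tracing a decision chain might capture, here is a minimal, hypothetical decision record in Python. The field names and example values are illustrative assumptions, not a prescribed schema from the book; the point is that each record forces the questions an illegible system cannot answer.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    """One link in the decision chain, from prompt to production."""
    change_id: str     # ties the record to a PR, commit, or deploy
    owner: str         # the human accountable for the judgment
    rationale: str     # why this design, in plain language
    uncertainty: str   # what was unknown or accepted as risk
    stop_rule: str     # the condition under which this gets reversed
    reverser: str      # who has the authority to reverse it
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical example entry.
record = DecisionRecord(
    change_id="PR-1234",
    owner="jane@example.com",
    rationale="Raised the fraud threshold to cut false declines on recurring purchases.",
    uncertainty="No data yet on interaction with the vendor's new model version.",
    stop_rule="Revert if legitimate declines rise above last quarter's baseline.",
    reverser="on-call payments lead",
)

# Serialize so the judgment survives turnover, tooling changes, and time.
print(json.dumps(asdict(record), indent=2))
```

Nothing about the format matters; what matters is that every record names an owner, a stop rule, and a reverser, so the who-decided question has an answer before the incident, not after.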
The price
It is tempting to promise this work is free. It is not. If you use these instruments honestly, you will slow some launches, say no to some AI experiments that would look great in a press release, and annoy people whose power depends on illegibility.
The cost of not doing this is quieter, right up until it is not.
Takeaway
This week, pick one production change you approved. Ask two questions. Who can explain why it exists, in plain language, without looking anything up? Who can reverse it safely, right now? If you cannot answer both, you are not seeing the system, you are staring at the artifacts.
Adapted from my book The Illegibility Crisis.
https://leanpub.com/illegibility_crisis