How much do you actually trust AI output? Most people answer "it depends", then act as if it doesn't, granting every reply the same level of trust as the last. The result is either over-checking (which kills the speed benefit) or under-checking (which ships errors).
The trust ladder is a way to be honest with yourself about which rung you're on for the task at hand. Five rungs. Most of us are on different rungs for different tasks, and we should be.
Rung 1: Starting point only

You don't trust the output as content. You trust it as a starting point. Useful for brainstorming, breaking writer's block, getting a list to react against. The actual words don't survive. Treat the output the way you'd treat an enthusiastic junior colleague: fast, helpful, and sometimes confidently wrong.
Rung 2: Draft, but verify everything

You let it produce a draft, but every fact, every name, every number gets checked against a source you trust. Right rung for research summaries, anything with statistics, anything you'll quote in front of someone who'd notice an error. Slow but defensible.
Rung 3: Its structure, your words

You let it choose the shape — the order of sections, the headline structure, the bullets. Then you rewrite the actual sentences in your voice. Right rung for emails, internal documents, anything where the form is conventional but the words are yours. Most "writing with AI" lands here.
Rung 4: Skim and send

You skim, fix the obvious, and send. Right rung for low-stakes throwaways: meeting notes nobody will reread, the rough first draft of a long document you'll edit later, internal Q&A. Wrong rung for anything a customer or boss will see cold.
Rung 5: Act on your behalf

You let AI not just produce output but take action: book the thing, send the email, file the form. Right rung only for tasks that are genuinely repetitive, low-stakes, easy to audit afterwards, and ideally reversible. Wrong rung for almost everything else, no matter what the demos suggest.
How to use the ladder
Before you ask AI for something, decide which rung you're on. Out loud, even. "This is a Rung 3 task — I'll keep the structure, rewrite the words." That single bit of intention transforms how you use the output, because it tells you in advance how much checking is owed.
Most mistakes are rung mismatches. People skim-and-send (Rung 4) work that needed Rung 2 verification. Or they painstakingly verify every word of something that was always going to be a Rung 1 throwaway, and waste the morning.
Trust should be earned, task by task — not granted wholesale.
One step that is not AI
Whichever rung you're on, the last move is yours. Read the output back as if you wrote it. Ask yourself: would I send this with my name on it? If the answer is no, climb down a rung and rework. If it's yes, ship.
That final read — eyes only, no AI involvement — is the bit that catches the things AI doesn't know it got wrong. Don't skip it.
Which rung are you on today?
The honest answers are the interesting ones. Bring yours to the Kent AI Meet Up — comparing rungs across different work is where it gets useful.
See upcoming meet-ups →