Most Insights Aren't

Part 2 of 3 on compression and cognition. The previous essay, AI Feels Intelligent — But That's Not Why I Don't Trust It, asked what value in AI output actually looks like if fluency isn't the signal. This one tries to answer that. The next, Why AI Productivity Metrics Are Lying to Us, follows the answer to where measurement breaks.


"Insight" is one of the most used words in AI and analytics. Dashboards generate them, models surface them, every summary promises them. The word gets applied so broadly that it signals value without specifying what kind, and because it sounds good, nobody asks. If everything is an insight, nothing has to justify itself.

So I tried to pin it down.

A description doesn't qualify. It can be accurate and leave the situation unchanged. A summary doesn't either. Reducing volume is not the same as reducing doubt. Novelty falls short too: surprise can register without altering what anyone would do next.

Each of those can feel valuable. None of them require consequence.

What separates insight from these weaker forms is effect. Something has to move. A previously plausible decision has to become less so, or more. And the only thing that has that property is uncertainty.

The definition I've been working with, and I'm not being flexible about it:

An insight is a reduction in uncertainty that changes the plausibility of a decision.

This excludes most things currently called insights. That's the point.
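
One way to make the definition concrete, and this is my framing rather than any standard metric, is decision-theoretic: model a pending decision as a probability distribution over options, and count an output as an insight only if it both reduces the entropy of that distribution and changes which options remain plausible. A minimal sketch in Python, where the function names and the 0.2 plausibility cutoff are hypothetical:

```python
import math

def entropy(dist):
    """Shannon entropy, in bits, of a distribution over decision options."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def is_insight(before, after, plausibility_cutoff=0.2):
    """Apply the essay's definition: an output qualifies only if it
    reduces uncertainty AND changes which options are still live.

    before, after: dicts mapping decision options to probabilities.
    plausibility_cutoff: hypothetical threshold for a 'plausible' option.
    """
    uncertainty_reduced = entropy(after) < entropy(before)
    plausible_before = {o for o, p in before.items() if p >= plausibility_cutoff}
    plausible_after = {o for o, p in after.items() if p >= plausibility_cutoff}
    return uncertainty_reduced and plausible_before != plausible_after

# A well-formatted summary that leaves the distribution untouched fails:
before = {"raise_price": 0.40, "hold_price": 0.35, "lower_price": 0.25}
print(is_insight(before, before))   # False: nothing moved

# Evidence that collapses an option passes:
after = {"raise_price": 0.70, "hold_price": 0.25, "lower_price": 0.05}
print(is_insight(before, after))    # True: entropy fell, an option died
```

The first call is the dashboard case: the output can get arbitrarily more legible while the distribution, and therefore the decision, stays exactly where it was.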

The confusion is understandable. Clean summaries feel decisive. Aggregation feels like progress. Your brain registers reduced effort and mistakes it for reduced uncertainty. But those are different things. Effort is about how hard it is to process what you know. Uncertainty is about how hard it is to choose what to do. You can make something perfectly legible and still be no closer to a decision, because what blocked you wasn't confusion. It was risk, tradeoff, missing information at the exact point where commitment gets expensive.

Clarity makes situations easier to think about. Insight makes them easier to act on. Most analytics outputs stop at the first one.

I've been applying three questions to everything I read now:

1. What uncertainty did this reduce?
2. What would I do differently after reading it?
3. If no decision changes, why did this feel valuable?

Run those against the last ten "insights" your dashboards generated. I did. Most of them were summaries with good formatting. They reduced cognitive load without collapsing the uncertainty that governed the actual decision. They felt valuable while leaving action unchanged.

This is the failure mode that's hard to see from inside it. Systems produce outputs, outputs get labeled insights, insights get counted as value delivered. The whole chain looks productive. Nothing is technically wrong with any individual step. But the word "insight" floated free of consequence somewhere in the middle, and nobody caught it because the output looked good.

What gets produced in volume is what's easy to measure: coverage and responsiveness. What stays scarce is the ability to identify which uncertainty actually governs a specific decision and to reduce that rather than everything around it.

I saw this done well twice in the past year. Both times someone — a person, not the system — identified the exact uncertainty blocking a decision and used AI to collapse specifically that. A pricing call we'd been going back and forth on for weeks: the right data cut ended it in a meeting. A product scope question where a simulation of user paths killed an option we'd been debating for too long. In both cases the conversation visibly shifted. Before: five people with opinions. After: a decision. That's what the word should mean.

You can't manufacture that by producing more summaries faster.

Which leads to the measurement problem. If most AI output is compression labeled as insight, and we're measuring AI's value by the volume of that output, we're measuring the wrong thing. The metrics assume value scales with quantity.

That assumption is broken, and it's broken for a specific reason.