When nothing sounds wrong anymore

Pick a website, any website. Scroll through the landing page, their “About”, their social responsibility commitments, their recruitment pages. So much of it is polished to the point of suspicion. It’s professional and competent. It feels… fine.

But since when has fine been good enough? I don’t know who is talking, or what they stand for. There’s a dull “just-enoughness” to their expression, the kind you used to expect only from governments, or the most corporate of brands.

Now? It’s everywhere—and spreading.

We’re building a content ecosystem where being “right” outweighs everything else—check that, where being not wrong does. Positions are averaged, points of view reduced to whatever is most widely accepted. Brands are using content to go through their marketing motions without sticking out.

There’s a growing tsunami of content that can be prompted by anyone, but is written by AI. It’s the filler we’re expected to accept. It’s becoming the de facto standard in white papers, expert articles and social media posts. It’s recognizable in many ways, the worst of which is its obviousness.

And frankly, it’s driving me a little batty. Because when nothing sounds wrong anymore, how will we know what’s right?

Normalizing sameness

That’s the hidden danger of AI for communicators. It’s so good at making anyone sound like they know what they’re talking about. It adds the same coat of professionalism to every written communication: if you use AI to write your whatever, it’ll sound just as professional as everyone else’s.

The other side of that coin? It won’t sound any different.

Approvable content on demand. It doesn’t raise voices, or objections, or tempers. It doesn’t need reviews or supervision or endless rounds of check-ins and rewrites. It feels easier than the way things were done pre-AI.

The problem is that brands don’t compete on correctness. They compete on distinctiveness and belief. When AI aims for average with everything, it takes your brand with it. You may not see it as a problem right away. But chances are you’ve felt it—and instinctively know this can’t be good for the brand in the long run.

When AI does its thing, it defaults to what is most defensible, and subtly weakens what makes a brand singular. Not by making it worse, but by making it safer.

And safety scales effortlessly.

Automate but protect

Once enough brands rely on the same safety bias, every brand voice becomes a moderated version of itself. Over time, that moderation becomes normal. And once normalized, it becomes invisible.

From a marketing professional’s perspective, the danger isn’t that AI produces bad work. It’s that it produces work that can’t win. Everything sounds aligned, responsible, and intentional. Which means it also can’t be particularly powerful, compelling or distinct.

For a consumer of that content, distinguishing between brands becomes a lot harder. Everybody says the same reasonable thing, so nothing is memorable. It’s the cost of optimizing for being right, of offloading the responsibility for our communication onto a tool optimized for speed and efficiency.

AI may be faster and cheaper. But you can’t bore anyone into believing in you.

AI feels like it’s always starting over—because it is

Imagine going to work every day, say in a kitchen. You clock in, change, prep, take breaks, ride the ups and downs of service, clean up, change and go home.

Now imagine going to work the next day, but without any idea of what happened on yesterday’s shift. You have all the same equipment and recipes. But you’ve gained no experience: every shift is a brand-new one that doesn’t build on the familiarity of the ones that came before it.

That’s AI. It knows the rules. It’s seen the examples, and it can imitate the patterns. But it doesn’t accumulate experience the way we do. So every prompt is a new first shift in the kitchen. It has all the tools to be the greatest chef on the planet right this minute, but none of the familiarity with its situation—experience—that a human chef builds naturally over time.

How safe becomes the default

What would you do if every decision had to be made consciously—if you couldn’t rely on what you implicitly know and take for granted to skip the small decisions, and focus on the important ones?

You’d do what AI does: take the safest option possible. No risks, no “pushing it.” You’d make every decision based on what’s most likely to work, and never try anything new or out of the ordinary. You’d become boring: effective, but dull.

For AI, every prompt is a clean slate. Past work is a reference, not an experience. It knows all the rules, better than you. But it’s missing a sense of why it’s doing the work in the first place. What kind of decisions are expected. Or what usually matters most when things compete. Nothing from its previous work carries over, nothing becomes instinct and second nature, nothing becomes familiar enough to make assumptions about.

This is why AI-generated work so often comes across as fast, professional, competent—and ever so slightly “off.” The system isn’t failing. It’s behaving exactly as you’d expect when nothing ever becomes assumed.

And safety, at scale, looks like blandness.

Why we need to move past prompting

Adding more reference material—guidelines, decks, examples—won’t fix the problem.

Because reference material assumes something crucial already exists: background judgment. Humans bring that automatically. AI doesn’t, and isn’t designed to. We will always have to compensate for its lack of situational awareness, the kind that only comes from familiarity.

This is where a narrative approach can add value. Not as a story to be told, but as a way of structuring judgment: what follows from what, what usually matters most, and how trade-offs are handled when things aren’t obvious.

When that narrative thinking is made explicit, it acts as the “experience” AI can fall back on—the unwritten rules that guide its work in the same way instinct might for us.
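
If it helps to picture it, here’s a minimal sketch in Python of what making that thinking explicit could look like. Everything in it is an assumption for illustration: the wording of the brief, the NARRATIVE_BRIEF and build_prompt names, the idea of simply prepending it as plain text. The only point it makes is that the judgment travels with every prompt instead of being rediscovered from zero each time.

    from textwrap import dedent

    # A standing "narrative brief": explicit judgment the model can fall back on.
    # The content here is invented for illustration; a real one would capture the
    # brand's own answers to "what follows from what" and "what matters most".
    NARRATIVE_BRIEF = dedent("""
        Who is speaking: a challenger brand that prefers plain language to polish.
        What usually matters most: a clear point of view beats broad agreeableness.
        How trade-offs are handled: when "safe" and "distinct" conflict, choose
        distinct, then flag the risk for a human editor instead of softening it.
    """).strip()

    def build_prompt(task: str) -> str:
        """Attach the standing brief to a one-off task so no request starts cold."""
        return f"{NARRATIVE_BRIEF}\n\nTask: {task}"

    print(build_prompt("Draft a short post announcing our new sustainability report."))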

So when a new prompt arrives, it can be treated as a task with the context already resolved, rather than approached from zero. It won’t be perfect, because human experience can’t be mimicked entirely, but there’s at least something to work from.

Not inspiration, the way experience is for us, but orientation: what to say, and when, without drifting off into jargon and platitudes when you prompt for more.

The AI mirage we keep chasing

We assume that the versions of AI we use in business and marketing today will develop the capacity to “experience” as people do.

It’s an honest mistake. We’ve all seen how powerful a tool it can be, and especially how fast it is. It’s hard not to be impressed by the game-changing burst of pure efficiency that everyone suddenly has access to.

Which also makes it easy to assume AI already has, or will eventually develop, the background humans rely on without thinking.

It won’t.

And until we accept that difference and design around it, AI will keep delivering output that impresses with speed and competence, while quietly letting us down in ways that are hard to point to but easy to feel.