Pick a website, any website. Scroll through the landing page, the “About” section, the social responsibility commitments, the recruitment pitch. So much of it is polished to the point of suspicion. It’s professional and competent. It feels… fine.
But since when has fine been good enough? I don’t know who is talking, or what they stand for. There’s a dull “just-enoughness” to their expression, the kind you only used to expect from governments, or the dullest, most corporate brands.
Now? It’s everywhere—and spreading.
We’re building a content ecosystem where being “right” outweighs everything else—check that, where being not wrong does. Positions are averaged, points of view relegated to the most widely accepted. Brands are using content to go through their marketing motions without sticking out.
There’s a rising tide of content that can be prompted by anyone, but written by AI. It’s the filler we’re expected to accept. It’s becoming the de facto standard in white papers, expert articles, and social media posts. It’s recognizable in many ways—with the worst being its obviousness.
And frankly, it’s driving me a little batty. Because when nothing sounds wrong anymore, how will we know what’s right?
Normalizing sameness
That’s the hidden danger of AI for communicators. It’s so good at making anyone sound like they know what they’re talking about. It adds the same coat of professionalism to every written communication—if you use AI to write your whatever, it’ll sound just as professional as everyone else’s.
The other side of that coin? It won’t sound any different.
Approvable content on demand. It doesn’t raise voices, or objections, or tempers. It doesn’t need reviews or supervision or endless rounds of check-ins and rewrites. It feels easier than the way things were done pre-AI.
The problem is that brands don’t compete on correctness. They compete on distinctiveness and belief. When AI aims for average with everything, it takes your brand with it. You may not see it as a problem right away. But chances are you’ve felt it—and instinctively know this can’t be good for the brand in the long run.
When AI does its thing, it defaults to what is most defensible, and subtly weakens what makes a brand singular. Not by making it worse, but by making it safer.
And safety scales effortlessly.
Automate but protect
Once enough brands rely on the same safety bias, the brand voice becomes a moderated version of itself. Over time, that moderation becomes normal. And once normalized, it becomes invisible.
From a marketing professional’s perspective, the danger isn’t that AI produces bad work. It’s that it produces work that can’t win. Everything sounds aligned, responsible, and intentional. Which means it also can’t be particularly powerful, compelling, or distinct.
For a consumer of that content, distinguishing between brands becomes a lot harder. Everybody says the same reasonable thing, so nothing is memorable. It’s the cost of optimising for being right—of offloading the responsibility for our communication onto a tool optimised for speed and efficiency.
AI may be faster and cheaper. But you can’t bore anyone into believing in you.