When nothing sounds wrong anymore

Pick a website, any website. Scroll through the landing page, the “About” section, the social responsibility commitments, the recruitment pitch. So much of it is polished to the point of suspicion. It’s professional and competent. It feels… fine.

But since when has fine been good enough? I don’t know who is talking, or what they stand for. There’s a dull “just-enoughness” to their expression, the kind you used to expect only from governments, or the dullest, most corporate brands.

Now? It’s everywhere—and spreading.

We’re building a content ecosystem where being “right” outweighs everything else—check that, where being not wrong does. Positions are averaged, points of view relegated to the most widely accepted. Brands are using content to go through their marketing motions without sticking out.

There’s a growing tsunami of content that can be prompted by anyone but is written by AI. It’s the filler we’re expected to accept. It’s becoming the de facto standard in white papers, expert articles, and social media posts. It’s recognizable in many ways, with the worst being its obviousness.

And frankly, it’s driving me a little batty. Because when nothing sounds wrong anymore, how will we know what’s right?

Normalizing sameness

That’s the hidden danger of AI for communicators. It’s so good at making anyone sound like they know what they’re talking about. It adds the same coat of professionalism to every written communication—if you use AI to write your whatever, it’ll sound just as professional as any other.

The other side of that coin? It won’t sound any different.

Approvable content on demand. It doesn’t raise voices, objections, or tempers. It doesn’t need reviews or supervision or endless rounds of check-ins and rewrites. It feels easier than the way things were done pre-AI.

The problem is that brands don’t compete on correctness. They compete on distinctiveness and belief. When AI aims for average with everything, it takes your brand with it. You may not see it as a problem right away. But chances are you’ve felt it—and instinctively know this can’t be good for the brand in the long run.

When AI does its thing, it defaults to what is most defensible, and subtly weakens what makes a brand singular. Not by making it worse, but by making it safer.

And safety scales effortlessly.

Automate but protect

Once enough brands rely on the same safety bias, the brand voice becomes a moderated version of itself. Over time, that moderation becomes normal. And once normalised, it becomes invisible.

From a marketing professional’s perspective, the danger isn’t that AI produces bad work. It’s that it produces work that can’t win. Everything sounds aligned, responsible, and intentional. Which means it also can’t be particularly powerful, compelling, or distinct.

For a consumer of that content, distinguishing between brands becomes a lot harder. Everybody says the same reasonable thing, so nothing is memorable. It’s the cost of optimising for being right: of offloading the responsibility for our communication onto a tool optimised for speed and efficiency.

AI may be faster and cheaper. But you can’t bore anyone into believing in you.

Should you trust your Copilot?

The more I work with Copilot, the more I see why this is the biggest breakthrough to come from Microsoft since the software’s introduction.

I imagine anyone who’s worked seriously with Copilot, especially for marketing content, knows this better than I do. You can draft, adapt, and develop ideas at a pace that would have felt unrealistic not long ago. And once you have it and know how to use it, new content costs next to nothing.

From a business perspective, that’s a big deal—but there is a tradeoff.

Copilot-generated content is fluent, professional, and approval-safe: the kind of work that looks credible everywhere, but commits to almost nothing.

Left unchecked over a few iterations, it’s also the kind of output that, almost imperceptibly, starts redefining what the brand sounds like. Once that becomes the default, it starts undoing years of investment in recognition, trust, and preference.

The “Co” is what counts

Copilot is extremely good at working within a frame. Give it a task, some constraints, and a stable reference point, and it will execute reliably. That’s its strength.

What it can’t do is establish that frame for you.

It doesn’t decide things like:

  • what the brand consistently stands for when trade-offs appear
  • how strongly it should commit to a position
  • where it should push, and where it should deliberately hold back
  • what kind of voice is appropriate when the answer isn’t obvious

It’s designed to assume those decisions already exist. It tries to infer them from whatever you give it: prompt context, decks, guidelines, past campaigns. Where it can’t, Copilot still produces output, but it makes the choices about what to say and how to say it using the safest, most broadly plausible option available.

In other words, if you leave everything to Copilot, it will do what copilots do—exactly what it’s told, and when unsure, what’s safest.

Why you can’t prompt your way out of this one

If you sense this drift toward blandness, the natural response is to try to tighten control through the prompts. That can help a little.

But Copilot treats every new or adjusted prompt as fresh, as if it had just been invented. It has to decide all over again what matters most, based only on what’s written in front of it.

That’s when compensating through prompts starts making them overly long: difficult to craft, and even harder to understand outside a few internal prompt “heroes.” And the more complex they become, the more that complexity makes them seem important, creating a “prompt-output-re-prompt” loop that grows ever more unmanageable without actually fixing the problem.

We’re using prompts to dictate outputs. But Copilot wants to know how to make decisions first.

Give Copilot what it needs—and watch it fly

Copilot doesn’t need more rules. It needs clearer direction on:

  • what the brand believes
  • what problem it’s really addressing
  • what it consistently prioritises
  • what it deliberately avoids

With that locked in before content generation, Copilot knows what to do, regardless of what you ask of it.
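One way to picture the difference is to treat the brand frame as a stable object that gets attached to every request, so the prompt itself can stay short. The sketch below is purely illustrative: the field names, the BRAND_BRIEF values, and the build_request helper are hypothetical, not a Copilot feature or API.

```python
# Hypothetical sketch of "frame first, prompt second".
# None of these names come from a real Copilot API.

# The frame: decided once, before any content is generated.
BRAND_BRIEF = {
    "believes": "Plain language beats jargon; clarity is a form of respect.",
    "problem": "Helping small teams ship without drowning in process.",
    "prioritises": ["specificity over safety", "one clear point per piece"],
    "avoids": ["hedged, committee-style claims", "buzzword stacking"],
}

def build_request(prompt: str, brief: dict = BRAND_BRIEF) -> str:
    """Prepend the stable brand frame to a short, plain-language prompt,
    so every request carries the same decisions instead of restating them."""
    frame = "\n".join([
        f"Brand belief: {brief['believes']}",
        f"Problem we address: {brief['problem']}",
        "We prioritise: " + "; ".join(brief["prioritises"]),
        "We deliberately avoid: " + "; ".join(brief["avoids"]),
    ])
    return f"{frame}\n\nTask: {prompt}"

# The prompt stays simple; the frame does the heavy lifting.
request = build_request("Draft a LinkedIn post announcing our spring release.")
```

The design point is that the brief changes rarely and the prompts change constantly; keeping them separate means no prompt has to re-argue what the brand stands for.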

Prompts can just be prompts: simple output requests in plain language, with little or no added context. Outputs start sounding professional and tied together as part of a larger story, with less brand babysitting. You get engaging variation without going off-brand, even as you scale volume across platforms.

And isn’t that exactly what you expect from Copilot in the first place?