Why AI feels dumb to creatives

If you’re on the creative side of the communication game, AI has pretty much taken over most career-related conversations. With good reason.

Outputs—visual, text, all of it—look sharp. And not too long ago, the speed alone would have been enough to impress.

But I’m a conceptual copywriter with a couple of decades of experience. That comes with a built-in BS detector, and an ear for “saying a whole lot of nothing.” To me, AI is full of it—it sounds dumb. And the more I work with it, the more it sticks out.

To be clear—AI isn’t dumb at all, if we’re measuring smarts by efficiency from a craft perspective. Everything is fast, clean, correct. But in terms of cutting my workload by making the decisions I would make intuitively—what argument to bring up, or what tone to take and when—it’s laughably inept.

AI likes sitting on fences

Try AI in your workflow for any length of time, and you start to see an important pattern—you can’t pin it down on anything.

It will present options and provide the language around any direction you want to take—but it won’t actually decide that direction. It just follows your lead.

Sometimes that’s genuinely helpful—rationalizing an idea or concept more quickly than I could type it, for one. Or assessing the positives and risks of an idea, or expanding my options—AI is efficient at that as well. As long as it’s generating, it’s in its comfort zone.

But it isn’t really being creative as I know it. It can’t. Creatives choose what to emphasize, what to skip, and what to push or sacrifice—that’s our intelligence. It’s our judgement. We intuit choices that make sense, based on our experience and expertise, because the outcome matters.

And that’s why AI looks and sounds dumb to us. It offers clean, approvable outputs, options and variations, and very convincing reasoning for everything it generates. But it can’t judge, and it doesn’t care beyond solving the prompt request. There’s nothing at stake for it.

The interesting thing is how we’re responding. The reaction tends to swing between two extremes: amazement at its competence, or frustration at its shallowness.

Only you care if AI looks dumb

Both reactions miss the point. AI isn’t dumb. It just can’t make decisions. It is replacing parts of what I do, but only as it relates to execution. It writes faster and with fewer grammatical or spelling mistakes than I’ll ever be able to. But coming up with a good idea, testing it, and deciding what to do—that’s still all me.

Here’s where it gets uncomfortable: when you give AI clear direction, it will execute flawlessly in the direction you’ve pointed. Which means it will also expand on weak ideas endlessly and confidently.

That’s where it starts to feel dumb for a seasoned creative. There’s a point when you stop and realize—I’ve been lured to the dull side. The good news is you recognized it and found another way forward. The bad news is there may be a lot of less experienced AI users scaling soft concepts in the coming years.

I will continue to use AI as part of my natural workflow. I’ll just try not to expect it to be something it isn’t. And stay aware that I lead, it follows, and that’s how it’s meant to be.

Which, of course, is the hard part: it generates so smoothly, it’s easy to forget it just doesn’t care.

Did AI influence the Pentagon’s AI decision?

Critical to national security or sell-out? Whatever your opinion about OpenAI’s announcement to work with the Pentagon, it’s likely missing a consideration that doesn’t get talked about as much:

Did AI influence the decision?

And if it’s possible ChatGPT had a hand in deciding its fate as a military tool, how so and what does it mean?

AI as a decision-maker seems a little far-fetched—it doesn’t think like we do, nor does it have ambition in that sense (so far). It certainly can’t sign contracts on anyone’s behalf (so far). And no board of directors is handing strategic authority to an LLM (ditto).

That’s a whole other can of worms, one that seems implausible outside of our more dystopian panic attacks.

No, I’m talking about something more subtle, and arguably more important: AI’s capacity for influence. And it all starts with understanding how decisions actually happen.

Now where did I leave the keys to the AI…

Major corporate decisions don’t usually emerge from a single meeting or a single person.

They evolve slowly, like most things by committee—there are memos, scenario analyses, internal debates, strategy sessions, legal reviews, communications planning. People explore possible futures, ask “what if” questions, test arguments.

And if that sounds like how you’re using AI these days, well, it is—even in organisations where AI use isn’t formalized or policy. We’re all using it, from the executive level on down—but (hopefully) more as a thinking assistant than a decision authority.

That’s a whole tangle of interactions. But let’s look at just one of them: Executive asks question. System generates possibilities. Executive thinks (again, hopefully), reacts, refines the question, pushes further…

And directions start to take shape. Not from the executive or the machine on its own—from the loop between them.

Now multiply that by everyone who is (or should be) doing their own loops in their own roles to add to the whole—which in turn will eventually lead to a final decision.

Mutual influence

This is how most people already use AI. You ask a question. The system expands the space of possible answers. You decide which ones are interesting, plausible, or persuasive.

You follow up, you ask another question, same thing again. You’re still doing the judgement in that loop. But from AI’s perspective, the environment changes with every new prompt—something we got into a little deeper here.

And that’s where things get interesting.

All those AI-generated arguments, counterarguments, scenarios, and narratives about the future are nice. But people still choose what they believe. And the back-and-forth between people and AI in these contexts leads to even more possibilities. Because the machine doesn’t just answer the question—it also quietly reshapes the next one.

The final call—judgement one way or another—still sits with people. But it’s formed within a human-AI reasoning loop, where each side’s behaviour influences how the other interprets the problem.

The decision environment

Take our example (hypothetically, I wasn’t there): AI is asked to explore the implications of refusing a government contract.

It might answer like this: Refusing the contract protects the company’s ethical position.

Or it could go with this: Refusing the contract risks losing influence over how governments deploy AI.

Both statements describe the same decision. But that’s two very different conversations. One frames the issue as a question of integrity. The other frames it as a question of responsibility and strategic influence.

You could argue that neither frame is necessarily wrong—I don’t want to, that’s not the point of this article.

But as someone who works with branding and communication, understanding how framing shapes the decision environment is important to me—and it should be talked about more.

Because framing obviously has rhetorical power. And AI is extremely good at generating those frames.

Multiple choice mayhem

I’m not saying AI is secretly running companies. But if you’re struggling to imagine some futuristic boardroom where executives debate strategy with an AI advisor—time to catch up.

The fact is, if you’re using AI, you’re already in a decision-loop relationship. You are influenced by the options AI gives you. AI is influenced by the options you give it.

What you DO still have is the final decision. But it may be based on a picture shaped to keep you happy, rather than one designed to challenge your assumptions. That doesn’t have to be scary. But we definitely need to be aware of it—and open to pushing our own role in that loop a little further than maybe we have been.

So was ChatGPT involved in OpenAI’s decision to work with the Pentagon?

How could it not have been? Just not in the way people imagine.

It didn’t whisper sweet visions of conquest into an executive’s ear, chuckling evilly while rubbing its claws in the recesses of the boardroom.

What likely happened was far more subtle—and far more important to understand.

AI did what it was designed to do. It answered questions, sparred over scenarios, envisioned outcomes based on the user’s prompts. Whoever was using it made decisions based on that conversation—but AI definitely helped shape the space in which that outcome became thinkable.

And that raises a deeper question.

As AI becomes part of the human thinking process, how often will it quietly influence decisions about the very systems it is helping to build—and will we even recognize it when it does?

When nothing sounds wrong anymore

Pick a website, any website. Scroll through the landing page, their “About”, their social responsibility commitments and recruitment pages. So much of it is polished to the point of suspicion. It’s professional and competent. It feels… fine.

But since when has fine been good enough? I don’t know who is talking, or what they stand for. There’s a dull “just-enoughness” to their expression, the kind you only used to expect from governments, or the dullest, most corporate brands.

Now? It’s everywhere—and spreading.

We’re building a content ecosystem where being “right” outweighs everything else—check that, where being not wrong does. Positions are averaged, points of view relegated to the most widely accepted. Brands are using content to go through their marketing motions without sticking out.

There’s a growing tsunami of content that can be prompted by anyone, but written by AI. It’s the filler we’re expected to accept. It’s becoming the de facto standard in white papers, expert articles and social media posts. It’s recognizable in many ways—the worst being its obviousness.

And frankly, it’s driving me a little batty. Because when nothing sounds wrong anymore, how will we know what’s right?

Normalizing sameness

That’s the hidden danger of AI for communicators. It’s so good at making anyone sound like they know what they’re talking about. It adds the same coat of professionalism to every written communication—use AI to write your whatever, and it’ll sound just as professional as anyone else’s.

The other side of that coin? It won’t sound any different.

Approvable content on demand. It doesn’t raise voices, or objections, or temperaments—it doesn’t need reviews or supervision or endless rounds of check-ins and rewrites. It feels easier than the way things were done pre-AI.

The problem is that brands don’t compete on correctness. They compete on distinctiveness and belief. When AI aims for average with everything, it takes your brand with it. You may not see it as a problem right away. But chances are you’ve felt it—and instinctively know this can’t be good for the brand in the long run.

When AI does its thing, it defaults to what is most defensible—and subtly weakens what makes a brand singular. Not by making it worse — but by making it safer.

And safety scales effortlessly.

Automate but protect

Once enough brands rely on the same safety bias, the brand voice becomes a moderated version of itself. Over time, that moderation becomes normal. And once normalised, it becomes invisible.

From a marketing professional perspective, the danger isn’t that AI produces bad work. It’s that it produces work that can’t win. Everything sounds aligned, responsible, and intentional. Which means it also can’t be particularly powerful, compelling and distinct.

For a consumer of that content, distinguishing between brands becomes a lot harder. Everybody says the same reasonable thing, so nothing is memorable. It’s the cost of optimising for being right—of offloading the responsibility for our communication onto a tool optimised for speed and efficiency.

AI may be faster and cheaper. But you can’t bore anyone into believing in you.

Why AI-generated marketing drifts (and why prompts aren’t enough)

AI is remarkably good at producing marketing content.
No debate there.

The problem shows up after the second post, the third variation, or the fifth campaign asset. Everything still sounds fine, but something starts to slip. The language becomes familiar. The emphasis shifts. The work no longer feels like it’s coming from a single, coherent point of view.

This is what I mean by drift.

Drift doesn’t mean the outputs are wrong. In fact, each individual piece may be just fine on its own. Drift happens when those outputs no longer add up to something intentional over time. The marketing outputs don’t feel like they’re coming from the same brand, or even talking about the same things.

Why drift happens

AI doesn’t generate content by understanding intent or judging priorities. It generates content by predicting plausible continuations based on patterns it has seen before.

When the context it’s given is incomplete — which is often the case — AI has to fill in the gaps. And it fills them differently each time.

Prompts help. They tell the system what to do in a given moment. But prompts are inherently reactive. They describe tasks, not priorities. They correct after the fact rather than establishing a stable frame of reference upfront.

Right now, the most common response is to add more instruction:

  • longer prompts
  • more detailed prompts
  • stricter prompts

This can improve individual outputs. But it doesn’t solve the underlying problem.

Why “better prompts” still aren’t enough

Prompt engineering is often treated as the fix for AI inconsistency. That assumes prompts build on each other.

They don’t.

Each prompt is a fresh brief. Without a settled point of view behind it, the system has to decide — again — what matters most.

In marketing circles, that thinking happens upstream, before the brief gets written. With AI, it ends up pushed downstream into the prompts themselves.

That’s why outputs drift. Not because the tool fails, but because the thinking keeps getting renegotiated.
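That statelessness can be made concrete with a toy sketch. The `generate` function below is an invented stand-in for a real model call, not any actual API; the only point it demonstrates is that each call sees nothing beyond the messages passed into it, so any “memory” has to be replayed by the caller.

```python
# Toy stand-in for a chat-model call. It is stateless by construction:
# it can only "see" the messages handed to it in this one request.
def generate(messages):
    """Pretend model: returns the context it actually received."""
    return " | ".join(m["content"] for m in messages)

# Call 1: establish a priority.
first = generate([{"role": "user", "content": "Lead with sustainability."}])

# Call 2: a follow-up sent on its own. The earlier instruction is gone.
second = generate([{"role": "user", "content": "Write the next post."}])

# Call 3: the same follow-up, but with the history replayed by the caller.
third = generate([
    {"role": "user", "content": "Lead with sustainability."},
    {"role": "user", "content": "Write the next post."},
])

print("sustainability" in second)  # False: nothing carried over
print("sustainability" in third)   # True: only because we resent it
```

Every real chat interface does the replaying for you within one conversation—but a new conversation, or a new prompt written from scratch, starts exactly like call 2.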

What actually prevents drift

Drift isn’t prevented by more control. It’s prevented by reducing ambiguity.

Specifically: establishing a clear point of view about what the brand stands for, the problem it’s really addressing, and what matters most when choices aren’t obvious.

When AI knows what matters, it no longer has to guess. It can vary language, structure, and format while holding onto the same underlying intent.

That’s when outputs start to feel coherent again — not because they’re identical, but because they’re connected.
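One way to picture a stable point of view in practice: hold it as a single fixed block and prepend it to every task prompt, so each request starts from the same frame instead of renegotiating it. This is a hypothetical sketch, not any vendor’s feature; `BRAND_POV`, `build_request`, and the brand statements are invented placeholders.

```python
# Hypothetical sketch: the brand's point of view lives in one fixed block
# that travels with every request, so individual prompts can stay short.
# All names and brand statements here are invented placeholders.
BRAND_POV = (
    "We believe security should be invisible to the user.\n"
    "We address one problem: small teams drowning in alert noise.\n"
    "When clarity and cleverness conflict, clarity wins."
)

def build_request(task: str) -> list[dict]:
    """Compose a request: stable point of view first, short task second."""
    return [
        {"role": "system", "content": BRAND_POV},
        {"role": "user", "content": task},
    ]

# The prompt itself can stay one plain line.
request = build_request("Draft a post announcing the v2 release.")
```

The judgment rides along with every call instead of being rewritten, slightly differently, inside each new prompt.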

Why this matters now

AI is increasingly being used not just to scale campaigns, but to replace individual acts of writing altogether. In that context, drift isn’t a theoretical concern — it’s an operational one.

The more content AI produces, the more important it becomes to give it something stable to work from.

Not just instructions or prompts—a point of view.

Without it, AI becomes a very good reflector of doubt and indecision — and a pretty weak foundation for ongoing brand communication.

Should you trust your Copilot?

The more I work with Copilot, the more I see why this is the biggest breakthrough to come from Microsoft since the software’s introduction.

I imagine anyone who’s worked seriously with Copilot, especially for marketing content, knows this better than I do. You can draft, adapt, and develop ideas at a pace that would have felt unrealistic not long ago. And once you have it and know how to use it, new content doesn’t cost a thing.

From a business perspective, that’s a big deal—but there is a tradeoff.

Copilot-generated content is fluent, professional, and approval-safe — the kind of work that looks credible everywhere, but commits to almost nothing.

Left unchecked over a few iterations, it’s also the kind of output that, almost imperceptibly, starts redefining what the brand sounds like. Once that becomes the default, it starts undoing years of investment in recognition, trust, and preference.

The “Co” is what counts

Copilot is extremely good at working within a frame. Give it a task, some constraints, and a stable reference point, and it will execute reliably. That’s its strength.

What it can’t do is establish that frame for you.

It doesn’t decide things like:

  • what the brand consistently stands for when trade-offs appear
  • how strongly it should commit to a position
  • where it should push, and where it should deliberately hold back
  • what kind of voice is appropriate when the answer isn’t obvious

It’s designed to assume those decisions already exist. And it tries to infer them from prompt contexts, decks, guidelines, past campaigns—whatever you give it. In all the places it can’t, Copilot still produces output—but now it makes choices about what to say and where to say it using the safest, most broadly plausible option available.

In other words, if you leave everything to Copilot, it will do what copilots do—exactly what it’s told, and when unsure, what’s safest.

Why you can’t prompt your way out of this one

If you sense this drift toward blandness, the natural response will be to try to tighten control through the prompts. That can help a little.

But to Copilot, every new or adjusted prompt is fresh, as if it had just been invented—so it has to decide, again, what matters most, based only on what’s written in front of it.

That’s when compensating through prompts starts making them overly long—difficult to craft, and even harder to understand outside a few internal prompt “heroes.” And the more complex they become, the more that complexity makes them seem important—creating a “prompt-output-re-prompt” loop that gets more and more unmanageable without ever fixing the problem.

We’re using prompts to dictate outputs. But Copilot wants to know how to make decisions first.

Give Copilot what it needs—and watch it fly

Copilot doesn’t need more rules. It needs clearer direction on:

  • what the brand believes
  • what problem it’s really addressing
  • what it consistently prioritises
  • what it deliberately avoids

With that locked in before content generation, Copilot knows what to do, regardless of what you ask of it.

Prompts can just be prompts—simple output requests in plain language, with little or no added context. Outputs start sounding professional and tied together as part of a larger story, with less brand babysitting. You get engaging variation without going off-brand—even as you scale volume across different platforms.

And isn’t that exactly what you expect from Copilot in the first place?

AI feels like it’s always starting over—because it is

Imagine going to work every day, say in a kitchen. You clock in, change, prep, take breaks, ride the ups and downs of service, clean up, change again, and go home.

Now imagine going to work the next day—but without any memory of yesterday’s shift. You have all the same equipment and recipes. But you’ve gained no experience—every shift is a brand new one that doesn’t build on the familiarity of the ones that came before it.

That’s AI. It knows the rules. It’s seen the examples, and it can imitate the patterns. But it doesn’t accumulate experience the way we do. So every prompt is a new first shift in the kitchen. It has all the tools to be the greatest chef on the planet right this minute, but none of the familiarity with its situation—experience—that a human chef builds naturally over time.

How safe becomes default

What would you do if every decision had to be made consciously—if you couldn’t rely on what you implicitly know and take for granted to skip the small decisions, and focus on the important ones?

You’d do what AI does—take the safest option possible. No risks, or “pushing it.” You’d make every decision based on what’s most likely to work — and never try anything new or out of the ordinary. You’d become boring: effective, but dull.

For AI, every prompt is a clean slate. Past work is a reference, not an experience. It knows all the rules, better than you. But it’s missing a sense of why it’s doing the work in the first place. What kind of decisions are expected. Or what usually matters most when things compete. Nothing from its previous work carries over, nothing becomes instinct and second-nature, nothing becomes familiar enough to make assumptions about.

This is why AI-generated work so often comes across as fast, professional, competent—and ever so slightly “off.” The system isn’t failing. It’s behaving exactly as you’d expect when nothing ever becomes assumed.

And safety, at scale, looks like blandness.

Why we need to move past prompting

Adding more reference material—guidelines, decks, examples—won’t fix the problem.

Because reference assumes something crucial already exists: background judgment. Humans bring that automatically. AI doesn’t — and isn’t designed to. We will always have to compensate for its lack of situational awareness—the kind that only comes from familiarity.

This is where a narrative approach can add value. Not as a story to be told — but as a way of structuring judgment: what follows from what, what usually matters most, and how trade-offs are handled when things aren’t obvious.

When that narrative thinking is made explicit, it acts as the “experience” AI can fall back on—the unwritten rules that guide its work in the same way instinct might for us.

So when your new prompt appears, it can be approached as a task with the context already resolved—no longer approached from zero. It won’t be perfect—human experience can’t be fully mimicked—but there’s at least something to work from.

Not inspiration like it might be for us, but orientation for what to say, and when, that doesn’t drift off into jargon and platitudes when you prompt it for more.

The AI mirage we keep chasing

We assume that the versions of AI we use in business and marketing today will develop the capacity to “experience” as people do.

It’s an honest mistake. We’ve all seen how powerful a tool it can be—especially how fast AI is. It’s hard not to be impressed by the game-changing burst of pure efficiency that everyone suddenly has access to.

Which also makes it easy to assume AI already has — or will eventually develop — the background humans rely on without thinking.

It won’t.

And until we accept that difference and design around it, AI will keep delivering outputs that impress with speed and competence—while quietly letting us down in ways that are hard to point to, but easy to feel.