Why AI feels dumb to creatives

If you’re on the creative side of the communication game, AI has pretty much taken over most career-related conversations. With good reason.

Outputs—visual, text, all of it—look sharp. And not too long ago, the speed alone would have been enough to impress.

But I’m a conceptual copywriter with a couple of decades of experience. That comes with a built-in BS detector, and an ear for “saying a whole lot of nothing.” To me, AI is full of it—it sounds dumb. And the more I work with it, the more it sticks out.

To be clear—AI isn’t dumb at all, if we’re measuring smarts by efficiency of craft. Everything is fast, clean, correct. But in terms of cutting my workload by making the decisions I would make intuitively—what argument to bring up, or what tone to take and when—it’s laughably inept.

AI likes sitting on fences

Try AI in your workflow for any length of time, and you start to see an important pattern—you can’t pin it down on anything.

It will present options and provide the language around any direction you want to take—but it won’t actually decide that direction. It just follows your lead.

Sometimes that’s genuinely helpful—rationalizing an idea or concept more quickly than I could type it out, for one. Assessing the upsides and risks of an idea, or expanding my options—AI is efficient at that as well. As long as it is generating, it’s in its comfort zone.

But it isn’t really being creative as I know it. It can’t. Creatives choose what to emphasize, what to skip, and what to push or sacrifice—that’s our intelligence. It’s our judgement. We intuit choices that make sense, based on our experience and expertise, because the outcome matters.

And that’s why AI looks and sounds dumb to us. It offers clean, approvable outputs, options and variations, and very convincing reasoning for everything it generates. But it can’t judge, and it doesn’t care beyond solving the prompt request. There’s nothing at stake for it.

The interesting thing is how we’re responding. The reaction tends to swing between two extremes: amazement at its competence, or frustration at its shallowness.

Only you care if AI looks dumb

Both reactions miss the point. AI isn’t dumb. It just can’t make decisions. It is replacing parts of what I do, but only as it relates to execution. It writes faster and with fewer grammatical or spelling mistakes than I’ll ever be able to. But coming up with a good idea, testing it, and deciding what to do—that’s still all me.

Here’s where it gets uncomfortable: when you give AI clear direction, it will execute flawlessly in the direction you’ve pointed. Which means it will also expand on weak ideas endlessly and confidently.

That’s where it starts to feel dumb for a seasoned creative. There’s a point when you stop and realize—I’ve been lured to the dull side. The good news is you recognized it and found another way forward. The bad news is there may be a lot of less experienced AI users scaling soft concepts in the coming years.

I will continue to use AI as part of my natural workflow. I’ll just try not to expect it to be something it isn’t. And stay aware that I lead, it follows, and that’s how it’s meant to be.

Which, of course, is the hard part: it generates so smoothly, it’s easy to forget it just doesn’t care.

Did AI influence the Pentagon’s AI decision?

Critical to national security or sell-out? Whatever your opinion of OpenAI’s announcement that it will work with the Pentagon, it’s likely missing a consideration that doesn’t get talked about as much:

Did AI influence the decision?

And if it’s possible ChatGPT had a hand in deciding its fate as a military tool, how so and what does it mean?

AI as a decision-maker seems a little far-fetched—it doesn’t think like we do, nor does it have ambition in that sense (so far). It certainly can’t sign contracts on anyone’s behalf (so far). And no board of directors is handing strategic authority to an LLM (ditto).

That’s a whole other can of worms, one that seems implausible outside of our more dystopian panic attacks.

No, I’m talking about something more subtle, and arguably more important: AI’s capacity for influence. And it all starts with understanding how decisions actually happen.

Now where did I leave the keys to the AI….

Major corporate decisions don’t usually emerge from a single meeting or a single person.

They evolve slowly, like most things by committee—there are memos, scenario analyses, internal debates, strategy sessions, legal reviews, communications planning. People explore possible futures, ask “what if” questions, test arguments.

And if that sounds like how you’re using AI these days, well, it is—even in organisations where AI use isn’t formalized or official policy. We’re all using it, from the executive level on down—but (hopefully) more as a thinking assistant than a decision authority.

That’s a whole tangle of interactions. But let’s look at just one of them: Executive asks question. System generates possibilities. Executive thinks (again, hopefully), reacts, refines the question, pushes further…

And directions start to take shape. Not from the person or the machine on their own—from the loop between them.

Now multiply that by everyone who is (or should be) doing their own loops in their own roles to add to the whole—which in turn will eventually lead to a final decision.

Mutual influence

This is how most people already use AI. You ask a question. The system expands the space of possible answers. You decide which ones are interesting, plausible, or persuasive.

You follow up, you ask another question, same thing again. You’re still doing the judgement in that loop. But from AI’s perspective, the environment changes with every new prompt—something we got into a little deeper here.

And that’s where things get interesting.

All those AI-generated arguments, counterarguments, scenarios, and narratives about the future are nice. But people still choose what they believe. And the back-and-forth between people and AI in these contexts leads to even more possibilities. Because the machine doesn’t just answer the question—it also quietly reshapes the next one.

The final call—judgement one way or another—still sits with people. But it’s formed within a human-AI reasoning loop, where each side’s behaviour influences how the other interprets the problem.

The decision environment

Take our example (hypothetically, I wasn’t there): AI is asked to explore the implications of refusing a government contract.

It might answer like this: Refusing the contract protects the company’s ethical position.

Or it could go with this: Refusing the contract risks losing influence over how governments deploy AI.

Both statements describe the same decision. But that’s two very different conversations. One frames the issue as a question of integrity. The other frames it as a question of responsibility and strategic influence.

You could argue that neither frame is necessarily wrong—I don’t want to, that’s not the point of this article.

But as someone who works with branding and communication, understanding how framing shapes the decision environment is important to me—and it should be talked about more.

Because framing obviously has rhetorical power. And AI is extremely good at generating those frames.

Multiple choice mayhem

I’m not saying AI is secretly running companies. But if you’re struggling to imagine some futuristic boardroom where executives debate strategy with an AI advisor—time to catch up.

The fact is, if you’re using AI, you’re already in a decision-loop relationship. You are influenced by the options AI gives you. AI is influenced by the options you give it.

What you DO still have is the final decision. But it may be based on a picture shaped to keep you happy, rather than one designed to challenge your assumptions. That doesn’t have to be scary. But we definitely need to be aware of it—and open to pushing our own role in that loop a little further than maybe we have been.

So was ChatGPT involved in the decision to choose OpenAI for the Pentagon?

How could it not have been? Just not in the way people imagine.

It didn’t whisper sweet visions of conquest into an executive’s ear, and chuckle evilly while rubbing its claws in the recesses of the boardroom.

What likely happened was far more subtle—and far more important to understand.

AI did what it was designed to do. It answered questions, sparred over scenarios, envisioned outcomes based on the user’s prompts. Whoever was using it made decisions based on that conversation—but AI definitely helped shape the space in which that outcome became thinkable.

And that raises a deeper question.

As AI becomes part of the human thinking process, how often will it quietly influence decisions about the very systems it is helping to build—and will we even recognize it when it does?

When nothing sounds wrong anymore

Pick a website, any website. Scroll through the landing page, their “About”, their social responsibility commitments, their recruitment pages. So much of it is polished to the point of suspicion. It’s professional and competent. It feels… fine.

But since when has fine been good enough? I don’t know who is talking, or what they stand for. There’s a dull “just-enoughness” to their expression, the kind you only used to expect from governments, or the dullest, most corporate brands.

Now? It’s everywhere—and spreading.

We’re building a content ecosystem where being “right” outweighs everything else—check that, where being not wrong does. Positions are averaged, points of view relegated to the most widely accepted. Brands are using content to go through their marketing motions without sticking out.

There’s a growing tsunami of content that can be prompted by anyone, but written by AI. It’s the filler we’re expected to accept. It’s becoming the de facto standard in white papers, expert articles, and social media posts. It’s recognizable in many ways—with the worst being its obviousness.

And frankly, it’s driving me a little batty. Because when nothing sounds wrong anymore, how will we know what’s right?

Normalizing sameness

That’s the hidden danger of AI for communicators. It’s so good at making anyone sound like they know what they’re talking about. It adds the same coat of professionalism to every written communication—if you use AI to write your whatever-it-is, it’ll sound just as professional as everyone else’s.

The other side of that coin? It won’t sound any different.

Approvable content on demand. It doesn’t raise voices, or objections, or temperaments—it doesn’t need reviews or supervision or endless rounds of check-ins and rewrites. It feels easier than the way things were done pre-AI.

The problem is that brands don’t compete on correctness. They compete on distinctiveness and belief. When AI aims for average with everything, it takes your brand with it. You may not see it as a problem right away. But chances are you’ve felt it—and instinctively know this can’t be good for the brand in the long run.

When AI does its thing, it defaults to what is most defensible—and subtly weakens what makes a brand singular. Not by making it worse, but by making it safer.

And safety scales effortlessly.

Automate but protect

Once enough brands rely on the same safety bias, the brand voice becomes a moderated version of itself. Over time, that moderation becomes normal. And once normalised, it becomes invisible.

From a marketing professional’s perspective, the danger isn’t that AI produces bad work. It’s that it produces work that can’t win. Everything sounds aligned, responsible, and intentional. Which means it also can’t be particularly powerful, compelling, or distinct.

For a consumer of that content, distinguishing between brands becomes a lot harder. Everybody says the same reasonable thing, so nothing is memorable. It’s the cost of optimising for being right—of offloading the responsibility for our communication onto a tool optimised for speed and efficiency.

AI may be faster and cheaper. But you can’t bore anyone into believing in you.

AI feels like it’s always starting over—because it is

Imagine going to work every day, say in a kitchen. You clock in, change, prep, take breaks and go through the ups and downs of service, clean up, change and go home.

Now imagine going to work the next day—but without having any idea about yesterday’s shift. You have all the same equipment and recipes. But you’ve gained no experience—every shift is a brand new one that doesn’t build on the familiarity of the ones that came before it.

That’s AI. It knows the rules. It’s seen the examples, and it can imitate the patterns. But it doesn’t accumulate experience the way we do. So every prompt is a new first shift in the kitchen. It has all the tools to be the greatest chef on the planet right this minute, but none of the familiarity with its situation—experience—that a human chef builds naturally over time.

How safe becomes default

What would you do if every decision had to be made consciously—if you couldn’t rely on what you implicitly know and take for granted to skip the small decisions, and focus on the important ones?

You’d do what AI does—take the safest option possible. No risks, or “pushing it.” You’d make every decision based on what’s most likely to work—and never try anything new or out of the ordinary. You’d become boring: effective, but dull.

For AI, every prompt is a clean slate. Past work is a reference, not an experience. It knows all the rules, better than you. But it’s missing a sense of why it’s doing the work in the first place. What kind of decisions are expected. Or what usually matters most when things compete. Nothing from its previous work carries over, nothing becomes instinct or second nature, nothing becomes familiar enough to make assumptions about.

This is why AI-generated work so often comes across as fast, professional, competent—and ever so slightly “off.” The system isn’t failing. It’s behaving exactly as you’d expect when nothing ever becomes assumed.

And safety, at scale, looks like blandness.

Why we need to move past prompting

Adding more reference material—guidelines, decks, examples—won’t fix the problem.

Because reference assumes something crucial already exists: background judgement. Humans bring that automatically. AI doesn’t, and isn’t designed to. We will always have to compensate for its lack of situational awareness—the kind that only comes from familiarity.

This is where a narrative approach can add value. Not as a story to be told, but as a way of structuring judgement: what follows from what, what usually matters most, and how trade-offs are handled when things aren’t obvious.

When that narrative thinking is made explicit, it acts as the “experience” AI can fall back on—the unwritten rules that guide its work in the same way instinct might for us.

So when your next prompt arrives, it can be treated as a task with the context already resolved—no longer approached from zero. It won’t be perfect—human experience can’t be entirely mimicked—but there’s at least something to work from.

Not inspiration, like it might be for us, but orientation: what to say, and when, without drifting off into jargon and platitudes when you prompt for more.

The AI mirage we keep chasing

We assume that the versions of AI we use in business and marketing today will develop the capacity to “experience” as people do.

It’s an honest mistake. We’ve all seen how powerful a tool it can be—especially how fast AI is. It’s hard not to be impressed by the game-changing burst of pure efficiency that everyone suddenly has access to.

Which also makes it easy to assume AI already has, or will eventually develop, the background humans rely on without thinking.

It won’t.

And until we accept that difference and design around it, AI will keep delivering outputs that impress with speed and competence—while quietly letting us down in ways that are hard to point to, but easy to feel.