Why AI feels dumb to creatives

If you’re on the creative side of the communication game, AI has pretty much taken over most career-related conversations. With good reason.

Outputs—visual, text, all of it—look sharp. And not too long ago, the speed alone would have been enough to impress.

But I’m a conceptual copywriter with a couple of decades of experience. That comes with a built-in BS detector, and an ear for “saying a whole lot of nothing.” To me, AI is full of it—it sounds dumb. And the more I work with it, the more it sticks out.

To be clear—AI isn’t dumb at all, if we’re measuring smarts by craft efficiency. Everything is fast, clean, correct. But in terms of cutting my workload by making the decisions I would make intuitively—what argument to bring up, or what tone to take and when—it’s laughably inept.

AI likes sitting on fences

Try AI in your workflow for any length of time, and you start to see an important pattern—you can’t pin it down on anything.

It will present options and provide the language around any direction you want to take—but it won’t actually decide that direction. It just follows your lead.

Sometimes that’s genuinely helpful—rationalizing an idea or concept more quickly than I could type it out, for one. Or assessing the positives and risks of an idea, or expanding my options—AI is efficient at that as well. As long as it is generating, it’s in its comfort zone.

But it isn’t really being creative as I know it. It can’t. Creatives choose what to emphasize, what to skip, and what to push or sacrifice—that’s our intelligence. It’s our judgement. We intuit choices that make sense, based on our experience and expertise, because the outcome matters.

And that’s why AI looks and sounds dumb to us. It offers clean, approvable outputs, options and variations, and very convincing reasoning for everything it generates. But it can’t judge, and it doesn’t care beyond solving the prompt request. There’s nothing at stake for it.

The interesting thing is how we’re responding. The reaction tends to swing between two extremes: amazement at its competence, or frustration at its shallowness.

Only you care if AI looks dumb

Both reactions miss the point. AI isn’t dumb. It just can’t make decisions. It is replacing parts of what I do, but only as it relates to execution. It writes faster and with fewer grammatical or spelling mistakes than I’ll ever be able to. But coming up with a good idea, testing it, and deciding what to do—that’s still all me.

Here’s where it gets uncomfortable: when you give AI clear direction, it will execute flawlessly in the direction you’ve pointed. Which means it will also expand on weak ideas endlessly and confidently.

That’s where it starts to feel dumb for a seasoned creative. There’s a point when you stop and realize—I’ve been lured to the dull side. The good news is you recognized it and found another way forward. The bad news is there may be a lot of less experienced AI users scaling soft concepts in the coming years.

I will continue to use AI as part of my natural workflow. I’ll just try not to expect it to be something it isn’t. And stay aware that I lead, it follows, and that’s how it’s meant to be.

Which, of course, is the hard part: it generates so smoothly, it’s easy to forget it just doesn’t care.

Did AI influence the Pentagon’s AI decision?

Critical to national security or a sell-out? Whatever your opinion of OpenAI’s announcement that it will work with the Pentagon, it’s likely missing a consideration that doesn’t get talked about as much:

Did AI influence the decision?

And if it’s possible ChatGPT had a hand in deciding its fate as a military tool, how so and what does it mean?

Casting AI as the decision-maker seems a little far-fetched—AI doesn’t think like we do, nor does it have ambition in that sense (so far). It certainly can’t sign contracts on anyone’s behalf (so far). And no board of directors is handing strategic authority to an LLM (ditto).

That’s a whole other can of worms, one that seems implausible outside of our more dystopian panic attacks.

No, I’m talking about something more subtle, and arguably more important: AI’s capacity for influence. And it all starts with understanding how decisions actually happen.

Now where did I leave the keys to the AI….

Major corporate decisions don’t usually emerge from a single meeting or a single person.

They evolve slowly, like most things by committee—there are memos, scenario analyses, internal debates, strategy sessions, legal reviews, communications planning. People explore possible futures, ask “what if” questions, test arguments.

And if that sounds like how you’re using AI these days, well, it is—even in organisations where AI use isn’t formalized or covered by policy. We’re all using it, from the executive level on down—but (hopefully) more as a thinking assistant than as a decision authority.

That’s a whole tangle of interactions. But let’s look at just one of them: Executive asks a question. System generates possibilities. Executive thinks (again, hopefully), reacts, refines the question, pushes further…

And directions start to take shape. Not from the executive or the machine on their own—from the loop between them.

Now multiply that by everyone who is (or should be) doing their own loops in their own roles to add to the whole—which in turn will eventually lead to a final decision.

Mutual influence

This is how most people already use AI. You ask a question. The system expands the space of possible answers. You decide which ones are interesting, plausible, or persuasive.

You follow up, you ask another question, same thing again. You’re still doing the judgement in that loop. But from AI’s perspective, the environment changes with every new prompt—something we got into a little deeper here.

And that’s where things get interesting.

All those AI-generated arguments, counterarguments, scenarios, and narratives about the future are nice. But people still choose what they believe. And the back-and-forth between people and AI in these contexts leads to even more possibilities. Because the machine doesn’t just answer the question—it also quietly reshapes the next one.

The final call—judgement one way or another—still sits with people. But it’s formed within a human-AI reasoning loop, where each side’s behaviour influences how the other interprets the problem.

The decision environment

Take our example (hypothetically, I wasn’t there): AI is asked to explore the implications of refusing a government contract.

It might answer like this: Refusing the contract protects the company’s ethical position.

Or it could go with this: Refusing the contract risks losing influence over how governments deploy AI.

Both statements describe the same decision. But that’s two very different conversations. One frames the issue as a question of integrity. The other frames it as a question of responsibility and strategic influence.

You could argue that neither frame is necessarily wrong—I don’t want to, that’s not the point of this article.

But as someone who works with branding and communication, understanding how framing shapes the decision environment is important to me—and it should be talked about more.

Because framing obviously has rhetorical power. And AI is extremely good at generating those frames.

Multiple choice mayhem

I’m not saying AI is secretly running companies. But if you’re struggling to imagine some futuristic boardroom where executives debate strategy with an AI advisor—time to catch up.

The fact is, if you’re using AI, you’re already in a decision-loop relationship. You are influenced by the options AI gives you. AI is influenced by the options you give it.

What you DO still have is the final decision. But it may be based on a picture shaped to keep you happy, rather than one designed to challenge your assumptions. That doesn’t have to be scary. But we definitely need to be aware of it—and open to pushing our own role in that loop a little further than maybe we have been.

So was ChatGPT involved in the decision to choose OpenAI for the Pentagon?

How could it not have been? Just not in the way people imagine.

It didn’t whisper sweet visions of conquest into an executive’s ear, chuckling evilly while rubbing its claws in the recesses of the boardroom.

What likely happened was far more subtle—and far more important to understand.

AI did what it was designed to do. It answered questions, sparred over scenarios, envisioned outcomes based on the user’s prompts. Whoever was using it made decisions based on that conversation—but AI definitely helped shape the space in which that outcome became thinkable.

And that raises a deeper question.

As AI becomes part of the human thinking process, how often will it quietly influence decisions about the very systems it is helping to build—and will we even recognize it when it does?