DisAImproving marketing — Kelford Labs Weekly

You can’t improve without a target.

Mar 17, 2026

My hometown is built on a hill.

One of the points that hill comes to is the Halifax Citadel, overlooking downtown.

The Halifax Town Clock on the side of Citadel Hill.

The Citadel was a common grade school field trip destination, and one of the few things I remember from those visits recently came to mind:

To level the hill and place a fort atop it, they removed about 10–12 meters of height, or more than 32 feet.

Because flattening removes depth.

This came to mind last week when the New York Times published their quiz to see whether people can tell the difference between AI and human writing.

Obviously I was able to tell the difference (because, you know, taste), but what shocked and horrified me was how the NYT decided to frame the result:

In case you missed the outrageous bit, this is the New York Times saying that famous-for-his-perfect-prose Cormac McCarthy (author of No Country for Old Men) had made “mistakes” in his punctuation, something AI would not do.

They’re saying original writing is a mistake. A defect. And that AI can help smooth it out.

Better to say, I think: AI will help you flatten it.

This is the thing about AI: It doesn’t know what good is, only what others have said is good, and only what hews closely to the average, the obvious, the slick and smooth and flat.

As if great writing is what doesn’t make you think.

But the thing is, AI will flatten anything if you ask it.

Here, let me show you how this works:

To test it out, you need to be able to find a phrase that is already good but that the AI isn’t trained on. For instance, if you ask it to improve William Shakespeare or Toni Morrison, it might just tell you it can’t, because they’re already great and it knows it. Because it’s been trained on all their writing, and all the writing about their writing, thoroughly and infringingly.

Luckily, though, I have a bookshelf full of out-of-print and rare tomes that contain perfect but unpopular writing.

One of my favourite phrases contained within is this one, about a Civil War general:

It was said he would “Flank the devil, and make heaven in spite of the guards.”

This is a fairly obscure phrase, with only a few pages of Google results (typically with my own post using it at the top). So the LLMs don’t really know this phrase, at least not well.

So they’ll be happy to help “improve” it.

I gave Claude Opus 4.6 (in the opinions of many, including me, the best model for writing) the challenge: “Improve the phrase, ‘Flank the devil, and make heaven in spite of the guards’”

What it didn’t do is what any human writer would do: Tell me it’s already darn good and almost certainly good enough.

No, it told me that these constituted improvements:

  • “Flank the devil and seize heaven — damn the guards.”
  • “Outmaneuver hell. Build heaven. Let the guards watch.”
  • “Flank the devil, and take paradise through the gates.”

Notice how those rewrites are barely different from the original text and yet, somehow, entirely worse.

But why does AI do this? Why will it confidently and happily offer edits to just about anything, even things that are already good?

Well, think about how these systems are trained:

First, they ingest the entire corpus of the internet (yes, including the worst and most vile things ever uploaded) and are trained to reproduce what they’ve seen before. This is called “pre-training.”

So let me ask you this: In all the internet, which is more common: Feedback that says things could be better, or feedback that says things are good to go?

Exactly: Most feedback given offers advice, not confirmation. So that’s what the LLM learns to reproduce.

From there, it goes into a stage of Reinforcement Learning from Human Feedback (RLHF). This is where underpaid contractors around the world rate the responses of the LLM to tune its output to the AI labs’ business and customer goals.

So let’s say you’re tasked with reviewing who-knows-how-many AI outputs per hour. When you see an output that gives advice and one that says the piece is good to go, which do you think you’d be more likely to rate highly?

Obviously the one that gives advice, right? The heuristic for feedback is almost always going to be, “Was there any?”

You can think about it this way: LLM systems have no internal sense of taste; they have only what they’ve been trained to simulate.

It’s like trying to learn whether something is good by only ever seeing the mold and never the model.

But notice how this isn’t really an AI problem. It begins as a human one. When we ask someone to help us improve something, they’re likely to try. To say something, anything, to feel useful.

Even if they’re actively disimproving it.

AI just exacerbates the problem, churning out feedback round after feedback round in a vicious and endless cycle. Documents balloon, offers become opaque, value gets buried in verbiage.

And every single pass is a slickness, a flattening, of what was already good and already there.

And that flattening removes depth.

I mean, you see it, right? Go to LinkedIn or watch some YouTube or TV ads and count the occurrences of:

“It’s not X—it’s Y.”

“We’re for x. For y. And for z.”

“Imagine a world where a isn’t the b, it’s the c.”

These are generic, flat, but smooth phrases that roll off the tongue and right out of your mind the moment you read them. They get repeated and reused because they’re what everyone else is doing, and following the crowd feels right even though it’s wrong.

But they have nothing to stick to because they don’t say anything. And they have no ability to stick because they aren’t barbed with novelty, those elements that make messages noticeable. Memorable.

As David Ogilvy once wrote: “Any fool can write a bad advertisement, but it takes a genius to keep his hands off a good one.”

So if you want better marketing, and AI isn’t necessarily a good sounding board, what is? Who is?

You are.

Because once you know the target, you can assess and improve your aim. But so long as you’re trying to abstractly “improve” your wording, your writing, your marketing, AI and even humans will almost always only flatten and smooth it out.

But more than 20 years of marketing, measuring, and messaging for hundreds of clients across thousands of ads, social posts, blogs, and newsletters, have taught me this:

Marketing that works is marketing that’s aimed correctly. At the right target, at the right distance.

For marketing to work, we have to know who it’s aimed at.

Because if we don’t, how could we ever know what to say to them?

If you’re struggling to improve your marketing, start by asking yourself if you know the target, the distance, the objective.

Otherwise, when you ask for advice, from a human or an AI, they might just make it smoother, instead of making it richer.

When what we need is the opposite: the texture and novelty that make messages memorable, not the smooth clichés we’ve heard before.

Are you ready to start making marketing messages that get noticed, remembered, and repeated?

That’s what the Marketing Rangefinder is for. Next week, I’m going to show you how to use it.


Kelford Inc. is the marketing team that’s never at a loss for words. If you’re struggling with what to say and where to say it to attract ideal clients, we’ll show you the way.