Dario Cavegn

Testing beats opinion, but nobody wants to do it

Galileo Galilei and his Florentine focus group
He didn’t. Good on him.

Testing is easier than ever before, but still extremely unpopular. We ask why, and explain how personal fancies wreak havoc on business.

Tracked advertising is more than 130 years old. The chief difference between the likes of the Sears catalogue in the late 1800s and the kind of testing we can do now is speed.

Today, you can wake up with an idea in the morning and, come lunchtime, have a pretty good idea whether or not it will work at scale.

The internet, and the fact that almost everything can be tested in real time these days, make that possible, and we would be utter fools not to use both to the absolute maximum extent.

Yet hardly anyone seems keen to do it. Why is that?

Opinion isn’t the same as testing

First off, let’s look at what we usually replace testing with. Where we’re talking about the search for truth in marketing and advertising, we’re basically looking at three domains. They are research, qualified opinion (aka hypothesis), and testing.

They often get mixed up.

For example, unless a questionnaire is properly analysed and put in context, it is just a collection of statements. Since statements in marketing questionnaires typically refer to preferences and emotions, in their raw form they are no more than the collected opinions of the respondents.

They become research as soon as they are qualified, mostly by statistical and comparative means.

Focus groups are similar. A company pays to find out what a selection of individuals think about an issue, service, product, company, activity, you name it. Again, the best you can hope for here are qualified opinions because, unlike with a questionnaire, you’re not even relying on a representative sample to project your insights onto a larger population.

On the contrary, you’re doing the exact opposite. You’re trying to infer, from the reactions of a much smaller group of people you’ve decided are representative of your population, what most individuals would think in a particular situation.

This is fraught with problems. For instance, the participants in a focus group are selected by the people setting up the experiment. This means they come with a couple of biases that are almost impossible to overcome, including confirmation bias (the researchers pick the people they want for the result they’re working towards), availability bias (the same people keep getting invited), peer pressure, recency effects, and so on.

Don’t get me wrong, focus groups do have their use. However, they cannot be substituted for testing because they rely on biased opinions rather than fact.

Finally, we have that veritable cesspool of opinion, the corporate mid and upper-level management meeting. Not only are those meetings a bonfire of the vanities, but they also tend to festoon the opinions expressed with the mantle of competence and qualification.

In other words, random opinions suddenly carry the weight of expert-checked fact simply because of the position of those who express them.

Funnily enough, the more powerful the people, the more absurd the outcomes get. For example, in the latter half of the 2010s Lufthansa decided to ditch the colour yellow from its brand. It later reintroduced it in many places, but the airline’s plane livery still expresses the sentiment of its chief marketing officer at the time, who, and that is an actual quote, thought that “In the digital age, we need to look more digital.”

Go figure.

Testing produces facts—and not everyone likes facts

A counterintuitive truth about today’s marketing and advertising industry is that even though we have more ways to test than ever before, we don’t like to use them.

There are lots of different opinions out there about why that is. My two cents’ worth is perhaps more on the cynical side: I’ve come to believe that marketeers and advertisers don’t like to test because tests make their work look bad.

Or at the very least, they force them to revisit and improve their darlings, which nobody really loves to do.

Testing comes with the risk of finding out that your last couple of brilliant ideas and beautiful designs simply don’t work. And so most marketeers eschew it.

In the case of agencies selling expensive creative services, tests mean the risk of the customer finding out that a 25,000-euro combined direct mail and email campaign produces many times the sales their Clio-winning, 250,000-euro TV and video ad will ever manage to conjure up.

In short, to many marketeers and advertisers, testing would mean the tide going out, and suddenly being able to see who’s been swimming naked.

Facts aren’t always comfortable for stakeholders on the client side, either. For instance, if tracking the conversion of leads into sales suddenly brings to light that the sales department never really follows up, people are acutely embarrassed.

It’s a funny thing, but in those moments, as a direct response marketeer, you often feel perversely like Galileo going up against the Roman Inquisition. Even with the facts on your side, there is no way to prevail, because what you present threatens the very position and self-image of the people you’re working for.

Losing pitches because you propose to measure

More than once, we have actually lost a pitch because we proposed to measure.

In some cases, the reason, we were told, was that in the opinion of their marketing department, our approach wouldn’t be a good fit for how they do things.

And quite right, it likely wouldn’t have been. They would have been found out.

On another occasion, the discussion turned to maximising customer lifetime value and how, to be effective, we would need to review the company’s current efforts. Suddenly, how much money they were making per customer over the entire duration of the relationship wasn’t a priority anymore.

Testing can be very, very painful.

The benefits by far outweigh the complications

I’ve been an advocate for direct response principles—contact prospects directly, immediately work on building a relationship, and constantly test and optimise—for well over a decade. Whenever I was free to apply them, the only way I’ve ever seen my customers’ results go is up.

Testing produces real insights. It produces facts. It emphasises the priority of reality over wishful thinking.

It also allows you to test every aspect of your ad creative in detail: arguments, benefits, appeals, headlines, design, colours, and language.
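What that looks like in practice is usually a simple split test: show two variants to comparable audiences, count conversions, and check whether the difference is larger than chance would produce. As a minimal sketch (the headline names and conversion numbers below are hypothetical, purely for illustration), a two-proportion z-test is enough to separate a real winner from noise:

```python
from math import erf, sqrt

def ab_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test: is variant B's conversion rate
    genuinely higher than variant A's, or just noise?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis (no real difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # One-sided p-value from the standard normal CDF
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))
    return p_a, p_b, z, p_value

# Hypothetical numbers: headline A converted 40 of 2,000 viewers,
# headline B converted 70 of 2,000.
p_a, p_b, z, p = ab_test(40, 2000, 70, 2000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.4f}")
```

If the p-value comes out below your chosen threshold (0.05 is conventional), the better variant wins on fact, not on whoever argued loudest in the meeting. That is the whole point: the test settles the question the focus group and the management meeting can only opine about.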

As Ricky Gervais often says: The truth doesn’t hurt. On the contrary, it is the only thing that will allow you to consistently do better.

Pair up your superstars with a direct response geek—I promise you, you won’t be disappointed.
