As an art form, marketing and advertising have been part of human history since antiquity. Earthenware pots in Ancient Athens often advertised a particular artist or shop. Stone tablets and papyrus leaflets from the Roman Empire, advertising everything from fish sauce to brothels, have been found all across Italy. Interestingly, it wasn’t uncommon for popular gladiators to endorse certain products before and after their bouts in the Colosseum. And of course, it only takes a sweaty trip to Colorado’s Renaissance Festival to see modern cringelords re-enacting an age-old form of marketing: hawking.
While marketing, the art form, is nothing new, marketing as a science (that is, efforts that can be measured and tested) is a product of the Industrial Revolution of the late 19th century. Mass production of consumer goods created the need to understand the most effective ways to get those products into the hands of the masses. Until very recently, however, marketers had no great sense of how effective their efforts actually were, beyond saying, “I dunno. I think we’re doing good!” As technology has improved, so too has our understanding of effective marketing measurement.
One Channel Makes Attribution Easy!
For the majority of human history, unless you were standing outside of your shitty store, yelling at anyone walking by, your only option for advertising was print media: typically newspapers, magazines, and posters. Through the 19th and early 20th centuries, most companies that advertised this way did so in a small handful of publications, and the majority stuck to a single magazine or newspaper. This made attribution incredibly easy. Sales up? Ad good. Sales down? Ad bad.
This methodology had obvious issues. As soon as your product or company appeared in more than one publication, it was impossible to tell which channel offered the biggest ROI.
New Channels, Same Problems
The next massive boom for print advertisement came in the form of billboards. As Henry Ford was churning out his Model T (while muttering some anti-semitic bullshit, probably), advertisers saw outdoor advertisement as the way to go. The 1920s also saw a great migration into urban centers, which meant considerable foot traffic in cities. Unfortunately, marketers gauged a billboard’s effectiveness entirely on how many people might have seen the ad, based on estimates of the road and foot traffic passing its placement.
TV and Radio Try, Fail To Measure Attribution
In the early days of television and radio, marketers still only really cared about mass distribution; that is, getting the product in front of the largest audience possible, regardless of how likely that audience was to buy it. They measured this primarily using Nielsen Ratings Points. Much like the rough estimates of billboard views, Nielsen Ratings Points were themselves only rough estimates, since only about 0.2% of households were actually measured. Predictably, as the number of channels increased, the problem of accurate attribution became increasingly apparent.
The Early Internet
Companies continued to rely on sheer numbers even with the advent and proliferation of the internet in the 1990s. Early banner ads were sold and tracked using “CPM,” or “cost per mille”: cost per thousand impressions. Analysts considered the number of impressions an ad received when determining its effectiveness. This measurement also had its limitations: CPM only measured impressions, not whether anyone actually clicked on the ad.
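The arithmetic behind CPM is trivial, which is part of why it stuck around so long. A minimal sketch (the dollar figures and impression counts here are purely hypothetical):

```python
def cpm(cost: float, impressions: int) -> float:
    """Cost per mille: what the advertiser pays per 1,000 impressions."""
    return cost / impressions * 1000

# Hypothetical banner buy: $500 for 250,000 impressions
print(cpm(500.00, 250_000))  # 2.0, i.e., a $2.00 CPM
```

Note what’s missing from that formula: clicks, leads, and sales appear nowhere in it, which is exactly the limitation described above.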
To understand the persistence of impressions as a measure of marketing effectiveness, it’s important to remember the mindset marketing professionals have traditionally held. For years, marketing teams only really cared about the raw number of people they delivered to their sales teams. The logic was to stuff the sales funnel with as many leads as possible, regardless of their quality. All leads were considered equal, no matter how informed, interested, or engaged a lead actually was.
From Quantity to “Quality”
The internet moves fast; we all know this. As our connections sped up, and our tolerance for bullshit popups dropped, marketers devised clever ways to understand where their traffic was coming from. Beyond that, marketers began understanding not just where people were coming from, but also how qualified those leads were. Such was the birth of the MQL, or the “Marketing Qualified Lead.” The idea was that you could predict how “good” a lead was based on any number of factors. Some of these factors could include:
- Source: “We know that 75% of leads that come from Facebook will schedule sales demonstrations, so we’d consider any lead from Facebook an MQL.”
- Demographics: “We know that our product is generally suited for people making between $40,000 and $60,000 a year, so anyone with this reported income would be an MQL.”
- Job Title: “Our product works best for software developers, so anyone with a job title of ‘software developer’ who interacts with our site will be an MQL.”
- Site Activity: “If someone visits 6 or more pages on our site before filling out a form, they are highly engaged, and therefore are an MQL.”
Life was great! Rather than focusing on selling a specific product to everyone, marketers could finally focus on selling the right product to the right person. Marketing teams could start digging deep into historical data to figure out which kinds of leads actually converted.
Finally, a Full-Funnel Approach
Well, as they say, too much of a good thing can be really fucking terrible, and that includes a reliance on MQLs. What happened? Well, managers started gauging marketing success based entirely on the number of MQLs delivered. That sounds an awful lot like… oh shit, we’re back to the quantity versus quality problem again.
Indeed, what tends to happen to marketing teams that are judged entirely on the number of MQLs delivered is that the quality of those leads starts to drop. Look, if I’m told I need to deliver a certain number of leads before my yearly review, you better believe I’m going to hit that number, no matter what, even if it means moving the goalposts on my MQL definition. Then you fall out of alignment with your sales team, they start complaining that the leads you’re delivering are trash, and we’re back to square one.
Unfortunately, plenty of marketing teams are still stuck in the above scenario. However, growth marketing teams have taken a bit of a different approach to this problem, by simply moving a marketing team’s KPIs down the funnel a bit. While MQLs still play a part (I guess?), what we actually care about is the amount of revenue our marketing efforts are generating. Tying a marketing team’s success to the amount of revenue generated ends the argument between sales and marketing immediately, because they’re both fighting for the same goal. If everyone cares about revenue, then everyone can work cohesively towards larger company goals.
Is this an easy transition? Of course not, and it takes a lot of communication and trust between sales and marketing teams. Is it a necessary transition? While there might not be a crystal-clear answer to that question, think of it this way: if we have the data and the means to track leads all the way through the funnel (and beyond), why wouldn’t we? If we can actually work hand-in-hand with our sales brethren, why wouldn’t we?
Like I said, however, technology moves fast, and as our ability to understand our customers improves through technology, so too will our strategies to reach and influence them.