Timing is everything.
Good ideas launched at the wrong time are still good ideas. The Kozmos and Webvans of the late 1990s promised fast grocery delivery—and failed spectacularly. But the idea of groceries on demand was sound: today, we have Instacart and FreshDirect and others that deliver groceries to millions of customers within hours.
It’s not just groceries, of course. Plenty of thriving businesses today have ancestors that died young: digital music, pet supplies on subscription, digital currency. Cause of death varies: everything from immature tech to bad product design to subscale distribution. What they have in common is that they all launched at a time when they could not generate enough demand to survive.
Here is one thing that hasn’t changed in a couple of decades: new products continue to launch—and fail. How in the world do the people launching them not know that no one wants them? Why don’t they wait until they are confident about demand? Why is timing so hard?
Conventional wisdom about when to launch goes something like this: Ask customers if they are willing to buy your product. If they say yes, launch.
Sounds right in principle. In practice, however, asking customers about readiness to buy is a long, fraught process that rarely gets to a satisfying answer.
The main problem? The asking.
Most research about market readiness is an extended series of questions delivered via focus groups, surveys, panels, discrete choice analysis, and other familiar formats.
Here’s the problem with asking us whether we are ready to buy a product or not: what we say has little to do with what we do. When we are in a focus group or taking a survey or weighing trade-offs in a conjoint analysis, we are imagining what we would do in our ordinary course of business. But we are not actually doing it.
It’s why conventional market research is rarely predictive*. It’s not based on real-world behavior; instead, it’s based on our guesses and opinions about how we will behave. So how do we know when to launch?
*But it’s not the only reason: confirmation bias, sampling issues, and question design also play a part, to name just a few.
It’s not exactly a secret that conventional market research is not that great. As a result, many of us just launch anyway and try to make it a win.
Fixing the plane while flying it is not as bad as it sounds. After all, the incentives to make it work are pretty strong.
The whole Lean Startup movement is more or less based on this philosophy, which arguably works better for easily updated intangibles like software than it does for physical products. (Indeed, new analysis from Stray Partners looks at failure rates of ‘fast-moving consumer goods’ in Europe. It’s pretty grim: based on 12 years of data, which is a lot, only 4.1% (!) of new products are still in the market after five years.)
Reid Hoffman has famously said that if we’re not embarrassed by the first version of our product, we’ve launched too late. So if we just go for it, we’ve got plenty of company.
We can do better.
There is something powerful about relying on intuition, taking a risk, and just putting your product out there. That’s how big innovation often happens. But there is also something pretty amazing about not betting your entire wad on something the market doesn’t want or isn’t ready for.
How can we solve for both? Can’t we time the market?
If you ask techy marketing types, they will tell you that AI is going to crack the market research problem. Already you can use AI tools to generate survey responses from ‘synthetic personas.’ Pretty cool.
The problem is that AI is just replicating an approach that isn’t that great in the first place.
Most market research methods that are delivered online today were invented in the pre-internet era. Even techniques like discrete choice (conjoint) analysis that seem digital-native were delivered on cards in the 1970s.
Putting surveys and panels online didn’t make them more effective as research tools; in fact, they are arguably less effective because of the ease with which they can be manipulated and gamed. And putting research tools online didn’t make them better at predicting people’s behavior; it just delivered the responses a little faster.
Similarly, using AI to replicate human responses to conventional market research doesn’t improve the results, especially when the synthetic responses are themselves based on existing research; it just means we don’t have to deal with those pesky humans. And synthetic respondents tend to lack the emotion and lived experience that make real humans unpredictable. The world of AI is not exactly full of surprises.
Here at S9, we answer three questions for our clients:
We find that most of our clients, whether they are launching a new product or repositioning a brand, spend a lot of energy on the first two questions and less on the third. But we know how important market readiness is to success.
How do we get answers about market readiness? Advertising.
Clicking on an ad is a better indicator of intent than filling out a survey. It’s behavioral, and it takes place in real life. It’s sort of a merger between the let’s-do-launch ethos of Lean Startup and the measured studiousness of the survey folks.
Why? Because marketing first—before you launch—means you collect data on all the things that can sink a launch: Which features matter most? Which audience wants the product most? Which positioning is most effective? Marketing first allows lots of room for fine-tuning.
More importantly, running tiny but statistically valid marketing tests before launching measures real-world demand—that critical piece of data that prevents spectacular face-plants due to bad timing.
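To make that concrete, here is a minimal sketch in Python of what a ‘tiny but statistically valid’ test could look like. The positionings, audiences, impression counts, and click counts are invented for illustration, and the math is a standard two-proportion z-test on click-through rates; this is a sketch of the general idea, not a description of S9’s actual tooling.

```python
from itertools import product
from math import sqrt, erf

# Hypothetical pre-launch ad test. Variant names, impressions, and clicks
# below are invented for illustration only.
positionings = ["saves you time", "saves you money"]
audiences = ["new parents", "frequent travelers"]

# Pretend results after a small, equal spend on each positioning/audience cell:
# clicks observed per 12,000 impressions, keyed by (positioning, audience).
clicks = {
    ("saves you time", "new parents"): 96,
    ("saves you time", "frequent travelers"): 44,
    ("saves you money", "new parents"): 61,
    ("saves you money", "frequent travelers"): 39,
}
impressions = 12_000  # per variant

def two_proportion_ztest(c1, n1, c2, n2):
    """Two-sided z-test for the difference between two click-through rates."""
    p1, p2 = c1 / n1, c2 / n2
    pooled = (c1 + c2) / (n1 + n2)                         # CTR under the null
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

# Rank the grid by observed CTR, then ask whether the top two really differ.
ranked = sorted(product(positionings, audiences),
                key=lambda v: clicks[v] / impressions, reverse=True)
best, runner_up = ranked[0], ranked[1]
z, p = two_proportion_ztest(clicks[best], impressions,
                            clicks[runner_up], impressions)
print(f"best variant: {best}, runner-up: {runner_up}")
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p-value suggests the gap is not just noise
```

In practice you would also decide in advance how big a difference in click-through rate is worth detecting and size the impressions accordingly, but the principle is the same: the go/no-go signal comes from what people actually clicked, not from what they said they would do.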
You don’t have to do it our way, but we do hope you will heed the advice of the Pets.com sock puppet from 1999: think about unconventional ways to prove out market timing. Otherwise, your big new idea may turn out to be just a plain old sock.