The usual metaphor for a startup is the iterative experiment: you start with a new idea and keep iterating on it until it works or you run out of money. Let's take that metaphor seriously. What do the conceptual tools of optimization tell us about how to design the startup process?
The basic setup is a familiar one. Most startup ideas, no matter how nicely formatted their pitch deck, are unavoidably uncertain: if you had good ex ante information about how an idea would work, it wouldn't be a startup (and it wouldn't have the possibility of high returns). What they describe is a space of possible startups with a priori unknown potential value.
Iteration in the startup world means testing a relatively simple case and then trying new versions based on how well they do. Sometimes it works; sometimes it doesn't.
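As a toy picture of what blind iteration looks like, here's a minimal sketch; every number and function in it is invented for illustration:

```python
import numpy as np

# Toy picture of blind iteration (all constants made up): each candidate
# version of the startup has a hidden value, and without a model you just
# keep drawing from that distribution until something clears the bar or
# the runway ends.

rng = np.random.default_rng(42)

def startup_value(version: np.ndarray) -> float:
    """Hidden landscape the founders can't observe directly."""
    return float(np.exp(-np.sum((version - 0.7) ** 2)) + rng.normal(0, 0.1))

RUNWAY = 8  # iterations you can afford before the money runs out
for i in range(RUNWAY):
    candidate = rng.uniform(0, 1, size=2)  # pick the next version more or less blindly
    value = startup_value(candidate)
    verdict = "it works!" if value > 0.9 else "it doesn't."
    print(f"iteration {i + 1}: value = {value:.2f} ... {verdict}")
```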
If startup potentials were fully random there wouldn't be much more that we could do. But in practice they aren't: even the most innovative proposal consists mostly of a well-understood concept (an existing business model, product, etc.) supporting an untested, potentially valuable idea. Every iteration of a startup, and every startup attempting a similar idea, gives us information about both aspects: how well the startup does what we already know how to do, and how well the new idea works. The more innovative your idea, the more you should focus on the second question. If you know that a so-far untested idea works, you've turned a startup bet into something closer to an implementation bet; but because you're still one of the first ones exploiting the idea, you get something closer to the returns of a startup with the risk of a repeat business. It's not sure money (there's no such thing), but you've passed the riskier part of the path.
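To make that split concrete, here's a minimal sketch of the underlying statistics, assuming the observed outcome decomposes additively into a well-characterized baseline plus an unknown innovation effect plus Gaussian noise (all constants invented): subtracting the baseline lets every experiment update a belief about the innovation term alone, via a standard conjugate Gaussian update.

```python
import numpy as np

# Minimal sketch (all numbers hypothetical): a startup's observed outcome is
# a well-understood baseline plus an unknown innovation effect plus noise.
# Because the baseline is already characterized, each experiment can be spent
# updating our belief about the innovation term alone.

rng = np.random.default_rng(0)

KNOWN_BASELINE = 1.0    # performance of the well-understood parts
TRUE_INNOVATION = 0.3   # unknown to us; what we're trying to learn
NOISE_STD = 0.5         # measurement noise per experiment

# Gaussian prior over the innovation effect: mean 0, wide uncertainty.
mu, var = 0.0, 1.0

for i in range(10):
    outcome = KNOWN_BASELINE + TRUE_INNOVATION + rng.normal(0, NOISE_STD)
    evidence = outcome - KNOWN_BASELINE  # subtract what we already know
    # Standard conjugate Gaussian update of (mu, var) given one observation.
    precision = 1 / var + 1 / NOISE_STD**2
    mu = (mu / var + evidence / NOISE_STD**2) / precision
    var = 1 / precision
    print(f"experiment {i + 1}: belief about innovation = {mu:.2f} ± {var**0.5:.2f}")
```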
When experimentation is cheap — when you have the money to run many large experiments for a relatively long time — sample efficiency isn't your main bottleneck. But less funding means less time and scale to gather experimental information about your idea, which means you have to increase sample efficiency. That's not the same as "Record All The Things": every dollar, day, or developer hour you spend recording data that isn't informative about your startup's key hypothesis is a net negative, because it's one dollar, day, or developer hour less of relevant information.
So the first and most important fact for new startups in the current environment is that, because startups are by definition iterative experiments but you now have fewer samples to work with, those samples have to be carefully designed to maximize the usable information about the real question, which isn't really about the startup but about the new idea.
Here, to paraphrase Westheimer's famous quip, a couple hundred thousand dollars of seed money can save a couple of weeks of modeling and simulation. Most founding teams are familiar enough with the state of the art they are trying to surpass that they can build, with specialist help, a model of their idea that disentangles the known aspects of performance from the unknown ones. That is, they can conceptually isolate the innovation they are proposing and relate it quantitatively to the overall system.
This means you can build a map where you know how the value of a startup changes as you move from x0 to x1 — e.g., the impact of choosing one of two known manufacturing methods — but not how it changes as you move from x0 to x2 — e.g., the impact of using a certain AI to generate product designs.
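A hypothetical sketch of such a map, with invented names and effect sizes: the manufacturing axis (x0 to x1) has effects known from industry data, while the AI-design axis (x0 to x2) stays blank until an experiment fills it in.

```python
from typing import Optional

# Hypothetical map of the design space (all names and numbers invented).
# The manufacturing axis has effects known from industry data; the
# AI-design axis does not.

KNOWN_EFFECTS = {
    "manufacturing_method_A": 0.00,  # x0: the baseline
    "manufacturing_method_B": 0.12,  # x1: effect known from industry data
}

def predicted_value(base: float, method: str,
                    ai_design_effect: Optional[float]) -> float:
    """Predict value; ai_design_effect stays None until an experiment measures it."""
    if ai_design_effect is None:
        raise ValueError("moving along the AI-design axis needs an experiment")
    return base + KNOWN_EFFECTS[method] + ai_design_effect

# Moving x0 -> x1 can be evaluated on paper:
print(predicted_value(1.0, "manufacturing_method_B", ai_design_effect=0.0))

# Moving x0 -> x2 is exactly where the experimental budget should go:
try:
    predicted_value(1.0, "manufacturing_method_A", ai_design_effect=None)
except ValueError as e:
    print(e)
```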
This sort of modeling work doesn't tell you what to build or how well it will work, but on the other hand it reduces, sometimes exponentially, how much information (and hence time and money) you need to acquire to find that out.
In other words, instead of iterating blindly, spreading your experiments across every dimension of the business at once, you can concentrate your iterations along the dimension you actually need to learn about.
The latter has the same number of iterations — the same cost — but it's enormously more effective, because you're choosing them to be informative about what you need to know in order to validate the idea, not what you already know about the industry.
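A small simulation, under invented assumptions, of why the second mode wins: with the same number of iterations, letting the known parts of the business vary freely confounds every sample, while holding them fixed spends the whole budget estimating the one term you actually need.

```python
import numpy as np

# Hypothetical comparison (all variances made up): N iterations either vary
# everything at once or hold the known parts fixed and probe only the
# innovation. Same cost, very different information about the unknown term.

rng = np.random.default_rng(1)
N = 20
TRUE_INNOVATION = 0.3

def run(isolate: bool) -> float:
    """Return the spread of the innovation estimate after N iterations."""
    estimates = []
    for _ in range(1000):  # Monte Carlo over many simulated startups
        # Known-part variation: zero if we deliberately hold it fixed,
        # otherwise it shows up as extra noise confounding every sample.
        known_noise = 0.0 if isolate else rng.normal(0, 1.0, N)
        samples = TRUE_INNOVATION + known_noise + rng.normal(0, 0.3, N)
        estimates.append(samples.mean())
    return float(np.std(estimates))

print("varying everything:", run(isolate=False))  # wide error bars
print("isolating the idea:", run(isolate=True))   # same N, far tighter
```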
There's a downside, as usual: this strategy maximizes the rate of learning about the innovation, and therefore the investors' hit ratio per dollar (to "fail fast" you have to learn fast, which means higher sample efficiency, which means being very explicit about what unknown thing you're testing and setting things up to maximize the information return from each iteration), but it doesn't maximize the survival chances of any particular startup, and it certainly makes for a less gripping pitch. You aren't raising money to build The Next ACME over the next few years; you're raising money to verify in a few months whether a satellite-driven ad-targeting algorithm is economically efficient. The attempt may become a company itself, or it may not; it could fail even if the algorithm itself works well. But if the algorithm works well enough, then capital allocation at larger scales becomes much easier, and even opens other options. You answered the important question, and you did it for cheap.
The approach is one hardware engineers are familiar with: the MVP of a new engine technology isn't the simplest plane that might fly. Rather, it's a scale model of the engine that's not designed to ever leave the ground. Prototype planes, like successful startups, grab hearts, headlines, and funding rounds, but advances in modeling and simulation make it easier every year to move more of the analysis to before you build anything — a sort of sanity check — and then to isolate the innovation in order to test it as quickly and thoroughly as possible with the available budget.
The first startup implementation of an idea doesn't need to be successful. It's good if it is, but what it has to be is informative... and informative about the right thing. Funds (and founders) are likely to move to a more deliberate and quantitative startup design process at first simply as an adaptation to more constrained resources, but the long-term advantages are even more significant, and will change not just the way projects are presented and evaluated but also the dynamics of the investment ecosystem itself.
Post-credits scene
- The simplest way to think about this is in terms of types of A/B testing. Let's say you're launching a startup for VR e-commerce: testing design variations for the landing page might help your numbers, but it won't tell you whether your fundamental VR e-commerce technology works, and that is the bet you have to resolve with whatever time you have (see the sketch after these notes). Premature optimization is the root of running out of money without learning if the basic idea works.
- It's an error to compare yourself only with the large incumbents; it can make you believe that most of what you're doing is new, and therefore that you need to build the whole thing to find out if you have a good idea on your hands. Find the closest existing competitor you have, including other startups, isolate what the difference is going to be — the smaller and easier to measure the better — and then figure out the simplest experimental setup that will let you test it. That may not even require a whole company!
- If you're doing something interesting along these lines, drop me a note.
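To put a number on the A/B-testing note above, here's a hypothetical power calculation (the standard two-arm normal-approximation formula; the session budget and number of variants are invented): splitting a fixed budget across many cosmetic variants inflates the minimum effect each test can detect, while concentrating it on the core hypothesis resolves the bet that actually matters.

```python
import math

def min_detectable_effect(n_per_arm: int, sigma: float = 1.0,
                          z_alpha: float = 1.96, z_beta: float = 0.84) -> float:
    """Smallest true difference a two-arm test reliably detects
    (two-sided alpha = 0.05, power = 0.80, normal approximation)."""
    return (z_alpha + z_beta) * sigma * math.sqrt(2 / n_per_arm)

BUDGET = 2000  # total sessions before the money runs out (made up)

# Ten landing-page variants split the budget ten ways, and even a clear
# winner says nothing about whether VR e-commerce works at all:
print(f"per-variant MDE (10 arms): {min_detectable_effect(BUDGET // 10):.2f}")

# One test aimed at the core hypothesis gets the whole budget:
print(f"core-bet MDE (2 arms):     {min_detectable_effect(BUDGET // 2):.2f}")
```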