Dave Best, Technical Director, Mile Two
“Minimum Viable Product,” or MVP, is a well-known acronym in the software and product development community. Eric Ries popularized the term in his book The Lean Startup, where he describes the MVP as:
“[...] the minimum viable product is that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort” (https://leanstartup.co/what-is-an-mvp/).
That definition is reasonable, but the reality is more complicated. “MVP” has become a largely unhelpful term. In this article, I will present two ways that it is unhelpful and two mechanisms for mitigating those concerns.
A Lack of Shared Understanding
Shared understanding is one of the most valuable assets of a team, especially when moving quickly. It is inefficient and expensive if your team members are building too much or building the wrong things. The group may be well-aligned on what you are developing, but the ambiguity of “MVP” creeps in around the edges. You may find yourself answering questions like:
● Do we need tests?
● That feature needs to be complete, but what about these related features?
● What sort of user load does this need to support?
● How, where, and which users will be using this MVP?
● Will this become production code? Really? (How many teams have been burned by this one?)
I’ve seen the term MVP used as a stand-in for non-production work or as an excuse for bugs or poorly implemented features. This lack of shared understanding may not be a problem for the disciplined and experienced team, but I’ve repeatedly seen senior teams make poor assumptions.
The Build Trap
The second concern with MVP is the last word in the acronym: “product.” Product implies a level of fidelity much higher than “experiment.” The original definition and the Lean Startup book couch the MVP in terms of the scientific method. MVPs are experiments; they allow the team to test the output from a single cycle through the “Build, Measure, Learn” loop (read more about it here: http://theleanstartup.com/principles).
Shipping can be uncomfortable work. I’ve seen many teams get bogged down in the details of their MVP, doing too much (or the wrong) work because, for many groups, building a product is less daunting than facing the customer.
At Mile Two, we use a handful of terms alongside the stray MVP: experiment, design seed, prototype, mockup, etc. Our early “MVPs” for some projects were simple pen-and-paper or whiteboard exercises. Some relied on mockups designed in Adobe XD. When your business is developing software, you’re going to get some odd looks if you call a mockup drawn on a whiteboard your “MVP.” You’ll get fewer strange looks calling it a “Process Experiment.”
Fundamentally, my problem isn’t with the term itself. It is a placeholder, and it is only effective when supported by shared understanding.
Scales of Fidelity
At Mile Two, we’ve been experimenting with using “fidelity scales” to better understand and communicate the level of effort to be invested in any endeavor. The categories that we use are in flux but include:
● Software Fidelity
● Design Artifact Fidelity
● Project Resilience
● Progress Alignment to Plan
Our goal is to develop consensus around these levels so that every team (and team member) has a shared understanding of the work needed. An experiment that is (for example) a “3” on the software fidelity scale tells the team what level of testing and reliability is expected, and roughly how long the team will work on the iteration.
These metrics connect to the specific way that Mile Two works, but they are adaptable to the processes of other organizations.
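To make the idea concrete, a fidelity scale can be captured as a simple data structure so that expectations are explicit rather than implied. The category name below comes from the article, but the level definitions and code are hypothetical illustrations, not Mile Two’s actual scale:

```python
from dataclasses import dataclass

# Hypothetical level definitions for the "Software Fidelity" scale.
# These descriptions are illustrative only, not Mile Two's actual definitions.
SOFTWARE_FIDELITY = {
    1: "Throwaway spike: no tests, runs on one developer's machine",
    2: "Demo-ready: happy path works, manual testing only",
    3: "Experiment-grade: key paths tested, stable for a user session",
    4: "Pilot-grade: automated tests, basic monitoring, small user group",
    5: "Production-grade: full test suite, observability, on-call support",
}

@dataclass
class Experiment:
    """An experiment with an explicit, agreed-upon fidelity level."""
    name: str
    software_fidelity: int  # 1-5 on the scale above

    def expectations(self) -> str:
        """Return the shared definition of what this fidelity level means."""
        return SOFTWARE_FIDELITY[self.software_fidelity]

# A team kicking off an experiment states its fidelity up front,
# heading off "Do we need tests?" and "Will this become production code?"
exp = Experiment(name="Checkout flow experiment", software_fidelity=3)
print(exp.expectations())
```

Writing the levels down, in whatever form, is the point: the answers to questions like “Do we need tests?” become a lookup rather than an argument.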
Embrace the Experiment
One of the easiest ways to frame the work is to walk through the three parts of the “Build, Measure, Learn” loop backward:
● What one thing are you trying to learn?
● What can you measure so that you will learn what you need?
● What is the simplest thing you can build to measure what you need adequately?
This framing can help you break out of the “build trap” where you over-engineer or over-develop your experiments. The “Build, Measure, Learn” loop is meant to be an iterative process; if you try to learn too many disparate things from a single experiment, you can end up learning nothing at all.
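The three backward questions above amount to a planning template. As one hedged sketch (the field names and example values are hypothetical, not a formal Mile Two artifact), each experiment can be written down as a learn/measure/build triple before any work starts:

```python
from dataclasses import dataclass

@dataclass
class ExperimentPlan:
    """Plan an experiment by walking 'Build, Measure, Learn' backward."""
    learn: str    # the one thing we are trying to learn
    measure: str  # what we can measure so that we learn it
    build: str    # the simplest thing that produces that measurement

# Hypothetical example: note that "build" can be a mockup, not software.
plan = ExperimentPlan(
    learn="Do first-time users understand the new checkout flow?",
    measure="Task completion rate in a five-user hallway test",
    build="Clickable Adobe XD mockup of the three checkout screens",
)
print(plan.build)
```

If the `learn` field needs more than one sentence, that is a sign the experiment is trying to learn too many disparate things and should be split.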
At Mile Two, we believe strongly in iterative development and co-creating the solution with our customers. We bring them into the process early and often; we get feedback on small experiments that advance our (and sometimes, our customer’s) understanding of the problem domain as frequently as possible.
I’m always happy to talk about product development processes or how Mile Two can help you solve your complex problems. Feel free to send me an email at email@example.com.