Smart Data Systems
If you lead an organization that must estimate software development projects, you already know that estimating software is not easy. In fact, the entire topic is often a contentious one, whether the tension comes from internal stakeholders or clients who want to know what will be delivered and when, or from a software development team that is hesitant to produce estimates stakeholders will later use against them. Much of this frustration, and many of the challenges surrounding estimating this type of work, stems from a belief that software developers should be good at estimating in the first place.
Don’t get me wrong, we humans are amazing creatures in so many ways. We have the capacity to solve complex problems and to make technological advances that have made our lives easier and more enjoyable. And yet, despite all those achievements, we still fall prey to flawed thinking, and in general we are not great at estimating complex challenges. While not all software projects are complex, many are, and one very real reason estimates go wrong is how we think about complex problems. Becoming more aware of where that thinking can go wrong is the first step toward doing better.
A great book on the process of thinking that highlights these basic mistakes is Thomas Kida’s “Don’t Believe Everything You Think: The 6 Basic Mistakes We Make in Thinking”. While not specifically about software estimating, it provides some very interesting insight and research on why most of us make common mistakes when applying critical thinking.
The six basic mistakes are:
- We have faulty memories
- We tend to oversimplify our thinking
- We sometimes misperceive the world around us
- We rarely appreciate the role of chance and coincidence in shaping events
- We seek to confirm, not to question, our ideas
- We prefer stories to statistics
One common challenge for software teams is what is often called “the problem of perfection”: the pressure to provide “perfect” estimates. Most good teams can empathize with the need to prioritize work, the love of metrics, and the all too common push for estimates that will say when something will be “done”. With or without that pressure, good teams and developers often fall into one of Mr. Kida’s six mistakes: our human tendency to oversimplify our thinking.
Heuristics, or general rules of thumb, are what most of us use to simplify the complicated judgments we need to make, and they can give us good approximate solutions to our problems. The great news is that approximate solutions are surprisingly effective; for many projects they provide teams with good estimates and keep organizations from falling into the “perfect is the enemy of good” rut. The challenge is that those same rules of thumb can also lead to systematic biases that produce grossly inaccurate judgments. A common one is the “representativeness” rule of thumb, or “of course it’s the same, it looks the same, doesn’t it?” This kind of oversimplification works well for many decisions, because things that look alike often are alike in development. But it goes off the rails when enough other relevant data is overlooked, and that is where major decision errors come from. For example, a piece of development was done by one group of developers, so the natural thought is that new work of the same relative size must take the same number of hours. A is similar to B, so they must be equal, right? Now factor in the relevant data: the previous development team was six people, all with two or more years of experience on this same solution, while your new team consists of only two of those same people plus two new people who started last month. Is that still similar?
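To make that team-composition point concrete, here is a minimal sketch in Python. It is a toy illustration only: the function name, the team numbers, and especially the “junior factor” are assumptions invented for this example, not a validated estimation model or anything Kida proposes.

```python
# Toy sketch: adjust an analogy-based ("it looks the same") estimate
# for the capacity of the team that will actually do the work.
# All numbers and factors below are illustrative assumptions.

def adjusted_estimate(reference_hours, ref_team, new_team, junior_factor=0.5):
    """Scale a reference estimate by relative team capacity.

    reference_hours: hours the similar, already-completed work took
    ref_team / new_team: dicts of {"experienced": n, "new": n}
    junior_factor: assumed fraction of an experienced developer's
                   throughput contributed by a newly started developer
    """
    def capacity(team):
        return team["experienced"] + junior_factor * team["new"]

    return reference_hours * capacity(ref_team) / capacity(new_team)


# The scenario above: six experienced people did the reference work,
# while the new team has two of them plus two people who just started.
previous_team = {"experienced": 6, "new": 0}
current_team = {"experienced": 2, "new": 2}

naive = 400  # "A is similar to B, so it must also take 400 hours"
adjusted = adjusted_estimate(naive, previous_team, current_team)

print(f"Naive analogy estimate:     {naive} hours")
print(f"Capacity-adjusted estimate: {adjusted:.0f} hours")
```

With these made-up numbers the adjusted figure comes out roughly double the naive one, which is the whole point: the heuristic “same size, same hours” only holds if the overlooked data, in this case who is doing the work, actually matches.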
The great news is that simplifying as a thinking strategy is not all bad. In fact, simplifying strategies will serve you well in most software development estimates; just recognize that oversimplifying with little or no relevant data, combined with your own biases, will lead to major problems in how your team thinks and estimates. I would also suggest looking further into Thomas Kida’s work, or the work of others on how we think about problems, and seeing whether it can help you or your teams with your process of estimating software development.