This post has been a long time coming. I’ve started and restarted it at least a half-dozen times.
Quality – there may be no more multi-faceted and powerful attribute in successful software development. Quality is central to everything we do and seek.
- Higher quality leads to greater productivity, throughput and velocity
- Higher quality leads to increased responsiveness, reduced cycle-times, shorter lead-times
- Higher quality leads to improved customer satisfaction, employee satisfaction
- Higher quality leads to better predictability, reduced risk, improved decision making
Or at least that’s my hypothesis…
And that hypothesis is widely shared amongst the Agile and product development communities. We’ve developed numerous principles, practices and techniques intended to improve quality: Test Driven Development; Continuous Integration; Automated Build and Deploy; Pair Programming; Customer Demos; Behavior Driven Development; Acceptance Test Driven Development; and Set-based Design techniques are all at least partially focused on yielding quality improvements.
But quality can’t simply be viewed as a set of tools and techniques – independent variables/levers which we hypothesize will lead to improved business outcomes. Quality is also a business outcome unto itself.
This series emphasizes the need to focus on business outcomes (success) first – methods and practices second. So, putting aside the methods and good practice assumptions of Agile, and focusing solely on the business outcome of improved quality:
Quality = Fewer Defects in Production
We apply Agile quality practices and techniques, because we believe that doing so will yield improved business outcomes – quality (fewer defects in production) being one of those outcomes – along with productivity, predictability, responsiveness, customer and employee satisfaction.
Large, manual, end-of-cycle execution of formal testing by an independent QA organization is also a method aimed at improving these business outcomes. I hypothesize that it is less effective than alternative Agile techniques. But I don’t take that on faith, and neither should you. We must test our hypothesis.
How Do We Measure Quality?
There are innumerable quality metrics that have been devised over the years – each with its own strengths and weaknesses. In my experience, it’s important to keep metrics simple, and to not let great become the enemy of good enough. In other words, a metric that does a good job of providing insight into the quality of your product or solution, and is simple to collect and interpret, is likely better than chasing a metric that would do a great job but is more complicated.
For my part, I’ve had success over the years using a couple of relatively simple metrics:
- DEFECT DENSITY – # Defects / KLOC (defects per thousand lines of code)
- DEFECT ARRIVAL – # Defects Identified / Month
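As a rough sketch, assuming you can export your production defect records with discovery dates and you know the size of your codebase, both metrics reduce to simple arithmetic (the record fields and numbers here are illustrative, not from any real system):

```python
from collections import Counter
from datetime import date

# Hypothetical production defect records: (id, date discovered).
defects = [
    ("D-101", date(2014, 1, 14)),
    ("D-102", date(2014, 1, 30)),
    ("D-103", date(2014, 2, 3)),
]

kloc = 250  # size of the production codebase, in thousands of lines

# DEFECT DENSITY: total production defects per KLOC.
defect_density = len(defects) / kloc

# DEFECT ARRIVAL: production defects identified per calendar month.
arrivals = Counter((d.year, d.month) for _, d in defects)

print(f"Defect density: {defect_density:.3f} defects/KLOC")
for (year, month), count in sorted(arrivals.items()):
    print(f"{year}-{month:02d}: {count} defect(s)")
```

The only non-trivial decision is the bucketing: density is a single snapshot number, while arrival is a trend you watch month over month.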
What Do We Consider a Defect?
In both cases, I include only defects in the production system.
Measuring defects found and eliminated during the development cycle may be useful for managing your development and quality processes. But from a business outcomes perspective, our focus is reducing the number of defects that make it to production – not making assumptions about how or when to achieve that.
Not All Defects Are Created Equal
Any good metric should drive more questions than answers. I find it useful to tag defects with information about type and severity, so that we can consider some of those questions more deeply.
- Our defect density is high; but our severity 1 & 2 density is low. What is the impact on other outcomes (productivity, customer satisfaction, etc.) if we were to invest in reducing our low severity defects?
- Our defect arrival is very high immediately following a major release. But the defects are mostly Type = Usability. Why are our customers having such a tough time using our new features; and how can we ease them through the learning curve?
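As a minimal sketch of how that tagging pays off (the severity scale, type names, and counts below are illustrative assumptions), once each defect record carries a severity and a type, the slices behind those questions are one-liners:

```python
from collections import Counter

# Hypothetical tagged defect records: (severity 1-4, type).
defects = [
    (1, "Functional"),
    (3, "Usability"),
    (4, "Usability"),
    (3, "Cosmetic"),
    (4, "Usability"),
]

kloc = 250  # codebase size in thousands of lines

# Overall density vs. severity 1 & 2 density.
overall_density = len(defects) / kloc
high_sev_density = sum(1 for sev, _ in defects if sev <= 2) / kloc

# Arrival mix by type, e.g. to spot a post-release usability spike.
by_type = Counter(dtype for _, dtype in defects)

print(f"Overall density: {overall_density:.3f}/KLOC, "
      f"sev 1-2 density: {high_sev_density:.3f}/KLOC")
print(by_type.most_common())
```

The same two tags support both questions above: comparing overall against high-severity density, and breaking a post-release arrival spike down by type.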
You may have some hypotheses based on these questions. Perhaps those hypotheses involve application or improved use of Agile tools and techniques. What experiments would you run to prove or disprove your hypothesis? What new questions will those results yield?
This is the third post in our blog series, Measuring the Impact of Your Agile Investments. The series focuses on measuring the impact that Agile practices have on business outcomes.
Isaac Montgomery is the harried father of twin sons, a frustrated hack on the golf course, and an Agile Coach at Rally Software. He blogs at Leading Results, and you can follow him on Twitter at @iwmontgomery.