This is the first post in a series by Rally Director of Analytics, Larry Maccherone, on “The Seven Deadly Sins of Agile Measurement.” Find Sins #2 and #3 here, #4, #5, and #6 here, and Sin #7 here.


Agile can be perceived in different ways: as a manifesto of values or list of principles, as an emphasis on collaboration, as a repackaging of spiral and iterative development concepts, or as the overarching prioritization of adapting to change over following a plan. I’ve been around software development long enough to see several of these shifts in perception, and I’ve come to think about all such shifts as reactions to environmental changes, similar to what occurs in nature. Take the flounder, for example. A flounder’s environment is the ocean floor. He lies on his side and has adapted so that both eyes are on one side of his head, because he only cares about feedback from above.

The movement between the different stages of the software lifecycle used to be very expensive. Compilers ran for hours. Testing was labor-intensive. Distribution of a completed product involved physical media and could take months. In this environment, it was critical to minimize the number of times you went through these costly transitions. Fittingly, the feedback emphasis was on the process: you could improve the checklists you used to stage-gate each of the transitions, with the hope of reducing rework that crossed backward over these expensive boundaries. Likewise, the success of a project minimally required that it be finished before funding ran out, so there was a similar emphasis on feedback about the plan.

Then the environment changed. The costs of compilation, testing, and distribution have been driven close to zero. The biggest threat to success is not that you’ll run out of money, but that you’ll miss your market window. “Rework” is no longer verboten. In fact, it’s much better to build something minimally usable, and then “rework” it based upon usage feedback, than it is to try to “build it right the first time.” Like the flounder who no longer needs feedback from below, we no longer value feedback on the process or the plan as much. Our most valuable feedback is feedback on the product.

The old metrics systems we used before Agile had the wrong feedback loop emphasis, so agilists fittingly threw them out. They replaced them with qualitative insight, which works well on smaller teams. But Agile is going through another environmental shift. It’s scaling up to larger projects and being adopted by larger organizations. It’s starting to be used in environments that still have significant stage-gate transition costs (like hardware/firmware systems). In these environments, qualitative insight alone is insufficient; it must be complemented with appropriate quantitative insight. So, the pendulum has swung and the value of appropriate measurement is clear, but we don’t want to make the same mistakes that caused early agilists to throw out the old measurement regimes.

I have been working hard to identify the key perspective shifts necessary to successfully introduce measurement into an Agile development environment, and I've coined the phrase "The Seven Deadly Sins of Agile Measurement" to capture this learning. Let’s look at the first of these Seven Deadly Sins.

Sin #1 - Using measurement as a lever to drive someone else's behavior

If feedback emphasis is key to the success of Agile, the key to effective Agile measurement is to think of measurement in terms of feedback, not as the traditional lever to motivate behavior. Using measurement as a lever often devolves into keeping score, which is where the dark side of measurement starts.

There is a subtle, but important, distinction between “feedback” and “lever.”

Feedback is something you seek to improve your own performance. Levers are used to influence others. The difference lies more in how you use the measure than in the measure itself.

An example will help illustrate this point.

Below is a chart found in an ALM tool. It's an attempt to borrow a concept from the manufacturing world that I believe has been misapplied to our domain. (There are several problems with this chart and I'll come back to another one of them in a later post, but for Sin #1, I want to highlight the use of the red line and the red dots.)

Each dot on the chart represents a particular user story. How high up on the chart a dot appears is proportional to how long the story took to be completed: the higher the dot, the longer it took. That red "upper control limit" line and those red dots say, "this is bad!" The stories in red took too long; they are literally "out of control."
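To make "upper control limit" concrete, here is a minimal sketch of the classic control-chart calculation (mean plus three standard deviations of cycle time) run against invented story data. I'm assuming that textbook convention purely for illustration; the post doesn't say how the ALM tool actually derives its red line.

    # Illustrative only: classic control-chart limit (mean + 3 * stdev), which may
    # differ from whatever the ALM tool computes. Cycle times are made up, in days.
    from statistics import mean, stdev

    cycle_times = [2, 3, 1, 5, 4, 2, 3, 2, 4, 3, 2, 3, 30]

    ucl = mean(cycle_times) + 3 * stdev(cycle_times)  # the "upper control limit"

    for days in cycle_times:
        label = "RED (out of control)" if days > ucl else "ok"
        print(f"{days:>3} days  {label}")

Run against this made-up data, only the 30-day story lands above the line, and the chart dutifully paints it red.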

What's going to happen the next time you show this chart? There probably will be fewer red dots, but why? I'd like to think it will be because people have deeply analyzed their process and made necessary changes, blah-blah-blah ... but what's more likely is that they will just game the system to make sure they don't have any red dots. Maybe they'll split stories artificially instead of where they deliver value. This is bad because it's wasteful and doesn’t improve the process, but what really hurts is that you've now hidden data from yourself. You’re making decisions with a distorted picture of reality.

As an alternative, I propose this chart:


It's conceptually very similar to the first chart. The primary difference is the lack of a red line and red dots. This visualization is designed to allow teams to explore their evidence, enabling learning and improvement. You can hover over a dot and get the details about what happened, which should spark discussion. You can talk about probabilities to help gauge risk. You’ll see that 95% of all stories finish within 28 days; maybe that will help you make a service-level agreement commitment.
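To show where a number like that comes from, here is a similarly small sketch that pulls an SLA-style percentile out of raw cycle times. The nearest-rank method and the data are assumptions chosen for illustration, not the tool's actual calculation; only the 28-day figure echoes the text above.

    import math

    # Invented cycle times (days per completed story), purely for illustration.
    cycle_times = sorted([2, 3, 1, 5, 4, 2, 9, 3, 2, 21, 4, 3, 7, 6, 28, 5])

    def percentile(sorted_values, p):
        """Nearest-rank percentile: smallest value with at least p% of the data at or below it."""
        rank = math.ceil(p / 100 * len(sorted_values))
        return sorted_values[rank - 1]

    print(f"95% of stories finished within {percentile(cycle_times, 95)} days")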

So, the heavenly virtue here is to inspire pull and resist push. Use metrics as feedback to improve yourself, never as a lever to alter someone else's behavior.

Learn More: Download "The Impact of Agile Quantified"

Read the next blog post in this series: "The Seven Deadly Sins of Agile Measurement: Sins #2 and #3"
