
The first step to improving at anything is to start measuring

Professional swimmers are at a complete loss without a clock on the wall to help them stay on course. They don't know whether they're on pace or falling behind. They've only got their instincts to tell them how they're doing.

That's pretty much the way most development teams track their progress today—with no clock on the wall with which to gauge their performance.

What's worse, tracking yourself as a programmer is a lot trickier than simply putting a clock on the wall.

One metric does not fit all

Metrics will vary from one team to another. Your team might want to track how long it takes you to complete a certain feature. Another team might want to track the number of open bugs over time. Yet another team might be interested in tracking velocity.

There is no right or wrong approach. It boils down to figuring out what you want to improve, and then tracking the right metric so you can run experiments on your process.

Here are some steps I use when I want to improve something:

  1. I start tracking to find out where I am on a regular basis and to identify bottlenecks.
  2. I make hypotheses about possible causes behind the less-than-ideal performance.
  3. As I make changes and adjustments, I compare current metrics with past metrics. Now's a good time to ask why and dig deeper before going back to number 1.

It's important to assemble clear indicators to figure out whether or not you're improving.
It's even more important to keep an eye on them and to ask the right questions. The goal is to generate hypotheses based on actual numbers, and then track those numbers as you experiment.

Above all, though, it's important to stay away from those soul-eating vanity metrics.

Vanity metrics: good if you want to feel good, bad if you're after results

It can be very tempting to start measuring anything and everything that's easy to track, and then fool yourself into believing that you're measuring your process. But we're not interested in what's easy. We're interested in results.

You can spot a vanity metric fairly quickly. If it looks pretty and makes you feel good, but doesn't give you any sort of indication as to what you should be doing next, it's a vanity metric. In other words,

vanity metrics are like sweets: they look good and make you feel great, but you should really pass if you want to stay healthy.

Take this basic metric: your current velocity. What can you do with the information? Do you know what factors led you to get this particular velocity? Can you figure out whether you're getting faster or slower? More importantly, what can you do to improve your velocity?

Velocity means nothing without a bit of context. For example, knowing how it evolved over time could give you a sense of improvement. It would also be helpful to know how your actual velocity compares to how much work you'd planned to do.
[Velocity chart]
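
To make this concrete, here's a minimal sketch of that kind of comparison, assuming you can pull planned and accepted points per iteration out of your tracker (the iteration records below are invented for illustration):

```python
from statistics import mean

# Hypothetical iteration history: points planned vs. points actually accepted.
iterations = [
    {"name": "Iteration 12", "planned": 21, "accepted": 19},
    {"name": "Iteration 13", "planned": 21, "accepted": 15},
    {"name": "Iteration 14", "planned": 18, "accepted": 12},
]

# Compare actual velocity against the plan, iteration by iteration.
for it in iterations:
    gap = it["accepted"] - it["planned"]
    print(f'{it["name"]}: accepted {it["accepted"]} of {it["planned"]} planned ({gap:+d})')

# A rolling average gives a sense of the trend rather than a single snapshot.
recent = [it["accepted"] for it in iterations[-3:]]
print(f"Average velocity over the last {len(recent)} iterations: {mean(recent):.1f} points")
```

Even a crude trend like this tells you far more than a single number ever could.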
If you discovered that your velocity was slowing down, you'd want to find out why. (I know I would.) You could start by simply taking a look at a stories-by-type chart. That could tell you if you're swamped with bugs and chores, for example.

If that doesn't yield any insight into the problem, you could try adding a cycle time chart to find out if some stories are stuck and, if so, how long they've been stuck.
[Cycle time chart]
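
If your tool doesn't chart this for you, the underlying calculation is simple enough to sketch. The story records and the seven-day "stuck" threshold below are purely illustrative assumptions:

```python
from datetime import date, timedelta

today = date.today()

# Hypothetical story records: when work started and when the story was accepted
# (None means it's still in progress).
stories = [
    {"id": 101, "started": today - timedelta(days=12), "accepted": today - timedelta(days=9)},
    {"id": 102, "started": today - timedelta(days=10), "accepted": None},
    {"id": 103, "started": today - timedelta(days=3),  "accepted": today - timedelta(days=1)},
]

for story in stories:
    end = story["accepted"] or today
    cycle_days = (end - story["started"]).days
    status = "done" if story["accepted"] else "still open"
    flag = "  <-- stuck?" if story["accepted"] is None and cycle_days > 7 else ""
    print(f'#{story["id"]}: {cycle_days} days ({status}){flag}')
```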
In the end, what you really want is to judge your performance based on actionable metrics. Feelings have no place in this.

You're after the cold hard truth. Gloria Steinem put it very well when she said,

The truth will set you free, but first it will piss you off.

Actionable metrics only

The most important rule I know about metrics is that they have to be actionable, meaning you have to be able to do something with the information. For example, knowing the number of bugs you have right now carries little value without context to provide a frame of reference.

On the other hand, if you display the number of bugs over time, you can begin identifying patterns and seeing how your team reacts to bugs as they pile up—and then find out why that is. Same goes for velocity, as explained above, and a multitude of other metrics.
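
As a rough sketch of the bugs-over-time idea, assuming you take (or can query) a weekly snapshot of open bugs (the numbers here are made up):

```python
# Hypothetical weekly snapshots of open bug counts, oldest first.
open_bugs_by_week = [4, 6, 9, 8, 13, 17]

# Week-over-week change shows whether bugs are piling up faster than they're fixed.
for week, (prev, curr) in enumerate(zip(open_bugs_by_week, open_bugs_by_week[1:]), start=2):
    print(f"Week {week}: {curr} open bugs ({curr - prev:+d} vs. previous week)")

if open_bugs_by_week[-1] > open_bugs_by_week[0]:
    print("Open bugs are trending up -- worth asking why before the pile gets any bigger.")
```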

Once you've tracked some interesting metrics, you're ready to start asking questions and finding possible solutions.

The point is that you have to track actionable data in order to be able to take real actions and consistently improve your process.

What metric to track?

My typical board for our projects includes a small kanban column, a burndown chart for our current iteration, and a bugs-over-time chart.

The burndown chart allows you to quickly see whether you're on track and to measure how much scope creep is affecting your iteration; adding too many stories mid-iteration can put the whole thing at risk. The scope creep line will show how much work was added, while the projection will tell you whether you're still on track. The projection will also tell you whether you can take on additional work: whenever the projected date falls before the end of the iteration, it's a pretty good sign that you could add more.
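
Here's a rough sketch of the arithmetic behind that projection, assuming you keep daily snapshots of remaining points and of points added after the iteration started (all numbers and field names below are invented):

```python
from datetime import date, timedelta

# Hypothetical daily snapshots for a two-week iteration.
iteration_start = date(2024, 3, 4)
iteration_end = iteration_start + timedelta(days=13)
snapshots = [
    {"day": 0, "remaining": 30, "added": 0},
    {"day": 3, "remaining": 24, "added": 2},
    {"day": 6, "remaining": 19, "added": 5},
    {"day": 9, "remaining": 12, "added": 5},
]

first, last = snapshots[0], snapshots[-1]
burn_rate = (first["remaining"] - last["remaining"]) / (last["day"] - first["day"])
days_left = last["remaining"] / burn_rate  # naive linear projection
projected_finish = iteration_start + timedelta(days=last["day"] + round(days_left))

print(f'Scope added since the start: {last["added"]} points')
print(f"Burning roughly {burn_rate:.1f} points per day")
print(f"Projected finish: {projected_finish} (iteration ends {iteration_end})")
if projected_finish < iteration_end:
    print("The projection lands before the end of the iteration: there may be room for more work.")
```

A real burndown widget does more than this, of course, but the projection is essentially a straight line drawn through how fast you've been burning points so far.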

The kanban column is a great way to track how stories are progressing through various states and who's working on them. Like most widgets, its true power comes from its filtering capabilities: it enables you to track exactly what you want to know.
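
In spirit, that filtering boils down to grouping and filtering story records. Here's a toy sketch with invented stories and owners:

```python
from collections import Counter

# Hypothetical stories with a workflow state and an owner.
stories = [
    {"title": "Password reset email", "state": "in progress", "owner": "Dana"},
    {"title": "Export to CSV",        "state": "in review",   "owner": "Ravi"},
    {"title": "Fix login crash",      "state": "in progress", "owner": "Dana"},
    {"title": "Onboarding copy",      "state": "done",        "owner": "Ravi"},
]

# Column view: how many stories sit in each state.
print(Counter(story["state"] for story in stories))

# Filtered view: only what one person is working on right now.
for story in stories:
    if story["owner"] == "Dana" and story["state"] != "done":
        print(f'Dana: {story["title"]} ({story["state"]})')
```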

Bugs over time is my key indicator for seeing how well we're reacting to bugs as they come in.
[Basic dashboard]
There are a ton of other widgets for different scenarios, and I'll cover them in future posts, but those are the ones I use most often.

Let me know what you think are the most valuable metrics to track.