Agile Bug Prioritization

Bugs are the inevitable bane of every software developer’s existence. Just as inevitable is the question of when to fix them.

Death's Head Hawkmoth. Image Credit: Wikimedia Commons

“When should we fix this bug?” is usually addressed through some sort of prioritization scheme, and at many companies that’s driven by severity. A bug’s severity is usually determined by the impact the bug has on clients, the effort involved in the fix, etc. Here are just a few considerations that may come into play:

  • How many clients are affected? (Usually one, some, or all)
  • Are the clients able to use the software at all? If so, how severe is the loss of functionality?
  • Is there a workaround for the problem until the bug is fixed?
  • Does the company have an SLA with the affected clients?
  • How hard is it to fix the bug?
  • How risky will it be to deploy the fix?

This typically results in a bug prioritization scheme that has four (or more) levels. Unfortunately, having four or five (or more) prioritization levels creates more problems than it solves:

  • Prioritization levels tend to work their way into the language of SLAs and reports to management, which makes them hard to change.
  • As the number of levels goes up, the distinction between them tends to get fuzzier, and the challenge of categorizing bugs gets harder.
  • Even with relatively few levels, it’s hard to distinguish between those at the bottom of the scale.
  • The bottom-most priority level bugs never get attention. Ever. They might as well not be there.

In other words, too many bug priority levels produce too much confusion and delay.

Bug Prioritization for Iterative Development

Many Agile shops follow development process frameworks like Scrum that operate in iterations called sprints. A sprint is simply a period of time in which software development work is planned, conducted, and delivered; this cycle is then repeated, iteratively. It’s a bit like doing laundry: 1) sort out the clothes to be cleaned; 2) wash, dry, and fold them; 3) put them away in the dresser. Repeat from step one.

The industry standard for the length of a sprint is two weeks, but it varies from shop to shop. Regardless of how long the sprints are, there are only three priority levels needed to classify bugs for an iterative system:

Fix It Now

Our product is really broken. Many or most of our clients can’t get their work done, period. It’s worth sacrificing some or all of the current sprint’s planned work that’s not yet done to get the problem solved as quickly as possible.

Fix It Next Sprint

Our product is broken, but there are workarounds or sacrifices our clients can live with until we finish off the work we’d planned this sprint. We’ll put the bug at the top of the priority backlog for fixing in the next sprint.

Fix It Later

There’s some sort of noticeable defect in our product, but it’s not slowing our clients down significantly, nor is it preventing them from getting stuff done. It can wait indefinitely.

Bug Prioritization for Continuous Development

Some Agile shops operate in a framework like Kanban that provides a continuous, sustainable pace of development, without pausing for planning or demonstrations except on an as-needed basis. For these shops, the solution is even simpler:

Fix It Now

Our clients can’t get work done at all with our product, or their work is seriously impeded; put it at the top of the work queue.

Fix It Later

Our clients might have noticed a problem, but it can wait. Drop it in the queue wherever is appropriate.

Either way, it’s important to note that except in cases of extreme emergency, you shouldn’t sacrifice work in progress to fix a bug! You’ll pay all kinds of costs in quality and time. You’ll be better off if some team members work on the new problem, while others clear work in progress out of their way.

Important note on priority labels

If you follow a prioritization scheme like those laid out here, and you decide to use labels like P1, P2, and P3, or Critical, Important, and Unimportant, you may run into problems with clients (or account managers!) who don’t understand why all of their bugs aren’t P1/Critical. By using innocuous labels like +0 (fix it now), +1 (fix it next sprint), and +n (fix it later), you may be able to avoid this problem, at least for a while.
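The three-level scheme above can be reduced to a couple of yes/no questions. Here’s a minimal sketch in Python; the function name, its parameters, and the decision thresholds are assumptions chosen for illustration, not part of any standard triage process:

```python
# Illustrative sketch only: the field names and decision rules here are
# assumptions, not a prescribed triage procedure.

def triage(clients_blocked: bool, workaround_exists: bool) -> str:
    """Map a bug to one of the three innocuous priority labels."""
    if clients_blocked and not workaround_exists:
        return "+0"  # fix it now: clients can't work and there's no workaround
    if clients_blocked:
        return "+1"  # fix it next sprint: clients can limp along for now
    return "+n"      # fix it later: noticeable, but nobody is blocked

print(triage(clients_blocked=True, workaround_exists=False))  # prints +0
```

In practice the inputs would come from the considerations listed earlier (SLAs, number of clients affected, and so on), but the point is that the output space stays small: three labels, each with an unambiguous meaning.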


Bill Horvath
I'm an enthusiastic and experienced Agilist and Scrum Master, with depth in both software development and application implementations. I have decades of professional experience in technology, entrepreneurship, industrial psychology, and management consulting, and my most noteworthy industry expertise is in software, automotive, healthcare, government, and non-profits.

Metrics for Management

One of the fundamental principles underlying the success of any Agile workplace is transparency. When something in the work process is “transparent”, it can be observed and measured, which allows us to gauge how efficient that part of the process is. A simple example would be a workflow in which a bolt has to be screwed into a threaded hole; the time it takes to retrieve a bolt and screw it into the hole could be measured, making that part of the process transparent.


The measures we use in Agile to provide transparency on work processes are collectively referred to as process metrics. Process metrics are great for Agile teams, because they allow the team to inspect and adapt their processes for the purpose of improving them. Unfortunately, making processes transparent also means exposing these metrics to the business’ management, which often has a highly undesirable side-effect: if the team members feel the data might be used to punish (or reward!) them, the metrics become unreliable.

There is a solution to this particular conundrum. In the excellent book The 4 Disciplines of Execution, the authors argue that there are two types of indicators that provide insight into how the business is doing. Leading indicators are measures of things that are directly under the control of those doing the work; i.e., process metrics. Lagging indicators are measures of things beyond the control of the workforce, but which are indicative of the value or quality of what the workforce is making. Examples of lagging indicators include sales volume, warranty returns, and customer satisfaction. If management leaves the inspection of leading indicators to those doing the work, and instead attends to lagging indicators, they will eliminate the manipulation of process metrics while satisfying their need for data indicating the state of the business.

It is sorely tempting for managers to attend to leading indicators. After all, they provide data right now (as opposed to in the future, which is the case with lagging indicators), and they’re indicative of the process efficiency of the team. If you’re a manager of Agile teams, don’t fall into the leading-indicator trap. Being successful with Agile is fundamentally an issue of trust, in which the business operates under an implied social contract: the managers trust that the teams are continuously working to improve the speed of delivery, quality, and value of the end product, and that the teams will assume responsibility for the effectiveness of their members. The teams trust that the managers will provide the support the teams need to improve their processes and clear impediments, and that they will not use process metrics against them. Only by sticking to the terms of this contract can a business realize the true benefits of being Agile.


Long Live Estimates! (Story Points Part II)

In my last post, I argued that estimating story points for the purpose of predicting velocity adds no predictive power to the art of forecasting Sprint performance or release dates.

Estimation…It’s not an exact science.

However, this isn’t to say that the process of estimating story points lacks value. Because everyone on the team is accountable for the accuracy of the forecast in the Sprint Plan, and they have to agree on the final outcome, they have an incentive to actively participate in the discussion about the story in order to plan how it will get done, and to identify any potential risks or impediments that might stand in the way. That’s where the real value of story point estimates lies.

As such, whether story points include complexity, effort, risk, or anything else is an arbitrary decision that ultimately doesn’t matter, except to the extent to which it influences the discussion the team has about each item. Whatever method is chosen, it needs to be applied consistently across stories.

Disclaimer: Aside from stimulating discussion during planning, there may be value in using story points for purposes other than velocity. Say, for example, a team is estimating story points based purely on complexity. When they look at the top of the backlog during planning, they find a story that’s the equivalent of digging a ditch: dead simple, but it will take a long time. Because they recognize it’s the only story they’ll have time to do during the sprint, it’s the only story they choose to pull in. As a result, their velocity tanks that sprint.

The value here may be in how that data point is used down the road. For example, the team might go to management to say ‘You see this dip in the graph here? That’s something you could have outsourced to a less expensive person or group.’ Or the developers might go to the PO and say ‘This kind of story isn’t very well constructed. It’ll keep us busy, but we could create value more quickly if you rearranged the work to leverage our skills more effectively.’

I’ve also been considering whether value might be extracted from story points if they were used as a measure of the width of the ‘cone of uncertainty’ around the estimated time needed to get a story to done. I.e., the bigger the story point value, the less certain it is that a time estimate is accurate. For example, a developer might estimate that the tasks needed to get a story done will take ten hours overall. If the story is rated a one, the developer will very likely get it done in about ten hours. If the story is rated thirteen, it might take ten hours, but it could just as easily take two, or twenty, or fifty. If this postulate turns out to be true, the team could then use the information represented by the story points during a sprint to help decide whether they’re likely to get everything to done. (I’ll be considering this theory in more detail in a future post, when I have more empirical data to throw at it.)
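The postulate above can be sketched numerically. In this toy model, the interval around a time estimate widens with the story-point value; the growth factor is invented purely for illustration and is not empirically calibrated:

```python
# Sketch of the 'cone of uncertainty' postulate: the interval around a time
# estimate grows with the story-point value. The growth factor below is an
# assumption for illustration, not a calibrated model.

def uncertainty_interval(estimate_hours: float, story_points: int) -> tuple:
    """Return a (low, high) hour range around the estimate; wider for bigger stories."""
    factor = 1 + 0.3 * story_points  # assumed rate at which the cone widens
    return (estimate_hours / factor, estimate_hours * factor)

# A 1-point story stays close to its 10-hour estimate (roughly 7.7 to 13 hours)...
print(uncertainty_interval(10, 1))
# ...while a 13-point story could take anywhere from ~2 to ~49 hours.
print(uncertainty_interval(10, 13))
```

If the theory holds, a team could sum the high ends of these intervals mid-sprint to gauge whether everything is still likely to get to done.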


Estimates Are Dead! (Story Points Part I)

I’ve been in a lot of debates at conferences and work about whether story points (which indicate the ‘size’ of any particular user story) should include effort, complexity, or both, so I’d like to throw in my two cents.

Basically, I think these debates miss the point of estimation entirely.

The first question is why bother using story points at all? Most teams use them to measure velocity, which is then used as a guide when deciding how much can be done in the next sprint, and (in some cases) how long a particular feature set will be in development before release. However, the Scrum Guide doesn’t really specify how to measure the items in the Product Backlog:

“Work may be of varying size, or estimated effort. However, enough work is planned during Sprint Planning for the Development Team to forecast what it believes it can do in the upcoming Sprint. Work planned for the first days of the Sprint by the Development Team is decomposed by the end of this meeting, often to units of one day or less.”

And as it turns out, experience has shown that velocity in story points correlates so closely with the number of stories completed in sprints that there’s no additional predictive power gained by using them. I.e., you can predict how many stories you’ll get done in a sprint by averaging the number of stories you’ve done in previous sprints. Story points have no value in that respect.

But (and this is a big but!), the estimation process itself has tremendous value. We’ll look at this caveat in the next post, Story Points Part II.
