I worked most of today from home, thanks to the big dumping of snow that we got overnight. I planned to whip up a new spreadsheet to measure Bug Longevity Trends (how long bugs tend to be open before getting fixed) in the morning and get on to something else in the afternoon... except that as of 10:00 tonight I was still toiling away on the spreadsheet!
The challenge, this time around, was that there are two types of bugs to look at: the ones that have been fixed (by the time you run the report) and the ones that haven't. The first group is easy to measure longevity on, as it's just the # of days between the bug report being opened and subsequently closed. But getting a good grip on the bugs that are still hanging around proved more difficult.
My first impulse was just to measure longevity as the time between when the bug was reported and now, but that has the unfortunate side effect of making teams look like they're improving as time goes by, without actually doing anything to improve! The reason I say that is that the # of days a bug has been ignored, as measured today, gets smaller the closer the bug's report date is to now. So if a team had completely neglected to fix bugs each Iteration over the past 18 months, the trend curve would still indicate that the longevity of those bugs was shorter with each passing month (which would look good), when in fact nothing positive is actually happening.
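To make that concrete, here's a minimal Python sketch of the naive measurement (all the dates are made up, and this is just an illustration, not the spreadsheet itself): the most recently reported bugs always score the lowest, so the numbers drift downward over time even if nothing ever gets fixed.

```python
from datetime import date

# Made-up report dates for four bugs, all still unfixed as of "today".
today = date(2009, 2, 11)
opened = [date(2007, 9, 1), date(2008, 3, 1), date(2008, 9, 1), date(2009, 1, 1)]

# Naive longevity: # of days between the bug being reported and now.
naive_longevity = [(today - d).days for d in opened]
print(naive_longevity)  # [529, 347, 163, 41] -- newer neglected bugs "look better"
```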
Anyway, I'm still not convinced that I've solved the problem, but at least I got far enough - by 10:00 - that I have something to show a few people at work, and get more feedback on. This has turned out to be the toughest Bug Trend analysis that I've done to date!
2 comments:
Why not just have a separate statistic of "unsolved bugs"? It could be a bar graph of the # of unsolved bugs still open per Iteration. The farther to the left (i.e., the older the Iteration) a bug sits, the worse it looks for the team. Although again... P4 vs P1 comes to mind here.
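For what it's worth, a rough sketch of that kind of chart in Python with matplotlib, using made-up counts of still-unsolved bugs per Iteration (oldest Iteration on the left):

```python
import matplotlib.pyplot as plt

# Hypothetical counts of still-unsolved bugs, grouped by the Iteration
# in which they were reported.
iterations = ["It 1", "It 2", "It 3", "It 4", "It 5"]
unsolved = [7, 5, 9, 4, 6]

plt.bar(iterations, unsolved)
plt.xlabel("Iteration reported")
plt.ylabel("# of unsolved bugs")
plt.title("Unsolved bugs by Iteration")
plt.show()
```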
In summary:
You have a thankless job, and no matter what statistic you produce, statistically some people will complain about how it doesn't represent them properly.
The epiphany this week was to separate Closed from Open bug reports, as lumping them together was just not working for me.
So the Closed ones get graphed a couple of ways:
- average # of days to close the ones opened in each Iteration, and
- average Closure percentage for each Iteration (what % of each Iteration's bugs have so far been closed?)
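Roughly speaking, the calculation behind those two Closed-bug charts looks something like this. This is just a Python sketch with made-up data to show the arithmetic, not the actual spreadsheet:

```python
from collections import defaultdict

# Hypothetical bug records: (iteration_opened, days_to_close),
# where days_to_close is None for bugs that are still open.
bugs = [
    ("It 1", 12), ("It 1", 40), ("It 1", None),
    ("It 2", 5),  ("It 2", None),
    ("It 3", 3),  ("It 3", 8), ("It 3", 2),
]

by_iteration = defaultdict(list)
for iteration, days in bugs:
    by_iteration[iteration].append(days)

for iteration, days_list in sorted(by_iteration.items()):
    closed = [d for d in days_list if d is not None]
    avg_days = sum(closed) / len(closed) if closed else 0
    closure_pct = 100.0 * len(closed) / len(days_list)
    print(f"{iteration}: avg days to close = {avg_days:.1f}, "
          f"closure = {closure_pct:.0f}%")
```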
For the Open ones, I took a completely different tack. For them, I'm graphing:
- the cumulative "bug debt" for whatever bugs are still open at the end of each Iteration (the total # of days those bugs have been open), and
- the total # of open bugs at the end of each Iteration
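And a similar sketch for the Open-bug charts, again with made-up dates. The "bug debt" at each Iteration's end is just the sum of how many days each still-open bug had been sitting there as of that date:

```python
from datetime import date

# Hypothetical still-open bugs, identified only by the date they were reported.
open_bugs = [date(2008, 10, 6), date(2008, 11, 17), date(2009, 1, 5)]

# Hypothetical Iteration end dates.
iteration_ends = {"It 20": date(2008, 11, 28),
                  "It 21": date(2008, 12, 26),
                  "It 22": date(2009, 1, 23)}

for name, end in sorted(iteration_ends.items(), key=lambda kv: kv[1]):
    # Only bugs reported on or before this Iteration's end count against it.
    ages = [(end - opened).days for opened in open_bugs if opened <= end]
    bug_debt = sum(ages)    # cumulative "bug debt" in bug-days
    total_open = len(ages)  # total # of open bugs at Iteration end
    print(f"{name}: open bugs = {total_open}, bug debt = {bug_debt} days")
```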
I'm planning to add some Severity/Priority views of the same next week, if I get the time. I agree those are important factors.
My hope is that looking at these charts, and the earlier ones, will allow teams to figure out if they're moving in the right direction along the Quality Line or not.