Monday, December 04, 2006

The Art of Uncertainty

A topic of some discussion at work recently has been the notion of what it means when a Feature Team provides an Iteration forecast (the artifact previously known as a 'project plan', except that it's not). This forecast shows the work the team expects to do in upcoming, but not imminent, Iterations. There are a lot of good reasons why every Feature Team should (in fact, must) have such a forecast, not the least of which are:

1) to provide the customer with at least a thumbnail sketch of roughly how long the entire project should take, assuming no huge changes occur;
2) to allow the Product Team to determine whether additional resources are needed (the forecast extends too long) or whether some people can be moved around (this team's forecast is five months longer than that team's, and yet they both need to finish around the same time);
3) to give other Feature Teams a sense of when any features they're dependent on may be coming their way, so they can plan accordingly;
4) to give the Feature Team itself a clear impression of how much work is outstanding, both to alleviate any concerns about running out of work and to foster at least a little of that sense of urgency that marks any real project.

So all of our Feature Teams did this forecasting, way back before we even started Iteration 1. That was a good thing to have done, for all of the reasons above; plus, it got the new teams together early and allowed some preliminary jelling to occur.

But now I see two big issues stemming from those forecasts, both of which are at the root of the discussions that have been going on around them. The first, and perhaps lesser of the two, is that it's now almost 4 months later, and it's anyone's guess as to how much, if at all, those forecasts have been adjusted over that time. Speaking for the only team I can speak for - the one I'm the Feature Lead of - I know we haven't spent any time recently reflecting on upcoming work. The most obvious bit of work to do, and I've at least started warming a few team members up to the idea, is to spend a couple of hours reviewing the Story Points that we assigned to each upcoming Feature Card. As Mike Cohn told us in Las Vegas, Story Points don't degrade over time, and that's a good thing. But assigning those Story Point values was the first thing we - and the rest of the Feature Teams - did together as a team, so you just know there's some room for improvement now that we know so much more about how we work together, and about our new environment.

As a specific example, I remember one particular Feature Card getting a very high Story Point value assigned to it, mostly because we had no idea how we'd ever regression test the huge amount of refactoring involved. While there's still some work to do in that area, a large chunk of what we included in those 40 Story Points has now been done, during our build-up of the Automated Regression Framework. So it's possible that 40 isn't the right number anymore... or maybe it is! But clearly some team discussion on that topic is called for.

What appears to be a more significant issue around the forecasts is the lack of understanding, at least by some, of just why the word forecast is being used instead of plan. This was not a whimsical decision. When it comes to the weather, most people infer exactly the right meaning from the word forecast. If I check out the forecast for tomorrow, for example, my expectation is that what I see on The Weather Network has a high likelihood of being accurate. Certainly not 100%, but probably in the 70 to 100% range. Picnics and bike rides to work are often predicated on this sort of thing, after all! But, on Monday, when I look up the forecast for the upcoming weekend, I'm filled with considerably less confidence that it won't change, possibly several times, before the day - and accompanying weather - actually arrives. This is exactly the characteristic we were going for with the word forecast, as applied to features within future Iterations: the further out they are, the less likely it is that you'll actually get what's being forecast today.
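To put a number on that widening doubt, here's another purely hypothetical sketch (assumed velocities, not our team's). Hold the remaining work fixed, let the velocity range between a pessimistic and an optimistic figure, and the band of "what will be done by Iteration N" grows with N - the cone of uncertainty in miniature.

```python
# Hypothetical velocities - showing how a forecast's uncertainty
# band widens the further out you look.

total_points = 120     # assumed Story Points remaining
best_velocity = 25     # assumed optimistic points per Iteration
worst_velocity = 15    # assumed pessimistic points per Iteration

# How many Iterations until we're done, under each assumption?
optimistic = -(-total_points // best_velocity)   # ceiling division
pessimistic = -(-total_points // worst_velocity)
print(f"Done somewhere between Iteration {optimistic} and {pessimistic}")

# The spread between best and worst case grows with each Iteration out:
for n in range(1, pessimistic + 1):
    low, high = n * worst_velocity, n * best_velocity
    print(f"After Iteration {n}: {low}-{high} points done (spread {high - low})")
```

Tomorrow's forecast (Iteration 1) has a spread of 10 points; the weekend's (Iteration 8) has a spread of 80. Same forecasting method, very different confidence.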

Which is all well and good, but that message doesn't seem to have gotten out in all cases. Perhaps the strongest evidence of this is the growing popularity of referring to our current forecast as "the laminated plan." When I first heard those words, I thought someone was simply making a joke. The kind where I say, "Don't count on this forecast, it's certainly going to change" and you fire back with, "Too late! I already laminated it!" and we both enjoy a hearty laugh at your wry wit. Unfortunately, it doesn't appear that anyone's laughing in this case (least of all me).

Those of us deeply entrenched in what I lovingly refer to as The Agile Experience (or sometimes The Agile Experiment) have gotten very familiar with the notion of putting great detail into work that's right in front of us and worrying much less about the particulars of later features. But I guess we're forgetting that there's a whole 'nother world around us, filled with folks who are either not in the Agile game at all, or barely in it. And they're still operating under the old paradigm of "Plan to execute, and then execute the plan" that most of us grew up under. To them, I'm thinking, all this talk of forecasts in place of plans, and cones of uncertainty, and inspect and adapt, probably sounds a lot like when your doctor tells you that you really ought to cut back on the cholesterol: you hear "spread your butter a little thinner and you'll live to a ripe old age!" So the challenge in this area is going to be to amp up the uncertainty talk, and make it sound less like rhetoric and more like reality.

Anyone who's feeling like wishing me lots of luck should do so now... ;-)
