Monday, April 14, 2008

Explaining Agile, Part 2

In Part 1 of my attempt to explain the basics of Agile software delivery, I mostly just beat up on Agile's ideological opposite, the traditional Waterfall method. I offered up some theories as to why running software projects using such a deterministic, predictive approach might be inherently flawed when applied to software creation. We're talking about something that, after all, has a certain degree of innovation and discovery to it that's in stark contrast to the manufacturing world, for example. Expecting to crank out Java classes or C functions at a predictable rate, the way widgets are stamped out on an assembly line, would seem to be folly, and this is borne out by the high failure rate of traditional software projects (as mentioned in Part 1).

So if that's what's wrong with Waterfall, then how is Agile any better? Well, for one thing, Agile came about as a deliberate attempt to address some of the shortcomings within the long-established process (Waterfall). For example, a common characteristic of a Waterfall project is the "big bang" delivery of a product (or set of features) that comes out at the end. Remembering that Waterfall work is serialized - requirements lead to design which is followed by coding and then testing - it's only natural that nothing usable arrives until the end of the project. Sure, there are artifacts along the way, including requirements and design documents, but there's not even any guarantee that what they describe will actually match what's delivered! Basically the customer puts in their "order" and then waits some period of time (typically longer than whatever was originally estimated) after which, hopefully, they receive some software that allegedly matches their requirements. If, however, it doesn't quite live up to the expectations, it's usually too late to do much about it, since changes during the later stages often mean going back to the drawing board, as it were. New or changed requirements may lead to alterations to the design, more coding, and then testing everything all over again.

Agile, on the other hand, advocates incremental delivery. It attempts to avoid the perilous "big bang, slam bam, hope you like what you got, ma'am" approach of Waterfall. In an Agile project, small bits of working software are delivered every "iteration", where an iteration is usually anywhere from a week to a month in length. A common programmer response to that last statement is, "A month? You can't possibly build anything from soup to nuts in as little as a month!" The reality is, once a team has gotten used to working in short Agile cycles, thin slices of functionality can easily be delivered - including design, coding, testing, and documentation - in a mere week! But it definitely requires a paradigm shift to accomplish this, and it usually takes a while to get there. (At my work place, we're still working toward that goal.)

One of the keys to a successful transition away from Waterfall, with its long Quality Assurance testing cycle near the end, is the introduction of automated tests. Most Waterfall processes require hundreds or even thousands of person-hours of manual regression testing, leading some companies to bulk up with QA departments of dozens and dozens of "QA testers." These individuals may engage in exploratory testing, which is always valuable, but more often than not they spend most of their time running (and re-running and re-re-running) vast numbers of test scripts by hand. That's what's required to regress a product, and it usually has to be done over and over, because every time a bug is found that needs to be fixed, the code will be changed again. Once that happens, everything needs to be tested once more, to ensure that nothing new was broken. Anyone who's worked in software knows all about this process, and some of the ugly cultural developments that arise from it (such as my company's change-averse mantra of "change code, break code" that I wrote about here).

When you only have a week to add new functionality, though, and a team of maybe only three or four individuals, regressing your entire product within that small window is wholly impractical. At best, you might be able to spend the entire iteration regressing, but then you don't have any time to actually write the new code! The solution, of course, is to emphasize automated tests instead of manual ones. This can be accomplished through JUnit tests, or automated system tests, or any number of other innovative measures that safeguard the functionality of your product through the clever use of software in place of the repetitive abuse of wetware. Once you've written a good, comprehensive suite of automated tests, you can run it many times per day, and the need for a long QA cycle at the end of the project or iteration goes away. A nice side benefit of this practice is that, when you, as a programmer, do introduce a new bug (unintentionally, of course), you find out about it within hours or even minutes of the offending action, rather than days, weeks or months later (as you would with a back-loaded QA cycle in Waterfall). Any coder will tell you that rapid feedback on a booboo leads to much quicker turnaround on a fix.
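To make the idea concrete, here's a minimal sketch of what "automating the regression" looks like. The pricing function and its expected values are entirely hypothetical, and in a real project these checks would live in a JUnit test class run by the build; this sketch uses plain Java so it stands on its own:

```java
// Hypothetical example: a tiny piece of production code and the
// automated regression checks that protect it. Instead of a tester
// re-running a script by hand after every change, the whole suite
// runs in seconds, as often as you like.
public class DiscountTest {

    // The (made-up) production code under test: orders of 10 or
    // more units get 5% off the total.
    static double applyDiscount(double unitPrice, int quantity) {
        double total = unitPrice * quantity;
        return quantity >= 10 ? total * 0.95 : total;
    }

    // A trivial stand-in for JUnit's assertions.
    static void check(boolean condition, String name) {
        if (!condition) throw new AssertionError("FAILED: " + name);
        System.out.println("passed: " + name);
    }

    public static void main(String[] args) {
        // Each check captures one expected behavior. If a future
        // code change breaks either rule, the suite fails within
        // minutes of the offending commit, not weeks later in QA.
        check(applyDiscount(2.0, 5) == 10.0, "no discount under 10 units");
        check(applyDiscount(2.0, 10) == 19.0, "5% off at 10 units");
    }
}
```

The point isn't the toy arithmetic; it's that every behavior the manual test scripts used to verify gets encoded once, and from then on the computer does the re-re-running.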

That's probably enough to think about for Part 2 of our little Agile journey...
