In a number of articles and posts, writers have described the situation where testing is cut short, or simply done poorly, and sub-standard work is released. The cause can be anything from business drivers demanding an early release, to a refusal to accept slippage caused by an issue or extra work, to components being developed in isolation with the effort to merge them overlooked, or simply promised documentation that hasn’t been delivered, leaving more work to be done.
The least offensive term I’ve personally heard for this is ‘kicking the can down the road’. However, many of the more recent articles call it ‘Technical Debt’.
Digging around, it looks like the XP community first picked up on this, with Ward Cunningham introducing the debt metaphor in an experience report at OOPSLA ’92. You can hear (and see) Cunningham speak on the debt metaphor on YouTube:
Back to the metaphor, the best visual I’ve seen on this comes from Patrick Wolf over at CollabNet in his post Technical Debt – The High Cost of Future Change.
This graph shouldn’t be a surprise to anyone; the later in a project you need a change, the more impact it will have.
It also shows just how critical robust testing is for Agile projects. If you skimp on testing in Agile, then at some point you end up worse off than if you had run Waterfall.
For a simple run-down on Technical Debt, a great place to start is Steve McConnell’s slide deck on Managing Technical Debt, given at the 2013 International Conference on Software Engineering (ICSE). The reasons given for getting into Technical Debt are all too familiar – especially:
If we don’t get this release done, there won’t be a second release. (Rationale for cutting corners)
Of course, by cutting corners to get a half-baked release out, there won’t be a second release either.
This leads naturally to the question: what is the definition of ‘done’? A number of writers have covered how to agree on a definition of ‘done’. For example, Ken Schwaber states the following in Agile Project Management with Scrum:
Scrum requires Teams to build an increment of product functionality every Sprint. This increment must be potentially shippable, because the Product Owner might choose to immediately implement the functionality. This requires that the increment consist of thoroughly tested, well-structured, and well-written code that has been built into an executable and that the user operation of the functionality is documented, either in Help files or in user documentation. This is the definition of a “done” increment.
When and how does a team mark a task as ‘done’? The post by drunkenpm, Done Done and the Bag of Oranges, sticks in my mind. Not quite the best way to get your 5-a-day.
Added 21 Aug 2013:
Mike Cohn has just published a great, short post on the Definition of Done. Well worth a quick read.