Monthly Archives: April 2012

Risk – It's all about time!

I was talking to a colleague about risk. They mentioned that they list the risks and assign a probability and an impact to each one. This appears to be the standard approach taught on many management courses.

“I take a different approach,” I said. “I only consider impact.”

I am interested in two measures.

  • How long it will take to fix the problem.
  • How long the business can survive with the problem.

If the time to fix the problem is less than the time the business can survive, we do not have a problem.

If the time to fix is greater than the time available, then we need some more options.
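The rule above can be sketched in a few lines of code. This is just an illustration of the comparison; the risk names and durations are hypothetical, not taken from the article.

```python
# Time-based risk assessment: compare time-to-fix against
# how long the business can survive with the problem.

def acceptable(time_to_fix_days: float, survival_time_days: float) -> bool:
    """A risk is acceptable if we can fix the problem before it kills the business."""
    return time_to_fix_days <= survival_time_days

# Hypothetical examples: (risk, days to fix, days the business can survive)
risks = [
    ("Reporting job fails", 1, 5),
    ("Trading system down", 3, 1),
]

for name, fix, survive in risks:
    verdict = "not a problem" if acceptable(fix, survive) else "we need more options"
    print(f"{name}: {verdict}")
```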

Olav and I mention a number of examples in our InfoQ risk article.

Can you think of other examples?


Is TDD always the best solution? The third way?

Recently I have been involved in an “Agile” transformation. I have been forced to look at a number of teams and assess which tools from the Agile toolkit are relevant.

Rather than simply foist Agile goodness on the teams, I’ve been engaged in a discussion about the relevance of the tools… especially from a risk perspective (what a surprise!).

To help people understand Agile I have been casting it in a historical perspective against the (perverted version of) Royce’s Waterfall approach with sign-offs and change control mechanisms. Adopting an iterative development approach removes the need for those sign-offs and change control. In addition, the iterations help business investors mitigate a whole bunch of other risks.

Multiple iterations cause some additional issues. There are transaction costs associated with an iteration, such as regression testing and releases. Without careful management, some of these transaction costs (regression testing) will increase with each iteration, which will start to have a negative impact on the team’s ability to deliver new functionality.

The traditional approach is to perform extensive manual regression testing. This increases with each iteration, and as a result the economics of the situation lead to larger iterations, which breaks the risk management goodness of small iterations.
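The economics can be illustrated with a toy model. All the numbers here are made up; the point is only that if manual regression effort grows with the accumulated feature set, the transaction cost of each successive iteration climbs, which is what pushes teams towards larger, less frequent iterations.

```python
# Toy model: manual regression effort grows with the accumulated
# feature count, so each iteration pays a larger transaction cost.
# All figures are illustrative.

features_per_iteration = 5
regression_days_per_feature = 0.5  # manual testing per accumulated feature
fixed_release_cost_days = 1.0      # release overhead per iteration

total_features = 0
for iteration in range(1, 6):
    total_features += features_per_iteration
    transaction_cost = (fixed_release_cost_days
                        + regression_days_per_feature * total_features)
    print(f"Iteration {iteration}: transaction cost = {transaction_cost:.1f} days")
```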

The Agile approach to this problem is to automate testing. To ensure comprehensive and effective coverage, it is necessary to consider testing as a primary concern. Thus test-driven development comes to the fore.

The team I work on has an interesting context. The system is not seen outside of the organisation. It is not used to make payments to external entities. The users do not rely solely on its information to make decisions. As a result the users are tolerant of bugs… provided they can be fixed quickly. The team can turn around bug fixes within a few minutes to an hour, which is acceptable to the business. So this presents a “third way” to address the regression testing overhead: minimal testing, with the acceptable risk of introducing bugs that are fixed quickly (including by rollback, which has not yet been needed).

It will cost the business investor a lot of time and money to retrofit effective automated tests. It will take time and effort for the team to develop the skills to learn TDD and build automated tests around the legacy code base. Time and effort that the business investor could spend on other things.

Currently the team are able to produce several releases a week. The business investors are happy with the performance of the team.

Would TDD be the best approach or would the “Third Way” be more appropriate to this context? Would the introduction of TDD break the existing development process from the business investor’s perspective?

Should TDD always be foisted on a team? Or not? Would a team be Agile if they did not do TDD?

Thoughts?

P.S. I am aware that TDD brings many other benefits beyond regression testing.