Cost of Delay is often mentioned as a possible solution to the difficult problem of deciding the order in which to fund two or more software development investments. (For an introduction to Cost of Delay, check out Don Reinertsen's talk and Joshua Arnold's blog and experience report presentations. Both are well worth the time invested.) I would like to highlight two capabilities your organisation may need before it can effectively use Cost of Delay. These are:
The ability to estimate value.
The ability to convert value to a common “currency”.
In effect, Cost of Delay assumes a level of corporate maturity.
The ability to estimate value
Todd Little wrote an excellent paper showing that actual effort, relative to estimated effort, follows a log-normal distribution (Troy Magennis would argue that it's a Weibull distribution). It would seem IT professionals are reasonably good at estimating effort. The same is not true when it comes to estimating value or return. I once worked on a project where the business case estimated the annual profit from the investment would be €100M. In its first year, the investment generated revenue of only €500K. The estimate of return was out by a factor of 200.
Lean Startup and an experimental, metric-based approach to predicting the improvement in a metric is considerably better, but still not very accurate.
Rather than a single value, estimates should take the form of a range with a confidence interval. For example: the return will be $1,000 to $1,000,000, with a 90% confidence interval. (Check out Doug Hubbard's presentation.)
So which is the better investment: the one that delivers -$50,000 to $2,000,000 with a 90% confidence interval, or the one that delivers $1,000 to $1,000,000 with an 80% confidence interval? My maths is not good enough to compare these two investments. Now consider that these are the costs of delay for the two investments. Which do I choose?
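One way to make the comparison tractable is a quick Monte Carlo simulation over the two ranges. The sketch below is a minimal illustration, assuming (in the spirit of Doug Hubbard's calibrated estimates) that each range can be read as a normal distribution whose bounds sit at the matching percentiles; the figures are simply the ones from the example above, not a recommended model.

```python
from statistics import NormalDist

def simulate(low, high, ci, trials=100_000):
    """Sample returns from a normal distribution fitted to a range estimate."""
    z = NormalDist().inv_cdf(0.5 + ci / 2)   # z-score of the upper CI bound
    mean = (low + high) / 2
    sigma = (high - low) / (2 * z)
    return NormalDist(mean, sigma).samples(trials)

a = simulate(-50_000, 2_000_000, 0.90)  # investment A: -$50k to $2M, 90% CI
b = simulate(1_000, 1_000_000, 0.80)    # investment B: $1k to $1M, 80% CI

print(f"A: mean return ${sum(a) / len(a):,.0f}")
print(f"B: mean return ${sum(b) / len(b):,.0f}")
print(f"P(A beats B): {sum(x > y for x, y in zip(a, b)) / len(a):.0%}")
```

The simulation does not make the estimates any more reliable; it just turns two incomparable ranges into distributions that can be compared on the same terms.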
It is quite likely that the two or more potential investments come from different people or groups. How do we ensure that they have adopted a consistent estimation process? One possible solution is to engage the finance department to act as coaches for putting together business cases. The finance department should not tell people how to build business cases; instead they should coach people and share useful practices. (I almost wrote share "best practices" but could not face the strangling sound from Dave Snowden.) The finance department can ensure a level playing field and help raise the game for the people writing business cases.
The ability to convert value to a common “currency”
Not all value is equal!
With the rise of business metrics, it is possible and desirable for organisations to focus on a particular part of the "funnel" that does not directly lead to a "dollar" value. An investment may aim to increase the number of installed customers, or to improve the usage or stickiness of existing customers.
Even with the same metric, there may be more value in customers on a particular platform, or in a particular geographical or demographic grouping. Which are more valuable: customers in the developed world or in developing markets? Teenagers or pensioners?
In order to compare an investment to increase usage among teenagers in the USA with revenue from Baby Boomers in Europe, or with new customers in Brazil, the organisation needs an exchange rate to a single currency (the Corporate Peso, or "Corpeso"). This exchange rate needs to be set by the executives of the organisation, taking into account market opportunities and the organisation's vision. The exchange rate becomes a strategic tool to steer software development investments. Some organisations may choose a simpler approach and focus on a small handful of metrics instead.
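To make the idea concrete, here is a hypothetical sketch of such a conversion. The metric names and exchange rates are invented for illustration; in practice the executives would set them and revise them as strategy changes.

```python
# Hypothetical exchange rates: Corpesos per unit of each metric. The metric
# names and rates are invented; executives would set and revise the real ones.
EXCHANGE_RATES = {
    "usa_teen_usage_hours": 2.0,
    "europe_boomer_revenue_eur": 1.3,
    "brazil_new_customers": 40.0,
}

def to_corpesos(metric, amount):
    """Convert a forecast improvement in one metric into the common currency."""
    return EXCHANGE_RATES[metric] * amount

investments = {
    "Increase teen usage in the USA": to_corpesos("usa_teen_usage_hours", 150_000),
    "Grow Baby Boomer revenue in Europe": to_corpesos("europe_boomer_revenue_eur", 220_000),
    "Acquire new customers in Brazil": to_corpesos("brazil_new_customers", 8_000),
}

for name, value in sorted(investments.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {value:,.0f} Corpesos")
```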
Simplified Cost of Delay
It is possible to introduce a simplified version of Cost of Delay focusing on two or three basic shapes. For a delayed product introduction, the delayed return is calculated by multiplying the rate of loss by the length of the delay; a step cost covers things like fines, and a total loss covers things like missing the Christmas season.
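The sketch below illustrates those shapes as simple functions. The figures and function names are assumptions chosen for illustration, not a prescribed calculation.

```python
def linear_delay_cost(loss_per_week, weeks_delayed):
    """Delayed product introduction: value is lost at a steady rate."""
    return loss_per_week * weeks_delayed

def step_delay_cost(fine, deadline_week, weeks_delayed):
    """Step cost: a fixed penalty, such as a fine, once a deadline is missed."""
    return fine if weeks_delayed > deadline_week else 0

def total_loss_cost(seasonal_value, cutoff_week, weeks_delayed):
    """Total loss: the whole opportunity vanishes, e.g. missing Christmas."""
    return seasonal_value if weeks_delayed > cutoff_week else 0

# Example: the cost of a six-week delay under each shape
print(linear_delay_cost(25_000, 6))      # 150,000 lost to a late launch
print(step_delay_cost(500_000, 4, 6))    # 500,000 fine for missing a week-4 deadline
print(total_loss_cost(2_000_000, 8, 6))  # 0: the Christmas window is still open
```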
There is a danger with introducing the simplified version: people may devalue Cost of Delay, especially as they are already handling the simple shapes. You risk Cost of Delay being seen as adding unnecessary complexity to something simple. This may inoculate the organisation against using the full Cost of Delay in the future.
Cost of Delay is a great concept which will work well in certain contexts. If you try to implement it in more complex contexts, you need to consider the organisational maturity needed to support it.
December 22nd, 2013 at 10:51 am
Don Reinertsen was kind enough to provide feedback on a draft of the post. He also provided permission to share the feedback. Hopefully he will blog on the subject as well. 🙂
————-
Hi Chris,
What you are saying does not bother me. I think it is helpful to show people that CoD is not substituting a superior form of magic for an inferior one. Ultimately it is only another view on the data we already use, or that we should be using.
When people disagree with a CoD calculation I usually point out that the data set that we use for the basis of this calculation is the proforma income statement for the product. Traditionally we manipulate this data with analyses like ROI, DCF, NPV, IRR and EVA. We use the output of these traditional manipulations to decide whether we want to invest thousands or millions of dollars in product opportunities. CoD takes the same data set and uses it to do sensitivities. When people reject CoD they are ultimately either rejecting the arithmetic or the data set. I point out that it is a bit irrational to claim that the data set is good enough to analyze ROI and not good enough to do a sensitivity.
The example you gave, which had a 200 to 1 range, clearly had a data set that was a complete fantasy. Since it is virtually impossible to miss price by 200x, it is most likely that there was a huge miss on the volume estimate. Such huge misses typically occur when there is no underlying analytical market model for the volume estimate — someone postulates a huge market and nobody asks them how they calculated that number. I'd be highly confident that even minimal due diligence would have exposed the weak basis for the volume assumption. You say 100 million? Give me the phone numbers of 5 customers who are willing to buy this today so I can talk with them….poof, the forecast then disappears.
Nevertheless, I think it is useful and important to show people that the sensitivity analysis cannot transform garbage data into quality information. This vulnerability is as important for CoD as it is for ROI analysis. We don't want results produced with garbage data causing people to assume the analysis cannot be done well.
Finally I would point out that when garbage data is used as input for a systematic sensitivity analysis the answers typically have less variation than when we simply ignore the garbage data and generate random guesses. (Which typically produce a 50 to 1 range.) Put another way, although the data may be noisy, it is not 100 percent noise.
I think this is a fertile topic to explore in your blog. Go for it!
Best regards,
Don Reinertsen
December 22nd, 2013 at 10:54 am
Joshua Arnold* also gave some fantastic feedback which can be found on his blog (The link is in the above article).
*Joshua is one of the leading practitioners in Cost of Delay. He coached Maersk to use CoD on their IT investment portfolio. Check out the presentation on his LinkedIn profile.
December 22nd, 2013 at 3:50 pm
[…] of Delay they sometimes doubt whether their organisation is ready for it. They say things like, “We don’t have the maturity for it”, or “We couldn’t do that because our stakeholders wouldn’t support it”. […]
December 22nd, 2013 at 4:16 pm
It would be a shame if people were discouraged from experimenting with Cost of Delay based on your comments above. As I tried to explain in my blog, “organisational maturity” is a poor excuse in my view.
http://costofdelay.com/value-urgency-and-organisational-maturity/
Most organisations already have a common currency and they already estimate value. Simplifying to a small set of urgency curves might help with the urgency part of Cost of Delay, but it doesn’t help with the value part at all.
I wrote a bit about estimating value here:
http://costofdelay.com/why-making-value-estimates-is-hard-but-worthwhile/
December 22nd, 2013 at 9:32 pm
Hi Chris
I have been thinking about both this blog and Joshua’s http://costofdelay.com/value-urgency-and-organisational-maturity, based on my own experience of introducing Cost of Delay to an organisation for a Continuous Delivery programme. My rationale is outlined at http://www.stephen-smith.co.uk/continuous-delivery-cost-of-delay.
In my (limited) experience, the ability to estimate value exists and the ability to convert to a common currency exists – ideally Money, or Time if the development team(s) and product team(s) are unfortunately segregated. The bigger issues are awareness, and gumption – are people aware of value estimation? Do they want to try something different?
I agree with both yourself and Joshua. I do think organisational maturity is an issue, but I agree with Joshua that maturity is not static and that we need to build up momentum with a few like-minded souls. The trick is to uncover those early adopters, trial Cost of Delay practices, and demonstrate some success.
Steve