UPDATE – The proof is flawed. I have found an example that breaks the model. See update section at the bottom of the post.
I love applying Real Option thinking to things to see what pops out. This is an example of how Real Options makes proving Theory of Constraints really simple.
A few weeks ago Marc Burgauer (@Somesheep) introduced us to Klaus Leopold’s excellent boat game to demonstrate the benefits of limiting WIP and its impact on lead time. The game is elegantly simple: a line of people take turns making origami folds to turn a sheet of paper into a paper boat. Each person does the same one or two folds each time. The last person records when each boat arrives, measured from the start of the exercise.
We had a bit of a challenge getting the point across, i.e. that the rate of completing things remains the same, but the lead time becomes stable and predictable. After some discussion, Marc, Nick Poulton and I decided to modify the exercise to hit people over the head with the results. We modified the game to record the time at which each boat was started as well as the time at which each boat was finished. Then we plotted a Cumulative Flow Diagram of the start and finish times…
When you push work into the system, you build up work in progress (inventory) and the lead time increases.
Next you introduce single piece flow and plot the start and finish times…
This results in a fixed amount of work in progress (inventory) and a fixed lead time.
IT DOES NOT INCREASE THE RATE AT WHICH VALUE IS DELIVERED! THAT IS FIXED.
Any additional work (investment) entered into the system above the finish rate is waste. It simply builds inventory. The area in the triangle formed by the two start lines (red and blue) below is pure waste. It is investment that is trapped in the system that is not generating a return.
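For anyone who wants to reproduce the graphs, here is a minimal sketch of how a CFD of start and finish times can be plotted. This is not our spreadsheet; the numbers are made up purely to illustrate the push scenario (boats started faster than they are finished).

```python
import matplotlib.pyplot as plt

# Made-up start and finish times (seconds since the start of the exercise) for 12 boats.
# Boats are started every 5 seconds but only finished every 15 seconds.
starts = [5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60]
finishes = [40, 55, 70, 85, 100, 115, 130, 145, 160, 175, 190, 205]

boats = list(range(1, len(starts) + 1))
plt.step(starts, boats, where="post", label="boats started (cumulative)")
plt.step(finishes, boats, where="post", label="boats finished (cumulative)")
plt.xlabel("seconds since start of exercise")
plt.ylabel("number of boats")
plt.legend()
plt.title("CFD of boat start and finish times")
plt.show()
```

At any point in time, the vertical distance between the two lines is the work in progress and the horizontal distance is the lead time.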
I plotted a graph of the time to complete each item of work as it moved through the process. To do this I considered that there are two ways to express the rate of a process:
- The number of widgets in a fixed time.
- The amount of time a widget takes to process.
We tend to use the number of widgets in a fixed time: it lets us discard the time element and makes it easier to compare one rate to another. In software development, we refer to velocity, which is the number of widgets in a fixed time. Plotting the graph of each work item is easier when you use the amount of time a widget takes to process.
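For example, a velocity of two boats per minute and a rate of 30 seconds per boat describe exactly the same rate, expressed the two different ways.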
One thing that jumps out at you is that the rate for finishing a widget is the same as the rate of the slowest process step. This is not intuitive, especially when the process is a network rather than a linear set of steps. But why was I surprised? This is surely just the Theory of Constraints.
And then I realised. Theory of Constraints was a belief for me. It contained uncertainty. I did not know for certain that it always worked, in all situations. I had doubt and because I had doubt, I did not always apply it.
So I created a reasonably complicated network process:
<Warning – Possible bad maths ahead. I’ve not checked this with an expert yet>
And I created a spreadsheet to see how long it would take things to complete. This is where Real Options thinking came in (or value stream mapping if you prefer): you work backward from the end of the process to the start. For each process step, the time at which work item k completes process step Pn, written Pn(k), is:
- The time to complete the process step T(Pn), plus
- The earliest time the process step can start, ET(Pn(k)).
Pn(k) = T(Pn) + ET(Pn(k))
The earliest time the process step can start is the latest of:
- The time the previous piece of work finished at Pn, i.e. Pn(k-1)
- The latest time at which any of the x preceding process steps finished work item k, i.e. Pn-1(k) … Pn-x(k)
Therefore the earliest time the process step can start is:
ET(Pn(k)) = MAX ( Pn(k-1), Pn-1(k) … Pn-x(k) )
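Before deriving the rate, here is a minimal sketch of that calculation in code rather than a spreadsheet, for a simple linear pipeline. The step times are made up, and this is not the spreadsheet mentioned below.

```python
# Completion time of work item k at step n:
#   P[n][k] = T[n] + max(P[n][k-1], P[n-1][k])
# i.e. a step starts only when the previous item has left it
# and the preceding step has delivered the current item.

T = [3, 5, 2, 4]        # seconds each step takes per widget (illustrative)
items = 10              # number of widgets pushed through

P = [[0.0] * items for _ in T]
for k in range(items):
    for n, t in enumerate(T):
        prev_item = P[n][k - 1] if k > 0 else 0.0   # same step, previous widget
        prev_step = P[n - 1][k] if n > 0 else 0.0   # previous step, same widget
        P[n][k] = t + max(prev_item, prev_step)

print(P[-1])  # finish time of each widget at the last step
```

Running it, the finish times at the last step settle into fixed intervals.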
The rate for Pn (expressed in seconds per widget) is
Rate(Pn) = MAX ( T(Pn), Rate(Pn-1) …. Rate(Pn-x) )
This expands recursively, so the rate for Pn (expressed in seconds per widget) is
Rate(Pn) = MAX ( T(Pn), MAX( T(Pn-1), Rate(Pn-y), … ), MAX( T(Pn-x), Rate(Pn-z), … ) )
As the MAX function is associative, we can write
Rate(Pn) = MAX ( T(Pn), T(Pn-1), T(Pn-x), T(Pn-y), T(Pn-z) )
i.e. The rate for any process step is the max rate (expressed in seconds per widget) of itself or any preceding process step in the graph.
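As a sanity check, here is a sketch of the same recursion over a small made-up network with two parallel paths converging. This is not my actual spreadsheet network; the shape and the step times are illustrative.

```python
# Each step lists the steps that feed into it; step times are illustrative.
# P1 and P2 run in parallel and converge on P3, which feeds P4.
steps = {
    "P1": {"time": 2, "feeds_from": []},
    "P2": {"time": 6, "feeds_from": []},
    "P3": {"time": 3, "feeds_from": ["P1", "P2"]},
    "P4": {"time": 4, "feeds_from": ["P3"]},
}
order = ["P1", "P2", "P3", "P4"]   # topological order of the network
items = 50

finish = {name: [0.0] * items for name in order}
for k in range(items):
    for name in order:
        step = steps[name]
        prev_item = finish[name][k - 1] if k > 0 else 0.0       # same step, previous widget
        prev_steps = [finish[f][k] for f in step["feeds_from"]]  # feeding steps, same widget
        finish[name][k] = step["time"] + max([prev_item] + prev_steps)

last = finish[order[-1]]
intervals = [b - a for a, b in zip(last, last[1:])]
print("long-run finish interval:", intervals[-1])
print("slowest step time:       ", max(s["time"] for s in steps.values()))
```

For this network the finish interval at the last step settles at 6 seconds per widget, which is the time of the slowest step (P2), consistent with the claim above.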
<Math note – I’m not up to snuff on how you annotate graphs. Hence it’s a bit of a mess.>
Conclusion – Theory of constraints is not a theory. It should be easy to prove mathematically by someone with better math and more rigour than me.
It also means that pushing more work into a system beyond the rate of the constraint is quite simply a waste of money. IT executives should be judged on the rate at which money is invested into the IT department and the rate at which that investment delivers value. In effect, a CFD based on money in and money out, which is just another way of representing lead time.
If you want a copy of the spreadsheet, leave a comment here, or ping me on twitter.
UPDATE – The following example has a rate faster than the constraint. This is due to inventory building up behind the constraint before work items travelling through the other path reach the convergence point in the process (P4).
It looks like it still holds over the long run in the steady state. I need to think more about the start-up phase and the implications of that.
More details to follow.
February 5th, 2017 at 10:56 pm
Thanks for the article – would love to see more cases of the TOC being applied. If possible, I’d like to get a copy of the spreadsheet. Thanks!
February 6th, 2017 at 7:09 am
It may be that you are heading towards the max flow/min cut theorem? https://en.m.wikipedia.org/wiki/Max-flow_min-cut_theorem
Hope that helps,
February 6th, 2017 at 8:19 am
Hmm, ok, actually this is more interesting than that. Thanks for the puzzle.
It differs from the max-flow-min-cut setup in that flow is not conserved at nodes. Instead, the outflow from each node must be no greater than the minimal inflow, reflecting that each task can only start once its preceding tasks have all been completed.
It feels like the Ford-Fulkerson algorithm ought still to work in this scenario, although the calculation of the residual graph needs to subtract capacity from more edges than in the traditional setup. And if Ford-Fulkerson still works then something like max-flow-min-cut should still hold true.
February 6th, 2017 at 2:27 pm
Chris
WRT: “IT DOES NOT INCREASE THE RATE AT WHICH VALUE IS DELIVERED! THAT IS FIXED”
Well… yes and no.
What is not evident: if you use DBR (Drum-Buffer-Rope) properly to pull work into the system then you will (typically) improve flow efficiency. Which, in the first graph, means that the green line shifts to the left. (!!)
By shifting the green line to the left you are reducing the lead/flow/cycle time denominator in Little’s Law. Therefore, your throughput will increase even if the slope of the line remains the same. (!!)
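A quick worked example of the Little’s Law arithmetic (all numbers are made up):

```python
# Little's Law: throughput = WIP / lead time
wip = 8                          # items in progress, i.e. the gap between the CFD lines
lead_time = 4.0                  # minutes per item before the improvement
print(wip / lead_time)           # 2.0 items per minute

shorter_lead_time = 2.0          # the "green line shifted left" case
print(wip / shorter_lead_time)   # 4.0 items per minute with the same WIP
```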
Of course, if you then want to improve even further by truly increasing the slope of the line as well, then the only way to do that is to improve on the constraint.
-ST
February 12th, 2017 at 8:25 pm
You are actually getting into queueing theory here. You need to start considering arrival rate and completion rate. These follow a statistical distribution. But it gets more complicated…
You actually have a number of interlinked activities, each with its own queue with its own arrival rate and completion rate, and you need to consider these distributions. Plus feedback loops.
You also need to consider if each activity has one server or many (in queueing theory a server is the person, machine or whatever actually doing the work).
Now… The maths gets very complicated and very statistical.
Queueing theory is itself a provable theory. But when you start to model systems with multiple interconnected queues it quickly becomes beyond our ability to handle.
Thus proof proceeds not through maths but through simulation.
If you want to prove ToC you are going to need a simulator and lots of processing time.
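A minimal sketch of the kind of simulation being described, for a single linear pipeline with exponentially distributed arrival and service times. Every number and the choice of distribution here is an assumption for illustration, and it ignores feedback loops and multiple servers per activity.

```python
import random

random.seed(1)

# Mean service time (seconds per widget) for each step; all values are illustrative.
mean_service = [3.0, 5.0, 2.0]
mean_arrival_gap = 6.0          # average gap between widgets entering the system
items = 10_000

free_at = [0.0] * len(mean_service)   # when each step is next free
arrivals, completions = [], []
clock = 0.0
for _ in range(items):
    clock += random.expovariate(1 / mean_arrival_gap)
    arrivals.append(clock)
    ready = clock
    for n, mean in enumerate(mean_service):
        start = max(ready, free_at[n])                   # queue in front of the step
        free_at[n] = start + random.expovariate(1 / mean)
        ready = free_at[n]
    completions.append(ready)

lead_times = [c - a for a, c in zip(arrivals, completions)]
print("mean lead time (s):", sum(lead_times) / len(lead_times))
print("throughput (widgets/s):", len(completions) / (completions[-1] - completions[0]))
```

A real model of interlinked activities with feedback loops and multiple servers would need a proper discrete-event simulator on top of this.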