You actually have a number of interlinked activities, each with its own queue, its own arrival rate, and its own service rate, and you need to consider these distributions. Plus feedback loops.

You also need to consider whether each activity has one server or many (in queueing theory a server is the person, machine, or whatever actually does the work).

Now… The maths gets very complicated and very statistical.

Queueing theory is itself a provable theory. But when you start to model systems with multiple interconnected queues, it quickly goes beyond our ability to handle analytically.

Thus proof proceeds not through maths but through simulation.

If you want to prove ToC you are going to need a simulator and lots of processing time.
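To make the simulation point concrete, here is a minimal sketch of simulating a single M/M/1 queue (one server, exponential arrivals and service). All the parameter values are made up for illustration; a real ToC model would chain many such queues together and add the feedback loops mentioned above.

```python
import random

def simulate_mm1(arrival_rate, service_rate, n_customers, seed=42):
    """Simulate one M/M/1 queue; return mean time a customer spends in the system.

    Illustrative only: arrival_rate and service_rate are hypothetical, and a
    real model would link several queues with feedback.
    """
    rng = random.Random(seed)
    arrival_time = 0.0
    server_free_at = 0.0
    total_time_in_system = 0.0
    for _ in range(n_customers):
        # Exponential inter-arrival and service times define an M/M/1 queue.
        arrival_time += rng.expovariate(arrival_rate)
        start_service = max(arrival_time, server_free_at)  # wait if server busy
        server_free_at = start_service + rng.expovariate(service_rate)
        total_time_in_system += server_free_at - arrival_time
    return total_time_in_system / n_customers

# Theory predicts mean time in system W = 1 / (mu - lambda), i.e. 5.0 here;
# the simulated value should land close to that.
print(simulate_mm1(arrival_rate=0.8, service_rate=1.0, n_customers=100_000))
```

Even this one-queue case needs many simulated customers for the average to settle, which is where the "lots of processing time" comes from once queues are interconnected.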

WRT: “IT DOES NOT INCREASE THE RATE AT WHICH VALUE IS DELIVERED! THAT IS FIXED”

Well… yes and no.

What is not evident: if you use DBR properly to pull work into the system, then you will (typically) improve flow efficiency, which, in the first graph, means that the green line shifts to the left. (!!)

By shifting the green line to the left you are reducing the lead/flow/cycle time denominator in Little’s Law. Therefore, your throughput will increase even if the slope of the line remains the same. (!!)
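The arithmetic behind that claim can be sketched as follows (the WIP and lead-time numbers are hypothetical, purely to show the denominator effect):

```python
def throughput(wip, lead_time):
    # Little's Law: WIP = throughput * lead time  =>  throughput = WIP / lead time
    return wip / lead_time

# Hypothetical figures: 20 items in progress, lead time cut from 10 to 8 days.
print(throughput(wip=20, lead_time=10.0))  # before DBR: 2.0 items/day
print(throughput(wip=20, lead_time=8.0))   # after lead time drops: 2.5 items/day
```

With WIP held constant, shrinking the lead-time denominator alone raises throughput, without touching the slope of the line.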

Of course, if you then want to improve even further by truly increasing the slope of the line as well, then the only way to do that is to improve on the constraint.

-ST

It differs from the max-flow-min-cut setup in that flow is not conserved at nodes. Instead, the outflow from each node must be no greater than the minimal inflow, reflecting that each task can only start once its preceding tasks have all been completed.

It feels like the Ford-Fulkerson algorithm ought still to work in this scenario, although the calculation of the residual graph needs to subtract capacity from more edges than in the traditional setup. And if Ford-Fulkerson still works then something like max-flow-min-cut should still hold true.

Hope that helps,
