Cynefin and Estimates

This morning I saw a discussion on Twitter about “estimates being wrong”. This struck me as really odd. I’m a huge fan of Todd Little and Troy Magennis on the subject of estimation. They have taught me that the ratio of actual to estimate follows a log-normal (or rather Weibull) distribution.

When we make an estimate, it occurs at a time t based on an information set (I0…It) with a filtration function F(t). There is a huge amount of uncertainty involved, so the estimate is obviously going to be wrong. A better question is: is the estimate useful?
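To make the shape of that uncertainty concrete, here is a small simulation sketch (my illustration, not from Little’s or Magennis’s data; the sigma value is an assumption chosen to give a wide spread). If the ratio of actual to estimate is log-normal, almost every individual estimate is “wrong”, yet the distribution as a whole is stable, and the long right tail pulls the mean ratio well above the median:

```python
import math
import random

random.seed(42)

# Assumed parameters: median ratio 1.0 (estimates unbiased in the median),
# sigma = 0.9 in log space to give a wide, heavy-tailed spread.
mu, sigma = 0.0, 0.9

# A log-normal sample is exp() of a normal sample.
ratios = [math.exp(random.gauss(mu, sigma)) for _ in range(100_000)]

ratios.sort()
median = ratios[len(ratios) // 2]
mean = sum(ratios) / len(ratios)

# The long right tail pulls the mean well above the median: half of all
# estimates overrun, but the big overruns dominate the average.
print(f"median ratio: {median:.2f}")  # close to 1.0
print(f"mean ratio:   {mean:.2f}")    # noticeably above 1.0
```

This is why asking whether one estimate was “right” misses the point: the useful object is the distribution, not the point value.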

We estimate in order to make decisions. Do we want to do this thing? Do we want to do something else?

In Capacity Planning we use estimates to help us work out the approximate demand on a team during the quarter. We can use this information to identify which teams are constrained, which helps with creating an organisational backlog. Our process asks the Product Owner for the estimate, and NOT the engineering team. This means the engineering team cannot be held accountable for the estimate if it turns out to be WRONG; it also means the team do not try to get it RIGHT. The process allows the Product Owner to consult the engineering team if they choose, it just does not require it. The executive in charge of the process was given Todd Little’s paper to read so that they understand the nature of the estimates. The goal is not that they are right, it is that they are consistent in approach. The estimates are good enough to be useful in identifying constraints and forming the backlog. No one cares if they are right or wrong.

Something about this made me stop and think about Cynefin. Much of my work recently has been assisted by understanding whether the person I’m dealing with operates in a “Complicated” world where outcomes are knowable, or whether they believe the work is unknowable/emergent in a “Complex” world. I’ve noted that certain words are useful cultural markers to help you spot which. “Right” and “Wrong” indicate certainty, which implies a belief in expertise. “Hypothesis”, “Useful”, “Better” or “Worse” indicate an appreciation of the uncertainty inherent in the process. People are not computers, so it is common to find someone struggling to find the appropriate words to use.

So if you hear someone referring to estimates as right or wrong, you know they are thinking in terms of expertise. If they focus on the usefulness of the estimate and its context, they are thinking in terms of complexity.

Am I right or am I wrong? And is this useful?

About theitriskmanager

Currently an “engineering performance coach” because “transformation” and “Agile” are now toxic. In the past: “Transformation Lead”, “Agile Coach”, “Programme Manager”, “Project Manager”, “Business Analyst”, and “Developer”. Did some stuff with the Agile community. Put the “Given” into “Given-When-Then”. Discovered “Real Options”.

5 responses to “Cynefin and Estimates”

  • Anthony Green (@anthonycgreen)

The Complicated domain is the province of experts. Thus estimates can be obtained from those with knowledge of the domain, so long as they’re kept isolated from others in the field to avoid consensus bias. This is the ‘wisdom of crowds’ as distinct from the ‘tyranny of herds’ or ‘group think’.

  • theitriskmanager

    Hi Anthony

    I feel that I have failed to communicate what I intended. I am not suggesting that you do not have experts in the Complex and Chaotic domains. Far from it. You are more likely to encounter experts there.

I am discussing whether the culture and thinking leans towards the knowable or the unknowable (emergent). How you approach uncertainty is a key cultural marker. I will write another post on my thinking on Cynefin and the Hofstede Cultural Index. For me the beauty of Cynefin is that it helps us more easily spot these cultural differences. Something I am exploring.

    Chris

  • Anthony Green (@anthonycgreen)

‘Right and Wrong’ and ‘Hypothesis’ could both be cultural indicators of a belief that this is an ordered-domain problem. ‘Hypothesis’ could indicate an understanding of the unordered domains, but only if there’s an additional recognition of unknowable non-predetermination. Uncertainty in ordered domains is Complicated and addressed with experts – the wisdom of crowds. Otherwise it is just Obvious. Uncertainty in unordered domains is either Complex, where multivariate approaches are taken and emergent properties monitored, or Chaotic, where action must be taken immediately and you invariably have your fingers crossed as to the outcome.

  • KentMcDonald

    Hi Chris,
Most of my experience ends up being in Complicated domains, but the people there act as if they were in the Simple (Obvious) domain while saying the domain is Complex, because that’s the fashionable place to be. Needless to say, they often expect more from estimates than they could possibly hope to experience.

I do like your discussion of how you do estimates, and I think the key thing to focus on there is that you don’t care whether they are right or wrong, as long as they are consistently wrong, so that they are still helpful from a relative perspective.

One thing that might be helpful to point out is that this works when you are using estimates explicitly for deciding between options, where relativity (not necessarily the Einsteinian kind) is what matters.

  • toddelittle

Language is very powerful. In many of my presentations on agility and risk I would show a set of 18 projections of a hurricane’s path and ask what we know about every one of those projections. The answer that I was looking for was that each of those projections was WRONG. My point was that each individual projection was wrong, but that the collective set of projections was useful.

    When I gave this presentation to a set of high school students, one of them offered up that each projection was a POSSIBILITY. Immediately I knew he was RIGHT. Technically, each projection was an equally probable realization with an infinitesimal probability of being precisely correct. So each projection is both useful and wrong; the important question is which do you care more about? Useful and wrong is more valuable than useless and right.

    This is in many ways counter to Keynes’s often-quoted: “I would rather be vaguely right than precisely wrong.” For example, I could ask you for the time and you could respond “today.” That would be both right and useless. On the other hand, you could tell me 2:15 when it is in fact 2:07. Even though that answer is wrong, it might be useful. But that same information in a different context could also be worse than useless—it could be harmful. If I believe it is precisely accurate and I am trying to get on a train that is scheduled for 2:10, I may use the information that it is 2:15 to move on to a new plan.

Regarding Cynefin and Estimates, I found this (http://jhelmassociates.com/publicDownloads/SpaceSuttleProjects-Agile2013-JohnHelm.pdf#page=51) depiction of distribution curves as related to Cynefin to be rather interesting. With Simple/Obvious we have a defined distribution, with Complicated something like a normal distribution, with Complex something log-normal-like with a long tail, and with Chaotic no obvious distribution at all. Intuitively, it seemed to make a lot of sense to me. I doubt that there is empirical data to support it but would be curious what anyone else has come across.

I do have some interesting empirical data that would appear to be at odds with this. For a while I have been running an experiment during my presentations where I hold a jelly bean estimation contest. One would expect that jelly bean estimation would be in the Complicated domain. Yet I get nearly the same results every time I run this, with a very log-normal-looking distribution and a typical range of 6x between the low and high estimation bounds (p90/p10). While one would expect software estimation to be more Complex than jelly bean estimation, the distribution curves are very similar (actually for software I typically see 4x*).

    Back to the jelly bean data, my current thinking is that estimation in the Complicated domain is still difficult for non-experts. I do not have data, but I do believe that if I had trained jelly bean estimators I would get a much tighter distribution, although that distribution may still look log-normal.
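A back-of-envelope sketch of what those ranges imply (my addition, not the commenter’s calculation): for a log-normal distribution, the p90/p10 ratio depends only on the log-space sigma, since p90/p10 = exp(sigma × (z90 − z10)) with z90 = −z10 ≈ 1.2816 for the standard normal. So the quoted 6x and 4x spreads can be converted directly into implied sigmas:

```python
import math

# 90th percentile of the standard normal distribution.
Z90 = 1.2816

def sigma_from_range(p90_over_p10):
    """Log-space sigma implied by a given p90/p10 ratio of a log-normal."""
    return math.log(p90_over_p10) / (2 * Z90)

# Jelly-bean contests show roughly 6x; software estimates roughly 4x.
print(f"jelly beans (6x): sigma = {sigma_from_range(6):.2f}")
print(f"software    (4x): sigma = {sigma_from_range(4):.2f}")
```

On this reading the two distributions really are close in shape, differing only modestly in spread, which matches the comment’s observation that the curves look very similar.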

* For those interested in my paper on software estimation and the cone of uncertainty, it is available here (http://www.toddlittleweb.com/Papers/Little%20Cone%20of%20Uncertainty.pdf). I’m in the early stages of a follow-up to that paper that broadens the research and also looks at the practical implications of the reality of the uncertainty that we face.
