Monthly Archives: August 2014

Viewing the Scottish Devolution Debate through the Cynefin Framework

A few nights ago I watched the debate on Scottish Devolution with my two boys. They are interested in this historic decision and were keen to discuss the debate.

I used the debate to help them understand a little of what I do at work, namely the Cynefin framework.

Scottish Devolution is a complex situation. No one knows whether it will be good for Scotland (and Great Britain) or otherwise. When considering whether it’s a good thing, we won’t find out for a generation or two… It’s that big a deal. Think back to Ireland joining the Euro. For years, Ireland was one of Europe’s fast-growing Tiger economies. Then the credit crunch happened, and Ireland struggled within the Euro. I do not know whether Scottish Devolution will work or not, and I do not think anyone else does either. That’s the thing about complex situations.

That said, there are aspects of Scottish Devolution that are simple, and those that are complicated.

A simple aspect is whether Scottish people want more independence from Westminster. It is also simple that Scotland can use the British Pound or any other currency, including the US Dollar and the Euro.

There are complicated things that can be analysed or resolved with the help of experts by looking at how similar problems have been solved elsewhere. For example:

Will Scotland need to create a physical border between itself and England? Will joining the EU require Scotland to join the Schengen Agreement that allows Europeans to move between countries without passports?
If Scotland creates a more beneficial and successful health service, how will it ensure that hordes of English don’t cross the border to make use of the service?

There are complex problems, such as what currency and mechanism Scotland will use:
  • Use the Pound or another currency.
  • A Scottish Pound pegged to the Pound, Euro or US Dollar.
  • Co-manage the Pound with the Bank of England. This is not only Scotland’s decision, as the Bank of England’s responsibility is to manage the Pound for the benefit of Great Britain, and Scotland would not be part of Great Britain.
  • Blah, blah, some other thing.

The interesting thing about the debate is that Alex Salmond was making simple arguments, and Alistair Darling was making complicated and complex points. Since the time of Lenin’s success with “Peace, Land, Bread!”, politicians have known that electoral messages need to be simple, simple, simple. This is why the Sun newspaper is so powerful and hires some of the best journalists: it keeps things simple.

So, looking at the debate using Cynefin, the big question I had was: why isn’t a seasoned politician making things simple? Why isn’t he pointing out that Westminster was run by Scottish lawyers? Unless he’s hedging Labour’s bets. If the “No to independence” side wins, then OK. But if the “Yes” vote wins, when the inevitable problems occur (there are bound to be at least teething problems rebuilding Hadrian’s Wall) he can say “I told you so” and Labour will see a resurgence in Scotland.

Of course, if Scotland devolves and we build a wall, the Cornish might want to go next. If so, we will need to rename the conference to “Agile across the Breach”.


Sailing – Complex or Complicated?

I was lucky enough to meet Dr Alistair Cockburn at the first Agile Development Conference in 2003. He “rebooted” my brain in a bar. The next morning I asked him what book I should read. Without hesitating he recommended “Situated Learning” by Lave and Wenger. I read it a year or so later. A heavy book but brilliant. It introduces the concept of legitimate peripheral participation which is similar to an apprenticeship.

I’m currently on holiday with my boys and they are learning to sail. They wanted me to teach them. The first day we listened to the refresher course being given to those who had lessons in previous years. The training was on the beach in a mock-up dinghy without a sail. They were learning how to tack (turning by going into the wind) with one of the students simulating the wind by moving the boom. The key skill seemed to be getting the right hand grip so that the sailor could easily pass the main sheet from one hand to the other when they tacked. On previous holidays I have always seen a blackboard with arrows representing the wind to explain the different points of sail. The instructor was great and helped me pick out a boat that I could easily control with both of the boys in it. Safety is obviously the first priority, and we took it out for a spin.

The first skill they had to learn was balancing the boat: where to move when the wind changed. The next obvious thing to learn was how to spot a gust of wind (a dark patch on the water) or a lull (smoother water), so that they could get ready to move. I taught them to steer using the main sheet* and the centre board. They had a go at steering with the main, and they constantly adjusted the centre board at my command. When the wind blew up I was able to instantly take over the main as I had a hand on it at all times. It struck me that learning to sail this way is a classic example of legitimate peripheral participation. Their balancing of the boat by leaning out, and the raising and lowering of the centre board, are real work. They are also learning craft, like spotting gusts and lulls. They are close to the other tasks like steering, and they are learning what is involved in tacking and gybing. This is legitimate peripheral participation, and I realised it is a fantastic approach when context dominates the practice. In the three days we have been out, the wind conditions have been different and changing, and we have used two different classes of boat.

By contrast, the shore-based learning stripped away context entirely: whiteboards showing wind models and forces, a grounded craft with people simulating the wind. Complicated learning strips away context. It teaches you the skills that are needed with certainty. One thing is certain: the hand grip I use is nothing like theirs, and it doesn’t seem to matter.

I think this explains why Real Options and Feature Injection are not as popular as other techniques. Both are complex tools for resolving uncertainty, and the problem with uncertainty is that it is uncertain. They do not lend themselves to classroom training where the context is stripped away. In fact, the problem is that they are tools for managing context. If you want to learn how to do real options, the best approach is probably legitimate peripheral participation.

It also shows the dangers of trying to learn using the wrong approach. Whilst classroom training will appeal because it ensures coverage, it may cover the wrong skills, and you run the risk of gaining a false sense of competency. You will know how to hold the sheet in your hand but you won’t be able to spot a strong gust of wind. Ask yourself this: how many times have you been on a training course and applied almost none of what you learnt? You were probably attending a course to learn the complicated things and not the complex ones. Complex skills are best learnt on the job from practitioners rather than in a classroom from thought leaders.

So is there a role for classroom training for complex subjects? The answer is a resounding yes. Classroom training is good for helping you achieve a state of conscious incompetence in a complex subject. You know a skill or tool is available and you understand its value. The transition from conscious incompetence to conscious competence is probably best learnt using legitimate peripheral participation.

*The main sheet is the rope that pulls the sail in.


Capacity Planning

Capacity Planning is a process whereby an organisation comes together to agree on what it is going to focus on in the medium term. There are two outputs from the Capacity Planning process:

1. An ordered backlog of initiatives (MVDs) that the entire organisation agrees on.
2. A list of the constraints and options within the organisation.

The Capacity Planning process does NOT produce a plan or schedule of work.

The purpose of Capacity Planning is to provide the organisation with focus: to agree a backlog based on the constraints that exist within the organisation. Rather than determine the capacity of the organisation based on its headcount, the capacity is based on the capacity of each group within the organisation.
Without an organisational backlog, there is a very real chance that the organisation will end up with too much work in progress, because each group will seek to develop its own locally optimised priority.

Preparation

The start of the process is to create a roughly ordered list of Minimum Viable Deliveries (MVDs*). This provisional list is then pruned to create a list that is probably about twice the size of the eventual backlog. I call this a list of unicorn horns as it has little to do with reality.

The groups are the sets of Scrum teams that look after a particular system, service, component or function within the organisation; in fact, any potentially limited resource that needs to participate in the delivery of an MVD counts as a group.

The (product) owner of each MVD then engages with the product owners working with the groups. The groups create an Epic for each MVD they need to contribute to, and provide a SWAG, a Sweet Wild Assed Guess. The minimum effort required is that the group’s product owner makes up a SWAG. It would be too much to involve the team in a story-point style estimate (even if bulk estimation is used), as the SWAGs are only needed to help order the backlog and identify constraints. They are not a commitment by the team. The unit for a SWAG is the team-week, the work that a single “Scrum” team can do in a week. One of the main reasons for the SWAG is to ensure that there is engagement between the MVD Owner and the groups’ product owners, and to ensure that the groups are aware of any work that may be required of them in the medium term.

You will probably want control reports to help product owners spot any gaps.

Each group calculates its available capacity. The default is the number of weeks in the period, multiplied by the number of “Scrum” teams in the group, multiplied by 50%. The 50% is based on the work of Todd Little.
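For example (the numbers here are made up purely for illustration), a group of three “Scrum” teams planning a thirteen-week quarter would start with 13 × 3 × 50% = 19.5, or roughly 20 team-weeks of available capacity.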

Capacity Planning

Capacity Planning should involve all key decision makers. If any are missing they may go outside the process to get their MVD done.

Now that you have the ordered list of MVDs, you go through them in order. For each MVD, you check whether there is enough capacity in each of the groups to deliver all of the Epics in the MVD. If there is enough capacity, the MVD is included in the backlog and the remaining capacity of each group is reduced by the SWAG on its Epic. Whenever the capacity of a group reaches zero, the group is identified as a constraint. If there is not enough capacity in one or more of the groups, then the following can be done:
1. The decision makers decide to deselect one of the MVDs that has already been selected in order to free up the capacity.
2. The MVD can be reduced in scope. Theoretically it should be impossible to reduce something that is minimal, but given the choice between nothing and something, the MVD Owner will find a way to shave some functionality off.
3. The MVD Owner and the group’s product owner can defer the decision to reject the MVD by looking for additional capacity.

Eventually, it will not be possible to do any further MVDs as they all rely on groups that have run out of Capacity.
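A minimal sketch of this allocation loop, in Python, may help make the mechanics concrete. The MVD names, groups, SWAGs and capacities are invented purely for illustration; in practice this is worked through around a table or in a spreadsheet rather than in code.

```python
# A rough sketch of the allocation loop described above. The MVD names,
# groups, SWAGs and capacities are invented; units are team-weeks.

mvds = [  # the ordered (pruned) list of MVDs, each with a SWAG per group it needs
    {"name": "MVD-1", "epics": {"Payments": 6, "Web": 4}},
    {"name": "MVD-2", "epics": {"Payments": 7, "Reporting": 3}},
    {"name": "MVD-3", "epics": {"Web": 5}},
]

# Default capacity per group: weeks in the period * number of teams * 50%
capacity = {"Payments": 13 * 2 * 0.5, "Web": 13 * 1 * 0.5, "Reporting": 13 * 1 * 0.5}

backlog, constraints = [], set()

for mvd in mvds:
    if all(capacity[group] >= swag for group, swag in mvd["epics"].items()):
        backlog.append(mvd["name"])               # enough capacity everywhere: select it
        for group, swag in mvd["epics"].items():
            capacity[group] -= swag
            if capacity[group] <= 0:
                constraints.add(group)            # this group has become a constraint
    # otherwise the decision makers deselect, descope or find extra capacity (options 1-3 above)

print("Backlog:", backlog)
print("Constraints:", sorted(constraints))
print("Spare capacity:", capacity)
```

Running the sketch produces the two outputs described at the top of this post: an agreed backlog (MVD-1 and MVD-2) and the list of constraints (the Payments group), plus the spare capacity left in the other groups.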

You will find that about 20% of the groups will be constraints, having used all of their capacity. About 60% will have some work on the Corporate Backlog, and 20% of groups will have no work on the Corporate Backlog. This list is gold dust for the IT management team.

How the spare capacity is allocated will be the subject of a subsequent blog post.

You now have a backlog agreed by all the key decision makers, and the list of constraints in the organisation. You have done steps one and two of the Theory of Constraints, namely identify the constraint and exploit the constraint (by prioritising its work). The rest is easy from here on in.

* I have deliberately avoided defining an MVD. It’s the smallest piece of work that delivers value.


A tale of two coaches.

It was the best of times, it was the worst of times. In the words of Jim Collins, some coaches supported “The Tyranny of the OR” whereas other Coaches promoted “The Genius of the AND”.

This is a tale of Yves and David. Two coaches working with identical companies in a way that is only possible in literature, movies and the minds of thought leaders. The similarities are spooky. Even the managers had the same name… Neil, though David spelled it with a K.

David’s Story

David: “Hello, I’m your new Agile coach.”

Kneel: “Let me explain how our business works.”

David: “No need for that, I’m off to the Gemba.”

Kneel: “What’s a Zumba? Is that like my wife’s fitness dance class?”

David: “Sigh. Nothing for you to worry about. You go and learn to be a servant leader.”

Kneel: “A savant lieder? What’s that?”

David: “You have to work it out for yourself whilst you still have a job.”

Six months later. Kneel is talking to his team.

Kneel: “So let me get this right. He changed it, and now it’s broken something else, but if we change it back it will break the thing it fixed.”

Peon: “Yep. What happened to that David guy?”

Kneel: “He told the CEO to sack all the middle managers and get you lot to self organise. The CEO sacked him for being a moron.”

Yves’s Story

Yves: “Hello, I’m your new Agile coach.”

Neil: “Let me explain how our business works.”

Yves: “Great, that will be useful context. After that we’ll head off to the Gemba.”

Neil: “What’s that?”

Yves: “I’m going to pair coach with you so that you learn how to coach your teams.”

Neil: “Is that necessary? Surely you can do it? Do I need to do coaching?”

Yves: “Management are part of the governance, risk management or control function of the process. Imagine a simple boiler with a controller. Now imagine that the controller does not know how the process works. What would happen?”

Neil: “Chaos. I see your point. But what if they need skills I do not have?”

Yves: “You can help them to find them. It may be someone on one of the other teams or you may need to bring someone in.”

Neil: “So if I go to the Gemba, I don’t need to sit in the glass booth anymore?”

Yves: “You need to do both. Some risks are best observed close up. For others, you need to get some distance.”

Neil: “Can you give me an example?”

Yves: “Imagine all of your teams are burning down through work nicely. However, you have a feeling you are not delivering as much as you think you should.”

Neil: “I get it. Management reporting will help me get the big picture view to spot risks and issues that are at a higher level. A bit like fractals. If you measure the coastline using a one metre ruler you will get a very different answer to if you measure it using a mile long ruler.”

Yves: “Yes, you are looking for problems at a different scale which means you need a different measure and viewpoint. Sometimes at the Gemba. Sometimes in the glass booth.”

Six months later

Neil: “Hi Yves, You remember that change we put in. Well we had to take it out again, and replace it with something else. I didn’t need to get involved, the team did it. They just wanted me to keep an eye on things… Anyway, you know how you said we would probably need to talk about Cynefin and Staff Liquidity. Well I think we’ve hit that point. When can we bring you in again?”

This post is a response to Chris Young’s excellent post and a tweet by Joshua Arnold.

All names of the characters are purely fictional. Any resemblance blah blah…


Kicking Risk Down the Road Jeopardizes Success

It’s 12:30 and you have a report to write, an international flight to catch at 16:00, and at least an hour’s drive to get to the airport. What do you do first?

Most of us assess the risk and realise that there are many more ways we can be late for the flight than on time. The journey to the airport has the most uncertainty, so we complete this first. We can then write the report, relaxed, at the gate. We call this risk management approach nothing more than common sense.

Common Sense Uncommonly Applied
We have the same opportunity to use risk management in our planning process when we order our backlog. But most of us don’t. Why?

Most product owners now prioritise the backlog based on short-term value rather than taking risk into account. One of the key reasons is the loud legacy of too many late projects caused by engineering choosing to implement the product in bleeding-edge technology, and this not working out so well.

In this move to focus on shorter-term value in our products, we are failing to acknowledge that we can’t simply mitigate the risks of technical novelty by scoping it out. Technical novelty has to happen, whether for competitive advantage or because of deprecated technologies. So we need to bring that common sense back into our planning process.

One of the ways to address this is to bring technical risk into the prioritization process.

A Way of Using Risk to Prioritize the Backlog
So how can we think about this from a backlog priority perspective?

One way is to create a matrix with the technical risks to be addressed on the y axis and the user stories on the x axis. See below:

Risk to Story Matrix

The user stories are true user stories: if they’re delivered, a user can do something valuable in their process. I put a cross at the intersection of a risk and a user story where building and appropriately testing that user story would demonstrate that we’ve mitigated the risk. I then look at the user stories with the most crosses, balanced against the least effort and the largest value, and prioritise those to the top of the backlog. In the above example I’d want to build and test user story 3 as early as possible.
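A minimal sketch of that ranking, assuming an invented set of stories and risks (the names, effort and value figures below are hypothetical, not taken from the matrix above), might look like this:

```python
# A rough sketch of ranking user stories using a risk-to-story matrix.
# The stories, risks, effort and value numbers are invented for illustration.

stories = {
    "user story 1": {"risks": {"new message bus"}, "effort": 5, "value": 8},
    "user story 2": {"risks": set(), "effort": 2, "value": 5},
    "user story 3": {"risks": {"new message bus", "unproven vendor API"}, "effort": 3, "value": 6},
}

# Rank by: most technical risks mitigated, then least effort, then largest value.
ranked = sorted(
    stories.items(),
    key=lambda item: (-len(item[1]["risks"]), item[1]["effort"], -item[1]["value"]),
)

for name, story in ranked:
    print(f"{name}: mitigates {len(story['risks'])} risk(s), "
          f"effort {story['effort']}, value {story['value']}")
```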

This approach is an aid, not the answer, to your backlog prioritisation. We still have to make trade-offs between building and maintaining market credibility and building a sustainable solution. We reserve the right to make short-term trade-offs by building “sparkler” features to keep market interest, preferably those with high value and low technical risk. We may even decide to build “sparkler” features early that have high technical risk, but either cut the capability back to manage the risk or accept the risk, then rework when we have time. This way we go in clearly understanding that we’re taking on debt in terms of rework and/or risk in terms of market credibility to realise the additional value of reduced time to market.

Conclusion
Most organisations I’ve come across have little recognition of how to actively manage technical risk.

Ordering your backlog by taking into account those user stories that, if built, would address the most technical risk is one way to increase schedule predictability.

It can be argued that the value I describe above is a positive way of stating market risk and that I could have one combined list of technical and market risks. That leads on to the next post…


Given When Then – A Cynefin Case Study

Dan and I created the “Given When Then” pattern on August 23rd 2004, or rather, that was the day we realised we needed the “Given” part. On November 30th, I wrote a series of blogs explaining the format that Dan and I had created in its more familiar form: JBehave II, JBehave III, JBehave IV and JBehave and Postmodernism.

This experience report is best explored by considering it through the lens of the Cynefin framework. Although I read the Cynefin whitepaper before this time, I did not understand it until fairly recently.

Obliquity

Neither Dan nor I deliberately set out to create the “Given When Then” format. We were both working on other projects. Dan was working on BDD, where he was trying to change the language of TDD using NLP to make it easier for people to learn TDD. I was trying to work out an analysis approach that would work with Extreme Programming. Dan had replaced TDD’s assert with “should” and was having more success explaining TDD to people. A week or two before, we had travelled back from the Agile Development Conference together and we had realised that “should” was the language of specification.

On the day that Dan and I first came up with “Given”, our goal was for Dan to explain BDD with Mock Objects and Patterns to me as I was going on sales visits to clients and talking about something I had never actually done. As an amusing aside, several years later Liz Keogh told me it took her six months to remove the visitor pattern from JBehave v1.

The key point is that Dan and I were working on oblique problems. Dan was trying to create a better way to teach TDD and I was trying to learn TDD. We were not deliberately trying to create a pattern to allow non-technical people to communicate effectively with Agile developers.

Activated Individuals

Dan and I did not create the “GIVEN” word. We tripped over it. It was literally a movie moment when I said something and Dan and I looked at each other realising it was something useful.

The night before, I had been to see a friend researching a PhD in Historiography. Historiography is, in effect, Post-Modernism applied to how history is understood and taught. It shows us that the way history is taught, understood and interpreted is more a function of the context than of the events themselves.

When Dan showed me the code for TDD with Mocks, my head was full of “Context” (thanks to my friend Rob) which meant I recognised mock objects as context. When I said “Mocks provide the context” Dan and I both realised it was significant as our goals had activated us to the importance of the statement.

The key point, which Dave Snowden makes in his talks, is that you need individuals in a heightened state of awareness to spot something important*. You need experts to spot something subtle and significant.

Recipe Books and Chefs

Both Dan and I were “Chefs” in Dave Snowden’s language. I had a decade of experience of Business Analysis and was used to coming up with new approaches when needed. Dan had several lifetimes’ worth of experience as a developer, and in particular of coaching developers.

As Chefs we recognised that “context” was a new ingredient that was missing from the meal shared between Agile developers and non-developers trying to communicate to them.

This was only possible because we had both served our apprenticeships; it was because we understood how to combine the ingredients that we knew the “GIVEN” was a new ingredient**.

Exaptation

BDD was designed to help developers learn TDD more easily. Dan and I adapted it for communication between non-technical people and developers.

Actually, the “Given When Then” format is an exaptation of TDD. TDD has the steps Setup – Execute – Assert. Dan and I exapted the TDD form into a specification format that non-developers could use to communicate more effectively with developers.
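A toy example may make that mapping concrete. The shopping-basket class and test below are invented purely for illustration (they are not the original JBehave code); the comments show how the TDD steps line up with Given, When and Then:

```python
import unittest


class Basket:
    """A hypothetical domain object, invented purely for this illustration."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


class BasketShould(unittest.TestCase):
    def test_show_the_total_of_the_items_added(self):
        # GIVEN a basket containing one book at 10    (TDD: setup)
        basket = Basket()
        basket.add("book", 10)
        # WHEN another book at 15 is added            (TDD: execute)
        basket.add("book", 15)
        # THEN the total should be 25                 (TDD: assert)
        self.assertEqual(basket.total(), 25)


if __name__ == "__main__":
    unittest.main()
```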

Multiple Safe to Fail Experiments.

I went off to develop Feature Injection while Dan developed JBehave and promoted BDD. Rather than focus all his attention on one BDD solution, Dan promoted and supported many open source communities as they developed tools. From JBehave, through RSpec, to Cucumber and SpecFlow, Dan has supported them all.

So Dan engaged in Multiple Safe to Fail experiments in his search to realise a BDD tool that worked.

Cynefin

Thanks to Cynefin, I now have a better understanding of what happened when we discovered the Given When Then format. As a result, in the future I will have a better understanding of the context I need to create in order for innovation to occur.

  • GIVEN I use Cynefin to understand the world
  • WHEN I look at situations going on around me
  • THEN I’ll find new meaning

* I’ve watched three of Dave’s keynotes to find the point where he says this but could not find it. My wording may be wrong.

**A while later I realised that GWT was a subset of the use case. Unfortunately the Use Case is so bloated as a tool that it is not focused on specifying behaviour. The Use Case has become the jack of all trades and master of none.


Pull – An experimental blog backlog

I would like to try an experiment. I currently have a fairly large backlog of blog posts that I intend to write, most of which are fairly self-contained. Rather than put them out as I feel like it, I would like to try an experiment in pull. Below is a selection of the posts I’ve planned. If you want to see one more than the others, leave a comment. I will count the comments (if any) on Monday evening. The one with the most comments will be the next one I write.

The backlog

1. Capacity Planning – An experience report on using Theory of Constraints to create an organisational backlog. This will use details (photos) from the experience report given by Lisa Long at XPDay 2013.

2. The role of the Agile Manager – A description of the role and the training for an Agile Manager (Delivery Manager, Risk Manager and Coach ). The importance of reporting.

3. Given When Then – A Cynefin Case Study. An experience report describing how Dan North and I created the Given When Then format, made sense of using Cynefin.

4. Using Cynefin’s Butterfly Stamping to determine the most appropriate approach to building the backlog.

5. The ups and downs of Hippos and Data Scientists. How to order your organisational backlog.

6. Something else you want me to write about. Some aspect of Feature Injection? Real Options? Staff Liquidity? Scaling Agile for Practitioners? What…evvva?

7. Tornado Maps and Skills Matrices. How to use your skills matrix and backlog to build a tornado map.

That should be enough. On Tuesday night, Dermot and Simon Cowell will count the votes and announce the result on Twitter.

Chris