The Language of Risk

Last week I had an enjoyable chat with Janet Gregory about risk management. We were discussing how risk management is not one thing; rather, it is an attitude or approach. The conversation reminded me why I started this blog.

Earlier this year the system I worked on was subject to a methodology audit. I spent several hours working with the auditor to explain the approach we were taking. At the end of the conversation the auditor said they wanted to use a kanban system to manage their audit process.

Why did the auditor like what I said? Because I explained everything we did in terms of risk. When they asked for a “process”, I explained the risk that process was meant to address. I then explained how our different process addressed that risk more effectively.

A couple of examples.

“Do you have stakeholder sign-off on your requirements to ensure they all agree on the priorities?”

reframed in risk terms as:

“How do we address the risk that we might be building the wrong thing?”

and we addressed as follows:

“Every week we present the status of the list of projects we are working on to the steering committee. The longest we can work on the wrong thing for is one week.”

“Where is your functional spec?”

reframed in risk terms as:

“How do we avoid the risk of building the wrong functionality?”

“We have a two page functional spec. Any more and the stakeholders will not read it. We also have a whole bunch of examples that the stakeholders have verified as correct, and a mock up in Excel so they have an idea of what it will look like. The key is to get quality feedback from the stakeholders rather than get a signature that means nothing. A signed-off spec sort of transfers responsibility from IT to the business for getting the requirements right… but not really. IT will get the blame if it does not work, even if they have it all signed in stone.”

For the fifty or so process audit points, we did the same kind of thing for each one. Step one: agree the risk the process step is meant to address. Step two: explain how our team addressed it.

The language is important because it helps you think about the problem in the right way.

From my experience, middle and senior IT management respond well to this way of thinking, as do the business investors.


What is the purpose of The Software Craftsmanship movement?

Rather than an opinion, this is a question. What is the purpose of the Software Craftsmanship movement?

I heard the start of Bob Martin’s speech at Agile2008 in Toronto. I walked out because I wasn’t comfortable with the language. I missed the speech that launched the Software Craftsmanship movement.

Since then, there have been conferences and code retreats, and even a manifesto. I know some of what the software craftsmanship community do. I’m just not sure why. I’m not sure what problem it is trying to solve.

From reading the manifesto, it seems the purpose of the software craftsmanship movement is to help the top X% of developers get better. (The few members of the movement I have discussed this with deny that this is the purpose.) This is not a bad goal; I am a member of the Agile community for the same reason. If so, all is well and good, but it does not really help improve the software development industry.

As a project manager I’ve never had a problem with the top X%. They care about the job they do and want to get better at it. The problem for me has always been the bottom Y% who do not care and oppose learning new things. I would prefer a movement that pitched its level much lower and made it possible to address the problem at the bottom rather than elevate those at the top.

So what is the purpose of the software craftsmanship movement? Can anyone help me with this one? At the moment I don’t get it.

Many thanks

Chris


The risk of comments in code.

Many years ago I used to program. Received wisdom had it that we should put comments in the code to explain what was done. When a developer we worked with said that others should be able to read code like sheet music, we thought him a tad eccentric. A couple of colleagues would comment his code after he had explained it to them. They would complain the next morning that he had deleted the comments overnight. It was a tit-for-tat battle: eccentric developer versus developers following best practice. As a manager I found it frustrating that he kept deleting the comments.

That was fifteen years ago. What about now?

When I learnt to code at university, it was common to see something like

v = h * l * w

Meaningless and totally unreadable. Comments were necessary to understand what was going on.

Instead, the variables should be renamed such that

volume = height * length * width

Totally readable and easy to follow. Now that the code makes sense, the need for the comments is reduced.
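The same renaming can be shown in a short illustrative sketch (my example, in Python for the sake of having something runnable), where the descriptive names make the explanatory comment redundant:

```python
# Before: the intent lives in a comment, not the code.
# v = h * l * w  # calculate the volume of the box

# After: the names carry the intent, so the comment can go.
def box_volume(height, length, width):
    return height * length * width

print(box_volume(2.0, 3.0, 4.0))  # 24.0
```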

Sometimes you might find comments where the mental model of the domain is different to the one used to implement the solution. Often the comments are placed in the code when someone has to change someone else’s code. Another way of explaining what is going on. An obvious violation of domain driven design principles. An exception to this rule is if the domain is complex, which leads to complex code. (Thank you Dan. See comments)

The key thing is that comments indicate the presence of code that is hard to understand. They may indicate the presence of technical debt.

Some years ago I attended an XP Game involving Lego at XTC. It was my second or third visit to XTC. We had to iterate through the design of a lego car that we controlled with software built in a basic computer language. I had the great fortune to pair with Alistair Cockburn (said the Scottish way). I was numbering our versions of software as “1”, “2”… Alistair said we should name them “Go”, “Go and Stop”, “Reverse” and so on. My version names would have required comments.

Comments are needed when the name or symbol we ascribe to a thing is a name rather than a description of the thing. Changing the name to something meaningful allows us to remove the comments in many cases.

Not so many years ago I did some analysis to work out how you calculate the cash flows associated with Index Tranches (Exotic Credit Derivatives). I documented the cash flow generation using pseudo code.

Pool Notional = Tranche Notional / ( Detachment Point % – Attachment Point % ).

etc. etc.
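That first line of pseudo code can be expressed as a hedged Python sketch. This is my illustration, not the documented spec, and it assumes attachment and detachment points are given as decimal fractions:

```python
def pool_notional(tranche_notional, attachment_point, detachment_point):
    """Pool notional implied by a tranche, per the pseudo code above.

    attachment_point and detachment_point are fractions, e.g. 0.03 for 3%.
    """
    return tranche_notional / (detachment_point - attachment_point)

# A 3%-7% tranche with a notional of 10,000,000 implies a pool of 250,000,000.
print(round(pool_notional(10_000_000, 0.03, 0.07)))  # 250000000
```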

I created thirty six examples based on the pseudo-code. A user spotted a missing condition and as a result I changed the pseudo code and added a further six examples.

The pseudo code spread to several pages and I was concerned that it was too difficult for people to follow. I suggested to the same user that I comment the pseudo code. He rejected the idea: “If they cannot understand it from reading the pseudo code, I do not want them signing this off. There is a danger they will sign off based on the comments, which may be different to the pseudo code in subtle but important ways.”

To a risk manager, comments act as a risk indicator. The presence of comments indicates that the code is possibly not clearly written. That a developer may make a mistake in that area of code. That the code is risky to change and care (automated testing) is needed.

Of course, an absence of comments does not indicate an absence of risk.

Thank you to Nat Pryce for inspiring me to write this post with your tweet the other day.


Red Bead Roulette.

Have you ever been to a casino? My parents took me a few times when I was younger. I avoided the card games as they seemed to require skills that I knew I did not have. I favoured roulette. That big wheel with numbers around the edge. The brief flash of the ball as it left the skilled croupier’s hand, and the tick, tock, plunk as the ball decided the fate of the anxious gamblers and the fresh-faced maths geek.

Now imagine a variant on the game. You get to run the game twenty times. Each time, you have to bet on black. You count up the number of times you win. This is your score.

Now run the game again. But this time make a change. Instead of one person placing the bet, get two, or five, or one hundred. Will it change the results?

Now try doing it standing on one leg.

Or with a glass in your hand. A glass containing whiskey. Now drink the whiskey. Drink lots of whiskey. Drink whiskey with ice, and whiskey without ice.

Do any of these changes make any difference to the score?

No!

The outcome of the game is determined by the inherent randomness of the process and the constraints placed on it (always bet on black).
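A minimal simulation makes the point. This is my sketch, assuming a European wheel (18 black pockets out of 37): none of the “changes” appear anywhere in the model, so none of them can move the score.

```python
import random

def play(spins=20, p_black=18 / 37, rng=random):
    """Score one run of the game: bet on black every spin, count the wins."""
    return sum(rng.random() < p_black for _ in range(spins))

# More bettors, standing on one leg, whiskey with or without ice: none of
# these enter the model, so none of them can change the score. Only the
# randomness and the constraint (always bet on black) matter.
rng = random.Random(42)
scores = [play(rng=rng) for _ in range(1_000)]
print(sum(scores) / len(scores))  # hovers around 20 * 18/37, roughly 9.7
```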

This is Deming’s famous red bead experiment. It is meant to teach us that nothing we do will affect the outcome of the system. It proves that the system dominates, but only because it ensures that result by construction.

Now consider a game that is more like a production process in the real world. Imagine a casino where we could do whatever we like. The individuals playing the game might decide to take the ice cubes out of their whiskey glasses (there are a hundred playing, so numbers count; whiskey with ice is needed) and block the red and white slots in the wheel. This would force the ball to always land on black.

That’s what individuals do. They change the rules of the game. They change the system.

Next time someone offers to play the red bead game, discreetly remove all the “bad” beads before one of the goes. That way, you can demonstrate to the people running the game that the system does not dominate and that individuals can have an effect bigger than 6%. Of course, expect them to be angry because they are the ones in control of the system.

I would like to thank Don Reinertsen for inspiring me to think about the “Red Bead Con” with his keynote at Lean Kanban Benelux.


Information Arrival Process

Last week I read a nice post by David Anderson about the Information Generation Process. I realised that apart from badly naming what I was talking about, I had also failed to explain it.

Most literature I’ve read about process improvement, including Lean, focuses on the delivery of the product or service. None that I’ve read really focuses on the flow of information in the process. Unless you consider the information flows, it is possible that your decisions will be severely sub-optimal.

As an example, consider the software product and information flows in software development. Traditional software development belief has an analysis phase followed by parallel software development and test preparation, which come together in a test execution phase. Focusing on the software only, parallelising the software development and test preparation phases makes perfect sense as it reduces the time to delivery of the software… unless you consider the information flowing into the system.

The test preparation phase involves creating detailed examples that will be used to test the software. Creating these examples results in the discovery of details about how the system needs to behave. That information needs to be built into the software; in other words, it needs to be incorporated into the software development phase before the software can leave the test execution phase. The only way that the information is guaranteed to get into the software development process is as a “bug”.

In other words, focusing on the software product only results in parallelising software development and test preparation, which is the optimal way to deliver software INTO test execution. When you consider the information flows in your system, you realise that the optimal way to deliver software OUT of test execution is to place test preparation before software development. In other words, use “Specification by Example” or “Automated Acceptance Testing”.

This is one example. The general rule for optimising an information arrival process is that “All information needed to make a commitment should be available before a commitment is made.” The Feature Injection process is one way of implementing this principle. “Sense and Respond” describes a similar process for service organisations. Feature Injection starts with the value, whilst “Sense and Respond” starts with the customer.

Both Feature Injection and “Sense and Respond” are the “knowledge discovery processes” that David refers to in his post.

Neither process is passive, simply waiting for information to arrive. Both actively discover the information needed in a structured manner, guided by the goal of delivering (customer) value.

I like to pride myself on the fact that I give things really, really terrible names. “Information Arrival Process” is one of my worst. A poncy name for considering how information arrives into a system as well as how the “product” flows through it. The aim is to avoid information loops where information flows backwards in the process, and to ensure the process is not halted whilst it waits for information to arrive. In effect, endeavour to create flow of information in the opposite direction to the “product”, as well as flow of the “product” itself.

Note: “Product” is the thing delivered to the customer that generates value for them.


Time to ditch “The Backlog”

Have you ever been offered a “stottie” or a “barm cake” or a “cob”? Have you ever been called “Pet” or “Duck” or “Hen” and not known what it meant? Have you ever been confused when someone told you to get on the “sidewalk” or “pavement”?

The words we use can often identify where we are from. However, those words can make it harder to get where we want to go, because people do not understand us or have a different understanding of the words we use.

“Backlog” is one such word. It is a word that expresses the IT view of the world. This was fine when Agile was being sold to IT teams. Now that Agile is being sold to business investors, we need a new phrase to describe outstanding work.

Ask someone who has not heard of Agile, “What is a backlog?”. They might look up a definition:

“Value of unfulfilled orders, or the number of unprocessed jobs, on a given day. While a backlog indicates the workload that is beyond the production capacity of a department or firm, it also serves as a pointer toward the firm’s future sales revenue and earnings. Also called open order.”

The key is “unfulfilled orders / unprocessed jobs”. These describe the commitments that the IT department has yet to complete. Backlog is a term that describes IT’s relationship with outstanding work. It does not describe how the business views the same things.

Without knowing the name, a business investor might describe the backlog as “A list of things I might invest in that will deliver business value.” Backlog implies commitment and makes no reference to value. This is why I prefer to call the backlog “A portfolio of investment options”.  It is a phrase that business investors I’ve dealt with have responded well to. One that makes their role as an investor clear.

So pet, fancy a stottie and bottle of dog to celebrate our new found understanding?


Using your keys to pull the door shut.

Last week I was lucky to spend a few days at #ALE2011.

Whilst there, Olaf Lewitz shared a trick he has to close the front door of his house. He follows this process: open the door, put the keys in the lock, pull the door closed using the keys, and lock the door.

Whenever you pull a door shut behind you, you are making a commitment. If you want to pass back through the door you need an option (key) to get back in. Whenever you pull the door shut, there is the risk you have left your keys inside. Olaf’s process addresses the risk of pulling the door closed whilst your keys are inside.

This simple pattern could be used to adjust (IT) processes to achieve the same effect.

<cheeky>If you implement the pattern, be careful the Lean Tool Heads do not optimise it away to remove the waste (or what we call risk management).</cheeky>

Update: Thank you to Laurent Bossavit for pointing out the “Berlin Key” which enforces this process.


A.L.E.2011 – The relationship factory

I’m still landing after last week’s #ALE2011 unconference in Berlin. The organisers did an amazing job of hosting a truly memorable event with people from ALL over Europe and even the (very) odd interloper from the U.S.A. ( <- Brian, Another example of taking the piss ).

ALE is short for the “Agile Lean Europe Network”. Network is probably a better term than community for Agile and Lean. A network is formed of nodes (people, companies) AND the relationships between them. Agile is the learning machine that operates on top of the Agile / Lean network. In order to learn from another node, you need a relationship. If you ask people “What is the opposite of a good relationship?”, their obvious response is “a bad relationship”. The correct answer is “no relationship”, as you can still learn from a bad relationship. There is no learning from a non-existent relationship. The success of a learning community is due to the relationships within it.

For me, ALE2011 was successful because it facilitated the creation of new relationships and the deepening of existing relationships. This happened in a number of ways, some organised, others spontaneous.

  1. I had the opportunity to spend time with people I knew well and people who I now know better.
  2. The Conference started with Jon Jagger’s Coding Dojo to get people talking. The message from the start was audience participation.
  3. The open space was the only thing running at the time. It did not compete with any other sessions. You could not hide from interaction with your fellow attendees.
  4. There was no trade fair activity.
  5. Dinner with a Stranger encouraged people to speak to new people.
  6. The sessions were not tutorials or workshops but rather quick presentations to inspire conversation in the open space, over dinner and in the bar.
  7. No one was SELLING!
  8. Lots and lots of hugging. Franck asked if it was a German thing or a British thing, which prompted lots of blank looks. Someone said “It’s an Agile thing”.
  9. Lightning talks provided everyone with an opportunity to take centre stage.
  10. The audience could change the programme. The lightning talks were extended by thirty minutes based on a two sentence interaction.
  11. Lots of talking points, like the map of Europe with pins, and the Post-It note pictures.
  12. Groups ate lunch together in the hotel.
  13. The kids and spouse track brought something very special to the atmosphere of the conference. It felt less like work. More friendly somehow.
  14. The stars of the conference were the participants rather than the speakers.
  15. We had Marcin, Marc, Oana, Franck, Ivana and Olaf to act as emotional glue.

#ALE2011 reminded me of the first two Agile Development Conferences in Salt Lake City back in 2003/2004. I spent some time trying to think of why. I have to thank Brian “The Token American” Marick for helping me see through the noise in my mind. At the most timely moment, Brian mentioned a book in which the purpose of conferences is suggested as a place where “experts can come together to form collaborations” and “people can be energised for the future”.

What made ALE2011 successful were the same things that made ADCv1/2 (Agile20xx v0.1) a success. The expert practitioners had time to talk and form collaborations. Time to learn what people were interested in beyond the stuff they are known for. Time to explore the possible areas of collaboration and whether you want to collaborate. In short, time to form relationships. 

Agile20xx serves two purposes.

  • A meeting place for expert practitioners.
  • A trade fair.

Those engaging in the trade fair do not have the time to renew the relationships or build new ones. Agile20xx does the trade fair well and hopes the practitioners can look after themselves. The reality is that many of the practitioners at Agile20xx are there for the trade fair. Practitioners from Europe, Asia and South America travel all the way to the USA (and submit to invasive search by Homeland Security) to talk to themselves. Something that may not continue now that we have a place in Europe where we can do the same without the jet lag.

Agile20xx needs to consider whether it only wants to be a trade fair. I hope that the future organisers of Agile20xx learn from ALE2011, otherwise we need to create an Agile Lean Americas Network with ALA20xx as a place where “experts can come together to form collaborations”.

—————————————————————————————————————————————–

This post is dedicated to those who made ALE2011 happen… Alex, Aleksey, Andrea, Catia, Christian, Christiane, Eelco, Erik, Franck, Greg, Greg, Ivana, Jaume, Jens, Jule, Ken, Marcin, Marc, Marc, Micheal, Michael, Mike, Monika, Natalia, Nick, Oana, Olaf, Pablo, Sergey, Sven, Will and Wolfgang. (I stole the list from here.)

Thank you Jurgen for lighting the match that set fire to this group.

Special thanks go to Monika and Oana for giving me the opportunity to show I can “respond to change”.

Very special thanks to Olaf for being the linchpin and showing me a slice of life in West Berlin.


UK Government IT passes up on 50% discount.

Disclaimer : None of the numbers in the post are researched!

The UK Government contracts out its software development to systems integrators. In order to stay competitive, the systems integrators off-shore the software development to places in the world where it is cheaper to hire developers. Places like India, China and Eastern Europe. And quite frankly, if they were not driving down the costs, we the British tax payers would want to know why.

The reality is that the cost savings of off-shoring are not as mouth-wateringly good as they initially look. The off-shoring argument is simple. A developer costs £50,000 in the UK and only £10,000 in the off-shore location, i.e. an 80% discount. (Made up numbers, before anyone asks.)

However anyone who has worked with off-shore teams will know that they bring additional overheads.

  1. You need two sets of (middle) management: one each for the on-shore and off-shore teams.
  2. You need more formal processes (additional process overhead).
  3. Time zones mean you only have a fraction of the day when both teams are in the office.
  4. Small delays creep in due to having to wait for the other team to be available. The delays have a compound effect.
  5. There are often inefficiencies in communication due to language.
  6. Off-shore companies often have higher levels of staff turnover which means you have a lesser chance of retaining high quality staff and the benefit of gelled teams.
  7. More chance of building the wrong thing.

Off-shoring people are smart and they have worked hard to address these issues. The reality is that the savings are not in the 80% ball-park but more likely to be in the 25% – 30% zone. But that is before we include the “building the wrong thing” factor.

The real benefit of off-shoring is liquidity: access to a greater pool of quality programming talent. Those hiring during the Dot-Com era will remember the days when it was impossible to hire decent developers. I remember times when you had to decide on the day whether to hire a Java developer, or they would be snapped up by someone else.

For most UK organisations, the economic argument for off-shoring sort of makes sense. For UK Government IT, it does NOT make sense. The UK Government gets a 50% discount on hiring developers in the UK. Once this is factored in, the arguments for off-shoring fall apart.

Where does the 50% discount come from? Easy. Developers in the UK pay income tax and national insurance in the UK. Off-shore developers do not pay tax in the UK. In fact, given that the systems integrators are mostly US corporations, it is likely that almost NONE of the investment by UK Government IT will come back as tax revenue.
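Putting the made-up numbers together makes the arithmetic explicit. Every figure here is invented, per the disclaimer; the overhead number is picked purely to land in the 25-30% zone:

```python
# All numbers invented, as per the disclaimer at the top of the post.
uk_developer = 50_000        # cost of a UK developer
offshore_developer = 10_000  # cost of an off-shore developer

headline_saving = 1 - offshore_developer / uk_developer
print(f"headline saving: {headline_saving:.0%}")    # 80%

# Overheads (duplicate management, extra process, delays, turnover, rework)
# add cost back; a figure in this region lands in the 25-30% zone.
overheads = 26_000
effective_saving = 1 - (offshore_developer + overheads) / uk_developer
print(f"effective saving: {effective_saving:.0%}")  # 28%

# For government, roughly half a UK salary comes straight back as tax,
# whereas an off-shore salary returns nothing.
uk_net_of_tax = uk_developer * 0.5
print(uk_net_of_tax < offshore_developer + overheads)  # True: off-shoring costs more
```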

It’s time for UK Government IT to have some tough discussions with the systems integrators so that we can claim our 50% discount.

There is an argument that we do not have enough developers in the UK. Perhaps this will provide the stimulus to invest in IT developer skills in the UK.


Debuggers work backwards.

Yesterday I had to debug a programme written by a colleague. I realised that the majority of debugging is analysis: understanding how the code works. As a result, I realised that the debugger runs in the wrong direction.

I was trying to work out why there was a problem with the output. Ideally, I would like to start with the output and move backward through the execution of the code until I reach the point where the problem occurred. The analysis of a bug is similar to analysis in Feature Injection.

For example, I want “A” as an output.

A = B + C

Therefore I need B and C.

B = D * C

C = E + F

D, E & F are all inputs to the system.

To help with the analysis of a bug, the debugger should move backward through the execution of the code. I should be able to put a watch on output values, and the debugger should show me when the values are updated. If I start watching A, the debugger should (automatically) show me B & C. When B is updated, it should show the values in D & C.
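To make the idea concrete, here is a toy sketch. The names (`Trace`, `explain`) are my invention, not any real debugger’s API: it records execution forward, then answers the “where did A come from?” question backwards.

```python
# A minimal sketch of a 'backward watch': record each assignment and its
# inputs as the programme runs forward, then walk the trace in reverse
# from the output. Trace and explain are invented names for illustration.

class Trace:
    def __init__(self):
        self.steps = []  # (variable, value, inputs) in execution order

    def assign(self, name, value, inputs=()):
        self.steps.append((name, value, inputs))
        return value

def explain(trace, target):
    """Print how `target` was produced, expanding its inputs backwards."""
    wanted = {target}
    for name, value, inputs in reversed(trace.steps):
        if name in wanted:
            print(f"{name} = {value}  <- {', '.join(inputs) or 'input'}")
            wanted |= set(inputs)

t = Trace()
D, E, F = 2, 3, 4                     # inputs to the system
C = t.assign("C", E + F, ("E", "F"))  # C = 7
B = t.assign("B", D * C, ("D", "C"))  # B = 14
A = t.assign("A", B + C, ("B", "C"))  # A = 21
explain(t, "A")  # shows A, then the Bs and Cs (and their inputs) behind it
```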

Currently the debugger runs forward in the direction of execution. As a result, I have to search for variables, get to understand the code and do all sorts of stuff to analyse where the problem is.

I haven’t done debugging of real code in years, err, cough, decades. When I used to do it we did not have debuggers and had to step through the code instead. It was impossible to work backwards using the original manual process. However, this process has been automated and updated until we have the current “engine in front of the car because the horse was in front of the cart” design.

Am I missing something obvious, or is it time to move the engine behind the car where it really should be?