Monthly Archives: March 2015

“Left Shifting” a culture.

Yesterday, Richard Warner and I ran a session at Cukeup. The session included a “Left Shifting” exercise. This exercise was based on the Seeing Culture post.

I explained that I had recorded the different behaviours I had observed in an “Innovation” (or Risk Managed) culture, and the corresponding behaviours I had observed in a “Traditional” (or Risk Averse) culture. Richard and I then ran the exercise:

  1. Each group was given a cut-down set of the behaviours listed in the Seeing Culture post.
  2. Each group selected a behaviour pair.
  3. The group then discussed what they could do to shift the culture from the behaviour on the right to the behaviour on the left, or "Left Shifting".

Examples discussed so far:

  • Count the number of times people praise “Hero” or “Dragonslayer” behaviour versus collaborative behaviour.
  • Create a game where you exchange a risk card for an option card every time you transfer a risk to someone else.
  • Publish how much has been invested based on hypothesis testing versus HiPPOs (Highest Paid Person's Opinions).

Please leave other examples of things you could do as comments, and examples of behaviour pairs as comments on the Seeing Culture post.


"Inclusion": the BDD principle we call "The Three Amigos"

There was an amazing panel discussion at Cukeup yesterday. If you are interested in BDD, I would recommend watching it when it is posted. For me, the main takeaways were:

  1. BDD is a centered community rather than a bounded community. This means that there is no division between those that do BDD and those that do not. Instead, teams are closer to the principle or further away.
  2. The BDD principles should be defined as a set of examples.

This is a great opportunity to explain “Break the Model” in Feature Injection.

We start by spotting an example, e.g. "The Three Amigos". We then create an exemplar:

GIVEN we can identify all of the users of an investment

AND we can access those users

AND we can confirm the output they require

AND we can confirm their needs for the output

AND we can confirm the value to the business of satisfying the users' needs

WHEN we prepare our set of examples

THEN The Business Analyst, The Tester and The Developer should create the examples together

AND they should ask the users to confirm their correctness.

This one example is our model, so we can easily spot another example that breaks this model.

GIVEN we cannot identify all of the users of an investment

AND we cannot access those users

WHEN we prepare our set of examples

THEN The Product Manager, The UX Designer, The UX Researcher, The Big Data Analyst, The Business Analyst, The Tester and the Developer should create the examples together

AND they will confirm the examples using usability tests and MVT.

We now have two examples and we can abstract out a model. The purpose of the model (Olaf) is NOT to define the principle. The purpose of the model is to help us spot other examples.

The model we abstract is "Include someone representing every aspect of the development process".
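To make the mechanics concrete, here is a minimal Python sketch of the "Break the Model" loop (the field names are invented for illustration): the examples are primary, and the model only exists to help us spot the next example.

```python
# Toy sketch of "Break the Model": the examples are the specification;
# the model is repeatedly re-abstracted from them.

examples = [
    # Founding example: "The Three Amigos".
    {"who": {"Business Analyst", "Tester", "Developer"},
     "context": "users can be identified and accessed"},
]

def model(example):
    """First abstraction: the three amigos create the examples."""
    return example["who"] == {"Business Analyst", "Tester", "Developer"}

new_example = {
    "who": {"Product Manager", "UX Designer", "UX Researcher",
            "Big Data Analyst", "Business Analyst", "Tester", "Developer"},
    "context": "users cannot be identified or accessed",
}

if not model(new_example):        # the new example breaks the model...
    examples.append(new_example)  # ...so we keep the example and re-abstract:
    # "include someone representing every aspect of the development process".
    # The model changes; the examples (and the principle's name) stay.
```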

We now have another example:

GIVEN we are a startup containing only a product manager and a developer

AND we have customers prepared to work with us

WHEN we create examples

THEN the product manager, the developer and a customer should create an example together

AND we test the examples using usability tests and MVT.

We now abstract a new model: "Inclusion". The principle of BDD is that it is better to include as many people in the creation of examples as possible. It's a common-sense thing, though: too many people in a single workshop can be disruptive and counter-productive. Those people not in the workshop should be engaged to confirm the correctness of the examples, and be encouraged to find additional examples that break the model.

Now that we have a model called "Inclusion", it helps us to focus on trying to find examples where someone should be excluded from the process of example creation.

"Inclusion" is a useful name for the Olaf/Model/Abstraction. It is not a good name for the principle, which is "The Three Amigos", named after the first (founding) example of the principle.

The model does not define the principle. The examples do. We can add to the examples and as we do, the model/abstraction will change. The principle name will stay the same.

I’m looking forward to seeing the “BDD by Example” wiki.


Risk Aversion and Risk Management – A case study.

WARNING: I’ve been lazy. It is quicker and easier ( and more fun ) to write this than hunt through the inter-web to find out if someone else has already written this down (I’m kinda assuming Troy or Liz have). I just need something to refer to.

I have been helping a waterfall Scrum* project adopt some Agile practices. Unfortunately the project had reached the point where there was a large backlog of manual User Acceptance Testing (UAT) to be done. During the week I had a great conversation with the test manager for the project (the person producing the management reports). I realised that it was the perfect case study for understanding the impact of culture (risk aversion versus risk management) on a project.

The test manager categorises all bugs as either Priority 1 or Priority 2. All of the Priority 1 and 2 bugs need to be fixed and signed off for the UAT to come to an end. Note that it is not desirable to time-box the UAT on a waterfall project, as this will drive the wrong behaviours. The inspiration for this post came from a conversation on whether Priority 1 or Priority 2 bugs should be fixed first.

Managing the UAT

In order to manage the UAT, the test manager needs two graphs.

  1. A burn-down of the tests to be run.
  2. A Cumulative Flow Diagram showing bugs as they are opened and closed.

The graphs below illustrate this point.

[Figure: the test case burn-down and the bug cumulative flow diagram]
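A minimal sketch of how these two graphs could be produced. The numbers are invented, chosen only to give the curves the right shapes:

```python
# Invented data: a 10-day UAT with 100 test cases and 30 bugs.
import matplotlib.pyplot as plt

days = range(10)
tests_remaining = [100, 85, 70, 55, 40, 25, 12, 0, 0, 0]  # burn-down
bugs_opened = [0, 5, 11, 17, 22, 26, 29, 30, 30, 30]      # cumulative
bugs_closed = [0, 1, 3, 6, 10, 15, 20, 24, 28, 30]        # cumulative

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(days, tests_remaining, "b--")
ax1.set(title="Test case burn-down", xlabel="Day", ylabel="Tests left to run")
ax2.plot(days, bugs_opened, "k-", label="Bugs opened")
ax2.plot(days, bugs_closed, "g-", label="Bugs closed")
ax2.set(title="Bug cumulative flow", xlabel="Day", ylabel="Bugs")
ax2.legend()
plt.show()
```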

The UAT is over when the bugs-closed line (green) intersects the bugs-opened line (black), assuming zero outstanding test cases that have not reached the end of the test cycle.

There are two key milestones for a UAT.

  1. The most important milestone is reaching the end of the UAT (Obviously).
  2. The second milestone is when all test cases have reached the end of their cycle, even if they contain bugs and are not signed off. The burn-down for this is the blue dotted line. This milestone is so important because it represents the point after which further bugs are unlikely to be raised; bugs discovered after this point are likely to have been introduced as a result of fixing another bug (see the sketch below).
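A hypothetical helper (reusing the invented series from the sketch above) showing that both milestones fall out of the numbers the test manager already tracks:

```python
# All three series are cumulative, indexed by day of the UAT.
def milestones(tests_remaining, bugs_opened, bugs_closed):
    # Milestone 2: every test case has reached the end of its cycle.
    cycles_done = next(day for day, left in enumerate(tests_remaining)
                       if left == 0)
    # Milestone 1: end of UAT -- all bugs closed and no tests outstanding.
    uat_end = next(day for day in range(1, len(bugs_opened))
                   if bugs_opened[day] == bugs_closed[day]
                   and tests_remaining[day] == 0)
    return cycles_done, uat_end

# With the series above: milestone 2 falls on day 7, the end of UAT on day 9.
```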

The impact of Culture

So what does this have to do with culture?

In a risk averse culture, we avoid or ignore risk. We would want to demonstrate our competence as soon as possible. This means we would want to demonstrate progress as soon as possible to gain the confidence of our managers.

In a risk managed culture, we would want to address the riskiest elements of the work first. We would be more interested in doing what is right for the organisation than demonstrating our own competence quickly.

Opening Bugs

So how would this impact the testing cycles?

If we are a risk averse tester we would want to demonstrate progress. We would want to show that we were getting through the test cases as quickly as possible.

Our focus would be on the pink line as this shows progress to the end of the UAT, the most important milestone.

To do this, we would pick those test cases that are easiest to sign off. We would pick those test cases that are least likely to contain bugs because they are quick and easy.

Furthermore, we would want to close bugs and sign-off test cases in preference to finding new bugs.

A risk managing tester would want to find bugs as soon as possible. Our focus would be on the blue dotted line. We would want to get to the end of all the test cases so that we would know most of the bugs in the system. For us, the most important thing would be to see the number of bugs raised flatten off as this allows us to manage the fixing of the bugs.
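Purely illustrative (the backlog and its fields are invented): the two cultures run the same set of test cases in a different order.

```python
# Each test case has an effort and a rough likelihood of uncovering a bug.
test_cases = [
    {"name": "TC1", "effort": 1, "bug_likelihood": 0.1},
    {"name": "TC2", "effort": 5, "bug_likelihood": 0.8},
    {"name": "TC3", "effort": 3, "bug_likelihood": 0.5},
]

# Risk averse: quick, easy sign-offs first -- early progress on the pink line.
risk_averse = sorted(test_cases, key=lambda t: t["effort"])

# Risk managed: bug-prone cases first -- flatten the bugs-opened line early.
risk_managed = sorted(test_cases, key=lambda t: -t["bug_likelihood"])
```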

The graphs below demonstrate the impact of culture on the shapes of the lines. They assume the same project, with an identical number of bugs discovered.

[Figure: bug discovery and closure curves in a risk averse versus a risk managed culture]

Both projects finish on the same date because they contain the same number of bugs, requiring the same amount of effort to fix. However, consider the perspective of the test manager. In the risk managed culture, they know that they have discovered most of the bugs much earlier, and can take appropriate resourcing decisions.

Closing Bugs

So how would this impact the fixing of bugs?

If we are a risk averse developer we would want to demonstrate progress. We would want to show that we were getting through the bugs as quickly as possible.

Our focus would be on the green and pink lines as these show progress to the end of the UAT, the most important milestone.

To do this, we would pick those bugs that are easiest and quickest to fix and sign off. We would defer bugs that are complex, or take a long while to fix. We would potentially defer those bugs that block us reaching the end of the test cycles if they are complex or take a long time to fix.

Furthermore, we would want to close bugs and sign-off test cases in preference to finding new bugs as this looks better on management reports.

A risk managing developer would want to find bugs as soon as possible. Our focus would be on the blue dotted line (fixing bugs that block us from reaching the end of test cycle), and on fixing bugs that are most likely to result in further bugs, namely the complex and large bugs. The number of bugs closed and the number of test cases signed off would be of secondary importance.
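The equivalent sketch for the developer (again with invented fields): the risk managed developer takes blockers and complex bugs first, even though this looks worse on the management report at the start.

```python
bugs = [
    {"id": 1, "blocks_test_cycle": True,  "complexity": 8},  # blocker
    {"id": 2, "blocks_test_cycle": False, "complexity": 1},  # quick win
    {"id": 3, "blocks_test_cycle": False, "complexity": 6},
]

# Risk averse: quick wins first, complex bugs deferred.
risk_averse = sorted(bugs, key=lambda b: b["complexity"])

# Risk managed: blockers first, then the bugs most likely to spawn more bugs.
risk_managed = sorted(bugs, key=lambda b: (not b["blocks_test_cycle"],
                                           -b["complexity"]))
```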

Assuming a risk managed tester, the graph below shows how the culture the developer works in impacts the way bugs are fixed.

[Figure: bug closure curves for a risk averse versus a risk managed developer]

The risk averse developer would want to work on easier bugs that show them making progress. As a result there might be a delay (marked as "A" on the graph) in finding all the bugs in the system. This means that the risk managed developer would know, "A" units of time earlier, with more certainty how much work needs to be done.

The risk managed developer trades uncertainty at the start of the testing cycle for more certainty at the end of the test cycle. Conversely, the risk averse developer pushes more uncertainty (risk) to the end of the test cycle.

The risk averse behaviour makes it very difficult to manage resource loading for the test cycle whereas risk managed behaviour allows effective management of resource loading.

Simply put, risk managed behaviour gives more options than risk averse behaviour.

A final point on this: consider the projection (using a velocity measure) of the end of the UAT. The graph below shows that risk averse behaviour results in larger and larger time extensions at the end of the testing cycle (A -> B), whereas risk managed behaviour results in smaller and smaller reductions of the time to the end (A -> B).

[Figure: projected end of UAT under risk averse and risk managed behaviour]
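A small sketch of why the two projections drift in opposite directions, assuming a simple rolling-average velocity measure:

```python
# Project days to the end of UAT from recent bug-closure throughput.
def projected_days_remaining(closed_per_day, bugs_outstanding, window=3):
    recent = closed_per_day[-window:]
    velocity = sum(recent) / len(recent)  # bugs closed per day, recently
    return float("inf") if velocity == 0 else bugs_outstanding / velocity

# Risk averse: easy bugs were fixed first, so late velocity is low and
# the projection keeps growing (A -> B on the graph).
print(projected_days_remaining([8, 7, 6, 3, 2, 1], bugs_outstanding=12))  # 6.0
# Risk managed: hard bugs were fixed first, so late velocity is high and
# the projection keeps shrinking.
print(projected_days_remaining([1, 2, 3, 6, 7, 8], bugs_outstanding=12))  # ~1.7
```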

In conclusion, risk managed behaviour does allow management of testing, whereas risk averse behaviour does not. Anyone managing a testing cycle should understand that attempting to demonstrate early progress leads to a loss of management and control.

*These days it is really easy to find waterfall projects. They are normally called "Flagship Agile" projects. The irony escapes them. A culture-changing Agile project would be called an "Agile Canoe" or "Agile Raft" or "Agile Rowing Boat". Certainly not the flagship, which is normally an aircraft carrier.


The Responsive Manifesto

I am increasingly working with groups other than IT who want to adopt Agile. In addition, Agile is becoming an increasingly toxic term, causing division and fear within the organisations that I work with. On that basis, as a bit of fun last week, Richard Warner and I wrote our own "Responsive Manifestos". Both were very similar. This is mine.

We chose the word responsive because it is a term that has resonance with the business. There is already a concept of responsive marketing. Responsiveness is the goal; agility is the way that IT has chosen to achieve it.

Choosing to break with Agility allows the business to focus on the goal rather than on the implementation of Scrum, Kanban or Lean as an end point for Agility.

[Figure: The Responsive Manifesto]


PM/UX/Design/Data/Agile – Pulling it all together

For the past couple of months, Richard Warner and I have been working with a number of teams to help them better work together. Each of the teams contains a Product Manager, UX Researcher(s), UX Designer(s), and Big Data Analysts who are working in an Agile manner.

Last week Richard and I sat down to work through one of the problems we are facing. A number of teams are using some kind of "Agile Canvas" derived from the startup community. It was our feeling that these canvases are driving the wrong behaviour in the teams. The most popular canvas with our teams is the one taught by Roman Pichler.

[Figure: the canvas taught by Roman Pichler]

You start with a strategic statement of intent. Then you list the Target Group (Customer Segment/Market), Needs, Product, and finally Value. As Richard and I were chewing this over, we realised that although this was a great canvas for a startup, it led to the wrong behaviour for enterprises. Value is an afterthought. For a startup, you establish your product in the market and then think about revenue models, which often means being bought out by another, more established company (the Instagram approach). As an established enterprise, this is a risky approach, as you may develop a product that is not sustainable. If you pull the product, your customers will stop trusting you, which will impact your existing business.

At this point we decided to use "Break the Model" from Feature Injection to model the different orders in which you perform the tasks. We realised that there are a number of different valid paths you could take based on the nature of the insight driving your discovery process.

[Figure: whiteboard photos of the different valid discovery paths]

We realised that a design hypothesis that we test potentially contains a number of sub-hypotheses.

[Figure: the sub-hypotheses contained in a design hypothesis]

A design hypothesis needs to contain each of these sub-hypotheses. However, the order in which you build up the design does not matter as long as the Feature/Design is the last thing you consider.

[Figure: the Feature/Design always comes last]
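A tiny sketch of that constraint (the labels are ours, not a formal notation): any ordering of the sub-hypotheses is valid as long as the feature comes last.

```python
REQUIRED = {"outcome", "market", "need", "feature"}

def valid_order(order):
    """True if all four sub-hypotheses appear and the feature is last."""
    return set(order) == REQUIRED and order[-1] == "feature"

print(valid_order(["market", "need", "outcome", "feature"]))  # True
print(valid_order(["feature", "outcome", "market", "need"]))  # False
```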

Richard and I looked at all the different combinations of order in which you could create the design hypothesis.

[Figure: the valid orderings for creating the design hypothesis]

So here is our working hypothesis. Depending on your insight, you first need to establish the outcome, market and needs. Once you have these three, they form the design brief for the UX designer to design the "feature".

The feature should always be the last part of the hypothesis. If you start with a "request for a feature", this should be regarded as an insight (tea-bag). Whichever of the outcome, customer or needs you leave until after the feature will likely be ignored. If you end with the outcome, you may be subject to reputational risk if you have to pull the feature because it is not sustainable (not a problem for startups looking for an IPO exit). If you end with the customer, you will not know who should like the feature and who should not; you may lose an important customer segment without realising it. If you end with the need, you may deliver a product that nobody wants, like the Segway or Betamax. Betamax was higher quality but did not satisfy the need of fitting a movie on a single tape.

[Figure: the risks of ending with the outcome, customer or need]

The PM is responsible for establishing the correct outcome. Data Analysts and UX Researchers are responsible for providing insights and customer segmentation (market). The PM, UX Researcher, Data Analyst and UX Designer "pair" to identify the customer segment and associated insight that is most likely to deliver the outcome. From the insights, they create hypotheses about the customer needs to be satisfied. Note that sometimes the process starts with an outcome, sometimes with a need, and sometimes with a customer segment. It should never start with a feature. If the starting point is a request for a feature, this should be treated as an insight.

[Figure: roles and responsibilities in the discovery process]

The combination of Outcome, Customer Segment (Persona) and Needs forms the design brief, e.g. "In order to increase blah, as a blah, my need is blah." (a Job Story). There may be several job stories, e.g. the need to learn quickly at the expense of a slower process, and the need to learn more slowly in order to have a faster process. Each design brief may result in several designs.

[Figure: a design brief leading to several designs]
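As a sketch, the design brief can be captured as data and rendered as a Job Story. The class and example wording below are ours, purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class DesignBrief:
    outcome: str  # "increase blah"
    persona: str  # "a blah"
    need: str     # "blah"

    def job_story(self) -> str:
        return (f"In order to {self.outcome}, as {self.persona}, "
                f"my need is {self.need}.")

brief = DesignBrief("increase repeat orders", "a weekly shopper",
                    "to reorder my usual basket in one click")
print(brief.job_story())
# In order to increase repeat orders, as a weekly shopper,
# my need is to reorder my usual basket in one click.
```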

It is the responsibility of the PM and UX Researcher to confirm whether the designs have achieved the design brief. The designs can then go through Melissa Perri's "Bad Idea Terminator" process. We start with the fastest and cheapest ways to invalidate a hypothesis, gradually moving on to more expensive methods (e.g. team/expert review, usability testing, then MVT, and finally full production).

[Figure: the "Bad Idea Terminator" validation funnel]
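A minimal sketch of that funnel, with invented step names and costs: run the cheapest invalidation first and stop as soon as the hypothesis fails.

```python
# Invented costs; the ordering is the point: cheapest checks run first.
CHECKS = [("team/expert review", 1), ("usability test", 5),
          ("MVT", 20), ("full production", 100)]

def terminate_bad_ideas(survives):
    """`survives` answers whether the hypothesis passes a given check."""
    spent = 0
    for name, cost in CHECKS:
        spent += cost
        if not survives(name):
            return f"invalidated at {name} (total cost {spent})"
    return f"survived all checks (total cost {spent})"

# A hypothesis that fails at usability testing costs 6 units, not 126.
print(terminate_bad_ideas(lambda check: check != "usability test"))
```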

As soon as we get a UX design, we can create the Given-When-Then scenarios to describe its behaviour.

[Figure: Given-When-Then scenarios for a design]

This process will help us split the design (Epic) into several Epics. Once we have the Epics with GWT scenarios, we can easily slice them into user stories.

[Figure: slicing Epics into user stories]

At this point it is pretty easy to identify the APIs (services) needed to deliver the application.

This whole process is summarised as follows:

[Figure: summary of the whole process]

Richard and I are looking for feedback on where we got it wrong. 🙂