Last week I read a nice post by David Anderson about the Information Generation Process. I realised that, apart from giving the thing I was talking about a bad name, I had also failed to explain it.
Most of the literature I've read about Process Improvement, including Lean, focuses on the delivery of the product or service. None of it really focuses on the flow of information in the process. Unless you consider the information flows, your decisions may be severely sub-optimal.
As an example, consider the software product and information flows in software development. Traditional software development has an analysis phase, followed by parallel software development and test preparation phases, which come together in a test execution phase. Focusing on the software only, parallelising software development and test preparation makes perfect sense, as it reduces the time to deliver the software… unless you consider the information flowing into the system. The test preparation phase involves creating detailed examples that will be used to test the software. Creating these examples results in the discovery of details about how the system needs to behave. The information in those examples needs to be built into the software; in other words, it needs to be incorporated into the software development phase before the software can leave the test execution phase. When the two phases run in parallel, the only way that information is guaranteed to get into the software development process is as a "bug".
In other words, focusing on the software product only results in parallelising software development and test preparation, which is the optimal way to deliver software INTO test execution. When you consider the information flows in your system, you realise that the optimal way to deliver software OUT of test execution is to place test preparation before software development. In other words, use "Specification by Example" or "Automated Acceptance Testing".
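To make the ordering concrete, here is a minimal sketch of Specification by Example. The discount rule, names and numbers are entirely hypothetical, invented for illustration; the point is only the sequence: examples first, implementation second, with the examples doubling as automated acceptance tests.

```python
# 1. Test preparation: concrete examples captured BEFORE development.
#    Writing these surfaces details of required behaviour (e.g. what
#    happens at exactly 100.00?) that would otherwise re-enter the
#    process later as "bugs".
EXAMPLES = [
    # (order_total, expected_discount)
    (50.00, 0.00),    # below threshold: no discount
    (100.00, 10.00),  # at the threshold: 10% applies
    (200.00, 20.00),  # above the threshold: 10% applies
]

# 2. Development: the implementation is written to satisfy the examples.
def discount(order_total: float) -> float:
    """Return the discount: 10% on order totals of 100.00 or more."""
    return round(order_total * 0.10, 2) if order_total >= 100.00 else 0.00

# 3. Test execution: the same examples now run as acceptance tests.
for total, expected in EXAMPLES:
    assert discount(total) == expected, (total, expected)
print("all examples pass")
```

Because the examples were created before development, the information they contain flows forward into the implementation rather than backwards out of test execution.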
This is one example. The general rule for optimising an information arrival process is that “All information needed to make a commitment should be available before a commitment is made.” The Feature Injection process is one way of implementing this principle. “Sense and Respond” describes a similar process for service organisations. Feature Injection starts with the value, whilst “Sense and Respond” starts with the customer.
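The commitment rule can be sketched as a simple readiness check, in the spirit of a "Definition of Ready". The field names below are hypothetical, chosen purely to illustrate the principle that a work item is only pulled into development once all the information it needs has arrived.

```python
# Information a team might require before committing to a work item.
# These names are illustrative, not prescribed by any method.
REQUIRED_INFORMATION = {"business_value", "acceptance_examples", "stakeholder"}

def ready_to_commit(work_item: dict) -> bool:
    """True only if every piece of required information is present and filled in."""
    provided = {key for key, value in work_item.items() if value}
    return REQUIRED_INFORMATION <= provided

story = {
    "business_value": "reduce churn",
    "acceptance_examples": None,        # information not yet arrived
    "stakeholder": "support team",
}
assert not ready_to_commit(story)       # missing examples: no commitment yet

story["acceptance_examples"] = ["given ... when ... then ..."]
assert ready_to_commit(story)           # all information available: commit
```

The check is deliberately passive here; as the post notes, Feature Injection and "Sense and Respond" go further by actively discovering the missing information rather than merely waiting for it.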
Both Feature Injection and “Sense and Respond” are “knowledge discovery processes” that David refers to in his post.
Neither process is passive, simply waiting for information to arrive. Both actively discover the information needed in a structured manner, guided by the goal of delivering (customer) value.
I like to pride myself on the fact that I give things really, really terrible names. "Information Arrival Process" is one of my worst: a poncy name for considering how information arrives into a system as well as how the "product" flows through it. The aim is to avoid information loops, where information flows backwards in the process, and to ensure the process is not halted whilst it waits for information to arrive. In effect, endeavour to create a flow of information in the opposite direction to the "product", as well as the flow of the "product" itself.
Note: “Product” is the thing delivered to the customer that generates value for them.
October 18th, 2011 at 12:36 pm
Thank you Chris, for an easy to read, to the point and clarifying article.
I particularly like that you point out that this is not a passive but an active approach to information handling.
I would like to point out that if you combine a practice which makes it possible to handle information flowing in the opposite direction, such as ATDD or BDD, with Real Options, you open the door to pro-active management on the scope dimension.
Which in turn hopefully gives you the option to do less and focus more on the stuff that produces the most value.
Am I making sense?
October 18th, 2011 at 1:58 pm
Thank you for your comment.
It makes sense to me.
It's my view that the business analyst adds most value through the features that are not built, yet still deliver the desired value.
October 20th, 2011 at 6:53 am
All very well, but your suggestions attempt to ameliorate a contrived problem (albeit one very common to Analytic-minded organisations everywhere). The contrived problem of which I speak? The separation of testing from development (into e.g. silos). Combine the two and the "problem" evaporates.
Have a nice day 🙂
October 20th, 2011 at 9:12 am
Please could you help me understand why the problem is contrived? I still encounter many organisations (that need a nudge to the right) that still parallelise testing and development. As you say, combining testing into the development process (a known solution) addresses this problem.
October 20th, 2011 at 10:25 am
Maybe contrived is not the best choice of word. How about "specious" as an alternative? And I'm not directing the contrived or speciousness claim at your post so much as at the "many organisations" you mention. I posit that they would not have to address the problem you describe, one of their own making (hence contrived in that sense), if they did the "sensible" thing and combined testing into the dev process.
October 20th, 2011 at 10:37 am
Yep. I agree. They could simply do the “sensible” thing and combine development and testing on the basis that many organisations have already done that with much success. There is a danger that this leads to Cargo Cult agile. Consider “studying the information arrival” as another argument for doing the sensible thing.
The problem is that many organisations do the "sensible" thing. They do it based on their context, which in their world results in sub-optimal solutions. Arguments about combining or parallelising should be based on reasoning rather than on doing what is "common sense" or "sensible".
I have had too many discussions in the past where one “sensible” idea competes with another opposing “sensible” idea.
October 20th, 2011 at 10:57 am
Bit of history around this example.
At Agile 2007, David Anderson said (or probably someone said that he said) that he was not interested in whether testing came before or after development. He was interested in the process, and in using Kanban to improve it. I used information arrival to demonstrate that test preparation needed to occur before development.
October 20th, 2011 at 6:55 am
P.S. Your blog comment clock is 1 hour slow. :}
October 20th, 2011 at 9:12 am
Thanks. I have reset the time zone to London. 🙂
January 10th, 2012 at 2:07 am
[…] with feedback and direct input to this “management mind map“: Marc Bless, Ivana Gancheva, Chris Matts, Olav Maassen, Liz Keogh, Ken Power, and to people whose ideas we build upon: Bjarte Bogsnes, […]
September 25th, 2016 at 12:46 am
[…] Matts calls when information is discovered (in the development lifecycle) “information arrival“. He has a great diagram in his European Testing Conference keynote “We don’t […]