# It’s not a model, it’s an Olaf.

At ScanAgile in Helsinki (great conference), Olaf Lewitz pointed out that the “model” in the “break the model” part of Feature Injection is not a model as we would normally describe it.

My understanding of the term model is “A simplification of reality”. This was my understanding when I started using the term.

Olaf pointed out that what we have in “Break the model” is “a summary of examples”. The important part is the process used to create it: we incrementally add examples, adjusting the summary each time we add an example that differs in some way. We do not create our model by simplifying the reality of the situation. We do not have a good word for this, so we are calling it an Olaf until a better name is discovered. In effect, we now have “Break the Olaf” in Feature Injection, which seems a bit harsh on Olaf, so I think we will stick with “Break the Model” until we find the right name.

So what is an Olaf? An Olaf is a set of examples that represent our reality. We use it as a filter to identify new examples that do not fit in the set.

For example: we have a white square and a black square in our set. We then spot a red square. We know that it does not exist in our current set, so we add it.

In its simplest form, the Olaf is the set of examples itself. However, it becomes hard to spot new examples by comparing them to the existing set of examples. To make it easier to spot new examples, we abstract properties and behaviours.

“Break the model” means we have found an example that does not fit into our set of examples.

Our Olaf is a black square.

We spot a white square which breaks the Olaf.

Our Olaf is now a black or white square.

We spot a red square which breaks the Olaf. We abstract black, white and red into “coloured”.

Our Olaf is now a “coloured” square.

We spot a triangle, which breaks the Olaf. We abstract triangle and square into “straight-sided shapes”.

Our Olaf is now a coloured “straight-sided shape”.

We spot a circle…

The key thing is that the Olaf describes our existing examples so that we can more easily spot new examples. Once we spot an example, we simply add it to the list of examples. To make things easier, we may create an abstraction of the examples.
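The walkthrough above can be sketched in code. This is a minimal illustration only: the `Olaf` class, its method names, and the tuple representation of examples are my own assumptions, not part of Feature Injection.

```python
# Minimal sketch: an Olaf as a growing set of examples,
# used as a filter for spotting new ones.

class Olaf:
    def __init__(self):
        self.examples = set()  # the summary of examples so far

    def fits(self, example):
        """A new observation fits if it is already in our set."""
        return example in self.examples

    def spot(self, example):
        """Compare a new observation to the Olaf; if it breaks it, add it."""
        if self.fits(example):
            return "fits"
        self.examples.add(example)  # the Olaf grows, example by example
        return "breaks"


olaf = Olaf()
print(olaf.spot(("black", "square")))  # breaks (first example)
print(olaf.spot(("white", "square")))  # breaks: black or white square
print(olaf.spot(("black", "square")))  # fits
print(olaf.spot(("red", "square")))    # breaks: time to abstract to "coloured"
```

Once the literal set becomes unwieldy, we would replace the enumerated colours with an abstraction such as “coloured”, exactly as the walkthrough does; the point is that the abstraction summarises the examples, rather than simplifying reality.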

Let me know if you think of a better name for an Olaf.

Currently an “engineering performance coach”, because “transformation” and “Agile” are now toxic. In the past: “Transformation Lead”, “Agile Coach”, “Programme Manager”, “Project Manager”, “Business Analyst”, and “Developer”. Did some stuff with the Agile community. Put the “Given” into “Given-When-Then”. Discovered “Real Options”.

#### 20 responses to “It’s not a model, it’s an Olaf.”

• olaflewitz

Thank you. This made me LOL and blush at the same time. As hilarious as it is useful :-)
Thanks for the clarification.
Take care
Olaf

• Ola Ellnestam

The word that pops into my head is Schema, analogous to Database Schema or Schema as it is used in psychology and cognitive science.

The language would have to change slightly and perhaps become ‘Improve the Schema’ or even ‘Model the Schema’ — instead of ‘Break the model’.

Too abstract?

• flowchainsensei

Nice clarification.

Scenario Modelling (Kahane et al) might refer to this thing as a “scenario”. Not that I’m suggesting it is necessarily a better term, given its potential for confusion.

– Bob

• imaginarytime

What you’re describing has a distinct Kantian vibe to it.

I don’t know if Ola had Kantian schema in mind when he proposed the use of the term ‘schema’, but if we use the term in the Kantian sense it would appear very appropriate.

Model-cum-Olaf is an abstraction, a Kantian, empirical, a posteriori concept, and Schema connects concepts to perceptions so that they have sense and meaning, Sinn and Bedeutung.
http://en.wikipedia.org/wiki/Schema_(Kant)

In Logic, Kant described how a posteriori concepts are created. I’ll quote from http://en.wikipedia.org/wiki/Concepts

“The logical acts of the understanding by which concepts are generated as to their form are:

* comparison, i.e., the likening of mental images to one another in relation to the unity of consciousness;

* reflection, i.e., the going back over different mental images, how they can be comprehended in one consciousness; and finally

* abstraction or the segregation of everything else by which the mental images differ …

In order to make our mental images into concepts, one must thus be able to compare, reflect, and abstract, for these three logical operations of the understanding are essential and general conditions of generating any concept whatever. For example, I see a fir, a willow, and a linden. In firstly comparing these objects, I notice that they are different from one another in respect of trunk, branches, leaves, and the like; further, however, I reflect only on what they have in common, the trunk, the branches, the leaves themselves, and abstract from their size, shape, and so forth; thus I gain a concept of a tree.
— Logic, §6”

Comparison, reflection, abstraction.

It strikes me that you’re describing Kantian transcendental apperception. Constructing Olafs (previously models) is essentially a sense-making process, where we’re trying to bring together the self and the world. http://en.wikipedia.org/wiki/Transcendental_apperception

I would suggest we rename Olaf to concept, and schema is how we synthesize concepts and intuitions, as per Kant.

• Karl Scotland

My initial reaction is that it is still a model, just a deliberately simplistic and knowingly ‘wrong’ model (yes, I know, /all/ models are wrong 😉)

The “Olaf” seems to be more about the design/structure of the model, than the model itself.

So Feature Injection is the Olafication of a model…

Some raw, unformed thoughts on first reading this…
– I *think* this is the same as some concepts I was discussing yesterday afternoon while trying to help a colleague move from an “unbounded” problem, through examples, back to the patterns represented by those examples. That would let him demonstrate direction and progress to his sponsor, and let the team feel they were progressing (rather than just resolving specific examples).

We discussed starting with a single example, then repeatedly asking “can I think of a simpler example?” and “can I think of a more complex example?”, then capturing or removing the “specifics” and replacing them with increasingly abstract alternatives until we felt we were at the “right *natural* level” (not too abstract!).

The concrete (but anonymised) example we used was:

“I want to upgrade from the standard edition of product X to a product bundle *containing* product X plus a bunch of other products”

There are a bunch of varying examples but we wanted to keep the scope narrow enough to deliver something small and valuable in a short space of time so we focused on:

– “Upgrading from a single product to a bundle.”

This was the “right” level of abstraction to implement against (for us)

To give context, we felt “Upgrading” was the “wrong” level of abstraction, as it went too far and contained too many other abstractions.
Here are a couple of other abstractions that covered opposite ends of the scale for the problem…

Most Challenging: “upgrading everything I own”
Simplest: “upgrading to a new edition of the same product”

Chances are we’ll implement the simpler abstraction as part of solving the one we’re actually interested in, but certainly not the more complex one.

So… actually, what you’re doing is breaking successive levels of abstraction: “break your abstractions”.

• Liz Keogh (@lunivore)

Taking examples and deriving abstracted behaviour for them is called “Chunking Up” in NLP. So I think a better name for an Olaf might be an Up-chunk.

Now all you need to worry about is the granularity. How many chunks would an up-chunk chunk if an up-chunk could be chunked up?

• Kent McDonald (@BeyondReqs)

Would need to be careful that “up-chunk” doesn’t become an “up-chuck”.

If “Break the Model” is really refining our list of examples, would it be too simplistic to say “refine the examples”?

• olaflewitz

I like all the suggestions. My primary reason for questioning the word “model” is that people misinterpret it. And it was the reason that I didn’t get the value of this method for a long time…
Analyse, Model, Automate is a pattern that brought us into a lot of trouble—I wrote on that earlier: http://www.agile42.com/en/blog/2012/02/02/agile-inspires-betterness/

The only suggestion that doesn’t carry this risk, IMO, is Liz’s suggestion of an Up-chunk. It’s new (in this context), so you need to explain it (like “feature injection”). It sounds pragmatic, not theoretical, which is exactly what this method is as well.
So, as long as you don’t start calling me by that name…

And I’d like to suggest another line of thought: we’re talking about a set of examples, right? English is rich in alternatives for “set” in the context of groups of animals, which fits especially well since FI already employs a hunting metaphor (“hunt the value”):
– break the flock?
– break the pack…

• Liz Keogh (@lunivore)

The other name I thought of was: creating specific examples of behaviour is called chunking down. So how would you rephrase an example to make it sound like chunking up? You could call it an amplex… that might work; it even sounds like a big amalgamation of examples.

• cyetain

Sounds like you are using abduction to create a hypothesis, then using deduction to create a set of observable “facts” that must be true if our hypothesis is true… then comparing those facts to our understanding of reality for coherence. Assuming we find a coherent hypothesis, we can infer (generalise) an implementation (in code) from the deduced examples and our previous experience in software development.

Kinda like solving a murder mystery, or diagnosing a patient’s illness and then coming up with a treatment plan.

Science is fun.

I really like the part where the model gets thrown away…

• Mika Latokartano

A great blog post that has inspired good commentary. Very much enjoying this and finding it valuable.

I would strongly argue for first considering terms that are established in the natural sciences and philosophy, over esoteric and/or novel terms whose meaning cannot be easily inferred. This is part of a wider ongoing discourse and dialectic around advocating naturalised, theory-informed approaches in management science and Lean/Agile practices, as in the CALM(alpha/beta) initiatives and naturalised sense-making (Snowden et al).

Going back to theory provides a common baseline and doesn’t require deep expertise in Agile/Lean/etc to understand the language and terminology and what is being discussed. The point is not a trivial one, and also has implications for the propagation of the idea(l)s being advocated.

• olaflewitz

Feminine vocative… I like it too.
Ex-ample-xe:-)
To say you’re good with words gives new meaning to the word understatement.

• Real Options—a Mindset | Lean Procrastination

[…] very grateful to Chris Matts and Olav Maassen for introducing me to Real Options. They’re writing a graphic business […]

• RedE

All swans are white, aren’t they?

• agile42 | Effect Mapping at the Nordstrom Innovation Lab

[…] about the topic at ScanAgile in Helsinki earlier this year, which you can watch here. It features me, near the […]