I was back at St. Aidan’s United Church in Victoria yesterday, hosting another conversation in their continuing evolution into their next shape. Last December we worked together to explore four possible scenarios that were being proposed for the congregation. In the past few months they have been working on implementing one of these scenarios – the one featuring a plan to develop a Spiritual Learning Centre. Yesterday was a short strategic conversation called to explore what shape that Centre could take and how it will change life at the church.
A couple of good blog posts in my feed this morning provoked some thinking. The quotes in them reminded me how much evaluation and planning is directed towards goals, targets and patterns that cause us to look for data supporting what we want to see, rather than learning what the data is telling us about what’s really going on. They also helped me reflect on a conversation I had with a client yesterday, where we designed a process for dealing with this.
Regular readers will know that I’ve been thinking a lot about evaluation for many years now. I am not an evaluator, but almost every project I am involved in contains some element of evaluation. Sometimes this evaluation is well done, well thought through and effective; other times (the worst of times, and more often than you’d think) the well-thought-through evaluation plan crumbles in the face of the HIPPO – the Highest Paid Person’s Opinion. So how do we really know what is going on?
When I stumbled across Michael Quinn Patton’s work in Developmental Evaluation, a whole bunch of new doors opened up for me. I could see the crude boundaries of traditional evaluation methods very clearly, and I could see that most of the work I do in the world – facilitating strategic conversations – was actually a core practice of developmental evaluation. Crudely put, traditional “merit and worth” evaluation methods work well when you have a knowable and ordered system, where the actual execution can be evaluated against a set of ideal causes that lead to an ideal state. Did we build the bridge? Does it work according to the specifications of the project? Was it a good use of money? All of that can be evaluated summatively.
In unordered systems, where complexity and emergence are at play, summative evaluation cannot work at all. The problem with complex systems is that you cannot know which set of actions will lead to the result you need, so evaluating efforts against an ideal state is impossible. Well, it’s POSSIBLE, but what happens is that the evaluator brings her own judgements to the situation. Complex problems (or more precisely, emergent problems generated by complex systems) cannot be solved, per se. While it is possible to build a bridge, it is not possible to create a violence-free society. Violent societies are emergent.
So that’s the back story. Last December I went to London to do a deep dive into how the Cynefin framework, and Cognitive Edge’s work in general, can inform a more sophisticated practice of developmental evaluation. After a few months of thinking about it and being in conversation with several Cognitive Edge practitioners, including Ray MacNeil in Nova Scotia, I think my problem is that the term “evaluation” can’t actually make the jump to understanding action in complex systems. Ray and I agreed that Quinn Patton’s work on Developmental Evaluation is a great departure point for inviting people to leave behind what they usually think of as evaluation and to enter into the capacities that are needed in complexity. These capacities include addressing problems obliquely rather than head on, making small, safe-to-fail experiments, undertaking action to better understand the system rather than to effect a change, practicing true adaptive leadership (which means anticipatory awareness rather than predictive planning), working with patterns and sense-making as you go rather than with rules and accountabilities, and so on.
Last night a little Twitter exchange between Viv McWaters, Dave Snowden and me, based on Dave’s recent post, compelled me to explore this a bit further. What especially grabbed me was this line: “The minute we evaluate, assess, judge, interpret or whatever we start to reduce what we scan. The more we can hold open a description the more we scan, the more possibility of seeing novel solutions or interesting features.”
What is needed in this practice is monitoring. You need to monitor the system in all kinds of different ways and monitor yourself, because in a complex system you are part of it. Monitoring is a fine art, and requires us to pay attention to story, patterns, finely grained events and simple numbers that are used to measure things rather than to be targets. Monitoring temperatures helps us to understand climate change, but we don’t use temperatures as targets. Nor should we equate large scale climate change with fine grained indicators like temperature.
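As a loose sketch of the distinction between a measure and a target (the readings here are entirely made up for illustration), monitoring means watching a simple number for shifts in pattern, not grading it against a goal:

```python
# A minimal, hypothetical sketch of "measure, don't target":
# we track a rolling mean of temperature readings to notice drift
# in the pattern, without treating any single reading as a goal.

def rolling_mean(readings, window):
    """Return the rolling means of the readings over the given window."""
    means = []
    for i in range(window, len(readings) + 1):
        chunk = readings[i - window:i]
        means.append(sum(chunk) / window)
    return means

# Synthetic readings with a slow upward drift (illustrative only)
readings = [14.0 + 0.1 * i for i in range(10)]
trend = rolling_mean(readings, window=3)

# The monitor surfaces a pattern (the trend is rising); it does not
# declare success or failure against a threshold.
is_rising = all(a < b for a, b in zip(trend, trend[1:]))
print(is_rising)  # True for this synthetic series
```

The point of the sketch is only that the output is a description of the system (“the trend is rising”) that people can make sense of, not a verdict.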
Action in complex systems is a never-ending art of responding to a changing context. This requires us to adopt more sophisticated monitoring tools and to use individual and distributed cognition to make enough sense of things to move, all the while watching what happens when you do move. It is possible to understand retrospectively what you have done, and that is fine, as long as you don’t confuse what you learn by doing that with the urge to turn it into a strategic plan going forward.
What role can “evaluation” have when your learning about the past cannot be applied to the future?
For technical problems in ordered systems, evaluation is of course important and correct. Expert judgement is required to build safe bridges, fix broken water mains, do the books, audit banks and get food to those who need it. But in complex systems – economies, families, communities and democracies – I’m beginning to think that we need to stop using the word evaluation and start adopting new language like monitoring and sense-making.
One has to be very careful attributing causes to things – or even attributing causality at all. In complex systems, causality is a trap. We can be, as Dave Snowden says, “retrospectively coherent”, but we cannot know which causes will produce which effects going forward. That is the essence of emergent phenomena in the complex world.
But even in complicated problems, where causality should be straightforward, our thinking and view can confuse the situation. Consider this example.
Imagine someone, a man, who has never seen a cat. I know, highly implausible, but this is a hypothetical from Alan Watts’ book, On the Taboo Against Knowing Who You Are, which was written in the sixties – pre-YouTube. Watts uses this fictional fella to illustrate the unfelt influence of perspective and the dangers inherent in our strong inclination to seek cause-and-effect relationships.
“He is looking through a narrow slit in a fence, and, on the other side a cat walks by. [The man] sees first the head, then the less distinctly shaped furry trunk, and then the tail. Extraordinary! The cat turns round and walks back, and again he sees the head and a little later the tail. The sequence begins to look like something regular and reliable. Yet again the cat turns round and he witnesses the same regular sequence: first the head and later the tail. Thereupon he reasons that the event head is the invariable and necessary cause of the event tail which is the head’s effect. This absurd and confusing gobbledygook comes from his failure to see the head and tail go together; they are all one cat.”
We often create and embed the wrong patterns because we are looking through a slit. As Watts says, by paying very close attention to something, we ignore everything else. We try to infer simple cause-and-effect relationships far more often than such relationships actually hold in a complex world. For example, making everyone in an organisation focus on hitting a few key performance indicators isn’t going to mean that the organisation gets better at anything other than hitting those key performance indicators. All too often this leads to damaging unintended consequences; absurd and confusing gobbledygook.
via abc ltd.
When I popped off to London last week to take a deep dive into Cognitive Edge’s work with complexity, one of the questions I held was about working with evaluation in the complex domain.
The context for this question stems from a couple of realities. First, evaluation of social programs, social innovation and other interventions in the human services is a huge industry, and it holds great sway. It is dominated by a world view of linear rationalism which says that we can learn something by determining whether or not you achieved the goals you set out to achieve. Second, evaluation is an incredibly privileged part of many projects and initiatives, and it becomes a strange attractor for project planning and funding approval. In order for funders to show others that their funding is making a difference, they need a “merit and worth” evaluation of their funds. The only way to do that is to gauge progress against expected results. And no non-profit in its right mind will say “we failed to achieve the goals we set out to address”, even though everyone knows that “creating safe communities”, for example, is an aspiration outside the control of any single social institution, subject to global economic trends as much as to the discrete interventions undertaken by specific projects. The fact that folks working in human services are working in a complex domain means that we can all engage in a conspiracy of false causality in order to keep the money flowing (an observation Van Jones inspired in me a while ago). Lots of folks are making change, because they know intuitively how to do this, but the way we learn about that change is so tied to an inappropriate knowledge system that I’m not convinced we have much of an idea what works and what doesn’t. And I’m not talking about articulating “best practices.”
The evaluation methods that are used are great in the complicated domain, where causes and effects are easy to determine and where understanding critical pathways to solutions can have a positive influence on process. In other words, where you have replicable results, linear, summative evaluation works great. Where you have a complex system, where many dynamics working at many different scales produce the problems you are facing, an entirely different way of knowing is needed. As Dave Snowden says, there is an intimate connection between ontology, epistemology and phenomenology. In plain terms, the kind of system we are in is connected to the ways of knowing about it and the ways of interpreting that knowledge.
I’m going to make this overly simplistic: if you are working with a machine, or a mechanistic process that unfolds along a linear trajectory, then mechanistic knowledge (problem solving) and interpretive strategies are fantastic. For complex systems, we need knowledge that is produced FROM the system and interpreted within the system. Evaluation that is done by people “outside” of the system, and that reports findings filtered through “expert” or “disinterested” lenses, is not useful for a system trying to understand itself.
Going into the Cynefin course, I was interested to learn how developmental evaluation fits into the complex domain. What I learned was the term “disintermediated sensemaking”, which is actually the radical shift I was looking for. Here is an example of what it looks like in leadership practice.
Most evaluation processes employ a specialized evaluator to undertake the work. The problem with this is that it places a person between the data and experience on one hand and the use of the knowledge on the other. It also increases the time between an experience and the meaning-making of that experience, which can be a fatal lag for strategy in emergent systems. The answer to this problem is to let people in the system have direct experience of the data and make sense of it themselves.
There are many, many ways to do this, depending on what you are doing. For example:
- When clustering ideas, have the group do it. When only a few people come forward, let them start and then break them up and let others continue. Avoid premature convergence.
- When people are creating data, let them tag what it means. For example, in the decision-making process we used last weekend, participants tagged their thoughts with numbers and tagged their numbers with thoughts, which meant that they ordered their own data.
- Produce knowledge at a scale you can do something about. A system needs to be able to produce knowledge at a scale that is usable, and only the system can determine this scale. I see many strategic plans for organizations that state things like “In order to create safe communities for children we must create a system of safe and nurturing foster homes.” The job of creating safe foster homes falls into the scope of the plan, but tying that to any bigger dynamics gets us into the problem of trying to focus our work on making an impact we have no ability to influence.
- Be really clear about the data you want people to produce, and have a strategy for how they will make sense of it. World Cafe processes, for example, often produce scads of data on tablecloths at the centre of the table, but there is often so little context for this information that it is hard to make use of. My practice these days is to invite people to use the tablecloths as scratch pads and to collect important data on post-it notes or forms that the group can work with – AND to do that in a way that allows people to tag and code the data themselves, so that we don’t need someone else to figure out what they meant.
- Have leaders and teams pore over the raw data and the signification frameworks that people have used, and translate it into strategy.
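The self-tagging idea above can be sketched loosely in code (the ideas and tags here are purely hypothetical, and of course real sense-making happens in conversation, not software): each contribution carries its author’s own tags, and the categories emerge by aggregating those tags rather than from an outside coder’s scheme.

```python
from collections import defaultdict

# Hypothetical contributions, each tagged by its own author
# (self-signification): no external evaluator assigns categories.
contributions = [
    {"idea": "Pair new staff with mentors", "tags": ["learning"]},
    {"idea": "Publish meeting notes openly", "tags": ["trust", "learning"]},
    {"idea": "Rotate who chairs meetings", "tags": ["trust"]},
]

def group_by_own_tags(items):
    """Group ideas under the tags their authors chose for them."""
    groups = defaultdict(list)
    for item in items:
        for tag in item["tags"]:
            groups[tag].append(item["idea"])
    return dict(groups)

grouped = group_by_own_tags(contributions)
print(grouped["learning"])  # the two ideas their authors tagged "learning"
```

The design point is the absence of a middle layer: nothing in this grouping step requires anyone to interpret what a contribution “really” meant.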
These examples just begin to scratch the surface of this inquiry in practice. Over the next little while I’m going to be giving this approach a lot of thought and trying it out in practice as often as I can, where the context warrants it.
If you would like to try an exercise to see why this matters, try this: the next time you are facilitating a brainstorming session, have the group record dozens of insights on post-its and place them randomly on a wall. Take a break and look over the post-its. Without touching them, start categorizing them and record your categorization scheme. Then invite the group to have a go at it, making sure everyone gets a chance to participate. Compare your two categorization schemes and discuss the differences. Discuss what might happen if the group were to follow the strategy implicit in your scheme vs. the strategy implicit in theirs.

