Regular readers will know that I’ve been thinking a lot about evaluation for many years now. I am not an evaluator, but almost every project I am involved in contains some element of evaluation. Sometimes this evaluation is well done, well thought through and effective; other times (the worst of times, and more often than you might think) the well-thought-through evaluation plan crumbles in the face of the HiPPO – the Highest Paid Person’s Opinion. So how do we really know what is going on?
When I stumbled across Michael Quinn Patton’s work in Developmental Evaluation, a whole bunch of new doors opened up to me. I was able to see the crude boundaries of traditional evaluation methods very clearly, and to recognize that most of the work I do in the world – facilitating strategic conversations – was actually a core practice of developmental evaluation. Crudely put, traditional “merit and worth” evaluation methods work well when you have a knowable and ordered system where the actual execution can be evaluated against a set of ideal causes that lead to an ideal state. Did we build the bridge? Does it work according to the specifications of the project? Was it a good use of money? All of that can be evaluated summatively.
In the unordered systems where complexity and emergence are at play, summative evaluation cannot work at all. The problem with complex systems is that you cannot know what set of actions will lead to the result you need to get to, so evaluating efforts against an ideal state is impossible. Well, it’s POSSIBLE, but what happens is that the evaluator brings her own judgements to the situation. Complex problems (or more precisely, emergent problems generated from complex systems) cannot be solved, per se. While it is possible to build a bridge, it is not possible to create a violence-free society. Violent societies are emergent.
So that’s the back story. Last December I went to London to do a deep dive into how the Cynefin framework, and Cognitive Edge’s work in general, can inform a more sophisticated practice of developmental evaluation. After a few months of thinking about it and being in conversation with several Cognitive Edge practitioners, including Ray MacNeil in Nova Scotia, I think that my problem is that the term “evaluation” can’t actually make the jump to understanding action in complex systems. Ray and I agreed that Quinn Patton’s work on Developmental Evaluation is a great departure point for inviting people to leave behind what they usually think of as evaluation and to enter into the capacities that are needed in complexity. These capacities include addressing problems obliquely rather than head on, making small safe-to-fail experiments, undertaking action to better understand the system rather than to effect a change, practicing true adaptive leadership – which means practicing anticipatory awareness rather than predictive planning – and working with patterns and sense-making as you go rather than rules and accountabilities, and so on.
Last night a little Twitter exchange between myself, Viv McWaters and Dave Snowden, based on Dave’s recent post, compelled me to explore this a bit further. What especially grabbed me was this line: “The minute we evaluate, assess, judge, interpret or whatever we start to reduce what we scan. The more we can hold open a description the more we scan, the more possibility of seeing novel solutions or interesting features.”
What is needed in this practice is monitoring. You need to monitor the system in all kinds of different ways, and to monitor yourself, because in a complex system you are part of it. Monitoring is a fine art, and it requires us to pay attention to story, patterns, fine-grained events, and simple numbers that are used to measure things rather than to serve as targets. Monitoring temperatures helps us to understand climate change, but we don’t use temperatures as targets. Nor should we equate large-scale climate change with fine-grained indicators like temperature.
Action in complex systems is a never-ending art of responding to the changing context. This requires us to adopt more sophisticated monitoring tools and to use individual and distributed cognition to make enough sense of things to move, all the while watching what happens when we do move. It is possible to understand retrospectively what you have done, and that is fine as long as you don’t confuse what you learn by doing that with the urge to turn it into a strategic plan going forward.
What role can “evaluation” have when your learning about the past cannot be applied to the future?
For technical problems in ordered systems, evaluation is of course important and correct. Expert judgement is required to build safe bridges, to fix broken water mains, to do the books, audit banks and get food to those who need it. But in complex systems – economies, families, communities and democracies – I’m beginning to think that we need to stop using the word evaluation and really start adopting new language like monitoring and sense-making.