Sometimes people see that I’m a dialogue practitioner and they assume that I am not a fan of quantitative measurement. I think this has to do with the fact that the dialogue practitioner community has been a kind of antithesis to the “measure and manage” world of empirical scientific management.
In any endeavour, both qualitative and quantitative measurements are important. The issue isn’t whether numbers are to be trusted more than meaning making; the issue is whether we are measuring things properly.
The issue is whether we use measurements as targets or as gauges.
Again, this is helpful in understanding the distinction between summative and developmental evaluation and sensemaking. In a linear system, you are aiming for certain end states and targets. In a complex, non-linear system, you are aiming to hold to vectors: directions of travel rather than destinations. So using technology to increase production by 5% and decrease expenses by 15% is a target that can be achieved, and you can look back and see how well you achieved it. You can also run tests and host conversations with workers and customers to discuss the quality of your product, aiming for a general score of “happy,” which in turn might be reflected in numbers like sales, returns, recommendations and so on.
In a complex system, like an organization’s culture, however, you are not managing for a target; rather, you are managing a kind of balance and a direction. You get to choose that direction from your own moral and ethical sense of what is right to do. For example, maintaining an organizational culture of openness, respect, creativity and support requires monitoring your culture in real time, a lot, and noticing how things are shifting and changing. Dialogic methods play an important role here, especially in perceiving patterns and making decisions about what to do, as well as in engaging people in the endless negotiation about what those values look like on a daily basis. As a management tool, developing skillful dialogue tools allows you to manage the day-to-day departures from your preferred set of values, beliefs or practices. Being complex, things like organizational cultures won’t always act the way you want them to, and so good leaders do two things well: they help resolve the inevitable violations of standards and practices in a manner that reflects the preferred way, and they gather people together over time to discuss what everyone is learning about the way the culture is working.
It’s not good enough to convene an annual meeting about the organization’s values and culture. That simply gives you a snapshot in time and tells you nothing about how an organization is evolving and changing, nor does it provide information about promising practices. To monitor over time, you can use a tool like CultureScan or a series of other regular ways of documenting the small observations of daily life that together help provide a picture of what the organization is doing.
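For those who like to tinker, here is one small sketch, in Python, of what that kind of regular documentation might look like underneath: a running log of small observations, tagged and grouped by month so that shifts show up over time. This is purely illustrative; it is not the CultureScan tool, and the field names and tags are my own assumptions.

```python
# A minimal sketch (not the actual CultureScan tool) of "documenting the small
# observations of daily life" as a rolling record. Names and tags are illustrative.
from collections import Counter
from dataclasses import dataclass
from datetime import date


@dataclass
class Observation:
    when: date
    note: str            # a small story or observation from daily life
    signals: list[str]   # tags the observer attached, e.g. "openness", "respect"


def monthly_picture(observations: list[Observation]) -> dict[str, Counter]:
    """Group tagged observations by month so that shifts in the culture show up
    as changing tag frequencies over time, rather than as a single snapshot."""
    picture: dict[str, Counter] = {}
    for obs in observations:
        month = obs.when.strftime("%Y-%m")
        picture.setdefault(month, Counter()).update(obs.signals)
    return picture


log = [
    Observation(date(2016, 9, 2), "New hire invited to challenge the plan", ["openness", "support"]),
    Observation(date(2016, 10, 14), "Team skipped its retrospective again", ["respect"]),
]
print(monthly_picture(log))
```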

Evaluation is such an influential constraint in organizational and community life. When resources and attention are tied to evaluation results, a kind of tautology gets set up: one begins managing projects towards the evaluation outcomes in order to give an initiative the best chance of surviving and continuing to attract resources. One of the things I appreciate about developmental evaluation is its deliberate engagement with emergence. Making sense of emergence, however, can be a really time-consuming affair, and so I’m thinking about how we can make good use of time by using dialogue and collective meaning making to help make sense of data and direction.
Developmental evaluation is for the complex domain. That means we are not evaluating actions against desired end states, but instead noticing and paying attention to vectors and directions – intentions and hypotheses that help shape emerging strategy. Developmental evaluation is the process of gathering information about our work to give us some intelligence about what we are doing.
Think of the information needs of two different kinds of athletes. A golfer relies on solid objective data (how many yards to the hole, where the wind is coming from, the nature of the lie of the ball and so on) and interprets that data through her own self-knowledge (I hit a five iron 160 yards; adjusting for wind, lie and the target topography, I should hit a four iron with backspin…). Of course, the better a golfer one is, the easier it is to execute a plan and understand exactly where one succeeded or failed.
By contrast soccer players work in a dynamic environment. The information available to them only becomes apparent as they begin to play the match. They may know something about the other team, but they learn rapidly in the first ten minutes or so how the game is going to go. A team will discover where the opposition’s weakness is, or what its attacking strategy is, or where the open spots are on the pitch. Making good use of this information requires excellent communication in real time to share what is being learned. It requires players to play with potentials and patterns rather than certainties. Every move provides yet more information. The better a team works together, the faster they can adjust their strategy to take advantage of potentials.
When we are evaluating work, there is a mix of these two approaches at play. Summative evaluation looks at the gap between expected outcomes and what actually happened and suggests how to adjust for next time. Budget planning and auditing are good examples of this technical kind of results-based evaluation: count the money, compare against projections, and look for causes. Some of these causes will be technical and some will come down to culture.
Developmental evaluation requires a different strategic approach, and simply put, it comes down to four things (I’m aiming for simplicity here, so that this can be described in an easy way):
- Data points that give us the ability to capture information about the current state of an evolving system. These can render a series of pictures that allow us to see patterns and trends. You need multiple snapshots over time to make sense of what is happening; one photo of a soccer game in progress tells you nothing. You need to monitor indicators, not manage end points (see the sketch after this list). Soccer is much more than just putting the ball in the net, even though that is the desired end result.
- Feedback loops from data to human sensemaking so that data can be used in real time to develop strategy and adjustments to the directionality of work.
- A facilitated sensemaking process to bring together multiple perspectives to interpret what is happening. In a complex system the data won’t give you answers. It will provide information to form hypotheses about the patterns that are emerging, and that information can give you guidance for action.
- A way of acting that doesn’t over-commit resources to emerging potential strategies, but which gives enough momentum to see if we can shift things in a desired way. Snowden calls this “safe-to-fail.” This is tricky and calls for good context-dependent leadership, but it is the essence of good decision making.
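To make the first two of these a little more concrete, here is a minimal sketch in Python of what “multiple snapshots over time” and a vector of change might look like as data. The indicator names and the simple difference calculation are illustrative assumptions on my part, not a prescribed method; the point is that the output is raw material for a sensemaking conversation, not an answer.

```python
# A hypothetical sketch of monitoring indicators over time (not end points).
# Indicator names and the simple trend calculation are illustrative assumptions.
from datetime import date

# Each snapshot is one "photo" of the system; meaning only emerges by
# comparing several of them over time.
snapshots = [
    {"date": date(2016, 9, 1),  "site_visits": 420, "stories_collected": 12, "staff_confidence": 3.1},
    {"date": date(2016, 10, 1), "site_visits": 465, "stories_collected": 19, "staff_confidence": 3.4},
    {"date": date(2016, 11, 1), "site_visits": 440, "stories_collected": 31, "staff_confidence": 3.3},
]


def vectors(history: list[dict]) -> dict[str, float]:
    """Direction of change for each indicator between the two most recent
    snapshots; raw material for sensemaking, not a verdict."""
    latest, previous = history[-1], history[-2]
    return {key: latest[key] - previous[key] for key in latest if key != "date"}


# These deltas would be brought into a facilitated conversation: what patterns
# do we see, what hypotheses do they suggest, what safe-to-fail probes follow?
print(vectors(snapshots))
```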
There are all kinds of ways of implementing these strategies. You can use surveys to discover what people are accessing on your website, and you can use interviews or sensemaking tools to find out HOW they are using that information. You can use a strategic group to interpret these results and see how they are either coherent with your intentions or at odds with them. You can then create new initiatives that support what is emerging, or figure out ways to abandon what is not working. There are thousands of dialogue methods and processes to use to ask questions about, and develop action around, the data that is emerging.
Importantly, developmental evaluation needs to be a part of the way you work strategically. It needs a rhythm and a cadence to it, so that you know you are coming back on a regular basis to the emerging picture of what is happening. You need outsiders occasionally to come in and disrupt your point of view and offer alternative views of the patterns, and you need to choose a longer rhythm to continue to develop and refine your evaluation strategy as a whole.
I want this to be a simple process to use. Strategy without information is just a wild guess. But if we tie our decisions too closely to the data emerging from dynamic systems, we can get equally stuck, making decisions that try to game the system towards desired results, sometimes with disastrous consequences for clients, customers and, ultimately, organizational integrity. It’s a balance and a practice. How can we make this easy?

A few years ago, Juanita Brown shared a very powerful image with me. She talked about how those of us that practice dialogue and facilitation in a deep way have access to various gateways that take us into a “central garden.” All of our pathways invite us into this garden where we come to discover and realize something about the role of dialogue, meaning making and collaboration. It is a set of realizations that lies beneath the practice of methods.
On a call today with my friend Mark McKergow, we were discussing this image. There are a bunch of us – although not a large bunch – from different practitioner communities who are always interested in transcending our methods and entering into this conversation. Alongside Juanita, Mark has also been wondering, “Where is everybody else, and how come we’re not connecting?”
Today we were discussing the failure of dialogue to have enough presence to provide workable and practical alternatives to everything from public policy decisions (such as the EU referendum in Britain, or the polarization of US society) to the everyday challenges of managing and running large organizations, evaluating, strategizing and controlling outcomes, people and money.
We know that our field of dialogic practice is massive, well researched and well documented. We know that the leadership literature is filled with the importance of relational and sensemaking work. We know that mid-career professionals come to our various workshops to take on skills and ideas that are fundamentally transformative to their work and lives, and that they go back to places where “it’s difficult to implement” because other mid-career professionals are wedded to globalized management practices that are good enough for what they are trying to do, within the highly constrained performance frameworks they are forced to operate in. We even know (thanks to people like Jon Husband) that global organizations like Hay Associates have spent the better part of a century ensuring that these management science constraints are widely deployed and understood. They frame everything, not without utility, but to the exclusion of almost every other way of organizing and being together in human endeavour.
So what is the problem? Are we just lousy storytellers? Are we being deliberately marginalized? Is there something fundamentally flawed about the ability of dialogic practice to actually be of value? And how do we disrupt the standard set of management tools and the narcissism of our own communities of practice in a way that creates some serious openings for change?
What do you think?

It’s good to have Dave Snowden back from his treks in the Himalayas. He’s been a big influence on my thinking and practice over the past few years and his near daily blog posts are always rich, irreverent and practical. He is in the process of creating an important body of theory and practice that is useful even if the language and the concepts are sometimes a lot of work to grasp. The payoff from wrestling with his ideas is rich.
Today he’s discussing “dispositionality” which simply means that making change in a system is much easier when you have a sense of what the system is pre-disposed to do (and what it is NOT pre-disposed to do…)
Back in the summer Caitlin and I led a learning lab for the board and staff members of various community foundations from around British Columbia. The five principles that Dave articulated today were very much embedded in our work and they are becoming very much the basis for any change and planning work I do. Here’s how we made it work, pen and paper style.
1. Map the current state of the system, including its dominant flows, eddy points and whirlpools.
We began with a World Cafe design based on small stories of change. It is always good to ask people about actual decisions or stories that they remember, to ground their experience in discovery. If you run a cafe on “What are the big sources of change in our sector?” you get a data set that is divorced from reality, meaning that it is subject to being gamed by the participants: I can just insert the things I want to see in there. But if I am asked to tell a story about a particular decision I had to make, the data set is richer and we have a good chance of seeing emerging patterns.
And so our Cafe ran on that basis: “Tell a story of a time when you knew things needed to change.”
Each person told a story, and the other three at the table listened and wrote down what they heard as the impetus for change, with one data point per post-it note. We did several rounds of storytelling. At the end of the rounds, we asked people to give the post-its to the storyteller, and we gave the storytellers time to rank each post-it note on a scale of 1-3: a one meant that the impetus for change was known only to the storyteller (a weak signal), a two meant that a few other people knew about it, and a three meant that the change trigger was known by everybody.
We then had the group cluster all the post-its to find major categories, and we sorted the post-it notes within the categories to produce a map that was rendered by our graphic recorder, Corrina Keeling. You can see that above.
2. Identify the energy gradient associated with existing dominant patterns and what adjacent possible states to any undesirable pattern present themselves.
The resulting map shows the major areas for change making, specific “acupuncture points” and the “energy gradients associated with the dominant patterns.” Practically what this means is that items marked in yellow were very weak signals and could be candidates for a change initiative that would appear out of left field for the dominant system. Not a bad thing to do, but it requires a lot of resources and political capital to initiate. The red items were things that EVERYBODY was talking about, which meant that the space for innovation was quite closed down. There are a lot of experts, large consulting firms, influential funding pots and politically committed people tackling change at this level because it is perceived to be an influential place to play. As a result it is generally a zone that is not failure tolerant and so these items are not good candidates for a probe or prototyping approach.
But the orange items were in a kind of Goldilocks zone: there are a few people who know that you can make change here, so you have allies, but the field is not cluttered with competing experts trying to assert their ideological solutions.
The whole map allows you to make choices.
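If you wanted to play with the ranked post-its as data after the session, here is a small hypothetical sketch of how the 1-3 rankings could be tallied into the yellow, orange and red zones described above. The aggregation rule and thresholds are my own assumptions; in the room, the clustering and the map itself were done by hand.

```python
# A hypothetical sketch of turning ranked post-its into the yellow/orange/red
# zones described above. The aggregation rule (average strength per cluster)
# and the thresholds are assumptions for illustration only.
from statistics import mean

# Each post-it: (cluster the group sorted it into, strength rank 1-3)
# 1 = known only to the storyteller, 2 = known to a few, 3 = known to everybody.
# Cluster names here are made up for illustration.
post_its = [
    ("donor expectations", 3), ("donor expectations", 3),
    ("youth engagement", 2), ("youth engagement", 2), ("youth engagement", 1),
    ("rural broadband", 1),
]


def zone(average_strength: float) -> str:
    """Map a cluster's average strength to the colour zones used on the map."""
    if average_strength < 1.5:
        return "yellow: weak signal; needs resources and political capital"
    if average_strength < 2.5:
        return "orange: Goldilocks zone; a good candidate for a probe"
    return "red: everybody is already here; innovation space is closed down"


clusters: dict[str, list[int]] = {}
for name, strength in post_its:
    clusters.setdefault(name, []).append(strength)

for name, strengths in clusters.items():
    print(name, "->", zone(mean(strengths)))
```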
3. Engage in safe-to-fail experiments in parallel either to change the energy gradient or to nudge (or shift) a dominant pattern to a more desirable state ideally through action rather than platitude.
This is of course the best approach to making change within complex systems. We took time to develop prototypes that were intended to tell us something about the system. A bonus would be that we might create ideas that would turn into interesting new initiatives, but the primary function of running prototypes is to probe the system and tell us something about what is possible. Making tentative conclusions from action inspires people to try more, on a path that is a little more blazed. Just offering platitudes such as “Let’s build networks for knowledge transfer” doesn’t do enough to help change makers poke around and try things that are likely to work.
Each participant in the group created one or two prototypes which they rolled out, seeking to make a bit of change and learn about what helped or hindered change making in a relatively conservative sector of civil society.
4. Monitor the impact in real time and take multiple small actions to reinforce the good and disrupt the bad.
We kept the group together over a few months, having them check in over webinars to share the progress on their prototypes. We deliberately created a space where things were allowed to fail or radically change and we harvested learning all the way along. Where things were working, prototypes evolved in that direction, and we had a little funding to help accelerate them. By simply starting, participants discovered oblique strategies and in some cases entirely new ways to address their basic desire for changing some element of their environment. Without engaging in a deliberate yet loosely held action-based project, it is very difficult to see the opportunities that lie in the blind spots.
This learning was summarized in a report, but the bigger harvest was the capacity that each participant built to take steps to sense, design and implement change initiatives with a better informed complexity approach.
5. At all costs avoid any announcement of a change initiative or idealistic outcome based targets
I think this goes without saying. Change making in the complex space is essentially learning on overdrive. When we are truly stuck and yet we have a sense that “this might just work,” we need good support to explore that instinct. Being deliberate about it helps. But announcing “this is what we are doing and here are the targets we have to meet” will collapse people’s inherent creativity, narrowing the focus of their work to achieving the pre-determined outcomes. That is a perfect strategy for destroying the capacity to engage with complexity, and it can result in a myopic approach to change that guarantees “black swan events” and other nasty surprises.

This morning we began our Harvesting and Collective Sensemaking online course. Rowan Simonsen, Amy Lenzo and I were really excited to be able to share our first little insights with people, and especially this new mnemonic that we created to capture five key principles of harvesting practice: PLUME. We are excited to introduce this into the world.