Tonight in Vancouver I’m acting as a provocateur at an event sponsored by my friends and colleagues at Waterlution. Water City 2040 is a scenario planning process that engages people about the future of water across ten Canadian cities. Tonight’s event is part of a pilot cohort to see what the process can offer to the conversation nationally.
What’s powerful about this work is that it’s citizens convening, hosting and engaging with one another. This is not a local government engagement process or a formal consultation. This is a non-profit organization convening deliberative conversations. The advantage of that is that the process is free from the usual constraints that governments put on engagement. So tonight we are thinking about possibilities that push out 25 years into the future and absolutely everything is on the table. In fact I’m asking people to consider that in these kinds of complex systems the biggest problem you have in addressing change comes from your assumptions about what will remain the same. It’s one thing to confront demographic, economic and environmental change, but are we also questioning things we take for granted like governance models, planning mindsets, innovation processes, value systems and infrastructure?
Organizations like Waterlution offer an unconstrained look at the future and if local governments are smart, they will pay attention to what’s happening here. (And they are – Metro Vancouver has sent a film crew to document the evening!).
Waterlution teaches these skills to citizen practitioners, government employees and private sector staff through our Waterlution Art of Hosting Water Dialogues workshops. We have workshops happening April 20-22 on Bowen Island and April 27-29 near Toronto. If this is work you want to do more of, think about joining us. And if you contact me to inquire, you might get a little incentive…
I was working with a couple of clients recently who were trying to design powerful questions for invitations to their strategic conversations. Both organizations are dealing with complex situations and specifically with complex changes that were overtaking their ability to respond. Here are some of the questions that came up:
- How can we be more effective in accomplishing our purpose?
- How can we create more engagement to address our outcomes?
- What can we do to innovate regardless of our structure?
- Can you help us create new ideas for executive alignment around our plan to address the change we are now seeing?
Can you see what is wrong with these questions, especially as they relate to addressing complexity?
The answer is that each of these questions contains a proposed solution to the problem, buried as assumptions in the question itself. In these questions the answers to addressing complexity are assumed to be: sticking to purpose, creating more engagement, innovating except structurally, and aligning executives around our plan. In other contexts these may well be powerful questions: they are questions which invite execution once strategic decisions have been taken. But in addressing complex questions, they narrow the focus too much and embed assumptions that some may actually think are the cause of their problems in the first place.
The problem is that my clients were stuck arguing over the questions themselves because they couldn’t agree on solutions. As a result they found themselves going around and around in circles.
The right question for all four of these situations is something like “What is going on?” or “How can we address the changes that are happening to us?”
You need to back up to ask that question first, before arriving at any preferred solutions. It is very important in discerning and making sense of your context that you are able to let go of your natural inclination to want to DO something, in favour of first understanding what you have in front of you. Seeing the situation clearly goes a long way toward being able to make good strategic choices about what to do next. From there, planning, aligning, purpose and structure might be useful responses, but you don’t know that until you’ve made sense of where you are.

Two weeks ago in our Leadership 2020 program I experimented with using a signification framework to harvest a World Cafe. We are beginning another cohort this week and so I had a chance to further refine the process and gather much more information.
We began the evening the same way, with a World Cafe aimed at exploring the shared context for the work that these folks are in. Our cohort is made up of about two-thirds staff from community social services agencies and one-third staff from the Ministry of Children and Family Development. This time I used prepared post-it notes for the sensemaking exercise, which you can see here:
Our process went like this:
- At Cafe tables of five for 20 minutes, discuss the question “What is a story of the future you are anticipating for this sector?”
- Second round, new tables, same question, 20 minutes
- About ten minutes of hearing some random insights from the group, and checking to see how those resonate.
- Two minutes of silent reflection on the question “What do you need to learn here that will help us all move forward?”
- Each participant took a pink and a blue post-it note. On the blue post-it they wrote what they needed to learn that would be immediately applicable, and on the pink one, learning that is needed to prepare for the future.
- Participants filled out the post-its and then were instructed on how to place them on a triangle framework that helped them signify whether what they needed to learn would help them “in their personal life,” “do their jobs” and/or “make change” (see the sketch just after this list).
- Participants also indicated on the post-its whether they worked for the Ministry or for a community organization.
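As an aside for the data-minded: the paper version needs nothing more than the triangle, the three labels and a dot. If you ever wanted to digitize those placements, one way (purely my assumption, not part of the exercise itself) is to convert each dot’s position into three corner weights using barycentric coordinates. A rough sketch:

```python
# Hypothetical digitization of the triangle ("triad") framework described above:
# convert the (x, y) position of a dot into three weights, one per corner label,
# using barycentric coordinates. Corner names follow the exercise; the layout
# coordinates are arbitrary choices for the sake of the sketch.

CORNERS = {
    "in my personal life": (0.0, 0.0),
    "do my job": (1.0, 0.0),
    "make change": (0.5, 0.866),  # roughly equilateral
}

def triad_weights(x, y):
    """Return a weight per corner; the three weights sum to 1, so a dot in
    the middle of the triangle reads as an even pull toward all three labels."""
    (x1, y1), (x2, y2), (x3, y3) = CORNERS.values()
    denom = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / denom
    w2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / denom
    return dict(zip(CORNERS, (w1, w2, 1.0 - w1 - w2)))

# A post-it placed close to the "make change" corner
print(triad_weights(0.5, 0.7))
```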
At the conclusion of the exercise we had a tremendous amount of information to draw from. Our immediate use was to take a small group and use affinity grouping to identify the themes the whole group holds around their learning and curiosity. We have used these themes to structure a collective story harvest exercise this morning.
But there is so much more richness that can come from this model. Here are some of the ways people are playing with the data (a small data sketch follows the list):
- Removing all the pink post-its to see what the immediate learning needs are and vice versa.
- Looking at and comparing the learning needs between the two sectors to see where the overlaps and differences are.
- Examining the clusters at the extremes to see what that tells us about personal needs and professional needs.
- Uncovering a theory of change by looking at the post its clustered around the “Making change” point and also seeing if these theories of change are different between the community and the government.
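To make those slices concrete, here is a minimal sketch of what they might look like if the post-its were typed up. The field names and the two sample notes are invented placeholders, not data from the cohort:

```python
# A minimal sketch of the same slicing done digitally, assuming each post-it
# were typed up as a record. Field names and the two sample notes are invented
# placeholders, not data from the cohort.

post_its = [
    {"text": "example note A", "colour": "blue",  # blue = immediately applicable
     "sector": "community",
     "weights": {"in my personal life": 0.1, "do my job": 0.7, "make change": 0.2}},
    {"text": "example note B", "colour": "pink",  # pink = preparing for the future
     "sector": "ministry",
     "weights": {"in my personal life": 0.0, "do my job": 0.3, "make change": 0.7}},
]

# Immediate learning needs only (set aside the pink, future-oriented notes)
immediate = [p for p in post_its if p["colour"] == "blue"]

# Compare the learning needs of the two sectors
by_sector = {s: [p for p in post_its if p["sector"] == s]
             for s in ("ministry", "community")}

# Notes clustered hard against the "make change" corner: raw material for a theory of change
change_oriented = [p for p in post_its if p["weights"]["make change"] > 0.6]
```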
And of course, because the data has been signified on each post-it, we can recreate the framework easily. The next level for me will be using this data to create a Cynefin framework using the four-points contextualization exercise. That probably won’t happen in this cohort.
The big learning is the richness of data that comes from collecting finely grained objects, allowing for disintermediated sense-making, and seeing the multiple ways in which signified data can be used to address complex challenges obliquely. Working this way helps you get out of the pattern entrainment that blinds you to the weak signals and emergent patterns needed to develop emergent practice. This pen-and-paper version is powerful on its own. You can imagine how working with SenseMaker across multiple signification frameworks can produce patterns and results that are orders of magnitude richer.
When I popped off to London last week to take a deep dive into Cognitive Edge’s work with complexity, one of the questions I held was about working with evaluation in the complex domain.
The context for this question stems from a couple of realities. First, evaluation of social programs, social innovation and other interventions in the human services is a huge industry and it holds great sway. And it is dominated by a world view of linear rationalism that says you can learn something by determining whether or not you achieved the goals you set out to achieve. Second, evaluation is an incredibly privileged part of many projects and initiatives and itself becomes a strange attractor for project planning and funding approval. In order for funders to show others that their funding is making a difference, they need a “merit and worth” evaluation of their funds. The only way to do that is to gauge progress against expected results. And no non-profit in its right mind will say “we failed to achieve the goals we set out to address,” even though everyone knows that “creating safe communities,” for example, is an aspiration out of the control of any social institution and is subject to global economic trends as much as it is subject to discrete interventions undertaken by specific projects. The fact that folks working in human services are working in a complex domain means that we can all engage in a conspiracy of false causality in order to keep the money flowing (an observation Van Jones inspired in me a while ago). Lots of folks are making change, because they know intuitively how to do this, but the way we learn about that change is so tied to an inappropriate knowledge system that I’m not convinced we have much of an idea what works and what doesn’t. And I’m not talking about articulating “best practices.”
The evaluation methods that are used are great in the complicated domain, where causes and effects are easy to determine and where understanding critical pathways to solutions can have a positive influence on process. In other words, where you have replicable results, linear, summative evaluation works great. Where you have a system that is complex, where there are many dynamics working at many different scales to produce the problems you are facing, an entirely different way of knowing is needed. As Dave Snowden says, there is an intimate connection between ontology, epistemology and phenomenology. In plain terms, the kind of system we are in is connected to the ways of knowing about it and the ways of interpreting that knowledge.
I’m going to make this overly simplistic: if you are working with a machine, or a mechanistic process that unfolds along a linear trajectory, then mechanistic knowledge (problem solving) and interpretive strategies are fantastic. For complex systems, we need knowledge that is produced FROM the system and interpreted within the system. Evaluation that is done by people “outside” of the system and that reports findings filtered through “expert” or “disinterested” lenses is not useful for a system to understand itself.
Going into the Cynefin course I was interested to learn about how developmental evaluation fit into the complex domain. What I learned was the term “disintermediated sensemaking” which is actually the radical shift I was looking for. Here is an example of what it looks like in leadership practice.
Most evaluation employs a specialized evaluator to undertake the work. The problem with this is that it places a person between the experience and the data on one hand and the use of that knowledge on the other. It also increases the time between an experience and the meaning making of that experience, which can be a fatal lag for strategy in emergent systems. The answer to this problem is to let people in the system have direct experience of the data, and make sense of it themselves.
There are many, many ways to do this, depending on what you are doing. For example:
- When clustering ideas, have the group do it. When only a few people come forward, let them start and then break them up and let others continue. Avoid premature convergence.
- When people are creating data, let them tag what it means. For example, in the decision-making process we used last weekend, participants tagged their thoughts with numbers and tagged their numbers with thoughts, which meant that they ordered their own data.
- Produce knowledge at a scale you can do something about. A system needs to be able to produce knowledge at a scale that is usable, and only the system can determine this scale. I see many strategic plans for organizations that state things like “In order to create safe communities for children we must create a system of safe and nurturing foster homes.” The job of creating safe foster homes falls into the scope of the plan, but tying that to any bigger dynamics gets us into the problem of trying to focus our work on making an impact we have no ability to influence.
- Be really clear about the data you want people to produce and have a strategy for how they will make sense of it. World Cafe processes, for example, often produce scads of data on tablecloths at the centre of the table, but there is often so little context for this information that it is hard to make use of. My practice these days is to invite people to use the tablecloths as scratch pads, and to collect important data on post-it notes or forms that the group can work with. AND to do that in a way that allows people to be tagging and coding the data themselves, so that we don’t have to have someone else figure out what they meant.
- Have leaders and teams pore over the raw data and the signification frameworks that people have used and translate it into strategy.
These just begin to scratch the surface of this inquiry in practice. Over the next little while I’m going to be giving this approach a lot of thought and trying it out in practice as often as I can, where the context warrants it.
If you would like to see why this matters, try this exercise. The next time you are facilitating a brainstorming session, have the group record dozens of insights on post-its and place them randomly on a wall. Take a break and look over the post-its. Without touching them, start categorizing them and record your categorization scheme. Then invite the group to have a go at it. Make sure everyone gets a chance to participate. Compare your two categorization schemes and discuss the differences. Discuss what might happen if the group were to follow the strategy implicit in your scheme vs. the strategy implicit in theirs.
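And if you ever want to put a rough number on how far apart the two schemes are, here is a small, hypothetical sketch that measures how often they agree on whether any given pair of notes belongs together. The note names and category labels are invented; the conversation about the differences is still the point:

```python
# A hypothetical way to put a rough number on how differently two schemes
# categorized the same post-its: the share of note pairs that both schemes
# treat the same way (grouped together in both, or separated in both).
# Note names and category labels below are invented for illustration.

from itertools import combinations

facilitator = {"note1": "funding", "note2": "funding", "note3": "staffing", "note4": "staffing"}
group       = {"note1": "money",   "note2": "people",  "note3": "people",   "note4": "people"}

def pairwise_agreement(scheme_a, scheme_b):
    """Fraction of note pairs on which the two categorizations agree."""
    pairs = list(combinations(scheme_a, 2))
    agree = sum(
        (scheme_a[i] == scheme_a[j]) == (scheme_b[i] == scheme_b[j])
        for i, j in pairs
    )
    return agree / len(pairs)

print(pairwise_agreement(facilitator, group))  # 1.0 would mean identical groupings
```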
This afternoon I’m coming home after a morning running a short process for a church in Victoria, BC. The brief was pretty straightforward: help us decide between four possible scenarios about our future. Lucky for me, it gave me an instant application for some of the stuff I was learning in London last week.
The scenarios themselves were designed through a series of meetings with people over a number of months and were intended to capture the church’s profile for its future, as a way of advertising itself to prospective new staff. What was smart about this exercise was the fact that the scenarios were left in very draft form, so there was no way they could be confused for a “vision” of the future. It is quite common in the church world for people to engage in “visioning exercises” to deal with the complex problems that they face, but such visions are all but doomed to failure, as the wider organization is entering a period of massive transformation and churches are subject to all kinds of influences over which they have no control.
Visioning therefore is not as useful as selecting a lens through which the organization can make some decisions.
Each scenario contained some possible activities and challenges that the church would be facing, and the committee overseeing the work was charged with refining these down to a report that would, to use my own terms, be a collection of heuristics for the way the organization would act as it addressed future challenges.
Our process was very much informed by some thinking I have been doing with Dave Snowden’s “simple rules for dealing with complexity,” notably principles about avoiding premature convergence, distributing cognition and disrupting pattern entrainment. Furthermore, the follow-up work will be informed by the heuristic of “disintermediation,” meaning that the team working on the project will all be working with the raw data. There is no consultant’s report here. The meaning making is still very much located with the participants.
So here was our process.
- At small tables of four, participants were given 5 minutes to read over the scenarios silently.
- We then entered a period of three 15 minute small group conversations on the topic of “what do you think about these scenarios?” Cafe style, each conversation happened with three different groups of people. I was surprised how much introduction was going on as people met new folks. The question was deliberately chosen not to be too deep or powerful because with a simple question, the participants will provide their own depth and power. When you have a powerful need, you don’t need to contrive anything more powerful than what people are already up for.
- Following the cafe conversations, a round of silent reflection in which people were given the following direction: “Express your preference for each of the scenarios on a scale of 1-7. Seven means ‘Let’s do it’ and one means ‘No way.’ For each scenario write your preference on your post-it and write a short sentence about the one concrete thing that would make your vote one point higher.” There is a lot in this little exercise. First, it’s a way of registering all of the objections to the scenarios without personalizing them. Second, it gets at concrete things that the team can do to improve the scenarios. And third, it harvests preferences and not simple yes/no decisions, which are not appropriate for this kind of work.
- At each table someone gathered all the post-its of the same colour and, colour by colour, folks came to the front and placed them on the scale. Doing it this way meant that no one was sure whose preference was going where, and it also meant that people couldn’t revise their post-its once they saw how the preferences were being expressed.
The whole thing took about 75 minutes.
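For the data-minded, here is a minimal sketch of how that harvest might be typed up afterwards, assuming one record per post-it. The field names and sample rows are placeholders, not the actual data:

```python
# A minimal sketch of how the harvested post-its might be typed up afterwards:
# one record per note, with the scenario (identified by post-it colour), the
# 1-7 preference, and the one concrete improvement. Sample rows are placeholders.

from statistics import mean

votes = [
    {"scenario": "A", "preference": 5, "improvement": "name a budget for outreach"},
    {"scenario": "A", "preference": 3, "improvement": "clarify staffing implications"},
    {"scenario": "B", "preference": 6, "improvement": "pair it with building upgrades"},
]

# A rough read of where support sits, scenario by scenario
for scenario in sorted({v["scenario"] for v in votes}):
    prefs = [v["preference"] for v in votes if v["scenario"] == scenario]
    print(scenario, "mean preference:", round(mean(prefs), 1), "n =", len(prefs))

# The improvements can be re-clustered independently of the numbers,
# by scenario or across the whole set.
improvements = {
    s: [v["improvement"] for v in votes if v["scenario"] == s]
    for s in {v["scenario"] for v in votes}
}
```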
The result of this sense making was the chart you see above. Two hundred pieces of finely grained information ordered by the people themselves. The project team now has at least three things they can do with this material.
- They can recreate the scale, as each post-it is colour- and preference-coded. That way they have a rough idea of the scenario with the greatest support, and they can show anyone who wants to see metrics where things stand on the proposals.
- They can cluster the post-its for each scenario according to “work that will make it better,” which means they don’t have to pay attention to the scale. The scale is completely subjective, but each of these post-its contains one piece of concrete information to make the scenario better, so in some ways the numbers don’t really matter. They can cluster these ideas by each scenario AND they can re-cluster them by topic to give an idea of overall issues that are happening within the organization.
- If we wanted to go a step further, we could use these post-it notes to do a number of Cognitive Edge exercises, including a Cynefin contextualization (which would tell us which things were Obvious, Complicated and Complex, and maybe Chaotic), and we could also do some archetype extraction, which might be very useful indeed for constructing the final report, which would stand as an invitation both to their new personnel and to the congregation.