
I’ve been for a beautiful walk this morning in the warm mist of a spring day in the highlands near Victoria. It was quiet but for the cacophony of bird song, and everything was wet with mist and dew. This is the greenest time of year on the west coast, and the mossy outcroppings and forest floor were verdant.
There is a beauty in what is, in any given moment.
I’ve been thinking about this as I have been struggling with watching people be evaluated in their work recently. My daughter is a jazz musician, training her art in a university program where she is judged on her performance and where that number assigned to that moment in time affects much in her life. My son laid out the papers he has been graded on, showing me a variety of marks that surprised him and made him proud of what he had accomplished. All of it a shallow judgement applied to a limited action in a tiny slice of time. Do these numbers take into consideration my daughter’s love of jazz or my son’s pride in the story he wrote or his ability to solve quadratic equations? Do they take into account how my kids approached this test, what it meant to them, what they were trying to do? How do these numbers track their changes, their growth, the effect that they are having on the world around them?
The evaluator’s job comes with enormous privilege. The privilege is in determining the frame within which the noticing takes place. Poorly done evaluation happens when an evaluator reduces a complex outcome like “impact” into a few arbitrary indicators developed in isolation, with a poorly articulated rationale and little coherence with what is actually happening. When an evaluator walks into a process it is amazing how much gravity also enters the work.
At some point in our culture – and maybe it was always thus – evaluation became something of an investigation used to justify accountability pursued with a particular agenda in mind. Frameworks became both too narrow and too fuzzy. I have been in processes where evaluators wanted a single number on a scale from 1-5 to rate the effectiveness of an experience. And I have been in processes where evaluators are seeking to measure “impact” without ever defining it, or defining it only in terms of how a process has advanced their client’s singular needs and not the needs of the whole ecosystem. I have never seen an evaluation that says to a client “these people are discovering some stuff that has nothing to do with what you funded them for, and therefore your assumptions about change are wrong.”
Done well however, evaluation contributes a tremendous amount of knowledge, awareness and confidence to a process. It allows us to make sense of our work, it opens our eyes to different questions we should be asking and it can put the tools of meaning making in the hands of people doing work. In complex environments, it can give us a new set of senses that help us see and hear and feel what is happening, and that open up promising new directions to nudge an effort.
When evaluation is part of the work it makes a huge difference. When evaluation is a separate project, laid on top of the work or done at a distance, it can bring the work to a standstill as everyone organizes around what the evaluator is looking for instead of where the project is at in its evolution or what the needs are.
Evaluations conducted with principles such as these are amazingly useful and empowering. They are deeply powerful influencers in the life of a project, and they need to be done with intense awareness of this power. We need to demand from our clients, funders and stakeholders a more sophisticated standard of engagement around evaluation, and we need to hold evaluators to these principles too.
There is tremendous beauty in the moments of people working together, learning, creating, trying to improve the lives of others. Some days are rich with green and lush life and others are despairing failures. I would love to read an evaluation report that is as rich as Thoreau’s observations of life at Walden capturing the changes and the beauty, witnessing the growth all around, understanding its meaning and being open to the surprises that come with being immersed in an experience.
I’ll be writing more about this topic in the next little while. What are your longings for or experiences of great evaluation?

A couple of years ago I wrote a post that was critical of the way in which the Representative for Children and Youth in British Columbia drove practice changes among social workers. In short, the reason had to do with applying too much order (rules and checklists) in a complex space (social work practice). At a certain point, when you are trying to prevent deaths that have occurred in the past, you end up outlawing all but the deaths that will surprise you in the future. We look at reviews of child deaths as if they were expected and predictable and create highly ordered accountability mechanisms to prevent them from happening again. The problem with this, as anyone knows who works with complexity, is that you create a break between good social work practice, which is sensitive to nuance and context, and rigorous accountability standards. While no one is arguing that social workers should not be accountable, what is required is the ability for social workers to develop and rely on their practice, because no amount of rules will prevent children from dying in novel ways, but good social work practice does have an effect. In fact, checklists over practice almost ensure children will die in increasingly novel ways, because as social work becomes constrained simply to what is on the checklist, social workers narrow their gaze too much and are unable to detect the weak signals in a situation that would otherwise anticipate a problem before it happens. This is the dilemma between anticipatory and predictive awareness, and getting it wrong is costly.
It’s a brutal example, but I do believe it points to the consequences of accountability models that assume that all outcomes are predictable and negative effects can be prevented with best practices even when it’s proven that they can’t be. (The confusion in that link is perfectly illustrative, by the way. “Child deaths are preventable” on the one hand and “we lack the most basic information about why children die” on the other.) That can be true in ordered systems but not in complex ones. This particular problem has a major implication for philanthropic organizations that are seeking to have “impact.” In many cases, the impact is a pre-defined outcome of a process taken largely in a narrowly defined strategic context. Real life is messy but logic models are sweetly and seductively clean.
Messiness is important, and working in messy ways is a critical skill of philanthropic workers, donors and directors. In this recent article Martin Morse Wooster argues for a loosening of constraints on philanthropic work, and although he doesn’t provide a solid theoretical basis for his assertions, good theory on the limits of managing and measuring impact backs him up.
Many front line philanthropic workers – grants administrators, program staff and consultants – know this approach, but they are often constrained by donors, Boards and executives who demand simple outcomes, simple metrics and clear impact. I’m increasingly interested in putting together specific trainings and learnings for boards and donors that will increase their literacy about messiness, in support of making smart changes and supporting good work in a way that is much more aligned with how community actually works.
One such offering is currently open for enrolment. We are gathering in June in Glasgow and will be repeating the workshop in October in Vancouver. If you’d like it in your neck of the woods, let me know.

Reflecting these days on some two day courses I have coming up, including one on complexity and social change, one on invitation practice and one on Open Space.
Each of these courses is a workshop to introduce people to a practice or a set of practices, as opposed to techniques and skills. In each of these workshops people will come away with an ability to go into the practice, literally as artists. These are not technical trainings designed to download procedures and methods. They are courses that will leave you ready to practice, ready to make mistakes and learn as you go, and ready to improve.
It’s always hard to explain to people when they come on these courses that they will not leave as competent practitioners of the stuff they are learning. All artists make mistakes when they are first using a tool. What’s most important is that you have a way of developing your mastery with a tool, which is to say that you have a framework that helps you understand what you are doing and how well you are doing it. In traditional settings, mentorship is an important piece of this, to help one develop mastery from every attempt as you learn. The point is that these kinds of tools are useful in complexity, meaning that they are context and practitioner dependent. How you use these tools and where you use them matters.
Teaching, therefore, requires a disruption to the pedagogy of filling another person’s brain and body with competence. In my courses, my favourite answer to questions about application is “it depends.” But what doesn’t change over time is the body of theory that needs to inform one’s practice.
Theory is the constant, and therefore heuristics (basic sets of measurable principles) are the way to develop practice that is appropriate in context. By theory I mean a serious understanding, drawn from the natural sciences, of the ways systems work.
Courses that are pure theory are generally not helpful without grounding them in practice, and courses that are just collections of tools and practices are somewhat useful but can lead practitioners astray if they don’t understand why things work (or they aren’t able to see why things aren’t likely to work). So my basic approach to teaching these kinds of things is to use the following heuristic:
- Theory
- Framework
- Practice examples
- Application
Teaching theory – in my case usually complexity theory – is critical for setting the groundwork for the practices that follow. If you don’t understand the nature of the context you are working in, you are likely to make serious errors in applying practices: linear problem solving doesn’t work in non-linear settings. That seems intuitive, but you need to know why and be able to explain it.
Frameworks are helpful because they provide touchstones to connect theory to practice. When we were teaching the harvesting course last year, we came up with the mnemonic PLUME to describe five heuristics that help practitioners design methods that are coherent with good theory. (We have a new one for the invitation course by the way: VALUE. You can learn more about it on the course or in the blog posts that come as a result of the teaching). Sometimes that framework is Cynefin, sometimes it’s the chaordic path.
The important thing about a framework is that it helps you to create something and then it can fall away and what you have created can stand on its own. If your practice relies on maintaining the integrity of the framework then your framework isn’t effective. This is an issue I see sometimes with things like sociocracy where in poor application it’s important that people retain accountability to the framework (but not even necessarily the theory). Frameworks should be important enough containers to inspire grounded and coherent action, but not so critical that the action depends on the framework.
Dave Snowden uses the metaphor of the scaffold, which is useful. Build a scaffold to build your house. But if the scaffolding is a part of your house and your house depends on the scaffolding for its structural integrity, you haven’t succeeded.
Once we are grounded in theory and have a way of carrying it with us, we can share practices that help practitioners to ground this in real life. I always combine this with an opportunity to apply the learning on real projects. This gives people an opportunity to work together to make sense of what they are learning. It means that folks working on projects get a variety of perspectives from people who have just learned something, including naive and oblique perspectives, which is good when you are trying to do new things. Those who offer their help on projects learn a lot by stepping into the coach or critic role, as they are forced to think about what they have been learning in an application context.
So that’s my basic pedagogy these days. I’ve been on a few facilitation workshops over the years and been shocked at two things: the lack of theory (so how do I know how your methods work?) and the over-reliance on tips and tricks, which is basically a kind of addictive mechanism for people learning facilitation. Many people are super-interested in adding a few things to their tool box, and while I love helping people add tools, I would never give an apprentice carver a knife without helping them understand why this thing works and what happens if you use it incorrectly. And I would never say “here’s a knife, now go make your masterpiece.” Their first effort is going to be terrible, and that’s what practice is. We need more folks teaching the art of facilitation as artists teaching artists and less shady selling of recipes and tools for guaranteed success.

For a long time I have known that the idea that culture change can be managed is a myth. A culture is emergent and is the result of millions of interactions, behaviours, artifacts and stories that people build up over time. It is unpredictable and results in surprise. The idea that a “culture change initiative” can be rolled out from the top of an organization is not only a myth, it’s a hidden form of colonization. And worse, the idea that people need to be changed in the way the boss determines if we are to become the kind of place that we all aspire to is cruel and violent.
So what to do when an organization says that its culture needs change? Until I stumbled over David Snowden’s work, I had few practical tools, principles and practices for doing this work. Since working with the theory that Dave has assembled and translating it into praxis, I have come up with a number of principles and practices.
Here are a few key notes for working with people who ask me to help them with that.
Principles
- Culture is an emergent set of patterns that are formed from the interactions between people. These patterns cannot be reverse engineered. Once they exist you need to change the interactions between people if you want to change the patterns.
- Culture includes stories but it is not a story. This is important because simply changing the story of the organization will not change the culture. Instead you need to create ways for people to interact differently and see what comes of it.
- Cultural evolution is not predictable and cannot be led to a pre-determined character. You can aspire all you want to a particular future culture but it is impossible to script or predict that evolution.
Practices
- Start by getting clear about the actual work. In my experience people use the term “culture change” as a proxy for the real work that needs to be done: improving employee relations, becoming more risk tolerant, shifting leadership styles…whatever it is, it’s best to start with getting clear what is ACTUALLY going on before assuming that the problem is the “culture.”
- Look at what actually is. Studying the way things are is important, because that helps you to identify what you are actually doing. It seems simple, but it’s important to do it in a way that doesn’t bring a pre-existing framework to the work. You have to look at the patterns from the work that you already do, not from how it illuminates a pre-existing model.
- Work with emergence to understand patterns together. Using tools such as anecdote circles, organizations can discover the patterns that are present in the current environment. Anecdote circles generate small data fragments that describe actual actions and activities. Taken together and worked through, patterns become clear, like the process of generating a Sierpinski triangle. Out of large data sets, hidden patterns appear.
- Identify those patterns and discuss ways to address them with safe-to-fail experiments. Run a session to create several ideas that are coherent with the patterns, and design multiple small experiments to try to shift them. Institute rigorous monitoring and learning and allow for experiments to fail.
- Support new ideas with appropriate resources. If you really want to change the interactions between people you need to resource these changes with time, money and attention. The enemy of focused innovation is time. Even allowing employees to work on something a half day a week could be enough to create and implement new things. But if they have to do it on top of the full workload they have, nothing will get done.
- Learn as you go. Developmental evaluation is the way to go with new forms of emergent practice. To be strategic about how change is happening, it’s important to design and build in evaluation at the outset.
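The Sierpinski triangle mentioned above is a nice illustration of how pattern emerges from many tiny interactions. As a small sketch (purely illustrative code, not part of any workshop material), the “chaos game” builds the triangle out of thousands of random half-steps: no single step shows the pattern, but the aggregate does, much like anecdote fragments revealing cultural patterns only in the aggregate.

```python
import random

# Chaos game: each step is a tiny, local "interaction" -- pick a random
# corner of a triangle and move halfway toward it. No single step reveals
# anything, but thousands of them trace the Sierpinski triangle.
def chaos_game(steps=50_000, seed=42):
    random.seed(seed)
    corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
    x, y = 0.3, 0.3  # arbitrary starting point
    points = []
    for _ in range(steps):
        cx, cy = random.choice(corners)
        x, y = (x + cx) / 2, (y + cy) / 2
        points.append((x, y))
    return points

# Render a coarse text plot: the triangular pattern only becomes
# visible once many fragments are aggregated.
def render(points, width=60, height=30):
    grid = [[" "] * width for _ in range(height)]
    for x, y in points:
        col = min(int(x * (width - 1)), width - 1)
        row = min(int((1 - y) * (height - 1)), height - 1)
        grid[row][col] = "*"
    return "\n".join("".join(row) for row in grid)

print(render(chaos_game()))
```

Run it and a recognizable Sierpinski pattern appears in the console; inspect any handful of individual points and you see nothing at all, which is the point of working with large sets of small fragments.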
These are just notes and practices, but are becoming standard operating procedures in my world when working with groups and organizations who are trying to address that elusive idea of “culture change.”