
Category Archives: "Evaluation"

From the feed

December 9, 2018 By Chris Corrigan Art of Harvesting, Art of Hosting, Collaboration, Complexity, Evaluation, Links, Philanthropy

Some interesting links that caught my eye this week.

Why Black Hole Interiors Grow (Almost) Forever

Leonard Susskind has linked the growth of black holes to increasing complexity. Is it true that the world is becoming more complex?

“It’s not only black hole interiors that grow with time. The space of cosmology grows with time,” he said. “I think it’s a very, very interesting question whether the cosmological growth of space is connected to the growth of some kind of complexity. And whether the cosmic clock, the evolution of the universe, is connected with the evolution of complexity. There, I don’t know the answer.”

With a Green New Deal, here’s what the world could look like for the next generation

This is the vision I have been asking for from our governments. This is the vision that would get me on board with using our existing oil and gas resources to manufacture and fund the infrastructure to accelerate this future for my kids. The cost of increasing fossil fuel use is so high that it needs to be accompanied by a commitment to a faster transition to this kind of world. Read the whole thing.

Why we suck at 'solving wicked problems'

Sonja Blignault is one of the people in the world with whom I share the greatest overlap of theory and practice curiosities regarding complexity. I know this because whenever she posts something on her blog I almost always find myself wishing I had written it! Here's a great post on five things we can do to disrupt our thinking about problem solving and enable us to work much better with complexity.

Money and technology are hugely valuable resources: they are certainly necessary but they are not sufficient. Simply throwing more money and/or more advanced technology at a problem will not make it go away. We need to fundamentally change our thinking paradigm and approach things in context-appropriate ways, otherwise we will never move the needle on these so-called wicked problems.

rock/paper/scissors and beyond

I miss Bernie DeKoven. Since he died earlier this year I've missed seeing his poetic and playful blog posts about games and fun. Here is one from his archives about variations on rock/paper/scissors.

The relationship between the two players is both playful and intimate. The contest is both strategic and arbitrary. There are rumors that some strategies actually work. Unless, of course, the players know what those strategies are. Sometimes, choosing a symbol at random, without logic or forethought, is strategically brilliant. Other times, it’s just plain silly.

So they play, nevertheless. Believing whatever it is that they want or need to believe about the efficacy of their strategies, knowing that there is no way to know.

The longer they play together, the more mystical the game becomes.

They play between mind and mindlessness. For the duration of the game, they occupy both worlds. The fun may not feel special, certainly not mystical. But the reality they are sharing is most definitely something that can only be found in play.
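
As an aside, the game-theory claim buried in that passage, that playing purely at random can't be exploited no matter what strategy your opponent believes in, is easy to check with a small simulation. Here is a minimal sketch in Python (mine, not Bernie's); the opponent's 60/30/10 bias is an invented stand-in for any "strategy that actually works":

  import random

  MOVES = ["rock", "paper", "scissors"]
  BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

  def play(rounds=100_000):
      wins = losses = draws = 0
      for _ in range(rounds):
          me = random.choice(MOVES)  # no logic or forethought
          them = random.choices(MOVES, weights=[0.6, 0.3, 0.1])[0]  # a heavily "strategic" opponent
          if me == them:
              draws += 1
          elif BEATS[me] == them:
              wins += 1
          else:
              losses += 1
      print(f"wins {wins/rounds:.3f}  losses {losses/rounds:.3f}  draws {draws/rounds:.3f}")

  play()  # all three rates settle near one third, whatever the opponent's bias

Random play guarantees roughly a third each of wins, losses and draws against any opponent, which is exactly why the game stays both arbitrary and strangely compelling.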

How Evaluation Supports Systems Change

An unassuming little article that outlines five key practices that could be the basis of a five-day deep dive into complexity and evaluation. I found this article earlier in the year and have noticed that my own practice and attention keep coming back to these five points over and over.

While evaluation is often conducted as a means to learn about the progress or impact of an initiative, evaluative thinking and continuous learning can be particularly important when working on complex issues in a constantly evolving system. And, when evaluation goes hand in hand with strategy, it helps organizations challenge their assumptions, gather information on the progress, effects, and influence of their work, and see new opportunities for adaptation and change. 


Towards the idea that complexity IS a theory of change

November 7, 2018 By Chris Corrigan Complexity, Design, Emergence, Evaluation, Featured, Learning 20 Comments

In the world of non-profits, social change, and philanthropy it seems essential that change agents provide funders with a theory of change.  This is nominally a way for funders to see how an organization intends to make change in their work.  Often on application forms, funders provide guidance, asking that a grantee provide an articulation of their theory of change and a logic model to show how, step by step, their program will help transform something, address an issue or solve a problem.

In my experience, most of the time "theory of change" is really just another word for "strategic plan": an end point is specified, steps are articulated backwards from that end point, and outcomes are identified along the way. Here's an example. While that is helpful for situations in which you have a high degree of control and influence, and in which the nature of the problem is well ordered and predictable, such plans are not useful with complex, emergent problems. Most importantly, they are not theories of change but descriptions of activities.

For me a theory of change is critical. Looking at the problem you are facing, ask yourself: how do these kinds of problems change? If, for example, we are trying to work on a specific change to an education policy, the theory of change needs to be based on the reality of how policy change actually happens. To change policy you need to be influential enough with the government in power to be able to design and enact your desired changes with politicians and policy makers. How does policy change? Through lobbying, a groundswell of support, pressure during elections, participation in consultation processes and so on. From there you can design a campaign – a strategic plan – to see if you can get the policy changed.

Complex problems are a different beast altogether. They are non-linear, unpredictable and emergent. Traffic safety is an example. A theory of change for these kinds of problems looks much more like the dynamics of flocking behaviour: the problem changes through many, many small interactions and butterfly effects. A road safety program might work for a while until new factors come into play, such as distractions, raised speed limits, or increased use of particular sections of road. Suddenly the problem changes in a complex and adaptive way. It is not logical or rational, and one certainly can't predict the outcome of actions.
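
For readers who like to see the "butterfly effect" rather than take it on faith, here is a toy sketch of my own (the logistic map, a standard chaos demonstration, not a traffic model): two nearly identical starting states follow the same simple rule and still end up in completely different places.

  def step(x, r=3.9):
      # the logistic map: a simple, deterministic rule with chaotic behaviour
      return r * x * (1 - x)

  a, b = 0.2, 0.2 + 1e-9  # two starting states that differ by one part in a billion
  for n in range(1, 41):
      a, b = step(a), step(b)
      if n % 10 == 0:
          print(f"step {n:2d}: a={a:.6f}  b={b:.6f}  gap={abs(a - b):.6f}")
  # by about step 40 the two trajectories bear no resemblance to each other

A complex social system is vastly messier than this one-line rule, but the lesson carries: small differences compound, and long-range prediction of specific outcomes is off the table.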

In my perfect world it would be acceptable for grantees to say that "Our theory of change is complexity." Complexity, to quote Michael Quinn Patton, IS a theory of change. Understanding that reality has radical implications for doing change work. This is why I am so passionate about teaching complexity to organizations and especially to funders. If funders believe that all problems can be solved with predictive planning and a logic model adhered to with accountability structures, then they will constrain grantees in ways that prevent them from actually addressing the nature of complex phenomena. Working with foundations to change their grant forms is hugely rewarding, but it needs to be supported with change theory literacy at the more powerful levels of the organization and with those who are making granting decisions.

So what does it look like?

I’m trying these days to be very practical in describing how to address complex problems in the world of social change. For me it comes down to these basic activities:

Describe the current state of the system. This is a process of describing what is happening. It can be done through a combination of looking at data, conducting narrative research and, indeed, sitting in groups full of diversity and different lived experience and talking about what's going on. If we are looking at road safety we could say "there have been 70 accidents here this year" or "I don't feel safe crossing the road at this intersection." Collecting data about the current state of things is essential, because no change initiative starts from scratch.

Ask what patterns are occurring in the system. Gathering scads of data will reveal patterns that repeat and recur in the system. Being able to name these patterns is essential. It often looks as simple as "hey, do you notice that there are way more accidents at night concentrated on this stretch of road?" Pattern logic, a process used in the Human Systems Dynamics community, is one way that we make sense of what is happening. It is an essential step because in complexity we cannot simply solve problems; instead we seek to shift patterns.
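
To make that concrete, here is a minimal sketch of what a first pass at pattern-spotting can look like when some of the data is tabular. The records, place names and the threshold for "night" below are all invented for illustration; real narrative research obviously doesn't reduce to a Counter.

  from collections import Counter

  # invented accident records: (road section, hour of day 0-23)
  accidents = [
      ("bridge approach", 23), ("bridge approach", 22), ("bridge approach", 1),
      ("bridge approach", 21), ("main intersection", 9), ("main intersection", 17),
      ("bridge approach", 0), ("school zone", 15), ("bridge approach", 23),
  ]

  totals = Counter(section for section, _ in accidents)
  at_night = Counter(section for section, hour in accidents if hour >= 21 or hour <= 5)

  for section, total in totals.most_common():
      print(f"{section}: {total} accidents, {at_night[section]} at night")
  # "hey, do you notice that there are way more accidents at night on this stretch of road?"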

Ask yourself what might be holding these patterns in place. Recently I have been doing this by asking groups to look at the patterns they have identified and answer this question: "If this pattern were the result of a set of principles and advice that we have been following, what would those principles be?" This helps you to see the structures that keep problems in place, and that is essential intelligence for strategic change work. It is an adaptation of part of the process called TRIZ, which seeks to uncover principles and patterns. So in our road safety example we might say that "make sure you drive too fast in the evening on this stretch of road" is a principle that, if followed, would increase danger at this intersection. Ask what principles would give you the behaviours that you are seeing. You are trying to find principles that are hypotheses: things you can test and learn more about. Those principles are what you are aiming to change in order to shift behaviour. A key piece of complexity as a theory of change is that constraints influence behaviour. These are sometimes called "simple rules," but I'm going to refer to them as principles, because that will dovetail better with a particular evaluation method later on.

Determine a direction of travel towards "better." As opposed to starting with an end point in sight, in complexity you get to determine which direction you want to head towards, and you get to do it with others. "Better" is a set of choices you get to make, and those choices can be socially constructed and socially contested. "Better" is not inevitable and it cannot be predicted, but choosing an indicator like "fewer accidents everywhere and a feeling of safety amongst pedestrians" will help guide your decisions. In a road safety initiative this will direct you towards a monitoring strategy and towards context-specific actions for certain places that are more unsafe than others. Note that "eliminating accidents" isn't possible, because the work you are trying to do is dynamic and adaptive, and changes over time. The only way to eliminate accidents is to ban cars. That may be one strategy, and in certain places that might be how you do it. It will of course generate other problems, and you have to be aware of and monitor for those as well. In this work we are looking for what is called an "adjacent possible" state for the system. What can we possibly change to take us towards a better state? What is the system inclined to do? Banning cars might not be that adjacent possible.

Choose principles that will help guide you away from the current state towards "better." It's a key piece of complexity as a theory of change that constraints in a system cause emergent actions. One of my favourite writers on constraints is Mark O'Sullivan, a soccer coach with AIK in Sweden. He pioneers and researches constraint-based learning for children at the AIK academy. Rather than teach children strategy, he creates the conditions so that they can discover it for themselves. He gives children simple rules to follow in constrained, game-simulated situations and lets them explore and experiment with solutions to problems in a dynamic context. In this presentation he shows a video of kids practicing simple rules like "move away from the ball" and "pass" and watches as they discover ways to create and use space, which is an essential tactical skill for players, but which cannot be taught abstractly and must be learned in application. Principles aimed at changing the constraints will help you design interventions to shift patterns.

Design actions aimed at shifting constraints and monitor them closely. Using these simple rules (principles) and a direction of travel, you can begin to design and try actions that give you a sense of what works and what doesn't. These are called safe-to-fail probes. In the road safety example, probes might include placing temporary speed bumps on the road, installing reflective tape or silhouettes on posts at pedestrian crossings, or placing a large object on the road to constrain the driving lanes and cause drivers to slow down. All of these probes will give you information about how to shift the patterns in the system, and some might produce results that will inspire you to make them more permanent. But in addition to monitoring for success, you also have to monitor for emergent side effects. Slowing traffic down might increase delays for drivers, meaning that they drive with more frustration, meaning more fender benders elsewhere in the system. Complex adaptive systems produce emergent outcomes. You have to watch for them.
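
A probe is only "safe to fail" if you have decided in advance what you are watching for, including the side effects. Here is one way (my own sketch, not a formal method from this post) to keep a probe and its monitoring signals side by side; the speed-bump example and the figures in it are invented.

  from dataclasses import dataclass, field

  @dataclass
  class Probe:
      description: str
      signals_of_success: list
      side_effects_to_watch: list
      observations: list = field(default_factory=list)

      def review(self):
          # the decision to amplify, dampen or redesign stays a human judgement call
          print(f"Probe: {self.description}")
          for note in self.observations:
              print(f"  observed: {note}")

  speed_bumps = Probe(
      description="temporary speed bumps on the night-time stretch",
      signals_of_success=["lower average speed", "pedestrians report feeling safer"],
      side_effects_to_watch=["driver frustration", "fender benders elsewhere in the system"],
  )
  speed_bumps.observations.append("average speed down 12 km/h in week one")  # invented figure
  speed_bumps.observations.append("two complaints about delays from delivery drivers")
  speed_bumps.review()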

Evaluate the effectiveness of your principles in changing the constraints in the system. Evaluation in complex systems is about monitoring and watching what develops as you work. It is not about measuring the results of your work, doing a gap analysis and making recommendations. There are many, many approaches to evaluation, and you have to be smart in using the methods that work for the nature of the problem you are facing. In my opinion we all need to become much more literate in evaluation theory, because done poorly, evaluation can have the effect of constraining change work into a few easily observed outcomes. One form of evaluation that is getting my attention is principles-based evaluation, which helps you to look at the effectiveness of the principles you are using to guide action. This is why using principles as a framework helps to plan, act and evaluate.

Monitor and repeat. Working on complex problems has no end. A traffic safety initiative will change over time due to factors well outside the organization's control, and so there can never be an end point to the work. Strategies will have an effect and then you need to look at the current state again and repeat the process. Embedding this cycle in daily practice is actually good capacity building, and teams and organizations that can do this become more responsive and strategic over time.

Complexity IS indeed a theory of change. I feel like I'm on a mission to help organizations, social change workers and funders get a sense of how and why adapting to that reality is beneficial all round.

How are you working with complexity as a theory of change?


Struggling to pick up the trash in the face of weaponized evaluation

October 7, 2018 By Chris Corrigan Democracy, Evaluation, Featured 5 Comments

Most of my work lies with the organizations of what Henry Mintzberg calls "The plural sector." These are the organizations tasked with picking up the work that governments and corporations refuse to do. As we have sunk further and further into the 40-year experiment of neo-liberalism, governments have abandoned the space of care for communities and citizens, especially if that care clashes with an ideology of reducing taxes to favour the wealthy and the largely global corporate sector. Likewise, on the corporate side, a singular focus on shareholder return and the pursuit of capital-friendly jurisdictions with low tax rates and low wages means that corporations can reap economic benefits without any responsibility for the social effects of their policy influence.

Here’s how Mintzberg puts it, in a passionate defence of the role of these organizations:

“We can hardly expect governments—even ostensibly democratic ones—that have been coopted by their private sectors or overwhelmed by the forces of corporate globalization to take the lead in initiating radical renewal. A sequence of failed conferences on global warming has made this quite clear.

Nor can private sector businesses be expected to take the lead. Why should they promote changes to redress an imbalance that favors so many of them, especially the most powerful? And although corporate social responsibility is certainly to be welcomed, anyone who believes that it will compensate for corporate social irresponsibility is not reading today’s newspapers.”

What constantly surprises me in this work is how much accountability is placed on the plural sector for achieving outcomes around issues that they have so little role in creating.

While corporations are able to simply externalize effects of their operations that are relevant to their KPIs and balance sheets, governments are increasingly held to account by citizens for failing to make significant change with ever-reduced resources and regulatory influence. Strident anti-government governments are elected and immediately set out to dismantle what is left of the government's role, peddling platitudes such as "taxation is theft" and associated libertarian nonsense. They generally, and irresponsibly, claim that the market is the better mechanism for solving social problems, even though the market has been shown to be a psychotic beast hell-bent on destroying local communities, families and the climate in pursuit of its narrowly focused agenda. In the forty years since Reagan, Thatcher and Mulroney went to war against government, the market has failed on nearly every score to create secure economic and environmental futures for all peoples. And it has utterly stripped entire nations of wealth and resources, causing their people to flee the ensuing wars, depressions and environmental destruction. Migrants run headlong into the very countries that displaced them in the first place and meet a hostile resistance there. Xenophobia and racism get channeled into policy and simply increase the rate of exploitation and wealth concentration.

And yet, the people I know who struggle under the most pressure to prove their worth are those in the organizations of the plural sector, who are subject to onerous and ontologically incorrect evaluation criteria aimed, presumably, at assuring their funders that the rabble are not only responsibly spending money (which is totally understandable) but also making a powerful impact on issues which are driven by forces well outside their control.

I’m increasingly understanding the role of a great deal of superficial evaluation in actually restricting the effectiveness of the plural sector, so that it is relegated to harm reduction for capitalism rather than pursuing the radical reforms to our global economic system that will lead to sustainability. It’s frustrating for so many on the frontlines, and it has led to calls for much more unrestricted granting in order to allow organizations to effectively allocate their resources, respond to emerging patterns, and learn from their work.

There are some fabulous people working in the field of evaluation to try to disrupt this dynamic by developing robust methods of complexity-informed research in support of what the front line of the plural sector is tasked with. The battle now, especially now that science itself is under attack, is to make these research methods widely understood and effective not simply in evaluating the work of the plural sector but also in shining a light on the clear patterns at play in our economic system.

I’ll be running an online course in the winter with Beehive Productions where we look at evaluation from the perspective of facilitators and leaders of social change. We won’t shy away from this conversation as we look at where evaluation practice has extended beyond the narrow confines of program improvement and into the larger social conversation. We will look at history and power, and at how evaluation is weaponized against radical reform in favour of, at best, sustaining good programs and, at worst, shutting down effective work.


The limits of certainty

September 28, 2018 By Chris Corrigan Complexity, Evaluation, Featured

An interesting review essay by John Quiggin looks at a new book by Ellen Broad called Made by Humans: The AI Condition. Quiggin is intrigued by Broad’s documentation of the way algorithms have changed over the years, from originating as “a well-defined formal procedure for deriving a verifiable solution to a mathematical problem” to becoming a formula for predicting unknown and unknowable futures. Math problems that benefit from algorithms fall firmly in the Ordered domains of Cynefin. But the problems that AI is now being deployed upon are complex and emergent in nature, and therefore instead of producing certainty and replicability, AI is being asked to provide probabilistic forecasts of the future.

For the last thousand years or so, an algorithm (derived from the name of an Arab mathematician, al-Khwarizmi) has had a pretty clear meaning — namely, it is a well-defined formal procedure for deriving a verifiable solution to a mathematical problem. The standard example, Euclid’s algorithm for finding the greatest common divisor of two numbers, goes back to 300 BCE. There are algorithms for sorting lists, for maximising the value of a function, and so on.


As their long history indicates, algorithms can be applied by humans. But humans can only handle algorithmic processes up to a certain scale. The invention of computers made human limits irrelevant; indeed, the mechanical nature of the task made solving algorithms an ideal task for computers. On the other hand, the hope of many early AI researchers that computers would be able to develop and improve their own algorithms has so far proved almost entirely illusory.


Why, then, are we suddenly hearing so much about “AI algorithms”? The answer is that the meaning of the term “algorithm” has changed. A typical example, says Broad, is the use of an “algorithm” to predict the chance that someone convicted of a crime will reoffend, drawing on data about their characteristics and those of the previous crime. The “algorithm” turns out to over-predict reoffending by blacks relative to whites.


Social scientists have been working on problems like these for decades, with varying degrees of success. Until very recently, though, predictive systems of this kind would have been called “models.” The archetypal examples — the first econometric models used in Keynesian macroeconomics in the 1960s, and “global systems” models like that of the Club of Rome in the 1970s — illustrate many of the pitfalls.

A vast body of statistical work has developed around models like these, probing the validity or otherwise of the predictions they yield, and a great many sources of error have been found. Model estimation can go wrong because causal relationships are misspecified (as every budding statistician learns, correlation does not imply causation), because crucial variables are omitted, or because models are “over-fitted” to a limited set of data.


Broad’s book suggests that the developers of AI “algorithms” have made all of these errors anew. Asthmatic patients are classified as being at low risk for pneumonia when in fact their good outcomes on that measure are due to more intensive treatment. Models that are supposed to predict sexual orientation from a photograph work by finding non-causative correlations, such as the angle from which the shot is taken. Designers fail to consider elementary distinctions, such as those between “false positives” and “false negatives.” As with autonomous weapons, moral choices are made in the design and use of computer models. The more these choices are hidden behind a veneer of objectivity, the more likely they are to reinforce existing social structures and inequalities.


The superstitious reverence with which computer “models” were regarded when they first appeared has been replaced by (sometimes excessive) scepticism. Practitioners now understand that models provide a useful way of clarifying our assumptions and deriving their implications, but not a guaranteed path to truth. These lessons will need to be relearned as we deal with AI.


Broad makes a compelling case that AI techniques can obscure human agency but not replace it. Decisions nominally made by AI algorithms inevitably reflect the choices made by their designers. Whether those choices are the result of careful reflection, or of unthinking prejudice, is up to us.

In general I think that scientists understand the limits of this approach to modelling, and that was borne out in several discussions that I had with ecologists last week in Quebec. We do have to define what we mean by “prediction,” though. Potential futures can be predicted with some probability if you understand the nature of the system, but exact outcomes cannot be predicted. However, we (by whom I mean the electorate and the policy makers who work to make single decisions out of forecasts) do tend to venerate predictive technologies because we cling to the original definition of an algorithm, and we can come to believe that the model’s robustness is enough to guarantee the accuracy of a prediction. We end up trusting forecasts without understanding probability, and when things don’t go according to plan, we blame the forecasters rather than our own complexity illiteracy.
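
To make the shift in meaning concrete, here is a small sketch of my own (not Quiggin's or Broad's) contrasting the two senses of the word. Euclid's procedure returns an answer anyone can verify; the toy Monte Carlo "model" below, built on invented assumptions about a noisy trend, can only ever return a probability statement.

  import math
  import random

  def gcd(a, b):
      # an algorithm in the classical sense: a verifiable, repeatable procedure
      while b:
          a, b = b, a % b
      return a

  print(gcd(1071, 462))  # 21, every single run, and the answer can be checked

  def forecast(trials=10_000):
      # a "model": invented assumptions (2% growth, 5% noise) produce a spread of futures
      outcomes = [100 * (1 + random.gauss(0.02, 0.05)) for _ in range(trials)]
      mean = sum(outcomes) / trials
      spread = math.sqrt(sum((x - mean) ** 2 for x in outcomes) / trials)
      return mean, spread

  mean, spread = forecast()
  print(f"forecast: {mean:.1f} plus or minus {spread:.1f}")  # a probability, not a verifiable fact

The danger Quiggin points to is treating the second kind of output with the confidence the first kind has earned.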


Principles-focused evaluation and racial equity

June 4, 2018 By Chris Corrigan Collaboration, Community, Evaluation, Featured, First Nations 8 Comments

I was happy to be able to spend a short time this week at a gathering of Art of Hosting practitioners in Columbus, Ohio. People had gathered from across North America and further afield to discuss issues of racial equity in hosting and harvesting practices. I’ve been called back home early to deal with a broken pipe and a small flood in my house, but before I left I was beginning to think about how to apply what I was learning with respect to strategy and evaluation practices. I was going to host a conversation about this, but instead, I have a 12 hour journey to think with my fingers.

My own thinking on this topic has largely been informed by the work I’ve done over thirty years at the intersection between indigenous and non-indigenous communities and people in Canada. Recently this work has been influenced by the national conversation on reconciliation. That conversation, which started promisingly, has been treated with more and more cynicism by indigenous people, who are watching non-indigenous Canadians pat themselves on the back for small efforts while large issues of social, economic and political justice have gone begging for attention. Reconciliation is gradually losing its ability to inspire transformative action. And people are forgetting the very important work of truth coming before reconciliation. Truth is hard to hear. Reconciliation is easy to intend.

As a result, I’m beginning to suggest to some non-indigenous groups that they should not think of their work as attempting to get to reconciliation, but instead focus on work with indigenous communities that has a real, tangible and material impact on indigenous people. Reconciliation can then be a by-product and a way of evaluating the work while we work together to achieve positive effects.

So my question now is, what if reconciliation was one of the ways we evaluated work done with indigenous communities, and not as an end in itself?

x x x

“Every action happens within a frame and the frame is very important.”

— Maurice Stevens, on Sunday prefacing a story he told about race.

Evaluation is a very powerful tool because it is often a hidden frame that guides strategic work. Ethical evaluators work hard to prevent their work from becoming an intervention that determines the direction of a project. In work that involves social change, poorly designed evaluation can narrow the work to a few isolated outcomes, and leave people with the impression that complex problems can only be addressed by linear and predictable planning practices.

Wielded unconsciously, evaluation can be a colonizing tool, introducing ways of knowing that are alien to the cultures of the communities that are doing the work. Sometimes called “epistemic violence,” this kind of intervention devalues and erases the ways participants themselves make sense of their world and know about their work, and the standards by which they value an action as good.

Complexity demands of us that we work towards an unknowable and unpredictable future in a direction that we agree is good, useful, and desirable. Agreeing together on what is good and desirable for a project should be the work of the people upon whom the project will have a direct effect. The principle of “Nothing about us without us” captures this ethical imperative. In complex adaptive systems and problems, outcomes are impossible to predict and the ways forward need to be discovered. Imposing a direction or a destination can have a substantial negative impact on the ability of a community to address its issues in a way that is meaningful to the community. Many projects fail because they become about achieving a good evaluation score. It is a powerful attractor in a system.

Evaluation frameworks are based on stories about how we believe change happens. I have seen many examples of these stories over the years:

  • An orderly sequence of steps will get you to your goal.
  • The people need to be changed in order for a new world to arise.
  • Leadership must go to the mountain of enlightenment and bring down a new set of brilliant teachings to lead the people in a different direction.
  • We are feeling our way through the woods, discovering the truth as we go.
  • Life is like navigating a storm-tossed sea, and our ability to get where we are going relies on our ability to understand how the ship and the weather and the ocean work.
  • If only we can put the parts together in a greater whole, then the collective impact we desire will be made.

You can probably name dozens of the archetypal stories that underlie the way you’ve made sense of projects you are involved in. But how often are these stories questioned? And what if the stories we use to frame our evaluation and ways of knowing about what’s good are based on stories that are not relevant or, worse, dangerous, in the context in which we are working?

I once sat with Jake Swamp, a well-known Mohawk elder who told me a story of the numerous times that he met with the Dalai Lama. Jake said that he and the Dalai Lama often discussed peace, as that was a key focus of their work, and their approaches to peace differed quite substantially. To paraphrase Jake, for the Dalai Lama, peace was attainable through individual practice and enlightenment, mainly through personal meditation. Jake offered a different view, based on the Great Law of Peace, which is the set of organizing principles for the Haudenosaunee Confederacy. In this context, individuals achieving a state of peace separate from their family and clan are dangerous to the whole. For Jake, peace is an endeavour to be worked on collectively and in relationship, and the difference for him was critical.

Imagine an evaluator, then, working with the Dalai Lama’s ideas of peace and applying them to the workings of the Haudenosaunee Confederacy. A de-emphasis on personal practice would get a failing grade. The story of how to achieve peace determines what the evaluator looks for, and if the evaluator were a practicing Tibetan Buddhist, for example, they might not even be able to see how Haudenosaunee chiefs, clan mothers, families, and communities were working on maintaining peace.

This happens all the time with evaluation practice. The stories and lenses that evaluators use determine what they see, and their intervention in the project often determines the direction of the work.

x x x

Recently several colleagues and I attended a workshop with Michael Quinn Patton who was introducing the new field of principles-focused evaluation. I got excited at this workshop, not only because Quinn Patton is an important theorist who has brought complexity thinking into the evaluation world, but also because this new approach offers some promise for how we might evaluate the principles that actively shape the way we plan, work and evaluate action.

Interventions in complex systems rely on the skillful use of constraints. If you constrain action too tightly – through rules and regulations and accountability for unknowable outcomes – you get people gaming the system, taking reductionist approaches to problems by breaking them into easily achievable chunks, and generally avoiding the difficult and uncomfortable work in favour of doing what needs to be done to pass the test. It does not result in systemic change, but a lot of work gets done. However, if you apply constraints too loosely and offer no guideposts at all, work goes in many different directions, money and energy get stretched, and the impact is diffuse, if even noticeable at all.

The answer is to guide work with principles that are flexible and yet strong enough to keep everyone moving in a desirable direction. You need a malleable riverbank, not a canal wall or a flooded field. Choose principles that will help keep you together and do good work, and evaluate the effectiveness of those principles to achieve effective means and not simply desired ends.

Quinn Patton gives a useful heuristic for developing effective principles for complexity work. These principles are remembered by the acronym GUIDE (explanations are mine):

  • GUIDING: Principles should give you a sense of direction
  • USEFUL: Principles should help you make a decision when you find yourself in a new context
  • INSPIRATIONAL: Principles should inspire new action
  • DEVELOPMENTAL: Principles should be able to evolve with time and practice to meet new contexts
  • EVALUABLE: You should be able to know whether you are following a principle or not.

Because principles-focused evaluation – and I would say principles-based planning – are context dependent, one has a choice about what principles to use. If I was evaluating the Dalai Lama’s approach to peace making I might use a principle like:

The development of individual mindfulness practice twice a day is essential to peace.

If I was working with Jake perhaps we might use a principle like:

A chief must be in good relation with his clan mothers in order to deliberate in the longhouse to maintain peace.

Principles are then used to structure action so that it happens in a certain way, and evaluation questions are designed to discover how well people are able to use these principles and whether they had the desired effect. Using monitoring processes, rapid feedback, storytelling and reflection means that the principles themselves become the thing that is also evaluated, in addition to outcomes and other learning that goes on in a project.

The source of those principles is deeply rooted in stories and teachings from the culture that is pursuing peace and peacefulness. It is very useful for those principles to be applied within their own context, but very ineffective for them to be applied in the other context.

And so perhaps you can see what this has to do with reconciliation – and racial justice – as an evaluation framework and not necessarily a stated outcome. If reconciliation and racial justice are a consequence of the WAY we work together instead of an outcome we know how to get to, then we must place our focus on evaluating the principles that guide our work together, no matter what that work is, so that in doing it, we increase racial equity.

It is entirely possible for settler-colonial governments to do work that benefits indigenous communities without that work contributing towards reconciliation. The federal government could choose to fund the installation and maintenance of safe running water systems in all indigenous communities, and impose that on First Nations governments, sending in their own construction crews and holding maintenance contracts without involvement of First Nations communities. The outcome of the project might be judged to be good, but doing it that way would be against several principles of reconciliation, including the principle of working in relationship. Everyone would have running water – which is desperately needed – but the cause of reconciliation might be set back. Ends and means both matter.

x x x

So this brings me to practicalities. How can we embed racial justice, equity or reconciliation in our work using the evaluation of principles?

Part of the work of racial justice and reconciliation is to work from the stories and ways of knowing of groups that have been marginalized by privilege and colonization. We often work hard – but often not hard enough – to include people in the design of the participatory strategic and process work that affects their communities, but it is rare in my experience that those same voices and ways of knowing are included in the evaluation of that work. If reconciliation and justice are to ALSO be an outcome of development work, then the way to create evaluation frameworks is to work with the stories of the community and question the implicit narrative and value structures of the evaluators.

This can be done by, for example, having Elders and traditional storytellers share important traditional stories of justice or relationship with project participants and then convening participants in a workshop to identify the values and principles that come through the teachings in these stories. Making these principles the core around which the evaluation takes place, and including the storytellers and Elders in the evaluation of the effectiveness of those principles within the project over time, seems to me to be a simple and direct way to embed the practice of racial justice and reconciliation in the work of funding and resourcing projects in indigenous communities.

I am not a professional evaluator, but my interest in the field is central to the work that I do, and I have seen for years the impact that evaluation has had on the projects I have been involved in. Anything that disrupts traditional evaluation to open up frameworks to different ways of knowing holds tremendous value for undermining the hidden effects of whiteness and privilege that thread through typical social change work supported by large foundations and governments.

But from this reflection, perhaps I can offer my own cursory principles of disrupting evaluation to build more racial equity into the work I do. How about these:

  • Work with stories about justice and relationship from the communities that are most affected by the work.
  • Have members of those communities tell the stories, distill the teachings and create the principles that can be used to evaluate the means of social change work.
  • Include storytellers and wisdom keepers on the evaluation team to guide the work according to the principles.
  • Create containers and spaces for people of privilege to be stretched and challenged to stay in the work despite discomfort, unfamiliarity and uncertainty. As my friend Tuesday Ryan-Hart says, “relationship is the result.”

I’ll stop there for now and invite you to digest this thinking. If you are willing to offer feedback on this, I’m willing to hear it.

