A couple of years ago I wrote a post that was critical of the way in which the Representative for Children and Youth in British Columbia drove practice changes among social workers. In short, the problem was applying too much order (rules and checklists) in a complex space (social work practice). At a certain point, when you are trying to prevent deaths that have occurred in the past, you end up outlawing all but the deaths that will surprise you in the future. We treat reviews of child deaths as if those deaths were expected and predictable, and we create highly ordered accountability mechanisms to prevent them from happening again. The problem with this, as anyone who works with complexity knows, is that you create a break between good social work practice, which is sensitive to nuance and context, and rigorous accountability standards. No one is arguing that social workers should not be accountable, but what is required is the ability for social workers to develop and rely on their practice, because no amount of rules will prevent children from dying in novel ways, while good social work practice does have an effect. In fact, privileging checklists over practice almost ensures that children will die in increasingly novel ways: as social work becomes constrained to what is on the checklist, social workers narrow their gaze too much and are unable to detect the weak signals in a situation that would otherwise let them anticipate a problem before it happens. This is the dilemma between anticipatory and predictive awareness, and getting it wrong is costly.
It’s a brutal example, but I do believe it points to the consequences of accountability models that assume all outcomes are predictable and that negative effects can be prevented with best practices, even when it’s proven that they can’t be. (The confusion in that link is perfectly illustrative, by the way: “child deaths are preventable” on the one hand and “we lack the most basic information about why children die” on the other.) That assumption can hold in ordered systems but not in complex ones. This particular problem has a major implication for philanthropic organizations that are seeking to have “impact.” In many cases, the impact is a pre-defined outcome of a process undertaken in a narrowly defined strategic context. Real life is messy, but logic models are sweetly and seductively clean.
Messiness is important, and working in messy ways is a critical skill for philanthropic workers, donors and directors. In this recent article Martin Morse Wooster argues for a loosening of constraints on philanthropic work, and although he doesn’t provide a solid theoretical basis for his assertions, good theory on the limits of managing and measuring impact backs him up.
Many front-line philanthropic workers – grants administrators, program staff and consultants – understand this, but they are often constrained by donors, Boards and executives who demand simple outcomes, simple metrics and clear impact. I’m increasingly interested in putting together specific trainings and learning opportunities for boards and donors that will increase their literacy in messiness, in support of making smart changes and supporting good work in a way that is much more aligned with how community actually works.
One such offering is currently open for enrolment. We are gathering in June in Glasgow and will repeat the workshop in October in Vancouver. If you’d like it in your neck of the woods, let me know.