I travel around many different kinds of organizations. Many of them preach a mantra that goes something like “it’s okay to fail here. Please take risks and try new things!” Unfortunately, when I look around, I rarely see infrastructure in place that makes the work context safe enough to fail in.
An organization needs to build learning and experimentation into its operations, especially if it must respond to changing conditions, improve its services, or pursue new ideas. And so the idea that “we want people to take risks” is promoted, often alongside an exhortation to do so prudently, but with no further direction than that.
Anyone who has worked in a large organization will know that risk-taking is perilous. There are many ways to be punished for doing something wrong, and the worst punishments are the invisible ones: shaming, exclusion, a tattered reputation, eroded trust, political maneuvering that takes you away from access to power and influence. Not to mention the material punishments of reduced budgets, demotions, poor performance reviews, and limited permission to try new things in the future.
Failure in context
Before going any further, let’s talk about what I mean by failure. Using Cynefin, we can focus on the difference between failure in complicated contexts and failure in complex contexts. When we have a complicated failure in a stable, linear, and predictable system, the answer is to fix it right away. Ensure you have the right experts on tap, do a good analysis of the situation, and apply a solution.
In complex adaptive systems, failure is context-dependent. Here failure is an inevitable part of learning and doing new things. Because complex problems demand that we create emergent solutions, we are most likely to get somewhere when we can try many different things and see what works and what doesn’t. Dave Snowden calls this “safe-to-fail”: taking a small bet, based on a hunch that what you are doing is coherent with the nature of the system and where you want to go, and acting to see what happens. If it fails, you stop it, and if it works, you support it.
I think I once heard Dave say something like “probes in a system should fail 8 out of 10 times or you aren’t trying to find emergent practice.” That is certainly a rubric I find helpful. It means that in developing new things you should expect to fail 80% of the time, and doing that requires putting in place a system for supporting failure and learning.
Stuck on a cliff
Imagine you are free rock climbing – no ropes or belaying – and there is a handhold you are reaching for that requires you to do something you’ve never done before. Your partner says “you’ll never learn to solve this problem if you don’t try something. Don’t be afraid to fail.” Far from being imbued with confidence, you are likely to be frozen with fear, seeing all the ways things could go wrong. Better to stick to what you know and not try the move.
If, however, you are in the same scenario but roped up and belayed by someone you trust, you can feel safe to try the move, knowing that if you fail you will be caught and will have a chance to try a different strategy. As you develop mastery of the move, you can use it more and more in your rock climbing life, and you may loosen the safety constraints as you develop more capability.
Implications for facilitation and leadership
Safety is about creating good constraints so that your people can take risks and know they will be safe if they get it wrong. The job of leaders is to set the constraints for action in such a way that a safe space is available for work. These constraints can take the form of limits on time, money, scope of action, or other things, so that folks know what they can and cannot do. Within that space, leaders need to trust people to do their learning and to create feedback loops that share the results of experiments with the bigger system.

If you can have people all working separately on the same problem – working in parallel, as we would say in Cognitive Edge-speak – then you increase the chances of many more failures and of finding many different ways to do things. This is called “distributed cognition” in complex facilitation, and keeping people from influencing each other increases the creative possibilities within constraints.
The next level of this practice is to honestly incentivize failure. Give a reward to the person or team with the best report of their failure – the one that helps us all learn the most. You could easily do this in an innovation meeting by having different groups work on a problem in a fixed amount of time. Watch for the group that fails to get anywhere by the end of the time and ask them to share WHY they failed. Their experience will be a cautionary tale for the whole system.
Almost every organization I work with says that it embraces learning, tolerates failure, and wants its employees to take more risks. When I ask to see how they do this, it’s rare to find organizations that have a formal process for it. Without that in place, employees will always respond to these kinds of platitudes with a little fear and trembling and, in general, take fewer risks when risk-taking clashes with their stated deliverables.