Our traditional Boxing Day movie this year was The Imitation Game, the new film about Alan Turing and his team’s efforts to break the Enigma code used by the German Navy in the Second World War. While the film itself was good, it was full of fictional scenes intended to point at some of the interesting things that happened at Bletchley Park during the war. Having done a bit of reading on the subject, it’s clear that the film simplified many things, took liberties with others and glossed over what is a really interesting story, but the movie itself still holds up, even if Cumberbatch basically turns Turing into his Sherlock Holmes.
At any rate, there were a few things in the film that provided interesting reflections on some of the ideas I have been working with and learning about, both through my study of Cynefin dynamics (the way problems and solutions move through the Cynefin domains) and through the two loop theory of change, which I am using a lot. So here are a few examples.
Solving problems obliquely. Complex problems can’t be solved by taking a head-on, brute force approach to the solution. The film is basically about this writ large, but one vignette stands out as interesting. When Turing needs new staff he devises a way to find them by running a crossword contest in a newspaper. Anyone who solves the puzzle in under ten minutes gets contacted by MI6 and invited to come and write a test. Although this is not how Joan Clarke joined the project, it was a good way of finding talent while sidestepping the confirmation biases that riddled the intelligence establishment (in this case, gender bias).
Disintermediated sensemaking. The idea of letting everyone have the data and find patterns in it is an important aspect of working with complexity. While the problems the team were solving were indeed complicated, they needed to exploit complex human behaviours in order to have a chance of solving them. A complicated problem is solvable with enough expertise, and indeed a code HAS to be solvable if it is to work; if you don’t want others to solve it, you simply make the encryption keys so elaborate that there isn’t enough time in the history of the universe to work through them. So while in theory code breaking is a merely technical problem, in practice you have to narrow down the permutations to make it possible for the technical solutions to be applied. At Bletchley Park this came down to reading human factors, which is something only the human operators could do. But they could only do that by having access to the raw data and by creating safe-to-fail probes of the system (using these factors to try to break the codes). When the probes worked, they were exploited.
There are some incredible stories about the way in which the women who were intercepting messages came to know their counterparts in Germany. Each German communications officer had his own style, his own signature. Human error in the form of predictable procedures meant that these patterns could be used as weak signal detection to break some messages and, in the case of the Polish codebreakers who did much of the early work on cracking Enigma, even to discern the wiring of the machines themselves. This is a classic pattern of what Dave Snowden calls Cynefin dynamics: moving from safe-to-fail probes in the complex domain to exploiting the findings with complicated, and in some cases obvious, solutions.
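To make the “predictable procedures narrow the permutations” idea concrete, here is a toy sketch in Python. This is emphatically not Enigma, just a 26-key shift cipher, and the function names and the example crib are my own illustration; but it shows the same principle the codebreakers exploited: a guessed fragment of plaintext (a “crib”, such as a routine weather report) lets you discard almost all candidate keys mechanically, turning an open-ended complex guess into a tractable complicated search.

```python
# Toy illustration of crib-based key filtering (a simple Caesar shift,
# not Enigma). All names here are hypothetical, for illustration only.

def decrypt(ciphertext: str, shift: int) -> str:
    """Reverse a Caesar shift on uppercase A-Z text; leave other chars alone."""
    return "".join(
        chr((ord(c) - ord("A") - shift) % 26 + ord("A")) if c.isalpha() else c
        for c in ciphertext
    )

def crack_with_crib(ciphertext: str, crib: str) -> list[tuple[int, str]]:
    """Try every key, but keep only candidates containing the crib."""
    return [
        (shift, plain)
        for shift in range(26)
        if crib in (plain := decrypt(ciphertext, shift))
    ]

# "WETTER KLAR" encrypted with a forward shift of 3 becomes "ZHWWHU NODU".
# Guessing that the message starts with a weather report ("WETTER")
# collapses the key search to a single surviving candidate.
print(crack_with_crib("ZHWWHU NODU", "WETTER"))  # → [(3, 'WETTER KLAR')]
```

With 26 keys the filter is overkill, of course; the point is that the same human-sourced regularity scales to key spaces far too large to enumerate, which is exactly what the probe-then-exploit move in Cynefin dynamics describes.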
This is a really interesting story, and I’ve ordered a couple of books to read further into it. I’m very interested to see how the human factors were sensed, discerned, and exploited. Combining that capacity with the incredible engineering talents of Turing and his crew provides some excellent stories and examples of Cynefin dynamics at work.