Complexity, AI, and democratic deliberation
Chris Mowles has a lovely post on the perils of an unquestioned commitment to directionality in complexity. Our work never starts from scratch, and what does "going forward" even mean in a non-linear context?
…maybe there is more to uncover about complex experience than talking as if there is only one tense which is important, the future, and only the individual’s rationality and will to map it out. The future is important, and we are oriented towards it, but this shouldn’t prevent us from thinking about how we have become who we are, and what matters to us. What remains of the embers of the past from which we can still derive succour and find resource?
Rosa Zubizarreta has long been a curious "pracademic" – as she calls herself – about facilitation and deliberation. We have met only a few times in person, but I consider her a close colleague in the work of constantly trying to learn how to host conversations and design group spaces in which dialogue and listening are maximized. She recently published a peer-reviewed article called "Listening Across Differences," about deliberative "mini-publics," which are small democratic fora hosted in Austria. Her most recent blog post explores the role of AI in group facilitation, a topic about which she is deeply passionate, and about which I am very curious.
AI in facilitation is happening, and I'm certainly willing to explore it more in deliberative contexts. I have run a couple of small experiments using AI to summarize vast amounts of narrative information and advice submitted by citizens, creating high-level summaries of advice, high-level articulations of dissenting opinions, and so on. This becomes material for further deliberation. I have also been toying with a design in which members of a group each spend time feeding information to different GPTs, querying the data in different ways, and bringing their insights to a conversation. The aim is to make vast amounts of opinion accessible and to generate a learning conversation in which everyone can participate.
This is becoming an interesting field, and I notice the twin poles of curiosity and resistance in myself. My friend Jeff Aitken sent along a link to Metarelational.ai, which feels like a true TRIP to explore. There are several varieties of trained chatbot there. I have seen and explored some of these, each one cultivated like a garden, each one designed to do something a bit different. Honestly, after an hour or so in a session with these tools, it's hard to know what terms like "relational" mean. I am firmly in the world of knowing and working with human-to-human relationality. The work at Metarelational seems at times to evoke a kind of eschatology of human relationships stemming from our own design, and a sort of surrender to AI and machine intelligence that feels religious. It uses religious and spiritual terms and language like "agape" and "right relationship" and "interbeing." I joked with Jeff the other day about when a new religion might sprout up around an AI chatbot. It's a joke, but given the proclivity of human beings to seek a higher intelligence that has all the answers, and to be led in a course of action "forward" at any cost, I think there is a serious question here.