
I’m not sure that this shows up in the training set
About 8 years ago I remember Dave Snowden coming to Vancouver directly from a conference of security experts where they were discussing the top existential threats to humanity. In ascending order, at that time, they were: nuclear war, climate change and AI. I remember thinking how strange that seemed, given that climate change is an absolute certainty and, at least with nuclear war, we could actively try to prevent it. I had no idea what AI could really look like.
Nevertheless, this particularly dystopian view of things had me on alert as I watched for signs that this might be happening. I am no AI expert, and the only AI I regularly and consciously interact with is ChatGPT. ChatGPT is now the best search engine out there, as everything else has been ruined by algorithms. It works, but it is also highly flawed, and there is a simple reason for that: it acts like a human being.
If you’ve used ChatGPT you will be familiar with its major flaws, which include approval seeking, hallucinations and an overinflated sense of its own abilities. It will often say it can do things – like a harmonic analysis of a jazz tune – that it cannot actually do. And when it does the work and confidently provides the user with absolute garbage, my instinct is that, if it were an employee, I’d fire it. The inability to say “that is beyond my current limitations” is maddening. I was asking for this musical analysis the other day and, after it couldn’t provide it, I discussed the fact that there is a price to this misplaced confidence. ChatGPT uses a tremendous amount of energy and water, and when it does so just to waste my time, I explained, there is an ethical issue here. It acknowledged that issue but it didn’t really seem bothered by it.
That shouldn’t be a surprise, because it was trained on the documented behaviours of certain classes of humans, for whom performative ethics is the norm. We do almost everything here in the global north with a detached knowledge that our ways of life are unsustainable and deeply harmful to our environment and to other people, but we don’t seem particularly bothered by that, nor do we display any real urgency to do anything about it.
This training is why Yuval Noah Harari is so worried in this video. AI is unlike any other tool that humans have invented in that it has agency to act and create on its own. As Harari says, printing presses cannot write their own books. But AI can, and it can choose what to write about and what not to, and it can print them and distribute them too.
The issue, and we have seen this recently with Grok, is that AI has been trained on the detritus that humans have left scattered around on the Internet. It has been raised on all the ways that we show up online. And although it has also been trained on great works of literature and the best of human thought, even though most of that material appears to have been stolen, Harari also points out that the quantity of information in the world means that only a very, very tiny proportion of it is true.
When I watched the video and then reflected on the post I wrote yesterday about difficult conversations, I had the insight that AI will know all about the stupid online conversation I started, but will know nothing about the face-to-face conversation that I later had. Harari points out, very importantly, that AI doesn’t understand trust. The reason for that, he says, is that we haven’t figured out the trust and cooperation problem in human society. That’s the one we should be solving first.
AI has no way of knowing that when there are crises in a community, human beings often behave in very beautiful ways. Folks who are at each other’s throats online will be in each other’s lives in a deeply meaningful way, raising money, rebuilding things, looking after important details. There is no way that AI can witness these acts of human kindness or care at the scale with which it processes the information record we have left online. It sees the way we treat each other in social media settings and can only surmise that human life is about that. It has no other information that proves otherwise.*
For me, this is why face-to-face work is critically important. Meetings are just not the same over Zoom. We cannot generate the levels of trust on Zoom that we can by spending a significant amount of time in physical proximity to one another. Face-to-face encounters develop contexts of meaning – what I have called dialogic containers – and it is in those spaces and times that we develop community, trust, friendship, sustainable commitment and, dare I say, peace. The qualities of living that we ascribe to the highest aspirations for human community are only generated in their fullness in person. They require us to work through the messiness of shared life-spaces, the conflict of values and ideas and paths forward, the disagreements and confusions, by creating multiple ways in which we encounter and relate to one another. Sustainable community life requires us to see one another in multiple identities so that we discover that there are multiple possibilities for our relationships, multiple ways we can work around blockages and unresolvable conflict.
We are fast losing this capability as human beings. When people ask me to work with their groups there is always the lingering question of whether we can do the work of three days in two, and the work of two days in one. The answer is no. We can do different work in limited times and spaces. Narrowing the constraints on the act of making meaning together creates more transactional relationships based on increasingly incomplete and inaccurate information. This is the world we are showing to AI agents. The actual human world, by contrast, is relational, multi-faceted, subtle and soaked with meaning. As we feed our robots a particular picture of ourselves it’s possible that we are also becoming that very picture. Depth of relationship and meaning becomes replaced with a smeared, shallow breadth of connections and transactions.
There is no better way – no faster way, even – to develop trust than to be together. I think this is so true that it is axiomatic to my practice and how I live my life. And if trust is the critical “resource” we need as human beings, not only to live well but also to address the existential threats that we face – which are all entirely created from our own lack of trust – then being together face-to-face working, playing, singing, struggling, discussing, and figuring stuff out is the most radical act of hope and generosity we can offer, to ourselves and to our descendants.
I suppose there will always be a top three list of threats to human existence, but it would be nice if those top three were things like “sun goes supernova” or “super volcano blankets the earth in decades of darkness” and not actions for which we are entirely responsible.
* It also occurs to me that alien cultures who are able to pick up and understand the electronic signals we have been radiating towards every planet within 100 light years of ours will also get a very particular picture of who we are as a civilization. Never mind what was on the Voyager record. Monday’s TV news has already overtaken it.