So much AI
AI is ubiquitous. Well, talk of AI is ubiquitous. When was the last time you got through a day without engaging in a conversation about or reading something about AI? I teach at an institution that promotes (with the usual caution and caveats about misuses) the use of AI. I teach on a Master’s programme in Philosophy & AI. I also teach a course on Technology and Human Values (i.e., applied ethics with a technological bent) where AI heavily features. And I’m a Guardian reader. All of these things mean I rarely get through an hour without thinking about AI (this is despite the fact that I work in the history of philosophy which, you’d think, has little to do with AI!). I’ve read articles about how school teachers are approaching both their own and their students’ use of AI. And I’ve read many, many articles and posts about the use of - and effects of - AI amongst university-level students.
I wonder what effect this total immersion - or feeling of drowning - in AI discourse has on us. And I wonder how we’ll look back on this period. At the moment, it can often feel like we have to talk about AI. How can we not? It’s going to change everything, isn’t it? So we need to stay on top of it. We can’t bury our heads in the sand. You can’t get left behind. Don’t be a luddite. And so on. I think aspects of this are true. I really don’t think it would help anyone to completely ignore AI. Or to be totally AI-illiterate. For example, I think it’s going to affect your ability to communicate with your students if you don’t have some sense of what their day-to-day interactions with AI (‘Chat’, Claude, etc.) look like. But, even if we do have to talk about AI a bit - how much do we have to talk about it? And how all-encompassing does it have to be?
Ian McEwan’s latest novel What We Can Know (yes, the title sounds like something you’d ask first year Epistemology students to read) is set approximately a century into the future, in a post-apocalyptic (or near-apocalyptic) Britain which is now, due to higher sea levels and the effects of a nuclear war, an archipelago. The universities and libraries of Britain are now, for pragmatic reasons, housed in places like Mount Snowdon (I know it’s meant to be post-apocalyptic, but I loved the idea of taking the funicular up Snowdon to visit the Bodleian Library). Two things struck me about McEwan’s vision of the future and what, retrospectively, is and isn’t seen as important about our own time.

First, the pandemic/ Covid is hardly mentioned. For the historians of the 21st century, there is no significant break between immediately pre-2020 and immediately post-2020. That struck me as interesting because, for many of us right now, that’s still how we categorise the last half-decade or so of our lives (‘did I have that accident or go on that holiday pre-pandemic or post-pandemic?’). This may not seem of immediate relevance to the AI debates, but I took from this the idea - even if it is speculative, presented in fiction - that things that seem really important right now are not necessarily future milestones in history. I already find that my students (in their late teens/ early 20s) don’t really want to talk about the pandemic, so maybe, in McEwan’s hypothesised future, this is even a result of collective human will: we don’t want this to be a milestone in our history.

The second thing that struck me was McEwan’s depiction of future humanity’s relation to AI, where a relatively healthy equilibrium seems to have been reached. AI, in 22nd-century Britain, has been nationalised. People still depend on it, but in the way that we depend on rail infrastructure or the National Health Service. It hasn’t taken over or drastically changed what it means to be a human being - it’s something that mostly ticks away in the background, until important moments where we need to pay it explicit attention, a bit like plumbing.
Where am I going with this? I suppose one thought is that we needn’t feel bad if we aren’t always thinking about AI. We don’t have to choose between a simple binary of keeping on top of it or sticking our heads in the sand. It’s ok to work on the assumption that AI isn’t going to completely take over. I found some of the proposals from humanities academics in this Guardian article quite sensible in that regard. And as the article notes, there are clear signs that young people (‘Gen Z’) don’t want to be AI-dependent zombies - they don’t want to live in that version of the future any more than we do. So maybe, then, continuing to work, read, write, research, and teach in the ways we used to, a few years ago, before everyone was always talking about AI, isn’t simply a matter of sticking one’s head in the sand. Maybe, instead, it’s just about getting on with things that will continue to be important, and are as worthwhile as they ever were - and that will be seen to have been worthwhile by the historians of the future.
