I’ve been thinking about AI writing a lot these past few months, which is no surprise, because so has half the Internet. But for me, it feels personal in a different way, because I worked on AI creativity for my PhD, back in 2018, before it was cool. The program I wrote was pretty useless (it was basically a toy that was supposed to do a cool/funny thing – even if it had worked well, it wasn’t going to replace humans!). But my theoretical writing – about how we define creativity and how it could be evaluated, taking cues from psychology and other fields that computer scientists often ignore – was judged good enough for a doctorate.
Nowadays I teach cognitive science to undergraduate students, and their interest in this topic is huge. I rewrote a whole unit in one of my courses last fall so that we could talk about large language models. I vividly remember the discussion I got into with one student who’d read Blake Lemoine’s LaMDA transcripts and was convinced, like Lemoine, that LaMDA might be sentient. (Spoiler: it is not, but it is very good at pretending to be, based on our cultural expectations of the kinds of things a sentient AI would say.)
I’ve also written, in the Outside series, about tropey AI that takes over the world. I wrote it that way not because I think AI will actually take over in that way, but because I was reaching for something oppressive and religious to hang the worldbuilding off of; “AI Gods” seemed like as good a concept as any.
I was always clear – in my head, at least – that the AI Gods were just fantasy and not a representation of AI from real life. At the beginning of the series I was fine with that. By the time I got to the third book, I’d started to question it more. I’d had time to think about how AI hype and inflated notions of AI’s abilities can themselves be harmful. As Ted Chiang writes so eloquently, the problem isn’t AI “getting out of control”; it’s corporations that deliberately or callously use AI to harm people. The idea that the AI is somehow all-powerful, unerring, or wiser than us enables a lot of that harm. Sometimes we have to puncture the hype, not so we can downplay the harm, but so that we can see the harm clearly in the first place.
(Read the full post on Substack)