
How I learned to stop worrying and love AI

Former software engineer turned English professor talks about future of literary studies in age of ChatGPT


The recent news around advances in artificial intelligence — in particular, the emergence of the writing chatbot ChatGPT — left many Americans impressed by what these technologies could produce, and uneasy about whether AI could rival human intelligence.

Not so fast, argues Dennis Yi Tenen, Ph.D. ’12, a former fellow at the Berkman Klein Center for Internet and Society. AI is hardly new, and intelligence may be a bit of a stretch. In his new book, “Literary Theory for Robots: How Computers Learned to Write,” the former software engineer and current literary scholar traces the roots of modern machine intelligence back centuries. Tenen, now an associate professor of English at Columbia University and co-director of the Center for Comparative Media, spoke with the Gazette about his book. This interview has been edited for length and clarity.


What compelled you to write “Literary Theory for Robots”?


There are two parts to the answer: one personal, the other broader. Personally, a lot of my research begins with my immediate context of writing. In my first book [“Plain Text: The Poetics of Computation” (2017)], I was writing in Microsoft Word and realized, “That’s weird. I don’t really understand how text works.” I’m a literary scholar, but I feel so disconnected from these tools.

Similarly, several years ago when Gmail started completing our messages, it was obvious to me that something was happening, changing our notion of literacy in general. I immediately wanted to write about it.


As a researcher, you start going down these rabbit holes, and all of a sudden you end up in medieval Arabic philosophy trying to use Google Translate to understand medieval manuscripts! My approach in general is to look at what’s in front of me and then begin to unravel it back historically.

The bigger reason is that I’m always thinking about the past and the future of literary studies. With this book, I’d like to argue for an expansion of literary studies and the future of the field. I envision something like creating the Institute for the Study of Machine Literature. As more and more texts are written by and with machines, I think we should study that process, its influence, and our own thinking. And I want to make sure that literary scholars and humanists actively participate in designing this future. It has to be done collaboratively with engineers, social scientists, and humanities people.

Why do you think it’s important to understand the historical development of artificial intelligence?

Partly it’s just so we can be less anxious. Historically, at the edge of creativity and the edge of technology there’s always anxiety. It’s not by accident that the early efforts in machine conversation are often very close to mysticism and religion. Today, the way technology discomforts us often sounds like some kind of theological, metaphysical problem. But really, technology will neither save us nor destroy us. This is just the continuation of a particular historical thread. And I think with history, our understanding of the present moment becomes much richer.

In many ways artificial intelligence isn’t really about intelligence, at least as humans understand it. It’s more about leveraging massive amounts of data to predict probabilities. And yet we use language that suggests it is intelligent: AI “learns” or “understands.” Can you talk about why that is problematic?

I think we get a little confused when language is so close to us. Let’s pick an easier example, like the amazing weather-prediction models we now have. Does the model “understand” weather? And right away you see … I guess, kind of, maybe, metaphorically? It’s kind of similar with literary and textual AI. They’re just complicated mathematical realities with great predictive power [for the next letter, word, or sentence]. By definition, something like understanding is just a human concept. It’s a little bit fanciful to go down that route and ask if it understands. In some ways perhaps, but mostly no.
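That predictive mechanism is easy to sketch. Below is a minimal, illustrative Python example, assuming nothing beyond the standard library: a toy bigram model that counts which word tends to follow which in a corpus and “predicts” the most probable next one. The corpus and the predict_next helper are invented for illustration; real systems such as ChatGPT operate on vastly more data and parameters, but the underlying move, probability rather than comprehension, is the same.

    # Toy bigram model: next-word "prediction" as frequency counting.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept".split()

    # Count how often each word follows each preceding word.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        """Return (word, probability) for the likeliest next word, or None."""
        counts = following.get(word)
        if not counts:
            return None
        best, n = counts.most_common(1)[0]
        return best, n / sum(counts.values())

    print(predict_next("the"))  # ('cat', 0.666...): a probability, not comprehension

Nothing here “understands” cats or mats; scale the counts up by billions of documents and you get the kind of predictive power Tenen describes.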

Artificial intelligence is really a thinly veiled collective intelligence. It’s just a way to collaborate with others, remotely. Just imagine: If you could afford it, you would have a team of engineers following you around as a writer. Anytime you write, they’d be like, “We got this! We’re going to run it through all these filters. We’re going to go to the library, check your sources, and we’re going to automate it for you.”

AI is a gigantic cooperative effort. And that in itself is amazing, that thousands of people can collaborate over the course of centuries to grammar-check your sentence. It’s not AI; it’s collective labor that happens through technology.

When we talk about technology, we often use language that gives the agency to the technology rather than the collective effort that created it. An example you give in the book is, “It’s easier to say, ‘The phone completes my messages’ instead of ‘The engineering team behind the autocompletion tool writing software based on the following dozen research papers completes my messages.’” What do we miss when we shorthand what’s going on behind the scenes?

There are two kinds of political, ethical moments here. One is acknowledging hidden labor. Other very good scholars have made the point that Facebook has filters that shield you (as a user) from offensive content. But really the filter is trained by people who sit there and watch horrible content all day. To say that “the filter” has put stuff in the spam folder? Well, yes, but at the cost of human intelligence and real emotional labor. None of that gets acknowledged. Instead, the filter “did” it.

And then the second part is agency. Take a simple example: a fully automated, self-driving car kills someone. We need to say it’s not the car that did that so we can untangle a very complicated web of agency: Was it the driver’s negligence? Was it the engineer’s negligence? Was it corporate negligence? Was it maybe the road? To just say that the car did it hides all sorts of political and social problems that we have to resolve before we start blaming or praising technology.

Why is it helpful for people to understand the collective nature of AI?

I hope it creates space where they feel a part of this project. Whatever generative AI you’ve been using, it was trained on your words. When you write, you’re contributing. There are thousands, probably millions, of people contributing to this project. And as I’m writing, that crowd is also present here. It gives us agency. You have a voice, and you have a role in shaping the process.

The book focuses a lot on history and literary theory, but it is also quite funny, even poetic. Why did you decide to take a more lighthearted tone?

As academics, part of our argument depends on the content itself, but I think there is a lot of work that can be done stylistically to get our message across. The material, technical and historical as it is, can be really dense. In this book, there’s medieval mysticism layered on top of technology, the sort of stuff that may not immediately appeal to people.

In animating some of the historical characters, bringing them to life and bringing out their humor, I wanted to make sure that I didn’t make them boring technologists. They were a bunch of weirdos; most of them, like [19th-century computing pioneers] Ada Lovelace and [Charles] Babbage, were wonderfully strange. They didn’t respect the boundaries between science, literature, and humanity. So that’s where my own sense of style and voice comes in.

In the book, you talk about the use of templates, and how template culture has pervaded nearly every industry. But when we talk about templating literature or artistic works, there’s resistance. Why is that?

It’s the shameful little secret of authorship: we’ve relied on templates for a very long time. But there is a romantic notion of the author as a solitary genius who invents everything alone. I’m arguing against that, but the other important part is that we should view authorship as labor. We see that right now in a major way with the writers’ strikes in Hollywood and elsewhere. Labor has been disrupted by automation. But if you were a blacksmith in Victorian England, you felt automation much earlier, before this templating culture came for the author.

Do you think templating reduces or cheapens human creativity or intelligence?

It’s not an easy question. I write in the book that when things get automated, it devalues that particular labor. For example, 200 years ago, just being literate would get you a great job. You’d be the only person in your small town who could read stuff out loud and charge people money. With the advent of mass literacy — which happens through technology, books, dictionaries — the tide rises. Did literacy cheapen collective intelligence? No, I think it improved it. The fact that authors who are not Shakespeare are using templated emails, that’s great! It made everybody a better writer. I do think that human talent always rises above the average. So if AI represents a new average, the bar is always rising.