Illustrations by Liz Zonarich/Harvard Staff
What good is writing anyway?
Scholars across a range of disciplines weigh in on the value of the activity amid the rise of generative AI systems
What do students stand to lose if they no longer have to write?
Since the arrival of ChatGPT in 2022, many students have turned to AI for help writing papers, and use is expected to grow as students become more adept with it. The shift raised immediate concerns among educators about academic integrity, and broader ones about what it could mean for intellectual and cognitive development.
The Gazette spoke with faculty across a range of disciplines, among them a computer scientist, a philosopher, a neurologist, and two cognitive scientists, to ask what may lie ahead. We put the same question to GPT-4, which describes itself as “OpenAI’s most advanced system that produces safer and more useful responses.” The interviews have been edited for length and clarity.

Alice Flaherty.
Stephanie Mitchell/Harvard Staff Photographer
Neurologist Alice Flaherty, M.D., Ph.D., associate professor of neurology and associate professor of psychiatry at Harvard Medical School
We lose abilities whenever we farm out tasks to other people or machines. Because our brains have limited real estate, they actively lose facts and skills we no longer use to make room for new ones.
The London cabbie studies are a classic example of this competition for resources. As the drivers gained knowledge of the London road map, their posterior hippocampus physically swelled. But their anterior hippocampus shrank, and they were worse at visual recognition tasks than people without elaborate mental maps.
The question is: When AI frees up all the neurons currently busy finding the right adjective or trope, what new skills will AI make possible? We can’t predict that. And if AI becomes able to do everything we can, we can’t even predict whether there will be any new skills left for us to acquire.

Leslie Valiant.
Harvard file photo
Computer scientist Leslie Valiant, T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics
Education will be rethought in the coming years. AI will be the impetus for this happening, but not the fundamental reason for its necessity. The fundamental reason, in my view, is that until now we have not recognized human educability as a phenomenon fundamentally worth studying and understanding. Instead, education has been practiced as a best-practices endeavor with few intellectual underpinnings.
Where does AI come in? With AI one can emulate any human cognitive phenomenon that one can define and put enough effort toward. This view was presciently articulated by Alan Turing as early as 1951. Recent evidence suggests that it will not be productive to dispute it.
In consequence, education should not be conceived in terms of competition between humans and machines. The primary question for educators is: What can and should education achieve for humans? Essays may or may not be part of the answer. Further, the challenges of evaluating student work when AI is available should not obscure the question.

Susanna Siegel.
Harvard file photo
Philosopher Susanna Siegel, Edgar Pierce Professor of Philosophy
If you think of a paper as an answer to a question, you might think the sole point of writing the paper is to answer the question. If a machine can answer the question just as well or better than a student can, isn’t it more efficient to use the machine’s answer?
This model of the situation overlooks the benefits to the student of the process of inquiry. As a finished product, a paper rehearses a line of thought, sometimes a route to an answer to a question.
But the product leaves invisible the tangled lines of inquiry, the false but interesting starts, the productive, beautiful chaos of thinking and writing made of things that at one point seemed relevant but turned out not to be. Those things are a mix of sand and gold. Amidst this chaos is epistemic progress.
Finally, when we outsource the task of finding words to express things, we lose all the kinds of mental activation that come from those searches. Words and ideas are laden with associations, reminders, emotional charges — a whole forest of connections to our past and future tasks, ideas, and relationships.
There is also a communicative value to the process of writing from scratch; when you guide someone on a line of thought with all its tangles and dead ends and reorientations, you build their understanding.

Tomer Ullman.
Harvard file photo
Cognitive scientist Tomer D. Ullman, assistant professor of psychology
Suppose a wormhole opened up tomorrow and out stepped an alien intelligence nicknamed Bunfo. Bunfo seems dumb at first, but the more we talk to it, the smarter it gets. And all it seems to want is to answer whatever you ask. You can ask it, “What is ‘I love you’ in Italian?” and it’ll say, “Ti amo.” You can ask how to make a pizza, and it’ll give you a recipe.
Before long, industries form to hook up everything from phones to planes to Bunfo, as academics hotly debate how Bunfo thinks and what it wants. And before long, students are asking Bunfo to write their essays for them. What do the students lose? They lose whatever it is that writing essays is meant to teach.
So, sure, if you want to order a pizza in Rome, it makes total sense to show a menu to Bunfo and ask, “What is this in English?” But if you’re signed up for a class because you yourself want to learn Italian, and all you do is have Bunfo answer things for you, then what are you even doing there?

Talia Konkle.
Photo courtesy of Talia Konkle
Visual cognitive computational neuroscientist Talia Konkle, professor of psychology
Some of the relevant cognitive science to draw on concerns active versus passive learning. Active learning, generating content yourself, helps retention. For example, the well-known “testing effect” is a powerful result in the science of learning: the act of testing, of trying to generate answers, strengthens retention.
A simple example is learning vocabulary. If you read a word and its definition together, you won’t learn it as well as if you see the word alone, pause, try to guess, and then flip the flashcard over to see the definition.
If we use AI to help us with active learning, there are, to my mind, huge benefits. But if we use it to shortcut our thinking on the skills we’re trying to internalize, that is likely counterproductive.
I know in advance when I’m doing the kind of work where I’m gaining new knowledge, where it is important to me that these gains get internalized into my own brain. That awareness shapes the AI strategies I use to approach this learning. And I’m aware of other times when the work I need to produce is less related to my learning goals, where I’m happy to use AI’s synthesizing and summarizing abilities for work efficiency.
This is where I think it’s ultimately about the student’s learning goals when writing an essay. We as professors can communicate what we hope they get out of the writing process and what we think would count as success. But, as with any assignment, the student can engage more or less actively with that process. AI may make it easier or more tempting to be a passive learner, but I do think it can equally make us more efficient active learners.

Joshua Greene.
Photo by Dylan Goodman
Experimental psychologist, neuroscientist and philosopher Joshua Greene, professor of psychology
When students let AI do the thinking for them, they miss out on learning how to think better. What these students lose today is what past students lost by enlisting a friend, parent, or hired gun to do the work. What’s different today, however, is that the substitute intelligence is expected to be freely available for the indefinite future.
I believe that it’s still important for people to learn how to think well, but as the machines get increasingly capable, many will ask: Why bother? My answer: Even if the economic demand for human cognitive labor is dramatically diminished, we still want wise humans in charge.
This requires that at least some humans — and ideally all humans — maintain the intellectual skills needed to envision and realize a good future. In the decades to come, the need for clear thinking may be less technocratic and more democratic.

Mary Lewis.
Photo by Dylan Goodman
Historian Mary Lewis, Robert Walton Goelet Professor of French History
The objective of writing is not just the product but also the process. Students who rely exclusively on AI to write their essays lose the process of thinking. And that is, after all, what we’re trying to do: produce new knowledge. If the goal is thinking or producing new knowledge, then only a human being can do that.
By its very nature, AI rehashes things that are already out there. How many of us have had the experience, while texting someone, of AI trying to predict what we’re going to say next and getting it wrong? Why does it do that? Because it draws from what it thinks is the most likely thing we’re going to say. So, in a paper, if AI is using predictive text, it will simply regurgitate ideas that are already out there instead of creating new ones.
The more information that is fed into AI, the more it has to work with. As this happens, it may look like it’s reflecting, but in fact it’s operating in a kind of huge, closed circuit. So, I return to my first point: Writing leads to thinking; it’s not an end product, it’s a process that stimulates thought.
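To make the “most likely thing” mechanism Lewis describes concrete, here is a minimal sketch of likelihood-based next-word prediction. It assumes a toy bigram model rather than the large neural networks real systems use, and the training text and function names are purely illustrative.

```python
from collections import Counter, defaultdict

# Toy "predictive text": count which word follows which in a training
# text, then always propose the most frequent continuation seen so far.
# (Illustrative only; real systems use neural networks over vast
# corpora, but the principle of choosing likely continuations is the same.)
training_text = "writing leads to thinking and thinking leads to writing"

follow_counts = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen in training, or None."""
    counts = follow_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("leads"))     # 'to': the most common continuation
print(predict_next("original"))  # None: it cannot propose unseen words
```

The second call is the “closed circuit” in miniature: the model can only recombine what was fed into it.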

GPT-4, “OpenAI’s most advanced system that produces safer and more useful responses”
When students allow AI to write their essays, they miss out on the essential learning that comes from engaging with the writing process. Writing helps develop critical thinking, encourages deeper understanding of the subject matter, and strengthens communication skills.
By bypassing this process, students lose the opportunity to clarify their own ideas, learn how to structure arguments, and refine their unique voice. They also risk not truly grasping the content, since meaningful learning often happens during the struggle to organize thoughts and express them clearly.
Additionally, using AI to write work intended to be original can raise ethical concerns and prevent students from receiving useful feedback that could help them grow. In the long run, relying entirely on AI can undermine both academic development and personal confidence.