
Panelists look at challenges, opportunities of GAI tools
New initiative advances conversations about role of AI
Michael Brenner (from left), Matthew Kopec, and Sean Kelly discuss generative AI. Veasey Conway/Harvard Staff Photographer
When asked whether it’s appropriate to use generative AI to grade student papers, write letters of recommendation, or screen job applicants, the audience couldn’t reach a consensus.
Posing the questions was Dean of Arts and Humanities Sean Kelly, who kicked off the panel discussion “Original Thought in the AI Era: A Faculty Dialogue on Authorship and Ethics” by polling the audience before turning to the panel.
First up: Matthew Kopec, program director and lecturer for Embedded EthiCS, who opined, “Science is less fun because of all these tools.”
Quick to push back were Gary King, Albert J. Weatherhead III University Professor, and Michael Brenner, Michael F. Cronin Professor of Applied Mathematics and Applied Physics at SEAS.
“We’re in the business of making discoveries to improve the world for humans,” Brenner said. “We should use every tool that we have at our disposal to do that.” He and King posited that while GAI may make certain scientific endeavors easier, it can also encourage researchers to work on harder problems.
King, who is also the director of the Institute for Quantitative Social Science, noted that Harvard has long taught its students the latest technology to address problems faster and more easily, and GAI is no different.
“The first mathematics books had long passages trying to explain how to do mathematical calculations without wasting valuable paper. Most of us now spend a lot of time trying to figure out how to do calculations without blowing up our computers,” he said. “You should be the kind of person that uses whatever the best tools are to progress the fastest and go the farthest.”
The hourlong panel was the first installment in the spring GAI Dialogues series, part of a wider initiative exploring the impact generative AI has on the FAS educational mission. The initiative, a priority of Edgerley Family Dean of the Faculty of Arts and Sciences Hopi Hoekstra, is being led by her senior adviser on artificial intelligence, Chris Stubbs, the Samuel C. Moncher Professor of Physics and of Astronomy.
The conversation also turned to ethical concerns. Despite being an enthusiastic proponent of using GAI in science and math, Brenner acknowledged the need for scrutiny over who can or should control GAI tools.
King was blunter. “Yes, this technology can be used for harm. Any technology can be used for that. The causal factor isn’t the technology, it’s the humans that decided to use it,” he said.
A question from an audience member about AI’s potential environmental impact had all the panelists agreeing that the technology consumes a great deal of energy. King answered that AI may ruin the environment, or it may incentivize faster creation of new industries to generate clean energy.
“Before you single-handedly eliminate these incredibly visible tools, let’s just figure out the cost and benefits,” he said.
Upcoming events in this spring’s GAI Dialogues include “Teaching With Integrity in the Age of AI” with the College’s offices of Undergraduate Education and Academic Integrity at the Smith Campus Center on Monday. The faculty workshop will explore best practices for using AI in the classroom, potential coursework violations, and prevention strategies. Other events will focus on critical reading and writing in the age of AI, on April 3 and 24, respectively, and “Preparing Students for the Future: AI Literacy in the Liberal Arts” on May 5.