Photo: Sam Altman speaking to students (Niles Singer/Harvard Staff Photographer)

Did student or ChatGPT write that paper? Does it matter?

Sam Altman, CEO of firm that developed app, says ethics do matter, but they need to be rethought (and AI isn’t going away)

Colleges and universities have been wrestling with concerns over plagiarism and other ethical questions surrounding the use of AI since the emergence of ChatGPT in late 2022.

But Sam Altman, whose company, OpenAI, launched the chatbot app, said during a campus visit Wednesday that AI is such a powerful tool that higher education would be doing its students a disservice by turning its back on it — if that were even possible now. And some of the old rules of ethics will need to be rethought.

“Cheating on homework is obviously bad,” said Altman. “But what we mean by cheating and what the expected rules are does change over time.”

Altman discussed AI in the academy, along with the subtleties of using ChatGPT and other generative AI tools, while at the University to receive the Experiment Cup from Xfund, an early-stage venture capital firm. The event was sponsored by the Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard Business School, and the Institute for Business in Global Society (BiGS). It featured a conversation between Altman and Xfund co-founder Patrick Chung ’96.

Speaking to the Gazette before the Cup presentation, Altman likened the initial uproar at schools over ChatGPT to the ones that arose after the arrival of calculators and, later, search engines like Google. “People said, ‘We’ve got to ban these because people will just cheat on their homework,’” he said.

Altman, who left Stanford at 19 to start Loopt, a location-sharing social media app, said the reaction to calculators, for instance, was overblown. “If people don’t need to calculate a sine function by hand again … then mathematical education is over,” he said, with a gentle half-smile on his face.

Altman helped launch OpenAI in 2015. The company went on to release the wildly influential ChatGPT, which can write papers and generate computer programs, among other things. In 2023, Altman was removed as the company’s CEO, then reinstated four days later.

ChatGPT, he said, has the potential to exponentially increase productivity in the same way calculators freed users from performing calculations by hand, calling the app “a calculator for words.”

He warned, “Telling people not to use ChatGPT is not preparing people for the world of the future.”

Following a bit of back-and-forth about how the ethics of using ChatGPT and other generative AI may differ in various disciplines, Altman came down hard in favor of utility, praising AI’s massive potential in every field.

“Standards are just going to have to evolve,” he said. He dismissed the notion that ChatGPT might be acceptable for writing in the sciences, where the emphasis is on the findings, but not in the humanities, where the expression of ideas is central.

“Writing a paper the old-fashioned way is not going to be the thing,” he said. “Using the tool to best discover and express, to communicate ideas, I think that’s where things are going to go in the future.”

Altman, who last month joined the Department of Homeland Security’s Artificial Intelligence Safety and Security Board, said ethics remains a concern, and one that has yet to be resolved.

“There will be a conversation about what are the absolute limits of the tool, how do we as a society … negotiate ‘Here is what AI systems can never do.’ Where do we set the defaults? How much does an individual user get to move things around within those boundaries? How do we think about different countries’ laws?”

However, that discussion should not slow the development of AI. Instead, Altman described parallel tracks.

“Generally speaking, I do think these are tools that should do what their users want,” he said, before adding an important, if less than specific, caveat: “But there are going to have to be real limits.”