Logan McCarty, assistant dean of science education at Harvard. Kris Snibbe/Harvard Staff Photographer


A fast pivot into the unknown

AI’s rapid rise prompts Harvard-MIT symposium exploring the excitement, and potential challenges, that the technology brings to STEM education and research


“Generative artificial intelligence” and “large language models” are terms that once lived in computer science obscurity. Now, natural-language chatbots such as GPT-4 and Google Bard have gone mainstream, and there’s no end in sight.

What does the rise of generative AI technologies mean for STEM education and the research enterprise? Harvard and other institutions of higher learning are figuring it out as they go.

“I think in the future, AI content will be unavoidable, undetectable, and maybe unreliable,” said Logan McCarty, assistant dean of science education at Harvard and a physics lecturer, who co-organized a symposium last week hosted jointly by Harvard’s Division of Science and MIT’s Department of Physics. The event convened faculty, researchers, instructors, and students to share new approaches to leveraging AI technologies in the lab, the classroom, and the marketplace of ideas.

The gathering also enabled free-flowing — and at points existential — discussion around hard questions facing not just academia, but humanity at large: With powerful content-generating technologies at the fingertips of all, will competence be devalued? Will truth be relative? Will machines control knowledge?

Harvard and other academic institutions are trying to set precedents quickly on how AI should, and shouldn’t, be integrated into research and learning. Keynote speaker Kavita Bala, dean of computing and information science at Cornell University, outlined her institution’s published guidelines for faculty on adjusting their teaching with respect to AI capabilities and opportunities.

“We’re at a stage here of stumbling to formulate the right kinds of questions to guide our work.”

Christopher Stubbs

Sessions and panel discussions bounced among researchers experimenting with, even embracing, generative AI in their fields, from real-time analysis of supernovae to clinical skills development in medical training. Harvard Earth and planetary sciences postdoctoral fellow Ethan Kyzivat described using a language model to synthesize literature reviews.

Christopher Stubbs, Faculty of Arts and Sciences dean of science, delivered a keynote titled “Generative AI: What’s it good for, and what’s it good at?” He lauded the technology’s enormous potential for optimizing tasks: new course outlines, HR functions, code debugging, to name a few. He polled the audience on whether it would be ethical to use AI for touching up a manuscript, preparing application materials, or summarizing student evaluations for a tenure review package, and he urged colleagues to strive for answers while setting guardrails.

“We’re at a stage here of stumbling to formulate the right kinds of questions to guide our work,” Stubbs said.

Many at Harvard and MIT are applying rigorous methodology to test whether generative AI helps or hinders classroom instruction and student learning outcomes. Senior Harvard physics instructors Greg Kestin and Kelly Miller described their research comparing AI-supported instruction with active learning in an introductory physics course. So far, they have found significant learning gains with AI supplementation.

Others on campus are similarly experimenting. Computer science preceptor Rongxin Liu reported on the integration of an AI “chatbot with context” into CS50, a large, popular introductory course. The course-specific chatbot gives students unfettered access to personalized tutoring and support.

Paralleling the excitement are growing concerns around unintended consequences and harm. Damion Mannings, a program coordinator in MIT’s School of Engineering, pointed to inherent bias in facial recognition technology, and the subsequent police targeting of Black women, as a reason to push for ethical frameworks around emerging technologies like generative AI.

Jack Maier, an MIT graduate student and former high school physics teacher, posited that while AI is set to surpass humans at certain tasks, the purpose of an education should be to help students become better at being human — what he termed “human flourishing.”

“At the end of the day, when AI and robotics reach their full potential, we will still be better at being human. And that’s what we can excel at,” he said.