Imagine a robot trained to think, respond, and behave using you as a model. Now imagine it assuming one of your roles in life, at home or perhaps at work. Would you trust it to do the right thing in a morally fraught situation?
That’s a question worth pondering as artificial intelligence increasingly becomes part of our everyday lives, from helping us navigate city streets to selecting a movie or song we might enjoy — services that have gotten more use in this era of social distancing. It’s playing an even larger cultural role with its use in systems for elections, policing, and health care.
But the conversation around AI has been held largely in technical circles and focused on barriers to advances or on its use in ethically thorny areas, such as self-driving cars. Sarah Newman and her colleagues at metaLAB(at)Harvard, however, think it's time to get everyone involved, which is why in 2017 they created the AI+Art project to get people talking and thinking about how AI may shape our lives in the future.
Newman, metaLAB’s director of art and education and the AI+Art project lead, developed the Moral Labyrinth, a walkable maze whose pathways are defined by questions — like whether we’re really the best role model for robot behavior — designed to provoke thought about our readiness for the proliferation of the increasingly powerful technology.
“We’re in a time when asking questions is of paramount importance,” said Newman, who is also a fellow at the Berkman Klein Center for Internet & Society. “I think people are refreshed to encounter the humanities and arts perspectives on topics that can be both technical and elitist.”
MetaLAB is part of the Berkman Klein Center, and its AI work has been developed alongside the Center’s Ethics and Governance of AI Initiative, which was launched in 2017 to examine the ongoing, widespread adoption of autonomous systems in society.