Solutions
How can we regulate the internet in a way that lets us reap the game-changing benefits and avoid the equally huge risks?
A Q&A with Francine Berman
In this series, the Gazette asks Harvard experts for concrete solutions to complex problems. Francine Berman, the Edward P. Hamilton Distinguished Professor in Computer Science at Rensselaer Polytechnic Institute, is a Faculty Associate at the Berkman Klein Center for Internet & Society. Berman’s current work focuses on the social and environmental impacts of information technology, and in particular of the Internet of Things — a deeply interconnected ecosystem of billions of everyday objects linked through the internet.
GAZETTE: Do you think the internet has been a force for good in the world?
BERMAN: Yes and no. What the internet and information technologies have brought us is tremendous power. Tech has become critical infrastructure for modern life. It saved our lives during the pandemic, providing the only way for many to go to school, work, or see family and friends. It also enabled election manipulation, the rapid spread of misinformation, and the growth of radicalism.
Are digital technologies good or evil? The same internet supports both Pornhub and CDC.gov, Goodreads and Parler.com. The digital world we experience is a fusion of tech innovation and social controls. Making cyberspace a force for good will require a societal shift in how we develop, use, and oversee tech: a reprioritization of the public interest over private profit.
Fundamentally, it is the public sector’s responsibility to create the social controls that promote the use of tech for good rather than for exploitation, manipulation, misinformation, and worse. Doing so is enormously complex and requires a shift from the broader culture of tech opportunism to a culture of tech in the public interest.
GAZETTE: How do we change the culture of tech opportunism?
BERMAN: There is no magic bullet that will create this culture change — no single law, federal agency, institutional policy, or set of practices will do it, although all are needed. It’s a long, hard slog. Moving from a culture of tech opportunism to a culture of tech in the public interest will require many sustained efforts on a number of fronts, much as we are seeing now in the hard work of moving from a culture of discrimination to a culture of inclusion.
That being said, we need to create the building blocks for culture change now — proactive short-term solutions, foundational long-term solutions, and serious efforts to develop strategies for challenges that we don’t yet know how to address.
In the short term, government must take the lead. There are a lot of horror stories — false arrests based on faulty facial recognition, data-brokered lists of rape victims, intruders screaming at babies through connected baby monitors — but there is surprisingly little consensus about what digital protections — specific expectations for privacy, security, safety, and the like — U.S. citizens should have.
We need to fix that. Europe’s General Data Protection Regulation (GDPR) is based on a well-articulated set of digital rights for European Union citizens. In the U.S., we have some specific digital rights — privacy of health and financial data, privacy of children’s online data (through laws such as HIPAA and COPPA) — but these rights are largely piecemeal. What are the digital privacy rights of consumers in general? What are the expectations for the security and safety of digital systems and devices used as critical infrastructure?
Specificity is important here because, to be effective, social protections must be embedded in technical architectures. If a federal law were passed tomorrow saying that consumers must opt in to personal data collection by digital consumer services, Google and Netflix would have to change their systems (and their business models) to allow users this kind of discretion. There would be trade-offs for consumers who did not opt in: Google’s search would become more generic, and Netflix’s recommendations wouldn’t be well-tailored to their interests. But there would also be upsides — opt-in rules put consumers in the driver’s seat and give them greater control over the privacy of their information.
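To make that architectural point concrete, here is a minimal sketch of what an opt-in rule can look like when it is built into a system rather than bolted on. The example is hypothetical: the Python names (User, record_view, recommend) are invented for illustration and do not describe any real service’s implementation.

```python
# Hypothetical sketch: an opt-in rule embedded in a service's architecture.
# Personal data is retained and used for personalization only when the user
# has explicitly consented; the default is no collection.

from dataclasses import dataclass, field

@dataclass
class User:
    user_id: str
    opted_in: bool = False                # opt-in model: default is OFF
    history: list[str] = field(default_factory=list)

def record_view(user: User, item: str) -> None:
    """Retain viewing history only for users who opted in."""
    if user.opted_in:
        user.history.append(item)
    # No consent -> nothing is retained about this view.

def recommend(user: User, catalog: list[str]) -> list[str]:
    """Tailored results for opted-in users; generic results for everyone else."""
    if user.opted_in and user.history:
        # Personalized: favor items resembling the most recent view
        # (a toy similarity test; real systems use learned models).
        last = user.history[-1]
        return [item for item in catalog if item[0] == last[0]] or catalog[:3]
    # Generic: the same popular items for all non-consenting users.
    return catalog[:3]

catalog = ["documentaries", "drama", "comedy", "cooking"]
alice = User("alice", opted_in=True)
record_view(alice, "documentaries")
print(recommend(alice, catalog))   # tailored: ['documentaries', 'drama']
bob = User("bob")                  # never opted in
record_view(bob, "drama")          # ignored; no data is retained
print(recommend(bob, catalog))     # generic: first three catalog items
```

Note how the trade-off described above shows up directly in the code: a non-consenting user still gets results, just generic ones.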
Once a base set of digital rights for citizens is specified, a federal agency should be created with regulatory and enforcement power to protect those rights. The FDA was created to promote the safety of our food and drugs. OSHA was created to promote the safety of our workplaces. Today, there is more public scrutiny of the safety of the lettuce you buy at the grocery store than of the security of the software you download from the internet. Current bills in Congress that call for a Data Protection Agency, similar to the Data Protection Authorities required by the GDPR, could create needed oversight and enforcement of digital protections in cyberspace.
Additional legislation that penalizes companies, rather than consumers, for failing to protect consumer digital rights could also do more to incentivize the private sector to promote the public interest. If your credit card is stolen, the company, not the cardholder, largely pays the price. Penalizing companies with meaningful fines and holding company personnel legally accountable — particularly those in the C-suite — would provide strong incentives for companies to strengthen consumer protections. Refocusing company priorities this way would help shift us from a culture of tech opportunism to a culture of tech in the public interest.
GAZETTE: Is specific legislation needed to solve some of today’s thorniest challenges — misinformation on social media, fake news, and the like?
BERMAN: It’s hard to solve problems online that you haven’t solved in the real world. Moreover, legislation isn’t useful if the solution isn’t clear. At the root of our problems with misinformation and fake news online is the tremendous challenge of automating trust, truth, and ethics.
Social media largely strips context from information, and with it many of the cues that let us vet what we hear. Online, we often don’t know whom we’re talking with or where they got their information, and there is a lot of piling on. In real life, we have ways to vet information, assess credentials from context, and use conversational dynamics to evaluate what we’re hearing. Few of those cues are present on social media.