“Democracy has been affected by technology since its beginning,” says Bruce Schneier. (File photo by Niles Singer/Harvard Staff Photographer)

What if we used AI to strengthen democracy?

Surveillance, control, propaganda aren’t the only options, says security technologist

AI is just the latest technology in a long line of innovations through history that have influenced politics. While many experts fear artificial intelligence will be deployed to weaken democracy, examples abound around the world of it being used to make systems fairer.

“We talk about AI being used as a tool of surveillance, as a tool of control, as a tool of propaganda,” said security technologist Bruce Schneier, co-author of the new book “Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship.” “These are all possible scenarios, but AI can also be used to resist all those things.”

In this interview, which has been edited for clarity and length, the lecturer in public policy at Harvard Kennedy School talks about AI’s potential impact on the democratic process, and the need to both regulate it and create a public option to offset the power of private companies.

In your book, you write that democracy is an information system and because of that, it will be affected by AI. Can you explain?

When we say that democracy is an information system, what we mean is that it’s a way of figuring out what people want to do in some fair manner and what a country’s policies should be. It takes information about what people want and combines it together, and the output is some set of policy, some set of actions. AI technology fundamentally processes information, and that is why it’s going to affect democracy. This isn’t new. Democracy has been affected by technology since its beginning. You could think about the voting booth. You can think about the train, television, the internet; all of these things have affected democracy, and AI technology will affect democracy like everything else did.

Will AI have a larger impact on democracy than social media?

Social media has had an extraordinary impact on the process of democracy. But it’s important to say that social media, as it’s currently practiced by the large tech monopolies in the United States, is not really a technology; it is how we have decided to do social media, based on surveillance and manipulation for the near-term financial benefit of a few companies, that has shaped it. Similarly, AI is going to be shaped by the corporate environment it comes from. I don’t think we know how AI will shape democracy at this point. There are just so many ways. AI is way more than chatbots, way more than generative AI. But the fact that a lot of this technology is being developed for the near-term benefit of a bunch of white male tech billionaires in Silicon Valley will have an extraordinary impact on how it is developed and, ultimately, how it is going to affect the democratic process.

How could a small group of people leading AI companies represent a threat to democracy?

Concentrations of power are always a threat to democracy. They have been since democracy was born. Right now, one of the biggest threats to democracy is the concentration of wealth and power in the United States. We worry that AI, as it is practiced, will further concentrate power. It doesn’t have to be that way. AI could be democratizing, and we imagine a future where AI helps distribute power rather than concentrate it. To me, that’s the future we want to try to move toward.

In which ways could AI be used to further democracy?

In our book, we give many examples from around the world of AI being used to support democracy. We talk about AI being used to help write legislation, and how this makes legislators less dependent on lobbyists, which helps democracy. We talk about AI being used to assist local candidates to run for office, and how this can make it easier for people to run for office, which helps democracy. We talk about AI being used in government administration in ways that are fairer and faster, making democracy better. AI can be used in the court system, too. We can think of ways it could be used badly, but when it’s used well, it can make the courts run more smoothly and efficiently. And finally, we talk about AI being used by citizens to help them get information, to help them reach consensus, and to help them engage with their government, which, again, helps democracy. And these are not just theoretical. These things are happening across the world. We have examples from Europe, Asia, South America, Africa, the United States, and Canada, where these technologies are being used to make democracy better.

What are the scenarios in which AI can be used to harm democracy?

The way to think of it is that AI is a power-enhancing technology. It enhances the power of people in groups who use it. If those people want to further democracy, the technology helps them do that. If those people are autocrats, the technology helps them harm democracy. Either way, it is a tool of the humans who use it. We talk about AI being used as a tool of surveillance, as a tool of control, as a tool of propaganda. These are all possible scenarios, but AI can also be used to resist all those things.

Is it necessary to have regulation for AI to support democracy?

If we don’t want AI solely benefiting the tech monopolies, we need some very strong regulation. We made this mistake with social media. We chose not to regulate it, and now we’re living with all the harms that came from that decision. We could choose a different path today. We can regulate AI technology to ensure that it distributes power, to ensure that the people gain the benefits of the technology. This isn’t impossible. Europe is trying; there is an EU AI Act. We can discuss the good and bad parts of it, but it is trying to regulate the technology. Some states in the U.S. are trying as well. I think we do need more. We need to recognize that this technology broadly affects society in ways that could be both very positive and very negative, and regulate it accordingly.

You write that one option is to develop public AI to counterbalance the power of private AI companies. How feasible is this?

In our book, we write about a noncorporate AI option. This would be AI developed not by a corporation, but by a government, an NGO, or a university. When we wrote the book, it was theoretical. We knew it was possible, but it hadn’t happened yet. Two months ago, the government of Switzerland released a public AI model. It’s a large language model, developed in conjunction with ETH Zurich, and it is a competitive AI core model that is not developed by a corporation with a profit motive; it is for the public good. It’s an amazing development. Now we know it is feasible. This isn’t going to replace corporate AI, but it’s going to be a counterbalance, demonstrating that something else is possible and giving people and organizations another option in the ecosystem.