Worried about how AI may affect foreign policy? You should be.

Experts discuss vulnerabilities, need for oversight of tech development, regulation
Forget AI hallucinations. Computer security expert Bruce Schneier warns of bigger problems as policymakers worldwide enlist the tools of artificial intelligence.
“As soon as you put any AI system in a position of power, where it is making a recommendation to a government,” said the Harvard Kennedy School lecturer and fellow with the Berkman Klein Center for Internet and Society, “that means it will be hacked.”
Schneier spoke at a panel last week, convened by the Weatherhead Center for International Affairs, on potential threats and opportunities as governments worldwide adopt the rapidly developing technology.
“What we want to do is explore how artificial intelligence is reshaping global decision-making and diplomatic strategy,” said moderator Erez Manela, Francis Lee Higginson Professor of History and acting director of the Weatherhead Center.
Also on the panel was Ofrit Liviatan, a Department of Government lecturer. The self-described “recovering lawyer,” who teaches a College course on AI in policy design, foresees many positive impacts.
“Large language models, our new oracles, can drastically speed up law formation. They can optimize data analysis. They can expose weaknesses in current laws. They can identify compliance gaps,” she said.
But AI can just as easily be used to undermine international order. That’s why global cooperation is needed to help direct AI’s advancement, Liviatan said.
Regulation remains in its infancy, she noted, citing the European Union’s AI Act as an early example. The landmark law features guardrails against misinformation, surveillance, and cybersecurity attacks.
Formally adopted less than two years ago, it is already being “attacked,” she said, by those who fear a drag on innovation.
“I personally find this innovation/regulation standoff misguided, given that innovation isn’t always progress — and because the function of law is precisely to steer behavior and shape preferred directions,” Liviatan argued. “We also have in place extensive regulatory infrastructure for pharmaceuticals, food, transportation, you name it. And in all these areas, innovation seems to be plentiful.”
Schneier, author of “Rewiring Democracy” (2026), believes AI can make policymaking more responsive and equitable. But first, he said, the systems must be trusted to act as loyal, trustworthy advisers.
Governments have long hacked one another’s electrical grids and internal databases, Schneier said. Observers now see attempts to influence how AI answers certain questions, by flooding the internet with state propaganda, for example.
“We’re already seeing Russian attacks that deliberately manipulate AI training data,” he said.
The for-profit entities that build AI systems are corruptible in other ways.
Panelist Carmem Domingues ’09, a former AI policy adviser to the White House, flagged the risk of domestic companies accepting advertising dollars from foreign adversaries. The impact on information provided by large language models would likely remain hidden amid the opaqueness of AI algorithms.
Schneier also pointed to some of the more unsettling features users encounter in AI systems today: the overconfident, obsequious responses to queries, the well-documented biases, the mining of personal data.
These are products of corporate decision-making, he insisted. They are not inherent to the technology.
“My guess is we’re going to see a government AI, that the government will use for these more sensitive negotiations,” Schneier concluded, citing the Swiss National Supercomputing Centre’s Apertus as the trailblazer in this space. “It won’t upload your data to who-knows-what company. It won’t unduly manipulate you. It will be designed under different principles, not under profit.”