
Time for government, business leaders to figure out AI cybersecurity regulation


Cybersecurity experts Fred Heiding (from left), Josephine Wolff, James Mickens, and Robert Knake.

Photos by Niles Singer/Harvard Staff Photographer


Experts say capabilities of agentic AI rising, along with risk to personal data, economy, national security

As new agentic AI models continue to come online, cybersecurity experts laud their ability to sift through vast quantities of data quickly and autonomously — making them great tools to help fight cybercrime.

But, they warn, those attributes could also be put to work by bad actors to hack systems and risk our personal data, our economy, and our national security.

A group of cybersecurity experts recently gathered for a Berkman Klein Center for Internet & Society discussion, during which all agreed that it’s high time for business and government leaders to regulate the technology — before it’s too late.

Cybercrime is rising rapidly, recent data from IBM shows. In a 2026 study, the company found that cyberattacks aimed at public-facing software and systems — many of which utilized AI — had increased 44 percent year over year.

High-profile attacks include the November data breach of Anthropic, the AI company behind the Claude Code assistant. Attackers used AI models of their own to scan its source code for weak spots and expose its inner workings.

“The unfortunate thing is that the bad people only have to win once in some sense, whereas the defenders have to win all the time,” said James Mickens, Gordon McKay Professor of Computer Science. “To me, at least, that’s a concerning aspect of what it means to think about agentic cybersecurity attacks and defenses.”

Moreover, cybercriminals have made alarming progress in phishing attacks over recent months, using AI to fine-tune targets and craft messages.

Robert Knake.

“A year ago, we still had email messages in our inbox that had misspellings that were not colloquial English, that were easy to identify if you were vigilant. Now, all those signals are gone,” said Robert Knake, panelist and partner at Paladin Capital, a cyber-venture capital group.

Knake also served as the first deputy national cyber director for strategy and budget in the newly created Office of the National Cyber Director at the White House from 2022 to 2023.

In Knake’s view, the federal government needs to start requiring the private sector to take greater steps to prevent attacks that jeopardize consumer and national safety.

“We’re not at a place where we can say any error in your software that leads to a harm, you need to be responsible for. That will kill off software development,” he said. “But we could create a safe harbor in which we say, if you’ve done … these basic things, like using the most current and known secure version of an open-source package … you should not be held liable for a bad outcome from your software. If you haven’t done them, you should be.”

According to Mickens, this type of regulatory scheme may be easier said than done — especially as the cybersecurity landscape continues to change.

For decades, he said, tech companies like Microsoft and Amazon have built stopgaps into their code to prevent traditional internal security breaches, without formal government regulation.

“The big difference with AI is that the threat model changes,” Mickens said. “Essentially, there’s some human in a chair that’s outside of the data center who’s sending evil commands to the code that’s running in the data center and otherwise trying to trick it into being evil with AI.”

Any conversation about mandating security measures against outside forces and AI will have to clearly define the liabilities at stake and the types of hardware and software that would ensure compliance, he added.

Josephine Wolff, associate dean for research and professor of cybersecurity policy at the Fletcher School at Tufts University, added that regulation could become especially tricky if the private sector is asked to be proactive in finding vulnerabilities across large networks.

“Documentation and inventories are both really important and really hard,” she said. “Can you inventory all of the code that’s running on your computers so that if there’s a vulnerability, if something goes wrong, you can at least know where you need to look?”

But while the liability piece remains murky after online systems are breached, all the panelists agreed that companies should not be responsible for retaliating against hackers. One school of thought in combating cybercrime argues that hacked firms are uniquely positioned to “hack back.”

“I think that the more actors you have out there in the name of self-defense, intruding on other people’s networks, the less likely you are to de-escalate anything,” Wolff said. “The idea that you’re going to bring in the private sector and have that lead to anything but greater chaos seems hopelessly optimistic to me.”

Moreover, she added, it is unlikely that large companies like Google and Microsoft would mount sophisticated surgical strikes to take down the small clusters of servers launching denial-of-service attacks against them.

“I think you would have a whole bunch of much crazier firms with many fewer lawyers feeling like, here’s our opportunity to take on North Korea. And that doesn’t seem to me like a safer world.”

Mickens imagines a world in which offloading retaliation efforts to the private sector leads corporations to run unmanned agentic firewalls.

“It sees an intrusion, traces the hackers back to London, Berlin, and then does something offensive. I think that world very quickly degenerates into essentially high-frequency trading, except now in cybersecurity, where you just have a bunch of algorithms going back and forth and reacting to each other in very real time,” he said. “I don’t think we want to get into that world for the same reason that, in general, we don’t want to sort of deputize vigilantes in the physical world.”

And as for combating phishing scams bolstered by AI, the panelists imagine a future, equally remote at present, in which genuine human identities could be verified online.

“This has been a problem in the ecosystem going back 30 years,” Knake said. “I think that the threat of AI just means that we are going to have to know with certainty who we are dealing with, and that it is a real person if they are claiming to be a real person, so that we can trust who you’re engaging with.”

Mickens added that while digital identification could be a viable option to combat cybercrime moving forward, it may hit some roadblocks because of how consumers use the internet.

“One reason digital IDs have traditionally struggled is that there are many scenarios in which someone wants to be identified as part of their identity, but not the full identity,” he said. “For example, if I’m the victim of domestic abuse or I’m a runaway kid or whatever, I may want someone to know I am a human but I don’t want them to actually know my real name. I want the things that I say to be associated with a particular pseudonym consistently, but I don’t want it to be my real name. Those types of practical problems would need to be solved to make some of these proposals real.”

Overall, tech companies and government agencies face constant change in AI capabilities, which brings both challenges and opportunities to harness the technology.

“The ability to have agentic AI essentially sitting over your shoulder, on your phone, on your computer, looking at everything you’re doing and saying this certainly looks like it’s a kill chain for a fraudulent scheme, is there,” Knake said. “We can do this. We just need to find the right market players who will make that investment and build that technology.”