The brain and how it learns may be the most complicated puzzle in the quickly advancing field of neuroscience. But Harvard is trying to unravel the mystery.

The Ariadne Project, led by David Cox, an assistant professor of molecular and cellular biology and computer science at Harvard, mobilized a multi-university team of experts in neuroscience, physics, machine learning, and high-performance computing to explore the possibility of creating an artificial brain by reverse-engineering the brain of a rat while it learns. The aim is to build computer algorithms that replicate the way human brains perceive information and learn.

“This is where the fields of computer science and neuroscience are not only exploding, but are merging on a collision course that is allowing us to explore the way we conventionally think of understanding,” Cox told 80 attendees at the Harvard Ed Portal’s Faculty Speaker Series lecture “Toward an Artificial Brain” in Allston.

“Walking across a room and not falling over is hard for a computer [robot], but easy for us. Systems don’t quite understand the way we understand,” he said. “Let’s go back to the brain and find out what we’re missing.”

Kenneth Blum, executive director of the Center for Brain Science (CBS), said in opening remarks that recent advances in artificial intelligence may indicate that intelligent machines are just around the corner. But how that might happen remains a question.

“You have probably read about artificial intelligence in the news, seen it in movies, used Siri or Alexa or one of the other voice assistants … or know that soon in most cars to be sold there will be a little chip that will gather data to assist with self-driving cars,” he said. “But nobody thinks that simply massive computer power alone — which has driven all of these recent advances — will be enough to get us to magically understand real intelligence. That’s what we are going to hear from David tonight,” he said.

Cox explained that by analyzing a rat’s brain activity, taking that brain apart, reconstructing the wiring, and mapping microscopic data, the Ariadne Project is paving the way on the artificial intelligence frontier.

Blum said Cox’s research is unique.

“As far as I know he’s the only person who simultaneously does experiments on real brains and research on how one can write computer programs that can learn intelligently,” he said. “There are lots of people who do one or the other, but nobody who I know does both of them at the level that David does.”

“David Cox has combined neuroscience and computer science in a creative new way. By applying computer methods to vision, he is giving us startling new insights into how the brain perceives the world, and by applying what he learns about the brain to computers, he is improving their capabilities,” said Joshua R. Sanes, the Paul J. Finnegan Family Director of the Center for Brain Science and Jeff C. Tarr Professor of Molecular and Cellular Biology.

Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS), CBS, and the Department of Molecular and Cellular Biology received more than $28 million from the federal government’s Intelligence Advanced Research Projects Activity (IARPA) to understand how the brain learns. The results of their work will help design computer systems that can better interpret and analyze information.

Cox said the “dream team” of researchers involved in the project spans 12 labs at six institutions: Harvard, the Massachusetts Institute of Technology, Notre Dame, New York University, the University of Chicago, and Rockefeller University.

The task begins in the Northwest Lab on Harvard’s campus, where a laser-powered, two-photon excitation microscope records the brain activity of rats responding to computer-generated stimuli. The rat’s brain is then removed and shipped to Argonne National Laboratory in Lemont, Ill., where computed tomography produces a high-resolution image of its structure.

The analyzed brain returns to the Lichtman Lab at Harvard for serial section electron microscopy, “slicing” the brain into sections 30 billionths of a meter (30 nanometers) thick. The slices are imaged, collected on miles of tape, spooled, put on silicon wafers, cataloged, and, by using computer vision techniques, reconstructed.

Finally, the rat’s digitized brain is uploaded to the cloud for IARPA and is also archived at Harvard in a storage array.

Over the course of five years, scientists will collect two petabytes of data, the equivalent of a million USB memory sticks and one of the largest neuroscience data sets ever collected, Cox said.
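As a sanity check on the scale Cox describes, the comparison works out as simple arithmetic. The sketch below assumes two-gigabyte sticks and decimal (SI) units, neither of which the article specifies:

```python
# Back-of-envelope check of the article's data-scale comparison.
# Assumption (not stated in the article): a "USB memory stick" holds
# 2 gigabytes, and petabyte/gigabyte are decimal (SI) units.
PETABYTE = 10**15  # bytes
GIGABYTE = 10**9   # bytes

total_data = 2 * PETABYTE   # two petabytes over five years
stick_size = 2 * GIGABYTE   # assumed per-stick capacity

sticks_needed = total_data // stick_size
print(sticks_needed)  # prints 1000000 -- i.e., a million sticks
```

Under those assumptions, two petabytes is indeed about a million sticks’ worth of data.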

Although it will be a long time before scientists are able to digitize a human brain, Cox said the Ariadne Project’s first steps hold promise of improving computers’ capabilities to do many of the things humans do — which will have profound implications for jobs, laws, and ethics.

“I think the meta-point is that we are entering into an interesting era where we are starting to close the gap between brains and machines, and we can’t stop it from coming,” said Cox. “When that happens there will be all sorts of follow-along consequences — some of them might be really great and some not so good. I think it’s incumbent upon all of us to understand where we fit into that, understand how we can guide toward better solutions, solutions that better serve the community and better serve humanity.”

Parizad Bilimoria, science program manager for the Harvard Brain Science Initiative, a collaboration of Harvard-affiliated researchers, said the Ed Portal program helped demystify the scientific process, and gave people a snapshot of what actually goes on in a brain-research laboratory and why it matters. “It’s a topic that everyone is naturally interested in, even if you are not a scientist,” she said.

Betsy Burkhardt of Cambridge, a former teaching fellow in Harvard’s Department of Organismic and Evolutionary Biology, said she had a personal motive for attending. “This is the first time I’m learning about the brain, but also my mother had dementia, so this is particularly interesting to me,” she said.