What if you could hold a physical model of your own brain, accurate down to its every unique fold? That’s just a normal part of life for Steven Keating, who had a baseball-sized tumor removed from his brain at age 26 while he was a graduate student in the MIT Media Lab’s Mediated Matter group.
Curious to see what his brain looked like before the tumor was removed, and with the goal of better understanding his diagnosis and treatment options, Keating collected his medical data and began 3-D printing his MRI and CT scans. He found the existing methods for producing such models prohibitively time-intensive and cumbersome, and said the resulting models failed to accurately reveal important features of interest. So Keating reached out to some of his group’s collaborators, including members of the Wyss Institute at Harvard University, who were exploring a new method for 3-D printing biological samples.
“It never occurred to us to use this approach for human anatomy until Steve came to us and said, ‘Guys, here’s my data, what can we do?’” said Ahmed Hosny, who was a research fellow with the Wyss Institute at the time and is now a machine learning engineer at the Dana-Farber Cancer Institute.
The result of that impromptu collaboration is a new technique that allows images from MRI, CT, and other medical scans to be converted easily and quickly into physical models with unprecedented detail.
The collaborators included James Weaver, senior research scientist at the Wyss Institute; Neri Oxman, director of the MIT Media Lab’s Mediated Matter group and associate professor of media arts and sciences; and a team of researchers and physicians at several other academic and medical centers in the U.S. and Germany. The research is reported in the journal 3D Printing and Additive Manufacturing.