Jumping spiders have evolved a more efficient system to measure depth. Each principal eye contains several semi-transparent retinae stacked in layers, and these layers capture multiple images of the same scene with different amounts of blur. For example, if a jumping spider looks at a fruit fly with one of its principal eyes, the fly will appear sharper in one retina’s image and blurrier in another. This difference in blur encodes information about the distance to the fly.
In computer vision, this type of distance calculation is known as depth from defocus. So far, however, replicating this feat of Nature has required large cameras with motorized internal components that capture differently focused images over time, limiting the speed and practical applications of such sensors.
That’s where the metalens comes in.
Federico Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering at SEAS and co-senior author of the paper, and his lab have already demonstrated metalenses that can simultaneously produce several images containing different information. Building on that research, the team designed a metalens that simultaneously produces two images with different blur.
“Instead of using layered retinae to capture multiple simultaneous images, as jumping spiders do, the metalens splits the light and forms two differently-defocused images side-by-side on a photosensor,” said Shi, who is part of Capasso’s lab.
An ultra-efficient algorithm, developed by Zickler’s group, then interprets the two images and builds a depth map to represent object distance.
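The paper describes the group's exact reconstruction method, but the underlying idea can be sketched in a few lines. In a common differential depth-from-defocus formulation, the difference between the two images, divided by the Laplacian of their mean, varies roughly linearly with object distance. The Python sketch below illustrates that generic formulation only; the function name and the calibration constants `alpha` and `beta` are hypothetical placeholders, not code or values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def depth_from_differential_defocus(img_a, img_b, alpha, beta,
                                    blur_sigma=1.0, eps=1e-3):
    """Rough per-pixel depth estimate from two differently defocused images.

    Sketch of a generic differential depth-from-defocus relation: the
    image difference divided by the Laplacian of the mean image varies
    approximately linearly with object distance. `alpha` and `beta` are
    hypothetical calibration constants for that linear map.
    """
    # Light smoothing suppresses sensor noise before differentiation.
    a = gaussian_filter(img_a.astype(np.float64), blur_sigma)
    b = gaussian_filter(img_b.astype(np.float64), blur_sigma)

    diff = a - b                    # sensitive to the defocus difference
    lap = laplace(0.5 * (a + b))    # sensitive to image texture

    # The ratio is only meaningful where there is enough texture;
    # mark textureless pixels as unknown (NaN).
    depth = np.full(a.shape, np.nan)
    valid = np.abs(lap) > eps
    depth[valid] = alpha * diff[valid] / lap[valid] + beta
    return depth
```

In the actual sensor, the two inputs would be the side-by-side images the metalens forms on the photosensor, so a single exposure is enough to produce the depth map.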
“Being able to design metasurfaces and computational algorithms together is very exciting,” said Qi Guo, a GSAS Ph.D. candidate in Zickler’s lab and co-first author of the paper. “This is a new way of creating computational sensors, and it opens the door to many possibilities.”
“Metalenses are a game changing technology because of their ability to implement existing and new optical functions much more efficiently, faster and with much less bulk and complexity than existing lenses,” said Capasso. “Fusing breakthroughs in optical design and computational imaging has led us to this new depth camera that will open up a broad range of opportunities in science and technology.”
This paper was co-authored by Yao-Wei Huang, Emma Alexander, and Cheng-Wei Qiu, of the National University of Singapore. It was supported by the Air Force Office of Scientific Research and the U.S. National Science Foundation.