“Deep learning,” already poised to transform fields from earthquake prediction to cancer detection to self-driving cars, is about to be unleashed on a new discipline — ecology.
A team of researchers from Harvard, Auburn University, the University of Wyoming, the University of Oxford, and the University of Minnesota has demonstrated that the artificial-intelligence technique can be used to identify animal images captured by motion-sensing cameras.
Researchers applied deep learning to more than 3 million photographs from the citizen-science project Snapshot Serengeti to identify, count, and describe animals in their natural habitats. The system automated that process for up to 99.3 percent of the images while matching the accuracy of human volunteers. The study was described in a paper published last month in the Proceedings of the National Academy of Sciences.
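The paper's code is not reproduced here, but the core technique, training a convolutional neural network on labeled camera-trap images, can be sketched with standard tools. The snippet below is a minimal, hypothetical illustration in PyTorch; the ResNet-50 backbone, folder layout, and hyperparameters are assumptions made for the sketch, not the researchers' published pipeline.

```python
# Minimal sketch (assumptions: PyTorch/torchvision installed, labeled images
# stored one folder per species under data/train). Illustrative only; this is
# not the Snapshot Serengeti authors' actual training code.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_SPECIES = 48  # the project labels 48 species categories

# Standard ImageNet-style preprocessing for a pretrained backbone
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

# Start from an ImageNet-pretrained ResNet and swap in a 48-way species head
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_SPECIES)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One illustrative training pass over the data
model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

In practice, a pipeline like this would train for many epochs on GPUs and be validated against held-out volunteer labels before being trusted on new images.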
Motion-sensitive cameras deployed in Tanzania by Snapshot Serengeti collect images of lions, leopards, cheetahs, elephants, and other animals. While the images can offer insight into a range of questions, from how carnivore species coexist to how predators and prey interact, they are useful only once they have been converted into data that can be analyzed. For years, the best method for extracting such information was to ask crowdsourced teams of volunteers to label each image manually.
“Not only does the artificial intelligence system tell you which of 48 different species of animal is present, it also tells you how many there are and what they are doing,” said Harvard’s Margaret Kosmala, one of the leaders of Snapshot Serengeti and a co-author of the study. “It will tell you if they are eating, sleeping, if babies are present, etc.
“We estimate that the deep-learning technology pipeline we describe would save more than eight years of human labeling effort for each additional 3 million images. That is a lot of valuable volunteer time that can be redeployed to help other projects.”
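Kosmala's description points at two ideas worth unpacking: a single network can carry several output "heads" (species, a binned animal count, behavior flags), and volunteer time is saved by auto-accepting only the predictions the network is confident about. The sketch below illustrates both in PyTorch; the head sizes, bin counts, and the 0.95 confidence threshold are illustrative assumptions, not the published architecture.

```python
# Illustrative multi-task model and confidence triage (assumptions throughout;
# this is not the paper's exact architecture or thresholds).
import torch
import torch.nn as nn
from torchvision import models

class SnapshotNet(nn.Module):
    def __init__(self, n_species=48, n_count_bins=12, n_behaviors=6):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
        features = backbone.fc.in_features
        backbone.fc = nn.Identity()          # reuse the trunk, drop the ImageNet head
        self.backbone = backbone
        self.species = nn.Linear(features, n_species)     # softmax over species
        self.count = nn.Linear(features, n_count_bins)    # count treated as binned classes
        self.behavior = nn.Linear(features, n_behaviors)  # multi-label flags (sigmoid)

    def forward(self, x):
        h = self.backbone(x)
        return self.species(h), self.count(h), self.behavior(h)

# Keep automatic labels only when the network is sure; everything else
# is routed back to human volunteers.
@torch.no_grad()
def triage(model, image, threshold=0.95):
    species_logits, count_logits, behavior_logits = model(image.unsqueeze(0))
    confidence, label = species_logits.softmax(dim=1).max(dim=1)
    if confidence.item() >= threshold:
        return {
            "species": label.item(),
            "count_bin": count_logits.argmax(dim=1).item(),
            "behaviors": (behavior_logits.sigmoid() > 0.5).squeeze(0).tolist(),
        }
    return None  # below threshold: send this image to volunteers
```

Images that fall below the threshold are exactly the ones still sent to volunteers, which is how a system like this converts model confidence into saved labeling hours.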