Automating drone-based wildlife surveys saves time and money, study finds

by Sue Palminteri on 2 August 2018. Source: https://news.mongabay.com/wildtech/2018/08/automating-drone-based-wildlife-surveys-saves-time-and-money-study-finds/

  • Reserve managers have begun to survey wildlife in savanna ecosystems by analyzing thousands of images captured using unmanned aerial vehicles (UAVs, or drones), a time-consuming process.
  • A research team has developed machine learning models that analyze such aerial images and automatically identify the images most likely to contain animals, which, according to the authors, are usually a small fraction of the total number taken during a UAV survey.
  • The new algorithms reduced the number of images requiring human verification to less than one third of the number required by earlier models, and they highlight the patterns within those images that are most likely to be animals, making the technique useful for image-based surveys of large landscapes where animals appear in relatively few images.

The Great Elephant Census, conducted in 2014 and 2015, counted more than 350,000* elephants across 18 African countries. Human observers in small planes flew some 294,000 kilometers during more than 1,500 hours to systematically count the animals.

Could a future census be managed locally, using unmanned aerial vehicles (UAVs, a.k.a. drones), cameras, and computer vision to detect specific objects, such as elephants, rhino, or zebra?

Rhinos in the Kuzikus reserve, Namibia. Image by Friedrich Fedor Reinhard.

Although surveying the large animals of an individual reserve is a smaller job than the Great Elephant Census, such surveys still cost managers substantial time and money.

A Swiss research team recently tested a new approach to wildlife surveys. They mounted commercial cameras on UAVs to take aerial photos of Kuzikus, a private game sanctuary in Namibia, and applied convolutional neural networks (CNNs), a type of machine learning, to automate part of the image processing.

Surveying terrestrial mammals from the air

Sending field teams out to survey the wildlife of a large nature reserve on the ground, especially where animals occur at low densities, is inefficient and laborious. Some 3,000 animals traverse the 103-square-kilometer (40-square-mile) Kuzikus reserve, on the edge of the Kalahari desert, corresponding to about 29 animals per km2 (75 per mi2).

Surveying with planes and helicopters covers far more area but can disturb wildlife, requires experienced pilots and human observers, and is both risky and too expensive for many reserves.

With the rapid development of small unmanned aerial vehicles (UAVs), or drones, reserve managers and even livestock ranchers have experimented with using them to count animals, flying camera-mounted UAVs to capture images or video footage of the animals below.

The research team launches the unmanned aerial vehicle (UAV, or drone) for a test survey at Kuzikus reserve, Namibia. Image by Friedrich Fedor Reinhard.

A UAV can be programmed to fly specific routes, can cover some 100 km2 per week, and requires as little as one pilot on the ground to operate. UAVs are quieter than planes or helicopters, so they are less likely to disturb wildlife, and they remove the risk of having human observers inside a plane flying survey patterns at low elevation.

Nevertheless, a series of UAV flight campaigns can produce many thousands of images, each of which must be reviewed for the presence of animals. The UAVs used in this study, for example, took more than 150 images per square kilometer (389 per square mile), and, from above, African animals can easily resemble rocks, shrubs, and other features of the landscape.

“We acquired data that intentionally contained acquisitions made at several times of multiple days to account for variations [in sun position],” said PhD candidate and lead author Benjamin Kellenberger and co-author Devis Tuia, both at Wageningen University in the Netherlands. “In our experiments, we actually found shadows cast by animals to be particularly helpful for detection performance, as they allowed distinguishing between e.g. gazelles and similarly looking rocks and dirt mounds.”

For this study, the researchers used photographs, rather than video. “The main issue with video footage,” Kellenberger and Tuia told Mongabay-Wildtech, “is the amount of data it generates. With typical reserve sizes of dozens of square kilometers, the number of still images already reaches several thousands; videos would be prohibitively large.”

“Videos could shed light on additional traits, such as animal movement and behavior,” they added. “However, our study focused primarily on the detection of animals, for which we found static imagery to be sufficient; for example, our model manages to detect most of the animals in the shade of trees nonetheless.”

Zebra, wildebeest, and other savanna grazers look very different from the air than they do from the ground. Image by Sue Palminteri/Mongabay.

Analyzing the hundreds or thousands of images generated during UAV-based monitoring projects typically requires project teams to spend many hours reviewing the photos.

Neural networks streamline image analysis

The research team developed machine learning algorithms to automate part of the process of detecting and identifying animals in the UAV images.

They used convolutional neural networks (CNNs), a type of artificial intelligence that has proved effective at detecting objects in large image databases, to assess their potential for surveying wildlife over extensive areas, and developed recommendations for training a CNN on a large UAV-based dataset.

The algorithms highlight the patterns in the images most likely to be animals, enabling the researchers to quickly eliminate most of the images that did not contain wildlife.

“This initial phase of elimination and sorting is the longest and most painstaking,” Tuia said in a statement. “For the AI system to do this effectively, it can’t miss a single animal. So it has to have a fairly large tolerance, even if that means generating more false positives, such as bushes wrongly identified as animals, which then have to be manually eliminated.”
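To make this screening step concrete, here is a minimal, hypothetical sketch (not the authors' code) of how a small convolutional network could score fixed-size patches of each aerial photo and flag an image for human review whenever any patch scores above a deliberately low threshold. The class names, layer sizes, patch size, and threshold are all illustrative assumptions, written for PyTorch.

```python
# Illustrative sketch only: a small patch classifier used to flag images
# that are likely to contain animals. Not the authors' implementation.
import torch
import torch.nn as nn

class PatchScorer(nn.Module):
    """Tiny CNN that maps an RGB patch to an 'animal present' probability."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)

    def forward(self, patches):                  # patches: (N, 3, H, W)
        x = self.features(patches).flatten(1)
        return torch.sigmoid(self.classifier(x)).squeeze(1)

def image_needs_review(image, model, patch_size=64, threshold=0.2):
    """Split one aerial image (3, H, W) into a grid of patches and flag the
    image if any patch looks like it might contain an animal."""
    _, h, w = image.shape
    patches = []
    for top in range(0, h - patch_size + 1, patch_size):
        for left in range(0, w - patch_size + 1, patch_size):
            patches.append(image[:, top:top + patch_size, left:left + patch_size])
    with torch.no_grad():
        scores = model(torch.stack(patches))
    return bool((scores > threshold).any())
```

Keeping the threshold deliberately low mirrors the tolerance Tuia describes: more false positives pass through to the human reviewers, but almost no animals are missed.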

Automating an object recognition task through machine learning requires a big data set to train the software to recognize the features of interest, in this case, large mammals seen from above. The team created a data set of images with and without animals in the Kuzikus reserve by conducting a crowdsourcing campaign in which some 200 volunteers identified animals in thousands of aerial images taken by the researchers.

They trained the AI system by assigning penalty points to different types of errors. They gave the system one point for mistaking a bush for an animal but gave it 80 points for missing an animal completely. In this manner, the authors say, the software learns to distinguish wildlife from inanimate features without missing any animals.
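In loss-function terms, this penalty scheme amounts to weighting errors on the "animal" class far more heavily than errors on the background class. Below is a minimal sketch of such an asymmetric loss, assuming PyTorch and treating the 1-to-80 ratio from the article as a class weight; everything else is an illustrative assumption.

```python
# Illustrative sketch: weight a missed animal ~80x more heavily than a false
# alarm, mirroring the penalty-point scheme described in the article.
import torch
import torch.nn as nn

# pos_weight scales the loss on positive ("animal") examples, so the network
# pays a much higher price for overlooking an animal than for flagging a bush.
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(80.0))

logits = torch.tensor([2.0, -1.5, 0.3])   # raw scores for three patches
labels = torch.tensor([1.0, 1.0, 0.0])    # 1 = animal present, 0 = background
loss = criterion(logits, labels)
print(float(loss))
```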

“Automating part of the animal counting makes it easier to collect more accurate and up-to-date information,” Tuia said in the statement.

Prediction results on a test set image comparing the current model using neural networks to a 2017 model used as a baseline. Both models were set to minimize false positives while also detecting 90% of the animals present in the test data set. The neural network (blue) produced far fewer false alarms than the 2017 model (red). Ground-truthed locations are shown in yellow. Figure from Kellenberger et al. (2018).

Once the data set contains just those images that the AI system recognizes as containing animals, a human conducts the final sorting. The system places colored frames around questionable features to let human interpreters know to examine that part of the image.
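A minimal sketch of that review step, assuming Pillow and using an invented image and box coordinates, might simply draw a colored frame around each candidate detection before handing the image to a human interpreter:

```python
# Illustrative sketch: draw colored frames around candidate detections so a
# human reviewer knows which parts of an image to inspect. The image and the
# box coordinates below are invented for demonstration; Pillow is assumed.
from PIL import Image, ImageDraw

def frame_candidates(img, boxes, color="red", width=3):
    """Draw a rectangle around each candidate (left, top, right, bottom) box."""
    draw = ImageDraw.Draw(img)
    for box in boxes:
        draw.rectangle(box, outline=color, width=width)
    return img

# A stand-in 1000x1000 "aerial tile" with two hypothetical detections.
tile = Image.new("RGB", (1000, 1000), "tan")
flagged = frame_candidates(tile, [(120, 340, 180, 400), (905, 210, 960, 265)])
flagged.save("tile_for_review.png")
```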

In this study, the researchers developed CNN training recommendations that substantially reduced the number of false positives generated by previous models while still detecting 90% of the animals present. This combination thus detected almost all animals in the Kuzikus reserve automatically and minimized the number of images that the Kuzikus rangers had to screen manually.
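One way to operationalize a "90% of animals" criterion is to choose the detector's decision threshold on a held-out validation set: pick the highest threshold at which at least 90% of known animals still score above it, then count how many false alarms that setting produces. The sketch below, with made-up scores and NumPy, is an illustrative assumption rather than the authors' procedure.

```python
# Illustrative sketch: choose a score threshold that keeps at least 90% of
# validation-set animals, then report the false alarms at that threshold.
import numpy as np

def threshold_for_recall(scores, labels, target_recall=0.9):
    """Return the threshold at which at least `target_recall` of the
    positive (animal) examples score at or above the threshold."""
    animal_scores = np.sort(scores[labels == 1])
    # Discard at most (1 - target_recall) of the animals from the bottom up.
    n_discard = int(np.floor((1.0 - target_recall) * len(animal_scores)))
    return animal_scores[n_discard]

rng = np.random.default_rng(0)
labels = np.concatenate([np.ones(50), np.zeros(950)])      # animals are sparse
scores = np.concatenate([rng.uniform(0.4, 1.0, 50),        # animal scores
                         rng.uniform(0.0, 0.6, 950)])      # background scores

t = threshold_for_recall(scores, labels)
kept = scores >= t
recall = (kept & (labels == 1)).sum() / (labels == 1).sum()
false_alarms = int((kept & (labels == 0)).sum())
print(f"threshold={t:.2f}, recall={recall:.0%}, false alarms={false_alarms}")
```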

Kellenberger and Tuia designed their model with low animal densities in mind but believe it would also work well with higher densities. “Results on footage containing a larger number of animals indicated that precision and recall of our detector were still satisfactory,” Kellenberger and Tuia said. “It might require a few adjustments, but it should be apt for the task without any technical difficulties.”

Scaling up

According to the researchers, recognizing animals in aerial images is challenging because individuals of the same species may look different due to variations in size, fur color and pattern, body position, and angle to the camera. Automated object detection algorithms must therefore be able to learn and account for the various ways a given species appears in images.

Bird's-eye view of blue wildebeest, or gnu, moving across the Kuzikus, Namibia, landscape. The different positions of each animal and their respective angles to the unmanned aerial vehicle present a challenge to teaching machines to identify and count them automatically. Image by Friedrich Fedor Reinhard.

However, broadening the definition of what a “zebra” or “greater kudu” looks like to a machine may cause the algorithms to mistake background objects, such as downed trees or branch formations, for the target animal species.

The researchers explored how to scale the CNNs to survey wildlife over extensive areas and developed recommendations for training CNNs on other large UAV-based datasets.

They state in their paper that while the greater variety of landscape types generally associated with larger study areas does not usually cause problems for human image interpreters, it may decrease the success rate of machine algorithms trained on a certain landscape.

“Given our system employs off-the-shelf point-and-shoot cameras, occlusions due to dense and tall vegetation will inevitably cause the system to struggle,” Kellenberger and Tuia said. “In such cases, a good alternative would be to employ thermal infrared cameras, which in turn requires data acquisition at night, due to an increased temperature contrast between livestock and soil.”

The researchers did attach thermal sensors to the UAV for the current study, but the landscape’s warm vegetation and soil did not contrast sufficiently with the heat of the animals, so they did not use the thermal image data.

“All these effects have the consequence that extrapolating results from a small to a big area is not going to yield trustworthy results,” the researchers caution, “and models trained on a small subset and then evaluated on larger areas will not perform satisfactorily.”

Neural networks to survey savannas

“We first wanted to show guidelines on how to train state-of-the-art models (i.e., deep learning-based models) on the task,” Kellenberger and Tuia said.

Can you spot the animals in the sample aerial image captured by the UAV (above)? The image below shows the algorithm’s predicted locations of animals in red boxes and the animals’ locations verified by a human interpreter in blue. The red box to the right was a dead tree, not an animal, but the others were confirmed as wildlife. Images by Friedrich Fedor Reinhard.

The researchers will next apply the technology to different areas, first through collaborations using data from Kenya, and will examine how to adapt it to repeated acquisitions. Given a model trained on one dataset (area, acquisition year, etc.), they are exploring how it can be adapted to a second dataset with minimal effort from the user. They also suggest developing their models, currently at the prototype stage, into an app for use by the wider wildlife research community.

“Once this step is done,” Kellenberger and Tuia said, “any complicated core parts like the neural network design would be invisible to the user. Instead, applying such a model would only require a set of images and ground truth for training (which could be provided by experts in an easy-to-use interface), and possibly an optimized set of parameters (which can be determined automatically from the training data for the most part). The rest — training the model and predicting animals in new images — would be done automatically without heavy requirements from the user.”
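A purely hypothetical sketch of the kind of simplified interface the researchers describe, with all names invented for illustration, might expose just two calls: one to train on expert-provided images and ground truth, and one to predict candidate animal locations in new imagery.

```python
# Purely hypothetical sketch of the simplified interface the researchers
# envision: the user supplies images and expert ground truth, and the
# training/prediction machinery stays hidden. All names here are invented.
from typing import Dict, List

class WildlifeDetectorApp:
    def fit(self, image_paths: List[str], annotations: Dict[str, list]) -> None:
        """Train the underlying detector; hyperparameters would largely be
        chosen automatically from the training data."""
        ...

    def predict(self, image_paths: List[str], confidence: float = 0.5) -> Dict[str, list]:
        """Return candidate animal locations per image for human review."""
        ...
```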

*Estimated from counts during the sample flights.

Citation

Kellenberger, B., Marcos, D., & Tuia, D. (2018). Detecting mammals in UAV images: Best practices to address a substantially imbalanced dataset with deep learning. Remote Sensing of Environment, 216, 139-153.
