Monitoring Wildlife with Camera Traps

Problem Statement

Re-identifying animals in the wild has several applications in the study of population dynamics (especially of endangered species), ecosystems, and behavioural ecology. Can deep learning techniques be applied to stills and video captured from camera traps to monitor wildlife accurately and in fine detail?

Status

Under Ideation

Positions

Domain experts to guide problem selection and data collection

Goal

Enhance existing conservation efforts

Relevance in India and Value Proposition

Ullas Karanth. Source: Wikipedia

India has a rich diversity of flora and fauna: there are over 500 species of mammals, 2,000 species of birds, and over 30,000 species of insects. However, several species and ecosystems are at severe risk due to climate change, rapid industrialisation, deforestation, and poaching. To address this, several wildlife conservation measures have been undertaken, including the flagship initiatives Project Tiger, started in the 1970s, and Project Elephant, started in the 1990s. Different technological interventions need to be explored to support such conservation measures.

Wildlife conservation measures depend on accurately studying population dynamics and migratory behaviour. This can be achieved by tagging animals or by analysing DNA from collected samples, but such approaches are expensive and difficult to scale. A widely used alternative is the deployment of camera traps, pioneered in India by the leading tiger zoologist Ullas Karanth, who is widely recognised for his contribution to Project Tiger.

Processing images and videos from camera traps comes with some challenges. First, discriminating individuals in such images is considered an expert skill and is prone to bias [1]. As an illustration of the task, the distinguishing features in the stripes of some tigers from Bandhavgarh are shown in the images below. Further, as a large number of camera traps are installed for precise monitoring, human annotators become the bottleneck in processing the images. It is thus natural to consider deploying computer vision methods to help in the re-identification of animals.

Finally, the value proposition of these methods also includes improving the tourism experience and providing richer insight for citizen science. As the planet limps towards the severe effects of climate change, technology can and should play a role in strengthening the connection between individuals and nature.

Distinguishing marks in the faces of tigers. Source: The Last Wilderness

Existing Work

The use of computer vision for re-identification of animals has been studied since 1990, when whale flukes were used to identify individual whales [2, 3], with a reported top-1 accuracy of about 43% on a collection of 30 individuals. A recent Kaggle challenge revisited whale identification, with leaderboards indicating 78% accuracy on a public dataset.

Since the 1990s, re-identification has been studied for several other animals, such as tigers [4], seals [5], cheetahs [6], dolphins [7], elephants [8], penguins [9], and primates [10]. Most of these techniques have required careful selection of features. For instance, [8] proposes identifying elephants by matching curve splines fitted to the mantle of elephants. In [4], the authors propose projecting the stripes of a tiger onto a spline node coordinate space to normalise for the camera angle, and then comparing against a database to determine the identity. Feature engineering has been successful in specific domains where the volume of data is small enough to allow careful human intervention in labelling, feature extraction, and the design of the classification method. However, such methods fail to scale across settings and usually do not continue to improve in accuracy as the volume of data increases.
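
To make the flavour of such feature-engineered matching concrete, the sketch below scores how well a query photograph matches a gallery photograph using generic SIFT keypoint matching with a ratio test. This is only an illustration of hand-crafted feature matching, not the spline-based method of [4] or the curve matching of [8]; the file paths and the ratio threshold are assumptions.

```python
# Illustrative sketch: hand-crafted feature matching for re-identification.
# Assumes OpenCV (with SIFT) and two cropped photographs of animal flanks.
import cv2

def match_score(query_path: str, gallery_path: str) -> int:
    """Count 'good' SIFT keypoint matches between two images (Lowe's ratio test)."""
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    gallery = cv2.imread(gallery_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    _, desc_q = sift.detectAndCompute(query, None)
    _, desc_g = sift.detectAndCompute(gallery, None)

    matches = cv2.BFMatcher().knnMatch(desc_q, desc_g, k=2)
    # Keep a match only if it is clearly better than the second-best candidate.
    good = [m for m, n in (pair for pair in matches if len(pair) == 2)
            if m.distance < 0.75 * n.distance]
    return len(good)

# Hypothetical usage: rank gallery individuals by match count and take the top hit.
# best_match = max(gallery_paths, key=lambda g: match_score("query_tiger.jpg", g))
```

In practice such pipelines still need per-species tailoring (for example, isolating the flank or fluke before matching), which is exactly the scaling limitation noted above.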

More recently, some works have also considered the application of deep learning methods to monitoring animals. An ensemble of neural networks was used to classify green turtles from the Great Barrier Reef, achieving an accuracy of 95% [11]. Since the release of the C-Zoo and C-Tai datasets of labelled chimpanzee images, several methods have applied deep neural networks to achieve accuracies of up to 92%. For elephants, the authors of [12] used YOLO to localise the elephant's head, followed by a ResNet architecture to identify the individual, achieving 59% accuracy on a population of 276.
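
As a rough illustration of this detect-then-identify pattern, the sketch below pairs a generic pretrained detector (standing in for YOLO) with a ResNet embedding and a nearest-neighbour lookup over a gallery of known individuals. It assumes a recent torchvision; the model choices, the cropping heuristic, and the `gallery` dictionary are illustrative assumptions, not the pipeline of [12].

```python
# Illustrative detect-then-embed re-identification sketch (assumes torchvision >= 0.13).
import torch
import torchvision
from torchvision.transforms import functional as F

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
backbone = torchvision.models.resnet50(weights="DEFAULT")
backbone.fc = torch.nn.Identity()  # keep the 2048-d feature vector instead of class scores
backbone.eval()

@torch.no_grad()
def embed_animal(image: torch.Tensor) -> torch.Tensor:
    """Detect the highest-scoring object in a CxHxW [0,1] image and return an L2-normalised embedding."""
    boxes = detector([image])[0]["boxes"]   # detections come sorted by confidence
    if len(boxes) == 0:
        crop = image                        # fall back to the full frame
    else:
        x1, y1, x2, y2 = boxes[0].int().tolist()
        crop = image[:, y1:y2, x1:x2]
    crop = F.resize(crop, [224, 224])
    crop = F.normalize(crop, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    feat = backbone(crop.unsqueeze(0)).squeeze(0)
    return feat / feat.norm()

# Hypothetical usage: the identity of a new capture is the gallery entry with the
# highest cosine similarity, where gallery = {individual_id: embedding}.
# query = embed_animal(new_image)
# best_id = max(gallery, key=lambda k: torch.dot(query, gallery[k]).item())
```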


Open Technical Challenges

Deep learning based methods have outperformed hand-coded feature engineering in most computer vision tasks. For wildlife monitoring, the application of deep learning is still in its infancy and much work remains to be done. We list some ideas to enable such future work:

  1. For the Indian context, applying deep learning to identify animals in the wild, starting with endangered species prone to poaching such as tigers, elephants, and rhinoceroses.

  2. Developing an active labelling pipeline wherein noisy classifiers bootstrap the labelling process and only the images found to be hard are sent for manual labelling by experts (a sketch of this triage step follows this list). Carefully optimising the use of experts' time would be an important contribution of deep learning.

  3. Applying advances in deep learning for video understanding (such as captioning actions in short videos) to annotate actions performed by animals, such as walking, lying down, fighting, sleeping, and eating. This can provide rich information on the behaviour of individual animals and offer new insight to zoologists.

  4. Developing hardware systems that can work in the wild with edge analytics, so that only relevant frames and video snippets are stored or sent to a base network (a second sketch below illustrates such frame filtering). This would reduce the frequency at which field experts have to visit the camera traps for maintenance.
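
A minimal sketch of the triage step in point 2, assuming a noisy classifier that returns per-class probabilities for each image: confidently classified images are auto-labelled, while low-confidence ones are queued for expert review. The `classifier` callable and the confidence threshold are illustrative assumptions.

```python
# Illustrative active-labelling triage: auto-label confident predictions,
# queue hard examples for experts.
from typing import Callable, List, Tuple
import numpy as np

def triage(images: List[str],
           classifier: Callable[[str], np.ndarray],
           confidence_threshold: float = 0.9) -> Tuple[List[Tuple[str, int]], List[str]]:
    """Split images into auto-labelled (path, class) pairs and a queue for expert review."""
    auto_labelled, needs_expert = [], []
    for path in images:
        probs = classifier(path)              # e.g. softmax scores over known individuals
        top_class = int(np.argmax(probs))
        if probs[top_class] >= confidence_threshold:
            auto_labelled.append((path, top_class))
        else:
            needs_expert.append(path)         # hard example: send for manual labelling
    return auto_labelled, needs_expert

# Expert-corrected hard examples would then be added to the training set and the
# classifier retrained, so the expert queue shrinks over successive rounds.
```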

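For the edge-analytics idea in point 4, even a simple motion filter can discard most empty frames before storage or transmission. The sketch below uses OpenCV's MOG2 background subtractor as a stand-in for a more capable on-device animal detector; the foreground-fraction threshold is an assumption.

```python
# Illustrative edge-side filter: keep only frames with enough foreground motion.
import cv2

def keep_moving_frames(video_path: str, min_foreground_fraction: float = 0.02):
    """Yield (frame_index, frame) for frames where enough pixels differ from the background model."""
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        if (mask > 0).mean() >= min_foreground_fraction:
            yield index, frame                # candidate animal activity: keep this frame
        index += 1
    capture.release()
```
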
Next Steps

The AI4Bharat community has the technical expertise and the will to build the underlying deep learning technology required for the aforementioned challenges. However, this project, more than any other, is strongly dependent on access to domain knowledge and expertise.

The AI4Bharat community is open to collaboration with the government and/or NGOs on collecting data, active labelling, and deep learning based classification.

References

[1] Meek, Paul Douglas, Karl Vernes, and Greg Falzon. "On the reliability of expert identification of small-medium sized mammals from camera trap photos." Wildlife Biology in Practice 9, no. 2 (2013): 1-19.
[2] Whitehead, Hal. "Computer assisted individual identification of sperm whale flukes." Report of the International Whaling Commission 12 (1990): 71-77.
[3] Mizroch, Sally A., Judith A. Beard, and Macgill Lynde. "Computer assisted photo-identification of humpback whales." Report of the International Whaling Commission 12 (1990): 63-70.
[4] Hiby, Lex, Phil Lovell, Narendra Patil, N. Samba Kumar, Arjun M. Gopalaswamy, and K. Ullas Karanth. "A tiger cannot change its stripes: using a three-dimensional model to match images of living tigers and tiger skins." Biology Letters 5, no. 3 (2009): 383-386.
[5] Hiby, Lex, and Phil Lovell. "Computer aided matching of natural markings: a prototype system for grey seals." Report of the International Whaling Commission 12 (1990): 57-61.
[6] Kelly, Marcella J. "Computer-aided photograph matching in studies using individual identification: an example from Serengeti cheetahs." Journal of Mammalogy 82, no. 2 (2001): 440-449.
[7] Hillman, G. R., B. Wursig, G. A. Gailey, N. Kehtarnavaz, A. Drobyshevsky, B. N. Araabi, H. D. Tagare, and D. W. Weller. "Computer-assisted photo-identification of individual marine vertebrates: a multi-species system." Aquatic Mammals 29, no. 1 (2003): 117-123.
[8] Ardovini, Alessandro, Luigi Cinque, and Enver Sangineto. "Identifying elephant photos by multi-curve matching." Pattern Recognition 41, no. 6 (2008): 1867-1877.
[9] Burghardt, T., P. J. Barham, N. Campbell, I. C. Cuthill, R. B. Sherley, and T. M. Leshoro. "A fully automated computer vision system for the biometric identification of African penguins (Spheniscus demersus) on Robben Island." In 6th International Penguin Conference (IPC07), Hobart, Tasmania, Australia, EJ Woehler Ed. 2007.
[10] Deb, Debayan, Susan Wiper, Sixue Gong, Yichun Shi, Cori Tymoszek, Alison Fletcher, and Anil K. Jain. "Face recognition: Primates in the wild." In 2018 IEEE 9th International Conference on Biometrics Theory, Applications and Systems (BTAS), pp. 1-10. IEEE, 2018.
[11] Carter, Steven JB, Ian P. Bell, Jessica J. Miller, and Peter P. Gash. "Automated marine turtle photograph identification using artificial neural networks, with application to green turtles." Journal of Experimental Marine Biology and Ecology 452 (2014): 105-110.
[12] Körschens, Matthias, Björn Barz, and Joachim Denzler. "Towards automatic identification of elephants in the wild." arXiv preprint arXiv:1812.04418 (2018).