The task involves supervised classification of bird species from a set of bird images.
From an ecological and environmental point of view, monitoring bird diversity is an important task. While bird monitoring is a well-established process, the observations are largely carried out manually, which is time-consuming and hence scales poorly. This has motivated the use of machine learning methods to analyze bird images and sounds, using camera-trap data, recorder data or crowd-sourced data. In this challenge, we pose a bird image classification task, specifically for Himalayan birds, based on a limited but diverse set of crowd-sourced data. In particular, the present challenge involves a fairly small amount of labelled data, and may require transfer-learning-based approaches for effective classification.
- A short overview of the existing work on the problem:
There are various approaches to bird image classification, most of which use the Caltech bird image dataset. In this challenge, however, we provide a dataset which, while smaller, has larger variation in terms of scale, illumination, etc.
- Evaluation protocol:
The challenge data will be distributed in two phases. First, the training data will be made available, which participants can use to train and validate their methods. A few weeks later, closer to the result submission deadline, the test data will be made available. The results will be ranked by the F-score metric averaged across classes, which is computed from true positives, false positives and false negatives.
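For clarity, the class-averaged (macro) F-score can be computed as in the minimal sketch below; this illustrates the metric only and is not the official evaluation script:

```python
from collections import defaultdict

def macro_f_score(y_true, y_pred):
    """Per-class F-score from TP/FP/FN counts, averaged across classes."""
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1          # correct prediction for class t
        else:
            fp[p] += 1          # p was predicted but the true class is t
            fn[t] += 1          # t was missed
    scores = []
    for c in set(y_true):
        precision = tp[c] / (tp[c] + fp[c]) if (tp[c] + fp[c]) else 0.0
        recall = tp[c] / (tp[c] + fn[c]) if (tp[c] + fn[c]) else 0.0
        f = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        scores.append(f)
    return sum(scores) / len(scores)
```

Because each class contributes equally to the average, rare species count as much as common ones, which is why macro averaging is a natural choice for a diverse, possibly imbalanced dataset.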
A training data set is provided, wherein each folder contains the images of one bird class.
To develop and validate their algorithms, the participants should use only the images in the training set. The ratio of images used for training versus validation can be decided by the participants.
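As an illustration of one possible split of the folder-per-class training data (the 80/20 ratio, the `.jpg` extension, and the function name are assumptions for this sketch, since the challenge leaves the ratio to the participants):

```python
import random
from pathlib import Path

def split_dataset(root, val_fraction=0.2, seed=0):
    """Split a folder-per-class image tree into train/validation lists.

    Returns two lists of (path, class_name) pairs. Splitting within each
    class folder keeps the class proportions similar in both subsets.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible split
    train, val = [], []
    for class_dir in sorted(Path(root).iterdir()):
        if not class_dir.is_dir():
            continue
        images = sorted(class_dir.glob("*.jpg"))
        rng.shuffle(images)
        n_val = int(len(images) * val_fraction)
        val.extend((p, class_dir.name) for p in images[:n_val])
        train.extend((p, class_dir.name) for p in images[n_val:])
    return train, val
```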
To obtain the training data, please register by filling the form at the following link:
We shall email you the link to the training data once we receive your registration information.
The testing images will be available as per the schedule below (for which a notification will be sent to all registered participants).
After obtaining their results on the test images, the authors need to submit the results along with an extended abstract (see below). More details on the formatting of the results will be shared along with the test data.
Note: The authors should not make any changes to their algorithms or parameters while processing the test images. Any changes to the algorithms and their parameters should be made using only the training (and validation) data.
- Extended abstract:
Along with the test results, the authors should also submit an extended abstract describing their overall approach, experimental details, some example visual results, and quantitative results.
Among other details, the experimental details should include the number of images used for training and validation, and cross-validation information, if any.
The quantitative validation results should be reported in terms of precision, recall and F-score.
More details about the formatting of the abstract will be made available soon.
- Timeline of events:
June 8, 2018: Challenge registration open
June 20, 2018: Training data available
July 20, 2018: Registration closes
August 1, 2018: Test data available
August 31, 2018: Submission of results, reports and code
CVIP 2018: Announcement of the challenge results
- Organization team and contact details:
Arnav Bhavsar (IIT Mandi) (email@example.com)
A.D. Dileep (IIT Mandi) (firstname.lastname@example.org)
Padmanabhan Rajan (IIT Mandi) (email@example.com)