Leaf Segmentation and Counting Challenges

To advance the state of the art in leaf segmentation and to demonstrate the difficulty of segmenting all leaves in an image of plants, we organize the Leaf Segmentation and Counting Challenges (LSC and LCC). This is the third LSC, following the successful LSC 2014 and 2015, and the second LCC. Several published methods stem from these challenges or build on their data. The major differences this year are the expansion of the dataset and a focus on leaf segmentation accuracy; accordingly, ground-truth foreground segmentation masks are provided for training and testing.

For the challenges we release training sets (containing raw images and annotations) and testing sets (containing raw images only). Papers will be evaluated and ranked according to their outcome, the validity of the algorithm, and the suitability of the approach. Only fully automated approaches will be accepted. Accepted papers will be presented either orally or in a poster session, and will appear in the proceedings. Should a large number of high-quality papers be received, a collation study summarizing algorithms and results will be presented instead, with authors presenting details in a poster session.

A jointly authored paper presenting the findings of the collation study may be invited to a high impact journal in computer vision (to be announced at the workshop if appropriate). The collation paper will be compiled primarily from participants presenting at the workshop.

How to participate

Please read first the challenge terms and conditions

  1. Please register for one or both of the challenges by filling in the online registration form. This registration is a mandatory step before downloading data and submitting results to the challenges.
  2. Upon receipt of your registration form, you will receive a link to download the dataset (as a single zip file), collected in our laboratories (datasets A1 -- A3) or derived from a public dataset (A4, public data kindly shared by Dr Hannah Dee from Aberystwyth) of top-view images of rosette plants. All images were hand labelled. The archive contains an evaluation function (in MATLAB) for comparing segmentation and counting outcomes between ground truth and algorithm results.
  3. After the results submission deadline the organizers will evaluate the results, and outcomes will be sent back to the authors for inclusion in their submission.
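The bundled MATLAB evaluation function is authoritative for scoring. Purely as an illustration, and under the assumption that a Symmetric Best Dice-style metric is used (a common choice for this challenge family; the exact formulation is defined only by the released code), a comparison of two leaf label masks might be sketched in Python as:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total > 0 else 1.0

def best_dice(gt, pred):
    """For each leaf label in gt, the best Dice against any leaf in pred,
    averaged over gt leaves. Label 0 is background in both masks."""
    scores = []
    for g in np.unique(gt):
        if g == 0:
            continue
        gmask = gt == g
        scores.append(max((dice(gmask, pred == p)
                           for p in np.unique(pred) if p != 0), default=0.0))
    return float(np.mean(scores)) if scores else 0.0

def symmetric_best_dice(gt, pred):
    """Symmetric Best Dice: the minimum of BestDice in both directions."""
    return min(best_dice(gt, pred), best_dice(pred, gt))
```

A perfect segmentation scores 1.0; splitting or merging leaves lowers the score even when the foreground is identical, which is why a per-leaf metric is used rather than plain foreground overlap.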

About the data

We share images of tobacco and Arabidopsis plants. Tobacco images were collected with a camera whose field of view contained a single plant. Arabidopsis images were collected with a camera whose larger field of view encompassed many plants; the images were then cropped to individual plants. The released images show either mutants or wild types and were taken over a span of several days. Plant images are encoded as TIFF files.

All images were hand-labelled to obtain ground-truth masks for each leaf in the scene. These masks are PNG image files in which each segmented leaf is identified by a unique integer value, starting from 1, with 0 denoting background. For the counting problem, annotations are provided as a PNG image in which each leaf center is marked by a single pixel. Additionally, a CSV file listing image names and leaf counts is provided.
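As a rough sketch of how these annotations can be consumed (the file name and helper names below are hypothetical; the archive's own naming conventions apply), the leaf count follows directly from the unique labels in the mask or the nonzero pixels in the center-point image:

```python
import numpy as np

def count_leaves(label):
    """Count leaves in a label mask: each leaf carries a unique positive
    integer and 0 marks background."""
    ids = np.unique(label)
    return int((ids != 0).sum())

def count_centers(centers):
    """Count leaves from a center-point image: one nonzero pixel per leaf."""
    return int(np.count_nonzero(centers))

# In practice the PNGs can be loaded e.g. with Pillow:
#   label = np.array(Image.open("plant001_label.png"))  # hypothetical file name
# Here, a tiny synthetic mask with two leaves:
label = np.array([[0, 0, 1],
                  [2, 2, 1],
                  [2, 0, 0]])
print(count_leaves(label))      # 2 leaves
print(int((label > 0).sum()))   # 5 foreground pixels (binary plant mask)
```

Thresholding the label mask at zero recovers the foreground segmentation mask that this year's challenge additionally provides.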

For further information on the ground truth annotation process, please refer to:

  1. M. Minervini, A. Fischbach, H. Scharr, and S. A. Tsaftaris. Finely-grained annotated datasets for image-based plant phenotyping. Pattern Recognition Letters, pages 1-10, 2015, doi:10.1016/j.patrec.2015.10.013
  2. H. Scharr, M. Minervini, A. Fischbach, and S. A. Tsaftaris. Annotated Image Datasets of Rosette Plants. Technical Report No. FZJ-2014-03837, Forschungszentrum Jülich, 2014
  3. J. Bell and H. M. Dee. Aberystwyth Leaf Evaluation Dataset [Data set]. Zenodo, 2016

or to the challenge documents for LSC 2017 or LCC 2017.

Challenge Terms and Conditions

  • All the data made available for the CVPPP 2017 Challenges (LSC and LCC) can only be used to generate a submission for these challenges.
  • Results submitted to CVPPP 2017 can be published (as seen appropriate by the organizers) through different media, including this website and journal publications.
  • By submitting an entry to the CVPPP 2017 LSC and LCC, each team agrees to have at least one member register for the accompanying workshop (held on Oct. 28, 2017).

These guidelines follow those established by challenges in biomedical image analysis such as example 1 and example 2.


Deadlines for the Challenges

Challenge opens for registration         May 2017
Data ready for download                  May 2017
Submit results on testing data           June 20, 2017, 11:59 PM Pacific Time
Evaluation of results on testing data    June 23, 2017
Paper submission deadline                June 28, 2017, 11:59 PM Pacific Time
Notification of acceptance               Aug 10, 2017
Camera-ready paper                       Aug 24, 2017, 11:59 PM Pacific Time
Author registration deadline             Aug 24, 2017
Workshop day                             Oct 28, 2017


Registration remains open until June 19, 2017.


Challenge Organization

Sotirios A. Tsaftaris, University of Edinburgh, UK

Hanno Scharr, IBG-2, Forschungszentrum Jülich, Germany





Sponsored by

LemnaTec

PhenoSpex




Some example images from the datasets