Object Detection

In the RAIL-BENCH Object Detection benchmark, the task is to accurately detect and classify objects of seven categories relevant to railway environments: train, signal, signal pole, catenary pole, road vehicle, bicycle, and person.

Object Detection Visualization

Dataset

The RAIL-BENCH Object dataset shares the same images as RAIL-BENCH Rail (see Rail Track Detection), comprising 2500 real-world RGB images, split into 1500 training, 500 validation, and 500 test images. Objects are annotated with axis-aligned bounding boxes across seven categories: train, signal, signal pole, catenary pole, road vehicle, bicycle, and person. The entire dataset — with the exception of the test ground truth — is publicly available for download:

Annotation Policy

  • Objects are annotated with axis-aligned bounding boxes. The seven annotated categories are: train, signal, signal pole, catenary pole, road vehicle, bicycle, and person.
  • We only consider front-facing light signals in the signal class.
  • The categories person, road vehicle, and bicycle can occur in dense groups, which are particularly difficult to annotate at instance level when observed from large distances. In such cases, these objects may be annotated collectively as a single crowd entity.
  • Each bounding box is labeled with an occlusion level of 0 (0-24 %), 1 (25-49 %), 2 (50-74 %), or 3 (75-99 %), and can additionally be marked as ignore if the annotator considers it ambiguous.
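The annotation policy above can be sketched as a simple filter over annotation records. The dict-based record layout below (field names `category`, `bbox`, `occlusion`, `ignore`) is illustrative only, not the official RAIL-BENCH annotation schema:

```python
# Minimal sketch of filtering annotations by occlusion level and ignore flag.
# NOTE: the record layout is a hypothetical example, not the official schema.
CATEGORIES = ["train", "signal", "signal pole", "catenary pole",
              "road vehicle", "bicycle", "person"]

def keep_annotation(ann: dict, max_occlusion_level: int) -> bool:
    """Keep a box unless it is marked 'ignore' or too heavily occluded.

    Occlusion levels: 0 (0-24 %), 1 (25-49 %), 2 (50-74 %), 3 (75-99 %).
    """
    if ann.get("ignore", False):
        return False
    return ann["occlusion"] <= max_occlusion_level

anns = [
    {"category": "person", "bbox": [10, 20, 40, 80], "occlusion": 0},
    {"category": "signal", "bbox": [100, 50, 20, 60], "occlusion": 3},
    {"category": "train", "bbox": [0, 0, 500, 300], "occlusion": 1, "ignore": True},
]
# Keep only boxes with occlusion level 0 or 1 that are not ignored.
visible = [a for a in anns if keep_annotation(a, max_occlusion_level=1)]
```

Here only the `person` box survives: the `signal` box exceeds the occlusion limit and the `train` box carries the ignore flag.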

RAIL-BENCH Object Challenge

In the RAIL-BENCH Object challenge, predictions on the held-out test set are evaluated using mean Average Precision (mAP) at different Intersection over Union (IoU) thresholds, following the standard COCO evaluation protocol.

We define three levels of difficulty:

  1. Easy: only bounding boxes with an area of at least 50 x 50 pixels and an occlusion value below 25 % are taken into account.
  2. Moderate: only bounding boxes with an area of at least 30 x 30 pixels and an occlusion value below 75 % are taken into account.
  3. Hard: all ground-truth bounding boxes are taken into account, regardless of their size or occlusion level.

Submissions are ranked by mAP in the hard difficulty level averaged across all IoU thresholds (mAP@[0.5:0.95]).
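The three difficulty filters and the IoU threshold range can be sketched as follows; boxes are given as (x, y, w, h), and the threshold list follows the COCO mAP@[0.5:0.95] convention of ten thresholds from 0.50 to 0.95 in steps of 0.05:

```python
# IoU thresholds used for mAP@[0.5:0.95]: 0.50, 0.55, ..., 0.95.
IOU_THRESHOLDS = [0.50 + 0.05 * i for i in range(10)]

def in_difficulty(box, occlusion_pct, level):
    """Return True if a ground-truth box counts at the given difficulty.

    box: (x, y, w, h) axis-aligned bounding box.
    occlusion_pct: occlusion of the object in percent.
    level: "easy", "moderate", or "hard" (criteria as defined above).
    """
    area = box[2] * box[3]
    if level == "easy":
        return area >= 50 * 50 and occlusion_pct < 25
    if level == "moderate":
        return area >= 30 * 30 and occlusion_pct < 75
    return True  # hard: every box counts, regardless of size or occlusion
```

For example, a 40 x 40 box with 10 % occlusion is excluded at the easy level (area below 2500 pixels) but included at the moderate and hard levels.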

How to Participate

  1. Download the RAIL-BENCH Object dataset.
  2. Train your model on the train split of the dataset. Use the validation split only for hyperparameter tuning, early stopping, and similar purposes, never for training.
  3. You may use additional training data from other sources, but you must state this when submitting your results.
  4. Optionally: use the RAIL-BENCH toolkit to compute evaluation scores on the validation set locally.
  5. Soon, we will publish the official Codabench challenge, where you can submit your predictions and get evaluated on the hidden test set.
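For local scoring on the validation set (step 4), the core of COCO-style evaluation is greedy matching of score-sorted detections to ground truth at an IoU threshold, followed by a precision-recall integration. The sketch below is a deliberately simplified single-class, single-threshold version for illustration; it is not the official RAIL-BENCH toolkit, which should be preferred for comparable numbers:

```python
# Simplified average precision for one class at one IoU threshold.
# Detections are (score, box) pairs; boxes are (x, y, w, h).

def iou(a, b):
    """Intersection over union of two axis-aligned (x, y, w, h) boxes."""
    ix = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def average_precision(detections, ground_truth, iou_thr=0.5):
    """Greedy matching of score-sorted detections, then step-wise PR integration."""
    if not ground_truth:
        return 0.0
    matched = [False] * len(ground_truth)
    tp_cum = fp_cum = 0
    ap = prev_recall = 0.0
    for score, box in sorted(detections, key=lambda d: -d[0]):
        # Match to the unmatched ground-truth box with the highest IoU >= threshold.
        best, best_i = iou_thr, -1
        for i, gt in enumerate(ground_truth):
            ov = iou(box, gt)
            if not matched[i] and ov >= best:
                best, best_i = ov, i
        if best_i >= 0:
            matched[best_i] = True
            tp_cum += 1
        else:
            fp_cum += 1
        recall = tp_cum / len(ground_truth)
        precision = tp_cum / (tp_cum + fp_cum)
        ap += (recall - prev_recall) * precision
        prev_recall = recall
    return ap
```

The full benchmark metric repeats this per class and per IoU threshold (0.50 to 0.95) and averages the results; the official protocol additionally uses interpolated precision, so this sketch can deviate slightly from COCO numbers.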