Rail Track Detection

Robust rail track detection serves as a fundamental and indispensable component in the pursuit of comprehensive scene understanding within automated railway systems, as it provides precise information about the region of interest for obstacle detection and can facilitate self-localization based on digital maps.

[Figure: track detection visualization]

Dataset

The RAIL-BENCH Rail dataset comprises 2500 real-world RGB images, split into 1500 training, 500 validation, and 500 test images. All rails visible in the images are annotated with polylines. In addition, object-level annotations are provided (see Object Detection). The entire dataset, with the exception of the test ground truth, is publicly available for download.

Annotation Policy

  • All visible rails in the images are annotated with polylines at the position of the outer edges of the rail heads. Check and guard rails are excluded from the annotations.
  • At turnouts, where several tracks merge/diverge, polylines terminate at the horizontal line connecting the two tips of the switch blades.
  • Only visible rails are labeled. However, in cases of minor occlusions (e.g., poles, signs) or when the rail position can be reliably inferred (e.g., when straight rails are covered by another train, but are visible before and after the train), we interpolate through the occlusion to maintain continuity.
  • Rails that are present in the scene but not labeled (e.g., due to poor visibility, heavy occlusion, or ambiguity about whether an occluded rail should be annotated) are covered by ignore areas that mark these regions. Predictions in these regions are excluded during the evaluation to prevent penalizing reasonable model behavior.
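The ignore-region rule above can be sketched as a simple pre-filtering step before scoring. The polygon format and function names below are illustrative assumptions, not the official RAIL-BENCH toolkit API.

```python
# Sketch: drop predicted polyline points that fall inside an ignore region
# before scoring. Ignore regions are assumed to be polygons given as lists
# of (x, y) vertices; this is an assumption, not the official format.

def point_in_polygon(x, y, polygon):
    """Ray-casting point-in-polygon test; polygon is a list of (x, y) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count edge crossings of a horizontal ray extending to the right.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def filter_ignored(points, ignore_regions):
    """Keep only points that lie outside every ignore-region polygon."""
    return [
        p for p in points
        if not any(point_in_polygon(p[0], p[1], poly) for poly in ignore_regions)
    ]
```

In practice a geometry library (e.g., Shapely) would replace the hand-rolled polygon test, but the principle is the same: predictions inside ignore areas simply do not enter the evaluation.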

RAIL-BENCH Rail Challenge

In the RAIL-BENCH Rail challenge, predictions on the held-out test set are evaluated using two metrics:

  1. ChamferAP: An average precision metric based on the Chamfer Distance.
  2. LineAP: A novel segment-based polyline average precision metric developed specifically for this challenge.

Both metrics evaluate the positioning of rail predictions. However, whereas ChamferAP evaluates how well predicted polylines match ground truth rails on an instance-level basis, LineAP performs a fine-grained geometric evaluation by dividing both predicted and ground truth rails into small line segments.
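To make the Chamfer-based evaluation concrete, the following minimal sketch computes the symmetric Chamfer distance between two polylines sampled as point lists. This illustrates only the underlying distance; the official ChamferAP additionally involves instance matching and AP computation, whose exact definitions are set by the challenge organizers.

```python
import math

# Illustrative sketch, not the official metric implementation: the symmetric
# Chamfer distance between two rail polylines, each sampled as a list of
# (x, y) image points.

def chamfer_distance(pred, gt):
    """Average of the mean nearest-neighbor distance in both directions."""
    def mean_nn(a, b):
        # For each point in a, find the distance to its closest point in b.
        return sum(min(math.dist(p, q) for q in b) for p in a) / len(a)
    return 0.5 * (mean_nn(pred, gt) + mean_nn(gt, pred))
```

A prediction is then typically counted as a true positive if this distance falls below the threshold, and average precision is computed over all matched instances.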

For RAIL-BENCH Rail, both ChamferAP and LineAP are computed with three different distance thresholds defined with respect to the image width. For instance, AP@1 corresponds to a distance threshold of 1% of the image width. We also report the mean across all three thresholds as mChamferAP and mLineAP, respectively. Submissions are finally ranked by the mean of mChamferAP and mLineAP.
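The ranking arithmetic above can be sketched in a few lines. Note that the source only states that AP@1 corresponds to 1% of the image width; the other two threshold fractions used below are placeholders, not the official values.

```python
# Sketch of the ranking scheme described above. The threshold fractions are
# assumed example values; only the 1% threshold (AP@1) is stated in the text.

def pixel_thresholds(image_width, fractions=(0.005, 0.01, 0.02)):
    """Convert relative thresholds (fractions of image width) to pixels."""
    return [f * image_width for f in fractions]

def ranking_score(chamfer_aps, line_aps):
    """mChamferAP and mLineAP are the means over the three per-threshold
    AP values; the submission is ranked by the mean of the two."""
    m_chamfer = sum(chamfer_aps) / len(chamfer_aps)
    m_line = sum(line_aps) / len(line_aps)
    return 0.5 * (m_chamfer + m_line)
```

Defining thresholds relative to the image width keeps the metric comparable across different image resolutions.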

How to Participate

  1. Download the RAIL-BENCH Rail dataset.
  2. Train your model on the train split of the dataset. Do not use the validation split for training; use it only for hyperparameter tuning, early stopping, etc.
  3. You may use additional training data from other sources, but you must state this when submitting your results.
  4. Optionally: use the RAIL-BENCH toolkit to compute evaluation scores on the validation set locally.
  5. We will soon publish the official Codabench challenge, where you can submit your predictions and have them evaluated on the hidden test set.