Clarification on input sensors during evaluation

As stated in the instructions, only speed and RGB images from the front/right/left cameras are kept during evaluation. I have two questions regarding this:

  1. Do the RGB images include semantic segmentation images?
  2. Additionally, participants are “restricted from accessing model weights or custom logs during evaluation”. Could you be more specific about these restrictions? My assumption is that we can use pretrained model weights during stage 1 but not during stage 2; is that correct?
  1. You can extract any output (segmentation, etc.) from the RGB image inputs; see the sketch after this list.
  2. Your understanding is correct.
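
To make the first point concrete, here is a minimal sketch of inferring a segmentation map from an RGB camera frame with an off-the-shelf pretrained network. This is not official challenge code: the model choice (torchvision’s DeepLabV3) and the file path are assumptions, and any RGB-to-segmentation model would serve the same purpose.

```python
# Minimal sketch (not official challenge code): derive semantic
# segmentation from an RGB camera frame using an off-the-shelf
# pretrained model. DeepLabV3 is an assumed example model.
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50
from PIL import Image

model = deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def segment(rgb_frame: Image.Image) -> torch.Tensor:
    """Return a per-pixel class-index map predicted from one RGB frame."""
    batch = preprocess(rgb_frame).unsqueeze(0)  # shape (1, 3, H, W)
    with torch.no_grad():
        logits = model(batch)["out"]            # shape (1, C, H, W)
    return logits.argmax(dim=1).squeeze(0)      # shape (H, W)

# Example: run on the front-camera image (path is hypothetical).
# mask = segment(Image.open("front_rgb.png").convert("RGB"))
```

The same idea extends to any other representation (depth, optical flow, etc.) you want to derive from the allowed RGB inputs.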

Thank you for answering!
We can extract segmentation from RGB images ourselves, but what I am asking is whether the ground-truth segmentation images are provided during evaluation.

Do you mean providing the ground-truth segmentation for evaluating networks on your local systems? For Round 2, no, that is currently not possible. We will evaluate whether it is necessary later.

To improve your score, consider how the three metrics work together.

If you are asking to see the results of the Round 1 evaluation, that is coming soon: we will render the output video and display it on the leaderboard.