As stated in the instructions, only speed and RGB images from the front/right/left cameras are available during evaluation. I have two questions regarding this:
Do the RGB images include semantic segmentation images?
Additionally, participants are “restricted from accessing model weights or custom logs during evaluation”. Can you be more specific about these restrictions? My assumption is that we can use pretrained model weights during stage 1 but not in stage 2, is that correct?
Thank you for answering!
We can extract segmentation from RGB images ourselves, but my question is whether the ground-truth segmentation images are provided during evaluation.
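To clarify what I mean by extracting segmentation from RGB: running an off-the-shelf pretrained segmentation network over the camera frames, roughly like the sketch below (the model choice and file path are purely illustrative and not part of the starter kit):

```python
# Illustration only: inferring a segmentation map from an RGB frame with a
# pretrained model. Model choice and file path are arbitrary examples.
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50
from PIL import Image

model = deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

rgb = Image.open("front_camera.png").convert("RGB")       # hypothetical frame
with torch.no_grad():
    logits = model(preprocess(rgb).unsqueeze(0))["out"]   # (1, classes, H, W)
mask = logits.argmax(dim=1)                               # predicted, not ground truth
```

This only gives predicted masks, which is why I am asking about the ground-truth ones.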
Do you mean providing the ground-truth segmentation for evaluating networks on your local systems? For Round 2, no, that is currently not possible. We will evaluate later whether it is necessary.
To improve your score, consider how the three metrics work together.
If you are asking to see the results of the Round 1 evaluation, that is coming soon: we will render the output video and show it on the leaderboard.
The wording is a bit unclear to me. During the “evaluation” there is a 1-hour “practice” session and then the final “evaluation”. My question is about the “practice” session: are we allowed to use any sensor during it, or only the three (front, left, and right)?
You can use any set of sensors while training or during the practice session, as these will ultimately change the quality of the trained model. However, during evaluation only a fixed set is available (speed, RGB from the front, right, and left cameras).
Will RGB from the right and left cameras be available during evaluation? They do not seem to be available right now during the evaluation step; only the front camera is returning images.
Check that the sensors you want are enabled in the config.py file: see active_sensors and add the ones you want from the cameras dict in the Envconfig class.
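For reference, the relevant part of config.py looks roughly like the sketch below; only active_sensors and the cameras dict are the identifiers referenced above, and the camera names and fields here are placeholders:

```python
# Sketch of the sensor configuration in config.py. Camera names and fields
# are placeholders; only `active_sensors` and the `cameras` dict are the
# identifiers referenced above.

class EnvConfig:  # the "Envconfig" class mentioned above; exact name may differ
    # Sensors the environment will actually create. Add keys from the
    # `cameras` dict below to start receiving their observations.
    active_sensors = [
        "CameraFrontRGB",
        "CameraLeftRGB",
        "CameraRightRGB",
    ]

    # Catalogue of available cameras; copy any key you want into
    # `active_sensors` above.
    cameras = {
        "CameraFrontRGB": {"width": 512, "height": 384, "fov": 90},
        "CameraLeftRGB":  {"width": 512, "height": 384, "fov": 90},
        "CameraRightRGB": {"width": 512, "height": 384, "fov": 90},
    }
```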
After reading this thread, I am still unclear about the availability of the ground-truth segmentation masks during the “1 Hour” training period for Round 2. It is clear they will not be available during the evaluation period.
After the code change for using multiple cameras, this line in evaluator.py
self.check_for_allowed_sensors()
throws an exception when trying to add them to the sim environment.
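For reference, my guess is that check_for_allowed_sensors() rejects anything outside a fixed allowlist, roughly like this (my own sketch, not the actual starter-kit code):

```python
# Guess at what the guard does; allowlist contents and attribute names are
# assumptions for illustration only.

ALLOWED_EVAL_SENSORS = {"CameraFrontRGB", "CameraLeftRGB", "CameraRightRGB"}

class Evaluator:
    def __init__(self, active_sensors):
        self.active_sensors = active_sensors

    def check_for_allowed_sensors(self):
        # Any requested sensor outside the allowlist (e.g. a segmentation
        # camera) raises before the sim environment is built.
        disallowed = set(self.active_sensors) - ALLOWED_EVAL_SENSORS
        if disallowed:
            raise ValueError(f"Sensors not permitted: {sorted(disallowed)}")
```

If that is what is happening, it would also block segmentation cameras in the practice session, which leads to my question below.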
Access to these masks is important for anyone using a segmentation model.
The evaluator code in the starter kit does not allow segmentation cameras during the 1-hr practice session. Is the evaluator on the server different from the starter kit?