Clarification on input sensors during evaluation

As stated in the instructions, only speed and the RGB images from the front/right/left cameras are kept during evaluation. I have two questions regarding this:

  1. Do the RGB images include semantic segmentation images?
  2. Additionally, participants are “restricted from accessing model weights or custom logs during evaluation”. Can you be more specific about the restrictions? My assumption is that we can use pretrained model weights during Stage 1 but not in Stage 2; is that correct?

  1. You can extract any output (segmentation, etc.) from the RGB images yourself; see the sketch after this list.
  2. Your understanding is correct.
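
As a concrete illustration of extracting segmentation from the RGB input yourself, here is a minimal sketch using a generic pretrained torchvision model. This is only an example of the idea, not part of the starter kit; in practice you would train or fine-tune a model on the racing domain during Stage 1.

import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

# Generic pretrained segmentation model, purely for illustration.
# (On newer torchvision versions use weights=... instead of pretrained=True.)
model = deeplabv3_resnet50(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def segment(rgb_frame):
    """Predict a per-pixel class map from an HxWx3 uint8 RGB frame."""
    x = preprocess(rgb_frame).unsqueeze(0)        # shape 1x3xHxW
    with torch.no_grad():
        out = model(x)["out"]                     # 1xCxHxW class logits
    return out.argmax(dim=1).squeeze(0).numpy()   # HxW predicted class ids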

Thank you for answering!
We can extract segmentation from the RGB images ourselves, but my question is whether the ground-truth segmentation images are provided during evaluation.

Do you mean providing the ground-truth segmentation for evaluating networks on your local systems? For Round 2, no, that is currently not possible. We will evaluate later whether it is necessary.

To improve your score, consider how the three metrics work together.

If you are asking to see the results of the Round 1 evaluation, those are coming soon: we will render the output video and show it on the leaderboard.

For Round 2, can we use additional sensors during the 1-hour training period, e.g. the segmentation camera view for the new track?

Please review the evaluation method.

The wording is a bit unclear to me. During the “evaluation” there is a “practice” session of 1 hour and then the final “evaluation”. My question is about the “practice” session: are we allowed to use any sensor during the practice session, or only the three (front, left and right)?


You can choose to use any set of sensors and parameters while training or during the practice session, as these will ultimately change the quality of the trained model. However, during evaluation only a fixed set of inputs is available (speed, RGB from the front + right + left cameras).
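
To make this concrete, here is a minimal sketch of an agent that only touches the fixed evaluation inputs. The observation dict keys, the speed field, and the two-element action are assumptions for illustration, not the exact starter-kit API.

import numpy as np

class EvaluationAgent:
    """Toy agent that only reads the inputs guaranteed at evaluation time."""

    def select_action(self, obs):
        # obs is assumed to be a dict; key names here are illustrative.
        speed = obs["speed"]                 # scalar speed, assumed key name
        front = obs["CameraFrontRGB"]        # HxWx3 RGB image
        left = obs.get("CameraLeftRGB")      # assumed key name
        right = obs.get("CameraRightRGB")    # assumed key name

        # A real agent would feed front/left/right into its trained network here.
        # Placeholder policy: hold the wheel straight, ease off above 20 m/s.
        steering = 0.0
        throttle = 0.2 if speed < 20.0 else 0.0
        return np.array([steering, throttle], dtype=np.float32)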

Will RGB from the right and left cameras be available during evaluation? It seems they are not available right now in the evaluation step; only the front camera is returning images.

They will be available as additional data points; you can choose not to use them.

This was brought up by a fellow participant during the course of the competition itself.

Check that the sensors you want are enabled in the config.py file. See active_sensors and add the ones you want from the cameras dict in the Envconfig class.

class SimulatorConfig(object):
    racetrack = "Thruxton"
    active_sensors = [
        "CameraFrontRGB",
    ]
    driver_params = {
        "DriverAPIClass": "VApiUdp",
        "DriverAPI_UDP_SendAddress": "0.0.0.0",
    }
    camera_params = {
        "Format": "ColorBGR8",
        "FOVAngle": 90,
        "Width": 512,
        "Height": 384,
        "bAutoAdvertise": True,
    }
    vehicle_params = False
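
For example, to also request the left and right RGB cameras, you could extend active_sensors as below. The exact camera names are assumptions following the CameraFrontRGB pattern, so confirm them against the cameras dict in your copy of config.py.

class SimulatorConfig(object):
    racetrack = "Thruxton"
    active_sensors = [
        "CameraFrontRGB",
        "CameraLeftRGB",   # assumed name; confirm against the cameras dict
        "CameraRightRGB",  # assumed name; confirm against the cameras dict
    ]
    # ... driver_params, camera_params, vehicle_params unchanged ...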

Hope this is helpful


After reading this thread I am still unclear about the availability of the ground-truth segmentation masks during the 1-hour training period for Round 2. It is clear they will not be available during the evaluation period.

After the code change for using multiple cameras, this line in evaluator.py

self.check_for_allowed_sensors()

throws an exception when trying to add them to the sim environment.
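
For context, a check like this presumably validates the configured sensors against a fixed allow-list, along these lines (an illustrative sketch, not the actual starter-kit code):

# Illustrative only; sensor names and behaviour are assumptions.
ALLOWED_EVAL_SENSORS = {"CameraFrontRGB", "CameraLeftRGB", "CameraRightRGB"}

def check_for_allowed_sensors(active_sensors):
    # Raise if any configured sensor falls outside the evaluation allow-list,
    # which would explain why adding a segmentation camera fails here.
    disallowed = set(active_sensors) - ALLOWED_EVAL_SENSORS
    if disallowed:
        raise ValueError(f"Sensors not allowed during evaluation: {sorted(disallowed)}")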

Access to these masks is important for anyone using a segmentation model.


Any updates here? It would be good to know before Round 2 starts.


Bottom line: yes, we will allow access to the semseg cameras during the 1-hour practice period in Stage 2.


The evaluator code in the starter kit does not allow segmentation cameras during the 1-hour practice session. Is the evaluator on the server different from the starter kit?

link to code line