Need your Inputs for improving competition

Hi all, you’ve spent some time wrangling the simulator and can now help us make the challenge better.

Please let us know: what have been your major pain points so far, and what would you like to see improved?


Haven’t gotten up and running yet, but I had some missing libraries when installing the Python packages from requirements.txt.

Specifically, I fixed it with `sudo apt install libhdf5-dev libglib2.0-dev`.


I have 2 questions:

  1. Can we run the simulator in headless mode (no GUI)? I believe without rendering the training will be much faster.
  2. Can we run one simulator instance for multiple training scripts, i.e. same IP, same port?

Hi @ducnx:

  1. The simulator runs at a fixed heartbeat, which means running headless will give you some speed benefit, but not necessarily a lot. Additionally, the Arrival simulator does not currently support running headless.
  2. Currently not supported, but added to roadmap. Thank you!

Thank you for answering. How can I run multiple instances of the simulator with different ports on a single machine, so I can allow each training script to use one simulator instance?


@siddha_ganju @jyotish
Hi, I found a bug at lines 689-691:

            ) + np.arctan(
                dx / dy
            ) # yaw, radians

This should be:

            ) + np.arctan2(
                dx, dy
            ) # yaw, radians

This is a bug because with arctan the car faces the wrong way when reset onto some track segments, leading to episode termination as soon as the car starts driving. Reference:
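To see why the two differ, here is a minimal standalone NumPy sketch (not the simulator code), using the same dx/dy convention as the snippet above: arctan only sees the ratio dx/dy, so it collapses opposite headings into the same angle, while arctan2 keeps the signs of both components and recovers the full quadrant.

```python
import numpy as np

# Offset components for a heading in the quadrant where dy < 0.
dx, dy = 1.0, -1.0

# arctan cannot distinguish (1, -1) from (-1, 1): both give dx/dy = -1,
# so the result is -pi/4, the true heading flipped by 180 degrees.
print(np.arctan(dx / dy))   # -0.7853981... (-pi/4)

# arctan2 takes the components separately and returns the correct quadrant.
print(np.arctan2(dx, dy))   # 2.3561944... (3*pi/4)
```

The two results differ by exactly pi, which matches the observed symptom of the car spawning facing the wrong way on certain segments.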


Also it would be super cool if we were allowed to cut the curbs on the track, that would just look absolutely sick :laughing:


Good catch, this has been fixed and will be available shortly.

The cameras on the evaluation server are still not configured according to the rules in the challenge overview, as I described in this thread:

This is currently keeping me from submitting my agent. Is there any news on this topic? :thinking:


Thanks — added to the documentation, along with some other suggestions:


Is there a way to view or play back submitted evaluations? It would be a great asset to be able to view these so that irregular behavior can be diagnosed. I understand it cannot be done for round 2. I have noticed a large discrepancy between the scores, performance, and agent behavior in a local simulator versus the evaluation results used for grading, even after reducing the frame rate to match the evaluation server.


Outputting videos is a great idea!

Three things for me would be great:

  1. A mode in the simulator where the simulator only steps when env.step is called. This would be useful during training and data collection because you wouldn’t need to worry about frame rates dropping due to whatever time delays are introduced by your code.
  2. Access to the true simulator clock time, to help calculate an accurate dt between env.steps. Currently I’m using time.time(), but I don’t think it is quite as accurate as getting a direct readout at each step.
  3. Correct and updated docs. Getting started was a pain because there seems to be outdated information. For example, the getting-started guide references a bash script to start the random/SAC agents, but in the latest repo this does not exist. It isn’t until you look at the repo’s README that you see there is a which should be used instead.

I only just got going on this challenge, so perhaps the first two are non-issues and I am doing things suboptimally. If you have suggestions for those two, let me know.
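On point 2, until the simulator exposes its own clock, a small workaround sketch in plain Python (the `StepTimer` wrapper is hypothetical, not part of the simulator API): `time.monotonic()` is generally a better choice than `time.time()` for measuring dt between steps, because it is unaffected by system clock adjustments, though it still measures wall-clock time rather than simulator time.

```python
import time

class StepTimer:
    """Measure wall-clock dt between consecutive env.step calls."""

    def __init__(self):
        self._last = None

    def tick(self):
        # monotonic() never jumps backwards, unlike time.time() after
        # NTP adjustments, so successive differences are always >= 0.
        now = time.monotonic()
        dt = None if self._last is None else now - self._last
        self._last = now
        return dt

timer = StepTimer()
print(timer.tick())  # None on the first call (no previous step)
time.sleep(0.05)
print(timer.tick())  # small positive dt, roughly the sleep duration
```

You would call `timer.tick()` right after each `env.step` and feed the returned dt into whatever rate bookkeeping you do; it does not fix the underlying request for true simulator time, but it removes one source of jitter.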