Observation delay limits processing to a maximum of 10 FPS

Are we meant to be able to change this? I can run my code locally at around 30 FPS, but on the server it runs at around 5-6 FPS because the obs_delay parameter in config.py is overwritten to 0.1 s. This 0.1 s delay effectively imposes a hard limit: no one's code should run faster than 10 FPS.
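To make the cap concrete, here is a toy sketch (not the engine's actual code) showing how a fixed sleep inside the observation loop bounds throughput at 1/delay FPS no matter how fast the rest of the code is. `OBS_DELAY` and `measure_fps` are names I made up to stand in for obs_delay in config.py and the observation loop:

```python
import time

# Hypothetical stand-in for obs_delay in config.py
OBS_DELAY = 0.1

def measure_fps(n_steps, obs_delay=OBS_DELAY, agent_compute_s=0.0):
    """Run n_steps observation steps and return the achieved FPS."""
    start = time.perf_counter()
    for _ in range(n_steps):
        time.sleep(obs_delay)         # the engine's fixed pause
        time.sleep(agent_compute_s)   # agent work done in the same thread
    return n_steps / (time.perf_counter() - start)

print(f"{measure_fps(20):.1f} FPS")   # at most ~10 FPS, on any hardware
```

Any extra work the agent does in the same thread only pushes the achieved rate further below the 10 FPS ceiling.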

Is this an intentional limitation, meant to keep the processing power similar to what the real-world car can actually achieve?


I think it is intentional: see the fourth paragraph in Environment Overview — Learn-to-Race documentation. However, I doubt that a real-world solution would lack a solid system collecting observations at a fixed interval. The algorithm might sometimes run a computation that takes longer than the time between observations, but that computation could run in another thread, with a controller handling the regular observations until the longer computation finishes.
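A minimal sketch of that pattern, under my own assumptions rather than anything in the Learn-to-Race API: a slow "planner" runs in a background thread, while a cheap per-observation controller always returns immediately using whatever plan has most recently finished. All names here (`AsyncPlanner`, `submit`, `act`) are hypothetical:

```python
import queue
import threading
import time

class AsyncPlanner:
    def __init__(self):
        self._latest_plan = 0.0          # last finished plan; used until replaced
        self._jobs = queue.Queue(maxsize=1)
        threading.Thread(target=self._worker, daemon=True).start()

    def _worker(self):
        while True:
            obs = self._jobs.get()
            time.sleep(0.25)             # stand-in for a long computation
            self._latest_plan = obs * 2  # publish the finished result

    def submit(self, obs):
        try:
            self._jobs.put_nowait(obs)   # skip if a job is already running
        except queue.Full:
            pass

    def act(self, obs):
        # Cheap controller: returns immediately, never blocks on the worker.
        return self._latest_plan

planner = AsyncPlanner()
for i in range(8):
    planner.submit(i)
    action = planner.act(i)  # observations stay on a fixed cadence
    time.sleep(0.1)          # fixed observation interval
```

The key property is that `act` never waits on the slow computation, so the observation loop keeps its fixed interval even when planning takes several frames.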

The 0.1-second pause is called in the main thread (in RacingEnv._observe()) instead of being driven by a timer in a separate thread. As a result, the time between observations is 0.1 seconds plus however long it takes to run other code in RacingEnv and whatever code your agent runs. The main problem is not that it's over 0.1 seconds, but that it varies between observations. This is an especially big problem for an agent that does one big computation every 20 frames: one out of every 20 observations would have a massive dt, and the user gets no feedback that this is happening.
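The dt jitter is easy to demonstrate with a toy main-thread loop (again a sketch, not the engine's code): a heavy computation every few frames inflates exactly those frames' dt, while the others stay near the base delay.

```python
import time

OBS_DELAY = 0.1  # hypothetical stand-in for obs_delay

def run(frames, heavy_every=None, heavy_s=0.3):
    """Return the dt between successive observations of a main-thread loop."""
    dts, last = [], time.perf_counter()
    for i in range(frames):
        time.sleep(OBS_DELAY)                   # the engine's pause
        if heavy_every and i % heavy_every == 0:
            time.sleep(heavy_s)                 # agent's occasional big computation
        now = time.perf_counter()
        dts.append(now - last)
        last = now
    return dts

dts = run(10, heavy_every=5)
print(min(dts), max(dts))  # max dt is ~0.3 s larger than min dt
```

An agent training on these observations sees a nonuniform time base without any signal that some frames represent four times as much elapsed time as others.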

You wrote that you get 5-6 fps because of the obs_delay, but I think it's actually because your agent runs its code in the main thread, and one step of your agent's computation on the remote server takes an extra ~0.1 s on top of the 0.1 s obs_delay. So you probably have to make your computation more efficient, or perhaps not call it on every time step if that's possible. You should probably also run your agent code in a different thread to get more consistent observations.

One fix that would improve the engine's behavior would be to implement the 0.1-second delay as a timer in a separate thread. That's not a perfect solution either, though: although the simulator sends images at 40 fps, they don't arrive at the RacingEnv at intervals of exactly 0.025 s. So a timer firing every 0.1 seconds to grab the latest image from the simulator would sometimes get the third image back (0.075 s) and sometimes the fifth image back (0.125 s). A better fix would be to wait for every fourth image instead of setting a timer at 0.1 s.
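The frame-count pacing idea can be sketched with a condition variable (my own construction, not the engine's actual receiver code): the image-receiver thread bumps a counter for each arriving frame, and the environment blocks until exactly N new frames have arrived rather than sleeping for a wall-clock 0.1 s. `FramePacer`, `on_frame`, and `wait_next` are all hypothetical names:

```python
import threading
import time

class FramePacer:
    def __init__(self, every_n=4):
        self.every_n = every_n
        self._count = 0
        self._cond = threading.Condition()

    def on_frame(self):
        # Called by the image-receiver thread for each arriving frame.
        with self._cond:
            self._count += 1
            self._cond.notify_all()

    def wait_next(self):
        # Block until every_n more frames have arrived, then return the count.
        with self._cond:
            target = self._count + self.every_n
            while self._count < target:
                self._cond.wait()
            return self._count

pacer = FramePacer(every_n=4)

def fake_simulator():
    for _ in range(16):
        time.sleep(0.025)   # frames arrive at roughly 40 fps
        pacer.on_frame()

threading.Thread(target=fake_simulator, daemon=True).start()
for _ in range(3):
    print(pacer.wait_next())  # one observation per fourth frame
```

With this scheme the interval between observations tracks the simulator's actual frame stream, so jitter in frame arrival no longer changes which frame gets sampled.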

I looked at changing this in the engine code, but there is some error handling in the image-receiver code that suggests some images from the simulator sometimes don't make it through. I'm not sure whether that is really the case, but it would certainly complicate matters.
