KeyError: 'success_rate'

Hello,

I got this error message when submitting: KeyError: 'success_rate'

What does this mean? The model passed local evaluation.

Thanks

4 Likes

Hi, I have the same problem. It’s probably a bug, and it seems to happen when your agent finishes the race completely. Hope somebody can help.

3 Likes

Thanks for confirming. Yeah, I suspect it’s a bug too. My agent also finishes the race completely in the local environment.

@jyotish Do you mind taking a look? Thanks.

2 Likes

@denis9 @boliu0 Thanks for reporting this. Can you check if this fixes the issue?

If this commit doesn’t fix the issue, it would help us pinpoint the problem if you can share the traceback for when this exception is raised.

1 Like

I checked; the problem still exists.

1 Like

Hi @jyotish

Just to be clear, we don’t see this error when running evaluation locally via python rollout.py; local evaluation finishes successfully. We only get the KeyError: 'success_rate' when submitting, and I don’t see a traceback in the submission error message.

The commit above was on evaluator.py, which is part of the local evaluation. I think the bug is probably in the submission evaluation, i.e. the code that differs between local and leaderboard evaluation, which we cannot see.

2 Likes

@jyotish I don’t think ignoring the absence of the key is going to do any good.

Let’s say the success rate is meant to be 100%, which is the case for my agent and shows in my own logs. If the key is missing and silently replaced with 0, that run is falsely reported as a complete failure.

This issue is likely the reason not a single vehicle completes the track 100% on the leaderboard.

I’m currently trying to figure out what the issue could be but I’m afraid I don’t have access to the files responsible for this.
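To illustrate the masking I mean (a minimal sketch with hypothetical metric names, not the actual evaluator code): defaulting a missing 'success_rate' key to 0 silently turns a fully successful run into an apparent failure, whereas failing loudly would surface the missing registration immediately.

```python
# Hypothetical metrics dict: 'success_rate' was never registered,
# even though the agent completed the race.
metrics = {"lap_time": 81.4}

# Masks the bug: a 100%-success run is reported as 0.0
masked = metrics.get("success_rate", 0.0)
print(masked)  # 0.0

# Fails loudly instead, so the missing registration is noticed
try:
    metrics["success_rate"]
except KeyError as err:
    print(f"missing metric: {err}")
```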

2 Likes

+1. We are having the same issue; it seems to be tied to getting a 100.0 success rate.

2 Likes

Hi @jyotish,

Could you please have a look at this problem? This error occurred today for my submission, and the agent likely completed the entire course.

2022-02-04 07:46:05.823 | INFO | main:run_evaluation:81 - Starting evaluation on Thruxton racetrack
2022-02-04 07:46:09.866 | INFO | aicrowd_gym.clients.base_oracle_client:register_agent:210 - Registering agent with oracle…
2022-02-04 07:46:09.868 | SUCCESS | aicrowd_gym.clients.base_oracle_client:register_agent:226 - Registered agent with oracle
/home/miniconda/lib/python3.9/site-packages/numpy/core/fromnumeric.py:3440: RuntimeWarning: Mean of empty slice.
return _methods._mean(a, axis=axis, dtype=dtype,
/home/miniconda/lib/python3.9/site-packages/numpy/core/_methods.py:189: RuntimeWarning: invalid value encountered in double_scalars
ret = ret.dtype.type(ret / rcount)
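For what it’s worth, those two RuntimeWarnings are exactly what NumPy emits when np.mean is called on an empty array (it returns nan), which would fit the theory that no per-lap values were registered before averaging. A minimal reproduction:

```python
import warnings
import numpy as np

empty = np.array([])  # e.g. an empty list of per-lap values

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = np.mean(empty)  # emits "Mean of empty slice" and returns nan

print(np.isnan(result))  # True
print(any("Mean of empty slice" in str(w.message) for w in caught))  # True
```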

2 Likes

Hello all!

The way we register lap-wise metrics had a bug that caused the evaluator to skip registering a few metrics if the lap was completed by the end of the first episode. This is fixed now, and all the affected submissions have been re-evaluated. Please let us know if you are still facing this issue or if any of your submissions did not get re-evaluated.
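Roughly, the failure mode looked like this (a simplified sketch with hypothetical names, not the actual evaluator code): metrics were only flushed on the "lap still in progress" path, so finishing within the first episode skipped them.

```python
def finish_episode_buggy(lap_completed, lap_metrics, registered):
    # Bug: lap-wise metrics were only flushed when the lap carried over,
    # so a lap completed by the end of the first episode never got
    # registered -- hence the missing 'success_rate' downstream.
    if not lap_completed:
        registered.update(lap_metrics)
    return registered

def finish_episode_fixed(lap_completed, lap_metrics, registered):
    # Fix: always register lap-wise metrics at episode end.
    registered.update(lap_metrics)
    return registered
```

Calling the buggy version with lap_completed=True leaves the registered metrics empty, which is what produced the KeyError later in the pipeline.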

1 Like