PyTorch fails during evaluation

I have updated to the latest PyTorch code (added the aicrowd_helpers.submit() call), but it still fails during evaluation.

The error message is as follows:
“Whoops ! Something went wrong with the evaluation. Please tag aicrowd-bot on this issue to provide you the relevant logs.”

@Jie-Qiao: That is because you haven't pulled in the latest changes in the starter kit, sorry about that.
You can have a look at the latest two commits here:

which add the necessary aicrowd_helpers.submit() calls at the end of training.

For others facing the same problem: if the evaluation fails after training completes, please ensure that the aicrowd_helpers.submit() call is present at the end of training, as it is what triggers the final evaluation of the dumped representations.
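As a minimal sketch of what this looks like, here is a training entry point that ends with the required call. The aicrowd_helpers module actually ships with the starter kit; it is stubbed out below only so the sketch runs standalone, and the register_progress hook is a hypothetical helper for illustration:

```python
# Stub of the starter kit's aicrowd_helpers module, for illustration only.
class aicrowd_helpers:
    submitted = False

    @staticmethod
    def register_progress(progress):
        # Hypothetical progress hook.
        print(f"training progress: {progress:.2f}")

    @staticmethod
    def submit():
        # Signals the evaluator that the representations have been
        # dumped and the final evaluation can begin.
        aicrowd_helpers.submitted = True


def train(num_epochs=3):
    for epoch in range(num_epochs):
        # ... one epoch of representation learning ...
        aicrowd_helpers.register_progress((epoch + 1) / num_epochs)
    # Without this final call, training finishes cleanly but the
    # evaluation of the dumped representations is never triggered,
    # and the submission fails.
    aicrowd_helpers.submit()


train()
```

The key point is simply that submit() is the last thing your training script does.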

Did you ever find out why this problem is occurring?
My evaluations are still failing and everything seems to be in place.

I have tagged @mohanty to give me access to the debug logs to figure out the reason; no answer yet after two days.

@amirabdi: I see you have a successful submission here: ?
Are you still facing issues making the submission?

And sorry for the delay in the response. We are putting a few internal processes in place that will ensure you get much faster feedback from the evaluation.

I still have issues.
That successful submission was just me submitting the base to make sure the backend is actually functional.

I guess some debug logs would help; at this point, I have no idea what the problem could be without some feedback from your side.

As a side note, it would be nice if failed submissions were not counted towards the maximum number of daily allowed submissions.

Thank you for the response.

@amirabdi: I just pasted the logs from your multiple failed submissions into the corresponding issues. It seems you are either not including the dependencies correctly (like tqdm), or you are not using the environment variables to pick up the correct dataset name, etc.
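For reference, the second failure mode can be sketched as follows. The variable name AICROWD_DATASET_NAME and the fallback value are assumptions following the starter-kit convention; check your kit for the exact names:

```python
import os

# Assumed convention: the evaluator tells your code which dataset to
# use via an environment variable, rather than a hard-coded name.
# "AICROWD_DATASET_NAME" and the "cars3d" fallback are illustrative.
DATASET_NAME = os.getenv("AICROWD_DATASET_NAME", "cars3d")


def load_dataset(name=DATASET_NAME):
    # Hard-coding a dataset name here, instead of reading the
    # environment variable, is a common cause of evaluation failures:
    # the code works locally but breaks on the evaluator.
    print(f"loading dataset: {name}")
    return name


load_dataset()
```

Similarly, any extra dependency such as tqdm has to be declared in the submission's dependency file so the evaluator installs it.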

And regarding not counting failed submissions towards the maximum daily allowed submissions: that would open the door for participants to intentionally crash their submissions after receiving the relevant feedback, increasing the number of probes they can make against the production evaluator. So excluding failed submissions from the daily limit would not be in the interest of all participants.

Also @amirabdi, if you are around now, I am hanging out on the Gitter channel here:

I will be online there for another 2-3 hours, and it might be easier to sort out the exact issue in real-time chat there.

@mohanty Thanks. Joined the conversation.