The error message is given as follows:
“Whoops ! Something went wrong with the evaluation. Please tag aicrowd-bot on this issue to provide you the relevant logs.”
This was resolved by adding the necessary aicrowd_helpers.submit() call at the end of the training.
For others facing the same problem: when the evaluation fails after training has completed, please ensure that an aicrowd_helpers.submit() call is present at the end of your training script, as it is what triggers the final evaluation of the dumped representations (see the sketch below).
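Here is a minimal sketch of what the end of a training entry point could look like, assuming the starter-kit layout where aicrowd_helpers is importable from the submission root; only the submit() call is taken from this thread, and the training/export function is a placeholder:

```python
import aicrowd_helpers


def train_and_export_representations():
    # Placeholder: train your model and dump the learned representations
    # to wherever your submission is expected to write them.
    pass


if __name__ == "__main__":
    train_and_export_representations()
    # Without this call the evaluator never picks up the dumped
    # representations, and the submission fails after training completes.
    aicrowd_helpers.submit()
```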
And sorry for the delay in responding. We are putting a few internal processes in place that will ensure you get much faster feedback from the evaluation.
@amirabdi: I just pasted the logs on the multiple failed issues. It seems you are either not including your dependencies correctly (e.g. tqdm), or not using the environment variables to pick up the correct dataset name, etc.
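For the second point, a hypothetical sketch of reading the dataset name from the environment instead of hard-coding it is below; the variable name AICROWD_DATASET_NAME and the local fallback are assumptions based on common starter-kit conventions, not something confirmed in this thread. Missing packages such as tqdm would likewise need to be declared in your submission's dependency file (typically requirements.txt) rather than assumed to be pre-installed on the evaluator.

```python
import os

# Assumed variable name; check your starter kit for the exact one the evaluator sets.
dataset_name = os.environ.get("AICROWD_DATASET_NAME", "my_local_dataset")  # fallback for local runs


def load_dataset(name: str):
    # Placeholder: load whichever dataset the evaluator selected via the
    # environment variable, instead of hard-coding a dataset name.
    ...


print(f"Training on dataset: {dataset_name}")
data = load_dataset(dataset_name)
```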
Regarding the suggestion that failed submissions should not count towards the maximum number of daily submissions: that would open up the possibility of participants intentionally crashing their submissions after receiving the relevant feedback, in order to increase the number of probes they can make against the production evaluator. So excluding failed submissions from the daily limit would not be in the interest of all participants.