Thank you for organizing the competition. I have two questions about selecting submission IDs.
If a team does not fill out the form, will their top-2 submissions from each track be automatically selected and rerun with different random seeds?
Additionally, for the two selected submissions, will the final score be the best score among all runs or the average across runs?
@yilun_jin, sometimes the exact same code will succeed one time and fail another time. For example, we submitted the exact same code [here] and [here]. The first succeeded and the second failed. During the re-run, what happens if code that previously succeeded fails? Will the admins run it a second time?
Also, can you tell us why the second link above failed?
When we select our final 2 submissions for each track, should we just select our best-scoring submission twice in case it fails the first time it is re-run?
Hi @yilun_jin ,
Thank you for hosting the competition. I have two questions about selecting my final submissions.
Among the submissions marked as “Evaluation Succeeded,” some do not display a score. This information is necessary for selecting the final submission. Is it possible to check these scores? Examples are here and here, among others.
Sometimes, even with the same code, some submissions failed with “Evaluation timed out.” I believe some of these might succeed if submitted again. Is it possible to select one of these as one of the two final submissions?
@Chris_Deotte A valid submission may fail due to network connection issues (which is what happened to the second link). This is a problem we have encountered since the beginning but have not been able to completely address. We apologize for that.
During re-evaluation, we will make sure that all submissions run through and receive scores, so feel free to select two different submissions.
I manually checked the logs to find the final scores. They have been updated on your issue page.
It is possible to select them, but since you have not seen their final scores, you would be doing so at some risk. Apologies for the occasional network issues.
Thank you very much.
Regarding the answer to question 2, I understand. Thank you for your detailed explanation.
Concerning question 1, I’d like to ask some additional questions.
I can now see the scores. Thank you very much! However, one score looks implausible: the Multiple-Choice Score was close to 0.0, which, considering the number of questions, could indicate an error in the evaluation. Or does it mean that none of the answers were correct? I'd like to request confirmation on this. Such a case is here.
If possible, there are three more submissions with the same issue that have not been evaluated; they are here, here, and here. Would it be possible for you to evaluate these as well?
Hello @wakawakawakawaka ,
Regarding your low multiple-choice score, I have replied with a log snippet in the issue. Basically, your output did not conform to our parser, resulting in a very low multiple-choice score. We don't believe there is any error in our evaluation.
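For anyone wondering how a format mismatch can drive a multiple-choice score toward zero: the organizers' parser is not public, so the snippet below is only a hypothetical sketch of a strict answer extractor (the regex pattern, function names, and scoring are assumptions, not the actual evaluation code). The point is that any output that does not match the expected format is treated as unparseable and therefore scored as wrong.

```python
from __future__ import annotations
import re

# Hypothetical pattern: expects a line like "Answer: B" and nothing else on it.
# This is NOT the competition's real parser, just an illustration.
CHOICE_PATTERN = re.compile(r"^Answer:\s*([A-D])\s*$", re.MULTILINE)

def extract_choice(model_output: str) -> str | None:
    """Return the predicted letter, or None if the output does not conform."""
    match = CHOICE_PATTERN.search(model_output)
    return match.group(1) if match else None

def multiple_choice_score(outputs: list[str], gold: list[str]) -> float:
    """Unparseable outputs get no credit, so a non-conforming format scores near 0."""
    correct = sum(extract_choice(out) == ans for out, ans in zip(outputs, gold))
    return correct / len(gold)

# A free-form answer such as "I think the best option is (B) because ..."
# would not match CHOICE_PATTERN and would receive no credit, even if correct.
```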
Regarding your second question, I will post the scores in the issues as well.
@yilun_jin Thank you for your excellent organization. I have a question: could a team be notified of which submissions are selected for the final testing stage?