Hi everyone,
Please make sure that your submissions create a prediction file with the correct row_id.
The row_id was not matched strictly until the previous evaluator version; we have now added an assert for it. As a result, some submissions have failed with the error "row_ids in the generated prediction file do not match that of the ground_truth".
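For reference, the new check is conceptually something like the following. This is a minimal sketch, not the actual evaluator code; the file paths and the pandas-based comparison are assumptions:

```python
import pandas as pd

# Hypothetical paths -- the real evaluator uses its own locations.
pred = pd.read_csv("predictions.csv")
gt = pd.read_csv("ground_truth.csv")

# Strict row_id match: same values, in the same order.
assert list(pred["row_id"]) == list(gt["row_id"]), \
    "row_ids in the generated prediction file do not match that of the ground_truth"
```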
Your solution needs to output the row_id from the test data during evaluation, not hardcoded / sequential values (0, 1, 2, …). Also note that the row_ids can be different & shuffled between the data present on evaluation and in your workspace, so that submissions which just upload a predictions CSV (instead of code) fail automatically. A minimal sketch of the expected pattern is shown below.
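Concretely, a compliant solution carries the row_id column through from the test file it is given at evaluation time. This is a sketch under assumptions: the test data is a CSV with a row_id column, and the file names, target column, and placeholder model are all hypothetical:

```python
import pandas as pd

def predict(features: pd.DataFrame) -> pd.Series:
    # Placeholder model: replace with your actual inference code.
    return pd.Series(0, index=features.index)

# Read the test data provided during evaluation (path is an assumption).
test = pd.read_csv("test.csv")

# Generate predictions from the test features.
preds = predict(test.drop(columns=["row_id"]))

# Key the output on the row_id taken from the test data,
# NOT on a freshly generated 0, 1, 2, ... sequence.
out = pd.DataFrame({"row_id": test["row_id"], "target": preds})
out.to_csv("predictions.csv", index=False)
```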
We are trying to apply an automatic patch wherever possible, but this ultimately needs to be fixed in the submitted solutions. An example patch is present here.