When a submission is made, the evaluator expects 980 predictions, but test.csv contains 981 rows. Judging from the baseline, the first row appears to be dropped because `read_csv` is called without the `header=None` argument, so pandas silently treats that row as a header. Or perhaps a different row was lost? There are also various inconsistencies and what appear to be incomplete entries in the descriptions, discussion boards, etc.
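For anyone who wants to check this locally, here is a minimal sketch of the pandas behavior in question (assuming test.csv has no genuine header row, which is what the 981-vs-980 discrepancy suggests):

```python
import pandas as pd

# Default: pandas infers a header, so the first line of test.csv
# becomes the column names and disappears from the data -> 980 rows.
df_default = pd.read_csv("test.csv")
print(len(df_default))

# With header=None, every line is kept as data -> all 981 rows survive.
df_raw = pd.read_csv("test.csv", header=None)
print(len(df_raw))
```

If the two counts differ by exactly one, the missing prediction is almost certainly the first row being consumed as a header rather than some arbitrary row going missing.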
Is this a legitimate challenge or something else?