Inconsistent sample numbers

When a submission is made, the evaluator expects 980 predictions, but there are 981 rows in the test.csv dataset. Looking at the baseline, it appears that the first row is lost because the header=None argument is missing from read_csv. Or perhaps a different row was lost? Also, there are various inconsistencies and what appear to be incomplete entries in the descriptions, discussion boards, etc.
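To illustrate the suspected off-by-one, here is a minimal sketch (with made-up data, not the actual test.csv) showing how pandas treats the first line differently depending on the header argument:

```python
import io
import pandas as pd

# Hypothetical 4-line CSV whose first line happens to be a header.
csv_text = "id,feature\n0,1.5\n1,2.3\n2,0.7\n"

# Default behavior: the first line is consumed as the header,
# leaving 3 data rows.
with_header = pd.read_csv(io.StringIO(csv_text))

# header=None: every line, including the first, becomes a data row,
# giving 4 rows -- one more than above.
no_header = pd.read_csv(io.StringIO(csv_text), header=None)

print(len(with_header), len(no_header))  # 3 4
```

If test.csv has no header row, reading it with the default silently drops the first sample, which would explain a 981-vs-980 mismatch.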

Is this a legitimate challenge or something else?


Thanks for pointing this out. The first row was supposed to be the header, not a sample. I have updated the test set; the number of samples is now 980.

Yes, it is a legitimate challenge, part of our practice section.


Thank you for the clarification and the fix. Is the training set correct without a header? The example reads it without one, but I’m wondering, given the mistake in the test set.