Looks like the leaderboard ranking is based on F1 instead of logloss as communicated.
The leaderboard is based on F1 as primary and logloss as secondary score.
Can you point us to the communication you are referring to above, so we can fix/discuss it there?
In the Evaluation Criterion section.
We will get the challenge page updated after communicating with organisers, and update here when it’s done. Till then please consider “F1 as primary and logloss as secondary score”.
@yzhounvs The miscommunication has been sorted out and you were correct. The log loss is the primary score and the F1 score is secondary. The leaderboard has been fixed and the new rankings are listed accordingly.
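For anyone double-checking their position, the updated ordering can be reproduced with a small sketch. The submission data below is made up for illustration; the only assumption is the stated rule, lower log loss ranks first and higher F1 breaks ties:

```python
# Hypothetical submissions: (team, log_loss, f1).
# Primary: log loss, lower is better.
# Secondary tie-breaker: F1, higher is better.
submissions = [
    ("team_a", 0.42, 0.81),
    ("team_b", 0.39, 0.77),
    ("team_c", 0.42, 0.85),
]

# Sort ascending by log loss; negate F1 so ties resolve to the higher F1.
ranked = sorted(submissions, key=lambda s: (s[1], -s[2]))

for rank, (name, log_loss, f1) in enumerate(ranked, start=1):
    print(rank, name, log_loss, f1)
```

Here `team_b` tops the board on log loss alone, and `team_c` beats `team_a` only via the F1 tie-breaker.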
When we sat down together as a team, we realized that we are not at all sure whether it will be the logLoss of the final submission or the best logLoss of any submission. Obviously, that makes a difference for how one does submissions. Could you clarify?
Hi Bjoern - right now it is the submission with the best log loss.
Please keep in mind that in the test data we do have a holdout.
The final leaderboard will be the holdout test data plus the current test data. As it stands, this would be evaluated on your top submitted model.
Hi, @kelleni2, how is the “top submitted model” determined?
Does it mean that the final leaderboard only evaluates the best performing submission based on the current public leaderboard? Or all submissions will be used on the whole test data to identify the top score for final evaluation?
It is your submission with the best score on half of the test dataset.
We already have scores against the full dataset for all of your submissions (hidden), so all submissions will be used.
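If I'm reading the two answers above correctly, the selection works roughly as sketched below. The scores are invented; "public" stands for the visible half of the test set and "full" for the hidden full-dataset score the organisers already hold:

```python
# Hypothetical per-submission scores: public half vs. hidden full set.
# The submission with the best (lowest) public log loss is selected,
# and its hidden full-dataset score is what the final leaderboard shows.
submissions = [
    {"id": 1, "public_logloss": 0.45, "full_logloss": 0.47},
    {"id": 2, "public_logloss": 0.41, "full_logloss": 0.44},
    {"id": 3, "public_logloss": 0.43, "full_logloss": 0.42},
]

best = min(submissions, key=lambda s: s["public_logloss"])
final_score = best["full_logloss"]
print(best["id"], final_score)
```

Note that under this reading, submission 3 would actually score best on the full dataset, but submission 2 is the one selected because only the public half drives the selection.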
Can you confirm that all submissions will be considered for the final leaderboard? Or do we need to send something like a final submission?