No overall_recall in evaluation metric?

It seems the overall_recall score has been removed from the evaluation metric. Is this an error, or is overall_accuracy now the sole metric used to determine the results?
With only a few days left before the challenge ends, could you please confirm exactly which metric will be used?

Apart from that, it would be great to have a leaderboard of some kind, so we know exactly where we stand and can adjust our approach accordingly.

Yes, the recall had to be removed, as one of the participants found a bug in the evaluation script affecting this metric. You can still see its value by clicking on 'view' for a particular submission, though.
Only the overall_accuracy will be used.
I apologize for the confusion.