About the evaluation metric (bis)

Dear Participants,
As you may have noticed, and as mentioned here, here and here, we have a problem with the evaluation metric.
There is a bug in the recall metric that allows values above one, and it is possible to exploit the overall_precision metric to reach a near-perfect score.
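
To make the failure mode concrete, here is a minimal Python sketch of one way a recall above one can arise (an illustration under assumed box formats and matching rules, not our actual scoring code): if detections are never checked against already-matched ground-truth boxes, the true-positive count can exceed the number of ground truths.

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def buggy_recall(detections, ground_truths, thr=0.5):
    # Bug: ground-truth boxes are never marked as "used", so several
    # detections can all match the same box and TP can exceed len(ground_truths).
    tp = sum(1 for d in detections
             if any(iou(d, g) >= thr for g in ground_truths))
    return tp / len(ground_truths)

def fixed_recall(detections, ground_truths, thr=0.5):
    # Fix: each ground-truth box may be matched at most once.
    unmatched = list(ground_truths)
    tp = 0
    for d in detections:
        for g in unmatched:
            if iou(d, g) >= thr:
                tp += 1
                unmatched.remove(g)
                break
    return tp / len(ground_truths)

gt = [(0, 0, 10, 10)]
dets = [(0, 0, 10, 10), (1, 1, 10, 10), (0, 0, 9, 9)]
print(buggy_recall(dets, gt))  # 3.0 -- recall above one
print(fixed_recall(dets, gt))  # 1.0
```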
When the problem was first brought to our attention, and after the first exploit, we chose to keep only the first metric for consistency (a controversial decision, but in our opinion the logical one with one week left in the challenge).
The overall_precision will still be used as the primary metric. We will also compute mAP@0.5 and recall@0.5 on the final submissions, as originally planned, using a trusted script: https://github.com/rafaelpadilla/Object-Detection-Metrics
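
For reference, here is a self-contained sketch of the AP@0.5 computation in the spirit of that script (PASCAL VOC all-point interpolation). The official numbers will come from the repository above, so treat the function names and box format here as illustrative assumptions; it reuses the iou() helper from the sketch earlier in this post.

```python
import numpy as np

def average_precision(detections, ground_truths, thr=0.5):
    """AP@thr for a single class.

    detections:    list of (image_id, confidence, box)
    ground_truths: dict image_id -> list of boxes
    Boxes are (x1, y1, x2, y2); iou() is defined in the sketch above.
    """
    n_gt = sum(len(b) for b in ground_truths.values())
    matched = {img: [False] * len(b) for img, b in ground_truths.items()}
    tps = []
    # Sweep detections from most to least confident (standard PR sweep).
    for img, _, box in sorted(detections, key=lambda d: -d[1]):
        best, best_iou = -1, thr
        for i, g in enumerate(ground_truths.get(img, [])):
            v = iou(box, g)
            if v >= best_iou and not matched[img][i]:
                best, best_iou = i, v
        if best >= 0:
            matched[img][best] = True  # each GT box matches at most once
            tps.append(1)
        else:
            tps.append(0)
    tp = np.cumsum(tps)
    fp = np.cumsum([1 - t for t in tps])
    recall = tp / max(n_gt, 1)
    precision = tp / (tp + fp)
    # All-point interpolation: area under the precision-recall envelope.
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    for i in range(len(mpre) - 2, -1, -1):
        mpre[i] = max(mpre[i], mpre[i + 1])
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))
```

mAP@0.5 is then simply the mean of this per-class AP over all classes, and recall@0.5 follows the single-match rule shown in fixed_recall above.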
I know the system is gamified, but please refrain from exploiting these flaws, as such submissions will be removed.
Once again, we apologize for this less-than-ideal situation.
Regards,
Dimitri