It would help to go back to the underlying motivation:
We wanted to reduce the risk of fooling ourselves with top solutions that rely on leaked information, which would make them irrelevant for real-world decision making.
We wanted every top solution to be re-runnable by the evaluation & project team, so it can be interrogated for generalizability and so on. By design, the combination of the kubernetes cluster and git enables this.
That said, we also want the best possible solutions for the larger initiative at the end of the event - which is why we were trying to ease some of the frustrations that were blocking certain teams.
I discussed this with the team, and we would strongly encourage you to continue predicting on the original test data in the evaluation clusters rather than providing a table of solutions - especially for the final solution.
However, do what you feel you need to do as a team to come up with your optimal solution. Keep in mind, though, that the final leaderboard will change once we add in the held-out test data, and winners will need their models validated by the evaluation team - so please make it clear how one would load and interrogate your model.