In the sample submission file, puck coordinates are given for the image files present in the dataset/images/ directory. Do we have to predict puck coordinates for those images only, or can we also predict them for other frames in the videos?
Are only those images used for scoring, or can other frames also be considered in scoring?
@chittalpatel The dataset provided tells you which video and frame we score on. The sample submission may be missing a frame (I don't remember exactly), but it was something I tested the evaluator on.
I hope that helps; let me know if you have any further questions.
@jason_brumwell So, does that mean the test set consists of the video frames present in the dataset, or will other frames be used for testing as well?
@chittalpatel Thank you for the follow-up. The score is based on the frames listed; we currently don't provide a training dataset with puck locations already labeled.
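In other words, the set of scored frames can be read straight out of the sample submission. A minimal sketch of doing that, assuming the submission is a CSV keyed by video and frame (the column names and file contents below are hypothetical, not the competition's actual format):

```python
import csv
import io

# Hypothetical sample-submission contents; the real column names
# and placeholder coordinate values may differ.
SAMPLE_SUBMISSION = """\
video,frame,x,y
game_01.mp4,120,0,0
game_01.mp4,360,0,0
game_02.mp4,45,0,0
"""

def scored_frames(csv_text):
    """Return the set of (video, frame) pairs the evaluator scores."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {(row["video"], int(row["frame"])) for row in reader}

frames = scored_frames(SAMPLE_SUBMISSION)
# Predictions only need to cover these exact (video, frame) pairs.
```

A pipeline could then run the detector only on those frames rather than every frame of every video.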
@jason_brumwell If the list of test images is known in advance and hand-labeling is allowed, what prevents us from hand-labeling the test images and getting a perfect score (by training a model on those labels)? It sounds like this approach isn't prohibited by the rules, but it's not the solution you're looking for (as far as I can understand).
@u1234x1234 Hand labeling would fail the prize criteria: the solution would have to achieve a comparable score on our second dataset. I noticed I forgot to include that in the overview and the rules, and have updated them. Thank you @u1234x1234