Interestingness score from some inducers is not in the range of 0 to 1

Hi.

I am facing some issues with the provided data: the Interestingness score for some inducers is not in the range of 0 to 1. What should I do with those inducers? Should I normalize their Interestingness scores, or should I ignore them?

Waiting eagerly for your response as the submission deadline is near.

Thank you

Dear participant,

Thank you for noticing this.
It is possible for some inducers to have values outside the 0 to 1 range. Your suggestion to normalize the Interestingness scores for those inducers is the correct approach in this case.
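If it helps, here is a minimal min-max normalization sketch in Python; the variable names and the example values are only illustrative, and you should apply it per inducer on your own data:

    # Minimal min-max normalization sketch (illustrative only).
    # `scores` holds the raw Interestingness values of a single inducer.
    def normalize_scores(scores):
        lo, hi = min(scores), max(scores)
        if hi == lo:
            # All values identical: map everything to 0.0 to avoid division by zero.
            return [0.0 for _ in scores]
        return [(s - lo) / (hi - lo) for s in scores]

    raw = [-0.3, 0.2, 1.7, 0.9]   # example raw inducer scores
    print(normalize_scores(raw))  # values now fall in [0, 1]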

Best regards,
Gabi Constantin


Okay… I have created a submission and it was graded successfully, but it shows unused score 1 = 0.0 and unused score 2 = 0.0. What does that mean?

Please reply.

Yes, we have received your submission.

As we mentioned in the instructions for the submission format:
“Finally, when submitting your runs, please note that the scores (metrics) are not automatically calculated - therefore the system will display a score of 0.00.”

We will manually download your submission, calculate your scores, and update the site with your real scores. We will do this once the deadline for submissions is over, that is, after 13 May, 23:59 UTC. In the meantime, if you have other models, you can continue with more submissions until the deadline.

Thank you and best regards,
Gabi Constantin

Hi Gabi Constantin,

I am facing an issue while running the trec_eval file. Please guide me on how to run this file to get the MAP@10 metric.

Regards

Dear UECORK team,

The trec_eval file is a standard Linux executable. It should work out of the box, as it is.

You should call the executable with the following parameters from the terminal:
trec_eval -M10 <path_to_gt_file.qrels> <path_to_prediction_file>

This will print a series of metrics to the terminal. The metric you should follow is displayed on the line containing “map”. Although this line just says “map”, the -M10 switch means the value reported is actually MAP@10.
map all 0.1333
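If it helps, the snippet below is a minimal sketch (not an official tool) that calls the executable from Python and pulls out the “map” line; the binary path and file names are assumptions, so adapt them to your own setup.

    import subprocess

    # Illustrative sketch only: adjust the binary path and file names.
    # The first argument after -M10 is the ground-truth qrels file,
    # the second is your prediction (run) file.
    cmd = ["./trec_eval", "-M10", "devset_gt.qrels.txt", "my_run.trec"]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)

    for line in result.stdout.splitlines():
        fields = line.split()
        # trec_eval prints lines such as: "map   all   0.1333"
        if fields and fields[0] == "map":
            print("MAP@10 =", fields[2])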

I hope this answers your question; if not, please give me more details about the issue you are facing.

Best regards,
Gabi Constantin

Okay… So I need a Linux platform for this rather than Windows? I was getting ELF format errors.
I will try this on Linux then and will let you know if I have any other queries.

Yes, it works on Linux only.

I have another query regarding the “devset_gt.qrels.txt” file in the gt folder. What does the last column of this file depict? As far as the result submission format is concerned, we are asked to submit results in the format “video id, image id, label (0/1), level of interestingness, run_name as the last column”.
I have gone through the overview multiple times, but it is still unclear to me what this last column of the file shows.

As for measuring the performance metric: when I run the file generated by result_to_trec.py from my test results with the given command, nothing is displayed on the terminal screen. But when I run devset_gt.qrels.txt through result_to_trec.py to get the “devset_gt.qrels.txt.trec” output and use it with the given command, metric values are displayed on the terminal screen.

Please guide me in this context.

Regards.

Hi.

The final column in the gt file represents an image’s rank within a video. So, for example, for video 78 from the devset, the row with rank 1 (the highest score within the video) is for image 1441_1418-1464.jpg.

On the other hand, the last column in your submission file represents a general name you give to your submission. It can be whatever you want. So, for example, if you have two submissions (meaning two submission files), the first one can be “submission_run1.txt” and the second “submission_run2.txt”. You just have to make sure that the name is the same for all rows within a submission file.
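As a rough illustration, here is a minimal sketch of writing such a file in Python, following the column order from the instructions (video id, image id, label, level of interestingness, run name). The run name, the example rows, and the comma separator are assumptions, so adapt them to the exact format you were given.

    # Illustrative sketch only: example values, not real predictions.
    run_name = "submission_run1"

    # (video id, image id, label 0/1, level of interestingness)
    rows = [
        ("78", "1441_1418-1464.jpg", 1, 0.93),
        ("78", "1490_1466-1512.jpg", 0, 0.12),
    ]

    with open(run_name + ".txt", "w") as f:
        for video_id, image_id, label, score in rows:
            # The same run_name must appear on every row of the file.
            f.write(f"{video_id},{image_id},{label},{score},{run_name}\n")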

Regarding the third question: the devset_gt file only contains ground truth values for the devset. We do not release the ground truth values for the testset. In order to get your metrics on the testset, you have to submit your runs to us. After the submission, you will see your file showing “unused_score1” as 0.0. This means your scores are pending, and we will update them after we evaluate your file.

After the deadline is over (tomorrow, 17 May, 12:00 UTC), we will download all your submissions, compute your metrics, and update all your submissions with your final scores.

I hope this answers your questions; if not, please tell us.
Best regards,