Hi @snehananavati,
For all three tasks, I would like to see the metrics defined precisely on this page; the fractional notation currently appears to be broken.
Moreover, both MRR and BLEU are calculated per sample (row) of test_sessions, so how is the final score computed? E.g., a simple average over the samples of test_sessions, or a weighted mean?
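For concreteness, this is what I mean by the simple-average interpretation (a sketch only; `all_predictions` and `all_true_items` are placeholder names for the per-session ranked predictions and ground-truth items, not names from the dataset):

```python
import numpy as np

def reciprocal_rank(predicted_items, true_item, k=100):
    # Reciprocal rank of the ground-truth item within the top-k predictions; 0 if it is absent.
    for rank, item in enumerate(predicted_items[:k], start=1):
        if item == true_item:
            return 1.0 / rank
    return 0.0

# Placeholder inputs: one ranked prediction list and one ground-truth item per test session.
per_session_rr = [reciprocal_rank(p, t) for p, t in zip(all_predictions, all_true_items)]

# "Simple average": unweighted mean of per-session reciprocal ranks.
mrr_at_100 = float(np.mean(per_session_rr))
```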
Hi @Bruce, your team creation has been reverted as you asked.
As for compressed files for submission: parquet has a number of built-in compression options, can you use those directly? Are you seeing a major difference in size between parquet's available compression methods and other ones?
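For example, with pandas/pyarrow you can pick the codec directly when writing the file (file names below are just placeholders):

```python
import pandas as pd

df = pd.read_parquet("submission.parquet")  # placeholder file name

# Parquet's built-in codecs; zstd and brotli usually shrink the file
# noticeably compared with the default snappy.
df.to_parquet("submission_snappy.parquet", compression="snappy")
df.to_parquet("submission_zstd.parquet", compression="zstd")
df.to_parquet("submission_brotli.parquet", compression="brotli")
```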
Hello, I tried to create my own Colab notebook, but I ran into a problem when opening it and could not open it correctly.
I am confused about this. Could an administrator help me take a look at this issue? If you could also tell me how to solve it, that would be even better.
Anyone who needs their team creation reverted, please DM me and mention your registered email in the message; if the team contains more than one member, all members need to message me consenting to it.
@dipam @snehananavati Hello! Currently I can reach about MRR@100 = 0.14 on the validation set of my custom split of session_train.csv, but the result is 0 after submitting predictions for the test set in session_test_task1.csv. I would like to know how the organizers process the submitted parquet file to get the final MRR value; this would help me troubleshoot the issue.
Same question. I am also curious about the official evaluation code, as I can reach MRR@100 = 0.3 on my self-split validation set from session_train.csv but got almost 0 after submitting on the test set.
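For reference, my rough guess at what the evaluator does is sketched below; the column names `next_item_prediction` and `next_item` and the file names are assumptions on my part, not the official schema. If the scorer matches rows by position, a re-sorted or misaligned submission would also explain a near-zero score.

```python
import pandas as pd

# Assumed column names, not the official schema.
sub = pd.read_parquet("submission_task1.parquet")  # one ranked list of item IDs per test session
gt = pd.read_csv("ground_truth_task1.csv")         # hidden labels held by the organizers

def session_rr(pred_items, true_item, k=100):
    # Reciprocal rank of the true next item within the top-k predictions; 0 if it is missing.
    pred_items = list(pred_items)[:k]
    return 1.0 / (pred_items.index(true_item) + 1) if true_item in pred_items else 0.0

# If rows are matched by position, any reordering of the submission relative to
# the test sessions would drive the score toward 0 even for a good model.
mrr_at_100 = sum(
    session_rr(p, t) for p, t in zip(sub["next_item_prediction"], gt["next_item"])
) / len(gt)
print(f"MRR@100 = {mrr_at_100:.4f}")
```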