Dear Participants,
The leaderboards for all 3 tasks are seriously heating up! As we gear up for code-based submissions, we're excited to share that the baselines for all 3 tasks are now published.
Get Started With The Baselines Here
Key Details
- For Task 1, we fine-tuned 3 models, one for each `query_locale`. For the `us` locale we fine-tuned MS MARCO Cross-Encoders; for the `es` and `jp` locales, multilingual MPNet. We used the query and the title of the product as input for these models.
- For Task 2, we trained a Multilayer Perceptron (MLP) classifier whose input is the concatenation of the representations produced by BERT multilingual base for the query and the title of the product.
- For Task 3, we followed the same approach as in Task 2.
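The Task 2/3 baseline above can be sketched in a few lines. This is a minimal NumPy illustration of the idea (concatenate the query and title representations, then pass them through an MLP), not the actual baseline code: the 768-dim embeddings here are random stand-ins for BERT multilingual base outputs, and the hidden size and the 4-way output are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for 768-dim representations; in the real baseline these would
# come from encoding the query and the product title with BERT multilingual base.
query_emb = rng.standard_normal((4, 768))   # batch of 4 queries
title_emb = rng.standard_normal((4, 768))   # the matching product titles

# Concatenate the two representations -> MLP input of size 1536.
x = np.concatenate([query_emb, title_emb], axis=1)

# A minimal one-hidden-layer MLP forward pass (hypothetical 4-class output).
W1 = rng.standard_normal((1536, 256)) * 0.02
b1 = np.zeros(256)
W2 = rng.standard_normal((256, 4)) * 0.02
b2 = np.zeros(4)

h = np.maximum(0, x @ W1 + b1)                                       # ReLU
logits = h @ W2 + b2
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)   # softmax

pred = probs.argmax(axis=1)   # predicted class per (query, title) pair
print(x.shape, probs.shape)
```

In practice the MLP would be trained with cross-entropy on the labeled pairs; the sketch only shows the input construction and forward pass.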
The following table shows the baseline results obtained through the different public tests of the three tasks.
| Task | Metric | Score |
|---|---|---|
| 1 | nDCG | 0.850 |
| 2 | Micro F1 | 0.655 |
| 3 | Micro F1 | 0.780 |
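For reference, here is a small self-contained sketch of how the two metrics in the table are computed. This is an illustrative implementation, not the official evaluation script; the example relevance values and labels are made up.

```python
import numpy as np

def dcg(relevances):
    # DCG = sum_i rel_i / log2(i + 1), with positions i starting at 1.
    relevances = np.asarray(relevances, dtype=float)
    ranks = np.arange(1, len(relevances) + 1)
    return float(np.sum(relevances / np.log2(ranks + 1)))

def ndcg(relevances):
    # nDCG normalizes DCG by the DCG of the ideal (sorted) ordering.
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

def micro_f1(y_true, y_pred):
    # For single-label multiclass predictions, micro-averaged F1
    # reduces to plain accuracy.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float((y_true == y_pred).mean())

# Toy example: relevance grades for one ranked result list, and
# predicted vs. true class labels for four (query, product) pairs.
print(ndcg([3, 2, 3, 0, 1]))
print(micro_f1([0, 1, 2, 1], [0, 1, 1, 1]))
```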
Remember, you can now get creative with the baselines and share your own versions with the community - to win Drones, VR Sets, and bragging rights!
Not just that: you can share explainers, videos, and articles, or contribute to discussions - anything that will help the community!
All the best!
Team AIcrowd