Checking that the goal is unsupervised representation learning

Some labels for the demo tasks are provided, so I just want to clarify my understanding. Am I correct that these are ONLY for our own evaluation and cannot be used in any form of training to learn the embeddings we will later submit? In other words, is the goal here to learn the representations in an unsupervised way, from only the keypoints and without any additional labels?
This has been my approach so far, but I can see how fine-tuning a model on these additional tasks might help it learn more useful representations for the hidden tasks. So if we ARE allowed to do that, it'd be nice to know that's an option going forward :slight_smile:


Thanks for your question!

We want to encourage exploration of diverse representation learning methods, including unsupervised, weakly-supervised, and self-supervised learning. You are therefore allowed to use the provided demo labels, as well as any other weak/self-supervision signals you design (e.g., behavioral features or heuristics, contrastive losses, autoencoding) during training. However, manually annotating the data is not allowed, nor is trying to guess the test task labels. Also, be careful of overfitting to the demo labels!
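To make the autoencoding option concrete, here is a minimal sketch of learning frame embeddings from keypoints alone, with no labels. Everything in it is hypothetical (the shapes — 1000 frames of 12 keypoints with (x, y) coordinates flattened to 24 features — the 8-dim embedding size, and the use of a plain linear autoencoder trained by gradient descent stand in for whatever architecture you actually use):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for real keypoint data: 1000 frames,
# 12 keypoints x (x, y) coords, flattened to 24 features per frame.
X = rng.normal(size=(1000, 24))

d_in, d_emb = X.shape[1], 8  # embed each frame into 8 dimensions
W_enc = rng.normal(scale=0.1, size=(d_in, d_emb))
W_dec = rng.normal(scale=0.1, size=(d_emb, d_in))

def forward(X):
    Z = X @ W_enc      # frame embeddings (what you would submit)
    X_hat = Z @ W_dec  # reconstruction of the input keypoints
    return Z, X_hat

lr, losses = 0.01, []
for _ in range(300):
    Z, X_hat = forward(X)
    err = X_hat - X
    # Reconstruction loss: squared error summed over features,
    # averaged over frames — the only training signal, no labels.
    losses.append((err ** 2).sum(axis=1).mean())
    n = X.shape[0]
    g_dec = (2.0 / n) * (Z.T @ err)
    g_enc = (2.0 / n) * (X.T @ (err @ W_dec.T))
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

Z, _ = forward(X)  # final embeddings, shape (1000, 8)
```

The same training-loop skeleton applies if you swap the reconstruction loss for a contrastive objective over augmented keypoint pairs; the point is simply that the gradient signal comes from the keypoints themselves rather than from annotations.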


Ah great, thank you for clarifying.