Many annotation sequences start with only a single frame of 'other'. I'm guessing this is a function of the labelling methodology, but I wanted to check: 1) is this intentional, and 2) should we just set the predictions for the first frame in each sequence to 'other' to match this format?
Thank you very much for pointing this out - we have investigated and found that a 1-frame shift was being inadvertently applied to the sequence labels!
We’ve prepared a revised version of the dataset and will have it uploaded shortly. Once it’s up, we will send an email announcement so folks know to re-download the data. We are very sorry for the hiccup!
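For anyone who wants to sanity-check their local copy in the meantime, here's a minimal sketch of undoing a 1-frame label shift. The shift direction and the 'other' padding value are assumptions for illustration - please verify against the revised dataset once it's posted.

```python
import numpy as np

# Hypothetical per-frame labels for one sequence; 0 = 'other' (assumed encoding).
labels = np.array([0, 2, 2, 2, 1, 1, 0, 0])

# Undo the shift by moving every label one frame earlier and padding
# the final frame with 'other'. Direction and padding are assumptions.
corrected = np.empty_like(labels)
corrected[:-1] = labels[1:]
corrected[-1] = 0
print(corrected.tolist())  # [2, 2, 2, 1, 1, 0, 0, 0]
```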
Given the amount of jitter in manual annotations you may not notice much difference in your model's overall performance, but the fix could boost performance on particularly short bouts of behavior, e.g. in Task 3.
Thank you for being so nice and responsive - glad it's sorted out. A 1-frame difference is well within the bounds of human error when labelling anyway, but it's nice to cut down on as many sources of error as possible.