🚀 Round 2 Launched

Dear Foodies :avocado:,

We are back with a new round of the Food Recognition Benchmark 2022.

And we are excited to release the MyFoodRepo-v2.1 dataset.

The new dataset, which reflects the problem formulation of Round 2, contains:

  • 54,392 images of food items
  • 100,256 instance segmentation annotations
  • 323 food classes

Key Change :sparkles:

A key change from Round 1 (and MyFoodRepo-v2.0) is that many of the visually similar food classes have been merged into single classes (which we refer to as food sets), reducing the total number of instance segmentation classes from 498 to 323.

Prizes :gift:

The results of Round 2 also decide who claims the following prizes:

You can still take part in Round 2, even if you did not participate in Round 1.


New to the competition?


Hi @shivam, thanks for the update!
Could you confirm whether all of the images in the v2.0 dataset are included in v2.1?

Also, could you please check that the dataset files can be untarred correctly?
There are some weird points: the extension is somehow .tar (not .tar.gz as the description says), and PaxHeader files are included in the image directory.
If you have a reliable way to untar them, please let me know!

Thanks,

Hi @shivam
Do the Round 2 datasets contain all the images from Round 1, or do they need to be used separately?
Another problem: while unzipping the files, there’s an error that the v2.1 files are not found in the archive. How can we fix this?
Thanks

Hi @Camaro @sanjana_kothari,

In order to make the challenge more attractive, we have also increased the minimum number of annotations available for each class to 60 (up from 20).

Hence, a few images from Round 1 aren’t carried forward into Round 2 (namely those for the classes which didn’t make the cut-off).

For this reason, I would suggest downloading Round 2’s dataset (v2.1) rather than relying on Round 1’s dataset (v2.0).
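
If you want to double-check the new cut-off yourself, here is a minimal sketch that counts annotations per class, assuming the usual COCO-style annotation layout (the filename is a placeholder):

import json
from collections import Counter

# Placeholder path; point this at the v2.1 annotation file you downloaded.
with open("annotations.json") as f:
    coco = json.load(f)

# Count instance annotations per category and print them, rarest first.
counts = Counter(ann["category_id"] for ann in coco["annotations"])
id_to_name = {cat["id"]: cat["name"] for cat in coco["categories"]}
for cat_id, n in sorted(counts.items(), key=lambda kv: kv[1]):
    print(id_to_name[cat_id], n)

Every class should now show at least 60 annotations.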


Yes, v2.1 contains all the required images and you don’t need to download Round 1’s images (v2.0).


I have re-uploaded the dataset files, which should resolve any issues faced while untarring/unzipping. :smiley:
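
If you still have the older archive, extracting it while skipping the PaxHeader entries should also work. A minimal sketch in Python (the archive filename is a placeholder):

import tarfile

# The old archive seems to be a plain .tar despite the description,
# so open it with mode "r:" (uncompressed).
with tarfile.open("myfoodrepo-v2.1.tar", mode="r:") as tar:
    for member in tar.getmembers():
        # Skip the PaxHeader metadata entries that leak into the image directory.
        if "PaxHeader" in member.name:
            continue
        tar.extract(member, path="data")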


Update:

We have also added a mapping from the old category IDs to the new ones (plus a utility function), so you can make submissions with your Round 1 model immediately as well. :partying_face:
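
For illustration, applying such a mapping to Round 1 predictions could look something like this (a minimal sketch; the mapping filename and its layout are assumptions, not the released utility):

import json

# Placeholder file holding {"old_id": new_id, ...}; use the released mapping instead.
with open("category_id_mapping.json") as f:
    old_to_new = {int(k): v for k, v in json.load(f).items()}

def remap_predictions(predictions):
    # Rewrite each prediction's category_id from its v2.0 value to its v2.1 value.
    for pred in predictions:
        pred["category_id"] = old_to_new[pred["category_id"]]
    return predictions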

Hi, @shivam! I submitted my model from the 1st round without any problem (#176575), but after changing the weights (the model architecture is the same) I got two different errors (#176590, #176594). Could you please check my submissions and provide more information about these issues?

@shivam, we have re-run the same submissions: one succeeded on the second run, the other on the third. So we suspect something may be wrong with the submission system?
But we still could not submit 176725, could you please check this one?
Thank you in advance!

Hi @Mykola_Lavreniuk & @gotsulyak,

tl;dr
I assume you are using multiple GPUs on your side for the training phase.
A checkpoint saved from cuda:1 (etc.) cannot be loaded on a machine that only exposes cuda:0.


description

The error your submission has:

RuntimeError: Attempting to deserialize object on CUDA device 2 but torch.cuda.device_count() is 1. Please use torch.load with map_location to map your storages to an existing device.

The checkpoints that were mapped to the correct CUDA device (cuda:0) ran successfully, while the ones with storages mapped to other devices (cuda:1, etc.) are having problems.

This shouldn’t have been an issue, given MMDetection fixed it in Oct 2021 via #6405, with the fix released from v2.18.0 onwards. From the above pull request:

In another case, if we have a model saved in cuda:1 and we want to load it on a single-GPU machine, this may raise a runtime error because cuda:1 doesn’t exist.

BUT your submission uses the mmdet bundled in your repository, which is v2.12.0 and hence doesn’t have the fix.


solution

You can either upgrade the mmdet folder to a recent version or, as a quick fix, port the following patch into your mmdet/apis/inference.py.

map_loc = 'cpu' if device == 'cpu' else None
checkpoint = load_checkpoint(model, checkpoint, map_location=map_loc)

to

checkpoint = load_checkpoint(model, checkpoint, map_location='cpu')
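
For anyone hitting the same error outside mmdet, the same idea applies with plain torch.load. A minimal sketch (the checkpoint path and the tiny stand-in model are placeholders):

import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # stand-in for your actual detector

# Map every storage to CPU first, regardless of which GPU it was saved from,
# then move the model to whatever device this machine actually has.
state = torch.load("checkpoint.pth", map_location="cpu")
model.load_state_dict(state)
model.to("cuda:0" if torch.cuda.is_available() else "cpu")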

Side note: given your models are comparatively large, it would be useful to delete unused model files to reduce failures in the Docker build stage (or at least to speed up the Docker builds).


All the best with your submissions! :raised_hands:


Thank you very much for very fast and clear response!

Hi, @shivam. We have managed to build a pipeline to submit normally for the 2nd round, and it worked on several weights for a few days. However, today the 1st submission worked, but for the next two it looks like something went wrong with the system (after 5 hours of waiting, one submission gave a timeout error and the other is still running). I have seen a similar case today in your submission as well…
Could you please check them and, if possible, re-run our submissions
176980 and 176979?

Hi @Mykola_Lavreniuk, yes, I noticed that on our side and have re-run them already. The timeout seems to be unrelated to your code.

@shivam , Thank you for quick response!

@Mykola_Lavreniuk Thanks for your patience; we were rolling out some stability fixes yesterday, which is why they failed.
The pipeline should be back to normal going forward.
