Instructions, EDA and baseline for Food Recognition Challenge

We are releasing a notebook with data analysis on the Food Recognition Dataset, followed by a short tutorial on training with Keras and PyTorch, so you can jump straight into the challenge.
https://colab.research.google.com/drive/1A5p9GX5X3n6OMtLjfhnH6Oeq13tWNtFO#scrollTo=ok54AWT_VoWV

Along with the notebook, we are also releasing starter code in both Keras (using Matterport Mask R-CNN) and PyTorch (using MMDetection). The starter code also includes the submission format required to make a successful submission to AIcrowd.

mmdetection (PyTorch): https://gitlab.aicrowd.com/nikhil_rayaprolu/food-pytorch-baseline
matterport-maskrcnn (Keras / TensorFlow): https://gitlab.aicrowd.com/nikhil_rayaprolu/food-recognition
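As a quick taste of what the notebook's EDA covers, here is a minimal sketch for loading the annotations with pycocotools. The path data/train/annotations.json and the assumption that the annotations are in COCO format are mine; adjust to wherever you extracted the dataset.

# Minimal EDA sketch; the annotation path is an assumption, adjust to your local layout.
from pycocotools.coco import COCO

coco = COCO("data/train/annotations.json")
print("images:", len(coco.getImgIds()))
print("categories:", len(coco.getCatIds()))
cats = coco.loadCats(coco.getCatIds())
print("example classes:", [c["name"] for c in cats[:5]])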


Hey @nikhil_rayaprolu,
I tried to clone your repo and submit. The Docker image fails to build, throwing the following error:

    ERROR: Command errored out with exit status 1:
     command: /opt/conda/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/mmdetection/setup.py'"'"'; __file__='"'"'/mmdetection/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info
         cwd: /mmdetection/
    Complete output (8 lines):
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/mmdetection/setup.py", line 171, in <module>
        sources=['src/compiling_info.cpp']),
      File "/mmdetection/setup.py", line 101, in make_cuda_ext
        raise EnvironmentError('CUDA is required to compile MMDetection!')
    OSError: CUDA is required to compile MMDetection!
    No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
    ----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
Removing intermediate container 33d43e41331b
The command '/bin/sh -c pip install --no-cache-dir -e .' returned a non-zero code: 1
Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/aicrowd-sourcerer/lib/python3.6/site-packages/repo2docker/__main__.py", line 354, in main
    r2d.start()
  File "/home/ubuntu/anaconda3/envs/aicrowd-sourcerer/lib/python3.6/site-packages/repo2docker/app.py", line 714, in start
    self.build()
  File "/home/ubuntu/anaconda3/envs/aicrowd-sourcerer/lib/python3.6/site-packages/repo2docker/app.py", line 700, in build
    raise docker.errors.BuildError(l["error"], build_log="")
docker.errors.BuildError: The command '/bin/sh -c pip install --no-cache-dir -e .' returned a non-zero code: 1

What is the issue?

Hi @shraddhaamohan,

Thanks for notifying us about it. The Dockerfile for the baseline depended on the master branch of the https://github.com/open-mmlab/mmdetection repository, which is broken right now. We have updated the baseline repository to point to a stable release version.

When trying to run the model for inference, I get the following error:

Traceback (most recent call last):
  File "mmdetection/tools/test.py", line 284, in <module>
    main()
  File "mmdetection/tools/test.py", line 233, in main
    checkpoint = load_checkpoint(model, args.checkpoint, map_location='cpu')
  File "/opt/conda/lib/python3.6/site-packages/mmcv/runner/checkpoint.py", line 172, in load_checkpoint
    checkpoint = torch.load(filename, map_location=map_location)
  File "/opt/conda/lib/python3.6/site-packages/torch/serialization.py", line 387, in load
    return _load(f, map_location, pickle_module, **pickle_load_args)
  File "/opt/conda/lib/python3.6/site-packages/torch/serialization.py", line 564, in _load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, 'v'.

The error comes from loading the weights.
I get the same error trying to use any of the other weights besides the provided 'epoch_22.pth'.

This could be the issue if you haven't used Git LFS to pull the models: https://github.com/YBIGTA/pytorch-hair-segmentation/issues/37

Hi @joao_schapke, please run git lfs clone <repo> or git lfs pull in your repository above, as Nikhil also mentioned. Do let us know how it goes and whether the problem continues.
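As an extra sanity check (not part of the baseline), you can verify whether a downloaded .pth file is still an un-pulled Git LFS pointer. A pointer is a small text file starting with "version https://git-lfs.github.com/spec/v1", which is exactly why torch.load fails with invalid load key 'v'.

# Check whether a checkpoint is actually a Git LFS pointer that was never pulled.
import os

def is_lfs_pointer(path):
    with open(path, "rb") as f:
        head = f.read(64)
    return head.startswith(b"version https://git-lfs")

ckpt = "epoch_22.pth"  # adjust to the checkpoint you are loading
print(os.path.getsize(ckpt), "bytes")
print("Still an LFS pointer, run git lfs pull" if is_lfs_pointer(ckpt)
      else "Looks like a real checkpoint")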

Thanks for the feedback, that fixed my issue.

I haven’t submitted yet. I see that in the baseline you remove the .json extension from the output path; is this necessary for the submission?

Hi @joao_schapke,

You will get an environment variable AICROWD_PREDICTIONS_OUTPUT_PATH containing the absolute path of the location where the JSON file needs to be written.

Example from the starter kit here.
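For illustration only, here is a minimal sketch of how that variable might be used; the content of predictions below is a placeholder, and the real output format is defined by the starter kit.

# Write the predictions JSON to the path provided by the evaluator.
import json
import os

# Fall back to a local file when running outside the evaluation environment.
output_path = os.getenv("AICROWD_PREDICTIONS_OUTPUT_PATH", "predictions.json")

predictions = []  # placeholder: fill with your model's outputs

with open(output_path, "w") as f:
    json.dump(predictions, f)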