Instructions, EDA and baseline for Food Recognition Challenge

We are releasing a notebook with exploratory data analysis on the Food Recognition Dataset, followed by a short tutorial on training with Keras and PyTorch. This lets you jump straight into the challenge.

Along with the notebook, we are also releasing starter code in both Keras (using Matterport's Mask R-CNN) and PyTorch (using mmdetection). These starter kits also include the submission format required to make a successful submission to AIcrowd.

mmdetection (PyTorch):
matterport-maskrcnn (Keras - TensorFlow):


Hey @nikhil_rayaprolu,
I tried to clone your repo and submit. It fails to build the Docker image and throws the following error:

    ERROR: Command errored out with exit status 1:
     command: /opt/conda/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/mmdetection/'"'"'; __file__='"'"'/mmdetection/'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);'"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info
         cwd: /mmdetection/
    Complete output (8 lines):
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/mmdetection/", line 171, in <module>
      File "/mmdetection/", line 101, in make_cuda_ext
        raise EnvironmentError('CUDA is required to compile MMDetection!')
    OSError: CUDA is required to compile MMDetection!
    No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
ERROR: Command errored out with exit status 1: python egg_info Check the logs for full command output.
Removing intermediate container 33d43e41331b
The command '/bin/sh -c pip install --no-cache-dir -e .' returned a non-zero code: 1
Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/aicrowd-sourcerer/lib/python3.6/site-packages/repo2docker/", line 354, in main
  File "/home/ubuntu/anaconda3/envs/aicrowd-sourcerer/lib/python3.6/site-packages/repo2docker/", line 714, in start
  File "/home/ubuntu/anaconda3/envs/aicrowd-sourcerer/lib/python3.6/site-packages/repo2docker/", line 700, in build
    raise docker.errors.BuildError(l["error"], build_log="")
docker.errors.BuildError: The command '/bin/sh -c pip install --no-cache-dir -e .' returned a non-zero code: 1

What is the issue?

Hi @shraddhaamohan,

Thanks for notifying us about it. The Dockerfile for the baseline depended on the repository's master branch, which is broken right now. We have updated the baseline repository to point to a stable release version.

When trying to run the model for inference, I get the following error:

Traceback (most recent call last):
  File "mmdetection/tools/", line 284, in <module>
  File "mmdetection/tools/", line 233, in main
    checkpoint = load_checkpoint(model, args.checkpoint, map_location='cpu')
  File "/opt/conda/lib/python3.6/site-packages/mmcv/runner/", line 172, in load_checkpoint
    checkpoint = torch.load(filename, map_location=map_location)
  File "/opt/conda/lib/python3.6/site-packages/torch/", line 387, in load
    return _load(f, map_location, pickle_module, **pickle_load_args)
  File "/opt/conda/lib/python3.6/site-packages/torch/", line 564, in _load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, 'v'.

The error comes from trying to load the weights.
I get the same error with every weight file except 'epoch_22.pth'. This could be an issue if you didn't use git lfs to pull the models.

Hi @joao_schapke, please use the git lfs clone <repo> / git lfs pull commands in your repository, as Nikhil also mentioned. Do let us know how it goes and whether the problem continues.
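For context on why the pickle error reads invalid load key, 'v': when Git LFS objects haven't been pulled, the .pth files on disk are small plain-text pointer files whose first line starts with "version https://git-lfs...", so torch.load chokes on the leading 'v'. A minimal sketch of a sanity check (the helper name is hypothetical, not part of the baseline) you could run before loading a checkpoint:

```python
def is_git_lfs_pointer(path):
    # Git LFS pointer files are small text files whose first bytes are
    # "version https://git-lfs...". A real PyTorch checkpoint is a
    # binary zip/pickle archive and never starts with that text.
    with open(path, "rb") as f:
        head = f.read(64)
    return head.startswith(b"version https://git-lfs")
```

If this returns True for your checkpoint, the weights were never downloaded and git lfs pull should fix it.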

Thanks for the feedback, that fixed my issue.

I haven’t submitted yet. I see that in the baseline you remove the .json extension from the output path; is this necessary for the submission?

Hi @joao_schapke,

You will get an environment variable AICROWD_PREDICTIONS_OUTPUT_PATH containing the absolute path of the location at which the JSON file needs to be written.

Example from starter kit here.
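A minimal sketch of reading that variable and writing predictions (the local fallback path and the empty predictions list are placeholders, not part of the starter kit):

```python
import json
import os

# The evaluator sets AICROWD_PREDICTIONS_OUTPUT_PATH; the fallback
# here is only for running locally outside the evaluation environment.
output_path = os.getenv("AICROWD_PREDICTIONS_OUTPUT_PATH", "predictions.json")

predictions = []  # replace with the annotation dicts produced by your model

# Write the predictions to the exact path the evaluator expects.
with open(output_path, "w") as f:
    json.dump(predictions, f)
```

The key point is to write to the path from the environment variable as-is, rather than constructing your own filename.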