Issues with submitting

I’ve been having several issues with submitting. Could someone specify the exact format required, whether RLE or polygon, and what exactly the output JSON should contain? I have been able to perform local evaluation with no problem, so I don’t know what the issue is.
After a lot of debugging I was able to get the evaluation to 0.25 processed, but irrespective of the format (RLE or polygon) I get the message “Results do not correspond to current coco set”.
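For reference, here is a minimal sketch of the kind of results list I am writing out locally (the ids, coordinates, and scores are placeholders, and the segmentation is shown in polygon format):

    import json

    # Hypothetical example of one prediction in the COCO results format.
    # image_id and category_id are meant to match ids from the challenge
    # annotation file; segmentation is a polygon [[x1, y1, x2, y2, ...]].
    predictions = [
        {
            "image_id": 10752,                    # placeholder image id
            "category_id": 1040,                  # placeholder category id
            "score": 0.83,                        # detection confidence
            "bbox": [5.0, 29.0, 466.0, 427.0],    # [x, y, width, height]
            "segmentation": [[195.0, 455.5, 194.0, 450.0, 190.0, 440.0]],
        }
    ]

    # Write the list of predictions to the expected output file.
    with open("output.json", "w") as f:
        json.dump(predictions, f)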

Any help would be appreciated.

Hello @shraddhaamohan, we will be providing an example in the next 24 hours that smooths out the submission process. It contains a baseline model and the repository structure required for a successful submission.

Here is the initial version of the baseline submission. Please go through the README for the changes required for submission, and raise any further queries here.

@nikhil_rayaprolu Hey, when I make a submission it takes too long to even start evaluating. It’s been 30 minutes and it still says “evaluation pending”.

Hi @rohitmidha23,

It is stuck right now due to GPU node provisioning.
We hit the GPU quota limits for the Food challenge, and higher limits have been requested from GCP. It will start evaluating shortly once this is resolved.

@shivam in that case would a gpu=False submission evaluate?

Hi @rohitmidha23,

Yes, gpu=False submissions were working as expected in the meantime.

The GPU issue is resolved now, and submissions with GPU are no longer stuck in the pending state.

Hi @shraddhaamohan,

We debugged your submission. Your output.json contains predictions like the following (one of them, as an example):

  {
    "image_id": 10752,
    "category_id": 1040,
    "bbox": [
      5.0,
      29.0,
      466.0,
      427.0
    ],
    "score": 0.8330578207969666,
    "area": 176875,
    "segmentation": [
      [
        195.0,
        455.5,
        194.0,
        455.5,
        193.0,
        455.5,
        192.0,
        455.5,
        [.....]

The COCO API is not loading the generated output properly; the issue is due to the bbox of size 4. Please try generating the bbox with different dimensions. Related issue on GitLab.
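As a quick local sanity check (assuming the scorer loads results with pycocotools, which is where the “Results do not correspond to current coco set” assertion comes from), you can try loading your output.json against the ground-truth annotations before submitting. The file names below are placeholders:

    from pycocotools.coco import COCO

    # Placeholder paths: substitute the annotation file shipped with the
    # challenge resources and your generated predictions.
    gt = COCO("annotations.json")

    # loadRes raises "Results do not correspond to current coco set" when a
    # prediction's image_id is not present in the ground-truth annotations,
    # so this reproduces the server-side failure locally.
    results = gt.loadRes("output.json")
    print("Loaded", len(results.getAnnIds()), "predictions")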

@shivam is the test set on the server different? When running local evaluation we got a different mAP and recall, hence the question.

Yes, the test set provided to you in the resources section has the following description:

Set of test images for local debugging (Note: these are the same ones that are provided in the validation set)

It is basically the validation set. The server runs your code on a hidden test set in a protected environment.
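If you want to reproduce the numbers you see locally, a sketch along these lines (assuming pycocotools; the file names are placeholders) evaluates your predictions against the provided validation annotations. The hidden test set on the server will naturally give different mAP and recall:

    from pycocotools.coco import COCO
    from pycocotools.cocoeval import COCOeval

    # Placeholder file names: the validation annotations from the resources
    # section and your locally generated predictions.
    gt = COCO("val_annotations.json")
    dt = gt.loadRes("output.json")

    # "segm" scores the segmentation masks; use "bbox" for box mAP instead.
    coco_eval = COCOeval(gt, dt, iouType="segm")
    coco_eval.evaluate()
    coco_eval.accumulate()
    coco_eval.summarize()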

Thanks for the reply. I was finally able to get a submission through.

@shivam I made a submission at 10:45 am IST and it still hasn’t finished evaluating. Is there any problem on the server side?

@shivam @nikhil_rayaprolu my submission has been in the “submitted” phase for more than a day now. Can you check up on it?

Or at least cancel it so I can submit other stuff?

Hey @shivam, are submissions stuck again? We haven’t been able to submit for a really long time now. It’s stuck in the “waiting_in_queue_for_evaluation” state.

Hi @shraddhaamohan, you are correct. A couple of user-submitted codes ran into errors but kept running forever (they didn’t exit), which blocked the pipeline. We will be adding a sensible overall timeout for the challenge so that this kind of blockage is taken care of automatically going forward.
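For illustration, a minimal sketch of the kind of overall timeout we have in mind (not the actual pipeline code; the command and the limit below are placeholders):

    import subprocess

    # Placeholder command and limit; the real pipeline would wrap the whole
    # submission run rather than a single script.
    try:
        subprocess.run(
            ["python", "run_submission.py"],
            timeout=3 * 60 * 60,  # kill the job after 3 hours
            check=True,
        )
    except subprocess.TimeoutExpired:
        print("Submission exceeded the overall time limit and was terminated")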

@shivam I seem to be getting an HTTPS error. Can you check?

Hi @shivam, I submitted about 5 hours ago and it’s still in the “evaluation started” state. I’ve checked locally using nvidia-docker: it generates the output.json in the right place, and I have also locally evaluated the generated JSON and got results. Could you look into this?