🚀 Code Submission Round Launched 🚀

Hi, I get an error when I make my second submission:
Describe the bug

Submission failed : The participant has no submission slots remaining for today. Please wait until 2022-07-03 08:51:17 UTC to make your next submission.

Expected behavior
A team may make only five submissions per task per 24-hour period. Challenge Rule URL.

Screenshots

1. Limited by the current submission quota, I may have only a few opportunities to test my prediction code.
2. I’m not familiar with repo2docker (especially the environment configuration), which makes me more worried about whether I can finish the competition before July 15th.

Is it possible to increase the number of submissions?

Best,
Xuange

@xuange_cui I have run into a similar problem before. After I disabled debug mode, I could submit normally.

2 Likes

@mohanty @shivam Does the 30-minute time constraint mean that we have 30 minutes to run our prediction code? My submissions on Task 2 always fail without any error message. I think this may be caused by a timeout, but the time between the failure and the log line aicrowd_evaluations.evaluator.client:register:168 - connected to evaluation server is always around 27 minutes, which is less than 30 minutes.
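In case it helps anyone narrow down timeout suspicions, here is a minimal sketch for logging elapsed wall-clock time around the prediction loop. The 30-minute figure, the predict_fn callable, and the batching are assumptions for illustration, not the official evaluator behaviour:

    import time
    import logging

    logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
    TIME_BUDGET_SECONDS = 30 * 60  # assumed 30-minute limit

    def run_with_timing(predict_fn, batches):
        """Run predict_fn over batches, logging elapsed time after each batch."""
        start = time.monotonic()
        for i, batch in enumerate(batches):
            predict_fn(batch)
            elapsed = time.monotonic() - start
            logging.info("batch %d done, elapsed %.1fs / %ds budget",
                         i, elapsed, TIME_BUDGET_SECONDS)
            if elapsed > TIME_BUDGET_SECONDS:
                logging.warning("over the assumed time budget; the evaluation may be killed")

Comparing these timestamps against the "connected to evaluation server" log line should show whether the failure really lines up with a time limit.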

Hi, could the pip package source used in the build environment be kept up to date?
Describe the bug
When I use a requirements.txt like this:

pandas==1.4.2

the build fails:

Status: QUEUED

Status: BUILDING

Status: FAILED
Build failed :frowning:
Last response:
{
  "git_uri": "git@gitlab.aicrowd.com:xuange_cui/task_2_multiclass_product_classification_starter_kit.git",
  "git_revision": "submission-v0703.09",
  "dockerfile_path": null,
  "context_path": null,
  "image_tag": "aicrowd/submission:192070",
  "mem": "14Gi",
  "cpu": "3000m",
  "base_image": null,
  "node_selector": null,
  "labels": "evaluations-api.aicrowd.com/cluster-id: 2; evaluations-api.aicrowd.com/grader-id: 68; evaluations-api.aicrowd.com/dns-name: runtime-setup; evaluations-api.aicrowd.com/expose-logs: true",
  "build_args": null,
  "cluster_id": 1,
  "id": 3708,
  "queued_at": "2022-07-03T09:03:31.405604",
  "started_at": "2022-07-03T09:03:43.673873",
  "built_at": null,
  "pushed_at": null,
  "cancelled_at": null,
  "failed_at": "2022-07-03T09:07:34.352399",
  "status": "FAILED",
  "build_metadata": "{\"base_image\": \"nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04\"}"
}

But when I use a requirements.txt like this:

pandas==1.1.5

the debug log shows that it completes successfully:

Status: QUEUED
Status: BUILDING

Status: PUSHED
Build succeeded
Trying to setup AIcrowd runtime.
2022-07-03 09:01:35.781 | INFO | aicrowd_evaluations.evaluator.client:register:153 - registering client with evaluation server
2022-07-03 09:01:35.785 | SUCCESS | aicrowd_evaluations.evaluator.client:register:168 - connected to evaluation server
Phase Key : public_test_phase
Phase Key : public_test_phase
Progress ---- 3.609534947517362e-06
Progress ---- 0.7759706039473874
Writing Task-2 Predictions to : /shared/task2-public-64b9d737-1ec6-4ff6-843f-7bdcf17042b1.csv
Progress ---- 1


pandas 1.4.x only supports Python 3.8+. Your problem is probably that the Python version in the Docker image is lower than 3.8.
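To confirm this before spending another submission slot, a small sanity check you can run locally or early in your entrypoint might look like the sketch below. The pandas pin is just the one from the post above, and the suggested fallback pin is an assumption, not official guidance:

    import sys

    # pandas 1.4.x requires Python >= 3.8; fail fast with a clear message
    # instead of a cryptic build or runtime error inside the evaluation image.
    if sys.version_info < (3, 8):
        raise RuntimeError(
            f"Python {sys.version.split()[0]} detected; pandas==1.4.2 needs Python 3.8+. "
            "Consider pinning pandas<=1.3.x or using a base image with a newer Python."
        )

    import pandas as pd
    print("pandas", pd.__version__, "on Python", sys.version.split()[0])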

1 Like

Hi @mohanty @shivam, I was trying to make a submission using my own Dockerfile. I built the image and ran my test code successfully on my local machine to make sure the Dockerfile has no problems, but after I submitted (with debug mode on), it failed. Here’s the full log:

=================
{
  "git_uri": "git@gitlab.aicrowd.com:TransiEnt/task_1_query-product_ranking_code_starter_kit.git",
  "git_revision": "submission-v0_11_2_1",
  "dockerfile_path": null,
  "context_path": null,
  "image_tag": "aicrowd/submission:192122",
  "mem": "14Gi",
  "cpu": "3000m",
  "base_image": null,
  "node_selector": null,
  "labels": "evaluations-api.aicrowd.com/cluster-id: 2; evaluations-api.aicrowd.com/grader-id: 70; evaluations-api.aicrowd.com/dns-name: runtime-setup; evaluations-api.aicrowd.com/expose-logs: true",
  "build_args": null,
  "cluster_id": 1,
  "id": 3739,
  "queued_at": "2022-07-03T20:55:03.753642",
  "started_at": null,
  "built_at": null,
  "pushed_at": null,
  "cancelled_at": null,
  "failed_at": null,
  "status": "QUEUED",
  "build_metadata": null
}
Status: QUEUED

Status: BUILDING

Status: PUSHED
Build succeeded
Trying to setup AIcrowd runtime.
Traceback (most recent call last):
  File "<string>", line 2, in <module>
  File "/tmp/evaluator_libs/shared/run_wrapper.py", line 8, in <module>
    from starter_kit.run import Task1Predictor as Predictor
ModuleNotFoundError: No module named 'starter_kit'

So the Dockerfile itself didn’t fail ("Build succeeded"); it was one of the post-build steps that failed. I wonder how to tackle this issue. If this is inevitable when using a custom Dockerfile in order to use a custom base image (instead of aicrowd-repo2docker, which does not seem to allow specifying a custom base image), then what is the correct way to use a custom base image?

My repository looks like this.
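For what it’s worth, the traceback shows the wrapper doing from starter_kit.run import Task1Predictor, so the evaluator appears to expect the repository to be importable as a starter_kit package. A hedged diagnostic sketch, inferred from the traceback only (not from official docs), that mirrors the wrapper’s import so you can check your layout locally:

    # Assumed layout, inferred from the import in run_wrapper.py:
    #
    #   <repo root>/
    #     starter_kit/          <- package name the wrapper imports
    #       __init__.py         <- must exist so Python treats it as a package
    #       run.py              <- must define Task1Predictor
    #     requirements.txt or Dockerfile
    #
    # Run this from the directory that contains starter_kit/; it raises
    # ModuleNotFoundError if the layout is wrong, which is exactly the
    # error seen in the evaluation log above.
    from starter_kit.run import Task1Predictor as Predictor
    print("import OK:", Predictor.__name__)

Note that, per the organizers’ reply further down, the actual cause here turned out to be on the platform side (Git LFS files not being pulled), so this check only rules layout problems in or out.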

I am trying to use environment.yml to build my runtime environment. My approach is to generate it locally and then upload the yml file to my project, but it failed. I wonder whether I need a Dockerfile to customize the build and copy the file into the image?

1 Like

@mohanty I got an error when I submitted my code, and I’m going mad!

@LYZD-fintech : We are investigating this. We will get back to you soon on this.

The updated run.py introduced a new argument, product_catalogue_path. According to your error message, that could be why your submission failed this time.

@TransiEnt : No, we added backward compatibility for submissions which do not include the product_catalogue_path parameter. It seems to be an issue with Git LFS not properly pulling the necessary files into the submission containers. We are pushing a fix for that as we speak, and will post an update here as soon as it’s done.
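For anyone who wants to adopt the new argument anyway, a hedged sketch of what a backward-compatible predictor signature could look like. The class name comes from the traceback above; the other parameter names are placeholders for illustration, not the official interface:

    from typing import Optional

    class Task1Predictor:
        # Placeholder signature: only product_catalogue_path is taken from the
        # discussion above; the other parameter names are assumptions.
        def predict(self,
                    test_set_path: str,
                    predictions_output_path: str,
                    product_catalogue_path: Optional[str] = None) -> None:
            if product_catalogue_path is not None:
                # Hypothetical: use the catalogue only when the evaluator passes it,
                # so older callers that omit the argument keep working.
                print(f"using catalogue at {product_catalogue_path}")
            print(f"reading {test_set_path}, writing {predictions_output_path}")

Defaulting the new parameter to None is the simple way to stay compatible with evaluators (or local test harnesses) that call predict() without it.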

This problem is solved. Check [update] Customize Dockerfile for both phase for more details.

Hello team (@mohanty)! First of all, thanks a lot for organizing such a competition and making this kind of data available.
In view of the various problems that participants have with the code submission, would it be possible to postpone the end of the competition?

1 Like

Hi, I cannot make the Dockerfile work, so I use requirements.txt and include "git-lfs" so that pip installs git-lfs.
But I get the error below:

  File "/srv/conda/envs/notebook/lib/python3.7/site-packages/transformers/modeling_utils.py", line 467, in load_state_dict
    "You seem to have cloned a repository without having git-lfs installed. Please install "
OSError: You seem to have cloned a repository without having git-lfs installed. Please install git-lfs and run `git lfs install` followed by `git lfs pull` in the folder you cloned.

Is my error related to the issue you mentioned above?
Thanks!
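One hedged note: pip-installing a package named git-lfs may not give you the Git LFS client that git itself invokes, so LFS-tracked model weights can still be tiny pointer files inside the image, which is exactly what that transformers error complains about. A small diagnostic sketch (the checkpoint path is a placeholder) to check whether a file is still an un-pulled pointer:

    # Git LFS pointer files are small text files that start with this version
    # line; real model weights are large binaries.
    LFS_POINTER_PREFIX = b"version https://git-lfs.github.com/spec/v1"

    def is_lfs_pointer(path: str) -> bool:
        """Return True if `path` looks like an un-pulled Git LFS pointer file."""
        with open(path, "rb") as f:
            head = f.read(len(LFS_POINTER_PREFIX))
        return head == LFS_POINTER_PREFIX

    # Example with a placeholder path: check the checkpoint your model loads.
    # print(is_lfs_pointer("models/pytorch_model.bin"))

If this returns True for your checkpoint, the file was never materialized by LFS inside the container, regardless of what is installed via requirements.txt.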

@mohanty could you please give some suggestions on how to solve this issue? Thanks!

Yes, you are right. Thanks!

1 Like

OK, I will submit later.

@Erica : Can you please point to the submission hash (and corresponding issue) where you faced this problem?

Thanks,
Mohanty

Hi, @mohanty
Here is my failed submission using a Dockerfile: 8a849b958dd0879b2b9f0b2e918708aae322738b.

Here is another failure using a requirements.txt file:
submission hash: 2d7e4f3a8823501883a5229f4cd9a24a86442baf.

Thanks,