🚀 Code Submission Round Launched 🚀

Thanks for your quick reply. I would also like to know the directory structure of the data folder during evaluation. Do we need to re-download the product_catalogue?

Hi @wufanyou,

The absolute file paths are passed as parameters to the predict function
(to get rid of any confusion participants may face with directory structures).

We have data, models, etc. folders available as examples for the getting-started section and a better local development experience. You are free to upload your models and relevant files in any folder structure of your choice. The only constraint is that the entrypoint for your code during evaluations will be run.py.
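For illustration, here is a minimal predictor sketch. The class name, method signature, and the "exact" placeholder label are assumptions modeled on the starter-kit pattern; only the idea that the evaluator hands over absolute paths comes from the answer above:

```python
import csv

class Task1Predictor:
    """Hypothetical sketch: the evaluator passes absolute file paths
    into predict(), so the code never assumes a fixed data/ layout."""

    def predict(self, test_set_path, predictions_output_path):
        # Both paths are chosen by the evaluator at runtime,
        # not by this repository's folder structure.
        with open(test_set_path, newline="") as f:
            rows = list(csv.DictReader(f))
        with open(predictions_output_path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["example_id", "label"])
            writer.writeheader()
            for row in rows:
                # Placeholder constant prediction for the sketch.
                writer.writerow({"example_id": row["example_id"], "label": "exact"})
```

Because nothing here hard-codes a data directory, the same code runs locally against your own files and on the evaluator against whatever paths it passes in.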

The idea of providing the product catalogue by default sounds good. :+1:
Please let us check internally, and we will update you on the request soon.

1 Like

Hi, what is the exact size of the private dataset for each task?

1 Like

Hi, will the private test set contain a new product_catalogue, or new product_ids that we haven’t seen before?
We have done some data augmentation work based on the product_catalogue.

@mohanty
On the leaderboard I can see only one submission. Is that the current status?

Hi @amiruddin_nagri, the default leaderboard is “Code Submissions”, which started recently.

You can select “Prediction Submissions” to view the previous submissions and their relevant scores.

We are soon releasing some changes to the starter kit, which will allow you to access the product catalogue for each of the tasks. Note that the product catalogue is the same one that has already been publicly released, and the private test set does not contain any products that are not already included in the product catalogue.

1 Like

We have now updated the previous announcement with the size of the private test sets for each of the Tasks.

2 Likes

Hi @mohanty, just to confirm: git push origin master will not create a code submission, and only git push origin <tagname> will create one. Is that right?

Hi @heya5, yes, only tag pushes with the submission- prefix will create a submission.
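In other words (tag name and paths below are placeholders; the sketch uses a throwaway local bare repo as a stand-in for gitlab.aicrowd.com so the commands are runnable end-to-end):

```shell
set -e
remote=$(mktemp -d); git init -q --bare "$remote"   # stand-in for the GitLab remote
work=$(mktemp -d); cd "$work"
git init -q
git config user.email you@example.com
git config user.name you
git remote add origin "$remote"
echo '# entrypoint' > run.py
git add run.py
git commit -qm "add predictor"
git push -q origin HEAD              # branch push: no submission is created
git tag -am "my submission" submission-v1.0
git push -q origin submission-v1.0   # tag push with "submission-" prefix: triggers an evaluation
```

Only the final tag push is what the evaluator reacts to; ordinary branch pushes just update the repository.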

1 Like

Hi, I get an error when I make my second submission:
Describe the bug

Submission failed : The participant has no submission slots remaining for today. Please wait until 2022-07-03 08:51:17 UTC to make your next submission.

Expected behavior
A team may make only five submissions per task per 24-hour period. Challenge Rule URL.

Screenshots

1. Limited by the current number of submission slots, I may have only a few opportunities to test my prediction code.
2. I’m not familiar with repo2docker (especially the environment configuration), which makes me worried about whether I can finish the competition before July 15th.

Is it possible to increase the number of submissions?

Best,
Xuange

@xuange_cui I have run into a similar problem before. After I disabled debug mode, I could submit normally.

2 Likes

@mohanty @shivam Does the 30-minute time constraint mean that we have 30 minutes to run our prediction code? My submission on Task 2 always fails without any error message. I think this may be caused by a timeout, but the time between the failure and the log line aicrowd_evaluations.evaluator.client:register:168 - connected to evaluation server is always around 27 minutes, which is less than 30 minutes.

Hi, could the pip package source be kept up to date?
Describe the bug
When I used a requirements.txt like this:

pandas==1.4.2

The error is

Status: QUEUED

Status: BUILDING

Status: FAILED
Build failed :frowning:
Last response:
{
  "git_uri": "git@gitlab.aicrowd.com:xuange_cui/task_2_multiclass_product_classification_starter_kit.git",
  "git_revision": "submission-v0703.09",
  "dockerfile_path": null,
  "context_path": null,
  "image_tag": "aicrowd/submission:192070",
  "mem": "14Gi",
  "cpu": "3000m",
  "base_image": null,
  "node_selector": null,
  "labels": "evaluations-api.aicrowd.com/cluster-id: 2; evaluations-api.aicrowd.com/grader-id: 68; evaluations-api.aicrowd.com/dns-name: runtime-setup; evaluations-api.aicrowd.com/expose-logs: true",
  "build_args": null,
  "cluster_id": 1,
  "id": 3708,
  "queued_at": "2022-07-03T09:03:31.405604",
  "started_at": "2022-07-03T09:03:43.673873",
  "built_at": null,
  "pushed_at": null,
  "cancelled_at": null,
  "failed_at": "2022-07-03T09:07:34.352399",
  "status": "FAILED",
  "build_metadata": "{\"base_image\": \"nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04\"}"
}

But when I used a requirements.txt like this:

pandas==1.1.5

The debug log shows that it completed successfully:

Status: QUEUED
.
Status: BUILDING

Status: PUSHED
Build succeeded
Trying to setup AIcrowd runtime.
2022-07-03 09:01:35.781 | INFO | aicrowd_evaluations.evaluator.client:register:153 - registering client with evaluation server
2022-07-03 09:01:35.785 | SUCCESS | aicrowd_evaluations.evaluator.client:register:168 - connected to evaluation server
Phase Key : public_test_phase
Phase Key : public_test_phase
Progress ---- 3.609534947517362e-06
Progress ---- 0.7759706039473874
Writing Task-2 Predictions to : /shared/task2-public-64b9d737-1ec6-4ff6-843f-7bdcf17042b1.csv
Progress ---- 1

Screenshots

pandas 1.4.x only supports Python 3.8+. Your problem is probably that the Python version in the Docker image is lower than 3.8.
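A small helper illustrating the cutoff (the two pins mirror the requirements.txt variants above; treat this as a sanity check, not an official compatibility table):

```python
import sys

def compatible_pandas_pin(version_info=sys.version_info):
    """pandas 1.4.x requires Python >= 3.8; on older interpreters
    (e.g. the evaluation image discussed above) fall back to 1.1.5."""
    return "pandas==1.4.2" if tuple(version_info[:2]) >= (3, 8) else "pandas==1.1.5"
```

Running this inside the build environment (or just printing sys.version there) tells you which pin the image can actually install.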

1 Like

Hi @mohanty @shivam, I was trying to make a submission using my own Dockerfile. I built the image and ran my test code successfully on my local machine to make sure the Dockerfile has no problem, but after submitting (with debug mode on), it failed. Here’s the full log:

=================
{
  "git_uri": "git@gitlab.aicrowd.com:TransiEnt/task_1_query-product_ranking_code_starter_kit.git",
  "git_revision": "submission-v0_11_2_1",
  "dockerfile_path": null,
  "context_path": null,
  "image_tag": "aicrowd/submission:192122",
  "mem": "14Gi",
  "cpu": "3000m",
  "base_image": null,
  "node_selector": null,
  "labels": "evaluations-api.aicrowd.com/cluster-id: 2; evaluations-api.aicrowd.com/grader-id: 70; evaluations-api.aicrowd.com/dns-name: runtime-setup; evaluations-api.aicrowd.com/expose-logs: true",
  "build_args": null,
  "cluster_id": 1,
  "id": 3739,
  "queued_at": "2022-07-03T20:55:03.753642",
  "started_at": null,
  "built_at": null,
  "pushed_at": null,
  "cancelled_at": null,
  "failed_at": null,
  "status": "QUEUED",
  "build_metadata": null
}
Status: QUEUED

Status: BUILDING

Status: PUSHED
Build succeeded
Trying to setup AIcrowd runtime.
Traceback (most recent call last):
  File "<string>", line 2, in <module>
  File "/tmp/evaluator_libs/shared/run_wrapper.py", line 8, in <module>
    from starter_kit.run import Task1Predictor as Predictor
ModuleNotFoundError: No module named 'starter_kit'

So the Dockerfile didn’t fail (“Build succeeded”); it was one of the post-build procedures that failed. I wonder how to tackle this issue. If this is inevitable when using a custom Dockerfile in order to use a custom base image (instead of using aicrowd-repo2docker, which does not seem to allow specifying a custom base image), then what is the correct way to use a custom base image?
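One way to catch this class of failure before submitting (the module path comes from the traceback above; the wrapper's exact mechanics are an assumption) is to reproduce the evaluator's import locally from inside your built image:

```python
import importlib
import sys

def can_import(module_path="starter_kit.run"):
    """Try the same import the evaluator's wrapper performs; returns
    False (and prints the error) instead of crashing, so it can be run
    as a pre-submission check inside the container."""
    try:
        importlib.import_module(module_path)
        return True
    except ModuleNotFoundError as e:
        print("evaluator would fail with:", e, file=sys.stderr)
        return False
```

If this returns False inside your image, the COPY destination or PYTHONPATH in the Dockerfile likely doesn't match where the wrapper expects the code to live.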

My repository looks like this.

I tried to use environment.yml to build my runtime environment. My approach was to generate the yml file locally and upload it to my project, but it failed. I wonder whether I need a Dockerfile to customize the build and copy the file into the image?

1 Like

@mohanty I got an error when I submitted my code, and I’m going mad!

@LYZD-fintech : We are investigating this. We will get back to you soon on this.