🚀 Code Submission Round Launched 🚀

@vitor_amancio_jerony : this is a valid use case. We are trying to come up with a solution for this. You will be able to submit your model for evaluation.

I will keep you posted.

Best,
Mohanty


I don’t think this is a fair act. If there is a limit, then all participants should adhere to it. Allowing some participants to break the limit would be unfair to the rest, since we all know that bigger (and more) models perform better, to a certain extent.

I don’t think that my git clone error is related to the max submission size, though. I just tried to submit a 7GB repo and the same error happened. Here I assume it only clones whatever is in HEAD
hash: 588a2ed07f72ad2825a04da006b744aa735c1aa4

@TransiEnt : All the submissions will still get the same V100 GPU. At this point, the issue with the large model is not during the actual evaluations, but because our git servers throw a tantrum when a single large binary file is checked into a repository (via GIT-LFS). We are working on a fix for this, and that will equally affect all participants, and not change any of the resource constraints already announced for the round.
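Until the fix lands, it may help to check locally whether a repository contains any single large binary before pushing. A minimal sketch in Python (the helper name and the 1 GB threshold are illustrative assumptions, not an announced limit):

```python
import os

def find_large_files(repo_dir, threshold_bytes=1 * 1024**3):
    """Walk a repository working tree and report files at or above a size threshold."""
    large = []
    for root, dirs, files in os.walk(repo_dir):
        # Skip git internals so we only look at tracked working-tree content.
        dirs[:] = [d for d in dirs if d != ".git"]
        for name in files:
            path = os.path.join(root, name)
            size = os.path.getsize(path)
            if size >= threshold_bytes:
                large.append((path, size))
    # Largest offenders first.
    return sorted(large, key=lambda item: item[1], reverse=True)
```

Running this over your clone before `git push` shows which files would go through GIT-LFS as single large objects.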

Best,
Mohanty

Submission failed : No participant could be found for this username

My submission failed with the error above. I don't recall there being anything specific we had to do to participate. Does anybody know why?

Hi @animath3099, can you share the link to the issue page? We are looking into it.

AIcrowd Submission Failed (#1) · Issues · wac81 / 3. Product Substitute Identification Starter Kit · GitLab
here thanks in advance

Hi @animath3099, the issue has been resolved. You can try to make a submission now.

Note: Please remove `debug: true` from your aicrowd.json; it is the cause of the "no slots remaining" error.
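For reference, after dropping the flag an aicrowd.json would look roughly like this (all fields other than `debug` are illustrative placeholders; check your challenge's starter kit for the exact schema):

```json
{
  "challenge_id": "your-challenge-id",
  "grader_id": "your-challenge-id",
  "description": "My submission"
}
```

Equivalently, setting `"debug": false` instead of removing the key should avoid the debug-slot queue.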

Hi @mohanty and @shivam
Were you able to take a look at the error I reported? This time I used a 7GB repo and hit the same error.
hash: 588a2ed07f72ad2825a04da006b744aa735c1aa4

Hi, @shivam and @mohanty
my last two submissions failed and the debug logs disappeared (i.e. the link to the log is not displayed on the issue page), while the debug logs for earlier submissions are still available. Could you take a look?

submission_hash : 03f98c0ca06db9537caefe022523b76ddcb32326.

submission_hash : 56abf6cff5b1bb6c3402f0d660389fed5accdc78.

Hi @shivam ,
any update on this issue? Thanks

Submission failed : No participant could be found for this username

and I pushed without the debug line.

How do I resolve this?

AIcrowd Submission Failed (#6) · Issues · wac81 / 2. Multiclass Product Classification Starter Kit · GitLab


Hi @wac81,

The issue is resolved and I noticed you were able to make the submission just now.
Please let us know in case you face any other issues.

The above problem has been solved, thank you @mohanty @shivam !

Now I have another problem: when my code was at 92%, the submission suddenly failed, and no errors are shown in the debug log. Could you take a look?

submission_hash : 3c4a3a2939733c08fcadf8088e4928834f1e27df.

Hi @zhichao_feng, your submissions failed because of the 90-minute timeout that applies to each of the public and private phase runs (it had only reached 92% at the 90-minute mark).
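One common way to avoid tripping a hard wall-clock limit like this is to track elapsed time inside the prediction loop and switch to a cheap fallback prediction once the budget is nearly spent. A minimal sketch, not tied to any AIcrowd API (all names here are hypothetical):

```python
import time

def predict_with_budget(items, predict_fn, fallback_fn, budget_seconds):
    """Run predict_fn on each item, switching to a cheap fallback_fn
    once the wall-clock budget has been used up, so every item still
    gets an answer before the evaluator's timeout."""
    start = time.monotonic()
    results = []
    for item in items:
        elapsed = time.monotonic() - start
        if elapsed >= budget_seconds:
            # Out of time: emit a cheap default instead of failing outright.
            results.append(fallback_fn(item))
        else:
            results.append(predict_fn(item))
    return results
```

Leaving a safety margin (e.g. budgeting 80 minutes against a 90-minute limit) accounts for model loading and I/O outside the loop.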

Hi @vitor_amancio_jerony @qinpersevere & all,

The image building code has been updated on our side, and the repository size is no longer a restriction.

Please keep in mind that the resources available for inference are the same for everyone, i.e. a Tesla V100-SXM2-16GB.


@shivam thanks. I see my submission has been stuck in the init process for a few hours, and I can't find any error log. Could you help me check it?

here is submission:

AIcrowd Submission Received #193588 - 7

submission_hash : 3b8cea9ae65fb94418f0ec8b2a141b49971795c5.

Hi @wac81, your submissions have been evaluated.

@shivam My submission took more than 90 minutes but was still marked successful. Will it be counted as the final result? Also, I noticed that the reported time for the same amount of computation can differ by up to 2000 seconds, and that when two submissions run at the same time, the computation time increases. When the same user queues multiple submissions, is a single GPU physically shared between them?

Hi @LYZD-fintech,

Each submission has dedicated compute resources.

The time elapsed reported on the issue page is currently wrong and will be fixed soon: it shows the total time from submission to completion (instead of from the start of your code's execution to its end). The timeout, however, is properly implemented and only considers the running time.

We provision a new machine dynamically for each submission, which is why the time elapsed may have been higher when there were many submissions in the evaluation queue (multiple machines had to be provisioned).

I hope that clears any confusion.

Best,
Shivam