Hi @dipam,
I would like to ask if it is possible to get the following information.
Hi @dipam. Could you send me the errors for this submission?
Also, I should mention that of my last 4 failed submissions, 3 were because of the AIcrowd platform.
Not sure if AIcrowd has changed something or if these were just errors coming from the cloud provider.
I’ve added the error in the GitLab comments.
About the failed builds with no changes to dependencies: the only time this has occurred is when the repository has too many models, which kills the Docker build. I'm not sure if that's also the case here; I'll check further. If it is indeed the cause, unfortunately for now you'll have to reduce the overall repository size somehow.
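If you want to see what is actually inflating the repository, something along these lines (a rough sketch, run from the repo root; adjust as needed) usually shows the biggest offenders:

```
# List the 20 largest files in the checkout, plus the size of the git
# object store itself (a large git history also bloats the build context).
du -ah . | sort -rh | head -n 20
git count-objects -vH
```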
For the one that timed out but never started, can you give me the submission ID?
Hi,
If the git repo is too large, maybe you can ignore the .git folder when building the Docker image.
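For example, something like this (just a sketch, assuming the Docker build context is the repository root) keeps the git history out of the image:

```
# Add .git to .dockerignore so it is excluded from the Docker build context.
echo ".git" >> .dockerignore
```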
Here is the list of submissions with weird errors:
Also, could you check what happened here?
@dipam, can you please let me know what went wrong here? It says inference failed and there's nothing in the logs. Submission: 211733. Thanks!
@dipam Could you provide more info about submissions #212367 and #212368, please?
Both are Product Matching: Inference failed
@dipam, could you please check
#212567
#212566
Both failed at the “Build Packages And Env” step, even though I only changed the NN params.
It seems very strange to me…
@dipam, could you check submission 213161?
The diagram shows that everything worked fine, but the status is failed:
@dipam, have you changed any settings on the inference server?
Previously I saw about 1 failed submission per day, and simply rerunning it helped.
Today, however, I only changed the NN weights files and nothing else: 1 submission was OK, while 4 others with weights of the same size, same model, everything identical except the epoch, failed.
It seems very strange to me…
Could you please check it?
If it is a timeout, how can that be when other weights are fine, and why does simply resubmitting sometimes help?
All of these timed out; they’re just barely above the time limit. The variation in runtime can be due to the slight variation in CPU type across the AWS nodes we provision. Hence resubmitting can sometimes help, but for consistency I suggest trying to bring down the compute time.
I understand that every second might matter, however the organizer has deemed 10 minutes to be a generous time limit for the kind of solutions they are looking for, hence the constraint.
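For what it's worth, one rough way to see how close you are to the limit, and how much run-to-run variation you have, is to time a few local runs. This assumes a hypothetical run.sh entry point; replace it with whatever your repository actually uses:

```
# Time a few local runs (GNU time) to gauge distance from the 10-minute limit.
# run.sh is a placeholder for your actual entry point.
for i in 1 2 3; do
  /usr/bin/time -f "run $i: %e seconds" ./run.sh
done
```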