Hi @shivam,
Any update on this issue? Thanks!
Submission failed: No participant could be found for this username.
I pushed without the debug line.
How can I resolve this?
Hi @wac81,
The issue is resolved and I noticed you were able to make the submission just now.
Please let us know in case you face any other issue.
The above problem has been solved, thank you @mohanty @shivam !
Now I have another problem. When my code was running at 92%, the submission suddenly failed. And no errors are shown in the debug log. Could you take a look?
submission_hash : 3c4a3a2939733c08fcadf8088e4928834f1e27df
Hi @zhichao_feng, your submissions failed because of the 90-minute timeout applied to each of the public & private phase runs. (It reached 92% at the 90-minute mark.)
Hi @vitor_amancio_jerony @qinpersevere & all,
The image building code has been updated on our side, and the repository size is no longer a restriction.
Please keep in mind that the resources available for inference are the same for everyone, i.e. a Tesla V100-SXM2-16GB.
@shivam thanks. I saw my submission stuck in the init process for a few hours, and I can't see any error log. Could you help me check it?
here is submission:
AIcrowd Submission Received #193588 - 7
submission_hash : 3b8cea9ae65fb94418f0ec8b2a141b49971795c5
@shivam My submission took more than 90 minutes, but it was successful. Will it be counted in the final results? In addition, I found that the reported time for submissions with the same amount of computation can differ by as much as 2000 seconds, and when two submissions are evaluated at the same time, the running time increases. When the same user has several submissions queued, is a GPU physically shared between them?
Hi @LYZD-fintech,
Each submission has dedicated compute resources.
The time elapsed reported on the issue page is currently wrong and will be fixed soon: it shows the total time from submission to completion, instead of from the start of your code's execution to its end. The timeout, however, is implemented properly and only considers the running time.
We provision a new machine dynamically for each submission, which is why the reported time elapsed may be higher when there is a large number of submissions in the evaluation queue (multiple machines need to be provisioned).
I hope that clears any confusion.
Best,
Shivam
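To illustrate the distinction Shivam describes, here is a minimal sketch with made-up timestamps (the actual evaluation pipeline is not public; all values below are assumptions for illustration only):

```python
from datetime import datetime

# Hypothetical timestamps for one submission (assumed values)
submitted = datetime(2021, 7, 1, 10, 0)   # participant pushes the submission
started   = datetime(2021, 7, 1, 10, 45)  # machine provisioned, code starts
finished  = datetime(2021, 7, 1, 12, 10)  # evaluation completes

reported_elapsed = finished - submitted   # what the issue page currently shows
running_time     = finished - started     # what the timeout actually measures

print(reported_elapsed)  # 2:10:00 - includes 45 min of queue/provisioning wait
print(running_time)      # 1:25:00 - under the 90-minute limit, so no timeout
```

In this example the issue page would show over two hours elapsed even though the code itself ran for well under the limit.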
Thank you, but how do I know if my submission will be considered in the final ranking?
Hi @LYZD-fintech,
All successful submissions will be considered for the final ranking in this challenge.
Best,
Shivam
Thanks, I have no further questions.
@shivam @mohanty I'm getting a CUDA out-of-memory error when loading my model in PyTorch, even though run.py works on my own T4 and V100. I tested it inside the same container that the Dockerfile builds. I don't know what else to do at this point.
Hash: af5e3e9d5a515b6917e2d39340da51e23b23d878
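For anyone debugging similar OOM errors on the 16 GB evaluation GPU, a rough back-of-envelope check of whether the weights alone fit can help. This is only a sketch; the parameter count and dtypes below are hypothetical, not details of this submission:

```python
def weights_memory_gb(num_params: int, bytes_per_param: int = 4) -> float:
    """Approximate memory needed just to hold the model weights
    (activations, optimizer state, and CUDA context come on top)."""
    return num_params * bytes_per_param / 1024**3

# fp32 weights for a hypothetical 3B-parameter model
fp32 = weights_memory_gb(3_000_000_000, bytes_per_param=4)
# the same model stored in fp16 halves the footprint
fp16 = weights_memory_gb(3_000_000_000, bytes_per_param=2)

print(f"fp32: {fp32:.1f} GB, fp16: {fp16:.1f} GB")  # fp32: 11.2 GB, fp16: 5.6 GB
```

Even when the weights fit, inference-time activations and the CUDA context can push a borderline model over 16 GB on one machine while it happens to fit on another.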
@mohanty @shivam
We've hit a problem where the environment hasn't finished configuring after 2 hours.
Could you take a look at it?
submission_hash: 00d6a5fb492d8648d5cc1724ce7efcd79b3f532d
@shivam
I have no idea about this case: the error log doesn't show any error, but the submission failed.
AIcrowd Submission Received #193832 - initial-15
submission_hash : 77fd92f1686f89bb2a0a4a09ab2cb83cce5f3e0c
If this issue is not updated within a reasonable amount of time, please send an email to help@aicrowd.com
Could you also check my submission? I believe there is unusual behavior in some hosting services. The code passed the public test set and then quickly failed on the private set. I also observed other participants' submissions around the same time, and all of them failed.
submission_hash : 540adaa2989b1c62dffc48659400db2cc0a13989
@shivam
It's a timeout on the public test too!
There are 2 stages:
- data processing takes ~160 s (progress bar: 240263/277044 [02:38<00:26, 1370.87it/s])
- prediction takes ~27 min ([Predict] 2/271, ETA: 27:15)
I have no idea why it times out.
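For reference, extrapolating the logged progress suggests the run should finish well within the limit, which explains the confusion. A sketch, assuming the numbers read from the logs above and a constant prediction rate:

```python
def estimate_total_seconds(done: int, total: int, elapsed_s: float) -> float:
    """Linearly extrapolate total runtime from partial progress."""
    return elapsed_s * total / done

# data processing: 240263 of 277044 items after 158 s (02:38)
prep_s = estimate_total_seconds(240263, 277044, 158)

# prediction: 2 of 271 batches done, ETA shows ~27 min (1635 s) for the
# remaining 269 batches; scale that rate up to all 271 batches
predict_s = 1635 * 271 / 269

total_min = (prep_s + predict_s) / 60
print(f"~{total_min:.0f} min total")  # ~30 min total, far below a 90-min limit
```

If the estimate is this far under the limit, the timeout is likely coming from something outside the two logged stages (e.g. model loading or environment setup), which is worth checking in the logs.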
@wac81 : Yes, the timeouts apply to both the Public and Private Test Phases. Also, we have increased the timeout to 120 mins
- please refer to this post: Deadline Extension to 20th July && Increased Timeout of 120 mins
Best,
Mohanty