Thank you for developing this API for us, this is great.
But sorry, I seem to have tested too many times and hit the maximum number of submissions.
@ChenKuanSun Yes, debug submissions will continue to count toward your overall submission quota.
Given that these are treated exactly the same way as actual submissions internally, i.e. they utilise prod resources, we would like to prevent any possible misuse. As a participant, you have to choose wisely when, and how many, submissions to make as actual vs. debug.
What I want to know is whether there is a basic configuration that provides a similar test environment. I want to set up the simulation environment on my own GCP, so that I don't take up your resources, and then adapt the configuration file and submit.
It seems that files I tested successfully on the official Docker image fail in your environment.
This is a great feature, @shivam! Thanks for taking the time to implement it. Hopefully it will help a lot of participants.
@ChenKuanSun It looks like a good suggestion, which we can try to incorporate into AICrowd. /cc @mohanty
GCP has a mechanism for importing Docker images directly. You could consider supporting this, and you could also publish the specifications.
My question is: when you clone the repository to build the evaluation environment, do you use Git LFS? My concern is that if the evaluation environment does not do an LFS-aware clone, and I have a file stored with LFS, it will not be accessed correctly.
@mohanty
@ChenKuanSun : Yes, the evaluator can use git lfs, and that shouldn't be a problem.
And I'm not sure what you mean by "I have tested successful files on the official docker and cannot do it in your environment." If it's about the software runtime packaging, you can very well drop a Dockerfile at the root of your repository, and the image builder will use that to build your image.
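For anyone checking their own setup, a typical participant-side Git LFS workflow looks roughly like this (the `*.pth` pattern is just an example, not something mandated by the evaluator):

```shell
# Install the LFS hooks once per machine
git lfs install

# Track large files (e.g. model weights) before committing them
git lfs track "*.pth"
git add .gitattributes

# With the hooks installed, a normal clone fetches LFS objects automatically;
# to fetch them explicitly in an existing checkout:
git lfs pull
```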
I recently discovered that your evaluation environment uses nvidia/cuda:9.0-cudnn7-runtime-ubuntu16.04, while some people (including me) are using CUDA 10. After several tests I worked out the actual AIcrowd environment and switched to CUDA 9.0 when building images on GCP.
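For reference, a minimal Dockerfile pinned to that base image might look like the sketch below. The package list, requirements file, and entrypoint are placeholders for your own submission, not the actual evaluator's setup:

```dockerfile
# Match the evaluation environment's CUDA 9.0 base image
FROM nvidia/cuda:9.0-cudnn7-runtime-ubuntu16.04

# Install Python and your dependencies (placeholder package list)
RUN apt-get update && \
    apt-get install -y python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

COPY requirements.txt /tmp/requirements.txt
RUN pip3 install -r /tmp/requirements.txt

COPY . /app
WORKDIR /app

# Placeholder entrypoint for your submission
CMD ["python3", "run.py"]
```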
Also, a note should be added to the evaluation section: when using an NVIDIA GPU, `--runtime=nvidia` should be added to the `docker run` command.
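Concretely, the note could show something like this (the image name and tag are illustrative):

```shell
# Run the container with GPU access via the NVIDIA container runtime
docker run --runtime=nvidia -it my-submission-image:latest
```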
Would you be willing to make a quick PR to the repo with this change in the README to help other users?
Once I have successfully tested the complete environment, I will provide the repo to you so that you can publish these tips in the README.md to help other contestants.
@arthurj
I made a PR just now.
Hi, I used debug mode to test my submission, then tried to turn it off to get an actual result. But it seems it still runs in debug mode even after I updated aicrowd.json, pushed to the repo, and created a tag.
Does it take time for the debug flag to be picked up?
What should I do?
Try deleting the debug key entirely…
Thanks for flagging. I have fixed the bug; from now on the flag's value will be checked instead of just the presence of the key.
@tky Sorry for the inconvenience caused to you.
And here I was thinking I was going mad when my previously working submission suddenly broke after "disabling" debug.
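For anyone else hitting this: after the fix, the value of the flag itself is what matters, so an actual (non-debug) submission should set it to false explicitly in aicrowd.json. The other keys below are illustrative placeholders, not exact values:

```json
{
  "challenge_id": "your-challenge-id",
  "authors": ["your-username"],
  "description": "my submission",
  "debug": false
}
```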