Can I have an example of working code for making a submission on GitLab?

Hi, git lfs migrate is for converting older commits to start using LFS. This is useful in case you have lots of older commits (intended or unintended) containing large files and want those files to be tracked by LFS going forward.
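For example (just a sketch: the *.h5 pattern and the origin/master names are placeholders for whatever large files and branch you actually use), rewriting your existing history so those files go through LFS could look like:

$ git lfs migrate import --include="*.h5" --everything
$ git push --force origin master

Note that migrate rewrites commits, which is why the force push is needed afterwards.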

1 Like

What must we keep from the initial env.yml file (besides aicrowd-api)?

What’s wrong with my yml file??

@ashivani Can you look at my yml file? aicrowd-api is in it.

@amapic Your master branch contains the aicrowd-api but your submission branch does not. The environment.yml file in submission-v0.22 does not contain the api.
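A quick way to check what a given branch or tag actually ships (using the submission-v0.22 tag mentioned above) is something like:

$ git show submission-v0.22:environment.yml | grep aicrowd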

1 Like

Could you please clarify the last date for the contest? The home page shows “2 days remaining”, but the Timeline mentions Jan 17, 2020.

Hi @gokuleloop,

Thanks for pointing it out. We have updated the last date to Jan 17, 2020 on the website as well.

1 Like

@ignasimg Hi, thanks for providing your help. What values did you set for AICROWD_TEST_IMAGES_PATH and AICROWD_PREDICTIONS_OUTPUT_PATH?

Do not change the defaults.

What test did you use to detect corrupt images?

@amapic

“try except” is the easiest one.

https://docs.python.org/3/tutorial/errors.html
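For example, a minimal check using Pillow (just a sketch, assuming the files are images PIL can read; the is_corrupt helper name is made up) could look like:

from PIL import Image

def is_corrupt(path):
    """Return True if the image cannot be opened and verified."""
    try:
        with Image.open(path) as img:
            img.verify()  # checks file integrity without fully decoding
        return False
    except Exception:
        return True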

1 Like

@amapic in the sample submission from the starter kit you can find:

AICROWD_TEST_IMAGES_PATH = os.getenv("AICROWD_TEST_IMAGES_PATH", "./data/test_images_small/")
AICROWD_TEST_METADATA_PATH = os.getenv("AICROWD_TEST_METADATA_PATH", "./data/test_metadata_small.csv")
AICROWD_PREDICTIONS_OUTPUT_PATH = os.getenv("AICROWD_PREDICTIONS_OUTPUT_PATH", "random_prediction.csv")

As @ValAn told you, it’s better if you don’t change the defaults. But if you still need to change them, make sure to change only the second parameter in the call to os.getenv (the fallback default), not the variable name.

This is because when you submit your code, AIcrowd expects you to “read” those paths from environment variables that they have set.

To test that it works on your local machine, the default values should be enough; just uncompress both test_metadata_small.tar.gz and test_images_small.tar.gz into the data folder. You can download both of those files from the Resources page.
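For example, assuming you downloaded both archives into the repository root:

$ mkdir -p data
$ tar -xzf test_metadata_small.tar.gz -C data/
$ tar -xzf test_images_small.tar.gz -C data/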


As for dealing with corrupt files, you can see how @gokuleloop did it for round 2 at https://github.com/GokulEpiphany/contests-final-code/blob/master/aicrowd-snake-species/inference/run.py#L196
Disclaimer: you can’t reuse exactly the same idea, since we no longer have a sample submission .csv file, but it gives you an idea of how to handle them.

Personally, what I do is simply generate a “fake” random image, but I guess there are better ways (more efficient / higher-scoring). Roughly, the code looks like:

import numpy as np
from PIL import Image

try:
    image = Image.open(file)
    image.load()  # force a full decode so corrupt files raise here
except Exception:
    image = Image.fromarray(np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8))  # "fake" random image (placeholder size)

Final tip: Be sure to add a line with a corrupt / non-existent image file to the test_metadata_small.csv mentioned earlier, so you can also be sure your code can handle errors when reading the images.

Best of luck! 🙂

1 Like

Thank you. Can you give me a yml file with Keras and TensorFlow 1?

I don’t use Keras or TensorFlow, but if you are using conda - which you totally should, not just because it makes dependency management way easier but also because it’s easy to use and just works - it’s as easy as activating your environment and typing:

$ conda env export > environment.yml

Please use
conda env export --no-build > environment.yml
Also, inference happens on a K80 (if you enable GPU). Make sure your CUDA version is 10.0 and not 10.1.
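For reference, a minimal hand-written environment.yml along these lines might look like the sketch below; the environment name and package versions are assumptions, so exporting your own environment as above gives the authoritative list:

name: snakes-challenge
channels:
  - defaults
dependencies:
  - python=3.6
  - cudatoolkit=10.0    # the K80 nodes expect CUDA 10.0, not 10.1
  - tensorflow-gpu=1.14
  - keras
  - pip
  - pip:
    - aicrowd-api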

2 Likes

Why does it need to be 10.0? I don’t understand. TBH, I am not sure the organizers enabled a GPU for this comp?
@shivam @ashivani @mohanty is there a gpu allocated or not?

A relevant discussion.

1 Like

Hi participants, @ValAn,

Yes, GPUs are available for snakes challenge submissions when gpu: true is set in aicrowd.json.
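For reference, the flag sits in aicrowd.json roughly like the sketch below; only the gpu field is confirmed here, the other keys and values are placeholders based on typical starter kits:

{
  "challenge_id": "your-challenge-id",
  "authors": ["your-aicrowd-username"],
  "description": "Snake species classification submission",
  "gpu": true
}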

It needs to be 10.0 because the nodes your code runs on currently use GKE version 1.12.x, which comes with NVIDIA driver 410.79, which in turn supports CUDA 10.0.

We are looking forward to running future challenges on a higher CUDA (GKE) version, but to keep results, timings, etc. consistent, we do not want to change versions mid-way through the contest.

I apologize for overlooking this. Slow evaluation drove me crazy as I mentioned earlier in this discussion.

Now I wonder how I am supposed to know this?

Am I supposed to read through previous competitions to understand how to submit?

Also, I really think you should add an edit history for your challenge description. Two months ago I read it for this challenge and now I see it has changed. Nothing important: you updated the number of images, which was originally just copy-pasted from stage 2. I hope you will not take my comments as an offense; I am just trying to understand, share my experience, and give some suggestions on how to make participating easier.

3 Likes

Dear @ValAn,

Our sincere apologies for the inconvenience you have faced.

Regarding the slow evaluation speeds: since we have to execute your code (and models, etc.) on a large number of test images, the evaluations are indeed slow, as your model has to make predictions for every one of them. We are trying to improve this experience by providing better feedback on progress, and will definitely address this in the upcoming version of the challenges.

Regarding the competition, we are providing all updates on this forum, and we would be happy to answer any and all questions you have here. We are also working on better notification systems so that you get relevant updates from the challenge over email and the other notification channels on the platform that you subscribe to.

In the meantime, we really appreciate your feedback. It helps us make the platform much better for thousands of other users, and under no circumstances do we take it as an offense.

Thank You,
Mohanty
(on behalf of the organizing team)

1 Like