What is being evaluated during submission?

Is the predict.R or predict.py script run in the evaluation queue when we submit a solution through SSH? If so, how do we know where the test dataset is located in the evaluation environment?

Also, when the GitLab issue says the evaluation failed, is there a way to get the log file so we can debug what might have gone wrong?

Thanks.

Hi @wangbot,

Welcome to the challenge!

As described in the starter kit README, we use run.sh as the code entry point. You can modify it to fit your requirements: https://gitlab.aicrowd.com/novartis/novartis-dsai-challenge-starter-kit#code-entrypoint
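
To give a concrete picture, a minimal run.sh usually just dispatches to your prediction script. The sketch below assumes your repository contains predict.py or predict.R (as in your question); it is only an illustration, not the canonical starter-kit file.

```bash
#!/bin/bash
# Hypothetical run.sh sketch: the evaluator invokes this script as the code entry point.
# Adapt it to call whichever prediction script your solution actually uses.
set -euo pipefail

if [ -f "predict.py" ]; then
    python predict.py      # Python-based solution
elif [ -f "predict.R" ]; then
    Rscript predict.R      # R-based solution
else
    echo "No prediction script found" >&2
    exit 1
fi
```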

We have a debug mode which you can activate by setting debug: true in aicrowd.json. With it enabled, you get complete access to your submission's logs and can debug without needing us to share them. NOTE: in debug mode the submission runs on an extremely small/partial dataset and your scores are not reflected on the leaderboard. https://gitlab.aicrowd.com/novartis/novartis-dsai-challenge-starter-kit#aicrowdjson
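
For illustration only, enabling debug mode is a one-line change in aicrowd.json. Keep whatever other fields your copy of the file already contains; the challenge_id value below is a placeholder, not necessarily the real identifier.

```json
{
  "challenge_id": "novartis-dsai-challenge",
  "debug": true
}
```

Remember to switch debug back to false before a scored submission, since debug runs are not reflected on the leaderboard.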

Nevertheless, the AIcrowd team [and organisers] have access to all the logs, and we share error tracebacks and relevant log excerpts with you as comments on the GitLab issue on a best-effort basis, which can take anywhere from a few minutes to a few hours.

I hope this clears up your doubts. All the best with the competition!