Submission Failed: Evaluation Error

Hi, we are trying to learn the submission format.

By following the submitting section (https://github.com/Unity-Technologies/obstacle-tower-challenge#submitting), we submitted the original obstacle-tower-challenge branch with the ObstacleTower executable in it.

However, it always returns “failed” right after “image_built_successfully”, and the evaluation logs are not that helpful. We don’t know how to solve it, so we cannot get to the next step. Any suggestions would be greatly appreciated. Thanks a lot.

  • State : Evaluation Error :broken_heart: :cry:
  • Total Execution Time :hourglass: : 47s
  • The following containers terminated prematurely. : agent
  • Please contact administrators, or refer to the execution logs. (container logs are inaccessible to participants)

I have the same problem. I just want to build tensorflow-gpu in Docker and run the environment for the dopamine framework, but I keep running into problems. Can you provide a suggested configuration file? My agent can already complete the 7th to 9th floors, but unfortunately I have been unable to submit successfully. :disappointed_relieved:
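To be concrete, something along these lines is what I am trying (every version, tag, and path below is my own guess, not an official configuration):

    # Rough sketch of a tensorflow-gpu + dopamine image.
    # Base image tag, package versions, and entrypoint are placeholders.
    FROM nvidia/cuda:9.0-cudnn7-runtime-ubuntu16.04

    RUN apt-get update && apt-get install -y python3 python3-pip && \
        rm -rf /var/lib/apt/lists/*

    # tensorflow-gpu matching the CUDA 9.0 / cuDNN 7 base, plus dopamine
    RUN pip3 install tensorflow-gpu==1.12.0 dopamine-rl atari-py

    COPY . /home/aicrowd
    WORKDIR /home/aicrowd

    CMD ["python3", "run.py"]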

@immik @ChenKuanSun: Sorry for the issues. We have noticed a few instabilities in the evaluation caused by the communication with the binary. We are investigating right now and will get back to you as soon as we have more updates.

Hi @immik @ChenKuanSun,

We have updated the submissions status page with relevant logs for your submissions. You can browse to your Gitlab repo’s issues to view them.

@shivam Hi, I have the same problem and want to check the logs. Thank you.

Hi @zhenghognzhi,

Shared in your issue, although I guess you already figured out the error in your latest submission. :smiley:

@shivam I had a similar issue as well and could use the logs for guidance…
https://gitlab.aicrowd.com/banjtheman/obstacle-tower-challenge/issues/4

Also got another weird issue on a different submission:

    Submission failed : uninitialized constant Api::ExternalGradersController::TermsNotAcceptedByParticipant

https://gitlab.aicrowd.com/banjtheman/obstacle-tower-challenge/issues/5

@banjtheman: This was a temporary issue on the evaluator caused by the release of a new version of the whole platform, but it has been resolved now. Just resubmitting the same solution should work.

(Pre) Announcement: @shivam has been both awesome and kind enough to implement a “debug” mode for submissions, so that you can get direct access to the logs of your container, without having to wait for us (and without the risk of any information leak). I will let him make a formal announcement with details when he is back online !!

Thanks @shivam :tada: !!!

That will be so nice. Thank you all @mohanty @shivam !!!

Can’t wait! I’ve been trying to get my dopamine-trained agent scored (only 5-7 floors so far), but the only response I get after every change is

    The following containers terminated prematurely. : agent

and it’s not very helpful. It builds fine, but gets stuck at the evaluation phase.

@karolisram: The error at your end was:

    2019-03-06T12:08:55.052739425Z root
    2019-03-06T12:08:55.446903073Z Traceback (most recent call last):
    2019-03-06T12:08:55.446954908Z   File "/srv/conda/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    2019-03-06T12:08:55.469690564Z     "__main__", mod_spec)
    2019-03-06T12:08:55.469706832Z   File "/srv/conda/lib/python3.6/runpy.py", line 85, in _run_code
    2019-03-06T12:08:55.469949671Z     exec(code, run_globals)
    2019-03-06T12:08:55.469983464Z   File "/home/aicrowd/dopamine/discrete_domains/run_trained.py", line 29, in <module>
    2019-03-06T12:08:55.470166251Z     from dopamine.discrete_domains import run_experiment
    2019-03-06T12:08:55.47018091Z   File "/home/aicrowd/dopamine/discrete_domains/run_experiment.py", line 25, in <module>
    2019-03-06T12:08:55.470281308Z     from dopamine.agents.dqn import dqn_agent
    2019-03-06T12:08:55.470291629Z   File "/home/aicrowd/dopamine/agents/dqn/dqn_agent.py", line 28, in <module>
    2019-03-06T12:08:55.47048102Z     from dopamine.discrete_domains import atari_lib
    2019-03-06T12:08:55.470529508Z   File "/home/aicrowd/dopamine/discrete_domains/atari_lib.py", line 31, in <module>
    2019-03-06T12:08:55.470610658Z     import atari_py
    2019-03-06T12:08:55.470621438Z ModuleNotFoundError: No module named 'atari_py'
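The fix here is to make sure atari_py is installed inside your agent image, for example (assuming your image installs its Python dependencies via pip; adapt this to your own Dockerfile or requirements.txt):

    # install dopamine's missing Atari dependency into the image
    pip install atari-py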

It’s hard for us to make the container logs publicly accessible while ensuring that there is no data leak. But a “debug” mode was recently implemented, which provides you the logs via a dummy submission. We are waiting for @shivam to announce it.

Also, as a general practice, it’s good to test your submissions locally before submitting. Many of these agent container errors would jump out if you tested them locally first.
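For example, a minimal local smoke test looks something like this (the image tag otc-agent is just a placeholder):

    # build the same image the evaluator would build from your repo
    docker build -t otc-agent .

    # run the agent container locally; import errors like the
    # missing atari_py above will show up immediately
    docker run -it otc-agent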


An announcement has now been made about the debug feature. You can read about it here.
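For quick reference: assuming the flag follows the usual AIcrowd submission convention, debug mode is toggled from the aicrowd.json in your repository, roughly like this (see the linked announcement for the authoritative details):

    {
        "debug": true
    }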
