Using PyTorch for submissions

Hello!

We noticed that some of you are facing issues getting the PyTorch version running. We created a torch branch in the starter kit that should work. To be precise, these are the changes you need to make to get PyTorch-based solutions working.

diff --git a/Dockerfile b/Dockerfile
index d633fd3..5f74098 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -58,4 +58,4 @@ RUN wget -nv -O miniconda.sh https://repo.anaconda.com/miniconda/Miniconda3-py37

 ENV PATH ${CONDA_DIR}/bin:${PATH}

-RUN pip install -r requirements.txt --no-cache-dir
+RUN pip install -r requirements.txt --no-cache-dir -f https://download.pytorch.org/whl/torch_stable.html
diff --git a/aicrowd.json b/aicrowd.json
index afa2f25..525e6e1 100644
--- a/aicrowd.json
+++ b/aicrowd.json
@@ -3,6 +3,6 @@
     "grader_id": "evaluations-api-neurips-2020-procgen",
     "authors" : ["Sharada Mohanty <mohanty@aicrowd.com>"],
     "description": "Procgen test submission",
-    "docker_build": false,
+    "docker_build": true,
     "debug" : false
 }
diff --git a/experiments/impala-baseline.yaml b/experiments/impala-baseline.yaml
index be0d8db..a0e3fca 100644
--- a/experiments/impala-baseline.yaml
+++ b/experiments/impala-baseline.yaml
@@ -125,7 +125,7 @@ procgen-ppo:
         # Use PyTorch (instead of tf). If using `rllib train`, this can also be
         # enabled with the `--torch` flag.
         # NOTE: Some agents may not support `torch` yet and throw an error.
-        use_pytorch: False
+        use_pytorch: True

         ################################################
         ################################################
@@ -187,7 +187,7 @@ procgen-ppo:
             # Examples of implementing the model in Keras is also available
             # here :
             # https://github.com/ray-project/ray/blob/master/rllib/examples/custom_keras_model.py
-            custom_model: impala_cnn_tf
+            custom_model: impala_cnn_torch
             # Extra options to pass to custom class
             custom_options: {}

diff --git a/requirements.txt b/requirements.txt
index 7bebb58..c5bc3cb 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,5 +1,8 @@
-ray[rllib]==0.8.5
-procgen==0.10.1
-tensorflow==2.1.0
+ray[rllib]==0.8.6
+procgen==0.10.4
+tensorflow-gpu==2.2.0
 mlflow==1.8.0
 boto3==1.13.10
+requests
+torch==1.6.0+cu101
+torchvision==0.7.0+cu101

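If you want to try the pinned wheels outside Docker first, something like the following should work (a sketch, assuming a machine with CUDA 10.1; the versions are taken from the diff above, and the `-f` flag points pip at the same wheel index the Dockerfile change uses):

```shell
# Install the pinned CUDA 10.1 wheels locally; -f (--find-links) tells pip
# where to find the "+cu101" builds, which are not on PyPI.
pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 \
    -f https://download.pytorch.org/whl/torch_stable.html

# Quick check that the install imports and sees the GPU
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```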
Link to the PyTorch branch:

The upgrade is not just PyTorch; the other packages have been upgraded as well. Is the evaluation environment still the same as the previous version?
If so, can I take it that I can set "docker_build": true in TensorFlow submissions as well, to apply these requirements?

Hello @Paseul

Using PyTorch is independent of the procgen package version. During evaluations, we use a custom procgen package that has private environments. You should be able to run the PyTorch-based IMPALA implementation provided in the starter kit with any version of procgen.
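If you want to confirm which versions you have locally before submitting, a small stdlib-only sketch (the `installed_versions` helper is illustrative, not part of the starter kit):

```python
import importlib.metadata as metadata  # Python 3.8+

def installed_versions(packages):
    """Return a mapping of package name -> installed version, or None if absent."""
    versions = {}
    for name in packages:
        try:
            versions[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            versions[name] = None
    return versions

# Example: check the packages pinned in requirements.txt
print(installed_versions(["ray", "procgen", "torch"]))
```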

If so, can I take it that I can set "docker_build": true in TensorFlow submissions as well, to apply these requirements?

Yes, you should set "docker_build": true in your aicrowd.json to install any additional dependencies. But as mentioned previously, the procgen version you use should not affect the evaluations.
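If you prefer to flip the flag programmatically rather than editing the JSON by hand, a minimal sketch (the `enable_docker_build` helper is hypothetical, not part of the starter kit):

```python
import json

def enable_docker_build(path="aicrowd.json"):
    """Set docker_build to True so extra requirements are installed at evaluation time."""
    with open(path) as f:
        config = json.load(f)
    config["docker_build"] = True  # leave all other keys (grader_id, debug, ...) untouched
    with open(path, "w") as f:
        json.dump(config, f, indent=4)
```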