How to install PyTorch in your submissions? Can my PyTorch use the GPU?

Potential Issue:

If the challenge you are participating in has GPUs available during evaluation, and your submission runs slower than it does locally, one potential reason is that the wrong torch version was installed.

Why is this a common occurrence?

PyTorch is undoubtedly one of the most popular deep learning frameworks, BUT there is a special caveat.
The PyPI releases of PyTorch do not include GPU builds, and because of that it is easy to just add torch to your requirements.txt and end up using the CPU version of PyTorch.

How to identify whether you are using the correct torch version?

In the build logs provided to you, check which package was downloaded.
For example, the wheel below is a CPU version, as it is missing the cuXXX suffix:

Collecting torch
    Downloading torch-1.12.0-cp37-cp37m-manylinux1_x86_64.whl

The GPU versions carry a cuXXX suffix in the wheel name, for example:

Collecting torch
    Downloading torch-1.12.0+cu113-cp37-cp37m-linux_x86_64.whl

>>> import torch
>>> torch.cuda.is_available()  # you can add this in your code & print it in your logs to verify that everything works
True
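
Another quick check, independent of the build logs: wheels installed from the PyTorch download index tag `torch.__version__` with a local version suffix such as `+cu113` (CUDA build) or `+cpu` (CPU-only build). A minimal sketch of that check (the helper name `is_cuda_build` is ours, not part of any torch API):

```python
def is_cuda_build(version: str) -> bool:
    # Wheels from the PyTorch download index tag their version with a
    # local suffix: "+cuXXX" for CUDA builds, "+cpu" for CPU-only builds.
    # Note: the plain PyPI wheel has no suffix at all.
    return "+cu" in version

# Example version strings as they would appear in torch.__version__:
print(is_cuda_build("1.12.0+cu113"))  # CUDA build -> True
print(is_cuda_build("1.12.0+cpu"))    # CPU build  -> False
print(is_cuda_build("1.12.0"))        # no suffix  -> False
```

In your own submission you would call it as `is_cuda_build(torch.__version__)` and print the result to your logs.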


I am using requirements.txt. What should I do?

You can add `--extra-index-url` before the torch line, or at the top of your requirements.txt.

In case you want to pin a specific CUDA version and PyTorch version, check PyTorch's installation matrix to confirm that your PyTorch version supports your CUDA version.
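
For example, a requirements.txt that pins torch 1.12.0 built for CUDA 11.3 might look like this (the cuXXX index and the exact version pins are illustrative; adjust them to your challenge's CUDA setup):

```
--extra-index-url https://download.pytorch.org/whl/cu113
torch==1.12.0+cu113
torchvision==0.13.0+cu113
```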


I am using environment.yml. What should I do?

The installation is a bit easier with Conda, and a sample environment.yml looks as follows. (Note: you can specify the Python version of your choice, etc. as well.)

name: myenv
channels:
  - conda-forge
  - defaults
  - pytorch
dependencies:
  - python=3.8
  - cudatoolkit=10.2   # cuda version
  - pytorch=1.11       # torch version
  - torchvision
  - torchaudio
  - pip:
      - pandas
      - <....other pip dependencies goes here....>