I downloaded the baselines repo and tried to install the packages with:
python setup.py install
However, I got the following errors:
Download error on git+https://gitlab.aicrowd.com/flatland/flatland.git@42-run-baselines-in-ci: unknown url type: git+https -- Some packages may not be found!
No local packages or working download links found for torch>=1.1.0
The baselines repo does not need to be installed. Once you have installed Flatland-RL by following the quick start instructions, you can simply clone the baselines repo to a location of your choice and run the code from that folder.
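For reference, the steps would roughly be as follows. This is only a sketch: it assumes the core package is published on PyPI as flatland-rl and that the baselines repository lives next to the flatland repository on gitlab.aicrowd.com, so adjust the URL if yours differs.

pip install flatland-rl
git clone https://gitlab.aicrowd.com/flatland/baselines.git
cd baselines

After that you can run the training scripts directly from the cloned folder.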
Thank you for your reply. I have updated the training introduction. We will also see if we can simplify this further by installing the dependencies automatically.
After installing PyTorch, running the script produces an error:
File "<folderStruct>\baselines\torch_training\dueling_double_dqn.py", line 11, in <module> from torch_training.model import QNetwork, QNetwork2 ModuleNotFoundError: No module named 'torch_training'
This can be fixed by removing the torch_training prefix from the line from torch_training.model import QNetwork, QNetwork2, since the file already sits inside the "torch_training" folder.
A similar issue exists for the line from utils.observation_utils import norm_obs_clip, split_tree in the training_navigation.py file, since the utils folder lives at the repository root, outside torch_training (see the sketch after my question below).
Is this caused by a change in the folder structure, or am I doing something wrong?
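For anyone hitting the same errors, here is a minimal sketch of the two workarounds I used. It assumes you run the scripts from inside the torch_training folder; the file and class names are taken from the tracebacks above, and the sys.path line is only one way to make the top-level utils package importable.

# In dueling_double_dqn.py: import model directly, because the script's own
# folder (torch_training) is already on sys.path when running from there.
from model import QNetwork, QNetwork2

# In training_navigation.py: put the repository root on sys.path so the
# top-level utils package can be found, then import as before.
import sys
from pathlib import Path
sys.path.insert(0, str(Path(__file__).resolve().parent.parent))  # repo root
from utils.observation_utils import norm_obs_clip, split_tree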
Thanks all for the inputs.
Just wanted to add that the baselines repository can evolve much faster with community contributions.
Please feel free to send in pull requests with changes that worked for you.
After training the baseline, I got the two windows mentioned in the starting guide: one for the cost function curve and one for an animated rendering of the results.
However, my second window only shows a still image and is not animated.
Any suggestions?
To view the performance of your agents, I suggest you use multi_agent_inference.py for your multi-agent example (just load the correct network file from training) or use render_agent_behavior.py for single-agent behavior. If this does not work, don’t hesitate to reach out. We are here to help.
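If it helps, loading the trained weights before rendering looks roughly like the sketch below. Treat it as an illustration only: the QNetwork constructor arguments, the state/action sizes, and the checkpoint path are assumptions, so replace them with the values your training run actually used.

import torch
from model import QNetwork  # class name taken from the baseline imports above

# Hypothetical sizes and checkpoint path -- adapt these to your training setup.
state_size, action_size = 231, 5
policy = QNetwork(state_size, action_size)
policy.load_state_dict(torch.load("./Nets/navigator_checkpoint.pth"))
policy.eval()  # inference mode: no gradients needed while rendering behavior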