Insufficient cluster resources to launch trial

Hello,

I am trying to get my own basic agent running. First off, I had to downgrade to Python 3.7; otherwise I would get a cloudpickle error (TypeError).

I then created a custom agent by mimicking the RandomAgent described in the “Getting Started” guide, but now I get the following error:

ray.tune.error.TuneError: Insufficient cluster resources to launch trial: trial requested 7 CPUs, 0.8999999999999999 GPUs but the cluster has only 0 CPUs, 0 GPUs, 1.37 GiB heap, 0.63 GiB objects (1.0 node:10.62.1.226). Pass queue_trials=True in ray.tune.run() or on the command line to queue trials until the cluster scales up or resources become available.
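
If I follow the error’s suggestion, I believe the call would look roughly like this (the trainable and config here are placeholders, not my actual setup):

import ray
from ray import tune

ray.init()
tune.run(
    "PPO",  # placeholder trainable name
    config={"env": "CartPole-v0"},  # placeholder config
    queue_trials=True,  # queue the trial instead of erroring out immediately
)

Though since the error reports 0 CPUs available, I’m not sure queueing would actually help in my case.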

Surely I can just run an agent the way I would in plain Gym: take an observation in, update the agent, take an action, repeat? It seems Ray/RLlib is heavily geared towards distributed, batched approaches with multiple environments, but is there a way to run it just like plain old Gym?
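
For concreteness, this is the kind of loop I mean (MyAgent is a placeholder for my own agent class, not anything from RLlib):

import gym

env = gym.make("CartPole-v0")
agent = MyAgent(env.observation_space, env.action_space)  # hypothetical agent class

for episode in range(100):
    obs = env.reset()
    done = False
    while not done:
        action = agent.act(obs)  # pick an action from the current observation
        new_obs, reward, done, info = env.step(action)
        agent.update(obs, action, reward, new_obs, done)  # online, on-policy update
        obs = new_obs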


This example shows how to collect data from outside the RLlib setup; I’m sure you could adapt it to your own purposes.

source: https://docs.ray.io/en/releases-0.8.4/rllib-offline.html

import gym
import numpy as np
import os

import ray.utils

from ray.rllib.models.preprocessors import get_preprocessor
from ray.rllib.evaluation.sample_batch_builder import SampleBatchBuilder
from ray.rllib.offline.json_writer import JsonWriter

if __name__ == "__main__":
    batch_builder = SampleBatchBuilder()  # or MultiAgentSampleBatchBuilder
    writer = JsonWriter(
        os.path.join(ray.utils.get_user_temp_dir(), "demo-out"))

    # You normally wouldn't want to manually create sample batches if a
    # simulator is available, but let's do it anyways for example purposes:
    env = gym.make("CartPole-v0")

    # RLlib uses preprocessors to implement transforms such as one-hot encoding
    # and flattening of tuple and dict observations. For CartPole a no-op
    # preprocessor is used, but this may be relevant for more complex envs.
    prep = get_preprocessor(env.observation_space)(env.observation_space)
    print("The preprocessor is", prep)

    for eps_id in range(100):
        obs = env.reset()
        prev_action = np.zeros_like(env.action_space.sample())
        prev_reward = 0
        done = False
        t = 0
        while not done:
            action = env.action_space.sample()
            new_obs, rew, done, info = env.step(action)
            batch_builder.add_values(
                t=t,
                eps_id=eps_id,
                agent_index=0,
                obs=prep.transform(obs),
                actions=action,
                action_prob=1.0,  # put the true action probability here
                rewards=rew,
                prev_actions=prev_action,
                prev_rewards=prev_reward,
                dones=done,
                infos=info,
                new_obs=prep.transform(new_obs))
            obs = new_obs
            prev_action = action
            prev_reward = rew
            t += 1
        writer.write(batch_builder.build_and_reset())
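
If you want to sanity-check what was written, the batches can be read back with RLlib’s JsonReader (a quick sketch, assuming the same output directory as above):

from ray.rllib.offline.json_reader import JsonReader

reader = JsonReader(
    os.path.join(ray.utils.get_user_temp_dir(), "demo-out"))
for _ in range(5):
    batch = reader.next()  # one SampleBatch of recorded transitions
    print(batch["obs"].shape, batch["actions"].shape)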

Thanks for the answer; I’ll see what I can do with that. Since my algorithm learns online/incrementally (no replay of any kind) and is on-policy, it seems I would basically be stripping everything out and just using Gym directly in the end. How does this affect the submission process?

Also, how can I avoid using TF or PyTorch? I guess it doesn’t really matter if they’re in the configs; it just seems a bit weird to have them basically hard-coded into the competition starter files. I see I can add additional requirements in requirements.txt, but how does this affect the submission process?

Further, is there a way to avoid using RLlib entirely? What if I don’t want to perform distributed training? It all seems very rigid.

Hello @CireNeikual

For why we want submissions to use RLlib, please refer to the FAQ: Regarding rllib based approach for submissions

Updating requirements.txt won’t be enough on its own. You also need to set "docker_build": true in your aicrowd.json file. More on using custom images can be found here: https://github.com/aicrowd/neurips2020-procgen-starter-kit#submission-environment-configuration
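
A minimal aicrowd.json with that flag set would look like this (keep the other fields your starter-kit file already has; this only shows the relevant key):

{
    "docker_build": true
}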

Please make sure to include the mlflow pip package in your custom image; we can’t post evaluation updates on the GitLab issues page without it.
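
For example, one way to pull it in is to add it to your requirements.txt alongside whatever else your agent needs (the extra entry below is purely illustrative):

mlflow
# any other pip packages your agent depends on, e.g.:
# my-extra-package==1.0.0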