I am trying to get my own basic agent running. First off, I had to downgrade to Python 3.7; otherwise I got a cloudpickle error (TypeError).
I then created a custom agent by mimicking the RandomAgent described in the "Getting Started" guide.
Now I get the following error:
ray.tune.error.TuneError: Insufficient cluster resources to launch trial: trial requested 7 CPUs, 0.8999999999999999 GPUs but the cluster has only 0 CPUs, 0 GPUs, 1.37 GiB heap, 0.63 GiB objects (1.0 node:10.62.1.226). Pass queue_trials=True
in ray.tune.run() or on the command line to queue trials until the cluster scales up or resources become available.
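For reference, this is roughly what I think the error is asking for: a sketch of passing `queue_trials=True` to `ray.tune.run()` and shrinking the per-trial resource request via the config. This is an assumption based on the error message and the Ray version I'm on; the trainable name and config keys below are placeholders, not my actual setup.

```python
import ray
from ray import tune

ray.init()

# Sketch only: "PG" stands in for whatever trainable/agent name is
# actually registered; num_workers and num_gpus are reduced so the
# trial's resource request fits a single small machine.
tune.run(
    "PG",
    config={
        "env": "CartPole-v0",
        "num_workers": 1,   # fewer rollout workers -> fewer CPUs requested
        "num_gpus": 0,      # don't request the fractional GPU
    },
    queue_trials=True,      # queue the trial instead of erroring out
)
```

Even with this, I don't understand why the cluster reports 0 CPUs in the first place.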
Surely I can just run an agent like in plain Gym: take an observation in, update the agent, take an action, repeat? Ray/RLlib seems very geared toward distributed batching approaches with multiple environments, but is there a way to run it just like plain old Gym?
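To make it concrete, this is the kind of loop I mean. It is a minimal self-contained sketch: `DummyEnv` and `RandomAgent` are stand-ins I made up for a `gym.Env` and an RLlib agent, just to show the step-act-update cycle I'd like to drive myself.

```python
import random

class DummyEnv:
    """Stand-in for a gym.Env with two discrete actions and 10-step episodes."""
    def __init__(self):
        self.steps = 0

    def reset(self):
        self.steps = 0
        return 0.0  # observation

    def step(self, action):
        self.steps += 1
        reward = 1.0 if action == 1 else 0.0
        done = self.steps >= 10
        return 0.0, reward, done, {}

class RandomAgent:
    """Picks a random action; learn() would update the policy in a real agent."""
    def compute_action(self, obs):
        return random.choice([0, 1])

    def learn(self, obs, action, reward, next_obs, done):
        pass  # no-op for a random agent

# The plain-old-Gym loop: observe, act, update, repeat.
env = DummyEnv()
agent = RandomAgent()
obs = env.reset()
total_reward = 0.0
done = False
while not done:
    action = agent.compute_action(obs)
    next_obs, reward, done, info = env.step(action)
    agent.learn(obs, action, reward, next_obs, done)
    total_reward += reward
    obs = next_obs
print(total_reward)
```

Is there a supported way to drive an RLlib agent inside a loop like this, without going through Tune?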