Does anyone find that the PyTorch template for the competition reports the following warnings:
/…/ray/rllib/utils/torch_ops.py:117: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /opt/conda/conda-bld/pytorch_1595629427478/work/torch/csrc/utils/tensor_numpy.cpp:141.)
tensor = torch.from_numpy(np.asarray(item))
[W TensorIterator.cpp:918] Warning: Mixed memory format inputs detected while calling the operator. The operator will output contiguous tensor even if some of the inputs are in channels_last format. (function operator())
Any solution to this problem?
I guess TensorFlow is more compatible with Ray than PyTorch is. With the same configuration, a TF submission is about 900 s faster than PyTorch on P100 and V100, which is a big gap within the 2 hr limit.
I'm using the PyTorch template from the following links: