Now that everything is set up and working, I am having a hard time figuring out how to customize my inputs and change the config on the simulator, and the AIcrowd wrappers and their documentation are a little confusing.
Things I cannot easily find documented (I am still looking, but clarification would help):
- Why is the pose a 30-length vector? Why is it not just pitch, roll, and yaw, plus the 3-vector for the translation, giving me a 6-DOF representation of the 12-element pose transform matrix? I am talking about the pose as returned by `env.step()`.
- The config file seems to suggest that I can access segmentation maps and an overhead view, at least during training. But the AICrowd wrapper's `env.step()` only gives me access to the 30-length pose vector, not the other outputs that seem useful. How do I get access to those?
- `env.make()` exists and is called by the AIcrowd evaluator class; should I just bypass the evaluator and instantiate my own simulator for training?
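For reference, here is what I mean by the 6-DOF representation: a rigid-body pose `[R | t]` (the 12-element matrix) can be collapsed to three Euler angles plus a translation. This is just a sketch assuming a ZYX (yaw-pitch-roll) convention; the simulator's actual convention and matrix layout may differ.

```python
import numpy as np

def pose_matrix_to_6dof(T):
    """Convert a 3x4 rigid-body pose matrix [R | t] into
    (roll, pitch, yaw, tx, ty, tz).

    Assumes R = Rz(yaw) @ Ry(pitch) @ Rx(roll) (ZYX convention);
    the simulator may use a different convention or axis order.
    """
    R, t = T[:, :3], T[:, 3]
    # From the ZYX composition: R[2,0] = -sin(pitch)
    pitch = -np.arcsin(np.clip(R[2, 0], -1.0, 1.0))
    # R[2,1] = sin(roll)*cos(pitch), R[2,2] = cos(roll)*cos(pitch)
    roll = np.arctan2(R[2, 1], R[2, 2])
    # R[1,0] = sin(yaw)*cos(pitch), R[0,0] = cos(yaw)*cos(pitch)
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return np.array([roll, pitch, yaw, *t])
```

So a 6-element vector would carry the same information as the 12-element matrix (away from gimbal lock), which is why the 30-length vector is surprising to me.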
Small error in the documentation:
The docstring for `env.step()` claims it returns the pose and image as a dict, and even describes the keys, even though it actually returns a tuple.
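Until the docstring and the actual return value are reconciled, I am normalizing the output defensively. This is only a sketch: the key names `"pose"` and `"image"` are my guesses based on what the docstring describes, and the tuple ordering is an assumption.

```python
def unpack_step(result):
    """Normalize env.step() output to a (pose, image) pair.

    The docstring says dict, the code returns a tuple, so handle both.
    The dict keys ("pose", "image") and the tuple order are assumptions,
    not confirmed by the wrapper's documentation.
    """
    if isinstance(result, dict):
        return result["pose"], result["image"]
    pose, image = result[0], result[1]
    return pose, image
```

That way my training loop does not break if the wrapper is later fixed to match its docstring.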