Round 2 has started!

Dear participants,

today Round 2 of the competition starts!
The second round is open to everyone, even those who did not participate in Round 1.

As announced, we have updated both the environment and the starter kit for Round 2.

real_robots package updated to v.0.1.21

(New feature: depth camera!): While we have removed the additional observations of Round 1 (object position and segmented image), we have added a depth observation.
This has been a long-requested feature, dating back to REAL 2019. Many participants observed that, since the environment involves a shelf and the camera has a top view, it can be hard to judge depth from the RGB input alone. Given that depth cameras are nowadays a common sensor on robots and that returning the depth has no performance impact (PyBullet already computes it behind the scenes), we decided to add it. The observation dictionary now has an additional depth entry containing a 320x240 depth image.
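As a minimal sketch (assuming the Gym-style loop used in the starter kit, and that the RGB image is still exposed under the retina key as in previous releases), the new entry can be read like any other observation:

```python
import numpy as np

def rgbd_from_observation(observation):
    """Stack the RGB view and the new depth image into a single H x W x 4 array.

    `observation` is the dict returned by env.step()/env.reset().
    'retina' (the RGB image key of previous releases) is an assumption here;
    'depth' is the new 320x240 depth image added in v0.1.21.
    """
    rgb = observation["retina"].astype(np.float32)
    depth = observation["depth"].astype(np.float32)
    return np.dstack([rgb, depth])  # depth becomes the fourth channel
```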

(Fixed - missing objects): we have improved the reset mechanism for objects that fall off the table.
Previously, objects could in some cases get stuck inside the table, below the shelf: this has been fixed.

(Fixed and improved - Cartesian control): we have fixed Cartesian control, as the gripper_command was not being performed. We have also added the option to send “None” as a cartesian_command to have the robot go to the “Home” position.
Finally, we have improved the speed of Cartesian control by adding a cache mechanism (thanks @ermekaitygulov for the suggestion).
If you repeat the same cartesian_command for more than one timestep, the cached solution is reused instead of computing the inverse kinematics again.
This makes Cartesian control much faster, doubling its speed if you repeat the same cartesian_command for 4 timesteps.
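Here is a minimal sketch of how this could look in a controller, assuming the Gym-style loop from the starter kit and an action dictionary with cartesian_command and gripper_command keys (env is the environment instance created as in local_evaluation.py; the gripper vector size is an assumption):

```python
import numpy as np

def reach_then_go_home(env, target_xyz, hold_steps=4):
    """Hypothetical helper: hold one Cartesian target, then return Home.

    Repeating the same cartesian_command lets the new cache reuse the
    inverse-kinematics solution after the first timestep.
    """
    action = {"cartesian_command": np.asarray(target_xyz),
              "gripper_command": np.zeros(2)}   # assumed gripper joint values
    for _ in range(hold_steps):                 # identical command -> cached IK
        observation, reward, done, info = env.step(action)

    # Sending None as the cartesian_command moves the robot to the "Home" position.
    action = {"cartesian_command": None, "gripper_command": np.zeros(2)}
    observation, reward, done, info = env.step(action)
    return observation
```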

(Improved - Joints control): we have added the possibility to send “None” as a joint_command, which is equivalent to sending the “Home” position (all joints to zero). This makes it easier to switch between different types of control, since “Home” is always the None command.
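Under the same assumptions as the Cartesian sketch above, returning Home with joints control is just:

```python
import numpy as np

def go_home(env):
    # None as a joint_command sends every joint to zero (the "Home" position),
    # mirroring the behaviour of None as a cartesian_command above.
    return env.step({"joint_command": None, "gripper_command": np.zeros(2)})
```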

(New feature: Videos!!)
We have added the ability to record videos during the intrinsic and extrinsic phases!
In the local_evaluation.py file of the starter kit you will find a line with video = (True, True, True), where the three slots work as follows (see the combined examples after this list):

  • Intrinsic phase recording: the first True means that the intrinsic phase will be recorded.
    It will automatically record 3 minutes of the intrinsic phase: the first minute (12000 timesteps), then one minute starting at the middle of the phase, and then the last minute of the phase.
    You can set this to False to have no video of the intrinsic phase, or you can set a different interval of frames to be recorded,
    e.g. video = (interval([0, 50000], [70000, 200000]), True, True) will record the first 50k frames and then frames 70000 to 200000.
  • Extrinsic phase recording: the second True means that the extrinsic phase will be recorded.
    It will automatically record 5 trials, chosen at random.
    You can set this to False to have no video of the extrinsic phase, or you can set which trials should be recorded,
    e.g. video = (True, interval(7, [20, 30]), True) will record trial number 7 and also all the trials from 20 to 30.
  • Debug info: the third True means that debug info will be added to the videos, such as current timesteps and scores.
    This can be set either to True or False (no debug info printed on the videos).
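Putting the three slots together, here are a few example settings for that line (a sketch only; interval is assumed to be the helper referenced above, available where video is defined in local_evaluation.py):

```python
# Example settings for the `video` tuple in local_evaluation.py (pick one).
video = (True, True, True)        # record intrinsic + extrinsic phases, with debug overlay
video = (False, True, False)      # extrinsic phase only, no debug info
video = (interval([0, 50000], [70000, 200000]), True, True)  # custom intrinsic frame ranges
video = (True, interval(7, [20, 30]), True)                   # trial 7 plus trials 20 to 30
```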

REAL2020_starter_kit repository updated!

(Improvement - Baseline for Round 2) As the macro_action control is now forbidden in Round 2, we have updated the baseline to use joints control.
The baseline now produces a variable-length list of joint positions to move to, and then periodically goes back to the Home position to check what the effects of those actions were.
It scores lower than using the pre-defined macro_action, but it is still able to learn and consistently move the cube to a variety of positions.
It is also fun to watch as it twists and finds weird strategies to move the cube with all the parts of the robotic arm!
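As a conceptual sketch only (not the actual baseline code), the explore-then-check pattern described above could look roughly like this, with hypothetical joint counts, ranges, and step durations:

```python
import numpy as np

def explore_once(env, rng, max_waypoints=5, steps_per_waypoint=100):
    """Conceptual sketch of the Round 2 exploration pattern (not the baseline code).

    Execute a variable-length sequence of joint targets, then return to Home
    (joint_command=None) and look at the resulting observation.
    """
    n_waypoints = rng.integers(1, max_waypoints + 1)
    for _ in range(n_waypoints):
        target = rng.uniform(-1.0, 1.0, size=9)        # hypothetical joint count/ranges
        for _ in range(steps_per_waypoint):
            observation, _, _, _ = env.step({"joint_command": target,
                                             "gripper_command": np.zeros(2)})
    # Go back to Home and observe the effect of the action sequence.
    for _ in range(steps_per_waypoint):
        observation, _, _, _ = env.step({"joint_command": None,
                                         "gripper_command": np.zeros(2)})
    return observation

# Usage sketch: rng = np.random.default_rng(0); explore_once(env, rng)
```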

(Improvement - pre-trained VAE) The baseline now automatically saves its Variational Autoencoder after training in two folders (trained_encoder and trained_decoder).
We have added a new parameter in baseline/config.yaml: it is now possible to set pre_trained_vae: true, and the previously trained VAE will then be loaded in subsequent runs.
This is especially beneficial on computers without a GPU, where training the VAE takes a very long time.

As always, feel free to use the baseline as a starting point to develop your submissions, and enjoy the competition.
We look forward to your submissions! :slight_smile:
