Baseline question

Hello! A question about the ‘percentage_of_actions_ignored_at_the_extremes’ parameter.
As I understand it, this parameter lets us drop the least relevant distances. Should it be np.linspace(actions_to_remove, len(self.actions) - 1, …) or np.linspace(0, len(self.actions) - 1 - actions_to_remove, …) instead of np.linspace(actions_to_remove, len(self.actions) - 1 - actions_to_remove, …) in abstractor.py:

        for i in range(condition_dimension):
            sup = ordered_differences_queues[i].get_queue_values()
            for j in np.linspace(actions_to_remove, len(self.actions) - 1 - actions_to_remove, config.abst['total_abstraction']).round(0):
                self.lists_significative_differences[i] += [sup[int(j)]]

? :slight_smile:

Dear @ermekaitygulov,
that code divides all the (ordered) differences into 200 abstraction levels, ignoring some of the differences at both extremes.
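To make the symmetric trimming concrete, here is a minimal standalone sketch of the index selection (variable names like `differences` and the values `total_abstraction = 200` and `actions_to_remove = 50` are illustrative assumptions, not the baseline's actual configuration):

```python
import numpy as np

# Stand-in for one ordered_differences_queues[i] queue: sorted differences.
differences = np.sort(np.random.rand(1000))
total_abstraction = 200   # assumed value of config.abst['total_abstraction']
actions_to_remove = 50    # assumed count derived from the 'percentage_...' parameter

# Sample 200 evenly spaced indices, skipping `actions_to_remove`
# entries at BOTH ends of the ordered list of differences.
idx = np.linspace(actions_to_remove,
                  len(differences) - 1 - actions_to_remove,
                  total_abstraction).round(0).astype(int)
levels = differences[idx]

# The first and last sampled indices sit exactly `actions_to_remove`
# positions in from each end:
assert idx[0] == actions_to_remove
assert idx[-1] == len(differences) - 1 - actions_to_remove
assert len(levels) == total_abstraction
```

So the existing expression already trims both ends at once, whereas each of the two alternatives in your question would trim only one end.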

We no longer use that ‘percentage_of_actions_ignored_at_the_extremes’ parameter in the current baseline (it is set to 0).

However, we found it to be useful in previous versions of the baseline, when we used the object positions instead of the images+VAE for planning.
Empirically, we found that the smallest differences were due to environment noise (i.e. object positions changed very slightly between two observations even when the robot had not touched them), and that the largest differences were not useful as abstraction levels either (i.e. if you conflate positions that are too different from each other, the planned actions no longer work), so it paid off to remove both ends.

PS: Welcome to the competition! :slight_smile:


@ermekaitygulov can I team up with you? Could we talk about this? My email is ngthanhtinqn@gmail.com

Hello! Sorry, I’m already in a team.