Potential bug in environment code: should soc always be restricted to <= capacity?

Is it true that soc (and soc_init) should always be no larger than capacity?

If this holds, then I think I may have found a bug in energy_model.py (otherwise, please ignore the rest of this post).

In the method get_max_input_power, soc_normalized should lie within [0, 1]. However, there are cases where self.soc_init ends up slightly larger than self.capacity, so soc_normalized becomes something like 1.00000001 rather than exactly 1.0. That tiny difference can completely change the resulting idx: soc_normalized == 1.0 leads to idx == 1, while 1.00000001 leads to idx == 0, which changes the computed power and, from there, the rest of the episode.
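For illustration, here is a minimal, self-contained sketch of the failure mode. It is not the actual CityLearn code; it only assumes that get_max_input_power picks a power-curve segment with something like an argmax over normalized-soc breakpoints:

```python
import numpy as np

# Hypothetical normalized-soc breakpoints of a capacity power curve.
breakpoints = np.array([0.0, 0.8, 1.0])

def segment_index(soc_normalized):
    # First breakpoint >= soc_normalized, shifted down by one segment.
    # If soc_normalized > 1.0, no breakpoint matches and argmax falls back to 0.
    return max(0, int(np.argmax(soc_normalized <= breakpoints)) - 1)

print(segment_index(1.0))         # 1: last segment of the curve
print(segment_index(1.00000001))  # 0: first segment, so a very different max power
```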

[To reproduce the issue]
Using building_1, apply the following action sequence within an episode:

action_series = [-0.6781939, 0.58110934, -0.73443955, 0.18472742, 0.18293902, -0.65686876, 0.5656311, -0.5761771, -0.46125087, 0.10670366, 0.07985443, 0.81957483, 0.9780224, 0.5936974]

You'll get an incorrect energy computation at the last time step, because the current soc (6.399877760440494) is slightly larger than the current capacity (6.399874036444029).
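A rough reproduction sketch is below. The schema path, the action shape, and the attribute names on the storage device are assumptions and may need adjusting to your local competition setup:

```python
from citylearn.citylearn import CityLearnEnv

# Hypothetical schema path pointing at a dataset that contains building_1.
env = CityLearnEnv('path/to/schema.json')
observations = env.reset()

action_series = [-0.6781939, 0.58110934, -0.73443955, 0.18472742, 0.18293902,
                 -0.65686876, 0.5656311, -0.5761771, -0.46125087, 0.10670366,
                 0.07985443, 0.81957483, 0.9780224, 0.5936974]

for a in action_series:
    # One storage action per building; only building_1 is driven here.
    actions = [[a]] + [[0.0] for _ in env.buildings[1:]]
    observations, reward, done, info = env.step(actions)

battery = env.buildings[0].electrical_storage
# At the last step the stored soc drifts slightly above the degraded capacity,
# e.g. soc = 6.399877760440494 > capacity = 6.399874036444029.
print(battery.soc[-1], battery.capacity)
```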

Another option is to modify get_max_input_power so that soc_normalized = np.clip(self.soc_init / self.capacity, 0, 1). Running a random agent and doing a control/treatment comparison against the unmodified function, you will easily see different observations across episodes.
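A sketch of where that clip would go (only the normalization line is shown; the rest of get_max_input_power is unchanged and omitted here):

```python
import numpy as np

def get_max_input_power(self):
    # Guard against soc_init drifting marginally above the degraded capacity.
    soc_normalized = np.clip(self.soc_init / self.capacity, 0.0, 1.0)
    ...  # continue with the existing power-curve lookup using soc_normalized
```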

[Suggestions to fix the bug]

  1. Clip soc_normalized in get_max_input_power;
    Or
  2. Since capacity is updated (degraded) every time step after soc and energy_balance are updated, it can end up degraded to a value smaller than soc. I therefore suggest clamping soc every time capacity is updated, so that soc <= capacity holds at all times (see the sketch after this list).
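
A minimal sketch of suggestion 2, assuming soc is stored as a scalar attribute and that there is a per-step capacity-degradation update (the method and attribute names here are illustrative, not necessarily the ones in energy_model.py):

```python
def degrade_capacity(self, capacity_loss):
    # Existing behaviour: capacity is degraded once per time step,
    # after soc and energy_balance have been updated.
    self.capacity -= capacity_loss
    # Proposed addition: clamp soc right after the degradation so that
    # soc <= capacity always holds and soc_init / capacity never exceeds 1
    # in later calls such as get_max_input_power.
    self.soc = min(self.soc, self.capacity)
```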

Great observation!
@dipam @kingsley_nweye what do you think? Thank you


Hi @xiren_zhou @mansur, we are aware of this bug and have raised an issue for it here: https://github.com/intelligent-environments-lab/CityLearn/issues/29. It will likely not be fixed before the competition ends.
