Important Update

:warning: Update (October 10th, 2025)
This announcement has been superseded by a newer post:
:point_right: Phase 2 — We hear you ! Here’s how we’re updating the plan
Please refer to the latest update for the final Phase 2 format, dataset, and scoring details.
(This original announcement remains for archival reference only.)

Please note some key updates about the challenge:

Phase 1 (Competition Phase)

  • Ends on Sunday, 19 October 2025.
  • Includes submission of the Solution Documentation.

Phase 2

  • Expected to open between 20 and 22 October 2025, 23:59 UTC.
  • The Solution Documentation deadline is extended to this window.

Phase 2 Format

  • Multiple one-year datasets will be sliced into shorter, equal-length context windows.
  • Slices will include a mix of previously seen and new sites.
  • In Phase 2, participants will make a single prediction per time series, to prevent look-ahead (see the sketch after this list).
  • Phase 2 is designed to ensure fairness and to reward models that generalise, are context-aware, and transfer across sites.
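
Not an official specification, but a minimal sketch of the slicing described above, assuming the one-year series lives in a pandas Series and using a hypothetical `window_len`; the real slice length and prediction interface will only be defined when the Phase 2 starter kit is published:

```python
import pandas as pd

def slice_into_windows(series: pd.Series, window_len: int) -> list[pd.Series]:
    """Split a one-year series into consecutive, equal-length context windows.

    `window_len` is a placeholder; the actual slice length will be announced
    when Phase 2 opens. Any leftover tail shorter than `window_len` is dropped.
    """
    n_slices = len(series) // window_len
    return [series.iloc[i * window_len:(i + 1) * window_len] for i in range(n_slices)]

# Hypothetical usage: one prediction per slice, using only the context inside that slice.
# `model.predict(context)` stands in for whatever interface the starter kit defines.
# predictions = [model.predict(context) for context in slice_into_windows(site_series, window_len=336)]
```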

Scoring and Final Ranking

  • The final ranking will factor in two components with equal weight:
    1. the score from the Phase 1 private dataset (SiteF), and
    2. the score from Phase 2.
  • Within Phase 2, scores will be the equal-weighted average across all slices and sites (a short sketch of the combination follows this list).
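
For concreteness, a minimal sketch of that combination, assuming both components are already on a comparable scale; the official scoring script to be published with the starter kit is authoritative:

```python
import numpy as np

def final_score(phase1_private_score: float, phase2_slice_scores: list[float]) -> float:
    """Equal-weighted combination of the Phase 1 private score (SiteF) and the
    Phase 2 score, where the Phase 2 score is itself the equal-weighted average
    over all (slice, site) scores."""
    phase2_score = float(np.mean(phase2_slice_scores))
    return 0.5 * phase1_private_score + 0.5 * phase2_score
```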

Scoring Code and Starter Kit

  • We will publish the scoring script and a minimal code sample in the starter kit.
  • These materials will be available when Phase 2 opens.

Challenge Description Clarification

  • Participants must predict each timestamp t₀ using only inputs where t_input ≤ t₀ (see the sketch after this list).
  • Training data includes ground-truth time series and demand response flags.
  • Models should learn consumption patterns both when demand response is inactive and when it is active.
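
A minimal sketch of the no-look-ahead rule, assuming the inputs (ground-truth history, demand response flags, and any covariates) sit in a timestamp-indexed pandas DataFrame; the column names and `model` object below are hypothetical, not part of the challenge API:

```python
import pandas as pd

def usable_inputs(inputs: pd.DataFrame, t0: pd.Timestamp) -> pd.DataFrame:
    """Return only the rows allowed when predicting timestamp t0.

    Per the clarification, the prediction for t0 may use any input whose
    timestamp t_input satisfies t_input <= t0, and nothing later.
    """
    return inputs.loc[inputs.index <= t0]

# Hypothetical usage:
# history = usable_inputs(site_df[["consumption", "dr_flag"]], t0=pd.Timestamp("2025-06-01 12:00"))
# y_hat = model.predict(history)
```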

2 Likes

Will the current leaderboard reset?

2 Likes

How will participants be able to perform feature engineering in Phase 2? It sounds like the structure of the data will be quite different from Phase 1, which may cause problems.

5 Likes

It's not ideal if you change the structure of the data itself; some of my lookback features would no longer be valid.

If the organizers do want to go this way, they should at least tell us the context size available for each slice, so that we can design features to fit this schema.

1 Like

The current description of 'Phase 2' lacks details, so is it appropriate to give competitors only 2-3 days to participate in it? Additionally, regarding 'The final ranking will factor in two components with equal weight': competitors have spent a significant amount of time (perhaps more than a month) on the current so-called 'Phase 1', while Phase 2 lasts only 2-3 days. Furthermore, competitors currently cannot fully understand the scope of Phase 2. Is this appropriate?

I strongly oppose the introduction of an additional competition phase at this stage for the following reasons:

  1. Compromised Integrity and Authority: For a legitimate competition, the organizers should establish the competition’s objectives and evaluation rules from the outset. Randomly modifying rules based on participant questions undermines the organizers’ authority and raises significant doubts about the competition’s integrity.

  2. Lack of Transparency and Hasty Implementation: The organizers appear to have hastily published a so-called “Important Update” while omitting crucial details. For a public and transparent competition, this is unacceptable, especially when introducing an additional competition phase that accounts for 50% of the final score, so close to the competition’s end. If new questions arise from participants after the details of this additional phase are released, will the competition be further extended and rules modified again?

  3. Inconsistency with Original Design and Documentation: According to GitLab records, the competition phase was originally designated as Phase 2. Furthermore, the competition overview never mentioned any additional phases.

  4. Unfair Burden on Participants: Participants have already invested a significant amount of time in the current competition phase. This additional phase introduces entirely different data formats, and even new site data. With only three days provided, participants are expected to submit additional code and documentation. This is extremely unfair to those who cannot dedicate extra time during this short period.

Regarding the suggestion from some participants to evaluate only Site F: this would provide very little assessment of model generality. I propose that the organizers use only a portion of Site E's data for the public leaderboard evaluation, and reserve the remaining Site E data, along with Site F, for the private leaderboard.

1 Like

@snehananavati Could the organizers please provide clarity to all competitors?

2 Likes
