🏆 Generative Interior Design Challenge: Top 3 Teams

Dear Teams,

Thank you for participating in the Generative Interior Design Challenge! We are excited to announce the top three teams selected by an expert jury to advance to the final competition phase, which will take place on April 17 at the Machines Can See Summit in Dubai.

Here is the selection procedure we followed:

  • Phase 1 (Jan 30 - Apr 1): Ranking based on the public test. All teams scoring above the baseline were selected for the next phase.
  • Phase 2 (Apr 2 - Apr 3): Ranking based on the private test, with the top five teams advancing to the next phase for jury review.
  • Phase 3 (Apr 4 - Apr 5): The expert jury ranked and selected the top three teams. Each jury member repeatedly chose the best result among five generated images, across six room categories and three empty scenes per category. The names of the teams were concealed during the voting process. The three teams with the highest number of votes were chosen to proceed to the final phase.

Our jury consisted of experts in interior design, real estate development, and artificial intelligence.

As a result of Phases 1 and 2, the top five teams selected (in alphabetical order) are: Decem, EVATeam, Saidinesh_pola, StableDesign, and XenonStack.

Finally, the top three teams selected by the jury for Phase 3 (in alphabetical order) are:

These teams are now officially selected for the award. Congratulations!

We would like to note that the top three teams selected by the jury also rank among the top four on the public leaderboard of the competition.

We extend our thanks to all participating teams and look forward to the last competition phase on April 17 in Dubai. There, the final ranking will be determined jointly by the expert jury and the audience at the Machines Can See Summit.

Congratulations again, and we look forward to seeing everyone at Machines Can See on April 17th at the Museum of the Future!

Best wishes,
The Generative Interior Design Challenge Organizing Team


This was my first time participating in a challenge on AICrowd. The lack of transparency and organization experienced here will sadly make it my last; there is no point investing any more time. Best of luck to the winners.


I understand your position. This competition was novel; I have never seen a competition that tries to judge generated images. Scores can be subjective, and the seed of the model can change a generated image a lot.
I would say the organizers did a good job organizing the competition. Still, there were some flaws, like the lack of example scored images (to better understand what realism and functionality mean to the annotators). Also, 3 exemplar images were not enough (that was later fixed).

As a team, we found some correlation between the quality of generated images and the scores (though it was not perfect); in particular, keeping the geometry right had a big impact on the score. The ‘prompt consistency’ criterion was also simple to follow.

I hope the organizers will publicly share some lessons learned from this competition, as it could be useful for the whole GenAI community.


@bartosz_ludwiczuk It can be objectively seen that the Xenon team hasn’t placed any of the objects requested in the prompt. Also, their geometry is off, as you pointed out in your previous discussion post. So it’s hard to understand why the rankings suddenly changed.


Congratulations to the winners! Please also post this in Discord; we had no idea this post existed.


@lavanya_nemani you cannot judge performance on the test dataset based on 3 public images. Since the scores are really close, the annotators’ preferences can shift a little bit.

I agree that XenonStack has the weakest geometry among the top 4. That is why I hope the organizers will share lessons learned about their evaluation process.
