Use of external data

I was wondering if the use of external data is allowed for this challenge (and for the other challenges of the Blitz as well)?

If I were to find the annotated dataset online, it would be fairly easy to overfit the test set and get a 100% F1 score with it, although that would not be very interesting… Would something like this be allowed? Or does it qualify as “unfair activities”?

@gaetan_ramet : That does indeed qualify as “unfair activity”.

To help put this in perspective, AIcrowd Blitz is an educational initiative. The goal here is to encourage community members who are interested in learning about ML to use these problems as short-term goals. Some of the problems in AIcrowd Blitz (and in the broader educational initiatives of AIcrowd) will be based on textbook toy problems, publicly available datasets, etc. Hence, someone willing to try to cheat on a problem in these educational initiatives might very well be able to do so. That is definitely against the spirit of this event.

Also, at the end of the competition, before claiming any of the token prizes, participants are expected to publicly share the code/notebooks they used to generate their final predictions. If they do not, we will unfortunately have to disqualify the submission in question and cite the reason for doing so (something which stays on public record forever).

I like to think of these educational problems as a set of weekly/monthly milestones that we share with all of our community, especially the young members who are very new to the field. These milestones will hopefully help everyone stay consistent while they learn various aspects of AI and ML (something similar to the daily health goals many people like to set: drink N glasses of water, take M steps, etc.).

The centralized leaderboard exists to help everyone see the broad progress of everyone else who is in the same boat as them. Now, if some participants can only gain the satisfaction of solving a problem by cheating, then it is mostly their own loss!

That said, as soon as the competition ends on May 17th, we will internally coordinate with all the top participants in the competition to help create a shared set of resources that others can learn from. I believe that will also help us remove and disqualify submissions which have categorically cheated in the competition.

Cheers,
Mohanty

PS: For participants whose only objective is to cheat, here is the original source of the dataset: https://archive.ics.uci.edu/ml/datasets/Diabetic+Retinopathy+Debrecen+Data+Set
(Also cited in the actual challenge references section)


Thanks for the extensive answer! I’m glad you are seeing it the same way I do :slight_smile:

A few more questions that are more or less linked to this:

  • Is an algorithmic solution (i.e. involving no learning/AI/ML) acceptable? I’m thinking mostly about the PKHND challenge, where it’s obvious that it can be solved without ML

  • Regarding the publication of solutions, does that mean the solutions will be tested for reproducibility? And if yes, is 100% reproducibility necessary for a solution to be acceptable?

  • How will the publication of the solutions work? Through posts in the discussions, Github repos and/or something else?

@gaetan_ramet: Algorithmic solutions are acceptable. In fact, we wish computationally feasible algorithmic solutions existed for all ML problems :smiley:
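To make “algorithmic solution” concrete, here is a minimal rule-based sketch of a poker-hand classifier, assuming a problem like PKHND follows the classic UCI Poker Hand encoding of five (suit, rank) pairs and class ids 0–9; the function name and encoding details are illustrative, not official starter code:

```python
# Minimal rule-based (no ML) poker-hand classifier sketch.
# Assumed encoding (UCI Poker Hand style): five (suit, rank) pairs,
# ranks 1-13 with Ace = 1, classes 0-9 (0 = nothing ... 9 = royal flush).
from collections import Counter

def classify_hand(cards):
    """cards: list of five (suit, rank) tuples -> class id 0-9."""
    suits = [s for s, _ in cards]
    ranks = sorted(r for _, r in cards)
    counts = sorted(Counter(ranks).values(), reverse=True)

    is_flush = len(set(suits)) == 1
    is_royal = ranks == [1, 10, 11, 12, 13]               # Ace-high straight
    is_straight = ranks == list(range(ranks[0], ranks[0] + 5)) or is_royal

    if is_flush and is_royal:    return 9   # royal flush
    if is_flush and is_straight: return 8   # straight flush
    if counts == [4, 1]:         return 7   # four of a kind
    if counts == [3, 2]:         return 6   # full house
    if is_flush:                 return 5   # flush
    if is_straight:              return 4   # straight
    if counts == [3, 1, 1]:      return 3   # three of a kind
    if counts == [2, 2, 1]:      return 2   # two pairs
    if counts == [2, 1, 1, 1]:   return 1   # one pair
    return 0                                # nothing
```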

100% reproducibility is ideal, but we understand if some variation in the result is present. The internal team will use its best judgement to handle any arbitration related to this. In some future competitions we will also collect code as part of the evaluation process (like the numerous research challenges we run on AIcrowd).
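For what “reproducibility” looks like in practice, a minimal sketch, assuming a NumPy/scikit-learn style workflow (the seed value and model choice are purely illustrative):

```python
# Fix the random seeds so repeated runs produce the same predictions.
import random
import numpy as np
from sklearn.ensemble import RandomForestClassifier

SEED = 42
random.seed(SEED)       # Python's built-in RNG
np.random.seed(SEED)    # NumPy's global RNG

# Pass the seed explicitly wherever the library accepts one.
model = RandomForestClassifier(n_estimators=200, random_state=SEED)
```

Even with fixed seeds, some variation can remain (multi-threading, GPU kernels, differing library versions), which is exactly the kind of residual variation referred to above.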

We are finalizing the guidelines for publication of the solutions, but they will all be aggregated in a single repository with proper credit attribution (via git authorship). We will release them soon.

Cheers,
Mohanty

Thanks for all the answers and also for organizing this event!

Looking forward to the publication of the guidelines! :slight_smile:

Cheers


Since @mohanty mentioned that AIcrowd Blitz is an educational initiative whose goal is to encourage community members who are interested in learning about ML to use these problems as short-term goals:
Releasing the test file publicly is not a very good option, because people who have been trying ML/DL or mathematical solutions might get discouraged by the top scores on a problem’s leaderboard. For example, in the ORIENTME problem, if the test set had not been released along with the training set, there is no way such low scores could have been obtained. A participant trying different ML/DL models will know within a day or two that such a low score cannot be achieved, and would then surely resort to the “unfair activity” that goes against the “spirit of the event”. (I was considering data augmentation for this problem, but looking at the leaderboard it is not worth putting time into it.)

Once this competition ends, the people you hoped to encourage might not participate in the next event because of the “unfair activities” that have happened in this one, all because the “test set” was released in the first place. In every other ML competition I have taken part in, evaluation happens internally on a held-out “test set” for exactly this reason. People will always find a way to scam and then provide supporting code that follows your guidelines perfectly.

Giving the link to the public dataset and taking the moral high ground is not a good approach; all of this could have been avoided if the dataset had not been released in the first place.

PS: Not all participants come here to cheat; they end up cheating because of your mistake.


Hi @O_O,

Thank you for the thoughtful comment, and I personally understand where it comes from.
And I was expecting something along these lines when I wrote my previous comment.

Much of your comment is based around the idea of “releasing the test set”.

In the context of DIBRD, the actual test set (with labels) is one Google search away. When I included the link, it was also an attempt at being more transparent. Security through obscurity is a myth! A handful of people who were already abusing the dataset had an unfair advantage which many others had no clue about. Even in a completely competitive setup, I would be a lot more comfortable knowing that everyone had equal access to all the information.

Coming to ORIENTME: great suggestion about not releasing the test set. In fact, many of the research challenges we run do not release the test set at all. But given that those competitions expect code submissions (imagine elaborate code repositories with exotic software runtimes), they have a huge barrier to entry for participants, especially those quite new to the field.
More so, in the context of ORIENTME, have you considered that the insight into how the test data can be used cleverly was perhaps exactly the takeaway we hoped participants would arrive at independently? Historically, many hard problems and many amazing results have had a key moment like that: a simple solution hugely outperforms the obvious, more sophisticated solution. As mentioned in this response, one of my favorite examples of this phenomenon is the result detailed in this paper: https://arxiv.org/pdf/1505.04467.pdf

And thank you for highlighting our faults; we happily acknowledge that every new challenge we run is a huge learning experience for us, and we will continue to try to improve the experience we offer participants. Thank you for all the feedback here; we will weigh it in the design process for the next iteration of AIcrowd Blitz.

Thanks,
Mohanty
