[Round-2 Update] $400 AWS Credits Per Team - How To Win & Claim Them

:wave: Hello there,

As mentioned in the Round-2 announcement, Multi-Agent Behavior Challenge participants who make a submission in Round 2 can win up to $400 in AWS credits.

:raised_hand_with_fingers_splayed: Keep reading to see how you can claim AWS credits for yourself or for your team :point_down:

:white_check_mark: Eligibility

  • You (or your team) must have scored higher than the target scores below on at least one of the two tasks.
  • On the Ant & Beetle Video Data task, you must beat a mean F1 score of 0.606. Here’s the baseline for the Ant & Beetle Task, which achieves a mean F1 score of 0.591.
  • On the Mouse Triplet Video Data task, you must beat a mean F1 score of 0.306. Here’s the baseline for the Mouse Triplet Task, which achieves a mean F1 score of 0.292.
  • Each participant (or team) can claim a $200 credit code for each target beaten (up to $400 total).

Please note:

  • We will be sharing 50 × $200 = $10,000 worth of AWS credits. Codes will be distributed on a first-come, first-served basis, and distribution will cease once all 50 codes (each worth $200) have been claimed.
  • Teams claiming a code should nominate one member to represent the team; the code will be emailed to that person.

:computer: How to redeem?

  • Please share the following as a reply to this thread to receive your code.

Team name (if relevant):

Submission id (the one that beats baseline as defined above):

How much did you improve over the relevant baseline score?:

A brief intro about you (We and the other participants would love to know what brought you to this challenge):

:shield: We will be sending out the AWS credits every week after verifying the details. After receiving your code, you can go to this website and claim your credits.

If you haven’t gone through the baselines, check them out here :point_down:

  1. :mouse2: Mouse Triplet Video Task Baseline

  2. :ant: Ant Beetle Video Task Baseline

All the best! :+1:

Team name (if relevant): The_Yangs

Submission id (the one that beats baseline as defined above): #180806

How much did you improve over the relevant baseline score?: Mean F1 score 0.287

A brief intro about you (We and the other participants would love to know what brought you to this challenge): We are working on Annolid, an open-source annotation and instance segmentation-based package for multiple-animal tracking and behavior analysis: GitHub - healthonrails/annolid.

Hi, I beat the baselines on both tasks.
Ant & Beetle
#183770
Mean F1 Score: 0.557 → 0.580
Mouse Triplet
#183189
Mean F1 Score: 0.217 → 0.230

I am a PhD student with a broad interest in machine learning, including computer vision tasks.

For mouse triplet:
Submission id: 184298
How much did you improve over the relevant baseline score?: 0.217 → 0.258
A brief intro about you: I have a bit of spare time before I start full-time work, having just finished my studies.