We are constantly trying to improve this challenge for you and would appreciate any feedback you might have!
Please reply to this thread with your suggestions and feedback on making the challenge better for you!
- What have been your major pain points so far?
- What would you like to see improved?
Hello! Has the challenge started? Its page lists the start date as Aug 16th, but on the AIcrowd Challenges page it only shows up when the “Starting soon” category filter is applied.
Incentives for participants who contribute blogs or tutorials on multi-agent DRL would be a good way to increase both participation and the quality of submissions.
What is the maximum team size? We have a group of students who want to participate in this challenge. Would it be possible to raise the limit on team members?
We are starting to test attention mechanisms, and we get no evaluation result with either the evaluation.py file or the local_evaluation.py file. We start from a configs.py file that works without attention and then set the attention parameters: we deactivate use_lstm and enable use_attention. The evaluation.py results can be seen here: 006 – Google Drive. There is also a sample of the output both when submitting and when running local evaluation. Submission: error_submission_oct_18.txt - Google Drive. Local evaluation: output (1).txt - Google Drive
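For reference, here is a minimal sketch of the model-config change described above, using RLlib-style keys. Only the `use_lstm`/`use_attention` toggles come from the post; the `attention_*` hyperparameter names are RLlib's documented attention settings and the exact structure of the challenge's configs.py may differ.

```python
# Hypothetical model-config fragment (RLlib-style keys); the real
# configs.py in the challenge starter kit may be structured differently.
model_config = {
    "use_lstm": False,       # deactivate the recurrent (LSTM) wrapper
    "use_attention": True,   # enable the attention (GTrXL) wrapper instead
    # Attention hyperparameters (RLlib's documented keys; values are examples):
    "attention_num_transformer_units": 1,
    "attention_dim": 64,
    "attention_num_heads": 1,
    "attention_head_dim": 32,
    "attention_memory_inference": 50,
    "attention_memory_training": 50,
}

# Sanity check: the two wrappers are mutually exclusive, so only one
# of use_lstm / use_attention should be enabled at a time.
assert not (model_config["use_lstm"] and model_config["use_attention"])
```

If evaluation silently produces no result, it is worth checking that the evaluation environment runs an RLlib version that actually supports these `attention_*` keys, since older versions ignore or reject them.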