I really hope someone from AIcrowd can help. I am having real trouble with submissions to this challenge: they just don't seem to run. Some failures have been obvious (trying to import libraries I haven't pip installed), but others I am totally stuck on.
I have stripped the notebook back to what I believe it needs to be, and I am using the assets directory properly (I think), but still no joy.
Could someone from the team please advise why my latest submissions are failing?
I am looking into this issue and will get back to you asap.
I looked into the notebook you submitted in submission #171169, and the error in our internal logs was `IndexError: index 0 is out of bounds for axis 0 with size 0`, which occurred on the line `X1.append(df_saved_vocab[df_saved_vocab['word']==t]['min'].values[0])`. I tried executing the notebook locally with the private data and found that the array returned by `df_saved_vocab[df_saved_vocab['word']==t]['min'].values` had no elements (the filter matched nothing), so when you tried to get the element at index 0 with `.values[0]`, it raised the `IndexError` above.
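A tiny, self-contained reproduction of that failure mode (the DataFrame here is a stand-in with the same `word`/`min` columns mentioned in the thread; `zzz` stands in for any token absent from the saved vocabulary):

```python
import pandas as pd

# Stand-in vocabulary with the same columns as df_saved_vocab.csv
df_saved_vocab = pd.DataFrame({"word": ["hello"], "min": [0.1]})

# Filtering for a token that is not in the vocabulary yields an empty array
vals = df_saved_vocab[df_saved_vocab["word"] == "zzz"]["min"].values
print(vals.size)  # 0 -- the boolean filter matched no rows

try:
    vals[0]  # indexing an empty array raises IndexError
except IndexError as e:
    print(e)  # index 0 is out of bounds for axis 0 with size 0
```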
In the Generate Predictions On Test Data phase of evaluation, we use private test data of over 20k samples to generate the predictions. Your df_saved_vocab.csv contains values for only 95 different words, which can be insufficient given that our private test data has over 20k samples.
What I would suggest trying:
- You can generate the embeddings on the fly in the Prediction phase of the notebook.
- You can generate the embeddings of the top n most common English words and then save them in the `df_saved_vocab.csv` file.
- You can also wrap the `.values[0]` lookup in a try/except block, so that if this error occurs you append a placeholder value to the list instead.
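The try/except option above could look something like the sketch below. The helper name and the fallback value of `0.0` are just illustrative (they are not part of the original answer); pick whatever default makes sense for your features.

```python
import pandas as pd

# Stand-in vocabulary with the same 'word'/'min' columns as df_saved_vocab.csv
df_saved_vocab = pd.DataFrame({"word": ["hello", "world"], "min": [0.1, 0.2]})

def lookup_min(token, vocab, fallback=0.0):
    """Return the saved 'min' value for token, or fallback if unseen."""
    try:
        return vocab[vocab["word"] == token]["min"].values[0]
    except IndexError:  # token not in the saved vocabulary
        return fallback

# Unseen tokens no longer crash the prediction loop
X1 = [lookup_min(t, df_saved_vocab) for t in ["hello", "unseen_token"]]
```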
I hope this explanation helps. Let me know if you have any more doubts.
That’s amazingly helpful - thank you. Makes total sense.
All the best
Also, sorry, just checking: I could not see the error you found (`IndexError: index 0 is out of bounds for axis 0 with size 0`) in the logs on that submission. Am I missing something? Did you find it there, or somewhere else?
Unfortunately, the logs for the Generate Predictions On Test Data part of the evaluation remain private and are only available to admins, because of the private test data used in that part of the evaluation.
No problem - that makes sense. Thanks a lot for explaining my issues.
All the best.