Help! Submission stuck at ‘submitted’

My submission #284190 is stuck at ‘submitted’. The website shows ‘submitted’, but the detail page shows ‘Failed’. Why is there such an inconsistency? And why is there no error log for me?

The error log is as follows.

```
2025-05-11 20:43:56.134
Traceback (most recent call last):

> File "/home/aicrowd/aicrowd_server.py", line 22, in <module>
    result = run_server()
             └ <function run_server at 0x7fc58aecc900>

  File "/home/aicrowd/aicrowd_server.py", line 16, in run_server
    return aicrowd_evaluator.serve()
           │                 └ <function AIcrowdEvaluator.serve at 0x7fc4853111c0>
           └ <evaluator.AIcrowdEvaluator object at 0x7fc58af85d50>

  File "/home/aicrowd/evaluator.py", line 62, in serve
    raise e

  File "/home/aicrowd/evaluator.py", line 53, in serve
    runner.generate_predictions()
    │      └ <function EvaluationRunner.generate_predictions at 0x7fc4848be0c0>
    └ <evaluation_utils.EvaluationRunner object at 0x7fc4850338d0>

  File "/home/aicrowd/evaluation_utils.py", line 301, in generate_predictions
    raise e

  File "/home/aicrowd/evaluation_utils.py", line 298, in generate_predictions
    self.evaluator.generate_agent_responses(progress_tracker.update_progress)
    │    │         │                        │                └ <function ProgressTracker.update_progress at 0x7fc4848bdee0>
    │    │         │                        └ <evaluation_utils.ProgressTracker object at 0x7fc47df98ad0>
    │    │         └ <function CRAGEvaluator.generate_agent_responses at 0x7fc4848bd440>
    │    └ <evaluation_utils.CRAGOnlineEvaluator object at 0x7fc484c36750>
    └ <evaluation_utils.EvaluationRunner object at 0x7fc4850338d0>

  File "/home/aicrowd/starter_kit/local_evaluation.py", line 248, in generate_agent_responses
    agent_responses = self.truncate_agent_responses(agent_responses) # Truncase each response to the maximum allowed length (75 tokens)
    │    │                        └ None
    │    └ <function CRAGEvaluator.truncate_agent_responses at 0x7fc4848bd760>
                      └ <evaluation_utils.CRAGOnlineEvaluator object at 0x7fc484c36750>

  File "/home/aicrowd/starter_kit/local_evaluation.py", line 425, in truncate_agent_responses
    encodings = self.tokenizer.encode_batch(agent_responses)
                │    │         │            └ None
                │    │         └ <method 'encode_batch' of 'tokenizers.Tokenizer' objects>
                │    └ Tokenizer(version="1.0", truncation=TruncationParams(direction=Right, max_length=75, strategy=LongestFirst, stride=0), paddin...
                └ <evaluation_utils.CRAGOnlineEvaluator object at 0x7fc484c36750>

TypeError: argument 'input': 'NoneType' object cannot be converted to 'Sequence'
```
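
If I read the trace correctly, the innermost frame shows the evaluator's truncation step calling `self.tokenizer.encode_batch(agent_responses)` with `agent_responses = None`, and the Hugging Face `tokenizers` library rejects `None` because it expects a sequence of strings. A minimal sketch that appears to reproduce the same TypeError (the checkpoint name below is only an illustrative example, not the tokenizer the evaluator actually loads):

```python
# Minimal sketch reproducing the TypeError from the log, assuming the standard
# Hugging Face `tokenizers` package. The checkpoint name is illustrative only.
from tokenizers import Tokenizer

tokenizer = Tokenizer.from_pretrained("bert-base-uncased")

# A valid batch (a list of strings) encodes without problems.
encodings = tokenizer.encode_batch(["first response", "second response"])
print(len(encodings))  # -> 2

# Passing None instead of a list raises the same error seen in the trace:
# TypeError: argument 'input': 'NoneType' object cannot be converted to 'Sequence'
tokenizer.encode_batch(None)
```

So the tokenizer itself seems fine; it just received `None` where it expected a list of response strings.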

I suspect the status inconsistency is caused by some transient evaluator error.
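
In case it helps anyone hitting the same trace: the simplest defence on the agent side is to make sure the batch-generation method always returns a list of strings and never `None`. A sketch with purely hypothetical names (`batch_generate_response`, `queries`); the real method name and signature are whatever the starter kit interface defines:

```python
# Defensive sketch: never hand None (or None entries) back to the evaluator.
# `batch_generate_response` and `queries` are hypothetical names used only for
# illustration; check the starter kit for the actual agent interface.
from typing import List


def safe_batch_generate_response(agent, queries: List[str]) -> List[str]:
    responses = agent.batch_generate_response(queries)

    # If the whole batch came back as None, fall back to empty strings so the
    # evaluator's tokenizer still receives a valid sequence of strings.
    if responses is None:
        return ["" for _ in queries]

    # Also replace any individual None or non-string entries.
    return [resp if isinstance(resp, str) else "" for resp in responses]
```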