Hello everyone. Is it allowed to run inference with mixed or half precision, or do participants need to run inference in exactly the same way as in training? And are there any restrictions on using ONNX or TensorRT during inference?
I don't think there is any restriction: a model running in a different environment is still the same model, and you can do whatever you want with it during inference. This is just my opinion, but that's how it worked in Kaggle competitions.
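For what it's worth, here is a minimal sketch of both options, assuming PyTorch (the toy model, shapes, and file name are made up for illustration):

```python
import torch
import torch.nn as nn

# Stand-in for whatever model you trained in FP32.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4)).eval()
dummy = torch.randn(1, 16)

# Export the FP32 model to ONNX; ONNX Runtime or TensorRT can consume this file.
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["input"], output_names=["output"])

if torch.cuda.is_available():
    model = model.cuda()
    x = dummy.cuda()

    # Mixed precision: weights stay FP32, autocast runs eligible ops in FP16.
    with torch.no_grad(), torch.autocast(device_type="cuda", dtype=torch.float16):
        out_amp = model(x)

    # Full half precision: convert weights and inputs to FP16.
    model_fp16 = model.half()
    with torch.no_grad():
        out_fp16 = model_fp16(x.half())
```

Note that FP16 or a TensorRT engine built from the ONNX file can produce slightly different numerical outputs than the FP32 PyTorch run, which is normally acceptable but worth checking against your validation set.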