Inference failed errors

Hi @sina1374

I’m sending you the traceback privately.

For clarity, the reason you are not getting the full traceback is that the evaluation works in 4 stages:

  1. Building your environment and installing packages
  2. Using a sample of the training data and generating predictions
  3. Generating predictions on the leaderboard data
  4. Computing your score

We give you tracebacks for steps 1 and 2. The full traceback for step 3 is not provided, to reduce the risk of data leakage. Step 4 is on our side and should very rarely contain an error :slight_smile:

Hi @alfarzan
I figured out that there was a problem in my code.
I realized it is possible that the leaderboard data contains make_model values that don't appear in our training dataset, is that right?

Yes! Just as a real insurance company may encounter vehicles it has never seen before in a new year, so can your model.

But this doesn't really extend to the other categorical features, since they have far fewer unique values.
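
For example, one simple way to guard against this (just a sketch; the column name make_model and the fallback level are illustrative, and whether this is appropriate depends on your model) is to collapse unseen levels to a single fallback category before predicting:

handle_unseen_levels <- function(new_data, train_levels,
                                 column = "make_model", fallback = "OTHER") {
  # Work on characters to avoid factor-level surprises, then re-factor
  values <- as.character(new_data[[column]])
  values[!(values %in% train_levels)] <- fallback
  new_data[[column]] <- factor(values, levels = unique(c(train_levels, fallback)))
  new_data
}

Here train_levels would be the set of make_model values recorded at training time, e.g. levels(training_data$make_model).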

That's right, makes perfect sense to me, thanks! :grinning:


Hi @alfarzan,
My latest submissions (e.g. 114780) have run into the same error message this thread refers to: “Inference failed. View the submission for more details.”
In the log I can find the following detailed error message:

“Error in load_model(model_output_path) :
unused argument (model_output_path)”

I really can’t find any solution. Can you please take a look at my submission?

Hi @imfilip

This was a small issue on our side. Can you please try again? And please copy a new notebook (or reset your notebook to the factory runtime).

Thanks for catching it! :handshake:
I’ve made a clarification post about this as well.

Hi, I've been getting inference errors trying to submit an h2o model. Can you please help?


Hi @actuarist

Can you please provide a submission number so I can look into this one? I’m looking into the one from @Baracuda already as they’ve reached out on Discord as well :slight_smile:

@actuarist for @Baracuda, who is using zip submissions, the issue was that the save and load model functions needed to be altered to work with how h2o models are saved and loaded. Let me know if this doesn't resolve your issue as well.
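
For reference, a rough sketch of what the altered functions could look like in a zip submission (assuming the h2o R package; the directory name trained_model_h2o is illustrative, and the path has to be hard-coded, as discussed further down):

save_model <- function(model) {
  # h2o models are written to a directory on disk rather than an .RData file
  h2o::h2o.saveModel(model, path = "trained_model_h2o", force = TRUE)
}

load_model <- function() {
  # h2o must already be running (h2o.init() in model.R) before loading
  saved <- list.files("trained_model_h2o", full.names = TRUE)[1]
  h2o::h2o.loadModel(saved)
}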

Finally working :slight_smile:
I've mostly been using Colab, and the notebook seems to run fine, but I couldn't get the submission working.
It worked with the zip submission after adding library initialisation statements in predict.R.
I thought the packages would be initialised automatically via install.R?

Good to hear :muscle:

In Colab, any initialisation you need should be done inside global_imports, which is sourced at the top of every file, including predict.R. In zip submissions, you should do that inside model.R, which gets sourced at the top of predict.R.
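
As a concrete example (a sketch only; your package list will differ), the top of model.R in a zip submission might look like this, and in Colab the same lines would sit inside global_imports instead:

# model.R: sourced at the top of predict.R, so all package loading
# and one-off setup (such as starting the h2o cluster) belongs here
library(h2o)
h2o.init()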

Can I have your submission ID to make sure things are working fine from our end?

117334 is my latest working zip submission.
Can we modify predict.R in Colab?

Thanks for that!

I would advise that you do not change any of the predict.R scripts even for your zip submissions.

I’ve looked into your submission and actually the issue is:

  1. You should include all your packages at the top of the model.R code and start h2o up there as well.
  2. To load your model, you should hard-code your model path into your load_model function; load_model itself cannot take any arguments.

I have sent you a modified version of your submission that illustrates this without changing predict.R :slight_smile:

For notebook submissions the only difference would be that the hard-coded path of the saved model has to be in a directory called saved_objects. Otherwise it should work very similarly, with the exception of having to use the install_packages and global_imports functions; h2o.init() would fit into the latter.
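
Putting that together for a notebook submission, the relevant pieces might look roughly like this (a sketch; everything apart from the saved_objects directory name is illustrative):

global_imports <- function() {
  # Package loading and start-up such as h2o.init() goes here
  library(h2o)
  h2o.init()
}

load_model <- function() {
  # No arguments: the saved model path is hard-coded and lives under saved_objects/
  h2o::h2o.loadModel(list.files("saved_objects", full.names = TRUE)[1])
}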

Hi, I've been having an issue with a LightGBM model. The training works, but I get an error importing lightgbm during inference.

########## Generating Predictions on /data/mse_debug_data.csv #############
Traceback (most recent call last):
  File "predict.py", line 20, in <module>
    from utils import *
  File "/home/aicrowd/utils.py", line 1, in <module>
    from global_imports import *
  File "/home/aicrowd/global_imports.py", line 5, in <module>
    import lightgbm as lgb
  File "/usr/local/lib/python3.8/site-packages/lightgbm/__init__.py", line 8, in <module>
    from .basic import Booster, Dataset
  File "/usr/local/lib/python3.8/site-packages/lightgbm/basic.py", line 43, in <module>
    _LIB = _load_lib()
  File "/usr/local/lib/python3.8/site-packages/lightgbm/basic.py", line 34, in _load_lib
    lib = ctypes.cdll.LoadLibrary(lib_path[0])
  File "/usr/local/lib/python3.8/ctypes/__init__.py", line 451, in LoadLibrary
    return self._dlltype(name)
  File "/usr/local/lib/python3.8/ctypes/__init__.py", line 373, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libgomp.so.1: cannot open shared object file: No such file or directory

Hi @skiracer

I think the issue is that you need to also install the correct apt package. Please see this discussion thread for instructions.

If that doesn’t work then comment here and we’ll get to the bottom of it :muscle:
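
For anyone else hitting the same OSError: libgomp.so.1 is the GNU OpenMP runtime that LightGBM links against, so the system package providing it (libgomp1 on Debian/Ubuntu) has to be installed in the evaluation image. Assuming your submission lists system packages in an apt.txt file, adding a single line should be enough:

libgomp1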


@alfarzan

I keep getting inference error messages, even though my notebook code works correctly. Can you help me figure out what is going on? Submission #118475

Hi @sam_kloese

Yes :slight_smile: I see what is going on. I think you may be using one of the notebook versions from the early weeks of the game. We made a small change to the load_model function so that it takes no arguments.

In your case the fix is easy, can you please replace your load_model function with this?

load_model <- function() {
  # Load a saved trained model from the file `trained_model.RData`.
  # This is called by the server to evaluate your submission on hidden data.
  # Only modify this *if* you modified save_model.
  load('trained_model.RData')
  return(model)
}

I can generate prices using your submission with that change :+1:

Hi @alfarzan, I've got an inference error. Can I get some help to see what's happening there, please? Thank you. Submission #120257

Hi @daniel_fat

It seems like you were expecting specific policy IDs to be in the leaderboard. I have privately sent you the traceback error :mailbox:
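
As a general note for anyone hitting this: the leaderboard can contain policies (and, as discussed earlier in this thread, category levels) that never appear in training, so prediction code should work purely off the rows it is handed rather than off a stored list of IDs. A minimal sketch of the safer pattern (the function and argument names here are illustrative):

predict_expected_claim <- function(model, new_data) {
  # Score every row supplied; never filter or join against policy IDs
  # memorised from the training data
  predict(model, newdata = new_data)
}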