Should the dataset name be kept as cars3d during submission?

The starter kit describes the variables as below:

# Step 1:
cd neurips2019_disentanglement_challenge_starter_kit

# Step 2: 
# Make sure that your datasets live in "./scratch/dataset".
# If they do not, either make sure they do (recommended)
# or change train_environ.sh accordingly. Finally:
source train_environ.sh

# Step 3:
# Enter the name of your experiment
export AICROWD_EVALUATION_NAME=myvae
# Enter the name of the dataset
export AICROWD_DATASET_NAME=cars3d

Is it required for us to change the AICROWD_DATASET_NAME=cars3d during submission? If yes, what should it be changed to?

Many thanks in advance, @mohanty .

@sourabh_balgi: The evaluator sets these environment variables. So your code should not internally override these variables, and can reliably expect the evaluator to set them.

During debugging, you could very well check if these variables exist and set them accordingly, so you can use the same pipeline for debugging too.
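A minimal sketch of that pattern (the defaults here are simply the values from the starter kit snippet above):

import os

# Fall back to local defaults only when the evaluator has not already
# set these variables; the evaluator's values always take precedence.
os.environ.setdefault("AICROWD_EVALUATION_NAME", "myvae")
os.environ.setdefault("AICROWD_DATASET_NAME", "cars3d")

dataset_name = os.environ["AICROWD_DATASET_NAME"]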

@mohanty, which scripts should not be changed for the evaluation to work?

Currently I have modified evaluate.py, local_evaluation.py and utils_pytorch.py. The reason for the modification is that I was getting errors using torch.jit.load/save. So I modified the script only to save and load the full model as-is, without torch.jit.

Can this be the reason for the failures of my submission?
Also, can you please share the logs of my failed submission so that I can debug it and make a successful submission?

When I run the same code locally, everything works fine!

I have replied to the GitLab issues. Looking forward to the execution logs.

Many thanks, @mohanty .

@sourabh_balgi: You can change all the files as you please. The evaluator just expects that you do the aicrowd_helpers.submit() call after you are done with dumping the models and the representations in the required format.
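As a rough sketch of that flow (train_my_vae() is a hypothetical placeholder for your own training code; export_model comes from utils_pytorch.py and aicrowd_helpers from the starter kit):

import aicrowd_helpers
from utils_pytorch import export_model

model = train_my_vae()    # hypothetical: your own training routine
export_model(model)       # dump the trained model to the expected path
aicrowd_helpers.submit()  # signal that the evaluation can begin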

I had a look at your latest failures; it seems that’s because you have not included disentanglement_lib as a dependency. I pasted the exact error log on the relevant issue.

Thanks a lot for letting me know about disentanglement_lib. I’ll add it and make new submissions.
Will the aicrowd_helpers.submit() command invoke the repository’s local_evaluation.py or my custom local_evaluation.py file? I’m confused about how this is used.

I tried using torch.jit.* for model saving/loading but there seems to be some issue with my current architecture.

So it should work with my custom local_evaluation.py file, right?

Regards,
SB @sourabh_balgi

@sourabh_balgi: No, the aicrowd_helpers.submit() function will just send a signal that the evaluation can begin, and another container calls local_evaluation.py separately. You will not have to call local_evaluation.py manually.

And regarding the torch example, as far as I remember, the included code in the repository wasn’t using anything fancy, and we will be using a local_evaluation.py similar to the one included in the repository, but not the exact same one. If you need any custom changes in the evaluation script, you will have to send in a pull request, and we can consider it.

Thank you for your quick response, @mohanty

Yes. Like you said, utils_pytorch.py has nothing fancy. However, the model is stored as a torch.jit.ScriptModule rather than a torch.nn.Module.

The only difference is that torch.jit.save(model, path) should be replaced with torch.save(model, path). Likewise for loading: model = torch.load(path) instead of model = torch.jit.load(path).

These are the only two changes in the utils_pytorch.py file.
In fact, we don’t need to make any changes to local_evaluation.py.

Do you need me to send a PR for this?

Best,
SB

Certain operations are not supported by torch.jit, hence the errors.

However, torch.save/torch.load always works, for all models and inputs.
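To illustrate with a hypothetical module (not my actual architecture): tracing only records the branch taken for the example input and merely warns about it, while torch.save pickles the whole Python object and keeps the control flow intact.

import torch

class DynamicModel(torch.nn.Module):
    def forward(self, x):
        # Data-dependent Python control flow: torch.jit.trace records only
        # the branch taken for the example input (and emits a TracerWarning).
        if x.sum() > 0:
            return x * 2
        return x - 1

traced = torch.jit.trace(DynamicModel(), torch.ones(1, 3, 64, 64))  # only the "> 0" branch survives
torch.save(DynamicModel(), "dynamic_model.pt")                       # full module pickled, both branches kept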

@sourabh_balgi: Sure, please do send across a PR, and we can move the discussion to the relevant PR when it’s ready.

@mohanty, I’m getting the following error while pushing my branch for a PR.

git push origin torch_script_compatibility
Username for 'https://github.com': sobalgi
Password for 'https://sobalgi@github.com':
remote: Permission to AIcrowd/neurips2019_disentanglement_challenge_starter_kit.git denied to sobalgi.
fatal: unable to access 'https://github.com/AIcrowd/neurips2019_disentanglement_challenge_starter_kit.git/': The requested URL returned error: 403

I have made only the following changes to the utils_pytorch.py script. Everything else should be fine.

def export_model(model, path=None, input_shape=(1, 3, 64, 64), use_script_module=True):
    """
    Exports the model. If `use_script_module` is True and the model is a `ScriptModule`,
    it is saved as is; if it is not a `ScriptModule`, it is traced (with the given
    `input_shape`, which defaults to the competition default) and the resulting
    ScriptModule is saved. If `use_script_module` is False, the model is saved
    directly with `torch.save`.

    Parameters
    ----------
    model : torch.nn.Module or torch.jit.ScriptModule
        Pytorch Module or a ScriptModule.
    path : str
        Path to the file where the model is saved. Defaults to the value set by the
        `get_model_path` function above.
    input_shape : tuple or list
        Shape of the input to trace the module with. This is only required if model is not a
        torch.jit.ScriptModule.
    use_script_module : bool (default: True)
        If True, saves the model as a `torch.jit.ScriptModule`; otherwise, saves it
        with `torch.save` as a plain `torch.nn.Module`.

    Returns
    -------
    str
        Path to where the model is saved.
    """
    path = get_model_path() if path is None else path
    model = deepcopy(model).cpu().eval()
    if use_script_module:
        if not isinstance(model, torch.jit.ScriptModule):
            assert input_shape is not None, "`input_shape` must be provided since model is not a " \
                                            "`ScriptModule`."
            traced_model = trace(model, torch.zeros(*input_shape))
        else:
            traced_model = model
        torch.jit.save(traced_model, path)
        return path
    else:
        torch.save(model, path) # saves model as a nn.Module
        return path        


def import_model(path=None, use_script_module=True):
    """
    Imports a model (as a `ScriptModule` or a plain `nn.Module`, depending on `use_script_module`) from file.

    Parameters
    ----------
    path : str
        Path to where the model is saved. Defaults to the return value of the `get_model_path` function above.
    use_script_module : bool (default: True)
        If True, loads the model as a `torch.jit.ScriptModule`; otherwise, loads it
        with `torch.load` as a plain `torch.nn.Module`.

    Returns
    -------
    torch.jit.ScriptModule or torch.nn.Module
        The loaded model.
    """
    path = get_model_path() if path is None else path
    if use_script_module:
        return torch.jit.load(path)
    else:
        return torch.load(path) # loads model as a nn.Module
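
For clarity, a minimal round trip with the flag disabled (the tiny Sequential model is just a stand-in for the actual architecture):

import torch
from utils_pytorch import export_model, import_model

# Hypothetical stand-in for the real VAE; any torch.nn.Module works here.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU())

path = export_model(model, path="./model.pt", use_script_module=False)   # uses torch.save
restored = import_model(path="./model.pt", use_script_module=False)      # uses torch.load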

Please let me know how to proceed.

Should my submissions work if I submit my script with the above modifications?

@sourabh_balgi: Your account would not have write access to push to a new branch on the starter kit repository.
You will have to first fork the repository, then push your changes to a new branch there, and then create a Pull Request to the main starter kit repository.

Ok. Thanks. Let me do that.

@mohanty, I have sent the PR with the modifications.

Please take it up so that the model can be saved either as a torch.jit.ScriptModule or as a torch.nn.Module, without any restrictions.