The PyTorch and TensorFlow sample codes save their models in different directories:
scratch/shared/MODEL_NAME/representation/pytorch_model.pt
scratch/shared/MODEL_NAME/tfhub
Therefore, I’m a bit confused about how the AIcrowd evaluator decides where to load the models from, given that there is no environment variable to define this. Will it follow the same logic as the evaluate.py file?
@amirabdi: We do provide the path for saving the models here:
import os

from disentanglement_lib.postprocessing import postprocess
from disentanglement_lib.utils import aggregate_results
import tensorflow as tf
from numba import cuda
import gin.tf
import aicrowd_helpers
# 0. Settings
# ------------------------------------------------------------------------------
# By default, we save all the results in subdirectories of the following path.
base_path = os.getenv("AICROWD_OUTPUT_PATH", "../scratch/shared")
experiment_name = os.getenv("AICROWD_EVALUATION_NAME", "experiment_name")
DATASET_NAME = os.getenv("AICROWD_DATASET_NAME", "cars3d")
ROOT = os.getenv("NDC_ROOT", "..")
overwrite = True
# 0.1 Helpers
# ------------------------------------------------------------------------------
def get_full_path(filename):
    # Joins the shared output path and the experiment name with the given filename.
    return os.path.join(base_path, experiment_name, filename)
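As a minimal sketch (assuming get_full_path joins the output path, experiment name and filename as above, and that AICROWD_EVALUATION_NAME corresponds to the MODEL_NAME in your question), the TensorFlow export directory is derived purely from those environment variables:

# Sketch only: resolve the tfhub export directory from the settings above.
# With AICROWD_OUTPUT_PATH=../scratch/shared and AICROWD_EVALUATION_NAME=MODEL_NAME
# this resolves to ../scratch/shared/MODEL_NAME/tfhub, matching the layout in the question.
tfhub_export_dir = get_full_path("tfhub")
print(tfhub_export_dir)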
And in the case of PyTorch, the paths are picked up here:
import os
from collections import namedtuple

# noinspection PyUnresolvedReferences
from disentanglement_lib.data.ground_truth.named_data import get_named_ground_truth_data
# --------------------------
ExperimentConfig = namedtuple('ExperimentConfig',
                              ('base_path', 'experiment_name', 'dataset_name'))


def get_config():
    """
    This function reads the environment variables AICROWD_OUTPUT_PATH,
    AICROWD_EVALUATION_NAME and AICROWD_DATASET_NAME and returns a
    named tuple.
    """
    return ExperimentConfig(base_path=os.getenv("AICROWD_OUTPUT_PATH", "./scratch/shared"),
                            experiment_name=os.getenv("AICROWD_EVALUATION_NAME", "experiment_name"),
                            dataset_name=os.getenv("AICROWD_DATASET_NAME", "cars3d"))
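So, as a hypothetical illustration (the directory layout is taken from your question, not from the evaluator's code), the saved PyTorch checkpoint can be located from the same configuration:

config = get_config()
# With the defaults above this resolves to
# ./scratch/shared/experiment_name/representation/pytorch_model.pt,
# i.e. the layout described in the question.
checkpoint_path = os.path.join(config.base_path, config.experiment_name,
                               "representation", "pytorch_model.pt")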