Transfer Learning

Hi,

The starter kit has some code for resuming interrupted training: the model is saved together with the optimizer state and all stats, and loaded as shown in the torch.load() documentation. It all looks correct.
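For reference, this is the kind of save/resume logic I mean, following the pattern from the torch.load() documentation (a minimal sketch; the checkpoint keys, file name, and the stand-in model are my own assumptions, not the actual starter-kit code):

import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # stand-in for the real model
optimizer = torch.optim.Adam(model.parameters())
epoch = 10

# Save everything needed to resume, not just the weights
torch.save({
    "epoch": epoch,
    "model_state_dict": model.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
}, "checkpoint.pt")

# Resume: restore the model AND the optimizer state, then continue from the saved epoch
checkpoint = torch.load("checkpoint.pt")
model.load_state_dict(checkpoint["model_state_dict"])
optimizer.load_state_dict(checkpoint["optimizer_state_dict"])
start_epoch = checkpoint["epoch"] + 1
model.train()  # back to training mode before continuing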

When I try to continue training, the model always starts with bad scores. It catches up to the saved model very quickly, but it is clearly not resuming exactly from the last training state.

Can you explain how to resume an interrupted training run on the main branch of the starter kit?


I did not find the “normal” way, but I managed to reuse the code that loads the model. I also found the load_state_dict argument strict very handy: with strict=False I can use a pretrained model as a warm-start helper.

Here is what I mean in code:

model.load_state_dict(checkpoint_states["model_state_dict"], strict=False)
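Spelled out a bit more, continuing from the snippet above (a sketch; the checkpoint file name is an assumption, and strict=False makes load_state_dict skip mismatched keys instead of raising an error):

# Warm start: load whatever weights fit the current model, skip the rest
checkpoint_states = torch.load("pretrained.pt")
missing, unexpected = model.load_state_dict(
    checkpoint_states["model_state_dict"], strict=False
)
print("missing keys:", missing)        # layers that keep their fresh initialization
print("unexpected keys:", unexpected)  # checkpoint entries that were skipped

Note the difference to a true resume: here only the weights are reused, so the optimizer starts fresh.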

So for me… this thread is solved <3