
The use of the checkpoint file in inference #14

Open
drdr-jiang opened this issue Nov 28, 2024 · 8 comments · Fixed by #15

@drdr-jiang

Hello,

First of all, I would like to sincerely thank you for your work and for providing the code. I have a question: if I create a custom dataset and train on it, is the resulting checkpoint file only usable for evaluation? When I ran inference, I noticed that the model options are set to "pinhole", "simple_radial", etc., but I did not see my custom-trained checkpoint file being used, and I could not find a way to specify it in the inference process.

@veichta
Collaborator

veichta commented Nov 28, 2024

Hi,

Currently, we only support the original weights in the extractor, but I agree it would be useful to allow initialization from a custom checkpoint. I’ll work on implementing this feature as soon as possible.

In the meantime, you can try the following workaround:

import torch
from geocalib import GeoCalib
model = GeoCalib()

# Load the model from the checkpoint
checkpoint_path = "path/to/checkpoint" # should be something like 'outputs/training/<run name>'

state_dict = torch.load(f"{checkpoint_path}/checkpoint_best.tar", map_location="cpu")
model.model.flexible_load(state_dict["model"])

Note: This method will work only if there are no architectural changes in your configuration.

Let me know if you run into any issues!

veichta linked a pull request Nov 28, 2024 that will close this issue
veichta reopened this Nov 28, 2024
@veichta
Collaborator

veichta commented Nov 28, 2024

You can now load your custom weights by passing the path to the corresponding checkpoint.

If you’ve made changes to the architecture, make sure to use the extractor from siclib:

from siclib.models.extractor import GeoCalib

checkpoint_path = "path/to/checkpoint" # usually something like 'outputs/training/<run name>/checkpoint_best.tar'
model = GeoCalib(weights=checkpoint_path)
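
After that, inference should work as before. A minimal sketch of the follow-up calls, assuming the siclib extractor exposes the same load_image/calibrate interface as the pip-installable geocalib package (the image path is a placeholder):

# Continuing from the snippet above, run calibration with the custom weights
image = model.load_image("path/to/image.jpg")  # placeholder path
result = model.calibrate(image)
print(result["camera"], result["gravity"])  # estimated camera intrinsics and gravity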

@drdr-jiang
Author

Thank you very much for your answers and help! I will use your method and keep in touch with you in the future!

@drdr-jiang
Author

Hi,
After modifying the extractor.py file, how is "name" resolved? I extracted the checkpoint_best.tar file and there is no config file inside, so when loading the model it cannot find conf["name"].

@veichta
Collaborator

veichta commented Dec 4, 2024

Hi, could you send the error message and the keys from the state_dict with print(state_dict.keys())?
Also make sure that you pass the path to the .tar file and not the extracted weights.
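
For reference, a quick way to dump those keys (a minimal sketch, reusing the placeholder checkpoint path from the earlier snippet):

import torch

checkpoint_path = "path/to/checkpoint"  # placeholder, as above
state_dict = torch.load(f"{checkpoint_path}/checkpoint_best.tar", map_location="cpu")
print(state_dict.keys())  # expected to include at least "model" and "conf"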

@drdr-jiang
Author

Hi,
I have updated the extractor code you provided, and I now pass the .tar file in my demo.py. The error is shown in the screenshot below:
[screenshot of the error traceback]

@veichta
Collaborator

veichta commented Dec 6, 2024

Hi, it seems that the config is present in the checkpoint. However, I made a mistake with how the config is accessed. This line should be added after loading the state_dict:
state_dict["conf"] = state_dict["conf"]["model"]

I've pushed a fix to the extractor that also strips any weights that were only used to initialize training.
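
If you load the checkpoint manually instead of going through the updated extractor, the same unwrapping would look roughly like this (a sketch, assuming the checkpoint layout from the snippets above):

import torch

checkpoint_path = "path/to/checkpoint"  # placeholder, as above
state_dict = torch.load(f"{checkpoint_path}/checkpoint_best.tar", map_location="cpu")

# The checkpoint stores the full training config under "conf"; the model config
# (the part containing conf["name"]) is nested one level deeper, so unwrap it first.
state_dict["conf"] = state_dict["conf"]["model"]
print(state_dict["conf"]["name"])  # the model name the extractor looks up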

@drdr-jiang
Author

Thank you very much for your help! I have successfully run it, and I will continue with the next steps. I hope to stay in touch with you.
