What are the steps to run this project #2

imran20487 opened this issue Oct 1, 2019 · 2 comments

@imran20487

Can you please explain how to run this project?

@VanDuc0209

> Can you please explain how to run this project?

Hello, how do I run this project?

@skinkie

skinkie commented Aug 31, 2021

While I don't have this project producing the correct results yet, I will document what got me started. The code suggests that the colornet dataset was used as input data to train the model. This dataset (1.8 TB full size) can be obtained via http://places.csail.mit.edu/downloadData.html. Instead, I used 1000 photos from my own collection and scaled them down due to memory constraints.

mkdir -p jobs
mkdir -p output
mkdir -p data/colornet
mogrify -resize 256x256\! -path /your/path/to/neuralhash/data/colornet /your/path/to/photos/*jpg 
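
If you don't have ImageMagick around, a minimal Python sketch with Pillow does the same forced 256x256 resize (the paths are placeholders, same as above):

```python
# Rough Pillow equivalent of the mogrify call above; both paths are placeholders.
from pathlib import Path
from PIL import Image

src = Path("/your/path/to/photos")
dst = Path("/your/path/to/neuralhash/data/colornet")
dst.mkdir(parents=True, exist_ok=True)

for jpg in src.glob("*.jpg"):
    # ImageMagick's 256x256! ignores the aspect ratio; a plain resize() does the same.
    Image.open(jpg).convert("RGB").resize((256, 256)).save(dst / jpg.name)
```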

For now I have removed all references to the VisdomLogger, but I would expect that running it and changing the IP address would also work. Because my only CUDA machine has only 4 GB of RAM, I changed BATCH_SIZE to 2 in utils.py. I also commented out the model2 testing during training (because I ran out of memory).

Once you have done the steps above, you can run train_amnesia.py. This produces the weights_file referenced in the rest of the code.
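
As a quick sanity check that training actually produced something loadable (I'm assuming the output path used later in this comment; adjust it to whatever train_amnesia.py wrote for you):

```python
import torch

# Path assumed from the commands further down in this comment; adjust as needed.
state = torch.load("output/train_test.pth", map_location="cpu")
# Either a plain state_dict (dict of tensors) or a pickled model object,
# depending on how the training script saved it.
print(type(state))
```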

Here is where I am currently at:

python encoding.py /tmp/test-small.jpg --model=output/train_test.pth --max_iter=500 /tmp/test2.png
Target:  11100001101111111101100001100011
Changing distribution size: 96 -> 96
/home/skinkie/.local/lib/python3.9/site-packages/torch/nn/functional.py:3487: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.
  warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.")
/home/skinkie/.local/lib/python3.9/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at  /pytorch/c10/core/TensorImpl.h:1156.)
  return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
Fixing distribution size: 96 -> 96
skinkie@aspire7:~/neuralhash$ cat decoding.py 
import sys
from utils import *
from models import DecodingModel, DataParallelModel
import transforms

model = DataParallelModel(DecodingModel.load(distribution=transforms.encoding, n=96, weights_file="output/train_test.pth"))
image = im.torch(im.load('/tmp/test2.png')).unsqueeze(0)
# target = binary.parse(str(target))
prediction = model(image).mean(dim=1).squeeze().cpu().data.numpy()
prediction = binary.get(prediction)
print (f"Prediction: {binary.str(prediction)}")
skinkie@aspire7:~/neuralhash$ python decoding.py 
/home/skinkie/.local/lib/python3.9/site-packages/torch/nn/functional.py:3487: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.
  warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.")
/home/skinkie/.local/lib/python3.9/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at  /pytorch/c10/core/TensorImpl.h:1156.)
  return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
Prediction: 11110001101011101000001111101011

And as you can see, the target does not match the prediction (Target != Prediction).
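
To put a number on the mismatch, comparing the two 32-bit strings printed above bit by bit:

```python
# Target and prediction copied from the runs above.
target     = "11100001101111111101100001100011"
prediction = "11110001101011101000001111101011"

diff = sum(t != p for t, p in zip(target, prediction))
print(f"{diff} of {len(target)} bits differ")  # 10 of 32 bits differ
```

So the decoder recovers roughly two thirds of the bits, which is better than chance but nowhere near a correct recovery.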

Now, I have noticed that the following is set in models.py, where MODEL_TYPE is defined in utils.py as DecodingGramNet:

DecodingModel = eval(MODEL_TYPE)
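
That eval simply resolves the MODEL_TYPE string from utils.py to a class with that name in models.py, so swapping models is a one-string change in utils.py. A tiny self-contained sketch of the same dispatch (the stub classes below stand in for the real ones in models.py):

```python
MODEL_TYPE = "DecodingGramNet"          # normally set in utils.py

class DecodingGramNet: pass             # stand-ins for the real classes in models.py
class DecodingNet: pass

# eval(MODEL_TYPE) looks up whichever class has that name; an explicit
# mapping does the same thing and makes the available choices obvious.
DecodingModel = {"DecodingGramNet": DecodingGramNet, "DecodingNet": DecodingNet}[MODEL_TYPE]
print(DecodingModel.__name__)           # -> DecodingGramNet
```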

Even changing this model to DecodingNet does not result in a correct prediction. TinyDecodingNet fails when encoding, and DilatedDecodingNet fails on use (because GramMatrix is not defined).

So from what I have experienced, this project does not work out of the box, not even when investing some time in it.
