@dagardner-nv The sentence-transformers conda install (via examples_cuda-125_arch-x86_64.yaml) in the release container results in the following pytorch packages:
# packages in environment at /opt/conda/envs/morpheus:
#
# Name          Version        Build                         Channel
libtorch        2.4.1          cpu_generic_hb3b73e9_0        conda-forge
pytorch         2.4.1          cpu_generic_py310hcbfaffa_0   conda-forge
torch           2.4.0+cu124    pypi_0                        pypi
torchvision     0.19.1         cpu_py310h0339c84_1           conda-forge
The VDB embedding stage then uses the CPU version. Replacing it with the pip package switches it back to the GPU, but it is not quite as fast (~3 min vs. ~2 min in our example).
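A quick way to confirm which build wins at import time (a sketch, assuming the morpheus env from the listing above is activated):

# Shows which torch build Python resolves and whether CUDA is usable.
import torch
print(torch.__version__)          # "2.4.1" would mean the conda-forge CPU build shadows the pip wheel
print(torch.version.cuda)         # None for a CPU-only build, "12.4" for the 2.4.0+cu124 pip wheel
print(torch.cuda.is_available())  # False would explain the VDB embedding stage falling back to CPU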
Version
24.10
Which installation method(s) does this occur on?
Source
Describe the bug.
We install torch via pip, which is how we get the 2.4.0+cu124 version. I believe the sentence-transformers package is pulling in torchvision, which in turn pulls in libtorch.
Minimum reproducible example
CONDA_ALWAYS_YES=true conda env create --solver=libmamba -n morpheus -y --file conda/environments/all_cuda-125_arch-x86_64.yaml
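A rough way to reproduce the symptom itself inside the resulting env (a sketch; the model name is illustrative, not necessarily the one the pipeline uses):

# Checks where sentence-transformers actually places the model.
import torch
from sentence_transformers import SentenceTransformer
print(torch.cuda.is_available())                 # False with the CPU-only conda-forge pytorch
model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model name only
print(model.device)                              # reports "cpu" when CUDA is unavailable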
Relevant log output
Full env printout
Other/Misc.
No response