I had this error: TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
After some investigation, I found that the `solve_1d_linesearch_quad` function in `optim.py` uses `np.divide`. I think it should use `nx.divide`, where `nx = get_backend(a, b, c)`, and a new `divide` method that wraps `torch.div` should be added to the `TorchBackend` class. A sketch of how the error is triggered is given below.
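For context, here is a minimal reproduction sketch (hypothetical values, requires a CUDA device; not POT's actual code): calling a numpy ufunc on CUDA tensors forces an implicit tensor-to-ndarray conversion, which is exactly what the error message complains about.

```python
# Hypothetical reproduction, not POT's actual code: a numpy ufunc applied to
# CUDA tensors asks torch for an ndarray view, which fails for GPU tensors.
import numpy as np
import torch

a = torch.tensor(2.0, device="cuda")
b = torch.tensor(-1.0, device="cuda")

# Raises:
# TypeError: can't convert cuda:0 device type tensor to numpy.
# Use Tensor.cpu() to copy the tensor to host memory first.
np.divide(-b, 2.0 * a)
```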
We are reworking the whole optimization pipeline in PR #431, and I seem to recall that this has been corrected. @cedricvincentcuaz, could you confirm that the PR fixes this bug?
Indeed, the `np.divide` operation has been removed in PR #431 to prevent this kind of error. The minimum is now computed using only Python operators, as `minimum = min(1., max(0., -b / (2.0 * a)))`, which inherits the type of `a` and `b`, whether they are 1D tensors or floats. Note that the type-promotion rules of each backend imply that if `a` is a tensor and `b` a float, `minimum` will be a tensor of the same type as `a` (and vice versa).
I do not think that a new method nx.divide is necessary in this function. Please correct me if I am wrong.
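For illustration, here is a minimal sketch (assuming torch tensors and made-up values, not the actual PR #431 code) of the behaviour described above: the plain-Python expression keeps the result on its original backend, so no numpy conversion is involved.

```python
# Minimal sketch of the expression adopted in PR #431 (illustrative values only).
import torch

def quad_min(a, b):
    # Clamp the quadratic line-search minimizer -b / (2a) to [0, 1] using
    # only Python operators; no call into numpy is made.
    return min(1., max(0., -b / (2.0 * a)))

a = torch.tensor(2.0)    # behaves the same with device="cuda" if available
b = torch.tensor(-1.0)

print(quad_min(a, b))     # tensor(0.2500): the result stays a torch tensor

# Mixing a tensor with a Python float also works; the tensor type wins.
print(quad_min(a, -0.5))  # tensor(0.1250)
```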