
returns :XTOL_REACHED but doesn't modify the input values #136

Closed
giadasp opened this issue Sep 6, 2019 · 8 comments
giadasp commented Sep 6, 2019

I don't know what's wrong with my code.
I'm using the low-level NLopt wrapper because I need to define custom functions optimized over a vector of variables, and JuMP doesn't allow that.
If I choose the LD_SLSQP algorithm I get a ROUNDOFF error; if I use LD_MMA instead, I get :XTOL_REACHED and :FTOL_REACHED but the variable values never change from the starting values.
Do you know what the problem could be?
I tested all the functions I implemented and they all work outside the NLopt optimization.

Thank you in advance for your support

Giada

@mzaffalon
Contributor

Can you post your code?

As a side remark: unless you are reporting a bug, you should post on Discourse first.

giadasp commented Sep 6, 2019

Since it works with the same inputs in JuMP 0.18.6, I guess it is a bug.
Anyway, the code is the following:

```julia
X = rand(100, 2)
sp = rand(100)
r2 = rand(100)
nPar = 2
function myf(x::Vector, grad::Vector)
    nPar = size(x, 1)
    y = X * x
    if size(grad, 1) > 0
        grad = r2 - (sp ./ (1 .+ (exp.(.-y))))
        grad = X' * grad
    end
    z = log.(1 .+ exp.(y))
    return sum(r2 .* y - (sp .* z))
end
opt = NLopt.Opt(:LD_SLSQP, nPar)
opt.max_objective = myf
opt.lower_bounds = bds.minPars
opt.upper_bounds = bds.maxPars
# opt.xtol_rel = 0.00000001
# opt.maxtime = 10.0
# opt.ftol_rel = 0.00000001
# opt.constrtol_abs = 0.0
pars_i = zeros(nPar)
# (minf, pars_i, ret) = NLopt.optimize(opt, pars_i)
(minf, pars_i, ret) = NLopt.optimize!(opt, pars_i)
```

mzaffalon commented Sep 7, 2019

There are two mistakes in the gradient calculation:

  • first, `grad = r2 - (sp ./ (1 .+ (exp.(.-y))))` is a 100-element vector (and so, probably, is the second assignment to `grad`), whereas `grad` should be a vector of 2 elements;
  • second, you should assign to `grad` in place, as in the NLopt.jl tutorial:

```julia
grad[1] = ...
grad[2] = ...
```

What you were doing was creating a new `grad` variable. You can check that by running `myf` as follows:

```julia
grad = zeros(2)
myf(some_value_for_x, grad)
grad
```

and verify that `grad` has not been changed.
I haven't used JuMP, but maybe its solvers do not make use of the gradient.
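For reference, here is a minimal self-contained sketch of the in-place version, using random data in the same shape as the original post (the `Random.seed!` call and the `ones(2)` test point are my additions for reproducibility). The key change is `.=`, which writes into the caller's array instead of rebinding the local name:

```julia
using Random
Random.seed!(0)              # reproducible stand-in for the original random data
X  = rand(100, 2)
sp = rand(100)
r2 = rand(100)

function myf(x::Vector, grad::Vector)
    y = X * x
    if length(grad) > 0
        # `.=` mutates the existing array; `grad = ...` would only rebind the local name
        grad .= X' * (r2 .- sp ./ (1 .+ exp.(.-y)))
    end
    z = log.(1 .+ exp.(y))
    return sum(r2 .* y .- sp .* z)
end

grad = zeros(2)
myf(ones(2), grad)
grad                          # no longer all zeros: the caller sees the gradient
```

Repeating the check above with this version, `grad` comes back modified, which is what NLopt needs from a gradient-based algorithm like LD_MMA or LD_SLSQP.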

@mzaffalon
Contributor

Can this issue be closed?

giadasp commented Sep 18, 2019 via email

@mx-the-gray

Can I open a related issue?
I am trying to solve the following constrained minimisation problem.

```julia
function ps(x::Vector, grad::Vector)
    if length(grad) > 0
        grad[1] = 1
        grad[2] = 0
        grad[3] = 0
    end
    return x[1]
end

function ps_con(z::Vector, x::Vector, grad::Matrix, w::Vector)
    if length(grad) > 0
        grad[1,1] = w[1]
        grad[2,2] = 20x[2]
        grad[1,2] = -2x[2]
        grad[1,3] = -1
        grad[2,1] = w[2]
        grad[2,3] = -0.1
    end
    z[1] = -(x[2]^2 - 1 + x[3]) + w[1]x[1]
    z[2] = -(-10x[2]^2 + 0.1*x[3]) + w[2]*x[1]
end

opt = Opt(:LD_MMA, 3)
opt.min_objective = ps
opt.lower_bounds = [0, -Inf, -Inf];
opt.upper_bounds = [1, Inf, Inf];

inequality_constraint!(opt, (z, x, g) -> ps_con(z, x, g, [1, 1]), [1e-8, 1e-8]::AbstractVector)
(minf, minx, ret) = optimize(opt, [1, 1, 1])
```

but now I get:

```julia
(1.0, [1.0, 1.0, 1.0], :FORCED_STOP)
```

@mzaffalon
Contributor

There are two errors I can see:

  1. the gradient in the constraint function is a 3×2 matrix, but the routine tries to access `grad[1,3]` and `grad[2,3]`;
  2. you are missing a `*` in the expression for `z[1]`.
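If I understand NLopt.jl's behaviour correctly, an exception thrown inside a callback is reported back as `:FORCED_STOP`, and the out-of-range index alone is enough to throw. A sketch with a plain 3×2 array, no solver involved:

```julia
grad = zeros(3, 2)          # n = 3 variables, m = 2 constraints
grad[2, 2] = 20.0           # fine: row 2, column 2 exists in a 3×2 matrix
try
    grad[1, 3] = -1.0       # out of bounds: a 3×2 matrix has no column 3
catch e
    println(e)              # BoundsError; thrown inside a callback, NLopt returns :FORCED_STOP
end
```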

Could you please:

  1. indent your code and quote it with three backticks,
  2. post these questions on the Julia mailing list?
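For what it's worth, a corrected sketch of the constraint callback with the missing `*` restored, assuming the 3×2 layout where `grad[j, i]` holds ∂z_i/∂x_j (the NLopt.jl convention for vector-valued constraints):

```julia
function ps_con(z::Vector, x::Vector, grad::Matrix, w::Vector)
    if length(grad) > 0
        # column i of the 3×2 grad is the gradient of constraint i
        grad[1, 1] = w[1]        # ∂z1/∂x1
        grad[2, 1] = -2x[2]      # ∂z1/∂x2
        grad[3, 1] = -1.0        # ∂z1/∂x3
        grad[1, 2] = w[2]        # ∂z2/∂x1
        grad[2, 2] = 20x[2]      # ∂z2/∂x2
        grad[3, 2] = -0.1        # ∂z2/∂x3
    end
    z[1] = -(x[2]^2 - 1 + x[3]) + w[1] * x[1]   # `*` restored here
    z[2] = -(-10x[2]^2 + 0.1x[3]) + w[2] * x[1]
    return nothing
end
```

With in-bounds indexing the callback no longer throws, so the same `inequality_constraint!` setup should stop aborting with `:FORCED_STOP`.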

odow commented Mar 3, 2022

Closing because this is not a bug in NLopt.

Please ask usage questions like this on the community forum: https://discourse.julialang.org/c/domain/opt/13

@odow odow closed this as completed Mar 3, 2022