
Tolerance-Value dependence of Output #124

Closed
ChrisLippiEdin opened this issue Dec 14, 2018 · 9 comments

Comments

@ChrisLippiEdin

Hi,
I am running a loop over different parameter values, and at each iteration I use MMA to solve a problem with an inequality constraint. However, at some parameter values (even when I ensure that the constraint is never binding), the results stop being optimal: a slight parameter change makes the reported objective drop massively, and I have verified that those outputs are non-optimal. My code is

    function obj(x::Vector, grad::Vector)
        if length(grad) > 0
            grad[1] = pa*((1 + β₁*x[2]^β₂)/(1 + β₁*x[2]^β₂ + α₁*μ[i] + α₂*μ[i]^2))*(Aa*s[j])^(1-γ)*γ*(x[1])^(γ-1) - (1+r)*q + q
            grad[2] = pa*β₁*β₂*(x[1])^γ*(Aa*s[j])^(1-γ)*μ[i]*(α₂*μ[i] + α₁)*x[2]^(β₂-1)/(β₁*x[2]^β₂ + α₂*μ[i]^2 + α₁*μ[i] + 1)^2 - (1+r)*pn
        end
        return pa*((1 + β₁*x[2]^β₂)/(1 + β₁*x[2]^β₂ + α₁*μ[i] + α₂*μ[i]^2))*(Aa*s[j])^(1-γ)*(x[1])^γ - (1+r)*(q*x[1] + pn*x[2]) + q*x[1]
    end


    function myconstraint(x::Vector, grad::Vector)
        if length(grad) > 0
            grad[1] = -λ*q + (1+r)*q
            grad[2] = (1+r)*pn
        end
        return -λ*q*x[1] + (1+r)*(q*x[1] + pn*x[2] - g)
    end

    opt = Opt(:LD_MMA, 2)    # algorithm, dimensionality of problem
    lower_bounds!(opt, [0.0, 0.0])
    xtol_rel!(opt, 1e-8)
    max_objective!(opt, obj)
    inequality_constraint!(opt, myconstraint, 0.0)

    results = NLopt.optimize(opt, [1e-3, 1e-3])

When I change the tolerance values it then works for some parameter values, but still not for others. Bottom line: I can't find a way to ensure the solver consistently finds the optimum for all exogenous values.
Do you have any suggestions about what the issue might be?

Thanks a lot and merry Christmas

@stevengj
Collaborator

stevengj commented Dec 14, 2018

Have you checked your gradients, e.g. against ForwardDiff.jl or even finite differences, for some random points? That is the most common source of problems.
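A quick way to do that, without any extra packages, is a central finite-difference check against the analytic gradient. A minimal sketch (`check_gradient` and the `toy` objective are illustrative names, not part of NLopt; `f` follows the NLopt convention of filling `grad` in place and returning the objective value):

```julia
# Compare the analytic gradient written into `grad` against a central
# finite-difference estimate at the point `x`.
function check_gradient(f, x; h=1e-6, tol=1e-4)
    n = length(x)
    grad = zeros(n)
    f(x, grad)                 # analytic gradient, filled in place
    fd = zeros(n)
    for i in 1:n
        xp = copy(x); xp[i] += h
        xm = copy(x); xm[i] -= h
        fd[i] = (f(xp, Float64[]) - f(xm, Float64[])) / (2h)  # central difference
    end
    return maximum(abs.(grad .- fd)) < tol, grad, fd
end

# Toy objective with a known gradient, to show the check passing:
toy(x, grad) = begin
    if length(grad) > 0
        grad[1] = 2x[1]
        grad[2] = 3x[2]^2
    end
    x[1]^2 + x[2]^3
end

ok, g, fd = check_gradient(toy, [0.7, 1.3])
```

Running the same check on `obj` and `myconstraint` at a few random feasible points would confirm or rule out a gradient typo.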

@ChrisLippiEdin
Author

Thanks for the answer,

what exactly do you mean by checking the gradients? What I did is use ForwardDiff to check the gradient around the optimal points, which I only obtain after playing around with the tolerance values. The gradient exists and has the correct values.
The gradient from NLopt, before adjusting the tolerance values, gives me values that are obviously incorrect (the gradient is not equal to the vector of marginal costs).

Thanks a lot!

@mzaffalon
Contributor

What @stevengj means is whether the vector grad contains the correct values when you do

grad = zeros(2)
obj(some_values_of_x, grad)

The expression for the gradient in obj is fairly complicated and a typo is not completely unlikely (although grad[1] looks OK to me).

@ChrisLippiEdin
Author

Thanks @mzaffalon
I checked (i) the gradient at the value I previously considered non-optimal (non-profit-maximizing) and (ii) the gradient at the point, obtained with a different tolerance value, that is profit-maximizing.
It turns out that both give zero gradients, so both points are stationary.
Is the problem then just an issue of local vs. global optima? If so, do you know how to fix that, in the sense of finding global maxima?

Thank you very much and merry Christmas!

@mzaffalon
Contributor

mzaffalon commented Dec 19, 2018

Are you saying that you took a totally random value of x and obtained grad=[0,0]? Would you not conclude that there is something wrong in the way you compute the vector grad in the function obj?

@ChrisLippiEdin
Author

no, I am saying that when I calculate the gradient with, for instance, ForwardDiff.jl at random values of x, I get some nonzero value. However, for the two different values of x that I get from my optimization with different tolerance values, the gradient is in both cases essentially zero. Hence my conclusion that there are multiple local optima.
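If that is the situation, one common workaround short of a true global algorithm is multistart: rerun the same local solve from several starting points and keep the best result. A minimal sketch (`multistart` and `toy_solve` are illustrative names; with NLopt, `solve` would wrap `NLopt.optimize(opt, x0)` and return the objective value and maximizer):

```julia
# Run `solve` from each starting point and keep the best maximizer found.
function multistart(solve, starts)
    best_f, best_x = -Inf, first(starts)
    for x0 in starts
        f, x = solve(x0)
        if f > best_f
            best_f, best_x = f, x
        end
    end
    return best_f, best_x
end

# Stand-in "local solver" with two basins of attraction, to illustrate:
# starts below 0 converge to a local maximum (value 1.0 at x = -2),
# starts above 0 converge to the global maximum (value 2.0 at x = 1).
toy_solve(x0) = x0 < 0 ? (1.0, -2.0) : (2.0, 1.0)

best_f, best_x = multistart(toy_solve, [-3.0, -1.0, 0.5, 3.0])
```

This gives no guarantee of finding the global maximum, but it is cheap and often enough when there are only a few basins.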

@mzaffalon
Contributor

mzaffalon commented Dec 19, 2018

The list of global optimizers is here but I have never used them.

In Julia, you would call opt = Opt(:GN_DIRECT_L, 2), as explained in the tutorial. Also see the list of algorithms: https://github.com/JuliaOpt/NLopt.jl/blob/master/src/NLopt.jl#L29
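A possible shape for this, assuming `obj` from the original post is in scope and that upper bounds of 10.0 bracket the optimum (both assumptions, not from this thread): note that DIRECT-L is derivative-free and handles only bound constraints, so it needs finite upper bounds, and the nonlinear inequality constraint would instead require a constrained global algorithm such as :GN_ISRES.

```julia
using NLopt

# Global phase: DIRECT-L over a finite box (assumed bounds).
opt = Opt(:GN_DIRECT_L, 2)
lower_bounds!(opt, [0.0, 0.0])
upper_bounds!(opt, [10.0, 10.0])   # assumed; must bracket the optimum
maxeval!(opt, 2000)                # evaluation budget for the global search
max_objective!(opt, obj)
(maxf, maxx, ret) = NLopt.optimize(opt, [1.0, 1.0])

# Optional local phase: polish the global candidate with gradient-based MMA.
local_opt = Opt(:LD_MMA, 2)
lower_bounds!(local_opt, [0.0, 0.0])
xtol_rel!(local_opt, 1e-8)
max_objective!(local_opt, obj)
(maxf, maxx, ret) = NLopt.optimize(local_opt, maxx)
```

The global-then-local pattern is a common compromise: the global phase only needs to land in the right basin, and the local phase recovers the tight tolerance.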

@ChrisLippiEdin
Author

Thanks a lot, that is very helpful

@odow
Member

odow commented Mar 3, 2022

Closing as stale and because this doesn't look like an issue in NLopt.

@odow odow closed this as completed Mar 3, 2022