Using Optim.jl: how can I set an interval for x[1], for example?


#1

Hello.

Using Optim.jl in Julia (with the Atom IDE):

opt = optimize(f, starting_guess)

Suppose my starting guess is [1; 2] for x[1] and x[2]. The program will find the best x[1] and x[2] to fit my data.

But what if I want to tell Julia to find the x[1] in a range of (0, 2) and the x[2] in a range of (1, 5) which best fit my data?

Is it possible using Optim? Or NLopt?


#2

Using Optim and NLopt.

Questions like these can be answered with 30 seconds of Googling; it is often best to save the community’s goodwill for when you’re truly stuck.


#3

Neither of these documentation pages has an example using a range for the starting guess instead of a single point.

But thanks …


#4

Have you actually read Optim’s documentation? The first line says: “…for simple “box” constraints (lower and upper bounds)…”.
Box constraints mean setting an interval on each variable independently of the others.
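For concreteness, here is a minimal sketch of Optim’s box-constrained interface. The objective `f` below is made up purely for illustration; the bounds match the ranges asked about in #1:

```julia
using Optim

# Hypothetical objective: replace with your own fitting function.
f(x) = (x[1] - 1.5)^2 + (x[2] - 3.0)^2

lower = [0.0, 1.0]   # lower bounds for x[1] and x[2]
upper = [2.0, 5.0]   # upper bounds for x[1] and x[2]
x0    = [1.0, 2.0]   # starting guess; must lie inside the box

# Fminbox wraps an unconstrained optimizer and enforces the bounds.
res = optimize(f, lower, upper, x0, Fminbox(GradientDescent()))
Optim.minimizer(res)  # each component stays within its interval
```

Fminbox solves a sequence of barrier-penalized subproblems with the inner optimizer (GradientDescent here), so the returned minimizer respects the bounds.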


#5

People have been interpreting your question as optimization over a bounded parameter space. If you are asking how to use multiple start values, then just embed the call to optimize in a loop, and generate the trial start values in the loop.
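The multistart idea above can be sketched like this. The objective, the number of trials, and the sampling ranges are placeholders for illustration:

```julia
using Optim

# Hypothetical objective: replace with your own fitting function.
f(x) = (x[1] - 1.5)^2 + (x[2] - 3.0)^2

# Run `optimize` from n random start values drawn inside
# (0, 2) for x[1] and (1, 5) for x[2]; keep the best result.
function multistart(f, n)
    best = nothing
    for _ in 1:n
        x0 = [2 * rand(), 1 + 4 * rand()]
        res = optimize(f, x0)
        if best === nothing || Optim.minimum(res) < Optim.minimum(best)
            best = res
        end
    end
    return best
end

best = multistart(f, 20)
Optim.minimizer(best)
</code>
```

Note this does not constrain the search: each run may still wander outside the sampling ranges. It only varies where the optimizer starts.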


#6

Look what I did :slight_smile:
[screenshot: optimjulia]

And look at the results …

Do you see the negative value? Why does it happen?

My lower and upper bounds are not negative in the case of x[3].


#7

Please post a minimal working example, i.e. complete code that people can copy and paste to run, instead of screenshots.

In this case, for example, you also need to post the definition of the function X2.


#8

I am not sure about the details, but I think GradientDescent needs the objective function’s gradient, which will be computed numerically [1] if you don’t provide it. If your X2 still contains a numerical integration routine, it may compute a wrong gradient.

But please post a minimal working example (20 lines at most) if you want help.

[1] From the manual:

If we pass f alone, Optim will construct an approximate gradient for us using central finite differencing
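To avoid that fallback, you can pass an analytic gradient alongside `f`. A minimal sketch with a made-up objective (your X2 would take its place):

```julia
using Optim

# Hypothetical smooth objective.
f(x) = (x[1] - 2.0)^2 + x[2]^2

# In-place analytic gradient: writes df/dx into G.
# Supplying this sidesteps the central finite differencing Optim
# falls back to when only f is given, which can be inaccurate if
# f wraps a noisy numerical integration.
function g!(G, x)
    G[1] = 2.0 * (x[1] - 2.0)
    G[2] = 2.0 * x[2]
end

x0  = [0.0, 1.0]
res = optimize(f, g!, x0, GradientDescent())
Optim.minimizer(res)
```

The same `g!` can also be passed inside `Fminbox(GradientDescent())` for the box-constrained case.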