Maximum Likelihood: Normal Linear Model

Two reasons:

  • An API call may be doing other things in addition to reading the property value (see, for example, gradient!(df, x) in the NLSolversBase README).
  • The internal code of a package may change.

If the structs in a package change, API calls such as Optim.minimizer or hessian can be deprecated gracefully so that your code doesn’t break. So if you access opt2.minimizer directly, but we have for some reason decided to rename the property to opt2.minimum_x, then Optim.minimizer can be updated to reflect this, whereas opt2.minimizer would just disappear.
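To make the point concrete, here is a small sketch (the toy objective below is mine, not from the thread):

```julia
using Optim

# A toy problem, just to have a result object to work with.
f(x) = (x[1] - 1)^2 + (x[2] - 2)^2
res = optimize(f, zeros(2))

# Prefer the accessor function...
xmin = Optim.minimizer(res)
# ...over reaching into the result struct directly; the field name is an
# internal implementation detail and may change between releases:
# xmin = res.minimizer  # fragile
```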

Great :slight_smile: I believe examples of maximum likelihood problems can be very valuable to other newcomers to Julia as well. If you are able to write up an example of any of the problems you work on we’ll add it to the Optim docs (or some other centralised tutorial repository).

I just realised another thing that is important when you want to extract the information of the Hessian.

When optimize has reached tolerance and says it has converged, it does not update the Hessian, as it is only needed if we were to take another iteration step. Thus, the value of hessian(fun2) is actually the Hessian at the penultimate iteration.

So to get a better approximation of the standard errors, you need to do the following:

using Optim, NLSolversBase
# define df and x0 ...
res = optimize(df, x0, Newton())
xmin = Optim.minimizer(res)
hessian!(df, xmin) # Update the Hessian calculation for the minimizer `xmin`
numerical_hessian = hessian(df)
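Continuing from numerical_hessian above, here is a hedged sketch of the standard-error calculation (this assumes df wraps the negative log-likelihood, so that the Hessian at the minimizer is the observed information matrix):

```julia
using LinearAlgebra

vcov = inv(numerical_hessian)  # approximate variance-covariance matrix
se = sqrt.(diag(vcov))         # approximate standard errors
```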

Thank you very much for the additional information. I was unaware of this issue, and I appreciate the help!

I’m also very willing to write all of this up for a tutorial for the Optim package as well. However, I’m not sure how to proceed so if you can kindly point me in the right direction, I’d be happy to contribute.

I’m not sure if this should be a separate discussion, but maybe a “Tutorial” section on the forum would be a helpful addition? I’m envisioning a “one stop shop” where individuals can find information on a range of topics, although I’m happy to defer to the community on this.

Please do! One solution is to produce a Jupyter notebook or, even better, a document that uses Literate to generate a Jupyter notebook from a script.

Thank you for the information! Obviously, a Jupyter notebook is something I’m familiar with but I’m not familiar with Literate, which I’m assuming is a package that will take a script and produce documentation?

If someone could provide a tutorial link I’d appreciate it.

Thank you!

https://fredrikekre.github.io/Literate.jl/stable/
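In case a quick sketch helps: a Literate source file mixes markdown written in #-comments with ordinary Julia code, and the package can generate both output formats from it (the file and directory names below are placeholders):

```julia
# A minimal Literate source file, e.g. `tutorial.jl`, looks like:
#
#     # # Maximum Likelihood Estimation
#     # Some explanatory markdown...
#     x = randn(100)
#
# Generating the outputs from such a script:
using Literate
Literate.markdown("tutorial.jl", "output_dir")  # Markdown for Documenter
Literate.notebook("tutorial.jl", "output_dir")  # Jupyter notebook
```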

It’s not too difficult. If it’s easier to produce a notebook, we could try the reverse: you produce the notebook, and we generate the Literate script from it, starting from something you already know (how to make notebooks).

In addition to the example in the Literate docs, here’s an example that I used to write a short tutorial/example for LineSearches.jl

I prefer writing my examples in an editor to writing them in a Jupyter notebook, but as @pkofod said we can turn your Jupyter notebook example into a Literate example if you would like to stick with the notebook :wink:

Thank you all for the information! I’m eager to learn how to use Literate so I will concentrate on that for the time being. Your examples look straightforward and very helpful. Amazing how it produces such nice output…

Also, the Hessian code you mentioned earlier (i.e. hessian!) matched the analytical VC matrix calculation perfectly, so that was nice to see (the analytical solution breaks down with increasing N, which is why I use the numerical Hessian).
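For reference, the analytical comparison for the normal linear model can be sketched like this (the data, names, and sizes below are made up for illustration):

```julia
using LinearAlgebra, Random

Random.seed!(1)
n, k = 200, 3
X = [ones(n) randn(n, k - 1)]   # design matrix with an intercept
beta = [1.0, 0.5, -0.25]
y = X * beta + 0.3 * randn(n)

bhat = X \ y                    # OLS coefficients = ML estimates
resid = y - X * bhat
s2 = dot(resid, resid) / n      # ML estimate of the error variance
vcov = s2 * inv(X' * X)         # analytical variance-covariance matrix
se = sqrt.(diag(vcov))          # analytical standard errors
```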

I’ll be in touch once the example is complete.

Thanks!

I have looked at the Literate package as you and @pkofod mentioned and have produced a tutorial, but since I cannot attach files via the Discourse forum, I would appreciate some guidance. I need another pair (or two) of eyes to check my Jupyter notebook and Markdown file (which was very easy to create, BTW).

Thank You!

Perhaps put them in a gist?

That was quick :slight_smile: I’ll see if I can enable the Literate examples in the Optim.jl documentation today or tomorrow, and then you can submit your example as a PR to Optim.

Uploading the Literate file as a gist, as @Tamas_Papp suggested, is also a good way to show us the tutorial.

I have added both the Markdown file and the Jupyter notebook as a gist, as @Tamas_Papp suggested. The URL is as follows:

https://gist.github.com/djlacombe/6c4ec789833514713df1d61f091cd89b.js

The name of the gist in both cases is “Maximum Likelihood Estimation in Julia: The Normal Linear Model”.

I tried to add a TIP to the Markdown, but it is not rendering correctly. I’m happy to make any corrections that anyone can point out, in terms of either errors or clarifications.

As for the speed of the work: I’ve been fortunate to discover this community and Julia, and I just wish to give something back.

Thanks!

Thanks for putting this together; this morning it helped me figure out how to do a similar problem.

Great :slight_smile:

If you wish, I can add your tutorial to the Literate PR I just made to Optim, but then I would need a gist of the original .jl file, not the Markdown or ipynb files.

Alternatively, you can follow the steps I did there and submit a PR yourself if you would like to do that instead (but then I advise waiting for my PR to be merged) :wink:

The Markdown TIP is written in a way that will work when the documentation is deployed (after it has been merged to the master branch of Optim). That is all taken care of automatically, so you can just copy-paste what I did in my example (you’ll have to change the file names, however :stuck_out_tongue: ).
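For anyone following along, the Documenter-style admonition is written inside the Literate markdown comments roughly like this (a sketch, not the exact text from the PR):

```julia
# !!! tip
#     This text renders as a "tip" box once the documentation is built
#     and deployed.
```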

Excellent! That’s what I was hoping for in putting this together.

I’ll probably add some more tutorials in the near future.

I’ve placed a public gist on the board at the following location:

https://gist.github.com/djlacombe/b13b4386b20106f5933985d71c0f8073.js

As you can see, it’s a Julia script and I’m happy for you to do the PR…

I’ll need to learn more about GitHub and PRs. For some reason PRs make me nervous, and I don’t want to be “that guy” who completely destroys the Julia ecosystem with one mouse click.

You can’t. With a version control system you can always backtrack, and in any case PRs are just suggestions, to be reviewed by the repo maintainers. It’s OK to make git mistakes too; you can always correct them.

Yup! Don’t be afraid. It’s really not that hard, and we’ll be happy to help you if some stages are not clear.

I’m revisiting this issue and had a quick question that I hope someone can answer.

fieldnames(fun2) doesn’t return the field names of the fun2 struct anymore.

Is there an alternative that I can use? I’d like to be able to recover some of these parameters.

Thank you for any help in advance.

You need to call fieldnames(typeof(fun2)) now, since fieldnames takes a type rather than an instance.
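For illustration, with a toy struct:

```julia
struct Demo
    a::Int
    b::Float64
end

d = Demo(1, 2.0)
fieldnames(typeof(d))  # (:a, :b)
fieldnames(Demo)       # also works; fieldnames takes a type, not an instance
```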