Extracting the loglikelihood function from GLM?

I am trying to extract the log-likelihood function that maps the parameter vector to each observation's contribution. The goal is to use this function to compute the score and Hessian for each observation, for use in inference.

Ideally, estimation would produce a method such as GLM.loglik_obs that maps the model parameter vector to a vector of per-observation log-likelihood contributions, with length equal to the sample size.

Is there a way to do that?
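To be concrete, something like this hypothetical interface (loglik_obs with this signature is imagined, not an existing GLM method; estimates is a fitted model as in the example below):

ll_obs = GLM.loglik_obs(estimates)   # hypothetical: returns θ -> per-observation log-likelihoods
ll_obs(θ)                            # Vector with length equal to GLM.nobs(estimates)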

The alternative I am using is to code the log-likelihood myself; below is a working example for a logit MLE.

Thanks!

#Julia Version 1.9.4
#Commit 8e5136fa297 (2023-11-14 08:46 UTC)
#Build Info:
#  Official https://julialang.org/ release
#  [38e38edf] GLM v1.8.3
import GLM
import StatsFuns as StatF
import DataFrames as DF
import StatsBase
import CovarianceMatrices as COV



function main()
    data = DF.DataFrame(Y=[1,1,1,1,0,0,0,0], X=[1,2,4,2,11,7,9,12])
    estimates = GLM.glm(GLM.@formula(Y ~ 0 + X), data, GLM.Bernoulli(), GLM.LogitLink())
    θhat, ll0 = GLM.coef(estimates), GLM.loglikelihood(estimates)
    # Per-observation log-likelihood as a function of the parameter vector θ.
    # For the logit with z = x'θ: log p = -log1pexp(-z) and log(1 - p) = -log1pexp(-z) - z.
    function loglike_obs(θ)
        let Y = GLM.response(estimates), XX = estimates.mm.m
            Z = XX * θ
            ll = map((y, z) -> y == 1 ? -StatF.log1pexp(-z) : -StatF.log1pexp(-z) - z, Y, Z)
            return ll
        end
    end
    loglike(θ) = sum(loglike_obs(θ))
    # Forward-difference score and Hessian (valid here because θ has a single element).
    ϵ = 1e-7
    fscore(θ) = (loglike_obs(θ .+ ϵ) .- loglike_obs(θ)) ./ ϵ
    score = fscore(θhat)
    hess = (fscore(θhat .+ ϵ) .- fscore(θhat)) ./ ϵ
    SSp = StatsBase.mean(x -> x^2, score)   # average squared score (scalar case)
    Vh = inv(-StatsBase.mean(hess))         # inverse of minus the average Hessian
    Vs = Vh * SSp * Vh                      # sandwich variance: H⁻¹ S H⁻¹
    println(" GLM log-likelihood: ", ll0, " , manual ll: ", loglike(θhat), " , θ: ", θhat)
    println(" Manual SE from Hessian: ", sqrt(Vh / GLM.nobs(estimates)), ", SE GLM: ", GLM.stderror(estimates) |> only)
    println(" Manual SE from sandwich: ", sqrt(Vs / GLM.nobs(estimates)), ", SE CovarianceMatrices HC0: ", COV.stderror(COV.HC0(), estimates) |> only)
    return estimates
end #main

main()

Maybe worth an issue, as the doc-string states:

help?> GLM.loglikelihood
  loglikelihood(model::StatisticalModel)
  loglikelihood(model::StatisticalModel, observation)

  Return the log-likelihood of the model.

  With an observation argument, return the contribution of observation to the
  log-likelihood of model.

  If observation is a Colon, return a vector of each observation's
  contribution to the log-likelihood of the model. In other words, this is the
  vector of the pointwise log-likelihood contributions.

  In general, sum(loglikelihood(model, :)) == loglikelihood(model).

But GLM does not seem to implement the loglikelihood(model::StatisticalModel, observation) methods, and thus does not fully support the StatisticalModel interface.
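For instance, with the model fitted in the first post (GLM v1.8.3):

GLM.loglikelihood(estimates, :)   # presumably a MethodError: only the one-argument method is defined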


Hi Nils, thanks for your reply.

While you are right that GLM.loglikelihood(model, i) doesn't work, fixing that wouldn't address my issue, because that method would return the likelihood evaluated only at the estimated parameter values.

But what I am looking for is a function that evaluates the likelihood at each observation for any parameter value so that I can perturb it to calculate its derivative.

Another approach to this problem:

Does GLM return any callable objects (::Function) that can be used to calculate the moment matrix of the model at different parameter values?

Sorry, I missed that you want to pass your own parameters.
In that case, I guess GLM does not support it and your manual approach is fine. I would probably just go ahead and code the full GLM manually (possibly even using Flux or something for parameter handling):

import Distributions as PD
import GLM
import StatsModels

# Per-observation log-likelihood of a Bernoulli GLM at the parameter vector `params`.
function myglmloglik(fm, data, params)
    # Build the response and model matrix from the formula `fm` and the data table.
    y, X = StatsModels.modelcols(StatsModels.apply_schema(fm, StatsModels.schema(fm, data)), data)
    p = GLM.linkinv.(GLM.LogitLink(), X * params)   # success probabilities
    PD.logpdf.(PD.Bernoulli.(p), y)                 # vector of pointwise contributions
end

loss(θ) = sum(myglmloglik(fm, data, θ))

At least, this would work with several AD backends and the model could easily be generalized by passing a Flux layer instead of params.
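For example, the per-observation score matrix and the Hessian could then come from ForwardDiff (a sketch, assuming fm and data are the formula and table used with myglmloglik above; Bernoulli's logpdf in Distributions differentiates fine with ForwardDiff):

import ForwardDiff

scoremat(θ) = ForwardDiff.jacobian(p -> myglmloglik(fm, data, p), θ)      # n×k matrix of per-observation scores
hessmat(θ) = ForwardDiff.hessian(p -> sum(myglmloglik(fm, data, p)), θ)   # k×k Hessian of the total log-likelihood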


Thanks a lot, Nils! That works.

And the code can be generalized to handle both logit and probit by using

fm.model.rr.link

instead of

GLM.LogitLink()

where fm here is the fitted model returned by GLM.glm (estimates in the first example), not the formula. So, in the code above:

p = GLM.linkinv.(fm.model.rr.link, X * params)
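Putting it together, a minimal sketch of the generalized per-observation log-likelihood, taking the fitted model directly (glmloglik_obs is my own name, not GLM API; the Bernoulli part is still hard-coded, so this covers logit and probit but not other families):

function glmloglik_obs(est, θ)
    y = GLM.response(est)              # response vector
    X = est.mm.m                       # model matrix stored in the fitted model
    link = est.model.rr.link           # e.g. LogitLink() or ProbitLink()
    μ = GLM.linkinv.(link, X * θ)      # success probability for each observation
    PD.logpdf.(PD.Bernoulli.(μ), y)    # pointwise log-likelihood contributions
end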

Something else that may be helpful for someone else:

For the logit and the probit, you can construct the inverse of the expected Hessian with

GLM.nobs(estimates) * GLM.invchol(estimates.model.pp)

The standard errors reported by GLM are

sqrt.(LinearAlgebra.diag(GLM.invchol(estimates.model.pp)))
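As a quick check (a sketch, assuming the estimates object from main() above; for a Bernoulli model the dispersion is 1, so the match should be exact up to floating point):

import LinearAlgebra

Vh_glm = GLM.nobs(estimates) * GLM.invchol(estimates.model.pp)       # inverse expected Hessian, per-observation scale
se_glm = sqrt.(LinearAlgebra.diag(GLM.invchol(estimates.model.pp)))  # GLM's reported standard errors
se_glm ≈ GLM.stderror(estimates)                                     # expected to hold (dispersion == 1 for Bernoulli)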

Many thanks to Giuseppe Ragusa, the developer of CovarianceMatrices.jl, for helping me out with this. Also have a look at the function CovarianceMatrices.bread().