I’m not sure that *relative likelihood* is the best term for `pdf(dist, 1)`, since it returns the probability density at the value x = 1, which is not necessarily a likelihood. For it to be called a likelihood, the data should be held fixed and the pdf evaluated as a function of the distribution’s parameter(s), i.e. over \mu or \sigma in the case of a Normal distribution. For example, with known \sigma = 1 and observed data x = 1, the likelihood would be something like

likelihood(mu) = pdf(Normal(mu, 1), 1)

and with known \mu = 2 and x = 1,

likelihood(sigma) = pdf(Normal(2, sigma), 1)

or analogously with both parameters treated as unknown.
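To make the distinction concrete, here is a minimal Python sketch of the same idea (the post’s snippets use Distributions.jl-style `pdf(Normal(mu, sigma), x)`; here the Normal density is written out by hand so the example is self-contained):

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of Normal(mu, sigma) at x — plays the role of pdf(Normal(mu, sigma), x)."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# The likelihood fixes the observed data and varies a parameter.
# Known sigma = 1, observed x = 1: likelihood as a function of mu.
def likelihood_mu(mu, x=1.0, sigma=1.0):
    return normal_pdf(x, mu, sigma)

# Known mu = 2, observed x = 1: likelihood as a function of sigma.
def likelihood_sigma(sigma, x=1.0, mu=2.0):
    return normal_pdf(x, mu, sigma)

# The likelihood over mu peaks at the observed value x = 1:
print(likelihood_mu(1.0))  # 1/sqrt(2*pi) ≈ 0.3989
print(likelihood_mu(0.0))  # smaller, since mu = 0 fits x = 1 less well
```

Same function `normal_pdf`, two readings: fix the parameters and vary x and it’s a density; fix x and vary a parameter and it’s a likelihood.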