I have used the maximum likelihood (ML) method to fit a linear model (y = mx + b) to a set of data points. The ML estimates are m=0.52, b=1.02, and the log-likelihood (LL) of the fitted model is LL = -934.23. Next, I proposed three additional (less likely) parameter sets so I can assess their plausibility. For m=0.52 & b=1.12, I get LL = -1016.58. For m=0.62 & b=1.02, I get LL = -1124.22. For m=0.62 & b=1.12, I get LL = -1306.96. The LL values for the four competing parameter sets are:

LL = -934.23 (ML estimate)

LL = -1016.58 (Hypothesis b)

LL = -1124.22 (Hypothesis c)

LL = -1306.96 (Hypothesis d)

Now I want to compare the LL value of each of my proposed parameter sets to the LL of the ML estimate. Most importantly, I want to find the range of LL values that I cannot reject at some predetermined confidence level (let's say 95%, i.e., alpha = 0.05). I stress that I do not care what the range of parameter values is... I only care about how much worse the LL value has to be before I reject the proposed parameter set.

Can someone please help me with the formula for this? I think I should use a relative log-likelihood calculation, but I can't find a clear answer. Could someone use the values above to show me how to do it? I would also like to learn how to do the calculation at a different confidence level, say 99%.
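To show where I am stuck, here is my rough attempt in Python (using scipy). It rests on my guess that Wilks' theorem applies, so that 2*(LL_ML - LL_proposed) is approximately chi-squared distributed with df equal to the number of free parameters (2 here: m and b). Please tell me if this guess is wrong:

```python
from scipy.stats import chi2

ll_ml = -934.23                                        # LL at the ML estimate
proposals = {"b": -1016.58, "c": -1124.22, "d": -1306.96}
df = 2                                                 # free parameters: m and b

# My guess: the maximum tolerable drop in LL before rejecting a proposal
# is half the chi-squared critical value with df degrees of freedom.
cutoff_95 = chi2.ppf(0.95, df) / 2.0
cutoff_99 = chi2.ppf(0.99, df) / 2.0

print(f"95% cutoff on LL drop: {cutoff_95:.3f} (reject if LL < {ll_ml - cutoff_95:.2f})")
print(f"99% cutoff on LL drop: {cutoff_99:.3f} (reject if LL < {ll_ml - cutoff_99:.2f})")

for name, ll in proposals.items():
    drop = ll_ml - ll
    verdict = "reject" if drop > cutoff_95 else "cannot reject"
    print(f"Hypothesis {name}: LL drop = {drop:.2f} -> {verdict} at 95%")
```

With my numbers the drops are all in the hundreds, so every hypothesis would be rejected at both levels, which seems plausible, but I am not sure the cutoff formula itself is right.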

Finally, suppose I have some data and I fit a three-parameter model (rather than a two-parameter model) to the observations and get an LL value. Can I use the same formula to find a "cannot-reject range" of worse LL values associated with alternative proposed values of the three-parameter set? Is there a difference in the calculation because the number of model parameters is higher? [Note: I am not comparing the LL of the three-parameter model to the LL of a two-parameter model... I would do that with the LRT if the models are nested, and with BIC scores if they are non-nested.]
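If my understanding is right, the only change for the three-parameter case would be the degrees of freedom of the chi-squared distribution. Here is a quick check of what the 95% cutoffs would look like under that assumption (again, please correct me if the idea itself is off):

```python
from scipy.stats import chi2

# Maximum tolerable LL drop at the 95% level, assuming the cutoff is
# half the chi-squared critical value with df = number of free parameters.
cutoffs = {df: chi2.ppf(0.95, df) / 2.0 for df in (2, 3)}

for df, cutoff in cutoffs.items():
    print(f"{df}-parameter model: reject if LL drop exceeds {cutoff:.3f}")
```

So the cutoff would simply be a bit larger with three parameters, if the same formula carries over.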

Cheers,

Steven