Likelihood Ratio Test for general hypotheses


The original paper (Wilks, 1938) describing the Likelihood Ratio Test considers a null hypothesis stating that a subset of the parameters from the parameter space are equal to fixed values.

Do you know of any extension or formal proof covering the LRT for null hypotheses in more general cases, such as functions of the parameters, for instance Theta1*Theta2 = constant?


I just found the proof, but I need some help understanding a fragment of it. In the Handbook of Econometrics, R. Engle shows how one can reduce a non-linear hypothesis g(Theta) = 0 to a set of hypotheses that assign preassigned values to a subset of the parameter vector.

He uses a Taylor expansion of g to reduce the constraint equation to:

G*Theta = G*Theta0, where G is the first-derivative (Jacobian) matrix of g, evaluated at Theta0.

Since the latter is a linear hypothesis, he says: “for any linear hypothesis one can always reparameterize by a linear non-singular matrix (A^-1)Theta = Phi. To do this let A2 have K - p columns in the orthogonal complement of G so that GA2 = 0. The remaining p columns of A, say A1, span the row space of G so that GA is non-singular.”

My question is: how can we operationally obtain this pair of matrices A1 and A2 for a concrete problem? Say g() is:
B1*B2 - B3*B4 = 0

Where B1…B4 are the parameters of my model. How can I find this decomposition?
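For what it's worth, here is how I currently understand the construction, written as a numerical sketch. This is only my attempt, not Engle's procedure verbatim: the idea would be to evaluate the gradient G of g at some point Theta0 satisfying the constraint, and then use an SVD of G to split R^K into the row space of G (giving A1) and its orthogonal complement, i.e. the null space (giving A2). The point beta0 below is an arbitrary illustrative choice.

```python
import numpy as np

# Constraint: g(beta) = beta1*beta2 - beta3*beta4 = 0, with K = 4 parameters
# and p = 1 restriction. beta0 is an arbitrary point satisfying g(beta0) = 0.
beta0 = np.array([1.0, 2.0, 2.0, 1.0])  # 1*2 - 2*1 = 0

# G = dg/dbeta evaluated at beta0: a 1 x 4 row vector here.
G = np.array([[beta0[1], beta0[0], -beta0[3], -beta0[2]]])

# SVD of G: the rows of Vt associated with nonzero singular values span the
# row space of G; the remaining rows span its orthogonal complement.
U, s, Vt = np.linalg.svd(G)
p = int(np.sum(s > 1e-12))  # rank of G (here 1)
A1 = Vt[:p].T               # K x p: spans the row space of G
A2 = Vt[p:].T               # K x (K - p): null space of G, so G @ A2 = 0
A = np.hstack([A1, A2])     # K x K, non-singular by construction

# Checks: G A2 = 0 and G A1 is a non-singular p x p block.
assert np.allclose(G @ A2, 0)
assert abs(np.linalg.det(G @ A1)) > 1e-12
```

With this A, the reparameterization Phi = (A^-1) Theta turns the linearized constraint into fixed values for the first p components of Phi, which I take to be the point of the construction. Does this match what Engle had in mind, or is there a more standard recipe?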

Any hint or pointer in the right direction will be greatly appreciated.