Dear All,
I am writing to you because I am having a lot of trouble with some concepts regarding Generalised Linear Models. I need to understand them by January 2012, and I hope that is possible.
I am currently working through a fairly advanced book on generalised linear models. I also have access to MATLAB, so I can test my ideas in that environment. I don't have much experience in statistics, but that does not mean I cannot understand the concepts at all. I have split this message into a few sections, each outlining a problem I am currently experiencing. I will be grateful for any comments and suggestions, and I am sorry for the length of this message.
Likelihood function
The smallest of my problems is the maximum likelihood function; I think I have only recently got it right. My understanding is that I can estimate the parameters of a particular probability distribution by finding the parameter values under which the observed data are most probable, meaning I need to find the maximum of the likelihood function. We do this by taking either the product of the probabilities or the sum of their logarithms and analysing that function with respect to the unknown parameters. The maximum of this function gives the best parameter estimates. I hope I've got that right.
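To check my understanding, I sketched the idea in Python (I can translate it to MATLAB later). The data values n and y below are made up purely for illustration; I maximise the binomial log-likelihood over a grid and compare against the closed-form estimate y/n:

```python
import math

# Hypothetical data: y successes observed in n Bernoulli trials.
n, y = 20, 13

def log_likelihood(p):
    # log L(p) = y*log(p) + (n - y)*log(1 - p), dropping the constant
    # binomial coefficient, which does not depend on p.
    return y * math.log(p) + (n - y) * math.log(1 - p)

# Maximise numerically over a fine grid of candidate p values.
candidates = [i / 10000 for i in range(1, 10000)]
p_hat = max(candidates, key=log_likelihood)

print(p_hat)  # should agree with the analytic estimate y/n = 0.65
```

Working with the sum of logs rather than the product avoids numerical underflow when many observations are multiplied together.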
Expected value
This is my worst nightmare at the moment. For some reason I cannot even get a feel for it, and every equation involving this term becomes hard to understand. Unfortunately, the information matrix also requires this term, so I need it to understand how the parameters of a probability density function are estimated analytically.
I know that for a simple discrete random variable the expected value is a weighted average, where the probabilities of the possible values act as the weights. However, in the book I am reading, expected values appear in forms such as:
-E[d^2 L(p_i) / dp_i^2]
for instance, for the binomial distribution it is:
-E[d^2 L(p) / dp^2] = E[y/p^2 + (n - y)/(1 - p)^2]
I don't know how to calculate this, and the book does not really explain it. I hope someone could either point me to some resources or simply explain how to approach it. Thanks.
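Here is how I would try to compute such an expectation by brute force in Python: I weight the bracketed expression by the binomial probabilities of each possible y (n and p are made-up example values). If I have understood the books correctly, the result should match n/(p(1-p)):

```python
import math

# Hypothetical example values for the binomial parameters.
n, p = 10, 0.3

def binom_pmf(k):
    # P(Y = k) for Y ~ Binomial(n, p)
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# E[y/p^2 + (n-y)/(1-p)^2]: weight the bracketed expression
# by the probability of each y that could have produced it.
expected_info = sum(
    binom_pmf(k) * (k / p**2 + (n - k) / (1 - p)**2)
    for k in range(n + 1)
)

closed_form = n / (p * (1 - p))  # using E[y] = n*p inside the bracket
print(expected_info, closed_form)
```

The shortcut, as far as I can tell, is that expectation is linear: replacing y by E[y] = np inside the bracket gives np/p^2 + (n - np)/(1-p)^2 = n/(p(1-p)) without summing anything.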
Generalised Linear Models
I would be grateful if some of you could clarify one thing I am struggling with. How do we move from the simple representation of the relationship between the independent variable and a categorical dependent variable, y vs x, to the probability of the outcome vs x? Does the error between the real and estimated values of the dependent variable have something to do with this? I have read about the link function, which maps the dependent variable onto a probability.
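For what it's worth, here is how I currently picture the logit link in Python; the coefficients b0 and b1 are invented purely for illustration, not fitted to any data:

```python
import math

def logit(p):
    # link function g: maps a probability in (0, 1) to the real line
    return math.log(p / (1 - p))

def inv_logit(eta):
    # inverse link: maps the linear predictor eta = b0 + b1*x
    # back into (0, 1), so it can be read as a probability
    return 1 / (1 + math.exp(-eta))

# Hypothetical coefficients, for illustration only.
b0, b1 = -2.0, 0.8
for x in [0, 2, 5]:
    print(x, inv_logit(b0 + b1 * x))
```

So, as I understand it, the linear model lives on the scale of eta, and the inverse link turns that into a probability for each x; I would appreciate confirmation that this is the right picture.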
Linear Regression Line
I've read that linear regression cannot be applied if the variation differs across the data. Why does non-constant variation make this method invalid? Why do Generalised Linear Models fit such data better, and how do they take non-constant variation into account?
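One thing I did notice while experimenting, which may be related: for a binomial response the variance is itself a function of the mean, Var(y) = n*p*(1-p), so observations with different means cannot all have the same spread. A small Python illustration (n chosen arbitrarily):

```python
# For a binomial response, the variance is tied to the mean:
# mu = n*p implies Var(y) = n*p*(1-p), so as the mean changes
# along x, the variance changes with it.
n = 20
for p in [0.1, 0.5, 0.9]:
    mu = n * p
    var = n * p * (1 - p)
    print(p, mu, var)
```

If that is the right way to look at it, then assuming a single constant variance, as ordinary linear regression does, seems wrong for such data, but I would like someone to confirm this reasoning.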
I will be grateful for your help. Thanks.