Exploring OLS and GLM Models: Understanding the Link Function and Coefficients

  • #1
fog37
TL;DR Summary
OLS in GLM models...
Hello,

I know this is a big topic, but I would like to check that what I understand so far is at least correct; I will look into it further. GLM is a family of statistical models in which the linear predictor is linear in the coefficients ##\beta##. The relation between ##Y## and the covariates ##X## can be nonlinear (e.g., polynomial regression and logistic regression). The relation we need to look at is the one between the link function and the coefficients. For example, in logistic regression the probability ##p## is related to the covariates ##X## via a sigmoid equation, so ##p## and the ##\beta##s are not in a linear relation. But the logit and the ##\beta##s are!
  • OLS is the "best" method to find the unknown coefficients when the model is linear regression (simple or multiple); under the Gauss–Markov assumptions it is the best linear unbiased estimator. OLS is also the "best" method when the model is polynomial regression, which is still linear in the coefficients (linear regression being a special case of it).
  • However, in the case of logistic regression, we cannot use OLS to compute the estimated coefficients. I initially wondered why, since the log of the odds is a linear (straight-line) function of the covariates: $$\log(\text{odds}) = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \dots + \beta_k X_k$$
I thought we could use OLS to find the coefficients in the equation ##\log(\text{odds})=\log\left(\frac{p}{1-p}\right)##, given the straight-line relation with the ##X## variables, and then, via simple algebraic transformations, find the probability ##p##, which is related to the covariates ##X## via the sigmoid function (see the sketch below). I believe the reason we cannot use OLS to find the ##\beta##s for logistic regression is that the OLS assumptions are violated, so the estimated ##\beta##s would be quite wrong. So we have to resort to iterative maximum likelihood estimation (MLE) to find the ##\beta##s.
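To make the link-function relationship concrete, here is a minimal numerical sketch (the coefficient and covariate values are made up purely for illustration): the logit is linear in the ##\beta##s, while ##p## is the sigmoid of that linear predictor.

```python
import numpy as np

# Hypothetical coefficients and covariate values, chosen only for illustration.
beta0, beta1, beta2 = -1.0, 0.5, 2.0
X1, X2 = 1.2, 0.3

# The linear predictor: the logit (log-odds) is linear in the betas.
logit = beta0 + beta1 * X1 + beta2 * X2

# Inverting the link gives p via the sigmoid, which is NOT linear in the betas.
p = 1.0 / (1.0 + np.exp(-logit))

# Round trip: applying the logit transform to p recovers the linear predictor.
assert np.isclose(np.log(p / (1 - p)), logit)
print(f"logit = {logit:.4f}, p = {p:.4f}")
```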

Am I on the right track? Any corrections? Thank you!
 
  • #2
It depends on what you mean by ordinary least squares (OLS). If you just mean minimising the sum of squared errors (SSE), then that still provides a consistent estimator. But we can't just use the closed-form formulas that give OLS solutions for simple linear regressions, because the estimates ##\hat p_i## are nonlinear functions of the regressors, which violates the assumptions used to derive those formulas. To minimise the SSE we need to use an iterative, non-linear optimiser. Or we can forget about SSE and use MLE instead, which is also a non-closed-form, iterative approach.
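A small sketch of both iterative routes on simulated data (the data-generating betas are made up, and scipy's generic optimiser stands in for whatever iterative scheme a real package would use):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated binary data from a logistic model with known betas (illustrative).
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.5])
p_true = 1 / (1 + np.exp(-X @ beta_true))
y = rng.binomial(1, p_true)

def probs(beta):
    # Fitted probabilities: sigmoid of the linear predictor.
    return 1 / (1 + np.exp(-X @ beta))

# Route 1 -- nonlinear least squares: minimise the SSE; no closed form, so iterate.
sse = lambda beta: np.sum((y - probs(beta)) ** 2)
beta_sse = minimize(sse, x0=np.zeros(2)).x

# Route 2 -- MLE: minimise the negative Bernoulli log-likelihood, also iteratively.
def nll(beta):
    p = np.clip(probs(beta), 1e-12, 1 - 1e-12)  # clip for numerical safety
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
beta_mle = minimize(nll, x0=np.zeros(2)).x

print("true:", beta_true, "SSE fit:", beta_sse.round(3), "MLE fit:", beta_mle.round(3))
```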
 
  • #3
The logit model was fitted for a long time via linear regression. The problems arise, e.g., at points with ##p=0## or ##p=1##, where the logit is undefined. Also, the variance of ##\log\frac{p}{1-p}## varies with ##p##, hence unweighted linear regression will not be efficient.
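For illustration, a minimal sketch of that classical approach on hypothetical grouped data: compute the empirical logit in each group and run weighted least squares, with weights approximating the reciprocal of the logit's variance (essentially Berkson's minimum logit chi-square method; the dose values and counts below are invented):

```python
import numpy as np

# Hypothetical grouped binomial data: at each dose x_i, k_i successes out of n_i.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
n = np.array([40, 40, 40, 40, 40])
k = np.array([2, 8, 19, 31, 38])

p_hat = k / n                          # a cell with p_hat = 0 or 1 would make the
logit = np.log(p_hat / (1 - p_hat))    # empirical logit infinite -- the known failure

# Var(empirical logit) is approximately 1 / (n p (1-p)), so weight by its
# reciprocal; unweighted regression would treat all cells as equally precise.
w = n * p_hat * (1 - p_hat)
X = np.column_stack([np.ones_like(x), x])
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ logit)
print("WLS fit on empirical logits:", beta.round(3))
```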
 
  • #4
Good answers above. Also:

- if you tried to use LS regression with the original data for a logistic binary classification problem, none of the usual inference procedures would be justified, since they require a continuous response;
- if you tried to use LS regression with a multinomial classification problem, where you coded the ##k## response levels ##1, 2, 3, \dots, k##, you would be implying an ordering of importance of the levels, and the results would differ for different orderings (see the sketch after this list).
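A quick sketch of that second point (invented three-class data; the two codings are equally arbitrary relabelings of the same classes):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3-class data whose class depends on a single covariate x.
x = rng.normal(size=200)
labels = np.digitize(x + rng.normal(scale=0.5, size=200), [-0.5, 0.5])  # 0, 1, 2

X = np.column_stack([np.ones_like(x), x])

def ols_slope(y):
    # Slope coefficient from an ordinary least-squares fit of y on x.
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Two equally arbitrary numeric codings of the same three classes.
coding1 = np.array([1.0, 2.0, 3.0])[labels]  # a=1, b=2, c=3
coding2 = np.array([2.0, 1.0, 3.0])[labels]  # a=2, b=1, c=3 (two levels swapped)

print("slope under coding 1:", round(ols_slope(coding1), 4))
print("slope under coding 2:", round(ols_slope(coding2), 4))
# Different codings give different fitted slopes for the same classification data.
```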
 
