Determining residual outliers when heteroskedasticity exists

#1
Hi,

I was wondering if anyone has any thoughts on the best way to approach the detection of outliers based on residuals from a linear regression when heteroskedasticity exists. For example, if the variance of the dependent variable increases as an independent variable increases, then calculating the Z-score of a particular residual relative to all of the other residuals does not seem to make sense, since the residuals no longer share a common variance.
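
To make this concrete, here is a rough sketch in Python/statsmodels with simulated data (the model and the numbers are made up, just to show the pattern I have in mind):

```python
import numpy as np
import statsmodels.api as sm

# Simulated data where the error variance grows with x (heteroskedasticity).
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(scale=0.2 * x, size=n)  # noise sd proportional to x

model = sm.OLS(y, sm.add_constant(x)).fit()
resid = model.resid

# Naive outlier flag: z-score of each residual against all residuals.
z = (resid - resid.mean()) / resid.std(ddof=1)
flagged = np.abs(z) > 2

# The flags concentrate where x (and hence the variance) is large,
# even though those points are not unusual relative to their own variance.
print("mean x of flagged points:", x[flagged].mean())
print("mean x overall:", x.mean())
```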

Any thoughts?
 

hlsmith

Omega Contributor
#2
Side question: if you have heteroscedasticity, why are you worried about outliers and moving forward, rather than first addressing that assumption of linear regression?
 

kiton

New Member
#3
hlsmith said:
Side question: if you have heteroscedasticity, why are you worried about outliers and moving forward, rather than first addressing that assumption of linear regression?

I guess the assumption here could be that outliers in the data are what cause the heteroskedasticity. If that is the case, then a plot of the residuals against the variables in the model, like the sketch below, might help reveal which specific variables cause the issue.
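
Something along these lines (a Python/statsmodels sketch with simulated data; the variable names and numbers are invented for illustration) is what I have in mind:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm

# Made-up data with two predictors; x2 drives the error variance here.
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({"x1": rng.normal(size=n), "x2": rng.uniform(0, 10, n)})
df["y"] = 1.0 + 0.3 * df["x1"] + 0.5 * df["x2"] + rng.normal(scale=0.2 * df["x2"], size=n)

model = sm.OLS(df["y"], sm.add_constant(df[["x1", "x2"]])).fit()

# Residuals against each predictor: a fan shape (or a cluster of extreme
# points in one region) points to the variable driving the problem.
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, var in zip(axes, ["x1", "x2"]):
    ax.scatter(df[var], model.resid, alpha=0.6)
    ax.axhline(0, color="grey", linewidth=1)
    ax.set_xlabel(var)
    ax.set_ylabel("residual")
plt.tight_layout()
plt.show()
```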
 

hlsmith

Omega Contributor
#4
I need to visualize these things, but if there were a nonlinear pattern in the residuals, your comment could come into play, perhaps?