# Thread: Bookclub: ISLR

1. ## Re: Bookclub: ISLR

I am thinking of something like this: either build the model using a training set and validate it using a test set - or work backwards, find the model using all the data, but then require that there be some indication of the effect if we use a smaller random subset of the original data?

2. ## Re: Bookclub: ISLR

Yes,
or turning it around: if there is no indication, or there are contradictory indications, of an effect in smaller data sets, then I think it would be legitimate to conclude that the effect is just a fluke.
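
To make the idea concrete, here is a minimal sketch in R of such a subsample-stability check (everything below is illustrative simulated data, not an example from the book):

```r
## Refit the model on random half-sized subsets and check whether the
## coefficient of interest keeps its sign and rough magnitude.
set.seed(1)
n <- 200
x <- rnorm(n)
y <- 2 * x + rnorm(n)              # a genuine effect of x on y
dat <- data.frame(x, y)

subset_slopes <- replicate(100, {
  idx <- sample(n, size = n / 2)            # random half of the data
  coef(lm(y ~ x, data = dat[idx, ]))["x"]   # slope fitted on the subset
})
summary(subset_slopes)  # a real effect should show up consistently;
                        # a fluke would flip sign or vanish in many subsets
```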

regards

3. ## Re: Bookclub: ISLR

So,
a bit late due to the year-end hassle, but still keeping at it - I am now working on chap. 6 - regression, and especially model selection, ridge and the lasso.

I just finished ex. 8, where I had to generate a random X and a Y that is a degree-3 polynomial function of X, plus noise of course. Then I generated the powers of X up to 10 and tried to find a regression model that correctly describes the X-Y relationship.
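
For reference, the data generation went roughly like this (the particular coefficient values are my own illustrative choices, not prescribed by the exercise):

```r
set.seed(1)
n   <- 100
x   <- rnorm(n)
eps <- rnorm(n)
y   <- 1 + 2 * x + 3 * x^2 + 4 * x^3 + eps   # degree-3 polynomial plus noise

## data frame with the powers of x up to 10 as candidate predictors
dat <- data.frame(y, poly(x, 10, raw = TRUE))
names(dat) <- c("y", paste0("x", 1:10))
```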

My first surprise was that the regsubsets function from the leaps package did a pretty good job of identifying the model with 3 variables. I tried three selection criteria: Cp, adjusted R-squared and BIC. If I went for the numerical optimum, only BIC picked the right model, but if I went for the "knee" in the graphical representation, all three clearly identified the 3-variable model as the best one.
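
The best-subset run, as a sketch assuming the `dat` data frame from the snippet above:

```r
library(leaps)

fit <- regsubsets(y ~ ., data = dat, nvmax = 10)
s   <- summary(fit)

which.min(s$cp)     # model size with minimal Mallows' Cp
which.max(s$adjr2)  # model size with maximal adjusted R-squared
which.min(s$bic)    # model size with minimal BIC

## the "knee" is easiest to spot graphically
plot(s$bic, type = "b", xlab = "Number of variables", ylab = "BIC")
```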

Using the lasso, the "best" model found by cross-validation also had 3 variables, but only if I picked lambda.1se and not lambda.min - which was my intuitive choice anyway.
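
The lasso part, again sketched over the same simulated data:

```r
library(glmnet)

X  <- as.matrix(dat[, -1])           # the powers of x as a matrix
cv <- cv.glmnet(X, dat$y, alpha = 1) # alpha = 1 is the lasso

coef(cv, s = "lambda.min")  # the CV-minimizing fit
coef(cv, s = "lambda.1se")  # the sparser fit, which kept 3 variables for me
```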

Since I knew the true parameter values, I could also compare the lm model's estimates to those of the lasso - and interestingly, the lm model was somewhat better. Comparing the MSE on a new set of similarly generated data, lm also performed better.
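
The out-of-sample comparison, assuming the objects from the snippets above (the size of the new sample is arbitrary):

```r
## lm on the three true powers vs. the cross-validated lasso
lm_fit <- lm(y ~ x1 + x2 + x3, data = dat)

x_new <- rnorm(1000)
y_new <- 1 + 2 * x_new + 3 * x_new^2 + 4 * x_new^3 + rnorm(1000)
new_dat <- data.frame(poly(x_new, 10, raw = TRUE))
names(new_dat) <- paste0("x", 1:10)

c(lm    = mean((y_new - predict(lm_fit, new_dat))^2),
  lasso = mean((y_new - predict(cv, as.matrix(new_dat), s = "lambda.1se"))^2))
```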

So, I repeated the exercise after adding a lot more noise. In this case the performance of the lasso, MSE-wise, was closer to that of the lm, but the simple lm model was still better.

regards

4. ## Re: Bookclub: ISLR

I am jealous - I am still a little too busy and lazy to commit. The lasso seems to outperform when it is more of a p > n scenario, I believe, and when variables may be correlated. And as you know, the CV helps more with overfitting and out-of-sample application.

5. ## Re: Bookclub: ISLR

I got slowed down by work but still have the ambition to continue - so, the last exercise for chapter 6: predicting the crime rate in Boston, using the Boston dataset from the MASS library.

The task is to apply all the models that were developed in the chapter. I set aside a random sample of 100 data points for testing and left the remaining 406 in the training set.
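
The split itself is short (the seed is arbitrary):

```r
library(MASS)

set.seed(1)
test_idx <- sample(nrow(Boston), 100)  # Boston has 506 rows
test     <- Boston[test_idx, ]
train    <- Boston[-test_idx, ]        # the remaining 406 rows
```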

The first thing I learned is that in the presence of outliers, the test-set performance of the models can be hugely variable. For the exact same model, depending on the test set, I could get an MSE of 100 or of 10. The effect depended on whether some outliers ended up in the test set or not - of course, an outlier in the test set has no influence on the fitted model, but it generates a large residual.

So, comparing the methods: again, the simple regression (with interactions) performed better on average than either the lasso or ridge regression. PCR was somewhere in between the regression and the lasso, while PLS got very close to the simple regression. Given how much more difficult a PLS model would be to explain compared to the regression, the simple regression still seems to be the winner - but the number of variables was really not high enough to show the advantages of the more sophisticated methods.
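
Roughly how I set up the comparison - a sketch over the split above, with the plain main-effects models (interactions are added as in the next snippet); the lambda and ncomp choices are illustrative and would be read off the CV output:

```r
library(glmnet)
library(pls)

x_train <- model.matrix(crim ~ ., train)[, -1]
x_test  <- model.matrix(crim ~ ., test)[, -1]

ols      <- lm(crim ~ ., data = train)
cv_ridge <- cv.glmnet(x_train, train$crim, alpha = 0)
cv_lasso <- cv.glmnet(x_train, train$crim, alpha = 1)
pcr_fit  <- pcr(crim ~ ., data = train, scale = TRUE, validation = "CV")
pls_fit  <- plsr(crim ~ ., data = train, scale = TRUE, validation = "CV")

mse <- function(pred) mean((test$crim - pred)^2)
c(lm    = mse(predict(ols, test)),
  ridge = mse(predict(cv_ridge, x_test, s = "lambda.min")),
  lasso = mse(predict(cv_lasso, x_test, s = "lambda.min")),
  pcr   = mse(predict(pcr_fit, test, ncomp = 8)),  # ncomp picked from the CV plot
  pls   = mse(predict(pls_fit, test, ncomp = 8)))
```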

Another point: it does make sense to include nonlinearities and interactions in the models. This would be easy with a simple regression; for all the others I just added product columns to the dataset (I could try squares as well). The ranking of the models did not change, but the MSEs went down for all of them.
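
For the matrix-based methods, the product columns can be generated in one go (a sketch, reusing train and test from the split above):

```r
## main effects plus all pairwise products for the matrix-based methods
x_train2 <- model.matrix(crim ~ .^2, train)[, -1]
x_test2  <- model.matrix(crim ~ .^2, test)[, -1]
## squares could be added similarly, e.g. cbind(x_train2, nox2 = train$nox^2)
```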

Also, the outliers complicate the modelling a lot, so exploratory analysis is a must before any modelling. This does not seem like a great discovery, but one tends to forget it in the heat of a project.

So, on to chapter 7...
