Statistics Help @ Talk Stats Forum - Regression Analysis
http://www.talkstats.com/
Linear regression, linear models, nonlinear regression
Wed, 01 Oct 2014 11:41:51 GMT
Time series forecasting
http://www.talkstats.com/showthread.php/57863-Time-series-forecasting?goto=newpost
Tue, 30 Sep 2014 14:22:36 GMT

Can anyone explain how to forecast volatility in stock markets using autoregressive conditional heteroskedasticity models such as TARCH and EGARCH?
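All models in this family forecast next-period variance from a recursion on past squared returns and past variance. A minimal sketch using plain GARCH(1,1), the simplest member (TARCH and EGARCH add asymmetry terms on top of it); the parameter values below are illustrative assumptions, not fitted estimates, and in practice you would estimate them by maximum likelihood (e.g. with the Python `arch` package):

```python
import math
import random

def garch11_forecast(returns, omega, alpha, beta):
    """One-step-ahead GARCH(1,1) variance forecast:
    sigma2_{t+1} = omega + alpha * r_t**2 + beta * sigma2_t.
    The recursion is started at the sample variance of the returns."""
    n = len(returns)
    mean = sum(returns) / n
    sigma2 = sum((r - mean) ** 2 for r in returns) / n
    for r in returns:
        sigma2 = omega + alpha * r ** 2 + beta * sigma2
    return sigma2

# Simulated daily returns standing in for real stock-market data.
random.seed(0)
rets = [random.gauss(0.0, 0.01) for _ in range(500)]

# Illustrative parameters; alpha + beta < 1 keeps the process stationary.
sigma2_next = garch11_forecast(rets, omega=1e-6, alpha=0.08, beta=0.90)
vol_next = math.sqrt(sigma2_next)  # next-day volatility forecast
```

Multi-step forecasts then iterate the same recursion with the expected squared return substituted for the realised one.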
Posted in Regression Analysis by timeseries
http://www.talkstats.com/showthread.php/57863-Time-series-forecasting

whole data vs training/test data
http://www.talkstats.com/showthread.php/57853-whole-data-vs-training-test-data?goto=newpost
Mon, 29 Sep 2014 19:41:14 GMT

I have time series data with about 150 samples and 8 variables. It is used to infer an interaction network whose exact structure is not yet known.

I can propose a model in two ways:
First approach: use 100 samples as the training (derivation) set and 50 as the test (validation) set. This results in fitting 8 nonlinear regression models, which in turn yield an interaction network involving all the variables.
Second approach: use the whole data set for modeling. This also gives me an interaction network; however, some of the interactions differ from those obtained with the first approach.

Which one is the better way?
I think approach one seems promising because it gives good validation results. However, in doing so we lose information that could otherwise be used during model development.

Approach one could be useful for simulated data, where the final structure is known.
Since the exact interactions are not known here, wouldn't it be better to use the second approach, so that most of the information in the data is used?

It would be great if anyone could point me to related research articles or case studies.
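One common compromise between the two approaches described above is to use the hold-out split only to validate the chosen model form, then refit that form on all the data for the final network. A minimal sketch with a single simulated predictor and ordinary least squares standing in for the 8 nonlinear regressions (the split sizes match the post; everything else is an illustrative assumption):

```python
import random

def fit_ols(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def mse(x, y, a, b):
    """Mean squared prediction error of y = a + b*x on (x, y)."""
    return sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y)) / len(x)

# 150 simulated samples; true relationship y = 2 + 0.5*x + noise.
random.seed(1)
x = [random.uniform(0, 10) for _ in range(150)]
y = [2.0 + 0.5 * xi + random.gauss(0, 1) for xi in x]

# Approach one: fit on 100 samples, validate on the held-out 50.
a1, b1 = fit_ols(x[:100], y[:100])
val_err = mse(x[100:], y[100:], a1, b1)  # honest out-of-sample error

# Approach two: once the model form passes validation,
# refit on all 150 samples so no information is discarded.
a2, b2 = fit_ols(x, y)
```

The validation error from the split tells you whether the model form generalises; the final coefficients come from the full-data refit.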
Posted in Regression Analysis by ice
http://www.talkstats.com/showthread.php/57853-whole-data-vs-training-test-data

Compute time to sell (with 50% probability) at Price P from linear regression
http://www.talkstats.com/showthread.php/57838-Compute-time-to-sell-(with-50-probability)-at-Price-P-from-linear-regression?goto=newpost
Mon, 29 Sep 2014 04:30:17 GMT

I think this is a very basic question (from a total stat newbie). I'm trying to understand a problem I was given (I'm not a statistics student). It's a multi-part problem having to do with car sales. In the first part, I ran a linear regression on car mileage/sales price data (price being the dependent variable). This gives me a function where I can plug in a mileage number and it will spit out a price.

The next part of the problem states "using your results from the linear regression, compute an expected time to sell given 3 prices - [P, P+10%, P-10%] and assuming a 50% sell probability" and I'm not sure what this means or how to go about it. I suspect this wording is less vague to people who are well-versed in statistics. Could someone shed some light on it for me? Thanks!
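Without the full problem statement this is guesswork, but one common reading is: the regression's fitted price is the "market price", time-to-sale is modelled as exponentially distributed with a rate that falls as the asking price rises above market, and the "50% sell probability" time is the median of that distribution, ln(2)/rate. A sketch under exactly those assumptions; the coefficients and the `rate_at` hazard function are hypothetical placeholders, not part of the original problem:

```python
import math

# Hypothetical fitted regression: price_hat = INTERCEPT + SLOPE * mileage.
INTERCEPT, SLOPE = 20000.0, -0.10  # assumed coefficients, not real estimates

def predicted_price(mileage):
    return INTERCEPT + SLOPE * mileage

def rate_at(asking, market, base_rate=0.5):
    """Hypothetical hazard: sales per unit time slow down
    as the asking price exceeds the regression-predicted price."""
    return base_rate * math.exp(-(asking - market) / market)

def median_time_to_sell(asking, market):
    """Exponential waiting time: P(sold by t) = 1 - exp(-rate * t),
    so the 50%-probability point is ln(2) / rate."""
    return math.log(2) / rate_at(asking, market)

market = predicted_price(50000)  # P, from the regression
times = {p: median_time_to_sell(p, market)
         for p in (market * 0.90, market, market * 1.10)}  # P-10%, P, P+10%
```

Whatever hazard the course intends, the structure is the same: map each price to a sale rate, then invert the 50% probability to a time.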
Posted in Regression Analysis by mellema
http://www.talkstats.com/showthread.php/57838-Compute-time-to-sell-(with-50-probability)-at-Price-P-from-linear-regression