
### Confidence Intervals: Forecast vs. Makemodel

Posted: Tue Aug 28, 2018 7:41 am
Dear all,

I am a bit confused and really hope you can help me. I estimated several models (time series as well as pooled regressions). For a time series regression I can hit the Forecast button and obtain the standard errors and confidence intervals of the forecast - no problem at all. But when I solve the model via "Make Model", which seems to be necessary for a pooled regression, the confidence intervals are very narrow. I compared both standard errors for a time series regression ("Forecast" vs. "Make Model"), and the "Make Model" standard errors were indeed much smaller than the ones produced by the Forecast button.

I read Chapter 23 of the EViews guide but found no sufficient answer. Page 144 states that the forecast standard errors are computed as follows: forecast se = s * sqrt(1 + x_t'(X'X)^(-1) x_t). I tried to calculate the standard errors on my own but failed with the dimensions. As far as I understand it, x_t should be the coefficient matrix at time t (the row vector, respectively), but that does not match the dimensions of the inverted matrix.

Can anyone please explain to me the difference between the standard error calculation of the normal forecast and that of the Make Model prediction?
Is there another way to match both standard errors so that I can properly calculate the confidence interval for a pooled regression?
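For intuition, the documented formula can be reproduced on simulated data. One point that resolves the dimension mismatch: x_t is the row of regressor values at time t (a k-vector), not the coefficient vector. The sketch below is plain Python with made-up data, not EViews code:

```python
import numpy as np

# Illustrative sketch (not EViews code): reproduce the documented formula
#   forecast se = s * sqrt(1 + x_t' (X'X)^{-1} x_t)
# on simulated data. x_t is the row of REGRESSOR values at time t
# (a k-vector), which is why it conforms with the k-by-k inverse (X'X)^{-1}.
rng = np.random.default_rng(0)
n, k = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(scale=2.0, size=n)

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y                    # OLS coefficient estimates
resid = y - X @ b
s = np.sqrt(resid @ resid / (n - k))     # regression standard error

x_t = np.array([1.0, 0.2, -1.1])         # hypothetical out-of-sample regressors
forecast_se = s * np.sqrt(1.0 + x_t @ XtX_inv @ x_t)
print(forecast_se)                       # slightly larger than s
```

The quadratic form x_t'(X'X)^(-1) x_t adds the coefficient-estimation uncertainty on top of the innovation variance, so the forecast standard error always exceeds s.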

### Re: Confidence Intervals: Forecast vs. Makemodel

Posted: Tue Aug 28, 2018 8:02 am
Without knowing more about the exact specification of your equations it is hard to say, but the most obvious difference is that an equation's forecast standard errors are, in general, computed analytically, whereas a model produces them by simulation (you do not say whether you are using a stochastic solve to obtain the model standard errors, but I assume that is the case).
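The analytic/simulation distinction can be made concrete with a small sketch (plain Python on simulated data, not EViews internals): a Monte Carlo solve that draws both coefficients and innovations should roughly match the analytic formula, while dropping the coefficient draws shrinks the simulated standard error toward s - one way overly narrow bands can arise:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(scale=2.0, size=n)

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y
resid = y - X @ b
s2 = resid @ resid / (n - k)             # residual variance estimate

x_t = np.array([1.0, 0.2, -1.1])         # hypothetical forecast-period regressors
reps = 100_000

# Analytic forecast SE, for comparison
se_analytic = np.sqrt(s2 * (1.0 + x_t @ XtX_inv @ x_t))

# Stochastic simulation: draw coefficients from their estimated sampling
# distribution AND draw innovations, then take the std of the forecasts.
beta_draws = rng.multivariate_normal(b, s2 * XtX_inv, size=reps)
eps_draws = rng.normal(scale=np.sqrt(s2), size=reps)
se_sim = (beta_draws @ x_t + eps_draws).std()

# Without coefficient uncertainty, only the innovation draws remain,
# so the simulated SE collapses toward s - i.e., it is too small.
se_no_coef = (b @ x_t + eps_draws).std()
print(se_analytic, se_sim, se_no_coef)
```

With enough replications se_sim converges to se_analytic, while se_no_coef stays strictly smaller.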

### Re: Confidence Intervals: Forecast vs. Makemodel

Posted: Wed Aug 29, 2018 4:09 am
I ran two types of regressions. The first is a Fully Modified OLS regression with Newey-West standard errors in a log-log specification. When I click Forecast, the standard errors and the corresponding confidence intervals seem to be correct - I used a static forecast with coefficient uncertainty included in the S.E. calculation and forecasted the level variable (instead of the logs). When I compare the standard errors of the estimated equation with those calculated by "Make Model", I find severe differences, although the predicted values are the same. In this case I can use the Forecast button to produce the standard errors.

The second model, the one I am ultimately interested in, is a Pooled EGLS estimation with fixed effects and cross-section weights. When I click "Make Model" after the estimation and break the links of the equations, I can solve the model equation by equation. I have tried every possible option for solving this model, but whatever I do, the standard errors are much too small - especially when I run a stochastic simulation with a static solution and include coefficient uncertainty. With a stochastic simulation it is possible to calculate the standard deviation and bounds, but the bounds are far too small to be correct.

I know that the standard deviation and standard error are computed differently in a forecast than in a normal regression, but I am not sure how the formula works exactly. Does anyone know how to compute the standard errors of the "Make Model" approach by hand so that they are similar to the normal forecast approach?
I would love to make both standard error computations comparable.
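Once a forecast standard error is in hand (from either approach), turning it into a confidence interval is the easy part. A minimal sketch with illustrative numbers (not taken from the thread's models), assuming a normal approximation:

```python
from statistics import NormalDist

# Illustrative values only: a point forecast and its forecast standard error.
y_hat = 10.0
se = 1.5

# Two-sided 95% interval under a normal approximation; with a small
# estimation sample, a t critical value with (n - k) degrees of freedom
# would be the more careful choice.
z = NormalDist().inv_cdf(0.975)
lower, upper = y_hat - z * se, y_hat + z * se
```

The discrepancy in the thread is therefore entirely about which se enters this calculation, not about the interval construction itself.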

Thank you.

### Re: Confidence Intervals: Forecast vs. Makemodel

Posted: Thu Aug 30, 2018 8:55 am
We are looking into this for you. It may take a bit but we will let you know what we discover.

### Re: Confidence Intervals: Forecast vs. Makemodel

Posted: Mon Sep 03, 2018 6:20 am
That's great! Thank you so much!

### Re: Confidence Intervals: Forecast vs. Makemodel

Posted: Tue Sep 11, 2018 3:54 pm
Is there a chance you can share your workfile? In all of my test cases I'm not seeing this discrepancy. (Note that the model simulation doesn't include the coefficient uncertainty, but in my cases I'm not seeing much difference from that.)

### Re: Confidence Intervals: Forecast vs. Makemodel

Posted: Mon Apr 20, 2020 3:14 pm
Dear Glenn,

I have the same problem when I try to estimate the confidence intervals using the Forecast option versus making a model.
I am sharing my program (it uses simulated data).

### Re: Confidence Intervals: Forecast vs. Makemodel

Posted: Mon Apr 20, 2020 3:20 pm

### Re: Confidence Intervals: Forecast vs. Makemodel

Posted: Fri Apr 24, 2020 10:44 am
Which version and build date of EViews? (Help->About EViews). I just ran your program in an up-to-date EViews 10 and 11, and both gave identical results between model and equation.

### Re: Confidence Intervals: Forecast vs. Makemodel

Posted: Sat Apr 25, 2020 3:03 pm
Thank you! The build date was June 28th, 2016 (EViews 9). I don't know what the problem was, but when I run the program in EViews 10 the difference disappears. I don't remember any notification about a modification to the programming of these commands.
Anyway, thank you for your help!
Just one more question, please: do you have any material that describes how to program asymmetric confidence intervals for fan chart forecasting in EViews?
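One common route to asymmetric fan chart bands (independent of any particular EViews feature) is to take percentiles of the simulated forecast distribution instead of symmetric point ± z·se bands. A sketch with a deliberately right-skewed (log-normal) forecast distribution, such as arises when exponentiating a log-equation forecast:

```python
import numpy as np

rng = np.random.default_rng(2)
# Simulated forecast draws, e.g. from a stochastic model solve; exp() of a
# normal gives a right-skewed level forecast, as when a log-log equation's
# forecast is converted back to levels.
draws = np.exp(rng.normal(loc=1.0, scale=0.4, size=100_000))
point = np.median(draws)

# Percentile bands for the fan: each band covers p% of the draws and is
# asymmetric around the point forecast whenever the distribution is skewed.
bands = {p: (np.percentile(draws, 50 - p / 2), np.percentile(draws, 50 + p / 2))
         for p in (30, 60, 90)}
lo90, hi90 = bands[90]
```

Plotting the nested bands against the forecast horizon produces the familiar fan; because the upper tail of a log-normal is longer, the upper band sits farther from the point forecast than the lower one.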