Dear all,
I am a little bit confused and really hope you can help me. I estimated several models (time series as well as pooled regressions). For a time series regression I can hit the forecast button and obtain the standard errors and confidence intervals of the forecast, no problem at all. But when I solve the model via "make model", which seems to be necessary for a pooled regression, the confidence intervals are very narrow. I compared both standard errors for a time series regression ("forecast" vs. "make model"), and the standard errors from "make model" were indeed much smaller than the ones produced by the forecast button.

I read Chapter 23 of the EViews guide but did not find a sufficient answer. On p. 144 it says the forecast standard errors are computed as follows: forecast se = s*sqrt(1 + xt'(X'X)^(-1)*xt). I tried to calculate the standard errors on my own, but failed with the dimensions. As far as I understand, xt should be the coefficient matrix at time t (the row vector, respectively), but that would not match the dimensions of the inverted matrix.
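For what it's worth, here is a minimal sketch of that formula in Python with NumPy (all data here is made up for illustration). In the formula, xt is the k x 1 vector of regressor values at the forecast period t, not the coefficients, which is why the dimensions work out: (1 x k) times (k x k) times (k x 1) gives a scalar.

```python
import numpy as np

# Illustrative synthetic data: T observations, k regressors (values are made up)
rng = np.random.default_rng(0)
T, k = 50, 3
X = np.column_stack([np.ones(T), rng.normal(size=(T, 2))])  # T x k regressor matrix
beta_true = np.array([1.0, 0.5, -0.3])
y = X @ beta_true + rng.normal(scale=0.8, size=T)

# OLS fit and the regression standard error s
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
s = np.sqrt(resid @ resid / (T - k))

# Forecast s.e. at period t: x_t holds the REGRESSOR values at t (k x 1),
# so x_t'(X'X)^(-1) x_t is a scalar quadratic form
x_t = np.array([1.0, 0.2, -1.0])
XtX_inv = np.linalg.inv(X.T @ X)
forecast_se = s * np.sqrt(1.0 + x_t @ XtX_inv @ x_t)
```

Because the quadratic form is non-negative, this forecast standard error is always at least as large as s itself.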
Can anyone please explain to me the difference between the standard error calculation of the normal forecast and that of the "make model" prediction?
Is there another way to match both standard errors so that I can properly calculate the confidence interval for a pooled regression?
I am rather desperate and grateful for every answer.
Thanks in advance.
Confidence Intervals: Forecast vs. Makemodel
Moderators: EViews Gareth, EViews Moderator

 Posts: 3
 Joined: Tue Aug 28, 2018 12:53 am

Fe ddaethom, fe welon, fe amcangyfrifon ("we came, we saw, we estimated")
 Posts: 11983
 Joined: Tue Sep 16, 2008 5:38 pm
Re: Confidence Intervals: Forecast vs. Makemodel
Without knowing more details on the exact specification of your equations, it is hard to say, but the most obvious difference is that an equation's forecast standard errors are, in general, computed analytically, whereas a model produces them by simulation (you do not mention if you are using a stochastic solve to obtain the model standard errors, but I assume that is the case).
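To illustrate the analytic-vs-simulation point, here is a hedged sketch in Python/NumPy with synthetic data (this is not EViews' actual solver, just the general idea): a stochastic solve approximates the forecast distribution by repeatedly drawing coefficients from their sampling distribution and adding a residual shock each replication. If the coefficient draws are omitted, the simulated band collapses toward the residual variation alone, which is one way simulated bands can come out narrower than analytic ones.

```python
import numpy as np

# Synthetic example (all values illustrative)
rng = np.random.default_rng(1)
T, k = 200, 2
X = np.column_stack([np.ones(T), rng.normal(size=T)])
beta = np.array([2.0, 1.0])
y = X @ beta + rng.normal(scale=1.0, size=T)

# OLS estimates, residual variance, and the analytic forecast s.e.
b, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b
s2 = resid @ resid / (T - k)
XtX_inv = np.linalg.inv(X.T @ X)
x_t = np.array([1.0, 0.5])
analytic_se = np.sqrt(s2 * (1.0 + x_t @ XtX_inv @ x_t))

# Stochastic solve: draw coefficients AND a residual shock per replication;
# dropping either draw shrinks the simulated band
reps = 20000
cov_b = s2 * XtX_inv                                   # coefficient covariance
b_draws = rng.multivariate_normal(b, cov_b, size=reps)
shocks = rng.normal(scale=np.sqrt(s2), size=reps)
sims = b_draws @ x_t + shocks
sim_se = sims.std(ddof=1)                              # approximates analytic_se
```

With both sources of uncertainty included, the simulated standard deviation converges to the analytic forecast standard error as the number of replications grows.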
Follow us on Twitter @IHSEViews

 Posts: 3
 Joined: Tue Aug 28, 2018 12:53 am
Re: Confidence Intervals: Forecast vs. Makemodel
Thank you very much for your quick reply.
I ran two types of regression. The first is a "Fully Modified OLS" regression with Newey-West standard errors in a log-log specification. When I click on forecast, the standard errors and the corresponding confidence intervals seem to be correct. I used a static forecast with coefficient uncertainty in the S.E. calculation and forecasted the level variable (instead of logs). When I compare the standard errors of the estimated model with those calculated by "make model", I found severe differences, although the predicted values are the same. In this case I could use the forecast button to produce the specific standard errors.

The second model, which I am ultimately interested in, is "Pooled EGLS" with fixed effects and cross-section weights. When I click on "make model" after the estimation and break the links of the equations, I can solve the model equation by equation. I tried every possible option to solve this model, but whatever I do, the standard errors are much too small, especially when I try the stochastic simulation with a static solution and include the coefficient uncertainty. With a stochastic simulation it is possible to calculate the standard deviation and bounds, but the bounds are far too small to be correct.
I know that the standard deviation and standard error are computed differently in a forecast than in a normal regression, but I am not sure how the formula works exactly. Does anyone know how to compute the standard errors of the "make model" approach by hand so that they are similar to those of the normal forecast?
I would love to make both standard error computations comparable.
Thank you.

 EViews Developer
 Posts: 2600
 Joined: Wed Oct 15, 2008 9:17 am
Re: Confidence Intervals: Forecast vs. Makemodel
We are looking into this for you. It may take a bit, but we will let you know what we discover.

 Posts: 3
 Joined: Tue Aug 28, 2018 12:53 am
Re: Confidence Intervals: Forecast vs. Makemodel
That's great! Thank you so much.

 EViews Developer
 Posts: 2600
 Joined: Wed Oct 15, 2008 9:17 am
Re: Confidence Intervals: Forecast vs. Makemodel
Is there a chance you can share your workfile? In all of my test cases I'm not seeing this discrepancy. (Note that the model simulation doesn't include coefficient uncertainty, but in my cases I'm not seeing much difference from that.)