Hi!
I use the forecast evaluation command "fcasteval", but the table is puzzling. In the output table of evaluation statistics there is no constant relationship between RMSE and Theil U2.
I use the following command, where "gr_f" contains around 50 forecasts for PC_GDP (I use EViews 10):
PC_GDP.fcasteval(evalsmpl=%wfend-!NRF %wfend, mean, trmean, median, trim=15) gr_f
What am I missing?
Forecast evaluation output

Re: Forecast evaluation output
Sorry, I still don’t get it. If you refer to the discussion about the sample size I don’t see how that could have different impact on TheilU2 for different series (and all my forecast series have the same sample). The forecast rank should still be the same according to RMSE and TheilU2.

Re: Forecast evaluation output
Sorry, I'm not sure I understand what you're asking.
I don't believe it is the case, using the definition of U2 given, that U2 and RMSE have to give the same ranking of forecasts.
Re: Forecast evaluation output
Perhaps I'm missing something in the previous topic. Here's an example to explain my problem:
Series a = actual series
Series f = forecast of a
series fpe = (f - a)/a
series ape = (a - a(-1))/a
genr sq_fpe = fpe^2
genr sq_ape = ape^2
scalar fpe2 = @sum(sq_fpe)
scalar ape2 = @sum(sq_ape)
scalar RMSE_fpe = @sqrt(fpe2)
scalar RMSE_ape = @sqrt(ape2)
scalar U2 = RMSE_fpe / RMSE_ape
Since RMSE_ape does not change for a given sample, RMSE and U2 must give the same ranking of the forecasts. But this is not the case in the output table.
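The ranking claim above is just monotonicity: if every forecast's RMSE_fpe is divided by the same RMSE_ape, the ordering cannot change. A minimal Python sketch with made-up numbers (the forecast names and values are invented for illustration, not EViews output):

```python
# Hypothetical percentage-error RMSEs for three forecasts over one fixed sample
rmse_fpe = {"f1": 0.30, "f2": 0.45, "f3": 0.20}

# RMSE of the naive (no-change) forecast: the same constant for every forecast
rmse_ape = 0.5

# U2 under the poster's definition: each RMSE_fpe divided by the common RMSE_ape
u2 = {name: v / rmse_ape for name, v in rmse_fpe.items()}

# Dividing by a positive constant is monotone, so both rankings coincide
rank_by_rmse = sorted(rmse_fpe, key=rmse_fpe.get)
rank_by_u2 = sorted(u2, key=u2.get)
print(rank_by_rmse == rank_by_u2)  # True under this definition
```

This is why the poster expects the table's RMSE and U2 columns to order the forecasts identically; the disagreement can only come from EViews using a different sample or definition for U2.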

Re: Forecast evaluation output
Code:
create u 100
rndseed 10
series a = nrnd
series f1 = nrnd
series f2 = nrnd
series e1 = f1 - a
series e2 = f2 - a
series fpe1 = e1/a(-1)
series fpe2 = e2/a(-1)
series ape = (a - a(-1))/a(-1)
series sq_fpe1 = (e1/a(-1))^2
series sq_fpe2 = (e2/a(-1))^2
genr sq_ape = ((a - a(-1))/a(-1))^2
series sq_e1 = e1^2
series sq_e2 = e2^2
smpl 90 @last
scalar fpe21 = @sum(sq_fpe1)
scalar fpe22 = @sum(sq_fpe2)
scalar ape2 = @sum(sq_ape)
scalar U21 = @sqrt(fpe21/ape2)
scalar U22 = @sqrt(fpe22/ape2)
smpl 89 @last
scalar rmse1 = @sqrt(@sum(sq_e1)/12)
scalar rmse2 = @sqrt(@sum(sq_e2)/12)
a.fcasteval f1 f2
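The same mechanism can be shown with a toy counterexample in Python (all numbers invented for illustration, not EViews output). Because U2 drops the first observation (it needs the lagged actual) and scales each error by a(-1), a forecast with one large early error but smaller later errors can beat the other on U2 while losing on RMSE:

```python
import math

# Hand-picked toy data: forecast 1 makes one tiny error then steady errors of 1.0;
# forecast 2 makes one big early error then steady errors of 0.9.
a = [10.0, 11.0, 10.0, 11.0, 10.0]   # actuals
e1 = [0.1, 1.0, 1.0, 1.0, 1.0]       # errors of forecast 1 (f1 - a)
e2 = [2.0, 0.9, 0.9, 0.9, 0.9]       # errors of forecast 2 (f2 - a)
n = len(a)

def rmse(e):
    # RMSE over the full sample of n observations
    return math.sqrt(sum(x * x for x in e) / n)

def theil_u2(e):
    # U2 starts one observation later because it needs the lagged actual:
    # numerator:   sum of ((f_t - a_t) / a_{t-1})^2   for t = 1..n-1
    # denominator: sum of ((a_t - a_{t-1}) / a_{t-1})^2 over the same range
    num = sum((e[t] / a[t - 1]) ** 2 for t in range(1, n))
    den = sum(((a[t] - a[t - 1]) / a[t - 1]) ** 2 for t in range(1, n))
    return math.sqrt(num / den)

print(rmse(e1), rmse(e2))            # forecast 1 has the lower RMSE
print(theil_u2(e1), theil_u2(e2))    # forecast 2 has the lower U2
```

The big early error hurts forecast 2's RMSE over the full sample, but it falls outside the lag-adjusted window that U2 sums over, so the two statistics rank the forecasts in opposite orders.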
Re: Forecast evaluation output
OK, thanks for this clarification. I Googled Theil U2 and there seem to be different definitions; I'm not sure which one is the original.