Hi there
I first simulated AR(1) data and then estimated the model using OLS and ML on a sample that ignores the first observation.
The model: y_t = b*y_(t-1) + e_t
(so we estimate two parameters: b and the variance of the error term)
I obtain the same log likelihood in the two outputs. The reported AIC, however, is not consistent between these two estimation approaches.
The AIC formula used in EViews is: -2(l/T)+2(k/T), where l is the log likelihood, T the number of observations and k the number of estimates.
I noticed that the AIC reported by the ML procedure is correct (setting k = 2 in the above formula). For the OLS procedure, however, EViews reports the AIC as if k = 1.
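For concreteness, here is a small sketch of the discrepancy under that formula (the log likelihood and sample size below are made-up illustrative numbers, not my actual output):

```python
# EViews-style per-observation AIC: AIC = -2*(l/T) + 2*(k/T)
def eviews_aic(loglik, T, k):
    """Per-observation AIC as defined in the formula above."""
    return -2.0 * (loglik / T) + 2.0 * (k / T)

l, T = -140.0, 99                # illustrative log likelihood and sample size
aic_ols = eviews_aic(l, T, k=1)  # least-squares convention: variance not counted
aic_ml  = eviews_aic(l, T, k=2)  # ML convention: slope plus error variance

# Same log likelihood, but the reported AICs differ by exactly 2/T.
print(aic_ml - aic_ols)
```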
Is there a reason why this is the case?
Best
s
Wrong calculation of information criteria?
EViews Glenn (EViews Developer)
Re: Wrong calculation of information criteria?
If I'm understanding the question, you are asking why only a single coefficient (K = 1) is counted when reporting the AIC for a regression, but two coefficients are counted when estimating by ML.
It's an interesting philosophical question. There are probably a lot of answers and subtlety to which I'm not going to do justice, but here's my take.
The short answer is that it's standard convention in problems estimated using least squares to report the IC not including the variance parameters unless the latter is modeled (as in ARCH). You'll find this to be the case in virtually all statistical software (I say 'virtually' only because there might be someone who doesn't, but I don't know of any examples) that reports regression results. Note that you could make a more formal argument in favor of the K=1 case by first concentrating the likelihood function so that it is only a function of the mean parameters. I'm not certain what tools you are using to estimate the ML, but a general ML engine working with the unconcentrated likelihood is simply going to count the number of parameters that you specify and won't distinguish between variance and mean coefficients.
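To illustrate the concentration argument, here is a rough sketch (not EViews code; simulated data with made-up parameters). Substituting the ML variance estimate SSR/T back into the Gaussian log likelihood gives the concentrated log likelihood, a function of the slope alone, and the two values coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulate AR(1): y_t = 0.5*y_(t-1) + e_t (illustrative parameters)
T = 200
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()

yl, yc = y[:-1], y[1:]            # lagged and current values; drops first obs
n = len(yc)
b = (yl @ yc) / (yl @ yl)         # OLS slope
ssr = np.sum((yc - b * yl) ** 2)
sigma2 = ssr / n                  # ML estimate of the error variance

# Full Gaussian log likelihood evaluated at (b, sigma2)
ll_full = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
# Concentrated log likelihood: sigma2 substituted out, a function of b alone
ll_conc = -0.5 * n * (np.log(2 * np.pi) + 1) - 0.5 * n * np.log(ssr / n)

# They agree, which is the basis for counting only the mean parameter.
print(ll_full, ll_conc)
```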
The bottom line, however, is that so long as you are comparing apples to apples and oranges to oranges (e.g., the number of variance parameters counted doesn't vary across the models you are comparing), it generally doesn't matter which method of determining K you use (though, while unlikely, it can matter if K appears nonlinearly in the penalty function).
Code: Select all
y = c(1)*y(-1)
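As an aside on that caveat about K entering the penalty nonlinearly: with the small-sample corrected AICc, whose penalty term 2k/(T - k - 1) is nonlinear in k, shifting every model's K by one (to count the variance) does not shift every penalty by the same amount, so rankings can in principle change. A quick sketch with illustrative numbers:

```python
# AICc-style penalty: 2k/(T - k - 1), nonlinear in k.
# Hypothetical illustration; T and the parameter counts are made up.
def aicc_penalty(k, T):
    return 2.0 * k / (T - k - 1)

T = 20  # small sample, where the nonlinearity bites
# Penalty gap between a 3- and a 2-parameter model (variance not counted)...
gap_without_variance = aicc_penalty(3, T) - aicc_penalty(2, T)
# ...versus the same pair with the variance parameter also counted
gap_with_variance = aicc_penalty(4, T) - aicc_penalty(3, T)

# The gaps differ, so the counting convention could affect model rankings.
print(gap_without_variance, gap_with_variance)
```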
Re: Wrong calculation of information criteria?
Thanks for your clarification.