About the new feature, Lagrange Multiplier Tests for Random Effects, in EViews 9, I want to make you aware of three things:
1) I feel the p-values for negative statistics should be printed as well. Baltagi's papers contain p-values.
2) I think the documentation could be extended a bit: comments about the negative statistics and how to deal with them (looking at the GHM statistic is then advised) would be worthwhile.
3) Could you also clarify the alternative hypothesis? Should the user look at the outputted p-values for all tests and base the (oversimplified, of course) reject/do-not-reject decision on those p-values, e.g. reject if the p-value is below 0.05? Or does the user need to take the one-sidedness of the alternative into account (i.e., reject at the 5% level whenever the p-value is below 0.1 for all statistics except Breusch-Pagan)?
See e.g. the last picture on this page for an example of the LM tests:
http://www.eviews.com/EViews9/ev9ecdiag ... aneleffect
Lagrange Multiplier Tests for Random Effects
EViews Developer
Re: Lagrange Multiplier Tests for Random Effects
1. Baltagi presents critical values, not p-values. You are correct that under the null the distributions are N(0, 1), so we could report p-values (which in this case would, of course, be > 0.5). However, as there is evidence that negative statistics are relevant in this context, and since the tests are one-sided, it struck us as more useful to note visually that the statistic was on the wrong side of the null given the alternative. We're not wedded to this (it's obviously easier to just report the value), but thought that it was more useful in context. We'd be interested in hearing other thoughts.
2. Perhaps there could be a bit more, but I don't feel that we are unnecessarily terse. We do point out the value of one-sided tests where appropriate and note that GHM is a two-sided test that accounts for the possibility of negative estimates of the variance components. Moreover, we point people to the much more extensive literature on the subject, including the excellent summary in the Baltagi book.
3. They're p-values (not critical values), which you should compare with your desired alpha. The BP p-values come from the chi-square distribution and the GHM p-values from the mixed chi-square, while the one-sided tests take theirs from one tail of the cumulative normal.
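To make the distribution mapping above concrete, here is a small illustrative sketch in Python (standard library only; the choice of 2 degrees of freedom for the two-way BP test is an assumption for illustration, and the 1/4, 1/2, 1/4 mixture weights follow the chi-bar-square description in Baltagi):

```python
import math

def norm_sf(x):
    # Upper tail of the standard normal; used for the one-sided
    # Honda-type statistics. A statistic of 0 gives exactly 0.5,
    # and negative statistics give p-values above 0.5.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def chi2_sf(x, df):
    # Upper tail of the chi-square for df = 1 or 2 (closed forms);
    # df = 2 is assumed here for the two-way Breusch-Pagan test.
    if df == 1:
        return math.erfc(math.sqrt(x / 2.0))
    if df == 2:
        return math.exp(-x / 2.0)
    raise ValueError("only df = 1 or 2 are handled in this sketch")

def ghm_sf(x):
    # GHM "chi-bar-square": the CDF is a mixture of chi-square CDFs
    # with weights 1/4, 1/2, 1/4 on 0, 1 and 2 degrees of freedom.
    # The df = 0 component is a point mass at zero, so it contributes
    # nothing to the upper tail for x > 0.
    return 0.5 * chi2_sf(x, 1) + 0.25 * chi2_sf(x, 2)

print(norm_sf(0.0))             # 0.5: a zero one-sided statistic is nowhere near significant
print(round(ghm_sf(2.952), 3))  # 0.1: the 10% GHM critical value maps back to p = 0.1
```

This also illustrates the point about negative statistics: since `norm_sf` is decreasing, any statistic below zero yields a one-sided p-value above 0.5.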
Re: Lagrange Multiplier Tests for Random Effects
Glenn,
Thank you for your prompt and comprehensive answer! Much appreciated!
1. I have checked: Baltagi's textbook has p-values (at least in the 5th edition), but those are not calculated by him but by the former add-in for older versions of EViews (which got the "sided-ness" wrong in the output). Would the p-values for the one-sided statistics not be so high that basically nobody would deem them significant if the statistic is < 0? E.g. statistic = 0 => p-value = 0.5 already.
3. As you mention in your answer, all tests are one-sided (concerning the alternative) except BP and GHM. The current output mentions only BP as two-sided (see the link to the picture in my original post). Should GHM also be mentioned as having a two-sided alternative? The User's Guide II does so (p. 870).
Re: Lagrange Multiplier Tests for Random Effects
About the EViews User's Guide, p. 870, on GHM critical values (the same text also appears in the current output of the tests in EViews 9):
"The critical values of 7.289, 4.321, 2.952 for standard test sizes 0.01, 0.05 and 0.1 respectively, are obtained from Baltagi (2008)."
4.321 is a typo copied from the notes section of Baltagi's textbook. It should be 4.231, as it appears in the main text of Baltagi's textbook (and as can easily be checked).
Re: Lagrange Multiplier Tests for Random Effects
Also, that name is spelled "Gouriéroux", not "Gourierioux" (i.e. without the extra "i"), in the EViews output (it appears twice there); cf. the original paper.
EViews Developer
Re: Lagrange Multiplier Tests for Random Effects
1. Yes. The p-value is over 0.5 for negative statistics.
2. BP has a two-sided alternative. I was thinking of the other tests when I made the one-sided statement; sorry for being vague.
3. We'll fix the Gourieroux typo. And GHM should be one-sided; that's a typo too, as it's based on the Honda and SLM tests.
4. The 4.321 matches the original Baltagi et al. (1992) paper, which is actually the reference that we used. We're checking to see which is correct (though you say it's easy to check, my recollection is that it's not an easy distribution to evaluate, but I may be misremembering).
Re: Lagrange Multiplier Tests for Random Effects
ad 4.: Concerning the critical value 4.321 vs. 4.231 for the chi-bar-square distribution:
Have a look at Baltagi's textbook (2013), p. 74, Table 4.1, where we have 4.231 (in the notes to this section, p. 88, there is 4.321).
It is easy to check (here is how I did it with R):
crit <- c(7.289, 4.321, 2.952)
p.val <- (1/4) * pchisq(crit, df = 0, lower.tail = FALSE) +
  (1/2) * pchisq(crit, df = 1, lower.tail = FALSE) +
  (1/4) * pchisq(crit, df = 2, lower.tail = FALSE)
print(p.val)
# 0.01000252 0.04763926 0.10002319
crit_corr <- c(7.289, 4.231, 2.952)
p.val_corr <- (1/4) * pchisq(crit_corr, df = 0, lower.tail = FALSE) +
  (1/2) * pchisq(crit_corr, df = 1, lower.tail = FALSE) +
  (1/4) * pchisq(crit_corr, df = 2, lower.tail = FALSE)
print(p.val_corr)
# 0.01000252 0.04998927 0.10002319
4.231 yields a p-value much closer to 0.05 than 4.321 does.
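The same check can also be run the other way round: invert the mixture's survival function numerically to recover the critical values directly. A small cross-check in Python (standard library only; the bisection bracket of [0, 50] and the 1/4, 1/2, 1/4 weights are assumptions matching the R computation above):

```python
import math

def ghm_sf(x):
    # Upper tail of the chi-bar-square mixture: only the chi-square(1)
    # and chi-square(2) components contribute for x > 0, since the
    # df = 0 component is a point mass at zero.
    return 0.5 * math.erfc(math.sqrt(x / 2.0)) + 0.25 * math.exp(-x / 2.0)

def critical_value(alpha, lo=0.0, hi=50.0, tol=1e-10):
    # Bisection on the monotonically decreasing survival function:
    # find x with ghm_sf(x) = alpha.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if ghm_sf(mid) > alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for alpha in (0.01, 0.05, 0.10):
    print(alpha, critical_value(alpha))
```

The three inverted values land close to 7.289, 4.231 and 2.952, and nowhere near 4.321, confirming that the main-text table in Baltagi has the correct figure.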
EViews Developer
Re: Lagrange Multiplier Tests for Random Effects
You are correct. We validated on this end, it's 4.231.
I had forgotten that the distribution of the test statistic isn't a chi-square mixture (that would have been hard), but rather the CDF of the statistic was a chi-square CDF mixture (which, as you note, is easy).
Re: Lagrange Multiplier Tests for Random Effects
Just checked the latest update. Thank you for implementing the correction. I noticed:
The latest docs still have the wrong critical value mentioned:
User Guide II, Ch. 44, pp. 894, 896.
Well, the paragraph refers to the Baltagi (1998) paper where the typo occurred first, so this is technically not a wrong statement, but it is nevertheless the wrong value to use. Also, the layout of the test output looks a bit different now; the docs still have the old picture.