Wald Statistic in Chow test

For requesting general information about EViews, sharing your own tips and tricks, and information on EViews training or guides.

Moderators: EViews Gareth, EViews Moderator

JoaoPereira
Posts: 14
Joined: Thu Jan 03, 2013 10:56 am

Wald Statistic in Chow test

Postby JoaoPereira » Thu Jan 03, 2013 12:21 pm

Hello all

I've just joined the forum and would much appreciate it if someone could help.

I'm working with the Chow test, and the EViews output gives three different results: the F-statistic, the log likelihood ratio, and the Wald statistic.
I was asked to confirm the Wald statistic, but I haven't succeeded so far. I checked the manual and found the following:

(User's guide II page 171)
The Wald statistic is computed from a standard Wald test of the restriction that the coefficients
on the equation parameters are the same in all subsamples. As with the log likelihood
ratio statistic, the Wald statistic has an asymptotic chi-square distribution with (m-1)k degrees of
freedom, where m is the number of subsamples.

(User's guide II page 152)
Here we have formula (6.16): W = (Rb - r)' * inverse( R*VAR(b)*R' ) * (Rb - r)
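Outside EViews, formula (6.16) is easy to transcribe; below is a minimal numpy sketch (the data, the single slope restriction, and all names are my own, not EViews objects).

```python
# A minimal numpy sketch of (6.16), W = (Rb - r)' * inverse(R*VAR(b)*R') * (Rb - r).
# The data, the single restriction, and all names here are mine, not EViews objects.
import numpy as np

rng = np.random.default_rng(42)
n = 60
x = rng.normal(size=n)
y = 2.0 + rng.normal(size=n)             # true slope on x is zero
X = np.column_stack([np.ones(n), x])     # regressors: constant, x

b, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b
s2 = resid @ resid / (n - 2)             # residual variance
V = s2 * np.linalg.inv(X.T @ X)          # VAR(b) for OLS

R = np.array([[0.0, 1.0]])               # one restriction: slope = 0
r = np.zeros(1)
d = R @ b - r
W = d @ np.linalg.inv(R @ V @ R.T) @ d   # chi-square with 1 df under H0
```

For a single restriction W collapses to the squared t-ratio b[1]^2 / V[1,1], which makes it easy to verify against any regression printout.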

This statistic applies to tests within a single model, and I could confirm the result with some examples.
But for the Chow test we split the original model into sub-samples (in my case two of them, say A and B), and if my interpretation of the guide is correct, we must compare the coefficients of the two models (one for each sub-sample).
After estimation I used the following formula to compute the statistic

W = (bA - bB)' * inverse( VAR(bA) + VAR(bB) ) * (bA - bB)

where bA and bB stand for the coefficient vectors from each sub-model, and inverse denotes the matrix inverse.
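For reference, this candidate formula is easy to transcribe with toy numbers; a minimal numpy sketch (all names and values are mine):

```python
# A toy transcription of W = (bA - bB)' * inverse( VAR(bA) + VAR(bB) ) * (bA - bB).
import numpy as np

bA = np.array([1.0, 2.0])      # coefficient vector from sub-sample A (toy values)
bB = np.array([0.0, 1.0])      # coefficient vector from sub-sample B
VA = np.eye(2)                 # toy coefficient covariance matrices
VB = np.eye(2)

d = bA - bB                    # difference of the two coefficient vectors
W = d @ np.linalg.inv(VA + VB) @ d
```

With these toy inputs, VA + VB = 2I and d = (1, 1), so W can be checked by hand: 0.5 + 0.5 = 1.0.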

The result I get is close but not correct; some detail is missing. The formula seems logical to me, but of course I'm not sure.

So your help is very welcome.

Thanks for your time
João Pereira

EViews Gareth
Fe ddaethom, fe welon, fe amcangyfrifon
Posts: 13307
Joined: Tue Sep 16, 2008 5:38 pm

Re: Wald Statistic in Chow test

Postby EViews Gareth » Thu Jan 03, 2013 2:23 pm

Create a dummy for the periods after the break point, then estimate over the entire sample, but add extra regressors d*X, where d is your dummy and X are your regressors (remember to also include the dummy by itself if you have a constant).

Then do a Wald test of all the dummy coefficients being equal to zero.
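Gareth's recipe can also be sketched outside EViews. Below is a minimal numpy version with simulated data (the single regressor, the break design, and all names are my assumptions): build the break-dummy interactions, run one OLS over the full sample, and Wald-test the dummy coefficients.

```python
# A numpy sketch of the dummy-interaction Chow test (names and data are mine).
import numpy as np

rng = np.random.default_rng(0)
T, brk = 100, 50
x = rng.normal(size=T)
y = 1.0 + 0.5 * x + rng.normal(size=T)
y[brk:] += 2.0                            # deliberate intercept break after brk

d = (np.arange(T) >= brk).astype(float)   # dummy = 1 after the break point
X = np.column_stack([np.ones(T), x, d, d * x])   # constant, x, d, d*x
k = X.shape[1]

b, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b
s2 = resid @ resid / (T - k)              # OLS residual variance
cov = s2 * np.linalg.inv(X.T @ X)         # coefficient covariance

R = np.zeros((2, k))                      # restriction: coefficients on d and d*x = 0
R[0, 2] = R[1, 3] = 1.0
Rb = R @ b
wald = Rb @ np.linalg.inv(R @ cov @ R.T) @ Rb   # chi-square with 2 df under H0
```

With the large built-in break, the statistic comes out far above any conventional chi-square(2) critical value.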
Follow us on Twitter @IHSEViews

JoaoPereira
Posts: 14
Joined: Thu Jan 03, 2013 10:56 am

Re: Wald Statistic in Chow test

Postby JoaoPereira » Fri Jan 04, 2013 2:35 am

Hi Gareth

Thanks for your response. I've already got the result.

All the best
João Pereira

t_schl
Posts: 7
Joined: Thu Mar 27, 2014 7:30 am

Re: Wald Statistic in Chow test

Postby t_schl » Thu Mar 27, 2014 8:32 am

Hi,
I'm struggling with the same issue, but my model has AR terms. I cannot interact the AR terms with a dummy (it gives me an error message), and putting in lagged terms of the dependent variable as regressors won't give me the same Wald statistic as the Quandt-Andrews test in EViews. Could you help me here and tell me how to estimate this kind of equation?
Thank you very much for your help.

Thore

t_schl
Posts: 7
Joined: Thu Mar 27, 2014 7:30 am

Re: Wald Statistic in Chow test

Postby t_schl » Fri Mar 28, 2014 3:38 pm

Hi again,
to be more precise on the above issue: I am trying to re-estimate the built-in EViews Wald statistic of the Quandt-Andrews test for an unknown breakpoint (or the Chow test).
I am able to re-estimate the Wald statistic as described above, by including dummies for each regressor, for any model that does not contain autoregressive terms. If the model does include AR terms, I am not able to get the right statistic. I went back to estimating two separate models (instead of one model with dummies). This got me quite far, but the result is still not correct:

Code:


series wald_value
for !i = 0 to 177
coef(23) b1
smpl 1991:10 1994:12+!i
equation ea_delta_s_w1.ls d(ea_exrate_us_log) c ea_m1_SA_LOG_d1(-1 to -4) US_M1_SA_LOG_d1(-1 to -4) ea_ip_SA_LOG_d1(-1 to -4)  US_ip_SA_LOG_d1(-1 to -4)  interest_dif(-1) error_corr_ea1(-1) ar(1 to 4)
for !k = 1 to 23
   b1(!k) = @coefs(!k)
next
sym v1=ea_delta_s_w1.@cov
coef(23) b2
smpl 1995:1+!i 2012:12
equation ea_delta_s_w2.ls d(ea_exrate_us_log) c ea_m1_SA_LOG_d1(-1 to -4) US_M1_SA_LOG_d1(-1 to -4) ea_ip_SA_LOG_d1(-1 to -4)  US_ip_SA_LOG_d1(-1 to -4)  interest_dif(-1) error_corr_ea1(-1) ar(1 to 4)
for !k = 1 to 23
   b2(!k) = @coefs(!k)
next
sym v2=ea_delta_s_w2.@cov
matrix(23,23) r
for !k = 1 to 23
r(!k, !k) =1
next
vector(23) q
matrix wald2 = @transpose(r*(b1-b2)-q)*@inverse(r*(v1+v2)*@transpose(r))*(r*(b1-b2)-q)
wald_value(49+!i) = wald2(1,1)
delete b1 b2 wald2 v1 v2
next


I am not sure whether I need to weight the covariance matrices somehow to get the right result, or whether this is impossible here.
Another approach would probably be to translate the model to non-linear LS so that the AR dummies can be added, as suggested here: http://forums.eviews.com/viewtopic.php?f=6&t=1689, and then estimate one model only. This would make the whole issue much more complicated, I guess (I have no clue how to derive the closed form of such a big model).
Anyhow, I would appreciate any help or comment on that issue.
Kind regards.
Thore
Last edited by t_schl on Sat Mar 29, 2014 12:37 am, edited 1 time in total.

EViews Gareth
Fe ddaethom, fe welon, fe amcangyfrifon
Posts: 13307
Joined: Tue Sep 16, 2008 5:38 pm

Re: Wald Statistic in Chow test

Postby EViews Gareth » Fri Mar 28, 2014 3:46 pm

Post the workfile.

t_schl
Posts: 7
Joined: Thu Mar 27, 2014 7:30 am

Re: Wald Statistic in Chow test

Postby t_schl » Sat Mar 29, 2014 12:41 am

Hi Gareth,
here is the workfile. I hope you can give me a hint here.
Regards,
Thore
Attachments
workfile.wf1
(63 KiB) Downloaded 432 times

EViews Gareth
Fe ddaethom, fe welon, fe amcangyfrifon
Posts: 13307
Joined: Tue Sep 16, 2008 5:38 pm

Re: Wald Statistic in Chow test

Postby EViews Gareth » Mon Mar 31, 2014 9:02 am

The Chow test (and Quandt-Andrews) assumes that the error covariances of the subsamples are equal. Thus you need to scale the covariance matrices by the overall SE.

t_schl
Posts: 7
Joined: Thu Mar 27, 2014 7:30 am

Re: Wald Statistic in Chow test

Postby t_schl » Tue Apr 01, 2014 10:34 am

Hi Gareth,
thank you for your assistance. But even after hours of thinking about the problem, I am not able to derive the right result.
Could you please go into more detail on how to scale the covariance matrix?
I understood your comment in the following way:
I calculated the SE over the full sample:

Code:

   
smpl 1991:10 2012:12
equation ea_delta_s_w3.ls d(ea_exrate_us_log) c ea_m1_SA_LOG_d1(-1 to -4) US_M1_SA_LOG_d1(-1 to -4) ea_ip_SA_LOG_d1(-1 to -4)  US_ip_SA_LOG_d1(-1 to -4)  interest_dif(-1) error_corr_ea1(-1) ar(1 to 4)
   scalar total_se = ea_delta_s_w3.@se

and weighted the total SE by the number of observations before and after the break.
Accordingly, the test statistic looks something like this:

Code:

matrix wald = @transpose(r*(b1-b2)-q)*@inverse(r*(v1/(((39+!i)/254)*total_se)+v2/((254-39-!i)/254*total_se))*@transpose(r))*(r*(b1-b2)-q)

This does not seem to be the right way at all, though. I have also tried numerous other approaches, but none yielded the right result.
I would be glad if you could further clarify the issue.
Thank you very much.
T.

t_schl
Posts: 7
Joined: Thu Mar 27, 2014 7:30 am

Re: Wald Statistic in Chow test

Postby t_schl » Wed Apr 02, 2014 10:05 am

Hi,
I went back to the literature and thought about the scaling of the covariance matrix.
This resulted in the following code:

Code:

for !i = 0 to 177
   'first sub-sample
   coef(23) b1
   smpl 1991:10 1995:1+!i
   equation ea_delta_s_w1.ls d(ea_exrate_us_log) c ea_m1_SA_LOG_d1(-1 to -4) US_M1_SA_LOG_d1(-1 to -4) ea_ip_SA_LOG_d1(-1 to -4)  US_ip_SA_LOG_d1(-1 to -4)  interest_dif(-1) error_corr_ea1(-1) ar(1 to 4)
   'get the coefficient estimates
   for !k = 1 to 23
      b1(!k) = @coefs(!k)
   next
   'get the Q matrix Q = 1/T X'X
   vector(40+!i) constant =1
   mtos(constant, constant_1)
   group matrix constant_1 ea_m1_sa_log_d1(-1) ea_m1_SA_LOG_d1(-2) ea_m1_SA_LOG_d1(-3) ea_m1_SA_LOG_d1(-4) US_M1_SA_LOG_d1(-1) US_M1_SA_LOG_d1(-2) US_M1_SA_LOG_d1(-3) US_M1_SA_LOG_d1(-4) ea_ip_SA_LOG_d1(-1) ea_ip_SA_LOG_d1(-2) ea_ip_SA_LOG_d1(-3) ea_ip_SA_LOG_d1(-4) US_ip_SA_LOG_d1(-1) US_ip_SA_LOG_d1(-2) US_ip_SA_LOG_d1(-3) US_ip_SA_LOG_d1(-4) interest_dif(-1) error_corr_ea1(-1) ea_exrate_us_log_d1(-1) ea_exrate_us_log_d1(-2) ea_exrate_us_log_d1(-3) ea_exrate_us_log_d1(-4)
   matrix xmat = matrix
   matrix xmat_strich = xmat.@t
   'get matrix Q1
   matrix qu_1 = 1/(40+!i)*(xmat_strich*xmat)
   'second regression
   coef(23) b2
   smpl 1995:2+!i 2012:12
   equation ea_delta_s_w2.ls d(ea_exrate_us_log) c  ea_m1_SA_LOG_d1(-1 to -4) US_M1_SA_LOG_d1(-1 to -4) ea_ip_SA_LOG_d1(-1 to -4)  US_ip_SA_LOG_d1(-1 to -4)  interest_dif(-1) error_corr_ea1(-1)  ar(1 to 4)
   for !j = 1 to 23
      b2(!j) = @coefs(!j)
   next
   'get the second Q matrix Q = 1/(T-break) X'X
   vector(254-40-!i) constant =1
   mtos(constant, constant_1)
   group matrix constant_1 ea_m1_sa_log_d1(-1) ea_m1_SA_LOG_d1(-2) ea_m1_SA_LOG_d1(-3) ea_m1_SA_LOG_d1(-4) US_M1_SA_LOG_d1(-1) US_M1_SA_LOG_d1(-2) US_M1_SA_LOG_d1(-3) US_M1_SA_LOG_d1(-4) ea_ip_SA_LOG_d1(-1) ea_ip_SA_LOG_d1(-2) ea_ip_SA_LOG_d1(-3) ea_ip_SA_LOG_d1(-4) US_ip_SA_LOG_d1(-1) US_ip_SA_LOG_d1(-2) US_ip_SA_LOG_d1(-3) US_ip_SA_LOG_d1(-4) interest_dif(-1) error_corr_ea1(-1) ea_exrate_us_log_d1(-1) ea_exrate_us_log_d1(-2) ea_exrate_us_log_d1(-3) ea_exrate_us_log_d1(-4)
   matrix xmat = matrix
   matrix xmat_strich = xmat.@t
   matrix qu_2 = 1/(254-40-!i)*(xmat_strich*xmat)
   
   smpl 1991:1 2012:12
   equation ea_delta_s_w3.ls d(ea_exrate_us_log) c ea_m1_SA_LOG_d1(-1 to -4) US_M1_SA_LOG_d1(-1 to -4) ea_ip_SA_LOG_d1(-1 to -4)  US_ip_SA_LOG_d1(-1 to -4)  interest_dif(-1) error_corr_ea1(-1) ar(1 to 4)
   scalar var_total = (ea_delta_s_w3.@se)^2

   'Restrictions
   matrix(22,23) r
   for !k = 1 to 22
      r(!k,!k+1) =1
        next
   vector(22) q
'Test statistic
   matrix wald = 254*@transpose(r*(b1-b2))*@inverse(r*(@inverse(qu_1)*var_total*254/(40+!i)+@inverse(qu_2)*var_total*254/(254-40-!i))*@transpose(r))*(r*(b1-b2)-q)
   wald_value(49+!i) = wald(1,1)
next

In the test statistic my intention is to weight the total variance of the regression using the regressor matrices of the two subsamples. But the result does not turn out to be the same as what EViews calculates. I do not know how to use the SE suggested by EViews Gareth here, which is probably the solution to the problem.
I would appreciate further clarification on that issue.
Kind regards,
T.

EViews Gareth
Fe ddaethom, fe welon, fe amcangyfrifon
Posts: 13307
Joined: Tue Sep 16, 2008 5:38 pm

Re: Wald Statistic in Chow test

Postby EViews Gareth » Wed Apr 02, 2014 10:59 am

Here's an example of the Chow test with an AR term. It should be easy enough for you to loop it into a QA test.

Code:

create u 100
rndseed 1
series y=nrnd
series x=nrnd

' full-sample equation and the built-in Chow test for comparison
equation eq1.ls y c x ar(1)
freeze(t) eq1.chow 50

' the two subsample equations
smpl 1 49
equation eq1_1.ls y c x ar(1)

smpl 50 100
equation eq1_2.ls y c x ar(1)

' scale each subsample covariance by its own residual variance...
matrix wald = @transpose(@identity(eq1.@ncoef) *(eq1_1.@coefs - eq1_2.@coefs)) * @inverse(eq1_1.@cov/(eq1_1.@se)^2 + eq1_2.@cov/(eq1_2.@se)^2) * @identity(eq1.@ncoef)*(eq1_1.@coefs - eq1_2.@coefs)
' ...then rescale by the pooled residual variance (ssr1+ssr2)/(T-2k)
wald = wald *(eq1.@regobs - 2*eq1.@ncoef)/(eq1_1.@ssr + eq1_2.@ssr)
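If the model has no AR terms, the same scaling can be checked outside EViews. The numpy sketch below (data and names are mine) uses the fact that, for plain OLS, eq.@cov/(eq.@se)^2 reduces to inverse(X'X).

```python
# A numpy sketch of Gareth's scaled Wald statistic for the no-AR (plain OLS) case.
import numpy as np

def ols(X, y):
    """Return coefficients, SSR, and inverse(X'X) for an OLS fit."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    return b, resid @ resid, np.linalg.inv(X.T @ X)

rng = np.random.default_rng(1)
T, brk = 100, 50
x = rng.normal(size=T)
y = rng.normal(size=T)
X = np.column_stack([np.ones(T), x])   # constant, x
k = X.shape[1]

b1, ssr1, XtXi1 = ols(X[:brk], y[:brk])   # first subsample
b2, ssr2, XtXi2 = ols(X[brk:], y[brk:])   # second subsample

# cov_i/(se_i)^2 collapses to inverse(Xi'Xi) for OLS, so the scaled Wald is:
diff = b1 - b2
wald = diff @ np.linalg.inv(XtXi1 + XtXi2) @ diff
wald *= (T - 2 * k) / (ssr1 + ssr2)       # rescale by the pooled residual variance
```

For plain OLS this scaled statistic works out to exactly k times the standard Chow F-statistic, so it can be checked against the F-statistic in the Chow test output.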

t_schl
Posts: 7
Joined: Thu Mar 27, 2014 7:30 am

Re: Wald Statistic in Chow test

Postby t_schl » Wed Apr 02, 2014 1:03 pm

Thank you very much!
Regards,
Thore

t_schl
Posts: 7
Joined: Thu Mar 27, 2014 7:30 am

Re: Wald Statistic in Chow test

Postby t_schl » Thu Apr 03, 2014 2:29 am

Hi,
just another comment on the Quandt-Andrews test.
Regarding the trimming level, the EViews 8 User's Guide II (page 173) says:
[...] To compensate for this behavior, it is generally suggested that the ends of the equation
sample not be included in the testing procedure. A standard level for this “trimming” is
15%, where we exclude the first and last 7.5% of the observations. EViews sets trimming at
15% by default, but also allows the user to choose other levels. [...]

I reckon that this formulation might be misleading, since EViews actually trims 15% of the data at each end of the sample (not 7.5%) when a trimming level of 15% is chosen.
Regards,
T.

