Cochrane-Orcutt in presence of a lagged dependent variable
Moderators: EViews Gareth, EViews Moderator
-
Bigbrotherjx
- Posts: 36
- Joined: Wed Feb 10, 2010 4:25 pm
Cochrane-Orcutt in presence of a lagged dependent variable
Some literature, e.g. Betancourt and Kelejian (1980), shows that Cochrane-Orcutt in the presence of a lagged dependent variable can lead to multiple minima, so that there is no unique value to which the iteration converges.
How does EViews deal with this?
-
startz
- Non-normality and collinearity are NOT problems!
- Posts: 3796
- Joined: Wed Sep 17, 2008 2:25 pm
Re: Cochrane-Orcutt in presence of a lagged dependent variable
I believe EViews' algorithm is largely equivalent to iterated Cochrane-Orcutt. Both attempt to find a local minimum of the sum of squared residuals.
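For concreteness, here is a minimal numpy sketch of what iterated Cochrane-Orcutt does on a model with a lagged dependent variable and AR(1) errors. The simulated data and all names are illustrative assumptions; this is not EViews' actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate y_t = a + g*y_{t-1} + u_t with AR(1) errors u_t = rho*u_{t-1} + e_t
T, a_true, g_true, rho_true = 500, 1.0, 0.5, 0.3
e = rng.standard_normal(T)
u = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    u[t] = rho_true * u[t - 1] + e[t]
    y[t] = a_true + g_true * y[t - 1] + u[t]

Y, Ylag = y[1:], y[:-1]  # dependent variable and its lag

def ols(X, z):
    return np.linalg.lstsq(X, z, rcond=None)[0]

rho = 0.0
for it in range(200):
    # Step 1: quasi-difference with the current rho and estimate the betas.
    # The intercept column is scaled by (1 - rho) so a_hat is the intercept
    # of the original (untransformed) equation.
    Xs = np.column_stack([np.full(T - 2, 1.0 - rho),
                          Ylag[1:] - rho * Ylag[:-1]])
    a_hat, g_hat = ols(Xs, Y[1:] - rho * Y[:-1])
    # Step 2: residuals of the untransformed equation, then update rho
    # by regressing them on their own lag.
    resid = Y - a_hat - g_hat * Ylag
    rho_new = float(ols(resid[:-1, None], resid[1:])[0])
    if abs(rho_new - rho) < 1e-8:
        break
    rho = rho_new

print(f"rho={rho_new:.3f}, gamma={float(g_hat):.3f}")
```

Because rho and the betas are updated until they jointly stop moving, the loop ends up at a local minimum of the same sum of squared residuals that a nonlinear least squares routine would minimize.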
-
Bigbrotherjx
- Posts: 36
- Joined: Wed Feb 10, 2010 4:25 pm
Re: Cochrane-Orcutt in presence of a lagged dependent variable
Quite aside from the multiple-solutions problem, am I correct in thinking that, under Cochrane-Orcutt, the estimate of the rho term in the presence of a lagged dependent variable would be inconsistent anyway, since the first-step OLS would be inconsistent?
In which case I shouldn't be running a regression with both a lagged dependent variable and an AR(1) in the regressor list?
-
startz
- Non-normality and collinearity are NOT problems!
- Posts: 3796
- Joined: Wed Sep 17, 2008 2:25 pm
Re: Cochrane-Orcutt in presence of a lagged dependent variable
No.
Either iterative Cochrane-Orcutt or adding AR(1) as EViews does gives the maximum-likelihood estimates (ignoring any possible multiple solutions, as you point out), and either is fine with a lagged dependent variable.
-
Bigbrotherjx
- Posts: 36
- Joined: Wed Feb 10, 2010 4:25 pm
Re: Cochrane-Orcutt in presence of a lagged dependent variable
Is this what is referred to in the manual as the "Marquardt nonlinear least squares algorithm"?
I'm afraid I'm not familiar with the theory behind this thing, or why it should indeed be compatible with lagged dependent variables. What's the best source of information?
-
Bigbrotherjx
- Posts: 36
- Joined: Wed Feb 10, 2010 4:25 pm
Re: Cochrane-Orcutt in presence of a lagged dependent variable
This will be very sensitive to the initial conditions, right? But I wouldn't have any control over that anyway just by putting AR(1) as an explanatory variable...?
-
startz
- Non-normality and collinearity are NOT problems!
- Posts: 3796
- Joined: Wed Sep 17, 2008 2:25 pm
Re: Cochrane-Orcutt in presence of a lagged dependent variable
Unless you have an unusual model, any of the available algorithms will end up in about the same place. And you can set the initial condition by entering a value in the c vector in the workfile, in the position that corresponds to the coefficient of interest.
-
Bigbrotherjx
- Posts: 36
- Joined: Wed Feb 10, 2010 4:25 pm
Re: Cochrane-Orcutt in presence of a lagged dependent variable
Would it be possible for someone to explain very briefly why EViews' method of estimation avoids the problems with lagged dependent variables that would normally afflict things like Cochrane-Orcutt? Or could you direct me to a paper or textbook?
-
startz
- Non-normality and collinearity are NOT problems!
- Posts: 3796
- Joined: Wed Sep 17, 2008 2:25 pm
Re: Cochrane-Orcutt in presence of a lagged dependent variable
Both EViews' method and iterated Cochrane-Orcutt work with a lagged dependent variable because both are nonlinear algorithms that minimize the sum of squared residuals. If the errors are normal, this is maximum likelihood.
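One way to see the shared objective is to concentrate out the betas: for each candidate rho, OLS on the quasi-differenced equation gives the best-fitting betas, and the resulting SSR(rho) is the profile whose local minima Betancourt and Kelejian discuss. A hypothetical grid scan on simulated data (illustrative names, not EViews' code):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate y_t = a + g*y_{t-1} + u_t with AR(1) errors, rho = 0.3
T, a_true, g_true, rho_true = 400, 1.0, 0.5, 0.3
e = rng.standard_normal(T)
u = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    u[t] = rho_true * u[t - 1] + e[t]
    y[t] = a_true + g_true * y[t - 1] + u[t]
Y, Ylag = y[1:], y[:-1]

def concentrated_ssr(rho):
    # Best betas for this rho via OLS on the quasi-differenced equation,
    # then the implied sum of squared residuals.
    X = np.column_stack([np.full(T - 2, 1.0 - rho),
                         Ylag[1:] - rho * Ylag[:-1]])
    z = Y[1:] - rho * Y[:-1]
    beta = np.linalg.lstsq(X, z, rcond=None)[0]
    return float(np.sum((z - X @ beta) ** 2))

grid = np.linspace(-0.95, 0.95, 191)
ssr = np.array([concentrated_ssr(r) for r in grid])
rho_hat = float(grid[ssr.argmin()])

# Count interior local minima of the profile -- more than one is the
# multiple-solutions situation raised in the original question.
n_local_minima = sum(1 for i in range(1, len(ssr) - 1)
                     if ssr[i] < ssr[i - 1] and ssr[i] < ssr[i + 1])
print(f"rho_hat={rho_hat:.2f}, local minima={n_local_minima}")
```

Scanning the profile like this is a cheap diagnostic: if SSR(rho) has a single interior dip, any of the algorithms will find the same answer regardless of starting values.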
-
Bigbrotherjx
- Posts: 36
- Joined: Wed Feb 10, 2010 4:25 pm
Re: Cochrane-Orcutt in presence of a lagged dependent variable
Does the fact that I'm using the White robust covariance matrix make any difference (I have heteroscedasticity in the time dimension)?
-
startz
- Non-normality and collinearity are NOT problems!
- Posts: 3796
- Joined: Wed Sep 17, 2008 2:25 pm
Re: Cochrane-Orcutt in presence of a lagged dependent variable
It shouldn't. Using robust standard errors affects the standard errors, but not the point estimates.
-
Bigbrotherjx
- Posts: 36
- Joined: Wed Feb 10, 2010 4:25 pm
Re: Cochrane-Orcutt in presence of a lagged dependent variable
In a system, should I be adding the AR(1) term individually for each equation then?
Thanks for all your help btw, you've been very quick with your replies
-
startz
- Non-normality and collinearity are NOT problems!
- Posts: 3796
- Joined: Wed Sep 17, 2008 2:25 pm
Re: Cochrane-Orcutt in presence of a lagged dependent variable
Yes. Adding AR(1) to a particular equation says that you're specifying autoregressive errors for that particular equation, which will affect the answer.
(Glad to offer the minor assist.)
-
EViews Glenn
- EViews Developer
- Posts: 2682
- Joined: Wed Oct 15, 2008 9:17 am
Re: Cochrane-Orcutt in presence of a lagged dependent variable
Let me add a bit to Startz's discussion of Cochrane-Orcutt with a lagged dependent variable, since I think it might be misleading to those not following carefully. With any luck, I won't make things worse.
As implied by the original question, a non-iterated version of Cochrane-Orcutt in the presence of a lagged dependent variable does not work. The reason is that Cochrane-Orcutt requires a consistent estimator of beta in order to estimate rho consistently. In the presence of the lagged dependent variable, least squares estimation of beta is not consistent; hence the residuals aren't, hence the estimator for rho isn't, and so on.
Iterated Cochrane-Orcutt and the EViews NLLS estimator for the transformed equation work because they simultaneously optimize with respect to both the rho and beta coefficients. There is no requirement of consistency for the starting estimate of beta. Startz correctly notes that iterated CO (with the emphasis here on iterated) is just a (not very) different way of estimating the nonlinear least squares estimator.
Personally, I find the NLLS variant easier to think about... there's really not much need for discussion or references, since once transformed, the model is simply an NLLS model with serially uncorrelated errors. At that point there is nothing different about the specification from any garden-variety nonlinear model, so the usual results apply. We (obviously) much prefer the NLLS form of this estimator since it simultaneously updates the rho and beta coefficients, making for a straightforward estimation procedure. This matters more in the TSLS versions of these estimators, which are somewhat muddled in the iterated Cochrane-Orcutt case (one has to be careful about what gets instrumented and what is added to the instrument list), and much cleaner in the NLLS case.
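A sketch of that NLLS view of the transformed equation, estimating rho and the betas jointly with scipy's least_squares. The x0 starting values here play the role the c vector plays in EViews; everything else (data, names) is an illustrative assumption, not EViews' implementation.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)

# Simulate y_t = a + g*y_{t-1} + u_t with AR(1) errors u_t = rho*u_{t-1} + e_t
T, a_true, g_true, rho_true = 400, 1.0, 0.5, 0.3
e = rng.standard_normal(T)
u = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    u[t] = rho_true * u[t - 1] + e[t]
    y[t] = a_true + g_true * y[t - 1] + u[t]

def residuals(theta):
    a_, g_, r_ = theta
    # Errors of the original equation...
    u_ = y[1:] - a_ - g_ * y[:-1]
    # ...quasi-differenced: at the true parameters these are white noise,
    # so minimizing their sum of squares over (a, g, rho) jointly is the
    # transformed-equation NLLS estimator.
    return u_[1:] - r_ * u_[:-1]

fit = least_squares(residuals, x0=[0.0, 0.0, 0.0])  # starting values matter
a_hat, g_hat, rho_hat = fit.x
print(f"a={a_hat:.2f}, g={g_hat:.2f}, rho={rho_hat:.2f}")
```

Because the optimizer moves rho and the betas together, there is no first-step estimate whose consistency anything depends on, which is exactly why the lagged dependent variable is not a problem here.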
I should also note that Startz's comment about ML should be read with appropriate caveats, since the NLLS estimator is only conditionally ML given the transformation. There is a slightly more efficient ML estimator that uses all of the data.
Lastly, I will note that my favorite description of the issues involved is in Chapter 6 of the Fair book cited in our manual. He shows the first-order conditions for the transformed model, which should be quite familiar to the Cochrane-Orcutt fans amongst us.