Hi Statalist!

Any advice on the below would be very much appreciated!

We have a fixed effects model, on a small-T large-N dataset, which is delivering strong results. We suspect there may be a serial correlation issue that may necessitate a dynamic estimation, as the dependent variable is somewhat persistent.

The model is specified as follows:
Code:
xtreg xROA_c1 L1.(op BB concn EE LEV RD tq) y_*, fe robust
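As context for our first question below: the residual check that follows is run with regress rather than with xtreg, fe. A minimal sketch of how residuals could instead be taken from the fixed-effects fit itself, using what we understand to be predict's e option after xtreg (uhat_fe is our own variable name):

Code:
xtreg xROA_c1 L1.(op BB concn EE LEV RD tq) y_*, fe robust
* the e option returns the idiosyncratic component e_it (excludes the fixed effect u_i)
predict uhat_fe, e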
At present, we have used the code below to estimate the correlations between uhat and its lags:

Code:
quietly regress xROA_c1 L1.(oe op EE RD LEV concn tq BB) y_*, vce(cluster uid)
predict uhat, residuals
* correlate the residuals with their own lags 1-6
* (requires the data to be xtset/tsset)
forvalues j = 1/6 {
    quietly corr uhat L`j'.uhat
    display "Autocorrelation at lag `j' = " %6.3f r(rho)
}
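As a cross-check, we understand that Wooldridge's test for serial correlation in panel-data models is available via the user-written xtserial command (Drukker, 2003, Stata Journal). A sketch, assuming xtserial accepts time-series operators in the varlist:

Code:
* install once: ssc install xtserial
* if the L1.() notation is rejected, the lags would need to be
* generated as plain variables first
xtserial xROA_c1 L1.(op BB concn EE LEV RD tq)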
The results of the loop above suggest serial correlation:
Autocorrelation at lag 1 = 0.396
Autocorrelation at lag 2 = 0.300
Autocorrelation at lag 3 = 0.293
Autocorrelation at lag 4 = 0.242
Autocorrelation at lag 5 = 0.219
Autocorrelation at lag 6 = 0.146
When we include a lagged dependent variable in the specification (sketched after the figures below), these autocorrelations decrease substantially:
Autocorrelation at lag 1 = -0.059
Autocorrelation at lag 2 = 0.152
Autocorrelation at lag 3 = 0.128
Autocorrelation at lag 4 = 0.027
Autocorrelation at lag 5 = 0.141
Autocorrelation at lag 6 = 0.034
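For completeness, the lagged-dependent-variable specification we ran is along these lines (a sketch; we are aware that a lagged dependent variable under within estimation raises Nickell-bias concerns when T is small, which is partly why we ask question 3 below):

Code:
xtreg xROA_c1 L1.xROA_c1 L1.(op BB concn EE LEV RD tq) y_*, fe robust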
Given these results, I have three questions which I hope you can help with:
  1. Given that our specification is a fixed effects model with time lags, are we testing for autocorrelation/serial correlation of the residuals correctly?
  2. Given these autocorrelations, is there a threshold at which we should consider autocorrelation to be a problem?
  3. If our tests do indicate an autocorrelation problem, is there an alternative way to address it without opting for a dynamic model (GMM) or a finite distributed lag model? (Two options we have read about are sketched below.)
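For question 3, a sketch of the two alternatives we have in mind. Option A is xtregar, Stata's built-in fixed-effects estimator with AR(1) within-panel disturbances; Option B is Driscoll-Kraay standard errors via Hoechle's (2007) user-written xtscc command, which are robust to serial correlation up to a chosen lag. We are not certain either command accepts time-series operators, so the lags may need to be generated explicitly first:

Code:
* Option A: FE with AR(1) disturbances (xtregar does not allow
* factor variables; our y_* dummies are already plain variables)
xtregar xROA_c1 L1.(op BB concn EE LEV RD tq) y_*, fe

* Option B: Driscoll-Kraay standard errors (ssc install xtscc);
* lag(4) is an illustrative choice, not a recommendation
xtscc xROA_c1 L1.(op BB concn EE LEV RD tq) y_*, fe lag(4)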
Thank you very much for your consideration!

Kind regards
Ayrton Da Silva