I am running a Difference-in-Differences analysis. To test whether treatment and control units have common pre-treatment trends, I am doing the following exercise.
During the period before treatment, I run the regression below, where Treatment_i is a dummy equal to 1 if unit i is treated and 0 otherwise; Untreatment_i is a dummy equal to 1 if unit i is not treated and 0 otherwise; and Time_t is a time variable. I also add fixed effects for each time period and each unit, plus a number of observed controls, which I also include in the Difference-in-Differences regression.
Outcome_it = a + b*(Treatment_i * Time_t) + c*(Untreatment_i * Time_t) + FixedEffectsUnit_i + FixedEffectsTime_t + Controls_it + e_it
After estimating the equation above, I test whether the coefficients b and c are statistically different using an F test. If I reject the null hypothesis that the coefficients are equal, this means the treatment and untreated groups have fundamentally different pre-treatment trends.
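For concreteness, here is a minimal sketch of this test in Python with statsmodels; the DataFrame, column names (outcome, treated, time, unit, period, x1, x2), data file, and treatment start period are hypothetical, not from my actual data. Note that if Time_t is a deterministic function of the period, the two group-specific trends cannot both be separated from the time fixed effects; the coefficient on a single Treated × Time interaction then identifies b - c, so the F test of b = c reduces to testing that this coefficient is zero.

```python
import pandas as pd
import statsmodels.formula.api as smf

TREATMENT_START = 2015                      # hypothetical first treated period
df = pd.read_csv("panel.csv")               # hypothetical long-format panel
pre = df[df["period"] < TREATMENT_START]    # keep pre-treatment observations only

# Pre-trend regression with unit and time-period fixed effects plus controls.
# treated:time carries the difference in linear pre-trends (b - c); the time
# fixed effects absorb the common component of the trend.
model = smf.ols(
    "outcome ~ treated:time + x1 + x2 + C(unit) + C(period)",
    data=pre,
).fit(cov_type="cluster", cov_kwds={"groups": pre["unit"]})  # cluster by unit

# Test of equal pre-treatment trends, H0: b - c = 0
print(model.f_test("treated:time = 0"))
```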
To see how the test performs, I have run it under three model specifications: (1) without fixed effects and controls; (2) with fixed effects; and (3) with fixed effects and controls. One would expect that the more we control for observed variables and unobserved fixed factors, the more likely we should be to pass the test (i.e. we account for factors that may drive different pre-treatment trends, making the parallel-trends requirement easier to satisfy). So far, however, I am getting the following results:
- Model (1) fails the test
- Model (2) passes the test
- Model (3) fails the test
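To make the comparison reproducible, here is a hedged sketch of running the same test under the three specifications, reusing the hypothetical setup from the sketch above; specification (1) keeps both interactions and the literal F test of b = c, while (2) and (3) use the single-interaction form because the time fixed effects absorb one of the group trends.

```python
import pandas as pd
import statsmodels.formula.api as smf

TREATMENT_START = 2015                                    # hypothetical first treated period
pre = pd.read_csv("panel.csv").query("period < @TREATMENT_START").copy()
pre["untreated"] = 1 - pre["treated"]                     # Untreatment_i dummy

specs = [
    # (label, formula, null hypothesis for the pre-trend test)
    ("(1) no FE, no controls",
     "outcome ~ treated:time + untreated:time",
     "treated:time = untreated:time"),
    ("(2) unit and time FE",
     "outcome ~ treated:time + C(unit) + C(period)",
     "treated:time = 0"),
    ("(3) unit and time FE plus controls",
     "outcome ~ treated:time + x1 + x2 + C(unit) + C(period)",
     "treated:time = 0"),
]

for label, formula, null in specs:
    res = smf.ols(formula, data=pre).fit()
    print(label, "p-value:", float(res.f_test(null).pvalue))
```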
Thanks.