I have estimated a difference-in-differences model and obtained the following output.
Code:
reghdfe ltb_ta_w ///
> i.ibc2##i.treat2 ///
> size_w nfa_ta_w , a (co_code year) ///
> vce(robust)
(dropped 54 singleton observations)
note: 1bn.ibc2 is probably collinear with the fixed effects (all partialled-out values are close to zero; tol = 1.0e-09)
note: 1bn.treat2 is probably collinear with the fixed effects (all partialled-out values are close to zero; tol = 1.0e-09)
(MWFE estimator converged in 7 iterations)
note: 1.ibc2 omitted because of collinearity
note: 1.treat2 omitted because of collinearity
HDFE Linear regression Number of obs = 3,670
Absorbing 2 HDFE groups F( 3, 3031) = 21.63
Prob > F = 0.0000
R-squared = 0.8400
Adj R-squared = 0.8063
Within R-sq. = 0.0358
Root MSE = 0.0714
------------------------------------------------------------------------------
| Robust
ltb_ta_w | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
1.ibc2 | 0 (omitted)
1.treat2 | 0 (omitted)
|
ibc2#treat2 |
1 1 | .003701 .0056227 0.66 0.510 -.0073237 .0147258
|
size_w | .0218235 .0069342 3.15 0.002 .0082273 .0354197
nfa_ta_w | .1341914 .0188946 7.10 0.000 .0971438 .1712389
_cons | -.065562 .0625945 -1.05 0.295 -.1882939 .05717
------------------------------------------------------------------------------
Absorbed degrees of freedom:
-----------------------------------------------------+
Absorbed FE | Categories - Redundant = Num. Coefs |
-------------+---------------------------------------|
co_code | 629 0 629 |
year | 8 1 7 |
-----------------------------------------------------+
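If I understand the collinearity notes correctly, the main effects of ibc2 (common to all firms within a year) and treat2 (fixed within a firm) are absorbed by the year and firm fixed effects, so only the interaction is identified. My understanding, and please correct me if this is wrong, is that the command below is an equivalent way of writing the same model with only the interaction entered:

Code:
* Sketch of what I believe is the equivalent two-way FE DiD specification:
* the main effects are absorbed by co_code and year, so only the
* interaction 1.ibc2#1.treat2 is estimated.
reghdfe ltb_ta_w 1.ibc2#1.treat2 size_w nfa_ta_w, absorb(co_code year) vce(robust)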
1. On this forum I have seen recommendations to inspect the standard errors as a check for problems: the common advice seems to be that inflated standard errors point to issues in the data or specification (for example collinearity or very little variation), while moderate standard errors are less worrying. Since there are no hard-and-fast rules, can someone help me judge whether my standard errors are credible, i.e., rule out errors in the data or the model? (A sketch of the check I am considering is below the two questions.)
2. Another approach is to look at confidence intervals. Does my confidence interval for the interaction term indicate anything erroneous in the model?
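In case it helps, the check I am considering (only a sketch, and I am not sure whether clustering at the firm level, co_code, is appropriate for my data) is to compare the robust standard errors reported above with standard errors clustered by firm, since serial correlation within firms could make the robust ones too small:

Code:
* Robust (heteroskedasticity-only) standard errors, as in the output above
reghdfe ltb_ta_w 1.ibc2#1.treat2 size_w nfa_ta_w, absorb(co_code year) vce(robust)
estimates store m_robust

* Standard errors clustered by firm, allowing for within-firm serial correlation
reghdfe ltb_ta_w 1.ibc2#1.treat2 size_w nfa_ta_w, absorb(co_code year) vce(cluster co_code)
estimates store m_cluster

* Side-by-side comparison of coefficients and standard errors
estimates table m_robust m_cluster, b(%9.4f) se(%9.4f)

My (tentative) reading would be that if the clustered standard errors are much larger, the robust ones are optimistic rather than evidence of an error in the data, but I would welcome correction on this.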
I am mindful of Clyde Schechter's remarks on statistical significance, but I would like to know where the line is drawn between p-hacking and an ill-specified model.