Helpful Link: https://www.stata.com/support/faqs/statistics/one-sided-tests-for-coefficients/


Hello everyone. I realize there are a few other posts on this topic, but I am having a hard time piecing together the link above with the comments in those threads. I hope this post also adds value, since the link above does not address testing the difference between two regression coefficients when the 'test' command returns an F statistic.
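For reference, the recipe in the linked FAQ for a single coefficient looks roughly like this, as I understand it (a sketch on the auto dataset; the choice of mpg is just for illustration):

Code:
sysuse auto, clear
regress price mpg weight
* two-sided Wald test; r(F) is the F statistic, r(df_r) the residual df
test mpg = 0
local sign_mpg = sign(_b[mpg])
* one-sided p-values recovered from the signed square root of F
display "H_0: coef <= 0. p-value = " ttail(r(df_r), `sign_mpg'*sqrt(r(F)))
display "H_0: coef >= 0. p-value = " 1 - ttail(r(df_r), `sign_mpg'*sqrt(r(F)))

My adaptation for the difference between two coefficients: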

Code:
reg PctGrow ib1.Factor_Arm#ib2018.Year if Year == 2018 & TermFlag == 0, vce(cluster ClinicID)
* two-sided Wald test that the two interaction coefficients are equal
test 2.Factor_Arm#2018.Year - 3.Factor_Arm#2018.Year = 0
* sign of the estimated difference (3 minus 2), used to sign the square root of F
local sign_fac = sign(_b[3.Factor_Arm#2018.Year] - _b[2.Factor_Arm#2018.Year])
display "H_0: Rebate coef >= Ranking coef. p-value = " ttail(r(df_r), `sign_fac'*sqrt(r(F)))
OUTPUT:
Code:
. reg PctGrow ib1.Factor_Arm#ib2018.Year if Year == 2018 & TermFlag == 0 ,  vce(cluster ClinicID)

Linear regression                               Number of obs     =      2,700
                                                F(2, 134)         =       1.66
                                                Prob > F          =     0.1934
                                                R-squared         =     0.0041
                                                Root MSE          =     .46162

                                (Std. Err. adjusted for 135 clusters in ClinicID)
---------------------------------------------------------------------------------
                |               Robust
        PctGrow |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
----------------+----------------------------------------------------------------
Factor_Arm#Year |
        2 2018  |   .0449931    .037292     1.21   0.230    -.0287639      .11875
        3 2018  |  -.0261276   .0370407    -0.71   0.482    -.0993876    .0471324
                |
          _cons |    .619855   .0245349    25.26   0.000     .5713293    .6683808
---------------------------------------------------------------------------------

.         test 2.Factor_Arm#2018.Year - 3.Factor_Arm#2018.Year = 0

 ( 1)  2.Factor_Arm#2018b.Year - 3.Factor_Arm#2018b.Year = 0

       F(  1,   134) =    3.24
            Prob > F =    0.0739

.         local sign_fac = sign(_b[3.Factor_Arm#2018.Year]-_b[2.Factor_Arm#2018.Year ])

.         display "H_0: Rebate coef >= Ranking coef. p-value = " ttail(r(df_r),`sign_fac'*sqrt(r(F)))
H_0: Rebate coef >= Ranking coef. p-value = .96305365

I realize one-sided statistical tests come with numerous caveats. But I want to be able to reject the null hypothesis that the coefficient on 3.Factor_Arm... is greater than or equal to the coefficient on 2.Factor_Arm..., and a one-sided test seems most appropriate, since it is hard to tell from the confidence intervals alone whether the two coefficients are distinctly different.

The code is largely adapted from the link above. My problem is the "p-value" printed at the bottom. Perhaps the confusion is my own over whether it should be ttail() or 1-ttail(). The local sign_fac is certain to be negative here, so I am feeding a negative value to ttail(), and that function reports that 96% of the distribution lies above this test statistic. I am not convinced this is the p-value as traditionally interpreted: the probability of obtaining an even MORE extreme value under the null.

For example, if the 2.Factor_Arm coefficient were even larger, the test statistic would be even more negative, and the "p-value" these commands print would be even closer to 1.
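To put numbers on this (the 1.80 below is just sqrt(3.24) from the output above, rounded):

Code:
* signed statistic from my run: sign_fac = -1, sqrt(r(F)) = sqrt(3.24) ~ 1.80
display ttail(134, -1.80)      // P(T > -1.80) ~ .963, what my code prints
display 1 - ttail(134, -1.80)  // P(T < -1.80) ~ .037, half the two-sided p of .0739
display ttail(134, 1.80)       // by symmetry, equals 1 - ttail(134, -1.80)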

Can someone clarify for me?

1. To get a real p-value, shouldn't this be 1-ttail() in this case?
2. Or alternatively, should I remove the `sign_fac' multiplication inside ttail() entirely? Perhaps I do not understand the role of this macro in the ttail() call, because sqrt(r(F)) is automatically positive and I want the probability of obtaining an even greater value. (The lincom cross-check sketched below is my attempt to make this concrete.)
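For what it is worth, here is a cross-check that I would expect to agree with the corrected calculation, using lincom, which reports the signed difference and its standard error directly (a sketch; I label the hypotheses generically rather than guessing which arm is Rebate and which is Ranking):

Code:
* with the regression above still in memory
lincom 2.Factor_Arm#2018.Year - 3.Factor_Arm#2018.Year
local t = r(estimate)/r(se)    // signed t statistic for the estimated difference
display "H_0: diff <= 0. p-value = " ttail(e(df_r), `t')
display "H_0: diff >= 0. p-value = " 1 - ttail(e(df_r), `t')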

Thanks for considering a response. Hopefully others find the discussion helpful and constructive.