Dear all,
Maybe this is a silly question, but I cannot spot the error. Basically, I compare two samples using either a t-test or a permutation test; ideally, both results should converge. I am puzzled by the p-values of the one- and two-sided versions of each test. See this basic example.

Code:
. clear all

. version 16.1

. sysuse auto
(1978 Automobile Data)

. ttest headroom, by(foreign)

Two-sample t test with equal variances
------------------------------------------------------------------------------
   Group |     Obs        Mean    Std. Err.   Std. Dev.   [95% Conf. Interval]
---------+--------------------------------------------------------------------
Domestic |      52    3.153846    .1269928    .9157578    2.898898    3.408795
 Foreign |      22    2.613636     .103676    .4862837     2.39803    2.829242
---------+--------------------------------------------------------------------
combined |      74    2.993243    .0983449    .8459948    2.797242    3.189244
---------+--------------------------------------------------------------------
    diff |            .5402098    .2070884                .1273867    .9530329
------------------------------------------------------------------------------
    diff = mean(Domestic) - mean(Foreign)                         t =   2.6086
Ho: diff = 0                                     degrees of freedom =       72

    Ha: diff < 0                 Ha: diff != 0                 Ha: diff > 0
 Pr(T < t) = 0.9945         Pr(|T| > |t|) = 0.0110          Pr(T > t) = 0.0055

. permute foreign r(mean) if foreign, reps(25000) seed(346) nodots nodrop nowarn: summarize headroom, meanonly

Monte Carlo permutation results                 Number of obs     =         74

      command:  summarize headroom if foreign, meanonly
        _pm_1:  r(mean)
  permute var:  foreign

------------------------------------------------------------------------------
T            |     T(obs)       c       n   p=c/n   SE(p) [95% Conf. Interval]
-------------+----------------------------------------------------------------
       _pm_1 |   2.613636   24877   25000  0.9951  0.0004  .9941325   .9959094
------------------------------------------------------------------------------
Note: Confidence interval is with respect to p=c/n.
Note: c = #{|T| >= |T(obs)|}

. permute foreign r(mean) if foreign, right reps(25000) seed(346) nodots nodrop nowarn: summarize headroom, meanonly

Monte Carlo permutation results                 Number of obs     =         74

      command:  summarize headroom if foreign, meanonly
        _pm_1:  r(mean)
  permute var:  foreign

------------------------------------------------------------------------------
T            |     T(obs)       c       n   p=c/n   SE(p) [95% Conf. Interval]
-------------+----------------------------------------------------------------
       _pm_1 |   2.613636   24877   25000  0.9951  0.0004  .9941325   .9959094
------------------------------------------------------------------------------
Note: Confidence interval is with respect to p=c/n.
Note: c = #{T >= T(obs)}

. permute foreign r(mean) if foreign, left reps(25000) seed(346) nodots nodrop nowarn: summarize headroom, meanonly

Monte Carlo permutation results                 Number of obs     =         74

      command:  summarize headroom if foreign, meanonly
        _pm_1:  r(mean)
  permute var:  foreign

------------------------------------------------------------------------------
T            |     T(obs)       c       n   p=c/n   SE(p) [95% Conf. Interval]
-------------+----------------------------------------------------------------
       _pm_1 |   2.613636     181   25000  0.0072  0.0005  .0062267   .0083703
------------------------------------------------------------------------------
Note: Confidence interval is with respect to p=c/n.
Note: c = #{T <= T(obs)}

My question is: why is the result of the two-sided t-test (p = 0.0110) so different from that of the two-sided permutation test (p = 0.9951)? The one-sided p-values, by contrast, are very similar across the two tests.
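
One thing I notice: the permuted statistic r(mean) is the foreign-group mean of headroom, which is always positive, so the two-sided count c = #{|T| >= |T(obs)|} reduces to the right-tail count c = #{T >= T(obs)} (which matches the identical 0.9951 results above). If I understand -permute- correctly, the two-sided option is only meaningful for a statistic centered at zero under the null. As a sketch (untested on my side), one could instead permute the t statistic, which -ttest- should leave behind in r(t):

Code:
. * Sketch: permute a zero-centered statistic so |T| >= |T(obs)| is meaningful
. permute foreign r(t), reps(25000) seed(346) nodots: ttest headroom, by(foreign)

I would expect the two-sided p-value from this variant to be close to the t-test's 0.0110, but I may be missing something.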