In the wild bootstrap, the process for generating many simulated data sets begins by refitting the original model, usually subject to the constraint that the null hypothesis holds (e.g., that a particular coefficient is zero). The outcome data are then decomposed into their predictions under this fit and the residuals. Then about half the residuals, randomly chosen, are negated, and new simulated outcomes are created by summing the initial best fit and the modified residuals. (Or the residuals can be multiplied by other random weights.)
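To make the steps concrete, here is a minimal sketch of one draw from the wild bootstrap data-generating process, written in Python with toy data. The variable names and the intercept-only restricted model are illustrative assumptions, not boottest's internals.

```python
# Illustrative sketch of one wild-bootstrap draw with Rademacher weights.
# Toy data and names are hypothetical; boottest's implementation differs.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = b0 + b1*x + noise, with null hypothesis H0: b1 = 0.
n = 100
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(size=n)

# Step 1: refit under the null. With b1 constrained to zero, the
# restricted fit is just the intercept-only model: predict mean(y).
fitted_null = np.full(n, y.mean())
residuals = y - fitted_null

# Step 2: negate each residual with probability 1/2 (Rademacher
# weights), then sum the fitted values and the modified residuals.
weights = rng.choice([-1.0, 1.0], size=n)
y_star = fitted_null + weights * residuals

# The simulated outcomes keep the restricted fit and the residual
# magnitudes; only the residuals' signs change.
print(np.allclose(np.abs(y_star - fitted_null), np.abs(residuals)))
```

Repeating step 2 many times yields the many simulated outcome vectors on which the test statistic is recomputed.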

Young (2022) and MacKinnon, Nielsen, and Webb (2022) advocate jackknifing the initial fit by cluster. In the usual, non-jackknife wild bootstrap, if a single cluster contains extreme observations, they will drive the initial model fit toward minimizing their apparent extremity, thus potentially making the bootstrap data-generating process less realistic. To compensate, the jackknifed bootstrap computes each cluster's best fit, and the associated residuals, by fitting the model only to the data from all the other clusters, then extending its predictions to the cluster in question. The two papers demonstrate the benefit of jackknifing in simulations, the first for instrumented regressions, the second for OLS.
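The cluster-jackknife idea can be sketched in a few lines of Python. This is a deliberately simplified illustration with an intercept-only restricted model and made-up data; the names are hypothetical and boottest's actual computation is more efficient.

```python
# Sketch of jackknifing the initial fit by cluster: each cluster's
# predictions come from a model fit to all *other* clusters.
# Intercept-only restricted model for clarity; data are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n_clusters = 5
cluster = np.repeat(np.arange(n_clusters), 20)  # 5 clusters of 20 obs
y = rng.normal(size=cluster.size)
y[cluster == 0] += 10.0  # one cluster with extreme observations

fitted_jk = np.empty_like(y)
for g in range(n_clusters):
    others = cluster != g
    # Fit the (restricted, intercept-only) model on every other cluster...
    fit_without_g = y[others].mean()
    # ...then extend its predictions to cluster g.
    fitted_jk[cluster == g] = fit_without_g

residuals_jk = y - fitted_jk
# The extreme cluster cannot pull its own fitted values toward itself,
# so its residuals stay large rather than being artificially shrunk.
print(fitted_jk[cluster == 0][0] < y[cluster == 0].mean())
```

With the full-sample fit, cluster 0's outliers would drag the intercept upward and shrink that cluster's residuals; the leave-one-cluster-out fit leaves them intact, which is the effect the jackknife is after.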

I just added this feature to boottest--though for now only after OLS, or after instrumental variables regression when running the Anderson-Rubin test. The implementation corresponds to what MacKinnon, Nielsen, and Webb (2022) call the WCU/WCR31. I'm working on fully extending it to IV regression as in Young (2022). You invoke it by adding the jk or jackknife option to the command line.

Install the latest version of boottest with
Code:
ssc install boottest, replace