Dear all,
Hope everyone is staying well in this difficult time.
I am bootstrapping the standard errors of the estimates obtained from the following line of code:
npregress kernel price unitsize i.floor i.byear_lump, estimator(linear) kernel(gauss) dkernel(li)
Bootstrapping the standard errors (using a wild bootstrap) is conceptually straightforward, except that each replication takes a long time to finish.
I am running multiple instances of my code on a server, but the server imposes a 24-hour limit on how long each instance can run, and it turns out that my code takes longer than that to finish.
Other than reducing the number of observations (currently 72,000; the code takes about 79,000 seconds to finish on my desktop), is there any way to get around the time limit? For example, could Stata save the current state of the algorithm right before the time limit force-quits the program, so that it can resume from that point when the code is run again?
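To make the idea concrete, below is a rough sketch of the kind of checkpointing loop I have in mind: each replication writes its estimates to its own file, and on a restart the loop skips replications whose file already exists. The file names (mydata.dta, boot_rep_*.dta), the number of replications, and the Rademacher weights are placeholders for illustration only; the sketch also assumes the original model has already been fit once, with the fitted values saved as fit and the residuals as resid in the data.

local B = 399                               // total bootstrap replications (placeholder)

forvalues b = 1/`B' {
    capture confirm file "boot_rep_`b'.dta"
    if _rc == 0 continue                    // this replication was already saved; skip it

    use "mydata.dta", clear                 // original data, with fit and resid already saved
    set seed `=12345 + `b''                 // per-replication seed, so a restart redraws the same weights

    * wild-bootstrap outcome with Rademacher weights (one common choice)
    generate double v = cond(runiform() < 0.5, -1, 1)
    generate double price_star = fit + v*resid

    npregress kernel price_star unitsize i.floor i.byear_lump, ///
        estimator(linear) kernel(gauss) dkernel(li)

    * checkpoint: write this replication's point estimates to disk
    matrix eb = e(b)
    drop _all
    svmat double eb
    generate int rep = `b'
    save "boot_rep_`b'.dta", replace
}

* once all replication files exist, combine them and take standard deviations
clear
forvalues b = 1/`B' {
    append using "boot_rep_`b'.dta"
}
summarize eb*

Is something along these lines sensible, or is there a better built-in way to restart an interrupted bootstrap?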
Any advice would be greatly appreciated,
Thanks,
Sam