Dear all,
Hope everyone is staying well in this difficult time.
I am bootstrapping standard errors of the estimates that are obtained from the following line of code:
npregress kernel price unitsize i.floor i.byear_lump, estimator(linear) kernel(gauss) dkernel(li)
Bootstrapping the standard errors (using the wild bootstrap) is straightforward, except that each iteration takes a long time to finish.
I am running multiple instances of my code on a server, but the server imposes a time limit on how long each instance can run (24 hours), and it turns out that the code takes longer than that to terminate.
Other than reducing the number of observations (currently 72,000; the code takes about 79,000 seconds on my desktop), is there any way to get around the time limit? For example, could Stata store the current state of the algorithm right before the time limit force-quits the program, so that Stata can restart from there when the code is run again?
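(Not part of the original question, but one common workaround is to checkpoint the bootstrap manually rather than the estimator's internal state: write each completed replication to disk, and on restart skip the replications already done. The sketch below assumes a wild-bootstrap loop written by hand around npregress; the file name "boot_results.dta", the replication count of 500, and the placeholder variable b_unitsize are all illustrative, not from the original code.)

```stata
* Hedged sketch: resume-able bootstrap via a results file on disk.
* Assumes each replication's estimate is extracted after npregress runs.
local B 500                       // illustrative number of replications
local done 0
capture confirm file "boot_results.dta"
if _rc == 0 {
    preserve
    use "boot_results.dta", clear
    local done = _N               // replications completed in earlier runs
    restore
}
local start = `done' + 1
forvalues b = `start'/`B' {
    * ... construct the wild-bootstrap sample and run npregress here ...
    * after estimation, append the stored estimate of interest:
    preserve
    clear
    set obs 1
    generate rep = `b'
    generate b_unitsize = .       // replace . with the saved estimate
    capture append using "boot_results.dta"
    save "boot_results.dta", replace
    restore
}
```

When the server kills the job at 24 hours, the next submission reads boot_results.dta, counts the finished replications, and continues from the next one, so no completed work is lost.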
Any advice would be greatly appreciated,
Thanks,
Sam