Dear all,
Hope everyone is staying well in this difficult time.
I am bootstrapping standard errors of the estimates that are obtained from the following line of code:
npregress kernel price unitsize i.floor i.byear_lump, estimator(linear) kernel(gauss) dkernel(li)
Bootstrapping the standard errors (via the wild bootstrap) is straightforward, except that each iteration takes a long time to finish.
I am running multiple instances of my code on a server, but the server imposes a 24-hour limit on how long each instance can run, and it turns out the code takes longer than that to terminate.
Other than reducing the number of observations (currently 72,000; the code takes about 79,000 seconds on my desktop), is there any way to work around the time limit? For example, could Stata store the current state of the algorithm just before the time limit force-quits the program, so that it can resume from there when the code is run again?
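In case it helps frame the question: one workaround I have been considering is to write the bootstrap as an explicit loop that appends each replicate's estimate to a checkpoint file on disk, so that after the server kills the job, resubmitting the same do-file skips the replicates already completed. A minimal sketch follows; the file name `boot_results.dta`, the per-replicate seed scheme, and the use of `bsample` (a pairs bootstrap, standing in for the actual wild-bootstrap draw) are placeholders, and I have not verified this on the full data.

```stata
local B 500                       // total bootstrap replicates (placeholder)
local ckpt "boot_results.dta"     // checkpoint file (placeholder name)

* Count replicates already completed in an earlier run, if any.
local done 0
capture confirm file "`ckpt'"
if _rc == 0 {
    preserve
    use "`ckpt'", clear
    local done = _N
    restore
}

forvalues i = `=`done'+1'/`B' {
    set seed `=12345 + `i''       // per-replicate seed, so a restart reproduces the same draws
    preserve
    bsample                       // placeholder: substitute the wild-bootstrap resampling step
    quietly npregress kernel price unitsize i.floor i.byear_lump, ///
        estimator(linear) kernel(gauss) dkernel(li)
    local b = _b[unitsize]        // adjust to the estimate being bootstrapped
    restore

    * Append this replicate to the checkpoint file immediately,
    * so progress survives a forced kill at the 24-hour mark.
    preserve
    clear
    set obs 1
    generate rep = `i'
    generate b_unitsize = `b'
    capture append using "`ckpt'" // capture: file does not exist on the first replicate
    save "`ckpt'", replace
    restore
}
```

The key point is that each replicate is flushed to disk as soon as it finishes, so at most one replicate's work is lost when the job is killed. Is there a cleaner built-in way to achieve this, or a pitfall I am missing?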
Any advice would be greatly appreciated,
Thanks,
Sam