As is to be expected, not every model will converge, and where this happens the model currently being fitted loops indefinitely. Rather than having to identify each of these in turn and restart, I'd like to amend the code so that, if convergence is not reached within, e.g., 100 iterations, the model-fitting process moves on to the next predictor. The estimated coefficients in the output file could simply be left missing, or an error message could be written to the output file in another variable (e.g. "Non convergence"). Is this a relatively straightforward process? I've posted some sample code below, which may help: it runs when every model converges, but goes into an infinite loop once non-convergence appears. Many thanks for any advice offered.
Code:
use test_dataset, clear
describe

/* Contains data from test_dataset
  obs:         6,312
 vars:         1,002
----------------------------------------------------------------------------------------
                storage  display    value
variable name     type   format     label    variable label
----------------------------------------------------------------------------------------
case_control_no   float  %9.0g               Case control status (1=with outcome, 2=without outcome)
case_studyid                                 matched sets identifier
exposure1         float  %9.0g               1 exposure (binary Y/N)
exposure2         float  %9.0g               2 exposure (binary Y/N)
exposure3         float  %9.0g               3 exposure (binary Y/N)
exposure4         float  %9.0g               4 exposure (binary Y/N)
ETC
*/

postutil clear
postfile temp exposure logor se using results_dataset, replace
forvalues i = 1(1)1000 {
    local a `i'
    /* unadjusted analyses - any exposure */
    clogit case_control_no i.exposure`i', group(case_studyid)
    /* coefficient name must match the fitted model's factor variable */
    local logor = _b[1.exposure`i']
    local se = _se[1.exposure`i']
    post temp (`a') (`logor') (`se')
}
/* postclose must come after the loop, or the second post will fail */
postclose temp
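One possible approach, sketched below and untested against the actual dataset: cap the iterations with clogit's `iterate()` maximize option, wrap the call in `capture` to trap hard errors, and inspect the returned `e(converged)` flag before posting results. The added `note` string variable in the postfile is an assumption, not part of the original design, and is used here to record "Non convergence" alongside missing coefficients.

```stata
postutil clear
/* str20 note is a hypothetical extra variable to flag failed fits */
postfile temp exposure logor se str20 note using results_dataset, replace
forvalues i = 1(1)1000 {
    /* stop after 100 iterations rather than looping indefinitely;
       capture also traps any error code the estimation may raise */
    capture clogit case_control_no i.exposure`i', ///
        group(case_studyid) iterate(100)
    if _rc | !e(converged) {
        /* post missing values plus a note, then skip to the next predictor */
        post temp (`i') (.) (.) ("Non convergence")
        continue
    }
    post temp (`i') (_b[1.exposure`i']) (_se[1.exposure`i']) ("")
}
postclose temp
```

With `iterate(100)`, clogit stops at the iteration cap without throwing an error, so checking `e(converged)` (in addition to the `_rc` return code from `capture`) is what actually catches the non-converged fits.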
Writing results to file when model fails to converge