Dear experts,

I am testing the accuracy of a continuous risk score for predicting mortality (binary 0/1). I have the ROC curve and the AUC calculated. I am now looking to do internal validation by bootstrapping: I would like to compare the ROC curve from the bootstrap samples against the original one, and I wonder whether there is code to help with such an approach.
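From my reading of the manuals, rocreg may already do something close to this, since its nonparametric estimator uses bootstrap-based inference; a minimal sketch, assuming my variables Mortality and score, would be:

* My reading of [R] rocreg: nonparametric ROC for score against Mortality,
* with bootstrap replications for the AUC standard error
rocreg Mortality score, breps(200) bseed(12345)
rocregplot    // plot the estimated ROC curve

But I am not sure this gives the optimism correction I am after.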
I also searched the forum, and so far this is the code I have pieced together (below). I would appreciate your feedback on whether it is correct to use, and whether there is an easier bootstrap-based approach to internal validation.
Sincerely
* Program to refit the model on a bootstrap sample and return its AUC
capture program drop optimism
program define optimism, rclass
    preserve                      // data are restored automatically when the program exits
    bsample                       // draw a bootstrap sample (with replacement)
    logit Mortality score         // refit the model on the bootstrap sample
    lroc, nograph                 // AUC of the refitted model in the bootstrap sample
    return scalar area_bootstrap = r(area)
end
* Fit the model on the original data and store the apparent AUC
logit Mortality score
lroc, nograph
local base_ROC = r(area)
tempfile sim_results
simulate area = r(area_bootstrap), reps(200) seed(12345) saving(`sim_results'): optimism
use `sim_results', clear
gen diff = area - `base_ROC'          // bootstrap AUC minus the original (apparent) AUC, stored above
gen corrected = `base_ROC' - diff     // optimism-corrected AUC for each replication
sum area                              // average AUC in the bootstrap samples
sum diff                              // average optimism estimate
sum corrected                         // average optimism-corrected AUC
_pctile corrected, p(2.5 50 97.5)     // 2.5th, 50th, and 97.5th percentiles of the corrected AUC
return list                           // r(r1), r(r2), r(r3) hold the requested percentiles
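For comparison, I also tried to piece together the fuller optimism correction I have seen described (Harrell-style), where the model refit on each bootstrap sample is additionally tested back on the original data. The program name optimism_full and the tempfile opt_results are my own inventions, and I am not certain this is right either:

* Fuller optimism bootstrap (my attempt): refit on a bootstrap sample,
* then test the refitted model on the original data
capture program drop optimism_full
program define optimism_full, rclass
    preserve
    bsample
    logit Mortality score             // refit on the bootstrap sample
    lroc, nograph
    local area_boot = r(area)         // apparent AUC in the bootstrap sample
    restore                           // back to the original data
    tempvar p
    predict double `p', pr            // bootstrap-model predictions on the original data
    roctab Mortality `p'              // test AUC of the bootstrap model on the original data
    return scalar optimism = `area_boot' - r(area)
end

tempfile opt_results
simulate opt = r(optimism), reps(200) seed(12345) saving(`opt_results'): optimism_full
use `opt_results', clear
sum opt                               // mean optimism across replications
display "optimism-corrected AUC = " `base_ROC' - r(mean)   // reuses base_ROC stored above

If I understand correctly, with a single continuous predictor the refitted logit is a monotone transformation of score, so the test AUC should equal the original AUC and this should reduce to the simpler code above; is that right?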