Hi everyone,

I created a Monte Carlo simulation to demonstrate attenuation bias, which I do see. I would also like to show that the betas from the regression with measurement error are more variable across replications, and that their reported standard errors are less accurate. My code generates an x_true and an x_observed. Can you help me with code to measure the variability of the betas across my simulations? Here is what I ran:
cap prog drop fivesample /* need to drop the program if it already exists */
program fivesample, rclass
    drop _all
    set obs 50
    generate x_true = rnormal()              // true regressor
    generate eps = rnormal()                 // regression error
    generate measerr = rnormal()             // measurement error
    generate x_observed = x_true + measerr   // regressor measured with error
    generate y = 1 + 1*x_true + eps          // true model: intercept 1, slope 1
    regress y x_true
    return scalar betatrue = _b[x_true]
    return scalar stderrtrue = _se[x_true]
    regress y x_observed
    return scalar betaobs = _b[x_observed]
    return scalar stderrobs = _se[x_observed]
end
fivesample
return list
simulate betat = r(betatrue) set = r(stderrtrue) betao = r(betaobs) seo = r(stderrobs), seed(4553) reps(1000) nodots: fivesample
summarize betat betao set seo
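
For the variability piece, here is what I have so far (just a sketch, assuming the simulate results above are the dataset in memory): the standard deviation of betat and betao across the 1,000 replications measures how variable each estimator is, and comparing the mean of the reported standard errors (set, seo) with those standard deviations shows how accurate the reported standard errors are.

* Sketch, assuming the simulate dataset above is in memory:
* SD of betat/betao = across-replication variability of each estimator;
* mean of set/seo = average reported standard error for comparison.
tabstat betat set betao seo, statistics(mean sd) columns(statistics)

If the reported standard errors are accurate, the mean of set should be close to the SD of betat, and likewise the mean of seo close to the SD of betao. Since Var(x_true) = Var(measerr) = 1 here, attenuation should also pull the mean of betao toward 1/(1+1) = 0.5 rather than 1.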