I am trying to understand the output from a GSEM model that uses latent variables and a sampling weight. Essentially, I am trying to figure out the relative impact and significance of three latent variables (Exposure, Worry, and Past) on four binary dummy variables (leave, nothings, mit_wcs, and protect_wcs) as the dependent variables.
When I first constructed it using the SEM Builder, I set it up as a normal (linear) SEM model:
CODE:
sem (Exposure -> q69 q68 q10 q11 q12 q22 q47_n leave mit_wcs protect_wcs nothings) ///
    (Worry -> q54_reverse q60_reverse q62_reverse leave mit_wcs protect_wcs nothings) ///
    (Past -> leave q71_reverse q72_reverse q73_reverse q74_reverse mit_wcs protect_wcs nothings) ///
    [pweight = weight], covstruct(_lexogenous, diagonal) standardized ///
    latent(Exposure Worry Past) nocapslatent
In this model, I set the reporting to show standardized coefficients so that I could interpret all of the latent variables going into each dependent variable, instead of having one constrained.
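(Side note: standardized is purely a display option here, so as far as I know the same results can also be redisplayed after estimation without refitting the model:

CODE:
sem, standardized

Typing the command name alone replays the stored results, in this case with standardized coefficients.)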
OUTPUT (I trimmed out the measurement portion that creates the latent variables; please let me know if I need to put that back in to understand the problem):
-----------------------------------------------------------------------------------
                 |               Robust
    Standardized |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
----------------+------------------------------------------------------------------
leave |
Exposure | .0208351 .1357459 0.15 0.878 -.245222 .2868921
Worry | .2543848 .1092235 2.33 0.020 .0403107 .4684589
Past | .1435473 .1041415 1.38 0.168 -.0605662 .3476609
_cons | .9174479 .0891612 10.29 0.000 .7426951 1.092201
----------------+----------------------------------------------------------------
mit_wcs |
Exposure | -.2213914 .1068167 -2.07 0.038 -.4307483 -.0120345
Worry | -.3137979 .1062784 -2.95 0.003 -.5220996 -.1054961
Past | .8426943 .0507751 16.60 0.000 .7431768 .9422117
_cons | 1.664636 .2063966 8.07 0.000 1.260106 2.069166
----------------+----------------------------------------------------------------
protect_wcs |
Exposure | -.2243416 .1186625 -1.89 0.059 -.4569159 .0082326
Worry | -.2077294 .1249082 -1.66 0.096 -.4525449 .0370862
Past | .8934766 .0434587 20.56 0.000 .8082991 .9786542
_cons | 1.845029 .2391518 7.71 0.000 1.3763 2.313758
----------------+----------------------------------------------------------------
nothings |
Exposure | .1600502 .0965356 1.66 0.097 -.029156 .3492564
Worry | .1358037 .1227594 1.11 0.269 -.1048004 .3764078
Past | -.7846964 .0563719 -13.92 0.000 -.8951833 -.6742095
_cons | .3588613 .049722 7.22 0.000 .2614081 .4563146
----------------+----------------------------------------------------------------
Interpreting the p-values and the directions of the coefficients, this made sense in the scope of the previous literature on the subject and the theoretical model.
However, I think what I really need is a GSEM model, because many of my variables (all of the indicators supporting the Exposure latent variable, and all of the final dependent variables) are binary 0/1, so I believe I need to model them with logistic regressions.
CODE:
gsem (Exposure -> q69 q68 q10 q11 q12 q22 q47_n leave mit_wcs protect_wcs nothings, ///
         family(bernoulli) link(logit)) ///
     (Worry -> q54_reverse q60_reverse q62_reverse) ///
     (Worry -> leave mit_wcs protect_wcs nothings, family(bernoulli) link(logit)) ///
     (Past -> leave, family(bernoulli) link(logit)) ///
     (Past -> q71_reverse q72_reverse q73_reverse q74_reverse) ///
     (Past -> mit_wcs protect_wcs nothings, family(bernoulli) link(logit)) ///
     [pweight = weight], covstruct(_lexogenous, diagonal) vce(robust) ///
     latent(Exposure Worry Past) nocapslatent
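(A replay tip: after estimation, typing the command name alone redisplays the results, and the coeflegend display option lists each parameter's internal name, which is handy for follow-up lincom or test statements on the freely estimated loadings:

CODE:
gsem, coeflegend
)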
But gsem cannot be set to show standardized coefficients, and for identification it anchors each latent variable by constraining one of its paths to 1, so one of the three latent variables going into my dependent variables shows up as constrained.
EXCERPT FROM OUTPUT:
------------------------------------------------------------------------------------
                   |               Robust
                   |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------------+----------------------------------------------------------------
leave |
Exposure | .0119008 .1705572 0.07 0.944 -.3223851 .3461867
Worry | 1 (constrained)
Past | .7463485 .4164972 1.79 0.073 -.069971 1.562668
_cons | -.2277844 .2182763 -1.04 0.297 -.655598 .2000293
-------------------+----------------------------------------------------------------
mit_wcs |
Exposure | -.1644291 .1750106 -0.94 0.347 -.5074435 .1785853
Worry | -1.171892 1.082415 -1.08 0.279 -3.293385 .9496019
Past | 1 (constrained)
_cons | 1.298088 .235061 5.52 0.000 .8373769 1.758799
-------------------+----------------------------------------------------------------
So I don't understand how to interpret the results with one of the main variables constrained. Is there any advice on how I can generate output that has all variables in play, or on how to interpret the constrained coefficient's absence? Or, even though the outcomes are binary, should I set up the model in a different way to get more interpretable output?
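One idea I am considering (untested here, so corrections welcome) is to identify each latent variable by fixing its variance to 1 instead of letting gsem anchor a loading; my understanding of Stata's anchoring rules is that supplying your own normalization constraint releases the automatic one, which would leave all three paths into each outcome freely estimated, on a log-odds-per-SD-of-the-latent-variable scale. A minimal sketch of what that might look like for this model (using gsem's logit shorthand for family(bernoulli) link(logit)):

CODE:
gsem (Exposure -> q69 q68 q10 q11 q12 q22 q47_n leave mit_wcs protect_wcs nothings, logit) ///
     (Worry -> q54_reverse q60_reverse q62_reverse) ///
     (Worry -> leave mit_wcs protect_wcs nothings, logit) ///
     (Past -> leave, logit) ///
     (Past -> q71_reverse q72_reverse q73_reverse q74_reverse) ///
     (Past -> mit_wcs protect_wcs nothings, logit) ///
     [pweight = weight], covstruct(_lexogenous, diagonal) vce(robust) ///
     var(Exposure@1 Worry@1 Past@1) latent(Exposure Worry Past) nocapslatent

Note that this only standardizes the latent (predictor) side, not the binary outcomes, so the coefficients would still be log odds rather than fully standardized effects, and if a loading still shows up as constrained, the normalization presumably needs further adjustment.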
Thanks!