I investigate how firms can take advantage of their investors' assets.
My data is organized as an unbalanced panel with firm ID as the panel variable and report period as the time variable. There are 149 firms and 490 observations.
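In Stata terms, the panel is declared along these lines (id_venture is the group variable shown in my output below; report_period is a placeholder name for my time variable):
Code:
* declare the panel structure; report_period is a placeholder name
xtset id_venture report_period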
For my first and second hypotheses, my dependent variable is a score ranging from 0 to 10 in steps of 0.5 (21 ordered categories). My independent variable is a dummy.
H1:
Dependent variable: revenue rating, rev_rating; score from 0 to 10 in steps of 0.5
Independent variable: theoretical fit, theo_fit; dummy, 1 if there is a potential fit between two specific entities, 0 otherwise
Controls: batch (categorical, 7 options), age (number of months, 23-100), year (controlling for the year in which the rating was performed; categorical, 3 options), status (categorical, 4 options), no_head (size in number of employees, 0-1,100), office (categorical, 12 options)
H2:
Dependent variable: rev_rating; score from 0 to 10 in steps of 0.5
Independent variable: cur_fit; dummy; 1 if there is an actual fit between the startup and another entity, 0 otherwise
Note: theo_fit=1 (the independent variable in H1 and in H3 below) is a prerequisite for cur_fit=1, but not every observation with theo_fit=1 also has cur_fit=1.
I was advised to run xtologit and compare the results with xtoprobit. However, I am not sure whether this is the right approach, since my dependent variable is a score rather than a dummy. I also cannot run any fixed-effects regressions, because my independent variable does not change over time and gets omitted. Does anyone have other recommendations?
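For reference, here is roughly what that recommendation looks like with my variable names (a sketch based on the H1 specification above, not my final model):
Code:
* random-effects ordered logit vs. ordered probit for H1
xtologit rev_rating i.theo_fit i.batch c.age i.year i.status c.no_head i.office
estimates store re_ologit
xtoprobit rev_rating i.theo_fit i.batch c.age i.year i.status c.no_head i.office
estimates store re_oprobit
* the two models are not nested, so compare information criteria
estimates stats re_ologit re_oprobit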
Moreover, I fear that my third hypothesis cannot be tested by regression analysis. Here, both my dependent variable (current fit) and my independent variable (theoretical fit) are dummies, moderated by another dummy variable (information sharing). I made a contingency table to check the frequency of every possible outcome (the commands are sketched after the H3 specification below). Is there another way to analyze it?
H3:
Dependent variable: cur_fit
Independent variable: theo_fit
Moderating variable: information sharing, share_info; dummy
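The contingency table I mentioned can be reproduced along these lines (a minimal sketch; my actual table layout may differ):
Code:
* frequencies of cur_fit by theo_fit, split by the moderator share_info
bysort share_info: tabulate cur_fit theo_fit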
I ran the following:
Code:
xtlogit cur_fit i.theo_fit##i.share_info
Code:
note: 0.theo_fit != 0 predicts failure perfectly
      0.theo_fit dropped and 302 obs not used
note: 1.theo_fit omitted because of collinearity
note: 1.theo_fit#1.share_info omitted because of collinearity

Fitting comparison model:

Iteration 0:   log likelihood = -104.60614
Iteration 1:   log likelihood = -104.38846
Iteration 2:   log likelihood = -104.38808
Iteration 3:   log likelihood = -104.38808

Fitting full model:

tau =  0.0     log likelihood = -104.38808
tau =  0.1     log likelihood = -99.379343
tau =  0.2     log likelihood = -94.396503
tau =  0.3     log likelihood = -89.452781
tau =  0.4     log likelihood = -84.509089
tau =  0.5     log likelihood = -79.488327
tau =  0.6     log likelihood = -74.273074
tau =  0.7     log likelihood = -68.676386
tau =  0.8     log likelihood = -62.379474

Iteration 0:   log likelihood = -68.679948
Iteration 1:   log likelihood = -35.388335  (not concave)
Iteration 2:   log likelihood = -35.206976  (not concave)
Iteration 3:   log likelihood = -35.071197  (not concave)
Iteration 4:   log likelihood = -35.014527  (not concave)
Iteration 5:   log likelihood = -34.90272   (not concave)
Iteration 6:   log likelihood = -34.746671  (not concave)
Iteration 7:   log likelihood = -34.608508  (not concave)
Iteration 8:   log likelihood = -34.498735  (not concave)
Iteration 9:   log likelihood = -34.498735  (not concave)
Iteration 10:  log likelihood = -34.417236  (not concave)
Iteration 11:  log likelihood = -27.688751  (not concave)
Iteration 12:  log likelihood = -25.901447
Iteration 13:  log likelihood = -25.853734  (not concave)
Iteration 14:  log likelihood = -24.131304  (not concave)
Iteration 15:  log likelihood = -20.304816
Iteration 16:  log likelihood = -19.016582
Iteration 17:  log likelihood = -15.744041
Iteration 18:  log likelihood = -15.722299
Iteration 19:  log likelihood = -15.722287
Iteration 20:  log likelihood = -15.722287

Random-effects logistic regression              Number of obs     =        188
Group variable: id_venture                      Number of groups  =         55

Random effects u_i ~ Gaussian                   Obs per group:
                                                              min =          2
                                                              avg =        3.4
                                                              max =          6

Integration method: mvaghermite                 Integration pts.  =         12

                                                Wald chi2(1)      =       0.67
Log likelihood  = -15.722287                    Prob > chi2       =     0.4144

-------------------------------------------------------------------------------------
            cur_fit |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
--------------------+----------------------------------------------------------------
           theo_fit |
                  0 |          0  (empty)
                  1 |          0  (omitted)
                    |
       1.share_info |   .7869919   .9642079     0.82   0.414    -1.102821    2.676805
                    |
theo_fit#share_info |
                0 0 |          0  (empty)
                0 1 |          0  (empty)
                1 1 |          0  (omitted)
                    |
              _cons |  -.4224731    .726772    -0.58   0.561     -1.84692    1.001974
--------------------+----------------------------------------------------------------
           /lnsig2u |   3.475415   .1946816                      3.093846    3.856984
--------------------+----------------------------------------------------------------
            sigma_u |   5.684297    .553314                      4.696996    6.879128
                rho |   .9075908   .0163279                      .8702305    .9349986
-------------------------------------------------------------------------------------
LR test of rho=0: chibar2(01) = 177.33                 Prob >= chibar2 = 0.000
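If I read the notes at the top of the output correctly: because theo_fit=1 is a prerequisite for cur_fit=1 (see the note under H2), 0.theo_fit predicts failure perfectly and is dropped along with 302 observations (490 - 302 = 188 remaining), and the remaining interaction terms are collinear. The model that is effectively estimated therefore seems to reduce to the effect of share_info within the theo_fit=1 subsample:
Code:
* equivalent to the estimable part of the interaction model above
xtlogit cur_fit i.share_info if theo_fit == 1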
Thank you very, very much in advance!