Thursday, March 31, 2022

generate a dummy variable based on whether the other dummy has the same or more ==1 than ==0 over the sample period

Hello

I have panel data with firm and year variables. I also have a dummy variable, litigation, which equals 1 if there are one or more litigations for a firm in a year and 0 if there is no litigation for the firm in that year. I want to create a new dummy variable, abovemedian_litigation, which equals 1 if the firm has litigation==1 in 50% or more of its sample years. How can I do this? Thanks a lot for any suggestions!
Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input float year double firm float litigation
2010  2 1
2011  2 1
2012  2 1
2013  2 1
2014  2 0
2015  2 0
2016  2 1
2017  2 1
2018  2 1
2019  2 1
2010  4 0
2011  4 0
2012  4 1
2013  4 1
2014  4 1
2015  4 0
2016  4 1
2017  4 0
2018  4 1
2019  4 1
2010  5 1
2011  5 1
2012  5 1
2013  5 1
2014  5 1
2015  5 0
2016  5 1
2017  5 1
2018  5 1
2019  5 1
2011  7 0
2012  7 0
2013  7 1
2014  7 0
2015  7 1
2016  7 0
2017  7 0
2018  7 0
2019  7 1
end
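
A minimal sketch, assuming each firm-year appears once: the firm-level mean of litigation is the share of years with litigation==1, so the new dummy is a threshold on that mean.

Code:
* share of sample years with litigation==1, per firm
bysort firm: egen litig_share = mean(litigation)
* 1 if litigation occurred in at least half of the firm's years
gen byte abovemedian_litigation = litig_share >= 0.5 if !missing(litig_share)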

Random sampling according to group in Stata

Hi everyone,

I have a question about how to randomly sample data in Stata according to specific groups. Below is my data structure:

I want to randomly sample 1/3 of the data according to the ids. For instance, if I randomly choose id 3, then I'll keep all the observations of id 3 (in my case, 4 observations).
Similarly, if I randomly choose id 1, then I want to keep all the observations of id 1.

Does anyone know how to realize this in Stata?


Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input byte id str9 date
1 "2022/1/2" 
1 "2022/1/8" 
1 "2022/1/9" 
1 "2022/1/12"
1 "2022/1/13"
1 "2022/1/14"
2 "2022/1/3" 
2 "2022/1/6" 
2 "2022/2/1" 
2 "2022/2/8" 
2 "2022/2/11"
3 "2022/2/6" 
3 "2022/2/9" 
3 "2022/2/13"
3 "2022/2/15"
end
Thanks a lot!
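
A minimal sketch: draw a random third of the ids into a temporary file, then keep only the observations whose id was drawn.

Code:
set seed 12345
preserve
keep id
duplicates drop                    // one row per id
sample 33.33                       // keep roughly 1/3 of the ids
tempfile picked
save `picked'
restore
merge m:1 id using `picked', keep(match) nogenerate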

Asreg

When I run the following code

webuse grunfeld, clear
bys company: asreg invest mvalue kstock, fmb newey(2)

I get the following results
[screenshot of the regression results]

If I do not run the regression by company

asreg invest mvalue kstock, fmb newey(2)

I get exactly the same results.
[screenshot of the regression results]

Does anyone know why these two different codes generate the same results?

Loop function

Hello,

I want to generate the following:
gen capm1 = rmrf if decile == 1
and this 10x times, basically until
gen capm10 = rmrf if decile == 10

Do I have to type this out 10 times myself, or can I use foreach? Is foreach possible with gen, given that the variable name has to change every time?

At the end I would also like to run 10 regressions, which is perhaps easier with foreach (see the sketch below):
reg decile1 capm1 through reg decile10 capm10
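
A minimal sketch with forvalues, assuming variables decile1-decile10 already exist alongside rmrf and decile:

Code:
forvalues i = 1/10 {
    gen capm`i' = rmrf if decile == `i'
    reg decile`i' capm`i'
}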

Logit odds ratio

I am using xtlogit and have a simple model that performs as we might wish. But the odds ratio for one x variable is 186.8, which seems impossible. All other odds ratios are in line with our intuition (1.02, 1.31, 1.01). What can explain this seeming outlier? More importantly, how do I interpret such a ratio?

Parameter estimates scaled by SD

Hello Forum,

Does anybody have an idea how to obtain parameter estimates scaled by the standard deviation of an underlying variable?

Best,
Keith
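
Two options, as a minimal sketch (shown with the auto data as a stand-in): regress with the beta option, which reports fully standardized coefficients, or standardize just the regressor with egen's std() so its coefficient measures a one-SD change.

Code:
sysuse auto, clear
reg price weight, beta            // standardized (beta) coefficients
egen weight_std = std(weight)     // (weight - mean)/sd
reg price weight_std              // coefficient = effect of a 1-SD change in weight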


grouping dimensions with borders

I want horizontal lines separating the levels of my first dimension. I can almost obtain this by adding the option "spacer". But spacer gives me blank lines instead of borders between the groups:

Code:
sysuse auto, clear
egen price_n4=cut(price), group(4)
table (rep78 foreign) (price_n4), stat(mean headroom) nototals
collect style row split, delimiter(`" # "') atdelimiter(`" @ "') bardelimiter(`" | "') binder(`"="')  dups(first) position(left) span spacer
collect preview
gives a preview (not reproduced here) of mean headroom by price_n4 within rep78 × foreign, in which the levels of rep78 are separated by blank spacer lines,
but I want borders, not blank lines, separating the levels of my first dimension.
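
One possible starting point, as a hedged sketch: ask collect to draw a top border on the cells tagged with each rep78 level, instead of using spacer. Whether this yields exactly one rule per group depends on how the cells are tagged in the collection.

Code:
sysuse auto, clear
egen price_n4=cut(price), group(4)
table (rep78 foreign) (price_n4), stat(mean headroom) nototals
* draw a border above each rep78 group rather than inserting blank lines
collect style cell rep78, border(top, pattern(single))
collect preview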

Thanks
Joakim

Wednesday, March 30, 2022

Tabulation based on multiple categorical variables

I am using Stata 17 and I'd like to produce a frequency table (one-way tabulate) based on multiple variables that have been manually encoded from an open survey question.
Since respondents have given multiple answers to that question, each person in the dataset has five variables named "encoding_t1" to "encoding_t5" that relate to that same open question. Therefore a value like 101, encoding for example "cultural related activities", or 102, encoding "religious related activities", can occur in any one of these variables; the same goes for every other value of the coding scheme.
The frequency table I'd like to produce should, however, not provide statistics (number of observations, percentages) on only one variable (e.g. encoding_t1), but on all of these variables together, since I would like to know how many respondents engage in cultural or religious activities overall.

Is there a way to produce a variable set where I can get statistics on responses from all these variables in one table? Or do I have to produce a dummy variable for every possible value from encoding_t1 to encoding_t5?
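
A minimal sketch of the stacking approach: reshape the five mention variables into one long variable and tabulate it. The counts are then mentions; to count each respondent at most once per code, drop duplicate respondent-code pairs first.

Code:
preserve
gen long respondent = _n
reshape long encoding_t, i(respondent) j(mention)
drop if missing(encoding_t)
tab encoding_t                                 // counts mentions of each code
duplicates drop respondent encoding_t, force
tab encoding_t                                 // counts respondents per code
restore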

Quarter Averages to Yearly Averages

Hello,

Code:
* Example generated by -dataex-. For more info, type help dataex
clear
input float(fundnr month year q_mf_expratio)
1  7 2011  1.305195
1  8 2011  1.305195
1  9 2011  1.305195
1 10 2011 1.3438524
1 11 2011 1.3438524
1 12 2011 1.3438524
1  1 2012 1.3517568
1  2 2012 1.3517568
1  3 2012 1.3517568
1  4 2012 1.3203474
1  5 2012 1.3203474
1  6 2012 1.3203474
1  7 2012  1.329235
1  8 2012  1.329235
1  9 2012  1.329235
1 10 2012 1.3242614
1 11 2012 1.3242614
1 12 2012 1.3242614
1  1 2013 1.1346366
1  2 2013 1.1346366
1  3 2013 1.1346366
1  4 2013 1.1041577
1  5 2013 1.1041577
1  6 2013 1.1041577
1  7 2013 1.1157745
1  8 2013 1.1157745
1  9 2013 1.1157745
1 10 2013 1.1379126
1 11 2013 1.1379126
1 12 2013 1.1379126
1  1 2014 1.0696377
1  2 2014 1.0696377
1  3 2014 1.0696377
1  4 2014 1.0903803
1  5 2014 1.0903803
1  6 2014 1.0903803
1  7 2014 1.0906675
1  8 2014 1.0906675
1  9 2014 1.0906675
1 10 2014  1.098166
1 11 2014  1.098166
1 12 2014  1.098166
1  1 2015 1.0920255
1  2 2015 1.0920255
1  3 2015 1.0920255
1  4 2015  1.193412
2  8 2011  .7761759
2  9 2011  .7761759
2 10 2011  .8770025
2 11 2011  .8770025
2 12 2011  .8770025
2  1 2012  .8935469
2  2 2012  .8935469
2  3 2012  .8935469
2  4 2012  .8624817
2  5 2012  .8624817
2  6 2012  .8624817
2  7 2012  .8760487
2  8 2012  .8760487
2  9 2012  .8760487
2 10 2012  .9519633
2 11 2012  .9519633
2 12 2012  .9519633
2  1 2013 1.0320818
2  2 2013 1.0320818
2  3 2013 1.0320818
2  4 2013 1.0079682
2  5 2013 1.0079682
2  6 2013 1.0079682
2  7 2013 1.0340407
2  8 2013 1.0340407
2  9 2013 1.0340407
2 10 2013  .9332104
2 11 2013  .9332104
2 12 2013  .9332104
2  1 2014  .8537141
2  2 2014  .8537141
2  3 2014  .8537141
2  4 2014  .8638967
2  5 2014  .8638967
2  6 2014  .8638967
2  7 2014  .8533909
2  8 2014  .8533909
2  9 2014  .8533909
2 10 2014  .8720222
2 11 2014  .8720222
2 12 2014  .8720222
2  1 2015  .8694636
2  2 2015  .8694636
2  3 2015  .8694636
2  4 2015  .8601169
2  5 2015  .8601169
2  6 2015  .8601169
2  7 2015  .8387364
2  8 2015  .8387364
2  9 2015  .8387364
2 10 2015  .8791451
2 11 2015  .8791451
2 12 2015  .8791451
2  1 2016  .8448672
end

q_mf_expratio stands for 'Quarter Average of Win. monthly expense ratio on fund level'. I am trying to get the average expense ratio in each year. Is it possible to go from quarterly averages to yearly averages for each fundnr in each year?
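
A minimal sketch: since each quarterly value is repeated across its three months, the mean over a fund's months in a year equals the mean over its quarters, so egen's mean() by fund and year does the job.

Code:
* note: for incomplete years this is a month-weighted average of the quarters
bysort fundnr year: egen y_mf_expratio = mean(q_mf_expratio)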

From monthly to yearly in %

Hello!

Code:
* Example generated by -dataex-. For more info, type help dataex
clear
input float(fundnr month year mf_m_cf)
1  7 2011          .
1  8 2011   81.92355
1  9 2011   476.8079
1 10 2011   21.52062
1 11 2011   2.582487
1 12 2011   6.664676
1  1 2012   8.380138
1  2 2012   1.357534
1  3 2012   8.550103
1  4 2012  -2.678535
1  5 2012   5.655555
1  6 2012   2.504373
1  7 2012 -.12116786
1  8 2012  4.5901546
1  9 2012    .600794
1 10 2012  .29563415
1 11 2012   3.328934
1 12 2012  1.4961462
1  1 2013  .54998076
1  2 2013   4.551462
1  3 2013   .6748788
1  4 2013   .4444814
1  5 2013 -.01265101
1  6 2013   .6499855
1  7 2013  1.6610664
1  8 2013   2.949057
1  9 2013   3.399674
1 10 2013   9.943465
1 11 2013   .2990036
1 12 2013  -.3797645
1  1 2014  -1.064931
1  2 2014  .24728173
1  3 2014   1.374272
1  4 2014  -3.390731
1  5 2014    .359465
1  6 2014  -.4076815
1  7 2014 -1.1045641
1  8 2014   -.697988
1  9 2014  -.8082402
1 10 2014 -.13452792
1 11 2014  1.3073466
1 12 2014 -2.0818634
1  1 2015  -4.356202
1  2 2015  2.8146536
1  3 2015  -.6765027
1  4 2015  -83.74078
2  8 2011          .
2  9 2011  4.5467353
2 10 2011   6.979911
2 11 2011  8.1631975
2 12 2011   .6553888
2  1 2012  1.0961826
2  2 2012  -2.891215
2  3 2012  1.5067998
2  4 2012   24.91592
2  5 2012  -.2980085
2  6 2012  -.4910712
2  7 2012 -.22777045
2  8 2012  -.9225484
2  9 2012  -.8890392
2 10 2012 -1.3467927
2 11 2012  -3.202692
2 12 2012 -2.0998995
2  1 2013   5.050189
2  2 2013   3.924864
2  3 2013  -1.263452
2  4 2013  -.5413366
2  5 2013  1.5253117
2  6 2013  -.9724789
2  7 2013 -3.5416684
2  8 2013  -.2780949
2  9 2013  -.4938647
2 10 2013   4.586828
2 11 2013  .23677163
2 12 2013  -1.385884
2  1 2014  .13616227
2  2 2014  -2.955069
2  3 2014 -1.5510585
2  4 2014 -1.4003643
2  5 2014  .27552602
2  6 2014  -.7300715
2  7 2014  -.8981757
2  8 2014 -1.3882908
2  9 2014 -1.1608492
2 10 2014   1.369086
2 11 2014   .7596431
2 12 2014 -4.2625575
2  1 2015  1.0341657
2  2 2015  -.0750414
2  3 2015 -.20450144
2  4 2015  -2.822578
2  5 2015  -.9178555
2  6 2015 -1.2965387
2  7 2015  -.3353655
2  8 2015   2.649211
2  9 2015 -.32353655
2 10 2015  -.0687631
2 11 2015  -.9628157
2 12 2015 -1.1600745
2  1 2016  -.8803672
end
For clarification: mf_m_cf stands for 'Monthly Flow in % following Sirri/Tufano' and I am trying to get the yearly flow in %. I don't think I can just add up the monthly flows in % for each fundnr in each year; is there perhaps a Stata function for this?
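
A hedged sketch, assuming the monthly flows compound multiplicatively (as growth rates do): convert each monthly flow to a gross factor, multiply the factors within fund-year via a sum of logs, and convert back to a percentage.

Code:
gen double gross = 1 + mf_m_cf/100
* product of gross factors = exp(sum of logs); months with missing flows are skipped
bysort fundnr year: egen double sumln = total(ln(gross))
gen double mf_y_cf = 100*(exp(sumln) - 1)
drop gross sumln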

Time series dummy variable

Hi all,

I am trying to make a time-series line plot for employment as a dummy variable (called "lfs", where 1 = "employed"; 0 = "not"), treating it as a continuous rate for the whole four-month period and for each month within it (i.e., if 59% of respondents are coded 1 "Employed" in February, 55% in March, etc., then the line plot shows those rates).

How can I create a variable which turns this into a rate (the proportion of people in the whole sample for whom lfs=1)? As well, can this variable then be used for a time-series line plot?

I've tried the code:

Code:
twoway (tsline lfs if male ==1)
but it gives me the unusable and odd output below:
[screenshot of the resulting line plot]

The x-axis seems right (if un-labelled), but I am stumped on the y-axis/how to treat "lfs" as a continuous outcome among the sample (and sub-samples).

Any insights would be most appreciated.

Thanks so much,
Alex
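
A minimal sketch, assuming a monthly date variable (called mdate here) identifies the period: collapse lfs to its mean by month, which is exactly the employment rate, then plot it.

Code:
preserve
collapse (mean) emp_rate = lfs if male == 1, by(mdate)   // mdate is an assumption
replace emp_rate = 100*emp_rate
twoway line emp_rate mdate, ytitle("% employed")
restore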

Ordinal independent variable treated as continuous & marginal effects in ordinal logistic regression

Hello,

I am an economics student doing an undergraduate empirical dissertation using data from the European Working Conditions Survey, where my DV is job satisfaction (ordinal, 4-point scale) and my two main predictor variables are recognition from employer (ordinal, 5-point likert scale: strongly disagree... strongly agree) and working from home (WFH).

The main issue I have is with my WFH variable. From the survey it is measured as follows: “during the last 12 months in your main paid job, how often you have worked in your own home?” 1=Daily; 2=Several times a week; 3=Several times a month; 4=Less often; 5=Never.

Due to the ordinal nature of my DV, I am conducting ordinal logistic regression, and of the two options for handling my ordinal predictors (treat as continuous or categorical), my supervisor advised that I treat them both as continuous. I am aware of the drawbacks of doing so (e.g. the underlying assumption of equally spaced intervals), but have been told that for the purposes of an undergrad dissertation this is OK, and that in much existing literature ordinal variables are treated as continuous in the same way.

My first question is when treating WFH as continuous, would I keep the original coding so it is just a continuous scale from 1 to 5, or is there a way to re-code it so it better approximates a continuous scale?

Secondly, I want to discuss marginal effects but this does not seem intuitive for my predictor variables that don't have an easily quantifiable "one-unit change".
For example if I had age as my continuous predictor variable, my interpretation would be "an increase in age by one year causes a beta change in the log-odds of reporting very satisfied with job."
But I am not sure of the equivalent of a one unit change in my WFH variable the way it is measured.
So how would I frame / quantify this marginal effect for my WFH variable? (and for that matter my recognition variable too?)

I would really appreciate any help/advice, thanks!

complex tag - I think

Hello, below is a case where a person has 2 rows of data and 2 different values for the variable lx_item_id_rc on the same date (dos_rc). I am trying to identify all participants in the dataset that have more than 1 value for lx_item_id on the same date. Suggestions are really appreciated.

n1 enc_id lx_item_id_rc lx_item lx_item_value class dos_rc
60 276429554 429 STAGE II (CHECK ALL THAT APPLY) General Conditioning 2) Stage 2 02may2017
70 276429554 428 STAGE 1 (CHECK ONE) Stabilization 1) Stage 1 02may2017
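
A minimal sketch, assuming enc_id identifies the participant (swap in the person identifier if it is different): count the distinct lx_item_id_rc values per person-date and flag those with more than one.

Code:
bysort enc_id dos_rc lx_item_id_rc: gen byte first = _n == 1
bysort enc_id dos_rc: egen n_items = total(first)   // distinct item ids that day
gen byte multi_item = n_items > 1
drop first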

Tuesday, March 29, 2022

No Observation Error with correlation

Hi everyone,
I'm very new to Stata and I am very confused by this error. I have created two variables to describe flights from different airports in Miami to different airports in Dallas. I then want to find the correlation between the prices of plane tickets to the same city but different airports. summarize works and I believe I have no missing values, so I have no idea what may be causing the error. I will paste in my code for reference:

pwcorr MiamitoDallasPrice1 MiamitoDallasPrice2
no observations
r(2000);
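
pwcorr drops each pair's rows with a missing value, so "no observations" means no row has both prices non-missing at once (for instance, if the two variables are filled on different rows). A minimal check:

Code:
* how many rows have both prices non-missing at the same time?
count if !missing(MiamitoDallasPrice1, MiamitoDallasPrice2)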

Lag length in panel data


Dear Statalisters,
I have an unbalanced panel and was looking for a way to determine the optimal lag length in panels.
I run a Fisher-type test (augmented Dickey-Fuller, ADF):
Code:
xtunitroot fisher varname, dfuller lags(0)
and it works for unit root test.

My question is: how do I get the optimal lag length in a panel?
I know how to do that in VARs. What about panels, though?
I will run a standard panel regression, if that is of interest.

Thank you.
Mario Ferri

Missing data

I am looking to analyze missing data, so I tried the commands mdesc and summarize, but they came back with no missing data. When I tried codebook, it came back with values of 98 and 99. These are not defined as missing data but rather as "don't know" and "prefer not to say." Is this correct? Is this why mdesc and summarize did not show them as missing? misstable only works if I conduct multiple imputation, and it does not list out the missing values. Please advise. Thank you.
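
That is consistent: 98 and 99 are ordinary numeric codes, so nothing treats them as missing until they are recoded. A minimal sketch with mvdecode, using a hypothetical variable myvar that carries these codes:

Code:
* map the survey codes to extended missing values
mvdecode myvar, mv(98=.a \ 99=.b)    // .a = don't know, .b = prefer not to say
mdesc myvar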

Importing multiple excel worksheets using a loop

Hello everyone,

I have a single Excel file with multiple worksheets, named Country 1, Country 2, Country 3, etc. I want to import all these worksheets and save each one as a data file using a loop; I can do it individually, but that makes my code very long. Also, how would I merge all these dta files into one file afterwards?

Thanks for the help!
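
A minimal sketch, assuming the workbook is named data.xlsx, holds sheets "Country 1" through "Country 10", and has variable names in the first row:

Code:
forvalues i = 1/10 {
    import excel using "data.xlsx", sheet("Country `i'") firstrow clear
    gen country = `i'                  // remember the source sheet
    save "country_`i'.dta", replace
}
* combine the saved files into one dataset
use "country_1.dta", clear
forvalues i = 2/10 {
    append using "country_`i'.dta"
}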

Error message when conducting Heckman correlation test

Dear all,

I am trying to use the Heckman correlation test to investigate the possibility of a selection bias in my data. I have panel data - when I run the command "xtheckman" using the code
Code:
 xtheckman wages training_hrs i.high_qual, select (training_hrs i.high_qual i.illness_disability i.sex i.children i.general_health i.region i.age i.sector)
an error message comes up stating "invalid specification of select(); only one selection variable is allowed r(198)". But when I run the same code with the "heckman" command, I get a nice table with results.

My question is: why can I not include more than one variable in xtheckman's select() option? What if I need all of these variables? Also, would it be wrong to run the code under a simple "heckman" command even though my data is panel data?

Thank you very much in advance!
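
For what it's worth, xtheckman expects the select() option to name a single binary selection outcome before an equals sign, followed by the selection regressors, which may explain the error. A hedged sketch of that syntax (the selection variable employed here is hypothetical):

Code:
* select() takes one selection outcome, then "=", then the selection regressors
xtheckman wages training_hrs i.high_qual, ///
    select(employed = training_hrs i.high_qual i.sex i.region)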

Seeking advice on how to accurately calculate marginal effects after multivariate probit regression

Dear statalist users,

Hope you all are doing well. I used Stata/MP 14.0 to run a multivariate probit model with three binary dependent variables/three equations. When I tried to obtain the marginal effects using the margins command, a warning showed up: "Warning: cannot perform check for estimable functions." Values of marginal effects were then produced, but they were identical to the coefficients from the multivariate probit estimation. Supposedly, marginal effects should be on the probability scale. I was wondering if Stata chose an inappropriate prediction function for the marginal effects. Does anyone know how to fix this problem? Thanks a lot in advance!

Here is a screenshot of the margins command and the associated results (variables were mosaiced):

[screenshot of the margins output]


All suggestions are welcome!

Best,
Lingling

Monday, March 28, 2022

Variable labels lost due to reshape

Hello,
I have a dataset (example below) that I reshape from wide to long. However, I lose the label information. How can I keep the variables' labels? Below are an example of the data and the commands that I use to reshape it. Thank you in advance.

Code:
* Example generated by -dataex-. For more info, type help dataex
clear
input long(pidp psu) int strata long(i_hidp j_hidp k_hidp) byte(cg_semp ce_semp ce_parent0plus ce_couple cg_couple)
   76165   19    6 141657616 141460418 141045620 -8  1 1 1 1
  280165   67   15 754793216 754371618 754113220  . -8 2 1 .
  469205  106   25 415059096 414738818 414412420  . -8 1 2 .
  732365  157   43 619371216 618949618 618698020 -8 -8 2 2 2
 1587125  215   65 618269616 617895618 617671220 -8  3 2 2 2
 4849085  560  148 347554816 347255618 346990420 -8  1 2 1 1
68002725    1    1  73025216  72739618  72420020 -8  4 2 2 2
68008847 2012 2006  68040816  68040818  68027220 -8  1 2 2 2
68010887 2012 2006  68054416  68054418  68040820 -8  1 2 1 1
68029931 2060 2030  68136016  68129218  68102020 -8  . . . 1
68031967 2060 2030  68142816  68136018  68108820  4  4 2 2 2
68035365   11    4  73045616  72760018  72433620 -8  4 2 2 2
68035367 2060 2030  68156416  68149618  68115620 -8  1 1 1 1
68041487 2084 2042  68176816  68170018  68142820 -8  1 2 1 1
68041491 2084 2042  68176816  68170018  68142820 -8  . . . 1
68045567 2084 2042  68210816  68204018  68177500 -8  1 2 2 2
68051007 2108 2054  68251616  68244818  68204020 -8  2 2 1 1
68051011 2108 2054  68251616  68244818  68204020 -8  1 2 1 1
68058487 2108 2054  68272016  68265218  68224420 -8  4 2 1 1
68058491 2108 2054  68272016  68265218  68224420 -8  4 2 1 1
68060531 2108 2054  68285616  68278818  68238020 -8  1 2 1 1
68060533   18    6  73059216  72773618        -9 -8  4 2 1 1
68060537   18    6  73059216  72773618        -9 -8  4 2 1 1
68061288 2012 2006  68047616  68047618  68034020  . -8 1 1 .
68063247 2132 2066  68292416  68285618  68244820 -8  1 2 1 1
68063927 2132 2066  68299216  68292418  68251620 -8  1 2 1 1
68063931 2132 2066  68299216  68292418  68251620 -8 -8 2 1 1
68064605   18    6  73066016  72780418  72454020 -8  4 2 1 1
68064609   18    6  73066016  72780418  72454020 -8  4 2 1 1
68068007 2132 2066  68326416  68312818  68272020 -8  1 2 1 1
68068011 2132 2066  68326416  68312818  68272020  .  1 2 2 .
68068082 2012 2006  68054416  68054418  68040820 -8  2 2 1 1
68097245   25    8  73072816  72787218  72460820 -8  4 2 2 2
68097927 2180 2090  68421616  68414818  68353620 -8  4 2 2 2
68112211 2228 2114  68462416  68448818  68380820  .  1 1 1 .
68120367 2228 2114  68476016  68462418  68394420 -8  4 2 2 2
68120375 2228 2114  68476696  68469218  68401220 -8  1 1 2 2
68125127 2252 2126  68516816  68503218  68428420 -8  1 2 1 1
68125131 2252 2126  68516816  68503218  68428420 -8  1 2 1 1
68125135 2252 2126  68516816  68503218  68428420 -8  1 2 2 2
68133285   34   11  73086416  72800818  72474420 -8  4 2 2 2
68133289   34   11  73093216  72807618  72481220 -8  4 1 1 1
68136009   34   11  73106816  72821218  72494820 -8  1 2 2 2
68137365   34   11  73113616  72828018  72501620 -8 -8 2 2 2
68138045   34   11  73120416  72834818  72508420 -8  4 2 1 1
68138049   34   11  73120416  72834818  72508420 -8  4 2 1 1
68138051 2276 2138  68550816  68544018  68469220 -8  4 2 1 1
68144847 2276 2138  68584816  68578018  68503220 -8  1 2 1 1
68144851 2276 2138  68584816  68578018  68503220 -8  1 2 1 1
68148247 2300 2150  68591616  68584818  68510020 -8 -8 2 1 1
68148251 2300 2150  68591616  68584818  68510020  .  4 2 1 .
68150967 2300 2150  68598416  68591618  68516820  .  1 2 1 .
68150971 2300 2150  68598416  68591618  68516820 -8  1 2 1 1
68150975 2300 2150  68598416  68591618  68516820 -8  1 2 2 2
68155047 2300 2150  68618816  68605218  68530420  4 -8 2 1 1
68155051 2300 2150  68618816  68605218  68530420  4  1 2 1 1
68157771 2300 2150  68666416  68632418  68557620  2 -8 2 2 2
68159131 2300 2150  68673216  68639218  68564420 -8  1 2 1 1
68160485   39   11  73127216  72841618  72515220 -8  1 2 2 2
68160489   39   11  73134016  72848418  72522020 -8  1 2 2 2
68173407 2348 2174  68707216  68673218  68598420 -8  4 2 2 1
68174767 2348 2174  68720816  68686818  68612020  .  4 1 1 .
68180887 2348 2174  68754816  68714018  68632420 -8  1 2 1 1
68180891 2348 2174  68754816  68714018  68632420  .  1 2 1 .
68184971 2372 2186  68761616  68720818  68639220 -8  . . . 1
68185647 2372 2186  68768416  68727618  68646020 -8  4 2 2 2
68187687 2372 2186  68775216  68734418  68659620 -8  4 2 1 1
68187691 2372 2186  68775216  68734418  68659620 -8  4 2 1 1
68191771 2372 2186  68809216  68775218        -9 -8  1 2 2 2
68193127 2372 2186  68816016  68782018  68707220 -8  4 2 2 2
68195167 2372 2186  68822816  68795618  68720820 -8  4 2 1 1
68195171 2372 2186  68822816  68795618  68720820 -8  4 2 1 1
68195851 2372 2186  68829616  68802418  68727620 -8  1 2 1 1
68197211 2396 2198  68836416  68809218  68734420  .  1 2 1 .
68197887 2396 2198  68843216  68816018  68741220 -8  1 2 2 2
68197899 2396 2198  68843216  68817378  68754820 -8 -8 2 2 2
68197903 2396 2198  68843216  68816018  68741220 -8 -8 2 2 2
68199247 2396 2198  76506816  76153218  75595620 -8  1 1 1 1
68207407 2396 2198  68863616  68836418  68775220 -8  4 2 1 1
68207411 2396 2198  68863616  68836418  68775220 -8  4 2 1 1
68211487 2420 2210  68877216  68850018  68788820 -8  4 2 2 2
68214207 2420 2210  68890816  68863618  68802420 -8  1 2 2 2
68214887 2420 2210  68897616  68870418  68809220  . -8 2 1 .
68214891 2420 2210  68897616  68870418  68809220  .  1 2 1 .
68216247 2420 2210  68904416  68877218  68816020 -8  1 2 1 1
68218287 2420 2210  68911216  68884018  68822820 -8 -8 2 2 2
68230527 2444 2222  68979216  68965618  68897620  . -8 2 2 .
68231223 2444 2222  68986016  68972418  68904420 -8  4 2 2 2
68238011 2468 2234  69006416  68986018  68911220 -8  1 2 1 1
68262487 2516 2258  69081216  69047218  68965620 -8  2 2 1 1
68266567 2516 2258  69101616  69067618  68986020 -8  4 2 2 2
68278127 2540 2270  69142416  69108418  69020020 -8  4 2 2 2
68288327 2564 2282  69162816  69128818  69033620 -8  1 2 1 1
68288331 2564 2282  69162816  69128818  69033620 -8  1 2 1 1
68291731 2564 2282  69169616  69135618  69040420 -8  4 2 2 2
68293087 2564 2282  69176416  69142418  69047220 -8  4 2 1 1
68293091 2564 2282  69176416  69142418  69047220 -8  1 2 1 1
68293095 2564 2282  69176416  69143098  69054020 -8  1 1 1 1
68293099 2564 2282  69176416  69142418  69047220 -8  1 1 1 1
68293168 2108 2054  68278816  68272018        -9 -8  4 2 2 2
end
label values psu psu
label values strata strata
label values cg_semp cg_semp
label def cg_semp -8 "inapplicable", modify
label def cg_semp 2 "Yes, self-employed only", modify
label def cg_semp 4 "No", modify
label values ce_semp ce_semp
label def ce_semp -8 "inapplicable", modify
label def ce_semp 1 "Yes, employed only", modify
label def ce_semp 2 "Yes, self-employed only", modify
label def ce_semp 3 "Both employed and self-employed", modify
label def ce_semp 4 "No", modify
label values ce_parent0plus ce_parent0plus
label def ce_parent0plus 1 "Yes", modify
label def ce_parent0plus 2 "No", modify
label values ce_couple ce_couple
label def ce_couple 1 "Yes", modify
label def ce_couple 2 "No", modify
label values cg_couple cg_couple
label def cg_couple 1 "Yes", modify
label def cg_couple 2 "No", modify

ds ce_* cg_*
local stublist `r(varlist)'
local stublist: subinstr local stublist "ce_" "@", all
local stublist: subinstr local stublist "cg_" "@", all
local stublist: list uniq stublist

reshape long `stublist', i(pidp) j(_j) string

foreach s of local stublist {
local t = strtoname(substr(`"`s'"', 2, .))
local stublist: subinstr local stublist `"`s'"' `"@`t'"'
}
drop if psu==.

gen int date = cond(_j == "cg_", tm(2021m1), tm(2020m9))
format date %tmMonth_CCYY

drop _j
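
The label definitions themselves survive the reshape (they remain stored in the dataset); what is lost is their attachment to the newly created long variables. A hedged sketch of one workaround: record each wide variable's labels first, then reattach after reshaping (the long-variable names below are illustrative and depend on how the stubs resolve).

Code:
* before the reshape: remember each wide variable's labels
foreach v of varlist ce_* cg_* {
    local vl_`v' : value label `v'
    local lb_`v' : variable label `v'
}
* ... run the reshape above, then reattach, for example:
label values semp `vl_ce_semp'            // hypothetical long-variable name
label variable semp "`lb_ce_semp'"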

Pearson's Chi-Squared Test - Requesting help with tab2

I am currently working on an extension of research that requires me to measure whether the allocation of patents to examiners is random. I have sorted the data in order to conduct a chi-squared test on the last digit, comparing observed counts to the expected number (the total number of patent applications divided by 10). But I do not know how to proceed from there. I have tried the tabulate command (tabulate expected_no sumoflastdigit, chi2) with many variations, but none of it seems right. There are 1370 observations in my data, so for some of the variable combinations I've tried, Stata also reported that there are too many values. Below is a screen capture of part of my data.
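
tabulate's chi2 option tests independence in a two-way table, which is not the goodness-of-fit test needed here. A minimal sketch of the Pearson goodness-of-fit statistic by hand, assuming one row per digit (0-9) with observed counts in sumoflastdigit and expected counts in expected_no:

Code:
* Pearson chi-squared: sum of (observed - expected)^2/expected, df = 10 - 1
gen double cellchi2 = (sumoflastdigit - expected_no)^2 / expected_no
quietly summarize cellchi2
display "chi2(9) = " r(sum) "   p-value = " chi2tail(9, r(sum))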

Merging multiple .tex files: How to sort the individual files?

I am outputting a lot of tex tables as raw files and then merging them into one ordered tex document using an automated procedure. The code below works: it stores all files in the folder in the local `files' and then loops over these files in the middle of a file write.

The problem is that the tables are not in the order I want, even though they are named quite simply table_1, table_2, etc. The order seems to be somewhat random, as it changes between runs.

Is either of these two options (or another) feasible?
1. Sort the files in the folder from within Stata (presuming the local dir adheres to that order)?
2. Sort the elements within the `files' local after creating it?

Here is code of the second step. To use this you would need some .tex files in a folder defined by $output.

Code:
cap erase "$output/all_tables.tex"
local files : dir "$output" files "*.tex", respectcase

cap file close myfile
file open myfile using "$output/all_tables.tex", write replace
file write myfile ///
    "\documentclass{article}"    _n ///
    "\textwidth=14cm"            _n ///
    "\begin{document}"           _n
foreach file in `files' {
    file write myfile ///
        "\begin{table}[ht!]"     _n ///
        "\centering"             _n ///
        "\input{`file'}"         _n ///
        "\end{table}"            _n ///
        "\clearpage"             _n
}
file write myfile ///
    "\end{document}"             _n
file close myfile
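
Option 2 is a one-liner with the list sort macro function, placed right after the local is created so the loop writes the files in sorted order. Note this is a plain string sort, so table_10 sorts before table_2 unless the numbers are zero-padded.

Code:
* sort the file names inside the local before looping over them
local files : list sort files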

scatterplot

I would like to do several scatterplots where I plot the correlation between mortality and covid cases, identifying treatment and control by different colors; this is to be done for each month. I don't know if I have done it the best way, but using the following code I get a nice and neat scatterplot. However, I have to do it separately, in four figures. What I now want to do is to merge all of them into one big figure with several smaller plots. Does anybody know the best way to do this?

twoway (scatter mortality_1000_treated new_cases_1000indiv ) (scatter mortality_1000_nontreated new_cases_1000indiv) if time== tm(2020m3)
twoway (scatter mortality_1000_treated new_cases_1000indiv ) (scatter mortality_1000_nontreated new_cases_1000indiv) if time== tm(2020m4)
twoway (scatter mortality_1000_treated new_cases_1000indiv ) (scatter mortality_1000_nontreated new_cases_1000indiv) if time== tm(2020m5)

/David
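
A minimal sketch: name each monthly graph, then stitch them together with graph combine.

Code:
forvalues m = 3/6 {
    twoway (scatter mortality_1000_treated new_cases_1000indiv) ///
           (scatter mortality_1000_nontreated new_cases_1000indiv) ///
           if time == tm(2020m`m'), name(g`m', replace) title("2020m`m'")
}
graph combine g3 g4 g5 g6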


Missing Time-Series Data

Hi Statalist,

I have a time series dataset of ozone data for multiple different counties. It has quite a bit of missing data, sometimes several observations in a row. I want to fill in the missing data using the average of the nearest non-missing observation prior and the nearest non-missing observation afterwards in that county.
County Week Ozone
Milwaukee 532 0.148
Milwaukee 533 .
Milwaukee 534 .
Milwaukee 535 0.564
Waukesha 127 0.185
Waukesha 128 .

In this data, I would want both week 533 and week 534 in Milwaukee County to be 0.356 (the average of 0.148 and 0.564). I have tried lots of different approaches, but they either cannot be used with "by" (necessary because it must be done by county) or they include other missing observations in the average instead of averaging the two nearest non-missing values. Any thoughts on code that would work here?

Thanks!
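
Note that ipolate would give a distance-weighted (linear) interpolation rather than the simple average described. A minimal sketch of the simple average, assuming variables county, week and ozone: carry the nearest prior value forward, carry the nearest later value backward, and average the two.

Code:
bysort county (week): gen double prior = ozone
by county: replace prior = prior[_n-1] if missing(prior)
gsort county -week
by county: gen double next = ozone
by county: replace next = next[_n-1] if missing(next)
sort county week
* gaps with no value on one side (e.g. Waukesha week 128) stay missing
replace ozone = (prior + next)/2 if missing(ozone)
drop prior next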

LSDVC method

After running regressions with the LSDVC method, Stata reports bootstrapped standard errors. I want to know: what is the difference between bootstrapped standard errors and ordinary standard errors?

Ytitle with euro sign does not display horizontally

Despite putting angle(0), the title will not be displayed horizontally. Since this only happens for the title but not the label, I presume it is because I have a special character in there. But how do I change that?

MWE

Code:
clear
input year cost 
2006 36 
2007 54
2008 71
2009 96
2010 11
2011 11 
2012 16 
2013 19 
2014 21
2015 26
2016 30
2017 36 
end

twoway line cost year, title(Title) xtitle("") ytitle("Mio. {c 0128}", angle(0)) ylabel(, angle(0))

Generating random variables based on some predefined correlations

Dear Stata Members
I would like to ask a few questions and clear up some doubts regarding regressions, but for that I need to create some random variables with pre-defined correlations. So please help me create some panel data with four variables (v1, v2, v3, v4) that have the following correlations:
1. v1 and v2: zero correlation
2. v2 and v3: some positive correlation (.50 to .60)
3. v2 and v4: some negative correlation (-.50 to -.60)
4. v3 and v4: some low correlation (.05 to .12)
150-200 observations will be fine, and if a panel is not easy, then let these variables be time series.

Did I make sense in asking the question? These artificially created variables can help me test some assumptions of OLS. I tried Excel but couldn't meet these requirements.
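
A minimal sketch with drawnorm: specify a target correlation matrix (here .55, -.55 and .08 as concrete values within the requested ranges; the matrix is positive definite) and draw 200 observations from the corresponding multivariate normal.

Code:
clear
set seed 2022
matrix C = (1, 0, 0, 0 \ ///
            0, 1, .55, -.55 \ ///
            0, .55, 1, .08 \ ///
            0, -.55, .08, 1)
drawnorm v1 v2 v3 v4, n(200) corr(C)
correlate v1 v2 v3 v4   // sample correlations will be close to, not exactly, C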

Export Word putdocx

Hi

I have not had Stata for long and am trying to create a Word export from Stata.
Unfortunately, Stata rounds my data up or down when exporting.

In the Stata browse, the number is stored as follows:
2.501e+09
Type: double, Format: %10.0g

My syntax is:
Code:
putdocx table Table1(1,1) = ("sample number")
putdocx table Table1(1,2) = (samplenr), nformat(%-16.0g)
The output in Word is 2501500000 instead of 2501466707.

As soon as the number has fewer digits, it works fine and the number is not rounded - but it is then stored in Stata as long, %12.0g.

Does anyone on the forum have experience with putdocx commands and would be kind enough to help me? I'm using Stata version 17 and Microsoft 365. Thanks in advance. :-)
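
A hedged first check, since a general (%g) format shows only a limited number of significant digits while a fixed format does not: confirm the variable really is stored as double, then use a fixed format both for the display format and in nformat().

Code:
describe samplenr                 // if it was ever stored as float, precision is already lost
format samplenr %16.0f
putdocx table Table1(1,2) = (samplenr), nformat(%16.0f)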

Sunday, March 27, 2022

What should the command be?

I would like to ask about the exact command.
Assume y_t = (infla_t, unrate_t, hwages_t)' is a (3×1) vector. In order to estimate the order p of a VAR(p) model for y_t, using a constant for the deterministic component, what should the command be?
I suppose it is the varsoc command.
The data are like this (until 2021):
date infla unrate hwages
1961-01-01 1.600272387 6.6 1.401869159
1961-02-01 1.462087725 6.9 1.401869159
1961-03-01 1.462087725 6.9 1.401869159
1961-04-01 0.914014895 7.0 1.869158879
1961-05-01 0.913087589 7.1 2.803738318
1961-06-01 0.776764607 6.9 2.803738318
1961-07-01 1.252115059 7.0 2.803738318
1961-08-01 1.114488349 6.6 2.34741784
1961-09-01 1.249577845 6.7 2.325581395
1961-10-01 0.773109244 6.5 3.255813953
1961-11-01 0.671591672 6.1 3.720930233
1961-12-01 0.6709158 6.0 4.147465438
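
A minimal sketch, assuming the string dates first need converting to a monthly Stata date; varsoc includes a constant by default and reports the lag-order selection statistics (LR, FPE, AIC, HQIC, SBIC).

Code:
gen mdate = mofd(date(date, "YMD"))
format mdate %tm
tsset mdate
varsoc infla unrate hwages, maxlag(8)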

How can I recode the variable in this long format data set with Stata?

I want to create a new variable "signal" in this long-format data set. The rules are set out below:
if all status values equal 1 within id, then signal==NR
if all status values equal 0 within id, then signal==1
if the first 0 occurs after several consecutive 1s within id, then signal==(number of consecutive status values equal to 1)+1
For example, for id==1, signal = 3+1 = 4.
Code:
*Simulated data for illustrative purposes.
clear
input byte (id status)
1 1
1 1
1 1
1 0
1 0
1 0
1 0
1 0
1 0
2 0
2 0
2 0
2 0
2 0
3 1
3 1
3 1
3 1
3 1
3 1
3 1
4 1
4 0
4 0
4 0
end

Thank you for your help!
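
A minimal sketch: signal is the position of the first 0 within each id. All-0 ids then get 1 automatically, and all-1 ids have no 0, so they come out missing, which stands in for the NR case. Row order is preserved with an explicit counter.

Code:
gen long obsno = _n
bysort id (obsno): gen pos_first0 = _n if status == 0
bysort id (obsno): egen signal = min(pos_first0)
* check: id 1 -> 4, id 2 -> 1, id 3 -> . (the NR case), id 4 -> 2
drop pos_first0 obsno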

Wooldridge Introductory Econometrics 6th Edition CH13 C16 Qvi and Qvii.

Hi, I am currently solving a question from Wooldridge; the question is as follows:
[screenshot of the textbook question]
The link for the data is:

https://s2.smu.edu/tfomby/eco5350/wo...ntymurders.dta

I am having difficulty solving questions vi and vii. Thank you for all your comments!

Best,

Jacky

Create a Dummy indicating country pair has appeared previously in different variable

Code:
* Example generated by -dataex-. For more info, type help dataex
clear
input float(Year country_pair2 BIT_pair)
1993 4447  1
1999 2507  2
2008  583  3
1994 6339  4
1993 2171  5
2010 6813  6
1994 6917  7
1995 2198  8
1993 2198  9
1997 6687 10
1995 2196 11
1991 5720 12
1991 4500 13
1996 2546 14
2002 2783 15
1996  555 16
1991 2198 17
2003 2198 18
2007 6114 19
2007 1431 20
end
label values country_pair2 country_pair2
label def country_pair2 555 "Belgium Germany", modify
label def country_pair2 583 "Belgium Netherlands", modify
label def country_pair2 1431 "Colombia Colombia", modify
label def country_pair2 2171 "Germany Brazil", modify
label def country_pair2 2196 "Germany France", modify
label def country_pair2 2198 "Germany Germany", modify
label def country_pair2 2507 "Hong Kong Australia", modify
label def country_pair2 2546 "Hong Kong Hong Kong", modify
label def country_pair2 2783 "India India", modify
label def country_pair2 4447 "Netherlands France", modify
label def country_pair2 4500 "Netherlands Netherlands", modify
label def country_pair2 5720 "South Africa South Africa", modify
label def country_pair2 6114 "Sweden Poland", modify
label def country_pair2 6339 "Taiwan United States", modify
label def country_pair2 6687 "United Kingdom India", modify
label def country_pair2 6813 "United States Australia", modify
label def country_pair2 6917 "United States Mexico", modify
label values BIT_pair BIT_pair
label def BIT_pair 1 "Albania  Austria", modify
label def BIT_pair 2 "Albania  BLEU (Belgium-Luxembourg Economic Union)", modify
label def BIT_pair 3 "Albania  Bosnia and Herzegovina", modify
label def BIT_pair 4 "Albania  Bulgaria", modify
label def BIT_pair 5 "Albania  China", modify
label def BIT_pair 6 "Albania  Cyprus", modify
label def BIT_pair 7 "Albania  Czechia", modify
label def BIT_pair 8 "Albania  Denmark", modify
label def BIT_pair 9 "Albania  Egypt", modify
label def BIT_pair 10 "Albania  Finland", modify
label def BIT_pair 11 "Albania  France", modify
label def BIT_pair 12 "Albania  Germany", modify
label def BIT_pair 13 "Albania  Greece", modify
label def BIT_pair 14 "Albania  Hungary", modify
label def BIT_pair 15 "Albania  Iran", modify
label def BIT_pair 16 "Albania  Israel", modify
label def BIT_pair 17 "Albania  Italy", modify
label def BIT_pair 18 "Albania  Korea", modify
label def BIT_pair 19 "Albania  Kuwait", modify
label def BIT_pair 20 "Albania  Lithuania", modify
I'm trying to create a dummy = 1 if a country pair has appeared at a previous date in a separate variable list (indicating a treaty signing), and 0 otherwise.
Any help is greatly appreciated, and apologies if the dataex is not provided very well.
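
A hedged sketch of one approach: compare the two encoded variables through their label strings (decode), compute each BIT pair's earliest signing year, and flag country-pair observations dated after that year. The label strings must match exactly, so extra internal spaces are collapsed with itrim first; note too that this assumes both variables list the countries of a pair in the same order.

Code:
decode country_pair2, gen(pair_str)
decode BIT_pair, gen(bit_str)
replace pair_str = itrim(trim(pair_str))
replace bit_str  = itrim(trim(bit_str))
preserve
keep Year bit_str
drop if missing(bit_str)
collapse (min) first_bit_year = Year, by(bit_str)   // earliest signing per pair
rename bit_str pair_str
tempfile bits
save `bits'
restore
merge m:1 pair_str using `bits', keep(master match) nogenerate
gen byte prior_treaty = !missing(first_bit_year) & first_bit_year < Year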

Taking strings out of file names

I scraped together some xls files using the almighty Python. Before I can work with these files I want to rename them, because this code
Code:
foreach f of loc files {
    
import delim "`f'", clear

sa "`f'", replace
}
ruins the files.

This code gives me

Code:
cls


local files : dir "`c(pwd)'" files "*Dados*"

foreach f of loc files {
    
di "`f'"

}
dadosbo_2018_1(homicÍdio doloso).xls
dadosbo_2018_10(homicÍdio doloso).xls
dadosbo_2018_11(homicÍdio doloso).xls
dadosbo_2018_12(homicÍdio doloso).xls
dadosbo_2018_2(homicÍdio doloso).xls
dadosbo_2018_3(homicÍdio doloso).xls
dadosbo_2018_4(homicÍdio doloso).xls
dadosbo_2018_5(homicÍdio doloso).xls
dadosbo_2018_6(homicÍdio doloso).xls
dadosbo_2018_7(homicÍdio doloso).xls
dadosbo_2018_8(homicÍdio doloso).xls
dadosbo_2018_9(homicÍdio doloso).xls
What I want is to mass-rename all of these by removing "dadosbo_" and "(homicÍdio doloso)" from the file names. I know how to rename them individually, but how can I remove just these strings from the file names?
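
A minimal sketch: build each new name with the subinstr macro function, then copy to the new name and erase the original (copy-then-erase avoids shelling out to the operating system).

Code:
local files : dir "`c(pwd)'" files "*Dados*"
foreach f of loc files {
    local new : subinstr local f "dadosbo_" "", all
    local new : subinstr local new "(homicÍdio doloso)" "", all
    copy "`f'" "`new'"
    erase "`f'"
}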

Saturday, March 26, 2022

Amazon McKinsey 7S Model

Amazon McKinsey 7S model illustrates the ways in which seven key elements of businesses can be united to increase effectiveness. According to this model, strategy, structure and systems represent hard elements, whereas shared values, skills, style and staff are soft elements. McKinsey 7S model stresses the presence of strong links between elements: a change in one element causes changes in others. As illustrated in the figure below, shared values are positioned at the core of Amazon McKinsey 7S model, since shared values guide employee behaviour with implications for their performance.

[figure: McKinsey 7S model]

Hard Elements in Amazon McKinsey 7S Model

Strategy

Amazon adheres to a cost leadership business strategy. The three main pillars of Amazon business strategy are competitive prices, great range and speed of delivery. The largest internet retailer in the world has been able to sustain this strategy thanks to economies of scale, innovation of various business processes and regular business diversification. Moreover, Amazon business strategy places a great emphasis on encouraging communication among the various components of its ecosystem: merchants, writers, reviewers, publishers, apps developers, and the information market of commentators, analysts, journalists and feature writers. Additionally, customer obsession and focus on Amazon leadership values represent important cornerstones of Amazon business strategy.

Structure

Amazon organizational structure is hierarchical. It is difficult for the company to adopt an alternative structure such as divisional or matrix due to its gigantic size. Specifically, the e-commerce giant employs approximately 1.3 million people who serve hundreds of millions of customers worldwide.[1] The key features of Amazon corporate structure include flexibility of the business, which is unusual for a company of such a big size, and stability in the top management, i.e. little turnover in the senior management team. Reliance on hybrid project…

Imputing missing state-years using average of surrounding years

Hello,

I have a state-year panel dataset, which is mostly balanced except for one missing year in Wisconsin (1998) and two missing years in Maine (1991 and 1992). For a set of 3 outcomes, I'm hoping to replace Wisconsin's missing values with the average of that state's values in 1997 and 1999, and Maine's missing values for both years 1991 and 1992 with the average of its values in 1990 and 1993. I have used the code below for the case of Wisconsin when there are non-missing values immediately surrounding one year of missing values:

Code:
bysort state (year) : replace `v' = (`v'[_n-1] + `v'[_n+1])/2 if missing(`v')
But that code does not work in the case of Maine, since each missing year has another missing value adjacent to it. I'd like to write out code like the following, but it does not work. Can anyone suggest more flexible code than the attempt below (see also the sketch after this post), or offer a modification to the code above for Maine's missing 1991 and 1992 values?


Code:
foreach v in outcome1 outcome2 outcome3 {
    bysort state (year) : replace `v' = (`v'[1997] + `v'[1999])/2 if state=="wisconsin" & year==1998
    bysort state (year) : replace `v' = (`v'[1990] + `v'[1993])/2 if state=="maine" & year==1991
    bysort state (year) : replace `v' = (`v'[1990] + `v'[1993])/2 if state=="maine" & year==1992
}
Thank you very much for your time!

Tom
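
A minimal sketch that handles runs of missing years of any length (note that subscripts like `v'[1997] index observation numbers, not years, which is why the attempt above fails): carry the nearest prior value forward and the nearest later value backward within state, then average the two.

Code:
foreach v in outcome1 outcome2 outcome3 {
    bysort state (year): gen double prior = `v'
    by state: replace prior = prior[_n-1] if missing(prior)
    gsort state -year
    by state: gen double next = `v'
    by state: replace next = next[_n-1] if missing(next)
    sort state year
    replace `v' = (prior + next)/2 if missing(`v')
    drop prior next
}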

How can I use pweight by hand?

Dear statalists:
I want to use micro sampling data in R, but the R package I use (systemfit) doesn't support sampling weights (in Stata, we can use pweight), so I need to transform the data in advance. My question is how to handle the constant in Stata.
In Stata, if we want to use micro sampling data for OLS, we can use this code:
Code:
reg y x [pweight = weight]

*If we don't have constant,this order will be:

reg y x [pweight = weight],nocons

*and we can do it by hand; the next three lines are equivalent to "reg y x [pweight = weight], nocons"

replace y = y*((weight)^0.5)
replace x = x*((weight)^0.5)
reg y x, nocons
But the constant can't be omitted from my regression. I wonder if you can tell me how to apply pweight by hand with a constant in Stata.
Many thanks
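
A minimal sketch: with a constant, the intercept column must be transformed as well, so include sqrt(weight) itself as a regressor and suppress Stata's own constant; its coefficient is the weighted intercept. Point estimates then match the pweight regression, though pweight's robust standard errors will still differ.

Code:
gen double sw  = sqrt(weight)
gen double y_w = y*sw
gen double x_w = x*sw
reg y_w x_w sw, nocons    // coefficient on sw = the weighted intercept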

All Dates on Data Editor Coming Up as .

Hello!

Currently new to operating Stata. My Data Editor does not show any dates despite my having entered dates in REDCap; all dates appear as a period (.). Is there a way I can fix this so that it shows the actual date I entered, e.g. for dob?

Thank you!
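
A hedged first check: if the dates were imported as strings, they need converting to Stata dates before they display. A sketch assuming a string variable dob with values like "2001-05-14" (adjust the mask to the actual format):

Code:
gen dob_num = date(dob, "YMD")
format dob_num %td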


Estimating the modified Jones model by industry and year using panel data

Hi all. I'm studying the relationship between restatements and firms' earnings management behaviour, using discretionary accruals (DA) to capture such behaviour. The sample period is 2010-2019. Say a firm restated in 2015: I'm interested in its DA in 2013, 2014, 2016 and 2017 (two years before and two years after). Data in t-3 (2012 in this case) and t (2015) are also included, as this is panel data. I have all the data transformed into the delta form; it is just the last step that I can never get right. I attached a file in which all the data are processed and ready for the last step.

below is the code I first used:

Code:
gen Jones=.
forval y = 2010 (1) 2019{
forval i = 1 (1) 136{
display `i'
display `y'
reg scaled_TACC inverse_lagTA term2 scaled_PPE if `i' == Industryid&`y'== YEAR, noconstant
predict r if `i'== Industryid&`y'== YEAR, resid
replace Jones=r if `i'== Industryid&`y'== YEAR
drop r
}
}
But this gives me nothing, and Stata keeps showing error r(451). Then I revised it, wrapping the loop body in capture noisily:

Code:
gen Jones=.
forval y = 2010(1)2019{
   forval i = 1(1) 136{
      display `i'
      display `y'
      capture noisily{
         reg scaled_TACC inverse_lagTA term2 scaled_PPE if `i'== Industryid & `y'== YEAR, noconstant
         predict r if `i'== Industryid & `y'== YEAR, resid
         replace Jones_2005_TAC2=r if `i'== Industryid & `y'== YEAR 
         drop r
      }
   }
}
Then the code starts running but, still, I have quite a lot of missing values in my result. Really hope someone could help me. Thanks in advance!!
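
A hedged sketch of a more defensive version of the same loop, assuming the variable names from the post: loop only over industry-year cells that exist, and skip cells with too few complete observations (a common cause of failed cell regressions and missing residuals).

Code:
egen long cell = group(Industryid YEAR)
gen Jones = .
quietly levelsof cell, local(cells)
foreach c of local cells {
    quietly count if cell == `c' & !missing(scaled_TACC, inverse_lagTA, term2, scaled_PPE)
    if r(N) < 10 continue     // minimum cell size is an assumption; adjust as needed
    quietly reg scaled_TACC inverse_lagTA term2 scaled_PPE if cell == `c', noconstant
    quietly predict double res if e(sample), resid
    quietly replace Jones = res if cell == `c'
    drop res
}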

Combined new variable: code MV from other variables

I would like to create a variable coded 1 if a certain threshold is met in any of several other indicators, and 0 only if the threshold is not met in any of them. In my case, I'm trying to assess the "objective" interest of a country in migration: whether the share of immigrants OR the share of emigrants OR the share of remittances/GDP is beyond my thresholds.

I did the following:

Code:
gen miginterest1 = 1 if immi_share >= 0.15
replace miginterest1 = 1 if emi_share >= 0.15
replace miginterest1 = 1 if remit_share >= 15

*Since I needed the rows that didn't meet my threshold to be coded as 0, I did:

Code:
mvencode miginterest1, mv(0)

* The problem is that this also codes countries with missing information as 0, which it shouldn't. I tried recode, if, and replace if, but it didn't work. It's probably a straightforward syntax issue, but any help would be appreciated! Thanks!
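
A minimal sketch with rowmax: build one indicator per share that is missing when the share is missing (note that in Stata missing counts as larger than any number, so a bare condition like immi_share>=0.15 would also set the dummy to 1 for missing shares), then take the row maximum, which ignores missings and is missing only when all three are.

Code:
gen byte hit1 = immi_share  >= 0.15 if !missing(immi_share)
gen byte hit2 = emi_share   >= 0.15 if !missing(emi_share)
gen byte hit3 = remit_share >= 15   if !missing(remit_share)
egen byte miginterest1 = rowmax(hit1 hit2 hit3)
drop hit1 hit2 hit3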

Singleton dummy variable (possible problem)

Dear all,

I am running a couple of regressions with country and year fixed effects. But in some regressions, when I include country dummies, the F statistic goes missing (the problem is likely a singleton dummy somewhere, because the sample size is large enough for the degrees of freedom). Countries in the dataset are represented as numbers. Any suggestions on how to solve this problem and locate the singleton dummy? Thank you! (I did not include a tabulation of countries because of the character limit.)
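
A minimal sketch for locating the singletons, run immediately after the regression: count estimation-sample observations per country and list the countries contributing only one.

Code:
gen byte insample = e(sample)
bysort country1: egen n_in_sample = total(insample)
tab country1 if n_in_sample == 1 & insample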

Code:
 * Example generated by -dataex-. To install: ssc install dataex
clear
input float country1 int year str2 isic float inv_rate1 str3 isiccomb byte sourcecode double(OutputINDSTAT4 Wages) float(emp gdp_percapita TotalOutput coeff_diversi)
840 1980 "36"  .02465116 "36" 3    4.300e+10   1.006e+10 103.07093  31725.71 1.4288206e+12 .7828951
124 1970 "15"  .03108122 "15" 3   9397453982  1325392470   8.15579  23772.37   98669453312  .771994
840 1993 "33"  .03249149 "33" 3   1.2631e+11  3.0341e+10  123.1207  41300.26  1.735673e+12 .8016322
840 1973 "27" .032639198 "27" 3    6.771e+10   1.361e+10  89.76096  28467.86  1.287729e+12  .789232
840 1971 "36" .022700815 "36" 3    1.718e+10   4.810e+09  84.70138 26059.816 1.1757533e+12 .7877562
840 1966 "27"  .05572354 "27" 3    4.630e+10   9.420e+09  78.84691 23837.215 1.0717137e+12 .7844567
124 1989 "26"  .06191513 "26" 3   7762007885  1555780035 13.236956 35763.516  1.786803e+11 .7766514
840 2004 "19" .014050784 "19" 1   5756974121  1096890991 140.27019  53076.24 2.9361835e+12 .9224685
124 1988 "24"  .04129102 "24" 3  22531874111  2834969664 12.978824 35440.508  177239048192 .7860352
840 1998 "17" .033607107 "17" 1  97078156250 16714824219  134.5009  47120.55  3.131152e+12 .9413515
840 2003 "23" .033053238 "23" 1 222849875000  5173883789 138.69086  51581.71 2.8813335e+12 .9245498
124 1982 "17"  .03036053 "17" 3   4271581823   971845656 10.984604  30183.06  127023939584 .7614031
840 1980 "17" .028521126 "17" 3    5.680e+10   1.111e+10 103.07093  31725.71 1.4288206e+12 .7828951
124 1970 "25"  .06435644 "25" 3   1160676065   277719521   8.15579  23772.37   98669453312  .771994
840 2013 "22"  .02656869 "22" 3  82425482000 19874717000 145.97845  56153.92 2.7011915e+12 .8281536
124 1979 "25"  .04223744 "25" 3   3739038982   782807933 10.760995 30416.396  138993172480 .7792382
840 1997 "26"  .05279192 "26" 1  97446468750 17231816406 132.36258  45674.03 3.0052516e+12 .9421328
124 1983 "36" .013669065 "36" 3   4511478304  1186291597 11.076824  30654.63  134779363328 .7679245
124 1975 "16"   .0233853 "16" 3    882851157   115026264  9.654721 27061.703  123320745984  .773169
124 1964 "20"  .04039735 "20" 3   1399965511   340256518  7.201654  19730.25   7.54404e+10  .769388
840 2003 "35"  .01864969 "35" 1 158765171875 30109058594 138.69086  51581.71 2.8813335e+12 .9245498
840 2012 "29" .025993686 "29" 3 395091867000 58412509000 144.58849     55552  2.659329e+12 .8266457
124 1970 "22"  .04039776 "22" 3   1540864512   581295686   8.15579  23772.37   98669453312  .771994
124 1969 "22" .035920464 "22" 3   1447833868   535856409  8.164087 23481.746   9.63015e+10 .7776658
840 1969 "23"  .04379861 "23" 3    2.443e+10   1.370e+09  84.23369  25700.67 1.1903348e+12 .7891335
840 1989 "36" .021874525 "36" 3    6.583e+10   1.512e+10 121.82807   39646.8 1.6576732e+12 .8044491
840 1977 "36"  .02674772 "36" 3    3.290e+10   8.110e+09  96.26191 30044.285 1.4170328e+12 .7924358
840 1976 "36"  .02789855 "36" 3    2.760e+10   6.940e+09  93.08827  28982.56 1.3292258e+12 .7863804
840 2003 "18" .008491283 "18" 1  35554578125  5658536133 138.69086  51581.71 2.8813335e+12 .9245498
124 1979 "21" .065928854 "21" 3  10798822630  2106837490 10.760995 30416.396  138993172480 .7792382
840 2004 "15"   .0235957 "15" 1 584906625000 52151515625 140.27019  53076.24 2.9361835e+12 .9224685
840 1971 "27"  .04151962 "27" 3    4.817e+10   1.041e+10  84.70138 26059.816 1.1757533e+12 .7877562
840 1978 "23"  .02208293 "23" 3    1.037e+11   3.000e+09 100.19444 31413.426 1.4627904e+12 .7929254
840 2013 "20" .035359234 "20" 3  88617589000 13399120000 145.97845  56153.92 2.7011915e+12 .8281536
840 1991 "27"  .04650017 "27" 3   1.2172e+11   2.046e+10 121.56706  39587.73 1.6336614e+12 .8001903
840 1977 "28"  .02936118 "28" 3    8.140e+10   1.830e+10  96.26191 30044.285 1.4170328e+12 .7924358
840 2000 "30"  .02253478 "30" 1 112952679688 11015568359 138.63611  50205.23 3.1705056e+12 .9420096
840 1975 "24"  .06949891 "24" 3    9.180e+10   1.172e+10  90.27315 27752.486 1.2275606e+12 .7793719
124 1979 "22"  .03245436 "22" 3   4208553009  1376956593 10.760995 30416.396  138993172480 .7792382
840 2010 "31"  .02235981 "31" 3 110225982000 16555225000  140.7138  54371.41  2.477068e+12 .8355898
840 2009 "25"  .03106351 "25" 3 171403278000 27402191000 141.22081  53480.25 2.3697901e+12  .839965
124 1973 "17"  .04424779 "17" 3   3050725435   711935926   9.09335 26566.717  122023444480 .7768121
840 2003 "28" .029430447 "28" 1 229887328125 54458734375 138.69086  51581.71 2.8813335e+12 .9245498
840 1994 "17" .034872964 "17" 3  1.01626e+11  1.8138e+10 125.68998  42520.34 1.8012163e+12 .8045634
840 1999 "28"  .04088608 "28" 1 243248296875 58651789063 136.75647  48762.61 3.2182575e+12 .9413739
124 1974 "15"  .02477184 "15" 3  17253261187  2042907186  9.562041 27039.414  133928755200 .7765449
840 1992 "20" .021414176 "20" 3    5.954e+10  1.0294e+10   121.797  40591.29 1.7067297e+12 .8025618
840 1998 "36"  .02916295 "36" 1 131397531250 27978519531  134.5009  47120.55  3.131152e+12 .9413515
124 1987 "26"  .04255066 "26" 3   6327381271  1259442994 12.543772 34422.844  159211159552 .7861783
840 1972 "25"   .0501894 "25" 3    2.112e+10   5.160e+09  86.97282  27187.73 1.2567706e+12 .7931709
124 1984 "36" .015064103 "36" 3   4818287253  1237774754 11.369482  32116.95  138367156224 .7711541
840 2010 "16"  .01009832 "16" 3  39202752000   940695000  140.7138  54371.41  2.477068e+12 .8355898
840 2007 "26"  .05725816 "26" 1 149943468750 21730236328 146.39578     55989  3.080468e+12 .9066486
124 1967 "24"  .10014836 "24" 3   2499281549   453319242  7.906885   21828.6   88298528768 .7727897
124 1985 "28"  .04150943 "28" 3  10091489828  2145723164 11.795504 33246.566  139898568704 .7775148
124 1965 "28"  .04169884 "28" 3   2402633063   588134889  7.508637 20638.375   80182960128 .7723618
840 1981 "33"  .04139344 "33" 3    4.880e+10   1.156e+10 104.21618 32227.516 1.4228834e+12 .7818055
840 1966 "26" .064363144 "26" 3    1.476e+10   3.840e+09  78.84691 23837.215 1.0717137e+12 .7844567
840 1963 "22" .028307693 "22" 3    1.625e+10   5.510e+09 73.074265  20620.72  9.030245e+11 .7825109
840 1970 "17" .034317344 "17" 3    2.710e+10   6.090e+09  84.69689   25454.2  1.155691e+12 .7854229
840 2006 "21"  .04456621 "21" 1 170360546875 20639992188 145.09415  55483.83  3.047019e+12 .9147386
840 2005 "31"  .02032121 "31" 1  1.30258e+11  2.3512e+10  142.4933  54449.45  3.008504e+12 .9174208
840 1971 "28" .024568394 "28" 3    4.518e+10   1.183e+10  84.70138 26059.816 1.1757533e+12 .7877562
124 1965 "15"  .02775453 "15" 3   6651304658   902610799  7.508637 20638.375   80182960128 .7723618
840 1979 "25"  .04672305 "25" 3    4.730e+10   1.015e+10 102.81062 32106.373 1.4935535e+12 .7893529
840 2003 "26"  .04630388 "26" 1   1.1051e+11 18970273438 138.69086  51581.71 2.8813335e+12 .9245498
840 1987 "16" .022115385 "16" 3    2.080e+10   1.490e+09  116.8861  37408.96  1.595493e+12 .8057776
840 2011 "18" .015910074 "18" 3  12523260000  2726990000 142.14735  54758.74 2.5623357e+12 .8250764
124 1988 "17"   .0383815 "17" 3   7028514643  1487769978 12.978824 35440.508  177239048192 .7860352
840 1982 "24"  .05319025 "24" 3    1.724e+11   2.156e+10 103.40858  31350.66  1.331578e+12 .7829695
840 2009 "26"  .04197488 "26" 3  89583516000 15942407000 141.22081  53480.25 2.3697901e+12  .839965
124 1963 "17"  .03931034 "17" 3   1344442446   305976557  6.961045 18873.299   6.87436e+10 .7681732
840 1985 "24"  .04111498 "24" 3    2.009e+11   2.434e+10 111.38438 35609.535  1.430635e+12 .7960103
124 1963 "23" .032214765 "23" 3   1381530513   101992186  6.961045 18873.299   6.87436e+10 .7681732
840 1992 "25"  .04210254 "25" 3   1.1605e+11   2.323e+10   121.797  40591.29 1.7067297e+12 .8025618
840 1983 "16"   .0398773 "16" 3    1.630e+10   1.350e+09 104.77914 32480.977  1.365531e+12 .7898316
840 1964 "24"  .05108724 "24" 3    3.817e+10   6.900e+09  74.73962 21509.076  9.721075e+11 .7795107
840 1993 "21"  .05589624 "21" 3  1.27683e+11  1.9902e+10  123.1207  41300.26  1.735673e+12 .8016322
124 1984 "33"  .04025157 "33" 3   1227736656   333573733 11.369482  32116.95  138367156224 .7711541
840 2002 "15" .030331217 "15" 1 525586687500 51187167969 138.15208  50589.63  2.986348e+12 .9304876
840 2010 "23" .019689966 "23" 3 627770571000  8433885000  140.7138  54371.41  2.477068e+12 .8355898
840 1998 "34"  .03475203 "34" 1 414918843750 43991316406  134.5009  47120.55  3.131152e+12 .9413515
840 1986 "25"  .04093887 "25" 3    7.328e+10   1.528e+10   113.924  36499.08  1.443684e+12 .7996418
840 2010 "19" .020039143 "19" 3   4844369000   860174000  140.7138  54371.41  2.477068e+12 .8355898
124 1989 "36" .007243243 "36" 3   7812684759  2234850148 13.236956 35763.516  1.786803e+11 .7766514
840 2015 "23"  .02969718 "23" 3 507785122000 10357030000 150.24847  58514.89  2.737627e+12 .8383057
840 1975 "21"  .06522782 "21" 3    4.170e+10   6.990e+09  90.27315 27752.486 1.2275606e+12 .7793719
840 1972 "24"  .04644119 "24" 3    5.943e+10   9.340e+09  86.97282  27187.73 1.2567706e+12 .7931709
840 1991 "36"  .01871698 "36" 3    6.625e+10   1.509e+10 121.56706  39587.73 1.6336614e+12 .8001903
840 2012 "35"  .01716096 "35" 3 277483202000 45690906000 144.58849     55552  2.659329e+12 .8266457
840 1994 "15"   .0234226 "15" 3  4.30994e+11  3.8492e+10 125.68998  42520.34 1.8012163e+12 .8045634
124 1979 "27"  .05571531 "27" 3  11874436583  2076105663 10.760995 30416.396  138993172480 .7792382
124 1964 "26"  .08118812 "26" 3    936400772   222511075  7.201654  19730.25   7.54404e+10  .769388
840 2015 "27"  .02589473 "27" 3 228434073000 23380366000 150.24847  58514.89  2.737627e+12 .8383057
840 1963 "20" .035928145 "20" 3    8.350e+09   2.140e+09 73.074265  20620.72  9.030245e+11 .7825109
840 2004 "36" .020689776 "36" 1 141610906250 28483798828 140.27019  53076.24 2.9361835e+12 .9224685
840 1999 "19" .016028171 "19" 1   9673218750  1759656982 136.75647  48762.61 3.2182575e+12 .9413739
840 1999 "32"  .06077582 "32" 1 235089140625 36713621094 136.75647  48762.61 3.2182575e+12 .9413739
124 1975 "36"  .02130045 "36" 3   2630857123   738330979  9.654721 27061.703  123320745984  .773169
840 2012 "36" .025330657 "36" 3 215883977000 40835569000 144.58849     55552  2.659329e+12 .8266457
end


Example of a regression with country dummies

Code:
 reg l_gdp_percapita share_man i.country1 i.year, robust

Linear regression                               Number of obs     =     45,316
                                                F(177, 45132)     =          .
                                                Prob > F          =          .
                                                R-squared         =     0.9638
                                                Root MSE          =     .20151

------------------------------------------------------------------------------
             |               Robust
l_gdp_perc~a |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
   share_man |   .9188907   .0458941    20.02   0.000     .8289374    1.008844
             |
    country1 |
         12  |   .1398192   .0097238    14.38   0.000     .1207605     .158878
         31  |  -.0220833   .0236409    -0.93   0.350    -.0684199    .0242533
         32  |   1.000396   .0105811    94.55   0.000     .9796571    1.021135
         36  |   1.724077   .0098756   174.58   0.000     1.704721    1.743433
         40  |   1.650948   .0105608   156.33   0.000     1.630248    1.671647
         44  |   1.667833   .0150731   110.65   0.000     1.638289    1.697376
         48  |   1.967524    .011058   177.93   0.000      1.94585    1.989198
         50  |   -1.29537   .0119894  -108.04   0.000    -1.318869   -1.271871
         51  |   .0843728   .0281494     3.00   0.003     .0291994    .1395461
         52  |   .6119524    .012826    47.71   0.000     .5868133    .6370915
         56  |   1.526947   .0103872   147.00   0.000     1.506588    1.547306
         60  |   1.657907   .0126674   130.88   0.000     1.633078    1.682735
         68  |  -.0507294   .0159609    -3.18   0.001     -.082013   -.0194459
         70  |  -.1275699   .0263274    -4.85   0.000    -.1791721   -.0759676
         72  |   .4438411   .0114761    38.68   0.000     .4213478    .4663343
         76  |   .3688905   .0110913    33.26   0.000     .3471514    .3906296
        100  |    .361612   .0131338    27.53   0.000     .3358696    .3873545
        104  |  -1.961633    .023902   -82.07   0.000    -2.008481   -1.914784
        112  |   .6383378   .0113539    56.22   0.000     .6160839    .6605917
        116  |  -1.643725   .0109332  -150.34   0.000    -1.665154   -1.622295
        120  |  -.7420946   .0186935   -39.70   0.000    -.7787341   -.7054551
        124  |   1.788242    .010166   175.90   0.000     1.768317    1.808168
        140  |  -1.332602   .0581746   -22.91   0.000    -1.446625   -1.218578
        144  |  -.2394646   .0124757   -19.19   0.000    -.2639173    -.215012
        152  |   .6076752   .0112157    54.18   0.000     .5856922    .6296581
        156  |  -.2011489   .0311219    -6.46   0.000    -.2621484   -.1401495
        170  |   .3194332   .0100574    31.76   0.000     .2997206    .3391458
        188  |   .6719599    .013369    50.26   0.000     .6457566    .6981633
        191  |   .8065507   .0117399    68.70   0.000     .7835403    .8295611
        196  |   1.068581   .0120084    88.99   0.000     1.045044    1.092117
        203  |   1.021347   .0120049    85.08   0.000     .9978173    1.044877
        208  |   1.713557   .0106053   161.58   0.000      1.69277    1.734343
        218  |   .3025109   .0112648    26.85   0.000     .2804318    .3245901
        222  |  -.0738693   .0138399    -5.34   0.000    -.1009957    -.046743
        231  |  -2.145684   .0119943  -178.89   0.000    -2.169193   -2.122175
        233  |   .9057312   .0111763    81.04   0.000     .8838255    .9276369
        242  |   .2236346   .0106636    20.97   0.000     .2027338    .2445354
        246  |    1.43978   .0104581   137.67   0.000     1.419282    1.460278
        250  |   1.544187   .0105397   146.51   0.000     1.523529    1.564845
        266  |   1.137986   .0110765   102.74   0.000     1.116276    1.159696
        268  |   .0766702   .0136464     5.62   0.000      .049923    .1034174
        270  |  -1.162161   .0103543  -112.24   0.000    -1.182456   -1.141867
        275  |  -.5467384   .0108368   -50.45   0.000    -.5679787   -.5254981
        276  |   1.509933   .0113012   133.61   0.000     1.487783    1.532084
        288  |  -.3154668   .0410579    -7.68   0.000     -.395941   -.2349925
        300  |   1.255589   .0112735   111.38   0.000     1.233492    1.277685
        320  |    .110528   .0140347     7.88   0.000     .0830197    .1380362
        340  |  -.3263836   .0102711   -31.78   0.000    -.3465152    -.306252
        344  |   1.242486    .015634    79.47   0.000     1.211843    1.273128
        348  |   .8694839   .0110921    78.39   0.000     .8477432    .8912246
        352  |   1.624975   .0099666   163.04   0.000      1.60544     1.64451
        356  |  -.9933728   .0121642   -81.66   0.000    -1.017215   -.9695309
        360  |  -.3468424   .0120225   -28.85   0.000    -.3704068   -.3232781
        364  |   .4794817   .0177027    27.09   0.000     .4447841    .5141793
        372  |   1.657644   .0131108   126.43   0.000     1.631947    1.683341
        376  |   1.319273   .0097486   135.33   0.000     1.300166    1.338381
        380  |    1.57303   .0113083   139.10   0.000     1.550866    1.595195
        384  |  -.8037646   .0102571   -78.36   0.000    -.8238688   -.7836605
        392  |   1.406302   .0108517   129.59   0.000     1.385033    1.427572
        398  |   .8981183    .009638    93.19   0.000     .8792277     .917009
        400  |   .3052589    .012813    23.82   0.000     .2801452    .3303727
        404  |  -.5147151   .0269775   -19.08   0.000    -.5675914   -.4618389
        410  |   .6722883   .0217413    30.92   0.000      .629675    .7149016
        414  |   2.236188   .0231366    96.65   0.000      2.19084    2.281536
        418  |  -.7040705   .0458362   -15.36   0.000    -.7939102   -.6142307
        422  |    .611516   .0229675    26.63   0.000     .5664992    .6565327
        426  |   -1.56843   .0165501   -94.77   0.000    -1.600868   -1.535991
        428  |   .6195997   .0143469    43.19   0.000     .5914795      .64772
        440  |   .7144657   .0133826    53.39   0.000     .6882356    .7406958
        442  |   2.204164    .010143   217.31   0.000     2.184283    2.224044
        446  |   1.779534   .0186684    95.32   0.000     1.742943    1.816124
        450  |  -1.651845   .0120918  -136.61   0.000    -1.675545   -1.628145
        454  |  -1.883317   .0170413  -110.51   0.000    -1.916718   -1.849915
        458  |   .4775227   .0125024    38.19   0.000     .4530177    .5020277
        462  |   .6493281   .0111001    58.50   0.000     .6275717    .6710844
        470  |   .7183254   .0171533    41.88   0.000     .6847046    .7519463
        484  |   .7464449   .0112014    66.64   0.000     .7244899    .7683999
        496  |   -.184911   .0177652   -10.41   0.000    -.2197311    -.150091
        504  |  -.4884675   .0097987   -49.85   0.000    -.5076732   -.4692619
        508  |  -2.018648   .0132443  -152.42   0.000    -2.044607   -1.992689
        512  |   1.219262   .0133043    91.64   0.000     1.193186    1.245339
        524  |  -1.343939   .0101304  -132.66   0.000    -1.363795   -1.324083
        528  |   1.734882    .009824   176.60   0.000     1.715627    1.754137
        554  |   1.484114   .0124538   119.17   0.000     1.459704    1.508523
        558  |  -.0703292    .012193    -5.77   0.000    -.0942278   -.0464307
        562  |  -1.966414   .0229068   -85.84   0.000    -2.011312   -1.921517
        566  |  -.5526878   .0205543   -26.89   0.000    -.5929745    -.512401
        578  |   1.966367   .0102139   192.52   0.000     1.946347    1.986386
        586  |  -.6789311   .0105184   -64.55   0.000    -.6995474   -.6583148
        590  |   .6901412   .0100713    68.53   0.000     .6704013    .7098811
        604  |   .0505702   .0149795     3.38   0.001     .0212102    .0799302
        608  |   -.286317   .0117811   -24.30   0.000    -.3094082   -.2632258
        616  |   .6803964   .0117916    57.70   0.000     .6572846    .7035082
        620  |    1.10305   .0108115   102.03   0.000     1.081859     1.12424
        642  |   .6804005   .0117587    57.86   0.000     .6573532    .7034479
        646  |  -1.589416   .0138312  -114.91   0.000    -1.616525   -1.562306
        682  |    1.56228   .0114958   135.90   0.000     1.539748    1.584812
        686  |  -.8868662   .0249211   -35.59   0.000     -.935712   -.8380205
        702  |   1.559547   .0166446    93.70   0.000     1.526923    1.592171
        703  |   .6443317   .0118639    54.31   0.000     .6210783    .6675852
        704  |  -.8683774   .0114083   -76.12   0.000    -.8907379   -.8460169
        705  |   .9703285    .011908    81.49   0.000     .9469887    .9936683
        710  |   1.054817   .0142531    74.01   0.000     1.026881    1.082753
        716  |  -1.197127   .0099345  -120.50   0.000    -1.216599   -1.177656
        724  |   1.385057    .010179   136.07   0.000     1.365106    1.405008
        736  |  -.9589485    .010491   -91.41   0.000    -.9795111   -.9383859
        748  |  -.2198058   .0130833   -16.80   0.000    -.2454493   -.1941623
        752  |   1.631499   .0105799   154.21   0.000     1.610762    1.652235
        756  |   1.905009   .0099525   191.41   0.000     1.885502    1.924516
        762  |   -1.07388   .0117352   -91.51   0.000    -1.096881   -1.050878
        764  |    .283712   .0138427    20.50   0.000     .2565802    .3108438
        780  |   .8580207   .0157231    54.57   0.000     .8272033    .8888382
        784  |   3.482394   .0165427   210.51   0.000      3.44997    3.514818
        788  |  -.0400499   .0114344    -3.50   0.000    -.0624616   -.0176383
        792  |   .6924607   .0096331    71.88   0.000     .6735796    .7113418
        800  |   -1.13468   .0352539   -32.19   0.000    -1.203778   -1.065582
        804  |   .2291589   .0166405    13.77   0.000     .1965432    .2617746
        807  |   .2485353   .0123563    20.11   0.000     .2243168    .2727539
        818  |   -.058803   .0128072    -4.59   0.000    -.0839052   -.0337007
        826  |   1.495211   .0100999   148.04   0.000     1.475415    1.515007
        834  |  -1.494105   .0132088  -113.11   0.000    -1.519995   -1.468216
        840  |   1.830288   .0098971   184.93   0.000     1.810889    1.849686
        858  |   .5544513   .0106193    52.21   0.000     .5336374    .5752653
        860  |   .0213968   .0099338     2.15   0.031     .0019263    .0408673
        862  |  -2.202293    .014429  -152.63   0.000    -2.230574   -2.174012
        887  |  -.8229839   .0102919   -79.96   0.000    -.8431562   -.8028115
        894  |  -.3254562   .0306429   -10.62   0.000    -.3855168   -.2653957
             |
        year |
       1964  |    .093142   .0302506     3.08   0.002     .0338502    .1524338
       1965  |   .1269253   .0294211     4.31   0.000     .0692594    .1845911
       1966  |   .1174069   .0316488     3.71   0.000     .0553746    .1794391
       1967  |   .1319217   .0308946     4.27   0.000     .0713677    .1924756
       1968  |   .1874856   .0282115     6.65   0.000     .1321906    .2427806
       1969  |   .2313429   .0288315     8.02   0.000     .1748328     .287853
       1970  |   .2944888   .0281154    10.47   0.000     .2393821    .3495955
       1971  |   .3220804   .0278954    11.55   0.000     .2674049    .3767559
       1972  |   .3588911   .0275966    13.00   0.000     .3048013    .4129808
       1973  |   .3849375   .0275078    13.99   0.000     .3310217    .4388533
       1974  |   .4376506   .0266705    16.41   0.000      .385376    .4899251
       1975  |   .4207195   .0257967    16.31   0.000     .3701575    .4712815
       1976  |   .4493373   .0257377    17.46   0.000     .3988909    .4997837
       1977  |   .4619859   .0256467    18.01   0.000     .4117179    .5122539
       1978  |   .4909571   .0254936    19.26   0.000     .4409892     .540925
       1979  |   .5460373   .0255093    21.41   0.000     .4960386    .5960359
       1980  |   .5555927   .0247801    22.42   0.000     .5070233     .604162
       1981  |   .5836529    .024464    23.86   0.000     .5357032    .6316027
       1982  |   .5853933   .0244204    23.97   0.000     .5375289    .6332578
       1983  |   .5996335     .02401    24.97   0.000     .5525735    .6466934
       1984  |    .629723   .0240294    26.21   0.000     .5826251     .676821
       1985  |   .6396022   .0240536    26.59   0.000     .5924569    .6867476
       1986  |   .6677129   .0241434    27.66   0.000     .6203914    .7150345
       1987  |    .706043   .0240272    29.39   0.000     .6589493    .7531368
       1988  |   .7201399   .0241164    29.86   0.000     .6728713    .7674085
       1989  |    .727629   .0243977    29.82   0.000     .6798091    .7754488
       1990  |   .7347261    .024509    29.98   0.000     .6866881    .7827642
       1991  |   .7553216   .0240907    31.35   0.000     .7081034    .8025399
       1992  |    .786211   .0240693    32.66   0.000     .7390346    .8333873
       1993  |   .7952321   .0239937    33.14   0.000     .7482041    .8422602
       1994  |   .8156353   .0239282    34.09   0.000     .7687356     .862535
       1995  |    .819727   .0237128    34.57   0.000     .7732496    .8662045
       1996  |   .8351929   .0239033    34.94   0.000      .788342    .8820438
       1997  |   .8775972   .0238985    36.72   0.000     .8307558    .9244386
       1998  |   .8968982   .0237906    37.70   0.000     .8502683    .9435282
       1999  |   .9371042   .0237903    39.39   0.000     .8904749    .9837335
       2000  |    .972243   .0239075    40.67   0.000     .9253838    1.019102
       2001  |   .9855313   .0238646    41.30   0.000     .9387562    1.032306
       2002  |   1.017459   .0239096    42.55   0.000      .970596    1.064322
       2003  |   1.031339    .023939    43.08   0.000     .9844183     1.07826
       2004  |   1.084362   .0238365    45.49   0.000     1.037642    1.131081
       2005  |   1.118332   .0237818    47.02   0.000     1.071719    1.164945
       2006  |   1.164148   .0237877    48.94   0.000     1.117523    1.210772
       2007  |   1.211904   .0239643    50.57   0.000     1.164933    1.258874
       2008  |   1.227434    .023992    51.16   0.000     1.180409    1.274459
       2009  |   1.203944   .0241581    49.84   0.000     1.156593    1.251294
       2010  |   1.230624   .0239956    51.29   0.000     1.183592    1.277656
       2011  |   1.259778    .024245    51.96   0.000     1.212257    1.307299
       2012  |   1.264025   .0242692    52.08   0.000     1.216457    1.311593
       2013  |   1.276914   .0244094    52.31   0.000     1.229071    1.324757
       2014  |   1.298817   .0243788    53.28   0.000     1.251034      1.3466
       2015  |   1.322773   .0244492    54.10   0.000     1.274852    1.370694
       2016  |   1.335156   .0245274    54.44   0.000     1.287082     1.38323
       2017  |   1.367187   .0247622    55.21   0.000     1.318653    1.415721
       2018  |   1.364428   .0248158    54.98   0.000     1.315788    1.413067
             |
       _cons |   7.884736   .0258162   305.42   0.000     7.834136    7.935336
------------------------------------------------------------------------------

Calculating the Gini coefficient

Good afternoon everyone. I am trying to calculate the Gini index for some sectors of the economy, identified from 1 to 5. I searched a few forums and YouTube as well, and I saw that people use, for example, the command: inequal var

In another forum someone said it is possible with: ineqdeco var [fw=weight], by(subgroup)

As I am a beginner in the Stata world, I did not quite understand what should go inside the [...]. In any case, the issue is that the first approach shown above displays the coefficient, but I need to create a variable that holds these coefficients. I tried the following:
by setor: gen ind_gini = ineqdeco renda

However, that did not work. Can anyone suggest a solution? Thank you very much.
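
One way to get there (an untested sketch; it assumes -ineqdeco- from SSC, which stores the Gini coefficient in r(gini), and it reuses the variable names setor and renda from the post):
Code:
* Untested sketch: loop over sectors, run ineqdeco on renda within each
* sector, and copy the returned Gini into a new variable.
gen double ind_gini = .
levelsof setor, local(setores)
foreach s of local setores {
    quietly ineqdeco renda if setor == `s'
    replace ind_gini = r(gini) if setor == `s'
}
The original attempt fails because ineqdeco is a command, not a function, so it cannot appear on the right-hand side of gen.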

Friday, March 25, 2022

export spatial weight matrix

Hi,

I'm working with district-level unbalanced panel data.
I used the following command to generate a weight matrix, w:
HTML Code:
spwmatrix gecon _CY _CX , wname(w) wtype(inv) cart dband(0 100) rowstand
Afterwards, it reported:
HTML Code:
Inverse distance (alpha = 1) spatial weights matrix (13892 x 13892) calculated successfully and the following action(s) taken:

 - Spatial weights matrix  created as Stata object(s): w.

 - Spatial weights matrix has been row-standardized.
Now I want to use
HTML Code:
xtmoran RYield, wname(w)
where the weight matrix w needs to be in .dta format.

I don't know how to export the weight matrix generated above as a .dta file, and I can't find the appropriate code.

Can you please advise?

Thank you,
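
One possible route (an untested sketch): -spwmatrix- leaves w behind as a Stata matrix, so you can convert it into variables with -svmat- and save those as a .dta file. Caveat: a 13892 x 13892 matrix may run into Stata's matrix and maxvar limits, and the file name w_matrix.dta below is just a placeholder:
Code:
* Untested sketch: turn the Stata matrix w into variables w1, w2, ...
* and save them in a .dta file (may require raising maxvar first).
preserve
clear
svmat double w, names(w)
save "w_matrix.dta", replace
restore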

Event Study Graph: Do I need an indicator variable for each pre and post treatment period (except t-1)?

Code:
* Example generated by -dataex-. For more info, type help dataex
clear
input float(id monthly independent sales TreatZero lead2 lead3 lead4 lead5 lead6 lead7_backwards lag1 lag2 lag3 lag4 lag5 lag6 lead1)
1 672 0  249512 0 0 0 0 0 0 1 0 0 0 0 0 0 0
1 673 0  177712 0 0 0 0 0 0 1 0 0 0 0 0 0 0
1 674 0  109524 0 0 0 0 0 0 1 0 0 0 0 0 0 0
1 675 0   20776 0 0 0 0 0 0 1 0 0 0 0 0 0 0
1 676 0  846471 0 0 0 0 0 0 1 0 0 0 0 0 0 0
1 677 0  328806 0 0 0 0 0 0 1 0 0 0 0 0 0 0
1 678 0   46470 0 0 0 0 0 0 1 0 0 0 0 0 0 0
1 679 0  394758 0 0 0 0 0 0 1 0 0 0 0 0 0 0
1 680 0  301179 0 0 0 0 0 1 0 0 0 0 0 0 0 0
1 681 0  756129 0 0 0 0 1 0 0 0 0 0 0 0 0 0
1 682 0  116117 0 0 0 1 0 0 0 0 0 0 0 0 0 0
1 683 0  374293 0 0 1 0 0 0 0 0 0 0 0 0 0 0
1 684 0  432423 0 1 0 0 0 0 0 0 0 0 0 0 0 0
1 685 0  364780 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1 686 0  797174 1 0 0 0 0 0 0 0 0 0 0 0 0 0
1 687 0  400569 0 0 0 0 0 0 0 1 0 0 0 0 0 0
1 688 0  126897 0 0 0 0 0 0 0 0 1 0 0 0 0 0
2 672 1   65104 0 0 0 0 0 0 1 0 0 0 0 0 0 0
2 673 1   77133 0 0 0 0 0 0 1 0 0 0 0 0 0 0
2 674 1   76200 0 0 0 0 0 0 1 0 0 0 0 0 0 0
2 675 1  218342 0 0 0 0 0 0 1 0 0 0 0 0 0 0
2 676 1   39265 0 0 0 0 0 0 1 0 0 0 0 0 0 0
2 677 1    6649 0 0 0 0 0 0 1 0 0 0 0 0 0 0
2 678 1   41677 0 0 0 0 0 0 1 0 0 0 0 0 0 0
2 679 1  156277 0 0 0 0 0 0 1 0 0 0 0 0 0 0
2 680 1   98535 0 0 0 0 0 1 0 0 0 0 0 0 0 0
2 681 1    3920 0 0 0 0 1 0 0 0 0 0 0 0 0 0
2 682 1  165573 0 0 0 1 0 0 0 0 0 0 0 0 0 0
2 683 1   73413 0 0 1 0 0 0 0 0 0 0 0 0 0 0
2 684 1   97216 0 1 0 0 0 0 0 0 0 0 0 0 0 0
2 685 1  106015 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2 686 1   33066 1 0 0 0 0 0 0 0 0 0 0 0 0 0
2 687 1   54207 0 0 0 0 0 0 0 1 0 0 0 0 0 0
2 688 1  118173 0 0 0 0 0 0 0 0 1 0 0 0 0 0
3 672 0  737203 0 0 0 0 0 0 1 0 0 0 0 0 0 0
3 673 0  306725 0 0 0 0 0 0 1 0 0 0 0 0 0 0
3 674 0  198990 0 0 0 0 0 0 1 0 0 0 0 0 0 0
3 675 0 1054751 0 0 0 0 0 0 1 0 0 0 0 0 0 0
3 676 0 1886147 0 0 0 0 0 1 0 0 0 0 0 0 0 0
3 677 0 1142545 0 0 0 0 1 0 0 0 0 0 0 0 0 0
3 678 0 1277825 0 0 0 1 0 0 0 0 0 0 0 0 0 0
3 679 0  397706 0 0 1 0 0 0 0 0 0 0 0 0 0 0
3 680 0 1354199 0 1 0 0 0 0 0 0 0 0 0 0 0 0
3 681 0 1348788 0 0 0 0 0 0 0 0 0 0 0 0 0 0
3 682 0  914274 1 0 0 0 0 0 0 0 0 0 0 0 0 0
3 683 0  805134 0 0 0 0 0 0 0 1 0 0 0 0 0 0
3 684 0  769588 0 0 0 0 0 0 0 0 1 0 0 0 0 0
3 685 0  292174 0 0 0 0 0 0 0 0 0 1 0 0 0 0
3 686 0 1236297 0 0 0 0 0 0 0 0 0 0 1 0 0 0
3 687 0   58338 0 0 0 0 0 0 0 0 0 0 0 1 0 0
3 688 0 1681455 0 0 0 0 0 0 0 0 0 0 0 0 1 0
4 672 1   82611 0 0 0 0 0 0 1 0 0 0 0 0 0 0
4 673 1  190401 0 0 0 0 0 0 1 0 0 0 0 0 0 0
4 674 1  122867 0 0 0 0 0 0 1 0 0 0 0 0 0 0
4 675 1  111444 0 0 0 0 0 0 1 0 0 0 0 0 0 0
4 676 1   44781 0 0 0 0 0 1 0 0 0 0 0 0 0 0
4 677 1  158895 0 0 0 0 1 0 0 0 0 0 0 0 0 0
4 678 1   71693 0 0 0 1 0 0 0 0 0 0 0 0 0 0
4 679 1   62140 0 0 1 0 0 0 0 0 0 0 0 0 0 0
4 680 1  321720 0 1 0 0 0 0 0 0 0 0 0 0 0 0
4 681 1  188944 0 0 0 0 0 0 0 0 0 0 0 0 0 0
4 682 1  179921 1 0 0 0 0 0 0 0 0 0 0 0 0 0
4 683 1  159214 0 0 0 0 0 0 0 1 0 0 0 0 0 0
4 684 1  118173 0 0 0 0 0 0 0 0 1 0 0 0 0 0
4 685 1  246030 0 0 0 0 0 0 0 0 0 1 0 0 0 0
4 686 1   83191 0 0 0 0 0 0 0 0 0 0 1 0 0 0
4 687 1  100867 0 0 0 0 0 0 0 0 0 0 0 1 0 0
4 688 1   42409 0 0 0 0 0 0 0 0 0 0 0 0 1 0
5 672 0   32247 0 0 0 0 0 0 1 0 0 0 0 0 0 0
5 673 0    9993 0 0 0 0 0 0 1 0 0 0 0 0 0 0
5 674 0   44384 0 0 0 0 0 0 1 0 0 0 0 0 0 0
5 675 0   28284 0 0 0 0 0 0 1 0 0 0 0 0 0 0
5 676 0    6873 0 0 0 0 0 0 1 0 0 0 0 0 0 0
5 677 0   35780 0 0 0 0 0 0 1 0 0 0 0 0 0 0
5 678 0     226 0 0 0 0 0 0 1 0 0 0 0 0 0 0
5 679 0   41062 0 0 0 0 0 0 1 0 0 0 0 0 0 0
5 680 0   34161 0 0 0 0 0 0 1 0 0 0 0 0 0 0
5 681 0    5773 0 0 0 0 0 1 0 0 0 0 0 0 0 0
5 682 0   12586 0 0 0 0 1 0 0 0 0 0 0 0 0 0
5 683 0   22660 0 0 0 1 0 0 0 0 0 0 0 0 0 0
5 684 0   40637 0 0 1 0 0 0 0 0 0 0 0 0 0 0
5 685 0   40881 0 1 0 0 0 0 0 0 0 0 0 0 0 0
5 686 0    3560 0 0 0 0 0 0 0 0 0 0 0 0 0 0
5 687 0    9365 1 0 0 0 0 0 0 0 0 0 0 0 0 0
5 688 0     852 0 0 0 0 0 0 0 1 0 0 0 0 0 0
6 672 0   94715 0 0 0 0 0 0 1 0 0 0 0 0 0 0
6 673 0    2692 0 0 0 0 0 0 1 0 0 0 0 0 0 0
6 674 0  123457 0 0 0 0 0 0 1 0 0 0 0 0 0 0
6 675 0  724462 0 0 0 0 0 0 1 0 0 0 0 0 0 0
6 676 0  871857 0 0 0 0 0 0 1 0 0 0 0 0 0 0
6 677 0   16821 0 0 0 0 0 0 1 0 0 0 0 0 0 0
6 678 0  499244 0 0 0 0 0 0 1 0 0 0 0 0 0 0
6 679 0  441009 0 0 0 0 0 0 1 0 0 0 0 0 0 0
6 680 0  429921 0 0 0 0 0 0 1 0 0 0 0 0 0 0
6 681 0  156341 0 0 0 0 0 1 0 0 0 0 0 0 0 0
6 682 0  461273 0 0 0 0 1 0 0 0 0 0 0 0 0 0
6 683 0  325237 0 0 0 1 0 0 0 0 0 0 0 0 0 0
6 684 0  302210 0 0 1 0 0 0 0 0 0 0 0 0 0 0
6 685 0  332281 0 1 0 0 0 0 0 0 0 0 0 0 0 0
6 686 0  298871 0 0 0 0 0 0 0 0 0 0 0 0 0 0
end
format %tm monthly
I have a staggered difference-in-differences setting. In the dataset, the lead variables are indicators for pre-treatment periods, and the lag variables are indicators for post-treatment periods. The dependent variable is sales. This is my code:
Code:
xtset id monthly
xtreg sales lead7_backwards lead6 lead5 lead4 lead3 lead2 lead1 TreatZero lag1 lag2 lag3 lag4 lag5 lag6 i.monthly, fe vce(cluster id)
coefplot, vertical omitted keep(lead6 lead5 lead4 lead3 lead2 lead1 TreatZero lag1 lag2 lag3 lag4 lag5 lag6) ciopts(recast(rcap)) yline(0) msymbol(d)
My question is: do I need to include lead7_backwards in the regression, or should I instead run:

Code:
xtreg sales lead6 lead5 lead4 lead3 lead2 lead1 TreatZero lag1 lag2 lag3 lag4 lag5 lag6 i.monthly, fe vce(cluster id)

How to Count the Number of the Students Having Grade Retention in a Dataset in Stata Code?

As the title suggests, I would like to count the number of students who experienced grade retention in the K-12 system. For example, I want to know how many students made normal grade progression, how many had a one-time grade retention, how many had two grade retentions, and so on. The dataset is structured in long format below; a possible approach is sketched after the data.
*Simulated dataset.
clear
input byte (id grade)
1 1
1 2
1 2
1 3
1 4
1 4
1 4
1 4
1 5
1 6
1 6
2 1
2 2
2 3
2 4
2 4
2 5
2 5
2 6
2 6
2 6
2 7
3 1
3 2
3 3
3 4
3 5
3 6
3 7
4 1
4 1
4 2
4 3
4 4
4 5
4 6
5 1
5 2
5 3
5 4
5 5
5 5
6 1
6 2
6 3
6 4
6 4
6 5
7 1
7 2
7 3
7 3
7 5
7 6
7 7
end
Thank you for your help!

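One possible approach (an untested sketch; it assumes the records are in chronological order within id, so that a retention shows up as the same grade repeated in consecutive records, and it counts each extra year spent in a grade as one retention):
Code:
* Untested sketch: flag repeated grades, total them per student, and
* tabulate students by their number of retention years.
bysort id (grade): gen byte retained = (grade == grade[_n-1]) if _n > 1
bysort id: egen n_retained = total(retained)
egen byte first = tag(id)
tab n_retained if first
In the simulated data this would report, for example, 0 retention years for id 3 (normal progression) and 5 for id 1.
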
Reshape wide to long - dataset

Hello,

I need to organize my data and convert it from wide to long. Since I have too many variables, I use the command "set maxvar 32767, permanently".
After that I run the following code:
ds cg_* ce_*
local varlist `r(varlist)'
local varlist: subinstr local varlist " " "_@ ", all
local varlist `varlist'_@

reshape long `varlist', i(pidp) j(_j) string
replace _j = cond(_j=="ce","September 2020","January 2021")

drop if psu==.

I didn't get the results that I want: the variables still have the "cg_" and "ce_" prefixes, and the only month that appears in the rows is January 2021. Can you help me solve this problem? Thank you in advance. A possible fix is sketched after the data example.
An example of the dataset is below:

* Example generated by -dataex-. For more info, type help dataex
clear
input long(pidp psu) int strata byte(ce_semp cg_semp cg_parent5plus ce_parent5plus cg_couple ce_couple cg_sex_cv ce_sex_cv i_qfhigh)
76165 19 6 1 -8 2 2 1 1 2 2 -8
280165 67 15 -8 . . 1 . 1 . 2 -8
469205 106 25 -8 . . 2 . 2 . 2 15
732365 157 43 -8 -8 2 2 2 2 1 1 -8
1587125 215 65 3 -8 2 2 2 2 2 2 -8
4849085 560 148 1 -8 2 2 1 1 1 1 -8
68002725 1 1 4 -8 2 2 2 2 2 2 -8
68008847 2012 2006 1 -8 2 2 2 2 2 2 -8
68010887 2012 2006 1 -8 2 2 1 1 2 2 -8
68029931 2060 2030 . -8 1 . 1 . 1 . -8
68031967 2060 2030 4 4 2 2 2 2 2 2 -8
68035365 11 4 4 -8 2 2 2 2 1 1 -8
68035367 2060 2030 1 -8 2 2 1 1 1 1 -8
68041487 2084 2042 1 -8 1 1 1 1 2 2 -8
68041491 2084 2042 . -8 1 . 1 . 1 . -8
68045567 2084 2042 1 -8 2 2 2 2 2 2 -8
68051007 2108 2054 2 -8 2 2 1 1 1 1 -8
68051011 2108 2054 1 -8 2 2 1 1 2 2 -8
68058487 2108 2054 4 -8 2 2 1 1 1 1 -8
68058491 2108 2054 4 -8 2 2 1 1 2 2 -8
68060531 2108 2054 1 -8 1 1 1 1 2 2 -8
68060533 18 6 4 -8 2 2 1 1 2 2 -8
68060537 18 6 4 -8 2 2 1 1 1 1 -8
68061288 2012 2006 -8 . . 2 . 1 . 2 -8
68063247 2132 2066 1 -8 1 1 1 1 2 2 -8
68063927 2132 2066 1 -8 1 1 1 1 2 2 -8
68063931 2132 2066 -8 -8 1 1 1 1 1 1 -8
68064605 18 6 4 -8 2 2 1 1 1 1 -8
68064609 18 6 4 -8 2 2 1 1 2 2 -8
68068007 2132 2066 1 -8 2 2 1 1 1 1 -8
68068011 2132 2066 1 . . 2 . 2 . 2 -8
68068082 2012 2006 2 -8 2 2 1 1 1 1 -8
68097245 25 8 4 -8 2 2 2 2 2 2 -8
68097927 2180 2090 4 -8 2 2 2 2 2 2 -8
68112211 2228 2114 1 . . 1 . 1 . 2 -8
68120367 2228 2114 4 -8 2 2 2 2 2 2 -8
68120375 2228 2114 1 -8 1 2 2 2 2 2 -8
68125127 2252 2126 1 -8 1 1 1 1 2 2 -8
68125131 2252 2126 1 -8 2 2 1 1 1 1 -8
68125135 2252 2126 1 -8 2 2 2 2 2 2 -8
68133285 34 11 4 -8 2 2 2 2 2 2 -8
68133289 34 11 4 -8 2 2 1 1 2 2 -8
68136009 34 11 1 -8 1 1 2 2 2 2 -8
68137365 34 11 -8 -8 2 2 2 2 2 2 -8
68138045 34 11 4 -8 2 2 1 1 1 1 -8
68138049 34 11 4 -8 2 2 1 1 2 2 -8
68138051 2276 2138 4 -8 2 2 1 1 2 2 -8
68144847 2276 2138 1 -8 2 2 1 1 1 1 -8
68144851 2276 2138 1 -8 1 1 1 1 2 2 -8
68148247 2300 2150 -8 -8 2 2 1 1 1 1 -8
68148251 2300 2150 4 . . 2 . 1 . 2 -8
68150967 2300 2150 1 . . 2 . 1 . 1 -8
68150971 2300 2150 1 -8 2 2 1 1 2 2 -8
68150975 2300 2150 1 -8 2 2 2 2 1 1 -8
68155047 2300 2150 -8 4 1 1 1 1 2 2 -8
68155051 2300 2150 1 4 1 1 1 1 1 1 -8
68157771 2300 2150 -8 2 2 2 2 2 2 2 -8
68159131 2300 2150 1 -8 1 1 1 1 2 2 -8
68160485 39 11 1 -8 1 1 2 2 2 2 -8
68160489 39 11 1 -8 2 2 2 2 1 1 -8
68173407 2348 2174 4 -8 2 2 1 2 2 2 -8
68174767 2348 2174 4 . . 1 . 1 . 2 -8
68180887 2348 2174 1 -8 1 1 1 1 2 2 -8
68180891 2348 2174 1 . . 1 . 1 . 1 -8
68184971 2372 2186 . -8 1 . 1 . 2 . -8
68185647 2372 2186 4 -8 2 2 2 2 2 2 -8
68187687 2372 2186 4 -8 1 1 1 1 1 1 -8
68187691 2372 2186 4 -8 1 1 1 1 2 2 -8
68191771 2372 2186 1 -8 1 1 2 2 2 2 -8
68193127 2372 2186 4 -8 1 1 2 2 2 2 -8
68195167 2372 2186 4 -8 2 2 1 1 1 1 -8
68195171 2372 2186 4 -8 2 2 1 1 2 2 -8
68195851 2372 2186 1 -8 2 2 1 1 2 2 -8
68197211 2396 2198 1 . . 2 . 1 . 2 -8
68197887 2396 2198 1 -8 1 1 2 2 2 2 -8
68197899 2396 2198 -8 -8 2 1 2 2 2 2 -8
68197903 2396 2198 -8 -8 2 2 2 2 1 1 -8
68199247 2396 2198 1 -8 2 2 1 1 1 1 -8
68207407 2396 2198 4 -8 2 2 1 1 2 2 -8
68207411 2396 2198 4 -8 2 2 1 1 1 1 -8
68211487 2420 2210 4 -8 2 2 2 2 1 1 -8
68214207 2420 2210 1 -8 2 2 2 2 1 1 -8
68214887 2420 2210 -8 . . 1 . 1 . 1 -8
68214891 2420 2210 1 . . 1 . 1 . 2 -8
68216247 2420 2210 1 -8 1 1 1 1 2 2 -8
68218287 2420 2210 -8 -8 2 2 2 2 1 1 -8
68230527 2444 2222 -8 . . 1 . 2 . 2 -8
68231223 2444 2222 4 -8 2 2 2 2 2 2 -8
68238011 2468 2234 1 -8 2 2 1 1 2 2 -8
68262487 2516 2258 2 -8 2 2 1 1 1 1 -8
68266567 2516 2258 4 -8 2 2 2 2 2 2 -8
68278127 2540 2270 4 -8 2 2 2 2 2 2 -8
68288327 2564 2282 1 -8 1 1 1 1 2 2 -8
68288331 2564 2282 1 -8 1 1 1 1 1 1 -8
68291731 2564 2282 4 -8 2 2 2 2 2 2 -8
68293087 2564 2282 4 -8 1 1 1 1 2 2 -8
68293091 2564 2282 1 -8 1 1 1 1 1 1 -8
68293095 2564 2282 1 -8 1 1 1 1 1 1 -8
68293099 2564 2282 1 -8 2 2 1 1 1 1 -8
68293168 2108 2054 4 -8 2 2 2 2 1 1 -8
end
label values psu psu
label values strata strata
label values ce_semp ce_semp
label def ce_semp -8 "inapplicable", modify
label def ce_semp 1 "Yes, employed only", modify
label def ce_semp 2 "Yes, self-employed only", modify
label def ce_semp 3 "Both employed and self-employed", modify
label def ce_semp 4 "No", modify
label values cg_semp cg_semp
label def cg_semp -8 "inapplicable", modify
label def cg_semp 2 "Yes, self-employed only", modify
label def cg_semp 4 "No", modify
label values cg_parent5plus cg_parent5plus
label def cg_parent5plus 1 "Yes", modify
label def cg_parent5plus 2 "No", modify
label values ce_parent5plus ce_parent5plus
label def ce_parent5plus 1 "Yes", modify
label def ce_parent5plus 2 "No", modify
label values cg_couple cg_couple
label def cg_couple 1 "Yes", modify
label def cg_couple 2 "No", modify
label values ce_couple ce_couple
label def ce_couple 1 "Yes", modify
label def ce_couple 2 "No", modify
label values cg_sex_cv cg_sex_cv
label def cg_sex_cv 1 "Male", modify
label def cg_sex_cv 2 "Female", modify
label values ce_sex_cv ce_sex_cv
label def ce_sex_cv 1 "Male", modify
label def ce_sex_cv 2 "Female", modify
label values i_qfhigh i_qfhigh
label def i_qfhigh -8 "inapplicable", modify
label def i_qfhigh 15 "Other school (inc. school leaving exam certificate or matriculation)", modify
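A possible fix (an untested sketch): the wave marker (cg/ce) is a prefix, but appending "_@" to each name builds stubs that expect it as a suffix, which is why reshape leaves the cg_* and ce_* variables untouched. Moving the prefix to the end of each variable name first makes the stubs work:
Code:
* Untested sketch: move the wave prefix to the end of each variable
* name, build the stub list, and reshape on those stubs.
rename cg_* *_cg
rename ce_* *_ce
unab cgvars : *_cg
local stubs
foreach v of local cgvars {
    local stubs `stubs' `=substr("`v'", 1, strlen("`v'") - 2)'
}
reshape long `stubs', i(pidp) j(wave) string
replace wave = cond(wave == "ce", "September 2020", "January 2021")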

Long data? Help. New to Stata

Hi. I am new to Stata and coming back to data analysis 20 years after my undergraduate degree. I am sure this is a simple fix, but there is nothing specific in all of the YouTube videos and manuals - unless I just haven't been able to find it yet!

I have a data set which measures company performance over a 6-year period that I want to analyse as panel data. I have 53 firms measured over this period, with a total of 378 observations.

It appears as:

Year Company A Score C Score R Score Combined Score Q Score ROA
2013 Acme 5 7 9 21 1 5.6
2014 Acme
2015 Acme
2016 Acme
2013 Bacme
2014 Bacme
2015 Bacme
How do I fix this structure so that Stata recognises the panel, rather than 378 individual observations?

I have been restudying reshaping etc., but no luck.

Do I need to go back and restructure the information in the initial Excel spreadsheet, or is there a command that I could use in Stata?

Thanks so much in advance!
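
A sketch of one likely fix (untested; it assumes the variables imported as a string company and a numeric year): the data are already in long format, one row per firm-year, so no reshape is needed. xtset just needs a numeric panel identifier:
Code:
* Untested sketch: build a numeric firm id from the company name and
* declare the panel structure.
encode company, gen(company_id)
xtset company_id year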

Making a figure including confidence intervals for multiple groups

Dear Stata-users,

As I was working on my thesis, I was wondering about the following. I have made two figures with confidence intervals for a variable on expected retirement age (called 'rretage') per gender (called 'ragender') and grouped by current age of respondents (called 'ragey_e'). I have included the code below:

preserve
collapse (mean) mrretage = rretage (sd) sdrretage = rretage (count) nrretage = rretage if ragender ==1 , by(ragey_e)
generate rretageucl = mrretage + 1.96*sdrretage/sqrt(nrretage)
generate rretagelcl = mrretage - 1.96*sdrretage/sqrt(nrretage)


twoway rcap rretagelcl rretageucl ragey_e, xtitle(Current age) ytitle(Expected retirement age) subtitle(By current age) saving(confidenceintmales.gph)


restore

preserve
collapse (mean) mrretage = rretage (sd) sdrretage = rretage (count) nrretage = rretage if ragender ==2 , by(ragey_e)
generate rretageucl = mrretage + 1.96*sdrretage/sqrt(nrretage)
generate rretagelcl = mrretage - 1.96*sdrretage/sqrt(nrretage)


twoway rcap rretagelcl rretageucl ragey_e, xtitle(Current age) ytitle(Expected retirement age) subtitle(By current age) saving(confidenceintfemales.gph)

restore

They look like the picture I added (after combining the figures).

My question is: is there a way to combine the confidence intervals into one figure, i.e. overlay them with different colors?

Hope someone can help and please let me know if my question is not clear.

Best,

Floor
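
One way (an untested sketch that reuses the variable names from the post): collapse over gender and age jointly, then overlay the two rcap plots in a single twoway call with different colors:
Code:
* Untested sketch: one collapse over both grouping variables, then two
* overlaid range-cap plots, one per gender.
preserve
collapse (mean) mrretage = rretage (sd) sdrretage = rretage ///
    (count) nrretage = rretage, by(ragender ragey_e)
generate rretageucl = mrretage + 1.96*sdrretage/sqrt(nrretage)
generate rretagelcl = mrretage - 1.96*sdrretage/sqrt(nrretage)
twoway (rcap rretagelcl rretageucl ragey_e if ragender == 1, lcolor(navy)) ///
       (rcap rretagelcl rretageucl ragey_e if ragender == 2, lcolor(maroon)), ///
       xtitle(Current age) ytitle(Expected retirement age) ///
       subtitle(By current age) legend(order(1 "Males" 2 "Females"))
restore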