Dear Statalist members,
I need your help.
I am doing work that involves assessing the impact of issuing a specific bond on governance variables (e.g. independent directors, size of the board of directors, etc.).

First of all, I ran a matching with Stata's psmatch2 command to find, for each issuer of this bond, the three most similar companies that are not issuers.

Now I have to run a difference-in-differences (DID) analysis. Here I ran into two problems:
1. creating a specific ID for each matched pair
2. creating a PRE_POST indicator for the treated group and the control group

My difficulty comes from the panel structure of the data and from the fact that each company may have issued a bond in several different years.
As a result, a control-group company can be matched to a company treated in 2014 and to another company treated in 2018.

In this case, I struggle to assign a specific ID to each matched pair and to create the time indicator (PRE_POST).

I thought of duplicating the companies that are matched more than once, so that each copy has a specific ID for its pairing and a PRE_POST consistent with its treated company (see the sketch after my questions below). This would increase the size of the control group; would that be a problem?
Would it be statistically correct?
Do you have any other ideas?
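
To make the duplication idea concrete, here is a minimal sketch of what I have in mind. It assumes I have already built a pair-level file pairs.dta (one row per treated-control link, with pair_id, treated_issuer, control_issuer and issue_year; I show how I would build it from the psmatch2 output further below) and that the firm-year panel is in long form with issuer and YEAR as in my -dataex- example. All file names are made up:

Code:
* stack one copy of the firm-year history per matched pair, so that a control
* firm matched to several treated firms appears once per pairing
use pairs.dta, clear                        // hypothetical: pair_id treated_issuer control_issuer issue_year

* reshape so that each pair contributes two rows: its treated firm (Treatment=1) and its control (Treatment=0)
rename (treated_issuer control_issuer) (issuer1 issuer0)
reshape long issuer, i(pair_id) j(Treatment)

* attach the full firm-year history of every firm in every pair
joinby issuer using firm_year_panel.dta     // hypothetical file: issuer, YEAR, outcome and covariates

* PRE_POST is defined relative to the issue year of the treated firm in the pair
gen byte PRE_POST = YEAR >= issue_year

* pair_id now identifies the matched pair in the DID regressions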



The code I used to calculate the propensity score and perform the matching:
Code:
 
xtlogit Treatment logattivo ROE ROA SIC, fe   /* with a logit model I calculate the propensity score; my covariates are total assets, ROA, ROE, industry */
predict pscore
gen pscore1 = YEAR*10 + SIC1*100 + pscore
psmatch2 TRATTAMENTO2, pscore(pscore1) logit
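If it helps, this is how I was thinking of turning the psmatch2 output into the pair list (pairs.dta) used in the sketch above. It assumes the matching is re-run with the -neighbor(3)- option, so that psmatch2 creates _id, _n1, _n2 and _n3 (the internal identifiers of the three matched controls for each treated observation). It only shows the mechanics (in practice I would probably first keep only the issue-year observation of each treated firm); the file names are again made up:

Code:
* re-run the matching asking explicitly for three neighbours
psmatch2 TRATTAMENTO2, pscore(pscore1) logit neighbor(3)

* map psmatch2's internal _id back to the firm code
preserve
keep _id issuer
rename (_id issuer) (control_id control_issuer)
save idmap.dta, replace                     // hypothetical temporary file
restore

* build one row per treated observation and matched neighbour
preserve
keep if _treated == 1 & _support == 1
keep issuer YEAR _n1 _n2 _n3
rename (issuer YEAR _n1 _n2 _n3) (treated_issuer issue_year nb1 nb2 nb3)
gen long pair_set = _n                      // one matched set per treated observation
reshape long nb, i(pair_set) j(neighbour)   // one row per treated-control link
rename nb control_id
merge m:1 control_id using idmap.dta, keep(match) nogenerate
gen long pair_id = _n                       // a distinct ID for every treated-control pair
save pairs.dta, replace
restore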
If the duplication approach is valid, I have found three methods on this forum for the DID analysis. Which of these is more correct for estimating the impact of a bond issue (comparing before and after the issue) on an outcome y (the ESG score) relative to the respective control sample? What are their differences?

After running -xtset ID-:
1. I run a regression on the panel data:
Code:
xtreg ESG.SCORE  i.Treatment##i.PRE_POST, fe    vce(cluster ID)
2. I use the -mixed- command:

Code:
gen interaction = Treatment * PRE_POST   // factor-variable notation (1.Treatment#1.PRE_POST) cannot be used in -generate-, so I take the product

* within-between decomposition: b_* are the panel-level means, w_* the within-panel deviations
foreach v of varlist Treatment PRE_POST interaction {
    by ID, sort: egen b_`v' = mean(`v')
    gen w_`v' = `v' - b_`v'
}

mixed ESG.SCORE w_* b_* || ID: || issuer:
3. I use the -diff- command:
Code:
diff ESG.SCORE, t(Treatment) p(PRE_POST) id(ID)
My data:


----------------------- copy starting from the next line -----------------------
Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input byte ID int(issuer YEAR) byte(Treatment PRE_POST) double ESG.SCORE
1  56 2016 0 1     .
1   2 2015 1 1  9.18
1 151 2016 0 1     .
1 128 2014 0 1     .
1 128 2015 0 1     .
1 151 2019 0 1     .
1  56 2013 0 0     .
1  56 2017 0 1     .
1  56 2014 0 1     .
1 128 2012 0 0     .
1   2 2019 1 1     .
1  56 2019 0 1     .
1 151 2014 0 1     .
1 128 2010 0 0     .
1 128 2013 0 0     .
1 128 2016 0 1     .
1 128 2018 0 1     .
1 128 2017 0 1     .
1   2 2017 1 1 76.42
1   2 2010 1 0    25
1 151 2018 0 1     .
1  56 2010 0 0     .
1 128 2011 0 0     .
1  56 2018 0 1 39.36
1   2 2018 1 1 75.69
1 151 2011 0 0     .
1 128 2019 0 1     .
1  56 2011 0 0     .
1 151 2010 0 0     .
1  56 2015 0 1     .
1   2 2013 1 0 38.04
1  56 2012 0 0     .
1   2 2014 1 1 26.04
1   2 2012 1 0 32.22
1 151 2017 0 1     .
1   2 2016 1 1 17.35
1 151 2013 0 0     .
1 151 2012 0 0     .
1 151 2015 0 1     .
1   2 2011 1 0  26.6
2 181 2016 0 0     .
2 181 2010 0 0     .
2 181 2014 0 0     .
2 133 2013 0 0     .
2 181 2017 0 0     .
2  36 2019 1 1     .
2  36 2017 1 0     .
2 148 2015 0 0     .
2 133 2014 0 0     .
2 148 2018 0 0     .
2  36 2018 1 0 27.96
2 181 2019 0 1     .
2 181 2012 0 0     .
2 148 2012 0 0     .
2 181 2011 0 0     .
2  36 2015 1 0     .
2 133 2011 0 0     .
2 133 2016 0 0     .
2  36 2011 1 0     .
2  36 2013 1 0     .
2  36 2010 1 0     .
2 133 2017 0 0     .
2 133 2010 0 0     .
2 148 2013 0 0     .
2 148 2010 0 0     .
2 181 2018 0 0     .
2 181 2015 0 0     .
2 133 2015 0 0     .
2 133 2019 0 1     .
2  36 2016 1 0     .
2  36 2014 1 0     .
2 181 2013 0 0     .
2 148 2014 0 0     .
2 148 2016 0 0     .
2 148 2019 0 1     .
2  36 2012 1 0     .
2 148 2011 0 0     .
2 133 2018 0 0 52.97
2 133 2012 0 0     .
2 148 2017 0 0     .
3  44 2013 1 0     .
3 118 2014 0 0     .
3 122 2013 0 0 40.36
3 122 2015 0 0    50
3 122 2016 0 0 82.45
3 210 2014 0 0 83.68
3 210 2011 0 0 79.03
3 210 2016 0 0 85.05
3  44 2016 1 0     .
3  44 2017 1 1     .
3 118 2013 0 0     .
3  44 2011 1 0     .
3 122 2011 0 0 22.94
3  44 2012 1 0     .
3  44 2015 1 0     .
3 122 2012 0 0 26.79
3 210 2013 0 0 89.78
3 210 2018 0 1 92.95
3 210 2012 0 0 83.33
3  44 2018 1 1 10.19
end
------------------ copy up to and including the previous line ------------------

Sorry for the many questions, but I'm a beginner.