Hi, I am replicating a paper in Matlab that reports results from Stata. Problem setup: the paper uses, for example, -reg ... i.clustervar, cluster(clustervar)- to obtain POLS estimates with standard errors clustered at the individual (country) level of the panel. When Stata omits some of the clustervar dummies because of collinearity, it still uses the within-group variance of those omitted groups to calculate the standard errors of the non-omitted coefficients. I stumbled upon this calculation method by pure luck, since I had initially been summing only over the groups whose dummies were not omitted. More precisely, the sum of u*u' over groups inside the asymptotic variance formula runs over all groups, including those whose dummies were omitted. Is this a feature or a bug? If it is a feature, why?
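To make the question concrete, here is a minimal Matlab sketch of how I understand Stata's cluster-robust (sandwich) variance for -regress- with cluster(). The variable names (y, X, g) are mine, not the paper's; X is assumed to already exclude the omitted dummies, while g holds the cluster id for every observation, including those whose dummy was dropped. The finite-sample correction is what I believe -regress- applies, so please correct me if that part is wrong.

```matlab
% Sketch of a Stata-style cluster-robust VCE for POLS.
% y: N x 1 outcome, X: N x k regressors (omitted dummies excluded),
% g: N x 1 cluster (country) id for ALL observations.

beta  = X \ y;                       % POLS point estimates
e     = y - X*beta;                  % residuals for ALL observations
XX    = X' * X;
ids   = unique(g);
G     = numel(ids);
[N,k] = size(X);

meat = zeros(k, k);
for j = 1:G
    idx  = (g == ids(j));            % rows belonging to cluster j
    sg   = X(idx,:)' * e(idx);       % cluster score u_g = X_g' e_g
    meat = meat + sg * sg';          % summed over ALL groups, even those
end                                  % whose dummies were omitted

c  = (G/(G-1)) * ((N-1)/(N-k));      % finite-sample correction (regress)
V  = c * (XX \ meat / XX);           % sandwich: (X'X)^-1 * meat * (X'X)^-1
se = sqrt(diag(V));                  % clustered standard errors
```

The point at issue is the loop: the sum over u_g u_g' includes the groups whose dummies were dropped, and that is what reproduces Stata's standard errors in my replication.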

From a model-specification point of view, it seems to me that we should not be using data from the omitted groups to calculate the standard errors. On the other hand, including them increases the standard errors and arguably makes them more robust(?), so I can see the appeal from that direction. However, if that is the rationale, it seems the standard errors carry a positive bias.
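For comparison, this is the restricted calculation I had been doing initially, which gave me smaller standard errors. It reuses X, e, g, XX, k, and c from the sketch above; kept_ids is a hypothetical vector containing only the ids of groups whose dummies were not omitted.

```matlab
% Comparison only: sum restricted to groups whose dummies were kept.
meat_kept = zeros(k, k);
for j = 1:numel(kept_ids)
    idx       = (g == kept_ids(j));
    sg        = X(idx,:)' * e(idx);
    meat_kept = meat_kept + sg * sg';
end
se_kept = sqrt(diag(c * (XX \ meat_kept / XX)));  % smaller than se above
```

Since each u_g u_g' term is positive semidefinite, dropping groups can only shrink the meat matrix, which is consistent with the smaller standard errors I was getting before switching to the full sum.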