While preparing teaching material for my undergrads I noticed a peculiar, not to say undesirable, feature of tests done after regressions with interactions.
Imagine you are running an OLS model with two continuous variables and a dummy. You want to interact the dummy with the continuous variables. Probably a common enough situation.
Using the auto data set, you might do:
reg price c.mpg##i.foreign c.headroom##i.foreign
You might also try
reg price foreign c.mpg#i.foreign c.headroom#i.foreign
Here is the good news: they are the same model: there is no difference between the two, zilch, nada.
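If you want to convince yourself of the equivalence, a quick check (just a sketch; yhat1, yhat2, and diff are names I made up) is to compare the fitted values from the two specifications:

sysuse auto, clear
quietly reg price c.mpg##i.foreign c.headroom##i.foreign
predict yhat1
quietly reg price foreign c.mpg#i.foreign c.headroom#i.foreign
predict yhat2
* if the two commands fit the same model, the fitted values should agree
gen diff = abs(yhat1 - yhat2)
summarize diff

The maximum of diff should be zero up to machine precision: the two commands are different parameterizations of one and the same fit.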
Naturally you want to test for the joint significance of the two interactions so you will probably try:
test _b[1.foreign#c.mpg]=_b[1.foreign#c.headroom]=0
Here is where it gets spooky. This command will give you different results depending on which way you estimated the same model.
Having done it the hard way, I can confirm that only after the first specification above will you get the correct answer.
What is being tested in the second case I don't know.
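My best guess, and I stress it is only a guess, is that the same coefficient name refers to different quantities in the two parameterizations. With ##, _b[1.foreign#c.mpg] is the difference in mpg slopes between foreign and domestic cars; with # alone and no main effect of mpg, Stata estimates a separate mpg slope for each group, so _b[1.foreign#c.mpg] is the foreign-car slope itself. If that reading is right, the equivalent joint test after the second specification would compare the group slopes directly:

test (_b[1.foreign#c.mpg] = _b[0.foreign#c.mpg]) (_b[1.foreign#c.headroom] = _b[0.foreign#c.headroom])

which, if I have understood the parameterization correctly, should reproduce the F statistic you get from the first specification.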
The output looks the same:
( 1) 1.foreign#c.mpg - 1.foreign#c.headroom = 0
( 2) 1.foreign#c.mpg = 0


But it gives a different value. Any ideas?
I haven't looked at the documentation so I don't know if users are warned about this. But who reads the documentation anyway?

As a postscript, I drew this to the attention of Stata via Tech Support. Unfortunately, the statistician I was dealing with insists, in repeated emails, that these two regressions are different, notwithstanding the clear evidence otherwise.
Careful with your interactions.