I am struggling to do the right thing here, and even to use the right terminology, because I'm not sure which technique is correct, though I can visualize what I want to do. Here is a sample dataset:

Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input long id double(height weight)
 14 60 150
 14 60   .
 22  . 180
 22 48 180
  7 67 211
  7 55 211
218 50 306
218 50 306
100 71 175
100 65 175
100 81 190
end
I eventually need to merge this dataset with another, but because I have duplicates on the key variable "id", a -merge- on id will not work. So I would ultimately like to "collapse" this dataset down to one observation per unique "id", which would give me 5 observations:
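As a quick diagnostic sketch (assuming the example dataset above is in memory), these commands confirm that id does not uniquely identify observations and show how many duplicates there are:

Code:
* error out if id is not a unique identifier
isid id

* tabulate how many copies of each id exist
duplicates report id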
Code:
id
14
22
7
218
100
But I cannot just drop the duplicate observations, because the other two variables (height and weight) may or may not have unique values within an id, and I need to preserve all of them. I also don't want to use -collapse-, since it would force me to summarize (mean, max, etc.) when I just need to keep the raw data.
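One side note, as a sketch: rows that are exact duplicates on every variable (like the two 218 rows above) carry no extra information, so they could arguably be dropped first without losing anything:

Code:
* drop observations that are identical on ALL variables
duplicates drop

Whether that is desirable depends on whether a repeated identical row is itself meaningful in the real data.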

Ideally the corrected set would look like this (can I create new variables in this way?) :
id   height1 weight1 height2 weight2 height3 weight3
14   60      150     60      .       .       .
22   .       180     48      180     .       .
7    67      211     55      211     .       .
218  50      306     50      306     .       .
100  71      175     65      175     81      190

The only options I have thought of are the -collapse- command, or the user-written -collapseunique- command I came across.
I also thought I could -reshape- from long to wide, but I would need a "j" variable, which I do not have here...

Could I create another variable "count" that counts upward by 1 for each new occurrence of a given "id" value? That would give me the repeating "j" variable to use, although some ids would have a count of 2 and others 3 or 4, depending on how many duplicates there are.
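That idea should work. A minimal sketch of it (the variable name j is my own choice; an uneven number of duplicates per id is fine, since -reshape wide- just leaves the missing slots as .):

Code:
* number the observations within each id: 1, 2, 3, ...
bysort id: gen j = _n

* go wide: creates height1 weight1 height2 weight2 ...
reshape wide height weight, i(id) j(j)

After this, id uniquely identifies observations, so a subsequent -merge 1:1 id- (or -merge m:1 id- from the other side) should run.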