Hello Statalist,

I am trying to control for attribute non-attendance in a choice experiment. The choice experiment has 333 respondents and more than 15,000 observations. Using lclogit, I have constructed three latent classes based on preference heterogeneity. Now I want to compare the output of lclogit with the output of an eaalogit model (the endogenous attribute attendance logit model described by Hole (2011)). The reasoning is that if there are no large differences between the two models, the bias from attribute non-attendance in my results is small.

My first question is the following: is there a way to combine the lclogit and eaalogit models? Right now I simply split my respondents into three (non-latent) classes and fit an eaalogit model to each class separately. However, I know that I cannot fully compare these results with the lclogit results, because the latent class weighting used in lclogit is not used in eaalogit.
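For reference, my splitting step looks roughly like this (a sketch, assuming lclogit was run with nclasses(3); lclogitpr with the cp option stores the posterior class-membership probabilities, and each respondent is assigned to the class with the highest posterior probability):

```stata
* After lclogit ..., nclasses(3):
lclogitpr cp, cp                       // creates cp1-cp3 (posterior probabilities)
egen double cpmax = rowmax(cp1 cp2 cp3)
generate byte class = .
forvalues c = 1/3 {
    replace class = `c' if float(cpmax) == float(cp`c')   // modal class assignment
}
```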

My second problem occurs when I try this simplified approach of fitting an eaalogit model per class. Estimation works for two of my three classes, but when I try to fit an eaalogit model for the third class, I receive the following error: "could not calculate numerical derivatives -- discontinuous region with missing values encountered".

Do you know what I can do to solve this? I have tried the following two command lines; both give the aforementioned error:

eaalogit choicenumber very_early early late very_late months fertilizer_cost yield risknumber market80 market60 market40 if class==2, group(id_0_card) id(corrhhid_numeric) keaa(6) eaaspec(x1 x1 x1 x1 x2 x3 x4 x5 x6 x6 x6)

eaalogit choicenumber months yield market80 market60 market40 very_early early late very_late fertilizer_cost risknumber if class==2, group(id_0_card) id(corrhhid_numeric) keaa(3) eaaspec(one one one one one x1 x1 x1 x1 x2 x3)

*The attributes I have specified as "one" (months, yield, and the market variables) are those for which I expect attribute non-attendance to be less of an issue, as their effects are strongly significant in the latent class model (lclogit).

Thank you very much for your help.