I am studying the impact of a non-monetary rewards incentive, introduced by an online platform, on the quality of online reviews. The rewards program was introduced by the platform in 2016. Users who self-selected into the program had to complete an additional registration on the platform in order to start earning points for their contributions (reviews). Although both types of users (i.e., program participants and non-participants) are intrinsically motivated, my assumption is that the extrinsically motivated participants will provide reviews of higher quality (measured in terms of readability, cohesion, coherence, length, etc.). To test this hypothesis, I collected post-level yearly (2008-2018) data for a sample of platform users:
Code:
                    Before 2016   After 2016      Total
Non-participants           8247        20825      29072
Participants              13348       112498     125846
Total                     21595       133323     154918

Total number of posts: 154918
Total number of users: 8473
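In regression form, the specification I have in mind is the standard two-group, two-period diff-in-diff (where y_{it} is a review-quality measure for user i in year t, time_t indicates the post-2016 period, and treated_i indicates program participation):

y_{it} = \beta_0 + \beta_1 \, time_t + \beta_2 \, treated_i + \beta_3 \, (time_t \times treated_i) + \varepsilon_{it}

with \beta_3 being the diff-in-diff estimate of the reward program's effect on review quality. I estimate this in Stata as: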
Code:
reg y time##treated, vce(cluster user_id)
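For context, here is a rough sketch of how the time and treated indicators can be generated, plus an alternative specification that absorbs user fixed effects and adds year dummies. The variable names year and participant are placeholders for whatever is actually in the dataset; y, user_id, time, and treated are as above:

Code:
* Placeholder names: year = calendar year of the post, participant = rewards-program flag
gen time = (year >= 2016)            // post-period indicator (2016 and later)
gen treated = (participant == 1)     // 1 if the user enrolled in the rewards program
gen did = time * treated             // diff-in-diff interaction term

* Alternative: absorb user fixed effects, add year dummies, cluster by user
areg y did i.year, absorb(user_id) vce(cluster user_id)

In the fixed-effects variant only the interaction term is identified, since the user fixed effects absorb treated and the year dummies absorb time.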
Does my approach to testing the impact of the rewards incentive seem plausible as a difference-in-differences analysis?
Additionally, if the platform had randomly selected the users who would participate in the program, I would think of this research design as a natural experiment. Or, if the platform had randomly sent out invitations prompting users to participate and some of them decided to do so, I would think of it as a randomized controlled trial. However, in my case users had to self-select into the program once it was offered by the platform to all its users -- is there a specific name for such a research design?
Please let me know if you have any additional questions.
Thank you for your feedback!