Hello, I am writing my bachelor thesis, trying to measure the effect of cryptocurrency 'forks' (a fork is a kind of event) on cryptocurrency prices (daily high). I transformed high to log(high) and performed the augmented Dickey-Fuller test. Using 1 to 30 lags, I cannot reject the null hypothesis of a unit root in the time series for any of the cryptocurrencies.
My supervisor told me to run the intended AR(1) model log(high_t) = α + β·log(high_{t-1}) + γ·fork_t + ε_t anyway, store the residuals, and then run the ADF test on the residuals: if I can reject the null hypothesis on the residuals, I am okay; if I can't, I should add lagged variables. Does this make sense? I thought I couldn't use an AR(1) model if my data have a unit root.