I am running a difference-in-differences regression in which the treatment variable is continuous (i.e., it takes values between 0 and 1) rather than binary. I obtain standardized coefficients by regressing standardized Y on standardized X, where X is the treatment-intensity variable. I find that a 1 SD change in X is associated with a 0.16 SD change in Y, and I need to interpret this coefficient in percentage terms. The distributions of the unstandardized X and Y are as follows:

Variable        Mean       S.D.
X          0.4197373      0.086
Y           34.94349   27.37068

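To fix notation, my understanding is that the standardized coefficient relates to the coefficient estimated on the raw scale via

$$\hat{\beta}_{\text{std}} \;=\; \hat{\beta}_{\text{raw}} \times \frac{\mathrm{SD}(X)}{\mathrm{SD}(Y)},$$

so the 0.16 is measured in standard deviations of Y per standard deviation of X.
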
Is the following back-of-the-envelope calculation correct?

A 1 SD change in X corresponds to a 0.16 SD change in Y
= 0.16 × 0.086
≈ 0.014, i.e. roughly a 1.4% change in Y
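
In case it helps, here is the same arithmetic written out as a small Python snippet; the variable names are just my own labels and all numbers come from the table above (I'm not sure the alternative readings at the end are the right ones, hence the question):

```python
# Back-of-the-envelope check using the summary statistics from the table above.
# Variable names are just labels for this post.

beta_std = 0.16       # standardized coefficient: SDs of Y per 1 SD of X
sd_x = 0.086          # standard deviation of unstandardized X
sd_y = 27.37068       # standard deviation of unstandardized Y
mean_y = 34.94349     # mean of unstandardized Y

# The calculation I wrote above: multiply the standardized coefficient
# by the SD of X and read the result as a proportional change in Y.
back_of_envelope = beta_std * sd_x
print(f"beta_std * sd_x = {back_of_envelope:.4f}  (~{back_of_envelope:.1%})")

# For reference: the implied change in Y's original units, and that change
# as a share of Y's mean (not claiming this is the right "percentage" reading).
change_in_y_units = beta_std * sd_y
print(f"beta_std * sd_y = {change_in_y_units:.2f} units of Y")
print(f"...as a share of mean(Y): {change_in_y_units / mean_y:.1%}")
```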

I am wondering if there is a more robust way of interpreting these coefficients. Thanks in advance!