We know how to derive the Hamilton–Jacobi–Bellman (HJB) equation for the control problem in which a process $X_t$ is controlled up until it is stopped at a stopping time $\tau$: $$V_t=\sup_{\tau,(A_s)_{s\geq t}}E\Big{[}\int^{\tau}_te^{-\rho (s-t)}f(s,X_s,A_s)\,ds+e^{-\rho (\tau-t)}g(\tau, X_{\tau})\,\Big|\,\mathcal{F}_t\Big{]}$$ subject to $$dX_t=\mu(t,X_t,A_t)\,dt+\sigma(t,X_t,A_t)\,dB_t, \quad X_t =x,$$ where $\mathcal{F}_t$ is the filtration generated by the Brownian motion up to time $t$. The HJB equation takes the form of a variational inequality: $$0 = \max\Big{\{} g(t,x)-v(t,x),\ \sup_a\Big[-\rho v(t,x) + f(t,x,a) + (\partial_t v)(t,x)+\mu(t,x,a)\,(\partial_x v)(t,x) + \frac{\sigma^2(t,x,a)}{2}(\partial_{xx} v)(t,x)\Big]\Big{\}}.$$
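For intuition, the variational inequality above can be solved numerically with an explicit finite-difference scheme: march backwards in time, take the pointwise supremum over controls in the generator term, and then project onto the obstacle $v \geq g$. Here is a minimal sketch for a toy problem; all model choices (the control set $\{-1,0,1\}$, running payoff $f(a)=-0.1|a|$, stopping payoff $g(x)=\max(1-x,0)$, and parameter values) are illustrative assumptions, not part of the original problem.

```python
import numpy as np

# Toy model (assumed for illustration): dX = a dt + sigma dB, a in {-1, 0, 1},
# running payoff f(a) = -0.1*|a|, stopping payoff g(x) = max(1 - x, 0),
# finite horizon T with terminal condition v(T, x) = g(x).
rho, sigma, T = 0.1, 0.5, 1.0
x = np.linspace(-2.0, 2.0, 81)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / sigma**2          # explicit scheme: keep dt small for stability
controls = [-1.0, 0.0, 1.0]

g = np.maximum(1.0 - x, 0.0)         # stopping payoff (the obstacle)
v = g.copy()                         # terminal condition v(T, x) = g(x)

n_steps = int(np.ceil(T / dt))
for _ in range(n_steps):
    vx = np.gradient(v, dx)          # central differences, one-sided at the edges
    vxx = np.zeros_like(v)
    vxx[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
    # sup over the control of the generator term
    hamiltonian = np.max(
        [-rho * v - 0.1 * abs(a) + a * vx + 0.5 * sigma**2 * vxx
         for a in controls],
        axis=0,
    )
    v = v + dt * hamiltonian         # explicit Euler step, marching backward in time
    v = np.maximum(v, g)             # enforce the stopping constraint v >= g
```

The projection step `np.maximum(v, g)` is what encodes the first branch of the variational inequality: wherever the continuation value would fall below the stopping payoff, immediate stopping is optimal.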

In an application I stumbled upon the following control problem, in which stopping changes the drift and/or the volatility of the process: $$V_t=\sup_{\tau,(A_s)_{s\geq t}}E\Big{[}\int^{\tau}_te^{-\rho (s-t)}f(s,X_s,A_s)\,ds+e^{-\rho (\tau-t)}g(\tau, X_{\tau},A_{\tau})\,\Big|\,\mathcal{F}_t\Big{]}$$ subject to $$dX_t=\mu_1(t,X_t,A_t)\,dt+\sigma_1(t,X_t,A_t)\,dB_t, \quad X_t =x, \quad \text{if } t \leq \tau$$ and $$dX_t=\mu_2(t,X_t,A_{\tau})\,dt+\sigma_2(t,X_t,A_{\tau})\,dB_t \quad \text{if } t > \tau.$$

The differences are that the stopping time $\tau$ changes the law of motion of $X_t$, and that the control at that time, $A_{\tau}$, affects both the drift and volatility of the post-stopping process and the terminal payoff at stopping.
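For concreteness, the kind of structure I have in mind (a sketch under the assumption that the terminal payoff arises as a discounted functional of the frozen post-stopping process; the reward $h$ below is hypothetical) would be
$$g(t,x,a) = E\Big[\int_t^{\infty} e^{-\rho(s-t)}\,h(s, X_s, a)\,ds \,\Big|\, X_t = x\Big], \qquad dX_s=\mu_2(s,X_s,a)\,ds+\sigma_2(s,X_s,a)\,dB_s,$$
so that, by Feynman–Kac, $g$ would solve the linear equation
$$\rho\, g(t,x,a) = h(t,x,a) + (\partial_t g)(t,x,a) + \mu_2(t,x,a)\,(\partial_x g)(t,x,a) + \frac{\sigma_2^2(t,x,a)}{2}\,(\partial_{xx} g)(t,x,a).$$
Since after $\tau$ the control is frozen at $A_{\tau}$, conditional on $(\tau, X_{\tau}, A_{\tau})$ the post-stopping phase is uncontrolled, which suggests the obstacle in the variational inequality should become $\sup_a g(t,x,a)$ — but I am not sure this reduction is correct in general.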

I couldn't find the HJB equation (assuming everything is well defined). Any hints on how I could proceed to find it?