I doubt it’s a matter of finding a better algorithm. Nothing in your problem statement rules out inherent sensitivity of the solution to the given data, regardless of what algorithm you use, and in fact such sensitivity looks almost guaranteed unless there is some additional structure. The usual standard for stability in solving a linear system is whether an algorithm is backward stable, meaning the computed solution is the exact solution of a slightly perturbed system. But in problems of the form you describe, small perturbations in the matrices can change the solution drastically.
Take a 2\times 2 example in which D_1 = I and
A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, \qquad
A^{-1} = \begin{bmatrix} 1 & -1 \\ 0 & 1 \end{bmatrix}, \qquad
D_2 = \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix}, \qquad R = r
with a \ll b. A little algebra gives x^- = b y^-, x^+ = rb y^-, and
y^+ = \left(1 + \frac{b}{a} (r-1)\right) y^-.
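In fact, you can read the sensitivity straight off this formula: the relative condition number of y^+ with respect to r works out to (b/a) r / \left(1 + \frac{b}{a}(r-1)\right), which is b/a at r = 1. Here is a small symbolic sketch of that computation (sympy assumed available; it only differentiates the closed form above, it does not re-derive it from your system):

```python
import sympy as sp

a, b, r, ym = sp.symbols('a b r y_minus', positive=True)
yp = (1 + (b/a)*(r - 1))*ym          # y^+ from the formula above

# Relative condition number of y^+ with respect to r: (dy^+/dr) * (r / y^+)
kappa = sp.simplify(sp.diff(yp, r)*r/yp)
print(kappa)               # equals b*r/(a + b*(r - 1))
print(kappa.subs(r, 1))    # b/a: enormous when a << b
```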
Suppose r = 1, so that y^+ = y^-. If r comes with a small relative error, so that you really have \hat{r} = 1+\epsilon, and there are no further errors in your algorithm, then you would compute
\hat{y}^+ = \left(1 + \frac{b}{a} ((1+\epsilon)-1)\right) y^- =
\left(1 + \frac{b}{a} \epsilon\right) y^- = \left(1 + \frac{b}{a} \epsilon\right) y^+.
Equivalently,
\frac{\hat{y}^+ - y^+}{y^+} = \frac{b}{a} \epsilon
which is a large relative error in y^+ even if \epsilon is very small. If \epsilon is of the order of the unit round-off, say about 10^{-16}, and b/a is around 10^{18}, as you suggest might be the case, then
\frac{\hat{y}^+ - y^+}{y^+} = 100.
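You can confirm this directly in double precision. A minimal sketch with illustrative values (these are assumptions for the demo, not data from your problem): y^- = 1, b/a = 10^{18}, and r perturbed by one ulp at 1, i.e. 2^{-52}, so that 1 + \epsilon is exactly representable.

```python
ratio = 1e18                        # b/a (illustrative)
eps = 2.0**-52                      # one ulp at 1.0, about 2.2e-16

def y_plus(r, y_minus=1.0):
    # y^+ = (1 + (b/a)(r - 1)) y^-
    return (1.0 + ratio*(r - 1.0))*y_minus

exact = y_plus(1.0)                 # r = 1 gives y^+ = y^- = 1
computed = y_plus(1.0 + eps)        # r carries a one-ulp relative error
print((computed - exact)/exact)     # ~222: no correct digits at all
```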
Either way, you have no correct digits and don’t even have the correct order of magnitude. Another way to look at it: getting full double-precision accuracy in y^+ requires \epsilon \approx 10^{-16} / 10^{18} = 10^{-34}, so you would need to know R accurately to about 34 digits in this example, and to more than 18 digits to have any hope of even one correct digit in y^+. You can’t even store R with that many digits in double precision. If R involves measured data with larger errors, you are almost certainly in an even worse situation. And all of this holds no matter what algorithm you use.
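Those digit counts are easy to check in extended precision. A quick sketch (mpmath assumed available; the values are illustrative, as before):

```python
from mpmath import mp, mpf

mp.dps = 60                          # 60-digit working precision
ratio = mpf(10)**18                  # b/a

def y_plus(r):
    return (1 + ratio*(r - 1))*mpf(1)    # y^- = 1

for digits in (16, 18, 34):
    r_hat = 1 + mpf(10)**(-digits)   # r correct to roughly `digits` digits
    rel_err = abs(y_plus(r_hat) - y_plus(mpf(1)))/abs(y_plus(mpf(1)))
    print(digits, rel_err)
# 16 digits -> ~1e2, 18 digits -> ~1.0, 34 digits -> ~1e-16
```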
Sometimes a specific algorithm introduces unnecessary ill-conditioning at some intermediate step, and then a better algorithm helps. But when the problem formulation itself is inherently ill-conditioned, that is usually a sign that the formulation needs to be reconsidered.