What is the difference in interpretation between the partial R^2 and the SHAP value for a linear regression model?

Both quantify the contribution of a given variable to the output’s variability, but…

To calculate the coefficient of partial determination (partial R^2) for a given variable:
We fit the model with and without that variable and subtract the two R2 values. This implies fitting a separate model for each variable of interest. (Strictly, the plain difference is the incremental R2; the partial R2 divides that difference by 1 − R2 of the reduced model.)
I don’t know whether it’s better to use adjusted or unadjusted R2s for this calculation.
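To make the procedure concrete, here is a toy NumPy sketch (the simulated data, and the names `ols_ss_res` / `partial_r2`, are mine, not part of the question):

```python
import numpy as np

def ols_ss_res(X, y):
    # Residual sum of squares of an OLS fit with an intercept.
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    r = y - X1 @ beta
    return float(r @ r)

def partial_r2(X, y, j):
    # Coefficient of partial determination for column j:
    # the share of the reduced model's residual variance that is
    # explained by adding variable j back in.
    ss_full = ols_ss_res(X, y)
    ss_reduced = ols_ss_res(np.delete(X, j, axis=1), y)
    return (ss_reduced - ss_full) / ss_reduced

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=300)

print(partial_r2(X, y, 0))  # large: x0 drives y
print(partial_r2(X, y, 2))  # near zero: x2 is pure noise
```

Note that each call refits a reduced model, which is exactly the "fit with and without the variable" step described above.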

The procedure to calculate the SHAP value is more convoluted:
For a given input, we switch the variable of interest from a background (reference) value to its actual value while subsets of the other variables — one, two, three… at a time, in different orders — are held at background values, and we record the change in the model’s output each time. All these differences (marginal contributions) are averaged.
This process can be repeated from different starting background inputs, taking the grand average.
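The permutation-and-average procedure above can be sketched as a Monte Carlo Shapley estimator (a minimal illustration with made-up weights; `shapley_mc` and the toy linear model are my own assumptions, not from any library):

```python
import numpy as np

def shapley_mc(model, x, background, n_perm=1000, seed=0):
    # Monte Carlo Shapley values for one instance x.
    # For each random feature ordering, start from a randomly drawn
    # background row, switch features to their values in x one at a
    # time, and credit each feature with the jump in model output it
    # causes. Averaging over orderings estimates the Shapley values.
    rng = np.random.default_rng(seed)
    p = len(x)
    phi = np.zeros(p)
    for _ in range(n_perm):
        order = rng.permutation(p)
        z = background[rng.integers(len(background))].copy()
        prev = model(z)
        for j in order:
            z[j] = x[j]           # switch feature j to its actual value
            cur = model(z)
            phi[j] += cur - prev  # marginal contribution of j
            prev = cur
    return phi / n_perm

# Toy linear model with hypothetical weights: f(z) = 2*z0 + 0.5*z1 + 0*z2.
w = np.array([2.0, 0.5, 0.0])
model = lambda z: z @ w
rng = np.random.default_rng(1)
background = rng.normal(size=(200, 3))
x = np.array([1.0, 1.0, 1.0])
phi = shapley_mc(model, x, background)
print(phi)  # for a linear model, approx w * (x - background.mean(axis=0))
```

For a linear model each marginal contribution is the same in every ordering, which is why the estimate collapses to the simple form in the comment.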

SHAP is a model-agnostic (black-box) method that works for any model, but it’s very slow. Sampling a subset of the data and interpolating may be used to accelerate its calculation.
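For the special case asked about here — a linear model with (approximately) independent features — the slowness disappears: the Shapley values have a closed form, phi_j(x) = beta_j * (x_j − mean(x_j)), which is what the `shap` library’s `LinearExplainer` exploits. A minimal sketch with simulated data (the coefficients and data are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))  # independent features
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500)

# OLS fit (intercept + slopes).
X1 = np.column_stack([np.ones(len(y)), X])
coef = np.linalg.lstsq(X1, y, rcond=None)[0]
beta = coef[1:]

# Closed-form SHAP values for a linear model with independent features:
# phi_j(x) = beta_j * (x_j - mean(x_j)); no sampling needed.
phi = beta * (X - X.mean(axis=0))

# Local accuracy: per-row SHAP values sum to the prediction minus
# the mean prediction.
preds = X1 @ coef
print(np.allclose(phi.sum(axis=1), preds - preds.mean()))  # True
```

This closed form is also a useful anchor for the interpretation question: each SHAP value is a per-observation, coefficient-weighted deviation, whereas the partial R2 is a single global number per variable.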

Intuitively, how do the interpretations of the partial R2 and the SHAP value differ when applied to a linear regression model?