Fisher r-to-z transform / significance test for two different values of r^2

Short version: I have two values of r^2 from Pearson's correlation, one from a control group (.713) and one from an experimental group (.527), and I would like to quantitatively compare how well or poorly the points in each group cluster around their fit line.

Long version: The data consist of IDs and the volume of each individual object within an ID. On the x-axis I plotted the number of objects with a given ID (e.g., ID1 = 3 objects, ID2 = 5 objects), and on the y-axis the total volume of the objects in each ID (e.g., the 3 objects in ID1 have a total volume of 8; the 5 objects in ID2 have a total volume of 11). Using R's lm function, line = lm(y ~ x), I fit a line to the data in each group, and with qqnorm(resid(line)) I checked the normality of the residuals. (Correct me if I'm wrong, but I believe that if this plot indicates normally distributed residuals, then one can claim the linear model was an appropriate choice. Or should I use shapiro.test(resid(line)) here?)
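The workflow above can be sketched as follows. This is a minimal example with made-up data; `n_objects` and `total_volume` are placeholder column names, not from the original post. Note that the response (total volume) goes on the left-hand side of the formula, and the diagnostics are applied to the residuals, not the raw data.

```r
# Hypothetical control-group data (placeholder values)
ctrl <- data.frame(n_objects    = c(3, 5, 4, 7, 2, 6, 8, 5),
                   total_volume = c(8, 11, 10, 16, 5, 14, 19, 12))

# Fit the line: response ~ predictor
fit <- lm(total_volume ~ n_objects, data = ctrl)

# Check normality of the residuals
qqnorm(resid(fit))          # Q-Q plot of residuals
qqline(resid(fit))          # reference line; points near it suggest normality
sw <- shapiro.test(resid(fit))  # formal test; beware it is very sensitive at large n
sw$p.value
```

A large Shapiro-Wilk p-value fails to reject normality of the residuals, which is consistent with (though does not prove) the linear model's assumptions; the Q-Q plot is usually the more informative of the two checks.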

Also, I calculated r-squared values for each group and verified that they were significant (I believe this supports the claim that there is a relationship between x and y in each group). Specifically, for the control, t = 11.92 with 57 degrees of freedom, and for the experimental, t = 6.42 with 37 degrees of freedom. What I would like to know is whether there is a way to compare the r-squared values between the two groups and test whether the difference is significant in terms of how well the points cluster around their fit lines. I think something like an r-to-z transformation would be appropriate, but I am not sure how to proceed.
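The standard Fisher r-to-z test works on r, not r^2, so take square roots of the stated r^2 values (this assumes both correlations are positive). The sample sizes below are recovered from the stated degrees of freedom, since a simple linear regression t-test has n - 2 df (57 df gives n1 = 59, 37 df gives n2 = 39). A sketch in base R:

```r
# Correlations recovered from the r^2 values in the question
r1 <- sqrt(0.713)   # control,      n1 = 59 (df 57 + 2)
r2 <- sqrt(0.527)   # experimental, n2 = 39 (df 37 + 2)
n1 <- 59
n2 <- 39

z1 <- atanh(r1)     # Fisher's z transform, z = 0.5 * log((1 + r) / (1 - r))
z2 <- atanh(r2)
se <- sqrt(1 / (n1 - 3) + 1 / (n2 - 3))   # SE of the difference of z's

z_stat  <- (z1 - z2) / se
p_value <- 2 * pnorm(-abs(z_stat))        # two-sided p-value
c(z = z_stat, p = p_value)
```

With these numbers the test gives z of roughly 1.48 and a two-sided p of roughly 0.14, so the two correlations would not differ significantly at the usual 0.05 level.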

Note that it does depend a bit on what exactly you are measuring. For example, if you are looking at two different measures of the same thing (that is, you are trying to decide which test is best), then you should use a Bland-Altman plot instead.
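For reference, a Bland-Altman plot graphs the difference between two paired measurements against their mean, with the bias and 95% limits of agreement marked. A minimal sketch with simulated paired data (the measurements here are entirely made up for illustration):

```r
# Simulated paired measurements of the same quantity by two methods
set.seed(1)
true_val <- runif(30, min = 5, max = 15)
method_a <- true_val + rnorm(30, sd = 0.5)
method_b <- true_val + rnorm(30, sd = 0.5)

avgs  <- (method_a + method_b) / 2
diffs <- method_a - method_b
bias  <- mean(diffs)
loa   <- bias + c(-1.96, 1.96) * sd(diffs)  # 95% limits of agreement

plot(avgs, diffs,
     xlab = "Mean of the two methods",
     ylab = "Difference (A - B)",
     main = "Bland-Altman plot")
abline(h = bias, lty = 1)   # mean difference (bias)
abline(h = loa,  lty = 2)   # limits of agreement
```

If most points fall within the limits of agreement and the bias is near zero, the two methods agree well; a trend in the differences across the x-axis would indicate proportional bias.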