Is it enough to use RMSD to decide significance?

#1
Hi

I have calculated the RMSD between two different simulations to see how large the error/standard deviation is.
How can I decide whether the RMSD value is small enough to show that there is no significant difference between the simulations?
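For concreteness, this is roughly how I calculate it (a minimal sketch; the arrays are made-up stand-ins for paired outputs of the two runs):

```python
import numpy as np

# Made-up paired outputs from the two simulation runs
sim_a = np.array([1.02, 0.98, 1.10, 0.95, 1.04])
sim_b = np.array([1.00, 1.01, 1.07, 0.97, 1.02])

# Root-mean-square deviation between the two runs
rmsd = np.sqrt(np.mean((sim_a - sim_b) ** 2))
print(f"RMSD = {rmsd:.4f}")
```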

thanks
 

katxt

Active Member
#2
ZS0020 said:
no significant difference between the simulations
This isn't really a statistical question - it is a question of what you are prepared to tolerate. To a statistician "no significant difference" simply means "I don't know if there is a difference or not." To an engineer "no significant difference" means "it doesn't matter which I use. They are both equally good at their job."
 

hlsmith

Less is more. Stay pure. Stay poor.
#3
Please define the acronym RMSD. Look up ROPE from Bayesian analysis. It will help guide you, but you will have to set some parameters.
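Roughly, the idea looks like the sketch below. The posterior draws and the ±0.05 bounds are made-up placeholders; choosing those bounds is the part you have to do yourself:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in posterior samples of the difference between the two simulations
diff_samples = rng.normal(loc=0.01, scale=0.03, size=10_000)

# ROPE: region of practical equivalence, with bounds set by the analyst
rope_low, rope_high = -0.05, 0.05
inside = np.mean((diff_samples > rope_low) & (diff_samples < rope_high))
print(f"{inside:.1%} of the posterior lies inside the ROPE")
# A common rule: declare practical equivalence if (nearly) all of the
# 95% highest-density interval falls inside the ROPE.
```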
 

katxt

Active Member
#5
hlsmith: ZS0020's post above really needs to be read in conjunction with the previous post at
http://www.talkstats.com/threads/the-use-of-rmsd-and-nrmsd.76700/, which seems to have become separated somehow.
We have two ways of doing something and need to decide if the extra cost involved in increasing the precision is worthwhile.
The reference to ROPE is interesting, but it's hard to see how it applies here; that is more of an equivalence-testing situation for means.
A scree plot might give you an idea of the diminishing returns (see the sketch below), but I still think it's a judgement call.
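For instance, something like this toy sketch (all the cost and RMSD numbers are invented placeholders):

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented numbers: the extra cost of each precision upgrade
# versus the RMSD it achieves
cost = np.array([0, 1, 2, 4, 8])                   # relative cost of each option
rmsd = np.array([0.20, 0.12, 0.08, 0.06, 0.055])   # resulting RMSD

plt.plot(cost, rmsd, marker="o")
plt.xlabel("Extra cost")
plt.ylabel("RMSD")
plt.title("Scree-style plot: diminishing returns on precision")
plt.show()
```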
 

katxt

Active Member
#7
hlsmith said:
Please define the acronym RMSD. Look up ROPE ...
Maybe we should define the acronym ROPE (region of practical equivalence). I had to look this up, but I'm grateful for the exposure. In any event, you have to judge for yourself how big a difference you can tolerate. I'm not a fan of Cohen's effect size of 0.1 being treated as insignificant; there are plenty of situations where a difference that small can have an important effect.
 

hlsmith

Less is more. Stay pure. Stay poor.
#8
katxt said:
Maybe we should define the acronym ROPE (region of practical equivalence). I had to look this up, but I'm grateful for the exposure. In any event, you have to judge for yourself how big a difference you can tolerate. I'm not a fan of Cohen's effect size of 0.1 being treated as insignificant; there are plenty of situations where a difference that small can have an important effect.

I agree, tolerance values are very situational. NASA and its engineers may want six sigmas, particle physicists want even more, while some social scientists may be happy with two.

I am also not saying it has to be ROPE explicitly, but that framework is fairly comparable: say you are using Markov chain Monte Carlo to integrate your distributions in Bayesian models - you end up with what is essentially 'simulation' data directed by priors and empirical data.
 

katxt

Active Member
#9
You've encouraged me to look further, but I'm afraid I'm probably too entrenched in TOST (two one-sided tests) for equivalence testing. It is simple and easy to understand and explain. kat
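For anyone curious, here is a minimal TOST sketch for two independent samples. The generated data and the ±0.05 equivalence bounds are made-up placeholders; in practice you have to justify the bounds for your own problem:

```python
import numpy as np
from scipy import stats

def tost_ind(x, y, low, upp):
    """Two one-sided tests (TOST) for equivalence of two independent means.

    Returns the TOST p-value: the larger of the two one-sided p-values.
    """
    nx, ny = len(x), len(y)
    diff = np.mean(x) - np.mean(y)
    # Pooled standard error of the difference in means
    sp2 = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    se = np.sqrt(sp2 * (1 / nx + 1 / ny))
    df = nx + ny - 2
    # Test 1: H0: diff <= low  vs  H1: diff > low
    p_lower = stats.t.sf((diff - low) / se, df)
    # Test 2: H0: diff >= upp  vs  H1: diff < upp
    p_upper = stats.t.cdf((diff - upp) / se, df)
    return max(p_lower, p_upper)

rng = np.random.default_rng(1)
x = rng.normal(1.00, 0.10, 30)  # stand-in outputs from simulation A
y = rng.normal(1.01, 0.10, 30)  # stand-in outputs from simulation B
print(f"TOST p-value = {tost_ind(x, y, -0.05, 0.05):.4f}")
# A small p-value means you can reject non-equivalence, i.e. the
# difference lies within the bounds you declared tolerable.
```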