I'm trying to figure out how I can determine the significance of differences between two values with propagated error. Here's an example:
I'm running an assay that requires me to dilute my samples down (usually 60+ fold) to get them into the sensitive range of the assay (otherwise the signal is saturated). I can propagate the error through this dilution to get an idea of the error on my calculated values (plus the error of the standard curve that I'm using). So my question is this: if I run two samples and estimate the error in this manner, how would I go about determining whether the difference between the samples is real? Do I just dismiss differences where the error bars overlap, or would something like a z-test be appropriate? (I am running these in triplicate, but I usually end up discarding one replicate because the assay can have issues with outliers.)
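To make the question concrete, here is a minimal sketch of the z-test idea I had in mind, assuming the propagated errors can be treated as standard errors of each measured value (the function name and example numbers are just illustrative):

```python
import math

def z_test_propagated(x1, se1, x2, se2):
    """Two-sided z-test for the difference between two values,
    each with a standard error obtained by error propagation."""
    # independent errors combine in quadrature
    se_diff = math.sqrt(se1**2 + se2**2)
    z = (x1 - x2) / se_diff
    # two-sided p-value from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# e.g. two diluted-back concentrations with propagated SEs
z, p = z_test_propagated(10.0, 1.0, 7.0, 1.0)
print(z, p)
```

With only two or three replicates per sample, though, the propagated errors are very uncertain themselves, so a t-type test (e.g. Welch's) might be more defensible than a plain z-test; that is part of what I'm asking.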
Thanks in advance!