Difference between rounding up and off a decimal?

#1
This might be a silly question and simplistic for most of you, but what is the difference between rounding "up" and rounding "off" a decimal? I was taught to always round up if the next digit is 5 or greater and round down if it is 4 or less.

Also, are there circumstances where one would round up when the next digit is below 5, or round down when it is 5 or above? Any rounding-off examples? Thanks.
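
For anyone who wants to see the distinction in code, here is a minimal Python sketch, assuming "rounding off" means rounding to the nearest value (with the 0.5 cutoff) and "rounding up" means always moving toward the larger value. The helper names are my own, not any standard terminology:

```python
import math
from decimal import Decimal, ROUND_HALF_UP

def round_off(x, places=0):
    """Round to the nearest value at the given place; a trailing 5 goes up."""
    q = Decimal(10) ** -places
    return float(Decimal(str(x)).quantize(q, rounding=ROUND_HALF_UP))

def round_up(x, places=0):
    """Always round toward the next larger value at the given place."""
    factor = 10 ** places
    return math.ceil(x * factor) / factor

print(round_off(2.44, 1))  # 2.4 -> next digit is 4, so round down
print(round_off(2.45, 1))  # 2.5 -> next digit is 5, so round up
print(round_up(2.41, 1))   # 2.5 -> rounding "up" ignores the 5 cutoff entirely
print(round(2.5))          # 2   -> note: Python's built-in round() uses
                           #        banker's rounding (ties go to the even digit)
```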
 

noetsi

Fortran must die
#2
I don't think there is any scientific rule that says either is better. It would depend on the report you are doing and on how much confidence you have in your method. There is what is known as false precision: reporting more decimal places than your method realistically supports, which will deceive the person looking at your data. If you have 4 decimal places and you can really only be reasonably sure of 2, then you can round or truncate based on your judgment of which is more accurate.
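
To make the round-versus-truncate choice concrete, here is a small illustration of a value carried to 4 decimal places when only 2 are trustworthy. The measured value is made up for the example:

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_DOWN

measured = Decimal("3.1479")   # 4 places reported, only 2 really supported

rounded   = measured.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
truncated = measured.quantize(Decimal("0.01"), rounding=ROUND_DOWN)

print(rounded)    # 3.15 -> rounding to the nearest hundredth
print(truncated)  # 3.14 -> truncating simply drops the extra digits
```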
 

hlsmith

Not a robit
#3
No hard rules that I know of beyond the 0.5 cutoff; it may be field dependent, where medical research rounds to the hundredths place but physicists require more places for exactitude. Of note, if the rounded value is part of large repetitive calculations, you can eventually accumulate some rounding error between estimates, especially if the effect size is small. So sometimes the confidence interval includes the null and other times it does not, just based on the cutoff used. But that is an issue in null hypothesis testing. It is good practice to define how you will round values before starting a project. A corollary issue in stats may be removing outliers, which can also change parameter estimates, but for a different reason.
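
A rough sketch of the accumulation effect, comparing rounding at every step of a repeated calculation with rounding only once at the end. The increment and number of iterations are invented for illustration:

```python
increment = 0.0147   # small quantity added repeatedly
n = 1000

# Round only once, at the end.
exact = sum(increment for _ in range(n))

# Round the running estimate to 2 places at every step.
rounded_each_step = 0.0
for _ in range(n):
    rounded_each_step = round(rounded_each_step + round(increment, 2), 2)

print(round(exact, 2))    # 14.7
print(rounded_each_step)  # 10.0 -> per-step rounding drops 0.0047 each time
                          #        and substantially understates the total
```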