... I anticipated that SPSS would plot the following:

25th Percentile = .76
50th Percentile (median) = 2.83
75th Percentile = 5.31
Interquartile Range (IQR) = 75th P – 25th P = 4.55

Outliers (identified with a circle) lying more than 1.5 x IQR beyond the quartiles:
e.g., 75th Percentile + (1.5 x IQR) = 12.14
e.g., 25th Percentile – (1.5 x IQR) = -6.07

Extreme outliers (identified with an asterisk) lying more than 3 x IQR beyond the quartiles:
e.g., 75th Percentile + (3 x IQR) = 18.96
e.g., 25th Percentile – (3 x IQR) = -12.89

Using the above calculations, these data should have *NO* outliers (circle or asterisk) ...
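The fence arithmetic above can be checked with a quick sketch (it uses only the quoted quartiles; the raw data are not reproduced here):

```python
# Tukey's fences recomputed from the quartiles quoted above.
q1, q3 = 0.76, 5.31
iqr = q3 - q1                              # 4.55

inner = (q1 - 1.5 * iqr, q3 + 1.5 * iqr)   # circles fall beyond these   (~ -6.07, 12.14)
outer = (q1 - 3.0 * iqr, q3 + 3.0 * iqr)   # asterisks fall beyond these (~ -12.89, 18.96)

# 11.84 sits inside the inner fences, so by these quartiles it
# should not be flagged at all.
print(inner, outer)
print(11.84 > inner[1])
```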

**However**, in this data set, SPSS identifies 11.84 as an outlier (circle).

I also note that the 25th Percentile appears to be higher than .76 and the 75th Percentile lower than 5.31. I assume this smaller IQR is at the root of the outlier issue ...

Can anyone offer insight into how SPSS 18 is calculating and plotting the 25th and 75th percentiles on a box plot?

I ran this in SAS to see whether it was an SPSS thing. I found the lower quartile and the upper quartile (what I believe are your 25th and 75th percentile values) to be 1.34 and 4.91 respectively, giving an IQR of 3.57. The upper inner fence is then 4.91 + (1.5 x 3.57) = 10.27, so 11.84 falls beyond it and is flagged as an outlier (though not an extreme one, since the outer fence sits at 4.91 + (3 x 3.57) = 15.62). The box-and-whisker plot looked much like the one you say SPSS produced.

The median matched the one you predicted.

"Very few theories have been abandoned because they were found to be invalid on the basis of empirical evidence...." Spanos, 1995

It turns out that statisticians use differing definitions of how to calculate quartiles. That may be why you are coming up with different results than SAS or SPSS.

Unfortunately there are more than two ways to do this; there are at least seven ways to calculate these values, and they lead to different results. Note that this is not unusual: software uses algorithms to solve equations, and those algorithms vary by package. Usually they lead to similar results, but not always. Indeed, different versions of the same software change this over time.
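One way to see this concretely: Python's standard library happens to ship two of these definitions side by side in `statistics.quantiles` ('exclusive' is the Minitab/SPSS-style rule, 'inclusive' the S/R-style rule). A sketch on a made-up sample (not the thread's data):

```python
import statistics

# Hypothetical sample, invented for illustration only.
data = [0.5, 0.8, 1.3, 2.1, 2.8, 3.4, 4.2, 5.1, 6.0, 11.84]

excl = statistics.quantiles(data, n=4, method="exclusive")  # ~ [1.18, 3.1, 5.33]
incl = statistics.quantiles(data, n=4, method="inclusive")  # ~ [1.5, 3.1, 4.88]

# Same data, two legitimate definitions, two different IQRs --
# and therefore two different sets of outlier fences.
print(excl, excl[2] - excl[0])
print(incl, incl[2] - incl[0])
```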

But it is surprising that the same version of the same software did that; I have not seen that before. I would go with the values that SPSS and SAS agree on. Of course, it would be nice if statisticians could agree on one right way.


Why should there be one right way? There are multiple ways and each makes a certain amount of sense. Some work better in certain situations and others work better in other situations.

That would be like telling all chefs to pick a single type of knife and just stick with that - why should we need a butcher knife, a butter knife, knives with serrated edges and knives that are curved... just pick one and be done with it.

That is sort of like asking why gravity always pulls things toward mass rather than away sometimes and toward it other times. Or why you don't add different amounts of water to get the same cement. Or why the same exact reactions don't generate wildly different amounts of heat at different times (under the same conditions).

Because physical reality does not differ based on the views of different analysts, and having disagreements causes serious practical problems, as noted in this thread.

Only one value (or group of matching values) should be the 25th percentile. Someday I suppose the UN will get a group of statisticians together to create one common set of generally understood standards. While they are at it, they can create one unified set of nomenclature for sums of squares rather than the 20 conflicting ones they have now.

The confusion causes real problems for the mere mortals who have to use this for practical things, even if statisticians are oblivious to it. Chemists and physicists don't normally use multiple terms for the same exact thing; they long ago agreed on a common set of definitions.

It is absurd to have error sum of squares, residual sum of squares, sum of squares within, etc., all mean the same thing. Just agree on one term.

Why is it that physicists, chemists, etc. can agree on a common set of terms and definitions and statisticians cannot?

End of rant


And that's fine. But we're talking about estimating the 25th percentile. If the data comes from a normal distribution we might estimate it better one way over another. If it comes from a discrete distribution then a different way might get us a better estimate.

I don't understand why ranked data (that is, when you list values from lowest to highest and take, for example, the one that falls 1/4 of the way up from the lowest value) would depend on a specific distribution.

But I will take your word for it.

In the example from this thread you get two totally different answers for the same data depending on the calculation. If there were agreement on which way to calculate it, this would not occur. And it makes a great deal of difference, because if different people in the same organization come up with totally different answers this way, there could be real problems. What if the amount of rebar one firm added to its concrete varied from what another firm used because of the way each (unknowingly) calculated the 25th percentile? One firm thought it was getting one amount, and the other thought it wanted a second value, because their assumptions about what that percentile meant differed.

Which they would not even realize since it was buried in their software.


I don't understand why ranked data (that is, when you list values from lowest to highest and take, for example, the one that falls 1/4 of the way up from the lowest value) would depend on a specific distribution.

But I will take your word for it.

In the example from this thread you get two totally different answers for the same data depending on the calculation. If there were agreement on which way to calculate it, this would not occur. And it makes a great deal of difference, because if different people in the same organization come up with totally different answers this way, there could be real problems. What if the amount of rebar one firm added to its concrete varied from what another firm used because of the way each (unknowingly) calculated the 25th percentile? One firm thought it was getting one amount, and the other thought it wanted a second value, because their assumptions about what that percentile meant differed.

Which they would not even realize since it was buried in their software.

Which would be a failure to communicate on their part.

But the fact that you don't understand why we might have different ways to estimate quantiles doesn't change the fact that there are different ways and some ways work better in different situations. You understand why we have different ways to test for a difference in centrality parameters depending on the assumptions we're willing to make. What is so different about this situation that makes it hard to understand why there might be a difference in how we estimate quantiles depending on what we believe about the data?

Note that if we took the data that we have to be the population of interest then there is a set way that we would determine what the XXth quantile is. The difficulty arises when we don't think that we have all of the data - when we're estimating what that XXth quantile is instead.
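For the population case just described there is indeed one natural rule: the inverse of the empirical distribution function. A minimal sketch of that rule, assuming the whole population is in hand:

```python
import math

def population_quantile(values, p):
    """Inverse empirical CDF: the smallest value v with F(v) >= p.

    This is the unambiguous definition when `values` is the entire
    population rather than a sample drawn from it.
    """
    xs = sorted(values)
    n = len(xs)
    # 1-based rank of the answer: ceil(n * p), clamped to at least 1.
    k = max(1, math.ceil(n * p))
    return xs[k - 1]

pop = list(range(1, 11))                 # a toy "population": 1..10
print(population_quantile(pop, 0.25))    # -> 3
print(population_quantile(pop, 0.50))    # -> 5
```

Once the data are only a sample from a larger population, though, this rule becomes just one estimator among many.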

You are not making different assumptions here. You have one distribution, and two totally different answers for it.

If you think most analysts know that there are multiple ways to calculate something as basic as the 25th percentile (or know the assumptions built into their software)... you are giving them (including me) much more credit than you should in that regard. Until the original poster brought this up, it never occurred to me that this could happen with a calculation like this. I thought the software was doing something wrong.


You are not making different assumptions here. You have one distribution, and two totally different answers for it.

Except that you don't know what the underlying distribution is. You have some data that you assume comes from some distribution. If you make different assumptions about what that underlying distribution is then you might prefer one way of estimating quantiles over another. That's all I'm saying.

This is what R's help page for the quantile function says about the nine quantile types it supports:

Code:

‘quantile’ returns estimates of underlying distribution quantiles
based on one or two order statistics from the supplied elements in
‘x’ at probabilities in ‘probs’. One of the nine quantile
algorithms discussed in Hyndman and Fan (1996), selected by
‘type’, is employed.
All sample quantiles are defined as weighted averages of
consecutive order statistics. Sample quantiles of type i are
defined by:
Q[i](p) = (1 - gamma) x[j] + gamma x[j+1],
where 1 <= i <= 9, (j-m)/n <= p < (j-m+1)/n, x[j] is the jth order
statistic, n is the sample size, the value of gamma is a function
of j = floor(np + m) and g = np + m - j, and m is a constant
determined by the sample quantile type.
*Discontinuous sample quantile types 1, 2, and 3*
For types 1, 2 and 3, Q[i](p) is a discontinuous function of p,
with m = 0 when i = 1 and i = 2, and m = -1/2 when i = 3.
Type 1 Inverse of empirical distribution function. gamma = 0 if g
= 0, and 1 otherwise.
Type 2 Similar to type 1 but with averaging at discontinuities.
gamma = 0.5 if g = 0, and 1 otherwise.
Type 3 SAS definition: nearest even order statistic. gamma = 0 if
g = 0 and j is even, and 1 otherwise.
*Continuous sample quantile types 4 through 9*
For types 4 through 9, Q[i](p) is a continuous function of p, with
gamma = g and m given below. The sample quantiles can be obtained
equivalently by linear interpolation between the points
(p[k],x[k]) where x[k] is the kth order statistic. Specific
expressions for p[k] are given below.
Type 4 m = 0. p[k] = k / n. That is, linear interpolation of the
empirical cdf.
Type 5 m = 1/2. p[k] = (k - 0.5) / n. That is a piecewise linear
function where the knots are the values midway through the
steps of the empirical cdf. This is popular amongst
hydrologists.
Type 6 m = p. p[k] = k / (n + 1). Thus p[k] = E[F(x[k])]. This
is used by Minitab and by SPSS.
Type 7 m = 1-p. p[k] = (k - 1) / (n - 1). In this case, p[k] =
mode[F(x[k])]. This is used by S.
Type 8 m = (p+1)/3. p[k] = (k - 1/3) / (n + 1/3). Then p[k] =~
median[F(x[k])]. The resulting quantile estimates are
approximately median-unbiased regardless of the distribution
of ‘x’.
Type 9 m = p/4 + 3/8. p[k] = (k - 3/8) / (n + 1/4). The
resulting quantile estimates are approximately unbiased for
the expected order statistics if ‘x’ is normally distributed.
Further details are provided in Hyndman and Fan (1996) who
recommended type 8. The default method is type 7, as used by S
and by R < 2.0.0.

Why should you complain about having multiple ways to do something, especially when that something is an estimation process, which isn't very exact? Do you think it's unreasonable that there are multiple ways to estimate variance components when doing ANOVA?

Sure it makes it a little more difficult for some people but there are typically defaults built into software that should make reasonable choices.

It sucks that in this case SPSS gave two different answers. But I don't like SPSS, so I don't really care.

Because outside of academia, where all this is Greek, people make assumptions that statisticians and software programmers ignore. And that, as the original post in this thread indicates, has a significant impact on the real world.

Sure it makes it a little more difficult for some people ...

It makes it a lot more difficult for about 99 percent of the population.


Because outside of academia, where all this is Greek, people make assumptions that statisticians and software programmers ignore. And that, as the original post in this thread indicates, has a significant impact on the real world.

It makes it a lot more difficult for about 99 percent of the population.

Blah blah blah. You aren't going to convince me that it's bad to have other options when your argument is "But then I have to thiiiiiiink about the problem".

Edit: There are typically sensible defaults chosen in software. It doesn't typically matter toooo much which method you use. Most software is at least consistent. SPSS apparently wasn't. That's a strike against SPSS. I'm sorry IBM let you down

No, because at heart you are an academic. Having been there, I recognize the behavior. Academics write for themselves and ignore the rest of the world (well, I am not sure academics actually realize the rest of the world exists, or that their behavior influences it).

IBM will eventually take over all software again, just as AT&T is slowly taking over the telecommunications industry again. Resistance is futile; you will be assimilated.

If I keep this up 28 more times I will hit a thousand posts....
