# Can you Generate Meaningful Confidence Intervals from Logarithmic Data?

#### erkvos

##### New Member
Hi, could somebody with an advanced understanding of statistics please point me in the right direction?

I have a question about interpreting confidence intervals generated in Excel from data that appear to follow a logarithmic (right-skewed) distribution. As a disclaimer, this is 'getting into the weeds' territory, but for a variety of reasons it is important to identify the upper and lower bounds of our estimates.

I am using reagent tests ('BART') to estimate the population of bacteria present in a given sample of water. The tests generate an estimate of the number of bacteria present in the sample, in units of cfu/ml, based on the number of days before a reaction occurs.

I'm not certain of this, but the estimates generated using this method seem to have a logarithmic distribution. The data appear to be skewed to the right (Excel's SKEW function yields 2), which is consistent with the exponential nature of microbial growth. For example, a reaction after 1 day may indicate that 500,000 cfu/ml are present, but a reaction on day 3 indicates that only 35,000 cfu/ml are present.

I'd like to give an average of the number of bacteria present, and I realize a confidence interval would be necessary. While four samples is not a robust sample size, some results replicate quite well with only one outlier (for example: 9,000, 9,000, 9,000, 35,000 cfu/ml).

Would the confidence interval generated using this approach (working on the log scale) be more accurate? And if so, how do I calculate the 'real' confidence interval using the log distribution? So far I have done this by summing the log of the average and the log-scale confidence interval, then raising 10 to this power (undoing the log). Once I have this value, I subtract the calculated average.
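A minimal sketch of that log-scale back-transform, assuming the log10 counts are roughly normal (the four replicate values are the example from above; z = 1.645 is used for 90%, though a t critical value would be more defensible for n = 4):

```python
import math
from statistics import mean, stdev

# Example replicate counts in cfu/ml (the values mentioned above)
counts = [9000, 9000, 9000, 35000]

# Work on the log10 scale, where right-skewed data are often closer to normal
logs = [math.log10(x) for x in counts]
n = len(logs)
log_mean = mean(logs)
log_se = stdev(logs) / math.sqrt(n)

# 90% interval with z = 1.645 (a t critical value would be wider for n = 4)
z = 1.645
lo, hi = log_mean - z * log_se, log_mean + z * log_se

# Back-transform: the interval is asymmetric on the raw cfu/ml scale,
# and 10**log_mean is the geometric (not arithmetic) mean
print(f"geometric mean: {10**log_mean:.0f} cfu/ml")
print(f"90% CI: {10**lo:.0f} to {10**hi:.0f} cfu/ml")
```

Note the back-transformed centre is a geometric mean, which is lower than the arithmetic average of the raw counts, so subtracting the calculated average from the unlogged bound mixes two different centres.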

The 'Exact Test' and chi-squared both seem like potential alternatives, but do not seem to apply to this type of (non-categorical) data. Any help or examples of a viable approach would be greatly appreciated. A screenshot of my Excel sheet is attached with one control group highlighted. The standard deviation, coefficient of variation, average, and confidence interval are calculated in blue for the raw data, and in green for the log-transformed data.

Thank you.


#### katxt

##### Active Member
In general, if you have somehow managed to find the lower and upper limits for a CI on logged data, you can just "unlog" these limits more or less as you have described to get an asymmetric CI on the raw data.
However, I suspect that things are more complicated here. If you got 9000, 9000, 9000, 9000 it doesn't mean there is no uncertainty in the 9000 average. I'm guessing that you find the number of days for reaction, and look the cfu/ml up in a table, a bit like the MPN method of estimating cfu/ml.
Without further details about the testing protocols, my only comment is that the current method makes a statistician squirm.
If you want to go further into this, post a reference to the actual test. kat

#### erkvos

##### New Member
> In general, if you have somehow managed to find the lower and upper limits for a CI on logged data, you can just "unlog" these limits more or less as you have described to get an asymmetric CI on the raw data.
> However, I suspect that things are more complicated here. If you got 9000, 9000, 9000, 9000 it doesn't mean there is no uncertainty in the 9000 average. I'm guessing that you find the number of days for reaction, and look the cfu/ml up in a table, a bit like the MPN method of estimating cfu/ml.
> Without further details about the testing protocols, my only comment is that the current method makes a statistician squirm.
> If you want to go further into this, post a reference to the actual test. kat
Hi Kat, the test is a Hach BART test for IRB (Iron Reducing Bacteria). For the concentrations referenced, the lookup table provides a single number (rather than a range) for the values generated.

The company that developed the method is called Droycon Bioconcepts. I spoke with their tech team on a couple of occasions, and gathered that the tests are about 90 percent accurate. Meaning that if a perfectly uniform inoculation medium with a known bacteria count is analyzed with ten BART testers, 9 out of 10 will agree, with one dud.

https://www.dbi.ca/BARTs/FAQ.html


#### katxt

##### Active Member
Thanks for that. I'm just guessing here from your comments and calculations. Is this the sort of thing?
You take your sample, do something to it, and see how long it takes to react. If it takes 1 day, record 500000; 3 days, record 35000; 4 days, record 9000; and probably something like 2 days, 100000 and 6 days, 2000.
A true value of 9000 will react in 4 days, but so will a true value of 12000 or a true value of 7000, and both will be recorded as 9000. There will be intermediate values such as 20000 that sometimes take 3 days and sometimes take 4.
So if you want to find a confidence interval, you need to ask "what range of true values will plausibly result in the recorded result?"
If you do one test, 4 days, record 9000, then the true value may plausibly lie in the range from, say, 6000 to 20000. This is a rough confidence interval for the true value.
If you do four tests on the one sample, and record 9000 four times, the confidence interval is the same.
If you do four tests on the one sample, and record 9000 three times and 35000 once, you are probably towards the intermediate level, and your confidence interval may be 16000 to 22000 (bottom of day 3, top of day 4).
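That lookup-and-band logic could be sketched like this. The lookup values come from the thread; the plausible-range bands are illustrative numbers only, not real calibration data, so treat them as placeholders:

```python
# Hypothetical lookup: reaction day -> recorded cfu/ml (values from the thread)
LOOKUP = {1: 500_000, 2: 100_000, 3: 35_000, 4: 9_000, 6: 2_000}

# Illustrative plausible true-value bands per reaction day (made-up numbers
# for the sketch; real bands would have to come from validation data)
BAND = {3: (16_000, 60_000), 4: (6_000, 20_000)}

def interval_from_days(days_observed):
    """Plausible range of true cfu/ml given the reaction days of replicates."""
    lows = [BAND[d][0] for d in days_observed]
    highs = [BAND[d][1] for d in days_observed]
    if len(set(days_observed)) == 1:
        return BAND[days_observed[0]]  # all replicates agree: one day's band
    # A mixture of two adjacent days narrows things to the overlap region
    return (max(lows), min(highs))

print(interval_from_days([4, 4, 4, 4]))  # -> (6000, 20000)
print(interval_from_days([3, 4, 4, 4]))  # -> (16000, 20000)
```

The point of the sketch: four identical readings give the same band as one reading, while a mixture of days actually pins the true value down more tightly.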
Either way the calculations in your original post won't work.

#### erkvos

##### New Member
Thanks Kat, what you described are exactly those tests. It makes sense that the lack of precision inherent to each data point distorts further abstraction.

My charitable view of the tests is that at exactly 96 hours (4 days) post-inoculation, the slight formation of the color indicator really does indicate a true count (9000). But as you pointed out, the true value behind those four 9,000 cfu/ml tests is definitely not exactly 9,000.

It may sound surprising, but the bandwidth you noted is actually tolerable in my case. Would it be possible for me to up the statistical strength ante here with 21 sample points?

This may be a rookie attempt at analysis, but after taking log base 10 of the 21 population counts, the lognormal distribution function in Excel does produce a normal-looking curve when plotted across a population range (3 sigma above/below the average). After back-transforming the average of the logs, the upper and lower CI bounds are:

14,619 <---> 21,885 <---> 32,762 cfu/ml (90% confidence interval, z = 1.65, sigma(log10(population counts)) = 0.47)

Seems plausible but I would expect a wider range. I asked the manufacturer, and they claim a 90 percent population accuracy but who knows if this is actually true.

Either way, it seems that I should be rounding these (to the nearest 1,000?), since the difference between 50,000 and 500,000 is worth knowing in my application despite the error range you described.
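For reference, the back-transform with those summary numbers can be reproduced like this. It assumes n = 21, sigma of the logs = 0.47, z = 1.645, and divides sigma by sqrt(n) for a CI on the mean of the logs, so it lands close to, but not exactly on, the posted bounds (the original sigma or z may have differed slightly):

```python
import math

# Assumed summary statistics from the post: 21 log10 counts,
# sample sigma of the logs ~ 0.47, z = 1.645 for a 90% interval
n, sigma_log, z = 21, 0.47, 1.645
log_mean = math.log10(21_885)  # back-solved from the posted midpoint

half_width = z * sigma_log / math.sqrt(n)  # CI for the *mean* of the logs
lo, hi = 10**(log_mean - half_width), 10**(log_mean + half_width)
print(f"{lo:,.0f} <---> {10**log_mean:,.0f} <---> {hi:,.0f} cfu/ml")
```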

#### katxt

##### Active Member
> Seems plausible but I would expect a wider range. I asked the manufacturer, and they claim a 90 percent population accuracy but who knows if this is actually true.
I think that they mean 90% consistency as you explained it before. If you give a true 9000 to 10 labs, then 9/10 labs will report 9000. However it seems to be a little dishonest of the manufacturer to claim that this means 90% accuracy. Also, if you take a graded series of concentrations from 35000 down to 9000 and give them to 10 labs, there will be a concentration around 19000 or so which will have five labs reporting 35000 and five reporting 9000. This is inevitable, I think.
> Would it be possible for me to up the statistical strength ante here with 21 sample points?
I'm not sure what you mean by this. Are you proposing to check the samples more often, say every 6 hours? Checking every day is a very coarse measure.

#### erkvos

##### New Member
> I think that they mean 90% consistency as you explained it before. If you give a true 9000 to 10 labs, then 9/10 labs will report 9000. However it seems to be a little dishonest of the manufacturer to claim that this means 90% accuracy. Also, if you take a graded series of concentrations from 35000 down to 9000 and give them to 10 labs, there will be a concentration around 19000 or so which will have five labs reporting 35000 and five reporting 9000. This is inevitable, I think.

> I'm not sure what you mean by this. Are you proposing to check the samples more often, say every 6 hours? Checking every day is a very coarse measure.
In this case I have been checking 21 different reagent tests, twice per day.

So I have a list of 21 independent population counts.

#### katxt

##### Active Member
You need to keep in mind that a confidence interval is the range of plausible values that the true value could lie in. The method you are trying is not suitable in this case. For a start, the values you get aren't independent. If the first sample gives 9000, then the second sample is likely to be 9000 as well, and so is the next.
I have assumed that a lookup value of 9000 is the most likely concentration, but it may be the low end, in which case it may mean somewhere between 9000 and 35000. Your documentation may tell you that.
If you started with a parent population of 500000 and made a series of samples, each an 80% dilution of the last, you would get an evenly spaced log range that spanned about 6 data points per day, say about 40 in total. Set them all going at once and record the day each turns. Plot the graph with concentration across (logged) and day up.
The graph will be a series of horizontal lines with some overlap at the ends of each where natural uncertainty has taken its course. You can split the graph horizontally into sections that contain only one day, or a mixture of two days. (Hopefully not three days.)
Now you can use the graph to estimate a confidence interval. If your sample of four are all the same day then use the range of concs from the matching single day. If they come from two different days then use the range of the mixture.
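The dilution series above works out like this, using the concentrations mentioned in the thread (the 40-sample count and 80% factor are kat's suggestion; everything else is arithmetic):

```python
import math

# Serial dilution: each sample is 80% of the previous concentration
start, factor, n_samples = 500_000, 0.8, 40
concs = [start * factor**k for k in range(n_samples)]

# The series is evenly spaced on the log scale: each step is log10(0.8) ~ -0.097
step = abs(math.log10(factor))
print(f"log10 range: {math.log10(concs[0]):.2f} down to {math.log10(concs[-1]):.2f}")

# Samples falling between two adjacent lookup values, e.g. the day-3
# (35000) and day-4 (9000) readings mentioned earlier in the thread:
per_band = math.log10(35_000 / 9_000) / step
print(f"about {per_band:.0f} samples per day band")
```

With these numbers the 35000-to-9000 gap holds about 6 dilution steps, which matches the "about 6 data points per day" estimate above.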
The manufacturer may have the data from a similar experiment they would share.
Surprisingly perhaps, a mixture of days is likely to give you a more accurate confidence interval than if they are all the same. kat