Testing the Difference of Values from a Lognormal Distribution?

I have data that fits a lognormal distribution really well. My primary goal is to create a statistical classifier using this distribution to apply to new data from the same source. Essentially, I have two possible values; I choose the one with the higher lognormal density, and then attach a statistical test score to that maximum probability.

As part of the output of this classifier, I would like to know the statistical significance of the difference between two values from the lognormal. For normal data, the z-score and standard error can be used, but is there an equivalent method for the lognormal?

Disclaimer: I've never worked with the lognormal and may be missing some very basic instincts about the distribution (but I have researched as much as possible to understand it). Let me know if you need further information about my question to understand my goals.
Hey LMC2012.

The lognormal is a distribution like any other, and like any distribution you can calculate a cumulative probability given a value, and also get a value given a cumulative probability.
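Both directions of that mapping can be sketched with Python's standard library, using the fact that if X is lognormal then log(X) is normal. The parameters mu and sigma below are just illustrative placeholders, not values from your data:

```python
from math import exp, log
from statistics import NormalDist

mu, sigma = 0.0, 1.0          # hypothetical parameters of log(X)
norm = NormalDist(mu, sigma)  # the underlying normal distribution

def lognorm_cdf(x):
    """Cumulative probability P(X <= x) for a lognormal X."""
    return norm.cdf(log(x))

def lognorm_ppf(p):
    """Inverse CDF: the value x with P(X <= x) = p."""
    return exp(norm.inv_cdf(p))

print(lognorm_cdf(1.0))   # exp(mu) = 1 is the median here, so this is 0.5
print(lognorm_ppf(0.5))   # and the 0.5 quantile maps back to 1.0
```

The two functions are inverses of each other, which is exactly the value-to-probability and probability-to-value relationship described above.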

With estimators, you typically get an interval that corresponds to a particular probability (based on your alpha). Using a computer, you can get a two-tailed interval (alpha/2 in each tail) or a one-tailed interval by finding the value that corresponds to a given cumulative probability, and many packages handle the lognormal distribution, including the free package R.
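As a sketch of that two-tailed interval (Python standard library; mu, sigma, and alpha are placeholder values, not estimates from real data):

```python
from math import exp
from statistics import NormalDist

mu, sigma = 0.0, 0.5   # hypothetical parameters of log(X)
alpha = 0.05           # for a 95% interval

norm = NormalDist(mu, sigma)
# Put alpha/2 of the probability in each tail by inverting the CDF
# at alpha/2 and 1 - alpha/2, then mapping back to the original scale.
lower = exp(norm.inv_cdf(alpha / 2))
upper = exp(norm.inv_cdf(1 - alpha / 2))
print(lower, upper)
```

In R the same endpoints come from `qlnorm(c(alpha/2, 1 - alpha/2), meanlog = mu, sdlog = sigma)`.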


The normal distribution has the advantage of being symmetric around its mean, so getting probabilities for each side of the mean works the same way. In general, though, you don't need to worry about this: all you need to do is have the computer calculate the right probabilities for a given distribution (using numerical routines). These probabilities correspond to values on the distribution, and those values become your confidence intervals or your critical values.
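To see the asymmetry point concretely: for a lognormal, the interval endpoints are symmetric on the log scale but not equidistant from the median on the original scale (Python standard library; parameters are illustrative):

```python
from math import exp, log
from statistics import NormalDist

mu, sigma = 0.0, 1.0          # hypothetical parameters of log(X)
norm = NormalDist(mu, sigma)

median = exp(mu)
lower = exp(norm.inv_cdf(0.025))  # 95% interval endpoints
upper = exp(norm.inv_cdf(0.975))

# Symmetric on the log scale, asymmetric on the original scale:
print(log(upper) + log(lower))    # sums to ~0 around mu = 0
print(median - lower, upper - median)  # distances clearly differ
```

So the lower and upper critical values have to be computed separately from the two tail probabilities; you cannot just add and subtract a single margin of error as you would with the normal.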