I had an idea to create a rating/ranking for a histogram that represents how frequently another metric falls into certain "zones" (these usually range from about -15 to 15). The desired outcome is a smooth bell curve with a standard deviation of 4, so roughly 68.2% of the original data should fall between -4 and 4, and so on. The problem is that when there are not enough data points, the histogram can become choppy and irregular.
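For concreteness, here is a rough sketch of how I get the expected per-bin frequencies under the target N(0, 4²) curve (assuming Python with SciPy and unit-wide bins; the names are just placeholders):

```python
import numpy as np
from scipy.stats import norm

# Target distribution: mean 0, standard deviation 4
target = norm(loc=0, scale=4)

# Unit-wide bins covering the usual zone range of -15 to 15
edges = np.arange(-15, 16)  # bin edges: -15, -14, ..., 15

# Expected proportion of data in each bin under the target curve
expected = target.cdf(edges[1:]) - target.cdf(edges[:-1])

# Sanity check: about 68.2% of the mass lies within one standard deviation
print(target.cdf(4) - target.cdf(-4))  # ~0.6827
```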
Does anyone know of a way to quantify how closely a histogram matches a standard bell curve? All I could come up with is to check that the frequencies within x standard deviations are "close" to the expected values, and that the frequencies decrease as they move away from 0 (a rough sketch of that check is below). I would think there has to be a more elegant solution?
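This is roughly the ad-hoc check I have in mind; the function name and the tolerance value are arbitrary placeholders, and `counts` is assumed to be aligned with the `expected` proportions from the snippet above:

```python
import numpy as np

def crude_bell_check(counts, expected, tol=0.05):
    """Ad-hoc check: observed bin proportions are 'close' to the
    expected ones, and they fall off monotonically away from 0.
    `tol` is an arbitrary per-bin tolerance on the proportions."""
    observed = counts / counts.sum()

    # 1. Each bin's proportion is within tol of the expected proportion
    close_enough = np.all(np.abs(observed - expected) < tol)

    # 2. Frequencies decrease moving away from the center bin
    center = len(observed) // 2
    right_decreasing = np.all(np.diff(observed[center:]) <= 0)
    left_increasing = np.all(np.diff(observed[:center + 1]) >= 0)

    return close_enough and right_decreasing and left_increasing
```

It works, but the pass/fail tolerance feels arbitrary, which is why I suspect there is a more principled measure.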