Error between two probability distributions

I need to measure the accuracy of an algorithm that estimates a probability distribution (a set of (key : probability) pairs). I have the true distribution: for each key I know the true probability. What would be the most standard method(s) for doing this?

I found Kullback-Leibler divergence, but it isn't a true distance (it's asymmetric and doesn't satisfy the triangle inequality). I'd like to compare two algorithms this way and rank them by estimation error.

The distribution has a lot of keys (over 100,000), and they usually follow a zeta (Zipf) distribution.
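For concreteness, here is a minimal sketch (plain Python; dicts as sparse key-to-probability maps, function names are my own) of the KL divergence I mentioned, plus the Jensen-Shannon distance, which is one symmetric alternative that is an actual metric bounded in [0, 1] when base-2 logs are used:

```python
import math

def kl_divergence(p, q):
    """KL(P || Q), natural log, for dicts mapping key -> probability.

    Assumes q[k] > 0 wherever p[k] > 0; otherwise KL is infinite.
    Note: kl_divergence(p, q) != kl_divergence(q, p) in general.
    """
    return sum(pk * math.log(pk / q[k]) for k, pk in p.items() if pk > 0)

def js_distance(p, q):
    """Jensen-Shannon distance: sqrt of the JS divergence (base-2 logs).

    Symmetric, bounded in [0, 1], and a proper metric, so it can be
    used to rank estimation algorithms by error.
    """
    keys = set(p) | set(q)
    # Mixture distribution m = (p + q) / 2 over the union of keys.
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in keys}

    def kl2(a):
        # KL(a || m) in base 2; a.get handles keys missing from a.
        return sum(a[k] * math.log2(a[k] / m[k])
                   for k in keys if a.get(k, 0.0) > 0)

    return math.sqrt(0.5 * kl2(p) + 0.5 * kl2(q))
```

For example, with `p = {"a": 0.5, "b": 0.5}` and `q = {"a": 0.9, "b": 0.1}`, `kl_divergence(p, q)` and `kl_divergence(q, p)` differ, while `js_distance(p, q)` equals `js_distance(q, p)`.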