Standard deviation in uncertainty analysis

A Monte Carlo analysis is performed to propagate uncertainties through a chosen model. A large number of input points is first drawn at random from the parameters' PDFs, and each point is then used to evaluate the output quantity. Finally, the dispersion of the computed output values is used to obtain both the estimate of the output quantity and its uncertainty, where the latter is taken to be the standard deviation of the set of computed values.
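The procedure above can be sketched as follows; the model z = x * y and the input PDFs are purely illustrative assumptions, not part of the original question:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Hypothetical input PDFs (assumed for illustration only)
x = rng.normal(10.0, 0.5, N)   # parameter 1: mean 10, SD 0.5
y = rng.normal(2.0, 0.1, N)    # parameter 2: mean 2, SD 0.1

# Hypothetical model: evaluate the output for every sampled input point
z = x * y

# Estimate = mean of the computed outputs; uncertainty = their standard deviation
estimate = z.mean()
uncertainty = z.std(ddof=1)
print(f"z = {estimate:.3f} +/- {uncertainty:.3f}")
```

For this toy model the sample SD should come out near sqrt((10*0.1)^2 + (2*0.5)^2) ≈ 1.41, which is what first-order propagation would predict.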

The output data follow a normal distribution, and I need clarification on best practices for how many standard deviations to report on either side of the mean when quantifying the uncertainty. Is it adequate to state the results as mean ± 1 SD, corresponding to about 68% coverage, and mean ± 2 SD, corresponding to about 95% coverage?
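One way to sanity-check those coverage levels is to count, empirically, what fraction of the Monte Carlo outputs falls within 1 and 2 SDs of the mean. The standard-normal samples below are a stand-in for the actual output data:

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(0.0, 1.0, 200_000)  # stand-in for the Monte Carlo output samples

m, s = z.mean(), z.std(ddof=1)
fractions = {}
for k in (1, 2):
    # Fraction of samples inside the interval mean +/- k*SD
    fractions[k] = np.mean(np.abs(z - m) < k * s)
    print(f"within {k} SD: {fractions[k]:.3%}")
```

For normally distributed output this should land close to the theoretical coverages of 68.27% (1 SD) and 95.45% (2 SD); an exact 95% interval would use ±1.96 SD.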