Please forgive the slightly long-winded explanation; I could really do with some help here.

I have a computer simulation containing multiple variables, to which I apply random perturbations in order to simulate the possible error. The perturbations are drawn from a normal distribution. In experiment 1 I run the simulation 250 times with all variables perturbed, and end up with a mean and a variance of the output as the result.

I now perform experiment 2, where the random perturbations are applied to only one variable, and obtain another mean and variance from the 250 simulations.

If I want to know the effect that this variable has on the overall distribution, is it mathematically correct to divide the variance obtained in experiment 2 by the variance obtained in experiment 1?

If I then perform experiment 3, where I apply perturbations to only two variables (the same variable as in experiment 2 plus one additional variable), can I subtract the variance obtained in experiment 2 from the variance obtained in experiment 3 to give the error associated with the additional variable, and then divide this by the variance obtained in experiment 1 to give the fraction of the overall error that this variable contributes? Is this mathematically correct?
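To make the setup concrete, here is a toy sketch of the procedure in Python. The model, nominal values, and perturbation sizes are invented for illustration; my real simulation is of course more complicated:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 250  # simulation runs per experiment

# Hypothetical stand-in for the real simulation
def model(a, b, c):
    return 2.0 * a + 0.5 * b - c

base  = {"a": 1.0, "b": 2.0, "c": 3.0}   # nominal values (made up)
sigma = {"a": 0.1, "b": 0.2, "c": 0.05}  # perturbation std devs (made up)

def run(perturbed):
    """Run N simulations, perturbing only the named variables."""
    out = np.empty(N)
    for i in range(N):
        vals = {k: base[k] + (rng.normal(0.0, sigma[k]) if k in perturbed else 0.0)
                for k in base}
        out[i] = model(**vals)
    return out

var1 = run({"a", "b", "c"}).var(ddof=1)  # experiment 1: all variables perturbed
var2 = run({"a"}).var(ddof=1)            # experiment 2: one variable perturbed
var3 = run({"a", "b"}).var(ddof=1)       # experiment 3: two variables perturbed

frac_a = var2 / var1           # proposed fractional contribution of a
frac_b = (var3 - var2) / var1  # proposed fractional contribution of b
print(frac_a, frac_b)
```

This is exactly the ratio and subtraction I am asking about, just on a toy model where I can see the answer.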

I could then repeat the experiment several times, grouping variables together by their origins, in order to determine how much each group contributes to the overall error.

I have tried this, but when I sum the variances I always get an answer larger than the original variance obtained in experiment 1. I think this is because the perturbations applied to the individual variables partly cancel each other out, giving a smaller overall variance in experiment 1 than the sum of the individual components. If anyone can help with this problem I would be most grateful. I am trying to work out how to disentangle the errors without running lots and lots of experiments, i.e. one for each individual variable.
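For what it's worth, I can reproduce this effect with a toy model (again entirely made up): if one variable damps the simulation's sensitivity to another, then holding it fixed at its nominal value overstates the other variable's contribution, and the single-variable variances sum to more than the joint variance:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 250  # simulation runs per experiment

# Hypothetical model with an interaction: b damps the sensitivity to a
def model(a, b):
    return a * np.exp(-b**2)

def run(perturb_a, perturb_b):
    """N runs around the nominal point (a = 1, b = 0)."""
    a = 1.0 + (rng.normal(0.0, 1.0, N) if perturb_a else 0.0)
    b = 0.0 + (rng.normal(0.0, 1.0, N) if perturb_b else 0.0)
    return model(a, b)

var_joint = run(True, True).var(ddof=1)   # "experiment 1": both perturbed
var_a     = run(True, False).var(ddof=1)  # only a perturbed
var_b     = run(False, True).var(ddof=1)  # only b perturbed

# var_a + var_b comes out larger than var_joint, because fixing b = 0
# puts the model in the regime where it is most sensitive to a.
print(var_a + var_b, var_joint)
```

So it seems the individual variances need not add up to the joint variance once the variables interact, which is presumably what is happening in my real simulation.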

Thank you in advance

David