Adding variances and getting standard dev from results

If I have unrelated datasets, can I find the variance of each, add them, and then take the square root to get the combined standard deviation? Thank you!
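For independent (uncorrelated) variables, variances do add, so the standard deviation of the sum is the square root of the summed variances. A minimal sketch with made-up numbers (the two datasets here are purely illustrative):

```python
import statistics

# Two illustrative, unrelated datasets (made-up numbers)
a = [0.12, 0.04, 0.06]
b = [0.25, 0.20, 0.10]

var_a = statistics.variance(a)  # sample variance of a
var_b = statistics.variance(b)  # sample variance of b

# For independent variables, Var(A + B) = Var(A) + Var(B),
# so the SD of the sum is the sqrt of the summed variances.
combined_sd = (var_a + var_b) ** 0.5
print(combined_sd)
```

Note this gives the SD of the *sum* of one draw from each dataset, which is not the same as pooling all the values into one dataset and taking its SD.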


Ambassador to the humans
What are you looking to find the standard deviation of, exactly? Do you want to know what the standard deviation would be if you literally ignored the grouping, considered all the data as a single dataset, and then calculated the standard deviation? Or are you interested in the standard deviation of a new random variable that is the sum of a random element from each dataset?
Dason, thanks for responding. Here is the problem. I hope you can help:

I have two columns of data
Column A is returns
Column B is a percent that will be removed from the returns
Column C is the returns adjusted down after the percent is removed. This column may not be useful.

Column A's standard deviation shows risk to the average return of column A, because it is harder to know what the average return will be as Stdev increases.

Column B's standard deviation does the same thing, since these amounts are going to be removed from A's returns, but we are not sure year to year what amount since they vary.

If I were just adjusting the returns for risk, I would just find the stdev of the returns and do Returns / SD. But I need to account for both of these standard deviations, then apply that collective standard deviation to either (1) the adjusted returns (col. C) or (2) the original returns (col. A), whichever is appropriate for showing the double risk of A's and B's impact on returns in terms of SD: RET / (SDa + SDb).

Or perhaps I just subtract Column B's % from Column A and find the stdev of this adjusted column?

Not sure of the appropriate way to do this.



Ambassador to the humans
It's still not clear to me what you're actually trying to calculate the standard deviation of. It sounds like you're not entirely clear on that either. If you don't know what you want, it's very hard for me to do anything.
I do know, I'm just unsure of how.
Maybe an example will clarify?

An investment over 3 years returned 12%, 4%, 6%.
The mean is 7.3%
The Stdev is .042

Where standard deviation = risk, the risk-adjusted return is 7.3 / .042.
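Those figures can be checked quickly; the .042 above matches the sample standard deviation, so that's what's used here:

```python
import statistics

returns = [0.12, 0.04, 0.06]             # the three yearly returns

mean_return = statistics.mean(returns)    # about 0.0733, i.e. 7.3%
sd_return = statistics.stdev(returns)     # sample SD, about 0.042
risk_adjusted = mean_return / sd_return   # the 7.3 / .042 ratio
print(mean_return, sd_return, risk_adjusted)
```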
We assume these returns are investor profits, unless something else detracts from them.

Something does.

It is required that, over the same 3 years:
Year 1: 25% of 12% return is taken, leaving a net return of 9
Year 2: 20% of 4% is taken, leaving a net return of 3.2
Year 3: 10% of 6% is taken, leaving a net return of 5.4

The amount taken is independent of the returns.
The amount taken over the three years has an SD of .076.
This standard deviation (or variance) also represents a risk to returns: just as we cannot know our returns, we cannot know how much will be taken year to year.
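The per-year arithmetic and the .076 figure (sample SD of the fractions taken) can be reproduced like this:

```python
import statistics

returns = [0.12, 0.04, 0.06]   # gross returns by year
taken = [0.25, 0.20, 0.10]     # fraction of each return that is taken

# Net return = gross return minus the fraction taken of it,
# e.g. year 1: 12% - 25% of 12% = 9%
net = [r * (1 - t) for r, t in zip(returns, taken)]

sd_taken = statistics.stdev(taken)  # sample SD of fractions taken, ~0.076
print(net, sd_taken)
```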

Because of this, our risk-adjusted net returns must account for this standard deviation as well as the former. How do I express this?
Intuitively I think it's mean of original return / sqrt(VARa + VARb), or mean of net return / sqrt(VARa + VARb).
I'm just not certain.
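A sketch of that intuition, assuming the two sources of risk are independent so their variances can be added. Both candidate numerators from the post are shown; note this mirrors the formula as written, combining the SD of the returns with the SD of the fractions taken, and which numerator (or combination) is actually appropriate is exactly the open question:

```python
import statistics

returns = [0.12, 0.04, 0.06]   # gross returns by year
taken = [0.25, 0.20, 0.10]     # fraction of each return taken
net = [r * (1 - t) for r, t in zip(returns, taken)]

# Combined SD = sqrt(VARa + VARb), assuming independence
combined_sd = (statistics.variance(returns) + statistics.variance(taken)) ** 0.5

ratio_gross = statistics.mean(returns) / combined_sd  # original returns
ratio_net = statistics.mean(net) / combined_sd        # net returns
print(combined_sd, ratio_gross, ratio_net)
```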