Optimization of variables with significantly different standard errors.

Hi everyone! I'm running into the problem described in the title, and hopefully you guys can help. Sorry for the long post.

I've been given forecasted mean-variance data for about 10 different variables, and I need to run an optimization over those variables.

I also have historical data for those 10 variables. The historical means and variances are fairly different from the forecasts.

1. I want to use skew and kurtosis in my optimization because I know, both logically and from the historical data, that they exist in the 10 variables to different degrees. Would it be acceptable to bootstrap the historical data, compute the skew/kurtosis from it, and assume the same distribution shape for the forecast? Skew and kurtosis should be dimensionless, and I have no reason to think the shape of the distribution will change beyond the mean and variance. My thinking could be way off base.

2. The standard errors, i.e. the uncertainty, of each individual variable's forecast vary wildly. This is the main problem I'm trying to tackle. Right now my optimization loves the variables with the highest standard errors for both mean and variance. When I say highest standard errors, I'm talking in terms of the historical data, but the rationale for why these variables will continue to have high uncertainty holds for the forecast as well. I want to incorporate into my methodology something that penalizes variables that are less certain than the ones I feel really darn confident about. How can I do that? I can compute the historical standard errors; can they be used in some form?

3. Similar to #1: can I use the historical data, for example via a bootstrap, and apply those standard errors to a forecasted mean and variance?
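For what it's worth, the bootstrap in #1 and #3 could be sketched roughly like this (all names and the placeholder data are my own illustration, not from any particular library convention): resample the historical series with replacement, and take the spread of the resampled moment estimates as their standard errors.

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
hist = rng.standard_t(df=5, size=500)  # placeholder historical series for one variable

n_boot = 2000
means, variances, skews, kurts = [], [], [], []
for _ in range(n_boot):
    sample = rng.choice(hist, size=hist.size, replace=True)
    means.append(sample.mean())
    variances.append(sample.var(ddof=1))
    skews.append(skew(sample))
    kurts.append(kurtosis(sample))  # excess kurtosis by default

# Bootstrap point estimates of the (dimensionless) shape parameters,
# which #1 proposes carrying over to the forecast distribution...
skew_hat, kurt_hat = np.mean(skews), np.mean(kurts)
# ...and bootstrap standard errors of the mean and variance estimates,
# which #3 asks about applying to the forecasted mean and variance.
se_mean, se_var = np.std(means, ddof=1), np.std(variances, ddof=1)
```

You would repeat this per variable; whether the historical standard errors transfer to the forecast is exactly the judgment call being asked about.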

The way I'm doing it now (hence why I'm here) is using the skew/kurtosis from the bootstrap together with the forecasted mean and variance in the optimization. To factor in the uncertainty, I take the standard errors from the bootstrap, subtract them from the means, and add them to the variances. Ew. Actually, even worse: I'm standardizing the standard errors around the variable I'm most confident in, so that variable doesn't change, but the rest change by the difference between their standard error and the standard error of the variable I'm most confident in. Double ew.