I have a mean (mu) and its standard error (se) of variable X from the literature. I have a dataset of size n which doesn't have variable X. Now I want to use mu and se to simulate X for each of the n observations in my dataset assuming normal distribution. So I should use mu and se*√n as the parameters, am I right?

What do I do with the observations whose simulated values are out of bounds? For example, the logical values for X are between 1 and 10, but the simulated data can be <1 or >10. Is it legitimate if I simply reject the out-of-bounds value and do another random draw for that observation, until all values are between 1 and 10?

1. If the standard error you received is the standard error of the sample mean, then you do have the relationship you mentioned. Note, though, that the n in sd = se·√n must be the sample size used to compute that se (i.e. the original survey's sample size), not the size of your own dataset.

2. Why use a normal distribution to model a random variable with finite support? Without an answer to this I cannot comment on whether it is "legitimate". Discarding the out-of-bounds values in a repeat-until loop means you are sampling from a truncated normal distribution.
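To see that the reject-and-redraw scheme is exactly truncated-normal sampling, here is a minimal sketch (the values of mu, sd, and the bounds are hypothetical placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sd = 5.0, 2.0      # hypothetical mean and standard deviation for X
lo, hi = 1.0, 10.0     # logical bounds for X
n = 1000

# Rejection sampling: redraw any value outside [lo, hi] until all are in bounds.
# The accepted draws follow a normal distribution truncated to [lo, hi].
x = rng.normal(mu, sd, n)
out = (x < lo) | (x > hi)
while out.any():
    x[out] = rng.normal(mu, sd, out.sum())
    out = (x < lo) | (x > hi)

print(x.min() >= lo and x.max() <= hi)  # True: every draw is in bounds
```

One consequence to keep in mind: the truncated sample's mean and variance will not equal mu and sd exactly, because truncation shifts the mean toward the centre of the interval and shrinks the spread.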

There may be other distributions better than the truncated normal in your situation, but again it depends on your constraints, objectives, etc.
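As one illustration of such an alternative: a Beta distribution rescaled to [1, 10] has the right support and can be fitted to the two moments you have by the method of moments. The mu and sd below are hypothetical placeholders, and this is only a sketch, not a recommendation:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sd = 5.0, 2.0          # hypothetical mean and sd of X on the 1-10 scale
lo, hi = 1.0, 10.0

# Rescale the target moments to [0, 1] and match them to a Beta(a, b).
m = (mu - lo) / (hi - lo)
v = sd**2 / (hi - lo)**2
assert v < m * (1 - m), "these moments cannot come from a Beta on this support"
k = m * (1 - m) / v - 1
a, b = m * k, (1 - m) * k

# Draw from Beta(a, b) and map back onto the 1-10 scale.
x = lo + (hi - lo) * rng.beta(a, b, size=1000)
print(round(x.mean(), 2), round(x.std(), 2))  # close to mu and sd
```

Unlike the truncated normal, this construction reproduces the target mean and standard deviation exactly (in expectation), since the bounds are built into the distribution rather than imposed by rejection.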

Thanks BGM for your reply.
Is there another distribution I can specify, given that the only information I have is the population mean and se, plus the sample size of my own dataset? The only constraint I have is the lower and upper bounds (1, 10): the data should lie on a 1-to-10 scale. I don't care whether the simulated values are integers or have decimals; I can round them to integers. I know the best scenario would be to have estimates of a multinomial distribution p1-p10 and simulate data from that, but I don't have them. All I have is the mean, treating the scale as a continuous variable, and its se.
Could you please advise? Thanks so much!

By the way, I forgot another question. The standard error (se) I have should be the standard error of the population mean, because it comes from a nationally representative survey. Am I still right to use se*√n as the standard deviation if I assume a normal distribution, or should I just use se? Thanks again.

As I understand it, "standard error" always refers to the standard deviation of an estimator, whereas "standard deviation" alone refers to that of the original sample.

In most contexts it should be alright.

Again, you have to study your data and try to find out its nature. The rationale behind the choice of model can range from very theoretical considerations to empirical results.

E.g. are the data categorical/ordinal (or can they even be treated as continuous)?

What are the usual choices in the literature when modelling a similar situation?

Thanks BGM. The data should be ordinal. If I assume a normal distribution with se*√n as the standard deviation, a lot of data points will be out of bounds. So maybe I should just give up on this approach. Thanks again.
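For what it's worth, the share of draws that a plain normal would put outside [1, 10] can be computed directly from the normal CDF before simulating anything. The mu and sd below are hypothetical placeholders chosen to mimic a large recovered sd:

```python
from math import erf, sqrt

mu, sd = 5.0, 4.0   # hypothetical: a large sd recovered as se * sqrt(n)
lo, hi = 1.0, 10.0

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Probability mass a plain Normal(mu, sd) puts outside the logical bounds.
p_out = Phi((lo - mu) / sd) + (1.0 - Phi((hi - mu) / sd))
print(round(p_out, 3))  # about 0.264 for these parameters
```

If this fraction is large, rejection sampling will distort the distribution substantially (and waste many draws), which supports the conclusion that a plain or truncated normal is a poor fit here.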