# Comparing the output of JAGS to conjugate analysis - normal with unknown mean and variance

#### idif

##### New Member
Dear Bayesians.

I'm just getting started in the Bayesian world, and I'm trying to build a simple model for estimating the mean and variance of a normal distribution. I assume that:
Code:
```r
y <- rnorm(100, 50, 4)  # this would be the data
mu0  <- 0    # prior for the mean
var0 <- 100  # prior for the variance
n0   <- 1    # the 'sample size' the prior is based on
nx   <- length(y)

# I continue by using the equations from Gelman to do the conjugate analysis
# (I assume that the 'sample size' on which the prior is based is 1).

mu1  <- (n0*mu0 + nx*mean(y))/(n0 + nx)  # posterior mean
var1 <- (n0*var0 + var(y)*nx + (n0*nx/(n0 + nx))*(mu0 - mean(y))^2)/(n0 + nx)  # posterior variance
meanUncertainty <- var1/(n0 + nx)  # variance of the posterior distribution of the mean

# I could also build distributions for var1 and mu1 as follows
# (rinvgamma comes from the invgamma package):
library(invgamma)
postVarDist  <- rinvgamma(10000, shape=(n0 + nx)/2, rate=(var1*(n0 + nx))/2)
postMeanDist <- rt(10000, n0 + nx)*sqrt(meanUncertainty) + mu1  # a shifted and scaled t distribution
```
The results show that mu1 is indeed close to 50, while var1 comes out around 40-50, so it fails to reproduce the sample variance of roughly 4^2 = 16.
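In case it helps, here is the whole conjugate calculation as one self-contained snippet (base R only; this is just a restatement of the code above, using the identity that an Inverse-Gamma(shape, rate) draw is the reciprocal of a Gamma(shape, rate) draw, so no rinvgamma is needed):

```r
# Conjugate normal analysis with unknown mean and variance, base R only.
set.seed(1)
y   <- rnorm(100, 50, 4)  # the data
mu0 <- 0; var0 <- 100     # prior mean and prior variance
n0  <- 1                  # 'sample size' the prior is based on
nx  <- length(y)

mu1  <- (n0*mu0 + nx*mean(y))/(n0 + nx)                                          # posterior mean
var1 <- (n0*var0 + var(y)*nx + (n0*nx/(n0 + nx))*(mu0 - mean(y))^2)/(n0 + nx)    # posterior variance
meanUncertainty <- var1/(n0 + nx)   # variance of the posterior distribution of the mean

# Inverse-gamma draws via 1/rgamma, then the t-based posterior for the mean
postVarDist  <- 1/rgamma(10000, shape=(n0 + nx)/2, rate=(var1*(n0 + nx))/2)
postMeanDist <- rt(10000, n0 + nx)*sqrt(meanUncertainty) + mu1

c(mu1 = mu1, var1 = var1, simVarMean = mean(postVarDist))
```

Running this reproduces what I describe: mu1 sits near 50, while var1 (and the mean of postVarDist) lands in the 40s rather than near 16.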

Now I build a JAGS model as follows:
Code:
```r
# The data list:
datalist <- list(mu0=0,
                 y=y,
                 var0=100,
                 meanUncertainty=100/1,  # var0/n0
                 n0=1,
                 nx=length(y))
```

The model:
```
model {
  for (i in 1:nx) {
    y[i] ~ dnorm(mu1, tau)  # dnorm in JAGS is parameterised by precision, not variance
  }
  mu1 ~ dnorm(mu0, 1/meanUncertainty)
  tau ~ dgamma(n0/2, (n0*var0)/2)
  postVar <- 1/tau
}
```
Now, even though I get essentially the same distribution for the mean, I get a different distribution for the variance, and that one is indeed closer to the sample value of 4^2.
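For reference, this is roughly how I run the model. A sketch using the rjags package (assuming JAGS itself is installed; the model is passed as a string via textConnection, and the data are regenerated so the snippet stands on its own):

```r
library(rjags)  # requires a system installation of JAGS

set.seed(1)
y <- rnorm(100, 50, 4)

datalist <- list(mu0=0, y=y, var0=100, meanUncertainty=100/1,
                 n0=1, nx=length(y))

modelString <- "
model {
  for (i in 1:nx) {
    y[i] ~ dnorm(mu1, tau)   # JAGS dnorm takes a precision
  }
  mu1 ~ dnorm(mu0, 1/meanUncertainty)
  tau ~ dgamma(n0/2, (n0*var0)/2)
  postVar <- 1/tau
}
"

jm <- jags.model(textConnection(modelString), data=datalist, n.chains=2)
update(jm, 1000)   # burn-in
samp <- coda.samples(jm, variable.names=c("mu1", "postVar"), n.iter=10000)
summary(samp)
```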

I'm really trying to figure out what I did wrong in either model. I noticed that if, in the analytical solution, I define var1=(n0*var0+var(y)*length(y))/(n0+nx) (thus ignoring the 'prediction error', i.e. the distance between mean(y) and mu0), I get similar results. What does that mean? Is the distribution I get from JAGS for the variance the marginal distribution, or is it P(var|mu)?

Any help would be very much appreciated.