Q: How do you calculate the distribution of sample means?
The sample mean has the same mean as the population mean. If the population variance is s^2, then the sample mean has variance s^2/n, where n is the sample size. As n increases, the distribution of the sample mean gets closer to a Gaussian (i.e. Normal) distribution.

This is the Central Limit Theorem, which is important for hypothesis testing.
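As a minimal sketch of these two facts (hypothetical numbers: a Uniform(0, 12) population simulated with NumPy; none of these values come from the answer above), you can check empirically that the sample means centre on the population mean with variance s^2/n:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: Uniform(0, 12), so mean = 6 and variance = 12^2/12 = 12
mu, var = 6.0, 12.0
n = 30          # sample size
reps = 100_000  # number of samples drawn

# Draw many samples of size n and record each sample's mean
means = rng.uniform(0, 12, size=(reps, n)).mean(axis=1)

print(means.mean())  # ~6.0, the population mean
print(means.var())   # ~0.4
print(var / n)       # 0.4, the theoretical variance s^2/n

The larger you make n, the closer a histogram of these means gets to a bell curve.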

Continue Learning about Math & Arithmetic

Why do you need a sampling distribution?

In order to calculate the mean of the sample means and also the standard deviation of the sample means (the standard error).


How do you calculate the mean of the sampling distribution of the sample proportion?

The mean of the sampling distribution of the sample proportion is simply the population proportion, p.
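A minimal simulation sketch of that fact (hypothetical values p = 0.3 and n = 50; NumPy assumed):

import numpy as np

rng = np.random.default_rng(0)

p, n, reps = 0.3, 50, 100_000  # hypothetical population proportion and sample size

# Each sample proportion is a Binomial(n, p) count divided by n
phats = rng.binomial(n, p, size=reps) / n

print(phats.mean())              # ~0.3, the population proportion p
print(phats.std())               # ~0.0648
print((p * (1 - p) / n) ** 0.5)  # 0.0648, the theoretical sqrt(p(1-p)/n)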


The distribution of sample means consists of?

The means of all possible samples of a fixed size, n, drawn from the population, together with their probabilities.
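For a population small enough to enumerate, you can build this distribution exactly by listing every possible sample and its mean (a made-up three-value population, samples of size 2 drawn with replacement):

from itertools import product
from statistics import mean, pvariance

population = [2, 4, 6]  # made-up population: mean = 4, variance = 8/3
n = 2

# Every possible sample of size n drawn with replacement, and its mean
sample_means = [mean(s) for s in product(population, repeat=n)]

print(sorted(sample_means))     # [2, 3, 3, 4, 4, 4, 5, 5, 6]
print(mean(sample_means))       # 4, equal to the population mean
print(pvariance(sample_means))  # 4/3, equal to (population variance)/n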


Why is the normal probability distribution widely used in practice?

Suppose you have a random variable, X, with any distribution that has a finite mean and variance. Take a sample of n independent observations, X1, X2, ..., Xn, and calculate their mean. Repeat this process many times. Then, as the sample size n increases (with enough repeats to see the shape), the distribution of these means tends towards a normal distribution. This is the Central Limit Theorem. One consequence is that many common statistical measures have an approximately normal distribution.
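A minimal sketch of this convergence (a strongly skewed Exponential(1) population, NumPy assumed): if the sample means are approximately normal, about 95% of their standardized values should fall within 1.96 of zero.

import numpy as np

rng = np.random.default_rng(0)

n, reps = 50, 100_000
# Exponential with rate 1 is skewed: mean = 1, standard deviation = 1
means = rng.exponential(1.0, size=(reps, n)).mean(axis=1)

# Standardize each sample mean: (xbar - mu) / (sigma / sqrt(n))
z = (means - 1.0) / (1.0 / np.sqrt(n))

print(np.mean(np.abs(z) < 1.96))  # ~0.95, as a normal distribution predicts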


How do you calculate standard deviation without a normal distribution?

You calculate the standard deviation the same way as always: find the mean, sum the squares of the deviations of the observations from that mean, divide by N-1, and take the square root. This has nothing to do with whether you have a normal distribution or not. This is the sample standard deviation, in which the mean is estimated from the same data; the N-1 divisor reflects the loss of one degree of freedom in doing so. If you knew the mean a priori, you would divide by N instead of N-1.
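A minimal sketch of both calculations on made-up data (NumPy assumed):

import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # made-up sample
N = len(x)
xbar = x.mean()

# Sample standard deviation: mean estimated from the same data, divide by N-1
s = np.sqrt(((x - xbar) ** 2).sum() / (N - 1))
print(s)                  # ~2.138
print(np.std(x, ddof=1))  # same value, NumPy's N-1 version

# If the true mean were known a priori (say mu = 5), divide by N instead
mu = 5.0
print(np.sqrt(((x - mu) ** 2).sum() / N))  # 2.0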