Best Answer

According to the Central Limit Theorem, the mean of a sufficiently large number of independent random variables, each with a well-defined mean and a well-defined variance, is approximately normally distributed.


The necessary requirements are: the variables must be independent, each must have a well-defined mean and a well-defined variance, and the number of variables must be sufficiently large.
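The theorem can be illustrated with a short simulation. The following is a minimal Python sketch (the uniform distribution, the sample size, and the seed are arbitrary illustrative choices): draw repeated samples from a decidedly non-normal uniform(0, 1) variable and watch the sample means cluster around the true mean 0.5 with spread sigma/sqrt(n).

```python
import random
import statistics

# Sketch: sample means of a uniform(0, 1) variable (itself flat, not
# bell-shaped) behave as the Central Limit Theorem predicts.
random.seed(42)

n = 50          # observations per sample
trials = 2000   # number of sample means to collect

means = [statistics.fmean(random.random() for _ in range(n))
         for _ in range(trials)]

# Uniform(0, 1): mu = 0.5, sigma^2 = 1/12, so the sd of the sample
# mean should be sqrt(1 / (12 * n)) ~= 0.041.
print(round(statistics.fmean(means), 3))
print(round(statistics.stdev(means), 3))
```

The observed mean and spread of `means` should land close to 0.5 and 0.041; a histogram of `means` would show the familiar bell shape even though each underlying draw is uniform.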

Wiki User

11y ago

Q: How do you see that the sampling distribution of the mean is normal?
Continue Learning about Other Math

What is normal distribution in statistics?

The normal distribution is a statistical distribution. Many naturally occurring variables approximately follow the normal distribution: examples are people's heights and weights. The sum of independent, identically distributed variables - whatever their own underlying distribution - will tend towards the normal distribution as the number of terms in the sum increases. This means that the mean of repeated measures of ANY variable (with finite variance) will approach the normal distribution. Furthermore, some distributions that are not normal to start with can be converted to normality through simple transformations of the variable. These characteristics make the normal distribution very important in statistics. See the attached link for more.


State the main reason for using the empirical rule rather than Chebyshev's theorem?

The empirical rule can only be used for a normal distribution, so I will assume you are referring to a normal distribution. Chebyshev's theorem can be used for any distribution. The empirical rule is more accurate than Chebyshev's theorem for a normal distribution. For 2 standard deviations (sd) from the mean, the empirical rule says 95% of the data are within that range, while Chebyshev's theorem guarantees only 1 - 1/2^2 = 1 - 1/4 = 3/4, or 75%. From the standard normal distribution table, the exact answer for 2 sd from the mean is 95.44%. So, as you can see, the empirical rule is more accurate.
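The comparison can be checked directly. A minimal Python sketch using the standard library's NormalDist (the choice of k values is illustrative):

```python
from statistics import NormalDist

# Compare Chebyshev's distribution-free lower bound with the exact
# probability, for a normal variable, of landing within k sd of the mean.
Z = NormalDist()  # standard normal: mean 0, sd 1

for k in (2, 3):
    chebyshev = 1 - 1 / k**2          # holds for ANY distribution
    exact = Z.cdf(k) - Z.cdf(-k)      # exact for the normal
    print(k, round(chebyshev, 4), round(exact, 4))
# For k = 2: Chebyshev guarantees 0.75, while the normal gives ~0.9545.
```

Chebyshev's bound is the price paid for working with no distributional assumptions at all; once normality is assumed, the much tighter empirical-rule figures apply.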


What is the z score of a value that is the mean of a set of data?

z = 0 and P(X < x) = 0.5. Explanation: z = (x - xbar)/sd, where xbar is the estimated mean or average of the sample, sd is the standard deviation, and x is the value of the particular outcome. We change x to z so that we can use the standard normal or t tables, which are based on a mean of zero and a standard deviation of 1. For example: what is the probability that a value of the distribution is 5 or less, given the sample average is 5 and the sd is 2? The z-score would be (5 - 5)/2, which is equal to 0. The probability, if we assume the normal or t distribution, is 0.50 (see normal distribution tables). The normal distribution is symmetrical: per the example, a sample average of 5 tells you there is an equal chance of a value falling above or below 5.
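The worked example translates directly into code. A minimal Python sketch (the numbers are the ones from the example above):

```python
from statistics import NormalDist

# z-score for the example: x = 5, sample mean xbar = 5, sd = 2.
x, xbar, sd = 5.0, 5.0, 2.0
z = (x - xbar) / sd              # standardize the value
p = NormalDist().cdf(z)          # P(Z <= z) under the standard normal
print(z, p)                      # 0.0 0.5
```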


What is the purpose of a stem and leaf plot?

To simply organise your numbers. If you can make a histogram, a dotplot, or even a boxplot, there is little reason to do a stem-and-leaf plot; it is the weakest of those graphs. With a stem-and-leaf plot you can see the distribution of data points and judge whether or not it is roughly normal. As mentioned above, though, there are better graphs for doing that.


How can you approximate a binomial distribution to a poison distribution when the number of binomial trials became large enough?

The Poisson distribution with parameter np will be a good approximation to the binomial distribution with parameters n and p when n is large and p is small. For more details, see the related link below.
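A minimal Python sketch of the approximation (n, p, and the range of k shown are arbitrary illustrative choices):

```python
import math

# With n large and p small, Binomial(n, p) pmf ~= Poisson(np) pmf.
n, p = 1000, 0.003          # np = 3
lam = n * p

def binom_pmf(k: int) -> float:
    # P(X = k) for X ~ Binomial(n, p)
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k: int) -> float:
    # P(Y = k) for Y ~ Poisson(lam)
    return math.exp(-lam) * lam**k / math.factorial(k)

for k in range(5):
    print(k, round(binom_pmf(k), 5), round(poisson_pmf(k), 5))
# The two columns agree to about three decimal places.
```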

Related questions

How do you find the confidence intervals?

See: http://en.wikipedia.org/wiki/Confidence_interval - it includes a worked example for the confidence interval of the mean of a distribution. In general, confidence intervals are calculated from the sampling distribution of a statistic. If the mean of n independent, identically distributed normal random variables is standardized using the sample standard deviation, its sampling distribution is the t distribution with n - 1 degrees of freedom.
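As a sketch, here is a large-sample interval computed with Python's standard library. The data are hypothetical, and the normal quantile is used for simplicity; for small samples the t quantile (not available in the standard library) is slightly wider and should be used instead.

```python
from statistics import NormalDist, fmean, stdev

# Hypothetical sample, used only to illustrate the computation.
data = [4.8, 5.1, 5.0, 4.9, 5.3, 5.2, 4.7, 5.0, 5.1, 4.9]
n = len(data)
xbar, s = fmean(data), stdev(data)

z = NormalDist().inv_cdf(0.975)      # ~1.96 for a 95% interval
half = z * s / n**0.5                # half-width of the interval
print(round(xbar - half, 3), round(xbar + half, 3))
```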




What are the types of normal distribution?

Generally, when we refer to the normal distribution, it is the standard, univariate normal distribution. We don't have a normal type 1, type 2, etc. However, there are closely related distributions: the truncated normal and the multivariate normal. A truncated multivariate normal would also be possible. See related links.


What do you mean by gaussian?

The Gaussian distribution is the same as the normal distribution. Sometimes "Gaussian" is used as in "Gaussian noise" and "Gaussian process." See related links. Interestingly, Gauss did not first derive this distribution; that honor goes to de Moivre in 1733.


What is the meaning of normal distribution with examples?

According to the Central Limit Theorem, if you take measurements of some variable from repeated samples from any population, the mean values have a probability distribution known as the Gaussian distribution. Because it is encountered so often, it is also called the normal distribution. It is a symmetric distribution which is fully determined by two parameters: the mean and the variance (or standard deviation). It is also sometimes referred to as the bell curve, although I have yet to see a bell that stretches out at its bottom towards infinity! The normal distribution can be used for the heights or masses of people, or for examination scores.


Concept of Probability sampling and chi square test?

A probability sampling method is any method of sampling that utilizes some form of random selection. See: http://www.socialresearchmethods.net/kb/sampprob.php The simple random sample is an assumption when the chi-square distribution is used as the sampling distribution of the calculated variance (s^2). The second assumption is that the particular variable is normally distributed. It may not be in the sample, but it is assumed that the variable is normally distributed in the population. For a very good discussion of the chi-square test, see: http://en.wikipedia.org/wiki/Pearson%27s_chi-square_test


What is the importance of the mean and standard deviation in the use of the normal distribution?

For data sets having a normal distribution, the following properties depend on the mean and the standard deviation. This is known as the empirical rule. About 68% of all values fall within 1 standard deviation of the mean; about 95% of all values fall within 2 standard deviations of the mean; about 99.7% of all values fall within 3 standard deviations of the mean. So given any value, the mean and the standard deviation tell you right away where that value sits relative to 68, 95 and 99.7 percent of the other values. The mean of any distribution is a measure of centrality, but in the case of the normal distribution it is also equal to the mode and median of the distribution. The standard deviation is a measure of data dispersion or variability. In the case of the normal distribution, the mean and the standard deviation are the two parameters of the distribution, and therefore they completely define it. See: http://en.wikipedia.org/wiki/Normal_distribution
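The 68-95-99.7 figures can be checked by simulation. A minimal Python sketch (the mean, standard deviation, sample size, and seed are arbitrary illustrative choices):

```python
import random

# Draw from a normal distribution and count how many values fall
# within 1, 2, and 3 standard deviations of the mean.
random.seed(0)
mu, sigma = 10.0, 2.0
xs = [random.gauss(mu, sigma) for _ in range(100_000)]

for k in (1, 2, 3):
    frac = sum(abs(x - mu) < k * sigma for x in xs) / len(xs)
    print(k, round(frac, 3))
# Expect fractions of roughly 0.683, 0.954, and 0.997.
```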


Is the normal distribution easier to calculate with than other distributions?

We mostly prefer the normal distribution because much of the data around us, such as heights and weights, approximately follows it. We can check this by plotting a histogram and looking for the bell curve. Most importantly, by the CLT (central limit theorem) and the law of large numbers, we can say that when n is large the sample mean is approximately normally distributed.


What is the difference between beta and normal distribution?

The probability density functions differ in shape and in domain. The domain of the beta distribution is from 0 to 1, while the normal goes from negative infinity to positive infinity. The shape of the normal is always a symmetrical bell with inflection points on either side of the mean. The beta distribution can take a variety of shapes: a symmetrical half circle, an inverted (cup-up) half circle, or asymmetrical shapes. The normal distribution has many applications in classical hypothesis testing; the beta has many applications in Bayesian analysis. The uniform distribution is a special case of the beta distribution. See related links.
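The variety of beta shapes can be seen by evaluating the density directly. A minimal Python sketch using the standard beta density formula (the (a, b) pairs are arbitrary illustrative choices):

```python
import math

# Beta(a, b) density on (0, 1):
#   f(x) = Gamma(a + b) / (Gamma(a) * Gamma(b)) * x^(a-1) * (1-x)^(b-1)
def beta_pdf(x: float, a: float, b: float) -> float:
    coef = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return coef * x**(a - 1) * (1 - x)**(b - 1)

# Evaluate at the midpoint for a few parameter choices:
# (2, 2) is symmetric and hump-shaped, (0.5, 0.5) is U-shaped,
# and (2, 5) is skewed with most of its mass left of 0.5.
for a, b in [(2, 2), (0.5, 0.5), (2, 5)]:
    print(a, b, round(beta_pdf(0.5, a, b), 4))
```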




Will a population mean and sample mean always be identical?

The sample mean will seldom be the same as the population mean due to sampling error. See the related link.


Normal distribution as limiting form of binomial?

The de Moivre-Laplace theorem. Please see the link.