Best Answer

Both tell you how widely spread out, or how tightly concentrated about the mean, the observations are. The variance is expressed in squared units of the data, while the standard deviation (its square root) is in the original units.
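As a minimal sketch (with made-up data), both quantities can be computed with Python's standard library; the variance comes out in squared units and the standard deviation back in the original units:

```python
# Variance and standard deviation both quantify spread about the mean:
# variance in squared units, standard deviation in the original units.
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]   # illustrative values

mean = statistics.fmean(data)      # 5.0
var = statistics.pvariance(data)   # population variance: 4.0
sd = statistics.pstdev(data)       # population standard deviation: 2.0
print(mean, var, sd)
```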

Wiki User ∙ 14y ago
Q: What do both variance and the standard deviation tell you about a distribution?
Continue Learning about Math & Arithmetic

Why is the standard deviation of a distribution of means smaller than the standard deviation of the population from which it was derived?

The reason the standard deviation of a distribution of means is smaller than the standard deviation of the population from which it was derived is actually quite logical. Keep in mind that standard deviation is the square root of variance. Variance is quite simply an expression of the variation among values in the population.

Each mean within the distribution of means is computed from a sample of values taken randomly from the population. While it is possible for a random sample of multiple values to have come from one extreme or the other of the population distribution, it is unlikely. Generally, each sample will consist of some values on the lower end of the distribution, some from the higher end, and most from near the middle. In most cases, the values (both extremes and middle values) within each sample will balance out and average out to somewhere toward the middle of the population distribution. So the mean of each sample is likely to be close to the mean of the population and unlikely to be extreme in either direction.

Because the majority of the means in a distribution of means will fall closer to the population mean than many of the individual values in the population, there is less variation among the distribution of means than among individual values in the population from which it was derived. Because there is less variation, the variance is lower, and thus the square root of the variance - the standard deviation of the distribution of means - is less than the standard deviation of the population from which it was derived.
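The effect can be checked with a small simulation (a hedged sketch with an assumed population of mean 50 and standard deviation 10, and samples of size 30): the standard deviation of the sample means comes out close to the population standard deviation divided by √30 ≈ 1.83.

```python
# Means of random samples vary far less than individual population values.
import random
import statistics

random.seed(0)
population = [random.gauss(50, 10) for _ in range(100_000)]
sample_means = [statistics.fmean(random.sample(population, 30))
                for _ in range(2_000)]

print(statistics.pstdev(population))    # close to 10
print(statistics.pstdev(sample_means))  # close to 10 / sqrt(30), i.e. ~1.83
```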


If outliers are added to a dataset how would the variance and standard deviation change?

They would both increase.
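A quick sketch with illustrative numbers shows the effect: appending a single outlier raises both the variance and the standard deviation.

```python
# Adding an outlier increases both variance and standard deviation.
import statistics

data = [10, 11, 12, 13, 14]          # tight cluster
with_outlier = data + [100]          # one extreme value appended

print(statistics.pvariance(data), statistics.pstdev(data))
print(statistics.pvariance(with_outlier), statistics.pstdev(with_outlier))
```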


What does a probability distribution with high variance indicate?

Several things can produce a large variance: a wide spread in the data, one or more outliers, or a combination of both.


How does the standard normal distribution differ from the t-distribution?

The normal distribution and the t-distribution are both symmetric bell-shaped continuous probability distribution functions. The t-distribution has heavier tails: the probability of observations further from the mean is greater than for the normal distribution. There are other differences in terms of when it is appropriate to use them. Finally, the standard normal distribution is a special case of a normal distribution such that the mean is 0 and the standard deviation is 1.
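The heavier tails can be illustrated numerically (a self-contained sketch using only the standard library, integrating the densities with a crude trapezoidal rule): the probability of falling more than 2 units above the mean is roughly 2.3% under the standard normal but about 5.1% under a t-distribution with 5 degrees of freedom.

```python
import math

def t_pdf(x, df):
    # Student-t probability density with df degrees of freedom
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def norm_pdf(x):
    # standard normal probability density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def tail_prob(pdf, lo=2.0, hi=60.0, n=50_000):
    # crude trapezoidal estimate of P(X > lo); the tail beyond hi is negligible
    h = (hi - lo) / n
    s = 0.5 * (pdf(lo) + pdf(hi))
    for i in range(1, n):
        s += pdf(lo + i * h)
    return s * h

norm_tail = tail_prob(norm_pdf)             # ~0.0228
t_tail = tail_prob(lambda x: t_pdf(x, 5))   # ~0.0510: heavier tail
print(norm_tail, t_tail)
```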


Can expected values be infinite? How about the variance?

The answer to both questions is yes. For example, the St. Petersburg lottery has an infinite expected value, and a Pareto distribution with shape parameter between 1 and 2 has a finite mean but infinite variance.

Related questions

Why would a favorable price variance for material might be the cause of unfavorable quantity variance?

A favorable or unfavorable price variance does not affect your quantity variance. The reason you would see a favorable price variance together with an unfavorable quantity variance is that you consumed more materials than your standard allows AND the price you paid for those materials was less than your standard price. If you had paid more than your standard price, you would have experienced an unfavorable variance in both quantity and price.


Why standard deviation is more often used than variance?

Both variance and standard deviation are measures of dispersion, or variability, in a set of data. They both measure how far the observations are scattered away from the mean (or average). While computing the variance, you compute the deviation of each observation from the mean, square it, and sum all of the squared deviations. This somewhat exaggerates the true picture because the numbers become large when you square them. So we take the square root of the variance (to compensate for the excess), and this is known as the standard deviation. The standard deviation also has the advantage of being in the same units as the data. This is why the standard deviation is more often used than the variance, even though the standard deviation is just the square root of the variance.
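A small example (with made-up heights in centimetres) makes the units point concrete: the variance comes out in cm², the standard deviation back in cm.

```python
# Variance is in squared units (cm^2); its square root, the standard
# deviation, is back in the data's own units (cm).
import statistics

heights_cm = [150, 160, 170, 180, 190]   # illustrative values

var = statistics.pvariance(heights_cm)   # 200 (cm^2)
sd = statistics.pstdev(heights_cm)       # ~14.14 (cm)
print(var, sd)
```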


What is the purpose of doing the z-score?

The Normal distribution is frequently encountered in real life. Often a matter of interest is how likely it is that the random variable (RV) being studied takes a value that it did, or one that is more extreme. This requires a comparison of the observed value of the RV with its Normal distribution. Unfortunately, the general Normal distribution is extremely difficult to calculate. The Normal distribution is defined by two parameters: the mean and the variance (or standard deviation). It is impossible to tabulate the distribution of every possible combination of two parameters, both of which are continuous real numbers. However, using the z-score reduces the problem to tabulating only one Normal distribution: the N(0, 1), or standard Normal distribution. This allows the analysis of an RV with any Normal distribution.
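A minimal sketch of the idea (with assumed values: an observation of 112 from an N(100, 8²) distribution): standardise with the z-score, then evaluate the one standard Normal distribution, here via the error function rather than a printed table.

```python
import math

def z_score(x, mu, sigma):
    # distance of x from the mean, in standard-deviation units
    return (x - mu) / sigma

def std_normal_cdf(z):
    # Phi(z) for the standard Normal, via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

z = z_score(112, 100, 8)    # 1.5
p = std_normal_cdf(z)       # ~0.9332: P(X <= 112)
print(z, p)
```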


Why is the mean the standard partner of the standard deviation?

The mean and standard deviation often go together because they both describe different but complementary things about a distribution of data. The mean can tell you where the center of the distribution is and the standard deviation can tell you how much the data is spread around the mean.


Calculate the standard deviation of a portfolio with two securities: the first, A, has proportion 0.39 and variance 160; the second, B, has proportion 0.61 and variance 340; the covariance of both is 190?

[(0.39)² × 160 + (0.61)² × 340 + 2 × 0.39 × 0.61 × 190]^0.5 = √241.252 ≈ 15.5323
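The same arithmetic, as a short Python check (variable names are illustrative):

```python
# Two-asset portfolio variance: w_a^2*var_a + w_b^2*var_b + 2*w_a*w_b*cov
import math

w_a, var_a = 0.39, 160
w_b, var_b = 0.61, 340
cov_ab = 190

port_var = w_a**2 * var_a + w_b**2 * var_b + 2 * w_a * w_b * cov_ab
port_sd = math.sqrt(port_var)
print(round(port_sd, 4))   # 15.5323
```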



What is a 1.66 standard deviation in percentage?

The standard deviation is a measure of spread in a distribution, and 1.66 sd is simply a multiple of that interval. What it represents, in percentage terms, depends on the distribution, and on whether the 1.66 sd lies on one side of the mean or on both. In view of the missing information, there can be no simple answer.
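If the distribution happens to be normal, the figures can be sketched as follows (using the error function for the standard normal CDF): about 95.2% of values lie below the mean plus 1.66 sd, and about 90.3% lie within ±1.66 sd of the mean.

```python
import math

def std_normal_cdf(z):
    # Phi(z) for the standard normal, via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

z = 1.66
one_sided = std_normal_cdf(z)                        # ~0.9515
two_sided = std_normal_cdf(z) - std_normal_cdf(-z)   # ~0.9031
print(one_sided, two_sided)
```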


When do you use n-2 degrees of freedom?

With n observations, it could be when 2 distributional parameters have been estimated from the data. Often these may be the mean and variance (or standard deviation) when they are both unknown.
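One common case where n − 2 degrees of freedom appear is simple linear regression, where two parameters (slope and intercept) are estimated from the data, so the residual variance is divided by n − 2. A hedged sketch with made-up points:

```python
# Simple linear regression by hand: two estimated parameters (slope and
# intercept) leave n - 2 degrees of freedom for the residual variance.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]   # illustrative, roughly linear data
n = len(xs)

xbar = sum(xs) / n
ybar = sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar

ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
resid_var = ss_res / (n - 2)      # divide by n - 2, not n
print(slope, intercept, resid_var)
```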