Best Answer

Any non-negative real value: s ≥ 0. The standard deviation is the square root of an average of squared deviations, so it can never be negative, and it equals 0 exactly when every value in the data set is the same.
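To illustrate with a minimal Python sketch (standard library only; the example data sets are made up):

```python
from statistics import pstdev

# Constant data: every value equals the mean, so each deviation is 0
# and the standard deviation is 0.
print(pstdev([5, 5, 5, 5]))   # 0.0

# Any spread at all gives a strictly positive standard deviation.
print(pstdev([1, 3, 3, 5]))   # ~1.414

# A negative result is impossible: the standard deviation is the square
# root of a mean of squared deviations, which is never negative.
```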

Wiki User

12y ago
Q: What are all the values that a standard deviation s can possibly take?
Continue Learning about Other Math

You take an SRS of size n from a population that has mean 80 and standard deviation 20 How big should n be so that the sampling distribution has standard deviation 1?

400. The standard deviation of the sampling distribution of the mean is σ/√n = 20/√n; setting 20/√n = 1 and solving gives n = 400.
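A quick Python/NumPy sketch (the simulation settings beyond the question's population parameters are invented for illustration) that checks this numerically by drawing many samples of size 400:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 80, 20, 400            # population mean/sd and the proposed sample size

# Draw many samples of size n and look at the spread of their means.
sample_means = rng.normal(mu, sigma, size=(20_000, n)).mean(axis=1)

print(sigma / np.sqrt(n))             # theoretical value: 1.0
print(sample_means.std())             # simulated value: close to 1.0
```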


Why standard deviation is more often used than variance?

Both variance and standard deviation are measures of dispersion or variability in a set of data: they measure how far the observations are scattered from the mean (average). To compute the variance, you take the deviation of each observation from the mean, square it, and sum all of the squared deviations (dividing by the number of observations, or by n − 1 for a sample). Squaring somewhat exaggerates the true picture, because the numbers become large and the result is in squared units of the original data. Taking the square root of the variance compensates for this and brings the measure back to the original units; that square root is the standard deviation. This is why the standard deviation is quoted more often than the variance, even though it is just the square root of the variance.
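As a small illustration (a Python sketch using the standard library; the data are invented for the example), the variance comes out in squared units while its square root, the standard deviation, is back on the original scale:

```python
from statistics import pvariance, pstdev

heights_cm = [150, 160, 170, 180, 190]   # hypothetical data, in centimetres

var = pvariance(heights_cm)              # 200.0, in cm^2 (squared units)
sd = pstdev(heights_cm)                  # ~14.14, in cm (original units)

print(var, sd)                           # sd is the square root of var
```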


Which value is NOT always a number in the data set it represents?

The range, median, mean, variance, standard deviation, absolute deviation, skewness, kurtosis, percentiles, quartiles, inter-quartile range - take your pick. It would have been simpler to ask which value IS in the data set!


What is rate measure and calculation of errors?

From "Standard error (statistics)", Wikipedia, the free encyclopedia:

[Figure omitted; its caption reads: for a value that is sampled with an unbiased normally distributed error, the figure depicts the proportion of samples that would fall within 1, 2, and 3 standard deviations above and below the actual value.]

The standard error is a method of measurement or estimation of the standard deviation of the sampling distribution associated with the estimation method.[1] The term may also be used to refer to an estimate of that standard deviation, derived from a particular sample used to compute the estimate.

For example, the sample mean is the usual estimator of a population mean. However, different samples drawn from that same population would in general have different values of the sample mean. The standard error of the mean (i.e., of using the sample mean as a method of estimating the population mean) is the standard deviation of those sample means over all possible samples (of a given size) drawn from the population. Secondly, the standard error of the mean can refer to an estimate of that standard deviation, computed from the sample of data being analyzed at the time.

A way of remembering the term standard error is that, as long as the estimator is unbiased, the standard deviation of the error (the difference between the estimate and the true value) is the same as the standard deviation of the estimates themselves; this is true since the standard deviation of the difference between a random variable and its expected value is equal to the standard deviation of the random variable itself.

In practical applications, the true value of the standard deviation (of the error) is usually unknown. As a result, the term standard error is often used to refer to an estimate of this unknown quantity. In such cases it is important to be clear about what has been done and to attempt to take proper account of the fact that the standard error is only an estimate. Unfortunately, this is not often possible and it may then be better to use an approach that avoids using a standard error, for example by using maximum likelihood or a more formal approach to deriving confidence intervals. One well-known case where a proper allowance can be made arises where Student's t-distribution is used to provide a confidence interval for an estimated mean or difference of means. In other cases, the standard error may usefully be used to provide an indication of the size of the uncertainty, but its formal or semi-formal use to provide confidence intervals or tests should be avoided unless the sample size is at least moderately large. Here "large enough" would depend on the particular quantities being analyzed (see power).

In regression analysis, the term "standard error" is also used in the phrase standard error of the regression to mean the ordinary least squares estimate of the standard deviation of the underlying errors.[2][3]
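For instance, the standard error of the mean is commonly estimated as s/√n, where s is the sample standard deviation and n the sample size. A minimal Python sketch (standard library only; the data are made up for the example):

```python
from statistics import mean, stdev
from math import sqrt

sample = [38, 42, 41, 39, 45, 37, 40, 44]   # hypothetical measurements

n = len(sample)
sem = stdev(sample) / sqrt(n)               # estimated standard error of the mean

print(mean(sample), sem)
```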


If 1000 students take a test that has a mean of 40 minutes a standard deviation of 8 minutes and is normally distributed how many would you expect would finish in less than 40 minutes?

The expected number is 500. A normal distribution is symmetric about its mean, so half of the 1000 students would be expected to finish in less than the mean time of 40 minutes; the standard deviation of 8 minutes does not affect this particular count.
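A quick check by simulation (a Python/NumPy sketch using the numbers from the question; the random seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
times = rng.normal(loc=40, scale=8, size=1000)   # 1000 simulated finishing times

print((times < 40).sum())   # roughly 500, varying a little from run to run
```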

Related questions

What are all the values that a standard deviation can possibly take?

Any real value >= 0.


What are all the values a standard deviation can take?

The standard deviation must be greater than or equal to zero.


A set of 1000 values has a normal distribution, the mean of the data is 120 and the standard deviation is 20; how many values are within one standard deviation of the mean?

The Empirical Rule states that about 68% of the data falls within 1 standard deviation of the mean. Since 1000 data values are given, take 0.68 × 1000, giving approximately 680 values within 1 standard deviation of the mean.
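The 68% figure is a rounded approximation; for a normal distribution the exact proportion within one standard deviation of the mean can be computed directly (a Python sketch using only the standard library):

```python
from math import erf, sqrt

# P(|X - mu| < sigma) for a normal distribution equals erf(1 / sqrt(2))
p = erf(1 / sqrt(2))

print(p)           # ~0.6827
print(p * 1000)    # ~683 of the 1000 values expected within one standard deviation
```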


What is the relationship between the mean and standard deviation in statistics?

The 'standard deviation' in statistics or probability is a measure of how spread out the numbers are. In mathematical terms, it is the square root of the mean of the squared deviations of all the numbers in the data set from the mean of that set, and it is approximately equal to the average deviation from the mean. If a set of values has a low standard deviation, most of the values are close to the mean; a high standard deviation means that the values, in general, differ a lot from the mean.

The variance is the standard deviation squared; that is, the standard deviation is the square root of the variance. To calculate the variance, subtract the mean from each number in the set and square that value, doing the same for every number in the set. Then take the mean of all the squares: the mean of the squared deviations from the mean is the variance, and its square root is the standard deviation.

Take the following data series, each of which has a mean of 3:

3, 3, 3, 3, 3, 3 - all the values equal the mean, so the difference from the mean is zero in each case. After squaring and averaging, the variance is zero, and the square root of zero is zero, so the standard deviation is zero. Note that because the deviations are squared, the variance (and hence the standard deviation) can never be negative.

1, 3, 3, 3, 3, 5 - most of the values are the same as the mean, so most of the differences from the mean are small and this series has a low standard deviation.

1, 1, 1, 5, 5, 5 - all the values are two higher or two lower than the mean, so this series has the highest standard deviation of the three.
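The three example series can be checked directly (a Python sketch using the standard library; pstdev treats each list as a complete population, matching the description above):

```python
from statistics import pstdev

series = {
    "all equal":       [3, 3, 3, 3, 3, 3],
    "mostly the mean": [1, 3, 3, 3, 3, 5],
    "all two away":    [1, 1, 1, 5, 5, 5],
}

for name, data in series.items():
    print(name, pstdev(data))
# all equal        0.0
# mostly the mean  ~1.155
# all two away     2.0
```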


Is it possible for a standard deviation to be negative?

No, it is not possible, because the calculation squares each deviation from the mean, (x - x̄)², and the square of any real number is never negative.

Improved answer: It is not possible to have a negative standard deviation because SD (the standard deviation) is the square root of V (the variance); the variance, being an average of squared deviations, is never negative, and the non-negative square root of a non-negative number cannot be negative.



How do you calculate standard deviation without a normal distribution?

You calculate the standard deviation the same way as always: find the mean, sum the squares of the deviations of the samples from that mean, divide by N−1, and then take the square root. This has nothing to do with whether or not you have a normal distribution. This is how you calculate the sample standard deviation, where the mean is estimated from the same data along with the standard deviation, and the N−1 factor represents the loss of a degree of freedom in doing so. If you knew the mean a priori, you could calculate the standard deviation of the sample using N instead of N−1.
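A minimal from-scratch Python sketch of that recipe (the sample data are invented; the result matches statistics.stdev):

```python
from math import sqrt

def sample_std_dev(values):
    """Sample standard deviation: sum of squared deviations / (N - 1), square-rooted."""
    n = len(values)
    m = sum(values) / n                        # sample mean
    ss = sum((x - m) ** 2 for x in values)     # sum of squared deviations from the mean
    return sqrt(ss / (n - 1))                  # divide by N - 1, then take the square root

data = [4, 8, 6, 5, 3, 7]                      # hypothetical sample, any distribution
print(sample_std_dev(data))                    # ~1.87, same as statistics.stdev(data)
```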



How To Calculate Standard Deviation?

You can calculate standard deviation by adding the pieces of data together and dividing by the number of pieces of data.

THAT IS TOTALLY INCORRECT. What was described above is the calculation of the (mean) average. If you take five numbers, for example 1, 2, 3, 4, 5, then the (mean) average is 3, but the sample standard deviation is 1.58114 and the sample variance is 2.5; the population standard deviation is 1.41421 and the population variance is 2.

See standard-deviation.appspot.com/
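Those figures can be verified with Python's standard library (stdev and variance use the N−1 sample formulas; pstdev and pvariance use the population formulas):

```python
from statistics import stdev, variance, pstdev, pvariance

data = [1, 2, 3, 4, 5]

print(stdev(data), variance(data))     # 1.5811..., 2.5   (sample)
print(pstdev(data), pvariance(data))   # 1.4142..., 2.0   (population)
```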


Why do you not take the sum of absolute deviations?

You most certainly can. The standard deviation, however, has better statistical properties.


Why to take square in the formula of standard deviation?

The sum of deviations from the mean is always 0 and so provides no useful information. Taking absolute deviations is one solution to that; the other is to square the deviations and then take a square root at the end.
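A quick Python sketch of that point (the data are arbitrary): the raw deviations cancel to (floating-point) zero, while the squared deviations do not:

```python
data = [2, 4, 4, 4, 5, 5, 7, 9]
m = sum(data) / len(data)

print(sum(x - m for x in data))         # 0.0 (up to rounding): deviations always cancel
print(sum((x - m) ** 2 for x in data))  # 32.0: squared deviations capture the spread
```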


How does a sample size impact the standard deviation?

Suppose I take 10 items (a small sample) from a population and calculate the standard deviation, then take 100 items (a larger sample) and calculate it again: how will the statistics change? The smaller sample could have a higher, lower, or roughly equal standard deviation compared with the larger sample; it is even possible that the smaller sample is, by chance, closer to the standard deviation of the population. However, a properly taken larger sample will in general give a more reliable estimate of the population standard deviation than a smaller one; there are mathematical results showing that, in the long run, larger samples provide better estimates. This is generally, but not always, true: if the population is changing while you are collecting data, a very large sample may not be representative, since it takes time to collect.
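A small simulation sketch in Python/NumPy (the population and sample sizes are invented for illustration) showing how the spread of the estimated standard deviation shrinks as the sample size grows:

```python
import numpy as np

rng = np.random.default_rng(1)
true_sd = 10.0

for n in (10, 100, 1000):
    # Estimate the standard deviation from 5000 independent samples of size n.
    estimates = [rng.normal(0, true_sd, n).std(ddof=1) for _ in range(5000)]
    print(n, round(np.mean(estimates), 2), round(np.std(estimates), 2))
# As n grows, the estimates cluster more and more tightly around the true value of 10.
```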