Q: What are all the values that a standard deviation s can possibly take?

A standard deviation can take any non-negative real value: s ≥ 0. It equals 0 only when every observation in the data set is identical, and it has no upper limit.
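For a concrete illustration, here is a minimal standard-library Python sketch (the data sets are made up for the example) showing that a sample standard deviation is never negative and is zero exactly when every observation is the same:

```python
from statistics import stdev

# Made-up data sets, chosen only for illustration.
spread_out = [2, 5, 9, 14]   # values differ, so s > 0
identical = [7, 7, 7, 7]     # no spread at all, so s == 0

print(stdev(spread_out))  # a positive number (about 5.2)
print(stdev(identical))   # 0.0
```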
Continue Learning about Other Math

You take an SRS of size n from a population that has mean 80 and standard deviation 20. How big should n be so that the sampling distribution of the sample mean has standard deviation 1?

400
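One way to check this figure (a sketch assuming the usual result that the standard deviation of the sample mean is σ/√n):

```python
from math import sqrt

sigma = 20      # population standard deviation (given)
target = 1      # required standard deviation of the sampling distribution

# SD of the sample mean is sigma / sqrt(n).  Setting sigma / sqrt(n) = target
# and solving for n gives n = (sigma / target) ** 2.
n = (sigma / target) ** 2
print(n)                # 400.0
print(sigma / sqrt(n))  # 1.0, confirming the required sampling SD
```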


Why is the standard deviation more often used than the variance?

Both the variance and the standard deviation are measures of dispersion, or variability, in a set of data: they measure how far the observations are scattered away from the mean (the average). To compute the variance, you take each observation's deviation from the mean, square it, and average the squared deviations. Squaring somewhat exaggerates the picture, because the deviations become large when squared and the result is in squared units of the original data. Taking the square root of the variance compensates for this and brings the measure back to the same units as the data; that square root is the standard deviation. This is why the standard deviation is quoted more often than the variance, even though it is simply the square root of the variance.
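A small standard-library Python sketch (with made-up height data) showing why the square root matters: the variance comes out in squared units, while the standard deviation is back in the units of the data:

```python
from statistics import mean, pstdev, pvariance

# Made-up heights in centimetres.
heights_cm = [150, 160, 170, 180]

print(mean(heights_cm))       # 165    (cm)
print(pvariance(heights_cm))  # 125    (cm squared, awkward to interpret)
print(pstdev(heights_cm))     # ~11.2  (cm, same units as the data)
```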


Which value is NOT always a number in the data set it represents?

The range, median, mean, variance, standard deviation, mean absolute deviation, skewness, kurtosis, percentiles, quartiles, inter-quartile range - take your pick: none of them is guaranteed to appear in the data set. It would have been simpler to ask which value IS always in the data set!
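A quick illustration with a made-up data set, using standard-library Python: the mean and median need not equal any observed value:

```python
from statistics import mean, median

# Made-up data set.
data = [1, 2, 4, 10]

print(mean(data))    # 4.25 -- not one of the observations
print(median(data))  # 3.0  -- the average of 2 and 4, also not in the data
```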


What is rate measure and calculation of errors?

From the Wikipedia article "Standard error (statistics)":

The standard error is a method of measurement or estimation of the standard deviation of the sampling distribution associated with the estimation method.[1] The term may also be used to refer to an estimate of that standard deviation, derived from a particular sample used to compute the estimate.

For example, the sample mean is the usual estimator of a population mean. However, different samples drawn from that same population would in general have different values of the sample mean. The standard error of the mean (i.e., of using the sample mean as a method of estimating the population mean) is the standard deviation of those sample means over all possible samples (of a given size) drawn from the population. Secondly, the standard error of the mean can refer to an estimate of that standard deviation, computed from the sample of data being analyzed at the time.

A way of remembering the term standard error is that, as long as the estimator is unbiased, the standard deviation of the error (the difference between the estimate and the true value) is the same as the standard deviation of the estimates themselves; this is true because the standard deviation of the difference between a random variable and its expected value is equal to the standard deviation of the random variable itself.

In practical applications, the true value of the standard deviation (of the error) is usually unknown. As a result, the term standard error is often used to refer to an estimate of this unknown quantity. In such cases it is important to be clear about what has been done and to attempt to take proper account of the fact that the standard error is only an estimate. Unfortunately, this is not often possible, and it may then be better to use an approach that avoids using a standard error, for example by using maximum likelihood or a more formal approach to deriving confidence intervals. One well-known case where a proper allowance can be made arises where Student's t-distribution is used to provide a confidence interval for an estimated mean or difference of means. In other cases, the standard error may usefully provide an indication of the size of the uncertainty, but its formal or semi-formal use to provide confidence intervals or tests should be avoided unless the sample size is at least moderately large. Here "large enough" would depend on the particular quantities being analyzed (see statistical power).

In regression analysis, the term "standard error" is also used in the phrase standard error of the regression to mean the ordinary least squares estimate of the standard deviation of the underlying errors.[2][3]
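As a concrete illustration of the standard error of the mean described above, here is a short standard-library Python sketch (the sample values are invented for the example); it uses the common estimate s/√n:

```python
from math import sqrt
from statistics import mean, stdev

# Invented sample values, used only to show the calculation.
sample = [12.1, 11.8, 12.5, 12.0, 11.6, 12.3]

n = len(sample)
s = stdev(sample)      # sample standard deviation
sem = s / sqrt(n)      # estimated standard error of the sample mean

print(mean(sample), s, sem)
```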


If 1000 students take a test that has a mean of 40 minutes and a standard deviation of 8 minutes, and the times are normally distributed, how many would you expect to finish in less than 40 minutes?

The expected number is 500: a normal distribution is symmetric about its mean, so half of the 1000 students would be expected to finish in under the mean time of 40 minutes.
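A quick check of that answer (a sketch using the normal CDF written in terms of the error function; the figures come from the question):

```python
from math import erf, sqrt

mu, sigma, students = 40, 8, 1000

def normal_cdf(x, mu, sigma):
    # Cumulative probability of a normal distribution, via the error function.
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

p_under_40 = normal_cdf(40, mu, sigma)  # 0.5, by symmetry about the mean
print(students * p_under_40)            # 500.0
```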