Q: Can you use standard deviation on interval level?

Yes. The standard deviation (like the mean) requires data measured at the interval level or higher, because it depends on meaningful differences between values. So it can be used on interval data and on ratio data, but not on nominal or ordinal data.

Related questions

What must be present in order to use the z interval procedure?

Use ZInterval when you have a single quantitative variable to analyze and you already know the population standard deviation (the data should also come from a roughly normal population, or the sample should be reasonably large).
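
A minimal sketch of that calculation in Python (the numbers are made up for illustration):

    import math
    from scipy.stats import norm

    xbar, sigma, n = 50.0, 8.0, 36       # hypothetical sample mean, known population SD, sample size
    z = norm.ppf(0.975)                  # two-sided 95% critical value, about 1.96
    margin = z * sigma / math.sqrt(n)    # margin of error
    print(xbar - margin, xbar + margin)  # the z interval for the population mean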


When do you use the relative standard deviation instead of standard deviation?

Use %RSD (the relative standard deviation, i.e. the standard deviation expressed as a percentage of the mean) when comparing the spread of populations with different means. Use the plain SD to compare data sets with the same mean.
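
A small illustration in Python, with made-up measurements:

    import statistics

    data = [9.8, 10.1, 10.0, 9.9, 10.2]    # hypothetical measurements
    s = statistics.stdev(data)             # sample standard deviation
    rsd = 100 * s / statistics.mean(data)  # %RSD: spread expressed as a percentage of the mean
    print(round(rsd, 2))                   # lets data sets with different means be compared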


Why use standard deviation and not average deviation?

Because the average of the signed deviations from the mean is always zero: the positive and negative deviations cancel out exactly, so the average deviation tells you nothing about spread. The standard deviation squares the deviations first, so they cannot cancel.
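
You can check this with any data set; a quick Python illustration:

    data = [2.0, 4.0, 9.0]              # any numbers will do
    mean = sum(data) / len(data)        # mean is 5.0 here
    print(sum(x - mean for x in data))  # the signed deviations (-3, -1, +4) sum to 0.0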


Why use the T score?

The t-score is used when you don't have the population standard deviation and must use the sample standard deviation as a substitute; the extra uncertainty from estimating the standard deviation is handled by using a t distribution with n - 1 degrees of freedom instead of the normal distribution.
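
As a sketch for the one-sample case: t = (sample mean - hypothesized mean) / (s / square root of n), where s is the sample standard deviation and n is the sample size; the result is compared against a t distribution with n - 1 degrees of freedom.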


How do you use standard deviation?

Standard deviation is a measure of how spread out a set of numbers is around its mean: a small standard deviation means the values cluster tightly around the mean, while a large one means they are widely scattered. It is used throughout statistics, for example to standardize scores, to compute margins of error and confidence intervals, and to state measurement uncertainty.
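
As a formula sketch for a sample: s = square root of [ sum of (each value - mean)^2 / (n - 1) ]; that is, square each value's deviation from the mean, average the squares using n - 1, and take the square root.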


When to use t-test?

You use the t-test when the population standard deviation is not known and is estimated by the sample standard deviation. Typical uses are: (1) to test a hypothesis about the population mean; (2) to test whether the means of two independent samples are different; (3) to test whether the means of two dependent (paired) samples are different; (4) to construct a confidence interval for the population mean.
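
A sketch of the first three uses with scipy (the data values are invented):

    from scipy import stats

    a = [5.1, 4.9, 5.3, 5.0, 5.2]             # hypothetical sample
    b = [4.8, 5.0, 4.7, 4.9, 5.1]             # hypothetical second sample
    print(stats.ttest_1samp(a, popmean=5.0))  # (1) test a hypothesis about one population mean
    print(stats.ttest_ind(a, b))              # (2) compare the means of two independent samples
    print(stats.ttest_rel(a, b))              # (3) compare the means of two dependent (paired) samples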


Why does the effect-size calculation use standard deviation rather than standard error?

The goal is to disregard the influence of sample size: the standard error shrinks as the sample gets larger, so using it would make the same effect look bigger in larger studies, while the standard deviation does not depend on sample size in that way. That is why, when calculating Cohen's d, we use the standard deviation in the denominator, not the standard error.
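
As a sketch for two groups: d = (mean1 - mean2) / pooled SD, where the pooled SD is the square root of [ ((n1 - 1)s1^2 + (n2 - 1)s2^2) / (n1 + n2 - 2) ]; nothing in that denominator shrinks as the samples get larger.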


How do you calculate sample standard deviation?

Here's how you do it in Excel: use the function =STDEV(<range with data>), or =STDEV.S(<range with data>) in recent versions of Excel. That function calculates the standard deviation of a sample, i.e. it uses n - 1 in the denominator.
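
The same calculation outside Excel, as a small Python sketch with made-up data:

    import statistics

    data = [4, 8, 6, 5, 3]         # hypothetical data
    print(statistics.stdev(data))  # sample standard deviation (divides by n - 1)
    print(statistics.pstdev(data)) # population standard deviation (divides by n)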


Variance and standard deviation are one and the same thing?

No, but they are related. If a sample of size n is taken, a standard deviation can be calculated. This is usually denoted "s", although some textbooks use the symbol sigma. The standard deviation of a sample is usually used to estimate the standard deviation of the population; in that case, we use n - 1 in the denominator of the equation.

The variance of the sample is the square of the sample's standard deviation. In many textbooks it is denoted s^2. For populations, the symbols sigma and sigma^2 should be used for the standard deviation and variance.

One last note: we use standard deviations in describing uncertainty because they are easier to understand. If our measurements are in days, then the standard deviation will also be in days, while the variance will be in units of days^2.
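
A quick check of the relationship in Python (made-up measurements, say in days):

    import statistics

    data = [2, 4, 4, 4, 5, 5, 7, 9]           # hypothetical measurements in days
    s = statistics.stdev(data)                # sample standard deviation, in days
    print(statistics.variance(data), s ** 2)  # sample variance equals the standard deviation squared (days^2)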


What is the relationship between confidence interval and standard deviation?

Short answer: it's complex. I presume you're in a basic stats class, so you're dealing with something like a normal distribution (or something else very standard). You can think of it this way: a confidence interval turns the likely margin of error into a range. That allows you to say something along the lines of "I can say with 95% confidence that the mean/variance/whatever lies between this value and that value", because you are taking the likely error of your estimate into account (as long as the distribution is what you think it is and the other statistics are what you think they are). If you know all of those things, you can predict how erroneous your estimate is likely to be, and the central limit theorem is what lets you treat the sample as statistically representative even given an effectively infinite population of data.

The main idea of a confidence interval is to create an interval that is likely to include a population parameter. Sample data is the source of the confidence interval. You use your best point estimate, which may be the sample mean or the sample proportion depending on what the problem asks for, and then add and subtract the margin of error to get the actual interval. To compute the margin of error, you will always use or calculate a standard deviation.

An example is the confidence interval for the mean. By the central limit theorem, the best point estimate of the population mean is the sample mean, so you add and subtract the margin of error from that. For the mean, the margin of error is z_(a/2) x sigma / square root of n, where a is 1 - confidence level. For example, if the confidence level is 95%, then a = 1 - 0.95 = 0.05 and a/2 = 0.025, so we use the z score that corresponds to 0.025 in each tail of the standard normal distribution, which is z = 1.96. If sigma is the population standard deviation, then sigma / square root of n is called the standard error of the mean; it is the standard deviation of the sampling distribution of the means of every possible sample of size n taken from your population (the central limit theorem again). So the confidence interval is the sample mean plus or minus 1.96 x (population standard deviation / square root of the sample size).

If we don't know the population standard deviation, we use the sample standard deviation instead, but then we must use a t distribution rather than a z distribution, so we replace the z score with the appropriate t score. In the case of a confidence interval for a proportion, we compute and use the standard deviation of the distribution of all sample proportions; once again, the central limit theorem tells us to do this. I will post a link for that theorem. It is the key to really understanding what is going on here!
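
A small sketch of the proportion case in Python (the sample proportion and sample size are invented):

    import math
    from scipy.stats import norm

    phat, n = 0.42, 500                         # hypothetical sample proportion and sample size
    z = norm.ppf(0.975)                         # 95% two-sided critical value, about 1.96
    moe = z * math.sqrt(phat * (1 - phat) / n)  # standard deviation of the sampling distribution of the proportion
    print(phat - moe, phat + moe)               # 95% confidence interval for the population proportion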


Are standard deviation and mean used for ratio data?

Yes. The mean and standard deviation are appropriate for ratio data (and for interval data as well).