Best Answer

The sum of deviations from the mean is always 0, so it provides no useful information about spread. Taking the absolute deviation is one solution to that; the other is to square each deviation, average, and then take a square root.
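A short Python sketch (the data set is an arbitrary example) shows that the deviations always sum to zero, and how the two fixes mentioned above behave:

```python
# Plain Python, no libraries: deviations from the mean sum to zero,
# so we need absolute or squared deviations to measure spread.
data = [2, 4, 4, 4, 5, 5, 7, 9]
mean = sum(data) / len(data)                           # 5.0

deviations = [x - mean for x in data]
sum_dev = sum(deviations)                              # always 0: useless

mad = sum(abs(d) for d in deviations) / len(data)      # mean absolute deviation
variance = sum(d * d for d in deviations) / len(data)  # mean squared deviation
std_dev = variance ** 0.5                              # square root: standard deviation
print(sum_dev, mad, std_dev)                           # 0.0 1.5 2.0
```

Both fixes give a non-zero measure of spread; the squared version leads to the standard deviation.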

Wiki User

11y ago
Q: Why take the square in the formula for standard deviation?
Continue Learning about Math & Arithmetic

What is the equation for standard deviation?

Standard deviation is equal to the square root of the variance. To compute it, work out the mean, then subtract the mean from each number and square the result. Then work out the mean of those squared differences and take the square root of that.


How do you calculate standard deviation without a normal distribution?

You calculate standard deviation the same way as always: find the mean, sum the squares of the deviations of the samples from the mean, divide by N-1, and take the square root. This has nothing to do with whether you have a normal distribution or not. This is how you calculate the sample standard deviation, where the mean is estimated from the data along with the standard deviation, and the N-1 factor accounts for the degree of freedom lost in doing so. If you knew the mean a priori, you could calculate the standard deviation of the sample using N instead of N-1.
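The N versus N-1 distinction is built into Python's standard library `statistics` module; a small sketch (the data set is an arbitrary example):

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]   # arbitrary example sample

# Sample standard deviation: the mean is estimated from the same data,
# so one degree of freedom is lost and we divide by N - 1.
s = statistics.stdev(data)

# Population standard deviation: treat the mean as known, divide by N.
p = statistics.pstdev(data)

print(s > p)   # True: dividing by N - 1 always gives the larger value
```

For large N the two values are nearly identical; the correction matters most for small samples.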


Is a standard deviation dependent on the mean?

To find the standard deviation, you must first compute the mean for the data set, so the answer is yes. Just look at the 5 steps needed to compute a standard deviation and you will see why. In practice, people most often use calculators or computers to do this, but it is good to understand what they are doing: 1. Compute each deviation by subtracting the mean from each value. 2. Square each individual deviation. 3. Add up the squared deviations. 4. Divide by one less than the sample size. 5. Take the square root.
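The five steps above can be sketched directly in Python (the function name and example data are illustrative choices, not from the original):

```python
def sample_std_dev(values):
    """Sample standard deviation, following the five steps above."""
    mean = sum(values) / len(values)             # prerequisite: the mean
    deviations = [x - mean for x in values]      # 1. subtract the mean
    squared = [d * d for d in deviations]        # 2. square each deviation
    total = sum(squared)                         # 3. add up the squares
    variance = total / (len(values) - 1)         # 4. divide by n - 1
    return variance ** 0.5                       # 5. take the square root

print(round(sample_std_dev([2, 4, 4, 4, 5, 5, 7, 9]), 2))   # 2.14
```

Note that the mean is needed in step 1, which is why the standard deviation depends on it.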


How do you find standard deviation?

First, you need to determine the mean. Then subtract the mean from every number you have, and square each result. Add up all of the resulting squares to get their sum. Divide by one less than the count of numbers you have (if you have 6 numbers, you will divide by 5). To get the standard deviation, take the square root of the resulting number.


Why take the square root of the variance when calculating standard deviation?

The variance is based on the squares of the variable being studied. If, for example, the variable is mass, then the variance is measured in mass-squared, and most people will not be able to wrap their heads around the square of mass. The square root, however, will be in the same units of measurement as the variable itself. Thus, the idea of a variable being distributed about a mean M (also measured in the same units), with a standard deviation (or error) of S, is easier to understand. Second, under reasonable conditions, the transformed variable obtained by subtracting the mean and dividing the result by the standard deviation will have (at least approximately) a standard normal distribution. This is extremely important for estimation and hypothesis testing.
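The standardization mentioned above can be sketched in Python; the data and its "kilograms" interpretation are illustrative assumptions:

```python
import statistics

masses = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # say, kilograms
mu = statistics.mean(masses)       # in kg
sigma = statistics.pstdev(masses)  # also in kg (the variance would be kg^2)

# Standardize: subtract the mean, divide by the standard deviation.
# The z-scores are dimensionless, with mean 0 and standard deviation 1.
z = [(m - mu) / sigma for m in masses]

print(statistics.mean(z), statistics.pstdev(z))   # 0.0 1.0
```

Because the standard deviation shares the variable's units, the ratio is unit-free, which is what makes z-scores comparable across different variables.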

Related questions

Is it possible for a standard deviation to be negative?

No, it is not possible, because you have to take the square of each error, (x - X)², and the square of any number is non-negative, so the variance cannot be negative. Moreover, the standard deviation is the square root of the variance, and by convention the non-negative root is taken.



Why is standard deviation more often used than variance?

Both variance and standard deviation are measures of dispersion or variability in a set of data. Both measure how far the observations are scattered away from the mean (or average). While computing the variance, you compute the deviation of each observation from the mean, square it, and sum all of the squared deviations. This somewhat exaggerates the true picture because the numbers become large when you square them. So we take the square root of the variance (to compensate for the excess), and this is known as the standard deviation. This is why the standard deviation is used more often than the variance; the standard deviation is just the square root of the variance.
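A quick check of the square-root relationship, using Python's standard library (the data set is an arbitrary example):

```python
import math
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]   # arbitrary example

variance = statistics.variance(data)   # in squared units, looks inflated
std_dev = statistics.stdev(data)       # back on the data's own scale

print(math.isclose(std_dev, math.sqrt(variance)))   # True
```

For this data the variance (about 4.57) is more than twice the standard deviation (about 2.14), illustrating the exaggeration that the square root compensates for.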


What are all the values a standard deviation can take?

The standard deviation must be greater than or equal to zero.


What is the relationship between the mean and standard deviation in statistics?

The 'standard deviation' in statistics or probability is a measure of how spread out the numbers are. In mathematical terms, it is the square root of the mean of the squared deviations of all the numbers in the data set from the mean of that set. It is approximately equal to the average deviation from the mean. If you have a set of values with low standard deviation, most of the values are close to the mean. A high standard deviation means that the values, in general, differ a lot from the mean.

The variance is the standard deviation squared; that is to say, the standard deviation is the square root of the variance. To calculate the variance, take each number in the set and subtract the mean from it, then square that value, doing the same for each number in the set. Lastly, take the mean of all the squares: the mean of the squared deviations from the mean is the variance, and the square root of the variance is the standard deviation.

Take the following data series for example; the mean of each is 3.

3, 3, 3, 3, 3, 3 - all the values equal the mean, so the standard deviation is zero. The difference from the mean is zero in each case, so after squaring and taking the mean, the variance is zero, and the square root of zero is zero. Of note: since you are squaring the deviations from the mean, the variance (and hence the standard deviation) can never be negative.

1, 3, 3, 3, 3, 5 - most of the values equal the mean, so this series has a low standard deviation; most of the differences from the mean are small.

1, 1, 1, 5, 5, 5 - every value is two higher or two lower than the mean, so this series has the highest standard deviation of the three.
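The three example series can be checked with Python's standard library:

```python
import statistics

# Mean and population standard deviation for each of the three series above.
results = []
for series in ([3, 3, 3, 3, 3, 3],
               [1, 3, 3, 3, 3, 5],
               [1, 1, 1, 5, 5, 5]):
    results.append((statistics.mean(series), statistics.pstdev(series)))

print(results)   # means all 3; standard deviations 0, ~1.15, 2
```

All three series share the same mean, yet their standard deviations differ, which is exactly why the mean alone does not describe spread.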


Why will the standard deviation of a set of data always be greater than or equal to zero?

The standard deviation is always greater than or equal to zero. If all the values in the data set are equal, the standard deviation is 0. In all other situations, we first calculate the difference of each number from the mean and then square that difference. While the difference can be negative, its square cannot be. The square of the standard deviation (the variance) is therefore non-negative, since it is a sum of non-negative numbers. If we calculate s² = 4, then s could be -2 or +2; by convention, we take the positive root.





How do you create a data set with a larger standard deviation?

Standard deviation is the square root of the variance, i.e. the square root of the mean of the squared deviations of each item from the mean. In order to increase the standard deviation, therefore, you need to increase the typical deviation from the mean. There are many ways to do this. One is to move each item further away from the mean. For example, take the set [2, 4, 4, 4, 5, 5, 7, 9]. It has a mean of 5 and a sample standard deviation of about 2.14. Multiply each item by 2.1 and subtract 5.5, giving the set [-1.3, 2.9, 2.9, 2.9, 5, 5, 9.2, 13.4]; this scales each item's deviation from the mean by a factor of 2.1. The new set still has a mean of 5, but its standard deviation is about 4.49.
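The stretch-the-deviations trick can be verified in Python (same example data as above):

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]
mean = statistics.mean(data)                       # 5

# Scale every deviation from the mean by 2.1 (equivalently, 2.1*x - 5.5):
stretched = [mean + 2.1 * (x - mean) for x in data]

print(round(statistics.stdev(data), 2))        # 2.14
print(round(statistics.stdev(stretched), 2))   # 4.49
```

Because the transformation is linear, the mean is preserved and the standard deviation is multiplied by exactly the scale factor.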


You take an SRS of size n from a population that has mean 80 and standard deviation 20. How big should n be so that the sampling distribution has standard deviation 1?

400. The standard deviation of the sampling distribution of the sample mean is 20/√n; setting 20/√n = 1 gives √n = 20, so n = 400.
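The arithmetic behind that answer, as a tiny Python sketch:

```python
# Standard error of the sample mean is sigma / sqrt(n); solve for n.
sigma = 20    # population standard deviation
target = 1    # desired standard deviation of the sampling distribution

n = (sigma / target) ** 2
print(n)   # 400.0
```

Note the quadratic cost: halving the standard error requires quadrupling the sample size.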

