Sets of data have many characteristics. The central location (mean, median) is one measure, but different data sets can have the same mean. A measure of dispersion is therefore used to determine whether there is a little or a lot of variability within the set. Sometimes it is also necessary to look at higher-order measures such as skewness and kurtosis.
The standard deviation is generally the better measure since it takes account of all the information in the data set. The range, however, is quick and easy to compute.
The range, inter-quartile range (IQR), mean absolute deviation [from the mean], variance and standard deviation are some of the many measures of variability.
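Each of these measures can be computed directly with Python's standard library. This is a minimal sketch using a small hypothetical data set; the variable names are illustrative:

```python
import statistics

# A small illustrative data set (hypothetical values).
data = [2, 4, 4, 4, 5, 5, 7, 9]

# Range: difference between the largest and smallest values.
data_range = max(data) - min(data)

# Inter-quartile range (IQR): spread of the middle 50% of the data.
q1, _, q3 = statistics.quantiles(data, n=4)
iqr = q3 - q1

# Mean absolute deviation from the mean.
mean = statistics.mean(data)
mad = sum(abs(x - mean) for x in data) / len(data)

# Population variance and standard deviation.
variance = statistics.pvariance(data)
sd = statistics.pstdev(data)

print(data_range, mad, variance, sd)  # → 7 1.5 4 2.0
```

Note that `statistics.quantiles` interpolates between data points (the default "exclusive" method), so the IQR it reports may differ slightly from hand methods that pick quartiles from the observed values.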
It tells you how much variability there is in the data. A small standard deviation (SD) shows that the data are all very close to the mean, whereas a large SD indicates a lot of variability around the mean. Of course, since the SD is expressed in the same units as the data, it can be reduced simply by switching to a larger measurement unit!
The standard deviation of a set of data is a measure of the random variability present in the data. Given any two sets of data, it is extremely unlikely that their means will be exactly the same. The standard deviation is used to determine whether the difference between the means of the two data sets is something that could happen purely by chance (i.e. is reasonable) or not. Also, if you wish to take samples of a population, then the inherent variability, as measured by the standard deviation, is a useful measure to help determine the optimum sample size.
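The chance-versus-real-difference idea above can be sketched numerically: divide the difference between the two sample means by its standard error, which is built from the two standard deviations. This is a minimal illustration with hypothetical samples (the statistic shown is Welch's form of the t statistic):

```python
import math
import statistics

# Two hypothetical samples whose means differ slightly.
a = [5.1, 4.9, 5.0, 5.2, 4.8, 5.1]
b = [5.4, 5.6, 5.3, 5.5, 5.7, 5.4]

mean_a, mean_b = statistics.mean(a), statistics.mean(b)
var_a, var_b = statistics.variance(a), statistics.variance(b)  # sample variances

# Standard error of the difference between the two means.
se = math.sqrt(var_a / len(a) + var_b / len(b))

# A large |t| suggests the difference is unlikely to be pure chance;
# a |t| near zero is consistent with chance variation.
t = (mean_b - mean_a) / se
print(round(t, 2))
```

A proper test would compare this statistic against a t distribution, but even this sketch shows the role the standard deviation plays: the bigger the inherent variability, the bigger a difference in means must be before it looks real.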