Q: How many times would the sample size have to increase to cut the standard deviation by half?
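Assuming the question refers to the standard deviation of the sample mean (the standard error), which scales as σ/√n, a quadrupled sample size halves it. A minimal sketch, with made-up values for σ and n:

```python
import math

def standard_error(sigma, n):
    """Standard deviation of the sample mean: sigma / sqrt(n)."""
    return sigma / math.sqrt(n)

# Hypothetical population sd and sample size:
sigma, n = 10.0, 25
se_original = standard_error(sigma, n)        # 10 / 5 = 2.0
se_quadrupled = standard_error(sigma, 4 * n)  # 10 / 10 = 1.0, i.e. half
```

Quadrupling n multiplies √n by 2, so the standard error is cut in half.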

Continue Learning about Math & Arithmetic

It is the law of large numbers: a theorem that describes the result of performing the same experiment a large number of times. This theorem forms the basis of frequentist thinking. It says that the sample mean, the sample variance and the sample standard deviation converge to the quantities they are trying to estimate.

The standard deviation, in itself, is neither high nor low; it depends on the units of measurement. If measurements with a standard deviation of 18 were recorded using a unit ten times as large (centimetres instead of millimetres), the standard deviation for exactly the same data set would be 1.8. And if they were recorded in metres, the sd would be 0.018.

Standard deviation measures the spread of the data. If 7 is added to each score, this does not affect the spread: the data are just as evenly spaced or clumped up, only 7 greater. Multiplying every data point by 0.9, however, does affect the spread. It makes the distances between the data points 0.9 times as big, and thus makes the standard deviation 0.9 times as big. If the standard deviation was 5.6, it is now 5.6 × 0.9 = 5.04.
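A quick check of both effects with Python's statistics module, using made-up scores:

```python
import statistics

scores = [3.0, 8.0, 12.0, 15.0, 20.0]    # hypothetical scores
sd = statistics.pstdev(scores)

shifted = [x + 7 for x in scores]        # add 7 to every score
scaled = [x * 0.9 for x in scores]       # multiply every score by 0.9

sd_shifted = statistics.pstdev(shifted)  # unchanged: equals sd
sd_scaled = statistics.pstdev(scaled)    # equals 0.9 * sd
```

Shifting leaves the standard deviation untouched; scaling multiplies it by the same factor.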

Heteroskedasticity is when the standard deviation (or variance) of a variable is not constant; for example, when repeated measurements of the variable over a period of time show a changing spread.

Standard deviation tells you how spread out your set of values is compared with the average (mean) of those values.

For example, suppose you have the heights of all the players in a soccer team. You can work out the average height (the mean), but the mean alone doesn't tell you much about the spread. If the average height is 180 cm, you don't know whether ALL the players are 180 cm, or whether they are all between 175 and 195 cm. You don't know if one of them is 210 cm, or if some are really short. If we know the SPREAD then we have some extra information.

The standard deviation is, roughly speaking, the typical difference between a player's height and the average for the team. So if the team's average height is 180 cm and the standard deviation is small, say 4 cm, then you know that most players are between 176 and 184 cm. If the standard deviation is large (say 18 cm), then most players are between 162 and 198 cm, a much bigger range! So the standard deviation really does tell you something about your data.

To calculate it, you measure the difference between each player's height and the average. Some will be shorter than average (a negative difference) and some taller than average (a positive difference); some may have a zero difference (if they are the same height as the mean). If you simply add up all these differences, the negative ones cancel out the positives, and you won't get any useful information. So you SQUARE all the differences first before you add them up: when you square a negative number it becomes positive (-2 times -2 = +4). Then you take the average of all the squared differences: add them all up and divide by the number of answers, that is, eleven. So for our eleven players, square the difference between each one's height and the average, add them all together, and divide by 11.

This answer is called the VARIANCE. (If you were only measuring a sample of the team you would divide by 10, one FEWER than the total number; but because you measured the whole population of the team, you divide by 11.) Take the square root of the variance (remember you squared all the numbers, now you un-square them), and the answer is the standard deviation. (Square root is the opposite of squaring: four squared = 16, and the square root of 16 is 4.)

Here it is again:
Get the average (mean) of the heights of all your players.
Work out all the differences between their heights and the average; shorter players will have a negative difference, taller players a positive one.
Square each difference (multiply it by itself, e.g. -8 x -8 = +64). All the answers will be positive.
Add all the answers together and divide by 11. This number is called the VARIANCE.
Take the square root of the variance, and THAT is the standard deviation.

A small standard deviation (3 or 4 cm) tells you that most of the team are about the same size. A large standard deviation (15 to 20 cm) tells you that you have a bigger spread, and might have some really tall players and some really short ones.

Answer: The question actually asked for "a really easy explanation". Although it is not an easy concept, we can simplify the great mass above for the average 'JoeBlow'. Standard deviation is, as mentioned above, a measure of "the spread": how far apart the values are from the average of all the figures or measurements you are considering. To possess meaning, we express this spread using numbers. For a normal (bell-shaped) distribution, 1 standard deviation ABOVE the mean marks the point below which the 34% of values nearest to, but above, the mean lie. Likewise, the 34% of values closest to, but below, the mean fall between the mean and the -1 standard deviation point.

So 68% of all the values in your sample fall within 1 standard deviation above the mean and 1 standard deviation below the mean. This region will, therefore, possess the middle 68% of all the values in your sample, which is most of them, really.
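The recipe above, sketched in Python for eleven hypothetical heights (the numbers are invented; `statistics.pstdev` confirms the population result):

```python
import math
import statistics

heights = [170, 174, 176, 178, 180, 180, 181, 183, 185, 186, 187]  # hypothetical, cm

mean = sum(heights) / len(heights)
squared_diffs = [(h - mean) ** 2 for h in heights]

variance = sum(squared_diffs) / len(heights)   # divide by 11: whole population
sd = math.sqrt(variance)                       # the standard deviation

# If these were only a sample of a bigger group, divide by 10 instead:
sample_variance = sum(squared_diffs) / (len(heights) - 1)
```

The only difference between the population and sample versions is the divisor (11 versus 10).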

Related questions

No. But a small sample will be a less accurate predictor of the standard deviation of the population because of its size. Another way of saying this: small samples have more variability of results, so estimates are sometimes too high and other times too low. As the sample size gets larger, there's a better chance that your sample will be close to the actual standard deviation of the population.

2 times the standard deviation!


The 68-95-99.7 rule, or empirical rule, says this: for a normal distribution, almost all values lie within 3 standard deviations of the mean.

Approximately 68% of the values lie within 1 standard deviation of the mean (between the mean minus 1 times the standard deviation and the mean plus 1 times the standard deviation). In statistical notation, this is represented as: μ ± σ.

Approximately 95% of the values lie within 2 standard deviations of the mean (between the mean minus 2 times the standard deviation and the mean plus 2 times the standard deviation). The statistical notation for this is: μ ± 2σ.

Almost all (actually, 99.7%) of the values lie within 3 standard deviations of the mean (between the mean minus 3 times the standard deviation and the mean plus 3 times the standard deviation). Statisticians use the following notation to represent this: μ ± 3σ.

(www.wikipedia.org)
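The rule can be checked by simulation; here is a sketch with `random.gauss` (the seed, mean and sd are arbitrary choices):

```python
import random
import statistics

random.seed(0)
values = [random.gauss(180, 8) for _ in range(100_000)]  # simulated normal data
mu = statistics.fmean(values)
sigma = statistics.pstdev(values)

def fraction_within(k):
    """Fraction of values within k standard deviations of the mean."""
    return sum(abs(v - mu) <= k * sigma for v in values) / len(values)

within_1 = fraction_within(1)  # close to 0.68
within_2 = fraction_within(2)  # close to 0.95
within_3 = fraction_within(3)  # close to 0.997
```

With 100,000 draws the empirical fractions land very near the 68-95-99.7 figures.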

There's no single valid answer to your question. The problem is that a standard deviation can be close to zero, but it has no upper limit. I can say that if my standard deviation is much smaller than my mean, this indicates a low standard deviation, though that is somewhat subjective. But I can't simply say that a standard deviation many times the mean value would be considered high. It depends on the problem at hand.



Standard deviation is a measure of the scatter or dispersion of the data. Two sets of data can have the same mean but different standard deviations; the dataset with the higher standard deviation will generally have values that are more scattered. We generally look at the standard deviation in relation to the mean: if the standard deviation is much smaller than the mean, we may consider that the data has low dispersion; if it is much higher than the mean, the dataset may have high dispersion. A second cause of a high standard deviation is an outlier, a value that is very different from the rest of the data. Sometimes it is a mistake. For example, suppose I am measuring people's heights and record all the data in metres, except one height which I record in millimetres, 1000 times larger. This may cause an erroneous mean and standard deviation to be calculated.
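The unit-mixup outlier described above can be demonstrated with a few made-up heights:

```python
import statistics

heights_m = [1.72, 1.75, 1.78, 1.80, 1.83]    # all correctly in metres
sd_clean = statistics.pstdev(heights_m)

# the last height accidentally recorded in millimetres (1000 times too large)
heights_bad = [1.72, 1.75, 1.78, 1.80, 1830.0]
sd_bad = statistics.pstdev(heights_bad)

# sd_bad dwarfs sd_clean, which flags the bad record
```

A standard deviation that explodes like this relative to the mean is a strong hint that the data contains a recording error.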


A band of 1 SD represents about 68% confidence, whereas 2 SD represents around 95%. Since the latter is often used for hypothesis testing, it may be better to use 2 SD.

Suppose the random variable X that you are studying has mean m and standard deviation s. Then z = 1.33 is equivalent to saying that (x - m)/s = 1.33, i.e. your observed value is greater than the mean by 1.33 times the standard deviation.
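In code, with a hypothetical mean and standard deviation chosen to produce that z value:

```python
def z_score(x, mean, sd):
    """How many standard deviations x lies above (positive) or below (negative) the mean."""
    return (x - mean) / sd

# hypothetical: mean 100, sd 10; an observation of 113.3 gives z = 1.33
z = z_score(113.3, 100.0, 10.0)
```

Any observation 1.33 standard deviations above its mean gives the same z, whatever the units.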

Detection limits are usually expressed as the limit of detection (LOD) and the limit of quantification (LOQ). These are commonly calculated using the signal-to-noise ratio method: the limit of detection is three times the standard deviation of the blank signal divided by the slope of the calibration curve, and the limit of quantification is ten times the standard deviation of the blank signal divided by the slope of the calibration curve.
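A sketch of those two formulas; the blank readings and calibration slope below are invented for illustration:

```python
import statistics

def detection_limits(blank_signals, slope):
    """Signal-to-noise method: LOD = 3*sd(blank)/slope, LOQ = 10*sd(blank)/slope."""
    sd_blank = statistics.stdev(blank_signals)
    return 3 * sd_blank / slope, 10 * sd_blank / slope

blanks = [0.012, 0.015, 0.011, 0.014, 0.013]  # hypothetical blank measurements
lod, loq = detection_limits(blanks, slope=0.25)
```

By construction the LOQ is always 10/3 times the LOD, since both use the same blank standard deviation and slope.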