Not a lot. After all, the sample sd is an estimate for the population sd.
The standardised score decreases.
The standard error increases.
Decrease
1. Compute the square of the difference between each value and the sample mean.
2. Add those values up.
3. Divide the sum by n-1. This is called the variance.
4. Take the square root to obtain the standard deviation.

Why divide by n-1 rather than n in the third step above? In step 1, you compute the difference between each value and the mean of those values. You don't know the true mean of the population; all you know is the mean of your sample. Except for the rare cases where the sample mean happens to equal the population mean, the data will be closer to the sample mean than to the true population mean. The value you compute in step 2 will therefore probably be a bit smaller (and can't be larger) than what it would be if you had used the true population mean in step 1. To make up for this, divide by n-1 rather than n.

But why n-1 exactly? If you knew the sample mean and all but one of the values, you could calculate what that last value must be. Statisticians say there are n-1 degrees of freedom.
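A minimal sketch of those four steps in Python, assuming a plain list of numbers (the function name sample_std is just illustrative):

```python
import math

def sample_std(values):
    """Sample standard deviation using the n-1 (Bessel-corrected) divisor."""
    n = len(values)
    if n < 2:
        raise ValueError("need at least two values")
    mean = sum(values) / n
    # Steps 1 and 2: sum of squared differences from the sample mean
    squared_diffs = sum((x - mean) ** 2 for x in values)
    # Step 3: divide by n - 1 to get the sample variance
    variance = squared_diffs / (n - 1)
    # Step 4: the square root gives the standard deviation
    return math.sqrt(variance)

print(sample_std([2, 4, 4, 4, 5, 5, 7, 9]))  # about 2.138
```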
People start riots for food and lock themselves in their houses.
The absolute value of the standard score becomes smaller.
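A minimal numeric sketch of that claim, assuming the question is about increasing the standard deviation while the raw value and the mean stay fixed (the numbers below are just illustrative):

```python
def z_score(x, mean, sd):
    """Standard (z) score: how many standard deviations x lies from the mean."""
    return (x - mean) / sd

x, mean = 130, 100
for sd in (10, 15, 30):
    print(f"sd={sd:>2}  z={z_score(x, mean, sd):+.2f}")
# sd=10 -> +3.00, sd=15 -> +2.00, sd=30 -> +1.00: |z| shrinks as sd grows
```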
It goes up.
The standard deviation appears in the numerator of the margin of error calculation. As the standard deviation increases, the margin of error increases, and therefore the confidence interval gets wider.
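A minimal numeric sketch of that relationship, assuming a z-based interval with known standard deviation and the 95% critical value of 1.96 (the sample size and standard deviations below are just illustrative):

```python
import math

def margin_of_error(sd, n, z=1.96):
    """Margin of error for a z-based confidence interval: z * sd / sqrt(n)."""
    return z * sd / math.sqrt(n)

# Same sample size and confidence level; a larger standard deviation gives a wider interval.
for sd in (5, 10, 20):
    moe = margin_of_error(sd, n=100)
    print(f"sd={sd:>2}  margin of error={moe:.2f}  interval width={2 * moe:.2f}")
```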
The statistics of the population aren't supposed to depend on the sample size. If they do, that just means that at least one of the samples doesn't accurately represent the population. Maybe both.
Decreases
Nothing actually happens! You just get a value that is very unlikely but still possible. That it is possible is evidenced by the fact that the value was observed.
The two distributions are symmetrical about the same point (the mean). The distribution with the larger sd will be flatter, with a lower peak, and more spread out.
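A small numeric sketch of that comparison, assuming two normal distributions with the same mean (0) and standard deviations of 1 and 2; the density values show the lower peak and heavier spread of the wider distribution:

```python
import math

def normal_pdf(x, mean, sd):
    """Density of a normal distribution with the given mean and sd."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

mean = 0
for sd in (1, 2):
    peak = normal_pdf(mean, mean, sd)      # height at the shared mean
    tail = normal_pdf(mean + 3, mean, sd)  # height three units out
    print(f"sd={sd}: peak height={peak:.3f}, density at mean+3={tail:.4f}")
# The sd=2 curve has half the peak height but far more density out in the tails.
```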
Nothing happens. There is no particular significance in that happening.
All the controversy over the meaning and usefulness of IQ aside, the average IQ is 100 by definition. The distribution of IQ appears to follow a normal (bell) curve, and therefore the well understood characteristics of normal curves allow us to make some good estimates. By this definition, average performance on an IQ measure (a measure or an instrument, not a test, that has good validity and reliability) is always 100. There will always be, even if the general 'intelligence' of the population is changing, 50% of the population above average and 50% below average. From here you decide what you mean by 'low IQ'.

It seems unreasonable to define any IQ below 100 as 'low', since we are talking about the part of the normal curve of intelligence where the great majority of people fall. Also, no one would consider an IQ of 105 to be 'high', so it is callous and ignorant to consider an IQ of 95 to be 'low'. You might define (it is mostly arbitrary) the average range of IQ as falling within one standard deviation of the mean (85 to 115), or perhaps one and a half standard deviations (roughly 78 to 122).

Usually the standard deviation of IQ is defined as 15. Then, by studying the characteristics of normal curves, you can arrive at a reasonably good estimate of the percentage of individuals who would fall below your arbitrary cut-off. Considering the range of one standard deviation above and below average, IQs of 85 to 115, we can estimate that close to 68.2 or 68.3 percent of the population falls within this range. We can also estimate that about 15.9% of the population falls below this range and about 15.9% falls above it. Perhaps these estimates suit your need.

Whatever your need happens to be, don't fall into the trap of glorifying IQ, a tendency that probably increases as IQs themselves increase. IQ scores are very useful over a limited range of applications, and beyond that hard work needs to kick in. No one should feel entitled because of an artificial estimate of 'intelligence'. As the saying goes, "IQ is what IQ instruments measure", whatever that means. IQs do not and should not define anyone.

A brief note on the average and standard deviation: the numbers 100 and 15 are completely arbitrary but generally accepted. You could choose, if you wanted, 917 as the average with a standard deviation of 71, or any other numbers. These numbers are then treated mathematically so that they allow the same valid analysis of the population used in developing the instrument. Virtually the same individuals represented in the 85 to 115 IQ range above would be the ones represented by the measures 846 to 988 in this whimsical scenario.
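As a quick check of those percentages, here is a minimal sketch assuming IQ scores follow a normal distribution with mean 100 and standard deviation 15; it computes the normal CDF from the error function in Python's math module:

```python
import math

def normal_cdf(x, mean=100, sd=15):
    """P(X <= x) for a normal distribution, via the error function."""
    return 0.5 * (1 + math.erf((x - mean) / (sd * math.sqrt(2))))

below = normal_cdf(85)                     # more than one sd below the mean
within = normal_cdf(115) - normal_cdf(85)  # within one sd of the mean
above = 1 - normal_cdf(115)                # more than one sd above the mean

print(f"below 85: {below:.1%}, 85-115: {within:.1%}, above 115: {above:.1%}")
# roughly 15.9%, 68.3%, 15.9%
```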
The population decreases.