If I take 10 items (a small sample) from a population and calculate the standard deviation, then I take 100 items (larger sample), and calculate the standard deviation, how will my statistics change?
The smaller sample could have a standard deviation that is higher than, lower than, or about equal to that of the larger sample. It is also possible that, by chance, the smaller sample's standard deviation is closer to the population standard deviation. However, a properly taken larger sample will, in general, give a more reliable estimate of the population standard deviation than a smaller one.
There are mathematical results showing that, in the long run, larger samples provide better estimates. This is generally, but not always, true.
If your population is changing while you are collecting data, then a very large sample may not be representative, because it takes time to collect.
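A quick simulation sketch (Python with NumPy; the normal population and sigma = 5 are assumptions purely for illustration) shows the point: both sample sizes give estimates centred near sigma, but the n = 100 estimates scatter much less around it.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 5.0           # assumed population standard deviation (illustrative)
trials = 10_000

# Sample standard deviation (ddof=1) computed for many repeated samples
sd_small = [np.std(rng.normal(0.0, sigma, 10), ddof=1) for _ in range(trials)]
sd_large = [np.std(rng.normal(0.0, sigma, 100), ddof=1) for _ in range(trials)]

# Both sets of estimates are centred near sigma, but the n=100 estimates
# scatter far less around it, i.e. they are more reliable.
print("n=10 :", round(np.mean(sd_small), 2), "+/-", round(np.std(sd_small), 2))
print("n=100:", round(np.mean(sd_large), 2), "+/-", round(np.std(sd_large), 2))
```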
If sigma is the standard deviation of a sample of size n computed with divisor n (the "population" formula applied to the sample), then the usual sample standard deviation, which estimates the population standard deviation, is s = sigma*sqrt[n/(n-1)] (Bessel's correction).
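A minimal sketch of that rescaling, with made-up data, using NumPy's ddof argument to switch between the two divisors:

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # made-up data
n = len(x)

sigma = np.std(x, ddof=0)           # divisor n: "population" formula applied to the sample
s = sigma * np.sqrt(n / (n - 1))    # rescaled by sqrt(n/(n-1)) (Bessel's correction)

print(s, np.std(x, ddof=1))         # the two values agree
```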
The standard error of the sample mean is calculated by dividing the sample estimate of the population standard deviation (the "sample standard deviation") by the square root of the sample size.
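A short sketch of that calculation, with hypothetical measurements:

```python
import numpy as np

x = np.array([12.1, 13.4, 11.8, 12.9, 13.0, 12.5])  # hypothetical measurements
n = len(x)

s = np.std(x, ddof=1)        # sample standard deviation (estimate of population sigma)
se_mean = s / np.sqrt(n)     # standard error of the sample mean

print(se_mean)
```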
No, it is not.
No.
Increasing the sample size will narrow the confidence interval, while increasing the standard deviation will widen it. Because the interval width is not a linear function of either quantity, the overall effect has to be calculated from the values before and after the changes. Roughly, the width scales with s/sqrt(n): if the sample size grows faster, in that sense, than the standard deviation, the interval narrows; conversely, if the standard deviation grows faster than the sample size, the interval widens.
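A minimal sketch of the competing effects, assuming a normal-approximation interval for the mean (half-width z*s/sqrt(n)) and made-up numbers:

```python
import math

def ci_half_width(s, n, z=1.96):
    """Half-width of an approximate 95% confidence interval for the mean."""
    return z * s / math.sqrt(n)

# Before: s = 10, n = 25.  After: various combinations of larger s and larger n.
print(ci_half_width(10, 25))    # 3.92
print(ci_half_width(20, 100))   # 3.92  -> s doubled, n quadrupled: effects cancel exactly
print(ci_half_width(20, 50))    # ~5.54 -> s grew faster than sqrt(n): interval widens
print(ci_half_width(12, 100))   # ~2.35 -> n grew faster: interval shrinks
```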
When the sample size is small
There is no such thing. The standard error can be calculated for a sample of any size greater than 1.
The standard error is the standard deviation divided by the square root of the sample size.
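For example, with a made-up sample of five values (SciPy's sem helper, which uses the same formula, is included only as a cross-check):

```python
import numpy as np
from scipy.stats import sem

x = np.array([4.2, 5.1, 3.8, 4.9, 4.4])        # a small made-up sample, n = 5

manual = np.std(x, ddof=1) / np.sqrt(len(x))   # sample SD divided by sqrt(n)
print(manual, sem(x))                          # scipy.stats.sem gives the same value
```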
Yes.
2
No.
The standard error would generally decrease, because the larger the sample size is, the more we know about the population, so the estimate of the mean becomes more precise. The standard deviation itself does not systematically shrink as the sample grows; its estimate simply becomes more stable.
In general, the expected value of the mean of a truly random sample does not depend on the sample size; the same is true of the variance and the standard deviation.
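A small simulation sketch, assuming a normal population with mu = 50 and sigma = 8 purely for illustration: the averages of the estimates stay near mu and sigma at every sample size; only their sample-to-sample scatter shrinks.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 50.0, 8.0          # assumed population parameters, illustrative only

for n in (10, 100, 1000):
    means = [np.mean(rng.normal(mu, sigma, n)) for _ in range(5000)]
    sds = [np.std(rng.normal(mu, sigma, n), ddof=1) for _ in range(5000)]
    # The averages stay close to mu and sigma for every n; only the
    # scatter of the individual estimates shrinks as n grows.
    print(f"n={n:4d}  mean of means {np.mean(means):.2f}  mean of SDs {np.mean(sds):.2f}")
```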