Let sigma = standard deviation. The standard error of the sample mean is sigma / sqrt(n), where n is the sample size. Since you are dividing the standard deviation by sqrt(n), which is greater than 1 whenever n > 1, the standard error is smaller than the standard deviation for any sample with more than one observation.
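As a quick illustration (a minimal sketch using Python's statistics module; the sample values are made up):

```python
import math
import statistics

# Hypothetical sample data, just for illustration.
sample = [4.1, 5.0, 5.5, 4.8, 5.2, 4.9, 5.3, 4.7]
n = len(sample)

sigma = statistics.stdev(sample)  # sample standard deviation
se = sigma / math.sqrt(n)         # standard error of the sample mean

print(f"standard deviation: {sigma:.3f}")
print(f"standard error:     {se:.3f}")  # smaller, since sqrt(8) > 1
```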
No.
The more precise a result, the smaller will be the standard deviation of the data the result is based upon.
The relationship is inverse in terms of shape: a larger standard deviation produces a lower, flatter peak (data more spread out) and a smaller standard deviation produces a taller, narrower peak (data more centrally located). Strictly speaking, though, kurtosis as usually defined is a standardized, scale-free measure, so it does not change when the data are rescaled; what the standard deviation controls is the height and width of the curve, not the kurtosis itself.
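A quick check of that last point (a pure-Python sketch; the data are made up) shows that rescaling the data changes the standard deviation but leaves the standardized kurtosis untouched:

```python
def kurtosis(xs):
    """Standardized fourth moment (population form, not excess kurtosis)."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / var ** 2

data = [1.0, 2.0, 2.5, 3.0, 4.5, 5.0]
scaled = [10 * x for x in data]  # ten times the standard deviation

print(kurtosis(data))    # same value...
print(kurtosis(scaled))  # ...despite the much larger spread
```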
If I take 10 items (a small sample) from a population and calculate the standard deviation, then take 100 items (a larger sample) and calculate the standard deviation, how will my statistics change? The smaller sample could have a higher, lower, or roughly equal standard deviation compared with the larger sample; it is even possible that the smaller sample happens, by chance, to be closer to the population standard deviation. However, a properly taken larger sample will, in general, be a more reliable estimate of the population standard deviation than a smaller one. There are mathematical results showing that, in the long run, larger samples provide better estimates. This is generally, but not always, true: if the population is changing while you collect data, a very large sample may not be representative, since it takes time to collect. A small simulation makes the point concrete; see the sketch below.
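This sketch assumes a normal population with a known standard deviation of 10 and compares how much the sample estimate varies for n = 10 versus n = 100:

```python
import random
import statistics

random.seed(42)
TRUE_SIGMA = 10.0

def sd_estimate(sample_size):
    """Draw one sample from the assumed population and estimate its sd."""
    sample = [random.gauss(0, TRUE_SIGMA) for _ in range(sample_size)]
    return statistics.stdev(sample)

# The spread of the estimates across repeated draws shrinks as n grows,
# meaning the larger sample is the more reliable estimator.
for n in (10, 100):
    estimates = [sd_estimate(n) for _ in range(1000)]
    spread = statistics.stdev(estimates)
    print(f"n={n:4d}: estimates scatter about {spread:.2f} around {TRUE_SIGMA}")
```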
The smaller the standard deviation, the closer together the data is. A standard deviation of 0 tells you that every number is the same.
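For example (using Python's statistics module):

```python
import statistics

print(statistics.pstdev([7, 7, 7, 7]))   # 0.0 -- every value identical
print(statistics.pstdev([6, 7, 7, 8]))   # small, values close together
print(statistics.pstdev([1, 7, 7, 13]))  # larger, values more spread out
```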
Standard deviation in statistics refers to how much the data deviate from the average, or mean, value. The sample standard deviation is the same measure computed from a sample, that is, from data collected from a smaller pool than the whole population.
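The two versions are computed slightly differently: the sample formula divides by n - 1 rather than n. Python's statistics module exposes both (the data here are made up):

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]

print(statistics.pstdev(data))  # population sd: divides by n      -> 2.0
print(statistics.stdev(data))   # sample sd: divides by n - 1      -> ~2.138
```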
In the same way that you calculate a mean and median that are greater than the standard deviation!
An acceptable standard deviation depends entirely on the study and on the person asking for it. In general, the smaller the standard deviation, the more acceptable the result, because it indicates less variability and therefore less room for random error.
The absolute value of the standard score becomes smaller. Since the standard score is z = (x - mean) / sd, increasing the standard deviation shrinks |z|.
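A quick numeric check (with made-up values):

```python
def z_score(x, mean, sd):
    """Standard score: how many standard deviations x lies from the mean."""
    return (x - mean) / sd

x, mean = 75, 60
for sd in (5, 10, 20):
    print(f"sd={sd:2d}: z = {z_score(x, mean, sd):5.2f}")
# sd= 5: z =  3.00
# sd=10: z =  1.50
# sd=20: z =  0.75  -- |z| shrinks as the standard deviation grows
```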
Yes. If the variance is less than 1, the standard deviation will be greater than the variance. For example, if the variance is 0.5, the standard deviation is sqrt(0.5), or about 0.707.
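Checking a few values in Python:

```python
import math

for variance in (0.25, 0.5, 0.9, 1.0, 4.0):
    sd = math.sqrt(variance)
    relation = ">" if sd > variance else "<="
    print(f"variance={variance:4.2f}  sd={sd:.3f}  sd {relation} variance")
# The standard deviation exceeds the variance exactly when the
# variance is below 1 (they are equal at 0 and 1).
```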
A large standard deviation means that the data are spread out. Whether a standard deviation counts as "large" is relative, but a larger standard deviation always means the data are more spread out than with a smaller one. For example, if the mean were 60 and the standard deviation 1, that would be a small standard deviation: the data are not spread out, and a score of 74 or 43 would be highly unlikely, almost impossible. If instead the mean were 60 and the standard deviation 20, that would be a large standard deviation: the data are more spread out, and a score of 74 or 43 wouldn't be odd or unusual at all.
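The "almost impossible" versus "not unusual" contrast can be made concrete with the normal tail probability (a sketch assuming the scores are normally distributed, using only the standard library):

```python
import math

def two_sided_tail(x, mean, sd):
    """P(|X - mean| >= |x - mean|) for a normal distribution."""
    z = abs(x - mean) / sd
    return math.erfc(z / math.sqrt(2))

for sd in (1, 20):
    p = two_sided_tail(74, mean=60, sd=sd)
    print(f"sd={sd:2d}: P(score at least as extreme as 74) ~ {p:.3g}")
# sd= 1: ~ 2e-44  (14 standard deviations out -- effectively impossible)
# sd=20: ~ 0.484  (0.7 standard deviations out -- entirely ordinary)
```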
There's no single valid answer to your question. The problem is that a standard deviation can be close to zero but has no upper limit. So I can say that if my standard deviation is much smaller than my mean, that indicates a relatively low standard deviation, though even this is somewhat subjective. But I can't say that a standard deviation many times the mean would automatically be considered high; it depends on the problem at hand.
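One common way to express this relative comparison is the coefficient of variation, the standard deviation divided by the mean (a sketch with made-up numbers):

```python
def coefficient_of_variation(sd, mean):
    """Relative spread: standard deviation as a fraction of the mean."""
    return sd / mean

# The same standard deviation of 5 reads very differently
# depending on the size of the mean.
print(coefficient_of_variation(5, 1000))  # 0.005 -- low relative spread
print(coefficient_of_variation(5, 10))    # 0.5   -- high relative spread
```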