In statistics, standard deviation measures how much the values deviate from the average, or mean. The sample standard deviation is the standard deviation computed from a sample, that is, from data collected from a smaller pool than the whole population.
It is denoted s, the sample standard deviation, and it serves as the estimate of the population standard deviation.
Usually, s means the standard deviation of a sample.
A single observation cannot have a sample standard deviation: the sample formula divides by n - 1, which is zero when n = 1.
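As a quick illustration (a minimal Python sketch with made-up data, not part of the original answers), the sample standard deviation uses the n - 1 denominator and is therefore undefined for a single observation:

```python
import statistics

data = [4.0, 7.0, 9.0, 10.0]

# Sample standard deviation s: divides by n - 1 (Bessel's correction).
s = statistics.stdev(data)

# Population standard deviation sigma: divides by n.
sigma = statistics.pstdev(data)

print(s, sigma)

# With a single observation, n - 1 = 0, so s is undefined:
try:
    statistics.stdev([5.0])
except statistics.StatisticsError as e:
    print("undefined:", e)
```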
The standard deviation of the population.
Yes
If the standard deviation of the data computed with the population formula (denominator n) is sigma, then the estimate of the sample standard deviation for a sample of size n is s = sigma*sqrt[n/(n-1)].
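A short sketch of that conversion (illustrative Python with made-up data; it reads "sigma" as the standard deviation of the sample computed with the n denominator, as above):

```python
import math
import statistics

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(data)

# Standard deviation with the n denominator (population formula).
sigma_n = statistics.pstdev(data)

# The correction factor sqrt(n / (n - 1)) recovers the sample
# standard deviation s.
s_converted = sigma_n * math.sqrt(n / (n - 1))

# Agrees with the direct sample standard deviation (n - 1 denominator).
s_direct = statistics.stdev(data)
print(s_converted, s_direct)  # the two values match
```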
s is the standard deviation of a sample. It is difficult to know exactly what you are asking. I will note that there is a statistical programming language called S-Plus; see "Modern Applied Statistics with S-Plus" by Venables and Ripley. That's about all that comes to mind.
The standard error of the sample mean is calculated by dividing the sample estimate of the population standard deviation (the "sample standard deviation") by the square root of the sample size.
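A minimal sketch of that calculation (illustrative Python with made-up numbers):

```python
import math
import statistics

sample = [12.1, 11.8, 12.5, 12.0, 11.6, 12.3, 12.2, 11.9]
n = len(sample)

# Sample standard deviation: the sample estimate of the population sigma.
s = statistics.stdev(sample)

# Standard error of the sample mean: s divided by sqrt(n).
se_mean = s / math.sqrt(n)

print(f"s = {s:.4f}, SE of the mean = {se_mean:.4f}")
```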
the sample standard deviation
If I take 10 items (a small sample) from a population and calculate the standard deviation, then take 100 items (a larger sample) and calculate the standard deviation, how will my statistics change? The smaller sample could have a standard deviation that is higher than, lower than, or about equal to that of the larger sample. It is even possible that, by chance, the smaller sample's estimate is closer to the population standard deviation. However, a properly taken larger sample will, in general, be a more reliable estimate of the population standard deviation than a smaller one. There are mathematical results showing that, in the long run, larger samples provide better estimates. This is generally but not always true: if the population is changing while you are collecting data, a very large sample may not be representative, since it takes time to collect.
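One way to see the long-run effect (an illustrative simulation sketch, assuming a normally distributed population with true sigma = 1; not part of the original answer) is to repeatedly draw samples of size 10 and size 100 and compare how the standard deviation estimates scatter:

```python
import random
import statistics

random.seed(42)
TRIALS = 2000

def sd_estimates(n):
    """Repeatedly draw samples of size n from a standard normal
    population (true sigma = 1) and return the sample SDs."""
    return [
        statistics.stdev([random.gauss(0.0, 1.0) for _ in range(n)])
        for _ in range(TRIALS)
    ]

for n in (10, 100):
    estimates = sd_estimates(n)
    # The estimates cluster more tightly around sigma = 1 as n grows.
    print(f"n = {n:3d}: mean estimate = {statistics.mean(estimates):.3f}, "
          f"spread of estimates = {statistics.stdev(estimates):.3f}")
```

The spread of the estimates shrinks as the sample size grows, which is the sense in which the larger sample is the more reliable estimator.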
SE is the standard error. It is the standard deviation divided by the square root of the sample size, and it measures how accurately a sample statistic describes the population.