The two distributions are symmetric about the same point (the mean). The distribution with the larger sd is flatter: it has a lower peak and is more spread out.
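A quick numerical illustration (a sketch; the mean of 0 and the standard deviations of 1 and 3 are assumptions chosen only for the example):

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution with mean mu and standard deviation sigma."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

mu = 0.0
for sigma in (1.0, 3.0):
    peak = normal_pdf(mu, mu, sigma)        # height at the shared mean
    tail = normal_pdf(mu + 4.0, mu, sigma)  # height 4 units from the mean
    print(f"sd={sigma}: peak height = {peak:.4f}, density at mean+4 = {tail:.5f}")
# The larger-sd curve has the lower peak (0.1330 vs 0.3989) but more density out in the tails.
```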
It goes up.
The absolute value of the standard score becomes smaller.
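A minimal sketch, using made-up values for the raw score, mean, and standard deviations: the standard score is z = (x - mean)/sd, so dividing by a larger sd pulls z toward zero.

```python
x, mean = 75.0, 60.0
for sd in (5.0, 10.0, 20.0):
    z = (x - mean) / sd  # standard score for the same raw score
    print(f"sd={sd}: z = {z:.2f}")
# sd=5.0: z = 3.00, sd=10.0: z = 1.50, sd=20.0: z = 0.75 -- larger sd, smaller |z|.
```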
Nothing actually happens! You just get a value that is very unlikely but still possible. That it is possible is evidenced by the fact that the value was observed.
If there are n scores and one score is changed by x, the total changes by x, so the mean changes by x/n.
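A quick check with invented scores (the particular numbers are only for illustration):

```python
scores = [4.0, 7.0, 9.0, 12.0]      # n = 4 scores
n = len(scores)
old_mean = sum(scores) / n           # 8.0

x = 6.0                              # change one score by x
scores[2] += x
new_mean = sum(scores) / n           # 9.5

print(new_mean - old_mean, x / n)    # both print 1.5
```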
Nothing happens. There is no particular significance in that happening.
It goes up.
The standard deviation appears in the numerator of the margin of error. As the standard deviation increases, the margin of error increases, so the confidence interval gets wider.
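A minimal sketch of how this plays out numerically, assuming the usual large-sample interval mean ± z* · s/sqrt(n) at a 95% level (z* ≈ 1.96); the values of n and s are made up for illustration:

```python
import math

def ci_width(s, n, z_star=1.96):
    """Width of the interval mean ± z_star * s / sqrt(n)."""
    margin_of_error = z_star * s / math.sqrt(n)
    return 2 * margin_of_error

n = 50
for s in (2.0, 4.0, 8.0):
    print(f"s = {s}: width = {ci_width(s, n):.3f}")
# Doubling the standard deviation doubles the margin of error, and so doubles the width.
```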
The standardised score decreases.
Not a lot. After all, the sample sd is an estimate for the population sd.
The absolute value of the standard score becomes smaller.
Nothing actually happens! You just get a value that is very unlikely but still possible. That it is possible is evidenced by the fact that the value was observed.
Decreases.
As k, the degrees of freedom, increases, the distribution of (chisq - k)/sqrt(2k) approaches the standard normal distribution.
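A rough simulation sketch of this convergence (the number of draws and the particular choices of k are assumptions made for illustration):

```python
import random
import statistics

def standardised_chisq(k):
    """One draw of (chisq - k)/sqrt(2k), where chisq is a sum of k squared standard normals."""
    chisq = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(k))
    return (chisq - k) / (2 * k) ** 0.5

random.seed(0)
for k in (2, 10, 50, 200):
    vals = [standardised_chisq(k) for _ in range(10000)]
    frac_below_zero = sum(v < 0 for v in vals) / len(vals)
    print(f"k={k}: mean ≈ {statistics.mean(vals):+.3f}, sd ≈ {statistics.stdev(vals):.3f}, "
          f"P(value < 0) ≈ {frac_below_zero:.3f}")
# The mean stays near 0 and the sd near 1 for every k, but P(value < 0) is well above
# the standard normal's 0.5 for small k (right skew) and moves toward 0.5 as k grows.
```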
It is the expected value of the distribution. It also happens to be the mode and median.
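A quick numerical check, assuming the distribution in question is a normal (where the claim holds exactly by symmetry); the mean and sd used are arbitrary:

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

mu, sigma = 10.0, 1.0
xs = [mu + step * 0.01 for step in range(-500, 501)]    # grid out to mu ± 5 sd
mode = max(xs, key=lambda x: normal_pdf(x, mu, sigma))  # where the density peaks
prob_below_mean = sum(normal_pdf(x, mu, sigma) * 0.01 for x in xs if x < mu)
print(mode, round(prob_below_mean, 2))  # mode = 10.0 (the mean); P(X < mean) ≈ 0.5, so it is the median
```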
If there are n scores and one score is changed by x, then the mean changes by x/n.
Nothing happens. There is no particular significance in that happening.
The parameters of the population don't depend on the sample size at all. If estimates from two samples of different sizes disagree, that just means that at least one of the samples doesn't accurately represent the population. Maybe both.