There is no benefit in doing something that cannot be done. The standard normal distribution is not transformed to the standard distribution because the latter does not exist.
Yes. Normal (or Gaussian) distributions are parametric distributions, defined by two parameters: the mean and the variance (the square of the standard deviation). Each pair of these parameters gives rise to a different normal distribution. However, they can all be "re-parametrised" to the standard normal distribution using the z-transformation. The standard normal distribution has mean 0 and variance 1.
The t distributions take into account the variability of sample standard deviations. I think it is now common to use the t distribution whenever the population standard deviation is unknown, regardless of the sample size.
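A rough numerical sketch of why this matters. The sample values below are made up, and since Python's standard library has no t distribution, the t critical value for 9 degrees of freedom (2.262) is a standard textbook value hard-coded here:

```python
from statistics import NormalDist, mean, stdev
from math import sqrt

# Hypothetical small sample (n = 10); population sd unknown
sample = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3, 12.2, 11.7, 12.4, 12.1]
n = len(sample)
xbar, s = mean(sample), stdev(sample)   # sample mean and sample sd

z_crit = NormalDist().inv_cdf(0.975)    # ~1.960, standard normal quantile
t_crit = 2.262                          # t(0.975, df = 9), textbook value

# Half-widths of 95% confidence intervals for the mean
half_z = z_crit * s / sqrt(n)
half_t = t_crit * s / sqrt(n)

# The t interval is wider: it accounts for the extra uncertainty
# that comes from estimating the standard deviation from the sample.
print(round(half_z, 3), round(half_t, 3))
```

With a larger sample the two critical values (and so the two intervals) become nearly identical, which is why the distinction matters mainly for small n.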
Because the z-score table, which is based on the standard deviation, is only applicable to normal distributions.
The Normal probability distribution is defined by two parameters: its mean and standard deviation (sd) and, between them, these two can define infinitely many different Normal distributions. The Normal distribution is very common but there is no simple way to use it to calculate probabilities. However, the probabilities for the Standard Normal distribution (mean = 0, sd = 1) have been calculated numerically and are tabulated for quick reference. The z-score is a linear transformation of a Normal variable and it allows any Normal distribution to be converted to the Standard Normal. Finding the relevant probabilities is then a simple task.
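As a concrete illustration, using Python's standard `statistics` module (the mean 100 and sd 15 are just example values, not from the original answer):

```python
from statistics import NormalDist

# An "ordinary" Normal variable: mean 100, standard deviation 15
mu, sigma = 100, 15
x = 130

# The z-score: a linear transformation onto the Standard Normal scale
z = (x - mu) / sigma          # (130 - 100) / 15 = 2.0

# P(X <= 130) looked up on the Standard Normal (mean 0, sd 1)...
p_standard = NormalDist(0, 1).cdf(z)

# ...equals P(X <= 130) computed directly on the original distribution
p_direct = NormalDist(mu, sigma).cdf(x)

print(z, round(p_standard, 4), round(p_direct, 4))
```

The tabulated value for z = 2 is about 0.9772, which is exactly what both calculations give: one table (or one function) serves every Normal distribution.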
None. z-scores are a linear transformation used to convert an "ordinary" Normal variable, with mean m and standard deviation s, to a Normal variable with mean 0 and standard deviation 1: the Standard Normal distribution.
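A quick sketch of this transformation applied to some made-up data, checking that the standardised values do have mean 0 and standard deviation 1:

```python
from statistics import mean, stdev

# Hypothetical observations from an "ordinary" Normal variable
data = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.2]
m, s = mean(data), stdev(data)

# The linear transformation z = (x - m) / s
z_scores = [(x - m) / s for x in data]

# The transformed values have mean 0 and standard deviation 1
# (up to floating-point rounding)
print(mean(z_scores), stdev(z_scores))
```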
Check the lecture on t distributions at StatLect. It is explained there.
The standard deviation is often considered the best measure of dispersion because many data distributions are close to the normal distribution.
The normal distribution, also known as the Gaussian distribution, has a familiar "bell curve" shape and approximates many different naturally occurring distributions over real numbers.
The Normal distribution is a probability distribution of the exponential family. It is a symmetric distribution defined by just two parameters: its mean and variance (or standard deviation). It is one of the most commonly occurring distributions for continuous variables. Also, under suitable conditions, other distributions can be approximated by the Normal. Unfortunately, these approximations are often used even when the required conditions are not met!
There may or may not be a benefit: it depends on the underlying distributions. Using the standard normal distribution whatever the circumstances is naive and irresponsible. It also depends on what parameter you are testing. For comparing whether or not two distributions are the same, tests such as the Kolmogorov-Smirnov test or the chi-square goodness-of-fit test are often better. For testing equality of variances, an F-test may be better.
The two distributions are symmetric about the same point (the mean). The distribution with the larger sd will be flatter, with a lower peak, and more spread out.
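This can be checked numerically, a sketch using two Normal distributions with the same mean and arbitrary example sds of 1 and 3:

```python
from statistics import NormalDist

narrow = NormalDist(mu=0, sigma=1)
wide = NormalDist(mu=0, sigma=3)

# Both symmetric about the same mean: half the probability lies below it
print(narrow.cdf(0), wide.cdf(0))

# The larger-sd curve has a lower peak at the mean...
print(narrow.pdf(0) > wide.pdf(0))      # True

# ...and is more spread out: less probability within one unit of the mean
print(narrow.cdf(1) - narrow.cdf(-1) > wide.cdf(1) - wide.cdf(-1))  # True
```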