There may or may not be a benefit: it depends on the underlying distributions, and on which parameter you are testing for. Using the standard normal distribution regardless of the circumstances is naive and irresponsible.
For comparing whether or not two distributions are the same, tests such as the Kolmogorov-Smirnov test or the Chi-Square goodness of fit test are often better. For testing the equality of variance, an F-test may be better.
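As a hedged sketch of the alternatives mentioned above, here is how a two-sample Kolmogorov-Smirnov test and a variance-equality test can be run with `scipy.stats` (Levene's test is used here in place of a raw F-ratio, since it is the more robust option scipy provides); the data are simulated for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, scale=1.0, size=500)
b = rng.normal(loc=0.5, scale=1.0, size=500)  # shifted mean, same variance

# Two-sample KS test: null hypothesis is that both samples come
# from the same continuous distribution.
ks_stat, ks_p = stats.ks_2samp(a, b)
print(ks_stat, ks_p)  # small p: the shift in means is detected

# Levene's test for equality of variances (null: equal variances).
lev_stat, lev_p = stats.levene(a, b)
print(lev_stat, lev_p)
```

With samples this large, the KS test easily detects the shifted mean, while the variance test should not reject.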
Yes. The normal (or Gaussian) distribution is a parametric distribution defined by two parameters: the mean and the variance (the square of the standard deviation). Each pair of these parameters gives rise to a different normal distribution. However, they can all be "re-parametrised" to the standard normal distribution using the z-transformation. The standard normal distribution has mean 0 and variance 1.
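A minimal sketch of that re-parametrisation, using simulated data: applying z = (x - mu) / sigma to draws from any normal distribution yields values with mean near 0 and standard deviation near 1.

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 10.0, 3.0               # an arbitrary normal distribution
x = rng.normal(mu, sigma, size=100_000)

# z-transformation: subtract the mean, divide by the standard deviation.
z = (x - mu) / sigma

print(round(z.mean(), 3), round(z.std(), 3))  # near 0 and 1
```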
There is no benefit in doing something that cannot be done. The standard normal distribution cannot be transformed to "the standard distribution" because the latter does not exist.
Yes. And that is true of most probability distributions.
The t distributions take into account the variability of the sample standard deviation. I think it is now common to use the t distribution whenever the population standard deviation is unknown, regardless of the sample size.
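To illustrate (a sketch using `scipy.stats`): the t distribution's heavier tails give larger critical values than the standard normal at small sample sizes, and the t critical value converges to the z value as the degrees of freedom grow.

```python
from scipy import stats

# Two-sided 95% critical value from the standard normal.
z_crit = stats.norm.ppf(0.975)  # about 1.96

# Corresponding t critical values for increasing degrees of freedom.
for df in (5, 30, 1000):
    t_crit = stats.t.ppf(0.975, df)
    print(df, round(t_crit, 3))  # shrinks toward z_crit as df grows
```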
Because the z-score table, which is heavily related to standard deviation, is only applicable to normal distributions.
Check the lecture on t distributions at StatLect. It is explained there.
The normal distribution, also known as the Gaussian distribution, has a familiar "bell curve" shape and approximates many different naturally occurring distributions over real numbers.
You make comparisons between their means or medians, their spread (as measured by the inter-quartile range or standard deviation), their skewness, and the underlying distributions.
The standard deviation is often regarded as the best measure of dispersion when the data distribution is close to normal.
Yes. Most do.
The normal distribution is a probability distribution of the exponential family. It is a symmetric distribution defined by just two parameters: its mean and variance (or standard deviation). It is one of the most commonly occurring distributions for continuous variables. Also, under suitable conditions, other distributions can be approximated by the normal. Unfortunately, these approximations are often used even when the required conditions are not met!
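As a hedged example of such an approximation, here is the normal approximation to a Binomial(n, p), commonly justified when n·p and n·(1-p) are both reasonably large; a continuity correction is applied, and `scipy.stats` is assumed available.

```python
from scipy import stats

n, p = 100, 0.4
k = 45

# Exact binomial probability P(X <= 45).
exact = stats.binom.cdf(k, n, p)

# Normal approximation with continuity correction: use the binomial's
# mean n*p and standard deviation sqrt(n*p*(1-p)).
mu = n * p
sigma = (n * p * (1 - p)) ** 0.5
approx = stats.norm.cdf((k + 0.5 - mu) / sigma)

print(round(exact, 4), round(approx, 4))  # very close for these n, p
```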
The two distributions are symmetric about the same point (the mean). The distribution with the larger standard deviation will be more flattened, with a lower peak and more spread in the tails.
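A quick numerical sketch of that comparison (using `scipy.stats`): evaluating two normal densities with the same mean at the peak shows the larger-sd curve sitting lower, and the density is symmetric about the mean.

```python
from scipy import stats

mu = 0.0
peak_narrow = stats.norm.pdf(mu, loc=mu, scale=1.0)  # 1/sqrt(2*pi)
peak_wide = stats.norm.pdf(mu, loc=mu, scale=2.0)    # exactly half of that

print(peak_narrow, peak_wide)  # the wider curve has the lower peak

# Symmetry about the mean: equal density at mu - d and mu + d.
print(stats.norm.pdf(mu - 1.0, loc=mu, scale=2.0)
      == stats.norm.pdf(mu + 1.0, loc=mu, scale=2.0))
```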