It is a consequence of the Central Limit Theorem (CLT).
Suppose you have a large number of independent random variables. Then, provided some fairly mild conditions are met, the CLT states that their mean has a distribution which approximates the Normal distribution - the bell curve.
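The claim above can be checked by simulation. The sketch below (standard library only; the sample sizes are illustrative choices, not from the original answer) averages many independent Uniform(0, 1) draws and shows that the sample mean clusters tightly around the population mean, as the CLT predicts:

```python
import random
import statistics

random.seed(42)

# A Uniform(0, 1) variable has mean 0.5 and variance 1/12.
# By the CLT, the mean of n independent draws is approximately
# Normal(0.5, (1/12)/n), even though the uniform itself is not bell-shaped.
n = 1000        # draws per sample mean
trials = 5000   # number of sample means to simulate

means = [statistics.fmean(random.random() for _ in range(n))
         for _ in range(trials)]

mu = statistics.fmean(means)   # should be close to 0.5
sd = statistics.stdev(means)   # should be close to sqrt((1/12)/n) ~ 0.00913

print(mu, sd)
```

Running it with other source distributions (exponential, Bernoulli, and so on) gives the same qualitative result, which is the point of the theorem.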
It depends on what the distribution is. In a Normal or Gaussian distribution, the standard deviation is a free parameter, independent of the mean, so it could certainly be 3.1. (It is the Poisson distribution whose standard deviation equals the square root of its mean.) So, again, it depends on the distribution.
We often prefer the normal distribution over other distributions in statistics because much of the data around us is continuous, and the normal distribution is the standard model for continuous data.
The standard normal distribution is tabulated, so the critical values for various outcomes can be read easily from tables. The normal density, by contrast, has no antiderivative in terms of elementary functions, so it cannot be integrated by hand: most people, even with a university degree in mathematics, would be unable to do so directly. Without tables (or numerical methods), working out the probability of events under the normal distribution would be practically impossible.
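In practice the tables (and numerical methods) come from expressing the standard normal CDF through the error function, which is built into most languages. A minimal sketch in Python (the function name `std_normal_cdf` is my own label):

```python
import math

def std_normal_cdf(z):
    # Phi(z) = 0.5 * (1 + erf(z / sqrt(2))).
    # This sidesteps the fact that the normal density has no
    # elementary antiderivative: erf is evaluated numerically.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(std_normal_cdf(1.96))  # ~0.975, the familiar tabulated value
```

This is exactly the quantity a printed z-table reports, just computed on demand.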
The standard normal distribution is a special case of the normal distribution, with mean equal to zero and standard deviation equal to one. There is only one standard normal distribution and no others, so it could be considered the "reference" one: any normal variable can be converted to it by standardization.
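The conversion mentioned above is the z-score: subtract the mean and divide by the standard deviation. A one-line sketch (the example values 115, 100, 15 are illustrative, echoing a typical IQ-style scale):

```python
def z_score(x, mu, sigma):
    # Map a value from Normal(mu, sigma) onto the standard normal scale.
    return (x - mu) / sigma

print(z_score(115, 100, 15))  # 1.0 -- one standard deviation above the mean
```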
In parametric statistical analysis we always assume some probability distribution, such as the Normal, Binomial, Poisson, or Uniform. In statistics we always work with data, so specifying a probability distribution means answering the question: "from which distribution do the data come?"