Standard deviation is considered the best measure of dispersion because:
a) It measures absolute dispersion.
b) It is the most frequently used measure, as it possesses almost all the qualities a good measure of variation should have.
c) It is based on all observations.
d) It is rigidly defined.
e) It is capable of further algebraic treatment.
f) It is least affected by fluctuations of sampling.
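As a minimal sketch of point c), every observation enters the calculation (the data values below are made up for illustration):

```python
import statistics

# Hypothetical sample; each observation contributes to the result.
data = [4, 8, 6, 5, 3, 7]

mean = sum(data) / len(data)
# Sample standard deviation: square root of the sum of squared
# deviations divided by n - 1.
s = (sum((x - mean) ** 2 for x in data) / (len(data) - 1)) ** 0.5

# statistics.stdev uses the same n - 1 (sample) formula.
print(round(s, 4))  # matches statistics.stdev(data)
```

Changing any single value in `data` changes `s`, which is what "based on all observations" means in practice.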
The standard error is the standard deviation divided by the square root of the sample size.
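To make that relationship concrete, a short sketch using Python's standard library (the sample values are hypothetical):

```python
import statistics

sample = [12.0, 15.0, 11.0, 14.0, 13.0]

s = statistics.stdev(sample)      # sample standard deviation
se = s / len(sample) ** 0.5       # standard error of the mean: s / sqrt(n)
print(round(se, 4))
```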
The variance is the standard deviation squared; equivalently, the standard deviation is the square root of the variance. In many cases the variance is larger than the standard deviation, but not always: whenever the standard deviation is less than 1, the variance is the smaller of the two.
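Both cases can be seen in a small sketch (the two data sets are made up to illustrate the point):

```python
import statistics

wide = [2.0, 6.0, 10.0]                   # spread-out values
var_wide = statistics.variance(wide)      # 16.0
sd_wide = statistics.stdev(wide)          # 4.0  -> variance > std. deviation

narrow = [1.0, 1.2, 1.4]                  # tightly clustered values
var_narrow = statistics.variance(narrow)  # ~0.04
sd_narrow = statistics.stdev(narrow)      # ~0.2 -> variance < std. deviation
```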
The context of this question is not clear because it is not a full question. However, when estimating a parameter such as µ from sample data while the population standard deviation σ is unknown, we estimate it with the statistic s, where

s = √( Σ(x − x̄)² / (n − 1) )

The estimator for µ, namely the sample mean x̄, then has an estimated standard deviation of s(x̄) = s/√n, and the statistic

T = (x̄ − hypothesized µ) / (s/√n)

has a Student's t distribution with n − 1 degrees of freedom. If n > 30, then by the Central Limit Theorem the t distribution approaches the shape of the normal (Gaussian) distribution, and the Z table may be used to find the critical values needed for hypothesis tests, p-values, and interval estimates.
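The t statistic described above can be sketched with the standard library alone; the sample values and the hypothesized mean `mu0` are hypothetical:

```python
import statistics

# Hypothetical sample and hypothesized population mean.
sample = [5.1, 4.9, 5.3, 5.0, 4.8, 5.2]
mu0 = 5.0

n = len(sample)
xbar = statistics.mean(sample)
s = statistics.stdev(sample)   # sample std. deviation (n - 1 denominator)
se = s / n ** 0.5              # estimated standard deviation of x̄
t = (xbar - mu0) / se          # Student's t with n - 1 degrees of freedom
print(round(t, 2))
```

The resulting t would be compared against a t table with n − 1 = 5 degrees of freedom; with n > 30 the Z table becomes a reasonable approximation, as the answer notes.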