Did you mean, "How do you calculate a 99.9% confidence interval for a parameter using the mean and the standard deviation?"
The parameter is the population mean μ. Let xbar and s denote the sample mean and the sample standard deviation. The limits of a 99.9% confidence interval for μ are
xbar - 3.29 s / √n and
xbar + 3.29 s / √n
where n is the sample size. The critical value 3.29 (more precisely 3.2905) comes from a Normal probability table: it is the z satisfying P(-z < Z < z) = 0.999. For small samples, the t-distribution with n - 1 degrees of freedom should be used instead.
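As a rough illustration, here is a minimal Python sketch of the same calculation; the sample data are made up, and scipy is assumed to be available for the critical values:

```python
import math
from scipy import stats

data = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3, 12.2, 11.7]  # hypothetical sample

n = len(data)
xbar = sum(data) / n
s = math.sqrt(sum((x - xbar) ** 2 for x in data) / (n - 1))  # sample std dev

# Large-sample (Normal) critical value for 99.9% two-sided coverage.
z = stats.norm.ppf(0.9995)               # ~3.2905
margin = z * s / math.sqrt(n)
print(f"Normal 99.9% CI: ({xbar - margin:.3f}, {xbar + margin:.3f})")

# For a sample this small, the t critical value is more appropriate.
t = stats.t.ppf(0.9995, df=n - 1)
margin_t = t * s / math.sqrt(n)
print(f"t-based 99.9% CI: ({xbar - margin_t:.3f}, {xbar + margin_t:.3f})")
```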
Confidence intervals may be calculated for any statistic, but the most common statistics for which CIs are computed are the mean, proportion and standard deviation. I have included a link, which contains a worked-out example for the confidence interval of a mean.
It goes up.
You probably mean the confidence interval. When you construct a confidence interval, it has a percentage coverage that is based on assumptions about the population distribution. If the population distribution is skewed, there is reason to believe that (a) the statistics upon which the interval is based (namely the mean and standard deviation) might well be biased, and (b) the confidence interval will not cover the population value as accurately or symmetrically as expected.
Why is a confidence interval useful?
Answers.com says it is: "A statistical range with a specified probability that a given parameter lies within the range." I think that means just how confident you are that your statistical analysis is correct.
No. When you calculate a 95% confidence interval for a parameter, this should be taken to mean that, if you were to repeat the entire procedure of sampling from the population and calculating the confidence interval many times, the collection of confidence intervals would include the given parameter 95% of the time, and the remaining 5% of the time it would not.
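A minimal simulation sketch of this interpretation, using only the Python standard library (the population parameters, sample size and trial count below are made-up assumptions):

```python
import random
import statistics

TRUE_MU, SIGMA, N, Z95, TRIALS = 50.0, 10.0, 30, 1.96, 10_000

random.seed(0)
covered = 0
for _ in range(TRIALS):
    # Draw a fresh sample and build a z-based 95% CI for the mean.
    sample = [random.gauss(TRUE_MU, SIGMA) for _ in range(N)]
    xbar = statistics.mean(sample)
    s = statistics.stdev(sample)
    margin = Z95 * s / N ** 0.5
    if xbar - margin <= TRUE_MU <= xbar + margin:
        covered += 1

# Close to 0.95, not exactly: each individual interval either
# contains the true mean or it does not.
print(f"Observed coverage: {covered / TRIALS:.3f}")
```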
The standard deviation appears in the numerator of the margin of error calculation, so as the standard deviation increases, the margin of error increases and the confidence interval gets wider.
No.
No, the confidence interval (CI) doesn't always contain the true population parameter. A 95% CI means that the procedure used to construct intervals captures the true parameter 95% of the time; any particular interval either does or does not contain it.
The confidence level is the desired probability that the interval produced by the procedure will contain the population parameter.
An increase in sample size will narrow the confidence interval, while an increase in standard deviation will widen it. Because the width of the interval is proportional to s/√n rather than being a linear function of either quantity, the overall effect depends on the relative rates at which the two change: if √n grows faster than s, the interval narrows; conversely, if s grows faster than √n, the interval widens (see the sketch below).
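A minimal sketch of that trade-off, assuming the usual z-based half-width z·s/√n:

```python
import math

def half_width(s: float, n: int, z: float = 1.96) -> float:
    """Half-width (margin of error) of a z-based CI for a mean."""
    return z * s / math.sqrt(n)

print(half_width(s=10, n=100))   # baseline:                 1.96
print(half_width(s=10, n=400))   # n x4  -> width halves:    0.98
print(half_width(s=20, n=100))   # s x2  -> width doubles:   3.92
print(half_width(s=20, n=400))   # both  -> effects cancel:  1.96
```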
Confidence Intervals
A confidence interval (CI) is an interval estimate of a parameter stated with a degree of confidence. Thus, a 95% CI is an estimate of the parameter at the 95% confidence level, which is the level most commonly used. Confidence intervals for means and proportions are calculated as follows: point estimate ± margin of error.
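As a minimal sketch of the point estimate ± margin of error recipe for a proportion (the counts below are hypothetical, using the large-sample Normal approximation):

```python
import math

successes, n, z95 = 130, 200, 1.96   # hypothetical survey counts

p_hat = successes / n                # point estimate
margin = z95 * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"point estimate: {p_hat:.3f}")
print(f"95% CI for p: ({p_hat - margin:.3f}, {p_hat + margin:.3f})")
```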
Confidence intervals represent a range that is likely, at some confidence level, to contain the true population parameter of interest. A confidence interval is always qualified by a particular confidence level, expressed as a percentage. The end points of the confidence interval are also referred to as confidence limits.
It will make it wider.
Never!