There are infinitely many possible confidence levels; the discipline and the circumstances determine which one is used. Common choices are 50% (roughly, "is the event more likely than not?"), 75%, 90%, 95%, 99%, 99.5%, 99.9%, and 99.99%.
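As a minimal sketch (assuming SciPy is available), here is how each of those common confidence levels maps to a two-sided critical value of the standard normal distribution:

```python
from scipy.stats import norm

for level in (0.50, 0.75, 0.90, 0.95, 0.99, 0.995, 0.999, 0.9999):
    # Two-sided interval: split the leftover probability between the tails.
    z = norm.ppf(1 - (1 - level) / 2)
    print(f"{level:.2%} confidence level -> z = {z:.3f}")
```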
They are related but they are NOT the same.
A confidence interval gives a range within which the "true" population mean falls with a specified probability. The range is constructed around the experimental (sample) mean.
Statistical estimates cannot be exact: there is a degree of uncertainty associated with any statistical estimate. A confidence interval is a range constructed so that it contains the true value of the quantity being estimated with the stated probability.
See: http://en.wikipedia.org/wiki/Confidence_interval, which includes a worked example of a confidence interval for the mean of a distribution. In general, confidence intervals are calculated from the sampling distribution of a statistic. For the mean of n independent, normally distributed observations, standardizing the sample mean with the sample standard deviation gives a statistic that follows the t distribution with n-1 degrees of freedom.
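A hedged sketch of that kind of calculation, using invented sample data purely for illustration (assuming SciPy for the t critical value):

```python
import math
from scipy.stats import t

sample = [4.2, 3.9, 5.1, 4.8, 4.4, 4.6]   # hypothetical measurements
n = len(sample)
mean = sum(sample) / n
# Sample standard deviation (n - 1 in the denominator).
s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))

confidence = 0.95
# Critical value from the t distribution with n - 1 degrees of freedom.
t_crit = t.ppf(1 - (1 - confidence) / 2, df=n - 1)
margin = t_crit * s / math.sqrt(n)

print(f"{confidence:.0%} CI for the mean: "
      f"({mean - margin:.3f}, {mean + margin:.3f})")
```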
Confidence intervals may be calculated for any statistic, but the most common statistics for which CIs are computed are the mean, a proportion, and the standard deviation. I have included a link above, which contains a worked example of the confidence interval for a mean.
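For a proportion, one common approach is the normal-approximation (Wald) interval; the counts below are hypothetical, and this is only a sketch of that method, not the only way to do it:

```python
import math
from scipy.stats import norm

successes, trials = 130, 200        # hypothetical counts
p_hat = successes / trials
confidence = 0.95
z = norm.ppf(1 - (1 - confidence) / 2)
# Standard error of the sample proportion under the normal approximation.
margin = z * math.sqrt(p_hat * (1 - p_hat) / trials)

print(f"{confidence:.0%} CI for the proportion: "
      f"({p_hat - margin:.3f}, {p_hat + margin:.3f})")
```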