In a study using 9 samples, in which the population variance is unknown, the distribution that should be used to calculate confidence intervals is Student's t-distribution with 8 (that is, 9 - 1) degrees of freedom.
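For concreteness, here is a minimal Python sketch of that calculation. The sample values and the 95% level are assumptions chosen purely for illustration:

```python
# A minimal sketch: t-based 95% CI for a mean with n = 9 and unknown variance.
from math import sqrt
from statistics import mean, stdev

from scipy import stats  # scipy.stats.t provides the t-distribution

sample = [4.2, 5.1, 4.8, 5.6, 4.9, 5.3, 4.7, 5.0, 5.2]  # hypothetical data
n = len(sample)                        # 9 samples -> 8 degrees of freedom
x_bar = mean(sample)
s = stdev(sample)                      # sample standard deviation (population variance unknown)
t_crit = stats.t.ppf(0.975, df=n - 1)  # two-sided 95% critical value from t(8)

half_width = t_crit * s / sqrt(n)
print(f"95% CI: ({x_bar - half_width:.3f}, {x_bar + half_width:.3f})")
```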
It can be.
The standard deviation associated with a statistic and its sampling distribution.
The confidence level: the desired probability that the obtained interval will contain the population parameter.
A confidence interval is a range that is likely, at some confidence level, to contain the true population parameter of interest. A confidence interval is always qualified by a particular confidence level, expressed as a percentage. The end points of the confidence interval can also be referred to as confidence limits.
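As a general template (here specialized to a mean with unknown population variance, by way of example), the confidence limits take the form

    x-bar ± t(n-1) × s / sqrt(n)

where x-bar is the sample mean, s is the sample standard deviation, n is the sample size, and t(n-1) is the critical value of the t-distribution with n-1 degrees of freedom at the chosen confidence level.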
I can examine this as a question of theory or of practice. As a matter of theory, I will rephrase your question as follows: does the theoretical confidence interval of the mean (CI) of a sample of size n become larger as n is reduced? The answer is yes. This follows from the sampling distribution of the mean, which is the probability distribution of the mean of a sample of size n. As a matter of practice: if I take a sample of size 50 from a population and calculate the CI, and then take a smaller sample, say of size 10, will I calculate a larger CI? If I use the standard deviation calculated from the sample, this is not necessarily true. The CI should usually be larger, but I cannot say it will be larger in every case, because the standard deviation of the sample varies from sample to sample. I hope this answers your question. You can find more information on confidence intervals at: http://onlinestatbook.com/chapter8/mean.html
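To illustrate the practical point, here is a rough simulation sketch. Everything about the population (normal, mean 100, standard deviation 15) is an invented assumption for illustration:

```python
# Compare t-based 95% CI widths for samples of size 50 versus size 10.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def ci_width(n, confidence=0.95):
    sample = rng.normal(100, 15, size=n)
    s = sample.std(ddof=1)                          # sample standard deviation
    t_crit = stats.t.ppf((1 + confidence) / 2, df=n - 1)
    return 2 * t_crit * s / np.sqrt(n)

# On average the n=10 interval is wider, but any single pair can go either
# way, because s itself varies from sample to sample.
for _ in range(3):
    print(f"n=50 width: {ci_width(50):.2f}   n=10 width: {ci_width(10):.2f}")
```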
A t-distribution with 15 degrees of freedom
No. For instance, a 95% confidence interval for a parameter should be taken to mean that, if you were to repeat the entire procedure of sampling from the population and calculating the confidence interval many times, then the collection of confidence intervals would include the given parameter 95% of the time. Sometimes the confidence intervals would not include the parameter.
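That repeated-sampling interpretation can be checked with a small simulation. Everything below (a normal population, true mean 0, samples of size 25) is an assumption chosen just for illustration:

```python
# Estimate the coverage of a 95% t-interval by repeated sampling.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_mean, n, trials = 0.0, 25, 10_000
t_crit = stats.t.ppf(0.975, df=n - 1)

covered = 0
for _ in range(trials):
    sample = rng.normal(true_mean, 1.0, size=n)
    half = t_crit * sample.std(ddof=1) / np.sqrt(n)
    lo, hi = sample.mean() - half, sample.mean() + half
    covered += (lo <= true_mean <= hi)   # did this interval catch the parameter?

print(f"Coverage over {trials} repetitions: {covered / trials:.3f}")  # ~0.95
```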
See: http://en.wikipedia.org/wiki/Confidence_interval It includes a worked example for the confidence interval of the mean of a distribution. In general, confidence intervals are calculated from the sampling distribution of a statistic. For the mean of n independent observations from a normal population whose variance is unknown, the relevant sampling distribution is the t-distribution with n-1 degrees of freedom, because the sample standard deviation stands in for the unknown population value.
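As a sketch of that calculation, here is one way to compute a t-based interval for a mean using scipy's built-ins; the sample values are hypothetical:

```python
# 95% t-interval for a mean via scipy.stats.t.interval.
import numpy as np
from scipy import stats

sample = np.array([12.1, 11.8, 12.5, 12.0, 11.6, 12.3, 12.2, 11.9])
n = len(sample)

ci = stats.t.interval(0.95,                     # confidence level
                      df=n - 1,                 # n-1 degrees of freedom
                      loc=sample.mean(),        # center at the sample mean
                      scale=stats.sem(sample))  # estimated standard error
print(ci)
```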
Yes, but that raises the question: how large should the sample size be?
confidence intervals
Confidence intervals may be calculated for any statistic, but the most common statistics for which CIs are computed are the mean, proportion, and standard deviation. I have included a link, which contains a worked example for the confidence interval of a mean.
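Since the linked example covers the mean, here is a sketch for a proportion instead, using the common normal (Wald) approximation; the counts below are invented:

```python
# Approximate 95% CI for a proportion via the normal (Wald) approximation.
from math import sqrt
from scipy import stats

successes, n = 130, 200
p_hat = successes / n                 # point estimate of the proportion
z = stats.norm.ppf(0.975)             # 1.96 for a two-sided 95% interval
half = z * sqrt(p_hat * (1 - p_hat) / n)
print(f"95% CI for the proportion: ({p_hat - half:.3f}, {p_hat + half:.3f})")
```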
P. van der Laan has written: 'Simple distribution-free confidence intervals for a difference in location' -- subject(s): Confidence interval, Distribution (Probability theory), Nonparametric statistics, Sampling (Statistics), Statistical hypothesis testing
A point estimate is a single value used to estimate a population parameter, such as the sample mean used to estimate the population mean. Confidence intervals can also be used to provide a range within which the population parameter is likely to lie.
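A small sketch contrasting the two, with made-up numbers: the point estimate is the sample mean, and the interval is a 95% t-interval around it:

```python
# Point estimate versus interval estimate for the same parameter.
from math import sqrt
from statistics import mean, stdev
from scipy import stats

sample = [2.3, 2.9, 2.6, 3.1, 2.4, 2.8]
n = len(sample)

point_estimate = mean(sample)   # a single best guess for the population mean
half = stats.t.ppf(0.975, df=n - 1) * stdev(sample) / sqrt(n)
print(f"point estimate: {point_estimate:.2f}")
print(f"95% interval:   ({point_estimate - half:.2f}, {point_estimate + half:.2f})")
```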