Q: When can the standard error of the estimate be used to construct a prediction interval about a value y'?

Best Answer

You need the data to be homoscedastic and the errors to be independent (and, for exact intervals, normally distributed). The value of the independent variable(s) at which you predict should lie within, or very close to, the range of observed values.
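To make that concrete, here is a minimal sketch of a 95% prediction interval for a new value y' at x0 under simple linear regression; the toy data, the choice of x0, and the 95% level are my own assumptions, not part of the question:

```python
# Prediction interval for a new observation y' at x0 in simple linear
# regression, using the standard error of the estimate. Assumes
# independent, homoscedastic errors and an x0 inside the observed
# range -- the conditions named in the answer above.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # invented data
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])
n = len(x)

# Least-squares fit: y_hat = b0 + b1 * x
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
resid = y - (b0 + b1 * x)

# Standard error of the estimate: s = sqrt(SSE / (n - 2))
s = np.sqrt(np.sum(resid ** 2) / (n - 2))

x0 = 3.5                       # must lie within the observed x range
y_hat = b0 + b1 * x0
# Standard error for predicting a single new y at x0
se_pred = s * np.sqrt(1 + 1 / n +
                      (x0 - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2))
t_crit = stats.t.ppf(0.975, df=n - 2)   # 95% two-sided interval

print(f"y' = {y_hat:.2f} +/- {t_crit * se_pred:.2f}")
```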

Continue Learning about Statistics

Is it true that the larger the standard deviation the wider the confidence interval?

Yes. For a fixed sample size and confidence level, a larger standard deviation produces a larger margin of error, and therefore a wider confidence interval.


What happens to the confidence interval if the sample size and the population standard deviation increase simultaneously?

An increase in sample size narrows the confidence interval, while an increase in standard deviation widens it. The margin of error is z·σ/√n, which is not linear in either quantity, so the net effect depends on the relative rates of change: if √n grows faster than σ, the interval narrows; if σ grows faster, it widens. You would need to compute the width before and after the changes to see which effect dominates, as the sketch below shows.
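One way to see the two effects pull against each other is to compare the margin of error z·σ/√n before and after both quantities change; the numbers below are invented purely for illustration:

```python
# Margin of error for a mean: E = z * sigma / sqrt(n).
# Raising n shrinks E; raising sigma grows E; the net effect
# depends on which grows faster.
import math

z = 1.96                                  # 95% confidence

def margin(sigma, n):
    return z * sigma / math.sqrt(n)

print(margin(10, 25))    # baseline:           3.92
print(margin(12, 100))   # n grew faster:      2.35 -> interval narrows
print(margin(25, 36))    # sigma grew faster:  8.17 -> interval widens
```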


When to use t-test?

Use the t-test when the population standard deviation is unknown and is estimated by the sample standard deviation: (1) to test a hypothesis about a population mean; (2) to test whether the means of two independent samples differ; (3) to test whether the means of two dependent (paired) samples differ; (4) to construct a confidence interval for a population mean.
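Cases (1) through (3) map directly onto standard SciPy calls; the data below are made up for illustration:

```python
# One-sample, independent two-sample, and paired t-tests with SciPy.
from scipy import stats

a = [5.1, 4.9, 5.6, 5.3, 4.8, 5.4]
b = [4.4, 4.7, 4.1, 4.5, 4.9, 4.2]

# (1) Test a hypothesis about the population mean (H0: mu = 5.0)
print(stats.ttest_1samp(a, popmean=5.0))

# (2) Two independent samples
print(stats.ttest_ind(a, b))

# (3) Two dependent (paired) samples
print(stats.ttest_rel(a, b))
```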


How do you find the sample size if you are given the confidence interval and the margin of error as well as the standard deviation?

You can, with care about what the pieces mean. For a mean, take the critical value for the given confidence level, multiply by the standard deviation, divide by the margin of error, and square the whole thing: n = (z·σ/E)². For a proportion you instead need an estimate of p (p-hat), with q-hat = 1 − p-hat, and then n = p-hat·q-hat·(z/E)².
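As a sketch of the mean case (the 95% level and the example numbers are assumptions):

```python
# Sample size for estimating a mean: n = (z * sigma / E)^2, rounded up.
# For a proportion you would instead need an estimate of p, as noted
# above: n = p_hat * (1 - p_hat) * (z / E)^2.
import math

z = 1.96        # critical value for 95% confidence
sigma = 15      # population standard deviation
E = 3           # desired margin of error

n = math.ceil((z * sigma / E) ** 2)
print(n)        # 97
```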


What is the relationship between confidence interval and standard deviation?

Short answer: it's complex. I presume you're in a basic stats class, so you're dealing with something like a normal distribution (or something else very standard).

You can think of it this way: a confidence interval re-scales the likely margin of error into a range. This lets you say something like "I can state with 95% confidence that the mean (or variance, or whatever) lies between this value and that value," because you are taking into account the likely error in your estimate, as long as the distribution and the statistics are what you think they are. The central limit theorem is what lets you treat the sample as statistically representative, even for an arbitrarily large population.

The main idea of a confidence interval is to produce an interval that is likely to contain a population parameter. Sample data are the source of the interval: you take your best point estimate, which may be the sample mean or the sample proportion depending on what the problem asks for, then add and subtract the margin of error to get the actual interval. Computing the margin of error always involves a standard deviation.

Take the confidence interval for the mean as an example. By the central limit theorem, the best point estimate of the population mean is the sample mean, so you add and subtract the margin of error from that. The margin of error here is z_(α/2) × σ/√n, where α = 1 − confidence level. For a 95% confidence level, α = 1 − 0.95 = 0.05 and α/2 = 0.025, so we use the z-score that cuts off 0.025 in each tail of the standard normal distribution: z = 1.96.

If σ is the population standard deviation, then σ/√n is called the standard error of the mean: it is the standard deviation of the sampling distribution of the means of all possible samples of size n drawn from the population (the central limit theorem again). So the confidence interval is the sample mean ± 1.96 × σ/√n. If we don't know the population standard deviation, we use the sample standard deviation instead, but then we must use a t distribution rather than z, replacing the z-score with the appropriate t-score.

For a confidence interval for a proportion, we compute and use the standard deviation of the sampling distribution of the proportion; once again, the central limit theorem tells us how. That theorem is the key to really understanding what is going on here.
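Here is the mean case in numbers; the sample values are invented, and the final comment shows the swap to t when σ is unknown:

```python
# 95% confidence interval for a population mean when sigma is known:
#   x_bar +/- z_(alpha/2) * sigma / sqrt(n)
import math
from scipy import stats

x_bar = 50.0      # sample mean
sigma = 8.0       # population standard deviation (assumed known)
n = 64

alpha = 0.05
z = stats.norm.ppf(1 - alpha / 2)        # 1.96
sem = sigma / math.sqrt(n)               # standard error of the mean
print(x_bar - z * sem, x_bar + z * sem)  # about (48.04, 51.96)

# If sigma is unknown, use the sample s and a t critical value:
#   t = stats.t.ppf(1 - alpha / 2, df=n - 1)
```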

Related questions

When comparing the 95 percent confidence and prediction intervals for a given regression analysis, what is the relation between the confidence and prediction intervals?

At the same x value and the same 95 percent level, the prediction interval is always wider than the confidence interval. The confidence interval bounds the mean response at that x, while the prediction interval bounds a single future (independent) observation, so it must also account for the scatter of individual values around the mean.
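A quick way to see the difference is statsmodels' get_prediction(), which reports both intervals at once; the toy data and the choice of library are my own:

```python
# Compare a 95% confidence interval (for the mean response) with a
# 95% prediction interval (for a single new observation) at the same x.
import numpy as np
import statsmodels.api as sm

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])

X = sm.add_constant(x)
res = sm.OLS(y, X).fit()

new = np.array([[1.0, 3.5]])   # [intercept, x0]
frame = res.get_prediction(new).summary_frame(alpha=0.05)

# mean_ci_* is the confidence interval; obs_ci_* is the (wider)
# prediction interval.
print(frame[['mean_ci_lower', 'mean_ci_upper',
             'obs_ci_lower', 'obs_ci_upper']])
```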


Which statistics are used to construct a confidence interval?

A point estimate computed from the sample (such as the sample mean or proportion), the standard error of that estimate, and a critical value from the appropriate distribution for the chosen confidence level.


How do i construct a confidence interval?

Typically, the point estimate (for example, the sample mean M) is the center, and the interval extends a fixed number of standard errors of the mean in either direction: M ± 1 SEM, for example. Without more detail about your problem, that is the simplest case.


When population distribution is right skewed is the interval still valid?

You probably mean the confidence interval. When you construct a confidence interval, it has a percentage coverage that is based on assumptions about the population distribution. If the population distribution is skewed, there is reason to believe that (a) the statistics on which the interval is based (namely the mean and standard deviation) may be biased, and (b) the interval will not cover the population value as accurately or as symmetrically as expected.


What does it mean to have 95 percent confidence in an interval estimate?

It means that the interval was produced by a procedure that captures the true population parameter 95 percent of the time: if you drew many samples and built an interval from each, about 95 percent of those intervals would contain the true value. It does not mean that 95 percent of the data values fall within the interval.


What happens to the confidence interval as the standard deviation of a distribution increases?

The standard deviation appears in the numerator of the margin of error calculation. As the standard deviation increases, the margin of error increases, and the confidence interval gets wider.


Construct an 80 percent confidence interval for the true population mean given that the standard deviation of the population is 6 and the sample mean is 18?

The answer depends on whether the confidence interval is central or one-sided. If central, then with z = 1.28 for 80 percent coverage:

-1.28 < (m - 18)/6 < 1.28
-7.68 < m - 18 < 7.68
10.3 < m < 25.7
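You can verify both the critical value and the endpoints quickly; as in the answer above, this treats the given 6 as the standard error of the mean:

```python
# Central 80% interval: m +/- z * SE with z = Phi^{-1}(0.90) ~ 1.2816
from scipy import stats

z = stats.norm.ppf(0.90)        # about 1.2816
m, se = 18, 6
print(m - z * se, m + z * se)   # roughly 10.3 and 25.7
```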



What happens to the confidence interval as the standard deviation of a distribution decreases?

It gets narrower. A smaller standard deviation means a smaller margin of error, so the interval shrinks.

