Q: Is it possible that in a standard normal distribution the standard deviation is known and the mean unknown?

Best Answer

Yes, it is possible for a normal distribution to have a known standard deviation and an unknown mean; estimating that unknown mean from sample data is one of the most common problems in statistics. Note, though, that the standard normal distribution specifically is defined to have mean 0 and standard deviation 1, so for it both parameters are known.
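For illustration, here is a minimal Python sketch (with made-up sample values and an assumed known sigma = 2.0) of estimating an unknown normal mean and forming a z-based 95% confidence interval around it:

```python
import math

# Hypothetical sample from a normal population whose standard deviation
# is known (sigma = 2.0) but whose mean is unknown.
sample = [4.1, 5.3, 3.8, 6.0, 4.7, 5.1, 4.4, 5.6]
sigma = 2.0                       # known population standard deviation (assumed)
n = len(sample)
xbar = sum(sample) / n            # point estimate of the unknown mean

# Because sigma is known, the 95% confidence interval uses the z critical
# value 1.96 rather than a t value.
margin = 1.96 * sigma / math.sqrt(n)
print(f"estimated mean: {xbar:.2f}")
print(f"95% CI: ({xbar - margin:.2f}, {xbar + margin:.2f})")
```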

Related questions

When the population standard deviation is not known the sampling distribution is a?

If the samples are drawn from a normal population and the population standard deviation is unknown and estimated by the sample standard deviation, the sampling distribution of the sample mean follows a t-distribution.
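As a sketch of this, assuming SciPy is available and using made-up sample values, a t-based confidence interval for the mean when sigma is unknown might look like:

```python
import statistics
from scipy import stats  # assumed available

# Made-up sample; the population standard deviation is unknown.
sample = [12.1, 11.8, 12.5, 12.0, 11.6, 12.3, 12.2]
n = len(sample)
xbar = statistics.mean(sample)
s = statistics.stdev(sample)       # sample standard deviation (n - 1 in the denominator)

# With sigma unknown, the standardized sample mean follows a t-distribution
# with n - 1 degrees of freedom, so the critical value comes from t, not z.
t_crit = stats.t.ppf(0.975, df=n - 1)
margin = t_crit * s / n ** 0.5
print(f"95% CI for the mean: ({xbar - margin:.3f}, {xbar + margin:.3f})")
```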


When the population standard deviation is unknown the sampling distribution is equal to what?

The answer will depend on the underlying distribution for the variable. You may not simply assume that the distribution is normal.


When to use z or t-distribution?

If the sample size is large (>30) or the population standard deviation is known, we use the z-distribution. If the sample size is small and the population standard deviation is unknown, we use the t-distribution.
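A small illustrative helper (hypothetical, assuming SciPy is available) that applies this rule when choosing a two-sided critical value:

```python
from scipy import stats  # assumed available

def critical_value(n, confidence=0.95, sigma_known=False):
    """Two-sided critical value for a confidence interval: z when sigma is
    known or the sample is large, t (with n - 1 df) otherwise."""
    alpha = 1 - confidence
    if sigma_known or n > 30:
        return stats.norm.ppf(1 - alpha / 2)       # z critical value
    return stats.t.ppf(1 - alpha / 2, df=n - 1)    # t critical value

print(critical_value(10))                      # small n, sigma unknown -> t (about 2.262)
print(critical_value(100, sigma_known=True))   # sigma known -> z (about 1.96)
```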


The t distribution is used to construct confidence intervals for the population mean when the population standard deviation is unknown?

It can be.


What is the difference between t-distribution and standard normal distribution?

The t-distribution accounts for the extra variability introduced by estimating the standard deviation from the sample, so it has heavier tails than the standard normal distribution; the two converge as the degrees of freedom increase. It is now common practice to use the t-distribution whenever the population standard deviation is unknown, regardless of the sample size.
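A quick way to see the convergence (assuming SciPy is available) is to compare t critical values with the z critical value as the degrees of freedom grow:

```python
from scipy import stats  # assumed available

# The t critical value shrinks toward the z critical value (about 1.96 for a
# two-sided 95% interval) as the degrees of freedom increase.
for df in (5, 10, 30, 100, 1000):
    print(df, round(stats.t.ppf(0.975, df), 3))
print("z:", round(stats.norm.ppf(0.975), 3))
```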


When do you know when to use the t-distribution as opposed to the z-distribution?

The z-statistic is applied under two conditions: (1) when the population standard deviation is known, and (2) when the sample size is large. When the parameter sigma is unknown and we use its estimate s in its place, the standardized statistic no longer follows the standard normal distribution but follows a t-distribution instead; this modification depends on the degrees of freedom available for estimating sigma (the standard deviation).


In statistics what shows how far away a measurement is from the mean or average of the set?

The "z-score" is computed by subtracting the population mean from the measurement and dividing by the population standard deviation. It measures how many standard deviations the measurement lies above or below the mean. If the population mean and standard deviation are unknown, the sample mean and sample standard deviation can be used instead, and the resulting score is referred to the t-distribution.
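A minimal numeric sketch of the z-score calculation, using made-up values for the population mean and standard deviation:

```python
# Hypothetical example: a population with mean 70 and standard deviation 8.
mu, sigma = 70, 8
x = 82
z = (x - mu) / sigma
print(z)   # 1.5 -> the measurement lies 1.5 standard deviations above the mean
```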


What is a t-test?

A t-test is used in place of a z-test when the population standard deviation is unknown.
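For example, a one-sample t-test on made-up data, using SciPy's ttest_1samp (assuming SciPy is available):

```python
from scipy import stats  # assumed available

# Made-up sample; test whether the population mean could plausibly be 5.0
# when the population standard deviation is unknown.
sample = [5.4, 4.9, 5.6, 5.1, 4.8, 5.3, 5.5, 5.0]
t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```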


When the population standard deviation is not known?

The context of this question is not clear because it is not a full question. However, when estimating a parameter such as µ from sample data and the population standard deviation σ is unknown, we estimate σ with the sample statistic

s = sqrt( Σ(x − x̄)² / (n − 1) ).

The estimator of µ, namely the sample mean x̄, then has an estimated standard deviation (standard error) of s/√n, and the statistic

T = (x̄ − hypothesized µ) / (s/√n)

has a Student's t-distribution with n − 1 degrees of freedom. If n > 30, then by the Central Limit Theorem the t-distribution approaches the shape of the normal (Gaussian) distribution, and the z table may be used to find the critical values needed for hypothesis tests, p-values, and interval estimates.
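A small numeric sketch of these formulas on made-up data, testing the hypothesized value µ = 10:

```python
import math

# Made-up data; test the hypothesized mean mu = 10.
x = [9.8, 10.4, 10.1, 9.6, 10.7, 10.2]
n = len(x)
xbar = sum(x) / n
s = math.sqrt(sum((xi - xbar) ** 2 for xi in x) / (n - 1))  # sample standard deviation
se = s / math.sqrt(n)                                       # standard error of the mean
T = (xbar - 10) / se                                        # t statistic with n - 1 df
print(round(xbar, 3), round(s, 3), round(se, 3), round(T, 3))
```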


When do you use z-scores or t-scores?

A t-score is usually used when the sample size is below 30 and/or when the population standard deviation is unknown.


What is rate measure and calculation of errors?

From Wikipedia's article "Standard error (statistics)":

[Figure caption: for a value that is sampled with an unbiased, normally distributed error, the figure depicts the proportion of samples that would fall between 0, 1, 2, and 3 standard deviations above and below the actual value.]

The standard error is a method of measurement or estimation of the standard deviation of the sampling distribution associated with the estimation method.[1] The term may also be used to refer to an estimate of that standard deviation, derived from a particular sample used to compute the estimate.

For example, the sample mean is the usual estimator of a population mean. However, different samples drawn from that same population would in general have different values of the sample mean. The standard error of the mean (i.e., of using the sample mean as a method of estimating the population mean) is the standard deviation of those sample means over all possible samples (of a given size) drawn from the population. Secondly, the standard error of the mean can refer to an estimate of that standard deviation, computed from the sample of data being analyzed at the time.

A way of remembering the term standard error is that, as long as the estimator is unbiased, the standard deviation of the error (the difference between the estimate and the true value) is the same as the standard deviation of the estimates themselves; this is true because the standard deviation of the difference between a random variable and its expected value is equal to the standard deviation of the random variable itself.

In practical applications, the true value of the standard deviation (of the error) is usually unknown. As a result, the term standard error is often used to refer to an estimate of this unknown quantity. In such cases it is important to be clear about what has been done and to attempt to take proper account of the fact that the standard error is only an estimate. Unfortunately, this is not often possible and it may then be better to use an approach that avoids using a standard error, for example by using maximum likelihood or a more formal approach to deriving confidence intervals. One well-known case where a proper allowance can be made arises where Student's t-distribution is used to provide a confidence interval for an estimated mean or difference of means. In other cases, the standard error may usefully be used to provide an indication of the size of the uncertainty, but its formal or semi-formal use to provide confidence intervals or tests should be avoided unless the sample size is at least moderately large. Here "large enough" would depend on the particular quantities being analyzed (see statistical power).

In regression analysis, the term "standard error" is also used in the phrase standard error of the regression to mean the ordinary least squares estimate of the standard deviation of the underlying errors.[2][3]
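As a small illustration (made-up data), the usual estimate of the standard error of the mean is s/√n, where s is the sample standard deviation:

```python
import statistics

# Made-up sample: the estimated standard error of the mean is s / sqrt(n),
# where s is the sample standard deviation.
sample = [2.3, 2.9, 3.1, 2.7, 2.5, 3.0, 2.8, 2.6]
s = statistics.stdev(sample)
se_mean = s / len(sample) ** 0.5
print(round(se_mean, 4))
```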


When the population standard deviation is unknown and the sample size is less than 30 what table value should be used in computing a confidence interval for a mean?

The t table value (the critical value from the t-distribution with n − 1 degrees of freedom).