There is not enough information to give an answer.
You need to know something about the distribution of the variable in the population. Is it reasonable to assume that it is Gaussian (Normal)? Secondly, what power do you require of the test? In other words, what level of significance do you require? That, in turn, depends on the "cost" of getting it wrong.
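As a sketch of how significance and power feed into a sample-size calculation (assuming a roughly Gaussian variable, a two-sided z-test, and a known standard deviation; the function name and default levels are illustrative, not from the original answer):

```python
from math import ceil
from statistics import NormalDist

def sample_size_for_power(sd, effect, alpha=0.05, power=0.80):
    """Approximate n for a two-sided z-test to detect a shift of
    `effect` in a Gaussian variable with standard deviation `sd`."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # significance level
    z_beta = NormalDist().inv_cdf(power)           # required power
    return ceil(((z_alpha + z_beta) * sd / effect) ** 2)

# e.g. detect a shift of 5 points when sd = 15, at alpha=0.05, power=0.80
print(sample_size_for_power(sd=15, effect=5))  # -> 71
```

Lowering alpha or raising the required power both increase n, which is how the "cost of getting it wrong" enters the calculation.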
I've included a couple of links. Statistical theory can never tell you how many samples you must take; all it can tell you is the expected error that your sample should have, given the variability of the data. Worked in reverse, you provide an expected error and the variability of the data, and statistical theory can tell you the corresponding sample size. The calculation methodology is given in the related links.
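Worked in reverse as described, a minimal sketch for a mean (assuming 95% confidence and a known standard deviation; the names are illustrative):

```python
from math import ceil
from statistics import NormalDist

def sample_size_for_margin(sd, margin, confidence=0.95):
    """n such that the sample mean has roughly the requested
    margin of error at the given confidence level."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return ceil((z * sd / margin) ** 2)

# e.g. variability sd = 10, desired margin of error = 2
print(sample_size_for_margin(sd=10, margin=2))  # -> 97
```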
yes
There is no "ideal" sample size for any given population, because polls and other forms of statistical analysis depend on many factors, including what the survey is intended to show, who the target audience is, how much statistical error is permitted, and so on. The "Survey System" link, below, offers definitions and a couple of calculators to determine the best sample size for most purposes.
He was the one who introduced Slovin's formula, which estimates the sample size given the population size and the margin of error.
A sample is the piece of the population actually drawn and measured; the sample size is the number of observations in that sample.
Standard error: a statistical measure of the dispersion of a set of values. The standard error provides an estimate of the extent to which the mean of a given set of scores drawn from a sample differs from the true mean of the whole population. It should be applied only to interval-level measures.
Standard deviation: a measure of the dispersion of a set of data from its mean. The more spread apart the data, the higher the deviation.
The two are related by: Standard error x sqrt(n) = Standard deviation, i.e. the standard error equals the standard deviation divided by sqrt(n). For any sample with n > 1, the standard deviation is therefore larger than the standard error. The standard deviation describes the spread of the data themselves, while the standard error describes the spread of a sample statistic (such as the mean) across repeated samples.
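The relation above can be checked numerically (a quick sketch; the sample data here are made up for illustration):

```python
from math import sqrt
from statistics import stdev

data = [4.0, 7.0, 5.5, 6.0, 8.5, 5.0, 7.5, 6.5]  # made-up sample
n = len(data)
sd = stdev(data)       # sample standard deviation
se = sd / sqrt(n)      # standard error of the mean

# Standard error x sqrt(n) recovers the standard deviation,
# and the standard error is smaller for any n > 1.
assert abs(se * sqrt(n) - sd) < 1e-12
assert se < sd
print(round(sd, 4), round(se, 4))
```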
Yes.
You cannot from the information provided.
yes
You can't without more information. For a proportion, you need an estimate of p (p-hat); then q-hat = 1 - p-hat, and the required sample size is n = p-hat * q-hat / variance, where the variance is the square of the desired standard error of the estimate. Yes you can for a mean: take the confidence-level standard score times the standard deviation, divide by the margin of error, then square the whole thing.
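A sketch of both calculations from that answer, written in terms of a margin of error (95% confidence is assumed; the function names are illustrative):

```python
from math import ceil
from statistics import NormalDist

Z95 = NormalDist().inv_cdf(0.975)  # standard score for 95% confidence

def n_for_proportion(p_hat, margin, z=Z95):
    """n = p-hat * q-hat * (z / margin)^2 for estimating a proportion."""
    q_hat = 1 - p_hat
    return ceil(p_hat * q_hat * (z / margin) ** 2)

def n_for_mean(sd, margin, z=Z95):
    """n = (z * sd / margin)^2 for estimating a mean."""
    return ceil((z * sd / margin) ** 2)

print(n_for_proportion(0.5, 0.03))  # p-hat = 0.5 is the conservative choice
print(n_for_mean(12, 2))
```

Using p-hat = 0.5 maximizes p-hat * q-hat, so it gives the largest (safest) n when no estimate of p is available.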
The standard score associated with a given level of significance.
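For reference, the usual standard scores can be computed directly with Python's standard library (a quick sketch):

```python
from statistics import NormalDist

# Two-sided standard scores for common confidence levels
for conf in (0.90, 0.95, 0.99):
    alpha = 1 - conf
    z = NormalDist().inv_cdf(1 - alpha / 2)
    print(f"{conf:.0%} confidence -> z = {z:.3f}")
# 90% -> 1.645, 95% -> 1.960, 99% -> 2.576
```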
From Wikipedia's article on standard error (statistics):

[Figure: for a value sampled with an unbiased, normally distributed error, the proportion of samples falling within 1, 2, and 3 standard deviations above and below the actual value.]

The standard error is a method of measurement or estimation of the standard deviation of the sampling distribution associated with the estimation method.[1] The term may also be used to refer to an estimate of that standard deviation, derived from a particular sample used to compute the estimate.

For example, the sample mean is the usual estimator of a population mean. However, different samples drawn from that same population would in general have different values of the sample mean. The standard error of the mean (i.e., of using the sample mean as a method of estimating the population mean) is the standard deviation of those sample means over all possible samples (of a given size) drawn from the population. Secondly, the standard error of the mean can refer to an estimate of that standard deviation, computed from the sample of data being analyzed at the time.

A way of remembering the term standard error is that, as long as the estimator is unbiased, the standard deviation of the error (the difference between the estimate and the true value) is the same as the standard deviation of the estimates themselves; this is true since the standard deviation of the difference between a random variable and its expected value is equal to the standard deviation of the random variable itself.

In practical applications, the true value of the standard deviation (of the error) is usually unknown. As a result, the term standard error is often used to refer to an estimate of this unknown quantity. In such cases it is important to be clear about what has been done and to attempt to take proper account of the fact that the standard error is only an estimate. Unfortunately, this is not often possible, and it may then be better to use an approach that avoids using a standard error, for example by using maximum likelihood or a more formal approach to deriving confidence intervals. One well-known case where a proper allowance can be made arises where Student's t-distribution is used to provide a confidence interval for an estimated mean or difference of means. In other cases, the standard error may usefully be used to provide an indication of the size of the uncertainty, but its formal or semi-formal use to provide confidence intervals or tests should be avoided unless the sample size is at least moderately large. Here "large enough" would depend on the particular quantities being analyzed (see statistical power).

In regression analysis, the term "standard error" is also used in the phrase standard error of the regression, meaning the ordinary least squares estimate of the standard deviation of the underlying errors.[2][3]
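A minimal sketch of the estimated standard error of the mean described above, computed from a single sample (the data are made up for illustration):

```python
from math import sqrt
from statistics import mean, stdev

sample = [12.1, 9.8, 11.4, 10.6, 12.9, 10.2, 11.7, 9.5, 10.9, 11.3]
n = len(sample)

# Estimate of the SD of the sampling distribution of the mean,
# computed from the one sample we actually have.
sem = stdev(sample) / sqrt(n)

print(round(mean(sample), 2), round(sem, 3))
```

As the excerpt warns, this is only an estimate of the true standard error, since the population standard deviation is itself estimated from the sample.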
A "good" estimator is one which provides an estimate with the following qualities:

Unbiasedness: An estimate is said to be an unbiased estimate of a given parameter when the expected value of the estimator can be shown to be equal to the parameter being estimated. For example, the mean of a sample is an unbiased estimate of the mean of the population from which the sample was drawn. Unbiasedness is a good quality for an estimate, since, in such a case, a weighted average of several estimates provides a better estimate than any one of them; unbiasedness therefore allows us to upgrade our estimates. For example, if your estimates of the population mean µ are 10 and 11.2 from two independent samples of sizes 20 and 30 respectively, then a better estimate of µ based on both samples is [20(10) + 30(11.2)] / (20 + 30) = 10.72.

Consistency: The standard deviation of an estimate is called the standard error of that estimate; the larger the standard error, the greater the error in your estimate. It is a commonly used index of the error entailed in estimating a population parameter from the information in a random sample of size n. An estimator is said to be "consistent" if increasing the sample size produces an estimate with a smaller standard error; that is, spending more money to obtain a larger sample produces a better estimate.

Efficiency: An efficient estimate is one which has the smallest standard error among all unbiased estimators.

The "best" estimator is the one which is closest to the population parameter being estimated.
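The weighted-average upgrade in the unbiasedness example can be reproduced directly (a sketch; the numbers come from the answer itself, and the function name is illustrative):

```python
def pooled_estimate(estimates, sizes):
    """Combine unbiased estimates of the same parameter,
    weighting each estimate by its sample size."""
    return sum(e * n for e, n in zip(estimates, sizes)) / sum(sizes)

# Two independent samples of sizes 20 and 30 with means 10 and 11.2:
# (20*10 + 30*11.2) / (20 + 30)
print(pooled_estimate([10, 11.2], [20, 30]))  # -> 10.72
```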