The data from a normal distribution are symmetric about its mean, not about zero. There is, therefore, nothing strange about all the values being negative.
It is not negative; it is positively skewed, and it approaches a normal distribution as the degrees of freedom increase. Its shape depends on the degrees of freedom, not directly on the sample size.
The t-distribution and the normal distribution are not exactly the same. The t-distribution is approximately normal, but when the sample size is small the approximation is poor. As n (the sample size) increases, the degrees of freedom also increase (remember, df = n - 1) and the t-distribution becomes closer and closer to a normal distribution. Check out this picture for a visual explanation: http://www.uwsp.edu/PSYCH/stat/10/Image87.gif
The normal distribution is very important in statistical analysis. A considerable amount of data follows a normal distribution: the weights and lengths of mass-produced items usually follow a normal distribution; and if average demand for a product is high, then demand usually follows a normal distribution. It is possible to show that when the sample is large, the sample mean follows a normal distribution. This result is important in the construction of confidence intervals and in significance testing. In quality control procedures for a mean chart, the construction of the warning and action lines is based on the normal distribution.
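You can see the "sample mean follows a normal distribution" claim in a quick simulation. Here is a minimal Python sketch (assuming NumPy and SciPy are installed; the exponential population and the sample sizes are arbitrary choices for illustration): sample means from a clearly non-normal population lose their skewness, a hallmark of normality, as the sample size grows.

```python
# Minimal sketch: sample means from a skewed (exponential) population
# look more and more normal as the sample size grows.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(42)

for n in (2, 10, 50, 200):
    # 10,000 samples of size n; one sample mean per row
    means = rng.exponential(scale=1.0, size=(10_000, n)).mean(axis=1)
    # Skewness of a normal distribution is 0
    print(f"n = {n:4d}: skewness of sample means = {skew(means):.3f}")
```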
The ideal sample size depends on a number of factors:
- how far from Normal the underlying distribution is;
- how close you need to get to a Normal distribution - in terms of the decision(s) that might be based on it and the cost of making an error;
- the rarity of the characteristic that you wish to study (you might need a large sample just to ensure that you get representatives that have whatever characteristic you are studying).
The sample mean is distributed with the same mean as the population mean. If the population variance is s^2 then the sample mean has variance s^2/n. As n increases, the distribution of the sample mean gets closer to a Gaussian - i.e. Normal - distribution. This is the basis of the Central Limit Theorem, which is important for hypothesis testing.
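Both claims are easy to check by simulation. A minimal Python sketch (assuming NumPy is installed; the mean, standard deviation, and sample size are arbitrary example values):

```python
# Minimal sketch: the sample mean has the population mean,
# and variance sigma^2 / n (here 2^2 / 25 = 0.16).
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 10.0, 2.0, 25

# 100,000 samples of size n; one sample mean per row
means = rng.normal(mu, sigma, size=(100_000, n)).mean(axis=1)

print("mean of sample means:    ", means.mean())   # close to mu = 10
print("variance of sample means:", means.var())    # close to sigma^2 / n = 0.16
```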
The distribution of sample means need not be close to normal if the sample size is small; a common rule of thumb is a sample size of at least 30, unless the parent population is itself normal.
The distribution of the sample mean is bell-shaped; for large samples it is approximately a normal distribution.
Yes. You could have a biased sample. Its distribution would not necessarily match the distribution of the parent population.
Not necessarily. It needs to be a random sample of independent, identically distributed variables. Although that requirement can be relaxed, the further the sampling departs from it, the more the distribution of the sample means may diverge from the Normal distribution.
It approaches a normal distribution.
The F distribution is used to test whether two population variances are the same. The sampled populations must follow the normal distribution. As the sample sizes (and hence the degrees of freedom) increase, the F distribution becomes less skewed and more closely resembles the normal distribution.
The central limit theorem basically states that, for any distribution, the distribution of the sample means approaches a normal distribution as the sample size gets larger and larger. This allows us to use the normal distribution as an approximation to the binomial, as long as both the number of trials times the probability of success and the number of trials times the probability of failure are greater than or equal to 5. If you use the normal distribution as an approximation, you apply the continuity correction factor.
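As a rough illustration, here is a minimal Python sketch (assuming SciPy is installed; n, p, and k are arbitrary example values) comparing the exact binomial probability with the normal approximation after the continuity correction:

```python
# Minimal sketch: normal approximation to the binomial with the
# continuity correction. Here n*p = 20 and n*(1-p) = 80, so the
# rule of thumb (both at least 5) is satisfied.
import math
from scipy.stats import binom, norm

n, p, k = 100, 0.2, 25
mu = n * p
sd = math.sqrt(n * p * (1 - p))

exact  = binom.cdf(k, n, p)
approx = norm.cdf(k + 0.5, loc=mu, scale=sd)   # continuity correction: k + 0.5

print(f"exact  P(X <= {k}) = {exact:.4f}")
print(f"approx P(X <= {k}) = {approx:.4f}")
```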
It need not be if the sample size is small, or if the elements within each sample, or the samples themselves, are not selected independently.
Frequently it's impossible or impractical to test the entire universe of data to determine probabilities, so we test a small subset of the universal data set and call that the sample. From that subset we can study the distribution of a statistic such as the sample mean; this is called the sampling distribution. For large samples the sampling distribution of the mean has a bell shape, the familiar "normal distribution." When the sample is small and the population standard deviation has to be estimated from the sample itself, the appropriate bell-shaped distribution is the Student's t distribution, which has heavier tails than the normal. The Student's t distribution is useful because, with it and the small number of data we test, we can make inferences about the entire universal data set with some degree of confidence.
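For example, here is a minimal Python sketch (assuming NumPy and SciPy are installed; the data values are made up purely for illustration) of the kind of inference described above: a 95% confidence interval for a population mean from a small sample, using the Student's t distribution.

```python
# Minimal sketch: 95% t-based confidence interval for a mean
# from a small sample (made-up data).
import numpy as np
from scipy.stats import t

data = np.array([5.1, 4.8, 5.6, 5.0, 4.7, 5.3, 4.9, 5.2])
n = len(data)
xbar = data.mean()
s = data.std(ddof=1)               # sample standard deviation
tcrit = t.ppf(0.975, df=n - 1)     # two-sided 95%, df = n - 1

half = tcrit * s / np.sqrt(n)
print(f"95% CI for the mean: ({xbar - half:.3f}, {xbar + half:.3f})")
```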
Because, as the sample size increases, the Student's t-distribution approaches the standard normal distribution.
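You can see this numerically in a minimal Python sketch (assuming SciPy is installed): the two-sided 95% critical value of t falls toward the standard normal's 1.96 as the degrees of freedom grow.

```python
# Minimal sketch: t critical values converge to the normal's 1.96.
from scipy.stats import norm, t

print("normal:", round(norm.ppf(0.975), 3))           # 1.96
for df in (2, 5, 10, 30, 100, 1000):
    print(f"t, df = {df:4d}: {t.ppf(0.975, df):.3f}")
```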