The Fisher F-test for Analysis of Variance (ANOVA).
A paired samples t-test is an example of a parametric (not nonparametric) test.
t-test
Parametric tests assume that data follow a specific distribution, typically a normal distribution, and that certain conditions, such as homogeneity of variances, are met. A situational problem arises when these assumptions are violated, such as when dealing with small sample sizes or skewed data, leading to inaccurate results. For example, using a t-test on data that are not normally distributed can result in misleading conclusions about group differences. In such cases, non-parametric tests may be more appropriate, as they do not rely on these strict assumptions.
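As a minimal sketch of the point above (assuming Python with NumPy and SciPy, which the original answer does not specify), one can check the normality assumption before choosing between a parametric and a non-parametric test; the simulated data and the 0.05 threshold are purely illustrative:

```python
import numpy as np
from scipy import stats

# Illustrative skewed data (exponential, small sample)
rng = np.random.default_rng(1)
sample = rng.exponential(scale=2.0, size=30)

# Shapiro-Wilk tests the null hypothesis that the data are normal
w_stat, p_value = stats.shapiro(sample)
if p_value < 0.05:
    print("Normality rejected; prefer a non-parametric test")
else:
    print("No evidence against normality; a t-test may be appropriate")
```

This is only a sketch of the decision logic, not a complete assumption-checking workflow (homogeneity of variances, for example, would need its own check).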
Non-parametric. However, I believe the above is a reductionistic assumption based upon ill-informed logic. Chi-square is a statistic related to the central limit theorem in the sense that proportions are in fact means, and sample proportions are normally distributed (with a mean of pi [the population proportion, not 3.14159...] and a variance of pi*(1-pi)/n). Therefore, we can perform a normal-curve test for the difference between proportions such that Z squared equals chi-square on one degree of freedom. Since Z is indubitably a parametric test, and chi-square can be related to Z, we can infer that chi-square is, in fact, parametric.

From another approach, a parametric test is a test that makes an assumption about the value of a parameter (a measure of the population rather than of your sample) in a statistical density function. Since our expected frequencies are based either upon theory or upon a mathematical assumption derived from the average of our observed frequencies, i.e. the mean, we are making an assumption about what the parameter of our distribution would be. Given this assumption, and the relationship of chi-square to the normal curve, one can argue that chi-square is a parametric test.
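The claimed identity Z squared = chi-square on one degree of freedom can be checked numerically. A minimal sketch, assuming Python with NumPy and SciPy (not part of the original answer) and an illustrative 2x2 table:

```python
import numpy as np
from scipy import stats

# Illustrative 2x2 table: successes/failures in two groups
table = np.array([[30, 20],
                  [18, 32]])

# Pearson chi-square without Yates' continuity correction
chi2, p, dof, expected = stats.chi2_contingency(table, correction=False)

# Pooled two-proportion z statistic for the same table
n1, n2 = table[0].sum(), table[1].sum()
p1, p2 = table[0, 0] / n1, table[1, 0] / n2
p_pool = (table[0, 0] + table[1, 0]) / (n1 + n2)
z = (p1 - p2) / np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))

print(chi2, z**2)  # the two values agree
```

Note the `correction=False`: Yates' continuity correction would break the exact algebraic identity between the two statistics.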
The simplest answer is that parametric statistics are based on numerical data from which descriptive statistics can be calculated, while non-parametric statistics are based on categorical data. Take two example questions: 1) Do men live longer than women? and 2) Are men or women more likely to be statisticians? In the first example, you can calculate the average life span of both men and women and then compare the two averages. This is a parametric test. But in the second, you cannot calculate an average between "man" and "woman" or between "statistician" and "non-statistician." As there is no numerical data to work with, this would be a non-parametric test.

The difference is vitally important. Because inferential statistics require numerical data, it is possible to estimate how accurately a parametric test on a sample reflects the relevant population. It is not possible to make this estimation with non-parametric statistics. So while non-parametric tests are still used in many studies, they are often regarded as less conclusive than parametric tests. However, the ability to generalize sample results to a population is based on more than just inferential statistics. With careful adherence to accepted random sampling, sample size, and data collection conventions, non-parametric results can still be generalizable; it is just that the accuracy of that generalization cannot be statistically verified.
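The two example questions can be sketched in code (assuming Python with SciPy, with entirely made-up numbers): a t-test for the numerical life-span comparison, and a chi-square test for the categorical statistician-by-gender comparison:

```python
from scipy import stats

# 1) Parametric: compare average life spans (numerical data, illustrative)
men_lifespans = [74, 69, 81, 77, 72, 79, 68, 75]
women_lifespans = [80, 83, 77, 86, 79, 82, 84, 78]
t_stat, t_p = stats.ttest_ind(men_lifespans, women_lifespans)

# 2) Non-parametric: compare counts by category (categorical data, illustrative)
#          statistician  non-statistician
table = [[12, 88],   # men
         [18, 82]]   # women
chi2, chi_p, dof, expected = stats.chi2_contingency(table)

print(t_p, chi_p)
```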
Parametric statistical tests assume that your data are normally distributed (follow a classic bell-shaped curve). An example of a parametric statistical test is the Student's t-test. Non-parametric tests make no such assumption. An example of a non-parametric statistical test is the Sign Test.
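SciPy (assuming Python, which the answer does not specify) has no dedicated sign-test function, but the Sign Test can be built from an exact binomial test on the signs of paired differences. A sketch with illustrative paired scores:

```python
from scipy import stats

# Illustrative paired before/after scores
before = [72, 85, 78, 90, 66, 81, 74, 88]
after = [75, 83, 84, 95, 70, 86, 73, 92]

# The sign test ignores magnitudes and looks only at the signs
diffs = [a - b for a, b in zip(after, before)]
n_pos = sum(d > 0 for d in diffs)
n_nonzero = sum(d != 0 for d in diffs)  # zero differences are dropped

# Under H0 (no shift), positive signs ~ Binomial(n, 0.5)
result = stats.binomtest(n_pos, n_nonzero, p=0.5)
print(result.pvalue)
```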
Parametric.
If the data meet the test's parametric assumptions, then yes.
An example of a non-parametric test is the Mann-Whitney U test, which is used to compare two independent groups when the data do not necessarily follow a normal distribution. Unlike parametric tests that assume a specific distribution for the data, non-parametric tests are more flexible and can be applied to ordinal data or non-normally distributed interval data. The Mann-Whitney U test evaluates whether the ranks of the two groups differ significantly.
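A minimal sketch of the Mann-Whitney U test described above, assuming Python with SciPy and illustrative ordinal-style scores:

```python
from scipy import stats

# Two independent samples of ordinal-like scores (illustrative data)
group1 = [3, 5, 4, 6, 7, 5, 4]
group2 = [8, 7, 9, 6, 8, 9, 7]

# Rank-based comparison; no normality assumption on the data
u_stat, p_value = stats.mannwhitneyu(group1, group2, alternative="two-sided")
print(u_stat, p_value)
```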
yes
Parametric, for one data set? Yes.
* Always, when the assumptions for the specific test (as there are many parametric tests) are fulfilled.
* When you want to say something about a statistical parameter.
The binomial test is a non-parametric test. Since the binomial test of significance does not involve any parameter, it is non-parametric in nature; the assumption about the distribution that is made in a parametric test is therefore not made in the binomial test of significance. The binomial test of significance assumes only that the sample has been drawn from the population by random sampling, i.e. that the sample on which the test is conducted is a random sample.
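A sketch of the binomial test of significance, assuming Python with SciPy (not specified in the answer); the coin-flip counts are illustrative:

```python
from scipy import stats

# Exact binomial test: is a coin that showed 14 heads in 20 flips fair?
result = stats.binomtest(14, n=20, p=0.5)
print(result.pvalue)
```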
A classic would be the Kolmogorov-Smirnov test.
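A sketch of a one-sample Kolmogorov-Smirnov test against the standard normal distribution, assuming Python with NumPy and SciPy; the simulated sample is illustrative:

```python
import numpy as np
from scipy import stats

# Illustrative sample drawn from the standard normal distribution
rng = np.random.default_rng(42)
sample = rng.normal(loc=0.0, scale=1.0, size=200)

# KS test: largest distance between the empirical CDF and the normal CDF
d_stat, p_value = stats.kstest(sample, "norm")
print(d_stat, p_value)
```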
It is not.
t-test