Best Answer

Parametric tests are the usual tests you learn about first.

Non-parametric tests are used when something is "wrong" with your data: usually the data are very non-normally distributed, or N is very small. There are several ways of approaching non-parametric statistics; they often involve either rank-ordering the data, or "Monte Carlo" random sampling or exhaustive sampling from the data set.

The whole idea behind non-parametrics is that since you cannot assume the usual distribution holds (e.g., the χ² distribution for the χ² test, the normal distribution for the t-test, etc.), you use the calculated statistic but judge its significance with a new test based only on the data set itself.
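As a concrete illustration of the "Monte Carlo" resampling idea mentioned above, here is a minimal sketch of a permutation test for the difference in group means, using only the Python standard library (the function and variable names are my own, not from any particular package):

```python
import random
from statistics import mean

def permutation_test(a, b, n_iter=10_000, seed=0):
    """Two-sided Monte-Carlo permutation test for mean(a) - mean(b).

    Instead of assuming a normal sampling distribution, we build the
    null distribution by repeatedly shuffling the pooled data and
    re-splitting it into two groups of the original sizes.
    """
    rng = random.Random(seed)
    observed = abs(mean(a) - mean(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:len(a)]) - mean(pooled[len(a):]))
        if diff >= observed:
            hits += 1
    return hits / n_iter  # estimated p-value

# Two clearly separated groups should give a small p-value.
p = permutation_test([1, 2, 3, 4, 5], [11, 12, 13, 14, 15])
```

The p-value here is estimated purely from rearrangements of the observed data, which is exactly the "new test based only on the data set itself" described above.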

Q: What is the difference between parametric and nonparametric statistical tests?

Related questions

Parametric tests draw conclusions based on the assumption that the data are drawn from populations with certain distributions. Non-parametric tests make fewer assumptions about the data set. The majority of elementary statistical methods are parametric because parametric tests generally have greater statistical power. However, if the necessary assumptions cannot be made about a data set, non-parametric tests are used instead.

Nonparametric tests are sometimes called distribution-free statistics because they do not require that the data fit a normal distribution. Nonparametric tests require less restrictive assumptions about the data than parametric tests do. We can analyze categorical and rank data using nonparametric tests.
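To illustrate the rank-data point, here is a minimal sketch of the Mann-Whitney U statistic computed from ranks, using only the Python standard library (it assumes no tied values, and the names are my own):

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for two independent samples.

    Replaces the raw values with their ranks in the combined sample,
    so only the ordering of the data matters, not its distribution.
    Assumes no tied values (ties would require average ranks).
    """
    combined = sorted(list(a) + list(b))
    ranks = {v: i + 1 for i, v in enumerate(combined)}  # rank 1 = smallest
    r1 = sum(ranks[v] for v in a)            # rank sum of the first sample
    u1 = r1 - len(a) * (len(a) + 1) / 2      # U statistic for sample a
    return u1

# If every value in a is below every value in b, U is 0 (maximal separation);
# reversing the groups gives the other extreme, len(a) * len(b).
u_low = mann_whitney_u([1, 2, 3], [4, 5, 6])
u_high = mann_whitney_u([4, 5, 6], [1, 2, 3])
```

Because only ranks enter the calculation, the same U would result from any monotone transformation of the data, which is what makes the test distribution-free.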

1. A nonparametric statistic has no inference. 2. A nonparametric statistic has no standard error. 3. A nonparametric statistic is an element in a base population (universe of possibilities) where every possible event in the population is known and can be characterized. * * * That is utter rubbish and a totally irresponsible answer. In parametric statistics, the variable of interest is distributed according to some distribution that is determined by a small number of parameters. In non-parametric statistics there is no underlying parametric distribution. With non-parametric data you can compare between two (or more) possible distributions (goodness-of-fit) and test for correlation between variables. Some tests, such as Student's t and the chi-square, are applicable to parametric as well as non-parametric statistics. I have, therefore, no idea where the previous answerer got his/her information!

Explain the difference between capability and control.

You can use the z-test for two proportions.
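Since the original answer pointed to a link that is no longer present, here is a minimal sketch of the pooled two-proportion z-test using only the Python standard library (the function name is my own):

```python
from math import sqrt

def two_proportion_z(s1, n1, s2, n2):
    """Pooled z statistic for H0: p1 == p2.

    s1/n1 and s2/n2 are the observed successes/trials in each group;
    the standard error is computed under the pooled null proportion.
    """
    p1, p2 = s1 / n1, s2 / n2
    p_pool = (s1 + s2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 30/50 vs 20/50 successes: pooled p = 0.5, so z = 0.2 / 0.1 = 2.0.
z = two_proportion_z(30, 50, 20, 50)
```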

Parametric statistical tests assume that the data belong to some type of probability distribution. The normal distribution is probably the most common: when graphed, the data follow a "bell-shaped curve". Non-parametric statistical tests, on the other hand, are often called distribution-free tests since they don't make any assumptions about the distribution of the data. They are often used in place of parametric tests when one feels the assumptions of the parametric test have been violated, such as with skewed data.

For each parametric statistical test, there is one or more nonparametric tests. A one-sample t-test allows us to test whether a sample mean (from a normally distributed interval variable) differs significantly from a hypothesized value. The nonparametric analog is the one-sample sign test. In the one-sample sign test, we compare the sample values to a hypothesized median (not a mean); in other words, we test a population median against a hypothesized value k. We set up the hypothesis so that, under the null, + and - signs are equally likely. A data value is given a plus sign if it is greater than the hypothesized median, a minus sign if it is less, and a zero if it is equal.

The sign test for a population median can be left-tailed, right-tailed, or two-tailed. The null and alternative hypotheses will be one of the following:

Left-tailed test: H0: median ≥ k and H1: median < k
Right-tailed test: H0: median ≤ k and H1: median > k
Two-tailed test: H0: median = k and H1: median ≠ k

To use the sign test, first compare each entry in the sample to the hypothesized median k:

- If the entry is below the median, assign it a - sign.
- If the entry is above the median, assign it a + sign.
- If the entry is equal to the median, assign it a 0.

Then compare the number of + and - signs; the 0s are ignored. If there is a large difference in the number of + and - signs, it is likely that the median differs from the hypothesized value, and the null hypothesis should be rejected. When using the sign test, the sample size n is the total number of + and - signs. If n > 25, we use the standard normal distribution to find the critical values, and we find the test statistic by plugging n and x into a formula. When n ≤ 25, the test statistic x is the smaller of the number of +'s or -'s; so if we had 10 +'s and 5 -'s, the test statistic x would be 5.

The sign test was given here as an example of one of the simplest nonparametric tests, so one can see how these tests work. The Wilcoxon rank-sum test, the Mann-Whitney U test, and the Kruskal-Wallis test are a few more common nonparametric tests. Most statistics books will give you a list of the pros and cons of parametric vs. nonparametric tests.
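The sign-test recipe above can be sketched in a few lines of Python, using the exact binomial distribution for the small-sample case (the function name is my own; this is an illustration, not a library implementation):

```python
from math import comb

def sign_test(data, k):
    """Two-tailed one-sample sign test of H0: median == k.

    Counts + and - signs relative to k (zeros are ignored) and
    computes an exact two-tailed binomial p-value with p = 0.5,
    which is how the small-sample (n <= 25) case is evaluated.
    """
    plus = sum(1 for v in data if v > k)
    minus = sum(1 for v in data if v < k)
    n = plus + minus                       # zeros are dropped
    x = min(plus, minus)                   # the test statistic
    # P(at most x signs in the rarer direction), doubled for two tails.
    p = 2 * sum(comb(n, i) * 0.5 ** n for i in range(x + 1))
    return x, min(p, 1.0)

# Every value lies above the hypothesized median 10: x = 0, p = 2 * 0.5**10.
x, p = sign_test([12, 13, 14, 15, 16, 17, 18, 19, 20, 21], 10)
```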

Parametric, since we may assume that the salaries of male and female employees follow normal distributions.

Basic Statistical Return is the full form of BSR code.

A statistical question is a question that has a variety of answers, while a non-statistical question has only one answer. For example, "How old am I?" is a non-statistical question because there is only one answer. But "How old are the 6th and 7th grade students in the school?" is a statistical question because there will be various answers.

Non-parametric, says the answer above, but I believe that is a reductionist assumption based upon ill-informed logic. Chi-square is a statistic related to the central limit theorem in the sense that proportions are in fact means, and proportions are normally distributed (with a mean of pi [not 3.141592653...] and a variance of pi*(1-pi)/n). Therefore, we can perform a normal-curve test for examining the difference between proportions such that Z squared = chi-square on one degree of freedom. Since Z is indubitably a parametric test, and chi-square can be related to Z, we can infer that it is, in fact, parametric. From another approach, a parametric test is a test that makes an assumption about the value of a parameter (the measure of the population rather than your sample) in a statistical density function. Since our expected frequencies are based either on theory or on a mathematical assumption derived from the average of our observed frequencies, i.e. the mean, we are making an assumption about what the parameter of our distribution would be. Given this assumption, and the relationship of chi-square to the normal curve, one can argue that chi-square is a parametric test.
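The claimed identity Z² = χ² (on one degree of freedom) is easy to check numerically. Here is a minimal sketch, using only the Python standard library, that computes the pooled two-proportion z statistic and the Pearson chi-square statistic for the same 2×2 table (the numbers and function names are my own illustration):

```python
from math import sqrt

def z_two_proportions(s1, n1, s2, n2):
    """Pooled two-proportion z statistic for H0: p1 == p2."""
    p_pool = (s1 + s2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (s1 / n1 - s2 / n2) / se

def chi_square_2x2(s1, n1, s2, n2):
    """Pearson chi-square (no continuity correction) for the table
    [[s1, n1 - s1], [s2, n2 - s2]]."""
    observed = [[s1, n1 - s1], [s2, n2 - s2]]
    row_totals = [n1, n2]
    col_totals = [s1 + s2, (n1 - s1) + (n2 - s2)]
    total = n1 + n2
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / total
            chi2 += (observed[i][j] - expected) ** 2 / expected
    return chi2

# 30/50 vs 20/50 successes: z = 2.0 and chi2 = 4.0 = z**2.
z = z_two_proportions(30, 50, 20, 50)
chi2 = chi_square_2x2(30, 50, 20, 50)
```

The squared z statistic and the chi-square statistic agree to floating-point precision, which is the relationship the argument above relies on.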

The t-test is the statistical test used to find the difference of means between two groups.
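As a minimal sketch (assuming equal variances; the function name is mine), the two-sample pooled t statistic can be computed with the Python standard library:

```python
from math import sqrt
from statistics import mean, variance

def pooled_t(a, b):
    """Two-sample t statistic assuming equal variances.

    Pools the two sample variances, then scales the difference in
    means by its estimated standard error.
    """
    n1, n2 = len(a), len(b)
    sp2 = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / n1 + 1 / n2))

# Identical groups: the difference in means is 0, so t is exactly 0.
t_zero = pooled_t([1, 2, 3, 4], [1, 2, 3, 4])
# Shifting one group up by 1 gives a positive t.
t_shift = pooled_t([2, 3, 4, 5], [1, 2, 3, 4])
```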

Statistical sampling is an approach to sampling with two characteristics: the sample is randomly selected, and probability theory is used to evaluate the sample results. Non-statistical sampling is therefore any sampling approach that does not have both of the characteristics of statistical sampling. I hope this will help.
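A minimal sketch of the "randomly selected" characteristic, using Python's standard library (the population of invoice numbers here is just an illustration):

```python
import random

# A hypothetical population of 1000 invoice numbers to audit.
population = list(range(1, 1001))

rng = random.Random(42)               # fixed seed for reproducibility
sample = rng.sample(population, 25)   # simple random sample, no replacement

# Every member of the population had an equal chance of selection,
# which is what lets probability theory evaluate the sample results.
```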