A very superficial argument goes like this:
You have a null hypothesis under which your variable has some distribution. On the basis of this distribution you expect certain values (frequencies) in certain intervals; the intervals may be numeric or categorical. But what you observe are different values. You could look at the differences between your observed and expected values, but then, in total, they would all cancel out. So you look at their squares. Also, an observed value of 15 where you expected 10 (difference = 5) is, relatively speaking, a much bigger discrepancy than an observed value of 1005 where you were expecting 1000 (difference still = 5). So you divide by the expected value.
Thus, for each interval you have (O - E)²/E. You add all these together and that is your chi-square test statistic. Call it C.
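As a rough illustration, here is a minimal Python sketch of that calculation; the observed and expected counts are made-up numbers, purely to show the arithmetic.

```python
# Sum of (O - E)^2 / E over the categories gives the chi-square statistic C.
observed = [15, 1005, 30]   # made-up observed counts
expected = [10, 1000, 40]   # made-up expected counts under the null hypothesis

C = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(C)  # 5.025 for these numbers
```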
If your data are consistent with the null hypothesis, then the observed values will be close to the expected values so that the absolute value of (O-E) and therefore its square will be small. So under the null hypothesis, the test statistic will be small.
If C is small, the likelihood is that the observations are consistent with the null hypothesis, and in that case you do not reject (you "accept") the null hypothesis. As C gets larger, the chance of observing that large a value (or larger) when the null hypothesis is true decreases. Finally, for really large values of C, the chance of getting that big a value (or bigger), still under the null hypothesis, is smaller than some pre-determined limit that you set - for example, less than 5% for 95% confidence, or 1% for 99% confidence. At that stage you decide that there is so little chance that the data are consistent with the null hypothesis that you must reject it and accept the alternative.
Rather than calculate the probability of observing a value of C or larger, you would usually look up tables of critical values of C at the 5%, 1%, etc. levels.
Finally, a word about degrees of freedom. If the data are classified one-way into n categories, the n expected values must add up to the same total as the n observed values. So, once n - 1 of them are fixed, the nth is determined, and you only have n - 1 degrees of freedom. Similar arguments apply to 2-way, 3-way, etc. classifications.
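Here is a hedged sketch of that decision step in Python (it assumes scipy is installed): it takes the made-up statistic from the earlier sketch, uses n - 1 degrees of freedom, and compares against the 5% level.

```python
# Minimal sketch: p-value and 5% critical value for a chi-square statistic C
# with n - 1 degrees of freedom.
from scipy.stats import chi2

C = 5.025   # the statistic from the made-up counts above
n = 3       # number of categories, so df = n - 1
df = n - 1

p_value = chi2.sf(C, df)            # P(chi-square >= C) under the null hypothesis
critical_5pct = chi2.ppf(0.95, df)  # the 5% critical value (about 5.99 for df = 2)

print(p_value, critical_5pct)
# Here C = 5.025 < 5.99, so at the 5% level you would not reject the null hypothesis.
```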
For more detail I suggest you get hold of a decent textbook.
The Chi-squared statistic can be used to test for association.
The Chi-square test is a statistical test that is usually used to test how well a data set fits some hypothesised distribution.
An F-ratio test compares 2 variances and tells you whether they are significantly different. A chi-square test compares count data.
The chi-square test is pronounced "keye-skwair".
A Chi-square table is used in a Chi-square test in statistics. A Chi-square test is used to compare observed data with the expected hypothetical data.
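As a sketch of that "observed versus expected" comparison, scipy's chisquare function does the same calculation as the hand computation described earlier; the die-roll counts below are invented for illustration.

```python
# Goodness-of-fit sketch (invented data): are 60 die rolls consistent with a fair die?
from scipy.stats import chisquare

observed = [8, 9, 12, 11, 6, 14]        # invented counts for faces 1..6
expected = [10, 10, 10, 10, 10, 10]     # a fair die predicts 10 of each in 60 rolls

result = chisquare(f_obs=observed, f_exp=expected)
print(result.statistic, result.pvalue)
# A small statistic / large p-value means the observed counts are close to the expected ones.
```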
Yes. It is a statistical test.
A chi-square statistic near zero suggests that the observed values are very close to the expected values, i.e. that the data are highly consistent with the hypothesis.
You take (observed - expected) for each cell, square it, and divide by the expected value; do this for every cell and then add them all up. You can also enter your data as a matrix on a TI calculator and go to STAT, TESTS, chi-square test.
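If you prefer software to a TI calculator, here is a comparable sketch in Python (assuming scipy); the 2x3 table of counts is made up purely to show the call.

```python
# Test of association on a contingency table (made-up counts): the software
# equivalent of entering the matrix on a TI calculator and running the chi-square test.
from scipy.stats import chi2_contingency

table = [[20, 30, 25],   # e.g. group A counts in three categories
         [30, 20, 25]]   # e.g. group B counts in the same categories

chi2_stat, p_value, dof, expected = chi2_contingency(table)
print(chi2_stat, p_value, dof)
print(expected)  # the expected counts the statistic was computed against
```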