A high F statistic would result in a lower Sig. (p) value, which would indicate that your results are significant.
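As a minimal sketch of that relationship (the degrees of freedom below are made-up example values, not from the question), the upper-tail p-value shrinks as the F statistic grows:

```python
# Sketch: for fixed degrees of freedom, a larger F statistic gives a smaller p-value.
from scipy.stats import f

dfn, dfd = 3, 40  # hypothetical numerator/denominator degrees of freedom
for F in (0.5, 2.0, 5.0, 10.0):
    p = f.sf(F, dfn, dfd)  # upper-tail probability: the p-value of the F test
    print(f"F = {F:5.1f}  ->  p = {p:.4f}")
```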
The F statistic is a statistic that may be used to test whether a regression accounts for a statistically significant proportion of the observed variation in the dependent variable.
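For illustration only, here is a sketch of how such an F statistic can be computed for a simple one-predictor regression; the simulated data and variable names are assumptions, not part of the original answer.

```python
# Sketch: the regression F statistic compares the variation the regression
# explains to the variation it leaves unexplained (made-up data).
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(0)
x = rng.normal(size=30)
y = 2.0 * x + rng.normal(size=30)           # dependent variable with a real linear signal

# Fit y = b0 + b1*x by least squares
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

ss_total = np.sum((y - y.mean()) ** 2)      # total variation in y
ss_resid = np.sum((y - y_hat) ** 2)         # variation left unexplained
ss_model = ss_total - ss_resid              # variation explained by the regression

k = 1                                       # number of predictors
n = len(y)
F = (ss_model / k) / (ss_resid / (n - k - 1))
p_value = f.sf(F, k, n - k - 1)             # small p-value => regression explains a significant share
print(F, p_value)
```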
The populations have an excess of heterozygotes.
You make assumptions about the nature of the distribution for a set of observations and set up a pair of competing hypotheses: a null hypothesis and an alternative. Based on the null hypothesis you devise a test statistic that is calculated from the observations. Assuming the null hypothesis is true, if the probability of observing a test statistic at least as extreme as the one obtained is smaller than some pre-determined level (that is, if the observations are very unlikely under the null hypothesis), then the result is said to be statistically significant. This does not automatically imply managerial significance since, among other factors, the latter must take account of the consequences (costs) of making the wrong decision.
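A minimal sketch of that procedure, assuming a one-sample t test on made-up data, a hypothesised mean of 100, and a pre-determined significance level of 0.05:

```python
# Sketch of the hypothesis-testing procedure described above.
# H0: the population mean is 100; H1: it is not (all numbers are illustrative).
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(1)
observations = rng.normal(loc=103, scale=10, size=25)   # hypothetical sample

alpha = 0.05                                            # pre-determined significance level
t_stat, p_value = ttest_1samp(observations, popmean=100)

if p_value < alpha:
    print(f"p = {p_value:.3f} < {alpha}: reject H0 (statistically significant)")
else:
    print(f"p = {p_value:.3f} >= {alpha}: fail to reject H0")
```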
You can calculate a result that is somehow related to the mean, based on the data available. Provided that you can work out its distribution under the null hypothesis against appropriate alternatives, you have a test statistic.
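As a hedged illustration, one such mean-based statistic is the familiar one-sample t statistic, computed here by hand from an assumed data set and an assumed null-hypothesis mean:

```python
# Sketch: t = (sample mean - hypothesised mean) / (sample std / sqrt(n)),
# whose distribution under the null hypothesis is Student's t with n-1 degrees
# of freedom (the data and hypothesised mean are made up for illustration).
import numpy as np
from scipy.stats import t

data = np.array([5.1, 4.8, 5.6, 5.0, 4.9, 5.3, 5.2, 4.7])
mu0 = 5.0                                   # mean under the null hypothesis

n = len(data)
t_stat = (data.mean() - mu0) / (data.std(ddof=1) / np.sqrt(n))
p_value = 2 * t.sf(abs(t_stat), df=n - 1)   # two-sided p-value from the t distribution
print(t_stat, p_value)
```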
Whether a numerical statistic is significant or insignificant depends on the sample/population ratio.
A small mean difference and large sample variances.
In statistics, a significant number is one that passes certain tests that make the statistic relevant.
2.43
Not in itself. You need to say what it is. Perhaps it's an F statistic?
It is a defensive statistic that stands for "Assists".
No, it is not.
Assuming you mean the t-statistic from least squares regression, the t-statistic is the regression coefficient (of a given independent variable) divided by its standard error. The standard error is essentially an estimate of the standard deviation of that coefficient estimate. A very large t-statistic implies that the coefficient was estimated with a fair amount of accuracy. If the t-statistic is more than 2 (the coefficient is at least twice as large as its standard error), you would generally conclude that the variable in question has a significant impact on the dependent variable. High t-statistics (over 2) mean the variable is significant. What if it's REALLY high? Then something may be wrong; the data points might be serially correlated.
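As an illustrative sketch (the data below are simulated, not from the original answer), a simple one-predictor regression makes the coefficient-over-standard-error ratio concrete:

```python
# Sketch: t = coefficient / standard error for a simple least squares regression.
# scipy.stats.linregress reports both the slope and its standard error.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(2)
x = rng.normal(size=50)
y = 1.5 * x + rng.normal(size=50)           # made-up data with a genuine slope

result = linregress(x, y)
t_stat = result.slope / result.stderr       # coefficient divided by its standard error
print(f"slope = {result.slope:.3f}, SE = {result.stderr:.3f}, t = {t_stat:.2f}")
# |t| > 2 is the usual rough threshold for calling the slope significant.
```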
The population data may be skewed, in which case the mean is not a reliable summary statistic. If the mean is greater than the median, the data are typically skewed to the right; if the median is greater than the mean, the data are typically skewed to the left.
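A quick sketch of that heuristic on simulated right-skewed data (an exponential sample, chosen only for illustration):

```python
# Sketch: on right-skewed data the mean is pulled above the median by the long tail.
import numpy as np

rng = np.random.default_rng(3)
skewed = rng.exponential(scale=1.0, size=1000)    # right-skewed sample

print(f"mean   = {skewed.mean():.3f}")            # pulled upward by the long right tail
print(f"median = {np.median(skewed):.3f}")        # typically smaller, so mean > median
```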
Significant means of importance, or it can mean "a lot", as in "there was a significant amount of blood loss".