Q: What other independent variables could be added to the regression and why?


Regression techniques are used to find the best relationship between two or more variables, where "best" is defined according to some statistical criterion. The regression line is the straight line or curve based on this relationship; it need not be a straight line. For example, the relationship between many common variables in physics follows an inverse square law.
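As a rough sketch of how a curved relationship can still be handled by straight-line regression, here is a hypothetical Python example: data that follow an inverse square law are linearized by taking logs, after which an ordinary linear fit recovers the exponent. The data values are invented purely for illustration.

```python
import numpy as np

# Hypothetical data: intensity I measured at distances d (values invented).
d = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
I = 100.0 / d**2  # follows an inverse-square law exactly

# A straight line in the original units would fit poorly, but a power law
# I = c * d^k becomes linear after taking logs:
#   log(I) = log(c) + k * log(d)
k, log_c = np.polyfit(np.log(d), np.log(I), 1)
print(k)  # the fitted exponent; -2 for inverse-square data
```

The same trick (transform, then fit a line) is a common first resort before reaching for genuinely nonlinear regression routines.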

The independent variable could be the number (or spacing, or size) of the laces, and the dependent variable is distance. The levels of the independent variable could be ranges of the number of laces.

The domain. It need not be the "independent variable" since the variables could be interdependent.

The three types of variables are:
- Independent: the one that you manipulate.
- Dependent: the one that reacts to changes in the independent variable and is measured in an experiment.
- Control: all the other factors that could affect the dependent variable but are kept constant throughout an experiment.

Usually, yes. Obviously, only if you have one: the two variables could be interdependent.

Related questions

No, it doesn't. Cause and effect is not demonstrated by regression; it only shows that the variables vary together. One variable could be affecting another, or the effects could be coming from the way the data are defined.

If y = 2x, then x is said to be an independent variable because you can choose it as you like and then calculate the dependent variable y. But someone could come along and say, "Hey, I know what y is," and from his point of view he could calculate what x is. From his point of view, x would be the dependent variable and y the independent variable, with x = y/2. In general, the independent variables are the variables you use to calculate the answer, which is then the dependent variable.
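The swap described above can be sketched in a few lines of Python (the function names are hypothetical, chosen just for illustration): each direction treats a different variable as the one you are free to choose.

```python
# Treating x as independent: choose x, compute y = 2x.
def y_from_x(x):
    return 2 * x

# Treating y as independent: invert the relationship, x = y / 2.
def x_from_y(y):
    return y / 2

x = 7
y = y_from_x(x)           # y plays the dependent role here
assert x_from_y(y) == x   # the round trip recovers the chosen value
```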

Although not everyone follows this naming convention, multiple regression typically refers to regression models with a single dependent variable and two or more predictor variables. In multivariate regression, by contrast, there are multiple dependent variables and any number of predictors. Using this naming convention, some people further distinguish "multivariate multiple regression," a term which makes explicit that there are two or more dependent variables as well as two or more independent variables.

In short, multiple regression is by far the more familiar form, although logically and computationally the two forms are extremely similar. Multivariate regression is most useful for more specialized problems such as compound tests of coefficients. For example, you might want to know whether SAT scores have the same predictive power for a student's grades in the second semester of college as they do in the first. One option would be to run two separate simple regressions and eyeball the results to see if the coefficients look similar. But if you want a formal probability test of whether the relationship differs, you could run it instead as a multivariate regression analysis. The coefficient estimates will be the same, but you will be able to directly test for their equality or other properties of interest.

In practical terms, the way you produce a multivariate analysis using statistical software is always at least a little different from multiple regression. In some packages you can use the same commands for both, but with different options; in a number of packages you use completely different commands to obtain a multivariate analysis.

A final note is that the term "multivariate regression" is sometimes confused with nonlinear regression; in other words, with the regression flavors besides ordinary least squares (OLS) linear regression. Those forms are more accurately called nonlinear or generalized linear models, because there is nothing distinctively "multivariate" about them in the sense described above. Some of them have commonly used multivariate forms too, but these are often called "multinomial" regressions in the case of models for categorical dependent variables.
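To illustrate the point that the coefficient estimates come out the same either way, here is a minimal sketch using plain NumPy least squares on synthetic data (the coefficients and variable roles are invented for illustration) rather than a dedicated statistics package. `np.linalg.lstsq` accepts either one outcome column at a time (multiple regression) or several at once (the multivariate layout).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Design matrix: intercept plus one predictor (think: SAT score).
X = np.column_stack([np.ones(n), rng.normal(size=n)])

# Two outcomes (think: first- and second-semester grades), synthetic data.
Y = np.column_stack([
    1.0 + 0.5 * X[:, 1] + 0.1 * rng.normal(size=n),
    1.0 + 0.5 * X[:, 1] + 0.1 * rng.normal(size=n),
])

# Multiple regression: fit each outcome column separately.
b1, *_ = np.linalg.lstsq(X, Y[:, 0], rcond=None)
b2, *_ = np.linalg.lstsq(X, Y[:, 1], rcond=None)

# Multivariate layout: fit both outcome columns at once.
B, *_ = np.linalg.lstsq(X, Y, rcond=None)

# The estimates are identical; the multivariate framework is what a stats
# package would use to jointly test e.g. "are the two slopes equal?"
assert np.allclose(B[:, 0], b1) and np.allclose(B[:, 1], b2)
```

The joint hypothesis test itself needs a proper statistics package; the sketch only shows why the point estimates cannot differ between the two framings.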

There are many cases where you won't get useful results from regression. The two most common kinds of issues are (1) when your data contain major violations of regression assumptions and (2) when you don't have enough data, or data of the right kinds.

Core assumptions behind regression include:
- That there is in fact a relationship between the outcome variable and the predictor variables.
- That observations are independent.
- That the residuals are normally distributed and independent of the values of the variables in the model.
- That each predictor variable is not a linear combination of any others and is not extremely correlated with any others.
- Additional assumptions depend on the nature of your dependent variable, for example whether it is measured on a continuous scale or is categorical (yes/no, etc.). The form of regression you use (linear, logistic, etc.) must match the type of data.

Not having enough data means having very few cases at all, or having large amounts of missing values for the variables you want to analyze. If you don't have enough observations, your model either will not run at all, or the estimates could be so imprecise (with large standard errors) that they aren't useful. A generic rule some people cite is that you need 10-20 cases per variable in the model; there's nothing magic about that number, and you might get by just fine with less, but it suggests you could run into trouble if you have much less than that. Missing values can be a big problem as well, and in the worst case could skew your results if they are not handled properly.
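One of the assumption failures above, a predictor that is an exact linear combination of another, can be seen directly in a small NumPy sketch (synthetic data, invented for illustration): the design matrix loses a rank, so the coefficients are not uniquely identifiable.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
x1 = rng.normal(size=n)
x2 = 3 * x1  # x2 is an exact linear combination of x1 (perfect collinearity)
y = 1 + 2 * x1 + rng.normal(size=n)

# Design matrix: intercept, x1, and the redundant x2.
X = np.column_stack([np.ones(n), x1, x2])

# With three columns we'd need rank 3 for a unique solution, but the
# collinearity leaves only rank 2: X'X is singular.
rank = np.linalg.matrix_rank(X)
print(rank)  # prints 2, not 3
```

A statistics package would typically either drop one of the collinear columns or refuse to fit; near-collinearity (high but not perfect correlation) instead shows up as inflated standard errors.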

Say you want to study the effect of age on income. You would control for other variables that could affect income (for example, gender, race, etc.). What you are really doing is holding those variables constant so you can see whether there actually is a relationship between age and income. Controlling makes your findings more powerful and "true".
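Here is a minimal sketch of "controlling by holding constant", using synthetic data with invented coefficients: in a regression, the control variable is simply included as an extra column, and the age coefficient then estimates the effect of age with the control held fixed.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
# Synthetic data: income rises 100 per year of age, plus a shift of 5000
# for one group (all numbers invented for illustration).
age = rng.uniform(25, 65, size=n)
group = rng.integers(0, 2, size=n)          # a binary control variable
income = 20000 + 100 * age + 5000 * group + rng.normal(0, 1000, size=n)

# "Controlling" for group = including it as a regressor alongside age.
X = np.column_stack([np.ones(n), age, group])
coef, *_ = np.linalg.lstsq(X, income, rcond=None)
age_effect = coef[1]  # should land near the true value of 100
```

Leaving `group` out of `X` would fold its effect into the error term; if group happened to correlate with age, the age coefficient would then be biased.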

As it is written, x is the independent variable and y is the dependent. But you could re-write it as x = y/23 + 30/23 and then y is the independent variable and x the dependent.

If one of the variables were independent, or if there were a causal relationship between the two variables, then the independent (causal) variable would be placed on the x-axis. If there were no independent variable but one of them was discrete, then that one would usually go on the x-axis. Otherwise, either variable could be placed on the x-axis.