Best Answer

Statistically independent random variables are always uncorrelated, but the converse is not true. A standard counterexample: let X be uniform on [-1, 1] and let Y = X². Then Cov(X, Y) = E[X³] - E[X]E[X²] = 0, so X and Y are uncorrelated, yet Y is completely determined by X, so they are certainly not independent.
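A quick numerical sketch of the classic counterexample (an illustration using Python's standard library; the setup and names are mine, not from the original answer): with X uniform on [-1, 1] and Y = X², the sample covariance is essentially zero even though Y is a deterministic function of X.

```python
import random

# X uniform on [-1, 1], Y = X^2.
# Cov(X, Y) = E[X^3] - E[X]E[X^2] = 0, so X and Y are uncorrelated,
# yet Y is completely determined by X, so they are not independent.
random.seed(0)
n = 200_000
xs = [random.uniform(-1, 1) for _ in range(n)]
ys = [x * x for x in xs]

mean_x = sum(xs) / n
mean_y = sum(ys) / n
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / n

print(round(cov, 3))  # close to 0: uncorrelated
# But knowing X pins down Y exactly, so they are clearly not independent.
```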

Wiki User

15y ago
Q: Are statistically independent random variables uncorrelated, and vice versa?
Continue Learning about Math & Arithmetic

Will the variance of the difference of two independent normally distributed random variables be equal to the SUM of the variances of the two distributions?

Yes. In fact it holds for any two random variables whose covariance is zero (i.e. they are uncorrelated), not just for independent normals: Var(X - Y) = Var(X) + Var(Y) - 2Cov(X, Y), and the covariance term vanishes.
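A quick numerical check of that claim (an illustration with made-up parameters, using Python's standard library): for independent normals with variances 4 and 9, the variance of the difference should come out near 13.

```python
import random

# For independent X and Y, Var(X - Y) = Var(X) + Var(Y),
# because Var(X - Y) = Var(X) + Var(Y) - 2*Cov(X, Y) and Cov(X, Y) = 0.
random.seed(1)
n = 200_000
xs = [random.gauss(0, 2) for _ in range(n)]   # Var(X) = 4
ys = [random.gauss(5, 3) for _ in range(n)]   # Var(Y) = 9
diffs = [x - y for x, y in zip(xs, ys)]

def var(vs):
    m = sum(vs) / len(vs)
    return sum((v - m) ** 2 for v in vs) / len(vs)

print(var(diffs))  # close to 4 + 9 = 13
```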


How to add standard deviations?

Standard deviations do not add directly. Square each standard deviation to get each variance, add the variances, and take the square root of that sum. In symbols: if X and Y are any two random variables, then μ_(X+Y) = μ_X + μ_Y; if X and Y are independent random variables, then σ²_(X+Y) = σ²_X + σ²_Y, so σ_(X+Y) = √(σ²_X + σ²_Y).
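A one-line sketch of the rule for independent variables (illustrative numbers of my own): standard deviations combine in quadrature, not by averaging.

```python
import math

# Standard deviations of independent variables combine in quadrature:
# sigma_(X+Y) = sqrt(sigma_X^2 + sigma_Y^2), not sigma_X + sigma_Y.
sd_x, sd_y = 3.0, 4.0
sd_sum = math.sqrt(sd_x ** 2 + sd_y ** 2)   # sqrt(9 + 16)
print(sd_sum)  # 5.0, not 7.0
```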


Why are important in random variables?

It might help if you specified why WHAT was important in random variables.


What are the types of variables in statistics?

There are three main kinds: nominal, ordinal, and interval (with ratio variables as a special case of interval).

Nominal: such as colour of eyes, gender, or species of animal. With nominal variables there is no intrinsic sense in which one category can be said to be "more" than another.

Ordinal: such as Small/Medium/Large, or Strongly Disagree/Disagree/Indifferent/Agree/Strongly Agree. The categories can be ordered, but the differences between pairs are not comparable. For example, it is not really possible to say that the difference between Strongly Disagree and Disagree is the same as (or double, or half) the difference between Indifferent and Agree.

Interval: variables where the distance between one pair of values (their interval) can be related to the distance between another pair. Such variables can be subdivided into discrete and continuous.

Another way of classifying variables is as independent and dependent. The dependent variable is a random variable, but the independent variable can be random or non-random.


What is the difference between a random variable and random process?

A random process is a sequence of random variables defined over a period of time.

Related questions


What is conditional expectation of 2 uncorrelated random variables-is it equal to unconditional expectation?

Under independence, yes: if X and Y are independent, then E(X | Y) = E(X). But uncorrelatedness alone is not enough. Uncorrelated only means E(XY) = E(X)E(Y); the conditional expectation E(X | Y) can still depend on Y. For example, if Y is symmetric about 0 and X = Y², then X and Y are uncorrelated, yet E(X | Y) = Y², which is generally not equal to E(X). The property that does the work is mean independence, E(X | Y) = E(X), which is implied by full independence and in turn implies uncorrelatedness.
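A numerical caveat worth checking (my own illustration, built on the same uncorrelated-but-dependent idea as the counterexample above): take Y uniform on {-1, 0, 1} and X = Y². The pair is uncorrelated, yet the conditional mean of X given Y = 0 is 0 while the unconditional mean is 2/3.

```python
import random

# Y uniform on {-1, 0, 1}, X = Y^2.
# Cov(X, Y) = E[Y^3] - E[Y]E[Y^2] = 0, so the pair is uncorrelated,
# but E[X | Y = 0] = 0 while E[X] = 2/3: E[X | Y] depends on Y.
random.seed(2)
n = 300_000
ys = [random.choice([-1, 0, 1]) for _ in range(n)]
xs = [y * y for y in ys]

mean_x = sum(xs) / n
mean_y = sum(ys) / n
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / n
cond_mean = sum(x for x, y in zip(xs, ys) if y == 0) / sum(1 for y in ys if y == 0)

print(round(cov, 3))  # close to 0: uncorrelated
print(cond_mean)      # exactly 0, far from E[X] = 2/3
```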


What is the difference between statistical randomness and unpredictability?

In applications such as reciprocal authentication and session key generation the requirement is not so much that the sequence of numbers be statistically random but that the successive numbers of the sequence are unpredictable. With true random sequences each number is statistically independent of other numbers in the sequence and therefore unpredictable.
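The distinction can be seen with an ordinary pseudorandom generator (a sketch of my own; the "session key" framing is illustrative, not a real protocol): Python's Mersenne Twister produces statistically well-behaved numbers, yet anyone who knows the seed can reproduce the entire sequence, so the numbers are completely predictable.

```python
import random

# Two generators with the same seed produce identical "random" streams:
# statistically fine, cryptographically useless.
rng_alice = random.Random(1234)
rng_eve = random.Random(1234)      # attacker who learned the seed
session_keys = [rng_alice.getrandbits(32) for _ in range(5)]
predicted = [rng_eve.getrandbits(32) for _ in range(5)]
print(session_keys == predicted)   # True: random-looking, yet predictable
```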


What do you want your residuals to be in statistics?

For the purpose of analysis, they should behave like independent, identically distributed random variables with mean 0. The ideal, of course, is that they are all 0 (a perfect fit), but in practice you want them small and patternless.


What has the author Percy A Pierre written?

Percy A. Pierre has written: 'Characterizations of Gaussian random processes by representations in terms of independent random variables' -- subject(s): Gaussian processes, Random noise theory




How did the normal distribution get its name?

According to the Central Limit Theorem, the sum of [a sufficiently large number of] independent, identically distributed random variables has an approximately Gaussian distribution. This is true irrespective of the underlying distribution of each individual random variable. As a result, many of the measurable variables that we come across have a Gaussian distribution and, consequently, it is also called the normal distribution.
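A rough check of that behaviour (an illustration using only the standard library; the sample sizes are arbitrary choices of mine): averages of i.i.d. uniform variables, which individually look nothing like a bell curve, already satisfy the normal distribution's 68% rule.

```python
import random

# Means of k i.i.d. uniform(0, 1) variables are approximately normal.
# A uniform(0, 1) has mean 0.5 and variance 1/12, so the mean of k of them
# has mean 0.5 and standard deviation sqrt(1/(12*k)).
random.seed(3)
k, trials = 48, 100_000
sd = (1 / (12 * k)) ** 0.5
means = [sum(random.random() for _ in range(k)) / k for _ in range(trials)]
within_1sd = sum(abs(m - 0.5) <= sd for m in means) / trials
print(round(within_1sd, 2))  # close to 0.68, as a normal distribution predicts
```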


What has the author T V Arak written?

T. V. Arak has written: 'Uniform limit theorems for sums of independent random variables' -- subject(s): Distribution (Probability theory), Limit theorems (Probability theory), Random variables, Sequences (Mathematics)


What is a stochastic error?

A stochastic error is a type of random error that occurs in statistical models or experiments. It is caused by factors that are unpredictable or beyond the control of the researcher, leading to variability in the data. Stochastic errors can be minimized through larger sample sizes or by using statistical techniques to account for their presence in the analysis.
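The last point, that larger samples shrink stochastic error, can be sketched numerically (my own illustration with made-up parameters): the error of a sample mean falls off roughly like 1/sqrt(n).

```python
import random

# Average absolute error of a sample mean, over repeated experiments,
# for a small and a large sample from the same N(10, 2^2) population.
random.seed(4)

def mean_error(n):
    errs = []
    for _ in range(200):
        sample = [random.gauss(10, 2) for _ in range(n)]
        errs.append(abs(sum(sample) / n - 10))
    return sum(errs) / len(errs)

small, large = mean_error(25), mean_error(2500)
print(small > large)  # True: a 100x larger sample gives a much smaller error
```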



What does the Central Limit Theorem state?

The Central Limit Theorem (abbreviated as CLT) states that the sum (or mean) of a sufficiently large number of independent, identically distributed random variables with finite variance is approximately normally distributed, regardless of the distribution of the individual variables.