Best Answer

The sum of two independent normally distributed random variables is also normally distributed, with mean equal to the sum of the two means and variance equal to the sum of the two variances. Independence (or, more generally, joint normality) is needed: there exist dependent normal variables whose sum is not normal.


Wiki User

16y ago

Q: What is the sum of two normally distributed random variables?
Continue Learning about Statistics

Will the sum of two normally distributed random variables be normally distributed if the random variables are independent?

Yes, and the new distribution has a mean equal to the sum of the means of the two distributions and a variance equal to the sum of the variances of the two distributions. The proof of this is found in Probability and Statistics by DeGroot, Third Edition, page 275.
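A quick simulation sketch of this fact, using only Python's standard library (the parameters below are arbitrary choices for illustration):

```python
import random
import statistics

random.seed(0)

# X ~ N(3, 2^2) and Y ~ N(-1, 1^2), independent.
# Theory: X + Y ~ N(3 + (-1), 2^2 + 1^2) = N(2, 5).
n = 200_000
s = [random.gauss(3, 2) + random.gauss(-1, 1) for _ in range(n)]

print(round(statistics.mean(s), 2))      # close to 2
print(round(statistics.variance(s), 2))  # close to 5
```

The sample mean and variance of the simulated sums land near the theoretical values of 2 and 5.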


What is marginal probability function?

If f(x, y) is the joint probability distribution function of two random variables, X and Y, then the sum (or integral) of f(x, y) over all possible values of y is the marginal probability function of x. The definition can be extended analogously to joint and marginal distribution functions of more than 2 variables.
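A small sketch of marginalizing a joint pmf; the joint table below is made up for illustration:

```python
# Hypothetical joint pmf of (X, Y), stored as a dict; the entries sum to 1.
joint = {
    (0, 0): 0.10, (0, 1): 0.20,
    (1, 0): 0.30, (1, 1): 0.40,
}

# Marginal pmf of X: sum the joint pmf over all possible values of y.
marginal_x = {}
for (x, y), p in joint.items():
    marginal_x[x] = marginal_x.get(x, 0.0) + p

# P(X = 0) = 0.10 + 0.20 = 0.30 and P(X = 1) = 0.30 + 0.40 = 0.70
print(marginal_x)
```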


Does a sample statistic always have a normal distribution?

No, many sample statistics do not have a normal distribution. Order statistics, such as the minimum or the maximum, are usually not normally distributed, even when the underlying data themselves have a common normal distribution. The geometric mean (for positive-valued data) almost never has a normal distribution. Practically important statistics, including the chi-square statistic, the F-statistic, and the R-squared statistic of regression, do not have normal distributions. Typically, the normal distribution arises as a good approximation when the sample statistic behaves like a sum of independent variables, none of whose variances dominates the total variance: this is a loose statement of the Central Limit Theorem. A sample sum or mean, when the elements of the sample are obtained independently, will therefore often be approximately normally distributed, provided the sample is large enough.


How do you give the probability of a sum?

You find the event space for the random variable that is the required sum and then calculate the probabilities of each favourable outcome. In the simplest case it is a convolution of the probability distribution functions.
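As a concrete sketch, the convolution for the sum of two fair six-sided dice can be computed in exact arithmetic:

```python
from collections import Counter
from fractions import Fraction

# pmf of one fair six-sided die
die = {k: Fraction(1, 6) for k in range(1, 7)}

# Convolution: P(S = s) = sum over i of P(X = i) * P(Y = s - i)
pmf_sum = Counter()
for i, p in die.items():
    for j, q in die.items():
        pmf_sum[i + j] += p * q

print(pmf_sum[7])             # 1/6 -- seven is the most likely total
print(sum(pmf_sum.values()))  # 1
```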


What are the 2 conditions that determine a probability distribution?

Every value of the probability (or density) function must be non-negative; for a discrete distribution this means each probability lies in [0, 1]. The sum (or integral) of the probability distribution function over all possible values of the random variable must be 1. (Note that a continuous density only needs to be non-negative; its value at a point can exceed 1.)
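For a discrete distribution, the two conditions can be checked mechanically; a sketch (the helper name is my own):

```python
def is_valid_pmf(pmf, tol=1e-9):
    """Check the two conditions for a discrete probability distribution."""
    in_range = all(0.0 <= p <= 1.0 for p in pmf.values())
    sums_to_one = abs(sum(pmf.values()) - 1.0) <= tol
    return in_range and sums_to_one

print(is_valid_pmf({1: 0.2, 2: 0.5, 3: 0.3}))  # True
print(is_valid_pmf({1: 0.2, 2: 0.5, 3: 0.4}))  # False: total is 1.1
```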

Related questions


Will the variance of the difference of two independent normally distributed random variables be equal to the SUM of the variances of the two distributions?

Yes. Var(X - Y) = Var(X) + Var(Y) when X and Y are independent. In fact this holds for any two random variables whose covariance is zero (uncorrelated variables); in general, Var(X - Y) = Var(X) + Var(Y) - 2 Cov(X, Y).
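A simulation sketch of the variance of a difference (arbitrary parameters, for illustration):

```python
import random
import statistics

random.seed(1)

# X ~ N(0, 3^2) and Y ~ N(0, 4^2), independent.
# Theory: Var(X - Y) = Var(X) + Var(Y) = 9 + 16 = 25.
n = 200_000
d = [random.gauss(0, 3) - random.gauss(0, 4) for _ in range(n)]

print(round(statistics.variance(d), 1))  # close to 25
```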


How did the normal distribution get its name?

According to the Central Limit Theorem, the sum of [a sufficiently large number of] independent, identically distributed random variables has an approximately Gaussian distribution. This is true irrespective of the underlying distribution of each individual random variable. As a result, many of the measurable variables that we come across have a Gaussian distribution, and consequently it is also called the normal distribution.


What is importance of central limit theorem?

The importance is that the sum of a large number of independent, identically distributed random variables is approximately normally distributed, provided the common distribution has a finite mean and variance. The point is that it does not matter what that particular distribution is: whatever distribution you start with, the sum ends up approximately normal.
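A sketch of the theorem in action, starting from a distribution that is far from normal (uniform on [0, 1]):

```python
import random

random.seed(2)

# Each term is Uniform(0, 1): mean 1/2, variance 1/12, not at all normal.
# Standardize the sum of n terms; by the CLT it should look roughly N(0, 1).
n, reps = 30, 100_000
mu, sd = n * 0.5, (n / 12) ** 0.5

sums = [(sum(random.random() for _ in range(n)) - mu) / sd
        for _ in range(reps)]

# A standard normal puts about 68% of its mass within one unit of zero.
within_one = sum(abs(s) <= 1 for s in sums) / reps
print(round(within_one, 2))  # close to 0.68
```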


What is the distribution of the sum of squared Poisson random variables?

There is no standard named distribution for the sum of squares of independent Poisson random variables. Its moments can be computed directly; for a single Poisson(λ) variable, E[X²] = Var(X) + E[X]² = λ + λ². For a large number of terms, the sum is approximately normal by the Central Limit Theorem.
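A simulation sketch supporting the moment formula E[X²] = λ + λ² for a Poisson(λ) variable. Python's standard library has no Poisson generator, so the sketch uses Knuth's method:

```python
import math
import random
import statistics

random.seed(3)

def poisson(lam):
    # Knuth's method: multiply uniforms until the product drops below e^-lam.
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

lam = 4.0
squares = [poisson(lam) ** 2 for _ in range(100_000)]

# E[X^2] = lam + lam^2 = 4 + 16 = 20 for lam = 4
print(round(statistics.mean(squares), 1))  # close to 20
```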


When do you use a normal distribution?

When studying the sum (or average) of a large number of independent variables. A large number is necessary for the Central Limit Theorem to kick in, unless the variables themselves are normally distributed. Independence is critical: if the variables are not independent, normality cannot be assumed.


What is the distribution of the sum of two exponentially distributed random variables?

If the exponential distributions have the same scale parameter it's known as the Erlang-2 distribution. PDF and CDF exist in closed-form but the quantile function does not. If you're looking to generate random variates the easiest method is to sum exponentially distributed variates. If the scale parameter is the same you can simplify a bit: -log(U0) - log(U1) = -log(U0*U1).
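A sketch of that generation trick (the rate parameter is an arbitrary choice):

```python
import math
import random
import statistics

random.seed(4)

rate = 2.0  # common rate of the two exponentials

def erlang2(rate):
    # -log(U0) - log(U1) = -log(U0 * U1), then scale by the rate.
    return -math.log(random.random() * random.random()) / rate

draws = [erlang2(rate) for _ in range(200_000)]

# Erlang(k=2, rate) has mean k / rate = 1.0 here.
print(round(statistics.mean(draws), 2))  # close to 1.0
```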


Can the variance of a normally distributed random variable be negative?

No. The variance of any distribution is the expected squared deviation from the mean. Since each squared deviation is non-negative, the variance is always non-negative, whatever the distribution: normal, Poisson, or other. (It is zero only for a constant random variable.)


The value of a random variable are uniformly distributed between 20 and 100 the mean of this distribution is?

For a random variable uniformly distributed between a and b, the mean is (a + b)/2, the midpoint of the interval. With a = 20 and b = 100 the mean is (20 + 100)/2 = 60.


The sum of the exponents of the variables of a monomial is the what of the monomial?

The degree of a term is the sum of the exponents on the variables.


How do you check for normal distribution?

There are two main methods: theoretical and empirical. Theoretical: Is the random variable the sum (or mean) of a large number of independent, identically distributed variables? If so, by the Central Limit Theorem the variable in question is approximately normally distributed. Empirical: there are various goodness-of-fit tests. Two of the better known are the chi-square and the Kolmogorov-Smirnov tests. These compare the observed values with what would be expected if the distribution were normal: the greater the discrepancy, the less likely it is that the distribution is normal; the smaller the discrepancy, the more likely it is.
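A sketch of the empirical route, computing a Kolmogorov-Smirnov-style statistic by hand against the standard normal CDF (via the error function); the samples are simulated for illustration:

```python
import math
import random

random.seed(5)

def normal_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def ks_statistic(sample):
    """Largest gap between the empirical CDF and the standard normal CDF."""
    xs = sorted(sample)
    n = len(xs)
    return max(
        max(abs((i + 1) / n - normal_cdf(x)), abs(i / n - normal_cdf(x)))
        for i, x in enumerate(xs)
    )

normal_data = [random.gauss(0, 1) for _ in range(2000)]
skewed_data = [random.expovariate(1.0) - 1 for _ in range(2000)]  # mean 0, sd 1

# The discrepancy is much larger for the skewed sample.
print(ks_statistic(normal_data) < ks_statistic(skewed_data))  # True
```

Both samples share the same mean and standard deviation, so the statistic is picking up the difference in shape, not in location or scale.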


How to add standard deviations?

Standard deviations do not add directly; variances do. Square each standard deviation to get a variance, add the variances, and take the square root of the total. For any two random variables X and Y, μ(X+Y) = μ(X) + μ(Y); if X and Y are also independent, then σ²(X+Y) = σ²(X) + σ²(Y), so the standard deviation of the sum is sqrt(σ(X)² + σ(Y)²).
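A one-line sketch of the combination rule for independent variables:

```python
def combined_std(sd_x, sd_y):
    """Standard deviation of X + Y for independent X and Y.

    Variances add, so standard deviations combine like the sides of a
    right triangle (root-sum-of-squares), not by simple addition.
    """
    return (sd_x ** 2 + sd_y ** 2) ** 0.5

print(combined_std(3.0, 4.0))  # 5.0, not 3 + 4 = 7
```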