Given the joint pmf P(X = x, Y = y) = (x + y)/30 for x = 0, 1, 2, 3 and y = 0, 1, 2 (these probabilities sum to 1):
(i) P(X <= 2, Y = 1) = P(X=0, Y=1) + P(X=1, Y=1) + P(X=2, Y=1)
= (0+1)/30 + (1+1)/30 + (2+1)/30 = 6/30 = 1/5.
(ii) P(X + Y = 4) = P(X=2, Y=2) + P(X=3, Y=1)
= (2+2)/30 + (3+1)/30 = 8/30 = 4/15.
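The two calculations above can be checked numerically. The sketch below assumes the joint pmf P(X = x, Y = y) = (x + y)/30 on x = 0..3, y = 0..2, which is what the arithmetic implies:

```python
from fractions import Fraction

# Joint pmf implied by the computations above: P(X=x, Y=y) = (x+y)/30
# for x = 0..3 and y = 0..2 (these probabilities sum to 1).
def p(x, y):
    return Fraction(x + y, 30)

# Sanity check: total probability is 1.
total = sum(p(x, y) for x in range(4) for y in range(3))
print(total)  # 1

# (i) P(X <= 2, Y = 1)
p_i = sum(p(x, 1) for x in range(3))
print(p_i)  # 1/5

# (ii) P(X + Y = 4): only (2,2) and (3,1) qualify on this support.
p_ii = sum(p(x, y) for x in range(4) for y in range(3) if x + y == 4)
print(p_ii)  # 4/15
```

Using Fraction keeps the arithmetic exact, so the results match 1/5 and 4/15 without rounding error.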
The joint probability of two discrete variables X and Y is P(x, y) = Prob(X = x and Y = y), and it is defined for each ordered pair (x, y) in the event space. The conditional probability of X, given that Y = y, is Prob(X = x and Y = y)/Prob(Y = y). The marginal probability of X is simply the probability of X; it can be derived from the joint distribution by summing over all possible values of Y.
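These three definitions can be illustrated with a small joint table. The values below are hypothetical, chosen only so the probabilities sum to 1:

```python
from fractions import Fraction

# Hypothetical joint pmf stored as {(x, y): probability}.
joint = {
    (0, 0): Fraction(1, 8), (0, 1): Fraction(1, 4),
    (1, 0): Fraction(3, 8), (1, 1): Fraction(1, 4),
}

# Marginal of X: sum the joint pmf over all values of Y.
def marginal_x(x):
    return sum(p for (xi, yi), p in joint.items() if xi == x)

# Conditional P(X = x | Y = y) = P(X = x, Y = y) / P(Y = y).
def conditional_x_given_y(x, y):
    p_y = sum(p for (xi, yi), p in joint.items() if yi == y)
    return joint.get((x, y), Fraction(0)) / p_y

print(marginal_x(0))                # 3/8
print(conditional_x_given_y(0, 1))  # 1/2
```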
Suppose you have two random variables, X and Y, and their joint probability distribution function is f(x, y) over some appropriate domain. Then the marginal probability distribution of X is the integral or sum of f(x, y) calculated over all possible values of Y.
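In the continuous case the marginal is an integral rather than a sum. A minimal numerical sketch, assuming the joint density f(x, y) = x + y on the unit square (which integrates to 1, with marginal f_X(x) = x + 1/2):

```python
# Hypothetical joint density f(x, y) = x + y on [0,1] x [0,1].
def f(x, y):
    return x + y

# Marginal density of X at a point: integrate f over all y
# (midpoint rule with n subintervals).
def marginal_fx(x, n=10000):
    h = 1.0 / n
    return sum(f(x, (i + 0.5) * h) for i in range(n)) * h

# Analytically the marginal is f_X(x) = x + 1/2, so at x = 0.3 it is 0.8.
print(round(marginal_fx(0.3), 6))  # 0.8
```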
The exponential distribution is a continuous probability distribution with probability density defined by f(x) = k·e^(-kx) for x >= 0 and f(x) = 0 otherwise, where k > 0 is the rate parameter.
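The corresponding cumulative distribution function is F(x) = 1 - e^(-kx). A quick sanity check, using an arbitrary rate k = 2 and the standard library's exponential sampler:

```python
import math
import random

k = 2.0  # example rate parameter

# Density and closed-form CDF of the exponential distribution.
def pdf(x):
    return k * math.exp(-k * x) if x >= 0 else 0.0

def cdf(x):
    return 1.0 - math.exp(-k * x) if x >= 0 else 0.0

# Monte Carlo check: the empirical fraction of samples below x
# should approximate the CDF at x.
random.seed(0)
samples = [random.expovariate(k) for _ in range(100000)]
frac = sum(s <= 1.0 for s in samples) / len(samples)
print(cdf(1.0))  # 1 - e^(-2), about 0.8647
print(frac)      # close to the value above
```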
If a random variable X has a Poisson distribution with parameter λ, then the probability that X takes the value x is Pr(X = x) = λ^x · e^(-λ) / x! for x = 0, 1, 2, 3, ...
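The pmf is straightforward to evaluate directly; here is a short sketch with an arbitrary λ = 3, confirming that the probabilities sum to 1:

```python
import math

# Poisson pmf: Pr(X = x) = lambda^x * e^(-lambda) / x!
def poisson_pmf(x, lam):
    return lam ** x * math.exp(-lam) / math.factorial(x)

lam = 3.0

# The probabilities over x = 0, 1, 2, ... sum to 1; a partial sum
# over the first 50 terms is already 1 to machine precision.
partial = sum(poisson_pmf(x, lam) for x in range(50))
print(round(partial, 12))             # 1.0
print(round(poisson_pmf(2, lam), 6))  # 9 * e^(-3) / 2, about 0.224042
```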
In some situations X is continuous but Y is discrete. For example, in a logistic regression, one may wish to predict the probability of a binary outcome Y conditional on the value of a continuously-distributed X. In this case, (X, Y) has neither a probability density function nor a probability mass function in the sense of the terms given above. On the other hand, a "mixed joint density" can be defined in either of two ways:
f(x, y) = f(x | Y = y) · Prob(Y = y) = Prob(Y = y | X = x) · f_X(x)
Formally, f(x, y) is the probability density function of (X, Y) with respect to the product measure on the respective supports of X and Y. Either of these two decompositions can then be used to recover the joint cumulative distribution function:
F(x, y) = sum over t <= y of the integral of f(s, t) for s from -infinity to x.
The definition generalizes to a mixture of arbitrary numbers of discrete and continuous random variables.
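A concrete instance of such a mixed pair can be sketched as follows. The model below is entirely hypothetical: X is uniform on [0, 1] and Y | X = x is Bernoulli with a logistic success probability (illustrative coefficients), and the second decomposition above gives the mixed joint density:

```python
import math

# Hypothetical model: X ~ Uniform(0, 1), so f_X(x) = 1 on [0, 1];
# Y | X = x is Bernoulli with a logistic link (made-up coefficients).
def p_y1_given_x(x):
    return 1.0 / (1.0 + math.exp(-(2.0 * x - 1.0)))

# Mixed joint density via f(x, y) = Prob(Y = y | X = x) * f_X(x).
def mixed_density(x, y):
    fx = 1.0 if 0.0 <= x <= 1.0 else 0.0
    p1 = p_y1_given_x(x)
    return (p1 if y == 1 else 1.0 - p1) * fx

# Marginal Prob(Y = 1): integrate the mixed density over x (midpoint rule).
n = 10000
h = 1.0 / n
p_y1 = sum(mixed_density((i + 0.5) * h, 1) for i in range(n)) * h
print(round(p_y1, 4))  # 0.5, by symmetry of this logistic curve about x = 1/2
```

Summing the density over the two values of y and integrating over x recovers total probability 1, which is what makes this object a legitimate joint description of the pair.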