Prove that lim y->infinity F(X,Y) (x,y) = F(X)(x)

  • MHB
  • Thread starter oyth94
  • Start date
  • #1
oyth94
Prove that lim y->infinity F(X,Y) (x,y) = F(X)(x)
 
  • #2
Re: Prove that lim y->infinity F(X,Y) (x,y) = F(X)(x)

oyth94 said:
Prove that lim y->infinity F(X,Y) (x,y) = F(X)(x)

What is F(X,Y) here? Is this the cumulative distribution function?

Are we given any information about X and Y in the problem?
 
  • #3
TheBigBadBen said:
What is F(X,Y) here? Is this the cumulative distribution function?

Are we given any information about X and Y in the problem?

I think this is the joint cdf; this question is related to joint probability in some way.

I did
$$\lim_{y \rightarrow \infty} F_{X,Y}(x,y) = P(X \leq x,\ Y < \infty)$$
After that I am not sure if I skipped a step or went in the wrong direction, but my next step was to conclude that
$$= P(X \leq x) = F_X(x)$$

please help!
 
  • #4
oyth94 said:
Prove that lim y->infinity F(X,Y) (x,y) = F(X)(x)

By definition,

$$F_{X,Y} (x,y) = P \{X<x,Y<y\}$$

... and since the events $\{X<x,\,Y<y\}$ increase to $\{X<x\}$ as $y$ tends to infinity, continuity of probability along increasing events gives...

$$\lim_{y \rightarrow \infty} F_{X,Y} (x,y) = P \{X<x\} = F_{X} (x)$$

Kind regards

$\chi$ $\sigma$
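As a quick numerical sanity check of the limit (a sketch of my own using a small hand-made discrete joint pmf; the numbers are illustrative, not from the thread):

```python
# Sanity check of lim_{y->inf} F_{X,Y}(x,y) = F_X(x) on a small
# hand-made discrete joint pmf (illustrative numbers only).
pmf = {  # (x, y): P(X = x, Y = y)
    (0, 0): 0.1, (0, 1): 0.2,
    (1, 0): 0.3, (1, 1): 0.4,
}

def joint_cdf(x, y):
    """F_{X,Y}(x, y) = P(X <= x, Y <= y)."""
    return sum(p for (a, b), p in pmf.items() if a <= x and b <= y)

def marginal_cdf_x(x):
    """F_X(x) = P(X <= x)."""
    return sum(p for (a, _), p in pmf.items() if a <= x)

# Once y exceeds the support of Y, F_{X,Y}(x, y) coincides with F_X(x),
# which is exactly the limit statement for a distribution with bounded support.
for big_y in (1, 10, 1e6):
    assert abs(joint_cdf(0, big_y) - marginal_cdf_x(0)) < 1e-9
```

For a distribution with unbounded support the equality holds only in the limit, but the same monotone-convergence picture applies.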
 
  • #5
chisigma said:
By definition,

$$F_{X,Y} (x,y) = P \{X<x,Y<y\}$$

... and since the events $\{X<x,\,Y<y\}$ increase to $\{X<x\}$ as $y$ tends to infinity, continuity of probability along increasing events gives...

$$\lim_{y \rightarrow \infty} F_{X,Y} (x,y) = P \{X<x\} = F_{X} (x)$$

Kind regards

$\chi$ $\sigma$

So I did it correctly? No steps skipped?
 
  • #6
oyth94 said:
So I did it correctly? No steps skipped?

I would think that unless your professor wants you to prove this using $\epsilon$s and $\delta$s, you've said as much as you need to say. However, the best way to know is to ask the person who will be grading your homework.
 
  • #7
There is a similar question, but this one is not a limit question; it does involve joint probability, independence, etc.
The question is:

$F_{X,Y}(x,y) \leq F_X(x), F_Y(y)$

I know that to find $F_X(x)$ we integrate with respect to $y$, and to find $F_Y(y)$ we integrate with respect to $x$. To check for independence we multiply the two together and see whether the product equals $F_{X,Y}(x,y)$.
But I'm not sure if this question is about independence or something else. How should I go about proving it?
 
  • #8
oyth94 said:
There is a similar question, but this one is not a limit question; it does involve joint probability, independence, etc.
The question is:

$F_{X,Y}(x,y) \leq F_X(x), F_Y(y)$

I know that to find $F_X(x)$ we integrate with respect to $y$, and to find $F_Y(y)$ we integrate with respect to $x$. To check for independence we multiply the two together and see whether the product equals $F_{X,Y}(x,y)$.
But I'm not sure if this question is about independence or something else. How should I go about proving it?

I don't think that the premise of the question is true in general; you would have to provide more information about the distributions of $X$ and $Y$.
 
  • #9
TheBigBadBen said:
I don't think that the premise of the question is true in general; you would have to provide more information about the distributions of $X$ and $Y$.

This was all that was given in the question, so I am confused now... Or can we prove it by contradiction?
 
  • #10
oyth94 said:
This was all that was given in the question, so I am confused now... Or can we prove it by contradiction?

Sorry, I was a little hasty with that; let me actually be sure about what the question states. My understanding is that you are to prove that for any $X,Y$ with some joint cdf $F_{X,Y}$ and where $X$ and $Y$ are not necessarily independent, we can state that
$$
F_{X,Y}(x,y)\leq F_X(x)\cdot F_Y(y)
$$
Or, phrased in a different way:
$$
P(X< x \text{ and } Y< y) \leq P(X < x)\cdot P(Y < y)
$$

If the above is what you meant, I would pose the following counterargument: we could equivalently state
$$
P(X< x|Y<y) \cdot P(Y< y) \leq P(X < x)\cdot P(Y < y) \Rightarrow \\
P(X<x|Y<y) \leq P(X<x)
$$
and that simply isn't true for all distributions. That is, prior knowledge of another variable can increase the probability of an event. Would you like a counter-example to this claim?

If that's not what you meant, or if there's more information about $X$ and $Y$ that you're leaving out, do say so.
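A concrete counterexample to the product bound can be sketched numerically (the distribution below is my own choice, not from the thread): take $X = Y$, uniform on $\{0,1\}$, i.e. perfectly positively correlated, and compare the two sides at $x = y = 1$.

```python
# Counterexample sketch: X = Y, uniform on {0, 1} (my own toy
# distribution). With strict inequalities, P(X < 1 and Y < 1) = 1/2
# while P(X < 1) * P(Y < 1) = 1/4, so F_{X,Y} <= F_X * F_Y fails.
pmf = {(0, 0): 0.5, (1, 1): 0.5}   # P(X = x, Y = y)

def p(event):
    """Probability of the event {(x, y) : event(x, y)}."""
    return sum(q for (x, y), q in pmf.items() if event(x, y))

joint   = p(lambda x, y: x < 1 and y < 1)                  # 0.5
product = p(lambda x, y: x < 1) * p(lambda x, y: y < 1)    # 0.25
assert joint > product   # knowledge of Y < 1 raises P(X < 1)
```

This is exactly the point made above: conditioning on $Y < y$ can increase the probability of $X < x$.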
 
  • #11
I realize what you conceivably could have meant (and probably did mean) is

$$
\text{for any }y:\; F_{X,Y}(x,y)\leq F_X(x) \text{ AND }\\
\text{for any }x:\; F_{X,Y}(x,y)\leq F_Y(y)
$$

Is this what you meant to prove? Then yes, we can prove this by integration, as you rightly mentioned.

Please, please, please: try to be clearer in the future about what you mean, even if it makes your post a little longer.
 
  • #12
TheBigBadBen said:
I realize what you conceivably could have meant (and probably did mean) is

$$
\text{for any }y:\; F_{X,Y}(x,y)\leq F_X(x) \text{ AND }\\
\text{for any }x:\; F_{X,Y}(x,y)\leq F_Y(y)
$$

Is this what you meant to prove? Then yes, we can prove this by integration, as you rightly mentioned.

Please, please, please: try to be clearer in the future about what you mean, even if it makes your post a little longer.

Hi, my apologies, this is actually what I meant to say. So how does it work after integration? Am I doing the integral from $0$ to $y$ of $F_X(x)\,dy$ multiplied by the integral from $0$ to $x$ of $F_Y(y)\,dx$ to get the integral of $F_{X,Y}(x,y)$? Something is wrong; I don't think that makes sense, does it?
 
  • #13
Re: Prove that lim y->infinity F(X,Y) (x,y) = F(X)(x)

TheBigBadBen said:
Sorry, I was a little hasty with that; let me actually be sure about what the question states. My understanding is that you are to prove that for any $X,Y$ with some joint cdf $F_{X,Y}$ and where $X$ and $Y$ are not necessarily independent, we can state that
$$
F_{X,Y}(x,y)\leq F_X(x)\cdot F_Y(y)
$$
Or, phrased in a different way:
$$
P(X< x \text{ and } Y< y) \leq P(X < x)\cdot P(Y < y)
$$

If the above is what you meant, I would pose the following counterargument: we could equivalently state
$$
P(X< x|Y<y) \cdot P(Y< y) \leq P(X < x)\cdot P(Y < y) \Rightarrow \\
P(X<x|Y<y) \leq P(X<x)
$$
and that simply isn't true for all distributions. That is, prior knowledge of another variable can increase the probability of an event. Would you like a counter-example to this claim?

If that's not what you meant, or if there's more information about $X$ and $Y$ that you're leaving out, do say so.

For the counterargument, why did you use conditional probability? And why multiply by $P(Y<y)$?

- - - Updated - - -

oyth94 said:
For the counterargument, why did you use conditional probability? And why multiply by $P(Y<y)$?

Oh sorry, never mind, I understand why you used the conditional probability multiplied by $P(Y<y)$: because it is the intersection of $X<x$ and $Y<y$. So that is the counterargument used against this proof?
 
  • #14
oyth94 said:
Hi, my apologies, this is actually what I meant to say. So how does it work after integration? Am I doing the integral from $0$ to $y$ of $F_X(x)\,dy$ multiplied by the integral from $0$ to $x$ of $F_Y(y)\,dx$ to get the integral of $F_{X,Y}(x,y)$? Something is wrong; I don't think that makes sense, does it?

So the proof of the first inequality via integrals would go something like this:
First of all, definitions. We assume there is some joint probability density function $f_{X,Y}$. We can then write $F_X, F_Y,$ and $F_{X,Y}$ as integrals (using dummy variables $s,t$ so they don't clash with the limits of integration):
$$
F_{X,Y}(x,y)=\int_{-\infty}^y \int_{-\infty}^x f_{X,Y}(s,t)\,ds\,dt\\
F_{X}(x)=\int_{-\infty}^{\infty} \int_{-\infty}^x f_{X,Y}(s,t)\,ds\,dt\\
F_{Y}(y)=\int_{-\infty}^{y} \int_{-\infty}^\infty f_{X,Y}(s,t)\,ds\,dt
$$

With that in mind, we may state that
$$
F_{X}(x)=\int_{-\infty}^{\infty} \int_{-\infty}^x f_{X,Y}(s,t)\,ds\,dt\\
= \int_{-\infty}^{y} \int_{-\infty}^x f_{X,Y}(s,t)\,ds\,dt +
\int_{y}^{\infty} \int_{-\infty}^x f_{X,Y}(s,t)\,ds\,dt
$$
Since $f_{X,Y}$ is a probability density, it is non-negative everywhere, which means that both of the above integrals are non-negative. This allows us to state that
$$
\int_{-\infty}^{y} \int_{-\infty}^x f_{X,Y}(s,t)\,ds\,dt +
\int_{y}^{\infty} \int_{-\infty}^x f_{X,Y}(s,t)\,ds\,dt \geq \\
\int_{-\infty}^{y} \int_{-\infty}^x f_{X,Y}(s,t)\,ds\,dt = F_{X,Y}(x,y)
$$
Thus, we may conclude that in general, $F_{X,Y}(x,y) \leq F_X(x)$; the symmetric argument (splitting the $s$ integral at $x$ instead) gives $F_{X,Y}(x,y) \leq F_Y(y)$.

However, it is not necessary to appeal to these integral definitions in order to go through the proof. I'll post an alternate, simpler proof as well.
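The split-integral step can be checked numerically with a concrete density (my own choice for illustration): $f(s,t) = e^{-s-t}$ on the positive quadrant, for which $F_{X,Y}(x,y) = (1-e^{-x})(1-e^{-y})$ and $F_X(x) = 1-e^{-x}$ in closed form.

```python
import math

# Numerical sketch of the split-integral argument for the (illustrative)
# density f(s, t) = exp(-s - t) on the positive quadrant.
def f(s, t):
    return math.exp(-s - t) if s >= 0 and t >= 0 else 0.0

def riemann2(fn, s_hi, t_lo, t_hi, n=400):
    """Midpoint Riemann sum of fn over [0, s_hi] x [t_lo, t_hi]."""
    ds, dt = s_hi / n, (t_hi - t_lo) / n
    return sum(fn((i + 0.5) * ds, t_lo + (j + 0.5) * dt)
               for i in range(n) for j in range(n)) * ds * dt

x, y, Y_INF = 1.0, 0.5, 20.0    # treat t = 20 as "infinity" for exp(-t)
part1 = riemann2(f, x, 0.0, y)  # t in [0, y]; density vanishes for t < 0
part2 = riemann2(f, x, y, Y_INF)  # the non-negative remainder
F_x = part1 + part2               # approximates F_X(x)
assert part2 >= 0.0
assert part1 <= F_x               # i.e. F_{X,Y}(x, y) <= F_X(x)
assert abs(F_x - (1 - math.exp(-1.0))) < 1e-2   # matches the closed form
```

The two asserted inequalities are exactly the non-negativity and splitting steps in the proof above.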
 
  • #15
The Easier Proof (via set theory)

Consider the following definitions:
$$
F_{X,Y}(x,y)=P(X< x \text{ and } Y < y)\\
F_{X}(x)=P(X< x)\\
F_{Y}(y)=P(Y< y)
$$

We note that the event $\{X < x \text{ and } Y < y\}$ is the intersection of the events $\{X < x\}$ and $\{Y < y\}$. Since an intersection is a subset of each of the intersected events, its probability is less than or equal to the probability of either one.
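The subset argument can be made concrete on a small finite sample space (my own toy example, not from the thread): two fair dice, with $A$ and $B$ events on the first and second die respectively.

```python
from itertools import product

# Subset argument on a finite sample space: two fair dice,
# A = {first die < 3}, B = {second die < 4} (toy example).
omega = list(product(range(1, 7), repeat=2))
A = {w for w in omega if w[0] < 3}
B = {w for w in omega if w[1] < 4}

def prob(event):
    return len(event) / len(omega)   # uniform probability

# A & B is a subset of both A and B, so its probability is <= either one.
assert (A & B) <= A and (A & B) <= B
assert prob(A & B) <= prob(A)
assert prob(A & B) <= prob(B)
```

Here $P(A \cap B) = 6/36$, while $P(A) = 12/36$ and $P(B) = 18/36$, exactly the pattern the proof relies on.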
 

