Taking limits in discrete form

In summary, the limit does not exist when t → 0 in an arbitrary way; the question is whether t1 and t2 can be made to approach 0 in some controlled way that allows the two terms to cancel out.
  • #1
friend
I'm trying to calculate this limit to answer a question in Quantum Mechanics:
[tex]\mathop {\lim }\limits_{{t_1} \to 0} \left( \frac{m}{2\pi \hbar i {t_1}} \right)^{1/2} e^{im(x' - x)^2/2\hbar {t_1}} \; + \; \mathop {\lim }\limits_{{t_2} \to 0} \left( \frac{m}{2\pi \hbar (-i) {t_2}} \right)^{1/2} e^{-im(x - x')^2/2\hbar {t_2}}[/tex]
It seems that as t → 0 in an arbitrary way, the complex exponentials circle wildly from +1 to i to -1 to -i and back to +1. And the two square roots approach ∞ in magnitude and are 90° out of phase with each other. So the limit does not appear to approach any particular value, not even plus or minus ∞; the limit seems undefined.

However, I wonder if t1 and t2 could approach 0 in some controlled way that allows the two terms to cancel out.

Let
[tex]A = \left( \frac{m}{2\pi \hbar i} \right)^{1/2}[/tex]
And let,
[tex]B = m{(x' - x)^2}/2\hbar [/tex]
Then the above limit can be written,
[tex]A\left( \mathop {\lim }\limits_{{t_1} \to 0} \frac{e^{iB/{t_1}}}{{t_1}^{1/2}} + i\,\mathop {\lim }\limits_{{t_2} \to 0} \frac{e^{-iB/{t_2}}}{{t_2}^{1/2}} \right)[/tex]
which equals,
[tex]A\left( \mathop {\lim }\limits_{{t_1} \to 0} \frac{e^{iB/{t_1}}}{{t_1}^{1/2}} + \mathop {\lim }\limits_{{t_2} \to 0} \frac{e^{-iB/{t_2} + i\pi /2}}{{t_2}^{1/2}} \right)[/tex]
If we restrict t2 such that,
[tex]\frac{{ - iB}}{{{t_2}}} + \frac{{i\pi }}{2} = \frac{{iB}}{{{t_1}}} + i\pi - i2\pi n[/tex]
for n any integer, then the [itex]i\pi - i2\pi n[/itex] term in the exponent will ensure that the second term is always 180° out of phase with the first term, so that the two terms will cancel out to zero. In this case, t1 could be any arbitrary number approaching zero. But t2 would have to be
[tex]{t_2} = \frac{B}{{\frac{{ - B}}{{{t_1}}} + 2\pi (n - \frac{1}{4})}}[/tex]
We can see from this that t2 gets arbitrarily close to zero from above or from below as n increases to plus infinity or decreases to minus infinity, respectively. For any other way of letting t2 approach zero, the limit seems completely undefined.

The question is whether it is allowed to let parameters be discrete in the limiting process. Or must they always be continuous?
 
  • #2
I was concerned that the two terms would not be equal if [itex]{t_2} \ne - {t_1}[/itex]. But the last equation above can be written as,
[tex]{t_2} = \frac{{B{t_1}}}{{ - B + {t_1}2\pi (n - \frac{1}{4})}}[/tex]
Then it is seen that as t1 → 0, the second term in the denominator becomes negligible compared to B, so that
[tex]{t_2} \to \frac{{B{t_1}}}{{ - B}} = - {t_1}[/tex]
And when this is put in the first equation in the previous post, then the two square roots become equal. And since they have opposite phases, the two terms cancel.
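As a quick symbolic check of that last step (a sketch I'm adding, using sympy; the assumption that B > 0, t1 > 0 and n is a positive integer is mine, made so the symbols behave nicely):
[code]
import sympy as sp

# Symbols standing in for the quantities above; assumed positive for simplicity.
t1, B, n = sp.symbols('t1 B n', positive=True)

# t2 as written above: t2 = B*t1 / (-B + 2*pi*(n - 1/4)*t1)
t2 = B * t1 / (-B + 2 * sp.pi * (n - sp.Rational(1, 4)) * t1)

# As t1 -> 0 the ratio t2/t1 tends to -1, i.e. t2 ~ -t1.
print(sp.limit(t2 / t1, t1, 0))  # prints -1
[/code]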

And as for the limit of t2 being made discrete with n, doesn't this just change the limit of a function into a limit of a sequence, but still a legitimate limit process? I'm still not sure you can make two limiting processes somewhat dependent on each other. I would think that it doesn't matter if t2 is a function of t1 as long as you can still put the limit of t2→0 in the ε, δ formulation, right?
 
  • #3
Mathematically you can't do this.

Consider the following limit: ##\displaystyle \lim_{(x,y) \rightarrow (0,0)} \frac{xy}{x²+y²}##.

This limit doesn't exist.

The limit exists if you go to ##(0,0)## along a line.
Take the limit with ##x=2y##, then the limit is 2/5.
Take the limit with ##x=y##, then the limit is 1/2.

You get a "limit" that depends on the slope of the line you use to approach ##(0,0)##. Use polar coordinates to easily establish this.
 
  • #4
Samy_A said:
You get a "limit" that depends on the slope of the line you use to approach ##(0,0)##. Use polar coordinates to easily establish this.
If I could (as you do in your example) manipulate the limit I pose to various values, then I would agree that my manipulation to zero has no meaning. But I don't see that. I see either 0 or ∞, either existence or non-existence.

I suppose I could manipulate the limits to get any phase angle I wanted between the two terms. But then the magnitudes of the two terms would not match, and I'd get infinity as the parameters approached zero. So I'm beginning to see my manipulation as a way of regularizing the limit: if I carefully regulate the manner in which the limit is taken, I can guarantee that the limit converges. They do this a lot in quantum theory; it's called renormalization, where by using some trick or another they get a finite value where before they were getting infinity.
 
  • #5
friend said:
If I could (as you do in your example) manipulate the limit I pose to various values, then I would agree that my manipulation to zero has no meaning. But I don't see that. I see either 0 or ∞, either existence or non-existence.

I suppose I could manipulate the limits to get any phase angle I wanted between the two terms. But then the magnitudes of the two terms would not match, and I'd get infinity as the parameters approached zero. So I'm beginning to see my manipulation as a way of regularizing the limit: if I carefully regulate the manner in which the limit is taken, I can guarantee that the limit converges. They do this a lot in quantum theory; it's called renormalization, where by using some trick or another they get a finite value where before they were getting infinity.
That's why my answer started with a bold "Mathematically". :oldsmile:
I don't know whether your manipulation is acceptable in some specific physical context. Maybe you can ask in the appropriate physics forum. There is more likelihood that someone knowledgeable will read it.
 
  • #6
Samy_A said:
That's why my answer started with a bold "Mathematically". :oldsmile:
I don't know whether your manipulation is acceptable in some specific physical context. Maybe you can ask in the appropriate physics forum. There is more likelihood that someone knowledgeable will read it.

Mathematically, I think we are talking about convergence and whether it is acceptable to regulate the limiting process to get convergence. Does it really matter HOW I approach the limit of zero? Can I change a continuous parameter to a discrete parameter and still get the same and valid limit? Does it really matter if the limiting parameters are related? I think I'm asking a rather technical question about analysis. The physicists would probably direct me to the math department.

One thing that differs from your example using x and y is that my t1 and t2 are not entirely dependent on one another. Each can approach zero independently of the other (except that one parameter becomes discrete). So the question of the slope or direction of approaching (0,0) is avoided.
 
  • #7
friend said:
Mathematically, I think we are talking about convergence and whether it is acceptable to regulate the limiting process to get convergence. Does it really matter HOW I approach the limit of zero? Can I change a continuous parameter to a discrete parameter and still get the same and valid limit? Does it really matter if the limiting parameters are related? I think I'm asking a rather technical question about analysis. The physicists would probably direct me to the math department.
The mathematical answer is yes, it matters HOW the limit is taken. If the two parameters must be related to get a limit, then it is not a limit for a math department.

friend said:
One thing that differs from your example using x and y is that my t1 and t2 are not entirely dependent on one another. Each can approach zero independently of the other (except that one parameter becomes discrete). So the question of the slope or direction of approaching (0,0) is avoided.
I understand the difference, but still, mathematically, the limit doesn't exist.

Let me give you a simple example: does ##\displaystyle \lim_{x \rightarrow \infty} \sin x## exist?
The answer of course is no.
But, if I set ##x_n=n\pi##, where ##n\in \mathbb N##, then the "discrete" limit ##\displaystyle \lim_{n \rightarrow \infty} \sin x_n=0##.
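A small numerical sketch of that contrast (my own addition):
[code]
import math

# Along x_n = n*pi the values sin(x_n) go to 0, but along x_n = (n + 1/2)*pi they
# have magnitude 1, so sin(x) has no limit as x -> infinity even though the
# "discrete" limit along n*pi exists and equals 0.
for n in [10, 100, 1000]:
    print(math.sin(n * math.pi), math.sin((n + 0.5) * math.pi))
[/code]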
 
  • #8
Samy_A said:
The mathematical answer is yes, it matters HOW the limit is taken. If the two parameters must be related to get a limit, then it is not a limit for a math department.
I appreciate your answers. I'm trying to narrow down the issues and find principles that can be applied.

What I think I have so far is that I can indeed manipulate the two limits of t1 and t2 independently. I can certainly manipulate t1 independently as I wish. But if I keep t1 constant, then I can change the value of n independently to get an independent discrete value of t2. So I think the issue comes down to whether I can change a continuous parameter to a discrete parameter. My intuition tells me that this is what we do in practice anyway. We get out the calculator and discretely try ever smaller values to see if it is converging on a particular value.
 
  • #9
friend said:
I appreciate your answers. I'm trying to narrow down the issues and find principles that can be applied.

What I think I have so far is that I can indeed manipulate the two limits of t1 and t2 independently. I can certainly manipulate t1 independently as I wish. But if I keep t1 constant, then I can change the value of n independently to get an independent discrete value of t2. So I think the issue comes down to whether I can change a continuous parameter to a discrete parameter. My intuition tells me that this is what we do in practice anyway. We get out the calculator and discretely try ever smaller values to see if it is converging on a particular value.
Your intuition is wrong. Look at how a limit is defined using ##\epsilon, \delta##.
##\displaystyle \lim_{(x,y)\rightarrow (0,0)}f(x,y)=L## is defined by:
$$\forall \epsilon>0 \ \exists \delta>0 \ \ : 0\lt \|(x,y)\| \lt \delta \ \Rightarrow |f(x,y)-L|\lt \epsilon$$
If you take a "discrete" limit, you may not have ##|f(x,y)-L|\lt \epsilon## for all ##(x,y)## satisfying ##\|(x,y)\| \lt \delta##.
 
  • #10
Samy_A said:
Your intuition is wrong. Look at how a limit is defined using ##\epsilon, \delta##.
##\displaystyle \lim_{(x,y)\rightarrow (0,0)}f(x,y)=L## is defined by:
$$\forall \epsilon>0 \ \exists \delta>0 \ \ : \|(x,y)\| \lt \delta \ \Rightarrow |f(x,y)-L|\lt \epsilon$$
If you take a "discrete" limit, you may not have ##|f(x,y)-L|\lt \epsilon## for all ##(x,y)## satisfying ##\|(x,y)\| \lt \delta##.
Well, now you're talking. Yes, I see what you're saying. There could be continuous functions that are sharply spiked between adjacent discrete values, so that you cannot find discrete values in the sequence for which [itex]|f(n) - L| < \varepsilon [/itex] with [itex]n > N[/itex]. So in general you cannot replace a continuous parameter with a discrete parameter.

But in my case it seems obvious that you can always find [itex]{t_1} < \delta [/itex] and [itex]n > N[/itex] such that [itex]|f({t_1},{t_2}(n)) - L| < \varepsilon [/itex].

Or more precisely, you can always find [itex]\left\| {\left( {{t_1},{t_2}} \right)} \right\| < \delta [/itex] such that [itex]|f({t_1},{t_2}) - L| < \varepsilon [/itex], because you can always find a [itex]{t_1} < \tau [/itex] and an [itex]n > N[/itex] such that [itex]\left\| {\left( {{t_1},{t_2}} \right)} \right\| < \delta [/itex].
 
  • #11
friend said:
Well, now you're talking. Yes, I see what you're saying. There could be continuous functions that are sharply spiked between adjacent discrete values, so that you cannot find discrete values in the sequence for which [itex]|f(n) - L| < \varepsilon [/itex] with [itex]n > N[/itex]. So in general you cannot replace a continuous parameter with a discrete parameter.

But in my case it seems obvious that you can always find [itex]{t_1} < \delta [/itex] and [itex]n > N[/itex] such that [itex]|f({t_1},{t_2}(n)) - L| < \varepsilon [/itex].

Or more precisely, you can always find [itex]\left\| {\left( {{t_1},{t_2}} \right)} \right\| < \delta [/itex] such that [itex]|f({t_1},{t_2}) - L| < \varepsilon [/itex], because you can always find a [itex]{t_1} < \tau [/itex] and an [itex]n > N[/itex] such that [itex]\left\| {\left( {{t_1},{t_2}} \right)} \right\| < \delta [/itex].
You are missing the point about what the definition of limit means.
You don't have to find "a" ##(t_1,t_2)## such that ##\left\| {\left( {{t_1},{t_2}} \right)} \right\| < \delta## and ##|f({t_1},{t_2}) - L| < \epsilon##, it must be true for all ##(t_1,t_2)## such that ##\left\| {\left( {{t_1},{t_2}} \right)} \right\| < \delta##.

Try it on the following "limit": ##\displaystyle \lim_{x \rightarrow 0} \sin \frac{1}{x}##. For each ##\epsilon >0 ## you can find an ##x## such that ##|x|<\epsilon## and ##|\sin \frac{1}{x}|<\epsilon##.
But of course the limit doesn't exist.
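Numerically (my own sketch, not part of the original argument): points of the form ##x=1/(n\pi)## make ##\sin \frac{1}{x}## small, while points of the form ##x=1/((n+\frac{1}{2})\pi)##, just as close to 0, give values of magnitude 1.
[code]
import math

# x = 1/(n*pi) suggests a "limit" of 0, but the equally small x = 1/((n + 1/2)*pi)
# gives |sin(1/x)| = 1, so lim_{x -> 0} sin(1/x) does not exist.
for n in [10, 100, 1000]:
    x_gives_0 = 1.0 / (n * math.pi)
    x_gives_1 = 1.0 / ((n + 0.5) * math.pi)
    print(math.sin(1.0 / x_gives_0), math.sin(1.0 / x_gives_1))
[/code]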

Conclusion: mathematically your limit doesn't exist. Whether your manipulation to make it converge is acceptable is not a mathematical question.
 
  • #12
Samy_A said:
You are missing the point about what the definition of limit means.
You don't have to find "a" ##(t_1,t_2)## such that ##\left\| {\left( {{t_1},{t_2}} \right)} \right\| < \delta## and ##|f({t_1},{t_2}) - L| < \epsilon##, it must be true for all ##(t_1,t_2)## such that ##\left\| {\left( {{t_1},{t_2}} \right)} \right\| < \delta##.

Try it on the following "limit": ##\displaystyle \lim_{x \rightarrow 0} \sin \frac{1}{x}##. For each ##\epsilon >0 ## you can find an ##x## such that ##|x|<\epsilon## and ##|\sin \frac{1}{x}|<\epsilon##.
But of course the limit doesn't exist.

Conclusion: mathematically your limit doesn't exist. Whether your manipulation to make it converge is acceptable is not a mathematical question.
So let me fix the wording a little bit and see if it works for you.

My limit is such that, for all [itex]\left\| {\left( {{t_1},{t_2}} \right)} \right\| < \delta [/itex] we have [itex]|f({t_1},{t_2}) - L| < \varepsilon [/itex], because for all [itex]{t_1} < \tau [/itex] and all [itex]n > N[/itex] it is true that [itex]\left\| {\left( {{t_1},{t_2}} \right)} \right\| < \delta [/itex]. I don't see anything in the definition that prohibits t2 from being a function of t1 and n. I don't see anything that prohibits t2 from being discrete.
 
  • #13
friend said:
So let me fix the wording a little bit and see if it works for you.

My limit is such that, for all [itex]\left\| {\left( {{t_1},{t_2}} \right)} \right\| < \delta [/itex] we have [itex]|f({t_1},{t_2}) - L| < \varepsilon [/itex], because for all [itex]{t_1} < \tau [/itex] and all [itex]n > N[/itex] it is true that [itex]\left\| {\left( {{t_1},{t_2}} \right)} \right\| < \delta [/itex]. I don't see anything in the definition that prohibits t2 from being a function of t1 and n. I don't see anything that prohibits t2 from being discrete.
Let me write the definition of limit (for a ##\mathbb R² \to \mathbb R## function) again, and add the elements that were implicitly assumed (as they seem obvious, but ok, let's be precise).

##\displaystyle \lim_{(x,y)\rightarrow (0,0)}f(x,y)=L##, where ##L \in \mathbb R##, is defined by:
$$\forall \epsilon>0 \ \exists \delta>0 \ \ \ \text{such that}\ \ \forall (x,y) \in \mathbb R² \ :\\ 0 \lt \|(x,y)\| \lt \delta \ \Rightarrow |f(x,y)-L|\lt \epsilon$$

The ## \forall (x,y) \in \mathbb R²## is the key that prohibits ##t_2## being a function of ##t_1## and ##n##. All that is required of ##(t_1,t_2)## is that ##\|(t_1,t_2)\|\lt \delta##. For any ##(t_1,t_2)## satisfying this condition, ##|f(t_1,t_2)-L|\lt \epsilon## must hold.
If that is the case (and can be achieved for every ##\epsilon>0##), the limit exists. If not, the limit doesn't exist.

Again, I'm not even trying to say that what you do is wrong in your context. I just don't know about that.
But given how limits are defined, your expression doesn't have a limit.
 
  • #14
friend said:
Then the above limit can be written ...
[tex]A\left( \mathop {\lim }\limits_{{t_1} \to 0} \frac{e^{iB/{t_1}}}{{t_1}^{1/2}} + \mathop {\lim }\limits_{{t_2} \to 0} \frac{e^{-iB/{t_2} + i\pi /2}}{{t_2}^{1/2}} \right)[/tex]
If we restrict t2 such that,
[tex]\frac{{ - iB}}{{{t_2}}} + \frac{{i\pi }}{2} = \frac{{iB}}{{{t_1}}} + i\pi - i2\pi n[/tex]
for n any integer, then the [itex]i\pi - i2\pi n[/itex] term in the exponent will ensure that the second term is always 180° out of phase with the first term, so that the two terms will cancel out to zero.
No they won't. For that to work you would require that
$$\lim_{t_1,t_2\to 0}\left[\textrm{1st term}+\textrm{2nd term}\right]=
\lim_{t_1,t_2\to 0}[\textrm{1st term}]+\lim_{t_1,t_2\to 0}[\textrm{2nd term}]
$$
and that is only true if both limits on the right exist, which they don't.

If you replace ##t_1## in the formula by an expression involving only ##t_2## and you then replace the two limit operators by a single limit operator, outside the whole expression, that takes the limit as ##t_2\to 0## then, if your calcs are correct (which I haven't checked) you will get a valid limit. But the new expression will be a different expression from the original one. The two do not represent the same thing and one cannot be converted to, or substituted for, the other.
 
  • #15
Thanks for your input, guys. Let me see if I can reduce it to a more intuitive formula. I think what I have in essence is:

[tex]\mathop {\lim }\limits_{({t_1},{t_2}) \to 0} \left[ {\left( {\frac{1}{{{t_1}}}} \right) - \left( {\frac{1}{{{t_2}}}} \right)} \right][/tex]

And then put

[tex]{t_2} = \frac{{{t_1}}}{{1 + {t_1}n}}[/tex]

which gives the limit as

[tex]\mathop {\lim }\limits_{({t_1},{t_2}) \to 0} \left[ {\left( {\frac{1}{{{t_1}}}} \right) - \left( {\frac{1}{{\frac{{{t_1}}}{{1 + {t_1}n}}}}} \right)} \right][/tex]

And I think it is clear that for any value of n the two terms cancel out.

Although, I think Samy_A might disagree with you, andrewkirk, as to whether I can make t2 a function of t1.
 
  • #16
Samy_A said:
The ## \forall (x,y) \in \mathbb R²## is the key that prohibits ##t_2## being a function of ##t_1## and ##n##.
I take issue with the requirement that ## \forall (x,y) \in \mathbb R²##. It seems that this prohibits discrete values of (x,y) and prohibits the convergence of sequences, since ALL values of (x,y) must be considered.
 
  • #17
friend said:
I think what I have in essence is:

[tex]\mathop {\lim }\limits_{({t_1},{t_2}) \to 0} \left[ {\left( {\frac{1}{{{t_1}}}} \right) - \left( {\frac{1}{{{t_2}}}} \right)} \right][/tex]

And then put

[tex]{t_2} = \frac{{{t_1}}}{{1 + {t_1}n}}[/tex]

which gives the limit as

[tex]\mathop {\lim }\limits_{({t_1},{t_2}) \to 0} \left[ {\left( {\frac{1}{{{t_1}}}} \right) - \left( {\frac{1}{{\frac{{{t_1}}}{{1 + {t_1}n}}}}} \right)} \right][/tex]

And I think it is clear that for any value of n the two terms cancel out.
As Samy has already pointed out (in the unsimplified case), the second expression is not the same as the first. In the second, you are taking the limit of the expression as ##(t_1,t_2)## approaches ##(0,0)## along a specific path. Along that path, there is a limit, and it is

$$
\mathop {\lim }\limits_{({t_1},{t_2}) \to 0} \left[ {\left( {\frac{1}{{{t_1}}}} \right) - \left( {\frac{1}{{\frac{{{t_1}}}{{1 + {t_1}n}}}}} \right)} \right]=
\mathop {\lim }\limits_{({t_1},{t_2}) \to 0} \left[ {\left( {\frac{1}{{{t_1}}}} \right) - \left(\frac{1 + {t_1}n}{t_1} \right)} \right]=
\mathop {\lim }\limits_{({t_1},{t_2}) \to 0} \left[ \frac{1}{t_1}-\frac{1}{t_1}-n\right]=-n
$$

So for every positive integer ##n## there is a path through the point ##(0,0)## and every path will give a different limit. There are other paths through ##(0,0)## for which there is no limit, such as the path on which ##t_2=t_1{}^2##. So the limit as ##(t_1,t_2)\to 0## does not exist, because whether there is a limit, and what the limit is when there is one, both depend on the path taken.
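A numerical check of both claims (a Python sketch I'm adding, not andrewkirk's code):
[code]
# Along t2 = t1/(1 + n*t1) the expression 1/t1 - 1/t2 equals -n for every t1 (up to rounding),
# while along t2 = t1**2 it diverges, so the two-variable limit cannot exist.
def expr(t1, t2):
    return 1.0 / t1 - 1.0 / t2

n = 7
for t1 in [1e-3, 1e-6, 1e-9]:
    print(expr(t1, t1 / (1 + n * t1)), expr(t1, t1**2))
[/code]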

This is directly similar to the example Samy gave in post 3, where he presented a function that has limits at a point if you approach it along specific paths, and those limits differ between paths, so that there is no simple (ie general) limit.

You felt that Samy's example didn't apply to your case, as you observed:
friend said:
If I could (as you do in your example) manipulate the limit I pose to various values, then I would agree that my manipulation to zero has no meaning.
My calc above demonstrates that you can manipulate your limit, just by choosing different paths - ie by choosing different values of ##n##. So there is no limit as ##(t_1,t_2)\to 0##, as that notation requires that the limit exists on every path that approaches ##(0,0)## and that all such limits are the same.
 
  • #18
No need to repeat what @andrewkirk wrote.

Just a remark about this:
friend said:
Although, I think Samy_A might disagree with you, andrewkirk, as to whether I can make t2 a function of t1.
No, I do agree with andrewkirk. Of course you can make ##t_2## a function of ##t_1##, or take a discrete path, or do whatever manipulation you want.
However, the result will not be the limit of the original expression.
Whether that result will be significant depends on the (physical) context, not on the mathematics.
 
  • #19
friend said:
I take issue with the requirement that ## \forall (x,y) \in \mathbb R²##. It seems that this prohibits discrete values of (x,y) and prohibits the convergence of sequences, since ALL values of (x,y) must be considered.
It shows that a "continuous" limit is not the same as a "discrete" limit. In the first post you asked: "The question is whether it is allowed to let parameters be discrete in the limiting process. Or must they always be continuous?"
The answer you can deduce from the definition of limit is: the two are not the same.

See how ##\displaystyle \lim_{x \rightarrow 0}\sin \frac{1}{x}## doesn't exist while some "discrete" variants behave nicely: taking ##x_n=\frac{1}{n\pi}## with ##n\in \mathbb N##, we get ##\displaystyle \lim_{n \rightarrow \infty}\sin \frac{1}{x_n}=\lim_{n \rightarrow \infty}\sin (n\pi)=0##.
 
  • #20
Samy_A said:
No need to repeat what @andrewkirk wrote.

Just a remark about this:
No, I do agree with andrewkirk. Of course you can make ##t_2## a function of ##t_1##, or take a discrete path, or do whatever manipulation you want.
However, the result will not be the limit of the original expression.
Whether that result will be significant depends on the (physical) context, not on the mathematics.
I think you need to change that to "may not be the limit of the original expression." For obviously,
[tex]\mathop {\lim }\limits_{n \to 0} \frac{1}{n} = 0[/tex]
whether n is continuous or discrete. This is a counterexample to your use of "will not be".

And the reason I take issue with ## \forall (x,y) \in \mathbb R²## is that there are functions not even defined on all of R2, such as the probability of obtaining fewer than zero samples or the density of a negative volume. It also seems to contradict taking the limit of a sequence.
 
  • #21
friend said:
I think you need to change that to "may not be the limit of the original expression." For obviously,
[tex]\mathop {\lim }\limits_{n \to 0} \frac{1}{n} = 0[/tex]
whether n is continuous or discrete.
(Typo: I assume that you meant ##n \to \infty## there.)

Of course. When a limit exists, then the limit will also exist along a specific path. That is quite obvious.

I was referring to your case, where the limit doesn't exist. If you do find a "limit" after some manipulation, that result will mathematically not be the limit of the expression you started with.

But if you want me to restate the post, no problem:

No, I do agree with andrewkirk. Of course you can make ##t_2## a function of ##t_1##, or take a discrete path, or do whatever manipulation you want.
However, the result will not be the limit of the original expression if that limit doesn't exist in the first place.
Whether that result will be significant depends on the (physical) context, not on the mathematics.
 
  • #22
friend said:
And the reason I take issue with ## \forall (x,y) \in \mathbb R²## is that there are functions not even defined on all of R2, such as the probability of obtaining fewer than zero samples or the density of a negative volume. It also seems to contradict taking the limit of a sequence.
Please don't take things out of context. We were dealing with a limit for ##(x,y) \to (0,0)##.
The condition in the limit definition is that for all ##(x,y) \in \mathbb R²## that satisfy ##0<\|(x,y)\|<\delta##, ##|f(x,y)-L|<\epsilon##.
When we talk about a function, and its limit to a point, we of course assume that the function is defined in a neighborhood of the point (with the possible exclusion of the point itself).

I don't have the ambition nor the intention to post a comprehensive reference for limits. After all, this thread is marked "A(dvanced)", and the precise definition of a limit is well known and not really a matter of controversy.

The basic point remains: your expression has no limit. The "limit" you get after manipulation may or may not be of physical relevance.
 
  • #23
Samy_A said:
However, the result will not be the limit of the original expression if that limit doesn't exist in the first place.
Whether that result will be significant depends on the (physical) context, not on the mathematics.
You say, "if that limit does not exist in the first place". How do you decide "in the first place" that a limit does not exist? I consider for example l'Hopital's rule, a complicated procedure to find out if the limit exists when it first appears that it does not. We apply l'Hopital's rule in situations that appear to be 0/0 or ∞/∞ or 0*∞, and we don't know what the value is. My case is similar, depending on the the value of t1 and t2, the limit could be ∞ or -∞ or 0; we just don't know. So if I come up with a technique that uniquely results in a limit of zero, does it prove the limit actually does exist? You claim that I cannot approach 0 by some specific path, but isn't that what l'Hopital's rule is doing, approaching zero along the specific path defined by the slope of a derivative?
 
  • #24
Samy_A said:
Please don't take things out of context. We were dealing with a limit for ##(x,y) \to (0,0)##.
The condition in the limit definition is that for all ##(x,y) \in \mathbb R²## that satisfy ##0<\|(x,y)\|<\delta##, ##|f(x,y)-L|<\epsilon##.
When we talk about a function, and its limit to a point, we of course assume that the function is defined in a neighborhood of the point (with the possible exclusion of the point itself).
If you say, for all ##(x,y) \in \mathbb R²## such that ##0<\|(x,y)\|<\delta##, then that seems to allow specific paths, for example discrete values.
Samy_A said:
I don't have the ambition nor the intention to post a comprehensive reference for limits. After all, this thread is marked "A(dvanced)", and the precise definition of a limit is well known and not really a matter of controversy.
Perhaps you know of a link, then.

Samy_A said:
The basic point remains: your expression has no limit. The "limit" you get after manipulation may or may not be of physical relevance.
The question is how we know that the limit does not exist.

P.S. Thanks for hanging in there with me on this issue. I know I may be a little taxing with my continued questions. But it is important to me. A lot is riding on this issue. I consider it possible at this point that all of physics could be explained if only I can get this issue resolved.
 
  • #25
friend said:
You say, "if that limit does not exist in the first place". How do you decide "in the first place" that a limit does not exist? I consider for example l'Hopital's rule, a complicated procedure to find out if the limit exists when it first appears that it does not. We apply l'Hopital's rue in situations that appear to be 0/0 or ∞/∞ or 0*∞, and we don't know what the value is. My case is similar, depending on the the value of t1 and t2, the limit could be ∞ or -∞ or 0; we just don't know. So if I come up with a technique that uniquely results in a limit of zero, does it prove the limit actually does exist? You claim that I cannot approach 0 by some specific path, but isn't that what l'Hopital's rule is doing, approaching zero along the specific path defined by the slope of a derivative?
The bottom line is: we "decide" that a limit exists if it satisfies the well-known ##\epsilon - \delta## definition.
We have seen a number of examples in this thread showing that "limits" can exist if we approach the limit point in some particular way, or by taking a "discrete" path, even when the limit doesn't exist.

L'Hôpital's rule in no way changes that. It is a very interesting method to compute certain limits without doing ##\epsilon - \delta## gymnastics.
However, if you can compute a limit using L'Hôpital's rule, you are sure that the function has a limit satisfying the usual definition.
You can find the proof of L'Hôpital's rule in any good calculus textbook (and even on Wikipedia).
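For instance (a standard textbook example, added here only for illustration), the 0/0 form
[tex]\lim_{x \to 0}\frac{\sin x}{x} = \lim_{x \to 0}\frac{\cos x}{1} = 1[/tex]
is resolved by l'Hôpital's rule, and the value 1 it produces is exactly the limit in the usual ##\epsilon - \delta## sense.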

Now, if you have found a novel way to compute limits, that is fine. In order for that method to be accepted you will have to prove that if a limit exists according to your method, it also exists according to the usual ##\epsilon - \delta## definition. Once that proof is provided, the new method can be used, same as we use L'Hôpital's rule to compute limits.

However, we know that your expression doesn't converge, unless you perform some manipulation.
 
  • #26
andrewkirk said:
As Samy has already pointed out (in the unsimplified case), the second expression is not the same as the first. In the second, you are taking the limit of the expression as ##(t_1,t_2)## approaches ##(0,0)## along a specific path. Along that path, there is a limit, and it is

$$
\mathop {\lim }\limits_{({t_1},{t_2}) \to 0} \left[ {\left( {\frac{1}{{{t_1}}}} \right) - \left( {\frac{1}{{\frac{{{t_1}}}{{1 + {t_1}n}}}}} \right)} \right]=
\mathop {\lim }\limits_{({t_1},{t_2}) \to 0} \left[ {\left( {\frac{1}{{{t_1}}}} \right) - \left(\frac{1 + {t_1}n}{t_1} \right)} \right]=
\mathop {\lim }\limits_{({t_1},{t_2}) \to 0} \left[ \frac{1}{t_1}-\frac{1}{t_1}-n\right]=-n
$$

So for every positive integer ##n## there is a path through the point ##(0,0)## and every path will give a different limit. There are other paths through ##(0,0)## for which there is no limit, such as the path on which ##t_2=t_1{}^2##. So the limit as ##(t_1,t_2)\to 0## does not exist, because whether there is a limit, and what the limit is when there is one, both depend on the path taken.

OK, very good, you got me on that one. I may have oversimplified. Let me think about it a bit.
Since this question was conceived in the context of quantum mechanics, it may turn out to be a virtue instead of a vice.
 
  • #27
Samy_A said:
However, we know that your expression doesn't converge, unless you perform some manipulation.
You seem to say that with a lot of confidence. But the question remains, HOW do we know that it does not converge?
 
  • #28
friend said:
If you say, for all ##(x,y) \in \mathbb R²## such that ##0<\|(x,y)\|<\delta##, then that seems to allow specific paths, for example discrete values.
No, it doesn't. The condition ##0<\|(x,y)\|<\delta## has no provision that allows one to consider only certain paths.
Maybe that is even clearer in polar coordinates, where ##0<\|(x,y)\|<\delta## can be written as ##0<r<\delta##. The argument (the angle) is missing, so you can't assume anything about a specific path.
(Again, for simplicity, I consider a limit ##(x,y) \to (0,0)##.)
friend said:
Perhaps you know of a link, then.
Maybe this insight by micromass can help.
friend said:
The question is how we know that the limit does not exist.
You answered that question in the first post:
friend said:
It seems that as t → 0 in an arbitrary way, the complex exponentials circle wildly from +1 to i to -1 to -i and back to +1. And the two square roots approach ∞ in magnitude and are 90° out of phase with each other. So the limit does not appear to approach any particular value, not even plus or minus ∞; the limit seems undefined.
 
  • #29
Samy_A said:
No, it doesn't. The condition ##0<\|(x,y)\|<\delta## has no provision that allows one to consider only certain paths.
Maybe that is even clearer in polar coordinates, where ##0<\|(x,y)\|<\delta## can be written as ##0<r<\delta##. The argument (the angle) is missing, so you can't assume anything about a specific path.
I don't see where any of this prohibits a specific path to (0,0). It doesn't seem to address the legitimacy of using a specific path to obtain a valid limit.

Samy_A said:
You answered that question in the first post:
And in that post I keep saying "seems" not to converge just as I would say 0/0 does not seem to converge. But that is not proof.
 
  • #30
friend said:
I don't see where any of this prohibits a specific path to (0,0). It doesn't seem to address the legitimacy of using a specific path to obtain a valid limit.
Because ##|f(x,y)-L| \lt \epsilon## must hold for all ##(x,y) \in \mathbb R²## satisfying ##0<\|(x,y)\|<\delta##.
If the limit only exists along a specific path, or is different according to the path one uses, it is by definition not a limit.

friend said:
And in that post I keep saying "seems" not to converge just as I would say 0/0 does not seem to converge. But that is not proof.
Well, basically you have ##\displaystyle \lim_{t \rightarrow 0} \frac{1}{\sqrt t}e^{if(t)}##, and that doesn't converge.
 
  • #31
Samy_A said:
Because ##|f(x,y)-L| \lt \epsilon## must hold for all ##(x,y) \in \mathbb R²## satisfying ##0<\|(x,y)\|<\delta##.
If the limit only exists along a specific path, or is different according to the path one uses, it is by definition not a limit.
If we must include all of R2, then that would prohibit the convergence of a sequence, since a sequence considers only discrete values of R2 (and not all values of R2). And it would prohibit functions not defined for some values of R2, since ##|f(x,y)-L| \lt \epsilon## would not hold there. And I would contend that a specific path is no more restrictive on R2 than using only discrete values of R2 or using only the positive quadrant of R2.

Samy_A said:
Well, basically you have ##\displaystyle \lim_{t \rightarrow 0} \frac{1}{\sqrt t}e^{if(t)}##, and that doesn't converge.
OK, suppose I grant that. The question remains whether my limit in the first post converges or not. Each term by itself may not converge, but it is not clear that the sum does not converge when they are added. For there may be some values of the parameters that put them out of phase so that they do converge to zero. You claim that specific paths are not allowed by the definition of a limit. But I don't see that yet.
 
  • #32
friend said:
If we must include all of R2, then that would prohibit the convergence of a sequence, since a sequence considers only discrete values of R2 (and not all values of R2). And it would prohibit functions not defined for some values of R2, since ##|f(x,y)-L| \lt \epsilon## would not hold there. And I would contend that a specific path is no more restrictive on R2 than using only discrete values of R2 or using only the positive quadrant of R2.
All this is irrelevant as far as your expression in post #1 is concerned.

I gave a general definition of the limit of a function of two variables. If the domain is not the whole of ##\mathbb R²##, the definition is adapted accordingly.
Still, that does not mean that you can pick a convenient path to the limit point, and claim convergence because the function converges along that convenient path.
We have seen examples in this thread, I won't repeat them.
friend said:
OK, suppose I grant that. The question remains whether my limit in the first post converges or not. Each term by itself may not converge, but it is not clear that the sum does not converge when they are added. For there may be some values of the parameters that put them out of phase so that they do converge to zero. You claim that specific paths are not allowed by the definition of a limit. But I don't see that yet.
The question of whether your limit in the first post converges or not doesn't remain: it has been answered, your limit doesn't converge.

What you have is ##\displaystyle \lim_{t_1 \rightarrow 0} \frac{c_1}{\sqrt{t_1}}e^{if_1(t_1)} + \lim_{t_2 \rightarrow 0} \frac{c_2}{\sqrt{t_2}}e^{if_2(t_2)}##, where ##c_1, c_2## are nonzero constants and ##f_1, f_2## are real functions.
None of these two limits converges.

You want to consider the following expression:
##\displaystyle \lim_{t_1 \rightarrow 0}( \frac{c_1}{\sqrt{t_1}}e^{if_1(t_1)} + \frac{c_2}{\sqrt{t_2}}e^{if_2(t_2)})##, where ##t_2## is now some specific function of ##t_1##. Even if this limit converges, it will not be equal to the original expression, because we know that the original expression doesn't converge.

I'll conclude with a simple example:
Define ##f(t)=\frac{1}{t}## and ##g(t)=-\frac{1}{t}##.
What is ##\displaystyle \lim_{t_1 \rightarrow 0} f(t_1) + \lim_{t_2 \rightarrow 0} g(t_2) \ \ \ \ (1)##?
Clearly not convergent, as we get ##+\infty - \infty##, an indeterminate form.
But what about ##\displaystyle \lim_{t_1 \rightarrow 0}( f(t_1) + g(t_1))##?
Clearly ##\displaystyle \lim_{t_1 \rightarrow 0}( f(t_1) + g(t_1))=0 \ \ \ \ (2)##

However, no mathematician will consider that the convergence of (2) somehow implies the convergence of (1).
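Numerically (a sketch of my own, letting ##t_1## and ##t_2## shrink independently): the sum in (1) can be pushed to arbitrarily large positive or negative values, while along the diagonal the sum in (2) is identically 0.
[code]
# f(t) = 1/t and g(t) = -1/t: the sum of the two separate limits in (1) is an
# indeterminate infinity - infinity, but along t1 = t2 the combined expression (2) is 0.
def f(t):
    return 1.0 / t

def g(t):
    return -1.0 / t

print(f(1e-6) + g(2e-6))  # independent t1, t2: roughly +5e5
print(f(2e-6) + g(1e-6))  # swapped: roughly -5e5
print(f(1e-6) + g(1e-6))  # t1 = t2: exactly 0.0
[/code]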
 
  • #33
friend said:
I may have oversimplified. Let me think about it a bit.
Since this question was conceived in the context of quantum mechanics, it may turn out to be a virtue instead of a vice.
It sounds like you are confident that your construction has some valid physical use because of that context. For all we know, you may be right. Since we know nothing of the context, we cannot have any informed opinion on that. All we can say is that the expression you arrive at after performing your manipulations is not mathematically equivalent to the one you started with. Whether the final expression is applicable to the physical situation you are envisaging depends on what that physical situation is.

So let me reiterate Samy's request that you explain the physical context. With that context provided, someone here may be able to help you. Without it, we can do no more than observe that your end result does not follow mathematically from your starting point.
 
  • #34
Thanks for the help, guys (which includes possibly gals). But my enthusiasm may have been premature. I did a little more algebra to see if my limit was path dependent as andrewkirk may have suggested. It turns out that, sure, I can get the two phase factors to cancel out, but I pick up a negative square root in the second term. So it seems I cannot get rid of the phase and make them cancel as I hoped. Sorry for putting you through this for nothing.

As for the physical context that originally got me excited: the first term is the transition amplitude for an (unspecified) particle to go from x to x'. The second term is the amplitude for an antiparticle making the same transition. I was hoping that in the limit as the parameters go to zero, I could get them to cancel out. This would be consistent with virtual particles that pop in and out of existence without leaving any permanent wave function. Maybe there's still a way to do this, but I can't see it. But if I were able to get these conjugates to cancel out, at least in the limit, then I would be able to say that they exist everywhere, since there is no permanent effect (wave function). And I have some intuition and notes on how all of spacetime, matter, and energy could be described with these "virtual particles". And I wanted to get a firm mathematical foundation before getting too far. That dream has yet to be realized. Thanks, but I'm suspending my campaign until further notice.
 
  • #35
Another point is that in the original question, where a factor of i is pulled out of the second term (in the 4th displayed equation), it would be necessary to take its square root first. I don't know if the 1/2 powers are well defined in your physics situation. A simple way to handle them, if permitted by the physics, is this: if the angle is always chosen to lie in the interval (-π, π), then the square root has an angle that is half that angle. Since the quantities raised to the 1/2 power are both on the imaginary axis, this would mean the square roots have angles -π/4 (first term) and π/4 (second term), respectively.
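To make that concrete (my own worked step, assuming t1, t2 > 0 and the principal branch with angles in (-π, π)):
[tex]\left( \frac{m}{2\pi \hbar i {t_1}} \right)^{1/2} = \left( \frac{m}{2\pi \hbar {t_1}} \right)^{1/2} e^{-i\pi /4}, \qquad \left( \frac{m}{2\pi \hbar (-i) {t_2}} \right)^{1/2} = \left( \frac{m}{2\pi \hbar {t_2}} \right)^{1/2} e^{+i\pi /4}[/tex]
since 1/i has angle -π/2 and 1/(-i) has angle +π/2, and the principal square root halves the angle.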
 
