Puiseux/Taylor Expansion of an Integrand pre-Integration

In summary, the conversation discusses the problem of integrating a function that depends on a parameter and then series expanding the result as the parameter approaches zero. Expanding the integrand before integrating can lead to divergent results. Various strategies are mentioned, such as Borel resummation, dimensional regularization, and Ramanujan resummation, but these approaches are context-dependent and may not work in all situations. The conversation also touches on IR divergences and the difficulty of integrating analytically after adding a mass regulator. The expert suggests that, in the given calculus example, the integral at r = 0 is formally divergent and numerics will reflect this unless the partitions are chosen in a way that makes the integral accidentally convergent.
  • #1
Hepth
Gold Member
My problem : I have a function that I want to integrate, in the limit that a parameter goes to zero.

I have a function ##f[x,r]##

I want to compute ##F[r] = \int dx f[x,r]## and then series expand as ##r \rightarrow 0##

This is impossible algebraically for me, but may be possible if I can expand ##f[x,r]## about ##r\rightarrow0## first. But I get a divergent result if I do.

For example:

Assume I have the function ##f[x,r] = \frac{1}{x+r}##

Then $$F[r] = \int_0^1 \frac{1}{x+r} dx = log(\frac{1}{r} + 1)$$ and so the expansion is $$F[r] \approx -log(r) + r -\frac{r^2}{2} + ...$$

which is good. That is what I want.

But if I expand first, the truncation will cause a divergence unless I let it go to infinity:

$$\int_0^1 \frac{1}{x+r} dx \approx\int_0^1 (\frac{1}{x} -\frac{r}{x^2}+\frac{r^2}{x^3} + ...)dx
$$
$$ =\bigl[ log(x) + \frac{r}{x} -\frac{r^2}{2 x^2} + (...)\bigr]_0^1$$

in which each term is divergent at the lower limit, unless you resum the series completely. Are there any strategies for regulating this? Z transforms? Plus distributions? Etc.?
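
To make the failure concrete, here is a minimal numerical sketch of what I mean (plain Python; the helper names are just for illustration), comparing the exact ##F[r]## with the term-by-term integral of the truncated expansion, cut off at a lower limit ##\epsilon## instead of ##0##:

```python
import math

def F_exact(r):
    # F[r] = integral_0^1 dx/(x+r) = log(1/r + 1); finite for any r > 0
    return math.log(1.0 / r + 1.0)

def F_truncated(r, eps, n_terms=3):
    # Integrate the truncated expansion 1/x - r/x^2 + r^2/x^3 - ...
    # term by term, but from eps to 1 instead of from 0 to 1.
    total = -math.log(eps)                                 # the 1/x term gives log(1) - log(eps)
    for k in range(1, n_terms):
        antider = lambda x: -((-r) ** k) / (k * x ** k)    # antiderivative of (-r)^k / x^(k+1)
        total += antider(1.0) - antider(eps)
    return total

r = 1e-3
print("exact:", F_exact(r))
for eps in (1e-1, 1e-2, 1e-3, 1e-4):
    # each truncation blows up as eps -> 0, even though F_exact(r) stays finite
    print(eps, F_truncated(r, eps))
```

For small ##\epsilon## the truncated result runs away, exactly as the bracketed expression above suggests.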

Thank you for any insight.
 
  • #2
Generally speaking, there are many different approaches you can take, like Borel (re)summation, dimensional regularization (zeta function regularization in 2-D), or Ramanujan resummation. But all of this is very context dependent, and it depends upon the functional properties (analyticity, etc.) of your "real" underlying function. It also depends upon what exactly is causing the infinity. Without looking at the example very hard, I'd guess that zeta function regularization should do the trick. Sometimes, however, there is no "UV-complete" (in HEP language) underlying function that is valid for all limits of the parameters.
 
  • #3
Well, this is a final integration over the kinematic space. The loop integrations are done, and the function has products of 2 logs, dilogs, and some local divergences that were already regulated away during the calculation. It's a 2-D integration, finally.

Numerically, it is convergent. But an analytic form is desirable, especially in the low-mass limit (r → 0). The problem is pin-pointing the divergences, as you say, which is tough. Extremely, extremely tough. There are a lot of "pole-like" structures that cancel, like in my previous example, where $$\lim_{a\rightarrow 0}\left[\int_0^1 \frac{dx}{x+a} + \ln(a)\right] = 0,$$ which is convergent.

Even for this simple example, do you have a specific way to expand this integrand, without already knowing what the integral of ##\frac{1}{x+a}## is, and get ##0## as a result?
 
  • #4
Hepth said:
Well, this is a final integration over the kinematic space. The loop integrations are done, and the function has products of 2 logs, dilogs, and some local divergences that were already regulated away during the calculation. It's a 2-D integration, finally.

Numerically, it is convergent. But an analytic form is desirable, especially in the low-mass limit (r → 0). The problem is pin-pointing the divergences, as you say, which is tough. Extremely, extremely tough. There are a lot of "pole-like" structures that cancel, like in my previous example, where $$\lim_{a\rightarrow 0}\left[\int_0^1 \frac{dx}{x+a} + \ln(a)\right] = 0,$$ which is convergent.

Even for this simple example, do you have a specific way to expand this integrand, without already knowing what the integral of ##\frac{1}{x+a}## is, and get ##0## as a result?

Ah, okay. If I've interpreted what you've said correctly, you're running into what's known as an IR divergence, which needs to be canceled separately from the UV divergences. Schwartz's QFT notes discuss this (http://isites.harvard.edu/fs/docs/icb.topic521209.files/QFT-Schwartz.pdf), as do other QFT books, e.g. chapter 26 of Srednicki and section 4.1.2 of Itzykson & Zuber. This happens, for instance, in quantized Maxwell theory (and necessarily in all conformal field theories). Essentially what's going on is that you're integrating something that doesn't die off sufficiently fast as ##k \to 0## (e.g. a flux), so over any finite region of integration you get a finite answer, but as you try to integrate over an infinite volume (i.e. all the way down to ##k=0##), you get a divergent quantity.

One thing you might try, which is done in QED, is add a mass in the propagator of the guilty particle, and then send it to zero after you've performed all integrals.
 
  • #5
This is my problem, but as in your last sentence, this is what I am trying to do. The "a" in the example above is the mass that I inserted. The problem is that the integration is not possible analytically. Or maybe it is, but it is extremely difficult. I would probably have to undo a lot of work and solve the loop integrals in terms of hypergeometric functions instead, so I could do some sort of recursion on them to get an analytic result.

So my problem is that, after adding this mass, I cannot integrate it analytically, only numerically.

I guess there is no solution in this case. I just need some advanced methods of integration, which is what I was looking for, but it is extremely difficult.
 
  • #6
Hepth said:
This is my problem, but as in your last sentence, this is what I am trying to do. The "a" in the example above is the mass that I inserted. The problem is that the integration is not possible analytically. Or maybe it is, but it is extremely difficult. I would probably have to undo a lot of work and solve the loop integrals in terms of hypergeometric functions instead, so I could do some sort of recursion on them to get an analytic result.

So my problem is that, after adding this mass, I cannot integrate it analytically, only numerically.

I guess there is no solution in this case. I just need some advanced methods of integration, which is what I was looking for, but it is extremely difficult.

EDIT: Let me stick to your calculus example. Okay, when r=0, the integral that you wrote down IS formally divergent. Numerics have no choice but to reflect this, unless the Riemann partitions (i.e. how you discretized) that you've chosen somehow accidentally make this divergent integral convergent, which almost certainly hasn't happened.

So in the example that you give, if you do numerical integrals and check how the integral's value changes as you make r ~ ##2^{-N}## for larger and larger ##N##, you will discover that your integral doesn't converge unless you subtract off the log running (i.e. make it a different function).
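
In code, the kind of check I mean would look something like this (just a sketch in Python, assuming scipy is available):

```python
import math
from scipy.integrate import quad

def F(r):
    # numerically evaluate integral_0^1 dx / (x + r)
    val, _err = quad(lambda x: 1.0 / (x + r), 0.0, 1.0)
    return val

for N in range(2, 12):
    r = 2.0 ** (-N)
    # F(r) keeps growing like -log(r) as N increases;
    # only the subtracted combination F(r) + log(r) settles down to a limit
    print(N, F(r), F(r) + math.log(r))
```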

So, first, please explain to me why you argue that your ## r \to 0 ## limit makes sense, even at a numerical level.
 
  • #7
I will just add a caution that may be unnecessary: When using a power series (or Laurent series), it's important to keep track of which values of the variable it is valid for.

The integration result of the form log(x + 1), where x = 1/r, has a standard Taylor series:

$$\log(x + 1) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots,$$

which is valid for |x| < 1. (Obtained by integrating the geometric series for 1/(1+x).)

Plugging in 1/r we get

$$\log\left(\frac{1}{r} + 1\right) = \frac{1}{r} - \frac{1}{2r^2} + \frac{1}{3r^3} - \frac{1}{4r^4} + \cdots,$$

which is valid for |1/r| < 1, or in other words |r| > 1.

If you write

$$\log\left(\frac{1}{r} + 1\right) = \log(1 + r) - \log(r)$$

then you can change the last series to

$$\log\left(1 + \frac{1}{r}\right) = -\log(r) + r - \frac{r^2}{2} + \frac{r^3}{3} - \frac{r^4}{4} + \cdots$$

which is valid for |r| < 1.
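
If it helps, this bookkeeping is easy to check with a computer algebra system; here is a small sketch in Python using SymPy (assumed available), expanding only the log(1 + r) piece and keeping the exact -log(r) outside the expansion:

```python
import sympy as sp

r = sp.symbols('r', positive=True)

# Expand only log(1 + r), whose Taylor series about r = 0 converges for |r| < 1,
# and keep the exact -log(r) piece outside the expansion.
taylor = sp.series(sp.log(1 + r), r, 0, 5)
F = -sp.log(r) + taylor
print(F)   # -log(r) + r - r**2/2 + r**3/3 - r**4/4 + O(r**5) (up to term ordering)

# Numerical sanity check at r = 1/2, inside the region 0 < r <= 1
r0 = sp.Rational(1, 2)
print(sp.log(1 + 1/r0).evalf(), F.removeO().subs(r, r0).evalf())
```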

But I'm not sure why it's important to first integrate and then expand the antiderivative in a series before evaluating at the limits of integration.
 
  • #8
zinq said:
then you can change the last series to

$$\log\left(1 + \frac{1}{r}\right) = -\log(r) + r - \frac{r^2}{2} + \frac{r^3}{3} - \frac{r^4}{4} + \cdots$$

which is valid for |r| < 1.

Yeah, even so, the limit ## r \to 0 ## in any of these expressions (including the formal definition of the logarithm, where no expansion assumptions have been made) leads to a formally divergent expression. Even in the cases outside the validity of the expansion, you still find that the expression is ill-defined in that limit.

EDIT: Yeah, I believe I agree with you about the radius of convergence.
 
  • #9
EDIT: Okay, there are some pretty conspicuous errors in your calculation.

Hepth said:
I want to compute ##F[r] = \int dx f[x,r]## and then series expand as ##r \rightarrow 0##

This is impossible algebraically for me, but may be possible if I can expand ##f[x,r]## about ##r\rightarrow0## first. But I get a divergent result if I do.

If you did there what you did here, I'm 90% sure that you simply took the order of limits incorrectly.

Assume I have the function ##f[x,r] = \frac{1}{x+r}##

Then $$F[r] = \int_0^1 \frac{1}{x+r} dx = log(\frac{1}{r} + 1)$$ and so the expansion is $$F[r] \approx -log(r) + r -\frac{r^2}{2} + ...$$

So we disagree at this step. Right now, you're supposed to be assuming that ## r \neq 0 ##, which you haven't done to get that indefinite integral. This can be justified because we're actually doing an analytic continuation around the pole at ## r=0 ##. (For physicists: this is equivalent to the justification for the ## + i \epsilon ## in the Feynman propagator, where you're actually doing a contour integral around that pole on the real axis. So you're analytically continuing the function in one direction around the pole, and integrating along that path.)

Once you make that assumption and use a substitution of the form ##x = r z##, you find:

## F(r) = \int_0^{r} dz \frac{1}{1 + z} = \ln[1+z] |_0^r = \ln[1+r] ##

Now, when I take the limit ## r \to 0 ##, this is totally convergent: ## F(r) \to 0 ##. Your limit is screwed up because you're not assuming ## r \neq 0 ##; presumably you plugged this into Mathematica and it just took the solution which was valid (i.e. doing the literal integral over the real line). So the literal integral will include the knowledge that the integral should diverge in the limit ## r \to 0 ##. But that of course will reintroduce the IR divergence that you're trying to remove with ##r##, so the game is to assume a non-zero mass until the very, very end. (Also, the point is that physical observables, things you actually measure in a lab, will not contain IR divergences.)

(The whole point of cancelling IR divergences is to assume a non-zero mass until the end of the calculation. As you can see, this amounts to a kind of analytic continuation, which presumably can be re-explained in terms of zeta function regularization.)

But if I expand first, the truncation will cause a divergence unless I let it go to infinity:

$$\int_0^1 \frac{1}{x+r} dx \approx\int_0^1 (\frac{1}{x} -\frac{r}{x^2}+\frac{r^2}{x^3} + ...)dx
$$
$$ =\bigl[ log(x) + \frac{r}{x} -\frac{r^2}{2 x^2} + (...)\bigr]_0^1$$

in which each term is divergent at the lower limit, unless you resum the series completely.

This is a separate point, but there's an error here: you've already used a non-trivial resummation. This is because, as zinq correctly points out, you're being wildly cavalier with the radii of convergence, and if you want to ignore the radius of convergence you need to use formal resummation procedures (in this case, you can definitely use Abel summation to resum your formal series, and I'm 99% sure without checking that zeta function regularization will formally resum this with no problems). In other words, you're assuming that you can resum the series, for instance, when ##x = 1##, which definitely requires a formal resummation, but the ##x \to 0## cases make things much worse.

Essentially, you need to include both the cases ## \frac{x}{r} \geq 1 ## and ## \frac{x}{r} \leq 1 ##, since you're taking two limits here: ## x \to 0## in the integral and ## r \to 0 ## in your expansion. Thus, you are outside of real analysis (ordinary real-variable calculus), and you need to define what you're doing in terms of contour integrals or formal summation procedures (which amount to the same thing).
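
To illustrate concretely what such a formal resummation buys you (using Borel summation, which was mentioned in post #2, rather than Abel summation; just a sketch in Python, assuming scipy is available): for fixed ##x > 0##, the formal geometric series ##\frac{1}{x}\sum_n (-r/x)^n## can be Borel-resummed back to ##\frac{1}{x+r}## even when ##r/x > 1##, where the series itself diverges.

```python
import math
from scipy.integrate import quad

def borel_sum_geometric(c):
    # The formal series sum_n (-c)^n has Borel transform sum_n (-c*s)^n / n! = exp(-c*s).
    # Its Borel sum is integral_0^inf exp(-s) * exp(-c*s) ds = 1/(1+c),
    # which converges for any c > -1, even where the original series diverges.
    val, _err = quad(lambda s: math.exp(-s) * math.exp(-c * s), 0.0, math.inf)
    return val

x, r = 0.1, 0.5                 # here r/x = 5 > 1: the naive geometric series diverges
resummed = borel_sum_geometric(r / x) / x
print(resummed, 1.0 / (x + r))  # both ~ 1.6667
```

The same trick applied term by term at fixed ##x## reproduces ##\frac{1}{x+r}##, but of course the subsequent ##x## integration still has to be handled carefully near ##x = 0##.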
 
  • #10
Maybe I am misunderstanding, but I don't see the problem referred to by "Right now, you're supposed to be assuming that r≠0, which you haven't done to get that indefinite integral."

The integral of 1/(x+r) from x = 0 to x = 1, which is

$$F(r) = \ln(1+r) - \ln(0+r) = \ln\left(1 + \frac{1}{r}\right),$$

makes perfect sense for r > 0. (Plus, this is a definite integral.)

Then the next step, to take

$$F(r) = \ln(1 + r) - \ln(r)$$

and expand the first term ln(1+r) as a Taylor series (valid for |r| < 1), to get

$$F(r) = -\ln(r) + r - \frac{r^2}{2} + \frac{r^3}{3} - \frac{r^4}{4} + \cdots$$

(which is also valid for r = 1, as it happens) means that if we have also assumed r > 0, then our constraints on r should now be

0 < r ≤ 1.

So up through this step, there is no analytic continuation needed; it is just true that if you plug in any r in the range 0 < r ≤ 1, the equation

$$\ln\left(1 + \frac{1}{r}\right) = -\ln(r) + r - \frac{r^2}{2} + \frac{r^3}{3} - \frac{r^4}{4} + \cdots$$

will be true.

Apologies if I have missed the point.
 
  • #11
zinq said:
Maybe I am misunderstanding, but I don't see the problem referred to by "Right now, you're supposed to be assuming that r≠0, which you haven't done to get that indefinite integral."

The integral of 1/(x+r) from x = 0 to x = 1, which is

$$F(r) = \ln(1+r) - \ln(0+r) = \ln\left(1 + \frac{1}{r}\right),$$

makes perfect sense for r > 0. (Plus, this is a definite integral.)

Then the next step, to take

$$F(r) = \ln(1 + r) - \ln(r)$$

and expand the first term ln(1+r) as a Taylor series (valid for |r| < 1), to get

$$F(r) = -\ln(r) + r - \frac{r^2}{2} + \frac{r^3}{3} - \frac{r^4}{4} + \cdots$$

(which is also valid for r = 1, as it happens) means that if we have also assumed r > 0, then our constraints on r should now be

0 < r ≤ 1.

So up through this step, there is no analytic continuation needed; it is just true that if you plug in any r in the range 0 < r ≤ 1, the equation

$$\ln\left(1 + \frac{1}{r}\right) = -\ln(r) + r - \frac{r^2}{2} + \frac{r^3}{3} - \frac{r^4}{4} + \cdots$$

will be true.

Apologies if I have missed the point.

Indeed, you're right, I made a simple calculus error; my bound should have been "1/a," not "a." In any case, I go back to my previous point: the divergences they're calculating are real, so whatever observable he or she is claiming to calculate is actually IR divergent. This typically means they are not calculating something correctly, or what they're claiming is a physical observable isn't an observable. The log(r) is absolutely divergent as ## r \to 0 ##, so the numerical integral must reflect this.

No matter what, there's an error here, and without much more detail, it's impossible to guess where.
 
  • #12
I don't think I have an error.

Assume the observable is

$$\mathcal{\hat{O}} = \int_0^1 dx \left( \frac{1}{x+r} + \log(r)\right) $$

The result, for ##0 < r \leq 1##, is

$$\mathcal{\hat{O}} = \log(1+r) $$

and in the limit of ##r\rightarrow 0##

$$\lim_{r\rightarrow 0} \log(1+r) = 0$$

I see no problem. It is convergent after integration, and in the proper limit. (I had this example in my second post.)
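
Just to be explicit, here is a quick symbolic check of that claim (a sketch in Python with SymPy assumed):

```python
import sympy as sp

x = sp.symbols('x')
r = sp.symbols('r', positive=True)

# the regulated "observable": integral_0^1 dx [ 1/(x+r) + log(r) ]
O_hat = sp.integrate(1/(x + r) + sp.log(r), (x, 0, 1))

print(sp.simplify(O_hat))                        # log(r + 1)
print(sp.limit(sp.simplify(O_hat), r, 0, '+'))   # 0, i.e. convergent in the massless limit
```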

Yes, it is an IR divergence from the massless limit of a process, but it is indeed convergent, even in the massless limit.

My problem is that, upon calculation in the MASSIVE regime, the integrand is not easily integrable. And by "not easy" I mean extremely difficult, both in terms of its complexity and the number of different terms.

I know that the Taylor expansion of the integrand IS integrable algebraically, which makes things easier, though I also know that this is not the correct way of performing the limit.

In essence, I wish to find a way to "undo" the IR regulator that was used and move it into some other method of regulation, like dim-reg. Or ANYTHING; that is why I was looking for methods of doing this.

Also, thank you for your help; I really appreciate it!
 

Related to Puiseux/Taylor Expansion of an Integrand pre-Integration

1. What is the Puiseux/Taylor expansion of an integrand pre-integration?

The Puiseux/Taylor expansion is a technique for approximating a function near a point by a series: a Taylor series in integer powers of the variable, or a Puiseux series, which also allows fractional (and negative) powers. Expanding an integrand before integration can break a complicated function into simpler terms that are easier to integrate.

2. Why is the Puiseux/Taylor expansion important in integration?

The Puiseux/Taylor expansion lets us approximate a function and break it down into simpler terms, making the integral easier to evaluate. This can be especially helpful for functions that are difficult or impossible to integrate in closed form.

3. How do you determine the coefficients in a Puiseux/Taylor expansion?

The coefficients in a Taylor expansion can be determined by taking derivatives of the function at the expansion point (the n-th coefficient is the n-th derivative divided by n!). They can also be found by manipulating known series or by using a computer algebra system, depending on the complexity of the function.
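
For example, a minimal sketch in Python using SymPy (assumed available), computing the first few coefficients of log(1 + x) both ways:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.log(1 + x)

# n-th Taylor coefficient about x = 0 is the n-th derivative at 0 divided by n!
coeffs = [sp.diff(f, x, n).subs(x, 0) / sp.factorial(n) for n in range(1, 5)]
print(coeffs)                   # [1, -1/2, 1/3, -1/4]

print(sp.series(f, x, 0, 5))    # x - x**2/2 + x**3/3 - x**4/4 + O(x**5)
```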

4. Can the Puiseux/Taylor expansion be used for any function?

In principle, the expansion can be applied to any function that is sufficiently smooth (or, for a Puiseux series, has at worst an algebraic branch point) at the expansion point. However, the accuracy of a truncated expansion depends on the function and the number of terms included, and some functions require many terms to approximate accurately, which can make the approach impractical for integration.

5. Are there any limitations to using the Puiseux/Taylor expansion in integration?

While the Puiseux/Taylor expansion can be a powerful tool for integration, it does have its limitations. It may not be suitable for some functions, particularly those that are discontinuous or have singularities inside the integration region. Additionally, the expansion is only valid within its region of convergence; outside that region, adding more terms does not improve the approximation, so it is important to keep track of where the series is valid.
