Convergence tests for improper multiple integrals

For improper single integrals with positive integrands of the first, second, or mixed type, there are the comparison test and the limit comparison test for determining convergence or divergence. There is also the absolute convergence theorem.

For multiple integrals I know that the comparison test can be used as well.

Question: But can the limit or other tests be generalized to multiple integrals? Could you provide some references?

Added: I have in mind Riemann integration only.

Example: When we evaluate (Proof 1 of this and Apostol’s article) the improper double integral $$\int_{0}^{1}\int_{0}^{1}\frac{1}{1-xy}\, \mathrm{d}x\,\mathrm{d}y,$$ we find that it is finite (indeed it equals $\pi^2/6$).
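As a quick numerical sanity check (my own sketch, not part of the question): expanding the integrand as a geometric series, $\frac{1}{1-xy} = \sum_{n \ge 0} (xy)^n$, and integrating term by term over the unit square turns the double integral into $\sum_{n \ge 1} 1/n^2$, whose partial sums can be compared with $\pi^2/6$:

```python
import math

# Term-by-term integration of the geometric series 1/(1 - x*y) = sum (x*y)^n
# over the unit square gives sum_{n>=1} 1/n^2, so the improper integral
# converges, to pi^2/6 (the Basel problem).
partial_sum = sum(1.0 / n**2 for n in range(1, 100_000))
print(partial_sum, math.pi**2 / 6)  # the partial sums approach pi^2/6
```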

Question 2: Which test should we apply to know in advance that it converges?



The following response attempts to address two aspects I perceive to be of interest in this question: one concerns whether there are some relatively rote or standard procedures for evaluating the convergence of certain kinds of improper multivariate integrals; the other is whether there is some simple intuition behind the subject. To keep the discussion from becoming too abstract, I will use the double integral (introduced in the question) as a running example.


Many integrals that are improper due to singular behavior of their integrand can be analyzed, both rigorously and intuitively, with a simple comparison. The idea is that any singularity of the integrand that doesn’t blow up too fast compared to the codimension of the manifold of singular points can be “smoothed over” by the integral, provided the domain of integration doesn’t contact the singular points too “tightly.” It remains to make these ideas precise.


When the domain of integration is relatively compact, as the one in the example is, problems with convergence occur only at the possible singularities of the integrand within the closure of the domain, which in this case is the isolated point $(x,y) = (1,1)$. However, it is evident that if the domain of integration were expanded, any point of the curve $1 - xy = 0$ could be a singularity. In general, many singularities occur this way: an integrand $f(\mathbf{x})$ is locally of the form $g(\|h(\mathbf{x})\|)$ where $h: \mathbb{R}^n \to \mathbb{R}^k$ and $g(r) = r^{-s} u(r)$ for some function $u$ that is bounded in some nonnegative neighborhood of $0$. In this case $h(\mathbf{x}) = 1 - xy$ and $g(r) = r^{-1}$, so $s=1$.
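For the running example this classification can be checked directly; the sketch below (function names are my own) confirms that the integrand is exactly $g(\|h(x,y)\|)$ with growth exponent $s = 1$:

```python
# In the example the integrand 1/(1 - x*y) is exactly of the local form
# g(|h(x, y)|): here h(x, y) = 1 - x*y and g(r) = r**(-s) * u(r) with the
# bounded factor u identically 1, so the growth exponent is s = 1.
def h(x, y):
    return 1 - x * y

def g(r):
    return r ** -1  # s = 1, u(r) = 1

for x, y in [(0.5, 0.7), (0.9, 0.9), (0.999, 0.999)]:
    assert abs(1 / (1 - x * y) - g(abs(h(x, y)))) < 1e-9
print("integrand matches g(|h|) with s = 1")
```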

When $h$ is differentiable at $\mathbf{0}$ with nonsingular derivative there, principles of Calculus invite us to linearize the situation, and geometric ideas suggest a simple form for the linearization. Specifically, the Implicit Function Theorem guarantees that local coordinates can be found near such a singularity in which the derivative of $h$ has the form $\left( \mathbf{1}_{k \times k} \;;\; \mathbf{0}_{k \times (n-k)} \right)$. The singularity itself can be translated to the origin where, to first order, the zeros of $h$ locally correspond with the vector subspace generated by the last $n-k$ coordinate axes. The effect on the integral is to introduce a factor given by the determinant of the Jacobian, which is locally bounded and so does not affect the convergence. In this example we can explicitly make these changes of coordinates by translating the singularity to $\mathbf{0}$, computing the gradient of $h$ there, and rotating it to point along the first axis. This amounts to the change of variables $(u, v) = (2 - x - y,\ y - x)$, in which $h(u, v) = u - \frac{1}{4}(u^2 - v^2)$, equal to $u$ to first order. Within a small neighborhood of the origin, the domain of integration becomes a 90-degree wedge of points $(u,v)$ with $u \ge |v|$, and the zeros of $h$ coincide with the $v$ axis to first order.
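The algebra behind this change of variables is easy to verify numerically; the following sketch checks the identity $1 - xy = u - \frac{1}{4}(u^2 - v^2)$ under $(u, v) = (2 - x - y,\ y - x)$ at random points of the unit square:

```python
import random

# Verify the claimed form of h after the change of variables
# (u, v) = (2 - x - y, y - x): algebra gives 1 - x*y = u - (u^2 - v^2)/4,
# which equals u to first order near the origin of the (u, v) plane.
random.seed(0)
for _ in range(1000):
    x, y = random.random(), random.random()
    u, v = 2 - x - y, y - x
    assert abs((1 - x * y) - (u - (u**2 - v**2) / 4)) < 1e-12
print("h(u, v) = u - (u^2 - v^2)/4 verified")
```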

Let’s consider the case where the domain of integration locally contains an isolated point of the singular set $H$ (the zeros of $h$), which we translate to $\mathbf{0}$. Estimate the integral near this singularity by adopting spherical coordinates there. To do this, we need a closer look at the domain of integration in spherical coordinates. Any $\epsilon \gt 0$ determines the set of all possible unit direction vectors from $\mathbf{0}$ toward elements of the domain of integration that lie within a distance $\epsilon$ of $\mathbf{0}$. In the example, this corresponds to the set of points on the unit circle with angles between $-\pi/4$ and $\pi/4$. Suppose that for sufficiently small $\epsilon$ the closure (in the unit sphere) of this set of direction vectors does not include any of the tangent vectors of $H$ at $\mathbf{0}$. Then an easy estimate shows that the angular part of the integral in spherical coordinates is bounded. This reduces the task of integration to estimating the radial part. Because the volume element in spherical coordinates $(\rho, \Omega)$ is $\rho^{n-1} \,\mathrm{d}\rho \,\mathrm{d}\Omega$, the radial integral is proportional to $\int \rho^{n-1} g(\rho) \,\mathrm{d}\rho$. This is the key calculation: it shows how integration in $n$ dimensions can “cancel” a factor of $\rho^{1-n}$ in the integrand.

We’re reduced to evaluating a 1D integral, improper at $0$, whose integrand behaves like $\rho^{n-1} g(\rho)$. By virtue of the assumptions about $g$, this is bounded above by a multiple of $\rho^{n-1-s}$. Provided $n-1-s \gt -1$, this converges (at a rate proportional to $\rho^{n-s}$). In the example, $n=2$, $s=1$, so convergence is assured.
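The key radial estimate has a simple closed form that makes the convergence condition visible; here is a small sketch (the helper name is mine) evaluating $\int_\epsilon^1 \rho^{n-1-s}\,\mathrm{d}\rho$ as $\epsilon \to 0$:

```python
# Radial part of the integral in spherical coordinates: the volume element
# contributes rho^(n-1) and the singular integrand contributes rho^(-s),
# so the radial integral behaves like the integral of rho^(n-1-s) near 0,
# which converges exactly when n - 1 - s > -1, with limiting value 1/(n - s).
def radial_tail(n, s, eps):
    # closed form of the integral of rho^(n-1-s) over [eps, 1], for n != s
    return (1 - eps ** (n - s)) / (n - s)

for eps in (1e-2, 1e-4, 1e-8):
    print(radial_tail(2, 1, eps))  # n = 2, s = 1 (the example): tends to 1
```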


When $n-1-s \le -1$, the behavior of the original integral depends on exactly how the domain “pinches out” as it approaches the singularity, so more detailed analysis is needed. The spirit of the exercise doesn’t change, though: with Calculus we linearize the situation (relying on simple estimates to take care of the second-order remainder term), we use spherical coordinates centered at an isolated singularity in the domain (or, more generally, cylindrical coordinates near a more complicated singularity) to perform the integration, and we estimate the rate of growth of the integral near the singularity in terms of a radial component and a contribution from a spherical angle. The behavior of the integral over the spherical angle as we approach the singularity is determined by the shape of the domain near the singularity, so often some careful (and interesting) geometric analysis is needed. This approach is characteristic of the proofs of many theorems in Several Complex Variables, such as the Edge of the Wedge Theorem (if I recall correctly; I’m reaching back to memories now a quarter century old). One good reference is Steven Krantz’s book Function Theory of Several Complex Variables.


The behavior of the integral of $(1-xy)^{-1}$ near $(1,1)$ can be analyzed by adopting polar coordinates $(\rho, \theta)$ centered at $(1,1)$, observing that the domain here forms a wedge whose point contacts the singular set $1 - xy = 0$ “transversely,” noticing that the radial behavior of the integrand is $\rho^{-1}$, remembering that the area element in polar coordinates has a $\rho \,\mathrm{d}\rho$ term, and noting that the integral of $\rho^{-1} \cdot \rho \,\mathrm{d}\rho$ converges at $0$. Indeed, what originally looked like a singularity has entirely disappeared. This procedure generalizes to multidimensional integrals subject to fairly mild restrictions on the nature of the singularities of their integrands and on the geometry of the domain of integration near those singularities.
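This disappearance of the singularity can be seen numerically. Writing $x = 1 - \rho\cos\theta$, $y = 1 - \rho\sin\theta$ (a parametrization I'm choosing for the sketch, with $\theta \in [0, \pi/2]$ covering the corner of the unit square), the integrand times the area-element factor $\rho$ simplifies and stays bounded as $\rho \to 0$:

```python
import math

# Polar coordinates centered at (1, 1): x = 1 - r*cos(t), y = 1 - r*sin(t).
# The integrand times the area-element factor r is
#   r / (1 - x*y) = 1 / (cos t + sin t - r*cos t*sin t),
# which stays bounded as r -> 0 because cos t + sin t >= 1 on [0, pi/2]:
# the apparent singularity has disappeared.
def rho_times_integrand(r, t):
    x, y = 1 - r * math.cos(t), 1 - r * math.sin(t)
    return r / (1 - x * y)

for r in (1e-1, 1e-4, 1e-8):
    print(rho_times_integrand(r, math.pi / 4))  # approaches 1/sqrt(2)
```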