
# The Shortest Path To The Abel Prize

By RJLipton+KWRegan

While melding topology, geometry, and analysis

Karen Uhlenbeck is a mathematician who has won a number of awards in the past and has just now been announced as winner of the 2019 Abel Prize.

Today Ken and I want to explain a tiny bit about what Uhlenbeck did.

The Abel Prize citation says that Uhlenbeck won for

“pioneering achievements in geometric partial differential equations, gauge theory, and integrable systems, and for the fundamental impact on analysis, geometry and mathematical physics.”

A story in Quanta and another in Scientific American are among those with readable summaries of the general nature of this work. The latter describes Uhlenbeck’s discovery with the mathematician Jonathan Sacks of a phenomenon called bubbling as follows:

Sacks and Uhlenbeck were studying ‘minimal surfaces,’ the mathematical theory of how soap films arrange themselves into shapes that minimize their energy. But the theory had been marred by the appearance of points at which energy appeared to become infinitely concentrated. Uhlenbeck’s insight was to “zoom in” on those points to show that this was caused by a new bubble splitting off the surface.

Some of the coolest comments are by Uhlenbeck’s doctoral graduate Mark Haskins in the story in the current issue of Nature.

Haskins says Uhlenbeck is one of those mathematicians who have ‘an innate sense of what should be true,’ even if they cannot always explain why.

The story recounts how he was often baffled by her answers to his questions, thinking Uhlenbeck had misheard them. But

“maybe weeks later, you would realize that you had not asked the correct question.”

## Calculus Of Variations

Simon Donaldson wrote a piece in the current issue of AMS Notices that explains Uhlenbeck’s research in the Calculus of Variations. The article starts with $\displaystyle F(u) = \int \Phi(u,u') dx.$

You can think of ${F(u)}$ as assigning a cost to a function ${u=u(x)}$. The goal of the calculus of variations is to find the best ${u}$ that minimizes ${F(u)}$ subject to some conditions on ${u}$. This is a huge generalization of the simple minimization problems that arise in basic calculus. He then goes on to explain that in order to study the minima of such a functional one quickly needs to examine partial differential equations. The math gets complex and beautiful very quickly.
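To make the idea concrete, here is a small numerical sketch of our own (not from Donaldson's article) that assigns a cost to candidate functions. We use the Dirichlet energy ${\Phi(u,u') = (u')^2}$ purely as an illustrative choice of ${\Phi}$:

```python
import math

def functional_F(u, du, phi, a=0.0, b=1.0, n=1000):
    """Approximate F(u) = integral_a^b phi(u(x), u'(x)) dx by the midpoint rule."""
    h = (b - a) / n
    return sum(phi(u(a + (i + 0.5) * h), du(a + (i + 0.5) * h)) * h
               for i in range(n))

# Illustrative cost (our choice, not from the post): the Dirichlet energy
# phi(u, u') = (u')^2.  Both candidates run from u(0) = 0 to u(1) = 1.
phi = lambda u, du: du * du
line = functional_F(lambda x: x, lambda x: 1.0, phi)           # u(x) = x
curve = functional_F(lambda x: x * x, lambda x: 2.0 * x, phi)  # u(x) = x^2
```

Among all smooth ${u}$ with ${u(0)=0}$ and ${u(1)=1}$, the straight line minimizes this particular cost, and the comparison reflects that: the line scores ${1}$, the parabola ${4/3}$.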

As computer scientists who like discrete structures, we find this is not our sweet spot. We rarely use partial derivatives in our work. Well, not very often. See these two posts for an example.

To get a taste of this area, we will consider a classic variational problem coming out of these helpful online notes. It leads to integrals such as $\displaystyle \int_{0}^{a} (1+u'(x)^{2})^{1/2} dx.$
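As a sanity check on this arc-length integral, here is a quick numerical sketch (our own illustration) that evaluates it for the straight line from ${(0,0)}$ to ${(a,b)}$, where the answer should be ${\sqrt{a^2+b^2}}$:

```python
import math

def arc_length(du, a, n=1000):
    """Approximate integral_0^a sqrt(1 + u'(x)^2) dx by the midpoint rule."""
    h = a / n
    return sum(math.sqrt(1.0 + du((i + 0.5) * h) ** 2) * h for i in range(n))

# For the straight line u(x) = (b/a) x from (0,0) to (a,b) = (3,4), the
# integrand is the constant sqrt(1 + (b/a)^2), so the integral is
# a * sqrt(1 + (b/a)^2) = sqrt(a^2 + b^2) = 5.
a, b = 3.0, 4.0
length = arc_length(lambda x: b / a, a)
```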

Well, we take to integrals even less than partial derivatives.

## Straight-line Shortest Path

We will change things up by starting with a discrete approach—as is our wont. Our given task is to prove in general that a straight line is the shortest path from the origin to a given point ${p = (a,b)}$. We first consider polygonal paths with ${n}$ line segments.

First, if ${n = 1}$ then the only option allowed is to go from ${(0,0)}$ to ${(a,b)}$ in one line segment. Thus the conclusion holds trivially: the Euclidean distance ${d(0,p)}$ is the minimum length of a ${1}$-segment path.

Now let ${n \geq 2}$. Let $\displaystyle P = (0,0) \rightarrow x_1 \rightarrow x_2 \rightarrow \cdots \rightarrow x_n = (a,b)$

be a series of line segments forming the shortest path from ${0}$ to ${p}$. By induction, the minimum length of a path of up to ${n-1}$ segments from ${x_1}$ to ${p}$ is ${d(x_1,p)}$, achieved by the straight line from ${x_1}$ to ${p}$. The length of the segment from the origin ${0}$ to ${x_1}$ is of course ${d(0,x_1)}$. Now the Euclidean triangle inequality says that ${d(0,x_1) + d(x_1,p)}$, which bounds the length of this path from below, is not less than ${d(0,p)}$. Thus the claim holds for ${n}$ and the induction goes through.
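The triangle-inequality step can be poked at numerically. The following sketch (ours) compares many random polygonal paths from the origin to ${p=(3,4)}$ against the straight-line distance ${d(0,p)=5}$:

```python
import math
import random

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def path_length(points):
    """Total length of the polygonal path through the given points."""
    return sum(dist(points[i], points[i + 1]) for i in range(len(points) - 1))

origin, p = (0.0, 0.0), (3.0, 4.0)
random.seed(1)
# 1000 random 6-segment polygonal paths from the origin to p; by the
# triangle inequality none can beat the straight-line distance d(0, p) = 5.
shortest = min(
    path_length([origin]
                + [(random.uniform(-10, 10), random.uniform(-10, 10))
                   for _ in range(5)]
                + [p])
    for _ in range(1000)
)
```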

What we really want to do, however, is prove that ${d(0,p)}$ is the shortest length for any path, period. The path need not have any straight segments. It may go in circular arcs, continually changing direction. The arcs need not be circular per se; they could be anything.

The idea that occurs to us computer scientists is to let ${n}$ go to infinity. That is, we want to consider any path as being a limit of polygonal paths. But is this really legitimate? We can certainly approximate any path by paths of segments. But real analysis is littered with examples of complicated curves—themselves defined by limits—that defeat many intuitive expectations about continuity and limits. So how can we make such an infinitistic proof go through rigorously? This is where the calculus of variations takes over.
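Here is a small numerical illustration (our own) of why the limit idea is plausible in nice cases: polygonal paths inscribed in a quarter circle have lengths that increase toward the true arc length ${\pi/2}$ as ${n}$ grows:

```python
import math

def polygonal_arc_length(n, r=1.0):
    """Length of the n-segment polygonal path inscribed in a quarter circle."""
    pts = [(r * math.cos(t), r * math.sin(t))
           for t in (math.pi / 2 * i / n for i in range(n + 1))]
    return sum(math.hypot(pts[i + 1][0] - pts[i][0], pts[i + 1][1] - pts[i][1])
               for i in range(n))

# Chord lengths underestimate the arc, and refining the subdivision
# drives the total toward the true arc length pi/2.
approximations = [polygonal_arc_length(n) for n in (4, 16, 64, 256)]
```

Of course, convergence for one friendly curve proves nothing in general, which is exactly why the rigorous framework below is needed.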

## Minimizers of Functionals

To set up the problem for fully general paths, we could represent them as functions ${f(t) = (x(t),y(t))}$ such that ${f(0) = 0}$ and ${f(1) = p}$. The length of the path is then obtained by integrating all the horizontal and vertical displacements: $\displaystyle \ell(f) = \int_{t=0}^{t=1} \sqrt{\left(\frac{dx(t)}{dt}\right)^2 + \left(\frac{dy(t)}{dt}\right)^2} dt. \ \ \ \ \ (1)$

Wrangling this integral seems daunting enough, but the real action involving ${\ell(f)}$ only begins after doing so. Both the length ${\ell(f)}$ and the body of the integral are functionals—that is, functions of a function. We need to minimize ${\ell(f)}$ over all functions ${f}$. This is a higher-order task than minimizing a function at a point.

Our source simplifies the problem by assuming without loss of generality that ${x}$ increases from ${0}$ to ${a}$, giving the function ${f}$ as ${y(x)}$ instead. Then the problem becomes to minimize $\displaystyle \ell(f) = \int_{x=0}^{x=a} \sqrt{1 + \left(\frac{dy(x)}{dx}\right)^2} dx. \ \ \ \ \ (2)$
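Before invoking any heavy machinery, we can minimize a discretized version of (2) directly. In this sketch (ours, not the source's) we fix ${y}$ at the grid endpoints and repeatedly relax each interior value to the midpoint of its neighbors, which minimizes the two adjacent segment lengths; the path flattens into the straight line of length ${\sqrt{a^2+b^2}}$:

```python
import math

a, b, n = 3.0, 4.0, 20
h = a / n
# Piecewise-linear candidate: y-values on a uniform x-grid, endpoints fixed,
# starting from a deliberately curved initial guess y(x) proportional to x^2.
y = [b * (i / n) ** 2 for i in range(n + 1)]

def length(y):
    """Discretized version of the arc-length functional (2)."""
    return sum(math.hypot(h, y[i + 1] - y[i]) for i in range(n))

# Coordinate relaxation: moving each interior y[i] to the midpoint of its
# neighbors minimizes the two segment lengths that touch it, so repeated
# sweeps drive the whole path toward the straight line.
for _ in range(2000):
    for i in range(1, n):
        y[i] = 0.5 * (y[i - 1] + y[i + 1])

minimized = length(y)  # approaches d(0, p) = sqrt(a^2 + b^2) = 5
```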

The body can be abstracted as a functional ${F(u,u')}$ where ${u}$ and its derivative ${u'}$ are functions of ${x}$. Here we have ${u = y(x)}$ and ${u' = \frac{dy}{dx}}$. The condition for ${u}$ to minimize ${\mathcal{F} = \int_x F}$ was derived by Leonhard Euler and Joseph-Louis Lagrange: $\displaystyle \frac{d}{dx}\left(\frac{\partial F}{\partial u'}\right) = \frac{\partial F}{\partial u}\;. \ \ \ \ \ (3)$

We won’t reproduce here how our source derives this but give some interpretation. This is a kind of regularity property that ${F}$ must obey in order to minimize ${\mathcal{F}}$. To quote Donaldson’s survey:

Then the condition that ${F}$ is stationary with respect to compactly supported variations of ${u}$ is a second order differential equation—the Euler-Lagrange equation associated to the functional.

However you slice it, the point is that equation (3), when applied to cases like the above, is attackable. In the minimum-length path example, our source—after doing eight more equation lines of work—deduces that ${u' = \frac{dy}{dx}}$ must be constant. Any function ${u = y(x)}$ whose derivative is constant must be a straight line. The initial conditions force this to be the straight line from ${0}$ to ${p}$.
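For the arc-length body ${F = \sqrt{1+(u')^2}}$ the computation is short; here is a condensed sketch of it (a few lines rather than the source's eight):

```latex
% F(u, u') = \sqrt{1 + (u')^2} does not depend on u explicitly,
% so \partial F / \partial u = 0 and equation (3) reduces to
\frac{d}{dx}\!\left(\frac{\partial F}{\partial u'}\right)
  = \frac{d}{dx}\!\left(\frac{u'}{\sqrt{1 + (u')^2}}\right) = 0.
% Hence u' / \sqrt{1 + (u')^2} equals some constant c with |c| < 1.
% Solving gives u' = c / \sqrt{1 - c^2}, itself a constant, so u is linear.
```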

## Some of Uhlenbeck’s Work

The point we are emphasizing is that this simple case of paths in the plane—and its abstraction via functionals ${\mathcal{F}}$ that are ultimately founded on one variable ${x}$—have a ready-made minimization scheme, thanks to Euler and Lagrange. The scheme is fully general—not subject to the caveats about our simple approximation by line segments.

What happens in higher-dimensional cases? We can quote from the wonderful two-page essay accompanying the Abel Prize citation. It first notes the importance of a condition on functionals and their ambient spaces named for Richard Palais and Stephen Smale, which however fails for many cases of interest including harmonic maps.

[T]he Palais-Smale compactness condition … guarantees existence of minimizers of geometric functionals and is successful in the case of 1-dimensional domains, such as closed geodesics. Uhlenbeck realized that the condition of Palais-Smale fails in the case of surfaces due to topological reasons.

The papers with Sacks explored the roots of these breakdowns and found a way to patch them. The violation of the Palais-Smale condition allows minimizing sequences of functionals to converge with dependence on points outside the space being analyzed. But those loci are governed by a finite set of singular points within the space. This enables the calculus outside the space to be treated as a re-scaling of what goes on inside the space.

In general cases the view of the process from inside to outside can be described and analyzed as bubbles emerging from the singular locations. More than this picture and interpretation, the Sacks-Uhlenbeck papers produced a now-standard tool-set for higher-dimensional minimization of functionals. It is also another successful marriage of topology—determining the singularities—and analysis.

This work was extensible to more-general kinds of functionals such as a central one of Yang-Mills theory in physics. Geometric properties of a Riemannian manifold ${M}$ are expressed via the concept of a connection ${A}$, which has an associated curvature ${F(A)}$. This is the body for the Yang-Mills functional $\displaystyle \mathcal{F} = \int_M |F(A)|^2.$

There is a corresponding lifting of the Euler-Lagrange equation. This led to developments very much along lines of the previous work with Sacks and more besides. There was particular success analyzing cases where ${M}$ has dimension 4 that were soon relevant to Donaldson’s own Fields Medal-winning research on these spaces. Most in particular, Uhlenbeck working solo proved that these cases were immune to the “bubbling” issue—with the consequence as related in Quanta that

any finite-energy solution to the Yang-Mills equations that is well-defined in the neighborhood of a point will also extend smoothly to the point itself.

## Open Problems

We’ve been happy to report that Uhlenbeck has won the prestigious Abel Prize. We have avoided referencing one aspect—despite giving numerous quotes verbatim—that can be appreciated in subsequent fullness here and here and in this. By so doing we’ve abided by the desire stated in the twelfth paragraph of this essay. We wonder if this is the right way to do things. What do you think?