Ali Belabbas proved the following clever result.
Consider a Riemannian manifold \(M^m\), and the gradient flow of a generic function \(f\). Then, with asymptotic certainty as \(\lambda\to\infty\), the \(\omega\)-limit of a trajectory starting near a local maximum (i.e., with the starting point drawn from a density \(\lambda^m f(\lambda x)\,dx\)) is one of at most two local minima of \(f\).
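(A quick numerical illustration, not taken from Belabbas’ paper: the test function below is a made-up example with a nondegenerate local maximum at the origin, a simple leading eigenvalue of the linearization there, and four local minima. Running plain gradient descent from random points in a tiny ball around the maximum, essentially all of the trajectories end up at just two of the four minima.)

```python
import numpy as np

# Gradient descent x' = -grad f for the made-up test function
#   f(x, y) = (x^2 - 1)^2 + 2(y^2 - 1)^2 + 0.3*x*y,
# which has a nondegenerate local maximum at the origin (with a simple
# leading eigenvalue of the linearization) and four local minima.
def grad_f(p):
    x, y = p[..., 0], p[..., 1]
    return np.stack([4 * x * (x**2 - 1) + 0.3 * y,
                     8 * y * (y**2 - 1) + 0.3 * x], axis=-1)

rng = np.random.default_rng(0)
n, eps, dt, steps = 2000, 1e-6, 2e-3, 10_000

u = rng.normal(size=(n, 2))
u /= np.linalg.norm(u, axis=1, keepdims=True)
p = eps * rng.random((n, 1)) ** 0.5 * u        # uniform sample in the eps-ball

for _ in range(steps):                         # explicit Euler for x' = -grad f
    p -= dt * grad_f(p)

# cluster the endpoints: which minima actually get visited?
limits, counts = np.unique(np.round(p, 2), axis=0, return_counts=True)
for pt, c in zip(limits, counts):
    print(pt, c / n)                           # ~all mass on two of the four minima
```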
What is going on here?! Nothing unexpected, in fact.
Consider a local equilibrium (say, at the origin) of a smooth dynamical system,
\[
\dot{x}=v(x),\qquad v(0)=0,
\]
with the linearization \(A=\partial v/\partial x\). Assume, further, that an eigenvalue of \(A\) dominates the rest:
\[
\sigma(A)=\{s_1,\ldots,s_m\},\qquad \Re(s_1)>\Re(s_k),\quad k=2,\ldots,m
\]
(this implies, of course, that \(s_1\) is real), and that \(s_1>0\).
These data allow one to define 1-dimensional and 1-codimensional \(v\)-invariant submanifolds \(\Sigma_+\) and \(\Sigma_-\), such that the tangent space to \(\Sigma_+\) at the origin is spanned by the eigenvector for \(s_1\) (the two branches of \(\Sigma_+\) are the whiskers), while the tangent space to \(\Sigma_-\) at the origin is spanned by the remaining generalized eigenspaces of \(A\) (see, e.g., Hirsch et al., Invariant Manifolds).
Using this splitting, one can easily prove the following result:
Proposition. For any sufficiently small sphere \(S_r\) around the origin, let \(\{x_1, x_2\}=\Sigma_+ \cap S_r\) be the intersection of the whiskers with that sphere, and let \(U_1, U_2\) be arbitrarily small spherical vicinities of \(x_1, x_2\) in \(S_r\). Then, for sufficiently small \(\epsilon\), the fraction of points in the \(\epsilon\)-ball around the origin whose \(v\)-trajectories exit \(S_r\) through \(U_1\cup U_2\) is arbitrarily close to one.
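Here is a Monte Carlo sketch of this Proposition for the linearized flow \(\dot{x}=Ax\) (the matrix \(A\) below is a made-up example with a simple dominant eigenvalue; the whisker direction is just its leading eigenvector):

```python
import numpy as np

# Monte Carlo sketch of the Proposition for the linearized flow x' = Ax.
# A is a made-up example with a simple dominant eigenvalue s_1 > 0.
A = np.diag([3.0, 1.0, -2.0]) + 0.1 * np.array([[0.0, 1.0, 0.0],
                                                [0.0, 0.0, 1.0],
                                                [1.0, 0.0, 0.0]])
eigvals, eigvecs = np.linalg.eig(A)
e1 = np.real(eigvecs[:, np.argmax(eigvals.real)])
e1 /= np.linalg.norm(e1)                       # tangent direction of Sigma_+

rng = np.random.default_rng(1)
n, eps, r, dt, delta = 5000, 1e-4, 1.0, 1e-3, 0.1

u = rng.normal(size=(n, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
x = eps * rng.random((n, 1)) ** (1 / 3) * u    # uniform sample in the eps-ball

exit_pts = np.full((n, 3), np.nan)
alive = np.ones(n, dtype=bool)
for _ in range(20_000):                        # explicit Euler for x' = Ax
    x += dt * (x @ A.T)
    hit = alive & (np.linalg.norm(x, axis=1) >= r)
    exit_pts[hit] = x[hit]
    alive &= ~hit
    if not alive.any():
        break

# fraction of exit points landing within delta of the whisker tips +/- r*e1
d = np.minimum(np.linalg.norm(exit_pts - r * e1, axis=1),
               np.linalg.norm(exit_pts + r * e1, axis=1))
print("fraction exiting near U_1 or U_2:", np.mean(d[~alive] < delta))
```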
This is, essentially, Belabbas’ theorem: in his situation, the linearization of the gradient flow near a critical point is self-adjoint, and, generically, the leading eigenvalue is simple. Each of the whiskers, generically, descends to a local minimum, and if one starts close enough to a whisker (in one of the vicinities \(U_k\), \(k=1,2\)), one can ensure that the resulting trajectory follows the whisker to the basin of attraction of the corresponding minimum.
Nice!
Randomness, however, can manifest itself not just in the starting point, but also along the trajectory. What happens if one considers the system \(\dot{x}=v(x)\) and perturbs it slightly with stochastic noise?
In other words, consider the SDE
\[
dx=v(x)\,dt+\epsilon\, g(x)\,dW,\qquad g:M\times\mathbb{R}^k\to TM
\]
(here \(v(0)=0\) as before, \(g\) is a morphism of the trivial \(k\)-dimensional vector bundle over \(M\) to its tangent bundle, and \(W\) is a standard \(k\)-dimensional Brownian motion; the integrals are understood in the Itô sense).
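For concreteness, a minimal Euler–Maruyama sketch of such an SDE in a single chart; the drift \(v\) and diffusion \(g\) below are hypothetical stand-ins, chosen only so that \(v\) vanishes at the origin:

```python
import numpy as np

# Minimal Euler-Maruyama sketch of dx = v(x) dt + eps * g(x) dW (Ito), written
# in a single chart; v and g are hypothetical stand-ins for the actual data.
def v(x):                                      # drift, vanishing at the origin
    A = np.array([[3.0, 0.5], [0.0, 1.0]])     # made-up linearization
    return x @ A.T

def g(x):                                      # constant bundle morphism, k = 2
    return np.eye(2)

def euler_maruyama(x0, eps, dt, steps, rng):
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(steps):
        dW = rng.normal(scale=np.sqrt(dt), size=x.shape)   # Brownian increment
        x = x + v(x) * dt + eps * (g(x) @ dW)
        path.append(x)
    return np.array(path)

rng = np.random.default_rng(2)
path = euler_maruyama(np.zeros(2), eps=0.05, dt=1e-3, steps=3000, rng=rng)
print(path[-1])                                # where this sample path ends up
```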
In this case, in the notation above, one has (we assume, for simplicity, that the starting point is located deterministically at the origin):
Proposition. If the pair \((A=\partial v/\partial x, B=g(0))\) is controllable or, equivalently, satisfies the Hörmander condition at the origin (i.e., if the minimal \(A\)-invariant subspace containing the range of \(B\) is the whole space \(T_0M\)), then for sufficiently small \(\epsilon\), the probability that a trajectory of the SDE above exits \(S_r\) through \(U_1\cup U_2\) is arbitrarily close to one.
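In coordinates, this is the standard Kalman rank test, easy to check numerically (the pair \((A,B)\) below is a made-up example with a single noise channel):

```python
import numpy as np

# Kalman rank test for the pair (A, B): the condition of the Proposition
# holds iff the controllability matrix [B, AB, ..., A^(m-1)B] has full rank.
def is_controllable(A, B):
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    m = A.shape[0]
    blocks = [B]
    for _ in range(m - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks)) == m

# Made-up example: one noise channel (k = 1) that still excites every direction.
A = np.array([[3.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, -2.0]])
B = np.array([[0.0], [0.0], [1.0]])
print(is_controllable(A, B))   # True: the condition of the Proposition holds
```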
Specializing again to generic gradient systems, and using Kurtz’ theorem, one can establish that the trajectories of the corresponding SDE descend with overwhelming probability to small vicinities of one of at most two minima (before, eventually, leaving them in their search for an attainable global minimum).
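A noisy counterpart of the earlier gradient-descent experiment (the same made-up \(f\), now with a small isotropic noise term and the starting point placed deterministically at the maximum itself) shows the same concentration on two of the four minima:

```python
import numpy as np

# Euler-Maruyama for dx = -grad f(x) dt + eps dW (same made-up f as before),
# with all trajectories started deterministically at the local maximum.
def grad_f(p):
    x, y = p[..., 0], p[..., 1]
    return np.stack([4 * x * (x**2 - 1) + 0.3 * y,
                     8 * y * (y**2 - 1) + 0.3 * x], axis=-1)

rng = np.random.default_rng(3)
n, eps, dt, steps = 2000, 1e-3, 2e-3, 10_000
p = np.zeros((n, 2))                           # start at the maximum

for _ in range(steps):
    dW = rng.normal(scale=np.sqrt(dt), size=p.shape)   # Brownian increments
    p += -grad_f(p) * dt + eps * dW

# which minima do the trajectories end up near?
labels, counts = np.unique(np.sign(np.round(p)), axis=0, return_counts=True)
for pt, c in zip(labels, counts):
    print(pt, c / n)                           # overwhelmingly two of the four minima
```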