The Unpredictable Dance of Fluids and the Quest for Singularities

Imagine pouring cream into your coffee. The intricate swirls and eddies that form are a beautiful, everyday example of fluid dynamics. For centuries, mathematicians and physicists have used a set of equations—some dating back to Leonhard Euler in the 1750s—to describe this motion. These equations, like the Euler and Navier–Stokes equations, are the bedrock of our understanding of everything from weather patterns to the airflow over an airplane’s wing.

But lurking within these elegant mathematical descriptions is a profound and unsettling question: can they break? Can a perfectly smooth, well-behaved fluid flow spontaneously develop singularities—points where physical quantities like velocity or pressure shoot to infinity in a finite amount of time?

This isn’t just a mathematical curiosity. If singularities can form, it means the very equations we rely on to predict the behavior of fluids can fail, leading to physically impossible outcomes. The question of whether this happens in the 3D Navier–Stokes equations is so fundamental that it’s one of the seven Millennium Prize Problems, carrying a one-million-dollar reward for its solution.

For decades, the search for these singularities has been a central challenge. Most numerical methods have focused on finding stable singularities—those that form robustly even if you slightly change the starting conditions. However, for the big, open problems like the boundary-free Euler and Navier–Stokes equations, experts believe that if singularities exist, they must be unstable.

An unstable singularity is an entirely different beast. It’s like trying to balance a pencil perfectly on its tip: a valid physical state, but the slightest nudge—an infinitesimally small perturbation—makes it topple. Finding such a solution requires near-infinite precision. It’s a mathematical ghost, perched on a razor’s edge, and traditional simulation methods struggle to even get close without being thrown off course.

A new paper, Discovery of Unstable Singularities, presents a groundbreaking framework that conquers this challenge. It systematically discovers families of these elusive unstable singularities. By combining cleverly designed neural networks with a high-precision Gauss–Newton optimizer, the researchers created a new “playbook” for exploring the complex world of fluid equations—achieving a level of accuracy limited only by the double-precision floating-point limits of modern GPUs.


Background: The Ghosts in the Equations

Before diving into the new method, let’s build intuition about the core concepts.

What is a Singularity?

In the context of fluid equations, a singularity (or “blow-up”) is a point in time and space where the solution ceases to be smooth. Imagine a wave approaching shore: it grows steeper until it breaks. A singularity is like the mathematical breaking point—but far more extreme—where slopes (gradients) become infinite. The equations predict an infinitely sharp feature forming from an initially smooth state.
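To make "infinite in finite time" concrete, here is a toy sketch (a one-variable ODE, not one of the fluid equations discussed here, and `blowup_solution` is just an illustrative name): the solution starts smooth and bounded, yet diverges at a finite time \( T^* \).

```python
# Toy model of finite-time blow-up (an illustration only, not a fluid
# equation): the ODE du/dt = u^2 has the exact solution
# u(t) = u0 / (1 - u0*t), which starts smooth and finite but
# diverges at the finite time T* = 1/u0.
def blowup_solution(u0, t):
    return u0 / (1.0 - u0 * t)

u0 = 1.0            # perfectly tame initial condition
T_star = 1.0 / u0   # blow-up time

for t in (0.0, 0.5, 0.9, 0.99):
    print(f"t = {t:.2f}   u = {blowup_solution(u0, t):10.1f}")
```

Nothing in the initial condition hints at trouble; the divergence is produced by the equation's own nonlinearity, which is exactly what makes the PDE question so delicate.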

Stable vs. Unstable: Balancing a Pencil on Its Tip

The concept of stability is crucial:

  • Stable singularity: Like a ball in a valley—nudge it, and it rolls back to the center. In fluid terms, small changes in initial conditions still lead to the same singularity. Robust, easy to find numerically.
  • Unstable singularity: Like balancing a pencil on its tip—possible, but ultra-fragile. The smallest error sends the solution away from blow-up. Any numerical noise or rounding error becomes a fatal perturbation.

It’s widely believed that singularities in the most challenging fluid problems (like boundary-free Navier–Stokes) must be unstable. Developing tools to find and analyze these fragile solutions is essential for tackling the grand challenge.

The Self-Similar Trick: Freezing Time

One of the most powerful mathematical tools for studying singularities is self-similar coordinates. Instead of watching the fluid evolve in normal space and time, you “zoom in” on the singularity at just the right rate.

In a self-similar blow-up, the spatial shape of the solution looks the same over time—it’s just scaled in size and amplitude. By transforming into new coordinates that shrink space and dilate magnitude at the blow-up rate, the problem becomes finding a single, static profile, \( \Phi(y) \), that solves a steady equation.

This transformation introduces a critical scaling parameter, \( \lambda \): the self-similar scaling rate. Smooth physical solutions only occur at discrete “admissible” \( \lambda \) values. The hunt for a singularity becomes a hunt for these special numbers.
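Schematically (the exact exponents and sign conventions vary by equation, so treat this as an illustrative form rather than the paper's), a self-similar blow-up ansatz reads

\[ u(x, t) = (T - t)^{-\alpha}\, \Phi\!\left( \frac{x}{(T - t)^{\beta}} \right), \qquad y = \frac{x}{(T - t)^{\beta}}, \]

where \( T \) is the blow-up time and the exponents \( \alpha, \beta \) are tied to the scaling rate \( \lambda \). Substituting this ansatz into the evolution equation eliminates time altogether, leaving a steady equation for the profile \( \Phi(y) \) alone.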


The Discovery Engine: A New Playbook for Finding Singularities

The authors built a powerful discovery engine for finding admissible \( \lambda \) values and their corresponding solution profiles—even for highly unstable cases. This is a two-stage process: first, discover a high-accuracy candidate solution; second, analyze its stability.

Figure 1. Two-stage workflow combining mathematical modeling and machine learning: stage one uses a Physics-Informed Neural Network (PINN), guided by mathematical modeling, to find a candidate self-similar solution; stage two analyzes its stability by linearizing the PDE around the candidate, distinguishing stable from unstable singularities.


PINNs: Teaching Neural Networks Physics

A Physics-Informed Neural Network (PINN) is trained not on data, but on the physics itself. The loss function is the PDE residual—the mismatch when the network’s output is plugged into the equation. If the network outputs the exact solution, the residual is zero everywhere.

But standard PINNs often stall at modest precision (say \( 10^{-3} \)), far from the extreme accuracy needed for rigorous study. The authors’ innovations are in the architecture, the training pipeline, and the optimizer.
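A minimal sketch of the residual-as-loss idea on a toy ODE (an assumed stand-in; the paper's PDEs, networks, and automatic differentiation are far richer):

```python
import numpy as np

# PINN-style "physics loss" for the toy ODE u'(y) + u(y) = 0:
# the loss is the mean squared PDE residual at collocation points.
# In a real PINN, u would be a neural network and u' would come
# from automatic differentiation; here we evaluate candidates by hand.
def residual_loss(u, du, ys):
    r = du(ys) + u(ys)      # pointwise PDE residual
    return np.mean(r ** 2)

ys = np.linspace(0.0, 2.0, 64)   # collocation points

# The exact solution u = exp(-y) drives the loss to zero...
loss_exact = residual_loss(lambda y: np.exp(-y),
                           lambda y: -np.exp(-y), ys)
# ...while an incorrect candidate u = 1 - y does not.
loss_wrong = residual_loss(lambda y: 1.0 - y,
                           lambda y: -np.ones_like(y), ys)
```

Training drives the network's parameters toward the zero-residual regime; the hard part, as the rest of the section explains, is pushing that residual far below what standard training can reach.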


Secret Sauce #1: Embedding Mathematical Wisdom

The team built inductive biases—mathematical knowledge—directly into the network:

  1. Compactifying Infinity: Self-similar problems live on infinite domains. The authors use coordinate transforms to map infinite space to a finite domain, making it tractable for a neural network.

  2. Hard-coded Symmetry & Decay: Certain physical constraints are known analytically. For example, for vorticity \( \Omega \) in the Boussinesq equation, the profile must be odd in \( y_1 \) and decay at infinity with a specific power-law. They implement:

    \[ \Omega(y_1, y_2) = \underbrace{\left(\frac{y_1}{\sqrt{1 + y_1^2 + y_2^2}}\right)}_{\text{Odd Symmetry}} \cdot \mathrm{NN}_{\Omega}[q,\beta] \cdot \underbrace{q(y_1, y_2)}_{\text{Asymptotic Decay}} \]

    This frees up the network to focus on the truly non-obvious aspects of the solution.
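A minimal sketch of this kind of hard-coded ansatz (the network output is replaced by a placeholder value, and the decay power is an arbitrary illustrative choice, not the paper's):

```python
import numpy as np

# Enforce odd symmetry in y1 and power-law decay by construction.
# nn_value stands in for the neural network's output; decay_power
# is an arbitrary illustrative exponent, not the paper's.
def omega_ansatz(y1, y2, nn_value=1.0, decay_power=2.0):
    sym = y1 / np.sqrt(1.0 + y1**2 + y2**2)            # odd in y1
    q = (1.0 + y1**2 + y2**2) ** (-decay_power / 2.0)  # decay envelope
    return sym * nn_value * q

# The symmetry holds identically, whatever the "network" outputs:
left = omega_ansatz(-1.5, 0.7, nn_value=3.2)
right = omega_ansatz(1.5, 0.7, nn_value=3.2)
```

Because the symmetry and decay are built into the wrapper, no amount of training error can violate them, and the network's capacity is spent on the unknown part of the profile.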

Numerical experiments feed back into architecture refinements in the loop shown in Figure 1: hidden mathematical structure uncovered in one round of experiments is baked into the model for the next.


Secret Sauce #2: High-Precision Training with Gauss–Newton

Finding unstable solutions requires navigating a rugged loss landscape. The team switched from common optimizers (Adam, L-BFGS) to a full-matrix Gauss–Newton (GN) optimizer. GN uses the Jacobian of the residual to approximate curvature, not just gradients, letting it take well-scaled steps that avoid local traps.

Because their PINNs are modest in size (thousands of parameters), full GN is feasible. As seen below, GN converges faster and achieves far lower residuals than first-order methods.

Figure 4. High-precision training curves: full-matrix Gauss–Newton (blue) converges far faster and to far lower residuals than Adam and L-BFGS, and multi-stage training drops residuals further, to around \( 10^{-14} \), several orders of magnitude better than standard methods.


Machine Precision with Multi-Stage Training

Even GN has limits: a single network can only fit so well. The authors use multi-stage training:

  1. Train a first network to near-optimal precision.
  2. Freeze it; compute its residual.
  3. Train a second network to learn that residual and correct it.

The outputs are added to form the final solution. This improved some solutions by five orders of magnitude, achieving residuals \( \sim 10^{-13} \) for the CCF equation—limited only by hardware floating-point round-off.
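The three steps can be sketched on a toy fitting problem (polynomials stand in for the paper's neural networks; the numbers are illustrative only):

```python
import numpy as np

# Multi-stage refinement sketch: stage one fits a coarse model,
# stage two fits a finer model to the *frozen* stage-one residual,
# and the outputs are summed. Polynomials play the role of networks.
x = np.linspace(-1.0, 1.0, 200)
target = np.exp(x)

def fit_poly(x, y, degree):
    return np.polyval(np.polyfit(x, y, degree), x)

stage1 = fit_poly(x, target, 3)        # coarse first fit
residual1 = target - stage1            # freeze stage one, compute residual
stage2 = fit_poly(x, residual1, 9)     # second model learns the residual

err1 = np.max(np.abs(target - stage1))
err2 = np.max(np.abs(target - (stage1 + stage2)))
```

The second stage only has to resolve the small, structured error left behind by the first, so the combined error drops by many orders of magnitude, mirroring (in spirit) how the paper's staged networks approach floating-point round-off.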

Figure 3. (a) Spatial residual profile for an IPM solution, with maximum residual around \( 10^{-11} \). (b) Table of log-10 maximum residuals across all solutions: many sit below \( 10^{-7} \), with CCF reaching a record \( 10^{-13.7} \).


The Payoff: A Zoo of New Unstable Singularities

With this engine, the authors attacked three important systems:

  • Córdoba–Córdoba–Fontelos (CCF) equation — a 1D model capturing elements of 3D fluid dynamics.
  • Incompressible Porous Media (IPM) equation — models flow through porous material.
  • 2D Boussinesq equations — mathematically analogous to 3D Euler with axial symmetry and boundaries.

They reproduced stable solutions with unprecedented accuracy, but also found new families of unstable ones.

Figure 2. (a–d) Spatial vorticity profiles and cross-sections for the stable through 3rd-unstable IPM and Boussinesq solutions. (e) Linear relationship between instability order and \( 1/\lambda \). (f) Table of scaling rates \( \lambda \) for each mode.

For IPM and Boussinesq, they found the stable solution and the first through third unstable solutions, plus a candidate 4th-unstable Boussinesq case: the first smooth unstable self-similar solutions reported for any unforced incompressible fluid equation.


Validating the “Magic Numbers”: Funnel Plots

How can the team be sure a discovered \( \lambda \) is truly admissible? They fix \( \lambda \) at a grid of nearby values, retrain the network for each, and record the minimum residual achieved. The admissible value produces the lowest residual, and deviations in either direction trace out a sharp “V” curve (Figure 5).
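The funnel procedure can be mimicked on a toy eigenvalue problem (entirely an assumed illustration: the candidate shape, boundary condition, and admissible value \( \lambda = 1 \) below are invented for this sketch):

```python
import numpy as np

# Funnel-plot sketch: u(y) = c*sin(y) should solve u'' + lam*u = 0
# with u'(0) = 1, which is only possible at the admissible value
# lam = 1. For each fixed lam we minimize the residual over c and
# record the minimum; the admissible lam sits at the bottom of a "V".
ys = np.linspace(0.0, np.pi, 50)

def min_residual(lam):
    # Both residuals are linear in c, so least squares finds the
    # minimum: PDE residual c*(lam - 1)*sin(y), boundary residual c - 1.
    A = np.concatenate([(lam - 1.0) * np.sin(ys), [1.0]])[:, None]
    b = np.concatenate([np.zeros_like(ys), [1.0]])
    c = np.linalg.lstsq(A, b, rcond=None)[0]
    return np.max(np.abs(A @ c - b))

sweep = {lam: min_residual(lam) for lam in [0.9, 0.95, 1.0, 1.05, 1.1]}
```

Sweeping \( \lambda \) and plotting these minima reproduces the funnel shape: essentially zero at the admissible value, rising steeply on either side.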

Figure 5. Funnel plots of maximum residual versus deviation from the admissible \( \lambda \): residual minima pinpoint the admissible values for the Boussinesq modes, with residuals rising steeply as \( \lambda \) is perturbed.

The width of each funnel's basin indicates how many significant digits of \( \lambda \) can be trusted. Figure 6 zooms in with a symmetric-log (symlog) x-axis to measure that resolution.

Figure 6. The same funnel plots with a symmetric-log x-axis: the width of each basin determines the validated significant digits of \( \lambda \).


Patterns in Chaos: Predicting Higher-Order Modes

By plotting \(1/\lambda_n\) vs instability order \(n\), the authors uncovered linear trends for IPM and Boussinesq. Empirical formulas emerged that estimate \( \lambda \) for yet-undiscovered modes—providing a map for future exploration.
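Mechanically, the extrapolation is just a line fit in \( (n, 1/\lambda_n) \). The sketch below uses synthetic numbers generated from an exact line for illustration, not the paper's \( \lambda \) values:

```python
import numpy as np

# Line-fit extrapolation sketch with synthetic data (NOT the paper's
# lambda values): if 1/lambda_n is linear in the instability order n,
# a fit to the known modes predicts the scaling rate of the next one.
n = np.array([0, 1, 2, 3])          # instability order
inv_lam = 0.5 + 0.3 * n             # synthetic 1/lambda_n values
slope, intercept = np.polyfit(n, inv_lam, 1)
lam4_pred = 1.0 / (slope * 4 + intercept)   # predicted 4th mode
```

A prediction like this narrows the search window for the next unstable mode, which matters enormously when each candidate requires a delicate high-precision training run.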


Conclusion: Lighting the Way Forward

Discovery of Unstable Singularities is a landmark at the crossroads of mathematics, physics, and AI.

Key Takeaways:

  1. Specialized Discovery Tool: A framework combining physics insight and AI can uncover unstable solutions unreachable by prior methods.
  2. Extreme Precision: Inductive biases, Gauss–Newton optimization, and multi-stage refinement achieved hardware-limited accuracy—critical for trustworthy use in rigorous proofs.
  3. New Mathematical Objects: Families of unstable singularities were found in core fluid models, revealing order–scaling patterns.
  4. Redefining PINNs: When guided by domain knowledge, PINNs become powerful discovery engines, not just generic PDE solvers.

While the Millennium Prize for Navier–Stokes remains unsolved, this work is a decisive step forward—proof that classical analysis + modern computation can illuminate the hardest problems in nonlinear PDEs.

The hunt for singularities continues, but with this new torch, the path ahead is brighter than ever.