Artificial Intelligence and Electrical & Electronics Engineering: AIEEE Open Access

A Neural PDE Framework via Navier–Stokes Dynamics: Continuous-Depth Learning and Hybrid Attention-Driven Forcing

Chur Chin

Abstract

We propose a reformulation of a Navier–Stokes–based dynamical learning framework as a continuous-depth neural network (Neural PDE), and develop a hybrid model in which transformer-style attention mechanisms act as adaptive forcing terms within the flow. In contrast to conventional neural networks trained by gradient descent and backpropagation, the proposed approach replaces explicit loss minimization with evolution governed by energy dissipation, spectral cascades, and resolvent-based regularity control. We show how the flow map of the Navier–Stokes equation can be interpreted as an infinite-depth residual network, establish conditional regularity criteria analogous to Beale–Kato–Majda, and position attention as a data-dependent operator that injects structured information into the dynamics. This framework provides a principled bridge between operator-theoretic PDE analysis and modern deep learning architectures, with potential applications in structured inference, multiscale representation learning, and mathematically constrained learning systems.
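The architecture described in the abstract — a flow map treated as an infinite-depth residual network, with attention injected as a data-dependent forcing term — can be illustrated with a minimal sketch. This is an illustrative Euler discretization only, not the paper's implementation: the dissipative term `-nu * h` stands in for viscous diffusion, and `attention_forcing`, `neural_pde_flow`, and all parameter names are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_forcing(h, Wq, Wk, Wv):
    # Single-head self-attention acting as a data-dependent forcing term
    # injected into the dynamics (hypothetical stand-in for the paper's operator).
    q, k, v = h @ Wq, h @ Wk, h @ Wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v

def neural_pde_flow(h0, n_steps=50, dt=0.02, nu=0.1, seed=0):
    # Explicit Euler steps of dh/dt = -nu*h + attention(h):
    # each step is a residual update, so the limit dt -> 0 reads as an
    # infinite-depth residual network (continuous-depth interpretation).
    rng = np.random.default_rng(seed)
    d = h0.shape[-1]
    Wq, Wk, Wv = (rng.normal(scale=d**-0.5, size=(d, d)) for _ in range(3))
    h = h0
    for _ in range(n_steps):
        h = h + dt * (-nu * h + attention_forcing(h, Wq, Wk, Wv))
    return h
```

Here the state evolves under dissipation plus attention-driven forcing rather than an explicit loss gradient, mirroring the abstract's framing at a toy scale.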
