Partial Differential Equations (PDEs) are the backbone of modern science and engineering, governing phenomena from climate modeling and drug discovery to aerospace design and seismic imaging. Solving these equations with classical numerical methods can be computationally intensive. More recently, numerous neural-network methods have been proposed to create fast, efficient, and scalable PDE solvers. However, fundamental questions remain: how can we ensure that neural networks respect the physics and symmetries of the underlying equations? Why do some architectures generalize across discretizations or scales while others fail? How does the training process interact with the mathematical structure of the PDE, such as its eigenmodes, symmetries, or conservation laws? Developing systematic answers to these questions is still an open frontier in scientific machine learning.
In this project, you will explore the relationship between the mathematical structure of a PDE and the architecture of neural networks trained to solve it. You will begin with simple prototype problems (e.g. diffusion, wave, or Poisson equations) and experiment with different network types. These may include standard architectures (fully connected networks, CNNs, transformers) and more advanced ones (wavelet networks, group-equivariant CNNs, neural Green’s operators). You will analyze how different architectures encode symmetries, scales, and spectral properties of the PDE and how these properties affect learning speed, generalization, and stability. Possible directions include:
- studying how filters or attention heads align with physical or spectral modes of the PDE during training (“alignment analysis”);
- testing whether architectures with built-in equivariance (e.g. rotation, translation, or scale) improve data efficiency and accuracy;
- exploring the emergence of geometric representations in latent spaces and their relation to physical quantities;
- investigating failure modes: cases where networks learn spurious correlations or violate physical constraints.
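To make the first direction concrete, the alignment idea can be prototyped before any deep network is involved: fit a single periodic convolution filter (a linear stand-in for one trained CNN layer; this simplification is our assumption, not part of the project brief) to one time step of the 1D heat equation, and check how its transfer function aligns with the exact spectral symbol exp(-k^2 dt) of the PDE. All variable names and parameter values below are illustrative.

```python
import numpy as np

# Minimal "alignment analysis" sketch. A linear periodic convolution
# plays the role of a trained CNN layer: we fit it to one time step of
# the 1D heat equation u_t = u_xx on a periodic domain of length 1, then
# compare its per-mode transfer function with the exact heat-kernel
# multiplier exp(-k^2 dt) acting on the Fourier eigenmodes.

n, dt = 64, 1e-2
k = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)  # angular wavenumbers
symbol = np.exp(-k**2 * dt)                   # exact spectral symbol

rng = np.random.default_rng(0)
inputs = rng.standard_normal((200, n))                   # random initial states
targets = np.fft.ifft(symbol * np.fft.fft(inputs)).real  # exact one-step solutions

# Fit one periodic convolution filter by circulant least squares, done in
# Fourier space where convolution is diagonal (one ratio per mode).
U, Y = np.fft.fft(inputs), np.fft.fft(targets)
h_hat = (np.conj(U) * Y).sum(axis=0) / (np.abs(U)**2).sum(axis=0)

# Alignment diagnostic: deviation of the learned transfer function from
# the PDE's eigenvalues, mode by mode.
alignment = np.abs(h_hat - symbol).max()
print(f"max deviation from exact symbol: {alignment:.2e}")
```

For a real experiment one would replace the least-squares fit with gradient training of a nonlinear network and track this per-mode deviation over training time, which is one way to observe filters aligning with the spectral modes of the PDE.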
This will be an exciting interdisciplinary project where you can gain valuable skills in deep learning, mathematics and engineering applications. We are looking for a motivated student with strong programming and ML skills and a good foundation in mathematics (linear algebra, calculus, and basic PDEs). You will work with concepts such as eigenvalues, symmetries, and gradient dynamics, and will have the opportunity to strengthen your background in these areas during the project.
This project will be supervised by Dr. Hannah Pinson (main supervisor, Data and AI cluster), Prof. Dr. Victorita Dolean-Maini (scientific computing), and Dr. Michael Abdelmalik (computational fluid dynamics).
Hannah Pinson