Lunch Seminars
(Will be held at 12 noon PST in ANB 213 or on Zoom when noted)
October 8, 2024
Ziming Liu [MIT & IAIFI]
▦ Towards Unification of Artificial Intelligence and Science ▦
A major challenge of AI + Science lies in their inherent incompatibility: today's AI is primarily based on connectionism, while science depends on symbolism. In the first part of the talk, I will present Kolmogorov-Arnold Networks (KANs) as a solution that synergizes both worlds. Inspired by the Kolmogorov-Arnold representation theorem, KANs are more aligned with symbolic representations than MLPs and demonstrate strong accuracy and interpretability. In the second part, I will talk more broadly about the intersection of AI and Science, including science for AI (Poisson Flow Generative Models), science of AI (understanding grokking), and AI for Science (AI scientists).
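For readers unfamiliar with the architecture, here is a minimal sketch of a KAN-style layer in which every edge carries its own learnable univariate function, here parameterized by a few Gaussian radial basis functions; the parameterization, dimensions, and names are illustrative assumptions, not taken from the talk.

```python
# Minimal sketch (not the speaker's implementation) of a KAN-style layer:
# each edge (input i -> output o) has its own learnable univariate function,
# parameterized here by coefficients over a shared radial-basis grid, in
# contrast to an MLP edge, which carries a single scalar weight.
import numpy as np

class KANLayer:
    def __init__(self, in_dim, out_dim, n_basis=8, x_min=-2.0, x_max=2.0, seed=0):
        rng = np.random.default_rng(seed)
        self.centers = np.linspace(x_min, x_max, n_basis)   # shared RBF grid
        self.width = (x_max - x_min) / n_basis
        # One coefficient vector per edge: in_dim x out_dim edges in total.
        self.coef = rng.normal(scale=0.1, size=(in_dim, out_dim, n_basis))

    def forward(self, x):
        # x: (batch, in_dim). Evaluate the RBF basis at every input coordinate.
        phi = np.exp(-((x[..., None] - self.centers) / self.width) ** 2)  # (batch, in, n_basis)
        # Sum the edge functions over inputs and basis terms -> (batch, out_dim).
        return np.einsum("bik,iok->bo", phi, self.coef)

# Toy usage: two stacked layers mirror the Kolmogorov-Arnold form
# f(x) = sum_q Phi_q( sum_p phi_{q,p}(x_p) ), with 2*2+1 = 5 inner units.
layer1 = KANLayer(in_dim=2, out_dim=5)
layer2 = KANLayer(in_dim=5, out_dim=1)
x = np.random.default_rng(1).uniform(-1, 1, size=(4, 2))
print(layer2.forward(layer1.forward(x)).shape)   # (4, 1)
```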
October 15, 2024
Tim Roith [DESY]
▦ Gullible networks and the mathematics of adversarial attacks ▦
With the increasing incentive to employ deep learning techniques in real-life scenarios, one naturally questions their reliability. For image classification, the well-known phenomenon of adversarial examples shows how small, humanly imperceptible input perturbations can completely change the output of a neural network. This insight gave rise to the field of adversarial robustness, which we explore in this talk. We discuss how regularizing the standard training objective with Lipschitz and TV regularization terms can lead to resilient neural networks.
Furthermore, we explore the adversarial attack problem. We derive an associated gradient flow for the so-called fast gradient sign method, which is commonly used to find malicious input perturbations. Here, we work in an abstract metric setting and then highlight the distributional Wasserstein case, which relates back to the robustness problem. Finally, we consider the attack problem in a realistic closed-box scenario, where we employ gradient-free optimizers.
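As background for the attack method mentioned above, here is a minimal sketch of the fast gradient sign method on a toy logistic-regression "network", chosen so that the input gradient has a closed form; the model, loss, and numbers are illustrative assumptions and not the speaker's construction.

```python
# Sketch of the fast gradient sign method (FGSM): perturb the input by
# eps * sign of the gradient of the loss with respect to the input.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """Return x perturbed in the direction that increases the cross-entropy loss."""
    p = sigmoid(x @ w + b)        # model confidence in class 1
    grad_x = (p - y) * w          # d(loss)/dx for binary cross-entropy
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w, b = rng.normal(size=3), 0.1
x, y = rng.normal(size=3), 1.0    # a single example with true label 1
for eps in (0.0, 0.05, 0.2):
    x_adv = fgsm_attack(x, y, w, b, eps)
    print(eps, sigmoid(x_adv @ w + b))   # confidence in the true class drops as eps grows
```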
October 22, 2024
Pranava Jayanti [University of Southern California]
▦ Avoiding vacuum in two models of superfluidity ▦
At low pressures and very low temperatures, Helium-4 is composed of two interacting phases: the superfluid and the normal fluid. We discuss some recent mathematical results in the analysis of two distinct models of superfluidity.
Micro-scale model: The nonlinear Schrödinger equation is coupled with the incompressible inhomogeneous Navier-Stokes equations through a bidirectional nonlinear relaxation mechanism that facilitates mass and momentum exchange between phases. For small initial data, we construct solutions that are either global or almost-global in time, depending on the strength of the superfluid's self-interactions. The primary challenge lies in controlling inter-phase mass transfer to prevent vacuum formation within the normal fluid. Two approaches are employed: one based on energy estimates alone, and another combining energy estimates with maximal regularity. These results are part of joint work with Juhi Jang and Igor Kukavica.
Macro-scale model: Both phases are governed by the incompressible Euler equations, coupled through a nonlinear and singular interaction term. We construct unique local-in-time analytic solutions. To address the singularity in the coupling, we ensure the absence of vorticity vacuum, while the derivative loss due to the nonlinearity is offset by trading regularity for dissipation.
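For orientation, the following is a schematic of the kind of coupling described in the micro-scale model: a nonlinear Schrödinger equation for the superfluid wavefunction coupled to the incompressible, inhomogeneous Navier-Stokes equations for the normal fluid. The exchange terms F, M, G are generic placeholders, not the precise relaxation mechanism of the talk; the "vacuum" to be avoided is the normal-fluid density ρ reaching zero.

```latex
% Schematic only: psi is the superfluid wavefunction, (rho, u, p) the normal
% fluid's density, velocity, and pressure; F, M, G stand in for the
% bidirectional mass/momentum exchange terms described in the abstract.
\begin{align*}
  i\,\partial_t \psi + \Delta \psi &= F(\psi,\rho,u)\,\psi,\\
  \partial_t \rho + \nabla\cdot(\rho u) &= M(\psi,\rho,u) && \text{(mass exchange)},\\
  \partial_t(\rho u) + \nabla\cdot(\rho u \otimes u) - \Delta u + \nabla p &= G(\psi,\rho,u) && \text{(momentum exchange)},\\
  \nabla\cdot u &= 0 .
\end{align*}
```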
November 5, 2024
Yannick Sire [Johns Hopkins University]
▦ Some problems in the flow of Liquid Crystals ▦
"I will describe some recent results related to some simplified models of Liquid Crystals, with a view towards geometric free boundaries. A simplified version of the Ericksen-Leslie system has been introduced by FH Lin in the 80's. After describing the state of the art for this system, I will introduce a new one involving a free boundary. Though the mathematical analysis of the system is still very preliminary, some results are still available in 2 dimensions and I will mainly motivate the introduction of geometric variational problems with free boundaries and how one can deal with them, thanks to recent advances in compensated-compactness in odd dimension. I will mention several open questions and possible further generalizations."
November 12, 2024
Nicolas Boullé [Imperial College]
▦ Operator learning without the adjoint ▦
There is a mystery at the heart of operator learning: how can one recover a non-self-adjoint operator from data without probing the adjoint? Current practical approaches suggest that one can accurately recover an operator while only using data generated by the forward action of the operator without access to the adjoint. However, naively, it seems essential to sample the action of the adjoint for learning time-dependent PDEs. In this talk, we will first explore connections with low-rank matrix recovery problems in numerical linear algebra. Then, we will show that one can approximate a family of non-self-adjoint infinite-dimensional compact operators via projection onto a Fourier basis without querying the adjoint.
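To make the "forward actions only" idea concrete, here is a minimal sketch of recovering the compression of an operator onto a truncated Fourier basis using nothing but forward applications; the particular operator below is a stand-in chosen for illustration, not an example from the talk.

```python
# Sketch: apply a black-box operator A to Fourier modes on a periodic grid and
# read off its matrix in that basis -- no evaluations of the adjoint A* needed.
import numpy as np

n, modes = 256, 9                       # grid size and number of Fourier modes kept
x = np.linspace(0, 2 * np.pi, n, endpoint=False)

def forward_only_operator(f):
    """Black box: we may evaluate A f, but never A* g. Here A = d/dx + shift."""
    df = np.fft.ifft(1j * np.fft.fftfreq(n, d=1.0 / n) * np.fft.fft(f))
    return np.real(df) + np.roll(f, 5)  # non-self-adjoint stand-in operator

ks = np.arange(-(modes // 2), modes // 2 + 1)
basis = np.exp(1j * np.outer(ks, x)) / np.sqrt(n)      # rows are Fourier modes e_k
A_hat = np.empty((modes, modes), dtype=complex)
for col, e_k in enumerate(basis):
    # Apply A to real and imaginary parts separately (A has real coefficients).
    Ae = forward_only_operator(e_k.real) + 1j * forward_only_operator(e_k.imag)
    A_hat[:, col] = basis.conj() @ Ae                  # coefficients <e_j, A e_k>
print(np.round(np.abs(A_hat), 2))                      # truncated matrix of A
```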
November 19, 2024
Vincent Martinez [The City University of New York - Hunter College]
▦ On reconstructing unknown state and parameters in hydrodynamic systems from time-series data ▦
This talk will describe a basic approach to the problem of simultaneous state and parameter reconstruction from low-dimensional time-series data in the context of hydrodynamic systems. We present theorems identifying conditions under which this approach is guaranteed to succeed in an idealized setting, giving some clarity to the general issue of reconstructability. Ultimately, the success of these algorithms relies on a crucial nonlinear mechanism common to these systems, the exact role of which will be discussed.
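As a toy illustration of the general flavor of such reconstruction algorithms, the sketch below nudges a copy of the Lorenz-63 system toward a single observed coordinate and recovers the unobserved state over time; the hydrodynamic setting, the parameter-recovery step, and the specific algorithms of the talk are considerably more involved, and the gain and step size here are arbitrary choices.

```python
# Toy nudging-style state reconstruction on Lorenz-63: only the x-coordinate of
# the "truth" is observed, and a second copy is relaxed toward it.
import numpy as np

sigma, rho, beta, mu, dt = 10.0, 28.0, 8.0 / 3.0, 20.0, 1e-3

def lorenz(s):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

truth = np.array([1.0, 1.0, 1.0])
guess = np.array([-5.0, 10.0, 30.0])     # wrong initial condition
for step in range(20000):
    nudge = np.array([mu * (truth[0] - guess[0]), 0.0, 0.0])   # relax toward observed x
    truth = truth + dt * lorenz(truth)
    guess = guess + dt * (lorenz(guess) + nudge)
print(np.abs(truth - guess))             # reconstruction error, including the unobserved y and z
```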
January 14, 2025
Jonas Latz [University of Manchester]
▦ TBD ▦
TBD
February 11, 2025
Nan Chen [University of Wisconsin-Madison]
▦ TBD ▦
TBD
Other Seminars
(Time and location vary)
January 7, 2025
• CMX Special Seminar •
ANB 213
4:00 pm
Juan Toscano [Brown University]
▦ Inferring turbulent velocity and temperature fields and their statistics from Lagrangian velocity measurements using physics-informed Kolmogorov-Arnold Networks (PIKANs) ▦
We propose the Artificial Intelligence Velocimetry-Thermometry (AIVT) method to infer hidden temperature fields from experimental turbulent velocity data. This physics-informed machine learning method enables us to infer continuous temperature fields using only sparse velocity data, eliminating the need for direct temperature measurements. Specifically, AIVT is based on physics-informed Kolmogorov-Arnold Networks (not neural networks) and is trained by optimizing a combined loss function that minimizes the residuals of the velocity data, boundary conditions, and governing equations. We apply AIVT to a unique set of experimental volumetric and simultaneous temperature and velocity data of Rayleigh-Bénard convection (RBC) acquired by combining Particle Image Thermometry and Lagrangian Particle Tracking. This allows us to directly compare AIVT predictions with measurements. We demonstrate the ability to reconstruct and infer continuous and instantaneous velocity and temperature fields from sparse experimental data at a fidelity comparable to direct numerical simulations (DNS) of turbulence. This, in turn, enables us to compute important quantities for quantifying turbulence, such as fluctuations, viscous and thermal dissipation, and QR distribution. This paradigm shift in processing experimental data using AIVT to infer turbulent fields at DNS-level fidelity offers a promising approach for advancing quantitative understanding of turbulence at high Reynolds numbers, where DNS is computationally infeasible.
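Schematically, the combined loss described above takes the following generic form, with the weights, the boundary operator B, and the governing-equation residual N written abstractly; the specific equations (RBC) and the PIKAN parameterization are those of the talk and are not spelled out here.

```latex
% Generic physics-informed composite loss: u_theta and T_theta denote the
% network-represented velocity and temperature, u_obs the sparse velocity data,
% B a boundary-condition operator, N the governing-equation residual.
\[
  \mathcal{L}(\theta)
  = \lambda_{\mathrm{data}} \left\| u_\theta - u_{\mathrm{obs}} \right\|^2
  + \lambda_{\mathrm{bc}}   \left\| \mathcal{B}[u_\theta, T_\theta] \right\|^2
  + \lambda_{\mathrm{pde}}  \left\| \mathcal{N}[u_\theta, T_\theta] \right\|^2 .
\]
```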
Student/Postdoc Seminars
(Will be held at 4pm PST in ANB 213 unless otherwise noted)
October 2024
• CMX Student/Postdoc Seminar •
TBA [Caltech]
▦ TBD ▦
TBD
Meetings and Workshops