
Past Events

AY 2023/2024: Lunch Seminars



October 11, 2023
Daniel Sanz-Alonso [University of Chicago]
▦ Ensemble Kalman methods: statistical theory in high and infinite dimension ▦


This talk aims to demonstrate how existing and new results in high and infinite-dimensional statistical theory can bring novel understanding to ensemble Kalman methods. The first part of the talk develops a non-asymptotic analysis of ensemble Kalman updates in high dimension that rigorously explains why a small ensemble size suffices if the prior covariance has moderate effective dimension due to fast spectrum decay or approximate sparsity. We present our theory in a unified framework, comparing several implementations of ensemble Kalman updates that use perturbed observations, square root filtering, and localization. As part of our analysis, we develop new dimension-free covariance matrix estimation bounds for approximately sparse matrices that may be of independent interest. The second part of the talk studies covariance operator estimation in infinite dimension. We introduce a notion of sparsity for infinite-dimensional covariance operators and a family of thresholded estimators which exploits it. In a small lengthscale regime, we show that thresholded estimators achieve an exponential improvement in sample complexity over the standard sample covariance estimator. Our analysis explains the importance of using covariance localization techniques in ensemble Kalman methods for global data assimilation.
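
For readers who want the basic object being analyzed, here is a minimal sketch of an ensemble Kalman update with perturbed observations (Python/NumPy; the helper's name and interface are ours, and the linear observation model, dimensions, and Gaussian observation noise are illustrative assumptions, not the setup of the talk):

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """One perturbed-observation ensemble Kalman update.

    X: (d, N) prior ensemble, columns are members
    y: (m,) observation;  H: (m, d) linear observation map
    R: (m, m) observation noise covariance
    """
    d, N = X.shape
    A = (X - X.mean(axis=1, keepdims=True)) / np.sqrt(N - 1)
    C = A @ A.T                                   # ensemble sample covariance
    K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)  # ensemble Kalman gain
    # Each member assimilates its own perturbed copy of the observation,
    # keeping the posterior ensemble spread statistically consistent.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 20))   # 20 members, state dimension 10
H = np.eye(3, 10)               # observe the first 3 coordinates
R = 0.1 * np.eye(3)
X_post = enkf_update(X, rng.normal(size=3), H, R, rng)
```

Localization, one of the implementations compared in the talk, would taper or threshold the sample covariance C before the gain is formed; the theory summarized above quantifies how the ensemble size interacts with the effective dimension of the prior covariance.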


October 18, 2023
Eric Hester [University of California Los Angeles]
▦ Modelling multiphase matter: from microparticles to mega-icebergs ▦


The world is multiphase. Water and ice, rock and lava, nucleus and cytoplasm. How can we model these systems, and simulate them efficiently? I'll start with three examples from my research: boat dynamics in dead water, melting icebergs in salty oceans, and phase-separating polymers in microfluidic experiments. The same patterns recur. A seemingly simple partition into PDEs and boundary conditions can miss the narrow transition between phases. This diffuse interface in turn motivates a host of new numerical schemes. The immersed-boundary method, volume-penalty techniques, and phase-field models are a handful of examples. The bulk of my talk will discuss the mathematical tools we need to understand and improve these methods. Signed-distance coordinates give a straightforward vector calculus around arbitrary submanifolds, and multiple-scales matched asymptotics describes the resulting solutions to arbitrary order. I'll also discuss efficient spectral discretisations of these schemes, emphasising how our mathematical tools can improve accuracy and alleviate stiffness, before concluding with some bigger questions for multiphase methods.
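
As one concrete instance of the volume-penalty technique named above, a standard penalized Navier-Stokes formulation reads

$$ \partial_t u + (u\cdot\nabla)u = -\nabla p + \nu\,\Delta u - \frac{\chi(x)}{\eta}\,u, \qquad \nabla\cdot u = 0, $$

where $\chi$ is the indicator function of the solid region and $\eta \ll 1$ is a penalty parameter (this is the common convention from the penalization literature, not necessarily the speaker's notation). The flow is damped to zero inside the obstacle, and the thin transition layer near the boundary is precisely the kind of diffuse interface the asymptotic tools above are built to analyze.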


October 25, 2023
Eviatar Bach [Caltech]
▦ Ensemble data assimilation methods for improving model statistics ▦


In this talk, I will discuss two novel ensemble data assimilation (DA) methods for improving forecast model statistics. The first one is the multi-model ensemble Kalman filter (MM-EnKF). The MM-EnKF can combine multiple model ensembles for both DA and forecasting in a flow-dependent manner, using adaptive model error estimation to provide weights for the separate models and the observations. Our numerical experiments include multiple models with parametric error, different resolved scales, and different fidelities. The MM-EnKF results in significant probabilistic error reductions compared to the best model, as well as to an unweighted multi-model ensemble. We discuss the potential of the MM-EnKF as a method for incorporating machine learning forecast models into the DA process. The second is the ensemble Fokker-Planck filter (EnFPF). We consider the problem of filtering dynamical systems, possibly stochastic, using observations of statistics. The task is naturally formulated as an infinite-dimensional filtering problem in the space of densities. However, for the purposes of tractability, we introduce a mean field state space model and, using interacting particle system approximations to this model, we propose an ensemble method. Our numerical experiments show that the EnFPF is able to correct ensemble statistics and to accelerate convergence to a system's invariant density. We discuss applications of the EnFPF to correcting error in the statistical properties of dynamical models.   


November 15, 2023
Molei Tao [Georgia Institute of Technology]
▦ Optimization, Sampling, and Generative Modeling in Non-Euclidean Spaces ▦


Machine learning in non-Euclidean spaces has been rapidly attracting attention in recent years, and this talk will give some examples of progress on its mathematical and algorithmic foundations. A sequence of developments that eventually leads to non-Euclidean generative modeling will be reported.

More precisely, I will begin with variational optimization, which, together with delicate interplays between continuous- and discrete-time dynamics, enables the construction of momentum-accelerated algorithms that optimize functions defined on manifolds. Selected applications, namely a generic improvement of Transformers and a low-dimensional approximation of high-dimensional optimal transport distance, will be described. Then I will turn these optimizers into an algorithm that samples probability distributions on Lie groups. The efficiency and accuracy of the sampler will be quantified via a new, non-asymptotic error analysis. Finally, I will describe how this sampler can lead to a structurally pleasant diffusion generative model that allows users, given training data that follow any latent statistical distribution on a Lie group, to generate more data on exactly the same manifold that follow the same distribution.

(No substantial prior knowledge of geometry is needed for this talk.)


January 17, 2024
Peter Schröder [Caltech]
▦ Motion from Shape Change ▦


What do cells, spermatozoa, snakes, stingrays, falling cats, astronauts, and platform divers have in common? They all effect motion (rotation and/or translation) through shape change, and the resulting motion can be derived from the change of geometry with the aid of a variational principle. Remarkably, the resulting equations are first order in time rather than the usual second-order equations of Newtonian dynamics. An algorithm that treats all of these settings uniformly for purposes of animation is the subject of my talk, in which we will see cats fall, snakes slither about, artificial jellyfish swim, and Stanford bunnies dropped in water (and many more).

(Joint work with Oliver Gross, Yousuf Soliman, Marcel Padilla, Felix Knöppel, and Ulrich Pinkall.)


January 24, 2024
Abhishek Balakrishna [USC]
▦ A regularity criterion for the 3D Navier-Stokes equations based on finitely many observations. ▦


Data assimilation is a technique that combines observational data with a given model to improve the model's accuracy. We first discuss the application of a particular data assimilation technique (AOT algorithms) to the 3D Navier-Stokes equations (3D NSE), and we describe how a data-assimilated solution approximates the true solution. We then observe that the data-assimilated solution is, in fact, regular (i.e., a strong solution) whenever the observed data satisfy a condition we present, stated in terms of only a finite collection of observations. This result suggests a connection between our condition and the regularity of solutions to the actual 3D NSE. We pursue this line of inquiry to confirm this hypothesis, and formulate such a regularity criterion for the 3D NSE purely in terms of finitely observed data.
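
For orientation, a standard form of the AOT (nudging) algorithm for the 3D NSE is

$$ \partial_t v + (v\cdot\nabla)v - \nu\Delta v + \nabla q = f + \mu\big(I_h(u) - I_h(v)\big), \qquad \nabla\cdot v = 0, $$

where $u$ is the unknown true solution, $I_h$ is an interpolation operator built from finitely many observed values (e.g., nodal values at resolution $h$), and $\mu > 0$ is a nudging parameter. The notation is the usual convention for these algorithms, not necessarily the talk's; the regularity condition of the talk is phrased in terms of exactly such finitely observed quantities.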


January 31, 2024
Georg Menz [UCLA]
▦ Aggregation of financial markets. ▦


We present a formal framework for the aggregation of financial markets mediated by arbitrage. Our main tool is to characterize markets via utility functions and to employ a one-to-one correspondence to limit order book states. Inspired by the theory of thermodynamics, we argue that the arbitrage-mediated aggregation mechanism gives rise to a market-dynamical entropy, which quantifies the loss of liquidity caused by aggregation. We also discuss future directions of research in this emerging theory of market dynamics.

Joint work with Moritz Voss   


February 21, 2024
Michael Murray [UCLA]
▦ Transitions between harmful, benign and no overfitting in neural networks ▦


We will discuss benign overfitting in two-layer ReLU networks trained using gradient descent and hinge loss on noisy data for binary classification. In particular, we consider linearly separable data for which a relatively small proportion of labels are corrupted or flipped. We identify conditions on the margin of the clean data that give rise to three distinct training outcomes: benign overfitting, in which zero loss is achieved and with high probability test data is classified correctly; overfitting, in which zero loss is achieved but test data is misclassified with probability lower bounded by a constant; and non-overfitting, in which clean points, but not corrupt points, achieve zero loss and again with high probability test data is classified correctly. Our analysis provides a fine-grained description of the dynamics of neurons throughout training and reveals two distinct phases: in the first phase clean points achieve close to zero loss, in the second phase clean points oscillate on the boundary of zero loss while corrupt points either converge towards zero loss or are eventually zeroed by the network. We prove these results using a combinatorial approach that involves bounding the number of clean versus corrupt updates across these phases of training.


February 28, 2024
Yuhua Zhu [UCSD]
▦ A PDE-Based Bellman Equation for Continuous-time Reinforcement Learning ▦


In this talk, we address the problem of continuous-time reinforcement learning in scenarios where the dynamics follow a stochastic differential equation. When the underlying dynamics remain unknown and we have access only to discrete-time information, how can we effectively conduct policy evaluation? We begin by highlighting that the commonly used Bellman equation is not always a reliable approximation to the true value function. We then introduce PhiBE, a PDE-based Bellman equation that offers a more accurate approximation to the true value function, especially in scenarios where the underlying dynamics change slowly. Moreover, we extend PhiBE to higher orders, providing increasingly accurate approximations. Additionally, we present a numerical algorithm based on the Galerkin method, tailored for solving PhiBE when only discrete-time trajectory data is available. Numerical experiments are provided to validate the theoretical guarantees we propose.
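
For orientation, if the dynamics follow $dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t$ with reward $r$ and discount rate $\beta$, the true value function satisfies the exact continuous-time Bellman (Feynman-Kac) equation

$$ \beta V(x) = r(x) + b(x)\cdot\nabla V(x) + \tfrac{1}{2}\,\sigma(x)\sigma(x)^{\top} : \nabla^2 V(x), $$

but $b$ and $\sigma$ are only accessible through discrete-time samples of the trajectory. PhiBE can be read as a PDE surrogate for this equation built from such data; the notation here is the standard one, not necessarily the talk's.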



April 17, 2024
Yizhe Zhu [UC Irvine]
▦ Non-backtracking spectral methods for sparse matrices and tensors ▦


The non-backtracking operator, an asymmetric matrix constructed from an undirected graph, connects to various aspects of graph theory, including random walks, graph zeta functions, and expander graphs. It has emerged as a powerful tool for analyzing sparse random graphs, leading to new results for sparse random matrices. Additionally, algorithms employing the non-backtracking operator have achieved optimal sample complexity in many low-rank estimation problems. In my talk, I will present my recent work utilizing the non-backtracking operator to design new algorithms through the introduction of asymmetry into data matrices. The discussion will include estimates of the extreme singular values of sparse random matrices and applications in hypergraph community detection and tensor completion.
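
For reference, the non-backtracking matrix $B$ of a graph $G$ is indexed by the directed edges of $G$ and defined by

$$ B_{(u\to v),(x\to y)} = \mathbf{1}\{v = x\}\,\mathbf{1}\{y \neq u\}, $$

so a walk may step from $u\to v$ to $v\to y$ only if it does not immediately return to $u$. Even when $G$ is undirected, $B$ is asymmetric, and this built-in asymmetry is what the algorithms above exploit.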



April 24, 2024
Siting Liu [UCLA]
▦ Enhancing PDE Computations and Score-Based Generative Models through Optimization ▦


This presentation explores optimization strategies for improving both partial differential equation (PDE) computations and score-based generative models (SGM). In the realm of numerical computations, we introduce a saddle point framework that capitalizes on the inherent structure of PDEs. Integrated seamlessly with existing discretization schemes, this framework eliminates the necessity for nonlinear inversions, paving the way for efficient parallelization. Shifting focus to SGM, we delve into the Wasserstein proximal operator (WPO) to understand the mathematical foundations of SGM: the model can be written as the Wasserstein proximal operator of cross-entropy. Leveraging the PDE formulation of WPO, we propose a WPO-informed score model which showcases accelerated training and reduced data requirements.



May 15, 2024
Venkat Chandrasekaran [Caltech]
▦ On False Positive Error ▦


Controlling the false positive error in model selection is a prominent paradigm for gathering evidence in data-driven science. In model selection problems such as variable selection and graph estimation, models are characterized by an underlying Boolean structure such as presence or absence of a variable or an edge. Therefore, false positive error or false negative error can be conveniently specified as the number of variables/edges that are incorrectly included or excluded in an estimated model. However, the increasing complexity of modern datasets has been accompanied by the use of sophisticated modeling paradigms in which defining false positive error is a significant challenge. For example, models specified by structures such as partitions (for clustering), permutations (for ranking), directed acyclic graphs (for causal inference), or subspaces (for principal components analysis) are not characterized by a simple Boolean logical structure, which leads to difficulties with formalizing and controlling false positive error. We present a generic approach to endow a collection of models with partial order structure, which leads to systematic approaches for defining natural generalizations of false positive error and methodology for controlling this error.

(Joint work with Armeen Taeb, Mateo Diaz, Peter Bühlmann, Parikshit Shah)



May 22, 2024
David Goluskin [University of Victoria]
▦ Studying nonlinear dynamics using polynomial optimization ▦


For nonlinear ODEs and PDEs that cannot be solved exactly, various properties can be inferred by constructing functions that satisfy suitable inequalities. Although the most familiar example is proving nonlinear stability of an equilibrium by constructing Lyapunov functions, similar approaches can produce many other types of mathematical statements, including for systems with chaotic behavior. Such statements include bounds on attractor properties or on transient behavior, estimates of basins of attraction, design of nonlinear controls, and more. Analytical results of these types often give overly conservative results in order to remain tractable. Much stronger results can be achieved by using computational methods of polynomial optimization to construct functions that satisfy the desired inequalities, especially via sum-of-squares constraints that imply nonnegativity of polynomials. This talk will provide an introduction to how polynomial optimization can be used to study ODEs, including some theoretical guarantees of arbitrarily sharp results, followed by some extensions to PDEs.
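
A minimal instance of the approach: for a polynomial vector field $\dot{x} = f(x)$ with $f(0) = 0$, asymptotic stability of the origin follows from finding a polynomial $V$ and some $\epsilon > 0$ with

$$ V(0) = 0, \qquad V(x) - \epsilon\|x\|^2 \in \Sigma, \qquad -\nabla V(x)\cdot f(x) - \epsilon\|x\|^2 \in \Sigma, $$

where $\Sigma$ is the cone of sum-of-squares polynomials. Both membership constraints are semidefinite-representable in the coefficients of $V$, which is what turns the search for a Lyapunov function into a tractable computation.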


May 29, 2024
Matthew Jacobs [UCSB]
▦ Adversarial training and the generalized Wasserstein barycenter problem ▦


Adversarial training is a framework widely used by practitioners to enforce robustness of machine learning models. During the training process, the learner is pitted against an adversary who has the power to alter the input data. As a result, the learner is forced to build a model that is robust to data perturbations. Despite the importance and relative conceptual simplicity of adversarial training, there are many aspects that are still not well-understood (e.g. regularization effects, geometric/analytic interpretations, tradeoff between accuracy and robustness, etc.), particularly in the case of multiclass classification. In this talk, I will show that in the non-parametric setting, the adversarial training problem is equivalent to a multimarginal optimal transport problem that can be viewed as a generalized version of the Wasserstein barycenter problem. The connection between these problems allows us to completely characterize the optimal adversarial strategy and to bring in tools from optimal transport to analyze and compute optimal classifiers. This also has implications for the parametric setting, as the value of the generalized barycenter problem gives a universal upper bound on the robustness/accuracy tradeoff inherent to adversarial training.
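
In the non-parametric setting referred to here, adversarial training is commonly written as the min-sup problem

$$ \inf_{f}\; \mathbb{E}_{(x,y)\sim\mu}\Big[\sup_{\tilde{x}\,:\, d(x,\tilde{x})\le\varepsilon} \ell\big(f(\tilde{x}),y\big)\Big], $$

where the adversary may move each input within a budget $\varepsilon$. The result described above identifies the adversary's optimal strategy in this problem with a multimarginal optimal transport problem of generalized-barycenter type. (The notation is a standard convention, not necessarily the speaker's.)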



AY 2023/2024: Student/Postdoc Seminars


October 6, 2023
• CMX Student/Postdoc Seminar •

Chris Yeh [Caltech]
▦ Learning and Control of Sustainable Energy Systems ▦

Today's energy systems are increasingly unpredictable and subject to distribution shifts, in large part due to the growing adoption of renewable energy. This talk presents two perspectives on the problem of learning and control of sustainable energy systems in the face of such uncertainty. First, I will discuss the voltage control problem, which generally requires accurate information about the electricity grid's topology in order to guarantee network stability. However, accurate topology identification is challenging for existing methods, especially as the grid is subject to increasingly frequent reconfiguration due to the adoption of renewable energy. By combining a nested convex body chasing algorithm with a robust predictive controller, we are able to achieve provably finite-time convergence to safe voltage limits in the online setting where there is uncertainty in both the network topology and the load and generation variations. Our approach can also incorporate existing partial knowledge of the network to improve voltage control performance. We demonstrate the effectiveness of our approach in a case study on a Southern California Edison 56-bus distribution system.

Second, I examine the challenge of controlling sustainable energy systems from the perspective of reinforcement learning (RL). The lack of standardized benchmarks for RL in sustainable energy systems has made it difficult both to track progress on specific domains and to identify bottlenecks for researchers to focus their effort on. In this work, I present SustainGym, a suite of five environments designed to test the performance of RL algorithms on realistic sustainable energy system tasks, ranging from electric vehicle charging to carbon-aware data center job scheduling. The environments test RL algorithms under realistic distribution shifts as well as in multi-agent settings. We show that standard off-the-shelf RL algorithms leave significant room for improved performance, and we highlight the theoretical and practical challenges ahead for introducing RL to real-world sustainability tasks.


October 20, 2023
• CMX Student/Postdoc Seminar •

Lauren Conger [Caltech]
▦ Inequalities for proving convergence of coupled PDEs for modeling distribution shift ▦

We propose a novel framework for analyzing the dynamics of distribution shift in real-world systems that captures the feedback loop between learning algorithms and the distributions on which they are deployed. Prior work largely models feedback-induced distribution shift as adversarial or via an overly simplistic distribution-shift structure. In contrast, we propose a coupled partial differential equation model that captures fine-grained changes in the distribution over time by accounting for complex dynamics that arise due to strategic responses to algorithmic decision-making, non-local endogenous population interactions, and other exogenous sources of distribution shift. We consider two common settings in machine learning: cooperative settings with information asymmetries, and competitive settings where a learner faces strategic users. For both of these settings, when the algorithm retrains via gradient descent, we prove asymptotic convergence of the retraining procedure to a steady state, both in finite and in infinite dimensions, obtaining explicit rates in terms of the model parameters. To do so, we derive new results on the convergence of coupled PDEs that extend what is known for multi-species systems. In this talk I will focus on the classical inequalities used to prove convergence.
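
To illustrate the flavor of the classical machinery (a generic sketch, not the talk's specific argument): for a gradient-flow PDE with stationary state $\pi$, one shows that the relative entropy $H(\rho_t\,\|\,\pi) = \int \rho_t \log(\rho_t/\pi)$ dissipates at the rate of the relative Fisher information and then invokes a log-Sobolev inequality

$$ H(\rho\,\|\,\pi) \le \frac{1}{2\lambda} \int \rho\,\Big|\nabla\log\frac{\rho}{\pi}\Big|^2, $$

so that Grönwall's lemma yields $H(\rho_t\,\|\,\pi) \le e^{-2\lambda t}\,H(\rho_0\,\|\,\pi)$. The coupled, multi-species setting requires extending estimates of exactly this kind.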




November 3, 2023
• CMX Student/Postdoc Seminar •

Noel Csomay-Shanklin [Caltech]
▦ A Hierarchical Perspective on Robotic Control ▦

As robotic systems become increasingly capable and autonomous, both the tasks we ask them to accomplish and the resulting control strategies we develop grow in complexity. In this talk, we will discuss some perspectives on hierarchical controllers as a way to address this complexity by enabling efficient, systematic, and generalizable design processes. We will showcase recent results in achieving dynamically stable behaviors on a 3D hopping robot and a planar biped through the use of hierarchical control architectures. Along the way, we will give a quick primer on nonlinear control with a focus on how to incorporate it into the discussed hierarchical pipeline.


December 1, 2023
• CMX Student/Postdoc Seminar •

Preston Culbertson [Caltech]
▦ Seeing is believing: Leveraging Neural Radiance Fields (NeRFs) for uncertainty-aware robot perception ▦

At some point, every robot must translate its raw perceptual data (e.g., RGB images, lidar scans) into a mathematical model of the scene geometry. In this talk, I will discuss an emerging scene representation in robotics, Neural Radiance Fields (NeRFs), which can be built using only (posed) RGB images. NeRFs promise a number of advantages over traditional scene representations, including the ability to generate differentiable, high-quality, synthetic views of the scene from unseen poses and an inherent ability to represent uncertainty about the scene's geometry.

In the first part of the talk, I will provide a brief tutorial on neural rendering, deriving the rendering procedure used by NeRFs and surveying the current state-of-the-art. Following the tutorial, I will present some of our recent work using NeRFs for robotic tasks such as visual navigation, including an exciting theoretical result showing that the NeRF density can be rigorously interpreted as a stochastic process. I will conclude with a brief discussion of open research directions in robot perception (e.g., incorporating semantics, uncertainty-awareness) and some ideas about how NeRFs can bridge existing gaps. Overall, I hope to make a case that NeRFs are a promising scene/geometry representation for a variety of robotics applications and to provide a thorough explanation of their basic concepts for robotics practitioners.
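
For reference, the rendering procedure derived in the tutorial portion is, in its standard discretized form, the volume-rendering quadrature

$$ \hat{C}(\mathbf{r}) = \sum_{i=1}^{N} T_i\,\big(1 - e^{-\sigma_i\delta_i}\big)\,\mathbf{c}_i, \qquad T_i = \exp\Big(-\sum_{j<i}\sigma_j\delta_j\Big), $$

where $\sigma_i$ and $\mathbf{c}_i$ are the density and color the network predicts at the $i$-th sample along ray $\mathbf{r}$, and $\delta_i$ is the spacing between samples. The stochastic-process result mentioned above is what gives the density $\sigma$ a rigorous probabilistic meaning.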

January 12, 2024
• CMX Student/Postdoc Seminar •

Ethan Epperly [Caltech]
▦ Does sketching work? ▦

In the age of machine learning and data-driven scientific computing, computational scientists and mathematicians want to solve larger and larger computational problems. To meet this need, researchers have developed sketching, a randomized dimensionality reduction technique that promises to solve large linear algebra problems with ease. Sketching has been widely used and studied, yet questions remain about when and if it works. This talk will critically investigate the efficacy of sketching for least-squares problems. After demonstrating deficiencies of some sketching-based methods, this talk will present new research showing that the iterative sketching method is fast, accurate, and stable for least-squares problems. This answers the title question "Does sketching work?" with a qualified "yes".
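
As a companion to the abstract, here is a minimal sketch of the iterative sketching method for overdetermined least-squares problems (Python/NumPy; the function name, Gaussian sketch, sketch size, and fixed iteration count are illustrative choices of this sketch, not a claim about the exact algorithm analyzed in the talk):

```python
import numpy as np

def iterative_sketching(A, b, sketch_factor=4, iters=20, rng=None):
    """Solve min_x ||Ax - b||_2 by iterative sketching.

    The matrix is sketched once; the R factor of the sketched matrix
    serves as an approximate Hessian factor, R^T R ~= A^T A, that is
    reused to cheaply refine the iterate.
    """
    rng = rng or np.random.default_rng()
    m, n = A.shape
    s = sketch_factor * n
    S = rng.normal(size=(s, m)) / np.sqrt(s)   # Gaussian sketch (illustrative)
    _, R = np.linalg.qr(S @ A)                 # R^T R approximates A^T A
    x = np.zeros(n)
    for _ in range(iters):
        g = A.T @ (b - A @ x)                  # gradient of the residual
        # x += (R^T R)^{-1} g, applied via two solves with the factor R
        x += np.linalg.solve(R, np.linalg.solve(R.T, g))
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(1000, 20))
b = rng.normal(size=1000)
x = iterative_sketching(A, b, rng=rng)
```

The sketch is applied only once, so each refinement step costs a pair of matrix-vector products with A plus two small triangular solves; when the sketched factor captures the geometry of A, the error contracts geometrically across iterations.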

January 26, 2024
• CMX Student/Postdoc Seminar •

Joseph Slote [Caltech]
▦ Dimension-free Remez Inequalities and Norm Designs ▦

What global properties of a function can we infer from local data? Remez-type inequalities are one answer to this question: they show the supremum norm of a polynomial can be controlled by its supremum over a small subset of the domain. Remez-type inequalities enjoy widespread use in approximation theory, compressed sensing, and many other areas of applied mathematics. Unfortunately, however, multivariate versions are often limited by a strong dependence on dimension. We will show in this talk that in fact for many spaces and test sets a dimension-free Remez inequality is available. The proof uses only basic probability and analysis and includes some entertaining ideas, such as how to tensorize a one-dimensional inequality without picking up dimension dependence. Applications to quantum and classical learning theory will also be discussed.
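
For concreteness, the classical one-dimensional statement being generalized: if $p$ is a polynomial of degree $n$ and $E \subseteq [-1,1]$ is measurable with $|E| \ge 2 - \epsilon$, then

$$ \sup_{x\in[-1,1]} |p(x)| \le T_n\!\Big(\frac{2+\epsilon}{2-\epsilon}\Big)\,\sup_{x\in E} |p(x)|, $$

where $T_n$ is the degree-$n$ Chebyshev polynomial. The constant depends on the degree and the measure of $E$ but not on which set $E$ is; the talk's results provide multivariate analogues in which no dependence on the number of variables appears.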

Based on the joint work arXiv:2310.07926 with Lars Becker, Ohad Klein, Alexander Volberg, and Haonan Zhang.


March 1, 2024
• CMX Student/Postdoc Seminar •

Apurva Badithela [Caltech]
▦ Formal Test Synthesis and System-level Evaluation for Safety-Critical Autonomous Systems ▦

Autonomous robotic systems have the potential for profoundly impacting society, with applications in search and rescue, improving transportation and mobility, and autonomous space missions, among others. However, these systems need to demonstrate correct behavior despite complexities in both the system design and the operational environment. For mainstream deployment in safety-critical applications, we need rigorous approaches to testing and evaluating these systems. In this talk, I will outline the present challenges with test and evaluation and discuss recent work on formally constructing test plans as well as methods to evaluate subsystems with respect to system-level task requirements. In particular, I will provide frameworks for reactive testing of high-level reasoning and decision making, and for evaluating perception in the context of system-level behavior.


March 8, 2024
• CMX Student/Postdoc Seminar •

Raul Astudillo [Caltech]
▦ Composite Bayesian Optimization for Efficient and Scalable Adaptive Experimentation ▦

Experimentation is a cornerstone of science and a key driver of human progress. Many experimental tasks can be viewed as optimization problems, where the goal is to find the best solution under constraints of cost, time, or other resources. Bayesian optimization has emerged as a powerful method for solving these problems. However, tasks in high-stakes fields such as materials design and drug discovery often present challenges that surpass the capabilities of existing approaches.

In this talk, I will discuss recent advances that overcome these limitations. Specifically, I will discuss how leveraging the composite structure present in many experimental tasks can dramatically improve the efficiency and scalability of Bayesian optimization. This approach opens new avenues for tackling complex problems out of the reach of standard methods. Lastly, I will outline future research directions toward developing a comprehensive framework for efficient, adaptive experimental design.


June 28, 2024

Zoom
10:00am



• CMX Student/Postdoc Seminar •

Juncai He [King Abdullah University of Science and Technology (KAUST)]
▦ General and Specific Neural Operators Based on Multigrid Structure ▦

In this talk, I will present recent results on applying multigrid structures to both general and specific operator learning problems in numerical PDEs. First, we will illustrate MgNet as a unified framework for convolutional neural networks and multigrid methods with some preliminary theories and applications. Then, we will discuss some basic background on operator learning, including the problem setup, a uniform framework, and a general universal approximation result. Motivated by the general definition of neural operators and the MgNet structure, we propose MgNO, which utilizes multigrid structures to parameterize these linear operators between neurons, offering a new and concise architecture in operator learning. This approach provides both mathematical rigor and practical expressivity, with many interesting numerical properties and observations. Finally, I will offer some remarks on our most recent work in using multigrid structures to approximate and learn a specific operator in numerical PDEs.

These results are based on joint work with Prof. Jinchao Xu and Dr. Xinliang Liu.



AY 2023/2024: Other Seminars


September 1, 2023
• CMX Special Seminar •

ANB 213
4:00 pm


Ryan Schneider [UCSD]
▦ Pseudospectral Divide-and-Conquer for Matrix Pencil Diagonalization ▦

Banks, Garza Vargas, Kulkarni, and Srivastava recently developed a randomized algorithm that (with high probability) approximately diagonalizes a matrix in nearly matrix multiplication time [Foundations of Computational Math 2022]. Central to their work is the phenomenon of pseudospectral shattering, in which a small Gaussian perturbation regularizes the pseudospectra of a matrix. The key idea is remarkably simple: with high probability, perturbing a matrix breaks its pseudospectra into disjoint components, allowing a classical divide-and-conquer eigensolver to run successfully. In this talk, we generalize both pseudospectral shattering and a divide-and-conquer algorithm based on it to the generalized eigenvalue problem. The result is a randomized, inverse-free, and highly parallel algorithm for diagonalizing a matrix pencil that - like the single matrix version - runs in nearly matrix multiplication time. Along the way, we explore a number of interesting questions in numerical linear algebra and random matrix theory: How do we define the pseudospectra of a matrix pencil? What effect do independent Gaussian perturbations to a pair of matrices have on the corresponding set of generalized eigenvalues? And finally, can the generalized algorithm offer improved stability over its single matrix counterpart?
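
For readers new to the central object: the $\epsilon$-pseudospectrum of a single matrix is

$$ \Lambda_\epsilon(A) = \big\{ z \in \mathbb{C} : \|(zI - A)^{-1}\| > 1/\epsilon \big\}, $$

and a natural pencil analogue, up to the normalization questions raised in the talk, replaces the resolvent with $(zB - A)^{-1}$. Shattering means that after a small random perturbation these sets break into well-separated components, one per (generalized) eigenvalue, so spectral projectors can be computed stably by divide and conquer.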



September 20, 2023
• CMX Special Seminar •

ANB 213
12:00pm


Jason Bramburger [Concordia University, Montreal]
▦ Transferability of Graph Neural Networks using Graphons ▦

Graph neural networks (GNNs) have become powerful tools for processing graph-based information in various domains. A desirable property of GNNs is transferability, where a trained network can swap in information from a different graph without retraining and retain its accuracy. A recent method of capturing transferability of GNNs is through the use of graphons, which are symmetric, measurable functions representing the limit of large dense graphs. In this talk, I will introduce the basic notions of graphons and GNNs, while also presenting recent rigorous analytical results on transferability using the two concepts. In particular, I will show that this work addresses transferability between both deterministic weighted graphs and simple random graphs and overcomes issues related to the curse of dimensionality that arise in other GNN results. The proposed GNN architectures offer practical solutions for handling graph data of varying sizes while maintaining performance guarantees without extensive retraining.   
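
For reference, the basic object in this story is the integral operator a graphon $W$ induces on signals $f \in L^2[0,1]$,

$$ (T_W f)(x) = \int_0^1 W(x,y)\,f(y)\,dy, $$

the continuum analogue of multiplying a graph signal by a (normalized) adjacency matrix; transferability bounds compare a GNN driven by a finite graph's adjacency operator with the same architecture driven by $T_W$. (This is the standard convention in the graphon-GNN literature; the talk's precise architectures may differ.)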


January 23, 2024
• CMX Special Seminar •

ANB 104
4:00pm


Seung Yeal Ha [Seoul National University]
▦ Four episodes of Kuramoto oscillators ▦

In this talk, we discuss state-of-the-art results on the emergent behaviors of Kuramoto oscillators. In particular, we study relations between the finiteness of collisions and phase-locking in the Kuramoto model. When there is no inertial effect, it is well known that the finiteness of collisions is equivalent to the emergence of phase-locking. Thus, when a Kuramoto ensemble is under the effect of inertia, whether the same equivalence relation holds is an intricate question. In this talk, we show that in a small inertia regime the aforementioned equivalence still holds, whereas in a large inertia regime a homogeneous Kuramoto ensemble with the same natural frequencies can exhibit phase-locking while its oscillators undergo countably many collisions. This highlights the contrasting effect of a large inertia on the phase-locking process.
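
For reference, the inertial Kuramoto model under discussion is commonly written as

$$ m\,\ddot{\theta}_i + \dot{\theta}_i = \Omega_i + \frac{K}{N}\sum_{j=1}^{N}\sin(\theta_j - \theta_i), \qquad i = 1,\dots,N, $$

where $m \ge 0$ is the inertia, $\Omega_i$ are the natural frequencies, and $K$ is the coupling strength; $m = 0$ recovers the first-order Kuramoto model. A collision is an instant at which two phases coincide, and phase-locking means all pairwise phase differences converge as $t \to \infty$.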

This is joint work with Hangjun Cho (SNU) and Jiu-Gang Dong (Dalian Univ. of Technology).


March 14, 2024
• CMX Special Seminar •

ANB 104
4:00pm


Adam Larios [University of Nebraska, Lincoln]
▦ A slice of PDEs: Using circles to find beauty and meaning in the equations of flames and fluids ▦

The flame equation, also known as the Kuramoto-Sivashinsky equation (KSE), is a highly chaotic dynamical system that arises in flame fronts, plasmas, crystal growth, and many other phenomena. Due to its lack of a maximum principle, the KSE is often studied as an analogue to the 3D Navier-Stokes equations (NSE) of fluids. We will discuss some of the relationships between these equations of fire and water. Much progress has been made on the 1D KSE since roughly 1984, but for the 2D KSE even global well-posedness remains a major open question. In analogy with regularizations of the 3D NSE, we present modifications of the 2D KSE which allow for global well-posedness, while still retaining many important features of the 2D KSE. However, as has been demonstrated by Kostianko, Titi, and Zelik, standard regularizations, which work well for Navier-Stokes, destabilize the system when applied to even the 1D KSE. Thus, we present entirely new types of modifications for the 2D KSE. This talk will describe key ideas of the analysis and also show many colorful movies of solutions. The talk will be accessible to students who have taken calculus, but should also be of interest to experts.
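
For reference, the scalar KSE in question reads

$$ u_t + \Delta^2 u + \Delta u + \tfrac{1}{2}|\nabla u|^2 = 0, $$

which in one dimension, after differentiating and setting $v = u_x$, takes the familiar form $v_t + v_{xxxx} + v_{xx} + v v_x = 0$. The fourth-order term rules out a maximum principle, while the backward-diffusion term $\Delta u$ injects energy at large scales; together these are what make the 2D global well-posedness question delicate.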


June 6, 2024
• CMX Special Seminar •

ANB 213
4:30pm


Sebastian Reich [University of Potsdam]
▦ The Mathematics of Nurturing a Digital Twin ▦

With the invention of digital computers in the mid-1940s, John von Neumann envisioned numerical weather prediction (NWP) as a revolutionising application of this technology and famously stated in a talk at Princeton in 1950: "All processes that are stable we shall predict. All processes that are unstable we shall control." In today's language, von Neumann had sown the seeds for the modern concept of NWP as a digital twin (DT) of actual weather processes in the atmosphere, its physical twin (PT). Since the days of von Neumann, DTs have found groundbreaking applications in science and engineering. Still, many challenges remain when considering highly complex multi-scale processes arising from, e.g., aeronautics, cognition, ecology, and pharmacology. Mathematically speaking, a DT can often be described as a partially observed Markov decision process (POMDP). Solving POMDPs computationally constitutes one of the most challenging problems around. Still, tremendous progress has been made in closely related fields such as data assimilation, uncertainty quantification, control and optimisation, and model reduction. A key emerging question is thus how we can successfully synthesise these advances into a tool broadly applicable to DTs. In my talk I will attempt to map out one possible roadway.


June 20, 2024
• CMX Special Seminar •

ANB 213
12:00pm


Wuchen Li [University of South Carolina]
▦ Information Gamma Calculus: Convexity Analysis for Stochastic Differential Equations ▦

We study the Lyapunov convergence analysis for degenerate and non-reversible stochastic differential equations (SDEs). We apply the Lyapunov method to the Fokker-Planck equation, in which the Lyapunov functional is chosen as a weighted relative Fisher information functional. We derive a structure condition and formulate the Lyapunov constant explicitly. Given a positive Lyapunov constant, we prove exponential convergence of the probability density function towards its invariant distribution in the L1 norm. Several examples are presented: underdamped Langevin dynamics with variable diffusion matrices, quantum SDEs in Lie groups (Heisenberg group, displacement group, and Martinet sub-Riemannian structure), three oscillator chain models with nearest-neighbor couplings, and underdamped mean-field Langevin dynamics (weakly self-consistent Vlasov-Fokker-Planck equations). If time allows, some extensions to time-inhomogeneous SDEs will be discussed.
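
For reference, the (unweighted) relative Fisher information of a probability density $\rho$ with respect to the invariant density $\pi$ is

$$ \mathrm{I}(\rho\,\|\,\pi) = \int \rho\,\Big|\nabla \log\frac{\rho}{\pi}\Big|^2\,dx, $$

and, roughly speaking, the weighted functionals used here replace the Euclidean norm above with a matrix-weighted norm chosen so that the dissipation of the degenerate, non-reversible dynamics becomes visible; the structure condition and the explicit Lyapunov constant then come from differentiating this functional along the Fokker-Planck flow. (This summary is a standard reading of the setup, not the talk's precise formulation.)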



AY 2023/2024: Meetings & Workshops