
Past Events

AY 2022/2023: Lunch Seminars



October 19, 2022
Matthias Morzfeld [University of California San Diego]
▦ Markov chain Monte Carlo and high-dimensional, nonlinear inverse problems in Earth Science ▦


Earth science nearly always requires estimating models, or model parameters, from data. This could mean inferring the state of the Southern Ocean from Argo floats, computing the state of our atmosphere from atmospheric observations of the past six hours, or constructing a resistivity model of the Earth's subsurface from electromagnetic data. All of these problems have in common that the number of unknowns is large (millions to hundreds of millions) and that the underlying processes are nonlinear. They also share the property that they can be formulated as drawing samples from a high-dimensional Bayesian posterior distribution.

Due to the nonlinearity, Markov chain Monte Carlo (MCMC) is a good candidate for the numerical solution of geophysical inverse problems. But MCMC is known to be slow when the number of unknowns is large. In this talk, I will argue that an unbiased solution of nonlinear, high-dimensional problems remains difficult, but one can construct efficient and accurate biased estimators that are feasible to apply to high-dimensional problems. I will show examples of biased estimators in action and invert electromagnetic data using an approximate MCMC sampling algorithm called the RTO-TKO (randomize-then-optimize -- technical knock-out).
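For readers less familiar with MCMC, the sketch below shows a generic random-walk Metropolis sampler in Python; it is only the textbook baseline whose slowdown in high dimensions motivates the talk, not the RTO-TKO sampler itself, and the log-posterior and step size are placeholder assumptions.

```python
import numpy as np

def random_walk_metropolis(log_post, x0, num_samples=10_000, step=0.5, seed=0):
    """Textbook random-walk Metropolis sampler (illustrative baseline only)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    samples = []
    for _ in range(num_samples):
        proposal = x + step * rng.standard_normal(x.shape)   # symmetric Gaussian proposal
        lp_prop = log_post(proposal)
        if np.log(rng.random()) < lp_prop - lp:              # Metropolis accept/reject
            x, lp = proposal, lp_prop
        samples.append(x.copy())
    return np.array(samples)

# Placeholder target: a 2D Gaussian posterior; real geophysical posteriors have
# millions of unknowns, which is where plain MCMC of this kind becomes impractical.
chain = random_walk_metropolis(lambda z: -0.5 * np.sum(z**2), x0=np.zeros(2))
```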


October 26, 2022
Levon Nurbekyan [University of California Los Angeles]
▦ Mean-field games ▦


Mean-field games (MFG) provide a framework for modeling large systems of interacting agents that play non-cooperative differential games. In PDE form, an MFG system comprises a Hamilton-Jacobi PDE coupled with a Kolmogorov-Fokker-Planck or continuity equation. Such systems are challenging to analyze and solve due to their nonlinear forward-backward structure. I will discuss computational methods and related theoretical aspects of these PDEs. More specifically, I will present a new framework for solving systems with nonlocal interactions and new insights into the variational structure of MFG systems. One of the most efficient techniques for the analysis and numerical solution of MFG systems is the extension of the celebrated Benamou-Brenier formulation of the optimal transportation problem to MFG systems. More specifically, under suitable structural assumptions, the MFG system can be regarded as the system of first-order optimality conditions for a convex energy. Such systems are called potential, and extensions of the Benamou-Brenier technique beyond these systems were unknown. I will show that non-potential systems that satisfy the so-called Lasry-Lions monotonicity condition admit a monotone-inclusion version of the Benamou-Brenier formulation. I will present new numerical methods based on this formulation and discuss its possible applications in the analysis of non-potential MFG systems. Finally, I will conclude with a few remarks on the applications of MFG to problems in swarm control and economics.
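For context, one standard textbook form of the MFG PDE system (a generic presentation, not necessarily the exact formulation used in the talk) couples a backward Hamilton-Jacobi-Bellman equation for the value function u with a forward Fokker-Planck/continuity equation for the agent density m:

```latex
% A generic mean-field game system (textbook form; notation is illustrative):
% backward Hamilton-Jacobi-Bellman equation for the value function u,
% forward Fokker-Planck / continuity equation for the agent density m.
\begin{aligned}
  -\partial_t u - \nu \Delta u + H(x, \nabla u) &= f\big(x, m(t,\cdot)\big), \\
  \partial_t m - \nu \Delta m - \operatorname{div}\!\big(m\, \nabla_p H(x, \nabla u)\big) &= 0, \\
  m(0,\cdot) &= m_0, \qquad u(T,x) = g\big(x, m(T,\cdot)\big).
\end{aligned}
```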


November 16, 2022
Deep Ray [University of Southern California]
▦ VarMiON: A variationally mimetic operator network. ▦


Operator networks have emerged as promising deep learning tools for approximating the solution to partial differential equations (PDEs). These networks map input functions that describe material properties, forcing functions, and boundary data to the solution of a PDE, i.e., they learn the solution operator of the PDE. In this talk, we consider a new type of operator network, called VarMiON, that mimics the variational or weak formulation of PDEs. A precise error analysis of the VarMiON solution reveals that the approximation error contains contributions from the error in the training data, the training error, the quadrature error in sampling input and output functions, and a "covering error" that measures how well the input training functions cover the space of input functions. Further, the error estimate also depends on the stability constants for the true solution operator and its VarMiON approximation. Numerical experiments are presented for a canonical elliptic PDE to demonstrate the efficacy and robustness of the VarMiON as compared to a standard DeepONet.
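For orientation, the sketch below shows a minimal (unstacked) DeepONet, the standard operator-network baseline that the abstract compares against; it is written in PyTorch with illustrative layer sizes, and it does not reproduce the VarMiON's variationally mimetic structure.

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Minimal unstacked DeepONet (illustrative sketch, not the VarMiON).
    The branch net encodes the input function sampled at fixed sensor points,
    the trunk net encodes a query point y in the PDE domain, and the output
    is the inner product of the two embeddings plus a scalar bias."""
    def __init__(self, num_sensors, embed_dim=64, coord_dim=1):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(num_sensors, 128), nn.Tanh(),
                                    nn.Linear(128, embed_dim))
        self.trunk = nn.Sequential(nn.Linear(coord_dim, 128), nn.Tanh(),
                                   nn.Linear(128, embed_dim))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, u_sensors, y):
        # u_sensors: (batch, num_sensors) samples of the input function
        # y: (batch, coord_dim) query points in the PDE domain
        return torch.sum(self.branch(u_sensors) * self.trunk(y), dim=-1) + self.bias
```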


January 18, 2023
Guido Montufar [University of California Los Angeles]
▦ On the maximum and expected complexity of maxout networks ▦


Learning with artificial neural networks relies on the complexity of the representable functions and their parameters. In this talk I present results on the maximum and expected number of linear regions of the functions represented by neural networks with maxout units. In the first part, I discuss counting formulas and sharp upper bounds for the number of linear regions, with connections to Minkowski sums of polytopes. This is based on work with Yue Ren and Leon Zhang. In the second part, I discuss the behavior for generic parameters and present upper bounds on the expected number of regions given a probability distribution over the parameters and parameter initialization strategies. This is based on work with Hanna Tseran.   


January 25, 2023
Jamie Haddock [Harvey Mudd College]
▦ Connections between Iterative Methods for Linear Systems and Consensus Dynamics on Networks ▦


There is a well-established linear algebraic lens for studying consensus dynamics on networks, which has yielded significant theoretical results in areas like distributed computing, modeling of opinion dynamics, and ranking methods. Recently, strong connections have been made between problems of consensus dynamics on networks and classical iterative methods in numerical linear algebra. This talk will discuss instances of these connections, in particular between the gossip methods in distributed computing and the Kaczmarz methods in numerical linear algebra. We will present theoretical convergence results, empirical and numerical simulation results, and discuss future work in applying these numerical linear algebraic techniques to broader and more complex consensus dynamics models, especially those coming from opinion dynamics and ranking.   
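As a concrete point of contact between the two fields mentioned above, here is a minimal randomized pairwise-gossip iteration in Python (a standard sketch, not the specific methods analyzed in the talk); each step averages the values held by the endpoints of a randomly chosen edge, and the node values converge to the network-wide average.

```python
import numpy as np

def randomized_gossip(values, edges, num_iters=20_000, seed=0):
    """Randomized pairwise gossip: repeatedly pick a random edge (i, j) and
    replace both endpoint values by their average (consensus on the mean)."""
    rng = np.random.default_rng(seed)
    x = np.array(values, dtype=float)
    for _ in range(num_iters):
        i, j = edges[rng.integers(len(edges))]
        x[i] = x[j] = 0.5 * (x[i] + x[j])
    return x

# Toy usage on a path graph with 5 nodes; the result approaches the mean of the values.
print(randomized_gossip([0.0, 1.0, 2.0, 3.0, 4.0], edges=[(0, 1), (1, 2), (2, 3), (3, 4)]))
```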


February 22, 2023
Damek Davis [Cornell University]
▦ Leveraging "partial" smoothness for faster convergence in nonsmooth optimization ▦


First-order methods in nonsmooth optimization are often described as "slow." I will present two (locally) accelerated first-order methods that violate this perception: a superlinearly convergent method for solving nonsmooth equations, and a linearly convergent method for solving "generic" nonsmooth optimization problems. The key insight in both cases is that nonsmooth functions are often "partially" smooth in useful ways.  


April 19, 2023
Qin Li [University of Wisconsin Madison]
▦ Mean field theory in Inverse Problems: from Bayesian inference to overparameterization of networks ▦


Bayesian sampling and neural networks are seemingly two different machine learning areas, but they both deal with many-particle systems. In sampling, one evolves a large number of samples (particles) to match a target distribution function, and in optimizing over-parameterized neural networks, one can view neurons as particles that feed each other information in the DNN flow. These perspectives allow us to employ mean-field theory, a powerful tool that translates the dynamics of a many-particle system into a partial differential equation (PDE), so that rich PDE analysis techniques can be used to understand both the convergence of sampling methods and the zero-loss property of over-parameterized ResNets. We showcase the use of mean-field theory in these two machine learning areas.


April 26, 2023
Krishnakumar Balasubramanian [University of California, Davis]
▦ Regularized SVGD, Gaussian Variational Inference and Wasserstein Gradient Flows ▦


Stein Variational Gradient Descent (SVGD) and Gaussian Variational Inference (GVI) are two optimization-inspired, deterministic, particle-based algorithms for sampling from a given target density. They are considered alternatives to the more classical randomized MCMC methods. Both approaches can be thought of as approximate but practically implementable discretizations of the Wasserstein Gradient Flow (WGF) corresponding to KL-divergence minimization. However, the theoretical properties of the SVGD and GVI methods, and comparisons to more standard MCMC methods, are largely unknown.

Mean-field analysis of SVGD reveals that the gradient flow corresponding to the SVGD algorithm (i.e., the Stein Variational Gradient Flow (SVGF)) only provides a constant-order approximation to the WGF. In the first part of the talk, I will introduce a Regularized-SVGF (R-SVGF) which interpolates between the SVGF and the WGF. I will discuss various theoretical properties of the R-SVGF and its time-discretization, including convergence to equilibrium, existence and uniqueness of weak solutions, and stability of the solutions under various assumptions on the target density. We provide preliminary numerical evidence of the improved performance offered by the regularization.

GVI provides parametric approximations to the WGF by restricting the flow to the Bures-Wasserstein (BW) space of Gaussians endowed with the Wasserstein distance. In the second part of the talk, I will introduce the (Stochastic) Forward-Backward GVI (FB-GVI) algorithm for implementing GVI. This algorithm exploits the composite structure of the KL divergence, which can be written as the sum of a smooth term (the potential) and a non-smooth term (the entropy) over the BW space. For the FB-GVI algorithm, I will discuss state-of-the-art convergence guarantees when the target density is log-smooth and log-concave, as well as the first convergence guarantees to first-order stationary solutions when the target density is only log-smooth.   
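For reference, the sketch below implements one step of vanilla SVGD with an RBF kernel; it is the unregularized baseline, not the R-SVGF or FB-GVI algorithms introduced in the talk, and the kernel bandwidth and step size are illustrative assumptions.

```python
import numpy as np

def svgd_step(X, score, bandwidth=1.0, step=0.1):
    """One vanilla SVGD update on a particle set X of shape (n, d).
    score(X) returns the gradient of log p at each particle, shape (n, d)."""
    n = X.shape[0]
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / bandwidth)                        # RBF kernel matrix (n, n)
    # Repulsion term: sum over j of grad_{x_j} k(x_j, x_i) for the RBF kernel.
    repulsion = (X * K.sum(axis=1, keepdims=True) - K @ X) * (2.0 / bandwidth)
    phi = (K @ score(X) + repulsion) / n                     # kernelized update direction
    return X + step * phi

# Toy usage: drive particles toward a standard Gaussian, whose score is -x.
rng = np.random.default_rng(0)
particles = rng.normal(loc=5.0, scale=2.0, size=(200, 2))
for _ in range(500):
    particles = svgd_step(particles, score=lambda Z: -Z)
```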


*Postponed date TBD*
Anna Ma [University of California Irvine]
▦ Doubly Noisy Linear Systems and the Kaczmarz Algorithm ▦


Large-scale linear systems, Ax=b, frequently arise in data science and scientific computing, demanding effective iterative methods to solve them. Often, these systems are noisy due to operational errors or faulty data-collection processes. In the past decade, the randomized Kaczmarz algorithm (RK) was studied extensively as an efficient iterative solver for such systems. However, the convergence analysis of RK in the noisy regime is limited and considers measurement noise only in the right-hand-side vector, b. Unfortunately, that is not always the case, and the coefficient matrix A can also be noisy. In this talk, we motivate and discuss doubly noisy linear systems and the performance of the Kaczmarz algorithm applied to such systems. The presented work is joint work with El Houcine Bergou, Soumia Boucherouite, Aritra Dutta, and Xin Li.
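For readers who have not seen it, below is a minimal randomized Kaczmarz iteration on a clean (noise-free) system; the doubly noisy setting analyzed in the talk perturbs both A and b, which this sketch does not model.

```python
import numpy as np

def randomized_kaczmarz(A, b, num_iters=5_000, seed=0):
    """Randomized Kaczmarz: at each step, project the iterate onto the
    hyperplane of a row sampled with probability proportional to its
    squared norm (the standard Strohmer-Vershynin sampling rule)."""
    rng = np.random.default_rng(seed)
    row_norms_sq = np.sum(A ** 2, axis=1)
    probs = row_norms_sq / row_norms_sq.sum()
    x = np.zeros(A.shape[1])
    for _ in range(num_iters):
        i = rng.choice(A.shape[0], p=probs)
        x += (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
    return x

# Toy usage on a consistent overdetermined system.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20))
x_true = rng.standard_normal(20)
print(np.linalg.norm(randomized_kaczmarz(A, A @ x_true) - x_true))
```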


May 24, 2023
Alex Townsend [Cornell University]
▦ A new paradigm for computing spectra, pseudospectra, and continuous spectra of infinite-dimensional operators ▦


   Traditional methods for solving infinite-dimensional eigenproblems usually follow the discretize-then-solve paradigm. Discretize first, and then solve the matrix eigenproblem. The discretize-then-solve paradigm can be tricky for infinite-dimensional eigenproblems as the spectrum of matrix discretizations may not converge to the spectrum of the operator. Moreover, it is impossible to fully capture the continuous part of the spectrum with a finite-sized matrix eigenproblem. In this talk, we will discuss an alternative solve-then-discretize paradigm for infinite-dimensional eigenproblems that is rigorously justified. To compute the discrete spectrum and pseudospectra, we will discuss infinite-dimensional analogues of contour-based eigensolvers and randomized linear algebra. For the continuous spectra, we will show how to calculate a smoothed version of the so-called spectral measure. I will demonstrate that our techniques avoid spectral pollution on a magnetic tight-binding model of graphene.   


May 31, 2023
Christopher Miles [University of California, Irvine]
▦ Decoding spatial stochastic RNA dynamics from imaging data with point process inference ▦


Quantitative models in biology largely partition into classical mechanistic models (e.g., differential equations) and statistical or machine-learned models. The former tend to fail to faithfully incorporate noisy, heterogeneous data, whereas the latter struggle to extract underlying interpretable mechanisms. In this talk, I will share a humble case study in the pursuit of bridging these techniques in the context of a specific biological problem. Advances in microscopy have provided snapshot images of individual RNA molecules within a nucleus. Decoding the underlying spatiotemporal dynamics is important for understanding gene expression, but challenging due to the static, heterogeneous, and stochastic nature of the data. I will write down a stochastic reaction-diffusion model and show that observations of this process follow a spatial point (Cox) process constrained by a reaction-diffusion PDE. Inference on these data resembles a classical "inverse problem" but differs in that the observations are of individual particles rather than concentrations. We perform inference using variational Bayesian Monte Carlo with promising results. However, a number of seemingly open computational challenges remain in the development of scalable and extendable techniques for this inverse problem.

This work is in collaboration with the Fangyuan Ding lab of Biomedical Engineering at UCI.   


AY 2022/2023: Student/Postdoc Seminars


Oct 14, 2022
• CMX Student/Postdoc Seminar •

ANB 105
1:00pm


Robert Webber [Caltech]
▦ Randomized low-rank approximation with higher accuracy and reduced costs ▦

Randomized low-rank matrix approximation is one of the great success stories of randomized numerical linear algebra. It is a fast, scalable approach that has been widely adopted in scientific computing. However, the approach is most successful at approximating matrices with fast singular value decay. When the singular values decay slowly, there can be significant errors! Our ongoing work improves the accuracy of randomized low-rank matrix approximation for matrices with slow singular value decay. We analyze an approach called 'randomized block Krylov iteration', which builds a rich approximation space from the output of repeated matrix-matrix multiplications. Our contributions are twofold: first, we derive an efficient implementation of randomized block Krylov iteration that reduces the standard runtime by half, and second, we establish error bounds that quantify the rapid convergence of randomized block Krylov iteration, demonstrating its advantage over alternative methods.
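To fix ideas, here is a textbook randomized block Krylov low-rank approximation in Python; it is the straightforward version, not the efficient implementation discussed in the talk, and practical codes reorthogonalize between multiplications for numerical stability.

```python
import numpy as np

def randomized_block_krylov_svd(A, rank, block_size=10, depth=3, seed=0):
    """Textbook randomized block Krylov low-rank approximation: build an
    orthonormal basis Q for span{A*Om, (A A^T) A*Om, ..., (A A^T)^depth A*Om},
    then take the SVD of the small projected matrix Q^T A."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], block_size))
    Y = A @ Omega
    blocks = [Y]
    for _ in range(depth):
        Y = A @ (A.T @ Y)          # in practice, orthonormalize here for stability
        blocks.append(Y)
    Q, _ = np.linalg.qr(np.hstack(blocks))
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]
```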


Nov 11, 2022
• CMX Student/Postdoc Seminar •

Ricardo Baptista [Caltech]
▦ Towards consistent nonlinear filtering in high-dimensions ▦

Sequential inference problems are ubiquitous in scientific applications ranging from operational aspects of numerical weather prediction to tracking infectious diseases. Given a sequence of observations, the ensemble Kalman filter (EnKF) is a widely popular online algorithm for estimating the hidden state of these systems and their uncertainty. While the EnKF yields robust state estimates for many high-dimensional and non-Gaussian models, this method is limited by linear transformations and is generally inconsistent with the Bayesian solution. In this presentation, we will discuss how transportation of measures can be used to consistently transform a forecast ensemble into samples from the filtering distribution. This approach can be understood as a natural generalization of the EnKF to nonlinear updates, and reduces the intrinsic bias of the EnKF with a marginal increase in computational cost. In small-sample settings, we will show how to estimate transport maps for high-dimensional inference problems by exploiting low-dimensional structure in the filtering distribution. Finally, we will demonstrate the benefit of this framework for filtering with chaotic dynamical systems and aerodynamic flows.   
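As background, the sketch below gives the standard stochastic EnKF analysis step with perturbed observations; this is the linear update whose limitations the transport-map approach in the talk addresses, and the variable shapes and interfaces are illustrative assumptions.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """Stochastic EnKF analysis step with perturbed observations.
    X: (n, N) forecast ensemble, y: (m,) observation, H: (m, n) observation
    operator, R: (m, m) observation-error covariance."""
    N = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)          # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)       # observed anomalies
    C_yy = HA @ HA.T / (N - 1) + R
    C_xy = A @ HA.T / (N - 1)
    K = C_xy @ np.linalg.inv(C_yy)                 # ensemble Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(y.size), R, size=N).T
    return X + K @ (Y - HX)                        # updated (analysis) ensemble
```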


Dec 2, 2022
• CMX Student/Postdoc Seminar •

Margaret Trautner [Caltech]
▦ Learning Homogenization for Elliptic Operators with Discontinuous Parameters ▦

Elliptic partial differential equations describe important physical phenomena such as elastic material deformation and fluid flow through porous media. When the coefficient in this equation is discontinuous, involves corner interfaces, or changes across multiple scales, numerically solving the equation can be computationally expensive. In this talk, we will discuss in depth the challenges associated with learning homogenized solutions to equations with such parameterizations. A full analysis in one dimension gives insight into addressing the problem in higher dimensions. PDE theory for solutions near corner interfaces in two dimensions suggests suitable learning methods. An approximation theorem for the homogenized map is given, and different approaches to learning are discussed.
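For background, the classical one-dimensional result (a standard textbook fact, stated here for context rather than taken from the talk) is that for a rapidly oscillating periodic coefficient the homogenized coefficient is the harmonic mean of the microscale coefficient:

```latex
% Classical 1D periodic homogenization (textbook fact, for context), with a(y)
% 1-periodic and bounded away from zero: the solutions of
%   -\frac{d}{dx}\!\left( a\!\left(\tfrac{x}{\epsilon}\right) \frac{d u^\epsilon}{dx} \right) = f
% converge as \epsilon \to 0 to the solution of the constant-coefficient problem with
\bar{a} \;=\; \left( \int_0^1 \frac{1}{a(y)}\, dy \right)^{-1},
\qquad
-\bar{a}\, \frac{d^2 \bar{u}}{dx^2} = f .
```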


Jan 13, 2023
• CMX Student/Postdoc Seminar •

Ethan Epperly [Caltech]
▦ Leave-one-out randomized algorithms for matrix computations ▦

"Leave-one-out" is a powerful idea in statistics and machine learning in which one recomputes a quantity multiple times, each calculation omitting a different data point. This talk presents two algorithms which apply the leave-one-out idea to randomized matrix computations. The first algorithm, XTrace, uses the leave-one-out idea to compute highly accurate estimates of the trace of a matrix defined implicitly through matrix–vector products. The second algorithm, the matrix jackknife, uses a leave-one-out technique to assess the variability of the output of a randomized matrix computation from a single execution of the algorithm, providing a computationally cheap way to assess the reliability of the computed output. Both of these algorithms are made lightning-fast by efficient algorithms to perform the leave-one-out procedure on the randomized SVD, a core primitive in many randomized matrix algorithms.   


Feb 3, 2023
• CMX Student/Postdoc Seminar •

Eitan Levin [Caltech]
▦ Infinitely-instantiable descriptions of convex sets. ▦

Classical algorithms are defined on inputs of different sizes. In contrast, data-driven algorithms are usually only defined on inputs of the same size as the training data. What does it mean for an algorithm to be defined on infinitely many input sizes? How do we describe such algorithms, and how do we parametrize and search over them?

In this talk, we tackle these questions for convex optimization-based algorithms. Describing such algorithms reduces to describing convex sets. These, in turn, are often "freely" described, meaning that their description makes instantiation in every dimension obvious. Examples include unit balls of standard norms defined on vectors of any size, graph parameters defined for graphs of any size, and (quantum) information theoretic quantities defined for distributions on any number of (qu)bits.

We show that such free descriptions of convex sets arise from two ingredients. First, group invariance and the recently-identified phenomenon of representation stability. Second, embeddings and projections relating different-sized problem instances. We combine these ingredients to obtain parametrized families of infinitely instantiable convex sets. We also identify consistency conditions relating sets in different dimensions that are satisfied in a variety of applications, and obtain parametrizations respecting these conditions. Our parametrizations can be obtained computationally.   


Feb 24, 2023
• CMX Student/Postdoc Seminar •

Jorge Garza Vargas [Caltech]
▦ Theoretical guarantees for the shifted QR algorithm ▦

The shifted QR algorithm is a powerful and widely used iterative method for computing the eigenvalues and eigenvectors of a matrix (both in the symmetric and non-symmetric case). In the 1960s, the effectiveness of this algorithm on symmetric inputs was fully explained after a sequence of papers (by different authors) showed that Wilkinson's variant of the algorithm converges rapidly on any (symmetric) input.

Since then, despite its sustained use, the convergence properties of the shifted QR algorithm on non-symmetric inputs have remained elusive. On the practical side, a complex but empirically reliable and practical version of the algorithm has been obtained over the decades by addressing non-convergent cases with heuristic modifications and improvements. In this talk I will present recent theoretical work that proves that a relatively simple randomized version of the shifted QR algorithm converges rapidly on any (possibly non-symmetric) input. This is joint work with Jess Banks and Nikhil Srivastava.


March 10, 2023
• CMX Student/Postdoc Seminar •

Oliver Gross [Technische Universität Berlin]
▦ Filament Based Plasma ▦

The arcs visible in our sun's atmosphere are some of the most awe-inspiring natural spectacles and their visual depiction is of great interest in scientific visualization, special effects, and games. These visually prominent features can be modeled as magnetic filaments whose shape is governed by the magnetohydrostatic equation. In this talk I will show how to procedurally generate solar atmospheres by representing the magnetic filaments as curves with thickness. These curves provide a Lagrangian representation and their initial configuration can be generated from magnetic flux given as a scalar texture on the sun's surface. Subsequently, the shape of the filaments can be determined based on a variational formulation which is derived from a geometric interpretation of the problem.   


April 14, 2023
• CMX Student/Postdoc Seminar •

Marianna Russkikh [Caltech]
▦ Scaling limit behavior of the dimer model ▦

The dimer model is one of the best-known models of statistical physics, first introduced to model a diatomic gas. Such models can be used to represent many other statistical physics systems, including the famous Ising model of ferromagnetism. In this talk, we discuss some scaling limit phenomena of the dimer model.   


April 28, 2023
• CMX Student/Postdoc Seminar •

Yixuan Wang [Caltech]
▦ Self-similar blowups of various models in fluid dynamics ▦

Blowup of the 3D Euler and Navier-Stokes equations remains one of the most challenging and important problems in PDE. We will illustrate the study of self-similar blowups using various 1D models that capture the essential features and mechanism of the Euler equations. We reduce the blowup analysis to the stability analysis of the profile equation. An approximate profile needs to be constructed, either analytically or numerically, and then stability analysis can be established in a weighted L^2 or L^\infty norm. Finally, we discuss the implications of the results for the 3D equations, where many of the insights from the model results corroborate recent numerical findings for the 3D equations.


May 12, 2023
• CMX Student/Postdoc Seminar •

Eliza O'Reilly [Caltech]
▦ Random Tessellation Forests ▦

Random forests are a popular class of algorithms used for regression and classification that are ensembles of randomized decision trees built from axis-aligned partitions of the feature space. One such variant, called Mondrian random forests, was proposed to handle the online setting and is the first class of random forests for which minimax rates were obtained in arbitrary dimension. However, the restriction to axis-aligned splits fails to capture dependencies between features, and oblique splits have shown improved empirical performance. By viewing the Mondrian process as a special case of the stable under iteration (STIT) process in stochastic geometry, we utilize the theory of stationary random tessellations to show that STIT random forests achieve minimax rates for Lipschitz and C^2 functions and illustrate how they can overcome the curse of dimensionality in high-dimensional feature spaces.


May 19, 2023
• CMX Student/Postdoc Seminar •

Samuel Lanthaler [Caltech]
▦ Learning operators with neural networks ▦

Neural networks have proven to be effective approximators of high-dimensional functions in a wide variety of applications. In scientific computing the goal is often to approximate an underlying operator, which defines a mapping between infinite-dimensional spaces of input and output functions. Extensions of neural networks to this infinite-dimensional setting have been proposed in recent years, giving rise to the emerging field of operator learning. Despite their practical success, our theoretical understanding of these approaches is still in its infancy: why are neural networks so effective in these applications? In this talk, I will discuss work addressing what neural networks can and cannot achieve in this context.



AY 2022/2023: Other Seminars


Aug 01, 2022
• CMX Special Seminar •

ANB 105 & Zoom
11:00am


Peter Park [Harvard Medical School]
▦ Mutational Signature Analysis and its Applications ▦

Whole-genome sequencing of a large number of individuals generates hundreds of terabytes of data, requiring efficient computational methods for in-depth analysis. Mutational signature analysis is a recent computational approach for interpreting somatic mutations identified from sequencing data. To discover "signatures" (96-dimensional vectors representing possible mutation types and their nucleotide context), a conventional approach utilizes non-negative matrix factorization; to match a signature to a known catalog of signatures, non-negative least squares is typically used. I will describe some of the shortcomings of the current approaches and our solutions. Applications include detection of homologous recombination deficiency in tumor samples and prediction of response to immunotherapy.
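To make the two conventional steps concrete, here is a minimal sketch using off-the-shelf tools; the synthetic counts, the number of signatures, and the choice of scikit-learn/SciPy routines are illustrative assumptions, and it does not reproduce the speaker's methods.

```python
import numpy as np
from scipy.optimize import nnls
from sklearn.decomposition import NMF

# Synthetic stand-in for a (samples x 96 mutation contexts) count matrix.
rng = np.random.default_rng(0)
counts = rng.poisson(5.0, size=(50, 96)).astype(float)

# Step 1: discover signatures via non-negative matrix factorization,
# counts ~ exposures @ signatures with all entries non-negative.
model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
exposures = model.fit_transform(counts)   # (50, 5) per-sample signature activities
signatures = model.components_            # (5, 96) discovered signatures

# Step 2: match a new sample against a known catalog with non-negative least squares.
new_sample = rng.poisson(5.0, size=96).astype(float)
weights, _ = nnls(signatures.T, new_sample)
```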



Oct 20, 2022
• CMX Special Seminar •

ANB 105
4:00pm


Matthew Colbrook [University of Cambridge]
▦ The mpEDMD algorithm: Structure-preserving and rigorous Koopmanism ▦

Koopman operators globally linearize nonlinear dynamical systems, and their spectral information can be a powerful tool for the analysis and decomposition of nonlinear dynamical systems. However, Koopman operators are infinite-dimensional, and computing their spectral information is a considerable challenge. We introduce measure-preserving extended dynamic mode decomposition (mpEDMD), the first Galerkin method whose eigendecomposition converges to the spectral quantities of Koopman operators for general measure-preserving dynamical systems. mpEDMD is a data-driven and structure-preserving algorithm based on an orthogonal Procrustes problem that enforces measure-preserving truncations of Koopman operators using a general dictionary of observables. It is flexible and easy to use with any pre-existing DMD-type method and with different data types. We prove the convergence of mpEDMD for projection-valued and scalar-valued spectral measures, spectra, and Koopman mode decompositions. For the case of delay embedding (Krylov subspaces), our results include convergence rates of the approximation of spectral measures as the size of the dictionary increases. We demonstrate mpEDMD on a range of challenging examples, its increased robustness to noise compared with other DMD-type methods, and its ability to capture the energy conservation and cascade of a turbulent boundary layer flow with Reynolds number > 60,000 and state-space dimension > 100,000. Finally, if time permits, we discuss how this algorithm forms part of a broader program on the foundations of infinite-dimensional spectral computations.
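For readers new to this family of methods, the sketch below shows plain EDMD, the Galerkin-type baseline that mpEDMD modifies; the measure-preserving orthogonal Procrustes step that defines mpEDMD is not reproduced here, and the dictionary interface is an illustrative assumption.

```python
import numpy as np

def edmd(X, Y, dictionary):
    """Plain EDMD (not mpEDMD): least-squares approximation of the Koopman
    operator on a dictionary of observables. Rows of X are states x_k and
    rows of Y are the successor states x_{k+1}; `dictionary` is a list of
    functions, each mapping an (M, d) array of states to M scalar values."""
    PsiX = np.column_stack([phi(X) for phi in dictionary])   # observables at x_k
    PsiY = np.column_stack([phi(Y) for phi in dictionary])   # observables at x_{k+1}
    K, *_ = np.linalg.lstsq(PsiX, PsiY, rcond=None)          # PsiX @ K ~ PsiY
    evals, evecs = np.linalg.eig(K)                          # approximate Koopman eigenvalues
    return K, evals, evecs   # eigenfunction_j(x) ~ psi(x) @ evecs[:, j]
```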



May 11, 2023
• CMX Special Seminar •

ANB 213
5:00pm


Elizabeth Carlson [University of Victoria]
▦ Insights from Nonlinear Continuous Data Assimilation for Turbulent Flows ▦

One of the challenges of accurately simulating turbulent flows is that initial data are often incomplete. Data assimilation circumvents this issue by continually incorporating the observed data into the model. An emerging approach to data assimilation known as the Azouani-Olson-Titi (AOT) algorithm introduced a feedback-control term into the 2D incompressible Navier-Stokes equations (NSE) in order to incorporate sparse measurements. The solution of the AOT algorithm applied to the 2D NSE was proven to converge exponentially in time to the true solution of the 2D NSE, regardless of the initial data chosen for the assimilating solution. In this talk, we will focus on the insights gained from a nonlinear version of the AOT algorithm and highlight the clear connections to the physics of the underlying systems.
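To illustrate the idea on something much simpler than the Navier-Stokes equations, the sketch below applies linear AOT-style nudging to the Lorenz-63 system, observing only the first component; the coupling strength, time step, and choice of ODE are illustrative assumptions, and the nonlinear version discussed in the talk is not shown.

```python
import numpy as np

def lorenz63(u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = u
    return np.array([sigma * (y - x), x * (rho - y) - z, x * y - beta * z])

def aot_nudging(u0_true, v0_guess, mu=20.0, dt=1e-3, steps=50_000, observed=(0,)):
    """Toy linear AOT-style nudging: the assimilating solution v sees the
    reference trajectory u only through the observed components, via the
    feedback term mu * (I_h(u) - I_h(v)) added to the model equations."""
    u = np.array(u0_true, dtype=float)   # reference ("truth")
    v = np.array(v0_guess, dtype=float)  # assimilating solution, wrong initial data
    for _ in range(steps):
        feedback = np.zeros(3)
        for k in observed:
            feedback[k] = mu * (u[k] - v[k])
        u = u + dt * lorenz63(u)                    # forward Euler step, truth
        v = v + dt * (lorenz63(v) + feedback)       # forward Euler step, nudged model
    return u, v

u_end, v_end = aot_nudging([1.0, 1.0, 1.0], [8.0, -5.0, 30.0])
print(np.linalg.norm(u_end - v_end))   # should be small once v has synchronized with u
```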


May 19, 2023
• CMX Special Seminar •

ANB 213
12:00pm


Chris Budd [University of Bath]
▦ R-adaptivity, deep learning and the deep Ritz method ▦

PINNs (physics-informed neural networks) are becoming increasingly popular methods for using deep learning techniques to solve a wide variety of differential equations. They have been advertised as 'mesh-free methods' which can outperform traditional methods. But how good are they in practice? In this talk I will look at how they compare with traditional techniques such as the finite element method on different types of PDE, linking their performance to that of general approximation methods using free knot splines and carefully chosen collocation points. I will show that a combination of 'traditional' numerical analysis and deep learning can yield good results. But there is still a lot to be learned about the performance and reliability of PINN-based methods.

Joint work with Simone Appela, Teo Deveney, Lisa Kreusser, and Carola Schoenlieb



AY 2022/2023: Meetings & Workshops