Past Events
AY 2020/2021: Lunch Seminars
October 21, 2020
Etienne Vouga [University of Texas at Austin]
▦ Simulating Finely-wrinkled Thin Shells using Wrinkle Fields ▦
Complex, high-frequency wrinkles characterize the shape of draped thin materials like cloth or plastic film. Unfortunately, simulating the formation of these wrinkles carries a steep computational cost: the shell must be discretized finely enough to resolve the wrinkle geometry, and the elastic energy governing wrinkle formation is non-convex (since wrinkles form as a consequence of buckling instability). I will present some of our preliminary work on a new model and algorithm for predicting the high-definition static shape of thin shells, including the fine wrinkles that arise in the interplay of tension and compression, using very coarse meshes with few degrees of freedom (100x fewer than is needed to resolve wrinkling at a similar scale using traditional shell elements). The main idea is to split the kinematics of the shell into degrees of freedom representing the coarse shape of the shell, which does not buckle in response to compression, and a wrinkle field, encoding the direction and frequency of the wrinkling. By analysing the physics of wrinkled, curved shells, we derive a principled expression for the reduced-order elastic energy of the wrinkle field. We validate our method on model problems from the physics literature, as well as on draped cloth garments.
October 28, 2020
Carola-Bibiane Schönlieb [University of Cambridge]
▦ Combining knowledge and data driven methods for solving inverse imaging problems - getting the best from both worlds ▦
Inverse problems in imaging range from tomographic reconstruction (CT, MRI, etc.) to image deconvolution, segmentation, and classification, just to name a few. In this talk I will discuss approaches to inverse imaging problems which have both a mathematical modelling (knowledge driven) and a machine learning (data driven) component. Mathematical modelling is crucial in the presence of ill-posedness, making use of information about the imaging data to narrow down the search space. Such an approach results in highly generalizable reconstruction and analysis methods which come with desirable solution guarantees. Machine learning, on the other hand, is a powerful tool for customising methods to individual data sets. Highly parametrised models, deep neural networks in particular, are powerful tools for accurately modelling prior information about solutions. The combination of these two paradigms, getting the best from both of these worlds, is the topic of this talk, furnished with examples for image classification under minimal supervision and for tomographic image reconstruction.
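As a schematic of this combination (a toy sketch of my own, not from the talk): the forward operator A supplies the knowledge-driven data-fit term, while the regularizer gradient grad_R below is a placeholder where a learned, data-driven prior would plug in.

    import numpy as np

    def reconstruct(A, y, grad_R, lam=0.1, step=1e-3, iters=500):
        # Gradient descent on the variational objective
        # ||A x - y||^2 / 2 + lam * R(x).
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x -= step * (A.T @ (A @ x - y) + lam * grad_R(x))
        return x

    # Toy inverse problem; a quadratic prior stands in for a learned R.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 30))
    x_true = np.sin(np.linspace(0, 3, 30))
    y = A @ x_true + 0.01 * rng.standard_normal(50)
    x_hat = reconstruct(A, y, grad_R=lambda x: x)  # gradient of ||x||^2 / 2
    print(np.linalg.norm(x_hat - x_true))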
View Recorded Video
December 9, 2020
Rebecca Morrison [University of Colorado Boulder]
▦ Learning Sparse Non-Gaussian Graphical Models ▦
Identification and exploitation of a sparse undirected graphical model (UGM) can simplify inference and prediction processes, illuminate previously unknown variable relationships, and even decouple multi-domain computational models. In the continuous realm, the UGM corresponding to a Gaussian data set is equivalent to the non-zero entries of the inverse covariance matrix. However, this correspondence no longer holds when the data is non-Gaussian. In this talk, we explore a recently developed algorithm called SING (Sparsity Identification of Non-Gaussian distributions), which identifies edges using Hessian information of the log density. Various data sets are examined, with sometimes surprising results about the nature of non-Gaussianity.
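A hedged illustration of the idea behind SING (my toy code, not the authors' implementation): edges correspond to nonzero off-diagonal entries of the Hessian of the log density. For a Gaussian fit, that Hessian is minus the precision matrix, which recovers the classical correspondence mentioned above.

    import numpy as np

    def edge_scores(samples):
        # Off-diagonal magnitudes of the precision (inverse covariance)
        # matrix; for Gaussian data this equals minus the Hessian of the
        # log density, so small entries suggest missing edges.
        precision = np.linalg.inv(np.cov(samples, rowvar=False))
        scores = np.abs(precision)
        np.fill_diagonal(scores, 0.0)
        return scores

    rng = np.random.default_rng(1)
    # Markov chain x0 -> x1 -> x2: the (0, 2) score should be near zero.
    x0 = rng.standard_normal(5000)
    x1 = x0 + 0.5 * rng.standard_normal(5000)
    x2 = x1 + 0.5 * rng.standard_normal(5000)
    print(edge_scores(np.column_stack([x0, x1, x2])).round(2))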
View Recorded Video
January 20, 2021
William Shadwick [Omega Analysis Limited]
▦ Predicting the Course of Covid-19 and other epidemic and endemic disease ▦
We show that the Gompertz Function provides a generically excellent fit to viral and bacterial epidemic and endemic data. There is a ‘good’ Gompertz Function fit for each time t, starting very early in any outbreak. Successive fits provide excellent forecasts for extended periods.
Examples include Influenza in 2017-18 and Covid-19 in the Spring of 2020 and the ‘second waves’.
The Gompertz Function’s features have consequences for herd immunity and for the transition from epidemic to endemic disease—which we argue is what we are now seeing in the so-called second waves.
We conjecture that the accuracy of the Gompertz Function fits is not a coincidence but reflects the human immune response to viral and bacterial infections, which has evolved over millions of years. The Gompertz Function is based on the Gumbel distribution, one of the exceptional distributions for the geometry induced by the action of the ‘location-scale’ group; it has a natural Hamiltonian structure, and the energy is a key observable of the epidemic dynamics.
Finally, we illustrate the use of Extreme Value Theory to predict potential surges in the early phase of outbreaks where the Gompertz Function’s predictive power is lowest.
The talk is based on joint work with Ana Cascon.
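To make the fitting procedure concrete, here is a minimal sketch on synthetic data (my own toy, assuming the three-parameter form N(t) = K exp(-exp(-b (t - t0)))):

    import numpy as np
    from scipy.optimize import curve_fit

    def gompertz(t, K, b, t0):
        return K * np.exp(-np.exp(-b * (t - t0)))

    t = np.arange(60.0)
    rng = np.random.default_rng(2)
    data = gompertz(t, K=10000, b=0.12, t0=25) + rng.normal(0, 50, t.size)

    # A "fit at time t": use only the data observed so far, then forecast.
    params, _ = curve_fit(gompertz, t[:35], data[:35],
                          p0=(data[:35].max(), 0.1, 20))
    print("K, b, t0 =", params)  # forecast via gompertz(t[35:], *params)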
View Recorded Video
January 27, 2021
David Ginsbourger [University of Bern]
▦ Modeling and optimizing set functions via RKHS embeddings ▦
We consider the issue of modeling and optimizing set functions, with a main focus on kernel methods for expensive objective functions taking finite sets as inputs. Based on recent developments on embeddings of probability distributions in Reproducing Kernel Hilbert Spaces, we explore adaptations of Gaussian Process modeling and Bayesian Optimization to the framework of interest. In particular, combining RKHS embeddings and positive definite kernels on Hilbert spaces delivers a promising class of kernels, as illustrated on two test cases from mechanical engineering and contaminant source localization, respectively. The talk is based on several collaborations and notably on the paper "Kernels over sets of finite sets using RKHS embeddings, with application to Bayesian (combinatorial) optimization" with Poompol Buathong and Tipaluck Krityakierne (AISTATS 2020).
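A hedged sketch of this kernel construction (my notation): embed each finite set via the mean of a base kernel's feature maps and place a Gaussian kernel on the resulting RKHS distance (an MMD).

    import numpy as np

    def base_kernel(X, Y, ell=1.0):
        # Gaussian base kernel between two point clouds.
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * ell**2))

    def mmd2(A, B):
        # Squared RKHS distance between the mean embeddings of sets A, B.
        return (base_kernel(A, A).mean() + base_kernel(B, B).mean()
                - 2 * base_kernel(A, B).mean())

    def set_kernel(A, B, scale=1.0):
        # Positive definite kernel on finite sets via the embedding distance.
        return np.exp(-mmd2(A, B) / (2 * scale**2))

    A = np.random.default_rng(3).standard_normal((5, 2))
    B = A + 0.1  # a nearby set should give a kernel value close to 1
    print(set_kernel(A, B))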
View Recorded Video
February 17, 2021
Jonathan Siegel [Penn State University]
▦ Approximation Theory and Metric Entropy of Neural Networks ▦
We consider the problem of approximating high dimensional functions using shallow neural networks. We begin by introducing natural spaces of functions which can be efficiently approximated by such networks. Then, we derive the metric entropy of the unit balls in these spaces. Combined with recent work connecting stable approximation rates to metric entropy, this yields the optimal approximation rates for the given spaces. Next, we show that higher approximation rates can be obtained by further restricting the function class. In particular, for a restrictive but natural space of functions, shallow networks with ReLU$^k$ activation function achieve an approximation rate of $O(n^{-(k+1)})$ in every dimension. Finally, we discuss the connections between this surprising result and the finite element method.
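For orientation (standard notation, not taken verbatim from the talk), the approximating class consists of shallow networks

$$ f_n(x) = \sum_{i=1}^{n} a_i\, \sigma_k(\omega_i \cdot x + b_i), \qquad \sigma_k(t) = \max(0, t)^k, $$

and the result above says that, over the restricted function space, the error $\inf_{f_n} \| f - f_n \|$ decays like $O(n^{-(k+1)})$ regardless of the input dimension.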
View Recorded Video
February 24, 2021
Xiaojing (Ruby) Fu [Caltech]
▦ From viscous to crustal fingering: phase-field modeling of complex interfacial flow ▦
In this talk, I will discuss the modeling of complex multiphase flow in porous media using phase-field methods. I will start with the modeling of viscous fingering between two fluids of partial miscibility, a scenario that is rarely addressed. Through a careful design of the thermodynamic free energy of a binary mixture, we develop a phase-field model of fluid-fluid displacements in a Hele-Shaw cell for the general case in which the two fluids have limited (but nonzero) solubility into one another.
Then I will address the modeling of methane clathrate (gas hydrate) in multiphase environments using phase-field methods. Motivated by field and laboratory observations, I will describe how the spontaneous formation of a solid hydrate crust on a moving gas-liquid interface gives rise to a new type of flow instability we term crustal fingering. I will further show that this solid-modulated gas percolation mechanism is crucial to our understanding of methane venting in the world’s oceans.
Finally, I will discuss challenges on modeling fluid-solid coupling due to phase change in porous media, and opportunities to address new questions at the interface of engineering and geosciences.
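As background (the classical phase-field ansatz, not the specific functional designed in the talk), such models typically posit a free energy of the form

$$ \mathcal{F}[c] = \int_\Omega \Big( f_0(c) + \frac{\kappa}{2} |\nabla c|^2 \Big)\, dx, $$

where $c$ is an order parameter (e.g., composition), $f_0$ is a bulk free energy whose wells encode the coexisting phases, and the gradient term penalizes sharp interfaces; the careful design mentioned above concerns choosing the free energy so that the two fluids have limited but nonzero mutual solubility.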
View Recorded Video
April 21, 2021
Dwight Barkley [University of Warwick]
▦ The mechanics of finite-time blowup in an Euler flow ▦
The mechanism for singularity formation in an inviscid wall-bounded fluid flow is investigated. The incompressible Euler equations are numerically simulated in a cylindrical container. The flow is axisymmetric with swirl. The simulations reproduce and corroborate aspects of prior studies by Luo and Hou reporting strong evidence for a finite-time singularity. The analysis here focuses on the interplay between inertia and pressure, rather than on vorticity. Linearity of the pressure Poisson equation is exploited to decompose the pressure field into independent contributions arising from the meridional flow and from the swirl, and from enforcing incompressibility and flow confinement. The key pressure field driving the blowup of velocity gradients is that confining the fluid within the cylinder walls. A model is presented based on a primitive-variables formulation of the Euler equations on the cylinder wall, with closure coming from how pressure is determined from velocity. The model captures key features in the mechanics of the blowup scenario.
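For orientation (standard background, my summary): the pressure Poisson equation follows from taking the divergence of the incompressible Euler equations,

$$ -\Delta p = \nabla \cdot \big( (u \cdot \nabla) u \big), $$

and its linearity in $p$ is what permits splitting the pressure into the independent contributions listed above, one per source or boundary term.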
View Recorded Video
April 28, 2021
Aaron D. Ames [Caltech]
▦ Safety-Critical Control of Dynamic Robots ▦
Guaranteeing safe behavior is a critical component of translating robots from a laboratory setting to real-world environments in an autonomous fashion. With this as motivation, this talk will present a safety-critical approach to the control of dynamic robotic systems, ranging from legged robots, to multi-robot teams, to robotic assistive devices. To this end, a unified nonlinear control framework for realizing dynamic behaviors will be presented. Underlying this approach is an optimization-based control paradigm leveraging control barrier functions that guarantee safety (represented as forward set invariance). The ability of control Lyapunov functions to stabilize nonlinear systems will be used to motivate these constructions, and the implications on autonomous systems will be considered. The application of these ideas will be demonstrated experimentally on a wide variety of robotic systems, including: multi-robot systems with guaranteed safe behavior, bipedal and humanoid robots capable of achieving dynamic walking and running behaviors that display the hallmarks of natural human locomotion, and robotic assistive devices (including prostheses and exoskeletons) aimed at restoring mobility.
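A minimal sketch of the optimization-based paradigm (toy 1-D system and notation of my own choosing): the controller stays as close as possible to a desired input while the barrier constraint keeps the state in the safe set.

    # Control-barrier-function QP for a 1-D single integrator x' = u with
    # safe set h(x) = x >= 0: enforce h' >= -alpha * h, i.e. u >= -alpha*x,
    # while deviating as little as possible from the desired control.
    def cbf_qp_1d(x, u_des, alpha=1.0):
        u_min = -alpha * x          # safety constraint from the barrier
        return max(u_des, u_min)    # closed-form solution of the 1-D QP

    x, dt = 1.0, 0.01
    for _ in range(500):
        u = cbf_qp_1d(x, u_des=-2.0)  # nominal controller pushes past x = 0
        x += dt * u
    print(x)  # stays (approximately) nonnegative: the barrier holds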
View Recorded Video
May 26, 2021
Albert Chern [UC San Diego]
▦ Geometric approaches to infinite domain problems ▦
Numerical simulations on infinite domains are challenging. In this talk, we will take geometric approaches to analyze the problems and provide new solutions. One problem we tackle is the perfectly matched layer (PML) problem for computational waves on infinite domains. PML is a theoretical wave-absorbing medium attached to the truncated domain that generates no reflection at the interface. However, even after 25 years, the method still suffers from numerical reflections due to discretization error. We derive the PML based on principles in discrete differential geometry, and for the first time, we obtain a discrete PML that generates no numerical reflections.
Another geometric approach to infinite domain problems is to study transformations and symmetries at the level of PDEs. It turns out that within this geometry of functions and PDEs, the distinction between the notions of exterior and interior for a domain is no longer prominent. In the talk, we will generalize the Kelvin transformations as a general strategy to compactify PDE problems.
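As background on PML (the classical continuous construction, not the discrete derivation presented in the talk): outgoing waves are absorbed by complexifying the coordinate past the truncation point $x_0$,

$$ \tilde{x} = x + \frac{i}{\omega} \int_{x_0}^{x} \sigma(s)\, ds, $$

so an outgoing wave $e^{i\omega x}$ picks up a decay factor $e^{-\int_{x_0}^{x} \sigma}$ inside the layer with no reflection at the continuous level; the numerical reflections mentioned above appear only after discretization.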
View Recorded Video
AY 2020/2021: Student/Postdoc Seminars
October 9, 2020
• Internal CMX Seminar •
Zoom
1:00pm
De Huang
▦ Nonlinear Matrix Concentration via Semigroup Methods ▦
Matrix concentration inequalities provide information about the probability that a random matrix is close to its expectation with respect to the spectral norm. This talk presents our recent results on using semigroup methods to derive sharp nonlinear matrix inequalities. In particular, we show that the classic Bakry–Émery curvature criterion implies subgaussian concentration for “matrix Lipschitz” functions. This argument circumvents the need to develop a matrix version of the log-Sobolev inequality, a technical obstacle that has blocked previous attempts to derive matrix concentration inequalities in this setting. The approach unifies and extends much of the previous work on matrix concentration. When applied to a product measure, the theory reproduces the matrix Efron–Stein inequalities. It also handles matrix-valued functions on a Riemannian manifold with uniformly positive Ricci curvature. We also deduce subexponential matrix concentration from a Poincaré inequality via a short, conceptual argument.
View Recorded Video
October 16, 2020
• Internal CMX Seminar •
Zoom
1:00pm
Florian Schaefer
▦ Competitive Optimization and Factorization by KL-Minimization ▦
The first part of this talk is concerned with competitive optimization, where multiple agents try to minimize their respective objectives that each depend on all agents' actions. We propose competitive gradient descent (CGD) and show that it is a natural generalization of gradient descent to competitive optimization, with good practical performance. We then show how ideas from information geometry can be used to extend CGD to competitive mirror descent (CMD) that can incorporate a wide range of convex constraints.
The second part of the talk is concerned with the approximate factorization of dense kernel matrices arising as Green's matrices of elliptic PDE or covariance matrices of smooth Gaussian processes. For a given sparsity pattern, we show that the optimal (in KL-divergence) sparse inverse-Cholesky factor of the kernel matrix can be computed in closed form, in an embarrassingly parallel fashion. By exploiting the conditional independence properties of finitely smooth Gaussian processes, we show that these factors can be computed in near-linear complexity, improving the state of the art for fast solvers of Green's matrices of elliptic PDE.
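A hedged scalar illustration of the first part (toy code of mine): on the bilinear zero-sum game $\min_x \max_y xy$, simultaneous gradient descent spirals outward, while the CGD update, which anticipates the opponent's response, converges to the equilibrium.

    # Scalar case of the competitive gradient descent update for f = x*y.
    eta = 0.2
    x, y = 1.0, 1.0
    for _ in range(200):
        denom = 1.0 + eta**2            # (I + eta^2 Dxy Dyx) term for f = x*y
        dx = -eta * (y + eta * x) / denom
        dy = eta * (x - eta * y) / denom
        x, y = x + dx, y + dy
    print(x, y)  # -> approximately (0, 0), the Nash equilibrium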
View Recorded Video
October 23, 2020
• Internal CMX Seminar •
Zoom
1:00pm
Elizabeth Qian
▦ Lift & Learn: a scientific machine learning framework for learning low-dimensional models for nonlinear PDEs ▦
Many systems of engineering interest exhibit complex nonlinear dynamics for which high-fidelity simulation is expensive and the development of accurate low-cost approximations is challenging. Lift & Learn is a new scientific machine learning framework that incorporates physical knowledge to learn accurate, inexpensive reduced models for complex nonlinear systems. The method first ‘lifts’ the known system governing equations to quadratic form; i.e., we transform and augment the system state representation so that the governing equations in the new lifted state variables contain only quadratic nonlinearities in the lifted states. Proper orthogonal decomposition (POD) is used to reduce the dimension of lifted snapshot data, and a low-dimensional quadratic model is fit to the reduced lifted data via a least-squares operator inference procedure. Unlike black box methods such as neural nets, our Lift & Learn models respect the structure of the physics in the transformed equations and can be analyzed. We prove that the Lift & Learn models capture the system physics at least as accurately as traditional intrusive model reduction approaches, providing a bridge to the interpretability and analyzability of traditional model reduction. Numerical experiments demonstrate the accuracy and generalizability of the Lift & Learn models.
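A schematic sketch of the reduction-and-fit step (toy data and my own notation, assuming the snapshots have already been lifted to quadratic form): POD-reduce the lifted snapshots, then fit $\dot{w} \approx A w + H (w \otimes w)$ by least squares, the operator-inference step described above.

    import numpy as np

    def operator_inference(W, Wdot, r):
        U = np.linalg.svd(W, full_matrices=False)[0][:, :r]  # POD basis
        Wh, Wh_dot = U.T @ W, U.T @ Wdot                     # reduced data
        quad = np.einsum('ik,jk->ijk', Wh, Wh).reshape(r * r, -1)
        D = np.vstack([Wh, quad])                            # data matrix
        O = np.linalg.lstsq(D.T, Wh_dot.T, rcond=None)[0].T  # [A, H]
        return U, O[:, :r], O[:, r:].reshape(r, r, r)

    # Toy usage with random data standing in for lifted PDE snapshots.
    rng = np.random.default_rng(4)
    W = rng.standard_normal((100, 40))     # 100 lifted states, 40 snapshots
    Wdot = rng.standard_normal((100, 40))  # their time derivatives
    U, A, H = operator_inference(W, Wdot, r=5)
    print(A.shape, H.shape)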
View Recorded Video
October 30, 2020
• Internal CMX Seminar •
Zoom
1:00pm
Robert Huang
▦ Predicting many properties of a quantum system from very few measurements ▦
Predicting properties of complex, large-scale quantum systems is essential for developing quantum technologies. We present an efficient method for constructing an approximate classical description of a quantum state using very few measurements of the state. This description, called a classical shadow, can be used to predict many different properties: order log M measurements suffice to accurately predict M different functions of the state with high success probability. The number of measurements is independent of the system size and saturates information-theoretic lower bounds. Moreover, target properties to predict can be selected after the measurements are completed. We support our theoretical findings with extensive numerical experiments. We apply classical shadows to predict quantum fidelities, entanglement entropies, two-point correlation functions, expectation values of local observables, and the energy variance of many-body local Hamiltonians. The numerical results highlight the advantages of classical shadows relative to previously known methods.
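A single-qubit toy sketch of the protocol (assumptions and simplifications mine): measure the state $|0\rangle$ in uniformly random Pauli bases, then invert the measurement channel, which for random Pauli measurements amounts to a factor of 3, to estimate $\langle Z \rangle$.

    import numpy as np

    rng = np.random.default_rng(5)
    estimates = []
    for _ in range(30000):
        basis = rng.choice(['X', 'Y', 'Z'])
        # For |0>: Z gives +1 deterministically; X and Y give +/-1 uniformly.
        outcome = 1 if basis == 'Z' else rng.choice([-1, 1])
        # Channel inversion for random Pauli measurements multiplies by 3.
        estimates.append(3 * outcome if basis == 'Z' else 0)
    print(np.mean(estimates))  # ~1.0, the true <Z> of |0>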
View Recorded Video
November 6, 2020
• Internal CMX Seminar •
Zoom
1:00pm
Riley Murray
▦ Convex relaxations and computational optimization ▦
In this talk I give a tour of my work in convex relaxations for nonconvex problems, optimization modeling software, and some planned future work in numerical methods. I begin by introducing signomials and reviewing their history in optimization modeling for engineering design. Next, I show how the old idea of partial dualization can be revitalized with relative entropy optimization to develop an effective convex relaxation scheme for constrained signomial programming. I address in some detail how these signomial methods lead to advances in sparse and high-degree polynomial optimization, as well as new insights in convex and real-algebraic geometry. The second half of my talk shifts towards computational optimization. This shift begins by describing the SageOpt python package that implements the mathematics from my thesis, and proceeds by highlighting some of my contributions as one of three core developers of the widely-used CVXPY python package. I conclude by outlining possible uses of randomized numerical linear algebra and GPUs in second order methods for convex programming, and in particular I indicate how the latter method may help us solve operator relative entropy programs at scale.
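For orientation, a signomial is a linear combination of exponentials of linear forms,

$$ f(u) = \sum_{i=1}^{m} c_i\, e^{a_i^\top u}, \qquad c_i \in \mathbb{R},\ a_i \in \mathbb{R}^n, $$

generally nonconvex in $u$; the relative-entropy relaxations referenced above yield convex certificates for the nonnegativity of such functions.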
View Recorded Video
November 13, 2020
• Internal CMX Seminar •
Zoom
1:00pm
Michelle Feng
▦ Bounded-confidence models for network extensions ▦
In this talk, I will discuss several extensions to bounded-confidence models, as well as preliminary results on their behaviors. I will give a brief introduction to bounded-confidence models and their applications before introducing extensions to multilayer networks, hypergraphs, and adaptive networks. I will present numerical simulations on these various network extensions, describing differences in behavior between these bounded-confidence models and traditional ones. I will also discuss some theoretical results concerning convergence behaviors of the extended models. This talk will be aimed at a general computational audience, assuming no prior familiarity with bounded-confidence models or opinion dynamics.
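For readers new to these models, a minimal sketch of the standard network Hegselmann-Krause update (the baseline model, not the speaker's extensions): each node averages its own opinion with those of network neighbors whose opinions lie within a confidence radius c.

    import numpy as np

    def hk_step(opinions, adjacency, c=0.3):
        new = opinions.copy()
        for i in range(len(opinions)):
            nbrs = np.flatnonzero(adjacency[i])
            close = nbrs[np.abs(opinions[nbrs] - opinions[i]) < c]
            group = np.append(close, i)      # include the node itself
            new[i] = opinions[group].mean()
        return new

    rng = np.random.default_rng(6)
    n = 20
    A = (rng.random((n, n)) < 0.3).astype(int)
    A = np.triu(A, 1)
    A = A + A.T                              # random undirected graph
    x = rng.random(n)
    for _ in range(50):
        x = hk_step(x, A)
    print(np.round(np.sort(x), 2))           # opinions settle into clusters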
View Recorded Video
November 20, 2020
• Internal CMX Seminar •
Zoom
1:00pm
Yifan Chen
▦ Multiscale Computation and Parameter Learning for Kernels from PDEs: Two Provable Examples ▦
This talk is concerned with the computation and learning of kernel operators from a PDE background. The standard mathematical model is Lu=f, where L is the inverse of some kernel operator; u and f are functions that may or may not be directly available to us, depending on the problem set-up.
In the first part, we consider the computation problem: given L and f, compute u. Here L can be a heterogeneous Laplacian or a Helmholtz operator in the high-frequency regime. For this problem, we develop a multiscale framework that achieves nearly exponential convergence of accuracy with respect to the computational degrees of freedom. The main innovation is an effective coarse-fine scale decomposition of the solution space that exploits local structures of both L and f.
In the second part, we consider the learning problem: given u at some scattered points only, the task is to recover the full u and learn the operator L that encodes the underlying physics. We approach this problem via Empirical Bayes and Kernel Flow methods. Analysis of their consistency in the large-data limit, as well as explicit identification of their implicit bias in parameter learning, is established for a Matérn-like model on the torus, both theoretically and empirically.
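For concreteness (standard form, my notation): the Empirical Bayes approach selects kernel parameters $\theta$ by maximizing the Gaussian marginal likelihood of the scattered observations $u$, i.e.

$$ \theta^\star = \arg\min_\theta\ \tfrac{1}{2}\, u^\top K_\theta^{-1} u + \tfrac{1}{2} \log \det K_\theta, $$

where $K_\theta$ is the kernel matrix evaluated at the observation points.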
View Recorded Video
December 4, 2020
• Internal CMX Seminar •
Zoom
1:00pm
Nikola Kovachki
▦ Approximating Operators with Deep Learning ▦
Efficiently approximating the solution operators arising from parametric PDE systems is a challenging task with numerous applications in science and engineering. I will present two recently proposed approaches for this task in a fully data-driven (non-intrusive) setting. Both follow the philosophy of first conceptualizing an algorithm on the space of functions, then discretizing only when required for computation. This affords rates of approximation that are independent of the underlying finite-dimensional space used to discretize the data. The first approach combines ideas from deep learning and projection-based model reduction, constructing a neural network which links the latent spaces of the input-output snapshots. The approximation is shown to converge in the limit of infinite data and reduced dimension. The second approach generalizes standard neural networks defined on finite-dimensional Euclidean spaces to infinite-dimensional function spaces by replacing the parameter matrix with a kernel integral operator. A universal approximation result is proved for this architecture. Numerically, I will demonstrate the efficacy and robustness to discretization of both approaches on classes of parametric elliptic, parabolic, and hyperbolic PDEs with applications in underground reservoir modeling, the turbulent flow of fluids, and the deformation of plastic materials.
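Schematically (my notation), the second approach replaces the affine map of a standard layer with a learned kernel integral operator acting on functions,

$$ v_{t+1}(x) = \sigma\!\Big( W v_t(x) + \int_D \kappa_\phi(x, y)\, v_t(y)\, dy \Big), $$

so the architecture is defined on the function space itself and is only discretized when the integral is evaluated.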
View Recorded Video
January 8, 2021
• Internal CMX Seminar •
Zoom
1:00pm
Armeen Taeb
▦ Latent-variable modeling: causality, robustness, and false discovery methods ▦
Many driving factors of physical systems are often latent or unobserved. Thus, understanding such systems and producing robust predictions crucially relies on accounting for the influence of the latent structure. I will discuss methodological and theoretical advances in two central problems in latent-variable modeling. The first problem aims to estimate causal relations among a collection of observed variables with latent effects. Given access to heterogeneous data arising from perturbations, I introduce a maximum-likelihood framework that provably identifies the underlying causal structure. Unlike previous techniques, this procedure allows for perturbations on all of the variables. The second problem focuses on developing false discovery methods for latent-variable models that are parameterized by low-rank matrices, where the traditional perspective on false discovery control is ill-suited due to the non-discrete nature of the underlying decision spaces. To overcome this challenge, I present a geometric reformulation of the notion of a discovery as well as a specific algorithm to control false discoveries in these settings. Throughout, I will explore the utility of the proposed methodologies for real-world applications such as California reservoir modeling.
View Recorded Video
January 15, 2021
• Internal CMX Seminar •
Zoom
1:00pm
Jinlong Wu
▦ Estimating model error using sparsity-promoting ensemble Kalman inversion ▦
Closure models are widely used in simulating complex systems such as turbulence and Earth’s climate, for which direct numerical simulation is too expensive. Although it is almost impossible to perfectly reproduce the true system with closure models, it is often sufficient to correctly reproduce time-averaged statistics. Here we present a sparsity-promoting, derivative-free optimization method to estimate model error from time-averaged statistics. Specifically, we show how sparsity can be imposed as a constraint in ensemble Kalman inversion (EKI), resulting in an iterative quadratic programming problem. We illustrate how this approach can be used to quantify model error in the closures of dynamical systems. In addition, we demonstrate the merit of introducing stochastic processes to quantify model error for certain systems. We also present the potential of replacing existing closures with purely data-driven closures using the proposed methodology. The results show that the proposed methodology provides a systematic approach to estimate model error in closures of dynamical systems.
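A simplified sketch (my toy code; soft thresholding stands in for the iterative quadratic program described above) of sparsity-promoting EKI on a linear inverse problem with a sparse truth:

    import numpy as np

    def eki_step(theta, G, y, gamma=1e-2):
        g = np.array([G(t) for t in theta])      # forward evaluations
        dth = theta - theta.mean(0)
        dg = g - g.mean(0)
        C_tg = dth.T @ dg / len(theta)           # cross-covariance
        # Sample covariance plus (iid) observation-noise regularization.
        C_gg = dg.T @ dg / len(theta) + gamma * np.eye(g.shape[1])
        return theta + (y - g) @ np.linalg.solve(C_gg, C_tg.T)

    def soft_threshold(theta, lam):
        return np.sign(theta) * np.maximum(np.abs(theta) - lam, 0.0)

    rng = np.random.default_rng(7)
    A = rng.standard_normal((20, 10))
    theta_true = np.zeros(10)
    theta_true[[2, 7]] = [1.0, -0.5]             # sparse truth
    y = A @ theta_true
    theta = rng.standard_normal((100, 10))       # ensemble of candidates
    for _ in range(20):
        theta = soft_threshold(eki_step(theta, lambda t: A @ t, y), lam=0.01)
    print(theta.mean(0).round(2))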
View Recorded Video
January 29, 2021
• Internal CMX Seminar •
Zoom
1:00pm
Shumao Zhang
▦ Multiscale Invertible Generative Networks for High-Dimensional Bayesian Inference Problems ▦
High-dimensional Bayesian inference problems pose a long-standing challenge in generating samples, especially when the posterior has multiple modes. For a wide class of Bayesian inference problems whose forward model is equipped with a multiscale structure, such that a coarse-scale, low-dimensional surrogate can approximate the original fine-scale, high-dimensional problem well, we propose to train a Multiscale Invertible Generative Network (MsIGN) for sample generation. We approximate the fine-scale posterior distribution by a fine-scale surrogate that can be decoupled into the coarse-scale posterior and a prior conditional distribution. A novel prior conditioning layer is then designed to model this prior conditional distribution and bridge different scales, enabling coarse-to-fine multi-stage training. The fine-scale surrogate is further modified by the invertible generative network, and to avoid missing modes, we adopt the Jeffreys divergence as the training objective. On two high-dimensional Bayesian inverse problems, MsIGN approximates the posterior accurately and clearly captures multiple modes, showing superior performance compared with previous deep generative network approaches. On the natural image synthesis task, MsIGN achieves superior performance in bits-per-dimension among our baselines and yields great interpretability of its neurons in intermediate layers.
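For reference, the Jeffreys divergence is the symmetrized KL divergence,

$$ D_J(p, q) = \mathrm{KL}(p\,\|\,q) + \mathrm{KL}(q\,\|\,p), $$

whose forward term $\mathrm{KL}(p\,\|\,q)$ strongly penalizes a model $q$ that misses modes of the target $p$.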
February 12, 2021
• Internal CMX Seminar •
Zoom
1:00pm
Christoph Bauinger
▦ Interpolated Factored Green Function method for accelerated solution of scattering problems ▦
The Interpolated Factored Green Function method (IFGF) is a novel method for the accelerated evaluation of discrete
integral operators in scattering theory. The IFGF algorithm
evaluates the action of Green function-based discrete integral operators at a
cost of O(N log N) operations for an N-point surface
mesh. The method capitalizes on slow variations inherent in a certain
Green function analytic factor and which therefore allows for accelerated
evaluation of fields produced by groups of sources based on a
recursive application of classical interpolation methods resulting in an algorithm which can be implemented easily and efficiently. Unlike
other approaches, the IFGF method does not utilize the Fast Fourier
Transform (FFT), special-function expansions, high-dimensional linear-algebra factorizations, translation operators, equivalent sources, or parabolic scaling.
The efficiency of the algorithm in terms of memory and
speed is illustrated by means of a variety of numerical
experiments.
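Schematically (my notation, for the Helmholtz kernel), the factorization underlying the method splits the Green function into a centered factor and a slowly varying analytic factor,

$$ G(x, y) = \frac{e^{ik|x - y|}}{|x - y|} = G(x, x_c)\, g(x, y), \qquad g(x, y) = \frac{|x - x_c|}{|x - y|}\, e^{ik\,(|x - y| - |x - x_c|)}, $$

where $x_c$ is a center associated with a group of sources $y$; it is the slow variation of $g$ that makes recursive interpolation effective.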
View Recorded Video
February 26, 2021
• Internal CMX Seminar •
Zoom
1:00pm
(1st Speaker) Zongyi Li
▦ Neural operator and dynamic systems ▦
The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces. Recently, this has been generalized to neural operators that learn mappings between function spaces. For partial differential equations (PDEs), neural operators directly learn the mapping from any functional parametric dependence to the solution. Thus, they learn an entire family of PDEs, in contrast to classical methods which solve one instance of the equation. In this work, we formulate a new neural operator by parameterizing the integral kernel directly in Fourier space, allowing for an expressive and efficient architecture. We perform experiments on Burgers' equation, Darcy flow, and the Navier-Stokes equation (including the turbulent regime). Our Fourier neural operator shows state-of-the-art performance compared to existing neural network methodologies and it is up to three orders of magnitude faster compared to traditional PDE solvers.
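A minimal 1-D, single-channel sketch of a Fourier layer (my toy code; the real architecture uses multiple channels and a per-mode weight matrix): transform to Fourier space, multiply the lowest modes by learned weights, truncate the rest, transform back, and add a pointwise linear term before the nonlinearity.

    import numpy as np

    def fourier_layer(v, R, W):
        # v: (n,) samples of a function; R: (k,) complex mode weights;
        # W: scalar pointwise weight.
        v_hat = np.fft.rfft(v)
        v_hat[:len(R)] *= R          # learned multiplication on low modes
        v_hat[len(R):] = 0.0         # truncate high modes
        return np.maximum(np.fft.irfft(v_hat, n=len(v)) + W * v, 0.0)

    # Toy forward pass with random "learned" parameters.
    rng = np.random.default_rng(8)
    x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
    v = np.sin(x) + 0.5 * np.sin(3 * x)
    R = rng.standard_normal(12) + 1j * rng.standard_normal(12)
    print(fourier_layer(v, R, W=0.3).shape)  # output keeps the input grid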
(2nd Speaker) Ziyun Zhang
▦ Low-rank matrix manifold: geometry and asymptotic behavior of optimization ▦
The low-rank matrix manifold is the Riemannian manifold of fixed-rank matrices whose rank is much smaller than the dimension. It is popular in modern data science applications involving low-rank recovery because of the efficiency of manifold optimization algorithms along with nearly optimal theoretical guarantees. This talk is motivated by some recent findings about using Riemannian gradient descent to minimize the least-squares loss function on the low-rank matrix manifold. Our focus is to address the non-convexity and non-closedness of this manifold. I will first introduce the general theory of the asymptotic escape of strict saddle sets on Riemannian manifolds. Then I will discuss the so-called spurious critical points that are special to the low-rank matrix manifold, and new analytical techniques tailored to the spurious critical points. Together they pave the way for a thorough understanding of the global asymptotic behavior of Riemannian gradient descent on the low-rank matrix manifold.
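A hedged sketch of the algorithm under discussion (toy least-squares objective, standard tangent-space projection and SVD retraction): Riemannian gradient descent on the manifold of rank-r matrices.

    import numpy as np

    def rgd_step(X, euclid_grad, r, step=0.5):
        # Project the Euclidean gradient onto the tangent space at X.
        U, _, Vt = np.linalg.svd(X, full_matrices=False)
        U, Vt = U[:, :r], Vt[:r]
        PU, PV = U @ U.T, Vt.T @ Vt
        G = PU @ euclid_grad + euclid_grad @ PV - PU @ euclid_grad @ PV
        Y = X - step * G                         # step in the tangent space
        U2, s2, Vt2 = np.linalg.svd(Y, full_matrices=False)
        return (U2[:, :r] * s2[:r]) @ Vt2[:r]    # retraction: truncated SVD

    # Toy least-squares recovery of a rank-2 matrix.
    rng = np.random.default_rng(9)
    M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
    X = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
    for _ in range(100):
        X = rgd_step(X, X - M, r=2)              # gradient of ||X - M||^2/2
    print(np.linalg.norm(X - M))                 # -> small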
AY 2020/2021: Other Seminars
None Scheduled
AY 2020/2021: Meetings & Workshops