CMX is a research group devoted to the development and analysis of novel algorithmic ideas underlying emerging applications in the physical, biological, social, and information sciences. We are distinguished by a shared value system built on the development of foundational mathematical understanding, and the deployment of this understanding to address key emerging scientific and technological challenges.

Faculty

Venkat Chandrasekaran
Mathieu Desbrun
Thomas Hou
Houman Owhadi
Peter Schröder
Andrew Stuart
Joel Tropp

Von Kármán Instructors

Franca Hoffmann
Ka Chun Lam

Postdocs

Alfredo Garbuno-Inigo
Bamdad Hosseini
Pengfei Liu
Krithika Manohar
Melike Sirlanci

Grad Students

Max Budninskiy
Utkan Candogan
JiaJie Chen
De Huang
Nikola Kovachki
Matt Levine
Riley Murray
Florian Schaefer
Yong Shen Soh
Yousuf Soliman
Armeen Taeb
Gene R. Yoo
Shumao Zhang

Lunch Seminars

(Seminars will be held at 12 noon in Annenberg 213, unless otherwise specified.)

May 7, 2019
James Saunderson
Certifying polynomial nonnegativity via hyperbolic optimization
     Certifying nonnegativity of multivariate polynomials is fundamental to solving optimization problems modeled with polynomials. One well-known way to certify nonnegativity is to express a polynomial as a sum of squares. Furthermore, the search for such a certificate can be carried out via semidefinite optimization. An interesting generalization of semidefinite optimization, which retains many of its good algorithmic properties, is hyperbolic optimization. Are there natural certificates of nonnegativity that we can search for via hyperbolic optimization, and that are not obviously captured by sums of squares? If so, these could have the potential to generate hyperbolic optimization-based relaxations of optimization problems that may be stronger, in some sense, than semidefinite optimization-based relaxations.
In this talk, I will describe one candidate for such "hyperbolic certificates of nonnegativity", and discuss what is known about their relationship with sums of squares.
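As background for the sum-of-squares certificates mentioned above, here is a toy numerical illustration (my own made-up example, not from the talk): a polynomial p is a sum of squares exactly when p(x) = z(x)ᵀQz(x) for a positive semidefinite Gram matrix Q over a monomial basis z, which is what a semidefinite solver searches for.

```python
import numpy as np

# Monomial basis z(x) = [1, x, x^2]; target p(x) = x^4 + 2x^2 + 1 = (x^2 + 1)^2.
# p is a sum of squares iff p(x) = z(x)^T Q z(x) for some PSD matrix Q.
Q = np.array([[1.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0]])  # Gram matrix encoding (x^2 + 1)^2

# PSD check: nonnegative eigenvalues certify nonnegativity of p
assert np.linalg.eigvalsh(Q).min() >= -1e-12

# sanity check: z^T Q z reproduces p at sample points
for x in (-2.0, 0.5, 3.0):
    z = np.array([1.0, x, x * x])
    assert abs(z @ Q @ z - (x**4 + 2 * x**2 + 1)) < 1e-9
```

In practice the solver optimizes over all valid Gram matrices Q, since a given polynomial admits many such representations.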

September 25, 2019
Jose Antonio Carrillo
▦ Primal dual methods for Wasserstein gradient flows ▦
   Combining the classical theory of optimal transport with modern operator splitting techniques, I will present a new numerical method for nonlinear, nonlocal partial differential equations arising in models of porous media, materials science, and biological swarming. Using the JKO scheme, along with the Benamou-Brenier dynamical characterization of the Wasserstein distance, we reduce computing the solution of these evolutionary PDEs to solving a sequence of fully discrete minimization problems with strictly convex objective functions and linear constraints. We compute the minimizers of these fully discrete problems by applying a recent, provably convergent primal dual splitting scheme for three operators. By leveraging the PDE's underlying variational structure, our method overcomes traditional stability issues arising from the strong nonlinearity and degeneracy, and it is also naturally positivity preserving and entropy decreasing. Furthermore, by transforming the traditional linear equality constraint, as has appeared in previous work, into a linear inequality constraint, our method converges in fewer iterations without sacrificing any accuracy. Remarkably, our method is also massively parallelizable and thus very efficient in resolving high dimensional problems. We prove that minimizers of the fully discrete problem converge to minimizers of the continuum JKO problem as the discretization is refined, and in the process, we recover convergence results for existing numerical methods for computing Wasserstein geodesics. Finally, we conclude with simulations of nonlinear PDEs and Wasserstein geodesics in one and two dimensions that illustrate the key properties of our numerical method.
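As generic background on the Wasserstein machinery (an illustrative sketch, not the speaker's method; the function name is my own): in one dimension the squared 2-Wasserstein distance, which the JKO scheme penalizes at each step, reduces to an integral of squared quantile-function differences and can be approximated in a few lines.

```python
import numpy as np

def w2_squared_1d(x, p, q):
    """Squared 2-Wasserstein distance between densities p, q on a uniform
    grid x, via W2^2 = integral over s in (0,1) of |F^-1(s) - G^-1(s)|^2."""
    dx = x[1] - x[0]
    p = p / (p.sum() * dx)               # normalize to unit mass
    q = q / (q.sum() * dx)
    F, G = np.cumsum(p) * dx, np.cumsum(q) * dx
    s = np.linspace(1e-4, 1 - 1e-4, 4000)
    Finv = np.interp(s, F, x)            # quantile functions by inversion
    Ginv = np.interp(s, G, x)
    return np.mean((Finv - Ginv) ** 2)

# translating a Gaussian by 1 gives W2^2 = 1
x = np.linspace(-8.0, 9.0, 4001)
p = np.exp(-x**2 / 2)
q = np.exp(-(x - 1.0)**2 / 2)
print(w2_squared_1d(x, p, q))            # close to 1.0
```

In higher dimensions no such closed form exists, which is why dynamical (Benamou-Brenier) formulations and splitting schemes like the one in this talk become necessary.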

October 16, 2019
Rupert Frank
▦ A `Liquid-Solid' Phase Transition in a Simple Model for Swarming ▦
   We consider a non-local shape optimization problem, which is motivated by a simple model for swarming and other self-assembly/aggregation models, and prove the existence of different phases. In particular, we show that in the large mass regime the ground state density profile is the characteristic function of a round ball. An essential ingredient in our proof is a strict rearrangement inequality with a quantitative error estimate.

The talk is based on joint work with E. Lieb.  

October 23, 2019
Steven Low
▦ Mitigation of Cascading Failures in Power Systems ▦
   Line failures in power grids propagate in non-local, intricate and counterintuitive ways because of the interplay between power flow physics and network topology, making the mitigation of cascading failures difficult. The conventional approach to grid reliability is to build redundant lines. In this talk, we present an opposite approach to grid reliability through failure localization, by judiciously removing lines and adopting a new class of frequency control algorithms in real time. The topology design partitions the network into regions that are connected in a tree structure. The frequency control automatically adjusts controllable generators and loads to minimize disruption and localize failure propagation. This approach is derived from a spectral view of the power flow equations that relates failure propagation to the graphical structure of the grid through its Laplacian matrix. We summarize the underlying theory and present simulation results demonstrating that our approach not only localizes failure propagation, as promised by the theory, but also improves overall grid reliability even though it reduces line redundancy.

(Joint work with Daniel Guo, Chen Liang, Alessandro Zocca, and Adam Wierman)
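The Laplacian view of power flow can be made concrete with a standard DC power flow computation (a generic textbook sketch on a made-up 4-bus network, not the authors' algorithm): bus angles solve Lθ = p for the weighted graph Laplacian L of the grid, and line flows follow from angle differences.

```python
import numpy as np

# toy 4-bus network; each line has unit susceptance
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
L = np.zeros((n, n))                    # graph Laplacian of the grid
for i, j in edges:
    L[i, i] += 1.0; L[j, j] += 1.0
    L[i, j] -= 1.0; L[j, i] -= 1.0

p = np.array([2.0, -1.0, -0.5, -0.5])   # injections (generation - load), sum 0
theta = np.linalg.pinv(L) @ p           # DC power flow: solve L theta = p
flows = {e: theta[e[0]] - theta[e[1]] for e in edges}

# power balance: net flow out of each bus equals its injection
for k in range(n):
    out = sum(f if e[0] == k else -f for e, f in flows.items() if k in e)
    assert abs(out - p[k]) < 1e-9
```

Removing a line changes L, and the resulting flow redistribution is exactly what couples failure propagation to the grid's topology in the spectral analysis above.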

October 30, 2019
Gianluca Favre
▦ Kinetic model with thermalization for a gas with total energy conservation ▦
   We consider the thermalization of a gas towards a Maxwellian velocity distribution which depends locally on the temperature of the background. The exchange of kinetic and thermal energy between the gas and the background drives the system towards a global equilibrium with constant temperature. The heat flow is governed by Fourier's law. Mathematically, we consider a coupled system of nonlinear kinetic and heat equations, where in both cases we add a term that describes the energy exchange. For this problem we are able to prove existence of solutions in 1D, exponential convergence to equilibrium through a hypocoercivity technique, and the macroscopic limit towards a cross-diffusion system; in the last two cases a perturbative approach is employed. It is worth noting that, even without heat conductivity, temperature diffusion can be shown thanks to the transport of energy. It is also interesting that the thermalization is strongly influenced by the background temperature. All these aspects have also been investigated from a numerical viewpoint, providing simulations in 2D.

January 22, 2020
Richard Kueng
▦ TBA ▦

January 29, 2020
Speaker TBD
▦ TBA ▦

February 26, 2020
Speaker TBD
▦ TBA ▦

April 15, 2020
Speaker TBD
▦ TBA ▦

April 22, 2020
Speaker TBD
▦ TBA ▦

April 29, 2020
Niles Pierce
▦ TBA ▦

May 20, 2020
Speaker TBD
▦ TBA ▦

Other Seminars

May 16, 2019
C.-C. Jay Kuo
Interpretable Convolutional Neural Networks (CNNs) via Feedforward Design
     Given a convolutional neural network (CNN) architecture, its network parameters are nowadays determined by backpropagation (BP). The underlying mechanism remains a black box despite a large amount of theoretical investigation. In this talk, I describe a new interpretable and feedforward (FF) design, using LeNet-5 as an example. The FF design is a data-centric approach that derives network parameters based on training data statistics, layer by layer, in one pass. To build the convolutional layers, we develop a new signal transform, called the Saab (Subspace approximation with adjusted bias) transform. The bias in the filter weights is chosen to annihilate the nonlinearity of the activation function. To build the fully-connected (FC) layers, we adopt a label-guided linear least-squares regression (LSR) method. The classification performances of BP- and FF-trained CNNs on the MNIST and CIFAR-10 datasets are compared. The computational complexity of the FF design is significantly lower than that of the BP design and, therefore, the FF-trained CNN is ideal for mobile/edge computing. We also comment on the relationship between the BP and FF designs by examining the cross-entropy values at nodes of intermediate layers.
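To make the "statistics layer by layer" idea concrete, here is a simplified sketch (an assumption-laden toy, not the full Saab transform, which additionally adjusts a bias term to handle the activation): convolutional filters can be derived in one pass from a principal component analysis of training patches rather than from backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.normal(size=(200, 8, 8))        # stand-in "training images"

# collect all 3x3 patches, flattened to 9-vectors
patches = np.array([im[i:i + 3, j:j + 3].ravel()
                    for im in images
                    for i in range(6) for j in range(6)])
patches = patches - patches.mean(axis=0)      # remove the mean (DC) component

# principal components of the patch ensemble serve as data-driven filters
_, _, Vt = np.linalg.svd(patches, full_matrices=False)
filters = Vt[:4].reshape(4, 3, 3)             # top-4 filters, one pass, no BP
```

Each filter here is just a patch-space principal direction; the Saab transform builds on this subspace approximation and chooses the bias so that the subsequent nonlinearity acts trivially.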


August 22, 2019
Giacomo Garegnani
Bayesian Inference of Multiscale Differential Equations
     Inverse problems involving differential equations defined on multiple scales naturally arise in several engineering applications. The computational cost due to discretization of multiscale equations can be reduced employing homogenization methods, which allow for cheaper computations. Nonetheless, homogenization techniques introduce a modelling error, which has to be taken into account when solving inverse problems. In this presentation, we consider the treatment of the homogenization error in the framework of inverse problems involving either an elliptic PDE or a Langevin diffusion process. In both cases, theoretical results involving the limit of oscillations of vanishing amplitude are provided, and computational techniques for dealing with the modelling error are presented.

November 19, 2019
Matthew Thorpe
▦ How Many Labels Do You Need For Semi-Supervised Learning? ▦
   Given a data set of which only a small subset is labelled, the goal of semi-supervised learning is to find the unknown labels. A popular method is to minimise a discrete p-Dirichlet energy defined on a graph constructed from the data. As the size of the data set increases, one hopes that solutions of the discrete problem converge to those of a continuum variational problem with the continuum p-Dirichlet energy. It follows from Sobolev regularity that one cannot impose constraints if p is less than the dimension of the data; hence, in this regime, one must also increase the number of labels in order to avoid labels "disappearing" in the limit. In this talk I will address the question of what the minimal number of labels is. To compare labelling functions on different domains we use a metric based on optimal transport, which then allows for the application of methods from the calculus of variations, in particular Gamma-convergence, and methods from PDEs, such as constructing barrier functions in order to apply the maximum principle. We can further show rates of convergence.

This is joint work with Jeff Calder (Minnesota) and Dejan Slepcev (CMU).  
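For the p = 2 case of the graph energy described above, here is a minimal sketch (my own toy example on a path graph, not from the paper): minimizing the discrete 2-Dirichlet energy uᵀLu subject to the given labels reduces to a linear solve against the graph Laplacian, and on a path the resulting harmonic extension interpolates the labels linearly.

```python
import numpy as np

# path graph on 6 data points with unit edge weights
n = 6
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W            # graph Laplacian (p = 2 energy)

labeled = [0, n - 1]
vals = np.array([0.0, 1.0])               # the known labels
free = [i for i in range(n) if i not in labeled]

# minimize u^T L u subject to u = vals on labeled nodes:
# solve L_ff u_f = -L_fl u_l for the unlabeled block
u = np.zeros(n)
u[labeled] = vals
A = L[np.ix_(free, free)]
b = -L[np.ix_(free, labeled)] @ vals
u[free] = np.linalg.solve(A, b)           # harmonic extension of the labels
```

With only two labels the solution stays well behaved here because p = 2 exceeds the dimension of this one-dimensional data; the talk concerns precisely the regime where that fails and more labels are needed.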

December 5, 2019
Jose Antonio Carrillo
▦ TBA ▦

December 12, 2019
Zhenzhen Li
▦ TBA ▦

Meetings and Workshops

Past Events

Lunch Seminars

 AY 2017/18
 AY 2018/19

Other Seminars

 AY 2017/18
 AY 2018/19

Meetings & Workshops

 AY 2017/18
 AY 2018/19