CMX is a research group devoted to the development and analysis of novel algorithmic ideas underlying emerging applications in the physical, biological, social, and information sciences. We are distinguished by a shared value system built on developing foundational mathematical understanding, and on deploying that understanding to address key emerging scientific and technological challenges.


Faculty

Venkat Chandrasekaran
Mathieu Desbrun
Thomas Hou
Houman Owhadi
Peter Schröder
Andrew Stuart
Joel Tropp

Von Karman Instructors

Franca Hoffmann
Ka Chun Lam

Postdoctoral Researchers

Alfredo Garbuno-Inigo
Bamdad Hosseini
Pengfei Liu
Krithika Manohar
Melike Sirlanci

Grad Students

Max Budninskiy
Utkan Candogan
JiaJie Chen
De Huang
Nikola Kovachki
Matt Levine
Riley Murray
Florian Schaefer
Yong Shen Soh
Yousuf Soliman
Armeen Taeb
Gene R. Yoo
Shumao Zhang

Lunch Seminars

(Will be held at 12 noon in Annenberg 213, unless otherwise specified.)


May 7, 2019
James Saunderson
Certifying polynomial nonnegativity via hyperbolic optimization
     Certifying nonnegativity of multivariate polynomials is fundamental to solving optimization problems modeled with polynomials. One well-known way to certify nonnegativity is to express a polynomial as a sum of squares, and the search for such a certificate can be carried out via semidefinite optimization. An interesting generalization of semidefinite optimization, one that retains many of its good algorithmic properties, is hyperbolic optimization. Are there natural certificates of nonnegativity that we can search for via hyperbolic optimization, and that are not obviously captured by sums of squares? If so, these could have the potential to generate hyperbolic optimization-based relaxations of optimization problems that may be stronger, in some sense, than semidefinite optimization-based relaxations.
In this talk, I will describe one candidate for such "hyperbolic certificates of nonnegativity", and discuss what is known about their relationship with sums of squares.
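The sum-of-squares search via semidefinite optimization mentioned in the abstract is easy to make concrete. The sketch below illustrates the general technique and is not the speaker's material; it assumes the cvxpy package and uses the example polynomial p(x) = x^4 - 2x^2 + 1. Writing p(x) = z^T Q z with z = [1, x, x^2], feasibility of the resulting semidefinite program (a positive semidefinite Gram matrix Q matching the coefficients of p) is exactly a sum-of-squares certificate.

    # A minimal sketch, assuming cvxpy: certify p(x) = x^4 - 2x^2 + 1 >= 0
    # by finding a PSD Gram matrix Q with p(x) = z^T Q z, z = [1, x, x^2].
    import cvxpy as cp

    coeffs = {0: 1.0, 1: 0.0, 2: -2.0, 3: 0.0, 4: 1.0}  # degree -> coefficient of p

    Q = cp.Variable((3, 3), symmetric=True)
    constraints = [Q >> 0]  # the Gram matrix must be positive semidefinite
    for k, c in coeffs.items():
        # the coefficient of x^k collects all entries Q[i, j] with i + j = k
        constraints.append(sum(Q[i, k - i] for i in range(3) if 0 <= k - i <= 2) == c)

    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve()
    print(prob.status)  # "optimal" means the feasibility problem was solved: p is SOS

One feasible certificate here is Q = [[1, 0, -1], [0, 0, 0], [-1, 0, 1]], which recovers the decomposition p(x) = (x^2 - 1)^2.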

September 25, 2019
Jose Antonio Carrillo
Primal dual methods for Wasserstein gradient flows
   Combining the classical theory of optimal transport with modern operator splitting techniques, I will present a new numerical method for nonlinear, nonlocal partial differential equations arising in models of porous media, materials science, and biological swarming. Using the JKO scheme, along with the Benamou-Brenier dynamical characterization of the Wasserstein distance, we reduce computing the solution of these evolutionary PDEs to solving a sequence of fully discrete minimization problems, with strictly convex objective function and linear constraint. We compute the minimizer of these fully discrete problems by applying a recent, provably convergent primal dual splitting scheme for three operators. By leveraging the PDE’s underlying variational structure, our method overcomes traditional stability issues arising from the strong nonlinearity and degeneracy, and it is also naturally positivity preserving and entropy decreasing. Furthermore, by transforming the traditional linear equality constraint, as has appeared in previous work, into a linear inequality constraint, our method converges in fewer iterations without sacrificing any accuracy. Remarkably, our method is also massively parallelizable and thus very efficient in resolving high dimensional problems. We prove that minimizers of the fully discrete problem converge to minimizers of the continuum JKO problem as the discretization is refined, and in the process, we recover convergence results for existing numerical methods for computing Wasserstein geodesics. Finally, we conclude with simulations of nonlinear PDEs and Wasserstein geodesics in one and two dimensions that illustrate the key properties of our numerical method.
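For orientation, the two ingredients named in the abstract are standard; in generic notation (not necessarily the speaker's), a single JKO step of size \tau for an energy E, and the Benamou-Brenier characterization of the squared Wasserstein distance, read:

    % One JKO step: implicit Euler in Wasserstein space
    \rho^{k+1} \in \operatorname*{arg\,min}_{\rho}
      \left\{ \frac{1}{2\tau} W_2^2(\rho, \rho^k) + E(\rho) \right\}

    % Benamou--Brenier: dynamical form of W_2, with momentum m = \rho v
    W_2^2(\mu, \nu) = \min_{(\rho, m)}
      \int_0^1 \!\int \frac{|m(x,t)|^2}{\rho(x,t)} \, dx \, dt
    \quad \text{subject to} \quad
      \partial_t \rho + \nabla \cdot m = 0, \quad
      \rho(\cdot, 0) = \mu, \quad \rho(\cdot, 1) = \nu .

The integrand |m|^2/\rho is jointly convex in (\rho, m) and the continuity equation is linear, which is the convex-objective, linear-constraint structure that the primal dual splitting exploits.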

Other Seminars


May 16, 2019
C.-C. Jay Kuo
Interpretable Convolutional Neural Networks (CNNs) via Feedforward Design
     Given a convolutional neural network (CNN) architecture, its network parameters are nowadays determined by backpropagation (BP). Despite a large amount of theoretical investigation, the underlying mechanism remains a black box. In this talk, I describe a new interpretable and feedforward (FF) design, using LeNet-5 as an example. The FF-trained CNN is a data-centric approach that derives network parameters from training data statistics, layer by layer, in one pass. To build the convolutional layers, we develop a new signal transform, called the Saab (Subspace approximation with adjusted bias) transform, in which the bias in the filter weights is chosen to annihilate the nonlinearity of the activation function. To build the fully-connected (FC) layers, we adopt a label-guided linear least-squares regression (LSR) method. The classification performances of BP- and FF-trained CNNs on the MNIST and CIFAR-10 datasets are compared. The computational complexity of the FF design is significantly lower than that of the BP design, and the FF-trained CNN is therefore well suited to mobile/edge computing. We also comment on the relationship between the BP and FF designs by examining the cross-entropy values at nodes of intermediate layers.
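The core of the Saab transform can be illustrated in a few lines. The sketch below is a deliberately simplified rendering of the idea, not the speaker's implementation (the actual transform also separates a DC component and follows the paper's bias rule); it assumes numpy, and saab_filters is a hypothetical helper name. Data-driven filters come from a PCA of training patches, and a single bias shifts all responses to be nonnegative, so a subsequent ReLU acts as the identity and contributes no nonlinearity.

    # A simplified sketch of the Saab idea, assuming numpy (hypothetical helper,
    # not the authors' code): PCA filters plus a nonlinearity-annihilating bias.
    import numpy as np

    def saab_filters(patches, num_filters):
        """patches: (n, d) array of flattened training patches."""
        mean = patches.mean(axis=0)
        centered = patches - mean
        # principal components of the patch ensemble serve as convolution kernels
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        kernels = vt[:num_filters]              # (num_filters, d)
        responses = centered @ kernels.T
        # choose bias so responses + bias >= 0, hence ReLU(x + bias) = x + bias
        bias = max(0.0, -float(responses.min()))
        return kernels, mean, bias

    rng = np.random.default_rng(0)
    patches = rng.normal(size=(1000, 25))       # e.g. flattened 5x5 patches
    kernels, mean, bias = saab_filters(patches, num_filters=6)
    print(kernels.shape, bias)

Because each layer's parameters are computed in closed form from data statistics, the whole network is fit in a single pass, which is the source of the complexity advantage over backpropagation claimed in the abstract.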


Meetings and Workshops

Past Events

Lunch Seminars
Other Seminars
Meetings & Workshops