Lunch Seminars
(Will be held at 12 noon in Annenberg 213, unless otherwise specified.)

May 7, 2019
September 25, 2019
Jose Antonio Carrillo ▦ Primal dual methods for Wasserstein gradient flows ▦ Combining the classical theory of optimal transport with modern operator splitting techniques, I will present a new numerical method for nonlinear, nonlocal partial differential equations arising in models of porous media, materials science, and biological swarming. Using the JKO scheme, along with the Benamou-Brenier dynamical characterization of the Wasserstein distance, we reduce computing the solution of these evolutionary PDEs to solving a sequence of fully discrete minimization problems with strictly convex objective function and linear constraint. We compute the minimizer of these fully discrete problems by applying a recent, provably convergent primal-dual splitting scheme for three operators. By leveraging the PDE's underlying variational structure, our method overcomes traditional stability issues arising from the strong nonlinearity and degeneracy, and it is also naturally positivity preserving and entropy decreasing. Furthermore, by transforming the traditional linear equality constraint, as has appeared in previous work, into a linear inequality constraint, our method converges in fewer iterations without sacrificing any accuracy. Remarkably, our method is also massively parallelizable and thus very efficient in resolving high-dimensional problems. We prove that minimizers of the fully discrete problem converge to minimizers of the continuum JKO problem as the discretization is refined, and in the process, we recover convergence results for existing numerical methods for computing Wasserstein geodesics. Finally, we conclude with simulations of nonlinear PDEs and Wasserstein geodesics in one and two dimensions that illustrate the key properties of our numerical method.
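The talk's solver is a three-operator primal-dual splitting applied to the discretized JKO problem. As a minimal sketch of the underlying building block only, the following illustrates the classical two-operator primal-dual hybrid gradient (PDHG) iteration on a toy strictly convex objective with a linear equality constraint; the function `pdhg` and the toy problem are illustrative assumptions, not the speaker's actual scheme.

```python
import numpy as np

def pdhg(K, b, c, tau, sigma, iters=2000):
    """PDHG sketch for  min_x 0.5*||x - b||^2  subject to  K x = c.

    Here f(x) = 0.5*||x - b||^2 has prox_{tau f}(v) = (v + tau*b)/(1 + tau),
    and g is the indicator of {K x = c}, whose conjugate gives the
    dual update y <- y + sigma*(K x_bar - c)."""
    x = np.zeros(K.shape[1])
    x_bar = x.copy()
    y = np.zeros(K.shape[0])
    for _ in range(iters):
        y = y + sigma * (K @ x_bar - c)                       # dual ascent step
        x_new = (x - tau * (K.T @ y) + tau * b) / (1 + tau)   # primal prox step
        x_bar = 2 * x_new - x                                 # over-relaxation
        x = x_new
    return x

# Toy problem: project b onto the hyperplane sum(x) = 1.
K = np.ones((1, 3))
b = np.array([1.0, 2.0, 3.0])
c = np.array([1.0])
L = np.linalg.norm(K, 2)
x = pdhg(K, b, c, tau=0.9 / L, sigma=0.9 / L)  # tau*sigma*||K||^2 = 0.81 <= 1
# converges to the exact projection [-2/3, 1/3, 4/3]
```

The step sizes obey the standard PDHG condition τσ‖K‖² ≤ 1. In the Wasserstein-geodesic setting, a matrix playing the role of K would come from the discrete space-time continuity equation of the Benamou-Brenier formulation, and the talk's method additionally handles a third operator and an inequality constraint.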
Other Seminars

May 16, 2019
C.C. Jay Kuo ▦ Interpretable Convolutional Neural Networks (CNNs) via Feedforward Design ▦ Given a convolutional neural network (CNN) architecture, its network parameters are nowadays determined by backpropagation (BP). The underlying mechanism remains a black box despite a large amount of theoretical investigation. In this talk, I describe a new interpretable and feedforward (FF) design with LeNet-5 as an example. The FF-trained CNN is a data-centric approach that derives network parameters from training data statistics, layer by layer, in one pass. To build the convolutional layers, we develop a new signal transform, called the Saab (Subspace approximation with adjusted bias) transform. The bias in the filter weights is chosen to annihilate the nonlinearity of the activation function. To build the fully connected (FC) layers, we adopt a label-guided linear least-squares regression (LSR) method. The classification performances of BP- and FF-trained CNNs on the MNIST and CIFAR-10 datasets are compared. The computational complexity of the FF design is significantly lower than that of the BP design, and therefore the FF-trained CNN is well suited for mobile/edge computing. We also comment on the relationship between the BP and FF designs by examining the cross-entropy values at nodes of intermediate layers.
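As a rough NumPy sketch of the Saab idea described above (assumptions: the DC kernel is the normalized all-ones direction, the AC kernels are the top principal components of the DC-removed patches, and the bias is the smallest constant that keeps every response nonnegative, so a subsequent ReLU acts as the identity), one might write something like the following; the function names are illustrative, not the authors' code:

```python
import numpy as np

def saab_fit(patches, num_kernels):
    """Fit a Saab-like transform on flattened patches (n_samples, dim)."""
    d = patches.shape[1]
    # DC kernel: normalized all-ones direction (patch-mean component).
    dc_kernel = np.ones(d) / np.sqrt(d)
    dc = patches @ dc_kernel
    residual = patches - np.outer(dc, dc_kernel)
    # AC kernels: leading principal components of the DC-removed patches.
    cov = np.cov(residual, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    ac_kernels = eigvecs[:, ::-1][:, :num_kernels - 1].T
    kernels = np.vstack([dc_kernel, ac_kernels])
    # Bias chosen so all training responses are nonnegative:
    # a following ReLU then changes nothing ("annihilates" the nonlinearity).
    responses = patches @ kernels.T
    bias = max(0.0, -float(responses.min()))
    return kernels, bias

def saab_transform(patches, kernels, bias):
    """Apply the fitted Saab-like transform."""
    return patches @ kernels.T + bias

# Demo on random stand-ins for 3x3 flattened patches.
rng = np.random.default_rng(0)
patches = rng.standard_normal((200, 9))
kernels, bias = saab_fit(patches, num_kernels=4)
out = saab_transform(patches, kernels, bias)        # all entries nonnegative
```

Because each residual patch is orthogonal to the DC direction, the AC eigenvectors automatically live in the complement of the DC kernel, so the kernel set is (approximately) orthonormal; this is one way the design stays interpretable layer by layer.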
