Lunch Seminars
(Will be held at 12 noon in Annenberg 213, unless otherwise specified.)

September 25, 2019
October 16, 2019 (The talk is based on joint work with E. Lieb.)
October 23, 2019 (Joint work with Daniel Guo, Chen Liang, Alessandro Zocca, and Adam Wierman)
October 30, 2019
January 22, 2020
January 29, 2020
February 26, 2020
April 15, 2020
April 22, 2020
April 29, 2020
May 20, 2020

Other Seminars
(Time and location vary)
August 22, 2019
• Special CMX Seminar • Annenberg 213 12:00pm Giacomo Garegnani ▦ Bayesian Inference of Multiscale Differential Equations ▦ Inverse problems involving differential equations defined on multiple scales naturally arise in several engineering applications. The computational cost due to discretization of multiscale equations can be reduced by employing homogenization methods, which allow for cheaper computations. Nonetheless, homogenization techniques introduce a modelling error, which has to be taken into account when solving inverse problems. In this presentation, we consider the treatment of the homogenization error in the framework of inverse problems involving either an elliptic PDE or a Langevin diffusion process. In both cases, theoretical results involving the limit of oscillations of vanishing amplitude are provided, and computational techniques for dealing with the modelling error are presented.
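As a rough illustration of the setting described in this abstract (not the method of the talk), the sketch below performs Bayesian inference with a cheap, hypothetical homogenized forward map and accounts for the modelling error in the simplest possible way, by inflating the noise covariance in the likelihood. The forward map G0, the data, and all parameters are invented for illustration.

```python
import numpy as np

# Minimal sketch: random-walk Metropolis for a scalar parameter theta, using a
# cheap "homogenized" forward model G0 in place of the full multiscale model,
# and inflating the noise covariance to account for the homogenization
# (modelling) error.  All names and values are illustrative assumptions.

def G0(theta):
    # placeholder homogenized forward map (e.g. effective coefficient -> observations)
    return np.array([np.exp(-theta), theta ** 2])

y_obs = np.array([0.4, 0.9])                    # synthetic observations
sigma_noise = 0.05                              # observational noise std
sigma_model = 0.1                               # assumed std of the homogenization error
gamma2 = sigma_noise ** 2 + sigma_model ** 2    # inflated (diagonal) covariance

def log_post(theta):
    misfit = y_obs - G0(theta)
    return -0.5 * np.dot(misfit, misfit) / gamma2 - 0.5 * theta ** 2  # N(0,1) prior

rng = np.random.default_rng(0)
theta, samples = 0.0, []
lp = log_post(theta)
for _ in range(5000):
    prop = theta + 0.2 * rng.standard_normal()  # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:    # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)

print("posterior mean:", np.mean(samples[1000:]))
```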
November 19, 2019
• Special CMX Seminar • Annenberg 213 4:30pm Matthew Thorpe ▦ How Many Labels Do You Need For Semi-Supervised Learning? ▦ Given a data set of which a small subset is labelled, the goal of semi-supervised learning is to find the unknown labels. A popular method is to minimise a discrete p-Dirichlet energy defined on a graph constructed from the data. As the size of the data set increases, one hopes that solutions of the discrete problem converge to solutions of a continuum variational problem with the continuum p-Dirichlet energy. It follows from Sobolev regularity that one cannot impose constraints if p is less than the dimension of the data; hence, in this regime, one must also increase the number of labels in order to avoid labels "disappearing" in the limit. In this talk I will address the question of what the minimal number of labels is. To compare labelling functions on different domains we use a metric based on optimal transport, which then allows for the application of methods from the calculus of variations, in particular Gamma-convergence, and methods from PDEs, such as constructing barrier functions in order to apply the maximum principle. We can further show rates of convergence. This is joint work with Jeff Calder (Minnesota) and Dejan Slepcev (CMU).
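For readers unfamiliar with graph-based semi-supervised learning, the following is a minimal sketch of the p = 2 case (often called Laplace learning): minimizing the discrete Dirichlet energy subject to the given labels reduces to solving a graph Laplacian linear system on the unlabelled nodes. The data set, graph weights, and labels are illustrative assumptions; this is not meant to reproduce the analysis in the abstract.

```python
import numpy as np

# Minimal sketch of p = 2 "Laplace learning": minimize the discrete Dirichlet
# energy sum_{ij} w_ij (u_i - u_j)^2 subject to u = g on the labelled nodes.
# Data, graph weights, and labels are illustrative.

rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 2))                  # point cloud in [0,1]^2
labelled = np.array([0, 1])                     # indices of labelled points
g = np.array([0.0, 1.0])                        # their labels

# Gaussian weights on the similarity graph
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / (2 * 0.1 ** 2))
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W                       # graph Laplacian

# Solve L_uu u_u = -L_ul g on the unlabelled nodes
unlab = np.setdiff1d(np.arange(len(X)), labelled)
u = np.zeros(len(X))
u[labelled] = g
u[unlab] = np.linalg.solve(L[np.ix_(unlab, unlab)],
                           -L[np.ix_(unlab, labelled)] @ g)

print("predicted labels (thresholded, first 10):", (u > 0.5).astype(int)[:10])
```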
December 6, 2019
• CMX Special Seminar • Annenberg 213 4:00pm Jose Antonio Carrillo ▦ Consensus Based Models and Applications to Global Optimization ▦ We introduce a novel first-order stochastic swarm intelligence (SI) model in the spirit of consensus formation models, namely a consensus-based optimization (CBO) algorithm, which may be used for the global optimization of a function in multiple dimensions. The CBO algorithm allows for passage to the mean-field limit, which results in a nonstandard, nonlocal, degenerate parabolic partial differential equation (PDE). Exploiting tools from PDE analysis, we provide convergence results that help to understand the asymptotic behavior of the SI model. We further present numerical investigations underlining the feasibility of our approach with applications to machine learning problems.
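A minimal sketch of a CBO iteration is given below, assuming the commonly used update in which particles drift toward a weighted consensus point computed with Gibbs-type weights exp(-alpha f(x_i)) and receive noise scaled by their distance to that point (the anisotropic variant). The test function and all parameter values are illustrative assumptions, not taken from the talk.

```python
import numpy as np

# Minimal sketch of consensus-based optimization (CBO): particles drift toward
# a weighted average ("consensus point") with weights exp(-alpha f(x_i)), plus
# componentwise noise scaled by the distance to the consensus point.

def f(x):                                      # illustrative objective, minimizer at (1,...,1)
    return np.sum((x - 1.0) ** 2, axis=-1)

rng = np.random.default_rng(0)
N, d = 100, 5
X = rng.uniform(-3, 3, size=(N, d))            # particle ensemble
alpha, lam, sigma, dt, steps = 30.0, 1.0, 0.7, 0.05, 400

for _ in range(steps):
    w = np.exp(-alpha * (f(X) - f(X).min()))   # stabilized Gibbs weights
    v = (w[:, None] * X).sum(0) / w.sum()      # consensus point
    diff = X - v
    X = X - lam * diff * dt \
          + sigma * diff * np.sqrt(dt) * rng.standard_normal((N, d))

print("consensus point:", np.round(v, 3))      # should approach (1, ..., 1)
```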
December 12, 2019
• CMX Special Seminar • Annenberg 213 4:30pm Zhenzhen Li ▦ A New Non-convex Optimization Framework For Low-rank Matrix Recovery with Mathematical Guarantee ▦ Recent years have witnessed the growing importance of non-convex methods in many industrial and practical problems. Many low-rank-related problems can be solved by reformulating them as non-convex optimization problems. Surprisingly, these optimization problems usually do not have spurious local minima, and all saddle points are strict under the Euclidean parameterized setting. Although a dimension-free polynomial convergence rate can be guaranteed for many problems, numerical experiments have demonstrated much better performance than what the current theory predicts. In contrast to previous non-convex methods (with weaker theoretical convergence guarantees) and convex relaxations (with heavier computational cost), in this talk we will discuss a new global non-convex optimization framework for solving a general inverse problem of low-rank matrices under the Riemannian manifold setting. Given some random measurements of a low-rank matrix, how well does Riemannian gradient descent (a version with light computational cost) with random initialization minimize a least-squares loss? We give a rigorous mathematical analysis of both the asymptotic convergence behavior and the fast convergence rate under isometry or weaker-isometry conditions. More specifically, in the isometry case, a low-rank matrix manifold with rank r (r << n) consists of 2^r branches. We will show that a random initialization falls, with probability 1, into an intrinsic branch. Further, it needs O(log n + log(1/epsilon)) iterations to generate an epsilon-accurate solution. Similar results also hold for low-rank matrix recovery given some random information under mild conditions (the weaker-isometry case). Potential applications include, but are not limited to, low-rank-matrix-related problems such as matrix sensing, matrix completion, low-rank Hankel matrix recovery, phase retrieval, robust PCA, and non-convex flow.
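As a simplified illustration of low-rank recovery on the fixed-rank manifold (not the specific algorithm or analysis of the talk), the sketch below recovers a rank-r matrix from random Gaussian measurements by taking a gradient step on the least-squares loss and projecting back to rank r via truncated SVD, a simple retraction. Dimensions, step size, and iteration count are assumptions chosen so the example runs quickly.

```python
import numpy as np

# Minimal sketch: recover a rank-r matrix from random Gaussian measurements by
# gradient steps on the least-squares loss, each followed by projection onto
# the fixed-rank manifold via truncated SVD (a simple retraction).

rng = np.random.default_rng(0)
n, r, m = 16, 2, 600                               # matrix size, rank, #measurements
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # ground truth
A = rng.standard_normal((m, n, n)) / np.sqrt(m)    # sensing matrices
y = np.einsum('kij,ij->k', A, M)                   # linear measurements

def truncate(Z, r):
    # project onto the set of rank-r matrices via truncated SVD
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

X = truncate(rng.standard_normal((n, n)), r)       # random rank-r initialization
for _ in range(200):
    res = np.einsum('kij,ij->k', A, X) - y         # measurement residual
    grad = np.einsum('k,kij->ij', res, A)          # Euclidean gradient of 0.5*||res||^2
    X = truncate(X - 0.5 * grad, r)                # gradient step + retraction

print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))
```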
January 10, 2020
• CMX Special Seminar • Annenberg 213 12:00pm Mitch Luskin ▦ Modeling and Computation for Trilayer Graphene
January 13, 2019 (These results are joint work with J. Wolf.)

Meetings and Workshops
Past Events
AY 2017/18