Inverse Problems and Machine Learning
February 9-11, 2018

Program

PDF of Abstracts

Friday, February 9, 2018 (day 1)

The first day of the workshop took place in the Millikan Board Room. Lunch was served under the arches of the Moore building.

8:00 am check-in and breakfast (25 min)
8:25 am Andrew Stuart, Caltech Welcome
8:30 am Lorenzo Rosasco, University of Genoa, MIT An inverse problem perspective on machine learning Slides
9:10 am Rémi Gribonval, Inria Learning from random moments Slides
9:50 am break (30 min)
10:20 am Dejan Slepcev, Carnegie Mellon Regularizing objective functionals of semi-supervised learning Slides
11:00 am Dirk Lorenz, TU Braunschweig Randomized sparse Kaczmarz methods Slides
11:40 am Jiequn Han, Princeton Solving high-dimensional partial differential equations using deep learning Slides
12:20 pm lunch (1 hour)
1:20 pm Pratik Chaudhari, UCLA Unraveling the mysteries of stochastic gradient descent on deep networks Slides
2:00 pm Jens Behrmann, University of Bremen Towards understanding the ill-posedness of inverting rectifier networks Slides
2:40 pm Misha Belkin, Ohio State University Making shallow learning great again
3:20 pm break (40 min)
4:00 pm Christoph Brune, University of Twente Deep learning theory with application to cancer research Slides
4:40 pm Xiyang Luo, UCLA and Matthew Dunlop, Caltech UQ in graph-based classification Slides
5:20 pm Joan Bruna, NYU On the loss surface of neural networks Slides
6:00 pm day finished


Saturday, February 10, 2018 (day 2)

The second and third days of the workshop took place in the Annenberg building.

8:00 am breakfast (30 min)
8:30 am Nicolas Flammarion, UC Berkeley Optimal rates for least-squares regression through SGD
9:10 am Hrushikesh Mhaskar, Claremont Graduate University Machine learning meets super-resolution Slides
9:50 am break (30 min)
10:20 am Mauro Maggioni, Johns Hopkins Learning effective diffusion processes on manifolds
11:00 am Johannes Schmidt-Hieber, Leiden University Statistical theory for deep neural networks with ReLU activation function Slides
11:40 am Nikola Kovachki, Caltech Derivative-free ensemble methods for machine learning tasks Slides
12:20 pm lunch (1 hour)
1:20 pm Stanley Osher, UCLA PDE-based approaches to nonconvex optimization
2:00 pm Pengchuan Zhang, Microsoft Research AI Analysis and applications of deep generative models Slides
2:40 pm Braxton Osting, University of Utah A generalized MBO diffusion generated method for constrained harmonic maps Slides
3:20 pm break (40 min)
4:00 pm Ekaterina Rapinchuk, Michigan State University An auction approach to semi-supervised data classification
4:40 pm Stefano Soatto, UCLA The emergence theory of deep learning: perception, information theory and PAC Bayes
5:20 pm Venkat Chandrasekaran, Caltech Learning regularizers from data
6:00 pm day finished


Sunday, February 11, 2018 (day 3)

The third day of the workshop also took place in the Annenberg building.

8:00 am breakfast (30 min)
8:30 am Michael Mahoney, UC Berkeley Second order machine learning Slides
9:10 am Adam Oberman, McGill University Continuous time methods for large scale optimization
9:50 am break (30 min)
10:20 am Bharath Sriperumbudur, Pennsylvania State University On approximate kernel PCA using random features
11:00 am Sergiy Pereverzyev Jr., University of Innsbruck Regularized integral operators in two-sample problem
11:40 am Eldad Haber, University of British Columbia Deep neural networks meet partial differential equations
12:20 pm lunch (1 hour)
1:30 pm meeting complete