Microsoft Research
Deep learning is transforming the field of artificial intelligence, yet it lacks solid theoretical underpinnings. This state of affairs significantly hinders further progress, as exemplified by time-consuming hyperparameter optimization and the extraordinary difficulties encountered in adversarial machine learning. Our three-day workshop stems from what we identify as the current main bottleneck: understanding the geometric structure of deep neural networks. This problem sits at the confluence of mathematics, computer science, and practical machine learning. We invite leaders in these fields to foster new collaborations and to look for new angles of attack on the mysteries of deep learning.
9:00 AM – 10:00 AM | Peter Bartlett, University of California at Berkeley
AI Institute “Geometry of Deep Learning” 2019 event page: https://www.microsoft.com/en-us/research/event/ai-institute-2019/
Can you share any blogs or research papers related to this? Thanks
I couldn't use the wifi 🙁
Numerical examples illustrating the major points would have been nice. Maybe they are in a paper somewhere?
Thank you for a fantastic lecture. This result is extremely interesting 🙂
One comment I have on how this presentation could be improved: if the lecture had started from the sketch of the proof, it would have been easier to introduce both the main result (the pair of equations) and the notion of matrix ranks (lowercase r and uppercase R). If you struggle to follow this video, concentrate mostly on understanding the influence of the epsilon noise on the error; the rest of the lecture is then pretty clear.