Loss surfaces of artificial neural networks and random matrix theory
Mathematics Postgraduate Seminar
13th December 2019, 4:30 pm – 5:30 pm
Fry Building, 2.04
Over the past few years, the machine learning communities in academia and industry have expended vast amounts of research and engineering effort on the development and application of so-called artificial neural networks. Modern networks contain enormous numbers of tunable parameters and are used to obtain state-of-the-art results in computer vision, speech recognition, natural language processing, anomaly detection and so on. If you've used Google search or Siri, then you've interacted with a large modern neural network. Despite the undeniable success of neural networks on a wide variety of so-called 'AI' tasks, the theory is decidedly lacking and the literature is rife with pseudo-scientific claims and hacky heuristics. In this talk I will introduce artificial neural networks conceptually and highlight some areas where the theory is particularly thin, before making connections with random matrix theory and showing some results that shed light on the empirical success of neural networks. I shall hopefully dispel any misconceptions about 'AI' and, if there is audience appetite, I may also rant about the hype surrounding it.