Course content
Summary
Our goal is to introduce students to a powerful class of models, the Neural Network. In fact, this is a broad term that covers many diverse models and approaches. We will first motivate networks by analogy to the brain. The analogy is loose, but it serves to introduce the idea of parallel and distributed computation. We then introduce one kind of network in detail: the feedforward network trained by backpropagation of error. We discuss model architectures, training methods, and data representation issues. We hope to cover everything you need to know to get backpropagation working for you. A range of applications and extensions to the basic model will be presented in the final section of the module.
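To give a taste of what the module builds toward, here is a minimal sketch of the central model: a two-layer feedforward network trained by backpropagation of error. The language (Python with NumPy), the XOR task, the network size, and all constants are illustrative assumptions, not part of the course materials.

```python
import numpy as np

# Toy task: XOR inputs and targets (an illustrative choice).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 4))  # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))  # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)            # hidden activations
    out = sigmoid(h @ W2 + b2)          # network output

    # Backward pass: propagate the error back layer by layer.
    err = out - y                       # derivative of squared error w.r.t. out
    d_out = err * out * (1 - out)       # delta at the output (sigmoid derivative)
    d_h = (d_out @ W2.T) * h * (1 - h)  # delta at the hidden layer

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(3))  # should approach [0, 1, 1, 0]
```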
Lecture 1: Introduction
- Questions
- Motivation and Applications
- Computation in the brain
- Artificial neuron models
- Linear regression (a worked example follows this list)
- Linear neural networks
- Multi-layer networks
- Error Backpropagation
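As a concrete illustration of the linear-regression topic above, here is a minimal sketch of a single linear neuron fitted by gradient descent on the mean squared error, i.e. the LMS/delta rule. The synthetic data and all constants are invented for the example; the course does not prescribe this code.

```python
import numpy as np

rng = np.random.default_rng(0)
# Fake 1-D regression data: y = 2x + 1 plus noise.
x = rng.uniform(-1, 1, size=100)
y = 2 * x + 1 + 0.1 * rng.normal(size=100)

w, b = 0.0, 0.0   # a single linear neuron: y_hat = w*x + b
lr = 0.1
for _ in range(500):
    y_hat = w * x + b
    err = y_hat - y
    # Delta rule: step down the gradient of the mean squared error.
    w -= lr * np.mean(err * x)
    b -= lr * np.mean(err)

print(f"learned w={w:.2f}, b={b:.2f}  (true values: 2 and 1)")
```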
Lecture 3: Optimizing Linear Networks
Lecture 4: The Backprop Toolbox
- 2-Layer Networks and Backprop
- Noise and Overtraining
- Momentum
- Delta-Bar-Delta (this and Momentum are sketched in code after this list)
- Many-Layer Networks and Backprop
- Backprop: an example
- Overfitting and regularization
- Growing and pruning networks
- Preconditioning the network
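Two of the training tricks listed above, momentum and delta-bar-delta, reduce to a few lines each. This is a generic sketch, not the course's own code: the quadratic loss, its gradient, and every constant are made up for the example.

```python
import numpy as np

def grad(w):
    # Gradient of a made-up quadratic loss E(w) = 0.5 * w @ A @ w.
    A = np.diag([1.0, 10.0])
    return A @ w

# --- Momentum: each step keeps a fraction mu of the previous step. ---
w = np.array([1.0, 1.0])
step = np.zeros_like(w)
lr, mu = 0.05, 0.9
for _ in range(200):
    step = mu * step - lr * grad(w)
    w = w + step
print("momentum:       ", w)   # near the minimum at [0, 0]

# --- Delta-bar-delta (Jacobs, 1988): one learning rate per weight,
# --- adapted by comparing the gradient with its exponential average.
w = np.array([1.0, 1.0])
rates = np.full_like(w, 0.05)        # per-weight learning rates
bar = np.zeros_like(w)               # smoothed ("bar") gradient
kappa, phi, theta = 0.001, 0.1, 0.7  # grow, shrink, smoothing constants
for _ in range(200):
    g = grad(w)
    rates = np.where(bar * g > 0,
                     rates + kappa,      # signs agree: grow additively
                     rates * (1 - phi))  # otherwise: shrink multiplicatively
    bar = (1 - theta) * g + theta * bar
    w = w - rates * g
print("delta-bar-delta:", w)   # also near [0, 0]
```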
Lecture 6: Unsupervised Learning
- Introduction
- Linear Compression (PCA) (a short sketch follows this list)
- Nonlinear Compression
- Competitive Learning
- Kohonen Self-Organizing Nets
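Linear compression with PCA, listed above, can be demonstrated in a few lines. A minimal sketch, assuming NumPy's SVD and synthetic data (neither comes from the course): project centered data onto its top principal components and measure the reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)
# Fake data: 200 samples in 5 dimensions, mostly varying along 2 directions.
X = (rng.normal(size=(200, 2)) @ rng.normal(size=(2, 5))
     + 0.05 * rng.normal(size=(200, 5)))

Xc = X - X.mean(axis=0)                  # center the data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 2                                    # keep the top-k principal components
codes = Xc @ Vt[:k].T                    # compressed representation (200 x 2)
recon = codes @ Vt[:k] + X.mean(axis=0)  # linear reconstruction

err = np.mean((X - recon) ** 2)
print(f"mean squared reconstruction error with k={k}: {err:.5f}")
```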
Lecture 7: Advanced Topics
- Learning rate adaptation
- Classification
- Unsupervised learning
- Time-Delay Neural Networks
- Recurrent neural networks (the basic recurrence is sketched after this list)
- Real-Time Recurrent Learning
- Dynamics of RNNs
- Long Short-Term Memory
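The recurrent-network topics above all build on one idea: the hidden state is fed back as an input at the next time step, so the state at time t depends on the whole input history. A minimal sketch of that forward dynamics, with all sizes and weights chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 3, 5
Wx = rng.normal(scale=0.3, size=(n_hid, n_in))   # input -> hidden
Wh = rng.normal(scale=0.3, size=(n_hid, n_hid))  # hidden -> hidden (the recurrence)
b = np.zeros(n_hid)

h = np.zeros(n_hid)                               # initial hidden state
for t, x in enumerate(rng.normal(size=(10, n_in))):  # a 10-step input sequence
    h = np.tanh(Wx @ x + Wh @ h + b)  # state at t depends on all earlier inputs
    print(t, np.round(h, 3))
```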
Review for Midterm
Links
Tutorials:
- The Nervous System - a very nice introduction, many pictures