COURSE OUTLINE

Session 1

Neural network basics. The perceptron:

Neural networks: history and the basic idea. The relationship between biological neurons and their mathematical abstraction. The perceptron: construction, training, activation functions. Practice: introduction to TensorFlow and Keras.
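
The practice session introduces TensorFlow and Keras; as a library-free warm-up, the classic perceptron learning rule fits in a few lines of NumPy. The AND gate below is an illustrative choice, not part of the course materials:

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Rosenblatt's perceptron rule with a step activation."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a constant bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0      # step activation function
            w += lr * (yi - pred) * xi         # update only on mistakes
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])                     # AND is linearly separable
w = train_perceptron(X, y)
print(predict(w, X))                           # → [0 0 0 1]
```

By the perceptron convergence theorem, the rule terminates on any linearly separable problem; XOR, which is not separable, is the classic failure case that motivates multi-layer networks.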

Session 2

Feedforward neural networks:

Feedforward neural networks. Gradient descent basics. The computation graph and computing gradients on it (backpropagation). Why training deep networks is hard. Practice: a feedforward neural network on the MNIST dataset.
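
Backpropagation is just the chain rule applied in reverse along the computation graph. A minimal NumPy sketch (one hidden tanh layer, squared-error loss; the shapes are illustrative) with the standard finite-difference sanity check:

```python
import numpy as np

def forward(W1, b1, W2, b2, x):
    h = np.tanh(W1 @ x + b1)          # hidden layer
    y = W2 @ h + b2                   # linear output
    return h, y

def loss_and_grads(W1, b1, W2, b2, x, t):
    h, y = forward(W1, b1, W2, b2, x)
    diff = y - t
    loss = 0.5 * np.sum(diff ** 2)    # squared-error loss
    # Backward pass: walk the computation graph in reverse.
    dy = diff
    dW2 = np.outer(dy, h); db2 = dy
    dh = W2.T @ dy
    dz = dh * (1 - h ** 2)            # tanh'(z) = 1 - tanh(z)^2
    dW1 = np.outer(dz, x); db1 = dz
    return loss, dW1, db1, dW2, db2

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)
W2, b2 = rng.standard_normal((2, 4)), rng.standard_normal(2)
x, t = rng.standard_normal(3), rng.standard_normal(2)

loss, dW1, *_ = loss_and_grads(W1, b1, W2, b2, x, t)

# Check one analytic gradient against a finite difference.
eps = 1e-6
W1p = W1.copy(); W1p[0, 0] += eps
num = (loss_and_grads(W1p, b1, W2, b2, x, t)[0] - loss) / eps
print(abs(num - dW1[0, 0]) < 1e-4)    # → True
```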

Session 3

Optimization in neural networks:

Gradient descent and its problems. Nesterov's momentum. Second-order methods. Adaptive gradient descent methods: Adagrad, Adadelta, Adam. Practice: comparing gradient descent variations.
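
The practice session compares these optimisers in Keras; the update rules themselves are short enough to sketch directly in NumPy. Minimising the toy function f(x) = x² (an illustrative choice, with illustrative hyperparameters):

```python
import numpy as np

def run(update, steps=500, x0=5.0):
    x, state = x0, {}
    for t in range(1, steps + 1):
        g = 2 * x                     # gradient of f(x) = x^2
        x = update(x, g, state, t)
    return x

def sgd(x, g, s, t, lr=0.1):
    return x - lr * g

def momentum(x, g, s, t, lr=0.05, mu=0.9):
    s['v'] = mu * s.get('v', 0.0) - lr * g   # velocity accumulates gradients
    return x + s['v']

def adam(x, g, s, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    s['m'] = b1 * s.get('m', 0.0) + (1 - b1) * g        # first moment
    s['v'] = b2 * s.get('v', 0.0) + (1 - b2) * g * g    # second moment
    mhat = s['m'] / (1 - b1 ** t)     # bias correction for zero init
    vhat = s['v'] / (1 - b2 ** t)
    return x - lr * mhat / (np.sqrt(vhat) + eps)

for name, upd in [('sgd', sgd), ('momentum', momentum), ('adam', adam)]:
    print(name, abs(run(upd)))        # all three end near the minimum at 0
```

On a single convex quadratic all three converge; the differences show up on ravines, plateaus, and sparse gradients, which is what the practice session explores.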

Session 4

Regularisation in neural networks:

Regularisation: L1, L2, early stopping. Dropout. Practice: comparing regularisers.
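
Keras provides these as `kernel_regularizer` options and a `Dropout` layer; the mechanics are simple enough to sketch in NumPy. Inverted dropout scales the surviving units at train time so that the expected activation is unchanged and no rescaling is needed at test time (the sizes below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(h, p=0.5, train=True):
    """Inverted dropout: zero units with probability p, scale the rest
    by 1/(1-p) so the expectation matches the test-time forward pass."""
    if not train:
        return h
    mask = (rng.random(h.shape) >= p) / (1 - p)
    return h * mask

def l2_penalty(weights, lam=1e-4):
    # L2 regularisation adds lam * ||w||^2 to the training loss
    return lam * sum(np.sum(w ** 2) for w in weights)

h = np.ones(10)
samples = np.stack([dropout(h) for _ in range(10000)])
print(samples.mean())                  # ≈ 1: the expectation is preserved
```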

Session 5

Weight initialisation and batchnorm:

Weight initialisation: the unsupervised pre-training idea, why straightforward random initialisation fails, Xavier initialisation. Covariate shift and batch normalisation. Practice: putting everything together.
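
Both ideas fit in a few lines of NumPy (layer sizes below are illustrative). Xavier initialisation chooses the weight variance so that activation variance stays roughly constant from layer to layer; batch normalisation standardises each feature over the mini-batch before an affine scale-and-shift:

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_init(fan_in, fan_out):
    """Xavier/Glorot initialisation: Var(W) = 2 / (fan_in + fan_out)."""
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, size=(fan_out, fan_in))

def batchnorm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalise each feature over the batch, then scale and shift."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

W = xavier_init(200, 100)
print(W.var())                         # ≈ 2 / 300

x = rng.normal(3.0, 5.0, size=(64, 10))
y = batchnorm(x)
print(y.mean(), y.std())               # ≈ 0 and ≈ 1 after normalisation
```

At train time batchnorm uses batch statistics as above; at test time frameworks substitute running averages collected during training.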

Session 6

Convolutional neural networks I:

Convolutional architectures: idea and structure. Examples. Deconvolution and visualisation in CNNs. Practice: CNNs for MNIST.

Session 7

Convolutional neural networks II:

Modern convolutional architectures: AlexNet, VGG, Network-in-Network, Inception. Residual connections and ResNet. Practice: image recognition.
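
The core idea of ResNet is that a block computes y = x + F(x) rather than y = F(x), so gradients can flow through the identity shortcut unimpeded. A NumPy sketch with a two-layer F (fully connected here for brevity; real residual blocks use convolutions):

```python
import numpy as np

def residual_block(x, W1, W2):
    """y = x + F(x): the skip connection lets the block default to the
    identity and lets gradients pass straight through."""
    h = np.maximum(0.0, W1 @ x)        # ReLU
    return x + W2 @ h                  # identity shortcut

rng = np.random.default_rng(0)
x = rng.standard_normal(16)
W1 = rng.standard_normal((32, 16)) * 0.1
W2 = np.zeros((16, 32))                # zero-init: the block starts as identity
print(np.allclose(residual_block(x, W1, W2), x))   # → True
```

Zero-initialising the last layer of F, as above, makes every block the identity at the start of training, which is one reason very deep ResNets remain trainable.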

Session 8

Recurrent neural networks I:

Sequence-based problems. Recurrent neural networks: the basic idea, backpropagation through time. Simple RNNs and their problems: vanishing and exploding gradients. Practice: seq2seq.
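
Vanishing gradients are easy to demonstrate numerically: in a simple RNN with h_{t+1} = tanh(W h_t), the gradient dh_T/dh_0 is a product of per-step Jacobians diag(1 - h²) W, and when the recurrent weights are small this product shrinks exponentially with sequence length (the sizes and scale below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = 0.2 * rng.standard_normal((n, n)) / np.sqrt(n)  # small recurrent weights
h = rng.standard_normal(n)

# Accumulate d h_T / d h_0 as a product of per-step Jacobians.
J = np.eye(n)
for _ in range(50):
    h = np.tanh(W @ h)
    J = np.diag(1 - h ** 2) @ W @ J    # Jacobian of h_{t+1} w.r.t. h_t

print(np.linalg.norm(J))               # tiny: the gradient has vanished
```

With large weights the same product explodes instead, which is why plain RNNs struggle with long-range dependencies; the fixes are the subject of the next session.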

Session 9

Recurrent neural networks II:

How to fix vanishing gradients. The constant error carousel: LSTM, GRU, and other gated architectures. Practice: sentiment analysis with RNNs.
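
The constant error carousel is the additive cell-state update c_t = f ⊙ c_{t-1} + i ⊙ g: when the forget gate is open and the input gate closed, the state (and its gradient) is carried through unchanged. A single LSTM step in NumPy, with gate biases chosen to saturate the gates and make the effect visible (all sizes and values are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, Wf, Wi, Wo, Wg, bf, bi, bo, bg):
    """One LSTM step; z is the concatenated input [x, h]."""
    z = np.concatenate([x, h])
    f = sigmoid(Wf @ z + bf)          # forget gate
    i = sigmoid(Wi @ z + bi)          # input gate
    o = sigmoid(Wo @ z + bo)          # output gate
    g = np.tanh(Wg @ z + bg)          # candidate cell values
    c_new = f * c + i * g             # the constant error carousel
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
nx, nh = 4, 3
x, h, c = rng.standard_normal(nx), rng.standard_normal(nh), rng.standard_normal(nh)
Ws = [np.zeros((nh, nx + nh)) for _ in range(4)]
bf = np.full(nh, 10.0)                # forget gate saturated open  (f ≈ 1)
bi = np.full(nh, -10.0)               # input gate saturated closed (i ≈ 0)
bo, bg = np.zeros(nh), np.zeros(nh)

_, c_new = lstm_step(x, h, c, *Ws, bf, bi, bo, bg)
print(np.allclose(c_new, c, atol=1e-3))   # → True: the state is carried over
```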

Session 10

Autoencoders:

Autoencoders. Sparse autoencoders, regularisation, denoising autoencoders. Deconvolution and convolutional autoencoders. Practice: autoencoders.
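
A linear autoencoder trained with gradient descent already shows the core mechanic: compress through a narrow bottleneck, then reconstruct. The 2-D-to-1-D example below (data, sizes, and learning rate are all illustrative) learns to reconstruct points lying near a line:

```python
import numpy as np

rng = np.random.default_rng(0)

# 2-D points near a line: one latent dimension is enough to reconstruct them.
t = rng.standard_normal((200, 1))
X = np.hstack([t, 2 * t]) + 0.05 * rng.standard_normal((200, 2))

We = rng.standard_normal((2, 1)) * 0.1     # encoder weights: 2 -> 1
Wd = rng.standard_normal((1, 2)) * 0.1     # decoder weights: 1 -> 2

def loss(We, Wd):
    return np.mean((X - X @ We @ Wd) ** 2)  # reconstruction error

lr, first = 0.01, loss(We, Wd)
for _ in range(500):
    Z = X @ We                              # encode to the bottleneck
    dXhat = 2 * (Z @ Wd - X) / X.size       # gradient of the mean squared error
    dWd = Z.T @ dXhat                       # backprop through the decoder
    dWe = X.T @ (dXhat @ Wd.T)              # ... and through the encoder
    Wd -= lr * dWd
    We -= lr * dWe

print(first, loss(We, Wd))                  # reconstruction error drops
```

A linear bottleneck like this recovers the principal subspace of the data; nonlinear activations, sparsity penalties, and input corruption (denoising) extend the same template.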

Session 11

Generative adversarial networks:

Generative models and neural networks. Types of generative models. Generative adversarial networks: the basic idea, DCGAN, adversarial autoencoders (AAE), modern applications. Practice: AAE on MNIST.
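
A key piece of GAN theory (from the original GAN paper) is that for a fixed generator, the optimal discriminator is D*(x) = p_data(x) / (p_data(x) + p_gen(x)). This is easy to check in closed form when both densities are known, e.g. two unit Gaussians (an illustrative choice):

```python
import numpy as np

def gauss_pdf(x, mu, sigma=1.0):
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

def optimal_d(x, mu_data=0.0, mu_gen=2.0):
    """Optimal discriminator for fixed densities:
    D*(x) = p_data(x) / (p_data(x) + p_gen(x))."""
    pd = gauss_pdf(x, mu_data)
    pg = gauss_pdf(x, mu_gen)
    return pd / (pd + pg)

print(optimal_d(1.0))   # → 0.5 at the midpoint, where the densities are equal
print(optimal_d(0.0))   # > 0.5: this point looks like real data
```

When the generator matches the data distribution exactly, D* is 0.5 everywhere and the discriminator can do no better than chance, which is the equilibrium GAN training aims for.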

Session 12

Deep reinforcement learning:

Reinforcement learning. Multi-armed bandits. Markov decision processes, the Bellman equations, policy iteration methods. Practice: multi-armed bandits.
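
The simplest bandit strategy covered in practice is epsilon-greedy: explore a random arm with probability eps, otherwise exploit the arm with the best running estimate. A NumPy sketch with Bernoulli arms (the arm probabilities are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def eps_greedy_bandit(means, steps=5000, eps=0.1):
    k = len(means)
    Q = np.zeros(k)          # running value estimates
    n = np.zeros(k)          # pull counts
    for _ in range(steps):
        if rng.random() < eps:
            a = int(rng.integers(k))          # explore
        else:
            a = int(np.argmax(Q))             # exploit
        r = float(rng.random() < means[a])    # Bernoulli reward
        n[a] += 1
        Q[a] += (r - Q[a]) / n[a]             # incremental mean update
    return Q

Q = eps_greedy_bandit([0.1, 0.5, 0.9])
print(np.argmax(Q))          # the best arm (index 2) is identified
```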

Session 13

Deep reinforcement learning II:

TD learning and Q-learning. Reinforcement learning with neural networks: DQN and its tricks (double DQN, experience replay, etc.). Policy gradient and actor-critic algorithms. Practice: OpenAI Gym.
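
DQN is tabular Q-learning with the table replaced by a network; the update itself is the off-policy TD rule Q(s,a) ← Q(s,a) + α(r + γ max_a' Q(s',a') − Q(s,a)). A sketch on a toy deterministic chain MDP (the environment and hyperparameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# A 5-state chain: move left/right, reward 1 for reaching the last state.
N_STATES, LEFT, RIGHT = 5, 0, 1
GOAL = N_STATES - 1

def step(s, a):
    s2 = max(s - 1, 0) if a == LEFT else min(s + 1, GOAL)
    return s2, float(s2 == GOAL), s2 == GOAL   # next state, reward, done

Q = np.zeros((N_STATES, 2))
alpha, gamma, eps = 0.5, 0.9, 0.3
for _ in range(500):                           # episodes
    s = 0
    for _ in range(50):                        # per-episode step limit
        a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        # Q-learning: off-policy TD update towards the greedy target.
        target = r + gamma * (0.0 if done else Q[s2].max())
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2
        if done:
            break

print(np.argmax(Q, axis=1))   # greedy policy: RIGHT in every non-terminal state
```

DQN's tricks (experience replay, a frozen target network, double DQN's decoupled argmax) exist to stabilise exactly this update when Q is a neural network rather than a table.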

Session 14

Bayesian methods and neural networks:

Neuro-Bayesian methods. Variational autoencoders. A Bayesian view of dropout, and dropout in RNNs. Practice: generating handwritten digits with a variational autoencoder.
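
Two VAE ingredients have tidy closed forms: the KL term of the loss for a diagonal Gaussian posterior against a standard normal prior, and the reparameterisation trick that keeps sampling differentiable. Both in NumPy (the latent dimension below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_to_standard_normal(mu, logvar):
    """Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over dimensions:
    0.5 * sum(sigma^2 + mu^2 - 1 - log sigma^2)."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps with eps ~ N(0, I), so gradients
    flow to mu and logvar through a deterministic transformation."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

mu, logvar = np.zeros(4), np.zeros(4)
print(kl_to_standard_normal(mu, logvar))    # → 0.0: posterior equals the prior
print(reparameterize(mu, logvar).shape)     # (4,)
```

The full VAE objective is reconstruction error plus this KL term; the encoder outputs mu and logvar, and the decoder consumes the reparameterised sample z.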

Session 15

Final test:

Putting everything together on a real-world problem.
