COURSE OUTLINE
Session 1
Word embeddings
Word representations in Machine Learning. Classical approaches. Embeddings. Word2vec.
Session 2
Text classification tasks
Text classification using Machine Learning and Deep Learning.
Session 3
Recurrent neural networks, seq2seq
Sequential modeling. Encoder-decoder architecture.
Session 4
Convolutional neural networks in text classification
The CNN approach to context analysis. Similarities to and differences from RNNs.
Session 5
Attention in Encoder-Decoder architecture
The encoder-decoder bottleneck. The attention mechanism.
Session 6
Transformers in NLP
The "Attention Is All You Need" architecture. Transformers as another approach to NLP tasks.
Session 7
Unsupervised approaches in Deep Learning
Dimensionality reduction, denoising and data transformation using autoencoders. Similarities to PCA.
Session 8
Unsupervised Deep Learning, part 2
Variational autoencoders. Connections to generative adversarial networks.
Session 9
Midterm test
Session 10
Introduction to Reinforcement Learning
The Reinforcement Learning problem statement. Stochastic and black-box optimisation.
Session 11
Value-based methods in RL
Discounted reward in RL. Value iteration. Policy iteration.
Session 12
Model-free learning: Q-learning, SARSA
On-policy and off-policy algorithms. N-step algorithms.
Session 13
Approximate Q-learning
Value function approximation using complex functions and neural networks. DQN. Experience replay.
Session 14
Policy gradient methods
The policy gradient. The REINFORCE algorithm. Advantage actor-critic (A2C).
Session 15
Final exam
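
Some of the topics above reduce to very small pieces of code. The skip-gram framing behind word2vec (Session 1) is just "each word predicts the words in a window around it"; a minimal sketch of how the training pairs are generated, with an invented toy corpus and window size (not course material):

```python
def skipgram_pairs(tokens, window=2):
    """Yield (center, context) training pairs for the skip-gram model."""
    pairs = []
    for i, center in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:  # every neighbour except the center word itself
                pairs.append((center, tokens[j]))
    return pairs

# Toy corpus for illustration only.
corpus = "the cat sat on the mat".split()
pairs = skipgram_pairs(corpus, window=1)
```

Training then fits center and context vectors so that observed pairs score higher than random ones (negative sampling); libraries such as gensim implement the full pipeline.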
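
Value iteration from Session 11 also fits in a few lines: repeatedly apply the Bellman optimality backup until the values stop changing, then read off the greedy policy. The two-state MDP below is an invented toy example, not course material:

```python
GAMMA = 0.9  # discount factor for the discounted reward

# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "s0": {"stay": [(1.0, "s0", 0.0)],
           "go":   [(1.0, "s1", 1.0)]},
    "s1": {"stay": [(1.0, "s1", 2.0)],
           "go":   [(1.0, "s0", 0.0)]},
}

def value_iteration(transitions, gamma=GAMMA, tol=1e-8):
    """Return (state values, greedy policy) via Bellman optimality backups."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            # Q-value of each action, then take the best (the backup).
            q = [sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                 for outcomes in actions.values()]
            best = max(q)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    # Greedy policy with respect to the converged values.
    policy = {s: max(actions, key=lambda a: sum(
                  p * (r + gamma * V[s2]) for p, s2, r in actions[a]))
              for s, actions in transitions.items()}
    return V, policy

V, policy = value_iteration(transitions)
```

On this MDP the backups converge to V(s1) = 2 / (1 - 0.9) = 20 and V(s0) = 1 + 0.9 · 20 = 19, with the greedy policy choosing "go" in s0 and "stay" in s1.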
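
The on-policy versus off-policy distinction from Session 12 shows up directly in the update rules: Q-learning bootstraps from the greedy action in the next state, while SARSA bootstraps from the action the agent actually takes there. A sketch with invented constants and a toy tabular Q-function:

```python
ALPHA, GAMMA = 0.5, 0.9  # illustrative learning rate and discount

def q_learning_update(Q, s, a, r, s2):
    """Off-policy: the target uses the greedy (max) action in s2."""
    target = r + GAMMA * max(Q[s2].values())
    Q[s][a] += ALPHA * (target - Q[s][a])

def sarsa_update(Q, s, a, r, s2, a2):
    """On-policy: the target uses the action a2 actually taken in s2."""
    target = r + GAMMA * Q[s2][a2]
    Q[s][a] += ALPHA * (target - Q[s][a])
```

With Q(s2, x) = 1 and Q(s2, y) = 3, the two rules give different targets for the same transition whenever the behaviour policy does not pick the greedy action y, which is exactly what makes Q-learning off-policy.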