COURSE OUTLINE
Session 1
Natural Language Processing intro
Main problems in NLP. Text classification and generation. Deep Learning techniques in NLP. Regularization in DL recap. Word Embeddings recap.
Session 2
Convolutional Neural Networks in text classification
CNN approach to context analysis. Similarities to and differences from RNNs.
Session 3
Neural Machine Translation
Machine Translation and Neural Machine Translation. Encoder-Decoder architecture, sequential modeling.
Session 4
Attention in Encoder-Decoder architecture
Encoder-Decoder architecture bottleneck. Attention mechanism. Attention outside NLP.
Session 5
Transformers in NLP
Self-attention mechanism. Transformer architecture overview.
Session 6
Contextual Embeddings
Transformer-based contextual embeddings. ELMo, BERT, GPT-2, XLM overview.
Session 7
Question Answering
Q&A systems. Bi-directional attention flow (BiDAF).
Session 8
Midterm test
NLP open problems. Discussion, section outro.
Session 9
Introduction to Reinforcement Learning
Reinforcement Learning problem statement. Stochastic and black-box optimization.
Session 10
Value-based methods in RL
Discounted reward in RL. Value iteration. Policy iteration.
Session 11
Model-free learning: Q-learning, SARSA
On-policy and off-policy algorithms. N-step algorithms.
Session 12
Approximate Q-learning
Value function approximation with complex functions and neural networks. DQN. Experience replay.
Session 13
Policy gradient methods
Policy gradient. REINFORCE algorithm. Advantage actor-critic.
Session 14
RL outside games
Policy gradient as an optimization approach in different areas. Policy gradient for sequence modeling.
Session 15
Final test