NEURAL NETWORKS AND DEEP LEARNING
SERGEY NIKOLENKO

HARBOUR.SPACE

We offer innovative university degrees taught in English by industry leaders from around the world, aimed at giving our students meaningful and creatively satisfying top-level professional futures. We think the future is bright if you make it so.

Ten years ago, machine learning went through a revolution. While neural networks are one of the oldest tools in artificial intelligence, nobody was able to train deep architectures efficiently until the mid-2000s. After the breakthrough results of the groups of Geoffrey Hinton and Yoshua Bengio, however, deep neural architectures quickly outperformed the state of the art in image processing, speech recognition, and natural language processing, and by now they essentially define the modern state of machine learning in many domains, from face recognition and self-driving cars to playing Go. In this course, we will see what makes modern neural networks so powerful, learn to train them properly, go through the most important architectures, and, best of all, learn to implement all of these ideas in code with standard libraries such as TensorFlow and Keras.

ABOUT SERGEY

Sergey Nikolenko is a computer scientist with broad experience in machine learning and data analysis, algorithm design and analysis, theoretical computer science, and algebra. He graduated from St. Petersburg State University in 2005, majoring in algebra (Chevalley groups), and earned his Ph.D. at the Steklov Mathematical Institute at St. Petersburg in 2009 in theoretical computer science (circuit complexity and theoretical cryptography). Since then, Dr. Nikolenko has focused on machine learning and probabilistic modeling, producing theoretical results and working on practical projects for industry. He is currently employed at the Steklov Mathematical Institute at St. Petersburg and the Higher School of Economics at St. Petersburg. Dr. Nikolenko has more than 100 publications, including papers in top computer science journals and conferences, as well as several books.

His research interests include:

• Machine learning: probabilistic graphical models, recommender systems, topic modeling

• Algorithms for networking: competitive analysis, FIB optimization

• Bioinformatics: processing mass-spectrometry data, genome assembly

• Proof theory, automated reasoning, computational complexity, circuit complexity

• Algebra (Chevalley groups), algebraic geometry (motives).

WHAT YOU WILL LEARN

• Learn classical and modern neural network architectures

• Learn how to train various deep neural architectures

• Understand a wide variety of neural architectures suited for different tasks

• Learn to implement these ideas in standard neural network libraries

SKILLS:

• Machine learning

• Algorithms for networking

• Bioinformatics

• Mathematical modeling

• Python


DATE: 7 Nov - 24 Nov, 2017

DURATION: 3 Weeks

LECTURES: 3 Hours per day

LANGUAGE: English

LOCATION: Barcelona, Harbour.Space Campus

COURSE TYPE: Offline


@snikolenko


COURSE OUTLINE

Session 1

Neural network basics. The perceptron:

Neural networks: history and basic idea. Relationship between biology and mathematics. The perceptron: basic construction, training, activation functions. Practice: intro to TensorFlow and Keras.
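
As a taste of the first practice session, here is a minimal sketch of a perceptron-style classifier in Keras, the library named in the course description: a single dense unit with a sigmoid activation trained by plain SGD. The toy dataset and hyperparameters are illustrative assumptions, not course material.

```python
# Perceptron-style classifier: one dense unit, sigmoid activation, plain SGD.
import numpy as np
from tensorflow import keras

# Toy data: 2-D points labeled by which side of the line x1 + x2 = 0 they fall on.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("float32")

model = keras.Sequential([
    keras.layers.Dense(1, activation="sigmoid", input_shape=(2,)),
])
model.compile(optimizer="sgd", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=30, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy] on the training set
```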

Session 2

Feedforward neural networks:

Feedforward neural networks. Gradient descent basics. Computation graph and computing gradients on the computation graph (backpropagation). Why deep learning is hard. Practice: a feedforward neural network on the MNIST dataset.
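
A minimal sketch of the kind of MNIST experiment this session builds toward, using the Keras API; the layer sizes, optimizer, and number of epochs are arbitrary illustrative choices. Calling fit() is what runs backpropagation over the computation graph that compile() sets up.

```python
# Feedforward network on MNIST: flatten 28x28 images to 784-dim vectors,
# one hidden ReLU layer, softmax output over the ten digit classes.
from tensorflow import keras

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

model = keras.Sequential([
    keras.layers.Dense(256, activation="relu", input_shape=(784,)),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="sgd",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=128, validation_split=0.1)
print(model.evaluate(x_test, y_test, verbose=0))  # [test loss, test accuracy]
```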

Session 3

Optimization in neural networks:

Gradient descent and its problems. Nesterov’s momentum. Second order methods. Adaptive methods of gradient descent: Adagrad, Adadelta, Adam. Practice: comparing gradient descent variations.
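
The practice comparison could be set up along these lines: train the same small model with several of the optimizers listed above and compare the loss after a fixed number of epochs. This is a rough sketch; learning rates and epoch counts are illustrative, not tuned.

```python
# Compare SGD, Nesterov momentum, Adagrad, Adadelta, and Adam on one model.
from tensorflow import keras

(x_train, y_train), _ = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

def make_model():
    return keras.Sequential([
        keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        keras.layers.Dense(10, activation="softmax"),
    ])

optimizers = {
    "sgd":      keras.optimizers.SGD(learning_rate=0.1),
    "nesterov": keras.optimizers.SGD(learning_rate=0.1, momentum=0.9, nesterov=True),
    "adagrad":  keras.optimizers.Adagrad(),
    "adadelta": keras.optimizers.Adadelta(),
    "adam":     keras.optimizers.Adam(),
}

for name, opt in optimizers.items():
    model = make_model()
    model.compile(optimizer=opt, loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    history = model.fit(x_train, y_train, epochs=3, batch_size=128, verbose=0)
    print(f"{name:10s} final training loss: {history.history['loss'][-1]:.4f}")
```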

Session 4

Regularization in neural networks:

Regularization: L1, L2, early stopping. Dropout. Practice: comparing regularizers.
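
One possible skeleton for the regularizer comparison: the same architecture trained with no regularization, with L2 weight decay, and with dropout, each under early stopping on validation loss. The regularization strengths and patience value are assumptions for illustration only.

```python
# Compare no regularization, L2 weight decay, and dropout with early stopping.
from tensorflow import keras
from tensorflow.keras import layers, regularizers

(x_train, y_train), _ = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

def make_model(l2=0.0, dropout=0.0):
    reg = regularizers.l2(l2) if l2 > 0 else None
    model = keras.Sequential([
        layers.Dense(256, activation="relu", input_shape=(784,),
                     kernel_regularizer=reg),
        layers.Dropout(dropout),
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Early stopping: halt once validation loss has not improved for two epochs.
early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=2,
                                           restore_best_weights=True)

configs = [("baseline", {}), ("l2", {"l2": 1e-4}), ("dropout", {"dropout": 0.5})]
for name, kwargs in configs:
    model = make_model(**kwargs)
    history = model.fit(x_train, y_train, epochs=20, batch_size=128,
                        validation_split=0.1, callbacks=[early_stop], verbose=0)
    print(f"{name:10s} best validation loss: {min(history.history['val_loss']):.4f}")
```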

