PARALLEL & DISTRIBUTED + HIGH PERFORMANCE COMPUTING
DALVAN GRIEBLER

We offer innovative university degrees taught in English by industry leaders from around the world, aimed at giving our students meaningful and creatively satisfying top-level professional futures. We think the future is bright if you make it so.


Dalvan Griebler holds a Master's degree in Computer Science from the Pontifical Catholic University of Rio Grande do Sul (PUCRS, 2012) in the area of Parallel and Distributed Processing (PDP), a Ph.D. in Informatics from the Università di Pisa (UNIPI, 2016) in the area of Parallel Programming Models, and a Ph.D. in Computer Science from PUCRS (2016) in the area of PDP.

He is currently a professor and postdoctoral fellow in the Computer Science Graduate Programme (PPGCC) at PUCRS, an associate researcher in the Parallel Applications Modeling Group (GMAP), and a professor at the Três de Maio Educational Society (SETREM) in Brazil.

He was the founder and is currently the coordinator of the Laboratory of Advanced Researches on Cloud Computing (LARCC) at SETREM. He also carries out several other research activities, serving as a reviewer and chief editor for international journals, as a programme committee member for international conferences, and as an organizer of conferences and workshops. He recently started lecturing two courses (Structured Parallel Programming and Heterogeneous Parallel Programming) in the Master's and Ph.D. programmes in Computer Science at PUCRS.
He has served as a referee for research projects and dissertations on several occasions, and has been a keynote speaker and a lecturer of short courses at Brazilian conferences.

Dalvan Griebler's research interests include: High-Performance Computing; Analysis of System Performance; High-Level Abstractions for Parallelism Exploitation; Parallel Programming; Stream Processing; Source-to-Source Transformations; Compiler Design; Domain-Specific Languages; Automatic Parallel Code Generation; Cloud Computing; Cloud Management Platforms; and Computer Networks.

In this course, students will:

• Learn about HPC systems and applications

• Program in different paradigms (shared- and distributed-memory architectures)

• Identify performance bottlenecks

• Use a structured parallel programming approach to express parallelism

• Learn and use the mainstream, well-established parallel programming libraries/frameworks

• Learn new and emerging tools for HPC systems and applications

SKILLS:

- Computer Science

- Linux

- Parallel Programming

- C++

- Parallel Computing

- Java

- Distributed Systems

- High Performance Computing


DATE: 21 May - 8 Jun, 2018 

DURATION: 3 Weeks

LECTURES: 3 Hours per day

LANGUAGE: English

LOCATION: Barcelona, Harbour.Space Campus

COURSE TYPE: Offline


COURSE OUTLINE

Session 1

Introduction to High-Performance Architectures and Applications

Current high-performance architectures, such as clusters, multicores, and accelerators. Introduction to real-world applications that require HPC, and the exascale challenge.

Session 2

Performance Analysis and Evaluation

Performance metrics, performance tracing and analysis. Characterisation of application performance. Bottlenecks and critical issues for performance scaling. Simple alternatives for visualising and identifying performance problems.
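To make the metrics from this session concrete, here is a minimal sketch (illustrative only, not course material) that times an array sum sequentially and in parallel with std::thread, then reports speedup S(p) = T1/Tp and efficiency E = S(p)/p. The workload and all names are invented for the example:

    #include <chrono>
    #include <iostream>
    #include <thread>
    #include <vector>

    // Illustrative workload: sum a slice of a vector.
    static long long partial_sum(const std::vector<int>& v, size_t begin, size_t end) {
        long long s = 0;
        for (size_t i = begin; i < end; ++i) s += v[i];
        return s;
    }

    int main() {
        const std::vector<int> data(10'000'000, 1);
        unsigned p = std::thread::hardware_concurrency();
        if (p == 0) p = 4;  // fallback if the runtime cannot report core count

        auto t0 = std::chrono::steady_clock::now();
        volatile long long seq = partial_sum(data, 0, data.size());  // sequential baseline: T1
        (void)seq;
        auto t1 = std::chrono::steady_clock::now();

        // Parallel version: split the range into p contiguous chunks.
        std::vector<std::thread> workers;
        std::vector<long long> parts(p, 0);
        const size_t chunk = data.size() / p;
        for (unsigned i = 0; i < p; ++i) {
            size_t b = i * chunk, e = (i == p - 1) ? data.size() : b + chunk;
            workers.emplace_back([&, i, b, e] { parts[i] = partial_sum(data, b, e); });
        }
        for (auto& w : workers) w.join();
        auto t2 = std::chrono::steady_clock::now();

        double T1 = std::chrono::duration<double>(t1 - t0).count();
        double Tp = std::chrono::duration<double>(t2 - t1).count();
        double speedup = T1 / Tp;  // S(p) = T1 / Tp
        std::cout << "speedup=" << speedup << " efficiency=" << speedup / p << "\n";
    }

Efficiency close to 1.0 means the cores are well used; a flat or falling speedup as p grows is exactly the kind of scaling bottleneck this session teaches students to diagnose.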

Session 3

Structured Parallel Programming

Algorithmic Skeletons and Parallel Design Patterns. Separation of concerns, patterns and strategies for parallel programming.
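As one concrete illustration of an algorithmic skeleton, below is a minimal Farm sketch in plain C++ (an assumption for illustration, not a real library API): a pool of workers consumes tasks from a shared queue, and the pattern hides the threads, locking, and scheduling behind a single reusable function, which is the separation of concerns this session is about:

    #include <functional>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    // Minimal Farm skeleton: apply `worker` to every task in `tasks`
    // using `nworkers` threads. Names here are illustrative.
    template <typename T>
    void farm(std::queue<T> tasks, std::function<void(T)> worker, unsigned nworkers) {
        std::mutex m;
        auto pool_body = [&] {
            for (;;) {
                T item;
                {
                    std::lock_guard<std::mutex> lk(m);  // protect the shared queue
                    if (tasks.empty()) return;          // no more tasks: worker exits
                    item = tasks.front();
                    tasks.pop();
                }
                worker(item);                           // compute outside the lock
            }
        };
        std::vector<std::thread> pool;
        for (unsigned i = 0; i < nworkers; ++i) pool.emplace_back(pool_body);
        for (auto& t : pool) t.join();
    }

    int main() {
        std::queue<int> tasks;
        for (int i = 0; i < 16; ++i) tasks.push(i);
        // The caller only states *what* to compute; the skeleton owns *how*.
        farm<int>(tasks, [](int x) { std::cout << x * x << "\n"; }, 4);
    }

Production skeleton frameworks (e.g., Intel TBB or FastFlow) provide Farm, Pipeline, and Map patterns with the same philosophy but far more sophisticated scheduling.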

Session 4

Parallel Programming for Multi-Core

Introduction to the shared-memory parallel programming paradigm: threads; lock and lock-free synchronisation mechanisms; race conditions; performance optimisations with load balancing and scheduling, cache efficiency, and memory locality; and state-of-the-art libraries/frameworks.
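The race conditions covered in this session can be reproduced in a few lines. In this hedged sketch (example code, not course material), two threads increment a plain counter (a data race: increments are lost) and a std::atomic counter (the lock-free fix):

    #include <atomic>
    #include <iostream>
    #include <thread>

    int main() {
        long long unsafe = 0;            // plain variable: a data race (undefined behaviour)
        std::atomic<long long> safe{0};  // atomic variable: each increment is indivisible

        auto body = [&] {
            for (int i = 0; i < 1'000'000; ++i) {
                ++unsafe;  // read-modify-write is not atomic: updates can be lost
                ++safe;    // atomic fetch-add: no race
            }
        };
        std::thread a(body), b(body);
        a.join();
        b.join();

        // `unsafe` typically prints less than 2'000'000; `safe` is always exactly 2'000'000.
        std::cout << "unsafe=" << unsafe << " safe=" << safe << "\n";
    }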

PARALLEL & DISTRIBUTED + HIGH PERFORMANCE COMPUTING


This course aims to provide a background in high-performance computing (HPC) for shared- and distributed-memory parallel programming environments. Students will learn about current HPC architecture environments (clusters, multicores, and accelerators) and how to program cluster and multicore systems.

They will also learn how to analyze an application's performance and identify opportunities for accelerating their code with respect to memory, CPU, network, and I/O resources. The course will introduce the main structured parallel programming strategies (Master/Worker, Farm, Pipeline, and MapReduce) and frameworks/libraries for expressing parallelism (MPI, Intel TBB, and OpenMP).
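As a small taste of one of these frameworks, the sketch below (an assumption for illustration, compiled with an OpenMP flag such as -fopenmp) parallelises a loop with OpenMP's reduction clause, which gives each thread a private copy of `sum` and combines the copies at the end:

    #include <cstdio>
    #include <omp.h>

    int main() {
        const int n = 100'000'000;
        double sum = 0.0;

        // OpenMP splits the iterations across threads; the reduction clause
        // avoids a data race on `sum` without any explicit locking.
        #pragma omp parallel for reduction(+ : sum)
        for (int i = 0; i < n; ++i) {
            sum += 1.0 / (i + 1.0);  // harmonic series as a stand-in workload
        }

        std::printf("threads=%d sum=%f\n", omp_get_max_threads(), sum);
    }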

The main challenges around load balancing, message passing, and data races will be addressed. Finally, emerging tools and solutions for HPC will be studied and discussed in an interactive seminar, focusing mainly on big data and data-stream applications.
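For the message-passing side, here is a minimal MPI sketch in the Master/Worker style named above (illustrative only; built with an MPI compiler wrapper such as mpic++ and launched with mpirun): rank 0 sends each worker one task and collects the results.

    #include <cstdio>
    #include <mpi.h>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {
            // Master: send one integer task to each worker, then gather results.
            for (int w = 1; w < size; ++w) {
                int task = 10 * w;
                MPI_Send(&task, 1, MPI_INT, w, 0, MPI_COMM_WORLD);
            }
            for (int w = 1; w < size; ++w) {
                int result;
                MPI_Recv(&result, 1, MPI_INT, w, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                std::printf("master got %d from worker %d\n", result, w);
            }
        } else {
            // Worker: receive a task, compute, and send the answer back.
            int task, result;
            MPI_Recv(&task, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            result = task * task;  // stand-in computation
            MPI_Send(&result, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }
        MPI_Finalize();
    }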
