Introduction to MPI Course

One of the critical factors in HPC is parallelism. To solve a problem faster or to handle larger data sets, a task can be divided into several sub-tasks that are executed in parallel. The key to supercomputers is not ultra-powerful microprocessors or expensive, exotic components unlike those we use at home every day, but parallelism itself.

Therefore, our team has created the Introduction to MPI (Message Passing Interface) online course. Its goal is to give anybody an excellent foundation on which to build their knowledge of parallel programming.

What will you learn?

  • Introduction to HPC
  • Parallelism - The PCAM Method
  • Supercomputer architecture: Symmetric multiprocessing (SMP) architecture, Cluster architecture, State-of-the-art architectures
  • Basic MPI: introduction to MPI, parallel programming concepts, the six essential MPI functions (a minimal sketch follows this list)
  • MPI collective operations: Barrier, Gather, Scatter, Reduce, All-Gather, All-To-All, All-Reduce (a second sketch follows this list)
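
Below is a minimal sketch, assuming that "the six essential MPI functions" refers to the commonly cited basic set of MPI_Init, MPI_Comm_size, MPI_Comm_rank, MPI_Send, MPI_Recv and MPI_Finalize; it is an illustration written for this page, not an excerpt from the course itself. Every process other than rank 0 sends its rank number to rank 0, which prints it.

    /* Illustrative sketch (not course material): the six basic MPI functions. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int size, rank;

        MPI_Init(&argc, &argv);                 /* start the MPI runtime        */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes    */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id, 0..size-1 */

        if (rank == 0) {
            for (int src = 1; src < size; src++) {
                int msg;
                MPI_Recv(&msg, 1, MPI_INT, src, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                printf("rank 0 received %d from rank %d\n", msg, src);
            }
        } else {
            MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();                         /* shut down the MPI runtime    */
        return 0;
    }

You could compile and run it with, for example, mpicc hello.c -o hello followed by mpirun -np 4 ./hello.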
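
As a taste of the collective operations listed above, here is a second minimal sketch, again illustrative rather than taken from the course: MPI_Reduce combines one integer from every process with MPI_SUM, and the result lands on rank 0.

    /* Illustrative sketch (not course material): one collective, MPI_Reduce. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, sum = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Every rank contributes its rank number; rank 0 receives the sum. */
        MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum of all ranks = %d\n", sum);

        MPI_Finalize();
        return 0;
    }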

Start the Introduction to MPI Course now! You can download it from here: