Best Practice Guide for MPI+OmpSs Programming
The latest in the series of INTERTWinE Best Practice Guides has just been published: the Best Practice Guide for Writing MPI + OmpSs Interoperable Programs.
OmpSs is a task-based parallel programming model developed at Barcelona Supercomputing Center. It also serves as a test bench for the OpenMP programming model, with the aim of improving its tasking support. In particular, our objective is to extend OpenMP with new directives, clauses and semantics to support asynchronous parallelism.
OmpSs uses compiler directives (pragma annotations) to express the concurrency of the application. Programmers create tasks through task-generating constructs and guarantee data-race-free programs through synchronization mechanisms (e.g. dependences, taskwaits, atomics, criticals). A task is the smallest unit of work: a specific instance of executable code together with its associated data. Dependences let the user express the data flow of the program, so that at runtime this information can be used to determine whether the parallel execution of two tasks may cause a data race. Other synchronization constructs, such as critical or atomic, guarantee correct access to shared variables when expressing full dependences is not needed.
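As a minimal illustration (not taken from the guide), the following sketch shows two OmpSs tasks connected by a dependence on a single variable and synchronized with a taskwait. It assumes the OmpSs-style in()/out() dependence clauses and the Mercurium compiler.

#include <stdio.h>

int main(void)
{
    int x = 0;

    /* Producer task: writes x, expressed with an out() dependence. */
    #pragma omp task out(x)
    x = 42;

    /* Consumer task: the in(x) dependence makes it wait for the producer. */
    #pragma omp task in(x)
    printf("x = %d\n", x);

    /* Wait for all previously created tasks before returning. */
    #pragma omp taskwait
    return 0;
}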
Since its appearance in 1994, MPI has been one of the most widespread programming models for distributed-memory environments. The API has evolved over the years to include more functionality and to take new hardware architectures into account. The standard defines a set of library routines for writing portable message-passing programs that usually follow the Single Program Multiple Data (SPMD) execution model, but it also supports the Multiple Program Multiple Data (MPMD) execution model.
MPI programs run as multiple processes, each executing its own code and using the MPI communication primitives to explicitly exchange data with other processes and to synchronize sections of code running in parallel. Each process executes in its own memory address space.
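The following generic sketch (again, not an excerpt from the guide) shows the usual SPMD structure of an MPI program: every process runs the same code, queries its rank, and ranks 0 and 1 exchange a single integer with point-to-point calls.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2) {
        int value;
        if (rank == 1) {
            /* Rank 1 sends a value to rank 0. */
            value = 7;
            MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        } else if (rank == 0) {
            /* Rank 0 receives it and prints the result. */
            MPI_Recv(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 0 received %d from rank 1\n", value);
        }
    }

    MPI_Finalize();
    return 0;
}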
As also discussed in the “Best Practice Guide to Hybrid MPI + OpenMP Programming”, there are two principal reasons why a hybrid version of an application, combining MPI with a shared-memory programming model, might be beneficial: the first is to reduce the memory requirements of the application; the second is to improve its performance. The guide referenced above also provides more details about the advantages and disadvantages of programming with this combination of models. In addition, this document can be considered an extension of the MPI + OpenMP Best Practice Guide in the aspects concerning the tasking model, as most of the issues it discusses also apply to OpenMP.
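To give a flavour of the combination, here is a hedged sketch, under the simplifying assumption that MPI calls are made only outside tasks (so MPI_THREAD_FUNNELED support suffices): each MPI process parallelises its local work with OmpSs tasks and then combines the per-process results with an MPI reduction. The guide itself treats the interoperability issues in much more detail.

#include <mpi.h>
#include <stdio.h>

#define N     1000
#define CHUNK 250

int main(int argc, char *argv[])
{
    int provided;
    /* MPI_THREAD_FUNNELED is enough here: MPI is called only outside tasks. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    long partial[N / CHUNK] = {0};

    /* One OmpSs task per chunk of the local computation. */
    for (int c = 0; c < N / CHUNK; c++) {
        #pragma omp task out(partial[c]) firstprivate(c)
        {
            long sum = 0;
            for (int i = c * CHUNK; i < (c + 1) * CHUNK; i++)
                sum += i;
            partial[c] = sum;
        }
    }

    /* All local tasks must finish before the communication step. */
    #pragma omp taskwait

    long local = 0, global = 0;
    for (int c = 0; c < N / CHUNK; c++)
        local += partial[c];

    MPI_Reduce(&local, &global, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("global sum = %ld\n", global);

    MPI_Finalize();
    return 0;
}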
Read the Best Practice Guide for Writing MPI + OmpSs Interoperable Programs now.