Migrating legacy MPI code to GASPI-SHAN
Since its appearance in 1994, MPI has been one of the most widespread programming models for distributed-memory environments. The API has evolved over the years to include more functionality and to take new hardware architectures into account. The standard defines a set of library routines for writing portable message-passing programs that usually follow the Single Program Multiple Data (SPMD) execution model, but it also supports the Multiple Program Multiple Data (MPMD) execution model.
MPI programs run multiple processes, each executing its own code and using the MPI communication primitives to explicitly exchange data between nodes and to synchronize sections of code running in parallel. Each process executes in its own memory address space. While the MPI standard offers a rich set of features, alternative models have evolved to provide improved scalability or better support for modern hardware architectures. One of these models is the Global Address Space Programming Interface (GASPI).
The GASPI standard promotes the use of one-sided notified communication, where the initiator has all the information needed to perform the data movement. GASPI enables a process to put data into or get data from remote memory without engaging the corresponding remote process and without a synchronization point for every communication request.
GASPI provides weak synchronization primitives which update a notification on the remote side. The corresponding notification semantics is complemented with routines that wait for the update of a single notification or of a set of notifications. Even though GASPI has been designed to interoperate with MPI (in order to allow an incremental porting of applications), GASPI generally assumes a hybrid application, where threads (or tasks) handle intra-node computation and GASPI is leveraged for inter-node communication. As a consequence, there is no shared-memory implementation of GASPI today. Migrating ‘flat’ MPI legacy code (where MPI is also used within shared memory) towards GASPI has therefore rarely been successful. To overcome this shortcoming, the Intertwine project has developed a GASPI shared memory extension (GASPI-SHAred Notifications, GASPI-SHAN), which extends the notified communication of GASPI into shared memory windows.