API Combinations

A number of key API combinations have been identified for INTERTWinE's interoperability studies.

Ensuring interoperability between every combination of our target APIs (also taking into account CUDA, OpenCL and MKL) would require at least 15 API combinations to be studied. We have therefore prioritised a subset of combinations as the main focus of our work. These combinations represent three classes of programming model:

Shared memory programming models (models that execute within a single node, but not on accelerator devices). We are investigating interoperability between (pairs of) OpenMP, OmpSs and StarPU, and also interoperability of these programming models with parallel libraries such as MKL and MAGMA; a minimal sketch of the library case follows the list below.

  • OpenMP + OmpSs + StarPU
  • PaRSEC + StarPU / OpenMP / OmpSs / MPI
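
As an illustration of the library case, the sketch below shows OpenMP tasks each invoking a multi-threaded MKL routine, with MKL pinned to one thread inside each task so that the two runtimes do not oversubscribe the cores. This is illustrative only: the blocked data layout and the function name block_dgemm are assumptions, not project code.

    #include <stddef.h>
    #include <omp.h>
    #include <mkl.h>

    /* Illustrative sketch: multiply nblocks independent n x n blocks of A by B.
     * Each OpenMP task calls MKL's dgemm; forcing MKL to run sequentially
     * inside the task avoids oversubscribing the cores that the OpenMP
     * runtime already owns. */
    void block_dgemm(const double *A, const double *B, double *C,
                     int n, int nblocks)
    {
        #pragma omp parallel
        #pragma omp single
        for (int b = 0; b < nblocks; b++) {
            #pragma omp task firstprivate(b)
            {
                mkl_set_num_threads_local(1);      /* MKL sequential per task */
                cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                            n, n, n, 1.0,
                            A + (size_t)b * n * n, n,
                            B, n,
                            0.0,
                            C + (size_t)b * n * n, n);
            }
        }
    }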

Distributed programming models (where at least one of the models is a distributed programming model). We are investigating interoperability between pairs of distributed memory programming models (e.g., GASPI + MPI), and between distributed and shared memory programming models (for example MPI / GASPI + OmpSs / StarPU / OpenMP).
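
The simplest case of the first kind is running MPI and GASPI side by side in one program, so that GASPI one-sided communication can be added incrementally to an MPI application. The sketch below illustrates the start-up and shutdown sequence; it assumes a GPI-2 installation built with MPI interoperability support, and error handling is omitted.

    #include <mpi.h>
    #include <GASPI.h>
    #include <stdio.h>

    /* Sketch of GASPI + MPI coexistence in one program (assumes a GPI-2
     * build with MPI interoperability enabled).  MPI is started first,
     * then GASPI; both are shut down in reverse order. */
    int main(int argc, char **argv)
    {
        int mpi_rank;
        gaspi_rank_t gaspi_rank;

        MPI_Init(&argc, &argv);
        gaspi_proc_init(GASPI_BLOCK);          /* blocking initialisation */

        MPI_Comm_rank(MPI_COMM_WORLD, &mpi_rank);
        gaspi_proc_rank(&gaspi_rank);
        printf("MPI rank %d is GASPI rank %u\n", mpi_rank, (unsigned)gaspi_rank);

        /* ... MPI collectives and GASPI one-sided communication ... */

        gaspi_proc_term(GASPI_BLOCK);
        MPI_Finalize();
        return 0;
    }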

Programming models for accelerators (where at least one of the models is an accelerator programming model such as CUDA or OpenCL); a minimal sketch follows the list below.

  • OpenMP / StarPU / OmpSs + OpenCL / CUDA
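
As an illustration of the accelerator class, the fragment below (a hypothetical sketch, not project code) has each OpenMP thread drive its own CUDA stream and cuBLAS handle, so that host threads can submit device work concurrently without sharing CUDA resources.

    #include <omp.h>
    #include <cuda_runtime.h>
    #include <cublas_v2.h>

    /* Sketch: each OpenMP thread owns a CUDA stream and a cuBLAS handle
     * and performs y[chunk] += alpha * x[chunk] on its own slice of two
     * device vectors d_x and d_y of length n (allocated elsewhere). */
    void threaded_axpy(double alpha, const double *d_x, double *d_y, int n)
    {
        #pragma omp parallel
        {
            cudaStream_t stream;
            cublasHandle_t handle;
            cudaStreamCreate(&stream);
            cublasCreate(&handle);
            cublasSetStream(handle, stream);     /* per-thread stream */

            int nthreads = omp_get_num_threads();
            int tid      = omp_get_thread_num();
            int chunk    = (n + nthreads - 1) / nthreads;
            int first    = tid * chunk;
            int count    = (first + chunk <= n) ? chunk
                                                : (n > first ? n - first : 0);

            if (count > 0)
                cublasDaxpy(handle, count, &alpha,
                            d_x + first, 1, d_y + first, 1);

            cudaStreamSynchronize(stream);       /* wait for this thread's work */
            cublasDestroy(handle);
            cudaStreamDestroy(stream);
        }
    }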

Different approaches are being used to investigate the different API combinations. These include the directory cache service (e.g. MPI / GASPI + OmpSs / StarPU) and the resource manager functionality (e.g. PLASMA + OmpSs / StarPU). Others (e.g. MPI + OpenMP) are targeted at the co-design application level (e.g. by iPIC3D and Ludwig), as well as at the API level (with the endpoints proposal).

Further reading

Deliverable D3.1 Initial Requirements Analysis

This initial requirements analysis informed the direction of work at the project outset. It describes the status of the programming models addressed by the project, with particular attention to known interoperability issues already identified by the relevant standards bodies and/or developer teams. It then provides details of known pairwise interoperability issues between programming models, and identifies for each case any areas which INTERTWinE is capable of addressing.

Best Practice Guide for Writing GASPI-MPI Interoperable Programs

A guide for application developers who are considering complementing MPI with a Partitioned Global Address Space (PGAS), combining legacy MPI applications or libraries with (bundled) notified communication in PGAS, or complementing their MPI code with highly multithreaded communication calls.
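
For flavour, the fragment below gives a hedged sketch of the notified-communication pattern the guide discusses: a one-sided gaspi_write_notify on the sender is matched by gaspi_notify_waitsome on the receiver, with no receive call to post. Segment ids, offsets, queue numbers and sizes are placeholders.

    #include <GASPI.h>

    /* Hedged sketch of GASPI notified communication: both ranks call this,
     * each writing nbytes into the partner's segment and then waiting for
     * the partner's matching notification (no receive call is posted).
     * Assumes gaspi_proc_init has been called and that segment 0, of at
     * least 2*nbytes, exists on both ranks: bytes [0,nbytes) hold outgoing
     * data, bytes [nbytes,2*nbytes) receive the partner's data. */
    void exchange(gaspi_rank_t partner, gaspi_size_t nbytes)
    {
        const gaspi_segment_id_t      seg   = 0;
        const gaspi_notification_id_t notif = 0;

        /* One-sided write bundled with notification 'notif' (value 1). */
        gaspi_write_notify(seg, 0 /* local send offset */,
                           partner,
                           seg, nbytes /* remote receive offset */,
                           nbytes, notif, 1, 0 /* queue */, GASPI_BLOCK);
        gaspi_wait(0 /* queue */, GASPI_BLOCK);          /* local completion */

        /* Wait until the partner's write and notification have arrived. */
        gaspi_notification_id_t first;
        gaspi_notification_t    value;
        gaspi_notify_waitsome(seg, notif, 1, &first, GASPI_BLOCK);
        gaspi_notify_reset(seg, first, &value);          /* data now readable */
    }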

Best Practice Guide for Writing MPI + OmpSs Interoperable Programs

OmpSs is a task-based parallel programming model. This Best Practice Guide offers advice on programming hybrid applications using MPI + OmpSs and is aimed at application developers who are considering taking advantage of the multilevel parallelism offered by modern HPC systems using these two programming models (i.e. invoking MPI calls within asynchronous tasks).
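
The fragment below gives a taste of the pattern, written with plain OpenMP task pragmas as a stand-in (OmpSs expresses the same dependences with its in/out clauses): the communication is wrapped in a task whose dependences order it against the tasks that produce and consume the buffers. Names are illustrative, not taken from the guide.

    #include <mpi.h>

    /* Sketch (OpenMP syntax as a stand-in for OmpSs): a halo exchange wrapped
     * in a task.  The depend clauses make the exchange wait for the task that
     * fills 'sendbuf' and make consumers wait until 'recvbuf' is filled.
     * If several such tasks may run concurrently, the MPI library must be
     * initialised with MPI_THREAD_MULTIPLE and blocking calls ordered with
     * care to avoid deadlock. */
    void halo_exchange(double *sendbuf, double *recvbuf, int n,
                       int left, int right, MPI_Comm comm)
    {
        #pragma omp task depend(in: sendbuf[0:n]) depend(out: recvbuf[0:n])
        {
            MPI_Sendrecv(sendbuf, n, MPI_DOUBLE, right, 0,
                         recvbuf, n, MPI_DOUBLE, left,  0,
                         comm, MPI_STATUS_IGNORE);
        }
    }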

Best Practice Guide for Hybrid MPI + OpenMP Programming

Hybrid application programs using MPI + OpenMP are now commonplace on large HPC systems. This Best Practice Guide discusses the motivations for using this combination, as well as the possible downsides. It covers the technical details of thread support in the MPI library, and describes some different styles of MPI + OpenMP program and their relative advantages and disadvantages. It also provides some best practice tips for developers of hybrid MPI + OpenMP applications.
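
A minimal sketch of the most common ("funneled") style is shown below: the requested thread-support level is checked at start-up and all MPI communication is performed by the main thread outside the parallel regions. The example is illustrative rather than drawn from the guide.

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    /* Sketch of the funneled hybrid style: MPI calls are made only by the
     * main thread, outside (or from the master of) OpenMP parallel regions. */
    int main(int argc, char **argv)
    {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        if (provided < MPI_THREAD_FUNNELED) {
            fprintf(stderr, "insufficient MPI thread support\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        double local = 0.0, global = 0.0;

        /* Threaded computation inside the node ... */
        #pragma omp parallel for reduction(+:local)
        for (int i = 0; i < 1000000; i++)
            local += 1.0 / (1.0 + (double)i);

        /* ... communication by the main thread only. */
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        MPI_Finalize();
        return 0;
    }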

Best Practice Guide for Writing OpenMP/OmpSs/StarPU + Multi-threaded Libraries Interoperable Programs

This Best Practice Guide is aimed at application developers who plan to exploit task-parallelism from within a task-based runtime system concurrently with the use of a multi-threaded numerical library to execute the runtime tasks. In particular, the document pays special attention to the interoperability issues that arise due to oversubscription when exploiting thread-level parallelism simultaneously from within the task-based runtimes OpenMP/OmpSs/StarPU and the MKL numerical library from Intel.
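
As a complement to the task-level sketch earlier on this page, the fragment below (names are illustrative, and the omp_in_parallel() check covers only the OpenMP case) shows a small guard around a library call: inside a parallel region MKL is kept sequential, from serial code it may use every core, and the caller's previous setting is restored afterwards.

    #include <omp.h>
    #include <mkl.h>

    /* Illustrative guard around a multi-threaded MKL call: keep MKL
     * sequential when called from inside an OpenMP worker, but let it
     * use all cores when called from serial code. */
    void guarded_dgemm(const double *A, const double *B, double *C, int n)
    {
        int want = omp_in_parallel() ? 1 : omp_get_num_procs();
        int prev = mkl_set_num_threads_local(want);   /* returns old setting */

        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0, A, n, B, n, 0.0, C, n);

        mkl_set_num_threads_local(prev);              /* restore (0 = global) */
    }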
