INTERTWinE focuses on six key programming APIs, and nine specific combinations of these. Each API is represented by at least one project partner with extensive experience in API design and implementation.
The target APIs are: MPI, OpenMP, StarPU, GASPI, OmpSs and PaRSEC. Read more below about specific issues with each of these and how the INTERTWinE consortium has been contributing to efforts to address them.
MPI: Interaction with all non-MPI components other than POSIX-like threads is implementation-dependent, and with the current interface the highest level of thread safety (MPI_THREAD_MULTIPLE) is difficult to implement without compromising performance. The proposed MPI endpoints extension, a topic under discussion within the MPI Forum, addresses this: it introduces a new communicator creation function that creates a communicator with multiple ranks for each MPI process in a parent communicator, so that individual threads can communicate as distinct ranks. The INTERTWinE consortium has actively contributed to the endpoints proposal.
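The endpoints idea can be illustrated with the following sketch, based on the draft interface discussed in the MPI Forum. Note that the function name and signature come from the proposal and are not part of the current MPI standard, so this is illustrative only:

```c
/* Sketch of the proposed endpoints interface (NOT standard MPI):
 * each process requests several ranks ("endpoints") in the new
 * communicator, one per thread, so threads can communicate
 * independently instead of funnelling through a single rank. */
MPI_Comm ep_comms[NUM_THREADS];

/* Collective over the parent communicator: each process contributes
 * NUM_THREADS endpoints and receives one handle per endpoint. */
MPI_Comm_create_endpoints(MPI_COMM_WORLD, NUM_THREADS,
                          MPI_INFO_NULL, ep_comms);

#pragma omp parallel num_threads(NUM_THREADS)
{
    int tid = omp_get_thread_num();
    /* Each thread attaches to its own endpoint communicator
     * and obtains its own rank within it. */
    int my_rank;
    MPI_Comm_rank(ep_comms[tid], &my_rank);
    /* ... the thread now communicates on ep_comms[tid]
       as an independent rank ... */
}
```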
OpenMP: OpenMP is a parallel application programming interface targeting shared-memory (symmetric multiprocessor) systems, which may also include accelerator devices such as GPUs, DSPs or MIC architectures. OpenMP programs may in addition call other parallel libraries (e.g. Intel MKL), and coordinating the underlying computational resources between these runtimes strongly motivates use of the Resource Manager. INTERTWinE will present the Resource Manager component to the OpenMP community. The expected impact is not on the language itself, but rather on runtime library (RTL) implementations.
StarPU: StarPU is a runtime system which enables programmers to exploit both CPUs and accelerator units. StarPU supports resolving data dependencies over MPI in distributed sessions. Each participating process is expected to annotate data with node ownership and to submit the same sequence of tasks. Each task is by default executed on the node owning the data it accesses in 'write' mode. In the scope of INTERTWinE, strategies enabling fully multi-threaded processing of incoming messages - such as 'endpoints' (MPI) or 'notifications' (GASPI) - will be tested. StarPU will interface with the INTERTWinE resource manager and directory cache service.
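The ownership-and-replicated-submission scheme can be sketched with StarPU's public MPI interface. This is a sketch, assuming a codelet `scale_cl` and a vector `vec` of length `N` defined elsewhere; error handling is omitted:

```c
/* Every MPI process executes this same code (replicated submission). */
starpu_mpi_init(&argc, &argv, 1);          /* also initialises MPI */

starpu_data_handle_t handle;
starpu_vector_data_register(&handle, STARPU_MAIN_RAM,
                            (uintptr_t)vec, N, sizeof(double));

/* Annotate the data: MPI tag 42, owned by rank 0. */
starpu_mpi_data_register(handle, 42, 0);

/* Submitted on every node, but executed only on the node that owns
 * the data the task writes (rank 0 here); the required transfers are
 * derived automatically from the data annotations. */
starpu_mpi_task_insert(MPI_COMM_WORLD, &scale_cl,
                       STARPU_RW, handle, 0);

starpu_task_wait_for_all();
starpu_mpi_shutdown();
```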
GASPI: The Global Address Space Programming Interface (GASPI) is a specification of a communication API owned by the open GASPI Forum. It defines asynchronous, single-sided and non-blocking communication primitives for a Partitioned Global Address Space (PGAS). Its implementation, GPI, is interoperable with MPI and allows for incremental porting of existing MPI applications: GASPI picks up the parallel environment during its initialisation, and this mechanism can be used to keep existing toolchains, including distribution and initialisation of the binaries. GASPI also allows data allocated in the MPI program to be accessed without an additional copy. In the scope of INTERTWinE, closer integration of GASPI and MPI memory and communication management is envisaged.
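The incremental porting path can be sketched as mixed-mode initialisation. This is a sketch, assuming a GPI-2 build configured for MPI interoperability; `gaspi_segment_use`, which exposes already-allocated memory as a GASPI segment, is a GPI-2 extension beyond the core specification:

```c
/* Existing MPI application: keep its start-up and toolchain. */
MPI_Init(&argc, &argv);

/* GASPI picks up the already-running parallel environment. */
gaspi_proc_init(GASPI_BLOCK);

/* Expose a buffer allocated by the MPI part of the code as
 * GASPI segment 0, without copying the data. */
double *buf = malloc(N * sizeof(double));
/* ... MPI code fills buf ... */
gaspi_segment_use(0, buf, N * sizeof(double),
                  GASPI_GROUP_ALL, GASPI_BLOCK, 0);

/* From here on, one-sided asynchronous GASPI communication
 * (gaspi_write_notify, ...) can operate on the same memory. */
```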
Learn more about GASPI using this online tutorial: http://www.gpi-site.com/gpi2/tutorial/
OmpSs: OmpSs is a parallel programming model focused on exploiting task-based parallelism in applications written in C, C++ or Fortran. OmpSs has also been extended to run in a multi-node cluster environment with a master-slave design, where one process acts as director of the execution (master) and the remaining processes (slaves) follow the commands issued by the master. OmpSs will apply the resource manager and directory cache service of INTERTWinE.
PaRSEC: PaRSEC is a generic framework for architecture-aware scheduling and management of micro-tasks on distributed many-core heterogeneous architectures. The main benefits of PaRSEC are task parametrisation and architecture-aware scheduling. In the context of INTERTWinE, the co-existence of PaRSEC-based numerical libraries (e.g. DPLASMA) with MPI applications will be investigated. PaRSEC will also interface with the INTERTWinE Resource Manager and experiment with the Directory Cache service.
API combinations: We have identified a number of key API combinations, representing three classes of programming model: shared memory programming models, distributed programming models, and programming models for accelerators. Read more about these API combinations.