OpenMP F2F meeting report

08 Mar 2018
INTERTWinE team member Xavi Teruel (BSC) reports on the recent OpenMP F2F meeting.

The INTERTWinE partners are very active in pursuing interoperability issues and new API developments in relevant standards bodies. Below, Xavi reports on the discussions held by the interoperability subgroup at the OpenMP F2F meeting in Austin, Texas, in February.

During the interoperability meetings which took place at the OpenMP F2F meeting in Austin (February 2018), we held some initial discussions in which participants explained what they expect from future OpenMP specifications in this particular area. The ideas were then grouped into several areas.

First to be discussed were the aspects of interoperability affecting a single runtime system but involving multiple compilers or languages (Fortran, C or C++). The main concern here was to understand the degree of interoperability the specification guarantees when these compilers/languages coexist, and whether the different vendors' implementations conform to these premises. One important issue within this area would be the creation of a test suite that allows verification of the correctness/compliance of a given implementation with respect to interoperability. One partner also noted that their OpenMP "Compliance Benchmark Suite" had just been released and suggested it could host this interoperability test suite.

In the second area of discussion, the subcommittee was interested in how OpenMP can interoperate with other runtimes executing within the same process. Management of the underlying resources is a desired feature, because a parallel component may hold on to a resource for a long period of time without extracting any benefit from it. The discussion on how to have OpenMP release these resources between parallel regions is being developed in the form of a ticket. Many applications use libraries written in OpenMP together with another threading model (e.g. Pthreads), and want to use these in sequence. There was general agreement that this would be useful, and we worked on fleshing out a solution and its related text. The current approach is based on an OpenMP API routine to pause resources, which enables the runtime to relinquish the resources used by OpenMP on a specific device. This routine receives two parameters, one indicating which device is affected, and the other indicating the level the runtime must reach (i.e. soft, medium or hard).
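To make the proposal concrete, here is a minimal sketch of how such a routine might be used between two OpenMP phases, so that a library based on another threading model can use the cores in between. The routine name, the enumeration and the print-only stub are hypothetical stand-ins for the interface as discussed, not specification text.

    #include <omp.h>
    #include <stdio.h>

    /* Hypothetical pause-resources interface matching the proposal discussed:
     * 'device' selects the device whose resources are relinquished, 'level'
     * says how aggressively (soft, medium or hard). The body is a print-only
     * stub; a real runtime would release threads, thread pools, buffers, etc. */
    typedef enum { pause_soft, pause_medium, pause_hard } pause_level_t;

    static int pause_omp_resources(int device, pause_level_t level)
    {
        printf("pausing OpenMP resources on device %d at level %d\n", device, level);
        return 0;   /* a real routine would report success or an error code */
    }

    static void openmp_phase(void)
    {
        #pragma omp parallel
        {
            /* ... OpenMP work ... */
        }
    }

    int main(void)
    {
        openmp_phase();

        /* Between OpenMP phases, ask the runtime to give up its resources on
         * the host so a library using another threading model can use them. */
        pause_omp_resources(omp_get_initial_device(), pause_hard);

        /* ... call into a non-OpenMP threaded library here ... */

        openmp_phase();   /* OpenMP re-acquires its resources on next use */
        return 0;
    }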

Another issue which arose during the meeting was how to support asynchronous services and how they can be synchronized with OpenMP tasks. Two different approaches were discussed. The first of these involves exposing the concept of dependencies as a new data type which can be used as parameters of new OpenMP routines (create, fulfill, clear, etc.). These dependencies can be combined with other clauses associated with tasks.
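As a concrete illustration of fulfilling a task's completion from outside OpenMP, the sketch below uses the closely related task detach clause and omp_fulfill_event routine (standardised in OpenMP 5.0); the proposal discussed at the meeting is more general, exposing dependencies as a first-class data type with its own create/fulfill/clear routines. The async_service helper is a stand-in for an arbitrary asynchronous service and, for simplicity, invokes its callback immediately.

    #include <omp.h>
    #include <stdio.h>

    /* Stand-in for an asynchronous service that invokes 'cb(arg)' on
     * completion (here the callback is invoked immediately).          */
    typedef void (*completion_cb_t)(void *);
    static void async_service(completion_cb_t cb, void *arg) { cb(arg); }

    /* Completion callback: fulfilling the event completes the detached task,
     * which in turn releases the tasks that depend on it.                   */
    static void on_complete(void *arg)
    {
        omp_fulfill_event(*(omp_event_handle_t *)arg);
    }

    int main(void)
    {
        int data = 0;

        #pragma omp parallel
        #pragma omp single
        {
            omp_event_handle_t ev;

            /* This task completes only once 'ev' has been fulfilled, not when
             * its body returns; the handle stays valid here because the
             * service calls back before the body ends.                       */
            #pragma omp task detach(ev) depend(out: data)
            {
                data = 42;
                async_service(on_complete, &ev);
            }

            /* Ready only after the asynchronous completion has been signalled. */
            #pragma omp task depend(in: data)
            printf("dependent task sees data = %d\n", data);
        }
        return 0;
    }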

The second approach comes from the INTERTWinE project, and showed very encouraging results in "taskifying" MPI communication services within an OpenMP program. This interface takes a routine-only approach, requiring no new pragmas or constructs. The work demonstrated multiple ways to use the calls: a polling scheme, an additional interoperability library that captures MPI blocking calls, a modified version of the MPICH library that notifies send/recv completion, and finally a variant based on callbacks issued from user space. This approach also mirrors the previously discussed pattern of inserting a callback into a CUDA stream. Attendees raised some concerns about how omp_block_task() could be implemented by current commercial runtimes without degrading performance: because the blocking call remains on the calling stack while other tasks execute, the time taken to return and fulfil the dependences associated with the "taskified" service can be delayed longer than necessary.
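For illustration only (this is not the INTERTWinE interface itself), the sketch below shows the user-facing pattern of taskified MPI: blocking MPI calls wrapped in OpenMP tasks with data dependences. With a plain MPI library each call simply blocks the executing thread, whereas with an interoperability library of the kind described above the blocking call becomes a point at which the task is paused and its dependences are released once the communication completes.

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    /* Ring exchange with "taskified" MPI: each rank sends its value to the
     * right neighbour and receives from the left one inside OpenMP tasks.  */
    int main(int argc, char **argv)
    {
        int provided, rank, size;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int sendbuf = rank, recvbuf = -1;
        int right = (rank + 1) % size;
        int left  = (rank + size - 1) % size;

        #pragma omp parallel
        #pragma omp single
        {
            /* With plain MPI these calls block the thread running the task;
             * a task-aware interoperability layer would instead pause the
             * task and let the thread execute other ready tasks.            */
            #pragma omp task depend(in: sendbuf)
            MPI_Send(&sendbuf, 1, MPI_INT, right, 0, MPI_COMM_WORLD);

            #pragma omp task depend(out: recvbuf)
            MPI_Recv(&recvbuf, 1, MPI_INT, left, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);

            /* Consumer task: becomes ready once the receive has completed. */
            #pragma omp task depend(in: recvbuf)
            printf("rank %d received %d from rank %d\n", rank, recvbuf, left);
        }

        MPI_Finalize();
        return 0;
    }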

Some people also showed interest in following the topic of PMIx (Process Management Interface Exascale). This standard is focused on providing a well-defined interface that abstracts the common features necessary to manage large-scale application deployment and monitoring. In particular, the interface allows the exchange of information between processes, which could be used to migrate resources between OpenMP processes (variable resource usage). Groups in the PMIx community are focusing their efforts on ensuring that the standard identifies the additional support required for cross-model interactions, including the sharing and allocation of processor cores and network resources between different programming models. More information can be found at: https://pmix.github.io/pmix
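As a small sketch of the kind of information exchange PMIx enables, the fragment below uses the PMIx client API to connect to the local PMIx server and query the size of the job; it assumes the process was launched under a PMIx-enabled resource manager or launcher.

    #include <pmix.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        pmix_proc_t myproc, wildcard;
        pmix_value_t *val = NULL;

        /* Connect to the local PMIx server set up by the launcher. */
        if (PMIX_SUCCESS != PMIx_Init(&myproc, NULL, 0)) {
            fprintf(stderr, "no PMIx server available\n");
            return 1;
        }

        /* Query job-level information: the number of processes in our namespace. */
        PMIX_PROC_CONSTRUCT(&wildcard);
        strncpy(wildcard.nspace, myproc.nspace, PMIX_MAX_NSLEN);
        wildcard.rank = PMIX_RANK_WILDCARD;

        if (PMIX_SUCCESS == PMIx_Get(&wildcard, PMIX_JOB_SIZE, NULL, 0, &val)) {
            printf("rank %u of %u processes\n", myproc.rank, val->data.uint32);
            PMIX_VALUE_RELEASE(val);
        }

        PMIx_Finalize(NULL, 0);
        return 0;
    }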

During the meeting there was also mention of efforts to integrate threads into new MPI approaches (including endpoints and finepoints based on qthreads). The subcommittee will keep track of developments in this area.

For more information about the developments discussed in this report, please contact us at info [at] intertwine-project.eu.
