INTERTWinE round-up of final results

20 Nov 2018
An overview of the project and its outcomes.

In September 2018, the INTERTWinE project came to the end of three years of highly productive work on the important but often neglected topic of parallel programming model interoperability.

The first Exascale computers will be very highly parallel systems, consisting of a hierarchy of architectural levels. To program such systems effectively and portably, application programming interfaces (APIs) with efficient and robust implementations will be required. A single, “silver bullet” API which addresses all the architectural levels does not exist and seems very unlikely to emerge soon enough. We must therefore expect that using combinations of different APIs at different system levels will be the only practical solution in the short to medium term.

INTERTWinE brought together the principal European organisations driving the evolution of programming models and their implementations. Our focus was on six key APIs: MPI, GASPI, OpenMP, OmpSs, StarPU, and PaRSEC, each of which had a project partner with extensive experience in its design and implementation. The project worked closely with the relevant standards bodies and development teams for these APIs, solving interoperability problems both at the specification level and at the implementation level.

Some highlights of the work in the project include:

  • Resource management APIs: The project designed several APIs to support resource management between the multiple runtimes that may be active in the same application. These allow offloading of computation and resource enforcement via extensions to OpenCL, dynamic lending and borrowing of CPU resources between runtimes, pausing and resuming of computational tasks, and the expression of task dependences on external events (illustrated in the first sketch after this list).
  • Task-aware communication libraries: Building on the task pause/resume API, the project has developed task-aware versions of the MPI and GASPI communication libraries. These extend the natural style of programming with asynchronous dependent tasks to encompass inter-process communication as well as computation, and they remove the risk of deadlock that arises if this style is attempted with, say, standard MPI and OpenMP tasks (second sketch below).
  • Shared memory windows in GASPI: Shared memory windows provide a convenient migration path from pure MPI to hybrid MPI + GASPI applications. The project has improved GASPI’s support for shared memory windows, resulting in very high performance implementations of some common communication patterns, including halo exchanges and certain collective operations (third sketch below).
  • Distributed task support: Distributed dependent task models go some way towards the “silver bullet” single-API approach by moving interoperability issues away from the application and into the runtime. To support this approach, INTERTWinE has designed and implemented a directory/cache interface which decouples some distributed task runtimes from the underlying communication layer. Recognising the difficulty of automatically scheduling tasks in a distributed system, the project has also explored a new approach: the Event-Driven Asynchronous Tasks (EDAT) model provides much of the convenience of the tasking model while retaining full programmer control over which nodes the tasks execute on (fourth sketch below).
  • Standards bodies: Project partners have been actively engaged in the international standards bodies for MPI, OpenMP and GASPI, raising interoperability issues and supporting the adoption of new features in these APIs that address them.
  • Developer engagement: The Developer Hub on the INTERTWinE website provides resource packs to support developers interested in different API combinations, containing Best Practice Guides and example codes. The project ran two very successful Exascale Application Workshops; findings from these are also available on the website.
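
The external-events part of the resource management work is close in spirit to the task detach clause that was later standardised in OpenMP 5.0. The sketch below uses that standard mechanism purely as an illustration of the idea; it is not the project’s own interface. A detached task is not considered complete until omp_fulfill_event() is called on its event handle, for example by a communication library when an asynchronous transfer finishes; here a stand-in function fulfils the event immediately.

```c
/* External-event task completion via the OpenMP 5.0 detach clause.
 * Requires an OpenMP 5.0 compiler (e.g. recent GCC or Clang). */
#include <omp.h>
#include <stdio.h>

/* Stand-in for handing the event to another runtime: a real code would
 * register omp_fulfill_event(ev) as the completion callback of an
 * asynchronous operation. */
static void start_async_op(omp_event_handle_t ev)
{
    omp_fulfill_event(ev);   /* the detached task may now complete */
}

int main(void)
{
    #pragma omp parallel
    #pragma omp single
    {
        omp_event_handle_t ev;

        #pragma omp task detach(ev)
        {
            printf("task body finished; completion deferred to the event\n");
            start_async_op(ev);
        }

        /* Waits for the detached task, i.e. until the event is fulfilled. */
        #pragma omp taskwait
    }
    return 0;
}
```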
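
To make the deadlock problem behind the task-aware libraries concrete, the following sketch shows a hybrid MPI + OpenMP tasking pattern written with standard MPI only (block counts and sizes are arbitrary illustration values; run with two ranks). If every OpenMP thread on the receiving rank blocks inside MPI_Recv for messages whose matching sends have not yet been scheduled on the sender, the program can deadlock; a task-aware MPI instead pauses the task at the blocking call and frees the thread to run other tasks.

```c
#include <mpi.h>

#define NBLOCKS 64
#define BLOCK   1024

static double buf[NBLOCKS][BLOCK];

int main(int argc, char **argv)
{
    int provided, rank;
    /* Tasks on different threads may call MPI concurrently. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel
    #pragma omp single
    for (int b = 0; b < NBLOCKS; b++) {
        if (rank == 0) {
            #pragma omp task firstprivate(b)
            {
                /* ... produce block b ... */
                MPI_Send(buf[b], BLOCK, MPI_DOUBLE, 1, b, MPI_COMM_WORLD);
            }
        } else if (rank == 1) {
            #pragma omp task firstprivate(b)
            {
                /* Blocking receive inside a task: deadlock-prone with
                 * plain MPI if all threads wait on not-yet-posted sends;
                 * safe with a task-aware MPI, which pauses the task. */
                MPI_Recv(buf[b], BLOCK, MPI_DOUBLE, 0, b, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                /* ... process block b ... */
            }
        }
    }

    MPI_Finalize();
    return 0;
}
```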
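
The shared-window work builds on GASPI’s notified one-sided communication. The sketch below shows the underlying halo-exchange pattern with notified writes (it does not show the shared memory window extension itself); it assumes a GPI-2-style API, hard-codes illustrative sizes and offsets, and omits error handling. Each rank writes its boundary into its right-hand neighbour’s halo region and waits for the symmetric write from its left-hand neighbour.

```c
#include <GASPI.h>

#define SEG        0                        /* single GASPI segment  */
#define HALO       1024                     /* doubles in the halo   */
#define HALO_BYTES (HALO * sizeof(double))

int main(void)
{
    gaspi_rank_t rank, nprocs;
    gaspi_proc_init(GASPI_BLOCK);
    gaspi_proc_rank(&rank);
    gaspi_proc_num(&nprocs);

    /* Segment layout: [0, HALO_BYTES) holds our outgoing boundary,
     * [HALO_BYTES, 2*HALO_BYTES) is the halo written by the left rank. */
    gaspi_segment_create(SEG, 2 * HALO_BYTES, GASPI_GROUP_ALL,
                         GASPI_BLOCK, GASPI_MEM_INITIALIZED);

    gaspi_rank_t     right = (rank + 1) % nprocs;
    gaspi_queue_id_t q     = 0;

    /* One-sided put of our boundary into the right neighbour's halo,
     * with notification 0 attached so it can detect arrival. */
    gaspi_write_notify(SEG, 0, right,
                       SEG, HALO_BYTES, HALO_BYTES,
                       0, 1, q, GASPI_BLOCK);

    /* Wait until the left neighbour's boundary has landed in our halo. */
    gaspi_notification_id_t first;
    gaspi_notification_t    old;
    gaspi_notify_waitsome(SEG, 0, 1, &first, GASPI_BLOCK);
    gaspi_notify_reset(SEG, first, &old);

    /* Drain the queue before reusing the outgoing buffer. */
    gaspi_wait(q, GASPI_BLOCK);

    gaspi_proc_term(GASPI_BLOCK);
    return 0;
}
```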
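
Finally, the event-driven idea behind EDAT can be pictured with nothing more than MPI point-to-point messages: an “event” is a tagged message fired at an explicitly chosen rank, and the task bound to that event runs on the rank that receives it, so task placement stays under programmer control. The sketch below expresses the pattern in plain MPI purely for illustration; it is not the EDAT API itself.

```c
#include <mpi.h>
#include <stdio.h>

#define EVENT_RESULT_READY 42            /* tag naming the event */

/* Task bound to the EVENT_RESULT_READY event on rank 0. */
static void result_task(double value)
{
    printf("event arrived, task ran: value = %g\n", value);
}

int main(int argc, char **argv)          /* run with at least two ranks */
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1) {
        /* "Fire" the event at rank 0, carrying a payload. */
        double payload = 3.14;
        MPI_Send(&payload, 1, MPI_DOUBLE, 0, EVENT_RESULT_READY,
                 MPI_COMM_WORLD);
    } else if (rank == 0) {
        /* Rank 0 waits for the event and then runs the bound task; a
         * runtime would keep a table of (event, task) pairs and
         * dispatch as events arrive. */
        double payload;
        MPI_Recv(&payload, 1, MPI_DOUBLE, MPI_ANY_SOURCE,
                 EVENT_RESULT_READY, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        result_task(payload);
    }

    MPI_Finalize();
    return 0;
}
```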

INTERTWinE has made important advances in solving interoperability problems between parallel programming APIs, and hence also towards the ultimate goal of providing practical but performant ways of programming the upcoming generation of very highly parallel machines. Although the project itself is now over, many of the ideas and implementations developed are being taken up by other projects or being developed further by the project partners.

INTERTWinE was funded for 3 years, from 1st October 2015, by the European Union’s Horizon 2020 Research and Innovation programme under Grant Agreement no. 671602.
