Task Pause/Resume

The Task Pause/Resume API developed in INTERTWinE has been presented to the OpenMP language committee.

The Task Pause/Resume API is a simple, generic, low-level API designed to inform a task-based runtime system that a task is going to be paused due to a call to a blocking operation and, once that blocking operation has completed, that the paused task can be resumed. This situation usually occurs when a task calls a synchronous function that blocks until the operation has completed. For instance, if a task calls MPI_Recv and the data is not ready yet, the thread that is executing the task, as well as the CPU running that thread, will block until the data is received by the MPI library.
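For illustration, a minimal sketch of this scenario in an OpenMP program (the function name is illustrative): if the message has not arrived yet, the thread executing the task sits inside MPI_Recv and the runtime cannot use that CPU to run other ready tasks.

```c
#include <mpi.h>

/* Illustrative example of the problem: a task that performs a
 * synchronous receive. */
void receive_in_task(double *buf, int count, int source, MPI_Comm comm)
{
    #pragma omp task
    {
        /* If the data is not ready yet, this call blocks the thread
         * executing the task (and the CPU running that thread) until
         * the message arrives, so the runtime cannot schedule other
         * ready tasks on that CPU. */
        MPI_Recv(buf, count, MPI_DOUBLE, source, 0, comm,
                 MPI_STATUS_IGNORE);
    }
}
```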

The Task Pause/Resume API provides three functions. The first is get_current_blocking_context(), which returns a void pointer that serves as an opaque identifier of the current task; the result is NULL if the function is called from code that is not running in the context of any task. The second function is block_current_task(void *blocking_context). It informs the runtime that the task identified by the blocking_context parameter (previously obtained with the first function) is going to be paused. This function blocks the execution flow of the current task and passes control of the CPU to the runtime system, which is then able to execute another task on that CPU. The paused task will not be resumed until another thread calls the third function, unblock_task(void *blocking_context). At that point, the paused task is put on the runtime's ready queue and will be executed as soon as a CPU is available to run it.
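A minimal sketch of the corresponding declarations, based solely on the description above (the exact header and symbol names used by a concrete runtime may differ):

```c
/* Sketch of the Task Pause/Resume API as described above; concrete
 * runtimes may expose these functions under prefixed names. */

/* Returns an opaque identifier of the current task, or NULL when
 * called from code that is not running in the context of any task. */
void *get_current_blocking_context(void);

/* Pauses the task identified by blocking_context and hands the CPU
 * back to the runtime so it can execute another ready task. The call
 * does not return until unblock_task() is invoked for the same
 * context from another thread. */
void block_current_task(void *blocking_context);

/* Marks the paused task as ready again; the runtime puts it on its
 * ready queue and resumes it as soon as a CPU is available. */
void unblock_task(void *blocking_context);
```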

This API can be used to implement a task-aware version of the synchronous MPI_Recv operation. After launching the asynchronous version of the receive operation, the first action is to check whether the operation has completed immediately. If so, the function returns without blocking the task, as the receive operation has already finished. Otherwise, a ticket object is created and filled with information about the ongoing MPI operation and the current task. The ticket is then registered with the task-based runtime system and the task is paused. Once the MPI library completes the operation, it is also responsible for unblocking the corresponding task by calling the unblock_task service.
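Below is a hedged sketch of such a task-aware receive. The MPI calls and the Pause/Resume functions are as described above; the ticket structure and the register_ticket routine are hypothetical placeholders for whatever component (polling task, interoperability library, modified MPI) eventually calls unblock_task.

```c
#include <mpi.h>

/* Pause/Resume API (see the declarations above). */
void *get_current_blocking_context(void);
void  block_current_task(void *blocking_context);

/* Hypothetical ticket describing a pending MPI operation and the
 * task that is waiting for it. */
typedef struct {
    MPI_Request request;          /* ongoing non-blocking receive    */
    MPI_Status *status;           /* where to store the final status */
    void       *blocking_context; /* task to resume on completion    */
} ticket_t;

/* Assumed registration hook: hands the ticket to the component that
 * will later call unblock_task() when the request completes. */
void register_ticket(ticket_t *ticket);

int task_aware_recv(void *buf, int count, MPI_Datatype datatype,
                    int source, int tag, MPI_Comm comm, MPI_Status *status)
{
    ticket_t ticket;
    int completed = 0;

    /* Launch the asynchronous version of the receive. */
    int err = MPI_Irecv(buf, count, datatype, source, tag, comm,
                        &ticket.request);
    if (err != MPI_SUCCESS)
        return err;

    /* If the operation completed immediately, return without pausing. */
    MPI_Test(&ticket.request, &completed, status);
    if (completed)
        return MPI_SUCCESS;

    /* Otherwise fill the ticket, register it, and pause the task. */
    ticket.status = status;
    ticket.blocking_context = get_current_blocking_context();
    register_ticket(&ticket);
    block_current_task(ticket.blocking_context);

    /* The task resumes here once the receive has completed. */
    return MPI_SUCCESS;
}
```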

The INTERTWinE project presented this work to the OpenMP language committee under the use case of taskifying MPI communication services in an OpenMP program. The Pause/Resume interface follows a routine-only approach, requiring no new pragmas or constructs. We demonstrated several ways to use the new API calls: a polling scheme, an additional interoperability library that captures MPI blocking calls, a modified version of the MPICH library that notifies send/receive completion, and a variant based on callbacks embedded in the MPI services (this last variant has no reference implementation but could conceptually be implemented). A CUDA program using this callback mechanism to transfer memory between the host and the device was also presented (callbacks are already supported by the CUDA programming model).
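As an illustration of the polling scheme, a minimal sketch of a routine that a dedicated polling task or runtime hook might run periodically; the ticket list and its accessors are hypothetical, and only MPI_Test and unblock_task correspond to interfaces mentioned above.

```c
#include <mpi.h>

/* Pause/Resume API (see the declarations above). */
void unblock_task(void *blocking_context);

/* Hypothetical container of registered tickets (simplified version
 * of the ticket used in the receive sketch above). */
typedef struct {
    MPI_Request request;
    void       *blocking_context;
} ticket_t;

extern ticket_t pending_tickets[];
extern int      num_pending_tickets;
void            remove_ticket(int index); /* assumed to compact the array */

/* Tests the pending requests and resumes the tasks whose operations
 * have completed. */
void poll_pending_requests(void)
{
    for (int i = 0; i < num_pending_tickets; i++) {
        int completed = 0;
        MPI_Test(&pending_tickets[i].request, &completed,
                 MPI_STATUS_IGNORE);
        if (completed) {
            /* The paused task becomes ready to run again. */
            unblock_task(pending_tickets[i].blocking_context);
            remove_ticket(i);
            i--; /* re-check the slot filled by remove_ticket() */
        }
    }
}
```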

The OpenMP language committee raised some concerns about how the task pause and resume services could be implemented by current commercial runtimes, as the ability to pause and resume threads adds significant complexity to the design and implementation of the runtime system. Moreover, OpenMP was already investigating another approach based on external events (see the section on Task completion at external events) that can be used to support a subset of the use cases covered by the Pause/Resume API more efficiently.

Our work on Task completion at external events has also been presented to the OpenMP ARB.
