GASPI course, Heidelberg

08 Nov 2016
24-25 November 2016: Efficient Parallel Programming with GASPI

In this tutorial we present an asynchronous data-flow programming model for Partitioned Global Address Spaces (PGAS) as an alternative to the programming model of MPI. GASPI, which stands for Global Address Space Programming Interface, is a PGAS API designed as a C/C++/Fortran library. It focuses on three key objectives: scalability, flexibility and fault tolerance. To achieve its much improved scaling behaviour, GASPI relies on asynchronous dataflow with remote completion rather than on bulk-synchronous message exchanges. GASPI follows a single/multiple program, multiple data (SPMD/MPMD) approach and offers a small yet powerful API. Hands-on sessions (in C and Fortran) will allow users to immediately test and understand the basic constructs of GASPI.
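As a taste of the constructs the hands-on sessions cover, the following minimal C sketch (our own illustration, not taken from the course material) shows the basic write-with-notification pattern behind GASPI's remote completion. It assumes the GPI-2 implementation of the GASPI standard (`GASPI.h`) and a launch on two or more ranks via `gaspi_run`; segment and notification ids are arbitrary choices for the example.

```c
/* Sketch: rank r writes a value into the segment of rank (r+1) % nprocs
 * and signals completion with a notification (remote completion).
 * Requires the GPI-2 library; launch with gaspi_run on >= 2 ranks. */
#include <GASPI.h>
#include <stdio.h>

int main (int argc, char *argv[])
{
  gaspi_proc_init (GASPI_BLOCK);

  gaspi_rank_t rank, nprocs;
  gaspi_proc_rank (&rank);
  gaspi_proc_num (&nprocs);

  /* One globally visible segment per rank (segment id 0, two doubles). */
  const gaspi_segment_id_t seg = 0;
  gaspi_segment_create (seg, 2 * sizeof (double),
                        GASPI_GROUP_ALL, GASPI_BLOCK, GASPI_MEM_INITIALIZED);

  gaspi_pointer_t ptr;
  gaspi_segment_ptr (seg, &ptr);
  double *mem = (double *) ptr;
  mem[0] = (double) rank;                     /* local source value */

  const gaspi_rank_t right = (rank + 1) % nprocs;

  /* One-sided write of mem[0] into slot 1 of the right neighbour,
   * fused with notification id 0: the notification becomes visible
   * at the target only after the data has arrived. */
  gaspi_write_notify (seg, 0,                 /* local segment, offset  */
                      right, seg,             /* remote rank, segment   */
                      sizeof (double),        /* remote offset          */
                      sizeof (double),        /* size                   */
                      0, 1,                   /* notification id, value */
                      0, GASPI_BLOCK);        /* queue, timeout         */

  /* Wait until the left neighbour's write has fully arrived. */
  gaspi_notification_id_t first;
  gaspi_notify_waitsome (seg, 0, 1, &first, GASPI_BLOCK);
  gaspi_notification_t val;
  gaspi_notify_reset (seg, first, &val);

  printf ("rank %d received %g\n", rank, mem[1]);

  gaspi_wait (0, GASPI_BLOCK);                /* drain queue before shutdown */
  gaspi_proc_term (GASPI_BLOCK);
  return 0;
}
```

Note how the sender never blocks on the receiver: `gaspi_write_notify` merely queues the transfer, and only the consumer of the data waits, on the notification rather than on a matching receive call. This is the asynchronous dataflow style the tutorial contrasts with bulk-synchronous MPI exchanges.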

Registration: To register, please send an e-mail to hpc-support [at] with the subject "Efficient Parallel Programming with GASPI".

Further information: For more information on prerequisites, language or travel, please see the info tab at:

Please note that there will also be a GASPI tutorial in Ostrava (Czech Republic) on 23-24 February 2017. Full details of this and other INTERTWinE courses can always be found at:

Last updated: 25 Apr 2017 at 9:46