/*
 * This file is part of the StarPU Handbook.
 * Copyright (C) 2009--2011  Universit@'e de Bordeaux 1
 * Copyright (C) 2010, 2011, 2012, 2013  Centre National de la Recherche Scientifique
 * Copyright (C) 2011, 2012  Institut National de Recherche en Informatique et Automatique
 * See the file version.doxy for copying conditions.
 */

/*! \defgroup API_MPI_Support MPI Support

@name Initialisation
\ingroup API_MPI_Support

\def STARPU_USE_MPI
\ingroup API_MPI_Support
This macro is defined when StarPU has been installed with MPI support. It
should be used in your code to detect the availability of MPI.

\fn int starpu_mpi_init(int *argc, char ***argv, int initialize_mpi)
\ingroup API_MPI_Support
Initializes the starpumpi library. \p initialize_mpi indicates whether MPI
should be initialized by StarPU. If its value is not 0, MPI will be
initialized by calling <c>MPI_Init_thread(argc, argv, MPI_THREAD_SERIALIZED,
...)</c>.

\fn int starpu_mpi_initialize(void)
\deprecated
\ingroup API_MPI_Support
This function is deprecated. One should use the function starpu_mpi_init()
instead.
This function does not call MPI_Init(); it should be called beforehand.

\fn int starpu_mpi_initialize_extended(int *rank, int *world_size)
\deprecated
\ingroup API_MPI_Support
This function is deprecated. One should use the function starpu_mpi_init()
instead. MPI will be initialized by starpumpi by calling
<c>MPI_Init_thread(argc, argv, MPI_THREAD_SERIALIZED, ...)</c>.

\fn int starpu_mpi_shutdown(void)
\ingroup API_MPI_Support
Cleans the starpumpi library. This must be called after the last call to
starpu_mpi functions and before starpu_shutdown(). MPI_Finalize() will be
called if StarPU-MPI has been initialized by starpu_mpi_init().

\fn void starpu_mpi_comm_amounts_retrieve(size_t *comm_amounts)
\ingroup API_MPI_Support
Retrieves the current amount of communications from the current node in the
array \p comm_amounts, which must have a size greater or equal to the world
size. Communication statistics must be enabled (see \ref STARPU_COMM_STATS).

@name Communication
\anchor MPIPtpCommunication
\ingroup API_MPI_Support

\fn int starpu_mpi_send(starpu_data_handle_t data_handle, int dest, int mpi_tag, MPI_Comm comm)
\ingroup API_MPI_Support
Performs a standard-mode, blocking send of \p data_handle to the node \p dest
using the message tag \p mpi_tag within the communicator \p comm.

\fn int starpu_mpi_recv(starpu_data_handle_t data_handle, int source, int mpi_tag, MPI_Comm comm, MPI_Status *status)
\ingroup API_MPI_Support
Performs a standard-mode, blocking receive in \p data_handle from the node
\p source using the message tag \p mpi_tag within the communicator \p comm.

\fn int starpu_mpi_isend(starpu_data_handle_t data_handle, starpu_mpi_req *req, int dest, int mpi_tag, MPI_Comm comm)
\ingroup API_MPI_Support
Posts a standard-mode, non-blocking send of \p data_handle to the node \p dest
using the message tag \p mpi_tag within the communicator \p comm.
After the call, the pointer to the request \p req can be used to test or to
wait for the completion of the communication.

\fn int starpu_mpi_irecv(starpu_data_handle_t data_handle, starpu_mpi_req *req, int source, int mpi_tag, MPI_Comm comm)
\ingroup API_MPI_Support
Posts a non-blocking receive in \p data_handle from the node \p source using
the message tag \p mpi_tag within the communicator \p comm. After the call,
the pointer to the request \p req can be used to test or to wait for the
completion of the communication.

\fn int starpu_mpi_isend_detached(starpu_data_handle_t data_handle, int dest, int mpi_tag, MPI_Comm comm, void (*callback)(void *), void *arg)
\ingroup API_MPI_Support
Posts a standard-mode, non-blocking send of \p data_handle to the node \p dest
using the message tag \p mpi_tag within the communicator \p comm. On
completion, the \p callback function is called with the argument \p arg.
Similarly to the pthread detached functionality, when a detached communication
completes, its resources are automatically released back to the system; there
is no need to test or to wait for the completion of the request.

\fn int starpu_mpi_irecv_detached(starpu_data_handle_t data_handle, int source, int mpi_tag, MPI_Comm comm, void (*callback)(void *), void *arg)
\ingroup API_MPI_Support
Posts a non-blocking receive in \p data_handle from the node \p source using
the message tag \p mpi_tag within the communicator \p comm.
On completion, the \p callback function is called with the argument \p arg.
Similarly to the pthread detached functionality, when a detached communication
completes, its resources are automatically released back to the system; there
is no need to test or to wait for the completion of the request.

\fn int starpu_mpi_irecv_detached_sequential_consistency(starpu_data_handle_t data_handle, int source, int mpi_tag, MPI_Comm comm, void (*callback)(void *), void *arg, int sequential_consistency)
\ingroup API_MPI_Support
Posts a non-blocking receive in \p data_handle from the node \p source using
the message tag \p mpi_tag within the communicator \p comm. On completion, the
\p callback function is called with the argument \p arg. The parameter
\p sequential_consistency allows enabling or disabling the sequential
consistency for \p data_handle (sequential consistency will be enabled or
disabled based on the value of the parameter \p sequential_consistency and on
the sequential consistency defined for \p data_handle). Similarly to the
pthread detached functionality, when a detached communication completes, its
resources are automatically released back to the system; there is no need to
test or to wait for the completion of the request.

\fn int starpu_mpi_wait(starpu_mpi_req *req, MPI_Status *status)
\ingroup API_MPI_Support
Returns when the operation identified by request \p req is complete.

\fn int starpu_mpi_test(starpu_mpi_req *req, int *flag, MPI_Status *status)
\ingroup API_MPI_Support
If the operation identified by \p req is complete, sets \p flag to 1. The
\p status object is set to contain information on the completed operation.

\fn int starpu_mpi_barrier(MPI_Comm comm)
\ingroup API_MPI_Support
Blocks the caller until all group members of the communicator \p comm have
called it.

\fn int starpu_mpi_isend_detached_unlock_tag(starpu_data_handle_t data_handle, int dest, int mpi_tag, MPI_Comm comm, starpu_tag_t tag)
\ingroup API_MPI_Support
Posts a standard-mode, non-blocking send of \p data_handle
to the node \p dest using the message tag \p mpi_tag within the communicator
\p comm. On completion, \p tag is unlocked.

\fn int starpu_mpi_irecv_detached_unlock_tag(starpu_data_handle_t data_handle, int source, int mpi_tag, MPI_Comm comm, starpu_tag_t tag)
\ingroup API_MPI_Support
Posts a non-blocking receive in \p data_handle from the node \p source using
the message tag \p mpi_tag within the communicator \p comm. On completion,
\p tag is unlocked.

\fn int starpu_mpi_isend_array_detached_unlock_tag(unsigned array_size, starpu_data_handle_t *data_handle, int *dest, int *mpi_tag, MPI_Comm *comm, starpu_tag_t tag)
\ingroup API_MPI_Support
Posts \p array_size standard-mode, non-blocking sends. Each post sends the
n-th data of the array \p data_handle to the n-th node of the array \p dest
using the n-th message tag of the array \p mpi_tag within the n-th
communicator of the array \p comm. On completion of all the requests, \p tag
is unlocked.

\fn int starpu_mpi_irecv_array_detached_unlock_tag(unsigned array_size, starpu_data_handle_t *data_handle, int *source, int *mpi_tag, MPI_Comm *comm, starpu_tag_t tag)
\ingroup API_MPI_Support
Posts \p array_size non-blocking receives. Each post receives in the n-th data
of the array \p data_handle from the n-th node of the array \p source using
the n-th message tag of the array \p mpi_tag within the n-th communicator of
the array \p comm. On completion of all the requests, \p tag is unlocked.

@name Communication Cache
\ingroup API_MPI_Support

\fn void starpu_mpi_cache_flush(MPI_Comm comm, starpu_data_handle_t data_handle)
\ingroup API_MPI_Support
Clears the send and receive communication cache for the data \p data_handle.
The function has to be called synchronously by all the MPI nodes. The function
does nothing if the cache mechanism is disabled (see \ref STARPU_MPI_CACHE).

\fn void starpu_mpi_cache_flush_all_data(MPI_Comm comm)
\ingroup API_MPI_Support
Clears the send and receive communication cache for all data.
The function has to be called synchronously by all the MPI nodes. The function
does nothing if the cache mechanism is disabled (see \ref STARPU_MPI_CACHE).

@name MPI Insert Task
\anchor MPIInsertTask
\ingroup API_MPI_Support

\fn int starpu_data_set_tag(starpu_data_handle_t handle, int tag)
\ingroup API_MPI_Support
Tells StarPU-MPI which MPI tag to use when exchanging the data.

\fn int starpu_data_get_tag(starpu_data_handle_t handle)
\ingroup API_MPI_Support
Returns the MPI tag to be used when exchanging the data.

\fn int starpu_data_set_rank(starpu_data_handle_t handle, int rank)
\ingroup API_MPI_Support
Tells StarPU-MPI which MPI node "owns" a given data, that is, the node which
will always keep an up-to-date value and will by default execute tasks which
write to it.

\fn int starpu_data_get_rank(starpu_data_handle_t handle)
\ingroup API_MPI_Support
Returns the last value set by starpu_data_set_rank().

\def STARPU_EXECUTE_ON_NODE
\ingroup API_MPI_Support
This macro is used when calling starpu_mpi_task_insert(), and must be followed
by an integer value which specifies the node on which to execute the codelet.

\def STARPU_EXECUTE_ON_DATA
\ingroup API_MPI_Support
This macro is used when calling starpu_mpi_task_insert(), and must be followed
by a data handle to specify that the node owning the given data will execute
the codelet.

\def starpu_mpi_insert_task
\ingroup API_MPI_Support
Convenience macro for the function starpu_mpi_task_insert(), which used to be
called starpu_mpi_insert_task.

\fn int starpu_mpi_task_insert(MPI_Comm comm, struct starpu_codelet *codelet, ...)
\ingroup API_MPI_Support
Creates and submits a task corresponding to \p codelet with the following
arguments. The argument list must be zero-terminated. The arguments following
the codelet are of the same types as for the function starpu_task_insert().
The extra argument ::STARPU_EXECUTE_ON_NODE followed by an integer allows
specifying the MPI node on which to execute the codelet.
It is also possible to specify that the node owning a specific data will
execute the codelet, by using ::STARPU_EXECUTE_ON_DATA followed by a data
handle.

The internal algorithm is as follows:
<ol>
<li>
Find out which MPI node is going to execute the codelet.
<ul>
<li>If only one node owns data in ::STARPU_W mode, it will be selected;
<li>If several nodes own data in ::STARPU_W mode, the node selected will be
the one owning the least data in ::STARPU_R mode, so as to minimize the amount
of data to be transferred;
<li>The argument ::STARPU_EXECUTE_ON_NODE followed by an integer can be used
to specify the node;
<li>The argument ::STARPU_EXECUTE_ON_DATA followed by a data handle can be
used to specify that the node owning the given data will execute the codelet.
</ul>
</li>
<li>
Send and receive data as requested. Nodes owning data which need to be read by
the task send them to the MPI node which will execute it; the latter receives
them.
</li>
<li>
Execute the codelet. This is done by the MPI node selected in the first step
of the algorithm.
</li>
<li>
If several MPI nodes own data to be written to, send the written data back to
their owners.
</li>
</ol>
The algorithm also includes a communication cache mechanism that avoids
sending data twice to the same MPI node, unless the data has been modified.
The cache can be disabled (see \ref STARPU_MPI_CACHE).

\fn void starpu_mpi_get_data_on_node(MPI_Comm comm, starpu_data_handle_t data_handle, int node)
\ingroup API_MPI_Support
Transfers data \p data_handle to MPI node \p node, sending it from its owner
if needed. At least the target node and the owner have to call the function.

\fn void starpu_mpi_get_data_on_node_detached(MPI_Comm comm, starpu_data_handle_t data_handle, int node, void (*callback)(void*), void *arg)
\ingroup API_MPI_Support
Transfers data \p data_handle to MPI node \p node, sending it from its owner
if needed.
At least the target node and the owner have to call the function. On
reception, the \p callback function is called with the argument \p arg.

@name Collective Operations
\anchor MPICollectiveOperations
\ingroup API_MPI_Support

\fn void starpu_mpi_redux_data(MPI_Comm comm, starpu_data_handle_t data_handle)
\ingroup API_MPI_Support
Performs a reduction on the given data. All nodes send the data to its owner
node, which performs the reduction.

\fn int starpu_mpi_scatter_detached(starpu_data_handle_t *data_handles, int count, int root, MPI_Comm comm, void (*scallback)(void *), void *sarg, void (*rcallback)(void *), void *rarg)
\ingroup API_MPI_Support
Scatters data among the processes of the communicator based on the ownership
of the data. For each data of the array \p data_handles, the process \p root
sends the data to the process owning this data. Processes receiving data must
have valid data handles to receive it. On completion of the collective
communication, the \p scallback function is called with the argument \p sarg
on the process \p root; the \p rcallback function is called with the argument
\p rarg on any other process.

\fn int starpu_mpi_gather_detached(starpu_data_handle_t *data_handles, int count, int root, MPI_Comm comm, void (*scallback)(void *), void *sarg, void (*rcallback)(void *), void *rarg)
\ingroup API_MPI_Support
Gathers data from the different processes of the communicator onto the process
\p root. Each process owning a data handle in the array \p data_handles will
send it to the process \p root. The process \p root must have valid data
handles to receive the data. On completion of the collective communication,
the \p rcallback function is called with the argument \p rarg on the process
\p root; the \p scallback function is called with the argument \p sarg on any
other process.
*/