/* StarPU --- Runtime system for heterogeneous multicore architectures.
 *
 * Copyright (C) 2009-2021 Université de Bordeaux, CNRS (LaBRI UMR 5800), Inria
 *
 * StarPU is free software; you can redistribute it and/or modify
 * it under the terms of the GNU Lesser General Public License as published by
 * the Free Software Foundation; either version 2.1 of the License, or (at
 * your option) any later version.
 *
 * StarPU is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
 *
 * See the GNU Lesser General Public License in COPYING.LGPL for more details.
 */
/*! \page MPISupport MPI Support

The integration of MPI transfers within task parallelism is done in a
very natural way by means of asynchronous interactions between the
application and StarPU. This is implemented in a separate <c>libstarpumpi</c> library
which basically provides "StarPU" equivalents of <c>MPI_*</c> functions, where
<c>void *</c> buffers are replaced with ::starpu_data_handle_t, and all
GPU-RAM-NIC transfers are handled efficiently by StarPU-MPI. The user has to
use the usual <c>mpirun</c> command of the MPI implementation to start StarPU on
the different MPI nodes.

In case the user wants to run several MPI processes per machine (e.g. one per
NUMA node), \ref STARPU_WORKERS_GETBIND should be used to make StarPU take into
account the binding set by the MPI launcher (otherwise each StarPU instance
would try to bind on all cores of the machine...).

An MPI Insert Task function provides an even more seamless transition to a
distributed application, by automatically issuing all required data transfers
according to the task graph and an application-provided distribution.

\section MPIBuild Building with MPI support

If a <c>mpicc</c> compiler is already in your PATH, StarPU will automatically
enable MPI support in the build. If <c>mpicc</c> is not in PATH, you
can specify its location by passing <c>--with-mpicc=/where/there/is/mpicc</c> to
<c>./configure</c>.

It can be useful to enable MPI tests during <c>make check</c> by passing
<c>--enable-mpi-check</c> to <c>./configure</c>. And similarly to
<c>mpicc</c>, if <c>mpiexec</c> is not in PATH, you can specify its location by passing
<c>--with-mpiexec=/where/there/is/mpiexec</c> to <c>./configure</c>, but this is
not needed if it is next to <c>mpicc</c>: configure will look there in addition to PATH.

Similarly, Fortran examples use <c>mpif90</c>, which can be specified manually
with <c>--with-mpifort</c> if it can't be found automatically.
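
For instance, assuming a custom MPI installation under <c>/opt/mpi</c> (a hypothetical path used only for illustration), the \c configure invocation could look as follows:

\verbatim
$ ./configure --with-mpicc=/opt/mpi/bin/mpicc \
              --with-mpiexec=/opt/mpi/bin/mpiexec \
              --enable-mpi-check
\endverbatim
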
\section ExampleDocumentation Example Used In This Documentation

The example below will be used as the base for this documentation. It
initializes a token on node 0, and the token is passed from node to node,
incremented by one on each step. The code is not using StarPU yet.

\code{.c}
for (loop = 0; loop < nloops; loop++)
{
    int tag = loop*size + rank;

    if (loop == 0 && rank == 0)
    {
        token = 0;
        fprintf(stdout, "Start with token value %d\n", token);
    }
    else
    {
        MPI_Recv(&token, 1, MPI_INT, (rank+size-1)%size, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    token++;

    if (loop == last_loop && rank == last_rank)
    {
        fprintf(stdout, "Finished: token value %d\n", token);
    }
    else
    {
        MPI_Send(&token, 1, MPI_INT, (rank+1)%size, tag+1, MPI_COMM_WORLD);
    }
}
\endcode

\section NotUsingMPISupport About Not Using The MPI Support

Although StarPU provides MPI support, the application programmer may want to
keep his MPI communications as they are for a start, and only delegate task
execution to StarPU. This is possible by just using starpu_data_acquire(), for
instance:

\code{.c}
for (loop = 0; loop < nloops; loop++)
{
    int tag = loop*size + rank;

    /* Acquire the data to be able to write to it */
    starpu_data_acquire(token_handle, STARPU_W);
    if (loop == 0 && rank == 0)
    {
        token = 0;
        fprintf(stdout, "Start with token value %d\n", token);
    }
    else
    {
        MPI_Recv(&token, 1, MPI_INT, (rank+size-1)%size, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    starpu_data_release(token_handle);

    /* Task delegation to StarPU to increment the token. The execution might
     * be performed on a CPU, a GPU, etc. */
    increment_token();

    /* Acquire the updated data to be able to read from it */
    starpu_data_acquire(token_handle, STARPU_R);
    if (loop == last_loop && rank == last_rank)
    {
        fprintf(stdout, "Finished: token value %d\n", token);
    }
    else
    {
        MPI_Send(&token, 1, MPI_INT, (rank+1)%size, tag+1, MPI_COMM_WORLD);
    }
    starpu_data_release(token_handle);
}
\endcode

In that case, <c>libstarpumpi</c> is not needed. One can also use <c>MPI_Isend()</c> and
<c>MPI_Irecv()</c>, by calling starpu_data_release() only after <c>MPI_Wait()</c> or <c>MPI_Test()</c>
have notified completion.

It is however better to use <c>libstarpumpi</c>, to save the application from having to
synchronize with starpu_data_acquire(), and instead just submit all tasks and
communications asynchronously, and wait for the overall completion.
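
As a minimal sketch of the non-blocking variant, assuming the same <c>token_handle</c> as above, the send side could look like this; the key point is that starpu_data_release() is only called once MPI has notified completion:

\code{.c}
MPI_Request request;

/* Make sure the token value is available in main memory */
starpu_data_acquire(token_handle, STARPU_R);
MPI_Isend(&token, 1, MPI_INT, (rank+1)%size, tag+1, MPI_COMM_WORLD, &request);

/* ... possibly do other work here ... */

/* Only release the handle once MPI is done with the user buffer */
MPI_Wait(&request, MPI_STATUS_IGNORE);
starpu_data_release(token_handle);
\endcode
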
\section SimpleExample Simple Example

The flags required to compile or link against the MPI layer are
accessible with the following commands:

\verbatim
$ pkg-config --cflags starpumpi-1.3  # options for the compiler
$ pkg-config --libs starpumpi-1.3    # options for the linker
\endverbatim
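
For instance, a single-file program (here called <c>ring.c</c>, a hypothetical name) could be built with something along these lines:

\verbatim
$ mpicc ring.c -o ring $(pkg-config --cflags starpumpi-1.3) $(pkg-config --libs starpumpi-1.3)
\endverbatim

The ring example above then becomes:
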
\code{.c}
void increment_token(void)
{
    struct starpu_task *task = starpu_task_create();

    task->cl = &increment_cl;
    task->handles[0] = token_handle;

    starpu_task_submit(task);
}

int main(int argc, char **argv)
{
    int rank, size;

    starpu_mpi_init_conf(&argc, &argv, 1, MPI_COMM_WORLD, NULL);
    starpu_mpi_comm_rank(MPI_COMM_WORLD, &rank);
    starpu_mpi_comm_size(MPI_COMM_WORLD, &size);

    starpu_vector_data_register(&token_handle, STARPU_MAIN_RAM, (uintptr_t)&token, 1, sizeof(unsigned));

    unsigned nloops = NITER;
    unsigned loop;
    unsigned last_loop = nloops - 1;
    unsigned last_rank = size - 1;

    for (loop = 0; loop < nloops; loop++)
    {
        int tag = loop*size + rank;

        if (loop == 0 && rank == 0)
        {
            starpu_data_acquire(token_handle, STARPU_W);
            token = 0;
            fprintf(stdout, "Start with token value %d\n", token);
            starpu_data_release(token_handle);
        }
        else
        {
            starpu_mpi_irecv_detached(token_handle, (rank+size-1)%size, tag, MPI_COMM_WORLD, NULL, NULL);
        }

        increment_token();

        if (loop == last_loop && rank == last_rank)
        {
            starpu_data_acquire(token_handle, STARPU_R);
            fprintf(stdout, "Finished: token value %d\n", token);
            starpu_data_release(token_handle);
        }
        else
        {
            starpu_mpi_isend_detached(token_handle, (rank+1)%size, tag+1, MPI_COMM_WORLD, NULL, NULL);
        }
    }

    starpu_task_wait_for_all();
    starpu_mpi_shutdown();

    if (rank == last_rank)
    {
        fprintf(stderr, "[%d] token = %d == %d * %d ?\n", rank, token, nloops, size);
        STARPU_ASSERT(token == nloops*size);
    }

    return 0;
}
\endcode

We have here replaced <c>MPI_Recv()</c> and <c>MPI_Send()</c> with starpu_mpi_irecv_detached()
and starpu_mpi_isend_detached(), which just submit the communication to be
performed. The implicit sequential-consistency dependencies provide
synchronization between the MPI receptions and emissions and the corresponding tasks.
The only remaining synchronization with starpu_data_acquire() is at
the beginning and the end.

\section MPIInitialization How to Initialize StarPU-MPI

As seen in the previous example, one has to call starpu_mpi_init_conf() to
initialize StarPU-MPI. The third parameter of the function indicates
whether MPI should be initialized by StarPU or whether the application did it
itself. If the application initializes MPI itself, it must call
<c>MPI_Init_thread()</c> with <c>MPI_THREAD_SERIALIZED</c> or
<c>MPI_THREAD_MULTIPLE</c>, since StarPU-MPI uses a separate thread to
perform the communications. <c>MPI_THREAD_MULTIPLE</c> is necessary if
the application also performs some MPI communications.
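
A minimal sketch of the latter case, where the application initializes MPI itself and therefore passes <c>0</c> as the third parameter, might look as follows (the error handling is only illustrative):

\code{.c}
int thread_support;

MPI_Init_thread(&argc, &argv, MPI_THREAD_SERIALIZED, &thread_support);
if (thread_support < MPI_THREAD_SERIALIZED)
{
    fprintf(stderr, "MPI implementation does not provide the required thread support\n");
    MPI_Abort(MPI_COMM_WORLD, 1);
}

/* 0: MPI was already initialized by the application */
starpu_mpi_init_conf(&argc, &argv, 0, MPI_COMM_WORLD, NULL);

/* ... submit tasks and communications ... */

starpu_mpi_shutdown();
/* Since StarPU did not initialize MPI, the application finalizes it */
MPI_Finalize();
\endcode
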
\section PointToPointCommunication Point To Point Communication

The standard point-to-point communications of MPI have been
implemented. The semantics are similar to MPI's, but adapted to
the DSM provided by StarPU. An MPI request will only be submitted when
the data is available in the main memory of the node submitting the
request.

There are two types of asynchronous communications: the classic
asynchronous communications and the detached communications. The
classic asynchronous communications (starpu_mpi_isend() and
starpu_mpi_irecv()) need to be followed by a call to
starpu_mpi_wait() or to starpu_mpi_test() to wait for or to
test the completion of the communication. Waiting for or testing the
completion of detached communications is not possible: this is done
internally by StarPU-MPI, and on completion the resources are
automatically released. This mechanism is similar to the pthread
detach state attribute, which determines whether a thread will be
created in a joinable or a detached state.
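
As a brief sketch contrasting the two flavours (reusing the <c>token_handle</c> of the ring example), one could write:

\code{.c}
/* Classic asynchronous send: completion must be waited for explicitly */
starpu_mpi_req req;
starpu_mpi_isend(token_handle, &req, (rank+1)%size, tag, MPI_COMM_WORLD);
/* ... */
starpu_mpi_wait(&req, MPI_STATUS_IGNORE);

/* Detached receive: no request object is returned, completion is handled by
 * StarPU-MPI, and the optional callback is called when the data has arrived */
starpu_mpi_irecv_detached(token_handle, (rank+size-1)%size, tag, MPI_COMM_WORLD, NULL, NULL);
\endcode
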
For send communications, data is acquired with the mode ::STARPU_R.
When using the \c configure option
\ref enable-mpi-pedantic-isend "--enable-mpi-pedantic-isend", the mode
::STARPU_RW is used to make sure there is no more than one concurrent
\c MPI_Isend() call accessing a data,
and that StarPU does not read from it from tasks during the communication.

Internally, each communication is split into two messages: a first
message is used to exchange an envelope describing the data (i.e.
its tag and its size), and the data itself is sent in a second message. All
MPI communications submitted by StarPU use a unique tag which has a
default value, and which can be accessed with the functions
starpu_mpi_get_communication_tag() and
starpu_mpi_set_communication_tag(). The matching of tags with
corresponding requests is done within StarPU-MPI.

For any userland communication, the call of the corresponding function
(e.g. starpu_mpi_isend()) will result in the creation of a StarPU-MPI
request, and the function starpu_data_acquire_cb() is then called to
asynchronously request StarPU to fetch the data into main memory; when
the data is ready and the corresponding buffer has already been
received by MPI, it will be copied into the memory of the data,
otherwise the request is stored in the <em>early requests list</em>. Sending
requests are stored in the <em>ready requests list</em>.

While requests need to be processed, the StarPU-MPI progression thread
does the following:

<ol>
<li> it polls the <em>ready requests list</em>. For all the ready
requests, the appropriate function is called to post the corresponding
MPI call. For example, an initial call to starpu_mpi_isend() will
result in a call to <c>MPI_Isend()</c>. If the request is marked as
detached, the request will then be added to the <em>detached requests
list</em>.
</li>
<li> it posts an <c>MPI_Irecv()</c> to retrieve a data envelope.
</li>
<li> it polls the <em>detached requests list</em>. For all the detached
requests, it tests the completion of the MPI request by calling
<c>MPI_Test()</c>. On completion, the data handle is released, and if a
callback was defined, it is called.
</li>
<li> finally, it checks whether a data envelope has been received. If so,
and if the data envelope matches a request in the <em>early requests list</em> (i.e.
the request has already been posted by the application), the
corresponding MPI call is posted (similarly to the first step above).
If the data envelope does not match any application request, a
temporary handle is created to receive the data, a StarPU-MPI request
is created and added to the <em>ready requests list</em>, and thus will be
processed in the first step of the next loop.
</li>
</ol>

\ref MPIPtpCommunication gives the list of all the
point-to-point communications defined in StarPU-MPI.

\section ExchangingUserDefinedDataInterface Exchanging User Defined Data Interface

New data interfaces defined as explained in \ref DefiningANewDataInterface
can also be used within StarPU-MPI and
exchanged between nodes. Two functions need to be defined through the
type starpu_data_interface_ops. The function
starpu_data_interface_ops::pack_data takes a handle and returns a
contiguous memory buffer allocated with

\code{.c}
starpu_malloc_flags(ptr, size, 0)
\endcode

along with its size, into which the data to be conveyed
to another node should be copied.

\code{.c}
static int complex_pack_data(starpu_data_handle_t handle, unsigned node, void **ptr, ssize_t *count)
{
    STARPU_ASSERT(starpu_data_test_if_allocated_on_node(handle, node));

    struct starpu_complex_interface *complex_interface = (struct starpu_complex_interface *) starpu_data_get_interface_on_node(handle, node);

    *count = complex_get_size(handle);
    *ptr = (void *)starpu_malloc_on_node_flags(node, *count, 0);
    memcpy(*ptr, complex_interface->real, complex_interface->nx*sizeof(double));
    memcpy((char *)*ptr + complex_interface->nx*sizeof(double), complex_interface->imaginary, complex_interface->nx*sizeof(double));

    return 0;
}
\endcode

The inverse operation is
implemented in the function starpu_data_interface_ops::unpack_data, which
takes a contiguous memory buffer and recreates the data handle.

\code{.c}
static int complex_unpack_data(starpu_data_handle_t handle, unsigned node, void *ptr, size_t count)
{
    STARPU_ASSERT(starpu_data_test_if_allocated_on_node(handle, node));

    struct starpu_complex_interface *complex_interface = (struct starpu_complex_interface *) starpu_data_get_interface_on_node(handle, node);

    memcpy(complex_interface->real, ptr, complex_interface->nx*sizeof(double));
    memcpy(complex_interface->imaginary, (char *)ptr + complex_interface->nx*sizeof(double), complex_interface->nx*sizeof(double));

    starpu_free_on_node_flags(node, (uintptr_t) ptr, count, 0);

    return 0;
}
\endcode

The starpu_data_interface_ops::peek_data operation does
the same, but without freeing the buffer. One can of course
implement starpu_data_interface_ops::unpack_data as merely calling
starpu_data_interface_ops::peek_data and then freeing the buffer:

\code{.c}
static int complex_peek_data(starpu_data_handle_t handle, unsigned node, void *ptr, size_t count)
{
    STARPU_ASSERT(starpu_data_test_if_allocated_on_node(handle, node));
    STARPU_ASSERT(count == complex_get_size(handle));

    struct starpu_complex_interface *complex_interface = (struct starpu_complex_interface *) starpu_data_get_interface_on_node(handle, node);

    memcpy(complex_interface->real, ptr, complex_interface->nx*sizeof(double));
    memcpy(complex_interface->imaginary, (char *)ptr + complex_interface->nx*sizeof(double), complex_interface->nx*sizeof(double));

    return 0;
}
\endcode

\code{.c}
static struct starpu_data_interface_ops interface_complex_ops =
{
    ...
    .pack_data = complex_pack_data,
    .peek_data = complex_peek_data,
    .unpack_data = complex_unpack_data
};
\endcode

Instead of defining pack and unpack operations, users may want to
attach an MPI type to their user-defined data interface. The function
starpu_mpi_interface_datatype_register() allows doing so. This function takes 3
parameters: the interface ID for which the MPI datatype is going to be defined,
a pointer to a function that will create the MPI datatype, and a pointer to a function
that will free the MPI datatype. If for some data an MPI datatype can not be
built (e.g. complex data structures), the creation function can return <c>-1</c>;
StarPU-MPI will then fall back to using pack/unpack.

The functions to create and free the MPI datatype are defined and registered as
follows.

\code{.c}
void starpu_complex_interface_datatype_allocate(starpu_data_handle_t handle, MPI_Datatype *mpi_datatype)
{
    int ret;
    int blocklengths[2];
    MPI_Aint displacements[2];
    MPI_Datatype types[2] = {MPI_DOUBLE, MPI_DOUBLE};
    struct starpu_complex_interface *complex_interface = (struct starpu_complex_interface *) starpu_data_get_interface_on_node(handle, STARPU_MAIN_RAM);

    MPI_Get_address(complex_interface, displacements);
    MPI_Get_address(&complex_interface->imaginary, displacements+1);
    displacements[1] -= displacements[0];
    displacements[0] = 0;

    blocklengths[0] = complex_interface->nx;
    blocklengths[1] = complex_interface->nx;

    ret = MPI_Type_create_struct(2, blocklengths, displacements, types, mpi_datatype);
    STARPU_ASSERT_MSG(ret == MPI_SUCCESS, "MPI_Type_create_struct failed");

    ret = MPI_Type_commit(mpi_datatype);
    STARPU_ASSERT_MSG(ret == MPI_SUCCESS, "MPI_Type_commit failed");
}

void starpu_complex_interface_datatype_free(MPI_Datatype *mpi_datatype)
{
    MPI_Type_free(mpi_datatype);
}

static struct starpu_data_interface_ops interface_complex_ops =
{
    ...
};

interface_complex_ops.interfaceid = starpu_data_interface_get_next_id();
starpu_mpi_interface_datatype_register(interface_complex_ops.interfaceid, starpu_complex_interface_datatype_allocate, starpu_complex_interface_datatype_free);

starpu_data_handle_t handle;
starpu_complex_data_register(&handle, STARPU_MAIN_RAM, real, imaginary, 2);
...
\endcode

It is also possible to use starpu_mpi_datatype_register() to register the
functions through a handle rather than the interface ID, but note that in that
case it is important to make sure no communication is going to occur before the
function starpu_mpi_datatype_register() is called. This would otherwise produce
an undefined result, as the data may be received before the function is called,
in which case the MPI datatype would not be known by the StarPU-MPI communication
engine and the data would be processed with the pack and unpack operations. One
would thus need to synchronize all nodes:

\code{.c}
starpu_data_handle_t handle;
starpu_complex_data_register(&handle, STARPU_MAIN_RAM, real, imaginary, 2);
starpu_mpi_datatype_register(handle, starpu_complex_interface_datatype_allocate, starpu_complex_interface_datatype_free);
starpu_mpi_barrier(MPI_COMM_WORLD);
\endcode

\section MPIInsertTaskUtility MPI Insert Task Utility

To save the programmer from having to spell out all communications explicitly, StarPU
provides an "MPI Insert Task Utility". The principle is that the application
decides a distribution of the data over the MPI nodes by allocating it and
notifying StarPU of this decision, i.e. it tells StarPU which MPI node "owns"
which data. It also decides, for each handle, an MPI tag which will be used to
exchange the content of the handle. All MPI nodes then process the whole task
graph, and StarPU automatically determines which node actually executes which
task, and triggers the required MPI transfers.

The list of functions is described in \ref MPIInsertTask.

Here is a stencil example showing how to use starpu_mpi_task_insert(). One
first needs to define a distribution function which specifies the
locality of the data. Note that the data needs to be registered to MPI
by calling starpu_mpi_data_register(). This function allows setting
the distribution information and the MPI tag which should be used when
communicating the data. It also allows automatically clearing the MPI
communication cache when unregistering the data.

\code{.c}
/* Returns the MPI node number where data is */
int my_distrib(int x, int y, int nb_nodes)
{
    /* Block distrib */
    return ((int)(x / sqrt(nb_nodes) + (y / sqrt(nb_nodes)) * sqrt(nb_nodes))) % nb_nodes;

    // /* Other examples useful for other kinds of computations */
    // /* / distrib */
    // return (x+y) % nb_nodes;

    // /* Block cyclic distrib */
    // unsigned side = sqrt(nb_nodes);
    // return x % side + (y % side) * side;
}
\endcode

Now the data can be registered within StarPU. Data which are not
owned, but will be needed for computations, can be registered through
the lazy allocation mechanism, i.e. with a <c>home_node</c> set to <c>-1</c>.
StarPU will automatically allocate the memory when it is used for the
first time.

One can note an optimization here (the <c>else if</c> test): we only register
data which will be needed by the tasks that we will execute.

\code{.c}
unsigned matrix[X][Y];
starpu_data_handle_t data_handles[X][Y];

for(x = 0; x < X; x++)
{
    for (y = 0; y < Y; y++)
    {
        int mpi_rank = my_distrib(x, y, size);
        if (mpi_rank == my_rank)
            /* Owning data */
            starpu_variable_data_register(&data_handles[x][y], STARPU_MAIN_RAM, (uintptr_t)&(matrix[x][y]), sizeof(unsigned));
        else if (my_rank == my_distrib(x+1, y, size) || my_rank == my_distrib(x-1, y, size)
              || my_rank == my_distrib(x, y+1, size) || my_rank == my_distrib(x, y-1, size))
            /* I don't own this index, but will need it for my computations */
            starpu_variable_data_register(&data_handles[x][y], -1, (uintptr_t)NULL, sizeof(unsigned));
        else
            /* I know it's useless to allocate anything for this */
            data_handles[x][y] = NULL;

        if (data_handles[x][y])
        {
            starpu_mpi_data_register(data_handles[x][y], x*X+y, mpi_rank);
        }
    }
}
\endcode

Now starpu_mpi_task_insert() can be called for the different
steps of the application.

\code{.c}
for(loop=0 ; loop<niter; loop++)
    for (x = 1; x < X-1; x++)
        for (y = 1; y < Y-1; y++)
            starpu_mpi_task_insert(MPI_COMM_WORLD, &stencil5_cl,
                                   STARPU_RW, data_handles[x][y],
                                   STARPU_R, data_handles[x-1][y],
                                   STARPU_R, data_handles[x+1][y],
                                   STARPU_R, data_handles[x][y-1],
                                   STARPU_R, data_handles[x][y+1],
                                   0);
starpu_task_wait_for_all();
\endcode

I.e. all MPI nodes process the whole task graph, but as mentioned above, for
each task, only the MPI node which owns the data being written to (here,
<c>data_handles[x][y]</c>) will actually run the task. The other MPI nodes will
automatically send the required data.

To tune the placement of tasks among MPI nodes, one can use
::STARPU_EXECUTE_ON_NODE or ::STARPU_EXECUTE_ON_DATA to specify an explicit
node, or the node of a given data (e.g. one of the parameters), or use
starpu_mpi_node_selection_register_policy() and ::STARPU_NODE_SELECTION_POLICY
to provide a dynamic policy.
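
For instance, a minimal sketch forcing one particular insertion onto node 1, regardless of the data distribution, could look like this:

\code{.c}
starpu_mpi_task_insert(MPI_COMM_WORLD, &stencil5_cl,
                       STARPU_RW, data_handles[x][y],
                       STARPU_R, data_handles[x-1][y],
                       STARPU_R, data_handles[x+1][y],
                       STARPU_R, data_handles[x][y-1],
                       STARPU_R, data_handles[x][y+1],
                       STARPU_EXECUTE_ON_NODE, 1, /* run this task on MPI node 1 */
                       0);
\endcode
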
A function starpu_mpi_task_build() is also provided with the aim to
only construct the task structure. All MPI nodes need to call the
function, which posts the required send/recv on the various nodes as needed.
Only the node which is to execute the task will then return a
valid task structure, others will return <c>NULL</c>. This node must submit the task.
All nodes then need to call the function starpu_mpi_task_post_build() -- with the same
list of arguments as starpu_mpi_task_build() -- to post all the
necessary data communications meant to happen after the task execution.

\code{.c}
struct starpu_task *task;
task = starpu_mpi_task_build(MPI_COMM_WORLD, &cl,
                             STARPU_RW, data_handles[0],
                             STARPU_R, data_handles[1],
                             0);
if (task) starpu_task_submit(task);
starpu_mpi_task_post_build(MPI_COMM_WORLD, &cl,
                           STARPU_RW, data_handles[0],
                           STARPU_R, data_handles[1],
                           0);
\endcode

\section MPIInsertPruning Pruning MPI Task Insertion

Making all MPI nodes process the whole graph can be a concern with a growing
number of nodes. To avoid this, the
application can prune the task-insertion <c>for</c> loops according to the data distribution,
so as to only submit tasks on nodes which have to care about them (either to
execute them, or to send the required data).

A way to do some of this quite easily can be to just add an <c>if</c> like this:

\code{.c}
for(loop=0 ; loop<niter; loop++)
    for (x = 1; x < X-1; x++)
        for (y = 1; y < Y-1; y++)
            if (my_distrib(x,y,size) == my_rank
             || my_distrib(x-1,y,size) == my_rank
             || my_distrib(x+1,y,size) == my_rank
             || my_distrib(x,y-1,size) == my_rank
             || my_distrib(x,y+1,size) == my_rank)
                starpu_mpi_task_insert(MPI_COMM_WORLD, &stencil5_cl,
                                       STARPU_RW, data_handles[x][y],
                                       STARPU_R, data_handles[x-1][y],
                                       STARPU_R, data_handles[x+1][y],
                                       STARPU_R, data_handles[x][y-1],
                                       STARPU_R, data_handles[x][y+1],
                                       0);
starpu_task_wait_for_all();
\endcode

This avoids the cost of function-call argument passing and parsing for the tasks
that the node does not have to care about.
If the <c>my_distrib</c> function can be inlined by the compiler, the latter can
improve the test.
If <c>size</c> can be made a compile-time constant, the compiler can
considerably improve the test further.
If the distribution function is not too complex and the compiler is very good,
the latter can even optimize the <c>for</c> loops, thus dramatically reducing
the cost of task submission.

To estimate quickly how long task submission takes, and notably how much pruning
saves, a quick and easy way is to measure the submission time of just one of the
MPI nodes. This can be achieved by running the application on just one MPI node
with the following environment variables:

\code{.sh}
export STARPU_DISABLE_KERNELS=1
export STARPU_MPI_FAKE_RANK=2
export STARPU_MPI_FAKE_SIZE=1024
\endcode

Here we have disabled the kernel function call to skip the actual computation
time and only keep submission time, and we have asked StarPU to fake running on
MPI node 2 out of 1024 nodes.

\section MPITemporaryData Temporary Data

To be able to use starpu_mpi_task_insert(), one has to call
starpu_mpi_data_register(), so that StarPU-MPI can know what it needs to do for
each data. Parameters of starpu_mpi_data_register() are normally the same on all
nodes for a given data, so that all nodes agree on which node owns the data, and
which tag is used to transfer its value.

It can however be useful to register e.g. some temporary data on just one node,
without having to register a dummy handle on all nodes, while only one node will
actually need to know about it. In this case, nodes which will not need the data
can just pass \c NULL to starpu_mpi_task_insert():

\code{.c}
starpu_data_handle_t data0 = NULL;
if (rank == 0)
{
    starpu_variable_data_register(&data0, STARPU_MAIN_RAM, (uintptr_t) &val0, sizeof(val0));
    starpu_mpi_data_register(data0, 0, rank);
}
starpu_mpi_task_insert(MPI_COMM_WORLD, &cl, STARPU_W, data0, 0); /* Executes on node 0 */
\endcode

Here, nodes whose rank is not \c 0 will simply not take care of the data, and consider it to be on another node.

This can be mixed in various ways, for instance here node \c 1 determines that it does
not have to care about \c data0, but knows that it should send the value of its
\c data1 to node \c 0, which owns \c data and thus will need the value of \c data1 to execute the task:

\code{.c}
starpu_data_handle_t data0 = NULL, data1, data;
if (rank == 0)
{
    starpu_variable_data_register(&data0, STARPU_MAIN_RAM, (uintptr_t) &val0, sizeof(val0));
    starpu_mpi_data_register(data0, -1, rank);
    starpu_variable_data_register(&data1, -1, 0, sizeof(val1));
    starpu_variable_data_register(&data, STARPU_MAIN_RAM, (uintptr_t) &val, sizeof(val));
}
else if (rank == 1)
{
    starpu_variable_data_register(&data1, STARPU_MAIN_RAM, (uintptr_t) &val1, sizeof(val1));
    starpu_variable_data_register(&data, -1, 0, sizeof(val));
}
starpu_mpi_data_register(data, 42, 0);
starpu_mpi_data_register(data1, 43, 1);
starpu_mpi_task_insert(MPI_COMM_WORLD, &cl, STARPU_W, data, STARPU_R, data0, STARPU_R, data1, 0); /* Executes on node 0 */
\endcode

\section MPIPerNodeData Per-node Data

Further than temporary data on just one node, one may want per-node data,
e.g. to replicate some computation because that is less expensive than
communicating the value over MPI:

\code{.c}
starpu_data_handle_t pernode, data0, data1;
starpu_variable_data_register(&pernode, -1, 0, sizeof(val));
starpu_mpi_data_register(pernode, -1, STARPU_MPI_PER_NODE);

/* Normal data: one on node0, one on node1 */
if (rank == 0)
{
    starpu_variable_data_register(&data0, STARPU_MAIN_RAM, (uintptr_t) &val0, sizeof(val0));
    starpu_variable_data_register(&data1, -1, 0, sizeof(val1));
}
else if (rank == 1)
{
    starpu_variable_data_register(&data0, -1, 0, sizeof(val0));
    starpu_variable_data_register(&data1, STARPU_MAIN_RAM, (uintptr_t) &val1, sizeof(val1));
}
starpu_mpi_data_register(data0, 42, 0);
starpu_mpi_data_register(data1, 43, 1);

starpu_mpi_task_insert(MPI_COMM_WORLD, &cl, STARPU_W, pernode, 0); /* Will be replicated on all nodes */

starpu_mpi_task_insert(MPI_COMM_WORLD, &cl2, STARPU_RW, data0, STARPU_R, pernode, 0); /* Will execute on node 0, using its own pernode */
starpu_mpi_task_insert(MPI_COMM_WORLD, &cl2, STARPU_RW, data1, STARPU_R, pernode, 0); /* Will execute on node 1, using its own pernode */
\endcode

One can turn a normal data into per-node data, by first broadcasting it to all nodes:

\code{.c}
starpu_data_handle_t data;
starpu_variable_data_register(&data, -1, 0, sizeof(val));
starpu_mpi_data_register(data, 42, 0);

/* Compute some value */
starpu_mpi_task_insert(MPI_COMM_WORLD, &cl, STARPU_W, data, 0); /* Node 0 computes it */

/* Get it on all nodes */
starpu_mpi_get_data_on_all_nodes_detached(MPI_COMM_WORLD, data);
/* And turn it per-node */
starpu_mpi_data_set_rank(data, STARPU_MPI_PER_NODE);
\endcode

The data can then be used just like the per-node data above.

\section MPIMpiRedux Inter-node reduction

One might want to leverage a reduction pattern across several nodes.
Using \c STARPU_REDUX, one can obtain reduction patterns across several nodes;
however, each core on the contributing nodes will spawn its own
contribution to work with. In case these allocations or the
required reductions are too expensive to execute for each contribution,
the access mode \c STARPU_MPI_REDUX tells StarPU to spawn only one contribution
per node executing tasks partaking in the reduction.

Tasks producing a result in the inter-node reduction should be registered as
accessing the contribution through \c STARPU_RW|STARPU_COMMUTE mode.

\code{.c}
static struct starpu_codelet contrib_cl =
{
    .cpu_funcs = {cpu_contrib},            /* cpu implementation(s) of the routine */
    .nbuffers = 1,                         /* number of data handles referenced by this routine */
    .modes = {STARPU_RW | STARPU_COMMUTE}, /* access modes for the contribution */
    .name = "contribution"
};
\endcode

When inserting these tasks, the access mode handed out to the StarPU-MPI layer
should be \c STARPU_MPI_REDUX. Assuming \c data is owned by node 0 and we want node
1 to compute the contribution, we could do the following.

\code{.c}
starpu_mpi_task_insert(MPI_COMM_WORLD, &contrib_cl, STARPU_MPI_REDUX, data, STARPU_EXECUTE_ON_NODE, 1, 0); /* Node 1 computes it */
\endcode
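
For the reduction itself to be performed, the handle must, as for intra-node \c STARPU_REDUX, have been associated with an initialization codelet and a reduction codelet. A minimal sketch, assuming hypothetical application-provided codelets <c>init_cl</c> (which initializes a contribution buffer) and <c>redux_cl</c> (which merges one contribution into another), could be:

\code{.c}
/* init_cl and redux_cl are hypothetical application-provided codelets:
 * init_cl initializes a contribution buffer, redux_cl merges two contributions */
starpu_data_set_reduction_methods(data, &redux_cl, &init_cl);
\endcode
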

\section MPIPriorities Priorities

All send functions have a <c>_prio</c> variant which takes an additional
priority parameter, which allows StarPU-MPI to change the order of MPI
requests before submitting them to MPI. The default priority is \c 0.

When using the starpu_mpi_task_insert() helper, ::STARPU_PRIORITY defines both the
task priority and the MPI requests priority.

To check how much of an effect MPI priorities have on performance, you can
set the environment variable \ref STARPU_MPI_PRIORITIES to \c 0 to disable the use of
priorities in StarPU-MPI.
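
As a brief sketch, when using the insert-task helper, the priority can simply be passed along with ::STARPU_PRIORITY (here with a hypothetical \c prio value chosen by the application); StarPU-MPI will reuse it for the MPI requests generated for this task:

\code{.c}
int prio = 10; /* application-chosen priority */

starpu_mpi_task_insert(MPI_COMM_WORLD, &stencil5_cl,
                       STARPU_RW, data_handles[x][y],
                       STARPU_R, data_handles[x-1][y],
                       STARPU_PRIORITY, prio,
                       0);
\endcode
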
\section MPICache MPI Cache Support

StarPU-MPI automatically optimizes duplicate data transmissions: if an MPI
node \c B needs a piece of data \c D from MPI node \c A for several tasks, only one
transmission of \c D will take place from \c A to \c B, and the value of \c D will be kept
on \c B as long as no task modifies \c D.

If a task modifies \c D, \c B will wait for all tasks which need the previous value of
\c D before invalidating the value of \c D, and as a consequence releasing the memory
occupied by \c D. Whenever a task running on \c B needs the new value of \c D, allocation
will take place again to receive it.

Since tasks can be submitted dynamically, StarPU-MPI can not know whether the
current value of data \c D will again be used by a newly-submitted task before
being modified by another newly-submitted task, so until a task is submitted to
modify the current value, it can not decide by itself whether to flush the cache
or not. The application can however explicitly tell StarPU-MPI to flush the
cache by calling starpu_mpi_cache_flush() or starpu_mpi_cache_flush_all_data(),
for instance in case the data will not be used at all any more (see for instance
the Cholesky example in <c>mpi/examples/matrix_decomposition</c>), or at least not in
the close future. If a newly-submitted task actually needs the value again,
another transmission of \c D will be initiated from \c A to \c B. A mere
starpu_mpi_cache_flush_all_data() call can for instance be added at the end of the whole
algorithm, to express that no data will be reused after this (or at least that
it is not interesting to keep them in cache). It may however be interesting to
add fine-grain starpu_mpi_cache_flush() calls during the algorithm; the effect
for the data deallocation will be the same, but it will additionally release some
pressure from the StarPU-MPI cache hash table during task submission.
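
As a small sketch, assuming the stencil handles above are no longer needed after some phase of the algorithm, the flushes could be written as:

\code{.c}
/* This data will not be needed again on this node: drop any cached copy */
starpu_mpi_cache_flush(MPI_COMM_WORLD, data_handles[x][y]);

/* ... or, at the very end of the algorithm, flush everything at once */
starpu_mpi_cache_flush_all_data(MPI_COMM_WORLD);
\endcode
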
One can determine whether a piece of data is cached with
starpu_mpi_cached_receive() and starpu_mpi_cached_send().

Functions starpu_mpi_cached_receive_set() and
starpu_mpi_cached_send_set() are automatically called by
starpu_mpi_task_insert(), but can also be called directly by the
application. Functions starpu_mpi_cached_send_clear() and
starpu_mpi_cached_receive_clear() must be called to clear data from
the cache. They are also automatically called when using
starpu_mpi_task_insert().

The whole caching behavior can be disabled thanks to the \ref STARPU_MPI_CACHE
environment variable. The variable \ref STARPU_MPI_CACHE_STATS can be set to <c>1</c>
to make the runtime display messages when data are added to or removed
from the cache holding the received data.

\section MPIMigration MPI Data Migration

The application can dynamically change its mind about the data distribution, to
balance the load over MPI nodes for instance. This can be done very simply by
requesting an explicit move and then changing the registered rank. For instance,
we here switch to a new distribution function <c>my_distrib2</c>: we first
register any data which wasn't registered already but will be needed, then
migrate the data, and register the new location.

\code{.c}
for(x = 0; x < X; x++)
{
    for (y = 0; y < Y; y++)
    {
        int mpi_rank = my_distrib2(x, y, size);
        if (!data_handles[x][y] && (mpi_rank == my_rank
              || my_rank == my_distrib(x+1, y, size) || my_rank == my_distrib(x-1, y, size)
              || my_rank == my_distrib(x, y+1, size) || my_rank == my_distrib(x, y-1, size)))
            /* Register newly-needed data */
            starpu_variable_data_register(&data_handles[x][y], -1, (uintptr_t)NULL, sizeof(unsigned));
        if (data_handles[x][y])
        {
            /* Migrate the data */
            starpu_mpi_data_migrate(MPI_COMM_WORLD, data_handles[x][y], mpi_rank);
        }
    }
}
\endcode

From then on, further task submissions will use the new data distribution,
which will thus change both MPI communications and task assignments.

Very importantly, since all nodes have to agree on which node owns which data
so as to determine MPI communications and task assignments the same way, all
nodes have to perform the same data migration, and at the same point among task
submissions. It thus does not require a strict synchronization, just a clear
separation of task submissions before and after the data redistribution.

Before data unregistration, it has to be migrated back to its original home
node (the value, at least), since that is where the user-provided buffer
resides. Otherwise the unregistration will complain that it does not have the
latest value on the original home node.

\code{.c}
for(x = 0; x < X; x++)
{
    for (y = 0; y < Y; y++)
    {
        if (data_handles[x][y])
        {
            int mpi_rank = my_distrib(x, y, size);
            /* Get the data back to the original place where the user-provided buffer is. */
            starpu_mpi_get_data_on_node_detached(MPI_COMM_WORLD, data_handles[x][y], mpi_rank, NULL, NULL);
            /* And unregister it */
            starpu_data_unregister(data_handles[x][y]);
        }
    }
}
\endcode

\section MPICollective MPI Collective Operations

The functions are described in \ref MPICollectiveOperations.

\code{.c}
if (rank == root)
{
    /* Allocate the vector */
    vector = malloc(nblocks * sizeof(float *));
    for(x=0 ; x<nblocks ; x++)
    {
        starpu_malloc((void **)&vector[x], block_size*sizeof(float));
    }
}

/* Allocate data handles and register data to StarPU */
data_handles = malloc(nblocks*sizeof(starpu_data_handle_t *));
for(x = 0; x < nblocks ; x++)
{
    int mpi_rank = my_distrib(x, nodes);
    if (rank == root)
    {
        starpu_vector_data_register(&data_handles[x], STARPU_MAIN_RAM, (uintptr_t)vector[x], block_size, sizeof(float));
    }
    else if ((mpi_rank == rank) || ((rank == mpi_rank+1 || rank == mpi_rank-1)))
    {
        /* I own this index, or I will need it for my computations */
        starpu_vector_data_register(&data_handles[x], -1, (uintptr_t)NULL, block_size, sizeof(float));
    }
    else
    {
        /* I know it's useless to allocate anything for this */
        data_handles[x] = NULL;
    }
    if (data_handles[x])
    {
        starpu_mpi_data_register(data_handles[x], x, mpi_rank);
    }
}

/* Scatter the matrix among the nodes */
starpu_mpi_scatter_detached(data_handles, nblocks, root, MPI_COMM_WORLD, NULL, NULL, NULL, NULL);

/* Calculation */
for(x = 0; x < nblocks ; x++)
{
    if (data_handles[x])
    {
        int owner = starpu_data_get_rank(data_handles[x]);
        if (owner == rank)
        {
            starpu_task_insert(&cl, STARPU_RW, data_handles[x], 0);
        }
    }
}

/* Gather the matrix on the main node */
starpu_mpi_gather_detached(data_handles, nblocks, 0, MPI_COMM_WORLD, NULL, NULL, NULL, NULL);
\endcode

Other collective operations would be easy to define, just ask starpu-devel for
them!

\section MPIDriver Make StarPU-MPI Progression Thread Execute Tasks

The default behaviour of StarPU-MPI is to spawn an MPI thread to take care only
of MPI communications in an active fashion (i.e. the StarPU-MPI thread sleeps
only when there is no active request submitted by the application), with the
goal of being as reactive as possible to communications. Knowing that, users
usually leave one free core for the MPI thread when starting a distributed
execution with StarPU-MPI. However, this could result in a loss of performance
for applications that do not require extreme reactivity to MPI
communications.

The starpu_mpi_init_conf() routine allows the user to give the
starpu_conf configuration structure of StarPU (usually given to the
starpu_init() routine) to StarPU-MPI, so that StarPU-MPI reserves for its own
use one of the CPU drivers of the current computing node, or one of the CPU
cores, and then calls starpu_init() internally.

This allows the MPI communication thread to call a StarPU CPU driver to run
tasks when there are no active requests to take care of, and thus recover the
computational power of the "lost" core. Since there is a trade-off between
executing tasks and polling MPI requests -- i.e. how much the application
wants to lose in reactivity to MPI communications to get back the computing
power of the core dedicated to the StarPU-MPI thread -- there are two environment
variables to pilot the behaviour of the MPI thread, so that users can tune
this trade-off depending on the behaviour of the application.

The \ref STARPU_MPI_DRIVER_CALL_FREQUENCY environment variable sets how many times
the MPI progression thread goes through the MPI_Test() loop on each active communication request
(and thus tries to make communications progress by going into the MPI layer)
before executing tasks. The default value for this environment variable is 0,
which means that the support for interleaving task execution and communication
polling is deactivated, thus returning the MPI progression thread to its
original behaviour.

The \ref STARPU_MPI_DRIVER_TASK_FREQUENCY environment variable sets how many tasks
are executed by the MPI communication thread before checking all active
requests again. While this environment variable allows a better use of the core
dedicated to StarPU-MPI for computations, it also decreases the reactivity of
the MPI communication thread as much.
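
As a tentative starting point (the values below are arbitrary and would need tuning for a real application), one could for instance try:

\code{.sh}
export STARPU_MPI_DRIVER_CALL_FREQUENCY=16
export STARPU_MPI_DRIVER_TASK_FREQUENCY=4
\endcode
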
\section MPIDebug Debugging MPI

Communication traces will be enabled when the environment variable
\ref STARPU_MPI_COMM is set to \c 1, and StarPU has been configured with the
option \ref enable-verbose "--enable-verbose".

Statistics will be enabled for the communication cache when the
environment variable \ref STARPU_MPI_CACHE_STATS is set to \c 1. It
prints messages on the standard output when data are added to or removed
from the received communication cache.

When the environment variable \ref STARPU_COMM_STATS is set to \c 1,
StarPU will display, at the end of the execution, for each node the
volume and the bandwidth of data sent to all the other nodes.

Here is an example of such a trace.

\verbatim
[starpu_comm_stats][3] TOTAL: 476.000000 B 0.000454 MB 0.000098 B/s 0.000000 MB/s
[starpu_comm_stats][3:0] 248.000000 B 0.000237 MB 0.000051 B/s 0.000000 MB/s
[starpu_comm_stats][3:2] 50.000000 B 0.000217 MB 0.000047 B/s 0.000000 MB/s
[starpu_comm_stats][2] TOTAL: 288.000000 B 0.000275 MB 0.000059 B/s 0.000000 MB/s
[starpu_comm_stats][2:1] 70.000000 B 0.000103 MB 0.000022 B/s 0.000000 MB/s
[starpu_comm_stats][2:3] 288.000000 B 0.000172 MB 0.000037 B/s 0.000000 MB/s
[starpu_comm_stats][1] TOTAL: 188.000000 B 0.000179 MB 0.000038 B/s 0.000000 MB/s
[starpu_comm_stats][1:0] 80.000000 B 0.000114 MB 0.000025 B/s 0.000000 MB/s
[starpu_comm_stats][1:2] 188.000000 B 0.000065 MB 0.000014 B/s 0.000000 MB/s
[starpu_comm_stats][0] TOTAL: 376.000000 B 0.000359 MB 0.000077 B/s 0.000000 MB/s
[starpu_comm_stats][0:1] 376.000000 B 0.000141 MB 0.000030 B/s 0.000000 MB/s
[starpu_comm_stats][0:3] 10.000000 B 0.000217 MB 0.000047 B/s 0.000000 MB/s
\endverbatim

These statistics can be plotted as heatmaps using the StarPU tool <c>starpu_mpi_comm_matrix.py</c>. This will produce two PDF files, one plot for the bandwidth, and one plot for the data volume.

\image latex trace_bw_heatmap.pdf "Bandwidth Heatmap" width=0.5\textwidth
\image html trace_bw_heatmap.png "Bandwidth Heatmap"
\image latex trace_volume_heatmap.pdf "Data Volume Heatmap" width=0.5\textwidth
\image html trace_volume_heatmap.png "Data Volume Heatmap"

\section MPIExamples More MPI examples

MPI examples are available in the StarPU source code in <c>mpi/examples</c>:

<ul>
<li>
<c>comm</c> shows how to use communicators with StarPU-MPI,
</li>
<li>
<c>complex</c> is a simple example using a user-defined data interface over
MPI (complex numbers),
</li>
<li>
<c>stencil5</c> is a simple stencil example using starpu_mpi_task_insert(),
</li>
<li>
<c>matrix_decomposition</c> is a Cholesky decomposition example using
starpu_mpi_task_insert(). The non-distributed version can check for
algorithm correctness in a 1-node configuration, the distributed version uses
exactly the same source code, to be used over MPI,
</li>
<li>
<c>mpi_lu</c> is an LU decomposition example, provided in three versions:
<c>plu_example</c> uses explicit MPI data transfers, <c>plu_implicit_example</c>
uses implicit MPI data transfers, <c>plu_outofcore_example</c> uses implicit MPI
data transfers and supports data matrices which do not fit in memory (out-of-core).
</li>
</ul>

\section Nmad Using the NewMadeleine communication library

NewMadeleine (see http://pm2.gforge.inria.fr/newmadeleine/, part of the PM2
project) is an optimizing communication library for high-performance networks.
NewMadeleine provides its own interface, but also an MPI interface (called
MadMPI). Thus there are two possibilities to use NewMadeleine with StarPU:

<ul>
<li>
using NewMadeleine's native interface. StarPU supports this interface since
its release 1.3.0, by enabling the \c configure option \ref enable-nmad
"--enable-nmad". In this case, StarPU relies directly on NewMadeleine to make
communications progress, and NewMadeleine has to be built with the profile
<c>pukabi+madmpi.conf</c>.
</li>
<li>
using NewMadeleine's MPI interface (MadMPI). StarPU will use the standard
MPI API and NewMadeleine will handle the calls to the MPI API. In this case,
StarPU makes communications progress and thus communication progress has to be
disabled in NewMadeleine by compiling it with the profile
<c>pukabi+madmpi-mini.conf</c>.
</li>
</ul>

To build NewMadeleine, download the latest version from the website (or,
better, use the Git version to get the most recent version), then:

\code{.sh}
cd pm2/scripts
./pm2-build-packages ./<the profile you chose> --prefix=<installation prefix>
\endcode

With Guix, NewMadeleine's native interface can be used by setting the
parameter \c \-\-with-input=openmpi=nmad and MadMPI can be used with \c
\-\-with-input=openmpi=nmad-mini.

Whatever implementation (NewMadeleine or MadMPI) is used by StarPU, the public
MPI interface of StarPU (described in \ref API_MPI_Support) is the same.

\section MPIMasterSlave MPI Master Slave Support

StarPU provides another way to execute applications across many
nodes. The Master-Slave support allows using remote cores without
thinking about data distribution. This support can be activated with
the \c configure option \ref enable-mpi-master-slave
"--enable-mpi-master-slave". However, you should not activate both MPI
support and MPI Master-Slave support.

The existing kernels for CPU devices can be used as such. They only have to be
exposed through the name of the function in the \ref starpu_codelet::cpu_funcs_name field.
Functions have to be globally-visible (i.e. not static) for StarPU to
be able to look them up, and <c>-rdynamic</c> must be passed to gcc (or
<c>-export-dynamic</c> to ld) so that symbols of the main program are visible.

Optionally, you can choose to use another function on slaves thanks to
the field \ref starpu_codelet::mpi_ms_funcs.
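
A minimal sketch of such a codelet, assuming a globally-visible kernel function <c>my_kernel</c> (a hypothetical name), could be:

\code{.c}
/* The kernel must not be static, so that its symbol can be looked up on the slaves */
void my_kernel(void *buffers[], void *cl_arg)
{
    /* ... */
}

static struct starpu_codelet my_cl =
{
    .cpu_funcs = {my_kernel},
    .cpu_funcs_name = {"my_kernel"}, /* name used by the MPI master-slave support */
    .nbuffers = 1,
    .modes = {STARPU_RW}
};
\endcode
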
By default, one core is dedicated on the master node to manage the
entire set of slaves. If the implementation of MPI you are using has
good multiple-thread support, you can use the \c configure option
\ref with-mpi-master-slave-multiple-thread "--with-mpi-master-slave-multiple-thread"
to dedicate one core per slave.

Choosing the number of cores on each slave device is done by setting
the environment variable \ref STARPU_NMPIMSTHREADS "STARPU_NMPIMSTHREADS=\<number\>"
with <c>\<number\></c> being the requested number of cores. By default
all the slave's cores are used.

Setting the number of slave nodes is done by changing the <c>-n</c>
parameter when executing the application with mpirun or mpiexec.

The master node is by default the node with the MPI rank equal to 0.
To select another node, use the environment variable \ref
STARPU_MPI_MASTER_NODE "STARPU_MPI_MASTER_NODE=\<number\>" with
<c>\<number\></c> being the requested MPI rank node.

*/