/*
 * This file is part of the StarPU Handbook.
 * Copyright (C) 2009--2011 Universit@'e de Bordeaux
 * Copyright (C) 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017 CNRS
 * Copyright (C) 2011, 2012, 2017 INRIA
 * See the file version.doxy for copying conditions.
 */

/*! \page MPISupport MPI Support
The integration of MPI transfers within task parallelism is done in a
very natural way by means of asynchronous interactions between the
application and StarPU. This is implemented in a separate <c>libstarpumpi</c> library
which basically provides "StarPU" equivalents of <c>MPI_*</c> functions, where
<c>void *</c> buffers are replaced with ::starpu_data_handle_t, and all
GPU-RAM-NIC transfers are handled efficiently by StarPU-MPI. The user has to
use the usual <c>mpirun</c> command of the MPI implementation to start StarPU on
the different MPI nodes.

An MPI Insert Task function provides an even more seamless transition to a
distributed application, by automatically issuing all required data transfers
according to the task graph and an application-provided data distribution.
\section ExampleDocumentation Example used in this documentation

The example below will be used as the base for this documentation. It
initializes a token on node 0, and the token is passed from node to node,
incremented by one on each step. The code is not using StarPU yet.
\code{.c}
for (loop = 0; loop < nloops; loop++)
{
    int tag = loop*size + rank;

    if (loop == 0 && rank == 0)
    {
        token = 0;
        fprintf(stdout, "Start with token value %d\n", token);
    }
    else
    {
        MPI_Recv(&token, 1, MPI_INT, (rank+size-1)%size, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    token++;

    if (loop == last_loop && rank == last_rank)
    {
        fprintf(stdout, "Finished: token value %d\n", token);
    }
    else
    {
        MPI_Send(&token, 1, MPI_INT, (rank+1)%size, tag+1, MPI_COMM_WORLD);
    }
}
\endcode
\section NotUsingMPISupport About not using the MPI support

Although StarPU provides MPI support, the application programmer may want to
keep their MPI communications as they are for a start, and only delegate task
execution to StarPU. This is possible by just using starpu_data_acquire(), for
instance:
\code{.c}
for (loop = 0; loop < nloops; loop++)
{
    int tag = loop*size + rank;

    /* Acquire the data to be able to write to it */
    starpu_data_acquire(token_handle, STARPU_W);
    if (loop == 0 && rank == 0)
    {
        token = 0;
        fprintf(stdout, "Start with token value %d\n", token);
    }
    else
    {
        MPI_Recv(&token, 1, MPI_INT, (rank+size-1)%size, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    starpu_data_release(token_handle);

    /* Task delegation to StarPU to increment the token. The execution might
     * be performed on a CPU, a GPU, etc. */
    increment_token();

    /* Acquire the updated data to be able to read from it */
    starpu_data_acquire(token_handle, STARPU_R);
    if (loop == last_loop && rank == last_rank)
    {
        fprintf(stdout, "Finished: token value %d\n", token);
    }
    else
    {
        MPI_Send(&token, 1, MPI_INT, (rank+1)%size, tag+1, MPI_COMM_WORLD);
    }
    starpu_data_release(token_handle);
}
\endcode
In that case, <c>libstarpumpi</c> is not needed. One can also use <c>MPI_Isend()</c> and
<c>MPI_Irecv()</c>, by calling starpu_data_release() only after <c>MPI_Wait()</c> or <c>MPI_Test()</c>
has notified completion, as sketched below.
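
For instance, the send path of the token example could then look like the
following sketch (illustrative only; it reuses the <c>token</c>,
<c>token_handle</c>, <c>rank</c>, <c>size</c> and <c>tag</c> variables from the
example above):

\code{.c}
/* Make sure the data is in main memory and will not be moved by StarPU
 * while MPI is using the buffer. */
starpu_data_acquire(token_handle, STARPU_R);

MPI_Request request;
MPI_Isend(&token, 1, MPI_INT, (rank+1)%size, tag+1, MPI_COMM_WORLD, &request);

/* ... other work can be done here while the send progresses ... */

MPI_Wait(&request, MPI_STATUS_IGNORE);
/* Only now may StarPU move or reuse the buffer behind the handle. */
starpu_data_release(token_handle);
\endcode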
It is however better to use <c>libstarpumpi</c>, to save the application from having to
synchronize with starpu_data_acquire(), and instead just submit all tasks and
communications asynchronously, and wait for the overall completion.
\section SimpleExample Simple Example

The flags required to compile or link against the MPI layer are
accessible with the following commands:

\verbatim
$ pkg-config --cflags starpumpi-1.3  # options for the compiler
$ pkg-config --libs starpumpi-1.3    # options for the linker
\endverbatim
\code{.c}
void increment_token(void)
{
    struct starpu_task *task = starpu_task_create();
    task->cl = &increment_cl;
    task->handles[0] = token_handle;
    starpu_task_submit(task);
}

int main(int argc, char **argv)
{
    int rank, size;

    starpu_init(NULL);
    starpu_mpi_init(&argc, &argv, 1);
    starpu_mpi_comm_rank(MPI_COMM_WORLD, &rank);
    starpu_mpi_comm_size(MPI_COMM_WORLD, &size);

    starpu_vector_data_register(&token_handle, STARPU_MAIN_RAM, (uintptr_t)&token, 1, sizeof(unsigned));

    unsigned nloops = NITER;
    unsigned loop;
    unsigned last_loop = nloops - 1;
    unsigned last_rank = size - 1;

    for (loop = 0; loop < nloops; loop++)
    {
        int tag = loop*size + rank;

        if (loop == 0 && rank == 0)
        {
            starpu_data_acquire(token_handle, STARPU_W);
            token = 0;
            fprintf(stdout, "Start with token value %d\n", token);
            starpu_data_release(token_handle);
        }
        else
        {
            starpu_mpi_irecv_detached(token_handle, (rank+size-1)%size, tag,
                                      MPI_COMM_WORLD, NULL, NULL);
        }

        increment_token();

        if (loop == last_loop && rank == last_rank)
        {
            starpu_data_acquire(token_handle, STARPU_R);
            fprintf(stdout, "Finished: token value %d\n", token);
            starpu_data_release(token_handle);
        }
        else
        {
            starpu_mpi_isend_detached(token_handle, (rank+1)%size, tag+1,
                                      MPI_COMM_WORLD, NULL, NULL);
        }
    }

    starpu_task_wait_for_all();

    starpu_mpi_shutdown();
    starpu_shutdown();

    if (rank == last_rank)
    {
        fprintf(stderr, "[%d] token = %d == %d * %d ?\n", rank, token, nloops, size);
        STARPU_ASSERT(token == nloops*size);
    }

    return 0;
}
\endcode
We have here replaced <c>MPI_Recv()</c> and <c>MPI_Send()</c> with starpu_mpi_irecv_detached()
and starpu_mpi_isend_detached(), which just submit the communication to be
performed. The only remaining synchronization with starpu_data_acquire() is at
the beginning and the end.
\section MPIInitialization How to Initialize StarPU-MPI

As seen in the previous example, one has to call starpu_mpi_init() to
initialize StarPU-MPI. The third parameter of the function indicates
whether MPI should be initialized by StarPU, or whether the application will do
it itself. If the application initializes MPI itself, it must call
<c>MPI_Init_thread()</c> with <c>MPI_THREAD_SERIALIZED</c> or
<c>MPI_THREAD_MULTIPLE</c>, since StarPU-MPI uses a separate thread to
perform the communications. <c>MPI_THREAD_MULTIPLE</c> is necessary if
the application also performs some MPI communications of its own.
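
For instance, when the application keeps control of MPI initialization, the
startup sequence could be sketched as follows (illustrative only, without
error checking):

\code{.c}
int thread_support;
/* The application initializes MPI itself, with the thread level StarPU-MPI needs. */
MPI_Init_thread(&argc, &argv, MPI_THREAD_SERIALIZED, &thread_support);
if (thread_support < MPI_THREAD_SERIALIZED)
    fprintf(stderr, "warning: MPI does not provide the required thread support\n");

starpu_init(NULL);
/* Third parameter set to 0: MPI is already initialized by the application. */
starpu_mpi_init(&argc, &argv, 0);

/* ... submit tasks and communications ... */

starpu_mpi_shutdown();
starpu_shutdown();
/* Since the application initialized MPI, it also finalizes it. */
MPI_Finalize();
\endcode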
\section PointToPointCommunication Point To Point Communication

The standard point to point communications of MPI have been
implemented. The semantics are similar to MPI's, but adapted to
the DSM provided by StarPU. An MPI request will only be submitted when
the data is available in the main memory of the node submitting the
request.

There are two types of asynchronous communications: the classic
asynchronous communications and the detached communications. The
classic asynchronous communications (starpu_mpi_isend() and
starpu_mpi_irecv()) need to be followed by a call to
starpu_mpi_wait() or to starpu_mpi_test() to wait for or to
test the completion of the communication. Waiting for or testing the
completion of detached communications is not possible: this is done
internally by StarPU-MPI, and on completion the resources are
automatically released. This mechanism is similar to the pthread
detach state attribute, which determines whether a thread will be
created in a joinable or a detached state.
For send communications, data is acquired with the mode ::STARPU_R.
When using the configure option
\ref enable-mpi-pedantic-isend "--enable-mpi-pedantic-isend", the mode
::STARPU_RW is used instead, to make sure no more than one concurrent
<c>MPI_Isend()</c> call accesses a given piece of data.
Internally, every communication is split in two: a first
message is used to exchange an envelope describing the data (i.e. its
tag and its size), and the data itself is sent in a second message. All
MPI communications submitted by StarPU use a single tag, which has a
default value and can be accessed with the functions
starpu_mpi_get_communication_tag() and
starpu_mpi_set_communication_tag(). The matching of tags with
corresponding requests is done within StarPU-MPI.
For any userland communication, the call of the corresponding function
(e.g. starpu_mpi_isend()) results in the creation of a StarPU-MPI
request; the function starpu_data_acquire_cb() is then called to
asynchronously request StarPU to fetch the data into main memory. When
the data is ready and the corresponding buffer has already been
received by MPI, it is copied into the memory of the data;
otherwise the request is stored in the <em>early requests list</em>. Sending
requests are stored in the <em>ready requests list</em>.
While requests need to be processed, the StarPU-MPI progression thread
does the following:

<ol>
<li> it polls the <em>ready requests list</em>. For all the ready
requests, the appropriate function is called to post the corresponding
MPI call. For example, an initial call to starpu_mpi_isend() will
result in a call to <c>MPI_Isend()</c>. If the request is marked as
detached, the request will then be added to the <em>detached requests
list</em>.
</li>
<li> it posts an <c>MPI_Irecv()</c> to retrieve a data envelope.
</li>
<li> it polls the <em>detached requests list</em>. For each detached
request, it tests the completion of the MPI request by calling
<c>MPI_Test()</c>. On completion, the data handle is released, and if a
callback was defined, it is called.
</li>
<li> finally, it checks whether a data envelope has been received. If so,
and if the data envelope matches a request in the <em>early requests list</em> (i.e.
the request has already been posted by the application), the
corresponding MPI call is posted (similarly to the first step above).
If the data envelope does not match any application request, a
temporary handle is created to receive the data, a StarPU-MPI request
is created and added to the <em>ready requests list</em>, and will thus be
processed in the first step of the next loop.
</li>
</ol>
\ref MPIPtpCommunication gives the list of all the
point to point communications defined in StarPU-MPI.
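
As an illustration, a classic (non-detached) exchange could be sketched as
below; this reuses <c>token_handle</c>, <c>rank</c>, <c>size</c> and <c>tag</c>
from the examples above, and <c>neighbour_handle</c> is a hypothetical handle
registered to receive the incoming value:

\code{.c}
starpu_mpi_req send_req, recv_req;
MPI_Status status;

/* Post an asynchronous send of the handle to the next rank... */
starpu_mpi_isend(token_handle, &send_req, (rank+1)%size, tag, MPI_COMM_WORLD);
/* ...and an asynchronous receive from the previous rank. */
starpu_mpi_irecv(neighbour_handle, &recv_req, (rank+size-1)%size, tag, MPI_COMM_WORLD);

/* Unlike detached communications, these requests must be completed explicitly. */
starpu_mpi_wait(&send_req, &status);
starpu_mpi_wait(&recv_req, &status);
\endcode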
\section ExchangingUserDefinedDataInterface Exchanging User Defined Data Interface

New data interfaces defined as explained in \ref DefiningANewDataInterface
can also be used within StarPU-MPI and
exchanged between nodes. Two functions need to be defined through the
type starpu_data_interface_ops. The function
starpu_data_interface_ops::pack_data takes a handle and returns a
contiguous memory buffer allocated with

\code{.c}
starpu_malloc_flags(ptr, size, 0)
\endcode

along with its size, into which the data to be conveyed
to another node should be copied. The reverse operation is
implemented in the function starpu_data_interface_ops::unpack_data, which
takes a contiguous memory buffer and recreates the data handle.
  249. \code{.c}
  250. static int complex_pack_data(starpu_data_handle_t handle, unsigned node, void **ptr, ssize_t *count)
  251. {
  252. STARPU_ASSERT(starpu_data_test_if_allocated_on_node(handle, node));
  253. struct starpu_complex_interface *complex_interface =
  254. (struct starpu_complex_interface *) starpu_data_get_interface_on_node(handle, node);
  255. *count = complex_get_size(handle);
  256. starpu_malloc_flags(ptr, *count, 0);
  257. memcpy(*ptr, complex_interface->real, complex_interface->nx*sizeof(double));
  258. memcpy(*ptr+complex_interface->nx*sizeof(double), complex_interface->imaginary,
  259. complex_interface->nx*sizeof(double));
  260. return 0;
  261. }
  262. static int complex_unpack_data(starpu_data_handle_t handle, unsigned node, void *ptr, size_t count)
  263. {
  264. STARPU_ASSERT(starpu_data_test_if_allocated_on_node(handle, node));
  265. struct starpu_complex_interface *complex_interface =
  266. (struct starpu_complex_interface *) starpu_data_get_interface_on_node(handle, node);
  267. memcpy(complex_interface->real, ptr, complex_interface->nx*sizeof(double));
  268. memcpy(complex_interface->imaginary, ptr+complex_interface->nx*sizeof(double),
  269. complex_interface->nx*sizeof(double));
  270. return 0;
  271. }
  272. static struct starpu_data_interface_ops interface_complex_ops =
  273. {
  274. ...
  275. .pack_data = complex_pack_data,
  276. .unpack_data = complex_unpack_data
  277. };
  278. \endcode
Instead of defining pack and unpack operations, users may want to attach an MPI type to their user-defined data interface. The function starpu_mpi_datatype_register() allows them to do so. This function takes three parameters: the data handle for which the MPI datatype is going to be defined, a pointer to the function which will create the MPI datatype, and a pointer to the function which will free the MPI datatype.
\code{.c}
starpu_data_handle_t handle;
starpu_complex_data_register(&handle, STARPU_MAIN_RAM, real, imaginary, 2);
starpu_mpi_datatype_register(handle, starpu_complex_interface_datatype_allocate, starpu_complex_interface_datatype_free);
\endcode
The functions to create and free the MPI datatype are defined as follows.
\code{.c}
void starpu_complex_interface_datatype_allocate(starpu_data_handle_t handle, MPI_Datatype *mpi_datatype)
{
    int ret;
    int blocklengths[2];
    MPI_Aint displacements[2];
    MPI_Datatype types[2] = {MPI_DOUBLE, MPI_DOUBLE};
    struct starpu_complex_interface *complex_interface =
        (struct starpu_complex_interface *) starpu_data_get_interface_on_node(handle, STARPU_MAIN_RAM);

    MPI_Get_address(complex_interface, displacements);
    MPI_Get_address(&complex_interface->imaginary, displacements+1);
    displacements[1] -= displacements[0];
    displacements[0] = 0;

    blocklengths[0] = complex_interface->nx;
    blocklengths[1] = complex_interface->nx;

    ret = MPI_Type_create_struct(2, blocklengths, displacements, types, mpi_datatype);
    STARPU_ASSERT_MSG(ret == MPI_SUCCESS, "MPI_Type_create_struct failed");

    ret = MPI_Type_commit(mpi_datatype);
    STARPU_ASSERT_MSG(ret == MPI_SUCCESS, "MPI_Type_commit failed");
}

void starpu_complex_interface_datatype_free(MPI_Datatype *mpi_datatype)
{
    MPI_Type_free(mpi_datatype);
}
\endcode
Note that it is important to make sure no communication occurs before the function starpu_mpi_datatype_register() is called. Otherwise the result is undefined: the data may be received before the function is called, in which case the MPI datatype is not yet known by the StarPU-MPI communication engine and the data is processed with the pack and unpack operations instead.
\code{.c}
starpu_data_handle_t handle;
starpu_complex_data_register(&handle, STARPU_MAIN_RAM, real, imaginary, 2);
starpu_mpi_datatype_register(handle, starpu_complex_interface_datatype_allocate, starpu_complex_interface_datatype_free);
starpu_mpi_barrier(MPI_COMM_WORLD);
\endcode
\section MPIInsertTaskUtility MPI Insert Task Utility

To save the programmer from having to make all communications explicit, StarPU
provides an "MPI Insert Task Utility". The principle is that the application
decides a distribution of the data over the MPI nodes by allocating it and
notifying StarPU of this decision, i.e. telling StarPU which MPI node "owns"
which data. It also decides, for each handle, an MPI tag which will be used to
exchange the content of the handle. All MPI nodes then process the whole task
graph, and StarPU automatically determines which node actually executes which
task, and triggers the required MPI transfers.

The list of functions is described in \ref MPIInsertTask.
Here is a stencil example showing how to use starpu_mpi_task_insert(). One
first needs to define a distribution function which specifies the
locality of the data. Note that the data needs to be registered to MPI
by calling starpu_mpi_data_register(). This function sets
the distribution information and the MPI tag which should be used when
communicating the data. It also allows the MPI communication cache to be
automatically cleared when unregistering the data.
\code{.c}
/* Returns the MPI node number where data is */
int my_distrib(int x, int y, int nb_nodes)
{
    /* Block distrib */
    return ((int)(x / sqrt(nb_nodes) + (y / sqrt(nb_nodes)) * sqrt(nb_nodes))) % nb_nodes;

    // /* Other examples useful for other kinds of computations */
    // /* / distrib */
    // return (x+y) % nb_nodes;

    // /* Block cyclic distrib */
    // unsigned side = sqrt(nb_nodes);
    // return x % side + (y % side) * side;
}
\endcode
Now the data can be registered within StarPU. Data which are not
owned but will be needed for computations can be registered through
the lazy allocation mechanism, i.e. with a <c>home_node</c> set to <c>-1</c>.
StarPU will automatically allocate the memory when it is used for the
first time.

One can note an optimization here (the <c>else if</c> test): we only register
data which will be needed by the tasks that we will execute.
\code{.c}
unsigned matrix[X][Y];
starpu_data_handle_t data_handles[X][Y];

for(x = 0; x < X; x++)
{
    for (y = 0; y < Y; y++)
    {
        int mpi_rank = my_distrib(x, y, size);
        if (mpi_rank == my_rank)
            /* Owning data */
            starpu_variable_data_register(&data_handles[x][y], STARPU_MAIN_RAM,
                                          (uintptr_t)&(matrix[x][y]), sizeof(unsigned));
        else if (my_rank == my_distrib(x+1, y, size) || my_rank == my_distrib(x-1, y, size)
              || my_rank == my_distrib(x, y+1, size) || my_rank == my_distrib(x, y-1, size))
            /* I don't own that index, but will need it for my computations */
            starpu_variable_data_register(&data_handles[x][y], -1,
                                          (uintptr_t)NULL, sizeof(unsigned));
        else
            /* I know it's useless to allocate anything for this */
            data_handles[x][y] = NULL;

        if (data_handles[x][y])
        {
            starpu_mpi_data_register(data_handles[x][y], x*X+y, mpi_rank);
        }
    }
}
\endcode
Now starpu_mpi_task_insert() can be called for the different
steps of the application.
\code{.c}
for(loop=0 ; loop<niter; loop++)
    for (x = 1; x < X-1; x++)
        for (y = 1; y < Y-1; y++)
            starpu_mpi_task_insert(MPI_COMM_WORLD, &stencil5_cl,
                                   STARPU_RW, data_handles[x][y],
                                   STARPU_R, data_handles[x-1][y],
                                   STARPU_R, data_handles[x+1][y],
                                   STARPU_R, data_handles[x][y-1],
                                   STARPU_R, data_handles[x][y+1],
                                   0);
starpu_task_wait_for_all();
\endcode
I.e. all MPI nodes process the whole task graph, but, as mentioned above, for
each task, only the MPI node which owns the data being written to (here,
<c>data_handles[x][y]</c>) will actually run the task. The other MPI nodes will
automatically send the required data.

Having every node process the whole task graph can become a concern as the
number of nodes grows. To avoid this, the application can prune the task
submission loops according to the data distribution, so as to only submit
tasks on the nodes which have to care about them (either to execute them, or
to send the required data).
A simple way to achieve some of this is to just add an <c>if</c> statement like this:
\code{.c}
for(loop=0 ; loop<niter; loop++)
    for (x = 1; x < X-1; x++)
        for (y = 1; y < Y-1; y++)
            if (my_distrib(x,y,size) == my_rank
             || my_distrib(x-1,y,size) == my_rank
             || my_distrib(x+1,y,size) == my_rank
             || my_distrib(x,y-1,size) == my_rank
             || my_distrib(x,y+1,size) == my_rank)
                starpu_mpi_task_insert(MPI_COMM_WORLD, &stencil5_cl,
                                       STARPU_RW, data_handles[x][y],
                                       STARPU_R, data_handles[x-1][y],
                                       STARPU_R, data_handles[x+1][y],
                                       STARPU_R, data_handles[x][y-1],
                                       STARPU_R, data_handles[x][y+1],
                                       0);
starpu_task_wait_for_all();
\endcode
This avoids the cost of passing and parsing the arguments of the skipped
starpu_mpi_task_insert() calls.
If the <c>my_distrib</c> function can be inlined by the compiler, the test itself
becomes cheaper.
If <c>size</c> can be made a compile-time constant, the compiler can
optimize the test considerably further.
If the distribution function is not too complex and the compiler is very good,
the latter can even optimize the <c>for</c> loops, thus dramatically reducing
the cost of task submission.
To quickly estimate how long task submission takes, and notably how much pruning
saves, an easy way is to measure the submission time on just one of the
MPI nodes. This can be achieved by running the application on just one MPI node
with the following environment variables:

\code
export STARPU_DISABLE_KERNELS=1
export STARPU_MPI_FAKE_RANK=2
export STARPU_MPI_FAKE_SIZE=1024
\endcode
Here we have disabled the kernel function call to skip the actual computation
time and only keep submission time, and we have asked StarPU to fake running on
MPI node 2 out of 1024 nodes.
A function starpu_mpi_task_build() is also provided, which only constructs
the task structure. All MPI nodes need to call this
function; only the node which is to execute the task will get a valid task
structure back, the others will get <c>NULL</c>. That node must then submit the task.
All nodes then need to call the function starpu_mpi_task_post_build() -- with the same
list of arguments as starpu_mpi_task_build() -- to post all the
necessary data communications.
  452. \code{.c}
  453. struct starpu_task *task;
  454. task = starpu_mpi_task_build(MPI_COMM_WORLD, &cl,
  455. STARPU_RW, data_handles[0],
  456. STARPU_R, data_handles[1],
  457. 0);
  458. if (task) starpu_task_submit(task);
  459. starpu_mpi_task_post_build(MPI_COMM_WORLD, &cl,
  460. STARPU_RW, data_handles[0],
  461. STARPU_R, data_handles[1],
  462. 0);
  463. \endcode
\section MPICache MPI cache support

StarPU-MPI automatically optimizes duplicate data transmissions: if an MPI
node B needs a piece of data D from MPI node A for several tasks, only one
transmission of D will take place from A to B, and the value of D will be kept
on B as long as no task modifies D.

If a task modifies D, B will wait for all tasks which need the previous value of
D before invalidating the value of D. As a consequence, it releases the memory
occupied by D. Whenever a task running on B needs the new value of D, allocation
will take place again to receive it.
Since tasks can be submitted dynamically, StarPU-MPI can not know whether the
current value of data D will again be used by a newly-submitted task before
being modified by another newly-submitted task, so until a task is submitted to
modify the current value, it can not decide by itself whether to flush the cache
or not. The application can however explicitly tell StarPU-MPI to flush the
cache by calling starpu_mpi_cache_flush() or starpu_mpi_cache_flush_all_data(),
for instance in case the data will not be used at all any more (see for instance
the Cholesky example in <c>mpi/examples/matrix_decomposition</c>), or at least not in
the close future. If a newly-submitted task actually needs the value again,
another transmission of D will be initiated from A to B. A mere
starpu_mpi_cache_flush_all_data() can for instance be added at the end of the whole
algorithm, to express that no data will be reused after that (or at least that
it is not interesting to keep them in cache). It may however be interesting to
add fine-grain starpu_mpi_cache_flush() calls during the algorithm; the effect
on data deallocation will be the same, but it will additionally release some
pressure on the StarPU-MPI cache hash table during task submission.

One can determine whether a piece of data is cached with starpu_mpi_cached_receive()
and starpu_mpi_cached_send().
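
As an illustration, a sketch for the stencil example above could flush the
handles which a hypothetical <c>needed_in_next_phase()</c> predicate declares
unneeded, and flush everything once the whole algorithm has been submitted:

\code{.c}
for (x = 0; x < X; x++)
    for (y = 0; y < Y; y++)
        if (data_handles[x][y] && !needed_in_next_phase(x, y))
            /* The value of this handle will not be needed here any more. */
            starpu_mpi_cache_flush(MPI_COMM_WORLD, data_handles[x][y]);

/* At the very end: nothing will be reused, drop all cached copies. */
starpu_mpi_cache_flush_all_data(MPI_COMM_WORLD);
\endcode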
The whole caching behavior can be disabled thanks to the \ref STARPU_MPI_CACHE
environment variable. The variable \ref STARPU_MPI_CACHE_STATS can be set to <c>1</c>
to enable the runtime to display messages when data are added or removed
from the cache holding the received data.
\section MPIMigration MPI Data migration

The application can dynamically change its mind about the data distribution, for
instance to balance the load over MPI nodes. This can be done very simply by
requesting an explicit move and then changing the registered rank. For instance,
we here switch to a new distribution function <c>my_distrib2</c>: we first
register any data which was not registered already but will be needed, then
migrate the data, and register the new location.
\code{.c}
for(x = 0; x < X; x++)
{
    for (y = 0; y < Y; y++)
    {
        int mpi_rank = my_distrib2(x, y, size);
        if (!data_handles[x][y] && (mpi_rank == my_rank
             || my_rank == my_distrib(x+1, y, size) || my_rank == my_distrib(x-1, y, size)
             || my_rank == my_distrib(x, y+1, size) || my_rank == my_distrib(x, y-1, size)))
            /* Register newly-needed data */
            starpu_variable_data_register(&data_handles[x][y], -1,
                                          (uintptr_t)NULL, sizeof(unsigned));
        if (data_handles[x][y])
        {
            /* Migrate the data */
            starpu_mpi_data_migrate(MPI_COMM_WORLD, data_handles[x][y], mpi_rank);
        }
    }
}
\endcode
From then on, further task submissions will use the new data distribution,
which will thus change both MPI communications and task assignments.

Very importantly, since all nodes have to agree on which node owns which data,
so as to determine MPI communications and task assignments the same way, all
nodes have to perform the same data migration, and at the same point among task
submissions. This does not require a strict synchronization, just a clear
separation of task submissions before and after the data redistribution.

Before data unregistration, the data has to be migrated back to its original home
node (the value, at least), since that is where the user-provided buffer
resides. Otherwise the unregistration will complain that it does not have the
latest value on the original home node.
\code{.c}
for(x = 0; x < X; x++)
{
    for (y = 0; y < Y; y++)
    {
        if (data_handles[x][y])
        {
            int mpi_rank = my_distrib(x, y, size);
            /* Get back data to original place where the user-provided buffer is. */
            starpu_mpi_get_data_on_node_detached(MPI_COMM_WORLD, data_handles[x][y], mpi_rank, NULL, NULL);
            /* And unregister it */
            starpu_data_unregister(data_handles[x][y]);
        }
    }
}
\endcode
\section MPICollective MPI Collective Operations

The functions are described in \ref MPICollectiveOperations.
\code{.c}
if (rank == root)
{
    /* Allocate the vector */
    vector = malloc(nblocks * sizeof(float *));
    for(x=0 ; x<nblocks ; x++)
    {
        starpu_malloc((void **)&vector[x], block_size*sizeof(float));
    }
}

/* Allocate data handles and register data to StarPU */
data_handles = malloc(nblocks*sizeof(starpu_data_handle_t *));
for(x = 0; x < nblocks ; x++)
{
    int mpi_rank = my_distrib(x, nodes);
    if (rank == root)
    {
        starpu_vector_data_register(&data_handles[x], STARPU_MAIN_RAM, (uintptr_t)vector[x],
                                    block_size, sizeof(float));
    }
    else if ((mpi_rank == rank) || (rank == mpi_rank+1 || rank == mpi_rank-1))
    {
        /* I own this index, or I will need it for my computations */
        starpu_vector_data_register(&data_handles[x], -1, (uintptr_t)NULL,
                                    block_size, sizeof(float));
    }
    else
    {
        /* I know it's useless to allocate anything for this */
        data_handles[x] = NULL;
    }
    if (data_handles[x])
    {
        starpu_mpi_data_register(data_handles[x], x, mpi_rank);
    }
}

/* Scatter the matrix among the nodes */
starpu_mpi_scatter_detached(data_handles, nblocks, root, MPI_COMM_WORLD);

/* Calculation */
for(x = 0; x < nblocks ; x++)
{
    if (data_handles[x])
    {
        int owner = starpu_data_get_rank(data_handles[x]);
        if (owner == rank)
        {
            starpu_task_insert(&cl, STARPU_RW, data_handles[x], 0);
        }
    }
}

/* Gather the matrix on the main node */
starpu_mpi_gather_detached(data_handles, nblocks, 0, MPI_COMM_WORLD);
\endcode
Other collective operations would be easy to define, just ask starpu-devel for
them!
\section MPIDebug Debugging MPI

A communication trace will be enabled when the environment variable
\ref STARPU_MPI_COMM is set to <c>1</c>, and StarPU has been configured with the
option \ref enable-verbose "--enable-verbose".

Statistics will be enabled for the communication cache when the
environment variable \ref STARPU_MPI_CACHE_STATS is set to <c>1</c>. It
prints messages on the standard output when data are added to or removed
from the received communication cache.
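
For instance, assuming the usual <c>mpirun</c> launcher and a hypothetical
<c>./my_app</c> executable, a debugging run could look like:

\verbatim
$ export STARPU_MPI_COMM=1
$ export STARPU_MPI_CACHE_STATS=1
$ mpirun -np 4 ./my_app
\endverbatim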
\section MPIExamples More MPI examples

MPI examples are available in the StarPU source code in <c>mpi/examples</c>:

<ul>
<li>
<c>comm</c> shows how to use communicators with StarPU-MPI,
</li>
<li>
<c>complex</c> is a simple example using a user-defined data interface over
MPI (complex numbers),
</li>
<li>
<c>stencil5</c> is a simple stencil example using starpu_mpi_task_insert(),
</li>
<li>
<c>matrix_decomposition</c> is a Cholesky decomposition example using
starpu_mpi_task_insert(). The non-distributed version can check for
algorithm correctness in a 1-node configuration, and the distributed version uses
exactly the same source code, to be used over MPI,
</li>
<li>
<c>mpi_lu</c> is an LU decomposition example, provided in three versions:
<c>plu_example</c> uses explicit MPI data transfers, <c>plu_implicit_example</c>
uses implicit MPI data transfers, <c>plu_outofcore_example</c> uses implicit MPI
data transfers and supports data matrices which do not fit in memory (out-of-core).
</li>
</ul>
\section MPIMasterSlave MPI Master Slave Support

StarPU provides another way to execute an application across many nodes. The Master
Slave support allows using remote cores without thinking about data distribution. This
support can be activated with the configure option
\ref enable-mpi-master-slave "--enable-mpi-master-slave". However, you should not activate
both the MPI support and the MPI Master-Slave support at the same time.

If a codelet contains a kernel for CPU devices, it is automatically eligible to be executed
on an MPI Slave device. However, you can also explicitly provide the functions to be executed
on an MPI Slave by filling the field starpu_codelet::mpi_ms_funcs. The functions have to be globally-visible (i.e. not static) for
StarPU to be able to look them up, and <c>-rdynamic</c> must be passed to gcc (or <c>-export-dynamic</c> to ld)
so that symbols of the main program are visible.
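
As an illustration, a sketch of such a codelet is given below; the kernel name
and its body are hypothetical, and the same (non-static) function is simply
reused for both the CPU and the MPI Slave implementations:

\code{.c}
/* Not static, so that StarPU can look the symbol up on the slave side
 * (hence also -rdynamic at link time). */
void scal_kernel(void *buffers[], void *cl_arg)
{
    struct starpu_vector_interface *v = (struct starpu_vector_interface *) buffers[0];
    float *data = (float *) STARPU_VECTOR_GET_PTR(v);
    unsigned n = STARPU_VECTOR_GET_NX(v);
    unsigned i;
    for (i = 0; i < n; i++)
        data[i] *= 2.0f;
}

struct starpu_codelet scal_cl =
{
    .cpu_funcs = { scal_kernel },    /* makes the codelet eligible for MPI Slaves */
    .mpi_ms_funcs = { scal_kernel }, /* or explicitly provide the MPI Slave implementation */
    .nbuffers = 1,
    .modes = { STARPU_RW },
};
\endcode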
By default, one core is dedicated on the master to manage the entire set of slaves. If the MPI
implementation has good multiple-thread support, you can use the configure option
\ref with-mpi-master-slave-multiple-thread "--with-mpi-master-slave-multiple-thread" to
dedicate one core per slave.

If you want to choose the number of cores used on each slave device, set the environment variable
\ref STARPU_NMPIMSTHREADS "STARPU_NMPIMSTHREADS=\<number\>",
where <c>\<number\></c> is the wanted number of cores. By default, all the slave's cores are used. To select
the number of slave nodes, change the <c>-n</c> parameter when executing the application with mpirun
or mpiexec.
The master node is by default the node with MPI rank 0. To modify this, use the environment variable
\ref STARPU_MPI_MASTER_NODE "STARPU_MPI_MASTER_NODE=\<number\>", where <c>\<number\></c> is the wanted MPI rank.
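
For instance, a possible way to launch a run with one master and four slaves,
using two cores on each slave (the executable name is hypothetical, and the
master counts as one of the MPI processes):

\verbatim
$ export STARPU_NMPIMSTHREADS=2
$ mpirun -np 5 ./my_app   # 1 master + 4 slaves
\endverbatim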
*/