
doc: reorganize mpi sections

Nathalie Furmento 12 years ago
parent
commit
2a81de0afd
2 changed files with 93 additions and 85 deletions
  1. +86 −3
      doc/chapters/api.texi
  2. +7 −82
      doc/chapters/mpi-support.texi

+ 86 - 3
doc/chapters/api.texi

@@ -3186,7 +3186,8 @@ start recording it again, etc.
 @menu
 * Initialisation::
 * Communication::
-* Communication cache::
+* Communication Cache::
+* MPI Insert Task::
 @end menu
 
 @node Initialisation
@@ -3319,8 +3320,8 @@ node of the array @var{source} using the n-th message tag of the array
 On completion of all the requests, @var{tag} is unlocked.
 @end deftypefun
 
-@node Communication cache
-@subsection Communication cache
+@node Communication Cache
+@subsection Communication Cache
 
 @deftypefun void starpu_mpi_cache_flush (MPI_Comm @var{comm}, starpu_data_handle_t @var{data_handle})
 Clear the send and receive communication cache for the data
@@ -3335,6 +3336,88 @@ function has to be called synchronously by all the MPI nodes.
 The function does nothing if the cache mechanism is disabled (@pxref{STARPU_MPI_CACHE}).
 @end deftypefun
 
+@node MPI Insert Task
+@subsection MPI Insert Task
+
+@deftypefun int starpu_data_set_tag (starpu_data_handle_t @var{handle}, int @var{tag})
+Tell StarPU-MPI which MPI tag to use when exchanging the data.
+@end deftypefun
+
+@deftypefun int starpu_data_get_tag (starpu_data_handle_t @var{handle})
+Returns the MPI tag to be used when exchanging the data.
+@end deftypefun
+
+@deftypefun int starpu_data_set_rank (starpu_data_handle_t @var{handle}, int @var{rank})
+Tell StarPU-MPI which MPI node "owns" a given piece of data, that is, the node
+which will always keep an up-to-date value, and will by default execute tasks
+which write to it.
+@end deftypefun
+
+@deftypefun int starpu_data_get_rank (starpu_data_handle_t @var{handle})
+Returns the last value set by @code{starpu_data_set_rank}.
+@end deftypefun
+
+@defmac STARPU_EXECUTE_ON_NODE
+This macro is used when calling @code{starpu_mpi_insert_task}, and
+must be followed by an integer value which specifies the node on which
+to execute the codelet.
+@end defmac
+
+@defmac STARPU_EXECUTE_ON_DATA
+This macro is used when calling @code{starpu_mpi_insert_task}, and
+must be followed by a data handle to specify that the node owning the
+given data will execute the codelet.
+@end defmac
+
+@deftypefun int starpu_mpi_insert_task (MPI_Comm @var{comm}, struct starpu_codelet *@var{codelet}, ...)
+Create and submit a task corresponding to @var{codelet} with the following
+arguments.  The argument list must be zero-terminated.
+
+The arguments following the codelet are of the same types as for the
+function @code{starpu_insert_task} defined in @ref{Insert Task
+Utility}. The extra argument @code{STARPU_EXECUTE_ON_NODE} followed by an
+integer specifies the MPI node on which to execute the codelet. It is also
+possible to specify that the node owning a specific piece of data will execute
+the codelet, by using @code{STARPU_EXECUTE_ON_DATA} followed by a data
+handle.
+
+The internal algorithm is as follows:
+@enumerate
+@item Find out which MPI node is going to execute the codelet.
+      @enumerate
+      @item If there is only one node owning data in W mode, it will
+      be selected;
+      @item If several nodes own data in W mode, the one
+      selected will be the one having the least data in R mode, so as
+      to minimize the amount of data to be transferred;
+      @item The argument @code{STARPU_EXECUTE_ON_NODE} followed by an
+      integer can be used to specify the node;
+      @item The argument @code{STARPU_EXECUTE_ON_DATA} followed by a
+      data handle can be used to specify that the node owning the given
+      data will execute the codelet.
+      @end enumerate
+@item Send and receive data as requested. Nodes owning data which needs to be
+read by the task send it to the MPI node which will execute the task; the
+latter receives it.
+@item Execute the codelet. This is done by the MPI node selected in the
+first step of the algorithm.
+@item If several MPI nodes own data to be written to, send written
+data back to their owners.
+@end enumerate
+
+The algorithm also includes a communication cache mechanism that
+avoids sending data twice to the same MPI node, unless the data
+has been modified in the meantime. The cache can be disabled
+(@pxref{STARPU_MPI_CACHE}).
+@c todo say more about the cache
+
+@end deftypefun
+
+@deftypefun void starpu_mpi_get_data_on_node (MPI_Comm @var{comm}, starpu_data_handle_t @var{data_handle}, int @var{node})
+Transfer data @var{data_handle} to MPI node @var{node}, sending it from its
+owner if needed. At least the target node and the owner have to call the
+function.
+@end deftypefun
 
 @node Task Bundles
 @section Task Bundles

+ 7 - 82
doc/chapters/mpi-support.texi

@@ -226,7 +226,6 @@ static struct starpu_data_interface_ops interface_complex_ops =
 @end smallexample
 @end cartouche
 
-@page
 @node MPI Insert Task Utility
 @section MPI Insert Task Utility
 
@@ -239,90 +238,14 @@ exchange the content of the handle. All MPI nodes then process the whole task
 graph, and StarPU automatically determines which node actually execute which
 task, and trigger the required MPI transfers.
 
-@deftypefun int starpu_data_set_tag (starpu_data_handle_t @var{handle}, int @var{tag})
-Tell StarPU-MPI which MPI tag to use when exchanging the data.
-@end deftypefun
-
-@deftypefun int starpu_data_get_tag (starpu_data_handle_t @var{handle})
-Returns the MPI tag to be used when exchanging the data.
-@end deftypefun
-
-@deftypefun int starpu_data_set_rank (starpu_data_handle_t @var{handle}, int @var{rank})
-Tell StarPU-MPI which MPI node "owns" a given data, that is, the node which will
-always keep an up-to-date value, and will by default execute tasks which write
-to it.
-@end deftypefun
-
-@deftypefun int starpu_data_get_rank (starpu_data_handle_t @var{handle})
-Returns the last value set by @code{starpu_data_set_rank}.
-@end deftypefun
-
-@defmac STARPU_EXECUTE_ON_NODE
-this macro is used when calling @code{starpu_mpi_insert_task}, and
-must be followed by a integer value which specified the node on which
-to execute the codelet.
-@end defmac
-
-@defmac STARPU_EXECUTE_ON_DATA
-this macro is used when calling @code{starpu_mpi_insert_task}, and
-must be followed by a data handle to specify that the node owning the
-given data will execute the codelet.
-@end defmac
-
-@deftypefun int starpu_mpi_insert_task (MPI_Comm @var{comm}, struct starpu_codelet *@var{codelet}, ...)
-Create and submit a task corresponding to @var{codelet} with the following
-arguments.  The argument list must be zero-terminated.
-
-The arguments following the codelets are the same types as for the
-function @code{starpu_insert_task} defined in @ref{Insert Task
-Utility}. The extra argument @code{STARPU_EXECUTE_ON_NODE} followed by an
-integer allows to specify the MPI node to execute the codelet. It is also
-possible to specify that the node owning a specific data will execute
-the codelet, by using @code{STARPU_EXECUTE_ON_DATA} followed by a data
-handle.
-
-The internal algorithm is as follows:
-@enumerate
-@item Find out which MPI node is going to execute the codelet.
-      @enumerate
-      @item If there is only one node owning data in W mode, it will
-      be selected;
-      @item If there is several nodes owning data in W node, the one
-      selected will be the one having the least data in R mode so as
-      to minimize the amount of data to be transfered;
-      @item The argument @code{STARPU_EXECUTE_ON_NODE} followed by an
-      integer can be used to specify the node;
-      @item The argument @code{STARPU_EXECUTE_ON_DATA} followed by a
-      data handle can be used to specify that the node owing the given
-      data will execute the codelet.
-      @end enumerate
-@item Send and receive data as requested. Nodes owning data which need to be
-read by the task are sending them to the MPI node which will execute it. The
-latter receives them.
-@item Execute the codelet. This is done by the MPI node selected in the
-1st step of the algorithm.
-@item If several MPI nodes own data to be written to, send written
-data back to their owners.
-@end enumerate
-
-The algorithm also includes a communication cache mechanism that
-allows not to send data twice to the same MPI node, unless the data
-has been modified. The cache can be disabled
-(@pxref{STARPU_MPI_CACHE}).
-@c todo parler plus du cache
-
-@end deftypefun
-
-@deftypefun void starpu_mpi_get_data_on_node (MPI_Comm @var{comm}, starpu_data_handle_t @var{data_handle}, int @var{node})
-Transfer data @var{data_handle} to MPI node @var{node}, sending it from its
-owner if needed. At least the target node and the owner have to call the
-function.
-@end deftypefun
+The list of functions is described in @ref{MPI Insert Task}.
 
 Here is a stencil example showing how to use @code{starpu_mpi_insert_task}. One
 first needs to define a distribution function which specifies the
 locality of the data. Note that the distribution information needs to
-be given to StarPU by calling @code{starpu_data_set_rank}.
+be given to StarPU by calling @code{starpu_data_set_rank}. An MPI tag
+should also be defined for each data handle by calling
+@code{starpu_data_set_tag}.
 
 @cartouche
 @smallexample
@@ -371,8 +294,10 @@ data which will be needed by the tasks that we will execute.
             else
                 /* I know it's useless to allocate anything for this */
                 data_handles[x][y] = NULL;
-            if (data_handles[x][y])
+            if (data_handles[x][y]) @{
                 starpu_data_set_rank(data_handles[x][y], mpi_rank);
+                starpu_data_set_tag(data_handles[x][y], x*X+y);
+            @}
         @}
     @}
 @end smallexample