/* StarPU --- Runtime system for heterogeneous multicore architectures.
 *
 * Copyright (C) 2015-2018 CNRS
 * Copyright (C) 2015-2016 Inria
 * Copyright (C) 2015, 2018 Université de Bordeaux
 *
 * StarPU is free software; you can redistribute it and/or modify
 * it under the terms of the GNU Lesser General Public License as published by
 * the Free Software Foundation; either version 2.1 of the License, or (at
 * your option) any later version.
 *
 * StarPU is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
 *
 * See the GNU Lesser General Public License in COPYING.LGPL for more details.
 */

/*! \page ClusteringAMachine Clustering A Machine

TODO: clarify and put more explanations, express how to create clusters
using the context API.

\section GeneralIdeas General Ideas

Clusters are a concept introduced in this
<a href="https://hal.inria.fr/view/index/docid/1181135">paper</a>. They
build on a simple idea: making use of two levels of parallelism in a DAG.
We keep the DAG parallelism, but consider on top of it that a task can
contain internal parallelism. A good example is a DAG in which each task
is OpenMP-enabled.

The particularity of such tasks is that we combine the power of two
runtime systems: StarPU manages the DAG parallelism while another
runtime (e.g. OpenMP) manages the internal parallelism. The challenge
is to create an interface between the two runtime systems so that StarPU
can regroup cores inside a machine (creating what we call a "cluster") on
top of which the parallel tasks (e.g. OpenMP tasks) will be run in a
contained fashion.

The aim of the cluster API is to facilitate this process in an automatic
fashion. For this purpose, we depend on the hwloc tool to detect the
machine configuration and then partition it into usable clusters.

An example of code running on clusters is available in
<c>examples/sched_ctx/parallel_tasks_with_cluster_api.c</c>.

Let's first look at how to create a cluster in practice, then we will detail
its internals.

\section CreatingClusters Creating Clusters

Partitioning a machine into clusters with the cluster API is fairly
straightforward. The simplest way is to state under which machine
topology level we wish to regroup all resources. This level is an hwloc
object of type <c>hwloc_obj_type_t</c>. More information can be found in the
<a href="https://www.open-mpi.org/projects/hwloc/doc/v1.11.0/a00076.php">hwloc
documentation</a>.

Once the clusters are created, the full machine is represented by an opaque
structure starpu_cluster_machine. It can be printed to show the
current machine state.

\code{.c}
struct starpu_cluster_machine *clusters;
clusters = starpu_cluster_machine(HWLOC_OBJ_SOCKET, 0);
starpu_cluster_print(clusters);

//... submit some tasks with OpenMP computations

starpu_uncluster_machine(clusters);
//... we are back in the default StarPU state
\endcode
The following figure is an example of what a particular machine can
look like once it is clusterized. The main difference is that we have fewer
worker queues, and tasks which will be executed on several resources at
once. The execution of these tasks is left to the internal runtime
system, represented with a dashed box around the resources.

\image latex runtime-par.eps "StarPU using parallel tasks" width=0.5\textwidth
\image html runtime-par.png "StarPU using parallel tasks"

Creating clusters as shown in the example above creates workers able to
execute OpenMP code by default. The cluster API also allows the cluster
creation to be parameterized: it can take a <c>va_list</c> of arguments
as input after the hwloc object (always terminated by a 0 value). These can
help create clusters of a type different from OpenMP, or create a more
precise partition of the machine.
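
For instance, the grouping level itself can be changed to obtain a different
partition of the machine. The following is a minimal sketch, assuming the
installed hwloc version provides the <c>HWLOC_OBJ_NUMANODE</c> object type;
optional parameters (described in the next sections) would be inserted before
the mandatory terminating 0.

\code{.c}
/* Group the resources under each NUMA node instead of each socket.
 * Optional parameters, if any, go between the hwloc object and the
 * terminating 0. */
struct starpu_cluster_machine *clusters;
clusters = starpu_cluster_machine(HWLOC_OBJ_NUMANODE, 0);
starpu_cluster_print(clusters);

//... submit some tasks with OpenMP computations

starpu_uncluster_machine(clusters);
\endcode
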
\section ExampleOfConstrainingOpenMP Example Of Constraining OpenMP

Clusters require being able to constrain the runtime managing the internal
task parallelism (internal runtime) to the resource set assigned by StarPU.
The purpose of this is to express how StarPU must communicate with the internal
runtime to achieve the required cooperation. In the case of OpenMP, StarPU
will provide an awake thread from the cluster to execute this liaison. It
will then provide on demand the process ids of the other resources supposed
to be in the region. Finally, thanks to an OpenMP region we can create the
required number of threads and bind each of them to the correct resource.
These will then be reused each time we encounter a <c>\#pragma omp
parallel</c> in the subsequent computations of our program.

The following figure shows what an OpenMP-type cluster looks
like and how it is represented in StarPU. We can see that one StarPU (black)
thread is awake, and we need to create on the other resources the OpenMP
threads (in pink).

\image latex parallel_worker2.eps "StarPU with an OpenMP cluster" width=0.3\textwidth
\image html parallel_worker2.png "StarPU with an OpenMP cluster"

Finally, the following code shows how to force OpenMP to cooperate with StarPU
and create the aforementioned OpenMP threads constrained to the cluster's
resource set:
\code{.c}
void starpu_openmp_prologue(void *sched_ctx_id)
{
  int sched_ctx = *(int*)sched_ctx_id;
  int *cpuids = NULL;
  int ncpuids = 0;
  int workerid = starpu_worker_get_id();

  //we can target only CPU workers
  if (starpu_worker_get_type(workerid) == STARPU_CPU_WORKER)
  {
    //grab all the ids inside the cluster
    starpu_sched_ctx_get_available_cpuids(sched_ctx, &cpuids, &ncpuids);
    //set the number of threads
    omp_set_num_threads(ncpuids);
#pragma omp parallel
    {
      //bind each thread to its respective resource
      starpu_sched_ctx_bind_current_thread_to_cpuid(cpuids[omp_get_thread_num()]);
    }
    free(cpuids);
  }
  return;
}
\endcode
This is in fact exactly the default function used when nothing else is
specified. As can be seen, the clusters are based on several tools and
models already present in StarPU contexts, merely extended to
represent and carry clusters. More on contexts can be read in
\ref SchedulingContexts.

\section CreatingCustomClusters Creating Custom Clusters

As previously mentioned, it is possible to create clusters of another
type, in order to bind another internal runtime inside StarPU.
This can be done in several ways:
- By using the currently available functions
- By passing a user-defined function as an argument

Here are two examples:
\code{.c}
struct starpu_cluster_machine *clusters;
clusters = starpu_cluster_machine(HWLOC_OBJ_SOCKET,
                                  STARPU_CLUSTER_TYPE, STARPU_CLUSTER_GNU_OPENMP_MKL,
                                  0);
\endcode
This type of cluster is only available by default if StarPU is compiled
with MKL. It uses MKL functions to set the number of threads, which is
more reliable when using an OpenMP implementation different from the
Intel one.

\code{.c}
void foo_func(void *foo_arg);

//...

int foo_arg = 0;
struct starpu_cluster_machine *clusters;
clusters = starpu_cluster_machine(HWLOC_OBJ_SOCKET,
                                  STARPU_CLUSTER_CREATE_FUNC, &foo_func,
                                  STARPU_CLUSTER_CREATE_FUNC_ARG, &foo_arg,
                                  0);
\endcode
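
For reference, a user-defined creation function can follow the same pattern as
the default starpu_openmp_prologue() shown earlier: query the resource ids of
the cluster and hand them to the internal runtime. The hypothetical
my_runtime_prologue() below is a minimal sketch, not a definitive
implementation; it assumes the function receives the scheduling context id in
the same way as the default prologue, and leaves the runtime-specific thread
setup as placeholder comments.

\code{.c}
void my_runtime_prologue(void *sched_ctx_id)
{
  int sched_ctx = *(int*)sched_ctx_id;
  int *cpuids = NULL;
  int ncpuids = 0;

  //only CPU workers can host the internal runtime
  if (starpu_worker_get_type(starpu_worker_get_id()) == STARPU_CPU_WORKER)
  {
    //retrieve the resource ids StarPU reserved for this cluster
    starpu_sched_ctx_get_available_cpuids(sched_ctx, &cpuids, &ncpuids);

    //here, spawn ncpuids threads in the internal runtime and bind each
    //of them, e.g. with starpu_sched_ctx_bind_current_thread_to_cpuid(),
    //to the corresponding entry of cpuids

    free(cpuids);
  }
}
\endcode

Such a function would then be passed to starpu_cluster_machine() through
\ref STARPU_CLUSTER_CREATE_FUNC, as in the example above.
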
\section ClustersWithSchedulingContextsAPI Clusters With Scheduling Contexts API

As previously mentioned, the cluster API is implemented
on top of \ref SchedulingContexts. Its main addition is to ease the
creation of a non-overlapping partition of the machine CPUs by using
hwloc, whereas scheduling contexts can use any number of resources of
any kind.

It is therefore possible, but not recommended, to create clusters
using the scheduling contexts API. This can be useful mostly in the
most complex machine configurations, where the user has to precisely
dimension the clusters by hand using their own algorithm.

\code{.c}
/* the list of resources the context will manage */
int workerids[3] = {1, 3, 10};

/* indicate the list of workers assigned to it, the number of workers,
the name of the context and the scheduling policy to be used within
the context */
int id_ctx = starpu_sched_ctx_create(workerids, 3, "my_ctx", 0);

/* let StarPU know that the following tasks will be submitted to this context */
starpu_sched_ctx_set_task_context(id_ctx);

task->prologue_callback_pop_func = runtime_interface_function_here;

/* submit the task to StarPU */
starpu_task_submit(task);
\endcode
As this example illustrates, creating a context without a scheduling
policy will create a cluster. The important change is that the user
has to specify an interface function between the two runtimes they
plan to use. This can be done through the
<c>prologue_callback_pop_func</c> field of the task. Such a function
can be similar to the OpenMP thread team creation one shown above.

Note that the OpenMP mode is the default one both for clusters and
contexts. The result of a cluster creation is a woken up master worker
and sleeping "slaves" which allow the master to run tasks on their
resources. To create a cluster with woken up workers, one can use the
flag \ref STARPU_SCHED_CTX_AWAKE_WORKERS with the scheduling context
API and \ref STARPU_CLUSTER_AWAKE_WORKERS with the cluster API as
parameter to the creation function.
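
As an illustration, the cluster-API variant could look like the following
minimal sketch; it assumes \ref STARPU_CLUSTER_AWAKE_WORKERS is passed by
itself, with no accompanying value, among the creation arguments.

\code{.c}
/* Sketch: one cluster per socket, with all of its workers kept awake
 * instead of the default master/sleeping-slaves configuration. */
struct starpu_cluster_machine *clusters;
clusters = starpu_cluster_machine(HWLOC_OBJ_SOCKET,
                                  STARPU_CLUSTER_AWAKE_WORKERS,
                                  0);

//... submit some tasks

starpu_uncluster_machine(clusters);
\endcode
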
*/