/* StarPU --- Runtime system for heterogeneous multicore architectures.
 *
 * Copyright (C) 2015-2019 CNRS
 * Copyright (C) 2015,2018 Université de Bordeaux
 * Copyright (C) 2015,2016 Inria
 *
 * StarPU is free software; you can redistribute it and/or modify
 * it under the terms of the GNU Lesser General Public License as published by
 * the Free Software Foundation; either version 2.1 of the License, or (at
 * your option) any later version.
 *
 * StarPU is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
 *
 * See the GNU Lesser General Public License in COPYING.LGPL for more details.
 */
/*! \page ClusteringAMachine Clustering A Machine

TODO: clarify and add more explanations, express how to create clusters
using the context API.

\section GeneralIdeas General Ideas

Clusters are a concept introduced in this
<a href="https://hal.inria.fr/view/index/docid/1181135">paper</a>. The
underlying idea is to make use of two levels of parallelism in a DAG: we
keep the DAG parallelism, but consider on top of it that a task can
contain internal parallelism. A good example is a DAG in which each task
is OpenMP-enabled.

The particularity of such tasks is that we combine the power of two
runtime systems: StarPU manages the DAG parallelism and another
runtime (e.g. OpenMP) manages the internal parallelism. The challenge
is to create an interface between the two runtime systems so that StarPU
can regroup cores inside a machine (creating what we call a "cluster") on
top of which the parallel tasks (e.g. OpenMP tasks) will be run in a
contained fashion.

The aim of the cluster API is to facilitate this process in an automatic
fashion. For this purpose, we rely on the \c hwloc tool to detect the
machine configuration and then partition it into usable clusters.

An example of code running on clusters is available in
<c>examples/sched_ctx/parallel_tasks_with_cluster_api.c</c>.

Let's first look at how to create a cluster.
\section CreatingClusters Creating Clusters

Partitioning a machine into clusters with the cluster API is fairly
straightforward. The simplest way is to state under which machine
topology level we wish to regroup all resources. This level is an \c hwloc
object of type <c>hwloc_obj_type_t</c>. More information can be found in the
<a href="https://www.open-mpi.org/projects/hwloc/doc/v2.0.3/">hwloc
documentation</a>.

Once a cluster is created, the full machine is represented by an opaque
structure starpu_cluster_machine. It can be printed to show the
current machine state.

\code{.c}
struct starpu_cluster_machine *clusters;
clusters = starpu_cluster_machine(HWLOC_OBJ_SOCKET, 0);
starpu_cluster_print(clusters);

//... submit some tasks with OpenMP computations

starpu_uncluster_machine(clusters);
//... we are back in the default StarPU state
\endcode
The following graphic is an example of what a particular machine can
look like once clusterized. The main difference is that there are fewer
worker queues, and tasks will be executed on several resources at
once. The execution of these tasks is left to the internal runtime
system, represented with a dashed box around the resources.

\image latex runtime-par.eps "StarPU using parallel tasks" width=0.5\textwidth
\image html runtime-par.png "StarPU using parallel tasks"

Creating clusters as shown in the example above will create workers able to
execute OpenMP code by default. The cluster API allows parametrizing the
cluster creation: starpu_cluster_machine() takes a <c>va_list</c> of arguments
as input after the \c hwloc object, always terminated by a 0 value. These can
help creating clusters of a type different from OpenMP, or creating a more
precise partition of the machine, as the following sections show.
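As a minimal illustration of this argument-list convention, the sketch
below partitions the machine per NUMA node instead of per socket; the only
assumption is that <c>HWLOC_OBJ_NUMANODE</c> (a standard \c hwloc object
type) is available in the installed \c hwloc version.

\code{.c}
struct starpu_cluster_machine *numa_clusters;

/* only the terminating 0 is mandatory after the hwloc object; extra
   STARPU_CLUSTER_* arguments, as shown in the next sections, may be
   inserted before it */
numa_clusters = starpu_cluster_machine(HWLOC_OBJ_NUMANODE, 0);
starpu_cluster_print(numa_clusters);

starpu_uncluster_machine(numa_clusters);
\endcode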
\section ExampleOfConstrainingOpenMP Example Of Constraining OpenMP

Clusters require being able to constrain the runtime managing the internal
task parallelism (internal runtime) to the resources set by StarPU. The
purpose of this is to express how StarPU must communicate with the internal
runtime to achieve the required cooperation. In the case of OpenMP, StarPU
provides an awake thread from the cluster to execute this liaison. It
then provides, on demand, the process ids of the other resources supposed
to be in the region. Finally, thanks to an OpenMP region, we can create the
required number of threads and bind each of them to the correct resource.
These threads are then reused each time a <c>\#pragma omp parallel</c> is
encountered in the subsequent computations of our program.

The following graphic is an example of what an OpenMP-type cluster looks
like and how it is represented in StarPU. We can see that one StarPU (black)
thread is awake, and the OpenMP threads (in pink) need to be created on the
other resources.

\image latex parallel_worker2.eps "StarPU with an OpenMP cluster" width=0.3\textwidth
\image html parallel_worker2.png "StarPU with an OpenMP cluster"

Finally, the following code shows how to force OpenMP to cooperate with StarPU
and create the aforementioned OpenMP threads constrained to the cluster's
resource set:
\code{.c}
void starpu_openmp_prologue(void *sched_ctx_id)
{
	int sched_ctx = *(int*)sched_ctx_id;
	int *cpuids = NULL;
	int ncpuids = 0;
	int workerid = starpu_worker_get_id();

	//we can target only CPU workers
	if (starpu_worker_get_type(workerid) == STARPU_CPU_WORKER)
	{
		//grab all the ids inside the cluster
		starpu_sched_ctx_get_available_cpuids(sched_ctx, &cpuids, &ncpuids);
		//set the number of threads
		omp_set_num_threads(ncpuids);
#pragma omp parallel
		{
			//bind each thread to its respective resource
			starpu_sched_ctx_bind_current_thread_to_cpuid(cpuids[omp_get_thread_num()]);
		}
		free(cpuids);
	}
	return;
}
\endcode

This function is the default function used when calling starpu_cluster_machine() without extra parameters.
Clusters are based on several tools and models already available within
StarPU contexts, and merely extend contexts. More on contexts can be
read in Section \ref SchedulingContexts.
\section CreatingCustomClusters Creating Custom Clusters

Clusters can be created either with the predefined functions provided
within StarPU, or with user-defined functions to bind another runtime
inside StarPU.

The predefined cluster types provided by StarPU are
::STARPU_CLUSTER_OPENMP, ::STARPU_CLUSTER_INTEL_OPENMP_MKL and
::STARPU_CLUSTER_GNU_OPENMP_MKL. The last one is only provided if
StarPU is compiled with the \c MKL library. It uses MKL functions to
set the number of threads, which is more reliable when using an OpenMP
implementation different from the Intel one.

Here is an example creating an MKL cluster.

\code{.c}
struct starpu_cluster_machine *clusters;
clusters = starpu_cluster_machine(HWLOC_OBJ_SOCKET,
                                  STARPU_CLUSTER_TYPE, STARPU_CLUSTER_GNU_OPENMP_MKL,
                                  0);
\endcode

Using the default type ::STARPU_CLUSTER_OPENMP is similar to calling
starpu_cluster_machine() without any extra parameter.
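In other words, the two calls below should be equivalent, since
::STARPU_CLUSTER_OPENMP is the default type:

\code{.c}
struct starpu_cluster_machine *clusters;

/* default type, no extra parameter */
clusters = starpu_cluster_machine(HWLOC_OBJ_SOCKET, 0);

/* equivalently, requesting the OpenMP type explicitly */
clusters = starpu_cluster_machine(HWLOC_OBJ_SOCKET,
                                  STARPU_CLUSTER_TYPE, STARPU_CLUSTER_OPENMP,
                                  0);
\endcode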
Users can also define their own function.

\code{.c}
void foo_func(void* foo_arg);

//...

int foo_arg = 0;
struct starpu_cluster_machine *clusters;
clusters = starpu_cluster_machine(HWLOC_OBJ_SOCKET,
                                  STARPU_CLUSTER_CREATE_FUNC, &foo_func,
                                  STARPU_CLUSTER_CREATE_FUNC_ARG, &foo_arg,
                                  0);
\endcode
\section ClustersWithSchedulingContextsAPI Clusters With Scheduling Contexts API

As previously mentioned, the cluster API is implemented
on top of \ref SchedulingContexts. Its main addition is to ease the
creation of a non-overlapping partition of the machine's CPUs by using
\c hwloc, whereas scheduling contexts can use any number of any
resources.

It is therefore possible, but not recommended, to create clusters
using the scheduling contexts API. This can be useful mostly for the
most complex machine configurations, where users have to dimension
their clusters precisely by hand using their own algorithm.
\code{.c}
/* the list of resources the context will manage */
int workerids[3] = {1, 3, 10};

/* indicate the list of workers assigned to it, the number of workers,
   the name of the context and the scheduling policy to be used within
   the context */
int id_ctx = starpu_sched_ctx_create(workerids, 3, "my_ctx", 0);

/* let StarPU know that the following tasks will be submitted to this context */
starpu_sched_ctx_set_task_context(id_ctx);

/* set the interface function between StarPU and the other runtime */
task->prologue_callback_pop_func = &runtime_interface_function_here;

/* submit the task to StarPU */
starpu_task_submit(task);
\endcode

As this example illustrates, creating a context without a scheduling
policy will create a cluster. The important change is that users
have to specify an interface function between StarPU and the other runtime.
This is done through the field starpu_task::prologue_callback_pop_func. Such a function
can be similar to the OpenMP thread team creation one (see above).

Note that the OpenMP mode is the default one, both for clusters and
contexts. The result of a cluster creation is a woken-up master worker
and sleeping "slaves", which allow the master to run tasks on their
resources. To create a cluster with woken-up workers, one can use the
flag \ref STARPU_SCHED_CTX_AWAKE_WORKERS with the scheduling context
API and \ref STARPU_CLUSTER_AWAKE_WORKERS with the cluster API as
parameter to the creation function.
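As a minimal sketch, and assuming \ref STARPU_CLUSTER_AWAKE_WORKERS is
passed as a standalone flag before the terminating 0 (the reference
documentation should be checked for the exact calling convention), a
cluster with woken-up workers could be requested as follows:

\code{.c}
struct starpu_cluster_machine *clusters;

/* assumption: the flag is passed on its own, before the terminating 0 */
clusters = starpu_cluster_machine(HWLOC_OBJ_SOCKET,
                                  STARPU_CLUSTER_AWAKE_WORKERS,
                                  0);

//... submit parallel tasks to the woken-up workers

starpu_uncluster_machine(clusters);
\endcode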
*/