
/*
 * This file is part of the StarPU Handbook.
 * Copyright (C) 2009--2011 Universit@'e de Bordeaux
 * Copyright (C) 2010, 2011, 2012, 2013, 2016 CNRS
 * Copyright (C) 2011, 2012 INRIA
 * See the file version.doxy for copying conditions.
 */
/*! \page SchedulingContexts Scheduling Contexts

TODO: improve!

\section GeneralIdeas General Ideas

Scheduling contexts represent abstract sets of workers that allow
programmers to control the distribution of computational resources
(i.e. CPUs and GPUs) to concurrent parallel kernels. The main goal is
to minimize interference between the executions of multiple parallel
kernels, by partitioning the underlying pool of workers using
contexts.
\section CreatingAContext Creating A Context

By default, the application submits tasks to an initial context, which
disposes of all the computation resources available to StarPU (all
the workers). If the application programmer plans to launch several
parallel kernels simultaneously, these kernels will by default be
executed within this initial context, using a single scheduling
policy (see \ref TaskSchedulingPolicy). If the application
programmer knows the demands of these kernels and the
specifics of the machine used to execute them, the workers can be
divided between several contexts. These scheduling contexts will
isolate the execution of each kernel and permit the use of a
scheduling policy proper to each one of them.
Scheduling contexts may be created in two ways: either the programmer
indicates the set of workers corresponding to each context (provided
they know the identifiers of the workers running within StarPU), or
the programmer does not provide any worker list and lets the
Hypervisor assign workers to each context according to their needs
(see \ref SchedulingContextHypervisor).

Both cases require a call to the function
starpu_sched_ctx_create(), which takes as input the worker
list (the exact list or a NULL pointer) and a list of optional
parameters, such as the scheduling policy, terminated by a 0. The
scheduling policy can be a character string corresponding to the name of
a StarPU predefined policy or a pointer to a custom policy. The
function returns an identifier of the created context, which you will
use to indicate the context you want to submit tasks to.
Please note that if no scheduling policy is specified, the context
will be used as another type of resource: a cluster, i.e. a
context without a scheduler (possibly delegated to another
runtime). For more information see \ref ClusteringAMachine. It is
therefore <b>mandatory</b> to stipulate the context's scheduler to use
it in this traditional way.
\code{.c}
/* the list of resources the context will manage */
int workerids[3] = {1, 3, 10};

/* indicate the list of workers assigned to it, the number of workers,
the name of the context and the scheduling policy to be used within
the context */
int id_ctx = starpu_sched_ctx_create(workerids, 3, "my_ctx", STARPU_SCHED_CTX_POLICY_NAME, "dmda", 0);

/* let StarPU know that the following tasks will be submitted to this context */
starpu_sched_ctx_set_task_context(id_ctx);

/* submit the task to StarPU */
starpu_task_submit(task);
\endcode
Note: the parallel greedy and parallel heft scheduling policies do not
support the existence of several disjoint contexts on the machine.
Combined workers are constructed depending on the entire topology of
the machine, not just the part belonging to a context.
\section ModifyingAContext Modifying A Context

A scheduling context can be modified dynamically. The application may
change its requirements during the execution, and the programmer can
add workers to a context or remove them if they are no longer needed. In
the following example we have two scheduling contexts,
<c>sched_ctx1</c> and <c>sched_ctx2</c>. After executing a part of the
tasks, some of the workers of <c>sched_ctx1</c> will be moved to
context <c>sched_ctx2</c>.
\code{.c}
/* the list of resources that context 1 will give away */
int workerids[3] = {1, 3, 10};

/* add the workers to context 2 */
starpu_sched_ctx_add_workers(workerids, 3, sched_ctx2);

/* remove the workers from context 1 */
starpu_sched_ctx_remove_workers(workerids, 3, sched_ctx1);
\endcode
\section SubmittingTasksToAContext Submitting Tasks To A Context

The application may submit tasks to several contexts, either
simultaneously or sequentially. If several submission threads
are used, the function starpu_sched_ctx_set_context() may be called just
before starpu_task_submit(); StarPU then considers that
the current thread will submit tasks to the corresponding context.

When the application cannot assign a submission thread to each
context, the id of the context must be indicated by using the
function starpu_task_submit_to_ctx() or the field \ref STARPU_SCHED_CTX
for starpu_task_insert().
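
Both variants can be sketched as follows. This is an illustrative
fragment, not a complete program: <c>task</c> is assumed to be an
already-built <c>struct starpu_task *</c>, <c>cl</c> an existing
codelet, <c>handle</c> a registered data handle and <c>sched_ctx1</c>
a context created as in the previous sections.

\code{.c}
/* submit an already-built task directly to a given context */
starpu_task_submit_to_ctx(task, sched_ctx1);

/* or let starpu_task_insert() build and submit the task, targeting
   the context through the STARPU_SCHED_CTX field of its argument list */
starpu_task_insert(&cl, STARPU_SCHED_CTX, sched_ctx1, STARPU_RW, handle, 0);
\endcode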
\section DeletingAContext Deleting A Context

When a context is no longer needed, it must be deleted. The application
can indicate which context should inherit the resources of the deleted one.
All the tasks of the context should be executed before doing this.
Thus, the programmer may either use a barrier and then delete the context
directly, or indicate
that no other tasks will be submitted to the context later on (such that when
the last task is executed its workers will be moved to the inheritor)
and delete the context at the end of the execution, when a barrier is
eventually used.
\code{.c}
/* when context 2 is deleted, context 1 inherits its resources */
starpu_sched_ctx_set_inheritor(sched_ctx2, sched_ctx1);

/* submit tasks to context 2 */
for (i = 0; i < ntasks; i++)
    starpu_task_submit_to_ctx(task[i], sched_ctx2);

/* indicate that context 2 finished submitting and that */
/* as soon as the last task of context 2 finishes executing */
/* its workers can be moved to the inheritor context */
starpu_sched_ctx_finished_submit(sched_ctx2);

/* wait for the tasks of both contexts to finish */
starpu_task_wait_for_all();

/* delete context 2 */
starpu_sched_ctx_delete(sched_ctx2);

/* delete context 1 */
starpu_sched_ctx_delete(sched_ctx1);
\endcode
\section EmptyingAContext Emptying A Context

A context may have no resources at the beginning or at some point
of the execution. Tasks can still be submitted to such contexts,
and they will be executed as soon as the contexts have resources. A list
of pending tasks is kept, and when workers are added to
the contexts these tasks start being submitted. However, if resources
are never allocated to the context, the program will not terminate.
If these tasks have low
priority, the programmer can forbid the application from submitting them
by calling the function starpu_sched_ctx_stop_task_submission().
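
As a sketch of this situation (the creation parameters and the
<c>task</c> variable are illustrative, and
starpu_sched_ctx_stop_task_submission() is assumed to take no
argument, as written above):

\code{.c}
/* create a context with no workers; tasks submitted to it will wait */
unsigned empty_ctx = starpu_sched_ctx_create(NULL, 0, "empty_ctx", STARPU_SCHED_CTX_POLICY_NAME, "eager", 0);

/* tasks can still be targeted at this context ... */
starpu_task_submit_to_ctx(task, empty_ctx);

/* ... but if no workers will ever be added, forbid further
   submission so that the program can terminate */
starpu_sched_ctx_stop_task_submission();
\endcode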
\section ContextsSharingWorkers Contexts Sharing Workers

Contexts may share workers when a single context cannot execute
efficiently enough alone on these workers, or when the application
decides to express a hierarchy of contexts. The workers apply a
round-robin algorithm to choose the context from which they will
``pop'' next. By using the function
starpu_sched_ctx_set_turn_to_other_ctx(), the programmer can impose
that the worker <c>workerid</c> ``pop'' from the context
<c>sched_ctx_id</c> next.
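
A minimal sketch, assuming from the description above that the
function takes the worker identifier followed by the context the
worker should poll next:

\code{.c}
/* make worker 2 "pop" its next task from sched_ctx2 instead of
   waiting for its round-robin turn */
starpu_sched_ctx_set_turn_to_other_ctx(2, sched_ctx2);
\endcode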
*/