
/*
 * This file is part of the StarPU Handbook.
 * Copyright (C) 2009--2011 Universit@'e de Bordeaux 1
 * Copyright (C) 2010, 2011, 2012, 2013 Centre National de la Recherche Scientifique
 * Copyright (C) 2011, 2012 Institut National de Recherche en Informatique et Automatique
 * See the file version.doxy for copying conditions.
 */
/*! \page SchedulingContexts Scheduling Contexts

TODO: improve!

\section GeneralIdeas General Ideas

Scheduling contexts represent abstract sets of workers that allow
programmers to control the distribution of computational resources
(i.e. CPUs and GPUs) among concurrent parallel kernels. The main goal is
to minimize the interference between the executions of multiple parallel
kernels, by partitioning the underlying pool of workers using
contexts.

\section CreatingAContext Creating A Context

By default, the application submits tasks to an initial context, which
disposes of all the computation resources available to StarPU (all
the workers). If the application programmer plans to launch several
parallel kernels simultaneously, by default these kernels will be
executed within this initial context, using a single scheduling
policy (see \ref TaskSchedulingPolicy). If, however, the application
programmer is aware of the demands of these kernels and of the
specificity of the machine used to execute them, the workers can be
divided between several contexts. These scheduling contexts will
isolate the execution of each kernel and will permit the use of a
scheduling policy proper to each one of them. In order to create the
contexts, you have to know the identifiers of the workers running
within StarPU. By passing a set of workers together with the
scheduling policy to the function starpu_sched_ctx_create(), you will
get an identifier of the newly created context, which you can then use to
indicate the context you want to submit the tasks to.

\code{.c}
/* the list of resources the context will manage */
int workerids[3] = {1, 3, 10};

/* indicate the scheduling policy to be used within the context, the list of
   workers assigned to it, the number of workers, and the name of the context */
int id_ctx = starpu_sched_ctx_create("dmda", workerids, 3, "my_ctx");

/* let StarPU know that the following tasks will be submitted to this context */
starpu_sched_ctx_set_task_context(id_ctx);

/* submit the task to StarPU */
starpu_task_submit(task);
\endcode

Note: the parallel greedy and parallel heft scheduling policies do not support the existence of several disjoint contexts on the machine.
Combined workers are constructed depending on the entire topology of the machine, not only the part belonging to the context.

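
As an alternative to changing the current context globally with starpu_sched_ctx_set_task_context(), each task can also be sent to an explicit context with starpu_task_submit_to_ctx(), the call used in the deletion example further down. A minimal sketch, where <c>my_codelet</c> is a hypothetical codelet and <c>id_ctx</c> comes from starpu_sched_ctx_create() as above:

\code{.c}
/* sketch: submit a task directly to a given context, without changing
   the current context of the submitting thread */
struct starpu_task *task = starpu_task_create();
task->cl = &my_codelet;                  /* hypothetical codelet */
starpu_task_submit_to_ctx(task, id_ctx); /* id_ctx from starpu_sched_ctx_create() */
\endcode

This form is convenient when tasks for several contexts are submitted from the same thread, interleaved.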
\section ModifyingAContext Modifying A Context

A scheduling context can be modified dynamically. The application may
change its requirements during execution, and the programmer can
add workers to a context or remove them if they are no longer needed. In
the following example we have two scheduling contexts,
<c>sched_ctx1</c> and <c>sched_ctx2</c>. After executing a part of the
tasks, some of the workers of <c>sched_ctx1</c> will be moved to
context <c>sched_ctx2</c>.

\code{.c}
/* the list of resources that context 1 will give away */
int workerids[3] = {1, 3, 10};

/* add the workers to context 2 */
starpu_sched_ctx_add_workers(workerids, 3, sched_ctx2);

/* remove the workers from context 1 */
starpu_sched_ctx_remove_workers(workerids, 3, sched_ctx1);
\endcode

\section DeletingAContext Deleting A Context

When a context is no longer needed, it must be deleted. The application
can indicate which context should inherit the resources of the deleted one.
All the tasks of the context should be executed before doing this. If
the application needs to avoid a barrier before moving the resources
from the deleted context to the inheritor one, it can
simply indicate when the last task has been submitted. Then, as soon as this
last task finishes executing, the resources are moved; the context must
still be deleted at some point of the application.

\code{.c}
/* when context 2 is deleted, context 1 will keep its resources */
starpu_sched_ctx_set_inheritor(sched_ctx2, sched_ctx1);

/* submit tasks to context 2 */
for (i = 0; i < ntasks; i++)
	starpu_task_submit_to_ctx(task[i], sched_ctx2);

/* indicate that context 2 has finished submitting, so that as soon as
   the last task of context 2 finishes executing, its workers can be
   moved to the inheritor context */
starpu_sched_ctx_finished_submit(sched_ctx2);

/* wait for the tasks of both contexts to finish */
starpu_task_wait_for_all();

/* delete context 2 */
starpu_sched_ctx_delete(sched_ctx2);

/* delete context 1 */
starpu_sched_ctx_delete(sched_ctx1);
\endcode

\section EmptyingAContext Emptying A Context

A context may have no resources at the beginning of the execution or at
some later moment. Tasks can still be submitted to such a context,
and it will execute them as soon as it has resources again. A list
of pending tasks is kept, and they are submitted to the workers as soon
as workers are added to the context. However, if no resources are ever
allocated, the program will not terminate. If these tasks have low
priority, the programmer can prevent the application from submitting them
by calling the function starpu_sched_ctx_stop_task_submission().

\section ContextsSharingWorkers Contexts Sharing Workers

Contexts may share workers when a single context cannot execute
efficiently enough alone on these workers, or when the application
decides to express a hierarchy of contexts. The workers apply a
round-robin algorithm to choose the context from which they will
``pop'' next. By using the function
starpu_sched_ctx_set_turn_to_other_ctx(), the programmer can impose
that worker <c>workerid</c> ``pop'' from the context <c>sched_ctx_id</c>
next.
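
The round-robin behaviour described above can be modeled in a few lines of plain C. This is only an illustrative sketch of the policy, not StarPU's actual implementation: the function <c>next_ctx()</c>, the <c>ctxs</c> array, and the per-worker <c>turn</c> counter are all hypothetical names.

\code{.c}
#include <stdio.h>

/* illustrative model: a worker shared by several contexts cycles over
   them in round-robin order to decide where to pop the next task */
static unsigned next_ctx(const unsigned *ctxs, unsigned nctxs, unsigned *turn)
{
	unsigned ctx = ctxs[*turn];
	*turn = (*turn + 1) % nctxs; /* advance the worker's round-robin counter */
	return ctx;
}

int main(void)
{
	unsigned ctxs[] = {1, 2, 5}; /* contexts this worker belongs to */
	unsigned turn = 0;
	int i;

	for (i = 0; i < 6; i++)
		printf("pop from context %u\n", next_ctx(ctxs, 3, &turn));
	/* pops from contexts 1, 2, 5, 1, 2, 5 */
	return 0;
}
\endcode

In this model, starpu_sched_ctx_set_turn_to_other_ctx() would amount to overwriting the worker's <c>turn</c> counter so that the chosen context is served next.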
  107. */