Browse Source

move ctxs2 in a separate file and add some more doc about how to modify and delete a context

Andra Hugo 12 years ago
parent
commit
31c5a16cee
3 changed files with 111 additions and 34 deletions
  1. +0 −34
      doc/chapters/perf-optimization.texi
  2. +102 −0
      doc/chapters/sched_ctx.texi
  3. +9 −0
      doc/starpu.texi

+ 0 - 34
doc/chapters/perf-optimization.texi

@@ -14,7 +14,6 @@ TODO: improve!
 * Task submission::
 * Task priorities::
 * Task scheduling policy::
-* Task scheduling contexts::
 * Performance model calibration::
 * Task distribution vs Data transfer::
 * Data prefetch::
@@ -206,39 +205,6 @@ parallel tasks (still experimental).
 The @b{peager} (parallel eager) scheduler is similar to eager, it also
 supports parallel tasks (still experimental).
 
-@node Task scheduling contexts
-@section Task scheduling contexts
-Task scheduling contexts represent abstracts sets of workers that allow the programmers to control the distribution of computational resources (i.e. CPUs and
-GPUs) to concurrent parallel kernels. The main goal is to minimize interferences between the execution of multiple parallel kernels, by partitioning the underlying pool of workers using contexts.
-
-By default, the application submits tasks to an initial context, which disposes of all the computation ressources available to StarPU (all the workers). 
-If the application programmer plans to launch several parallel kernels simultaneusly, by default these kernels will be executed within this initial context, using a single scheduler policy(@pxref{Task scheduling policy}).
-Meanwhile, if the application programmer is aware of the demands of these kernels and of the specificity of the machine used to execute them, the workers can be divided between several contexts. 
-These scheduling contexts will isolate the execution of each kernel and they will permit the use of a scheduling policy proper to each one of them.
-In order to create the contexts, you have to know the indentifiers of the workers running within StarPU. 
-By passing a set of workers together with the scheduling policy to the function @code{starpu_sched_ctx_create}, you will get an identifier of the context created which you will use to indicate the context you want to submit the tasks to.
-
-@cartouche
-@smallexample
-/* @b{the list of ressources the context will manage} */
-int workerids[3] = @{1, 3, 10@};
-
-/* @b{indicate the scheduling policy to be used within the context, the list of 
-   workers assigned to it, the number of workers, the name of the context} */
-int id_ctx = starpu_sched_ctx_create("heft", workerids, 3, "my_ctx");
-
-/* @b{let StarPU know that the folowing tasks will be submitted to this context} */
-starpu_sched_ctx_set_task_context(id);
-
-/* @b{submit the task to StarPU} */
-starpu_task_submit(task);
-
-@end smallexample
-@end cartouche
-
-Note: Parallel greedy and parallel heft scheduling policies do not support the existence of several disjoint contexts on the machine. 
-Combined workers are constructed depending on the entire topology of the machine, not only the one belonging to a context.
-
 @node Performance model calibration
 @section Performance model calibration
 

+ 102 - 0
doc/chapters/sched_ctx.texi

@@ -0,0 +1,102 @@
+@c -*-texinfo-*-
+
+@c This file is part of the StarPU Handbook.
+@c Copyright (C) 2009--2011  Universit@'e de Bordeaux 1
+@c Copyright (C) 2010, 2011, 2012, 2013  Centre National de la Recherche Scientifique
+@c Copyright (C) 2011 Institut National de Recherche en Informatique et Automatique
+@c See the file starpu.texi for copying conditions.
+
+TODO: improve!
+
+@menu
+* General Idea::
+* Create a Context::
+* Modify a Context::
+* Delete a Context::
+@end menu
+
+@node General Idea
+@section General Idea
+Scheduling contexts represent abstract sets of workers that allow the programmer to control the distribution of computational resources (i.e. CPUs and
+GPUs) to concurrent parallel kernels. The main goal is to minimize interference between the executions of multiple parallel kernels, by partitioning the underlying pool of workers using contexts.
+
+@node Create a Context
+@section Create a Context
+By default, the application submits tasks to an initial context, which disposes of all the computation resources available to StarPU (all the workers).
+If the application programmer plans to launch several parallel kernels simultaneously, by default these kernels will be executed within this initial context, using a single scheduling policy (@pxref{Task scheduling policy}).
+If, however, the application programmer knows the demands of these kernels and the specifics of the machine used to execute them, the workers can be divided between several contexts.
+These scheduling contexts will isolate the execution of each kernel and permit the use of a scheduling policy proper to each one of them.
+In order to create a context, you have to know the identifiers of the workers running within StarPU.
+By passing a set of workers together with a scheduling policy to the function @code{starpu_sched_ctx_create}, you will get an identifier for the newly created context, which you then use to indicate the context you want to submit tasks to.
+
+@cartouche
+@smallexample
+/* @b{the list of resources the context will manage} */
+int workerids[3] = @{1, 3, 10@};
+
+/* @b{indicate the scheduling policy to be used within the context, the list of 
+   workers assigned to it, the number of workers, the name of the context} */
+int id_ctx = starpu_sched_ctx_create("dmda", workerids, 3, "my_ctx");
+
+/* @b{let StarPU know that the following tasks will be submitted to this context} */
+starpu_sched_ctx_set_task_context(id_ctx);
+
+/* @b{submit the task to StarPU} */
+starpu_task_submit(task);
+
+@end smallexample
+@end cartouche
+
+Note: the parallel greedy and parallel heft scheduling policies do not support the existence of several disjoint contexts on the machine.
+Combined workers are constructed based on the entire topology of the machine, not only the part belonging to a context.
+
+
+@node Modify a Context
+@section Modify a Context
+A scheduling context can be modified dynamically. The application may change its requirements during execution, and the programmer can add workers to a context or remove them when no longer needed.
+In the following example we have two scheduling contexts, @code{sched_ctx1} and @code{sched_ctx2}. After executing part of the tasks, some of the workers of @code{sched_ctx1} will be moved to context @code{sched_ctx2}.
+
+@cartouche
+@smallexample
+/* @b{the list of resources that context 1 will give away} */
+int workerids[3] = @{1, 3, 10@};
+
+/* @b{add the workers to context 2} */
+starpu_sched_ctx_add_workers(workerids, 3, sched_ctx2);
+
+/* @b{remove the workers from context 1} */
+starpu_sched_ctx_remove_workers(workerids, 3, sched_ctx1);
+
+@end smallexample
+@end cartouche
+
+@node Delete a Context 
+@section Delete a Context
+When a context is no longer needed it must be deleted. The application can indicate which context should inherit the resources of the deleted one.
+All the tasks of the context should have finished executing before deleting it. If the application needs to avoid a barrier before moving the resources from the deleted context to the inheritor one, it can instead indicate
+when the last task has been submitted. Then, as soon as this last task finishes executing, the resources will be moved, but the context should still be deleted at some point of the application.
+
+@cartouche
+@smallexample
+/* @b{when context 2 is deleted, context 1 will inherit its resources} */
+starpu_sched_ctx_set_inheritor(sched_ctx2, sched_ctx1);
+
+/* @b{submit tasks to context 2} */
+for (i = 0; i < ntasks; i++)
+    starpu_task_submit_to_ctx(task[i],sched_ctx2);
+
+/* @b{indicate that context 2 finished submitting and that} */
+/* @b{as soon as the last task of context 2 finishes executing} */
+/* @b{its workers can be moved to the inheritor context} */
+starpu_sched_ctx_finished_submit(sched_ctx2);
+
+/* @b{wait for the tasks of both contexts to finish} */
+starpu_task_wait_for_all();
+
+/* @b{delete context 2} */
+starpu_sched_ctx_delete(sched_ctx2);
+
+/* @b{delete context 1} */
+starpu_sched_ctx_delete(sched_ctx1);
+@end smallexample
+@end cartouche

+ 9 - 0
doc/starpu.texi

@@ -76,6 +76,7 @@ was last updated on @value{UPDATED}.
 * StarPU FFT support::          How to perform FFT computations with StarPU
 * C Extensions::                Easier StarPU programming with GCC
 * SOCL OpenCL Extensions::      How to use OpenCL on top of StarPU
+* Scheduling Contexts in StarPU::  How to use Scheduling Contexts in StarPU
 * Scheduling Context Hypervisor::  How to use Scheduling Context Hypervisor with StarPU
 * StarPU's API::                The API to use StarPU
 * Configuration Options for StarPU::
@@ -207,6 +208,14 @@ was last updated on @value{UPDATED}.
 @include chapters/socl.texi
 
 @c ---------------------------------------------------------------------
+@c Scheduling Contexts in StarPU
+@c ---------------------------------------------------------------------
+
+@node Scheduling Contexts in StarPU
+@chapter Scheduling Contexts in StarPU
+@include chapters/sched_ctx.texi
+
+@c ---------------------------------------------------------------------
 @c Scheduling Context Hypervisor
 @c ---------------------------------------------------------------------