@@ -196,11 +196,11 @@ supports parallel tasks (still experimental).

 @node Task scheduling contexts
 @section Task scheduling contexts

-Task scheduling contexts represent abstracts sets of workers that allow the programmers to control the distribution of computational resources (i.e. CPUs and
+Task scheduling contexts represent abstract sets of workers that allow programmers to control the distribution of computational resources (i.e. CPUs and
 GPUs) to concurrent parallel kernels. The main goal is to minimize interferences between the execution of multiple parallel kernels, by partitioning the underlying pool of workers using contexts.

-By default, the application submits tasks to an initial context, which disposes of all the computation ressources available to StarPU (all the workers).
-If the application programmer plans to launch several parallel kernels simultaneusly, by default these kernels will be executed within this initial context, using a single scheduler policy(@pxref{Task scheduling policy}).
+By default, the application submits tasks to an initial context, which uses all the computational resources available to StarPU (all the workers).
+If the application programmer plans to launch several parallel kernels simultaneously, by default these kernels will be executed within this initial context, using a single scheduling policy (@pxref{Task scheduling policy}).
 Meanwhile, if the application programmer is aware of the demands of these kernels and of the specificity of the machine used to execute them, the workers can be divided between several contexts.
 These scheduling contexts will isolate the execution of each kernel and they will permit the use of a scheduling policy proper to each one of them.
 In order to create the contexts, you have to know the indentifiers of the workers running within StarPU.