
Add hypervisor to the doc

Andra Hugo, 13 years ago
commit c6e13b9642

+ 1 - 2
doc/chapters/advanced-api.texi

@@ -516,8 +516,7 @@ Variant of starpu_worker_can_execute_task compatible with combined workers
 @node Scheduling Contexts
 @section Scheduling Contexts
 StarPU permits, on the one hand, grouping workers into combined workers in order to execute a parallel task and, on the other hand, grouping tasks into bundles that will be executed by a single specified worker.
-Scheduling contexts are different, they represent abstracts sets of workers that allow the programmers to control the distribution of computational resources (i.e. CPUs and
-GPUs) to concurrent parallel kernels. The main goal is to minimize interferences between the execution of multiple parallel kernels, by partitioning the underlying pool of workers using contexts.
+In contrast, scheduling contexts group workers so that StarPU tasks can be submitted to them and scheduled according to the policy assigned to each context.
 Scheduling contexts can be created, deleted and modified dynamically.
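+
+As a minimal illustrative sketch (the worker identifiers, the
+@code{"heft"} policy name, and the context name are placeholders),
+a context over a subset of workers can be created as follows:
+
+@cartouche
+@smallexample
+/* identifiers of two workers; valid values lie below
+   starpu_worker_get_count() and are placeholders here */
+int workerids[2] = @{1, 3@};
+
+/* create a context scheduled by the "heft" policy over these workers;
+   the returned identifier designates the new context */
+unsigned sched_ctx = starpu_create_sched_ctx("heft", workerids, 2, "my_ctx");
+@end smallexample
+@end cartouche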
 
 @deftypefun unsigned starpu_create_sched_ctx (const char *@var{policy_name}, int *@var{workerids_ctx}, int @var{nworkers_ctx}, const char *@var{sched_ctx_name})

+ 1 - 1
doc/chapters/c-extensions.texi

@@ -46,7 +46,7 @@ plug-in.  It does not require detailed knowledge of the StarPU library.
 Note: as of StarPU @value{VERSION}, this is still an area under
 development and subject to change.
 
-@menu
+  @menu
 * Defining Tasks::              Defining StarPU tasks
 * Synchronization and Other Pragmas:: Synchronization, and more.
 * Registered Data Buffers::     Manipulating data buffers

+ 9 - 0
doc/chapters/configuration.texi

@@ -117,6 +117,11 @@ Allow for at most @var{count} codelet implementations for the same
 target device.  This information is then available as the
 @code{STARPU_MAXIMPLEMENTATIONS} macro.
 
+@item --enable-max-sched-ctxs=@var{count}
+Allow for at most @var{count} scheduling contexts.  This information
+is then available as the @code{STARPU_NMAX_SCHED_CTXS} macro.  An
+example invocation is given after this table.
+
 @end table
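+
+For instance, a hedged illustration (the count of 16 is arbitrary):
+
+@cartouche
+@smallexample
+$ ./configure --enable-max-sched-ctxs=16
+@end smallexample
+@end cartouche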
 
 @node Advanced configuration
@@ -200,6 +205,10 @@ default, it is enabled when an OpenCL implementation is found.
 Disable the StarPU-Top interface (@pxref{StarPU-Top}).  By default, it
 is enabled when the required dependencies are found.
 
+@item --enable-sched-ctx-hypervisor
+Enable the Scheduling Context Hypervisor plugin (@pxref{Scheduling Context Hypervisor}).
+By default, it is disabled.  See the example after this table.
+
 @end table
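+
+As an illustration (assuming the plugin's build dependencies are available):
+
+@cartouche
+@smallexample
+$ ./configure --enable-sched-ctx-hypervisor
+@end smallexample
+@end cartouche
+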
 @node Execution configuration through environment variables
 @section Execution configuration through environment variables

+ 11 - 3
doc/chapters/perf-optimization.texi

@@ -160,10 +160,15 @@ supports parallel tasks (still experimental).
 
 @node Task scheduling contexts
 @section Task scheduling contexts
+Task scheduling contexts represent abstract sets of workers that allow
+the programmer to control the distribution of computational resources
+(i.e. CPUs and GPUs) to concurrent parallel kernels. The main goal is
+to minimize interference between the executions of multiple parallel
+kernels, by partitioning the underlying pool of workers using contexts.
+
 By default, the application submits tasks to an initial context, which uses all the computation resources available to StarPU (all the workers).
-If the application programmer plans to launch several parallel kernels simultaneusly, these kernels will be executed within this initial context, using a single scheduler (@pxref{Task scheduling policy}).
-Meanwhile, if the application programmer is aware of the demands of these kernels and of the specificity of the machine used to execute them, the workers can be divided between several contexts. Thus, the scheduling contexts will isolate the execution of each kernel and they will permit the use of a scheduling policy proper to each one of them.
-In order to create the contexts, you have to know the indentifiers of the workers running within StarPU. By passing a set of workers together with the scheduling policy to the function @code{starpu_create_sched_ctx}, you will get an identifier of the context created which you will use to indicate the context you want to submit the tasks to.
+If the application programmer plans to launch several parallel kernels simultaneously, by default these kernels will be executed within this initial context, using a single scheduling policy (@pxref{Task scheduling policy}).
+On the other hand, if the application programmer is aware of the demands of these kernels and of the specifics of the machine used to execute them, the workers can be divided between several contexts.
+These scheduling contexts will isolate the execution of each kernel and permit the use of a scheduling policy proper to each one of them.
+In order to create the contexts, you have to know the identifiers of the workers running within StarPU.
+By passing a set of workers together with the scheduling policy to the function @code{starpu_create_sched_ctx}, you will get an identifier of the context created, which you will then use to indicate the context you want to submit tasks to.
 
 @cartouche
 @smallexample
@@ -183,6 +188,9 @@ starpu_task_submit(task);
 @end smallexample
 @end cartouche
 
+Note: the parallel greedy and parallel heft scheduling policies do not support the existence of several disjoint contexts on the machine.
+Combined workers are constructed depending on the entire topology of the machine, not only on the part belonging to a context.
+
 @node Performance model calibration
 @section Performance model calibration
 

+ 27 - 0
doc/chapters/sched_ctx_hypervisor.texi

@@ -0,0 +1,27 @@
+@c -*-texinfo-*-
+
+@c This file is part of the StarPU Handbook.
+@c Copyright (C) 2011, 2012 Institut National de Recherche en Informatique et Automatique
+@c See the file starpu.texi for copying conditions.
+
+@cindex Scheduling Context Hypervisor
+
+StarPU proposes a platform for constructing scheduling contexts, and for deleting and modifying them dynamically.
+A parallel kernel can thus be isolated into a scheduling context, and interference between several parallel kernels is avoided.
+If the user knows exactly how many workers each scheduling context needs, he can assign them to the contexts at their creation time, or modify them during the execution of the program.
+
+The Scheduling Context Hypervisor plugin is available for users whose parallelism is not regular and who need to resize the contexts according to the behavior of the parallel kernels.
+The Hypervisor receives information from StarPU concerning the execution of the tasks, the efficiency of the resources, etc., and decides accordingly when and how the contexts it manages should be resized.
+Basic strategies for resizing scheduling contexts are already provided, as well as a platform for implementing additional custom ones.
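+
+As a purely illustrative sketch of how an application might drive the
+Hypervisor (the calls @code{sched_ctx_hypervisor_init} and
+@code{sched_ctx_hypervisor_register_ctx} and their signatures are
+assumptions here; @pxref{Registering Scheduling Contexts to the
+hypervisor}, for the actual interface):
+
+@cartouche
+@smallexample
+/* start the hypervisor; passing NULL is assumed to select a default
+   resizing policy */
+sched_ctx_hypervisor_init(NULL);
+
+/* hand an existing context over to the hypervisor so that it may be
+   resized; sched_ctx is an identifier returned by
+   starpu_create_sched_ctx() (hypothetical call) */
+sched_ctx_hypervisor_register_ctx(sched_ctx);
+@end smallexample
+@end cartouche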
+
+@menu
+* Performance counters::              			StarPU provides information to the Hypervisor through performance counters
+* Registering Scheduling Contexts to the hypervisor:: 	Contexts have to register with the hypervisor
+* The user's input in the resizing process:: 		The user can help the hypervisor decide how to resize
+* Defining a new hypervisor policy::      		New policies can be implemented
+@end menu
+
+@c Local Variables:
+@c TeX-master: "../starpu.texi"
+@c ispell-local-dictionary: "american"
+@c End: