
doc: add missing documentation

Nathalie Furmento 8 years ago
parent
commit
ed52082ecd

+ 12 - 12
doc/doxygen/chapters/10scheduling_context_hypervisor.doxy

@@ -1,7 +1,7 @@
 /*
  * This file is part of the StarPU Handbook.
  * Copyright (C) 2009--2011  Universit@'e de Bordeaux
- * Copyright (C) 2010, 2011, 2012, 2013, 2014  CNRS
+ * Copyright (C) 2010, 2011, 2012, 2013, 2014, 2016  CNRS
  * Copyright (C) 2011, 2012 INRIA
  * See the file version.doxy for copying conditions.
  */
@@ -13,12 +13,12 @@
 StarPU proposes a platform to construct Scheduling Contexts, to
 delete and modify them dynamically. A parallel kernel can thus
 be isolated into a scheduling context and interferences between
-several parallel kernels are avoided. If the user knows exactly how
-many workers each scheduling context needs, he can assign them to the
+several parallel kernels are avoided. If users know exactly how
+many workers each scheduling context needs, they can assign them to the
 contexts at their creation time or modify them during the execution of
 the program.
 
-The Scheduling Context Hypervisor Plugin is available for the users
+The Scheduling Context Hypervisor Plugin is available for users
 who do not have regular parallelism, who cannot know in
 advance the exact size of the context, and who need to resize the contexts
 according to the behavior of the parallel kernels.
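
A hedged illustration of assigning a fixed set of workers to a scheduling context at creation time; this is only a sketch, the worker ids and the context name are arbitrary, and the call follows the starpu_sched_ctx_create() prototype documented later in this commit.

\code{.c}
/* Sketch: isolate a parallel kernel on three given workers.  The
 * variadic argument list of starpu_sched_ctx_create() is terminated
 * by 0; no scheduling policy is selected explicitly here. */
int workerids[3] = {1, 3, 10};
unsigned sched_ctx = starpu_sched_ctx_create(workerids, 3, "my_ctx", 0);
\endcode
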
@@ -72,9 +72,9 @@ time of the resources.
 
 The plugin proposes several strategies for resizing the scheduling context.
 
-The <b>Application driven</b> strategy uses the user's input concerning the moment when he wants to resize the contexts.
-Thus, the users tags the task that should trigger the resizing
-process. We can set directly the field starpu_task::hypervisor_tag or
+The <b>Application driven</b> strategy uses users' input concerning the moment when they want to resize the contexts.
+Thus, users tag the task that should trigger the resizing
+process. One can directly set the field starpu_task::hypervisor_tag or
 use the macro ::STARPU_HYPERVISOR_TAG in the function
 starpu_task_insert().
 
@@ -91,13 +91,13 @@ starpu_task_insert(&codelet,
                     0);
 \endcode
 
-Then the user has to indicate that when a task with the specified tag is executed the contexts should resize.
+Then users have to indicate that, when a task with the specified tag is executed, the contexts should be resized.
 
 \code{.c}
 sc_hypervisor_resize(sched_ctx, 2);
 \endcode
 
-The user can use the same tag to change the resizing configuration of the contexts if he considers it necessary.
+Users can use the same tag to change the resizing configuration of the contexts if they consider it necessary.
 
 \code{.c}
 sc_hypervisor_ctl(sched_ctx,
@@ -109,7 +109,7 @@ sc_hypervisor_ctl(sched_ctx,
 
 
 The <b>Idleness</b> based strategy moves workers unused in a certain context to another one needing them.
-(see \ref UsersInputInTheResizingProcess "Users’ Input In The Resizing Process")
+(see \ref API_SC_Hypervisor_usage)
 
 \code{.c}
 int workerids[3] = {1, 3, 10};
@@ -122,7 +122,7 @@ sc_hypervisor_ctl(sched_ctx_id,
 
 The <b>Gflops rate</b> based strategy resizes the scheduling contexts such that they all finish at the same time.
 The speed of each of them is computed, and once one of them is significantly slower, the resizing process is triggered.
-In order to do these computations the user has to input the total number of instructions needed to be executed by the
+In order to do these computations, users have to input the total number of instructions to be executed by the
 parallel kernels and the number of instructions to be executed by each
 task.
 
@@ -175,7 +175,7 @@ without submitting them right away.
 
 The <b>Ispeed</b> strategy divides the execution of the application into several frames.
 For each frame, the hypervisor computes the speed of the contexts and tries to make them
-run at the same speed. The strategy requires less contribution from the user as
+run at the same speed. The strategy requires less contribution from users as
 the hypervisor only needs the size of the frame in terms of flops.
 
 \code{.c}

+ 14 - 0
doc/doxygen/chapters/40environment_variables.doxy

@@ -320,6 +320,20 @@ and friends.  The default is Enabled.
 This permits testing the performance effect of memory pinning.
 </dd>
 
+<dt>STARPU_MIC_SINK_PROGRAM_NAME</dt>
+<dd>
+\anchor STARPU_MIC_SINK_PROGRAM_NAME
+\addindex __env__STARPU_MIC_SINK_PROGRAM_NAME
+todo
+</dd>
+
+<dt>STARPU_MIC_SINK_PROGRAM_PATH</dt>
+<dd>
+\anchor STARPU_MIC_SINK_PROGRAM_PATH
+\addindex __env__STARPU_MIC_SINK_PROGRAM_PATH
+todo
+</dd>
+
 </dl>
 
 \section ConfiguringTheSchedulingEngine Configuring The Scheduling Engine

+ 9 - 7
doc/doxygen/chapters/api/modularized_scheduler.doxy

@@ -2,7 +2,7 @@
  * This file is part of the StarPU Handbook.
  * Copyright (C) 2013        Simon Archipoff
  * Copyright (C) 2009--2011  Universit@'e de Bordeaux
- * Copyright (C) 2014, 2015        CNRS
+ * Copyright (C) 2014, 2015, 2016        CNRS
  * Copyright (C) 2013, 2014  INRIA
  * See the file version.doxy for copying conditions.
  */
@@ -150,7 +150,7 @@ The actual scheduler
 \ingroup API_Modularized_Scheduler
 	 compatibility with starpu_sched_policy interface
 
-\fn struct starpu_task *starpu_sched_tree_pop_task()
+\fn struct starpu_task *starpu_sched_tree_pop_task(unsigned sched_ctx)
 \ingroup API_Modularized_Scheduler
 	 compatibility with starpu_sched_policy interface
 
@@ -169,7 +169,7 @@ The actual scheduler
 @name Generic Scheduling Component API
 \ingroup API_Modularized_Scheduler
 
-\fn struct starpu_sched_component *starpu_sched_component_create(struct starpu_sched_tree *tree)
+\fn struct starpu_sched_component *starpu_sched_component_create(struct starpu_sched_tree *tree, const char *name)
 \ingroup API_Modularized_Scheduler
	 allocate and initialize the component fields with default values:
	.pop_task makes a recursive call on the father
@@ -441,10 +441,12 @@ todo
 \ingroup API_Modularized_Scheduler
	 this function builds a scheduler for \p sched_ctx_id according to \p s and the hwloc topology of the machine.
 
-\fn int starpu_sched_component_push_task(struct starpu_sched_component *component, struct starpu_task *task);
-	Push a task to a component. This is a helper for <c>component->push_task(component, task)</c> plus tracing.
+\fn int starpu_sched_component_push_task(struct starpu_sched_component *from, struct starpu_sched_component *to, struct starpu_task *task)
+\ingroup API_Modularized_Scheduler
+Push a task to a component. This is a helper for <c>component->push_task(component, task)</c> plus tracing.
 
-\fn struct starpu_task *starpu_sched_component_pull_task(struct starpu_sched_component *component);
-	Pull a task from a component. This is a helper for <c>component->pull_task(component)</c> plus tracing.
+\fn struct starpu_task *starpu_sched_component_pull_task(struct starpu_sched_component *from, struct starpu_sched_component *to)
+\ingroup API_Modularized_Scheduler
+Pull a task from a component. This is a helper for <c>component->pull_task(component)</c> plus tracing.
 
 */
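
To illustrate the new push helper signature above, here is a hedged sketch of a custom component method that forwards a task to its first child; the children/nchildren fields are assumed to have been filled by the component creation code.

\code{.c}
/* Sketch: a push_task method for a custom component that hands the
 * task over to its first child, relying on
 * starpu_sched_component_push_task() for the actual call plus tracing. */
static int my_component_push_task(struct starpu_sched_component *component,
                                  struct starpu_task *task)
{
	STARPU_ASSERT(component->nchildren >= 1);
	return starpu_sched_component_push_task(component, component->children[0], task);
}
\endcode
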

+ 9 - 3
doc/doxygen/chapters/api/mpi.doxy

@@ -289,9 +289,9 @@ Return the tag of the given data.
 Return the tag of the given data.
 Symbol kept for backward compatibility. Calling function starpu_mpi_data_get_tag
 
-\def starpu_data_migrate(MPI_Comm comm, starpu_data_handle_t data, int new_rank)
+\def starpu_mpi_data_migrate(MPI_Comm comm, starpu_data_handle_t handle, int new_rank)
 \ingroup API_MPI_Support
-Migration the data onto the \p new_rank MPI node. This means both transferring
+Migrate the data onto the \p new_rank MPI node. This means both transferring
 the data to node \p new_rank if it hasn't been transferred already, and setting
 the home node of the data to the new node. Further data transfers triggered by
 starpu_mpi_task_insert() will be done from that new node. This function thus
@@ -310,6 +310,12 @@ this macro is used when calling starpu_mpi_task_insert(), and must be
 followed by a data handle to specify that the node owning the given
 data will execute the codelet.
 
+\def STARPU_NODE_SELECTION_POLICY
+\ingroup API_MPI_Support
+this macro is used when calling starpu_mpi_task_insert(), and must be
+followed by an identifier of a node selection policy. This is needed when several
+nodes own data in ::STARPU_W mode.
+
 \fn int starpu_mpi_insert_task(MPI_Comm comm, struct starpu_codelet *codelet, ...)
 \ingroup API_MPI_Support
 This function does the same as the function starpu_mpi_task_insert(). It has been kept to avoid breaking old codes.
@@ -335,7 +341,7 @@ The internal algorithm is as follows:
         Find out which MPI node is going to execute the codelet.
         <ul>
             <li>If there is only one node owning data in ::STARPU_W mode, it will be selected;
-            <li>If there is several nodes owning data in ::STARPU_W node, a node will be selected according to a given node selection policy (see ::STARPU_NODE_SELECTION_POLICY or starpu_mpi_node_selection_set_current_policy())
+            <li>If there are several nodes owning data in ::STARPU_W mode, a node will be selected according to a given node selection policy (see ::STARPU_NODE_SELECTION_POLICY or starpu_mpi_node_selection_set_current_policy())
             <li>The argument ::STARPU_EXECUTE_ON_NODE followed by an integer can be used to specify the node;
             <li>The argument ::STARPU_EXECUTE_ON_DATA followed by a data handle can be used to specify that the node owning the given data will execute the codelet.
         </ul>
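
As a hedged sketch tying starpu_mpi_data_migrate() and the node selection rules above together; the codelet cl and the registered handle are assumed to exist.

\code{.c}
/* Sketch: move ownership of the handle to MPI rank 1, then insert a
 * task writing to it; rank 1 is then the only node owning data in
 * STARPU_W mode and is therefore selected to execute the codelet. */
starpu_mpi_data_migrate(MPI_COMM_WORLD, handle, 1);
starpu_mpi_task_insert(MPI_COMM_WORLD, &cl,
                       STARPU_W, handle,
                       0);
\endcode
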

+ 5 - 1
doc/doxygen/chapters/api/sc_hypervisor/sc_hypervisor.doxy

@@ -1,7 +1,7 @@
 /*
  * This file is part of the StarPU Handbook.
  * Copyright (C) 2009--2011  Universit@'e de Bordeaux
- * Copyright (C) 2010, 2011, 2012, 2013  CNRS
+ * Copyright (C) 2010, 2011, 2012, 2013, 2016  CNRS
  * Copyright (C) 2011, 2012, 2013 INRIA
  * See the file version.doxy for copying conditions.
  */
@@ -149,6 +149,10 @@ The quantity of data(in bytes) needed by the task to execute
 \var sc_hypervisor_policy_task_pool::next
 Other task kinds
 
+\def STARPU_HYPERVISOR_TAG
+\ingroup API_SC_Hypervisor
+todo
+
 \fn void sc_hypervisor_post_resize_request(unsigned sched_ctx, int task_tag)
 \ingroup API_SC_Hypervisor
 Requires resizing the context \p sched_ctx whenever a task tagged with the id \p task_tag
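
A hedged sketch combining the tag-based resizing request with the insert-task macro documented in this commit; the codelet, the handle and the scheduling context are assumed to exist, and the tag value 42 is arbitrary.

\code{.c}
/* Sketch: ask the hypervisor to resize the context sched_ctx once a
 * task tagged 42 has been executed, then submit such a tagged task. */
sc_hypervisor_post_resize_request(sched_ctx, 42);
starpu_task_insert(&codelet,
                   STARPU_RW, handle,
                   STARPU_HYPERVISOR_TAG, 42,
                   0);
\endcode
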

+ 11 - 1
doc/doxygen/chapters/api/scheduling_contexts.doxy

@@ -1,7 +1,7 @@
 /*
  * This file is part of the StarPU Handbook.
  * Copyright (C) 2009--2011  Universit@'e de Bordeaux
- * Copyright (C) 2010, 2011, 2012, 2013, 2014  CNRS
+ * Copyright (C) 2010, 2011, 2012, 2013, 2014, 2016  CNRS
  * Copyright (C) 2011, 2012 INRIA
  * See the file version.doxy for copying conditions.
  */
@@ -37,6 +37,11 @@ hypervisor how the application and the resources are executing.
 @name Scheduling Contexts Basic API
 \ingroup API_Scheduling_Contexts
 
+\def STARPU_NMAX_SCHED_CTXS
+\ingroup API_Scheduling_Contexts
+Define the maximum number of scheduling contexts managed by StarPU. The default value can be
+modified at configure time by using the option \ref enable-max-sched-ctxs "--enable-max-sched-ctxs".
+
 \fn unsigned starpu_sched_ctx_create(int *workerids_ctx, int nworkers_ctx, const char *sched_ctx_name, ...)
 \ingroup API_Scheduling_Contexts
 This function creates a scheduling context with the given parameters
@@ -88,6 +93,11 @@ minimum scheduler priority value.
 This macro is used when calling starpu_sched_ctx_create() to specify a
 maximum scheduler priority value.
 
+\def STARPU_SCHED_CTX_POLICY_INIT
+\ingroup API_Scheduling_Contexts
+This macro is used when calling starpu_sched_ctx_create() to specify a
+function pointer used to initialize the scheduling policy.
+
 \fn unsigned starpu_sched_ctx_create_inside_interval(const char *policy_name, const char *sched_ctx_name, int min_ncpus, int max_ncpus, int min_ngpus, int max_ngpus, unsigned allow_overlap)
 \ingroup API_Scheduling_Contexts
 Create a context indicating an approximate interval of resources
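
The ::STARPU_SCHED_CTX_POLICY_INIT entry added above can be illustrated by the following loosely hedged sketch; the prototype of the initialization callback is not given in this commit, so the no-argument form used here is purely an assumption, and workerids/nworkers are as in the earlier examples.

\code{.c}
/* Sketch only: the real callback prototype may differ (it could for
 * instance receive the scheduling context id). */
void my_policy_init(void)
{
	/* set up custom state for the scheduling policy */
}

unsigned ctx = starpu_sched_ctx_create(workerids, nworkers, "init_ctx",
                                       STARPU_SCHED_CTX_POLICY_INIT, my_policy_init,
                                       0);
\endcode
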

+ 6 - 11
doc/doxygen/chapters/api/scheduling_policy.doxy

@@ -1,7 +1,7 @@
 /*
  * This file is part of the StarPU Handbook.
  * Copyright (C) 2009--2011  Universit@'e de Bordeaux
- * Copyright (C) 2010, 2011, 2012, 2013, 2014  CNRS
+ * Copyright (C) 2010, 2011, 2012, 2013, 2014, 2016  CNRS
  * Copyright (C) 2011, 2012 INRIA
  * See the file version.doxy for copying conditions.
  */
@@ -13,6 +13,11 @@
 implement custom policies to address specific problems. The API
 described below allows users to write their own scheduling policy.
 
+\def STARPU_MAXIMPLEMENTATIONS
+\ingroup API_Scheduling_Policy
+Define the maximum number of implementations per architecture. The default value can be modified at
+configure time by using the option \ref enable-maximplementations "--enable-maximplementations".
+
 \struct starpu_sched_policy
 \ingroup API_Scheduling_Policy
 This structure contains all the methods that implement a
@@ -224,14 +229,4 @@ Prefetch data for a given task on a given node when the bus is idle
 The scheduling policy indicates whether the worker may pop tasks from the lists of other workers
 or whether there is a central list with tasks for all the workers
 
-\fn void _starpu_graph_compute_depths(void)
-\ingroup API_Scheduling_policy
-This make StarPU compute for each task the depth, i.e. the length of the
-longest path to a task without outgoing dependencies.
-
-\fn void _starpu_graph_foreach(void (*func)(void *data, struct _starpu_job *job), void *data)
-\ingroup API_Scheduling_policy
-This calls \e func for each node of the task graph, passing also \e data as it
-is.
-
 */
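
To make the new ::STARPU_MAXIMPLEMENTATIONS entry more concrete, here is a hedged sketch of a codelet providing two CPU implementations of the same kernel; the implementation functions are hypothetical.

\code{.c}
/* Sketch: the cpu_funcs array can hold up to STARPU_MAXIMPLEMENTATIONS
 * entries; the scheduler picks among the provided implementations. */
static struct starpu_codelet cl =
{
	.cpu_funcs = { scal_cpu_basic, scal_cpu_sse },
	.nbuffers = 1,
	.modes = { STARPU_RW },
};
\endcode
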

+ 3 - 3
doc/doxygen/chapters/api/task_lists.doxy

@@ -1,7 +1,7 @@
 /*
  * This file is part of the StarPU Handbook.
  * Copyright (C) 2009--2011  Universit@'e de Bordeaux
- * Copyright (C) 2010, 2011, 2012, 2013, 2014  CNRS
+ * Copyright (C) 2010, 2011, 2012, 2013, 2014, 2016  CNRS
  * Copyright (C) 2011, 2012 INRIA
  * See the file version.doxy for copying conditions.
  */
@@ -64,9 +64,9 @@ Get the end of \p list.
 \ingroup API_Task_Lists
 Get the next task of \p list. This is not erase-safe.
 
-\fn int starpu_task_list_ismember(struct starpu_task_list *list, struct starpu_task *task)
+\fn int starpu_task_list_ismember(struct starpu_task_list *list, struct starpu_task *look)
 \ingroup API_Task_Lists
-Test whether the given \p task is contained in the \p list.
+Test whether the given task \p look is contained in the \p list.
 
 */
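
A hedged usage sketch for the renamed parameter above; the list and the task are assumed to have been initialized and created elsewhere.

\code{.c}
/* Sketch: only queue the task if it is not already in the list. */
if (!starpu_task_list_ismember(&list, task))
	starpu_task_list_push_back(&list, task);
\endcode
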
 

+ 3 - 3
doc/doxygen/chapters/api/tree.doxy

@@ -1,6 +1,6 @@
 /*
  * This file is part of the StarPU Handbook.
- * Copyright (C) 2014  CNRS
+ * Copyright (C) 2014, 2016  CNRS
  * See the file version.doxy for copying conditions.
  */
 
@@ -23,7 +23,7 @@ todo
 \var int starpu_tree::is_pu
 todo
 
-\fn void starpu_tree_reset_visited(struct starpu_tree *tree, int *visited)
+\fn void starpu_tree_reset_visited(struct starpu_tree *tree, char *visited)
 \ingroup API_Tree
 todo
 
@@ -35,7 +35,7 @@ todo
 \ingroup API_Tree
 todo
 
-\fn struct starpu_tree *starpu_tree_get_neighbour(struct starpu_tree *tree, struct starpu_tree *node, int *visited, int *present)
+\fn struct starpu_tree *starpu_tree_get_neighbour(struct starpu_tree *tree, struct starpu_tree *node, char *visited, char *present)
 \ingroup API_Tree
 todo
 

+ 12 - 1
doc/doxygen/chapters/api/workers.doxy

@@ -1,7 +1,7 @@
 /*
  * This file is part of the StarPU Handbook.
  * Copyright (C) 2009--2011  Universit@'e de Bordeaux
- * Copyright (C) 2010, 2011, 2012, 2013, 2014  CNRS
+ * Copyright (C) 2010, 2011, 2012, 2013, 2014, 2016  CNRS
  * Copyright (C) 2011, 2012 INRIA
  * See the file version.doxy for copying conditions.
  */
@@ -12,6 +12,17 @@
 \ingroup API_Workers_Properties
 Define the maximum number of workers managed by StarPU.
 
+\def STARPU_MAXCPUS
+\ingroup API_Workers_Properties
+Define the maximum number of CPU workers managed by StarPU. The default value can be modified at
+configure time by using the option \ref enable-maxcpus "--enable-maxcpus".
+
+\def STARPU_MAXNODES
+\ingroup API_Workers_Properties
+Define the maximum number of memory nodes managed by StarPU. The default value can be modified at
+configure time by using the option \ref enable-maxnodes "--enable-maxnodes". Reducing it
+considerably reduces the memory used by StarPU data structures.
+
 \enum starpu_node_kind
 \ingroup API_Workers_Properties
 TODO