/*
 * This file is part of the StarPU Handbook.
 * Copyright (C) 2009--2011  Université de Bordeaux 1
 * Copyright (C) 2010, 2011, 2012, 2013, 2014  Centre National de la Recherche Scientifique
 * Copyright (C) 2011, 2012  Institut National de Recherche en Informatique et Automatique
 * See the file version.doxy for copying conditions.
 */

/*! \page Scheduling Scheduling

\section TaskSchedulingPolicy Task Scheduling Policy

By default, StarPU uses the simple greedy scheduler eager. This is because it
provides correct load balance even if the application codelets do not have
performance models. If your application codelets have performance models
(\ref PerformanceModelExample), you should change the scheduler through the
environment variable \ref STARPU_SCHED, for instance export STARPU_SCHED=dmda.
Use export STARPU_SCHED=help to get the list of available schedulers.

The eager scheduler uses a central task queue, from which workers draw tasks
to work on. This however does not permit prefetching data, since the
scheduling decision is taken late. If a task has a non-zero priority, it is
put at the front of the queue.

The prio scheduler also uses a central task queue, but sorts tasks by
priority (between -5 and 5).

The random scheduler distributes tasks randomly according to assumed worker
overall performance.

The ws (work stealing) scheduler schedules tasks on the local worker by
default. When a worker becomes idle, it steals a task from the most loaded
worker.

The dm (deque model) scheduler takes task execution performance models into
account to perform a HEFT-like scheduling strategy: it schedules tasks where
their termination time will be minimal.

The dmda (deque model data aware) scheduler is similar to dm, but it also
takes data transfer time into account.

The dmdar (deque model data aware ready) scheduler is similar to dmda, but it
also sorts tasks on per-worker queues by the number of already-available data
buffers.

The dmdas (deque model data aware sorted) scheduler is similar to dmda, but
it also supports arbitrary priority values.

The heft (heterogeneous earliest finish time) scheduler is deprecated. It is
now just an alias for dmda.

The pheft (parallel HEFT) scheduler is similar to heft, but it also supports
parallel tasks (still experimental). It should not be used when several
contexts using it are being executed simultaneously.

The peager (parallel eager) scheduler is similar to eager, but it also
supports parallel tasks (still experimental). It should not be used when
several contexts using it are being executed simultaneously.

\section TaskDistributionVsDataTransfer Task Distribution Vs Data Transfer

Distributing tasks to balance the load induces a data transfer penalty. StarPU
thus needs to find a balance between both. The target function that the dmda
scheduler of StarPU tries to minimize is alpha * T_execution + beta *
T_data_transfer, where T_execution is the estimated execution time of the
codelet (usually accurate), and T_data_transfer is the estimated data transfer
time. The latter is estimated based on bus calibration before execution
starts, i.e. with an idle machine, thus without contention. You can force bus
re-calibration by running the tool starpu_calibrate_bus. The beta parameter
defaults to 1, but it can be worth trying to tweak it by using export
STARPU_SCHED_BETA=2 for instance, since during real application execution,
contention makes transfer times bigger. This is of course imprecise, but in
practice a rough estimation already gives results as good as a precise
estimation would.
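If setting shell variables is inconvenient, the scheduler and the beta factor
can also be chosen from the program itself. The sketch below is one way to do
it, under the assumption that it runs before any other StarPU call:
starpu_conf::sched_policy_name selects the policy, and setenv() is used for
STARPU_SCHED_BETA since, as described above, that knob is read from the
environment when StarPU initializes.

\code{.c}
#include <stdlib.h>
#include <starpu.h>

int main(void)
{
	struct starpu_conf conf;
	starpu_conf_init(&conf);

	// Select the dmda scheduler, equivalent to export STARPU_SCHED=dmda.
	conf.sched_policy_name = "dmda";

	// Weigh data transfer time more heavily in the dmda target function
	// (equivalent to export STARPU_SCHED_BETA=2); this must be done
	// before starpu_init() so that it is seen when the environment is
	// read.
	setenv("STARPU_SCHED_BETA", "2", 1);

	if (starpu_init(&conf) != 0)
		return 1;

	// ... create and submit tasks here ...

	starpu_shutdown();
	return 0;
}
\endcode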
\section Power-basedScheduling Power-based Scheduling

If the application can provide some power performance model (through the
field starpu_codelet::power_model), StarPU will take it into account when
distributing tasks. The target function that the dmda scheduler minimizes
then becomes alpha * T_execution + beta * T_data_transfer + gamma *
Consumption, where Consumption is the estimated task consumption in Joules.
To tune this parameter, use export STARPU_SCHED_GAMMA=3000 for instance, to
express that each Joule (i.e. a kW during 1000 us) is worth 3000 us of
execution time penalty.

Setting alpha and beta to zero permits taking only power consumption into
account. This is however not sufficient to correctly optimize power: the
scheduler would simply tend to run all computations on the most
energy-conservative processing unit. To account for the consumption of the
whole machine (including idle processing units), the idle power of the
machine should be given by setting export STARPU_IDLE_POWER=200, for 200 W
for instance. This value can often be obtained from the specifications of the
machine's power supply.

The energy actually consumed by the total execution can be displayed by
setting export STARPU_PROFILING=1 STARPU_WORKER_STATS=1.

On-line task consumption measurement is currently only supported through the
CL_PROFILING_POWER_CONSUMED OpenCL extension, implemented in the MoviSim
simulator. Applications can however provide explicit measurements by using
the function starpu_perfmodel_update_history() (exemplified in \ref
PerformanceModelExample with the power_model performance model). Fine-grain
measurement is often not feasible with the feedback provided by the hardware,
so the user can for instance run a given task a thousand times, measure the
global consumption for that series of tasks, divide it by a thousand, repeat
for varying kinds of tasks and task sizes, and eventually feed StarPU with
these manual measurements through starpu_perfmodel_update_history().

\section StaticScheduling Static Scheduling

In some cases, one may want to force some scheduling, for instance force a
given set of tasks to GPU0, another set to GPU1, etc. while letting some
other tasks be scheduled on any other device. This can indeed be useful to
guide StarPU into some work distribution, while still keeping some degree of
dynamism. For instance, to force execution of a task on CUDA0:

\code{.c}
// Bypass the scheduler and run the task on an explicitly-chosen worker,
// here the first CUDA worker, i.e. CUDA0.
task->execute_on_a_specific_worker = 1;
task->workerid = starpu_worker_get_by_type(STARPU_CUDA_WORKER, 0);
\endcode

Note however that using scheduling contexts while statically scheduling tasks
on workers could be tricky. Be careful to schedule the tasks exactly on the
workers of the corresponding contexts, otherwise the workers' corresponding
scheduling structures may not be allocated, or the execution of the
application may deadlock. Moreover, the hypervisor should not be used when
statically scheduling tasks.
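As a slightly fuller sketch, the loop below pins one task per available CUDA
device while leaving the rest of the submission flow unchanged. The codelet
name cl is a hypothetical placeholder for a codelet defined elsewhere in the
application, and data registration is omitted for brevity.

\code{.c}
#include <starpu.h>

// Hypothetical codelet defined elsewhere in the application.
extern struct starpu_codelet cl;

void submit_one_task_per_cuda_device(void)
{
	unsigned ncuda = starpu_cuda_worker_get_count();
	unsigned i;

	for (i = 0; i < ncuda; i++)
	{
		struct starpu_task *task = starpu_task_create();
		task->cl = &cl;
		// Data handles would be set in task->handles here.

		int workerid = starpu_worker_get_by_type(STARPU_CUDA_WORKER, i);
		if (workerid >= 0)
		{
			// Pin this task to the i-th CUDA worker.
			task->execute_on_a_specific_worker = 1;
			task->workerid = workerid;
		}
		// If the lookup failed, the task is simply left to the
		// dynamic scheduler.

		starpu_task_submit(task);
	}
}
\endcode

Mixing pinned and unpinned tasks this way keeps the load-balancing benefits
of the scheduler for the tasks that do not need a fixed placement.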
\section DefiningANewSchedulingPolicy Defining A New Scheduling Policy

A full example showing how to define a new scheduling policy is available in
the StarPU sources in the directory examples/scheduler/. The scheduling
policy interface is documented in \ref API_Scheduling_Policy.

\code{.c}
static struct starpu_sched_policy dummy_sched_policy =
{
	.init_sched = init_dummy_sched,
	.deinit_sched = deinit_dummy_sched,
	.add_workers = dummy_sched_add_workers,
	.remove_workers = dummy_sched_remove_workers,
	.push_task = push_task_dummy,
	.push_prio_task = NULL,
	.pop_task = pop_task_dummy,
	.post_exec_hook = NULL,
	.pop_every_task = NULL,
	.policy_name = "dummy",
	.policy_description = "dummy scheduling strategy"
};
\endcode
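To give a feel for what the entry points above do, here is a minimal sketch
of the init/push/pop hooks behind such a policy structure, using a single
global FIFO protected by a mutex. This is an illustrative reduction, not the
actual code of the example in examples/scheduler/ (which notably keeps its
state per scheduling context and puts idle workers to sleep on a condition
variable); the add_workers/remove_workers hooks are omitted, and the hook
signatures assume the scheduling-context-based policy interface of this
StarPU series.

\code{.c}
#include <pthread.h>
#include <starpu.h>

// Simplest possible policy state: one global FIFO and a mutex.
static struct starpu_task_list queue;
static pthread_mutex_t queue_mutex = PTHREAD_MUTEX_INITIALIZER;

static void init_dummy_sched(unsigned sched_ctx_id)
{
	(void)sched_ctx_id;
	starpu_task_list_init(&queue);
}

static void deinit_dummy_sched(unsigned sched_ctx_id)
{
	(void)sched_ctx_id;
}

// Called when the application submits a task: append it to the FIFO.
static int push_task_dummy(struct starpu_task *task)
{
	pthread_mutex_lock(&queue_mutex);
	starpu_task_list_push_back(&queue, task);
	pthread_mutex_unlock(&queue_mutex);
	return 0;
}

// Called by an idle worker: hand out the oldest queued task, if any.
static struct starpu_task *pop_task_dummy(unsigned sched_ctx_id)
{
	struct starpu_task *task = NULL;
	(void)sched_ctx_id;

	pthread_mutex_lock(&queue_mutex);
	if (!starpu_task_list_empty(&queue))
		task = starpu_task_list_pop_front(&queue);
	pthread_mutex_unlock(&queue_mutex);

	return task;
}
\endcode

A policy like this balances no load at all; the value of the interface is
that push_task and pop_task can implement arbitrary strategies, such as the
performance-model-based ones described earlier in this chapter.

*/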