/* StarPU --- Runtime system for heterogeneous multicore architectures.
 *
 * Copyright (C) 2009-2021 Université de Bordeaux, CNRS (LaBRI UMR 5800), Inria
 *
 * StarPU is free software; you can redistribute it and/or modify
 * it under the terms of the GNU Lesser General Public License as published by
 * the Free Software Foundation; either version 2.1 of the License, or (at
 * your option) any later version.
 *
 * StarPU is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
 *
 * See the GNU Lesser General Public License in COPYING.LGPL for more details.
 */

/*! \page Scheduling Scheduling

\section TaskSchedulingPolicy Task Scheduling Policies

The basics of the scheduling policy are the following:

- The scheduler gets to schedule tasks (push operation) as soon as they become
  ready to be executed, i.e. they are no longer waiting for some tags, data
  dependencies or task dependencies.
- Workers pull tasks (pop operation) one by one from the scheduler.

This means scheduling policies usually contain at least one queue of tasks, to
store them between the time when they become available and the time when a
worker gets to grab them.

By default, StarPU uses the work-stealing scheduler \b lws. This is because it
provides correct load balance and locality even if the application codelets do
not have performance models. Other non performance modelling scheduling
policies can be selected among the list below, thanks to the environment
variable \ref STARPU_SCHED. For instance <c>export STARPU_SCHED=prio</c>. Use
<c>STARPU_SCHED=help</c> to get the list of available schedulers.

\subsection NonPerformanceModelingPolicies Non Performance Modelling Policies

- The \b eager scheduler uses a central task queue, from which all workers draw
  tasks to work on concurrently. This however does not permit prefetching data,
  since the scheduling decision is taken late. If a task has a non-zero
  priority, it is put at the front of the queue.

- The \b random scheduler uses a queue per worker, and distributes tasks
  randomly according to assumed worker overall performance.

- The \b ws (work stealing) scheduler uses a queue per worker, and by default
  schedules a task on the worker which released it. When a worker becomes idle,
  it steals a task from the most loaded worker.

- The \b lws (locality work stealing) scheduler uses a queue per worker, and by
  default schedules a task on the worker which released it. When a worker
  becomes idle, it steals a task from neighbour workers. It also takes
  priorities into account.

- The \b prio scheduler also uses a central task queue, but sorts tasks by
  priority specified by the programmer.

- The \b heteroprio scheduler uses different priorities for the different
  processing units. This scheduler must be configured as described in \ref
  configuringHeteroprio to work correctly and deliver high performance.

\subsection DMTaskSchedulingPolicy Performance Model-Based Task Scheduling Policies

If (and only if) your application codelets have performance models (\ref
PerformanceModelExample), you should change the scheduler thanks to the
environment variable \ref STARPU_SCHED, to select one of the policies below, in
order to take advantage of StarPU's performance modelling. For instance
<c>export STARPU_SCHED=dmda</c>. Use <c>STARPU_SCHED=help</c> to get the list
of available schedulers.

Note: depending on the performance model type chosen, some preliminary
calibration runs may be needed for the model to converge. If the calibration
has not been done, or is not sufficient yet, or if no performance model is
specified for a codelet, every task built from this codelet will be scheduled
using an \b eager fallback policy.
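Note that, instead of setting the environment variable, the scheduling policy
can also be selected programmatically, by naming it in the field
starpu_conf::sched_policy_name before calling starpu_init() (the same mechanism
is used to configure Heteroprio in \ref configuringHeteroprio). A minimal
sketch:

\code{.c}
struct starpu_conf conf;
starpu_conf_init(&conf);
/* Same effect as running the application with STARPU_SCHED=dmda */
conf.sched_policy_name = "dmda";
starpu_init(&conf);
\endcode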
Troubleshooting: configuring and recompiling StarPU with the \c configure
option \ref enable-verbose "--enable-verbose" displays, at the end of the
execution, statistics about the percentage of tasks which have been scheduled
by a dm* family policy using performance model hints. A low or zero percentage
may be the sign that performance models are not converging, or that codelets do
not have performance models enabled.

- The \b dm (deque model) scheduler takes task execution performance models
  into account to implement a HEFT-like scheduling strategy: it schedules tasks
  where their termination time will be minimal. The difference with HEFT is
  that \b dm schedules tasks as soon as they become available, and thus in the
  order they become available, without taking priorities into account.

- The \b dmda (deque model data aware) scheduler is similar to \b dm, but it
  also takes data transfer time into account.

- The \b dmdap (deque model data aware prio) scheduler is similar to \b dmda,
  except that it sorts tasks by priority order, which brings it even closer to
  HEFT by respecting priorities after having made the scheduling decision (but
  it still schedules tasks in the order they become available).

- The \b dmdar (deque model data aware ready) scheduler is similar to \b dmda,
  but it also privileges tasks whose data buffers are already available on the
  target device.

- The \b dmdas scheduler combines \b dmdap and \b dmdar: it sorts tasks by
  priority order, but within a given priority it privileges tasks whose data
  buffers are already available on the target device.

- The \b dmdasd (deque model data aware sorted decision) scheduler is similar
  to \b dmdas, except that when scheduling a task, it takes its priority into
  account when computing the minimum completion time, since this task may get
  executed before lower-priority ones, which should thus be ignored.

- The \b heft (heterogeneous earliest finish time) scheduler is a deprecated
  alias for \b dmda.

- The \b pheft (parallel HEFT) scheduler is similar to \b dmda, but it also
  supports parallel tasks (still experimental). It should not be used when
  several contexts using it are being executed simultaneously.

- The \b peager (parallel eager) scheduler is similar to \b eager, but it also
  supports parallel tasks (still experimental). It should not be used when
  several contexts using it are being executed simultaneously.

\subsection ExistingModularizedSchedulers Modularized Schedulers

StarPU provides a powerful way to implement schedulers, as documented in \ref
DefiningANewModularSchedulingPolicy. It is currently shipped with the following
pre-defined modularized schedulers:

- \b modular-eager and \b modular-eager-prefetching are eager-based schedulers
  (without and with prefetching): naive schedulers which try to map a task on
  the first available resource they find. The prefetching variant queues
  several tasks in advance, to be able to do data prefetching. This may however
  degrade load balancing a bit.

- \b modular-prio, \b modular-prio-prefetching and \b modular-eager-prio are
  prio-based schedulers (without / with prefetching), similar to the
  eager-based schedulers. They can handle tasks which have a defined priority,
  and schedule them accordingly. The \b modular-eager-prio variant integrates
  the eager and priority queues in a single component, which allows it to do a
  better job at pushing tasks.

- \b modular-random, \b modular-random-prio, \b modular-random-prefetching and
  \b modular-random-prio-prefetching are random-based schedulers (without /
  with prefetching): they randomly select a resource onto which each task is
  mapped.
- \b modular-ws implements work stealing: it maps tasks to workers in round
  robin, but allows workers to steal work from other workers.

- \b modular-heft, \b modular-heft2 and \b modular-heft-prio are HEFT
  schedulers: they map tasks to workers using a heuristic very close to
  Heterogeneous Earliest Finish Time. To work efficiently, they need every task
  submitted to StarPU to have a defined performance model (\ref
  PerformanceModelCalibration), but they can handle tasks without one.
  \b modular-heft just takes tasks in order. \b modular-heft2 takes at most 5
  tasks of the same priority and checks which one fits best.
  \b modular-heft-prio is similar to \b modular-heft, but only decides the
  memory node, not the exact worker, just pushing tasks to one central queue
  per memory node. By default, these schedulers sort tasks by priority and
  privilege running first a task which has most of its data already available
  on the target. This behaviour can however be changed with \ref
  STARPU_SCHED_SORTED_ABOVE, \ref STARPU_SCHED_SORTED_BELOW, and \ref
  STARPU_SCHED_READY.

- \b modular-heteroprio is a heteroprio scheduler: it maps tasks to workers
  similarly to HEFT, but first attributes highly accelerated tasks to GPUs, and
  then less accelerated tasks to CPUs.

\section TaskDistributionVsDataTransfer Task Distribution Vs Data Transfer

Distributing tasks to balance the load induces a data transfer penalty. StarPU
thus needs to find a balance between both. The target function that the \b dmda
scheduler of StarPU tries to minimize is <c>alpha * T_execution + beta *
T_data_transfer</c>, where T_execution is the estimated execution time of the
codelet (usually accurate), and T_data_transfer is the estimated data transfer
time. The latter is estimated based on bus calibration before execution start,
i.e. with an idle machine, thus without contention. You can force bus
re-calibration by running the tool \c starpu_calibrate_bus. The beta parameter
defaults to 1, but it can be worth trying to tweak it, e.g. with <c>export
STARPU_SCHED_BETA=2</c> (\ref STARPU_SCHED_BETA), since during real application
execution, contention makes transfer times bigger. This is of course imprecise,
but in practice, a rough estimate already gives the same good results as a
precise estimation would.

\section Energy-basedScheduling Energy-based Scheduling

Note: by default, StarPU does not let CPU workers sleep, so that they react to
task release as quickly as possible. For idle time to really let CPU cores save
energy, one needs to use the \c configure option \ref enable-blocking-drivers
"--enable-blocking-drivers".

If the application can provide some energy consumption performance model
(through the field starpu_codelet::energy_model), StarPU will take it into
account when distributing tasks. The target function that the \b dmda scheduler
minimizes then becomes <c>alpha * T_execution + beta * T_data_transfer + gamma
* Consumption</c>, where Consumption is the estimated task consumption in
Joules. To tune this parameter, use e.g. <c>export STARPU_SCHED_GAMMA=3000</c>
(\ref STARPU_SCHED_GAMMA) to express that each Joule (i.e. kW during 1000 µs)
is worth 3000 µs of execution time penalty. Setting alpha and beta to zero
permits taking only energy consumption into account. This is however not
sufficient to correctly optimize energy: the scheduler would simply tend to run
all computations on the most energy-conservative processing unit.
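For illustration, providing such an energy model amounts to filling the field
starpu_codelet::energy_model mentioned above, in the same way as the execution
time model. A minimal history-based sketch, in which \c my_cpu_impl and \c
time_model are assumed to be defined elsewhere:

\code{.c}
static struct starpu_perfmodel energy_model =
{
	.type = STARPU_HISTORY_BASED,
	.symbol = "my_codelet_energy" /* illustrative symbol name */
};

static struct starpu_codelet cl =
{
	.cpu_funcs = { my_cpu_impl },  /* hypothetical implementation */
	.nbuffers = 2,
	.modes = { STARPU_R, STARPU_RW },
	.model = &time_model,          /* usual execution time model */
	.energy_model = &energy_model, /* estimated consumption, in Joules */
};
\endcode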
To account for the consumption of the whole machine (including idle processing
units), the idle power of the machine should be given by setting <c>export
STARPU_IDLE_POWER=200</c> (\ref STARPU_IDLE_POWER) for 200 W, for instance.
This value can often be obtained from the machine's power supply, e.g. by
running

\verbatim
ipmitool -I lanplus -H mymachine-ipmi -U myuser -P mypasswd sdr type Current
\endverbatim

The energy actually consumed by the total execution can be displayed by setting
<c>export STARPU_PROFILING=1 STARPU_WORKER_STATS=1</c> (\ref STARPU_PROFILING
and \ref STARPU_WORKER_STATS).

For OpenCL devices, on-line task consumption measurement is currently supported
through the OpenCL extension \c CL_PROFILING_POWER_CONSUMED, implemented in the
MoviSim simulator.

For CUDA devices, on-line task consumption measurement is supported on V100
cards and beyond. This however only works for quite long tasks, since the
measurement granularity is about 10 ms.

Applications can alternatively provide explicit measurements by using the
function starpu_perfmodel_update_history() (exemplified in \ref
PerformanceModelExample, with the performance model energy_model). Fine-grain
measurement is often not feasible with the feedback provided by the hardware,
so the user can for instance run a given task a thousand times, measure the
global consumption of that series of tasks, divide it by a thousand, repeat for
varying kinds of tasks and task sizes, and eventually feed StarPU with these
manual measurements through starpu_perfmodel_update_history(). For instance,
for CUDA devices, <c>nvidia-smi -q -d POWER</c> can be used to get the current
consumption in Watts. Multiplying this value by the average duration of a
single task gives the consumption of the task in Joules, which can be given to
starpu_perfmodel_update_history().

Another way to provide the energy performance is to define a perfmodel with
starpu_perfmodel::type ::STARPU_PER_ARCH or ::STARPU_PER_WORKER, and set the
field starpu_perfmodel::arch_cost_function or
starpu_perfmodel::worker_cost_function to a function which shall return the
estimated consumption of the task in Joules. Such a function can for instance
use starpu_task_expected_length() on the task (in µs), multiplied by the
typical power consumption of the device, e.g. in W, and divided by 1000000 to
get Joules (a minimal sketch of such a function is given below).

\subsection MeasuringEnergyandPower Measuring energy and power with StarPU

We have extended the performance model of StarPU to measure energy and power
values of CPUs. These values are measured using the existing Performance API
(PAPI) analysis library. PAPI provides the tool designer and application
engineer with a consistent interface and methodology for the use of the
performance counter hardware found in most major microprocessors. PAPI enables
software engineers to see, in near real time, the relation between software
performance and processor events.

- To measure the energy consumption of CPUs, we use the RAPL events available
  on modern CPU architectures: \c RAPL_ENERGY_PKG, which represents the whole
  CPU socket power consumption, and \c RAPL_ENERGY_DRAM, which represents the
  RAM power consumption. PAPI provides a generic, portable interface for the
  hardware performance counters available on all modern CPUs, and for some
  other components of interest scattered across the chip and system.
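Before detailing the measurement procedure, here is the sketch announced above
of an explicit energy cost function. It is a minimal example, assuming an
illustrative constant \c DEVICE_WATTS for the typical power draw of the device
(not an actual StarPU constant, to be measured on the real hardware):

\code{.c}
/* Assumed typical power draw of the device, in Watts: purely illustrative,
 * measure it on the actual hardware. */
#define DEVICE_WATTS 250.0

static double energy_cost(struct starpu_task *task,
                          struct starpu_perfmodel_arch *arch, unsigned nimpl)
{
	/* Expected duration in microseconds, from the execution time model */
	double length = starpu_task_expected_length(task, arch, nimpl);
	/* microseconds * Watts / 1000000 = Joules */
	return length * DEVICE_WATTS / 1000000.;
}

static struct starpu_perfmodel energy_model =
{
	.type = STARPU_PER_ARCH,
	.symbol = "my_energy_model", /* illustrative symbol name */
	.arch_cost_function = energy_cost,
};
\endcode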
In order to use the right RAPL events for energy measurement, the user should
check which RAPL events are available on the machine, using the command:

\verbatim
$ papi_native_avail
\endverbatim

Depending on the system configuration, the user may have to run this command as
root to get the performance counter values. Since the measurement covers all
the CPUs and the memory, the approach taken here is to run a series of tasks on
all of them and to take the overall measurement.

- The example below illustrates the energy and power measurements, using the
  functions starpu_energy_start() and starpu_energy_stop(). In this example, we
  launch several tasks of the same type in parallel. To measure the energy
  requirements of a program, we call starpu_energy_start(), which initializes
  the energy measurement counters, and starpu_energy_stop(struct
  starpu_perfmodel *model, struct starpu_task *task, unsigned nimpl, unsigned
  ntasks, int workerid, enum starpu_worker_archtype archi), which stops
  counting and updates the performance model. This ends up yielding the average
  energy requirement of a single task. The example below illustrates this for a
  given task type.

\code{.c}
unsigned i;
unsigned N = starpu_cpu_worker_get_count() * 40;
starpu_energy_start(-1, STARPU_CPU_WORKER);
for (i = 0; i < N; i++)
	starpu_task_insert(&cl, STARPU_EXECUTE_WHERE, STARPU_CPU, STARPU_R, arg1, STARPU_RW, arg2, 0);
struct starpu_task *specimen = starpu_task_build(&cl, STARPU_R, arg1, STARPU_RW, arg2, 0);
starpu_energy_stop(cl.energy_model, specimen, 0, N, -1, STARPU_CPU_WORKER);
\endcode

The example starts 40 times more tasks of the same type than there are CPU
execution units. Once the tasks are distributed over all CPUs, the latter are
all executing the same type of tasks (with the same data size and parameters);
each CPU will in the end execute 40 tasks. A specimen task is then constructed
and passed to starpu_energy_stop(), which will fold into the performance model
the energy requirement measured for that type and size of task.

For the energy and power measurements, depending on the system configuration,
the user may have to run the application as root to be able to use the PAPI
library. The function starpu_energy_stop() uses PAPI_stop() to stop counting
and to store the values into an array. Both the energy in Joules and the power
consumption in Watts are computed. The function
starpu_perfmodel_update_history() is then called to feed these explicit
measurements into the performance model.

- In the CUDA case, NVML provides per-GPU energy measurement. We can thus
  calibrate the performance models per GPU:

\code{.c}
unsigned d, i;
unsigned N = 40;
for (d = 0; d < starpu_cuda_worker_get_count(); d++)
{
	int workerid = starpu_worker_get_by_type(STARPU_CUDA_WORKER, d);
	starpu_energy_start(workerid, STARPU_CUDA_WORKER);
	for (i = 0; i < N; i++)
		starpu_task_insert(&cl, STARPU_EXECUTE_ON_WORKER, workerid, STARPU_R, arg1, STARPU_RW, arg2, 0);
	struct starpu_task *specimen = starpu_task_build(&cl, STARPU_R, arg1, STARPU_RW, arg2, 0);
	starpu_energy_stop(cl.energy_model, specimen, 0, N, workerid, STARPU_CUDA_WORKER);
}
\endcode

- A complete example is available in the file
  <c>tests/perfmodels/regression_based_memset.c</c>.

\section StaticScheduling Static Scheduling

In some cases, one may want to force some scheduling, for instance force a
given set of tasks to run on GPU0, another set on GPU1, etc. while letting some
other tasks be scheduled on any other device. This can indeed be useful to
guide StarPU into some work distribution, while still keeping some degree of
dynamism.
For instance, to force the execution of a task on CUDA0:

\code{.c}
task->execute_on_a_specific_worker = 1;
task->workerid = starpu_worker_get_by_type(STARPU_CUDA_WORKER, 0);
\endcode

or equivalently

\code{.c}
starpu_task_insert(&cl, ..., STARPU_EXECUTE_ON_WORKER, starpu_worker_get_by_type(STARPU_CUDA_WORKER, 0), ...);
\endcode

One can also specify a set of workers which are allowed to take the task, as a
bit array, for instance to allow workers 2 and 42:

\code{.c}
task->workerids = calloc(2, sizeof(uint32_t));
task->workerids[2/32] |= (1 << (2%32));
task->workerids[42/32] |= (1 << (42%32));
task->workerids_len = 2;
\endcode

One can also specify the order in which tasks must be executed by setting the
field starpu_task::workerorder. If this field is set to a non-zero value, it
provides the per-worker consecutive order in which tasks will be executed,
starting from 1. For a given such task, the worker will thus not execute it
before all the tasks with a smaller order value have been executed, notably in
case those tasks are not available yet due to some dependencies. This
eventually gives total control over task scheduling, and StarPU will only serve
as a "self-timed" task runtime. Of course, the provided order has to be
runnable, i.e. a task should not depend on another task bound to the same
worker with a bigger order value.

Note however that using scheduling contexts while statically scheduling tasks
on workers could be tricky. Be careful to schedule the tasks exactly on the
workers of the corresponding contexts, otherwise the workers' corresponding
scheduling structures may not be allocated, or the execution of the application
may deadlock. Moreover, the hypervisor should not be used when statically
scheduling tasks.

\section configuringHeteroprio Configuring Heteroprio

Within Heteroprio, one priority per processing unit type is assigned to each
task, such that a task has several priorities. Each worker pops the task that
has the highest priority for the hardware type it uses, which could be CPU or
CUDA for example. Therefore, priorities have to be used to manage the critical
path, but also to promote the consumption of tasks by the most appropriate
workers.

The tasks are stored inside buckets, where each bucket corresponds to a
priority set. Then, each worker uses an indirect access array to know the order
in which it should access the buckets. Moreover, all the tasks inside a bucket
must be compatible with (at least) all the processing units that may access it.

These priorities are now automatically assigned by Heteroprio in
auto-calibration mode, using heuristics. If you want to set these priorities
manually, you can change \ref STARPU_HETEROPRIO_USE_AUTO_CALIBRATION and follow
the example below. In this example code, we have 5 types of tasks. CPU workers
can compute all of them, but CUDA workers can only execute tasks of types 0 and
1, which are expected to run 20 and 30 times faster than on CPU, respectively.
\code{.c}
#include <starpu.h>

// Before calling starpu_init
struct starpu_conf conf;
starpu_conf_init(&conf);
// Tell StarPU to use Heteroprio
conf.sched_policy_name = "heteroprio";
// Tell StarPU about the function that will initialize the priorities in
// Heteroprio, where init_heteroprio is a function to implement (see below)
conf.sched_policy_init = &init_heteroprio;
// Do other things with conf if needed, then initialize StarPU
starpu_init(&conf);
\endcode

\code{.c}
#include <starpu_heteroprio.h>

void init_heteroprio(unsigned ctx)
{
	// CPU uses 5 buckets and visits them in the natural order
	starpu_heteroprio_set_nb_prios(ctx, STARPU_CPU_WORKER, 5);
	// It uses the direct mapping idx => idx
	for (unsigned idx = 0; idx < 5; ++idx)
	{
		starpu_heteroprio_set_mapping(ctx, STARPU_CPU_WORKER, idx, idx);
		// If there is no CUDA worker, we must say that CPU is faster
		starpu_heteroprio_set_faster_arch(ctx, STARPU_CPU_WORKER, idx);
	}

	if (starpu_cuda_worker_get_count())
	{
		// CUDA is enabled and uses 2 buckets
		starpu_heteroprio_set_nb_prios(ctx, STARPU_CUDA_WORKER, 2);
		// CUDA will first look at bucket 1
		starpu_heteroprio_set_mapping(ctx, STARPU_CUDA_WORKER, 0, 1);
		// CUDA will then look at bucket 0
		starpu_heteroprio_set_mapping(ctx, STARPU_CUDA_WORKER, 1, 0);
		// For bucket 1, CUDA is the fastest
		starpu_heteroprio_set_faster_arch(ctx, STARPU_CUDA_WORKER, 1);
		// and CPU is 30 times slower
		starpu_heteroprio_set_arch_slow_factor(ctx, STARPU_CPU_WORKER, 1, 30.0f);
		// For bucket 0, CUDA is the fastest
		starpu_heteroprio_set_faster_arch(ctx, STARPU_CUDA_WORKER, 0);
		// and CPU is 20 times slower
		starpu_heteroprio_set_arch_slow_factor(ctx, STARPU_CPU_WORKER, 0, 20.0f);
	}
}
\endcode

Then, when a task is inserted, its priority will be used to select the bucket
in which it has to be stored. So, in the given example, the priority of a task
will be between 0 and 4 inclusive. However, tasks of priorities 0-1 must
provide CPU and CUDA kernels, and tasks of priorities 2-4 must provide CPU
kernels (at least).

\subsection LAHeteroprio Using locality aware Heteroprio

Heteroprio supports a mode where locality is evaluated to guide the
distribution of the tasks (see https://peerj.com/articles/cs-190.pdf).
Currently, this mode can be enabled with the dedicated function below or with
the environment variable \ref STARPU_HETEROPRIO_USE_LA, and can be configured
using environment variables.

\code{.c}
void starpu_heteroprio_set_use_locality(unsigned sched_ctx_id, unsigned use_locality);
\endcode

In this mode, multiple strategies are available to determine which memory
node's workers are the most qualified for executing a specific task. The
strategy can be set with \ref STARPU_LAHETEROPRIO_PUSH, and the available
strategies are:

- WORKER: the worker which pushed the task is preferred for the execution.
- LCs: the node with the shortest data transfer time (estimated by StarPU) is
  the most qualified.
- LS_SDH: the node with the smallest amount of data to be transferred will be
  preferred.
- LS_SDH2: similar to LS_SDH, but data in write access is counted in a
  quadratic manner to give it more importance.
- LS_SDHB: similar to LS_SDH, but data in write access is balanced with a
  coefficient (its value is set to 1000), and for the same amount of data, the
  node with fewer pieces of data to be transferred will be preferred.
- LC_SMWB: similar to LS_SDH, but the amount of data in write access gets
  multiplied by a coefficient which gets closer to 2 as the amount of data in
  read access gets larger than the data in write access.
- AUTO: the default strategy; it selects the best strategy and changes it at
  runtime to improve performance.

Other environment variables to configure LaHeteroprio are documented in \ref
ConfiguringLaHeteroprio.

\subsection AutoHeteroprio Using Heteroprio in auto-calibration mode

In this mode, Heteroprio saves data about each program execution, in order to
improve future ones. By default, these files are stored in the folder used by
the performance models, but this can be changed using the \ref
STARPU_HETEROPRIO_DATA_DIR environment variable. You can also specify the data
filename directly with \ref STARPU_HETEROPRIO_DATA_FILE.

Additionally, to assign priorities to tasks, Heteroprio needs a way to detect
that some tasks are similar. By default, Heteroprio looks for tasks with the
same performance model, or with the same codelet name if no performance model
was assigned. This behavior can be changed to only consider the codelet name,
by setting \ref STARPU_HETEROPRIO_CODELET_GROUPING_STRATEGY to 1.

Other environment variables to configure AutoHeteroprio are documented in \ref
ConfiguringAutoHeteroprio.

*/