/* StarPU --- Runtime system for heterogeneous multicore architectures.
 *
 * Copyright (C) 2009-2021 Université de Bordeaux, CNRS (LaBRI UMR 5800), Inria
 * Copyright (C) 2016 Uppsala University
 * Copyright (C) 2020 Federal University of Rio Grande do Sul (UFRGS)
 *
 * StarPU is free software; you can redistribute it and/or modify
 * it under the terms of the GNU Lesser General Public License as published by
 * the Free Software Foundation; either version 2.1 of the License, or (at
 * your option) any later version.
 *
 * StarPU is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
 *
 * See the GNU Lesser General Public License in COPYING.LGPL for more details.
 */

/*! \page ExecutionConfigurationThroughEnvironmentVariables Execution Configuration Through Environment Variables

The behavior of the StarPU library and tools can be tuned through the following environment variables.

\section EnvConfiguringWorkers Configuring Workers

\subsection Basic General Configuration
STARPU_WORKERS_NOBIND
\anchor STARPU_WORKERS_NOBIND \addindex __env__STARPU_WORKERS_NOBIND Setting it to non-zero will prevent StarPU from binding its threads to CPUs. This is for instance useful when running the testsuite in parallel.
STARPU_WORKERS_GETBIND
\anchor STARPU_WORKERS_GETBIND \addindex __env__STARPU_WORKERS_GETBIND Setting it to non-zero makes StarPU use the OS-provided CPU binding to determine how many and which CPU cores it should use. This is notably useful when running several StarPU-MPI processes on the same host, to let the MPI launcher set the CPUs to be used.
STARPU_WORKERS_CPUID
\anchor STARPU_WORKERS_CPUID \addindex __env__STARPU_WORKERS_CPUID Passing an array of integers in \ref STARPU_WORKERS_CPUID specifies on which logical CPU the different workers should be bound. For instance, if STARPU_WORKERS_CPUID = "0 1 4 5", the first worker will be bound to logical CPU #0, the second worker will be bound to logical CPU #1, and so on. Note that the logical ordering of the CPUs is either determined by the OS, or provided by the hwloc library in case it is available. Ranges can be provided: for instance, STARPU_WORKERS_CPUID = "1-3 5" will bind the first three workers on logical CPUs #1, #2, and #3, and the fourth worker on logical CPU #5. Unbound ranges can also be provided: STARPU_WORKERS_CPUID = "1-" will bind the workers starting from logical CPU #1 up to the last CPU. Note that the first workers correspond to the CUDA workers, then come the OpenCL workers, and finally the CPU workers. For example, with STARPU_NCUDA=1, STARPU_NOPENCL=1, STARPU_NCPU=2 and STARPU_WORKERS_CPUID = "0 2 1 3", the CUDA device will be controlled by logical CPU #0, the OpenCL device will be controlled by logical CPU #2, and logical CPUs #1 and #3 will be used by the CPU workers. If the number of workers is larger than the array given in \ref STARPU_WORKERS_CPUID, the workers are bound to the logical CPUs in a round-robin fashion: if STARPU_WORKERS_CPUID = "0 1", the first and third (resp. second and fourth) workers will be put on CPU #0 (resp. CPU #1). This variable is ignored if the field starpu_conf::use_explicit_workers_bindid passed to starpu_init() is set.
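The binding from the example above can be set from the shell before launching the application; a minimal sketch, where the binary name \c ./my_app is a placeholder:

```shell
# 1 CUDA worker, 1 OpenCL worker, 2 CPU workers (values from the example above)
export STARPU_NCUDA=1
export STARPU_NOPENCL=1
export STARPU_NCPU=2
# CUDA worker on logical CPU #0, OpenCL worker on #2, CPU workers on #1 and #3
export STARPU_WORKERS_CPUID="0 2 1 3"
./my_app   # placeholder for the actual StarPU application
```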
STARPU_WORKERS_COREID
\anchor STARPU_WORKERS_COREID \addindex __env__STARPU_WORKERS_COREID Same as \ref STARPU_WORKERS_CPUID, but bind the workers to cores instead of PUs (hyperthreads).
STARPU_MAIN_THREAD_BIND
\anchor STARPU_MAIN_THREAD_BIND \addindex __env__STARPU_MAIN_THREAD_BIND When defined, this makes StarPU bind the thread that calls starpu_initialize() to a reserved CPU, subtracted from the CPU workers.
STARPU_MAIN_THREAD_CPUID
\anchor STARPU_MAIN_THREAD_CPUID \addindex __env__STARPU_MAIN_THREAD_CPUID When defined, this makes StarPU bind the thread that calls starpu_initialize() to the given CPU ID.
STARPU_MAIN_THREAD_COREID
\anchor STARPU_MAIN_THREAD_COREID \addindex __env__STARPU_MAIN_THREAD_COREID Same as \ref STARPU_MAIN_THREAD_CPUID, but bind the thread that calls starpu_initialize() to the given core, instead of the PU (hyperthread).
STARPU_WORKER_TREE
\anchor STARPU_WORKER_TREE \addindex __env__STARPU_WORKER_TREE Define to 1 to enable the tree iterator in schedulers.
STARPU_SINGLE_COMBINED_WORKER
\anchor STARPU_SINGLE_COMBINED_WORKER \addindex __env__STARPU_SINGLE_COMBINED_WORKER If set, StarPU will create several workers which won't be able to work concurrently. It will by default create combined workers whose sizes range from 1 to the total number of CPU workers in the system. \ref STARPU_MIN_WORKERSIZE and \ref STARPU_MAX_WORKERSIZE can be used to change this default.
STARPU_MIN_WORKERSIZE
\anchor STARPU_MIN_WORKERSIZE \addindex __env__STARPU_MIN_WORKERSIZE Specify the minimum size of the combined workers. Default value is 2.
STARPU_MAX_WORKERSIZE
\anchor STARPU_MAX_WORKERSIZE \addindex __env__STARPU_MAX_WORKERSIZE Specify the maximum size of the combined workers. Default value is the number of CPU workers in the system.
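A sketch combining the three variables above, restricting the sizes of the combined workers (the binary name \c ./my_app is a placeholder):

```shell
# Create combined workers of sizes 2 to 4 only
export STARPU_SINGLE_COMBINED_WORKER=1
export STARPU_MIN_WORKERSIZE=2
export STARPU_MAX_WORKERSIZE=4
./my_app   # placeholder for the actual StarPU application
```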
STARPU_SYNTHESIZE_ARITY_COMBINED_WORKER
\anchor STARPU_SYNTHESIZE_ARITY_COMBINED_WORKER \addindex __env__STARPU_SYNTHESIZE_ARITY_COMBINED_WORKER Specify how many elements are allowed between combined workers created from \c hwloc information. For instance, in the case of sockets with 6 cores without shared L2 caches, if \ref STARPU_SYNTHESIZE_ARITY_COMBINED_WORKER is set to 6, no combined worker will be synthesized beyond one for the socket and one per core. If it is set to 3, 3 intermediate combined workers will be synthesized, to divide the socket cores into 3 chunks of 2 cores. If it is set to 2, 2 intermediate combined workers will be synthesized, to divide the socket cores into 2 chunks of 3 cores, and then 3 additional combined workers will be synthesized, to divide the former synthesized workers into a bunch of 2 cores and the remaining core (for which no combined worker is synthesized since there is already a normal worker for it). The default, 2, thus makes StarPU tend to build a binary tree of combined workers.
STARPU_DISABLE_ASYNCHRONOUS_COPY
\anchor STARPU_DISABLE_ASYNCHRONOUS_COPY \addindex __env__STARPU_DISABLE_ASYNCHRONOUS_COPY Disable asynchronous copies between CPU and GPU devices. The AMD implementation of OpenCL is known to fail when copying data asynchronously. When using this implementation, it is therefore necessary to disable asynchronous data transfers. See also \ref STARPU_DISABLE_ASYNCHRONOUS_CUDA_COPY and \ref STARPU_DISABLE_ASYNCHRONOUS_OPENCL_COPY.
STARPU_DISABLE_PINNING
\anchor STARPU_DISABLE_PINNING \addindex __env__STARPU_DISABLE_PINNING Disable (1) or Enable (0) pinning of host memory allocated through starpu_malloc(), starpu_memory_pin() and friends. The default is Enabled. This makes it possible to test the performance effect of memory pinning.
STARPU_BACKOFF_MIN
\anchor STARPU_BACKOFF_MIN \addindex __env__STARPU_BACKOFF_MIN Set the minimum number of cycles to pause when spinning (exponential backoff). Default value is 1.
STARPU_BACKOFF_MAX
\anchor STARPU_BACKOFF_MAX \addindex __env__STARPU_BACKOFF_MAX Set the maximum number of cycles to pause when spinning (exponential backoff). Default value is 32.
STARPU_SINK
\anchor STARPU_SINK \addindex __env__STARPU_SINK Defined internally by StarPU when running in master-slave mode.
\subsection cpuWorkers CPU Workers
STARPU_NCPU
\anchor STARPU_NCPU \addindex __env__STARPU_NCPU Specify the number of CPU workers (thus not including workers dedicated to control accelerators). Note that by default, StarPU will not allocate more CPU workers than there are physical CPUs, and that some CPUs are used to control the accelerators.
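As a minimal configuration sketch, restricting the number of CPU workers from the shell (the binary name \c ./my_app is a placeholder):

```shell
# Use only 8 CPU workers, e.g. to leave cores free for another process
export STARPU_NCPU=8
./my_app   # placeholder for the actual StarPU application
```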
STARPU_RESERVE_NCPU
\anchor STARPU_RESERVE_NCPU \addindex __env__STARPU_RESERVE_NCPU Specify the number of CPU cores that should not be used by StarPU, so the application can use starpu_get_next_bindid() and starpu_bind_thread_on() to bind its own threads. This option is ignored if \ref STARPU_NCPU or starpu_conf::ncpus is set.
STARPU_NCPUS
\anchor STARPU_NCPUS \addindex __env__STARPU_NCPUS This variable is deprecated. You should use \ref STARPU_NCPU.
\subsection cudaWorkers CUDA Workers
STARPU_NCUDA
\anchor STARPU_NCUDA \addindex __env__STARPU_NCUDA Specify the number of CUDA devices that StarPU can use. If \ref STARPU_NCUDA is lower than the number of physical devices, it is possible to select which GPU devices should be used by means of the environment variable \ref STARPU_WORKERS_CUDAID. By default, StarPU will create as many CUDA workers as there are GPU devices.
STARPU_NWORKER_PER_CUDA
\anchor STARPU_NWORKER_PER_CUDA \addindex __env__STARPU_NWORKER_PER_CUDA Specify the number of workers per CUDA device, and thus the number of kernels which will be concurrently running on the devices, i.e. the number of CUDA streams. The default value is 1.
STARPU_CUDA_THREAD_PER_WORKER
\anchor STARPU_CUDA_THREAD_PER_WORKER \addindex __env__STARPU_CUDA_THREAD_PER_WORKER Specify whether the CUDA driver should use one thread per stream (1), or a single thread to drive all the streams of a device or of all devices (0); \ref STARPU_CUDA_THREAD_PER_DEV then determines whether it is one thread per device or one thread for all devices. The default value is 0. Setting it to 1 is contradictory with setting \ref STARPU_CUDA_THREAD_PER_DEV.
STARPU_CUDA_THREAD_PER_DEV
\anchor STARPU_CUDA_THREAD_PER_DEV \addindex __env__STARPU_CUDA_THREAD_PER_DEV Specify whether the CUDA driver should use one thread per device (1), or a single thread to drive all the devices (0). The default value is 1. It does not make sense to set this variable if \ref STARPU_CUDA_THREAD_PER_WORKER is set to 1 (since \ref STARPU_CUDA_THREAD_PER_DEV is then meaningless).
STARPU_CUDA_PIPELINE
\anchor STARPU_CUDA_PIPELINE \addindex __env__STARPU_CUDA_PIPELINE Specify how many asynchronous tasks are submitted in advance on CUDA devices. This for instance makes it possible to overlap task management with the execution of previous tasks; it also allows concurrent execution on Fermi cards, which would otherwise suffer from spurious synchronizations. The default is 2. Setting the value to 0 forces a synchronous execution of all tasks.
STARPU_WORKERS_CUDAID
\anchor STARPU_WORKERS_CUDAID \addindex __env__STARPU_WORKERS_CUDAID Similarly to the \ref STARPU_WORKERS_CPUID environment variable, it is possible to select which CUDA devices should be used by StarPU. On a machine equipped with 4 GPUs, setting STARPU_WORKERS_CUDAID = "1 3" and STARPU_NCUDA=2 specifies that 2 CUDA workers should be created, and that they should use CUDA devices #1 and #3 (the logical ordering of the devices is the one reported by CUDA). This variable is ignored if the field starpu_conf::use_explicit_workers_cuda_gpuid passed to starpu_init() is set.
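The device selection described above, written as a configuration fragment (the binary name \c ./my_app is a placeholder):

```shell
# On a machine with 4 GPUs, create 2 CUDA workers on CUDA devices #1 and #3
export STARPU_NCUDA=2
export STARPU_WORKERS_CUDAID="1 3"
./my_app   # placeholder for the actual StarPU application
```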
STARPU_DISABLE_ASYNCHRONOUS_CUDA_COPY
\anchor STARPU_DISABLE_ASYNCHRONOUS_CUDA_COPY \addindex __env__STARPU_DISABLE_ASYNCHRONOUS_CUDA_COPY Disable asynchronous copies between CPU and CUDA devices. See also \ref STARPU_DISABLE_ASYNCHRONOUS_COPY and \ref STARPU_DISABLE_ASYNCHRONOUS_OPENCL_COPY.
STARPU_ENABLE_CUDA_GPU_GPU_DIRECT
\anchor STARPU_ENABLE_CUDA_GPU_GPU_DIRECT \addindex __env__STARPU_ENABLE_CUDA_GPU_GPU_DIRECT Enable (1) or Disable (0) direct CUDA transfers from GPU to GPU, without copying through RAM. The default is Enabled. This makes it possible to test the performance effect of GPU-Direct.
STARPU_CUDA_ONLY_FAST_ALLOC_OTHER_MEMNODES
\anchor STARPU_CUDA_ONLY_FAST_ALLOC_OTHER_MEMNODES \addindex __env__STARPU_CUDA_ONLY_FAST_ALLOC_OTHER_MEMNODES Specify if CUDA workers should do only fast allocations when running the datawizard progress of other memory nodes. This will pass the internal value _STARPU_DATAWIZARD_ONLY_FAST_ALLOC to allocation methods. Default value is 0, allowing CUDA workers to do slow allocations. This can also be specified with starpu_conf::cuda_only_fast_alloc_other_memnodes.
\subsection openclWorkers OpenCL Workers
STARPU_NOPENCL
\anchor STARPU_NOPENCL \addindex __env__STARPU_NOPENCL Specify the number of OpenCL devices that StarPU can use. If \ref STARPU_NOPENCL is lower than the number of physical devices, it is possible to select which GPU devices should be used by means of the environment variable \ref STARPU_WORKERS_OPENCLID. By default, StarPU will create as many OpenCL workers as there are GPU devices. Note that by default StarPU will launch CUDA workers on GPU devices. You need to disable CUDA to allow the creation of OpenCL workers.
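Since CUDA workers take the GPUs by default, running with OpenCL workers requires disabling CUDA first; a sketch (the binary name \c ./my_app is a placeholder):

```shell
# Disable CUDA so that OpenCL workers can be created on the GPU devices
export STARPU_NCUDA=0
export STARPU_NOPENCL=2
./my_app   # placeholder for the actual StarPU application
```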
STARPU_WORKERS_OPENCLID
\anchor STARPU_WORKERS_OPENCLID \addindex __env__STARPU_WORKERS_OPENCLID Similarly to the \ref STARPU_WORKERS_CPUID environment variable, it is possible to select which GPU devices should be used by StarPU. On a machine equipped with 4 GPUs, setting STARPU_WORKERS_OPENCLID = "1 3" and STARPU_NOPENCL=2 specifies that 2 OpenCL workers should be created, and that they should use GPU devices #1 and #3. This variable is ignored if the field starpu_conf::use_explicit_workers_opencl_gpuid passed to starpu_init() is set.
STARPU_OPENCL_PIPELINE
\anchor STARPU_OPENCL_PIPELINE \addindex __env__STARPU_OPENCL_PIPELINE Specify how many asynchronous tasks are submitted in advance on OpenCL devices. This for instance makes it possible to overlap task management with the execution of previous tasks; it also allows concurrent execution on Fermi cards, which would otherwise suffer from spurious synchronizations. The default is 2. Setting the value to 0 forces a synchronous execution of all tasks.
STARPU_OPENCL_ON_CPUS
\anchor STARPU_OPENCL_ON_CPUS \addindex __env__STARPU_OPENCL_ON_CPUS By default, the OpenCL driver only enables GPU and accelerator devices. By setting the environment variable \ref STARPU_OPENCL_ON_CPUS to 1, the OpenCL driver will also enable CPU devices.
STARPU_OPENCL_ONLY_ON_CPUS
\anchor STARPU_OPENCL_ONLY_ON_CPUS \addindex __env__STARPU_OPENCL_ONLY_ON_CPUS By default, the OpenCL driver enables GPU and accelerator devices. By setting the environment variable \ref STARPU_OPENCL_ONLY_ON_CPUS to 1, the OpenCL driver will ONLY enable CPU devices.
STARPU_DISABLE_ASYNCHRONOUS_OPENCL_COPY
\anchor STARPU_DISABLE_ASYNCHRONOUS_OPENCL_COPY \addindex __env__STARPU_DISABLE_ASYNCHRONOUS_OPENCL_COPY Disable asynchronous copies between CPU and OpenCL devices. The AMD implementation of OpenCL is known to fail when copying data asynchronously. When using this implementation, it is therefore necessary to disable asynchronous data transfers. See also \ref STARPU_DISABLE_ASYNCHRONOUS_COPY and \ref STARPU_DISABLE_ASYNCHRONOUS_CUDA_COPY.
\subsection mpimsWorkers MPI Master Slave Workers
STARPU_NMPI_MS
\anchor STARPU_NMPI_MS \addindex __env__STARPU_NMPI_MS Specify the number of MPI master-slave devices that StarPU can use.
STARPU_NMPIMSTHREADS
\anchor STARPU_NMPIMSTHREADS \addindex __env__STARPU_NMPIMSTHREADS Number of threads to use on the MPI Slave devices.
STARPU_MPI_MASTER_NODE
\anchor STARPU_MPI_MASTER_NODE \addindex __env__STARPU_MPI_MASTER_NODE This variable allows choosing which MPI node (by its MPI ID) will be the master.
STARPU_DISABLE_ASYNCHRONOUS_MPI_MS_COPY
\anchor STARPU_DISABLE_ASYNCHRONOUS_MPI_MS_COPY \addindex __env__STARPU_DISABLE_ASYNCHRONOUS_MPI_MS_COPY Disable asynchronous copies between CPU and MPI Slave devices.
\subsection mpiConf MPI Configuration
STARPU_MPI_THREAD_CPUID
\anchor STARPU_MPI_THREAD_CPUID \addindex __env__STARPU_MPI_THREAD_CPUID When defined, this makes StarPU bind its MPI thread to the given CPU ID. Setting it to -1 (the default value) will use a reserved CPU, subtracted from the CPU workers.
STARPU_MPI_THREAD_COREID
\anchor STARPU_MPI_THREAD_COREID \addindex __env__STARPU_MPI_THREAD_COREID Same as \ref STARPU_MPI_THREAD_CPUID, but bind the MPI thread to the given core ID, instead of the PU (hyperthread).
STARPU_MPI_NOBIND
\anchor STARPU_MPI_NOBIND \addindex __env__STARPU_MPI_NOBIND Setting it to non-zero will prevent StarPU from binding the MPI thread to a separate core. This is for instance useful when running the testsuite on a single system.
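A sketch of the MPI thread binding above; the binary name \c ./my_mpi_app and the \c mpirun launch line are placeholders for the actual application and launcher:

```shell
# Bind the MPI progression thread to core #2 instead of a reserved CPU
export STARPU_MPI_THREAD_COREID=2
mpirun -np 4 ./my_mpi_app   # placeholder application and launcher
```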
\section ConfiguringTheSchedulingEngine Configuring The Scheduling Engine
STARPU_SCHED
\anchor STARPU_SCHED \addindex __env__STARPU_SCHED Choose between the different scheduling policies proposed by StarPU: random, work stealing, greedy, policies with performance models, etc. Use STARPU_SCHED=help to get the list of available schedulers.
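For instance, listing the available policies and then selecting one; \c dmda is one of the performance-model policies mentioned with \ref STARPU_CALIBRATE, and the binary name \c ./my_app is a placeholder:

```shell
# Print the list of available scheduling policies
STARPU_SCHED=help ./my_app   # placeholder for the actual StarPU application
# Run with the dmda policy
STARPU_SCHED=dmda ./my_app
```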
STARPU_MIN_PRIO
\anchor STARPU_MIN_PRIO_env \addindex __env__STARPU_MIN_PRIO Set the minimum priority used by priorities-aware schedulers. The flag can also be set through the field starpu_conf::global_sched_ctx_min_priority.
STARPU_MAX_PRIO
\anchor STARPU_MAX_PRIO_env \addindex __env__STARPU_MAX_PRIO Set the maximum priority used by priorities-aware schedulers. The flag can also be set through the field starpu_conf::global_sched_ctx_max_priority.
STARPU_CALIBRATE
\anchor STARPU_CALIBRATE \addindex __env__STARPU_CALIBRATE If this variable is set to 1, the performance models are calibrated during the execution. If it is set to 2, the previous values are dropped to restart calibration from scratch. Setting this variable to 0 disables calibration; this is the default behaviour. Note: this currently only applies to dm and dmda scheduling policies.
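A typical calibration workflow, sketched as a configuration fragment (the binary name \c ./my_app is a placeholder):

```shell
# First runs: calibrate the performance models with the dmda policy
STARPU_SCHED=dmda STARPU_CALIBRATE=1 ./my_app   # placeholder application
# Later runs: reuse the calibrated models
STARPU_SCHED=dmda ./my_app
```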
STARPU_CALIBRATE_MINIMUM
\anchor STARPU_CALIBRATE_MINIMUM \addindex __env__STARPU_CALIBRATE_MINIMUM Define the minimum number of calibration measurements that will be made before considering that the performance model is calibrated. The default value is 10.
STARPU_BUS_CALIBRATE
\anchor STARPU_BUS_CALIBRATE \addindex __env__STARPU_BUS_CALIBRATE If this variable is set to 1, the bus is recalibrated during initialization.
STARPU_PREFETCH
\anchor STARPU_PREFETCH \addindex __env__STARPU_PREFETCH Indicate whether data prefetching should be enabled (0 means that it is disabled). If prefetching is enabled, when a task is scheduled to be executed e.g. on a GPU, StarPU will request an asynchronous transfer in advance, so that data is already present on the GPU when the task starts. As a result, computation and data transfers are overlapped. Note that prefetching is enabled by default in StarPU.
STARPU_SCHED_ALPHA
\anchor STARPU_SCHED_ALPHA \addindex __env__STARPU_SCHED_ALPHA To estimate the cost of a task StarPU takes into account the estimated computation time (obtained thanks to performance models). The alpha factor is the coefficient to be applied to it before adding it to the communication part.
STARPU_SCHED_BETA
\anchor STARPU_SCHED_BETA \addindex __env__STARPU_SCHED_BETA To estimate the cost of a task StarPU takes into account the estimated data transfer time (obtained thanks to performance models). The beta factor is the coefficient to be applied to it before adding it to the computation part.
STARPU_SCHED_GAMMA
\anchor STARPU_SCHED_GAMMA \addindex __env__STARPU_SCHED_GAMMA Define the execution time penalty of a joule (\ref Energy-basedScheduling).
STARPU_SCHED_READY
\anchor STARPU_SCHED_READY \addindex __env__STARPU_SCHED_READY For a modular scheduler with sorted queues below the decision component, workers pick up a task which has most of its data already available. Setting this to 0 disables this.
STARPU_SCHED_SORTED_ABOVE
\anchor STARPU_SCHED_SORTED_ABOVE \addindex __env__STARPU_SCHED_SORTED_ABOVE For a modular scheduler with queues above the decision component, the queues are usually sorted by priority. Setting this to 0 disables this.
STARPU_SCHED_SORTED_BELOW
\anchor STARPU_SCHED_SORTED_BELOW \addindex __env__STARPU_SCHED_SORTED_BELOW For a modular scheduler with queues below the decision component, they are usually sorted by priority. Setting this to 0 disables this.
STARPU_IDLE_POWER
\anchor STARPU_IDLE_POWER \addindex __env__STARPU_IDLE_POWER Define the idle power of the machine (\ref Energy-basedScheduling).
STARPU_PROFILING
\anchor STARPU_PROFILING \addindex __env__STARPU_PROFILING Enable on-line performance monitoring (\ref EnablingOn-linePerformanceMonitoring).
STARPU_PROF_PAPI_EVENTS
\anchor STARPU_PROF_PAPI_EVENTS \addindex __env__STARPU_PROF_PAPI_EVENTS Specify which PAPI events should be recorded in the trace (\ref PapiCounters).
\section ConfiguringHeteroprio Configuring The Heteroprio Scheduler

\subsection ConfiguringLaHeteroprio Configuring LAHeteroprio
STARPU_HETEROPRIO_USE_LA
\anchor STARPU_HETEROPRIO_USE_LA \addindex __env__STARPU_HETEROPRIO_USE_LA Enable the locality aware mode of Heteroprio which guides the distribution of tasks to workers in order to reduce the data transfers between memory nodes.
STARPU_LAHETEROPRIO_PUSH
\anchor STARPU_LAHETEROPRIO_PUSH \addindex __env__STARPU_LAHETEROPRIO_PUSH Choose between the different push strategies for locality aware Heteroprio: WORKER, LcS, LS_SDH, LS_SDH2, LS_SDHB, LC_SMWB, AUTO (by default: AUTO). These are detailed in \ref LAHeteroprio.
STARPU_LAHETEROPRIO_S_[ARCH]
\anchor STARPU_LAHETEROPRIO_S_[ARCH] \addindex __env__STARPU_LAHETEROPRIO_S_arch Specify the number of memory nodes contained in an affinity group. An affinity group will be composed of the closest memory nodes to a worker of a given architecture, and this worker will look for tasks available inside these memory nodes, before considering stealing tasks outside this group. ARCH can be CPU, CUDA, OPENCL, MICC, SCC, MPI_MS, etc.
STARPU_LAHETEROPRIO_PRIO_STEP_[ARCH]
\anchor STARPU_LAHETEROPRIO_PRIO_STEP_[ARCH] \addindex __env__STARPU_LAHETEROPRIO_PRIO_STEP_arch Specify the number of buckets in the local memory node in which a worker will look for available tasks, before this worker starts looking for tasks in other memory nodes' buckets. ARCH indicates that this number is specific to a given arch which can be: CPU, CUDA, OPENCL, MICC, SCC, MPI_MS, etc.
\subsection ConfiguringAutoHeteroprio Configuring AutoHeteroprio
STARPU_HETEROPRIO_USE_AUTO_CALIBRATION
\anchor STARPU_HETEROPRIO_USE_AUTO_CALIBRATION \addindex __env__STARPU_HETEROPRIO_USE_AUTO_CALIBRATION Enable the auto calibration mode of Heteroprio which assigns priorities to tasks automatically.
STARPU_HETEROPRIO_DATA_DIR
\anchor STARPU_HETEROPRIO_DATA_DIR \addindex __env__STARPU_HETEROPRIO_DATA_DIR Specify the path of the directory where Heteroprio stores data about program executions. By default, these are stored in the same directory used by perfmodel.
STARPU_HETEROPRIO_DATA_FILE
\anchor STARPU_HETEROPRIO_DATA_FILE \addindex __env__STARPU_HETEROPRIO_DATA_FILE Specify the filename where Heteroprio will save data about the current program's execution.
STARPU_HETEROPRIO_CODELET_GROUPING_STRATEGY
\anchor STARPU_HETEROPRIO_CODELET_GROUPING_STRATEGY \addindex __env__STARPU_HETEROPRIO_CODELET_GROUPING_STRATEGY Choose how Heteroprio groups similar tasks. Set to 0 to group tasks that have the same perfmodel (or the same codelet name if no perfmodel was assigned), or to 1 to group tasks by codelet name only.
STARPU_AUTOHETEROPRIO_PRINT_DATA_ON_UPDATE
\anchor STARPU_AUTOHETEROPRIO_PRINT_DATA_ON_UPDATE \addindex __env__STARPU_AUTOHETEROPRIO_PRINT_DATA_ON_UPDATE Enable the printing of priorities' data every time they get updated.
STARPU_AUTOHETEROPRIO_PRINT_AFTER_ORDERING
\anchor STARPU_AUTOHETEROPRIO_PRINT_AFTER_ORDERING \addindex __env__STARPU_AUTOHETEROPRIO_PRINT_AFTER_ORDERING Enable the printing of priorities' order for each architecture every time there's a reordering.
STARPU_AUTOHETEROPRIO_PRIORITY_ORDERING_POLICY
\anchor STARPU_AUTOHETEROPRIO_PRIORITY_ORDERING_POLICY \addindex __env__STARPU_AUTOHETEROPRIO_PRIORITY_ORDERING_POLICY Specify the heuristic which will be used to assign priorities automatically. It should be an integer between 0 and 27.
STARPU_AUTOHETEROPRIO_ORDERING_INTERVAL
\anchor STARPU_AUTOHETEROPRIO_ORDERING_INTERVAL \addindex __env__STARPU_AUTOHETEROPRIO_ORDERING_INTERVAL Specify the period (in number of tasks pushed) between priority reordering operations.
STARPU_AUTOHETEROPRIO_FREEZE_GATHERING
\anchor STARPU_AUTOHETEROPRIO_FREEZE_GATHERING \addindex __env__STARPU_AUTOHETEROPRIO_FREEZE_GATHERING Disable data gathering from task executions.
\section Extensions Extensions
SOCL_OCL_LIB_OPENCL
\anchor SOCL_OCL_LIB_OPENCL \addindex __env__SOCL_OCL_LIB_OPENCL The SOCL test suite is only run when the environment variable \ref SOCL_OCL_LIB_OPENCL is defined. It should contain the location of the file libOpenCL.so of the OCL ICD implementation.
OCL_ICD_VENDORS
\anchor OCL_ICD_VENDORS \addindex __env__OCL_ICD_VENDORS When using SOCL with OpenCL ICD (https://forge.imag.fr/projects/ocl-icd/), this variable may be used to point to the directory where ICD files are installed. The default directory is /etc/OpenCL/vendors. StarPU installs ICD files in the directory $prefix/share/starpu/opencl/vendors.
STARPU_COMM_STATS
\anchor STARPU_COMM_STATS \addindex __env__STARPU_COMM_STATS Communication statistics for starpumpi (\ref MPIDebug) will be enabled when the environment variable \ref STARPU_COMM_STATS is defined to a value other than 0.
STARPU_MPI_CACHE
\anchor STARPU_MPI_CACHE \addindex __env__STARPU_MPI_CACHE Communication cache for starpumpi (\ref MPISupport) will be disabled when the environment variable \ref STARPU_MPI_CACHE is set to 0. It is enabled by default, and for any other value of the variable.
STARPU_MPI_COMM
\anchor STARPU_MPI_COMM \addindex __env__STARPU_MPI_COMM Communication trace for starpumpi (\ref MPISupport) will be enabled when the environment variable \ref STARPU_MPI_COMM is set to 1, and StarPU has been configured with the option \ref enable-verbose "--enable-verbose".
STARPU_MPI_CACHE_STATS
\anchor STARPU_MPI_CACHE_STATS \addindex __env__STARPU_MPI_CACHE_STATS When set to 1, statistics are enabled for the communication cache (\ref MPISupport). For now, it prints messages on the standard output when data are added or removed from the received communication cache.
STARPU_MPI_PRIORITIES
\anchor STARPU_MPI_PRIORITIES \addindex __env__STARPU_MPI_PRIORITIES When set to 0, the use of priorities to order MPI communications is disabled (\ref MPISupport).
STARPU_MPI_NDETACHED_SEND
\anchor STARPU_MPI_NDETACHED_SEND \addindex __env__STARPU_MPI_NDETACHED_SEND This sets the number of send requests that StarPU-MPI will emit concurrently. The default is 10.
STARPU_MPI_NREADY_PROCESS
\anchor STARPU_MPI_NREADY_PROCESS \addindex __env__STARPU_MPI_NREADY_PROCESS This sets the number of requests that StarPU-MPI will submit to MPI before polling for termination of existing requests. The default is 10.
STARPU_MPI_FAKE_SIZE
\anchor STARPU_MPI_FAKE_SIZE \addindex __env__STARPU_MPI_FAKE_SIZE Setting this to a number makes StarPU believe that there are that many MPI nodes, even if it was run on only one MPI node. This makes it possible e.g. to simulate the execution of one of the nodes of a big cluster without actually running the rest. It of course does not provide computation results and timing.
STARPU_MPI_FAKE_RANK
\anchor STARPU_MPI_FAKE_RANK \addindex __env__STARPU_MPI_FAKE_RANK Setting to a number makes StarPU believe that it runs the given MPI node, even if it was run on only one MPI node. This allows e.g. to simulate the execution of one of the nodes of a big cluster without actually running the rest. It of course does not provide computation results and timing.
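Combining the two variables above to simulate one node of a cluster on a single machine (the binary name \c ./my_mpi_app is a placeholder):

```shell
# Pretend to be node 2 of a 16-node cluster, without running the other nodes
export STARPU_MPI_FAKE_SIZE=16
export STARPU_MPI_FAKE_RANK=2
./my_mpi_app   # placeholder; results and timings are not meaningful
```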
STARPU_MPI_DRIVER_CALL_FREQUENCY
\anchor STARPU_MPI_DRIVER_CALL_FREQUENCY \addindex __env__STARPU_MPI_DRIVER_CALL_FREQUENCY When set to a positive value, activates the interleaving of the execution of tasks with the progression of MPI communications (\ref MPISupport). The starpu_mpi_init_conf() function must have been called by the application for this environment variable to be used. When set to 0, the MPI progression thread does not use the driver given by the user at all, and only focuses on making MPI communications progress.
STARPU_MPI_DRIVER_TASK_FREQUENCY
\anchor STARPU_MPI_DRIVER_TASK_FREQUENCY \addindex __env__STARPU_MPI_DRIVER_TASK_FREQUENCY When set to a positive value, the mechanism which interleaves the execution of tasks with the progression of MPI communications will execute that many tasks before checking communication requests again (\ref MPISupport). The starpu_mpi_init_conf() function must have been called by the application for this environment variable to be used, and the \ref STARPU_MPI_DRIVER_CALL_FREQUENCY environment variable must be set to a positive value.
STARPU_MPI_MEM_THROTTLE
\anchor STARPU_MPI_MEM_THROTTLE \addindex __env__STARPU_MPI_MEM_THROTTLE When set to a positive value, this makes the starpu_mpi_*recv* functions block when the memory allocation required for network reception overflows the available main memory (as typically set by \ref STARPU_LIMIT_CPU_MEM).
STARPU_MPI_EARLYDATA_ALLOCATE
\anchor STARPU_MPI_EARLYDATA_ALLOCATE \addindex __env__STARPU_MPI_EARLYDATA_ALLOCATE When set to 1, the MPI Driver will immediately allocate the data for early requests instead of issuing a data request and blocking. The default value is 0, issuing a data request. Because it is an early request and we do not know its real priority, the data request will assume \ref STARPU_DEFAULT_PRIO. In cases where there are many data requests with priorities greater than \ref STARPU_DEFAULT_PRIO, the MPI driver could be blocked for long periods.
STARPU_SIMGRID
\anchor STARPU_SIMGRID \addindex __env__STARPU_SIMGRID When set to 1 (the default is 0), this makes StarPU check that it was really built with simulation support. This is convenient in scripts to avoid using a native version, which would try to update performance models.
STARPU_SIMGRID_TRANSFER_COST
\anchor STARPU_SIMGRID_TRANSFER_COST \addindex __env__STARPU_SIMGRID_TRANSFER_COST When set to 1 (which is the default), data transfers (over PCI bus, typically) are taken into account in SimGrid mode.
STARPU_SIMGRID_CUDA_MALLOC_COST
\anchor STARPU_SIMGRID_CUDA_MALLOC_COST \addindex __env__STARPU_SIMGRID_CUDA_MALLOC_COST When set to 1 (which is the default), CUDA malloc costs are taken into account in SimGrid mode.
STARPU_SIMGRID_CUDA_QUEUE_COST
\anchor STARPU_SIMGRID_CUDA_QUEUE_COST \addindex __env__STARPU_SIMGRID_CUDA_QUEUE_COST When set to 1 (which is the default), CUDA task and transfer queueing costs are taken into account in SimGrid mode.
STARPU_PCI_FLAT
\anchor STARPU_PCI_FLAT \addindex __env__STARPU_PCI_FLAT When unset or set to 0, the platform file created for SimGrid will contain PCI bandwidths and routes.
STARPU_SIMGRID_QUEUE_MALLOC_COST
\anchor STARPU_SIMGRID_QUEUE_MALLOC_COST \addindex __env__STARPU_SIMGRID_QUEUE_MALLOC_COST When unset or set to 1, simulate within SimGrid the GPU transfer queueing.
STARPU_MALLOC_SIMULATION_FOLD
\anchor STARPU_MALLOC_SIMULATION_FOLD \addindex __env__STARPU_MALLOC_SIMULATION_FOLD Define the size of the file used for folding virtual allocation, in MiB. The default is 1, thus allowing 64GiB virtual memory when Linux's sysctl vm.max_map_count value is the default 65535.
STARPU_SIMGRID_TASK_SUBMIT_COST
\anchor STARPU_SIMGRID_TASK_SUBMIT_COST \addindex __env__STARPU_SIMGRID_TASK_SUBMIT_COST When set to 1 (which is the default), task submission costs are taken into account in SimGrid mode. This provides more accurate SimGrid predictions, especially for the beginning of the execution.
STARPU_SIMGRID_FETCHING_INPUT_COST
\anchor STARPU_SIMGRID_FETCHING_INPUT_COST \addindex __env__STARPU_SIMGRID_FETCHING_INPUT_COST When set to 1 (which is the default), fetching input costs are taken into account in SimGrid mode. This provides more accurate SimGrid predictions, especially regarding data transfers.
STARPU_SIMGRID_SCHED_COST
\anchor STARPU_SIMGRID_SCHED_COST \addindex __env__STARPU_SIMGRID_SCHED_COST When set to 1 (0 is the default), scheduling costs are taken into account in SimGrid mode. This provides more accurate SimGrid predictions, and allows studying scheduling overhead of the runtime system. However, it also makes simulation non-deterministic.
\section MiscellaneousAndDebug Miscellaneous And Debug
STARPU_HOME
\anchor STARPU_HOME \addindex __env__STARPU_HOME Specify the main directory in which StarPU stores its configuration files. The default is $HOME on Unix environments, and $USERPROFILE on Windows environments.
STARPU_PATH
\anchor STARPU_PATH \addindex __env__STARPU_PATH Only used on Windows environments. Specify the main directory in which StarPU is installed (\ref RunningABasicStarPUApplicationOnMicrosoft)
STARPU_PERF_MODEL_DIR
\anchor STARPU_PERF_MODEL_DIR \addindex __env__STARPU_PERF_MODEL_DIR Specify the main directory in which StarPU stores its performance model files. The default is $STARPU_HOME/.starpu/sampling.
STARPU_PERF_MODEL_HOMOGENEOUS_CPU
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_CPU \addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_CPU When set to 0, StarPU will assume that CPU devices do not have the same performance, and thus use a separate performance model for each of them. This makes kernel calibration much longer, since measurements have to be made for each CPU core.
STARPU_PERF_MODEL_HOMOGENEOUS_CUDA
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_CUDA \addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_CUDA When set to 1, StarPU will assume that all CUDA devices have the same performance, and thus share performance models between them. This makes kernel calibration much faster, since measurements only have to be made once for all CUDA GPUs.
STARPU_PERF_MODEL_HOMOGENEOUS_OPENCL
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_OPENCL \addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_OPENCL When set to 1, StarPU will assume that all OPENCL devices have the same performance, and thus share performance models between them. This makes kernel calibration much faster, since measurements only have to be made once for all OPENCL GPUs.
STARPU_PERF_MODEL_HOMOGENEOUS_MPI_MS
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_MPI_MS \addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_MPI_MS When set to 1, StarPU will assume that all MPI Slave devices have the same performance, and thus share performance models between them. This makes kernel calibration much faster, since measurements only have to be made once for all MPI Slaves.
STARPU_HOSTNAME
\anchor STARPU_HOSTNAME \addindex __env__STARPU_HOSTNAME When set, force the hostname to be used when dealing with performance model files. Models are indexed by machine name. When running for example on a homogeneous cluster, it is possible to share the models between machines by setting export STARPU_HOSTNAME=some_global_name.
STARPU_MPI_HOSTNAMES
\anchor STARPU_MPI_HOSTNAMES \addindex __env__STARPU_MPI_HOSTNAMES Similar to \ref STARPU_HOSTNAME, but to define multiple nodes on a heterogeneous cluster. The variable is a list of hostnames that will be assigned to each StarPU-MPI rank according to their position and the value of \ref starpu_mpi_world_rank on each rank. When running, for example, on a heterogeneous cluster, it is possible to set individual models for each machine by setting export STARPU_MPI_HOSTNAMES="name0 name1 name2": rank 0 will receive name0, rank 1 will receive name1, and so on. This variable has precedence over \ref STARPU_HOSTNAME.
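For instance, a launcher script for a three-node heterogeneous cluster could assign one model name per rank as follows (the machine names below are hypothetical):

```shell
# Hypothetical machine names: rank 0 will use performance models for
# mirage0, rank 1 for mirage1, and rank 2 for attila.
export STARPU_MPI_HOSTNAMES="mirage0 mirage1 attila"
echo "$STARPU_MPI_HOSTNAMES"
```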
STARPU_OPENCL_PROGRAM_DIR
\anchor STARPU_OPENCL_PROGRAM_DIR \addindex __env__STARPU_OPENCL_PROGRAM_DIR Specify the directory where the OpenCL codelet source files are located. The function starpu_opencl_load_program_source() looks for the codelet in the current directory, in the directory specified by the environment variable \ref STARPU_OPENCL_PROGRAM_DIR, in the directory share/starpu/opencl of the installation directory of StarPU, and finally in the source directory of StarPU.
STARPU_SILENT
\anchor STARPU_SILENT \addindex __env__STARPU_SILENT Allows disabling verbose mode at runtime when StarPU has been configured with the option \ref enable-verbose "--enable-verbose". It also disables the display of StarPU information and warning messages.
STARPU_MPI_DEBUG_LEVEL_MIN
\anchor STARPU_MPI_DEBUG_LEVEL_MIN \addindex __env__STARPU_MPI_DEBUG_LEVEL_MIN Set the minimum level of debug when StarPU has been configured with the option \ref enable-mpi-verbose "--enable-mpi-verbose".
STARPU_MPI_DEBUG_LEVEL_MAX
\anchor STARPU_MPI_DEBUG_LEVEL_MAX \addindex __env__STARPU_MPI_DEBUG_LEVEL_MAX Set the maximum level of debug when StarPU has been configured with the option \ref enable-mpi-verbose "--enable-mpi-verbose".
STARPU_LOGFILENAME
\anchor STARPU_LOGFILENAME \addindex __env__STARPU_LOGFILENAME Specify the file in which the debugging output should be saved.
STARPU_FXT_PREFIX
\anchor STARPU_FXT_PREFIX \addindex __env__STARPU_FXT_PREFIX Specify in which directory to save the generated trace if FxT is enabled.
STARPU_FXT_SUFFIX
\anchor STARPU_FXT_SUFFIX \addindex __env__STARPU_FXT_SUFFIX Specify in which file to save the generated trace if FxT is enabled.
STARPU_FXT_TRACE
\anchor STARPU_FXT_TRACE \addindex __env__STARPU_FXT_TRACE Specify whether to generate (1) or not (0) the FxT trace in /tmp/prof_file_XXX_YYY (the directory and file name can be changed with \ref STARPU_FXT_PREFIX and \ref STARPU_FXT_SUFFIX). The default is 0 (do not generate it).
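A minimal sketch of enabling trace generation, with hypothetical directory and suffix values:

```shell
export STARPU_FXT_TRACE=1               # generate the raw FxT trace (default: 0)
export STARPU_FXT_PREFIX=/tmp/mytraces  # hypothetical directory for the trace
export STARPU_FXT_SUFFIX=run42          # hypothetical file name
echo "$STARPU_FXT_TRACE"
```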
STARPU_LIMIT_CUDA_devid_MEM
\anchor STARPU_LIMIT_CUDA_devid_MEM \addindex __env__STARPU_LIMIT_CUDA_devid_MEM Specify the maximum number of megabytes that should be available to the application on the CUDA device with the identifier devid. This variable is intended to be used for experimental purposes as it emulates devices that have a limited amount of memory. When defined, the variable overwrites the value of the variable \ref STARPU_LIMIT_CUDA_MEM.
STARPU_LIMIT_CUDA_MEM
\anchor STARPU_LIMIT_CUDA_MEM \addindex __env__STARPU_LIMIT_CUDA_MEM Specify the maximum number of megabytes that should be available to the application on each CUDA device. This variable is intended to be used for experimental purposes as it emulates devices that have a limited amount of memory.
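As a sketch, the global and per-device variables can be combined to emulate a heterogeneous memory setup (the sizes below are arbitrary):

```shell
export STARPU_LIMIT_CUDA_MEM=1024   # every CUDA device sees at most 1024 MB
export STARPU_LIMIT_CUDA_0_MEM=512  # ...except device 0, limited to 512 MB
echo "$STARPU_LIMIT_CUDA_0_MEM"
```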
STARPU_LIMIT_OPENCL_devid_MEM
\anchor STARPU_LIMIT_OPENCL_devid_MEM \addindex __env__STARPU_LIMIT_OPENCL_devid_MEM Specify the maximum number of megabytes that should be available to the application on the OpenCL device with the identifier devid. This variable is intended to be used for experimental purposes as it emulates devices that have a limited amount of memory. When defined, the variable overwrites the value of the variable \ref STARPU_LIMIT_OPENCL_MEM.
STARPU_LIMIT_OPENCL_MEM
\anchor STARPU_LIMIT_OPENCL_MEM \addindex __env__STARPU_LIMIT_OPENCL_MEM Specify the maximum number of megabytes that should be available to the application on each OpenCL device. This variable is intended to be used for experimental purposes as it emulates devices that have a limited amount of memory.
STARPU_LIMIT_CPU_MEM
\anchor STARPU_LIMIT_CPU_MEM \addindex __env__STARPU_LIMIT_CPU_MEM Specify the maximum number of megabytes that should be available to the application in the main CPU memory. Setting it enables allocation cache in main memory. Setting it to zero lets StarPU overflow memory. Note: for now, not all StarPU allocations get throttled by this parameter. Notably, MPI receptions are not throttled unless \ref STARPU_MPI_MEM_THROTTLE is set to 1.
STARPU_LIMIT_CPU_NUMA_devid_MEM
\anchor STARPU_LIMIT_CPU_NUMA_devid_MEM \addindex __env__STARPU_LIMIT_CPU_NUMA_devid_MEM Specify the maximum number of megabytes that should be available to the application on the NUMA node with the OS identifier devid. Setting it overrides the value of STARPU_LIMIT_CPU_MEM.
STARPU_LIMIT_CPU_NUMA_MEM
\anchor STARPU_LIMIT_CPU_NUMA_MEM \addindex __env__STARPU_LIMIT_CPU_NUMA_MEM Specify the maximum number of megabytes that should be available to the application on each NUMA node. This is the same as specifying that same amount with \ref STARPU_LIMIT_CPU_NUMA_devid_MEM for each NUMA node number. The total memory available to StarPU will thus be this amount multiplied by the number of NUMA nodes used by StarPU. Any \ref STARPU_LIMIT_CPU_NUMA_devid_MEM additionally specified will take precedence over STARPU_LIMIT_CPU_NUMA_MEM.
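For example (sizes are arbitrary), to limit every NUMA node while tightening the limit on one specific node:

```shell
export STARPU_LIMIT_CPU_NUMA_MEM=4096    # 4096 MB on each NUMA node
export STARPU_LIMIT_CPU_NUMA_1_MEM=2048  # but only 2048 MB on OS NUMA node 1
echo "$STARPU_LIMIT_CPU_NUMA_1_MEM"
```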
STARPU_LIMIT_BANDWIDTH
\anchor STARPU_LIMIT_BANDWIDTH \addindex __env__STARPU_LIMIT_BANDWIDTH Specify the maximum available PCI bandwidth of the system in MB/s. This is only effective in SimGrid simulation mode. It allows to easily override the bandwidths stored in the platform file generated from measurements on the native system.
STARPU_MINIMUM_AVAILABLE_MEM
\anchor STARPU_MINIMUM_AVAILABLE_MEM \addindex __env__STARPU_MINIMUM_AVAILABLE_MEM Specify the minimum percentage of memory that should be available in GPUs (or in main memory, when using out of core), below which a reclaiming pass is performed. The default is 0%.
STARPU_TARGET_AVAILABLE_MEM
\anchor STARPU_TARGET_AVAILABLE_MEM \addindex __env__STARPU_TARGET_AVAILABLE_MEM Specify the target percentage of memory that should be reached in GPUs (or in main memory, when using out of core), when performing a periodic reclaiming pass. The default is 0%.
STARPU_MINIMUM_CLEAN_BUFFERS
\anchor STARPU_MINIMUM_CLEAN_BUFFERS \addindex __env__STARPU_MINIMUM_CLEAN_BUFFERS Specify the minimum percentage of buffers that should be clean in GPUs (or in main memory, when using out of core), below which asynchronous writebacks will be issued. The default is 5%.
STARPU_TARGET_CLEAN_BUFFERS
\anchor STARPU_TARGET_CLEAN_BUFFERS \addindex __env__STARPU_TARGET_CLEAN_BUFFERS Specify the target percentage of clean buffers that should be reached in GPUs (or in main memory, when using out of core), when performing an asynchronous writeback pass. The default is 10%.
STARPU_DISK_SWAP
\anchor STARPU_DISK_SWAP \addindex __env__STARPU_DISK_SWAP Specify a path where StarPU can push data when the main memory is getting full.
STARPU_DISK_SWAP_BACKEND
\anchor STARPU_DISK_SWAP_BACKEND \addindex __env__STARPU_DISK_SWAP_BACKEND Specify the backend to be used by StarPU to push data when the main memory is getting full. The default is unistd (i.e. using read/write functions), other values are stdio (i.e. using fread/fwrite), unistd_o_direct (i.e. using read/write with O_DIRECT), leveldb (i.e. using a leveldb database), and hdf5 (i.e. using HDF5 library).
STARPU_DISK_SWAP_SIZE
\anchor STARPU_DISK_SWAP_SIZE \addindex __env__STARPU_DISK_SWAP_SIZE Specify the maximum size in MiB to be used by StarPU to push data when the main memory is getting full. The default is unlimited.
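A sketch of a complete out-of-core setup combining the three disk-swap variables (the path and size are arbitrary):

```shell
export STARPU_DISK_SWAP=/tmp/starpu_swap        # hypothetical swap directory
export STARPU_DISK_SWAP_BACKEND=unistd_o_direct # read/write with O_DIRECT
export STARPU_DISK_SWAP_SIZE=8192               # use at most 8192 MiB on disk
echo "$STARPU_DISK_SWAP_BACKEND"
```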
STARPU_LIMIT_MAX_SUBMITTED_TASKS
\anchor STARPU_LIMIT_MAX_SUBMITTED_TASKS \addindex __env__STARPU_LIMIT_MAX_SUBMITTED_TASKS Allow users to control the task submission flow by specifying a maximum number of tasks that may be submitted at a given time. When this limit is reached, task submission blocks until the number of submitted tasks drops below the threshold given by \ref STARPU_LIMIT_MIN_SUBMITTED_TASKS. Setting it enables allocation cache buffer reuse in main memory.
STARPU_LIMIT_MIN_SUBMITTED_TASKS
\anchor STARPU_LIMIT_MIN_SUBMITTED_TASKS \addindex __env__STARPU_LIMIT_MIN_SUBMITTED_TASKS Allow users to control the task submission flow by specifying the threshold of submitted tasks below which task submission is unblocked. This variable has to be used in conjunction with \ref STARPU_LIMIT_MAX_SUBMITTED_TASKS, which puts the task submission thread to sleep. Setting it enables allocation cache buffer reuse in main memory.
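The two thresholds are meant to be used together; a sketch with arbitrary values:

```shell
export STARPU_LIMIT_MAX_SUBMITTED_TASKS=10000 # submission blocks at 10000 tasks
export STARPU_LIMIT_MIN_SUBMITTED_TASKS=9000  # and resumes once below 9000
echo "$STARPU_LIMIT_MIN_SUBMITTED_TASKS"
```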
STARPU_TRACE_BUFFER_SIZE
\anchor STARPU_TRACE_BUFFER_SIZE \addindex __env__STARPU_TRACE_BUFFER_SIZE Set the buffer size for recording trace events, in MiB. Setting it to a large size avoids pauses in the trace while it is recorded to disk, at the cost of more memory. The default value is 64.
STARPU_GENERATE_TRACE
\anchor STARPU_GENERATE_TRACE \addindex __env__STARPU_GENERATE_TRACE When set to 1, indicate that StarPU should automatically generate a Paje trace when starpu_shutdown() is called.
STARPU_GENERATE_TRACE_OPTIONS
\anchor STARPU_GENERATE_TRACE_OPTIONS \addindex __env__STARPU_GENERATE_TRACE_OPTIONS When the variable \ref STARPU_GENERATE_TRACE is set to 1 to generate a Paje trace, this variable can be set to specify options (see starpu_fxt_tool --help).
STARPU_ENABLE_STATS
\anchor STARPU_ENABLE_STATS \addindex __env__STARPU_ENABLE_STATS When defined, enable gathering various data statistics (\ref DataStatistics).
STARPU_MEMORY_STATS
\anchor STARPU_MEMORY_STATS \addindex __env__STARPU_MEMORY_STATS When set to 0, disable the display of memory statistics on data which have not been unregistered at the end of the execution (\ref MemoryFeedback).
STARPU_MAX_MEMORY_USE
\anchor STARPU_MAX_MEMORY_USE \addindex __env__STARPU_MAX_MEMORY_USE When set to 1, display at the end of the execution the maximum memory used by StarPU for internal data structures during execution.
STARPU_BUS_STATS
\anchor STARPU_BUS_STATS \addindex __env__STARPU_BUS_STATS When defined, statistics about data transfers will be displayed when calling starpu_shutdown() (\ref Profiling). By default, statistics are printed on the standard error stream, use the environment variable \ref STARPU_BUS_STATS_FILE to define another filename.
STARPU_BUS_STATS_FILE
\anchor STARPU_BUS_STATS_FILE \addindex __env__STARPU_BUS_STATS_FILE Define the name of the file where to display data transfers statistics, see \ref STARPU_BUS_STATS.
STARPU_WORKER_STATS
\anchor STARPU_WORKER_STATS \addindex __env__STARPU_WORKER_STATS When defined, statistics about the workers will be displayed when calling starpu_shutdown() (\ref Profiling). When combined with the environment variable \ref STARPU_PROFILING, it displays the energy consumption (\ref Energy-basedScheduling). By default, statistics are printed on the standard error stream, use the environment variable \ref STARPU_WORKER_STATS_FILE to define another filename.
STARPU_WORKER_STATS_FILE
\anchor STARPU_WORKER_STATS_FILE \addindex __env__STARPU_WORKER_STATS_FILE Define the name of the file where to display workers statistics, see \ref STARPU_WORKER_STATS.
STARPU_STATS
\anchor STARPU_STATS \addindex __env__STARPU_STATS When set to 0, data statistics will not be displayed at the end of the execution of an application (\ref DataStatistics).
STARPU_WATCHDOG_TIMEOUT
\anchor STARPU_WATCHDOG_TIMEOUT \addindex __env__STARPU_WATCHDOG_TIMEOUT When set to a value other than 0, make StarPU print an error message whenever it does not terminate any task for the given time (in µs), while letting the application continue normally. Should be used in combination with \ref STARPU_WATCHDOG_CRASH (see \ref DetectionStuckConditions).
STARPU_WATCHDOG_CRASH
\anchor STARPU_WATCHDOG_CRASH \addindex __env__STARPU_WATCHDOG_CRASH When set to a value other than 0, trigger a crash when the watchdog timeout is reached, thus allowing to catch the situation in gdb, etc. (see \ref DetectionStuckConditions)
STARPU_WATCHDOG_DELAY
\anchor STARPU_WATCHDOG_DELAY \addindex __env__STARPU_WATCHDOG_DELAY Delay the activation of the watchdog by the given time (in µs). This can be convenient for letting the application initialize data etc. before starting to look for idle time.
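The three watchdog variables are typically combined; a sketch with arbitrary durations:

```shell
export STARPU_WATCHDOG_TIMEOUT=10000000 # complain after 10 s without a task completion
export STARPU_WATCHDOG_DELAY=2000000    # but leave 2 s of grace for initialization
export STARPU_WATCHDOG_CRASH=1          # crash on timeout so gdb can inspect the state
echo "$STARPU_WATCHDOG_TIMEOUT"
```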
STARPU_TASK_PROGRESS
\anchor STARPU_TASK_PROGRESS \addindex __env__STARPU_TASK_PROGRESS Print the progression of tasks. This is convenient to determine whether a program is making progress in task execution, or is just stuck.
STARPU_TASK_BREAK_ON_PUSH
\anchor STARPU_TASK_BREAK_ON_PUSH \addindex __env__STARPU_TASK_BREAK_ON_PUSH When this variable contains a job id, StarPU will raise SIGTRAP when the task with that job id is being pushed to the scheduler, which will be nicely caught by debuggers (see \ref DebuggingScheduling)
STARPU_TASK_BREAK_ON_SCHED
\anchor STARPU_TASK_BREAK_ON_SCHED \addindex __env__STARPU_TASK_BREAK_ON_SCHED When this variable contains a job id, StarPU will raise SIGTRAP when the task with that job id is being scheduled by the scheduler (at a scheduler-specific point), which will be nicely caught by debuggers. This only works for schedulers which have such a scheduling point defined (see \ref DebuggingScheduling)
STARPU_TASK_BREAK_ON_POP
\anchor STARPU_TASK_BREAK_ON_POP \addindex __env__STARPU_TASK_BREAK_ON_POP When this variable contains a job id, StarPU will raise SIGTRAP when the task with that job id is being popped from the scheduler, which will be nicely caught by debuggers (see \ref DebuggingScheduling)
STARPU_TASK_BREAK_ON_EXEC
\anchor STARPU_TASK_BREAK_ON_EXEC \addindex __env__STARPU_TASK_BREAK_ON_EXEC When this variable contains a job id, StarPU will raise SIGTRAP when the task with that job id is being executed, which will be nicely caught by debuggers (see \ref DebuggingScheduling)
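For instance, to stop in a debugger when a given task starts executing (the job id and binary name below are hypothetical):

```shell
export STARPU_TASK_BREAK_ON_EXEC=2048  # hypothetical job id
# Then run the application under a debugger, which stops on the SIGTRAP:
#   gdb --args ./my_app
echo "$STARPU_TASK_BREAK_ON_EXEC"
```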
STARPU_DISABLE_KERNELS
\anchor STARPU_DISABLE_KERNELS \addindex __env__STARPU_DISABLE_KERNELS When set to a value other than 1, it disables actually calling the kernel functions, thus allowing one to quickly check that the task scheme is working properly, without performing the actual application-provided computation.
STARPU_HISTORY_MAX_ERROR
\anchor STARPU_HISTORY_MAX_ERROR \addindex __env__STARPU_HISTORY_MAX_ERROR History-based performance models will drop measurements which are very far from the measured average. This specifies the allowed variation. The default is 50 (%), i.e. the measurement is allowed to be x1.5 faster or /1.5 slower than the average.
STARPU_RAND_SEED
\anchor STARPU_RAND_SEED \addindex __env__STARPU_RAND_SEED The random scheduler and some examples use random numbers internally. Depending on the example, the seed is by default just always 0 or the current time() (unless SimGrid mode is enabled, in which case it is always 0). \ref STARPU_RAND_SEED allows setting the seed to a specific value.
STARPU_GLOBAL_ARBITER
\anchor STARPU_GLOBAL_ARBITER \addindex __env__STARPU_GLOBAL_ARBITER When set to a positive value, StarPU will create an arbiter, which implements an advanced but centralized management of concurrent data accesses (see \ref ConcurrentDataAccess).
STARPU_USE_NUMA
\anchor STARPU_USE_NUMA \addindex __env__STARPU_USE_NUMA When defined, NUMA nodes are taken into account by StarPU. Otherwise, memory is considered as a single node. This is experimental for now. When enabled, ::STARPU_MAIN_RAM points to the NUMA node associated with the first CPU worker if it exists, otherwise to the NUMA node associated with the first discovered GPU. If StarPU does not find any NUMA node after these steps, ::STARPU_MAIN_RAM is the first NUMA node discovered by StarPU.
STARPU_IDLE_FILE
\anchor STARPU_IDLE_FILE \addindex __env__STARPU_IDLE_FILE When defined, a file whose name is given by the value of this variable will be created at the end of the execution. It will contain the sum of the idle times of all the workers.
STARPU_HWLOC_INPUT
\anchor STARPU_HWLOC_INPUT \addindex __env__STARPU_HWLOC_INPUT When defined to the path of an XML file, \c hwloc will use this file as input instead of detecting the current platform topology, which can save significant initialization time. To produce this XML file, run \c lstopo \c file.xml
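A sketch: capture the topology once with lstopo, then point StarPU at the resulting file (the path is arbitrary, and the lstopo call is guarded in case hwloc's tools are not installed):

```shell
# Export the machine topology to XML once (skipped if lstopo is unavailable):
command -v lstopo >/dev/null && lstopo /tmp/topology.xml
# Reuse it on subsequent runs to skip topology detection:
export STARPU_HWLOC_INPUT=/tmp/topology.xml
echo "$STARPU_HWLOC_INPUT"
```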
STARPU_CATCH_SIGNALS
\anchor STARPU_CATCH_SIGNALS \addindex __env__STARPU_CATCH_SIGNALS By default, StarPU catches the signals \c SIGINT, \c SIGSEGV and \c SIGTRAP to perform final actions such as dumping FxT trace files even though the application has crashed. Setting this variable to a value other than 1 will disable this behaviour. This should be done on JVM systems, which may use these signals for their own needs. The flag can also be set through the field starpu_conf::catch_signals.
STARPU_DISPLAY_BINDINGS
\anchor STARPU_DISPLAY_BINDINGS \addindex __env__STARPU_DISPLAY_BINDINGS Display the binding of all processes and threads running on the machine. If MPI is enabled, display the binding of each node.
Users can manually display the binding by calling starpu_display_bindings().
\section ConfiguringTheHypervisor Configuring The Hypervisor
SC_HYPERVISOR_POLICY
\anchor SC_HYPERVISOR_POLICY \addindex __env__SC_HYPERVISOR_POLICY Choose between the different resizing policies proposed by StarPU for the hypervisor: idle, app_driven, feft_lp, teft_lp, ispeed_lp, throughput_lp, etc. Use SC_HYPERVISOR_POLICY=help to get the list of available policies for the hypervisor.
SC_HYPERVISOR_TRIGGER_RESIZE
\anchor SC_HYPERVISOR_TRIGGER_RESIZE \addindex __env__SC_HYPERVISOR_TRIGGER_RESIZE Choose how the hypervisor should be triggered: speed if the resizing algorithm should be called whenever the speed of the context does not correspond to an optimal precomputed value, idle if the resizing algorithm should be called whenever the workers are idle for a period longer than the value indicated when configuring the hypervisor.
SC_HYPERVISOR_START_RESIZE
\anchor SC_HYPERVISOR_START_RESIZE \addindex __env__SC_HYPERVISOR_START_RESIZE Indicate the moment when resizing should become available. The value corresponds to a percentage of the total execution time of the application. The default value is the resizing frame.
SC_HYPERVISOR_MAX_SPEED_GAP
\anchor SC_HYPERVISOR_MAX_SPEED_GAP \addindex __env__SC_HYPERVISOR_MAX_SPEED_GAP Indicate the ratio of speed difference between contexts that should trigger the hypervisor. This situation may occur only when a theoretical speed could not be computed and the hypervisor has no value to compare the speed to. Otherwise the resizing of a context is not influenced by the speed of the other contexts, but only by the value that a context should have.
SC_HYPERVISOR_STOP_PRINT
\anchor SC_HYPERVISOR_STOP_PRINT \addindex __env__SC_HYPERVISOR_STOP_PRINT By default, the speeds of the workers are printed during the execution of the application. Setting this variable to 1 disables this printing.
SC_HYPERVISOR_LAZY_RESIZE
\anchor SC_HYPERVISOR_LAZY_RESIZE \addindex __env__SC_HYPERVISOR_LAZY_RESIZE By default, the hypervisor resizes the contexts in a lazy way, that is, workers are first added to a new context before being removed from the previous one. Once these workers are clearly taken into account in the new context (a task was popped there), they are removed from the previous one. However, if the application wants the change in the distribution of workers to take effect right away, this variable should be set to 0.
SC_HYPERVISOR_SAMPLE_CRITERIA
\anchor SC_HYPERVISOR_SAMPLE_CRITERIA \addindex __env__SC_HYPERVISOR_SAMPLE_CRITERIA By default, the hypervisor uses a sample of flops when computing the speed of the contexts and of the workers. If this variable is set to time, the hypervisor uses a sample of time (10% of an approximation of the total execution time of the application).
*/