/*
 * This file is part of the StarPU Handbook.
 * Copyright (C) 2009--2011 Universit@'e de Bordeaux
 * Copyright (C) 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017 CNRS
 * Copyright (C) 2011, 2012, 2016, 2017 INRIA
 * Copyright (C) 2016 Uppsala University
 * See the file version.doxy for copying conditions.
 */
/*! \page ExecutionConfigurationThroughEnvironmentVariables Execution Configuration Through Environment Variables
The behavior of the StarPU library and tools may be tuned thanks to
the following environment variables.
\section ConfiguringWorkers Configuring Workers
<dl>
<dt>STARPU_NCPU</dt>
<dd>
\anchor STARPU_NCPU
\addindex __env__STARPU_NCPU
Specify the number of CPU workers (thus not including workers
dedicated to control accelerators). Note that by default, StarPU will
not allocate more CPU workers than there are physical CPUs, and that
some CPUs are used to control the accelerators.
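The sketch below shows one way to set this variable from the program itself; it is only a minimal illustration, assuming a POSIX <c>setenv()</c> and that <c>starpu.h</c> is available. The value must be set before starpu_init(), which is when StarPU reads its environment; setting <c>export STARPU_NCPU=4</c> in the shell before launching the application is equivalent.
\code{.c}
#include <stdlib.h>
#include <starpu.h>

int main(void)
{
	/* Restrict StarPU to 4 CPU workers; this must be done before
	 * starpu_init(), which reads the environment. */
	setenv("STARPU_NCPU", "4", 1);

	if (starpu_init(NULL) != 0)
		return 1;

	/* ... register data and submit tasks here ... */

	starpu_shutdown();
	return 0;
}
\endcode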
</dd>
<dt>STARPU_NCPUS</dt>
<dd>
\anchor STARPU_NCPUS
\addindex __env__STARPU_NCPUS
This variable is deprecated. You should use \ref STARPU_NCPU.
</dd>
<dt>STARPU_NCUDA</dt>
<dd>
\anchor STARPU_NCUDA
\addindex __env__STARPU_NCUDA
Specify the number of CUDA devices that StarPU can use. If
\ref STARPU_NCUDA is lower than the number of physical devices, it is
possible to select which CUDA devices should be used by means of the
environment variable \ref STARPU_WORKERS_CUDAID. By default, StarPU will
create as many CUDA workers as there are CUDA devices.
</dd>
<dt>STARPU_NWORKER_PER_CUDA</dt>
<dd>
\anchor STARPU_NWORKER_PER_CUDA
\addindex __env__STARPU_NWORKER_PER_CUDA
Specify the number of workers per CUDA device, and thus the number of kernels
which will be concurrently running on the devices. The default value is 1.
</dd>
<dt>STARPU_CUDA_THREAD_PER_WORKER</dt>
<dd>
\anchor STARPU_CUDA_THREAD_PER_WORKER
\addindex __env__STARPU_CUDA_THREAD_PER_WORKER
Specify whether the CUDA driver should use one thread per stream (1) or a
single thread dealing with all the streams (0). The default value is 0.
Setting it to 1 is contradictory with setting STARPU_CUDA_THREAD_PER_DEV to 1.
</dd>
<dt>STARPU_CUDA_THREAD_PER_DEV</dt>
<dd>
\anchor STARPU_CUDA_THREAD_PER_DEV
\addindex __env__STARPU_CUDA_THREAD_PER_DEV
Specify whether the CUDA driver should use one thread per device (1) or a
single thread dealing with all the devices (0). The default value is 1,
unless STARPU_CUDA_THREAD_PER_WORKER is set to 1. Setting it to 1 is
contradictory with setting STARPU_CUDA_THREAD_PER_WORKER to 1.
</dd>
<dt>STARPU_CUDA_PIPELINE</dt>
<dd>
\anchor STARPU_CUDA_PIPELINE
\addindex __env__STARPU_CUDA_PIPELINE
Specify how many asynchronous tasks are submitted in advance on CUDA
devices. This for instance permits overlapping task management with the
execution of previous tasks, and also allows concurrent execution on Fermi
cards, which otherwise bring spurious synchronizations. The default is 2.
Setting the value to 0 forces a synchronous execution of all tasks.
</dd>
<dt>STARPU_NOPENCL</dt>
<dd>
\anchor STARPU_NOPENCL
\addindex __env__STARPU_NOPENCL
OpenCL equivalent of the environment variable \ref STARPU_NCUDA.
</dd>
<dt>STARPU_OPENCL_PIPELINE</dt>
<dd>
\anchor STARPU_OPENCL_PIPELINE
\addindex __env__STARPU_OPENCL_PIPELINE
Specify how many asynchronous tasks are submitted in advance on OpenCL
devices. This for instance permits overlapping task management with the
execution of previous tasks. The default is 2. Setting the value to 0 forces
a synchronous execution of all tasks.
</dd>
<dt>STARPU_OPENCL_ON_CPUS</dt>
<dd>
\anchor STARPU_OPENCL_ON_CPUS
\addindex __env__STARPU_OPENCL_ON_CPUS
By default, the OpenCL driver only enables GPU and accelerator
devices. By setting the environment variable \ref STARPU_OPENCL_ON_CPUS
to 1, the OpenCL driver will also enable CPU devices.
</dd>
<dt>STARPU_OPENCL_ONLY_ON_CPUS</dt>
<dd>
\anchor STARPU_OPENCL_ONLY_ON_CPUS
\addindex __env__STARPU_OPENCL_ONLY_ON_CPUS
By default, the OpenCL driver enables GPU and accelerator
devices. By setting the environment variable \ref STARPU_OPENCL_ONLY_ON_CPUS
to 1, the OpenCL driver will ONLY enable CPU devices.
</dd>
<dt>STARPU_NMIC</dt>
<dd>
\anchor STARPU_NMIC
\addindex __env__STARPU_NMIC
MIC equivalent of the environment variable \ref STARPU_NCUDA, i.e. the number of
MIC devices to use.
</dd>
<dt>STARPU_NMICTHREADS</dt>
<dd>
\anchor STARPU_NMICTHREADS
\addindex __env__STARPU_NMICTHREADS
Number of threads to use on the MIC devices.
</dd>
<dt>STARPU_NMPI_MS</dt>
<dd>
\anchor STARPU_NMPI_MS
\addindex __env__STARPU_NMPI_MS
MPI Master-Slave equivalent of the environment variable \ref STARPU_NCUDA, i.e. the number of
MPI Master-Slave devices to use.
</dd>
<dt>STARPU_NMPIMSTHREADS</dt>
<dd>
\anchor STARPU_NMPIMSTHREADS
\addindex __env__STARPU_NMPIMSTHREADS
Number of threads to use on the MPI Slave devices.
</dd>
<dt>STARPU_MPI_MASTER_NODE</dt>
<dd>
\anchor STARPU_MPI_MASTER_NODE
\addindex __env__STARPU_MPI_MASTER_NODE
This variable allows choosing which MPI node (by its MPI ID) will be the master.
</dd>
<dt>STARPU_NSCC</dt>
<dd>
\anchor STARPU_NSCC
\addindex __env__STARPU_NSCC
SCC equivalent of the environment variable \ref STARPU_NCUDA.
</dd>
<dt>STARPU_WORKERS_NOBIND</dt>
<dd>
\anchor STARPU_WORKERS_NOBIND
\addindex __env__STARPU_WORKERS_NOBIND
Setting it to non-zero will prevent StarPU from binding its threads to
CPUs. This is for instance useful when running the testsuite in parallel.
</dd>
<dt>STARPU_WORKERS_CPUID</dt>
<dd>
\anchor STARPU_WORKERS_CPUID
\addindex __env__STARPU_WORKERS_CPUID
Passing an array of integers in \ref STARPU_WORKERS_CPUID
specifies on which logical CPU the different workers should be
bound. For instance, if <c>STARPU_WORKERS_CPUID = "0 1 4 5"</c>, the first
worker will be bound to logical CPU #0, the second CPU worker will be bound to
logical CPU #1 and so on. Note that the logical ordering of the CPUs is either
determined by the OS, or provided by the library <c>hwloc</c> in case it is
available. Ranges can be provided: for instance, <c>STARPU_WORKERS_CPUID = "1-3
5"</c> will bind the first three workers on logical CPUs #1, #2, and #3, and the
fourth worker on logical CPU #5. Unbound ranges can also be provided:
<c>STARPU_WORKERS_CPUID = "1-"</c> will bind the workers starting from logical
CPU #1 up to the last CPU.
Note that the first workers correspond to the CUDA workers, then come the
OpenCL workers, and finally the CPU workers. For example, if
we have <c>STARPU_NCUDA=1</c>, <c>STARPU_NOPENCL=1</c>, <c>STARPU_NCPU=2</c>
and <c>STARPU_WORKERS_CPUID = "0 2 1 3"</c>, the CUDA device will be controlled
by logical CPU #0, the OpenCL device will be controlled by logical CPU #2, and
the logical CPUs #1 and #3 will be used by the CPU workers.
If the number of workers is larger than the array given in
\ref STARPU_WORKERS_CPUID, the workers are bound to the logical CPUs in a
round-robin fashion: if <c>STARPU_WORKERS_CPUID = "0 1"</c>, the first
and the third (resp. second and fourth) workers will be put on CPU #0
(resp. CPU #1).
This variable is ignored if the field
starpu_conf::use_explicit_workers_bindid passed to starpu_init() is
set.
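As an illustration of the worked example above, the following sketch sets the worker counts and the binding programmatically before initialization. It is only a sketch, assuming a POSIX <c>setenv()</c>; the values are read when starpu_init() is called, and setting the corresponding <c>export</c> lines in the shell is equivalent.
\code{.c}
#include <stdlib.h>
#include <starpu.h>

int main(void)
{
	/* 1 CUDA worker, 1 OpenCL worker and 2 CPU workers, bound as in
	 * the example above: CUDA on logical CPU #0, OpenCL on #2, and
	 * the CPU workers on #1 and #3. */
	setenv("STARPU_NCUDA", "1", 1);
	setenv("STARPU_NOPENCL", "1", 1);
	setenv("STARPU_NCPU", "2", 1);
	setenv("STARPU_WORKERS_CPUID", "0 2 1 3", 1);

	if (starpu_init(NULL) != 0)
		return 1;
	/* ... */
	starpu_shutdown();
	return 0;
}
\endcode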
</dd>
<dt>STARPU_WORKERS_CUDAID</dt>
<dd>
\anchor STARPU_WORKERS_CUDAID
\addindex __env__STARPU_WORKERS_CUDAID
Similarly to the \ref STARPU_WORKERS_CPUID environment variable, it is
possible to select which CUDA devices should be used by StarPU. On a machine
equipped with 4 GPUs, setting <c>STARPU_WORKERS_CUDAID = "1 3"</c> and
<c>STARPU_NCUDA=2</c> specifies that 2 CUDA workers should be created, and that
they should use CUDA devices #1 and #3 (the logical ordering of the devices is
the one reported by CUDA).
This variable is ignored if the field
starpu_conf::use_explicit_workers_cuda_gpuid passed to starpu_init()
is set.
</dd>
<dt>STARPU_WORKERS_OPENCLID</dt>
<dd>
\anchor STARPU_WORKERS_OPENCLID
\addindex __env__STARPU_WORKERS_OPENCLID
OpenCL equivalent of the \ref STARPU_WORKERS_CUDAID environment variable.
This variable is ignored if the field
starpu_conf::use_explicit_workers_opencl_gpuid passed to starpu_init()
is set.
</dd>
<dt>STARPU_WORKERS_MICID</dt>
<dd>
\anchor STARPU_WORKERS_MICID
\addindex __env__STARPU_WORKERS_MICID
MIC equivalent of the \ref STARPU_WORKERS_CUDAID environment variable.
This variable is ignored if the field
starpu_conf::use_explicit_workers_mic_deviceid passed to starpu_init()
is set.
</dd>
<dt>STARPU_WORKERS_SCCID</dt>
<dd>
\anchor STARPU_WORKERS_SCCID
\addindex __env__STARPU_WORKERS_SCCID
SCC equivalent of the \ref STARPU_WORKERS_CUDAID environment variable.
This variable is ignored if the field
starpu_conf::use_explicit_workers_scc_deviceid passed to starpu_init()
is set.
</dd>
<dt>STARPU_WORKER_TREE</dt>
<dd>
\anchor STARPU_WORKER_TREE
\addindex __env__STARPU_WORKER_TREE
Define to 1 to enable the tree iterator in schedulers.
</dd>
<dt>STARPU_SINGLE_COMBINED_WORKER</dt>
<dd>
\anchor STARPU_SINGLE_COMBINED_WORKER
\addindex __env__STARPU_SINGLE_COMBINED_WORKER
If set, StarPU will create several workers which won't be able to work
concurrently. It will by default create combined workers whose size goes from 1
to the total number of CPU workers in the system. \ref STARPU_MIN_WORKERSIZE
and \ref STARPU_MAX_WORKERSIZE can be used to change this default.
</dd>
<dt>STARPU_MIN_WORKERSIZE</dt>
<dd>
\anchor STARPU_MIN_WORKERSIZE
\addindex __env__STARPU_MIN_WORKERSIZE
\ref STARPU_MIN_WORKERSIZE
permits to specify the minimum size of the combined workers (instead of the default 2).
</dd>
<dt>STARPU_MAX_WORKERSIZE</dt>
<dd>
\anchor STARPU_MAX_WORKERSIZE
\addindex __env__STARPU_MAX_WORKERSIZE
\ref STARPU_MAX_WORKERSIZE
permits to specify the maximum size of the combined workers (instead of the
number of CPU workers in the system).
</dd>
<dt>STARPU_SYNTHESIZE_ARITY_COMBINED_WORKER</dt>
<dd>
\anchor STARPU_SYNTHESIZE_ARITY_COMBINED_WORKER
\addindex __env__STARPU_SYNTHESIZE_ARITY_COMBINED_WORKER
Let the user decide how many elements are allowed between combined workers
created from hwloc information. For instance, in the case of sockets with 6
cores without shared L2 caches, if \ref STARPU_SYNTHESIZE_ARITY_COMBINED_WORKER is
set to 6, no combined worker will be synthesized beyond one for the socket
and one per core. If it is set to 3, 3 intermediate combined workers will be
synthesized, to divide the socket cores into 3 chunks of 2 cores. If it is set to
2, 2 intermediate combined workers will be synthesized, to divide the socket
cores into 2 chunks of 3 cores, and then 3 additional combined workers will be
synthesized, to divide the former synthesized workers into a bunch of 2 cores,
and the remaining core (for which no combined worker is synthesized since there
is already a normal worker for it).
The default, 2, thus makes StarPU tend to build a binary tree of combined
workers.
</dd>
<dt>STARPU_DISABLE_ASYNCHRONOUS_COPY</dt>
<dd>
\anchor STARPU_DISABLE_ASYNCHRONOUS_COPY
\addindex __env__STARPU_DISABLE_ASYNCHRONOUS_COPY
Disable asynchronous copies between CPU and GPU devices.
The AMD implementation of OpenCL is known to
fail when copying data asynchronously. When using this implementation,
it is therefore necessary to disable asynchronous data transfers.
</dd>
<dt>STARPU_DISABLE_ASYNCHRONOUS_CUDA_COPY</dt>
<dd>
\anchor STARPU_DISABLE_ASYNCHRONOUS_CUDA_COPY
\addindex __env__STARPU_DISABLE_ASYNCHRONOUS_CUDA_COPY
Disable asynchronous copies between CPU and CUDA devices.
</dd>
<dt>STARPU_DISABLE_ASYNCHRONOUS_OPENCL_COPY</dt>
<dd>
\anchor STARPU_DISABLE_ASYNCHRONOUS_OPENCL_COPY
\addindex __env__STARPU_DISABLE_ASYNCHRONOUS_OPENCL_COPY
Disable asynchronous copies between CPU and OpenCL devices.
The AMD implementation of OpenCL is known to
fail when copying data asynchronously. When using this implementation,
it is therefore necessary to disable asynchronous data transfers.
</dd>
<dt>STARPU_DISABLE_ASYNCHRONOUS_MIC_COPY</dt>
<dd>
\anchor STARPU_DISABLE_ASYNCHRONOUS_MIC_COPY
\addindex __env__STARPU_DISABLE_ASYNCHRONOUS_MIC_COPY
Disable asynchronous copies between CPU and MIC devices.
</dd>
<dt>STARPU_DISABLE_ASYNCHRONOUS_MPI_MS_COPY</dt>
<dd>
\anchor STARPU_DISABLE_ASYNCHRONOUS_MPI_MS_COPY
\addindex __env__STARPU_DISABLE_ASYNCHRONOUS_MPI_MS_COPY
Disable asynchronous copies between CPU and MPI Slave devices.
</dd>
<dt>STARPU_ENABLE_CUDA_GPU_GPU_DIRECT</dt>
<dd>
\anchor STARPU_ENABLE_CUDA_GPU_GPU_DIRECT
\addindex __env__STARPU_ENABLE_CUDA_GPU_GPU_DIRECT
Enable (1) or disable (0) direct CUDA transfers from GPU to GPU, without copying
through RAM. The default is enabled.
This permits testing the performance effect of GPU-Direct.
</dd>
<dt>STARPU_DISABLE_PINNING</dt>
<dd>
\anchor STARPU_DISABLE_PINNING
\addindex __env__STARPU_DISABLE_PINNING
Disable (1) or enable (0) the pinning of host memory allocated through starpu_malloc(), starpu_memory_pin()
and friends. The default is enabled.
This permits testing the performance effect of memory pinning.
</dd>
<dt>STARPU_MIC_SINK_PROGRAM_NAME</dt>
<dd>
\anchor STARPU_MIC_SINK_PROGRAM_NAME
\addindex __env__STARPU_MIC_SINK_PROGRAM_NAME
todo
</dd>
<dt>STARPU_MIC_SINK_PROGRAM_PATH</dt>
<dd>
\anchor STARPU_MIC_SINK_PROGRAM_PATH
\addindex __env__STARPU_MIC_SINK_PROGRAM_PATH
todo
</dd>
<dt>STARPU_MIC_PROGRAM_PATH</dt>
<dd>
\anchor STARPU_MIC_PROGRAM_PATH
\addindex __env__STARPU_MIC_PROGRAM_PATH
todo
</dd>
</dl>
\section ConfiguringTheSchedulingEngine Configuring The Scheduling Engine
<dl>
<dt>STARPU_SCHED</dt>
<dd>
\anchor STARPU_SCHED
\addindex __env__STARPU_SCHED
Choose between the different scheduling policies proposed by StarPU: work
stealing, random, greedy, with performance models, etc.
Use <c>STARPU_SCHED=help</c> to get the list of available schedulers.
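For instance, the following sketch selects the <c>dmda</c> scheduler before initialization (assuming a POSIX <c>setenv()</c> and that this policy is available in your build; <c>STARPU_SCHED=help</c> lists the actual choices):
\code{.c}
#include <stdlib.h>
#include <starpu.h>

int main(void)
{
	/* Select the dmda scheduling policy before initialization;
	 * equivalent to: export STARPU_SCHED=dmda */
	setenv("STARPU_SCHED", "dmda", 1);

	if (starpu_init(NULL) != 0)
		return 1;
	/* ... */
	starpu_shutdown();
	return 0;
}
\endcode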
</dd>
<dt>STARPU_MIN_PRIO</dt>
<dd>
\anchor STARPU_MIN_PRIO_env
\addindex __env__STARPU_MIN_PRIO
Set the minimum priority used by priority-aware schedulers.
</dd>
<dt>STARPU_MAX_PRIO</dt>
<dd>
\anchor STARPU_MAX_PRIO_env
\addindex __env__STARPU_MAX_PRIO
Set the maximum priority used by priority-aware schedulers.
</dd>
<dt>STARPU_CALIBRATE</dt>
<dd>
\anchor STARPU_CALIBRATE
\addindex __env__STARPU_CALIBRATE
If this variable is set to 1, the performance models are calibrated during
the execution. If it is set to 2, the previous values are dropped to restart
calibration from scratch. Setting this variable to 0 disables calibration; this
is the default behaviour.
Note: this currently only applies to the <c>dm</c> and <c>dmda</c> scheduling policies.
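A hedged sketch of a typical calibration run with a performance-model scheduler follows; the number of runs needed before the models are considered calibrated depends on \ref STARPU_CALIBRATE_MINIMUM, and <c>dmda</c> is only used as an example policy.
\code{.c}
#include <stdlib.h>
#include <starpu.h>

int main(void)
{
	/* Use a performance-model scheduler and calibrate its models
	 * during this run; equivalent to:
	 *   export STARPU_SCHED=dmda STARPU_CALIBRATE=1 */
	setenv("STARPU_SCHED", "dmda", 1);
	setenv("STARPU_CALIBRATE", "1", 1);

	if (starpu_init(NULL) != 0)
		return 1;
	/* ... run the application enough times for sufficient
	 * measurements to be collected ... */
	starpu_shutdown();
	return 0;
}
\endcode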
</dd>
<dt>STARPU_CALIBRATE_MINIMUM</dt>
<dd>
\anchor STARPU_CALIBRATE_MINIMUM
\addindex __env__STARPU_CALIBRATE_MINIMUM
This defines the minimum number of calibration measurements that will be made
before considering that the performance model is calibrated. The default value is 10.
</dd>
<dt>STARPU_BUS_CALIBRATE</dt>
<dd>
\anchor STARPU_BUS_CALIBRATE
\addindex __env__STARPU_BUS_CALIBRATE
If this variable is set to 1, the bus is recalibrated during initialization.
</dd>
<dt>STARPU_PREFETCH</dt>
<dd>
\anchor STARPU_PREFETCH
\addindex __env__STARPU_PREFETCH
This variable indicates whether data prefetching should be enabled (0 means
that it is disabled). If prefetching is enabled, when a task is scheduled to be
executed e.g. on a GPU, StarPU will request an asynchronous transfer in
advance, so that data is already present on the GPU when the task starts. As a
result, computation and data transfers are overlapped.
Note that prefetching is enabled by default in StarPU.
</dd>
<dt>STARPU_SCHED_ALPHA</dt>
<dd>
\anchor STARPU_SCHED_ALPHA
\addindex __env__STARPU_SCHED_ALPHA
To estimate the cost of a task, StarPU takes into account the estimated
computation time (obtained thanks to performance models). The alpha factor is
the coefficient to be applied to it before adding it to the communication part.
</dd>
<dt>STARPU_SCHED_BETA</dt>
<dd>
\anchor STARPU_SCHED_BETA
\addindex __env__STARPU_SCHED_BETA
To estimate the cost of a task, StarPU takes into account the estimated
data transfer time (obtained thanks to performance models). The beta factor is
the coefficient to be applied to it before adding it to the computation part.
</dd>
<dt>STARPU_SCHED_GAMMA</dt>
<dd>
\anchor STARPU_SCHED_GAMMA
\addindex __env__STARPU_SCHED_GAMMA
Define the execution time penalty of a joule (\ref Energy-basedScheduling).
</dd>
<dt>STARPU_IDLE_POWER</dt>
<dd>
\anchor STARPU_IDLE_POWER
\addindex __env__STARPU_IDLE_POWER
Define the idle power of the machine (\ref Energy-basedScheduling).
</dd>
<dt>STARPU_PROFILING</dt>
<dd>
\anchor STARPU_PROFILING
\addindex __env__STARPU_PROFILING
Enable on-line performance monitoring (\ref EnablingOn-linePerformanceMonitoring).
</dd>
</dl>
\section Extensions Extensions
<dl>
<dt>SOCL_OCL_LIB_OPENCL</dt>
<dd>
\anchor SOCL_OCL_LIB_OPENCL
\addindex __env__SOCL_OCL_LIB_OPENCL
The SOCL test suite is only run when the environment variable
\ref SOCL_OCL_LIB_OPENCL is defined. It should contain the location
of the file <c>libOpenCL.so</c> of the OCL ICD implementation.
</dd>
<dt>OCL_ICD_VENDORS</dt>
<dd>
\anchor OCL_ICD_VENDORS
\addindex __env__OCL_ICD_VENDORS
When using SOCL with OpenCL ICD
(https://forge.imag.fr/projects/ocl-icd/), this variable may be used
to point to the directory where ICD files are installed. The default
directory is <c>/etc/OpenCL/vendors</c>. StarPU installs ICD
files in the directory <c>$prefix/share/starpu/opencl/vendors</c>.
</dd>
<dt>STARPU_COMM_STATS</dt>
<dd>
\anchor STARPU_COMM_STATS
\addindex __env__STARPU_COMM_STATS
Communication statistics for starpumpi (\ref MPISupport)
will be enabled when the environment variable \ref STARPU_COMM_STATS
is defined to a value other than 0.
</dd>
<dt>STARPU_MPI_CACHE</dt>
<dd>
\anchor STARPU_MPI_CACHE
\addindex __env__STARPU_MPI_CACHE
Communication cache for starpumpi (\ref MPISupport) will be
disabled when the environment variable \ref STARPU_MPI_CACHE is set
to 0. It is enabled by default and for any other value of the variable
\ref STARPU_MPI_CACHE.
</dd>
<dt>STARPU_MPI_COMM</dt>
<dd>
\anchor STARPU_MPI_COMM
\addindex __env__STARPU_MPI_COMM
Communication trace for starpumpi (\ref MPISupport) will be
enabled when the environment variable \ref STARPU_MPI_COMM is set
to 1, and StarPU has been configured with the option
\ref enable-verbose "--enable-verbose".
</dd>
<dt>STARPU_MPI_CACHE_STATS</dt>
<dd>
\anchor STARPU_MPI_CACHE_STATS
\addindex __env__STARPU_MPI_CACHE_STATS
When set to 1, statistics are enabled for the communication cache (\ref MPISupport). For now,
it prints messages on the standard output when data are added to or removed from the received
communication cache.
</dd>
<dt>STARPU_MPI_FAKE_SIZE</dt>
<dd>
\anchor STARPU_MPI_FAKE_SIZE
\addindex __env__STARPU_MPI_FAKE_SIZE
Setting this to a number makes StarPU believe that there are that many MPI nodes, even
if the application was run on only one MPI node. This for instance allows simulating the execution
of one of the nodes of a big cluster without actually running the rest.
It of course does not provide actual computation results or timings.
</dd>
<dt>STARPU_MPI_FAKE_RANK</dt>
<dd>
\anchor STARPU_MPI_FAKE_RANK
\addindex __env__STARPU_MPI_FAKE_RANK
Setting this to a number makes StarPU believe that it runs the given MPI node, even
if the application was run on only one MPI node. This for instance allows simulating the execution
of one of the nodes of a big cluster without actually running the rest.
It of course does not provide actual computation results or timings.
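For instance, one might pretend that a single process is rank 2 of a 16-node run. The following is only a sketch: the size and rank values are illustrative, and the rest of the MPI application setup is elided.
\code{.c}
#include <stdlib.h>
#include <starpu.h>

int main(void)
{
	/* Pretend this single process is node 2 of a 16-node cluster;
	 * equivalent to:
	 *   export STARPU_MPI_FAKE_SIZE=16 STARPU_MPI_FAKE_RANK=2 */
	setenv("STARPU_MPI_FAKE_SIZE", "16", 1);
	setenv("STARPU_MPI_FAKE_RANK", "2", 1);

	if (starpu_init(NULL) != 0)
		return 1;
	/* ... submit the application's tasks; only the part mapped to
	 * the faked rank will actually be executed ... */
	starpu_shutdown();
	return 0;
}
\endcode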
</dd>
<dt>STARPU_SIMGRID_CUDA_MALLOC_COST</dt>
<dd>
\anchor STARPU_SIMGRID_CUDA_MALLOC_COST
\addindex __env__STARPU_SIMGRID_CUDA_MALLOC_COST
When set to 1 (which is the default), CUDA malloc costs are taken into account
in simgrid mode.
</dd>
<dt>STARPU_SIMGRID_CUDA_QUEUE_COST</dt>
<dd>
\anchor STARPU_SIMGRID_CUDA_QUEUE_COST
\addindex __env__STARPU_SIMGRID_CUDA_QUEUE_COST
When set to 1 (which is the default), CUDA task and transfer queueing costs are
taken into account in simgrid mode.
</dd>
<dt>STARPU_PCI_FLAT</dt>
<dd>
\anchor STARPU_PCI_FLAT
\addindex __env__STARPU_PCI_FLAT
When unset or set to 0, the platform file created for simgrid will
contain PCI bandwidths and routes.
</dd>
<dt>STARPU_SIMGRID_QUEUE_MALLOC_COST</dt>
<dd>
\anchor STARPU_SIMGRID_QUEUE_MALLOC_COST
\addindex __env__STARPU_SIMGRID_QUEUE_MALLOC_COST
When unset or set to 1, simulate within simgrid the GPU transfer queueing.
</dd>
<dt>STARPU_MALLOC_SIMULATION_FOLD</dt>
<dd>
\anchor STARPU_MALLOC_SIMULATION_FOLD
\addindex __env__STARPU_MALLOC_SIMULATION_FOLD
This defines the size of the file used for folding virtual allocation, in
MiB. The default is 1, thus allowing 64GiB virtual memory when Linux's
<c>sysctl vm.max_map_count</c> value is the default 65535.
</dd>
<dt>STARPU_SIMGRID_TASK_SUBMIT_COST</dt>
<dd>
\anchor STARPU_SIMGRID_TASK_SUBMIT_COST
\addindex __env__STARPU_SIMGRID_TASK_SUBMIT_COST
When set to 1 (which is the default), task submission costs are taken into
account in simgrid mode. This provides more accurate simgrid predictions,
especially for the beginning of the execution.
</dd>
<dt>STARPU_SIMGRID_FETCHING_INPUT_COST</dt>
<dd>
\anchor STARPU_SIMGRID_FETCHING_INPUT_COST
\addindex __env__STARPU_SIMGRID_FETCHING_INPUT_COST
When set to 1 (which is the default), fetching input costs are taken into
account in simgrid mode. This provides more accurate simgrid predictions,
especially regarding data transfers.
</dd>
<dt>STARPU_SIMGRID_SCHED_COST</dt>
<dd>
\anchor STARPU_SIMGRID_SCHED_COST
\addindex __env__STARPU_SIMGRID_SCHED_COST
When set to 1 (0 is the default), scheduling costs are taken into
account in simgrid mode. This provides more accurate simgrid predictions,
and allows studying scheduling overhead of the runtime system. However,
it also makes simulation non-deterministic.
</dd>
</dl>
\section MiscellaneousAndDebug Miscellaneous And Debug
<dl>
<dt>STARPU_HOME</dt>
<dd>
\anchor STARPU_HOME
\addindex __env__STARPU_HOME
This specifies the main directory in which StarPU stores its
configuration files. The default is <c>$HOME</c> on Unix environments,
and <c>$USERPROFILE</c> on Windows environments.
</dd>
<dt>STARPU_PATH</dt>
<dd>
\anchor STARPU_PATH
\addindex __env__STARPU_PATH
Only used on Windows environments.
This specifies the main directory in which StarPU is installed
(\ref RunningABasicStarPUApplicationOnMicrosoft).
</dd>
<dt>STARPU_PERF_MODEL_DIR</dt>
<dd>
\anchor STARPU_PERF_MODEL_DIR
\addindex __env__STARPU_PERF_MODEL_DIR
This specifies the main directory in which StarPU stores its
performance model files. The default is <c>$STARPU_HOME/.starpu/sampling</c>.
</dd>
<dt>STARPU_PERF_MODEL_HOMOGENEOUS_CPU</dt>
<dd>
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_CPU
\addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_CPU
When this is set to 0, StarPU will assume that CPU devices do not have the same
performance, and will thus use different performance models for them, making
kernel calibration much longer, since measurements have to be made for each CPU
core.
</dd>
<dt>STARPU_PERF_MODEL_HOMOGENEOUS_CUDA</dt>
<dd>
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_CUDA
\addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_CUDA
When this is set to 1, StarPU will assume that all CUDA devices have the same
performance, and will thus share performance models between them, allowing kernel
calibration to be much faster, since measurements only have to be done once for all
CUDA GPUs.
</dd>
<dt>STARPU_PERF_MODEL_HOMOGENEOUS_OPENCL</dt>
<dd>
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_OPENCL
\addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_OPENCL
When this is set to 1, StarPU will assume that all OpenCL devices have the same
performance, and will thus share performance models between them, allowing kernel
calibration to be much faster, since measurements only have to be done once for all
OpenCL devices.
</dd>
<dt>STARPU_PERF_MODEL_HOMOGENEOUS_MIC</dt>
<dd>
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_MIC
\addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_MIC
When this is set to 1, StarPU will assume that all MIC devices have the same
performance, and will thus share performance models between them, allowing kernel
calibration to be much faster, since measurements only have to be done once for all
MIC devices.
</dd>
<dt>STARPU_PERF_MODEL_HOMOGENEOUS_MPI_MS</dt>
<dd>
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_MPI_MS
\addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_MPI_MS
When this is set to 1, StarPU will assume that all MPI Slave devices have the same
performance, and will thus share performance models between them, allowing kernel
calibration to be much faster, since measurements only have to be done once for all
MPI Slaves.
</dd>
<dt>STARPU_PERF_MODEL_HOMOGENEOUS_SCC</dt>
<dd>
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_SCC
\addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_SCC
When this is set to 1, StarPU will assume that all SCC devices have the same
performance, and will thus share performance models between them, allowing kernel
calibration to be much faster, since measurements only have to be done once for all
SCC devices.
</dd>
<dt>STARPU_HOSTNAME</dt>
<dd>
\anchor STARPU_HOSTNAME
\addindex __env__STARPU_HOSTNAME
When set, force the hostname to be used when dealing with performance model
files. Models are indexed by machine name. When running for example on
a homogeneous cluster, it is possible to share the models between
machines by setting <c>export STARPU_HOSTNAME=some_global_name</c>.
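The same can be done programmatically before starpu_init(), as in the sketch below; the name <c>some_global_name</c> is purely illustrative, and a POSIX <c>setenv()</c> is assumed.
\code{.c}
#include <stdlib.h>
#include <starpu.h>

int main(void)
{
	/* Share performance models across identical cluster nodes;
	 * equivalent to: export STARPU_HOSTNAME=some_global_name */
	setenv("STARPU_HOSTNAME", "some_global_name", 1);

	if (starpu_init(NULL) != 0)
		return 1;
	/* ... */
	starpu_shutdown();
	return 0;
}
\endcode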
</dd>
<dt>STARPU_OPENCL_PROGRAM_DIR</dt>
<dd>
\anchor STARPU_OPENCL_PROGRAM_DIR
\addindex __env__STARPU_OPENCL_PROGRAM_DIR
This specifies the directory where the OpenCL codelet source files are
located. The function starpu_opencl_load_program_source() looks
for the codelet in the current directory, in the directory specified
by the environment variable \ref STARPU_OPENCL_PROGRAM_DIR, in the
directory <c>share/starpu/opencl</c> of the installation directory of
StarPU, and finally in the source directory of StarPU.
</dd>
<dt>STARPU_SILENT</dt>
<dd>
\anchor STARPU_SILENT
\addindex __env__STARPU_SILENT
This variable allows disabling verbose mode at runtime when StarPU
has been configured with the option \ref enable-verbose "--enable-verbose". It also
disables the display of StarPU information and warning messages.
</dd>
<dt>STARPU_LOGFILENAME</dt>
<dd>
\anchor STARPU_LOGFILENAME
\addindex __env__STARPU_LOGFILENAME
This variable specifies in which file the debugging output should be saved.
</dd>
<dt>STARPU_FXT_PREFIX</dt>
<dd>
\anchor STARPU_FXT_PREFIX
\addindex __env__STARPU_FXT_PREFIX
This variable specifies in which directory to save the trace generated if FxT
is enabled. It needs to have a trailing '/' character.
</dd>
<dt>STARPU_FXT_TRACE</dt>
<dd>
\anchor STARPU_FXT_TRACE
\addindex __env__STARPU_FXT_TRACE
This variable specifies whether to generate (1) or not (0) the FxT trace in
<c>/tmp/prof_file_XXX_YYY</c>. The default is 1 (generate it).
</dd>
<dt>STARPU_LIMIT_CUDA_devid_MEM</dt>
<dd>
\anchor STARPU_LIMIT_CUDA_devid_MEM
\addindex __env__STARPU_LIMIT_CUDA_devid_MEM
This variable specifies the maximum number of megabytes that should be
available to the application on the CUDA device with the identifier
<c>devid</c>. This variable is intended to be used for experimental
purposes as it emulates devices that have a limited amount of memory.
When defined, the variable overwrites the value of the variable
\ref STARPU_LIMIT_CUDA_MEM.
</dd>
<dt>STARPU_LIMIT_CUDA_MEM</dt>
<dd>
\anchor STARPU_LIMIT_CUDA_MEM
\addindex __env__STARPU_LIMIT_CUDA_MEM
This variable specifies the maximum number of megabytes that should be
available to the application on each CUDA device. This variable is
intended to be used for experimental purposes as it emulates devices
that have a limited amount of memory.
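A sketch emulating CUDA devices with only 1024 MB available to the application follows; the value is illustrative, and a POSIX <c>setenv()</c> is assumed.
\code{.c}
#include <stdlib.h>
#include <starpu.h>

int main(void)
{
	/* Emulate GPUs with only 1024 MB usable by the application;
	 * equivalent to: export STARPU_LIMIT_CUDA_MEM=1024 */
	setenv("STARPU_LIMIT_CUDA_MEM", "1024", 1);

	if (starpu_init(NULL) != 0)
		return 1;
	/* ... */
	starpu_shutdown();
	return 0;
}
\endcode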
</dd>
<dt>STARPU_LIMIT_OPENCL_devid_MEM</dt>
<dd>
\anchor STARPU_LIMIT_OPENCL_devid_MEM
\addindex __env__STARPU_LIMIT_OPENCL_devid_MEM
This variable specifies the maximum number of megabytes that should be
available to the application on the OpenCL device with the identifier
<c>devid</c>. This variable is intended to be used for experimental
purposes as it emulates devices that have a limited amount of memory.
When defined, the variable overwrites the value of the variable
\ref STARPU_LIMIT_OPENCL_MEM.
</dd>
<dt>STARPU_LIMIT_OPENCL_MEM</dt>
<dd>
\anchor STARPU_LIMIT_OPENCL_MEM
\addindex __env__STARPU_LIMIT_OPENCL_MEM
This variable specifies the maximum number of megabytes that should be
available to the application on each OpenCL device. This variable is
intended to be used for experimental purposes as it emulates devices
that have a limited amount of memory.
</dd>
<dt>STARPU_LIMIT_CPU_MEM</dt>
<dd>
\anchor STARPU_LIMIT_CPU_MEM
\addindex __env__STARPU_LIMIT_CPU_MEM
This variable specifies the maximum number of megabytes that should be
available to the application in the main CPU memory. Setting it enables the allocation
cache in main memory. Setting it to zero lets StarPU overflow memory.
</dd>
<dt>STARPU_MINIMUM_AVAILABLE_MEM</dt>
<dd>
\anchor STARPU_MINIMUM_AVAILABLE_MEM
\addindex __env__STARPU_MINIMUM_AVAILABLE_MEM
This specifies the minimum percentage of memory that should be available in GPUs
(or in main memory, when using out of core), below which a reclaiming pass is
performed. The default is 0%.
</dd>
<dt>STARPU_TARGET_AVAILABLE_MEM</dt>
<dd>
\anchor STARPU_TARGET_AVAILABLE_MEM
\addindex __env__STARPU_TARGET_AVAILABLE_MEM
This specifies the target percentage of memory that should be reached in
GPUs (or in main memory, when using out of core), when performing a periodic
reclaiming pass. The default is 0%.
</dd>
<dt>STARPU_MINIMUM_CLEAN_BUFFERS</dt>
<dd>
\anchor STARPU_MINIMUM_CLEAN_BUFFERS
\addindex __env__STARPU_MINIMUM_CLEAN_BUFFERS
This specifies the minimum percentage of buffers that should be clean in GPUs
(or in main memory, when using out of core), below which asynchronous writebacks will be
issued. The default is 5%.
</dd>
<dt>STARPU_TARGET_CLEAN_BUFFERS</dt>
<dd>
\anchor STARPU_TARGET_CLEAN_BUFFERS
\addindex __env__STARPU_TARGET_CLEAN_BUFFERS
This specifies the target percentage of clean buffers that should be reached in
GPUs (or in main memory, when using out of core), when performing an asynchronous
writeback pass. The default is 10%.
</dd>
<dt>STARPU_DIDUSE_BARRIER</dt>
<dd>
\anchor STARPU_DIDUSE_BARRIER
\addindex __env__STARPU_DIDUSE_BARRIER
When set to 1, StarPU will never evict a piece of data if it has not been used
by at least one task. This avoids odd behaviors under high memory pressure, but
can lead to deadlocks, so it is to be considered experimental only.
</dd>
<dt>STARPU_DISK_SWAP</dt>
<dd>
\anchor STARPU_DISK_SWAP
\addindex __env__STARPU_DISK_SWAP
This specifies a path where StarPU can push data when the main memory is getting
full.
</dd>
<dt>STARPU_DISK_SWAP_BACKEND</dt>
<dd>
\anchor STARPU_DISK_SWAP_BACKEND
\addindex __env__STARPU_DISK_SWAP_BACKEND
This specifies the backend to be used by StarPU to push data when the main
memory is getting full. The default is unistd (i.e. using read/write functions),
other values are stdio (i.e. using fread/fwrite), unistd_o_direct (i.e. using
read/write with O_DIRECT), leveldb (i.e. using a leveldb database), and hdf5
(i.e. using the HDF5 library).
</dd>
<dt>STARPU_DISK_SWAP_SIZE</dt>
<dd>
\anchor STARPU_DISK_SWAP_SIZE
\addindex __env__STARPU_DISK_SWAP_SIZE
This specifies the maximum size in MiB to be used by StarPU to push data when the main
memory is getting full. The default is unlimited.
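A sketch combining the three variables, spilling data to a scratch directory with the default unistd backend and a 16 GiB cap; the path and size are illustrative, the directory is assumed to exist, and a POSIX <c>setenv()</c> is assumed.
\code{.c}
#include <stdlib.h>
#include <starpu.h>

int main(void)
{
	/* Let StarPU spill data to /tmp/starpu-swap (illustrative path,
	 * assumed to exist) using plain read/write, with at most
	 * 16384 MiB of disk space. */
	setenv("STARPU_DISK_SWAP", "/tmp/starpu-swap", 1);
	setenv("STARPU_DISK_SWAP_BACKEND", "unistd", 1);
	setenv("STARPU_DISK_SWAP_SIZE", "16384", 1);

	if (starpu_init(NULL) != 0)
		return 1;
	/* ... */
	starpu_shutdown();
	return 0;
}
\endcode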
</dd>
<dt>STARPU_LIMIT_MAX_SUBMITTED_TASKS</dt>
<dd>
\anchor STARPU_LIMIT_MAX_SUBMITTED_TASKS
\addindex __env__STARPU_LIMIT_MAX_SUBMITTED_TASKS
This variable allows the user to control the task submission flow by specifying
to StarPU a maximum number of submitted tasks allowed at a given time, i.e. when
this limit is reached task submission becomes blocking until enough tasks have
completed, as specified by \ref STARPU_LIMIT_MIN_SUBMITTED_TASKS.
Setting it enables allocation cache buffer reuse in main memory.
</dd>
<dt>STARPU_LIMIT_MIN_SUBMITTED_TASKS</dt>
<dd>
\anchor STARPU_LIMIT_MIN_SUBMITTED_TASKS
\addindex __env__STARPU_LIMIT_MIN_SUBMITTED_TASKS
This variable allows the user to control the task submission flow by specifying
to StarPU a submitted task threshold to wait for before unblocking task submission. This
variable has to be used in conjunction with \ref STARPU_LIMIT_MAX_SUBMITTED_TASKS,
which puts the task submission thread to
sleep. Setting it enables allocation cache buffer reuse in main memory.
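A sketch throttling submission between two thresholds follows; the values 18000 and 12000 are illustrative and application-dependent, and a POSIX <c>setenv()</c> is assumed.
\code{.c}
#include <stdlib.h>
#include <starpu.h>

int main(void)
{
	/* Block task submission above 18000 pending tasks and resume it
	 * once the count drops back to 12000 (illustrative thresholds). */
	setenv("STARPU_LIMIT_MAX_SUBMITTED_TASKS", "18000", 1);
	setenv("STARPU_LIMIT_MIN_SUBMITTED_TASKS", "12000", 1);

	if (starpu_init(NULL) != 0)
		return 1;
	/* ... a submission loop may now block in starpu_task_submit()
	 * until enough tasks have completed ... */
	starpu_shutdown();
	return 0;
}
\endcode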
</dd>
<dt>STARPU_TRACE_BUFFER_SIZE</dt>
<dd>
\anchor STARPU_TRACE_BUFFER_SIZE
\addindex __env__STARPU_TRACE_BUFFER_SIZE
This sets the buffer size for recording trace events, in MiB. Setting it to a big
size allows avoiding pauses in the trace while it is recorded on the disk. This
however also consumes memory, of course. The default value is 64.
</dd>
<dt>STARPU_GENERATE_TRACE</dt>
<dd>
\anchor STARPU_GENERATE_TRACE
\addindex __env__STARPU_GENERATE_TRACE
When set to <c>1</c>, this variable indicates that StarPU should automatically
generate a Paje trace when starpu_shutdown() is called.
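A sketch producing a Paje trace at shutdown, assuming StarPU was built with FxT support, that the directory given in \ref STARPU_FXT_PREFIX exists, and a POSIX <c>setenv()</c>.
\code{.c}
#include <stdlib.h>
#include <starpu.h>

int main(void)
{
	/* Write the raw FxT trace under /tmp/traces/ (illustrative path,
	 * note the trailing slash) and convert it to a Paje trace at
	 * starpu_shutdown(). Requires a build with FxT support. */
	setenv("STARPU_FXT_PREFIX", "/tmp/traces/", 1);
	setenv("STARPU_GENERATE_TRACE", "1", 1);

	if (starpu_init(NULL) != 0)
		return 1;
	/* ... */
	starpu_shutdown();
	return 0;
}
\endcode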
</dd>
<dt>STARPU_ENABLE_STATS</dt>
<dd>
\anchor STARPU_ENABLE_STATS
\addindex __env__STARPU_ENABLE_STATS
When defined, enable gathering various data statistics (\ref DataStatistics).
</dd>
<dt>STARPU_MEMORY_STATS</dt>
<dd>
\anchor STARPU_MEMORY_STATS
\addindex __env__STARPU_MEMORY_STATS
When set to 0, disable the display of memory statistics on data which
have not been unregistered at the end of the execution (\ref MemoryFeedback).
</dd>
<dt>STARPU_MAX_MEMORY_USE</dt>
<dd>
\anchor STARPU_MAX_MEMORY_USE
\addindex __env__STARPU_MAX_MEMORY_USE
When set to 1, display at the end of the execution the maximum memory used by
StarPU for internal data structures during execution.
</dd>
<dt>STARPU_BUS_STATS</dt>
<dd>
\anchor STARPU_BUS_STATS
\addindex __env__STARPU_BUS_STATS
When defined, statistics about data transfers will be displayed when calling
starpu_shutdown() (\ref Profiling).
</dd>
<dt>STARPU_WORKER_STATS</dt>
<dd>
\anchor STARPU_WORKER_STATS
\addindex __env__STARPU_WORKER_STATS
When defined, statistics about the workers will be displayed when calling
starpu_shutdown() (\ref Profiling). When combined with the
environment variable \ref STARPU_PROFILING, it displays the energy
consumption (\ref Energy-basedScheduling).
</dd>
<dt>STARPU_STATS</dt>
<dd>
\anchor STARPU_STATS
\addindex __env__STARPU_STATS
When set to 0, data statistics will not be displayed at the
end of the execution of an application (\ref DataStatistics).
</dd>
<dt>STARPU_WATCHDOG_TIMEOUT</dt>
<dd>
\anchor STARPU_WATCHDOG_TIMEOUT
\addindex __env__STARPU_WATCHDOG_TIMEOUT
When set to a value other than 0, this makes StarPU print an error
message whenever it does not terminate any task for the given time (in µs),
while letting the application continue normally. Should
be used in combination with \ref STARPU_WATCHDOG_CRASH
(see \ref DetectionStuckConditions).
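A sketch that reports when no task completes for one second and then aborts so the state can be inspected in a debugger; the 1000000 µs value is illustrative, and a POSIX <c>setenv()</c> is assumed.
\code{.c}
#include <stdlib.h>
#include <starpu.h>

int main(void)
{
	/* Complain if no task terminates for 1 second (1000000 µs), and
	 * crash on purpose so the hang can be examined in gdb. */
	setenv("STARPU_WATCHDOG_TIMEOUT", "1000000", 1);
	setenv("STARPU_WATCHDOG_CRASH", "1", 1);

	if (starpu_init(NULL) != 0)
		return 1;
	/* ... */
	starpu_shutdown();
	return 0;
}
\endcode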
</dd>
<dt>STARPU_WATCHDOG_CRASH</dt>
<dd>
\anchor STARPU_WATCHDOG_CRASH
\addindex __env__STARPU_WATCHDOG_CRASH
When set to a value other than 0, it triggers a crash when the watchdog
timeout is reached, thus allowing the situation to be caught in gdb, etc.
(see \ref DetectionStuckConditions)
</dd>
<dt>STARPU_TASK_BREAK_ON_PUSH</dt>
<dd>
\anchor STARPU_TASK_BREAK_ON_PUSH
\addindex __env__STARPU_TASK_BREAK_ON_PUSH
When this variable contains a job id, StarPU will raise SIGTRAP when the task
with that job id is being pushed to the scheduler, which will be nicely caught by debuggers
(see \ref DebuggingScheduling)
</dd>
<dt>STARPU_TASK_BREAK_ON_SCHED</dt>
<dd>
\anchor STARPU_TASK_BREAK_ON_SCHED
\addindex __env__STARPU_TASK_BREAK_ON_SCHED
When this variable contains a job id, StarPU will raise SIGTRAP when the task
with that job id is being scheduled by the scheduler (at a scheduler-specific
point), which will be nicely caught by debuggers.
This only works for schedulers which have such a scheduling point defined
(see \ref DebuggingScheduling)
</dd>
<dt>STARPU_TASK_BREAK_ON_POP</dt>
<dd>
\anchor STARPU_TASK_BREAK_ON_POP
\addindex __env__STARPU_TASK_BREAK_ON_POP
When this variable contains a job id, StarPU will raise SIGTRAP when the task
with that job id is being popped from the scheduler, which will be nicely caught by debuggers
(see \ref DebuggingScheduling)
</dd>
<dt>STARPU_TASK_BREAK_ON_EXEC</dt>
<dd>
\anchor STARPU_TASK_BREAK_ON_EXEC
\addindex __env__STARPU_TASK_BREAK_ON_EXEC
When this variable contains a job id, StarPU will raise SIGTRAP when the task
with that job id is being executed, which will be nicely caught by debuggers
(see \ref DebuggingScheduling)
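For instance, the sketch below stops in the debugger when the task with job id 42 starts executing; 42 is an arbitrary example value, and the relevant job id would normally be taken from a previous trace or error message. Running the program under gdb lets the SIGTRAP be caught.
\code{.c}
#include <stdlib.h>
#include <starpu.h>

int main(void)
{
	/* Raise SIGTRAP when the task whose job id is 42 (illustrative)
	 * starts executing; run the program under gdb to catch it. */
	setenv("STARPU_TASK_BREAK_ON_EXEC", "42", 1);

	if (starpu_init(NULL) != 0)
		return 1;
	/* ... */
	starpu_shutdown();
	return 0;
}
\endcode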
</dd>
<dt>STARPU_DISABLE_KERNELS</dt>
<dd>
\anchor STARPU_DISABLE_KERNELS
\addindex __env__STARPU_DISABLE_KERNELS
When set to a value other than 1, it disables actually calling the kernel
functions, thus allowing a quick check that the task scheme is working
properly, without performing the actual application-provided computation.
</dd>
<dt>STARPU_HISTORY_MAX_ERROR</dt>
<dd>
\anchor STARPU_HISTORY_MAX_ERROR
\addindex __env__STARPU_HISTORY_MAX_ERROR
History-based performance models will drop measurements which are really far
from the measured average. This specifies the allowed variation. The default is
50 (%), i.e. the measurement is allowed to be x1.5 faster or /1.5 slower than the
average.
</dd>
<dt>STARPU_RAND_SEED</dt>
<dd>
\anchor STARPU_RAND_SEED
\addindex __env__STARPU_RAND_SEED
The random scheduler and some examples use random numbers for their own
working. Depending on the example, the seed is by default just always 0 or
the current time() (unless simgrid mode is enabled, in which case it is always
0). \ref STARPU_RAND_SEED allows setting the seed to a specific value.
</dd>
<dt>STARPU_IDLE_TIME</dt>
<dd>
\anchor STARPU_IDLE_TIME
\addindex __env__STARPU_IDLE_TIME
When set to a valid file name, a corresponding file
will be created when shutting down StarPU. The file will contain the
sum of all the workers' idle time.
</dd>
<dt>STARPU_GLOBAL_ARBITER</dt>
<dd>
\anchor STARPU_GLOBAL_ARBITER
\addindex __env__STARPU_GLOBAL_ARBITER
When set to a positive value, StarPU will create an arbiter, which
implements an advanced but centralized management of concurrent data
accesses (see \ref ConcurrentDataAccess).
</dd>
</dl>
\section ConfiguringTheHypervisor Configuring The Hypervisor
<dl>
<dt>SC_HYPERVISOR_POLICY</dt>
<dd>
\anchor SC_HYPERVISOR_POLICY
\addindex __env__SC_HYPERVISOR_POLICY
Choose between the different resizing policies proposed by StarPU for the hypervisor:
idle, app_driven, feft_lp, teft_lp, ispeed_lp, throughput_lp, etc.
Use <c>SC_HYPERVISOR_POLICY=help</c> to get the list of available policies for the hypervisor.
</dd>
<dt>SC_HYPERVISOR_TRIGGER_RESIZE</dt>
<dd>
\anchor SC_HYPERVISOR_TRIGGER_RESIZE
\addindex __env__SC_HYPERVISOR_TRIGGER_RESIZE
Choose how the hypervisor should be triggered: <c>speed</c> if the resizing algorithm should
be called whenever the speed of the context does not correspond to an optimal precomputed value,
<c>idle</c> if the resizing algorithm should be called whenever the workers are idle for a period
longer than the value indicated when configuring the hypervisor.
</dd>
<dt>SC_HYPERVISOR_START_RESIZE</dt>
<dd>
\anchor SC_HYPERVISOR_START_RESIZE
\addindex __env__SC_HYPERVISOR_START_RESIZE
Indicate the moment when the resizing should be available. The value corresponds to the percentage
of the total execution time of the application. The default value is the resizing frame.
</dd>
<dt>SC_HYPERVISOR_MAX_SPEED_GAP</dt>
<dd>
\anchor SC_HYPERVISOR_MAX_SPEED_GAP
\addindex __env__SC_HYPERVISOR_MAX_SPEED_GAP
Indicate the ratio of speed difference between contexts that should trigger the hypervisor.
This situation may occur only when a theoretical speed could not be computed and the hypervisor
has no value to compare the speed to. Otherwise the resizing of a context is not influenced by the
speed of the other contexts, but only by the value that a context should have.
</dd>
<dt>SC_HYPERVISOR_STOP_PRINT</dt>
<dd>
\anchor SC_HYPERVISOR_STOP_PRINT
\addindex __env__SC_HYPERVISOR_STOP_PRINT
By default the values of the speed of the workers are printed during the execution
of the application. If the value 1 is given to this environment variable, this printing
is not done.
</dd>
<dt>SC_HYPERVISOR_LAZY_RESIZE</dt>
<dd>
\anchor SC_HYPERVISOR_LAZY_RESIZE
\addindex __env__SC_HYPERVISOR_LAZY_RESIZE
By default the hypervisor resizes the contexts in a lazy way, that is workers are first added to a new context
before being removed from the previous one. Once these workers are clearly taken into account
in the new context (a task was popped there), they are removed from the previous one. If the application
would rather have the change in the distribution of workers take effect right away, this variable should be set to 0.
</dd>
<dt>SC_HYPERVISOR_SAMPLE_CRITERIA</dt>
<dd>
\anchor SC_HYPERVISOR_SAMPLE_CRITERIA
\addindex __env__SC_HYPERVISOR_SAMPLE_CRITERIA
By default the hypervisor uses a sample of flops when computing the speed of the contexts and of the workers.
If this variable is set to <c>time</c>, the hypervisor uses a sample of time (10% of an approximation of the total
execution time of the application).
</dd>
</dl>
*/