501_environment_variables.doxy 44 KB
/* StarPU --- Runtime system for heterogeneous multicore architectures.
 *
 * Copyright (C) 2011-2013,2015-2017 Inria
 * Copyright (C) 2010-2018 CNRS
 * Copyright (C) 2009-2011,2013-2018 Université de Bordeaux
 * Copyright (C) 2016 Uppsala University
 *
 * StarPU is free software; you can redistribute it and/or modify
 * it under the terms of the GNU Lesser General Public License as published by
 * the Free Software Foundation; either version 2.1 of the License, or (at
 * your option) any later version.
 *
 * StarPU is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
 *
 * See the GNU Lesser General Public License in COPYING.LGPL for more details.
 */
/*! \page ExecutionConfigurationThroughEnvironmentVariables Execution Configuration Through Environment Variables
The behavior of the StarPU library and tools may be tuned thanks to
the following environment variables.
\section ConfiguringWorkers Configuring Workers
<dl>
<dt>STARPU_NCPU</dt>
<dd>
\anchor STARPU_NCPU
\addindex __env__STARPU_NCPU
Specify the number of CPU workers (thus not including workers
dedicated to control accelerators). Note that by default, StarPU will
not allocate more CPU workers than there are physical CPUs, and that
some CPUs are used to control the accelerators.
</dd>
<dt>STARPU_NCPUS</dt>
<dd>
\anchor STARPU_NCPUS
\addindex __env__STARPU_NCPUS
This variable is deprecated. You should use \ref STARPU_NCPU instead.
</dd>
<dt>STARPU_NCUDA</dt>
<dd>
\anchor STARPU_NCUDA
\addindex __env__STARPU_NCUDA
Specify the number of CUDA devices that StarPU can use. If
\ref STARPU_NCUDA is lower than the number of physical devices, it is
possible to select which CUDA devices should be used by means of the
environment variable \ref STARPU_WORKERS_CUDAID. By default, StarPU will
create as many CUDA workers as there are CUDA devices.
</dd>
<dt>STARPU_NWORKER_PER_CUDA</dt>
<dd>
\anchor STARPU_NWORKER_PER_CUDA
\addindex __env__STARPU_NWORKER_PER_CUDA
Specify the number of workers per CUDA device, and thus the number of kernels
which will be concurrently running on the devices. The default value is 1.
</dd>
<dt>STARPU_CUDA_THREAD_PER_WORKER</dt>
<dd>
\anchor STARPU_CUDA_THREAD_PER_WORKER
\addindex __env__STARPU_CUDA_THREAD_PER_WORKER
Specify whether the CUDA driver should use one thread per stream (1) or a
single thread to drive all the streams of a device, or of all devices (0);
in the latter case, \ref STARPU_CUDA_THREAD_PER_DEV determines whether it is one
thread per device or one thread for all devices. The default value is 0.
Setting it to 1 contradicts setting \ref STARPU_CUDA_THREAD_PER_DEV.
</dd>
<dt>STARPU_CUDA_THREAD_PER_DEV</dt>
<dd>
\anchor STARPU_CUDA_THREAD_PER_DEV
\addindex __env__STARPU_CUDA_THREAD_PER_DEV
Specify whether the CUDA driver should use one thread per device (1) or a
single thread to drive all the devices (0). The default value is 1. It does not
make sense to set this variable if \ref STARPU_CUDA_THREAD_PER_WORKER is set to 1
(since \ref STARPU_CUDA_THREAD_PER_DEV is then meaningless).
</dd>
<dt>STARPU_CUDA_PIPELINE</dt>
<dd>
\anchor STARPU_CUDA_PIPELINE
\addindex __env__STARPU_CUDA_PIPELINE
Specify how many asynchronous tasks are submitted in advance on CUDA
devices. This permits, for instance, overlapping task management with the
execution of previous tasks, and also allows concurrent execution on Fermi
cards, which would otherwise suffer from spurious synchronizations. The default
is 2. Setting the value to 0 forces a synchronous execution of all tasks.
</dd>
<dt>STARPU_NOPENCL</dt>
<dd>
\anchor STARPU_NOPENCL
\addindex __env__STARPU_NOPENCL
OpenCL equivalent of the environment variable \ref STARPU_NCUDA.
</dd>
<dt>STARPU_OPENCL_PIPELINE</dt>
<dd>
\anchor STARPU_OPENCL_PIPELINE
\addindex __env__STARPU_OPENCL_PIPELINE
Specify how many asynchronous tasks are submitted in advance on OpenCL
devices. This permits, for instance, overlapping task management with the
execution of previous tasks, and also allows concurrent execution on Fermi
cards, which would otherwise suffer from spurious synchronizations. The default
is 2. Setting the value to 0 forces a synchronous execution of all tasks.
</dd>
<dt>STARPU_OPENCL_ON_CPUS</dt>
<dd>
\anchor STARPU_OPENCL_ON_CPUS
\addindex __env__STARPU_OPENCL_ON_CPUS
By default, the OpenCL driver only enables GPU and accelerator
devices. By setting the environment variable \ref STARPU_OPENCL_ON_CPUS
to 1, the OpenCL driver will also enable CPU devices.
</dd>
<dt>STARPU_OPENCL_ONLY_ON_CPUS</dt>
<dd>
\anchor STARPU_OPENCL_ONLY_ON_CPUS
\addindex __env__STARPU_OPENCL_ONLY_ON_CPUS
By default, the OpenCL driver enables GPU and accelerator
devices. By setting the environment variable \ref STARPU_OPENCL_ONLY_ON_CPUS
to 1, the OpenCL driver will ONLY enable CPU devices.
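For example (a minimal shell sketch; the application name <c>my_app</c> is hypothetical):

```shell
# Restrict the OpenCL driver to CPU devices only
export STARPU_OPENCL_ONLY_ON_CPUS=1
# ./my_app   # hypothetical StarPU application using the OpenCL driver
```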
</dd>
<dt>STARPU_NMIC</dt>
<dd>
\anchor STARPU_NMIC
\addindex __env__STARPU_NMIC
MIC equivalent of the environment variable \ref STARPU_NCUDA, i.e. the number of
MIC devices to use.
</dd>
<dt>STARPU_NMICTHREADS</dt>
<dd>
\anchor STARPU_NMICTHREADS
\addindex __env__STARPU_NMICTHREADS
Number of threads to use on the MIC devices.
</dd>
<dt>STARPU_NMPI_MS</dt>
<dd>
\anchor STARPU_NMPI_MS
\addindex __env__STARPU_NMPI_MS
MPI Master Slave equivalent of the environment variable \ref STARPU_NCUDA, i.e. the number of
MPI Master Slave devices to use.
</dd>
<dt>STARPU_NMPIMSTHREADS</dt>
<dd>
\anchor STARPU_NMPIMSTHREADS
\addindex __env__STARPU_NMPIMSTHREADS
Number of threads to use on the MPI Slave devices.
</dd>
<dt>STARPU_MPI_MASTER_NODE</dt>
<dd>
\anchor STARPU_MPI_MASTER_NODE
\addindex __env__STARPU_MPI_MASTER_NODE
This variable allows choosing which MPI node (by its MPI rank) will be the master.
</dd>
<dt>STARPU_NSCC</dt>
<dd>
\anchor STARPU_NSCC
\addindex __env__STARPU_NSCC
SCC equivalent of the environment variable \ref STARPU_NCUDA.
</dd>
<dt>STARPU_WORKERS_NOBIND</dt>
<dd>
\anchor STARPU_WORKERS_NOBIND
\addindex __env__STARPU_WORKERS_NOBIND
Setting it to non-zero will prevent StarPU from binding its threads to
CPUs. This is for instance useful when running the testsuite in parallel.
</dd>
<dt>STARPU_WORKERS_CPUID</dt>
<dd>
\anchor STARPU_WORKERS_CPUID
\addindex __env__STARPU_WORKERS_CPUID
Passing an array of integers in \ref STARPU_WORKERS_CPUID
specifies on which logical CPU the different workers should be
bound. For instance, if <c>STARPU_WORKERS_CPUID = "0 1 4 5"</c>, the first
worker will be bound to logical CPU #0, the second CPU worker will be bound to
logical CPU #1 and so on. Note that the logical ordering of the CPUs is either
determined by the OS, or provided by the library <c>hwloc</c> in case it is
available. Ranges can be provided: for instance, <c>STARPU_WORKERS_CPUID = "1-3
5"</c> will bind the first three workers on logical CPUs #1, #2, and #3, and the
fourth worker on logical CPU #5. Unbound ranges can also be provided:
<c>STARPU_WORKERS_CPUID = "1-"</c> will bind the workers starting from logical
CPU #1 up to the last CPU.
Note that the first workers correspond to the CUDA workers, then come the
OpenCL workers, and finally the CPU workers. For example, if
we have <c>STARPU_NCUDA=1</c>, <c>STARPU_NOPENCL=1</c>, <c>STARPU_NCPU=2</c>
and <c>STARPU_WORKERS_CPUID = "0 2 1 3"</c>, the CUDA device will be controlled
by logical CPU #0, the OpenCL device will be controlled by logical CPU #2, and
the logical CPUs #1 and #3 will be used by the CPU workers.
If the number of workers is larger than the array given in
\ref STARPU_WORKERS_CPUID, the workers are bound to the logical CPUs in a
round-robin fashion: if <c>STARPU_WORKERS_CPUID = "0 1"</c>, the first
and the third (resp. second and fourth) workers will be put on CPU #0
(resp. CPU #1).
This variable is ignored if the field
starpu_conf::use_explicit_workers_bindid passed to starpu_init() is
set.
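The example above can be sketched in the shell as follows (the binary name <c>my_app</c> is hypothetical):

```shell
# One CUDA worker, one OpenCL worker, two CPU workers:
# CUDA driven by logical CPU #0, OpenCL by logical CPU #2,
# CPU workers on logical CPUs #1 and #3
export STARPU_NCUDA=1
export STARPU_NOPENCL=1
export STARPU_NCPU=2
export STARPU_WORKERS_CPUID="0 2 1 3"
# ./my_app   # hypothetical StarPU application
```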
</dd>
<dt>STARPU_MAIN_THREAD_CPUID</dt>
<dd>
\anchor STARPU_MAIN_THREAD_CPUID
\addindex __env__STARPU_MAIN_THREAD_CPUID
When defined, this makes StarPU bind the thread that calls starpu_initialize() to
the given CPU ID.
</dd>
<dt>STARPU_MPI_THREAD_CPUID</dt>
<dd>
\anchor STARPU_MPI_THREAD_CPUID
\addindex __env__STARPU_MPI_THREAD_CPUID
When defined, this makes StarPU bind its MPI thread to the given CPU ID.
</dd>
<dt>STARPU_WORKERS_CUDAID</dt>
<dd>
\anchor STARPU_WORKERS_CUDAID
\addindex __env__STARPU_WORKERS_CUDAID
Similarly to the \ref STARPU_WORKERS_CPUID environment variable, it is
possible to select which CUDA devices should be used by StarPU. On a machine
equipped with 4 GPUs, setting <c>STARPU_WORKERS_CUDAID = "1 3"</c> and
<c>STARPU_NCUDA=2</c> specifies that 2 CUDA workers should be created, and that
they should use CUDA devices #1 and #3 (the logical ordering of the devices is
the one reported by CUDA).
This variable is ignored if the field
starpu_conf::use_explicit_workers_cuda_gpuid passed to starpu_init()
is set.
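For instance, the setting described above can be written as (the binary name is hypothetical):

```shell
# Create 2 CUDA workers, using CUDA devices #1 and #3
export STARPU_NCUDA=2
export STARPU_WORKERS_CUDAID="1 3"
# ./my_app   # hypothetical StarPU application
```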
</dd>
<dt>STARPU_WORKERS_OPENCLID</dt>
<dd>
\anchor STARPU_WORKERS_OPENCLID
\addindex __env__STARPU_WORKERS_OPENCLID
OpenCL equivalent of the \ref STARPU_WORKERS_CUDAID environment variable.
This variable is ignored if the field
starpu_conf::use_explicit_workers_opencl_gpuid passed to starpu_init()
is set.
</dd>
<dt>STARPU_WORKERS_MICID</dt>
<dd>
\anchor STARPU_WORKERS_MICID
\addindex __env__STARPU_WORKERS_MICID
MIC equivalent of the \ref STARPU_WORKERS_CUDAID environment variable.
This variable is ignored if the field
starpu_conf::use_explicit_workers_mic_deviceid passed to starpu_init()
is set.
</dd>
<dt>STARPU_WORKERS_SCCID</dt>
<dd>
\anchor STARPU_WORKERS_SCCID
\addindex __env__STARPU_WORKERS_SCCID
SCC equivalent of the \ref STARPU_WORKERS_CUDAID environment variable.
This variable is ignored if the field
starpu_conf::use_explicit_workers_scc_deviceid passed to starpu_init()
is set.
</dd>
<dt>STARPU_WORKER_TREE</dt>
<dd>
\anchor STARPU_WORKER_TREE
\addindex __env__STARPU_WORKER_TREE
Define to 1 to enable the tree iterator in schedulers.
</dd>
<dt>STARPU_SINGLE_COMBINED_WORKER</dt>
<dd>
\anchor STARPU_SINGLE_COMBINED_WORKER
\addindex __env__STARPU_SINGLE_COMBINED_WORKER
If set, StarPU will create several workers which won't be able to work
concurrently. It will by default create combined workers whose size goes from 1
to the total number of CPU workers in the system. \ref STARPU_MIN_WORKERSIZE
and \ref STARPU_MAX_WORKERSIZE can be used to change this default.
</dd>
<dt>STARPU_MIN_WORKERSIZE</dt>
<dd>
\anchor STARPU_MIN_WORKERSIZE
\addindex __env__STARPU_MIN_WORKERSIZE
\ref STARPU_MIN_WORKERSIZE
permits specifying the minimum size of the combined workers (instead of the default 2).
</dd>
<dt>STARPU_MAX_WORKERSIZE</dt>
<dd>
\anchor STARPU_MAX_WORKERSIZE
\addindex __env__STARPU_MAX_WORKERSIZE
\ref STARPU_MAX_WORKERSIZE
permits specifying the maximum size of the combined workers (instead of the
number of CPU workers in the system).
</dd>
<dt>STARPU_SYNTHESIZE_ARITY_COMBINED_WORKER</dt>
<dd>
\anchor STARPU_SYNTHESIZE_ARITY_COMBINED_WORKER
\addindex __env__STARPU_SYNTHESIZE_ARITY_COMBINED_WORKER
Let the user decide how many elements are allowed between combined workers
created from hwloc information. For instance, in the case of sockets with 6
cores without shared L2 caches, if \ref STARPU_SYNTHESIZE_ARITY_COMBINED_WORKER is
set to 6, no combined worker will be synthesized beyond one for the socket
and one per core. If it is set to 3, 3 intermediate combined workers will be
synthesized, to divide the socket cores into 3 chunks of 2 cores. If it is set to
2, 2 intermediate combined workers will be synthesized, to divide the socket
cores into 2 chunks of 3 cores, and then 3 additional combined workers will be
synthesized, to divide the former synthesized workers into a bunch of 2 cores,
and the remaining core (for which no combined worker is synthesized since there
is already a normal worker for it).
The default, 2, thus makes StarPU tend to build a binary tree of combined
workers.
</dd>
<dt>STARPU_DISABLE_ASYNCHRONOUS_COPY</dt>
<dd>
\anchor STARPU_DISABLE_ASYNCHRONOUS_COPY
\addindex __env__STARPU_DISABLE_ASYNCHRONOUS_COPY
Disable asynchronous copies between CPU and GPU devices.
The AMD implementation of OpenCL is known to
fail when copying data asynchronously. When using this implementation,
it is therefore necessary to disable asynchronous data transfers.
</dd>
<dt>STARPU_DISABLE_ASYNCHRONOUS_CUDA_COPY</dt>
<dd>
\anchor STARPU_DISABLE_ASYNCHRONOUS_CUDA_COPY
\addindex __env__STARPU_DISABLE_ASYNCHRONOUS_CUDA_COPY
Disable asynchronous copies between CPU and CUDA devices.
</dd>
<dt>STARPU_DISABLE_ASYNCHRONOUS_OPENCL_COPY</dt>
<dd>
\anchor STARPU_DISABLE_ASYNCHRONOUS_OPENCL_COPY
\addindex __env__STARPU_DISABLE_ASYNCHRONOUS_OPENCL_COPY
Disable asynchronous copies between CPU and OpenCL devices.
The AMD implementation of OpenCL is known to
fail when copying data asynchronously. When using this implementation,
it is therefore necessary to disable asynchronous data transfers.
</dd>
<dt>STARPU_DISABLE_ASYNCHRONOUS_MIC_COPY</dt>
<dd>
\anchor STARPU_DISABLE_ASYNCHRONOUS_MIC_COPY
\addindex __env__STARPU_DISABLE_ASYNCHRONOUS_MIC_COPY
Disable asynchronous copies between CPU and MIC devices.
</dd>
<dt>STARPU_DISABLE_ASYNCHRONOUS_MPI_MS_COPY</dt>
<dd>
\anchor STARPU_DISABLE_ASYNCHRONOUS_MPI_MS_COPY
\addindex __env__STARPU_DISABLE_ASYNCHRONOUS_MPI_MS_COPY
Disable asynchronous copies between CPU and MPI Slave devices.
</dd>
<dt>STARPU_ENABLE_CUDA_GPU_GPU_DIRECT</dt>
<dd>
\anchor STARPU_ENABLE_CUDA_GPU_GPU_DIRECT
\addindex __env__STARPU_ENABLE_CUDA_GPU_GPU_DIRECT
Enable (1) or disable (0) direct CUDA transfers from GPU to GPU, without copying
through RAM. The default is enabled.
This permits testing the performance effect of GPU-Direct.
</dd>
<dt>STARPU_DISABLE_PINNING</dt>
<dd>
\anchor STARPU_DISABLE_PINNING
\addindex __env__STARPU_DISABLE_PINNING
Disable (1) or enable (0) pinning host memory allocated through starpu_malloc(), starpu_memory_pin()
and friends. The default is enabled.
This permits testing the performance effect of memory pinning.
</dd>
<dt>STARPU_MIC_SINK_PROGRAM_NAME</dt>
<dd>
\anchor STARPU_MIC_SINK_PROGRAM_NAME
\addindex __env__STARPU_MIC_SINK_PROGRAM_NAME
todo
</dd>
<dt>STARPU_MIC_SINK_PROGRAM_PATH</dt>
<dd>
\anchor STARPU_MIC_SINK_PROGRAM_PATH
\addindex __env__STARPU_MIC_SINK_PROGRAM_PATH
todo
</dd>
<dt>STARPU_MIC_PROGRAM_PATH</dt>
<dd>
\anchor STARPU_MIC_PROGRAM_PATH
\addindex __env__STARPU_MIC_PROGRAM_PATH
todo
</dd>
</dl>
\section ConfiguringTheSchedulingEngine Configuring The Scheduling Engine
<dl>
<dt>STARPU_SCHED</dt>
<dd>
\anchor STARPU_SCHED
\addindex __env__STARPU_SCHED
Choose between the different scheduling policies proposed by StarPU: work
stealing, random, greedy, with performance models, etc.
Use <c>STARPU_SCHED=help</c> to get the list of available schedulers.
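For example (the application name is hypothetical; <c>dmda</c> is one of the performance-model-based policies):

```shell
# STARPU_SCHED=help ./my_app   # hypothetical application; prints the available policies
export STARPU_SCHED=dmda       # select the dmda policy for subsequent runs
```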
</dd>
<dt>STARPU_MIN_PRIO</dt>
<dd>
\anchor STARPU_MIN_PRIO_env
\addindex __env__STARPU_MIN_PRIO
Set the minimum priority used by priorities-aware schedulers.
</dd>
<dt>STARPU_MAX_PRIO</dt>
<dd>
\anchor STARPU_MAX_PRIO_env
\addindex __env__STARPU_MAX_PRIO
Set the maximum priority used by priorities-aware schedulers.
</dd>
<dt>STARPU_CALIBRATE</dt>
<dd>
\anchor STARPU_CALIBRATE
\addindex __env__STARPU_CALIBRATE
If this variable is set to 1, the performance models are calibrated during
the execution. If it is set to 2, the previous values are dropped to restart
calibration from scratch. Setting this variable to 0 disables calibration; this
is the default behaviour.
Note: this currently only applies to the <c>dm</c> and <c>dmda</c> scheduling policies.
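For instance, to restart calibration from scratch with a performance-model-based policy (the application name is hypothetical):

```shell
# Drop previously recorded measurements and recalibrate from scratch
export STARPU_CALIBRATE=2
export STARPU_SCHED=dmda       # calibration applies to the dm/dmda policies
# ./my_app                     # hypothetical application; run it several times
```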
</dd>
<dt>STARPU_CALIBRATE_MINIMUM</dt>
<dd>
\anchor STARPU_CALIBRATE_MINIMUM
\addindex __env__STARPU_CALIBRATE_MINIMUM
This defines the minimum number of calibration measurements that will be made
before considering that the performance model is calibrated. The default value is 10.
</dd>
<dt>STARPU_BUS_CALIBRATE</dt>
<dd>
\anchor STARPU_BUS_CALIBRATE
\addindex __env__STARPU_BUS_CALIBRATE
If this variable is set to 1, the bus is recalibrated during initialization.
</dd>
<dt>STARPU_PREFETCH</dt>
<dd>
\anchor STARPU_PREFETCH
\addindex __env__STARPU_PREFETCH
This variable indicates whether data prefetching should be enabled (0 means
that it is disabled). If prefetching is enabled, when a task is scheduled to be
executed e.g. on a GPU, StarPU will request an asynchronous transfer in
advance, so that data is already present on the GPU when the task starts. As a
result, computation and data transfers are overlapped.
Note that prefetching is enabled by default in StarPU.
</dd>
<dt>STARPU_SCHED_ALPHA</dt>
<dd>
\anchor STARPU_SCHED_ALPHA
\addindex __env__STARPU_SCHED_ALPHA
To estimate the cost of a task StarPU takes into account the estimated
computation time (obtained thanks to performance models). The alpha factor is
the coefficient to be applied to it before adding it to the communication part.
</dd>
<dt>STARPU_SCHED_BETA</dt>
<dd>
\anchor STARPU_SCHED_BETA
\addindex __env__STARPU_SCHED_BETA
To estimate the cost of a task StarPU takes into account the estimated
data transfer time (obtained thanks to performance models). The beta factor is
the coefficient to be applied to it before adding it to the computation part.
</dd>
<dt>STARPU_SCHED_GAMMA</dt>
<dd>
\anchor STARPU_SCHED_GAMMA
\addindex __env__STARPU_SCHED_GAMMA
Define the execution time penalty of a joule (\ref Energy-basedScheduling).
</dd>
<dt>STARPU_IDLE_POWER</dt>
<dd>
\anchor STARPU_IDLE_POWER
\addindex __env__STARPU_IDLE_POWER
Define the idle power of the machine (\ref Energy-basedScheduling).
</dd>
<dt>STARPU_PROFILING</dt>
<dd>
\anchor STARPU_PROFILING
\addindex __env__STARPU_PROFILING
Enable on-line performance monitoring (\ref EnablingOn-linePerformanceMonitoring).
</dd>
</dl>
\section Extensions Extensions
<dl>
<dt>SOCL_OCL_LIB_OPENCL</dt>
<dd>
\anchor SOCL_OCL_LIB_OPENCL
\addindex __env__SOCL_OCL_LIB_OPENCL
The SOCL test suite is only run when the environment variable
\ref SOCL_OCL_LIB_OPENCL is defined. It should contain the location
of the file <c>libOpenCL.so</c> of the OCL ICD implementation.
</dd>
<dt>OCL_ICD_VENDORS</dt>
<dd>
\anchor OCL_ICD_VENDORS
\addindex __env__OCL_ICD_VENDORS
When using SOCL with OpenCL ICD
(https://forge.imag.fr/projects/ocl-icd/), this variable may be used
to point to the directory where ICD files are installed. The default
directory is <c>/etc/OpenCL/vendors</c>. StarPU installs ICD
files in the directory <c>$prefix/share/starpu/opencl/vendors</c>.
</dd>
<dt>STARPU_COMM_STATS</dt>
<dd>
\anchor STARPU_COMM_STATS
\addindex __env__STARPU_COMM_STATS
Communication statistics for starpumpi (\ref MPISupport)
will be enabled when the environment variable \ref STARPU_COMM_STATS
is defined to a value other than 0.
</dd>
<dt>STARPU_MPI_CACHE</dt>
<dd>
\anchor STARPU_MPI_CACHE
\addindex __env__STARPU_MPI_CACHE
Communication cache for starpumpi (\ref MPISupport) will be
disabled when the environment variable \ref STARPU_MPI_CACHE is set
to 0. It is enabled by default and for any other value of the variable
\ref STARPU_MPI_CACHE.
</dd>
<dt>STARPU_MPI_COMM</dt>
<dd>
\anchor STARPU_MPI_COMM
\addindex __env__STARPU_MPI_COMM
Communication trace for starpumpi (\ref MPISupport) will be
enabled when the environment variable \ref STARPU_MPI_COMM is set
to 1, and StarPU has been configured with the option
\ref enable-verbose "--enable-verbose".
</dd>
<dt>STARPU_MPI_CACHE_STATS</dt>
<dd>
\anchor STARPU_MPI_CACHE_STATS
\addindex __env__STARPU_MPI_CACHE_STATS
When set to 1, statistics are enabled for the communication cache (\ref MPISupport). For now,
it prints messages on the standard output when data are added or removed from the received
communication cache.
</dd>
<dt>STARPU_MPI_PRIORITIES</dt>
<dd>
\anchor STARPU_MPI_PRIORITIES
\addindex __env__STARPU_MPI_PRIORITIES
When set to 0, the use of priorities to order MPI communications is disabled
(\ref MPISupport).
</dd>
<dt>STARPU_MPI_FAKE_SIZE</dt>
<dd>
\anchor STARPU_MPI_FAKE_SIZE
\addindex __env__STARPU_MPI_FAKE_SIZE
Setting this to a number makes StarPU believe that there are that many MPI nodes, even
if it was run on only one MPI node. This allows e.g. simulating the execution
of one of the nodes of a big cluster without actually running the rest.
It of course does not provide computation results or timing.
</dd>
<dt>STARPU_MPI_FAKE_RANK</dt>
<dd>
\anchor STARPU_MPI_FAKE_RANK
\addindex __env__STARPU_MPI_FAKE_RANK
Setting this to a number makes StarPU believe that it runs the given MPI node, even
if it was run on only one MPI node. This allows e.g. simulating the execution
of one of the nodes of a big cluster without actually running the rest.
It of course does not provide computation results or timing.
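For instance, a sketch of simulating one node of a larger cluster (the binary name is hypothetical; no actual computation results or timings are produced):

```shell
# Pretend to be rank 2 of a 64-node cluster, while running on a single node
export STARPU_MPI_FAKE_SIZE=64
export STARPU_MPI_FAKE_RANK=2
# ./my_mpi_app   # hypothetical StarPU-MPI application
```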
</dd>
<dt>STARPU_MPI_DRIVER_CALL_FREQUENCY</dt>
<dd>
\anchor STARPU_MPI_DRIVER_CALL_FREQUENCY
\addindex __env__STARPU_MPI_DRIVER_CALL_FREQUENCY
When set to a positive value, activates the interleaving of the execution of
tasks with the progression of MPI communications (\ref MPISupport). The
starpu_mpi_init_with_driver() function must have been called by the application
for that environment variable to be used. When set to 0, the MPI progression
thread does not use the driver given by the user at all, and only focuses on
making MPI communications progress.
</dd>
<dt>STARPU_MPI_DRIVER_TASK_FREQUENCY</dt>
<dd>
\anchor STARPU_MPI_DRIVER_TASK_FREQUENCY
\addindex __env__STARPU_MPI_DRIVER_TASK_FREQUENCY
When set to a positive value, allows the mechanism interleaving the execution of
tasks with the progression of MPI communications to execute that many tasks
before checking communication requests again (\ref MPISupport). The
starpu_mpi_init_with_driver() function must have been called by the application
for that environment variable to be used, and the
STARPU_MPI_DRIVER_CALL_FREQUENCY environment variable set to a positive value.
</dd>
  557. <dt>STARPU_SIMGRID_CUDA_MALLOC_COST</dt>
  558. <dd>
  559. \anchor STARPU_SIMGRID_CUDA_MALLOC_COST
  560. \addindex __env__STARPU_SIMGRID_CUDA_MALLOC_COST
  561. When set to 1 (which is the default), CUDA malloc costs are taken into account
  562. in simgrid mode.
  563. </dd>
  564. <dt>STARPU_SIMGRID_CUDA_QUEUE_COST</dt>
  565. <dd>
\anchor STARPU_SIMGRID_CUDA_QUEUE_COST
\addindex __env__STARPU_SIMGRID_CUDA_QUEUE_COST
When set to 1 (which is the default), CUDA task and transfer queueing costs are
taken into account in simgrid mode.
</dd>
<dt>STARPU_PCI_FLAT</dt>
<dd>
\anchor STARPU_PCI_FLAT
\addindex __env__STARPU_PCI_FLAT
When unset or set to 0, the platform file created for simgrid will
contain PCI bandwidths and routes.
</dd>
<dt>STARPU_SIMGRID_QUEUE_MALLOC_COST</dt>
<dd>
\anchor STARPU_SIMGRID_QUEUE_MALLOC_COST
\addindex __env__STARPU_SIMGRID_QUEUE_MALLOC_COST
When unset or set to 1, simulate within simgrid the GPU transfer queueing.
</dd>
<dt>STARPU_MALLOC_SIMULATION_FOLD</dt>
<dd>
\anchor STARPU_MALLOC_SIMULATION_FOLD
\addindex __env__STARPU_MALLOC_SIMULATION_FOLD
This defines the size of the file used for folding virtual allocation, in
MiB. The default is 1, thus allowing 64GiB virtual memory when Linux's
<c>sysctl vm.max_map_count</c> value is the default 65535.
</dd>
<dt>STARPU_SIMGRID_TASK_SUBMIT_COST</dt>
<dd>
\anchor STARPU_SIMGRID_TASK_SUBMIT_COST
\addindex __env__STARPU_SIMGRID_TASK_SUBMIT_COST
When set to 1 (which is the default), task submission costs are taken into
account in simgrid mode. This provides more accurate simgrid predictions,
especially for the beginning of the execution.
</dd>
<dt>STARPU_SIMGRID_FETCHING_INPUT_COST</dt>
<dd>
\anchor STARPU_SIMGRID_FETCHING_INPUT_COST
\addindex __env__STARPU_SIMGRID_FETCHING_INPUT_COST
When set to 1 (which is the default), fetching input costs are taken into
account in simgrid mode. This provides more accurate simgrid predictions,
especially regarding data transfers.
</dd>
<dt>STARPU_SIMGRID_SCHED_COST</dt>
<dd>
\anchor STARPU_SIMGRID_SCHED_COST
\addindex __env__STARPU_SIMGRID_SCHED_COST
When set to 1 (0 is the default), scheduling costs are taken into
account in simgrid mode. This provides more accurate simgrid predictions,
and allows studying scheduling overhead of the runtime system. However,
it also makes simulation non-deterministic.
</dd>
</dl>
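As a sketch (the values shown are only an example), these simulation costs can be toggled from the shell before running a simgrid build of an application:

```shell
# Account for scheduling costs in the simulation (more accurate, but
# makes the simulation non-deterministic), and ignore task submission costs.
export STARPU_SIMGRID_SCHED_COST=1
export STARPU_SIMGRID_TASK_SUBMIT_COST=0
```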
\section MiscellaneousAndDebug Miscellaneous And Debug
<dl>
<dt>STARPU_HOME</dt>
<dd>
\anchor STARPU_HOME
\addindex __env__STARPU_HOME
This specifies the main directory in which StarPU stores its
configuration files. The default is <c>$HOME</c> on Unix environments,
and <c>$USERPROFILE</c> on Windows environments.
</dd>
<dt>STARPU_PATH</dt>
<dd>
\anchor STARPU_PATH
\addindex __env__STARPU_PATH
Only used on Windows environments.
This specifies the main directory in which StarPU is installed
(\ref RunningABasicStarPUApplicationOnMicrosoft).
</dd>
<dt>STARPU_PERF_MODEL_DIR</dt>
<dd>
\anchor STARPU_PERF_MODEL_DIR
\addindex __env__STARPU_PERF_MODEL_DIR
This specifies the main directory in which StarPU stores its
performance model files. The default is <c>$STARPU_HOME/.starpu/sampling</c>.
</dd>
<dt>STARPU_PERF_MODEL_HOMOGENEOUS_CPU</dt>
<dd>
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_CPU
\addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_CPU
When this is set to 0, StarPU will assume that CPU devices do not have the same
performance, and will thus use a different performance model for each of them,
which makes kernel calibration much longer, since measurements have to be made
for each CPU core.
</dd>
<dt>STARPU_PERF_MODEL_HOMOGENEOUS_CUDA</dt>
<dd>
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_CUDA
\addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_CUDA
When this is set to 1, StarPU will assume that all CUDA devices have the same
performance, and will thus share performance models between them, which makes
kernel calibration much faster, since measurements only have to be made once
for all CUDA devices.
</dd>
<dt>STARPU_PERF_MODEL_HOMOGENEOUS_OPENCL</dt>
<dd>
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_OPENCL
\addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_OPENCL
When this is set to 1, StarPU will assume that all OpenCL devices have the same
performance, and will thus share performance models between them, which makes
kernel calibration much faster, since measurements only have to be made once
for all OpenCL devices.
</dd>
<dt>STARPU_PERF_MODEL_HOMOGENEOUS_MIC</dt>
<dd>
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_MIC
\addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_MIC
When this is set to 1, StarPU will assume that all MIC devices have the same
performance, and will thus share performance models between them, which makes
kernel calibration much faster, since measurements only have to be made once
for all MIC devices.
</dd>
<dt>STARPU_PERF_MODEL_HOMOGENEOUS_MPI_MS</dt>
<dd>
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_MPI_MS
\addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_MPI_MS
When this is set to 1, StarPU will assume that all MPI Slave devices have the same
performance, and will thus share performance models between them, which makes
kernel calibration much faster, since measurements only have to be made once
for all MPI Slaves.
</dd>
<dt>STARPU_PERF_MODEL_HOMOGENEOUS_SCC</dt>
<dd>
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_SCC
\addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_SCC
When this is set to 1, StarPU will assume that all SCC devices have the same
performance, and will thus share performance models between them, which makes
kernel calibration much faster, since measurements only have to be made once
for all SCC devices.
</dd>
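<dd>
The homogeneity variables above can be combined; as an illustrative sketch (the values are just an example), one might share a single model across identical CUDA devices while keeping one model per CPU core:

```shell
# Assume all CUDA devices of this machine are identical, so a single
# shared performance model is calibrated for all of them.
export STARPU_PERF_MODEL_HOMOGENEOUS_CUDA=1
# Conversely, calibrate one model per CPU core (slower calibration).
export STARPU_PERF_MODEL_HOMOGENEOUS_CPU=0
```
</dd>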
<dt>STARPU_HOSTNAME</dt>
<dd>
\anchor STARPU_HOSTNAME
\addindex __env__STARPU_HOSTNAME
When set, force the hostname to be used when dealing with performance model
files. Models are indexed by machine name. When running for example on
a homogeneous cluster, it is possible to share the models between
machines by setting <c>export STARPU_HOSTNAME=some_global_name</c>.
</dd>
<dt>STARPU_OPENCL_PROGRAM_DIR</dt>
<dd>
\anchor STARPU_OPENCL_PROGRAM_DIR
\addindex __env__STARPU_OPENCL_PROGRAM_DIR
This specifies the directory where the OpenCL codelet source files are
located. The function starpu_opencl_load_program_source() looks
for the codelet in the current directory, in the directory specified
by the environment variable \ref STARPU_OPENCL_PROGRAM_DIR, in the
directory <c>share/starpu/opencl</c> of the installation directory of
StarPU, and finally in the source directory of StarPU.
</dd>
<dt>STARPU_SILENT</dt>
<dd>
\anchor STARPU_SILENT
\addindex __env__STARPU_SILENT
This variable allows one to disable verbose mode at runtime when StarPU
has been configured with the option \ref enable-verbose "--enable-verbose". It also
disables the display of StarPU information and warning messages.
</dd>
<dt>STARPU_LOGFILENAME</dt>
<dd>
\anchor STARPU_LOGFILENAME
\addindex __env__STARPU_LOGFILENAME
This variable specifies the file in which the debugging output should be saved.
</dd>
<dt>STARPU_FXT_PREFIX</dt>
<dd>
\anchor STARPU_FXT_PREFIX
\addindex __env__STARPU_FXT_PREFIX
This variable specifies the directory in which to save the trace generated when FxT is enabled. It needs to have a trailing '/' character.
</dd>
<dt>STARPU_FXT_TRACE</dt>
<dd>
\anchor STARPU_FXT_TRACE
\addindex __env__STARPU_FXT_TRACE
This variable specifies whether to generate (1) or not (0) the FxT trace in <c>/tmp/prof_file_XXX_YYY</c>. The default is 1 (generate it).
</dd>
<dt>STARPU_LIMIT_CUDA_devid_MEM</dt>
<dd>
\anchor STARPU_LIMIT_CUDA_devid_MEM
\addindex __env__STARPU_LIMIT_CUDA_devid_MEM
This variable specifies the maximum number of megabytes that should be
available to the application on the CUDA device with the identifier
<c>devid</c>. This variable is intended to be used for experimental
purposes as it emulates devices that have a limited amount of memory.
When defined, the variable overwrites the value of the variable
\ref STARPU_LIMIT_CUDA_MEM.
</dd>
<dt>STARPU_LIMIT_CUDA_MEM</dt>
<dd>
\anchor STARPU_LIMIT_CUDA_MEM
\addindex __env__STARPU_LIMIT_CUDA_MEM
This variable specifies the maximum number of megabytes that should be
available to the application on each CUDA device. This variable is
intended to be used for experimental purposes as it emulates devices
that have a limited amount of memory.
</dd>
<dt>STARPU_LIMIT_OPENCL_devid_MEM</dt>
<dd>
\anchor STARPU_LIMIT_OPENCL_devid_MEM
\addindex __env__STARPU_LIMIT_OPENCL_devid_MEM
This variable specifies the maximum number of megabytes that should be
available to the application on the OpenCL device with the identifier
<c>devid</c>. This variable is intended to be used for experimental
purposes as it emulates devices that have a limited amount of memory.
When defined, the variable overwrites the value of the variable
\ref STARPU_LIMIT_OPENCL_MEM.
</dd>
<dt>STARPU_LIMIT_OPENCL_MEM</dt>
<dd>
\anchor STARPU_LIMIT_OPENCL_MEM
\addindex __env__STARPU_LIMIT_OPENCL_MEM
This variable specifies the maximum number of megabytes that should be
available to the application on each OpenCL device. This variable is
intended to be used for experimental purposes as it emulates devices
that have a limited amount of memory.
</dd>
<dt>STARPU_LIMIT_CPU_MEM</dt>
<dd>
\anchor STARPU_LIMIT_CPU_MEM
\addindex __env__STARPU_LIMIT_CPU_MEM
This variable specifies the maximum number of megabytes that should be
available to the application in the main CPU memory. Setting it enables allocation
cache in main memory. Setting it to zero lets StarPU overflow memory.
</dd>
<dt>STARPU_LIMIT_CPU_NUMA_devid_MEM</dt>
<dd>
\anchor STARPU_LIMIT_CPU_NUMA_devid_MEM
\addindex __env__STARPU_LIMIT_CPU_NUMA_devid_MEM
This variable specifies the maximum number of megabytes that should be
available to the application on the NUMA node with the OS identifier <c>devid</c>.
</dd>
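<dd>
As a sketch of how the memory-limiting variables above combine (all values and the device identifier 0 are hypothetical), one can emulate a machine with constrained memory:

```shell
# Emulate constrained memory for experiments.
export STARPU_LIMIT_CUDA_0_MEM=1024  # 1 GiB on CUDA device 0 (overrides STARPU_LIMIT_CUDA_MEM)
export STARPU_LIMIT_CUDA_MEM=2048    # 2 GiB on every other CUDA device
export STARPU_LIMIT_CPU_MEM=4096     # 4 GiB of main memory; enables the allocation cache
```
</dd>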
<dt>STARPU_MINIMUM_AVAILABLE_MEM</dt>
<dd>
\anchor STARPU_MINIMUM_AVAILABLE_MEM
\addindex __env__STARPU_MINIMUM_AVAILABLE_MEM
This specifies the minimum percentage of memory that should be available in GPUs
(or in main memory, when using out of core), below which a reclaiming pass is
performed. The default is 0%.
</dd>
<dt>STARPU_TARGET_AVAILABLE_MEM</dt>
<dd>
\anchor STARPU_TARGET_AVAILABLE_MEM
\addindex __env__STARPU_TARGET_AVAILABLE_MEM
This specifies the target percentage of memory that should be reached in
GPUs (or in main memory, when using out of core), when performing a periodic
reclaiming pass. The default is 0%.
</dd>
<dt>STARPU_MINIMUM_CLEAN_BUFFERS</dt>
<dd>
\anchor STARPU_MINIMUM_CLEAN_BUFFERS
\addindex __env__STARPU_MINIMUM_CLEAN_BUFFERS
This specifies the minimum percentage of buffers that should be clean in GPUs
(or in main memory, when using out of core), below which asynchronous writebacks will be
issued. The default is 5%.
</dd>
<dt>STARPU_TARGET_CLEAN_BUFFERS</dt>
<dd>
\anchor STARPU_TARGET_CLEAN_BUFFERS
\addindex __env__STARPU_TARGET_CLEAN_BUFFERS
This specifies the target percentage of clean buffers that should be reached in
GPUs (or in main memory, when using out of core), when performing an asynchronous
writeback pass. The default is 10%.
</dd>
<dt>STARPU_DIDUSE_BARRIER</dt>
<dd>
\anchor STARPU_DIDUSE_BARRIER
\addindex __env__STARPU_DIDUSE_BARRIER
When set to 1, StarPU will never evict a piece of data if it has not been used
by at least one task. This avoids odd behaviors under high memory pressure, but
can lead to deadlocks, so is to be considered experimental only.
</dd>
<dt>STARPU_DISK_SWAP</dt>
<dd>
\anchor STARPU_DISK_SWAP
\addindex __env__STARPU_DISK_SWAP
This specifies a path where StarPU can push data when the main memory is getting
full.
</dd>
<dt>STARPU_DISK_SWAP_BACKEND</dt>
<dd>
\anchor STARPU_DISK_SWAP_BACKEND
\addindex __env__STARPU_DISK_SWAP_BACKEND
This specifies the backend to be used by StarPU to push data when the main
memory is getting full. The default is unistd (i.e. using read/write functions),
other values are stdio (i.e. using fread/fwrite), unistd_o_direct (i.e. using
read/write with O_DIRECT), leveldb (i.e. using a leveldb database), and hdf5
(i.e. using the HDF5 library).
</dd>
<dt>STARPU_DISK_SWAP_SIZE</dt>
<dd>
\anchor STARPU_DISK_SWAP_SIZE
\addindex __env__STARPU_DISK_SWAP_SIZE
This specifies the maximum size in MiB to be used by StarPU to push data when the main
memory is getting full. The default is unlimited.
</dd>
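<dd>
For instance, a minimal out-of-core setup combining the three variables above could look as follows (the path and sizes are hypothetical):

```shell
# Spill data to disk when main memory fills up.
export STARPU_DISK_SWAP=/tmp/starpu-swap
export STARPU_DISK_SWAP_BACKEND=unistd_o_direct  # read/write with O_DIRECT
export STARPU_DISK_SWAP_SIZE=10240               # use at most 10 GiB on disk
```
</dd>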
<dt>STARPU_LIMIT_MAX_SUBMITTED_TASKS</dt>
<dd>
\anchor STARPU_LIMIT_MAX_SUBMITTED_TASKS
\addindex __env__STARPU_LIMIT_MAX_SUBMITTED_TASKS
This variable allows the user to control the task submission flow by specifying
to StarPU the maximum number of submitted tasks allowed at a given time, i.e. when
this limit is reached task submission becomes blocking until enough tasks have
completed, as specified by \ref STARPU_LIMIT_MIN_SUBMITTED_TASKS.
Setting it enables allocation cache buffer reuse in main memory.
</dd>
<dt>STARPU_LIMIT_MIN_SUBMITTED_TASKS</dt>
<dd>
\anchor STARPU_LIMIT_MIN_SUBMITTED_TASKS
\addindex __env__STARPU_LIMIT_MIN_SUBMITTED_TASKS
This variable allows the user to control the task submission flow by specifying
to StarPU the submitted task threshold below which task submission is unblocked. This
variable has to be used in conjunction with \ref STARPU_LIMIT_MAX_SUBMITTED_TASKS,
which puts the task submission thread to
sleep. Setting it enables allocation cache buffer reuse in main memory.
</dd>
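<dd>
The two variables above work as a pair; for example (thresholds are only illustrative):

```shell
# Block task submission once 10000 tasks are pending, and resume it
# once the pending count drops back to 9000.
export STARPU_LIMIT_MAX_SUBMITTED_TASKS=10000
export STARPU_LIMIT_MIN_SUBMITTED_TASKS=9000
```
</dd>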
<dt>STARPU_TRACE_BUFFER_SIZE</dt>
<dd>
\anchor STARPU_TRACE_BUFFER_SIZE
\addindex __env__STARPU_TRACE_BUFFER_SIZE
This sets the buffer size, in MiB, for recording trace events. Setting it to a large
size avoids pauses in the trace while it is being written to disk, at the cost of
additional memory, of course. The default value is 64.
</dd>
<dt>STARPU_GENERATE_TRACE</dt>
<dd>
\anchor STARPU_GENERATE_TRACE
\addindex __env__STARPU_GENERATE_TRACE
When set to <c>1</c>, this variable indicates that StarPU should automatically
generate a Paje trace when starpu_shutdown() is called.
</dd>
<dt>STARPU_GENERATE_TRACE_OPTIONS</dt>
<dd>
\anchor STARPU_GENERATE_TRACE_OPTIONS
\addindex __env__STARPU_GENERATE_TRACE_OPTIONS
When the variable \ref STARPU_GENERATE_TRACE is set to <c>1</c> to
generate a Paje trace, this variable can be set to specify options (see
<c>starpu_fxt_tool --help</c>).
</dd>
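<dd>
A minimal tracing setup, combining this with \ref STARPU_FXT_PREFIX (the directory is hypothetical; note the required trailing '/'):

```shell
# Store FxT traces in a dedicated directory, and convert them to a
# Paje trace automatically when starpu_shutdown() is called.
export STARPU_FXT_PREFIX=/tmp/starpu-traces/
export STARPU_GENERATE_TRACE=1
```
</dd>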
<dt>STARPU_ENABLE_STATS</dt>
<dd>
\anchor STARPU_ENABLE_STATS
\addindex __env__STARPU_ENABLE_STATS
When defined, enable gathering various data statistics (\ref DataStatistics).
</dd>
<dt>STARPU_MEMORY_STATS</dt>
<dd>
\anchor STARPU_MEMORY_STATS
\addindex __env__STARPU_MEMORY_STATS
When set to 0, disable the display of memory statistics on data which
have not been unregistered at the end of the execution (\ref MemoryFeedback).
</dd>
<dt>STARPU_MAX_MEMORY_USE</dt>
<dd>
\anchor STARPU_MAX_MEMORY_USE
\addindex __env__STARPU_MAX_MEMORY_USE
When set to 1, display at the end of the execution the maximum memory used by
StarPU for internal data structures during execution.
</dd>
<dt>STARPU_BUS_STATS</dt>
<dd>
\anchor STARPU_BUS_STATS
\addindex __env__STARPU_BUS_STATS
When defined, statistics about data transfers will be displayed when calling
starpu_shutdown() (\ref Profiling).
</dd>
<dt>STARPU_WORKER_STATS</dt>
<dd>
\anchor STARPU_WORKER_STATS
\addindex __env__STARPU_WORKER_STATS
When defined, statistics about the workers will be displayed when calling
starpu_shutdown() (\ref Profiling). When combined with the
environment variable \ref STARPU_PROFILING, it displays the energy
consumption (\ref Energy-basedScheduling).
</dd>
<dt>STARPU_STATS</dt>
<dd>
\anchor STARPU_STATS
\addindex __env__STARPU_STATS
When set to 0, data statistics will not be displayed at the
end of the execution of an application (\ref DataStatistics).
</dd>
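<dd>
For example, to get worker and bus statistics (including energy consumption when profiling is enabled) printed at shutdown:

```shell
# Print per-worker and per-bus statistics when starpu_shutdown() is called.
export STARPU_PROFILING=1
export STARPU_WORKER_STATS=1
export STARPU_BUS_STATS=1
```
</dd>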
<dt>STARPU_WATCHDOG_TIMEOUT</dt>
<dd>
\anchor STARPU_WATCHDOG_TIMEOUT
\addindex __env__STARPU_WATCHDOG_TIMEOUT
When set to a value other than 0, makes StarPU print an error
message whenever StarPU does not terminate any task for the given time (in µs),
while letting the application continue normally. Should
be used in combination with \ref STARPU_WATCHDOG_CRASH
(see \ref DetectionStuckConditions).
</dd>
<dt>STARPU_WATCHDOG_CRASH</dt>
<dd>
\anchor STARPU_WATCHDOG_CRASH
\addindex __env__STARPU_WATCHDOG_CRASH
When set to a value other than 0, it triggers a crash when the watchdog
timeout is reached, thus allowing the situation to be caught in gdb, etc.
(see \ref DetectionStuckConditions).
</dd>
<dt>STARPU_WATCHDOG_DELAY</dt>
<dd>
\anchor STARPU_WATCHDOG_DELAY
\addindex __env__STARPU_WATCHDOG_DELAY
This delays the activation of the watchdog by the given time (in µs). This can
be convenient for letting the application initialize data etc. before starting
to look for idle time.
</dd>
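<dd>
A possible watchdog configuration combining the three variables above (the durations are only an example):

```shell
# Report if no task completes within 1 second, crash so the state can be
# inspected in a debugger, and wait 10 seconds before arming the watchdog
# to let the application initialize its data.
export STARPU_WATCHDOG_TIMEOUT=1000000  # in µs
export STARPU_WATCHDOG_CRASH=1
export STARPU_WATCHDOG_DELAY=10000000   # in µs
```
</dd>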
<dt>STARPU_TASK_BREAK_ON_PUSH</dt>
<dd>
\anchor STARPU_TASK_BREAK_ON_PUSH
\addindex __env__STARPU_TASK_BREAK_ON_PUSH
When this variable contains a job id, StarPU will raise SIGTRAP when the task
with that job id is being pushed to the scheduler, which will be nicely caught by debuggers
(see \ref DebuggingScheduling).
</dd>
<dt>STARPU_TASK_BREAK_ON_SCHED</dt>
<dd>
\anchor STARPU_TASK_BREAK_ON_SCHED
\addindex __env__STARPU_TASK_BREAK_ON_SCHED
When this variable contains a job id, StarPU will raise SIGTRAP when the task
with that job id is being scheduled by the scheduler (at a scheduler-specific
point), which will be nicely caught by debuggers.
This only works for schedulers which have such a scheduling point defined
(see \ref DebuggingScheduling).
</dd>
<dt>STARPU_TASK_BREAK_ON_POP</dt>
<dd>
\anchor STARPU_TASK_BREAK_ON_POP
\addindex __env__STARPU_TASK_BREAK_ON_POP
When this variable contains a job id, StarPU will raise SIGTRAP when the task
with that job id is being popped from the scheduler, which will be nicely caught by debuggers
(see \ref DebuggingScheduling).
</dd>
<dt>STARPU_TASK_BREAK_ON_EXEC</dt>
<dd>
\anchor STARPU_TASK_BREAK_ON_EXEC
\addindex __env__STARPU_TASK_BREAK_ON_EXEC
When this variable contains a job id, StarPU will raise SIGTRAP when the task
with that job id is being executed, which will be nicely caught by debuggers
(see \ref DebuggingScheduling).
</dd>
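<dd>
A hypothetical debugging session (the job id and application name are just examples):

```shell
# Raise SIGTRAP when job 42 starts executing, so that a debugger attached
# to the application stops right before the corresponding kernel runs.
export STARPU_TASK_BREAK_ON_EXEC=42
# gdb --args ./my_starpu_app
```
</dd>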
<dt>STARPU_DISABLE_KERNELS</dt>
<dd>
\anchor STARPU_DISABLE_KERNELS
\addindex __env__STARPU_DISABLE_KERNELS
When set to a value other than 1, it disables actually calling the kernel
functions, thus allowing one to quickly check that the task scheme is working
properly, without performing the actual application-provided computation.
</dd>
<dt>STARPU_HISTORY_MAX_ERROR</dt>
<dd>
\anchor STARPU_HISTORY_MAX_ERROR
\addindex __env__STARPU_HISTORY_MAX_ERROR
History-based performance models will drop measurements which are really far
from the measured average. This specifies the allowed variation. The default is
50 (%), i.e. the measurement is allowed to be 1.5 times faster or 1.5 times slower
than the average.
</dd>
<dt>STARPU_RAND_SEED</dt>
<dd>
\anchor STARPU_RAND_SEED
\addindex __env__STARPU_RAND_SEED
The random scheduler and some examples use random numbers for their own
working. Depending on the example, the seed is by default either always 0 or
the current time() (unless simgrid mode is enabled, in which case it is always
0). \ref STARPU_RAND_SEED allows setting the seed to a specific value.
</dd>
<dt>STARPU_IDLE_TIME</dt>
<dd>
\anchor STARPU_IDLE_TIME
\addindex __env__STARPU_IDLE_TIME
When set to a valid filename, a corresponding file
will be created when shutting down StarPU. The file will contain the
sum of all the workers' idle time.
</dd>
<dt>STARPU_GLOBAL_ARBITER</dt>
<dd>
\anchor STARPU_GLOBAL_ARBITER
\addindex __env__STARPU_GLOBAL_ARBITER
When set to a positive value, StarPU will create an arbiter, which
implements an advanced but centralized management of concurrent data
accesses (see \ref ConcurrentDataAccess).
</dd>
<dt>STARPU_USE_NUMA</dt>
<dd>
\anchor STARPU_USE_NUMA
\addindex __env__STARPU_USE_NUMA
When defined, NUMA nodes are taken into account by StarPU. Otherwise, memory
is considered as only one node. This is experimental for now.
When enabled, STARPU_MAIN_MEMORY is a pointer to the NUMA node associated to the
first CPU worker if it exists, and otherwise to the NUMA node associated to the
first discovered GPU. If StarPU doesn't find any NUMA node after these steps,
STARPU_MAIN_MEMORY is the first NUMA node discovered by StarPU.
</dd>
<dt>STARPU_IDLE_FILE</dt>
<dd>
\anchor STARPU_IDLE_FILE
\addindex __env__STARPU_IDLE_FILE
If the environment variable STARPU_IDLE_FILE is defined, a file named after its contents will be created at the end of the execution.
The file will contain the sum of the idle times of all the workers.
</dd>
</dl>
\section ConfiguringTheHypervisor Configuring The Hypervisor
<dl>
<dt>SC_HYPERVISOR_POLICY</dt>
<dd>
\anchor SC_HYPERVISOR_POLICY
\addindex __env__SC_HYPERVISOR_POLICY
Choose between the different resizing policies proposed by StarPU for the hypervisor:
idle, app_driven, feft_lp, teft_lp, ispeed_lp, throughput_lp, etc.
Use <c>SC_HYPERVISOR_POLICY=help</c> to get the list of available policies for the hypervisor.
</dd>
<dt>SC_HYPERVISOR_TRIGGER_RESIZE</dt>
<dd>
\anchor SC_HYPERVISOR_TRIGGER_RESIZE
\addindex __env__SC_HYPERVISOR_TRIGGER_RESIZE
Choose how the hypervisor should be triggered: <c>speed</c> if the resizing algorithm should
be called whenever the speed of the context does not correspond to an optimal precomputed value,
<c>idle</c> if the resizing algorithm should be called whenever the workers are idle for a period
longer than the value indicated when configuring the hypervisor.
</dd>
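<dd>
For instance, a sketch of a hypervisor setup combining the two variables above (the policy choice is only an example):

```shell
# Use the feft_lp resizing policy, and trigger resizing whenever
# workers stay idle for too long.
export SC_HYPERVISOR_POLICY=feft_lp
export SC_HYPERVISOR_TRIGGER_RESIZE=idle
```
</dd>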
<dt>SC_HYPERVISOR_START_RESIZE</dt>
<dd>
\anchor SC_HYPERVISOR_START_RESIZE
\addindex __env__SC_HYPERVISOR_START_RESIZE
Indicate the moment when the resizing should become available. The value corresponds to the percentage
of the total execution time of the application. The default value is the resizing frame.
</dd>
<dt>SC_HYPERVISOR_MAX_SPEED_GAP</dt>
<dd>
\anchor SC_HYPERVISOR_MAX_SPEED_GAP
\addindex __env__SC_HYPERVISOR_MAX_SPEED_GAP
Indicate the ratio of speed difference between contexts that should trigger the hypervisor.
This situation may occur only when a theoretical speed could not be computed and the hypervisor
has no value to compare the speed to. Otherwise the resizing of a context is not influenced by the
speed of the other contexts, but only by the value that a context should have.
</dd>
<dt>SC_HYPERVISOR_STOP_PRINT</dt>
<dd>
\anchor SC_HYPERVISOR_STOP_PRINT
\addindex __env__SC_HYPERVISOR_STOP_PRINT
By default the speed of the workers is printed during the execution
of the application. If this environment variable is set to 1, this printing
is disabled.
</dd>
<dt>SC_HYPERVISOR_LAZY_RESIZE</dt>
<dd>
\anchor SC_HYPERVISOR_LAZY_RESIZE
\addindex __env__SC_HYPERVISOR_LAZY_RESIZE
By default the hypervisor resizes the contexts in a lazy way, that is workers are first added to a new context
before being removed from the previous one. Once these workers are clearly taken into account
in the new context (a task was popped there), they are removed from the previous one. However, if the application
wants the change in the distribution of workers to take effect right away, this variable should be set to 0.
</dd>
<dt>SC_HYPERVISOR_SAMPLE_CRITERIA</dt>
<dd>
\anchor SC_HYPERVISOR_SAMPLE_CRITERIA
\addindex __env__SC_HYPERVISOR_SAMPLE_CRITERIA
By default the hypervisor uses a sample of flops when computing the speed of the contexts and of the workers.
If this variable is set to <c>time</c>, the hypervisor uses a sample of time (10% of an approximation of the total
execution time of the application).
</dd>
</dl>
*/