
12345678910111213141516171819202122232425262728293031323334353637383940414243444546474849505152535455565758596061626364656667686970717273747576777879808182838485868788899091929394959697989910010110210310410510610710810911011111211311411511611711811912012112212312412512612712812913013113213313413513613713813914014114214314414514614714814915015115215315415515615715815916016116216316416516616716816917017117217317417517617717817918018118218318418518618718818919019119219319419519619719819920020120220320420520620720820921021121221321421521621721821922022122222322422522622722822923023123223323423523623723823924024124224324424524624724824925025125225325425525625725825926026126226326426526626726826927027127227327427527627727827928028128228328428528628728828929029129229329429529629729829930030130230330430530630730830931031131231331431531631731831932032132232332432532632732832933033133233333433533633733833934034134234334434534634734834935035135235335435535635735835936036136236336436536636736836937037137237337437537637737837938038138238338438538638738838939039139239339439539639739839940040140240340440540640740840941041141241341441541641741841942042142242342442542642742842943043143243343443543643743843944044144244344444544644744844945045145245345445545645745845946046146246346446546646746846947047147247347447547647747847948048148248348448548648748848949049149249349449549649749849950050150250350450550650750850951051151251351451551651751851952052152252352452552652752852953053153253353453553653753853954054154254354454554654754854955055155255355455555655755855956056156256356456556656756856957057157257357457557657757857958058158258358458558658758858959059159259359459559659759859960060160260360460560660760860961061161261361461561661761861962062162262362462562662762862963063163263363463563663763863964064164264364464564664764864965065165265365465565665765865966066166266366466566666766866967067167267367467567667767867968068168268368468568668768868969069169269369469569669769869970070170270
370470570670770870971071171271371471571671771871972072172272372472572672772872973073173273373473573673773873974074174274374474574674774874975075175275375475575675775875976076176276376476576676776876977077177277377477577677777877978078178278378478578678778878979079179279379479579679779879980080180280380480580680780880981081181281381481581681781881982082182282382482582682782882983083183283383483583683783883984084184284384484584684784884985085185285385485585685785885986086186286386486586686786886987087187287387487587687787887988088188288388488588688788888989089189289389489589689789889990090190290390490590690790890991091191291391491591691791891992092192292392492592692792892993093193293393493593693793893994094194294394494594694794894995095195295395495595695795895996096196296396496596696796896997097197297397497597697797897998098198298398498598698798898999099199299399499599699799899910001001100210031004100510061007100810091010101110121013101410151016101710181019102010211022102310241025102610271028102910301031103210331034103510361037103810391040104110421043104410451046104710481049105010511052105310541055105610571058105910601061106210631064106510661067106810691070107110721073107410751076107710781079108010811082108310841085108610871088108910901091109210931094109510961097109810991100110111021103110411051106110711081109111011111112111311141115111611171118111911201121112211231124112511261127112811291130113111321133113411351136113711381139
/*
 * This file is part of the StarPU Handbook.
 * Copyright (C) 2009--2011 Universit@'e de Bordeaux
 * Copyright (C) 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017 CNRS
 * Copyright (C) 2011, 2012, 2016, 2017 INRIA
 * Copyright (C) 2016 Uppsala University
 * See the file version.doxy for copying conditions.
 */

/*! \page ExecutionConfigurationThroughEnvironmentVariables Execution Configuration Through Environment Variables

The behavior of the StarPU library and tools may be tuned thanks to
the following environment variables.

\section ConfiguringWorkers Configuring Workers

<dl>

<dt>STARPU_NCPU</dt>
<dd>
\anchor STARPU_NCPU
\addindex __env__STARPU_NCPU
Specify the number of CPU workers (thus not including workers
dedicated to control accelerators). Note that by default, StarPU will
not allocate more CPU workers than there are physical CPUs, and that
some CPUs are used to control the accelerators.
</dd>

<dt>STARPU_NCPUS</dt>
<dd>
\anchor STARPU_NCPUS
\addindex __env__STARPU_NCPUS
This variable is deprecated. You should use \ref STARPU_NCPU.
</dd>

<dt>STARPU_NCUDA</dt>
<dd>
\anchor STARPU_NCUDA
\addindex __env__STARPU_NCUDA
Specify the number of CUDA devices that StarPU can use. If
\ref STARPU_NCUDA is lower than the number of physical devices, it is
possible to select which CUDA devices should be used by means of the
environment variable \ref STARPU_WORKERS_CUDAID. By default, StarPU will
create as many CUDA workers as there are CUDA devices.
</dd>

<dt>STARPU_NWORKER_PER_CUDA</dt>
<dd>
\anchor STARPU_NWORKER_PER_CUDA
\addindex __env__STARPU_NWORKER_PER_CUDA
Specify the number of workers per CUDA device, and thus the number of kernels
which will be concurrently running on the devices. The default value is 1.
</dd>

<dt>STARPU_CUDA_THREAD_PER_WORKER</dt>
<dd>
\anchor STARPU_CUDA_THREAD_PER_WORKER
\addindex __env__STARPU_CUDA_THREAD_PER_WORKER
Specify whether the CUDA driver should provide one thread per stream (0), or
a single thread dealing with all the streams (1). The default value is 0.
</dd>

<dt>STARPU_CUDA_PIPELINE</dt>
<dd>
\anchor STARPU_CUDA_PIPELINE
\addindex __env__STARPU_CUDA_PIPELINE
Specify how many asynchronous tasks are submitted in advance on CUDA
devices. This for instance permits overlapping task management with the
execution of previous tasks, and also allows concurrent execution on Fermi
cards, which otherwise bring spurious synchronizations. The default is 2.
Setting the value to 0 forces a synchronous execution of all tasks.
</dd>

<dt>STARPU_NOPENCL</dt>
<dd>
\anchor STARPU_NOPENCL
\addindex __env__STARPU_NOPENCL
OpenCL equivalent of the environment variable \ref STARPU_NCUDA.
</dd>

<dt>STARPU_OPENCL_PIPELINE</dt>
<dd>
\anchor STARPU_OPENCL_PIPELINE
\addindex __env__STARPU_OPENCL_PIPELINE
Specify how many asynchronous tasks are submitted in advance on OpenCL
devices. This for instance permits overlapping task management with the
execution of previous tasks. The default is 2. Setting the value to 0 forces
a synchronous execution of all tasks.
</dd>
<dt>STARPU_OPENCL_ON_CPUS</dt>
<dd>
\anchor STARPU_OPENCL_ON_CPUS
\addindex __env__STARPU_OPENCL_ON_CPUS
By default, the OpenCL driver only enables GPU and accelerator
devices. By setting the environment variable \ref STARPU_OPENCL_ON_CPUS
to 1, the OpenCL driver will also enable CPU devices.
</dd>

<dt>STARPU_OPENCL_ONLY_ON_CPUS</dt>
<dd>
\anchor STARPU_OPENCL_ONLY_ON_CPUS
\addindex __env__STARPU_OPENCL_ONLY_ON_CPUS
By default, the OpenCL driver enables GPU and accelerator
devices. By setting the environment variable \ref STARPU_OPENCL_ONLY_ON_CPUS
to 1, the OpenCL driver will ONLY enable CPU devices.
</dd>

<dt>STARPU_NMIC</dt>
<dd>
\anchor STARPU_NMIC
\addindex __env__STARPU_NMIC
MIC equivalent of the environment variable \ref STARPU_NCUDA, i.e. the number of
MIC devices to use.
</dd>

<dt>STARPU_NMICTHREADS</dt>
<dd>
\anchor STARPU_NMICTHREADS
\addindex __env__STARPU_NMICTHREADS
Number of threads to use on the MIC devices.
</dd>

<dt>STARPU_NMPI_MS</dt>
<dd>
\anchor STARPU_NMPI_MS
\addindex __env__STARPU_NMPI_MS
MPI Master Slave equivalent of the environment variable \ref STARPU_NCUDA, i.e. the number of
MPI Master Slave devices to use.
</dd>

<dt>STARPU_NMPIMSTHREADS</dt>
<dd>
\anchor STARPU_NMPIMSTHREADS
\addindex __env__STARPU_NMPIMSTHREADS
Number of threads to use on the MPI Slave devices.
</dd>

<dt>STARPU_MPI_MASTER_NODE</dt>
<dd>
\anchor STARPU_MPI_MASTER_NODE
\addindex __env__STARPU_MPI_MASTER_NODE
This variable allows specifying which MPI node (by its MPI rank) will be the master.
</dd>

<dt>STARPU_NSCC</dt>
<dd>
\anchor STARPU_NSCC
\addindex __env__STARPU_NSCC
SCC equivalent of the environment variable \ref STARPU_NCUDA.
</dd>

<dt>STARPU_WORKERS_NOBIND</dt>
<dd>
\anchor STARPU_WORKERS_NOBIND
\addindex __env__STARPU_WORKERS_NOBIND
Setting it to non-zero will prevent StarPU from binding its threads to
CPUs. This is for instance useful when running the testsuite in parallel.
</dd>

<dt>STARPU_WORKERS_CPUID</dt>
<dd>
\anchor STARPU_WORKERS_CPUID
\addindex __env__STARPU_WORKERS_CPUID
Passing an array of integers in \ref STARPU_WORKERS_CPUID
specifies on which logical CPU the different workers should be
bound. For instance, if <c>STARPU_WORKERS_CPUID = "0 1 4 5"</c>, the first
worker will be bound to logical CPU #0, the second CPU worker will be bound to
logical CPU #1 and so on. Note that the logical ordering of the CPUs is either
determined by the OS, or provided by the library <c>hwloc</c> in case it is
available. Ranges can be provided: for instance, <c>STARPU_WORKERS_CPUID = "1-3
5"</c> will bind the first three workers on logical CPUs #1, #2, and #3, and the
fourth worker on logical CPU #5. Unbound ranges can also be provided:
<c>STARPU_WORKERS_CPUID = "1-"</c> will bind the workers starting from logical
CPU #1 up to the last CPU.

Note that the first workers correspond to the CUDA workers, then come the
OpenCL workers, and finally the CPU workers. For example if
we have <c>STARPU_NCUDA=1</c>, <c>STARPU_NOPENCL=1</c>, <c>STARPU_NCPU=2</c>
and <c>STARPU_WORKERS_CPUID = "0 2 1 3"</c>, the CUDA device will be controlled
by logical CPU #0, the OpenCL device will be controlled by logical CPU #2, and
the logical CPUs #1 and #3 will be used by the CPU workers.

If the number of workers is larger than the array given in
\ref STARPU_WORKERS_CPUID, the workers are bound to the logical CPUs in a
round-robin fashion: if <c>STARPU_WORKERS_CPUID = "0 1"</c>, the first
and the third (resp. second and fourth) workers will be put on CPU #0
(resp. CPU #1).

This variable is ignored if the field
starpu_conf::use_explicit_workers_bindid passed to starpu_init() is
set.
</dd>
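The binding rules above can be sketched with a small shell setup; the application name below is hypothetical, and the layout matches the worker ordering described above (CUDA first, then OpenCL, then CPU workers):

```shell
# Hypothetical layout: 1 CUDA worker, 1 OpenCL worker, 2 CPU workers.
# Per the worker ordering, the CUDA worker is bound to logical CPU 0,
# the OpenCL worker to CPU 2, and the CPU workers to CPUs 1 and 3.
export STARPU_NCUDA=1
export STARPU_NOPENCL=1
export STARPU_NCPU=2
export STARPU_WORKERS_CPUID="0 2 1 3"
# ./my_starpu_app   # hypothetical application binary
```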
<dt>STARPU_WORKERS_CUDAID</dt>
<dd>
\anchor STARPU_WORKERS_CUDAID
\addindex __env__STARPU_WORKERS_CUDAID
Similarly to the \ref STARPU_WORKERS_CPUID environment variable, it is
possible to select which CUDA devices should be used by StarPU. On a machine
equipped with 4 GPUs, setting <c>STARPU_WORKERS_CUDAID = "1 3"</c> and
<c>STARPU_NCUDA=2</c> specifies that 2 CUDA workers should be created, and that
they should use CUDA devices #1 and #3 (the logical ordering of the devices is
the one reported by CUDA).

This variable is ignored if the field
starpu_conf::use_explicit_workers_cuda_gpuid passed to starpu_init()
is set.
</dd>
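As a sketch of the 4-GPU scenario above (the application binary name is hypothetical):

```shell
# Hypothetical machine with 4 GPUs: create 2 CUDA workers and make them
# drive CUDA devices #1 and #3 only.
export STARPU_NCUDA=2
export STARPU_WORKERS_CUDAID="1 3"
# ./my_starpu_app   # hypothetical application binary
```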
<dt>STARPU_WORKERS_OPENCLID</dt>
<dd>
\anchor STARPU_WORKERS_OPENCLID
\addindex __env__STARPU_WORKERS_OPENCLID
OpenCL equivalent of the \ref STARPU_WORKERS_CUDAID environment variable.

This variable is ignored if the field
starpu_conf::use_explicit_workers_opencl_gpuid passed to starpu_init()
is set.
</dd>

<dt>STARPU_WORKERS_MICID</dt>
<dd>
\anchor STARPU_WORKERS_MICID
\addindex __env__STARPU_WORKERS_MICID
MIC equivalent of the \ref STARPU_WORKERS_CUDAID environment variable.

This variable is ignored if the field
starpu_conf::use_explicit_workers_mic_deviceid passed to starpu_init()
is set.
</dd>

<dt>STARPU_WORKERS_SCCID</dt>
<dd>
\anchor STARPU_WORKERS_SCCID
\addindex __env__STARPU_WORKERS_SCCID
SCC equivalent of the \ref STARPU_WORKERS_CUDAID environment variable.

This variable is ignored if the field
starpu_conf::use_explicit_workers_scc_deviceid passed to starpu_init()
is set.
</dd>

<dt>STARPU_WORKER_TREE</dt>
<dd>
\anchor STARPU_WORKER_TREE
\addindex __env__STARPU_WORKER_TREE
Define to 1 to enable the tree iterator in schedulers.
</dd>

<dt>STARPU_SINGLE_COMBINED_WORKER</dt>
<dd>
\anchor STARPU_SINGLE_COMBINED_WORKER
\addindex __env__STARPU_SINGLE_COMBINED_WORKER
If set, StarPU will create several workers which won't be able to work
concurrently. It will by default create combined workers whose sizes range
from 1 to the total number of CPU workers in the system.
\ref STARPU_MIN_WORKERSIZE and \ref STARPU_MAX_WORKERSIZE can be used to
change this default.
</dd>

<dt>STARPU_MIN_WORKERSIZE</dt>
<dd>
\anchor STARPU_MIN_WORKERSIZE
\addindex __env__STARPU_MIN_WORKERSIZE
\ref STARPU_MIN_WORKERSIZE
permits specifying the minimum size of the combined workers (instead of the default 2).
</dd>

<dt>STARPU_MAX_WORKERSIZE</dt>
<dd>
\anchor STARPU_MAX_WORKERSIZE
\addindex __env__STARPU_MAX_WORKERSIZE
\ref STARPU_MAX_WORKERSIZE
permits specifying the maximum size of the combined workers (instead of the
number of CPU workers in the system).
</dd>

<dt>STARPU_SYNTHESIZE_ARITY_COMBINED_WORKER</dt>
<dd>
\anchor STARPU_SYNTHESIZE_ARITY_COMBINED_WORKER
\addindex __env__STARPU_SYNTHESIZE_ARITY_COMBINED_WORKER
Let the user decide how many elements are allowed between combined workers
created from hwloc information. For instance, in the case of sockets with 6
cores without shared L2 caches, if \ref STARPU_SYNTHESIZE_ARITY_COMBINED_WORKER is
set to 6, no combined worker will be synthesized beyond one for the socket
and one per core. If it is set to 3, 3 intermediate combined workers will be
synthesized, to divide the socket cores into 3 chunks of 2 cores. If it is set
to 2, 2 intermediate combined workers will be synthesized, to divide the socket
cores into 2 chunks of 3 cores, and then 3 additional combined workers will be
synthesized, to divide the former synthesized workers into a bunch of 2 cores,
and the remaining core (for which no combined worker is synthesized since there
is already a normal worker for it).

The default, 2, thus makes StarPU tend to build binary trees of combined
workers.
</dd>
<dt>STARPU_DISABLE_ASYNCHRONOUS_COPY</dt>
<dd>
\anchor STARPU_DISABLE_ASYNCHRONOUS_COPY
\addindex __env__STARPU_DISABLE_ASYNCHRONOUS_COPY
Disable asynchronous copies between CPU and GPU devices.
The AMD implementation of OpenCL is known to
fail when copying data asynchronously. When using this implementation,
it is therefore necessary to disable asynchronous data transfers.
</dd>

<dt>STARPU_DISABLE_ASYNCHRONOUS_CUDA_COPY</dt>
<dd>
\anchor STARPU_DISABLE_ASYNCHRONOUS_CUDA_COPY
\addindex __env__STARPU_DISABLE_ASYNCHRONOUS_CUDA_COPY
Disable asynchronous copies between CPU and CUDA devices.
</dd>

<dt>STARPU_DISABLE_ASYNCHRONOUS_OPENCL_COPY</dt>
<dd>
\anchor STARPU_DISABLE_ASYNCHRONOUS_OPENCL_COPY
\addindex __env__STARPU_DISABLE_ASYNCHRONOUS_OPENCL_COPY
Disable asynchronous copies between CPU and OpenCL devices.
The AMD implementation of OpenCL is known to
fail when copying data asynchronously. When using this implementation,
it is therefore necessary to disable asynchronous data transfers.
</dd>

<dt>STARPU_DISABLE_ASYNCHRONOUS_MIC_COPY</dt>
<dd>
\anchor STARPU_DISABLE_ASYNCHRONOUS_MIC_COPY
\addindex __env__STARPU_DISABLE_ASYNCHRONOUS_MIC_COPY
Disable asynchronous copies between CPU and MIC devices.
</dd>

<dt>STARPU_DISABLE_ASYNCHRONOUS_MPI_MS_COPY</dt>
<dd>
\anchor STARPU_DISABLE_ASYNCHRONOUS_MPI_MS_COPY
\addindex __env__STARPU_DISABLE_ASYNCHRONOUS_MPI_MS_COPY
Disable asynchronous copies between CPU and MPI Slave devices.
</dd>

<dt>STARPU_ENABLE_CUDA_GPU_GPU_DIRECT</dt>
<dd>
\anchor STARPU_ENABLE_CUDA_GPU_GPU_DIRECT
\addindex __env__STARPU_ENABLE_CUDA_GPU_GPU_DIRECT
Enable (1) or disable (0) direct CUDA transfers from GPU to GPU, without
copying through RAM. The default is enabled. This permits testing the
performance effect of GPU-Direct.
</dd>

<dt>STARPU_DISABLE_PINNING</dt>
<dd>
\anchor STARPU_DISABLE_PINNING
\addindex __env__STARPU_DISABLE_PINNING
Disable (1) or enable (0) the pinning of host memory allocated through
starpu_malloc(), starpu_memory_pin() and friends. The default is enabled.
This permits testing the performance effect of memory pinning.
</dd>

<dt>STARPU_MIC_SINK_PROGRAM_NAME</dt>
<dd>
\anchor STARPU_MIC_SINK_PROGRAM_NAME
\addindex __env__STARPU_MIC_SINK_PROGRAM_NAME
todo
</dd>

<dt>STARPU_MIC_SINK_PROGRAM_PATH</dt>
<dd>
\anchor STARPU_MIC_SINK_PROGRAM_PATH
\addindex __env__STARPU_MIC_SINK_PROGRAM_PATH
todo
</dd>

<dt>STARPU_MIC_PROGRAM_PATH</dt>
<dd>
\anchor STARPU_MIC_PROGRAM_PATH
\addindex __env__STARPU_MIC_PROGRAM_PATH
todo
</dd>

</dl>
\section ConfiguringTheSchedulingEngine Configuring The Scheduling Engine

<dl>

<dt>STARPU_SCHED</dt>
<dd>
\anchor STARPU_SCHED
\addindex __env__STARPU_SCHED
Choose between the different scheduling policies proposed by StarPU: work
stealing, random, greedy, with performance models, etc.

Use <c>STARPU_SCHED=help</c> to get the list of available schedulers.
</dd>
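A minimal sketch of selecting a scheduler (the <c>dmda</c> name is one of StarPU's standard performance-model-based policies, but check <c>STARPU_SCHED=help</c> on your installation; the application binary name is hypothetical):

```shell
# List the schedulers available in this StarPU build:
# STARPU_SCHED=help ./my_starpu_app   # hypothetical binary

# Select the data-aware scheduler that uses performance models:
export STARPU_SCHED=dmda
```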
<dt>STARPU_MIN_PRIO</dt>
<dd>
\anchor STARPU_MIN_PRIO_env
\addindex __env__STARPU_MIN_PRIO
Set the minimum priority used by priority-aware schedulers.
</dd>

<dt>STARPU_MAX_PRIO</dt>
<dd>
\anchor STARPU_MAX_PRIO_env
\addindex __env__STARPU_MAX_PRIO
Set the maximum priority used by priority-aware schedulers.
</dd>

<dt>STARPU_CALIBRATE</dt>
<dd>
\anchor STARPU_CALIBRATE
\addindex __env__STARPU_CALIBRATE
If this variable is set to 1, the performance models are calibrated during
the execution. If it is set to 2, the previous values are dropped to restart
calibration from scratch. Setting this variable to 0 disables calibration;
this is the default behaviour.

Note: this currently only applies to the <c>dm</c> and <c>dmda</c> scheduling policies.
</dd>
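A typical calibration workflow can be sketched as follows (the application binary name is hypothetical):

```shell
# Drop any previous measurements and recalibrate from scratch,
# using a performance-model-based scheduler:
export STARPU_SCHED=dmda
export STARPU_CALIBRATE=2
# ./my_starpu_app   # hypothetical binary; run it a few times

# Later runs can keep refining the models instead of restarting:
export STARPU_CALIBRATE=1
```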
<dt>STARPU_CALIBRATE_MINIMUM</dt>
<dd>
\anchor STARPU_CALIBRATE_MINIMUM
\addindex __env__STARPU_CALIBRATE_MINIMUM
This defines the minimum number of calibration measurements that will be made
before considering that the performance model is calibrated. The default value is 10.
</dd>

<dt>STARPU_BUS_CALIBRATE</dt>
<dd>
\anchor STARPU_BUS_CALIBRATE
\addindex __env__STARPU_BUS_CALIBRATE
If this variable is set to 1, the bus is recalibrated during initialization.
</dd>

<dt>STARPU_PREFETCH</dt>
<dd>
\anchor STARPU_PREFETCH
\addindex __env__STARPU_PREFETCH
This variable indicates whether data prefetching should be enabled (0 means
that it is disabled). If prefetching is enabled, when a task is scheduled to be
executed e.g. on a GPU, StarPU will request an asynchronous transfer in
advance, so that data is already present on the GPU when the task starts. As a
result, computation and data transfers are overlapped.
Note that prefetching is enabled by default in StarPU.
</dd>

<dt>STARPU_SCHED_ALPHA</dt>
<dd>
\anchor STARPU_SCHED_ALPHA
\addindex __env__STARPU_SCHED_ALPHA
To estimate the cost of a task, StarPU takes into account the estimated
computation time (obtained thanks to performance models). The alpha factor is
the coefficient to be applied to it before adding it to the communication part.
</dd>

<dt>STARPU_SCHED_BETA</dt>
<dd>
\anchor STARPU_SCHED_BETA
\addindex __env__STARPU_SCHED_BETA
To estimate the cost of a task, StarPU takes into account the estimated
data transfer time (obtained thanks to performance models). The beta factor is
the coefficient to be applied to it before adding it to the computation part.
</dd>

<dt>STARPU_SCHED_GAMMA</dt>
<dd>
\anchor STARPU_SCHED_GAMMA
\addindex __env__STARPU_SCHED_GAMMA
Define the execution time penalty of a joule (\ref Energy-basedScheduling).
</dd>

<dt>STARPU_IDLE_POWER</dt>
<dd>
\anchor STARPU_IDLE_POWER
\addindex __env__STARPU_IDLE_POWER
Define the idle power of the machine (\ref Energy-basedScheduling).
</dd>

<dt>STARPU_PROFILING</dt>
<dd>
\anchor STARPU_PROFILING
\addindex __env__STARPU_PROFILING
Enable on-line performance monitoring (\ref EnablingOn-linePerformanceMonitoring).
</dd>

</dl>
\section Extensions Extensions

<dl>

<dt>SOCL_OCL_LIB_OPENCL</dt>
<dd>
\anchor SOCL_OCL_LIB_OPENCL
\addindex __env__SOCL_OCL_LIB_OPENCL
The SOCL test suite is only run when the environment variable
\ref SOCL_OCL_LIB_OPENCL is defined. It should contain the location
of the file <c>libOpenCL.so</c> of the OCL ICD implementation.
</dd>

<dt>OCL_ICD_VENDORS</dt>
<dd>
\anchor OCL_ICD_VENDORS
\addindex __env__OCL_ICD_VENDORS
When using SOCL with OpenCL ICD
(https://forge.imag.fr/projects/ocl-icd/), this variable may be used
to point to the directory where ICD files are installed. The default
directory is <c>/etc/OpenCL/vendors</c>. StarPU installs ICD
files in the directory <c>$prefix/share/starpu/opencl/vendors</c>.
</dd>

<dt>STARPU_COMM_STATS</dt>
<dd>
\anchor STARPU_COMM_STATS
\addindex __env__STARPU_COMM_STATS
Communication statistics for starpumpi (\ref MPISupport)
will be enabled when the environment variable \ref STARPU_COMM_STATS
is defined to a value other than 0.
</dd>

<dt>STARPU_MPI_CACHE</dt>
<dd>
\anchor STARPU_MPI_CACHE
\addindex __env__STARPU_MPI_CACHE
Communication cache for starpumpi (\ref MPISupport) will be
disabled when the environment variable \ref STARPU_MPI_CACHE is set
to 0. It is enabled by default and for any other value of the variable
\ref STARPU_MPI_CACHE.
</dd>

<dt>STARPU_MPI_COMM</dt>
<dd>
\anchor STARPU_MPI_COMM
\addindex __env__STARPU_MPI_COMM
Communication trace for starpumpi (\ref MPISupport) will be
enabled when the environment variable \ref STARPU_MPI_COMM is set
to 1, and StarPU has been configured with the option
\ref enable-verbose "--enable-verbose".
</dd>

<dt>STARPU_MPI_CACHE_STATS</dt>
<dd>
\anchor STARPU_MPI_CACHE_STATS
\addindex __env__STARPU_MPI_CACHE_STATS
When set to 1, statistics are enabled for the communication cache (\ref MPISupport). For now,
it prints messages on the standard output when data are added to or removed from the received
communication cache.
</dd>

<dt>STARPU_MPI_FAKE_SIZE</dt>
<dd>
\anchor STARPU_MPI_FAKE_SIZE
\addindex __env__STARPU_MPI_FAKE_SIZE
Setting this to a number makes StarPU believe that there are that many MPI
nodes, even if it was run on only one MPI node. This allows e.g. simulating
the execution of one of the nodes of a big cluster without actually running
the rest. Of course, it does not provide actual computation results or timings.
</dd>

<dt>STARPU_MPI_FAKE_RANK</dt>
<dd>
\anchor STARPU_MPI_FAKE_RANK
\addindex __env__STARPU_MPI_FAKE_RANK
Setting this to a number makes StarPU believe that it runs the given MPI
node, even if it was run on only one MPI node. This allows e.g. simulating
the execution of one of the nodes of a big cluster without actually running
the rest. Of course, it does not provide actual computation results or timings.
</dd>
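The two variables above are typically used together; a sketch (the cluster size, rank, and binary name are hypothetical):

```shell
# Pretend to be rank 2 of a 16-node cluster while running on a single
# machine, e.g. to inspect the scheduling behaviour of that node:
export STARPU_MPI_FAKE_SIZE=16
export STARPU_MPI_FAKE_RANK=2
# mpirun -np 1 ./my_starpu_mpi_app   # hypothetical binary
```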
<dt>STARPU_SIMGRID_CUDA_MALLOC_COST</dt>
<dd>
\anchor STARPU_SIMGRID_CUDA_MALLOC_COST
\addindex __env__STARPU_SIMGRID_CUDA_MALLOC_COST
When set to 1 (which is the default), CUDA malloc costs are taken into account
in simgrid mode.
</dd>

<dt>STARPU_SIMGRID_CUDA_QUEUE_COST</dt>
<dd>
\anchor STARPU_SIMGRID_CUDA_QUEUE_COST
\addindex __env__STARPU_SIMGRID_CUDA_QUEUE_COST
When set to 1 (which is the default), CUDA task and transfer queueing costs are
taken into account in simgrid mode.
</dd>

<dt>STARPU_PCI_FLAT</dt>
<dd>
\anchor STARPU_PCI_FLAT
\addindex __env__STARPU_PCI_FLAT
When unset or set to 0, the platform file created for simgrid will
contain PCI bandwidths and routes.
</dd>

<dt>STARPU_SIMGRID_QUEUE_MALLOC_COST</dt>
<dd>
\anchor STARPU_SIMGRID_QUEUE_MALLOC_COST
\addindex __env__STARPU_SIMGRID_QUEUE_MALLOC_COST
When unset or set to 1, the GPU transfer queueing is simulated within simgrid.
</dd>

<dt>STARPU_MALLOC_SIMULATION_FOLD</dt>
<dd>
\anchor STARPU_MALLOC_SIMULATION_FOLD
\addindex __env__STARPU_MALLOC_SIMULATION_FOLD
This defines the size of the file used for folding virtual allocation, in
MiB. The default is 1, thus allowing 64 GiB of virtual memory when Linux's
<c>sysctl vm.max_map_count</c> value is the default 65535.
</dd>

<dt>STARPU_SIMGRID_TASK_SUBMIT_COST</dt>
<dd>
\anchor STARPU_SIMGRID_TASK_SUBMIT_COST
\addindex __env__STARPU_SIMGRID_TASK_SUBMIT_COST
When set to 1 (which is the default), task submission costs are taken into
account in simgrid mode. This provides more accurate simgrid predictions,
especially for the beginning of the execution.
</dd>

</dl>
\section MiscellaneousAndDebug Miscellaneous And Debug

<dl>

<dt>STARPU_HOME</dt>
<dd>
\anchor STARPU_HOME
\addindex __env__STARPU_HOME
This specifies the main directory in which StarPU stores its
configuration files. The default is <c>$HOME</c> on Unix environments,
and <c>$USERPROFILE</c> on Windows environments.
</dd>

<dt>STARPU_PATH</dt>
<dd>
\anchor STARPU_PATH
\addindex __env__STARPU_PATH
Only used on Windows environments.
This specifies the main directory in which StarPU is installed
(\ref RunningABasicStarPUApplicationOnMicrosoft).
</dd>

<dt>STARPU_PERF_MODEL_DIR</dt>
<dd>
\anchor STARPU_PERF_MODEL_DIR
\addindex __env__STARPU_PERF_MODEL_DIR
This specifies the main directory in which StarPU stores its
performance model files. The default is <c>$STARPU_HOME/.starpu/sampling</c>.
</dd>

<dt>STARPU_PERF_MODEL_HOMOGENEOUS_CUDA</dt>
<dd>
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_CUDA
\addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_CUDA
When this is set to 1, StarPU will assume that all CUDA devices have the same
performance, and thus share performance models between them, allowing kernel
calibration to be much faster, since measurements only have to be done once
for all CUDA GPUs.
</dd>

<dt>STARPU_PERF_MODEL_HOMOGENEOUS_OPENCL</dt>
<dd>
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_OPENCL
\addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_OPENCL
When this is set to 1, StarPU will assume that all OpenCL devices have the same
performance, and thus share performance models between them, allowing kernel
calibration to be much faster, since measurements only have to be done once
for all OpenCL devices.
</dd>

<dt>STARPU_PERF_MODEL_HOMOGENEOUS_MIC</dt>
<dd>
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_MIC
\addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_MIC
When this is set to 1, StarPU will assume that all MIC devices have the same
performance, and thus share performance models between them, allowing kernel
calibration to be much faster, since measurements only have to be done once
for all MIC devices.
</dd>

<dt>STARPU_PERF_MODEL_HOMOGENEOUS_MPI_MS</dt>
<dd>
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_MPI_MS
\addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_MPI_MS
When this is set to 1, StarPU will assume that all MPI Slave devices have the same
performance, and thus share performance models between them, allowing kernel
calibration to be much faster, since measurements only have to be done once
for all MPI Slaves.
</dd>

<dt>STARPU_PERF_MODEL_HOMOGENEOUS_SCC</dt>
<dd>
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_SCC
\addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_SCC
When this is set to 1, StarPU will assume that all SCC devices have the same
performance, and thus share performance models between them, allowing kernel
calibration to be much faster, since measurements only have to be done once
for all SCC devices.
</dd>

<dt>STARPU_HOSTNAME</dt>
<dd>
\anchor STARPU_HOSTNAME
\addindex __env__STARPU_HOSTNAME
When set, force the hostname to be used when dealing with performance model
files. Models are indexed by machine name. When running for example on
a homogeneous cluster, it is possible to share the models between
machines by setting <c>export STARPU_HOSTNAME=some_global_name</c>.
</dd>
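For instance, model sharing across a homogeneous cluster can be sketched as (the shared name is arbitrary):

```shell
# Give every node of a homogeneous cluster the same model index name,
# so that performance models calibrated on one node apply to all:
export STARPU_HOSTNAME=cluster_nodes   # "cluster_nodes" is a made-up name
```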
  617. <dt>STARPU_OPENCL_PROGRAM_DIR</dt>
  618. <dd>
  619. \anchor STARPU_OPENCL_PROGRAM_DIR
  620. \addindex __env__STARPU_OPENCL_PROGRAM_DIR
  621. This specifies the directory where the OpenCL codelet source files are
  622. located. The function starpu_opencl_load_program_source() looks
  623. for the codelet in the current directory, in the directory specified
  624. by the environment variable \ref STARPU_OPENCL_PROGRAM_DIR, in the
  625. directory <c>share/starpu/opencl</c> of the installation directory of
  626. StarPU, and finally in the source directory of StarPU.
</dd>
<dt>STARPU_SILENT</dt>
<dd>
\anchor STARPU_SILENT
\addindex __env__STARPU_SILENT
This variable allows one to disable verbose mode at runtime when StarPU
has been configured with the option \ref enable-verbose "--enable-verbose". It also
disables the display of StarPU information and warning messages.
</dd>
<dt>STARPU_LOGFILENAME</dt>
<dd>
\anchor STARPU_LOGFILENAME
\addindex __env__STARPU_LOGFILENAME
This variable specifies the file in which the debugging output should be saved.
</dd>
<dt>STARPU_FXT_PREFIX</dt>
<dd>
\anchor STARPU_FXT_PREFIX
\addindex __env__STARPU_FXT_PREFIX
This variable specifies the directory in which to save the trace generated when FxT is enabled. The value must end with a trailing '/' character.
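
A minimal sketch (the directory and application name are illustrative);
note the mandatory trailing '/':

```shell
mkdir -p /tmp/starpu_traces
export STARPU_FXT_PREFIX=/tmp/starpu_traces/   # trailing '/' is required
./myapp   # with FxT enabled, the trace files are written under /tmp/starpu_traces/
```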
</dd>
<dt>STARPU_FXT_TRACE</dt>
<dd>
\anchor STARPU_FXT_TRACE
\addindex __env__STARPU_FXT_TRACE
This variable specifies whether to generate (1) or not (0) the FxT trace in <c>/tmp/prof_file_XXX_YYY</c>. The default is 1 (generate it).
</dd>
<dt>STARPU_LIMIT_CUDA_devid_MEM</dt>
<dd>
\anchor STARPU_LIMIT_CUDA_devid_MEM
\addindex __env__STARPU_LIMIT_CUDA_devid_MEM
This variable specifies the maximum number of megabytes that should be
available to the application on the CUDA device with the identifier
<c>devid</c>. This variable is intended to be used for experimental
purposes as it emulates devices that have a limited amount of memory.
When defined, this variable overrides the value of the variable
\ref STARPU_LIMIT_CUDA_MEM.
</dd>
<dt>STARPU_LIMIT_CUDA_MEM</dt>
<dd>
\anchor STARPU_LIMIT_CUDA_MEM
\addindex __env__STARPU_LIMIT_CUDA_MEM
This variable specifies the maximum number of megabytes that should be
available to the application on each CUDA device. This variable is
intended to be used for experimental purposes as it emulates devices
that have a limited amount of memory.
</dd>
<dt>STARPU_LIMIT_OPENCL_devid_MEM</dt>
<dd>
\anchor STARPU_LIMIT_OPENCL_devid_MEM
\addindex __env__STARPU_LIMIT_OPENCL_devid_MEM
This variable specifies the maximum number of megabytes that should be
available to the application on the OpenCL device with the identifier
<c>devid</c>. This variable is intended to be used for experimental
purposes as it emulates devices that have a limited amount of memory.
When defined, this variable overrides the value of the variable
\ref STARPU_LIMIT_OPENCL_MEM.
</dd>
<dt>STARPU_LIMIT_OPENCL_MEM</dt>
<dd>
\anchor STARPU_LIMIT_OPENCL_MEM
\addindex __env__STARPU_LIMIT_OPENCL_MEM
This variable specifies the maximum number of megabytes that should be
available to the application on each OpenCL device. This variable is
intended to be used for experimental purposes as it emulates devices
that have a limited amount of memory.
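
As a sketch of how the global and per-device limits combine (device
identifier, sizes, and application name are illustrative):

```shell
# Cap every CUDA device at 1024 MB, but device 0 at 512 MB
# (the per-device variable overrides the global one):
export STARPU_LIMIT_CUDA_MEM=1024
export STARPU_LIMIT_CUDA_0_MEM=512
# OpenCL devices can be limited the same way:
export STARPU_LIMIT_OPENCL_MEM=1024
./myapp
```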
</dd>
<dt>STARPU_LIMIT_CPU_MEM</dt>
<dd>
\anchor STARPU_LIMIT_CPU_MEM
\addindex __env__STARPU_LIMIT_CPU_MEM
This variable specifies the maximum number of megabytes that should be
available to the application on each CPU device. Setting it enables the
allocation cache in main memory.
</dd>
<dt>STARPU_MINIMUM_AVAILABLE_MEM</dt>
<dd>
\anchor STARPU_MINIMUM_AVAILABLE_MEM
\addindex __env__STARPU_MINIMUM_AVAILABLE_MEM
This specifies the minimum percentage of memory that should be available in GPUs
(or in main memory, when using out of core), below which a reclaiming pass is
performed. The default is 0%.
</dd>
<dt>STARPU_TARGET_AVAILABLE_MEM</dt>
<dd>
\anchor STARPU_TARGET_AVAILABLE_MEM
\addindex __env__STARPU_TARGET_AVAILABLE_MEM
This specifies the target percentage of memory that should be reached in
GPUs (or in main memory, when using out of core), when performing a periodic
reclaiming pass. The default is 0%.
</dd>
<dt>STARPU_MINIMUM_CLEAN_BUFFERS</dt>
<dd>
\anchor STARPU_MINIMUM_CLEAN_BUFFERS
\addindex __env__STARPU_MINIMUM_CLEAN_BUFFERS
This specifies the minimum percentage of buffers that should be clean in GPUs
(or in main memory, when using out of core), below which asynchronous writebacks will be
issued. The default is 5%.
</dd>
<dt>STARPU_TARGET_CLEAN_BUFFERS</dt>
<dd>
\anchor STARPU_TARGET_CLEAN_BUFFERS
\addindex __env__STARPU_TARGET_CLEAN_BUFFERS
This specifies the target percentage of clean buffers that should be reached in
GPUs (or in main memory, when using out of core), when performing an asynchronous
writeback pass. The default is 10%.
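
The four variables above work as low/target watermark pairs; a sketch
with illustrative values (not tuning recommendations):

```shell
# Start a reclaiming pass when less than 5% of memory is free,
# and reclaim until 10% is free again:
export STARPU_MINIMUM_AVAILABLE_MEM=5
export STARPU_TARGET_AVAILABLE_MEM=10
# Keep at least 10% of buffers clean, writing back until 20% are clean:
export STARPU_MINIMUM_CLEAN_BUFFERS=10
export STARPU_TARGET_CLEAN_BUFFERS=20
./myapp
```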
</dd>
<dt>STARPU_DIDUSE_BARRIER</dt>
<dd>
\anchor STARPU_DIDUSE_BARRIER
\addindex __env__STARPU_DIDUSE_BARRIER
When set to 1, StarPU will never evict a piece of data if it has not been used
by at least one task. This avoids odd behaviors under high memory pressure, but
can lead to deadlocks, so it should be considered experimental only.
</dd>
<dt>STARPU_DISK_SWAP</dt>
<dd>
\anchor STARPU_DISK_SWAP
\addindex __env__STARPU_DISK_SWAP
This specifies a path where StarPU can push data when the main memory is getting
full.
</dd>
<dt>STARPU_DISK_SWAP_BACKEND</dt>
<dd>
\anchor STARPU_DISK_SWAP_BACKEND
\addindex __env__STARPU_DISK_SWAP_BACKEND
This specifies the backend to be used by StarPU to push data when the main
memory is getting full. The default is unistd (i.e. using read/write functions);
other values are stdio (i.e. using fread/fwrite), unistd_o_direct (i.e. using
read/write with O_DIRECT), and leveldb (i.e. using a leveldb database).
</dd>
<dt>STARPU_DISK_SWAP_SIZE</dt>
<dd>
\anchor STARPU_DISK_SWAP_SIZE
\addindex __env__STARPU_DISK_SWAP_SIZE
This specifies the maximum size in MiB to be used by StarPU to push data when the main
memory is getting full. The default is unlimited.
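
A sketch of a complete out-of-core setup combining the variables above
(paths and sizes are illustrative):

```shell
# Cap main memory at 4096 MB and spill overflow to disk via O_DIRECT I/O:
export STARPU_LIMIT_CPU_MEM=4096
export STARPU_DISK_SWAP=/scratch/starpu_swap
export STARPU_DISK_SWAP_BACKEND=unistd_o_direct
export STARPU_DISK_SWAP_SIZE=16384   # use at most 16384 MiB of swap space
mkdir -p /scratch/starpu_swap
./myapp
```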
</dd>
<dt>STARPU_LIMIT_MAX_SUBMITTED_TASKS</dt>
<dd>
\anchor STARPU_LIMIT_MAX_SUBMITTED_TASKS
\addindex __env__STARPU_LIMIT_MAX_SUBMITTED_TASKS
This variable allows the user to control the task submission flow by specifying
a maximum number of tasks allowed to be submitted at a given time: when
this limit is reached, task submission becomes blocking until enough tasks have
completed, as specified by \ref STARPU_LIMIT_MIN_SUBMITTED_TASKS.
Setting it enables allocation cache buffer reuse in main memory.
</dd>
<dt>STARPU_LIMIT_MIN_SUBMITTED_TASKS</dt>
<dd>
\anchor STARPU_LIMIT_MIN_SUBMITTED_TASKS
\addindex __env__STARPU_LIMIT_MIN_SUBMITTED_TASKS
This variable allows the user to control the task submission flow by specifying
a submitted-task threshold below which task submission is unblocked. This
variable has to be used in conjunction with \ref STARPU_LIMIT_MAX_SUBMITTED_TASKS,
which puts the task submission thread to
sleep. Setting it enables allocation cache buffer reuse in main memory.
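
The two variables form a high/low watermark pair for the submission
flow; a sketch with illustrative values:

```shell
# Block submission once 1000 tasks are in flight, and resume it
# when the number of pending tasks drops back to 800:
export STARPU_LIMIT_MAX_SUBMITTED_TASKS=1000
export STARPU_LIMIT_MIN_SUBMITTED_TASKS=800
./myapp
```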
</dd>
<dt>STARPU_TRACE_BUFFER_SIZE</dt>
<dd>
\anchor STARPU_TRACE_BUFFER_SIZE
\addindex __env__STARPU_TRACE_BUFFER_SIZE
This sets the buffer size, in MiB, used for recording trace events. Setting it to a
large size helps avoid pauses in the trace while it is recorded to disk, at the
expense of memory consumption. The default value is 64.
</dd>
<dt>STARPU_GENERATE_TRACE</dt>
<dd>
\anchor STARPU_GENERATE_TRACE
\addindex __env__STARPU_GENERATE_TRACE
When set to <c>1</c>, this variable indicates that StarPU should automatically
generate a Paje trace when starpu_shutdown() is called.
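
A sketch of a typical tracing session combining the FxT-related
variables above (the buffer size and application name are illustrative):

```shell
export STARPU_FXT_TRACE=1           # record an FxT trace (this is the default)
export STARPU_TRACE_BUFFER_SIZE=128 # larger buffer to avoid recording pauses
export STARPU_GENERATE_TRACE=1      # convert to a Paje trace at starpu_shutdown()
./myapp
```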
</dd>
<dt>STARPU_ENABLE_STATS</dt>
<dd>
\anchor STARPU_ENABLE_STATS
\addindex __env__STARPU_ENABLE_STATS
When defined, enables the gathering of various data statistics (\ref DataStatistics).
</dd>
<dt>STARPU_MEMORY_STATS</dt>
<dd>
\anchor STARPU_MEMORY_STATS
\addindex __env__STARPU_MEMORY_STATS
When set to 0, disables the display of memory statistics on data which
have not been unregistered at the end of the execution (\ref MemoryFeedback).
</dd>
<dt>STARPU_MAX_MEMORY_USE</dt>
<dd>
\anchor STARPU_MAX_MEMORY_USE
\addindex __env__STARPU_MAX_MEMORY_USE
When set to 1, displays at the end of the execution the maximum amount of memory
used by StarPU for internal data structures.
</dd>
<dt>STARPU_BUS_STATS</dt>
<dd>
\anchor STARPU_BUS_STATS
\addindex __env__STARPU_BUS_STATS
When defined, statistics about data transfers will be displayed when calling
starpu_shutdown() (\ref Profiling).
</dd>
<dt>STARPU_WORKER_STATS</dt>
<dd>
\anchor STARPU_WORKER_STATS
\addindex __env__STARPU_WORKER_STATS
When defined, statistics about the workers will be displayed when calling
starpu_shutdown() (\ref Profiling). When combined with the
environment variable \ref STARPU_PROFILING, it displays the energy
consumption (\ref Energy-basedScheduling).
</dd>
<dt>STARPU_STATS</dt>
<dd>
\anchor STARPU_STATS
\addindex __env__STARPU_STATS
When set to 0, data statistics will not be displayed at the
end of the execution of an application (\ref DataStatistics).
</dd>
<dt>STARPU_WATCHDOG_TIMEOUT</dt>
<dd>
\anchor STARPU_WATCHDOG_TIMEOUT
\addindex __env__STARPU_WATCHDOG_TIMEOUT
When set to a value other than 0, makes StarPU print an error
message whenever it has not terminated any task for the given time (in µs),
while letting the application continue normally. Should
be used in combination with \ref STARPU_WATCHDOG_CRASH
(see \ref DetectionStuckConditions).
</dd>
<dt>STARPU_WATCHDOG_CRASH</dt>
<dd>
\anchor STARPU_WATCHDOG_CRASH
\addindex __env__STARPU_WATCHDOG_CRASH
When set to a value other than 0, triggers a crash when the watchdog
timeout is reached, thus allowing one to catch the situation in gdb, etc.
(see \ref DetectionStuckConditions).
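
A sketch of using both watchdog variables together under gdb (the
timeout value and application name are illustrative):

```shell
# Report if no task completes within 10 seconds, and crash so the
# stuck state can be inspected in the debugger:
export STARPU_WATCHDOG_TIMEOUT=10000000   # value is in microseconds
export STARPU_WATCHDOG_CRASH=1
gdb --args ./myapp
```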
</dd>
<dt>STARPU_TASK_BREAK_ON_SCHED</dt>
<dd>
\anchor STARPU_TASK_BREAK_ON_SCHED
\addindex __env__STARPU_TASK_BREAK_ON_SCHED
When this variable contains a job id, StarPU will raise SIGTRAP when the task
with that job id is being scheduled by the scheduler (at a scheduler-specific
point), which will be conveniently caught by debuggers.
This only works for schedulers which have such a scheduling point defined
(see \ref DebuggingScheduling).
</dd>
<dt>STARPU_TASK_BREAK_ON_PUSH</dt>
<dd>
\anchor STARPU_TASK_BREAK_ON_PUSH
\addindex __env__STARPU_TASK_BREAK_ON_PUSH
When this variable contains a job id, StarPU will raise SIGTRAP when the task
with that job id is being pushed to the scheduler, which will be conveniently caught by debuggers
(see \ref DebuggingScheduling).
</dd>
<dt>STARPU_TASK_BREAK_ON_POP</dt>
<dd>
\anchor STARPU_TASK_BREAK_ON_POP
\addindex __env__STARPU_TASK_BREAK_ON_POP
When this variable contains a job id, StarPU will raise SIGTRAP when the task
with that job id is being popped from the scheduler, which will be conveniently caught by debuggers
(see \ref DebuggingScheduling).
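
A sketch of catching one of these SIGTRAPs in gdb (the job id and
application name are illustrative; the id would typically come from a
previous trace or error message):

```shell
# Break in gdb when the task with job id 42 is pushed to the scheduler:
export STARPU_TASK_BREAK_ON_PUSH=42
gdb ./myapp
# gdb stops on the SIGTRAP; inspect the backtrace with "bt" at that point.
```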
</dd>
<dt>STARPU_DISABLE_KERNELS</dt>
<dd>
\anchor STARPU_DISABLE_KERNELS
\addindex __env__STARPU_DISABLE_KERNELS
When set to a value other than 1, it disables actually calling the kernel
functions, thus allowing one to quickly check that the task scheme is working
properly, without performing the actual application-provided computation.
</dd>
<dt>STARPU_HISTORY_MAX_ERROR</dt>
<dd>
\anchor STARPU_HISTORY_MAX_ERROR
\addindex __env__STARPU_HISTORY_MAX_ERROR
History-based performance models will drop measurements which are really far
from the measured average. This specifies the allowed variation. The default is
50 (%), i.e. a measurement is allowed to be 1.5 times faster or 1.5 times slower
than the average.
</dd>
<dt>STARPU_RAND_SEED</dt>
<dd>
\anchor STARPU_RAND_SEED
\addindex __env__STARPU_RAND_SEED
The random scheduler and some examples use random numbers internally.
Depending on the example, the seed is by default either always 0 or
the current time() (unless simgrid mode is enabled, in which case it is always
0). \ref STARPU_RAND_SEED allows one to set the seed to a specific value.
</dd>
<dt>STARPU_IDLE_TIME</dt>
<dd>
\anchor STARPU_IDLE_TIME
\addindex __env__STARPU_IDLE_TIME
When set to a valid filename, a corresponding file
will be created when shutting down StarPU. The file will contain the
sum of all the workers' idle time.
</dd>
<dt>STARPU_GLOBAL_ARBITER</dt>
<dd>
\anchor STARPU_GLOBAL_ARBITER
\addindex __env__STARPU_GLOBAL_ARBITER
When set to a positive value, StarPU will create an arbiter, which
implements an advanced but centralized management of concurrent data
accesses (see \ref ConcurrentDataAccess).
</dd>
</dl>
\section ConfiguringTheHypervisor Configuring The Hypervisor
<dl>
<dt>SC_HYPERVISOR_POLICY</dt>
<dd>
\anchor SC_HYPERVISOR_POLICY
\addindex __env__SC_HYPERVISOR_POLICY
Choose between the different resizing policies proposed by StarPU for the hypervisor:
idle, app_driven, feft_lp, teft_lp, ispeed_lp, throughput_lp, etc.
Use <c>SC_HYPERVISOR_POLICY=help</c> to get the list of available policies for the hypervisor.
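
A sketch of querying the policy list and then selecting one (the
application name and chosen policy are illustrative):

```shell
# Print the list of available hypervisor resizing policies:
SC_HYPERVISOR_POLICY=help ./myapp
# Then select one of them for the actual run, e.g.:
export SC_HYPERVISOR_POLICY=feft_lp
./myapp
```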
</dd>
<dt>SC_HYPERVISOR_TRIGGER_RESIZE</dt>
<dd>
\anchor SC_HYPERVISOR_TRIGGER_RESIZE
\addindex __env__SC_HYPERVISOR_TRIGGER_RESIZE
Choose how the hypervisor should be triggered: <c>speed</c> if the resizing algorithm should
be called whenever the speed of the context does not correspond to an optimal precomputed value,
<c>idle</c> if the resizing algorithm should be called whenever the workers are idle for a period
longer than the value indicated when configuring the hypervisor.
</dd>
<dt>SC_HYPERVISOR_START_RESIZE</dt>
<dd>
\anchor SC_HYPERVISOR_START_RESIZE
\addindex __env__SC_HYPERVISOR_START_RESIZE
Indicate the moment when the resizing should become available. The value corresponds to the
percentage of the total execution time of the application. The default value is the resizing frame.
</dd>
<dt>SC_HYPERVISOR_MAX_SPEED_GAP</dt>
<dd>
\anchor SC_HYPERVISOR_MAX_SPEED_GAP
\addindex __env__SC_HYPERVISOR_MAX_SPEED_GAP
Indicate the ratio of speed difference between contexts that should trigger the hypervisor.
This situation may occur only when a theoretical speed could not be computed and the hypervisor
has no value to compare the speed to. Otherwise the resizing of a context is not influenced by
the speed of the other contexts, but only by the value that a context should have.
</dd>
<dt>SC_HYPERVISOR_STOP_PRINT</dt>
<dd>
\anchor SC_HYPERVISOR_STOP_PRINT
\addindex __env__SC_HYPERVISOR_STOP_PRINT
By default the speed values of the workers are printed during the execution
of the application. If this environment variable is set to 1, this printing
is disabled.
</dd>
<dt>SC_HYPERVISOR_LAZY_RESIZE</dt>
<dd>
\anchor SC_HYPERVISOR_LAZY_RESIZE
\addindex __env__SC_HYPERVISOR_LAZY_RESIZE
By default the hypervisor resizes the contexts in a lazy way, that is, workers are first added
to a new context before being removed from the previous one. Once these workers are clearly
taken into account in the new context (a task was popped there), they are removed from the
previous one. If the application wants the change in the distribution of workers to take
effect right away, this variable should be set to 0.
</dd>
<dt>SC_HYPERVISOR_SAMPLE_CRITERIA</dt>
<dd>
\anchor SC_HYPERVISOR_SAMPLE_CRITERIA
\addindex __env__SC_HYPERVISOR_SAMPLE_CRITERIA
By default the hypervisor uses a sample of flops when computing the speed of the contexts and
of the workers. If this variable is set to <c>time</c>, the hypervisor uses a sample of time
(10% of an approximation of the total execution time of the application).
</dd>
</dl>
*/