/*
 * This file is part of the StarPU Handbook.
 * Copyright (C) 2009--2011 Universit@'e de Bordeaux
 * Copyright (C) 2010, 2011, 2012, 2013, 2014, 2015, 2016 CNRS
 * Copyright (C) 2011, 2012, 2016 INRIA
 * Copyright (C) 2016 Uppsala University
 * See the file version.doxy for copying conditions.
 */
/*! \page ExecutionConfigurationThroughEnvironmentVariables Execution Configuration Through Environment Variables
The behavior of the StarPU library and tools may be tuned thanks to
the following environment variables.
\section ConfiguringWorkers Configuring Workers
<dl>
<dt>STARPU_NCPU</dt>
<dd>
\anchor STARPU_NCPU
\addindex __env__STARPU_NCPU
Specify the number of CPU workers (thus not including workers
dedicated to controlling accelerators). Note that by default, StarPU will
not allocate more CPU workers than there are physical CPUs, and that
some CPUs are used to control the accelerators.
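For example, a minimal sketch restricting StarPU to four CPU workers
(the application name <c>./my_app</c> is purely illustrative):
\verbatim
export STARPU_NCPU=4
./my_app
\endverbatim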
</dd>
<dt>STARPU_NCPUS</dt>
<dd>
\anchor STARPU_NCPUS
\addindex __env__STARPU_NCPUS
This variable is deprecated. You should use \ref STARPU_NCPU.
</dd>
<dt>STARPU_NCUDA</dt>
<dd>
\anchor STARPU_NCUDA
\addindex __env__STARPU_NCUDA
Specify the number of CUDA devices that StarPU can use. If
\ref STARPU_NCUDA is lower than the number of physical devices, it is
possible to select which CUDA devices should be used by means of the
environment variable \ref STARPU_WORKERS_CUDAID. By default, StarPU will
create as many CUDA workers as there are CUDA devices.
</dd>
<dt>STARPU_NWORKER_PER_CUDA</dt>
<dd>
\anchor STARPU_NWORKER_PER_CUDA
\addindex __env__STARPU_NWORKER_PER_CUDA
Specify the number of workers per CUDA device, and thus the number of kernels
which will be concurrently running on the devices. The default value is 1.
</dd>
<dt>STARPU_CUDA_THREAD_PER_WORKER</dt>
<dd>
\anchor STARPU_CUDA_THREAD_PER_WORKER
\addindex __env__STARPU_CUDA_THREAD_PER_WORKER
Specify whether the CUDA driver should provide one thread per stream or a
single thread dealing with all the streams: 0 for one thread per stream,
1 otherwise. The default value is 1.
</dd>
<dt>STARPU_CUDA_PIPELINE</dt>
<dd>
\anchor STARPU_CUDA_PIPELINE
\addindex __env__STARPU_CUDA_PIPELINE
Specify how many asynchronous tasks are submitted in advance on CUDA
devices. This for instance allows task management to be overlapped with the
execution of previous tasks, and also allows concurrent execution on Fermi
cards, which otherwise introduce spurious synchronizations. The default is 2.
Setting the value to 0 forces a synchronous execution of all tasks.
</dd>
<dt>STARPU_NOPENCL</dt>
<dd>
\anchor STARPU_NOPENCL
\addindex __env__STARPU_NOPENCL
OpenCL equivalent of the environment variable \ref STARPU_NCUDA.
</dd>
<dt>STARPU_OPENCL_PIPELINE</dt>
<dd>
\anchor STARPU_OPENCL_PIPELINE
\addindex __env__STARPU_OPENCL_PIPELINE
Specify how many asynchronous tasks are submitted in advance on OpenCL
devices. This for instance allows task management to be overlapped with the
execution of previous tasks, and also allows concurrent execution on Fermi
cards, which otherwise introduce spurious synchronizations. The default is 2.
Setting the value to 0 forces a synchronous execution of all tasks.
</dd>
<dt>STARPU_OPENCL_ON_CPUS</dt>
<dd>
\anchor STARPU_OPENCL_ON_CPUS
\addindex __env__STARPU_OPENCL_ON_CPUS
By default, the OpenCL driver only enables GPU and accelerator
devices. By setting the environment variable \ref STARPU_OPENCL_ON_CPUS
to 1, the OpenCL driver will also enable CPU devices.
</dd>
<dt>STARPU_OPENCL_ONLY_ON_CPUS</dt>
<dd>
\anchor STARPU_OPENCL_ONLY_ON_CPUS
\addindex __env__STARPU_OPENCL_ONLY_ON_CPUS
By default, the OpenCL driver enables GPU and accelerator
devices. By setting the environment variable \ref STARPU_OPENCL_ONLY_ON_CPUS
to 1, the OpenCL driver will ONLY enable CPU devices.
</dd>
<dt>STARPU_NMIC</dt>
<dd>
\anchor STARPU_NMIC
\addindex __env__STARPU_NMIC
MIC equivalent of the environment variable \ref STARPU_NCUDA, i.e. the number of
MIC devices to use.
</dd>
<dt>STARPU_NMICTHREADS</dt>
<dd>
\anchor STARPU_NMICTHREADS
\addindex __env__STARPU_NMICTHREADS
Number of threads to use on the MIC devices.
</dd>
<dt>STARPU_NSCC</dt>
<dd>
\anchor STARPU_NSCC
\addindex __env__STARPU_NSCC
SCC equivalent of the environment variable \ref STARPU_NCUDA.
</dd>
<dt>STARPU_WORKERS_NOBIND</dt>
<dd>
\anchor STARPU_WORKERS_NOBIND
\addindex __env__STARPU_WORKERS_NOBIND
Setting it to non-zero will prevent StarPU from binding its threads to
CPUs. This is for instance useful when running the testsuite in parallel.
</dd>
<dt>STARPU_WORKERS_CPUID</dt>
<dd>
\anchor STARPU_WORKERS_CPUID
\addindex __env__STARPU_WORKERS_CPUID
Passing an array of integers in \ref STARPU_WORKERS_CPUID
specifies on which logical CPU the different workers should be
bound. For instance, if <c>STARPU_WORKERS_CPUID = "0 1 4 5"</c>, the first
worker will be bound to logical CPU #0, the second CPU worker will be bound to
logical CPU #1 and so on. Note that the logical ordering of the CPUs is either
determined by the OS, or provided by the library <c>hwloc</c> in case it is
available. Ranges can be provided: for instance, <c>STARPU_WORKERS_CPUID = "1-3
5"</c> will bind the first three workers on logical CPUs #1, #2, and #3, and the
fourth worker on logical CPU #5. Unbound ranges can also be provided:
<c>STARPU_WORKERS_CPUID = "1-"</c> will bind the workers starting from logical
CPU #1 up to the last CPU.
Note that the first workers correspond to the CUDA workers, then come the
OpenCL workers, and finally the CPU workers. For example if
we have <c>STARPU_NCUDA=1</c>, <c>STARPU_NOPENCL=1</c>, <c>STARPU_NCPU=2</c>
and <c>STARPU_WORKERS_CPUID = "0 2 1 3"</c>, the CUDA device will be controlled
by logical CPU #0, the OpenCL device will be controlled by logical CPU #2, and
the logical CPUs #1 and #3 will be used by the CPU workers.
If the number of workers is larger than the array given in
\ref STARPU_WORKERS_CPUID, the workers are bound to the logical CPUs in a
round-robin fashion: if <c>STARPU_WORKERS_CPUID = "0 1"</c>, the first
and the third (resp. second and fourth) workers will be put on CPU #0
(resp. CPU #1).
This variable is ignored if the field
starpu_conf::use_explicit_workers_bindid passed to starpu_init() is
set.
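As a concrete shell sketch of the example above (the executable name
<c>./my_app</c> is hypothetical):
\verbatim
export STARPU_NCUDA=1
export STARPU_NOPENCL=1
export STARPU_NCPU=2
export STARPU_WORKERS_CPUID="0 2 1 3"
./my_app
\endverbatim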
</dd>
<dt>STARPU_WORKERS_CUDAID</dt>
<dd>
\anchor STARPU_WORKERS_CUDAID
\addindex __env__STARPU_WORKERS_CUDAID
Similarly to the \ref STARPU_WORKERS_CPUID environment variable, it is
possible to select which CUDA devices should be used by StarPU. On a machine
equipped with 4 GPUs, setting <c>STARPU_WORKERS_CUDAID = "1 3"</c> and
<c>STARPU_NCUDA=2</c> specifies that 2 CUDA workers should be created, and that
they should use CUDA devices #1 and #3 (the logical ordering of the devices is
the one reported by CUDA).
This variable is ignored if the field
starpu_conf::use_explicit_workers_cuda_gpuid passed to starpu_init()
is set.
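For instance, the setting described above could be expressed as (hypothetical
application name):
\verbatim
export STARPU_NCUDA=2
export STARPU_WORKERS_CUDAID="1 3"
./my_app
\endverbatim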
</dd>
<dt>STARPU_WORKERS_OPENCLID</dt>
<dd>
\anchor STARPU_WORKERS_OPENCLID
\addindex __env__STARPU_WORKERS_OPENCLID
OpenCL equivalent of the \ref STARPU_WORKERS_CUDAID environment variable.
This variable is ignored if the field
starpu_conf::use_explicit_workers_opencl_gpuid passed to starpu_init()
is set.
</dd>
<dt>STARPU_WORKERS_MICID</dt>
<dd>
\anchor STARPU_WORKERS_MICID
\addindex __env__STARPU_WORKERS_MICID
MIC equivalent of the \ref STARPU_WORKERS_CUDAID environment variable.
This variable is ignored if the field
starpu_conf::use_explicit_workers_mic_deviceid passed to starpu_init()
is set.
</dd>
<dt>STARPU_WORKERS_SCCID</dt>
<dd>
\anchor STARPU_WORKERS_SCCID
\addindex __env__STARPU_WORKERS_SCCID
SCC equivalent of the \ref STARPU_WORKERS_CUDAID environment variable.
This variable is ignored if the field
starpu_conf::use_explicit_workers_scc_deviceid passed to starpu_init()
is set.
</dd>
<dt>STARPU_WORKER_TREE</dt>
<dd>
\anchor STARPU_WORKER_TREE
\addindex __env__STARPU_WORKER_TREE
Define to 1 to enable the tree iterator in schedulers.
</dd>
<dt>STARPU_SINGLE_COMBINED_WORKER</dt>
<dd>
\anchor STARPU_SINGLE_COMBINED_WORKER
\addindex __env__STARPU_SINGLE_COMBINED_WORKER
If set, StarPU will create several workers which won't be able to work
concurrently. It will by default create combined workers whose size goes from 1
to the total number of CPU workers in the system. \ref STARPU_MIN_WORKERSIZE
and \ref STARPU_MAX_WORKERSIZE can be used to change this default.
</dd>
<dt>STARPU_MIN_WORKERSIZE</dt>
<dd>
\anchor STARPU_MIN_WORKERSIZE
\addindex __env__STARPU_MIN_WORKERSIZE
\ref STARPU_MIN_WORKERSIZE
permits specifying the minimum size of the combined workers (instead of the default 2).
</dd>
<dt>STARPU_MAX_WORKERSIZE</dt>
<dd>
\anchor STARPU_MAX_WORKERSIZE
\addindex __env__STARPU_MAX_WORKERSIZE
\ref STARPU_MAX_WORKERSIZE
permits specifying the maximum size of the combined workers (instead of the
number of CPU workers in the system).
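As an illustrative sketch, the size of the synthesized combined workers could
be constrained as follows (the application name <c>./my_app</c> and the bounds
are arbitrary choices, not recommendations):
\verbatim
export STARPU_MIN_WORKERSIZE=2
export STARPU_MAX_WORKERSIZE=8
./my_app
\endverbatim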
</dd>
<dt>STARPU_SYNTHESIZE_ARITY_COMBINED_WORKER</dt>
<dd>
\anchor STARPU_SYNTHESIZE_ARITY_COMBINED_WORKER
\addindex __env__STARPU_SYNTHESIZE_ARITY_COMBINED_WORKER
Let the user decide how many elements are allowed between combined workers
created from hwloc information. For instance, in the case of sockets with 6
cores without shared L2 caches, if \ref STARPU_SYNTHESIZE_ARITY_COMBINED_WORKER is
set to 6, no combined worker will be synthesized beyond one for the socket
and one per core. If it is set to 3, 3 intermediate combined workers will be
synthesized, to divide the socket cores into 3 chunks of 2 cores. If it is set to
2, 2 intermediate combined workers will be synthesized, to divide the socket
cores into 2 chunks of 3 cores, and then 3 additional combined workers will be
synthesized, to divide the former synthesized workers into a bunch of 2 cores,
and the remaining core (for which no combined worker is synthesized since there
is already a normal worker for it).
The default, 2, thus makes StarPU tend to build binary trees of combined
workers.
</dd>
<dt>STARPU_DISABLE_ASYNCHRONOUS_COPY</dt>
<dd>
\anchor STARPU_DISABLE_ASYNCHRONOUS_COPY
\addindex __env__STARPU_DISABLE_ASYNCHRONOUS_COPY
Disable asynchronous copies between CPU and GPU devices.
The AMD implementation of OpenCL is known to
fail when copying data asynchronously. When using this implementation,
it is therefore necessary to disable asynchronous data transfers.
</dd>
<dt>STARPU_DISABLE_ASYNCHRONOUS_CUDA_COPY</dt>
<dd>
\anchor STARPU_DISABLE_ASYNCHRONOUS_CUDA_COPY
\addindex __env__STARPU_DISABLE_ASYNCHRONOUS_CUDA_COPY
Disable asynchronous copies between CPU and CUDA devices.
</dd>
<dt>STARPU_DISABLE_ASYNCHRONOUS_OPENCL_COPY</dt>
<dd>
\anchor STARPU_DISABLE_ASYNCHRONOUS_OPENCL_COPY
\addindex __env__STARPU_DISABLE_ASYNCHRONOUS_OPENCL_COPY
Disable asynchronous copies between CPU and OpenCL devices.
The AMD implementation of OpenCL is known to
fail when copying data asynchronously. When using this implementation,
it is therefore necessary to disable asynchronous data transfers.
</dd>
<dt>STARPU_DISABLE_ASYNCHRONOUS_MIC_COPY</dt>
<dd>
\anchor STARPU_DISABLE_ASYNCHRONOUS_MIC_COPY
\addindex __env__STARPU_DISABLE_ASYNCHRONOUS_MIC_COPY
Disable asynchronous copies between CPU and MIC devices.
</dd>
<dt>STARPU_ENABLE_CUDA_GPU_GPU_DIRECT</dt>
<dd>
\anchor STARPU_ENABLE_CUDA_GPU_GPU_DIRECT
\addindex __env__STARPU_ENABLE_CUDA_GPU_GPU_DIRECT
Enable (1) or Disable (0) direct CUDA transfers from GPU to GPU, without copying
through RAM. The default is Enabled.
This permits testing the performance effect of GPU-Direct.
</dd>
<dt>STARPU_DISABLE_PINNING</dt>
<dd>
\anchor STARPU_DISABLE_PINNING
\addindex __env__STARPU_DISABLE_PINNING
Disable (1) or Enable (0) pinning of host memory allocated through starpu_malloc, starpu_memory_pin
and friends. The default is Enabled.
This permits testing the performance effect of memory pinning.
</dd>
<dt>STARPU_MIC_SINK_PROGRAM_NAME</dt>
<dd>
\anchor STARPU_MIC_SINK_PROGRAM_NAME
\addindex __env__STARPU_MIC_SINK_PROGRAM_NAME
todo
</dd>
<dt>STARPU_MIC_SINK_PROGRAM_PATH</dt>
<dd>
\anchor STARPU_MIC_SINK_PROGRAM_PATH
\addindex __env__STARPU_MIC_SINK_PROGRAM_PATH
todo
</dd>
<dt>STARPU_MIC_PROGRAM_PATH</dt>
<dd>
\anchor STARPU_MIC_PROGRAM_PATH
\addindex __env__STARPU_MIC_PROGRAM_PATH
todo
</dd>
</dl>
\section ConfiguringTheSchedulingEngine Configuring The Scheduling Engine
<dl>
<dt>STARPU_SCHED</dt>
<dd>
\anchor STARPU_SCHED
\addindex __env__STARPU_SCHED
Choose between the different scheduling policies proposed by StarPU: work
stealing, random, greedy, with performance models, etc.
Use <c>STARPU_SCHED=help</c> to get the list of available schedulers.
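For instance, different schedulers can be tried without recompiling; a sketch
(<c>./my_app</c> is a placeholder, and <c>dmda</c> is one of the
performance-model-based policies mentioned below):
\verbatim
STARPU_SCHED=help ./my_app    # list the available policies
STARPU_SCHED=dmda ./my_app    # run with the dmda policy
\endverbatim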
</dd>
<dt>STARPU_MIN_PRIO</dt>
<dd>
\anchor STARPU_MIN_PRIO_env
\addindex __env__STARPU_MIN_PRIO
Set the minimum priority used by priorities-aware schedulers.
</dd>
<dt>STARPU_MAX_PRIO</dt>
<dd>
\anchor STARPU_MAX_PRIO_env
\addindex __env__STARPU_MAX_PRIO
Set the maximum priority used by priorities-aware schedulers.
</dd>
<dt>STARPU_CALIBRATE</dt>
<dd>
\anchor STARPU_CALIBRATE
\addindex __env__STARPU_CALIBRATE
If this variable is set to 1, the performance models are calibrated during
the execution. If it is set to 2, the previous values are dropped to restart
calibration from scratch. Setting this variable to 0 disables calibration;
this is the default behaviour.
Note: this currently only applies to <c>dm</c> and <c>dmda</c> scheduling policies.
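A typical calibration run could look like the following sketch (the
application name is hypothetical; runs may need to be repeated until the
models are considered calibrated):
\verbatim
export STARPU_SCHED=dmda
export STARPU_CALIBRATE=1
./my_app
\endverbatim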
</dd>
<dt>STARPU_CALIBRATE_MINIMUM</dt>
<dd>
\anchor STARPU_CALIBRATE_MINIMUM
\addindex __env__STARPU_CALIBRATE_MINIMUM
This defines the minimum number of calibration measurements that will be made
before considering that the performance model is calibrated. The default value is 10.
</dd>
<dt>STARPU_BUS_CALIBRATE</dt>
<dd>
\anchor STARPU_BUS_CALIBRATE
\addindex __env__STARPU_BUS_CALIBRATE
If this variable is set to 1, the bus is recalibrated during initialization.
</dd>
<dt>STARPU_PREFETCH</dt>
<dd>
\anchor STARPU_PREFETCH
\addindex __env__STARPU_PREFETCH
This variable indicates whether data prefetching should be enabled (0 means
that it is disabled). If prefetching is enabled, when a task is scheduled to be
executed e.g. on a GPU, StarPU will request an asynchronous transfer in
advance, so that data is already present on the GPU when the task starts. As a
result, computation and data transfers are overlapped.
Note that prefetching is enabled by default in StarPU.
</dd>
<dt>STARPU_SCHED_ALPHA</dt>
<dd>
\anchor STARPU_SCHED_ALPHA
\addindex __env__STARPU_SCHED_ALPHA
To estimate the cost of a task StarPU takes into account the estimated
computation time (obtained thanks to performance models). The alpha factor is
the coefficient to be applied to it before adding it to the communication part.
</dd>
<dt>STARPU_SCHED_BETA</dt>
<dd>
\anchor STARPU_SCHED_BETA
\addindex __env__STARPU_SCHED_BETA
To estimate the cost of a task StarPU takes into account the estimated
data transfer time (obtained thanks to performance models). The beta factor is
the coefficient to be applied to it before adding it to the computation part.
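Putting the two previous entries together, the task cost used by the
performance-model-based policies can be sketched as the weighted sum below
(informal notation, not taken verbatim from the StarPU sources):
\f[
\mathit{cost} = \alpha \times T_{\mathit{computation}} + \beta \times T_{\mathit{transfer}}
\f]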
</dd>
<dt>STARPU_SCHED_GAMMA</dt>
<dd>
\anchor STARPU_SCHED_GAMMA
\addindex __env__STARPU_SCHED_GAMMA
Define the execution time penalty of a joule (\ref Energy-basedScheduling).
</dd>
<dt>STARPU_IDLE_POWER</dt>
<dd>
\anchor STARPU_IDLE_POWER
\addindex __env__STARPU_IDLE_POWER
Define the idle power of the machine (\ref Energy-basedScheduling).
</dd>
<dt>STARPU_PROFILING</dt>
<dd>
\anchor STARPU_PROFILING
\addindex __env__STARPU_PROFILING
Enable on-line performance monitoring (\ref EnablingOn-linePerformanceMonitoring).
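For example, profiling can be combined with the worker and bus statistics
described in \ref MiscellaneousAndDebug (a sketch; the application name is
hypothetical):
\verbatim
export STARPU_PROFILING=1
export STARPU_WORKER_STATS=1
export STARPU_BUS_STATS=1
./my_app
\endverbatim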
</dd>
</dl>
\section Extensions Extensions
<dl>
<dt>SOCL_OCL_LIB_OPENCL</dt>
<dd>
\anchor SOCL_OCL_LIB_OPENCL
\addindex __env__SOCL_OCL_LIB_OPENCL
The SOCL test suite is only run when the environment variable
\ref SOCL_OCL_LIB_OPENCL is defined. It should contain the location
of the file <c>libOpenCL.so</c> of the OCL ICD implementation.
</dd>
<dt>OCL_ICD_VENDORS</dt>
<dd>
\anchor OCL_ICD_VENDORS
\addindex __env__OCL_ICD_VENDORS
When using SOCL with OpenCL ICD
(https://forge.imag.fr/projects/ocl-icd/), this variable may be used
to point to the directory where ICD files are installed. The default
directory is <c>/etc/OpenCL/vendors</c>. StarPU installs ICD
files in the directory <c>$prefix/share/starpu/opencl/vendors</c>.
</dd>
<dt>STARPU_COMM_STATS</dt>
<dd>
\anchor STARPU_COMM_STATS
\addindex __env__STARPU_COMM_STATS
Communication statistics for starpumpi (\ref MPISupport)
will be enabled when the environment variable \ref STARPU_COMM_STATS
is defined to a value other than 0.
</dd>
<dt>STARPU_MPI_CACHE</dt>
<dd>
\anchor STARPU_MPI_CACHE
\addindex __env__STARPU_MPI_CACHE
Communication cache for starpumpi (\ref MPISupport) will be
disabled when the environment variable \ref STARPU_MPI_CACHE is set
to 0. It is enabled by default and for any other value of the variable
\ref STARPU_MPI_CACHE.
</dd>
<dt>STARPU_MPI_COMM</dt>
<dd>
\anchor STARPU_MPI_COMM
\addindex __env__STARPU_MPI_COMM
Communication trace for starpumpi (\ref MPISupport) will be
enabled when the environment variable \ref STARPU_MPI_COMM is set
to 1, and StarPU has been configured with the option
\ref enable-verbose "--enable-verbose".
</dd>
<dt>STARPU_MPI_CACHE_STATS</dt>
<dd>
\anchor STARPU_MPI_CACHE_STATS
\addindex __env__STARPU_MPI_CACHE_STATS
When set to 1, statistics are enabled for the communication cache (\ref MPISupport). For now,
it prints messages on the standard output when data are added to or removed from the received
communication cache.
</dd>
<dt>STARPU_MPI_FAKE_SIZE</dt>
<dd>
\anchor STARPU_MPI_FAKE_SIZE
\addindex __env__STARPU_MPI_FAKE_SIZE
Setting this to a number makes StarPU believe that there are that many MPI nodes, even
if it was run on only one MPI node. This allows e.g. simulating the execution
of one of the nodes of a big cluster without actually running the rest.
It of course does not provide computation results or timing.
</dd>
<dt>STARPU_MPI_FAKE_RANK</dt>
<dd>
\anchor STARPU_MPI_FAKE_RANK
\addindex __env__STARPU_MPI_FAKE_RANK
Setting this to a number makes StarPU believe that it runs the given MPI node, even
if it was run on only one MPI node. This allows e.g. simulating the execution
of one of the nodes of a big cluster without actually running the rest.
It of course does not provide computation results or timing.
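For instance, a sketch simulating the behaviour of rank 2 of a 64-node run on
a single machine (the application name <c>./my_mpi_app</c> is hypothetical):
\verbatim
export STARPU_MPI_FAKE_SIZE=64
export STARPU_MPI_FAKE_RANK=2
./my_mpi_app
\endverbatim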
</dd>
<dt>STARPU_SIMGRID_CUDA_MALLOC_COST</dt>
<dd>
\anchor STARPU_SIMGRID_CUDA_MALLOC_COST
\addindex __env__STARPU_SIMGRID_CUDA_MALLOC_COST
When set to 1 (which is the default), CUDA malloc costs are taken into account
in simgrid mode.
</dd>
<dt>STARPU_SIMGRID_CUDA_QUEUE_COST</dt>
<dd>
\anchor STARPU_SIMGRID_CUDA_QUEUE_COST
\addindex __env__STARPU_SIMGRID_CUDA_QUEUE_COST
When set to 1 (which is the default), CUDA task and transfer queueing costs are
taken into account in simgrid mode.
</dd>
<dt>STARPU_PCI_FLAT</dt>
<dd>
\anchor STARPU_PCI_FLAT
\addindex __env__STARPU_PCI_FLAT
When unset or set to 0, the platform file created for simgrid will
contain PCI bandwidths and routes.
</dd>
<dt>STARPU_SIMGRID_QUEUE_MALLOC_COST</dt>
<dd>
\anchor STARPU_SIMGRID_QUEUE_MALLOC_COST
\addindex __env__STARPU_SIMGRID_QUEUE_MALLOC_COST
When unset or set to 1, simulate within simgrid the GPU transfer queueing.
</dd>
<dt>STARPU_MALLOC_SIMULATION_FOLD</dt>
<dd>
\anchor STARPU_MALLOC_SIMULATION_FOLD
\addindex __env__STARPU_MALLOC_SIMULATION_FOLD
This defines the size of the file used for folding virtual allocation, in
MiB. The default is 1, thus allowing 64GiB virtual memory when Linux's
<c>sysctl vm.max_map_count</c> value is the default 65535.
</dd>
<dt>STARPU_SIMGRID_TASK_SUBMIT_COST</dt>
<dd>
\anchor STARPU_SIMGRID_TASK_SUBMIT_COST
\addindex __env__STARPU_SIMGRID_TASK_SUBMIT_COST
When set to 1 (which is the default), task submission costs are taken into
account in simgrid mode. This provides more accurate simgrid predictions,
especially for the beginning of the execution.
</dd>
</dl>
\section MiscellaneousAndDebug Miscellaneous And Debug
<dl>
<dt>STARPU_HOME</dt>
<dd>
\anchor STARPU_HOME
\addindex __env__STARPU_HOME
This specifies the main directory in which StarPU stores its
configuration files. The default is <c>$HOME</c> on Unix environments,
and <c>$USERPROFILE</c> on Windows environments.
</dd>
<dt>STARPU_PATH</dt>
<dd>
\anchor STARPU_PATH
\addindex __env__STARPU_PATH
Only used on Windows environments.
This specifies the main directory in which StarPU is installed
(\ref RunningABasicStarPUApplicationOnMicrosoft)
</dd>
<dt>STARPU_PERF_MODEL_DIR</dt>
<dd>
\anchor STARPU_PERF_MODEL_DIR
\addindex __env__STARPU_PERF_MODEL_DIR
This specifies the main directory in which StarPU stores its
performance model files. The default is <c>$STARPU_HOME/.starpu/sampling</c>.
</dd>
<dt>STARPU_PERF_MODEL_HOMOGENEOUS_CUDA</dt>
<dd>
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_CUDA
\addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_CUDA
When this is set to 1, StarPU will assume that all CUDA devices have the same
performance, and thus share their performance models, which allows kernel
calibration to be much faster, since measurements only have to be done once for all
CUDA GPUs.
</dd>
<dt>STARPU_PERF_MODEL_HOMOGENEOUS_OPENCL</dt>
<dd>
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_OPENCL
\addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_OPENCL
When this is set to 1, StarPU will assume that all OpenCL devices have the same
performance, and thus share their performance models, which allows kernel
calibration to be much faster, since measurements only have to be done once for all
OpenCL devices.
</dd>
<dt>STARPU_PERF_MODEL_HOMOGENEOUS_MIC</dt>
<dd>
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_MIC
\addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_MIC
When this is set to 1, StarPU will assume that all MIC devices have the same
performance, and thus share their performance models, which allows kernel
calibration to be much faster, since measurements only have to be done once for all
MIC devices.
</dd>
<dt>STARPU_PERF_MODEL_HOMOGENEOUS_SCC</dt>
<dd>
\anchor STARPU_PERF_MODEL_HOMOGENEOUS_SCC
\addindex __env__STARPU_PERF_MODEL_HOMOGENEOUS_SCC
When this is set to 1, StarPU will assume that all SCC devices have the same
performance, and thus share their performance models, which allows kernel
calibration to be much faster, since measurements only have to be done once for all
SCC devices.
</dd>
<dt>STARPU_HOSTNAME</dt>
<dd>
\anchor STARPU_HOSTNAME
\addindex __env__STARPU_HOSTNAME
When set, force the hostname to be used when dealing with performance model
files. Models are indexed by machine name. When running for example on
a homogeneous cluster, it is possible to share the models between
machines by setting <c>export STARPU_HOSTNAME=some_global_name</c>.
</dd>
<dt>STARPU_OPENCL_PROGRAM_DIR</dt>
<dd>
\anchor STARPU_OPENCL_PROGRAM_DIR
\addindex __env__STARPU_OPENCL_PROGRAM_DIR
This specifies the directory where the OpenCL codelet source files are
located. The function starpu_opencl_load_program_source() looks
for the codelet in the current directory, in the directory specified
by the environment variable \ref STARPU_OPENCL_PROGRAM_DIR, in the
directory <c>share/starpu/opencl</c> of the installation directory of
StarPU, and finally in the source directory of StarPU.
</dd>
<dt>STARPU_SILENT</dt>
<dd>
\anchor STARPU_SILENT
\addindex __env__STARPU_SILENT
This variable allows disabling verbose mode at runtime when StarPU
has been configured with the option \ref enable-verbose "--enable-verbose". It also
disables the display of StarPU information and warning messages.
</dd>
<dt>STARPU_LOGFILENAME</dt>
<dd>
\anchor STARPU_LOGFILENAME
\addindex __env__STARPU_LOGFILENAME
This variable specifies in which file the debugging output should be saved.
</dd>
<dt>STARPU_FXT_PREFIX</dt>
<dd>
\anchor STARPU_FXT_PREFIX
\addindex __env__STARPU_FXT_PREFIX
This variable specifies in which directory to save the trace generated if FxT is enabled. It needs to have a trailing '/' character.
</dd>
<dt>STARPU_FXT_TRACE</dt>
<dd>
\anchor STARPU_FXT_TRACE
\addindex __env__STARPU_FXT_TRACE
This variable specifies whether to generate (1) or not (0) the FxT trace in /tmp/prof_file_XXX_YYY. The default is 1 (generate it).
</dd>
<dt>STARPU_LIMIT_CUDA_devid_MEM</dt>
<dd>
\anchor STARPU_LIMIT_CUDA_devid_MEM
\addindex __env__STARPU_LIMIT_CUDA_devid_MEM
This variable specifies the maximum number of megabytes that should be
available to the application on the CUDA device with the identifier
<c>devid</c>. This variable is intended to be used for experimental
purposes as it emulates devices that have a limited amount of memory.
When defined, the variable overwrites the value of the variable
\ref STARPU_LIMIT_CUDA_MEM.
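For instance, a sketch limiting CUDA device 0 to 1024 megabytes while keeping
2048 megabytes on the other CUDA devices (the values and the application name
are purely illustrative):
\verbatim
export STARPU_LIMIT_CUDA_0_MEM=1024
export STARPU_LIMIT_CUDA_MEM=2048
./my_app
\endverbatim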
</dd>
<dt>STARPU_LIMIT_CUDA_MEM</dt>
<dd>
\anchor STARPU_LIMIT_CUDA_MEM
\addindex __env__STARPU_LIMIT_CUDA_MEM
This variable specifies the maximum number of megabytes that should be
available to the application on each CUDA device. This variable is
intended to be used for experimental purposes as it emulates devices
that have a limited amount of memory.
</dd>
<dt>STARPU_LIMIT_OPENCL_devid_MEM</dt>
<dd>
\anchor STARPU_LIMIT_OPENCL_devid_MEM
\addindex __env__STARPU_LIMIT_OPENCL_devid_MEM
This variable specifies the maximum number of megabytes that should be
available to the application on the OpenCL device with the identifier
<c>devid</c>. This variable is intended to be used for experimental
purposes as it emulates devices that have a limited amount of memory.
When defined, the variable overwrites the value of the variable
\ref STARPU_LIMIT_OPENCL_MEM.
</dd>
<dt>STARPU_LIMIT_OPENCL_MEM</dt>
<dd>
\anchor STARPU_LIMIT_OPENCL_MEM
\addindex __env__STARPU_LIMIT_OPENCL_MEM
This variable specifies the maximum number of megabytes that should be
available to the application on each OpenCL device. This variable is
intended to be used for experimental purposes as it emulates devices
that have a limited amount of memory.
</dd>
<dt>STARPU_LIMIT_CPU_MEM</dt>
<dd>
\anchor STARPU_LIMIT_CPU_MEM
\addindex __env__STARPU_LIMIT_CPU_MEM
This variable specifies the maximum number of megabytes that should be
available to the application on each CPU device. Setting it enables the
allocation cache in main memory.
</dd>
<dt>STARPU_MINIMUM_AVAILABLE_MEM</dt>
<dd>
\anchor STARPU_MINIMUM_AVAILABLE_MEM
\addindex __env__STARPU_MINIMUM_AVAILABLE_MEM
This specifies the minimum percentage of memory that should be available in GPUs
(or in main memory, when using out of core), below which a reclaiming pass is
performed. The default is 5%.
</dd>
<dt>STARPU_TARGET_AVAILABLE_MEM</dt>
<dd>
\anchor STARPU_TARGET_AVAILABLE_MEM
\addindex __env__STARPU_TARGET_AVAILABLE_MEM
This specifies the target percentage of memory that should be reached in
GPUs (or in main memory, when using out of core), when performing a periodic
reclaiming pass. The default is 10%.
</dd>
<dt>STARPU_MINIMUM_CLEAN_BUFFERS</dt>
<dd>
\anchor STARPU_MINIMUM_CLEAN_BUFFERS
\addindex __env__STARPU_MINIMUM_CLEAN_BUFFERS
This specifies the minimum percentage of buffers that should be clean in GPUs
(or in main memory, when using out of core), below which asynchronous writebacks will be
issued. The default is 5%.
</dd>
<dt>STARPU_TARGET_CLEAN_BUFFERS</dt>
<dd>
\anchor STARPU_TARGET_CLEAN_BUFFERS
\addindex __env__STARPU_TARGET_CLEAN_BUFFERS
This specifies the target percentage of clean buffers that should be reached in
GPUs (or in main memory, when using out of core), when performing an asynchronous
writeback pass. The default is 10%.
</dd>
<dt>STARPU_DIDUSE_BARRIER</dt>
<dd>
\anchor STARPU_DIDUSE_BARRIER
\addindex __env__STARPU_DIDUSE_BARRIER
When set to 1, StarPU will never evict a piece of data if it has not been used
by at least one task. This avoids odd behaviors under high memory pressure, but
can lead to deadlocks, so it is to be considered experimental only.
</dd>
<dt>STARPU_DISK_SWAP</dt>
<dd>
\anchor STARPU_DISK_SWAP
\addindex __env__STARPU_DISK_SWAP
This specifies a path where StarPU can push data when the main memory is getting
full.
</dd>
<dt>STARPU_DISK_SWAP_BACKEND</dt>
<dd>
\anchor STARPU_DISK_SWAP_BACKEND
\addindex __env__STARPU_DISK_SWAP_BACKEND
This specifies the backend to be used by StarPU to push data when the main
memory is getting full. The default is unistd (i.e. using read/write functions),
other values are stdio (i.e. using fread/fwrite), unistd_o_direct (i.e. using
read/write with O_DIRECT), and leveldb (i.e. using a leveldb database).
</dd>
<dt>STARPU_DISK_SWAP_SIZE</dt>
<dd>
\anchor STARPU_DISK_SWAP_SIZE
\addindex __env__STARPU_DISK_SWAP_SIZE
This specifies the maximum size to be used by StarPU to push data when the main
memory is getting full. The default is unlimited.
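A sketch of enabling out-of-core support, assuming a scratch directory
<c>/tmp/starpu_swap</c> already exists (the path, the size value, and the
application name are illustrative choices only):
\verbatim
export STARPU_DISK_SWAP=/tmp/starpu_swap
export STARPU_DISK_SWAP_BACKEND=unistd_o_direct
export STARPU_DISK_SWAP_SIZE=50000
./my_app
\endverbatim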
</dd>
<dt>STARPU_LIMIT_MAX_SUBMITTED_TASKS</dt>
<dd>
\anchor STARPU_LIMIT_MAX_SUBMITTED_TASKS
\addindex __env__STARPU_LIMIT_MAX_SUBMITTED_TASKS
This variable allows the user to control the task submission flow by specifying
to StarPU a maximum number of submitted tasks allowed at a given time, i.e. when
this limit is reached task submission becomes blocking until enough tasks have
completed, as specified by \ref STARPU_LIMIT_MIN_SUBMITTED_TASKS.
Setting it enables allocation cache buffer reuse in main memory.
</dd>
<dt>STARPU_LIMIT_MIN_SUBMITTED_TASKS</dt>
<dd>
\anchor STARPU_LIMIT_MIN_SUBMITTED_TASKS
\addindex __env__STARPU_LIMIT_MIN_SUBMITTED_TASKS
This variable allows the user to control the task submission flow by specifying
to StarPU a submitted task threshold to wait before unblocking task submission. This
variable has to be used in conjunction with \ref STARPU_LIMIT_MAX_SUBMITTED_TASKS,
which puts the task submission thread to
sleep. Setting it enables allocation cache buffer reuse in main memory.
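As a sketch, submission could be throttled so that it blocks above 16000
in-flight tasks and resumes once fewer than 12000 remain (the thresholds and
application name are arbitrary illustrations):
\verbatim
export STARPU_LIMIT_MAX_SUBMITTED_TASKS=16000
export STARPU_LIMIT_MIN_SUBMITTED_TASKS=12000
./my_app
\endverbatim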
</dd>
<dt>STARPU_TRACE_BUFFER_SIZE</dt>
<dd>
\anchor STARPU_TRACE_BUFFER_SIZE
\addindex __env__STARPU_TRACE_BUFFER_SIZE
This sets the buffer size for recording trace events, in MiB. Setting it to a big
size allows avoiding pauses in the trace while it is recorded on the disk. This
however also consumes memory, of course. The default value is 64.
</dd>
<dt>STARPU_GENERATE_TRACE</dt>
<dd>
\anchor STARPU_GENERATE_TRACE
\addindex __env__STARPU_GENERATE_TRACE
When set to <c>1</c>, this variable indicates that StarPU should automatically
generate a Paje trace when starpu_shutdown() is called.
</dd>
<dt>STARPU_ENABLE_STATS</dt>
<dd>
\anchor STARPU_ENABLE_STATS
\addindex __env__STARPU_ENABLE_STATS
When defined, enable gathering various data statistics (\ref DataStatistics).
</dd>
<dt>STARPU_MEMORY_STATS</dt>
<dd>
\anchor STARPU_MEMORY_STATS
\addindex __env__STARPU_MEMORY_STATS
When set to 0, disable the display of memory statistics on data which
have not been unregistered at the end of the execution (\ref MemoryFeedback).
</dd>
<dt>STARPU_MAX_MEMORY_USE</dt>
<dd>
\anchor STARPU_MAX_MEMORY_USE
\addindex __env__STARPU_MAX_MEMORY_USE
When set to 1, display at the end of the execution the maximum memory used by
StarPU for internal data structures during execution.
</dd>
<dt>STARPU_BUS_STATS</dt>
<dd>
\anchor STARPU_BUS_STATS
\addindex __env__STARPU_BUS_STATS
When defined, statistics about data transfers will be displayed when calling
starpu_shutdown() (\ref Profiling).
</dd>
<dt>STARPU_WORKER_STATS</dt>
<dd>
\anchor STARPU_WORKER_STATS
\addindex __env__STARPU_WORKER_STATS
When defined, statistics about the workers will be displayed when calling
starpu_shutdown() (\ref Profiling). When combined with the
environment variable \ref STARPU_PROFILING, it displays the energy
consumption (\ref Energy-basedScheduling).
</dd>
<dt>STARPU_STATS</dt>
<dd>
\anchor STARPU_STATS
\addindex __env__STARPU_STATS
When set to 0, data statistics will not be displayed at the
end of the execution of an application (\ref DataStatistics).
</dd>
<dt>STARPU_WATCHDOG_TIMEOUT</dt>
<dd>
\anchor STARPU_WATCHDOG_TIMEOUT
\addindex __env__STARPU_WATCHDOG_TIMEOUT
When set to a value other than 0, makes StarPU print an error
message whenever StarPU does not terminate any task for the given time (in µs),
but lets the application continue normally. Should
be used in combination with \ref STARPU_WATCHDOG_CRASH
(see \ref DetectionStuckConditions).
</dd>
<dt>STARPU_WATCHDOG_CRASH</dt>
<dd>
\anchor STARPU_WATCHDOG_CRASH
\addindex __env__STARPU_WATCHDOG_CRASH
When set to a value other than 0, it triggers a crash when the watchdog
timeout is reached, thus making it possible to catch the situation in gdb, etc.
(see \ref DetectionStuckConditions).
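For example, a sketch that aborts the (hypothetical) application if no task
completes within one second, so that the hang can be inspected in a debugger:
\verbatim
export STARPU_WATCHDOG_TIMEOUT=1000000   # 1 second, expressed in microseconds
export STARPU_WATCHDOG_CRASH=1
gdb --args ./my_app
\endverbatim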
</dd>
<dt>STARPU_TASK_BREAK_ON_SCHED</dt>
<dd>
\anchor STARPU_TASK_BREAK_ON_SCHED
\addindex __env__STARPU_TASK_BREAK_ON_SCHED
When this variable contains a job id, StarPU will raise SIGTRAP when the task
with that job id is being scheduled by the scheduler (at a scheduler-specific
point), which will be nicely caught by debuggers.
This only works for schedulers which have such a scheduling point defined
(see \ref DebuggingScheduling).
</dd>
<dt>STARPU_TASK_BREAK_ON_PUSH</dt>
<dd>
\anchor STARPU_TASK_BREAK_ON_PUSH
\addindex __env__STARPU_TASK_BREAK_ON_PUSH
When this variable contains a job id, StarPU will raise SIGTRAP when the task
with that job id is being pushed to the scheduler, which will be nicely caught by debuggers
(see \ref DebuggingScheduling).
</dd>
<dt>STARPU_TASK_BREAK_ON_POP</dt>
<dd>
\anchor STARPU_TASK_BREAK_ON_POP
\addindex __env__STARPU_TASK_BREAK_ON_POP
When this variable contains a job id, StarPU will raise SIGTRAP when the task
with that job id is being popped from the scheduler, which will be nicely caught by debuggers
(see \ref DebuggingScheduling).
</dd>
<dt>STARPU_DISABLE_KERNELS</dt>
<dd>
\anchor STARPU_DISABLE_KERNELS
\addindex __env__STARPU_DISABLE_KERNELS
When set to a value other than 1, it disables actually calling the kernel
functions, thus making it possible to quickly check that the task scheme is working
properly, without performing the actual application-provided computation.
</dd>
<dt>STARPU_HISTORY_MAX_ERROR</dt>
<dd>
\anchor STARPU_HISTORY_MAX_ERROR
\addindex __env__STARPU_HISTORY_MAX_ERROR
History-based performance models will drop measurements which are really far
from the measured average. This specifies the allowed variation. The default is
50 (%), i.e. the measurement is allowed to be x1.5 faster or /1.5 slower than the
average.
</dd>
<dt>STARPU_RAND_SEED</dt>
<dd>
\anchor STARPU_RAND_SEED
\addindex __env__STARPU_RAND_SEED
The random scheduler and some examples use random numbers for their own
working. Depending on the examples, the seed is by default just always 0 or
the current time() (unless simgrid mode is enabled, in which case it is always
0). \ref STARPU_RAND_SEED allows setting the seed to a specific value.
</dd>
<dt>STARPU_IDLE_TIME</dt>
<dd>
\anchor STARPU_IDLE_TIME
\addindex __env__STARPU_IDLE_TIME
When set to a valid filename, a corresponding file
will be created when shutting down StarPU. The file will contain the
sum of all the workers' idle time.
</dd>
<dt>STARPU_GLOBAL_ARBITER</dt>
<dd>
\anchor STARPU_GLOBAL_ARBITER
\addindex __env__STARPU_GLOBAL_ARBITER
When set to a positive value, StarPU will create an arbiter, which
implements an advanced but centralized management of concurrent data
accesses (see \ref ConcurrentDataAccess).
</dd>
</dl>
\section ConfiguringTheHypervisor Configuring The Hypervisor
<dl>
<dt>SC_HYPERVISOR_POLICY</dt>
<dd>
\anchor SC_HYPERVISOR_POLICY
\addindex __env__SC_HYPERVISOR_POLICY
Choose between the different resizing policies proposed by StarPU for the hypervisor:
idle, app_driven, feft_lp, teft_lp, ispeed_lp, throughput_lp, etc.
Use <c>SC_HYPERVISOR_POLICY=help</c> to get the list of available policies for the hypervisor.
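For instance, a sketch selecting the feft_lp policy (the application name is
hypothetical, and the application is assumed to use scheduling contexts with
the hypervisor enabled):
\verbatim
export SC_HYPERVISOR_POLICY=feft_lp
./my_app
\endverbatim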
</dd>
<dt>SC_HYPERVISOR_TRIGGER_RESIZE</dt>
<dd>
\anchor SC_HYPERVISOR_TRIGGER_RESIZE
\addindex __env__SC_HYPERVISOR_TRIGGER_RESIZE
Choose how the hypervisor should be triggered: <c>speed</c> if the resizing algorithm should
be called whenever the speed of the context does not correspond to an optimal precomputed value,
<c>idle</c> if the resizing algorithm should be called whenever the workers are idle for a period
longer than the value indicated when configuring the hypervisor.
</dd>
<dt>SC_HYPERVISOR_START_RESIZE</dt>
<dd>
\anchor SC_HYPERVISOR_START_RESIZE
\addindex __env__SC_HYPERVISOR_START_RESIZE
Indicate the moment when the resizing should become available. The value corresponds to the percentage
of the total execution time of the application. The default value is the resizing frame.
</dd>
<dt>SC_HYPERVISOR_MAX_SPEED_GAP</dt>
<dd>
\anchor SC_HYPERVISOR_MAX_SPEED_GAP
\addindex __env__SC_HYPERVISOR_MAX_SPEED_GAP
Indicate the ratio of speed difference between contexts that should trigger the hypervisor.
This situation may occur only when a theoretical speed could not be computed and the hypervisor
has no value to compare the speed to. Otherwise the resizing of a context is not influenced by the
speed of the other contexts, but only by the value that a context should have.
</dd>
<dt>SC_HYPERVISOR_STOP_PRINT</dt>
<dd>
\anchor SC_HYPERVISOR_STOP_PRINT
\addindex __env__SC_HYPERVISOR_STOP_PRINT
By default the values of the speed of the workers are printed during the execution
of the application. If the value 1 is given to this environment variable, this printing
is not done.
</dd>
<dt>SC_HYPERVISOR_LAZY_RESIZE</dt>
<dd>
\anchor SC_HYPERVISOR_LAZY_RESIZE
\addindex __env__SC_HYPERVISOR_LAZY_RESIZE
By default the hypervisor resizes the contexts in a lazy way, that is, workers are first added to a new context
before being removed from the previous one. Once these workers are clearly taken into account
in the new context (a task was popped there), they are removed from the previous one. If the application
would rather have the change in the distribution of workers applied right away, this variable should be set to 0.
</dd>
<dt>SC_HYPERVISOR_SAMPLE_CRITERIA</dt>
<dd>
\anchor SC_HYPERVISOR_SAMPLE_CRITERIA
\addindex __env__SC_HYPERVISOR_SAMPLE_CRITERIA
By default the hypervisor uses a sample of flops when computing the speed of the contexts and of the workers.
If this variable is set to <c>time</c>, the hypervisor uses a sample of time (10% of an approximation of the total
execution time of the application).
</dd>
</dl>
*/