
@c -*-texinfo-*-
@c This file is part of the StarPU Handbook.
@c Copyright (C) 2009--2011 Universit@'e de Bordeaux 1
@c Copyright (C) 2010, 2011, 2012 Centre National de la Recherche Scientifique
@c Copyright (C) 2011, 2012 Institut National de Recherche en Informatique et Automatique
@c See the file starpu.texi for copying conditions.
@menu
* Compilation configuration::
* Execution configuration through environment variables::
@end menu
@node Compilation configuration
@section Compilation configuration
The following arguments can be given to the @code{configure} script.
@menu
* Common configuration::
* Configuring workers::
* Extension configuration::
* Advanced configuration::
@end menu
@node Common configuration
@subsection Common configuration
@defvr {Configure option} --enable-debug
Enable debugging messages.
@end defvr
@defvr {Configure option} --enable-fast
Disable assertion checks, which saves computation time.
@end defvr
@defvr {Configure option} --enable-verbose
Increase the verbosity of the debugging messages. This can be disabled
at runtime by setting the environment variable @code{STARPU_SILENT} to
any value.
@smallexample
% STARPU_SILENT=1 ./vector_scal
@end smallexample
@end defvr
@defvr {Configure option} --enable-coverage
Enable flags for the @code{gcov} coverage tool.
@end defvr
@defvr {Configure option} --enable-quick-check
Specify that tests and examples should be run on a smaller data set,
i.e. allowing a faster execution time.
@end defvr
@defvr {Configure option} --with-hwloc
Specify that hwloc should be used by StarPU. hwloc should be found by
means of the @code{pkg-config} tool.
@end defvr
@defvr {Configure option} --with-hwloc=@var{prefix}
Specify that hwloc should be used by StarPU. hwloc should be found in the
directory specified by @var{prefix}.
@end defvr
@defvr {Configure option} --without-hwloc
Specify that hwloc should not be used by StarPU.
@end defvr
@defvr {Configure option} --disable-build-doc
Disable the creation of the documentation. This should be done on a
machine which does not have the tools @code{makeinfo} and @code{tex}.
@end defvr
Additionally, the @command{configure} script recognizes many variables, which
can be listed by typing @code{./configure --help}. For example,
@code{./configure NVCCFLAGS="-arch sm_13"} adds a flag for the compilation of
CUDA kernels.
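As an illustration, here is a hypothetical @command{configure} invocation
combining some of the options and variables described above (the option
set and the CUDA architecture flag are purely illustrative):
@smallexample
% ./configure --enable-quick-check --with-hwloc NVCCFLAGS="-arch sm_13"
@end smallexample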
@node Configuring workers
@subsection Configuring workers
@defvr {Configure option} --enable-maxcpus=@var{count}
Use at most @var{count} CPU cores. This information is then
available as the @code{STARPU_MAXCPUS} macro.
@end defvr
@defvr {Configure option} --disable-cpu
Disable the use of CPUs of the machine. Only GPUs etc. will be used.
@end defvr
@defvr {Configure option} --enable-maxcudadev=@var{count}
Use at most @var{count} CUDA devices. This information is then
available as the @code{STARPU_MAXCUDADEVS} macro.
@end defvr
@defvr {Configure option} --disable-cuda
Disable the use of CUDA, even if a valid CUDA installation was detected.
@end defvr
@defvr {Configure option} --with-cuda-dir=@var{prefix}
Search for CUDA under @var{prefix}, which should notably contain
@file{include/cuda.h}.
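For instance, assuming CUDA is installed under the illustrative prefix
@file{/usr/local/cuda}:
@smallexample
% ./configure --with-cuda-dir=/usr/local/cuda   # illustrative prefix
@end smallexample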
@end defvr
@defvr {Configure option} --with-cuda-include-dir=@var{dir}
Search for CUDA headers under @var{dir}, which should
notably contain @file{cuda.h}. This defaults to @code{/include} appended to the
value given to @code{--with-cuda-dir}.
@end defvr
@defvr {Configure option} --with-cuda-lib-dir=@var{dir}
Search for CUDA libraries under @var{dir}, which should notably contain
the CUDA shared libraries---e.g., @file{libcuda.so}. This defaults to
@code{/lib} appended to the value given to @code{--with-cuda-dir}.
@end defvr
@defvr {Configure option} --disable-cuda-memcpy-peer
Explicitly disable peer transfers when using CUDA 4.0.
@end defvr
@defvr {Configure option} --enable-maxopencldev=@var{count}
Use at most @var{count} OpenCL devices. This information is then
available as the @code{STARPU_MAXOPENCLDEVS} macro.
@end defvr
@defvr {Configure option} --disable-opencl
Disable the use of OpenCL, even if the SDK is detected.
@end defvr
@defvr {Configure option} --with-opencl-dir=@var{prefix}
Search for an OpenCL implementation under @var{prefix}, which should
notably contain @file{include/CL/cl.h} (or @file{include/OpenCL/cl.h} on
Mac OS).
@end defvr
@defvr {Configure option} --with-opencl-include-dir=@var{dir}
Search for OpenCL headers under @var{dir}, which should notably contain
@file{CL/cl.h} (or @file{OpenCL/cl.h} on Mac OS). This defaults to
@code{/include} appended to the value given to @code{--with-opencl-dir}.
@end defvr
@defvr {Configure option} --with-opencl-lib-dir=@var{dir}
Search for an OpenCL library under @var{dir}, which should notably
contain the OpenCL shared libraries---e.g., @file{libOpenCL.so}. This defaults to
@code{/lib} appended to the value given to @code{--with-opencl-dir}.
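As an illustration, for an OpenCL SDK installed under a hypothetical
prefix @file{/opt/opencl} whose libraries live in @file{lib64} rather
than @file{lib}, one could write:
@smallexample
% ./configure --with-opencl-dir=/opt/opencl \
              --with-opencl-lib-dir=/opt/opencl/lib64   # illustrative prefix
@end smallexample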
@end defvr
@defvr {Configure option} --enable-opencl-simulator
Treat the provided OpenCL implementation as a simulator, i.e. use
the kernel duration returned by OpenCL profiling information as wallclock time
instead of the actual measured real time. This requires SimGrid support.
@end defvr
@defvr {Configure option} --enable-maximplementations=@var{count}
Allow for at most @var{count} codelet implementations for the same
target device. This information is then available as the
@code{STARPU_MAXIMPLEMENTATIONS} macro.
@end defvr
@defvr {Configure option} --disable-asynchronous-copy
Disable asynchronous copies between CPU and GPU devices.
The AMD implementation of OpenCL is known to
fail when copying data asynchronously. When using this implementation,
it is therefore necessary to disable asynchronous data transfers.
@end defvr
@defvr {Configure option} --disable-asynchronous-cuda-copy
Disable asynchronous copies between CPU and CUDA devices.
@end defvr
@defvr {Configure option} --disable-asynchronous-opencl-copy
Disable asynchronous copies between CPU and OpenCL devices.
The AMD implementation of OpenCL is known to
fail when copying data asynchronously. When using this implementation,
it is therefore necessary to disable asynchronous data transfers.
@end defvr
@node Extension configuration
@subsection Extension configuration
@defvr {Configure option} --disable-socl
Disable the SOCL extension (@pxref{SOCL OpenCL Extensions}). By
default, it is enabled when an OpenCL implementation is found.
@end defvr
@defvr {Configure option} --disable-starpu-top
Disable the StarPU-Top interface (@pxref{StarPU-Top}). By default, it
is enabled when the required dependencies are found.
@end defvr
@defvr {Configure option} --disable-gcc-extensions
Disable the GCC plug-in (@pxref{C Extensions}). By default, it is
enabled when the GCC compiler provides plug-in support.
@end defvr
@defvr {Configure option} --with-mpicc=@var{path}
Use the @command{mpicc} compiler at @var{path}, for starpumpi
(@pxref{StarPU MPI support}).
@end defvr
@node Advanced configuration
@subsection Advanced configuration
@defvr {Configure option} --enable-perf-debug
Enable performance debugging through gprof.
@end defvr
@defvr {Configure option} --enable-model-debug
Enable performance model debugging.
@end defvr
@defvr {Configure option} --enable-stats
@c see ../../src/datawizard/datastats.c
Enable gathering of various data statistics (@pxref{Data statistics}).
@end defvr
@defvr {Configure option} --enable-maxbuffers
Define the maximum number of buffers that tasks will be able to take
as parameters, then available as the @code{STARPU_NMAXBUFS} macro.
@end defvr
@defvr {Configure option} --enable-allocation-cache
Enable the use of a data allocation cache to avoid the cost of memory
allocation with CUDA. Still experimental.
@end defvr
@defvr {Configure option} --enable-opengl-render
Enable the use of OpenGL for the rendering of some examples.
@c TODO: rather default to enabled when detected
@end defvr
@defvr {Configure option} --enable-blas-lib
Specify the BLAS library to be used by some of the examples. The
library has to be @code{atlas} or @code{goto}.
@end defvr
@defvr {Configure option} --disable-starpufft
Disable the build of libstarpufft, even if fftw or cuFFT is available.
@end defvr
@defvr {Configure option} --with-magma=@var{prefix}
Search for MAGMA under @var{prefix}. @var{prefix} should notably
contain @file{include/magmablas.h}.
@end defvr
@defvr {Configure option} --with-fxt=@var{prefix}
Search for FxT under @var{prefix}.
@url{http://savannah.nongnu.org/projects/fkt, FxT} is used to generate
traces of scheduling events, which can then be rendered using ViTE
(@pxref{Off-line, off-line performance feedback}). @var{prefix} should
notably contain @file{include/fxt/fxt.h}.
@end defvr
@defvr {Configure option} --with-perf-model-dir=@var{dir}
Store performance models under @var{dir}, instead of the current user's
home directory.
@end defvr
@defvr {Configure option} --with-goto-dir=@var{prefix}
Search for GotoBLAS under @var{prefix}, which should notably contain @file{libgoto.so} or @file{libgoto2.so}.
@end defvr
@defvr {Configure option} --with-atlas-dir=@var{prefix}
Search for ATLAS under @var{prefix}, which should notably contain
@file{include/cblas.h}.
@end defvr
@defvr {Configure option} --with-mkl-cflags=@var{cflags}
Use @var{cflags} to compile code that uses the MKL library.
@end defvr
@defvr {Configure option} --with-mkl-ldflags=@var{ldflags}
Use @var{ldflags} when linking code that uses the MKL library. Note
that the
@url{http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor/,
MKL website} provides a script to determine the linking flags.
@end defvr
@defvr {Configure option} --disable-build-examples
Disable the build of examples.
@end defvr
@defvr {Configure option} --enable-memory-stats
Enable memory statistics (@pxref{Memory feedback}).
@end defvr
@defvr {Configure option} --enable-simgrid
Enable simulation of execution in SimGrid, to allow easy experimentation with
various numbers of cores and GPUs, or amount of memory, etc. Experimental.
@end defvr
@node Execution configuration through environment variables
@section Execution configuration through environment variables
@menu
* Workers:: Configuring workers
* Scheduling:: Configuring the Scheduling engine
* Extensions::
* Misc:: Miscellaneous and debug
@end menu
@node Workers
@subsection Configuring workers
@defvr {Environment variable} STARPU_NCPU
Specify the number of CPU workers (thus not including workers dedicated to
controlling accelerators). Note that by default, StarPU will not allocate
more CPU workers than there are physical CPUs, and that some CPUs are used to control
the accelerators.
@end defvr
@defvr {Environment variable} STARPU_NCPUS
This variable is deprecated. You should use @code{STARPU_NCPU}.
@end defvr
@defvr {Environment variable} STARPU_NCUDA
Specify the number of CUDA devices that StarPU can use. If
@code{STARPU_NCUDA} is lower than the number of physical devices, it is
possible to select which CUDA devices should be used by means of the
@code{STARPU_WORKERS_CUDAID} environment variable. By default, StarPU will
create as many CUDA workers as there are CUDA devices.
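For example, to restrict a run of a hypothetical application
@file{./my_app} to 4 CPU workers and a single CUDA device:
@smallexample
% STARPU_NCPU=4 STARPU_NCUDA=1 ./my_app   # my_app is hypothetical
@end smallexample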
@end defvr
@defvr {Environment variable} STARPU_NOPENCL
OpenCL equivalent of the @code{STARPU_NCUDA} environment variable.
@end defvr
@defvr {Environment variable} STARPU_OPENCL_ON_CPUS
By default, the OpenCL driver only enables GPU and accelerator
devices. By setting the environment variable
@code{STARPU_OPENCL_ON_CPUS} to 1, the OpenCL driver will also enable
CPU devices.
@end defvr
@defvr {Environment variable} STARPU_WORKERS_NOBIND
Setting it to non-zero will prevent StarPU from binding its threads to
CPUs. This is for instance useful when running the testsuite in parallel.
@end defvr
@defvr {Environment variable} STARPU_WORKERS_CPUID
Passing an array of integers (starting from 0) in @code{STARPU_WORKERS_CPUID}
specifies on which logical CPU the different workers should be
bound. For instance, if @code{STARPU_WORKERS_CPUID = "0 1 4 5"}, the first
worker will be bound to logical CPU #0, the second CPU worker will be bound to
logical CPU #1 and so on. Note that the logical ordering of the CPUs is either
determined by the OS, or provided by the @code{hwloc} library in case it is
available.
Note that the first workers correspond to the CUDA workers, then come the
OpenCL workers, and finally the CPU workers. For example if
we have @code{STARPU_NCUDA=1}, @code{STARPU_NOPENCL=1}, @code{STARPU_NCPU=2}
and @code{STARPU_WORKERS_CPUID = "0 2 1 3"}, the CUDA device will be controlled
by logical CPU #0, the OpenCL device will be controlled by logical CPU #2, and
the logical CPUs #1 and #3 will be used by the CPU workers.
If the number of workers is larger than the array given in
@code{STARPU_WORKERS_CPUID}, the workers are bound to the logical CPUs in a
round-robin fashion: if @code{STARPU_WORKERS_CPUID = "0 1"}, the first and the
third (resp. second and fourth) workers will be put on CPU #0 (resp. CPU #1).
This variable is ignored if the @code{use_explicit_workers_bindid} flag of the
@code{starpu_conf} structure passed to @code{starpu_init} is set.
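The example above could thus be reproduced on the command line as follows
(@file{./my_app} being a hypothetical application name):
@smallexample
% STARPU_NCUDA=1 STARPU_NOPENCL=1 STARPU_NCPU=2 \
  STARPU_WORKERS_CPUID="0 2 1 3" ./my_app   # my_app is hypothetical
@end smallexample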
@end defvr
@defvr {Environment variable} STARPU_WORKERS_CUDAID
Similarly to the @code{STARPU_WORKERS_CPUID} environment variable, it is
possible to select which CUDA devices should be used by StarPU. On a machine
equipped with 4 GPUs, setting @code{STARPU_WORKERS_CUDAID = "1 3"} and
@code{STARPU_NCUDA=2} specifies that 2 CUDA workers should be created, and that
they should use CUDA devices #1 and #3 (the logical ordering of the devices is
the one reported by CUDA).
This variable is ignored if the @code{use_explicit_workers_cuda_gpuid} flag of
the @code{starpu_conf} structure passed to @code{starpu_init} is set.
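On the 4-GPU machine of this example, this would read
(@file{./my_app} being a hypothetical application):
@smallexample
% STARPU_NCUDA=2 STARPU_WORKERS_CUDAID="1 3" ./my_app   # my_app is hypothetical
@end smallexample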
@end defvr
@defvr {Environment variable} STARPU_WORKERS_OPENCLID
OpenCL equivalent of the @code{STARPU_WORKERS_CUDAID} environment variable.
This variable is ignored if the @code{use_explicit_workers_opencl_gpuid} flag of
the @code{starpu_conf} structure passed to @code{starpu_init} is set.
@end defvr
@defvr {Environment variable} STARPU_SINGLE_COMBINED_WORKER
If set, StarPU will create several workers which will not be able to work
concurrently. It will create combined workers whose size goes from 1 to the
total number of CPU workers in the system.
@end defvr
@defvr {Environment variable} STARPU_SYNTHESIZE_ARITY_COMBINED_WORKER
Let the user decide how many elements are allowed between combined workers
created from hwloc information. For instance, in the case of sockets with 6
cores without shared L2 caches, if @code{STARPU_SYNTHESIZE_ARITY_COMBINED_WORKER}
is set to 6, no combined worker will be synthesized beyond one for the socket
and one per core. If it is set to 3, 3 intermediate combined workers will be
synthesized, to divide the socket cores into 3 chunks of 2 cores. If it is set
to 2, 2 intermediate combined workers will be synthesized, to divide the socket
cores into 2 chunks of 3 cores, and then 3 additional combined workers will be
synthesized, to divide the former synthesized workers into a chunk of 2 cores
and the remaining core (for which no combined worker is synthesized since there
is already a normal worker for it).
The default, 2, thus makes StarPU tend to build binary trees of combined
workers.
@end defvr
@defvr {Environment variable} STARPU_DISABLE_ASYNCHRONOUS_COPY
Disable asynchronous copies between CPU and GPU devices.
The AMD implementation of OpenCL is known to
fail when copying data asynchronously. When using this implementation,
it is therefore necessary to disable asynchronous data transfers.
@end defvr
@defvr {Environment variable} STARPU_DISABLE_ASYNCHRONOUS_CUDA_COPY
Disable asynchronous copies between CPU and CUDA devices.
@end defvr
@defvr {Environment variable} STARPU_DISABLE_ASYNCHRONOUS_OPENCL_COPY
Disable asynchronous copies between CPU and OpenCL devices.
The AMD implementation of OpenCL is known to
fail when copying data asynchronously. When using this implementation,
it is therefore necessary to disable asynchronous data transfers.
@end defvr
@defvr {Environment variable} STARPU_DISABLE_CUDA_GPU_GPU_DIRECT
Disable direct CUDA transfers from GPU to GPU, and let CUDA copy through RAM
instead. This permits testing the performance effect of GPU-Direct.
@end defvr
@node Scheduling
@subsection Configuring the Scheduling engine
@defvr {Environment variable} STARPU_SCHED
Choose between the different scheduling policies proposed by StarPU:
random, work stealing, greedy, with performance models, etc.
Use @code{STARPU_SCHED=help} to get the list of available schedulers.
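For instance, to run a hypothetical application with the @code{dmda}
policy (one of the policies listed by @code{STARPU_SCHED=help}):
@smallexample
% STARPU_SCHED=dmda ./my_app   # my_app is hypothetical
@end smallexample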
@end defvr
@defvr {Environment variable} STARPU_CALIBRATE
If this variable is set to 1, the performance models are calibrated during
the execution. If it is set to 2, the previous values are dropped to restart
calibration from scratch. Setting this variable to 0 disables calibration;
this is the default behaviour.
Note: this currently only applies to the @code{dm}, @code{dmda} and @code{heft} scheduling policies.
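For example, to drop the existing performance models and recalibrate from
scratch while using a performance-model-based policy:
@smallexample
% STARPU_CALIBRATE=2 STARPU_SCHED=dmda ./my_app   # my_app is hypothetical
@end smallexample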
@end defvr
@defvr {Environment variable} STARPU_BUS_CALIBRATE
If this variable is set to 1, the bus is recalibrated during initialization.
@end defvr
@defvr {Environment variable} STARPU_PREFETCH
@anchor{STARPU_PREFETCH}
This variable indicates whether data prefetching should be enabled (0 means
that it is disabled). If prefetching is enabled, when a task is scheduled to be
executed e.g. on a GPU, StarPU will request an asynchronous transfer in
advance, so that data is already present on the GPU when the task starts. As a
result, computation and data transfers are overlapped.
Note that prefetching is enabled by default in StarPU.
@end defvr
@defvr {Environment variable} STARPU_SCHED_ALPHA
To estimate the cost of a task StarPU takes into account the estimated
computation time (obtained thanks to performance models). The alpha factor is
the coefficient to be applied to it before adding it to the communication part.
@end defvr
@defvr {Environment variable} STARPU_SCHED_BETA
To estimate the cost of a task StarPU takes into account the estimated
data transfer time (obtained thanks to performance models). The beta factor is
the coefficient to be applied to it before adding it to the computation part.
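As an illustration, the following purely illustrative setting increases the
weight given to data transfer time in the cost estimate of the @code{dmda}
policy:
@smallexample
% STARPU_SCHED=dmda STARPU_SCHED_BETA=2 ./my_app   # value 2 is illustrative
@end smallexample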
@end defvr
@defvr {Environment variable} STARPU_SCHED_GAMMA
Define the execution time penalty of a joule (@pxref{Power-based scheduling}).
@end defvr
@defvr {Environment variable} STARPU_IDLE_POWER
Define the idle power of the machine (@pxref{Power-based scheduling}).
@end defvr
@defvr {Environment variable} STARPU_PROFILING
Enable on-line performance monitoring (@pxref{Enabling on-line performance monitoring}).
@end defvr
@node Extensions
@subsection Extensions
@defvr {Environment variable} SOCL_OCL_LIB_OPENCL
The SOCL test suite is only run when the environment variable
@code{SOCL_OCL_LIB_OPENCL} is defined. It should contain the location
of the @file{libOpenCL.so} file of the OCL ICD implementation.
@end defvr
@defvr {Environment variable} STARPU_COMM_STATS
@anchor{STARPU_COMM_STATS}
Communication statistics for starpumpi (@pxref{StarPU MPI support})
will be enabled when the environment variable @code{STARPU_COMM_STATS}
is set to a value other than 0.
@end defvr
@defvr {Environment variable} STARPU_MPI_CACHE
@anchor{STARPU_MPI_CACHE}
Communication cache for starpumpi (@pxref{StarPU MPI support}) will be
disabled when the environment variable @code{STARPU_MPI_CACHE} is set
to 0. It is enabled by default, i.e. for any other value of the variable
@code{STARPU_MPI_CACHE}.
@end defvr
@node Misc
@subsection Miscellaneous and debug
@defvr {Environment variable} STARPU_OPENCL_PROGRAM_DIR
@anchor{STARPU_OPENCL_PROGRAM_DIR}
This specifies the directory where the OpenCL codelet source files are
located. The function @ref{starpu_opencl_load_program_source} looks
for the codelet in the current directory, in the directory specified
by the environment variable @code{STARPU_OPENCL_PROGRAM_DIR}, in the
directory @code{share/starpu/opencl} of the installation directory of
StarPU, and finally in the source directory of StarPU.
@end defvr
@defvr {Environment variable} STARPU_SILENT
This variable allows disabling verbose mode at runtime when StarPU
has been configured with the option @code{--enable-verbose}. It also
disables the display of StarPU information and warning messages.
@end defvr
@defvr {Environment variable} STARPU_LOGFILENAME
This variable specifies the file to which the debugging output should be saved.
@end defvr
@defvr {Environment variable} STARPU_FXT_PREFIX
This variable specifies the directory in which to save the trace generated when FxT is enabled. It needs to have a trailing '/' character.
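For instance, with an illustrative target directory (note the trailing
slash):
@smallexample
% STARPU_FXT_PREFIX=/tmp/starpu_traces/ ./my_app   # directory is illustrative
@end smallexample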
@end defvr
@defvr {Environment variable} STARPU_LIMIT_GPU_MEM
This variable specifies the maximum number of megabytes that should be
available to the application on each GPU. In case this value is smaller than
the size of the memory of a GPU, StarPU pre-allocates a buffer to waste memory
on the device. This variable is intended to be used for experimental purposes
as it emulates devices that have a limited amount of memory.
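For example, to emulate GPUs with only 1024 MB of memory available to the
application:
@smallexample
% STARPU_LIMIT_GPU_MEM=1024 ./my_app   # my_app is hypothetical
@end smallexample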
@end defvr
@defvr {Environment variable} STARPU_GENERATE_TRACE
When set to 1, this variable indicates that StarPU should automatically
generate a Paje trace when @code{starpu_shutdown()} is called.
@end defvr
@defvr {Environment variable} STARPU_MEMORY_STATS
When set to 0, disable the display of memory statistics on data which
have not been unregistered at the end of the execution (@pxref{Memory
feedback}).
@end defvr
@defvr {Environment variable} STARPU_BUS_STATS
When defined, statistics about data transfers will be displayed when calling
@code{starpu_shutdown()} (@pxref{Profiling}).
@end defvr
@defvr {Environment variable} STARPU_WORKER_STATS
When defined, statistics about the workers will be displayed when calling
@code{starpu_shutdown()} (@pxref{Profiling}). When combined with the
environment variable @code{STARPU_PROFILING}, it displays the power
consumption (@pxref{Power-based scheduling}).
@end defvr
@defvr {Environment variable} STARPU_STATS
When set to 0, data statistics will not be displayed at the
end of the execution of an application (@pxref{Data statistics}).
@end defvr