  1. # StarPU --- Runtime system for heterogeneous multicore architectures.
  2. #
  3. # Copyright (C) 2012-2014,2016-2018 Inria
  4. # Copyright (C) 2009-2020 Université de Bordeaux
  5. # Copyright (C) 2010-2020 CNRS
  6. #
  7. # StarPU is free software; you can redistribute it and/or modify
  8. # it under the terms of the GNU Lesser General Public License as published by
  9. # the Free Software Foundation; either version 2.1 of the License, or (at
  10. # your option) any later version.
  11. #
  12. # StarPU is distributed in the hope that it will be useful, but
  13. # WITHOUT ANY WARRANTY; without even the implied warranty of
  14. # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
  15. #
  16. # See the GNU Lesser General Public License in COPYING.LGPL for more details.
  17. #
  18. StarPU 1.4.0 (git revision xxxx)
  19. ==============================================
  20. New features:
  21. * Fault tolerance support with starpu_task_ft_failed().
* Add a get_max_size method to data interfaces so that applications using
  variable-size data can express their maximal potential size.
* New offline tool to draw a graph showing the elapsed time between the
  sending or reception of data and its use by tasks.
* Add 4D tensor data interface (see the sketch after this list).
* New sched_tasks.rec trace file which monitors task scheduling push/pop actions.
* New STARPU_MPI_MEM_THROTTLE environment variable to throttle MPI
  submission according to memory use.
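
A minimal sketch of registering a 4D block of floats with the tensor interface.
The registration call name and its argument order (assumed here to be
starpu_tensor_data_register() with leading dimensions ldy/ldz/ldt before the
sizes) are assumptions to check against the installed headers.

    #include <stdint.h>
    #include <starpu.h>

    /* Register an nx*ny*nz*nt array of floats stored contiguously in main
     * memory as a 4D tensor handle (function name and argument order are
     * assumptions, see starpu_data_interfaces.h). */
    static starpu_data_handle_t register_tensor(float *buf,
                                                uint32_t nx, uint32_t ny,
                                                uint32_t nz, uint32_t nt)
    {
        starpu_data_handle_t handle;
        starpu_tensor_data_register(&handle, STARPU_MAIN_RAM, (uintptr_t)buf,
                                    nx, nx*ny, nx*ny*nz,   /* ldy, ldz, ldt */
                                    nx, ny, nz, nt, sizeof(buf[0]));
        return handle;
    }
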
  30. Small changes:
  31. * Use the S4U interface of Simgrid instead of xbt and MSG.
  32. StarPU 1.3.4 (git revision xxx)
  33. ==============================================
  34. Small features:
  35. * New environment variables STARPU_BUS_STATS_FILE and
  36. STARPU_WORKER_STATS_FILE to specify files in which to display
  37. statistics about data transfers and workers.
  38. * Add starpu_bcsr_filter_vertical_block filtering function.
  39. * Add starpu_interface_copy2d, 3d, and 4d to easily request data copies from
  40. data interfaces.
* Move the optimized CUDA 2D copy from the interfaces to the new
  starpu_cuda_copy2d_async_sync and starpu_cuda_copy3d_async_sync functions,
  and use them from starpu_interface_copy2d and 3d.
* New function starpu_task_watchdog_set_hook to specify a function
  to be called when the watchdog is raised.
  46. StarPU 1.3.3 (git revision 11afc5b007fe1ab1c729b55b47a5a98ef7f3cfad)
  47. ====================================================================
  48. New features:
* New semantics for the starpu_task_insert() (and similar functions) parameters
  STARPU_CALLBACK_ARG, STARPU_PROLOGUE_CALLBACK_ARG, and
  STARPU_PROLOGUE_CALLBACK_POP_ARG, which now respectively set
  starpu_task::callback_arg_free,
  starpu_task::prologue_callback_arg_free and
  starpu_task::prologue_callback_pop_arg_free to 1 when used.
  New parameters STARPU_CALLBACK_ARG_NFREE,
  STARPU_CALLBACK_WITH_ARG_NFREE, STARPU_PROLOGUE_CALLBACK_ARG_NFREE, and
  STARPU_PROLOGUE_CALLBACK_POP_ARG_NFREE set the corresponding
  fields of starpu_task to 0 (see the sketch after this list).
  59. * starpufft: Support 3D.
  60. * New modular-eager-prio scheduler.
  61. * Add 'ready' heuristic to modular schedulers.
  62. * New modular-heteroprio scheduler.
  63. * Add STARPU_TASK_SCHED_DATA
  64. * Add support for staging schedulers.
  65. * New modular-heteroprio-heft scheduler.
  66. * New dmdap "data-aware performance model (priority)" scheduler
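
A minimal sketch of how the _NFREE variants change the freeing behaviour of
callback arguments, assuming a user-defined codelet `cl` and callback `cb`
(both hypothetical names used only for illustration):

    #include <stdlib.h>
    #include <starpu.h>

    extern struct starpu_codelet cl;       /* assumed: defined elsewhere */
    static void cb(void *arg) { (void)arg; /* use arg */ }

    void submit_examples(void)
    {
        int *heap_arg = malloc(sizeof(*heap_arg));
        static int static_arg;
        *heap_arg = 42;

        /* STARPU_CALLBACK_ARG: callback_arg_free is set to 1, so StarPU
         * will free(heap_arg) when the task is destroyed. */
        starpu_task_insert(&cl,
                           STARPU_CALLBACK, cb,
                           STARPU_CALLBACK_ARG, heap_arg,
                           0);

        /* STARPU_CALLBACK_ARG_NFREE: the argument is NOT freed by StarPU,
         * so a non-heap pointer can safely be passed. */
        starpu_task_insert(&cl,
                           STARPU_CALLBACK, cb,
                           STARPU_CALLBACK_ARG_NFREE, &static_arg,
                           0);
    }
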
  67. Changes:
* Modification of the Native Fortran interface of the functions
  fstarpu_mpi_task_insert, fstarpu_mpi_task_build and
  fstarpu_mpi_task_post_build: they now take a single argument list holding
  the MPI communicator, the codelet and the various parameters for the task.
  72. Small features:
* New starpu_task_insert() (and similar functions) parameter STARPU_TASK_WORKERIDS
  which allows setting the fields starpu_task::workerids_len and
  starpu_task::workerids (see the sketch after this list).
* New starpu_task_insert() (and similar functions) parameters
  STARPU_SEQUENTIAL_CONSISTENCY, STARPU_TASK_NO_SUBMITORDER and
  STARPU_TASK_PROFILING_INFO
  79. * New function starpu_create_callback_task() which creates and
  80. submits an empty task with the specified callback
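
A minimal sketch of restricting a task to a chosen set of workers through
STARPU_TASK_WORKERIDS. Both the bitmap layout (one bit per worker id in
uint32_t words) and the parameter order (length then bitmap) are assumptions
to verify against the starpu_task::workerids documentation.

    #include <stdint.h>
    #include <starpu.h>

    extern struct starpu_codelet cl;   /* assumed codelet */

    /* Allow the task to run only on workers 0 and 2 (bitmap layout assumed). */
    void submit_on_workers_0_and_2(starpu_data_handle_t h)
    {
        static uint32_t allowed[1];
        allowed[0] = (1u << 0) | (1u << 2);

        starpu_task_insert(&cl,
                           STARPU_RW, h,
                           STARPU_TASK_WORKERIDS, 1u, allowed, /* len, bitmap */
                           0);
    }
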
  81. Small changes:
* Default modular worker queues to 2 tasks unless it is a heft
  scheduler.
* Separate out the STATUS_SLEEPING_SCHEDULING state from the
  STATUS_SLEEPING state.
  When running the scheduler while idle, workers do not go into
  the STATUS_SCHEDULING state, so that this time is counted as
  idle time instead of overhead.
  89. StarPU 1.3.2 (git revision af22a20fc00a37addf3cc6506305f89feed940b0)
  90. ====================================================================
  91. Small changes:
* Improve OpenMP support to detect whether the environment is valid before
  launching OpenMP
* Delete old code (the gordon and scc drivers, starpu-top, and the gcc plugin)
  and update the AUTHORS file accordingly
  96. * Add Heteroprio documentation (including a simple example)
  97. * Add a progression hook, to be called when workers are idle, which
  98. is used in the NewMadeleine implementation of StarPU-MPI to ensure
  99. communications progress.
  100. StarPU 1.3.1 (git revision 01949488b4f8e6fe26d2c200293b8aae5876b038)
  101. ====================================================================
  102. Small features:
  103. * Add starpu_filter_nparts_compute_chunk_size_and_offset helper.
  104. * Add starpu_bcsr_filter_canonical_block_child_ops.
  105. Small changes:
* Improve detection of NVML availability: do not only check that the
  library is available, but also check that the compiled code can actually run.
  108. StarPU 1.3.0 (git revision 24ca83c6dbb102e1cfc41db3bb21c49662067062)
  109. ====================================================================
  110. New features:
  111. * New scheduler 'heteroprio' with heterogeneous priorities
  112. * Support priorities for data transfers.
  113. * Add support for multiple linear regression performance models
  114. - Bump performance model file format version to 45.
  115. * Add MPI Master-Slave support to use the cores of remote nodes. Use the
  116. --enable-mpi-master-slave option to activate it.
  117. * Add STARPU_CUDA_THREAD_PER_DEV environment variable to support driving all
  118. GPUs from only one thread when almost all kernels are asynchronous.
  119. * Add starpu_replay tool to replay tasks.rec files with Simgrid.
* Add experimental support for NUMA nodes. Use STARPU_USE_NUMA to activate it.
* Add a new set of functions implementing out-of-core storage based on the HDF5 library.
  122. * Add a new implementation of StarPU-MPI on top of NewMadeleine
  123. * Add optional callbacks to notify an external resource manager
  124. about workers going to sleep and waking up
  125. * Add implicit support for asynchronous partition planning. This means one
  126. does not need to call starpu_data_partition_submit() etc. explicitly any
  127. more, StarPU will make the appropriate calls as needed.
* Add starpu_task_notify_ready_soon_register() to be notified when StarPU
  can estimate how soon a task will become ready.
  130. * New StarPU-MPI initialization function (starpu_mpi_init_conf())
  131. which allows StarPU-MPI to manage reserving a core for the MPI thread, or
  132. merging it with CPU driver 0.
* Add the possibility to delay the termination of a task: the function
  starpu_task_end_dep_add() specifies the number of calls to
  starpu_task_end_dep_release() needed to trigger the task termination;
  alternatively, starpu_task_declare_end_deps_array()
  and starpu_task_declare_end_deps() just declare termination dependencies
  between tasks (see the sketch after this list).
* Add the possibility to define the sequential consistency at the task level
  for each handle used by the task.
  141. * Add STARPU_SPECIFIC_NODE_LOCAL, STARPU_SPECIFIC_NODE_CPU, and
  142. STARPU_SPECIFIC_NODE_SLOW as generic values for codelet specific memory
  143. nodes which can be used instead of exact node numbers.
  144. * Add starpu_get_next_bindid() and starpu_bind_thread_on() to allow
  145. binding an application-started thread on a free core. Use it in
  146. StarPU-MPI to automatically bind the MPI thread on an available core.
* Add STARPU_RESERVE_NCPU environment variable and the
  starpu_conf::reserve_ncpus field to make StarPU use fewer cores.
  150. * Add STARPU_MAIN_THREAD_BIND environment variable to make StarPU reserve a
  151. core for the main thread.
  152. * New StarPU-RM resource management module to share processor cores and
  153. accelerator devices with other parallel runtime systems. Use
  154. --enable-starpurm option to activate it.
  155. * New schedulers modular-gemm, modular-pheft, modular-prandom and
  156. modular-prandom-prio
  157. * Add STARPU_MATRIX_SET_NX/NY/LD and STARPU_VECTOR_SET_NX to change a matrix
  158. tile or vector size without reallocating the buffer.
* Applications can change the allocation functions used by StarPU with
  starpu_malloc_set_hooks()
* Add XML output to starpu_perfmodel_display and a new
  starpu_perfmodel_dump_xml() function
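
A minimal sketch of delaying a task's termination with end dependencies. The
exact prototypes of starpu_task_end_dep_add()/starpu_task_end_dep_release()
(assumed here to take the task and, for the former, a count) should be
checked against the reference manual; `cl` is an assumed codelet.

    #include <starpu.h>

    extern struct starpu_codelet cl;   /* assumed codelet */

    void end_dep_example(void)
    {
        struct starpu_task *t = starpu_task_create();
        t->cl = &cl;
        t->detach = 0;

        /* The task will not be considered terminated until
         * starpu_task_end_dep_release() has been called twice on it
         * (in addition to its own completion). */
        starpu_task_end_dep_add(t, 2);
        starpu_task_submit(t);

        /* ... later, e.g. from the callbacks of other tasks ... */
        starpu_task_end_dep_release(t);
        starpu_task_end_dep_release(t);

        starpu_task_wait(t);
    }
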
  163. Small features:
* Scheduling contexts may now be associated with a user data pointer at creation
  time, which can later be retrieved through starpu_sched_ctx_get_user_data().
  166. * New environment variables STARPU_SIMGRID_TASK_SUBMIT_COST and
  167. STARPU_SIMGRID_FETCHING_INPUT_COST to simulate the cost of task
  168. submission and data fetching in simgrid mode.
  169. This provides more accurate simgrid predictions, especially for the
  170. beginning of the execution and regarding data transfers.
  171. * New environment variable STARPU_SIMGRID_SCHED_COST to take into
  172. account the time to perform scheduling when running in SimGrid mode.
  173. * New configure option --enable-mpi-pedantic-isend (disabled by
  174. default) to acquire data in STARPU_RW (instead of STARPU_R) before
  175. performing MPI_Isend() call
  176. * New function starpu_worker_display_names() to display the names of
  177. all the workers of a specified type.
  178. * Arbiters now support concurrent read access.
  179. * Add a field starpu_task::where similar to starpu_codelet::where
  180. which allows to restrict where to execute a task. Also add
  181. STARPU_TASK_WHERE to be used when calling starpu_task_insert().
  182. * Add SubmitOrder trace field.
  183. * Add workerids and workerids_len task fields.
  184. * Add priority management to StarPU-MPI. Can be disabled with
  185. the STARPU_MPI_PRIORITIES environment variable.
  186. * Add STARPU_MAIN_THREAD_CPUID and STARPU_MPI_THREAD_CPUID environment
  187. variables.
  188. * Add disk to disk copy functions and support asynchronous full read/write
  189. in disk backends.
* New starpu_task_insert() parameter STARPU_CL_ARGS_NFREE which allows
  setting codelet parameters without having them freed by StarPU
  (see the sketch after this list).
* New starpu_task_insert() parameter STARPU_TASK_DEPS_ARRAY which
  allows declaring task dependencies similarly to
  starpu_task_declare_deps_array()
  195. * Add dependency backward information in debugging mode for gdb's
  196. starpu-print-task
  197. * Add sched_data field in starpu_task structure.
  198. * New starpu_fxt_tool option -label-deps to label dependencies on
  199. the output graph
  200. * New environment variable STARPU_GENERATE_TRACE_OPTIONS to specify
  201. fxt options (to be used with STARPU_GENERATE_TRACE)
  202. * New function starpu_task_set() similar as starpu_task_build() but
  203. with a task object given as the first parameter
  204. * New functions
  205. starpu_data_partition_submit_sequential_consistency() and
  206. starpu_data_unpartition_submit_sequential_consistency()
  207. * Add a new value STARPU_TASK_SYNCHRONOUS to be used in
  208. starpu_task_insert() to define if the task is (or not) synchronous
  209. * Add memory states events in the traces.
  210. * Add starpu_sched_component_estimated_end_min_add() to fix termination
  211. estimations in modular schedulers.
* New function starpu_data_partition_not_automatic() to disable the
  automatic partitioning of a data handle for which an asynchronous
  plan has previously been submitted
  215. * Add starpu_task_declare_deps()
  216. * New function starpu_data_unpartition_submit_sequential_consistency_cb()
  217. to specify a callback for the task submitting the unpartitioning
  218. * New tool starpu_mpi_comm_trace.py to draw heatmap of MPI
  219. communications
  220. * Support for ARM performance libraries
  221. * Add functionality to disable signal catching either through field
  222. starpu_conf::catch_signals or through the environment variable
  223. STARPU_CATCH_SIGNALS
  224. * Support for OpenMP Taskloop directive
  225. * Optional data interface init function (used by the vector and
  226. matrix interfaces)
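
A minimal sketch contrasting STARPU_CL_ARGS (buffer freed by StarPU) with
STARPU_CL_ARGS_NFREE (buffer kept by the caller); the codelet `cl` and the
argument layout are assumptions used only for illustration.

    #include <stdlib.h>
    #include <string.h>
    #include <starpu.h>

    extern struct starpu_codelet cl;   /* assumed codelet reading cl_arg */

    void cl_args_example(void)
    {
        struct { int n; double alpha; } params = { 42, 3.14 };

        /* STARPU_CL_ARGS: pass a heap buffer and its size; StarPU frees it
         * when the task is destroyed. */
        void *heap_params = malloc(sizeof(params));
        memcpy(heap_params, &params, sizeof(params));
        starpu_task_insert(&cl, STARPU_CL_ARGS, heap_params, sizeof(params), 0);

        /* STARPU_CL_ARGS_NFREE: same, but the buffer stays owned by the
         * caller, so it may live on the stack or be reused for other tasks. */
        starpu_task_insert(&cl, STARPU_CL_ARGS_NFREE, &params, sizeof(params), 0);
        starpu_task_wait_for_all();   /* params must stay valid until then */
    }
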
  227. Changes:
  228. * Vastly improve simgrid simulation time.
  229. * Switch default scheduler to lws.
  230. * Add "to" parameter to pull_task and can_push methods of
  231. components.
  232. * Deprecate starpu_data_interface_ops::handle_to_pointer interface
  233. operation in favor of new starpu_data_interface_ops::to_pointer
  234. operation.
  235. * Sort data access requests by priority.
  236. * Cluster support is disabled by default, unless the configure
  237. option --enable-cluster is specified
  238. * For unpack operations, move the memory deallocation from
  239. starpu_data_unpack() to the interface function
  240. starpu_data_interface_ops::unpack_data(). Pack and unpack
  241. functions of predefined interfaces
  242. use public API starpu_malloc_on_node_flags() and
  243. starpu_free_on_node_flags() to allocate and de-allocate memory
  244. Small changes:
* Use asynchronous transfers for task data fetches which were not prefetched.
* Allow calling starpu_sched_ctx_set_policy_data on the main
  scheduler context
* Function starpu_is_initialized() is moved to the public API.
* Fix code to allow submitting tasks to empty contexts
  250. * STARPU_COMM_STATS also displays the bandwidth
  251. * Update data interfaces implementations to only use public API
  252. StarPU 1.2.9 (git revision 3aca8da3138a99e93d7f93905d2543bd6f1ea1df)
  253. ====================================================================
  254. Small changes:
  255. * Add STARPU_SIMGRID_TRANSFER_COST environment variable to easily disable
  256. data transfer costs.
  257. * New dmdap "data-aware performance model (priority)" scheduler
* Modification of the Native Fortran interface of the functions
  fstarpu_mpi_task_insert, fstarpu_mpi_task_build and
  fstarpu_mpi_task_post_build: they now take a single argument list holding
  the MPI communicator, the codelet and the various parameters for the task.
  262. StarPU 1.2.8 (git revision f66374c9ad39aefb7cf5dfc31f9ab3d756bcdc3c)
  263. ====================================================================
  264. Small features:
  265. * Minor fixes
  266. StarPU 1.2.7 (git revision 07cb7533c22958a76351bec002955f0e2818c530)
  267. ====================================================================
  268. Small features:
  269. * Add STARPU_HWLOC_INPUT environment variable to save initialization time.
  270. * Add starpu_data_set/get_ooc_flag.
  271. * Use starpu_mpi_tag_t (int64_t) for MPI communication tag
  272. StarPU 1.2.6 (git revision 23049adea01837479f309a75c002dacd16eb34ad)
  273. ====================================================================
  274. Small changes:
  275. * Fix crash for lws scheduler
  276. * Avoid making hwloc load PCI topology when CUDA is not enabled
  277. StarPU 1.2.5 (git revision 22f32916916d158e3420033aa160854d1dd341bd)
  278. ====================================================================
  279. Small features:
  280. * Add a new value STARPU_TASK_COLOR to be used in
  281. starpu_task_insert() to pick up the color of a task in dag.dot
  282. * Add starpu_data_pointer_is_inside().
  283. Changes:
  284. * Do not export -lcuda -lcudart -lOpenCL in *starpu*.pc.
  285. StarPU 1.2.4 (git revision 255cf98175ef462749780f30bfed21452b74b594)
  286. ====================================================================
  287. Small features:
* Catch signals SIGINT and SIGSEGV to dump fxt trace files.
  289. * New configure option --disable-icc to disable the compilation of
  290. specific ICC examples
* Add starpu_codelet_pack_arg_init, starpu_codelet_pack_arg, and
  starpu_codelet_pack_arg_fini for more fine-grained packing capabilities
  (see the sketch after this list).
  293. * Add starpu_task_insert_data_make_room,
  294. starpu_task_insert_data_process_arg,
  295. starpu_task_insert_data_process_array_arg,
  296. starpu_task_insert_data_process_mode_array_arg
* Do not show internal tasks in the fxt dag by default. Allow hiding
  acquisitions too.
  299. * Add a way to choose the dag.dot colors.
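
A minimal sketch of incremental argument packing; the struct name
starpu_codelet_pack_arg_data and the exact prototypes are assumed from the
function names above and should be checked against starpu_task_util.h.
`cl` is an assumed codelet.

    #include <starpu.h>

    extern struct starpu_codelet cl;   /* assumed codelet */

    void pack_incrementally(starpu_data_handle_t h)
    {
        int n = 1024;
        double alpha = 2.0;
        void *cl_arg;
        size_t cl_arg_size;

        /* Pack the scalar arguments one by one instead of in a single
         * variadic starpu_codelet_pack_args() call. */
        struct starpu_codelet_pack_arg_data state;
        starpu_codelet_pack_arg_init(&state);
        starpu_codelet_pack_arg(&state, &n, sizeof(n));
        starpu_codelet_pack_arg(&state, &alpha, sizeof(alpha));
        starpu_codelet_pack_arg_fini(&state, &cl_arg, &cl_arg_size);

        /* The kernel can unpack with starpu_codelet_unpack_args(cl_arg, ...). */
        starpu_task_insert(&cl, STARPU_RW, h,
                           STARPU_CL_ARGS, cl_arg, cl_arg_size, 0);
    }
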
  300. StarPU 1.2.3 (git revision 586ba6452a8eef99f275c891ce08933ae542c6c2)
  301. ====================================================================
  302. New features:
  303. * Add per-node MPI data.
  304. Small features:
  305. * When debug is enabled, starpu data accessors first check the
  306. validity of the data interface type
  307. * Print disk bus performances when STARPU_BUS_STATS is set
  308. * Add starpu_vector_filter_list_long filter.
  309. * Data interfaces now define a name through the struct starpu_data_interface_ops
* StarPU-MPI:
  - allow a predefined data interface not to define an MPI datatype and
    to be exchanged through pack/unpack operations
  - New function starpu_mpi_comm_get_attr() which returns
    the value of the attribute STARPU_MPI_TAG_UB, i.e. the upper
    bound for tag values.
  - New configure option --enable-mpi-verbose to manage the display of
    extra MPI debug messages.
  318. * Add STARPU_WATCHDOG_DELAY environment variable.
  319. * Add a 'waiting' worker status
  320. * Allow new value 'extra' for configure option --enable-verbose
  321. Small changes:
  322. * Add data_unregister event in traces
  323. * StarPU-MPI
  - push detached requests at the back of the testing list, so they
    are tested last since they will most probably finish last
  326. * Automatically initialize handles on data acquisition when
  327. reduction methods are provided, and make sure a handle is
  328. initialized before trying to read it.
  329. StarPU 1.2.2 (git revision a0b01437b7b91f33fb3ca36bdea35271cad34464)
  330. ===================================================================
  331. New features:
  332. * Add starpu_data_acquire_try and starpu_data_acquire_on_node_try.
  333. * Add NVCC_CC environment variable.
  334. * Add -no-flops and -no-events options to starpu_fxt_tool to make
  335. traces lighter
  336. * Add starpu_cusparse_init/shutdown/get_local_handle for proper CUDA
  337. overlapping with cusparse.
  338. * Allow precise debugging by setting STARPU_TASK_BREAK_ON_PUSH,
  339. STARPU_TASK_BREAK_ON_SCHED, STARPU_TASK_BREAK_ON_POP, and
  340. STARPU_TASK_BREAK_ON_EXEC environment variables, with the job_id
  341. of a task. StarPU will raise SIGTRAP when the task is being
  342. scheduled, pushed, or popped by the scheduler.
  343. Small features:
* New function starpu_task_get_job_id(struct starpu_task *task)
  which returns the job identifier for a given task
  346. * Show package/numa topology in starpu_machine_display
  347. * MPI: Add mpi communications in dag.dot
  348. * Add STARPU_PERF_MODEL_HOMOGENEOUS_CPU environment variable to
  349. allow having one perfmodel per CPU core
  350. * Add starpu_perfmodel_arch_comb_fetch function.
  351. * Add starpu_mpi_get_data_on_all_nodes_detached function.
  352. Small changes:
  353. * Output generated through STARPU_MPI_COMM has been modified to
  354. allow easier automated checking
* MPI: Fix reactivity at the beginning of the application: when a
  lot of ready requests have to be processed at the same time, we still
  want to poll the pending requests from time to time.
  358. * MPI: Fix gantt chart for starpu_mpi_irecv: it should use the
  359. termination time of the request, not the submission time.
  362. * MPI: enable more tests in simgrid mode
  363. * Use assumed-size instead of assumed-shape arrays for native
  364. fortran API, for better backward compatibility.
  365. * Fix odd ordering of CPU workers on CPUs due to GPUs stealing some
  366. cores
  367. StarPU 1.2.1 (git revision 473acaec8a1fb4f4c73d8b868e4f044b736b41ea)
  368. ====================================================================
  369. New features:
  370. * Add starpu_fxt_trace_user_event_string.
  371. * Add starpu_tasks_rec_complete tool to add estimation times in tasks.rec
  372. files.
  373. * Add STARPU_FXT_TRACE environment variable.
  374. * Add starpu_data_set_user_data and starpu_data_get_user_data.
  375. * Add STARPU_MPI_FAKE_SIZE and STARPU_MPI_FAKE_RANK to allow simulating
  376. execution of just one MPI node.
  377. * Add STARPU_PERF_MODEL_HOMOGENEOUS_CUDA/OPENCL/MIC/SCC to share performance
  378. models between devices, making calibration much faster.
  379. * Add modular-heft-prio scheduler.
  380. * Add starpu_cublas_get_local_handle helper.
* Add starpu_data_set_name, starpu_data_set_coordinates_array, and
  starpu_data_set_coordinates to describe data, and starpu_iteration_push and
  starpu_iteration_pop to describe tasks, for better offline trace analysis
  (see the sketch after this list).
  384. * New function starpu_bus_print_filenames() to display filenames
  385. storing bandwidth/affinity/latency information, available through
  386. tools/starpu_machine_display -i
  387. * Add support for Ayudame version 2.x debugging library.
  388. * Add starpu_sched_ctx_get_workers_list_raw, much less costly than
  389. starpu_sched_ctx_get_workers_list
  390. * Add starpu_task_get_name and use it to warn about dmda etc. using
  391. a dumb policy when calibration is not finished
  392. * MPI: Add functions to test for cached values
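
A minimal sketch of annotating data and tasks for offline trace analysis; the
argument conventions (a dimension count followed by coordinates for
starpu_data_set_coordinates, an iteration number for starpu_iteration_push)
are assumptions to check against the documentation, and `cl` is an assumed
codelet.

    #include <starpu.h>

    extern struct starpu_codelet cl;   /* assumed codelet */

    void annotate_and_submit(starpu_data_handle_t tile, unsigned i, unsigned j,
                             unsigned iteration)
    {
        /* Name the handle and give its 2D coordinates in the global matrix,
         * so that offline tools can group transfers and tasks per tile. */
        starpu_data_set_name(tile, "A");
        starpu_data_set_coordinates(tile, 2, i, j);

        /* Delimit an application iteration around the submission. */
        starpu_iteration_push(iteration);
        starpu_task_insert(&cl, STARPU_RW, tile, 0);
        starpu_iteration_pop();
    }
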
  393. Changes:
  394. * Fix performance regression of lws for small tasks.
  395. * Improve native Fortran support for StarPU
  396. Small changes:
  397. * Fix type of data home node to allow users to pass -1 to define
  398. temporary data
  399. * Fix compatibility with simgrid 3.14
  400. StarPU 1.2.0 (git revision 5a86e9b61cd01b7797e18956283cc6ea22adfe11)
  401. ====================================================================
  402. New features:
  403. * MIC Xeon Phi support
  404. * SCC support
  405. * New function starpu_sched_ctx_exec_parallel_code to execute a
  406. parallel code on the workers of the given scheduler context
  407. * MPI:
  - New internal communication system: a unique tag is now
    used for all communications, and a system
    of hashmaps on each node which stores pending receives has been
    implemented. Every message is now coupled with an envelope, sent
    before the corresponding data, which allows the receiver to
    allocate the data correctly, and to submit the matching receive of
    the envelope.
  415. - New function
  416. starpu_mpi_irecv_detached_sequential_consistency which
  417. allows to enable or disable the sequential consistency for
  418. the given data handle (sequential consistency will be
  419. enabled or disabled based on the value of the function
  420. parameter and the value of the sequential consistency
  421. defined for the given data)
  422. - New functions starpu_mpi_task_build() and
  423. starpu_mpi_task_post_build()
  424. - New flag STARPU_NODE_SELECTION_POLICY to specify a policy for
  425. selecting a node to execute the codelet when several nodes
  426. own data in W mode.
  - New node selection policies can be un/registered with the
    functions starpu_mpi_node_selection_register_policy() and
    starpu_mpi_node_selection_unregister_policy()
  430. - New environment variable STARPU_MPI_COMM which enables
  431. basic tracing of communications.
  432. - New function starpu_mpi_init_comm() which allows to specify
  433. a MPI communicator.
* New STARPU_COMMUTE flag which can be passed along STARPU_W or STARPU_RW to
  let StarPU commute write accesses (see the sketch after this list).
  436. * Out-of-core support, through registration of disk areas as additional memory
  437. nodes. It can be enabled programmatically or through the STARPU_DISK_SWAP*
  438. environment variables.
  439. * Reclaiming is now periodically done before memory becomes full. This can
  440. be controlled through the STARPU_*_AVAILABLE_MEM environment variables.
* New hierarchical schedulers which allow users to easily build
  their own scheduler, by coding each "box" they want themselves, or by
  combining existing boxes provided by StarPU. Hierarchical
  schedulers have very interesting scalability properties.
  445. * Add STARPU_CUDA_ASYNC and STARPU_OPENCL_ASYNC flags to allow asynchronous
  446. CUDA and OpenCL kernel execution.
  447. * Add STARPU_CUDA_PIPELINE and STARPU_OPENCL_PIPELINE to specify how
  448. many asynchronous tasks are submitted in advance on CUDA and
  449. OpenCL devices. Setting the value to 0 forces a synchronous
  450. execution of all tasks.
  451. * Add CUDA concurrent kernel execution support through
  452. the STARPU_NWORKER_PER_CUDA environment variable.
  453. * Add CUDA and OpenCL kernel submission pipelining, to overlap costs and allow
  454. concurrent kernel execution on Fermi cards.
  455. * New locality work stealing scheduler (lws).
* Add STARPU_VARIABLE_NBUFFERS to be set in cl.nbuffers, and nbuffers and
  modes fields to the task structure, which permit defining codelets taking a
  variable number of data buffers.
  459. * Add support for implementing OpenMP runtimes on top of StarPU
  460. * New performance model format to better represent parallel tasks.
  461. Used to provide estimations for the execution times of the
  462. parallel tasks on scheduling contexts or combined workers.
  463. * starpu_data_idle_prefetch_on_node and
  464. starpu_idle_prefetch_task_input_on_node allow to queue prefetches to be done
  465. only when the bus is idle.
  466. * Make starpu_data_prefetch_on_node not forcibly flush data out, introduce
  467. starpu_data_fetch_on_node for that.
  468. * Add data access arbiters, to improve parallelism of concurrent data
  469. accesses, notably with STARPU_COMMUTE.
  470. * Anticipative writeback, to flush dirty data asynchronously before the
  471. GPU device is full. Disabled by default. Use STARPU_MINIMUM_CLEAN_BUFFERS
  472. and STARPU_TARGET_CLEAN_BUFFERS to enable it.
* Add starpu_data_wont_use to advise that a piece of data will not be used
  in the near future.
  475. * Enable anticipative writeback by default.
  476. * New scheduler 'dmdasd' that considers priority when deciding on
  477. which worker to schedule
  478. * Add the capability to define specific MPI datatypes for
  479. StarPU user-defined interfaces.
  480. * Add tasks.rec trace output to make scheduling analysis easier.
  481. * Add Fortran 90 module and example using it
  482. * New StarPU-MPI gdb debug functions
  483. * Generate animated html trace of modular schedulers.
  484. * Add asynchronous partition planning. It only supports coherency through
  485. the home node of data for now.
  486. * Add STARPU_MALLOC_SIMULATION_FOLDED flag to save memory when simulating.
  487. * Include application threads in the trace.
  488. * Add starpu_task_get_task_scheduled_succs to get successors of a task.
  489. * Add graph inspection facility for schedulers.
  490. * New STARPU_LOCALITY flag to mark data which should be taken into account
  491. by schedulers for improving locality.
  492. * Experimental support for data locality in ws and lws.
  493. * Add a preliminary framework for native Fortran support for StarPU
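
A minimal sketch of commutative write access, assuming a user-provided
accumulation codelet `accum_cl`: several tasks update the same handle and
StarPU may commute their order instead of enforcing submission order.

    #include <starpu.h>

    extern struct starpu_codelet accum_cl;   /* assumed accumulation codelet */

    /* Submit several contributions into the same accumulator handle.
     * With STARPU_RW | STARPU_COMMUTE, StarPU may reorder these updates
     * rather than serializing them in submission order. */
    void accumulate(starpu_data_handle_t acc,
                    starpu_data_handle_t *contribs, unsigned n)
    {
        for (unsigned i = 0; i < n; i++)
            starpu_task_insert(&accum_cl,
                               STARPU_RW | STARPU_COMMUTE, acc,
                               STARPU_R, contribs[i],
                               0);
    }
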
  494. Small features:
  495. * Tasks can now have a name (via the field const char *name of
  496. struct starpu_task)
  497. * New functions starpu_data_acquire_cb_sequential_consistency() and
  498. starpu_data_acquire_on_node_cb_sequential_consistency() which allows
  499. to enable or disable sequential consistency
  500. * New configure option --enable-fxt-lock which enables additional
  501. trace events focused on locks behaviour during the execution
* Functions starpu_insert_task and starpu_mpi_insert_task are
  renamed to starpu_task_insert and starpu_mpi_task_insert. The old
  names are kept to avoid breaking existing code.
  505. * New configure option --enable-calibration-heuristic which allows
  506. the user to set the maximum authorized deviation of the
  507. history-based calibrator.
  508. * Allow application to provide the task footprint itself.
  509. * New function starpu_sched_ctx_display_workers() to display worker
  510. information belonging to a given scheduler context
  511. * The option --enable-verbose can be called with
  512. --enable-verbose=extra to increase the verbosity
  513. * Add codelet size, footprint and tag id in the paje trace.
  514. * Add STARPU_TAG_ONLY, to specify a tag for traces without making StarPU
  515. manage the tag.
  516. * On Linux x86, spinlocks now block after a hundred tries. This avoids
  517. typical 10ms pauses when the application thread tries to submit tasks.
  518. * New function char *starpu_worker_get_type_as_string(enum starpu_worker_archtype type)
  519. * Improve static scheduling by adding support for specifying the task
  520. execution order.
  521. * Add starpu_worker_can_execute_task_impl and
  522. starpu_worker_can_execute_task_first_impl to optimize getting the
  523. working implementations
  524. * Add STARPU_MALLOC_NORECLAIM flag to allocate without running a reclaim if
  525. the node is out of memory.
* New flag STARPU_DATA_MODE_ARRAY for the starpu_task_insert function family,
  to allow defining an array of data handles
  along with their access modes (see the sketch after this list).
  529. * New configure option --enable-new-check to enable new testcases
  530. which are known to fail
  531. * Add starpu_memory_allocate and _deallocate to let the application declare
  532. its own allocation to the reclaiming engine.
  533. * Add STARPU_SIMGRID_CUDA_MALLOC_COST and STARPU_SIMGRID_CUDA_QUEUE_COST to
  534. disable CUDA costs simulation in simgrid mode.
  535. * Add starpu_task_get_task_succs to get the list of children of a given
  536. task.
  537. * Add starpu_malloc_on_node_flags, starpu_free_on_node_flags, and
  538. starpu_malloc_on_node_set_default_flags to control the allocation flags
  539. used for allocations done by starpu.
  540. * Ranges can be provided in STARPU_WORKERS_CPUID
  541. * Add starpu_fxt_autostart_profiling to be able to avoid autostart.
  542. * Add arch_cost_function perfmodel function field.
  543. * Add STARPU_TASK_BREAK_ON_SCHED, STARPU_TASK_BREAK_ON_PUSH, and
  544. STARPU_TASK_BREAK_ON_POP environment variables to debug schedulers.
  545. * Add starpu_sched_display tool.
  546. * Add starpu_memory_pin and starpu_memory_unpin to pin memory allocated
  547. another way than starpu_malloc.
  548. * Add STARPU_NOWHERE to create synchronization tasks with data.
* Document how to switch between different views of the same data.
* Add STARPU_NAME to specify a task name from a starpu_task_insert call.
* Add configure option to disable Fortran: --disable-fortran
  552. * Add configure option to give path for smpirun executable --with-smpirun
  553. * Add configure option to disable the build of tests --disable-build-tests
  554. * Add starpu-all-tasks debugging support
  555. * New function
  556. void starpu_opencl_load_program_source_malloc(const char *source_file_name, char **located_file_name, char **located_dir_name, char **opencl_program_source)
  557. which allocates the pointers located_file_name, located_dir_name
  558. and opencl_program_source.
  559. * Add submit_hook and do_schedule scheduler methods.
  560. * Add starpu_sleep.
  561. * Add starpu_task_list_ismember.
  562. * Add _starpu_fifo_pop_this_task.
  563. * Add STARPU_MAX_MEMORY_USE environment variable.
  564. * Add starpu_worker_get_id_check().
  565. * New function starpu_mpi_wait_for_all(MPI_Comm comm) that allows to
  566. wait until all StarPU tasks and communications for the given
  567. communicator are completed.
* New function starpu_codelet_unpack_args_and_copyleft() which
  allows copying into a new buffer the values which have not been unpacked by
  the current call
  571. * Add STARPU_CODELET_SIMGRID_EXECUTE flag.
  572. * Add STARPU_CODELET_SIMGRID_EXECUTE_AND_INJECT flag.
* Add STARPU_CL_ARGS flag to starpu_task_insert() and
  starpu_mpi_task_insert() calls
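
A minimal sketch of passing handles through a descriptor array; the convention
assumed here (STARPU_DATA_MODE_ARRAY followed by a struct starpu_data_descr
array and its length) should be checked against the task_insert documentation,
and `cl` is an assumed codelet.

    #include <starpu.h>

    extern struct starpu_codelet cl;   /* assumed codelet with cl.nbuffers = 3 */

    void submit_with_descr_array(starpu_data_handle_t a,
                                 starpu_data_handle_t b,
                                 starpu_data_handle_t c)
    {
        struct starpu_data_descr descrs[3] = {
            { .handle = a, .mode = STARPU_R  },
            { .handle = b, .mode = STARPU_R  },
            { .handle = c, .mode = STARPU_RW },
        };

        /* Equivalent to listing STARPU_R, a, STARPU_R, b, STARPU_RW, c,
         * but convenient when the buffer count is only known at runtime. */
        starpu_task_insert(&cl, STARPU_DATA_MODE_ARRAY, descrs, 3, 0);
    }
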
  575. Changes:
* Data interfaces (variable, vector, matrix and block) now define
  pack and unpack functions
* StarPU-MPI: Fix for being able to receive data which have not yet
  been registered by the application (i.e. it did not call
  starpu_data_set_tag(); the data are received as raw memory)
  581. * StarPU-MPI: Fix for being able to receive data with the same tag
  582. from several nodes (see mpi/tests/gather.c)
  583. * Remove the long-deprecated cost_model fields and task->buffers field.
  584. * Fix complexity of implicit task/data dependency, from quadratic to linear.
  585. Small changes:
  586. * Rename function starpu_trace_user_event() as
  587. starpu_fxt_trace_user_event()
  588. * "power" is renamed into "energy" wherever it applies, notably energy
  589. consumption performance models
  590. * Update starpu_task_build() to set starpu_task::cl_arg_free to 1 if
  591. some arguments of type ::STARPU_VALUE are given.
  592. * Simplify performance model loading API
* Better semantics for the environment variables STARPU_NMIC and
  STARPU_NMICDEVS (the number of devices and the number of cores):
  STARPU_NMIC will be the number of devices, and STARPU_NMICCORES
  will be the number of cores per device.
  597. StarPU 1.1.8 (git revision f7b7abe9f86361cbc96f2b51c6ad7336b7d1d628)
  598. ====================================================================
  599. The scheduling context release
  600. Small changes:
  601. * Fix compatibility with simgrid 3.14
  602. * Fix lock ordering for memory reclaiming
  603. StarPU 1.1.7 (git revision 341044b67809892cf4a388e482766beb50256907)
  604. ====================================================================
  605. The scheduling context release
  606. Small changes:
  607. * Fix type of data home node to allow users to pass -1 to define
  608. temporary data
  609. StarPU 1.1.6 (git revision cdffbd5f5447e4d076d659232b3deb14f3c20da6)
  610. ====================================================================
  611. The scheduling context release
  612. Small features:
  613. * Add starpu_task_get_task_succs to get the list of children of a given
  614. task.
  615. * Ranges can be provided in STARPU_WORKERS_CPUID
  616. Small changes:
  617. * Various fixes for MacOS and windows systems
  618. StarPU 1.1.5 (git revision 20469c6f3e7ecd6c0568c8e4e4b5b652598308d8xxx)
  619. =======================================================================
  620. The scheduling context release
  621. New features:
  622. * Add starpu_memory_pin and starpu_memory_unpin to pin memory allocated
  623. another way than starpu_malloc.
  624. * Add starpu_task_wait_for_n_submitted() and
  625. STARPU_LIMIT_MAX_NSUBMITTED_TASKS/STARPU_LIMIT_MIN_NSUBMITTED_TASKS to
  626. easily control the number of submitted tasks by making task submission
  627. block.
  628. * Add STARPU_NOWHERE to create synchronization tasks with data.
* Document how to switch between different views of the same data.
  630. * Add Fortran 90 module and example using it
  631. StarPU 1.1.4 (git revision 2a3d30b28d6d099d271134a786335acdbb3931a3)
  632. ====================================================================
  633. The scheduling context release
  634. New features:
* Fix and actually enable the allocation cache.
  636. * Enable allocation cache in main RAM when STARPU_LIMIT_CPU_MEM is set by
  637. the user.
  638. * New MPI functions starpu_mpi_issend and starpu_mpi_issend_detached
  639. to send data using a synchronous and non-blocking mode (internally
  640. uses MPI_Issend)
* New data access mode flag STARPU_SSEND to be set when calling
  starpu_mpi_insert_task to specify that the data has to be sent using a
  synchronous and non-blocking mode (see the sketch after this list)
* New environment variable STARPU_PERF_MODEL_DIR which can be set to
  specify a directory in which to store performance model files.
  When unset, the files are stored in $STARPU_HOME/.starpu/sampling
  647. * MPI:
  648. - New function starpu_mpi_data_register_comm to register a data
  649. with another communicator than MPI_COMM_WORLD
  650. - New functions starpu_mpi_data_set_rank() and starpu_mpi_data_set_tag()
  651. which call starpu_mpi_data_register_comm()
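
A minimal sketch of requesting synchronous-mode sends for a handle in an MPI
task insertion. The codelet `cl` and the data distribution are assumptions;
starpu_mpi_insert_task is the pre-1.2 name kept here to match this release,
and combining STARPU_SSEND with STARPU_R via a bitwise or is the assumed usage.

    #include <starpu.h>
    #include <starpu_mpi.h>

    extern struct starpu_codelet cl;   /* assumed codelet */

    /* The owner of 'h' will send it to the executing node with MPI_Issend
     * semantics (synchronous, non-blocking) because of STARPU_SSEND. */
    void submit_with_ssend(starpu_data_handle_t h)
    {
        starpu_mpi_insert_task(MPI_COMM_WORLD, &cl,
                               STARPU_R | STARPU_SSEND, h,
                               0);
    }
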
  652. Small features:
  653. * Add starpu_memory_wait_available() to wait for a given size to become
  654. available on a given node.
  655. * New environment variable STARPU_RAND_SEED to set the seed used for random
  656. numbers.
  657. * New function starpu_mpi_cache_set() to enable or disable the
  658. communication cache at runtime
  659. * Add starpu_paje_sort which sorts Pajé traces.
  660. Changes:
  661. * Fix complexity of implicit task/data dependency, from quadratic to linear.
  662. StarPU 1.1.3 (git revision 11afc5b007fe1ab1c729b55b47a5a98ef7f3cfad)
  663. ====================================================================
  664. The scheduling context release
  665. New features:
  666. * One can register an existing on-GPU buffer to be used by a handle.
  667. * Add the starpu_paje_summary statistics tool.
  668. * Enable gpu-gpu transfers for matrices.
* Let interfaces declare which transfers they allow with the can_copy
  method.
  671. Small changes:
  672. * Lock performance model files while writing and reading them to avoid
  673. issues on parallel launches, MPI runs notably.
  674. * Lots of build fixes for icc on Windows.
  675. StarPU 1.1.2 (git revision d14c550798630bbc4f3da2b07d793c47e3018f02)
  676. ====================================================================
  677. The scheduling context release
  678. New features:
  679. * The reduction init codelet is automatically used to initialize temporary
  680. buffers.
  681. * Traces now include a "scheduling" state, to show the overhead of the
  682. scheduler.
  683. * Add STARPU_CALIBRATE_MINIMUM environment variable to specify the minimum
  684. number of calibration measurements.
  685. * Add STARPU_TRACE_BUFFER_SIZE environment variable to specify the size of
  686. the trace buffer.
  687. StarPU 1.1.1 (git revision dab2e51117fac5bef767f3a6b7677abb2147d2f2)
  688. ====================================================================
  689. The scheduling context release
  690. New features:
  691. * MPI:
  692. - New variable STARPU_MPI_CACHE_STATS to print statistics on
  693. cache holding received data.
  694. - New function starpu_mpi_data_register() which sets the rank
  695. and tag of a data, and also allows to automatically clear
  696. the MPI communication cache when unregistering the data. It
  697. should be called instead of both calling
  698. starpu_data_set_tag() and starpu_data_set_rank()
  699. * Use streams for all CUDA transfers, even initiated by CPUs.
  700. * Add paje traces statistics tools.
  701. * Use streams for GPUA->GPUB and GPUB->GPUA transfers.
  702. Small features:
  703. * New STARPU_EXECUTE_ON_WORKER flag to specify the worker on which
  704. to execute the task.
  705. * New STARPU_DISABLE_PINNING environment variable to disable host memory
  706. pinning.
  707. * New STARPU_DISABLE_KERNELS environment variable to disable actual kernel
  708. execution.
  709. * New starpu_memory_get_total function to get the size of a memory node.
  710. * New starpu_parallel_task_barrier_init_n function to let a scheduler decide
  711. a set of workers without going through combined workers.
  712. Changes:
  713. * Fix simgrid execution.
  714. * Rename starpu_get_nready_tasks_of_sched_ctx to starpu_sched_ctx_get_nready_tasks
  715. * Rename starpu_get_nready_flops_of_sched_ctx to starpu_sched_ctx_get_nready_flops
  716. * New functions starpu_pause() and starpu_resume()
  717. * New codelet specific_nodes field to specify explicit target nodes for data.
  718. * StarPU-MPI: Fix overzealous allocation of memory.
  719. * Interfaces: Allow interface implementation to change pointers at will, in
  720. unpack notably.
  721. Small changes:
* Use big fat abortions when one tries to make a task or callback
  sleep, instead of just returning EDEADLK, which few people will test
  724. * By default, StarPU FFT examples are not compiled and checked, the
  725. configure option --enable-starpufft-examples needs to be specified
  726. to change this behaviour.
  727. StarPU 1.1.0 (git revision 3c4bc72ccef30e767680cad3d749c4e9010d4476)
  728. ====================================================================
  729. The scheduling context release
  730. New features:
  731. * OpenGL interoperability support.
  732. * Capability to store compiled OpenCL kernels on the file system
  733. * Capability to load compiled OpenCL kernels
  734. * Performance models measurements can now be provided explicitly by
  735. applications.
  736. * Capability to emit communication statistics when running MPI code
  737. * Add starpu_data_unregister_submit, starpu_data_acquire_on_node and
  738. starpu_data_invalidate_submit
* New functionality in the starpu_insert_task wrapper to pass an array of
  data handles via the parameter STARPU_DATA_ARRAY
  741. * Enable GPU-GPU direct transfers.
  742. * GCC plug-in
  743. - Add `registered' attribute
  744. - A new pass was added that warns about the use of possibly
  745. unregistered memory buffers.
  746. * SOCL
  747. - Manual mapping of commands on specific devices is now
  748. possible
  749. - SOCL does not require StarPU CPU tasks anymore. CPU workers
  750. are automatically disabled to enhance performance of OpenCL
  751. CPU devices
  752. * New interface: COO matrix.
  753. * Data interfaces: The pack operation of user-defined data interface
  754. defines a new parameter count which should be set to the size of
  755. the buffer created by the packing of the data.
  756. * MPI:
  757. - Communication statistics for MPI can only be enabled at
  758. execution time by defining the environment variable
  759. STARPU_COMM_STATS
  760. - Communication cache mechanism is enabled by default, and can
  761. only be disabled at execution time by setting the
  762. environment variable STARPU_MPI_CACHE to 0.
  763. - Initialisation functions starpu_mpi_initialize_extended()
  764. and starpu_mpi_initialize() have been made deprecated. One
  765. should now use starpu_mpi_init(int *, char ***, int). The
  766. last parameter indicates if MPI should be initialised.
  - Collective detached operations have new parameters, a
    callback function and an argument. This is to be consistent
    with the detached point-to-point communications.
  770. - When exchanging user-defined data interfaces, the size of
  771. the data is the size returned by the pack operation, i.e
  772. data with dynamic size can now be exchanged with StarPU-MPI.
  773. * Add experimental simgrid support, to simulate execution with various
  774. number of CPUs, GPUs, amount of memory, etc.
  775. * Add support for OpenCL simulators (which provide simulated execution time)
  776. * Add support for Temanejo, a task graph debugger
  777. * Theoretical bound lp output now includes data transfer time.
  778. * Update OpenCL driver to only enable CPU devices (the environment
  779. variable STARPU_OPENCL_ONLY_ON_CPUS must be set to a positive
  780. value when executing an application)
  781. * Add Scheduling contexts to separate computation resources
  782. - Scheduling policies take into account the set of resources corresponding
  783. to the context it belongs to
  784. - Add support to dynamically change scheduling contexts
  785. (Create and Delete a context, Add Workers to a context, Remove workers from a context)
  786. - Add support to indicate to which contexts the tasks are submitted
  787. * Add the Hypervisor to manage the Scheduling Contexts automatically
  788. - The Contexts can be registered to the Hypervisor
  789. - Only the registered contexts are managed by the Hypervisor
  - The Hypervisor can detect the initial distribution of resources of
    a context and construct it accordingly (the cost of execution is required)
  792. - Several policies can adapt dynamically the distribution of resources
  793. in contexts if the initial one was not appropriate
  794. - Add a platform to implement new policies of redistribution
  795. of resources
  796. * Implement a memory manager which checks the global amount of
  797. memory available on devices, and checks there is enough memory
  798. before doing an allocation on the device.
  799. * Discard environment variable STARPU_LIMIT_GPU_MEM and define
  800. instead STARPU_LIMIT_CUDA_MEM and STARPU_LIMIT_OPENCL_MEM
  801. * Introduce new variables STARPU_LIMIT_CUDA_devid_MEM and
  802. STARPU_LIMIT_OPENCL_devid_MEM to limit memory per specific device
  803. * Introduce new variable STARPU_LIMIT_CPU_MEM to limit memory for
  804. the CPU devices
* New function starpu_malloc_flags to define a memory allocation with
  constraints based on the following values:
  - STARPU_MALLOC_PINNED specifies memory should be pinned
  - STARPU_MALLOC_COUNT specifies the memory allocation should stay within
    the limits defined by the environment variables STARPU_LIMIT_xxx
    (see above). When no memory is left, starpu_malloc_flags tries
    to reclaim memory from StarPU and returns -ENOMEM on failure.
  (See the sketch after this list.)
  812. * starpu_malloc calls starpu_malloc_flags with a value of flag set
  813. to STARPU_MALLOC_PINNED
  814. * Define new function starpu_free_flags similarly to starpu_malloc_flags
  815. * Define new public API starpu_pthread which is similar to the
  816. pthread API. It is provided with 2 implementations: a pthread one
  817. and a Simgrid one. Applications using StarPU and wishing to use
  818. the Simgrid StarPU features should use it.
* Allow having a dynamically allocated number of buffers per task,
  and thus override the value defined by --enable-maxbuffers=XXX
* Performance model files are now stored in a directory whose name
  includes the version of the performance model format. The version
  number is also written in the file itself.
  When updating the format, the internal variable
  _STARPU_PERFMODEL_VERSION should be updated. It is then possible
  to switch easily between different versions of StarPU having
  different performance model formats.
* Tasks can now define an optional prologue callback which is executed
  on the host when the task becomes ready for execution, before getting
  scheduled.
  831. * Small CUDA allocations (<= 4MiB) are now batched to avoid the huge
  832. cudaMalloc overhead.
  833. * Prefetching is now done for all schedulers when it can be done whatever
  834. the scheduling decision.
  835. * Add a watchdog which permits to easily trigger a crash when StarPU gets
  836. stuck.
  837. * Document how to migrate data over MPI.
  838. * New function starpu_wakeup_worker() to be used by schedulers to
  839. wake up a single worker (instead of all workers) when submitting a
  840. single task.
  841. * The functions starpu_sched_set/get_min/max_priority set/get the
  842. priorities of the current scheduling context, i.e the one which
  843. was set by a call to starpu_sched_ctx_set_context() or the initial
  844. context if the function has not been called yet.
  845. * Fix for properly dealing with NAN on windows systems
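
A minimal sketch of a pinned, accounted allocation with starpu_malloc_flags
and starpu_free_flags, assuming a size argument in bytes as suggested by the
entry above.

    #include <errno.h>
    #include <stdio.h>
    #include <starpu.h>

    /* Allocate a pinned buffer that is accounted against the STARPU_LIMIT_*
     * budgets; fail gracefully when memory is tight. */
    float *alloc_pinned_counted(size_t nelems)
    {
        void *buf = NULL;
        int ret = starpu_malloc_flags(&buf, nelems * sizeof(float),
                                      STARPU_MALLOC_PINNED | STARPU_MALLOC_COUNT);
        if (ret == -ENOMEM)
        {
            fprintf(stderr, "allocation rejected by the memory limits\n");
            return NULL;
        }
        return buf;
    }

    void release_pinned_counted(float *buf, size_t nelems)
    {
        /* starpu_free_flags takes the same size and flags as the allocation. */
        starpu_free_flags(buf, nelems * sizeof(float),
                          STARPU_MALLOC_PINNED | STARPU_MALLOC_COUNT);
    }
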
  846. Small features:
  847. * Add starpu_worker_get_by_type and starpu_worker_get_by_devid
  848. * Add starpu_fxt_stop_profiling/starpu_fxt_start_profiling which permits to
  849. pause trace recording.
  850. * Add trace_buffer_size configuration field to permit to specify the tracing
  851. buffer size.
  852. * Add starpu_codelet_profile and starpu_codelet_histo_profile, tools which draw
  853. the profile of a codelet.
  854. * File STARPU-REVISION --- containing the SVN revision number from which
  855. StarPU was compiled --- is installed in the share/doc/starpu directory
  856. * starpu_perfmodel_plot can now directly draw GFlops curves.
  857. * New configure option --enable-mpi-progression-hook to enable the
  858. activity polling method for StarPU-MPI.
  859. * Permit to disable sequential consistency for a given task.
  860. * New macro STARPU_RELEASE_VERSION
  861. * New function starpu_get_version() to return as 3 integers the
  862. release version of StarPU.
  863. * Enable by default data allocation cache
  864. * New function starpu_perfmodel_directory() to print directory
  865. storing performance models. Available through the new option -d of
  866. the tool starpu_perfmodel_display
* New batch files to execute StarPU applications under Microsoft
  Visual Studio (they are installed in path_to_starpu/bin/msvc)
  869. * Add cl_arg_free, callback_arg_free, prologue_callback_arg_free fields to
  870. enable automatic free(cl_arg); free(callback_arg);
  871. free(prologue_callback_arg) on task destroy.
  872. * New function starpu_task_build
  873. * New configure options --with-simgrid-dir
  874. --with-simgrid-include-dir and --with-simgrid-lib-dir to specify
  875. the location of the SimGrid library
  876. Changes:
  877. * Rename all filter functions to follow the pattern
  878. starpu_DATATYPE_filter_FILTERTYPE. The script
  879. tools/dev/rename_filter.sh is provided to update your existing
  880. applications to use new filters function names.
  881. * Renaming of diverse functions and datatypes. The script
  882. tools/dev/rename.sh is provided to update your existing
  883. applications to use the new names. It is also possible to compile
  884. with the pkg-config package starpu-1.0 to keep using the old
  885. names. It is however recommended to update your code and to use
  886. the package starpu-1.1.
  887. * Fix the block filter functions.
  888. * Fix StarPU-MPI on Darwin.
  889. * The FxT code can now be used on systems other than Linux.
  890. * Keep only one hashtable implementation common/uthash.h
  891. * The cache of starpu_mpi_insert_task is fixed and thus now enabled by
  892. default.
  893. * Improve starpu_machine_display output.
  894. * Standardize objects name in the performance model API
  895. * SOCL
  896. - Virtual SOCL device has been removed
  897. - Automatic scheduling still available with command queues not
  898. assigned to any device
  899. - Remove modified OpenCL headers. ICD is now the only supported
  900. way to use SOCL.
  901. - SOCL test suite is only run when environment variable
  902. SOCL_OCL_LIB_OPENCL is defined. It should contain the location
  903. of the libOpenCL.so file of the OCL ICD implementation.
  904. * Fix main memory leak on multiple unregister/re-register.
  905. * Improve hwloc detection by configure
  906. * Cell:
  907. - It is no longer possible to enable the cell support via the
  908. gordon driver
  909. - Data interfaces no longer define functions to copy to and from
  910. SPU devices
  911. - Codelet no longer define pointer for Gordon implementations
  912. - Gordon workers are no longer enabled
  913. - Gordon performance models are no longer enabled
* Fix data transfer arrows in Paje traces.
* The "heft" scheduler no longer exists. Users should now pick "dmda"
  instead.
* StarPU can now use poti to generate Paje traces.
* Rename the scheduling policy "parallel greedy" to "parallel eager".
* starpu_scheduler.h is no longer automatically included by
  starpu.h; it has to be included manually when needed.
* New batch files to run StarPU applications with Microsoft Visual C.
* Add examples/release/Makefile to test StarPU examples against an
  installed version of StarPU. It can also be used to test
  examples using a previous API.
* The tutorial is installed in ${docdir}/tutorial.
* The schedulers eager_central_policy, dm and dmda no longer erroneously
  respect priorities; dmdas has to be used to respect priorities.
* StarPU-MPI: Fix a potential bug for user-defined datatypes. As MPI
  can reorder messages, we need to make sure the sending of the size
  of the data has completed.
* Documentation is now generated through Doxygen.
* Modify the perfmodel output format to allow future improvements.
* Fix proper handling of NaN on Windows systems.
* Function starpu_sched_ctx_create() now takes a variable argument
  list to define the scheduler to be used, and the minimum and
  maximum priority values.
* The functions starpu_sched_set/get_min/max_priority set/get the
  priorities of the current scheduling context, i.e. the one which
  was set by a call to starpu_sched_ctx_set_context(), or the initial
  context if that function has not been called yet (see the sketch
  after this list).
* MPI: Fix the livelock issue discovered while executing applications
  on a CPU+GPU cluster of machines, by adding a maximum trylock
  threshold before a blocking lock.
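A minimal sketch, with an illustrative helper name, of how a task can be
given the highest priority allowed by the current scheduling context; only
priority-aware schedulers such as dmdas will act on it:
  #include <starpu.h>

  /* Raise a task's priority to the maximum allowed by the current
   * scheduling context (the one selected with starpu_sched_ctx_set_context(),
   * or the initial context otherwise). */
  static void make_high_priority(struct starpu_task *task)
  {
      int max = starpu_sched_get_max_priority();
      task->priority = max;
  }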
Small changes:
* STARPU_NCPU should now be used instead of STARPU_NCPUS. STARPU_NCPUS is
  still available for compatibility reasons (see the sketch after this
  list).
* include/starpu.h includes all include/starpu_*.h files; applications
  therefore only need to #include <starpu.h>.
* Active task wait is now included in blocked time.
* Fix GCC plugin linking issues starting with GCC 4.7.
* Fix forcing calibration of never-calibrated archs.
* CUDA applications are no longer compiled with the "-arch sm_13"
  option. It is specifically added to the applications which need it.
* Explicitly name the non-sleeping, non-running time "Overhead", and use
  another color for it in ViTE traces.
* Use C99 variadic macro support, not the GNU extension.
* Fix a performance regression: dmda queues were inadvertently made
  LIFOs in r9611.
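A minimal sketch of the renamed variable in use (the value 4 is arbitrary;
exporting it from the shell before launching the application works just as
well):
  #include <stdio.h>
  #include <stdlib.h>
  #include <starpu.h>

  int main(void)
  {
      /* STARPU_NCPU is the preferred spelling since 1.1; STARPU_NCPUS still works. */
      setenv("STARPU_NCPU", "4", 1);

      if (starpu_init(NULL) != 0)
          return 1;
      printf("CPU workers: %u\n", starpu_cpu_worker_get_count());
      starpu_shutdown();
      return 0;
  }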
StarPU 1.0.3 (git revision 25f8b3a7b13050e99bf1725ca6f52cfd62e7a861)
====================================================================
Changes:
* Several bug fixes in the build system
* Bug fixes in the source code for non-Linux systems
* Fix generating FxT traces bigger than 64MiB.
* Improve ENODEV error detection in StarPU FFT
StarPU 1.0.2 (git revision 6f95de279d6d796a39debe8d6c5493b3bdbe0c37)
====================================================================
Changes:
* Add starpu_block_shadow_filter_func_vector and an example.
* Add tag dependencies in the trace-generated DAG.
* Fix CPU binding for optimized CPU-GPU transfers.
* Fix parallel tasks CPU binding and combined worker generation.
* Fix generating FxT traces bigger than 64MiB.
StarPU 1.0.1 (git revision 97ea6e15a273e23e4ddabf491b0f9481373ca01a)
====================================================================
Changes:
* hwloc support: warn users when hwloc is not found on the system, and
  produce an error when it is not explicitly disabled.
* Several bug fixes
* GCC plug-in
  - Add `#pragma starpu release'
  - Fix a bug when using the `acquire' pragma with function parameters
  - Slightly improve test suite coverage
  - Relax the GCC version check
* Update SOCL to use the new API
* Documentation improvements.
StarPU 1.0.0 (git revision d3ad9ca318ec9acfeaf8eb7d8a018b09e4722292)
====================================================================
The extensions-again release
New features:
* Add SOCL, an OpenCL interface on top of StarPU.
* Add a GCC plugin to extend the C interface with pragmas, which makes it
  easy to define codelets and issue tasks.
* Add a reduction mode to starpu_mpi_insert_task.
* A new multi-format interface permits using different binary formats
  on CPUs & GPUs, the conversion functions being provided by the
  application and called by StarPU as needed (and as rarely as
  possible).
* Deprecate cost_model and introduce cost_function, which is provided
  with the whole task structure, the target arch and the implementation
  number.
* Permit the application to provide its own size base for performance
  models.
* Applications can provide several implementations of a codelet for the
  same architecture (see the sketch after this list).
* Add a StarPU-Top feedback and steering interface.
* Permit specifying MPI tags for more efficient starpu_mpi_insert_task.
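A minimal sketch of a codelet providing two CPU implementations of the same
kernel (scale_ref, scale_fast and scale_cl are illustrative names); StarPU can
then pick one per task according to its performance models:
  #include <starpu.h>

  /* Reference CPU implementation: scale a registered vector in place. */
  static void scale_ref(void *buffers[], void *cl_arg)
  {
      float *v = (float *) STARPU_VECTOR_GET_PTR(buffers[0]);
      unsigned n = STARPU_VECTOR_GET_NX(buffers[0]);
      float factor = *(float *) cl_arg;
      for (unsigned i = 0; i < n; i++)
          v[i] *= factor;
  }

  /* Alternative implementation (e.g. hand-vectorized); same semantics. */
  static void scale_fast(void *buffers[], void *cl_arg)
  {
      scale_ref(buffers, cl_arg);
  }

  /* Two implementations for the same (CPU) architecture. */
  static struct starpu_codelet scale_cl =
  {
      .cpu_funcs = { scale_ref, scale_fast, NULL },
      .nbuffers = 1,
      .modes = { STARPU_RW },
  };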
Changes:
* Fix several memory leaks and race conditions.
* Make environment variables take precedence over the configuration
  passed to starpu_init().
* Libtool interface versioning has been included in the library names
  (libstarpu-1.0.so, libstarpumpi-1.0.so,
  libstarpufft-1.0.so, libsocl-1.0.so).
* Install headers under $includedir/starpu/1.0.
* Make the where field of struct starpu_codelet optional. When unset, its
  value is automatically set based on the availability of the
  different XXX_funcs fields of the codelet.
* Define access modes for data handles in starpu_codelet and no longer
  in starpu_task. Hence mark (struct starpu_task).buffers as
  deprecated, and add (struct starpu_task).handles and (struct
  starpu_codelet).modes (see the sketch after this list).
* The xxx_func fields of struct starpu_codelet are deprecated. One
  should use the xxx_funcs fields instead.
* Some types were renamed for consistency. When using pkg-config libstarpu,
  starpu_deprecated_api.h is automatically included (after starpu.h) to
  keep compatibility with existing software. Other changes are mentioned
  below; compatibility is also preserved for them.
  To port code to use the new names (this is not mandatory), the
  tools/dev/rename.sh script can be used, and pkg-config starpu-1.0 should
  be used.
* The communication cost in the heft and dmda scheduling strategies now
  takes into account the contention brought by the number of GPUs. This
  changes the meaning of the beta factor, whose default value of 1.0 should
  now be good enough in most cases.
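A minimal sketch of the new layout, with illustrative names (axpy_cpu,
axpy_cl, submit_axpy): the access modes are declared once in the codelet,
and the task only carries the handles:
  #include <starpu.h>

  /* CPU kernel: y <- y + x (element-wise), on two registered vectors. */
  static void axpy_cpu(void *buffers[], void *cl_arg)
  {
      (void) cl_arg;
      float *x = (float *) STARPU_VECTOR_GET_PTR(buffers[0]);
      float *y = (float *) STARPU_VECTOR_GET_PTR(buffers[1]);
      unsigned n = STARPU_VECTOR_GET_NX(buffers[1]);
      for (unsigned i = 0; i < n; i++)
          y[i] += x[i];
  }

  static struct starpu_codelet axpy_cl =
  {
      .cpu_funcs = { axpy_cpu, NULL },
      .nbuffers = 2,
      .modes = { STARPU_R, STARPU_RW },   /* access modes declared once, in the codelet */
  };

  static int submit_axpy(starpu_data_handle_t x, starpu_data_handle_t y)
  {
      struct starpu_task *task = starpu_task_create();
      task->cl = &axpy_cl;
      task->handles[0] = x;               /* replaces the deprecated task->buffers[] array */
      task->handles[1] = y;
      return starpu_task_submit(task);
  }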
Small features:
* Allow users to disable asynchronous data transfers between CPUs and
  GPUs.
* Update the OpenCL driver to enable CPU devices (the environment variable
  STARPU_OPENCL_ON_CPUS must be set to a positive value when
  executing an application).
* struct starpu_data_interface_ops --- the operations on a data
  interface --- defines a new function pointer allocate_new_data
  which creates a new data interface of the given type based on
  an existing handle.
* Add a field named magic to struct starpu_task which is set when
  initialising the task. starpu_task_submit will fail if the
  field does not have the right value. This hence avoids
  submitting tasks which have not been properly initialised.
* Add a hook function pre_exec_hook to struct starpu_sched_policy.
  The function is meant to be called in drivers. Schedulers
  can use it to be notified when a task is about to be computed.
* Add a codelet execution time statistics plot.
* Add bus speed to starpu_machine_display.
* Add STARPU_DATA_ACQUIRE_CB, which permits inlining the code to be
  executed (see the sketch after this list).
* Add gdb functions.
* Add complex support to the LU example.
* Permit using the same data several times in write mode in the
  parameters of the same task.
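A minimal sketch of the macro with an illustrative helper name; the handle is
assumed to be an already-registered vector, and the macro is expected to
release the handle itself once the inlined code has run:
  #include <stdio.h>
  #include <starpu.h>

  /* Run the inlined block once the handle can be read from main memory,
   * without writing a separate callback function. */
  static void notify_when_ready(starpu_data_handle_t vector_handle)
  {
      STARPU_DATA_ACQUIRE_CB(vector_handle, STARPU_R,
          puts("vector is now available in main memory"));
  }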
Small changes:
* Increase the default value of STARPU_MAXCPUS -- the maximum number of
  CPUs supported -- to 64.
* Add man pages for some of the tools.
* Add a C++ application example in examples/cpp/.
* Add an OpenMP fork-join example.
* Documentation improvements.
StarPU 0.9 (git revision 12bba8528fc0d85367d885cddc383ba54efca464)
==================================================================
The extensions release
* Provide the STARPU_REDUX data access mode
* Externalize the scheduler API.
* Add theoretical bound computation
* Add the void interface
* Add power consumption optimization
* Add parallel task support
* Add starpu_mpi_insert_task
* Add a profiling information interface.
* Add the STARPU_LIMIT_GPU_MEM environment variable.
* OpenCL fixes
* MPI fixes
* Improve the optimization documentation
* Upgrade to the hwloc 1.1 interface
* Add a Fortran example
* Add a Mandelbrot OpenCL example
* Add a CG example
* Add a stencil MPI example
* Initial support for CUDA 4
StarPU 0.4 (git revision ad8d8be3619f211f228c141282d7d504646fc2a6)
==================================================================
The API strengthening release
* Major API improvements
  - Provide the STARPU_SCRATCH data access mode
  - Rework the data filter interface
  - Rework the data interface structure
  - A script that automatically renames old functions to accommodate the new
    API is available from https://scm.gforge.inria.fr/svn/starpu/scripts/renaming
    (login: anonsvn, password: anonsvn)
* Implement dependencies between tasks directly (e.g. without tags)
* Implicit data-driven task dependencies simplify the design of
  data-parallel algorithms
* Add dynamic profiling capabilities
  - Provide per-task feedback
  - Provide per-worker feedback
  - Provide feedback about memory transfers
* Provide a library to help accelerate MPI applications
* Improve data transfer overhead prediction
  - Transparently benchmark buses to generate performance models
  - Bind accelerator-controlling threads with respect to NUMA locality
* Improve StarPU's portability
  - Add OpenCL support
  - Add support for Windows
StarPU 0.2.901 aka 0.3-rc1 (git revision 991f2abb772c17c3d45bbcf27f46197652e6a3ef)
==================================================================================
The asynchronous heterogeneous multi-accelerator release
* Many API changes and code cleanups
  - Implement starpu_worker_get_id
  - Implement starpu_worker_get_name
  - Implement starpu_worker_get_type
  - Implement starpu_worker_get_count
  - Implement starpu_display_codelet_stats
  - Implement starpu_data_prefetch_on_node
  - Expose the starpu_data_set_wt_mask function
* Support NVIDIA (heterogeneous) multi-GPU
* Add the data request mechanism
  - All data transfers now use data requests
  - Implement asynchronous data transfers
  - Implement a prefetch mechanism
  - Chain data requests to support GPU->RAM->GPU transfers
* Make it possible to bypass the scheduler and to assign a task to a specific
  worker
* Support restartable tasks to re-instantiate dependency task graphs
* Improve performance prediction
  - Model data transfer overhead
  - One model is created for each accelerator
* Support for CUDA's driver API is deprecated
* The STARPU_WORKERS_CUDAID and STARPU_WORKERS_CPUID environment variables make
  it possible to specify where to bind the workers
* Use the hwloc library to detect the actual number of cores
StarPU 0.2.0 (git revision 73e989f0783e10815aff394f80242760c4ed098c)
====================================================================
The Stabilizing-the-Basics release
* Various API cleanups
* Mac OS X is now supported
* Add dynamic code loading facilities onto the Cell's SPUs
* Improve performance analysis/feedback tools
* The application can interact with StarPU tasks
  - The application may access/modify data managed by the DSM
  - The application may wait for the termination of a (set of) task(s)
* An initial documentation is added
* More examples are supplied
StarPU 0.1.0 (git revision 911869a96b40c74eb92b30a43d3e08bf445d8078)
====================================================================
First release.
Status:
* Only Linux platforms are supported yet
* Supported architectures
  - multicore CPUs
  - NVIDIA GPUs (with CUDA 2.x)
  - experimental Cell/BE support
Changes:
* Scheduling facilities
  - run-time selection of the scheduling policy
  - basic auto-tuning facilities
* Software-based DSM
  - transparent data coherency management
  - high-level expressive interface
# Local Variables:
# mode: text
# coding: utf-8
# ispell-local-dictionary: "american"
# End: