# StarPU --- Runtime system for heterogeneous multicore architectures.
#
# Copyright (C) 2009-2017 Université de Bordeaux
# Copyright (C) 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017 CNRS
# Copyright (C) 2014, 2016, 2017 INRIA
#
# StarPU is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or (at
# your option) any later version.
#
# StarPU is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# See the GNU Lesser General Public License in COPYING.LGPL for more details.

StarPU 1.3.0 (svn revision xxxx)
==============================================

New features:
  * New scheduler with heterogeneous priorities.
  * Support priorities for data transfers.
  * Add support for the Ayudame version 2.x debugging library.
  * Add support for multiple linear regression performance models.
  * Add MPI Master-Slave support to use the cores of remote nodes. Use the
    --enable-mpi-master-slave option to activate it.
  * Add the STARPU_CUDA_THREAD_PER_DEV environment variable to support driving
    all GPUs from only one thread when almost all kernels are asynchronous.
  * Add the starpu_replay tool to replay tasks.rec files with SimGrid.

Small features:
  * Scheduling contexts may now be associated with a user data pointer at
    creation time, which can later be recalled through
    starpu_sched_ctx_get_user_data().
  * Add STARPU_SIMGRID_TASK_SUBMIT_COST and STARPU_SIMGRID_FETCHING_INPUT_COST
    to simulate the cost of task submission and data fetching in SimGrid mode.
    This provides more accurate SimGrid predictions, especially for the
    beginning of the execution and regarding data transfers.
  * Add STARPU_SIMGRID_SCHED_COST to take into account the time to perform
    scheduling when running in SimGrid mode.
  * New configure option --enable-mpi-pedantic-isend (disabled by default) to
    acquire data in STARPU_RW (instead of STARPU_R) before performing the
    MPI_Isend call.
  * New function starpu_worker_display_names to display the names of all the
    workers of a specified type.
  * Arbiters now support concurrent read access.
  * Add a field starpu_task::where, similar to starpu_codelet::where, which
    makes it possible to restrict where a task may execute. Also add
    STARPU_TASK_WHERE to be used when calling starpu_task_insert() (see the
    sketch after this list).
  * Add the SubmitOrder trace field.
  * Add the workerids and workerids_len task fields.
  * Add priority management to StarPU-MPI.
  * Add the STARPU_MAIN_THREAD_CPUID and STARPU_MPI_THREAD_CPUID environment
    variables.
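
  For illustration, a minimal fragment using the new starpu_task::where field
  (assuming <starpu.h> is included, starpu_init() has succeeded, and my_codelet
  and my_handle are defined and registered by the application):

      /* Restrict this particular task to CPU workers, even if the codelet
         also provides CUDA/OpenCL implementations. */
      struct starpu_task *task = starpu_task_create();
      task->cl = &my_codelet;
      task->handles[0] = my_handle;
      task->where = STARPU_CPU;   /* overrides starpu_codelet::where for this task */
      int ret = starpu_task_submit(task);
      STARPU_CHECK_RETURN_VALUE(ret, "starpu_task_submit");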

Changes:
  * Vastly improve SimGrid simulation time.
  * Switch the default scheduler to lws.

Small changes:
  * Use asynchronous transfers for task data fetches which were not
    prefetched.
  * Allow calling starpu_sched_ctx_set_policy_data on the main scheduler
    context.
  * Function starpu_is_initialized() is moved to the public API.

StarPU 1.2.2 (svn revision xxx)
==============================================

New features:
  * Add starpu_data_acquire_try and starpu_data_acquire_on_node_try (see the
    sketch after this list).
  * Add the NVCC_CC environment variable.
  * Add -no-flops and -no-events options to starpu_fxt_tool to make traces
    lighter.
  * Add starpu_cusparse_init/shutdown/get_local_handle for proper CUDA
    overlapping with cusparse.
  * Allow precise debugging by setting the STARPU_TASK_BREAK_ON_PUSH,
    STARPU_TASK_BREAK_ON_SCHED, STARPU_TASK_BREAK_ON_POP, and
    STARPU_TASK_BREAK_ON_EXEC environment variables to the job_id of a task.
    StarPU will raise SIGTRAP when the task is being scheduled, pushed, or
    popped by the scheduler.
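
  For illustration, a minimal fragment using the new non-blocking acquisition
  (assuming <starpu.h> is included and handle is a registered data handle; the
  return convention is an assumption here, 0 being taken to mean the data was
  acquired, and should be checked against the reference manual):

      if (starpu_data_acquire_try(handle, STARPU_R) == 0)
      {
          /* Data is now safely readable from the application side. */
          /* ... read it ... */
          starpu_data_release(handle);
      }
      else
      {
          /* Dependencies not ready yet: do other useful work instead of blocking. */
      }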

Small features:
  * New function starpu_worker_get_job_id(struct starpu_task *task) which
    returns the job identifier of a given task.
  * Show the package/NUMA topology in starpu_machine_display.
  * MPI: Add MPI communications in dag.dot.
  * Add the STARPU_PERF_MODEL_HOMOGENEOUS_CPU environment variable to allow
    having one perfmodel per CPU core.
  * Add the starpu_vector_filter_list_long filter.
  * Add the starpu_perfmodel_arch_comb_fetch function.
  * Add the STARPU_WATCHDOG_DELAY environment variable.

Small changes:
  * Output generated through STARPU_MPI_COMM has been modified to allow easier
    automated checking.
  * MPI: Fix reactivity at the beginning of the application: when a lot of
    ready requests have to be processed at the same time, the pending requests
    still need to be polled from time to time.
  * MPI: Fix the Gantt chart for starpu_mpi_irecv: it should use the
    termination time of the request, not the submission time.
  * MPI: Enable more tests in SimGrid mode.
  * Use assumed-size instead of assumed-shape arrays for the native Fortran
    API, for better backward compatibility.
  * Fix odd ordering of CPU workers on CPUs due to GPUs stealing some cores.

StarPU 1.2.1 (svn revision 20299)
==============================================

New features:
  * Add starpu_fxt_trace_user_event_string.
  * Add the starpu_tasks_rec_complete tool to add estimation times in
    tasks.rec files.
  * Add the STARPU_FXT_TRACE environment variable.
  * Add starpu_data_set_user_data and starpu_data_get_user_data.
  * Add STARPU_MPI_FAKE_SIZE and STARPU_MPI_FAKE_RANK to allow simulating the
    execution of just one MPI node.
  * Add STARPU_PERF_MODEL_HOMOGENEOUS_CUDA/OPENCL/MIC/SCC to share performance
    models between devices, making calibration much faster.
  * Add the modular-heft-prio scheduler.
  * Add the starpu_cublas_get_local_handle helper.
  * Add starpu_data_set_name, starpu_data_set_coordinates_array, and
    starpu_data_set_coordinates to describe data, and starpu_iteration_push
    and starpu_iteration_pop to describe tasks, for better offline trace
    analysis (see the sketch after this list).
  * New function starpu_bus_print_filenames() to display the filenames storing
    bandwidth/affinity/latency information, available through
    tools/starpu_machine_display -i.
  * Add support for the Ayudame version 2.x debugging library.
  * Add starpu_sched_ctx_get_workers_list_raw, much less costly than
    starpu_sched_ctx_get_workers_list.
  * Add starpu_task_get_name and use it to warn that dmda and similar
    schedulers fall back to a dumb policy while calibration is not finished.
  * MPI: Add functions to test for cached values.
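
  For illustration, a minimal fragment using the new trace-description helpers
  (assuming <starpu.h> is included and A_handle, i, j, iter and my_codelet are
  defined by the application; the exact prototypes, notably the variadic
  starpu_data_set_coordinates, should be checked against the reference manual):

      /* Name the handle and give its 2D coordinates so offline tools can
         identify it in traces. */
      starpu_data_set_name(A_handle, "A");
      starpu_data_set_coordinates(A_handle, 2, i, j);

      /* Bracket an application iteration: tasks submitted in between are
         tagged with this iteration number in the trace. */
      starpu_iteration_push(iter);
      starpu_task_insert(&my_codelet, STARPU_RW, A_handle, 0);
      starpu_iteration_pop();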

Changes:
  * Fix a performance regression of lws for small tasks.
  * Improve native Fortran support for StarPU.

Small changes:
  * Fix the type of the data home node to allow users to pass -1 to define
    temporary data.
  * Fix compatibility with SimGrid 3.14.

StarPU 1.2.0 (svn revision 18521)
==============================================

New features:
  * MIC Xeon Phi support.
  * SCC support.
  * New function starpu_sched_ctx_exec_parallel_code to execute a parallel
    code on the workers of the given scheduler context.
  * MPI:
    - New internal communication system: a unique tag is now used for all
      communications, and a system of hashmaps storing pending receives has
      been implemented on each node. Every message is now coupled with an
      envelope, sent before the corresponding data, which allows the receiver
      to allocate the data correctly and to submit the matching receive for
      the envelope.
    - New function starpu_mpi_irecv_detached_sequential_consistency which
      makes it possible to enable or disable sequential consistency for the
      given data handle (sequential consistency will be enabled or disabled
      based on the value of the function parameter and the value of the
      sequential consistency defined for the given data).
    - New functions starpu_mpi_task_build() and starpu_mpi_task_post_build().
    - New flag STARPU_NODE_SELECTION_POLICY to specify a policy for selecting
      a node to execute the codelet when several nodes own data in W mode.
    - New node selection policies can be registered and unregistered with the
      functions starpu_mpi_node_selection_register_policy() and
      starpu_mpi_node_selection_unregister_policy().
    - New environment variable STARPU_MPI_COMM which enables basic tracing of
      communications.
    - New function starpu_mpi_init_comm() which makes it possible to specify
      an MPI communicator.
  * New STARPU_COMMUTE flag which can be passed along STARPU_W or STARPU_RW to
    let StarPU commute write accesses (see the sketch after this list).
  * Out-of-core support, through registration of disk areas as additional
    memory nodes. It can be enabled programmatically or through the
    STARPU_DISK_SWAP* environment variables.
  * Reclaiming is now periodically done before memory becomes full. This can
    be controlled through the STARPU_*_AVAILABLE_MEM environment variables.
  * New hierarchical schedulers which allow users to easily build their own
    scheduler, by writing each "box" they want themselves, or by combining
    existing boxes provided by StarPU. Hierarchical schedulers have very
    interesting scalability properties.
  * Add STARPU_CUDA_ASYNC and STARPU_OPENCL_ASYNC flags to allow asynchronous
    CUDA and OpenCL kernel execution.
  * Add STARPU_CUDA_PIPELINE and STARPU_OPENCL_PIPELINE to specify how many
    asynchronous tasks are submitted in advance on CUDA and OpenCL devices.
    Setting the value to 0 forces a synchronous execution of all tasks.
  * Add CUDA concurrent kernel execution support through the
    STARPU_NWORKER_PER_CUDA environment variable.
  * Add CUDA and OpenCL kernel submission pipelining, to overlap costs and
    allow concurrent kernel execution on Fermi cards.
  * New locality work stealing scheduler (lws).
  * Add STARPU_VARIABLE_NBUFFERS to be set in cl.nbuffers, and nbuffers and
    modes fields to the task structure, which make it possible to define
    codelets taking a variable number of data.
  * Add support for implementing OpenMP runtimes on top of StarPU.
  * New performance model format to better represent parallel tasks. Used to
    provide estimations for the execution times of parallel tasks on
    scheduling contexts or combined workers.
  * starpu_data_idle_prefetch_on_node and
    starpu_idle_prefetch_task_input_on_node allow queueing prefetches to be
    done only when the bus is idle.
  * Make starpu_data_prefetch_on_node not forcibly flush data out; introduce
    starpu_data_fetch_on_node for that.
  * Add data access arbiters, to improve parallelism of concurrent data
    accesses, notably with STARPU_COMMUTE.
  * Anticipative writeback, to flush dirty data asynchronously before the GPU
    device is full. Disabled by default. Use STARPU_MINIMUM_CLEAN_BUFFERS and
    STARPU_TARGET_CLEAN_BUFFERS to enable it.
  * Add starpu_data_wont_use to advise that a piece of data will not be used
    in the close future.
  * Enable anticipative writeback by default.
  * New scheduler 'dmdasd' that considers priority when deciding which worker
    to schedule on.
  * Add the capability to define specific MPI datatypes for StarPU
    user-defined interfaces.
  * Add tasks.rec trace output to make scheduling analysis easier.
  * Add a Fortran 90 module and an example using it.
  * New StarPU-MPI gdb debug functions.
  * Generate an animated HTML trace of modular schedulers.
  * Add asynchronous partition planning. It only supports coherency through
    the home node of data for now.
  * Add the STARPU_MALLOC_SIMULATION_FOLDED flag to save memory when
    simulating.
  * Include application threads in the trace.
  * Add starpu_task_get_task_scheduled_succs to get the successors of a task.
  * Add a graph inspection facility for schedulers.
  * New STARPU_LOCALITY flag to mark data which should be taken into account
    by schedulers for improving locality.
  * Experimental support for data locality in ws and lws.
  * Add a preliminary framework for native Fortran support for StarPU.
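
  For illustration, a minimal fragment using the new STARPU_COMMUTE flag
  (assuming <starpu.h> is included and accumulate_cpu is a CPU kernel defined
  by the application): two tasks updating the same handle with this mode may
  run in either order, relaxing the usual sequential-consistency ordering.

      struct starpu_codelet accumulate_cl =
      {
          .cpu_funcs = { accumulate_cpu },
          .nbuffers  = 1,
          /* read-write access, but commutable between tasks */
          .modes     = { STARPU_RW | STARPU_COMMUTE },
      };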

Small features:
  * Tasks can now have a name (via the field const char *name of struct
    starpu_task).
  * New functions starpu_data_acquire_cb_sequential_consistency() and
    starpu_data_acquire_on_node_cb_sequential_consistency() which make it
    possible to enable or disable sequential consistency.
  * New configure option --enable-fxt-lock which enables additional trace
    events focused on lock behaviour during the execution.
  * Functions starpu_insert_task and starpu_mpi_insert_task are renamed to
    starpu_task_insert and starpu_mpi_task_insert. The old names are kept to
    avoid breaking existing code.
  * New configure option --enable-calibration-heuristic which allows the user
    to set the maximum authorized deviation of the history-based calibrator.
  * Allow the application to provide the task footprint itself.
  * New function starpu_sched_ctx_display_workers() to display information on
    the workers belonging to a given scheduler context.
  * The option --enable-verbose can be given as --enable-verbose=extra to
    increase the verbosity.
  * Add codelet size, footprint, and tag id in the Paje trace.
  * Add STARPU_TAG_ONLY, to specify a tag for traces without making StarPU
    manage the tag.
  * On Linux x86, spinlocks now block after a hundred tries. This avoids
    typical 10ms pauses when the application thread tries to submit tasks.
  * New function char *starpu_worker_get_type_as_string(enum
    starpu_worker_archtype type).
  * Improve static scheduling by adding support for specifying the task
    execution order.
  * Add starpu_worker_can_execute_task_impl and
    starpu_worker_can_execute_task_first_impl to optimize getting the working
    implementations.
  * Add the STARPU_MALLOC_NORECLAIM flag to allocate without running a reclaim
    if the node is out of memory.
  * New flag STARPU_DATA_MODE_ARRAY for the starpu_task_insert function family
    to make it possible to define an array of data handles along with their
    access modes (see the sketch after this list).
  * New configure option --enable-new-check to enable new testcases which are
    known to fail.
  * Add starpu_memory_allocate and _deallocate to let the application declare
    its own allocation to the reclaiming engine.
  * Add STARPU_SIMGRID_CUDA_MALLOC_COST and STARPU_SIMGRID_CUDA_QUEUE_COST to
    disable CUDA cost simulation in SimGrid mode.
  * Add starpu_task_get_task_succs to get the list of children of a given
    task.
  * Add starpu_malloc_on_node_flags, starpu_free_on_node_flags, and
    starpu_malloc_on_node_set_default_flags to control the allocation flags
    used for allocations done by StarPU.
  * Ranges can be provided in STARPU_WORKERS_CPUID.
  * Add starpu_fxt_autostart_profiling to be able to avoid autostart.
  * Add the arch_cost_function perfmodel function field.
  * Add STARPU_TASK_BREAK_ON_SCHED, STARPU_TASK_BREAK_ON_PUSH, and
    STARPU_TASK_BREAK_ON_POP environment variables to debug schedulers.
  * Add the starpu_sched_display tool.
  * Add starpu_memory_pin and starpu_memory_unpin to pin memory allocated
    through other means than starpu_malloc.
  * Add STARPU_NOWHERE to create synchronization tasks with data.
  * Document how to switch between different views of the same data.
  * Add STARPU_NAME to specify a task name from a starpu_task_insert call.
  * Add configure option --disable-fortran to disable Fortran.
  * Add configure option --with-smpirun to give the path of the smpirun
    executable.
  * Add configure option --disable-build-tests to disable the build of tests.
  * Add starpu-all-tasks debugging support.
  * New function
    void starpu_opencl_load_program_source_malloc(const char *source_file_name, char **located_file_name, char **located_dir_name, char **opencl_program_source)
    which allocates the pointers located_file_name, located_dir_name and
    opencl_program_source.
  * Add the submit_hook and do_schedule scheduler methods.
  * Add starpu_sleep.
  * Add starpu_task_list_ismember.
  * Add _starpu_fifo_pop_this_task.
  * Add the STARPU_MAX_MEMORY_USE environment variable.
  * Add starpu_worker_get_id_check().
  * New function starpu_mpi_wait_for_all(MPI_Comm comm) which waits until all
    StarPU tasks and communications for the given communicator are completed.
  * New function starpu_codelet_unpack_args_and_copyleft() which makes it
    possible to copy into a new buffer the values which have not been unpacked
    by the current call.
  * Add the STARPU_CODELET_SIMGRID_EXECUTE flag.
  * Add the STARPU_CODELET_SIMGRID_EXECUTE_AND_INJECT flag.
  * Add the STARPU_CL_ARGS flag to starpu_task_insert() and
    starpu_mpi_task_insert() function calls.
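
  For illustration, a minimal fragment using the new STARPU_DATA_MODE_ARRAY
  flag (assuming <starpu.h> is included, A_handle and B_handle are registered
  handles, and my_codelet accepts a variable number of buffers; the field
  names of struct starpu_data_descr are assumed here):

      /* Pair each handle with its access mode, then pass the whole array. */
      struct starpu_data_descr descrs[2] =
      {
          { .handle = A_handle, .mode = STARPU_R  },
          { .handle = B_handle, .mode = STARPU_RW },
      };
      starpu_task_insert(&my_codelet,
                         STARPU_DATA_MODE_ARRAY, descrs, 2,
                         0);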

Changes:
  * Data interfaces (variable, vector, matrix and block) now define pack and
    unpack functions.
  * StarPU-MPI: Fix for being able to receive data which have not yet been
    registered by the application (i.e., it did not call
    starpu_data_set_tag(); data are then received as raw memory).
  * StarPU-MPI: Fix for being able to receive data with the same tag from
    several nodes (see mpi/tests/gather.c).
  * Remove the long-deprecated cost_model fields and task->buffers field.
  * Fix the complexity of implicit task/data dependencies, from quadratic to
    linear.

Small changes:
  * Rename function starpu_trace_user_event() as
    starpu_fxt_trace_user_event().
  * "power" is renamed into "energy" wherever it applies, notably in energy
    consumption performance models.
  * Update starpu_task_build() to set starpu_task::cl_arg_free to 1 if some
    arguments of type ::STARPU_VALUE are given.
  * Simplify the performance model loading API.
  * Better semantics for the STARPU_NMIC and STARPU_NMICDEVS environment
    variables, the number of devices and the number of cores: STARPU_NMIC will
    be the number of devices, and STARPU_NMICCORES will be the number of cores
    per device.

StarPU 1.1.5 (svn revision xxx)
==============================================
The scheduling context release

  * Add starpu_memory_pin and starpu_memory_unpin to pin memory allocated
    through other means than starpu_malloc.
  * Add starpu_task_wait_for_n_submitted() and
    STARPU_LIMIT_MAX_NSUBMITTED_TASKS/STARPU_LIMIT_MIN_NSUBMITTED_TASKS to
    easily control the number of submitted tasks by making task submission
    block (see the sketch below).
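
  For illustration, a minimal fragment throttling task submission by hand
  (assuming <starpu.h> is included and my_codelet, handles and ntasks are
  defined by the application; the STARPU_LIMIT_*_NSUBMITTED_TASKS environment
  variables give the same effect without code changes):

      for (unsigned i = 0; i < ntasks; i++)
      {
          starpu_insert_task(&my_codelet, STARPU_RW, handles[i], 0);

          /* Once in a while, block until at most 1000 tasks remain submitted,
             so the submission loop does not run arbitrarily far ahead. */
          if (i % 1000 == 0)
              starpu_task_wait_for_n_submitted(1000);
      }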

StarPU 1.1.4 (svn revision 14856)
==============================================
The scheduling context release

New features:
  * Fix and actually enable the cache allocation.
  * Enable the allocation cache in main RAM when STARPU_LIMIT_CPU_MEM is set
    by the user.
  * New MPI functions starpu_mpi_issend and starpu_mpi_issend_detached to send
    data using a synchronous and non-blocking mode (internally uses
    MPI_Issend).
  * New data access mode flag STARPU_SSEND to be set when calling
    starpu_mpi_insert_task to specify that the data has to be sent using a
    synchronous and non-blocking mode.
  * New environment variable STARPU_PERF_MODEL_DIR which can be set to specify
    a directory in which to store performance model files. When unset, the
    files are stored in $STARPU_HOME/.starpu/sampling.
  * MPI:
    - New function starpu_mpi_data_register_comm to register data with a
      communicator other than MPI_COMM_WORLD.
    - New functions starpu_mpi_data_set_rank() and starpu_mpi_data_set_tag()
      which call starpu_mpi_data_register_comm().

Small features:
  * Add starpu_memory_wait_available() to wait for a given size to become
    available on a given node.
  * New environment variable STARPU_RAND_SEED to set the seed used for random
    numbers.
  * New function starpu_mpi_cache_set() to enable or disable the communication
    cache at runtime.
  * Add starpu_paje_sort, which sorts Pajé traces.

Changes:
  * Fix the complexity of implicit task/data dependencies, from quadratic to
    linear.

StarPU 1.1.3 (svn revision 13450)
==============================================
The scheduling context release

New features:
  * One can register an existing on-GPU buffer to be used by a handle.
  * Add the starpu_paje_summary statistics tool.
  * Enable GPU-GPU transfers for matrices.
  * Let interfaces declare which transfers they allow with the can_copy
    method.

Small changes:
  * Lock performance model files while writing and reading them to avoid
    issues on parallel launches, MPI runs notably.
  * Lots of build fixes for icc on Windows.

StarPU 1.1.2 (svn revision 13011)
==============================================
The scheduling context release

New features:
  * The reduction init codelet is automatically used to initialize temporary
    buffers.
  * Traces now include a "scheduling" state, to show the overhead of the
    scheduler.
  * Add STARPU_CALIBRATE_MINIMUM environment variable to specify the minimum
    number of calibration measurements.
  * Add STARPU_TRACE_BUFFER_SIZE environment variable to specify the size of
    the trace buffer.

StarPU 1.1.1 (svn revision 12638)
==============================================
The scheduling context release

New features:
  * MPI:
    - New variable STARPU_MPI_CACHE_STATS to print statistics on the cache
      holding received data.
    - New function starpu_mpi_data_register() which sets the rank and tag of a
      piece of data, and also allows the MPI communication cache to be
      automatically cleared when unregistering the data. It should be called
      instead of calling both starpu_data_set_tag() and starpu_data_set_rank()
      (see the sketch after this list).
  * Use streams for all CUDA transfers, even those initiated by CPUs.
  * Add Paje trace statistics tools.
  * Use streams for GPUA->GPUB and GPUB->GPUA transfers.
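
  For illustration, a minimal fragment using the new starpu_mpi_data_register()
  (assuming <starpu.h> and <starpu_mpi.h> are included, starpu_mpi_init() has
  succeeded, and vector/NX are defined by the application; the handle/tag/rank
  argument order is assumed from the 1.1 API):

      starpu_data_handle_t vector_handle;
      starpu_vector_data_register(&vector_handle, 0 /* main memory */,
                                  (uintptr_t)vector, NX, sizeof(vector[0]));

      /* One call instead of starpu_data_set_rank() + starpu_data_set_tag(). */
      starpu_mpi_data_register(vector_handle, 42 /* tag */, 0 /* owner rank */);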

Small features:
  * New STARPU_EXECUTE_ON_WORKER flag to specify the worker on which to
    execute the task.
  * New STARPU_DISABLE_PINNING environment variable to disable host memory
    pinning.
  * New STARPU_DISABLE_KERNELS environment variable to disable actual kernel
    execution.
  * New starpu_memory_get_total function to get the size of a memory node.
  * New starpu_parallel_task_barrier_init_n function to let a scheduler decide
    on a set of workers without going through combined workers.

Changes:
  * Fix SimGrid execution.
  * Rename starpu_get_nready_tasks_of_sched_ctx to
    starpu_sched_ctx_get_nready_tasks.
  * Rename starpu_get_nready_flops_of_sched_ctx to
    starpu_sched_ctx_get_nready_flops.
  * New functions starpu_pause() and starpu_resume().
  * New codelet specific_nodes field to specify explicit target nodes for
    data.
  * StarPU-MPI: Fix overzealous allocation of memory.
  * Interfaces: Allow interface implementations to change pointers at will,
    notably in unpack.

Small changes:
  * Use big fat abortions when one tries to make a task or callback sleep,
    instead of just returning EDEADLK, which few people will test.
  * By default, StarPU FFT examples are not compiled and checked; the
    configure option --enable-starpufft-examples needs to be specified to
    change this behaviour.

StarPU 1.1.0 (svn revision 11960)
==============================================
The scheduling context release

New features:
  * OpenGL interoperability support.
  * Capability to store compiled OpenCL kernels on the file system.
  * Capability to load compiled OpenCL kernels.
  * Performance model measurements can now be provided explicitly by
    applications.
  * Capability to emit communication statistics when running MPI code.
  * Add starpu_data_unregister_submit, starpu_data_acquire_on_node and
    starpu_data_invalidate_submit.
  * New functionality in the starpu_insert_task wrapper to pass an array of
    data handles via the parameter STARPU_DATA_ARRAY.
  * Enable GPU-GPU direct transfers.
  * GCC plug-in
    - Add `registered' attribute.
    - A new pass was added that warns about the use of possibly unregistered
      memory buffers.
  * SOCL
    - Manual mapping of commands on specific devices is now possible.
    - SOCL does not require StarPU CPU tasks anymore. CPU workers are
      automatically disabled to enhance performance of OpenCL CPU devices.
  * New interface: COO matrix.
  * Data interfaces: The pack operation of user-defined data interfaces
    defines a new parameter count which should be set to the size of the
    buffer created by the packing of the data.
  * MPI:
    - Communication statistics for MPI can only be enabled at execution time
      by defining the environment variable STARPU_COMM_STATS.
    - The communication cache mechanism is enabled by default, and can only be
      disabled at execution time by setting the environment variable
      STARPU_MPI_CACHE to 0.
    - Initialisation functions starpu_mpi_initialize_extended() and
      starpu_mpi_initialize() have been deprecated. One should now use
      starpu_mpi_init(int *, char ***, int). The last parameter indicates
      whether MPI should be initialised.
    - Collective detached operations have new parameters, a callback function
      and an argument. This is to be consistent with the detached
      point-to-point communications.
    - When exchanging user-defined data interfaces, the size of the data is
      the size returned by the pack operation, i.e., data with dynamic size
      can now be exchanged with StarPU-MPI.
  * Add experimental SimGrid support, to simulate execution with various
    numbers of CPUs and GPUs, amounts of memory, etc.
  * Add support for OpenCL simulators (which provide simulated execution
    time).
  * Add support for Temanejo, a task graph debugger.
  * The theoretical bound lp output now includes data transfer time.
  * Update the OpenCL driver to only enable CPU devices (the environment
    variable STARPU_OPENCL_ONLY_ON_CPUS must be set to a positive value when
    executing an application).
  * Add scheduling contexts to separate computation resources
    - Scheduling policies take into account the set of resources corresponding
      to the context they belong to.
    - Add support to dynamically change scheduling contexts (create and delete
      a context, add workers to a context, remove workers from a context).
    - Add support to indicate to which contexts the tasks are submitted.
  * Add the Hypervisor to manage the scheduling contexts automatically
    - Contexts can be registered to the Hypervisor.
    - Only the registered contexts are managed by the Hypervisor.
    - The Hypervisor can detect the initial distribution of resources of a
      context and construct it accordingly (the cost of execution is
      required).
    - Several policies can dynamically adapt the distribution of resources in
      contexts if the initial one was not appropriate.
    - Add a platform to implement new policies of redistribution of resources.
  * Implement a memory manager which checks the global amount of memory
    available on devices, and checks there is enough memory before doing an
    allocation on the device.
  * Discard the environment variable STARPU_LIMIT_GPU_MEM and define instead
    STARPU_LIMIT_CUDA_MEM and STARPU_LIMIT_OPENCL_MEM.
  * Introduce new variables STARPU_LIMIT_CUDA_devid_MEM and
    STARPU_LIMIT_OPENCL_devid_MEM to limit memory per specific device.
  * Introduce the new variable STARPU_LIMIT_CPU_MEM to limit memory for the
    CPU devices.
  * New function starpu_malloc_flags to define a memory allocation with
    constraints based on the following values (see the sketch after this
    list):
    - STARPU_MALLOC_PINNED specifies memory should be pinned.
    - STARPU_MALLOC_COUNT specifies the memory allocation should be within the
      limits defined by the environment variables STARPU_LIMIT_xxx (see
      above). When no memory is left, starpu_malloc_flags tries to reclaim
      memory from StarPU and returns -ENOMEM on failure.
  * starpu_malloc calls starpu_malloc_flags with the flag set to
    STARPU_MALLOC_PINNED.
  * Define the new function starpu_free_flags similarly to
    starpu_malloc_flags.
  * Define a new public API starpu_pthread which is similar to the pthread
    API. It is provided with 2 implementations: a pthread one and a SimGrid
    one. Applications using StarPU and wishing to use the SimGrid StarPU
    features should use it.
  * Allow a dynamically allocated number of buffers per task, thereby
    overriding the value defined by --enable-maxbuffers=XXX.
  * Performance model files are now stored in a directory whose name includes
    the version of the performance model format. The version number is also
    written in the file itself. When updating the format, the internal
    variable _STARPU_PERFMODEL_VERSION should be updated. It is then possible
    to switch easily between different versions of StarPU having different
    performance model formats.
  * Tasks can now define an optional prologue callback which is executed on
    the host when the task becomes ready for execution, before getting
    scheduled.
  * Small CUDA allocations (<= 4MiB) are now batched to avoid the huge
    cudaMalloc overhead.
  * Prefetching is now done for all schedulers when it can be done whatever
    the scheduling decision.
  * Add a watchdog which makes it easy to trigger a crash when StarPU gets
    stuck.
  * Document how to migrate data over MPI.
  * New function starpu_wakeup_worker() to be used by schedulers to wake up a
    single worker (instead of all workers) when submitting a single task.
  * The functions starpu_sched_set/get_min/max_priority set/get the priorities
    of the current scheduling context, i.e., the one which was set by a call
    to starpu_sched_ctx_set_context(), or the initial context if the function
    has not been called yet.
  * Fix for properly dealing with NAN on Windows systems.
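
  For illustration, a minimal fragment using the new
  starpu_malloc_flags/starpu_free_flags pair (assuming <starpu.h> and
  <errno.h> are included and N is defined by the application):

      float *buffer;
      int ret = starpu_malloc_flags((void **)&buffer, N * sizeof(*buffer),
                                    STARPU_MALLOC_PINNED | STARPU_MALLOC_COUNT);
      if (ret == -ENOMEM)
      {
          /* The STARPU_LIMIT_* budget is exhausted and nothing could be
             reclaimed from StarPU. */
      }

      /* ... register the buffer and use it in tasks ... */

      starpu_free_flags(buffer, N * sizeof(*buffer),
                        STARPU_MALLOC_PINNED | STARPU_MALLOC_COUNT);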

Small features:
  * Add starpu_worker_get_by_type and starpu_worker_get_by_devid.
  * Add starpu_fxt_stop_profiling/starpu_fxt_start_profiling, which permit
    pausing trace recording.
  * Add the trace_buffer_size configuration field to allow specifying the
    tracing buffer size.
  * Add starpu_codelet_profile and starpu_codelet_histo_profile, tools which
    draw the profile of a codelet.
  * The file STARPU-REVISION --- containing the SVN revision number from which
    StarPU was compiled --- is installed in the share/doc/starpu directory.
  * starpu_perfmodel_plot can now directly draw GFlops curves.
  * New configure option --enable-mpi-progression-hook to enable the activity
    polling method for StarPU-MPI.
  * Allow disabling sequential consistency for a given task.
  * New macro STARPU_RELEASE_VERSION.
  * New function starpu_get_version() to return the release version of StarPU
    as 3 integers.
  * Enable the data allocation cache by default.
  * New function starpu_perfmodel_directory() to print the directory storing
    performance models. Available through the new option -d of the tool
    starpu_perfmodel_display.
  * New batch files to execute StarPU applications under Microsoft Visual
    Studio (they are installed in path_to_starpu/bin/msvc).
  * Add cl_arg_free, callback_arg_free, and prologue_callback_arg_free fields
    to enable automatic free(cl_arg); free(callback_arg);
    free(prologue_callback_arg) on task destroy (see the sketch after this
    list).
  * New function starpu_task_build.
  * New configure options --with-simgrid-dir, --with-simgrid-include-dir and
    --with-simgrid-lib-dir to specify the location of the SimGrid library.
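
  For illustration, a minimal fragment using the new cl_arg_free field
  (assuming <starpu.h> and <stdlib.h> are included and my_codelet and struct
  params, with an alpha field, are defined by the application):

      struct params *p = malloc(sizeof(*p));
      p->alpha = 3.14;

      struct starpu_task *task = starpu_task_create();
      task->cl = &my_codelet;
      task->cl_arg = p;
      task->cl_arg_size = sizeof(*p);
      task->cl_arg_free = 1;   /* StarPU calls free(p) when the task is destroyed */
      starpu_task_submit(task);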

Changes:
  * Rename all filter functions to follow the pattern
    starpu_DATATYPE_filter_FILTERTYPE. The script tools/dev/rename_filter.sh
    is provided to update your existing applications to use the new filter
    function names.
  * Renaming of diverse functions and datatypes. The script
    tools/dev/rename.sh is provided to update your existing applications to
    use the new names. It is also possible to compile with the pkg-config
    package starpu-1.0 to keep using the old names. It is however recommended
    to update your code and to use the package starpu-1.1.
  * Fix the block filter functions.
  * Fix StarPU-MPI on Darwin.
  * The FxT code can now be used on systems other than Linux.
  * Keep only one hashtable implementation, common/uthash.h.
  * The cache of starpu_mpi_insert_task is fixed and thus now enabled by
    default.
  * Improve starpu_machine_display output.
  * Standardize object names in the performance model API.
  * SOCL
    - The virtual SOCL device has been removed.
    - Automatic scheduling is still available with command queues not assigned
      to any device.
    - Remove modified OpenCL headers. ICD is now the only supported way to use
      SOCL.
    - The SOCL test suite is only run when the environment variable
      SOCL_OCL_LIB_OPENCL is defined. It should contain the location of the
      libOpenCL.so file of the OCL ICD implementation.
  * Fix a main memory leak on multiple unregister/re-register.
  * Improve hwloc detection by configure.
  * Cell:
    - It is no longer possible to enable the Cell support via the gordon
      driver.
    - Data interfaces no longer define functions to copy to and from SPU
      devices.
    - Codelets no longer define pointers for Gordon implementations.
    - Gordon workers are no longer enabled.
    - Gordon performance models are no longer enabled.
  * Fix data transfer arrows in Paje traces.
  * The "heft" scheduler no longer exists. Users should now pick "dmda"
    instead.
  * StarPU can now use poti to generate Paje traces.
  * Rename the scheduling policy "parallel greedy" to "parallel eager".
  * starpu_scheduler.h is no longer automatically included by starpu.h; it has
    to be manually included when needed.
  * New batch files to run StarPU applications with Microsoft Visual C.
  * Add examples/release/Makefile to test StarPU examples against an installed
    version of StarPU. That can also be used to test examples using a previous
    API.
  * The tutorial is installed in ${docdir}/tutorial.
  * The schedulers eager_central_policy, dm and dmda no longer erroneously
    respect priorities. dmdas has to be used to respect priorities.
  * StarPU-MPI: Fix a potential bug for user-defined datatypes. As MPI can
    reorder messages, we need to make sure the sending of the size of the data
    has been completed.
  * Documentation is now generated through Doxygen.
  * Modification of the perfmodel output format for future improvements.
  * Fix for properly dealing with NAN on Windows systems.
  * Function starpu_sched_ctx_create() now takes a variable argument list to
    define the scheduler to be used, and the minimum and maximum priority
    values.
  * The functions starpu_sched_set/get_min/max_priority set/get the priorities
    of the current scheduling context, i.e., the one which was set by a call
    to starpu_sched_ctx_set_context(), or the initial context if the function
    was not called yet.
  * MPI: Fix the livelock issue discovered while executing applications on a
    CPU+GPU cluster of machines, by adding a maximum trylock threshold before
    a blocking lock.

Small changes:
  * STARPU_NCPU should now be used instead of STARPU_NCPUS. STARPU_NCPUS is
    still available for compatibility reasons.
  * include/starpu.h includes all include/starpu_*.h files; applications
    therefore only need to have #include <starpu.h>
  * Active task wait is now included in blocked time.
  * Fix GCC plugin linking issues starting with GCC 4.7.
  * Fix forcing calibration of never-calibrated archs.
  * CUDA applications are no longer compiled with the "-arch sm_13" option. It
    is specifically added to applications which need it.
  * Explicitly name the non-sleeping-non-running time "Overhead", and use
    another color in ViTE traces.
  * Use C99 variadic macro support, not GNU.
  * Fix performance regression: dmda queues were inadvertently made LIFOs in
    r9611.

StarPU 1.0.3 (svn revision 7379)
==============================================

Changes:
  * Several bug fixes in the build system.
  * Bug fixes in source code for non-Linux systems.
  * Fix generating FxT traces bigger than 64MiB.
  * Improve ENODEV error detection in StarPU FFT.

StarPU 1.0.2 (svn revision 7210)
==============================================

Changes:
  * Add starpu_block_shadow_filter_func_vector and an example.
  * Add tag dependency in trace-generated DAG.
  * Fix CPU binding for optimized CPU-GPU transfers.
  * Fix parallel tasks CPU binding and combined worker generation.
  * Fix generating FxT traces bigger than 64MiB.

StarPU 1.0.1 (svn revision 6659)
==============================================

Changes:
  * hwloc support. Warn users when hwloc is not found on the system, and
    produce an error when it is not explicitly disabled.
  * Several bug fixes.
  * GCC plug-in
    - Add `#pragma starpu release'.
    - Fix a bug when using the `acquire' pragma with function parameters.
    - Slightly improve test suite coverage.
    - Relax the GCC version check.
  * Update SOCL to use the new API.
  * Documentation improvement.

StarPU 1.0.0 (svn revision 6306)
==============================================
The extensions-again release

New features:
  * Add SOCL, an OpenCL interface on top of StarPU.
  * Add a GCC plugin to extend the C interface with pragmas which make it easy
    to define codelets and issue tasks.
  * Add a reduction mode to starpu_mpi_insert_task.
  * A new multi-format interface permits using different binary formats on
    CPUs & GPUs, the conversion functions being provided by the application
    and called by StarPU as needed (and as little as possible).
  * Deprecate cost_model, and introduce cost_function, which is provided with
    the whole task structure, the target arch and the implementation number.
  * Permit the application to provide its own size base for performance
    models.
  * Applications can provide several implementations of a codelet for the same
    architecture (see the sketch after this list).
  * Add a StarPU-Top feedback and steering interface.
  * Allow specifying MPI tags for more efficient starpu_mpi_insert_task.
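
  For illustration, a minimal fragment providing two CPU implementations of
  one codelet (assuming <starpu.h> is included and potrf_ref/potrf_sse are
  kernels defined by the application with the usual
  void f(void *buffers[], void *cl_arg) prototype); StarPU picks one of the
  implementations at runtime:

      struct starpu_codelet potrf_cl =
      {
          .where     = STARPU_CPU,
          .cpu_funcs = { potrf_ref, potrf_sse, NULL },
          .nbuffers  = 1,
          .modes     = { STARPU_RW },
      };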

Changes:
  * Fix several memory leaks and race conditions.
  * Make environment variables take precedence over the configuration passed
    to starpu_init().
  * Libtool interface versioning has been included in library names
    (libstarpu-1.0.so, libstarpumpi-1.0.so, libstarpufft-1.0.so,
    libsocl-1.0.so).
  * Install headers under $includedir/starpu/1.0.
  * Make the where field of struct starpu_codelet optional. When unset, its
    value will be automatically set based on the availability of the different
    XXX_funcs fields of the codelet.
  * Define access modes for data handles in starpu_codelet and no longer in
    starpu_task. Hence mark (struct starpu_task).buffers as deprecated, and
    add (struct starpu_task).handles and (struct starpu_codelet).modes.
  * Fields xxx_func of struct starpu_codelet are made deprecated. One should
    use fields xxx_funcs instead.
  * Some types were renamed for consistency. When using pkg-config libstarpu,
    starpu_deprecated_api.h is automatically included (after starpu.h) to keep
    compatibility with existing software. Other changes are mentioned below;
    compatibility is also preserved for them. To port code to use the new
    names (this is not mandatory), the tools/dev/rename.sh script can be used,
    and pkg-config starpu-1.0 should be used.
  * The communication cost in the heft and dmda scheduling strategies now
    takes into account the contention brought by the number of GPUs. This
    changes the meaning of the beta factor, whose default 1.0 value should now
    be good enough in most cases.

Small features:
  * Allow users to disable asynchronous data transfers between CPUs and GPUs.
  * Update the OpenCL driver to enable CPU devices (the environment variable
    STARPU_OPENCL_ON_CPUS must be set to a positive value when executing an
    application).
  * struct starpu_data_interface_ops --- operations on a data interface ---
    defines a new function pointer allocate_new_data which creates a new data
    interface of the given type based on an existing handle.
  * Add a field named magic to struct starpu_task which is set when
    initialising the task. starpu_task_submit will fail if the field does not
    have the right value. This will hence avoid submitting tasks which have
    not been properly initialised.
  * Add a hook function pre_exec_hook in struct starpu_sched_policy. The
    function is meant to be called in drivers. Schedulers can use it to be
    notified when a task is about to be computed.
  * Add a codelet execution time statistics plot.
  * Add bus speed in starpu_machine_display.
  * Add a STARPU_DATA_ACQUIRE_CB macro which permits inlining the code to be
    done.
  * Add gdb functions.
  * Add complex support to the LU example.
  * Permit using the same data several times in write mode among the
    parameters of the same task (see the sketch after this list).
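
  For illustration, a minimal fragment passing the same handle twice in write
  mode to one task (assuming <starpu.h> is included and my_codelet, declaring
  two buffers, and matrix_handle are defined by the application):

      starpu_insert_task(&my_codelet,
                         STARPU_RW, matrix_handle,
                         STARPU_RW, matrix_handle,  /* same handle, now allowed */
                         0);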

Small changes:
  * Increase the default value for STARPU_MAXCPUS -- the maximum number of
    CPUs supported -- to 64.
  * Add man pages for some of the tools.
  * Add a C++ application example in examples/cpp/.
  * Add an OpenMP fork-join example.
  * Documentation improvement.

StarPU 0.9 (svn revision 3721)
==============================================
The extensions release

  * Provide the STARPU_REDUX data access mode.
  * Externalize the scheduler API.
  * Add theoretical bound computation.
  * Add the void interface.
  * Add power consumption optimization.
  * Add parallel task support.
  * Add starpu_mpi_insert_task.
  * Add a profiling information interface.
  * Add the STARPU_LIMIT_GPU_MEM environment variable.
  * OpenCL fixes.
  * MPI fixes.
  * Improve optimization documentation.
  * Upgrade to the hwloc 1.1 interface.
  * Add a Fortran example.
  * Add a mandelbrot OpenCL example.
  * Add a cg example.
  * Add a stencil MPI example.
  * Initial support for CUDA 4.

StarPU 0.4 (svn revision 2535)
==============================================
The API strengthening release

  * Major API improvements
    - Provide the STARPU_SCRATCH data access mode.
    - Rework the data filter interface.
    - Rework the data interface structure.
    - A script that automatically renames old functions to accommodate the new
      API is available from
      https://scm.gforge.inria.fr/svn/starpu/scripts/renaming
      (login: anonsvn, password: anonsvn)
  * Implement dependencies between tasks directly (e.g. without tags).
  * Implicit data-driven task dependencies simplify the design of
    data-parallel algorithms.
  * Add dynamic profiling capabilities
    - Provide per-task feedback.
    - Provide per-worker feedback.
    - Provide feedback about memory transfers.
  * Provide a library to help accelerate MPI applications.
  * Improve data transfer overhead prediction
    - Transparently benchmark buses to generate performance models.
    - Bind accelerator-controlling threads with respect to NUMA locality.
  * Improve StarPU's portability
    - Add OpenCL support.
    - Add support for Windows.

StarPU 0.2.901 aka 0.3-rc1 (svn revision 1236)
==============================================
The asynchronous heterogeneous multi-accelerator release

  * Many API changes and code cleanups
    - Implement starpu_worker_get_id.
    - Implement starpu_worker_get_name.
    - Implement starpu_worker_get_type.
    - Implement starpu_worker_get_count.
    - Implement starpu_display_codelet_stats.
    - Implement starpu_data_prefetch_on_node.
    - Expose the starpu_data_set_wt_mask function.
  * Support NVIDIA (heterogeneous) multi-GPU.
  * Add the data request mechanism
    - All data transfers use data requests now.
    - Implement asynchronous data transfers.
    - Implement a prefetch mechanism.
    - Chain data requests to support GPU->RAM->GPU transfers.
  * Make it possible to bypass the scheduler and to assign a task to a
    specific worker.
  * Support restartable tasks to reinstantiate dependency task graphs.
  * Improve performance prediction
    - Model data transfer overhead.
    - One model is created for each accelerator.
  * Support for CUDA's driver API is deprecated.
  * The STARPU_WORKERS_CUDAID and STARPU_WORKERS_CPUID environment variables
    make it possible to specify where to bind the workers.
  * Use the hwloc library to detect the actual number of cores.

StarPU 0.2.0 (svn revision 1013)
==============================================
The Stabilizing-the-Basics release

  * Various API cleanups
  * Mac OS X is supported now
  * Add dynamic code loading facilities onto Cell's SPUs
  * Improve performance analysis/feedback tools
  * Application can interact with StarPU tasks
    - The application may access/modify data managed by the DSM
    - The application may wait for the termination of a (set of) task(s)
  * An initial documentation is added
  * More examples are supplied

StarPU 0.1.0 (svn revision 794)
==============================================
First release.

Status:
  * Only supports Linux platforms for now
  * Supported architectures
    - multicore CPUs
    - NVIDIA GPUs (with CUDA 2.x)
    - experimental Cell/BE support

Changes:
  * Scheduling facilities
    - run-time selection of the scheduling policy
    - basic auto-tuning facilities
  * Software-based DSM
    - transparent data coherency management
    - high-level expressive interface

# Local Variables:
# mode: text
# coding: utf-8
# ispell-local-dictionary: "american"
# End: