# StarPU --- Runtime system for heterogeneous multicore architectures.
#
# Copyright (C) 2009-2014 Université de Bordeaux 1
# Copyright (C) 2010, 2011, 2012, 2013, 2014 Centre National de la Recherche Scientifique
#
# StarPU is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or (at
# your option) any later version.
#
# StarPU is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# See the GNU Lesser General Public License in COPYING.LGPL for more details.

StarPU 1.2.0 (svn revision xxxx)
==============================================
New features:
* MIC Xeon Phi support
* SCC support
* New function starpu_sched_ctx_exec_parallel_code to execute a
  parallel code on the workers of the given scheduler context
* MPI:
  - New internal communication system: a unique tag is now used
    for all communications, and a system of hashmaps on each node
    which stores pending receives has been implemented. Every
    message is now coupled with an envelope, sent before the
    corresponding data, which allows the receiver to allocate the
    data correctly and to submit the receive matching the envelope.
  - New function
    starpu_mpi_irecv_detached_sequential_consistency which
    makes it possible to enable or disable the sequential
    consistency for the given data handle (sequential consistency
    will be enabled or disabled based on the value of the function
    parameter and the value of the sequential consistency defined
    for the given data)
  - New functions starpu_mpi_task_build() and
    starpu_mpi_task_post_build()
* New STARPU_COMMUTE flag which can be passed along with STARPU_W or
  STARPU_RW to let StarPU commute write accesses (see the sketch
  after this list).
* Out-of-core support, through registration of disk areas as additional
  memory nodes.
* New hierarchical schedulers which allow users to easily build their
  own scheduler, either by writing each "box" they want or by
  combining existing boxes provided by StarPU. Hierarchical
  schedulers have very interesting scalability properties.
* Add STARPU_CUDA_ASYNC and STARPU_OPENCL_ASYNC flags to allow asynchronous
  CUDA and OpenCL kernel execution.
* Add CUDA concurrent kernel execution support through
  the STARPU_NWORKER_PER_CUDA environment variable.
* Add CUDA and OpenCL kernel submission pipelining, to overlap costs and allow
  concurrent kernel execution on Fermi cards.
* New locality work stealing scheduler (lws).
* Add STARPU_VARIABLE_NBUFFERS to be set in cl.nbuffers, and nbuffers and
  modes fields to the task structure, which make it possible to define
  codelets taking a variable number of data (see the sketch after this
  list).
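
A minimal sketch of the STARPU_COMMUTE flag described above, assuming the
flag can be ORed into the access mode both in the codelet and in
starpu_task_insert(); kernel body and data are placeholders:

    #include <stdint.h>
    #include <starpu.h>

    /* Hypothetical kernel: several such tasks may update the same handle
       in any order, since the access is declared commutative. */
    static void accumulate_cpu(void *buffers[], void *cl_arg)
    {
        double *v = (double *)STARPU_VARIABLE_GET_PTR(buffers[0]);
        (void)cl_arg;
        *v += 1.0;
    }

    static struct starpu_codelet accumulate_cl =
    {
        .cpu_funcs = { accumulate_cpu },
        .nbuffers  = 1,
        /* STARPU_COMMUTE tells StarPU it may reorder these write accesses. */
        .modes     = { STARPU_RW | STARPU_COMMUTE },
    };

    int main(void)
    {
        double x = 0.0;
        starpu_data_handle_t h;

        if (starpu_init(NULL) != 0)
            return 1;
        starpu_variable_data_register(&h, 0 /* main RAM */, (uintptr_t)&x, sizeof(x));

        /* These two tasks both write x, but may run in either order. */
        starpu_task_insert(&accumulate_cl, STARPU_RW | STARPU_COMMUTE, h, 0);
        starpu_task_insert(&accumulate_cl, STARPU_RW | STARPU_COMMUTE, h, 0);

        starpu_data_unregister(h);
        starpu_shutdown();
        return 0;
    }
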
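A sketch of a codelet taking a variable number of buffers through
STARPU_VARIABLE_NBUFFERS, assuming the handle count fits in the static
handles array of the task; kernel body and handles are placeholders:

    #include <starpu.h>

    static void sum_cpu(void *buffers[], void *cl_arg)
    {
        /* The per-task buffer count can be recovered from the current task
           or passed through cl_arg; elided here. */
        (void)buffers; (void)cl_arg;
    }

    static struct starpu_codelet sum_cl =
    {
        .cpu_funcs = { sum_cpu },
        /* The number of buffers is now given per task, not per codelet. */
        .nbuffers  = STARPU_VARIABLE_NBUFFERS,
    };

    /* n is only known at run time. */
    static int submit_sum(starpu_data_handle_t *handles, int n)
    {
        struct starpu_task *task = starpu_task_create();
        int i;
        task->cl = &sum_cl;
        task->nbuffers = n;               /* new per-task field */
        for (i = 0; i < n; i++)
        {
            task->handles[i] = handles[i];
            task->modes[i]   = STARPU_R;  /* new per-task field */
        }
        return starpu_task_submit(task);
    }
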
Small features:
* Tasks can now have a name (via the field const char *name of
  struct starpu_task)
* New functions starpu_data_acquire_cb_sequential_consistency() and
  starpu_data_acquire_on_node_cb_sequential_consistency() which make
  it possible to enable or disable sequential consistency
* New configure option --enable-fxt-lock which enables additional
  trace events focused on lock behaviour during the execution
* Functions starpu_insert_task and starpu_mpi_insert_task are
  renamed to starpu_task_insert and starpu_mpi_task_insert. The old
  names are kept to avoid breaking existing code.
* New configure option --enable-calibration-heuristic which allows
  the user to set the maximum authorized deviation of the
  history-based calibrator.
* Allow applications to provide the task footprint themselves.
* New function starpu_sched_ctx_display_workers() to display
  information on the workers belonging to a given scheduler context
* The option --enable-verbose can be called with
  --enable-verbose=extra to increase the verbosity
* Add codelet size, footprint and tag id in the paje trace.
* Add STARPU_TAG_ONLY, to specify a tag for traces without making StarPU
  manage the tag (see the sketch after this list).
* On Linux x86, spinlocks now block after a hundred tries. This avoids
  typical 10ms pauses when the application thread tries to submit tasks.
* New function char *starpu_worker_get_type_as_string(enum starpu_worker_archtype type)
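
A minimal sketch of the STARPU_TAG_ONLY flag described above, assuming it is
passed to starpu_task_insert() followed by a starpu_tag_t value like the
other per-task arguments; codelet, handle and tag value are placeholders:

    #include <starpu.h>

    extern struct starpu_codelet my_cl;     /* hypothetical codelet */
    extern starpu_data_handle_t my_handle;  /* hypothetical registered data */

    void submit_traced_task(void)
    {
        /* The tag only labels the task in the generated traces; StarPU does
           not use it for tag-based dependencies (unlike STARPU_TAG). */
        starpu_task_insert(&my_cl,
                           STARPU_RW, my_handle,
                           STARPU_TAG_ONLY, (starpu_tag_t)42,
                           0);
    }
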
Changes:
* Data interfaces (variable, vector, matrix and block) now define
  pack and unpack functions
* StarPU-MPI: Fix for being able to receive data which have not yet
  been registered by the application (i.e. it did not call
  starpu_data_set_tag(), data are received as raw memory)
* StarPU-MPI: Fix for being able to receive data with the same tag
  from several nodes (see mpi/tests/gather.c)
* Remove the long-deprecated cost_model fields and task->buffers field.
* Fix complexity of implicit task/data dependency, from quadratic to linear.
Small changes:
* Rename function starpu_trace_user_event() as
  starpu_fxt_trace_user_event()

StarPU 1.1.3 (svn revision xxx)
==============================================
The scheduling context release
New features:
* One can register an existing on-GPU buffer to be used by a handle.
* Add the starpu_paje_summary statistics tool.
* Enable GPU-GPU transfers for matrices.
* Let interfaces declare which transfers they allow with the can_copy
  method.
Small changes:
* Lock performance model files while writing and reading them to avoid
  issues on parallel launches, MPI runs notably.

StarPU 1.1.2 (svn revision 13011)
==============================================
The scheduling context release
New features:
* The reduction init codelet is automatically used to initialize temporary
  buffers.
* Traces now include a "scheduling" state, to show the overhead of the
  scheduler.
* Add STARPU_CALIBRATE_MINIMUM environment variable to specify the minimum
  number of calibration measurements.
* Add STARPU_TRACE_BUFFER_SIZE environment variable to specify the size of
  the trace buffer.

StarPU 1.1.1 (svn revision 12638)
==============================================
The scheduling context release
New features:
* MPI:
  - New variable STARPU_MPI_CACHE_STATS to print statistics on the
    cache holding received data.
  - New function starpu_mpi_data_register() which sets the rank
    and tag of a data handle, and also makes it possible to
    automatically clear the MPI communication cache when
    unregistering the data. It should be called instead of calling
    both starpu_data_set_tag() and starpu_data_set_rank() (see the
    sketch after this list).
* Use streams for all CUDA transfers, even those initiated by CPUs.
* Add paje trace statistics tools.
* Use streams for GPUA->GPUB and GPUB->GPUA transfers.
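
A minimal sketch of the starpu_mpi_data_register() call described above,
replacing the former pair of calls; the (handle, tag, rank) argument order
is assumed and should be checked against starpu_mpi.h:

    #include <starpu.h>
    #include <starpu_mpi.h>

    void register_for_mpi(starpu_data_handle_t handle, int mpi_tag, int owner_rank)
    {
        /* Before 1.1.1 one had to call both of these:
             starpu_data_set_tag(handle, mpi_tag);
             starpu_data_set_rank(handle, owner_rank);
           The single call below also lets StarPU-MPI clear its
           communication cache when the data is unregistered. */
        starpu_mpi_data_register(handle, mpi_tag, owner_rank);
    }
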
Small features:
* New STARPU_EXECUTE_ON_WORKER flag to specify the worker on which
  to execute the task.
* New STARPU_DISABLE_PINNING environment variable to disable host memory
  pinning.
* New STARPU_DISABLE_KERNELS environment variable to disable actual kernel
  execution.
* New starpu_memory_get_total function to get the size of a memory node.
* New starpu_parallel_task_barrier_init_n function to let a scheduler decide
  on a set of workers without going through combined workers.
Changes:
* Fix simgrid execution.
* Rename starpu_get_nready_tasks_of_sched_ctx to starpu_sched_ctx_get_nready_tasks
* Rename starpu_get_nready_flops_of_sched_ctx to starpu_sched_ctx_get_nready_flops
* New functions starpu_pause() and starpu_resume()
* New codelet specific_nodes field to specify explicit target nodes for data.
* StarPU-MPI: Fix overzealous allocation of memory.
* Interfaces: Allow interface implementations to change pointers at will,
  notably in unpack.
Small changes:
* Use big fat abortions when one tries to make a task or callback
  sleep, instead of just returning EDEADLK which few people will test
* By default, StarPU FFT examples are not compiled and checked; the
  configure option --enable-starpufft-examples needs to be specified
  to change this behaviour.

StarPU 1.1.0 (svn revision 11960)
==============================================
The scheduling context release
New features:
* OpenGL interoperability support.
* Capability to store compiled OpenCL kernels on the file system
* Capability to load compiled OpenCL kernels
* Performance model measurements can now be provided explicitly by
  applications.
* Capability to emit communication statistics when running MPI code
* Add starpu_data_unregister_submit, starpu_data_acquire_on_node and
  starpu_data_invalidate_submit
* New functionality in the starpu_insert_task wrapper to pass an array
  of data handles via the parameter STARPU_DATA_ARRAY
* Enable GPU-GPU direct transfers.
* GCC plug-in
  - Add `registered' attribute
  - A new pass was added that warns about the use of possibly
    unregistered memory buffers.
* SOCL
  - Manual mapping of commands on specific devices is now
    possible
  - SOCL does not require StarPU CPU tasks anymore. CPU workers
    are automatically disabled to enhance performance of OpenCL
    CPU devices
* New interface: COO matrix.
* Data interfaces: The pack operation of user-defined data interfaces
  defines a new parameter count which should be set to the size of
  the buffer created by packing the data.
* MPI:
  - Communication statistics for MPI can only be enabled at
    execution time by defining the environment variable
    STARPU_COMM_STATS
  - The communication cache mechanism is enabled by default, and can
    only be disabled at execution time by setting the
    environment variable STARPU_MPI_CACHE to 0.
  - New variable STARPU_MPI_CACHE_STATS to print statistics on the
    cache holding received data.
  - Initialisation functions starpu_mpi_initialize_extended()
    and starpu_mpi_initialize() have been made deprecated. One
    should now use starpu_mpi_init(int *, char ***, int). The
    last parameter indicates whether MPI should be initialised
    (see the sketch after this list).
  - Collective detached operations have new parameters, a
    callback function and an argument. This is to be consistent
    with the detached point-to-point communications.
  - When exchanging user-defined data interfaces, the size of
    the data is the size returned by the pack operation, i.e.
    data with dynamic size can now be exchanged with StarPU-MPI.
  - New function starpu_mpi_data_register() which sets the rank
    and tag of a data handle, and also makes it possible to
    automatically clear the MPI communication cache when
    unregistering the data. It should be called instead of calling
    both starpu_data_set_tag() and starpu_data_set_rank()
* Add experimental simgrid support, to simulate execution with various
  numbers of CPUs, GPUs, amount of memory, etc.
* Add support for OpenCL simulators (which provide simulated execution time)
* Add support for Temanejo, a task graph debugger
* Theoretical bound lp output now includes data transfer time.
* Update OpenCL driver to only enable CPU devices (the environment
  variable STARPU_OPENCL_ONLY_ON_CPUS must be set to a positive
  value when executing an application)
* Add scheduling contexts to separate computation resources
  - Scheduling policies take into account the set of resources corresponding
    to the context they belong to
  - Add support to dynamically change scheduling contexts
    (create and delete a context, add workers to a context, remove workers
    from a context)
  - Add support to indicate to which contexts the tasks are submitted
* Add the Hypervisor to manage the scheduling contexts automatically
  - The contexts can be registered to the Hypervisor
  - Only the registered contexts are managed by the Hypervisor
  - The Hypervisor can detect the initial distribution of resources of
    a context and construct it accordingly (the cost of execution is required)
  - Several policies can dynamically adapt the distribution of resources
    in contexts if the initial one was not appropriate
  - Add a platform to implement new policies of redistribution
    of resources
* Implement a memory manager which checks the global amount of
  memory available on devices, and checks there is enough memory
  before doing an allocation on the device.
* Discard environment variable STARPU_LIMIT_GPU_MEM and define
  instead STARPU_LIMIT_CUDA_MEM and STARPU_LIMIT_OPENCL_MEM
* Introduce new variables STARPU_LIMIT_CUDA_devid_MEM and
  STARPU_LIMIT_OPENCL_devid_MEM to limit memory per specific device
* Introduce new variable STARPU_LIMIT_CPU_MEM to limit memory for
  the CPU devices
* New function starpu_malloc_flags to define a memory allocation with
  constraints based on the following values (see the sketch after this
  list):
  - STARPU_MALLOC_PINNED specifies memory should be pinned
  - STARPU_MALLOC_COUNT specifies the memory allocation should be in
    the limits defined by the environment variables STARPU_LIMIT_xxx
    (see above). When no memory is left, starpu_malloc_flags tries
    to reclaim memory from StarPU and returns -ENOMEM on failure.
* starpu_malloc calls starpu_malloc_flags with a value of flags set
  to STARPU_MALLOC_PINNED
* Define new function starpu_free_flags similarly to starpu_malloc_flags
* Define new public starpu_pthread API which is similar to the
  pthread API. It is provided with 2 implementations: a pthread one
  and a SimGrid one. Applications using StarPU and wishing to use
  the SimGrid StarPU features should use it.
* Allow a dynamically allocated number of buffers per task, and thus
  overwrite the value defined by --enable-maxbuffers=XXX
* Performance model files are now stored in a directory whose name
  includes the version of the performance model format. The version
  number is also written in the file itself.
  When updating the format, the internal variable
  _STARPU_PERFMODEL_VERSION should be updated. It is then possible
  to switch easily between different versions of StarPU having
  different performance model formats.
* Tasks can now define an optional prologue callback which is executed
  on the host when the task becomes ready for execution, before getting
  scheduled.
* Small CUDA allocations (<= 4MiB) are now batched to avoid the huge
  cudaMalloc overhead.
* Prefetching is now done for all schedulers when it can be done whatever
  the scheduling decision.
* Add a watchdog which makes it easy to trigger a crash when StarPU gets
  stuck.
* Document how to migrate data over MPI.
* New function starpu_wakeup_worker() to be used by schedulers to
  wake up a single worker (instead of all workers) when submitting a
  single task.
* The functions starpu_sched_set/get_min/max_priority set/get the
  priorities of the current scheduling context, i.e. the one which
  was set by a call to starpu_sched_ctx_set_context() or the initial
  context if that function has not been called yet.
* Fix for properly dealing with NaN on Windows systems
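
A minimal sketch of the starpu_mpi_init() entry point mentioned in the MPI
items above; the shutdown calls and the value of the last argument are the
usual pattern, not taken from this ChangeLog:

    #include <starpu.h>
    #include <starpu_mpi.h>

    int main(int argc, char **argv)
    {
        int ret = starpu_init(NULL);
        if (ret != 0)
            return 1;

        /* Third argument != 0: let StarPU-MPI initialise MPI itself.
           Pass 0 if the application already called MPI_Init. */
        ret = starpu_mpi_init(&argc, &argv, 1);
        if (ret != 0)
            return 1;

        /* ... task submission ... */

        starpu_mpi_shutdown();
        starpu_shutdown();
        return 0;
    }
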
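And a sketch of starpu_malloc_flags()/starpu_free_flags() with the flags
listed above; buffer size and error handling are illustrative only:

    #include <stdlib.h>
    #include <starpu.h>

    float *allocate_pinned_counted(size_t n)
    {
        void *buf = NULL;

        /* Pinned allocation, accounted against the STARPU_LIMIT_xxx limits;
           returns -ENOMEM when nothing can be reclaimed. */
        int ret = starpu_malloc_flags(&buf, n * sizeof(float),
                                      STARPU_MALLOC_PINNED | STARPU_MALLOC_COUNT);
        if (ret != 0)
            return NULL;
        return (float *)buf;
    }

    void release(float *buf, size_t n)
    {
        /* starpu_free_flags needs the same size and flags to undo the accounting. */
        starpu_free_flags(buf, n * sizeof(float),
                          STARPU_MALLOC_PINNED | STARPU_MALLOC_COUNT);
    }
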
Small features:
* Add starpu_worker_get_by_type and starpu_worker_get_by_devid
* Add starpu_fxt_stop_profiling/starpu_fxt_start_profiling which make
  it possible to pause trace recording.
* Add trace_buffer_size configuration field to specify the tracing
  buffer size.
* Add starpu_codelet_profile and starpu_codelet_histo_profile, tools which
  draw the profile of a codelet.
* File STARPU-REVISION --- containing the SVN revision number from which
  StarPU was compiled --- is installed in the share/doc/starpu directory
* starpu_perfmodel_plot can now directly draw GFlops curves.
* New configure option --enable-mpi-progression-hook to enable the
  activity polling method for StarPU-MPI.
* Allow disabling sequential consistency for a given task.
* New macro STARPU_RELEASE_VERSION
* New function starpu_get_version() to return the release version of
  StarPU as 3 integers (see the sketch after this list).
* Enable the data allocation cache by default
* New function starpu_perfmodel_directory() to print the directory
  storing performance models. Available through the new option -d of
  the tool starpu_perfmodel_display
* New batch files to execute StarPU applications under Microsoft
  Visual Studio (they are installed in path_to_starpu/bin/msvc)
* Add cl_arg_free, callback_arg_free, prologue_callback_arg_free fields to
  enable automatic free(cl_arg); free(callback_arg);
  free(prologue_callback_arg) on task destroy.
* New function starpu_task_build
* New configure options --with-simgrid-dir,
  --with-simgrid-include-dir and --with-simgrid-lib-dir to specify
  the location of the SimGrid library
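
A minimal sketch of starpu_get_version(), assuming the three integers are
returned through output parameters:

    #include <stdio.h>
    #include <starpu.h>

    void print_starpu_version(void)
    {
        int major, minor, release;

        /* The three components of the release version are returned
           through the pointers. */
        starpu_get_version(&major, &minor, &release);
        printf("StarPU %d.%d.%d\n", major, minor, release);
    }
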
Changes:
* Rename all filter functions to follow the pattern
  starpu_DATATYPE_filter_FILTERTYPE. The script
  tools/dev/rename_filter.sh is provided to update your existing
  applications to use the new filter function names.
* Renaming of diverse functions and datatypes. The script
  tools/dev/rename.sh is provided to update your existing
  applications to use the new names. It is also possible to compile
  with the pkg-config package starpu-1.0 to keep using the old
  names. It is however recommended to update your code and to use
  the package starpu-1.1.
* Fix the block filter functions.
* Fix StarPU-MPI on Darwin.
* The FxT code can now be used on systems other than Linux.
* Keep only one hashtable implementation, common/uthash.h
* The cache of starpu_mpi_insert_task is fixed and thus now enabled by
  default.
* Improve starpu_machine_display output.
* Standardize object names in the performance model API
* SOCL
  - Virtual SOCL device has been removed
  - Automatic scheduling still available with command queues not
    assigned to any device
  - Remove modified OpenCL headers. ICD is now the only supported
    way to use SOCL.
  - SOCL test suite is only run when the environment variable
    SOCL_OCL_LIB_OPENCL is defined. It should contain the location
    of the libOpenCL.so file of the OCL ICD implementation.
* Fix main memory leak on multiple unregister/re-register.
* Improve hwloc detection by configure
* Cell:
  - It is no longer possible to enable the cell support via the
    gordon driver
  - Data interfaces no longer define functions to copy to and from
    SPU devices
  - Codelets no longer define a pointer for Gordon implementations
  - Gordon workers are no longer enabled
  - Gordon performance models are no longer enabled
* Fix data transfer arrows in paje traces
* The "heft" scheduler no longer exists. Users should now pick "dmda"
  instead.
* StarPU can now use poti to generate paje traces.
* Rename scheduling policy "parallel greedy" to "parallel eager"
* starpu_scheduler.h is no longer automatically included by
  starpu.h, it has to be manually included when needed
* New batch files to run StarPU applications with Microsoft Visual C
* Add examples/release/Makefile to test StarPU examples against an
  installed version of StarPU. That can also be used to test
  examples using a previous API.
* Tutorial is installed in ${docdir}/tutorial
* Schedulers eager_central_policy, dm and dmda no longer erroneously respect
  priorities. dmdas has to be used to respect priorities.
* StarPU-MPI: Fix potential bug for user-defined datatypes. As MPI
  can reorder messages, we need to make sure the sending of the size
  of the data has been completed.
* Documentation is now generated through doxygen.
* Modification of the perfmodel output format for future improvements.
* Fix for properly dealing with NaN on Windows systems
* Function starpu_sched_ctx_create() now takes a variable argument
  list to define the scheduler to be used, and the minimum and
  maximum priority values
* The functions starpu_sched_set/get_min/max_priority set/get the
  priorities of the current scheduling context, i.e. the one which
  was set by a call to starpu_sched_ctx_set_context() or the initial
  context if that function has not been called yet.
* MPI: Fix of the livelock issue discovered while executing applications
  on a CPU+GPU cluster of machines by adding a maximum trylock
  threshold before a blocking lock.
Small changes:
* STARPU_NCPU should now be used instead of STARPU_NCPUS. STARPU_NCPUS is
  still available for compatibility reasons.
* include/starpu.h includes all include/starpu_*.h files, applications
  therefore only need to have #include <starpu.h>
* Active task wait is now included in blocked time.
* Fix GCC plugin linking issues starting with GCC 4.7.
* Fix forcing calibration of never-calibrated archs.
* CUDA applications are no longer compiled with the "-arch sm_13"
  option. It is specifically added to applications which need it.
* Explicitly name the non-sleeping-non-running time "Overhead", and use
  another color in vite traces.
* Use C99 variadic macro support, not GNU.
* Fix performance regression: dmda queues were inadvertently made
  LIFOs in r9611.
* Use big fat abortions when one tries to make a task or callback
  sleep, instead of just returning EDEADLK which few people will test
* By default, StarPU FFT examples are not compiled and checked; the
  configure option --enable-starpufft-examples needs to be specified
  to change this behaviour.

StarPU 1.0.3 (svn revision 7379)
==============================================
Changes:
* Several bug fixes in the build system
* Bug fixes in source code for non-Linux systems
* Fix generating FxT traces bigger than 64MiB.
* Improve ENODEV error detection in StarPU FFT

StarPU 1.0.2 (svn revision 7210)
==============================================
Changes:
* Add starpu_block_shadow_filter_func_vector and an example.
* Add tag dependency in trace-generated DAG.
* Fix CPU binding for optimized CPU-GPU transfers.
* Fix parallel tasks CPU binding and combined worker generation.
* Fix generating FxT traces bigger than 64MiB.

StarPU 1.0.1 (svn revision 6659)
==============================================
Changes:
* hwloc support. Warn users when hwloc is not found on the system and
  produce an error when not explicitly disabled.
* Several bug fixes
* GCC plug-in
  - Add `#pragma starpu release'
  - Fix bug when using `acquire' pragma with function parameters
  - Slightly improve test suite coverage
  - Relax the GCC version check
* Update SOCL to use new API
* Documentation improvement.

StarPU 1.0.0 (svn revision 6306)
==============================================
The extensions-again release
New features:
* Add SOCL, an OpenCL interface on top of StarPU.
* Add a gcc plugin to extend the C interface with pragmas which make it
  easy to define codelets and issue tasks.
* Add reduction mode to starpu_mpi_insert_task.
* A new multi-format interface permits using different binary formats
  on CPUs & GPUs, the conversion functions being provided by the
  application and called by StarPU as needed (and as rarely as
  possible).
* Deprecate cost_model, and introduce cost_function, which is provided
  with the whole task structure, the target arch and implementation
  number.
* Permit the application to provide its own size base for performance
  models.
* Applications can provide several implementations of a codelet for the
  same architecture (see the sketch after this list).
* Add a StarPU-Top feedback and steering interface.
* Allow specifying MPI tags for more efficient starpu_mpi_insert_task
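
A minimal sketch of a codelet providing several CPU implementations of the
same operation, as described above; the kernel bodies are placeholders:

    #include <starpu.h>

    /* Two hypothetical CPU implementations of the same operation;
       StarPU can calibrate both and pick the fastest one. */
    static void scal_cpu_ref(void *buffers[], void *cl_arg) { (void)buffers; (void)cl_arg; }
    static void scal_cpu_sse(void *buffers[], void *cl_arg) { (void)buffers; (void)cl_arg; }

    static struct starpu_codelet scal_cl =
    {
        .cpu_funcs = { scal_cpu_ref, scal_cpu_sse },
        .nbuffers  = 1,
        .modes     = { STARPU_RW },
    };
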
Changes:
* Fix several memory leaks and race conditions
* Make environment variables take precedence over the configuration
  passed to starpu_init()
* Libtool interface versioning has been included in library names
  (libstarpu-1.0.so, libstarpumpi-1.0.so,
  libstarpufft-1.0.so, libsocl-1.0.so)
* Install headers under $includedir/starpu/1.0.
* Make the where field of struct starpu_codelet optional. When unset, its
  value will be automatically set based on the availability of the
  different XXX_funcs fields of the codelet.
* Define access modes for data handles in starpu_codelet and no longer
  in starpu_task. Hence mark (struct starpu_task).buffers as
  deprecated, and add (struct starpu_task).handles and (struct
  starpu_codelet).modes (see the sketch after this list).
* Fields xxx_func of struct starpu_codelet are made deprecated. One
  should use fields xxx_funcs instead.
* Some types were renamed for consistency. When using pkg-config libstarpu,
  starpu_deprecated_api.h is automatically included (after starpu.h) to
  keep compatibility with existing software. Other changes are mentioned
  below; compatibility is also preserved for them.
  To port code to use the new names (this is not mandatory), the
  tools/dev/rename.sh script can be used, and pkg-config starpu-1.0 should
  be used.
* The communication cost in the heft and dmda scheduling strategies now
  takes into account the contention brought by the number of GPUs. This
  changes the meaning of the beta factor, whose default 1.0 value should
  now be good enough in most cases.
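
A minimal sketch of the new layout described above: access modes declared in
the codelet, handles carried by the task (task->buffers being deprecated);
kernel and data are placeholders:

    #include <starpu.h>

    static void axpy_cpu(void *buffers[], void *cl_arg) { (void)buffers; (void)cl_arg; }

    static struct starpu_codelet axpy_cl =
    {
        .cpu_funcs = { axpy_cpu },
        .nbuffers  = 2,
        /* Access modes now live in the codelet... */
        .modes     = { STARPU_R, STARPU_RW },
    };

    int submit_axpy(starpu_data_handle_t x, starpu_data_handle_t y)
    {
        struct starpu_task *task = starpu_task_create();
        task->cl = &axpy_cl;
        /* ...while the task only lists the handles. */
        task->handles[0] = x;
        task->handles[1] = y;
        return starpu_task_submit(task);
    }
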
Small features:
* Allow users to disable asynchronous data transfers between CPUs and
  GPUs.
* Update OpenCL driver to enable CPU devices (the environment variable
  STARPU_OPENCL_ON_CPUS must be set to a positive value when
  executing an application)
* struct starpu_data_interface_ops --- operations on a data
  interface --- defines a new function pointer allocate_new_data
  which creates a new data interface of the given type based on
  an existing handle
* Add a field named magic to struct starpu_task which is set when
  initialising the task. starpu_task_submit will fail if the
  field does not have the right value. This will hence avoid
  submitting tasks which have not been properly initialised.
* Add a hook function pre_exec_hook in struct starpu_sched_policy.
  The function is meant to be called in drivers. Schedulers
  can use it to be notified when a task is about to be computed.
* Add codelet execution time statistics plot.
* Add bus speed in starpu_machine_display.
* Add a STARPU_DATA_ACQUIRE_CB macro which permits inlining the code
  to be executed once the data is acquired.
* Add gdb functions.
* Add complex support to LU example.
* Allow using the same data several times in write mode in the
  parameters of the same task.
Small changes:
* Increase default value for STARPU_MAXCPUS -- maximum number of
  CPUs supported -- to 64.
* Add man pages for some of the tools
* Add C++ application example in examples/cpp/
* Add an OpenMP fork-join example.
* Documentation improvement.

StarPU 0.9 (svn revision 3721)
==============================================
The extensions release
* Provide the STARPU_REDUX data access mode
* Externalize the scheduler API.
* Add theoretical bound computation
* Add the void interface
* Add power consumption optimization
* Add parallel task support
* Add starpu_mpi_insert_task
* Add profiling information interface.
* Add STARPU_LIMIT_GPU_MEM environment variable.
* OpenCL fixes
* MPI fixes
* Improve optimization documentation
* Upgrade to hwloc 1.1 interface
* Add Fortran example
* Add Mandelbrot OpenCL example
* Add cg example
* Add stencil MPI example
* Initial support for CUDA4

StarPU 0.4 (svn revision 2535)
==============================================
The API strengthening release
* Major API improvements
  - Provide the STARPU_SCRATCH data access mode
  - Rework data filter interface
  - Rework data interface structure
  - A script that automatically renames old functions to accommodate the new
    API is available from https://scm.gforge.inria.fr/svn/starpu/scripts/renaming
    (login: anonsvn, password: anonsvn)
* Implement dependencies between tasks directly (e.g. without tags)
* Implicit data-driven task dependencies simplify the design of
  data-parallel algorithms
* Add dynamic profiling capabilities
  - Provide per-task feedback
  - Provide per-worker feedback
  - Provide feedback about memory transfers
* Provide a library to help accelerate MPI applications
* Improve data transfer overhead prediction
  - Transparently benchmark buses to generate performance models
  - Bind accelerator-controlling threads with respect to NUMA locality
* Improve StarPU's portability
  - Add OpenCL support
  - Add support for Windows

StarPU 0.2.901 aka 0.3-rc1 (svn revision 1236)
==============================================
The asynchronous heterogeneous multi-accelerator release
* Many API changes and code cleanups
  - Implement starpu_worker_get_id
  - Implement starpu_worker_get_name
  - Implement starpu_worker_get_type
  - Implement starpu_worker_get_count
  - Implement starpu_display_codelet_stats
  - Implement starpu_data_prefetch_on_node
  - Expose the starpu_data_set_wt_mask function
* Support NVIDIA (heterogeneous) multi-GPU
* Add the data request mechanism
  - All data transfers use data requests now
  - Implement asynchronous data transfers
  - Implement prefetch mechanism
  - Chain data requests to support GPU->RAM->GPU transfers
* Make it possible to bypass the scheduler and to assign a task to a specific
  worker
* Support restartable tasks to re-instantiate task dependency graphs
* Improve performance prediction
  - Model data transfer overhead
  - One model is created for each accelerator
* Support for CUDA's driver API is deprecated
* The STARPU_WORKERS_CUDAID and STARPU_WORKERS_CPUID env. variables make it
  possible to specify where to bind the workers
* Use the hwloc library to detect the actual number of cores

StarPU 0.2.0 (svn revision 1013)
==============================================
The Stabilizing-the-Basics release
* Various API cleanups
* Mac OS X is now supported
* Add dynamic code loading facilities onto Cell's SPUs
* Improve performance analysis/feedback tools
* Applications can interact with StarPU tasks
  - The application may access/modify data managed by the DSM
  - The application may wait for the termination of a (set of) task(s)
* Initial documentation is added
* More examples are supplied

StarPU 0.1.0 (svn revision 794)
==============================================
First release.
Status:
* Only Linux platforms are supported so far
* Supported architectures
  - multicore CPUs
  - NVIDIA GPUs (with CUDA 2.x)
  - experimental Cell/BE support
Changes:
* Scheduling facilities
  - run-time selection of the scheduling policy
  - basic auto-tuning facilities
* Software-based DSM
  - transparent data coherency management
  - high-level expressive interface
# Local Variables:
# mode: text
# coding: utf-8
# ispell-local-dictionary: "american"
# End: