# StarPU --- Runtime system for heterogeneous multicore architectures.
#
# Copyright (C) 2009-2013 Université de Bordeaux 1
# Copyright (C) 2010, 2011, 2012, 2013 Centre National de la Recherche Scientifique
#
# StarPU is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or (at
# your option) any later version.
#
# StarPU is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# See the GNU Lesser General Public License in COPYING.LGPL for more details.

StarPU 1.2.0 (svn revision xxxx)
==============================================

New features:
* New function starpu_sched_ctx_exec_parallel_code to execute parallel
  code on the workers of the given scheduler context
* MPI:
  - New internal communication system: a unique tag is now used for
    all communications, and a system of hashmaps storing pending
    receives has been implemented on each node. Every message is now
    coupled with an envelope, sent before the corresponding data,
    which allows the receiver to allocate the data correctly and to
    post the receive matching the envelope.

StarPU 1.1.0 (svn revision xxxx)
==============================================

New features:
* OpenGL interoperability support.
* Capability to store compiled OpenCL kernels on the file system
* Capability to load compiled OpenCL kernels
* Performance model measurements can now be provided explicitly by
  applications.
* Capability to emit communication statistics when running MPI code
* Add starpu_data_unregister_submit, starpu_data_acquire_on_node and
  starpu_data_invalidate_submit
* New functionality in the starpu_insert_task wrapper to pass an array
  of data handles via the parameter STARPU_DATA_ARRAY
* Enable GPU-GPU direct transfers.
* GCC plug-in
  - Add `registered' attribute
  - A new pass was added that warns about the use of possibly
    unregistered memory buffers.
* SOCL
  - Manual mapping of commands on specific devices is now
    possible
  - SOCL does not require StarPU CPU tasks anymore. CPU workers
    are automatically disabled to enhance performance of OpenCL
    CPU devices
* New interface: COO matrix.
* Data interfaces: the pack operation of user-defined data interfaces
  defines a new parameter count which should be set to the size of
  the buffer created by packing the data.
* MPI:
  - Communication statistics for MPI can only be enabled at
    execution time by defining the environment variable
    STARPU_COMM_STATS
  - The communication cache mechanism is enabled by default, and can
    only be disabled at execution time by setting the
    environment variable STARPU_MPI_CACHE to 0.
  - The initialisation functions starpu_mpi_initialize_extended()
    and starpu_mpi_initialize() have been deprecated. One
    should now use starpu_mpi_init(int *, char ***, int). The
    last parameter indicates whether MPI should be initialised
    (see the sketch after this list).
  - Collective detached operations have new parameters, a
    callback function and an argument. This is to be consistent
    with the detached point-to-point communications.
  - When exchanging user-defined data interfaces, the size of
    the data is the size returned by the pack operation, i.e.
    data with dynamic size can now be exchanged with StarPU-MPI.
* Add experimental SimGrid support, to simulate execution with various
  numbers of CPUs, GPUs, amounts of memory, etc.
* Add support for OpenCL simulators (which provide simulated execution times)
* Add support for Temanejo, a task graph debugger
* The theoretical bound lp output now includes data transfer time.
* Update the OpenCL driver to only enable CPU devices (the environment
  variable STARPU_OPENCL_ONLY_ON_CPUS must be set to a positive
  value when executing an application)
* Add scheduling contexts to separate computation resources
  - Scheduling policies take into account the set of resources
    corresponding to the context they belong to
  - Add support to dynamically change scheduling contexts
    (create and delete a context, add workers to a context,
    remove workers from a context)
  - Add support to indicate to which contexts tasks are submitted
* Add the Hypervisor to manage scheduling contexts automatically
  - Contexts can be registered to the Hypervisor
  - Only registered contexts are managed by the Hypervisor
  - The Hypervisor can detect the initial distribution of resources of
    a context and construct it accordingly (the cost of execution is
    required)
  - Several policies can dynamically adapt the distribution of resources
    in contexts if the initial one was not appropriate
  - Add a platform to implement new resource redistribution policies
* Implement a memory manager which checks the global amount of
  memory available on devices, and checks there is enough memory
  before doing an allocation on the device.
* Discard environment variable STARPU_LIMIT_GPU_MEM and define
  instead STARPU_LIMIT_CUDA_MEM and STARPU_LIMIT_OPENCL_MEM
* Introduce new variables STARPU_LIMIT_CUDA_devid_MEM and
  STARPU_LIMIT_OPENCL_devid_MEM to limit memory per specific device
* Introduce new variable STARPU_LIMIT_CPU_MEM to limit memory for
  the CPU devices
* New function starpu_malloc_flags to define a memory allocation with
  constraints based on the following values (see the sketch after this
  list):
  - STARPU_MALLOC_PINNED specifies memory should be pinned
  - STARPU_MALLOC_COUNT specifies the memory allocation should be kept
    within the limits defined by the environment variables
    STARPU_LIMIT_xxx (see above). When no memory is left,
    starpu_malloc_flags tries to reclaim memory from StarPU and
    returns -ENOMEM on failure.
* starpu_malloc calls starpu_malloc_flags with the flag set
  to STARPU_MALLOC_PINNED
* Define new function starpu_free_flags similarly to starpu_malloc_flags
* Define a new public API starpu_pthread which is similar to the
  pthread API. It is provided with two implementations: a pthread one
  and a SimGrid one. Applications using StarPU and wishing to use
  the SimGrid StarPU features should use it.
* Allow a dynamically allocated number of buffers per task,
  and thus override the value defined by --enable-maxbuffers=XXX
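
A minimal sketch of the new StarPU-MPI initialisation mentioned in the
MPI item above. The build command and the pkg-config package name are
assumptions; adjust them to your installation.

    /* Build (assumed): mpicc init.c $(pkg-config --cflags --libs starpumpi-1.1) */
    #include <starpu.h>
    #include <starpu_mpi.h>

    int main(int argc, char **argv)
    {
        int ret = starpu_init(NULL);
        if (ret != 0)
            return 1;
        /* Third argument set to 1: StarPU-MPI initialises MPI itself;
         * pass 0 if the application already called MPI_Init. */
        ret = starpu_mpi_init(&argc, &argv, 1);
        if (ret != 0)
            return 1;

        /* ... register data and submit tasks here ... */

        starpu_mpi_shutdown();
        starpu_shutdown();
        return 0;
    }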
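
Similarly, a short sketch of starpu_malloc_flags/starpu_free_flags with
the memory-accounting flag; the allocation size is arbitrary.

    #include <errno.h>
    #include <stdio.h>
    #include <starpu.h>

    int main(void)
    {
        float *buffer;
        size_t size = 1024 * 1024 * sizeof(float);

        if (starpu_init(NULL) != 0)
            return 1;

        /* Pinned allocation, accounted against the STARPU_LIMIT_xxx limits;
         * -ENOMEM is returned when no memory can be reclaimed. */
        int ret = starpu_malloc_flags((void **)&buffer, size,
                                      STARPU_MALLOC_PINNED | STARPU_MALLOC_COUNT);
        if (ret == -ENOMEM)
            fprintf(stderr, "allocation refused by the memory manager\n");
        else
            starpu_free_flags(buffer, size,
                              STARPU_MALLOC_PINNED | STARPU_MALLOC_COUNT);

        starpu_shutdown();
        return 0;
    }
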
Small features:
* Add starpu_worker_get_by_type and starpu_worker_get_by_devid
* Add starpu_fxt_stop_profiling/starpu_fxt_start_profiling, which allow
  trace recording to be paused and resumed.
* Add a trace_buffer_size configuration field to allow specifying the
  tracing buffer size.
* Add starpu_codelet_profile and starpu_codelet_histo_profile, tools which
  draw the profile of a codelet.
* The file STARPU-REVISION --- containing the SVN revision number from which
  StarPU was compiled --- is installed in the share/doc/starpu directory
* starpu_perfmodel_plot can now directly draw GFlops curves.
* New configure option --enable-mpi-progression-hook to enable the
  activity polling method for StarPU-MPI.
* Allow disabling sequential consistency for a given task.
* New macro STARPU_RELEASE_VERSION
* New function starpu_get_version() to return the release version of
  StarPU as three integers (see the sketch after this list).
* Enable the data allocation cache by default
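
A one-line illustration of the starpu_get_version() call above; purely
illustrative.

    #include <stdio.h>
    #include <starpu.h>

    int main(void)
    {
        int major, minor, release;
        /* Fills in the three components of the StarPU release version. */
        starpu_get_version(&major, &minor, &release);
        printf("built against StarPU %d.%d.%d\n", major, minor, release);
        return 0;
    }
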
Changes:
* Rename all filter functions to follow the pattern
  starpu_DATATYPE_filter_FILTERTYPE. The script
  tools/dev/rename_filter.sh is provided to update your existing
  applications to the new filter function names.
* Various functions and datatypes have been renamed. The script
  tools/dev/rename.sh is provided to update your existing
  applications to use the new names. It is also possible to compile
  with the pkg-config package starpu-1.0 to keep using the old
  names. It is however recommended to update your code and to use
  the package starpu-1.1.
* Fix the block filter functions.
* Fix StarPU-MPI on Darwin.
* The FxT code can now be used on systems other than Linux.
* Keep only one hashtable implementation, common/uthash.h
* The cache of starpu_mpi_insert_task is fixed and thus now enabled by
  default.
* Improve starpu_machine_display output.
* Standardize object names in the performance model API
* SOCL
  - The virtual SOCL device has been removed
  - Automatic scheduling is still available for command queues not
    assigned to any device
  - Remove modified OpenCL headers. ICD is now the only supported
    way to use SOCL.
  - The SOCL test suite is only run when the environment variable
    SOCL_OCL_LIB_OPENCL is defined. It should contain the location
    of the libOpenCL.so file of the OCL ICD implementation.
* Fix a main memory leak on multiple unregister/re-register.
* Improve hwloc detection by configure
* Cell:
  - It is no longer possible to enable the Cell support via the
    gordon driver
  - Data interfaces no longer define functions to copy to and from
    SPU devices
  - Codelets no longer define a pointer for Gordon implementations
  - Gordon workers are no longer enabled
  - Gordon performance models are no longer enabled
* Fix data transfer arrows in Paje traces
* The "heft" scheduler no longer exists. Users should now pick "dmda"
  instead.
* StarPU can now use poti to generate Paje traces.
* Rename scheduling policy "parallel greedy" to "parallel eager"
* starpu_scheduler.h is no longer automatically included by
  starpu.h, it has to be manually included when needed
* New batch files to run StarPU applications with Microsoft Visual C
* Add examples/release/Makefile to test StarPU examples against an
  installed version of StarPU. That can also be used to test
  examples using a previous API.
* The tutorial is installed in ${docdir}/tutorial
* Schedulers eager_central_policy, dm and dmda no longer erroneously respect
  priorities. dmdas has to be used to respect priorities.
Small changes:
* STARPU_NCPU should now be used instead of STARPU_NCPUS. STARPU_NCPUS is
  still available for compatibility reasons.
* include/starpu.h includes all include/starpu_*.h files, applications
  therefore only need to have #include <starpu.h>
* Active task wait is now included in blocked time.
* Fix GCC plugin linking issues starting with GCC 4.7.
* Fix forcing calibration of never-calibrated archs.
* CUDA applications are no longer compiled with the "-arch sm_13"
  option. It is specifically added to applications which need it.

StarPU 1.0.3 (svn revision 7379)
==============================================

Changes:
* Several bug fixes in the build system
* Bug fixes in source code for non-Linux systems
* Fix generating FXT traces bigger than 64MiB.
* Improve ENODEV error detection in StarPU FFT

StarPU 1.0.2 (svn revision xxx)
==============================================

Changes:
* Add starpu_block_shadow_filter_func_vector and an example.
* Add tag dependency in trace-generated DAG.
* Fix CPU binding for optimized CPU-GPU transfers.
* Fix parallel tasks CPU binding and combined worker generation.
* Fix generating FXT traces bigger than 64MiB.

StarPU 1.0.1 (svn revision 6659)
==============================================

Changes:
* hwloc support. Warn users when hwloc is not found on the system and
  produce an error when it is not explicitly disabled.
* Several bug fixes
* GCC plug-in
  - Add `#pragma starpu release'
  - Fix bug when using `acquire' pragma with function parameters
  - Slightly improve test suite coverage
  - Relax the GCC version check
* Update SOCL to use the new API
* Documentation improvements.

StarPU 1.0.0 (svn revision 6306)
==============================================
The extensions-again release

New features:
* Add SOCL, an OpenCL interface on top of StarPU.
* Add a GCC plug-in to extend the C interface with pragmas, which allows
  codelets to be easily defined and tasks to be issued.
* Add a reduction mode to starpu_mpi_insert_task.
* A new multi-format interface permits using different binary formats
  on CPUs & GPUs, the conversion functions being provided by the
  application and called by StarPU as needed (and as rarely as
  possible).
* Deprecate cost_model, and introduce cost_function, which is provided
  with the whole task structure, the target arch and implementation
  number.
* Permit the application to provide its own size base for performance
  models.
* Applications can provide several implementations of a codelet for the
  same architecture.
* Add a StarPU-Top feedback and steering interface.
* Allow specifying MPI tags for more efficient starpu_mpi_insert_task
Changes:
* Fix several memory leaks and race conditions
* Make environment variables take precedence over the configuration
  passed to starpu_init()
* Libtool interface versioning has been included in library names
  (libstarpu-1.0.so, libstarpumpi-1.0.so,
  libstarpufft-1.0.so, libsocl-1.0.so)
* Install headers under $includedir/starpu/1.0.
* Make the where field of struct starpu_codelet optional. When unset, its
  value will be automatically set based on the availability of the
  different XXX_funcs fields of the codelet.
* Define access modes for data handles in starpu_codelet and no longer
  in starpu_task. Hence mark (struct starpu_task).buffers as
  deprecated, and add (struct starpu_task).handles and (struct
  starpu_codelet).modes (see the sketch after this list).
* Fields xxx_func of struct starpu_codelet are deprecated. One
  should use the fields xxx_funcs instead.
* Some types were renamed for consistency. When using pkg-config libstarpu,
  starpu_deprecated_api.h is automatically included (after starpu.h) to
  keep compatibility with existing software. Other changes are mentioned
  below; compatibility is also preserved for them.
  To port code to the new names (this is not mandatory), the
  tools/dev/rename.sh script can be used, and pkg-config starpu-1.0 should
  be used.
* The communication cost in the heft and dmda scheduling strategies now
  takes into account the contention brought by the number of GPUs. This
  changes the meaning of the beta factor, whose default value of 1.0 should
  now be good enough in most cases.
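
A minimal sketch of the new handles/modes fields mentioned above, using a
simple vector-scaling codelet; the where field is left unset and deduced
from the available cpu_funcs, as described earlier in this list.

    #include <stdint.h>
    #include <starpu.h>

    static void scal_cpu(void *buffers[], void *cl_arg)
    {
        (void)cl_arg;
        float *val = (float *)STARPU_VECTOR_GET_PTR(buffers[0]);
        unsigned i, n = STARPU_VECTOR_GET_NX(buffers[0]);
        for (i = 0; i < n; i++)
            val[i] *= 2.0f;
    }

    /* Access modes are now declared in the codelet (.modes), not in the
     * deprecated (struct starpu_task).buffers. */
    static struct starpu_codelet scal_cl =
    {
        .cpu_funcs = { scal_cpu, NULL },
        .nbuffers = 1,
        .modes = { STARPU_RW },
    };

    int main(void)
    {
        float vec[16] = { 1.0f, 2.0f, 3.0f };
        starpu_data_handle_t handle;

        if (starpu_init(NULL) != 0)
            return 1;
        starpu_vector_data_register(&handle, 0, (uintptr_t)vec, 16, sizeof(vec[0]));

        struct starpu_task *task = starpu_task_create();
        task->cl = &scal_cl;
        task->handles[0] = handle; /* replaces the deprecated task->buffers[0].handle */
        starpu_task_submit(task);

        starpu_task_wait_for_all();
        starpu_data_unregister(handle);
        starpu_shutdown();
        return 0;
    }
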
Small features:
* Allow users to disable asynchronous data transfers between CPUs and
  GPUs.
* Update the OpenCL driver to enable CPU devices (the environment variable
  STARPU_OPENCL_ON_CPUS must be set to a positive value when
  executing an application)
* struct starpu_data_interface_ops --- operations on a data
  interface --- defines a new function pointer allocate_new_data
  which creates a new data interface of the given type based on
  an existing handle
* Add a field named magic to struct starpu_task which is set when
  initialising the task. starpu_task_submit will fail if the
  field does not have the right value. This hence avoids
  submitting tasks which have not been properly initialised.
* Add a hook function pre_exec_hook in struct starpu_sched_policy.
  The function is meant to be called in drivers. Schedulers
  can use it to be notified when a task is about to be computed.
* Add a codelet execution time statistics plot.
* Add bus speed in starpu_machine_display.
* Add STARPU_DATA_ACQUIRE_CB, which permits inlining the code to be
  executed once the data has been acquired.
* Add gdb functions.
* Add complex support to the LU example.
* Allow using the same data several times in write mode among the
  parameters of the same task.
Small changes:
* Increase the default value for STARPU_MAXCPUS --- maximum number of
  CPUs supported --- to 64.
* Add man pages for some of the tools
* Add a C++ application example in examples/cpp/
* Add an OpenMP fork-join example.
* Documentation improvements.

StarPU 0.9 (svn revision 3721)
==============================================
The extensions release

* Provide the STARPU_REDUX data access mode
* Externalize the scheduler API.
* Add theoretical bound computation
* Add the void interface
* Add power consumption optimization
* Add parallel task support
* Add starpu_mpi_insert_task
* Add profiling information interface.
* Add the STARPU_LIMIT_GPU_MEM environment variable.
* OpenCL fixes
* MPI fixes
* Improve the optimization documentation
* Upgrade to the hwloc 1.1 interface
* Add a Fortran example
* Add a Mandelbrot OpenCL example
* Add a CG example
* Add a stencil MPI example
* Initial support for CUDA 4

StarPU 0.4 (svn revision 2535)
==============================================
The API strengthening release

* Major API improvements
  - Provide the STARPU_SCRATCH data access mode
  - Rework data filter interface
  - Rework data interface structure
  - A script that automatically renames old functions to accommodate the
    new API is available from https://scm.gforge.inria.fr/svn/starpu/scripts/renaming
    (login: anonsvn, password: anonsvn)
* Implement dependencies between tasks directly (e.g. without tags)
* Implicit data-driven task dependencies simplify the design of
  data-parallel algorithms
* Add dynamic profiling capabilities
  - Provide per-task feedback
  - Provide per-worker feedback
  - Provide feedback about memory transfers
* Provide a library to help accelerate MPI applications
* Improve data transfer overhead prediction
  - Transparently benchmark buses to generate performance models
  - Bind accelerator-controlling threads with respect to NUMA locality
* Improve StarPU's portability
  - Add OpenCL support
  - Add support for Windows

StarPU 0.2.901 aka 0.3-rc1 (svn revision 1236)
==============================================
The asynchronous heterogeneous multi-accelerator release

* Many API changes and code cleanups
  - Implement starpu_worker_get_id
  - Implement starpu_worker_get_name
  - Implement starpu_worker_get_type
  - Implement starpu_worker_get_count
  - Implement starpu_display_codelet_stats
  - Implement starpu_data_prefetch_on_node
  - Expose the starpu_data_set_wt_mask function
* Support NVIDIA (heterogeneous) multi-GPU
* Add the data request mechanism
  - All data transfers now use data requests
  - Implement asynchronous data transfers
  - Implement a prefetch mechanism
  - Chain data requests to support GPU->RAM->GPU transfers
* Make it possible to bypass the scheduler and to assign a task to a
  specific worker
* Support restartable tasks to reinstantiate dependency task graphs
* Improve performance prediction
  - Model data transfer overhead
  - One model is created for each accelerator
* Support for CUDA's driver API is deprecated
* The STARPU_WORKERS_CUDAID and STARPU_WORKERS_CPUID environment variables
  make it possible to specify where to bind the workers
* Use the hwloc library to detect the actual number of cores

StarPU 0.2.0 (svn revision 1013)
==============================================
The Stabilizing-the-Basics release

* Various API cleanups
* Mac OS X is now supported
* Add dynamic code loading facilities onto Cell's SPUs
* Improve performance analysis/feedback tools
* Applications can interact with StarPU tasks
  - The application may access/modify data managed by the DSM
  - The application may wait for the termination of a (set of) task(s)
* Initial documentation is added
* More examples are supplied

StarPU 0.1.0 (svn revision 794)
==============================================
First release.

Status:
* Only Linux platforms are supported so far
* Supported architectures
  - multicore CPUs
  - NVIDIA GPUs (with CUDA 2.x)
  - experimental Cell/BE support

Changes:
* Scheduling facilities
  - run-time selection of the scheduling policy
  - basic auto-tuning facilities
* Software-based DSM
  - transparent data coherency management
  - high-level expressive interface

# Local Variables:
# mode: text
# coding: utf-8
# ispell-local-dictionary: "american"
# End: