# StarPU --- Runtime system for heterogeneous multicore architectures.
#
# Copyright (C) 2009, 2010, 2011  Université de Bordeaux 1
# Copyright (C) 2010, 2011, 2012  Centre National de la Recherche Scientifique
#
# StarPU is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or (at
# your option) any later version.
#
# StarPU is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# See the GNU Lesser General Public License in COPYING.LGPL for more details.

StarPU 1.0 (svn revision xxxx)
==============================================
The extensions-again release

  * Make the where field of struct starpu_codelet optional. When unset, its
    value is automatically deduced from which XXX_funcs fields of the
    codelet are set (see the sketch after this list).
  * Add a hook function pre_exec_hook to struct starpu_sched_policy.
    The function is meant to be called by drivers. Schedulers can use it
    to be notified when a task is about to be executed.
  * Define access modes for data handles in starpu_codelet and no longer
    in starpu_task. Hence mark (struct starpu_task).buffers as deprecated,
    and add (struct starpu_task).handles and (struct starpu_codelet).modes
    (also shown in the sketch after this list).
  * Install headers under $includedir/starpu/1.0.
  * Deprecate cost_model and introduce cost_function, which is provided
    with the whole task structure, the target architecture and the
    implementation number (also shown in the sketch after this list).
  * Permit the application to provide its own size base for performance
    models.
  * The xxx_func fields of struct starpu_codelet are deprecated; the
    xxx_funcs fields should be used instead.
  * Applications can provide several implementations of a codelet for the
    same architecture.
  * A new multi-format interface permits using different binary formats
    on CPUs and GPUs, the conversion functions being provided by the
    application and called by StarPU as needed (and as rarely as
    possible).
  * Add a gcc plugin that extends the C interface with pragmas to easily
    define codelets and submit tasks.
  * Add codelet execution time statistics plot.
  * Add bus speed in starpu_machine_display.
  * Add a StarPU-Top feedback and steering interface.
  * Documentation improvements.
  * Add the STARPU_DATA_ACQUIRE_CB macro, which permits inlining the code
    to be executed once the data is acquired (see the second sketch after
    this list).
  * Permit MPI tags to be specified for more efficient
    starpu_mpi_insert_task.
  * Add SOCL, an OpenCL interface on top of StarPU.
  * Add gdb functions.
  * Add complex support to the LU example.
  * Add an OpenMP fork-join example.
  * Permit the same data to be used several times in write mode among the
    parameters of the same task.
  * Some types were renamed for consistency. The tools/dev/rename.sh
    script can be used to port code using the former names. You can also
    choose to include starpu_deprecated_api.h (after starpu.h) to keep
    using the old types.
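
  A minimal sketch of the 1.0 codelet style described above (the kernel
  names, the vector size and the constant cost estimate are made up for
  illustration): the where field is left unset, two CPU implementations
  are listed in cpu_funcs, access modes and a cost_function-based
  performance model live in the codelet, and the task uses handles
  instead of the deprecated buffers field.

    #include <stdint.h>
    #include <starpu.h>

    /* Two hypothetical CPU implementations of the same kernel. */
    static void scal_cpu(void *buffers[], void *cl_arg)
    {
        (void) cl_arg;
        unsigned n = STARPU_VECTOR_GET_NX(buffers[0]);
        float *v = (float *) STARPU_VECTOR_GET_PTR(buffers[0]);
        for (unsigned i = 0; i < n; i++)
            v[i] *= 2.0f;
    }

    static void scal_cpu_alt(void *buffers[], void *cl_arg)
    {
        /* Alternative implementation (e.g. a vectorized variant),
         * with the same semantics as scal_cpu. */
        scal_cpu(buffers, cl_arg);
    }

    /* Cost function: receives the whole task and the implementation
     * number.  A real model would inspect the task's data; a constant
     * expected duration keeps the sketch short. */
    static double scal_cost(struct starpu_task *task, unsigned nimpl)
    {
        (void) task; (void) nimpl;
        return 10.0;
    }

    static struct starpu_perfmodel scal_model =
    {
        .type = STARPU_COMMON,
        .cost_function = scal_cost,
    };

    static struct starpu_codelet scal_cl =
    {
        /* .where left unset: deduced from the *_funcs fields. */
        .cpu_funcs = { scal_cpu, scal_cpu_alt, NULL },
        .nbuffers = 1,
        .modes = { STARPU_RW },  /* access mode now lives in the codelet */
        .model = &scal_model,
    };

    int main(void)
    {
        float v[1024];
        for (unsigned i = 0; i < 1024; i++)
            v[i] = 1.0f;

        if (starpu_init(NULL) != 0)
            return 1;

        starpu_data_handle_t v_handle;
        starpu_vector_data_register(&v_handle, 0, (uintptr_t) v,
                                    1024, sizeof(v[0]));

        struct starpu_task *task = starpu_task_create();
        task->cl = &scal_cl;
        task->handles[0] = v_handle;  /* instead of task->buffers */
        starpu_task_submit(task);

        starpu_task_wait_for_all();
        starpu_data_unregister(v_handle);
        starpu_shutdown();
        return 0;
    }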
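
  The STARPU_DATA_ACQUIRE_CB macro can then be used as in the following
  fragment (assuming the v array and v_handle from the sketch above, and
  <stdio.h>): the code block is run from a callback once the data is
  available in the requested mode, and the handle is released
  automatically afterwards.

    /* Print the first element once all previously submitted tasks
     * working on v_handle have completed. */
    STARPU_DATA_ACQUIRE_CB(v_handle, STARPU_R,
                           printf("v[0] = %f\n", v[0]));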

StarPU 0.9 (svn revision 3721)
==============================================
The extensions release

  * Provide the STARPU_REDUX data access mode.
  * Externalize the scheduler API.
  * Add theoretical bound computation.
  * Add the void interface.
  * Add power consumption optimization.
  * Add parallel task support.
  * Add starpu_mpi_insert_task (see the sketch after this list).
  * Add profiling information interface.
  * Add the STARPU_LIMIT_GPU_MEM environment variable.
  * OpenCL fixes.
  * MPI fixes.
  * Improve optimization documentation.
  * Upgrade to the hwloc 1.1 interface.
  * Add Fortran example.
  * Add mandelbrot OpenCL example.
  * Add cg example.
  * Add stencil MPI example.
  * Initial support for CUDA 4.
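
  A minimal sketch of a starpu_mpi_insert_task call, using the 1.0 type
  names (the codelet cl, the handle data_handle and the helper name
  submit_update are hypothetical; the handle's MPI rank and tag are
  assumed to have been assigned elsewhere):

    #include <mpi.h>
    #include <starpu.h>
    #include <starpu_mpi.h>

    /* Submit one task; StarPU-MPI executes it on the node owning the
     * data and automatically posts the transfers its dependencies need. */
    static void submit_update(struct starpu_codelet *cl,
                              starpu_data_handle_t data_handle)
    {
        starpu_mpi_insert_task(MPI_COMM_WORLD, cl,
                               STARPU_RW, data_handle,
                               0);  /* the argument list is 0-terminated */
    }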

StarPU 0.4 (svn revision 2535)
==============================================
The API strengthening release

  * Major API improvements:
    - Provide the STARPU_SCRATCH data access mode.
    - Rework the data filter interface.
    - Rework the data interface structure.
    - A script that automatically renames old functions to conform to the
      new API is available from
      https://scm.gforge.inria.fr/svn/starpu/scripts/renaming
      (login: anonsvn, password: anonsvn).
  * Implement dependencies between tasks directly (e.g. without tags).
  * Implicit data-driven task dependencies simplify the design of
    data-parallel algorithms.
  * Add dynamic profiling capabilities:
    - Provide per-task feedback.
    - Provide per-worker feedback.
    - Provide feedback about memory transfers.
  * Provide a library to help accelerate MPI applications.
  * Improve data transfer overhead prediction:
    - Transparently benchmark buses to generate performance models.
    - Bind accelerator-controlling threads with respect to NUMA locality.
  * Improve StarPU's portability:
    - Add OpenCL support.
    - Add support for Windows.

StarPU 0.2.901 aka 0.3-rc1 (svn revision 1236)
==============================================
The asynchronous heterogeneous multi-accelerator release

  * Many API changes and code cleanups:
    - Implement starpu_worker_get_id.
    - Implement starpu_worker_get_name.
    - Implement starpu_worker_get_type.
    - Implement starpu_worker_get_count (see the sketch after this list).
    - Implement starpu_display_codelet_stats.
    - Implement starpu_data_prefetch_on_node.
    - Expose the starpu_data_set_wt_mask function.
  * Support NVIDIA (heterogeneous) multi-GPU configurations.
  * Add the data request mechanism:
    - All data transfers now go through data requests.
    - Implement asynchronous data transfers.
    - Implement a prefetch mechanism.
    - Chain data requests to support GPU->RAM->GPU transfers.
  * Make it possible to bypass the scheduler and assign a task to a
    specific worker.
  * Support restartable tasks to re-instantiate dependency task graphs.
  * Improve performance prediction:
    - Model data transfer overhead.
    - One model is created for each accelerator.
  * Support for CUDA's driver API is deprecated.
  * The STARPU_WORKERS_CUDAID and STARPU_WORKERS_CPUID environment
    variables make it possible to specify where to bind the workers.
  * Use the hwloc library to detect the actual number of cores.
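
  A small sketch using the worker query functions listed above (with
  their current names) to print the workers StarPU detected:

    #include <stdio.h>
    #include <starpu.h>

    int main(void)
    {
        if (starpu_init(NULL) != 0)
            return 1;

        /* One entry per CPU core or accelerator-driving thread. */
        unsigned nworkers = starpu_worker_get_count();
        for (int id = 0; id < (int) nworkers; id++)
        {
            char name[64];
            starpu_worker_get_name(id, name, sizeof(name));
            printf("worker %d: %s\n", id, name);
        }

        starpu_shutdown();
        return 0;
    }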

StarPU 0.2.0 (svn revision 1013)
==============================================
The Stabilizing-the-Basics release

  * Various API cleanups.
  * Mac OS X is now supported.
  * Add dynamic code loading facilities onto the Cell's SPUs.
  * Improve performance analysis/feedback tools.
  * The application can interact with StarPU tasks:
    - The application may access/modify data managed by the DSM.
    - The application may wait for the termination of a (set of) task(s).
  * Initial documentation is added.
  * More examples are supplied.

StarPU 0.1.0 (svn revision 794)
==============================================
First release.

Status:
  * Only Linux platforms are supported so far.
  * Supported architectures:
    - multicore CPUs
    - NVIDIA GPUs (with CUDA 2.x)
    - experimental Cell/BE support

Changes:
  * Scheduling facilities
    - run-time selection of the scheduling policy
    - basic auto-tuning facilities
  * Software-based DSM
    - transparent data coherency management
    - high-level expressive interface