/*
 * This file is part of the StarPU Handbook.
 * Copyright (C) 2009--2011 Université de Bordeaux 1
 * Copyright (C) 2010, 2011, 2012, 2013 Centre National de la Recherche Scientifique
 * Copyright (C) 2011, 2012 Institut National de Recherche en Informatique et Automatique
 * See the file version.doxy for copying conditions.
 */
/*! \mainpage Introduction

\section Motivation Motivation

\internal
complex machines with heterogeneous cores/devices
\endinternal

The use of specialized hardware such as accelerators or coprocessors offers an
interesting approach to overcome the physical limits encountered by processor
architects. As a result, many machines are now equipped with one or several
accelerators (e.g. a GPU), in addition to the usual processor(s). While much
effort has been devoted to offloading computation onto such accelerators, very
little attention has been paid to portability concerns on the one hand, and to the
possibility of having heterogeneous accelerators and processors interact on the other hand.

StarPU is a runtime system that offers support for heterogeneous multicore
architectures. It not only offers a unified view of the computational resources
(i.e. CPUs and accelerators at the same time), but also takes care of
efficiently mapping and executing tasks onto a heterogeneous machine while
transparently handling low-level issues such as data transfers in a portable
fashion.

\internal
this leads to a complicated distributed memory design
which is not (easily) manageable by hand

added value/benefits of StarPU
- portability
- scheduling, perf. portability
\endinternal
\section StarPUInANutshell StarPU in a Nutshell

StarPU is a software tool allowing programmers to exploit the
computing power of the available CPUs and GPUs, while relieving them
from the need to specially adapt their programs to the target machine
and processing units.

At the core of StarPU is its run-time support library, which is
responsible for scheduling application-provided tasks on heterogeneous
CPU/GPU machines. In addition, StarPU comes with programming language
support, in the form of extensions to languages of the C family
(\ref cExtensions), as well as an OpenCL front-end (\ref SOCLOpenclExtensions).

StarPU's run-time and programming language extensions support a
task-based programming model. Applications submit computational
tasks, with CPU and/or GPU implementations, and StarPU schedules these
tasks and associated data transfers on available CPUs and GPUs. The
data that a task manipulates are automatically transferred among
accelerators and the main memory, so that programmers are freed from the
scheduling issues and technical details associated with these transfers.

StarPU takes particular care of scheduling tasks efficiently, using
well-known algorithms from the literature (\ref TaskSchedulingPolicy).
In addition, it allows scheduling experts, such as compiler or
computational library developers, to implement custom scheduling
policies in a portable fashion (\ref DefiningANewSchedulingPolicy).
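As a quick illustration, a scheduling policy can be selected by name at
initialization time. The sketch below assumes a StarPU 1.x installation and
picks the "dmda" policy; the environment variable \c STARPU_SCHED may take
precedence over this programmatic choice.

```c
#include <starpu.h>

int main(void)
{
	struct starpu_conf conf;

	/* Fill the configuration structure with default values */
	starpu_conf_init(&conf);

	/* Ask for the "dmda" (deque model data aware) scheduling policy */
	conf.sched_policy_name = "dmda";

	if (starpu_init(&conf) != 0)
		return 1;

	/* ... submit tasks here ... */

	starpu_shutdown();
	return 0;
}
```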
The remainder of this section describes the main concepts used in StarPU.

\internal
explain the notion of codelet and task (i.e. g(A, B)
\endinternal
\subsection CodeletAndTasks Codelet and Tasks

One of StarPU's primary data structures is the \b codelet. A codelet describes a
computational kernel that can possibly be implemented on multiple architectures
such as a CPU, a CUDA device or an OpenCL device.

\internal
TODO insert illustration f: f_spu, f_cpu, ...
\endinternal
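As a rough sketch (assuming a StarPU 1.x installation, with hypothetical
kernel functions \c scal_cpu_func and \c scal_cuda_func), a codelet with both
a CPU and a CUDA implementation could look as follows:

```c
#include <starpu.h>

/* Hypothetical kernel implementations; each receives the data buffers
 * of the task and an optional argument pointer */
void scal_cpu_func(void *buffers[], void *cl_arg);
void scal_cuda_func(void *buffers[], void *cl_arg);

/* A codelet gathering a CPU and a CUDA implementation of the same kernel,
 * operating on one buffer in read-write mode */
static struct starpu_codelet scal_cl =
{
	.cpu_funcs = { scal_cpu_func, NULL },
	.cuda_funcs = { scal_cuda_func, NULL },
	.nbuffers = 1,
	.modes = { STARPU_RW }
};
```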
Another important data structure is the \b task. Executing a StarPU task
consists in applying a codelet on a data set, on one of the architectures on
which the codelet is implemented. A task thus describes the codelet that it
uses, but also which data are accessed, and how they are
accessed during the computation (read and/or write).

StarPU tasks are asynchronous: submitting a task to StarPU is a non-blocking
operation. The task structure can also specify a \b callback function that is
called once StarPU has properly executed the task. It also contains optional
fields that the application may use to give hints to the scheduler (such as
priority levels).
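A minimal sketch of task submission (assuming a hypothetical codelet
\c scal_cl, a registered data handle \c vector_handle and a callback
\c my_callback):

```c
struct starpu_task *task = starpu_task_create();

task->cl = &scal_cl;               /* the codelet to execute */
task->handles[0] = vector_handle;  /* the data the task works on */
task->callback_func = my_callback; /* called once the task has completed */
task->priority = STARPU_MAX_PRIO;  /* optional hint to the scheduler */

/* Submission is non-blocking: the call returns immediately and the task
 * runs whenever its dependencies are fulfilled */
starpu_task_submit(task);
```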
By default, task dependencies are inferred from data dependencies (sequential
coherency) by StarPU. The application can however disable sequential coherency
for some data, and dependencies can then be expressed by hand.

A task may be identified by a unique 64-bit number chosen by the application,
which we refer to as a \b tag.
Task dependencies can be enforced by hand either by means of callback functions, by
submitting other tasks, or by expressing dependencies
between tags (which can thus correspond to tasks that have not been submitted
yet).
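For instance (a sketch; the tag values and the codelet \c some_cl are
arbitrary), a task can be given a tag and made to depend on other tags, even
before the corresponding tasks are submitted:

```c
/* The task with tag 42 will not start before the tasks
 * associated with tags 40 and 41 have completed */
starpu_tag_declare_deps((starpu_tag_t)42, 2,
			(starpu_tag_t)40, (starpu_tag_t)41);

struct starpu_task *task = starpu_task_create();
task->cl = &some_cl;  /* hypothetical codelet */
task->use_tag = 1;
task->tag_id = 42;
starpu_task_submit(task);
```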
\internal
TODO insert illustration f(Ar, Brw, Cr) + ..
\endinternal

\internal
DSM
\endinternal
\subsection StarPUDataManagementLibrary StarPU Data Management Library

Because StarPU schedules tasks at runtime, data transfers have to be
done automatically and ``just-in-time'' between processing units,
relieving the application programmer from explicit data transfers.
Moreover, to avoid unnecessary transfers, StarPU keeps data
where it was last needed, even if it was modified there, and it
allows multiple copies of the same data to reside at the same time on
several processing units as long as it is not modified.
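To let StarPU manage a piece of data, the application registers it and
obtains a handle. A sketch for a plain vector (assuming \c NX elements of
type \c float allocated in main memory):

```c
float vector[NX];
starpu_data_handle_t vector_handle;

/* Register the vector with the data management library; node 0 is the
 * main memory node where the data initially resides */
starpu_vector_data_register(&vector_handle, 0,
			    (uintptr_t)vector, NX, sizeof(vector[0]));

/* ... submit tasks accessing vector_handle ... */

/* Wait for pending tasks and get the data back to main memory */
starpu_data_unregister(vector_handle);
```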
\section ApplicationTaskification Application Taskification

TODO

\internal
TODO: section describing what taskifying an application means: before
porting to StarPU, turn the program into:
"pure" functions, which only access data from their passed parameters
a main function which just calls these pure functions
and then it's trivial to use StarPU or any other kind of task-based library:
simply replace calling the function with submitting a task.
\endinternal
\section Glossary Glossary

A \b codelet records pointers to various implementations of the same
theoretical function.

A <b>memory node</b> can be either the main RAM or GPU-embedded memory.

A \b bus is a link between memory nodes.

A <b>data handle</b> keeps track of replicates of the same data (\b registered by the
application) over various memory nodes. The data management library manages
keeping them coherent.

The \b home memory node of a data handle is the memory node from which the data
was registered (usually the main memory node).

A \b task represents a scheduled execution of a codelet on some data handles.

A \b tag is a rendez-vous point. Tasks typically have their own tag, and can
depend on other tags. The value is chosen by the application.

A \b worker executes tasks. There is typically one per CPU computation core and
one per accelerator (for which a whole CPU core is dedicated).

A \b driver drives a given kind of worker. There are currently CPU, CUDA,
and OpenCL drivers. They usually start several workers to actually drive
them.

A <b>performance model</b> is a (dynamic or static) model of the performance of a
given codelet. Codelets can have execution time performance models as well as
power consumption performance models.

A data \b interface describes the layout of the data: for a vector, a pointer
to the start, the number of elements and the size of elements; for a matrix, a
pointer to the start, the number of elements per row, the offset between rows,
and the size of each element; etc. To access their data, codelet functions are
given interfaces for the local memory node replicates of the data handles of the
scheduled task.
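For instance, a sketch of a hypothetical CPU kernel for a vector handle
(the accessor macros below belong to the vector interface):

```c
void scal_cpu_func(void *buffers[], void *cl_arg)
{
	/* buffers[0] is the vector interface of the first data handle
	 * of the task, replicated on the local memory node */
	struct starpu_vector_interface *vector = buffers[0];

	unsigned n = STARPU_VECTOR_GET_NX(vector);
	float *val = (float *)STARPU_VECTOR_GET_PTR(vector);

	unsigned i;
	for (i = 0; i < n; i++)
		val[i] *= 2.0f;

	(void)cl_arg; /* unused here */
}
```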
\b Partitioning data means dividing the data of a given data handle (called
\b father) into a series of \b children data handles which designate various
portions of the former.

A \b filter is the function which computes children data handles from a father
data handle, and thus describes how the partitioning should be done (horizontal,
vertical, etc.).
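As a sketch (assuming a previously registered \c vector_handle; the filter
function name may differ between StarPU versions), a vector can for instance
be partitioned into equal blocks with a predefined filter:

```c
/* Split the vector into 4 equal children using the predefined
 * block filter for the vector interface */
struct starpu_data_filter f =
{
	.filter_func = starpu_vector_filter_block,
	.nchildren = 4
};
starpu_data_partition(vector_handle, &f);

/* Retrieve the i-th child handle, usable like any data handle */
starpu_data_handle_t sub = starpu_data_get_sub_data(vector_handle, 1, 0);

/* ... submit tasks on the children ... */

/* Gather the pieces back into the father handle; node 0 here is
 * assumed to be the main memory node */
starpu_data_unpartition(vector_handle, 0);
```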
\b Acquiring a data handle can be done from the main application, to safely
access the data of a data handle from its home node, without having to
unregister it.
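A sketch, again assuming a registered \c vector_handle:

```c
/* Block until all pending tasks on the handle are done and the data
 * is available in the home node, then access it in read mode */
starpu_data_acquire(vector_handle, STARPU_R);

/* ... read the vector contents directly from the application ... */

/* Let StarPU use the data again */
starpu_data_release(vector_handle);
```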
\section ResearchPapers Research Papers

Research papers about StarPU can be found at
http://runtime.bordeaux.inria.fr/Publis/Keyword/STARPU.html.

A good overview is available in the research report at
http://hal.archives-ouvertes.fr/inria-00467677.
\section FurtherReading Further Reading

The documentation chapters include

<ol>
<li> Part: Using StarPU
<ul>
<li> \ref BuildingAndInstallingStarPU
<li> \ref BasicExamples
<li> \ref AdvancedExamples
<li> \ref HowToOptimizePerformanceWithStarPU
<li> \ref PerformanceFeedback
<li> \ref TipsAndTricksToKnowAbout
<li> \ref MPISupport
<li> \ref FFTSupport
<li> \ref cExtensions
<li> \ref SOCLOpenclExtensions
<li> \ref SchedulingContexts
<li> \ref SchedulingContextHypervisor
</ul>
</li>
<li> Part: Inside StarPU
<ul>
<li> \ref ExecutionConfigurationThroughEnvironmentVariables
<li> \ref CompilationConfiguration
<li> \ref ModuleDocumentation
<li> \ref deprecated
</ul>
</li>
<li> Part: Appendix
<ul>
<li> \ref FullSourceCodeVectorScal
<li> \ref GNUFreeDocumentationLicense
</ul>
</li>
</ol>

Make sure to have a look at those too!
*/