/*
 * This file is part of the StarPU Handbook.
 * Copyright (C) 2009--2011 Université de Bordeaux 1
 * Copyright (C) 2010, 2011, 2012, 2013 Centre National de la Recherche Scientifique
 * Copyright (C) 2011, 2012 Institut National de Recherche en Informatique et Automatique
 * See the file version.doxy for copying conditions.
 */
/*! \mainpage Introduction

\htmlonly
<h1><a class="anchor" id="Foreword"></a>Foreword</h1>
\endhtmlonly

\htmlinclude version.html
\htmlinclude foreword.html
\section Motivation Motivation

// This is a comment and it will be removed before the file is processed by doxygen
// complex machines with heterogeneous cores/devices
The use of specialized hardware such as accelerators or coprocessors offers an
interesting approach to overcome the physical limits encountered by processor
architects. As a result, many machines are now equipped with one or several
accelerators (e.g. a GPU), in addition to the usual processor(s). While a lot of
effort has been devoted to offloading computation onto such accelerators, very
little attention has been paid to portability concerns on the one hand, and to the
possibility of having heterogeneous accelerators and processors interact on the
other hand.

StarPU is a runtime system that offers support for heterogeneous multicore
architectures. It not only offers a unified view of the computational resources
(i.e. CPUs and accelerators at the same time), but also takes care of
efficiently mapping and executing tasks onto a heterogeneous machine while
transparently handling low-level issues such as data transfers in a portable
fashion.
// this leads to a complicated distributed memory design
// which is not (easily) manageable by hand
// added value/benefits of StarPU
// - portability
// - scheduling, perf. portability
\section StarPUInANutshell StarPU in a Nutshell

StarPU is a software tool allowing programmers to exploit the
computing power of the available CPUs and GPUs, while relieving them
of the need to specially adapt their programs to the target machine
and processing units.

At the core of StarPU is its run-time support library, which is
responsible for scheduling application-provided tasks on heterogeneous
CPU/GPU machines. In addition, StarPU comes with programming language
support, in the form of extensions to languages of the C family
(\ref cExtensions), as well as an OpenCL front-end (\ref SOCLOpenclExtensions).

StarPU's run-time support and programming language extensions support a
task-based programming model. Applications submit computational
tasks, with CPU and/or GPU implementations, and StarPU schedules these
tasks and the associated data transfers on available CPUs and GPUs. The
data that a task manipulates are automatically transferred among
accelerators and the main memory, so that programmers are freed from the
scheduling issues and technical details associated with these transfers.

StarPU takes particular care of scheduling tasks efficiently, using
well-known algorithms from the literature (\ref TaskSchedulingPolicy).
In addition, it allows scheduling experts, such as compiler or
computational library developers, to implement custom scheduling
policies in a portable fashion (\ref DefiningANewSchedulingPolicy).
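To give a first idea of what using the library looks like, here is a minimal
sketch of an application skeleton: it initializes StarPU, leaves room for data
registration and task submission, then waits for completion and shuts the
library down. Error handling is kept to a bare minimum.

\code
#include <starpu.h>

int main(void)
{
	/* Start StarPU with default settings: this spawns one worker
	 * per available CPU core and per accelerator. */
	int ret = starpu_init(NULL);
	if (ret != 0)
		return 1;

	/* ... register data and submit tasks here ... */

	/* Wait for all submitted tasks to complete, then release the
	 * resources held by StarPU. */
	starpu_task_wait_for_all();
	starpu_shutdown();
	return 0;
}
\endcode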
The remainder of this section describes the main concepts used in StarPU.

// explain the notion of codelet and task (i.e. g(A, B)
\subsection CodeletAndTasks Codelet and Tasks

One of StarPU's primary data structures is the \b codelet. A codelet describes a
computational kernel that can possibly be implemented on multiple architectures
such as a CPU, a CUDA device or an OpenCL device.

// TODO insert illustration f: f_spu, f_cpu, ...
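As a concrete illustration, here is a sketch of a codelet for the vector
scaling kernel used in the handbook examples (see \ref FullSourceCodeVectorScal
for a complete program). Only a CPU implementation is given here; the fields
<c>cuda_funcs</c> and <c>opencl_funcs</c> would list accelerator
implementations. The names <c>scal_cpu_func</c> and <c>scal_cl</c> are ours.

\code
#include <starpu.h>

/* CPU implementation of the kernel: scale a vector in place. */
void scal_cpu_func(void *buffers[], void *cl_arg)
{
	struct starpu_vector_interface *vector = buffers[0];
	float *val = (float *)STARPU_VECTOR_GET_PTR(vector);
	unsigned n = STARPU_VECTOR_GET_NX(vector);
	float *factor = cl_arg;
	unsigned i;

	for (i = 0; i < n; i++)
		val[i] *= *factor;
}

/* The codelet gathers the implementations of the kernel: it accesses
 * one data buffer, in read-write mode. */
static struct starpu_codelet scal_cl =
{
	.cpu_funcs = { scal_cpu_func, NULL },
	.nbuffers = 1,
	.modes = { STARPU_RW },
};
\endcode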
Another important data structure is the \b task. Executing a StarPU task
consists of applying a codelet to a data set, on one of the architectures on
which the codelet is implemented. A task thus describes the codelet that it
uses, but also which data are accessed, and how they are
accessed during the computation (read and/or write).

StarPU tasks are asynchronous: submitting a task to StarPU is a non-blocking
operation. The task structure can also specify a \b callback function that is
called once StarPU has properly executed the task. It also contains optional
fields that the application may use to give hints to the scheduler (such as
priority levels).
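A sketch of creating and submitting such a task, reusing the <c>scal_cl</c>
codelet above and assuming a <c>vector_handle</c> registered with the data
management library (see below):

\code
/* Hypothetical completion callback, called once the task is done. */
void my_callback(void *arg)
{
	(void)arg;
}

...
float factor = 3.14f;

struct starpu_task *task = starpu_task_create();
task->cl = &scal_cl;               /* codelet to execute */
task->handles[0] = vector_handle;  /* data accessed by the task */
task->cl_arg = &factor;            /* small argument for the codelet */
task->cl_arg_size = sizeof(factor);
task->callback_func = my_callback; /* optional completion callback */
task->priority = STARPU_MAX_PRIO;  /* optional hint to the scheduler */

/* Submission is non-blocking: the call returns immediately. */
starpu_task_submit(task);
\endcode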
By default, task dependencies are inferred from data dependencies (sequential
consistency) by StarPU. The application can however disable sequential
consistency for some data, and dependencies can then be expressed by hand.

A task may be identified by a unique 64-bit number chosen by the application,
which we refer to as a \b tag.
Task dependencies can be enforced by hand either by means of callback functions,
by submitting other tasks, or by expressing dependencies between tags (which can
thus correspond to tasks that have not been submitted yet).
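Sequential consistency can be disabled for a given data handle with
starpu_data_set_sequential_consistency_flag(). As an illustrative fragment of
tag dependencies (the tag values 40, 41 and 42 are arbitrary):

\code
struct starpu_task *task = starpu_task_create();
task->cl = &scal_cl;
task->use_tag = 1;  /* identify this task by an application-chosen tag */
task->tag_id = 42;

/* The task tagged 42 may only start once the tasks tagged 40 and 41
 * have completed, even if those have not been submitted yet. The
 * explicit casts matter: the variadic arguments must really have
 * type starpu_tag_t. */
starpu_tag_declare_deps((starpu_tag_t)42, 2,
                        (starpu_tag_t)40, (starpu_tag_t)41);

starpu_task_submit(task);
\endcode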
// TODO insert illustration f(Ar, Brw, Cr) + ..
// DSM
\subsection StarPUDataManagementLibrary StarPU Data Management Library

Because StarPU schedules tasks at runtime, data transfers have to be
done automatically and "just-in-time" between processing units,
relieving the application programmer from explicit data transfers.
Moreover, to avoid unnecessary transfers, StarPU keeps data
where it was last needed, even if it was modified there, and it
allows multiple copies of the same data to reside at the same time on
several processing units as long as it is not modified.
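For instance, registering a plain C array as a StarPU vector hands its
management over to the library; the array is then accessed through tasks. The
names <c>NX</c>, <c>vector</c> and <c>vector_handle</c> are ours.

\code
#define NX 1024
float vector[NX];
starpu_data_handle_t vector_handle;

/* Register the vector with StarPU; home node 0 is the main memory.
 * From now on, StarPU tracks where up-to-date replicates reside and
 * transfers them where tasks need them. */
starpu_vector_data_register(&vector_handle, 0,
                            (uintptr_t)vector, NX, sizeof(vector[0]));

/* ... submit tasks working on vector_handle ... */

/* Bring the data back into main memory and stop tracking it. */
starpu_data_unregister(vector_handle);
\endcode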
\section ApplicationTaskification Application Taskification

TODO

// TODO: section describing what taskifying an application means: before
// porting to StarPU, turn the program into:
// "pure" functions, which only access data from their passed parameters
// a main function which just calls these pure functions
// and then it's trivial to use StarPU or any other kind of task-based library:
// simply replace calling the function with submitting a task.
\section Glossary Glossary

A \b codelet records pointers to various implementations of the same
theoretical function.

A <b>memory node</b> can be either the main RAM, GPU-embedded memory or disk memory.

A \b bus is a link between memory nodes.

A <b>data handle</b> keeps track of replicates of the same data (\b registered by the
application) over various memory nodes. The data management library keeps them
coherent.

The \b home memory node of a data handle is the memory node from which the data
was registered (usually the main memory node).

A \b task represents a scheduled execution of a codelet on some data handles.

A \b tag is a rendez-vous point. Tasks typically have their own tag, and can
depend on other tags. The value is chosen by the application.

A \b worker executes tasks. There is typically one per CPU computation core and
one per accelerator (for which a whole CPU core is dedicated).

A \b driver drives a given kind of worker. There are currently CPU, CUDA,
and OpenCL drivers. They usually start several workers to actually drive
them.

A <b>performance model</b> is a (dynamic or static) model of the performance of a
given codelet. Codelets can have an execution time performance model as well as
a power consumption performance model.

A data \b interface describes the layout of the data: for a vector, a pointer
to the start, the number of elements and the size of each element; for a matrix, a
pointer to the start, the number of elements per row, the offset between rows,
and the size of each element; etc. To access their data, codelet functions are
given interfaces to the local memory node replicates of the data handles of the
scheduled task.
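For instance, a codelet function may access a matrix replicate through the
matrix interface accessors (a sketch; the variable names are ours):

\code
/* Inside a codelet function: buffers[0] is the interface of the first
 * data handle of the task, here assumed to be a matrix of floats. */
float *m    = (float *)STARPU_MATRIX_GET_PTR(buffers[0]);
unsigned nx = STARPU_MATRIX_GET_NX(buffers[0]); /* elements per row */
unsigned ny = STARPU_MATRIX_GET_NY(buffers[0]); /* number of rows */
unsigned ld = STARPU_MATRIX_GET_LD(buffers[0]); /* offset between rows */

/* The element at column x of row y is m[x + y*ld]. */
\endcode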
\b Partitioning data means dividing the data of a given data handle (called
\b father) into a series of \b children data handles which designate various
portions of the former.

A \b filter is the function which computes children data handles from a father
data handle, and thus describes how the partitioning should be done (horizontal,
vertical, etc.).
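For instance, a vector handle can be split into equal parts with the predefined
block filter (a sketch, reusing the <c>vector_handle</c> registered above):

\code
/* Describe the partitioning: cut the vector into 4 equal children. */
struct starpu_data_filter f =
{
	.filter_func = starpu_vector_filter_block,
	.nchildren = 4,
};
starpu_data_partition(vector_handle, &f);

/* Retrieve the second child (depth 1, index 1) of the father handle;
 * it can then be passed to tasks like any other data handle. */
starpu_data_handle_t sub = starpu_data_get_sub_data(vector_handle, 1, 1);
\endcode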
\b Acquiring a data handle can be done from the main application, to safely
access the data of a data handle from its home node, without having to
unregister it.
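A short sketch of such an acquisition, again with the vector registered earlier:

\code
/* Wait for the pending tasks working on the handle and fetch an
 * up-to-date replicate into the home node, so that accessing the
 * array directly is safe until the handle is released. */
starpu_data_acquire(vector_handle, STARPU_R);
printf("first element: %f\n", vector[0]);
starpu_data_release(vector_handle);
\endcode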
\section ResearchPapers Research Papers

Research papers about StarPU can be found at
http://runtime.bordeaux.inria.fr/Publis/Keyword/STARPU.html.

A good overview is available in the research report at
http://hal.archives-ouvertes.fr/inria-00467677.
\section FurtherReading Further Reading

The documentation chapters include

<ol>
<li> Part: Using StarPU
<ul>
<li> \ref BuildingAndInstallingStarPU
<li> \ref BasicExamples
<li> \ref AdvancedExamples
<li> \ref HowToOptimizePerformanceWithStarPU
<li> \ref PerformanceFeedback
<li> \ref TipsAndTricksToKnowAbout
<li> \ref OutOfCore
<li> \ref MPISupport
<li> \ref FFTSupport
<li> \ref MICSCCSupport
<li> \ref cExtensions
<li> \ref SOCLOpenclExtensions
<li> \ref SchedulingContexts
<li> \ref SchedulingContextHypervisor
</ul>
</li>
<li> Part: Inside StarPU
<ul>
<li> \ref ExecutionConfigurationThroughEnvironmentVariables
<li> \ref CompilationConfiguration
<li> \ref ModuleDocumentation
<li> \ref FileDocumentation
<li> \ref deprecated
</ul>
</li>
<li> Part: Appendix
<ul>
<li> \ref FullSourceCodeVectorScal
<li> \ref GNUFreeDocumentationLicense
</ul>
</li>
</ol>

Make sure to have a look at those too!

*/