/*
 * This file is part of the StarPU Handbook.
 * Copyright (C) 2014 Inria
 * See the file version.doxy for copying conditions.
 */

/*! \page OpenMPRuntimeSupport The StarPU OpenMP Runtime Support (SORS)

StarPU provides the necessary routines and support to implement an <a
href="http://www.openmp.org/">OpenMP</a> runtime compliant with
revision 3.1 of the language specification and with the
task-related data dependency functionalities introduced in revision
4.0 of the language. This StarPU OpenMP Runtime Support (SORS) has been
designed to be targeted by OpenMP compilers such as the Klang-OMP
compiler. Most supported OpenMP directives can be implemented either
inline or as outlined functions.

All functions are defined in \ref API_OpenMP_Runtime_Support.
\section Implementation Implementation Details and Specificities

\subsection MainThread Main Thread

When using the SORS, the main thread gets involved in executing OpenMP tasks
just like every other thread, in order to be compliant with the
specification's execution model. This contrasts with StarPU's usual
execution model, where the main thread submits tasks but does not take
part in executing them.

\subsection TaskSemantics Extended Task Semantics

The semantics of tasks generated by the SORS are extended with respect
to regular StarPU tasks in that SORS tasks may block and be preempted
by SORS calls, whereas regular StarPU tasks cannot. SORS tasks may
coexist with regular StarPU tasks. However, only the tasks created using
SORS API functions inherit the extended semantics.
\section Configuration Configuration

The SORS can be compiled into <c>libstarpu</c>
by providing the <c>--enable-openmp</c> flag to StarPU's
<c>configure</c>. Conditionally compiled source code may check for the
availability of the OpenMP Runtime Support by testing whether the C
preprocessor macro <c>STARPU_OPENMP</c> is defined.
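
For instance, code that optionally relies on the SORS may be guarded as in the
following sketch (the messages are purely illustrative):

\code{.c}
#include <stdio.h>
#include <starpu.h>

int main(void)
{
#ifdef STARPU_OPENMP
    /* This libstarpu build provides the OpenMP Runtime Support. */
    printf("SORS available\n");
#else
    /* libstarpu was built without --enable-openmp. */
    printf("SORS not available\n");
#endif
    return 0;
}
\endcode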
\section InitExit Initialization and Shutdown

The SORS needs to be initialized and terminated with
starpu_omp_init() and starpu_omp_shutdown() instead of
starpu_init() and starpu_shutdown(). This requirement is necessary to make
sure that the main thread gets the proper execution environment to run
OpenMP tasks. These calls will usually be performed by a compiler
runtime. Thus, they can be executed from a constructor/destructor such
as this:

\code{.c}
__attribute__((constructor))
static void omp_constructor(void)
{
    int ret = starpu_omp_init();
    STARPU_CHECK_RETURN_VALUE(ret, "starpu_omp_init");
}

__attribute__((destructor))
static void omp_destructor(void)
{
    starpu_omp_shutdown();
}
\endcode

\sa starpu_omp_init()
\sa starpu_omp_shutdown()
\section Parallel Parallel Regions and Worksharing

The SORS provides functions to create OpenMP parallel regions as well as
to map work onto participating workers. The current implementation does
not provide nested active parallel regions: parallel regions may be
created recursively, but only the first-level parallel region may
have more than one worker. From an internal point of view, the SORS'
parallel regions are implemented as a set of implicit, extended-semantics
StarPU tasks, following the execution model of the OpenMP specification.
Thus the SORS' parallel region tasks may block and be preempted by
SORS calls, enabling constructs such as barriers.

\subsection OMPParallel Parallel Regions

Parallel regions can be created with the function
starpu_omp_parallel_region(), which accepts a set of attributes as a
parameter. The execution of the calling task is suspended until the
parallel region completes. The <c>attr.cl</c> field is a regular StarPU
codelet. However, only CPU codelets are supported for parallel regions.

Here is an example of use:
\code{.c}
void parallel_region_f(void *buffers[], void *args)
{
    (void) buffers;
    (void) args;
    pthread_t tid = pthread_self();
    int worker_id = starpu_worker_get_id();
    printf("[tid %p] task thread = %d\n", (void *)tid, worker_id);
}

void f(void)
{
    struct starpu_omp_parallel_region_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.cl.cpu_funcs[0] = parallel_region_f;
    attr.cl.where        = STARPU_CPU;
    attr.if_clause       = 1;
    starpu_omp_parallel_region(&attr);
}
\endcode

\sa struct starpu_omp_parallel_region_attr
\sa starpu_omp_parallel_region()
\subsection OMPFor Parallel For

OpenMP <c>for</c> loops are provided by the starpu_omp_for() group of
functions. Variants are available for inline or outlined
implementations. The SORS supports the <c>static</c>, <c>dynamic</c>, and
<c>guided</c> loop scheduling clauses. The <c>auto</c> scheduling clause
is implemented as <c>static</c>. The <c>runtime</c> scheduling clause
honors the scheduling mode selected through the environment variable
OMP_SCHEDULE or the starpu_omp_set_schedule() function. Loops with
the <c>ordered</c> clause are also supported. An implicit barrier can be
enforced or skipped at the end of the worksharing construct, according
to the value of the <c>nowait</c> parameter.

The canonical family of starpu_omp_for() functions provides each instance
with the first iteration number and the number of iterations (possibly
zero) to perform. The alternate family of starpu_omp_for_alt() functions
provides each instance with the (possibly empty) range of iterations to
perform, including the first and excluding the last.

The family of starpu_omp_ordered() functions makes it possible to implement
OpenMP's ordered construct, a region within a parallel for loop that is
guaranteed to be executed in the sequential order of the loop
iterations.
\code{.c}
void for_g(unsigned long long i, unsigned long long nb_i, void *arg)
{
    (void) arg;
    for (; nb_i > 0; i++, nb_i--)
    {
        array[i] = 1;
    }
}

void parallel_region_f(void *buffers[], void *args)
{
    (void) buffers;
    (void) args;
    starpu_omp_for(for_g, NULL, NB_ITERS, CHUNK, starpu_omp_sched_static, 0, 0);
}
\endcode

\sa starpu_omp_for()
\sa starpu_omp_for_inline_first()
\sa starpu_omp_for_inline_next()
\sa starpu_omp_for_alt()
\sa starpu_omp_for_inline_first_alt()
\sa starpu_omp_for_inline_next_alt()
\sa starpu_omp_ordered()
\sa starpu_omp_ordered_inline_begin()
\sa starpu_omp_ordered_inline_end()
\subsection OMPSections Sections

OpenMP <c>sections</c> worksharing constructs are supported using the
set of starpu_omp_sections() variants. The general principle is either
to provide an array of per-section functions or a single function that
will redirect execution to the suitable per-section function. An
implicit barrier can be enforced or skipped at the end of the
worksharing construct, according to the value of the <c>nowait</c>
parameter.
\code{.c}
void parallel_region_f(void *buffers[], void *args)
{
    (void) buffers;
    (void) args;

    section_funcs[0] = f;
    section_funcs[1] = g;
    section_funcs[2] = h;
    section_funcs[3] = i;

    section_args[0] = arg_f;
    section_args[1] = arg_g;
    section_args[2] = arg_h;
    section_args[3] = arg_i;

    starpu_omp_sections(4, section_funcs, section_args, 0);
}
\endcode

\sa starpu_omp_sections()
\sa starpu_omp_sections_combined()
\subsection OMPSingle Single

OpenMP <c>single</c> worksharing constructs are supported using the set
of starpu_omp_single() variants. An
implicit barrier can be enforced or skipped at the end of the
worksharing construct, according to the value of the <c>nowait</c>
parameter.
\code{.c}
void single_f(void *arg)
{
    (void) arg;
    pthread_t tid = pthread_self();
    int worker_id = starpu_worker_get_id();
    printf("[tid %p] task thread = %d -- single\n", (void *)tid, worker_id);
}

void parallel_region_f(void *buffers[], void *args)
{
    (void) buffers;
    (void) args;
    starpu_omp_single(single_f, NULL, 0);
}
\endcode
The SORS also provides dedicated support for <c>single</c> constructs
with <c>copyprivate</c> clauses through the
starpu_omp_single_copyprivate() function variants. The OpenMP
<c>master</c> directive is supported as well, using the
starpu_omp_master() function variants.
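
As a sketch, the outlined starpu_omp_master() variant can be expected to follow
the same calling pattern as starpu_omp_single() above, taking a function pointer
and its argument (without a <c>nowait</c> parameter, since <c>master</c> implies
no barrier); the exact prototype should be checked in \ref API_OpenMP_Runtime_Support:

\code{.c}
void master_f(void *arg)
{
    (void) arg;
    /* Only the master thread of the parallel region executes this. */
    printf("master thread = %d\n", starpu_worker_get_id());
}

void parallel_region_f(void *buffers[], void *args)
{
    (void) buffers;
    (void) args;
    starpu_omp_master(master_f, NULL);
}
\endcode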
\sa starpu_omp_master()
\sa starpu_omp_master_inline()
\sa starpu_omp_single()
\sa starpu_omp_single_inline()
\sa starpu_omp_single_copyprivate()
\sa starpu_omp_single_copyprivate_inline_begin()
\sa starpu_omp_single_copyprivate_inline_end()
\section Task Tasks

The SORS implements the necessary support for OpenMP 3.1 and OpenMP 4.0
so-called explicit tasks, together with OpenMP 4.0's data dependency
management.
\subsection OMPTask Explicit Tasks

Explicit OpenMP tasks are created with the SORS using the
starpu_omp_task_region() function. The implementation supports the
<c>if</c>, <c>final</c>, <c>untied</c> and <c>mergeable</c> clauses
as defined in the OpenMP specification. Unless specified otherwise by
the appropriate clause(s), the created task may be executed by any
participating worker of the current parallel region.

The current SORS implementation requires explicit tasks to be created
within the context of an active parallel region. In particular, an
explicit task cannot be created by the main thread outside of a parallel
region. Explicit OpenMP tasks created using starpu_omp_task_region() are
implemented as StarPU tasks with extended semantics, and may as such be
blocked and preempted by SORS routines.

The current SORS implementation supports recursive explicit task
creation, to ensure compliance with the OpenMP specification. However,
it should be noted that StarPU is neither designed nor optimized for
efficiently scheduling recursive task applications.

The code below shows how to create 4 explicit tasks within a parallel
region.
\code{.c}
void task_region_g(void *buffers[], void *args)
{
    (void) buffers;
    (void) args;
    pthread_t tid = pthread_self();
    int worker_id = starpu_worker_get_id();
    printf("[tid %p] task thread = %d: explicit task \"g\"\n", (void *)tid, worker_id);
}

void parallel_region_f(void *buffers[], void *args)
{
    (void) buffers;
    (void) args;
    struct starpu_omp_task_region_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.cl.cpu_funcs[0]  = task_region_g;
    attr.cl.where         = STARPU_CPU;
    attr.if_clause        = 1;
    attr.final_clause     = 0;
    attr.untied_clause    = 1;
    attr.mergeable_clause = 0;

    starpu_omp_task_region(&attr);
    starpu_omp_task_region(&attr);
    starpu_omp_task_region(&attr);
    starpu_omp_task_region(&attr);
}
\endcode

\sa struct starpu_omp_task_region_attr
\sa starpu_omp_task_region()
\subsection DataDependencies Data Dependencies

The SORS implements inter-task data dependencies as specified in OpenMP
4.0. Data dependencies are expressed using regular StarPU data handles
(<c>starpu_data_handle_t</c>) plugged into the task's <c>attr.cl</c>
codelet. The starpu_vector_data_register() family of functions and the
starpu_data_lookup() function may be used to register a memory area and
to retrieve the current data handle associated with a pointer,
respectively. The testcase <c>./tests/openmp/task_02.c</c> gives a
detailed example of using OpenMP 4.0 task dependencies with the SORS
implementation.

Note: the OpenMP 4.0 specification only supports data dependencies
between sibling tasks, that is, tasks created by the same implicit or
explicit parent task. The current SORS implementation also only supports data
dependencies between sibling tasks. Consequently, the behaviour is
unspecified if dependencies are expressed between tasks that have not
been created by the same parent task.
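
The sketch below serializes a producer task and a consumer task through a
dependency on the same registered vector. It assumes that handles are passed
through an <c>attr.handles</c> array while access modes are declared in the
<c>attr.cl</c> codelet, following the regular StarPU codelet conventions; the
authoritative usage is the <c>./tests/openmp/task_02.c</c> testcase mentioned
above.

\code{.c}
void producer_g(void *buffers[], void *args)
{
    (void) args;
    int *array = (int *)STARPU_VECTOR_GET_PTR(buffers[0]);
    array[0] = 42;
}

void consumer_h(void *buffers[], void *args)
{
    (void) args;
    int *array = (int *)STARPU_VECTOR_GET_PTR(buffers[0]);
    printf("array[0] = %d\n", array[0]);
}

/* Meant to be executed by a single task of the parallel region,
 * e.g. from within a single or master construct. */
void submit_dependent_tasks(void)
{
    static int array[16];
    starpu_data_handle_t handle;
    starpu_vector_data_register(&handle, STARPU_MAIN_RAM, (uintptr_t)array, 16, sizeof(array[0]));

    struct starpu_omp_task_region_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.cl.where      = STARPU_CPU;
    attr.cl.nbuffers   = 1;
    attr.if_clause     = 1;
    attr.untied_clause = 1;
    attr.handles       = &handle;

    /* Producer: writes the vector, akin to depend(out: array). */
    attr.cl.cpu_funcs[0] = producer_g;
    attr.cl.modes[0]     = STARPU_W;
    starpu_omp_task_region(&attr);

    /* Consumer: reads the vector, akin to depend(in: array); it is
     * therefore scheduled after the producer completes. */
    attr.cl.cpu_funcs[0] = consumer_h;
    attr.cl.modes[0]     = STARPU_R;
    starpu_omp_task_region(&attr);

    starpu_omp_taskwait();
    starpu_data_unregister(handle);
}
\endcode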
\subsection TaskSyncs TaskWait and TaskGroup

The SORS implements both the <c>taskwait</c> and <c>taskgroup</c> OpenMP
task synchronization constructs specified in OpenMP 4.0, with the
starpu_omp_taskwait() and starpu_omp_taskgroup() functions respectively.

An example of starpu_omp_taskwait() use, creating two explicit tasks and
waiting for their completion:
\code{.c}
void task_region_g(void *buffers[], void *args)
{
    (void) buffers;
    (void) args;
    printf("Hello, World!\n");
}

void parallel_region_f(void *buffers[], void *args)
{
    (void) buffers;
    (void) args;
    struct starpu_omp_task_region_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.cl.cpu_funcs[0]  = task_region_g;
    attr.cl.where         = STARPU_CPU;
    attr.if_clause        = 1;
    attr.final_clause     = 0;
    attr.untied_clause    = 1;
    attr.mergeable_clause = 0;

    starpu_omp_task_region(&attr);
    starpu_omp_task_region(&attr);

    starpu_omp_taskwait();
}
\endcode
An example of starpu_omp_taskgroup() use, creating a task group of two explicit tasks:

\code{.c}
void task_region_g(void *buffers[], void *args)
{
    (void) buffers;
    (void) args;
    printf("Hello, World!\n");
}

void taskgroup_f(void *arg)
{
    (void) arg;
    struct starpu_omp_task_region_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.cl.cpu_funcs[0]  = task_region_g;
    attr.cl.where         = STARPU_CPU;
    attr.if_clause        = 1;
    attr.final_clause     = 0;
    attr.untied_clause    = 1;
    attr.mergeable_clause = 0;

    starpu_omp_task_region(&attr);
    starpu_omp_task_region(&attr);
}

void parallel_region_f(void *buffers[], void *args)
{
    (void) buffers;
    (void) args;
    starpu_omp_taskgroup(taskgroup_f, (void *)NULL);
}
\endcode

\sa starpu_omp_task_region()
\sa starpu_omp_taskwait()
\sa starpu_omp_taskgroup()
\sa starpu_omp_taskgroup_inline_begin()
\sa starpu_omp_taskgroup_inline_end()
\section Synchronization Synchronization Support

The SORS implements objects and methods to build common OpenMP
synchronization constructs.
\subsection SimpleLock Simple Locks

The SORS Simple Locks are opaque starpu_omp_lock_t objects enabling multiple
tasks to synchronize with each other, following the Simple Lock
construct defined by the OpenMP specification. In accordance with the
specification, simple locks may not be acquired multiple times by the
same task without being released in-between; otherwise, deadlocks may
result. Codes requiring the possibility to lock multiple times
recursively should use Nestable Locks (\ref NestableLock). Codes NOT
requiring the possibility to lock multiple times recursively should use
Simple Locks, as they incur less processing overhead than Nestable Locks.
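
The functions listed below mirror OpenMP's <c>omp_*_lock()</c> routines. A
minimal sketch protecting an illustrative shared counter could look as follows:

\code{.c}
static starpu_omp_lock_t counter_lock;
static int counter;

void parallel_region_f(void *buffers[], void *args)
{
    (void) buffers;
    (void) args;
    /* Each implicit task increments the shared counter under the lock. */
    starpu_omp_set_lock(&counter_lock);
    counter++;
    starpu_omp_unset_lock(&counter_lock);
}

void f(void)
{
    starpu_omp_init_lock(&counter_lock);
    /* ... run a parallel region executing parallel_region_f ... */
    starpu_omp_destroy_lock(&counter_lock);
}
\endcode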
\sa starpu_omp_lock_t
\sa starpu_omp_init_lock()
\sa starpu_omp_destroy_lock()
\sa starpu_omp_set_lock()
\sa starpu_omp_unset_lock()
\sa starpu_omp_test_lock()
\subsection NestableLock Nestable Locks

The SORS Nestable Locks are opaque starpu_omp_nest_lock_t objects enabling
multiple tasks to synchronize with each other, following the Nestable
Lock construct defined by the OpenMP specification. In accordance with
the specification, nestable locks may be acquired multiple times
recursively by the same task without deadlocking. Nested locking and
unlocking operations must be well parenthesized at all times, otherwise
deadlock and/or undefined behaviour may occur. Codes requiring the
possibility to lock multiple times recursively should use Nestable
Locks. Codes NOT requiring the possibility to lock multiple times
recursively should use Simple Locks (\ref SimpleLock) instead, as they
incur less processing overhead than Nestable Locks.
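
As with simple locks, the functions below mirror OpenMP's
<c>omp_*_nest_lock()</c> routines. The illustrative sketch below shows a helper
that may be called both with and without the lock already held by the same task:

\code{.c}
static starpu_omp_nest_lock_t list_lock;

void update_element(int i)
{
    (void) i;
    /* Safe even if the caller already holds list_lock: the acquisition nests. */
    starpu_omp_set_nest_lock(&list_lock);
    /* ... modify element i of some shared structure ... */
    starpu_omp_unset_nest_lock(&list_lock);
}

void update_all(int n)
{
    starpu_omp_set_nest_lock(&list_lock);
    for (int i = 0; i < n; i++)
        update_element(i);
    starpu_omp_unset_nest_lock(&list_lock);
}
\endcode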
\sa starpu_omp_nest_lock_t
\sa starpu_omp_init_nest_lock()
\sa starpu_omp_destroy_nest_lock()
\sa starpu_omp_set_nest_lock()
\sa starpu_omp_unset_nest_lock()
\sa starpu_omp_test_nest_lock()
\subsection Critical Critical Sections

The SORS implements support for OpenMP critical sections through the
family of starpu_omp_critical() functions. Critical sections may optionally
be named. There is a single, common anonymous critical section. Mutual
exclusion only occurs within the scope of a single critical section, either
a named one or the anonymous one.
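
Assuming the inline variants take the section name as argument (with NULL
denoting the anonymous critical section), a usage sketch could look as follows:

\code{.c}
void parallel_region_f(void *buffers[], void *args)
{
    (void) buffers;
    (void) args;

    /* Named critical section: mutual exclusion with tasks in the "update" section only. */
    starpu_omp_critical_inline_begin("update");
    /* ... update some shared state ... */
    starpu_omp_critical_inline_end("update");

    /* Anonymous critical section. */
    starpu_omp_critical_inline_begin(NULL);
    /* ... update some other shared state ... */
    starpu_omp_critical_inline_end(NULL);
}
\endcode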
\sa starpu_omp_critical()
\sa starpu_omp_critical_inline_begin()
\sa starpu_omp_critical_inline_end()
\subsection Barrier Barriers

The SORS provides the starpu_omp_barrier() function to implement
barriers over parallel region teams. In accordance with the OpenMP
specification, the starpu_omp_barrier() function waits for every
implicit task of the parallel region to reach the barrier and every
explicit task launched by the parallel region to complete, before
returning.
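
A minimal sketch (assuming starpu_omp_barrier() takes no argument, in line with
OpenMP's <c>barrier</c> directive):

\code{.c}
void parallel_region_f(void *buffers[], void *args)
{
    (void) buffers;
    (void) args;

    /* Phase 1: every implicit task of the team does some work. */
    /* ... */

    /* No task proceeds to phase 2 until all have completed phase 1. */
    starpu_omp_barrier();

    /* Phase 2. */
    /* ... */
}
\endcode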
\sa starpu_omp_barrier()
*/