
/* StarPU --- Runtime system for heterogeneous multicore architectures.
 *
 * Copyright (C) 2014-2017, 2019 CNRS
 * Copyright (C) 2014 Inria
 *
 * StarPU is free software; you can redistribute it and/or modify
 * it under the terms of the GNU Lesser General Public License as published by
 * the Free Software Foundation; either version 2.1 of the License, or (at
 * your option) any later version.
 *
 * StarPU is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
 *
 * See the GNU Lesser General Public License in COPYING.LGPL for more details.
 */
/*! \page OpenMPRuntimeSupport The StarPU OpenMP Runtime Support (SORS)

StarPU provides the necessary routines and support to implement an OpenMP
(http://www.openmp.org/) runtime compliant with revision 3.1 of the
language specification, and with the task-related data dependency
functionalities introduced in revision 4.0 of the language. This StarPU
OpenMP Runtime Support (SORS) has been designed to be targeted by OpenMP
compilers such as the Klang-OMP compiler. Most supported OpenMP directives
can be implemented either inline or as outlined functions.

All functions are defined in \ref API_OpenMP_Runtime_Support.

\section Implementation Implementation Details and Specificities

\subsection MainThread Main Thread

When using the SORS, the main thread gets involved in executing OpenMP tasks
just like every other thread, in order to comply with the specification's
execution model. This contrasts with StarPU's usual execution model, where
the main thread submits tasks but does not take part in executing them.

\subsection TaskSemantics Extended Task Semantics

The semantics of tasks generated by the SORS are extended with respect to
regular StarPU tasks in that SORS tasks may block and be preempted by SORS
calls, whereas regular StarPU tasks cannot. SORS tasks may coexist with
regular StarPU tasks. However, only the tasks created through the SORS API
functions inherit the extended semantics.

\section Configuration Configuration

The SORS can be compiled into <c>libstarpu</c> through the \c configure
option \ref enable-openmp "--enable-openmp". Conditionally compiled source
code may check for the availability of the OpenMP Runtime Support by testing
whether the C preprocessor macro <c>STARPU_OPENMP</c> is defined.
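
For instance, an application may guard its OpenMP-specific parts as in the
following minimal sketch (<c>APP_HAVE_SORS</c> is an arbitrary
application-side macro, not part of the StarPU API):

\code{.c}
#include <starpu.h>

#ifdef STARPU_OPENMP
/* The SORS is available in this libstarpu build. */
#define APP_HAVE_SORS 1
#else
/* The SORS was not enabled at configure time. */
#define APP_HAVE_SORS 0
#endif
\endcode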

\section InitExit Initialization and Shutdown

The SORS needs to be initialized and terminated with starpu_omp_init() and
starpu_omp_shutdown() instead of starpu_init() and starpu_shutdown(). This
requirement is necessary to make sure that the main thread gets the proper
execution environment to run OpenMP tasks. These calls will usually be
performed by a compiler runtime; thus, they can be executed from a
constructor/destructor pair such as this:
\code{.c}
__attribute__((constructor))
static void omp_constructor(void)
{
	int ret = starpu_omp_init();
	STARPU_CHECK_RETURN_VALUE(ret, "starpu_omp_init");
}

__attribute__((destructor))
static void omp_destructor(void)
{
	starpu_omp_shutdown();
}
\endcode
\sa starpu_omp_init()
\sa starpu_omp_shutdown()

\section Parallel Parallel Regions and Worksharing

The SORS provides functions to create OpenMP parallel regions, as well as to
map work onto participating workers. The current implementation does not
provide nested active parallel regions: parallel regions may be created
recursively, but only the first-level parallel region may have more than one
worker. From an internal point of view, the SORS' parallel regions are
implemented as a set of implicit, extended-semantics StarPU tasks, following
the execution model of the OpenMP specification. Thus the SORS' parallel
region tasks may block and be preempted by SORS calls, enabling constructs
such as barriers.

\subsection OMPParallel Parallel Regions

Parallel regions can be created with the function
starpu_omp_parallel_region(), which accepts a set of attributes as a
parameter. The execution of the calling task is suspended until the parallel
region completes. The field starpu_omp_parallel_region_attr::cl is a regular
StarPU codelet; however, only CPU codelets are supported for parallel
regions.

Here is an example of use:
\code{.c}
void parallel_region_f(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;
	pthread_t tid = pthread_self();
	int worker_id = starpu_worker_get_id();
	printf("[tid %p] task thread = %d\n", (void *)tid, worker_id);
}

void f(void)
{
	struct starpu_omp_parallel_region_attr attr;
	memset(&attr, 0, sizeof(attr));
	attr.cl.cpu_funcs[0] = parallel_region_f;
	attr.cl.where = STARPU_CPU;
	attr.if_clause = 1;
	starpu_omp_parallel_region(&attr);
}
\endcode
\sa struct starpu_omp_parallel_region_attr
\sa starpu_omp_parallel_region()

\subsection OMPFor Parallel For

OpenMP <c>for</c> loops are provided by the starpu_omp_for() group of
functions. Variants are available for inline or outlined implementations.
The SORS supports the <c>static</c>, <c>dynamic</c>, and <c>guided</c> loop
scheduling clauses. The <c>auto</c> scheduling clause is implemented as
<c>static</c>. The <c>runtime</c> scheduling clause honors the scheduling
mode selected through the environment variable \c OMP_SCHEDULE or the
starpu_omp_set_schedule() function. For loops with the <c>ordered</c> clause
are also supported. An implicit barrier can be enforced or skipped at the
end of the worksharing construct, according to the value of the
<c>nowait</c> parameter.

The canonical family of starpu_omp_for() functions provides each instance
with the first iteration number and the number of iterations (possibly zero)
to perform. The alternate family of starpu_omp_for_alt() functions provides
each instance with the (possibly empty) range of iterations to perform,
including the first and excluding the last.

The family of starpu_omp_ordered() functions makes it possible to implement
OpenMP's <c>ordered</c> construct, a region within a parallel for loop that
is guaranteed to be executed in the sequential order of the loop iterations.
\code{.c}
void for_g(unsigned long long i, unsigned long long nb_i, void *arg)
{
	(void) arg;
	for (; nb_i > 0; i++, nb_i--)
	{
		array[i] = 1;
	}
}

void parallel_region_f(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;
	starpu_omp_for(for_g, NULL, NB_ITERS, CHUNK, starpu_omp_sched_static, 0, 0);
}
\endcode
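
As a complement, the sketch below illustrates the alternate, range-based
interface. It assumes that starpu_omp_for_alt() takes the same trailing
parameters as starpu_omp_for() and passes each instance a half-open
iteration range, as described above; <c>array</c>, <c>NB_ITERS</c> and
<c>CHUNK</c> are the same placeholders as in the previous example.

\code{.c}
void for_alt_g(unsigned long long begin_i, unsigned long long end_i, void *arg)
{
	(void) arg;
	/* Iterate over [begin_i, end_i): first included, last excluded. */
	for (; begin_i < end_i; begin_i++)
	{
		array[begin_i] = 1;
	}
}

void parallel_region_f(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;
	starpu_omp_for_alt(for_alt_g, NULL, NB_ITERS, CHUNK, starpu_omp_sched_static, 0, 0);
}
\endcode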
\sa starpu_omp_for()
\sa starpu_omp_for_inline_first()
\sa starpu_omp_for_inline_next()
\sa starpu_omp_for_alt()
\sa starpu_omp_for_inline_first_alt()
\sa starpu_omp_for_inline_next_alt()
\sa starpu_omp_ordered()
\sa starpu_omp_ordered_inline_begin()
\sa starpu_omp_ordered_inline_end()

\subsection OMPSections Sections

OpenMP <c>sections</c> worksharing constructs are supported using the set of
starpu_omp_sections() variants. The general principle is either to provide
an array of per-section functions or a single function that will redirect
execution to the suitable per-section function. An implicit barrier can be
enforced or skipped at the end of the worksharing construct, according to
the value of the <c>nowait</c> parameter.
\code{.c}
void parallel_region_f(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;

	section_funcs[0] = f;
	section_funcs[1] = g;
	section_funcs[2] = h;
	section_funcs[3] = i;

	section_args[0] = arg_f;
	section_args[1] = arg_g;
	section_args[2] = arg_h;
	section_args[3] = arg_i;

	starpu_omp_sections(4, section_funcs, section_args, 0);
}
\endcode
\sa starpu_omp_sections()
\sa starpu_omp_sections_combined()

\subsection OMPSingle Single

OpenMP <c>single</c> worksharing constructs are supported using the set of
starpu_omp_single() variants. An implicit barrier can be enforced or skipped
at the end of the worksharing construct, according to the value of the
<c>nowait</c> parameter.
\code{.c}
void single_f(void *arg)
{
	(void) arg;
	pthread_t tid = pthread_self();
	int worker_id = starpu_worker_get_id();
	printf("[tid %p] task thread = %d -- single\n", (void *)tid, worker_id);
}

void parallel_region_f(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;
	starpu_omp_single(single_f, NULL, 0);
}
\endcode

The SORS also provides dedicated support for <c>single</c> sections with
<c>copyprivate</c> clauses through the starpu_omp_single_copyprivate()
function variants. The OpenMP <c>master</c> directive is supported as well,
using the starpu_omp_master() function variants.
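
As an illustration, a <c>master</c> region may be expressed as in the
following minimal sketch, assuming starpu_omp_master() takes the outlined
region body and its argument:

\code{.c}
void master_f(void *arg)
{
	(void) arg;
	/* Only the master of the current team executes this body. */
	printf("master region\n");
}

void parallel_region_f(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;
	starpu_omp_master(master_f, NULL);
}
\endcode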
\sa starpu_omp_master()
\sa starpu_omp_master_inline()
\sa starpu_omp_single()
\sa starpu_omp_single_inline()
\sa starpu_omp_single_copyprivate()
\sa starpu_omp_single_copyprivate_inline_begin()
\sa starpu_omp_single_copyprivate_inline_end()

\section Task Tasks

The SORS implements the necessary support for OpenMP 3.1 and OpenMP 4.0
explicit tasks, together with OpenMP 4.0 data dependency management.

\subsection OMPTask Explicit Tasks

Explicit OpenMP tasks are created with the SORS using the
starpu_omp_task_region() function. The implementation supports the
<c>if</c>, <c>final</c>, <c>untied</c> and <c>mergeable</c> clauses as
defined in the OpenMP specification. Unless specified otherwise by the
appropriate clause(s), the created task may be executed by any participating
worker of the current parallel region.

The current SORS implementation requires explicit tasks to be created within
the context of an active parallel region. In particular, an explicit task
cannot be created by the main thread outside of a parallel region. Explicit
OpenMP tasks created using starpu_omp_task_region() are implemented as
StarPU tasks with extended semantics, and may as such be blocked and
preempted by SORS routines.

The current SORS implementation supports recursive explicit task creation,
to ensure compliance with the OpenMP specification. However, it should be
noted that StarPU is neither designed nor optimized for efficiently
scheduling recursive task applications.

The code below shows how to create 4 explicit tasks within a parallel
region.
\code{.c}
void task_region_g(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;
	pthread_t tid = pthread_self();
	int worker_id = starpu_worker_get_id();
	printf("[tid %p] task thread = %d: explicit task \"g\"\n", (void *)tid, worker_id);
}

void parallel_region_f(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;
	struct starpu_omp_task_region_attr attr;
	memset(&attr, 0, sizeof(attr));
	attr.cl.cpu_funcs[0] = task_region_g;
	attr.cl.where = STARPU_CPU;
	attr.if_clause = 1;
	attr.final_clause = 0;
	attr.untied_clause = 1;
	attr.mergeable_clause = 0;

	starpu_omp_task_region(&attr);
	starpu_omp_task_region(&attr);
	starpu_omp_task_region(&attr);
	starpu_omp_task_region(&attr);
}
\endcode
\sa struct starpu_omp_task_region_attr
\sa starpu_omp_task_region()

\subsection DataDependencies Data Dependencies

The SORS implements inter-task data dependencies as specified in OpenMP 4.0.
Data dependencies are expressed using regular StarPU data handles
(\ref starpu_data_handle_t) plugged into the task's <c>attr.cl</c> codelet.
The family of starpu_vector_data_register() -like functions and the
starpu_data_lookup() function may be used to register a memory area and to
retrieve the current data handle associated with a pointer, respectively.
The testcase <c>./tests/openmp/task_02.c</c> gives a detailed example of
using OpenMP 4.0 task dependencies with the SORS implementation.

Note: the OpenMP 4.0 specification only supports data dependencies between
sibling tasks, that is, tasks created by the same implicit or explicit
parent task. The current SORS implementation likewise only supports data
dependencies between sibling tasks. Consequently, the behaviour is
unspecified if dependencies are expressed between tasks that have not been
created by the same parent task.
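
As a minimal sketch of the registration side (the vector <c>v</c> and its
size are placeholders), a memory area can be registered once and its handle
later retrieved from the plain pointer; the handle is then attached to the
task's <c>attr.cl</c> codelet as for any StarPU codelet:

\code{.c}
#define NV 1024
static int v[NV];

void register_v(void)
{
	starpu_data_handle_t handle;

	/* Register the memory area as a StarPU vector... */
	starpu_vector_data_register(&handle, STARPU_MAIN_RAM, (uintptr_t)v, NV, sizeof(v[0]));

	/* ...and, elsewhere, retrieve the handle from the plain pointer. */
	starpu_data_handle_t same_handle = starpu_data_lookup(v);
	(void) same_handle;
}
\endcode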

\subsection TaskSyncs TaskWait and TaskGroup

The SORS implements both the <c>taskwait</c> and <c>taskgroup</c> OpenMP
task synchronization constructs specified in OpenMP 4.0, with the
starpu_omp_taskwait() and starpu_omp_taskgroup() functions respectively.

An example of starpu_omp_taskwait() use, creating two explicit tasks and
waiting for their completion:
\code{.c}
void task_region_g(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;
	printf("Hello, World!\n");
}

void parallel_region_f(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;
	struct starpu_omp_task_region_attr attr;
	memset(&attr, 0, sizeof(attr));
	attr.cl.cpu_funcs[0] = task_region_g;
	attr.cl.where = STARPU_CPU;
	attr.if_clause = 1;
	attr.final_clause = 0;
	attr.untied_clause = 1;
	attr.mergeable_clause = 0;

	starpu_omp_task_region(&attr);
	starpu_omp_task_region(&attr);
	starpu_omp_taskwait();
}
\endcode

An example of starpu_omp_taskgroup() use, creating a task group of two explicit tasks:

\code{.c}
void task_region_g(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;
	printf("Hello, World!\n");
}

void taskgroup_f(void *arg)
{
	(void) arg;
	struct starpu_omp_task_region_attr attr;
	memset(&attr, 0, sizeof(attr));
	attr.cl.cpu_funcs[0] = task_region_g;
	attr.cl.where = STARPU_CPU;
	attr.if_clause = 1;
	attr.final_clause = 0;
	attr.untied_clause = 1;
	attr.mergeable_clause = 0;

	starpu_omp_task_region(&attr);
	starpu_omp_task_region(&attr);
}

void parallel_region_f(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;
	starpu_omp_taskgroup(taskgroup_f, (void *)NULL);
}
\endcode
\sa starpu_omp_task_region()
\sa starpu_omp_taskwait()
\sa starpu_omp_taskgroup()
\sa starpu_omp_taskgroup_inline_begin()
\sa starpu_omp_taskgroup_inline_end()

\section Synchronization Synchronization Support

The SORS implements objects and methods to build common OpenMP
synchronization constructs.

\subsection SimpleLock Simple Locks

The SORS Simple Locks are opaque starpu_omp_lock_t objects enabling multiple
tasks to synchronize with each other, following the Simple Lock constructs
defined by the OpenMP specification. In accordance with that specification,
a simple lock may not be acquired multiple times by the same task without
being released in between; otherwise, deadlocks may result. Code requiring
the ability to lock multiple times recursively should use Nestable Locks
(\ref NestableLock). Code NOT requiring this ability should use Simple
Locks, as they incur less processing overhead than Nestable Locks.
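
A minimal usage sketch, protecting updates to a shared counter within a
parallel region (the counter itself is a placeholder):

\code{.c}
starpu_omp_lock_t lock;
int counter = 0;

void parallel_region_f(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;
	starpu_omp_set_lock(&lock);
	/* Only one task at a time executes this update. */
	counter++;
	starpu_omp_unset_lock(&lock);
}

void f(void)
{
	struct starpu_omp_parallel_region_attr attr;
	memset(&attr, 0, sizeof(attr));
	attr.cl.cpu_funcs[0] = parallel_region_f;
	attr.cl.where = STARPU_CPU;
	attr.if_clause = 1;

	starpu_omp_init_lock(&lock);
	starpu_omp_parallel_region(&attr);
	starpu_omp_destroy_lock(&lock);
}
\endcode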
\sa starpu_omp_lock_t
\sa starpu_omp_init_lock()
\sa starpu_omp_destroy_lock()
\sa starpu_omp_set_lock()
\sa starpu_omp_unset_lock()
\sa starpu_omp_test_lock()

\subsection NestableLock Nestable Locks

The SORS Nestable Locks are opaque starpu_omp_nest_lock_t objects enabling
multiple tasks to synchronize with each other, following the Nestable Lock
constructs defined by the OpenMP specification. In accordance with that
specification, a nestable lock may be acquired multiple times recursively by
the same task without deadlocking. Nested locking and unlocking operations
must be well parenthesized at all times, otherwise deadlock and/or undefined
behaviour may occur. Code requiring the ability to lock multiple times
recursively should use Nestable Locks. Code NOT requiring this ability
should use Simple Locks (\ref SimpleLock) instead, as they incur less
processing overhead than Nestable Locks.
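
A minimal sketch of well-parenthesized recursive acquisition by a single
task (the recursive <c>update()</c> helper is a placeholder):

\code{.c}
starpu_omp_nest_lock_t nlock;

void update(int depth)
{
	/* The owning task may re-acquire the nestable lock recursively,
	 * provided each set is matched by an unset. */
	starpu_omp_set_nest_lock(&nlock);
	if (depth > 0)
		update(depth - 1);
	starpu_omp_unset_nest_lock(&nlock);
}
\endcode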
\sa starpu_omp_nest_lock_t
\sa starpu_omp_init_nest_lock()
\sa starpu_omp_destroy_nest_lock()
\sa starpu_omp_set_nest_lock()
\sa starpu_omp_unset_nest_lock()
\sa starpu_omp_test_nest_lock()

\subsection Critical Critical Sections

The SORS implements support for OpenMP critical sections through the family
of \ref starpu_omp_critical functions. Critical sections may optionally be
named. There is a single, common anonymous critical section. Mutual
exclusion only occurs within the scope of a single critical section, either
a named one or the anonymous one.
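
A minimal sketch, assuming starpu_omp_critical() takes the outlined critical
section body, its argument, and an optional section name (NULL selecting the
anonymous critical section):

\code{.c}
void critical_f(void *arg)
{
	/* Executed under mutual exclusion with respect to other tasks
	 * entering the same (here: anonymous) critical section. */
	int *counter = arg;
	(*counter)++;
}

void parallel_region_f(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;
	static int counter = 0;
	starpu_omp_critical(critical_f, &counter, NULL);
}
\endcode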
\sa starpu_omp_critical()
\sa starpu_omp_critical_inline_begin()
\sa starpu_omp_critical_inline_end()

\subsection Barrier Barriers

The SORS provides the starpu_omp_barrier() function to implement barriers
over parallel region teams. In accordance with the OpenMP specification,
starpu_omp_barrier() waits for every implicit task of the parallel region to
reach the barrier and for every explicit task launched by the parallel
region to complete, before returning.
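
A minimal sketch separating two phases of work inside a parallel region
body:

\code{.c}
void parallel_region_f(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;

	/* Phase 1 work of each implicit task goes here. */

	starpu_omp_barrier();

	/* Phase 2: no implicit task starts this part before the whole
	 * team has reached the barrier and pending explicit tasks have
	 * completed. */
}
\endcode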
\sa starpu_omp_barrier()

*/