/* StarPU --- Runtime system for heterogeneous multicore architectures.
 *
 * Copyright (C) 2014-2021 Université de Bordeaux, CNRS (LaBRI UMR 5800), Inria
 *
 * StarPU is free software; you can redistribute it and/or modify
 * it under the terms of the GNU Lesser General Public License as published by
 * the Free Software Foundation; either version 2.1 of the License, or (at
 * your option) any later version.
 *
 * StarPU is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
 *
 * See the GNU Lesser General Public License in COPYING.LGPL for more details.
 */
/*! \page OpenMPRuntimeSupport The StarPU OpenMP Runtime Support (SORS)

StarPU provides the necessary routines and support to implement an OpenMP
(http://www.openmp.org/) runtime compliant with revision 3.1 of the language
specification, and compliant with the task-related data dependency
functionalities introduced in revision 4.0 of the language. This StarPU
OpenMP Runtime Support (SORS) has been designed to be targeted by OpenMP
compilers such as the Klang-OMP compiler. Most supported OpenMP directives
can be implemented either inline or as outlined functions.

All functions are defined in \ref API_OpenMP_Runtime_Support.
\section OMPImplementation Implementation Details and Specificities

\subsection OMPMainThread Main Thread

When using the SORS, the main thread gets involved in executing OpenMP tasks
just like every other thread, in order to comply with the execution model of
the specification. This contrasts with StarPU's usual execution model, where
the main thread submits tasks but does not take part in executing them.
\subsection OMPTaskSemantics Extended Task Semantics

The semantics of tasks generated by the SORS are extended with respect to
regular StarPU tasks in that SORS tasks may block and be preempted by SORS
calls, whereas regular StarPU tasks cannot. SORS tasks may coexist with
regular StarPU tasks. However, only tasks created using the SORS API
functions inherit the extended semantics.
\section OMPConfiguration Configuration

The SORS can be compiled into <c>libstarpu</c> through the \c configure
option \ref enable-openmp "--enable-openmp". Conditionally compiled source
code may check for the availability of the OpenMP Runtime Support by testing
whether the C preprocessor macro <c>STARPU_OPENMP</c> is defined or not.
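
For instance, a source file shared between OpenMP and non-OpenMP builds may
guard its SORS-specific parts as follows (a minimal sketch; the fallback
branch is purely illustrative):

\code{.c}
#include <starpu.h>

#ifdef STARPU_OPENMP
/* libstarpu was built with --enable-openmp: the SORS API may be used. */
#else
/* The SORS is not available in this build; fall back to plain StarPU. */
#endif
\endcode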
\section OMPInitExit Initialization and Shutdown

The SORS needs to be initialized and terminated by calling starpu_omp_init()
and starpu_omp_shutdown() respectively, instead of starpu_init() and
starpu_shutdown(). This requirement is necessary to make sure that the main
thread gets the proper execution environment to run OpenMP tasks. These calls
will usually be performed by a compiler runtime. Thus, they can be executed
from a constructor/destructor such as this:
\code{.c}
__attribute__((constructor))
static void omp_constructor(void)
{
	int ret = starpu_omp_init();
	STARPU_CHECK_RETURN_VALUE(ret, "starpu_omp_init");
}

__attribute__((destructor))
static void omp_destructor(void)
{
	starpu_omp_shutdown();
}
\endcode
\sa starpu_omp_init()
\sa starpu_omp_shutdown()

\section OMPSharing Parallel Regions and Worksharing

The SORS provides functions to create OpenMP parallel regions as well as to
map work on the participating workers. The current implementation does not
provide nested active parallel regions: parallel regions may be created
recursively, but only the first-level parallel region may have more than one
worker. From an internal point of view, SORS parallel regions are implemented
as a set of implicit StarPU tasks with extended semantics, following the
execution model of the OpenMP specification. Thus, SORS parallel region tasks
may block and be preempted by SORS calls, enabling constructs such as
barriers.
\subsection OMPParallel Parallel Regions

Parallel regions can be created with the function
starpu_omp_parallel_region(), which accepts a set of attributes as a
parameter. The execution of the calling task is suspended until the parallel
region completes. The field starpu_omp_parallel_region_attr::cl is a regular
StarPU codelet. However, only CPU codelets are supported for parallel
regions.

Here is an example of use:
\code{.c}
void parallel_region_f(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;
	pthread_t tid = pthread_self();
	int worker_id = starpu_worker_get_id();
	printf("[tid %p] task thread = %d\n", (void *)tid, worker_id);
}

void f(void)
{
	struct starpu_omp_parallel_region_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.cl.cpu_funcs[0] = parallel_region_f;
	attr.cl.where = STARPU_CPU;
	attr.if_clause = 1;
	starpu_omp_parallel_region(&attr);
}
\endcode
\sa struct starpu_omp_parallel_region_attr
\sa starpu_omp_parallel_region()

\subsection OMPFor Parallel For
OpenMP <c>for</c> loops are provided by the starpu_omp_for() group of
functions. Variants are available for inline or outlined implementations.
The SORS supports the <c>static</c>, <c>dynamic</c>, and <c>guided</c> loop
scheduling clauses. The <c>auto</c> scheduling clause is implemented as
<c>static</c>. The <c>runtime</c> scheduling clause honors the scheduling
mode selected through the environment variable \c OMP_SCHEDULE or the
starpu_omp_set_schedule() function. For loops with the <c>ordered</c> clause
are also supported. An implicit barrier can be enforced or skipped at the end
of the worksharing construct, according to the value of the <c>nowait</c>
parameter.

The canonical family of starpu_omp_for() functions provides each instance
with the first iteration number and the number of iterations (possibly zero)
to perform. The alternate family of starpu_omp_for_alt() functions provides
each instance with the (possibly empty) range of iterations to perform,
including the first and excluding the last.

The family of starpu_omp_ordered() functions makes it possible to implement
OpenMP's ordered construct, a region within a parallel for loop that is
guaranteed to be executed in the sequential order of the loop iterations.
\code{.c}
void for_g(unsigned long long i, unsigned long long nb_i, void *arg)
{
	(void) arg;
	for (; nb_i > 0; i++, nb_i--)
	{
		array[i] = 1;
	}
}

void parallel_region_f(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;
	starpu_omp_for(for_g, NULL, NB_ITERS, CHUNK, starpu_omp_sched_static, 0, 0);
}
\endcode
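
The sketch below illustrates how an ordered region may be expressed from
within the loop body using the outlined starpu_omp_ordered() variant. It is a
minimal sketch, assuming that starpu_omp_ordered() takes a function pointer
and an opaque argument like the other outlined constructs, and that the
second-to-last parameter of starpu_omp_for() requests <c>ordered</c> support
for the loop:

\code{.c}
void ordered_f(void *arg)
{
	unsigned long long *pi = arg;
	/* This body is guaranteed to run in the sequential order of the
	 * loop iterations. */
	printf("iteration %llu\n", *pi);
}

void for_ordered_g(unsigned long long i, unsigned long long nb_i, void *arg)
{
	(void) arg;
	for (; nb_i > 0; i++, nb_i--)
	{
		/* The unordered part of the iteration would go here. */
		starpu_omp_ordered(ordered_f, (void *)&i);
	}
}

void parallel_region_f(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;
	starpu_omp_for(for_ordered_g, NULL, NB_ITERS, CHUNK, starpu_omp_sched_static, 1, 0);
}
\endcode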
\sa starpu_omp_for()
\sa starpu_omp_for_inline_first()
\sa starpu_omp_for_inline_next()
\sa starpu_omp_for_alt()
\sa starpu_omp_for_inline_first_alt()
\sa starpu_omp_for_inline_next_alt()
\sa starpu_omp_ordered()
\sa starpu_omp_ordered_inline_begin()
\sa starpu_omp_ordered_inline_end()
\subsection OMPSections Sections

OpenMP <c>sections</c> worksharing constructs are supported using the set of
starpu_omp_sections() variants. The general principle is either to provide an
array of per-section functions or a single function that redirects execution
to the suitable per-section function. An implicit barrier can be enforced or
skipped at the end of the worksharing construct, according to the value of
the <c>nowait</c> parameter.
\code{.c}
void parallel_region_f(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;
	void (*section_funcs[4])(void *);
	void *section_args[4];

	section_funcs[0] = f;
	section_funcs[1] = g;
	section_funcs[2] = h;
	section_funcs[3] = i;

	section_args[0] = arg_f;
	section_args[1] = arg_g;
	section_args[2] = arg_h;
	section_args[3] = arg_i;

	starpu_omp_sections(4, section_funcs, section_args, 0);
}
\endcode
\sa starpu_omp_sections()
\sa starpu_omp_sections_combined()

\subsection OMPSingle Single

OpenMP <c>single</c> worksharing constructs are supported using the set of
starpu_omp_single() variants. An implicit barrier can be enforced or skipped
at the end of the worksharing construct, according to the value of the
<c>nowait</c> parameter.
\code{.c}
void single_f(void *arg)
{
	(void) arg;
	pthread_t tid = pthread_self();
	int worker_id = starpu_worker_get_id();
	printf("[tid %p] task thread = %d -- single\n", (void *)tid, worker_id);
}

void parallel_region_f(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;
	starpu_omp_single(single_f, NULL, 0);
}
\endcode
The SORS also provides dedicated support for <c>single</c> constructs with a
<c>copyprivate</c> clause through the starpu_omp_single_copyprivate()
function variants. The OpenMP <c>master</c> directive is supported as well,
using the starpu_omp_master() function variants.

\sa starpu_omp_master()
\sa starpu_omp_master_inline()
\sa starpu_omp_single()
\sa starpu_omp_single_inline()
\sa starpu_omp_single_copyprivate()
\sa starpu_omp_single_copyprivate_inline_begin()
\sa starpu_omp_single_copyprivate_inline_end()
\section OMPTask Tasks

The SORS implements the necessary support for OpenMP 3.1 and OpenMP 4.0
so-called explicit tasks, together with OpenMP 4.0's data dependency
management.

\subsection OMPTaskExplicit Explicit Tasks

Explicit OpenMP tasks are created with the SORS using the
starpu_omp_task_region() function. The implementation supports the
<c>if</c>, <c>final</c>, <c>untied</c> and <c>mergeable</c> clauses as
defined in the OpenMP specification. Unless specified otherwise by the
appropriate clause(s), the created task may be executed by any participating
worker of the current parallel region.
The current SORS implementation requires explicit tasks to be created within
the context of an active parallel region. In particular, an explicit task
cannot be created by the main thread outside of a parallel region. Explicit
OpenMP tasks created using starpu_omp_task_region() are implemented as StarPU
tasks with extended semantics, and may as such be blocked and preempted by
SORS routines.

The current SORS implementation supports recursive explicit task creation, to
ensure compliance with the OpenMP specification. However, it should be noted
that StarPU is neither designed nor optimized for efficiently scheduling
recursive task applications.

The code below shows how to create 4 explicit tasks within a parallel region.
\code{.c}
void task_region_g(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;
	pthread_t tid = pthread_self();
	int worker_id = starpu_worker_get_id();
	printf("[tid %p] task thread = %d: explicit task \"g\"\n", (void *)tid, worker_id);
}

void parallel_region_f(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;
	struct starpu_omp_task_region_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.cl.cpu_funcs[0] = task_region_g;
	attr.cl.where = STARPU_CPU;
	attr.if_clause = 1;
	attr.final_clause = 0;
	attr.untied_clause = 1;
	attr.mergeable_clause = 0;

	starpu_omp_task_region(&attr);
	starpu_omp_task_region(&attr);
	starpu_omp_task_region(&attr);
	starpu_omp_task_region(&attr);
}
\endcode
\sa struct starpu_omp_task_region_attr
\sa starpu_omp_task_region()

\subsection OMPDataDependencies Data Dependencies
The SORS implements inter-task data dependencies as specified in OpenMP 4.0.
Data dependencies are expressed using regular StarPU data handles
(\ref starpu_data_handle_t) plugged into the task's <c>attr.cl</c> codelet.
The family of starpu_vector_data_register()-like functions and the
starpu_data_lookup() function may be used to register a memory area and to
retrieve the current data handle associated with a pointer, respectively.
The testcase <c>./tests/openmp/task_02.c</c> gives a detailed example of
using OpenMP 4.0 task dependencies with the SORS implementation.

Note: the OpenMP 4.0 specification only supports data dependencies between
sibling tasks, that is, tasks created by the same implicit or explicit parent
task. The current SORS implementation also only supports data dependencies
between sibling tasks. Consequently, the behaviour is unspecified if
dependencies are expressed between tasks that have not been created by the
same parent task.
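
The sketch below outlines the general pattern: a memory area is registered as
a StarPU vector handle, the codelet of the task attributes declares an access
mode for it, and the handle is attached to the task through the attribute
structure. It is a minimal sketch; the <c>handles</c> field of
starpu_omp_task_region_attr is assumed here to carry the data handles, and
the testcase mentioned above should be consulted for the exact usage:

\code{.c}
void parallel_region_f(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;
	static int vector[16];
	starpu_data_handle_t handle;

	/* Register the memory area as a regular StarPU vector handle. */
	starpu_vector_data_register(&handle, STARPU_MAIN_RAM,
	                            (uintptr_t)vector, 16, sizeof(vector[0]));

	struct starpu_omp_task_region_attr attr;
	memset(&attr, 0, sizeof(attr));
	attr.cl.cpu_funcs[0] = task_region_g;
	attr.cl.where = STARPU_CPU;
	attr.cl.nbuffers = 1;
	attr.cl.modes[0] = STARPU_RW; /* expresses an inout dependence */
	attr.handles = &handle;       /* assumed field carrying the handles */
	attr.if_clause = 1;
	starpu_omp_task_region(&attr);

	/* starpu_data_lookup() can later retrieve the handle registered
	 * for a given pointer. */
	starpu_data_handle_t looked_up = starpu_data_lookup(vector);
	(void) looked_up;
}
\endcode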
\subsection OMPTaskSyncs TaskWait and TaskGroup

The SORS implements both the <c>taskwait</c> and <c>taskgroup</c> OpenMP task
synchronization constructs specified in OpenMP 4.0, with the
starpu_omp_taskwait() and starpu_omp_taskgroup() functions respectively.

An example of starpu_omp_taskwait() use, creating two explicit tasks and
waiting for their completion:
\code{.c}
void task_region_g(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;
	printf("Hello, World!\n");
}

void parallel_region_f(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;
	struct starpu_omp_task_region_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.cl.cpu_funcs[0] = task_region_g;
	attr.cl.where = STARPU_CPU;
	attr.if_clause = 1;
	attr.final_clause = 0;
	attr.untied_clause = 1;
	attr.mergeable_clause = 0;

	starpu_omp_task_region(&attr);
	starpu_omp_task_region(&attr);

	starpu_omp_taskwait();
}
\endcode
An example of starpu_omp_taskgroup() use, creating a task group of two explicit tasks:

\code{.c}
void task_region_g(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;
	printf("Hello, World!\n");
}

void taskgroup_f(void *arg)
{
	(void) arg;
	struct starpu_omp_task_region_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.cl.cpu_funcs[0] = task_region_g;
	attr.cl.where = STARPU_CPU;
	attr.if_clause = 1;
	attr.final_clause = 0;
	attr.untied_clause = 1;
	attr.mergeable_clause = 0;

	starpu_omp_task_region(&attr);
	starpu_omp_task_region(&attr);
}

void parallel_region_f(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;
	starpu_omp_taskgroup(taskgroup_f, (void *)NULL);
}
\endcode
\sa starpu_omp_task_region()
\sa starpu_omp_taskwait()
\sa starpu_omp_taskgroup()
\sa starpu_omp_taskgroup_inline_begin()
\sa starpu_omp_taskgroup_inline_end()
\section OMPSynchronization Synchronization Support

The SORS implements objects and methods to build common OpenMP
synchronization constructs.
\subsection OMPSimpleLock Simple Locks

SORS Simple Locks are opaque starpu_omp_lock_t objects enabling multiple
tasks to synchronize with each other, following the Simple Lock constructs
defined by the OpenMP specification. In accordance with that specification,
a simple lock may not be acquired multiple times by the same task without
being released in between; otherwise, deadlocks may result. Code requiring
the ability to lock multiple times recursively should use Nestable Locks
(\ref OMPNestableLock). Code NOT requiring the ability to lock multiple
times recursively should use Simple Locks, as they incur less processing
overhead than Nestable Locks.
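
A minimal sketch of protecting a shared counter with a simple lock is shown
below; it assumes the lock routines take a pointer to the starpu_omp_lock_t
object, mirroring the OpenMP <c>omp_*_lock()</c> API:

\code{.c}
static starpu_omp_lock_t counter_lock;
static int counter;

void parallel_region_f(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;
	/* Each implicit task of the parallel region increments the shared
	 * counter under mutual exclusion. */
	starpu_omp_set_lock(&counter_lock);
	counter++;
	starpu_omp_unset_lock(&counter_lock);
}

void f(void)
{
	starpu_omp_init_lock(&counter_lock);
	/* ... run the parallel region calling parallel_region_f ... */
	starpu_omp_destroy_lock(&counter_lock);
}
\endcode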
\sa starpu_omp_lock_t
\sa starpu_omp_init_lock()
\sa starpu_omp_destroy_lock()
\sa starpu_omp_set_lock()
\sa starpu_omp_unset_lock()
\sa starpu_omp_test_lock()
\subsection OMPNestableLock Nestable Locks

SORS Nestable Locks are opaque starpu_omp_nest_lock_t objects enabling
multiple tasks to synchronize with each other, following the Nestable Lock
constructs defined by the OpenMP specification. In accordance with that
specification, a nestable lock may be acquired multiple times recursively by
the same task without deadlocking. Nested locking and unlocking operations
must be well parenthesized at any time, otherwise deadlock and/or undefined
behaviour may occur. Code requiring the ability to lock multiple times
recursively should use Nestable Locks. Code NOT requiring the ability to
lock multiple times recursively should use Simple Locks
(\ref OMPSimpleLock) instead, as they incur less processing overhead than
Nestable Locks.
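
The sketch below shows the recursive acquisition that nestable locks permit;
as above, it assumes the nestable lock routines take a pointer to the
starpu_omp_nest_lock_t object, mirroring the OpenMP <c>omp_*_nest_lock()</c>
API. The <c>struct node</c> type is only illustrative:

\code{.c}
static starpu_omp_nest_lock_t tree_lock;

struct node
{
	int value;
	struct node *child;
};

void update_node(struct node *n)
{
	/* The same task may re-acquire the lock it already holds when
	 * recursing, as long as sets and unsets remain well parenthesized. */
	starpu_omp_set_nest_lock(&tree_lock);
	n->value++;
	if (n->child)
		update_node(n->child);
	starpu_omp_unset_nest_lock(&tree_lock);
}
\endcode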
\sa starpu_omp_nest_lock_t
\sa starpu_omp_init_nest_lock()
\sa starpu_omp_destroy_nest_lock()
\sa starpu_omp_set_nest_lock()
\sa starpu_omp_unset_nest_lock()
\sa starpu_omp_test_nest_lock()
\subsection OMPCritical Critical Sections

The SORS implements support for OpenMP critical sections through the family
of \ref starpu_omp_critical functions. Critical sections may optionally be
named. There is a single, common anonymous critical section. Mutual
exclusion only occurs within the scope of a single critical section, either
a named one or the anonymous one.
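
A minimal sketch follows; it assumes that the outlined starpu_omp_critical()
variant takes a function pointer, an opaque argument and the section name,
with a \c NULL name selecting the anonymous critical section:

\code{.c}
static int total;

void critical_f(void *arg)
{
	(void) arg;
	/* Executed under mutual exclusion with respect to every other task
	 * entering the same (here, anonymous) critical section. */
	total++;
}

void parallel_region_f(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;
	starpu_omp_critical(critical_f, NULL, NULL);
}
\endcode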
\sa starpu_omp_critical()
\sa starpu_omp_critical_inline_begin()
\sa starpu_omp_critical_inline_end()
\subsection OMPBarrier Barriers

The SORS provides the starpu_omp_barrier() function to implement barriers
over parallel region teams. In accordance with the OpenMP specification, the
starpu_omp_barrier() function waits for every implicit task of the parallel
region to reach the barrier and for every explicit task launched by the
parallel region to complete, before returning.
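
For instance, within a parallel region codelet (a minimal sketch; the two
phases are only illustrative):

\code{.c}
void parallel_region_f(void *buffers[], void *args)
{
	(void) buffers;
	(void) args;
	/* Phase 1: work performed by every implicit task of the team. */
	starpu_omp_barrier();
	/* Phase 2: no implicit task reaches this point before all implicit
	 * tasks have reached the barrier and all explicit tasks launched by
	 * the region have completed. */
}
\endcode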
\sa starpu_omp_barrier()
*/