/*
 * This file is part of the StarPU Handbook.
 * Copyright (C) 2009--2011 Université de Bordeaux
 * Copyright (C) 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017 CNRS
 * Copyright (C) 2011, 2012 INRIA
 * See the file version.doxy for copying conditions.
 */
/*! \defgroup API_Data_Management Data Management

\brief This section describes the data management facilities provided
by StarPU. The existing data interfaces are described in
\ref API_Data_Interfaces, but developers can design their own data
interfaces if required.

\typedef starpu_data_handle_t
\ingroup API_Data_Management
StarPU uses ::starpu_data_handle_t as an opaque handle to manage a
piece of data. Once a piece of data has been registered to StarPU, it
is associated with a ::starpu_data_handle_t which keeps track of the
state of the piece of data over the entire machine, so that data
consistency can be maintained and data replicates can be located.

\typedef starpu_arbiter_t
\ingroup API_Data_Management
An arbiter implements an advanced but centralized management of
concurrent data accesses, see \ref ConcurrentDataAccess for details.

\enum starpu_data_access_mode
\ingroup API_Data_Management
This datatype describes a data access mode.
\var starpu_data_access_mode::STARPU_NONE
TODO

\var starpu_data_access_mode::STARPU_R
Read-only mode.

\var starpu_data_access_mode::STARPU_W
Write-only mode.

\var starpu_data_access_mode::STARPU_RW
Read-write mode. This is equivalent to ::STARPU_R|::STARPU_W.
\var starpu_data_access_mode::STARPU_SCRATCH
A temporary buffer is allocated for the task, but StarPU does not
enforce data consistency: each device has its own buffer, independent
of the others (even for CPUs), and no data transfer is ever performed.
This is useful for temporary variables, to avoid allocating/freeing
buffers inside each task. Currently, no behavior is defined concerning
the relation with the ::STARPU_R and ::STARPU_W modes and the value
provided at registration, i.e. the value of the scratch buffer is
undefined at entry of the codelet function. Defining at least the
initial value is being considered for future extensions. For now, data
to be used in ::STARPU_SCRATCH mode should be registered with node -1
and a <c>NULL</c> pointer, since the value of the provided buffer is
simply ignored.
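
For instance, a per-task scratch buffer can be declared as follows (a
minimal sketch; \c my_cpu_func and the buffer size are illustrative):

\code{.c}
/* Sketch: register a scratch buffer of 1024 floats. The home node is -1
 * and the pointer is 0, since the registered value is ignored for
 * STARPU_SCRATCH data. */
starpu_data_handle_t scratch_handle;
starpu_vector_data_register(&scratch_handle, -1, (uintptr_t) 0,
                            1024, sizeof(float));

/* The codelet declares the STARPU_SCRATCH access mode for this buffer. */
struct starpu_codelet cl =
{
	.cpu_funcs = { my_cpu_func },
	.nbuffers = 1,
	.modes = { STARPU_SCRATCH },
};
\endcode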
\var starpu_data_access_mode::STARPU_REDUX
Reduction mode: the task may access the data through a per-worker
buffer; the per-worker buffers are initialized and reduced with the
codelets set by starpu_data_set_reduction_methods().
\var starpu_data_access_mode::STARPU_COMMUTE
::STARPU_COMMUTE can be passed along with ::STARPU_W or ::STARPU_RW to
express that StarPU can let tasks commute. This is useful e.g. when
bringing a contribution into some data, which can be done in any order
(but still requires sequential consistency against reads or
non-commutative writes).
\var starpu_data_access_mode::STARPU_SSEND
Used in starpu_mpi_insert_task() to specify that the data has to be
sent using a synchronous and non-blocking mode (see
starpu_mpi_issend()).
\var starpu_data_access_mode::STARPU_LOCALITY
Used to tell the scheduler which data is the most important for the
task, and should thus be used to try to group tasks on the same core
or cache, etc. For now only the ws and lws schedulers take this flag
into account, and only when rebuilt with the USE_LOCALITY flag defined
in the src/sched_policies/work_stealing_policy.c source code.
\var starpu_data_access_mode::STARPU_ACCESS_MODE_MAX
Sentinel value marking the upper bound of this enumeration; it is not
an actual access mode.
@name Basic Data Management API
\ingroup API_Data_Management

Data management is done at a high level in StarPU: rather than
accessing a mere list of contiguous buffers, the tasks may manipulate
data that are described by a high-level construct which we call data
interface.

An example of data interface is the "vector" interface which describes
a contiguous data array on a specific memory node. This interface is a
simple structure containing the number of elements in the array, the
size of the elements, and the address of the array in the appropriate
address space (this address may be invalid if there is no valid copy
of the array in the memory node). More information on the data
interfaces provided by StarPU is given in \ref API_Data_Interfaces.

When a piece of data managed by StarPU is used by a task, the task
implementation is given a pointer to an interface describing a valid
copy of the data that is accessible from the current processing unit.

Every worker is associated with a memory node, which is a logical
abstraction of the address space from which the processing unit gets
its data. For instance, the memory node associated with the different
CPU workers represents main memory (RAM), while the memory node
associated with a GPU is the DRAM embedded on the device. Every memory
node is identified by a logical index which can be obtained with the
function starpu_worker_get_memory_node(). When registering a piece of
data to StarPU, the specified memory node indicates where the piece of
data initially resides (we also call this memory node the home node of
a piece of data).
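
As a brief illustration (a sketch; the loop simply prints the
worker-to-node mapping), the memory node of each worker can be
queried as follows:

\code{.c}
/* Sketch: print the memory node associated with each worker. */
unsigned nworkers = starpu_worker_get_count();
for (unsigned workerid = 0; workerid < nworkers; workerid++)
{
	unsigned node = starpu_worker_get_memory_node(workerid);
	printf("worker %u uses memory node %u\n", workerid, node);
}
\endcode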
\fn void starpu_data_register(starpu_data_handle_t *handleptr, int home_node, void *data_interface, struct starpu_data_interface_ops *ops)
\ingroup API_Data_Management
Register a piece of data into the handle located at the \p handleptr
address. The \p data_interface buffer contains the initial description
of the data in the \p home_node. The \p ops argument is a pointer to a
structure describing the different methods used to manipulate this
type of interface. See starpu_data_interface_ops for more details on
this structure.

If \p home_node is -1, StarPU will automatically allocate the memory
when it is used for the first time in write-only mode. Once such a
data handle has been automatically allocated, it is possible to access
it using any access mode.

Note that StarPU supplies a set of predefined types of interface (e.g.
vector or matrix) which can be registered by means of helper functions
(e.g. starpu_vector_data_register() or starpu_matrix_data_register()).
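
For instance (a minimal sketch; the array name and size are
illustrative), a vector residing in main memory can be registered with
the vector helper and unregistered once it is no longer used:

\code{.c}
/* Sketch: register an existing array as a StarPU vector, use it in
 * tasks, then unregister it to write the final value back. */
float vector[1024];
starpu_data_handle_t vector_handle;

starpu_vector_data_register(&vector_handle, STARPU_MAIN_RAM,
                            (uintptr_t)vector, 1024, sizeof(vector[0]));

/* ... submit tasks accessing vector_handle ... */

starpu_data_unregister(vector_handle);
\endcode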
\fn void starpu_data_ptr_register(starpu_data_handle_t handle, unsigned node)
\ingroup API_Data_Management
Register that a buffer for \p handle on \p node will be set. This is
typically used by the starpu_*_ptr_register helpers before setting the
interface pointers for this node, to tell the core that it is now
allocated.
\fn void starpu_data_register_same(starpu_data_handle_t *handledst, starpu_data_handle_t handlesrc)
\ingroup API_Data_Management
Register a new piece of data into the handle \p handledst with the
same interface as the handle \p handlesrc.

\fn void starpu_data_unregister(starpu_data_handle_t handle)
\ingroup API_Data_Management
Unregister a data \p handle from StarPU. If the data was automatically
allocated by StarPU because the home node was -1, all automatically
allocated buffers are freed. Otherwise, a valid copy of the data is
put back into the home node in the buffer that was initially
registered. Using a data handle that has been unregistered from StarPU
results in undefined behaviour. If the value of the data does not need
to be updated in the home node, the function
starpu_data_unregister_no_coherency() can be used instead.
\fn void starpu_data_unregister_no_coherency(starpu_data_handle_t handle)
\ingroup API_Data_Management
This is the same as starpu_data_unregister(), except that StarPU does
not put back a valid copy into the home node, in the buffer that was
initially registered.

\fn void starpu_data_unregister_submit(starpu_data_handle_t handle)
\ingroup API_Data_Management
Destroy the data \p handle once it is not needed anymore by any
submitted task. No coherency is assumed.

\fn void starpu_data_invalidate(starpu_data_handle_t handle)
\ingroup API_Data_Management
Destroy all replicates of the data \p handle immediately. After data
invalidation, the first access to \p handle must be performed in
::STARPU_W mode. Accessing an invalidated data in ::STARPU_R mode
results in undefined behaviour.

\fn void starpu_data_invalidate_submit(starpu_data_handle_t handle)
\ingroup API_Data_Management
Submit invalidation of the data \p handle after completion of
previously submitted tasks.
\fn void starpu_data_set_wt_mask(starpu_data_handle_t handle, uint32_t wt_mask)
\ingroup API_Data_Management
Set the write-through mask of the data \p handle (and its children),
i.e. a bitmask of nodes where the data should always be replicated
after modification. It also prevents the data from being evicted from
these nodes when memory gets scarce. When the data is modified, it is
automatically transferred into those memory nodes. For instance a
<c>1<<0</c> write-through mask means that the CUDA workers will commit
their changes in main memory (node 0).
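
A brief sketch, assuming \p handle has already been registered:

\code{.c}
/* Sketch: always keep an up-to-date copy of the data in main memory
 * (memory node 0) after each modification. */
starpu_data_set_wt_mask(handle, 1<<0);
\endcode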
\fn int starpu_data_fetch_on_node(starpu_data_handle_t handle, unsigned node, unsigned async)
\ingroup API_Data_Management
Issue a fetch request for the data \p handle to \p node, i.e. request
that the data be replicated to the given node as soon as possible, so
that it is available there for tasks. If \p async is 0, the call will
block until the transfer is completed, else the call will return
immediately, after having just queued the request. In the latter case,
the request will asynchronously wait for the completion of any task
writing on the data.

\fn int starpu_data_prefetch_on_node(starpu_data_handle_t handle, unsigned node, unsigned async)
\ingroup API_Data_Management
Issue a prefetch request for the data \p handle to \p node, i.e.
request that the data be replicated to \p node when there is room for
it, so that it is available there for tasks. If \p async is 0, the
call will block until the transfer is completed, else the call will
return immediately, after having just queued the request. In the
latter case, the request will asynchronously wait for the completion
of any task writing on the data.

\fn int starpu_data_idle_prefetch_on_node(starpu_data_handle_t handle, unsigned node, unsigned async)
\ingroup API_Data_Management
Issue an idle prefetch request for the data \p handle to \p node, i.e.
request that the data be replicated to \p node, so that it is
available there for tasks, but only when the bus is really idle. If
\p async is 0, the call will block until the transfer is completed,
else the call will return immediately, after having just queued the
request. In the latter case, the request will asynchronously wait for
the completion of any task writing on the data.
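
For instance (a sketch; \c handle and \c workerid are assumed to be
valid), data can be asynchronously prefetched to the memory node of a
given worker:

\code{.c}
/* Sketch: asynchronously prefetch the data to the memory node used by
 * a given worker, so that it is already there when tasks run. */
unsigned node = starpu_worker_get_memory_node(workerid);
starpu_data_prefetch_on_node(handle, node, 1);
\endcode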
\fn void starpu_data_wont_use(starpu_data_handle_t handle)
\ingroup API_Data_Management
Advise StarPU that \p handle will not be used in the near future, and
is thus a good candidate for eviction from GPUs. StarPU will thus
write its value back to its home node when the bus is idle, and select
this data in priority for eviction when memory gets low.
\fn starpu_data_handle_t starpu_data_lookup(const void *ptr)
\ingroup API_Data_Management
Return the handle corresponding to the data pointed to by the \p ptr
host pointer.

\fn int starpu_data_request_allocation(starpu_data_handle_t handle, unsigned node)
\ingroup API_Data_Management
Explicitly ask StarPU to allocate room for a piece of data on the
specified memory \p node.

\fn void starpu_data_query_status(starpu_data_handle_t handle, int memory_node, int *is_allocated, int *is_valid, int *is_requested)
\ingroup API_Data_Management
Query the status of \p handle on the specified \p memory_node.

\fn void starpu_data_advise_as_important(starpu_data_handle_t handle, unsigned is_important)
\ingroup API_Data_Management
Specify that the data \p handle can be discarded without impacting the
application.
\fn void starpu_data_set_reduction_methods(starpu_data_handle_t handle, struct starpu_codelet *redux_cl, struct starpu_codelet *init_cl)
\ingroup API_Data_Management
Set the codelets to be used for \p handle when it is accessed in the
mode ::STARPU_REDUX. Per-worker buffers will be initialized with the
codelet \p init_cl, and reduction between per-worker buffers will be
done with the codelet \p redux_cl.
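
A minimal sketch, assuming the application has defined an \c init_cl
codelet (which sets its single buffer to the neutral element), a
\c redux_cl codelet (which accumulates its second buffer into the
first) and an \c accumulate_cl codelet:

\code{.c}
/* Sketch: a variable accumulated by tasks in STARPU_REDUX mode.
 * init_cl, redux_cl and accumulate_cl are assumed to be defined by
 * the application. */
double sum = 0.0;
starpu_data_handle_t sum_handle;
starpu_variable_data_register(&sum_handle, STARPU_MAIN_RAM,
                              (uintptr_t)&sum, sizeof(sum));
starpu_data_set_reduction_methods(sum_handle, &redux_cl, &init_cl);

/* Tasks can now access sum_handle in STARPU_REDUX mode, e.g.:
 * starpu_task_insert(&accumulate_cl, STARPU_REDUX, sum_handle, 0); */
\endcode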
\fn struct starpu_data_interface_ops* starpu_data_get_interface_ops(starpu_data_handle_t handle)
\ingroup API_Data_Management
Return the structure of interface methods (struct
starpu_data_interface_ops) associated with \p handle.
\fn void starpu_data_set_user_data(starpu_data_handle_t handle, void* user_data)
\ingroup API_Data_Management
Set the field \c user_data of the \p handle to \p user_data. It can
then be retrieved with starpu_data_get_user_data(). \p user_data can
be any application-defined value, for instance a pointer to an
object-oriented container for the data.

\fn void *starpu_data_get_user_data(starpu_data_handle_t handle)
\ingroup API_Data_Management
Retrieve the field \c user_data previously set for the \p handle.
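
A brief sketch, assuming a registered \p handle and an
application-defined descriptor type:

\code{.c}
/* Sketch: attach an application-level descriptor to a handle and
 * retrieve it later. my_descriptor is an application-defined type. */
struct my_descriptor { int id; };
struct my_descriptor desc = { .id = 42 };

starpu_data_set_user_data(handle, &desc);

/* ... later, e.g. in a callback ... */
struct my_descriptor *d = starpu_data_get_user_data(handle);
\endcode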
@name Access registered data from the application
\ingroup API_Data_Management

\fn int starpu_data_acquire(starpu_data_handle_t handle, enum starpu_data_access_mode mode)
\ingroup API_Data_Management
The application must call this function prior to accessing registered
data from main memory outside tasks. StarPU ensures that the
application will get an up-to-date copy of \p handle in main memory
located where the data was originally registered, and that all
concurrent accesses (e.g. from tasks) will be consistent with the
access mode specified with \p mode. starpu_data_release() must be
called once the application no longer needs to access the piece of
data. Note that implicit data dependencies are also enforced by
starpu_data_acquire(), i.e. starpu_data_acquire() will wait for all
tasks scheduled to work on the data, unless they have been disabled
explicitly by calling starpu_data_set_default_sequential_consistency_flag()
or starpu_data_set_sequential_consistency_flag(). starpu_data_acquire()
is a blocking call, so it cannot be called from tasks or from their
callbacks (in that case, starpu_data_acquire() returns
<c>-EDEADLK</c>). Upon successful completion, this function returns 0.
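
A minimal sketch, assuming a registered handle \c vector_handle; the
application inspects the data between task submissions:

\code{.c}
/* Sketch: read the current value of a registered vector from the
 * application, outside of any task. */
starpu_data_acquire(vector_handle, STARPU_R);
/* The buffer registered for vector_handle now holds an up-to-date
 * copy and can be read safely here. */
starpu_data_release(vector_handle);
\endcode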
\fn int starpu_data_acquire_cb(starpu_data_handle_t handle, enum starpu_data_access_mode mode, void (*callback)(void *), void *arg)
\ingroup API_Data_Management
Asynchronous equivalent of starpu_data_acquire(). When the data
specified in \p handle is available in the access \p mode, the
\p callback function is executed. The application may access the
requested data during the execution of \p callback. The \p callback
function must call starpu_data_release() once the application does not
need to access the piece of data anymore. Note that implicit data
dependencies are also enforced by starpu_data_acquire_cb() in case
they are not disabled. Contrary to starpu_data_acquire(), this
function is non-blocking and may be called from task callbacks. Upon
successful completion, this function returns 0.
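
A minimal sketch, again assuming a registered handle \c vector_handle;
the callback releases the data once it has used it:

\code{.c}
/* Sketch: asynchronously acquire a handle and release it from the
 * callback once the data has been used. */
void acquired_callback(void *arg)
{
	starpu_data_handle_t handle = arg;
	/* The data behind handle can be read here (STARPU_R mode). */
	starpu_data_release(handle);
}

/* ... */
starpu_data_acquire_cb(vector_handle, STARPU_R,
                       acquired_callback, vector_handle);
\endcode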
\fn int starpu_data_acquire_cb_sequential_consistency(starpu_data_handle_t handle, enum starpu_data_access_mode mode, void (*callback)(void *), void *arg, int sequential_consistency)
\ingroup API_Data_Management
Equivalent of starpu_data_acquire_cb() with the possibility of
enabling or disabling data dependencies. When the data specified in
\p handle is available in the access \p mode, the \p callback function
is executed. The application may access the requested data during the
execution of this \p callback. The \p callback function must call
starpu_data_release() once the application does not need to access the
piece of data anymore. Note that implicit data dependencies are also
enforced by starpu_data_acquire_cb_sequential_consistency() in case
they are not disabled specifically for the given \p handle or by the
parameter \p sequential_consistency. Similarly to
starpu_data_acquire_cb(), this function is non-blocking and may be
called from task callbacks. Upon successful completion, this function
returns 0.
\def STARPU_ACQUIRE_NO_NODE
\ingroup API_Data_Management
This macro can be used to acquire data without requiring it to be
available on a given node; only the R/W dependencies are enforced.
This can for instance be used to wait for tasks which produce the
data, but without requesting a fetch to the main memory.

\def STARPU_ACQUIRE_NO_NODE_LOCK_ALL
\ingroup API_Data_Management
This is the same as ::STARPU_ACQUIRE_NO_NODE, but will lock the data
on all nodes, preventing them from being evicted for instance. This is
mostly useful inside StarPU itself.
\fn int starpu_data_acquire_on_node(starpu_data_handle_t handle, int node, enum starpu_data_access_mode mode)
\ingroup API_Data_Management
This is the same as starpu_data_acquire(), except that the data will
be available on the given memory node instead of main memory.
::STARPU_ACQUIRE_NO_NODE and ::STARPU_ACQUIRE_NO_NODE_LOCK_ALL can be
used instead of an explicit node number.

\fn int starpu_data_acquire_on_node_cb(starpu_data_handle_t handle, int node, enum starpu_data_access_mode mode, void (*callback)(void *), void *arg)
\ingroup API_Data_Management
This is the same as starpu_data_acquire_cb(), except that the data
will be available on the given memory node instead of main memory.
::STARPU_ACQUIRE_NO_NODE and ::STARPU_ACQUIRE_NO_NODE_LOCK_ALL can be
used instead of an explicit node number.

\fn int starpu_data_acquire_on_node_cb_sequential_consistency(starpu_data_handle_t handle, int node, enum starpu_data_access_mode mode, void (*callback)(void *), void *arg, int sequential_consistency)
\ingroup API_Data_Management
This is the same as starpu_data_acquire_cb_sequential_consistency(),
except that the data will be available on the given memory node
instead of main memory. ::STARPU_ACQUIRE_NO_NODE and
::STARPU_ACQUIRE_NO_NODE_LOCK_ALL can be used instead of an explicit
node number.
\def STARPU_DATA_ACQUIRE_CB(handle, mode, code)
\ingroup API_Data_Management
STARPU_DATA_ACQUIRE_CB() is the same as starpu_data_acquire_cb(),
except that the code to be executed in a callback is directly provided
as a macro parameter, and the data \p handle is automatically released
after it. This makes it easy to execute code which depends on the
value of some registered data. This is non-blocking too and may be
called from task callbacks.
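
A brief sketch, assuming the \c vector array and its handle
\c vector_handle shown earlier; the code passed to the macro simply
prints the first element:

\code{.c}
/* Sketch: run a small piece of code once the data is available in
 * main memory, and release it automatically afterwards. */
STARPU_DATA_ACQUIRE_CB(vector_handle, STARPU_R,
                       printf("first element: %f\n", vector[0]));
\endcode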
\fn void starpu_data_release(starpu_data_handle_t handle)
\ingroup API_Data_Management
Release the piece of data acquired by the application either by
starpu_data_acquire() or by starpu_data_acquire_cb().

\fn void starpu_data_release_on_node(starpu_data_handle_t handle, int node)
\ingroup API_Data_Management
This is the same as starpu_data_release(), except that the data will
be available on the given memory \p node instead of main memory. The
\p node parameter must be exactly the same as the corresponding
\c starpu_data_acquire_on_node* call.
\fn starpu_arbiter_t starpu_arbiter_create(void)
\ingroup API_Data_Management
Create a data access arbiter, see \ref ConcurrentDataAccess for
details.

\fn void starpu_data_assign_arbiter(starpu_data_handle_t handle, starpu_arbiter_t arbiter)
\ingroup API_Data_Management
Make accesses to \p handle be managed by \p arbiter.

\fn void starpu_arbiter_destroy(starpu_arbiter_t arbiter)
\ingroup API_Data_Management
Destroy the \p arbiter. This must only be called after all data
assigned to it have been unregistered.
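
A minimal sketch of the arbiter lifecycle, assuming two registered
handles \c handle1 and \c handle2 whose concurrent accesses should be
arbitrated:

\code{.c}
/* Sketch: make the concurrent accesses to two handles go through the
 * same arbiter, then clean up after the handles are unregistered. */
starpu_arbiter_t arbiter = starpu_arbiter_create();
starpu_data_assign_arbiter(handle1, arbiter);
starpu_data_assign_arbiter(handle2, arbiter);

/* ... submit tasks accessing handle1 and handle2 ... */

starpu_data_unregister(handle1);
starpu_data_unregister(handle2);
starpu_arbiter_destroy(arbiter);
\endcode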
*/