
/*
 * This file is part of the StarPU Handbook.
 * Copyright (C) 2009--2011 Université de Bordeaux 1
 * Copyright (C) 2010, 2011, 2012, 2013 Centre National de la Recherche Scientifique
 * Copyright (C) 2011, 2012 Institut National de Recherche en Informatique et Automatique
 * See the file version.doxy for copying conditions.
 */
/*! \page BasicExamples Basic Examples

\section HelloWorldUsingTheCExtension Hello World Using The C Extension

This section shows how to implement a simple program that submits a task
to StarPU using the StarPU C extension (\ref cExtensions). The complete
example, and additional examples, are available in the
<c>gcc-plugin/examples</c> directory of the StarPU distribution. A
similar example showing how to use StarPU's API directly is shown in
\ref HelloWorldUsingStarPUAPI.
From version 4.5, GCC permits the use of the StarPU GCC plug-in
(\ref cExtensions), which makes writing a task both simpler and less
error-prone. In a nutshell, all it takes is to declare a task, declare and
define its implementations (for CPU, OpenCL, and/or CUDA), and invoke the
task like a regular C function. The example below defines <c>my_task</c>,
which has a single implementation for CPU:

\snippet hello_pragma.c To be included
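
In essence, the snippet boils down to the following (a sketch; the exact
code lives in <c>gcc-plugin/examples</c>, and the <c>task</c> attribute
and <c>starpu</c> pragmas are those described in \ref cExtensions):

\code{.c}
#include <stdio.h>

/* Declare the `my_task' task, and define its CPU implementation. */
static void my_task (int x) __attribute__ ((task));

static void
my_task (int x)
{
  printf ("Hello, world! With x = %d\n", x);
}

int
main (void)
{
#pragma starpu initialize
  /* Invoke the task like a regular C function. */
  my_task (42);
  /* Wait for the task to terminate, then shut StarPU down. */
#pragma starpu wait
#pragma starpu shutdown
  return 0;
}
\endcode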
The code can then be compiled and linked with GCC and the <c>-fplugin</c>
flag:

\verbatim
$ gcc `pkg-config starpu-1.2 --cflags` hello-starpu.c \
    -fplugin=`pkg-config starpu-1.2 --variable=gccplugin` \
    `pkg-config starpu-1.2 --libs`
\endverbatim

The code can also be compiled without the StarPU C extension; it then
behaves as normal sequential code.

\verbatim
$ gcc hello-starpu.c
hello-starpu.c:33:1: warning: ‘task’ attribute directive ignored [-Wattributes]
$ ./a.out
Hello, world! With x = 42
\endverbatim
As can be seen above, the C extensions allow programmers to
use StarPU tasks by essentially annotating "regular" C code.

\section HelloWorldUsingStarPUAPI Hello World Using StarPU's API

This section shows how to achieve the same result as in the previous
section using StarPU's standard C API.

\subsection RequiredHeaders Required Headers

The header <c>starpu.h</c> should be included in any code using StarPU.

\code{.c}
#include <starpu.h>
\endcode

\subsection DefiningACodelet Defining A Codelet
\code{.c}
struct params
{
    int i;
    float f;
};

void cpu_func(void *buffers[], void *cl_arg)
{
    struct params *params = cl_arg;

    printf("Hello world (params = {%i, %f} )\n", params->i, params->f);
}

struct starpu_codelet cl =
{
    .where = STARPU_CPU,
    .cpu_funcs = { cpu_func, NULL },
    .cpu_funcs_name = { "cpu_func", NULL },
    .nbuffers = 0
};
\endcode
A codelet is a structure that represents a computational kernel. Such a codelet
may contain an implementation of the same kernel on different architectures
(e.g. CUDA, x86, ...). For compatibility, make sure that the whole
structure is properly initialized to zero, either by using the
function starpu_codelet_init(), or by letting the
compiler implicitly do it as exemplified above.

The field starpu_codelet::nbuffers specifies the number of data buffers that are
manipulated by the codelet: here the codelet does not access or modify any data
that is controlled by our data management library. Note that the argument
passed to the codelet (the field starpu_task::cl_arg) does not count
as a buffer since it is not managed by our data management library,
but just contains trivial parameters.

\internal
TODO need a crossref to the proper description of "where" see bla for more ...
\endinternal

We create a codelet which may only be executed on the CPUs. The field
starpu_codelet::where is a bitmask that defines where the codelet may
be executed. Here, the value ::STARPU_CPU means that only CPUs can
execute this codelet. Note that the field starpu_codelet::where is
optional: when unset, its value is automatically set based on the
availability of the different fields <c>XXX_funcs</c>.
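
For instance, the codelet above could equivalently omit the field
starpu_codelet::where, a sketch of which follows; StarPU then deduces it
from the fields that are set:

\code{.c}
struct starpu_codelet cl =
{
    /* .where is omitted: deduced from the fact that only CPU
       implementations are provided. */
    .cpu_funcs = { cpu_func, NULL },
    .cpu_funcs_name = { "cpu_func", NULL },
    .nbuffers = 0
};
\endcode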
When a CPU core executes a codelet, it calls the function
<c>cpu_func</c>, which \em must have the following prototype:

\code{.c}
void (*cpu_func)(void *buffers[], void *cl_arg);
\endcode

In this example, we can ignore the first argument of this function, which
gives a description of the input and output buffers (e.g. the size and the
location of the matrices), since there is none.
The second argument is a pointer to a buffer passed as an
argument to the codelet by means of the field starpu_task::cl_arg.

\internal
TODO rewrite so that it is a little clearer ?
\endinternal

Be aware that this may be a pointer to a
\em copy of the actual buffer, and not the pointer given by the programmer:
if the codelet modifies this buffer, there is no guarantee that the initial
buffer will be modified as well. This for instance implies that the buffer
cannot be used as a synchronization medium. If synchronization is needed, data
has to be registered to StarPU, see \ref VectorScalingUsingStarPUAPI.
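
To illustrate the pitfall, the following hypothetical codelet tries
(incorrectly) to report completion through its <c>cl_arg</c> buffer (a
sketch; the <c>done</c> flag is made up for illustration):

\code{.c}
/* INCORRECT: cl_arg may point to a copy of the submitter's buffer,
   so this write may never be visible to the submitting code. */
void cpu_func(void *buffers[], void *cl_arg)
{
    int *done = cl_arg;
    *done = 1; /* the submitter's variable may remain unchanged */
}
\endcode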
\subsection SubmittingATask Submitting A Task

\code{.c}
void callback_func(void *callback_arg)
{
    printf("Callback function (arg %x)\n", callback_arg);
}

int main(int argc, char **argv)
{
    /* initialize StarPU */
    starpu_init(NULL);

    struct starpu_task *task = starpu_task_create();

    task->cl = &cl; /* Pointer to the codelet defined above */

    struct params params = { 1, 2.0f };
    task->cl_arg = &params;
    task->cl_arg_size = sizeof(params);

    task->callback_func = callback_func;
    task->callback_arg = (void *)0x42;

    /* starpu_task_submit will be a blocking call */
    task->synchronous = 1;

    /* submit the task to StarPU */
    starpu_task_submit(task);

    /* terminate StarPU */
    starpu_shutdown();

    return 0;
}
\endcode
Before submitting any tasks to StarPU, starpu_init() must be called. The
<c>NULL</c> argument specifies that we use the default configuration. Tasks
cannot be submitted after the termination of StarPU by a call to
starpu_shutdown().
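
In real code, it is advisable to check the value returned by
starpu_init(); for instance (a minimal sketch, assuming <c>errno.h</c> is
included and using the conventional exit code 77 to mark a skipped test
when no worker is available):

\code{.c}
int ret = starpu_init(NULL);
if (ret == -ENODEV)
    return 77; /* no worker is available: skip */
STARPU_CHECK_RETURN_VALUE(ret, "starpu_init");
\endcode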
In the example above, a task structure is allocated by a call to
starpu_task_create(). This function only allocates and fills the
corresponding structure with the default settings, but it does not
submit the task to StarPU.

\internal
not really clear ;)
\endinternal

The field starpu_task::cl is a pointer to the codelet which the task will
execute: in other words, the codelet structure describes which computational
kernel should be offloaded on the different architectures, and the task
structure is a wrapper containing a codelet and the piece of data on which the
codelet should operate.

The optional field starpu_task::cl_arg is a pointer to a buffer
(of size starpu_task::cl_arg_size) with some parameters for the kernel
described by the codelet. For instance, if a codelet implements a
computational kernel that multiplies its input vector by a constant,
the constant could be specified by means of this buffer, instead
of registering it as a StarPU data. It must however be noted that
StarPU avoids making copies whenever possible and rather passes the
pointer as such, so the buffer which is pointed to must be kept allocated
until the task terminates, and if several tasks are submitted with
various parameters, each of them must be given a pointer to its own
buffer.
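
One simple way to satisfy this requirement is to allocate one parameter
buffer per task and release it once the task is over (a sketch;
<c>struct params</c> is the structure defined earlier):

\code{.c}
struct params *p = malloc(sizeof(*p)); /* one buffer per task */
p->i = 1;
p->f = 2.0f;
task->cl_arg = p;
task->cl_arg_size = sizeof(*p);
/* free(p) can then be performed once the task is over, e.g. from the
   callback function. */
\endcode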
Once a task has been executed, an optional callback function is called.
While the computational kernel could be offloaded on various architectures, the
callback function is always executed on a CPU. The pointer
starpu_task::callback_arg is passed as an argument to the callback
function. The prototype of a callback function must be:

\code{.c}
void (*callback_function)(void *);
\endcode

If the field starpu_task::synchronous is non-zero, task submission
will be synchronous: the function starpu_task_submit() will not return
until the task has been executed. Note that the function starpu_shutdown()
does not guarantee that asynchronous tasks have been executed before
it returns; starpu_task_wait_for_all() can be used to that effect, or
data can be unregistered (starpu_data_unregister()), which will
implicitly wait for all the tasks scheduled to work on it, unless
explicitly disabled thanks to
starpu_data_set_default_sequential_consistency_flag() or
starpu_data_set_sequential_consistency_flag().
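
For instance, an asynchronous variant of the example could submit the task
and wait explicitly before shutting down (a sketch, using only the
functions introduced above):

\code{.c}
task->synchronous = 0;        /* starpu_task_submit() returns immediately */
starpu_task_submit(task);

/* ... possibly submit more tasks ... */

starpu_task_wait_for_all();   /* wait for all submitted tasks */
starpu_shutdown();
\endcode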
\subsection ExecutionOfHelloWorld Execution Of Hello World

\verbatim
$ make hello_world
cc $(pkg-config --cflags starpu-1.2) $(pkg-config --libs starpu-1.2) hello_world.c -o hello_world
$ ./hello_world
Hello world (params = {1, 2.000000} )
Callback function (arg 42)
\endverbatim
\section VectorScalingUsingTheCExtension Vector Scaling Using the C Extension

The previous example has shown how to submit tasks. In this section,
we show how StarPU tasks can manipulate data.

We will first show how to use the C language extensions provided by
the GCC plug-in (\ref cExtensions). The complete example, and
additional examples, are available in the <c>gcc-plugin/examples</c>
directory of the StarPU distribution. These extensions map directly
to StarPU's main concepts: tasks, task implementations for CPU,
OpenCL, or CUDA, and registered data buffers. The standard C version
that uses StarPU's standard C programming interface is given in the
next section (\ref VectorScalingUsingStarPUAPI).

First of all, the vector-scaling task and its simple CPU implementation
have to be defined:
\code{.c}
/* Declare the `vector_scal' task. */
static void vector_scal (unsigned size, float vector[size],
                         float factor)
  __attribute__ ((task));

/* Define the standard CPU implementation. */
static void
vector_scal (unsigned size, float vector[size], float factor)
{
  unsigned i;
  for (i = 0; i < size; i++)
    vector[i] *= factor;
}
\endcode
Next, the body of the program, which uses the task defined above, can be
implemented:

\snippet hello_pragma2.c To be included

The <c>main</c> function above does several things (recapped in the sketch
after this list):

<ul>
<li>
It initializes StarPU.
</li>
<li>
It allocates <c>vector</c> in the heap; it will automatically be freed
when its scope is left. Alternatively, good old <c>malloc</c> and
<c>free</c> could have been used, but they are more error-prone and
require more typing.
</li>
<li>
It registers the memory pointed to by <c>vector</c>. Eventually,
when OpenCL or CUDA task implementations are added, this will allow
StarPU to transfer that memory region between GPUs and the main memory.
Removing this <c>pragma</c> is an error.
</li>
<li>
It invokes the <c>vector_scal</c> task. The invocation looks the same
as a standard C function call. However, it is an asynchronous
invocation, meaning that the actual call is performed in parallel with
the caller's continuation.
</li>
<li>
It waits for the termination of the <c>vector_scal</c>
asynchronous call.
</li>
<li>
Finally, StarPU is shut down.
</li>
</ul>
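
These steps correspond to pragmas and attributes of the plug-in; roughly
(a sketch, assuming <c>NX</c> is defined and using the attributes and
pragmas documented in \ref cExtensions):

\code{.c}
int
main (void)
{
  /* Initialize StarPU. */
#pragma starpu initialize

  /* Allocate VECTOR in the heap; it is freed when its scope is left. */
  float vector[NX] __attribute__ ((heap_allocated));

  /* Register VECTOR with StarPU. */
#pragma starpu register vector

  /* Asynchronously invoke the task. */
  vector_scal (NX, vector, 3.14f);

  /* Wait for the invocation to complete, then shut StarPU down. */
#pragma starpu wait
#pragma starpu shutdown

  return 0;
}
\endcode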
The program can be compiled and linked with GCC and the <c>-fplugin</c>
flag:

\verbatim
$ gcc `pkg-config starpu-1.2 --cflags` vector_scal.c \
    -fplugin=`pkg-config starpu-1.2 --variable=gccplugin` \
    `pkg-config starpu-1.2 --libs`
\endverbatim

And voilà!
\subsection AddingAnOpenCLTaskImplementation Adding an OpenCL Task Implementation

Now, this is all fine and great, but you certainly want to take
advantage of these newfangled GPUs that your lab just bought, don't you?

So, let's add an OpenCL implementation of the <c>vector_scal</c> task.
We assume that the OpenCL kernel is available in a file,
<c>vector_scal_opencl_kernel.cl</c>, not shown here. The OpenCL task
implementation is similar to that used with the standard C API
(\ref DefinitionOfTheOpenCLKernel). It is declared and defined
in our C file like this:
\code{.c}
/* The OpenCL programs, loaded from 'main' (see below). */
static struct starpu_opencl_program cl_programs;

static void vector_scal_opencl (unsigned size, float vector[size],
                                float factor)
  __attribute__ ((task_implementation ("opencl", vector_scal)));

static void
vector_scal_opencl (unsigned size, float vector[size], float factor)
{
  int id, devid, err;
  cl_kernel kernel;
  cl_command_queue queue;
  cl_event event;

  /* VECTOR is a GPU memory pointer, not a main memory pointer. */
  cl_mem val = (cl_mem) vector;

  id = starpu_worker_get_id ();
  devid = starpu_worker_get_devid (id);

  /* Prepare to invoke the kernel. In the future, this will be largely automated. */
  err = starpu_opencl_load_kernel (&kernel, &queue, &cl_programs,
                                   "vector_mult_opencl", devid);
  if (err != CL_SUCCESS)
    STARPU_OPENCL_REPORT_ERROR (err);

  err = clSetKernelArg (kernel, 0, sizeof (size), &size);
  err |= clSetKernelArg (kernel, 1, sizeof (val), &val);
  err |= clSetKernelArg (kernel, 2, sizeof (factor), &factor);
  if (err)
    STARPU_OPENCL_REPORT_ERROR (err);

  size_t global = 1, local = 1;
  err = clEnqueueNDRangeKernel (queue, kernel, 1, NULL, &global,
                                &local, 0, NULL, &event);
  if (err != CL_SUCCESS)
    STARPU_OPENCL_REPORT_ERROR (err);

  clFinish (queue);
  starpu_opencl_collect_stats (event);
  clReleaseEvent (event);

  /* Done with KERNEL. */
  starpu_opencl_release_kernel (kernel);
}
\endcode
The OpenCL kernel itself must be loaded from <c>main</c>, sometime after
the <c>initialize</c> pragma:

\code{.c}
starpu_opencl_load_opencl_from_file ("vector_scal_opencl_kernel.cl",
                                     &cl_programs, "");
\endcode
And that's it. The <c>vector_scal</c> task now has an additional
implementation, for OpenCL, which StarPU's scheduler may choose to use
at run-time. Unfortunately, the <c>vector_scal_opencl</c> function above
still has to go through the common OpenCL boilerplate; in the future,
additional extensions will automate most of it.
\subsection AddingACUDATaskImplementation Adding a CUDA Task Implementation

Adding a CUDA implementation of the task is very similar, except that
the implementation itself is typically written in CUDA, and compiled
with <c>nvcc</c>. Thus, the C file only needs to contain an external
declaration for the task implementation:

\code{.c}
extern void vector_scal_cuda (unsigned size, float vector[size],
                              float factor)
  __attribute__ ((task_implementation ("cuda", vector_scal)));
\endcode
The actual implementation of the CUDA task goes into a separate
compilation unit, in a <c>.cu</c> file. It is very close to the
implementation when using StarPU's standard C API (\ref DefinitionOfTheCUDAKernel).

\code{.c}
/* CUDA implementation of the `vector_scal' task, to be compiled with `nvcc'. */

#include <starpu.h>
#include <stdlib.h>

static __global__ void
vector_mult_cuda (unsigned n, float *val, float factor)
{
  unsigned i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n)
    val[i] *= factor;
}

/* Definition of the task implementation declared in the C file. */
extern "C" void
vector_scal_cuda (unsigned size, float vector[], float factor)
{
  unsigned threads_per_block = 64;
  unsigned nblocks = (size + threads_per_block - 1) / threads_per_block;

  vector_mult_cuda <<< nblocks, threads_per_block, 0,
                       starpu_cuda_get_local_stream () >>> (size, vector, factor);

  cudaStreamSynchronize (starpu_cuda_get_local_stream ());
}
\endcode
The complete source code, in the <c>gcc-plugin/examples/vector_scal</c>
directory of the StarPU distribution, also shows how an SSE-specialized
CPU task implementation can be added, along the lines of the sketch below.
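
A minimal sketch of such an implementation, assuming SSE intrinsics from
<c>xmmintrin.h</c> and a vector length that need not be a multiple of 4:

\code{.c}
#include <xmmintrin.h>

/* Declare an SSE-specialized CPU implementation of `vector_scal'. */
static void vector_scal_sse (unsigned size, float vector[size], float factor)
  __attribute__ ((task_implementation ("cpu", vector_scal)));

static void
vector_scal_sse (unsigned size, float vector[size], float factor)
{
  unsigned units = size / 4;  /* number of full 4-float packets */
  __m128 *VECTOR = (__m128 *) vector;
  __m128 FACTOR = _mm_set1_ps (factor);

  unsigned i;
  for (i = 0; i < units; i++)
    VECTOR[i] = _mm_mul_ps (FACTOR, VECTOR[i]);

  /* Handle the remainder sequentially. */
  for (i = units * 4; i < size; i++)
    vector[i] *= factor;
}
\endcode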
For more details on the C extensions provided by StarPU's GCC plug-in,
see \ref cExtensions.

\section VectorScalingUsingStarPUAPI Vector Scaling Using StarPU's API

This section shows how to achieve the same result as explained in the
previous section using StarPU's standard C API. The full source code for
this example is given in \ref FullSourceCodeVectorScal.

\subsection SourceCodeOfVectorScaling Source Code of Vector Scaling
Programmers can describe the data layout of their application so that StarPU is
responsible for enforcing data coherency and availability across the machine.
Instead of handling complex (and non-portable) mechanisms to perform data
movements, programmers only declare which piece of data is accessed and/or
modified by a task, and StarPU makes sure that when a computational kernel
starts somewhere (e.g. on a GPU), its data are available locally.

Before submitting those tasks, the programmer first needs to declare the
different pieces of data to StarPU using the functions
<c>starpu_*_data_register</c>. To ease the development of applications
for StarPU, it is possible to describe multiple types of data layout.
A type of data layout is called an <b>interface</b>. There are
different predefined interfaces available in StarPU: here we will
consider the <b>vector interface</b>.
The following lines show how to declare an array of <c>NX</c> elements of type
<c>float</c> using the vector interface:

\code{.c}
float vector[NX];

starpu_data_handle_t vector_handle;
starpu_vector_data_register(&vector_handle, 0, (uintptr_t)vector, NX,
                            sizeof(vector[0]));
\endcode
The first argument, called the <b>data handle</b>, is an opaque pointer which
designates the array in StarPU. This is also the structure which is used to
describe which data is used by a task. The second argument is the node number
where the data originally resides. Here it is 0 since the array <c>vector</c>
is in the main memory. Then comes the pointer <c>vector</c> where the data can
be found in main memory, the number of elements in the vector, and the size of
each element.
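
Once all the tasks working on the vector (such as the one constructed next)
have been submitted, the handle can be unregistered; StarPU then waits for
those tasks and transfers the latest value of the data back to <c>vector</c>
in main memory (a minimal sketch):

\code{.c}
/* Wait for the tasks working on the handle and put the data back in
   main memory. */
starpu_data_unregister(vector_handle);
\endcode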
The following shows how to construct a StarPU task that will manipulate the
vector and a constant factor.

\code{.c}
float factor = 3.14;
struct starpu_task *task = starpu_task_create();

task->cl = &cl; /* Pointer to the codelet defined below */
task->handles[0] = vector_handle; /* First parameter of the codelet */
task->cl_arg = &factor;
task->cl_arg_size = sizeof(factor);
task->synchronous = 1;

starpu_task_submit(task);
\endcode
Since the factor is a mere constant float value parameter,
it does not need a preliminary registration, and
can just be passed through the pointer starpu_task::cl_arg as in the previous
example. The vector parameter is described by its handle.
The field starpu_task::handles should be set with the handles of the data; the
access modes for the data are defined in the field
starpu_codelet::modes (::STARPU_R for read-only, ::STARPU_W for
write-only and ::STARPU_RW for read and write access).
The definition of the codelet can be written as follows:

\code{.c}
void scal_cpu_func(void *buffers[], void *cl_arg)
{
    unsigned i;
    float *factor = cl_arg;

    /* length of the vector */
    unsigned n = STARPU_VECTOR_GET_NX(buffers[0]);
    /* CPU copy of the vector pointer */
    float *val = (float *)STARPU_VECTOR_GET_PTR(buffers[0]);

    for (i = 0; i < n; i++)
        val[i] *= *factor;
}

struct starpu_codelet cl =
{
    .cpu_funcs = { scal_cpu_func, NULL },
    .cpu_funcs_name = { "scal_cpu_func", NULL },
    .nbuffers = 1,
    .modes = { STARPU_RW }
};
\endcode
The first argument of <c>scal_cpu_func</c> is an array that gives
a description of all the buffers passed in the array starpu_task::handles. The
size of this array is given by the field starpu_codelet::nbuffers. For
the sake of genericity, this array contains pointers to the different
interfaces describing each buffer. In the case of the <b>vector
interface</b>, the location of the vector (resp. its length) is
accessible in the field starpu_vector_interface::ptr (resp.
starpu_vector_interface::nx) of this interface. Since the vector is
accessed in a read-write fashion, any modification will automatically
affect future accesses to this vector made by other tasks.

The second argument of the function <c>scal_cpu_func</c> contains a
pointer to the parameters of the codelet (given in
starpu_task::cl_arg), so that we read the constant factor from this
pointer.
\subsection ExecutionOfVectorScaling Execution of Vector Scaling

\verbatim
$ make vector_scal
cc $(pkg-config --cflags starpu-1.2) $(pkg-config --libs starpu-1.2) vector_scal.c -o vector_scal
$ ./vector_scal
0.000000 3.000000 6.000000 9.000000 12.000000
\endverbatim
\section VectorScalingOnAnHybridCPUGPUMachine Vector Scaling on a Hybrid CPU/GPU Machine

Contrary to the previous examples, the task submitted in this example may not
only be executed by the CPUs, but also by a CUDA device.

\subsection DefinitionOfTheCUDAKernel Definition of the CUDA Kernel

The CUDA implementation can be written as follows. It needs to be compiled with
a CUDA compiler such as <c>nvcc</c>, the NVIDIA CUDA compiler driver. It must
be noted that the vector pointer returned by ::STARPU_VECTOR_GET_PTR is here a
pointer in GPU memory, so that it can be passed as such to the
<c>vector_mult_cuda</c> kernel call.
\code{.c}
#include <starpu.h>

static __global__ void vector_mult_cuda(unsigned n, float *val,
                                        float factor)
{
    unsigned i = blockIdx.x*blockDim.x + threadIdx.x;
    if (i < n)
        val[i] *= factor;
}

extern "C" void scal_cuda_func(void *buffers[], void *_args)
{
    float *factor = (float *)_args;

    /* length of the vector */
    unsigned n = STARPU_VECTOR_GET_NX(buffers[0]);
    /* CUDA copy of the vector pointer */
    float *val = (float *)STARPU_VECTOR_GET_PTR(buffers[0]);

    unsigned threads_per_block = 64;
    unsigned nblocks = (n + threads_per_block-1) / threads_per_block;

    vector_mult_cuda<<<nblocks, threads_per_block, 0, starpu_cuda_get_local_stream()>>>
        (n, val, *factor);

    cudaStreamSynchronize(starpu_cuda_get_local_stream());
}
\endcode
\subsection DefinitionOfTheOpenCLKernel Definition of the OpenCL Kernel

The OpenCL implementation can be written as follows. StarPU provides
tools to compile an OpenCL kernel stored in a file.

\code{.c}
__kernel void vector_mult_opencl(int nx, __global float* val, float factor)
{
    const int i = get_global_id(0);
    if (i < nx) {
        val[i] *= factor;
    }
}
\endcode
Contrary to CUDA and CPU, ::STARPU_VECTOR_GET_DEV_HANDLE has to be used
here: it returns a <c>cl_mem</c> (an OpenCL handle rather than a device
pointer), which can be passed as such to the OpenCL kernel. The difference is
important when using partitioning, see \ref PartitioningData.
\code{.c}
#include <starpu.h>

extern struct starpu_opencl_program programs;

void scal_opencl_func(void *buffers[], void *_args)
{
    float *factor = _args;
    int id, devid, err;     /* OpenCL specific code */
    cl_kernel kernel;       /* OpenCL specific code */
    cl_command_queue queue; /* OpenCL specific code */
    cl_event event;         /* OpenCL specific code */

    /* length of the vector */
    unsigned n = STARPU_VECTOR_GET_NX(buffers[0]);
    /* OpenCL copy of the vector pointer */
    cl_mem val = (cl_mem)STARPU_VECTOR_GET_DEV_HANDLE(buffers[0]);

    { /* OpenCL specific code */
        id = starpu_worker_get_id();
        devid = starpu_worker_get_devid(id);

        err = starpu_opencl_load_kernel(&kernel, &queue, &programs,
                                        "vector_mult_opencl", devid); /* Name of the kernel defined above */
        if (err != CL_SUCCESS) STARPU_OPENCL_REPORT_ERROR(err);

        err = clSetKernelArg(kernel, 0, sizeof(n), &n);
        err |= clSetKernelArg(kernel, 1, sizeof(val), &val);
        err |= clSetKernelArg(kernel, 2, sizeof(*factor), factor);
        if (err) STARPU_OPENCL_REPORT_ERROR(err);
    }

    { /* OpenCL specific code */
        size_t global = n;
        size_t local = 1;
        err = clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, &local, 0, NULL, &event);
        if (err != CL_SUCCESS) STARPU_OPENCL_REPORT_ERROR(err);
    }

    { /* OpenCL specific code */
        clFinish(queue);
        starpu_opencl_collect_stats(event);
        clReleaseEvent(event);

        starpu_opencl_release_kernel(kernel);
    }
}
\endcode
\subsection DefinitionOfTheMainCode Definition of the Main Code

The CPU implementation is the same as in the previous section.

Here is the source of the main application. You can notice that the fields
starpu_codelet::cuda_funcs and starpu_codelet::opencl_funcs are set to
define the pointers to the CUDA and OpenCL implementations of the
task.

\snippet vector_scal_c.c To be included
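
In particular, the codelet in that source combines the three
implementations roughly as follows (a sketch, assuming the kernel
functions shown in the previous subsections):

\code{.c}
struct starpu_codelet cl =
{
    .cpu_funcs = { scal_cpu_func, NULL },
    .cpu_funcs_name = { "scal_cpu_func", NULL },
    .cuda_funcs = { scal_cuda_func, NULL },
    .opencl_funcs = { scal_opencl_func, NULL },
    .nbuffers = 1,
    .modes = { STARPU_RW }
};
\endcode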
\subsection ExecutionOfHybridVectorScaling Execution of Hybrid Vector Scaling

The Makefile given at the beginning of the section must be extended to
give the rules to compile the CUDA source code. Note that the source
file of the OpenCL kernel does not need to be compiled now; it will
be compiled at run-time when calling the function
starpu_opencl_load_opencl_from_file().

\verbatim
CFLAGS  += $(shell pkg-config --cflags starpu-1.2)
LDFLAGS += $(shell pkg-config --libs starpu-1.2)
CC       = gcc

vector_scal: vector_scal.o vector_scal_cpu.o vector_scal_cuda.o vector_scal_opencl.o

%.o: %.cu
	nvcc $(CFLAGS) -c $< -o $@

clean:
	rm -f vector_scal *.o
\endverbatim
\verbatim
$ make
\endverbatim

and to execute it, with the default configuration:

\verbatim
$ ./vector_scal
0.000000 3.000000 6.000000 9.000000 12.000000
\endverbatim

or for example, by disabling CPU devices:

\verbatim
$ STARPU_NCPU=0 ./vector_scal
0.000000 3.000000 6.000000 9.000000 12.000000
\endverbatim

or by disabling CUDA devices (which may enable the use of OpenCL,
see \ref EnablingOpenCL):

\verbatim
$ STARPU_NCUDA=0 ./vector_scal
0.000000 3.000000 6.000000 9.000000 12.000000
\endverbatim

*/