@c -*-texinfo-*-
@c This file is part of the StarPU Handbook.
@c Copyright (C) 2009--2011 Universit@'e de Bordeaux 1
@c Copyright (C) 2010, 2011, 2012 Centre National de la Recherche Scientifique
@c Copyright (C) 2011, 2012 Institut National de Recherche en Informatique et Automatique
@c See the file starpu.texi for copying conditions.

@menu
* Compiling and linking options::
* Hello World::                 Submitting Tasks
* Scaling a Vector::            Manipulating Data
* Vector Scaling on an Hybrid CPU/GPU Machine::  Handling Heterogeneous Architectures
@end menu

@node Compiling and linking options
@section Compiling and linking options
Let's suppose StarPU has been installed in the directory
@code{$STARPU_DIR}. As explained in @ref{Setting flags for compiling and
linking applications}, the variable @code{PKG_CONFIG_PATH} needs to be
set. It is also necessary to set the variable @code{LD_LIBRARY_PATH} to
locate dynamic libraries at runtime.

@example
% PKG_CONFIG_PATH=$STARPU_DIR/lib/pkgconfig:$PKG_CONFIG_PATH
% LD_LIBRARY_PATH=$STARPU_DIR/lib:$LD_LIBRARY_PATH
@end example
The Makefile could for instance contain the following lines to define which
options must be given to the compiler and to the linker:

@cartouche
@example
CFLAGS += $$(pkg-config --cflags starpu-1.0)
LDFLAGS += $$(pkg-config --libs starpu-1.0)
@end example
@end cartouche

Also pass the @code{--static} option if the application is to be
linked statically.
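
For instance, a complete minimal Makefile for the @code{hello_world}
example below could look as follows. This is only a sketch; the file and
target names are ours:

@cartouche
@example
CFLAGS += $$(pkg-config --cflags starpu-1.0)
LDFLAGS += $$(pkg-config --libs starpu-1.0)

hello_world: hello_world.c
	$(CC) $(CFLAGS) hello_world.c -o hello_world $(LDFLAGS)
@end example
@end cartouche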
@node Hello World
@section Hello World

This section shows how to implement a simple program that submits a task
to StarPU.

Writing a task is both simpler and less error-prone when using the C
extensions implemented by StarPU's GCC plug-in (@pxref{C Extensions}).
In a nutshell, all it takes is to declare a task, declare and define its
implementations (for CPU, OpenCL, and/or CUDA), and invoke the task like
a regular C function. The example below defines @code{my_task}, which
has a single implementation for CPU:
@example
/* Task declaration.  */
static void my_task (int x) __attribute__ ((task));

/* Declaration of the CPU implementation of `my_task'.  */
static void my_task_cpu (int x)
  __attribute__ ((task_implementation ("cpu", my_task)));

/* Definition of said CPU implementation.  */
static void
my_task_cpu (int x)
@{
  printf ("Hello, world!  With x = %d\n", x);
@}

int
main ()
@{
  /* Initialize StarPU.  */
#pragma starpu initialize

  /* Do an asynchronous call to `my_task'.  */
  my_task (42);

  /* Wait for the call to complete.  */
#pragma starpu wait

  /* Terminate.  */
#pragma starpu shutdown

  return 0;
@}
@end example
@noindent
The code can then be compiled and linked with GCC and the
@code{-fplugin} flag:

@example
$ gcc hello-starpu.c \
    -fplugin=`pkg-config starpu-1.0 --variable=gccplugin` \
    `pkg-config starpu-1.0 --libs`
@end example
As can be seen above, basic use of the C extensions allows programmers to
use StarPU tasks while essentially annotating ``regular'' C code.

The remainder of this section shows how to achieve the same result using
StarPU's standard C API.
@menu
* Required Headers::
* Defining a Codelet::
* Submitting a Task::
* Execution of Hello World::
@end menu

@node Required Headers
@subsection Required Headers

The @code{starpu.h} header should be included in any code using StarPU.

@cartouche
@smallexample
#include <starpu.h>
@end smallexample
@end cartouche
@node Defining a Codelet
@subsection Defining a Codelet

@cartouche
@smallexample
struct params
@{
    int i;
    float f;
@};

void cpu_func(void *buffers[], void *cl_arg)
@{
    struct params *params = cl_arg;

    printf("Hello world (params = @{%i, %f@} )\n", params->i, params->f);
@}

struct starpu_codelet cl =
@{
    .where = STARPU_CPU,
    .cpu_funcs = @{ cpu_func, NULL @},
    .nbuffers = 0
@};
@end smallexample
@end cartouche
A codelet is a structure that represents a computational kernel. Such a codelet
may contain an implementation of the same kernel on different architectures
(e.g. CUDA, Cell's SPU, x86, ...).

The @code{nbuffers} field specifies the number of data buffers that are
manipulated by the codelet: here the codelet does not access or modify any data
that is controlled by our data management library. Note that the argument
passed to the codelet (the @code{cl_arg} field of the @code{starpu_task}
structure) does not count as a buffer since it is not managed by our data
management library, but just contains trivial parameters.

@c TODO need a crossref to the proper description of "where" see bla for more ...
We create a codelet which may only be executed on the CPUs. The @code{where}
field is a bitmask that defines where the codelet may be executed. Here, the
@code{STARPU_CPU} value means that only CPUs can execute this codelet
(@pxref{Codelets and Tasks} for more details on this field). Note that
the @code{where} field is optional: when it is unset, its value is
automatically deduced from the availability of the different
@code{XXX_funcs} fields.
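
For instance, the codelet above could equivalently be written without
the @code{where} field; this sketch relies on the automatic deduction
just described:

@cartouche
@smallexample
/* Equivalent codelet: `where' is omitted and deduced from the
   fact that only `cpu_funcs' is set. */
struct starpu_codelet cl =
@{
    .cpu_funcs = @{ cpu_func, NULL @},
    .nbuffers = 0
@};
@end smallexample
@end cartouche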
When a CPU core executes a codelet, it calls the @code{cpu_func} function,
which @emph{must} have the following prototype:

@code{void (*cpu_func)(void *buffers[], void *cl_arg);}

In this example, we can ignore the first argument of this function which gives a
description of the input and output buffers (e.g. the size and the location of
the matrices) since there are none.
The second argument is a pointer to a buffer passed as an
argument to the codelet by the means of the @code{cl_arg} field of the
@code{starpu_task} structure.

@c TODO rewrite so that it is a little clearer ?
Be aware that this may be a pointer to a
@emph{copy} of the actual buffer, and not the pointer given by the programmer:
if the codelet modifies this buffer, there is no guarantee that the initial
buffer will be modified as well. This for instance implies that the buffer
cannot be used as a synchronization medium. If synchronization is needed, data
has to be registered to StarPU, see @ref{Scaling a Vector}.
@node Submitting a Task
@subsection Submitting a Task

@cartouche
@smallexample
void callback_func(void *callback_arg)
@{
    printf("Callback function (arg %x)\n", callback_arg);
@}

int main(int argc, char **argv)
@{
    /* @b{initialize StarPU} */
    starpu_init(NULL);

    struct starpu_task *task = starpu_task_create();

    task->cl = &cl; /* @b{Pointer to the codelet defined above} */

    struct params params = @{ 1, 2.0f @};
    task->cl_arg = &params;
    task->cl_arg_size = sizeof(params);

    task->callback_func = callback_func;
    task->callback_arg = (void *) 0x42;

    /* @b{starpu_task_submit will be a blocking call} */
    task->synchronous = 1;

    /* @b{submit the task to StarPU} */
    starpu_task_submit(task);

    /* @b{terminate StarPU} */
    starpu_shutdown();

    return 0;
@}
@end smallexample
@end cartouche
Before submitting any tasks to StarPU, @code{starpu_init} must be called. The
@code{NULL} argument specifies that we use the default configuration. Tasks cannot
be submitted after the termination of StarPU by a call to
@code{starpu_shutdown}.
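
A cautious program may also check the value returned by
@code{starpu_init}; @code{-ENODEV} indicates that no worker could be
enabled. A minimal sketch:

@cartouche
@smallexample
int ret = starpu_init(NULL);
if (ret == -ENODEV)
@{
    /* No worker is available: StarPU cannot be used on this machine. */
    fprintf(stderr, "StarPU initialization failed\n");
    return 1;
@}
@end smallexample
@end cartouche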
In the example above, a task structure is allocated by a call to
@code{starpu_task_create}. This function only allocates and fills the
corresponding structure with the default settings (@pxref{Codelets and
Tasks, starpu_task_create}), but it does not submit the task to StarPU.

@c not really clear ;)
The @code{cl} field is a pointer to the codelet which the task will
execute: in other words, the codelet structure describes which computational
kernel should be offloaded on the different architectures, and the task
structure is a wrapper containing a codelet and the piece of data on which the
codelet should operate.

The optional @code{cl_arg} field is a pointer to a buffer (of size
@code{cl_arg_size}) with some parameters for the kernel
described by the codelet. For instance, if a codelet implements a computational
kernel that multiplies its input vector by a constant, the constant could be
specified by the means of this buffer, instead of registering it as a StarPU
data. It must however be noted that StarPU avoids making copies whenever possible
and rather passes the pointer as such, so the buffer which is pointed at must be
kept allocated until the task terminates, and if several tasks are submitted
with various parameters, each of them must be given a pointer to its own
buffer.

Once a task has been executed, an optional callback function is called.
While the computational kernel could be offloaded on various architectures, the
callback function is always executed on a CPU. The @code{callback_arg}
pointer is passed as an argument to the callback. The prototype of a callback
function must be:

@code{void (*callback_function)(void *);}

If the @code{synchronous} field is non-zero, task submission will be
synchronous: the @code{starpu_task_submit} function will not return until the
task has been executed. Note that the @code{starpu_shutdown} method does not
guarantee that asynchronous tasks have been executed before it returns;
@code{starpu_task_wait_for_all} can be used to that effect, or data can be
unregistered (@code{starpu_data_unregister(vector_handle);}), which will
implicitly wait for all the tasks scheduled to work on it, unless explicitly
disabled thanks to @code{starpu_data_set_default_sequential_consistency_flag} or
@code{starpu_data_set_sequential_consistency_flag}.
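
For example, when several asynchronous tasks are submitted with different
parameters, each task may be given its own dynamically allocated copy of
the parameters, released from the callback once the task has completed.
This is only a sketch built on the fields introduced above:

@cartouche
@smallexample
void free_arg_callback(void *arg)
@{
    /* The callback runs after the task has completed: it is now
       safe to release the per-task argument buffer. */
    free(arg);
@}

void submit_several_tasks(void)
@{
    int i;
    for (i = 0; i < 10; i++)
    @{
        struct params *p = malloc(sizeof(*p));
        p->i = i;
        p->f = 2.0f * i;

        struct starpu_task *task = starpu_task_create();
        task->cl = &cl;
        task->cl_arg = p;
        task->cl_arg_size = sizeof(*p);
        task->callback_func = free_arg_callback;
        task->callback_arg = p;
        starpu_task_submit(task);
    @}
    /* Wait for all asynchronous tasks before shutting down. */
    starpu_task_wait_for_all();
@}
@end smallexample
@end cartouche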
@node Execution of Hello World
@subsection Execution of Hello World

@smallexample
% make hello_world
cc $(pkg-config --cflags starpu-1.0)  $(pkg-config --libs starpu-1.0)  hello_world.c  -o hello_world
% ./hello_world
Hello world (params = @{1, 2.000000@} )
Callback function (arg 42)
@end smallexample
@node Scaling a Vector
@section Manipulating Data: Scaling a Vector

The previous example has shown how to submit tasks. In this section,
we show how StarPU tasks can manipulate data. The full source code for
this example is given in @ref{Full source code for the 'Scaling a Vector' example}.

@menu
* Extended C Source Code of Vector Scaling::  StarPU program using C extensions
* Source code of Vector Scaling::             StarPU program written in standard C
* Execution of Vector Scaling::               Running the program
@end menu
@node Extended C Source Code of Vector Scaling
@subsection Extended C Source Code of Vector Scaling

The simplest way to get started writing StarPU programs is using the C
language extensions provided by the GCC plug-in (@pxref{C Extensions}).
These extensions map directly to StarPU's main concepts: tasks, task
implementations for CPU, OpenCL, or CUDA, and registered data buffers.

The example below is a vector-scaling program that multiplies elements
of a vector by a given factor@footnote{The complete example, along with
additional examples, is available in the @file{gcc-plugin/examples}
directory of the StarPU distribution.}. For comparison, the standard C
version that uses StarPU's standard C programming interface is given in
the next section (@pxref{Source code of Vector Scaling, standard C
version of the example}).

First of all, the vector-scaling task and its simple CPU implementation
have to be defined:
@example
/* Declare the `vector_scal' task.  */
static void vector_scal (size_t size, float vector[size],
                         float factor)
  __attribute__ ((task));

/* Declare and define the standard CPU implementation.  */
static void vector_scal_cpu (size_t size, float vector[size],
                             float factor)
  __attribute__ ((task_implementation ("cpu", vector_scal)));

static void
vector_scal_cpu (size_t size, float vector[size], float factor)
@{
  size_t i;
  for (i = 0; i < size; i++)
    vector[i] *= factor;
@}
@end example
Next, the body of the program, which uses the task defined above, can be
implemented:

@example
int
main (void)
@{
#pragma starpu initialize

#define NX     0x100000
#define FACTOR 3.14

  @{
    float vector[NX] __attribute__ ((heap_allocated));

#pragma starpu register vector

    size_t i;
    for (i = 0; i < NX; i++)
      vector[i] = (float) i;

    vector_scal (NX, vector, FACTOR);

#pragma starpu wait
  @} /* VECTOR is automatically freed here.  */

#pragma starpu shutdown

  return EXIT_SUCCESS;
@}
@end example
@noindent
The @code{main} function above does several things:

@itemize
@item
It initializes StarPU. This has to be done explicitly, as it is
undesirable to add implicit initialization code in user code.

@item
It allocates @var{vector} in the heap; it will automatically be freed
when its scope is left. Alternatively, good old @code{malloc} and
@code{free} could have been used, but they are more error-prone and
require more typing (see the sketch after this list).

@item
It @dfn{registers} the memory pointed to by @var{vector}. Eventually,
when OpenCL or CUDA task implementations are added, this will allow
StarPU to transfer that memory region between GPUs and the main memory.
Removing this @code{pragma} is an error.

@item
It invokes the @code{vector_scal} task. The invocation looks the same
as a standard C function call. However, it is an @dfn{asynchronous
invocation}, meaning that the actual call is performed in parallel with
the caller's continuation.

@item
It @dfn{waits} for the termination of the @code{vector_scal}
asynchronous call.

@item
Finally, StarPU is shut down, giving it an opportunity to write
profiling info to a file on disk, for instance (@pxref{Off-line,
off-line performance feedback}).
@end itemize
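
As an illustration of the @code{malloc}-based alternative mentioned in
the list above, the vector could also be allocated and registered by
hand. This is only a sketch, relying on the @code{register} and
@code{unregister} pragmas described in @ref{C Extensions}:

@example
float *vector = malloc (NX * sizeof *vector);

/* Register the NX elements pointed to by VECTOR.  */
#pragma starpu register vector NX

vector_scal (NX, vector, FACTOR);
#pragma starpu wait

/* Unregister the memory region, then free it by hand.  */
#pragma starpu unregister vector
free (vector);
@end example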
The program can be compiled and linked with GCC and the @code{-fplugin}
flag:

@example
$ gcc hello-starpu.c \
    -fplugin=`pkg-config starpu-1.0 --variable=gccplugin` \
    `pkg-config starpu-1.0 --libs`
@end example

And voil@`a!
@unnumberedsubsubsec Adding an OpenCL Task Implementation

Now, this is all fine and great, but you certainly want to take
advantage of these newfangled GPUs that your lab just bought, don't you?

So, let's add an OpenCL implementation of the @code{vector_scal} task.
We assume that the OpenCL kernel is available in a file,
@file{vector_scal_opencl_kernel.cl}, not shown here. The OpenCL task
implementation is similar to that used with the standard C API
(@pxref{Definition of the OpenCL Kernel}). It is declared and defined
in our C file like this:
@example
/* Include StarPU's OpenCL integration.  */
#include <starpu_opencl.h>

/* The OpenCL programs, loaded from `main' (see below).  */
static struct starpu_opencl_program cl_programs;

static void vector_scal_opencl (size_t size, float vector[size],
                                float factor)
  __attribute__ ((task_implementation ("opencl", vector_scal)));

static void
vector_scal_opencl (size_t size, float vector[size], float factor)
@{
  int id, devid, err;
  cl_kernel kernel;
  cl_command_queue queue;
  cl_event event;

  /* VECTOR is a GPU memory pointer, not a main memory pointer.  */
  cl_mem val = (cl_mem) vector;

  id = starpu_worker_get_id ();
  devid = starpu_worker_get_devid (id);

  /* Prepare to invoke the kernel.  In the future, this will be largely
     automated.  */
  err = starpu_opencl_load_kernel (&kernel, &queue, &cl_programs,
                                   "vector_mult_opencl", devid);
  if (err != CL_SUCCESS)
    STARPU_OPENCL_REPORT_ERROR (err);

  err = clSetKernelArg (kernel, 0, sizeof (val), &val);
  err |= clSetKernelArg (kernel, 1, sizeof (size), &size);
  err |= clSetKernelArg (kernel, 2, sizeof (factor), &factor);
  if (err)
    STARPU_OPENCL_REPORT_ERROR (err);

  size_t global = 1, local = 1;
  err = clEnqueueNDRangeKernel (queue, kernel, 1, NULL, &global, &local, 0,
                                NULL, &event);
  if (err != CL_SUCCESS)
    STARPU_OPENCL_REPORT_ERROR (err);

  clFinish (queue);
  starpu_opencl_collect_stats (event);
  clReleaseEvent (event);

  /* Done with KERNEL.  */
  starpu_opencl_release_kernel (kernel);
@}
@end example
@noindent
The OpenCL kernel itself must be loaded from @code{main}, sometime after
the @code{initialize} pragma:

@example
starpu_opencl_load_opencl_from_file ("vector_scal_opencl_kernel.cl",
                                     &cl_programs, "");
@end example

@noindent
And that's it. The @code{vector_scal} task now has an additional
implementation, for OpenCL, which StarPU's scheduler may choose to use
at run-time. Unfortunately, the @code{vector_scal_opencl} above still
has to go through the common OpenCL boilerplate; in the future,
additional extensions will automate most of it.
@unnumberedsubsubsec Adding a CUDA Task Implementation

Adding a CUDA implementation of the task is very similar, except that
the implementation itself is typically written in CUDA, and compiled
with @code{nvcc}. Thus, the C file only needs to contain an external
declaration for the task implementation:

@example
extern void vector_scal_cuda (size_t size, float vector[size],
                              float factor)
  __attribute__ ((task_implementation ("cuda", vector_scal)));
@end example
The actual implementation of the CUDA task goes into a separate
compilation unit, in a @file{.cu} file. It is very close to the
implementation when using StarPU's standard C API (@pxref{Definition of
the CUDA Kernel}).

@example
/* CUDA implementation of the `vector_scal' task, to be compiled
   with `nvcc'.  */

#include <starpu.h>
#include <starpu_cuda.h>
#include <stdlib.h>

static __global__ void
vector_mult_cuda (float *val, unsigned n, float factor)
@{
  unsigned i = blockIdx.x * blockDim.x + threadIdx.x;

  if (i < n)
    val[i] *= factor;
@}

/* Definition of the task implementation declared in the C file.  */
extern "C" void
vector_scal_cuda (size_t size, float vector[], float factor)
@{
  unsigned threads_per_block = 64;
  unsigned nblocks = (size + threads_per_block - 1) / threads_per_block;

  vector_mult_cuda <<< nblocks, threads_per_block, 0,
    starpu_cuda_get_local_stream () >>> (vector, size, factor);

  cudaStreamSynchronize (starpu_cuda_get_local_stream ());
@}
@end example
The complete source code, in the @file{gcc-plugin/examples/vector_scal}
directory of the StarPU distribution, also shows how an SSE-specialized
CPU task implementation can be added.

For more details on the C extensions provided by StarPU's GCC plug-in,
@pxref{C Extensions}.
@node Source code of Vector Scaling
@subsection Source code of Vector Scaling

Programmers can describe the data layout of their application so that StarPU is
responsible for enforcing data coherency and availability across the machine.
Instead of handling complex (and non-portable) mechanisms to perform data
movements, programmers only declare which piece of data is accessed and/or
modified by a task, and StarPU makes sure that when a computational kernel
starts somewhere (e.g. on a GPU), its data are available locally.

Before submitting those tasks, the programmer first needs to declare the
different pieces of data to StarPU using the @code{starpu_*_data_register}
functions. To ease the development of applications for StarPU, it is possible
to describe multiple types of data layout. A type of data layout is called an
@b{interface}. There are different predefined interfaces available in StarPU:
here we will consider the @b{vector interface}.

The following lines show how to declare an array of @code{NX} elements of type
@code{float} using the vector interface:
@cartouche
@smallexample
float vector[NX];

starpu_data_handle_t vector_handle;
starpu_vector_data_register(&vector_handle, 0, (uintptr_t)vector, NX,
                            sizeof(vector[0]));
@end smallexample
@end cartouche
The first argument, called the @b{data handle}, is an opaque pointer which
designates the array in StarPU. This is also the structure which is used to
describe which data is used by a task. The second argument is the node number
where the data originally resides. Here it is 0 since the @code{vector} array is in
the main memory. Then comes the pointer @code{vector} where the data can be found in main memory,
the number of elements in the vector and the size of each element.
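
Other data interfaces follow the same registration pattern. As a sketch,
a dense matrix of @code{NY} rows of @code{NX} contiguous @code{float}
elements could be declared with the @b{matrix interface} (the fourth
argument is the leading dimension, i.e. the number of elements between
two rows, here @code{NX}):

@cartouche
@smallexample
float matrix[NY*NX];
starpu_data_handle_t matrix_handle;
starpu_matrix_data_register(&matrix_handle, 0, (uintptr_t)matrix,
                            NX, NX, NY, sizeof(matrix[0]));
@end smallexample
@end cartouche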
The following shows how to construct a StarPU task that will manipulate the
vector and a constant factor.

@cartouche
@smallexample
float factor = 3.14;
struct starpu_task *task = starpu_task_create();

task->cl = &cl;                    /* @b{Pointer to the codelet defined below} */
task->handles[0] = vector_handle;  /* @b{First parameter of the codelet} */
task->cl_arg = &factor;
task->cl_arg_size = sizeof(factor);
task->synchronous = 1;

starpu_task_submit(task);
@end smallexample
@end cartouche
Since the factor is a mere constant float value parameter,
it does not need a preliminary registration, and
can just be passed through the @code{cl_arg} pointer like in the previous
example. The vector parameter is described by its handle.
There are two fields in each element of the @code{buffers} array.
@code{handle} is the handle of the data, and @code{mode} specifies how the
kernel will access the data (@code{STARPU_R} for read-only, @code{STARPU_W} for
write-only and @code{STARPU_RW} for read and write access).

The definition of the codelet can be written as follows:
@cartouche
@smallexample
void scal_cpu_func(void *buffers[], void *cl_arg)
@{
    unsigned i;
    float *factor = cl_arg;

    /* length of the vector */
    unsigned n = STARPU_VECTOR_GET_NX(buffers[0]);
    /* CPU copy of the vector pointer */
    float *val = (float *)STARPU_VECTOR_GET_PTR(buffers[0]);

    for (i = 0; i < n; i++)
        val[i] *= *factor;
@}

struct starpu_codelet cl = @{
    .where = STARPU_CPU,
    .cpu_funcs = @{ scal_cpu_func, NULL @},
    .nbuffers = 1,
    .modes = @{ STARPU_RW @}
@};
@end smallexample
@end cartouche
The first argument is an array that gives
a description of all the buffers passed in the @code{task->handles} array. The
size of this array is given by the @code{nbuffers} field of the codelet
structure. For the sake of genericity, this array contains pointers to the
different interfaces describing each buffer. In the case of the @b{vector
interface}, the location of the vector (resp. its length) is accessible in the
@code{ptr} (resp. @code{nx}) field of this interface. Since the vector is accessed in a
read-write fashion, any modification will automatically affect future accesses
to this vector made by other tasks.

The second argument of the @code{scal_cpu_func} function contains a pointer to the
parameters of the codelet (given in @code{task->cl_arg}), so that we read the
constant factor from this pointer.
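
Codelets accessing several buffers follow the same pattern. For
instance, here is a sketch (the names are ours) of a kernel that copies
a source vector into a destination vector, reading the first buffer and
writing the second:

@cartouche
@smallexample
void copy_cpu_func(void *buffers[], void *cl_arg)
@{
    unsigned i;
    unsigned n = STARPU_VECTOR_GET_NX(buffers[0]);
    float *src = (float *)STARPU_VECTOR_GET_PTR(buffers[0]);
    float *dst = (float *)STARPU_VECTOR_GET_PTR(buffers[1]);

    for (i = 0; i < n; i++)
        dst[i] = src[i];
@}

struct starpu_codelet copy_cl = @{
    .cpu_funcs = @{ copy_cpu_func, NULL @},
    .nbuffers = 2,
    /* The first buffer is only read, the second only written. */
    .modes = @{ STARPU_R, STARPU_W @}
@};
@end smallexample
@end cartouche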
@node Execution of Vector Scaling
@subsection Execution of Vector Scaling

@smallexample
% make vector_scal
cc $(pkg-config --cflags starpu-1.0)  $(pkg-config --libs starpu-1.0)  vector_scal.c  -o vector_scal
% ./vector_scal
0.000000 3.000000 6.000000 9.000000 12.000000
@end smallexample
@node Vector Scaling on an Hybrid CPU/GPU Machine
@section Vector Scaling on an Hybrid CPU/GPU Machine

Contrary to the previous examples, the task submitted in this example may not
only be executed by the CPUs, but also by a CUDA device.

@menu
* Definition of the CUDA Kernel::
* Definition of the OpenCL Kernel::
* Definition of the Main Code::
* Execution of Hybrid Vector Scaling::
@end menu
@node Definition of the CUDA Kernel
@subsection Definition of the CUDA Kernel

The CUDA implementation can be written as follows. It needs to be compiled with
a CUDA compiler such as nvcc, the NVIDIA CUDA compiler driver. It must be noted
that the vector pointer returned by @code{STARPU_VECTOR_GET_PTR} is here a pointer in GPU
memory, so that it can be passed as such to the @code{vector_mult_cuda} kernel
call.
@cartouche
@smallexample
#include <starpu.h>
#include <starpu_cuda.h>

static __global__ void vector_mult_cuda(float *val, unsigned n,
                                        float factor)
@{
    unsigned i = blockIdx.x*blockDim.x + threadIdx.x;
    if (i < n)
        val[i] *= factor;
@}

extern "C" void scal_cuda_func(void *buffers[], void *_args)
@{
    float *factor = (float *)_args;

    /* length of the vector */
    unsigned n = STARPU_VECTOR_GET_NX(buffers[0]);
    /* CUDA copy of the vector pointer */
    float *val = (float *)STARPU_VECTOR_GET_PTR(buffers[0]);
    unsigned threads_per_block = 64;
    unsigned nblocks = (n + threads_per_block-1) / threads_per_block;

@i{    vector_mult_cuda<<<nblocks,threads_per_block, 0, starpu_cuda_get_local_stream()>>>(val, n, *factor);}

@i{    cudaStreamSynchronize(starpu_cuda_get_local_stream());}
@}
@end smallexample
@end cartouche
@node Definition of the OpenCL Kernel
@subsection Definition of the OpenCL Kernel

The OpenCL implementation can be written as follows. StarPU provides
tools to compile an OpenCL kernel stored in a file.
@cartouche
@smallexample
__kernel void vector_mult_opencl(__global float* val, int nx, float factor)
@{
    const int i = get_global_id(0);
    if (i < nx) @{
        val[i] *= factor;
    @}
@}
@end smallexample
@end cartouche
Contrary to the CUDA and CPU versions, @code{STARPU_VECTOR_GET_DEV_HANDLE} has to
be used here: it returns a @code{cl_mem} (not a device pointer, but an OpenCL
handle), which can be passed as such to the OpenCL kernel. The difference is
important when using partitioning, see @ref{Partitioning Data}.
@cartouche
@smallexample
#include <starpu.h>
@i{#include <starpu_opencl.h>}

@i{extern struct starpu_opencl_program programs;}

void scal_opencl_func(void *buffers[], void *_args)
@{
    float *factor = _args;
@i{    int id, devid, err;}
@i{    cl_kernel kernel;}
@i{    cl_command_queue queue;}
@i{    cl_event event;}

    /* length of the vector */
    unsigned n = STARPU_VECTOR_GET_NX(buffers[0]);
    /* OpenCL copy of the vector pointer */
    cl_mem val = (cl_mem) STARPU_VECTOR_GET_DEV_HANDLE(buffers[0]);

@i{    id = starpu_worker_get_id();}
@i{    devid = starpu_worker_get_devid(id);}

@i{    err = starpu_opencl_load_kernel(&kernel, &queue, &programs,}
@i{                    "vector_mult_opencl", devid);   /* @b{Name of the codelet defined above} */}
@i{    if (err != CL_SUCCESS) STARPU_OPENCL_REPORT_ERROR(err);}

@i{    err = clSetKernelArg(kernel, 0, sizeof(val), &val);}
@i{    err |= clSetKernelArg(kernel, 1, sizeof(n), &n);}
@i{    err |= clSetKernelArg(kernel, 2, sizeof(*factor), factor);}
@i{    if (err) STARPU_OPENCL_REPORT_ERROR(err);}

@i{    @{}
@i{        size_t global=n;}
@i{        size_t local=1;}
@i{        err = clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, &local, 0, NULL, &event);}
@i{        if (err != CL_SUCCESS) STARPU_OPENCL_REPORT_ERROR(err);}
@i{    @}}

@i{    clFinish(queue);}
@i{    starpu_opencl_collect_stats(event);}
@i{    clReleaseEvent(event);}

@i{    starpu_opencl_release_kernel(kernel);}
@}
@end smallexample
@end cartouche
@node Definition of the Main Code
@subsection Definition of the Main Code

The CPU implementation is the same as in the previous section.

Here is the source of the main application. You can notice the value of the
field @code{where} for the codelet. We specify
@code{STARPU_CPU|STARPU_CUDA|STARPU_OPENCL} to indicate to StarPU that the codelet
can be executed either on a CPU or on a CUDA or an OpenCL device.
@cartouche
@smallexample
#include <starpu.h>
#include <stdlib.h>
#include <assert.h>

#define NX 2048

extern void scal_cuda_func(void *buffers[], void *_args);
extern void scal_cpu_func(void *buffers[], void *_args);
extern void scal_opencl_func(void *buffers[], void *_args);

/* @b{Definition of the codelet} */
static struct starpu_codelet cl = @{
    .where = STARPU_CPU|STARPU_CUDA|STARPU_OPENCL, /* @b{It can be executed on a CPU,} */
                                  /* @b{on a CUDA device, or on an OpenCL device} */
    .cuda_funcs = @{ scal_cuda_func, NULL @},
    .cpu_funcs = @{ scal_cpu_func, NULL @},
    .opencl_funcs = @{ scal_opencl_func, NULL @},
    .nbuffers = 1,
    .modes = @{ STARPU_RW @}
@};

#ifdef STARPU_USE_OPENCL
/* @b{The compiled version of the OpenCL program} */
struct starpu_opencl_program programs;
#endif

int main(int argc, char **argv)
@{
    float *vector;
    int i, ret;
    float factor=3.0;
    struct starpu_task *task;
    starpu_data_handle_t vector_handle;

    starpu_init(NULL);                          /* @b{Initialising StarPU} */

#ifdef STARPU_USE_OPENCL
    starpu_opencl_load_opencl_from_file(
            "examples/basic_examples/vector_scal_opencl_codelet.cl",
            &programs, NULL);
#endif

    vector = malloc(NX*sizeof(vector[0]));
    assert(vector);
    for(i=0 ; i<NX ; i++) vector[i] = i;
@end smallexample
@end cartouche
@cartouche
@smallexample
    /* @b{Registering data within StarPU} */
    starpu_vector_data_register(&vector_handle, 0, (uintptr_t)vector,
                                NX, sizeof(vector[0]));

    /* @b{Definition of the task} */
    task = starpu_task_create();
    task->cl = &cl;
    task->handles[0] = vector_handle;
    task->cl_arg = &factor;
    task->cl_arg_size = sizeof(factor);
@end smallexample
@end cartouche
@cartouche
@smallexample
    /* @b{Submitting the task} */
    ret = starpu_task_submit(task);
    if (ret == -ENODEV) @{
        fprintf(stderr, "No worker may execute this task\n");
        return 1;
    @}

@c TODO: Mmm, should rather be an unregistration with an implicit dependency, no?
    /* @b{Waiting for its termination} */
    starpu_task_wait_for_all();

    /* @b{Update the vector in RAM} */
    starpu_data_acquire(vector_handle, STARPU_R);
@end smallexample
@end cartouche
@cartouche
@smallexample
    /* @b{Access the data} */
    for(i=0 ; i<NX; i++) @{
        fprintf(stderr, "%f ", vector[i]);
    @}
    fprintf(stderr, "\n");

    /* @b{Release the RAM view of the data before unregistering it and shutting down StarPU} */
    starpu_data_release(vector_handle);
    starpu_data_unregister(vector_handle);
    starpu_shutdown();

    return 0;
@}
@end smallexample
@end cartouche
@node Execution of Hybrid Vector Scaling
@subsection Execution of Hybrid Vector Scaling

The Makefile given at the beginning of the section must be extended to
give the rules to compile the CUDA source code. Note that the source
file of the OpenCL kernel does not need to be compiled now; it will
be compiled at run-time when calling the function
@code{starpu_opencl_load_opencl_from_file()} (@pxref{starpu_opencl_load_opencl_from_file}).
@cartouche
@smallexample
CFLAGS  += $(shell pkg-config --cflags starpu-1.0)
LDFLAGS += $(shell pkg-config --libs starpu-1.0)
CC       = gcc

vector_scal: vector_scal.o vector_scal_cpu.o vector_scal_cuda.o vector_scal_opencl.o

%.o: %.cu
	nvcc $(CFLAGS) -c $< -o $@@

clean:
	rm -f vector_scal *.o
@end smallexample
@end cartouche
@smallexample
% make
@end smallexample

and to execute it, with the default configuration:

@smallexample
% ./vector_scal
0.000000 3.000000 6.000000 9.000000 12.000000
@end smallexample

or for example, by disabling CPU devices:

@smallexample
% STARPU_NCPUS=0 ./vector_scal
0.000000 3.000000 6.000000 9.000000 12.000000
@end smallexample

or by disabling CUDA devices (which may allow OpenCL to be used instead,
see @ref{Enabling OpenCL}):

@smallexample
% STARPU_NCUDA=0 ./vector_scal
0.000000 3.000000 6.000000 9.000000 12.000000
@end smallexample