@c -*-texinfo-*-
@c This file is part of the StarPU Handbook.
@c Copyright (C) 2009--2011 Universit@'e de Bordeaux 1
@c Copyright (C) 2010, 2011, 2012 Centre National de la Recherche Scientifique
@c Copyright (C) 2011, 2012 Institut National de Recherche en Informatique et Automatique
@c See the file starpu.texi for copying conditions.

@menu
* Compiling and linking options::
* Hello World::                 Submitting Tasks
* Vector Scaling Using the C Extension::
* Vector Scaling Using StarPU's API::
* Vector Scaling on a Hybrid CPU/GPU Machine::  Handling Heterogeneous Architectures
@end menu
@node Compiling and linking options
@section Compiling and linking options

Let's suppose StarPU has been installed in the directory
@code{$STARPU_DIR}. As explained in @ref{Setting flags for compiling and linking applications},
the variable @code{PKG_CONFIG_PATH} needs to be set. It is also
necessary to set the variable @code{LD_LIBRARY_PATH} to locate dynamic
libraries at runtime.

@example
% export PKG_CONFIG_PATH=$STARPU_DIR/lib/pkgconfig:$PKG_CONFIG_PATH
% export LD_LIBRARY_PATH=$STARPU_DIR/lib:$LD_LIBRARY_PATH
@end example

The Makefile could for instance contain the following lines to define which
options must be given to the compiler and to the linker:

@cartouche
@example
CFLAGS += $$(pkg-config --cflags starpu-1.0)
LDFLAGS += $$(pkg-config --libs starpu-1.0)
@end example
@end cartouche

Make sure that @code{pkg-config --libs starpu-1.0} actually produces some output
before going further: @code{PKG_CONFIG_PATH} has to point to the place where
@code{starpu-1.0.pc} was installed during @code{make install}.

Also pass the @code{--static} option if the application is to be linked statically.
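
A quick way to run this check is from the shell; the exact output depends on
where StarPU was installed, but it should at least contain
@code{-lstarpu-1.0}. For instance, with a typical installation prefix, it may
look like:

@example
% pkg-config --libs starpu-1.0
-L/usr/local/lib -lstarpu-1.0
@end example

An empty output or an error message means that @code{starpu-1.0.pc} was not
found through @code{PKG_CONFIG_PATH}.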

@node Hello World
@section Hello World

This section shows how to implement a simple program that submits a task
to StarPU. You can either use the StarPU C extension (@pxref{C
Extensions}) or directly use StarPU's API.

@menu
* Hello World using the C Extension::
* Hello World using StarPU's API::
@end menu

@node Hello World using the C Extension
@subsection Hello World using the C Extension

GCC version 4.5 and later support the StarPU GCC plug-in (@pxref{C
Extensions}). This makes writing a task both simpler and less error-prone.
In a nutshell, all it takes is to declare a task, declare and define its
implementations (for CPU, OpenCL, and/or CUDA), and invoke the task like
a regular C function. The example below defines @code{my_task}, which
has a single implementation for CPU:

@cartouche
@smallexample
#include <stdio.h>

/* Task declaration. */
static void my_task (int x) __attribute__ ((task));

/* Definition of the CPU implementation of `my_task'. */
static void my_task (int x)
@{
  printf ("Hello, world! With x = %d\n", x);
@}

int main ()
@{
  /* Initialize StarPU. */
#pragma starpu initialize

  /* Do an asynchronous call to `my_task'. */
  my_task (42);

  /* Wait for the call to complete. */
#pragma starpu wait

  /* Terminate. */
#pragma starpu shutdown

  return 0;
@}
@end smallexample
@end cartouche

@noindent
The code can then be compiled and linked with GCC and the
@code{-fplugin} flag:

@example
$ gcc hello-starpu.c \
    -fplugin=`pkg-config starpu-1.0 --variable=gccplugin` \
    `pkg-config starpu-1.0 --libs`
@end example

As can be seen above, basic use of the C extensions allows programmers to
use StarPU tasks while essentially annotating ``regular'' C code.

@node Hello World using StarPU's API
@subsection Hello World using StarPU's API

The remainder of this section shows how to achieve the same result using
StarPU's standard C API.

@menu
* Required Headers::
* Defining a Codelet::
* Submitting a Task::
* Execution of Hello World::
@end menu

@node Required Headers
@subsubsection Required Headers

The @code{starpu.h} header should be included in any code using StarPU.

@cartouche
@smallexample
#include <starpu.h>
@end smallexample
@end cartouche

@node Defining a Codelet
@subsubsection Defining a Codelet

@cartouche
@smallexample
struct params @{
  int i;
  float f;
@};

void cpu_func(void *buffers[], void *cl_arg)
@{
  struct params *params = cl_arg;

  printf("Hello world (params = @{%i, %f@} )\n", params->i, params->f);
@}

struct starpu_codelet cl =
@{
  .where = STARPU_CPU,
  .cpu_funcs = @{ cpu_func, NULL @},
  .nbuffers = 0
@};
@end smallexample
@end cartouche

A codelet is a structure that represents a computational kernel. Such a codelet
may contain implementations of the same kernel for different architectures
(e.g. CUDA, Cell's SPU, x86, ...).

The @code{nbuffers} field specifies the number of data buffers that are
manipulated by the codelet: here the codelet does not access or modify any data
that is controlled by our data management library. Note that the argument
passed to the codelet (the @code{cl_arg} field of the @code{starpu_task}
structure) does not count as a buffer since it is not managed by our data
management library, but just contains trivial parameters.

@c TODO need a crossref to the proper description of "where" see bla for more ...
We create a codelet which may only be executed on the CPUs. The @code{where}
field is a bitmask that defines where the codelet may be executed. Here, the
@code{STARPU_CPU} value means that only CPUs can execute this codelet
(@pxref{Codelets and Tasks} for more details on this field). Note that
the @code{where} field is optional: when it is unset, its value is
automatically set based on the availability of the different
@code{XXX_funcs} fields.

When a CPU core executes a codelet, it calls the @code{cpu_func} function,
which @emph{must} have the following prototype:

@code{void (*cpu_func)(void *buffers[], void *cl_arg);}

In this example, we can ignore the first argument of this function, which gives a
description of the input and output buffers (e.g. the size and the location of
the matrices), since there are none.
The second argument is a pointer to a buffer passed as an
argument to the codelet by means of the @code{cl_arg} field of the
@code{starpu_task} structure.

@c TODO rewrite so that it is a little clearer ?
Be aware that this may be a pointer to a
@emph{copy} of the actual buffer, and not the pointer given by the programmer:
if the codelet modifies this buffer, there is no guarantee that the initial
buffer will be modified as well: this for instance implies that the buffer
cannot be used as a synchronization medium. If synchronization is needed, data
has to be registered to StarPU, see @ref{Vector Scaling Using StarPU's API}.

@node Submitting a Task
@subsubsection Submitting a Task

@cartouche
@smallexample
void callback_func(void *callback_arg)
@{
  printf("Callback function (arg %x)\n", callback_arg);
@}

int main(int argc, char **argv)
@{
  /* @b{initialize StarPU} */
  starpu_init(NULL);

  struct starpu_task *task = starpu_task_create();

  task->cl = &cl; /* @b{Pointer to the codelet defined above} */

  struct params params = @{ 1, 2.0f @};
  task->cl_arg = &params;
  task->cl_arg_size = sizeof(params);

  task->callback_func = callback_func;
  task->callback_arg = (void *) 0x42;

  /* @b{starpu_task_submit will be a blocking call} */
  task->synchronous = 1;

  /* @b{submit the task to StarPU} */
  starpu_task_submit(task);

  /* @b{terminate StarPU} */
  starpu_shutdown();

  return 0;
@}
@end smallexample
@end cartouche

Before submitting any tasks to StarPU, @code{starpu_init} must be called. The
@code{NULL} argument specifies that we use the default configuration. Tasks cannot
be submitted after the termination of StarPU by a call to
@code{starpu_shutdown}.

In the example above, a task structure is allocated by a call to
@code{starpu_task_create}. This function only allocates and fills the
corresponding structure with the default settings (@pxref{Codelets and
Tasks, starpu_task_create}), but it does not submit the task to StarPU.

@c not really clear ;)
The @code{cl} field is a pointer to the codelet which the task will
execute: in other words, the codelet structure describes which computational
kernel should be offloaded on the different architectures, and the task
structure is a wrapper containing a codelet and the piece of data on which the
codelet should operate.

The optional @code{cl_arg} field is a pointer to a buffer (of size
@code{cl_arg_size}) with some parameters for the kernel
described by the codelet. For instance, if a codelet implements a computational
kernel that multiplies its input vector by a constant, the constant could be
specified by means of this buffer, instead of registering it as a StarPU
data. It must however be noted that StarPU avoids making copies whenever possible
and rather passes the pointer as such, so the buffer which is pointed to must be
kept allocated until the task terminates, and if several tasks are submitted
with various parameters, each of them must be given a pointer to their own
buffer.
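
As an illustration, here is a minimal sketch of submitting two tasks with
different parameters, reusing the @code{params} structure and the @code{cl}
codelet defined above. Each task gets its own buffer, and both buffers
remain allocated until the tasks have terminated:

@cartouche
@smallexample
/* One parameter buffer per task: a single local variable that is
   overwritten between the two submissions would not be safe. */
static struct params params1 = @{ 1, 2.0f @};
static struct params params2 = @{ 3, 4.0f @};

struct starpu_task *task1 = starpu_task_create();
task1->cl = &cl;
task1->cl_arg = &params1;
task1->cl_arg_size = sizeof(params1);
starpu_task_submit(task1);

struct starpu_task *task2 = starpu_task_create();
task2->cl = &cl;
task2->cl_arg = &params2;
task2->cl_arg_size = sizeof(params2);
starpu_task_submit(task2);
@end smallexample
@end cartouche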

Once a task has been executed, an optional callback function is called.
While the computational kernel could be offloaded on various architectures, the
callback function is always executed on a CPU. The @code{callback_arg}
pointer is passed as an argument to the callback. The prototype of a callback
function must be:

@code{void (*callback_function)(void *);}

If the @code{synchronous} field is non-zero, task submission will be
synchronous: the @code{starpu_task_submit} function will not return until the
task has been executed. Note that the @code{starpu_shutdown} function does not
guarantee that asynchronous tasks have been executed before it returns;
@code{starpu_task_wait_for_all} can be used to that effect, or data can be
unregistered (@code{starpu_data_unregister(vector_handle);}), which will
implicitly wait for all the tasks scheduled to work on it, unless explicitly
disabled thanks to @code{starpu_data_set_default_sequential_consistency_flag} or
@code{starpu_data_set_sequential_consistency_flag}.
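
For instance, an asynchronous variant of the example above could be sketched
as follows: the task is submitted without blocking, and completion is awaited
explicitly before shutting StarPU down.

@cartouche
@smallexample
/* @b{asynchronous submission: starpu_task_submit returns immediately} */
task->synchronous = 0;
starpu_task_submit(task);

/* ... other work can be done here while the task runs ... */

/* @b{wait for all submitted tasks before shutting down} */
starpu_task_wait_for_all();
starpu_shutdown();
@end smallexample
@end cartouche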

@node Execution of Hello World
@subsubsection Execution of Hello World

@smallexample
% make hello_world
cc $(pkg-config --cflags starpu-1.0) $(pkg-config --libs starpu-1.0) hello_world.c -o hello_world
% ./hello_world
Hello world (params = @{1, 2.000000@} )
Callback function (arg 42)
@end smallexample

@node Vector Scaling Using the C Extension
@section Vector Scaling Using the C Extension

The previous example has shown how to submit tasks. In this section,
we show how StarPU tasks can manipulate data. The version of this
example using StarPU's API is given in the next sections.

@menu
* Adding an OpenCL Task Implementation::
* Adding a CUDA Task Implementation::
@end menu

The simplest way to get started writing StarPU programs is using the C
language extensions provided by the GCC plug-in (@pxref{C Extensions}).
These extensions map directly to StarPU's main concepts: tasks, task
implementations for CPU, OpenCL, or CUDA, and registered data buffers.

The example below is a vector-scaling program that multiplies elements
of a vector by a given factor@footnote{The complete example, and
additional examples, are available in the @file{gcc-plugin/examples}
directory of the StarPU distribution.}. For comparison, the standard C
version that uses StarPU's standard C programming interface is given in
the next section (@pxref{Vector Scaling Using StarPU's API, standard C
version of the example}).

First of all, the vector-scaling task and its simple CPU implementation
have to be defined:

@cartouche
@smallexample
/* Declare the `vector_scal' task. */
static void vector_scal (unsigned size, float vector[size],
                         float factor)
  __attribute__ ((task));

/* Define the standard CPU implementation. */
static void
vector_scal (unsigned size, float vector[size], float factor)
@{
  unsigned i;
  for (i = 0; i < size; i++)
    vector[i] *= factor;
@}
@end smallexample
@end cartouche

Next, the body of the program, which uses the task defined above, can be
implemented:

@cartouche
@smallexample
int
main (void)
@{
#pragma starpu initialize

#define NX     0x100000
#define FACTOR 3.14

  @{
    float vector[NX] __attribute__ ((heap_allocated));

#pragma starpu register vector

    size_t i;
    for (i = 0; i < NX; i++)
      vector[i] = (float) i;

    vector_scal (NX, vector, FACTOR);

#pragma starpu wait
  @} /* VECTOR is automatically freed here. */

#pragma starpu shutdown

  return 0;
@}
@end smallexample
@end cartouche

@noindent
The @code{main} function above does several things:

@itemize
@item
It initializes StarPU.

@item
It allocates @var{vector} in the heap; it will automatically be freed
when its scope is left. Alternatively, good old @code{malloc} and
@code{free} could have been used, but they are more error-prone and
require more typing.

@item
It @dfn{registers} the memory pointed to by @var{vector}. Eventually,
when OpenCL or CUDA task implementations are added, this will allow
StarPU to transfer that memory region between GPUs and the main memory.
Removing this @code{pragma} is an error.

@item
It invokes the @code{vector_scal} task. The invocation looks the same
as a standard C function call. However, it is an @dfn{asynchronous
invocation}, meaning that the actual call is performed in parallel with
the caller's continuation.

@item
It @dfn{waits} for the termination of the @code{vector_scal}
asynchronous call.

@item
Finally, StarPU is shut down.
@end itemize

The program can be compiled and linked with GCC and the @code{-fplugin}
flag:

@example
$ gcc hello-starpu.c \
    -fplugin=`pkg-config starpu-1.0 --variable=gccplugin` \
    `pkg-config starpu-1.0 --libs`
@end example

And voil@`a!

@node Adding an OpenCL Task Implementation
@subsection Adding an OpenCL Task Implementation

Now, this is all fine and great, but you certainly want to take
advantage of these newfangled GPUs that your lab just bought, don't you?

So, let's add an OpenCL implementation of the @code{vector_scal} task.
We assume that the OpenCL kernel is available in a file,
@file{vector_scal_opencl_kernel.cl}, not shown here. The OpenCL task
implementation is similar to that used with the standard C API
(@pxref{Definition of the OpenCL Kernel}). It is declared and defined
in our C file like this:

@cartouche
@smallexample
/* Include StarPU's OpenCL integration. */
#include <starpu_opencl.h>

/* The OpenCL programs, loaded from `main' (see below). */
static struct starpu_opencl_program cl_programs;

static void vector_scal_opencl (unsigned size, float vector[size],
                                float factor)
  __attribute__ ((task_implementation ("opencl", vector_scal)));

static void
vector_scal_opencl (unsigned size, float vector[size], float factor)
@{
  int id, devid, err;
  cl_kernel kernel;
  cl_command_queue queue;
  cl_event event;

  /* VECTOR is a GPU memory pointer, not a main memory pointer. */
  cl_mem val = (cl_mem) vector;

  id = starpu_worker_get_id ();
  devid = starpu_worker_get_devid (id);

  /* Prepare to invoke the kernel.  In the future, this will be largely
     automated. */
  err = starpu_opencl_load_kernel (&kernel, &queue, &cl_programs,
                                   "vector_mult_opencl", devid);
  if (err != CL_SUCCESS)
    STARPU_OPENCL_REPORT_ERROR (err);

  err = clSetKernelArg (kernel, 0, sizeof (val), &val);
  err |= clSetKernelArg (kernel, 1, sizeof (size), &size);
  err |= clSetKernelArg (kernel, 2, sizeof (factor), &factor);
  if (err)
    STARPU_OPENCL_REPORT_ERROR (err);

  size_t global = 1, local = 1;
  err = clEnqueueNDRangeKernel (queue, kernel, 1, NULL, &global,
                                &local, 0, NULL, &event);
  if (err != CL_SUCCESS)
    STARPU_OPENCL_REPORT_ERROR (err);

  clFinish (queue);
  starpu_opencl_collect_stats (event);
  clReleaseEvent (event);

  /* Done with KERNEL. */
  starpu_opencl_release_kernel (kernel);
@}
@end smallexample
@end cartouche

@noindent
The OpenCL kernel itself must be loaded from @code{main}, sometime after
the @code{initialize} pragma:

@cartouche
@smallexample
starpu_opencl_load_opencl_from_file ("vector_scal_opencl_kernel.cl",
                                     &cl_programs, "");
@end smallexample
@end cartouche

@noindent
And that's it. The @code{vector_scal} task now has an additional
implementation, for OpenCL, which StarPU's scheduler may choose to use
at run-time. Unfortunately, the @code{vector_scal_opencl} above still
has to go through the common OpenCL boilerplate; in the future,
additional extensions will automate most of it.

@node Adding a CUDA Task Implementation
@subsection Adding a CUDA Task Implementation

Adding a CUDA implementation of the task is very similar, except that
the implementation itself is typically written in CUDA, and compiled
with @code{nvcc}. Thus, the C file only needs to contain an external
declaration for the task implementation:

@cartouche
@smallexample
extern void vector_scal_cuda (unsigned size, float vector[size],
                              float factor)
  __attribute__ ((task_implementation ("cuda", vector_scal)));
@end smallexample
@end cartouche

The actual implementation of the CUDA task goes into a separate
compilation unit, in a @file{.cu} file. It is very close to the
implementation when using StarPU's standard C API (@pxref{Definition of
the CUDA Kernel}).

@cartouche
@smallexample
/* CUDA implementation of the `vector_scal' task, to be compiled
   with `nvcc'. */

#include <starpu.h>
#include <starpu_cuda.h>
#include <stdlib.h>

static __global__ void
vector_mult_cuda (float *val, unsigned n, float factor)
@{
  unsigned i = blockIdx.x * blockDim.x + threadIdx.x;

  if (i < n)
    val[i] *= factor;
@}

/* Definition of the task implementation declared in the C file. */
extern "C" void
vector_scal_cuda (unsigned size, float vector[], float factor)
@{
  unsigned threads_per_block = 64;
  unsigned nblocks = (size + threads_per_block - 1) / threads_per_block;

  vector_mult_cuda <<< nblocks, threads_per_block, 0,
                       starpu_cuda_get_local_stream () >>> (vector, size, factor);

  cudaStreamSynchronize (starpu_cuda_get_local_stream ());
@}
@end smallexample
@end cartouche

The complete source code, in the @file{gcc-plugin/examples/vector_scal}
directory of the StarPU distribution, also shows how an SSE-specialized
CPU task implementation can be added.

For more details on the C extensions provided by StarPU's GCC plug-in,
see @ref{C Extensions}.

@node Vector Scaling Using StarPU's API
@section Vector Scaling Using StarPU's API

This section shows how to achieve the same result as explained in the
previous section using StarPU's standard C API.
The full source code for
this example is given in @ref{Full source code for the 'Scaling a
Vector' example}.

@menu
* Source Code of Vector Scaling::
* Execution of Vector Scaling::  Running the program
@end menu

@node Source Code of Vector Scaling
@subsection Source Code of Vector Scaling

Programmers can describe the data layout of their application so that StarPU is
responsible for enforcing data coherency and availability across the machine.
Instead of handling complex (and non-portable) mechanisms to perform data
movements, programmers only declare which piece of data is accessed and/or
modified by a task, and StarPU makes sure that when a computational kernel
starts somewhere (e.g. on a GPU), its data are available locally.

Before submitting those tasks, the programmer first needs to declare the
different pieces of data to StarPU using the @code{starpu_*_data_register}
functions. To ease the development of applications for StarPU, it is possible
to describe multiple types of data layout. A type of data layout is called an
@b{interface}. There are different predefined interfaces available in StarPU:
here we will consider the @b{vector interface}.

The following lines show how to declare an array of @code{NX} elements of type
@code{float} using the vector interface:

@cartouche
@smallexample
float vector[NX];

starpu_data_handle_t vector_handle;
starpu_vector_data_register(&vector_handle, 0, (uintptr_t)vector, NX,
                            sizeof(vector[0]));
@end smallexample
@end cartouche

The first argument, called the @b{data handle}, is an opaque pointer which
designates the array in StarPU. This is also the structure which is used to
describe which data is used by a task. The second argument is the node number
where the data originally resides. Here it is 0 since the @code{vector} array is in
main memory. Then comes the pointer @code{vector} where the data can be found in main memory,
the number of elements in the vector and the size of each element.

The following shows how to construct a StarPU task that will manipulate the
vector and a constant factor.

@cartouche
@smallexample
float factor = 3.14;
struct starpu_task *task = starpu_task_create();

task->cl = &cl;                    /* @b{Pointer to the codelet defined below} */
task->handles[0] = vector_handle;  /* @b{First parameter of the codelet} */
task->cl_arg = &factor;
task->cl_arg_size = sizeof(factor);
task->synchronous = 1;

starpu_task_submit(task);
@end smallexample
@end cartouche

Since the factor is a mere constant float value parameter,
it does not need a preliminary registration, and
can just be passed through the @code{cl_arg} pointer like in the previous
example. The vector parameter is described by its handle.
There are two fields in each element of the @code{buffers} array.
@code{handle} is the handle of the data, and @code{mode} specifies how the
kernel will access the data (@code{STARPU_R} for read-only, @code{STARPU_W} for
write-only and @code{STARPU_RW} for read and write access).

The definition of the codelet can be written as follows:

@cartouche
@smallexample
void scal_cpu_func(void *buffers[], void *cl_arg)
@{
  unsigned i;
  float *factor = cl_arg;

  /* length of the vector */
  unsigned n = STARPU_VECTOR_GET_NX(buffers[0]);
  /* CPU copy of the vector pointer */
  float *val = (float *)STARPU_VECTOR_GET_PTR(buffers[0]);

  for (i = 0; i < n; i++)
    val[i] *= *factor;
@}

struct starpu_codelet cl = @{
  .where = STARPU_CPU,
  .cpu_funcs = @{ scal_cpu_func, NULL @},
  .nbuffers = 1,
  .modes = @{ STARPU_RW @}
@};
@end smallexample
@end cartouche

The first argument is an array that gives
a description of all the buffers passed in the @code{task->handles} array. The
size of this array is given by the @code{nbuffers} field of the codelet
structure. For the sake of genericity, this array contains pointers to the
different interfaces describing each buffer. In the case of the @b{vector
interface}, the location of the vector (resp. its length) is accessible in the
@code{ptr} (resp. @code{nx}) field of the interface. Since the vector is accessed in a
read-write fashion, any modification will automatically affect future accesses
to this vector made by other tasks.

The second argument of the @code{scal_cpu_func} function contains a pointer to the
parameters of the codelet (given in @code{task->cl_arg}), so that we read the
constant factor from this pointer.

@node Execution of Vector Scaling
@subsection Execution of Vector Scaling

@smallexample
% make vector_scal
cc $(pkg-config --cflags starpu-1.0) $(pkg-config --libs starpu-1.0) vector_scal.c -o vector_scal
% ./vector_scal
0.000000 3.000000 6.000000 9.000000 12.000000
@end smallexample

@node Vector Scaling on a Hybrid CPU/GPU Machine
@section Vector Scaling on a Hybrid CPU/GPU Machine

Contrary to the previous examples, the task submitted in this example may not
only be executed by the CPUs, but also by CUDA or OpenCL devices.

@menu
* Definition of the CUDA Kernel::
* Definition of the OpenCL Kernel::
* Definition of the Main Code::
* Execution of Hybrid Vector Scaling::
@end menu

@node Definition of the CUDA Kernel
@subsection Definition of the CUDA Kernel

The CUDA implementation can be written as follows. It needs to be compiled with
a CUDA compiler such as nvcc, the NVIDIA CUDA compiler driver. It must be noted
that the vector pointer returned by @code{STARPU_VECTOR_GET_PTR} is here a pointer in GPU
memory, so that it can be passed as such to the @code{vector_mult_cuda} kernel
call.

@cartouche
@smallexample
#include <starpu.h>
#include <starpu_cuda.h>

static __global__ void vector_mult_cuda(float *val, unsigned n,
                                        float factor)
@{
  unsigned i = blockIdx.x*blockDim.x + threadIdx.x;
  if (i < n)
    val[i] *= factor;
@}

extern "C" void scal_cuda_func(void *buffers[], void *_args)
@{
  float *factor = (float *)_args;

  /* length of the vector */
  unsigned n = STARPU_VECTOR_GET_NX(buffers[0]);
  /* CUDA copy of the vector pointer */
  float *val = (float *)STARPU_VECTOR_GET_PTR(buffers[0]);
  unsigned threads_per_block = 64;
  unsigned nblocks = (n + threads_per_block-1) / threads_per_block;

@i{  vector_mult_cuda<<<nblocks,threads_per_block, 0, starpu_cuda_get_local_stream()>>>(val, n, *factor);}

@i{  cudaStreamSynchronize(starpu_cuda_get_local_stream());}
@}
@end smallexample
@end cartouche

@node Definition of the OpenCL Kernel
@subsection Definition of the OpenCL Kernel

The OpenCL implementation can be written as follows. StarPU provides
tools to compile an OpenCL kernel stored in a file.

@cartouche
@smallexample
__kernel void vector_mult_opencl(__global float* val, int nx, float factor)
@{
  const int i = get_global_id(0);
  if (i < nx) @{
    val[i] *= factor;
  @}
@}
@end smallexample
@end cartouche

Contrary to the CUDA and CPU implementations, @code{STARPU_VECTOR_GET_DEV_HANDLE} has to be used here,
which returns a @code{cl_mem} (which is not a device pointer, but an OpenCL
handle), which can be passed as such to the OpenCL kernel. The difference is
important when using partitioning, see @ref{Partitioning Data}.

@cartouche
@smallexample
#include <starpu.h>
@i{#include <starpu_opencl.h>}

@i{extern struct starpu_opencl_program programs;}

void scal_opencl_func(void *buffers[], void *_args)
@{
  float *factor = _args;
@i{  int id, devid, err;}
@i{  cl_kernel kernel;}
@i{  cl_command_queue queue;}
@i{  cl_event event;}

  /* length of the vector */
  unsigned n = STARPU_VECTOR_GET_NX(buffers[0]);
  /* OpenCL copy of the vector pointer */
  cl_mem val = (cl_mem) STARPU_VECTOR_GET_DEV_HANDLE(buffers[0]);

@i{  id = starpu_worker_get_id();}
@i{  devid = starpu_worker_get_devid(id);}

@i{  err = starpu_opencl_load_kernel(&kernel, &queue, &programs,}
@i{                  "vector_mult_opencl", devid);   /* @b{Name of the codelet defined above} */}
@i{  if (err != CL_SUCCESS) STARPU_OPENCL_REPORT_ERROR(err);}

@i{  err = clSetKernelArg(kernel, 0, sizeof(val), &val);}
@i{  err |= clSetKernelArg(kernel, 1, sizeof(n), &n);}
@i{  err |= clSetKernelArg(kernel, 2, sizeof(*factor), factor);}
@i{  if (err) STARPU_OPENCL_REPORT_ERROR(err);}

@i{  @{}
@i{    size_t global=n;}
@i{    size_t local=1;}
@i{    err = clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, &local, 0, NULL, &event);}
@i{    if (err != CL_SUCCESS) STARPU_OPENCL_REPORT_ERROR(err);}
@i{  @}}

@i{  clFinish(queue);}
@i{  starpu_opencl_collect_stats(event);}
@i{  clReleaseEvent(event);}

@i{  starpu_opencl_release_kernel(kernel);}
@}
@end smallexample
@end cartouche

@node Definition of the Main Code
@subsection Definition of the Main Code

The CPU implementation is the same as in the previous section.

Here is the source of the main application. You can notice the value of the
field @code{where} for the codelet. We specify
@code{STARPU_CPU|STARPU_CUDA|STARPU_OPENCL} to indicate to StarPU that the codelet
can be executed either on a CPU or on a CUDA or an OpenCL device.

@cartouche
@smallexample
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <starpu.h>

#define NX 2048

extern void scal_cuda_func(void *buffers[], void *_args);
extern void scal_cpu_func(void *buffers[], void *_args);
extern void scal_opencl_func(void *buffers[], void *_args);

/* @b{Definition of the codelet} */
static struct starpu_codelet cl = @{
  .where = STARPU_CPU|STARPU_CUDA|STARPU_OPENCL, /* @b{It can be executed on a CPU,} */
                                  /* @b{on a CUDA device, or on an OpenCL device} */
  .cuda_funcs = @{ scal_cuda_func, NULL @},
  .cpu_funcs = @{ scal_cpu_func, NULL @},
  .opencl_funcs = @{ scal_opencl_func, NULL @},
  .nbuffers = 1,
  .modes = @{ STARPU_RW @}
@};

#ifdef STARPU_USE_OPENCL
/* @b{The compiled version of the OpenCL program} */
struct starpu_opencl_program programs;
#endif

int main(int argc, char **argv)
@{
  float *vector;
  int i, ret;
  float factor=3.0;
  struct starpu_task *task;
  starpu_data_handle_t vector_handle;

  starpu_init(NULL);                           /* @b{Initialising StarPU} */

#ifdef STARPU_USE_OPENCL
  starpu_opencl_load_opencl_from_file(
          "examples/basic_examples/vector_scal_opencl_codelet.cl",
          &programs, NULL);
#endif

  vector = malloc(NX*sizeof(vector[0]));
  assert(vector);
  for(i=0 ; i<NX ; i++) vector[i] = i;
@end smallexample
@end cartouche

@cartouche
@smallexample
  /* @b{Registering data within StarPU} */
  starpu_vector_data_register(&vector_handle, 0, (uintptr_t)vector,
                              NX, sizeof(vector[0]));

  /* @b{Definition of the task} */
  task = starpu_task_create();
  task->cl = &cl;
  task->handles[0] = vector_handle;
  task->cl_arg = &factor;
  task->cl_arg_size = sizeof(factor);
@end smallexample
@end cartouche

@cartouche
@smallexample
  /* @b{Submitting the task} */
  ret = starpu_task_submit(task);
  if (ret == -ENODEV) @{
    fprintf(stderr, "No worker may execute this task\n");
    return 1;
  @}

@c TODO: Mmm, should rather be an unregistration with an implicit dependency, no?
  /* @b{Waiting for its termination} */
  starpu_task_wait_for_all();

  /* @b{Update the vector in RAM} */
  starpu_data_acquire(vector_handle, STARPU_R);
@end smallexample
@end cartouche

@cartouche
@smallexample
  /* @b{Access the data} */
  for(i=0 ; i<NX; i++) @{
    fprintf(stderr, "%f ", vector[i]);
  @}
  fprintf(stderr, "\n");

  /* @b{Release the RAM view of the data before unregistering it and shutting down StarPU} */
  starpu_data_release(vector_handle);
  starpu_data_unregister(vector_handle);
  starpu_shutdown();

  return 0;
@}
@end smallexample
@end cartouche

@node Execution of Hybrid Vector Scaling
@subsection Execution of Hybrid Vector Scaling

The Makefile given at the beginning of the section must be extended to
give the rules to compile the CUDA source code. Note that the source
file of the OpenCL kernel does not need to be compiled now, it will
be compiled at run-time when calling the function
@code{starpu_opencl_load_opencl_from_file()} (@pxref{starpu_opencl_load_opencl_from_file}).

@cartouche
@smallexample
CFLAGS  += $(shell pkg-config --cflags starpu-1.0)
LDFLAGS += $(shell pkg-config --libs starpu-1.0)
CC       = gcc

vector_scal: vector_scal.o vector_scal_cpu.o vector_scal_cuda.o vector_scal_opencl.o

%.o: %.cu
	nvcc $(CFLAGS) $< -c -o $@

clean:
	rm -f vector_scal *.o
@end smallexample
@end cartouche

@smallexample
% make
@end smallexample

and to execute it, with the default configuration:

@smallexample
% ./vector_scal
0.000000 3.000000 6.000000 9.000000 12.000000
@end smallexample

or for example, by disabling CPU devices:

@smallexample
% STARPU_NCPUS=0 ./vector_scal
0.000000 3.000000 6.000000 9.000000 12.000000
@end smallexample

or by disabling CUDA devices (which may allow the use of OpenCL,
see @ref{Enabling OpenCL}):

@smallexample
% STARPU_NCUDA=0 ./vector_scal
0.000000 3.000000 6.000000 9.000000 12.000000
@end smallexample