@c -*-texinfo-*-
@c This file is part of the StarPU Handbook.
@c Copyright (C) 2009--2011 Universit@'e de Bordeaux 1
@c Copyright (C) 2010, 2011, 2012 Centre National de la Recherche Scientifique
@c Copyright (C) 2011 Institut National de Recherche en Informatique et Automatique
@c See the file starpu.texi for copying conditions.

@menu
* Compiling and linking options::
* Hello World::                 Submitting Tasks
* Scaling a Vector::            Manipulating Data
* Vector Scaling on a Hybrid CPU/GPU Machine::  Handling Heterogeneous Architectures
@end menu
@node Compiling and linking options
@section Compiling and linking options

Let's suppose StarPU has been installed in the directory
@code{$STARPU_DIR}. As explained in @ref{Setting flags for compiling and linking applications},
the variable @code{PKG_CONFIG_PATH} needs to be set. It is also
necessary to set the variable @code{LD_LIBRARY_PATH} to locate dynamic
libraries at runtime.

@example
% PKG_CONFIG_PATH=$STARPU_DIR/lib/pkgconfig:$PKG_CONFIG_PATH
% LD_LIBRARY_PATH=$STARPU_DIR/lib:$LD_LIBRARY_PATH
@end example

The Makefile could for instance contain the following lines to define which
options must be given to the compiler and to the linker:

@cartouche
@example
CFLAGS  += $$(pkg-config --cflags libstarpu-1.0)
LDFLAGS += $$(pkg-config --libs libstarpu-1.0)
@end example
@end cartouche

Also pass the @code{--static} option if the application is to be linked statically.
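For instance, a statically linked build might use flags such as the following;
this is a sketch, assuming that @code{--static} is given to @code{pkg-config}
so that private dependencies are also listed, and that @code{-static} is
accepted by the linker:

@cartouche
@example
CFLAGS  += $$(pkg-config --cflags libstarpu-1.0)
LDFLAGS += -static $$(pkg-config --static --libs libstarpu-1.0)
@end example
@end cartouche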
@node Hello World
@section Hello World

@menu
* Required Headers::
* Defining a Codelet::
* Submitting a Task::
* Execution of Hello World::
@end menu

In this section, we show how to implement a simple program that submits a task to StarPU.

@node Required Headers
@subsection Required Headers

The @code{starpu.h} header should be included in any code using StarPU.

@cartouche
@smallexample
#include <starpu.h>
@end smallexample
@end cartouche
@node Defining a Codelet
@subsection Defining a Codelet

@cartouche
@smallexample
struct params @{
    int i;
    float f;
@};

void cpu_func(void *buffers[], void *cl_arg)
@{
    struct params *params = cl_arg;

    printf("Hello world (params = @{%i, %f@} )\n", params->i, params->f);
@}

struct starpu_codelet cl =
@{
    .where = STARPU_CPU,
    .cpu_funcs = @{ cpu_func, NULL @},
    .nbuffers = 0
@};
@end smallexample
@end cartouche
A codelet is a structure that represents a computational kernel. Such a codelet
may contain an implementation of the same kernel on different architectures
(e.g. CUDA, Cell's SPU, x86, ...).

The @code{nbuffers} field specifies the number of data buffers that are
manipulated by the codelet: here the codelet does not access or modify any data
that is controlled by our data management library. Note that the argument
passed to the codelet (the @code{cl_arg} field of the @code{starpu_task}
structure) does not count as a buffer since it is not managed by our data
management library, but just contains trivial parameters.

@c TODO need a crossref to the proper description of "where" see bla for more ...
We create a codelet which may only be executed on the CPUs. The @code{where}
field is a bitmask that defines where the codelet may be executed. Here, the
@code{STARPU_CPU} value means that only CPUs can execute this codelet
(@pxref{Codelets and Tasks} for more details on this field). Note that
the @code{where} field is optional: when unset, its value is
automatically set based on the availability of the different
@code{XXX_funcs} fields.

When a CPU core executes a codelet, it calls the @code{cpu_func} function,
which @emph{must} have the following prototype:

@code{void (*cpu_func)(void *buffers[], void *cl_arg);}

In this example, we can ignore the first argument of this function, which gives a
description of the input and output buffers (e.g. the size and the location of
the matrices), since there is none.
The second argument is a pointer to a buffer passed as an
argument to the codelet by means of the @code{cl_arg} field of the
@code{starpu_task} structure.
@c TODO rewrite so that it is a little clearer ?
Be aware that this may be a pointer to a
@emph{copy} of the actual buffer, and not the pointer given by the programmer:
if the codelet modifies this buffer, there is no guarantee that the initial
buffer will be modified as well. This implies, for instance, that the buffer
cannot be used as a synchronization medium. If synchronization is needed, data
has to be registered to StarPU, see @ref{Scaling a Vector}.
@node Submitting a Task
@subsection Submitting a Task

@cartouche
@smallexample
void callback_func(void *callback_arg)
@{
    printf("Callback function (arg %x)\n", callback_arg);
@}

int main(int argc, char **argv)
@{
    /* @b{initialize StarPU} */
    starpu_init(NULL);

    struct starpu_task *task = starpu_task_create();

    task->cl = &cl; /* @b{Pointer to the codelet defined above} */

    struct params params = @{ 1, 2.0f @};
    task->cl_arg = &params;
    task->cl_arg_size = sizeof(params);

    task->callback_func = callback_func;
    task->callback_arg = (void *) 0x42;

    /* @b{starpu_task_submit will be a blocking call} */
    task->synchronous = 1;

    /* @b{submit the task to StarPU} */
    starpu_task_submit(task);

    /* @b{terminate StarPU} */
    starpu_shutdown();

    return 0;
@}
@end smallexample
@end cartouche
Before submitting any tasks to StarPU, @code{starpu_init} must be called. The
@code{NULL} argument specifies that we use the default configuration. Tasks cannot
be submitted after the termination of StarPU by a call to
@code{starpu_shutdown}.
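As a sketch, the return value of @code{starpu_init} can be checked for
@code{-ENODEV}, which indicates that no worker could be initialized; the exact
error handling below is an illustration, not the only possible scheme:

@cartouche
@smallexample
int ret = starpu_init(NULL);
if (ret == -ENODEV)
@{
    /* @b{no CPU or accelerator worker is available on this machine} */
    fprintf(stderr, "StarPU found no worker to execute tasks\n");
    return 1;
@}
@end smallexample
@end cartouche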
In the example above, a task structure is allocated by a call to
@code{starpu_task_create}. This function only allocates and fills the
corresponding structure with the default settings (@pxref{Codelets and
Tasks, starpu_task_create}), but it does not submit the task to StarPU.

@c not really clear ;)
The @code{cl} field is a pointer to the codelet which the task will
execute: in other words, the codelet structure describes which computational
kernel should be offloaded on the different architectures, and the task
structure is a wrapper containing a codelet and the piece of data on which the
codelet should operate.

The optional @code{cl_arg} field is a pointer to a buffer (of size
@code{cl_arg_size}) with some parameters for the kernel
described by the codelet. For instance, if a codelet implements a computational
kernel that multiplies its input vector by a constant, the constant could be
specified by means of this buffer, instead of registering it as a StarPU
data. It must however be noted that StarPU avoids making copies whenever possible
and rather passes the pointer as such, so the buffer which is pointed to must be
kept allocated until the task terminates, and if several tasks are submitted
with various parameters, each of them must be given a pointer to its own
buffer.
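To illustrate this last point, here is a sketch of submitting two tasks, each
with its own heap-allocated parameter buffer; heap allocation is just one way
of keeping the buffer alive until the task terminates, and freeing it (e.g.
from the callback) is left out for brevity:

@cartouche
@smallexample
int t;
for (t = 0; t < 2; t++)
@{
    struct starpu_task *task = starpu_task_create();

    /* @b{each task gets its own parameter buffer} */
    struct params *p = malloc(sizeof(*p));
    p->i = t;
    p->f = 2.0f * t;

    task->cl = &cl;
    task->cl_arg = p;
    task->cl_arg_size = sizeof(*p);
    starpu_task_submit(task);
@}
@end smallexample
@end cartouche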
Once a task has been executed, an optional callback function is called.
While the computational kernel could be offloaded on various architectures, the
callback function is always executed on a CPU. The @code{callback_arg}
pointer is passed as an argument to the callback. The prototype of a callback
function must be:

@code{void (*callback_function)(void *);}

If the @code{synchronous} field is non-zero, task submission will be
synchronous: the @code{starpu_task_submit} function will not return until the
task has been executed. Note that the @code{starpu_shutdown} method does not
guarantee that asynchronous tasks have been executed before it returns;
@code{starpu_task_wait_for_all} can be used to that effect, or data can be
unregistered (@code{starpu_data_unregister(vector_handle);}), which will
implicitly wait for all the tasks scheduled to work on it, unless explicitly
disabled thanks to @code{starpu_data_set_default_sequential_consistency_flag} or
@code{starpu_data_set_sequential_consistency_flag}.
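For instance, an asynchronous variant of the example above might look like the
following sketch, where tasks are submitted without blocking and a single
barrier waits for all of them before shutting StarPU down:

@cartouche
@smallexample
task->synchronous = 0; /* @b{starpu_task_submit returns immediately} */
starpu_task_submit(task);

/* @b{... submit more asynchronous tasks here ...} */

/* @b{wait until all submitted tasks have been executed} */
starpu_task_wait_for_all();
starpu_shutdown();
@end smallexample
@end cartouche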
@node Execution of Hello World
@subsection Execution of Hello World

@smallexample
% make hello_world
cc $(pkg-config --cflags libstarpu-1.0) $(pkg-config --libs libstarpu-1.0) hello_world.c -o hello_world
% ./hello_world
Hello world (params = @{1, 2.000000@} )
Callback function (arg 42)
@end smallexample
@node Scaling a Vector
@section Manipulating Data: Scaling a Vector

The previous example has shown how to submit tasks. In this section,
we show how StarPU tasks can manipulate data. The full source code for
this example is given in @ref{Full source code for the 'Scaling a Vector' example}.

@menu
* Source code of Vector Scaling::
* Execution of Vector Scaling::
@end menu

@node Source code of Vector Scaling
@subsection Source code of Vector Scaling

Programmers can describe the data layout of their application so that StarPU is
responsible for enforcing data coherency and availability across the machine.
Instead of handling complex (and non-portable) mechanisms to perform data
movements, programmers only declare which piece of data is accessed and/or
modified by a task, and StarPU makes sure that when a computational kernel
starts somewhere (e.g. on a GPU), its data are available locally.

Before submitting those tasks, the programmer first needs to declare the
different pieces of data to StarPU using the @code{starpu_*_data_register}
functions. To ease the development of applications for StarPU, it is possible
to describe multiple types of data layout. A type of data layout is called an
@b{interface}. There are different predefined interfaces available in StarPU:
here we will consider the @b{vector interface}.
The following lines show how to declare an array of @code{NX} elements of type
@code{float} using the vector interface:

@cartouche
@smallexample
float vector[NX];

starpu_data_handle_t vector_handle;
starpu_vector_data_register(&vector_handle, 0, (uintptr_t)vector, NX,
                            sizeof(vector[0]));
@end smallexample
@end cartouche

The first argument, called the @b{data handle}, is an opaque pointer which
designates the array in StarPU. This is also the structure which is used to
describe which data is used by a task. The second argument is the number of the
memory node where the data originally resides: here it is 0 since the @code{vector} array is in
the main memory. Then come the pointer @code{vector} where the data can be found in main memory,
the number of elements in the vector, and the size of each element.
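Other interfaces follow the same registration pattern. As an illustration, a
two-dimensional matrix could be registered with the @b{matrix interface} as
sketched below; the array @code{matrix} and its dimensions @code{NX} and
@code{NY} are hypothetical:

@cartouche
@smallexample
float matrix[NY][NX];
starpu_data_handle_t matrix_handle;

/* @b{ld is the leading dimension: the number of elements between} */
/* @b{two consecutive rows; the rows are contiguous here, so ld == NX} */
starpu_matrix_data_register(&matrix_handle, 0, (uintptr_t)matrix,
                            NX, NX, NY, sizeof(matrix[0][0]));
@end smallexample
@end cartouche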
The following shows how to construct a StarPU task that will manipulate the
vector and a constant factor.

@cartouche
@smallexample
float factor = 3.14;
struct starpu_task *task = starpu_task_create();

task->cl = &cl;                    /* @b{Pointer to the codelet defined below} */
task->handles[0] = vector_handle;  /* @b{First parameter of the codelet} */
task->cl_arg = &factor;
task->cl_arg_size = sizeof(factor);
task->synchronous = 1;

starpu_task_submit(task);
@end smallexample
@end cartouche

Since the factor is a mere constant float value parameter,
it does not need a preliminary registration, and
can just be passed through the @code{cl_arg} pointer like in the previous
example. The vector parameter is described by its handle, stored in the
@code{handles} array of the task, while the @code{modes} array of the codelet
specifies how the kernel will access each piece of data (@code{STARPU_R} for read-only,
@code{STARPU_W} for write-only and @code{STARPU_RW} for read and write access).
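As an illustration of mixed access modes, a hypothetical codelet computing
@code{y = a*x + y} over two registered vectors could declare a read-only first
buffer and a read-write second one; the function @code{axpy_cpu_func} is
assumed, not part of this example:

@cartouche
@smallexample
struct starpu_codelet axpy_cl =
@{
    .where = STARPU_CPU,
    .cpu_funcs = @{ axpy_cpu_func, NULL @},
    .nbuffers = 2,
    /* @b{x is only read, y is read and written} */
    .modes = @{ STARPU_R, STARPU_RW @}
@};
@end smallexample
@end cartouche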
The definition of the vector scaling codelet itself can be written as follows:

@cartouche
@smallexample
void scal_cpu_func(void *buffers[], void *cl_arg)
@{
    unsigned i;
    float *factor = cl_arg;

    /* length of the vector */
    unsigned n = STARPU_VECTOR_GET_NX(buffers[0]);
    /* CPU copy of the vector pointer */
    float *val = (float *)STARPU_VECTOR_GET_PTR(buffers[0]);

    for (i = 0; i < n; i++)
        val[i] *= *factor;
@}

struct starpu_codelet cl = @{
    .where = STARPU_CPU,
    .cpu_funcs = @{ scal_cpu_func, NULL @},
    .nbuffers = 1,
    .modes = @{ STARPU_RW @}
@};
@end smallexample
@end cartouche
The first argument is an array that gives
a description of all the buffers passed in the @code{task->handles} array. The
size of this array is given by the @code{nbuffers} field of the codelet
structure. For the sake of genericity, this array contains pointers to the
different interfaces describing each buffer. In the case of the @b{vector
interface}, the location of the vector (resp. its length) is accessible in the
@code{ptr} (resp. @code{nx}) field of this interface. Since the vector is accessed in a
read-write fashion, any modification will automatically affect future accesses
to this vector made by other tasks.

The second argument of the @code{scal_cpu_func} function contains a pointer to the
parameters of the codelet (given in @code{task->cl_arg}), so that we read the
constant factor from this pointer.
@node Execution of Vector Scaling
@subsection Execution of Vector Scaling

@smallexample
% make vector_scal
cc $(pkg-config --cflags libstarpu-1.0) $(pkg-config --libs libstarpu-1.0) vector_scal.c -o vector_scal
% ./vector_scal
0.000000 3.000000 6.000000 9.000000 12.000000
@end smallexample

@node Vector Scaling on a Hybrid CPU/GPU Machine
@section Vector Scaling on a Hybrid CPU/GPU Machine

Contrary to the previous examples, the task submitted in this example may be
executed not only by the CPUs, but also by a CUDA device.

@menu
* Definition of the CUDA Kernel::
* Definition of the OpenCL Kernel::
* Definition of the Main Code::
* Execution of Hybrid Vector Scaling::
@end menu
@node Definition of the CUDA Kernel
@subsection Definition of the CUDA Kernel

The CUDA implementation can be written as follows. It needs to be compiled with
a CUDA compiler such as nvcc, the NVIDIA CUDA compiler driver. It must be noted
that the vector pointer returned by @code{STARPU_VECTOR_GET_PTR} is here a pointer in GPU
memory, so that it can be passed as such to the @code{vector_mult_cuda} kernel
call.

@cartouche
@smallexample
#include <starpu.h>
#include <starpu_cuda.h>

static __global__ void vector_mult_cuda(float *val, unsigned n,
                                        float factor)
@{
    unsigned i = blockIdx.x*blockDim.x + threadIdx.x;
    if (i < n)
        val[i] *= factor;
@}

extern "C" void scal_cuda_func(void *buffers[], void *_args)
@{
    float *factor = (float *)_args;

    /* length of the vector */
    unsigned n = STARPU_VECTOR_GET_NX(buffers[0]);
    /* CUDA copy of the vector pointer */
    float *val = (float *)STARPU_VECTOR_GET_PTR(buffers[0]);
    unsigned threads_per_block = 64;
    unsigned nblocks = (n + threads_per_block-1) / threads_per_block;

@i{    vector_mult_cuda<<<nblocks,threads_per_block,0,starpu_cuda_get_local_stream()>>>(val, n, *factor);}

@i{    cudaStreamSynchronize(starpu_cuda_get_local_stream());}
@}
@end smallexample
@end cartouche
@node Definition of the OpenCL Kernel
@subsection Definition of the OpenCL Kernel

The OpenCL implementation can be written as follows. StarPU provides
tools to compile an OpenCL kernel stored in a file.

@cartouche
@smallexample
__kernel void vector_mult_opencl(__global float* val, int nx, float factor)
@{
    const int i = get_global_id(0);
    if (i < nx) @{
        val[i] *= factor;
    @}
@}
@end smallexample
@end cartouche

Contrary to the CUDA and CPU versions, @code{STARPU_VECTOR_GET_DEV_HANDLE} has to
be used here: it returns a @code{cl_mem} (which is not a device pointer, but an
OpenCL handle), which can be passed as such to the OpenCL kernel. The difference is
important when using partitioning, see @ref{Partitioning Data}.
@cartouche
@smallexample
#include <starpu.h>
@i{#include <starpu_opencl.h>}

@i{extern struct starpu_opencl_program programs;}

void scal_opencl_func(void *buffers[], void *_args)
@{
    float *factor = _args;
@i{    int id, devid, err;}
@i{    cl_kernel kernel;}
@i{    cl_command_queue queue;}
@i{    cl_event event;}

    /* length of the vector */
    unsigned n = STARPU_VECTOR_GET_NX(buffers[0]);
    /* OpenCL copy of the vector pointer */
    cl_mem val = (cl_mem) STARPU_VECTOR_GET_DEV_HANDLE(buffers[0]);

@i{    id = starpu_worker_get_id();}
@i{    devid = starpu_worker_get_devid(id);}

@i{    err = starpu_opencl_load_kernel(&kernel, &queue, &programs,}
@i{                    "vector_mult_opencl", devid);   /* @b{Name of the codelet defined above} */}
@i{    if (err != CL_SUCCESS) STARPU_OPENCL_REPORT_ERROR(err);}

@i{    err = clSetKernelArg(kernel, 0, sizeof(val), &val);}
@i{    err |= clSetKernelArg(kernel, 1, sizeof(n), &n);}
@i{    err |= clSetKernelArg(kernel, 2, sizeof(*factor), factor);}
@i{    if (err) STARPU_OPENCL_REPORT_ERROR(err);}

@i{    @{}
@i{        size_t global=n;}
@i{        size_t local=1;}
@i{        err = clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, &local, 0, NULL, &event);}
@i{        if (err != CL_SUCCESS) STARPU_OPENCL_REPORT_ERROR(err);}
@i{    @}}

@i{    clFinish(queue);}
@i{    starpu_opencl_collect_stats(event);}
@i{    clReleaseEvent(event);}

@i{    starpu_opencl_release_kernel(kernel);}
@}
@end smallexample
@end cartouche
@node Definition of the Main Code
@subsection Definition of the Main Code

The CPU implementation is the same as in the previous section.

Here is the source of the main application. You can notice the value of the
field @code{where} for the codelet. We specify
@code{STARPU_CPU|STARPU_CUDA|STARPU_OPENCL} to indicate to StarPU that the codelet
can be executed either on a CPU or on a CUDA or an OpenCL device.

@cartouche
@smallexample
#include <starpu.h>

#define NX 2048

extern void scal_cuda_func(void *buffers[], void *_args);
extern void scal_cpu_func(void *buffers[], void *_args);
extern void scal_opencl_func(void *buffers[], void *_args);

/* @b{Definition of the codelet} */
static struct starpu_codelet cl = @{
    /* @b{It can be executed on a CPU, on a CUDA device, or on an OpenCL device} */
    .where = STARPU_CPU|STARPU_CUDA|STARPU_OPENCL,
    .cuda_funcs = @{ scal_cuda_func, NULL @},
    .cpu_funcs = @{ scal_cpu_func, NULL @},
    .opencl_funcs = @{ scal_opencl_func, NULL @},
    .nbuffers = 1,
    .modes = @{ STARPU_RW @}
@};

#ifdef STARPU_USE_OPENCL
/* @b{The compiled version of the OpenCL program} */
struct starpu_opencl_program programs;
#endif

int main(int argc, char **argv)
@{
    float *vector;
    int i, ret;
    float factor = 3.0;
    struct starpu_task *task;
    starpu_data_handle_t vector_handle;

    starpu_init(NULL);  /* @b{Initialising StarPU} */

#ifdef STARPU_USE_OPENCL
    starpu_opencl_load_opencl_from_file(
            "examples/basic_examples/vector_scal_opencl_codelet.cl",
            &programs, NULL);
#endif

    vector = malloc(NX*sizeof(vector[0]));
    assert(vector);
    for (i = 0; i < NX; i++) vector[i] = i;
@end smallexample
@end cartouche
@cartouche
@smallexample
    /* @b{Registering data within StarPU} */
    starpu_vector_data_register(&vector_handle, 0, (uintptr_t)vector,
                                NX, sizeof(vector[0]));

    /* @b{Definition of the task} */
    task = starpu_task_create();
    task->cl = &cl;
    task->handles[0] = vector_handle;
    task->cl_arg = &factor;
    task->cl_arg_size = sizeof(factor);
@end smallexample
@end cartouche

@cartouche
@smallexample
    /* @b{Submitting the task} */
    ret = starpu_task_submit(task);
    if (ret == -ENODEV) @{
        fprintf(stderr, "No worker may execute this task\n");
        return 1;
    @}

@c TODO: Mmm, should rather be an unregistration with an implicit dependency, no?
    /* @b{Waiting for its termination} */
    starpu_task_wait_for_all();

    /* @b{Update the vector in RAM} */
    starpu_data_acquire(vector_handle, STARPU_R);
@end smallexample
@end cartouche

@cartouche
@smallexample
    /* @b{Access the data} */
    for (i = 0; i < NX; i++) @{
        fprintf(stderr, "%f ", vector[i]);
    @}
    fprintf(stderr, "\n");

    /* @b{Release the RAM view of the data before unregistering it and shutting down StarPU} */
    starpu_data_release(vector_handle);
    starpu_data_unregister(vector_handle);
    starpu_shutdown();

    return 0;
@}
@end smallexample
@end cartouche
@node Execution of Hybrid Vector Scaling
@subsection Execution of Hybrid Vector Scaling

The Makefile given at the beginning of the section must be extended to
give the rules to compile the CUDA source code. Note that the source
file of the OpenCL kernel does not need to be compiled now: it will
be compiled at run-time when calling the function
@code{starpu_opencl_load_opencl_from_file()} (@pxref{starpu_opencl_load_opencl_from_file}).

@cartouche
@smallexample
CFLAGS  += $(shell pkg-config --cflags libstarpu-1.0)
LDFLAGS += $(shell pkg-config --libs libstarpu-1.0)
CC       = gcc

vector_scal: vector_scal.o vector_scal_cpu.o vector_scal_cuda.o vector_scal_opencl.o

%.o: %.cu
	nvcc $(CFLAGS) -c $< -o $@

clean:
	rm -f vector_scal *.o
@end smallexample
@end cartouche
@smallexample
% make
@end smallexample

and to execute it, with the default configuration:

@smallexample
% ./vector_scal
0.000000 3.000000 6.000000 9.000000 12.000000
@end smallexample

or for example, by disabling CPU devices:

@smallexample
% STARPU_NCPUS=0 ./vector_scal
0.000000 3.000000 6.000000 9.000000 12.000000
@end smallexample

or by disabling CUDA devices (which may allow the use of OpenCL,
see @ref{Enabling OpenCL}):

@smallexample
% STARPU_NCUDA=0 ./vector_scal
0.000000 3.000000 6.000000 9.000000 12.000000
@end smallexample