@c -*-texinfo-*-
@c This file is part of the StarPU Handbook.
@c Copyright (C) 2009--2011 Universit@'e de Bordeaux 1
@c Copyright (C) 2010, 2011, 2012 Centre National de la Recherche Scientifique
@c Copyright (C) 2011 Institut National de Recherche en Informatique et Automatique
@c See the file starpu.texi for copying conditions.

@menu
* Using multiple implementations of a codelet::
* Enabling implementation according to capabilities::
* Task and Worker Profiling::
* Partitioning Data::
* Performance model example::
* Theoretical lower bound on execution time::
* Insert Task Utility::
* Data reduction::
* Temporary buffers::
* Parallel Tasks::
* Debugging::
* The multiformat interface::
* On-GPU rendering::
* More examples::        More examples shipped with StarPU
@end menu
@node Using multiple implementations of a codelet
@section Using multiple implementations of a codelet

One may want to write multiple implementations of a codelet for a single type of
device and let StarPU choose which one to run. As an example, we will show how
to use SSE to scale a vector. The codelet can be written as follows:

@cartouche
@smallexample
#include <xmmintrin.h>

void scal_sse_func(void *buffers[], void *cl_arg)
@{
    float *vector = (float *) STARPU_VECTOR_GET_PTR(buffers[0]);
    unsigned int n = STARPU_VECTOR_GET_NX(buffers[0]);
    unsigned int n_iterations = n/4;
    if (n % 4 != 0)
        n_iterations++;

    __m128 *VECTOR = (__m128*) vector;
    __m128 factor __attribute__((aligned(16)));
    factor = _mm_set1_ps(*(float *) cl_arg);

    unsigned int i;
    for (i = 0; i < n_iterations; i++)
        VECTOR[i] = _mm_mul_ps(factor, VECTOR[i]);
@}
@end smallexample
@end cartouche

@cartouche
@smallexample
struct starpu_codelet cl = @{
    .where = STARPU_CPU,
    .cpu_funcs = @{ scal_cpu_func, scal_sse_func, NULL @},
    .nbuffers = 1,
    .modes = @{ STARPU_RW @}
@};
@end smallexample
@end cartouche

Schedulers which are multi-implementation aware (only @code{dmda}, @code{heft}
and @code{pheft} for now) will use the performance models of all the
implementations they were given, and pick the one that seems to be the fastest.
@node Enabling implementation according to capabilities
@section Enabling implementation according to capabilities

Some implementations may not run on some devices. For instance, some CUDA
devices do not support double floating point precision, and thus the kernel
execution would just fail; or the device may not have enough shared memory for
the implementation being used. The @code{can_execute} field of the @code{struct
starpu_codelet} structure makes it possible to express this. For instance:

@cartouche
@smallexample
static int can_execute(unsigned workerid, struct starpu_task *task, unsigned nimpl)
@{
    const struct cudaDeviceProp *props;
    if (starpu_worker_get_type(workerid) == STARPU_CPU_WORKER)
        return 1;
    /* CUDA device */
    props = starpu_cuda_get_device_properties(workerid);
    if (props->major > 1 || (props->major == 1 && props->minor >= 3))
        /* At least compute capability 1.3, supports doubles */
        return 1;
    /* Old card, does not support doubles */
    return 0;
@}

struct starpu_codelet cl = @{
    .where = STARPU_CPU|STARPU_CUDA,
    .can_execute = can_execute,
    .cpu_funcs = @{ cpu_func, NULL @},
    .cuda_funcs = @{ gpu_func, NULL @},
    .nbuffers = 1,
    .modes = @{ STARPU_RW @}
@};
@end smallexample
@end cartouche

This can be essential e.g. when running on a machine which mixes various models
of CUDA devices, to benefit from the new models without crashing on old models.

Note: the @code{can_execute} function is called by the scheduler each time it
tries to match a task with a worker, and should thus be very fast. The
@code{starpu_cuda_get_device_properties} function provides quick access to CUDA
properties of CUDA devices to achieve such efficiency.
Another example is compiling CUDA code for various compute capabilities,
resulting in two CUDA functions, e.g. @code{scal_gpu_13} for compute capability
1.3, and @code{scal_gpu_20} for compute capability 2.0. Both functions can be
provided to StarPU by using @code{cuda_funcs}, and @code{can_execute} can then be
used to rule out the @code{scal_gpu_20} variant on a CUDA device which
will not be able to execute it:

@cartouche
@smallexample
static int can_execute(unsigned workerid, struct starpu_task *task, unsigned nimpl)
@{
    const struct cudaDeviceProp *props;
    if (starpu_worker_get_type(workerid) == STARPU_CPU_WORKER)
        return 1;
    /* CUDA device */
    if (nimpl == 0)
        /* Trying to execute the 1.3 capability variant, we assume it is ok in all cases. */
        return 1;
    /* Trying to execute the 2.0 capability variant, check that the card can do it. */
    props = starpu_cuda_get_device_properties(workerid);
    if (props->major >= 2)
        /* At least compute capability 2.0, can run it */
        return 1;
    /* Old card, does not support 2.0, will not be able to execute the 2.0 variant. */
    return 0;
@}

struct starpu_codelet cl = @{
    .where = STARPU_CPU|STARPU_CUDA,
    .can_execute = can_execute,
    .cpu_funcs = @{ cpu_func, NULL @},
    .cuda_funcs = @{ scal_gpu_13, scal_gpu_20, NULL @},
    .nbuffers = 1,
    .modes = @{ STARPU_RW @}
@};
@end smallexample
@end cartouche

Note: the most generic variant should be provided first, as some schedulers are
not able to try the different variants.
@node Task and Worker Profiling
@section Task and Worker Profiling

A full example showing how to use the profiling API is available in
the StarPU sources in the directory @code{examples/profiling/}.
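
Profiling information is only gathered while profiling is enabled, which can
be done either by setting the @code{STARPU_PROFILING} environment variable to
@code{1}, or programmatically. A minimal sketch of the programmatic way, to be
done after @code{starpu_init}:

@cartouche
@smallexample
/* Enable profiling before submitting the tasks to be measured.
 * This is only a sketch; see examples/profiling/ for the complete code. */
starpu_profiling_status_set(STARPU_PROFILING_ENABLE);
@end smallexample
@end cartouche

Per-task profiling information can then be queried as follows: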
@cartouche
@smallexample
struct starpu_task *task = starpu_task_create();
task->cl = &cl;
task->synchronous = 1;
/* We will destroy the task structure by hand so that we can
 * query the profiling info before the task is destroyed. */
task->destroy = 0;
/* Submit and wait for completion (since synchronous was set to 1) */
starpu_task_submit(task);
/* The task is finished, get profiling information */
struct starpu_task_profiling_info *info = task->profiling_info;
/* How much time did it take before the task started? */
double delay = starpu_timing_timespec_delay_us(&info->submit_time, &info->start_time);
/* How long was the task execution? */
double length = starpu_timing_timespec_delay_us(&info->start_time, &info->end_time);
/* We don't need the task structure anymore */
starpu_task_destroy(task);
@end smallexample
@end cartouche
@cartouche
@smallexample
/* Display the occupancy of all workers during the test */
int worker;
for (worker = 0; worker < starpu_worker_get_count(); worker++)
@{
    struct starpu_worker_profiling_info worker_info;
    int ret = starpu_worker_get_profiling_info(worker, &worker_info);
    STARPU_ASSERT(!ret);

    double total_time = starpu_timing_timespec_to_us(&worker_info.total_time);
    double executing_time = starpu_timing_timespec_to_us(&worker_info.executing_time);
    double sleeping_time = starpu_timing_timespec_to_us(&worker_info.sleeping_time);

    float executing_ratio = 100.0*executing_time/total_time;
    float sleeping_ratio = 100.0*sleeping_time/total_time;

    char workername[128];
    starpu_worker_get_name(worker, workername, 128);
    fprintf(stderr, "Worker %s:\n", workername);
    fprintf(stderr, "\ttotal time: %.2lf ms\n", total_time*1e-3);
    fprintf(stderr, "\texec time: %.2lf ms (%.2f %%)\n", executing_time*1e-3,
            executing_ratio);
    fprintf(stderr, "\tblocked time: %.2lf ms (%.2f %%)\n", sleeping_time*1e-3,
            sleeping_ratio);
@}
@end smallexample
@end cartouche
@node Partitioning Data
@section Partitioning Data

An existing piece of data can be partitioned in sub parts to be used by different tasks, for instance:

@cartouche
@smallexample
int vector[NX];
starpu_data_handle_t handle;

/* Declare data to StarPU */
starpu_vector_data_register(&handle, 0, (uintptr_t)vector, NX, sizeof(vector[0]));

/* Partition the vector in PARTS sub-vectors */
struct starpu_data_filter f =
@{
    .filter_func = starpu_block_filter_func_vector,
    .nchildren = PARTS
@};
starpu_data_partition(handle, &f);
@end smallexample
@end cartouche

The task submission then uses @code{starpu_data_get_sub_data} to retrieve the
sub-handles to be passed as tasks parameters.

@cartouche
@smallexample
/* Submit a task on each sub-vector */
for (i=0; i<starpu_data_get_nb_children(handle); i++) @{
    /* Get subdata number i (there is only 1 dimension) */
    starpu_data_handle_t sub_handle = starpu_data_get_sub_data(handle, 1, i);
    struct starpu_task *task = starpu_task_create();

    task->handles[0] = sub_handle;
    task->cl = &cl;
    task->synchronous = 1;
    task->cl_arg = &factor;
    task->cl_arg_size = sizeof(factor);

    starpu_task_submit(task);
@}
@end smallexample
@end cartouche

Partitioning can be applied several times, see
@code{examples/basic_examples/mult.c} and @code{examples/filters/}.
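
For instance, a matrix can first be split into vertical blocks, and each block
then split horizontally. Here is a hedged sketch of such two-level
partitioning, assuming the matrix block filters and the
@code{starpu_data_map_filters} helper used by the filters examples:

@cartouche
@smallexample
/* Two hypothetical filters: split vertically, then horizontally */
struct starpu_data_filter f_vert = @{
    .filter_func = starpu_vertical_block_filter_func,
    .nchildren = PARTS
@};
struct starpu_data_filter f_horiz = @{
    .filter_func = starpu_block_filter_func,
    .nchildren = PARTS
@};
/* Apply both filters recursively on the matrix handle */
starpu_data_map_filters(handle, 2, &f_vert, &f_horiz);
/* Sub-data are then addressed with one index per filter level */
starpu_data_handle_t sub = starpu_data_get_sub_data(handle, 2, i, j);
@end smallexample
@end cartouche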
Wherever the whole piece of data is already available, the partitioning will
be done in-place, i.e. without allocating new buffers but just using pointers
inside the existing copy. This is particularly important to be aware of when
using OpenCL, where the kernel parameters are not pointers, but handles. The
kernel thus needs to be also passed the offset within the OpenCL buffer:

@cartouche
@smallexample
void opencl_func(void *buffers[], void *cl_arg)
@{
    cl_mem vector = (cl_mem) STARPU_VECTOR_GET_DEV_HANDLE(buffers[0]);
    unsigned offset = STARPU_VECTOR_GET_OFFSET(buffers[0]);

    ...
    clSetKernelArg(kernel, 0, sizeof(vector), &vector);
    clSetKernelArg(kernel, 1, sizeof(offset), &offset);
    ...
@}
@end smallexample
@end cartouche

And the kernel has to shift from the pointer passed by the OpenCL driver:

@cartouche
@smallexample
__kernel void opencl_kernel(__global int *vector, unsigned offset)
@{
    vector = (__global int *)((__global char *)vector + offset);
    ...
@}
@end smallexample
@end cartouche

StarPU provides various interfaces and filters for matrices, vectors, etc.,
but applications can also write their own data interfaces and filters, see
@code{examples/interface} and @code{examples/filters/custom_mf} for an example.
@node Performance model example
@section Performance model example

To achieve good scheduling, StarPU scheduling policies need to be able to
estimate in advance the duration of a task. This is done by giving to codelets
a performance model, by defining a @code{starpu_perfmodel} structure and
providing its address in the @code{model} field of the @code{struct starpu_codelet}
structure. The @code{symbol} and @code{type} fields of @code{starpu_perfmodel}
are mandatory, to give a name to the model, and the type of the model, since
there are several kinds of performance models. For compatibility, make sure to
initialize the whole structure to zero, either by using an explicit
@code{memset}, or by letting the compiler implicitly do it as exemplified below.

@itemize

@item
Measured at runtime (@code{STARPU_HISTORY_BASED} model type). This assumes that for a
given set of data input/output sizes, the performance will always be about the
same. This is very true for regular kernels on GPUs for instance (<0.1% error),
and just a bit less true on CPUs (~=1% error). This also assumes that there are
few different sets of data input/output sizes. StarPU will then keep record of
the average time of previous executions on the various processing units, and use
it as an estimation. History is done per task size, by using a hash of the input
and output sizes as an index.
It will also save it in @code{$HOME/.starpu/sampling/codelets}
for further executions (@code{$USERPROFILE/.starpu/sampling/codelets} on Windows),
and it can be observed by using the
@code{starpu_perfmodel_display} command, or drawn by using
the @code{starpu_perfmodel_plot} tool (@pxref{Performance model calibration}). The
models are indexed by machine name. To
share the models between machines (e.g. for a homogeneous cluster), use
@code{export STARPU_HOSTNAME=some_global_name}. Measurements are only done
when using a task scheduler which makes use of them, such as @code{heft} or
@code{dmda}. Measurements can also be provided explicitly by the application, by
using the @code{starpu_perfmodel_update_history} function.

The following is a small code example.

If e.g. the code is recompiled with other compilation options, or several
variants of the code are used, the symbol string should be changed to reflect
that, in order to recalibrate a new model from zero. The symbol string can even
be constructed dynamically at execution time, as long as this is done before
submitting any task using it.

@cartouche
@smallexample
static struct starpu_perfmodel mult_perf_model = @{
    .type = STARPU_HISTORY_BASED,
    .symbol = "mult_perf_model"
@};

struct starpu_codelet cl = @{
    .where = STARPU_CPU,
    .cpu_funcs = @{ cpu_mult, NULL @},
    .nbuffers = 3,
    .modes = @{ STARPU_R, STARPU_R, STARPU_W @},
    /* for the scheduling policy to be able to use performance models */
    .model = &mult_perf_model
@};
@end smallexample
@end cartouche
@item
Measured at runtime and refined by regression (@code{STARPU_*REGRESSION_BASED}
model type). This still assumes performance regularity, but works
with various data input sizes, by applying regression over observed
execution times. @code{STARPU_REGRESSION_BASED} uses an a*n^b regression
form, while @code{STARPU_NL_REGRESSION_BASED} uses an a*n^b+c form (more precise than
@code{STARPU_REGRESSION_BASED}, but costs a lot more to compute); a declaration
sketch is given after this list.
For instance,
@code{tests/perfmodels/regression_based.c} uses a regression-based performance
model for the @code{memset} operation.

Of course, the application has to issue
tasks with varying size so that the regression can be computed. StarPU will not
trust the regression unless there is at least 10% difference between the minimum
and maximum observed input size. It can be useful to set the
@code{STARPU_CALIBRATE} environment variable to @code{1} and run the application
on varying input sizes, so as to feed the performance model for a variety of
inputs, or to provide the measurements explicitly by using
@code{starpu_perfmodel_update_history}. The @code{starpu_perfmodel_display} and
@code{starpu_perfmodel_plot}
tools can be used to observe how well the performance model is calibrated (@pxref{Performance model calibration}); when
their output looks good, @code{STARPU_CALIBRATE} can be reset to @code{0} to let
StarPU use the resulting performance model without recording new measures. If
the data input sizes vary a lot, it is really important to set
@code{STARPU_CALIBRATE} to @code{0}, otherwise StarPU will continue adding the
measures, and result in a very big performance model, which will take a lot
of time to load and save.

For non-linear regression, since computing it
is quite expensive, it is only done at termination of the application. This
means that the first execution of the application will use only history-based
performance models to perform scheduling, without using regression.

@item
Provided as an estimation from the application itself (@code{STARPU_COMMON} model type and @code{cost_function} field),
see for instance
@code{examples/common/blas_model.h} and @code{examples/common/blas_model.c}.

@item
Provided explicitly by the application (@code{STARPU_PER_ARCH} model type): the
@code{.per_arch[arch][nimpl].cost_function} fields have to be filled with pointers to
functions which return the expected duration of the task in micro-seconds, one
per architecture.

@end itemize
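
As announced above, here is a minimal declaration sketch for a non-linear
regression model; apart from the @code{type} field, it is declared exactly like
the history-based model shown earlier (the symbol name is of course up to the
application):

@cartouche
@smallexample
static struct starpu_perfmodel memset_regression_model = @{
    .type = STARPU_NL_REGRESSION_BASED,
    .symbol = "memset_regression" /* hypothetical symbol name */
@};
@end smallexample
@end cartouche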
For the @code{STARPU_HISTORY_BASED} and @code{STARPU_*REGRESSION_BASED} models,
the total size of task data (both input and output) is used as an index by
default. The @code{size_base} field of @code{struct starpu_perfmodel} however
permits the application to override that, when for instance some of the data
do not matter for task cost (e.g. a mere reference table), or when using sparse
structures (in which case it is the number of non-zeros which matters), or when
there is some hidden parameter such as the number of iterations, etc.
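
Here is a hedged sketch of such an override for a sparse kernel, assuming
@code{size_base} is a function pointer receiving the task and the
implementation number (check the @code{struct starpu_perfmodel} definition of
your StarPU version for the exact prototype); the @code{sparse_args} structure
is hypothetical:

@cartouche
@smallexample
/* Hypothetical argument structure carrying the number of non-zeros */
struct sparse_args @{ size_t nnz; @};

/* Index the model by the number of non-zeros rather than the total data size */
static size_t sparse_size_base(struct starpu_task *task, unsigned nimpl)
@{
    struct sparse_args *args = task->cl_arg;
    return args->nnz;
@}

static struct starpu_perfmodel spmv_model = @{
    .type = STARPU_HISTORY_BASED,
    .symbol = "spmv",            /* hypothetical symbol name */
    .size_base = sparse_size_base
@};
@end smallexample
@end cartouche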
How to use schedulers which can benefit from such performance models is explained
in @ref{Task scheduling policy}.

The same can be done for task power consumption estimation, by setting the
@code{power_model} field the same way as the @code{model} field. Note: for
now, the application has to give the power consumption performance model
a name which is different from the execution time performance model.

The application can request time estimations from the StarPU performance
models by filling a task structure as usual without actually submitting
it. The data handles can be created by calling the data registration functions
(e.g. @code{starpu_vector_data_register}) with a @code{NULL} pointer and the
desired data sizes (and need to be unregistered as usual). The
@code{starpu_task_expected_length} and
@code{starpu_task_expected_power} functions can then be called to get an
estimation of the task duration on a given arch. @code{starpu_task_destroy}
needs to be called to destroy the dummy task afterwards. See
@code{tests/perfmodels/regression_based.c} for an example.
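
A minimal sketch of such a dry-run query, assuming the
@code{starpu_task_expected_length(task, arch, nimpl)} prototype used by
@code{tests/perfmodels/regression_based.c}:

@cartouche
@smallexample
starpu_data_handle_t handle;
/* Register a dummy vector: NULL pointer, only the size matters */
starpu_vector_data_register(&handle, -1, (uintptr_t)NULL, n, sizeof(float));

struct starpu_task *task = starpu_task_create();
task->cl = &cl;
task->handles[0] = handle;

/* Estimated duration (in us) of implementation 0 on a CPU worker */
double expected = starpu_task_expected_length(task, STARPU_CPU_DEFAULT, 0);

starpu_task_destroy(task);
starpu_data_unregister(handle);
@end smallexample
@end cartouche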
@node Theoretical lower bound on execution time
@section Theoretical lower bound on execution time

For kernels with history-based performance models (and provided that they are
completely calibrated), StarPU can very easily provide a theoretical lower
bound for the execution time of a whole set of tasks. See for
instance @code{examples/lu/lu_example.c}: before submitting tasks,
call @code{starpu_bound_start}, and after complete execution, call
@code{starpu_bound_stop}. @code{starpu_bound_print_lp} or
@code{starpu_bound_print_mps} can then be used to output a Linear Programming
problem corresponding to the schedule of your tasks. Run it through
@code{lp_solve} or any other linear programming solver, and that will give you a
lower bound for the total execution time of your tasks. If StarPU was compiled
with the glpk library installed, @code{starpu_bound_compute} can be used to
solve it immediately and get the optimized minimum, in ms. Its @code{integer}
parameter allows deciding whether integer resolution should be computed
and returned too.
The @code{deps} parameter tells StarPU whether to take tasks and implicit data
dependencies into account. It must be understood that the linear programming
problem size is quadratic with the number of tasks and thus the time to solve it
will be very long, it could be minutes for just a few dozen tasks. You should
probably use @code{lp_solve -timeout 1 test.pl -wmps test.mps} to convert the
problem to MPS format and then use a better solver, @code{glpsol} might be
better than @code{lp_solve} for instance (the @code{--pcost} option may be
useful), but sometimes doesn't manage to converge. @code{cbc} might look
slower, but it is parallel. Be sure to try at least all the @code{-B} options
of @code{lp_solve}. For instance, we often just use
@code{lp_solve -cc -B1 -Bb -Bg -Bp -Bf -Br -BG -Bd -Bs -BB -Bo -Bc -Bi}, and
the @code{-gr} option can also be quite useful.

Setting @code{deps} to 0 will only take into account the actual computations
on processing units. It however still properly takes into account the varying
performances of kernels and processing units, which is much more accurate than
just comparing StarPU performances with the fastest of the kernels being used.

The @code{prio} parameter tells StarPU whether to simulate taking into account
the priorities as the StarPU scheduler would, i.e. schedule prioritized
tasks before less prioritized tasks, to check to what extent this results
in a less optimal solution. This increases even more computation time.

Note that for simplicity, all this however doesn't take into account data
transfers, which are assumed to be completely overlapped.
@node Insert Task Utility
@section Insert Task Utility

StarPU provides the wrapper functions @code{starpu_insert_task} and
@code{starpu_insert_task_array} to ease the creation and submission of tasks.

@menu
* With a defined number of handles::
* With an undefined number of handles::
* Task insertion depending on some earlier computation::
@end menu

@node With a defined number of handles
@subsection With a defined number of handles

@deftypefun int starpu_insert_task (struct starpu_codelet *@var{cl}, ...)
Create and submit a task corresponding to @var{cl} with the following
arguments. The argument list must be zero-terminated.

The arguments following the codelet can be of the following types:

@itemize
@item
@code{STARPU_R}, @code{STARPU_W}, @code{STARPU_RW}, @code{STARPU_SCRATCH} or @code{STARPU_REDUX}, an access mode followed by a data handle;
@item
the specific values @code{STARPU_VALUE}, @code{STARPU_CALLBACK},
@code{STARPU_CALLBACK_ARG}, @code{STARPU_CALLBACK_WITH_ARG},
@code{STARPU_PRIORITY}, followed by the appropriate objects as
defined below.
@end itemize

Parameters to be passed to the codelet implementation are defined
through the type @code{STARPU_VALUE}. The function
@code{starpu_codelet_unpack_args} must be called within the codelet
implementation to retrieve them.
@end deftypefun
@defmac STARPU_VALUE
this macro is used when calling @code{starpu_insert_task}, and must be
followed by a pointer to a constant value and the size of the constant
@end defmac

@defmac STARPU_CALLBACK
this macro is used when calling @code{starpu_insert_task}, and must be
followed by a pointer to a callback function
@end defmac

@defmac STARPU_CALLBACK_ARG
this macro is used when calling @code{starpu_insert_task}, and must be
followed by a pointer to be given as an argument to the callback
function
@end defmac

@defmac STARPU_CALLBACK_WITH_ARG
this macro is used when calling @code{starpu_insert_task}, and must be
followed by two pointers: one to a callback function, and the other to
be given as an argument to the callback function; this is equivalent
to using both @code{STARPU_CALLBACK} and
@code{STARPU_CALLBACK_ARG}
@end defmac

@defmac STARPU_PRIORITY
this macro is used when calling @code{starpu_insert_task}, and must be
followed by an integer defining a priority level
@end defmac

@deftypefun void starpu_codelet_pack_args ({char **}@var{arg_buffer}, {size_t *}@var{arg_buffer_size}, ...)
Pack arguments of type @code{STARPU_VALUE} into a buffer which can be
given to a codelet and later unpacked with the function
@code{starpu_codelet_unpack_args} defined below.
@end deftypefun

@deftypefun void starpu_codelet_unpack_args ({void *}@var{cl_arg}, ...)
Retrieve the arguments of type @code{STARPU_VALUE} associated to a
task automatically created using the function
@code{starpu_insert_task} defined above.
@end deftypefun
Here is the implementation of the codelet:

@smallexample
void func_cpu(void *descr[], void *_args)
@{
    int *x0 = (int *)STARPU_VARIABLE_GET_PTR(descr[0]);
    float *x1 = (float *)STARPU_VARIABLE_GET_PTR(descr[1]);
    int ifactor;
    float ffactor;

    starpu_codelet_unpack_args(_args, &ifactor, &ffactor);
    *x0 = *x0 * ifactor;
    *x1 = *x1 * ffactor;
@}

struct starpu_codelet mycodelet = @{
    .where = STARPU_CPU,
    .cpu_funcs = @{ func_cpu, NULL @},
    .nbuffers = 2,
    .modes = @{ STARPU_RW, STARPU_RW @}
@};
@end smallexample

And the call to the @code{starpu_insert_task} wrapper:

@smallexample
starpu_insert_task(&mycodelet,
                   STARPU_VALUE, &ifactor, sizeof(ifactor),
                   STARPU_VALUE, &ffactor, sizeof(ffactor),
                   STARPU_RW, data_handles[0], STARPU_RW, data_handles[1],
                   0);
@end smallexample

The call to @code{starpu_insert_task} is equivalent to the following
code:

@smallexample
struct starpu_task *task = starpu_task_create();
task->cl = &mycodelet;
task->handles[0] = data_handles[0];
task->handles[1] = data_handles[1];
char *arg_buffer;
size_t arg_buffer_size;
starpu_codelet_pack_args(&arg_buffer, &arg_buffer_size,
                         STARPU_VALUE, &ifactor, sizeof(ifactor),
                         STARPU_VALUE, &ffactor, sizeof(ffactor),
                         0);
task->cl_arg = arg_buffer;
task->cl_arg_size = arg_buffer_size;
int ret = starpu_task_submit(task);
@end smallexample
@node With an undefined number of handles
@subsection With an undefined number of handles

@deftypefun int starpu_insert_task_array (struct starpu_codelet *@var{cl}, {starpu_data_handle_t *}@var{handles}, unsigned @var{nb_handles}, ...)
Create and submit a task corresponding to @var{cl} with the given
number @var{nb_handles} of @var{handles}, and the following
arguments. The argument list must be zero-terminated.

The arguments following the codelet can be of the following types:

@itemize
@item
the specific values @code{STARPU_VALUE}, @code{STARPU_CALLBACK},
@code{STARPU_CALLBACK_ARG}, @code{STARPU_CALLBACK_WITH_ARG},
@code{STARPU_PRIORITY}, followed by the appropriate objects as
defined above.
@end itemize

The number of data handles @var{nb_handles} must be equal to the
number of data handles expected by the codelet @var{cl}. The access
modes will be the ones defined by @var{cl}.

Parameters to be passed to the codelet implementation are defined
through the type @code{STARPU_VALUE}. The function
@code{starpu_codelet_unpack_args} must be called within the codelet
implementation to retrieve them.
@end deftypefun

Here is a call to the @code{starpu_insert_task_array} wrapper equivalent
to the example above.

@smallexample
starpu_insert_task_array(&mycodelet,
                         data_handles, 2,
                         STARPU_VALUE, &ifactor, sizeof(ifactor),
                         STARPU_VALUE, &ffactor, sizeof(ffactor),
                         0);
@end smallexample
@node Task insertion depending on some earlier computation
@subsection Task insertion depending on some earlier computation

If some part of the task insertion depends on the value of some computation,
the @code{STARPU_DATA_ACQUIRE_CB} macro can be very convenient. For
instance, assuming that the index variable @code{i} was registered as handle
@code{i_handle}:

@smallexample
/* Compute which portion we will work on, e.g. pivot */
starpu_insert_task(&which_index, STARPU_W, i_handle, 0);

/* And submit the corresponding task */
STARPU_DATA_ACQUIRE_CB(i_handle, STARPU_R,
                       starpu_insert_task(&work, STARPU_RW, A_handle[i], 0));
@end smallexample

The @code{STARPU_DATA_ACQUIRE_CB} macro submits an asynchronous request for
acquiring data @code{i} for the main application, and will execute the code
given as third parameter when it is acquired. In other words, as soon as the
value of @code{i} computed by the @code{which_index} codelet can be read, the
portion of code passed as third parameter of @code{STARPU_DATA_ACQUIRE_CB} will
be executed, and is allowed to read from @code{i} to use it e.g. as an
index. Note that this macro is only available when compiling StarPU with
the compiler @code{gcc}.
@node Data reduction
@section Data reduction

In various cases, some piece of data is used to accumulate intermediate
results. For instance, the dot product of a vector, maximum/minimum finding,
the histogram of a photograph, etc. When these results are produced all over
the machine, it would not be efficient to accumulate them in only one place,
incurring data transmission for each update and contention on concurrent
accesses.

StarPU provides a @code{STARPU_REDUX} mode, which optimizes
that case: it will allocate a buffer on each memory node, and accumulate
intermediate results there. When the data is eventually accessed in the normal
@code{STARPU_R} mode, StarPU will collect the intermediate results in just one
buffer.

For this to work, the user has to call
@code{starpu_data_set_reduction_methods} to declare how to initialize these
buffers, and how to assemble partial results.

For instance, @code{cg} uses that to optimize its dot product: it first defines
the codelets for initialization and reduction:
@smallexample
struct starpu_codelet bzero_variable_cl =
@{
    .cpu_funcs = @{ bzero_variable_cpu, NULL @},
    .cuda_funcs = @{ bzero_variable_cuda, NULL @},
    .nbuffers = 1,
@};

static void accumulate_variable_cpu(void *descr[], void *cl_arg)
@{
    double *v_dst = (double *)STARPU_VARIABLE_GET_PTR(descr[0]);
    double *v_src = (double *)STARPU_VARIABLE_GET_PTR(descr[1]);
    *v_dst = *v_dst + *v_src;
@}

static void accumulate_variable_cuda(void *descr[], void *cl_arg)
@{
    double *v_dst = (double *)STARPU_VARIABLE_GET_PTR(descr[0]);
    double *v_src = (double *)STARPU_VARIABLE_GET_PTR(descr[1]);
    cublasaxpy(1, (double)1.0, v_src, 1, v_dst, 1);
    cudaStreamSynchronize(starpu_cuda_get_local_stream());
@}

struct starpu_codelet accumulate_variable_cl =
@{
    .cpu_funcs = @{ accumulate_variable_cpu, NULL @},
    .cuda_funcs = @{ accumulate_variable_cuda, NULL @},
    .nbuffers = 2,
@};
@end smallexample

and attaches them as reduction methods for its dtq handle:

@smallexample
starpu_data_set_reduction_methods(dtq_handle,
        &accumulate_variable_cl, &bzero_variable_cl);
@end smallexample

and @code{dtq_handle} can now be used in @code{STARPU_REDUX} mode for the dot products
with partitioned vectors:

@smallexample
int dots(starpu_data_handle_t v1, starpu_data_handle_t v2,
         starpu_data_handle_t s, unsigned nblocks)
@{
    unsigned b;
    starpu_insert_task(&bzero_variable_cl, STARPU_W, s, 0);
    for (b = 0; b < nblocks; b++)
        starpu_insert_task(&dot_kernel_cl,
                           STARPU_REDUX, s,
                           STARPU_R, starpu_data_get_sub_data(v1, 1, b),
                           STARPU_R, starpu_data_get_sub_data(v2, 1, b),
                           0);
@}
@end smallexample

The @code{cg} example also uses reduction for the blocked gemv kernel, leading
to yet more relaxed dependencies and more parallelism.
@node Temporary buffers
@section Temporary buffers

There are two kinds of temporary buffers: temporary data which just passes results
from one task to another, and scratch data which is needed only internally by
tasks.

@subsection Temporary data

Data can sometimes be entirely produced by a task, and entirely consumed by
another task, without the need for other parts of the application to access
it. In such a case, registration can be done without prior allocation, by using
the special -1 memory node number, and passing a zero pointer. StarPU will
actually allocate memory only when the task creating the content gets scheduled,
and destroy it on unregistration.

In addition to that, it can be tedious for the application to have to unregister
the data, since it will not use its content anyway. The unregistration can be
done lazily by using the @code{starpu_data_unregister_lazy(handle)} function,
which will record that no more tasks accessing the handle will be submitted, so
that it can be freed as soon as the last task accessing it is over.

The following code illustrates both points: it registers the temporary
data, submits three tasks accessing it, and records the data for automatic
unregistration.

@smallexample
starpu_vector_data_register(&handle, -1, 0, n, sizeof(float));
starpu_insert_task(&produce_data, STARPU_W, handle, 0);
starpu_insert_task(&compute_data, STARPU_RW, handle, 0);
starpu_insert_task(&summarize_data, STARPU_R, handle, STARPU_W, result_handle, 0);
starpu_data_unregister_lazy(handle);
@end smallexample
@subsection Scratch data

Some kernels sometimes need temporary data to achieve the computations, i.e. a
workspace. The application could allocate it at the start of the codelet
function, and free it at the end, but that would be costly. It could also
allocate one buffer per worker (similarly to @ref{Per-worker library
initialization}), but that would make them systematic and permanent. A more
optimized way is to use the @code{STARPU_SCRATCH} data access mode, as
illustrated below, which provides per-worker buffers without content
consistency.

@smallexample
starpu_vector_data_register(&workspace, -1, 0, NX, sizeof(float));
for (i = 0; i < N; i++)
    starpu_insert_task(&compute, STARPU_R, input[i],
                       STARPU_SCRATCH, workspace, STARPU_W, output[i], 0);
@end smallexample

StarPU will make sure that the buffer is allocated before executing the task,
and make this allocation per-worker: for CPU workers, notably, each worker has
its own buffer. This means that each task submitted above will actually have its
own workspace, which will actually be the same for all tasks running one after
the other on the same worker. Also, if for instance GPU memory becomes scarce,
StarPU will notice that it can free such buffers easily, since the content does
not matter.
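
Inside the codelet, the scratch buffer is retrieved like any other buffer. A
minimal sketch of the @code{compute} kernel side (the buffer order follows the
insertion order above; the function name is hypothetical):

@smallexample
void compute_func(void *buffers[], void *cl_arg)
@{
    float *in = (float *)STARPU_VECTOR_GET_PTR(buffers[0]);
    /* The scratch workspace: allocated per-worker, content undefined */
    float *workspace = (float *)STARPU_VECTOR_GET_PTR(buffers[1]);
    float *out = (float *)STARPU_VECTOR_GET_PTR(buffers[2]);
    /* ... use workspace freely as temporary storage ... */
@}
@end smallexample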
@node Parallel Tasks
@section Parallel Tasks

StarPU can leverage existing parallel computation libraries by the means of
parallel tasks. A parallel task is a task which gets worked on by a set of CPUs
(called a parallel or combined worker) at the same time, by using an existing
parallel CPU implementation of the computation to be achieved. This can also be
useful to improve the load balance between slow CPUs and fast GPUs: since CPUs
work collectively on a single task, the completion time of tasks on CPUs becomes
comparable to the completion time on GPUs, thus relieving from granularity
discrepancy concerns. Hwloc support needs to be enabled to get good performance,
otherwise StarPU will not know how to better group cores.

Two modes of execution exist to accommodate existing usages.

@subsection Fork-mode parallel tasks

In the Fork mode, StarPU will call the codelet function on one
of the CPUs of the combined worker. The codelet function can use
@code{starpu_combined_worker_get_size()} to get the number of threads it is
allowed to start to achieve the computation. The CPU binding mask for the whole
set of CPUs is already enforced, so that threads created by the function will
inherit the mask, and thus execute where StarPU expected, the OS being in charge
of choosing how to schedule threads on the corresponding CPUs. The application
can also choose to bind threads by hand, using e.g. @code{sched_getaffinity} to know
the CPU binding mask that StarPU chose.
For instance, using OpenMP (full source is available in
@code{examples/openmp/vector_scal.c}):

@example
void scal_cpu_func(void *buffers[], void *_args)
@{
    unsigned i;
    float *factor = _args;
    struct starpu_vector_interface *vector = buffers[0];
    unsigned n = STARPU_VECTOR_GET_NX(vector);
    float *val = (float *)STARPU_VECTOR_GET_PTR(vector);

#pragma omp parallel for num_threads(starpu_combined_worker_get_size())
    for (i = 0; i < n; i++)
        val[i] *= *factor;
@}

static struct starpu_codelet cl =
@{
    .modes = @{ STARPU_RW @},
    .where = STARPU_CPU,
    .type = STARPU_FORKJOIN,
    .max_parallelism = INT_MAX,
    .cpu_funcs = @{scal_cpu_func, NULL@},
    .nbuffers = 1,
@};
@end example

Other examples include for instance calling a BLAS parallel CPU implementation
(see @code{examples/mult/xgemm.c}).
@subsection SPMD-mode parallel tasks

In the SPMD mode, StarPU will call the codelet function on
each CPU of the combined worker. The codelet function can use
@code{starpu_combined_worker_get_size()} to get the total number of CPUs
involved in the combined worker, and thus the number of calls that are made in
parallel to the function, and @code{starpu_combined_worker_get_rank()} to get
the rank of the current CPU within the combined worker. For instance:

@example
static void func(void *buffers[], void *_args)
@{
    unsigned i;
    float *factor = _args;
    struct starpu_vector_interface *vector = buffers[0];
    unsigned n = STARPU_VECTOR_GET_NX(vector);
    float *val = (float *)STARPU_VECTOR_GET_PTR(vector);

    /* Compute slice to compute */
    unsigned m = starpu_combined_worker_get_size();
    unsigned j = starpu_combined_worker_get_rank();
    unsigned slice = (n+m-1)/m;

    for (i = j * slice; i < (j+1) * slice && i < n; i++)
        val[i] *= *factor;
@}

static struct starpu_codelet cl =
@{
    .modes = @{ STARPU_RW @},
    .where = STARPU_CPU,
    .type = STARPU_SPMD,
    .max_parallelism = INT_MAX,
    .cpu_funcs = @{ func, NULL @},
    .nbuffers = 1,
@};
@end example

Of course, this trivial example will not really benefit from parallel task
execution, and was only meant to be simple to understand. The benefit comes
when the computation to be done is so that threads have to e.g. exchange
intermediate results, or write to the data in a complex but safe way in the same
buffer.
@subsection Parallel tasks performance

To benefit from parallel tasks, a parallel-task-aware StarPU scheduler has to
be used. When exposed to codelets with a Fork or SPMD flag, the @code{pheft}
(parallel-heft) and @code{pgreedy} (parallel greedy) schedulers will indeed also
try to execute tasks with several CPUs. They will automatically try the various
available combined worker sizes and thus be able to avoid choosing a large
combined worker if the codelet does not actually scale so much.

@subsection Combined worker sizes

By default, StarPU creates combined workers according to the architecture
structure as detected by hwloc. It means that for each object of the hwloc
topology (NUMA node, socket, cache, ...) a combined worker will be created. If
some nodes of the hierarchy have a big arity (e.g. many cores in a socket
without a hierarchy of shared caches), StarPU will create combined workers of
intermediate sizes.

The user can give some hints to StarPU about combined worker sizes to favor.
This can be done by using the environment variables @code{STARPU_MIN_WORKERSIZE}
and @code{STARPU_MAX_WORKERSIZE}. When set, they will force StarPU to create the
biggest combined workers possible without overstepping the defined boundaries.
However, StarPU will create the remaining combined workers without abiding by
the rules if not possible.

For example: if the user specifies a minimum and maximum combined worker size
of 3 on a machine containing 8 CPUs, StarPU will create a combined worker of
size 2 beside the combined workers of size 3.
@subsection Concurrent parallel tasks

Unfortunately, many environments and libraries do not support concurrent
calls.

For instance, most OpenMP implementations (including the main ones) do not
support concurrent @code{pragma omp parallel} statements without nesting them in
another @code{pragma omp parallel} statement, but StarPU does not yet support
creating its CPU workers by using such a pragma.

Other parallel libraries are also not safe when being invoked concurrently
from different threads, due to the use of global variables in their sequential
sections for instance.

The solution is then to use only one combined worker at a time. This can be
done by setting @code{single_combined_worker} to 1 in the @code{starpu_conf}
structure, or setting the @code{STARPU_SINGLE_COMBINED_WORKER} environment
variable to 1. StarPU will then run only one parallel task at a time.
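
A minimal sketch of the programmatic variant, using the standard
@code{starpu_conf_init} initializer:

@example
struct starpu_conf conf;
starpu_conf_init(&conf);
/* Run at most one parallel task at a time */
conf.single_combined_worker = 1;
starpu_init(&conf);
@end example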
@node Debugging
@section Debugging

StarPU provides several tools to help debugging applications. Execution traces
can be generated and displayed graphically, see @ref{Generating traces}. Some
gdb helpers are also provided to show the whole StarPU state:

@smallexample
(gdb) source tools/gdbinit
(gdb) help starpu
@end smallexample
@node The multiformat interface
@section The multiformat interface

It may be interesting to represent the same piece of data using two different
data structures: one that would only be used on CPUs, and one that would only
be used on GPUs. This can be done by using the multiformat interface. StarPU
will be able to convert data from one data structure to the other when needed.
Note that the heft scheduler is the only one optimized for this interface. The
user must provide StarPU with conversion codelets:

@cartouche
@smallexample
#define NX 1024
struct point array_of_structs[NX];
starpu_data_handle_t handle;

/*
 * The conversion of a piece of data is itself a task, though it is created,
 * submitted and destroyed by StarPU internals and not by the user. Therefore,
 * we have to define two codelets.
 * Note that for now the conversion from the CPU format to the GPU format has to
 * be executed on the GPU, and the conversion from the GPU to the CPU has to be
 * executed on the CPU.
 */
#ifdef STARPU_USE_OPENCL
void cpu_to_opencl_opencl_func(void *buffers[], void *args);
struct starpu_codelet cpu_to_opencl_cl = @{
    .where = STARPU_OPENCL,
    .opencl_funcs = @{ cpu_to_opencl_opencl_func, NULL @},
    .nbuffers = 1,
    .modes = @{ STARPU_RW @}
@};

void opencl_to_cpu_func(void *buffers[], void *args);
struct starpu_codelet opencl_to_cpu_cl = @{
    .where = STARPU_CPU,
    .cpu_funcs = @{ opencl_to_cpu_func, NULL @},
    .nbuffers = 1,
    .modes = @{ STARPU_RW @}
@};
#endif

struct starpu_multiformat_data_interface_ops format_ops = @{
#ifdef STARPU_USE_OPENCL
    .opencl_elemsize = 2 * sizeof(float),
    .cpu_to_opencl_cl = &cpu_to_opencl_cl,
    .opencl_to_cpu_cl = &opencl_to_cpu_cl,
#endif
    .cpu_elemsize = 2 * sizeof(float),
    ...
@};

starpu_multiformat_data_register(&handle, 0, &array_of_structs, NX, &format_ops);
@end smallexample
@end cartouche
Kernels can be written almost as for any other interface. Note that
@code{STARPU_MULTIFORMAT_GET_CPU_PTR} shall only be used for CPU kernels. CUDA kernels
must use @code{STARPU_MULTIFORMAT_GET_CUDA_PTR}, and OpenCL kernels must use
@code{STARPU_MULTIFORMAT_GET_OPENCL_PTR}. @code{STARPU_MULTIFORMAT_GET_NX} may
be used in any kind of kernel.

@cartouche
@smallexample
static void
multiformat_scal_cpu_func(void *buffers[], void *args)
@{
    struct point *aos;
    unsigned int n;

    aos = STARPU_MULTIFORMAT_GET_CPU_PTR(buffers[0]);
    n = STARPU_MULTIFORMAT_GET_NX(buffers[0]);
    ...
@}

extern "C" void multiformat_scal_cuda_func(void *buffers[], void *_args)
@{
    unsigned int n;
    struct struct_of_arrays *soa;

    soa = (struct struct_of_arrays *) STARPU_MULTIFORMAT_GET_CUDA_PTR(buffers[0]);
    n = STARPU_MULTIFORMAT_GET_NX(buffers[0]);

    ...
@}
@end smallexample
@end cartouche

A full example may be found in @code{examples/basic_examples/multiformat.c}.
@node On-GPU rendering
@section On-GPU rendering

Graphical-oriented applications need to draw the result of their computations,
typically on the very GPU where these happened. Technologies such as OpenGL/CUDA
interoperability allow CUDA to work directly on the OpenGL buffers, making
them thus immediately ready for drawing, by mapping OpenGL buffers, textures or
renderbuffer objects into CUDA. CUDA however imposes some technical
constraints: peer memcpy has to be disabled, and the thread that runs OpenGL has
to be the one that runs CUDA computations for that GPU.

To achieve this with StarPU, pass the @code{--disable-cuda-memcpy-peer} option
to @code{./configure} (TODO: make it dynamic), OpenGL/GLUT has to be initialized
first, and the interoperability mode has to
be enabled by using the @code{cuda_opengl_interoperability} field of the
@code{starpu_conf} structure, and the driver loop has to be run by
the application, by using the @code{not_launched_drivers} field of
@code{starpu_conf} to prevent StarPU from running it in a separate thread, and
by using @code{starpu_driver_run} to run the loop. The @code{gl_interop} and
@code{gl_interop_idle} examples show how it articulates in a simple case, where
rendering is done in task callbacks. The former uses @code{glutMainLoopEvent}
to make GLUT progress from the StarPU driver loop, while the latter uses
@code{glutIdleFunc} to make StarPU progress from the GLUT main loop.

Then, to use an OpenGL buffer as a CUDA data, StarPU simply needs to be given
the CUDA pointer at registration, for instance:

@cartouche
@smallexample
/* Find the first CUDA worker */
for (workerid = 0; workerid < starpu_worker_get_count(); workerid++)
    if (starpu_worker_get_type(workerid) == STARPU_CUDA_WORKER)
        break;

/* Get the CUDA pointer mapped onto the OpenGL buffer */
cudaGraphicsResourceGetMappedPointer((void**)&output, &num_bytes, resource);

/* Register it to StarPU on that worker's memory node */
starpu_vector_data_register(&handle, starpu_worker_get_memory_node(workerid),
                            output, num_bytes / sizeof(float4), sizeof(float4));

starpu_insert_task(&cl, STARPU_RW, handle, 0);
@end smallexample
@end cartouche

and display it e.g. in the callback function.
@node More examples
@section More examples

More examples are available in the StarPU sources in the @code{examples/}
directory. Simple examples include:

@table @asis
@item @code{incrementer/}:
Trivial incrementation test.
@item @code{basic_examples/}:
Simple documented Hello world (as shown in @ref{Hello World}), vector/scalar product (as shown
in @ref{Vector Scaling on an Hybrid CPU/GPU Machine}), matrix
product examples (as shown in @ref{Performance model example}), an example using the blocked matrix data
interface, an example using the variable data interface, and an example
using different formats on CPUs and GPUs.
@item @code{matvecmult/}:
OpenCL example from NVidia, adapted to StarPU.
@item @code{axpy/}:
AXPY CUBLAS operation adapted to StarPU.
@item @code{fortran/}:
Example of Fortran bindings.
@end table

More advanced examples include:

@table @asis
@item @code{filters/}:
Examples using filters, as shown in @ref{Partitioning Data}.
@item @code{lu/}:
LU matrix factorization, see for instance @code{xlu_implicit.c}.
@item @code{cholesky/}:
Cholesky matrix factorization, see for instance @code{cholesky_implicit.c}.
@end table