
/* StarPU --- Runtime system for heterogeneous multicore architectures.
 *
 * Copyright (C) 2019-2020 Université de Bordeaux, CNRS (LaBRI UMR 5800), Inria
 *
 * StarPU is free software; you can redistribute it and/or modify
 * it under the terms of the GNU Lesser General Public License as published by
 * the Free Software Foundation; either version 2.1 of the License, or (at
 * your option) any later version.
 *
 * StarPU is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
 *
 * See the GNU Lesser General Public License in COPYING.LGPL for more details.
 */
/*! \page FPGASupport FPGA Support

\section Introduction Introduction

Maxeler provides hardware and software solutions for accelerating computing applications on dataflow engines (DFEs). DFEs are in-house designed accelerators that encapsulate reconfigurable high-end FPGAs at their core and are equipped with large amounts of DDR memory.

We extend the StarPU task programming library, which initially targets heterogeneous architectures, to also support Field Programmable Gate Arrays (FPGAs).

To create <c>StarPU/FPGA</c> applications exploiting DFE configurations, MaxCompiler allows an application to be split into three parts:

- <c>Kernel</c>, which implements the computational components of the application in hardware.
- <c>Manager configuration</c>, which connects Kernels to the CPU, engine RAM, other Kernels and other DFEs via MaxRing.
- <c>CPU application</c>, which interacts with the DFEs to read and write data to the Kernels and engine RAM.

The Simple Live CPU interface (SLiC) is Maxeler's application programming interface for seamless CPU-DFE integration. SLiC allows CPU applications to configure and load a number of DFEs, as well as to subsequently schedule and run actions on those DFEs using simple function calls. In StarPU/FPGA applications, we use the <c>Dynamic SLiC Interface</c> to exchange data streams between the CPU (main memory) and the DFE (local memory).
\section PortingApplicationsToFPGA Porting Applications to FPGA

To port an application to FPGA, set the field starpu_codelet::fpga_funcs to provide StarPU with the FPGA implementation of the codelet, for instance:
\verbatim
struct starpu_codelet cl =
{
    .fpga_funcs = {myfunc},
    .nbuffers = 1,
};
\endverbatim
\subsection FPGAExample StarPU/FPGA Application

To give an idea of the interface used to exchange data between the <c>host</c> (CPU) and the <c>FPGA</c> (DFE), here is an example based on one of Maxeler's examples (https://trac.version.fz-juelich.de/reconfigurable/wiki/Public).

<c>StreamFMAKernel.maxj</c> contains the Java kernel code; it implements a very simple kernel (c = a + b). <c>Test.c</c> starts it from the <c>fpga_add</c> function: it first sets up streaming from the CPU pointers, then triggers execution and waits for the result. The API used to interact with DFEs is <c>SLiC</c>, which also involves the <c>MaxelerOS</c> runtime.

- <c>StreamFMAKernel.maxj</c>: the DFE part is described in the MaxJ programming language, a Java-based metaprogramming approach.
\code{.java}
package tests;

import com.maxeler.maxcompiler.v2.kernelcompiler.Kernel;
import com.maxeler.maxcompiler.v2.kernelcompiler.KernelParameters;
import com.maxeler.maxcompiler.v2.kernelcompiler.types.base.DFEType;
import com.maxeler.maxcompiler.v2.kernelcompiler.types.base.DFEVar;

class StreamFMAKernel extends Kernel {
    private static final DFEType type = dfeInt(32);

    protected StreamFMAKernel(KernelParameters parameters) {
        super(parameters);

        DFEVar a = io.input("a", type);
        DFEVar b = io.input("b", type);
        DFEVar c;

        c = a + b;

        io.output("output", c, type);
    }
}
\endcode
- <c>StreamFMAManager.maxj</c>: also written in the MaxJ programming language, it orchestrates data movement between the host and the DFE.
\code{.java}
package tests;

import com.maxeler.maxcompiler.v2.build.EngineParameters;
import com.maxeler.maxcompiler.v2.managers.custom.blocks.KernelBlock;
import com.maxeler.platform.max5.manager.Max5LimaManager;

class StreamFMAManager extends Max5LimaManager {
    private static final String kernel_name = "StreamFMAKernel";

    public StreamFMAManager(EngineParameters arg0) {
        super(arg0);
        KernelBlock kernel = addKernel(new StreamFMAKernel(makeKernelParameters(kernel_name)));
        kernel.getInput("a") <== addStreamFromCPU("a");
        kernel.getInput("b") <== addStreamFromCPU("b");
        addStreamToCPU("output") <== kernel.getOutput("output");
    }

    public static void main(String[] args) {
        StreamFMAManager manager = new StreamFMAManager(new EngineParameters(args));
        manager.build();
    }
}
\endcode
Once <c>StreamFMAKernel.maxj</c> and <c>StreamFMAManager.maxj</c> are written, a few more steps are needed:

- Build the Java program (Kernel and Manager (.maxj)):
\verbatim
$ maxjc -1.7 -cp $MAXCLASSPATH streamfma/
\endverbatim

- Run the Java program to generate a DFE implementation (a .max file) that can be called from a StarPU/FPGA application, and SLiC headers (.h) for simulation:
\verbatim
$ java -XX:+UseSerialGC -Xmx2048m -cp $MAXCLASSPATH:. streamfma.StreamFMAManager DFEModel=MAIA maxFileName=StreamFMA target=DFE_SIM
\endverbatim

- Build the SLiC object file (simulation):
\verbatim
$ sliccompile StreamFMA.max
\endverbatim
- <c>Test.c</c>: to interface the StarPU task-based runtime system with Maxeler's DFE devices, we use the advanced dynamic interface of <c>SLiC</c> in <b>non_blocking</b> mode. Test code must include <c>MaxSLiCInterface.h</c> and the generated maxfile header (here <c>StreamFMA.h</c>); the .max file contains the bitstream. The StarPU/FPGA application itself can be written in C, C++, etc.
\code{.c}
#include "StreamFMA.h"
#include "MaxSLiCInterface.h"

/* Shared between main() and the codelet function for simplicity */
static max_file_t *maxfile;
static max_engine_t *engine;
static max_actions_t *act;

void fpga_add(void *buffers[], void *cl_arg)
{
    (void)cl_arg;

    int *a = (int*) STARPU_VECTOR_GET_PTR(buffers[0]);
    int *b = (int*) STARPU_VECTOR_GET_PTR(buffers[1]);
    int *c = (int*) STARPU_VECTOR_GET_PTR(buffers[2]);

    int size = STARPU_VECTOR_GET_NX(buffers[0]);

    /* actions to run on an engine */
    act = max_actions_init(maxfile, NULL);

    /* set the number of ticks for a kernel */
    max_set_ticks(act, "StreamFMAKernel", size);

    /* send input streams */
    max_queue_input(act, "a", a, size * sizeof(a[0]));
    max_queue_input(act, "b", b, size * sizeof(b[0]));

    /* store output stream */
    max_queue_output(act, "output", c, size * sizeof(c[0]));

    printf("**** Run actions in non blocking mode **** \n");

    /* run actions on the engine in non_blocking mode */
    max_run_t *run0 = max_run_nonblock(engine, act);

    printf("*** wait for the actions on DFE to complete *** \n");
    max_wait(run0);
}

static struct starpu_codelet cl =
{
    .cpu_funcs = {cpu_func},
    .cpu_funcs_name = {"cpu_func"},
    .fpga_funcs = {fpga_add},
    .nbuffers = 3,
    .modes = {STARPU_R, STARPU_R, STARPU_W}
};

int main(int argc, char **argv)
{
    ...

    /* load the maxfile */
    maxfile = StreamFMA_init();

    /* load the bitstream onto an engine */
    engine = max_load(maxfile, "*");

    starpu_init(NULL);

    ... Task submission etc. ...

    starpu_shutdown();

    /* deallocate the set of actions */
    max_actions_free(act);

    /* unload and deallocate the engine obtained via max_load */
    max_unload(engine);

    return 0;
}
\endcode
To write the StarPU/FPGA application: first, the programmer must describe the codelet using StarPU's C API. This codelet provides both a CPU implementation and an FPGA one. It also specifies that the task has two inputs and one output through the <c>nbuffers</c> and <c>modes</c> fields.

The <c>fpga_add</c> function is the FPGA implementation; it is mainly divided into five steps:

- Initialize the actions to be run on the DFE.
- Add data to an input stream for the actions.
- Add data storage space for an output stream.
- Run the actions on the DFE in <b>non_blocking</b> mode; a non-blocking call returns immediately, allowing the calling code to do more CPU work in parallel while the actions are run.
- Wait for the actions to complete.
In the <c>main</c> function, there are four important steps:

- Load the maxfile.
- Load a DFE.
- Free the actions.
- Unload and deallocate the DFE.

The rest of the application (data registration, task submission, etc.) is as usual with StarPU.
\subsection FPGADataTransfers Data Transfers in StarPU/FPGA Applications

The communication between the host and the DFE is done through the <c>Dynamic SLiC Interface</c>, which exchanges data between the main memory and the local memory of the DFE.

For now, we use \ref STARPU_MAIN_RAM to send data to and store data from the DFE's local memory. However, we aim to use a multiplexer to choose which memory node to read/write data from/to, so that the user can specify, for example, whether the computational kernel takes its data from the main memory or from the DFE's local memory.

In StarPU applications, when \ref starpu_codelet::specific_nodes is 1, it specifies the memory nodes where each piece of data should be sent for task execution.
\subsection FPGAConfiguration FPGA Configuration

FPGA support is enabled through the \c configure option <b>--with-fpga</b>. Compiling and installing a StarPU/FPGA application then follows the standard procedure:

\verbatim
$ make
$ make install
\endverbatim
\subsection FPGALaunchingprograms Launching Programs: Simulation

Maxeler provides a simple tutorial for using MaxCompiler (https://trac.version.fz-juelich.de/reconfigurable/wiki/Public). Running the Java program to generate the maxfile and SLiC headers for real hardware on Maxeler's DFE device takes a very long time (approximately 2 hours, even for this very small example); that is why we use the simulation.

- To start the simulation on Maxeler's DFE device:
\verbatim
$ maxcompilersim -c LIMA -n StreamFMA restart
\endverbatim

- To run the binary (simulation):
\verbatim
$ export LD_LIBRARY_PATH=$MAXELEROSDIR/lib:$LD_LIBRARY_PATH
$ export SLIC_CONF="use_simulation=StreamFMA"
\endverbatim

- To force tasks to be scheduled on the FPGA, one can disable the use of CPU cores by setting the \ref STARPU_NCPU environment variable to 0:
\verbatim
$ STARPU_NCPU=0 ./StreamFMA
\endverbatim

- To stop the simulation:
\verbatim
$ maxcompilersim -c LIMA -n StreamFMA stop
\endverbatim

*/