@c -*-texinfo-*-
@c This file is part of the StarPU Handbook.
@c Copyright (C) 2009--2011  Universit@'e de Bordeaux 1
@c Copyright (C) 2010, 2011, 2012, 2013  Centre National de la Recherche Scientifique
@c Copyright (C) 2011, 2012  Institut National de Recherche en Informatique et Automatique
@c See the file starpu.texi for copying conditions.

@menu
* Installing a Binary Package::
* Installing from Source::
* Setting up Your Own Code::
* Benchmarking StarPU::
@end menu

@node Installing a Binary Package
@section Installing a Binary Package

One of the StarPU developers being a Debian Developer, the packages
are well integrated and very up-to-date. To see which packages are
available, simply type:

@example
$ apt-cache search starpu
@end example

To install what you need, type:

@example
$ sudo apt-get install libstarpu-1.0 libstarpu-dev
@end example

@node Installing from Source
@section Installing from Source

StarPU can be built and installed by the standard means of the GNU
autotools. The following sections briefly recall how these tools can
be used to install StarPU.

@menu
* Optional Dependencies::
* Getting Sources::
* Configuring StarPU::
* Building StarPU::
* Installing StarPU::
@end menu

@node Optional Dependencies
@subsection Optional Dependencies

The @url{http://www.open-mpi.org/software/hwloc, @code{hwloc} topology
discovery library} is not mandatory to use StarPU but strongly
recommended. It allows for topology-aware scheduling, which improves
performance. @code{hwloc} is available in major free operating system
distributions, and for most operating systems.

If @code{hwloc} is not available on your system, the option
@code{--without-hwloc} should be explicitly given when calling the
@code{configure} script. If @code{hwloc} is installed with a
@code{pkg-config} file, no option is required: it will be detected
automatically. Otherwise @code{--with-hwloc=prefix} should be used to
specify the location of @code{hwloc}.
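For instance, if @code{hwloc} was installed under a non-standard
prefix without a @code{pkg-config} file, the option can be given as
follows (the prefix @code{$HOME/local} is only a hypothetical
example):

@example
$ ./configure --with-hwloc=$HOME/local
@end example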
@node Getting Sources
@subsection Getting Sources

StarPU's sources can be obtained from the
@url{http://runtime.bordeaux.inria.fr/StarPU/files/,download page} of
the StarPU website.

All releases and the development tree of StarPU are freely available
on INRIA's gforge under the LGPL license. Some releases are available
under the BSD license.

The latest release can be downloaded from the
@url{http://gforge.inria.fr/frs/?group_id=1570,INRIA's gforge} or
directly from the
@url{http://runtime.bordeaux.inria.fr/StarPU/files/,StarPU download page}.

The latest nightly snapshot can be downloaded from the
@url{http://starpu.gforge.inria.fr/testing/,StarPU gforge website}.

@example
$ wget http://starpu.gforge.inria.fr/testing/starpu-nightly-latest.tar.gz
@end example

Finally, the current development version is also accessible via
Subversion. It should be used only if you need the very latest changes
(i.e. less than a day old!)@footnote{The Subversion client can be
obtained from @url{http://subversion.tigris.org}. If you are running
on Windows, you will probably prefer to use
@url{http://tortoisesvn.tigris.org/, TortoiseSVN}.}.

@example
$ svn checkout svn://scm.gforge.inria.fr/svn/starpu/trunk StarPU
@end example

@node Configuring StarPU
@subsection Configuring StarPU

Running @code{autogen.sh} is not necessary when using the tarball
releases of StarPU. If you are using the source code from the svn
repository, you first need to generate the configure scripts and the
Makefiles. This requires the availability of @code{autoconf} (>= 2.60),
@code{automake}, and @code{makeinfo}.

@example
$ ./autogen.sh
@end example

You then need to configure StarPU. Details about options that are
useful to give to @code{./configure} are given in @ref{Compilation
configuration}.

@example
$ ./configure
@end example

If @code{configure} does not detect some software or produces errors,
please make sure to post the content of @code{config.log} when
reporting the issue.
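For instance, to install StarPU under a user-writable prefix, the
standard autoconf @code{--prefix} option can be passed (the path below
is only a hypothetical example):

@example
$ ./configure --prefix=$HOME/local/starpu
@end example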
By default, the files produced during the compilation are placed in
the source directory. As the compilation generates a lot of files, it
is advised to put them all in a separate directory. It is then easier
to clean up, and this makes it possible to compile several
configurations out of the same source tree. To do so, simply enter the
directory where you want the compilation to produce its files, and
invoke the @code{configure} script located in the StarPU source
directory.

@example
$ mkdir build
$ cd build
$ ../configure
@end example

@node Building StarPU
@subsection Building StarPU

@example
$ make
@end example

Once everything is built, you may want to test the result. An
extensive set of regression tests is provided with StarPU. Running the
tests is done by calling @code{make check}. These tests are run every
night and the result from the main profile is publicly
@url{http://starpu.gforge.inria.fr/testing/,available}.

@example
$ make check
@end example

@node Installing StarPU
@subsection Installing StarPU

In order to install StarPU at the location that was specified during
configuration:

@example
$ make install
@end example

Libtool interface versioning information is included in the library
names (@code{libstarpu-1.0.so}, @code{libstarpumpi-1.0.so} and
@code{libstarpufft-1.0.so}).

@node Setting up Your Own Code
@section Setting up Your Own Code

@menu
* Setting Flags for Compiling::
* Running a Basic StarPU Application::
* Kernel Threads Started by StarPU::
* Enabling OpenCL::
@end menu

@node Setting Flags for Compiling
@subsection Setting Flags for Compiling, Linking and Running Applications

StarPU provides a @code{pkg-config} executable to obtain relevant
compiler and linker flags. Compiling and linking an application
against StarPU may require specific flags or libraries (for instance
@code{CUDA} or @code{libspe2}). To this end, it is possible to use the
@code{pkg-config} tool.
If StarPU was not installed in a standard location, the path of
StarPU's library must be specified in the @code{PKG_CONFIG_PATH}
environment variable so that @code{pkg-config} can find it. For
example, if StarPU was installed in @code{$prefix_dir}:

@example
$ PKG_CONFIG_PATH=$PKG_CONFIG_PATH:$prefix_dir/lib/pkgconfig
@end example

The flags required to compile or link against StarPU are then
accessible with the following commands@footnote{It is still possible
to use the API provided in the version 0.9 of StarPU by calling
@code{pkg-config} with the @code{libstarpu} package. Similar packages
are provided for @code{libstarpumpi} and @code{libstarpufft}.}:

@example
$ pkg-config --cflags starpu-1.1  # options for the compiler
$ pkg-config --libs starpu-1.1    # options for the linker
@end example

Make sure that @code{pkg-config --libs starpu-1.1} actually produces
some output before going further: @code{PKG_CONFIG_PATH} has to point
to the place where @code{starpu-1.1.pc} was installed during
@code{make install}. Also pass the @code{--static} option if the
application is to be linked statically.

It is also necessary to set the @code{LD_LIBRARY_PATH} variable so
that dynamic libraries can be located at runtime.

@example
$ LD_LIBRARY_PATH=$prefix_dir/lib:$LD_LIBRARY_PATH
@end example

When using a Makefile, the following lines can be added to set the
options for the compiler and the linker:

@cartouche
@example
CFLAGS  += $$(pkg-config --cflags starpu-1.1)
LDFLAGS += $$(pkg-config --libs starpu-1.1)
@end example
@end cartouche

@node Running a Basic StarPU Application
@subsection Running a Basic StarPU Application

Basic examples using StarPU are built in the directory
@code{examples/basic_examples/} (and installed in
@code{$prefix_dir/lib/starpu/examples/}). You can for example run the
example @code{vector_scal}.
@example
$ ./examples/basic_examples/vector_scal
BEFORE: First element was 1.000000
AFTER: First element is 3.140000
@end example

When StarPU is used for the first time, the directory
@code{$STARPU_HOME/.starpu/} is created; performance models will be
stored in that directory (@pxref{STARPU_HOME}).

Please note that buses are benchmarked when StarPU is launched for the
first time. This may take a few minutes, or less if @code{hwloc} is
installed. This step is done only once per user and per machine.

@node Kernel Threads Started by StarPU
@subsection Kernel Threads Started by StarPU

StarPU automatically binds one thread per CPU core. It does not use
SMT/hyperthreading because kernels are usually already optimized for
using a full core, and using hyperthreading would make kernel
calibration rather random. Since driving GPUs is a CPU-consuming task,
StarPU dedicates one core per GPU.

While StarPU tasks are executing, the application is not supposed to
do computations in the threads it starts itself; tasks should be used
instead.

TODO: add a StarPU function to bind an application thread (e.g. the
main thread) to a dedicated core (and thus disable the corresponding
StarPU CPU worker).

@node Enabling OpenCL
@subsection Enabling OpenCL

When both CUDA and OpenCL drivers are enabled, StarPU will launch an
OpenCL worker for NVIDIA GPUs only if CUDA is not already running on
them. This design choice was necessary as OpenCL and CUDA cannot run
at the same time on the same NVIDIA GPU, as there is currently no
interoperability between them.

To enable OpenCL, you need either to disable CUDA when configuring
StarPU:

@example
$ ./configure --disable-cuda
@end example

or when running applications:

@example
$ STARPU_NCUDA=0 ./application
@end example

OpenCL will automatically be started on any device not yet used by
CUDA.
On a machine with 4 GPUs, it is therefore possible to enable CUDA on
2 devices and OpenCL on the 2 other devices by doing so:

@example
$ STARPU_NCUDA=2 ./application
@end example

@node Benchmarking StarPU
@section Benchmarking StarPU

Some interesting benchmarks are installed among the examples in
@code{$prefix_dir/lib/starpu/examples/}. Make sure to try various
schedulers, for instance @code{STARPU_SCHED=dmda}.

@menu
* Task size overhead::
* Data transfer latency::
* Gemm::
* Cholesky::
* LU::
@end menu

@node Task size overhead
@subsection Task size overhead

This benchmark gives a glimpse into how big a task size should be for
the StarPU overhead to be low enough. Run
@code{tasks_size_overhead.sh}; it will generate a plot of the speedup
of tasks of various sizes, depending on the number of CPUs being used.

@node Data transfer latency
@subsection Data transfer latency

@code{local_pingpong} performs a ping-pong between the first two CUDA
nodes, and prints the measured latency.

@node Gemm
@subsection Matrix-matrix multiplication

@code{sgemm} and @code{dgemm} perform a blocked matrix-matrix
multiplication using BLAS and cuBLAS. They output the obtained GFlops.

@node Cholesky
@subsection Cholesky factorization

@code{cholesky*} perform a Cholesky factorization (single precision).
They use different dependency primitives.

@node LU
@subsection LU factorization

@code{lu*} perform an LU factorization. They use different dependency
primitives.
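As a sketch of how such a scheduler comparison might be run (assuming
StarPU was installed in @code{$prefix_dir}; @code{eager} and
@code{dmda} are two of the built-in scheduling policies):

@example
$ cd $prefix_dir/lib/starpu/examples/
$ STARPU_SCHED=eager ./sgemm
$ STARPU_SCHED=dmda ./sgemm
@end example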