
make it clear that performance models are per size

Samuel Thibault, 15 years ago
parent
commit
4752e60676
1 changed file with 5 additions and 2 deletions

+ 5 - 2
doc/starpu.texi

@@ -1159,8 +1159,11 @@ same. This is very true for regular kernels on GPUs for instance (<0.1% error),
 and just a bit less true on CPUs (~=1% error). This also assumes that there are
 few different sets of data input/output sizes. StarPU will then keep record of
 the average time of previous executions on the various processing units, and use
-it as an estimation. It will also save it in @code{~/.starpu/sampling/codelets}
-for further executions.  The following is a small code example.
+it as an estimation. History is done per task size, by using a hash of the input
+and output sizes as an index.
+It will also save it in @code{~/.starpu/sampling/codelets}
+for further executions, and can be observed by using the
+@code{starpu_perfmodel_display} command.  The following is a small code example.
 
 
 @cartouche
 @smallexample