
rename "power" into "energy" wherever it applies, notably energy consumption performance models

Samuel Thibault, 8 years ago
parent
commit 6d917d694e

+ 2 - 0
ChangeLog

@@ -223,6 +223,8 @@ Small changes:
     STARPU_NMICDEVS, the number of devices and the number of cores.
     STARPU_NMIC will be the number of devices, and STARPU_NMICCORES
     will be the number of cores per device.
+  * "power" is renamed into "energy" wherever it applies, notably energy
+    consumption performance models
 
 StarPU 1.1.5 (svn revision xxx)
 ==============================================

+ 1 - 1
doc/doxygen/chapters/00introduction.doxy

@@ -153,7 +153,7 @@ them.
 
 A <b>performance model</b> is a (dynamic or static) model of the performance of a
 given codelet. Codelets can have execution time performance model as well as
-power consumption performance models.
+energy consumption performance models.
 
 A data \b interface describes the layout of the data: for a vector, a pointer
 for the start, the number of elements and the size of elements ; for a matrix, a

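For a quick illustration of what such models look like in application code, here is a minimal sketch (not part of the patch; the names scal_cpu_func, scal_time and scal_energy are made up, and the real, complete version is the examples/basic_examples/vector_scal.c change further down in this commit) of a codelet carrying both an execution-time model and an energy-consumption model:

#include <starpu.h>

void scal_cpu_func(void *buffers[], void *cl_arg); /* hypothetical kernel */

/* History-based model of the execution time of the codelet. */
static struct starpu_perfmodel scal_time_model =
{
	.type = STARPU_HISTORY_BASED,
	.symbol = "scal_time"
};

/* History-based model of the energy consumption, in Joules
 * (this is the field renamed by this commit). */
static struct starpu_perfmodel scal_energy_model =
{
	.type = STARPU_HISTORY_BASED,
	.symbol = "scal_energy"
};

static struct starpu_codelet scal_cl =
{
	.cpu_funcs = {scal_cpu_func},
	.nbuffers = 1,
	.modes = {STARPU_RW},
	.model = &scal_time_model,          /* execution time model */
	.energy_model = &scal_energy_model  /* was .power_model before this commit */
};
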
+ 1 - 1
doc/doxygen/chapters/05check_list_performance.doxy

@@ -40,7 +40,7 @@ link to \ref TaskSchedulingPolicy
 
 link to \ref TaskDistributionVsDataTransfer
 
-link to \ref Power-basedScheduling
+link to \ref Energy-basedScheduling
 
 link to \ref StaticScheduling
 

+ 8 - 8
doc/doxygen/chapters/08scheduling.doxy

@@ -98,33 +98,33 @@ real application execution, contention makes transfer times bigger.
 This is of course imprecise, but in practice, a rough estimation
 already gives the good results that a precise estimation would give.
 
-\section Power-basedScheduling Power-based Scheduling
+\section Energy-basedScheduling Energy-based Scheduling
 
-If the application can provide some power performance model (through
-the field starpu_codelet::power_model), StarPU will
+If the application can provide some energy consumption performance model (through
+the field starpu_codelet::energy_model), StarPU will
 take it into account when distributing tasks. The target function that
 the scheduler <c>dmda</c> minimizes becomes <c>alpha * T_execution +
 beta * T_data_transfer + gamma * Consumption</c> , where <c>Consumption</c>
 is the estimated task consumption in Joules. To tune this parameter, use
 <c>export STARPU_SCHED_GAMMA=3000</c> for instance, to express that each Joule
 (i.e kW during 1000us) is worth 3000us execution time penalty. Setting
-<c>alpha</c> and <c>beta</c> to zero permits to only take into account power consumption.
+<c>alpha</c> and <c>beta</c> to zero permits to only take into account energy consumption.
 
-This is however not sufficient to correctly optimize power: the scheduler would
+This is however not sufficient to correctly optimize energy: the scheduler would
 simply tend to run all computations on the most energy-conservative processing
 unit. To account for the consumption of the whole machine (including idle
 processing units), the idle power of the machine should be given by setting
 <c>export STARPU_IDLE_POWER=200</c> for 200W, for instance. This value can often
 be obtained from the machine power supplier.
 
-The power actually consumed by the total execution can be displayed by setting
+The energy actually consumed by the total execution can be displayed by setting
 <c>export STARPU_PROFILING=1 STARPU_WORKER_STATS=1</c> .
 
 On-line task consumption measurement is currently only supported through the
 <c>CL_PROFILING_POWER_CONSUMED</c> OpenCL extension, implemented in the MoviSim
 simulator. Applications can however provide explicit measurements by
 using the function starpu_perfmodel_update_history() (examplified in \ref PerformanceModelExample
-with the <c>power_model</c> performance model). Fine-grain
+with the <c>energy_model</c> performance model). Fine-grain
 measurement is often not feasible with the feedback provided by the hardware, so
 the user can for instance run a given task a thousand times, measure the global
 consumption for that series of tasks, divide it by a thousand, repeat for
@@ -198,7 +198,7 @@ methods of the policy.
 Make sure to have a look at the \ref API_Scheduling_Policy section, which
 provides a list of the available functions for writing advanced schedulers, such
 as starpu_task_expected_length, starpu_task_expected_data_transfer_time,
-starpu_task_expected_power, starpu_prefetch_task_input_node, etc. Other
+starpu_task_expected_energy, starpu_prefetch_task_input_node, etc. Other
 useful functions include starpu_transfer_bandwidth, starpu_transfer_latency,
 starpu_transfer_predict, ...
 

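To make the explicit-measurement path described above concrete, here is a minimal sketch (not part of the patch) of pushing one averaged energy measurement, in Joules, into a codelet's energy model with starpu_perfmodel_update_history(); the helper name is illustrative, and how the arch/cpuid/nimpl values are obtained is left to the caller:

/* Sketch: record one averaged energy measurement for a task.
 * joules_per_task would typically be the energy measured over ~1000
 * identical runs divided by the number of runs, as suggested above. */
static void record_energy_measurement(struct starpu_perfmodel *energy_model,
                                      struct starpu_task *task,
                                      struct starpu_perfmodel_arch *arch,
                                      unsigned cpuid, unsigned nimpl,
                                      double joules_per_task)
{
	starpu_perfmodel_update_history(energy_model, task, arch, cpuid, nimpl,
	                                joules_per_task);
}
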
+ 4 - 4
doc/doxygen/chapters/12online_performance_tools.doxy

@@ -453,10 +453,10 @@ variable to <c>1</c>, or even reset by setting it to <c>2</c>.
 How to use schedulers which can benefit from such performance model is explained
 in \ref TaskSchedulingPolicy.
 
-The same can be done for task power consumption estimation, by setting
-the field starpu_codelet::power_model the same way as the field
+The same can be done for task energy consumption estimation, by setting
+the field starpu_codelet::energy_model the same way as the field
 starpu_codelet::model. Note: for now, the application has to give to
-the power consumption performance model a name which is different from
+the energy consumption performance model a name which is different from
 the execution time performance model.
 
 The application can request time estimations from the StarPU performance
@@ -465,7 +465,7 @@ it. The data handles can be created by calling any of the functions
 <c>starpu_*_data_register</c> with a <c>NULL</c> pointer and <c>-1</c>
 node and the desired data sizes, and need to be unregistered as usual.
 The functions starpu_task_expected_length() and
-starpu_task_expected_power() can then be called to get an estimation
+starpu_task_expected_energy() can then be called to get an estimation
 of the task cost on a given arch. starpu_task_footprint() can also be
 used to get the footprint used for indexing history-based performance
 models. starpu_task_destroy() needs to be called to destroy the dummy

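The dummy-task estimation described above can be sketched as follows (illustrative only; it assumes a vector codelet cl with both starpu_codelet::model and starpu_codelet::energy_model set, and an arch descriptor obtained elsewhere, e.g. with starpu_worker_get_perf_archtype()):

#include <stdio.h>
#include <starpu.h>

/* Estimate time and energy of codelet "cl" on "arch" for a vector of n
 * floats, without executing it, following the steps described above. */
static void estimate_task_cost(struct starpu_codelet *cl, uint32_t n,
                               struct starpu_perfmodel_arch *arch)
{
	starpu_data_handle_t handle;
	/* Dummy registration: NULL pointer and -1 home node, only sizes matter. */
	starpu_vector_data_register(&handle, -1, (uintptr_t) NULL, n, sizeof(float));

	struct starpu_task *task = starpu_task_create();
	task->cl = cl;
	task->handles[0] = handle;

	double expected_us = starpu_task_expected_length(task, arch, 0);
	double expected_j  = starpu_task_expected_energy(task, arch, 0);
	fprintf(stderr, "expected: %.2f us, %.2f J\n", expected_us, expected_j);

	starpu_task_destroy(task);
	starpu_data_unregister(handle);
}
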
+ 4 - 4
doc/doxygen/chapters/40environment_variables.doxy

@@ -397,14 +397,14 @@ the coefficient to be applied to it before adding it to the computation part.
 <dd>
 \anchor STARPU_SCHED_GAMMA
 \addindex __env__STARPU_SCHED_GAMMA
-Define the execution time penalty of a joule (\ref Power-basedScheduling).
+Define the execution time penalty of a joule (\ref Energy-basedScheduling).
 </dd>
 
 <dt>STARPU_IDLE_POWER</dt>
 <dd>
 \anchor STARPU_IDLE_POWER
 \addindex __env__STARPU_IDLE_POWER
-Define the idle power of the machine (\ref Power-basedScheduling).
+Define the idle power of the machine (\ref Energy-basedScheduling).
 </dd>
 
 <dt>STARPU_PROFILING</dt>
@@ -762,8 +762,8 @@ starpu_shutdown() (\ref Profiling).
 \addindex __env__STARPU_WORKER_STATS
 When defined, statistics about the workers will be displayed when calling
 starpu_shutdown() (\ref Profiling). When combined with the
-environment variable \ref STARPU_PROFILING, it displays the power
-consumption (\ref Power-basedScheduling).
+environment variable \ref STARPU_PROFILING, it displays the energy
+consumption (\ref Energy-basedScheduling).
 </dd>
 
 <dt>STARPU_STATS</dt>

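For completeness, the scheduling-related variables above can also be set programmatically before starpu_init(); a minimal sketch (the values are simply the examples used in this documentation):

#include <stdlib.h>
#include <starpu.h>

int main(void)
{
	/* Must be set before starpu_init() to be taken into account. */
	setenv("STARPU_SCHED_GAMMA", "3000", 1); /* 1 Joule = 3000 us penalty */
	setenv("STARPU_IDLE_POWER", "200", 1);   /* 200 W idle machine power */
	setenv("STARPU_PROFILING", "1", 1);
	setenv("STARPU_WORKER_STATS", "1", 1);   /* energy shown at shutdown */

	if (starpu_init(NULL) != 0)
		return 1;
	/* ... submit tasks ... */
	starpu_shutdown(); /* worker statistics, including energy, are displayed here */
	return 0;
}
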
+ 2 - 2
doc/doxygen/chapters/api/codelet_and_tasks.doxy

@@ -344,8 +344,8 @@ Optional pointer to the task duration performance model associated to
 this codelet. This optional field is ignored when set to <c>NULL</c> or when
 its field starpu_perfmodel::symbol is not set.
 
-\var struct starpu_perfmodel *starpu_codelet::power_model
-Optional pointer to the task power consumption performance model
+\var struct starpu_perfmodel *starpu_codelet::energy_model
+Optional pointer to the task energy consumption performance model
 associated to this codelet. This optional field is ignored when set to
 <c>NULL</c> or when its field starpu_perfmodel::field is not set. In
 the case of parallel codelets, this has to account for all processing

+ 1 - 1
doc/doxygen/chapters/api/opencl_extensions.doxy

@@ -167,7 +167,7 @@ This function allows to collect statistics on a kernel execution.
 After termination of the kernels, the OpenCL codelet should call this
 function to pass it the even returned by clEnqueueNDRangeKernel, to
 let StarPU collect statistics about the kernel execution (used cycles,
-consumed power).
+consumed energy).
 
 @name OpenCL utilities
 \ingroup API_OpenCL_Extensions

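As a usage sketch of starpu_opencl_collect_stats() (not from the patch; kernel creation and argument setup are omitted, and the use of starpu_opencl_get_current_queue() here is an assumption), the tail of an OpenCL codelet could look like:

#include <starpu.h>
#include <starpu_opencl.h>

/* Enqueue an already-prepared kernel and let StarPU collect its statistics. */
static void run_and_collect(cl_kernel kernel, size_t global, size_t local)
{
	cl_command_queue queue;
	cl_event event;
	cl_int err;

	starpu_opencl_get_current_queue(&queue);
	err = clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, &local,
	                             0, NULL, &event);
	if (err != CL_SUCCESS) STARPU_OPENCL_REPORT_ERROR(err);

	clFinish(queue);
	/* Pass the event back so StarPU can record used cycles / consumed energy. */
	starpu_opencl_collect_stats(event);
	clReleaseEvent(event);
}
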
+ 4 - 4
doc/doxygen/chapters/api/profiling.doxy

@@ -61,8 +61,8 @@ Number of cycles used by the task, only available in the MoviSim
 \var uint64_t starpu_profiling_task_info::stall_cycles
 Number of cycles stalled within the task, only available in the MoviSim
 
-\var double starpu_profiling_task_info::power_consumed
-Power consumed by the task, only available in the MoviSim
+\var double starpu_profiling_task_info::energy_consumed
+Energy consumed by the task, only available in the MoviSim
 
 \struct starpu_profiling_worker_info
 This structure contains the profiling information associated to
@@ -83,8 +83,8 @@ starpu_profiling_worker_get_info()
         Number of cycles used by the worker, only available in the MoviSim
 \var uint64_t starpu_profiling_worker_info::stall_cycles
         Number of cycles stalled within the worker, only available in the MoviSim
-\var double starpu_profiling_worker_info::power_consumed
-        Power consumed by the worker, only available in the MoviSim
+\var double starpu_profiling_worker_info::energy_consumed
+        Energy consumed by the worker, only available in the MoviSim
 
 \struct starpu_profiling_bus_info
 todo

+ 2 - 2
doc/doxygen/chapters/api/scheduling_policy.doxy

@@ -199,9 +199,9 @@ Returns expected data transfer time in micro-seconds.
 \ingroup API_Scheduling_Policy
 Predict the transfer time (in micro-seconds) to move \p handle to a memory node
 
-\fn double starpu_task_expected_power(struct starpu_task *task, struct starpu_perfmodel_arch *arch, unsigned nimpl)
+\fn double starpu_task_expected_energy(struct starpu_task *task, struct starpu_perfmodel_arch *arch, unsigned nimpl)
 \ingroup API_Scheduling_Policy
-Returns expected power consumption in J
+Returns expected energy consumption in J
 
 \fn double starpu_task_expected_conversion_time(struct starpu_task *task, struct starpu_perfmodel_arch *arch, unsigned nimpl)
 \ingroup API_Scheduling_Policy

+ 2 - 2
doc/doxygen/chapters/api/task_bundles.doxy

@@ -48,9 +48,9 @@ it when possible.
 \ingroup API_Task_Bundles
 Return the expected duration of \p bundle in micro-seconds.
 
-\fn double starpu_task_bundle_expected_power(starpu_task_bundle_t bundle, struct starpu_perfmodel_arch *arch, unsigned nimpl)
+\fn double starpu_task_bundle_expected_energy(starpu_task_bundle_t bundle, struct starpu_perfmodel_arch *arch, unsigned nimpl)
 \ingroup API_Task_Bundles
-Return the expected power consumption of \p bundle in J.
+Return the expected energy consumption of \p bundle in J.
 
 \fn double starpu_task_bundle_expected_data_transfer_time(starpu_task_bundle_t bundle, unsigned memory_node)
 \ingroup API_Task_Bundles

+ 4 - 4
examples/basic_examples/vector_scal.c

@@ -1,7 +1,7 @@
 /* StarPU --- Runtime system for heterogeneous multicore architectures.
  *
  * Copyright (C) 2010, 2011, 2012, 2013, 2015  CNRS
- * Copyright (C) 2010-2015  Université de Bordeaux
+ * Copyright (C) 2010-2016  Université de Bordeaux
  *
  * StarPU is free software; you can redistribute it and/or modify
  * it under the terms of the GNU Lesser General Public License as published by
@@ -44,10 +44,10 @@ static struct starpu_perfmodel vector_scal_model =
 	.symbol = "vector_scal"
 };
 
-static struct starpu_perfmodel vector_scal_power_model =
+static struct starpu_perfmodel vector_scal_energy_model =
 {
 	.type = STARPU_HISTORY_BASED,
-	.symbol = "vector_scal_power"
+	.symbol = "vector_scal_energy"
 };
 
 static struct starpu_codelet cl =
@@ -93,7 +93,7 @@ static struct starpu_codelet cl =
 	.nbuffers = 1,
 	.modes = {STARPU_RW},
 	.model = &vector_scal_model,
-	.power_model = &vector_scal_power_model
+	.energy_model = &vector_scal_energy_model
 };
 
 #ifdef STARPU_USE_OPENCL

+ 3 - 3
include/starpu_profiling.h

@@ -1,6 +1,6 @@
 /* StarPU --- Runtime system for heterogeneous multicore architectures.
  *
- * Copyright (C) 2010-2014  Université de Bordeaux
+ * Copyright (C) 2010-2014, 2016  Université de Bordeaux
  * Copyright (C) 2010, 2011, 2013  CNRS
  *
  * StarPU is free software; you can redistribute it and/or modify
@@ -57,7 +57,7 @@ struct starpu_profiling_task_info
 
 	uint64_t used_cycles;
 	uint64_t stall_cycles;
-	double power_consumed;
+	double energy_consumed;
 };
 
 struct starpu_profiling_worker_info
@@ -70,7 +70,7 @@ struct starpu_profiling_worker_info
 
 	uint64_t used_cycles;
 	uint64_t stall_cycles;
-	double power_consumed;
+	double energy_consumed;
 
 	double flops;
 };

+ 2 - 2
include/starpu_scheduler.h

@@ -90,12 +90,12 @@ double starpu_task_expected_length(struct starpu_task *task, struct starpu_perfm
 double starpu_worker_get_relative_speedup(struct starpu_perfmodel_arch *perf_arch);
 double starpu_task_expected_data_transfer_time(unsigned memory_node, struct starpu_task *task);
 double starpu_data_expected_transfer_time(starpu_data_handle_t handle, unsigned memory_node, enum starpu_data_access_mode mode);
-double starpu_task_expected_power(struct starpu_task *task, struct starpu_perfmodel_arch *arch, unsigned nimpl);
+double starpu_task_expected_energy(struct starpu_task *task, struct starpu_perfmodel_arch *arch, unsigned nimpl);
 double starpu_task_expected_conversion_time(struct starpu_task *task, struct starpu_perfmodel_arch *arch, unsigned nimpl);
 
 double starpu_task_bundle_expected_length(starpu_task_bundle_t bundle, struct starpu_perfmodel_arch *arch, unsigned nimpl);
 double starpu_task_bundle_expected_data_transfer_time(starpu_task_bundle_t bundle, unsigned memory_node);
-double starpu_task_bundle_expected_power(starpu_task_bundle_t bundle, struct starpu_perfmodel_arch *arch, unsigned nimpl);
+double starpu_task_bundle_expected_energy(starpu_task_bundle_t bundle, struct starpu_perfmodel_arch *arch, unsigned nimpl);
 
 void starpu_sched_ctx_worker_shares_tasks_lists(int workerid, int sched_ctx_id);
 

+ 1 - 1
include/starpu_task.h

@@ -116,7 +116,7 @@ struct starpu_codelet
 	int *dyn_nodes;
 
 	struct starpu_perfmodel *model;
-	struct starpu_perfmodel *power_model;
+	struct starpu_perfmodel *energy_model;
 
 	unsigned long per_worker_stats[STARPU_NMAXWORKERS];
 

+ 2 - 2
socl/src/command.c

@@ -1,6 +1,6 @@
 /* StarPU --- Runtime system for heterogeneous multicore architectures.
  *
- * Copyright (C) 2010,2011, 2014 University of Bordeaux
+ * Copyright (C) 2010,2011, 2014, 2016 University of Bordeaux
  *
 * StarPU is free software; you can redistribute it and/or modify
 * it under the terms of the GNU Lesser General Public License as published by
@@ -188,7 +188,7 @@ command_ndrange_kernel command_ndrange_kernel_create (
 
 	starpu_codelet_init(&cmd->codelet);
 	cmd->codelet.where = STARPU_OPENCL;
-	cmd->codelet.power_model = NULL;
+	cmd->codelet.energy_model = NULL;
 	cmd->codelet.opencl_funcs[0] = &soclEnqueueNDRangeKernel_task;
 
 	/* Kernel is mutable, so we duplicate its parameters... */

+ 2 - 2
src/core/jobs.h

@@ -205,8 +205,8 @@ struct _starpu_job {
 	/* Cumulated execution time for discontinuous jobs */
 	struct timespec cumulated_ts;
 
-	/* Cumulated power consumption for discontinuous jobs */
-	double cumulated_power_consumed;
+	/* Cumulated energy consumption for discontinuous jobs */
+	double cumulated_energy_consumed;
 #endif
 
 	/* The value of the footprint that identifies the job may be stored in

+ 9 - 9
src/core/perfmodel/perfmodel.c

@@ -207,12 +207,12 @@ double starpu_task_expected_length(struct starpu_task *task, struct starpu_perfm
 	return starpu_model_expected_perf(task, task->cl->model, arch, nimpl);
 }
 
-double starpu_task_expected_power(struct starpu_task *task, struct starpu_perfmodel_arch* arch, unsigned nimpl)
+double starpu_task_expected_energy(struct starpu_task *task, struct starpu_perfmodel_arch* arch, unsigned nimpl)
 {
 	if (!task->cl)
 		/* Tasks without codelet don't actually take time */
 		return 0.0;
-	return starpu_model_expected_perf(task, task->cl->power_model, arch, nimpl);
+	return starpu_model_expected_perf(task, task->cl->energy_model, arch, nimpl);
 }
 
 double starpu_task_expected_conversion_time(struct starpu_task *task,
@@ -353,10 +353,10 @@ double starpu_task_bundle_expected_length(starpu_task_bundle_t bundle, struct st
 	return expected_length;
 }
 
-/* Return the expected power consumption of the entire task bundle in J */
-double starpu_task_bundle_expected_power(starpu_task_bundle_t bundle, struct starpu_perfmodel_arch* arch, unsigned nimpl)
+/* Return the expected energy consumption of the entire task bundle in J */
+double starpu_task_bundle_expected_energy(starpu_task_bundle_t bundle, struct starpu_perfmodel_arch* arch, unsigned nimpl)
 {
-	double expected_power = 0.0;
+	double expected_energy = 0.0;
 
 	/* We expect total consumption of the bundle the be the sum of the different tasks consumption. */
 	STARPU_PTHREAD_MUTEX_LOCK(&bundle->mutex);
@@ -366,19 +366,19 @@ double starpu_task_bundle_expected_power(starpu_task_bundle_t bundle, struct sta
 
 	while (entry)
 	{
-		double task_power = starpu_task_expected_power(entry->task, arch, nimpl);
+		double task_energy = starpu_task_expected_energy(entry->task, arch, nimpl);
 
 		/* In case the task is not calibrated, we consider the task
 		 * ends immediately. */
-		if (task_power > 0.0)
-			expected_power += task_power;
+		if (task_energy > 0.0)
+			expected_energy += task_energy;
 
 		entry = entry->next;
 	}
 
 	STARPU_PTHREAD_MUTEX_UNLOCK(&bundle->mutex);
 
-	return expected_power;
+	return expected_energy;
 }
 
 /* Return the time (in µs) expected to transfer all data used within the bundle */

+ 4 - 4
src/core/task.c

@@ -584,8 +584,8 @@ static int _starpu_task_submit_head(struct starpu_task *task)
 		if (task->cl->model)
 			_starpu_init_and_load_perfmodel(task->cl->model);
 
-		if (task->cl->power_model)
-			_starpu_init_and_load_perfmodel(task->cl->power_model);
+		if (task->cl->energy_model)
+			_starpu_init_and_load_perfmodel(task->cl->energy_model);
 	}
 
 	return 0;
@@ -656,8 +656,8 @@ int starpu_task_submit(struct starpu_task *task)
 			if (entry->task->cl->model)
 				_starpu_init_and_load_perfmodel(entry->task->cl->model);
 
-			if (entry->task->cl->power_model)
-				_starpu_init_and_load_perfmodel(entry->task->cl->power_model);
+			if (entry->task->cl->energy_model)
+				_starpu_init_and_load_perfmodel(entry->task->cl->energy_model);
 
 			entry = entry->next;
 		}

+ 12 - 12
src/drivers/driver_common/driver_common.c

@@ -207,7 +207,7 @@ void _starpu_driver_update_job_feedback(struct _starpu_job *j, struct _starpu_wo
 			_starpu_worker_update_profiling_info_executing(workerid, &measured_ts, 1,
 								       profiling_info->used_cycles,
 								       profiling_info->stall_cycles,
-								       profiling_info->power_consumed,
+								       profiling_info->energy_consumed,
 								       j->task->flops);
 			updated =  1;
 		}
@@ -252,32 +252,32 @@ void _starpu_driver_update_job_feedback(struct _starpu_job *j, struct _starpu_wo
 	if (!updated)
 		_starpu_worker_update_profiling_info_executing(workerid, NULL, 1, 0, 0, 0, 0);
 
-	if (profiling_info && profiling_info->power_consumed && cl->power_model && cl->power_model->benchmarking)
+	if (profiling_info && profiling_info->energy_consumed && cl->energy_model && cl->energy_model->benchmarking)
 	{
 #ifdef STARPU_OPENMP
-		double power_consumed = profiling_info->power_consumed;
-		unsigned do_update_power_model;
+		double energy_consumed = profiling_info->energy_consumed;
+		unsigned do_update_energy_model;
 		if (j->continuation)
 		{
-			j->cumulated_power_consumed += power_consumed;
-			do_update_power_model = 0;
+			j->cumulated_energy_consumed += energy_consumed;
+			do_update_energy_model = 0;
 		}
 		else 
 		{
 			if (j->discontinuous)
 			{
-				power_consumed += j->cumulated_power_consumed;
+				energy_consumed += j->cumulated_energy_consumed;
 			}
-			do_update_power_model = 1;
+			do_update_energy_model = 1;
 		}
 #else
-		const double power_consumed = profiling_info->power_consumed;
-		const unsigned do_update_power_model = 1;
+		const double energy_consumed = profiling_info->energy_consumed;
+		const unsigned do_update_energy_model = 1;
 #endif
 
-		if (do_update_power_model)
+		if (do_update_energy_model)
 		{
-			_starpu_update_perfmodel_history(j, j->task->cl->power_model, perf_arch, worker->devid, power_consumed, j->nimpl);
+			_starpu_update_perfmodel_history(j, j->task->cl->energy_model, perf_arch, worker->devid, energy_consumed, j->nimpl);
 		}
 	}
 }

+ 5 - 5
src/drivers/opencl/driver_opencl_utils.c

@@ -600,16 +600,16 @@ int starpu_opencl_collect_stats(cl_event event STARPU_ATTRIBUTE_UNUSED)
 	}
 #endif
 #ifdef CL_PROFILING_POWER_CONSUMED
-	if (info && (starpu_profiling_status_get() || (task->cl && task->cl->power_model && task->cl->power_model->benchmarking)))
+	if (info && (starpu_profiling_status_get() || (task->cl && task->cl->energy_model && task->cl->energy_model->benchmarking)))
 	{
 		cl_int err;
-		double power_consumed;
+		double energy_consumed;
 		size_t size;
-		err = clGetEventProfilingInfo(event, CL_PROFILING_POWER_CONSUMED, sizeof(power_consumed), &power_consumed, &size);
+		err = clGetEventProfilingInfo(event, CL_PROFILING_POWER_CONSUMED, sizeof(energy_consumed), &energy_consumed, &size);
 		if (err != CL_SUCCESS) STARPU_OPENCL_REPORT_ERROR(err);
-		STARPU_ASSERT(size == sizeof(power_consumed));
+		STARPU_ASSERT(size == sizeof(energy_consumed));
 
-		info->power_consumed += power_consumed;
+		info->energy_consumed += energy_consumed;
 	}
 #endif
 

+ 5 - 5
src/profiling/profiling.c

@@ -144,8 +144,8 @@ struct starpu_profiling_task_info *_starpu_allocate_profiling_info_if_needed(str
 {
 	struct starpu_profiling_task_info *info = NULL;
 
-	/* If we are benchmarking, we need room for the power consumption */
-	if (starpu_profiling_status_get() || (task->cl && task->cl->power_model && (task->cl->power_model->benchmarking || _starpu_get_calibrate_flag())))
+	/* If we are benchmarking, we need room for the energy */
+	if (starpu_profiling_status_get() || (task->cl && task->cl->energy_model && (task->cl->energy_model->benchmarking || _starpu_get_calibrate_flag())))
 	{
 		info = (struct starpu_profiling_task_info *) calloc(1, sizeof(struct starpu_profiling_task_info));
 		STARPU_ASSERT(info);
@@ -173,7 +173,7 @@ static void _starpu_worker_reset_profiling_info_with_lock(int workerid)
 
 	worker_info[workerid].used_cycles = 0;
 	worker_info[workerid].stall_cycles = 0;
-	worker_info[workerid].power_consumed = 0;
+	worker_info[workerid].energy_consumed = 0;
 	worker_info[workerid].flops = 0;
 
 	/* We detect if the worker is already sleeping or doing some
@@ -279,7 +279,7 @@ void _starpu_worker_register_executing_end(int workerid)
 }
 
 
-void _starpu_worker_update_profiling_info_executing(int workerid, struct timespec *executing_time, int executed_tasks, uint64_t used_cycles, uint64_t stall_cycles, double power_consumed, double flops)
+void _starpu_worker_update_profiling_info_executing(int workerid, struct timespec *executing_time, int executed_tasks, uint64_t used_cycles, uint64_t stall_cycles, double energy_consumed, double flops)
 {
 	if (starpu_profiling_status_get())
 	{
@@ -290,7 +290,7 @@ void _starpu_worker_update_profiling_info_executing(int workerid, struct timespe
 
 		worker_info[workerid].used_cycles += used_cycles;
 		worker_info[workerid].stall_cycles += stall_cycles;
-		worker_info[workerid].power_consumed += power_consumed;
+		worker_info[workerid].energy_consumed += energy_consumed;
 		worker_info[workerid].executed_tasks += executed_tasks;
 		worker_info[workerid].flops += flops;
 

+ 1 - 1
src/profiling/profiling.h

@@ -32,7 +32,7 @@ void _starpu_worker_reset_profiling_info(int workerid);
 
 /* Update the per-worker profiling info after a task (or more) was executed.
  * This tells StarPU how much time was spent doing computation. */
-void _starpu_worker_update_profiling_info_executing(int workerid, struct timespec *executing_time, int executed_tasks, uint64_t used_cycles, uint64_t stall_cycles, double consumed_power, double flops);
+void _starpu_worker_update_profiling_info_executing(int workerid, struct timespec *executing_time, int executed_tasks, uint64_t used_cycles, uint64_t stall_cycles, double consumed_energy, double flops);
 
 /* Record the date when the worker started to sleep. This permits to measure
  * how much time was spent sleeping. */

+ 8 - 8
src/profiling/profiling_helpers.c

@@ -1,6 +1,6 @@
 /* StarPU --- Runtime system for heterogeneous multicore architectures.
  *
- * Copyright (C) 2011, 2013  Université de Bordeaux
+ * Copyright (C) 2011, 2013, 2016  Université de Bordeaux
  *
 * StarPU is free software; you can redistribute it and/or modify
 * it under the terms of the GNU Lesser General Public License as published by
@@ -118,13 +118,13 @@ void starpu_profiling_worker_helper_display_summary(void)
 				total_time, executing_time, sleeping_time, total_time - executing_time - sleeping_time);
 			if (info.used_cycles || info.stall_cycles)
 				fprintf(stderr, "\t%lu Mcy %lu Mcy stall\n", info.used_cycles/1000000, info.stall_cycles/1000000);
-			if (info.power_consumed)
-				fprintf(stderr, "\t%f J consumed\n", info.power_consumed);
+			if (info.energy_consumed)
+				fprintf(stderr, "\t%f J consumed\n", info.energy_consumed);
 			if (info.flops)
 				fprintf(stderr, "\t%f GFlop/s\n\n", info.flops / total_time / 1000000);
 		}
 
-		sum_consumed += info.power_consumed;
+		sum_consumed += info.energy_consumed;
 	}
 
 	if (profiling)
@@ -133,11 +133,11 @@ void starpu_profiling_worker_helper_display_summary(void)
 		if (strval_idle_power)
 		{
 			double idle_power = atof(strval_idle_power); /* Watt */
-			double idle_consumption = idle_power * overall_time / 1000.; /* J */
+			double idle_energy = idle_power * overall_time / 1000.; /* J */
 
-			fprintf(stderr, "Idle consumption: %.2lf J\n", idle_consumption);
-			fprintf(stderr, "Total consumption: %.2lf J\n",
-				sum_consumed + idle_consumption);
+			fprintf(stderr, "Idle energy: %.2lf J\n", idle_energy);
+			fprintf(stderr, "Total energy: %.2lf J\n",
+				sum_consumed + idle_energy);
 		}
 	}
 	fprintf(stderr, "#---------------------\n");

+ 1 - 1
src/sched_policies/component_heft.c

@@ -128,7 +128,7 @@ static int heft_progress_one(struct starpu_sched_component *component)
 			int offset = component->nchildren * best_task;
 			int icomponent = suitable_components[offset + i];
 #ifdef STARPU_DEVEL
-#warning FIXME: take power consumption into account
+#warning FIXME: take energy consumption into account
 #endif
 			double tmp = starpu_mct_compute_fitness(d,
 						     estimated_ends_with_task[offset + icomponent],

+ 1 - 1
src/sched_policies/component_mct.c

@@ -69,7 +69,7 @@ static int mct_push_task(struct starpu_sched_component * component, struct starp
 	{
 		int icomponent = suitable_components[i];
 #ifdef STARPU_DEVEL
-#warning FIXME: take power consumption into account
+#warning FIXME: take energy consumption into account
 #endif
 		double tmp = starpu_mct_compute_fitness(d,
 					     estimated_ends_with_task[icomponent],

+ 9 - 9
src/sched_policies/deque_modeling_policy_data_aware.c

@@ -602,7 +602,7 @@ static void compute_all_performance_predictions(struct starpu_task *task,
 						double *max_exp_endp,
 						double *best_exp_endp,
 						double local_data_penalty[nworkers][STARPU_MAXIMPLEMENTATIONS],
-						double local_power[nworkers][STARPU_MAXIMPLEMENTATIONS],
+						double local_energy[nworkers][STARPU_MAXIMPLEMENTATIONS],
 						int *forced_worker, int *forced_impl, unsigned sched_ctx_id, unsigned sorted_decision)
 {
 	int calibrating = 0;
@@ -684,14 +684,14 @@ static void compute_all_performance_predictions(struct starpu_task *task,
 				/* TODO : conversion time */
 				local_task_length[worker_ctx][nimpl] = starpu_task_bundle_expected_length(bundle, perf_arch, nimpl);
 				local_data_penalty[worker_ctx][nimpl] = starpu_task_bundle_expected_data_transfer_time(bundle, memory_node);
-				local_power[worker_ctx][nimpl] = starpu_task_bundle_expected_power(bundle, perf_arch,nimpl);
+				local_energy[worker_ctx][nimpl] = starpu_task_bundle_expected_energy(bundle, perf_arch,nimpl);
 
 			}
 			else
 			{
 				local_task_length[worker_ctx][nimpl] = starpu_task_expected_length(task, perf_arch, nimpl);
 				local_data_penalty[worker_ctx][nimpl] = starpu_task_expected_data_transfer_time(memory_node, task);
-				local_power[worker_ctx][nimpl] = starpu_task_expected_power(task, perf_arch,nimpl);
+				local_energy[worker_ctx][nimpl] = starpu_task_expected_energy(task, perf_arch,nimpl);
 				double conversion_time = starpu_task_expected_conversion_time(task, perf_arch, nimpl);
 				if (conversion_time > 0.0)
 					local_task_length[worker_ctx][nimpl] += conversion_time;
@@ -757,8 +757,8 @@ static void compute_all_performance_predictions(struct starpu_task *task,
 				nimpl_best = nimpl;
 			}
 
-			if (isnan(local_power[worker_ctx][nimpl]))
-				local_power[worker_ctx][nimpl] = 0.;
+			if (isnan(local_energy[worker_ctx][nimpl]))
+				local_energy[worker_ctx][nimpl] = 0.;
 
 		}
 		worker_ctx++;
@@ -790,7 +790,7 @@ static double _dmda_push_task(struct starpu_task *task, unsigned prio, unsigned
 	unsigned nworkers_ctx = workers->nworkers;
 	double local_task_length[nworkers_ctx][STARPU_MAXIMPLEMENTATIONS];
 	double local_data_penalty[nworkers_ctx][STARPU_MAXIMPLEMENTATIONS];
-	double local_power[nworkers_ctx][STARPU_MAXIMPLEMENTATIONS];
+	double local_energy[nworkers_ctx][STARPU_MAXIMPLEMENTATIONS];
 
 	/* Expected end of this task on the workers */
 	double exp_end[nworkers_ctx][STARPU_MAXIMPLEMENTATIONS];
@@ -811,7 +811,7 @@ static double _dmda_push_task(struct starpu_task *task, unsigned prio, unsigned
 					    &max_exp_end,
 					    &best_exp_end,
 					    local_data_penalty,
-					    local_power,
+					    local_energy,
 					    &forced_best,
 					    &forced_impl, sched_ctx_id, sorted_decision);
 
@@ -840,7 +840,7 @@ static double _dmda_push_task(struct starpu_task *task, unsigned prio, unsigned
 				}
 				fitness[worker_ctx][nimpl] = dt->alpha*(exp_end[worker_ctx][nimpl] - best_exp_end)
 					+ dt->beta*(local_data_penalty[worker_ctx][nimpl])
-					+ dt->_gamma*(local_power[worker_ctx][nimpl]);
+					+ dt->_gamma*(local_energy[worker_ctx][nimpl]);
 
 				if (exp_end[worker_ctx][nimpl] > max_exp_end)
 				{
@@ -858,7 +858,7 @@ static double _dmda_push_task(struct starpu_task *task, unsigned prio, unsigned
 					best_in_ctx = worker_ctx;
 					selected_impl = nimpl;
 
-					//_STARPU_DEBUG("best fitness (worker %d) %e = alpha*(%e) + beta(%e) +gamma(%e)\n", worker, best_fitness, exp_end[worker][nimpl] - best_exp_end, local_data_penalty[worker][nimpl], local_power[worker][nimpl]);
+					//_STARPU_DEBUG("best fitness (worker %d) %e = alpha*(%e) + beta(%e) +gamma(%e)\n", worker, best_fitness, exp_end[worker][nimpl] - best_exp_end, local_data_penalty[worker][nimpl], local_energy[worker][nimpl]);
 
 				}
 			}

+ 3 - 3
src/sched_policies/helper_mct.c

@@ -1,6 +1,6 @@
 /* StarPU --- Runtime system for heterogeneous multicore architectures.
  *
- * Copyright (C) 2013-2014  Université de Bordeaux
+ * Copyright (C) 2013-2014, 2016  Université de Bordeaux
  * Copyright (C) 2013  INRIA
  * Copyright (C) 2013  Simon Archipoff
  *
@@ -107,13 +107,13 @@ static double compute_expected_time(double now, double predicted_end, double pre
 	return predicted_end;
 }
 
-double starpu_mct_compute_fitness(struct _starpu_mct_data * d, double exp_end, double min_exp_end, double max_exp_end, double transfer_len, double local_power)
+double starpu_mct_compute_fitness(struct _starpu_mct_data * d, double exp_end, double min_exp_end, double max_exp_end, double transfer_len, double local_energy)
 {
 	/* Note: the expected end includes the data transfer duration, which we want to be able to tune separately */
 
 	return d->alpha * (exp_end - min_exp_end)
 		+ d->beta * transfer_len
-		+ d->_gamma * local_power
+		+ d->_gamma * local_energy
 		+ d->_gamma * d->idle_power * (exp_end - max_exp_end);
 }
 

+ 2 - 2
src/sched_policies/helper_mct.h

@@ -1,6 +1,6 @@
 /* StarPU --- Runtime system for heterogeneous multicore architectures.
  *
- * Copyright (C) 2013-2014  Université de Bordeaux
+ * Copyright (C) 2013-2014, 2016  Université de Bordeaux
  *
 * StarPU is free software; you can redistribute it and/or modify
 * it under the terms of the GNU Lesser General Public License as published by
@@ -28,4 +28,4 @@ int starpu_mct_compute_expected_times(struct starpu_sched_component *component,
 		double *estimated_lengths, double *estimated_transfer_length, double *estimated_ends_with_task,
 		double *min_exp_end_with_task, double *max_exp_end_with_task, int *suitable_components);
 
-double starpu_mct_compute_fitness(struct _starpu_mct_data * d, double exp_end, double min_exp_end, double max_exp_end, double transfer_len, double local_power);
+double starpu_mct_compute_fitness(struct _starpu_mct_data * d, double exp_end, double min_exp_end, double max_exp_end, double transfer_len, double local_energy);

+ 6 - 6
src/sched_policies/parallel_heft.c

@@ -292,7 +292,7 @@ static int _parallel_heft_push_task(struct starpu_task *task, unsigned prio, uns
 
 	double local_task_length[nworkers_ctx][STARPU_MAXIMPLEMENTATIONS];
 	double local_data_penalty[nworkers_ctx][STARPU_MAXIMPLEMENTATIONS];
-	double local_power[nworkers_ctx][STARPU_MAXIMPLEMENTATIONS];
+	double local_energy[nworkers_ctx][STARPU_MAXIMPLEMENTATIONS];
 	double local_exp_end[nworkers_ctx][STARPU_MAXIMPLEMENTATIONS];
 	double fitness[nworkers_ctx][STARPU_MAXIMPLEMENTATIONS];
 
@@ -401,11 +401,11 @@ static int _parallel_heft_push_task(struct starpu_task *task, unsigned prio, uns
 			}
 
 
-			local_power[worker_ctx][nimpl] = starpu_task_expected_power(task, perf_arch,nimpl);
-			//_STARPU_DEBUG("Scheduler parallel heft: task length (%lf) local power (%lf) worker (%u) kernel (%u) \n", local_task_length[worker],local_power[worker],worker,nimpl);
+			local_energy[worker_ctx][nimpl] = starpu_task_expected_energy(task, perf_arch,nimpl);
+			//_STARPU_DEBUG("Scheduler parallel heft: task length (%lf) local energy (%lf) worker (%u) kernel (%u) \n", local_task_length[worker],local_energy[worker],worker,nimpl);
 
-			if (isnan(local_power[worker_ctx][nimpl]))
-				local_power[worker_ctx][nimpl] = 0.;
+			if (isnan(local_energy[worker_ctx][nimpl]))
+				local_energy[worker_ctx][nimpl] = 0.;
 
 		}
 		worker_ctx++;
@@ -437,7 +437,7 @@ static int _parallel_heft_push_task(struct starpu_task *task, unsigned prio, uns
 
 				fitness[worker_ctx][nimpl] = hd->alpha*(local_exp_end[worker_ctx][nimpl] - best_exp_end)
 						+ hd->beta*(local_data_penalty[worker_ctx][nimpl])
-						+ hd->_gamma*(local_power[worker_ctx][nimpl]);
+						+ hd->_gamma*(local_energy[worker_ctx][nimpl]);
 
 				if (local_exp_end[worker_ctx][nimpl] > max_exp_end)
 					/* This placement will make the computation
 					/* This placement will make the computation
 					/* This placement will make the computation