From: afanfakh
Date: Thu, 13 Nov 2014 14:09:01 +0000 (+0100)
Subject: adding ref. keys
X-Git-Tag: pdsec15_submission~67
X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/mpi-energy2.git/commitdiff_plain/c0f811adf4e7fb3bb57c6042f027be6adc4db147?ds=sidebyside;hp=8ee15c2ba96800b66995243e04a7a34970bca7d9

adding ref. keys
---

diff --git a/Heter_paper.tex b/Heter_paper.tex
index 3e88c0d..6c37115 100644
--- a/Heter_paper.tex
+++ b/Heter_paper.tex
@@ -82,27 +82,29 @@ the normalized performance equation, as follows:
 \section{Introduction}
 \label{sec.intro}
-Modern processors continue to increased in a performance.
+Modern processors continue to increase in performance.
 CPU manufacturers are competing to achieve the maximum number
 of floating point operations per second (FLOPS).
 Thus, the energy consumption and the heat dissipation increase
 drastically accordingly, because the number of FLOPS
-is linearly related to the power consumption of a CPU~\cite{51}.
+is linearly related to the power consumption of a CPU~\cite{Luley_Energy.efficiency.evaluation.and.benchmarking}.
 As an example of such power-hungry clusters, Tianhe-2 reached
-the top of the Top500 list in June 2014 \cite{43}. It has more than
-3 millions of cores and consumed more than 17.8 megawatts.
-Moreover, according to the U.S. annual energy outlook 2014 \cite{60},
-the price of energy for 1 megawatt-hour was approximately equal to \$70.
+the top of the Top500 list in June 2014~\cite{TOP500_Supercomputers_Sites}.
+It has more than 3 million cores and consumes more than 17.8 megawatts.
+Moreover, according to the U.S. Annual Energy Outlook 2014
+\cite{U.S_Annual.Energy.Outlook.2014}, the price of energy for 1 megawatt-hour
+was approximately \$70.
 Therefore, the energy consumption of the
 Tianhe-2 platform costs more than \$10 million per
 year. For this reason, heterogeneous clusters must offer better
 energy efficiency, given the rising energy cost and the environmental
 impact. Therefore, green computing clusters with a maximum number of
 FLOPS per watt are required nowadays. For example, the GSIC center of Tokyo
-became the top of the Green500 list in June 2014 \cite{59}. This platform
-has more than four thousand of MFLOPS per watt. Dynamic voltage and frequency
-scaling (DVFS) is a process used widely to reduce the energy consumption of
-the processor. In a heterogeneous clusters enabled DVFS, many researchers
+reached the top of the Green500 list in June 2014~\cite{Green500_List}.
+This platform delivers more than four thousand MFLOPS per watt. Dynamic
+voltage and frequency scaling (DVFS) is a technique widely used to reduce the energy
+consumption of the processor. In DVFS-enabled heterogeneous clusters, many researchers
 have used DVFS in different ways. DVFS can minimize the energy consumption,
 but it has the disadvantage of increasing the performance degradation.
 Therefore, researchers have used different optimization strategies to overcome
@@ -136,45 +138,52 @@ also allowed in the graphical processors GPUs, to achieved the same goal. Apply
 DVFS has a dramatic side effect if it is applied at minimum levels to gain more
 energy reduction: it produces a high percentage of performance degradation for parallel
 applications. Many researchers have used different strategies to solve this
-nonlinear problem for example in~\cite{19,42}, their methods add big overheads to
-the algorithm to select the suitable frequency. In this paper we present a method
+nonlinear problem, for example in~\cite{Hao_Learning.based.DVFS,Dhiman_Online.Learning.Power.Management};
+their methods add big overheads to the algorithm that selects the suitable frequency.
+In this paper we present a method
 to find the optimal set of frequency scaling factors for a heterogeneous cluster
 that simultaneously optimizes both the energy consumption and the execution time without adding a big
-overhead. This work is developed from our previous work of a homogeneous cluster~\cite{45}.
+overhead. This work builds on our previous work on homogeneous clusters~\cite{Our_first_paper}.
 Therefore, we are interested in presenting some works that concern DVFS-enabled heterogeneous
 clusters. In general, works on heterogeneous clusters fall into two categories:
 GPU-CPU heterogeneous clusters and CPU-CPU heterogeneous clusters. In GPU-CPU
 heterogeneous clusters, some parallel tasks are executed on GPUs and the others are executed
-on a CPUs. As an example of this works, Luley et al.~\cite{51}, proposed a heterogeneous
+on CPUs. As an example of such works, Luley et
+al.~\cite{Luley_Energy.efficiency.evaluation.and.benchmarking} proposed a heterogeneous
 cluster composed of Intel Xeon CPUs and NVIDIA GPUs. Their main goal is to determine the
 energy efficiency as a function of performance per watt; the best trade-off is reached when the
-performance per watt function is maximized. In the work of Kia Ma et al.~\cite{49},
-They developed a scheduling algorithm to distributed different workloads proportional
-to the computing power of the node to be executed on a CPU or a GPU, emphasize all tasks
-must be finished in the same time.
-Recently, Rong et al.~\cite{50}, Their study explain that a heterogeneous clusters enabled
-DVFS using GPUs and CPUs gave better energy and performance efficiency than other clusters
-composed of only CPUs. The CPUs-CPUs heterogeneous clusters consist of number of computing
-nodes all of the type CPU. Our work in this paper can be classified to this type of the
-clusters. As an example of this works see Naveen et al.~\cite{52} work, They developed a
-policy to dynamically assigned the frequency to a heterogeneous cluster. The goal is to
-minimizing a fixed metric of $energy*delay^2$. Where our proposed method is automatically
+performance per watt function is maximized. Kai Ma et
+al.~\cite{KaiMa_Holistic.Approach.to.Energy.Efficiency.in.GPU-CPU} developed a scheduling
+algorithm that distributes workloads proportionally to the computing power of each node,
+whether it is a CPU or a GPU, so that all tasks finish at the same time.
+Recently, Rong et al.~\cite{Rong_Effects.of.DVFS.on.K20.GPU} showed that
+DVFS-enabled heterogeneous clusters using GPUs and CPUs give better energy and performance
+efficiency than clusters composed only of CPUs.
+CPU-CPU heterogeneous clusters consist of a number of computing nodes, all of the CPU type.
+Our work in this paper belongs to this type of cluster.
+As an example of such works, Naveen et al.~\cite{Naveen_Power.Efficient.Resource.Scaling}
+developed a policy that dynamically assigns the frequency on a heterogeneous cluster.
+Their goal is to minimize a fixed metric of $energy \cdot delay^2$, whereas our proposed method automatically
 optimizes the relation between the energy and the delay of the iterative application.
-Other works such as Lizhe et al.~\cite{53}, their algorithm divided the executed tasks into
-two types: the critical and non critical tasks. The algorithm scaled down the frequency of
-the non critical tasks as function to the amount of the slack and communication times that
+Other works, such as Lizhe et al.~\cite{Lizhe_Energy.aware.parallel.task.scheduling},
+divide the executed tasks into two types: critical and
+non-critical tasks. Their algorithm scales down the frequency of the non-critical tasks
+according to the amount of slack and communication time they
 have, with a maximum performance degradation percentage of 10\%. In our method there is no
 fixed bound on the performance degradation percentage; the bound is dynamically computed
 according to the energy and performance trade-off relation of the executed application.
 Some approaches use a heterogeneous cluster composed of two different types
-of Intel and AMD processors such as~\cite{54} and \cite{55}, they predicated both the energy
+of Intel and AMD processors, such as~\cite{Joshi_Blackbox.prediction.of.impact.of.DVFS}
+and \cite{Spiliopoulos_Green.governors.Adaptive.DVFS}. They predict both the energy
 and the performance for each frequency gear, then select the gear that gives
 the best trade-off. In contrast, our algorithm works over a heterogeneous platform composed of
-four different types of processors. Others approaches such as \cite{56} and \cite{57}, they
-are selected the best frequencies for a specified heterogeneous clusters offline using some
+four different types of processors. Other approaches, such as
+\cite{Shelepov_Scheduling.on.Heterogeneous.Multicore} and \cite{Li_Minimizing.Energy.Consumption.for.Frame.Based.Tasks},
+select the best frequencies for a specified heterogeneous cluster offline using some
 heuristic methods, while our proposed algorithm works online during the execution time of the
-iterative application. Greedy dynamic approach used by Chen et al.~\cite{58}, minimized
-the power consumption of a heterogeneous severs with time/space complexity, this approach
+iterative application. The greedy dynamic programming approach used by Chen et al.~\cite{Chen_DVFS.under.quality.of.service.requirements}
+minimizes the power consumption of heterogeneous servers, but its time/space complexity gives this approach
 considerable overhead. Our proposed scaling algorithm has a very small overhead and
 works without any prior analysis of the application time complexity.
@@ -230,8 +239,9 @@ as in EQ (\ref{eq:s}).
 The execution time of the computation part is linearly proportional to the frequency
 scaling factor $S$, but the communication time is not affected by the scaling factor
 because the processors involved remain idle during the
-communications~\cite{17}. The communication time for a task is the summation of
-periods of time that begin with an MPI call for sending or receiving a message
+communications~\cite{Freeh_Exploring.the.Energy.Time.Tradeoff}.
+The communication time for a task is the sum of the periods of
+time that begin with an MPI call for sending or receiving a message
 until the message is synchronously sent or received.

 Since in a heterogeneous platform, each node has different characteristics,
@@ -259,14 +269,16 @@ equal to the execution time of one iteration as in EQ(\ref{eq:perf}) multiplied
 by the number of iterations of that application.

 This prediction model is based on our model for predicting the execution time of
-message passing distributed applications for homogeneous architectures~\cite{45}.
+message passing distributed applications for homogeneous architectures~\cite{Our_first_paper}.
 The execution time prediction model is used in our method for optimizing both
 the energy consumption and the performance of iterative methods, which is presented in the
 following sections.
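To make the prediction model concrete, here is a minimal Python sketch of one plausible reading of EQ(\ref{eq:perf}): the computation time of each node is stretched by its scaling factor, one iteration is bounded by the slowest scaled node plus the communication time (which is unaffected by scaling), and the total time is that value multiplied by the number of iterations. The equation itself is not reproduced in these hunks, so the exact form of the communication term is an assumption, and the function and variable names below are hypothetical.

    # tcp: per-node computation times at maximum frequency (s); tcm: communication time (s)
    # s: per-node frequency scaling factors (>= 1, maximum frequency / new frequency)
    def predict_total_time(tcp, tcm, s, iterations):
        """Predicted execution time of the iterative application after scaling (assumed EQ (eq:perf))."""
        scaled_cp = [t * si for t, si in zip(tcp, s)]  # computation stretches linearly with S
        return (max(scaled_cp) + tcm) * iterations     # the slowest node bounds each iteration

    # hypothetical example: four nodes, 100 iterations
    print(predict_total_time([0.8, 1.0, 1.2, 0.9], 0.3, [1.0, 1.25, 1.0, 1.5], 100))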
 \subsection{Energy model for heterogeneous platform}
-Many researchers~\cite{9,3,15,26} divide the power consumed by a processor into
+Many researchers~\cite{Malkowski_energy.efficient.high.performance.computing,
+Rauber_Analytical.Modeling.for.Energy,Zhuo_Energy.efficient.Dynamic.Task.Scheduling,
+Rizvandi_Some.Observations.on.Optimal.Frequency} divide the power consumed by a processor into
 two power metrics: the static and the dynamic power. While the first one is
 consumed as long as the computing unit is turned on, the latter is only consumed during
 computation times. The dynamic power $P_{d}$ is related to the switching
@@ -293,11 +305,10 @@ where $T$ is the execution time of the program, $T_{cp}$ is the computation
 time and $T_{cp} \leq T$. $T_{cp}$ may be equal to $T$ if there is no
 communication and no slack time.

-The main objective of DVFS operation is to
-reduce the overall energy consumption~\cite{37}. The operational frequency $F$
-depends linearly on the supply voltage $V$, i.e., $V = \beta \cdot F$ with some
+The main objective of DVFS operation is to reduce the overall energy consumption~\cite{Le_DVFS.Laws.of.Diminishing.Returns}.
+The operational frequency $F$ depends linearly on the supply voltage $V$, i.e., $V = \beta \cdot F$ with some
 constant $\beta$. This equation is used to study the change of the dynamic
-voltage with respect to various frequency values in~\cite{3}. The reduction
+voltage with respect to various frequency values in~\cite{Rauber_Analytical.Modeling.for.Energy}. The reduction
 process of the frequency can be expressed by the scaling factor $S$ which is
 the ratio between the maximum and the new frequency as in EQ~(\ref{eq:s}).
 The CPU governors are power schemes supplied by the operating
@@ -318,7 +329,7 @@ where $ {P}_\textit{dNew}$ and $P_{dOld}$ are the dynamic power consumed with the
 new frequency and the maximum frequency respectively.

 According to EQ(\ref{eq:pdnew}) the dynamic power is reduced by a factor of $S^{-3}$ when
-reducing the frequency by a factor of $S$~\cite{3}. Since the FLOPS of a CPU is proportional
+reducing the frequency by a factor of $S$~\cite{Rauber_Analytical.Modeling.for.Energy}. Since the FLOPS of a CPU is proportional
 to the frequency of a CPU, the computation time is increased proportionally to $S$.
 The new dynamic energy is the dynamic power multiplied by the new time of computation
 and is given by the following equation:
@@ -327,7 +338,8 @@ and is given by the following equation:
   E_\textit{dNew} = P_{dOld} \cdot S^{-3} \cdot (Tcp \cdot S) = S^{-2} \cdot P_{dOld} \cdot Tcp
 \end{equation}
 The static power is related to the power leakage of the CPU and is consumed during computation
-and even when idle. As in~\cite{3,46}, we assume that the static power of a processor is constant
+and even when idle. As in~\cite{Rauber_Analytical.Modeling.for.Energy,Zhuo_Energy.efficient.Dynamic.Task.Scheduling},
+we assume that the static power of a processor is constant
 during idle and computation periods, and for all its available frequencies.
 The static energy is the static power multiplied by the execution time of the program.
 According to the execution time model in EQ(\ref{eq:perf}), the execution time of the program
 is the summation of the computation and the communication times. The computation time is linearly related
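To see how these relations combine, the following Python sketch computes a node's energy under a given scaling factor: the dynamic part follows EQ(\ref{eq:dyn_new}) ($S^{-2} \cdot P_{dOld} \cdot T_{cp}$) and the static part is the constant static power paid over the whole execution time. How the per-node energies are aggregated into the total energy of EQ(\ref{eq:energy}) is not shown in these hunks, so the plain summation below is an assumption, and all power and timing values are hypothetical.

    def node_energy(pd_old, ps, tcp, s, total_time):
        """One node: dynamic energy S^-2 * PdOld * Tcp plus constant static power over the whole run."""
        return (s ** -2) * pd_old * tcp + ps * total_time

    def total_energy(pd_old, ps, tcp, s, total_time):
        # assumed aggregation: per-node energies simply add up
        return sum(node_energy(pd, p, t, si, total_time)
                   for pd, p, t, si in zip(pd_old, ps, tcp, s))

    # hypothetical powers (W), computation times (s) and scaling factors for four nodes
    pd_old, ps = [20.0, 17.0, 22.0, 15.0], [5.0, 4.25, 5.5, 3.75]
    tcp, s = [0.8, 1.0, 1.2, 0.9], [1.0, 1.25, 1.0, 1.5]
    total_time = max(t * si for t, si in zip(tcp, s)) + 0.3   # assumed time model
    print(total_energy(pd_old, ps, tcp, s, total_time))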
@@ -362,7 +374,7 @@ for each processor.  It is computed as follows:
 Reducing the frequencies of the processors according to the vector of scaling
 factors $(S_1, S_2,\dots, S_N)$ may degrade the performance of the application
 and thus increase the static energy because the execution time is
-increased~\cite{36}. We can measure the overall energy consumption for the iterative
+increased~\cite{Kim_Leakage.Current.Moore.Law}. The overall energy consumption of the iterative
 application is measured as the energy consumption of one iteration, as in EQ(\ref{eq:energy}),
 multiplied by the number of iterations of that application.

@@ -380,7 +392,7 @@ of the application might not be the optimal one. It is not trivial to select the
 frequency scaling factor for each processor while considering the characteristics of each processor
 (computation power, range of frequencies, dynamic and static powers) and the task executed
 (computation/communication ratio) in order to reduce the overall energy consumption and not
-significantly increase the execution time. In our previous work~\cite{45}, we proposed a method
+significantly increase the execution time. In our previous work~\cite{Our_first_paper}, we proposed a method
 that selects the optimal frequency scaling factor for a homogeneous cluster executing a message passing
 iterative synchronous application while giving the best trade-off between the energy
 consumption and the performance for such applications. In this work we are interested in
@@ -391,8 +403,8 @@ between energy consumption and performance.

 The relation between the energy consumption and the execution time for an application is
 complex and nonlinear. Thus, unlike the relation between the execution time
 and the scaling factor, the relation between the energy and the frequency scaling
-factors is nonlinear, for more details refer to~\cite{17}.  Moreover, they are
-not measured using the same metric.  To solve this problem, we normalize the
+factors is nonlinear; for more details refer to~\cite{Freeh_Exploring.the.Energy.Time.Tradeoff}.
+Moreover, they are not measured using the same metric. To solve this problem, we normalize the
 execution time by computing the ratio between the new execution time (after
 scaling down the frequencies of some processors) and the initial one (with
 maximum frequency for all nodes) as follows:
@@ -464,7 +476,7 @@ where $N$ is the number of nodes and $F$ is the number of available frequencies
 Then we can select the optimal set of scaling factors that satisfies EQ~(\ref{eq:max}).
 Our objective function can work with any energy model or any power values for each node
 (static and dynamic powers). However, the greatest energy reduction can be achieved when
-the energy curve has a convex form as shown in~\cite{15,3,19}.
+the energy curve has a convex form, as shown in~\cite{Zhuo_Energy.efficient.Dynamic.Task.Scheduling,Rauber_Analytical.Modeling.for.Energy,Hao_Learning.based.DVFS}.

 \section{The scaling factors selection algorithm for heterogeneous platforms}
 \label{sec.optim}
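The objective behind EQ~(\ref{eq:max}) can be illustrated with a short Python sketch: the execution time and the energy of one candidate vector of scaling factors are normalized against the all-maximum-frequency case, and the score to maximize is the normalized performance minus the normalized energy. Exhaustively scoring all $F^N$ combinations is precisely what the selection algorithm of the next section avoids, so this sketch illustrates only the objective, not that algorithm; it reuses the assumed time and energy models of the earlier sketches, and every value is hypothetical.

    def predict_time(tcp, tcm, s):
        return max(t * si for t, si in zip(tcp, s)) + tcm          # assumed time model

    def predict_energy(pd_old, ps, tcp, s, time):
        return sum((si ** -2) * pd * t + p * time                  # assumed energy model
                   for pd, p, t, si in zip(pd_old, ps, tcp, s))

    def tradeoff_score(pd_old, ps, tcp, tcm, s):
        ones = [1.0] * len(tcp)                                    # all nodes at maximum frequency
        t_max, t_new = predict_time(tcp, tcm, ones), predict_time(tcp, tcm, s)
        e_max = predict_energy(pd_old, ps, tcp, ones, t_max)
        e_new = predict_energy(pd_old, ps, tcp, s, t_new)
        perf_norm = t_max / t_new       # normalized performance: initial time over new time
        energy_norm = e_new / e_max     # normalized energy: new energy over initial energy
        return perf_norm - energy_norm  # the distance to maximize, as in EQ (eq:max)

    # hypothetical four-node platform and one candidate vector of scaling factors
    print(tradeoff_score([20.0, 17.0, 22.0, 15.0], [5.0, 4.25, 5.5, 3.75],
                         [0.8, 1.0, 1.2, 0.9], 0.3, [1.0, 1.25, 1.0, 1.5]))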
@@ -609,7 +621,7 @@ which results in bigger energy savings.
 \section{Experimental results}
 \label{sec.expe}
 To evaluate the efficiency and the overall energy consumption reduction of algorithm~(\ref{HSA}),
-it was applied to the NAS parallel benchmarks  NPB v3.3 \cite{44}. The experiments were executed
+it was applied to the NAS parallel benchmarks NPB v3.3~\cite{NAS.Parallel.Benchmarks}. The experiments were executed
 on the simulator SimGrid/SMPI v3.10~\cite{casanova+giersch+legrand+al.2014.versatile}, which offers easy tools
 to create a heterogeneous platform and run message passing applications over it. The
 heterogeneous platform used in the experiments had one core per node because just one
@@ -622,7 +634,7 @@ for example if a benchmark was executed on 8 nodes, 2 nodes from each type were used.
 of CPUs do not specify the dynamic and the static power of their CPUs, so for each type of node they were chosen
 proportionally to its computing power (FLOPS). In the initial heterogeneous platform, while computing at the highest
 frequency, each node consumed power proportional to its computing power, of which 80\% was
-dynamic power and the rest was 20\% for the static power, the same assumption was made in \cite{45,3}.
+dynamic power and the remaining 20\% was static power; the same assumption was made in \cite{Our_first_paper,Rauber_Analytical.Modeling.for.Energy}.
 Finally, these nodes were connected via an Ethernet network with 1 Gbit/s bandwidth.
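Since the power figures are derived rather than measured, a small sketch of that derivation may help: each node type gets a total power proportional to its computing power, split 80%/20% into dynamic and static parts as stated above. The GFLOPS ratings and the proportionality constant below are entirely hypothetical placeholders, not the values used in the experiments.

    # Assign per-node-type dynamic/static powers proportional to computing power (hedged sketch).
    node_gflops = {"type1": 40.0, "type2": 50.0, "type3": 60.0, "type4": 35.0}  # hypothetical
    watts_per_gflops = 0.5                                                      # hypothetical

    powers = {}
    for node_type, gflops in node_gflops.items():
        total_power = gflops * watts_per_gflops              # power proportional to computing power
        powers[node_type] = {"dynamic": 0.8 * total_power,   # 80% dynamic, as assumed above
                             "static": 0.2 * total_power}    # 20% static
    print(powers)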
diff --git a/my_reference.bib b/my_reference.bib
index 1e0b36f..327238d 100644
--- a/my_reference.bib
+++ b/my_reference.bib
@@ -23,7 +23,7 @@
 }

-@inproceedings{3,
+@inproceedings{Rauber_Analytical.Modeling.for.Energy,
 author = {Rauber, Thomas and R\"{u}nger, Gudula},
 title = {Analytical Modeling and Simulation of the Energy Consumption of Independent Tasks},
 booktitle = {Proceedings of the Winter Simulation Conference},
@@ -94,7 +94,7 @@ keywords={Clustering algorithms;Delay effects;Dynamic voltage scaling;Energy con
 doi={10.1145/1362622.1362688}
 }

-@phdthesis {9,
+@phdthesis {Malkowski_energy.efficient.high.performance.computing,
 author = "Malkowski, Konrad",
 title = "Co-adapting scientific applications and architectures toward energy-efficient high performance computing",
 school = "The Pennsylvania State University",
@@ -169,7 +169,7 @@ keywords={parallel processing;power aware computing;workstation clusters;cluster
 doi={10.1109/CCGRID.2009.88}
 }

-@article{15,
+@article{Zhuo_Energy.efficient.Dynamic.Task.Scheduling,
 author = {Zhuo, Jianli and Chakrabarti, Chaitali},
 title = {Energy-efficient Dynamic Task Scheduling Algorithms for DVS Systems},
 journal = {ACM Trans. Embed. Comput. Syst.},
@@ -203,7 +203,7 @@ doi={10.1109/CCGRID.2009.88}
 address = {Washington, DC, USA}
 }

-@inproceedings{17,
+@inproceedings{Freeh_Exploring.the.Energy.Time.Tradeoff,
 author = {Freeh, Vincent W. and Pan, Feng and Kappiah, Nandini and Lowenthal, David K. and Springer, Rob},
 title = {Exploring the Energy-Time Tradeoff in {MPI} Programs on a Power-Scalable Cluster},
 booktitle = {Proceedings of the 19th IEEE International Parallel and Distributed Processing Symposium (IPDPS'05) - Papers - Volume 01},
@@ -230,7 +230,7 @@ ISSN={1530-2075}
 }

-@inproceedings{19,
+@inproceedings{Hao_Learning.based.DVFS,
 author = {Hao Shen and Jun Lu and Qinru Qiu},
@@ -346,7 +346,7 @@ ISSN={1530-2075}
 keywords = {benchmarking, dynamic voltage frequency scaling, energy optimization, high performance computing, memory latency}
 }

-@article{26,
+@article{Rizvandi_Some.Observations.on.Optimal.Frequency,
 author = {Rizvandi, Nikzad Babaii and Taheri, Javid and Zomaya, Albert Y.},
 title = {Some Observations on Optimal Frequency Selection in {DVFS}-based Energy Consumption Minimization},
 journal = {J. Parallel Distrib. Comput.},
@@ -485,7 +485,7 @@ author = {Wei Liu and Wei Du and Jing Chen and Wei Wang and GuoSun Zeng}
 }

-@article{36,
+@article{Kim_Leakage.Current.Moore.Law,
 author = {Kim, Nam Sung and Austin, Todd and Blaauw, David and Mudge, Trevor and Flautner, Kriszti\'{a}n and Hu, Jie S. and Irwin, Mary Jane and Kandemir, Mahmut and Narayanan, Vijaykrishnan},
 title = {Leakage Current: Moore's Law Meets Static Power},
 journal = {Computer},
@@ -503,7 +503,7 @@ author = {Wei Liu and Wei Du and Jing Chen and Wei Wang and GuoSun Zeng}
 address = {Los Alamitos, CA, USA}
 }

-@inproceedings{37,
+@inproceedings{Le_DVFS.Laws.of.Diminishing.Returns,
 author = {Le Sueur, Etienne and Gernot Heiser},
 month = {October},
 year = {2010},
@@ -568,7 +568,7 @@ doi={10.1145/1283780.1283825}
 address = {Washington, DC, USA}
 }

-@ARTICLE{42,
+@ARTICLE{Dhiman_Online.Learning.Power.Management,
 author={Dhiman, G. and Rosing, T.S.},
 journal={Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on},
 title={System-Level Power Management Using Online Learning},
@@ -582,7 +582,7 @@ doi={10.1109/TCAD.2009.2015740},
 ISSN={0278-0070}
 }

-@MISC{43,
+@MISC{TOP500_Supercomputers_Sites,
 title = {{TOP500 Supercomputers Sites}},
 url = {http://www.top500.org}
 }
@@ -592,7 +592,7 @@ ISSN={0278-0070}
 url = {http://simgrid.org}
 }

-@MISC{44,
+@MISC{NAS.Parallel.Benchmarks,
 author = {{NASA Advanced Supercomputing Division}},
 title = {{NAS} Parallel Benchmarks},
 url = {http://www.nas.nasa.gov/publications/npb.html},
@@ -602,7 +602,7 @@ ISSN={0278-0070}
 }

-@INPROCEEDINGS{45,
+@INPROCEEDINGS{Our_first_paper,
 author = {Jean-Claude Charr and Rapha{\"e}l Couturier and Ahmed Fanfakh and Arnaud Giersch},
 title = {Dynamic Frequency Scaling for Energy Consumption
@@ -651,26 +651,6 @@ ISSN={0278-0070}
 pdf = {http://hal.inria.fr/docs/01/05/75/41/PDF/simgrid3-journal.pdf}
 }

-@article{46,
- author = {Zhuo, Jianli and Chakrabarti, Chaitali},
- title = {Energy-efficient Dynamic Task Scheduling Algorithms for DVS Systems},
- journal = {ACM Trans. Embed. Comput. Syst.},
- issue_date = {February 2008},
- volume = {7},
- number = {2},
- month = jan,
- year = {2008},
- issn = {1539-9087},
- pages = {17:1--17:25},
- articleno = {17},
- numpages = {25},
- url = {http://doi.acm.org/10.1145/1331331.1331341},
- doi = {10.1145/1331331.1331341},
- acmid = {1331341},
- publisher = {ACM},
- address = {New York, NY, USA},
- keywords = {DVS system, Dynamic task scheduling, energy minimization, optimal scaling factor, real time}
-}

 @MISC{47,
 title = {Intel microprocessor export compliance metrics},
@@ -687,7 +667,7 @@ ISSN={0278-0070}
 file = {a548738.pdf:files/30/a548738.pdf:application/pdf}
 }

-@INPROCEEDINGS{49,
+@INPROCEEDINGS{KaiMa_Holistic.Approach.to.Energy.Efficiency.in.GPU-CPU,
 author={Kai Ma and Xue Li and Wei Chen and Chi Zhang and Xiaorui Wang},
 booktitle={Parallel Processing (ICPP), 2012 41st International Conference on},
 title={GreenGPU: A Holistic Approach to Energy Efficiency in GPU-CPU Heterogeneous Architectures},
@@ -701,7 +681,7 @@ ISSN={0190-3918}

-@INPROCEEDINGS{50,
+@INPROCEEDINGS{Rong_Effects.of.DVFS.on.K20.GPU,
 author={Rong Ge and Vogt, R. and Majumder, J. and Alam, A. and Burtscher, M. and Ziliang Zong},
 booktitle={Parallel Processing (ICPP), 2013 42nd International Conference on},
 title={Effects of Dynamic Voltage and Frequency Scaling on a K20 GPU},
@@ -714,7 +694,7 @@ ISSN={0190-3918}
 }

-@techreport{51,
+@techreport{Luley_Energy.efficiency.evaluation.and.benchmarking,
 title = {Energy efficiency evaluation and benchmarking of {AFRL}'s Condor high performance computer},
 urldate = {2014-10-16},
@@ -726,14 +706,14 @@ ISSN={0190-3918}

-@INPROCEEDINGS{52,
+@INPROCEEDINGS{Naveen_Power.Efficient.Resource.Scaling,
 author = {Naveen Muralimanohar and Karthik Ramani and Rajeev Balasubramonian},
 title = {Power Efficient Resource Scaling in Partitioned Architectures through Dynamic Heterogeneity},
 booktitle = {In Proceedings of ISPASS},
 year = {2006}
 }

-@article{53,
+@article{Lizhe_Energy.aware.parallel.task.scheduling,
 title = "Energy-aware parallel task scheduling in a cluster ",
 journal = "Future Generation Computer Systems ",
 volume = "29",
@@ -750,7 +730,7 @@ author = {Lizhe Wang and Samee U. Khan and Dan Chen and Joanna Kołodziej and Ra

-@article{54,
+@article{Joshi_Blackbox.prediction.of.impact.of.DVFS,
 title = {Blackbox prediction of the impact of {DVFS} on end-to-end performance of multitier systems},
 volume = {37},
 url = {http://dl.acm.org/citation.cfm?id=1773404},
@@ -763,7 +743,7 @@ author = {Lizhe Wang and Samee U. Khan and Dan Chen and Joanna Kołodziej and Ra
 }

-@INPROCEEDINGS{55,
+@INPROCEEDINGS{Spiliopoulos_Green.governors.Adaptive.DVFS,
 author={Spiliopoulos, V. and Kaxiras, S. and Keramidas, G.},
 booktitle={Green Computing Conference and Workshops (IGCC), 2011 International},
 title={Green governors: A framework for Continuously Adaptive DVFS},
@@ -774,7 +754,7 @@ doi={10.1109/IGCC.2011.6008552}
 }

-@proceedings{56,
+@proceedings{Shelepov_Scheduling.on.Heterogeneous.Multicore,
 author = {Shelepov, D. and Fedorova, A.},
 intrahash = {2287b0be888deceb937bace77634081a},
 organization = {Workshop on the Interaction between Operating Systems and Computer Architecture, in conjunction with ISCA},
@@ -784,7 +764,7 @@ doi={10.1109/IGCC.2011.6008552}
 }

-@article{57,
+@article{Li_Minimizing.Energy.Consumption.for.Frame.Based.Tasks,
 author={Li, D. and Wu, J.},
 journal={Parallel and Distributed Systems, IEEE Transactions on},
 title={Minimizing Energy Consumption for Frame-Based Tasks on Heterogeneous Multiprocessor Platforms},
@@ -798,7 +778,7 @@ doi={10.1109/TPDS.2014.2313338},
 ISSN={1045-9219},}
 }

-@article{58,
+@article{Chen_DVFS.under.quality.of.service.requirements,
 title = {Dynamic frequency scaling schemes for heterogeneous clusters under quality of service requirements},
 volume = {28},
 url = {http://www.tik.ee.ethz.ch/file/6b2639d5dad0cd754d723ba0eb92cbf6/201211_06.pdf},
@@ -811,12 +791,12 @@ ISSN={1045-9219},}
 }

-@MISC{59,
+@MISC{Green500_List,
 title = {{The Green500 List of Heterogeneous Supercomputing Systems}},
 url = {http://www.green500.org}
 }

-@MISC{60,
+@MISC{U.S_Annual.Energy.Outlook.2014,
 title = {{U.S. Energy Information Administration, Annual Energy Outlook 2014}},
 url = {http://www.eia.gov/}
 }
\ No newline at end of file