
PAPER REVIEW 4

Phomane T.C 201503660


Tshabalala M. 201502896
Fihlo N.P 201501380
Some Ideas and Principles for Achieving Higher System Energy
Efficiency

In this paper, the author highlights that in most, if not all, of today's power-limited systems, energy is performance: completing the same task at the same performance level but with improved efficiency is therefore key to improving both overall system performance and energy. The paper outlines a number of ideas and principles for achieving this. The author argues that high-performance, energy-efficient systems should differentiate how they handle tasks and how much energy is allocated to each task, based on factors such as criticality, priority, and importance. This stems from the fact that not all tasks affect the system's performance equally, and as such different tasks require different energy levels.

The author therefore suggests that the system's throughput could be improved if, for every task, the present slack were identified, the resources designed accordingly, and the task execution managed so that every task's slack is kept as close to zero as possible. From a personal point of view, given the compelling evidence and analysis the author lays out in this paper, the approach appears to be a credible step toward solving the performance, energy, and efficiency problems in current systems.
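
As a rough illustration of the slack idea summarised above (not the author's actual mechanism), the sketch below assumes each task has a deadline and a predicted amount of work, and a hypothetical pick_speed helper chooses the lowest speed setting that still finishes in time, keeping slack near zero. The speed levels and energy model are illustrative assumptions.

```python
# Hypothetical sketch: give each task just enough speed so that its
# slack (deadline minus predicted finish time) stays close to zero.
# Speed levels and the deadline model are illustrative, not from the paper.

from dataclasses import dataclass

SPEED_LEVELS = [0.5, 0.75, 1.0]  # normalised frequency settings (assumed)

@dataclass
class Task:
    name: str
    work: float       # execution time at full speed (seconds)
    deadline: float   # time by which the task must finish (seconds)

def pick_speed(task: Task, now: float) -> float:
    """Choose the slowest speed that still meets the deadline (slack ~ 0)."""
    for speed in SPEED_LEVELS:                 # try the slowest setting first
        if now + task.work / speed <= task.deadline:
            return speed
    return SPEED_LEVELS[-1]                    # no slack left: run at full speed

def run(tasks: list[Task]) -> None:
    now = 0.0
    for task in sorted(tasks, key=lambda t: t.deadline):  # earliest deadline first
        speed = pick_speed(task, now)
        now += task.work / speed
        slack = task.deadline - now
        print(f"{task.name}: speed={speed:.2f}, finish={now:.2f}, slack={slack:.2f}")

if __name__ == "__main__":
    run([Task("decode", 2.0, 5.0), Task("render", 1.0, 6.0), Task("audio", 0.5, 6.5)])
```

With the example tasks, decode runs at full speed while render and audio can be slowed down, since slowing them does not push their finish times past their deadlines.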
Parallel Application Memory Scheduling

Chip multiprocessors (CMPs) are commonly used to speed up a single application using multiple
threads that execute concurrently on multiple cores. This research paper introduced the
Parallel Application Memory Scheduler (PAMS), a new memory controller design that
manages inter-thread memory interference in parallel applications in order to reduce the overall
execution time. PAMS uses a hardware/software cooperative approach that
consists of two components: a runtime-system estimator and a memory scheduler. Previously
proposed memory scheduling algorithms for CMPs did not take into
account the inter-dependent nature of threads in a parallel application, which is where PAMS
improves on them.
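
A minimal sketch of the two-component structure described above is given below, under our assumption (not the paper's exact interface) that the runtime side marks a set of likely-critical thread IDs and the memory scheduler services requests from that set first, falling back to oldest-first otherwise. All class and field names are hypothetical.

```python
# Hypothetical sketch of the two cooperating components described above:
# a runtime-side estimate of likely-critical threads and a memory scheduler
# that services their requests first. Names and interfaces are assumed.

from collections import deque
from dataclasses import dataclass

@dataclass
class MemRequest:
    thread_id: int
    address: int
    arrival: int      # cycle at which the request reached the controller

class PamsLikeScheduler:
    def __init__(self) -> None:
        self.queue: deque[MemRequest] = deque()
        self.critical_threads: set[int] = set()   # written by the runtime system

    def update_critical_set(self, thread_ids: set[int]) -> None:
        """Runtime-system component: mark threads estimated to limit progress."""
        self.critical_threads = thread_ids

    def enqueue(self, request: MemRequest) -> None:
        self.queue.append(request)

    def schedule_next(self) -> MemRequest | None:
        """Memory-scheduler component: critical threads first, then oldest-first."""
        if not self.queue:
            return None
        critical = [r for r in self.queue if r.thread_id in self.critical_threads]
        candidates = critical if critical else list(self.queue)
        chosen = min(candidates, key=lambda r: r.arrival)
        self.queue.remove(chosen)
        return chosen
```

The point of the sketch is only the division of labour: the runtime estimates which threads matter most right now, and the scheduler turns that estimate into a request-servicing order.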
Furthermore, PAMS was the first memory controller design explicitly developed to reduce
inter-thread interference between inter-dependent threads of a parallel application. In
practice, PAMS improves parallel application performance, outperforming both the best
previous memory scheduler designed for multi-programmed workloads and a memory
scheduler the authors devised that uses a previously proposed thread-criticality prediction
mechanism to estimate and prioritize critical
threads. The performance gain of PAMS over the baseline is 16.7%, of which 2.3% is due to the
optimization enabled by loop-progress measurement.
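
The loop-progress idea mentioned above can be pictured as follows; this is only our reading of it, with an assumed counter interface: each thread reports how many iterations of the current parallel loop it has completed, and the threads furthest behind are treated as the critical ones.

```python
# Hypothetical illustration of loop-progress measurement: the threads that have
# completed the fewest iterations of the current parallel loop are treated as
# laggards and fed into the scheduler's critical set. Interface is assumed.

def lagging_threads(iterations_done: dict[int, int], margin: int = 0) -> set[int]:
    """Return thread IDs whose completed-iteration count trails the minimum
    by no more than `margin`, i.e. the slowest threads in the loop."""
    slowest = min(iterations_done.values())
    return {tid for tid, done in iterations_done.items() if done <= slowest + margin}

# Example: thread 2 has made the least loop progress, so it is marked critical.
progress = {0: 120, 1: 118, 2: 95, 3: 121}
print(lagging_threads(progress))   # -> {2}
```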
