Seminar Topic:
Multicore Processors
Abstract:
Processor (or CPU) manufacturers, such as Intel and AMD, face an ever-increasing demand for processing power. Overclocking, the traditional method of increasing CPU performance, has reached its technological limits. Overclocking has two primary undesirable side effects: more heat and more electronic noise within the CPU. The heat can cause a server to fail, and the electronic noise can corrupt the data stream. Processors do not execute processes; they execute threads, and one process can launch many threads. The operating system is responsible for allocating resources, such as CPU time and RAM, to those threads. It follows that the more CPUs are available, the more threads can be handled at once, hence the advent of hyper-threaded single processors and dual-core processors. However, this technology is an advantage only if applications are designed to work in such an environment: single-threaded applications run on these newer processors just as if they were running on a single processor. This whitepaper discusses multicore processor technology and its limitations.
Many new applications are multithreaded, and the general trend in computer architecture is a shift towards more parallelism.
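To make the multithreading trend concrete, here is a minimal Python sketch (the language, the `worker` function and the data it computes are illustrative choices, not from the report) in which one process launches several threads that the operating system schedules across the available cores:

```python
import threading

results = [0] * 4

def worker(i):
    # Each thread runs independently; the OS decides which
    # core (if more than one is available) executes it.
    results[i] = i * i

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()   # wait for all threads of this process to finish

print(results)  # [0, 1, 4, 9]
```

On a single core these threads are time-shared; on a multicore processor they may genuinely run at the same time.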
MULTICORE
Multicore means having more than one core in a single processor, i.e. multiple arithmetic, logic and shift circuits on one die. Multiple processes can reside on the same architecture, and the number of processes that can execute truly in parallel depends on the number of cores.

Single core: Only virtual multitasking is possible, using a time-sharing algorithm. A process can have more than one thread, but the threads cannot execute at the same time. On the other hand, scheduling algorithms are simple, programming is easier, a single core can run at a very high frequency, and the heat problem exists but is not serious.

Multicore: Real multitasking can take place, since multiple threads of the same process can execute at the same time, enabling better threading (e.g. up to 30% improvement). However, full utilization is much more difficult: the cache coherence problem, inter-core communication and thread safety all have to be dealt with, and to gain any advantage a program must be written so that it creates multiple threads. Scheduling for multicore is much more complex when high performance is required, each core cannot run at as high a frequency as a single core, and the heat problem is n times that of a single core.
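The thread-safety problem named above can be illustrated with a small sketch (the shared `counter` and the use of `threading.Lock` are an illustration, not part of the report): an unprotected read-modify-write on shared data can lose updates when threads interleave, so a lock is used to serialize the critical section.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # counter += 1 is a read-modify-write; without the lock,
        # two threads could read the same old value and one
        # update would be lost.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000
```

With the lock, the result is always 4 x 100000 regardless of how the scheduler interleaves the threads.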
The number of transistors in a core determines its basic power consumption, so architectural efficiency matters a lot when designing new cores: more functional units mean more transistors, deeper pipelines mean more transistors, and larger caches mean more transistors.
As we try to increase the clock frequency to get higher performance, the leakage current increases exponentially, which causes problems for the reliability of the architecture.
Advantages of private:
They are closer to the core, so access is faster, and contention between cores is reduced.
Advantages of shared:
Threads on different cores can share the same cached data, and more cache space is available when only a single (or a few) high-performance threads run on the system.
Cache coherence (also cache coherency) refers to the consistency of data stored in the local caches of a shared resource. The simple conflict cases are:
1. Write-write conflict
2. Read-write conflict
3. Write-read conflict
[Figure: generic cache coherency diagram, http://en.wikipedia.org/wiki/File:Cache_Coherency_Generic.png]
1. Snooping is the process where individual caches monitor address lines for accesses to memory locations that they have cached. When a write operation is observed to a location that a cache has a copy of, the cache controller invalidates its own copy of the snooped memory location.
2. Snarfing is where a cache controller watches both address and data in an attempt to update its own copy of a memory location when a second master modifies a location in main memory. When a write operation is observed to a location that a cache has a copy of, the cache controller updates its own copy of the snarfed memory location with the new data.
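The write-invalidate snooping behaviour described above can be modelled with a toy sketch (a deliberate simplification: real hardware snoops a shared bus and works on whole cache lines, and the class and function names here are invented for illustration):

```python
# Toy model of write-invalidate snooping: every cache observes
# writes made by the others and drops its stale copy.
class SnoopingCache:
    def __init__(self):
        self.data = {}  # address -> cached value

    def read(self, memory, addr):
        if addr not in self.data:
            self.data[addr] = memory[addr]  # miss: fetch from memory
        return self.data[addr]

    def snoop_write(self, addr):
        # Another cache wrote this address: invalidate our copy.
        self.data.pop(addr, None)

def write(memory, caches, writer, addr, value):
    memory[addr] = value
    writer.data[addr] = value
    for c in caches:
        if c is not writer:
            c.snoop_write(addr)  # "broadcast" the write on the bus

memory = {0x10: 1}
c0, c1 = SnoopingCache(), SnoopingCache()
c0.read(memory, 0x10)
c1.read(memory, 0x10)          # both caches now hold the value 1
write(memory, [c0, c1], c0, 0x10, 42)
print(c1.read(memory, 0x10))   # 42: stale copy was invalidated, re-fetched
```

A snarfing cache would instead update its copy in `snoop_write` with the new data rather than discarding it, trading extra bus traffic for fewer misses.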
A coherency protocol is a protocol which maintains consistency between all the caches in a system of distributed shared memory. The protocol maintains memory coherence according to a specific consistency model. Protocols are designed and verified using state diagrams and then applied to the cache; the efficiency of a protocol is judged by the average retrieval time for each instruction. One such consistency model is:
1. The sequential consistency model
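Such state-diagram protocols can be expressed directly as a transition table. As a sketch, here is the core of the textbook MSI protocol (Modified / Shared / Invalid); the report does not name a specific protocol, so MSI is an assumed example:

```python
# Per-cache-line MSI state machine, written as a transition table
# derived from the usual MSI state diagram. Events not listed
# leave the state unchanged (e.g. a local read of an M line).
MSI = {
    ("I", "local_read"):   "S",  # fetch a shared copy
    ("I", "local_write"):  "M",  # take exclusive ownership
    ("S", "local_write"):  "M",  # upgrade, invalidating other copies
    ("S", "remote_write"): "I",  # another core wrote: our copy is stale
    ("M", "remote_read"):  "S",  # write back, then share
    ("M", "remote_write"): "I",  # write back, then invalidate
}

def step(state, event):
    return MSI.get((state, event), state)

state = "I"
for event in ["local_read", "remote_write", "local_write"]:
    state = step(state, event)
print(state)  # "M": the line went I -> S -> I -> M
```

Verifying a protocol amounts to checking that no interleaving of such events lets two caches hold conflicting copies of the same line.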
Summary :
New CPU technologies, such as hyper-threading and multi-core, are becoming much more common as processor manufacturers try to keep pace with the demand for more processing power. This whitepaper discussed multi-core architecture, the problems that arise on multicore platforms, and their solutions.