
FINAL REPORT

Seminar Topic:

Multicore Processors

Guide Name: Prof. Jitendra B. Bhatia
Student Name: Kaneriya Jigar M. (09bit039)

Abstract:
Processor (or CPU) manufacturers, such as Intel and AMD, face an ever-increasing demand for processing power. CPU overclocking, the traditional method of increasing CPU performance, has reached its technological limits. Overclocking has two primary undesirable side effects: more heat, and more electronic noise within the CPU. The heat can cause a server to fail, and the electronic noise can cause corruption within the data stream. Processors do not execute processes, they execute threads. One process can launch many threads, and the operating system is responsible for allocating resources, such as CPU and RAM, to those threads. It follows that the more CPUs you have, the more threads can be handled at once; hence the advent of hyper-threading single processors and dual-core processors. However, this technology is only an advantage if applications are designed to work in such an environment: single-threaded applications will run on these newer processors just as if they were running on a single processor. This whitepaper discusses multicore processor technology and its limitations.

A Brief History of Microprocessors


Intel manufactured the first microprocessor, the 4-bit 4004, in the early 1970s; it was basically just a number-crunching machine. Shortly afterwards Intel developed the 8-bit 8008 and 8080, and Motorola followed suit with its 6800, which was equivalent to Intel's 8080. The companies then fabricated 16-bit microprocessors: Motorola had its 68000, and Intel the 8086 and 8088. The 8086 would be the basis for Intel's 32-bit 80386 and later the popular Pentium line-up found in the first consumer PCs.

1. The first commercial dual-core processor was IBM's POWER4 processor for its RISC servers, in 2001.
2. The first dual-core processor for home use was Intel's Pentium 840, released in 2005.
3. Less than two weeks later, AMD brought out its Athlon 64 X2 processor.

What is Multicore?


A multicore microprocessor is one that combines two or more independent processor cores into a single package, often on a single integrated circuit.

Examples of Multicore Processors


A dual-core processor has two cores (e.g. AMD Phenom II X2, Intel Core Duo), a quad-core processor contains four cores (e.g. AMD Phenom II X4, and Intel's 2010 Core line of i3, i5, and i7 processors), and a hexa-core processor contains six cores (e.g. AMD Phenom II X6, Intel Core i7 Extreme Edition 980X).

Why do we need a multicore processor?


It is difficult to make single-core clock frequencies even higher. Deeply pipelined circuits bring:
- heat problems
- speed-of-light (signal-propagation) problems
- difficult design and verification, requiring large design teams
- server farms that need expensive air-conditioning

Many new applications are multithreaded, and the general trend in computer architecture is a shift towards more parallelism. A minimal sketch of such a multithreaded workload is shown below.
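To make this concrete, here is a minimal C++ sketch (an illustration added here, not part of the original report) that splits a summation across as many threads as the machine reports hardware cores; a single-threaded version of the same loop would use only one core no matter how many are available.

```cpp
// Minimal sketch: divide a summation across all available cores.
// Names and sizes are illustrative; real applications need careful
// work partitioning to see a benefit from multiple cores.
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const std::size_t n = 1u << 24;
    std::vector<int> data(n, 1);

    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    std::vector<long long> partial(cores, 0);
    std::vector<std::thread> workers;

    for (unsigned c = 0; c < cores; ++c) {
        workers.emplace_back([&, c] {
            // Each thread sums its own contiguous chunk of the data.
            const std::size_t begin = c * n / cores;
            const std::size_t end   = (c + 1) * n / cores;
            partial[c] = std::accumulate(data.begin() + begin, data.begin() + end, 0LL);
        });
    }
    for (auto& t : workers) t.join();  // wait for every core's chunk

    std::cout << "sum = "
              << std::accumulate(partial.begin(), partial.end(), 0LL) << '\n';
}
```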

Difference between unicore and multicore processors


UNICORE
- One core per processor, with one arithmetic circuit, one logic circuit, and one shift circuit.
- Only one process can reside on the core at a point in time; virtual multitasking is achieved with a time-sharing algorithm.
- A process can have more than one thread, but its threads cannot execute at the same time.
- The core is easy to utilize fully and efficiently.
- Programming is much easier.
- Scheduling algorithms are simple.
- A single core can run at a very high clock frequency.
- Heat is a problem, but not a serious one.

MULTICORE
- More than one core in a single processor, each with its own arithmetic, logic, and shift circuits.
- Multiple processes can reside on the same processor; the number depends on the number of cores.
- Real multitasking can take place: multiple threads of the same process can execute at the same time, enabling better threading (e.g. up to 30% improvement).
- Full utilization is much more difficult: the cache coherence problem, inter-core communication, and thread safety all have to be handled.
- To get any advantage, programs must be written so that they create multiple threads.
- Scheduling for multiple cores is much more complex when high performance is required.
- Each core cannot run at a higher frequency than a single-core chip.
- The heat produced is roughly n times that of a single core.

Difficulties at the software level


Special algorithms and modifications in the operating system are needed to manage multiple processors and their caches at the same time with high throughput. A multicore processor also needs special support from software to improve performance with multiple threads of the same process (e.g. the game Crysis). Emergent Game Technologies' Gamebryo engine includes their Floodgate technology, which simplifies multicore development across game platforms. In addition, Apple Inc.'s Mac OS X Snow Leopard has a built-in multicore facility called Grand Central Dispatch for Intel CPUs. A task-based sketch of this kind of software support is shown below.
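As a rough sketch of this kind of task-based software support (the subsystem names below are placeholders, not any real engine's or OS facility's API), independent pieces of work can be dispatched as separate tasks so the runtime can spread them over the available cores:

```cpp
// Hypothetical sketch: run independent per-frame subsystems as separate tasks.
// update_physics/update_ai/update_audio are placeholders, not a real engine API.
#include <future>

void update_physics() { /* ... */ }
void update_ai()      { /* ... */ }
void update_audio()   { /* ... */ }

int main() {
    // Each task may be scheduled on a different core by the runtime.
    auto physics = std::async(std::launch::async, update_physics);
    auto ai      = std::async(std::launch::async, update_ai);
    auto audio   = std::async(std::launch::async, update_audio);

    physics.get();  // join all tasks before starting the next frame
    ai.get();
    audio.get();
}
```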

Hardware-level difficulties

The number of transistors in a core determines its basic power consumption, so architectural efficiency matters a lot when designing new cores:
- more functional units mean more transistors
- deeper pipelines mean more transistors
- larger caches mean more transistors

Leakage Current vs. Frequency

As we try to increase the clock frequency to get higher performance, the leakage current increases exponentially, which creates problems for the reliability of the architecture.

Power vs. Frequency
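The relation this heading refers to can be summarized with the standard CMOS power approximation (a textbook formula added for illustration, not one given in this report): switching power grows with the square of the supply voltage and linearly with frequency, and because higher frequencies usually require higher voltages, total power rises steeply as the clock is pushed up.

```latex
% Standard CMOS power approximation (textbook relation, not from this report).
P_{total} \approx \alpha \, C \, V_{dd}^{2} \, f \;+\; V_{dd} \, I_{leak}
% first term : dynamic (switching) power -- alpha = activity factor,
%              C = switched capacitance, V_dd = supply voltage, f = clock frequency
% second term: static (leakage) power, which also grows with V_dd and temperature
```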

The cache coherence problem:


What is a cache?
A CPU cache is a cache used by the central processing unit of a computer to reduce the average time to access memory. The cache is a smaller, faster memory which stores copies of the data from the most frequently used main-memory locations. As long as most memory accesses are to cached memory locations, the average latency of memory accesses will be closer to the cache latency than to the latency of main memory.
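The claim that the average latency stays "closer to the cache latency" can be made precise with the usual average memory access time (AMAT) relation, a standard textbook formula added here for illustration:

```latex
% Average memory access time (standard formula, not stated in the report).
AMAT = t_{hit} + m \cdot t_{penalty}
% t_hit     : cache hit time,  m : miss rate,  t_penalty : main-memory miss penalty
% Example (assumed numbers): t_hit = 2 ns, m = 5%, t_penalty = 100 ns
%   AMAT = 2 + 0.05 * 100 = 7 ns, far closer to the cache than to main memory.
```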

Types of cache arrangement in a processor:

1. Separate (private) cache
2. Shared cache

Private vs. shared caches:

Advantages of private caches:
- They are closer to the core, so access is faster.
- They reduce contention.

Advantages of shared caches:
- Threads on different cores can share the same cached data.
- More cache space is available if only a single (or a few) high-performance threads run on the system.

Cache coherence problem:

Cache coherence (also cache coherency) refers to the consistency of data stored in the local caches of a shared resource. Simple occurrences of incoherence are:

1. write-write conflict
2. read-write conflict
3. write-read conflict

A minimal software-level sketch of such a conflict is shown below.

(Figure: generic cache-coherency example; see http://en.wikipedia.org/wiki/File:Cache_Coherency_Generic.png)
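As a minimal software-level sketch (an illustration, not taken from the report) of a write-read conflict: one thread writes a shared variable while another reads it from a different core. Coherence hardware guarantees that the reader sees either the old or the new value, never a garbled one, but which of the two it sees depends on the interleaving.

```cpp
// Minimal sketch of a write-read conflict between two cores.
// The reader may print 0 or 42 depending on timing; the coherence
// protocol guarantees it never observes a torn or garbage value.
#include <atomic>
#include <iostream>
#include <thread>

std::atomic<int> shared_value{0};  // one cache line, potentially cached by both cores

int main() {
    std::thread writer([] {
        shared_value.store(42, std::memory_order_relaxed);  // write on core A
    });
    std::thread reader([] {
        int seen = shared_value.load(std::memory_order_relaxed);  // read on core B
        std::cout << "reader saw " << seen << '\n';
    });
    writer.join();
    reader.join();
}
```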

Solutions to the cache coherence problem


Directory-based coherence: In a directory-based system, the data being shared is placed in a common directory that maintains the coherence between caches. The directory acts as a filter through which a processor must ask permission to load an entry from primary memory into its cache. When an entry is changed, the directory either updates or invalidates the other caches that hold that entry. A data-structure sketch of this approach follows the list below.

1. Snooping is the process where the individual caches monitor address lines for accesses
to memory locations that they have cached. When a write operation is observed to a location that a cache has a copy of, the cache controller invalidates its own copy of the snooped memory location.

2. Snarfing is where a cache controller watches both address and data in an attempt to
update its own copy of a memory location when a second master modifies a location in main memory. When a write operation is observed to a location that a cache has a copy of, the cache controller updates its own copy of the snarfed memory location with the new data.

3. Distributed shared memory systems mimic these mechanisms in an attempt to maintain consistency between blocks of memory in loosely coupled systems.
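The directory-based approach described above can be pictured as one bookkeeping record per memory block. The following C++ sketch is a simplified assumption of what such a record and its invalidation step might look like, not a description of any real directory design:

```cpp
// Hypothetical sketch of a directory entry for directory-based coherence.
// Sizes, names, and the message-sending step are illustrative assumptions.
#include <bitset>
#include <cstddef>

constexpr std::size_t kNumCores = 8;

enum class LineState { Uncached, Shared, Modified };

struct DirectoryEntry {
    LineState state = LineState::Uncached;  // global state of the memory block
    std::bitset<kNumCores> sharers;         // which cores hold a cached copy
    int owner = -1;                         // core with the dirty copy, if Modified
};

// On a write request from core `c`, the directory invalidates every other
// sharer before granting exclusive ownership of the block.
void handle_write_request(DirectoryEntry& e, int c) {
    for (std::size_t i = 0; i < kNumCores; ++i) {
        if (static_cast<int>(i) != c && e.sharers.test(i)) {
            // send_invalidate(i);  // placeholder for the real invalidation message
            e.sharers.reset(i);
        }
    }
    e.sharers.set(static_cast<std::size_t>(c));
    e.owner = c;
    e.state = LineState::Modified;
}

int main() {
    DirectoryEntry entry;
    handle_write_request(entry, 3);  // core 3 now owns the block exclusively
}
```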

Protocol-based solutions:

A coherency protocol is a protocol which maintains the consistency between all the caches in a system of distributed shared memory. The protocol maintains memory coherence according to a specific consistency model. Protocols are designed and verified using state diagrams and are then applied to the caches; the efficiency of a protocol is judged by the average retrieval time for each instruction. Two common consistency models are:

1. the sequential consistency model
2. the release consistency model

A minimal sketch of such a protocol state machine is given below.
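To illustrate what "designed and verified using state diagrams" means in practice, here is a minimal sketch (an assumption for illustration, not the report's protocol) of the per-cache-line state machine of an MSI-style snooping protocol; real protocols such as MESI or MOESI add further states and transitions.

```cpp
// Minimal sketch of an MSI-style per-line state machine (simplified: the
// write-back a real protocol performs when a Modified line is snooped is omitted).
#include <cassert>

enum class MsiState { Invalid, Shared, Modified };

// Transition taken when the local core reads or writes the line.
MsiState on_local_access(MsiState s, bool is_write) {
    if (is_write) return MsiState::Modified;                 // gain exclusive ownership
    return (s == MsiState::Invalid) ? MsiState::Shared : s;  // fill as Shared on a read miss
}

// Transition taken when a write by another core is snooped on the bus.
MsiState on_remote_write(MsiState) {
    return MsiState::Invalid;  // invalidate the local copy
}

int main() {
    MsiState s = MsiState::Invalid;
    s = on_local_access(s, /*is_write=*/false);  assert(s == MsiState::Shared);
    s = on_remote_write(s);                      assert(s == MsiState::Invalid);
}
```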

Summary:
New CPU technologies, such as hyper-threading and multicore, are becoming much more common as processor manufacturers try to keep pace with the demand for more processing power. This whitepaper discussed multicore architecture, the problems encountered in building for multicore platforms, and their solutions.

References:
[1] Jianjun Guo, Mingche Lai, Zhengyuan Pang, Libo Huang, Fangyuan Chen, Kui Dai, Zhiying Wang, "Memory System Design for a Multi-core Processor".
[2] Multi-core Cache Hierarchies (Synthesis Lectures on Computer Architecture), Morgan & Claypool Publishers.
[3] Web references: http://en.wikipedia.org, http://www.ieee.org, http://www.intel.com/products/processor/

Sign Of Student:

Remarks and further instruction by Guide:

Signature of Guide:
Signature of seminar faculty:

Submitted on: _____________
