
Term paper

Based on (Clocking systems)


Session: 2010-11, Semester 3

Department Of E.C.E

Name: V. Vivek
Course: B.Tech (ECE), 2nd year, 3rd semester
Subject: Digital Electronics Circuits
Course code: E.C.E-202
Roll no: RE2003 A14
Section: E-2003
Reg no: 11005832

Acknowledgement
Gratitude cannot be seen or expressed; it can only be felt in the heart and is beyond description. Words are often inadequate to express one's feelings, especially the sense of indebtedness and gratitude to all those who help us in our work. It is with immense pleasure and profound privilege that I express my gratitude, indebtedness and sincere thanks to Shrivishal Tripathi, lecturer of Digital Electronic Circuits at Lovely Professional University, for providing me the opportunity to work on a project on Clocking Systems. I am also beholden to my family and friends for their blessings and encouragement.

Yours obediently, V.VIVEK

CONTENTS:
1) Introduction to Clocking Systems
2) History
3) Clock Signal
4) Timing Parameters
5) Theory of Clocked Storage Elements
6) Overclocking
7) Advantages and Disadvantages
8) Limitations

CLOCKING SYSTEMS
Introduction: Clocking is one of the single most important decisions facing the designer of a digital system. Unfortunately, much too often it is taken lightly at the beginning of a design, and that viewpoint has proven very costly in the long run (Wagner 1988). Thus, it is not pretentious to dedicate an entire book to this subject; this text, however, is limited to the even narrower issue of clocked storage elements (CSEs), widely known as flip-flops and latches.

The clock rate is the frequency at which a processor runs. The standard unit is the hertz (Hz); today clock rates are usually quoted in MHz or GHz. Generally, within the same CPU series, a higher processor clock speed means faster processing. Technically, the clock period is the time in which a single atomic action can be performed, so a 1 GHz CPU can evaluate a single NAND gate one billion times a second. By itself, however, this is not very informative: clock rate can be used to accurately compare processors of the exact same design, but a different design may be slower in terms of Hz yet faster overall, for example by performing more work per clock cycle.

Overclocking refers to any attempt to run computer hardware at speeds beyond its stock settings in order to achieve greater performance, typically by going into the BIOS to tweak voltages, bus speeds and so on. Overclocking can be dangerous if done improperly and may cause permanent damage to the hardware; it is done at the user's own risk.

History: For most of the early history of microcomputers, clock rate was not a differentiating factor between models. Each CPU type was typically clocked at a standard rate: 1 MHz for 6502-based architectures like the Commodore 64 and Apple II series, 4.77 MHz for Z-80 computers and the first-generation Intel 8086 as used in the original IBM PC, and 8 MHz for early Motorola 68000 machines such as the Macintosh 128k and Amiga 1000. Since these processor generations followed one another quickly and generally did not compete with each other (except for the Z-80 and 8086, which shared the same clock rate), manufacturers tended not to emphasize clock rate in their marketing material.

Computer buyers first became aware of clock speed when new generations of PC compatibles started to appear with "Turbo" clock rates faster than 4.77 MHz. On some of these computers the speed was selectable by a front-panel switch, from the faster speed down to the then-standard 4.77 MHz. This was used for games, which at the time had no timing routines of their own, or for compatibility with software that could not operate at the faster speed. When the 80286 was released in 1982 at a standard clock rate of 6 MHz, followed by the 80386 in 1985 running at 12 MHz, computer manufacturers seized on clock rate as an easy way to promote the faster, more expensive CPUs to potential buyers. They were helped by Intel, which increased the rate of the 286 to 25 MHz over that processor's lifetime, while the 386 was clocked up to 40 MHz by the time it was superseded by the 80486. By the early 1990s, most computer companies advertised their computers' performance chiefly by referring to their CPUs' clock rates. This led to various marketing games, such as Apple Computer's decision to create and market the Power Macintosh 8100 with a clock rate of 110 MHz so that Apple could advertise that its computer had the fastest clock rate available; the fastest Intel processor available at the time ran at 100 MHz. This slight superiority in clock rate was meaningless, however, since the PowerPC 601 and the Pentium implemented different instruction set architectures and had different microarchitectures. After 2000, Intel's competitor Advanced Micro Devices started using model numbers instead of clock rates to market its CPUs, because of their lower clock rates compared to Intel's. Continuing this trend, it attempted to dispel the "megahertz myth", which it claimed did not tell the whole story of the power of its CPUs. In 2004, Intel announced it would do the same, probably because of consumer confusion over its Pentium M mobile CPU, which reportedly ran at about half the clock rate of the roughly equivalent Pentium 4. As of 2007, performance improvements have continued to come through innovations in pipelining, instruction sets, and the development of multi-core processors, rather than through clock rate increases (which have been constrained by CPU power dissipation issues).

The importance of clocking has become even more pronounced as the clock speed rises rapidly, doubling every three years, as seen in Fig. 1.1. However, the clock uncertainties have not been scaling proportionally with the frequency increase, so an increasingly large portion of the clock cycle is spent on clocking overhead. The ability to absorb clock skew, or to make the clocked storage element faster, is reflected directly in enhanced performance, since performance is directly proportional to the clock frequency of a given system. Such performance improvements are very difficult to obtain using traditional techniques at the architecture or microarchitecture level; the difficulties are caused by the overhead imposed by the CSE delay and by the clock uncertainties. Thus, setting the clock to the right frequency, and utilizing every available picosecond of the critical path, is increasingly important. It is our opinion that traditional clocking techniques will reach their limit when the clock frequency reaches the 5 to 10 GHz range, so new ideas and new ways of designing digital systems will be needed. We do not pretend to know what the future trend in clocking should be.

Computers built in the past were large and filled several electronic cabinets in large air-conditioned rooms that occupied entire floors. They were built from discrete components, or used a few large-scale integration (LSI) chips in the later models. Those systems were clocked at frequencies of about one or a few tens of megahertz, as shown in Table 1.1. The first electronic computer, ENIAC (Electronic Numerical Integrator and Calculator), for example, operated at the maximal clock frequency of 18 kHz. Given the low scale of integration, it was possible to "tune" the clock. This was achieved either by adjusting the length of the wires that distributed the clock signals, or by tuning the various delay elements on the cabinets or the circuit boards, so that the clock signal arrived at every circuit board at approximately the same time. With the advent of very large scale integration (VLSI) technology and increased integration levels, the ability to tune the clock has been greatly diminished. The clock signals are generated and distributed internally within the VLSI chip. Therefore, much of the burden of absorbing clock signal variations at various points on the VLSI chip has fallen on the clocked storage element.

Table 1.1: Clock Frequency of Selected Historic Computers and Supercomputers

Clock Signal: In electronics, and especially in synchronous digital circuits, a clock signal is a particular type of signal that oscillates between a high and a low state and is used like a metronome to coordinate the actions of circuits. Although the word signal has a number of other meanings, the term is used here for "transmitted energy that can carry information". A clock signal is produced by a clock generator. Although more complex arrangements are used, the most common clock signal is a square wave with a 50% duty cycle, usually with a fixed, constant frequency. Circuits using the clock signal for synchronization may become active on the rising edge, on the falling edge, or, in the case of double data rate, on both the rising and falling edges of the clock cycle.
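To make the square-wave description concrete, the following is a minimal behavioral sketch in Python (the function name and parameters are illustrative, not from any library): it returns the level of an ideal 50%-duty-cycle clock at a given time.

    def clock_level(t, frequency_hz=1e9, duty_cycle=0.5):
        """Logic level (0 or 1) of an ideal square-wave clock at time t (seconds)."""
        period = 1.0 / frequency_hz
        phase = (t % period) / period          # position within the current cycle, 0.0 .. 1.0
        return 1 if phase < duty_cycle else 0  # high for the first duty_cycle fraction of the period

    # Sample one full cycle of a 1 GHz clock in 0.1 ns steps: five samples high, then five low.
    for i in range(10):
        t = i * 0.1e-9
        print("t = %.1f ns  level = %d" % (t * 1e9, clock_level(t)))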

Digital circuits:-

Most integrated circuits (ICs) of sufficient complexity use a clock signal in order to synchronize different parts of the circuit, cycling at a rate slower than the worst-case internal propagation delays allow. In some cases, more than one clock cycle is required to perform a predictable action. As ICs become more complex, the problem of supplying accurate and synchronized clocks to all the circuits becomes increasingly difficult. The preeminent example of such complex chips is the microprocessor, the central component of modern computers, which relies on a clock from a crystal oscillator. The only exceptions are asynchronous circuits such as asynchronous CPUs. A clock signal might also be gated, that is, combined with a controlling signal that enables or disables the clock for a certain part of a circuit. This technique is often used to save power by effectively shutting down portions of a digital circuit when they are not in use, but it comes at the cost of increased complexity in timing analysis.
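Behaviorally, a gated clock is simply the clock ANDed with an enable signal; real designs insert a latch-based clock-gating cell to avoid glitches, which this hedged sketch (Python, names illustrative) omits.

    def gated_clock(clk, enable):
        """Behavioral view of clock gating: the block sees clock pulses only while enable is high."""
        return clk & enable

    # While enable is 0 the gated clock stays at 0, so the gated block holds its state and its
    # local clock network stops toggling, which is where the power saving comes from.
    clk_stream    = [0, 1, 0, 1, 0, 1, 0, 1]
    enable_stream = [1, 1, 1, 1, 0, 0, 0, 0]
    print([gated_clock(c, e) for c, e in zip(clk_stream, enable_stream)])   # [0, 1, 0, 1, 0, 0, 0, 0]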

Single-phase clock:Most modern synchronous circuits use only a "single phase clock" -- in other words, they transmit all clock signals on (effectively) 1 wire.

Two-phase clock: In synchronous circuits, a "two-phase clock" refers to clock signals distributed on two wires, each carrying non-overlapping pulses. Traditionally, one wire is called "phase 1" or "phi1", and the other carries the "phase 2" or "phi2" signal. MOS ICs typically used dual clock signals (a two-phase clock) in the 1970s; these were generated externally for both the 6800 and the 8080. The next generation of microprocessors incorporated clock generation on chip. The 8080 had a 2 MHz clock, but its processing throughput was similar to that of the 1 MHz 6800, because the 8080 required more clock cycles to execute a processor instruction. The 6800 had a minimum clock rate of 100 kHz, while the 8080 could be halted. Higher-speed versions of both microprocessors were released by 1976. The 6501 required an external two-phase clock generator. The MOS Technology 6502 used the same two-phase logic internally, but also included a two-phase clock generator on-chip, so it only needed a single-phase clock input, simplifying system design. A sketch of how such non-overlapping phases can be generated is given below.
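As a rough illustration of non-overlapping phases, the sketch below (Python; the period, pulse width and gap values are arbitrary) derives phi1 and phi2 from a step counter such that the two are never high at the same time.

    def two_phase(step, period=10, pulse_width=4, gap=1):
        """(phi1, phi2) at a given time step of a non-overlapping two-phase clock.

        Each period is split into a phi1 pulse, a dead time, a phi2 pulse and another
        dead time; the dead time guarantees the two phases never overlap.
        """
        pos = step % period
        phi1 = 1 if pos < pulse_width else 0
        phi2 = 1 if pulse_width + gap <= pos < 2 * pulse_width + gap else 0
        return phi1, phi2

    # One full period: phi1 is high for steps 0-3, phi2 for steps 5-8, never both together.
    print([two_phase(s) for s in range(10)])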

4-phase clock: A "4-phase clock" distributes clock signals on four wires (four-phase logic). In some early microprocessors such as the National Semiconductor IMP-16 family, a multi-phase clock was used; in the case of the IMP-16, the clock had four phases, each 90 degrees apart, in order to synchronize the operations of the processor core and its peripherals.

Some ICs use four-phase logic. Most modern microprocessors and microcontrollers use a single-phase clock, however.

Clock multiplier: Many modern microcomputers use a "clock multiplier", which multiplies a lower-frequency external clock up to the appropriate clock rate of the microprocessor. This allows the CPU to operate at a much higher frequency than the rest of the computer, which affords performance gains in situations where the CPU does not need to wait on an external factor (like memory or input/output).
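The arithmetic is simply a multiplication; a one-line sketch (Python, with illustrative numbers) of how a PLL-based clock multiplier relates the external and internal clocks:

    def cpu_clock_mhz(external_clock_mhz, multiplier):
        """Internal CPU clock produced by a clock multiplier (PLL) from the external clock."""
        return external_clock_mhz * multiplier

    # Hypothetical example: a 200 MHz external clock multiplied by 18 gives a 3600 MHz core,
    # so the CPU core runs far faster than the bus it communicates over.
    print(cpu_clock_mhz(200, 18))   # -> 3600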

Dynamic frequency change: The vast majority of digital devices do not require a clock at a fixed, constant frequency. As long as the minimum and maximum clock times are respected, the time between clock edges can vary widely from one edge to the next. Such digital devices work just as well with a clock generator that dynamically changes its frequency, such as spread-spectrum clock generation or technologies like PowerNow!, Cool'n'Quiet and SpeedStep. Devices that use static logic do not even have a maximum clock time; such devices can be slowed down or paused indefinitely, then resumed at full clock speed at any later time.

TIMING PARAMETERS: It is appropriate at this point to consider the clock distribution system and define the clock parameters that will be used throughout this text. For the purposes of definition we start with Fig. 1.16, which shows the timing parameters of a single-phase clock. The clock signal is characterized by its period, T, which is inversely proportional to the clock frequency, f. The time during which the clock is active (assuming a logic 1 value) is defined as the clock width, W. The ratio W/T is defined as the clock duty cycle. Usually the clock signal has a symmetric shape, which implies a 50% duty cycle; this is also the best we can expect, especially when distributing a high-frequency clock. Another important point is the ability to precisely control the duty cycle. This is of special importance when each phase of the clock is used for logic evaluation, or when the clocked storage elements are triggered on each edge of the clock (as discussed later). Some recently reported work demonstrates the ability to control the duty cycle to within +/-0.5% (Bailey and Benschneider 1998). There are two other important timing parameters that we need to define: clock skew and clock jitter.
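A small sketch (Python, names illustrative) of the relations just defined, T = 1/f and W = duty cycle x T:

    def clock_parameters(frequency_hz, duty_cycle=0.5):
        """Basic timing parameters defined above: period T, active width W, and the ratio W/T."""
        T = 1.0 / frequency_hz       # clock period, inversely proportional to the frequency f
        W = duty_cycle * T           # time the clock spends at logic 1
        return {"period_ps": T * 1e12, "width_ps": W * 1e12, "duty_cycle": W / T}

    # A symmetric 2 GHz clock: T = 500 ps, W = 250 ps, duty cycle = 0.5.
    print(clock_parameters(2e9))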

Clock Skew: Clock skew is defined as the spatial variation of the clock signal as distributed through the system. The clock skew is measured from some reference point in the system: the clock entry point to the board or VLSI chip, or the central point from which the clock distribution starts. Because of the different delay characteristics of the clock paths to the various points in the system, as well as different loading of the clock signal at different points, the clock signal arrives at different points at different times. Clock skew is defined as the difference in clock arrival time between the reference point and a particular destination CSE. Further, we can distinguish global clock skew and local clock skew. We define global clock skew as the maximal difference between the clock signals reaching any two storage elements on the chip, or in the system, that exchange data under the control of the same clock; the definition above describes global clock skew. Clock skew occurring between two adjacent CSEs represents local clock skew. If two adjacent clocked storage elements are connected with no logic in between, the problem of data race-through can occur, so characterizing the maximum local clock skew is important. Both definitions are equally important in high-performance system design. A small sketch of how these quantities can be computed from clock arrival times follows.
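A minimal sketch (Python; the arrival times and element names are hypothetical) of how global and worst local skew follow from the clock arrival times measured against the common reference point:

    def global_skew(arrival_times_ps):
        """Global clock skew: largest arrival-time difference between any two storage elements."""
        times = list(arrival_times_ps.values())
        return max(times) - min(times)

    def worst_local_skew(arrival_times_ps, adjacent_pairs):
        """Worst local skew: largest arrival-time difference between directly connected CSEs."""
        return max(abs(arrival_times_ps[a] - arrival_times_ps[b]) for a, b in adjacent_pairs)

    # Hypothetical arrival times (in ps) at four storage elements, measured from the clock root.
    arrivals = {"FF_A": 12.0, "FF_B": 17.5, "FF_C": 9.0, "FF_D": 14.0}
    print(global_skew(arrivals))                                             # 8.5 ps
    print(worst_local_skew(arrivals, [("FF_A", "FF_B"), ("FF_C", "FF_D")]))  # 5.5 ps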

Clock Jitter: Clock jitter is defined as the temporal variation of the clock signal with regard to the reference transition (reference edge) of the clock signal, as illustrated in Fig. 1.17. Clock jitter represents edge-to-edge variation of the clock signal in time, and can be classified as long-term jitter and cycle-to-cycle (or edge-to-edge) jitter. Edge-to-edge clock jitter is the variation of the clock signal between two consecutive clock edges. In high-speed logic design we are more concerned with cycle-to-cycle jitter, because it is this phenomenon that affects the time available to the logic. Long-term jitter represents the variation of the clock edge over a large number of clock cycles. While short-term jitter depends on the type and quality of the clock generator, long-term jitter is the result of accumulated effects. Long-term jitter mainly affects communication and synchronization between blocks within a system that are some distance apart and need to operate in synchrony.
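Both kinds of jitter can be read off a list of measured edge timestamps; the following is a hedged sketch (Python, with invented measurement data):

    def jitter_ns(edge_times_ns, nominal_period_ns):
        """Cycle-to-cycle and long-term jitter from measured rising-edge timestamps (ns)."""
        periods = [t2 - t1 for t1, t2 in zip(edge_times_ns, edge_times_ns[1:])]
        # Cycle-to-cycle (edge-to-edge) jitter: worst change between consecutive periods.
        cycle_to_cycle = max(abs(p2 - p1) for p1, p2 in zip(periods, periods[1:]))
        # Long-term jitter: drift of the last edge from where an ideal clock would place it.
        n = len(edge_times_ns) - 1
        long_term = abs((edge_times_ns[-1] - edge_times_ns[0]) - n * nominal_period_ns)
        return cycle_to_cycle, long_term

    # Hypothetical measurements of a nominally 1 ns (1 GHz) clock.
    edges = [0.00, 1.02, 1.99, 3.01, 4.05]
    print(jitter_ns(edges, 1.0))   # roughly (0.05, 0.05)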

Other circuits: Some sensitive mixed-signal circuits, such as precision analog-to-digital converters, use sine waves rather than square waves as their clock signals, because square waves contain high-frequency harmonics that can interfere with the analog circuitry and cause noise. Such sine-wave clocks are often differential signals, because this type of signal has twice the slew rate, and therefore half the timing uncertainty, of a single-ended signal with the same voltage range. Differential signals also radiate less strongly than a single line; alternatively, a single line shielded by power and ground lines can be used. In CMOS circuits, gate capacitances are charged and discharged continually. A capacitor does not dissipate energy, but energy is wasted in the driving transistors. In reversible computing, inductors can be used to store this energy and reduce the loss, but they tend to be quite large. Alternatively, using a sine-wave clock, CMOS transmission gates and energy-saving techniques, the power requirements can be reduced.

Distribution: The most effective way to get the clock signal to every part of a chip that needs it, with the lowest skew, is a metal grid. In a large microprocessor, the power used to drive the clock signal can be over 30% of the total power used by the entire chip. The clock signal must be propagated through a clock distribution network, often implemented as a recursive H-tree. The whole structure, with the gates at the ends and all the amplifiers in between, has to be loaded and unloaded every cycle. To save energy, unused parts of the tree may be temporarily cut off (clock gating).
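A recursive H-tree places every leaf at (nominally) the same wire distance from the root, which is why it gives low skew by construction. The sketch below (Python, a purely geometric toy model) generates the tap points of such a tree:

    def h_tree_leaves(x, y, length, depth):
        """Leaf (tap) points of an H-tree clock network, recursing from the centre (x, y).

        Each level is an 'H' of the given size; its four end points become the centres
        of the next, half-sized level, so all leaves end up equidistant from the root.
        """
        if depth == 0:
            return [(x, y)]
        half = length / 2.0
        ends = [(x - half, y - half), (x - half, y + half),
                (x + half, y - half), (x + half, y + half)]
        leaves = []
        for ex, ey in ends:
            leaves += h_tree_leaves(ex, ey, half, depth - 1)
        return leaves

    # Three levels of recursion over a notional 16 mm die give 4**3 = 64 equidistant taps.
    print(len(h_tree_leaves(0.0, 0.0, 8.0, 3)))   # -> 64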

THEORY OF CLOCKED STORAGE ELEMENTS:The function of a clocked storage element is to capture the information at a particular moment in time and preserve it for as long as it is needed by the digital system. Having said this, it is not possible to define a storage element without defining its relationship to a clocking mechanism in a digital system, which is used to determine discrete time events. This definition is general and should include various ways of implementing a digital system. More particularly, the element that determines time in a synchronous system is the clock.

FLIP-FLOP: The main feature of the flip-flop is that the process of capturing data is tied to a transition of the clock (from 0 to 1 or from 1 to 0); thus the flip-flop is not transparent. Flip-flop-based systems are therefore easier to model, and timing tools find them simpler and less problematic to analyze. The precise point in time when data are captured is determined by the clock event, designated as either the leading or the trailing edge of the clock. In other words, the transition of the clock from logic 0 to logic 1 causes data to be captured (it is the 1-to-0 transition in a trailing-edge-triggered flip-flop). In general, the flip-flop is not transparent, since the clock transition is assumed to be almost instantaneous. As we will see later, even the flip-flop can have a very small period of transparency, associated with the narrow time window during which the clock changes. In general, however, we treat the flip-flop as a nontransparent clocked storage element.

Given that the triggering mechanism of a flip-flop is the transition of the clock signal, there are several ways of deriving the flip-flop structure. To better understand its functionality, it helps to look at an early version of a flip-flop, shown in Fig. 2.10, that was used in early computers and digital systems (see Siewiorek et al. 1982). The pulse which causes the change is derived from the triggering signal (also referred to as the trigger) by a simple differentiator consisting of a capacitor C and a resistor R. This also explains the flip-flop's sensitivity to glitches: if the triggering signal transition is slow, a pulse derived in this way may not be capable of triggering the flip-flop, while even a small glitch on the trigger line can cause false triggering.

To further our understanding of the flip-flop, it is helpful to distinguish between the flip-flop and the latch-based CSE, since the two operate on different principles. The latch is level-sensitive, meaning it reacts to the level (logical value) of the clock signal, while the flip-flop is edge-sensitive, meaning that the mechanism that captures the data value on its input is tied to changes in the clock signal. Level sensitivity implies that the latch captures data during the entire period of time when the clock is active (logic 1), which means the latch is transparent. The two are designed from different sets of requirements, and so consist of inherently different circuit topologies. The general structure of the flip-flop is shown in Fig. 2.11a, and the difference between a flip-flop structure and the M-S latch is shown in Fig. 2.11b. A behavioral sketch contrasting the two capture mechanisms is given below.
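The contrast between edge-triggered and level-sensitive capture can be shown with a small behavioral model; this is only an illustrative sketch in Python (class and signal names invented), not a circuit description.

    class DFlipFlop:
        """Positive-edge-triggered D flip-flop: captures D only on the 0 -> 1 clock transition."""
        def __init__(self):
            self.q = 0
            self._prev_clk = 0
        def sample(self, clk, d):
            if self._prev_clk == 0 and clk == 1:    # rising edge detected
                self.q = d
            self._prev_clk = clk
            return self.q

    class DLatch:
        """Level-sensitive D latch: transparent (Q follows D) whenever the clock is high."""
        def __init__(self):
            self.q = 0
        def sample(self, clk, d):
            if clk == 1:
                self.q = d
            return self.q

    # Drive both with the same clock and data streams to see the difference.
    clk = [0, 1, 1, 0, 0, 1, 1, 0]
    d   = [1, 1, 0, 0, 1, 1, 0, 0]
    ff, latch = DFlipFlop(), DLatch()
    print([ff.sample(c, x) for c, x in zip(clk, d)])      # [0, 1, 1, 1, 1, 1, 1, 1]  (changes only at rising edges)
    print([latch.sample(c, x) for c, x in zip(clk, d)])   # [0, 1, 0, 0, 0, 1, 0, 0]  (follows D while clk = 1)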

Overclocking: Overclocking is the process of operating a computer component at a higher clock rate (more clock cycles per second) than it was designed for or specified by the manufacturer. It is practiced more by enthusiasts than by professional users, as overclocking carries risks of less reliable functioning and of damage. There are several motivations for overclocking:

1) Professional users overclock to push the boundary of personal computing capacity, improving productivity or allowing over-the-horizon technologies to be tested beyond what the available component specifications permit, before entering the specialized computing realm and its pricing. This leverages the manufacturing practice of specifying components at a level that optimizes yield and profit margin; some components are capable of more.
2) Hobbyists, much like car enthusiasts, enjoy building, tuning and comparison-racing their systems with standardized benchmark software, and often provide help forums for the other groups named here.
3) Some users purchase less expensive components and then overclock them to higher clock rates, taking advantage of marketing strategies that rely on multi-tier pricing of the same component, deliberately hobbled for tiered performance.
4) A similar cost-saving approach is to overclock outdated components to keep pace with new system requirements rather than purchasing new hardware; the risk is low because the system is fully depreciated and a new system would be needed anyway.

People who overclock mainly focus their efforts on processors, video cards, motherboard chipsets and RAM. Overclocking is done by manipulating the CPU multiplier and the motherboard's front-side bus (FSB) clock rate until a maximum stable operating frequency is reached; with the introduction of Intel's X58 chipset and the Core i7 processor, the front-side bus was replaced by the QPI (QuickPath Interconnect), and the reference clock is often called the base clock (BCLK). While the idea is simple, variation in the electrical and physical characteristics of computing systems complicates the process. Power consumption of digital circuits increases with frequency or clocking speed. The high-frequency operation of semiconductor devices improves, to a certain extent, with an increase in voltage; but operation at high speed and increased voltage increases power dissipation and heating. Overheating caused by higher dissipation, and operation at higher voltage regardless of power, can cause malfunctioning or permanent damage. Increasing the supplied voltage and improving cooling can raise the maximum stable operating speed, subject to these risks. CPU multipliers, bus dividers, voltages, thermal loads, cooling techniques and several other factors, such as individual semiconductor clock and thermal tolerances, all affect the result. A sketch of the basic multiplier arithmetic follows.
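A hedged sketch (Python; the 133 MHz and x20 figures are illustrative of the Core i7 era, not a specific part) of the multiplier arithmetic that the BIOS settings manipulate:

    def core_clock_mhz(base_clock_mhz, multiplier):
        """Core clock as set in the BIOS: base clock (FSB or BCLK) times the CPU multiplier."""
        return base_clock_mhz * multiplier

    # Illustrative Core i7-era numbers: a stock 133 MHz BCLK x 20 gives a 2660 MHz core clock.
    print(core_clock_mhz(133, 20))   # -> 2660
    # Raising BCLK to 160 MHz at the same multiplier overclocks the core to 3200 MHz, but it
    # also speeds up everything else derived from BCLK (memory, QPI), which is why voltages
    # and stability must be re-checked after every change.
    print(core_clock_mhz(160, 20))   # -> 3200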

Factors allowing overclocking:Overclockability arises in part due to the economics of the manufacturing processes of CPUs and other components. In most cases components with different rated clock rates are manufactured by the same process, and tested after manufacture to determine their actual ratings. The clock rate that the component is rated for is at or below the clock rate at which the CPU has passed the manufacturer's functionality tests when operating in worst-case conditions (for example, the highest allowed temperature and lowest allowed supply voltage). Manufacturers must also leave additional margin for reasons discussed below. Sometimes manufacturers produce more high-performing parts than they can sell, so some are marked as medium-performance chips to be sold for medium prices. Pentium architect Bob Colwell calls overclocking an "uncontrolled experiment in better-than-worst-case system operation".

Measuring effects of overclocking: Benchmarks are used to evaluate performance. Benchmarking can itself become a kind of 'sport', in which users compete for the highest scores. As discussed above, stability and functional correctness may be compromised when overclocking, and meaningful benchmark results depend on correct execution of the benchmark. Because of this, benchmark scores may be qualified with stability and correctness notes (e.g. an overclocker may report a score, noting that the benchmark only runs to completion 1 time in 5, or that signs of incorrect execution such as display corruption are visible while running the benchmark). A widely used stability test is Prime95, which has built-in error checking and fails if the computer is unstable. Given only benchmark scores, it may be difficult to judge the difference overclocking makes to the overall performance of a computer. For example, some benchmarks test only one aspect of the system, such as memory bandwidth, without taking into consideration how higher clock rates in this aspect will improve system performance as a whole. Apart from demanding applications such as video encoding, high-demand databases and scientific computing, memory bandwidth is typically not a bottleneck, so a great increase in memory bandwidth may be unnoticeable to a user, depending on the applications used. Other benchmarks, such as 3DMark, attempt to replicate game conditions.

Advantages:-
The user can, in many cases, purchase a lower-performance, cheaper component and overclock it to the clock rate of a more expensive component, obtaining higher performance in games, encoding, video-editing applications and system tasks at no additional expense (but at an increased cost for electrical power consumption). Particularly for enthusiasts who regularly upgrade their hardware, overclocking can increase the time before an upgrade is needed. Some systems have "bottlenecks", where a small overclock of one component can help realize the full potential of another component to a greater percentage than the limiting hardware itself is overclocked. For instance, many motherboards with AMD Athlon 64 processors limit the clock rate of four units of RAM to 333 MHz, while the memory clock is computed by dividing the processor clock rate (which is a base clock times a CPU multiplier; for instance, 1.8 GHz is most likely 9 x 200 MHz) by a fixed integer such that, at a stock clock rate, the RAM would run at a clock rate near 333 MHz. By manipulating how the processor clock rate is set (usually lowering the multiplier), one can often overclock the processor a small amount, around 100-200 MHz (less than 10%), and gain a RAM clock rate of 400 MHz (a 20% increase), releasing the full potential of the RAM; a simplified model of this calculation is sketched below. Overclocking can also be an engaging hobby in itself and supports many dedicated online communities. The PCMark website is one such site, hosting a leader-board of the most powerful computers benchmarked with the program. A new overclocker, with proper research and precaution or a guiding hand, can gain useful knowledge and hands-on experience about their own system and PC systems in general.
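A simplified model of the memory-divider behaviour described above (Python; the exact divider-selection rule is assumed for illustration, and DDR333/DDR400 are quoted by their real clocks of roughly 166 and 200 MHz):

    import math

    def athlon64_style_clocks(base_clock_mhz, multiplier, rated_ram_mhz):
        """Assumed model of a core clock and its derived memory clock.

        The core clock is the base clock times the CPU multiplier. The memory divider is
        taken as the smallest integer that keeps the RAM at or below its rated real clock
        at the nominal 200 MHz base clock; the RAM clock is the core clock divided by it.
        """
        divider = math.ceil(multiplier * 200 / rated_ram_mhz)
        core = base_clock_mhz * multiplier
        return core, core / divider

    # Stock: 200 MHz x 9 = 1800 MHz core; DDR333 RAM has a real clock of ~166 MHz, so the
    # divider comes out as 11 and the RAM actually runs at only ~164 MHz.
    print(athlon64_style_clocks(200, 9, 166))   # -> (1800, ~163.6)
    # Lowering the multiplier to 8 and raising the base clock to 240 MHz gives a modest ~7%
    # core overclock (1920 MHz) but a divider of 10, so the RAM jumps to 192 MHz (~17% more,
    # approaching DDR400 territory): the "released" headroom described above.
    print(athlon64_style_clocks(240, 8, 166))   # -> (1920, 192.0)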

Disadvantages:-

Many of the disadvantages of overclocking can be mitigated or reduced in severity by skilled overclockers. However, novice overclockers may make mistakes while overclocking which can introduce avoidable drawbacks and which are more likely to damage the overclocked components (as well as other components they might affect).

Incorrectly performed overclocking: Increasing the operating frequency of a component will usually increase its thermal output roughly linearly, while an increase in voltage usually causes heat to increase quadratically. Excessive voltages or improper cooling may cause chip temperatures to rise almost instantaneously, causing the chip to be damaged or destroyed. More common than hardware failure is functional incorrectness: although the hardware is not permanently damaged, this is inconvenient and can lead to instability and data loss. In rare, extreme cases, entire file-system failure may occur, causing the loss of all data. With poor placement of fans, turbulence and vortices may be created in the computer case, resulting in reduced cooling effectiveness and increased noise; in addition, improper fan mounting may cause rattling or vibration. Improper installation of exotic cooling solutions such as liquid cooling may result in failure of the cooling system, which may result in water damage. With sub-zero cooling methods such as phase-change cooling or liquid nitrogen, extra precautions such as foam or spray insulation must be taken to prevent water from condensing on the PCB and other areas. This can cause the board to become "frosted", or covered in frost; while the water is frozen it is usually safe, but once it melts it can cause shorts and other serious problems. Finally, some products claim to be intended specifically for overclocking but may be merely decorative, and novice buyers should be aware of the marketing hype surrounding some products. Examples include heat spreaders and heat sinks designed for chips (or components, such as capacitors) which do not generate enough heat to benefit from these devices.
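The "linear in frequency, quadratic in voltage" behaviour follows from the standard approximation for CMOS dynamic power, P = a * C * V^2 * f; the sketch below (Python, illustrative numbers) shows how a modest overclock with a voltage bump compounds into a much larger power increase.

    def dynamic_power_w(switched_capacitance_f, voltage_v, frequency_hz, activity=1.0):
        """Approximate CMOS dynamic (switching) power: P ~ activity * C * V^2 * f."""
        return activity * switched_capacitance_f * voltage_v ** 2 * frequency_hz

    # Hypothetical chip: 1 nF of switched capacitance at 1.2 V and 3.0 GHz dissipates ~4.3 W.
    p_stock = dynamic_power_w(1e-9, 1.20, 3.0e9)
    # A 10% overclock combined with a 10% voltage bump costs about 1.1 * 1.1**2 = 1.33x the power.
    p_oc = dynamic_power_w(1e-9, 1.32, 3.3e9)
    print(p_stock, p_oc, p_oc / p_stock)   # ~4.32 W, ~5.75 W, ~1.33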

Limitations: The utility of overclocking is limited for a few reasons. Personal computers are mostly used for tasks that are not computationally demanding, or that are performance-limited by bottlenecks outside the local machine. For example, web browsing does not require a high-performance computer, and the limiting factor will almost certainly be the bandwidth of the Internet connection of either the user or the server. Overclocking a processor will also do little to reduce application loading times, since the limiting factor is reading data off the hard drive. Other general office tasks, such as word processing and sending e-mail, depend more on the efficiency of the user than on the performance of the hardware. In these situations any performance increase through overclocking is unlikely to be noticeable. It is generally accepted that, even for computationally heavy tasks, clock rate increases of less than ten percent are difficult to discern; for example, when playing video games, it is difficult to notice an increase from 60 to 66 frames per second (FPS) without the aid of an on-screen frame counter. Overclocking a processor will rarely improve gaming performance noticeably, as the frame rates achieved in most modern games are usually bound by the GPU at resolutions beyond 1024x768. One exception to this rule is when the overclocked component is the bottleneck of the system, in which case the greatest gains are seen. Computational workloads that require absolute mathematical accuracy, such as spreadsheets and banking applications, benefit significantly from deterministic and correct processor operation.

Future scope: Clocking systems of this kind are used in modern VLSI designs to improve processor speed.

Major companies that design clocking systems: Intel, AMD, NVIDIA, Apple, etc.
