
What is the Difference Between Static and Dynamic RAM

by ADMIN on JUNE 26, 2008

A computer uses RAM, or memory, as a place to store data between the times the processor is using it. For example, a processor might be adding up a series of numbers and would use the RAM as a place to store all the numbers it's working with. There are two kinds of RAM used in computers, so what's the difference between static and dynamic RAM?

The most common form of RAM in a computer is dynamic RAM. Each chip contains millions of tiny memory cells, each made up of a transistor and a capacitor, and each able to hold one bit of information: a 0 or a 1. To store a bit of information, the computer puts a tiny amount of power into the cell to charge the capacitor, but this energy leaks out quickly. So to keep information in dynamic RAM, your computer needs to recharge all the cells in the memory chip every few milliseconds, or all the data is lost. This constant refreshing gives dynamic RAM its name.

Static RAM, on the other hand, works with a completely different technology. Each cell holds a bit of information in a flip-flop circuit that can be switched between 0 and 1 and doesn't need to be refreshed, although it requires more transistors to make it work. Because it never needs to be refreshed, it uses less power and operates much more quickly. But it's much more expensive to manufacture. All modern computers use a tiny amount of static RAM as a cache close to the CPU, where it's most needed to help perform calculations quickly, and then larger quantities of dynamic RAM to hold programs and data.

Your computer probably uses both static RAM and dynamic RAM at the same time, but it uses them for different reasons because of the cost difference between the two types. If you understand how dynamic RAM and static RAM chips work inside, it is easy to see why the cost difference is there, and you can also understand the names.

Dynamic RAM is the most common type of memory in use today. Inside a dynamic RAM chip, each memory cell holds one bit of information and is made up of two parts: a transistor and a capacitor. These are, of course, extremely small transistors and capacitors, so that millions of them can fit on a single memory chip. The capacitor holds the bit of information, a 0 or a 1 (see How Bits and Bytes Work for information on bits). The transistor acts as a switch that lets the control circuitry on the memory chip read the capacitor or change its state.

A capacitor is like a small bucket that is able to store electrons. To store a 1 in the memory cell, the bucket is filled with electrons. To store a 0, it is emptied. The problem with the capacitor's bucket is that it has a leak: in a matter of a few milliseconds a full bucket becomes empty. Therefore, for dynamic memory to work, either the CPU or the memory controller has to come along and recharge all of the capacitors holding a 1 before they discharge. To do this, the memory controller reads the memory and then writes it right back. This refresh operation happens automatically thousands of times per second, and it is where dynamic RAM gets its name: dynamic RAM has to be dynamically refreshed all of the time or it forgets what it is holding. The downside of all of this refreshing is that it takes time and slows down the memory.

Static RAM uses a completely different technology. In static RAM, a form of flip-flop holds each bit of memory (see How Boolean Gates Work for detail on flip-flops). A flip-flop for a memory cell takes four or six transistors along with some wiring, but never has to be refreshed. This makes static RAM significantly faster than dynamic RAM. However, because it has more parts, a static memory cell takes up much more space on a chip than a dynamic memory cell. You therefore get less memory per chip, and that makes static RAM a lot more expensive.

So static RAM is fast and expensive, and dynamic RAM is less expensive and slower. Static RAM is therefore used to create the CPU's speed-sensitive cache, while dynamic RAM forms the larger system RAM space.
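The leaky-bucket behaviour can be sketched in a few lines of C. This is a toy model, not real hardware: the cell count, charge levels, leak rate, and refresh interval are all invented illustration values.

#include <stdio.h>

#define CELLS 8
#define FULL 100      /* charge level representing a stored 1 */
#define THRESHOLD 50  /* a cell reading above this counts as a 1 */
#define LEAK 20       /* charge lost per tick (made-up rate) */

static int charge[CELLS];

/* reading a cell compares its remaining charge against the threshold */
static int read_cell(int i) { return charge[i] > THRESHOLD; }

/* refresh: read every cell and write the value back at full strength */
static void refresh(void) {
    for (int i = 0; i < CELLS; i++)
        charge[i] = read_cell(i) ? FULL : 0;
}

int main(void) {
    for (int i = 0; i < CELLS; i++)     /* store the pattern 10101010 */
        charge[i] = (i % 2 == 0) ? FULL : 0;

    for (int tick = 1; tick <= 6; tick++) {
        for (int i = 0; i < CELLS; i++) /* every capacitor leaks each tick */
            if (charge[i] > 0) charge[i] -= LEAK;
        if (tick % 2 == 0) refresh();   /* periodic refresh rescues the 1s */
        printf("tick %d:", tick);
        for (int i = 0; i < CELLS; i++)
            printf(" %d", read_cell(i));
        printf("\n");
    }
    return 0;
}

Comment out the refresh() call and the stored 1s read back as 0s after the third tick, which is exactly the data loss the refresh cycle exists to prevent.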

Subroutine
From Wikipedia, the free encyclopedia

In computer science, a subroutine (also known as a procedure, function, routine, method, or subprogram) is a portion of code within a larger program that performs a specific task and is relatively independent of the remaining code. As the name "subprogram" suggests, a subroutine behaves in much the same way as a computer program that is used as one step in a larger program or another subprogram. A subroutine is often coded so that it can be started ("called") several times and/or from several places during a single execution of the program, including from other subroutines, and then branch back (return) to the next instruction after the "call" once the subroutine's task is done.

Subroutines are a powerful programming tool,[1] and the syntax of many programming languages includes support for writing and using them. Judicious use of subroutines (for example, through the structured programming approach) will often substantially reduce the cost of developing and maintaining a large program, while increasing its quality and reliability.[2] Subroutines, often collected into libraries, are an important mechanism for sharing and trading software. The discipline of object-oriented programming is based on objects and methods (which are subroutines attached to these objects or object classes). In the compilation technique called threaded code, the executable program is basically a sequence of subroutine calls.

Maurice Wilkes, David Wheeler, and Stanley Gill are credited with the invention of this concept, which they referred to as a closed subroutine.[3]
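A minimal C sketch of the idea (the function names max2 and max3 are invented for illustration): a subroutine is defined once, called from several places, including from another subroutine, and control returns to the point just after each call.

#include <stdio.h>

/* the subroutine: computes the larger of two values and returns it */
static int max2(int a, int b) {
    return (a > b) ? a : b;
}

/* another subroutine that itself calls max2 */
static int max3(int a, int b, int c) {
    return max2(max2(a, b), c);
}

int main(void) {
    printf("%d\n", max2(3, 7));     /* first call site: prints 7 */
    printf("%d\n", max3(4, 9, 2));  /* call via another subroutine: prints 9 */
    return 0;                       /* execution resumed after each call */
}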

Contents

1 Main concepts
2 Language support
3 Advantages
4 Disadvantages
5 History
  5.1 Language support
  5.2 Self-modifying code
  5.3 Subroutine libraries
  5.4 Return by indirect jump
  5.5 Jump to subroutine
  5.6 Call stack
  5.7 Delayed stacking
6 C and C++ examples
7 Visual Basic 6 examples
  7.1 By value [ByVal]
  7.2 By reference [ByRef]
  7.3 Public (optional)
  7.4 Private (optional)
  7.5 Friend (optional)
8 Local variables, recursion and re-entrancy
9 Overloading
10 Closures
11 Conventions
  11.1 Return codes
12 Optimization of subroutine calls
  12.1 Inlining
13 Related terms and clarification
14 See also
15 References

Main concepts

The content of a subroutine is its body, the piece of program code that is executed when the subroutine is called or invoked.

A subroutine may be written so that it expects to obtain one or more data values from the calling program (its parameters or arguments). It may also return a computed value to its caller (its return value), or provide various result values or output parameters. Indeed, a common use of subroutines is to implement mathematical functions, in which the purpose of the subroutine is purely to compute one or more results whose values are entirely determined by the parameters passed to the subroutine. (Examples might include computing the logarithm of a number or the determinant of a matrix.)

However, a subroutine call may also have side effects, such as modifying data structures in the computer's memory, reading from or writing to a peripheral device, creating a file, halting the program or the machine, or even delaying the program's execution for a specified time. A subprogram with side effects may return different results each time it is called, even if it is called with the same arguments. An example is a random number function, available in many languages, that returns a different random-looking number each time it is called. The widespread use of subroutines with side effects is a characteristic of imperative programming languages.

A subroutine can be coded so that it may call itself recursively, at one or more places, in order to perform its task. This technique allows direct implementation of functions defined by mathematical induction and of recursive divide-and-conquer algorithms.

A subroutine whose purpose is to compute a single boolean-valued function (that is, to answer a yes/no question) is called a predicate. In logic programming languages, all subroutines are called "predicates", since they primarily determine success or failure. For example, in C every function is a subroutine, with the exception of main(), which is invoked by the runtime environment rather than called from within the program.
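A hedged C sketch of the four ideas just described: a pure function, a function with side effects, a recursive function, and a predicate. The names are invented for illustration, and rand() stands in for the "random number function" mentioned above.

#include <stdio.h>
#include <stdlib.h>

/* pure function: result determined entirely by its parameter */
static long square(long n) { return n * n; }

/* side effects: rand() yields a different value on repeated calls,
   so this subprogram is not a mathematical function */
static int roll_die(void) { return rand() % 6 + 1; }

/* recursion: factorial defined by mathematical induction */
static long factorial(long n) { return (n <= 1) ? 1 : n * factorial(n - 1); }

/* predicate: answers a yes/no question */
static int is_even(long n) { return n % 2 == 0; }

int main(void) {
    printf("%ld %d %ld %d\n",
           square(5), roll_die(), factorial(5), is_even(4));
    return 0;   /* e.g. prints: 25 <die roll> 120 1 */
}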
Language support

High-level programming languages usually include specific constructs for the following (illustrated in the sketch after this list):

 delimiting the part of the program (body) that comprises the subroutine
 assigning a name to the subroutine
 specifying the names and/or types of its parameters and/or return values
 providing a private naming scope for its temporary variables
 identifying variables outside the subroutine that are accessible within it
 calling the subroutine
 providing values to its parameters
 specifying the return values from within its body
 returning to the calling program
 disposing of the values returned by a call
 handling any exceptional conditions encountered during the call
 packaging subroutines into a module, library, object, class, etc.

Some programming languages, such as Visual Basic .NET, Pascal, Fortran, and Ada, distinguish between "functions" or "function subprograms", which provide an explicit return value to the calling program, and "subroutines" or "procedures", which do not. In those languages, function calls are normally embedded in expressions (e.g., a sqrt function may be called as y = z + sqrt(x)), whereas procedure calls behave syntactically as statements (e.g., a print procedure may be called as if x > 0 then print(x)). Other languages, such as C and Lisp, do not make this distinction and treat those terms as synonymous.

In strictly functional programming languages such as Haskell, subprograms can have no side effects and will always return the same result if repeatedly called with the same arguments. Such languages typically only support functions, since subroutines that do not return a value have no use unless they can cause a side effect. In programming languages such as C, C++, and C#, subroutines may also simply be called "functions", not to be confused with mathematical functions or functional programming, which are different concepts.

A language's compiler will usually translate procedure calls and returns into machine instructions according to a well-defined calling convention, so that subroutines can be compiled separately from the programs that call them. The instruction sequences corresponding to call and return statements are called the procedure's prologue and epilogue.
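In C the distinction collapses into a single construct, as this small sketch (with an invented print_positive procedure) illustrates: a value-returning function appears inside an expression, while a void function is called as a statement.

#include <math.h>
#include <stdio.h>

static void print_positive(double x)  /* "procedure": returns no value */
{
    if (x > 0) printf("%f\n", x);
}

int main(void)
{
    double z = 2.0, x = 9.0;
    double y = z + sqrt(x);  /* function call embedded in an expression */
    print_positive(y);       /* procedure-style call used as a statement */
    return 0;                /* prints 5.000000 */
}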
Advantages

The advantages of breaking a program into subroutines include:

 decomposition of a complex programming task into simpler steps: this is one of the two main tools of structured programming, along with data structures
 reducing the duplication of code within a program
 enabling the reuse of code across multiple programs
 dividing a large programming task among various programmers, or various stages of a project
 hiding implementation details from users of the subroutine
 improving traceability: most languages offer ways to obtain a call trace that includes the names of the involved subroutines and perhaps additional information such as file names and line numbers; without decomposition into subroutines, debugging would be severely impaired

Disadvantages

 The invocation of a subroutine (rather than using in-line code) imposes some computational overhead in the call mechanism itself.
 The subroutine typically requires standard housekeeping code both at entry to and exit from the function (the function prologue and epilogue, usually saving general-purpose registers and the return address at a minimum), as sketched below.
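A small C sketch of the trade-off, using invented names: each ordinary call to add1 would execute a prologue and epilogue around a single useful instruction, so the inline qualifier asks the compiler to expand the body at the call site instead (compare the "Inlining" entry in the contents above).

/* a trivial subroutine whose call overhead could dwarf its useful work */
static inline int add1(int x)  /* 'inline' hints that the body be expanded
                                  at the call site, avoiding the prologue
                                  and epilogue on every call */
{
    return x + 1;
}

int main(void)
{
    int total = 0;
    for (int i = 0; i < 1000; i++)
        total = add1(total);   /* if inlined, no call/return executes here */
    return total == 1000 ? 0 : 1;
}

Whether the compiler honors the hint is implementation-dependent; modern optimizers often inline small functions even without it.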
