3) Word: The size of a word varies from one computer to another, depending on
the CPU. For computers with a 16-bit CPU, a word is 16 bits (2 bytes). On large
mainframes, a word can be as long as 64 bits (8 bytes). Some computers and
programming languages distinguish between shortwords and longwords. A
shortword is usually 2 bytes long, while a longword is 4 bytes.
------- Summary -------
1 bit = 0 or 1
1 Byte = 8 bits
1 word = 2 Bytes = 2 × (8 bits) = 16 bits
1 Double Word = 4 Bytes = 4 × (8 bits) = 32 bits
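As a quick check, the unit relationships in this summary can be written out in a few lines of Python (purely illustrative; the word size assumes a 16-bit CPU as in the definition above):

```python
BITS_PER_BYTE = 8

# Unit relationships from the summary (word size assumes a 16-bit CPU).
byte_bits = 1 * BITS_PER_BYTE         # 8 bits
word_bits = 2 * BITS_PER_BYTE         # 16 bits
double_word_bits = 4 * BITS_PER_BYTE  # 32 bits

print(byte_bits, word_bits, double_word_bits)  # 8 16 32
```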
A bus, in computing, is a set of physical connections (cables, printed circuits, etc.) which
can be shared by multiple hardware components in order to communicate with one
another.
The purpose of buses is to reduce the number of "pathways" needed for communication
between the components, by carrying out all communications over a single data channel.
This is why the metaphor of a "data highway" is sometimes used.
If only two hardware components communicate over the line, it is called a hardware
port (such as a serial port or parallel port).
Characteristics of a bus
A bus is characterised by the amount of information that can be transmitted at once. This
amount, expressed in bits, corresponds to the number of physical lines over which data is
sent simultaneously. A 32-wire ribbon cable can transmit 32 bits in parallel. The term
"width" is used to refer to the number of bits that a bus can transmit at once.
Additionally, the bus speed is defined by its frequency (expressed in Hertz): the
number of data packets sent or received per second. Each time that data is sent or
received is called a cycle.
This way, it is possible to find the maximum transfer speed of the bus, the amount of
data which it can transport per unit of time, by multiplying its width by its frequency. A
bus with a width of 16 bits and a frequency of 133 MHz therefore has a transfer speed
equal to:
16 × 133 × 10^6 = 2128 × 10^6 bit/s, or 2128 × 10^6 / 8 = 266 × 10^6 bytes/s (266 MB/s)
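A minimal sketch of this width × frequency calculation, using the 16-bit, 133 MHz example bus:

```python
def bus_transfer_speed(width_bits, frequency_hz):
    """Maximum transfer speed in bits per second: width multiplied by frequency."""
    return width_bits * frequency_hz

bps = bus_transfer_speed(16, 133_000_000)  # 16-bit bus at 133 MHz
print(bps)       # 2128000000 bits/s
print(bps // 8)  # 266000000 bytes/s, i.e. 266 MB/s
```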
Bus subassembly
In reality, each bus is generally constituted of 50 to 100 distinct physical lines, divided
into three subassemblies:
• The address bus (sometimes called the memory bus) transports memory
addresses which the processor wants to access in order to read or write data. It is a
unidirectional bus.
• The data bus transfers instructions coming from or going to the processor. It is a
bidirectional bus.
• The control bus (or command bus) transports orders and synchronisation signals
coming from the control unit and travelling to all other hardware components. It is
a bidirectional bus, as it also transmits response signals from the hardware.
There are generally two main buses within a computer:
• the internal bus (sometimes called the front-side bus, or FSB for short). The
internal bus allows the processor to communicate with the system's central
memory (the RAM).
• the expansion bus (sometimes called the input/output bus) allows various
motherboard components (USB, serial, and parallel ports, cards inserted in PCI
connectors, hard drives, CD-ROM and CD-RW drives, etc.) to communicate with
one another. However, it is mainly used to add new devices using what are called
expansion slots connected to the input/output bus.
The chipset
A chipset is the component which routes data between the computer's buses, so that all
the components which make up the computer can communicate with each other. The
chipset was originally made up of a large number of electronic chips, hence the name. It
generally has two components, typically called the NorthBridge (which handles traffic
between the processor and the RAM) and the SouthBridge (which handles the slower
peripheral devices).
[Table: specifications for the most commonly used buses]
fre·quen·cy [free-kwuhn-see]
–noun, plural -cies.
1. Also, fre·quence. the state or fact of being frequent; frequent
occurrence: We are alarmed by the frequency of fires in the neighborhood.
2. rate of occurrence: The doctor has increased the frequency of his visits.
3. Physics.
a. the number of periods or regularly occurring events of any given kind
in unit of time, usually in one second.
b. the number of cycles or completed alternations per unit time of a wave
or oscillation. Symbol: F; Abbreviation: freq.
4. Mathematics. the number of times a value recurs in a unit change of
the independent variable of a given function.
5. Statistics. the number of items occurring in a given category.
Calculate Frequency
Frequency is the number of repetitions of a periodic process per second and is measured
in Hertz (Hz). Light and sound are examples of such periodic processes: light is an
electromagnetic wave, while sound is a pressure wave. Frequency is a way to distinguish
among different types of waves. For instance, the frequency of sound is between 20 and
20,000 Hz, while the frequency of visible light is roughly ten billion times higher.
Frequency can be calculated either from the energy or from the wavelength using the
fundamental physical constants.
Instructions
You will need: a calculator.
1. Find the fundamental constants using the links shown below in "Resources."
Speed of light (c) = 299,792,458 m/s. Planck constant (h) = 4.13566733E-15 eV·s
(1E-15 denotes "10 to the power -15").
2. Divide energy by the Planck constant to calculate frequency. Frequency (Hz) =
Energy/h = Energy/4.13566733E-15 eV·s. Energy needs to be expressed in "electron
volt (eV)" units. For example, to calculate the frequency of the ultraviolet (UV) light
with the energy 4.5 eV: Frequency = 4.5 eV / 4.13566733E-15 eV·s = 1.09E15
Hz = 1,090 THz. Note that the prefix "Tera (T)-" implies a magnitude of 1E12 (see
resources).
3. Divide the speed of light by the wavelength to calculate frequency. Frequency (Hz) =
c/wavelength (m). For example, to calculate the frequency of the visible red light with
the wavelength of 700 nanometers (nm): 700 nm equals 7E-7 m.
Frequency = 299,792,458 (m/s) / 7E-7 m = 4.28E14 Hz = 428 THz.
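The two calculations above can be sketched in a few lines; the constant values are the ones quoted in step 1:

```python
C = 299_792_458     # speed of light, m/s
H = 4.13566733e-15  # Planck constant, eV*s

def freq_from_energy(energy_ev):
    """Step 2: frequency (Hz) = energy (eV) / h."""
    return energy_ev / H

def freq_from_wavelength(wavelength_m):
    """Step 3: frequency (Hz) = c / wavelength (m)."""
    return C / wavelength_m

print(freq_from_energy(4.5))        # ~1.09e15 Hz, about 1,090 THz (UV light)
print(freq_from_wavelength(700e-9)) # ~4.28e14 Hz, about 428 THz (red light)
```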
In computing, a cycle (or, more specifically, a clock cycle) is the basic unit of
measurement that the CPU uses to carry out instructions given to it by software.
Therefore, in a CPU running at 900 MHz, 900 million clock cycles will occur per second.
Software sends commands to the processor called instructions. These commands are the
basis for how all programs run on a computer and are handled by the computer in a very
complicated manner.
To complicate matters further, it is not accurate to say that a higher-speed processor is
better than another one at a lower speed. Certain AMD processors, for example, run at
lower speeds than comparable Intel processors of their family but, because they use a
different architecture, perform at the same (and, sometimes, higher) performance levels
than CPUs with high clock speeds.
Also, processors with some form of Hyper-Threading technology or, better yet, multiple
cores (like Intel Core 2 Duo processors) will be rated at lower speeds than other CPUs
in their price range but, because more than one (virtual) processor runs in parallel with
the others, more instructions are performed per clock cycle.
There are also a few more factors to consider, but this is the gist of it.
Processor states
While a device or processor operates (D0 and C0, respectively), it can be in one of
several power-performance states. These states are implementation-dependent, but P0 is
always the highest-performance state, with P1 to Pn being successively lower-
performance states, up to an implementation-specific limit of n no greater than 16.
Baud rate
The baud rate is the number of times per second a serial communication signal
changes state; a state being either a voltage level, a frequency, or a frequency
phase angle.
If the signal changes once for each data bit, then one bps is equal to one baud. For
example, a 300 baud modem changes its state 300 times a second.
Bit rate measures the number of data bits (that is, 0s and 1s) transmitted in one second
in a communication channel. A figure of 2400 bits per second means 2400 zeros or ones
can be transmitted in one second, hence the abbreviation "bps". Individual characters (for
example letters or numbers), which are also referred to as bytes, are composed of several
bits.
The number of bits per baud is determined by the modulation technique. Here are two
examples:
When FSK ("Frequency Shift Keying", a transmission technique) is used, each baud
transmits one bit; only one change in state is required to send a bit. Thus, the modem's
bps rate is equal to the baud rate.
When a baud rate of 2400 is used with a modulation technique called phase modulation
that transmits four bits per baud, the bit rate is: 2400 baud × 4 bits per baud = 9600 bps.
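Both examples follow the same rule, bps = baud × bits per baud; a minimal sketch:

```python
def bits_per_second(baud_rate, bits_per_baud):
    """bps = state changes per second x bits encoded per change."""
    return baud_rate * bits_per_baud

print(bits_per_second(300, 1))   # FSK, one bit per baud: 300 bps
print(bits_per_second(2400, 4))  # phase modulation, four bits per baud: 9600 bps
```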
Parallel data transfer refers to the type of data transfer in which a group of bits is
transferred simultaneously, while serial data transfer refers to the type of data
transfer in which a group of data bits is transferred one bit at a time. So that means
that the amount of data transferred serially is less than the data transferred in parallel
per second.
But serial data transfer requires fewer cables, so if the data has to be transmitted over
longer distances, serial data transfer is preferred.
Up to this point all is well and good. The point that I don't understand is that in
computers many of the interfaces that were once parallel are now being made serial.
Serial ATA, HyperTransport and Multiol are all serial interfaces, and all these
interfaces have data transfer rates greater than the previous parallel interfaces. How
is that so? How come serial data transfer is faster than parallel data transfer?
Although serial interfaces carry fewer bits at a time, they avoid problems such as clock
skew and crosstalk between parallel lines, so they can be clocked at MUCH higher rates
than parallel interfaces, and the clock speed increase outweighs the bandwidth decrease
due to fewer bits being transferred in parallel.
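The tradeoff this answer describes can be sketched with made-up clock rates (these figures are chosen only for illustration and are not real interface specifications):

```python
def throughput_bps(lanes, clock_hz):
    """Raw throughput: number of parallel lines x clock rate."""
    return lanes * clock_hz

# Made-up figures, chosen only to show the tradeoff:
parallel = throughput_bps(lanes=16, clock_hz=66_000_000)  # wide but slow clock
serial = throughput_bps(lanes=1, clock_hz=2_500_000_000)  # narrow but fast clock
print(parallel, serial, serial > parallel)  # the single fast lane wins
```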
Parallel is always faster in principle; this is why the CPU, cache and RAM communicate
in parallel, and this is why a 64-bit CPU is faster than a 32- or 8-bit one.
In contrast, a hard disk reads serially. The hard disk has 2 or more heads. Each
head must read the data and then pass it through a special circuit to combine it into
parallel form. New thinking said: do not do this, just transmit it serially. It needs less
time to manipulate data serially than to make it parallel-compatible. In theory it is
also cheaper, since less electronics are involved; in practice it may be more
expensive, since it is a new and faster technology that should sell.
Peripherals outside the PC are preferred to be of serial communication for many
reasons besides speed. The main problem is interference/interaction between
data lines. This is a different story, I think.
Recently PCI Express (serial) was introduced, along with another standard, PCI-X
(high-speed parallel), but you would have heard about PCI Express only, because it is
compact, uses the same type of connector with huge bandwidth, and, above all, is serial.
Serial bus standards are now overtaking parallel ones because of their high speed, their
flexibility (you can easily change the internal protocol, which is not useful or possible
in parallel transfer), and their design and production cost.
ASYNCHRONOUS
Sending data encoded into your signal requires that the sender and receiver are both using
the same encoding/decoding method, and know where to look in the signal to find data.
Asynchronous systems do not send separate information to indicate the encoding or
clocking information. The receiver must decide the clocking of the signal on its own.
This means that the receiver must decide where to look in the signal stream to find ones
and zeroes, and decide for itself where each individual bit stops and starts. This
information is not in the data in the signal sent from the transmitting unit.
When the receiver of a signal carrying information has to derive how that signal is
organized without consulting the transmitting device, it is called asynchronous
communication. In short, the two ends do not always negotiate or work out the
connection parameters before communicating. Asynchronous communication is more
efficient when there are low loss and low error rates over the transmission medium,
because data is not retransmitted and no time is spent negotiating the connection
parameters at the beginning of transmission. Asynchronous systems just transmit and let
the far-end station figure it out. Asynchronous is sometimes called "best effort"
transmission, because one side simply transmits and the other does its best to receive.
EXAMPLES:
Asynchronous communication is used on RS-232 based serial devices such as an
IBM-compatible computer's COM 1, 2, 3, 4 ports. Asynchronous Transfer Mode (ATM)
also uses this means of communication. The PS/2 ports on your computer also use serial
communication. This method is also used to communicate with an external modem.
Asynchronous communication is also used for things like your computer's keyboard and
mouse.
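The start/stop framing used by RS-232-style asynchronous links adds overhead that is easy to quantify; this sketch assumes the classic "8N1" frame (1 start bit, 8 data bits, no parity, 1 stop bit):

```python
def async_efficiency(data_bits=8, start_bits=1, stop_bits=1, parity_bits=0):
    """Fraction of transmitted bits that carry data in an asynchronous frame."""
    frame_bits = start_bits + data_bits + parity_bits + stop_bits
    return data_bits / frame_bits

# 8N1: 10 bits on the wire for every 8 data bits, so 80% efficiency.
print(async_efficiency())  # 0.8
```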
SYNCHRONOUS
Synchronous systems negotiate the communication parameters at the data link layer
before communication begins. Basic synchronous systems will synchronize both clocks
before transmission begins, and reset their numeric counters for errors etc. More
advanced systems may negotiate things like error correction and compression.
It is possible to have both sides try to synchronize the connection at the same time.
Usually, there is a process to decide which end should be in control. Both sides can go
through a lengthy negotiation cycle where they exchange communications parameters
and status information. Once a connection is established, the transmitter sends out a
signal, and the receiver sends back data regarding that transmission, and what it received.
This connection negotiation process makes synchronous communication less efficient
on low error-rate lines, but highly efficient in systems where the transmission medium
itself (an electric wire, radio signal or laser beam) is not particularly reliable.
A USB flash drive (pen drive) consists of a NAND-type flash memory data storage device
integrated with a USB (Universal Serial Bus) interface.
A flash drive (pen drive) consists of a small printed circuit board protected inside a
plastic, metal, or rubberized case, robust enough for carrying with no additional
protection—in a pocket or on a key chain, for example.
The USB connector is protected by a removable cap or by retracting into the body of the
drive, although it is not liable to be damaged if exposed.
The development of high-speed serial data interfaces such as USB for the first time made
memory systems with serially accessed storage viable, and the simultaneous development
of small, high-speed, low-power microprocessor systems allowed this to be incorporated
into extremely compact systems.
Serial access also greatly reduced the number of electrical connections required for the
memory chips, which has allowed the successful manufacture of multi-gigabyte
capacities.
Computers access modern flash memory systems (pen drives) very much like hard disk
drives, where the controller system has full control over where information is actually
stored.
The following options are available for the LPT port 1 setting of each virtual machine.
• None
Select this option if you do not want the virtual machine to use the LPT1 port of the physical
computer. This is the default setting.
• LPT1
Select this option to configure the virtual machine to use the specified parallel port of the physical
computer for input and output through the virtual machine.
When you start the virtual machine, it attempts to capture the parallel port of the physical
computer. If the parallel port has already been captured, the virtual machine cannot capture it. If
the virtual machine captures the parallel port, the parallel port is not released to the physical
computer or available to any other virtual machine or to the host operating system until the
virtual machine is shut down.
The RS-232 standard is used by many specialized and custom-built devices. This list
includes some of the more common devices that are connected to the serial port on a PC.
Some of these such as modems and serial mice are falling into disuse while others are
readily available.
Serial ports are very common on certain types of microcontroller such as the Microchip
PIC, where they can be used to communicate with a PC or other serial devices.
The need to provide data transfer between a computer and a remote terminal has led to the development of
serial communication.
Serial data transmission implies data transfer bit by bit on a single (serial) communication line.
In the case of serial transmission, data is sent in serial form, i.e. bit by bit on a single line. Also, the cost of
communication hardware is considerably reduced, since only a single wire or channel is required for serial
bit transmission. Serial data transmission is slow as compared to parallel transmission.
Parallel data transmission, by contrast, is less common but faster than serial transmission. Most data are
organized into 8-bit bytes. In some computers, data are further organized into multiples of bytes called half
words and full words. Accordingly, data is sometimes transferred a byte or word at a time on multiple wires,
with each wire carrying an individual data bit. Transmitting all bits of a given data byte or word at the same
time is known as parallel data transmission.
Parallel transmission is used primarily for transferring data between devices at the same site. For example,
communication between a computer and a printer is most often parallel, so that an entire byte can be
transferred in one operation.
Synchronous Communication
In the synchronous communication scheme, after a fixed number of data bytes a special bit pattern, called
SYNC, is sent by the sending end.
Data transmission takes place without any gap between two adjacent characters; however, data is sent block
by block. A block is a continuous stream of characters or data bit patterns coming at a fixed speed. You will
find a SYNC bit pattern between any two blocks of data, and hence the data transmission is synchronized.
Synchronous communication is generally used when two computers are communicating with each other at
high speed, or when a buffered terminal is communicating with a computer.
The main advantage of synchronous data communication is its high speed. Synchronous communication
requires high-speed peripherals/devices and a good-quality, high-bandwidth communication channel.
The disadvantages include possible inaccuracy: when a receiver goes out of synchronization, it loses track
of where individual characters begin and end. Correction of errors takes additional time.
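A simplified sketch of the block-plus-SYNC framing described above (the SYN byte and block size here are illustrative choices, not a real protocol):

```python
SYNC = b"\x16"  # the ASCII SYN control character, a common sync pattern

def frame_blocks(data, block_size):
    """Split data into fixed-size blocks, preceding each block with SYNC."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    return SYNC + SYNC.join(blocks)

print(frame_blocks(b"HELLOWORLD", 5))  # b'\x16HELLO\x16WORLD'
```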
Besides that, the chip would have to have compatibility for 32 and 64 bit built in. It's not something
inherent in a newer chip. It has to be designed that way.
And of course, Apple's not going to use any IBM chips anymore anyway.
mad jew
Oct 21, 2005, 01:24 AM
Erm, so long as the chip was enabled to work with 32-bit software, it'd be okay. Of course, this is a
pretty far-out hypothetical. It took ages to get to where we are with 64 bit with the transition still
only partially underway. There'd be no real-life benefits of a 128 bit chip at this stage considering
the advantages of 64 bit are still coming to fruition. :)
Chaszmyr
Oct 21, 2005, 01:26 AM
A 128-bit chip would be able to address practically all of the RAM in the known universe, but I don't
see how any of us would benefit from that.
I don't know how the calculations work, but a 32 bit chip can address 4gb of RAM, and the G5 (a 64
bit chip) can address 4tb of RAM, so I am assuming a 128 bit chip may be able to address 4pb of
RAM. (I wouldn't be even a little bit surprised if this is wrong, I'm too lazy to find out for sure, but I
assure you a 128 bit chip can address a huge amount of RAM).
aesth3tic
Oct 21, 2005, 02:51 AM
Ok, what if IBM made a 128-bit processor and Apple used it for their Macs, but it still had 32-bit and
64-bit support? Would this cause any problems or would this be a good thing?
A 1-bit machine can address two bytes: the one called 0 and the one called 1.
A 2-bit machine can address four (2*2) bytes: 00, 01, 10, and 11.
A 3-bit machine can address eight (2*2*2) bytes: 000, 001, 010, 011, 100, 101, 110, 111.
Similarly, a 32-bit machine can address 2^32 = 2*2*2*...*2 (32 of them) bytes. That's 4 GB, about
4 billion.
A 64-bit machine can address 2^64 bytes. That's about 18 exabytes. 18 followed by 18 zeros.
18446744073709551616 to be precise. That's huge.
Now, a 128-bit machine -- of course, you can extend the pattern. Is it going to be twice as much as
a 64-bit machine? How about four times? Eight? A billion times? No, actually, it's 2^64 times more
than 2^64.
340282366920938463463374607431768211456
I'm not even going to try to find SI prefixes for that. It's nuts.
So, no, there is absolutely no point for consumer machines to address more than 64 bits of address
space at this point in time. We're not quite in the territory of "number of atoms in the universe" but
we're definitely blasting off our home planet before we are looking at 128-bit machines making any
sense in your personal computer.
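The doubling pattern described in this thread is just powers of two; the numbers quoted above can be reproduced directly:

```python
def addressable_bytes(address_bits):
    """Each extra address bit doubles the addressable space."""
    return 2 ** address_bits

print(addressable_bytes(32))   # 4294967296 (4 GB)
print(addressable_bytes(64))   # 18446744073709551616 (about 18 exabytes)
print(addressable_bytes(128))  # 340282366920938463463374607431768211456
```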
The key difference between general-computing operating systems and real-time operating
systems is the need for " deterministic " timing behavior in the real-time operating
systems. Formally, "deterministic" timing means that operating system services consume
only known and expected amounts of time. In theory, these service times could be
expressed as mathematical formulas. These formulas must be strictly algebraic and not
include any random timing components. Random elements in service times could cause
random delays in application software and could then make the application randomly
miss real-time deadlines – a scenario clearly unacceptable for a real-time embedded
system. Many non-real-time operating systems also provide similar kernel services.
On the other hand, real-time operating systems often go a step beyond basic determinism.
For most kernel services, these operating systems offer constant load-independent timing:
In other words, the algebraic formula is as simple as: T(message_send) = constant ,
irrespective of the length of the message to be sent, or other factors such as the numbers
of tasks and queues and messages being managed by the RTOS.
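A fixed-capacity ring buffer is one way to get the load-independent T(message_send) = constant behaviour described above; this is an illustrative sketch, not any particular RTOS's API:

```python
class FixedQueue:
    """Ring buffer with O(1), length-independent send/receive, sketching the
    constant-time message-passing service an RTOS aims to provide."""

    def __init__(self, capacity):
        self._slots = [None] * capacity
        self._head = 0
        self._tail = 0
        self._count = 0

    def send(self, message):
        # T(message_send) = constant: no loops, no allocation.
        if self._count == len(self._slots):
            return False  # queue full: fail fast rather than block unboundedly
        self._slots[self._tail] = message
        self._tail = (self._tail + 1) % len(self._slots)
        self._count += 1
        return True

    def receive(self):
        if self._count == 0:
            return None
        message = self._slots[self._head]
        self._head = (self._head + 1) % len(self._slots)
        self._count -= 1
        return message
```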
Many RTOS proponents argue that a real-time operating system must not use virtual
memory concepts, because paging mechanics prevent a deterministic response. While this
is a frequently supported argument, it should be noted that the term "real-time operating
system" and determinism in this context covers a very wide meaning, and vendors of
many different operating systems apply these terms with varied meaning. When selecting
an operating system for a specific task, the real-time attribute alone is an insufficient
criterion, therefore. Deterministic behavior and deterministic latencies have value only if
the response lies within the boundaries of the physics of the process that is to be
controlled. For example, controlling a combustion engine in a racing car has different
real-time requirements to the problem of filling a 1,000,000 litre water tank through a 2"
pipe.
Real-time operating systems are often used in embedded solutions, that is, computing
platforms that are within another device. Examples of embedded systems include
combustion engine controllers, washing machine controllers and many others. Desktop
PCs and other general-purpose computers are not embedded systems. While real-time
operating systems are typically designed for and used with embedded systems, the two
aspects are essentially distinct and have different requirements. A real-time operating
system for an embedded system addresses both sets of requirements.