
ECE 2211

Microprocessor and Interfacing


Number System

Outline

Number System

Number System (digits)

Decimal (base 10) - 8(10)


Numbers- 0,1,2,3,4,5,6,7,8,9

Binary (base 2) - 1100(2) or 1100b


Numbers- 0 and 1

Hexadecimal (base 16) - A8(16) or A8H


Numbers- 0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F

Binary Numbers

A bit is a binary digit that can have the value 0 or 1 (base 2)


Binary numbers are used to store both instructions and data
Addresses, data, and signals are transmitted in binary
The ALU performs calculations in binary
Instructions are converted to MACHINE CODE (binary numbers), e.g. MOV AX, BX

MSB                         LSB
  1111011(2)
= 1×2^6 + 1×2^5 + 1×2^4 + 1×2^3 + 0×2^2 + 1×2^1 + 1×2^0
= 1×64 + 1×32 + 1×16 + 1×8 + 0×4 + 1×2 + 1×1
= 123(10)
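The slides contain no code, but the positional expansion above can be sketched in Python (`binary_to_decimal` is a hypothetical helper name, not from the slides):

```python
# Expand a binary string into its positional sum, mirroring
# 1x2^6 + 1x2^5 + ... from the worked example above.
def binary_to_decimal(bits: str) -> int:
    total = 0
    for i, bit in enumerate(reversed(bits)):  # i is the power of 2
        total += int(bit) * 2 ** i
    return total

print(binary_to_decimal("1111011"))  # 123
```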

Binary Numbers Terms

A byte is defined as 8 bits

A nibble is half a byte (4 bits)


A word is 2 bytes (16 bits)

A doubleword is 2 words

A doubleword is 4 bytes

A quadword is 2 doublewords

A kilobyte is 2^10 bytes, that is 1,024 bytes

A megabyte is 2^20 bytes, that is 1,048,576 bytes

A gigabyte is 2^30 bytes, that is 1,073,741,824 bytes

Unsigned/Signed Numbers

One byte (eight bits) can be used to represent the decimal number range

0 to 255 (unsigned)

-128 to 127 (signed)

Finding a negative binary

Fixed length, 8 bit, 16 bit number


To form a two's complement number that is negative you simply take the
corresponding positive number, invert all the bits, and add 1

Find the 8-bit signed binary representation of -35(D) using the two's-complement method:

35(D) = 0010 0011(B)


invert -> 1101 1100(B); add 1 -> 1101 1101(B)
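As a Python sketch of the invert-and-add-1 rule (`twos_complement_8bit` is a hypothetical helper, not from the slides):

```python
# Two's complement of an 8-bit value: invert all bits, add 1,
# and keep the result in 8 bits (mask with 0xFF).
def twos_complement_8bit(value: int) -> int:
    return ((~value) + 1) & 0xFF

neg35 = twos_complement_8bit(0b00100011)  # 35 -> -35
print(format(neg35, "08b"))               # 11011101, as derived above
```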

Golden Question:
So how can you tell the difference between:

-123(D) = 10000101(B)
and
133(D) = 10000101(B)

You can't, unless you know whether you're using signed or unsigned arithmetic.

Conversion: Hexadecimal to decimal

Base of 16
Numerical symbols: 0-9; A-F for ten to fifteen
Example: 70A(H) or 70A(16)

Convenient representation of long binaries: a series of 0s and 1s is simplified
to hexadecimal

Example of conversion from hexadecimal to decimal:

7B(H) = 7×16^1 + 11×16^0 = 123(10)

Conversion : Decimal to Binary


Example: converting 123(10) into binary using the division method:

123 / 2 = 61  remainder 1   <- Least significant bit (rightmost)
 61 / 2 = 30  remainder 1
 30 / 2 = 15  remainder 0
 15 / 2 =  7  remainder 1
  7 / 2 =  3  remainder 1
  3 / 2 =  1  remainder 1
  1 / 2 =  0  remainder 1   <- Most significant bit (leftmost)

Answer : (123)10 = (1111011)2

Conversion : Decimal to Hexa


Converting 123(10) into hex:

123 / 16 = 7  remainder 11 (or B)
  7 / 16 = 0  remainder 7

Answer : 123(D) = 7B(H)
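The repeated-division method works for any base, so one sketch covers both conversions above (`to_base` is a hypothetical helper name):

```python
# Repeated-division method: divide by the base, collect remainders;
# the first remainder is the least significant digit.
def to_base(n: int, base: int) -> str:
    digits = "0123456789ABCDEF"
    out = ""
    while n > 0:
        n, r = divmod(n, base)
        out = digits[r] + out  # remainders are read bottom-up
    return out or "0"

print(to_base(123, 2))   # 1111011
print(to_base(123, 16))  # 7B
```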

BCH: Binary Coded Hexadecimal


Representation of hexadecimal data in binary code
Each group of four binary bits maps onto a single hex digit. The CPU uses BCH

0111 1011 = 7B

E.g. 1011 1001 0110 1111 1010 = B96FA
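The four-bits-per-hex-digit mapping as a Python one-liner sketch (`bits_to_hex` is a hypothetical helper, not from the slides):

```python
# Map each group of four bits to one hex digit, as in 0111 1011 -> 7B.
def bits_to_hex(groups: str) -> str:
    return "".join(format(int(g, 2), "X") for g in groups.split())

print(bits_to_hex("0111 1011"))                 # 7B
print(bits_to_hex("1011 1001 0110 1111 1010"))  # B96FA
```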

Computer Data Formats

ASCII data represents alphanumeric characters in the memory of computer systems

Unicode http://www.unicode.org

Packed / Unpacked BCD

Byte-size data (unsigned: 0 to 255, 00 to FF) (signed: -128 to 127)

Word Size Data

Double Word Size Data

Representations of real numbers, complex numbers

ASCII Characters

Computers can only understand numbers: 0 and 1

How, then, are characters such as 'A', 'a', or '@' represented?

Binary patterns are assigned to represent letters and characters

ASCII stands for American Standard Code for Information Interchange (1960)

It represents decimal digits (0-9), alphabets (lowercase and uppercase), and symbols

Widely accepted by all manufacturers

7-bit representation; a 0 is prepended as the MSB to give an 8-bit code

ASCII Tables :

A: 41(H)

a: 61(H)

ASCII character lookup table
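Python can confirm the two table entries above directly, since `ord()` returns a character's ASCII code and `chr()` goes the other way:

```python
# ord() gives the ASCII code of a character; chr() reverses it.
print(format(ord("A"), "X"))  # 41, i.e. A = 41(H)
print(format(ord("a"), "X"))  # 61, i.e. a = 61(H)
print(chr(0x40))              # @
```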

BCD
Definition: BCD represents each digit of an unsigned decimal number as its 4-bit
binary equivalent.
Unpacked BCD: In unpacked BCD, the lower 4 bits of the byte hold the BCD
digit and the remaining bits are 0. Example: 0000 1001 and 0000 0101
are unpacked BCD for 9 and 5, respectively. An unpacked BCD digit takes one
byte of memory, or an 8-bit register, to hold it.
Packed BCD: In packed BCD, a single byte holds two BCD digits, one in the
lower 4 bits and one in the upper 4 bits. For example, 0101 1001 is packed BCD
for 59. It takes only 1 byte of memory to store the packed BCD operands. This is
one reason to use packed BCD: it is twice as efficient at storing data.
Why use binary coded decimal? Because people think in decimal.

Ex. 0011 0010 1001 = 329(10)

This is NOT the same as 001100101001(2).
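The unpacked and packed encodings can be sketched in Python (`unpacked_bcd` and `packed_bcd` are hypothetical helper names, not from the slides):

```python
# Unpacked BCD: one decimal digit per byte.
def unpacked_bcd(n: int) -> list:
    return [format(int(d), "08b") for d in str(n)]

# Packed BCD: two decimal digits per byte (high digit in upper 4 bits).
def packed_bcd(n: int) -> str:
    s = str(n)
    if len(s) % 2:  # pad to an even number of digits
        s = "0" + s
    return " ".join(format((int(s[i]) << 4) | int(s[i + 1]), "08b")
                    for i in range(0, len(s), 2))

print(unpacked_bcd(59))  # ['00000101', '00001001']
print(packed_bcd(59))    # 01011001
```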

Exercise :
194(d) -> unpacked and packed BCD? 238(d) -> unpacked and packed BCD?


IEEE standard 754 Floating Point


The most common representation today for real numbers on
computers, including Intel-based PCs, Macintoshes, and most
Unix platforms.
IEEE Single Precision
called "float" in the C language family, and "real" or "real*4" in
Fortran. This occupies 32 bits (4 bytes) and has a significand
precision of 24 bits (about 7 decimal digits).
IEEE Double Precision
called "double" in the C language family, and "double precision" or
"real*8" in Fortran. This occupies 64 bits (8 bytes) and has a
significand precision of 53 bits (about 16 decimal digits).
Floating-point representation gives high precision for very large or very small
numbers

Ex: 5.675 × 10^24 or 8.9769 × 10^-35

IEEE Precision standard - I


Three components in IEEE Floating Point representation:
a) Sign bit

0 denotes a positive number;

1 denotes a negative number

b) Exponent
Represents both positive and negative exponents.
Base 2.
A bias is added to the actual exponent to get the stored exponent.
For IEEE single-precision floats, this bias is 127 (7F).
- A stored value of 200 indicates an exponent of (200 - 127), or 73
For double precision, the exponent field is 11 bits and has a bias of 1023 (3FF).
c) Mantissa
The mantissa, also known as the significand, represents the precision bits of the number

http://steve.hollasch.net/cgindex/coding/ieeefloat.html

The floating-point numbers in (a) single-precision using a bias of 7FH and (b)
double-precision using a bias of 3FFH.

Steps to convert real numbers to IEEE Format


1. Convert into binary format
2. Normalize: represent in scientific format
3. Set bit 31 (63) for sign (1 for negative)
4. Calculate the exponent (add 7F(h) or 3FF(h)) and place it in bits 30-23 (62-52)
5. Place the significand in bits 22-0 (51-0)
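The five steps above can be sketched in Python; `encode_single` is a hypothetical helper that handles normal single-precision values only (no zero/infinity/NaN/subnormals), cross-checked against Python's own `struct` packer:

```python
import math
import struct

# Hand-rolled version of steps 1-5 for normal single-precision values.
def encode_single(x: float) -> int:
    sign = 1 if x < 0 else 0                         # step 3: sign bit
    m, e = math.frexp(abs(x))                        # abs(x) = m * 2^e, m in [0.5, 1)
    mantissa = m * 2                                 # step 2: normalize to 1.xxx
    exponent = (e - 1) + 0x7F                        # step 4: add the 7F bias
    significand = round((mantissa - 1) * (1 << 23))  # step 5: 23 fraction bits
    return (sign << 31) | (exponent << 23) | significand

print(format(encode_single(9.75), "08X"))  # 411C0000
(check,) = struct.unpack(">I", struct.pack(">f", 9.75))
print(format(check, "08X"))                # 411C0000 from Python's packer
```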

Example 1: convert real number to IEEE format


9.75(d): convert into IEEE Single Precision

1. Convert into binary format:
   9.75(d) -> 1001.11(b)
   (fractional part: .75 × 2 = 1.5 -> 1; .5 × 2 = 1.0 -> 1; so .75 -> .11)
2. Normalize: represent in scientific notation format:
   1001.11 -> 1.00111 × 2^3
3. Set bit 31 for sign (1 for negative, 0 for positive):
   bit 31 = 0
4. Calculate the exponent (add 7F(h)) and place it in bits 30-23:
   3 + 7F = 82(h) = 1000 0010
5. Place the significand in bits 22-0:
   00111...

0100 0001 0001 1100 0000 0000 0000 0000 = 411C0000 (h)
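The multiply-by-2 method used for the fractional part can be sketched in Python (`fraction_to_bits` is a hypothetical helper name):

```python
# Multiply the fraction by 2 repeatedly; at each step the integer
# part is the next bit after the binary point.
def fraction_to_bits(frac: float, max_bits: int = 8) -> str:
    bits = ""
    while frac and len(bits) < max_bits:
        frac *= 2
        bits += "1" if frac >= 1 else "0"
        frac -= int(frac)
    return bits

print(fraction_to_bits(0.75))  # 11  (so 9.75 -> 1001.11)
```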

Example 2
Convert -12.62510 in single precision IEEE-754 format
Step #1: Convert to target base: -12.625(10) = -1100.101(2)
Step #2: Normalize: -1100.101(2) = -1.100101 × 2^3
Step #3: Set bit 31 for sign: 1 (negative)
Step #4: Find the exponent: 3 + 127(d) = 130(d) = 82(h) = 1000 0010(b)
Step #5: Put together: 1 1000 0010 1001 0100 0000 0000 0000 000
= 1100 0001 0100 1010 0000 0000 0000 0000 (b) = C14A0000(h)
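A quick cross-check of this result with Python's native IEEE-754 packer:

```python
import struct

# Pack -12.625 as big-endian single precision and read the raw bits.
(bits,) = struct.unpack(">I", struct.pack(">f", -12.625))
print(format(bits, "08X"))  # C14A0000
```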

Exercise 3: Floating Point Conversion


Express 3.14 as a 32-bit floating point number
Solution:
1. 3.14 to binary (approx): 11.001000111101
2. Normalize: 1.1001000111101 × 2^1
3. Find the exponent: 1 + 127 = 128 = 1000 0000(b)
4. Find the sign bit: 0 (positive)
5. Put everything together:
0 10000000 1001000111101 0000000000
0100 0000 0100 1000 1111 0100 0000 0000 = 4048F400(h)
Approximation with single precision: 3.1398926
Conversion applet: http://www.h-schmidt.net/FloatApplet/IEEE754.html

Example 4
0 10000010 11000000000000000000000

Sign bit: 0 = positive
Exponent: 130 - 127 = 3 (after removing the bias)
Significand: 1.11(2) = 1.75(10)

+1.75 × 2^3 = 14.0

Example 5
Given IEEE SP 41C8 0000(16), find the real number
1. 0100 0001 1100 1000 0000 0000 0000 0000(2)
2. Sign bit: 0
   Exponent: 1000 0011(2) = 83(16) = 131(d)
   Decoded exponent: 131 - 127 = 4(d)
3. Significand: 100 1000 0000 0000 0000 0000(2)
   Decoded significand: 1 + 0.5 + 0.0625 = 1.5625(d)
4. 1.5625 × 2^4 = 25(d)
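The decoding direction can also be checked with `struct` (`decode_single` is a hypothetical helper name):

```python
import struct

# Decode a single-precision hex bit pattern back to its real value,
# reversing the steps of Example 5.
def decode_single(hex_pattern: str) -> float:
    (value,) = struct.unpack(">f", bytes.fromhex(hex_pattern))
    return value

print(decode_single("41C80000"))  # 25.0
```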

Exercise
The following is an IEEE SP number; find the real number:
11000011100101100000000000000000

Little Endian and Big Endian

We have seen how numbers are represented in a microprocessor. How are they stored?
ENDIAN : ordering of individually addressable sub-components within the representation of a
larger data item as stored in external memory
"Little Endian"
low-order byte of the number is stored in memory at the lowest address, and the high-order byte at the highest address. (The little end comes first.)
For example, a 4 byte Long Int : Byte3 Byte2 Byte1 Byte0
will be arranged in memory as follows
Base Address+0 Byte0
Base Address+1 Byte1
Base Address+2 Byte2
Base Address+3 Byte3

"Big Endian"
means that the high-order byte of the number is stored in memory at the lowest address,
and the low-order byte at the highest address. (The big end comes first.)

Examples of big endian : Sun machines, Adobe Photoshop, Motorola
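The two byte orders can be demonstrated with Python's `struct` module, which lets you pick the byte order with the `<` and `>` prefixes:

```python
import struct

# The same 4-byte integer stored little-endian vs big-endian.
value = 0x0A0B0C0D
print(struct.pack("<I", value).hex())  # 0d0c0b0a  (little end first)
print(struct.pack(">I", value).hex())  # 0a0b0c0d  (big end first)
```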

Example 6