
ASCII and Unicode + Binary addition

ASCII - American Standard Code for Information Interchange

ASCII is a 7-bit system used to code the character set of a computer. With 7 bits there are 128 possible codes in total (0 to 127).

ASCII code was very useful for transmitting textual messages, but it fails to deal with other characters we need, such as mathematical symbols and non-English letters. ASCII was extended to 8 bits when 8-bit computers became common, so extended ASCII has an extra 128 characters.

Characters are arranged in value order, and within the alphabet the codes follow alphabetical order, so for example Z has a higher value than A.
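The ordering of character codes can be checked directly in Python, since `ord()` returns a character's code value (a quick sketch, not part of the original notes):

```python
# ASCII codes are arranged in value order: digits come before
# uppercase letters, which come before lowercase letters.
for ch in ["0", "A", "J", "R", "Z", "a", "z"]:
    print(ch, ord(ch))

# Later letters in the alphabet have higher codes:
print(ord("Z") > ord("A"))  # prints True
```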

Character set: the set of characters available to the computer



Unicode is an up-to-32-bit system to code the character set of a computer. Before Unicode, many character coding systems were in use, like ASCII, and they could conflict with each other. Unicode's purpose is to provide a single character set for all computers, like a single shared language. With up to 32 bits, Unicode provides over 4 billion possible character values, and is therefore much better than the 7-bit system of ASCII, as all the characters needed can be created with Unicode.
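Python strings use Unicode, so `ord()` also works on characters far beyond ASCII's 128 codes (an illustrative sketch, not part of the original notes; the example characters are arbitrary):

```python
# Every character, ASCII or not, maps to a single Unicode code point.
# "A" fits in 7-bit ASCII; the others need the larger Unicode range.
for ch in ["A", "é", "Ω", "中", "😀"]:
    print(ch, hex(ord(ch)))
```

Note that "A" keeps its ASCII value (65, or 0x41) in Unicode, because Unicode was designed so that the first 128 code points match ASCII exactly.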

1. How many bits are in ASCII?
2. What was ASCII useful for?
3. Why is ASCII insufficient?
4. What is meant by character set?
5. Which character has a higher value, J or R?
6. What is the purpose of Unicode?
7. What is the difference between Unicode and ASCII?
8. Why is Unicode used?
9. Why is Unicode better than ASCII?
10. What is Unicode?

1. 7 bits
2. Transmitting textual messages
3. It does not have enough code possibilities for all characters needed
4. The characters available to the computer
5. R
6. To provide a single character coding set for all characters
7. Unicode is an up-to-32-bit system; ASCII is a 7-bit system
8. It is used because it has sufficient possibilities to code all needed characters
9. Because it codes for more characters
10. Unicode is an up-to-32-bit character coding system

Overflow error: when the number becomes too big to fit into the number of bits allocated
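An overflow error can be simulated in Python by limiting a result to a fixed number of bits (a hypothetical helper for illustration, not part of the notes):

```python
BITS = 4  # pretend the computer only allocates 4 bits per number

def add_4bit(a, b):
    """Add two numbers, keeping only 4 bits and flagging overflow."""
    total = a + b
    overflow = total >= 2 ** BITS    # result needs a 5th bit -> overflow
    return total % (2 ** BITS), overflow

# 1011 + 1101 is 11 + 13 = 24, which does not fit in 4 bits:
print(add_4bit(0b1011, 0b1101))  # prints (8, True): only 1000 is kept
```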

Binary addition

Binary addition is the addition of two numbers in binary form, and it is similar to normal addition. We add the bits column by column from the right, writing down a result bit and carrying a 1 or 0 to the next bit:

If the sum of the bits is 0, we write down 0 and carry nothing.
If the sum of the bits is 1, we write down 1 and carry nothing.
If the sum of the bits is 2, we write down 0 and carry a 1.
If the sum of the bits is 3, we write down 1 and carry a 1.

For example:

  1 1 1 1        (carries)
      1 0 1 1
  +   1 1 0 1
  -----------
  1 1 0 0 0
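The carry rules above can be sketched as a small Python function (an illustrative helper, not part of the notes):

```python
def add_binary(a: str, b: str) -> str:
    """Add two binary strings column by column, right to left."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)  # pad to equal length
    result, carry = [], 0
    for bit_a, bit_b in zip(reversed(a), reversed(b)):
        column = int(bit_a) + int(bit_b) + carry
        result.append(str(column % 2))  # write down 0 or 1
        carry = column // 2             # carry a 1 when the sum is 2 or 3
    if carry:
        result.append("1")              # final carry becomes an extra bit
    return "".join(reversed(result))

print(add_binary("1011", "1101"))  # prints 11000
print(add_binary("101", "1001"))   # prints 1110
```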

What is 1011 + 1101 in binary? What is an overflow error? What is 101 + 1001 in binary?

11000 When the number becomes too big for the allocated bits 1110