
Table of Contents

1. Introduction
   1.1 Error correction codes
   1.2 Error correction
2. Introduction to Verilog® HDL
   2.1 What is HDL?
   2.2 Verilog Overview
   2.3 Design Styles
   2.4 Abstraction Levels of Verilog
3. Design of Hamming code
   3.1 Origin of Hamming Code
   3.2 Basic Theory
   3.3 Designing (n, k, t) Hamming code
   3.4 Methodology of operation of a simple (7, 4, 1) Hamming code
   3.5 Design of the Hamming code Encoder and Decoder
   3.6 Pin Descriptions
4. Implementation
   4.1 The (11, 7, 1) Hamming code
   4.2 Simulation Results
   4.3 Synthesis report
5. Advantages and applications
   5.1 Advantages of the Hamming code
   5.2 Applications
6. Conclusion and future work
   6.1 Conclusion
   6.2 Future work
7. Source code
   7.1 Hamming_encoder.v
   7.2 Hamming_decoder.v
References

DSCE, Bangalore 2
Hamming Code

Introduction
1.1 Error correction codes
In computer science and information theory, the issue of error correction
and detection has great practical importance. Error correction codes (ECCs)
permit detection and correction of errors that result from noise or other
impairments during transmission from the transmitter to the receiver. Given
some data, ECC methods enable you to check whether it has been corrupted,
which can make the difference between a functional and a nonfunctional system.

Error correction schemes permit errors to be located as well as corrected.
Error correction and detection schemes are used to implement reliable data
transfer over noisy transmission links, in data storage media (including
dynamic RAM and compact discs), and in other applications where the integrity
of data is important. Error correction avoids retransmission of the data,
which can degrade system performance.

RAM Devices
RAM devices do not, as such, support error-control codes. There are no
mandatory requirements for ECC support on RAM/DRAM devices. Memory
suppliers are generally not in favor of implementing a complex logic function like
ECC on a RAM die: it is costly, inefficient, and leads to an expensive memory
subsystem. Where enhanced reliability is a requirement, the standard technique
is to use a wider interface. In the context of SDRAMs, DIMMs come in two widths:
64 and 72 bits. The 72-bit DIMMs are targeted for use with ECCs because of the
extra 8 bits. The extra 8 bits are merely extra data bits; in reality you can use
any of the bits. An extra 8 bits of parity on 64 bits of data allows you to employ a
two-bit error-detecting, single-bit-correcting Hamming code.

Hamming code
Hamming code is an error-correction code that can be used to detect
single and double-bit errors and to correct single-bit errors that can occur when
binary data is transmitted from one device to another.
Hamming codes provide forward error correction (FEC) using a "block parity"
mechanism that can be inexpensively implemented. In general, their use allows the
correction of single-bit errors and detection of two-bit errors per unit of data,
called a code word.

The fundamental principle embraced by Hamming codes is parity.


Hamming codes, as mentioned before, are capable of correcting one error or
detecting two errors, but not of doing both simultaneously. You may
choose to use Hamming codes as an error detection mechanism to catch both
single and double-bit errors, or to correct single-bit errors. This is accomplished
by using more than one parity bit, each computed on a different combination of
bits in the data.

This report presents the design and development of an (11, 7, 1) Hamming code
using the Verilog hardware description language (HDL). Here, '11' is the total
number of bits in a transmittable unit comprising data bits and redundancy bits,
'7' is the number of data bits, and '1' denotes the maximum number of error bits
correctable in the transmittable unit. This code fits well into small
field-programmable gate arrays (FPGAs), complex programmable logic devices
(CPLDs) and application-specific integrated circuits (ASICs), and is ideally
suited to communication applications that need error control.

1.2 Error correction
Use of simple parity allows detection of single-bit errors in a received
message. Correction of these errors requires more information, since the position
of the corrupted bit must be identified if it is to be corrected. (If a corrupted bit
can be detected, it can be corrected by simply complementing its value.)
Correction is not possible with one parity bit, since any bit error in any position
produces exactly the same information: that an error occurred. If more bits are
included in a message, and if those bits can be arranged such that different
corrupted bits produce different error results, then corrupted bits can be identified.
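The limitation of a single parity bit can be seen in a short sketch (illustrative Python; the report's implementation language is Verilog, so this only demonstrates the idea): flipping any bit produces the same check result, so the error is detected but cannot be located.

```python
def parity(bits):
    """Even-parity check: returns 0 if the count of 1s is even."""
    return sum(bits) % 2

# A 7-bit message plus one even-parity bit
msg = [1, 0, 0, 0, 0, 0, 1]
word = msg + [parity(msg)]          # append parity so the total count of 1s is even
assert parity(word) == 0            # no error: the check passes

# Flip each bit in turn: every single-bit error is detected,
# but the check result is identical regardless of position,
# so the corrupted bit cannot be identified.
results = []
for i in range(len(word)):
    corrupted = word[:]
    corrupted[i] ^= 1
    results.append(parity(corrupted))
print(results)   # all 1s: detection works, localization does not
```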

Forward error correction (FEC)

Digital communication systems, particularly those used in military applications,
need to perform accurately and reliably even in the presence of noise and
interference. Among the many possible ways to achieve this goal, forward error
correction coding is the most effective and economical. Forward error correction
coding (also called 'channel coding') is a type of digital signal processing that
improves the reliability of data by introducing a known structure into the data
sequence prior to transmission.
This structure enables the receiving system to detect, and possibly correct,
errors caused by corruption in the channel and the receiver. As the name
implies, this coding technique enables the decoder to correct errors without
requesting retransmission of the original information. Hamming code is a typical
example of forward error correction.
In a communication system that employs forward error-correction coding,
the digital information source sends a data sequence to an encoder. The encoder
inserts redundant (or parity) bits, thereby outputting a longer sequence of code
bits called a 'code word.' These code words can then be transmitted to a
receiver, which uses a suitable decoder to extract the original data sequence.


Introduction to Verilog® HDL


2.1 What is HDL?
A typical Hardware Description Language (HDL) supports a mixed-level
description in which gate and netlist constructs are used together with functional
descriptions. This mixed-level capability enables you to describe system
architectures at a high level of abstraction, then incrementally refine the design
into a detailed gate-level implementation.

HDL descriptions offer the following advantages:

• We can verify design functionality early in the design process. A design written
as an HDL description can be simulated immediately. Design simulation at this
high level, before gate-level implementation, allows you to evaluate architectural
and design decisions.

• An HDL description is more easily read and understood than a netlist or
schematic description. HDL descriptions provide technology-independent
documentation of a design and its functionality. Because the initial HDL design
description is technology-independent, you can use it again to generate the
design in a different technology, without having to translate it from the original
technology.

• Large designs are easier to handle with HDL tools than schematic tools.

2.2 Verilog Overview

Introduction
Verilog is a hardware description language (HDL). A hardware description
language is a language used to describe a digital system, for example, a
microprocessor, a memory, or a simple flip-flop. This just means that, by using
an HDL, one can describe any digital hardware at any level.

Figure 2.1 D flip-flop and its Verilog code

One can describe a simple flip-flop, like the one in the figure above, just as one
can describe a complicated design having a million gates. Verilog is one of the
HDL languages available in the industry for designing hardware. Verilog
allows us to describe a digital design at the behavioral level, register transfer
level (RTL), gate level and switch level. Verilog allows hardware designers to
express their designs with behavioral constructs, deferring the details of
implementation to a later stage of the design.
Verilog provides both behavioral and structural language constructs. These
constructs allow expressing design objects at high and low levels of abstraction.
Designing hardware with a language such as Verilog allows using software
concepts such as parallel processing and object-oriented programming. Verilog
has a syntax similar to C and Pascal.

2.3 Design Styles
Verilog, like any other hardware description language, permits designers to
create a design using either a bottom-up or a top-down methodology.

Bottom-Up Design
The traditional method of electronic design is bottom-up. Each design is
performed at the gate level using standard gates. With the increasing complexity
of new designs this approach is nearly impossible to maintain. New systems
consist of ASICs or microprocessors with a complexity of thousands of transistors.
These traditional bottom-up designs have to give way to new structural,
hierarchical design methods. Without these new design practices it would be
impossible to handle the new complexity.


Top-Down Design

The desired design style of all designers is the top-down design. A real
top-down design allows early testing, easy change of different technologies and
a structured system design, and offers many other advantages. But it is very
difficult to follow a pure top-down design. Because of this, most designs are a mix
of both methods, implementing some key elements of both design styles.
Complex circuits are commonly designed using the top-down
methodology. Various specification levels are required at each stage of the
design process.

2.4 Abstraction Levels of Verilog

Verilog supports design at many different levels of abstraction. Three of them
are very important:
• Behavioral level
• Register-Transfer Level
• Gate Level

Behavioral level
This level describes a system by concurrent algorithms (behavioral). Each
algorithm is itself sequential, meaning that it consists of a set of instructions that
are executed one after the other. Functions, tasks and always blocks are the
main elements. There is no regard for the structural realization of the design.

Register-Transfer Level
Designs using the Register-Transfer Level specify the characteristics of a
circuit in terms of operations and the transfer of data between registers. An
explicit clock is used. RTL design contains exact timing: operations are
scheduled to occur at certain times. The modern definition of RTL code is "any
code that is synthesizable is called RTL code".

Gate Level

Within the logic level the characteristics of a system are described by
logical links and their timing properties. All signals are discrete and can
only take definite logical values ('0', '1', 'X', 'Z'). The usable operations are
predefined logic primitives (AND, OR, NOT, etc. gates). Using gate-level modeling
might not be a good idea for any level of logic design. Gate-level code is
generated by tools such as synthesis tools, and this netlist is used for gate-level
simulation and for the backend.


Design of Hamming code


3.1 Origin of Hamming Code
In the late 1940s Claude Shannon was developing information theory and
coding as a mathematical model for communication. At the same time, Richard
Hamming, a colleague of Shannon's at Bell Laboratories, found a need for error
correction in his work on computers. Parity checking was already being used to
detect errors in the calculations of the relay-based computers of the day, and
Hamming realized that a more sophisticated pattern of parity checking allowed
the correction of single errors along with the detection of double errors. The
codes that Hamming devised, the single-error-correcting binary Hamming codes
and their single-error-correcting, double-error-detecting extended versions,
marked the beginning of coding theory. These codes remain important to this
day, for theoretical and practical reasons as well as historical ones.

3.2 Basic Theory
Hamming’s development [Ham] is a very direct construction of a code that
permits correcting single-bit errors. He assumes that the data to be transmitted
consists of a certain number of information bits u, and he adds to these a
number of check bits p such that if a block is received that has at most one bit in
error, then p identifies the bit that is in error (which may be one of the check
bits).

Specifically, in Hamming's code p is interpreted as an integer which is 0 if
no error occurred, and otherwise is the 1-origined index of the bit that is in error.
Let k be the number of information bits, and m the number of check bits used.
Because the m check bits must check themselves as well as the information bits,
the value of p, interpreted as an integer, must range from 0 to m + k, which is
m + k + 1 distinct values.

Because m bits can distinguish 2^m cases, we must have

2^m ≥ m + k + 1.

Where
k = Number of “information” or “message” bits.
m = Number of parity-check bits (“check bits,” for short).
n = Code length, n = m + k.
u = Information bit vector, u0, u1, … uk–1.
p = Parity check bit vector, p0, p1, …, pm–1.
s = Syndrome vector, s0, s1, …, sm–1.

This is known as the Hamming rule. It applies to any single-error-correcting
(SEC) binary FEC block code in which all of the transmitted bits must be
checked. The check bits will be interspersed among the information bits in a
manner described below.


Because p indexes the bit (if any) that is in error, the least significant bit of
p must be 1 if the erroneous bit is in an odd position, and 0 if it is in an even
position or if there is no error. A simple way to achieve this is to let the least
significant bit of p, p0, be an even parity check on the odd positions of the block,
and to put p0 in an odd position. The receiver then checks the parity of the odd
positions (including that of p0). If the result is 1, an error has occurred in an odd
position, and if the result is 0, either no error occurred or an error occurred in an
even position. This satisfies the condition that p should be the index of the
erroneous bit, or be 0 if no error occurred.

Similarly, let the next from least significant bit of p, p1, be an even parity
check of positions 2, 3, 6, 7, 10, 11, … (in binary, 10, 11, 110, 111, 1010, 1011,
…), and put p1 in one of these positions. Those positions have a 1 in their second
from least significant binary position number. The receiver checks the parity of
these positions (including the position of p1). If the result is 1, an error occurred
in one of those positions, and if the result is 0, either no error occurred or an
error occurred in some other position. Continuing, the third from least significant
check bit, p2, is made an even parity check on those positions that have a 1 in
their third from least significant position number, namely positions 4, 5, 6, 7, 12,
13, 14, 15, 20, …, and p2 is put in one of those positions.

Putting the check bits in power-of-two positions (1, 2, 4, 8, …) has the
advantage that they are independent. That is, the sender can compute p0
independently of p1, p2, … and, more generally, it can compute each check bit
independently of the others.
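The position-number rule described above can be sketched in a few lines (illustrative Python; the report's real implementation is in Verilog): check bit p_j covers exactly those 1-origined positions whose binary position number has a 1 in bit j.

```python
def covered_positions(j, n):
    """Positions (1-origined) checked by parity bit p_j in an n-bit block:
    those whose position number has a 1 in binary bit j."""
    return [pos for pos in range(1, n + 1) if (pos >> j) & 1]

# For an 11-bit block, reproduce the coverage sets from the text:
print(covered_positions(0, 11))  # p0: odd positions 1, 3, 5, 7, 9, 11
print(covered_positions(1, 11))  # p1: positions 2, 3, 6, 7, 10, 11
print(covered_positions(2, 11))  # p2: positions 4, 5, 6, 7 (12..15 lie outside the block)
```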

3.3 Designing (n, k, t) Hamming code

The (n, k, t) code refers to an n-bit code word having k data bits (where
n > k) and m (= n – k) error-control bits, called 'redundant' or 'redundancy' bits,
with the code having the capability of correcting t corrupted bits.

If the total number of bits in a transmittable unit (i.e., code word) is ‘n’
(=k+m), ‘m’ must be able to indicate at least ‘n+1’ (=k+m+1) different states.

Of these, one state means no error, and ‘n’ states indicate the location of
an error in each of the ‘n’ positions.

So ‘n+1’ states must be discoverable by ‘m’ bits; and ‘m’ bits can indicate
2m different states. Therefore, 2m must be equal to or greater than ‘n+1’.

This is the Hamming rule itself, which says:

2^m ≥ m + k + 1

The value of m can be determined by substituting the value of k (the original
length of the data to be transmitted).
For example, if the value of k is 7, the smallest m value that can satisfy this
constraint is 4, since:

2^4 = 16 ≥ 7 + 4 + 1 = 12
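The Hamming rule can be checked mechanically (a small illustrative Python sketch, not part of the report's Verilog design):

```python
def min_check_bits(k):
    """Smallest m satisfying the Hamming rule 2**m >= m + k + 1."""
    m = 0
    while 2 ** m < m + k + 1:
        m += 1
    return m

print(min_check_bits(4))   # (7, 4) code: 3 check bits
print(min_check_bits(7))   # (11, 7) code: 4 check bits, as in this report
```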

Parity Bits
A parity bit is an extra bit included to make the total number of 1's in the
resulting code word either even or odd.

Data (7 bits)   With even parity (8 bits)   With odd parity (8 bits)
1000001         01000001                    11000001
1010100         11010100                    01010100

Fig 3.1 Parity bit example

For a 7-bit number there are 7 possible one-bit errors, and a further state is
needed to represent the case in which no detectable error has occurred.

The number of parity bits, m, needed to detect and correct a single-bit error
in a data string of length n is given by the following equation:

m = ⌈log2(n)⌉ + 1

The ECC block uses the Hamming code with an additional parity bit, which
can detect single and double-bit errors and correct single-bit errors. The extra
parity bit applies to all bits after the Hamming code check bits have been added,
and represents the parity of the code word: if one error occurs the parity changes,
and if two errors occur the parity stays the same. In general, the number of parity
bits, m, needed to detect a double-bit error, or to detect and correct a single-bit
error, in a data string of length n is given by the following equation:

m = ⌈log2(n)⌉ + 2


3.4 Methodology of operation of a simple (7, 4, 1) Hamming code

The goal of Hamming codes is to create a set of parity bits that overlap
such that a single-bit error (a bit logically flipped in value) in a data bit or a
parity bit can be detected and corrected. While multiple overlaps can be created,
the general method is presented below.

The following table describes which transmitted bits each parity bit covers in
the encoded word:

Bit#          1   2   3   4   5   6   7
Transmitted   p1  p2  d1  p3  d2  d3  d4
p1 covers     Yes No  Yes No  Yes No  Yes
p2 covers     No  Yes Yes No  No  Yes Yes
p3 covers     No  No  No  Yes Yes Yes Yes

For example, p2 covers bits 2, 3, 6 and 7. Reading a column shows which parity
bits cover a given transmitted bit; for example, d1 is covered by p1 and p2 but
not p3. This table bears a striking resemblance to the parity-check matrix (H)
in the next section.


        d1   d2   d3   d4
p1      Yes  Yes  No   Yes
p2      Yes  No   Yes  Yes
p3      No   Yes  Yes  Yes

Furthermore, with the parity columns removed from the first table above, its
resemblance to rows 1, 2 and 4 of the code generator matrix (G) below is also
evident.
So, by picking the parity bit coverage correctly, all errors with a Hamming
distance of 1 can be detected and corrected, which is the point of using a
Hamming code.
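The coverage table can be turned directly into an encoder. The following is an illustrative Python sketch (the report's real encoder is the Verilog module listed in chapter 7):

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into the 7-bit codeword
    p1 p2 d1 p3 d2 d3 d4, using the coverage table above (even parity)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # p1 covers d1, d2, d4
    p2 = d1 ^ d3 ^ d4          # p2 covers d1, d3, d4
    p3 = d2 ^ d3 ^ d4          # p3 covers d2, d3, d4
    return [p1, p2, d1, p3, d2, d3, d4]

print(hamming74_encode([1, 0, 1, 1]))   # -> [0, 1, 1, 0, 0, 1, 1]
```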

Hamming matrices
Hamming codes can be computed in linear algebra terms through
matrices because Hamming codes are linear codes. For the purposes of
Hamming codes, two Hamming matrices can be defined: the code generator
matrix G and the parity-check matrix H. Read off the coverage tables above,
they are:

        | 1 1 0 1 |
        | 1 0 1 1 |
        | 1 0 0 0 |
    G = | 0 1 1 1 |
        | 0 1 0 0 |
        | 0 0 1 0 |
        | 0 0 0 1 |

and

        | 1 0 1 0 1 0 1 |
    H = | 0 1 1 0 0 1 1 |
        | 0 0 0 1 1 1 1 |

As mentioned above, rows 1, 2 and 4 of G should look familiar, as they map
the data bits to their parity bits:
• p1 covers d1, d2, d4
• p2 covers d1, d3, d4
• p3 covers d2, d3, d4
The remaining rows (3, 5, 6, 7) map the data bits to their positions in encoded
form, and each contains only a single 1, so it is an identical copy. In fact, these
four rows are linearly independent and form the identity matrix (by design, not
coincidence).
Also as mentioned above, the three rows of H should be familiar. These rows
are used to compute the syndrome vector at the receiving end: if the
syndrome vector is the null vector (all zeros) then the received word is error-free;
if non-zero then its value indicates which bit has been flipped.

The 4 data bits, assembled as a vector p, are pre-multiplied by G (that is, Gp
is computed) and taken modulo 2 to yield the encoded value x that is transmitted.
The original 4 data bits are converted to 7 bits (hence the name "Hamming(7,4)")
with 3 parity bits added to ensure even parity using the above data bit coverages.
The first table above shows the mapping between each data and parity bit and its
final bit position (1 through 7), but this can also be presented in a Venn diagram.
For the remainder of this section, the following 4 bits (shown as a column
vector) will be used as a running example:

    p = (1, 0, 1, 1)ᵀ
Channel coding
Suppose we want to transmit this data over a noisy communication
channel; specifically, a binary symmetric channel, meaning that error corruption
does not favor either zero or one (it is symmetric in causing errors). Furthermore,
all source vectors are assumed to be equiprobable. We take the product of G
and p, with entries modulo 2, to determine the transmitted codeword x:

    x = Gp mod 2 = (0, 1, 1, 0, 0, 1, 1)ᵀ

This means that 0110011 would be transmitted instead of transmitting 1011.
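The multiplication can be reproduced in a few lines (illustrative Python, matching the matrices above; not the report's Verilog):

```python
# Code generator matrix G for Hamming(7,4), rows = codeword bits 1..7
G = [
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [1, 0, 0, 0],
    [0, 1, 1, 1],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
]

def encode(p):
    """x = G p, entries taken modulo 2."""
    return [sum(g * b for g, b in zip(row, p)) % 2 for row in G]

x = encode([1, 0, 1, 1])
print(x)   # -> [0, 1, 1, 0, 0, 1, 1], i.e. the codeword 0110011
```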

Parity check

If no error occurs during transmission, then the received codeword r is identical
to the transmitted codeword x:

    r = x

The receiver multiplies H and r to obtain the syndrome vector z, which indicates
whether an error has occurred and, if so, for which codeword bit. Performing this
multiplication (again, entries modulo 2):

    z = Hr mod 2 = (0, 0, 0)ᵀ

Since the syndrome z is the null vector, the receiver can conclude that no error
has occurred. This conclusion is based on the observation that when the data
vector is multiplied by G, a change of basis occurs into a vector subspace that is
the kernel of H. As long as nothing happens during transmission, r will remain in
the kernel of H and the multiplication will yield the null vector.

Error correction
Otherwise, suppose a single-bit error has occurred. Mathematically, we can write

    r = x + e_i  (mod 2)

where e_i is the ith unit vector, that is, a zero vector with a 1 in the ith position,
counting from 1.

Thus the above expression signifies a single-bit error in the ith place. Now, if we
multiply this vector by H:

    Hr = H(x + e_i) = Hx + He_i

Since x is the transmitted data, it is without error, and as a result the product of
H and x is zero. Thus

    Hr = He_i

Now, the product of H with the ith standard basis vector picks out the ith
column of H, so we know the error occurred in the place where this column of H
occurs. For example, suppose we have introduced a bit error on bit #5:

    r = x + e_5 = (0, 1, 1, 0, 1, 1, 1)ᵀ

Now,

    z = Hr = (1, 0, 1)ᵀ

which corresponds to the fifth column of H. Furthermore, because the
construction of the algorithm was intentional, the syndrome 101 read as a binary
number is 5, which also indicates that the fifth bit was corrupted. Thus, an error
has been detected in bit 5, and can be corrected (simply flip or negate its value).

Flipping bit 5 of r gives back (0, 1, 1, 0, 0, 1, 1)ᵀ; this corrected received value
indeed matches the transmitted value x from above.
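The whole detect-and-correct step can be sketched as follows (illustrative Python, using the H matrix given above; the report's real corrector is in Verilog):

```python
# Parity-check matrix H for Hamming(7,4)
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def correct(r):
    """Compute the syndrome z = Hr mod 2; if non-zero, read it as a
    binary position number and flip that bit (1-origined)."""
    z = [sum(h * b for h, b in zip(row, r)) % 2 for row in H]
    pos = z[0] * 1 + z[1] * 2 + z[2] * 4     # syndrome as an integer
    if pos:                                   # 0 means no error detected
        r = r[:]
        r[pos - 1] ^= 1
    return r, pos

x = [0, 1, 1, 0, 0, 1, 1]        # transmitted codeword from the running example
r = x[:]
r[4] ^= 1                         # corrupt bit #5 (index 4)
corrected, pos = correct(r)
print(pos, corrected)             # -> 5 [0, 1, 1, 0, 0, 1, 1]
```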

Decoding
Once the received vector has been determined to be error-free, or corrected if an
error occurred (assuming only zero or one bit errors are possible), the received
data need to be decoded back into the original 4 bits.
First, define a matrix R that selects the data positions (3, 5, 6, 7) of the codeword:

        | 0 0 1 0 0 0 0 |
    R = | 0 0 0 0 1 0 0 |
        | 0 0 0 0 0 1 0 |
        | 0 0 0 0 0 0 1 |

Then the received value p_r is

    p_r = Rr

and, using the running example from above,

    p_r = (1, 0, 1, 1)ᵀ

which is the same as the transmitted 4-bit data.
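As a sketch (illustrative Python), multiplying by R amounts to selecting the data positions:

```python
# Data bits of a Hamming(7,4) codeword sit in positions 3, 5, 6, 7 (1-origined);
# multiplying by R simply selects those positions.
DATA_POSITIONS = [3, 5, 6, 7]

def decode(r):
    """Extract the 4 data bits from a (corrected) 7-bit codeword."""
    return [r[pos - 1] for pos in DATA_POSITIONS]

print(decode([0, 1, 1, 0, 0, 1, 1]))   # -> [1, 0, 1, 1]
```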

3.5 Design of the Hamming code Encoder and Decoder

Before we design a Hamming code generator, let us summarize the
procedure for designing any Hamming code:

Encoding is performed by multiplying the original message vector by the
generator matrix G; decoding is performed by multiplying the codeword vector by
the parity-check matrix H. All additions are performed modulo 2. In hardware, this
process equates to XORing a particular set of data elements and is
computationally inexpensive. If an error occurs and one of the parity or data bits
changes during transmission, the ECC decoder-corrector identifies the affected
bit by recalculating the parity bits and XORing them with the transmitted parity
bits (computing the syndrome). The decoder-corrector can then correct a
single-bit error.
ECC detects errors through the process of data encoding and decoding.

For example, when ECC is applied in a transmission application, data read
from the source are encoded before being sent to the receiver. The output (code
word) from the encoder consists of the raw data appended with a number of
parity bits. The exact number of parity bits appended depends on the number of
bits in the input data. The generated code word is then transmitted to the
destination.

The receiver receives the code word and decodes it. Information obtained
by the decoder determines whether or not an error is detected. The decoder
detects single-bit and double-bit errors, but can fix only single-bit errors in the
corrupted data. This kind of ECC is called Single Error Correction Double Error
Detection (SECDED).
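The SECDED decision can be sketched as follows (illustrative Python with hypothetical helper names, not the report's Verilog module): the Hamming syndrome locates a bit, while an overall parity bit distinguishes single from double errors.

```python
def secded_status(syndrome_nonzero, overall_parity_fails):
    """Classify a received word under SECDED rules:
    - overall parity fails            -> odd number of errors: assume 1, correctable
    - parity OK and syndrome zero     -> no error
    - parity OK but syndrome non-zero -> two errors: detected, not correctable"""
    if overall_parity_fails:
        return "single error (correctable)"
    if syndrome_nonzero:
        return "double error (detected only)"
    return "no error"

print(secded_status(True, True))     # single-bit error
print(secded_status(True, False))    # double-bit error
print(secded_status(False, False))   # clean word
```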

The general block diagram of a Hamming code encoder and decoder is
shown below:

Fig 3.2 Hamming code system


This Hamming code design provides two modules, HAMMING_ENCODER
and HAMMING_DECODER, to implement the ECC functionality.

The data input to the HAMMING_ENCODER module is encoded to
generate a code word that is a combination of the data input and the generated
parity bits. The generated code word is transmitted to the HAMMING_DECODER
module for decoding just before reaching its destination block.
The HAMMING_DECODER module generates a syndrome vector to
determine whether there is any error in the received code word. It fixes the data
if and only if the single-bit error is in the data bits.

3.5.1 General Description of the HAMMING_ENCODER Module

The HAMMING_ENCODER Module takes in and encodes the data using the
Hamming Coding scheme. The Hamming Coding scheme derives the parity bits
and appends them to the original data to produce the output code word. The
number of parity bits appended depends on the width of the data.

The parity bit derivation uses even-parity checking. An additional bit
(shown in Table 1–1 as +1) is appended to the parity bits as the MSB of the code
word. This ensures that the code word has an even number of 1's.

For example, if the data width is 4 bits, 4 parity bits are appended to the
data to form a code word with a total of 8 bits. If the 7 bits from the LSB of the
8-bit code word have an odd number of 1's, the 8th bit (MSB) of the code word
is 1, making the total number of 1's in the code word even.
The figure below shows a block diagram of the HAMMING_ENCODER module,
with input inp(6:0), input reset, and output outp(10:0).

Fig 3.3 Block diagram of the Hamming encoder module


3.5.2 General Description of the HAMMING_DECODER Module
The HAMMING_DECODER Module decodes the input data (code word) by
extracting the parity bits and data bits from the code word. The parity bits and
data bits are recalculated based on the Hamming coding scheme to generate a
syndrome code. The generated syndrome code provides the status of the
received data. The ECC detects single-bit and double-bit errors, but only
single-bit errors are corrected.

The figure below shows a block diagram of the HAMMING_DECODER module,
with input inp(10:0), input reset, and output outp(6:0).

Fig 3.4 Block diagram of the Hamming decoder module

The incoming 7-bit data along with the 4-bit parity are XOR'd together to
generate the 4-bit syndrome (S1 through S4).

In order to correct a single-bit error, a 7-bit correction mask is created.
Each bit of this mask is generated based on the result of the syndrome from the
previous stage. When no error is detected, all bits of the mask become zero.
When a single-bit error is detected, the mask masks out all the bits except the
error bit. The subsequent stage then XORs the mask with the original data. As a
result, the error bit is reversed (corrected) to the right state. If a double-bit error
is detected, all mask bits become zero. The error type and the corresponding
correction mask are created during the same clock cycle.
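The mask logic can be sketched as follows (illustrative Python with hypothetical names; the report's actual logic is the Verilog decoder):

```python
def correction_mask(syndrome_pos, double_error, width=7):
    """Build the one-hot correction mask described above.
    syndrome_pos: data-bit position flagged by the syndrome (1..width), or 0.
    The mask is all zeros for 'no error' and for a double error."""
    mask = [0] * width
    if syndrome_pos and not double_error:
        mask[syndrome_pos - 1] = 1
    return mask

data = [1, 0, 0, 1, 0, 0, 1]
mask = correction_mask(3, double_error=False)
corrected = [d ^ m for d, m in zip(data, mask)]   # XOR stage flips bit 3 only
print(corrected)    # -> [1, 0, 1, 1, 0, 0, 1]
```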

In the data correction stage, the mask is XOR'd with the original
incoming data to flip the error bit to the correct state, if needed. When there are
no bit errors, or when there are double-bit errors, all the mask bits are zeros; as
a result, the incoming data goes through the ECC unit without changing the
original data.

Fig 3.5 Hamming Decoder RTL Schematic


3.6 Pin Descriptions

Pin Name   In/Out   Width           Module    Description
inp        In       6:0 (7 bits)    Encoder   Original data input to the encoder; the message to be encoded into a code word.
reset      In       1               Encoder   Active-high reset.
outp       Out      10:0 (11 bits)  Encoder   Encoded code word (Hamming code) from the encoder; driven to the transmitter interface.
inp        In       10:0 (11 bits)  Decoder   Code word (Hamming) input to the decoder-corrector from the receiver interface.
reset      In       1               Decoder   Active-high reset.
outp       Out      6:0 (7 bits)    Decoder   Corrected original data from the decoder.

Table 3.1 Pin descriptions


Implementation
4.1 The (11, 7, 1) Hamming code:
The Hamming code can be applied to data units of any length. It uses the
relationship between data and redundancy bits discussed above, and has the
capability of correcting single-bit errors.

For example, a 7-bit ASCII code requires four redundancy bits that can be
added at the end of the data unit or interspersed with the original data bits to
form the (11, 7, 1) Hamming code.

In Fig 5.1, these redundancy bits are placed in positions 1, 2, 4 and 8 (the
positions in an 11-bit sequence that are powers of ‘2’). For clarity in the
examples below, these bits are referred to as ‘p1,’ ‘p2,’ ‘p4’ and ‘p8.’ The data
bits are referred to as ‘d1’, d2’, ’d3’, ’d4’, ’d5’, ’d6’ and ‘d7’.

In the Hamming code, each ‘p’ bit is the parity bit for one combination of data
bits as shown below:
P1 = XOR of Bits ( 1, 3, 5, 7, 9, 11)
P2 = XOR of Bits ( 2, 3, 6, 7, 10, 11)
P4 = XOR of Bits ( 4, 5, 6, 7)
P8 = XOR of Bits ( 8, 9, 10, 11)
MSB                                              LSB
11   10   9    8    7    6    5    4    3    2    1
d7   d6   d5   p4   d4   d3   d2   p3   d1   p2   p1

Fig. 5.1 : Positions of parity bits and data bits in hamming code

5.1.1 Example 1:
Let us assume that the data word is the ASCII value of 'A', i.e.
65 decimal = 41 hex = 101 octal = 1000001 binary = capital A (ASCII)

11   10   9    8    7    6    5    4    3    2    1
1    0    0    p4   0    0    0    p3   1    p2   p1

Calculating parity bits (even parity) for the above data word:

P1 = XOR of bits (3, 5, 7, 9, 11)  = 1 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 1 = 0
P2 = XOR of bits (3, 6, 7, 10, 11) = 1 ⊕ 0 ⊕ 0 ⊕ 0 ⊕ 1 = 0
P3 = XOR of bits (5, 6, 7)         = 0 ⊕ 0 ⊕ 0 = 0


P4 = XOR of bits (9, 10, 11)       = 0 ⊕ 0 ⊕ 1 = 1

The 7-bit data word is written into the memory together with the 4 parity bits
as an 11-bit composite word. Substituting the 4 parity bits in their proper
positions, we obtain the 11-bit composite word:

11   10   9    8    7    6    5    4    3    2    1
1    0    0    1    0    0    0    0    1    0    0

The final 11-bit Hamming code generated from the 7-bit data word and the
4 parity bits is 10010000100 b = 484 h = 1156 d.

The above code is now ready for transmission.
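The generation steps above can be sketched in Python (an illustrative model with bit positions 1-indexed as in Fig. 5.1, not the Verilog implementation itself):

```python
def hamming_encode(data):
    """Encode a 7-bit data word into an 11-bit (11, 7, 1) codeword.

    Data bits d1..d7 occupy positions 3, 5, 6, 7, 9, 10, 11; even-parity
    bits p1..p4 occupy positions 1, 2, 4, 8 (the powers of two).
    """
    code = {p: 0 for p in range(1, 12)}
    for j, pos in enumerate([3, 5, 6, 7, 9, 10, 11]):
        code[pos] = (data >> j) & 1            # d1 is the LSB of `data`
    for p in (1, 2, 4, 8):                     # parity over covered positions
        code[p] = sum(code[i] for i in range(1, 12) if i & p) % 2
    return sum(code[i] << (i - 1) for i in range(1, 12))

print(hex(hamming_encode(0b1000001)))  # 'A' -> 0x484
```

The test `i & p` picks out exactly the positions covered by parity bit p, since each parity position is a power of two.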

Error detection and correction :


Suppose that by the time the above transmission is received, the seventh bit
has changed from '0' to '1'.

11   10   9    8    7    6    5    4    3    2    1
1    0    0    1    0    0    0    0    1    0    0
Original transmitted Hamming code

1    0    0    1    1    0    0    0    1    0    0
Received faulty Hamming code

If we simply take the data bits and find their ASCII equivalent, we get the
character 'I' instead of the 'A' that was originally sent.
So, the error in the data word needs to be detected and corrected before it can
be processed further.
The receiver takes the transmission and recalculates four new parity bits, using
the same sets of bits used by the sender plus the relevant parity ‘p’ bit for each
set.

11   10   9    8    7    6    5    4    3    2    1
1    0    0    1    1    0    0    0    1    0    0
Received faulty Hamming code

Calculating parity bits (even parity) for the above received word:
P1 = XOR of bits (1, 3, 5, 7, 9, 11)  = 0 ⊕ 1 ⊕ 0 ⊕ 1 ⊕ 0 ⊕ 1 = 1
P2 = XOR of bits (2, 3, 6, 7, 10, 11) = 0 ⊕ 1 ⊕ 0 ⊕ 1 ⊕ 0 ⊕ 1 = 1
P3 = XOR of bits (4, 5, 6, 7)         = 0 ⊕ 0 ⊕ 0 ⊕ 1 = 1
P4 = XOR of bits (8, 9, 10, 11)       = 1 ⊕ 0 ⊕ 0 ⊕ 1 = 0


Now assembling the new parity values into a binary number in descending
order of 'p' position (P4 P3 P2 P1), we get:
P4 P3 P2 P1 = 0111 b = 7 d, which is the precise location of the corrupted bit!
The error can now be corrected by complementing that bit. After correction,
we get the Hamming code as

11   10   9    8    7    6    5    4    3    2    1
1    0    0    1    0    0    0    0    1    0    0

which is the same as the transmitted one.


Now, by extracting the data bits from the above code, we get the original data,
which is 65 d = 41 h = 101 o = 1000001 b = capital A (ASCII).
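The receiver-side recalculation and correction can be sketched the same way (again an illustrative Python model, not the Verilog decoder itself):

```python
def hamming_correct(code):
    """Return (corrected codeword, syndrome) for an 11-bit received word.

    The syndrome P4 P3 P2 P1, read as a binary number, is the 1-indexed
    position of the corrupted bit, or 0 when no error is detected.
    """
    bit = lambda pos: (code >> (pos - 1)) & 1
    syndrome = 0
    for p in (1, 2, 4, 8):
        if sum(bit(i) for i in range(1, 12) if i & p) % 2:
            syndrome |= p
    if syndrome:
        code ^= 1 << (syndrome - 1)   # complement the corrupted bit
    return code, syndrome

print(hamming_correct(0b10011000100))  # -> (1156, 7): 0x484 restored, error at bit 7
```

Note that the receiver includes the received parity bit itself in each check, so a clean word yields an even group and a zero syndrome.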

5.1.2 Example 2:
Let us repeat the above process with another data word, but in tabular fashion.
Let the data word be 1101101 b, the ASCII equivalent of lowercase 'm'.
Changing bases, 109 d = 6D h = 155 o = 1101101 b = lowercase M (ASCII) = 'm'.

Bit position              11  10   9   8   7   6   5   4   3   2   1
Generic name              d7  d6  d5  p4  d4  d3  d2  p3  d1  p2  p1
Data word (w/o parity)     1   1   0       1   1   0       1
Calculating parity 1       1       0       1       0       1       1
Calculating parity 2       1   1           1   1           1   1
Calculating parity 3                       1   1   0   0
Calculating parity 4       1   1   0   0
Data word (with parity)    1   1   0   0   1   1   0   0   1   1   1
Table 5.1 Generation of hamming code

The final Hamming code generated is 11001100111 b = 667 h = 1639 d.


Now let us assume that during transmission, the 9th bit has changed
from '0' to '1'. Therefore, the received faulty Hamming code is:

1   1   1   0   1   1   0   0   1   1   1

Now we take the above Hamming code and determine the parity bits again, so
that the erroneous bit can be detected and then corrected.


Bit position              11  10   9   8   7   6   5   4   3   2   1   Parity  Parity
                                                                       check   bit
Generic name              d7  d6  d5  p4  d4  d3  d2  p3  d1  p2  p1
Received data word         1   1   1   0   1   1   0   0   1   1   1
Calculating parity 1       1       1       1       0       1       1   Fail    1
Calculating parity 2       1   1           1   1           1   1       Pass    0
Calculating parity 3                       1   1   0   0               Pass    0
Calculating parity 4       1   1   1   0                               Fail    1
Table 5.2 Correction of received Hamming code.

Now, assembling the new parity values into a binary number in descending
order of 'p' position (P4 P3 P2 P1), we get:

P4 P3 P2 P1 = 1001 b = 9 d, which is the precise location of the corrupted bit!

The error can now be corrected by complementing the corresponding bit.
After correction, we get the Hamming code as:

1   1   0   0   1   1   0   0   1   1   1

which is the same as the transmitted one.


Now, by extracting the data bits from the above code, we get the original data,
which is 109 d = 6D h = 155 o = 1101101 b = lowercase M (ASCII) = 'm'.
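Extracting the data bits simply skips the four parity positions. An illustrative Python sketch of this final step (not part of the Verilog design):

```python
def extract_data(code):
    """Drop parity positions 1, 2, 4, 8 of an 11-bit codeword; return 7-bit data."""
    data = 0
    for j, pos in enumerate([3, 5, 6, 7, 9, 10, 11]):  # d1..d7 positions
        data |= ((code >> (pos - 1)) & 1) << j
    return data

print(chr(extract_data(0b11001100111)))  # -> m
```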

5.2 Simulation Results


Hamming_Encode.v contains the (11, 7, 1) Verilog HDL code for the Hamming
encoder, which converts a 7-bit ASCII code into an 11-bit codeword, while
Hamming_Decode.v contains the (11, 7, 1) Hamming decoder, which converts an
11-bit codeword back into a 7-bit ASCII code after correcting a single-bit
error, if any. Both modules were developed in Verilog HDL and simulated using
the Xilinx ISE 9.2i EDA software.


The following pages contain the results of several simulations with different
parameters, which were performed to confirm the proper behavior and
functionality of the device.

Brief overview of simulation tests conducted:

Test 1 -> No error during transmission.
Test 2 -> Error in the 2nd bit of the transmitted Hamming code.
Test 3 -> Error in the 5th bit of the transmitted Hamming code.
Test 4 -> Error in the 8th bit of the transmitted Hamming code.
Test 5 -> Error in the 11th bit of the transmitted Hamming code.


5.3 Synthesis report


Final synthesis report
Release 9.2i - xst J.36
========================================================================
* Final Report *
========================================================================
Final Results
RTL Top Level Output File Name : hamm_enc.ngr
Top Level Output File Name     : hamm_enc
Output Format                  : NGC
Optimization Goal              : Speed
Keep Hierarchy                 : NO

Design Statistics
# IOs          : 19

Cell Usage:
# BELS         : 12
#   LUT2       : 7
#   LUT3       : 1
#   LUT4       : 4
# IO Buffers   : 19
#   IBUF       : 8
#   OBUF       : 11
========================================================================
Device utilization summary:
---------------------------
Selected Device : 3s500eft256-5

Number of Slices:           7 out of 4656    0%
Number of 4 input LUTs:    12 out of 9312    0%
Number of IOs:             19
Number of bonded IOBs:     19 out of  190   10%
========================================================================
* TIMING REPORT *
========================================================================
Clock Information:
------------------
No clock signals found in this design

Asynchronous Control Signals Information:
----------------------------------------
No asynchronous control signals found in this design

Timing Summary:
---------------
Speed Grade: -5

Minimum period: No path found
Minimum input arrival time before clock: No path found
Maximum output required time after clock: No path found
Maximum combinational path delay: 6.771ns

Timing Detail:
--------------
All values displayed in nanoseconds (ns)
========================================================================
Timing constraint: Default path analysis
Total number of paths / destination ports: 34 / 11
------------------------------------------------------------------------
Delay:         6.771ns (Levels of Logic = 4)
Source:        inp<0> (PAD)
Destination:   outp<1> (PAD)

Data Path: inp<0> to outp<1>
                            Gate     Net
Cell:in->out   fanout      Delay    Delay   Logical Name (Net Name)
------------------------------------------------------------------------
IBUF:I->O           2      1.106    0.532   inp_0_IBUF (inp_0_IBUF)
LUT3:I0->O          2      0.612    0.383   Mxor_mux0000_xor0001_xo<1>1 (mux0000_xor0001)
LUT4:I3->O          1      0.612    0.357   outp_1_mux00001 (outp_1_OBUF)
OBUF:I->O                  3.169            outp_1_OBUF (outp<1>)
------------------------------------------------------------------------
Total                      6.771ns (5.499ns logic, 1.272ns route)
                                   (81.2% logic, 18.8% route)
========================================================================
CPU : 42.92 / 48.01 s | Elapsed : 43.00 / 48.00 s


Advantages and applications

6.1 Advantages of the Hamming code:


The biggest advantage of the Hamming ECC is its absolute simplicity, both in
generating the Hamming code and thereafter in recovering the original data
word. Moreover, by use of the relatively simple Hamming rule, data words of
any length can be encoded along with the corresponding number of parity bits.
The Hamming rule shows that four parity bits can provide error correction for
five to eleven data bits, with the latter being a perfect code. Analysis shows
that the overhead introduced into the data stream is modest for the range of
data bits available (11 bits: 36% overhead; 8 bits: 50% overhead; 5 bits: 80%
overhead).
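The quoted overhead figures are simply the ratio of the four parity bits to the data width; a quick check:

```python
# Overhead of 4 parity bits relative to the data width
for m in (11, 8, 5):
    print(f"{m} data bits: {4 / m:.0%} overhead")
```

which reproduces the 36%, 50% and 80% figures above.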

6.2 Applications
The first IBM computer to use Hamming codes was the IBM Stretch
computer (model 7030), built in 1961 [LC]. A follow-on machine known as
Harvest (model 7950), built in 1962, was equipped with 22-track tape
drives that employed a (22, 16) SEC-DED code. The ECCs found on
modern machines are usually not Hamming codes, but rather are codes
devised for some logical or electrical property such as minimizing the
depth of the parity check trees, and making them all the same length.
Such codes give up Hamming’s simple method of determining which bit is
in error, and instead use a hardware table lookup.

Server-class computers generally have ECC at the SEC-DED level. In


the early solid state computers equipped with ECC memory, the memory
was usually in the form of eight check bits and 64 information bits. A
memory module (group of chips) might be built from, typically, nine eight-
bit wide chips. A word access (72 bits, including check bits) fetches eight
bits from each of these nine chips. Each chip is laid out in such a way that
the eight bits accessed for a single word are physically far apart. Thus, a
word access references 72 bits that are physically somewhat separated.
With bits interleaved in that way, if a few closely spaced bits in the same
chip are altered, for example by an alpha particle or cosmic-ray hit, then a
few words will each have a single-bit error, which can be corrected.
Some larger memories incorporate a technology known as Chipkill.
This allows the computer to continue to function even if an entire memory
chip fails, for example due to loss of power to the chip.
The interleaving technique can be used in communication applications to
correct burst errors, by interleaving the bits in time.
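The idea can be sketched in a few lines of Python (an illustration with hypothetical helper names, not drawn from the report): transmitting bit i of every codeword before bit i+1 of any codeword turns a short channel burst into isolated single-bit errors that SEC can then fix.

```python
def interleave(words):
    """Send bit i of every codeword before bit i+1 of any codeword.
    `words` is a list of equal-length bit lists."""
    return [w[i] for i in range(len(words[0])) for w in words]

def deinterleave(stream, count, n):
    """Invert interleave() for `count` codewords of `n` bits each."""
    return [[stream[i * count + j] for i in range(n)] for j in range(count)]

words = [[1, 0, 1], [0, 1, 1]]
stream = interleave(words)         # [1, 0, 0, 1, 1, 1]
stream[2] ^= 1; stream[3] ^= 1     # a burst of two adjacent channel errors...
# ...lands as one single-bit error in each codeword:
print(deinterleave(stream, 2, 3))  # [[1, 1, 1], [0, 0, 1]]
```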


In modern server-class machines, Hamming ECC may be used in


different levels of cache memory, as well as in main memory. It may also
be used in non-memory areas, such as on busses.
Hamming codes can also be used for data compression allowing a
small amount of distortion (loss of information) by “running the machine
backwards.”


Conclusion and future work

7.1 Conclusion
Error Correction Code (ECC) is a method of error detection and
correction in digital data transmission. This project presented the design and
development of the (11, 7, 1) Hamming code using the Verilog hardware
description language (HDL). Here, '11' corresponds to the total number of bits
in a transmittable unit comprising data bits and redundancy bits, '7' is the
number of data bits, and '1' denotes the maximum number of error bits that can
be corrected in the transmittable unit.

In a communication system that employs forward error-correction


coding, the digital information source sends a data sequence to an
encoder. The encoder inserts redundant (or parity) bits, thereby
outputting a longer sequence of code bits, called a ‘code word.’ These
code words can then be transmitted to a receiver, which uses a suitable
decoder to extract the original data sequence.

The Verilog code fits well into small field-programmable gate arrays (FPGAs),
complex programmable logic devices (CPLDs) and application-specific integrated
circuits (ASICs), and is therefore ideally suited to communication
applications that need error control.

7.2 Future work
Today, Hamming ECCs are used in memories whose organization is often more
complicated than simply having eight check bits and 64 information bits.
Modern server memories may have 16 or 32 information bytes (128 or 256 bits)
checked as a single ECC word. Each DRAM chip may store two, three, or four
bits in physically adjacent positions. Correspondingly, ECC is done on
alphabets of four, eight, or 16 characters (a subject not discussed here).
Because the DRAM chips usually come in
8- or 16-bit wide configurations, the memory module often provides more
than enough bits for the ECC function. The extra bits might be used for
other functions, such as one or two parity bits on the memory address.
This allows the memory to check that the address it receives is (probably)
the address that the CPU generated.

Turbo product codes can be built out of Hamming codes. Using a (15, 11)
Hamming code, you'd have a 15x15 matrix of bits: 11 data
rows and 4 check rows, and 11 data columns and 4 check columns. Fill in
the (data, data) positions with data. Fill in the (data, check) positions in data


rows by filling in the check bits for a 15-bit Hamming code for that row,
and fill in the (check, data) bits in data columns the same way. Now the
data bits are filled for the check rows and check columns. Fill in the
(check, check) positions by filling in the check bits for the 15-bit Hamming
code for their row. You'd get the same results for (check, check) bits if you
did the Hamming code for their column, because the value is really the
XOR of a bunch of (data, data) bits, and it's the same set of (data, data)
bits either way. You could even go hog-wild and build N-dimensional turbo
product codes, or ones with different Hamming codes in different
dimensions. Decoding turbo product codes well requires more tricks.
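The claim that the (check, check) bits come out the same either way follows from the linearity of the parity checks. A small Python sketch (illustrative only; `h15_checks` is a hypothetical helper computing the four check bits of a (15, 11) Hamming codeword) verifies it on random data:

```python
import random

def h15_checks(data11):
    """Even-parity bits (p1, p2, p4, p8) of a (15, 11) Hamming codeword
    whose data bits fill the non-power-of-two positions 3, 5, 6, ..., 15."""
    pos = [i for i in range(1, 16) if i & (i - 1)]   # not a power of two
    word = dict(zip(pos, data11))
    return [sum(word[i] for i in pos if i & p) % 2 for p in (1, 2, 4, 8)]

random.seed(1)
data = [[random.randint(0, 1) for _ in range(11)] for _ in range(11)]
row_chk = [h15_checks(r) for r in data]                   # (data, check)
col_chk = [h15_checks([data[r][c] for r in range(11)])    # (check, data)
           for c in range(11)]
# (check, check) bits, computed along check rows vs along check columns:
by_row = [h15_checks([col_chk[c][p] for c in range(11)]) for p in range(4)]
by_col = [h15_checks([row_chk[r][q] for r in range(11)]) for q in range(4)]
print(all(by_row[p][q] == by_col[q][p] for p in range(4) for q in range(4)))  # True
```

Both orders reduce to the same XOR over the same set of (data, data) bits, which is exactly the argument made above.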


Source code
8.1 Hamming_encoder.v
module hamm_enc(outp, inp, reset);
  parameter n = 11, k = 7;
  output [n-1:0] outp;
  input  [k-1:0] inp;
  input          reset;
  reg    [n-1:0] outp;
  integer i, j;

  always @(inp or reset)
  begin
    if (reset)
      outp = 0;
    else
    begin
      // Scatter the k data bits into the non-parity codeword positions.
      i = 0; j = 0;
      while ((i < n) || (j < k))
      begin
        // Positions 0, 1, 3, 7 (1-indexed 1, 2, 4, 8) hold parity bits.
        while (i == 0 || i == 1 || i == 3 || i == 7)
        begin
          outp[i] = 0;
          i = i + 1;
        end
        outp[i] = inp[j];
        i = i + 1;
        j = j + 1;
      end
      // Set each parity bit so that its covered group has even parity.
      if (^(outp & 11'b101_0101_0101))  // p1 covers positions 1,3,5,7,9,11
        outp[0] = ~outp[0];
      if (^(outp & 11'b110_0110_0110))  // p2 covers positions 2,3,6,7,10,11
        outp[1] = ~outp[1];
      if (^(outp & 11'b000_0111_1000))  // p3 covers positions 4,5,6,7
        outp[3] = ~outp[3];
      if (^(outp & 11'b111_1000_0000))  // p4 covers positions 8,9,10,11
        outp[7] = ~outp[7];
    end
  end
endmodule


8.2 Hamming_decoder.v
module hamm_dec(outp, inp, reset);
  parameter n = 11, k = 7;
  output [k-1:0] outp;
  input  [n-1:0] inp;
  input          reset;
  reg    [k-1:0] outp;
  reg            r1, r2, r4, r8;
  reg    [3:0]   r;
  reg    [n-1:0] IN;
  integer i, j;

  always @(inp or reset)
  begin
    if (reset)
      outp = 0;
    else
    begin
      // Recompute the four parity checks over the received word.
      r1 = ^(inp & 11'b101_0101_0101);
      r2 = ^(inp & 11'b110_0110_0110);
      r4 = ^(inp & 11'b000_0111_1000);
      r8 = ^(inp & 11'b111_1000_0000);
      // The syndrome is the 1-indexed position of the flipped bit
      // (0 means no single-bit error was detected).
      r  = {r8, r4, r2, r1};
      IN = inp;
      if (r != 0)
        IN[r-1] = ~IN[r-1];   // correct the single-bit error
      // Gather the data bits, skipping parity positions 0, 1, 3, 7.
      i = 0; j = 0;
      while ((i < n) || (j < k))
      begin
        while (i == 0 || i == 1 || i == 3 || i == 7)
          i = i + 1;
        outp[j] = IN[i];
        i = i + 1;
        j = j + 1;
      end
    end
  end
endmodule
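As an illustrative cross-check (not part of the submitted design), a small Python golden model using the same mask constants as the Verilog source can exhaustively confirm that the decoder recovers every 7-bit word from every possible single-bit corruption of its codeword:

```python
# Mask constants copied from the Verilog source; bit 0 = codeword bit 1.
MASKS = {1: 0b101_0101_0101, 2: 0b110_0110_0110,
         4: 0b000_0111_1000, 8: 0b111_1000_0000}
DATA_POS = [2, 4, 5, 6, 8, 9, 10]   # 0-indexed non-parity positions

def enc(d):
    out = 0
    for j, pos in enumerate(DATA_POS):
        out |= ((d >> j) & 1) << pos
    for p, m in MASKS.items():              # force even parity per group
        if bin(out & m).count("1") % 2:
            out ^= 1 << (p - 1)             # flip the group's parity bit
    return out

def dec(c):
    syn = sum(p for p, m in MASKS.items() if bin(c & m).count("1") % 2)
    if syn:
        c ^= 1 << (syn - 1)                 # correct the flagged bit
    return sum(((c >> pos) & 1) << j for j, pos in enumerate(DATA_POS))

ok = all(dec(enc(d) ^ (1 << e)) == d for d in range(128) for e in range(11))
print(ok)  # True: every single-bit error in every codeword is corrected
```

This mirrors the five simulation tests above, but covers all 128 x 11 single-flip cases at once.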


References
1. R. W. Hamming, "Error Detecting and Error Correcting Codes," Bell System
   Technical Journal, vol. 29, pp. 147-160, April 1950.

2. Hamming Code. http://en.wikipedia.org/wiki/Hamming_code, May 2009.

3. Hill, Raymond. A First Course in Coding Theory. Clarendon Press, 1986.

4. Roman, Steven. Coding and Information Theory. Springer-Verlag, 1992.

5. Peterson, W. W. and Weldon, E. J. Error-Correcting Codes, 2nd edition.
   MIT Press, 1972, pp. 256-261.

6. Agrell, Erik. http://www.s2.chalmers.se/~agrell/bounds/, October 2003.

7. Lin, Shu and Costello, Daniel J., Jr. Error Control Coding: Fundamentals
   and Applications. Prentice-Hall, 1983.

8. IEEE Standard Verilog Hardware Description Language, New York, 2001.

9. Palnitkar, Samir. Verilog HDL: A Guide to Digital Design and Synthesis.
   Prentice Hall, Englewood Cliffs, New Jersey.
