
8 IT 01

DIGITAL AND WIRELESS COMMUNICATION


by

Prof. (Dr) Prashant V Ingole


Syllabus
UNIT I: Information Theory: Elements of digital communication system, Unit of information, Entropy, Rate of information, Joint entropy, Conditional entropy, Mutual information, Channel Capacity, Shannon's Theorem, Shannon-Hartley Theorem, Coding efficiency, Shannon-Fano coding Theorem.
UNIT II: Error Control and Coding: Methods of controlling errors, linear block codes, matrix description of linear block codes, error detection and error correction capabilities of linear block codes, single error correcting Hamming codes, cyclic codes, syndrome calculation, error detection, introduction to convolutional codes.
UNIT III: Spread Spectrum Signals: Model of spread spectrum communication system, direct sequence spread spectrum signals, generation of PN sequences, frequency hopping spread spectrum (slow frequency hopping and fast frequency hopping), comparison, basic principles of TDMA, FDMA, CDMA.
UNIT IV: Cellular Telephone Concepts: Introduction, mobile telephone service, cellular telephone architecture, frequency reuse, cell splitting, sectoring, segmentation and dualization, cellular system topology, roaming and handoffs.
UNIT V: GSM and CDMA Technologies: Network Architecture, Protocol Architecture, GSM Channels, Frame structure for GSM, Authentication & Security in GSM, Introduction to CDMA, Architecture of CDMA System, IS-95 CDMA Forward & Reverse channels, Soft handoff.
UNIT VI: Wireless Network Technologies: IEEE 802.11 WLAN Technology, ETSI HIPERLAN Technology, IEEE 802.15 WPAN Technology, IEEE 802.16 WMAN Technology, Mobile Ad hoc Networks (MANETs), Mobile IP and Mobility Management, Mobile TCP, Wireless Sensor Networks, RFID Technology, Security Requirements for Wireless Networks.

Text Books:
• K S Shanmugam: Digital and Analog Communication Systems, Wiley
• R P Singh, S D Sapre: Communication Systems, Tata McGraw Hill
• T L Singal: Wireless Communication, Tata McGraw Hill
• T S Rappaport: Wireless Communications: Principles and Practice, Pearson Education
UNIT 1: Information Theory
• Elements of digital communication system,

• Unit of information, Entropy, Rate of information, Joint entropy, Conditional entropy, Mutual information,

• Channel Capacity,

• Shannon's Theorem, Shannon-Hartley Theorem,

• coding efficiency,

• Shannon-Fano coding Theorem.


Communication System (block diagram)
Information Source and Input Transducer → Transmitter → Communication Channel → Receiver → Output Transducer
• Source and destination are distantly located and interconnected by the communication channel (microwaves, free space, wires, optical fibre cable).
• Information source and input transducer: mike, camera, sensors, data files.
• Transmitter: various modulations (amplitude, frequency, phase, pulse, etc.) using a carrier wave.
• Communication channel: various transmission media such as coaxial cables, open space, microwave links, optical fibre cable.
• Receiver: the corresponding demodulations (amplitude, frequency, phase, pulse).
• Output transducer: speakers, screens, LED displays, indicators.
Elements of Digital Communication Systems (block diagram)
• Source and destination are separated in space and interconnected by the channel.
• The signal carried by the channel is electrical, electromagnetic or optical, and noise is added along the way.
• The source emits a sequence of symbols (e.g. A B C X Y Z 1 2 0 6 @ #), and the same sequence of symbols is to be reproduced at the destination.
Information Source
• Source output depends upon the nature of the source
• Analog – Microphone , Sensor output, Video signal
• Digital – TTY, File read from Memory, Position Encoders
• Analog signals can be converted into digital form by passing them through an ADC (analog-to-digital converter)
• Discrete information sources are characterised by the
following parameters
1) Source Alphabets (Symbols/ Letters)
2) Symbol Rate
3) Source Alphabet Probabilities
4) Probabilistic dependence of symbols in a sequence
• Source entropy (H), in bits per symbol: the average information content per symbol in a long message
• Source information rate is in bits per second
Information Source

• A to Z gives 26 characters; with 6 special characters, there are 32 symbols in total.
• If we assume an equal probability of occurrence for each character/symbol, then the probability of each character/symbol is p = 1/32.
• In practice, however, the probabilities of occurrence of the symbols are not equal.
• The character E occurs far more often than Q, W or Z, and Q rarely occurs alone; it is usually followed by U (as in QUE).
• Due to the probabilistic dependence of symbols and the unequal probabilities of occurrence, the average information content of the symbols is considerably reduced.
• Source information rate = source entropy × symbol rate; a small numerical sketch follows below.
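As a rough numerical illustration of the last point (not part of the original slides; the skewed distribution below is an arbitrary assumption), the following Python sketch computes the entropy of a 32-symbol alphabet for equal and unequal symbol probabilities and multiplies by an assumed symbol rate to get the information rate.

```python
import math

def entropy(probs):
    """Average information per symbol in bits: H = -sum(p * log2 p)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

equal  = [1 / 32] * 32                              # 32 equally likely symbols -> H = 5 bits/symbol
skewed = [0.2, 0.15, 0.1, 0.1] + [0.45 / 28] * 28   # assumed unequal distribution (sums to 1)

symbol_rate = 10  # symbols per second (assumed example value)
for name, dist in (("equal", equal), ("skewed", skewed)):
    H = entropy(dist)
    print(f"{name:6s}: H = {H:.3f} bits/symbol, R = {symbol_rate * H:.2f} bits/sec")
```

With equal probabilities H equals log2(32) = 5 bits/symbol; the skewed distribution gives a smaller entropy, and hence a smaller information rate, as stated above.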
Elements of Digital Communication Systems (block diagram, continued)
• The source encoder converts the sequence of symbols into a binary stream (e.g. 10101 10101 …), and the source decoder reverses this at the destination.
Source Encoder / Decoder
• Source to the Encoder is the string of symbols occurring at the
rate of r symbols/sec
• The source encoder converts the symbol sequence into a binary sequence of 0's and 1's
• Code words are assigned to the symbols at the input; in fixed-length coding, a fixed-length binary code word is assigned to each symbol
• Ex: for the 32-symbol system, the symbols can be represented by 5-bit code words.
For this system, a symbol rate of 10 symbols/sec will generate a source coder output data rate of 50 bits/sec.
• Fixed length coding is efficient only if the probability of
occurrence of symbols is equal
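A minimal sketch of the fixed-length scheme just described (the 32-symbol alphabet, the message and the symbol rate are assumptions for illustration): each symbol maps to a 5-bit code word, so 10 symbols/sec yield 50 bits/sec.

```python
import math
import string

# 26 letters plus 6 assumed special characters = 32 symbols
symbols = list(string.ascii_uppercase) + list(" .,?!'")
bits_per_symbol = math.ceil(math.log2(len(symbols)))            # 5 bits for 32 symbols
codebook = {s: format(i, f"0{bits_per_symbol}b") for i, s in enumerate(symbols)}

message = "HELLO"
encoded = "".join(codebook[s] for s in message)
print(encoded, len(encoded), "bits")                            # 25 bits for 5 symbols

symbol_rate = 10                                                # symbols/sec (assumed)
print("source coder output rate =", symbol_rate * bits_per_symbol, "bits/sec")   # 50
```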
Source Encoder / Decoder
• In most practical situations the symbols in a sequence are
statistically dependent and they occur with unequal
probabilities
• In such case the source coder takes two or more symbols as a
block and assigns variable length code words to these blocks
• The actual output rate of source encoder will be greater than
the source information rate R.
• Important parameters of the source encoder: block size, code word length, average data rate, and efficiency of the coder.
• At the receiver, the source decoder converts the binary output of the channel decoder back into the symbol sequence.
• It uses either a fixed-length decoder (simple) or a variable-length decoder (complex).
• Decoders must be able to handle problems such as growing memory requirements and loss of synchronization due to bit errors.
Elements of Digital Communication Systems (block diagram, continued)
• The channel encoder converts the binary stream into a longer binary stream with added redundancy (e.g. 10101 → 10101101), and the channel decoder reverses this at the destination.
Channel Encoder / Decoder
• Digital channel coding is the practical method of realizing high transmission reliability and efficiency.
• To make short-duration signals recognizable at the destination, longer-duration signals would normally be used at the transmitter, but that decreases the transmission rate.
• In order to increase the transmission reliability and efficiency, channel coding is used instead.
• A relatively small set of analog signals (often two) is selected for transmission over the line.
Channel Encoder / Decoder
• The demodulator then has a conceptually simple task: to distinguish between these two different waveforms of known shapes.
Channel Encoder / Decoder
• Error Control is accomplished by the channel coding operation
that consists of systematically adding extra bits to the output
of source coder.
• These extra bits themselves carry no information, but they make it possible for the receiver to detect transmission errors and to correct some of the errors in the information bits.
Communication Channel and Capacity
• The channel is the medium that interconnects the source and the destination, such as a pair of wires or free space
• A channel has only a finite bandwidth B
• The information-bearing signal often suffers amplitude and phase distortion as it travels over the channel
• As the signal travels it also undergoes attenuation, which decreases the signal power
• Signal is further corrupted by unwanted, unpredictable electrical
signals called Noise
• Internal and/or external Noise degrades the signal
• Some of the bad effects can be removed at the
receiver/destination but some bad effects cannot be completely
removed.
Communication Channel and Capacity
• A primary objective of communication system design is to suppress the bad effects of noise as much as possible
• The key figure of merit is the signal-to-noise (S/N) power ratio
• Increasing the signal power is not a proper solution, because of the inability of system components and devices to handle large signals
• Other important parameters of the channel are: usable bandwidth (B), amplitude and phase response, and the statistical properties of the noise.
• If the parameters of the communication channel are known, then we can compute the channel capacity C:
  C = B log2(1 + S/N) bits/sec
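A quick numerical check of C = B log2(1 + S/N); the bandwidth and SNR below are assumed example values (a 3.4 kHz voice-grade channel at 30 dB SNR), not figures from the slides.

```python
import math

def channel_capacity(bandwidth_hz, snr_linear):
    """Shannon channel capacity C = B * log2(1 + S/N) in bits/sec."""
    return bandwidth_hz * math.log2(1 + snr_linear)

B = 3400                      # Hz, assumed example bandwidth
snr = 10 ** (30 / 10)         # 30 dB -> S/N = 1000 (power ratio)
print(f"C = {channel_capacity(B, snr):,.0f} bits/sec")   # about 33,900 bits/sec
```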
Elements of Digital Communication Systems (block diagram, continued)
• The modulator converts the channel-encoded binary stream into analog electrical signals suitable for the channel; noise corrupts the signal in the channel, and the demodulator, channel decoder and source decoder recover the symbol sequence at the destination.
Digital Modulator / Demodulator
• The modulator accepts the bit stream (101010111…) and converts it into an electrical waveform that is suitable for transmission over the communication channel.
• Modulation is one of the most powerful tools in the hands of the communication system designer.
• It can be effectively used:
  - to minimize the effect of channel noise,
  - to match the frequency spectrum of the transmitted signal with the channel characteristics,
  - to provide the capability to multiplex many signals, and
  - to overcome some equipment limitations.
Digital Modulator / Demodulator
• Important parameters of modulator are
- Type of Waveforms
- Duration of Waveforms
- Power levels
- Bandwidth used
• Law of large numbers: while the outcome of a single random experiment may fluctuate widely, the overall result of many repetitions of a random experiment can be accurately predicted.
• In data communication this principle is used to
advantage by making the duration of the signalling
waveform long. By averaging over longer duration of
time the effect of noise can be minimized.
Digital Modulator / Demodulator
• The input is a stream of 0s and 1s.
• Let '0' correspond to A cos(ω1 t) and '1' to A cos(ω2 t).
• So after modulation the information is coded in the form of an electrical waveform.
• Similarly during modulation, multiple bits of
particular information may be coded as a waveform
of one frequency 𝑓1 and other pattern as 𝑓2 and third
as 𝑓3 and so on.
Digital Modulator / Demodulator
• Modulation is a reversible process and demodulator is
used to retrieve/extract the information present in
the information bearing modulated signal.
• The modulation type is an important parameter, and the demodulator is of the same type as the modulator, e.g. AM, FM, PCM, etc.
• Given the type and duration of the waveforms used by the modulator, the power level at the modulator, the physical and noise characteristics of the channel, and the type of demodulation, we can derive unique relationships between data rate, power, bandwidth requirements and the probability of incorrectly decoding a message bit.
Digital Modulator / Demodulator
• The characteristics of the modulator, the demodulator and the channel establish an average bit error rate (BER) between the output of the encoder at the source and the input of the decoder at the destination.
• The bit error rate, and the corresponding symbol error rate, may be higher than desired; this is an undesirable characteristic.
• A lower BER can be accomplished by redesigning the modulator-demodulator (modem) or by error control coding.
Example of Digital Communication Systems (block diagram figure)
Unit of Information
• Performance of the communication system may never be
deterministic in nature.
• It is always described in statistical terms because
occurrence of a particular symbol is completely
unpredictable or uncertain
• Transmitter transmits any of the predetermined messages.
• Probability of transmitting each individual message is
known.
• When communication system model is statistically defined,
we are able to describe its overall or average performance
• Principle of improbability: there is an inverse relationship between the probability of an event and the amount of information it conveys.
• Example: "a dog bites a man" (highly probable, little information) versus "a man bites a dog" (improbable, much information).
Unit of Information
• More probable events carry less information and less probable events carry more information.
• Thus there is an inverse relationship between the probability of an event and the amount of information associated with it.
• If some event is represented as x_i, its probability is represented as p(x_i).
• So I(x_i) = f(1/p(x_i)), for some function f to be determined.
Unit of Information
• Thus there is an inverse relationship between the probability of the event and the amount of information associated with it.
• If some event is represented as x_i, its probability is p(x_i), so I(x_i) = f(1/p(x_i)).
• Now let there be another independent event y_k, so I(y_k) = f(1/p(y_k)).
• The probability of the joint event is p(x_i, y_k) = p(x_i)·p(y_k), with associated information content
  I(x_i, y_k) = f(1/p(x_i, y_k)) = f(1/(p(x_i)·p(y_k)))
• The total information I(x_i, y_k) must equal the sum of the individual informations I(x_i) and I(y_k).
Unit of Information
• I(x_i, y_k) = f(1/p(x_i, y_k)) = f(1/(p(x_i)·p(y_k))) thus expresses the information of the joint occurrence of two independent events.
• We now ask which function f on the right hand side converts the multiplication into an addition.
• The function which converts multiplication into addition is the logarithm.
• Thus I(x_i, y_k) = log(1/(p(x_i)·p(y_k)))
                   = log(1/p(x_i)) + log(1/p(y_k))
                   = I(x_i) + I(y_k)
• Hence the basic equation defining the amount of information is
  I(x_i) = log(1/p(x_i)) = −log p(x_i)
  For a binary system the logarithm is taken to base 2 (log₂), giving information in bits.
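A small sketch of the self-information formula I(x) = −log2 p(x), confirming that the information of two independent events adds; the probabilities used are assumed example values.

```python
import math

def self_information(p):
    """Information in bits of an event with probability p: I = -log2(p)."""
    return -math.log2(p)

p_x, p_y = 0.5, 0.125                       # assumed probabilities of two independent events
print(self_information(p_x))                # 1.0 bit
print(self_information(p_y))                # 3.0 bits
print(self_information(p_x * p_y))          # 4.0 bits = I(x) + I(y)
```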
Entropy
• A communication system is meant to deal with all possible messages.
• The instantaneous information corresponding to an individual message may be erratic.
• Even so, the statistically averaged information per individual message of a source can be defined.
• The average information per individual message is known as Entropy.
Entropy
• The average information per individual message is known as Entropy.
Source → Destination
Let there be M messages from source to destination, m_1, m_2, m_3, ….., m_M, with probabilities of occurrence P_1, P_2, P_3, ….., P_M.
Let us assume that in a long time interval L messages have been generated, with L large so that L >> M. Then the number of m_1 messages is P_1·L.
The amount of information in message m_1 is log(1/P_1). Thus the amount of information in all m_1 messages is P_1·L·log(1/P_1).
Entropy
• The average information per individual message is known as Entropy.
The amount of information in message m_1 is log(1/P_1); thus the amount of information in all m_1 messages is P_1·L·log(1/P_1).
The amount of information in all L messages is then
  I_t = P_1·L·log(1/P_1) + P_2·L·log(1/P_2) + ….. + P_M·L·log(1/P_M)
The average information per message is then
  H = I_t/L = P_1·log(1/P_1) + P_2·log(1/P_2) + ….. + P_M·log(1/P_M)
    = Σ_{k=1}^{M} p_k log(1/p_k) = −Σ_{k=1}^{M} p_k log p_k
Entropy
• The average information per individual message is known as Entropy.
The average information per message, the entropy H, is given by
  H = Σ_{k=1}^{M} p_k log(1/p_k) = −Σ_{k=1}^{M} p_k log p_k
If there is only a single possible message, i.e. M = 1 and p_k = P_1 = 1, then
  H = Σ_{k=1}^{M} p_k log(1/p_k) = P_1 log(1/P_1) = 1·log(1/1) = 0
Thus, when there is a single message of probability one, the reception of that message conveys no information.
Entropy
• The average information per individual message is known as Entropy.
The average information per message, the entropy H, is given by
  H = Σ_{k=1}^{M} p_k log(1/p_k) = −Σ_{k=1}^{M} p_k log p_k
Let only one message out of the M messages have probability 1 and all the others probability 0. In that case
  H = Σ_{k=1}^{M} p_k log(1/p_k)
    = P_1·log(1/P_1) + lim_{p→0} [ p·log(1/p) + p·log(1/p) + … ]
    = 1·log(1/1) + 0 = 0
Thus if all probabilities are zero except for one, which must then be unity, the entropy is zero. In all other cases the entropy is greater than zero.
Entropy
For a binary system (M = 2) the entropy is
  H = p_1 log(1/p_1) + p_2 log(1/p_2)
Let p_1 = p; then p_2 = 1 − p_1 = 1 − p = q, so
  H = p log(1/p) + (1 − p) log(1/(1 − p)) = p log(1/p) + q log(1/q) = H(p) = H(q)
Entropy is measured in bits per message. It is found that entropy is maximum when the messages are equally likely.
Entropy
Important properties of entropy can be summarized as:
1) log M ≥ H(X) ≥ 0
2) H(X) = 0 if all probabilities are zero except for one, which must be unity.
3) H(X) = log M if all probabilities are equal, so that p(x_i) = p_i = 1/M for all i.
H under different cases for M = 2:
• Case I: p_1 = 0.01, p_2 = 0.99 → H = 0.08
• Case II: p_1 = 0.4, p_2 = 0.6 → H = 0.97
• Case III: p_1 = 0.5, p_2 = 0.5 → H = 1.00
• Entropy is less when uncertainty is less and more when uncertainty is more; a comparison is sketched below.
• Thus we can say that entropy is a measure of uncertainty.
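The three binary-source cases above can be reproduced with a short entropy routine (a minimal sketch; log base 2 gives bits/message).

```python
import math

def H(probs):
    """Entropy in bits/message: H = -sum(p * log2 p), skipping zero-probability terms."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

for p1, p2 in [(0.01, 0.99), (0.4, 0.6), (0.5, 0.5)]:
    print(f"p1 = {p1}, p2 = {p2}  ->  H = {H([p1, p2]):.2f} bits/message")
# prints 0.08, 0.97 and 1.00, matching Cases I-III above
```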
Rate of Information
If a message source generates messages at the rate of r messages
per second, the rate of information R is defined as the average
number of bits of information per second .
H is the average number of bits of information per message
Hence R=r H bits/sec
Ex : Let us consider two sources of equal Entropy H, generating 𝑟1
and 𝑟2 messages per seconds respectively. The first source will
transmit the information at a rate 𝑅1 = 𝑟1 𝐻 and second source
will transmit information at rate 𝑅2 = 𝑟2 𝐻.
Now if r_1 > r_2 then R_1 > R_2. Thus, in a given period, more information is transmitted from the first source than from the second, placing a greater demand on the communication channel. Hence a source is not described by its entropy alone but also by its rate of information.
Sometimes R is referred to as the bits/sec entropy and H as the bits/message entropy.
Rate of Information
Ex: An event has six possible outcomes with probabilities p_1 = 1/2, p_2 = 1/4, p_3 = 1/8, p_4 = 1/16, p_5 = 1/32, p_6 = 1/32. Find the entropy of the system. Also find the rate of information if there are 16 outcomes per second.
Solution: The entropy is H = Σ_{k=1}^{6} p_k log(1/p_k)
  H = (1/2)log 2 + (1/4)log 4 + (1/8)log 8 + (1/16)log 16 + (1/32)log 32 + (1/32)log 32
    = 1/2 + (1/4)·2 + (1/8)·3 + (1/16)·4 + (1/32)·5 + (1/32)·5
    = (16 + 16 + 12 + 8 + 5 + 5)/32 = 62/32 = 31/16 bits/message
Now r = 16 outcomes/sec, hence the rate of information is
  R = rH = 16 · (31/16) = 31 bits/sec
Joint Entropy
A single probability scheme is a one-dimensional probability scheme; we now extend it to a two-dimensional probability scheme and consider its applications in communication engineering.
Let there be two finite discrete sample spaces S_1 and S_2, and let their product space be S = S_1 S_2.
The set of events in S_1 is [X] = [x_1, x_2, ….., x_m] and that of S_2 is [Y] = [y_1, y_2, ….., y_n].
Each event x_i of S_1 may occur in conjunction with any event y_k of S_2.
Joint Entropy
Hence the complete set of events in S = S_1 S_2 is
  [XY] = [ x_1y_1  x_1y_2  …  x_1y_n
           x_2y_1  x_2y_2  …  x_2y_n
           …
           x_my_1  x_my_2  …  x_my_n ]
Thus we have three complete probability schemes:
• P(X) = [P(x_i)]
• P(Y) = [P(y_k)]
• P(XY) = [P(x_i, y_k)]
A set of probable events for which Σ_{i=1}^{n} P(x_i) = 1 is known as a complete probability scheme.
Joint Entropy
We have three complete probability schemes, and naturally there will be three associated entropies:
  H(X) = −Σ_{j=1}^{m} p(x_j) log p(x_j),  where p(x_j) = Σ_{k=1}^{n} p(x_j, y_k)
  H(Y) = −Σ_{k=1}^{n} p(y_k) log p(y_k),  where p(y_k) = Σ_{j=1}^{m} p(x_j, y_k)
  H(XY) = −Σ_{j=1}^{m} Σ_{k=1}^{n} p(x_j, y_k) log p(x_j, y_k)
H(X) and H(Y) are the marginal entropies of X and Y respectively.
H(XY) is the joint entropy of X and Y.
Conditional Entropy
The conditional probability p(X|Y) is given by
  p(X|Y) = p(X, Y)/p(Y)
We know that y_k may occur in conjunction with x_1, x_2, …., x_m.
Thus [X|y_k] = [x_1|y_k, x_2|y_k, ……, x_m|y_k]
and the associated probability scheme is
  p(X|y_k) = [p(x_1|y_k), p(x_2|y_k), ……, p(x_m|y_k)]
           = [p(x_1, y_k)/p(y_k), p(x_2, y_k)/p(y_k), ……, p(x_m, y_k)/p(y_k)]    (1)
Now p(x_1, y_k) + p(x_2, y_k) + … + p(x_m, y_k) = p(y_k).
The sum of the elements in eq. (1) is therefore unity. Hence the probability scheme defined in eq. (1) is complete, and an entropy may be associated with it.
Conditional Entropy
Thus H(X|y_k) = −Σ_{j=1}^{m} [p(x_j, y_k)/p(y_k)] log [p(x_j, y_k)/p(y_k)]
             = −Σ_{j=1}^{m} p(x_j|y_k) log p(x_j|y_k)
We may take the average of this conditional entropy over all admissible values of y_k in order to obtain a measure of the average conditional entropy of the system:
  H(X|Y) = ⟨H(X|y_k)⟩ = Σ_{k=1}^{n} p(y_k) H(X|y_k)
         = −Σ_{j=1}^{m} Σ_{k=1}^{n} p(y_k) p(x_j|y_k) log p(x_j|y_k)
         = −Σ_{j=1}^{m} Σ_{k=1}^{n} p(x_j, y_k) log p(x_j|y_k)
Similarly H(Y|X) = −Σ_{j=1}^{m} Σ_{k=1}^{n} p(x_j, y_k) log p(y_k|x_j)
H(X|Y) and H(Y|X) are average conditional entropies, or simply conditional entropies.
Entropies associated with 2D Probability scheme
There are five entropies associated with a two-dimensional probability scheme.
They are H(X), H(Y), H(X,Y), H(X|Y) and H(Y|X).
Now let X represent the transmitter and Y the receiver; then the following entropies are defined:
H(X): average information per character at the transmitter; the entropy of the transmitter.
H(Y): average information per character at the receiver; the entropy of the receiver.
H(X,Y): average information per pair of transmitted and received characters, or the average uncertainty of the communication system as a whole.
Entropies associated with 2D Probability scheme
There are five entropies associated with a two dimensional
probability scheme are H(X), H(Y), H(X,Y), H(X|Y), H(Y|X)
H(X|Y): A received character y_k may be the result of the transmission of one of the x_j's, each with a given probability. The entropy associated with this probability scheme, averaged as y_k runs over all received symbols (i.e. the average of H(X|y_k)), is the conditional entropy H(X|Y). It is a measure of the information about the transmitter when it is known that Y is received.
H(Y|X): A transmitted character x_j may result in the reception of one of the y_k's, each with a given probability. The entropy associated with this probability scheme, averaged as x_j runs over all transmitted symbols (i.e. the average of H(Y|x_j)), is the conditional entropy H(Y|X). It is a measure of the information about the receiver when it is known that X is transmitted.
Entropies associated with 2D Probability scheme
• H(X) gives an indication of the probabilistic nature of the transmitter.
• H(Y) gives an indication of the probabilistic nature of the receiver.
• H(X|Y) indicates how well one can recover the transmitted symbol from the received symbol; it gives a measure of equivocation.
• H(Y|X) indicates how well one can recover the received symbol from the transmitted symbol; it gives a measure of error or noise.
Entropies associated with 2D Probability scheme
The relationships between the different entropies are found as follows:
  H(XY) = −Σ_{j=1}^{m} Σ_{k=1}^{n} p(x_j, y_k) log p(x_j, y_k)
        = −Σ_{j=1}^{m} Σ_{k=1}^{n} p(x_j, y_k) log [p(x_j|y_k) p(y_k)]
        = −Σ_{j=1}^{m} Σ_{k=1}^{n} [p(x_j, y_k) log p(x_j|y_k) + p(x_j, y_k) log p(y_k)]
        = H(X|Y) − Σ_{j=1}^{m} Σ_{k=1}^{n} p(x_j, y_k) log p(y_k)
        = H(X|Y) − Σ_{k=1}^{n} [Σ_{j=1}^{m} p(x_j, y_k)] log p(y_k)
        = H(X|Y) − Σ_{k=1}^{n} p(y_k) log p(y_k)
  H(XY) = H(X|Y) + H(Y); similarly it can be shown that H(XY) = H(Y|X) + H(X).
These relations can be verified numerically, as in the sketch below.
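The five entropies and the relation H(X,Y) = H(X|Y) + H(Y) can be checked from a joint probability matrix; the 2×2 matrix below is an assumed example, not taken from the slides.

```python
import math
log2 = math.log2

# Assumed joint probability matrix p(x_j, y_k); rows index x, columns index y
p_xy = [[0.30, 0.05],
        [0.10, 0.55]]

p_x = [sum(row) for row in p_xy]                                    # marginal p(x_j)
p_y = [sum(row[k] for row in p_xy) for k in range(len(p_xy[0]))]    # marginal p(y_k)

H_X  = -sum(p * log2(p) for p in p_x if p)
H_Y  = -sum(p * log2(p) for p in p_y if p)
H_XY = -sum(p * log2(p) for row in p_xy for p in row if p)

# Conditional entropies computed directly from their definitions
H_X_given_Y = -sum(p_xy[j][k] * log2(p_xy[j][k] / p_y[k])
                   for j in range(len(p_x)) for k in range(len(p_y)) if p_xy[j][k])
H_Y_given_X = -sum(p_xy[j][k] * log2(p_xy[j][k] / p_x[j])
                   for j in range(len(p_x)) for k in range(len(p_y)) if p_xy[j][k])

print(H_X, H_Y, H_XY, H_X_given_Y, H_Y_given_X)
assert abs(H_XY - (H_X_given_Y + H_Y)) < 1e-12    # H(X,Y) = H(X|Y) + H(Y)
assert abs(H_XY - (H_Y_given_X + H_X)) < 1e-12    # H(X,Y) = H(Y|X) + H(X)
```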
Mutual Information
• Mutual information deals with uncertainty during transmission, specifically the reduction in the uncertainty about the transmitted symbol brought about by the received symbol.
• We are interested in the transfer of information from the transmitter, through a channel, to a receiver.
• Prior to the reception of a message, the state of knowledge at the receiver about the transmitted signal x_j is the probability that x_j would be selected for transmission. This is known as the a-priori probability p(x_j).
Mutual Information

Source (x_j) → Channel → Destination (y_k)
• After the reception and selection of the symbol y_k, the state of knowledge concerning x_j is the conditional probability p(x_j|y_k), which is also known as the a-posteriori probability.
• Thus before y_k is received the uncertainty is −log p(x_j) (more uncertainty).
• After y_k is received the uncertainty becomes −log p(x_j|y_k) (less uncertainty).
• The information gained about x_j by the reception of y_k is the net reduction in its uncertainty, and is known as the mutual information I(x_j, y_k).
Mutual Information
• Thus I(x_j, y_k) = initial uncertainty − final uncertainty
                   = −log p(x_j) − [−log p(x_j|y_k)]
                   = log [p(x_j|y_k)/p(x_j)]
We know that p(x_j|y_k) = p(x_j, y_k)/p(y_k), so
  I(x_j, y_k) = log [p(x_j, y_k)/(p(x_j) p(y_k))] = log [p(y_k|x_j)/p(y_k)] = I(y_k, x_j)
Thus the mutual information is symmetric in x_j and y_k, that is I(x_j, y_k) = I(y_k, x_j).
Mutual Information
The average mutual information is the entropy corresponding to the mutual information, given by
  I(X; Y) = ⟨I(x_j; y_k)⟩ = Σ_{j=1}^{m} Σ_{k=1}^{n} p(x_j, y_k) I(x_j; y_k)
which results in
  I(X; Y) = H(X) − H(X|Y)
  I(X; Y) = H(X) + H(Y) − H(X,Y)
  I(X; Y) = H(Y) − H(Y|X)
I(X; Y) does not depend upon an individual symbol x_j or y_k; it is a property of the whole communication system. On the other hand, I(x_j; y_k) depends on the individual symbols x_j and y_k.
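Continuing with the same kind of (assumed) joint probability matrix, the mutual information can be obtained either from its definition or as I(X;Y) = H(X) − H(X|Y); both routes give the same value.

```python
import math
log2 = math.log2

p_xy = [[0.30, 0.05],          # assumed joint pmf p(x_j, y_k)
        [0.10, 0.55]]
p_x = [sum(row) for row in p_xy]
p_y = [sum(row[k] for row in p_xy) for k in range(2)]

# From the definition: I(X;Y) = sum p(x,y) * log2[ p(x,y) / (p(x) p(y)) ]
I_def = sum(p_xy[j][k] * log2(p_xy[j][k] / (p_x[j] * p_y[k]))
            for j in range(2) for k in range(2) if p_xy[j][k])

# Via entropies: I(X;Y) = H(X) - H(X|Y)
H_X = -sum(p * log2(p) for p in p_x if p)
H_X_given_Y = -sum(p_xy[j][k] * log2(p_xy[j][k] / p_y[k])
                   for j in range(2) for k in range(2) if p_xy[j][k])

print(I_def, H_X - H_X_given_Y)     # the two values agree
```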
Channel Capacity
The mutual information I(X; Y) is a measure of the average information per symbol transmitted in the system.
A suitable measure of the efficiency of transmission of information may be introduced by comparing the actual rate with the upper bound of the rate of information transmission for the given channel.
Claude Shannon introduced the concept of the channel capacity C, given by
  C = max I(X; Y) = max [H(X) − H(X|Y)]
The transmission efficiency or channel efficiency is defined as
  η = actual transinformation / maximum transinformation = I(X; Y)/max I(X; Y) = I(X; Y)/C
The redundancy of the channel is defined as
  R = 1 − η = [C − I(X; Y)]/C
Shannon Theorem

This theorem is concerned with the rate of information transmission over a communication channel. The term communication channel covers all the features and component parts of the transmission system which introduce noise or limit the bandwidth.
Shannon's theorem says that it is possible, in principle, to devise a means whereby a communication system will transmit information with an arbitrarily small probability of error, provided that the information rate R is less than or equal to a rate C (R ≤ C), the channel capacity.
Shannon Theorem
The statement of the Shannon’s theorem is as follows:
Given a source of M equally likely messages, with M >> 1, which is generating information at a rate R, and given a channel with a capacity C: if R ≤ C, there exists a coding technique such that the output of the source may be transmitted over the channel with a probability of error in receiving the message which may be made arbitrarily small.
The important feature of the theorem is that it indicates that for
R ≤ C, error free transmission is possible in the presence of noise.
Thus the channel capacity C of a communication channel is its
very important characteristic. It decides the maximum
permissible rate at which an error-free transmission of
information is possible through it
Effect of increase in Bit Error Rate (BER) (figure)
Shannon-Hartley Theorem
The noise characteristics of channels encountered in practice are generally Gaussian, so such channels are called Gaussian channels.
The results obtained for a Gaussian channel often provide a lower bound on the performance of a system with a non-Gaussian channel. Thus, if a particular encoder-decoder used with a Gaussian channel gives an error probability P_e, then with a non-Gaussian channel another encoder-decoder can be designed for which the error probability is less than P_e. Hence the study of the Gaussian channel is very important.
The Gaussian probability density is given by
  p(x) = (1/√(2πσ²)) e^(−x²/2σ²)
Shannon-Hartley Theorem
  p(x) = (1/√(2πσ²)) e^(−x²/2σ²)
  H(x) = −∫_{−∞}^{∞} p(x) log p(x) dx
But −log p(x) = log √(2πσ²) + (x²/2σ²) log e
Hence
  H(x) = ∫_{−∞}^{∞} p(x) log √(2πσ²) dx + ∫_{−∞}^{∞} p(x) (x²/2σ²) log e dx
On calculation this yields H(x) = log √(2πeσ²) bits/message.
Now if the signal is band-limited to ω Hz, it may be uniquely specified by taking 2ω samples per second (by the sampling theorem).
Shannon-Hartley Theorem
Hence the rate of information transmission is R(x) = 2ωH(x):
  R(x) = 2ω log √(2πeσ²) = ω log (√(2πeσ²))²
  R(x) = ω log(2πeσ²)
If p(x) describes band-limited Gaussian noise with an average noise power N, then
  R(x) = R(n) = ω log(2πeN),  where σ² = N
Now consider a continuous source transmitting information over a noisy channel. If the received signal is composed of a transmitted signal x and a noise n, then the joint entropy rate of the source and the noise is given by
  R(x, n) = R(x) + R(n|x)
Assuming that the transmitted signal and the noise are independent,
  R(x, n) = R(x) + R(n)
Shannon-Hartley Theorem
Assuming that the transmitted signal and the noise are independent,
  R(x, n) = R(x) + R(n)
Since the received signal y is the sum of the transmitted signal and the noise, we may equate
  H(x, y) = H(x, n)
So H(y) + H(x|y) = H(x) + H(n),
or R(y) + R(x|y) = R(x) + R(n).
The rate at which information is received from a noisy channel is
  R = R(x) − R(x|y)
and R = R(y) − R(n) bits/sec.
The channel capacity C in bits per second is
  C = max[R] bits/sec
  C = max[R(y) − R(n)] bits/sec
Shannon-Hartley Theorem
Since R(n) is assumed to be independent of x(t), maximizing R requires maximizing R(y).
Let the transmitted signal be limited to an average signal power S and let the channel noise be white Gaussian noise with an average power N within the bandwidth ω of the channel. The received signal then has an average power (S + N). R(y) is maximum when y(t) is also a Gaussian random process, because the noise is assumed to be Gaussian. Thus, from the equation for the noise, R(n) = ω log(2πeN), we get
  R(y) = ω log[2πe(S + N)] bits/sec
The channel capacity may now be obtained directly, since R(y) has been maximized. Thus
  C = max[R(y) − R(n)]
  C = ω log[2πe(S + N)] − ω log[2πeN] = ω log[(S + N)/N]
  C = ω log[1 + S/N] bits/sec
Markov Diagram and Entropy Calculations
When a statistical activity has a number of states, it can be represented with the help of a Markov diagram, also known as a Markov chain. In a transmission system discrete symbols are transmitted and received, with an associated symbol duration Ts. A Markov diagram is a graphical probabilistic representation of the set of activities: the nodes represent the states and the labelled arrows represent the transition probabilities between them.
(Figure: a two-state Markov chain with states 1 and 2; the transitions labelled A and B have probability 3/4 and the transitions labelled C have probability 1/4.)
Coding
• Coding is the most significant application of information theory. The main purpose of coding is to improve the efficiency of the communication system.
• Coding is a procedure for mapping a given set of messages [m_1, m_2, …. m_N] into a new set of encoded messages [C_1, C_2, …. C_N] in such a way that the transformation is one-to-one, i.e. for each message there is only one encoded message. This is source coding.
• By coding one seeks to improve the efficiency of transmission.
• It is also possible to devise codes for special purposes without relevance to the efficiency of transmission (such as secrecy or minimum probability of error); this is known as channel coding.
Coding Terminologies
a) Letter, symbol or character: Any individual member of the alphabet set.
b) Message or Word: A finite sequence of letters of an alphabet
c) Length of the word : The number of letters in the message
d) Coding, encoding, enciphering: A procedure for associating words constructed from a finite alphabet of a language with the given words of another language in a one-to-one manner.
e) Decoding , deciphering : The inverse operation of assigning words of the
second language corresponding to the given words in first language.
f) Uniquely decipherable or separable encoding and decoding : In this operation
the correspondence of all possible sequences of words between the two
languages is one-to-one when there is no space between the words.
g) Irreducibility or prefix property : when no encoded words can be obtained
from each other by the addition of more letters, the code is said to be irreducible
or of a prefix property.
When a code is irreducible, it is also uniquely decipherable ; but the reverse is not
true.
Coding Efficiency

Let M be the number of symbols in the encoding alphabet. Let there be N messages [m_1, m_2, …. m_N], with probabilities [P(m_1), P(m_2), ……, P(m_N)]. Let n_i be the number of symbols in the i-th message. The average length of a message, or average length per code word, is then given by
  L̄ = Σ_{i=1}^{N} n_i P(m_i) letters/message
L̄ should be minimum to have an efficient transmission.
The coding efficiency can then be defined as
  η = L̄_min / L̄
Coding Efficiency
Let H(x) be the entropy of the source in bits/message. Also let log M be the maximum average information associated with each letter, in bits/letter.
Hence the ratio H(x)/log M, having units of (bits/message)/(bits/letter) or letters/message, gives the minimum average number of letters per message:
  L̄_min = H(x)/log M
Hence the coding efficiency is
  η = L̄_min / L̄ = H(x)/(L̄ log M)
and the redundancy is 1 − η.
Coding Efficiency Example
Ex 1: Let M = [m_1, m_2, m_3, m_4] and P(M) = [1/2, 1/4, 1/8, 1/8].
Without coding, and considering a one-to-one correspondence (i.e. a noiseless channel), the efficiency is
  η = I(X; Y)/C = H(X)/log N
    = −[(1/2)log(1/2) + (1/4)log(1/4) + (1/8)log(1/8) + (1/8)log(1/8)] / log 4
    = (7/4)/2 = 7/8 = 87.5%
Now let us consider a binary code (0 and 1), so M = 2:
  message   code       length of code
  m1        C1 = 00    n1 = 2
  m2        C2 = 01    n2 = 2
  m3        C3 = 10    n3 = 2
  m4        C4 = 11    n4 = 2
Coding Efficiency Example
For this code
  L̄ = Σ_{k=1}^{4} n_k p_k = 2·0.5 + 2·0.25 + 2·0.125 + 2·0.125 = 2 letters/message
  η = H(x)/(L̄ log M) = (7/4)/(2 log 2) = 7/8 = 87.5%
Thus this coding procedure (fixed-length coding) does not improve the efficiency.
Coding Efficiency Example
Ex 2: However, let us consider another coding technique in which
  message   code        length of code
  m1        C1 = 0      n1 = 1
  m2        C2 = 10     n2 = 2
  m3        C3 = 110    n3 = 3
  m4        C4 = 111    n4 = 3
For this code
  L̄ = Σ_{k=1}^{4} n_k p_k = 1·0.5 + 2·0.25 + 3·0.125 + 3·0.125 = 7/4 letters/message
  η = H(x)/(L̄ log M) = (7/4)/((7/4) log 2) = 100%
Thus the second coding technique gives a better result.
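Both codes from Ex 1 and Ex 2 can be compared with a few lines (a sketch using the probabilities and code words given above; the source entropy is H = 7/4 bits/message).

```python
import math

probs = [0.5, 0.25, 0.125, 0.125]
H = sum(p * math.log2(1 / p) for p in probs)            # 7/4 bits/message

codes = {"fixed-length (Ex 1)":    ["00", "01", "10", "11"],
         "variable-length (Ex 2)": ["0", "10", "110", "111"]}

for name, code in codes.items():
    L_avg = sum(p * len(c) for p, c in zip(probs, code))    # average letters/message
    eta = H / (L_avg * math.log2(2))                        # efficiency with M = 2
    print(f"{name}: average length = {L_avg}, efficiency = {eta:.1%}")
# fixed-length: 2.0 letters/message, 87.5%; variable-length: 1.75 letters/message, 100.0%
```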
Shannon Fano Coding
This method of coding is directed towards constructing reasonably
efficient separable binary codes.
Let [X] be the ensemble of messages to be transmitted and [P] be
their corresponding probabilities.
The sequence C_k of binary digits of length n_k associated with each message x_k should fulfil the following conditions:
1) No sequence of the employed binary code words C_k can be obtained from another by adding more binary digits to the shorter sequence (the prefix property).
2) The transmission of an encoded message is reasonably efficient, i.e. 1 and 0 appear independently and with almost equal probabilities.
Shannon Fano Coding
The messages are first written in the order of non increasing
probabilities.
The message set is then partitioned into two most equi-probable
subsets [X1] and [X2].
A 0 is assigned to each message contained in one subset and a 1 is
assigned to the other subset.
The same process is repeated for subset [X1] and [X2] i.e. [X1] will
be partitioned into two subsets [X11] and [X12] and [X2] will be
partitioned into two subsets [X21] and [X22].
The codewords in [X11] will start with 00 ,[X12] will start with 01,
[X21] will start with 10 and [X22] will start with 11.
The procedure is continued until each subset contains only one message. Note that each digit 0 or 1 in each partitioning of the probability space appears with more or less equal probability and is independent of the previous or subsequent partitioning. Hence P(0) and P(1) are also more or less equal.
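A minimal sketch of the Shannon-Fano procedure just described, applied to the four-message source of Ex 2 above: sort the messages by non-increasing probability, split the list into two nearly equi-probable subsets, assign 0 to one subset and 1 to the other, and recurse. Tie-breaking at the split point varies between presentations; the choice below is one reasonable option.

```python
import math

def shannon_fano(items):
    """items: list of (message, probability) pairs. Returns dict message -> binary code."""
    items = sorted(items, key=lambda mp: mp[1], reverse=True)   # non-increasing probabilities
    codes = {msg: "" for msg, _ in items}

    def split(group):
        if len(group) <= 1:
            return
        total, running, best, cut = sum(p for _, p in group), 0.0, float("inf"), 1
        for i in range(1, len(group)):            # find the most equi-probable split point
            running += group[i - 1][1]
            if abs(total - 2 * running) < best:
                best, cut = abs(total - 2 * running), i
        upper, lower = group[:cut], group[cut:]
        for msg, _ in upper:
            codes[msg] += "0"                     # one subset gets a 0 ...
        for msg, _ in lower:
            codes[msg] += "1"                     # ... the other subset gets a 1
        split(upper)
        split(lower)

    split(items)
    return codes

msgs = [("m1", 0.5), ("m2", 0.25), ("m3", 0.125), ("m4", 0.125)]
code = shannon_fano(msgs)
print(code)                                        # {'m1': '0', 'm2': '10', 'm3': '110', 'm4': '111'}
avg_len = sum(p * len(code[m]) for m, p in msgs)
H = sum(p * math.log2(1 / p) for _, p in msgs)
print("efficiency =", H / avg_len)                 # 1.0 for this source
```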
DWC : QUIZ UNIT I
Max Marks: 20 Time: 30 Min

Qu 1: Which block in the digital communication system diagram is responsible for converting symbols into equivalent codes? (2)
Qu 2: Define entropy in a digital communication system and write the relationship when the probabilities of the events are known. (2)
Qu 3: If p_1 = 0.4 and p_2 = 0.6 in a two-symbol communication system with 10 messages per second, find the rate of information for this system. (3)
Qu 4: Illustrate Shannon's channel capacity theorem for a Gaussian channel. Also find the channel capacity for a channel with a signal-to-noise ratio of 1000 and a bandwidth of 3.3 kHz. (3)
Qu 5: Apply Shannon-Fano coding to a digital system with the following message ensemble:
[X] = [x1 x2 x3 x4 x5 x6 x7]
[P] = [0.2 0.08 0.12 0.4 0.04 0.08 0.08]
Find the coding efficiency. (10)
Thank You!
Send your questions on
pvingole@mitra.ac.in
