UNIT II: Error Controlling and Coding: Methods of controlling error, linear block codes, matrix description of linear block codes, error detection and error correction capabilities of linear block codes, single error correcting Hamming codes, cyclic codes, syndrome calculation, error detection, introduction to convolutional codes.
UNIT III: Spread Spectrum Signals: Model of spread spectrum communication system, direct sequence spread spectrum signals, generation of PN sequences, frequency hopping spread spectrum (slow and fast frequency hopping), comparison, basic principles of TDMA, FDMA, CDMA.
UNIT IV: Cellular Telephone Concepts : Introduction, mobile telephone service, cellular telephone architecture, frequency reuse, cell
splitting, sectoring, segmentation and dualization, cellular system topology, roaming and handoffs.
UNIT V: GSM and CDMA Technologies : Network Architecture, Protocol Architecture, GSM Channels,
Frame structure for GSM, Authentication & Security in GSM, Introduction to CDMA, Architecture of CDMA System, IS-95 CDMA Forward
& Reverse channel, Soft handoff.
• Channel capacity
• Coding efficiency
[Figure: Communication System block diagram, repeated with one element expanded at a time — Input (Information Signal) → Source and Input Transducer (mike, camera, sensors, data files) → Transmitter (various modulations: amplitude, frequency, phase, pulse, etc., using a carrier wave) → Communication Channel (various transmission channels: coaxial cable, open space, microwave, optical fibre cable; the two ends are distantly located) → Receiver (the corresponding demodulations: amplitude, frequency, phase, pulse) → Output Transducer (speakers, screens, LED displays, indicators) → Output (Output Signal).]
Elements of Digital Communication Systems
[Figure: the information source emits a sequence of symbols (e.g. ABCXYZ, 1206@#); the signal travels as an electrical / electromagnetic / optical waveform over a channel corrupted by noise and is delivered to the destination.]
Information Source
• Source output depends upon the nature of the source
• Analog – Microphone, Sensor output, Video signal
• Digital – TTY, File read from Memory, Position Encoders
• Analog signals can be converted into digital form by passing them through an ADC
• Discrete information sources are characterised by the following parameters:
1) Source Alphabets (Symbols / Letters)
2) Symbol Rate
3) Source Alphabet Probabilities
4) Probabilistic dependence of symbols in a sequence
• Source Entropy (H) in bits per symbol – the average information content per symbol in a long message
• Source Information Rate in bits per second
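As a quick illustration of the last two parameters, the sketch below (not from the slides; the alphabet, probabilities and symbol rate are assumed values) computes the source entropy in bits per symbol and the resulting information rate.

```python
import math

# Hypothetical discrete source: symbol probabilities and symbol rate (assumed values)
probabilities = {"A": 0.5, "B": 0.25, "C": 0.125, "D": 0.125}
symbol_rate = 1000                      # symbols per second

# Source entropy H: average information content per symbol, in bits/symbol
H = -sum(p * math.log2(p) for p in probabilities.values() if p > 0)

# Source information rate R = r * H, in bits per second
R = symbol_rate * H

print(f"H = {H:.3f} bits/symbol")       # H = 1.750 bits/symbol
print(f"R = {R:.1f} bits/sec")          # R = 1750.0 bits/sec
```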
Information Source
[Figure: the source's sequence of symbols (ABCXYZ, 1206@#) at the input of the digital communication system.]
Elements of Digital Communication Systems
[Figure: the source encoder converts the sequence of symbols into a binary stream (e.g. 10101 ...), which is then transmitted over the noisy channel.]
• A relatively small set of analog waveforms (often two) is selected for transmission over the line.
• The demodulator then has the conceptually simple task of distinguishing between these two waveforms of known shape.
Channel Encoder / Decoder
• Channel coding is used to increase the reliability and efficiency of transmission.
Channel Encoder / Decoder
• Error control is accomplished by the channel coding operation, which consists of systematically adding extra bits to the output of the source coder.
• These extra bits themselves carry no information, but they make it possible for the receiver to detect transmission errors and to correct some of the errors in the information bits.
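A minimal sketch of this idea (an illustration only, not the coding scheme used in the course) is a single even-parity bit appended to each block of information bits: the extra bit carries no new information, but it lets the receiver detect any single bit error.

```python
def add_parity(bits):
    """Channel encoding sketch: append one even-parity bit to the information bits."""
    return bits + [sum(bits) % 2]

def parity_ok(codeword):
    """Receiver side: even overall parity means no detectable error occurred."""
    return sum(codeword) % 2 == 0

info = [1, 0, 1, 1, 0]          # information bits from the source coder
tx = add_parity(info)           # transmitted codeword: [1, 0, 1, 1, 0, 1]

rx = tx.copy()
rx[2] ^= 1                      # the channel flips one bit

print(parity_ok(tx))            # True  : no error detected
print(parity_ok(rx))            # False : single error detected (not correctable)
```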
Communication Channel and Capacity
• The channel is the medium that interconnects the source and the destination, such as a pair of wires or free space.
• The channel has only a finite bandwidth B.
• The information-bearing signal often suffers amplitude and phase distortion as it travels over the channel.
• As the signal travels it also undergoes attenuation, which decreases the signal power.
• The signal is further corrupted by unwanted, unpredictable electrical signals called noise.
• Internal and/or external noise degrades the signal.
• Some of these bad effects can be removed at the receiver/destination, but some cannot be completely removed.
Communication Channel and Capacity
• A primary objective of communication system design is to suppress the bad effects of noise as much as possible.
• A key figure of merit is the signal-to-noise (S/N) ratio.
• Increasing the signal power is not a proper solution, because of the inability of system components and devices to handle large signals.
• Other important parameters of the channel are the usable bandwidth (B), the amplitude and phase response, and the statistical properties of the noise.
• If the parameters of the communication channel are known, then we can compute the channel capacity C:
$C = B \log_2(1 + S/N)$ bits/sec
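For example, a short computation using the Shannon capacity formula above (the bandwidth and S/N values are assumed purely for illustration):

```python
import math

B = 3000                              # usable bandwidth in Hz (assumed)
snr_db = 30                           # signal-to-noise ratio in dB (assumed)

snr = 10 ** (snr_db / 10)             # convert dB to a power ratio
C = B * math.log2(1 + snr)            # C = B * log2(1 + S/N)

print(f"C = {C:.0f} bits/sec")        # C = 29902 bits/sec
```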
Elements of Digital Communication Systems
[Figure: the channel encoder appends extra bits to the binary stream from the source encoder (e.g. 10101 → 10101101) before transmission over the noisy channel.]
Unit of Information
• Thus there is an inverse relationship between the probability of an event and the amount of information associated with it:
$I(x_i) = f\left(\frac{1}{P(x_i)}\right)$
• Now let there be another independent event $y_k$, so that
$I(y_k) = f\left(\frac{1}{P(y_k)}\right)$
• The probability of the joint event is $p(x_i, y_k) = p(x_i)\,p(y_k)$, with associated information content
$I(x_i, y_k) = f\left(\frac{1}{p(x_i, y_k)}\right) = f\left(\frac{1}{p(x_i)\,p(y_k)}\right)$
• The total information $I(x_i, y_k)$ must be equal to the sum of the individual informations $I(x_i)$ and $I(y_k)$.
Unit of Information
• $I(x_i, y_k) = f\left(\frac{1}{p(x_i, y_k)}\right) = f\left(\frac{1}{p(x_i)\,p(y_k)}\right)$ thus represents the information associated with the joint occurrence of two independent events.
• Now we would like to know which function on the right-hand side of the above equation converts the multiplication into an addition.
• The function which converts multiplication into addition is the logarithm. Thus
$I(x_i, y_k) = \log\frac{1}{p(x_i)\,p(y_k)} = \log\frac{1}{p(x_i)} + \log\frac{1}{p(y_k)} = I(x_i) + I(y_k)$
• Hence the basic equation defining the amount of information is
$I(x_i) = \log\frac{1}{p(x_i)} = -\log p(x_i)$; for a binary system the logarithm is taken to base 2 ($\log_2$).
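This defining equation can be checked numerically; the short sketch below (with made-up probabilities) also confirms that the information of two independent events adds.

```python
import math

def info(p):
    """Amount of information I(x) = -log2 p(x), in bits."""
    return -math.log2(p)

p_x, p_y = 0.5, 0.25        # assumed probabilities of two independent events

print(info(p_x))            # 1.0 bit
print(info(p_y))            # 2.0 bits
print(info(p_x * p_y))      # 3.0 bits = I(x) + I(y), as derived above
```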
Entropy
• A communication system is meant to deal with all possible messages, so the quantity of interest is the average information per message: the total information $I_t$ of a long sequence of $L$ messages divided by $L$.
$H = \frac{I_t}{L} = p_1 \log\frac{1}{p_1} + p_2 \log\frac{1}{p_2} + \dots + p_M \log\frac{1}{p_M} = \sum_{k=1}^{M} p_k \log\frac{1}{p_k} = -\sum_{k=1}^{M} p_k \log p_k$
Entropy
• The average information per individual message is known as the entropy.
The average information per message, the entropy H, is given by
$H = \sum_{k=1}^{M} p_k \log\frac{1}{p_k} = -\sum_{k=1}^{M} p_k \log p_k$
If there is only a single possible message, i.e. M = 1 and $p_k = p_1 = 1$, then
$H = \sum_{k=1}^{M} p_k \log\frac{1}{p_k} = p_1 \log\frac{1}{p_1} = 1 \cdot \log\frac{1}{1} = 0$
Now let only one message out of the M messages have probability 1 and all the others 0. In that case
$H = \sum_{k=1}^{M} p_k \log\frac{1}{p_k} = p_1 \log\frac{1}{p_1} + \lim_{p \to 0}\left[\, p \log\frac{1}{p} + p \log\frac{1}{p} + \cdots \right] = 1 \cdot \log\frac{1}{1} + 0 = 0$
Thus if all probabilities are zero except for one, which ought to be unity, the entropy is zero. In all other cases the entropy is greater than zero.
Entropy
For a binary system (M = 2) the entropy is
$H = p_1 \log\frac{1}{p_1} + p_2 \log\frac{1}{p_2}$
Let $p_1 = p$; then $p_2 = 1 - p_1 = 1 - p = q$, and
$H = p \log\frac{1}{p} + (1 - p) \log\frac{1}{1 - p} = p \log\frac{1}{p} + q \log\frac{1}{q} = H(p) = H(q)$
Entropy is measured in bits per message. It is found that the entropy is maximum when the messages are equally likely.
Entropy
Important properties of entropy can be summarized as:
1) $\log M \ge H(X) \ge 0$
2) $H(X) = 0$ if all probabilities are zero except for one, which must be unity.
3) $H(X) = \log M$ if all probabilities are equal, so that $p(x_i) = p_i = \frac{1}{M}$ for all i.
H under different cases for M=2
• Case I: $p_1 = 0.01$, $p_2 = 0.99$, H = 0.08
• Case II: $p_1 = 0.4$, $p_2 = 0.6$, H = 0.97
• Case III: $p_1 = 0.5$, $p_2 = 0.5$, H = 1.00
• Entropy is smaller when the uncertainty is smaller and larger when the uncertainty is larger.
• Thus we can say that entropy is a measure of uncertainty.
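These three cases can be reproduced with a short script using the binary entropy formula above:

```python
import math

def binary_entropy(p):
    """H(p) = p*log2(1/p) + (1-p)*log2(1/(1-p)), in bits per message."""
    if p in (0.0, 1.0):
        return 0.0
    q = 1 - p
    return p * math.log2(1 / p) + q * math.log2(1 / q)

for p1 in (0.01, 0.4, 0.5):
    print(f"p1 = {p1:4.2f}  ->  H = {binary_entropy(p1):.2f} bits")
# p1 = 0.01  ->  H = 0.08
# p1 = 0.40  ->  H = 0.97
# p1 = 0.50  ->  H = 1.00   (maximum: equally likely messages)
```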
Rate of Information
If a message source generates messages at the rate of r messages per second, the rate of information R is defined as the average number of bits of information per second.
H is the average number of bits of information per message, hence
R = rH bits/sec
Ex: Consider two sources of equal entropy H, generating $r_1$ and $r_2$ messages per second respectively. The first source transmits information at a rate $R_1 = r_1 H$ and the second at a rate $R_2 = r_2 H$.
Now if $r_1 > r_2$ then $R_1 > R_2$; thus in a given period more information is transmitted from the first source than from the second, placing a greater demand on the communication channel. Hence a source is not described by its entropy alone but also by its rate of information.
Sometimes R is referred to as the bits/sec entropy and H as the bits/message entropy.
Rate of Information
Ex: An event has six possible outcomes with the probabilities $p_1 = \frac{1}{2}$, $p_2 = \frac{1}{4}$, $p_3 = \frac{1}{8}$, $p_4 = \frac{1}{16}$, $p_5 = \frac{1}{32}$, $p_6 = \frac{1}{32}$. Find the entropy of the system. Also find the rate of information if there are 16 outcomes per second.
Solution: The entropy is
$H = \sum_{k=1}^{6} p_k \log\frac{1}{p_k} = \frac{1}{2}\log 2 + \frac{1}{4}\log 4 + \frac{1}{8}\log 8 + \frac{1}{16}\log 16 + \frac{1}{32}\log 32 + \frac{1}{32}\log 32$
$H = \frac{1}{2} + \frac{1}{4}\cdot 2 + \frac{1}{8}\cdot 3 + \frac{1}{16}\cdot 4 + \frac{1}{32}\cdot 5 + \frac{1}{32}\cdot 5 = \frac{16 + 16 + 12 + 8 + 5 + 5}{32} = \frac{62}{32} = \frac{31}{16}$ bits/message
Now r = 16 outcomes/sec, hence the rate of information is
$R = rH = 16 \cdot \frac{31}{16} = 31$ bits/sec
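The same result can be verified numerically (a sketch of the calculation above):

```python
import math

p = [1/2, 1/4, 1/8, 1/16, 1/32, 1/32]         # outcome probabilities from the example
r = 16                                        # outcomes per second

H = sum(pk * math.log2(1 / pk) for pk in p)   # entropy in bits/message
R = r * H                                     # information rate in bits/sec

print(H)    # 1.9375  (= 31/16 bits/message)
print(R)    # 31.0    bits/sec
```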
Joint Entropy
• Single probability scheme: one-dimensional probability and its applications in communication engineering
• Two-dimensional probability and its applications in communication engineering
Let there be two finite discrete sample spaces $S_1$ and $S_2$, and let their product space be $S = S_1 S_2$.
The set of events in $S_1$ is given by $[X] = [x_1, x_2, \dots, x_m]$ and that of $S_2$ by $[Y] = [y_1, y_2, \dots, y_n]$.
Each event $x_i$ of $S_1$ may occur in conjunction with any event $y_k$ of $S_2$.
Joint Entropy
Hence the complete set of events in $S = S_1 S_2$ is
$[XY] = \begin{bmatrix} x_1 y_1 & x_1 y_2 & \cdots & x_1 y_n \\ x_2 y_1 & x_2 y_2 & \cdots & x_2 y_n \\ \vdots & & & \vdots \\ x_m y_1 & x_m y_2 & \cdots & x_m y_n \end{bmatrix}$
with the probability schemes
• $P(X) = [P(x_i)]$
• $P(Y) = [P(y_k)]$
• $P(XY) = [P(x_i, y_k)]$
$H(X) = -\sum_{j=1}^{m} p(x_j) \log p(x_j)$, where $p(x_j) = \sum_{k=1}^{n} p(x_j, y_k)$
$H(XY) = -\sum_{j=1}^{m}\sum_{k=1}^{n} p(x_j, y_k) \log p(x_j, y_k)$
H(X) and H(Y) are the marginal entropies of X and Y respectively.
H(XY) is the joint entropy of X and Y.
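A small numerical sketch of these definitions (the joint probability matrix is made up for illustration) computes the marginal and joint entropies from p(x_j, y_k):

```python
import math

# Assumed joint probability matrix p(x_j, y_k): rows are x_j, columns are y_k
p_xy = [[0.25, 0.25],
        [0.25, 0.25]]

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

p_x = [sum(row) for row in p_xy]                  # marginal p(x_j) = sum_k p(x_j, y_k)
p_y = [sum(col) for col in zip(*p_xy)]            # marginal p(y_k)

H_X = entropy(p_x)
H_Y = entropy(p_y)
H_XY = entropy([p for row in p_xy for p in row])  # joint entropy H(XY)

print(H_X, H_Y, H_XY)   # 1.0 1.0 2.0 (here X and Y are independent, so H(XY) = H(X) + H(Y))
```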
Conditional Entropy
The conditional probability p(X|Y) is given by
$p(X|Y) = \frac{p(X, Y)}{p(Y)}$
We know that $y_k$ may occur in conjunction with $x_1, x_2, \dots, x_m$. Thus
$[X|y_k] = \left[\frac{x_1}{y_k}, \frac{x_2}{y_k}, \dots, \frac{x_m}{y_k}\right]$
and the associated probability scheme is
$p(X|y_k) = \left[p\left(\frac{x_1}{y_k}\right), p\left(\frac{x_2}{y_k}\right), \dots, p\left(\frac{x_m}{y_k}\right)\right] = \left[\frac{p(x_1, y_k)}{p(y_k)}, \frac{p(x_2, y_k)}{p(y_k)}, \dots, \frac{p(x_m, y_k)}{p(y_k)}\right]$   (1)
Now $p(x_1, y_k) + p(x_2, y_k) + \dots + p(x_m, y_k) = p(y_k)$, so the sum of the elements in eq. (1) is unity. Hence the probability scheme defined in eq. (1) is complete, and therefore an entropy may be associated with it.
Conditional Entropy
Thus
$H(X|y_k) = -\sum_{j=1}^{m} \frac{p(x_j, y_k)}{p(y_k)} \log \frac{p(x_j, y_k)}{p(y_k)} = -\sum_{j=1}^{m} p(x_j|y_k) \log p(x_j|y_k)$
We may take the average of this conditional entropy for all admissible values of $y_k$ in order to obtain a measure of the average conditional entropy of the system:
$H(X|Y) = \overline{H(X|y_k)} = \sum_{k=1}^{n} p(y_k)\, H(X|y_k) = -\sum_{j=1}^{m}\sum_{k=1}^{n} p(y_k)\, p(x_j|y_k) \log p(x_j|y_k) = -\sum_{j=1}^{m}\sum_{k=1}^{n} p(x_j, y_k) \log p(x_j|y_k)$
Similarly
$H(Y|X) = -\sum_{j=1}^{m}\sum_{k=1}^{n} p(x_j, y_k) \log p(y_k|x_j)$
H(X|Y) and H(Y|X) are average conditional entropies, or simply conditional entropies.
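The averaging over $y_k$ can be seen directly in a short sketch (the joint distribution below is an assumed example in which X and Y are dependent):

```python
import math

# Assumed joint distribution p(x_j, y_k): rows are x_j, columns are y_k
p_xy = [[0.40, 0.10],
        [0.10, 0.40]]

p_y = [sum(col) for col in zip(*p_xy)]    # marginal p(y_k) = [0.5, 0.5]

# H(X|Y) = - sum_j sum_k p(x_j, y_k) * log2 p(x_j|y_k), with p(x_j|y_k) = p(x_j, y_k)/p(y_k)
H_X_given_Y = -sum(
    p_xy[j][k] * math.log2(p_xy[j][k] / p_y[k])
    for j in range(2) for k in range(2) if p_xy[j][k] > 0
)

print(round(H_X_given_Y, 4))   # 0.7219 bits, i.e. less than H(X) = 1 bit
```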
Entropies associated with 2D Probability scheme
There are five entropies associated with a two-dimensional probability scheme.
They are H(X), H(Y), H(XY), H(X|Y) and H(Y|X).
Now let X represent the transmitter and Y the receiver; then the following entropies are defined:
• H(Y|X) indicates how well one can recover the received symbol from the transmitted symbol. It gives a measure of error or noise.
Entropies associated with 2D Probability scheme
The relationship between the different entropies is found as follows:
$H(XY) = -\sum_{j=1}^{m}\sum_{k=1}^{n} p(x_j, y_k) \log p(x_j, y_k) = -\sum_{j=1}^{m}\sum_{k=1}^{n} p(x_j, y_k) \log\left[p(x_j|y_k)\, p(y_k)\right]$
$= -\sum_{j=1}^{m}\sum_{k=1}^{n}\left[p(x_j, y_k) \log p(x_j|y_k) + p(x_j, y_k) \log p(y_k)\right] = H(X|Y) - \sum_{j=1}^{m}\sum_{k=1}^{n} p(x_j, y_k) \log p(y_k)$
$= H(X|Y) + H(Y)$
Mutual Information
[Figure: the source emits symbol $x_j$; the destination receives symbol $y_k$.]
We know that $p(x_j|y_k) = \frac{P(x_j, y_k)}{p(y_k)}$, so
$I(x_j; y_k) = \log\frac{p(x_j, y_k)}{p(x_j)\, p(y_k)} = \log\frac{p(y_k|x_j)}{p(y_k)} = I(y_k; x_j)$
Thus the mutual information is symmetric in $x_j$ and $y_k$, that is
$I(x_j; y_k) = I(y_k; x_j)$
Mutual Information
The average mutual information, i.e. the entropy corresponding to the mutual information, is given by
$I(X;Y) = \overline{I(x_j; y_k)} = \sum_{j=1}^{m}\sum_{k=1}^{n} p(x_j, y_k)\, I(x_j; y_k)$
which results in
$I(X;Y) = H(X) - H(X|Y)$
$I(X;Y) = H(X) + H(Y) - H(XY)$
$I(X;Y) = H(Y) - H(Y|X)$
$I(X;Y)$ does not depend upon an individual symbol $x_j$ or $y_k$; it is a property of the whole communication system.
On the other hand, $I(x_j; y_k)$ depends on the individual symbols $x_j$ and $y_k$.
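These three relations can be cross-checked numerically with an assumed joint distribution, as in the earlier sketches:

```python
import math

p_xy = [[0.40, 0.10],    # assumed joint distribution p(x_j, y_k)
        [0.10, 0.40]]

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

p_x = [sum(row) for row in p_xy]
p_y = [sum(col) for col in zip(*p_xy)]
H_X, H_Y = H(p_x), H(p_y)
H_XY = H([p for row in p_xy for p in row])

H_X_given_Y = H_XY - H_Y      # H(X|Y) = H(XY) - H(Y)
H_Y_given_X = H_XY - H_X      # H(Y|X) = H(XY) - H(X)

# The three expressions for I(X;Y) agree (about 0.278 bits here):
print(H_X - H_X_given_Y)
print(H_X + H_Y - H_XY)
print(H_Y - H_Y_given_X)
```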
Channel Capacity
Mutual information $I(X;Y)$ gives a measure of the average information per symbol transmitted in the system.
A suitable measure for the efficiency of transmission of information may be introduced by comparing the actual rate with the upper bound of the rate of information transmission for a given channel.
Claude Shannon introduced the concept of channel capacity C. It is given by
$C = \max I(X;Y) = \max\left[H(X) - H(X|Y)\right]$
The transmission efficiency, or channel efficiency, is defined as
$\eta = \frac{\text{actual transinformation}}{\text{maximum transinformation}} = \frac{I(X;Y)}{\max I(X;Y)} = \frac{I(X;Y)}{C}$
The redundancy of the channel is defined as
$R = 1 - \eta = \frac{C - I(X;Y)}{C}$
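As a concrete sketch (the channel is an assumed binary symmetric channel with crossover probability 0.1, and the maximization over input distributions is done by a simple sweep rather than analytically), the capacity, efficiency and redundancy can be computed as follows:

```python
import math

def H2(p):
    """Binary entropy function, in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def transinformation(px, e):
    """I(X;Y) for a binary symmetric channel: P(X=1) = px, crossover probability e."""
    py1 = px * (1 - e) + (1 - px) * e     # output distribution P(Y=1)
    return H2(py1) - H2(e)                # I(X;Y) = H(Y) - H(Y|X), with H(Y|X) = H2(e)

e = 0.1                                   # assumed crossover probability
C = max(transinformation(px / 1000, e) for px in range(1001))   # sweep over inputs

I_actual = transinformation(0.3, e)       # an assumed, non-optimal input distribution
eta = I_actual / C                        # transmission (channel) efficiency
redundancy = 1 - eta

print(round(C, 3))                           # 0.531 bits/symbol (= 1 - H2(0.1), at P(X=1) = 0.5)
print(round(eta, 3), round(redundancy, 3))   # 0.858 0.142
```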
Shannon Theorem
[Figure: channel / Markov diagram — nodes 1 (A) and 2 (B), with branch probabilities 3/4 and 1/4 (the 1/4 branches labelled C).]
Markov Diagram and Entropy Calculations
Example :
Coding
• Coding is the most significant application of information theory. The main purpose of coding is to improve the efficiency of the communication system.
• Coding is a procedure for mapping a given set of messages [$m_1, m_2, \dots, m_N$] into a new set of encoded messages [$C_1, C_2, \dots, C_N$] in such a way that the transformation is one-to-one, i.e. for each message there is only one encoded message. This is source coding.
• By coding one seeks to improve the efficiency of transmission.
• It is also possible to devise codes for special purposes without relevance to the efficiency of transmission (such as secrecy or a minimum probability of error); this is known as channel coding.
Coding Terminologies
a) Letter, symbol or character: Any individual member of the alphabet set.
b) Message or Word: A finite sequence of letters of an alphabet
c) Length of the word : The number of letters in the message
d) Coding, encoding, enciphering: A procedure for associating words constructed from a finite alphabet of one language with the given words of another language in a one-to-one manner.
e) Decoding , deciphering : The inverse operation of assigning words of the
second language corresponding to the given words in first language.
f) Uniquely decipherable or separable encoding and decoding : In this operation
the correspondence of all possible sequences of words between the two
languages is one-to-one when there is no space between the words.
g) Irreducibility or prefix property: When no encoded word can be obtained from another by the addition of more letters, the code is said to be irreducible, or to have the prefix property.
When a code is irreducible it is also uniquely decipherable, but the reverse is not true.
Coding Efficiency
$\eta = \frac{\bar{L}_{min}}{\bar{L}}$
Coding Efficiency
Let H(x) be the entropy of the source in bits/message. Also let log M be the maximum average information associated with each letter, in bits/letter.
Hence the ratio $\frac{H(x)}{\log M}$, having the unit (bits/message)/(bits/letter) = letters/message, gives the minimum average number of letters per message:
$\bar{L}_{min} = \frac{H(x)}{\log M}$
Hence the coding efficiency is
$\eta = \frac{\bar{L}_{min}}{\bar{L}} = \frac{H(x)}{\bar{L} \log M}$
and the redundancy is $1 - \eta$.
Coding Efficiency Example
Ex 1: Let M = [$m_1, m_2, m_3, m_4$] and P(M) = [1/2, 1/4, 1/8, 1/8].
Without coding, and considering a one-to-one correspondence (i.e. a noiseless channel), the efficiency is
$\eta = \frac{I(X;Y)}{C} = \frac{H(X)}{\log N} = \frac{-\left[\frac{1}{2}\log\frac{1}{2} + \frac{1}{4}\log\frac{1}{4} + \frac{1}{8}\log\frac{1}{8} + \frac{1}{8}\log\frac{1}{8}\right]}{\log 4} = \frac{7/4}{2} = \frac{7}{8} = 87.5\%$
Now let us consider a binary code (0 and 1), so M = 2:
message    code        length of code
m1         C1 = 00     n1 = 2
m2         C2 = 01     n2 = 2
m3         C3 = 10     n3 = 2
m4         C4 = 11     n4 = 2
Coding Efficiency Example
For this code the average length is
$\bar{L} = \sum_{k=1}^{4} p_k n_k = \frac{1}{2}\cdot 2 + \frac{1}{4}\cdot 2 + \frac{1}{8}\cdot 2 + \frac{1}{8}\cdot 2 = 2$ letters/message
so the coding efficiency is
$\eta = \frac{H(x)}{\bar{L} \log M} = \frac{7/4}{2 \log 2} = \frac{7}{8} = 87.5\%$
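A short sketch (following the coding efficiency formula η = H / (L̄ log M) from the slides) reproduces the numbers of this example:

```python
import math

p = [1/2, 1/4, 1/8, 1/8]        # message probabilities P(M)
lengths = [2, 2, 2, 2]          # codeword lengths n_k of the fixed-length binary code
M = 2                           # size of the code alphabet (binary)

H = sum(pk * math.log2(1 / pk) for pk in p)          # 1.75 bits/message
L_avg = sum(pk * nk for pk, nk in zip(p, lengths))   # average code length = 2 letters/message

eta = H / (L_avg * math.log2(M))                     # coding efficiency
print(H, L_avg, eta)                                 # 1.75 2.0 0.875  (87.5 %)
```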