
KL University, Vaddeswaram

Dept. of ECE

B. Tech (All branches), IInd year, Sem-1

Signal Processing-13-ES205: 2015-16

Project Based Labs

Prepared by

Dr. M. Venu Gopala Rao

Professor

Dept. of ECE

Dr. M. Venu Gopala Rao, Professor, Dept. of ECE, KL University, A.P., India.

PROJECT BASED LABS: INTEGRATING THEORY WITH PRACTICE

Introduction

Project-based learning introduces students to a discipline through the process of conceiving, designing, and implementing activities that integrate theory with practice. This approach focuses on technology-enabled learning spaces that are optimized for interpersonal small-group interaction as well as for using practical tools to implement project work. Project Based Learning is a transformative teaching method for engaging all students in meaningful learning and developing the competencies of critical thinking/problem solving, collaboration, creativity and communication.

The need to move towards project based labs is due to the following reasons:

- Today's students, more than ever, often find college boring and meaningless. In Project Based Learning, students are active, not passive: a project engages their hearts and minds, and provides real-world relevance for learning.

- After completing a project, students remember what they practiced and retain it for a longer period. Because of this, students who gain content knowledge with Project Based Learning are better able to apply what they know and can do to new situations.

- In the 21st-century workplace, success requires more than basic knowledge and skills. In Project Based Learning, students not only understand content more deeply but also learn how to solve problems, work collaboratively, communicate ideas, and be creative innovators.

- The Common Core and other present-day standards emphasize real-world application of knowledge and skills, and the development of 21st-century competencies such as critical thinking, communication in a variety of media, and collaboration. Project Based Learning provides an effective way to address such standards.

- Modern technology, which students use so much in their lives, is a perfect fit with project based labs. With technology, teachers and students can connect with experts, partners, and audiences around the world, and use tech tools to find resources and information, create products, and collaborate more effectively.


- Project Based Learning allows teachers to work more closely with active, engaged students doing high-quality, meaningful work, and in many cases to rediscover the joy of learning alongside their students.

In Project Based Learning, students go through an extended process of inquiry in response to a complex question, problem, or challenge. Essential requirements of a project based lab include:

Significant Content - At its core, the project should focus on teaching students important knowledge and skills, derived from standards and key concepts at the heart of academic subjects.

Competencies - Students should build competencies valuable for today's world, such as critical thinking/problem solving, collaboration, communication, and creativity/innovation, which are taught and assessed.

In-Depth Inquiry - Students should be engaged in a rigorous, extended process of asking questions, using resources, and developing answers.

Driving Question - Project work should be focused on an open-ended question that students understand and find intriguing, which captures their task or frames their exploration.

Need to Know - Students should see the need to gain knowledge, understand concepts, and apply skills in order to answer the Driving Question and create project products, beginning with an Entry Event that generates interest and curiosity.

Voice and Choice - Students are allowed to make some choices about the products to be created, how they work, and how they use their time, guided by the teacher.

Revision and Reflection - The project should include processes for students to use feedback to consider additions and changes that lead to high-quality products, and to think about what and how they are learning.

Public Audience - Students should be able to present their work to other people, beyond their classmates and teacher.

Execution of Project Based Lab in KL University:

Every B. Tech program is designed to achieve certain outcomes that map to the Program Educational Objectives set for all the B. Tech programs taken together. Some of the outcomes set for the B. Tech programs in KLU are to be achieved through the Project Based Lab component. The execution of a project based lab occurs in two phases:


Phase I: Experiments Based: During the first six weeks, students work out and execute the list of programs/experiments decided by the instructor. This list of programs/experiments must cover all the basics required to implement any project along the lines of the concerned course.

Phase II: Project Based: After six weeks, students work on a project in the concerned course designed by the faculty, or may be allowed to implement their own idea in the concerned course.


Contents: SP Projects 1 to 25, pages 8 to 55. (Most project titles could not be recovered from the source; the final entry is Project 25, "Estimating the Time Delay using Correlation for Audio Signals and its Echoes", page 55.)

Additional Recommended Projects

26. Cepstral Analysis of Speech signals.

27. Linear Prediction Analysis.

28. Identifying vowels in the speech signal.

29. Cepstral-Based Formant Estimation

30. Design and implementation of Anti-aliasing filter.

31. Log Harmonic Product Spectrum Pitch Detector

32. LPC-Based Pitch Detector

33. LPC-Based Formant Estimation

34. Filter the Speech Signal in Order to Eliminate Extraneous Low and High Frequency Components

35. Eliminating the Frequency Conversion Components in Analog Signals using Digital FIR Filter.

36. Short Term Frequency domain Processing of Speech signals.

37. Wideband and narrowband speech spectrograms for a user-designated speech file

38. Simulation of ECG signal

39. Spectral Smoothing

40. Autocorrelation Estimates for speech signals

41. Sampling Rate Conversion Between Typical Speech and Audio Rates

42. Correlation Techniques to Process Noise-Corrupted Signals

43. Echo and Reverberation.

44. Time-domain scrambling of audio signals

45. Non-Stationary Nature of Speech Signal.

46. Response of Composite Discrete-Time LTI Systems.

47. Even and Odd Components of Spectrum for an Arbitrary Sequence.

48. Real and Imaginary Components of Spectrum for an Arbitrary Sequences.

49. Frequency Response of Composite DT Systems with Time Reversal Sequences.

50. Sampling and Reconstruction of Analog Signals with Aliasing.

51. Response of LTI systems for a Weighted and Delayed Input Sequences


References:

1. John G. Proakis and Dimitris G. Manolakis, Digital Signal Processing: Principles, Algorithms and Applications, Pearson Prentice Hall, 2011.
2. Alan V. Oppenheim and Ronald W. Schafer, Discrete-Time Signal Processing, Pearson Education Signal Processing Series, 2002.
3. Vinay K. Ingle and John G. Proakis, Essentials of Digital Signal Processing Using MATLAB, Third Edition, Cengage Learning, 2012.
4. Li Tan, Digital Signal Processing, Academic Press, Elsevier Inc., 2008.
5. Sanjit K. Mitra, Digital Signal Processing: A Computer-Based Approach, Third Edition, Tata McGraw-Hill, 2010.
6. L. R. Rabiner and R. W. Schafer, Digital Processing of Speech Signals, Prentice-Hall Signal Processing Series, Pearson Education, India.
7. E. S. Gopi, Digital Speech Processing Using Matlab, Springer, India, 2014, ISBN: 978-81-322-1676-6.
8. http://cronos.rutgers.edu/~lrr/
9. www.mathworks.in/matlabcentral/fileexchange/
10. http://cvsp.cs.ntua.gr/~nassos/resources/speech_course_2004/
11. Voicebox Toolbox.


SP Project-1

Objectives:
(a) Understand the basic theory of a RADAR system.
(b) Implement auto-correlation and cross-correlation for a Radar system in a noisy environment.
(c) Measure the time delay by computing the cross-correlation.
(d) Calculate the distance of the target.

Cross-correlation is a measure of the similarity between two signals as a function of the time delay applied to one of them. Auto-correlation is the cross-correlation of a signal with itself. Correlation functions are used in many applications in communication systems. For example, in Radar and Sonar systems, correlation can be used to detect the delay between the transmitted and received signals; hence the distance between the target and the Sonar/Radar can be determined.

Let x_a(t) be the transmitted signal and y_a(t) be the received signal in a RADAR system, where

y_a(t) = a·x_a(t − t_d) + v_a(t)

and v_a(t) is additive random noise. The signals x_a(t) and y_a(t) are sampled in the receiver, according to the sampling theorem, and are processed digitally to determine the time delay and hence the distance of the object. The resulting discrete-time signals are


x[n] = x_a(nT)
y[n] = y_a(nT) = a·x_a(nT − t_d) + v_a(nT) = a·x[n − D] + v[n]

where D = t_d/T is the delay expressed in samples.

Task1: Explain how the delay D can be measured by computing the cross-correlation r_xy(l).

Task2: Let x[n] be the 13-point Barker sequence

x[n] = [+1, +1, +1, +1, +1, −1, −1, +1, +1, −1, +1, −1, +1]

and let v[n] be a Gaussian random sequence with zero mean and variance σ² = 0.01. Write a program that generates the sequence y[n], 0 ≤ n ≤ 199, for a given delay D.

Task3: Compute and plot the cross-correlation r_xy(l) for the ranges (i) 0 ≤ l ≤ 59 and (ii) −30 ≤ l ≤ 30. Use the plots to estimate the value of the delay D in both cases; compare and comment.

Task4: Repeat Task2 and Task3 for σ² = 0.1 and σ² = 1.
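The manual's tasks assume MATLAB; as an illustrative Python/NumPy stand-in, the sketch below walks through Tasks 2-4: build the Barker sequence, delay and attenuate it inside a noisy record, and locate the delay as the peak of the cross-correlation. The true delay D = 20, the attenuation a = 0.9 and the random seed are demo assumptions, not values fixed by the project.

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)  # Barker-13
D, a, sigma2 = 20, 0.9, 0.01     # assumed delay, attenuation, noise variance

y = np.zeros(200)
y[D:D + len(x)] += a * x                        # attenuated, delayed copy of x
y += rng.normal(0.0, np.sqrt(sigma2), y.size)   # additive Gaussian noise

# r_xy(l) = sum_n x[n] * y[n + l], computed for lags l = 0..59
lags = np.arange(60)
rxy = np.array([np.dot(x, y[l:l + len(x)]) for l in lags])

D_hat = int(lags[np.argmax(rxy)])   # the peak of r_xy(l) marks the delay
print(D_hat)
```

For Task5 the same peak location gives the round-trip delay t_d = D/Fs, and the target distance follows as d = c·t_d/2, with c the propagation speed.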

Task5: Let x[n] be the 32-point pseudo-random noise (m-)sequence

x[n] = [1,-1,-1,1,-1,1,1,-1,-1,1,1,1,1,1,-1,-1,-1,1,1,-1,1,1,1,-1,1,-1,1,-1,-1,-1,-1,-1]

and let the received sequence y[n] be

y[n] = [0.4923 0.6947 0.9727 0.3278 0.8378 0.7391 0.9542 0.0319 0.3569 0.6627
1.2815 -0.7696 -0.2889 1.6246 -0.4094 1.6604 1.0476 -0.6512 -0.5487 1.2409
1.7150 1.8562 1.2815 1.7311 -0.8622 -0.1633 -0.8614 1.5882 1.3662 -0.1932
1.5038 1.4896 1.8770 -0.6469 1.4494 -0.0365 1.0423 -0.0270 -0.8108 -0.3329
-0.4136 -0.3249 0.3610 0.6203 0.8112 0.0193 0.0839 0.9748 0.6513 0.2312
0.4035 0.1220 0.2684 0.2578 0.3317 0.1522 0.3480 0.1217 0.8842 0.0943]

Plot the cross-correlation function r_xy(l) and from the plot find the time delay. If the sampling frequency is 1 MHz, find the distance between the object and the radar.

Reference: John G. Proakis and Dimitris G. Manolakis, Digital Signal Processing: Principles, Algorithms and Applications, Fourth Edition, Pearson Prentice Hall, 2007, pp. 116 and 144.


SP Project-2

Objectives:

(a) Generate and display analog signals and discrete-time sequences.
(b) Determine the spectrum of various signals and sequences.
(c) Perform ADC and DAC for various signals and sequences.
(d) Analyze the aliasing effects in the sampling process.

Task1: Generate the analog signal and sample it for the sampling frequencies (i) Fs = 50 Hz and (ii) Fs = 30 Hz. Plot the signals and their spectra.

Task2: Consider Fig. (a). Determine x[n], y[n] and y1(t). Plot x[n], y[n] and y1(t), reconstructed using a sinc interpolation filter, together with their spectra, for the two cases Fs = 50 Hz and Fs = 30 Hz.

Task3: Reconstruct y2(t) using a sinc interpolation filter and plot it and its spectrum for the two cases Fs = 50 Hz and Fs = 30 Hz. Compare y2(t) and y1(t) and comment for both cases.

Task4: Repeat the above for the sampling frequencies (i) Fs = 600 Hz and (ii) Fs = 1200 Hz.
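The manual's code is MATLAB; the Python/NumPy sketch below illustrates the aliasing effect these tasks probe: a 20 Hz tone sampled above the Nyquist rate keeps its frequency, while sampling below it folds the tone to a lower frequency. The 20 Hz tone and one-second record are demo assumptions.

```python
import numpy as np

f0 = 20.0  # Hz, assumed test tone

def dominant_frequency(fs, duration=1.0):
    """Sample cos(2*pi*f0*t) at rate fs and return the FFT peak frequency (Hz)."""
    n = np.arange(int(fs * duration))
    x = np.cos(2 * np.pi * f0 * n / fs)
    X = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)  # frequency axis in Hz
    return freqs[np.argmax(X)]

print(dominant_frequency(50.0))  # Fs > 2*f0: the 20 Hz tone is preserved
print(dominant_frequency(30.0))  # Fs < 2*f0: 20 Hz aliases to |30 - 20| = 10 Hz
```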


SP Project-3

Objectives:

(a) Understand the ECG signal and its components/features.
(b) Load, store and display an ECG signal.
(c) Design and implement a second-order digital notch filter to remove power-line interference.
(d) Design and implement notch FIR and IIR filters.

A 50 Hz power-line interference is often coupled with a power supply in an ECG recording application. The sampling frequency used here is assumed to be Fs = 400 Hz.

Task1: Load and display an error-free ECG signal. Identify the various components in the ECG signal. Compute and display its spectrum.

Task2: Add a 50 Hz sinusoid to the ECG signal to produce the distorted ECG signal x_w(t). Plot x_w(t) and its spectrum.

Task3: Design a second-order FIR notch filter to remove the power-line interference. The transfer function of such a filter is

H(z) = b0 [1 − 2cos(ω0) z^-1 + z^-2]

where ω0 is the frequency to be suppressed. Choose b0 so that |H(e^jω)| = 1 at ω = 0. Plot the impulse response, the filtered signal x_a(t) and their spectra.

Task4: Design a second-order pole-zero notch filter to remove the power-line interference. The transfer function of such a filter is

H(z) = b0 (1 − 2cos(ω0) z^-1 + z^-2) / (1 − 2r·cos(ω0) z^-1 + r^2 z^-2),


where ω0 is the frequency to be suppressed and r is a constant. Consider two cases: (i) r = 0.85 and (ii) r = 0.95. Choose b0 so that |H(e^jω)| = 1 at ω = 0. Plot the impulse response, the filtered signal x_a(t) and their spectra.

Reference: John G. Proakis and Dimitris G. Manolakis, Digital Signal Processing: Principles, Algorithms and Applications, Fourth Edition, Pearson Prentice Hall, 2007, p. 339 and Problem 5.52, p. 376.


SP Project-4

Objectives:

(a) Generate and display plots of speech signals in the time domain.
(b) Design a notch filter to eliminate predominant frequency components.
(c) Compute and display the spectrum of these signals.

In this project it is desired to perform frequency analysis on two speech recordings. Specifically, it is required to compute and display the spectrum of one segment of each of your two signals.

Task1: Record yourself saying `yes' and `no' and create wav files. The recordings should be at 8000 samples per second. Using MATLAB, form one vector of 4000 samples (half a second) for the `yes', and a second vector, also of 4000 samples, for the `no'. Plot the two speech signals.

Note: Your original recordings need not be half a second in duration; you can trim the signal down to 4000 samples after you read it into MATLAB. Save your 4000-point speech signals as simple data files using the save command.

Task2: Extract a 50 millisecond segment of voiced speech from your `yes' signal. You should select the segment during the `e' sound of `yes'. The segment should be roughly periodic.

Compute and display the spectrum (DTFT) of your 50 millisecond speech segment. The frequency axis should be in Hertz, computed according to the sampling rate (8000 samples/second). Plot the spectrum on a linear scale and on a log scale (for the log scale, use 20·log10 |X(f)|).

Based on your spectrum, what is the fundamental frequency present? Can you recognize the harmonics in the spectrum?
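The spectrum computation above can be sketched in Python/NumPy (a stand-in for the MATLAB steps). Since the recorded segment is not available here, a synthetic harmonic sum with an assumed 200 Hz fundamental plays the role of the voiced `yes' segment; the fundamental then appears as the largest spectral peak.

```python
import numpy as np

fs = 8000                       # samples per second, as in the task
n = np.arange(int(0.050 * fs))  # 50 ms segment = 400 samples
f0 = 200.0                      # assumed fundamental (pitch) frequency
# Synthetic "voiced" segment: fundamental plus three weaker harmonics.
x = sum(np.cos(2 * np.pi * k * f0 * n / fs) / k for k in range(1, 5))

X = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)   # frequency axis in Hz
log_spectrum = 20 * np.log10(X + 1e-12)       # log scale, as in the task

peak_hz = freqs[np.argmax(X)]   # the fundamental dominates the spectrum
print(peak_hz)
```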

Task3: It can be observed that the spectrum of a short segment of a speech signal contains roughly equally-spaced peaks. Now it is desired to eliminate one of the prominent peaks. Select the frequency (in cycles/second) of the peak to be eliminated from the computed spectrum.

Design a second-order digital notch filter with the transfer function


H(z) = (b0 + b1 z^-1 + b2 z^-2) / (1 + a1 z^-1 + a2 z^-2)

that has zeros at the selected frequencies and corresponding poles. The zeros should be e^(±j2πf_n) and the poles should be r·e^(±j2πf_n), where r is slightly less than 1 and f_n is the normalized frequency (cycles/sample); this should be a number between zero and one half. Try different values of r.

Task4: Repeat Task2 for your `no' signal.

Based on the spectra that you compute, what is the pitch frequency of your speech? (The pitch frequency is the fundamental frequency of the quasi-periodic voiced speech signal.)

Task5: Plot the pole-zero diagram of your filter in Matlab (use the Matlab command zplane). Verify that the poles and zeros lie where they were designed to be.

Task6: Plot the frequency-response magnitude of your filter, |H(f)|, versus physical frequency (the frequency axis should go from zero to half the sampling frequency). You can use the Matlab command freqz to compute the frequency response. Verify that the frequency response has a null at the intended frequency.

Task7: Plot the impulse response h[n] of your filter. You can create an impulse signal and then use the Matlab command filter to apply your filter to the impulse signal.

Task8: Apply your filter to your speech signal (use the Matlab command filter). Extract a short segment of your speech signal before and after filtering. Plot both the original speech waveform x[n] and the filtered speech waveform y[n] (you might try to plot them on the same axes). Also plot the difference between the two waveforms, d[n] = y[n] − x[n]. What do you expect the signal d[n] to look like?

For the short segments you extract from the original and filtered speech waveforms, compute and plot the spectrum. Were you able to eliminate the spectral peak that you intended to?


SP Project-5

Signals

Objectives:

(a) Generate discrete-time sequences from analog signals for various phase angles.
(b) Determine analog signals from discrete-time sequences using various interpolation filters.
(c) Study the effect of phase on the reconstructed signals.
(d) Design a filter to remove noise.

Task1: Generate the analog signal x_a(t) for the phase values θ = π/3 and θ = π/2.

Task2: This analog signal is sampled at Ts = 0.05 sec intervals to obtain x[n]. Compute x[n] from x_a(t) for all the phase values. Plot x[n] and their spectra.

Task3: Reconstruct the analog signal y_a(t) from the samples x[n] using (a) sinc and (b) cubic spline interpolation filters. Use Δt = 0.001 sec.

Task4: Observe that the resulting reconstruction in each case has the correct frequency but a different amplitude. Explain these observations and comment on the role of the phase.

Task5: Assume the signal is corrupted by AWGN at SNRs of 20 and 30 dB. Plot the noisy signal and its spectrum. Design a filter to remove the noise. Repeat the above steps for each case.

Reference: Vinay K. Ingle and John G. Proakis, Digital Signal Processing Using MATLAB, Third Edition, Cengage Learning, 2012.


SP Project-6

Various Interpolation Techniques

Objectives:

(a) Generate and display analog signals.
(b) Perform ADC and DAC operations.
(c) Study the effect of sampling on frequency-domain quantities.
(d) Reconstruct the signal using various interpolation techniques.

Task2: To study the effect of sampling on the frequency-domain quantities, x_a(t) is sampled at different sampling frequencies:

(a) Sample x_a(t) at Fs = 5000 samples/sec to obtain x1[n]. Determine and plot X1(e^jω).
(b) Sample x_a(t) at Fs = 1000 samples/sec to obtain x2[n]. Determine and plot X2(e^jω).

Task3: Reconstruct the signal y_a(t) using the following interpolation techniques:

(a) sinc interpolation
(b) zero-order-hold interpolation
(c) spline interpolation

Check whether y_a(t) is equal to x_a(t) in each case and comment.
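Sinc interpolation, option (a) above, can be sketched in Python/NumPy (a stand-in for the MATLAB steps): the ideal band-limited reconstruction is y_a(t) = Σ_n x[n]·sinc((t − nT)/T). The 100 Hz cosine and Fs = 1000 Hz below are demo assumptions, not values fixed by the project.

```python
import numpy as np

fs = 1000.0
T = 1.0 / fs
n = np.arange(64)
x = np.cos(2 * np.pi * 100.0 * n * T)   # samples of a 100 Hz cosine

def sinc_reconstruct(t):
    """Evaluate the sinc interpolation formula at continuous times t."""
    # np.sinc(u) is sin(pi*u)/(pi*u), exactly the ideal interpolation kernel.
    return np.sinc((np.asarray(t)[:, None] - n * T) / T) @ x

# The interpolant passes exactly through the original samples ...
err_at_samples = np.max(np.abs(sinc_reconstruct(n * T) - x))
print(err_at_samples)  # floating-point round-off only
# ... and between the samples it approximates the underlying analog cosine,
# up to truncation error from using a finite number of samples.
```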

Task4: Assume the signal is corrupted by AWGN at SNRs of 20 and 30 dB. Plot the noisy signal and its spectrum. Design a filter to remove the noise. Repeat the above steps for each case.

Task5: Perform the above steps for speech signals.

Reference: Vinay K. Ingle and John G. Proakis, Digital Signal Processing Using MATLAB, Third Edition, Cengage Learning, 2012.


SP Project-7

Digital Processing of Speech Signals using the Pre-emphasis Filter and Band-pass Filter

Objectives:

(a) Load, store, display and manipulate speech signals.
(b) Plot the speech signals in the time domain and frequency domain.
(c) Design pre-emphasis and band-pass filters.
(d) Compute and display the response of the pre-emphasis and band-pass filters.

Task1: Generate a test signal containing frequency components up to 5 KHz with various magnitudes. The magnitude of the signal should decrease as the frequency increases.

Task2: Design a pre-emphasis filter to boost the high-frequency components from 2.5 to 5 KHz. Plot the frequency response of the filter.
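A common pre-emphasis design, sketched below in Python/NumPy as a stand-in for the MATLAB steps, is the first-order filter y[n] = x[n] − α·x[n−1]: its magnitude response rises monotonically with frequency, from 1 − α at DC to 1 + α at the Nyquist frequency. The project does not fix the design, so α = 0.95 and a 10 kHz sampling rate (Nyquist 5 kHz) are assumptions.

```python
import numpy as np

alpha = 0.95   # assumed pre-emphasis coefficient

def preemphasis(x, alpha=alpha):
    """Apply y[n] = x[n] - alpha * x[n-1] (x[-1] taken as 0)."""
    y = np.copy(x)
    y[1:] -= alpha * x[:-1]
    return y

def gain(f, fs=10000.0):
    """Magnitude response |1 - alpha * e^{-j 2 pi f / fs}| of the filter."""
    return abs(1 - alpha * np.exp(-2j * np.pi * f / fs))

print(gain(0.0))      # 1 - alpha = 0.05: low frequencies are attenuated
print(gain(5000.0))   # 1 + alpha = 1.95: high frequencies are boosted
```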

Task3: Apply this pre-emphasis filter to the signal generated in Task1. Plot the unfiltered and filtered spectra. Observe the results and comment.

Task4: Design a band-pass filter that passes only the desired band of frequencies from 1000 Hz to 1400 Hz. Use a sampling rate of 8000 Hz. Plot the frequency response of the filter.

Task5: Apply this filter to the pre-emphasized signal obtained in Task3. Plot the unfiltered and filtered spectra. Observe the results and comment.

Task6: Repeat the above steps for real speech signals.


SP Project-8

An ECG signal is usually corrupted by the 50 Hz power-line frequency and its first and second harmonics. Hum noise, created by poor power supplies, transformers, or electromagnetic interference sourced from a mains power supply, is characterized by a frequency of 50 Hz and its harmonics. If this noise interferes with a desired audio or biomedical signal (e.g., electrocardiography [ECG]), the desired signal is not useful for diagnostic purposes. To eliminate these unwanted frequencies, a suitable filtering process is needed. In most practical applications, elimination of the 50 Hz hum frequency with its second and third harmonics is sufficient. This filtering can be achieved by cascading digital notch filters with notch frequencies of 50 Hz and 100 Hz respectively. Further, it is necessary to eliminate DC drift and muscle noise, which may occur at approximately 40 Hz or more; hence a bandpass filter with a passband of 0.25-40 Hz is desired. The following figure depicts the functional block diagram.

Objectives

(a) Load, display and manipulate ECG signals.
(b) Design notch filters, both FIR and IIR, for 50 Hz and 100 Hz respectively.
(c) Design a band-pass filter to eliminate muscle noise.
(d) Plot the spectra of the designed filters.
(e) Display the corrupted and filtered ECG signals and their spectra.

Task1: Load an ECG signal. Add 50 Hz and 100 Hz sinusoidal signals to this ECG signal. Plot the original ECG signal, the corrupted ECG signal and their spectra.

Task2: Design two notch filters with the following specifications.

Notch filters:
Frequencies to be suppressed : 50 Hz and 100 Hz
3 dB bandwidth for each filter : 4 Hz
Sampling rate : 600 Hz
Design methods :

Bandpass filter:
Passband frequency range : 0.25-40 Hz
Passband ripple : 0.5 dB
Sampling rate : 600 Hz
Filter type :
Design method : Bilinear transformation

Step2: Design a second-order pole-zero notch filter. In both cases choose the gain b0 so that |H(e^jω)| = 1 at ω = 0. Write the mathematical equations for the transfer function and its difference equation in each case.

Step3: Plot the spectra of each notch filter and of the cascaded filters.

Task3: Design a fourth-order digital IIR band-pass filter using the bilinear transformation with Chebyshev approximation. The specifications are given above. Plot the original, corrupted and filtered ECG signals and their spectra.

Reference: John G. Proakis and Dimitris G. Manolakis, Digital Signal Processing: Principles, Algorithms and Applications, Fourth Edition, Pearson Prentice Hall, 2007, p. 339.


SP Project-9

A speech signal is corrupted by an interfering sinusoidal signal and its harmonics. It is desired to design notch filters to eliminate the undesired frequency components.

Objectives:

(a) Load, display and manipulate speech signals.
(b) Design notch filters, both FIR and IIR, for 360 Hz and 1080 Hz respectively.
(c) Plot the spectra of the designed filters.
(d) Display the corrupted and filtered speech signals and their spectra.

Task1: A speech signal sampled at 8000 Hz is corrupted by a sine wave of 360 Hz. Design a notch filter to eliminate the unwanted interference signal.

Load a speech signal sampled at 8000 Hz. Add a 360 Hz sinusoidal signal to this speech signal. Plot the original speech signal, the corrupted speech signal and their spectra.

Task2: Design a notch filter with the following specifications.

Notch filter:
Type of notch filter : Chebyshev filter
Center frequency : 360 Hz
Bandwidth : 60 Hz
Passband ripple : 0.5 dB
Stopband attenuation :

Determine the transfer function and difference equation. Plot the filtered signal and its spectrum.

Task3: Assume that the speech signal is corrupted by a sine wave of 360 Hz and its third harmonic. Design two notch filters, connected in cascade, to remove the noise signals. Possible specifications are given below.

Notch filter 1:
Type of notch filter : Chebyshev filter
Center frequency : 360 Hz
Bandwidth : 60 Hz
Passband ripple : 0.5 dB
Stopband attenuation :

Notch filter 2:
Type of notch filter : Chebyshev filter
Center frequency : 1080 Hz
Bandwidth : 60 Hz
Passband ripple : 0.5 dB
Stopband attenuation :

Determine the transfer functions and difference equations. Plot the filtered signal and its spectrum.


SP Project-10

The ECG is a small electrical signal captured from an ECG sensor. The ECG signal is produced by the activity of the human heart, and thus can be used for heart-rate detection, fetal monitoring and diagnostic purposes. The ECG signal is characterized by five peaks and valleys, labeled P, Q, R, S and T. The highest positive wave is the R wave. Shortly before and after the R wave are negative waves called the Q wave and S wave. The Q, R and S waves together are called the QRS complex. The properties of the QRS complex, with its rate of occurrence and times, heights and widths, provide information to cardiologists concerning various pathological conditions of the heart. The reciprocal of the time period between R-wave peaks (in milliseconds), multiplied by 60000, gives the instantaneous heart rate in beats per minute.
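The heart-rate formula quoted above can be sketched directly; the R-peak times below are made-up sample values for illustration only.

```python
def heart_rate_bpm(rr_interval_ms):
    """Instantaneous heart rate from one R-R interval in milliseconds."""
    return 60000.0 / rr_interval_ms

r_peaks_ms = [0, 800, 1600, 2400]   # hypothetical R-peak times (ms)
rr = [b - a for a, b in zip(r_peaks_ms, r_peaks_ms[1:])]
rates = [heart_rate_bpm(d) for d in rr]
print(rates)  # 800 ms between beats -> 75 beats per minute
```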

Objectives

(a) Load, display and manipulate ECG signals.
(b) Design IIR notch filters for 50 Hz and 100 Hz.
(c) Design a band-pass filter to eliminate muscle noise.
(d) Apply a zero-crossing algorithm to determine the heart rate.

Task1: Load an ECG signal. Add 50 Hz, 100 Hz and 150 Hz sinusoidal signals to this ECG signal. Plot the original ECG signal, the corrupted ECG signal and their spectra.

Task2: Design three notch filters with the following specifications.

Frequencies to be suppressed : 50 Hz, 100 Hz and 150 Hz
3 dB bandwidth for each filter : 4 Hz
Sampling rate : 600 Hz


Type of notch filter :
Design methods :

Step2: Design a second-order pole-zero notch filter. In both cases choose the gain b0 so that |H(e^jω)| = 1 at ω = 0. Write the mathematical equations for the transfer function and its difference equation in each case.

Step3: Plot the spectra of each notch filter and of the cascaded filters.

Step4: Plot the original, corrupted and filtered ECG signals and their spectra.

Task3: Apply a zero-crossing algorithm to determine the heart rate.


SP Project-11

Objectives:

(a) Load, display and manipulate ECG signals.
(b) Detect QRS complexes using the Pan-Tompkins algorithm.
(c) Measure ECG parameters for rhythm analysis.

Background:

The QRS-complex detection algorithm developed by Pan and Tompkins identifies QRS complexes based on analysis of their slope, amplitude, and width. The various stages of the algorithm are shown in Figure 1. The bandpass filter, formed using lowpass and highpass filters, reduces noise in the ECG signal: noise such as muscle-contraction artifacts, 60 Hz power-line interference, and baseline wander is removed by bandpass filtering. The signal is then passed through a differentiator to highlight the high slopes that distinguish QRS complexes from low-frequency ECG components such as the P and T waves. The next operation is squaring, which emphasizes the higher values that are mainly due to QRS complexes. The squared signal is then passed through a moving-window integrator of window length N = 30 samples (for a sampling frequency of fs = 200 Hz). The result is a single smooth peak for each ECG cycle. The output of the moving-window integrator may be used to detect QRS complexes, measure RR intervals, and determine the QRS-complex duration (see Figure 2).
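The differentiator, squaring, and moving-window-integration stages described above can be sketched in Python/NumPy (a stand-in for the MATLAB 'filter' calls; the bandpass stage is omitted for brevity). The synthetic "ECG" is a flat record with one sharp spike standing in for an R wave; fs = 200 Hz and N = 30 follow the text.

```python
import numpy as np

fs, N = 200, 30

ecg = np.zeros(400)
ecg[200:203] = [0.5, 1.0, 0.5]          # crude stand-in for one QRS complex

deriv = np.diff(ecg, prepend=ecg[0])    # differentiator highlights steep slopes
squared = deriv ** 2                    # squaring emphasizes large slopes
# Moving-window integrator: average over a sliding window of N samples.
mwi = np.convolve(squared, np.ones(N) / N, mode="same")

# The integrator output forms one smooth peak in the neighbourhood of the
# spike; thresholding this output is how beats are detected.
peak_index = int(np.argmax(mwi))
print(peak_index)
```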

Task1: Download the ECG data files (sampled at 200 Hz). Develop a Matlab program to perform the various filtering procedures that form the Pan-Tompkins algorithm. Use the 'filter' command for each step. Study the plots of the results at the different stages of the QRS-detection algorithm.

Task2: Implement a simple thresholding procedure, including a blanking interval, for the detection of QRS waves from the output of the Pan-Tompkins algorithm.

Task3: Develop Matlab code that uses the output of the Pan-Tompkins algorithm to detect QRS complexes and compute the following parameters for each of the sample ECG signals provided:

1. Total number of beats in each signal and the heart rate in beats per minute.
2. Average RR interval and standard deviation of the RR intervals of each signal (in ms).
3. Average QRS width computed over all the beats in each signal (in ms).


SP Project-12

The electrocardiographic (ECG) signal is the electrical representation of the heart's activity. Computerized ECG analysis is widely used as a reliable technique for the diagnosis of cardiovascular diseases. Baseline wander is an artifact which produces artifactual data when measuring ECG parameters; the ST-segment measures in particular are strongly affected by this wandering. In most ECG recordings, respiration, electrode-impedance changes due to perspiration, and increased body movement are the main causes of baseline wander. The baseline-wander noise makes the analysis of ECG data difficult, and it is therefore necessary to suppress this noise for correct evaluation of the ECG.

Objectives

(a) Load, display and manipulate ECG signals.
(b) Design and implement various filters to remove the baseline wander artifact.
(c) Compare the various filtering techniques.
(d) Compute the power spectral density and plot the spectra.

The respiratory baseline wander occupies frequencies between 0.15 Hz and 0.5 Hz. The design of a

linear, time-invariant, high pass filter for removal of baseline wander involves several

considerations, of which the most crucial are the choice of filter cut-off frequency and

phase response characteristic. The cut-off frequency should be chosen so that the

clinical information in the ECG signals remains undistorted while as much as possible of

the baseline wander is removed. Hence, it is essential to find the lowest frequency

component of the ECG spectrum. In general, the slowest heart rate is considered to

define this particular frequency component; the PQRST waveform is attributed to higher

frequencies. If too high a cut-off frequency is employed, the output of the high pass filter

contains an unwanted, oscillatory component that is strongly correlated to the heart rate.

On the basis of impulse response, there are generally two types of digital filters:
Infinite Impulse Response (IIR)
Finite Impulse Response (FIR)


Task1: Apply mean and median (or moving average) filtering to eliminate the baseline

wander artifact. Plot the original, filtered ECG signals and their spectrum. Compute the

power density spectrum of the signals.

Task2: Apply a two-stage mean and median (or moving average) filter to eliminate the

baseline wander artifact. Plot the original, filtered ECG signals and their spectrum.

Compute the power density spectrum of the signals.

Task3: Apply a first-order zero-phase low-pass filter to eliminate the baseline wander

artifact. Plot the original, filtered ECG signals and their spectrum. Compute the power

density spectrum of the signals.

Note: Zero-phase filtering minimizes start-up and ending transients by matching initial
conditions, and helps preserve features in the filtered waveform exactly where those
features occur in the unfiltered waveform.
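As an illustration of this note, the following Python/SciPy sketch (the manual targets Matlab; the sampling rate, drift frequency and test signal are assumed values) compares ordinary causal filtering with forward-backward zero-phase filtering of a synthetic signal plus baseline drift:

```python
import numpy as np
from scipy.signal import butter, lfilter, filtfilt

fs = 500                                   # assumed sampling rate, Hz
b, a = butter(1, 0.5 / (fs / 2), 'high')   # first-order HPF, 0.5 Hz cut-off

t = np.arange(0, 10, 1 / fs)
ecg_like = np.sin(2 * np.pi * 1.0 * t)         # stand-in for the ECG content
drift = 0.5 * np.sin(2 * np.pi * 0.2 * t)      # baseline wander at 0.2 Hz
x = ecg_like + drift

y_causal = lfilter(b, a, x)    # ordinary filtering: introduces phase delay
y_zero = filtfilt(b, a, x)     # forward-backward: zero net phase shift
```

Because `filtfilt` applies the filter twice, the drift component is attenuated by the squared magnitude response while the phase of the retained content is unchanged.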

Task4: Apply a band-pass filter using FFT filtering to eliminate the baseline wander

artifact. Plot the original, filtered ECG signals and their spectrum. Compute the power

density spectrum of the signals.

Compare the parameters of all filtering approaches.

Filtration Method | PSD at 0.35 Hz (dB/Hz), before filtration | PSD at 0.35 Hz (dB/Hz), after filtration | SNR (dB) | Average signal power (dB) | Waveform modification
IIR HP | | | | |
IIR zero-phase | | | | |
FIR HP | | | | |
FIR zero-phase | | | | |
Moving Average | | | | |
Mean | | | | |
Savitzky-Golay polynomial fitting | | | | |


SP Project-13

Objectives

(a) Load, display and manipulation of speech signals.

(b) Estimate the fundamental frequency of a section of speech signal from its

waveform using autocorrelation.

(c) Estimate the fundamental frequency of a section of speech signal from its

spectrum using cepstrum.

(d) Compute and plot the spectrum of speech signals.

Task1: Fundamental frequency estimation (time domain): autocorrelation

The perception of pitch is more strongly related to periodicity in the waveform itself. A

means to estimate fundamental frequency from the waveform directly is to use

autocorrelation. The autocorrelation function for a section of signal shows how well the

waveform shape correlates with itself at a range of different delays. We expect a

periodic signal to correlate well with itself at very short delays and at delays

corresponding to multiples of pitch periods. We can estimate the fundamental frequency

by looking for a peak in the delay interval corresponding to the normal pitch range in

speech.
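The procedure just described can be sketched as follows; this is an illustrative Python/NumPy version (the manual asks for Matlab), with the pitch search range and test frequencies chosen as assumptions:

```python
import numpy as np

def f0_autocorr(x, fs, fmin=50.0, fmax=500.0):
    """Estimate fundamental frequency by locating the autocorrelation
    peak inside the normal speech pitch range [fmin, fmax]."""
    x = x - x.mean()
    r = np.correlate(x, x, mode='full')[len(x) - 1:]   # lags 0..N-1
    lo, hi = int(fs / fmax), int(fs / fmin)            # candidate lag range
    lag = lo + np.argmax(r[lo:hi])                     # strongest pitch-range peak
    return fs / lag

fs = 8000
t = np.arange(0, 0.05, 1 / fs)
x = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 240 * t)
f0 = f0_autocorr(x, fs)   # expect a value near 120 Hz
```

Restricting the search to lags between fs/fmax and fs/fmin is what excludes the large zero-lag peak and very short delays.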

Task2: Fundamental frequency estimation (frequency domain): cepstrum

A reliable way of obtaining an estimate of the dominant fundamental frequency for long,

clean, stationary speech signals is to use the cepstrum. The cepstrum is a Fourier analysis

of the logarithmic amplitude spectrum of the signal as shown in Fig.1. If the log

amplitude spectrum contains many regularly spaced harmonics, then the Fourier analysis

of the spectrum will show a peak corresponding to the spacing between the harmonics:

i.e. the fundamental frequency. Effectively we are treating the signal spectrum as another

signal, then looking for periodicity in the spectrum itself.

The cepstrum is so called because it turns the spectrum inside-out. The x-axis of the
cepstrum has units of quefrency, and peaks in the cepstrum (which relate to periodicities
in the spectrum) occur at the corresponding periods. To obtain an estimate of the fundamental
frequency from the cepstrum we look for a peak in the quefrency region corresponding to
typical speech fundamental frequencies.
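A minimal cepstral F0 estimator along these lines, sketched in Python/NumPy (Matlab is expected in the lab; the frame length, test signal and pitch range are assumptions):

```python
import numpy as np

def f0_cepstrum(x, fs, fmin=50.0, fmax=500.0):
    """Estimate F0 from the peak of the real cepstrum in the
    quefrency range corresponding to [fmin, fmax]."""
    spec = np.abs(np.fft.fft(x * np.hamming(len(x))))
    ceps = np.fft.ifft(np.log(spec + 1e-12)).real   # cepstrum of log spectrum
    lo, hi = int(fs / fmax), int(fs / fmin)
    q = lo + np.argmax(ceps[lo:hi])                 # peak quefrency in samples
    return fs / q

fs = 8000
t = np.arange(0, 0.064, 1 / fs)                     # one 512-sample frame
f0_true = 160.0
x = sum(np.sin(2 * np.pi * k * f0_true * t) for k in range(1, 8))
f0 = f0_cepstrum(x, fs)
```

The regularly spaced harmonics make the log-magnitude spectrum itself periodic, so the cepstral peak falls at the quefrency fs/F0.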

Task3: Repeat Tasks 1 and 2 for noisy speech signals.
Task4: Repeat Tasks 1 and 2 for noisy musical signals.
Task5: Repeat Tasks 1 and 2 for noisy musical speech signals.


SP Project-14

Objectives

(a) Load, display and manipulate speech signals in both the time and frequency
domains.

(b) Build and perform a filter bank analysis of a speech signal.

(c) Use the discrete Fourier transform to convert a waveform to a spectrum and vice

versa.

(d) Divide a signal into overlapping windows.

(e) Compute and display a spectrogram.

Task1: Filter bank analysis:

The most flexible way to perform spectral analysis is to use a bank of band pass

filters. A filter bank can be designed to provide a spectral analysis with any degree of

frequency resolution (wide or narrow), even with non-linear filter spacing and

bandwidths. A disadvantage of filter banks is that they almost always take more
calculation and processing time than discrete Fourier analysis using the FFT.

To use a filter bank for analysis we need one band-pass filter per channel to do the

filtering, a means to perform rectification, and a low-pass filter to smooth the energies.

In this example, we build a 19-channel filter bank using bandwidths that are modelled on

human auditory bandwidths. We rectify and smooth the filtered energies and convert to a

decibel scale.
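A reduced version of this filter bank (4 channels instead of 19, with hypothetical band edges) can be sketched in Python/SciPy as follows; each channel is band-pass filtered, full-wave rectified, smoothed and converted to decibels:

```python
import numpy as np
from scipy.signal import butter, lfilter

def filterbank_energies(x, fs, edges):
    """Band-pass each channel, full-wave rectify, smooth with a
    low-pass filter, and return channel energies in dB."""
    bs, as_ = butter(2, 50 / (fs / 2))            # 50 Hz smoothing LPF
    out = []
    for f1, f2 in zip(edges[:-1], edges[1:]):
        b, a = butter(2, [f1 / (fs / 2), f2 / (fs / 2)], 'bandpass')
        env = lfilter(bs, as_, np.abs(lfilter(b, a, x)))   # rectify + smooth
        out.append(10 * np.log10(np.mean(env ** 2) + 1e-12))
    return np.array(out)

fs = 8000
t = np.arange(0, 0.5, 1 / fs)
x = np.sin(2 * np.pi * 1000 * t)          # tone inside the third band below
edges = [100, 300, 700, 1500, 3000]       # 4 channels (19 in the project)
E = filterbank_energies(x, fs, edges)
```

For the auditory-bandwidth version, the edges would instead follow critical-band (Bark-like) spacing.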

Task2: Spectral analysis using the Fourier transform:

The discrete-time, discrete-frequency version of the Fourier transform (DFT) converts an
array of N sample amplitudes to an array of N complex harmonic amplitudes. If the
sampling rate is fs, the N input samples are 1/fs seconds apart, and the output
harmonic frequencies are fs/N Hertz apart. That is, the N output amplitudes are
evenly spaced at frequencies between 0 and (N-1)fs/N Hertz.

Perform the DFT of the speech signal. Use power-of-two sizes such as 512 or 1024
for the fastest computation. Plot and display the magnitude and phase spectra.
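The bin spacing fs/N described above can be checked directly; a small Python/NumPy sketch (assumed fs = 8000 Hz and N = 1024, so bins are 7.8125 Hz apart):

```python
import numpy as np

fs = 8000          # sampling rate (Hz)
N = 1024           # DFT length
n = np.arange(N)
x = np.cos(2 * np.pi * 1000 * n / fs)     # 1000 Hz tone

X = np.fft.fft(x)
freqs = np.arange(N) * fs / N             # bins are fs/N = 7.8125 Hz apart
peak_bin = np.argmax(np.abs(X[:N // 2]))  # search the 0..fs/2 half only
peak_freq = freqs[peak_bin]
```

Since 1000 Hz is an exact multiple of fs/N here, the tone lands in a single bin with no leakage.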


Task3: Windowing a signal:

Often it is desired to analyze a long signal in overlapping short sections called

windows.

spectrogram. Unfortunately it cannot simply chop the signal into short pieces because

this will cause sharp discontinuities at the edges of each section. Instead it is preferable

to have smooth joins between sections. Raised cosine windows are a popular shape for

the joins:
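With raised cosine (Hann) windows and 50% overlap, the windowed sections sum back to the original signal, which is why such joins are smooth. A Python/NumPy sketch (window length and hop size are illustrative choices):

```python
import numpy as np

N, hop = 256, 128                    # window length, 50% overlap
w = np.hanning(N + 1)[:-1]           # periodic Hann (raised cosine)

x = np.random.default_rng(0).standard_normal(4096)
frames = [x[i:i + N] * w for i in range(0, len(x) - N + 1, hop)]

# With 50% overlap, periodic Hann windows sum to a constant, so the
# windowed sections overlap-add back to the original signal.
y = np.zeros_like(x)
for k, frame in enumerate(frames):
    y[k * hop:k * hop + N] += frame
```

Away from the two ends (which are only partially covered by windows), the overlap-added signal reproduces the input exactly.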

Task4: Spectrograms:

MATLAB has a built-in function specgram() for spectrogram calculation. This function

divides a long signal into windows and performs a Fourier transform on each window,

storing complex amplitudes in a table in which the columns represent time and the rows

represent frequency.


SP Project-15

Objectives

(a) Generate and manipulate signals and sequences.
(b) Compute and display the spectrum of signals.
(c) Perform the cepstral operation on signals and speech.
(d) Calculate the fundamental frequency from the cepstrum.

Task1: Generate and display the following analog signal.

It can be observed that the fundamental frequency is 200 Hz and the harmonics are
400 Hz and 600 Hz respectively.

Determine the DFT of the signal and plot its spectrum. Observe that this spectrum is
periodic and discrete. The idea behind the cepstrum is to treat such a periodic DFT as
if it were itself a discrete signal. Determine the spacing between the harmonics, which
corresponds to the fundamental frequency.

Task2: Signal with one set of harmonics:

Generate and display the following analog signal

It can be observed that the fundamental frequency is 300 Hz and the harmonic is 600 Hz.

Determine the DFT of the signal and plot its spectrum. Observe that this spectrum is
periodic and discrete. Determine the spacing between the harmonics, which
corresponds to the fundamental frequency.

Task3: Signal with two sets of harmonics:

x(t) = sin(2ω0t) + sin(3ω0t) + sin(4ω0t) + sin(6ω0t), where ω0 = 2π/T, T = 0.01 s, Ts = (T/6)/2

It is observed that:
The first fundamental frequency is 200 Hz, with harmonics at 400 Hz and 600 Hz.
The second fundamental frequency is 300 Hz, with a harmonic at 600 Hz.


Determine the DFT of the signal and plot its spectrum. Observe that this spectrum is
periodic and discrete. Determine the spacing between the harmonics, which
corresponds to the fundamental frequencies.

Task4: Fundamental frequency of a guitar string.
Load a recording of a guitar string. Each string must be properly tightened to produce a
predefined fundamental frequency before the guitar can be played.
The frequencies of the guitar strings in Hz should be: E = 82.4, A = 110, D = 146.8,
G = 196, B = 246.92, E = 329.6.


SP Project-16

Objectives:

(a) Load, display and manipulate speech signals.
(b) Compute and display the spectrum of speech signals.
(c) Understand and apply cepstral analysis and homomorphic deconvolution
algorithms to speech signals.
(d) Create and experiment with audio signal processing applications.

In this project you have to obtain samples of male and female voice signals for all five

different vowel sounds. Many computers have built in microphones to do this and it is up

to you how you will obtain your 10 speech samples. Then, you must compute the

cepstrum of the sound samples and determine any differences you notice between male
and female voice patterns and amongst the different vowels themselves. Last, you

must lifter the signals to remove the transfer function (using an appropriate window

length) and compute the excitation signal by taking the inverse cepstrum to get back to

the time domain.

Task1: Acquire voice samples. In this part, either record or find at least 10 voice

samples of a male and female individual making the five vowel sounds a, e, i, o,

u. If you are going to record them yourself or with a friend, then exaggerate the
sounds a little and keep your voice extended for a while. It is also okay to work with

other groups and use their voice samples, but please credit them. Make note of the

conditions used to obtain the voice samples (e.g., what computer, what type of speaker

built-in or microphone, using which software program, or where or from whom the files

were obtained). You should have at least 10 files in the end. If you want to be keen and

impressive, you can get more than one male and female voice to obtain a better understanding of differences between both signals in the cepstral domain.


Task 2: Compute the cepstrum of each voice signal and discuss any difference

qualitatively and quantitatively amongst male and female voices in general and amongst

the different vowel sounds. This is an important component of the project, so please be

creative and as comprehensive as possible. Your report should provide figures with

original time-domain signals as well as cepstrum signals. Female voices should

generally have more peaks than male voices in the cepstrum domain. You should

discuss why you think this would be the case.

Task3: Lifter the cepstrum-domain signals. Design a window (length is an important
design parameter; discuss how and why you select it, and it can be the same
or different for each speech sample depending on what you would like to experiment
with) to remove the transfer function dependency. Then, compute the time domain signal

of the corresponding windowed result to obtain the deconvolved signal. Plot the

deconvolved result. Is there anything you can say about the signal and its difference

from the original time domain recorded sample? Again, your discussion is an important

part of the report.


SP Project-17

Objectives:

(a) Generate, display and manipulate signals and sequences.
(b) Design a band-pass filter for a speech equalizer.
(c) Compare and plot the spectra of analog signals and speech sequences.
(d) Design filters in both FIR and IIR forms.

A speech equalizer is an algorithm that compensates for mid-range frequency hearing
loss. This process is shown in the following block diagram.

Fig 1: Equalizer

Task 1: Generate an analog signal having frequency components of

500 Hz, 700 Hz, 900 Hz, 1000 Hz, 1200 Hz, 1400 Hz, 1500 Hz, 1600 Hz, 1700 Hz, 1800

Hz, 2000 Hz, 2200 Hz, 2500 Hz with unity magnitude. Plot the analog signal and its

spectrum. Observe the frequency components in the spectrum.

Task 2: Design a band-pass filter (FIR and IIR). Construct the equalizer circuit as shown
in Fig. 1, using a gain of 5. Plot the unfiltered analog signal, the filtered analog signal
and their spectra. Plot the frequency response of the filter.
The design specifications of the filters are given below.
BPF FIR filter specification:

Sampling rate: 8000 Hz, with
(a) Rectangular window
(b) Hamming window
(c) Hanning window
Frequency range to be emphasized: 1500-2000 Hz


Lower stop band: 0-1000 Hz
Upper stop band: 2500-4000 Hz
Pass band ripple: 0.1 dB
Stop band attenuation: 45 dB

Determine the filter length and lower and upper cut off frequencies.

BPF IIR filters specifications:

(a) Butterworth IIR filter

Sampling rate: 8000 Hz

Frequency to be emphasized: 1500-2000 Hz

Lower stop band: 0-1000 Hz

Upper stop band: 2500-4000 Hz

Pass band ripple: 3dB

Stop band attenuation: 20 dB

Determine the filter order and filter transfer function.

(b) Second order band pass IIR filter

Frequency range to be emphasized: 1500-2000 Hz

Pass band ripple: 3dB

Determine the transfer function
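For the Butterworth case, the order and cut-off frequencies can be obtained directly from the specifications. A Python/SciPy sketch of the design procedure (the manual targets Matlab's buttord/butter, which behave similarly):

```python
import numpy as np
from scipy.signal import buttord, butter, freqz

fs = 8000
# band to emphasize: 1500-2000 Hz; stopbands 0-1000 Hz and 2500-4000 Hz
wp = [1500 / (fs / 2), 2000 / (fs / 2)]   # passband edges (normalized)
ws = [1000 / (fs / 2), 2500 / (fs / 2)]   # stopband edges (normalized)
n, wn = buttord(wp, ws, gpass=3, gstop=20)   # minimum order meeting the specs
b, a = butter(n, wn, 'bandpass')             # transfer function coefficients

w, h = freqz(b, a, worN=2048, fs=fs)
gain_pass = np.abs(h)[np.argmin(np.abs(w - 1750))]   # mid-band gain
gain_stop = np.abs(h)[np.argmin(np.abs(w - 500))]    # deep in the lower stopband
```

The returned b, a arrays are the numerator and denominator of the filter transfer function asked for in the task.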

Task 3: Repeat the above steps for real speech and musical signals.


SP Project-18

Digital Crossover System
In audio systems, there is often a situation where the application requires the entire
audible range of frequencies, but this is beyond the capability of any single speaker
driver. So we combine several drivers, such as speaker cones and horns, each
covering a different frequency range, to reproduce the full audio frequency range.

Objectives

(a) Load, display and manipulate speech signals.

(b) Compute and plot the spectrum of speech signals.

(c) Design of low pass and high pass filter for given specifications.

(d) Design of the FIR and IIR filters.

A typical two-band digital crossover can be designed as shown in Fig. 1. There are
two speaker drivers: the woofer responds to low frequencies and the tweeter responds
to high frequencies. The incoming digital audio signal is split into two bands by using a
low-pass filter and a high-pass filter in parallel. We then amplify the separated audio
signals and send them to their corresponding speaker drivers. Hence the objective is
to design the low-pass filter and high-pass filter so that their combined frequency
response is flat, while keeping the transitions as sharp as possible to prevent audio
distortion in the transition frequency range. Although traditional crossover systems are
designed using active or passive circuits, a digital crossover system provides
a cost-effective solution with programmability, flexibility, and high quality.


Task1: Design a digital FIR crossover system using the following specifications.
Type of filter: FIR filter
Windows: (a) Rectangular window (b) Hamming window (c) Hanning window
Sampling rate: 44,100 Hz
Crossover frequency: 1000 Hz
Transition band: 600 to 1400 Hz
Pass band frequency: 0 to 600 Hz
Pass band ripple: 0.02 dB
Stop band frequency: 1400 Hz
Stop band attenuation: 50 dB
From these specifications determine the cut-off frequency and filter length.
(Ans: cut-off frequency = 1000 Hz and filter length = 183).

Use Matlab to design and plot frequency responses for both filters.
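A Python/SciPy sketch of the FIR crossover pair (Hamming window, with the cut-off frequency and filter length from Task 1); because the two linear-phase filters share the same delay, their responses can simply be added to check that the combined response is flat:

```python
import numpy as np
from scipy.signal import firwin, freqz

fs = 44100
fc = 1000                 # crossover frequency, Hz
numtaps = 183             # odd filter length, as computed in Task 1

lp = firwin(numtaps, fc, fs=fs, window='hamming')                   # low-pass
hp = firwin(numtaps, fc, fs=fs, window='hamming', pass_zero=False)  # high-pass

w, H_lp = freqz(lp, worN=4096, fs=fs)
_, H_hp = freqz(hp, worN=4096, fs=fs)
combined = np.abs(H_lp + H_hp)   # equal group delay, so responses add directly
```

The combined magnitude stays close to unity across the whole band, which is the flatness requirement stated above.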

Task2: Design an IIR digital crossover system using the following specifications.
Type of filter: IIR filter
Sampling rate: 44,100 Hz
Crossover frequency: 1000 Hz
Low-pass filter: third order Butterworth, cut-off frequency 1000 Hz
High-pass filter: third order Butterworth, cut-off frequency 1000 Hz

Determine

(a) The transfer functions and difference equation for the high-pass and low-pass

filters.

(b) Frequency responses for the high-pass and low-pass filters.

(c) Combined frequency response for both filters.


SP Project-19

Objectives:

(a) Perform multiplication operations on various DT sequences.
(b) Implement the modulation property of the DFT.
(c) Generate amplitude-modulated DT sequences.
(d) Reconstruct the signal by demodulation.
(e) Plot the spectra for various N-point DFTs.

Task1: Generate a DT sequence x[n] containing two frequency components
f1 = 5/128 and f2 = 50/128 (cycles/sample).
Task2: Generate the amplitude-modulated sequence
x_AM[n] = x[n] cos(2π fc n)
(a) Compute and plot the 128-point DFT of the AM signal x_AM[n]; 0 ≤ n ≤ 127
(b) Compute and plot the 128-point DFT of the AM signal x_AM[n]; 0 ≤ n ≤ 99
(c) Compute and plot the 256-point DFT of the AM signal x_AM[n]; 0 ≤ n ≤ 179

Task3: Demodulate by multiplying the AM signal with the carrier:
x_F[n] = x_AM[n] cos(2π fc n) = x[n] cos²(2π fc n)
       = x[n] (1/2 + (1/2) cos(4π fc n))
       = x[n]/2 + (x[n]/2) cos(4π fc n)
Determine x_F[n] and its spectrum for the N-point DFTs mentioned in Task2.

Task4: Design a LPF to remove the second-harmonic carrier frequency component
cos(4π fc n) to produce x[n]. Plot x[n] and its spectrum. Compare the original signal and
the demodulated signal.
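The modulation/demodulation chain above can be sketched in Python/NumPy as follows; the message and carrier frequencies used here are illustrative stand-ins, and the low-pass filter cut-off is an assumed design choice:

```python
import numpy as np
from scipy.signal import firwin, lfilter

N = 128
n = np.arange(512)
f1, f2, fc = 5 / N, 10 / N, 30 / N        # illustrative digital frequencies
x = np.cos(2 * np.pi * f1 * n) + np.cos(2 * np.pi * f2 * n)

x_am = x * np.cos(2 * np.pi * fc * n)     # modulation
x_f = x_am * np.cos(2 * np.pi * fc * n)   # = x/2 + (x/2)cos(4*pi*fc*n)

# LPF removes the double-frequency carrier term, leaving x[n]/2
h = firwin(101, 0.4)                      # cutoff well below 2*fc
y = 2 * lfilter(h, [1.0], x_f)            # rescale by 2 to recover x[n]
```

After the filter's group delay of 50 samples, y tracks the original message x very closely, which is the comparison requested in Task4.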


SP Project-20

Objectives

(a) Load, display and manipulate speech signals.

(b) Compute and display the spectrum of speech signals.

(c) Determine and plot the Power Density Spectrum of Speech signals.

(d) Develop a simple spectral subtraction filtering technique for elimination of noise.

Telephones are increasingly being used in noisy environments such as cars, airports

etc. The aim of this project is to implement a system that will reduce the background

noise in a speech signal while leaving the signal itself intact: this process is called

speech enhancement. It is desired to implement the spectral subtraction technique for this
purpose.

Algorithm: Many different algorithms have been proposed for speech enhancement: the

one that we will use is known as spectral subtraction. This technique operates in the

frequency domain and makes the assumption that the spectrum of the input signal can

be expressed as the sum of the speech spectrum and the noise spectrum. The

procedure is illustrated in the diagram and contains two tricky parts:

a. Estimating the spectrum of the background noise

b. Subtracting the noise spectrum from the speech

Task1: Denoising a multi-tone signal.
Step1: Generate a multi-tone signal with frequency components 100 Hz, 500 Hz, 600 Hz,
800 Hz and 1000 Hz. Add AWGN at various noise levels (20 dB, 30 dB and 40 dB SNR).
Display the signal and its spectrum.


Step2: Compute the magnitude and phase response of the signal using FFT and plot

them. Find the Power Density Spectrum and plot.

Step3: Estimate the noise power.

Step4: Subtract the noise estimate (power) from the power density spectrum of the
signal. Determine the magnitude spectrum of the resultant signal.
Step5: Perform the inverse FFT operation. Find the signal-to-noise ratio (SNR) and
peak signal-to-noise ratio (PSNR).
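Steps 1-5 can be sketched in Python/NumPy as follows; for simplicity the noise power is assumed known (in Task2 it would be estimated from non-speech frames), and negative magnitudes after subtraction are floored at zero:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, dur = 8000, 1.0
t = np.arange(0, dur, 1 / fs)
tones = [100, 500, 600, 800, 1000]
clean = sum(np.sin(2 * np.pi * f * t) for f in tones)
noisy = clean + 0.5 * rng.standard_normal(len(t))      # AWGN, sigma = 0.5

X = np.fft.fft(noisy)
noise_power = (0.5 ** 2) * len(t)          # expected noise power per DFT bin
mag2 = np.maximum(np.abs(X) ** 2 - noise_power, 0.0)   # subtract, floor at 0
X_hat = np.sqrt(mag2) * np.exp(1j * np.angle(X))       # keep the noisy phase
denoised = np.fft.ifft(X_hat).real

def snr_db(ref, sig):
    """SNR of sig relative to the clean reference, in dB."""
    return 10 * np.log10(np.sum(ref ** 2) / np.sum((sig - ref) ** 2))
```

Keeping the noisy phase and subtracting only in the magnitude domain is the core assumption of spectral subtraction noted in the algorithm description above.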

Task2: Denoising a male voice speech signal.

Step1: Load and display a male voice speech signal and its spectrum.

Step2: Add AWGN at various noise levels (20 dB, 30 dB and 40 dB SNR).

Step3: Divide the given speech signal into 50 ms speech frames with a shift of 10 ms.

Step4: Compute the magnitude and phase response of the segmented speech signal

using FFT and plot them. Find the Power Density Spectrum and plot.

Step5: Estimate the noise power by computing the log energy and zero-crossing rate to
determine non-speech activity.

Step6: Subtract the noise estimate (power) from the power density spectrum of the
segmented speech signal. Determine the magnitude spectrum of the resultant signal.
Step7: Perform the inverse FFT, which yields the denoised speech signal. Find the
signal-to-noise ratio (SNR) and peak signal-to-noise ratio (PSNR).

Step8: Repeat the above steps for various speech segments.

Task3: Filter the noise using a Wiener filter.
Task4: Repeat Task2 and Task3 for a female voice speech signal.
Task5: Repeat Task2 and Task3 for a musical signal.


SP Project-21

Objectives

(a) Load, display and manipulate speech signals.
(b) Develop a method for pitch estimation by the autocorrelation of the speech signal.
(c) Develop a cepstrum pitch estimation method.
(d) Develop a simple inverse filtering technique (SIFT) pitch estimation method.
(e) Compare all three methods.

Basic Theory: Speech signals can be classified into voiced, unvoiced and silence
regions. The near-periodic vibration of the vocal folds is the excitation for the production
of voiced speech. A random noise-like excitation is present for unvoiced speech. There is
no excitation during the silence region. The majority of speech regions are voiced in
nature; these include vowels, semivowels and other voiced components. The voiced
regions look like a near-periodic signal in the time domain. In the short term, we may
treat voiced speech segments as periodic for all practical analysis and processing. The
periodicity associated with such segments is defined as the pitch period T0 in the time
domain, and the pitch frequency or fundamental frequency F0 in the frequency domain.
Unless specified otherwise, the term 'pitch' refers to the fundamental frequency F0.
Pitch is an important attribute of voiced speech: it contains speaker-specific information,
and it is also needed for speech coding. Thus the estimation of pitch is one of the
important issues in speech processing.

A large set of methods has been developed in the speech processing area for the
estimation of pitch. Among them, the three most used methods are autocorrelation of
speech, cepstrum pitch determination, and the simple inverse filtering technique (SIFT).
The success of these methods is due to the simple steps involved in the estimation of
pitch. Even though the autocorrelation method is mainly of theoretical interest, it
provides a framework for the SIFT method.


Task1: Pitch determination using autocorrelation:

Step1: Load a speech signal.

Step2: Divide the given speech signal into 50 ms speech frames with a shift of 10 ms.

Step3: Determine the autocorrelation sequence of each frame. The pitch period can be
computed by finding the time lag corresponding to the largest peak away from the
central (zero-lag) peak of the autocorrelation sequence.

Task2: Cepstrum Pitch Determination: The main limitation of pitch estimation by the
autocorrelation of speech is that there may be peaks larger than the peak
corresponding to the pitch period T0, caused by the resonances of the vocal tract. As a
result the highest peak may be picked wrongly, and hence the pitch may be estimated
incorrectly. The approach to minimize such errors is to separate the vocal-tract and
excitation-source information in the speech signal and then use the source information
for pitch estimation. The cepstral analysis of speech provides such an approach.

The cepstrum of speech is defined as the inverse Fourier transform of the log
magnitude spectrum. The cepstrum projects all the slowly varying components of the log
magnitude spectrum to the low-quefrency region and the fast varying components to the
high-quefrency region. In the log magnitude spectrum, the slowly varying components
represent the envelope corresponding to the vocal tract, and the fast varying components
correspond to the excitation source. As a result, the vocal-tract and excitation-source
components are naturally separated in the cepstrum of speech.

Block diagram of Cepstrum

Step1: Load a speech signal.
Step2: Divide the given speech signal into 50 ms speech frames with a shift of 10 ms.
Step3: Develop Matlab code to determine the log magnitude spectrum and cepstrum.
Plot the cepstrum. Observe the prominent peaks in the cepstrum of voiced segments.
What do you observe in the case of unvoiced segments?

The cepstral approach does not have large peaks as in the autocorrelation case that

may interfere with the estimation of pitch.


Task3: Pitch estimation by the SIFT method: The SIFT method is another widely
used pitch estimation method, based on the linear prediction (LP) analysis of
speech. SIFT in turn employs the autocorrelation method for the estimation of pitch.
However, the main distinction is that it performs autocorrelation on the LP residual
rather than on the speech directly. For an optimal LP order, most of the vocal-tract
information is modeled by the LP coefficients, and hence the LP residual mostly
contains the excitation-source information. The autocorrelation of the LP residual will
therefore have unambiguous peaks representing the pitch period T0.

Step1: Load a speech signal.

Step2: Divide the given speech signal into 50 ms speech frames with a shift of 10 ms.

Step3: Develop Matlab code to determine the autocorrelation of the LP residual.
Observe the unambiguous peaks representing the pitch period T0.
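A Python/NumPy sketch of the SIFT idea (the manual asks for Matlab): the LP coefficients are found with the autocorrelation method, the residual is formed, and its autocorrelation peak gives the pitch period. The synthetic 'voiced' test signal, LP order and pitch range are assumptions:

```python
import numpy as np
from scipy.signal import lfilter

def lpc_coeffs(x, order):
    """LP coefficients a[1..p] (autocorrelation method): x[n] is
    predicted as sum_k a[k]*x[n-k]."""
    r = np.correlate(x, x, mode='full')[len(x) - 1:len(x) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

def sift_f0(x, fs, order=10, fmin=50, fmax=500):
    """SIFT-style pitch: autocorrelate the LP residual instead of
    the speech itself, and pick the peak lag in the pitch range."""
    x = x - x.mean()
    a = lpc_coeffs(x, order)
    e = x.copy()
    for k in range(1, order + 1):
        e[k:] -= a[k - 1] * x[:-k]        # LP residual e[n]
    r = np.correlate(e, e, mode='full')[len(e) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    return fs / (lo + np.argmax(r[lo:hi]))

fs = 8000
exc = np.zeros(400); exc[::80] = 1.0              # impulse train: 100 Hz pitch
x = lfilter([1.0], [1.0, -1.7, 0.81], exc)        # simple vocal-tract resonator
f0 = sift_f0(x, fs)
```

Because inverse filtering removes the resonator, the residual autocorrelation has clean peaks at multiples of the pitch period, unlike the autocorrelation of the speech itself.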

Task4: Consider a male speech signal having a pitch period of about 10 ms. Compute
its pitch by the SIFT method using segment sizes of 30 ms, 15 ms and 5 ms. Compare
the resulting autocorrelation sequences and justify their nature. From this, explain what
the minimum length of the speech segment should be for the estimation of pitch, in
terms of the pitch period T0.

Task5: Consider a male speech signal having a pitch period of about 10 ms. Compute
its pitch by the cepstrum method using segment sizes of 30 ms, 15 ms and 5 ms.
Compare the resulting cepstra and justify their nature. From this, explain what the
minimum length of the speech segment should be for the estimation of pitch, in terms
of the pitch period T0.

Task6: Repeat the above tasks for female voice.


SP Project-22

Objectives

(a) Load, display and manipulate speech signals.
(b) Study and understand the time and frequency domain characteristics of voiced
speech.
(c) Classify the voiced/unvoiced/silence regions of speech signals in both the time
and frequency domains.
(d) Differentiate these features for both male and female speech signals.

Basic Theory: Speech is an acoustic signal produced from a speech production system.

From our understanding of signals and systems, the system characteristics depend on

the design of the system. For the case of a linear time-invariant system, this is completely
characterized in terms of its impulse response. However, the nature of the response depends

on the type of input excitation to the system. For instance, we have impulse response,

step response, sinusoidal response and so on for a given system. Each of these output

responses are used to understand the behavior of the system under different conditions.

A similar phenomenon happens in the production of speech also. Based on the input

excitation phenomenon, the speech production can be broadly categorized into three

activities. The first case where the input excitation is nearly periodic in nature, the

second case where the input excitation is random noise-like in nature and third case

where there is no excitation to the system. Accordingly, the speech signal can be

broadly categorized into three regions. The study of these regions is the aim of this

experiment.

1. Voiced Speech: If the input excitation to a system is a nearly periodic impulse
sequence, then the corresponding speech looks visually nearly periodic and is termed
voiced speech.

The periodicity associated with voiced speech can be measured by autocorrelation
analysis. This period is more commonly termed the pitch period.


2. Unvoiced Speech: If the excitation is random noise-like, then the resulting speech

will also be random noise-like without any periodic nature and is termed as Unvoiced

Speech. The typical nature of excitation and resulting unvoiced speech are shown in Fig

2 itself. As it can be seen, the unvoiced speech will not have any periodic nature. This

will be the main distinction between voiced and unvoiced speech. The aperiodicity of

unvoiced speech can also be observed by the autocorrelation analysis.

3. Silence Region: The speech production process involves generating voiced and
unvoiced speech in succession, separated by what is called the silence region. During

silence region, there is no excitation supplied to the vocal tract and hence no speech

output. However, silence is an integral part of the speech signal. Without the presence of
a silence region between voiced and unvoiced speech, the speech will not be intelligible.
Further, the duration of silence along with other voiced or unvoiced speech is also an

indicator of certain categories of sounds. Even though the silence region is unimportant
from an amplitude/energy point of view, its duration is essential for intelligible
speech.

4. Voiced/Unvoiced/Silence Classification of Speech

The above discussion gave a feel for the production of voiced/unvoiced speech and the
significance of the silence region. The next question is how to identify these regions of
speech: first by visual inspection, and next by an automatic approach. If the speech signal

waveform looks periodic in nature, then it may be marked as voiced speech, otherwise, it

may be marked as unvoiced/silence region based on the associated energy. If the signal

amplitude is low or negligible, then it can be marked as silence, otherwise as an unvoiced
region. Finally, there may be regions where the speech is a mixed version of voiced
and unvoiced speech. In mixed speech, the signal will look like unvoiced speech,
but you will also observe some periodic structure.

Task1: Time domain analysis.

Step1: Load a speech signal.


Step2: Divide the given speech signal into 50 ms blocks of speech frames.

Step3: Plot the segmented speech signal. Observe the time domain characteristics for
the three cases (voiced, unvoiced and silence).
Explain how these voiced, unvoiced and silence signals can be identified in time
domain plots:
A voiced segment shows periodicity in the time domain.
An unvoiced segment is random noise-like in the time domain.
A silence region has negligible energy in the time domain.

Explain how these voice, un-voice and silence signals are identified in terms of enery

and zero crossing?

Repeat the above steps for several speech segments and comment on these three features.
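The energy and zero-crossing questions above can be illustrated with a short sketch. This is written in Python/NumPy rather than the MATLAB the tasks call for, and the synthetic test signal and its segment layout are assumptions for illustration, not part of the lab:

```python
import numpy as np

def frame_features(x, fs, frame_ms=50):
    """Split x into non-overlapping frames and compute the short-term
    energy and zero-crossing rate (ZCR) of each frame."""
    n = int(fs * frame_ms / 1000)
    frames = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
    energy = np.array([np.sum(f ** 2) for f in frames])
    # ZCR: fraction of sample positions at which the sign flips.
    zcr = np.array([np.mean(np.abs(np.diff(np.sign(f)))) / 2 for f in frames])
    return energy, zcr

# Synthetic 1-second signal at 8 kHz: voiced-like, then unvoiced-like, then silence.
fs = 8000
t = np.arange(fs) / fs
voiced = np.sin(2 * np.pi * 120 * t[:4000])        # periodic, high energy
rng = np.random.default_rng(0)
unvoiced = 0.1 * rng.standard_normal(2000)         # noise-like, many crossings
silence = np.zeros(2000)
x = np.concatenate([voiced, unvoiced, silence])

energy, zcr = frame_features(x, fs)
# Voiced frames: high energy, low ZCR.  Unvoiced frames: low energy,
# high ZCR.  Silence frames: negligible energy.
```

In real speech the boundaries are of course not known in advance; the frame-wise (energy, ZCR) pair is exactly what the classification in this project is based on.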

Task2: Frequency domain analysis.

Step1: Load a speech signal.

Step2: Divide the given speech signal into 50 ms blocks of speech frames.

Step3: Plot the spectrum of each segment. Observe the frequency-domain characteristics for the three cases (voiced, unvoiced and silence). Explain how voiced, unvoiced and silence signals are identified in the spectrum of the speech signal:

- A voiced segment shows a harmonic structure in the frequency domain.
- An unvoiced segment shows a non-harmonic (noise-like) structure in the frequency domain.
- A silence region has negligible energy in the frequency domain.

Repeat the above steps for several speech segments and comment on these three features.
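The harmonic-versus-non-harmonic distinction can be seen by taking the magnitude spectrum of a single frame. A Python/NumPy sketch, using an assumed two-harmonic model of a voiced frame in place of real speech:

```python
import numpy as np

fs = 8000
n = 400                                    # one 50 ms frame at 8 kHz
t = np.arange(n) / fs

# Illustrative frames (assumed models, not real speech):
voiced = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 240 * t)
rng = np.random.default_rng(1)
unvoiced = rng.standard_normal(n)

def spectrum(frame, fs=8000):
    """Magnitude spectrum of one frame (positive frequencies only)."""
    X = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), 1 / fs)
    return freqs, X

freqs, Xv = spectrum(voiced)      # sharp lines at 120 Hz and its harmonic
freqs, Xu = spectrum(unvoiced)    # broad, flat spectrum, no harmonic lines
f0 = freqs[np.argmax(Xv)]         # the strongest peak sits at the fundamental
```

A silence frame would show negligible magnitude across all bins.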

Task4: Do the following

(a) Write a Matlab program that reads a speech file and plots the waveform,

spectrum and autocorrelation sequence of any three voiced segments present in

the given speech signal.

(b) Write a Matlab program that reads a speech file and plots the waveform,

spectrum and autocorrelation sequence of any three unvoiced segments present

in the given speech signal.

(c) Write a Matlab program that reads a speech file and plots the waveform, spectrum and autocorrelation sequence of any three silence regions present in the given speech signal.
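For the autocorrelation plots asked for above, the tell-tale sign of a voiced segment is a secondary peak at the pitch period. A Python/NumPy sketch with an assumed synthetic 100 Hz frame:

```python
import numpy as np

fs = 8000
t = np.arange(400) / fs                    # one 50 ms frame
frame = np.sin(2 * np.pi * 100 * t)        # voiced-like frame, 100 Hz pitch

def autocorr(x):
    """Autocorrelation for non-negative lags."""
    r = np.correlate(x, x, mode='full')
    return r[len(x) - 1:]

r = autocorr(frame)
# For a periodic (voiced) frame the autocorrelation peaks again at the
# pitch period: 100 Hz at fs = 8 kHz means a lag of 80 samples.
peak_lag = int(np.argmax(r[40:160])) + 40  # search away from the lag-0 peak
```

For an unvoiced (noise-like) frame the same plot decays quickly with no secondary peak, and for a silence region it is negligible throughout.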

Task5: Repeat the above steps for male, female and musical speech signals.


SP Project-23

DTMF Tone Generation and Detection Algorithms

Dual-tone multi-frequency signaling (DTMF) is used for telecommunication signaling

over analog telephone lines in the voice-frequency band between telephone handsets

and other communications devices and the switching center. The version of DTMF that

is used in push-button telephones for tone dialing is known as Touch-Tone. DTMF is

standardized by ITU-T Recommendation Q.23. The Touch-Tone system, using the

telephone keypad, gradually replaced the rotary dial starting in 1963, and since then DTMF (Touch-Tone) has become the industry standard for both cell-phone and landline service.

Objectives

1. Generate, display and manipulate analog signals.

2. Produce DTMF tones.

3. Perform DTMF detection using the Goertzel algorithm.

4. Compute and plot the spectrum of signals.

Task1: DTMF Tone Generation:

The DTMF tone generator uses two digital filters in parallel, each driven by the unit impulse sequence δ(n) as input. The filter pair for the DTMF tone of key 7 is depicted below.

The low-band filter resonates at the row frequency of key 7 (852 Hz):

H_L(z) = z·sin(ωL) / (z² − 2z·cos(ωL) + 1),  where ωL = 2π(852)/fs

and the high-band filter resonates at the column frequency of key 7 (1209 Hz):

H_H(z) = z·sin(ωH) / (z² − 2z·cos(ωH) + 1),  where ωH = 2π(1209)/fs

Both filters are excited by the impulse sequence δ(n), and their outputs are summed to produce the DTMF tone y7(n).

Fig 1. Digital DTMF tone generator for the digit 7.


The industry-standard frequency assignments for all the keys are listed in Fig 2: each key is identified by one row (low-group) frequency and one column (high-group) frequency.

            1209 Hz   1336 Hz   1477 Hz
  697 Hz       1         2         3
  770 Hz       4         5         6
  852 Hz       7         8         9
  941 Hz       *         0         #

Fig 2. DTMF keypad frequency assignments.

According to the DTMF tone specification, develop a MATLAB program that can generate the tone for each key.
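As a sketch of tone generation (Python/NumPy here, whereas the task asks for MATLAB; the 8 kHz sampling rate, 0.1 s duration and the partial key table are assumptions): summing the row and column sinusoids directly yields the same two-frequency tone that the impulse-excited resonators of Fig 1 produce.

```python
import numpy as np

# (low-group, high-group) frequency pairs for a few keys:
DTMF = {'1': (697, 1209), '5': (770, 1336), '7': (852, 1209), '#': (941, 1477)}

def dtmf_tone(key, duration=0.1, fs=8000):
    """Two-tone DTMF signal for one key: row sinusoid + column sinusoid."""
    f_low, f_high = DTMF[key]
    t = np.arange(int(duration * fs)) / fs
    return np.sin(2 * np.pi * f_low * t) + np.sin(2 * np.pi * f_high * t)

tone7 = dtmf_tone('7')   # spectrum shows peaks near 852 Hz and 1209 Hz
```

Extending the table to all sixteen keys of Fig 2 completes the generator.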

Task2: DTMF detection:

DTMF detection relies on the Goertzel algorithm (Goertzel filter). The main purpose of using Goertzel filters is to calculate the spectral value at a specified frequency index by a filtering method; its advantages include a reduced number of computations and the avoidance of complex arithmetic. Seven modified Goertzel filters are implemented in parallel, as shown in Fig 3. The output of each Goertzel filter is fed to a detector that computes its spectral value, given by

A_k = (2/205)·|X(k)|,  k = 18, 20, 22, 24, 31, 34, 38

for a frame of 205 samples. If the detected value A_k is larger than the threshold value, the logic operation outputs logic 1; otherwise it outputs logic 0. The logic operation at the last stage then decodes the key information from the resulting 7-bit binary pattern.


As shown in Fig 3, the input x(n) drives seven Goertzel filters H18(z), H20(z), H22(z), H24(z), H31(z), H34(z) and H38(z) in parallel, one per DTMF frequency (bin indices k = 18, 20, 22, 24, 31, 34, 38). Each filter output v_k(n) is passed to a detector that computes the spectral value A_k, and each A_k is compared with the threshold

(A18 + A20 + A22 + A24 + A31 + A34 + A38) / 4

to produce one bit of the 7-bit pattern (for example, 0 0 1 0 1 0 0 for key 7).

Fig 3 DTMF tone detector.

Develop a MATLAB program to perform the detection and display each detected key on the screen.
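A sketch of the detection stage in Python/NumPy (the 205-sample frame, the bin indices and the threshold rule follow the text; the 8 kHz sampling rate and the direct-form Goertzel recursion are standard assumptions):

```python
import numpy as np

def goertzel_mag(x, k):
    """Goertzel algorithm: |X(k)| of frame x at DFT bin k, computed with a
    single real coefficient instead of a full FFT."""
    N = len(x)
    coeff = 2 * np.cos(2 * np.pi * k / N)
    s1 = s2 = 0.0
    for sample in x:            # second-order recursion s0 = x + c*s1 - s2
        s0 = sample + coeff * s1 - s2
        s2, s1 = s1, s0
    return np.sqrt(s1 ** 2 + s2 ** 2 - coeff * s1 * s2)

# One 205-sample frame of key '7' (852 Hz + 1209 Hz) at fs = 8 kHz:
fs, N = 8000, 205
t = np.arange(N) / fs
frame = np.sin(2 * np.pi * 852 * t) + np.sin(2 * np.pi * 1209 * t)

bins = [18, 20, 22, 24, 31, 34, 38]        # DTMF frequencies 697..1477 Hz
A = {k: (2 / N) * goertzel_mag(frame, k) for k in bins}
threshold = sum(A.values()) / 4
pattern = [1 if A[k] > threshold else 0 for k in bins]
# Key '7' lights up bins 22 (852 Hz) and 31 (1209 Hz): [0, 0, 1, 0, 1, 0, 0]
```

A full detector repeats this frame by frame over the dialed sequence and maps each 7-bit pattern back to its key.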


SP Project-24

An engineering solution proposed for processing speech is to use existing signal processing tools in a modified fashion. More specifically, the tools can still assume the signal under processing to be stationary, and a speech signal may be considered stationary when it is viewed in blocks of 10-30 msec. Hence, to process speech with standard signal processing tools, it is viewed in blocks of 10-30 msec. Such processing is termed Short Term Processing (STP).

Short Term Processing of speech can be performed either in time domain or in

frequency domain. The particular domain of processing depends on the information from

the speech that we are interested in. For instance, parameters like short term energy,

short term zero crossing rate and short term autocorrelation can be computed from the

time domain processing of speech.

Objectives

(a) Load, display and manipulate speech signals.

(b) Understand the need for short term processing of speech.

(c) Compute the short term energy and study its significance.

(d) Compute the short term zero crossing rate and study its significance.

(e) Compute the short term autocorrelation and study its significance.

(f) Estimate the pitch of speech using the short term autocorrelation.

(g) Perform voiced/unvoiced/silence classification of speech using short term time domain parameters.

Task1: Load and display a speech signal and plot its spectrum. Divide the given speech signal into frames of 30-40 msec. Develop a MATLAB program to compute the short term energy of each frame, using a frame shift of 10 msec and a rectangular window.

Repeat the above step for frame sizes of 100, 200 & 500 msec, each with a frame shift of one sample. Compare the results with the 30, 50 & 100 msec cases and write down your observations for the rectangular window.

Task2: Modify the above program for the case of (a) a Hamming window, (b) a Hanning window. Illustrate your observations.
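The effect of the window shows up directly in the short-term energy contour. A Python/NumPy sketch (frame and shift sizes as in Task1; the sinusoidal test signal is an assumption):

```python
import numpy as np

def short_term_energy(x, fs, frame_ms=30, shift_ms=10, window='rect'):
    """Short-term energy of x with a selectable analysis window."""
    n = int(fs * frame_ms / 1000)
    hop = int(fs * shift_ms / 1000)
    if window == 'hamming':
        w = np.hamming(n)
    elif window == 'hanning':
        w = np.hanning(n)
    else:
        w = np.ones(n)                     # rectangular
    return np.array([np.sum((w * x[i:i + n]) ** 2)
                     for i in range(0, len(x) - n + 1, hop)])

fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 150 * t)            # assumed test signal

e_rect = short_term_energy(x, fs, window='rect')
e_hamm = short_term_energy(x, fs, window='hamming')
# The tapered Hamming window lowers each frame's energy and smooths the
# contour relative to the rectangular window.
```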


Task3: Modify the short term zero crossing rate (ZCR) program by not including the factor N (the frame length) in the relation. Compute the short term ZCR for window sizes of 30, 50 & 100 msec and compare with the earlier case given in the procedure section. Do you find any difference? Comment.

Task4: Develop the pitch estimation program in MATLAB using frame sizes of 10, 50 & 100 msec, each with a shift of 10 msec. Compare the nature of the plots in the three cases and comment.
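The pitch-estimation procedure of Task4 can be sketched as follows (Python/NumPy; the 60-400 Hz search range and the synthetic 200 Hz frame are assumptions):

```python
import numpy as np

def estimate_pitch(frame, fs, fmin=60, fmax=400):
    """Pitch from the short-term autocorrelation: find the strongest
    secondary peak within an assumed 60-400 Hz pitch range."""
    r = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = int(np.argmax(r[lo:hi])) + lo
    return fs / lag

fs = 8000
t = np.arange(int(0.05 * fs)) / fs         # one 50 ms frame
frame = np.sin(2 * np.pi * 200 * t)        # voiced-like frame, 200 Hz pitch
pitch = estimate_pitch(frame, fs)
```

On real speech the same estimate is made per frame to trace the pitch contour over time; unvoiced and silence frames give unreliable peaks and are usually gated out by an energy check first.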


SP Project-25

Signals and their Echoes

Objectives:

(a) Generate and display discrete-time sequences.

(b) Create echoes of audio signals.

(c) Perform auto-correlation and cross-correlation operations.

(d) Compute and display the spectrum of signals.

Background: In a certain hall, echoes of the original audio signal x[n] are generated due to reflections at the walls and ceiling. The audio signal experienced by the listener, y[n], is a combination of x[n] and its echoes. It is desired to estimate the echo delay using cross-correlation analysis.

Task1: Let the audio signal be represented by a discrete-time sequence x[n].

Task2: Let the listener signal be

y[n] = x[n] + α·x[n − k]

where k is the amount of delay in samples and α is its relative strength. Let the delay be k = 50 samples and the gain constant α = 0.1. Generate 200 samples of y[n].

Task3: Determine the cross-correlation ryx(l). Can you obtain α and k by observing ryx(l)?
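Task2 and Task3 can be sketched end to end in Python/NumPy. A single-impulse stand-in for x[n] is used here purely so that the correlation peaks can be verified by hand; a real audio signal (as in Task4) works the same way:

```python
import numpy as np

N, k, alpha = 200, 50, 0.1                 # values from the task
x = np.zeros(N)
x[0] = 1.0                                 # impulse stand-in for the audio x[n]

y = x.copy()
y[k:] += alpha * x[:N - k]                 # y[n] = x[n] + alpha * x[n - k]

# Cross-correlation r_yx(l) = sum_n y[n] x[n - l] for lags l = 0..N-1:
r_yx = np.array([np.dot(y[l:], x[:N - l]) for l in range(N)])

# The main peak at l = 0 comes from the direct signal; the echo adds a
# secondary peak at l = k whose relative height is alpha.
k_hat = int(np.argmax(r_yx[1:])) + 1       # skip the l = 0 peak
alpha_hat = r_yx[k_hat] / r_yx[0]          # -> 0.1
```

Reading off the lag of the secondary peak gives k, and its height relative to the main peak gives α.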

Task4: Load a real audio signal and repeat the above tasks.

Task5: Assume that the system output is corrupted by additive white Gaussian noise with a variance of 20. Design a filter to remove the noise, and perform Task3 on the denoised signal.

Reference: Vinay K. Ingle and John G. Proakis, Essentials of Digital Signal Processing

Using MATLAB, Third Edition 2012, Cengage Learning.
