
International Journal of Advanced Engineering Research and Technology (IJAERT), ISSN: 2348-8190

ICRTIET-2014 Conference Proceeding, 30th-31st August 2014

SUPER LISTENER
P.MANO
Electronics & Communication Engg.
Renganayagi Varatharaj College of Engineering
Sivakasi, Tamilnadu

ABSTRACT
This paper presents a synthesis-based technique to improve human listening through a device. The method used here is a sound-synthesis technique, applied to speech synthesized from parameters derived by speech processing of the phrase "Digital Signal Processing". At the output of the device, the several signals detected at the input are synthesized separately and specifically filtered from the sound. The device can therefore also be realised as a mobile phone with a dedicated application: the phone performs the required processing, takes audio input through a detection circuit, and delivers its output to the listener through an enhancement device.

M.DINESH
Electronics & Communication Engg.
Renganayagi Varatharaj College of Engineering
Sivakasi, Tamilnadu

Keywords: digital signal processing, enhancement, improvement of listening, mobile software, speech processing, voice signal, sound synthesis.

1. INTRODUCTION
Speech processing devices such as digital hearing aids, mobile phones and other man-machine interfaces are part of our daily life, and they must be made robust in noisy environmental conditions. In this technically developing world there is a great deal of noise pollution; sound-proof rooms may be available at home, but they cannot be used everywhere, whereas one can easily carry a mobile phone as a form of sound-proofing. Human speech can be modelled as a filter acting on an excitation waveform. There often occur conditions under which we measure and then transform the speech signal into another form in order to enhance our ability to communicate with each other; the speech bandwidth involved is about 4 kHz. The detected audio signal is an analog signal, and it is converted to digital form by an analog-to-digital converter. One of the most common noises is background noise, which is present at any location; other types include channel noise, which affects both digital and analog transmission. The speech signal is further corrupted by noises such as Gaussian noise and babble noise.

2. SIGNAL PROCESSING
Humans are the most advanced signal processors in speech and pattern recognition and in speech synthesis, so it is easy for us to separate the human voice from other sounds. Sounds in general are vibrations: they consist of compressions and rarefactions.

Most real-world signals are analog: they are continuous in time and amplitude, and sensors and transducers convert them to voltages or currents. Analog circuits process these signals using resistors, capacitors, inductors, amplifiers and so on; examples of analog signal processing are audio processing in FM radios and video processing in traditional TV sets. In this system, signal processing converts the audio wave into a sequence of feature vectors, and speech recognition decodes the sequence of feature vectors into a sequence of words.
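The front end described above, converting the audio wave into a sequence of feature vectors, can be sketched in a few lines. The paper's own simulations use MATLAB; the following is an illustrative NumPy version in which the frame length (256 samples), hop (128 samples) and log-magnitude-spectrum features are assumed choices, not values from the paper.

```python
import numpy as np

def feature_vectors(signal, frame_len=256, hop=128):
    """Split a 1-D audio signal into overlapping Hann-windowed frames
    and return the log-magnitude spectrum of each frame."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hanning(frame_len)
    feats = []
    for i in range(n_frames):
        frame = signal[i * hop : i * hop + frame_len] * window
        spectrum = np.abs(np.fft.rfft(frame))
        feats.append(np.log(spectrum + 1e-10))  # log compresses the dynamic range
    return np.array(feats)

# One second of a 1 kHz tone at 16 kHz stands in for real audio.
fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t)
F = feature_vectors(x)
print(F.shape)  # (124, 129): 124 frames, 129 frequency bins each
```

A recogniser would then decode such a sequence of vectors into words; that back end (typically an HMM or neural model) is outside the scope of this sketch.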

3. METHODS
1. Filtering the specific sound method
2. Amplification method

4. AUDIO DETECTION
A sound wave is a pressure wave, so a detector can be used to sense the oscillation in pressure from high pressure to low pressure and back to high pressure. As the compressions (high pressure) and rarefactions (low pressure) move through the medium, the compressions would reach the detector 500 times per second if the frequency of the wave were 500 Hz; similarly, the rarefactions would reach the detector 500 times per second. The frequency of a sound wave therefore represents not only the number of back-and-forth vibrations of the particles per unit time, but also the number of compressions and rarefactions passing per unit time.

Divya Jyoti College of Engineering & Technology, Modinagar, Ghaziabad (U.P.), India
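The relation between frequency and the number of compressions reaching the detector can be checked numerically. This NumPy sketch models the pressure wave as an ideal 500 Hz sinusoid (the 48 kHz sampling rate is an assumed value) and counts one compression per upward zero crossing.

```python
import numpy as np

fs = 48000                             # detector sampling rate (assumed)
f = 500                                # sound frequency in Hz
t = np.arange(fs) / fs                 # one second of samples
pressure = np.sin(2 * np.pi * f * t)   # idealised pressure wave

# One compression passes per cycle; count upward zero crossings.
crossings = np.sum((pressure[:-1] < 0) & (pressure[1:] >= 0))
print(int(crossings))  # close to 500: one compression per cycle
```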

5. AMPLIFICATION METHOD
The problem is addressed as follows: both speakers' signals are identified, amplified and compared by the device, and the result is shown to the user, who can then decide which speaker he does not want. For example, when we are in a situation where two people are talking to us at the same time, a device that gives enhanced listening lets us filter out one person's sound and still hear the remaining person's speech clearly. This method can also be used to raise a small noise to an amplified, audible level: when a user at a political meeting cannot hear the leader's speech clearly, he can hear it clearly if he uses the device.

6. FILTERING THE SPECIFIC SOUND
The filtering method is realised by the algorithms given below.
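Separating two real voices needs more than a fixed filter, but as a minimal sketch of filtering out one sound source, the following NumPy code removes one of two synthetic "speakers" by zeroing FFT bins outside an assumed band. The tone frequencies and band edges are illustrative stand-ins, not values from the paper.

```python
import numpy as np

def bandpass_fft(x, fs, low, high):
    """Zero all FFT bins outside [low, high] Hz (a crude brick-wall filter)."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(X, n=len(x))

fs = 8000
t = np.arange(fs) / fs
low_voice = np.sin(2 * np.pi * 120 * t)    # stand-in for the unwanted speaker
high_voice = np.sin(2 * np.pi * 400 * t)   # stand-in for the wanted speaker
mixture = low_voice + high_voice

kept = bandpass_fft(mixture, fs, 300, 3000)  # suppress the 120 Hz component
```

Here `kept` is essentially `high_voice` alone, because the two tones sit in disjoint bands; overlapping real voices would require an adaptive or source-separation approach.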

Algorithm I:
i) s(n): read the .wav file for a running time of 1 min or 2 min
ii) Take the input signal (the original signal) for that running time
iii) Apply the windowing / filter / LPC algorithms
iv) Synthesize the signal
v) Apply the codec to the file (codec, fs, D)
vi) Set LPC = 16
vii) Combine LPC + codec
viii) Take the input signal for a running time of 1 s / 1 min / 2 min
ix) Run the codec with the file or file name (ref. 16 kb)
x) Plot the codec output
xi) Add a grid
xii) Run
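The windowing/filter/LPC step at the heart of Algorithm I can be illustrated with a small NumPy implementation of autocorrelation-method LPC (Levinson-Durbin recursion). The paper uses LPC order 16 on speech; here an order-2 model is fitted to a synthetic AR(2) signal so that the recovered coefficients can be checked against known values.

```python
import numpy as np

def lpc(x, order):
    """Autocorrelation-method LPC via Levinson-Durbin.
    Returns the prediction-error filter A(z) = 1 + a[1]z^-1 + ... + a[order]z^-order
    as the array a (with a[0] = 1), plus the final prediction-error energy."""
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / e   # reflection coefficient
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]               # update earlier coefficients
        a[i] = k
        e *= (1.0 - k * k)                                # shrink the error energy
    return a, e

# Synthetic AR(2) test signal: x[n] = 0.5 x[n-1] - 0.3 x[n-2] + noise,
# so the true prediction-error filter is [1, -0.5, 0.3].
rng = np.random.default_rng(0)
n = 20000
noise = rng.standard_normal(n)
x = np.zeros(n)
for i in range(2, n):
    x[i] = 0.5 * x[i - 1] - 0.3 * x[i - 2] + noise[i]

a, e = lpc(x, order=2)
print(a)  # close to [1, -0.5, 0.3]
```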
Algorithm II:
Step 1: s(n): read the .wav file
Step 2: Take the input signal
Step 3: Keep the original file
Step 4: Apply the windowing / filter / LPC algorithm
Step 5: Synthesize
Step 6: Apply the codec to the file
Step 7: codec, fs, D; LPC = 16 + codec
Step 8: Debug, save and run
Step 9: Receive the output signal, voiced or unvoiced
Step 10: Expected response: the original plot and the synthesized plot
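Step 9 asks for a voiced/unvoiced decision on the output. A common rule of thumb (not stated in the paper, so the thresholds below are assumptions) is that voiced frames have high energy and a low zero-crossing rate, while unvoiced, noise-like frames have the opposite:

```python
import numpy as np

def is_voiced(frame, energy_thresh=0.01, zcr_thresh=0.25):
    """Voiced frames: high energy and low zero-crossing rate (assumed thresholds)."""
    energy = np.mean(frame ** 2)
    zcr = np.mean(np.abs(np.diff(np.signbit(frame).astype(int))))
    return bool(energy > energy_thresh and zcr < zcr_thresh)

fs = 16000
t = np.arange(512) / fs
voiced_like = np.sin(2 * np.pi * 150 * t)        # tonal, low frequency
rng = np.random.default_rng(1)
unvoiced_like = 0.3 * rng.standard_normal(512)   # noise-like

print(is_voiced(voiced_like), is_voiced(unvoiced_like))  # True False
```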
Algorithm III: Simulation Procedure
i) Enter the MATLAB environment by opening the MATLAB software
ii) After opening MATLAB, the first screen shows the existing data
iii) Select the project on the computer
iv) Import the project into the MATLAB software
v) Click on the .exe file; the practical values are then observed

Step 7: Unvoiced pitch cannot be calculated.
Step 8: The purpose of the pitch calculation is to decide the frequency values, i.e. high frequency or low frequency.
Algorithm VI: Simulation on Real Speech Data
Step 1: Wavelet-analyse a 32 ms (or 1 min long) synthesized voiced speech segment (position/DSP), sampled at 16 kHz.
Step 2: Plot the original and the synthesized signal with adjusted high-frequency components.
Step 3: a) Segment/overlap/buffer/reshape the signal into 32 ms frames. b) Decide whether each frame is i) voiced or ii) unvoiced. c) Estimate the pitch period. d) Apply pre-emphasis filtering. e) Calculate the LPC coefficients to 16th order.
Step 4: a) Wavelet-based analysis. b) Compute the variance of the residual signal. c) Compute the variance of the last four wavelet coefficients for every segment or frame (frame = segment = part of the speech signal). d) Form a code vector from the LPC values a0, ..., a16, treated as a vector with direction and magnitude.
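Step 4's wavelet statistics can be sketched with a single-level orthonormal Haar transform in NumPy; the variance of the last four detail coefficients is then the per-frame statistic mentioned above. The Haar wavelet is an assumption here, since the paper does not name the wavelet it uses.

```python
import numpy as np

def haar_step(frame):
    """One level of the orthonormal Haar wavelet transform:
    returns (approximation, detail) coefficients."""
    x = frame[: len(frame) // 2 * 2].reshape(-1, 2)   # pairs of samples
    approx = (x[:, 0] + x[:, 1]) / np.sqrt(2.0)
    detail = (x[:, 0] - x[:, 1]) / np.sqrt(2.0)
    return approx, detail

fs = 16000
frame = np.sin(2 * np.pi * 200 * np.arange(512) / fs)  # one 32 ms frame at 16 kHz
approx, detail = haar_step(frame)
var_last4 = np.var(detail[-4:])   # the per-frame statistic from Step 4
```

Because the transform is orthonormal, the energy of the frame equals the combined energy of the approximation and detail coefficients, which is a convenient correctness check.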


Simulation results from Algorithm III (practical values): voiced and unvoiced signals can be estimated. X = 0 indicates an unvoiced signal; the remaining values indicate voiced signals. Window = 512, N = 512, fs = sampling frequency (1 second = 16000 samples), window length = 16, LPC order = 16 or 32, residue = error signal, power = 0.026155, a = LPC coefficients, g = gain = 5.4830 x 10^-4.

Algorithm IV:
Step 1: s(n) = speech signal
Step 2: Windowing
Step 3: Pre-emphasis
Step 4: LPC, giving y(n)
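The pre-emphasis in Step 3 is conventionally the first-order filter y[n] = x[n] - alpha*x[n-1], which boosts high frequencies before LPC analysis. The coefficient alpha = 0.97 below is a typical textbook value, not one given in the paper.

```python
import numpy as np

def pre_emphasis(x, alpha=0.97):
    """First-order pre-emphasis: y[n] = x[n] - alpha * x[n-1]."""
    return np.append(x[0], x[1:] - alpha * x[:-1])

x = np.array([1.0, 1.0, 1.0, 1.0])
y = pre_emphasis(x)
print(y)  # a constant (low-frequency) input is attenuated to 0.03 after the first sample
```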
Algorithm:
Step 1: Windowing and pre-emphasis
Step 2: LPC coefficients
Step 3: Wavelet-based analysis
Step 4: Inverse filtering
Step 5: Voiced/unvoiced detection (if voiced: a vowel, A E I O U; the pitch, i.e. frequency, is calculated for the voiced, high-magnitude signal)
Step 6: The pitch is the frequency at which the signal peak (magnitude) is maximum. The signal graph of energy/magnitude versus frequency is identified with the help of the pitch values. Pitch calculation: magnitude or energy versus frequency gives the pitch.
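The pitch in Step 6 (the frequency at which the magnitude peaks) is often computed from the autocorrelation of a frame: the lag of the strongest peak is the pitch period. A minimal NumPy sketch, with an assumed 50-500 Hz search range for voice pitch:

```python
import numpy as np

def pitch_autocorr(frame, fs, fmin=50, fmax=500):
    """Estimate pitch as the lag of the autocorrelation peak,
    searched over plausible voice-pitch lags (assumed fmin/fmax)."""
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]  # lags 0..N-1
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag

fs = 16000
t = np.arange(1024) / fs
frame = np.sin(2 * np.pi * 200 * t)      # a 200 Hz "voiced" frame
print(round(pitch_autocorr(frame, fs)))  # 200
```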



(Figure: original speech signal; plot not reproduced.)

