
Modelling and Simulation of an Underwater


Acoustic Communication Channel




Submitted by: Kalangi Pullarao Prasanth



A Thesis approved on







by the following committee:


















Hochschule Bremen
University of Applied Sciences
Bremen, Germany.

Date
Kraus, Dieter, Prof. Dr.-Ing.
Wenke, Gerhard, Prof. Dr.-Ing.










ACKNOWLEDGEMENTS


I take this opportunity to thank all those magnanimous persons who stood behind me as
an inspiration and rendered their full support throughout my thesis. I am deeply indebted
to my thesis supervisor, Prof. Dr.-Ing. Dieter Kraus, for his timely and kind help, guidance
and valuable suggestions whenever I digressed from the aim of the project work, and also
for the essential materials required for the completion of this report. He stood as an
inspiration throughout my project work and explained even the minute details very
patiently at various stages of the project.

I would like to thank Prof. Dr.-Ing. Gerhard Wenke for his support and cooperation in
this project. Finally, I want to thank my parents and sisters for providing me with mental
and emotional support throughout my endeavour. I want to thank all my friends who have
distinguished themselves by giving me strength, encouragement, guidance and support
to persevere throughout this project despite many difficult obstacles.
















ABSTRACT


Underwater acoustic communication is a rapidly growing field of research and
engineering. Wave propagation in an underwater sound channel is mainly affected by
channel variations, multipath propagation and Doppler shift, which pose many hurdles
for achieving high data rates and transmission robustness. Furthermore, the usable
bandwidth of an underwater sound channel is typically only a few kHz at large
distances. In order to achieve high data rates it is natural to employ bandwidth-efficient
modulation.

Thus we present a reliable simulation environment for underwater acoustic
communication applications (reducing the need for sea trials) that models the sound
channel by incorporating multipath propagation, surface and bottom reflection
coefficients, attenuation, spreading and scattering losses, as well as the
transmitter/receiver device employing Quadrature Phase-Shift Keying (QPSK)
modulation techniques. To demonstrate the quality of the simulation tool, various
simulation results for exemplary scenarios are presented.





















TABLE OF CONTENTS


1 Introduction...............................................................................................................1
2 Fundamentals of Ocean acoustics.............................................................................4
2.1 Sound Velocity in the Ocean.............................................................................4
2.2 Dependence of c on T, S and z ..........................................................................5
2.3 Typical Vertical Profiles of Sound Velocity.....................................................6
2.3.1 Underwater Sound Channel (USC)...........................................................6
2.3.2 Surface Sound Channel .............................................................................8
2.3.3 Antiwaveguide Propagation......................................................................9
2.3.4 Sound Propagation in Shallow Water.......................................................9
2.4 Propagation Loss of Sound.............................................................................10
2.4.1 Spreading Loss........................................................................................10
2.4.2 Sound Attenuation in water.....................................................................11
2.4.3 Sound Attenuation in sediment...............................................................14
2.4.4 Reflection & Transmission Coefficients, R and T ..................................15
2.4.5 Surface and Bottom Scattering................................................................19
2.4.6 Ambient Noise........................................................................................20
3 Sound Propagation..................................................................................................22
3.1 The Wave Equation.........................................................................................22
3.2 Helmholtz Equation........................................................................................26
3.3 Sound Propagation in Homogenous Waveguide............................................27
3.3.1 Image or Mirror Method.........................................................................28
3.3.2 Grazing angles.........................................................................................30
3.3.3 Travel Times...........................................................................................32
3.3.4 Transmission loss for each ray................................................................33
4 Modulation..............................................................................................................35
4.1 Digital Modulation Techniques.......................................................................35
4.1.1 ASK.........................................................................................................35
4.1.2 FSK .........................................................................................................36
4.1.3 PSK .........................................................................................................37




4.2 Bit rate and Symbol rate..................................................................................38
4.3 Representation of Signals................................................................................39
4.3.1 Baseband and Bandpass Signals.............................................................39
4.3.2 Baseband vs. Bandpass...........................................................................39
4.4 Modulation QPSK........................................................................................41
4.5 Pulse shaping...................................................................................................46
5 System description..................................................................................................48
5.1 Simulation system...........................................................................................51
5.2 Transmitter......................................................................................................52
5.2.1 Training Sequence...................................................................................52
5.2.2 QPSK mapping.......................................................................................53
5.2.3 Pulse shaping...........................................................................................53
5.2.4 Carrier modulation..................................................................................53
5.3 Channel ...........................................................................................................53
5.4 Receiver ..........................................................................................................54
5.4.1 Bandpass Filtering...................................................................................55
5.4.2 Down conversion and Sampling.............................................................56
5.4.3 Matched Filtering....................................................................................56
5.4.4 Synchronization......................................................................................57
5.4.5 Sampling.................................................................................................58
5.4.6 Phase Estimation.....................................................................................58
5.4.7 Decision..................................................................................................59
6 Observations and Results........................................................................................60
7 Summary and Concluding Remarks........................................................................71
Appendix.........................................................................................................................73
References.....................................................................................................................107












LIST OF FIGURES


Fig. 1: Temperature vs. Depth.......................................................................................5
Fig. 2: Sound velocity vs. Depth...................................................................................5
Fig. 3: Sound velocity vs. Salinity.................................................................................6
Fig. 4: Underwater sound channel of the first kind (c_o < c_h).
           (a) profile c(z), (b) ray diagram..........................................7
Fig. 5: Underwater sound channel of the second kind (c_o > c_h).
           (a) profile c(z), (b) ray diagram..........................................7
Fig. 6: Surface sound channel. (a) profile c(z), (b) ray diagram ...................................8
Fig. 7: Formation of a geometrical shadow zone when the
velocity monotonically decreases with depth. ...................................................9
Fig. 8: Sound propagation in a shallow sea. (a) profile c(z), (b) ray diagram...............9
Fig. 9: Diagram indicating empirical formulae for different frequency domains.......13
Fig. 10: General diagram indicating the three regions of B(OH)3,
           MgSO4 and H2O...............................................................................13
Fig. 11: Attenuation plot for various salinities & for temperature a) 20C b) 30C.....14
Fig. 12: Reflection and Transmission at a fluid-fluid interface.....................................15
Fig. 13: Ambient Noise Level for different domains at v_w = 20 kn...............................21
Fig. 14: Hierarchy of underwater acoustic models........................................................22
Fig. 15: Schematic diagram indicating displacement of a particle
from x to x + dx in water column.....................................................................23
Fig. 16: Homogenous waveguide with source S and receiver R....................................27
Fig. 17: Reflections of a wave from the boundaries of a layer, and the
image sources...................................................................................................28
Fig. 18: Diagram illustrating dependence of R_1 on grazing angle,
           frequency and two wind speeds........................................................31
Fig. 19: Diagram illustrating dependence of R_2 on grazing angle
           and two bottom types........................................................................31
Fig. 20: Multipath propagation depicting delays in 2D-view........................................32
Fig. 21: Multipath propagation depicting delays in 3D-view........................................33
Fig. 22: Multipath propagation depicting transmission loss..........................................34




Fig. 23: Baseband information sequence - 0010110010...............................................36
Fig. 24: Binary ASK (OOK) carrier..............................................................................36
Fig. 25: Binary FSK carrier...........................................................................................36
Fig. 26: Binary PSK carrier (note the 180° phase shifts at bit edges) ............37
Fig. 27: The relation between a Bandpass signal and its Baseband equivalent signal ..40
Fig. 28: Converting a Bandpass signal into its Baseband equivalent signal. ................40
Fig. 29: QSPK state diagram.........................................................................................42
Fig. 30: General block diagram QPSK transmitter........................................................43
Fig. 31: Data sequence transmitted................................................................................44
Fig. 32: Modulated carrier signal for I channel .............................................................44
Fig. 33: Modulated carrier signal for Q channel............................................................45
Fig. 34: QPSK signal for the given data sequence........................................................45
Fig. 35: Root raised cosine pulse with a roll-off factor α = 0.5 ..................47
Fig. 36: Underwater Acoustic simulation system..........................................................48
Fig. 37: The baseband equivalent system......................................................................49
Fig. 38: Oversampling the system.................................................................................49
Fig. 39: Moving the anti-alias filter and the sampling device in front of
the matched filter..............................................................................................50
Fig. 40: The equivalent discrete time baseband system. ...............................................51
Fig. 41: The Simulation system considered...................................................................51
Fig. 42: The transmitter.................................................................................................52
Fig. 43: Mapping of bits into QPSK symbols...............................................................53
Fig. 44: Underwater Acoustic Channel Model ..............................................................54
Fig. 45: The receiver......................................................................................................55
Fig. 46: Output from the matched filter for successive signaling in absence of noise..57
Fig. 47: Example of cross-correlating the received sequence with the training
sequence in order to find the timing.................................................................58
Fig. 48: Simulation results showing relative travel times for various
receiver locations of a sinc-pulse without including any transmission
loss phenomenon..............................................................................................61
Fig. 49: Simulation results showing relative travel times for various
transmitter and receiver locations of a sinc-pulse including the
transmission loss phenomenon.........................................................................64




Fig. 50: Simulation results showing relative travel times for two different
vertical depths of transmitter and receiver of a sinc-pulse including
the transmission loss phenomenon...................................................................66
Fig. 51: BER plot direct-path for the above Environmental scenario 1. .......................67
Fig. 52: BER plots multi-path propagation for the above Environmental
scenario 2, case 1 a) linear scale b) log scale...................................................68
Fig. 53: BER plots multi-path propagation for the above Environmental
scenario 2, case 2 a) linear scale b) log scale...................................................69
Fig. 54: Received QPSK states for direct path..............................................................70
Fig. 55: Received QPSK states for multi path...............................................................70
Fig. 56: Schematic diagram for Simulation...................................................................74










1 Introduction
The need for underwater wireless communications exists in applications such as remote
control in off-shore oil industry, pollution monitoring in environmental systems,
collection of scientific data recorded at ocean-bottom stations, speech transmission
between divers, and mapping of the ocean floor for detection of objects, as well as for
the discovery of new resources. Wireless underwater communications can be
established by transmission of acoustic waves.

Underwater communications, which once were exclusively military, are extending into
commercial fields. The possibility to maintain signal transmission while eliminating the
physical connection of tethers enables the gathering of data from submerged instruments
without human intervention, and the unobstructed operation of unmanned or autonomous
underwater vehicles (UUVs¹, AUVs²).

Underwater communication in general is mainly affected by:

Channel Variations

Channel variations are variations in:
- Temperature
- Salinity of water
- pH of water
- Depth of water column or pressure and
- Surface/bottom roughness.

Multipath Propagation

The channel can be considered as a waveguide; due to the reflections at the surface
and bottom, multipath propagation of the signal results.

Attenuation
Acoustic energy is partly transformed into heat and lost due to sound scattering by
inhomogeneities.


¹ UUV: Unmanned underwater vehicle
² AUV: Autonomous underwater vehicle



Doppler Shift

- Due to the movement of the water surface, a ray reflected from the surface can be
seen as a ray transmitted from a moving transmitter, and thereby a Doppler shift
appears in the received signal.
- When the receiver and transmitter are moving with respect to each other, the emitted
signal is either compressed or expanded at the receiver; thereby a Doppler effect is
observed.

Channel variations and multipath propagation pose many hurdles for the achievement
of high data rates and robust communication links. Moreover, the increasing absorption
towards higher frequencies limits the usable bandwidth typically to only a few kHz at
large distances.

In this report, the channel has been modeled by considering multipath propagation as
well as surface and bottom reflection coefficients. In order to achieve high data rates it is
natural to employ bandwidth-efficient modulation. In our case the Quadrature Phase-Shift
Keying (QPSK, which is equivalent to 4-QAM) modulation technique has been used
for transmitter and receiver.

A random bit generator is employed as the bit source. The transmitter converts the bits
into QPSK symbols and the output of the transmitter is fed into the underwater acoustic
channel. The receiver block takes the output from the channel, estimates timing and
phase offset, and demodulates the received QPSK symbols into information bits.

The QPSK modulation technique is extensively used in several applications such as
CDMA (Code Division Multiple Access) cellular service, wireless local loop, Iridium (a
voice/data satellite system) and DVB-S (Digital Video Broadcasting - Satellite). In our
case the idea of the receiver design has been taken from these applications.

In this report, we have investigated in depth the channel variations and multipath
propagation. Thus we present a reliable simulation environment for underwater acoustic
communication applications (reducing the need for sea trials) that models the sound
channel by incorporating multipath propagation, surface and bottom reflection
coefficients, attenuation, spreading and scattering losses, as well as the
transmitter/receiver device employing Quadrature Phase-Shift Keying (QPSK)
modulation techniques.




To demonstrate the quality of the simulation tool, various simulation results for exemplary
scenarios are presented. In the following, chapters 2 and 3 describe the underwater
acoustic channel, its variations and effects, the multipath propagation phenomenon and
the channel design. Chapters 4 and 5 present a detailed description of the QPSK
modulation technique used in this thesis, the transmitter design, the receiver design and
the complete communication part of the system. A part of this thesis has already been
published, cf. [11].




2 Fundamentals of Ocean acoustics
The ocean is an extremely complicated acoustic medium. The most characteristic
feature of the oceanic medium is its inhomogeneous nature. There are two kinds of
inhomogeneities:
regular and
random
Both strongly influence the sound field in the ocean. The regular variation of the sound
velocity with depth leads to the formation of the underwater sound channel and, as a
consequence, to long-range sound propagation. The random inhomogeneities give rise
to scattering of sound waves and, therefore, to fluctuations in the sound field.
2.1 Sound Velocity in the Ocean
Variations of the sound velocity c in the ocean are relatively small. As a rule, c lies
between 1450 and 1540 m/s. But even small changes of c significantly affect the
propagation of sound in the ocean.

Numerous laboratory and field measurements have now shown that the sound speed
increases in a complicated way with increasing temperature, hydrostatic pressure (or
depth), and the amount of dissolved salts in water. A simplified formula for the speed in
m/s was given by Medwin in [3]:

$$c = 1449.2 + 4.6\,T - 0.055\,T^2 + 0.00029\,T^3 + (1.34 - 0.01\,T)(S - 35) + 0.016\,z \qquad (2.1)$$

Here the temperature T is expressed in [°C], the salinity S in parts per thousand [ppt],
the depth z in meters, and the sound velocity c in meters per second. Eq. (2.1) is valid for

0 ≤ T ≤ 35 °C,  0 ≤ S ≤ 45 ppt  and  0 ≤ z ≤ 1000 m.

Eq. (2.1) is sufficiently accurate for most cases. However, when propagation
distances have to be derived from time-of-flight measurements, more accurate sound
speed values may be required (i.e. to about 0.1 m/s). These are provided by accurate
velocimeters.
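To make Eq. (2.1) easy to experiment with, a minimal numerical sketch is given below (not part of the original thesis implementation; the function name and the example values are illustrative assumptions):

```python
# Minimal sketch of Eq. (2.1), Medwin's simplified sound-speed formula.
def sound_speed_medwin(T, S, z):
    """Sound speed in m/s for T in deg C, S in ppt and z in m
    (valid roughly for 0 <= T <= 35 C, 0 <= S <= 45 ppt, 0 <= z <= 1000 m)."""
    return (1449.2 + 4.6 * T - 0.055 * T**2 + 0.00029 * T**3
            + (1.34 - 0.01 * T) * (S - 35.0) + 0.016 * z)

# Example: T = 17 C, S = 35 ppt, z = 300 m
print(sound_speed_medwin(17.0, 35.0, 300.0))   # ~1517.7 m/s
```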



2.2 Dependence of c on T, S and z
Fig. 1 shows a typical temperature profile, with the sea surface at a higher temperature
than the sea bed. Temperature decreases with depth down to about z = 300 m and remains
approximately constant below that depth. This corresponds to a summer profile of a
typical sea.













Fig. 1: Temperature vs. Depth

Sound velocity varies with temperature, salinity and depth. The impact of temperature
and pressure upon the sound velocity c is shown in Fig. 2 and can be viewed in three
domains. In the first domain, temperature is the dominating factor for the velocity of
sound. In the second or transition domain, both temperature and depth dominate the
velocity of sound. In the third domain, the sound velocity depends purely on depth. These
three domains can be seen in Fig. 2: the first domain extends down to depths of 200 m,
the transition domain from 200-400 m, and the third domain lies below 400 m.













Fig. 2: Sound velocity vs. Depth



The dependence of c on the salinity S is shown in Fig. 3. With increasing S the sound
velocity c also increases, while the shape of the profile remains unaffected.













Fig. 3: Sound velocity vs. Salinity
2.3 Typical Vertical Profiles of Sound Velocity
The shape of the sound velocity profile

$$c(z) = c\big(T(z),\, S(z),\, z\big) \qquad (2.2)$$

is the most important for the propagation of sound in the ocean.

The c(z) profiles:
- are different in the various regions of the ocean and
- vary with time (seasons).

At depths below 1 km variations of T and S are usually weak and the increase of sound
velocity is almost exclusively due to the increasing hydrostatic pressure. As a
consequence sound velocity increases almost linearly with depth.
2.3.1 Underwater Sound Channel (USC)
In the deep water regions typical profiles possess:
- a velocity minimum at a certain depth z_m (Fig. 4a)
- z_m defines the axis of the underwater sound channel
- above z_m the sound velocity increases mainly due to temperature and
- below it the sound velocity increases due to hydrostatic pressure




If a sound source is on the axis of the USC or near it, some part of the sound energy is
trapped in the USC and propagates within it, not reaching the bottom or surface and,
therefore, not undergoing scattering or absorption at these boundaries, cf. [6].

Underwater sound channel of the first kind, c_o < c_h

c_o - velocity at the surface, c_h - velocity at depth h

Fig. 4: Underwater sound channel of the first kind (c_o < c_h). (a) profile c(z), (b) ray diagram

Waveguide propagation can be observed in the depth interval 0 < z < z_c. The depths
z = 0 and z = z_c are the boundaries of the USC. The channel traps all sound rays that
leave a source located on the USC axis at grazing angles

$$\theta < \theta_{\max} \quad \text{with} \quad \theta_{\max} = \left( \frac{2\,(c_o - c_m)}{c_m} \right)^{1/2}, \qquad (2.3)$$

where c_m and c_o are the sound velocities at the axis and at the boundaries of the channel,
respectively. Hence, the greater the difference c_o - c_m, the larger the interval of angles
in which rays are trapped, i.e. the more effective the waveguide, cf. [6].

Underwater sound channel of the second kind, c_o > c_h

c_o - velocity at the surface, c_h - velocity at depth h

Fig. 5: Underwater sound channel of the second kind (c_o > c_h). (a) profile c(z), (b) ray diagram

Here, the USC extends from the bottom up to the depth z_c where the sound velocity
equals c_h. Two limiting rays are shown in Fig. 5b for this case. Trapped rays do not
extend above the depth z_c. Only the rays reflected from the bottom reach this zone.

The depth of the USC axis in deep ocean is usually 1000-1200 m. In the tropical areas it
can range down to 2000 m. The sound velocity ranges from, cf. [6]:
- 1450 m/s to 1485 m/s in the Pacific Ocean.
- 1450 m/s to 1500 m/s in the Atlantic Ocean.
2.3.2 Surface Sound Channel
This channel is formed when the axis of the channel is at the surface. A typical profile for
this case is shown in Fig. 6a. The sound velocity increases down to the depth z = h and
then begins to decrease. Rays leaving the source at grazing angles θ < θ_b propagate
with multiple reflections in the surface sound channel, cf. [6].

Fig. 6: Surface sound channel. (a) profile c(z), (b) ray diagram

In the case of a rough ocean surface, the sound energy is partly scattered into angles
θ > θ_b at each interaction with the surface, i.e.
- rays leave the sound channel
- the sound level decays in the surface sound channel and increases below the
surface sound channel
Surface sound channels frequently occur
- in tropical and moderate zones of the ocean, where T and S are constant
due to mixing in the upper ocean layer. c increases due to hydrostatic
pressure gradient.
- if the temperature at the surface decays due to seasonal changes, i.e. from
summer to autumn to winter



- in Arctic and Antarctic regions, where a monotonically increasing sound
velocity profile from the surface to the bottom can be observed.
2.3.3 Antiwaveguide Propagation
Antiwaveguide propagation is observed when the sound velocity monotonically
decreases with depth (Fig. 7a). Such sound velocity profiles are often a result of
intensive heating by solar radiation of the upper ocean layer.
Fig. 7: Formation of a geometrical shadow zone when the velocity
monotonically decreases with depth.

All rays refract downwards. The ray tangent to the surface is the limiting one. The
shaded area represents the geometrical shadow zone (Fig. 7b). The geometrical shadow
zone is not a region of zero sound intensity, cf. [6].
2.3.4 Sound Propagation in Shallow Water
This type of propagation corresponds to the case where each ray from the source, when
continued long enough is reflected at the bottom. A typical profile is shown in Fig. 8a. It
is observed in shallow seas and the ocean shelf, especially during summer-autumn
period when the upper water layers get well heated, cf. [6].

Fig. 8: Sound propagation in a shallow sea. (a) profile c(z), (b) ray diagram




2.4 Propagation Loss of Sound
2.4.1 Spreading Loss
Spreading loss is a measure of signal weakening due to the geometrical spreading of a
wave propagating outward from the source.
Two geometries are of importance in underwater acoustics:

1. Spherical Spreading

In a homogeneous and infinitely extended medium, the power generated by a point
source is radiated in all directions over the surface of a sphere. This is called spherical
spreading. Since intensity equals power per area, we obtain at the ranges r_o and r, cf. [2],

$$I_o = \frac{P_a}{4\pi r_o^2}, \qquad I = \frac{P_a}{4\pi r^2} \qquad (2.4)$$

with

r_o - reference distance (= 1 m),
P_a - acoustic power of the source,
I_o - acoustic intensity of the source at distance r_o,
I - acoustic intensity of the source at distance r.

The loss due to spherical spreading is

$$g_{\mathrm{sphere}}(r) = \frac{I_o}{I} = \left( \frac{r}{r_o} \right)^{2}. \qquad (2.5)$$

In the case of spherical spreading the intensity thus decreases with 1/r². In logarithmic
notation, the spherical spreading loss is

$$G_{\mathrm{sphere}}(r) = 10\log\frac{I_o}{I} = 20\log\frac{r}{r_o}. \qquad (2.6)$$


2. Cylindrical Spreading

Cylindrical spreading exists when the medium is confined by two reflecting planes. The
distance h between the planes is supposed to satisfy h > 10λ, where λ denotes the
wavelength of the sound wave. Since intensity equals power per area, we obtain at the
ranges r_o and r (with r > h), cf. [2],



$$I_o = \frac{P_a}{2\pi h\, r_o}, \qquad I = \frac{P_a}{2\pi h\, r}. \qquad (2.7)$$

The loss due to cylindrical spreading is

$$g_{\mathrm{cylinder}}(r) = \frac{I_o}{I} = \frac{r}{r_o}. \qquad (2.8)$$

The intensity thus decreases inversely with the distance r. In logarithmic notation, the
cylindrical spreading loss is

$$G_{\mathrm{cylinder}}(r) = 10\log\frac{I_o}{I} = 10\log\frac{r}{r_o}.$$

Taking n as the exponent, we can express the spreading loss for geometric spreading in
logarithmic notation as

$$G(r) = 10\log\frac{I_o}{I} = 10\log\left( \frac{r}{r_o} \right)^{n} = 10\,n\,\log\frac{r}{r_o}, \qquad (2.9)$$

where the exponent n = 1 for cylindrical spreading and n = 2 for spherical spreading.
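As an illustration of Eq. (2.9), the following sketch (an assumed helper, not taken from the thesis code) evaluates the spreading loss in dB for both geometries:

```python
import math

def spreading_loss_db(r, r0=1.0, n=2):
    """G(r) = 10*n*log10(r/r0) in dB; n = 2 spherical, n = 1 cylindrical spreading."""
    return 10.0 * n * math.log10(r / r0)

print(spreading_loss_db(1000.0, n=2))   # spherical: 60 dB at 1 km
print(spreading_loss_db(1000.0, n=1))   # cylindrical: 30 dB at 1 km
```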
2.4.2 Sound Attenuation in water
The acoustic energy of a sound wave propagating in the ocean is partly:
- absorbed, i.e. the energy is transformed into heat.
- lost due to sound scattering by inhomogeneities.

Remark: It is not possible to distinguish between absorption and scattering effects in
real ocean experiments. Both phenomena contribute to the sound attenuation in sea
water.
On the basis of extensive laboratory and field experiments the following empirical
formulae for attenuation coefficient in sea water have been derived.
a) Thorp formula, valid frequency domain see Fig. 9a (a small numerical sketch of this
formula is given after Eq. (2.16)):

$$\alpha = \frac{0.11\,f^2}{1 + f^2} + \frac{44\,f^2}{4100 + f^2} \quad [\mathrm{dB/km}] \qquad (2.10)$$

where f is the frequency in [kHz].



b) Schulkin and Marsh, valid frequency domain see Fig. 9b:

$$\alpha = 8.686\cdot 10^{3} \left[ \frac{S\,A\,f_T\,f^2}{f_T^2 + f^2} + \frac{B\,f^2}{f_T} \right] \left( 1 - 6.54\cdot 10^{-4}\,P \right) \quad [\mathrm{dB/km}] \qquad (2.11)$$

where

A = 2.34 · 10⁻⁶,
B = 3.38 · 10⁻⁶,
S is the salinity in [ppt],
P is the hydrostatic pressure in [kg/cm²],
f is the frequency in [kHz], and

$$f_T = 21.9 \cdot 10^{\,6 - \frac{1520}{T + 273}} \quad [\mathrm{kHz}]$$

is the relaxation frequency with T the temperature in [°C]. While the temperature
ranges from 0 to 30 °C, f_T varies approximately from 59 to 210 kHz.

c) Francois and Garrison, valid frequency domain see Fig. 9c:

$$\alpha = \underbrace{\frac{A_1 P_1 f_1 f^2}{f_1^2 + f^2}}_{\mathrm{B(OH)_3,\ boric\ acid}} + \underbrace{\frac{A_2 P_2 f_2 f^2}{f_2^2 + f^2}}_{\mathrm{MgSO_4,\ magnesium\ sulphate}} + \underbrace{A_3 P_3 f^2}_{\mathrm{H_2O,\ pure\ water}} \quad [\mathrm{dB/km}] \qquad (2.12)$$

The first term in Eq. (2.12) corresponds to boric acid B(OH)₃:

$$A_1 = \frac{8.686}{c}\,10^{\,0.78\,\mathrm{pH} - 5}, \qquad P_1 = 1, \qquad f_1 = 2.8\sqrt{\frac{S}{35}}\;10^{\,4 - \frac{1245}{273 + T}} \qquad (2.13)$$

Magnesium sulphate MgSO₄:

$$A_2 = 21.44\,\frac{S}{c}\left( 1 + 0.025\,T \right), \qquad P_2 = 1 - 1.37\cdot 10^{-4}\,z_{\max} + 6.2\cdot 10^{-9}\,z_{\max}^2, \qquad f_2 = \frac{8.17 \cdot 10^{\,8 - \frac{1990}{273 + T}}}{1 + 0.0018\,(S - 35)} \qquad (2.14)$$

The sound speed is approximately given by

$$c = 1412 + 3.21\,T + 1.19\,S + 0.0167\,z_{\max}. \qquad (2.15)$$

Pure water H₂O:

$$A_3 = \begin{cases} 4.937\cdot 10^{-4} - 2.59\cdot 10^{-5}\,T + 9.11\cdot 10^{-7}\,T^2 - 1.50\cdot 10^{-8}\,T^3 & \text{for } T \le 20\,^\circ\mathrm{C} \\ 3.964\cdot 10^{-4} - 1.146\cdot 10^{-5}\,T + 1.45\cdot 10^{-7}\,T^2 - 6.5\cdot 10^{-10}\,T^3 & \text{for } T > 20\,^\circ\mathrm{C} \end{cases}$$

$$P_3 = 1 - 3.83\cdot 10^{-5}\,z_{\max} + 4.9\cdot 10^{-10}\,z_{\max}^2 \qquad (2.16)$$

with f in [kHz], T in [°C] and S in [ppt], and where z_max, pH and c denote the depth in
[m], the pH-value and the sound speed in [m/s], respectively.
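As announced above, a minimal sketch of the Thorp formula (2.10) is given here (an illustrative helper, not the thesis code; the Schulkin-Marsh and Francois-Garrison formulas (2.11) and (2.12) can be coded analogously):

```python
def alpha_thorp_db_per_km(f_khz):
    """Attenuation coefficient in sea water after Thorp, Eq. (2.10); f in kHz."""
    f2 = f_khz ** 2
    return 0.11 * f2 / (1.0 + f2) + 44.0 * f2 / (4100.0 + f2)

for f in (1.0, 10.0, 25.0, 100.0):        # kHz
    print(f, alpha_thorp_db_per_km(f))    # dB/km, increasing with frequency
```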













Fig. 9: Diagram indicating empirical formulae for different frequency domains

A general diagram showing the variation of α over the three regions of boric acid B(OH)3,
magnesium sulphate MgSO4 and pure water H2O is depicted in Fig. 10.

Fig. 10: General diagram indicating the three regions of B(OH)3, MgSO4 and H2O



From Fig. 10 it can be observed that in the boric acid region the attenuation is
proportional to f², and in the magnesium sulphate and pure water regions the attenuation
is also proportional to f²; in the transition domains it is proportional to f. Attenuation
increases with increasing salinity and temperature (Fig. 11), and it increases with
increasing frequency.

Fig. 11: Attenuation plot for various salinities (15-35 ppt, pH = 8) and for temperatures a) 20 °C, b) 30 °C

2.4.3 Sound Attenuation in sediment
The sound attenuation in sediment mainly varies with the bottom type. Bottom type, in
short represented by bt , defines the sediment material of the ocean. The following table
provides the values of bt for each sediment type.

Sediment type       value of bt
very coarse sand    0
coarse sand         1
medium sand         2
fine sand           3
very fine sand      4
very coarse silt    5
coarse silt         6
medium silt         7
fine silt           8
very fine silt      9
clay                10

Table 1

The following empirical formula is provided to find the sound attenuation in the
sediment depending on bt:

$$\alpha_S = \frac{K}{8.686}\left( \frac{f}{1\,\mathrm{kHz}} \right)^{n} \quad [\mathrm{m}^{-1}] \qquad (2.17)$$

where α_S is the attenuation of the sediment and the constants K and n depend on the
bottom type. The following table provides the values of K and n for four sediment types.

Sediment type     K      n
very fine silt    0.17   0.96
fine sand         0.45   1.02
medium sand       0.48   0.98
coarse sand       0.53   0.96

Table 2
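A small sketch of Eq. (2.17) and Table 2 follows (illustrative only; the dictionary layout and the dB-to-Np conversion via 8.686 reflect the reconstruction above and should be treated as an assumption):

```python
SEDIMENT_KN = {               # sediment type: (K, n) from Table 2
    "very fine silt": (0.17, 0.96),
    "fine sand":      (0.45, 1.02),
    "medium sand":    (0.48, 0.98),
    "coarse sand":    (0.53, 0.96),
}

def sediment_attenuation(f_khz, sediment="coarse sand"):
    """Return (alpha in dB/m, alpha in Np/m) at frequency f_khz, cf. Eq. (2.17)."""
    K, n = SEDIMENT_KN[sediment]
    alpha_db_per_m = K * f_khz ** n        # K interpreted as dB/m at 1 kHz
    return alpha_db_per_m, alpha_db_per_m / 8.686

print(sediment_attenuation(25.0, "fine sand"))
```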
2.4.4 Reflection & Transmission Coefficients, R and T
Reflectivity is the ratio of the amplitudes of a reflected plane wave to a plane wave
incident on an interface separating two media. It is an important measure of the impact
of the bottom on sound propagation. The reflection coefficient for a simple case is
derived here.

















Fig. 12: Reflection and Transmission at a fluid-fluid interface.



Fig. 12 shows reflection at an interface separating two homogeneous fluid media with
density ρ_i and sound speed c_i, i = 1, 2. The angles of incidence and transmission in the
xz-plane, measured from the normal to the interface (the z-axis, taken positive into
medium 2), are denoted by θ_i. Assuming the incident wave to have unit amplitude and
denoting the amplitudes of the reflected and transmitted waves by R and T, respectively,
we can write the acoustic pressures as, Ref. [1],

$$p_i = \exp\!\left[ i k_1 \left( x\sin\theta_1 + z\cos\theta_1 \right) \right], \qquad k_1 = \frac{\omega}{c_1}, \qquad (2.18)$$

$$p_r = R\,\exp\!\left[ i k_1 \left( x\sin\theta_1 - z\cos\theta_1 \right) \right], \qquad (2.19)$$

$$p_t = T\,\exp\!\left[ i k_2 \left( x\sin\theta_2 + z\cos\theta_2 \right) \right], \qquad k_2 = \frac{\omega}{c_2}, \qquad (2.20)$$
where the common time factor exp(−iωt) is omitted.

The unknown quantities R, T and θ_2 are determined from the boundary conditions
requiring continuity of pressure and vertical particle velocity across the interface at
z = 0. With the total pressure in medium 1 given by p_1 = p_i + p_r and the pressure in
medium 2 by p_2 = p_t, the boundary conditions can be mathematically stated as

$$p_1 = p_2, \qquad \frac{1}{\rho_1}\frac{\partial p_1}{\partial z} = \frac{1}{\rho_2}\frac{\partial p_2}{\partial z} \qquad \text{at } z = 0. \qquad (2.21)$$
It is easily seen that the requirement of continuity of pressure at z = 0 leads to

$$1 + R = T\,\exp\!\left[ i \left( k_2\sin\theta_2 - k_1\sin\theta_1 \right) x \right] \qquad (2.22)$$

and the continuity of the vertical particle velocity to

$$\left( 1 - R \right) \frac{k_1}{\rho_1}\cos\theta_1 = T\,\frac{k_2}{\rho_2}\cos\theta_2. \qquad (2.23)$$

Since the left side of Eq. (2.22) is independent of x, this yields Snell's law of refraction,

$$k_1\sin\theta_1 = k_2\sin\theta_2, \qquad (2.24)$$

$$\frac{\sin\theta_1}{\sin\theta_2} = \frac{k_2}{k_1} = \frac{c_1}{c_2} = n. \qquad (2.25)$$

This law simply states the invariability of the horizontal component of the wave vector
across the interface. From Eqs. (2.22) and (2.24), we can now write

$$1 + R = T. \qquad (2.26)$$

With m = ρ_2/ρ_1, and together with the equation obtained from the second boundary
condition, Eq. (2.23), and Snell's law (2.25), it follows that

$$m\left( 1 - R \right)\cos\theta_1 = n\,T\cos\theta_2. \qquad (2.27)$$

Rearranging Eq. (2.27) by substituting Eq. (2.26) leads to the following expressions for
the reflection coefficient R and the transmission coefficient T:

$$R = \frac{m\cos\theta_1 - \sqrt{n^2 - \sin^2\theta_1}}{m\cos\theta_1 + \sqrt{n^2 - \sin^2\theta_1}} = \frac{m\cos\theta_1 - n\cos\theta_2}{m\cos\theta_1 + n\cos\theta_2} \qquad (2.28)$$

$$T = \frac{2\,m\cos\theta_1}{m\cos\theta_1 + \sqrt{n^2 - \sin^2\theta_1}} = \frac{2\,m\cos\theta_1}{m\cos\theta_1 + n\cos\theta_2}. \qquad (2.29)$$
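A minimal numerical sketch of Eqs. (2.28)-(2.29) follows (not from the thesis; the angle convention matches the derivation above, with theta1 the angle of incidence measured from the normal, and the material values are illustrative assumptions):

```python
import math, cmath

def reflection_transmission(theta1, rho1, c1, rho2, c2):
    """Rayleigh fluid-fluid coefficients R, T of Eqs. (2.28)-(2.29);
    m = rho2/rho1, n = c1/c2, theta1 measured from the interface normal."""
    m = rho2 / rho1
    n = c1 / c2
    cos1 = math.cos(theta1)
    root = cmath.sqrt(n**2 - math.sin(theta1)**2)   # = n*cos(theta2), complex beyond the critical angle
    R = (m * cos1 - root) / (m * cos1 + root)
    T = 2.0 * m * cos1 / (m * cos1 + root)
    return R, T

# Water over a sand-like bottom (illustrative values), 30 deg incidence:
R, T = reflection_transmission(math.radians(30.0), 1000.0, 1480.0, 2000.0, 1800.0)
print(abs(R), abs(T))
```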

Computation of the reflection and transmission coefficients R and T with the absorption
behaviour of the media considered

Above, the reflection and transmission characteristics were deduced without
consideration of the absorption behaviour of the media. In contrast to sound propagation
at the water-air boundary, experimental investigations at the water-sediment boundary
showed that the theory agrees sufficiently well with the measurement results only if the
absorption in the sediment is taken into account. For this reason the above results are now
extended to the case of a boundary between an absorption-free medium 1 (water) and an
absorbing medium 2 (sediment).

With the introduction of absorption we have a complex wave number

$$k_2 = k_{2,R} + i\,k_{2,I} \qquad (2.30)$$

with k_{2,R} = ω/c_2 and k_{2,I} = α_2, where c_2 and α_2 denote the sound velocity and the
attenuation of medium 2 (sediment), respectively. The attenuation of medium 1 (water)
is negligible.

Eq. (2.25) can then be written as

$$n = \frac{k_2}{k_1} = \frac{k_{2,R}}{k_1} + i\,\frac{k_{2,I}}{k_1} = n_R + i\,n_I = \frac{\sin\theta_1}{\sin\theta_2}. \qquad (2.31)$$

The wave number k_1 and the angle θ_1 are always real and therefore their product
k_1 sin θ_1 is also real. The complex wave number k_2 and the product k_2 sin θ_2 are
complex. From the above we can write

$$\sin\theta_2 = \frac{k_1\sin\theta_1}{k_2}, \qquad (2.32)$$

which implies

$$k_2\cos\theta_2 = k_2\sqrt{1 - \sin^2\theta_2} = \sqrt{k_2^2 - k_1^2\sin^2\theta_1} = k_1\sqrt{n^2 - \sin^2\theta_1} = \kappa_R + i\,\kappa_I, \qquad (2.33)$$

with κ_R and κ_I denoting the real and imaginary parts of the vertical component of the
transmitted wave vector, while the horizontal component

$$k_x = k_1\sin\theta_1 \qquad (2.34)$$

remains real.

Rearranging the above, the transmitted pressure can be written as

$$p_t = T\,\exp\!\left\{ -\mathbf{k}_A^T \mathbf{r} \right\}\,\exp\!\left\{ i\left( \mathbf{k}_P^T \mathbf{r} - \omega t \right) \right\}, \qquad \mathbf{r} = (x, z)^T, \qquad (2.35)$$

where

$$\mathbf{k}_P = \left( k_1\sin\theta_1,\; \kappa_R \right)^T = \kappa_P \left( \sin\gamma_P,\; \cos\gamma_P \right)^T, \qquad (2.36)$$

$$\mathbf{k}_A = \left( 0,\; \kappa_I \right)^T = \kappa_A \left( \sin\gamma_A,\; \cos\gamma_A \right)^T, \qquad (2.37)$$

$$\kappa_P = \sqrt{ k_1^2\sin^2\theta_1 + \kappa_R^2 } = \sqrt{ k_1^2\sin^2\theta_1 + \left( \operatorname{Re}\!\left\{ k_1\sqrt{n^2 - \sin^2\theta_1} \right\} \right)^2 } \qquad (2.38)$$

and

$$\kappa_A = \kappa_I = \operatorname{Im}\!\left\{ k_1\sqrt{n^2 - \sin^2\theta_1} \right\}. \qquad (2.39)$$

The angles of refraction for the phase and amplitude fronts are

$$\gamma_P = \arg\!\left( \mathbf{k}_P \right) = \arctan\!\left( \frac{k_1\sin\theta_1}{\kappa_R} \right) = \arctan\!\left( \frac{\sin\theta_1}{\operatorname{Re}\!\left\{ \sqrt{n^2 - \sin^2\theta_1} \right\}} \right) \qquad (2.40)$$

and

$$\gamma_A = \arg\!\left( \mathbf{k}_A \right) = 0, \qquad (2.41)$$

i.e. the amplitude fronts are parallel to the interface. With the use of

$$\kappa_P = \frac{\omega}{c_P} \qquad (2.42)$$

the phase velocity of the wave in the sediment can be written as

$$c_P = \frac{\omega}{\kappa_P} = \frac{c_1}{\sqrt{ \sin^2\theta_1 + \left( \operatorname{Re}\!\left\{ \sqrt{n^2 - \sin^2\theta_1} \right\} \right)^2 }}. \qquad (2.43)$$

2.4.5 Surface and Bottom Scattering
Scattering is a mechanism for loss, interference and fluctuation. A rough sea surface or
seafloor causes attenuation of the mean acoustic field propagating in the ocean
waveguide. The attenuation increases with increasing frequency. The field scattered
away from the specular direction, and, in particular, the backscattered field (called
reverberation) acts as interference for active sonar systems. Because the ocean surface
moves, it will also generate acoustic fluctuations. Bottom roughness can also generate
fluctuations when the source or receiver is moving. The importance of boundary
roughness depends on the sound-speed profiles which determine the degree of
interaction of sound with the rough boundaries.

Often the effect of scattering from a rough surface is thought of simply as an additional
loss to the specularly reflected (coherent) component, resulting from the scattering of
energy away from the specular direction. If the ocean bottom or surface can be modeled
as a randomly rough surface, and if the roughness is small with respect to the acoustic
wavelength, the reflection loss can be considered to be modified in a simple fashion by
the scattering process. A formula often used to describe reflectivity from a rough
boundary is

$$R'(\theta) = R(\theta)\,e^{-0.5\,\Gamma^2} \qquad (2.44)$$

where R'(θ) is the new reflection coefficient, reduced because of scattering at the
randomly rough interface, and Γ is the Rayleigh roughness parameter defined as

$$\Gamma = 2\,k\,\sigma\,\sin\theta \qquad (2.45)$$

where k = 2π/λ is the acoustic wave number, σ is the rms roughness and θ the grazing
angle. As said in Sec. 2.4.4, the attenuation in water can be neglected and therefore the
reflection coefficient of the smooth ocean surface can be taken as −1. Therefore, the
rough sea-surface reflection coefficient for the coherent field is

$$R'(\theta) = -e^{-0.5\,\Gamma^2}. \qquad (2.46)$$



The roughness of the ocean surface is due to wind-induced waves. It can be calculated
from the spectral density of the ocean surface displacements, which is often modeled by
the Neumann-Pierson wave spectrum. The rms roughness or rms wave height of a fully
developed wind wavefield is then approximately

$$\sigma \approx 0.324\cdot 10^{-5}\,v_w^{5}, \qquad (2.47)$$

where v_w denotes the wind speed in [m/s].


For the ocean bottom, σ is related to the particle size (particle refers to the material of the
sediment, see Sec. 2.4.3, Table 1) by

$$\sigma = \frac{2^{-bt}}{1000}\ \mathrm{m} \qquad (2.48)$$

where bt represents the bottom type (refer to Sec. 2.4.3, Table 1).
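The coherent reflection reduction of Eqs. (2.44)-(2.45), with the bottom roughness from Eq. (2.48), can be sketched as follows (illustrative only; the function name and numerical values are assumptions):

```python
import math

def rough_reflection(R_smooth, f_hz, c, sigma, grazing_angle):
    """Coherent reflection coefficient at a rough boundary, Eqs. (2.44)-(2.45)."""
    k = 2.0 * math.pi * f_hz / c                          # acoustic wave number
    gamma = 2.0 * k * sigma * math.sin(grazing_angle)     # Rayleigh roughness parameter
    return R_smooth * math.exp(-0.5 * gamma**2)

sigma_bottom = 2.0 ** (-3) / 1000.0    # Eq. (2.48) with bt = 3 (fine sand), in m
print(rough_reflection(-1.0, 25e3, 1480.0, 0.5, math.radians(10.0)))           # rough sea surface
print(rough_reflection(0.45, 25e3, 1480.0, sigma_bottom, math.radians(10.0)))  # fine-sand bottom
```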
2.4.6 Ambient Noise
An important acoustic characteristic of the ocean is its underwater ambient noise. It
contains a great bulk of information concerning the state of the ocean surface, the
atmosphere over the ocean, tectonic processes in the earth's crust under the ocean, the
behaviour of marine animals and so on, cf. [1], [6].

From Fig. 13 the different dominating contributions to the ambient noise and the total
noise level can be observed; the individual formulae are stated below.

Shipping (traffic) noise, 10-300 Hz:

$$NL_{\mathrm{traffic}}(f) = 10\log\!\left( \frac{3\cdot 10^{8}}{1 + 10^{4}\,f^{4}} \right), \qquad f \text{ in [kHz]}. \qquad (2.49)$$

Turbulence noise:

$$NL_{\mathrm{turb}}(f) = 30 - 30\log f, \qquad f \text{ in [kHz]}. \qquad (2.50)$$

Self noise of the vessel:

$$NL_{\mathrm{vessel}}(f, v_s) \text{ in [dB]}, \qquad (2.51)$$

where f and v_s denote the frequency and the vessel speed, respectively.



Biological noise (fishes, shrimps etc.):

$$NL_{\mathrm{bio}}(f, S) \text{ in [dB]}, \qquad (2.52)$$

where f and S denote the frequency and the seasonal dependence.

Sea state noise:

$$NL_{\mathrm{ss}}(f, v_w) \text{ in [dB]}. \qquad (2.53)$$

The sea state noise can be determined as a function of the wind speed v_w in [kn] and the
frequency f in [kHz] by

$$NL_{\mathrm{ss}}(f, v_w) = 40 + 10\log\!\left( \frac{v_w^{5/3}}{(1 + f)^{2}} \right). \qquad (2.54)$$

Thermal noise NL_Therm(f) in [dB]: the thermal noise is due to molecular agitation
(Brownian motion). It can be expressed as a function of the frequency f in [kHz] by

$$NL_{\mathrm{Therm}}(f) = -15 + 20\log f. \qquad (2.55)$$

Thus the total noise level can be determined by

$$NL(f, v_s, S, v_w) = 10\log\!\left( 10^{0.1\,NL_{\mathrm{traffic}}} + 10^{0.1\,NL_{\mathrm{turb}}} + 10^{0.1\,NL_{\mathrm{vessel}}} + 10^{0.1\,NL_{\mathrm{bio}}} + 10^{0.1\,NL_{\mathrm{ss}}} + 10^{0.1\,NL_{\mathrm{Therm}}} \right). \qquad (2.56)$$
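The incoherent summation of Eq. (2.56) can be sketched as below (illustrative; the component levels passed in are assumed values that would normally come from Eqs. (2.49)-(2.55)):

```python
import math

def total_noise_level(levels_db):
    """Combine individual noise levels given in dB into the total level, Eq. (2.56)."""
    return 10.0 * math.log10(sum(10.0 ** (0.1 * L) for L in levels_db))

# e.g. traffic, turbulence, sea-state and thermal contributions at some frequency:
print(total_noise_level([65.0, 40.0, 55.0, 25.0]))   # ~65.4 dB, dominated by traffic noise
```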

Fig. 13: Ambient Noise Level for the different contributions at v_w = 20 kn









3 Sound Propagation
Sound propagation in the ocean is mathematically described by the wave equation,
whose parameters and boundary conditions are descriptive of the ocean environment.
As schematically shown in the Fig.14, there are essentially five types of models
(computer solutions to the wave equation) to describe sound propagation in the sea, cf.
[1]:

FFP - fast field program
NM - normal mode
PE - parabolic equation
FD - direct finite-difference
FE - finite-element










Fig. 14: Hierarchy of underwater acoustic models.

3.1 The Wave Equation
The wave equation in an ideal fluid can be derived from hydrodynamics and the
adiabatic relation between pressure and density. The equation for conservation of mass,
Euler's equation (Newton's 2nd law) and the adiabatic equation of state are stated
below, Ref. [1], [3], [6].

Fig. 15 is used to describe the motion of a particle in a water column, from which the
entire derivation of the wave equation is carried out.












Fig. 15: Schematic diagram indicating displacement of a particle
from x to x + dx in water column.

In deriving the following equations we use the quantities p_g and ρ_g, defined as

$$p_g = p_o + p \qquad (3.1)$$

where p_g is the total pressure, p_o the static pressure and p the change in pressure, and

$$\rho_g = \rho_o + \rho \qquad (3.2)$$

where ρ_g is the total density, ρ_o the static density and ρ the change in density.

Equation for conservation of mass

$$\underbrace{\rho_g(x+dx)\,A\,v(x+dx) - \rho_g(x)\,A\,v(x)}_{\text{resultant mass stream}} \;=\; -\underbrace{\frac{\partial \rho_g}{\partial t}}_{\text{density variation}}\,A\,dx \qquad (3.3)$$

with

A v(x) - inward volume stream at position x,
A v(x + dx) - outward volume stream at a displacement of dx,
ρ_g(x) A v(x) - inward mass stream at position x,
ρ_g(x + dx) A v(x + dx) - outward mass stream at a displacement of dx,

so that a net inward mass stream corresponds to a mass increment and a net outward
mass stream to a mass decrement in the volume A dx.


Eulers Equation (Newtons 2
nd
Law)

From Fig. 15, the Newtons 2
nd
law F m a = can be written as:

( ) ( )

Total Force,
g g g
V
F
a
m
dv
p x A p x dx A Adx
dt
+ =
_
_
(3.4)

With

v v
dv dt dx
t x

= +

(3.5)


dv v v dx dv v v
v
dt t x dt dt t x

= + = = +

(3.6)
and

( ) ( ) ( ) ( ) , , , ,
g g g g g
p x dx t p x t p x t p x dx t p
dx dx x
+ +

= =

(3.7)

Eq. (3.4) can be rewritten as

$$-\frac{\partial p_g}{\partial x} = -\frac{\partial p}{\partial x} = \rho_g\left( \frac{\partial v}{\partial t} + v\,\frac{\partial v}{\partial x} \right), \qquad (3.8)$$

and with

$$\frac{\rho_g(x+dx,t)\,v(x+dx,t) - \rho_g(x,t)\,v(x,t)}{dx} = \frac{\partial (\rho_g v)}{\partial x} \qquad (3.9)$$

Eq. (3.3) can be written as

$$\frac{\partial (\rho_g v)}{\partial x} = -\frac{\partial \rho_g}{\partial t} = -\frac{\partial \rho}{\partial t}, \qquad (3.10)$$

which is known as the equation of continuity.



Adiabatic equation of state

$$p_g = p_o + \left( \frac{\partial p_g}{\partial \rho_g} \right)_{\!s}\rho + \frac{1}{2}\left( \frac{\partial^2 p_g}{\partial \rho_g^2} \right)_{\!s}\rho^2 + \ldots \qquad (3.11)$$

For convenience we define the quantity

$$c^2 \equiv \left( \frac{\partial p}{\partial \rho} \right)_{\!s}, \qquad (3.12)$$

where c will turn out to be the speed of sound in an ideal fluid. In the above equations
ρ is the density, v the particle velocity and p the pressure; the subscript s denotes that
the thermodynamic partial derivatives are taken at constant entropy.

For p ≪ p_o and ρ ≪ ρ_o, Eq. (3.11) becomes

$$p = c^2 \rho. \qquad (3.13)$$

Considering that the time scale of oceanographic changes is much longer than the time
scale of acoustic propagation, we will assume that the material properties ρ_o and c² are
independent of time. Then, taking the partial derivative of Euler's equation (3.8) with
respect to x and of the continuity equation (3.10) with respect to t gives

$$-\frac{\partial^2 p}{\partial x^2} = \frac{\partial}{\partial x}\!\left( \rho_g\,\frac{\partial v}{\partial t} \right) + \frac{\partial}{\partial x}\!\left( \rho_g\,v\,\frac{\partial v}{\partial x} \right) \qquad (3.14)$$

and

$$v\,\frac{\partial^2 \rho_g}{\partial x\,\partial t} + \frac{\partial}{\partial x}\!\left( \rho_g\,\frac{\partial v}{\partial t} \right) = -\frac{\partial^2 \rho}{\partial t^2} = -\frac{1}{c^2}\,\frac{\partial^2 p}{\partial t^2}. \qquad (3.15)$$

For low particle velocities the term ∂/∂x(ρ_g v ∂v/∂x) in Eq. (3.14) can be ignored, and
for ρ ≪ ρ_o the term v ∂²ρ_g/(∂x ∂t) in Eq. (3.15) can be ignored. Now Eqs. (3.14) and
(3.15) can be written as

$$-\frac{\partial^2 p}{\partial x^2} = \frac{\partial}{\partial x}\!\left( \rho_g\,\frac{\partial v}{\partial t} \right) \qquad (3.16)$$

$$\frac{\partial}{\partial x}\!\left( \rho_g\,\frac{\partial v}{\partial t} \right) = -\frac{1}{c^2}\,\frac{\partial^2 p}{\partial t^2}. \qquad (3.17)$$

Combining Eqs. (3.16) and (3.17), we get the one-dimensional linear wave equation

$$\frac{\partial^2 p}{\partial x^2} = \frac{1}{c^2}\,\frac{\partial^2 p}{\partial t^2}. \qquad (3.18)$$



Extending it to three dimensions we get

$$\nabla^2 p = \frac{1}{c^2}\,\frac{\partial^2 p}{\partial t^2}, \qquad (3.19)$$

where

$$\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}$$

denotes the Laplacian operator.
3.2 Helmholtz Equation
For p = P(x, y, z) exp(−jωt) we obtain

$$\nabla^2 P + k^2 P = 0. \qquad (3.20)$$

In spherical coordinates the Laplacian can be expressed by ∇²P = ∂²P/∂R² + (2/R) ∂P/∂R,
if it is taken into account that P only depends upon R. A spherical wave solution of the
Helmholtz equation is given by

$$P = \frac{A}{4\pi R}\,\exp(jkR) \qquad (3.21)$$

with

$$R = \sqrt{ (x - x_o)^2 + (y - y_o)^2 + (z - z_o)^2 }, \qquad (3.22)$$

where x_o, y_o, z_o are the coordinates of an omnidirectional point source (a pulsating
sphere of small radius). Another simple and important solution is given by the plane wave

$$P = A\,\exp\!\left[ j\left( k_x x + k_y y + k_z z \right) \right], \qquad (3.23)$$

where k_x, k_y and k_z denote the wave numbers that satisfy

$$k^2 = k_x^2 + k_y^2 + k_z^2 = \mathbf{k}^T\mathbf{k} \qquad (3.24)$$

with k = (k_x, k_y, k_z)^T the wave number vector.



3.3 Sound Propagation in Homogenous Waveguide
A homogeneous water column with infinitely extended, perfectly reflecting boundaries,
as shown in Fig. 16, is considered in the sequel.
















Fig. 16: Homogeneous waveguide with source S and receiver R (water: c = 1480 m/s, ρ = 1000 kg/m³; upper boundary air, lower boundary sediment)


The field produced by a point source at (0, z_S) in the absence of boundaries is given by

$$P(r,z) = A\,\frac{e^{jkR}}{4\pi R}, \qquad (3.25)$$

where R = √((z − z_s)² + (r − r_s)²) and A denotes the amplitude or source strength.
Next we need to add a solution to the homogenous Helmholtz equation to satisfy the
boundary conditions of vanishing pressure at the surface and bottom of the waveguide.

The method which we use for this is image or mirror method, and is explained in the
following section. Here, the ocean surface and bottom are considered as two mirrors.
The rays which hit the surface and bottom are then starting exactly at the images of the
actual sources of origin. With this logic the whole image or mirror method is developed
and thereby it is easy to provide mathematics for multipath propagation.





3.3.1 Image or Mirror Method
The image method superimposes the free-field solution with the fields produced by the
image sources. In the waveguide case, sound will be multiply reflected between the two
boundaries, requiring an infinite number of image sources to be included; see [1], [6]
for details.




















Fig. 17: Reflections of a wave from the boundaries
of a layer, and the image sources





Fig. 17 shows a schematic representation of the contributions from the physical source
at depth z_S and the first three image sources, leading to the first four terms in the
expression for the total field,


$$P(r,z) \approx A\left[ \frac{e^{jkL_{01}}}{L_{01}} + R_1(\theta_{02})\,\frac{e^{jkL_{02}}}{L_{02}} + R_2(\theta_{03})\,\frac{e^{jkL_{03}}}{L_{03}} + R_1(\theta_{04})\,R_2(\theta_{04})\,\frac{e^{jkL_{04}}}{L_{04}} \right] \qquad (3.26)$$

with R_i(θ), i = 1, 2, denoting the surface (i = 1) and bottom (i = 2) reflection coefficients
of Sec. 2.4.4 evaluated at the grazing angle θ (i.e. at an angle of incidence of π/2 − θ),
and with the path lengths




$$\begin{aligned}
L_{01} &= \sqrt{r^2 + (z - z_s)^2},\\
L_{02} &= \sqrt{r^2 + (z + z_s)^2},\\
L_{03} &= \sqrt{r^2 + (2D - z_s - z)^2},\\
L_{04} &= \sqrt{r^2 + (2D - z_s + z)^2}.
\end{aligned}$$

The remaining terms are obtained by successive imaging of these sources to yield the
ray expansion for the total field,

$$P(r,z) = A \sum_{m=0}^{\infty} \left[ R_1^{m}(\theta_{m1})\,R_2^{m}(\theta_{m1})\,\frac{e^{jkL_{m1}}}{L_{m1}} + R_1^{m+1}(\theta_{m2})\,R_2^{m}(\theta_{m2})\,\frac{e^{jkL_{m2}}}{L_{m2}} + R_1^{m}(\theta_{m3})\,R_2^{m+1}(\theta_{m3})\,\frac{e^{jkL_{m3}}}{L_{m3}} + R_1^{m+1}(\theta_{m4})\,R_2^{m+1}(\theta_{m4})\,\frac{e^{jkL_{m4}}}{L_{m4}} \right] \qquad (3.27)$$

where

A - amplitude of the signal,
R_1 - surface reflection coefficient,
R_2 - bottom reflection coefficient,
k - complex wave number,
L_{m1}, L_{m2}, L_{m3}, L_{m4} - lengths of the rays,

with

$$\begin{aligned}
L_{m1} &= \sqrt{r^2 + \left( 2Dm + z_s - z \right)^2},\\
L_{m2} &= \sqrt{r^2 + \left( 2Dm + z_s + z \right)^2},\\
L_{m3} &= \sqrt{r^2 + \left( 2D(m+1) - z_s - z \right)^2},\\
L_{m4} &= \sqrt{r^2 + \left( 2D(m+1) - z_s + z \right)^2},
\end{aligned} \qquad (3.28)$$

and D being the vertical depth of the duct.



3.3.2 Grazing angles
The angle at which each ray grazes the boundaries is usually termed the grazing angle.
It is quite important because of its influence on both the bottom and surface reflection
coefficients. With simple geometry, the grazing angles of the four path families can be
computed as

$$\theta_{m1} = \tan^{-1}\!\left( \frac{2Dm + z_s - z}{r} \right), \qquad (3.29)$$

$$\theta_{m2} = \tan^{-1}\!\left( \frac{2Dm + z_s + z}{r} \right), \qquad (3.30)$$

$$\theta_{m3} = \tan^{-1}\!\left( \frac{2D(m+1) - z_s - z}{r} \right), \qquad (3.31)$$

$$\theta_{m4} = \tan^{-1}\!\left( \frac{2D(m+1) - z_s + z}{r} \right), \qquad (3.32)$$

where

θ_{m1}, θ_{m2}, θ_{m3}, θ_{m4} - grazing angles of the rays,
D - depth of the duct or channel,
m = 0, 1, 2, ...,
z_s - depth of the source in meters,
z - depth of the receiver in meters,
r - horizontal distance of the receiver in meters.
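A minimal sketch of the image-method geometry of Eqs. (3.28)-(3.32) is given below (an illustrative helper, not the thesis code; the waveguide dimensions are assumed values):

```python
import math

def ray_geometry(m, D, z_s, z, r):
    """Path lengths (Eq. 3.28) and grazing angles (Eqs. 3.29-3.32)
    of the four ray families of order m."""
    verts = (2*D*m + z_s - z,
             2*D*m + z_s + z,
             2*D*(m + 1) - z_s - z,
             2*D*(m + 1) - z_s + z)
    lengths = [math.hypot(r, v) for v in verts]
    angles = [math.atan2(abs(v), r) for v in verts]   # abs() keeps the direct-path angle positive
    return lengths, angles

# Duct of depth 100 m, source at 20 m, receiver at 60 m depth and 1 km range:
L, theta = ray_geometry(m=0, D=100.0, z_s=20.0, z=60.0, r=1000.0)
print([round(l, 1) for l in L])
print([round(math.degrees(a), 2) for a in theta])
```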

Further, from Eq. (3.27), we can note the influence of
- wind speed, frequency and grazing angle on the surface reflection coefficient R_1,
- bottom type and grazing angle on the bottom reflection coefficient R_2.

The functional dependence of the wind speed v_w, the frequency f and the grazing angle
θ_m on the surface reflection coefficient is illustrated in Fig. 18.



Fig. 18: Dependence of R_1 (scattering loss) on grazing angle at f = 25 kHz for two wind speeds: a) v_w = 5 kn, b) v_w = 15 kn

From Fig. 18, it can be observed that with an increase of grazing angle the scattering
loss also increases. In the same way with the increase of wind speed, there is an increase
in scattering loss.

Similarly, we can observe the dependence of the bottom reflection coefficient R_2 on the
grazing angle θ_m and the bottom type bt. This is illustrated in Fig. 19.


Fig. 19: Dependence of R_2 (reflection loss in dB) on grazing angle for two bottom types: a) bt = coarse sand, b) bt = very fine sand



3.3.3 Travel Times
The travel time of each ray is the time it takes to reach the receiver. From the above
discussion it is evident that all other paths need more time than the direct path. Travel
times for all rays can easily be computed provided the lengths of all rays and the
propagation velocity are known. From Eq. (3.28) we know the lengths of all rays, and
the velocity of each ray is the speed of sound c. Thereby we can write

$$T_{m1} = \frac{L_{m1}}{c}, \qquad T_{m2} = \frac{L_{m2}}{c}, \qquad T_{m3} = \frac{L_{m3}}{c}, \qquad T_{m4} = \frac{L_{m4}}{c}, \qquad (3.33)$$

where

c - sound velocity in meters per second,
L_{m1}, L_{m2}, L_{m3}, L_{m4} - lengths of the rays in meters,
T_{m1}, T_{m2}, T_{m3}, T_{m4} - travel times of the rays in seconds.

Fig. 20: Multipath propagation depicting delays in 2D-view.
Fig. 21: Multipath propagation depicting delays in 3D-view.


Fig. 20 and Fig. 21 show the delays of the rays in 2-dimensional and 3-dimensional views.
The delay of the sinc pulse from ray 1 to ray 8 can be clearly observed. Here, a sinc pulse
is just taken as an example to illustrate the concept of delay.
3.3.4 Transmission loss for each ray
The transmission loss, sometimes referred to as propagation loss, is nothing but the sum
of all the losses a ray is affected by. As an example, for ray 4 the transmission loss can be
written as

$$tl_{04} = \frac{1}{L_{04}}\, e^{-\alpha L_{04}}\, R_1 R_2. \qquad (3.34)$$
In the above equation the transmission loss is written only for ray 4, as an example. In
terms of Eq. (3.27), the spreading loss is due to the terms 1/L_{m1}, 1/L_{m2}, 1/L_{m3},
1/L_{m4} (as discussed in Sec. 2.4.1). The attenuation or absorption comes from the
imaginary part of the complex wave number k, as discussed in Sec. 2.4.3, cf. Eq. (2.30).
The total reflection loss can be seen as the sum of
- the reflection loss, caused when a ray travels from medium 1 to medium 2, due to the
refraction and reflection of the ray, and
- the scattering loss, caused by the roughness of the boundary, i.e. rays being scattered
in a disorderly fashion.
In Eq. (3.27) it is accounted for by the terms R_1^m R_2^m, R_1^{m+1} R_2^m,
R_1^m R_2^{m+1} and R_1^{m+1} R_2^{m+1}. The following figure illustrates the
transmission loss phenomenon.

Fig. 22: Multipath propagation depicting transmission loss.

As an example, a sinc pulse is taken to illustrate the transmission loss phenomenon. In
Fig. 22 one can clearly observe the degradation in amplitude from ray 1 to ray 8.
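Combining Eqs. (3.33) and (3.34), the contribution of a single ray can be sketched as follows (illustrative only; the attenuation value and reflection coefficients are assumed inputs that would come from Secs. 2.4.2-2.4.5):

```python
import math

def ray_contribution(L, c, alpha_np_per_m, R1, R2, n_surf, n_bot):
    """Travel time (Eq. 3.33) and amplitude factor in the spirit of Eq. (3.34):
    spherical spreading 1/L, absorption exp(-alpha*L) and the accumulated
    surface/bottom reflection coefficients."""
    travel_time = L / c
    amplitude = (1.0 / L) * math.exp(-alpha_np_per_m * L)
    amplitude *= (R1 ** n_surf) * (R2 ** n_bot)
    return travel_time, amplitude

# Ray with one surface and one bottom bounce (cf. tl_04 in Eq. (3.34)):
t, a = ray_contribution(L=1020.0, c=1480.0, alpha_np_per_m=2e-4,
                        R1=-0.95, R2=0.45, n_surf=1, n_bot=1)
print(t, a)
```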






















4 Modulation
Ordinarily, the transmission of a message signal (be it in analog or digital form) over a
band-pass communication channel (e.g., telephone line, satellite channel) requires a
shift of frequencies contained in the signal into other frequency ranges suitable for
transmission, and a corresponding shift back to the original frequency range after
reception. A shift of the range of frequencies in a signal is accomplished by using
modulation, defined as the process by which some characteristic of a carrier is varied
in accordance with a message signal. The message signal is referred to as the
modulating signal, and the result of the modulation process is referred to as the
modulated signal. At the receiving end of the communication system, we usually
require the message signal to be recovered. This is accomplished by using the process
known as demodulation, which is the inverse of the modulation process [7].
4.1 Digital Modulation Techniques
With a binary modulation technique, the modulation process corresponds to switching
or keying the amplitude, frequency, or phase of the carrier between either of two
possible values corresponding to binary symbols 0 and 1. This results in three basic
signaling techniques, namely, amplitude shift-keying (ASK), frequency shift-keying
(FSK) and phase shift-keying (PSK), as described herein [7]:
4.1.1 ASK
In ASK the amplitude of the signal is changed in accordance with the information while all
else is kept fixed. A 1 is transmitted by a signal of a particular amplitude. To transmit a 0, we
change the amplitude while keeping the frequency constant. On-off keying (OOK) is a special
form of ASK, where one of the amplitudes is zero, as shown below:














Fig. 23: Baseband information sequence - 0010110010

$ASK(t) = s(t)\,\sin(2\pi f t)$   (4.1)










Fig. 24: Binary ASK (OOK) carrier

4.1.2 FSK
In FSK, we change the frequency in response to the information: one particular
frequency for a 1 and another frequency for a 0, as shown below for the same bit
sequence as above. In the example below, the frequency $f_1$ used for a 1 is higher than
the frequency $f_2$ used for a 0.

$FSK(t) = \begin{cases} \sin(2\pi f_1 t) & \text{for } 1 \\ \sin(2\pi f_2 t) & \text{for } 0 \end{cases}$   (4.2)








Fig. 25: Binary FSK carrier



4.1.3 PSK
In PSK, we change the phase of the sinusoidal carrier to indicate the information. Phase
in this context is the starting angle at which the sinusoid starts. To transmit a 0 or a 1,
we shift the phase of the sinusoid by 180°. The phase shift thus represents a change in
the state of the information.

If b(t) represents the binary sequence, then we can write

$PSK(t) = b(t)\,\sin(2\pi f t).$   (4.3)









Fig. 26: Binary PSK carrier (note the 180° phase shifts at bit edges)



Remarks

ASK
Pulse shaping can be employed to reduce spectral spreading.
One binary digit is represented by the presence of the carrier at constant amplitude, the other
binary digit by the absence of the carrier.
ASK is susceptible to sudden gain changes and demonstrates poor performance.

FSK
The bandwidth occupancy of FSK depends on the spacing of the two symbols. A
frequency spacing of 0.5 times the symbol rate is typically used.
FSK can be expanded to an M-ary scheme, employing multiple frequencies as different
states.

PSK
PSK can be expanded to an M-ary scheme, employing multiple phases as different
states.
Filtering can be employed to avoid spectral spreading.
A short Matlab sketch illustrating the three keying schemes is given below.
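The three keying schemes of Eqs. (4.1)-(4.3) can be generated with a few lines of Matlab. The sketch below is illustrative only; the bit sequence, carrier frequencies and sampling rate are arbitrary example values and do not correspond to the simulation code in the Appendix.

    % Minimal sketch: binary ASK, FSK and PSK carriers for one bit sequence
    bits = [0 0 1 0 1 1 0 0 1 0];        % example information sequence
    fs = 8000; f = 500; f1 = 750; f2 = 250; Tb = 0.01;   % example parameters
    t  = 0:1/fs:Tb-1/fs;                 % time axis of one bit interval
    ask = []; fsk = []; psk = [];
    for b = bits
        ask = [ask, b * sin(2*pi*f*t)];                  % Eq. (4.1), OOK
        fsk = [fsk, sin(2*pi*(b*f1 + (1-b)*f2)*t)];      % Eq. (4.2)
        psk = [psk, (2*b-1) * sin(2*pi*f*t)];            % Eq. (4.3), b(t) = +/-1
    end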
4.2 Bit rate and Symbol rate
Symbols, Bits and Bauds

A symbol is conceptually quite different from a bit, although both can be represented by
sinusoidal or wave functions: a bit is the unit of information, whereas a symbol is the unit
of transmitted energy. The definition of a symbol is often a little ambiguous. In general,
in terms of communications, it can broadly be defined as follows: a symbol is a set of bits,
not just one bit. The size of the set depends on the modulation scheme which is used. For
example, in BPSK a symbol consists of a single bit, whereas in QPSK two bits constitute a
symbol. In the case considered here, the number of bits in a symbol can be written as
log₂M, where M represents the number of phase shifts.

To understand and compare the efficiencies of different modulation formats, it is important to
first understand the difference between bit rate and symbol rate. The signal bandwidth
needed for the communications channel depends on the symbol rate, not on the bit rate,
cf. [9].

$\text{symbol rate} = \dfrac{\text{bit rate}}{\text{number of bits transmitted with each symbol}}$

The bit rate is the frequency of the system bit stream. Take, for example, an 8-bit sampler,
sampling at 10 kHz. The bit rate, the basic bit stream rate, would be eight bits multiplied
by 10K samples per second, or 80 kbit per second. (For the moment we ignore the
extra bits required for synchronization, error correction, etc.)

The symbol rate is the bit rate divided by the number of bits that can be transmitted with
each symbol. If one bit is transmitted per symbol, as with BPSK, then the symbol rate
is the same as the bit rate, i.e. 80 ksymbols per second. If two bits are transmitted per
symbol, as in QPSK, then the symbol rate is half the bit rate, i.e. 40 ksymbols per
second. The baud rate is the same as the symbol rate. If more bits can be sent with each
symbol, then the same amount of data can be sent in a narrower spectrum. This is why
modulation formats that are more complex and use a higher number of states can send
the same information over a narrower piece of the RF spectrum.
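As a small worked example of this relation (values purely illustrative):

    % Minimal sketch: bit rate vs. symbol rate
    bitRate    = 8 * 10e3;              % 8-bit sampler at 10 kHz -> 80 kbit/s
    bitsPerSym = 2;                     % QPSK: log2(4) = 2 bits per symbol
    symbolRate = bitRate / bitsPerSym   % 40 ksymbols/s (40 kbaud)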



4.3 Representation of Signals
4.3.1 Baseband and Bandpass Signals
In many communication systems the baseband signal that conveys the message to be
transmitted is up-converted (i.e., translated in frequency) in order to better suit the
characteristics of the channel. An example of such a system is a QPSK system. The
QPSK modulation process can be viewed as a two step procedure. First, a baseband
signal, consisting of a series of complex valued pulses, is formed, cf. section 4.5. This
signal is then up-converted to the desired carrier frequency. The result is a bandpass
signal which can be transmitted over a physical channel [10].
4.3.2 Baseband vs. Bandpass
In general, any amplitude or phase modulation technique can be described by the
relation

$x(t) = A(t)\cos\big(2\pi f_c t + \varphi(t)\big)$   (4.4)

where $f_c$ is the carrier frequency. The bandwidths of the phase function $\varphi(t)$ and the
amplitude function $A(t)$ are in general much lower than the carrier frequency. Hence,
the rate of change of these signals is typically much lower than $f_c$. Consequently, $x(t)$
is a bandpass signal with its spectrum concentrated around the carrier frequency $f_c$.
Using trigonometric relations, it is possible to write the same function as

$x(t) = x_I(t)\cos(2\pi f_c t) - x_Q(t)\sin(2\pi f_c t)$   (4.5)

where

$x_I(t) = A(t)\cos\big(\varphi(t)\big)$   (4.6)

$x_Q(t) = A(t)\sin\big(\varphi(t)\big)$   (4.7)

represent the quadrature components. Here, $x_I(t)$ and $x_Q(t)$ are the in-phase (I) and the
quadrature (Q) component, respectively. Eq. (4.5) can be rewritten using complex
numbers as

$x(t) = \mathrm{Re}\big\{ x_{bb}(t)\, e^{\,j 2\pi f_c t} \big\}$   (4.8)

where

$x_{bb}(t) = x_I(t) + j\, x_Q(t)$   (4.9)

is the baseband equivalent signal.



It is instructive to study the baseband and bandpass equivalent signals in the
frequency domain. As illustrated in Fig. 27, the baseband equivalent signal is obtained
from the bandpass signal by removing the image of the signal on the negative frequency
axis, scaling the remaining spectrum by a factor of two and moving the result to
baseband by shifting the spectrum $f_c$ Hz to the left. This frequency domain result, as
well as baseband and bandpass signals in general, is thoroughly discussed in chapter 4
of [4].

Fig. 27: The relation between a Bandpass signal and its Baseband equivalent signal
in the frequency domain.


A baseband equivalent signal can be obtained by using a down-converter. Such a device
can be implemented in several ways; one example of an implementation is shown in
Fig. 28. The signal is first multiplied by $2e^{-j2\pi f_c t}$ in order to shift the spectrum and scale
it by a factor of two. The low-pass filter removes the negative frequency image. It is
assumed that the low-pass filter is ideal and has a sufficiently large bandwidth so as not
to alter the shape of the positive frequency image. A down-converter is found in most
bandpass communication receivers.







Fig. 28: Converting a Bandpass signal into its Baseband equivalent signal.

Communication signals are usually represented using just the complex signal $x_{bb}(t)$ in
Eq. (4.9), which is called the baseband representation, as opposed to the bandpass
representation $x(t)$, which is a real-valued signal. The baseband representation is much
easier to work with than the bandpass representation, as will be illustrated below.



Advantages of the baseband representation
The use of a baseband representation simplifies communication system simulation and
analysis in a number of ways. A simulation or analysis of a baseband system is not tied
to any particular carrier frequency and can be reused if the carrier frequency is changed.
Certain very useful operations become extremely simple when using the complex
representation. For example, a frequency shift by $f_{\mathrm{shift}}$ is done by multiplying the signal
by $e^{\,j 2\pi f_{\mathrm{shift}} t}$, and a phase shift of $\varphi_{\mathrm{shift}}$ is done by multiplying the signal by $e^{\,j\varphi_{\mathrm{shift}}}$.
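A minimal Matlab sketch of the up- and down-conversion relations in Eqs. (4.8) and (4.9) is given below. The sampling rate, carrier frequency, test signal and low-pass filter are simple example choices (fir1 from the Signal Processing Toolbox is used for the filter) and are not taken from the thesis code.

    % Minimal sketch: bandpass signal from a complex baseband signal and back
    fs = 96e3; fc = 25e3;                    % example sampling rate and carrier
    t  = (0:1/fs:0.01).';                    % time axis
    xbb = exp(1j*2*pi*200*t);                % example complex baseband signal
    x   = real(xbb .* exp(1j*2*pi*fc*t));    % up-conversion, Eq. (4.8)
    y   = 2 * x .* exp(-1j*2*pi*fc*t);       % down-conversion: shift and scale by 2
    b   = fir1(128, 2*5e3/fs);               % low-pass filter, approx. 5 kHz cut-off
    xbbHat = filter(b, 1, y);                % removes the image around -2*fc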
4.4 Modulation QPSK
In binary data transmission, we send only one of two possible signals during each bit
interval $T_b$. In an M-ary data transmission system, on the other hand, we send any one
of M possible signals during each signaling interval T. For almost all applications, the
number of possible signals is $M = 2^n$, where n is an integer, and the signaling interval is
$T = nT_b$. It is apparent that a binary data transmission system is a special case of an
M-ary data transmission system. Each of the M signals is called a symbol. The rate at
which these symbols are transmitted through a communication channel is expressed in
units of baud (as explained in the above section). For M-ary data transmission, each
symbol carries log₂M bits.

Quadrature phase shift keying (QPSK) is an example of M-ary data transmission with
$M = 4$. In QPSK, one of four possible signaling elements is transmitted during each
signaling interval, with each signal uniquely related to a dibit (pairs of bits are termed
dibits).

For example, we may represent the four possible dibits 00, 10, 11 and 01 in Gray-
encoded form (for more on Gray encoding cf. [4], p. 170) by transmitting a sinusoidal
carrier with one of four possible phase values, as follows:

$s(t) = \begin{cases} A_c\cos\big(2\pi f_c t + \tfrac{\pi}{4}\big), & \text{dibit } 00 \\ A_c\cos\big(2\pi f_c t + \tfrac{3\pi}{4}\big), & \text{dibit } 10 \\ A_c\cos\big(2\pi f_c t - \tfrac{\pi}{4}\big), & \text{dibit } 01 \\ A_c\cos\big(2\pi f_c t - \tfrac{3\pi}{4}\big), & \text{dibit } 11 \end{cases}$   (4.10)

where $0 \le t \le T$; we refer to T as the symbol duration. Fig. 29 depicts the signal state
diagram of Eq. (4.10).









Fig. 29: QPSK state diagram


Clearly QPSK represents a special form of phase modulation. This is done by
expressing s(t) succinctly as

$s(t) = A_c\cos\big(2\pi f_c t + \varphi(t)\big)$   (4.11)

where the phase $\varphi(t)$ assumes a constant value for each dibit of the incoming data
stream. Further insight into the representation of QPSK can be developed by
expanding the cosine term in Eq. (4.11) and rewriting the expression for s(t) as

$s(t) = A_c\cos\big(\varphi(t)\big)\cos(2\pi f_c t) - A_c\sin\big(\varphi(t)\big)\sin(2\pi f_c t).$   (4.12)

According to this representation, the QPSK wave s(t) has an in-phase component equal
to $A_c\cos(\varphi(t))$ and a quadrature component equal to $A_c\sin(\varphi(t))$.

The representation of Eq. (4.12) provides the basis for the general block diagram of the
QPSK transmitter shown in Fig. 30. It consists of a serial-to-parallel converter, a pair of
product modulators, a supply of the two carrier waves (in-phase and quadrature) and a
summer. The function of the serial-to-parallel converter is to represent each successive
pair of bits of the incoming binary data stream m(t) as two separate bits, with one bit
applied to the in-phase channel of the transmitter and the other bit applied to the
quadrature channel.

















Fig. 30: General block diagram QPSK transmitter


It is apparent that the signaling interval T in a QPSK system is twice as long as the bit
duration $T_b$ of the input binary data stream m(t). That is, for a given bit rate $1/T_b$, a
QPSK system requires half the transmission bandwidth of the corresponding binary
PSK system.

Assuming the coding arrangement of Eq. (4.10), the signal waveforms of the modulated
carrier for the I channel, the Q channel and the final QPSK carrier are shown for the
following data sequence. A short Matlab sketch of the mapping follows the figures.

Example to illustrate QPSK modulation

Assumed data sequence = [0 0 1 1 0 0 0 1 1 1 0 1 1 1 1 0 1 1] to be transmitted.




[Figure: magnitude vs. time in seconds]
Fig. 31: Data sequence transmitted

[Figure: magnitude vs. time in seconds]
Fig. 32: Modulated carrier signal for I channel

[Figure: magnitude vs. time in seconds]
Fig. 33: Modulated carrier signal for Q channel


Here the mapping of the bits is done according to the state diagram of QPSK, Fig. 29.

[Figure: magnitude vs. time in seconds]
Fig. 34: QPSK signal for the given data sequence
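The figures above follow from the Gray mapping of Fig. 29 and the representation of Eq. (4.12). A minimal Matlab sketch of this mapping and carrier modulation is given below; the carrier frequency, symbol duration and sampling rate are example values, and rectangular pulses are used instead of the pulse shaping of sec. 4.5, so it is an illustration rather than the thesis transmitter code.

    % Minimal sketch: Gray-coded QPSK mapping and carrier modulation, Eq. (4.12)
    bits = [0 0 1 1 0 0 0 1 1 1 0 1 1 1 1 0 1 1];   % data sequence of Fig. 31
    dI = 1 - 2*bits(1:2:end);        % odd bits  -> I channel, 0 -> +1, 1 -> -1
    dQ = 1 - 2*bits(2:2:end);        % even bits -> Q channel, 0 -> +1, 1 -> -1
    fc = 6e3; T = 1/3000; fs = 48e3; % example carrier, symbol time, sampling rate
    t  = 0:1/fs:T-1/fs;              % time axis of one symbol interval
    s  = [];
    for k = 1:numel(dI)              % rectangular pulses, one symbol at a time
        s = [s, dI(k)*cos(2*pi*fc*t) - dQ(k)*sin(2*pi*fc*t)];
    end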










4.5 Pulse shaping
The resulting QPSK symbols are passed through a pulse shaping filter. Rectangular
pulses are not practical to send and require a lot of bandwidth. In their place we send
shaped pulses that convey the same information but use a smaller bandwidth and have
other good properties such as intersymbol interference rejection. One of the most
common pulse shapes used with QPSK is the root raised cosine, in short RRC. This
pulse shape has a so-called roll-off parameter which controls the shape and the
bandwidth of the signal.

Some common pulse shaping methods are
root raised cosine (used with QPSK),
half-sinusoid (used with MSK),
Gaussian (used with GMSK).

The root raised cosine pulse shape is given by

$p(t) = \dfrac{1}{\sqrt{T}}\;\dfrac{\cos\big((1+\alpha)\pi t/T\big) + \dfrac{\sin\big((1-\alpha)\pi t/T\big)}{4\alpha t/T}}{1 - 16\,\alpha^{2}t^{2}/T^{2}}\;\dfrac{4\alpha}{\pi}$   (4.13)

where T is the symbol time and α is the roll-off factor. The roll-off factor usually
lies between 0 and 1 and defines the excess bandwidth 100α%. Using a smaller α
results in a more compact power density spectrum, but the link performance becomes
more sensitive to errors in the symbol timing. A typical root raised cosine pulse with a
roll-off factor of α = 0.5 is shown in Fig. 35.

[Figure: amplitude vs. delay t/T]
Fig. 35: Root raised cosine pulse with a roll-off factor α = 0.5
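A minimal Matlab sketch of Eq. (4.13) is given below. The removable singularities of the formula (at t = 0 and at 4α|t| = T) are handled here crudely by a tiny time offset; it is an illustrative implementation, not the root_raised_cosine.m listed in the Appendix.

    % Minimal sketch: root raised cosine pulse, Eq. (4.13)
    T = 1; alpha = 0.5;                      % symbol time and roll-off factor
    t = (-5*T:T/8:5*T) + 1e-9;               % time axis, offset avoids 0/0
    num = cos((1+alpha)*pi*t/T) + sin((1-alpha)*pi*t/T)./(4*alpha*t/T);
    den = 1 - 16*alpha^2*t.^2/T^2;
    p   = (4*alpha/(pi*sqrt(T))) * num ./ den;
    plot(t/T, p), xlabel('delay t/T'), ylabel('Amplitude')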




































5 System description
In sec. 4.3 we dealt with bandpass and baseband descriptions. As we have seen, the
baseband representation has many advantages over the bandpass representation in terms
of simulation. So, a baseband system representation is used here. A detailed
description of the simulation of a continuous-time baseband system is provided in the following.

Fig. 36: Underwater Acoustic simulation system.

The communication system considered is shown in Fig. 36. This is a typical set-up
which can represent any kind of system using quadrature amplitude modulation (QAM).
This QPSK system is used in our investigations. A brief overview of the system now
follows. At the transmitting side, the sequence of symbols d(n) is converted into a
continuous-time baseband signal $s_{bb}(t)$ by a pulse amplitude modulator (PAM). Note
that d(n) takes its values from a discrete set of complex-valued symbols. Up-conversion
is performed by multiplying with $e^{\,j2\pi f_c t}$, resulting in a bandpass signal s(t) which is
transmitted over the channel (refer to chapters 2 and 3). In order to remove the carrier, the
received signal r(t) is processed by a down-converter which outputs the corresponding
baseband equivalent signal $r_{bb}(t)$. The down-converter is followed by a low-pass filter and
then by a matched filter. The detector gives the estimates of the transmitted symbols. As
already said above, the baseband representation is useful in order to be able to simulate
the system using, for example, Matlab, where only time-discrete signals can be
represented. Fig. 37 represents the baseband equivalent system.









Fig. 37: The baseband equivalent system


UAC – Underwater Acoustic Channel.

A simulation is often based on an oversampled system, i.e. the sampling rate is higher than
the symbol rate. In general, a higher sampling rate will more accurately reflect the
original system. However, this comes at the cost of a longer simulation time since more
samples need to be processed. It is common to use an oversampling rate that is a
multiple of the symbol rate. The number of samples per symbol, here denoted by Q, is
then an integer.

In order to arrive at the desired discrete-time system, we take the continuous-time
baseband equivalent system, introduce an ideal anti-alias filter at the output of the
matched filter and then oversample its output. This is depicted in Fig. 38, where also
down-sampling by a factor Q is shown. Q is assumed to be chosen so large that the bandwidth of the
matched filter is smaller than the bandwidth Q/(2T) of the anti-alias filter. Consequently,
the anti-alias filter does not change the signal output from the matched filter. The signal
at the input of the detector is the same as for the continuous-time system. Thus, this
oversampled system is equivalent to the original system.










Fig. 38: Oversampling the system

Below, a series of equivalent systems is presented, where the anti-alias filter and the
sampling device are moved step by step to the left until a completely discrete system
remains. As illustrated in Fig. 39, the first step is to switch the order of the matched
filter and the anti-alias filter and also to use a discrete-time matched filter p(n). With a
slight change of notation, x(n) denotes the discrete-time signal x(nT/Q). The
matched filter has a bandwidth smaller than the Nyquist frequency Q/(2T), so this
reordering operation does not affect the signal. This can be further understood with a
simple mathematical motivation by considering the sampled output of the continuous-
time convolution.

$y(t) = h(t) * x(t)$

and approximating the integral with a summation yields

$y(nT_s) = \int h(\tau)\, x(nT_s - \tau)\, d\tau \;\approx\; T_s \sum_{k} h(kT_s)\, x\big((n-k)T_s\big) = T_s\,\big(h(n) * x(n)\big)$

where $T_s = T/Q$. The relation holds exactly if both h(t) and x(t) have a bandwidth less than the Nyquist
frequency $1/(2T_s)$. As a result, sampled continuous-time convolutions can be computed
using discrete-time processing, provided the output is scaled by the sample period $T_s$.
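The scaling rule above can be illustrated in Matlab as follows; the impulse response, input signal and sample period are example choices only.

    % Minimal sketch: sampled continuous-time convolution via scaled discrete one
    Ts = 1e-4;                        % sample period T_s = T/Q (example value)
    t  = 0:Ts:0.02;
    h  = exp(-t/0.002);               % example impulse response
    x  = sin(2*pi*100*t);             % example input signal
    y  = Ts * conv(h, x);             % T_s-scaled discrete convolution
    % y(n) approximates the continuous convolution integral evaluated at t = n*Ts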










Fig. 39: Moving the anti-alias filter and the sampling device in front of the matched filter.


As a next step it should be obvious that the anti-alias filter and the sampler can be
moved in front of the summation without changing the signal at the detector. The anti-
alias filter after the PAM can be removed (since the bandwidth of p(t) is smaller), which
means that we can write the sampled transmitted signal as

$s_{bb}(n) = \dfrac{T}{Q} \sum_{k=-\infty}^{\infty} d(k)\, p(n - kQ).$

Clearly, this summation can be implemented by up-sampling d(n) by a factor of Q,
followed by filtering the resultant sequence with p(n).
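A minimal Matlab sketch of this up-sampling and pulse-shaping step is shown below. The symbol sequence is random, a rectangular pulse stands in for p(n), and T = 1 is assumed so the scaling factor T/Q reduces to 1/Q; it does not reproduce the thesis transmitter code.

    % Minimal sketch: PAM as up-sampling by Q followed by filtering with p(n)
    Q = 8;                                    % samples per symbol
    d = (1 - 2*randi([0 1],1,50)) + 1j*(1 - 2*randi([0 1],1,50));  % QPSK symbols
    p = ones(1,Q);                            % placeholder pulse (rectangular)
    dUp = zeros(1, numel(d)*Q);               % insert Q-1 zeros between symbols
    dUp(1:Q:end) = d;
    sbb = (1/Q) * filter(p, 1, dUp);          % s_bb(n) with T = 1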









Fig. 40: The equivalent discrete time baseband system.


It is finally possible to summarize the above development in the equivalent discrete-time
system shown in Fig. 40.
5.1 Simulation system
The simulation system is illustrated in Fig. 41. It consists of a bit source, transmitter,
channel, receiver and a bit sink. The bit source generates the random binary sequence
that is to be transmitted by the transmitter. Typically a random bit source is employed in
simulations and this is the case in our simulation as well. The transmitter converts the
bits into QPSK symbols, applies pulse shaping and performs up-conversion to the desired
carrier frequency.






Fig. 41: The Simulation system considered


The output from the transmitter is fed through the underwater acoustic channel. The
receiver block takes the output from the channel, estimates phase and timing offset, and
demodulates the received QPSK symbols into information bits which are fed to the bit
sink. Here, the bit sink counts the number of errors that occurred to gather the statistics
used for investigating the performance of the system.



5.2 Transmitter
The transmitter used in this report is illustrated in Fig. 42. It consists of blocks
for training sequence generation, QPSK mapping and pulse shaping, together known as
QPSK modulation, and a carrier modulation block.
















Fig. 42: The transmitter

5.2.1 Training Sequence
The training sequence generator generates a known data sequence which is transmitted
prior to any data transmission. Its purpose is to provide the receiver with a known
sequence, which can be used for phase estimation and synchronization. The training
sequence is multiplexed with the data sequence before QPSK modulation as shown in
Fig. 42. In this report, the multiplexing is done such that the whole training sequence is
transmitted before the data sequence, but any other scheme can also be used. Keeping
the training sequence in the middle of the data, i.e. half of the data bits followed by the
training sequence, followed by the other half of the data bits, is another common scheme.

The training sequence carries no information and is therefore to be seen as overhead.
A shorter training sequence is preferred from an overhead point of view, while
a longer one usually results in better performance of the synchronization and phase
estimation algorithms in the receiver. The length of the training sequence is in general
not fixed anywhere; it depends on the receiver design and the modulation scheme which is
used. Later in this report, performance results are shown for shorter and longer training
sequences.



5.2.2 QPSK mapping
The bits are mapped onto corresponding QPSK symbols using Gray coding, as shown in
Fig. 43. Each QPSK symbol is represented by $d_I + j\,d_Q$, corresponding to the real-valued I
and Q channels, respectively. This is covered completely in sec. 4.4.












Fig. 43: Mapping of bits into QPSK symbols

5.2.3 Pulse shaping
The resulting QPSK symbols are passed through a pulse shaping filter. Often a
rectangular pulse shape is used in simulations, although a root raised cosine pulse is a
common choice in a real system. Here, we have used a root raised cosine pulse shape.
The complete description of the RRC pulse shaper is given in sec. 4.5.
5.2.4 Carrier modulation
Subsequent to pulse shaping is carrier modulation: taking the complex-valued pulse-shaped
QPSK symbols in the baseband and shifting them in frequency finishes the process
of carrier modulation. Which carrier frequency one has to choose depends upon the
channel. The underwater acoustic channel is a low frequency channel and here the chosen
carrier lies in the range of 20-30 kHz. The carrier frequency is almost always
substantially higher than the baseband frequency determined by the symbol rate.
5.3 Channel
The complete description of the channel can be understood from chapters 1, 2 and 3.
Nevertheless, a brief summary of it is provided here again. The main problem of this
channel is its multipath propagation, and thereby a cause of interference. Next are the
channel variations, i.e. variations in physical parameters of the ocean such as temperature,
pH, salinity, pressure or depth of the water. All these are extensively discussed in the
chapters mentioned above. This report takes almost all of these parameters into consideration
while modelling the channel. The simulation block diagram of the channel can be found
in the Appendix of this report. Fig. 44 represents the underwater acoustic channel model
used in this simulation. $h_1(t)$ represents the direct path, or first ray, with zero (relative)
delay, and $h_N(t)$ represents the N-th ray with a delay of $\tau_N$ with respect to the direct
path.


Fig. 44: Underwater Acoustic Channel Model

5.4 Receiver
The receiver design in a communication system is usually more complicated than the
transmitter and channel design. Here, however, as the channel has been treated extensively,
the complexity of the receiver design appears somewhat reduced compared to the channel
design. Fig. 45 depicts the receiver block diagram used in this report. The following
sections explain the functionality of every block used here.



5.4.1 Bandpass Filtering
The first block in the receiver is a bandpass filter with a center frequency equal to the
carrier frequency $f_c$ and a bandwidth matching the bandwidth of the transmitted signal.
The purpose of the bandpass filter is to remove out-of-band noise. Some care must be
taken when choosing the bandwidth of the bandpass filter. If the bandwidth is chosen too
large, more noise than necessary passes on to the subsequent stages. On the other
hand, if it is too narrow, the desired signal is distorted.
































Fig. 45: The receiver




5.4.2 Down conversion and Sampling
The down-conversion block down-converts the received bandpass signal, resulting in a
complex-valued baseband signal. In the down-conversion operation the input signal is
multiplied with the local oscillator signal. Here a separate local oscillator is not modelled
and the same carrier frequency is assumed in both receiver and transmitter, so there is no
effect in terms of a carrier frequency offset. One aspect of the local oscillator signal in the
down-conversion block is how to set its initial phase. In Fig. 45, a connection from the
optional phase estimation block to the down-conversion block is shown with dashed
lines; the phase estimate obtained from the phase estimator is then used as the initial phase
of the local oscillator. Another approach, which is common in practice, is not to lock the
phase of the local oscillator, but instead to do a phase compensation of the baseband
signal. This is done in our case after the matched filtering, where the phase compensation is
simply a rotation of the signal constellation. The latter approach is shown with solid
lines in Fig. 45.
5.4.3 Matched Filtering
The matched filtering block contains a filter matched to the transmitted pulse shape. The
matched filter operation can be performed on a discrete-time signal or a continuous-time
signal. The two possibilities are equivalent, but from an implementation point of view,
operating on the discrete-time signal is preferable.

In the case of a rectangular pulse shape, the matched filter is an integrate-and-dump filter.
In Fig. 46, the output signal from the matched filter (either I or Q channel) is shown for
the case of rectangular pulse shapes. The black dots represent the sampled signal in the
receiver, assuming four samples per symbol. The optimal sampling instants are
illustrated with small arrows. In the figure, the sampling of the matched filter happens
to coincide with one of the samples of the discrete signal, but this is typically not the case.
If the matched filter is to be sampled between two dots, interpolation can be used to find
the value between two samples, or, simpler but with a loss in performance, the closest
sample can be chosen. A small sketch of the discrete-time matched filtering is given after
Fig. 46.






Fig. 46: Output from the matched filter for successive signaling in absence of noise.

The solid line is the resulting output signal from the matched filter and the dotted lines
are the contributions from the first two bits (the remaining bits each have a similar
contribution, but this is not shown). The small arrows illustrate the preferred sampling
instants.
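As referenced above, the following minimal Matlab sketch shows discrete-time matched filtering and symbol-spaced sampling for the rectangular pulse case. The symbol sequence, noise level and oversampling factor are illustrative assumptions, not the receiver code of the Appendix.

    % Minimal sketch: integrate-and-dump matched filter and down-sampling
    Q = 4;                                       % samples per symbol
    d = 1 - 2*randi([0 1],1,20);                 % example +/-1 symbols (one channel)
    r = kron(d, ones(1,Q)) + 0.1*randn(1,20*Q);  % rectangular pulses plus noise
    p = ones(1,Q);                               % filter matched to the pulse
    y = filter(p, 1, r) / Q;                     % matched filter output
    tsamp = Q;                                   % best sampling instant (peak)
    dHat  = sign(y(tsamp:Q:end));                % keep every Q-th sample and decide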
5.4.4 Synchronization
The synchronization algorithm is crucial for the operation of the system. Its task is to
find the best sampling time for the sampling device. Ideally, the matched filter should
be implemented such that the signal-to-noise ratio of the decision variable is
maximized. For a rectangular pulse shape, the best sampling time $t_{\mathrm{samp}}$ is at the peak of
the triangles coming out of the matched filter, illustrated with small arrows in
Fig. 46.

The synchronization algorithm used in this report is based on the complex training
sequence. During the training sequence the receiver knows what the transmitter
is transmitting. Hence, one possible way of recovering the symbol timing is to cross-
correlate the complex-valued samples after the matched filter with a locally generated,
time-shifted replica of the training sequence. By trying different time-shifts in steps of
T/Q, where Q is the number of samples per symbol, the symbol timing can be found
with a resolution of T/Q. In mathematical terms, if $\{c(n)\}_{n=0}^{L-1}$ is the locally
generated symbol-spaced replica of the QPSK training sequence of length L and r(n)
denotes the output from the matched filter, the timing can be found as

$\hat{t}_{\mathrm{samp}} = \arg\max_{t_{\mathrm{samp}}} \Big| \sum_{k=0}^{L-1} r(kQ + t_{\mathrm{samp}})\, c^{*}(k) \Big|.$   (5.1)

[Figure: cross-correlation magnitude vs. sample delay]

Fig. 47: Example of cross-correlating the received sequence with the
training sequence in order to find the timing.


In this example, the delay was estimated to be 211 samples (corresponding to the
maximum) and, hence, the matched filter should be sampled at 211, 211 + Q, 211 + 2Q, … in
order to recover the QPSK symbols. The correlation properties of the training sequence
are important as they affect the estimation accuracy. Ideally, the autocorrelation function
of the training sequence should be equal to a delta pulse, i.e. zero correlation
everywhere except at lag zero. Therefore, a training sequence should be carefully
designed.
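A minimal Matlab sketch of the timing search of Eq. (5.1) is given below. It assumes that the matched filter output r, the training replica c and the number of samples per symbol Q are already available, and the search range maxOffset is an illustrative choice.

    % Minimal sketch: symbol timing via cross-correlation with the training sequence
    rr = r(:).';  cc = c(:).';            % force row vectors
    L  = numel(cc);                       % length of the training sequence
    maxOffset = 400;                      % illustrative search range in samples
    metric = zeros(1, maxOffset);
    for tsamp = 1:maxOffset
        idx = tsamp + (0:L-1)*Q;          % samples r(kQ + t_samp), k = 0..L-1
        metric(tsamp) = abs(sum(rr(idx) .* conj(cc)));
    end
    [~, tsampHat] = max(metric);          % estimated sampling instant, Eq. (5.1)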
5.4.5 Sampling
The output from the matched filter is down-sampled by a factor of Q, i.e. every Q-th
sample in the output sequence is kept. The position of the sample (illustrated by arrows
in Fig. 46) is controlled by the synchronization device previously described.
5.4.6 Phase Estimation
The phase estimator estimates the phase of the transmitted signal, which must be known
in order to demodulate the signal. Phase estimation, especially in the low SNR region,
is a hard problem and several different techniques are available. The phase
estimation algorithm used in this report is as follows. Using a complex baseband
representation, the sub-sampled matched filter output is a sequence of the form

$\ldots,\; e^{\,j(\theta_{n-1}+\varphi)},\; e^{\,j(\theta_{n}+\varphi)},\; e^{\,j(\theta_{n+1}+\varphi)},\; e^{\,j(\theta_{n+2}+\varphi)},\;\ldots$   (5.2)

where $\theta_n \in \{\pm\pi/4,\, \pm 3\pi/4\}$ is the information bearing phase of the n-th symbol and $\varphi$
is the unknown phase offset caused by the channel. If $\theta_n$ is known, which is the case
during the training sequence, the receiver can easily remove the influence of the
information in each received symbol by element-wise multiplication with the complex
conjugate of a QPSK modulated training sequence replica generated by the receiver.
The value of $\varphi$ can then easily be obtained by averaging over the sequence. In other
words, if $\{r(n)\}_{n=0}^{L-1}$ denotes the L received QPSK symbols (i.e. the received signal after
down-sampling) during the training sequence and $\{c(n)\}_{n=0}^{L-1}$ is the local replica of the
complex training sequence, an estimate of the unknown phase offset can be obtained as

$\hat{\varphi} = \arg\!\Big( \dfrac{1}{L}\sum_{k=0}^{L-1} r(k)\, c^{*}(k) \Big).$   (5.3)

The longer the training sequence, the better the phase estimate as the influence from
noise decreases. A longer training sequence, on the other hand, reduces the amount of
payload that can be transmitted during a given time.
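A minimal Matlab sketch of the estimator in Eq. (5.3), assuming the down-sampled training symbols r and the local replica c are already available (the variable names are illustrative):

    % Minimal sketch: phase offset estimate over the training sequence, Eq. (5.3)
    phiHat = angle( mean( r(:) .* conj(c(:)) ) );  % average out the QPSK information
    rCorr  = r(:).' * exp(-1j*phiHat);             % phase-compensated symbols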
5.4.7 Decision
The decision device is a threshold device comparing the I and Q channels, respectively,
with the threshold zero. If the decision variable is larger than zero, a logical 0 is decided,
and if it is less than zero, a logical 1 is decided.
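Continuing the sketch above, the decision rule can be written in two lines of Matlab; note that the 0/1 assignment follows the convention stated in the text (positive values map to a logical 0), matching the Gray mapping of Fig. 43.

    % Minimal sketch: threshold decision on the phase-corrected symbols rCorr
    bitsI = real(rCorr) < 0;                    % I channel: >0 -> 0, <0 -> 1
    bitsQ = imag(rCorr) < 0;                    % Q channel: >0 -> 0, <0 -> 1
    bitsOut = reshape([bitsI; bitsQ], 1, []);   % interleave I and Q bits again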

















6 Observations and Results
This chapter presents some exemplary simulation results along with some interesting
observations. First we look into the underwater acoustic channel, then we cover the
communication part of the system.

As discussed in chapters 2 and 3 above, the major impact in an underwater acoustic
channel is its multipath propagation. The desired goal is always to achieve high
data rates at a reasonable geometry of transmitter and receiver (a low BER is implied).
Here, the term geometry means the physical positioning of transmitter and receiver in
an underwater acoustic channel of depth D and infinite length. At shorter distances the
multipath reaches the receiver with a much longer delay relative to the direct path. This
statement may appear somewhat contrary to what one might expect, but it makes sense
on closer inspection: we are not speaking about the absolute time taken by each ray to
reach the receiver, but about the relative times of all other rays compared to the direct path.

Fig. 48 presents the simulation results for a particular environmental scenario, varying
the receiver location. This figure explains the impact of the distances (indirectly the grazing
angles, which play a major role) on the time delays of the multipath propagation for the
following environmental scenario. Here, wind speed and bottom type are not
included, as we represent only the time delay concept without any transmission
loss phenomenon.

Environmental Scenario (for Fig. 48)

Source location: (r_S, z_S) = (0, 20) m
Receiver locations: (r_R1, z_R1) = (x, 20) m, x = 10, 100, 500, 1000
Sound velocity: c = 1500 m/s
Water depth: D = 40 m
Salinity: S = 35 ppt
Water temperature: T = 14 °C
pH value: pH = 8




[Figure: four panels a)-d), each showing amplitude vs. relative travel time [s] and ray number for the eight rays]

Fig. 48: Simulation results showing relative travel times for various receiver locations of a sinc-pulse
without including any transmission loss phenomenon.

The relative travel times of all 8 rays compared to the direct path and the grazing angles for
each case are provided in the following (travel times in ms, angles in degrees).

a) (r_R1, z_R1) = (10, 20) m
T = [0 20.8207 20.8207 47.0817 47.0817 73.6106 73.6106 100.2081]
Angles = [0 75.9638 75.9638 82.8750 82.8750 85.2364 85.2364 86.4237]

b) (r_R2, z_R2) = (100, 20) m
T = [0 5.1355 5.1355 18.7083 18.7083 37.4700 37.4700 59.1197]
Angles = [0 21.8014 21.8014 38.6598 38.6598 50.1944 50.1944 57.9946]

c) (r_R3, z_R3) = (500, 20) m
T = [0 1.0650 1.0650 4.2397 4.2397 9.4656 9.4656 16.6508]
Angles = [0 4.5739 4.5739 9.0903 9.0903 13.4957 13.4957 17.7447]

d) (r_R4, z_R4) = (1000, 20) m
T = [0 0.5331 0.5331 2.1299 2.1299 4.7828 4.7828 8.4794]
Angles = [0 2.2906 2.2906 4.5739 4.5739 6.8428 6.8428 9.0903]

There is a huge difference in the relative travel times for very short distances of 10 m,
case (a), compared to a more typical range of 1000 m, case (d). This can be understood
when we observe the corresponding grazing angles for each case. In case (a) the
grazing angles are very high due to the short distance [refer to Eqs. (3.29) - (3.32)],
whereas in case (d) very low grazing angles are observed. Another observation is that
rays hitting only the surface or only the bottom, or following surface-bottom-surface and
bottom-surface-bottom paths, etc., have identical relative travel times and grazing angles.
This is due to the location of both transmitter and receiver at exactly half of the channel depth.

Here, we did not show any impact of reflection loss and spreading loss; only the time
delay concept has been the focus. From now on we refer mainly to the grazing angles to
explain the behaviour of the system, as the distances or lengths of each ray are included
when the grazing angle is calculated. The following simulation results are presented
exclusively to show the impact of the transmission loss (including time delays) on the
multipath propagation at various vertical depths of transmitter and receiver along with
various horizontal distances. Changing the vertical depths means placing the receiver not
exactly at half of the water channel depth but instead nearer to the bottom or nearer to the
surface, in order to see what exactly happens to the bottom and surface reflection
coefficients.

When the separation between transmitter and receiver is 10-200 m, the impact of
multipath propagation is not that large and thereby the receiver design complexity is
much reduced. In practical applications, however, the distance between transmitter and
receiver is generally desired to be beyond 500 m. So, all our simulation results are presented
considering a 1000 m separation between transmitter and receiver; nevertheless, we also
present some simulation results for the case where the receiver is at smaller distances from
the transmitter. Another criterion considered in the following simulation results is the change
of the transmitter and receiver vertical depths. These are presented only to show the impact
of the distances between transmitter and receiver. For example, when we think of a water
channel depth of 40 m and the receiver is located at a depth of 35 m, it signifies that there
would certainly be a larger amount of reflection of the signal from the surface compared to
the bottom. We always mean relative time delays with respect to the direct path when we
speak about time delays.

Environmental Scenario (for Fig. 49)

Source location: (r_S, z_S) = (0, y) m, y = 10, 35
Receiver locations: (r_R, z_R) = (x, y) m, x = 200, 1000
Sound velocity: c = 1500 m/s
Water depth: D = 40 m
Salinity: S = 35 ppt
Water temperature: T = 14 °C
pH value: pH = 8
Wind speed: v_w = 8 knots
Bottom type: bt = coarse silt

Fig. 49 presents simulation results including the transmission loss phenomenon for two
transmitter and receiver configurations at different locations. First we look into Fig. 49a
and 49b. This is the case where the receiver is placed at a shorter distance of 200 m and the
vertical depths of transmitter and receiver are swapped between 10 and 35 m. The complete
environmental scenario that has been chosen is given above. As said above, here we
observe only the direct path and the multipaths are almost completely suppressed. This is
due to the low reflection coefficients at high grazing angles, whereby the contribution of each
reflected ray becomes quite negligible. The transmission loss accounts for the number of
reflections (and thus the reflection coefficients) when a ray hits the boundaries, along with
the spreading loss (1/L). So, the direct path never suffers any reflection loss. Apart from
the direct path we observe the 3rd ray in both cases (a) and (b), but not the 2nd ray. This is
due to the near-zero reflection of the 2nd ray when it hits the surface.

R1 – surface reflection coefficient
R2 – bottom reflection coefficient



a) (r_S1, z_S1) = (0, 10) m and (r_R1, z_R1) = (200, 35) m
Angles = [7.1250 12.6804 9.9262 15.3763 27.6995 32.0054 29.8989 34.0193]
R1 = [0.2766 0.0179 0.0835 0.0028 0.0000 0.0000 0.0000 0.0000]
R2 = [0.2002 0.0591 0.1084 0.0342 0.0427 0.0481 0.0457 0.0501]

b) (r_S2, z_S2) = (0, 35) m and (r_R2, z_R2) = (200, 10) m
Angles = [-7.1250 12.6804 9.9262 27.6995 15.3763 32.0054 29.8989 42.7688]
R1 = [0.2766 0.0179 0.0835 0.0000 0.0028 0.0000 0.0000 0.0000]
R2 = [0.9772 0.0591 0.1084 0.0427 0.0342 0.0481 0.0457 0.0559]


[Figure: four panels, each showing amplitude vs. relative travel time [s] and ray number:
a) (r_S1, z_S1) = (0, 10) m and (r_R1, z_R1) = (200, 35) m
b) (r_S2, z_S2) = (0, 35) m and (r_R2, z_R2) = (200, 10) m
c) (r_S3, z_S3) = (0, 10) m and (r_R3, z_R3) = (1000, 35) m
d) (r_S4, z_S4) = (0, 35) m and (r_R4, z_R4) = (1000, 10) m]

Fig. 49: Simulation results showing relative travel times for various transmitter and receiver locations of
a sinc-pulse including the transmission loss phenomenon.

Coming to Fig. 49c and 49d, we certainly see the impact of the multipath growing to a greater
extent as the separation between transmitter and receiver increases to 1000 m. In Fig. 49c,
the 4th ray hits the surface twice and the bottom once, i.e. S-B-S, and the 5th ray hits the
surface once and the bottom twice, i.e. B-S-B. From the following results it is observed that
the 4th ray grazes at an angle of 3.1481° and the 5th ray at 5.9941°. This leads to lower
reflection coefficients for the 5th ray compared to the 4th ray. Similarly, in Fig. 49d, the 5th
ray hits the surface once and the bottom twice, i.e. B-S-B, and the 4th ray hits the surface
twice and the bottom once, i.e. S-B-S. From the results provided in d) it is observed that the
5th ray grazes at an angle of 3.1481° and the 4th ray at 5.9941°. This leads to lower
reflection coefficients for the 4th ray compared to the 5th ray.

Here, we see another interesting observation: in Fig. 49c the 4th ray has a larger amplitude
compared to the 5th ray, and exactly the opposite is seen in Fig. 49d. This is due to the
swapping of the vertical placements of transmitter and receiver between 10 and 35 m.
The simulation results for the grazing angles and reflection coefficients are:

c) (r_S3, z_S3) = (0, 10) m and (r_R3, z_R3) = (1000, 35) m
Angles = [1.4321 2.5766 2.0045 3.1481 5.9941 7.1250 6.5602 7.6884]
R1 = [0.9492 0.8447 0.9028 0.7773 0.4021 0.2766 0.3361 0.2242]
R2 = [0.7191 0.5533 0.6306 0.4858 0.2568 0.2002 0.2266 0.1769]

d) (r_S4, z_S4) = (0, 35) m and (r_R4, z_R4) = (1000, 10) m
Angles = [-1.4321 2.5766 2.0045 5.9941 3.1481 7.1250 6.5602 10.4812]
R1 = [0.9492 0.8447 0.9028 0.4021 0.7773 0.2766 0.3361 0.0630]
R2 = [0.9772 0.5533 0.6306 0.2568 0.4858 0.2002 0.2266 0.0960]

Fig. 50 represents another simulation result to show the impact of the multipath at a
slightly lower wind speed of 6 knots and with a different bottom type (very fine sand,
bottom type value 4).

Environmental Scenario (for Fig. 50)

Source location: (r_S, z_S) = (0, x) m, x = 10, 35
Receiver locations: (r_R, z_R) = (1000, x) m
Sound velocity: c = 1500 m/s
Water depth: D = 40 m
Salinity: S = 35 ppt
Water temperature: T = 14 °C
pH value: pH = 8
Wind speed: v_w = 6 knots
Bottom type: bt = very fine sand




[Figure: two panels showing amplitude vs. relative travel time [s] and ray number:
a) (r_S1, z_S1) = (0, 35) m and (r_R1, z_R1) = (1000, 10) m
b) (r_S1, z_S1) = (0, 10) m and (r_R1, z_R1) = (1000, 35) m]

Fig. 50: Simulation results showing relative travel times for two different vertical depths of transmitter
and receiver of a sinc-pulse including the transmission loss phenomenon.


By now it is understood that the multipath dominates when the separation between the
transmitter and receiver increases and that it also varies with the vertical positions of the
transmitter and receiver. Added to this, when there is a lower wind speed and a soft
bottom, it becomes even worse. The difference can be clearly observed between Fig. 49c,d
and Fig. 50. Here also the same behaviour of the amplitude difference is observed for the
4th, 5th, 6th, 7th rays, etc., as the geometry is different in both cases.

Finally, there is the case of constructive and destructive interference of the multipath.
When the multipath adds to the direct path in accordance with its phase, we have
constructive interference, otherwise a destructive one. So, sometimes even if the
multipath is not dominant, one may still have a poor BER.


Having discussed the multipath propagation in the underwater acoustic channel and all the
channel effects, we now move to the communication part of the system. In communications,
the desired goal is always to achieve the maximum signal-to-noise ratio. In the underwater
acoustic channel the noise appears in two forms: one is the ambient noise discussed in
chapter 2 and the other is the multipath itself. We can also say that here the signal itself acts
as noise, as the multipath is nothing but delayed versions of the direct path generated from
our own signal. So, whenever we refer to the SNR here, we imply the ratio between the
signal strengths of the direct path and the multipath. The following
are some simulation results which show the bit error ratio (BER) for the direct path only
and for multipath propagation, for two different wind speeds and bottom types.

DIRECT PATH

1. Environmental Scenario (for Fig. 51)

Source location: (r_S, z_S) = (0, 10) m
Receiver location: (r_R, z_R) = (1000, 35) m
Sound velocity: c = 1500 m/s
Water depth: D = 40 m
Salinity: S = 35 ppt
Water temperature: T = 14 °C
pH value: pH = 8
Wind speed: v_w = 6 knots
Bottom type: bt = coarse silt

Fig. 51 represents the BER plot for the direct path only. As one can imagine, when only the
direct path is transmitted, there is no multipath interference present; the signal only suffers
an attenuation of its strength. So, the BER of the direct path is 0.

[Figure: bit error rate vs. Eb/No values]

Fig. 51: BER plot of the direct path for the above Environmental scenario 1.

In the following, two cases of multipath propagation have been considered. One is for a
softer, mud-like bottom type (coarse silt) at a lower wind speed and the other is for a sand
bottom type (very fine sand) at a somewhat higher wind speed. In case 1 the BER is much
higher than in case 2, as expected. This is due to the stronger reflections at lower wind
speeds and for softer bottom types.

MULTI-PATH PROPAGATION
Case 1
1. Environmental Scenario (for Fig. 52)

Source location: (r_S, z_S) = (0, 10) m
Receiver location: (r_R, z_R) = (1000, 35) m
Sound velocity: c = 1500 m/s
Water depth: D = 40 m
Salinity: S = 35 ppt
Water temperature: T = 14 °C
pH value: pH = 8
Wind speed: v_w = 6 knots
Bottom type: bt = coarse silt

[Figure: bit error rate vs. Eb/No values, a) linear scale, b) log scale]

Fig. 52: BER plots for multipath propagation for the above environmental scenario, case 1:
a) linear scale, b) log scale.



Case 2

2. Environmental Scenario (for Fig. 53)

Source location: (r_S, z_S) = (0, 10) m
Receiver location: (r_R, z_R) = (1000, 35) m
Sound velocity: c = 1500 m/s
Water depth: D = 40 m
Salinity: S = 35 ppt
Water temperature: T = 14 °C
pH value: pH = 8
Wind speed: v_w = 8 knots
Bottom type: bt = very fine sand

[Figure: bit error rate vs. Eb/No values, a) linear scale, b) log scale]

Fig. 53: BER plots for multipath propagation for the above environmental scenario, case 2:
a) linear scale, b) log scale.

From Figs. 52 and 53 it can be observed that for higher wind speeds and rougher bottom
types, the strengths of all the rays constituting the multipath propagation are reduced. In
these situations the communication aspect becomes easier compared to the acoustic channel
aspect. In practical applications, however, lower wind speeds are also present, which makes
the communication design harder. So, we should always consider wind speeds in the range
of 0-20 knots for the desired underwater acoustic applications, and the communication
system should be designed to be robust even at low wind speeds.



Constellation Diagrams

The following constellation diagrams show an error-free transmission for the direct path,
Fig. 54, and errors for the multipath case, Fig. 55.


Fig. 54: Received QPSK states for direct path



Fig. 55: Received QPSK states for multi path








7 Summary and Concluding Remarks
Some underwater acoustic applications, like simple status reports or the transfer of time-
position co-ordinates, may require a bit rate of 100 bit/s. But in several other
applications, like seafloor mapping and some military applications, bit rates of several
kbit/s are required due to the transfer of large images. As an initial step to
explore systems for communication that have the potential of transferring data at rates
of multiple kbit/s over distances of several kilometres underwater, we have
developed this simulation tool.

This simulation tool is designed for communication using Quadrature Phase Shift
Keying (QPSK) modulation techniques in an Underwater Acoustic Channel (UAC). It
mainly consists of a transmitter, the UAC and a receiver. It provides a thorough insight into
the various problems encountered in an underwater sound channel and also explains
the degradation of the bit error rate (BER) due to channel variations and the presence of
multipath propagation.

All the oceanographic acoustic fundamentals have been considered in depth while
modelling the UAC. QPSK modulation techniques have been employed for the
transmitter and receiver. This tool works with a very low BER for the direct path even
at higher bit rates and is also robust for all channel variations. In short we can
summarize the following about what this simulation model provides:

a thorough insight into the complexity of an underwater acoustic channel.
the ability to design and analyse time invariant equalizers with sensitivity to
equalizer mismatch.
gives the flexibility to change the carrier frequency.

This tool shows the practically poor BER caused by multipath propagation and produces
satisfactory results at bit rates in the range of 1-2 kbit/s. The robustness of the system
against multipath propagation decreases drastically when the channel variations get
worse. The simulation tool developed here was developed for fixed transmitter and receiver
locations. As explained in this report, the presence of multipath causes intersymbol
interference (ISI) that destroys the message, due to the different travel times of the different
rays. Depending on the particular underwater sound channel in question, the ISI can
extend over tens or even hundreds of symbols. A solution to this problem might be to
employ an adaptive equalizer in the simulation tool (here, adaptive is used as we refer to
a moving transmitter and receiver). An equalizer can be viewed as an inverse filter to
the channel. Nevertheless, in practical situations even the employment of an
equalizer would not solve the problem of transferring high bit rates. This leads us to
think of employing modulation techniques like Orthogonal Frequency Division
Multiplexing (OFDM). So, our future outlook for the extension of this simulation tool
would be:
would be:

Incorporation of moving transmitter and receiver.
Model validation with measurements.
Investigation of adaptive single input multiple output (SIMO) equalization.
Application of orthogonal frequency division multiplex (OFDM) communication.

























Appendix

The following schematic diagram of the simulation gives a complete idea of how the main
program is structured into functions and sub-functions. Following the schematic
diagram, the complete Matlab code is provided for each function as stated
in the diagram.
















































Fig. 56: Schematic diagram for Simulation

Simulation code

Main.m
tansmitter.m
root_raised_cosine.m
training_sequence.m
qpsk.m
random_data.m
underwater_acoustic_channel.m
channel.m
attenuation.m
loss.m
ambient_noise.m
SRC.m
BRC.m
receiver.m
phase_estimation.m
detect.m




References

[1] F.B. Jensen, W.A. Kuperman, M.B. Porter and H. Schmidt, Computational Ocean
Acoustics (Springer-Verlag, New York, 2000).
[2] H.G. Urban, Handbook of Underwater Acoustic Engineering (STN Atlas Elektronik
GmbH, Bremen, 2002).
[3] H. Medwin and C.S. Clay, Fundamentals of Acoustical Oceanography (Academic Press,
San Diego, 1998).
[4] John G. Proakis, Digital Communications, fourth edition (McGraw-Hill, New York, 2001).
[5] Johnny R. Johnson, Introduction to Digital Signal Processing (Prentice-Hall of India Pvt.
Ltd, New Delhi, 1996).
[6] L.M. Brekhovskikh and Yu. P. Lysanov, Fundamentals of Ocean Acoustics (Springer-
Verlag, second edition).
[7] Simon Haykin, An Introduction to Analog & Digital Communications (John Wiley &
Sons, Singapore, 1994).
[8] www.complextoreal.com
[9] http://literature.agilent.com
[10] http://www.kth.se
[11] 51st Open Seminar on Acoustics, Gdansk 2004.
