
1. ABSTRACT

Software defined radio (SDR) has emerged from obscurity to be heralded in recent years as
offering a potential solution to our historical and continued inability to achieve common
global communication standards. Software defined radio offers a highly flexible system
that can work with different communication standards. Moreover, it allows new features to
be added to the system without requiring the underlying architecture to be changed.
SDR puts as many of a transmitter's and receiver's functions as possible into software
running on a high-speed digital computer. A receiver can be very simple: an antenna, an
analog-to-digital converter chip and a computer. A transmitter is a computer plus a
digital-to-analog converter and a power amplifier. The circuits are straightforward; all the
complicated operations are performed in software.
Software defined radio is a way of handling many of the functions of ordinary radio
receivers and transmitters by converting electrical signals to and from streams of numbers
(digits) and processing them in a computer. The computer can be either a highly
specialized digital signal processor (DSP) chip or a general-purpose personal computer
(PC). In either case, the operations are controlled by software (sometimes called firmware).
Because its behaviour is defined in software, an SDR can evolve with the technology and
absorb new standards with little change to the underlying architecture.

2. INTRODUCTION

From the first wireless transmissions around 1890, radio transmission techniques have
continually evolved, giving users the possibility to stay connected at ever-increasing
transmission rates. The triumphant radio era came first, in the mid-1930s, at a time when
limited bandwidths were used for analog voice communications. Then came the golden
era of broadcast transmission in the 1950s, with analog television broadcasts that consumed
more bandwidth but provided a richer customer experience. As computers became smaller
and more powerful through the 1960s, they began to be used as a medium for long-distance
communication, both over wired connectivity via ARPANET (which later became the
Internet) and over wireless links via ALOHAnet.

Cell phones also emerged around this time, allowing users to establish wireless voice
communications from any public place or vehicle, although the original mobiles were hard
to operate and to travel with, given their volume and weight. Many modern phones are now
almost portable computers, providing access to both cellular networks and the Internet, and
achieving wireless communications at speeds that were unimaginable a generation ago.

Despite the growth achieved by these technologies, an interesting and potentially
problematic issue common to all of the devices mentioned is that their radios and protocols
are mostly hardware based. Reprogramming or reconfiguration options are therefore
minimal, at least with regard to radio functions. This lack of flexibility is troubling: if an
error occurs in the hardware, firmware, or software, there is generally no reasonable way to
correct the problem, and the built-in vulnerabilities are not easy to remove. These devices
are commonly limited in their functionality to their hardware components and cannot be
reconfigured to support wireless protocols beyond what the hardware itself provides.

Software Defined Radio (SDR) is a design paradigm for wireless communications devices.
Its creator, Joseph Mitola, coined the term in the early 1990s to identify a class of
radios that could be reprogrammed and reconfigured through software. Mitola envisioned
an ideal software defined radio whose only physical components on the receiver side were
an antenna and an Analog-to-Digital Converter (ADC). Likewise, the transmitter would
have only a Digital-to-Analog Converter (DAC) and a transmitting antenna. The rest of the
functions would be handled by reprogrammable processors. As the ideal conceived in the
1990s is still not achievable, and is not likely to be for some time, the term SDR is used to
describe a viable device that is primarily defined by software, but includes significant hardware
components. Even with these components, the SDR receiver is quite different from a
traditional receiver.

As noted above, software defined radio handles many of the functions of ordinary radio
receivers and transmitters by converting electrical signals to and from streams of numbers
(digits) and processing them in a computer, whether a highly specialized digital signal
processor (DSP) chip or a general-purpose personal computer (PC), with the operations
controlled by software (sometimes called firmware). The spectrum of software defined
radio products covers a wide range of sizes, shapes and capabilities.

3. LITERATURE STUDY

“Software defined radio technology - next generation intelligent Radios”, Sandeep Kaur,
Vaishali Bahl.
This paper gives an overview of Software Defined Radio and of the technologies that are
important for an SDR communication system. I have gathered most of the features it
describes and added some missing ones to my paper.
“Software defined radio”, Marwa Mamoun Abdelgadir Abdelrahman.
This paper analyses software defined radio and its programming background. I have
followed each step mentioned in this paper, since it focuses mainly on the technology
related to SDR in both hardware and software.
“Reconfigurable Software Defined Radio and Its Applications”, Chi-Yuan Chen, Fan-Hsun
Tseng, Kai-Di Chang, Han-Chieh Chao and Jiann-Liang Chen.
In this paper the authors propose an SDR platform with digital data communication
capability and describe applications of SDR in different sectors. Some of the applications
mentioned are no longer in use, so I have combined my own findings with this paper to
show the possible applications where SDR is used.

4. TECHNOLOGY

The aim is to develop a software defined radio that can be used for reception and
transmission of radio signals. This is done by implementing the RF front end and the
digital modulation and demodulation parts of the transceiver system shown in Figure 4.1
(typical transceiver system).

Figure: 4.1 Typical transceiver system

Figure 4.2 (Receiver Subsystem) and Figure 4.3 (Transmitter Subsystem) illustrate the
general structure of the receiver and transmitter subsystems. The soundcard is used to
perform the conversion from analog to digital and from digital to analog, and all processing
is carried out by the PC. Consequently, the system must contain and perform the following:
1. An RF front end to perform all analog operations, generate in-phase and quadrature
projections of the received signal, and pass them to the microphone jack.

2. Reading the signals directly from the sound card.

3. Modulation or demodulation of the input signals.

4. Delivering the modulated or demodulated signal to the sound card, from which the
signal can be taken at the speaker jack.

5. Giving the user the ability to change the settings of the software and specify the
mode of operation (modulation, demodulation, others …) and the type of each.

Figure: 4.2 Receiver Subsystem

Figure: 4.3 Transmitter Subsystem

The system design is divided into hardware and software design. Both require knowledge
of PC sound card access, so the sound card is discussed first and then the hardware and
software designs are presented.
PC Sound Card
Every PC nowadays comes with a sound card; they vary immensely in sound quality,
features and input/output options. Most standard PC soundcards allow the incoming signal
to be sampled at rates up to 44.1 kHz per channel. High-end professional soundcards allow
higher sampling rates, typically 96 kHz and above. Most common soundcards operate
using 16-bit A/D converters, while high-end soundcards use 24-bit A/D converters, allowing
a greater range of output values. These limits exist because the sound card is intended to
process only audio signals. Sound cards also contain a built-in anti-aliasing filter and a
reconstruction filter for their A/D and D/A converters respectively.
The sound card inputs audio signals from the microphone jack and outputs them to the
speakers through its A/D and D/A converters.
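As a small illustrative calculation: a 16-bit converter resolves 2^16 = 65,536 distinct
amplitude levels per sample, while a 24-bit converter resolves 2^24 ≈ 16.8 million, which is
why the higher bit depth gives a greater range of output values.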

Software Defined Radio Ideal Architecture
A software defined radio consists, for the most part, of the same basic functional blocks as
any digital communication system. Software defined radio places new demands on many of
these blocks in order to provide the multi-band, multi-service operation and
reconfigurability needed for supporting various air-interface standards. To achieve the
required flexibility, the boundary of digital processing should be moved as close as
possible to the antenna, and the application-specific integrated circuits used for baseband
signal processing should be replaced with programmable implementations.
An ideal software defined radio would have a very minimal analog front end, as illustrated
by Figure 2.1 (Ideal transceiver architecture); note that the A/D converter is assumed to have
a built-in anti-aliasing filter and that the D/A is assumed to have a built-in reconstruction
filter. The system consists of the following:
Digital signal processor: the modulation scheme, channelization, protocols, and
equalization for transmit and receive are all determined in software within the digital
processing subsystem. This is shown containing a DSP in Figure 2.1 Ideal transceiver
architecture.

Circulator: used to isolate the transmit and receive path signals so that a single antenna can
be used for both the transmitter and the receiver.

Amplifier: two amplifiers are used. One provides enough power in the signal for
transmission after the D/A conversion; the other amplifies the incoming signal from the
antenna before passing it to the A/D. Both amplifiers must have linear characteristics to
prevent signal distortion.
Anti-Aliasing and Reconstruction Filters: an anti-aliasing filter is included before the A/D
converter to prevent the generation of image signals, because once an aliased image is
created in the sampling process, no amount of further processing can distinguish between a
true signal and an aliased one.
The filter must be a low-pass filter with a cutoff at or below half the sampling frequency of
the A/D converter, and it must have linear characteristics in order to preserve the signal's
properties.
For the same reason, the output of a D/A converter requires a low-pass analog filter, called
a reconstruction filter, as the output signal must be band limited to prevent aliasing (here
meaning Fourier coefficients being reconstructed as low-frequency waves, not as higher-
frequency aliases). Ideally, both filters should be brick-wall filters: constant phase delay in
the pass band with a constant flat frequency response, and zero response above the Nyquist
frequency.
Antenna: it is used to radiate and receive signals.

A/D converter: converts the incoming analog signal into digital form so that digital signal
processing techniques can be applied to it. The converter must have a sampling rate of at
least twice the maximum incoming signal frequency, according to the sampling theorem.
A/D converters with high sampling rates for high-frequency signals are quite expensive and
are not usually available in the local markets. A practical solution is to down-convert the
incoming signal to a suitable frequency range using a mixer stage before the converter.
D/A converter: converts the digital signal into analog form so that it can be transmitted by
the antenna. This component is chosen depending on the required frequency band of the
output signals, and its sampling rate must also satisfy the sampling theorem.
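As a small worked example using the soundcard figures quoted earlier: the sampling
theorem requires fs ≥ 2·fmax, so a soundcard sampling at 44.1 kHz can only represent
signals below 22.05 kHz; this is exactly why the receiver's RF front end must mix the
incoming signal down into the audio range before it reaches the soundcard.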

Hardware Design
The hardware design consists of designing a RF front end for receiver and transmitter
subsystems. These are discussed in the following.
Receiver RF Front End
In this stage all the analog operations are performed to get the signal ready for the next
stage, in which the signal is converted to digital form. The RF front end consists of the
following components:
1. Antenna
An antenna is used for reception of the radio-frequency signal.

Figure: 4.4 Receiver RF front end block diagram

2. Pre-Selector Filter
A simple band-pass filter is used to select the frequency band required for reception of
radio signals. A tank circuit with a variable inductor and a variable capacitor is used to
change the frequency range. The circuit designed is shown in Figure 4.5. The center
frequency of the circuit is determined by the following equation:

Eqn. 4.1:  $f_0 = \dfrac{1}{2\pi\sqrt{LC}}$

Figure: 4.5 Pre-selector filter

3. Mixer
An external analog mixer is required to down-convert the signal frequency to low
frequencies compatible with the sound card sampling rate.
A simple double-diode mixer is used, as shown in Figure 4.6 (Mixer). The mixer has two
inputs: one is the RF input signal and the other is taken from a variable-frequency
oscillator.

Figure: 4.6 Mixer
The mixer produces two frequency components: one at the sum of the two input signal
frequencies and one at their difference. Only the lower-frequency component is passed on
through the anti-aliasing filter; this is what is referred to as frequency down-conversion.
Writing the RF input and the local oscillator signal as

Eqn. 4.2:  $v_{RF}(t) = A\cos(2\pi f_{RF} t)$

Eqn. 4.3:  $v_{LO}(t) = B\cos(2\pi f_{LO} t)$

the mixer output is

Eqn. 4.4:  $v_{RF}(t)\,v_{LO}(t) = \frac{AB}{2}\left[\cos\big(2\pi (f_{RF}-f_{LO})t\big) + \cos\big(2\pi (f_{RF}+f_{LO})t\big)\right]$

And after filtering only the difference term remains:

Eqn. 4.5:  $\frac{AB}{2}\cos\big(2\pi (f_{RF}-f_{LO})t\big)$

4. Variable Frequency Oscillator


The variable-frequency oscillator used is the laboratory signal generator, which can
generate signals from DC up to a frequency of 1 GHz in square, triangular and sinusoidal
waveforms. The sinusoidal waveform is used here.

5. Phase Transformer
To produce the 90° phase shift, the circuit shown in Figure 4.7 is used. The circuit
must be tuned to a specific frequency in order to give the required phase shift, so a
potentiometer is used: by varying the potentiometer, the phase shift is adjusted until the
required shift is obtained at the operating frequency. The capacitor and potentiometer
values depend on the following equation.
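A hedged sketch of such a relation, assuming a first-order RC all-pass (phase-shift) stage
(this particular network is an assumption, not stated in the report), with R the potentiometer
resistance and C the capacitor:

$\phi(f) = -2\arctan(2\pi f R C)$, giving $|\phi| = 90^{\circ}$ at $f = \dfrac{1}{2\pi R C}$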

Figure: 4.7 Phase Transformer

Transmitter RF Front End


The transmitter RF front end contains most of the components used for the receiver part.
The signal is taken from the speaker jack, which carries the left and right channel
components. The transmitter RF front end (shown in Figure 4.8 Transmitter RF front end
block diagram) consists of the following:

Figure: 4.8 Transmitter RF front end block diagram

1. Mixer
Here the mixer is used to up-convert the signal from baseband to higher frequencies
suitable for transmission. For a baseband tone of frequency $f_m$ and a carrier of
frequency $f_c$ this is described by:

Eqn. 4.6:  $\cos(2\pi f_m t)\,\cos(2\pi f_c t) = \frac{1}{2}\left[\cos\big(2\pi (f_c - f_m)t\big) + \cos\big(2\pi (f_c + f_m)t\big)\right]$

2. Variable Frequency Oscillator


The variable frequency oscillator is also needed here to generate the carrier. The
device used here is the laboratory function generator.

3. Antenna:
The same reception antenna is used for the transmitter.

Software Design
When discussing the software part of the system, there are two important issues to be
determined: how the software will interact with the sound card for audio I/O, and the
programming language and approach to be used. Both are discussed in the following
sections.
1. Sound Card Access
To develop a real-time audio application it is necessary to obtain the audio data
directly from the sound card through the operating system (OS). Since there are
different operating systems and many hardware devices, each with its own driver,
there is a wide variety of ways to access the audio data. These access points
are commonly called application programming interfaces (APIs). Software access
to the sound card is provided by various APIs, which differ within and between platforms.
For a cross-platform application, one that works on different platforms or operating
systems, it is necessary to be able to use all of these different APIs. In this case an
additional, purely software-oriented API is used to create one access point that interfaces
to a number of different system APIs; examples include CLAM, OpenAL, pureData,
STK and PortAudio. Most of these APIs are licensed under the General Public
License (GPL), so they are free to use within its terms.
The API used for the software design is the PortAudio library, because it causes the
smallest overhead for this application and is easy to use.

PortAudio is a free audio I/O library meant to be cross-platform (including Windows,
Macintosh (8, 9, X), Unix (OSS), SGI, and BeOS) with a very easy-to-use API for
recording and/or playback. It is a simple API and does not provide functions such as
mixing, filtering or analysis.

Figure: 4.9 Typical Computer Architecture

Software Implementation
The system software is divided into five modules; each module is represented by a C++
class consisting of its own data and functions, and the functions operate on the data to
perform a specific task. These classes are the Audio class, Modulation class, Demodulation
class, Filter class and GUI class.

Figure: 4.10 Receiver RF front end circuit schematic

Figure: 4.11 Transmitter RF front end circuit schematic

Audio Class
This class is responsible for audio I/O operations using PortAudio library functions, as
illustrated by Figure 4.12 (Audio stream flow). Using PortAudio is quite easy; the software
comes as source code, so it was compiled as a dynamic link library (DLL), after which the
header was included and the DLL was linked to the project. The two-channel (stereo) mode
of the soundcard was used: the first recorded block is the right channel, the next one the
left, and so on. In order to run PortAudio, the following steps were taken:
• PortAudio was initialized.

• The stream was managed.

• The manipulation of the audio data was performed, depending on the mode that the
user selects.

• The stream was then terminated.

The main part of initialization is to set the basic parameters for the audio, such as input
channels, output channels, data type, sample rate and frames per buffer; the callback
function then performs the audio manipulation. PortAudio always delivers the audio data
in blocks, whose size should be chosen as small as possible.
The actual signal processing is done in the manipulation part. The task performed by the
callback function is to call the subroutines that carry out the processing requested by the
user.

Figure: 4.12 Audio stream flow
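A minimal sketch of these steps using the PortAudio API (the callback name onAudio and
the 44.1 kHz stereo parameters are illustrative assumptions, not the report's actual code;
error checking is omitted for brevity):

#include <portaudio.h>

// Callback: PortAudio calls this whenever a block of samples is ready.
// Here the "processing" is a simple pass-through from input to output.
static int onAudio(const void* input, void* output,
                   unsigned long frames,
                   const PaStreamCallbackTimeInfo*, PaStreamCallbackFlags,
                   void* /*userData*/)
{
    const float* in  = static_cast<const float*>(input);
    float*       out = static_cast<float*>(output);
    for (unsigned long i = 0; i < frames * 2; ++i)   // 2 = stereo (L/R interleaved)
        out[i] = in ? in[i] : 0.0f;                  // real code would modulate/demodulate here
    return paContinue;
}

int main()
{
    Pa_Initialize();                                 // 1. initialize PortAudio

    PaStream* stream = nullptr;
    Pa_OpenDefaultStream(&stream,
                         2, 2,                       // stereo input, stereo output
                         paFloat32,                  // samples are floats in [-1, 1]
                         44100,                      // sample rate in Hz
                         256,                        // frames per buffer (kept small)
                         onAudio, nullptr);          // 2. open the stream with the callback

    Pa_StartStream(stream);                          // 3. audio now flows through onAudio()
    Pa_Sleep(5000);                                  // let it run for five seconds
    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();                                  // 4. shut PortAudio down
    return 0;
}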

Modulation Class
An instance of this class (an object) is created when the user chooses to start a modulation
process. The carrier frequency, modulation type and sampling rate values are then
collected from the GUI and sent to the constructor to be initialized. According to the
modulation type chosen by the user, one of the following modulation techniques is
applied: AM, DSB, ASK, FM, BFSK or BPSK. For each type there is a function that
implements the operations needed by that specific modulation type. Since C++ is a high-
level language, representing these operations, which are just mathematical equations, is
straightforward.
One important issue taken into account is control of the phase increment, so that it does
not exceed the float number range. Since sin(θ + 2π) is the same as sin(θ), whenever the
accumulated phase exceeds 2π, 2π is subtracted from it; this ensures that the phase
accumulator always stays within the float range.
Another consideration is that sample values are represented by floats in the range -1 to 1,
so whenever the output of the modulation would exceed 1 or fall below -1, it must be
normalized back into that range.
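A minimal sketch of this phase-accumulator idea for AM (the function name
amModulateBlock and its parameters are hypothetical, not taken from the report's code):

#include <cmath>
#include <cstddef>
#include <vector>

// Generates one block of AM samples. The carrier phase is kept in [0, 2*pi)
// so the accumulator never grows without bound, and the output is scaled so
// it stays inside the [-1, 1] range expected by the sound card.
std::vector<float> amModulateBlock(const std::vector<float>& message,
                                   double carrierFreq, double sampleRate,
                                   double& phase)             // phase persists between blocks
{
    const double kTwoPi = 6.283185307179586;
    const double phaseStep = kTwoPi * carrierFreq / sampleRate;
    std::vector<float> out(message.size());

    for (std::size_t n = 0; n < message.size(); ++n) {
        // Standard AM: (1 + m(t)) * cos(phase), with m(t) in [-1, 1]
        double sample = (1.0 + message[n]) * std::cos(phase);
        out[n] = static_cast<float>(sample / 2.0);            // normalize into [-1, 1]

        phase += phaseStep;
        if (phase >= kTwoPi)                                  // wrap: cos is periodic in 2*pi
            phase -= kTwoPi;
    }
    return out;
}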

Demodulation Class
An instance of this class (an object) is created when the user chooses to start a demodulation
process. The filter bandwidth, modulation type and sampling rate values are then collected
from the GUI and used to initialize the data of the created object. According to the
modulation type chosen by the user, one of the following demodulation techniques is
applied: AM, ASK, FM, BFSK or BPSK. For each type there is a function that implements
the operations needed by that specific demodulation type. Since C++ is a high-level
language, representing these operations, which are just mathematical equations, is
straightforward.
Using the I/Q representation of the modulated signal, any signal can be demodulated: for
AM, ASK and BPSK it is simply $I^{2}+Q^{2}$ (the squared envelope), while for FM and
BFSK the quadricorrelator method is used. For digital demodulation, a multi-level decision
is used to determine whether the received bit is a 0 or a 1.
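An illustrative sketch of these two ideas (the function names are hypothetical, and the
quadricorrelator is shown in one common cross-product form that may differ from the
report's exact implementation):

#include <cmath>
#include <cstddef>
#include <vector>

// AM/ASK: recover the envelope from the in-phase (I) and quadrature (Q)
// components, then threshold it to decide between bit 0 and bit 1 (ASK).
std::vector<int> askDemodulate(const std::vector<float>& I,
                               const std::vector<float>& Q,
                               float threshold)               // decision level, e.g. 0.5
{
    std::vector<int> bits(I.size());
    for (std::size_t n = 0; n < I.size(); ++n) {
        float envelope = std::sqrt(I[n] * I[n] + Q[n] * Q[n]);
        bits[n] = (envelope > threshold) ? 1 : 0;
    }
    return bits;
}

// FM/BFSK: a cross-product (quadricorrelator-style) discriminator whose output
// is proportional to the instantaneous frequency deviation of the signal.
std::vector<float> fmDiscriminate(const std::vector<float>& I,
                                  const std::vector<float>& Q)
{
    std::vector<float> freq(I.size(), 0.0f);
    for (std::size_t n = 1; n < I.size(); ++n) {
        float cross = I[n - 1] * Q[n] - Q[n - 1] * I[n];      // ~ sin(phase change)
        float power = I[n] * I[n] + Q[n] * Q[n];              // normalize by signal power
        freq[n] = (power > 0.0f) ? cross / power : 0.0f;
    }
    return freq;
}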

Filter Class
An instance of this class (an object) is created when the user chooses to start a filtering
process. The filter bandwidth, filter type and sampling rate values are then collected
from the GUI and used to initialize the data of the created object. According to the chosen
type, a low-pass (LPF), high-pass (HPF), band-pass (BPF) or band-reject (BRF) filter is
applied to the samples. The filter chosen for implementation is an FIR filter with a
Blackman window. Such filters are slow, because they must perform convolution, but they
have high selectivity.

When using a filter in a real-time environment one problem occurs: the resulting signal
contains a periodic clipping noise. This is caused by the filter's transient (settling)
behaviour at the start of each block. To avoid this problem, the values of the delay lines
are saved after every filtering pass, so they can be used to initialize the filter for the next
block. This guarantees a smooth crossover from one block to the next, and settling takes
place only during the first initialization of the filter, which is negligible.
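A minimal sketch of this block-filtering scheme with a persistent delay line (the class name
FirFilter and its interface are illustrative assumptions, not the report's code):

#include <cstddef>
#include <vector>

// FIR filtering one block at a time. The tail of each input block is saved and
// prepended to the next one, so block N+1 continues seamlessly from block N and
// the periodic settling (clipping) noise described above does not appear.
class FirFilter {
public:
    explicit FirFilter(std::vector<float> taps)               // taps must be non-empty
        : taps_(std::move(taps)), delay_(taps_.size() - 1, 0.0f) {}

    // Filters one block in place; the delay line carries over to the next call.
    void processBlock(std::vector<float>& block) {
        std::vector<float> work(delay_);                      // saved tail of previous block
        work.insert(work.end(), block.begin(), block.end());

        for (std::size_t n = 0; n < block.size(); ++n) {
            float acc = 0.0f;
            for (std::size_t k = 0; k < taps_.size(); ++k)    // direct-form convolution
                acc += taps_[k] * work[n + taps_.size() - 1 - k];
            block[n] = acc;
        }
        // Keep the last (taps - 1) input samples as the next block's initial state.
        delay_.assign(work.end() - static_cast<std::ptrdiff_t>(delay_.size()), work.end());
    }

private:
    std::vector<float> taps_;   // FIR coefficients (e.g. a Blackman-windowed sinc)
    std::vector<float> delay_;  // tail of the previous input block
};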

GUI Implementation
Through the GUI the user is able to select different modulation, demodulation and filter
types in addition to setting other parameters. Moreover, the GUI allows the user to save and
play back signals to or from a file, and to display these signals as on an oscilloscope. The
user interface window is shown in Figure 4.13.
The two important modes of the program are modulation and demodulation. When the
user checks the Mod Enable button on the window, enters the requested values and then
presses the Start button, the program runs in modulation mode. This process is described
by the flow chart shown in Figure 4.14. On the other hand, when the user checks Demod
Enable, the program runs in demodulation mode, as described by the flow chart in
Figure 4.15.

Figure: 4.13 GUI window

Figure: 4.14 Modulation mode operations

Figure: 4.15 Demodulation mode operations

5. APPLICATION

1. Amateur Radio

Figure: 5.1 Amateur Radio

Amateur Radio, also known as ham radio, describes the use of radio-frequency
spectrum for the purposes of non-commercial exchange of messages, wireless
experimentation, self-training, private recreation, radio sport, contesting, and
emergency communication. It uses a variety of voice, text, image and data
communication modes and has access to frequency allocations throughout the RF
spectrum. This enables communication across a city, region, country, continent, the
world, or even space.

2. Radio Astronomy

Figure: 5.2 Radio Astronomy

Marcus Leech of Science Radio Laboratories published a paper entitled “A 21cm Radio
Telescope for the Cost-Conscious”, in which he describes how such a telescope can be built
using RTL-SDR hardware along with other low-cost and easily sourced components, with
the option of using an Ettus Research USRP B100 + WBX daughtercard for improved
performance.

3. Military Communication

SDR is widely used in the military because it enables and improves capabilities such as
interoperability (connecting different systems) and joint operations (cooperation between
separate troops). It also serves the needs of civil service sectors and agencies such as the
police, coast guard, fire services and other communication systems, and is applicable to
both national and international operations. Heterogeneous networks call for reconfigurable,
multi-standard radios. A commercial SDR application comes from Alcatel-Lucent, whose
reconfigurable 3G base stations rely on parallel-processing computing platforms; because
these platforms build on mass-produced commercial products, the base stations can
potentially be provided at low cost. Innovative approaches to SDR technology have also
been demonstrated, such as software-based CDMA used as a proximity sensor for a virtual
mouse. Many organizations are involved, including research laboratories, industry
standards bodies, investors, test and verification bodies, regulation and policy makers, and
educational institutions. SDR base-station modules can be installed on the roughly 500,000
base stations already deployed, providing a smooth path to more advanced capabilities in
the future.

4. Vehicular Networking
VANETs are a type of mobile ad hoc network (MANET) that specifically addresses
scenarios involving moving ground vehicles. Three types of VANET applications
include
• Road safety applications: warning applications and emergency vehicle warning
applications. Messages from these applications have top priority.
• Traffic management applications: local and map information.
• Infotainment: multimedia content based on the traditional IPv6-based Internet.
Vehicles within Dedicated Short-Range Communications (DSRC) range can share
situational-awareness information with each other via Basic Safety Messages (BSMs),
including scenarios such as
• Lane Change Warning: vehicles periodically share situational information, including
position, heading, direction, and speed, via V2V communication within the DSRC range.
When a driver signals a lane-change intention, the on-board unit (OBU) is able to determine
whether other vehicles are located in blind spots. The driver is warned if other vehicles do
exist in the blind spot; this is referred to as blind spot warning. On the other hand, if no
vehicles exist in the blind spot, the OBU predicts whether there is a large enough gap for a
safe lane change based on the traffic information in the BSMs. If the gap in the adjacent
lane is not sufficient, a lane change warning is provided to the driver.
• Collision Warning: the vehicle dynamically receives traffic information from BSMs and
compares it with its own position, velocity, heading, and roadway information. Based on
the results of the comparison algorithm, the vehicle determines whether a collision is
likely, in which case a collision warning is provided to the driver.
• Emergency Vehicle Warning: Emergency vehicles transmit a signal to inform nearby

vehicles that an emergency vehicle is approaching.

5. Mobile Communication
SDRs are very useful in areas such as mobile communication. By upgrading the software it
is possible to track changes in standards and even add new waveforms, without any change
to the hardware. This can even be done remotely, providing considerable cost savings.
6. Research and development
SDR is very useful in many research projects. The radios can be configured to provide
the exact receiver and transmitter requirements for any application without the need for a
total hardware design from scratch.
7. Track ships via AIS transmissions

Automatic Identification System (AIS) is an automatic tracking system employed by
ships to identify and locate vessels, and is used to supplement marine radar. There are
a number of options available for receiving and decoding AIS data; one which uses
RTL-SDR hardware with a GNU Radio-based receiver plus gnuais is described in a blog
post by Alexandru Csete, who is also the author of the Gqrx SDR software. Using this
setup, AIS messages can be logged, plotted, and fed to the Google Maps-based aprs.fi
service.

6. ADVANTAGES AND DISADVANTAGES

ADVANTAGES

1. It is possible to achieve very high levels of performance.

2. Performance can be changed by updating the software.

3. The same hardware platform can be used for several different radios.

4. Easy to implement.

5. Cheaper RF Front End design.

6. They can talk and listen to multiple channels at the same time.

7. Smaller list of components.

8. Faster time to market.

DISADVANTAGES

1. Analogue-to-digital converters limit the top frequencies that can be used by the digital
section.

2. For very simple radios the basic platform may be too expensive.

3. Development of a SDR requires both hardware and software skills.

4. Security: the flexibility of SDR raises security concerns, since access to certain
waveforms must be restricted.

5. Power consumption is high.

7. FUTURE ENHANCEMENT

As the ubiquity of 4G handsets has propelled SDRs, the prospects of emerging
technologies such as 5G, the Internet of Things (IoT), and sensor networks promise to
again increase the volume of SDRs by another order of magnitude. As with previous leaps
in SDR adoption, the driver will likely be a combination of both hardware and software
technologies. One of the next technology drivers in hardware looks to be the combination
of analog and digital technology onto a single monolithic chip to reduce cost and size,
weight, and power (SWaP). For infrastructure, this driver could be FPGAs with integrated
analog-to-digital converters (ADCs) and digital-to-analog converters (DACs). For handsets
and sensors, it could be application processors, also with integrated ADCs and DACs.
New innovations in hardware won't be very useful, however, if the software and tools
don't follow; that is the whole point of SDR, after all. To enable the development of these
chips, as well as the waveforms and application software running on them, there will be a
requirement for better system-level tools that can be used to design and debug across the
analog and digital domains. As SDRs are used for increasingly complex tasks, they are
being designed with more powerful FPGAs intended for intensive DSP. As a
result, there is an inevitable growing need for FPGA tools that can handle rapidly
increasing amounts of data and complexity. While general-purpose processors (GPPs) have
served the SDR community well in the past, they are struggling to meet the performance
required for areas like 5G and MILCOM. Software tools such as the LabVIEW FPGA
Module and RF Network on Chip (RFNoC) offer a streamlined user experience that
makes FPGA programming vastly more efficient. Ultimately, integration will drive the
next generation of SDRs. The integration of analog and digital technology into mixed-
signal chips will be key, but SDRs have fundamentally reached a point where the primary
limitation on growth is in software, not hardware. Without software development
environments that can seamlessly program both GPPs and FPGAs, the additional hardware
features of next-generation SDRs will be underused and development will stall. The ability
of tools like LabVIEW FPGA to enable wireless engineers who are not HDL experts to
develop and rapidly iterate on sophisticated designs offers the best opportunity moving
forward to unlock the next generation of SDR.

8. CONCLUSION

Even though SDR technology has evolved more slowly than anticipated some years ago,
there are now many positive signs, the clearest ones being in the form of SDR products
entering the market. Several major initiatives, at national level and in cooperation between
nations and industry, are paving the way for SDR. The increasing availability of SCA
software tools and development platforms is helping to reduce the learning threshold of
the SCA and to increase the productivity of SDR development. Developments within
Model Driven Design may further increase this productivity. SDR eases portability by
providing a standard for deploying and managing applications. It is expected that SDR
will remain the dominant architecture in the military sector, where waveform application
portability and reuse are major priorities, especially through cooperative programs.

A fundamental challenge for SDR designs is that of providing sufficient computational
performance for the signal processing tasks within the relevant size, weight and power
requirements. This is particularly challenging for small handheld units and for ubiquitous
units. Parallel computation enhancements and the rapid evolution of DSP and FPGA
performance help to provide this computational performance. Processing units with
multiple SIMD processing elements appear to be very promising for low-power SDR units.
The reconfigurability of SDR systems has security challenges as a side effect. One such
security challenge is that the system must be protected from loading unauthorized and/or
malicious code. Also, the rigidity of conventional security architectures in many ways
contrasts with the flexibility and portability ideally required for SDR.

SDR will continue to receive attention as a highly flexible platform to meet the demands of
military organizations facing the requirements of network-centric and coalition operations.
SDR will also continue to be a convenient platform for future cognitive radio networks,
enabling more information capacity for a given amount of spectrum and providing the
ability to adapt on demand to waveform standards.

9. REFERENCES

• Sandeep Kaur, Vaishali Bahl, “Software defined radio technology - next generation
intelligent Radios”.

• Chi-Yuan Chen, Fan-Hsun Tseng, Kai-Di Chang, Han-Chieh Chao and Jiann-Liang Chen,
“Reconfigurable Software Defined Radio and Its Applications”.

• Marwa Mamoun Abdelgadir Abdelrahman, “Software defined radio”.
