
CHAPTER 1

INTRODUCTION
1.1 GENERAL
Continuous-time recurrent neural networks (CTRNNs) are networks of model neurons.

The success of 3rd generation wireless cellular networks is mainly based on efficient provisioning of the expected wide variety of services requiring different Quality of Service with respect to data rate, delay and error rate. In order to improve support for high data rate packet switched services, 3GPP has developed an evolution of UMTS based on WCDMA known as High Speed Downlink Packet Access (HSDPA), which was included in the Release 5 specifications. HSDPA targets increased capacity, reduced round trip delay, and higher peak downlink (DL) data rates. Evolutions of HSDPA featuring data rates up to 84 Mbps are under development.

In HSDPA, the user equipment (UE) (also known as the mobile station) monitors the quality of the downlink wireless channel and periodically reports this information to the base station (referred to here as the Node-B) on the uplink. This feedback, called the Channel Quality Indicator (CQI), is an indication of the highest data rate that the UE can reliably receive in the existing conditions on the downlink wireless channel. The frequency of reporting CQI is configured by the network, and is typically set to once every few milliseconds. Using the channel quality reports, the Node-B accordingly schedules data on the High Speed Physical Downlink Shared Channel (HS-PDSCH).
The Node-B's selection of the transport block size (number of information bits per packet), number of channelization codes, modulation, and resource allocation choices such as HS-PDSCH transmit power allocation are guided by the Node-B's interpretation of the reported CQI. CQI reports are intended to accurately reflect the HS-PDSCH performance that the UE can support in the existing wireless channel conditions.
It is recommended that, in static channel conditions, the UE report CQI such that it achieves a block error rate (BLER) close to 10% when scheduled data corresponding to the median reported CQI. In practice, the accuracy of CQI reports in reflecting HS-PDSCH performance is influenced by the wireless channel conditions, such as the speed of the mobile user and the dispersive nature of the channel. Achieving a certain target BLER at a given scheduled data rate requires a different average HS-DSCH SNR under different channel conditions.
Also, the Node-B often uses different transport block sizes, numbers of codes and modulations, collectively referred to as the transport format resource combination (TFRC), to achieve similar data rates. The exact choice of TFRC that the Node-B uses affects the required HS-PDSCH SNR to achieve a certain target BLER. This variability may cause the actual BLER to deviate from the 10% target. Moreover, the 10% target BLER may not yield maximum throughput under all conditions of the wireless channel.

Cell throughput optimization in HSDPA can be considered a two-part problem: one part is code and power allocation across users, and the other is maximizing the link throughput for each user for a given resource allocation. In this paper, we focus on link throughput optimization, and consider throughput optimization through simple adjustments to the reported CQI.

1.2 OVERVIEW
The analysis described in this paper demonstrates that even small CTRNNs
are capable of complicated dynamical behavior. Indeed, it appears that an N-neuron
CTRNN can exhibit all of the qualitatively different sorts of behavior that are
possible in smooth N-dimensional dynamical systems.
While this certainly makes their analysis more difficult, this demonstration is
important because, although Funahashi and Nakamura (1993) proved that
sufficiently large CTRNNs can approximate the dynamics of arbitrary smooth
dynamical systems for finite time arbitrarily well, their result tells us little about
the capabilities of the relatively small CTRNNs that are currently evolved in
autonomous agent research.
In addition, the fact that even small and simple dynamical neural networks can exhibit dynamics of significant complexity may have important neurobiological implications.
To date, attempts to evolve dynamical neural controllers for autonomous
agents have been exclusively empirical in nature. It is common to place the entire
burden of finding circuits with useful dynamics on an evolutionary algorithm, with
the result that these algorithms often fail on even simple tasks unless the fitness
function, parameter encoding, population size, mutation rate, etc. are carefully
chosen, and they do not scale well to more difficult tasks.
It is also common to treat the networks which do evolve as mysterious black
boxes. However, this paper has shown how the mathematical tools of dynamical
systems theory can be used to gain significant insight into the dynamics of CTRNNs.
While the fairly exhaustive analysis presented here for 1- and 2-neuron circuits is not in general possible for larger networks, these and other techniques can be applied to particular circuits or to highly symmetric classes of circuits.
I have also suggested ways in which the sort of analysis presented here can
be used to focus evolutionary searches into fruitful regions of parameter space and
thereby improve the performance and yield of such searches.

CHAPTER 2
LITERATURE SURVEY

The Convergence and Parameter Relationship for Discrete-Time Continuous-State Hopfield Networks (1)
Gang Feng
A discrete-time convergence theorem for continuous-state Hopfield networks with self-interaction neurons is proposed. This theorem differs from the previous work by Wang in that the original updating rule is maintained while the network is still guaranteed to converge monotonically to a stable state. The relationship between the parameters in a typical class of energy functions is also investigated, and consequently a guided trial-and-error technique is proposed to determine the parameter values. The effectiveness of all the theorems proposed in this paper is demonstrated by a large number of computer simulations on the assignment problem and the N-queen problem of different sizes.
The continuous Hopfield neural network (CHNN) can be used to solve an optimization problem in such a way that the cost function and constraints are first mapped to an energy function (if possible) and then a solution is obtained as the network stabilizes. Ever since Hopfield and Tank applied this network to solve the traveling salesman problem (TSP), the CHNN has been employed to solve a variety of combinatorial optimization problems. However, since in most cases the first-order Euler approximation is used to simulate the dynamics of the network on a digital computer, some researchers pointed out that if there exist self-interaction neurons, the Euler approximation method can occasionally lead to an increase in the energy function. Wang clearly stated this problem and furthermore proposed an updating rule which can guarantee that the energy function monotonically decreases. However, in the case where self-interaction neurons exist, Wang's updating rule involves complicated computations.

On the other hand, energy functions that include self-interaction terms of the form V² account for a large number of energy functions used in optimization problems. Therefore, finding a simple updating rule that guarantees the monotonic decrease of the energy function in this case is very important. In this paper, we will show that this goal can be easily achieved by using the original updating rule while imposing a constraint on the time step used in the Euler approximation.
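The Euler-approximation dynamics discussed above can be sketched as follows. This is an illustrative sketch only, assuming a standard continuous Hopfield model with symmetric weights and a sigmoid activation; the weight matrix, bias, and time step are arbitrary choices, not values from the paper.

```python
import numpy as np

def sigmoid(u):
    """Neuron activation function g(u)."""
    return 1.0 / (1.0 + np.exp(-u))

def euler_step(u, W, b, dt):
    """One first-order Euler step of du/dt = -u + W v + b, with v = g(u).
    The paper's point: when diag(W) != 0 (self-interaction neurons), too
    large a dt can make the energy function increase; a suitable bound on
    dt restores monotonic decrease under the original updating rule."""
    v = sigmoid(u)
    return u + dt * (-u + W @ v + b)

# Arbitrary small symmetric network with self-interaction (nonzero diagonal).
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
W = 0.5 * (W + W.T)
b = rng.standard_normal(4)

u = np.zeros(4)
for _ in range(500):
    u = euler_step(u, W, b, dt=0.01)
v_final = sigmoid(u)  # final neuron outputs, each in (0, 1)
```

With a sufficiently small time step the trajectory settles to a stable state; the theorem referred to above makes this precise by giving an explicit constraint on the step size.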
Stochastic Noise Process Enhancement of Hopfield Neural Networks (2)
Vladimir Pavlovic, Dan Schonfeld, Gary Friedman
Hopfield neural networks (HNNs) are a class of densely connected single-layer nonlinear networks of perceptrons. The network's energy function is defined through a learning procedure so that its minima coincide with states from a predefined set. However, because of the network's nonlinearity, a number of undesirable local energy minima emerge from the learning procedure. This has been shown to significantly affect the network's performance. In this brief, we present a stochastic process-enhanced binary HNN. Given a fixed network topology, the desired final distribution of states can be reached by modulating the network's stochastic process. We design this process, in a computationally efficient manner, by associating it with stability intervals of the undesired stable states of the network. Our experimental simulations confirm the predicted improvement in performance.
Hopfield neural networks (HNNs) are a class of nonlinear function approximators represented by a single-layer network consisting of interconnected individual perceptrons and modified perceptrons (with sigmoid nonlinearities). The basis for their operation is the Hebbian learning algorithm, which selects network weights to minimize the network energy function for a set of desired states. Unfortunately, because of its nonlinear character, the network also exhibits undesirable local minima. This has been shown to affect the network's performance, both in its capacity and in its ability to address its content. Several approaches based on stochastic modifications of the network have been proposed to deal with the problem of local minima. Alternatively, a stochastic HNN can be viewed as a Boltzmann machine, with a Gibbs distribution of the final network states concentrated at the global minima. Network learning and convergence can then be studied in probabilistic terms. In many applications, however, the desired final network state distribution corresponds to particular local minima, and not necessarily to the global minima. The use of the Gibbs distribution is thus undesirable in many applications.
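As a concrete, deliberately simplified illustration of the kind of stochastic modification discussed above, the sketch below adds Gaussian noise to the local field of a binary Hopfield network with Hebbian weights. This generic noise scheme is an assumption for illustration, not the stability-interval process design proposed in the paper; all names and values are illustrative.

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian weight matrix for +/-1 patterns (rows), zero self-interaction."""
    P = np.asarray(patterns, dtype=float)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def async_update(s, W, noise_std, rng):
    """One asynchronous sweep over all neurons. Gaussian noise on the local
    field can kick the state out of shallow spurious minima when noise_std > 0;
    with noise_std = 0 this is the plain deterministic HNN update."""
    s = s.copy()
    for i in rng.permutation(len(s)):
        h = W[i] @ s
        if noise_std > 0.0:
            h = h + rng.normal(0.0, noise_std)
        s[i] = 1.0 if h >= 0.0 else -1.0
    return s

patterns = [[1, -1, 1, -1, 1, -1],
            [1, 1, 1, -1, -1, -1]]
W = hebbian_weights(patterns)
rng = np.random.default_rng(1)

# A stored pattern is a fixed point of the noiseless update.
s = async_update(np.array(patterns[0], dtype=float), W, 0.0, rng)
```

Repeating the sweep with a small positive noise_std turns the deterministic descent into a stochastic process whose stationary behavior can favor the stored (desired) minima over spurious ones.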

Using Hopfield Networks to Solve Assignment Problem and N-Queen Problem: An Application of Guided Trial and Error Technique (3)
Christos Douligeris, Gang Feng
In the use of Hopfield networks to solve optimization problems, a critical problem is the determination of appropriate values of the parameters in the energy function so that the network can converge to the best valid solution. In this paper, we first investigate the relationship between the parameters in a typical class of energy functions, and consequently propose a "guided trial-and-error" technique to determine the parameter values. The effectiveness of this technique is demonstrated by a large number of computer simulations on the assignment problem and the N-queen problem of different sizes.

The continuous Hopfield neural network (CHNN) can be used to solve an optimization problem in such a way that the cost function and constraints are first mapped to an energy function (if possible) and then a solution is obtained as the network stabilizes. Ever since Hopfield and Tank applied this network to solve the traveling salesman problem (TSP), it has been employed to solve a variety of combinatorial optimization problems. However, a critical problem arising in the use of the HNN to solve optimization problems is how to choose the best parameters in the energy function so that the network can converge to valid solutions of high quality. In this paper, we will propose a relatively general method to determine the parameters.
In the past decade, the most extensively used method has been the trial-and-error technique, and in our view this technique (at most with more constraints) will still be used in the future, especially for those problems that are NP-hard or NP-complete. This is based on the observation that, given an energy function for a specific problem, it seems that we can at most determine a range for each parameter that might result in better solutions. Therefore, what we need to do is to find as many constraints on these parameters as possible, and thus considerably reduce the number of trials before good parameters are found. This method for determining parameters, slightly different from the original trial-and-error technique, can be called the "guided trial-and-error" method. Previous related work includes similar analyses.

Global exponential stability of discrete-time neural networks for constrained quadratic optimization (4)
K.C. Tan, H.J. Tang
A class of discrete-time recurrent neural networks for solving quadratic
optimization problems over bound constraints is studied. The regularity and
completeness of the network are discussed. The network is proven to be globally
exponentially stable (GES) under some mild conditions. The analysis of GES
extends the existing stability results for discrete-time recurrent networks. A
simulation example is included to validate the theoretical results obtained in this
letter.
Since the early work of Tank and Hopfield and of Kennedy and Chua, the construction of a recurrent neural network (RNN) for solving linear and nonlinear programming problems has become an active research topic in the field of neural networks. The study of nonlinear optimization is also valuable in the sense that it allows one to apply its results to variational inequality problems, since there is a two-way bridge connecting both issues. In the recent literature there exist a few RNN models for solving nonlinear optimization problems over convex constraints; a discrete-time RNN model was proposed to solve the strictly convex quadratic optimization problem with bound constraints. Sufficient conditions for the GES of the model and several corresponding neuron updating rules were presented. A continuous-time RNN was also presented for solving bound-constrained nonlinear differentiable optimization problems.
This letter studied a class of discrete-time recurrent neural networks for constrained quadratic optimization problems. The regularity and completeness of the network have been discussed. The analysis of global exponential stability (GES) has presented new and mild conditions for strictly convex quadratic optimization problems. Simulation results illustrated the applicability of the proposed theory.

A Recurrent Neural Network for Nonlinear Optimization with a Continuously Differentiable Objective Function and Bound Constraints (5)
Xue-Bin Liang, Jun Wang
This paper presents a continuous-time recurrent neural-network model for nonlinear optimization with any continuously differentiable objective function and bound constraints. Quadratic optimization with bound constraints is a special problem which can be solved by the recurrent neural network. The proposed recurrent neural network has the following characteristics. 1) It is regular in the sense that any optimum of the objective function with bound constraints is also an equilibrium point of the neural network. If the objective function to be minimized is convex, then the recurrent neural network is complete in the sense that the set of optima of the function with bound constraints coincides with the set of equilibria of the neural network. 2) The recurrent neural network is primal and quasiconvergent in the sense that its trajectory cannot escape from the feasible region and will converge to the set of equilibria of the neural network for any initial point in the feasible bound region. 3) The recurrent neural network has an attractivity property in the sense that its trajectory will eventually converge to the feasible region for any initial state, even outside the bounded feasible region. 4) For minimizing any strictly convex quadratic objective function subject to bound constraints, the recurrent neural network is globally exponentially stable for almost any positive network parameters. Simulation results are given to demonstrate the convergence and performance of the proposed recurrent neural network for nonlinear optimization with bound constraints.
We propose a continuous-time RNN model for bound-constrained nonlinear optimization with any continuously differentiable objective function, which is not necessarily quadratic or convex. Quadratic optimization with bound constraints is then a special problem which can also be solved by using the proposed RNN model. The proposed RNN model has the following features. 1) The RNN model is regular in the sense that any optimum of the objective function subject to bound constraints is also an equilibrium point of the RNN. If the minimized function is convex, then the RNN model is complete in the sense that the set of optima of the objective function with bound constraints is equal to the set of equilibria of the RNN. 2) The RNN model is a primal method in the sense that for any initial point in the feasible bound region, its trajectory can never escape from the feasible region. It has the quasiconvergence property that all trajectories starting from the feasible bound region will converge to the set of equilibria of the RNN. 3) The RNN model has an attractivity property in the sense that its trajectory will eventually converge to the feasible region for any initial point, even outside the feasible bound region. 4) For strictly convex quadratic optimization problems with bound constraints, the RNN model is GES with almost any network parameters.
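The feasibility-preserving behavior described in features 2) and 3) can be illustrated with a discretized sketch of projected gradient dynamics: every Euler step is projected back onto the feasible box, so a trajectory started inside the box can never leave it. This is a minimal sketch of the general projection idea, not the paper's exact RNN model; the problem data below are arbitrary illustrative choices.

```python
import numpy as np

def clip_to_box(x, lo, hi):
    """Projection onto the feasible box {x : lo <= x <= hi}."""
    return np.minimum(np.maximum(x, lo), hi)

def solve_box_qp(Q, c, lo, hi, x0, alpha=0.05, steps=1000):
    """Minimize 1/2 x'Qx + c'x over the box [lo, hi] via projected Euler
    steps, a discretization of projected gradient dynamics. The projection
    guarantees every iterate stays feasible."""
    x = clip_to_box(np.asarray(x0, dtype=float), lo, hi)
    for _ in range(steps):
        x = clip_to_box(x - alpha * (Q @ x + c), lo, hi)
    return x

# Strictly convex example: minimize 1/2(x1^2 + x2^2) - 2*x1 - 2*x2 on [0,1]^2.
# The unconstrained optimum (2, 2) lies outside the box, so the constrained
# optimum sits at the corner (1, 1).
Q = np.eye(2)
c = np.array([-2.0, -2.0])
lo, hi = np.zeros(2), np.ones(2)
x_star = solve_box_qp(Q, c, lo, hi, x0=[0.5, 0.5])
```

For this strictly convex instance the iterates converge to the boundary optimum (1, 1), mirroring the exponential-stability behavior claimed in feature 4).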


CHAPTER 3
SYSTEM ANALYSIS
3.1 EXISTING SYSTEM
CQI reports are intended to accurately reflect the HS-PDSCH performance that the UE can support in the existing wireless channel conditions. It is recommended that, in static channel conditions, the UE report CQI such that it achieves a block error rate (BLER) close to 10% when scheduled data corresponding to the median reported CQI. In practice, the accuracy of CQI reports in reflecting HS-PDSCH performance is influenced by the wireless channel conditions.
DISADVANTAGES OF EXISTING SYSTEM
1. Code and power allocation across users is not optimized.
2. Link throughput is not maximized for each user for a given resource allocation.
3. Higher round trip delay.
3.2 PROPOSED SYSTEM
We propose an adaptive algorithm to achieve a given target BLER using the stochastic gradient descent method, which adjusts the CQI offset adaptively based on the short-term BLER obtained from the ACK/NACK history. By searching through different target BLERs, we can find the throughput-optimal BLER offline. The proposed algorithm can be implemented at the UE as well as at the Node-B. When applied at the Node-B, in addition to achieving the target BLER, it can also save transmit power. This algorithm could be used not only to refine CQI-BLER alignment but also to enable fair resource allocation among mobile users in HSDPA. Standard stochastic approximation (SA) algorithms typically require a decreasing step size.
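The adaptation loop described above can be sketched as a stochastic-approximation update driven directly by the ACK/NACK stream. The function and variable names, the fixed step size, and the target value below are illustrative assumptions, not values specified by the proposal.

```python
TARGET_BLER = 0.10  # assumed target block error rate
STEP = 0.05         # assumed fixed step size (a standard SA scheme would decay this)

def update_offset(offset, nack):
    """One stochastic-gradient step of the CQI offset from a single ACK/NACK
    observation: a NACK (block error) pushes the offset down so a more
    conservative CQI is used; an ACK pushes it slightly up."""
    err = (1.0 if nack else 0.0) - TARGET_BLER
    return offset - STEP * err

# Replay an ACK/NACK history (True = NACK) through the adaptation loop.
history = [False, False, True, False, False]
offset = 0.0
for nack in history:
    offset = update_offset(offset, nack)
```

On average the update is zero exactly when the NACK rate equals TARGET_BLER, which is what lets the loop steer the short-term BLER toward the target.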

1. Higher peak downlink (DL) data rate.
2. Reduced round trip delay.
3. Higher data rates up to 84 Mbps.

ADVANTAGES
In general, the throughput-optimal BLER is not always 10% and depends on the channel path profile. For AWGN channels, it is about 10%, as is implied in [5]. Considering that the UE implementation in the simulation closely mirrors commercially shipping devices and already includes several receiver optimizations, the additional gain obtained through the algorithm is indicative of the potential HSDPA throughput enhancement realizable in practice.

CHAPTER 4

REQUIREMENTS
4.1 GENERAL
The requirements for a system are a description of the services provided by the system and of its operational constraints. Requirements are often collected before developing a system because system development can be complex in nature. The following are the requirements to be discussed:
1. Hardware requirements
2. Software requirements
4.2 SOFTWARE TOOLS AND SPECIFICATION

4.3 GENERAL
The garbage collector is a particularly innovative run-time service that appears for the first time on a Microsoft platform (other than in the Java Virtual Machine). It profoundly affects your overall programming style, regardless of the language used. Conventionally, all data whose allocation and lifetime are delegated to the CLR garbage collector is referred to as 'managed memory'. Garbage collection is recommended but not imposed, so unmanaged memory is data you have to take care of yourself.
Under .NET, your source code is not compiled directly into machine code.
Instead the compiler translates it into Microsoft Intermediate Language (MSIL,
usually abbreviated to IL), and this is how binary modules get stored on disk.
When the application is launched, the IL module is loaded into memory where it
is translated into machine code on the fly by a fast JIT (Just in Time) compiler
and then executed.
4.4 LANGUAGE SUPPORT

The addition of the IL layer introduces a lot of freedom when it comes to choosing the language you wish to use. Any language that has a compiler capable of generating IL binary code, and support for the CLS types, can be used to program against the .NET Framework. The preview SDK comes with tools and compilers for VB.NET, C# and Managed C++, and these will be supported in forthcoming releases. C# (pronounced 'See sharp') is a new hybrid language conceived and designed by Microsoft with the ambitious goal of collecting together the best elements from several mainstream languages to create the single most powerful, readable and intuitive object-oriented language for writing .NET software. The starting point was apparently C++, but there are obvious influences from Java.
Only time will reveal if large numbers of Windows developers will switch to C#
for future projects, or stick to the latest upgrades of their native languages.
Meanwhile, Microsoft was prompt in submitting the C# grammar specification,
together with portions of the CLR specification, to the ECMA technical
committees to begin the official standardization process - a wise move that Sun, by
comparison, has never embraced with regard to Java.
Managed C++ is an extension to the standard C++ language giving support to the enhanced features of the CLR that require non-standard keywords in the source code, such as garbage collection, binary type compatibility and custom
attributes. Obviously, the compiler must be instructed to treat a C++ file as
Managed C++ if it is to recognize the extended keywords and output IL files
rather than object files. This is achieved through the /com+ command line
parameter. Incidentally, the term 'managed' refers to both the managed execution
environment and to memory managed by the garbage collector.

4.5 THE .NET FRAMEWORK



4.5.1 GENERAL
.NET Framework is platform independent and language independent. This means that .NET Framework allows you to use different programming languages such as VB.NET, C#, JScript, VBScript, and Managed C++, and run applications on different platforms such as UNIX, Macintosh, and Linux. Moreover, .NET Framework enables you to use various off-the-shelf libraries that make the development of applications faster, easier, and cheaper. .NET Framework now supports over 20 different programming languages.
The .NET Framework (pronounced 'dot net framework') defines the environment that you use to execute Visual Basic .NET applications and the services you can use within those applications. One of the main goals of this framework is to make it easier to develop applications that run over the Internet. However, this framework can also be used to develop traditional business applications that run on the Windows desktop. Visual Studio also includes several other components that make it an outstanding development product. One of these is the Microsoft Development Environment, which you'll be introduced to in a moment. Another is the Microsoft SQL Server 2000 Desktop Engine (or MSDE). MSDE is a database engine that runs on your own PC so you can use Visual Studio for developing database applications that are compatible with Microsoft SQL Server.
The two other languages that come with Visual Studio .NET are C# and C++. C# .NET (pronounced 'C sharp dot net') is a new language that has been developed by Microsoft especially for the .NET Framework. Visual C++ .NET is Microsoft's version of the C++ language, which is used on many platforms besides Windows PCs.

4.5.2 COMPONENTS OF THE .NET FRAMEWORK

Common Language Runtime:

The .NET Framework provides a common set of services that application programs written in a .NET language such as Visual Basic .NET can use to run on various operating systems and hardware platforms. The .NET Framework is divided into two main components: the .NET Framework Class Library and the Common Language Runtime. The .NET Framework Class Library consists of segments of pre-written code called classes that provide many of the functions that you need for developing .NET applications. For instance, the Windows Forms classes are used for developing Windows Forms applications. The ASP.NET classes are used for developing Web Forms applications. And other classes let you work with databases, manage security, access files, and perform many other functions.
Although it's not apparent in this figure, the classes in the .NET Framework Class Library are organized in a hierarchical structure. Within this structure, related classes are organized into groups called namespaces. Each namespace contains the classes used to support a particular function. The Common Language Runtime, or CLR, provides the services that are needed for executing any application that's developed with one of the .NET languages. That way, you can use more than one of the .NET languages as you develop a single application without worrying about incompatible data types.
The .NET Framework provides a runtime environment called the Common Language Runtime (CLR) that handles the execution of code and provides useful services for the implementation of the application. The CLR takes care of code management upon program execution and provides various services such as memory management, thread management, security management and other system services. Managed code that targets the CLR benefits from features such as cross-language integration, cross-language exception handling, versioning, enhanced security, deployment support, and debugging.

4.5.3 COMMON TYPE SYSTEM (CTS)


The CTS describes how types are declared, used and managed. The CTS facilitates cross-language integration, type safety, and high-performance code execution. The CLS is a specification that defines the rules to support language integration. This is done in such a way that programs written in any .NET-compliant language can interoperate with one another. They can also take full advantage of inheritance, polymorphism, exceptions, and other features.
4.5.4 MSIL (MICROSOFT INTERMEDIATE LANGUAGE)
The compiler translates your code into Microsoft Intermediate Language (MSIL). The common language runtime includes a JIT compiler for converting this MSIL to native code. MSIL contains metadata that is the key to cross-language interoperability. Since this metadata is standardized across all .NET languages, a program written in one language can understand the metadata and execute code written in a different language.
MSIL includes instructions for loading, storing, initializing, and calling
methods on objects, as well as instructions for arithmetic and logical operations,
control flow, direct memory access, exception handling, and other operations.

4.5.5 JIT (JUST IN TIME)


In the .NET Framework, the intermediate language is compiled 'just in time' (JIT) into native code when the application or component is run, instead of compiling the application at development time. The Microsoft .NET runtime consists of two JIT compilers: the standard JIT compiler and the EconoJIT compiler.
4.5.6 .NET CLASS LIBRARY

.NET comes with thousands of classes to perform all important and not-so-important operations. Its library is completely object-oriented, providing around 5000 classes to perform just about everything. The following are the main areas that are covered by the class library.
1. Data Structures
2. IO management
3. Windows and Web Controls
4. Database access
5. Multithreading
6. Remoting
7. Reflections

The above list is far from comprehensive and is only meant to give you an instant idea of how extensive the library is. The most fascinating part of .NET is the class library; it is common to all languages of .NET. That means the way you access files in VB.NET will be exactly the same in C#, and in fact in all other languages of .NET. You learn the library only once, but use it in every language. The library is also common to all types of applications. The following are the different types of applications that can make use of the .NET class library.
1. Console applications.
2. Windows GUI applications.
3. ASP.NET applications.
4. XML Web services.
5. Windows services.

So, you can leverage your knowledge of the library irrespective of the language and the type of application you are developing. Imagine moving from COBOL to C and then from C to VB: you learned how to perform common operations three times, because those three languages have hardly any functions in common.

4.6 C# LANGUAGE
C Sharp is an Object Oriented Language, introduced in the .NET Framework.
The .Net languages extends developers capabilities by introducing Structured
Exception Handling, Multi Threaded Programming, Versioning, ability to quickly
create and use Web Services etc. The following links gives you an overview
of .Net Framework, C# etc.

A. What is C#?
C# (pronounced "see sharp" or "C Sharp") is one of many .NET
programming languages. It is object-oriented and allows you to build reusable
components for a wide variety of application types. Microsoft introduced C# on
June 26th, 2000 and it became a v1.0 product on Feb 13th 2002.
C# is an evolution of the C and C++ family of languages. However, it
borrows features from other programming languages, such as Delphi and Java. If
you look at the most basic syntax of both C# and Java, the code looks very similar,
but then again, the code looks a lot like C++ too, which is intentional. Developers
often ask questions about why C# supports certain features or works in a certain
way. The answer is often rooted in its C++ heritage.
a) How Does a C# Application Run?
An important point is that C# is a 'managed' language, meaning that it requires the .NET Common Language Runtime (CLR) to execute. Essentially, as an application that is written in C# executes, the CLR is managing memory, performing garbage collection, handling exceptions, and providing many more services that you, as a developer, don't have to write code for. The C# compiler produces Intermediate Language (IL), rather than machine language, and the CLR understands IL. Because C# requires the CLR, you must have the CLR installed on your system. All new Windows operating systems ship with a version of the CLR, and it is available via Windows Update for older systems.
FEASIBILITY REPORT
The preliminary investigation examines project feasibility, i.e., the likelihood that the system will be useful to the organization. The main objective of the feasibility study is to test the technical, operational and economical feasibility of adding new modules and debugging the old running system. Any system is feasible if there are unlimited resources and infinite time. There are three aspects in the feasibility study portion of the preliminary investigation:

Technical Feasibility

Operational Feasibility

Economical Feasibility

4.7 TECHNICAL FEASIBILITY


The technical issues usually raised during the feasibility stage of the investigation include the following:

Does the necessary technology exist to do what is suggested?
Does the proposed equipment have the technical capacity to hold the data required to use the new system?
Will the proposed system provide adequate responses to inquiries, regardless of the number or location of users?
Can the system be upgraded if developed?
Are there technical guarantees of accuracy, reliability, ease of access and data security?

Earlier, no system existed to cater to the needs of the Secure Infrastructure
Implementation System. The current system developed is technically feasible. It is
a web-based user interface for audit workflow at NIC-CSD, and thus provides
easy access to the users. The database's purpose is to create, establish and maintain
a workflow among various entities in order to facilitate all concerned users in their
various capacities or roles.
Permission to the users would be granted based on the roles specified.
Therefore, it provides the technical guarantee of accuracy, reliability and security.
The software and hardware requirements for the development of this project are
modest and are either already available in-house at NIC or freely available as open
source. The work for the project is done with the current equipment and existing
software technology. Necessary bandwidth exists for providing fast feedback to
the users irrespective of the number of users using the system.

4.8 OPERATIONAL FEASIBILITY


Proposed projects are beneficial only if they can be turned into information
systems that meet the organization's operating requirements. Operational
feasibility aspects of the project are an important part of the project
implementation. Some of the important issues raised to test the operational
feasibility of a project include the following:

1. Is there sufficient support for the system from the management and the users?
2. Will the system be used and work properly once it is developed and implemented?
3. Will there be any resistance from users that will undermine the possible application benefits?

This system is targeted to be in accordance with the above-mentioned issues.
The management issues and user requirements were taken into consideration
beforehand, so there is no question of resistance from the users that could
undermine the possible application benefits.
The well-planned design would ensure the optimal utilization of the
computer resources and would help in the improvement of performance status.

4.9 ECONOMIC FEASIBILITY


A system that can be developed technically, and that will be used if installed, must
still be a good investment for the organization. In the economic feasibility study, the
development cost of creating the system is evaluated against the ultimate benefit
derived from the new system. Financial benefits must equal or exceed the costs.

The system is economically feasible. It does not require any additional
hardware or software. Since the interface for this system is developed using the
existing resources and technologies available at NIC, the expenditure is nominal
and economic feasibility is assured.

4.10 HARDWARE REQUIRED


System    : Pentium Dual Core 2.4 GHz
Hard Disk : 160 GB
Monitor   : 15" VGA color
Mouse     : Logitech
Keyboard  : 110 keys
RAM       : 2 GB

4.11 SOFTWARE REQUIRED


O/S       : Windows XP
Front End : .NET (C#)

CHAPTER -5
METHODOLOGIES
5.1 PROBLEM DEFINITION
System analysis is a process of gathering and interpreting facts, diagnosing
problems and using the information to recommend improvements to the system. It is a
problem solving activity that requires intensive communication between the system
users and system developers. System analysis or study is an important phase of any
system development process. The system is studied to the minutest detail and
analyzed.


The system analyst plays the role of the interrogator and delves deep into the
working of the present system. The system is viewed as a whole and the inputs to
the system are identified.
The outputs from the organization are traced through the various processes.
System analysis is concerned with becoming aware of the problem, identifying the
relevant and decisional variables, analyzing and synthesizing the various factors
and determining an optimal, or at least a satisfactory, solution or program of action.

A detailed study of the process must be made using various techniques such as
interviews and questionnaires. The data collected from these sources must be
scrutinized to arrive at a conclusion. The conclusion is an understanding of how
the system functions. This system is called the existing system.
Now the existing system is subjected to close study and problem areas are
identified. The designer now functions as a problem solver and tries to sort out the
difficulties that the enterprise faces.
The solutions are given as proposals. The proposal is then weighed analytically
against the existing system and the best one is selected. The proposal is presented
to the user for endorsement. The proposal is reviewed on user request and suitable
changes are made. This is a loop that ends as soon as the user is satisfied with
the proposal.


Preliminary study is the process of gathering and interpreting facts and using the
information for further studies on the system. Preliminary study is a problem-solving
activity that requires intensive communication between the system users and
system developers.
It involves various feasibility studies. From these studies a rough picture of the
system activities can be obtained, from which decisions about the strategies to be
followed for effective system study and analysis can be taken.
5.2 MODULES
1. Server Module
2. Path Set Module
3. Packet Transaction Module
4. Client Module

5.2.1 SERVER MODULE


The server module is used to upload files for the user and to view user file
requests. If the server accepts a user's file request, control passes to the
router; otherwise the server rejects the request, the request is automatically
deleted and the user's download option is cancelled.
5.2.2 PATH SET MODULE
The path set module is used to set the path over which files are transferred. The
server provides ten candidate paths ranked by shortest path. Normally, twelve towers
are used for the transaction process; each transaction path passes through a
minimum of four or five towers.
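The path selection described above, with ten candidate routes over twelve towers and each route passing through four or five towers, can be sketched as picking the candidate with the fewest hops. The `PathSelector` class and the sample tower routes below are illustrative assumptions, not the project's actual routing code.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class PathSelector
{
    // Pick the route with the fewest hops from a set of candidate
    // tower sequences (each array lists the tower IDs on one route).
    public static int[] ShortestRoute(List<int[]> candidateRoutes)
    {
        if (candidateRoutes == null || candidateRoutes.Count == 0)
            throw new ArgumentException("no candidate routes");
        return candidateRoutes.OrderBy(r => r.Length).First();
    }

    static void Main()
    {
        var routes = new List<int[]>
        {
            new[] { 1, 4, 7, 12 },        // four towers
            new[] { 1, 3, 6, 9, 12 },     // five towers
            new[] { 1, 2, 5, 8, 11, 12 }, // six towers
        };
        // Prints the four-tower route: 1-4-7-12
        Console.WriteLine(string.Join("-", ShortestRoute(routes)));
    }
}
```

In the report's terms, the server would build such a candidate list for each transaction and hand the selected path to the router.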

5.2.3 PACKET TRANSACTION MODULE


The packet transaction module splits the file into eight packets of equal size;
the router then sends the packets from server to client, and the client returns an
acknowledgement to the server. Once the server receives the acknowledgement, it
sends the next packet to the client. If the tower's capacity is less than the packet
size, the server cannot send via that tower.
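The splitting step described above can be sketched as dividing a byte buffer into eight near-equal packets, with the final packet absorbing any remainder. `PacketSplitter` is an illustrative helper under that assumption, not the project's transmission code.

```csharp
using System;
using System.Collections.Generic;

class PacketSplitter
{
    // Split a byte buffer into 'count' packets of (near-)equal size;
    // the last packet also carries any leftover bytes.
    public static List<byte[]> Split(byte[] data, int count)
    {
        int baseSize = data.Length / count;
        var packets = new List<byte[]>();
        for (int i = 0; i < count; i++)
        {
            int start = i * baseSize;
            int size = (i == count - 1) ? data.Length - start : baseSize;
            byte[] packet = new byte[size];
            Array.Copy(data, start, packet, 0, size);
            packets.Add(packet);
        }
        return packets;
    }

    static void Main()
    {
        byte[] file = new byte[100];
        var packets = Split(file, 8);
        Console.WriteLine(packets.Count);     // 8 packets
        Console.WriteLine(packets[7].Length); // 16 bytes: 12 plus the remainder of 4
    }
}
```

In the stop-and-wait scheme the report describes, the server would transmit `packets[0]`, wait for the client's acknowledgement, then transmit `packets[1]`, and so on.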
5.2.4 CLIENT MODULE
The client module can view the files uploaded by the server and send download
requests to the server. To download files, the client first registers their personal
details. After login, the client can change their password and download the files
the server has accepted.

CHAPTER- 6
DESIGN PHASE
6.1 GENERAL
UML stands for Unified Modeling Language. UML is a standardized general-purpose modeling language in the field of object-oriented software engineering.
The standard is managed, and was created, by the Object Management Group.
The goal is for UML to become a common language for creating models of
object-oriented computer software. In its current form, UML comprises two
major components.


The Unified Modeling Language is a standard language for specifying,
visualizing, constructing and documenting the artifacts of a software system.
The UML is a very important part of developing object-oriented software
and of the software development process.
6.2 UML DIAGRAMS
1. USE CASE DIAGRAM
2. CLASS DIAGRAM
3. SEQUENCE DIAGRAM
4. ACTIVITY DIAGRAM

6.2.1 USE CASE DIAGRAM OF ADMINISTRATOR


A use case diagram in the Unified Modeling Language (UML) is a type of
behavioral diagram defined by and created from a use case analysis. Its purpose is
to present a graphical overview of the functionality provided by a system.


Fig - 6.2.1 Use Case Diagram of Administrator

6.2.2 USE CASE DIAGRAM OF CLIENT


A use case diagram in the Unified Modeling Language (UML) is a type of
behavioral diagram defined by and created from a use case analysis. Its purpose is
to present a graphical overview of the functionality provided by a system.
The main purpose of a use case diagram is to show what system functions
are performed for which actor. Roles of the actors in the system can be depicted.


Fig - 6.2.2 Use Case Diagram of Client


6.2.3 LEVEL-1 & LEVEL-2 DIAGRAMS

The level-1 and level-2 diagrams in the Unified Modeling Language (UML) are
refinements of the use case diagram, showing the system's functionality at
successive levels of detail.

Fig - 6.2.3 Level -1 & Level -2 Diagrams


6.2.4 SEQUENCE DIAGRAM
A sequence diagram in Unified Modeling Language (UML) is a kind of
interaction diagram that shows how processes operate with one another and in
what order. It is a construct of a Message Sequence Chart. Sequence diagrams are
sometimes called event diagrams, event scenarios, or timing diagrams.

[Sequence diagram: lifelines PREVIOUS, NEW and LAST exchange New Node, ACK,
"Switch to channel X", Channel Switch and Resume OLSR messages, each step
acknowledged before the next.]
Fig - 6.2.4 Sequence Diagram


6.2.5 CLASS DIAGRAM
In software engineering, a class diagram in the Unified Modeling Language (UML)
is a type of static structure diagram that describes the structure of a system by
showing the system's classes, their attributes, their operations (or methods), and
the relationships among the classes.
[Class diagram: an API class with operations Find Class, Find Interface, Find
Functions, Find Factor and Apply Formula, linked to classes for finding classes,
finding interfaces, finding functions for classes and interfaces, finding the
factors needed for the formula, applying all the formulas, and showing a chart
as output.]

Fig - 6.2.5 Class Diagram


CHAPTER-7
IMPLEMENTATION
7.1 GENERAL
This chapter presents the sample coding of the project. The following code
implements the network design and packet transaction functionality between the
server and its clients.
7.2 IMPLEMENTATION
Implementation is the process of converting a new or revised system design
into an operational one. It is the final and most important phase, involving user
training, system testing and the successful running of the developed system. The
user tests the developed system and changes are made according to their needs.
The testing phase involves testing the developed system using various kinds of data.
An elaborate set of test data is prepared and the system is tested using that
data. The corrections are noted for future use. The users are trained to operate
the developed system. Both hardware and software security measures are put in
place so the developed system runs successfully in future.

7.3 REQUIREMENTS GATHERING


The first phase of a software project is to gather requirements. Gathering
software requirements begins as a creative brainstorming process in which the goal
is to develop an idea for a new product that no other software vendor has thought of.
New software product ideas normally materialize as a result of analyzing market
data and interviewing customers about their product needs.

The main function of the requirements gathering phase is to take an abstract
idea that fills a particular need or solves a particular problem and create a
real-world project with a particular set of objectives, a budget, a timeline and a team.
7.4 DESIGN
The design phase is where the technical problems are actually solved, making
the project a reality. In this phase the relationships among the code, database,
user interface, and classes begin to take shape in the minds of the project team.
During the design phase, the project team is responsible for seven deliverables:
1. Data model design
2. User interface design
3. Functional specifications
4. Documentation plan
5. Software Quality Assurance (SQA) test
6. Test cases
7. Detailed design specifications

Data model or schema
The primary objective in designing the data model or schema is to meet the
high-level software specifications that the requirements document outlines. Usually
the database administrator (DBA) designs the data model for the software project.
User interface
The user interface is the first part of the software application that is visible to
the user. The UI provides the user with the capability of navigating through the
software application. The UI is often known in the software industry as the look
and feel aspect of the software application. The design of the UI must be such that


the software application provides an interface that is as user-friendly and as
cosmetically attractive as possible.
Prototype
After the data model and UI design are ready, the project team can design the
prototype for the project. Sales and marketing teams generally cannot wait to get
the prototype in hand to show it off to sales prospects and at industry trade shows.
Functional specification
It provides the definitive overview of what is included in the project. This
deliverable incorporates many of the documents prepared up to this point,
gathering them into one place for easy reference.
Detailed design specifications
The detailed design specification lays out the blueprint of how to develop the
project. It includes the documents created during the design phase and, in some
cases, provides step-by-step information on how to implement the specifications.
7.5 SAMPLE CODING

NETWORK DESIGNING
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;

using System.Text;
using System.Windows.Forms;
using System.Net;
using System.Net.Sockets;
using System.IO;

namespace Resequencing
{
public partial class Server : Form
{
MC902.Chart c = new MC902.Chart();
public Server()
{
InitializeComponent();
}

private void Form1_Load(object sender, EventArgs e)
{
}

private void BtnStartServer_Click(object sender, EventArgs e)
{

if (FTServerCode1.receivedPath != null)
{
backgroundWorker1.RunWorkerAsync();
backgroundWorker2.RunWorkerAsync();
backgroundWorker3.RunWorkerAsync();
lblserver.BackColor = Color.LimeGreen;
}
else
{
MessageBox.Show("Please select file receiving path");
}
}

private void BtnLocation_Click(object sender, EventArgs e)


{
FolderBrowserDialog fd = new FolderBrowserDialog();
if (fd.ShowDialog() == DialogResult.OK)
{
FTServerCode1.receivedPath = fd.SelectedPath;
FTServerCode2.receivedPath = fd.SelectedPath;

}
}

FTServerCode1 obj1 = new FTServerCode1();


FTServerCode2 obj2 = new FTServerCode2();

private void backgroundWorker1_DoWork(object sender, DoWorkEventArgs e)
{
obj1.StartServer();
}
private void timer1_Tick(object sender, EventArgs e)


{
if (FTServerCode1.filefrm1 == "0")
{
lblc11.BackColor = Color.LimeGreen;
FTServerCode1.timedelay[0] += FTServerCode1.tcalc;
//ac1 = "y";
}
else if (FTServerCode1.filefrm1 == "1")
{
FTServerCode1.timedelay[1] += FTServerCode1.tcalc;
if (FTServerCode1.colc11 == null)
{
FTServerCode1.timedelay[0] += FTServerCode1.tcalc;

lblc11.BackColor = Color.Blue;
}

lblc12.BackColor = Color.LimeGreen;
// ac2 = "y";
}
else if (FTServerCode1.filefrm1 == "2")
{
FTServerCode1.timedelay[2] += FTServerCode1.tcalc;
if (FTServerCode1.colc11 == null)
{
FTServerCode1.timedelay[0] += FTServerCode1.tcalc;
lblc11.BackColor = Color.Blue;
}
if (FTServerCode1.colc12 == null)
{
FTServerCode1.timedelay[1] += FTServerCode1.tcalc;

lblc12.BackColor = Color.Blue;
}
lblc13.BackColor = Color.LimeGreen;
// ac3 = "y";
}

else if (FTServerCode1.filefrm1 == "3")


{
FTServerCode1.timedelay[3] += FTServerCode1.tcalc;
if (FTServerCode1.colc11 == null)
{
FTServerCode1.timedelay[0] += FTServerCode1.tcalc;
lblc11.BackColor = Color.Blue;
}
if (FTServerCode1.colc12 == null)
{
FTServerCode1.timedelay[1] += FTServerCode1.tcalc;

lblc12.BackColor = Color.Blue;
}
if (FTServerCode1.colc13 == null)
{
FTServerCode1.timedelay[2] += FTServerCode1.tcalc;
lblc13.BackColor = Color.Blue;
}
lblc14.BackColor = Color.LimeGreen;
//ac4 = "y";
}
else if (FTServerCode1.filefrm1 == "4")

{
FTServerCode1.timedelay[4] += FTServerCode1.tcalc;
if (FTServerCode1.colc11 == null)
{
FTServerCode1.timedelay[0] += FTServerCode1.tcalc;
lblc11.BackColor = Color.Blue;
}
if (FTServerCode1.colc12 == null)
{
FTServerCode1.timedelay[1] += FTServerCode1.tcalc;
lblc12.BackColor = Color.Blue;
}
if (FTServerCode1.colc13 == null)
{
FTServerCode1.timedelay[2] += FTServerCode1.tcalc;
lblc13.BackColor = Color.Blue;
}
if (FTServerCode1.colc14 == null)
{
FTServerCode1.timedelay[3] += FTServerCode1.tcalc;

lblc14.BackColor = Color.Blue;
}

lblc15.BackColor = Color.LimeGreen;
//ac5 = "y";
}
else if (FTServerCode1.filefrm1 == "5")
{
FTServerCode1.timedelay[5] += FTServerCode1.tcalc;

lblc16.BackColor = Color.LimeGreen;
// ac6 = "y";
}
else if (FTServerCode1.filefrm1 == "6")
{
FTServerCode1.timedelay[6] += FTServerCode1.tcalc;

if (FTServerCode1.colc16 == null)
{

lblc16.BackColor = Color.Blue;
}
lblc17.BackColor = Color.LimeGreen;
//ac7 = "y";
}
else if (FTServerCode1.filefrm1 == "7")

{
FTServerCode1.timedelay[7] += FTServerCode1.tcalc;
if (FTServerCode1.colc16 == null)
{
FTServerCode1.timedelay[6] += FTServerCode1.tcalc;
lblc16.BackColor = Color.Blue;
}
if (FTServerCode1.colc17 == null)
{
FTServerCode1.timedelay[7] += FTServerCode1.tcalc;

lblc17.BackColor = Color.Blue;
}
lblc18.BackColor = Color.LimeGreen;
//ac8 = "y";
}
else if (FTServerCode1.filefrm1 == "8")
{
FTServerCode1.timedelay[8] += FTServerCode1.tcalc;
if (FTServerCode1.colc16 == null)
{
FTServerCode1.timedelay[6] += FTServerCode1.tcalc;
lblc16.BackColor = Color.Blue;

}
if (FTServerCode1.colc17 == null)
{
FTServerCode1.timedelay[7] += FTServerCode1.tcalc;
lblc17.BackColor = Color.Blue;
}
if (FTServerCode1.colc18 == null)
{
FTServerCode1.timedelay[8] += FTServerCode1.tcalc;
lblc18.BackColor = Color.Blue;
}
lblc19.BackColor = Color.LimeGreen;
//ac9 = "y";
}
else if (FTServerCode1.filefrm1 == "9")
{
FTServerCode1.timedelay[9] += FTServerCode1.tcalc;
if (FTServerCode1.colc16 == null)
{
FTServerCode1.timedelay[6] += FTServerCode1.tcalc;

lblc16.BackColor = Color.Blue;
}

if (FTServerCode1.colc17 == null)
{
FTServerCode1.timedelay[7] += FTServerCode1.tcalc;

lblc17.BackColor = Color.Blue;
}
if (FTServerCode1.colc18 == null)
{
FTServerCode1.timedelay[8] += FTServerCode1.tcalc;
lblc18.BackColor = Color.Blue;
}
if (FTServerCode1.colc19 == null)
{
FTServerCode1.timedelay[9] += FTServerCode1.tcalc;
lblc19.BackColor = Color.Blue;
}
lblc110.BackColor = Color.LimeGreen;
//ac10 = "y";
}

// send();
}


private void timer2_Tick(object sender, EventArgs e)


{
if (FTServerCode2.filefrm2 == "0")
{
FTServerCode2.timedelay[0] += FTServerCode2.tcalc;
lblc21.BackColor = Color.LimeGreen;
//ac1 = "y";
}
else if (FTServerCode2.filefrm2 == "1")
{
FTServerCode2.timedelay[1] += FTServerCode2.tcalc;
if (FTServerCode2.colc21 == null)
{
FTServerCode2.timedelay[0] += FTServerCode2.tcalc;
lblc21.BackColor = Color.Blue;
}

lblc22.BackColor = Color.LimeGreen;
// ac2 = "y";
}
else if (FTServerCode2.filefrm2 == "2")
{
FTServerCode2.timedelay[2] += FTServerCode2.tcalc;

if (FTServerCode2.colc21 == null)
{
FTServerCode2.timedelay[0] += FTServerCode2.tcalc;
lblc21.BackColor = Color.Blue;
}
if (FTServerCode2.colc22 == null)
{
FTServerCode2.timedelay[1] += FTServerCode2.tcalc;
lblc22.BackColor = Color.Blue;
}
lblc23.BackColor = Color.LimeGreen;
// ac3 = "y";
}
else if (FTServerCode2.filefrm2 == "3")
{
FTServerCode2.timedelay[3] += FTServerCode2.tcalc;
if (FTServerCode2.colc21 == null)
{
FTServerCode2.timedelay[0] += FTServerCode2.tcalc;
lblc21.BackColor = Color.Blue;
}
if (FTServerCode2.colc22 == null)
{

FTServerCode2.timedelay[1] += FTServerCode2.tcalc;

lblc22.BackColor = Color.Blue;
}
if (FTServerCode2.colc23 == null)
{
FTServerCode2.timedelay[2] += FTServerCode2.tcalc;
lblc23.BackColor = Color.Blue;
}
lblc24.BackColor = Color.LimeGreen;
//ac4 = "y";
}
else if (FTServerCode2.filefrm2 == "4")
{
FTServerCode2.timedelay[4] += FTServerCode2.tcalc;
if (FTServerCode2.colc21 == null)
{
FTServerCode2.timedelay[0] += FTServerCode2.tcalc;
lblc21.BackColor = Color.Blue;
}
if (FTServerCode2.colc22 == null)
{
FTServerCode2.timedelay[1] += FTServerCode2.tcalc;

lblc22.BackColor = Color.Blue;
}
if (FTServerCode2.colc23 == null)
{
FTServerCode2.timedelay[2] += FTServerCode2.tcalc;

lblc23.BackColor = Color.Blue;
}
if (FTServerCode2.colc24 == null)
{
FTServerCode2.timedelay[4] += FTServerCode2.tcalc;
lblc24.BackColor = Color.Blue;
}
lblc25.BackColor = Color.LimeGreen;
//ac5 = "y";
}
else if (FTServerCode2.filefrm2 == "5")
{
FTServerCode2.timedelay[5] += FTServerCode2.tcalc;

lblc26.BackColor = Color.LimeGreen;
// ac6 = "y";
}

else if (FTServerCode2.filefrm2 == "6")


{
FTServerCode2.timedelay[6] += FTServerCode2.tcalc;

if (FTServerCode2.colc26 == null)
{
FTServerCode2.timedelay[6] += FTServerCode2.tcalc;
lblc26.BackColor = Color.Blue;
}
lblc27.BackColor = Color.LimeGreen;
//ac7 = "y";
}
else if (FTServerCode2.filefrm2 == "7")
{
// ... remaining cases mirror the channel 1 handler above ...
r.label20.Text = FTServerCode2.timedelay[9].ToString() + " MilliSeconds.";
r.Show();
}

private void btn3_Click(object sender, EventArgs e)
{
}

private void button1_Click(object sender, EventArgs e)


{
c.Show();
}
}
//Channel 1

class FTServerCode1
{
IPEndPoint ipEnd;
Socket sock;
string ser1;
string fileDes, fileini;
int len;
byte[] data1;
byte[] data2;
byte[] data3;
byte[] data4;
byte[] data5;
byte[] data6;
byte[] data7;

byte[] data8;
byte[] data9;
byte[] data10;
byte[] write;
int fsize1, fsize2, fsize3, fsize4, fsize5, fsize6, fsize7, fsize8, fsize9, fsize10;
double tstart;
double tend;

public static string[] path = null;


public static string filefrm1;
string akc;
public FTServerCode1()
{
ipEnd = new IPEndPoint(IPAddress.Any, 5);
sock = new Socket(AddressFamily.InterNetwork, SocketType.Stream,
ProtocolType.IP);
sock.Bind(ipEnd);
}
public static string receivedPath;
public static string curMsg = "Stopped";
public static string colc11;
public static string colc12;
public static string colc13;

public static string colc14;


public static string colc15;
public static string colc16;
public static string colc17;
public static string colc18;
public static string colc19;
public static string colc110;
public static double tcalc;
public static double[] timedelay=new double[10];
public void StartServer()
{
try
{

sock.Listen(100);

Socket clientSock = sock.Accept();

byte[] clientData = new byte[1024 * 15000];

int receivedBytesLen = clientSock.Receive(clientData);
curMsg = "Receiving data...";

filefrm1 = Encoding.ASCII.GetString(clientData, 0, 1);


tstart = Convert.ToDouble(DateTime.Now.Millisecond);
if (filefrm1 == "0")
{
colc11 = "R";
data1 = new byte[receivedBytesLen - 1];
Array.Copy(clientData, 1, data1, 0, receivedBytesLen - 1);
akc = "0";
fsize1 = receivedBytesLen - 1;
send();
tend = tstart - Convert.ToDouble(DateTime.Now.Millisecond) / 1000;
tcalc = tend;
if (data1 != null && data2 != null && data3 != null && data4 != null
&& data5 != null && data6 != null && data7 != null && data8 != null && data9 !
= null && data10 != null)
{
save();
}
}
else if (filefrm1 == "1")
{
colc12 = "R";
data2 = new byte[receivedBytesLen - 1];


Array.Copy(clientData, 1, data2, 0, receivedBytesLen - 1);


akc = "1";
fsize2 = receivedBytesLen - 1;
send();
tend = tstart - Convert.ToDouble(DateTime.Now.Millisecond) / 1000;
tcalc = tend;
if (data1 != null && data2 != null && data3 != null && data4 != null
&& data5 != null && data6 != null && data7 != null && data8 != null && data9 !
= null && data10 != null)
{
save();
}
}
else if (filefrm1 == "2")
{
colc13 = "R";
data3 = new byte[receivedBytesLen - 1];
Array.Copy(clientData, 1, data3, 0, receivedBytesLen - 1);
akc = "2";
fsize3 = receivedBytesLen - 1;
send();
tend = tstart - Convert.ToDouble(DateTime.Now.Millisecond) / 1000;
tcalc = tend;


if (data1 != null && data2 != null && data3 != null && data4 != null
&& data5 != null && data6 != null && data7 != null && data8 != null && data9 !
= null && data10 != null)
{
save();
}
}
else if (filefrm1 == "3")
{
colc14 = "R";
data4 = new byte[receivedBytesLen - 1];
Array.Copy(clientData, 1, data4, 0, receivedBytesLen - 1);
akc = "3";
fsize4 = receivedBytesLen - 1;
send();
tend = tstart - Convert.ToDouble(DateTime.Now.Millisecond) / 1000;
tcalc = tend;
if (data1 != null && data2 != null && data3 != null && data4 != null
&& data5 != null && data6 != null && data7 != null && data8 != null && data9 !
= null && data10 != null)
{
save();
}
}

else if (filefrm1 == "4")


{
colc15 = "R";
data5 = new byte[receivedBytesLen - 1];
Array.Copy(clientData, 1, data5, 0, receivedBytesLen - 1);
akc = "4";
fsize5 = receivedBytesLen - 1;
send();
tend = tstart - Convert.ToDouble(DateTime.Now.Millisecond) / 1000;
tcalc = tend;
if (data1 != null && data2 != null && data3 != null && data4 != null
&& data5 != null && data6 != null && data7 != null && data8 != null && data9 !
= null && data10 != null)
{
save();
}
}
else if (filefrm1 == "5")
{
colc16 = "R";
data6 = new byte[receivedBytesLen - 1];
Array.Copy(clientData, 1, data6, 0, receivedBytesLen - 1);
akc = "5";


fsize6 = receivedBytesLen - 1;
send();
tend = tstart - Convert.ToDouble(DateTime.Now.Millisecond) / 1000;
tcalc = tend;
if (data1 != null && data2 != null && data3 != null && data4 != null
&& data5 != null && data6 != null && data7 != null && data8 != null && data9 !
= null && data10 != null)
{
save();
}
}
else if (filefrm1 == "6")
{
colc17 = "R";
data7 = new byte[receivedBytesLen - 1];
Array.Copy(clientData, 1, data7, 0, receivedBytesLen - 1);
akc = "6";
fsize7 = receivedBytesLen - 1;
send();
tend = tstart - Convert.ToDouble(DateTime.Now.Millisecond) / 1000;
tcalc = tend;
if (data1 != null && data2 != null && data3 != null && data4 != null
&& data5 != null && data6 != null && data7 != null && data8 != null && data9 !
= null && data10 != null)

{
save();
}
}
else if (filefrm1 == "7")
{
colc18 = "R";
data8 = new byte[receivedBytesLen - 1];
Array.Copy(clientData, 1, data8, 0, receivedBytesLen - 1);
akc = "7";
fsize8 = receivedBytesLen - 1;
send();
tend = tstart - Convert.ToDouble(DateTime.Now.Millisecond) / 1000;
tcalc = tend;
if (data1 != null && data2 != null && data3 != null && data4 != null
&& data5 != null && data6 != null && data7 != null && data8 != null && data9 !
= null && data10 != null)
{
save();
}
}
else if (filefrm1 == "8")
{


colc19 = "R";
data9 = new byte[receivedBytesLen - 1];
Array.Copy(clientData, 1, data9, 0, receivedBytesLen - 1);
akc = "8";
fsize9 = receivedBytesLen - 1;
send();
tend = tstart - Convert.ToDouble(DateTime.Now.Millisecond) / 1000;
tcalc = tend;
if (data1 != null && data2 != null && data3 != null && data4 != null
&& data5 != null && data6 != null && data7 != null && data8 != null && data9 !
= null && data10 != null)
{
save();
}
}
else if (filefrm1 == "9")
{
colc110 = "R";
data10 = new byte[receivedBytesLen - 1];
Array.Copy(clientData, 1, data10, 0, receivedBytesLen - 1);
akc = "9";
fsize10 = receivedBytesLen - 1;
send();


tend = tstart - Convert.ToDouble(DateTime.Now.Millisecond) / 1000;


tcalc = tend;

if (data1 != null && data2 != null && data3 != null && data4 != null
&& data5 != null && data6 != null && data7 != null && data8 != null && data9 !
= null && data10 != null)
{
save();
}
}
clientSock.Close();

StartServer();

curMsg = "Received & saved file; server stopped.";

}
catch (Exception ex)
{
curMsg = "File receiving error.";


}
}
private void save()
{
write = new byte[data1.Length + data2.Length + data3.Length + data4.Length +
data5.Length + data6.Length + data7.Length + data8.Length + data9.Length +
data10.Length];
Array.Copy(data1, 0, write, 0, data1.Length);
Array.Copy(data2, 0, write, fsize1, data2.Length);
Array.Copy(data3, 0, write, fsize1 + fsize2, data3.Length);
Array.Copy(data4, 0, write, fsize1 + fsize2 + fsize3, data4.Length);
Array.Copy(data5, 0, write, fsize1 + fsize2 + fsize3 + fsize4, data5.Length);
Array.Copy(data6, 0, write, fsize1 + fsize2 + fsize3 + fsize4 + fsize5,
data6.Length);
Array.Copy(data7, 0, write, fsize1 + fsize2 + fsize3 + fsize4 + fsize5 + fsize6,
data7.Length);
Array.Copy(data8, 0, write, fsize1 + fsize2 + fsize3 + fsize4 + fsize5 + fsize6 +
fsize7, data8.Length);
Array.Copy(data9, 0, write, fsize1 + fsize2 + fsize3 + fsize4 + fsize5 + fsize6 +
fsize7 + fsize8, data9.Length);
Array.Copy(data10, 0, write, fsize1 + fsize2 + fsize3 + fsize4 + fsize5 + fsize6 +
fsize7 + fsize8 + fsize9, data10.Length);
if (System.IO.Directory.Exists(receivedPath + "/SYS1") == false)
{
System.IO.Directory.CreateDirectory(receivedPath + "/SYS1");

}
int fileNameLen = BitConverter.ToInt32(write, 0);
string fileName = Encoding.ASCII.GetString(write, 4, fileNameLen);

BinaryWriter bWrite = new BinaryWriter(File.Open(receivedPath + "/SYS1/" +
fileName, FileMode.Append));
bWrite.Write(write, 4 + fileNameLen, write.Length - 4 - fileNameLen);

curMsg = "Saving file...";

bWrite.Close();
}
public void send()
{
try
{
IPAddress[] ipAddress = Dns.GetHostAddresses("127.0.0.1");
IPEndPoint ipEnd = new IPEndPoint(ipAddress[0], 6);
Socket clientSock = new Socket(AddressFamily.InterNetwork,
SocketType.Stream, ProtocolType.IP);
byte[] ackn = Encoding.ASCII.GetBytes(akc);
clientSock.Connect(ipEnd);
clientSock.Send(ackn);


clientSock.Close();
}
catch (Exception ex)


{
if (ex.Message == "A connection attempt failed because the connected
party did not properly respond after a period of time, or established connection
failed because connected host has failed to respond")
{
//lblError.Text = "";
//lblError.Text = "No Such System Available Try other IP";
}
else
{
if (ex.Message == "No connection could be made because the target machine
actively refused it")
{
//lblError.Text = "";
//lblError.Text = "File Sending fail. Because server not running.";
}
else
{
//lblError.Text = "";

//lblError.Text = "File Sending fail." + ex.Message;


}
}
}
}
}
}
// Channel 2

class FTServerCode2
{
IPEndPoint ipEnd;
Socket sock;
string ser1;
string fileDes, fileini;
int len;
byte[] data1;
byte[] data2;
byte[] data3;
byte[] data4;
byte[] data5;
byte[] data6;
byte[] data7;

byte[] data8;
byte[] data9;
byte[] data10;
byte[] write;
int fsize1, fsize2, fsize3, fsize4, fsize5, fsize6, fsize7, fsize8, fsize9, fsize10;
public static string[] path = null;
public static string filefrm2;
string akc;
double tstart;
double tend;
public FTServerCode2()
{
ipEnd = new IPEndPoint(IPAddress.Any, 7);
sock = new Socket(AddressFamily.InterNetwork, SocketType.Stream,
ProtocolType.IP);
sock.Bind(ipEnd);
}
public static string receivedPath;
public static string curMsg = "Stopped";
public static string colc21;
public static string colc22;
public static string colc23;
public static string colc24;

public static string colc25;


public static string colc26;
public static string colc27;
public static string colc28;
public static string colc29;
public static string colc210;
public static double tcalc;
public static double[] timedelay = new double[10];
public void StartServer()
{
try
{

sock.Listen(100);

Socket clientSock = sock.Accept();

byte[] clientData = new byte[1024 * 15000];

int receivedBytesLen = clientSock.Receive(clientData);
tstart = Convert.ToDouble(DateTime.Now.Millisecond);
curMsg = "Receiving data...";

filefrm2 = Encoding.ASCII.GetString(clientData, 0, 1);


if (filefrm2 == "0")
{
colc21 = "R";
data1 = new byte[receivedBytesLen - 1];
Array.Copy(clientData, 1, data1, 0, receivedBytesLen - 1);
akc = "0";
fsize1 = receivedBytesLen - 1;
send();
tend = tstart - Convert.ToDouble(DateTime.Now.Millisecond) / 1000;
tcalc = tend;
if (data1 != null && data2 != null && data3 != null && data4 != null
&& data5 != null && data6 != null && data7 != null && data8 != null && data9 !
= null && data10 != null)
{
save();
}
}
else if (filefrm2 == "1")
{
colc22 = "R";
data2 = new byte[receivedBytesLen - 1];
Array.Copy(clientData, 1, data2, 0, receivedBytesLen - 1);


akc = "1";
fsize2 = receivedBytesLen - 1;
send();
tend = tstart - Convert.ToDouble(DateTime.Now.Millisecond) / 1000;
tcalc = tend;
if (data1 != null && data2 != null && data3 != null && data4 != null
&& data5 != null && data6 != null && data7 != null && data8 != null && data9 !
= null && data10 != null)
{
save();
}
}
else if (filefrm2 == "2")
{
colc23 = "R";
data3 = new byte[receivedBytesLen - 1];
Array.Copy(clientData, 1, data3, 0, receivedBytesLen - 1);
akc = "2";
fsize3 = receivedBytesLen - 1;
send();
tend = tstart - Convert.ToDouble(DateTime.Now.Millisecond) / 1000;
tcalc = tend;

71

if (data1 != null && data2 != null && data3 != null && data4 != null
&& data5 != null && data6 != null && data7 != null && data8 != null && data9 !
= null && data10 != null)
{
save();
}
}
else if (filefrm2 == "3")
{
colc24 = "R";
data4 = new byte[receivedBytesLen - 1];
Array.Copy(clientData, 1, data4, 0, receivedBytesLen - 1);
akc = "3";
fsize4 = receivedBytesLen - 1;
send();
tend = tstart - Convert.ToDouble(DateTime.Now.Millisecond) / 1000;
tcalc = tend;
if (data1 != null && data2 != null && data3 != null && data4 != null
&& data5 != null && data6 != null && data7 != null && data8 != null && data9 !
= null && data10 != null)
{
save();
}
}
72

else if (filefrm2 == "4")


{
colc25 = "R";
data5 = new byte[receivedBytesLen - 1];
Array.Copy(clientData, 1, data5, 0, receivedBytesLen - 1);
akc = "4";
fsize5 = receivedBytesLen - 1;
send();
tend = tstart - Convert.ToDouble(DateTime.Now.Millisecond) / 1000;
tcalc = tend;
if (data1 != null && data2 != null && data3 != null && data4 != null
&& data5 != null && data6 != null && data7 != null && data8 != null && data9 !
= null && data10 != null)
{
save();
}
}
else if (filefrm2 == "5")
{
colc26 = "R";
data6 = new byte[receivedBytesLen - 1];
Array.Copy(clientData, 1, data6, 0, receivedBytesLen - 1);
akc = "5";

73

fsize6 = receivedBytesLen - 1;
send();
tend = tstart - Convert.ToDouble(DateTime.Now.Millisecond) / 1000;
tcalc = tend;
if (data1 != null && data2 != null && data3 != null && data4 != null
&& data5 != null && data6 != null && data7 != null && data8 != null && data9 !
= null && data10 != null)
{
save();
}
}
else if (filefrm2 == "6")
{
colc27 = "R";
data7 = new byte[receivedBytesLen - 1];
Array.Copy(clientData, 1, data7, 0, receivedBytesLen - 1);
akc = "6";
fsize7 = receivedBytesLen - 1;
send();
tend = tstart - Convert.ToDouble(DateTime.Now.Millisecond) / 1000;
tcalc = tend;
if (data1 != null && data2 != null && data3 != null && data4 != null
&& data5 != null && data6 != null && data7 != null && data8 != null && data9 !
= null && data10 != null)
74

{
save();
}
}
else if (filefrm2 == "7")
{
colc28 = "R";
data8 = new byte[receivedBytesLen - 1];
Array.Copy(clientData, 1, data8, 0, receivedBytesLen - 1);
akc = "7";
fsize8 = receivedBytesLen - 1;
send();
tend = tstart - Convert.ToDouble(DateTime.Now.Millisecond) / 1000;
tcalc = tend;
if (data1 != null && data2 != null && data3 != null && data4 != null
&& data5 != null && data6 != null && data7 != null && data8 != null && data9 !
= null && data10 != null)
{
save();
}
}
else if (filefrm2 == "8")
{

75

colc29 = "R";
data9 = new byte[receivedBytesLen - 1];
Array.Copy(clientData, 1, data9, 0, receivedBytesLen - 1);
akc = "8";
fsize9 = receivedBytesLen - 1;
send();
tend = tstart - Convert.ToDouble(DateTime.Now.Millisecond) / 1000;
tcalc = tend;
if (data1 != null && data2 != null && data3 != null && data4 != null
&& data5 != null && data6 != null && data7 != null && data8 != null && data9 !
= null && data10 != null)
{
save();
}
}
else if (filefrm2 == "9")
{
colc210 = "R";
data10 = new byte[receivedBytesLen - 1];
Array.Copy(clientData, 1, data10, 0, receivedBytesLen - 1);
akc = "9";
fsize10 = receivedBytesLen - 1;
send();

76

tend = tstart - Convert.ToDouble(DateTime.Now.Millisecond) / 1000;


tcalc = tend;

if (data1 != null && data2 != null && data3 != null && data4 != null
&& data5 != null && data6 != null && data7 != null && data8 != null && data9 !
= null && data10 != null)
{
save();
}
}
        clientSock.Close();
        // Wait for the next connection.
        StartServer();
        curMsg = "Received & saved file; server stopped.";
    }
    catch (Exception ex)
    {
        curMsg = "File receiving error.";
    }
}
private void save()
{
    // Reassemble the ten chunks, in order, into a single buffer.
    write = new byte[data1.Length + data2.Length + data3.Length + data4.Length +
        data5.Length + data6.Length + data7.Length + data8.Length + data9.Length +
        data10.Length];
    Array.Copy(data1, 0, write, 0, data1.Length);
    Array.Copy(data2, 0, write, fsize1, data2.Length);
    Array.Copy(data3, 0, write, fsize1 + fsize2, data3.Length);
    Array.Copy(data4, 0, write, fsize1 + fsize2 + fsize3, data4.Length);
    Array.Copy(data5, 0, write, fsize1 + fsize2 + fsize3 + fsize4, data5.Length);
    Array.Copy(data6, 0, write, fsize1 + fsize2 + fsize3 + fsize4 + fsize5,
        data6.Length);
    Array.Copy(data7, 0, write, fsize1 + fsize2 + fsize3 + fsize4 + fsize5 + fsize6,
        data7.Length);
    Array.Copy(data8, 0, write, fsize1 + fsize2 + fsize3 + fsize4 + fsize5 + fsize6 +
        fsize7, data8.Length);
    Array.Copy(data9, 0, write, fsize1 + fsize2 + fsize3 + fsize4 + fsize5 + fsize6 +
        fsize7 + fsize8, data9.Length);
    Array.Copy(data10, 0, write, fsize1 + fsize2 + fsize3 + fsize4 + fsize5 + fsize6 +
        fsize7 + fsize8 + fsize9, data10.Length);
    // Create the output directory if it does not already exist.
    if (System.IO.Directory.Exists(receivedPath + "/SYS2") == false)
    {
        System.IO.Directory.CreateDirectory(receivedPath + "/SYS2");
    }
    // The first four bytes of the buffer hold the file-name length,
    // followed by the file name itself and then the file contents.
    int fileNameLen = BitConverter.ToInt32(write, 0);
    string fileName = Encoding.ASCII.GetString(write, 4, fileNameLen);
    curMsg = "Saving file...";
    BinaryWriter bWrite = new BinaryWriter(
        File.Open(receivedPath + "/SYS2/" + fileName, FileMode.Append));
    bWrite.Write(write, 4 + fileNameLen, write.Length - 4 - fileNameLen);
    bWrite.Close();
}
public void send()
{
    try
    {
        // Send the one-digit acknowledgement back to the sender on port 8.
        IPAddress[] ipAddress = Dns.GetHostAddresses("127.0.0.1");
        IPEndPoint ipEnd = new IPEndPoint(ipAddress[0], 8);
        Socket clientSock = new Socket(AddressFamily.InterNetwork,
            SocketType.Stream, ProtocolType.IP);
        byte[] ackn = Encoding.ASCII.GetBytes(akc);
        clientSock.Connect(ipEnd);
        clientSock.Send(ackn);
        clientSock.Close();
    }
    catch (Exception ex)
    {
        if (ex.Message == "A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond")
        {
            //lblError.Text = "No such system available; try another IP.";
        }
        else if (ex.Message == "No connection could be made because the target machine actively refused it")
        {
            //lblError.Text = "File sending failed because the server is not running.";
        }
        else
        {
            //lblError.Text = "File sending failed: " + ex.Message;
        }
    }
}
}
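The ten-way branch in StartServer relies on a simple framing convention: the first byte of every received buffer is an ASCII digit "0"-"9" naming the chunk, and the rest is the payload. For illustration only (outside the project's C# sources, with hypothetical function names), that framing can be sketched in Python:

```python
def frame_chunk(index, payload):
    # Prefix the payload with its one-digit ASCII chunk index ("0"-"9"),
    # mirroring what the server reads via Encoding.ASCII.GetString(clientData, 0, 1).
    if not 0 <= index <= 9:
        raise ValueError("chunk index must be 0-9")
    return str(index).encode("ascii") + payload

def parse_chunk(buffer):
    # Split a received buffer back into (index, payload), as the server does
    # with filefrm2 and the Array.Copy starting at offset 1.
    return int(chr(buffer[0])), buffer[1:]

frame = frame_chunk(3, b"hello")
index, payload = parse_chunk(frame)
```

A real sender would build one such frame per chunk and transmit each on its own connection, matching the accept-per-chunk loop in StartServer.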

CHAPTER-8
SNAPSHOTS
8.1 GENERAL
Snapshots are the output screens displayed while the project is executing. This
chapter describes the various snapshots that appear in the project and provides a
concise view of its execution.
8.2 SCREENSHOTS
The following snapshots were captured during the execution of the project.
8.2.1 HOME PAGE
This page shows the home page through which both data owners and data users gain access.

Fig-8.2.1 HOME PAGE
8.2.2 CHANNEL-1 & CHANNEL-2 FILE TRANSMIT
This page shows the file transmission part of channel 1 and channel 2.

Fig-8.2.2 TRUSTED CHANNEL FILE TRANSMIT DIAGRAM


8.2.3 START SERVER DIAGRAM
This page shows the server being started, with the receive-location path set.

Fig-8.2.3 START SERVER FROM LOCATION PATH SET DIAGRAM

8.2.4 SENDER TO RECEIVER SIDE LOCATION
This page shows segmented packet access from the sender side to the receiver side.

Fig-8.2.4 SENDER TO RECEIVER SIDE SEGMENTATION PACKET ACCESS DIAGRAM
8.2.5 ACKNOWLEDGEMENT RECEIVE
This page shows the acknowledgement being received.

Fig-8.2.5 ACKNOWLEDGEMENT RECEIVE DIAGRAM

8.2.6 AVERAGE DELAY PROBABILITIES
This page shows the chart of average delay probabilities.

Fig-8.2.6 AVERAGE DELAY PROBABILITIES DIAGRAM

CHAPTER-9
TESTING
9.1 GENERAL
Testing is not isolated to one phase of the project but should be exercised in
all phases. After developing each software module, the developers perform
thorough unit testing of each component; they then perform integration testing
of all the combined modules.
9.2 INTEGRATION TESTING
When the individual components are working correctly and meeting the
specified objectives, they are combined into a working system. This integration is
planned and coordinated so that when a failure occurs, there is some idea of
what caused it. In addition, the order in which components are tested affects the
choice of test cases and tools. The test strategy explains why and how the
components are combined to test the working system; it affects not only the
integration timing and coding order, but also the cost and thoroughness of the
testing.
9.2.1 BOTTOM-UP INTEGRATION
One popular approach for merging components into the larger system is
bottom-up testing. When this method is used, each component at the lowest level
of the system hierarchy is tested individually. Then, the next components to be
tested are those that call the previously tested ones. This approach is repeated
until all components are included in the testing.
The bottom-up method is useful when many of the low-level components are
general-purpose utility routines that are invoked often by others, when the design is
object-oriented, or when the system is integrated from a large number of
stand-alone reused components.
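As a schematic illustration (a Python sketch with hypothetical names, not code from this project), bottom-up testing first exercises a low-level utility on its own and only then tests the component that calls it:

```python
# Lowest-level component: tested first, in isolation.
def checksum(data):
    return sum(data) % 256

# Higher-level component: tested next, calling the already-verified utility.
def make_packet(payload):
    return bytes(payload) + bytes([checksum(payload)])

# Step 1: unit-test the low-level routine.
assert checksum([1, 2, 3]) == 6

# Step 2: test the caller, relying on the verified component underneath.
assert make_packet([1, 2, 3]) == bytes([1, 2, 3, 6])
```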
9.2.2 TOP-DOWN INTEGRATION
Many developers prefer to use a top-down approach, which in many ways is
the reverse of bottom-up. The top level, usually one controlling component, is
tested by itself. Then, all components called by the tested components are
combined and tested as a larger unit. This approach is reapplied until all
components are incorporated.
A component being tested may call another that is not yet tested, so we write
a stub, a special-purpose program that simulates the activity of the missing
component. The stub answers the calling sequence and passes back the output data
that lets the testing process continue.
For example, if a component is called to calculate the next available address but
that component is not yet tested, then a stub is created for it that may pass back a
fixed address, allowing testing to proceed. As with drivers, stubs need not
be complex or logically complete.
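The address-calculator example can be sketched as follows (a Python illustration with hypothetical names): the stub stands in for the untested component by always returning a fixed address, so the caller can be tested in isolation.

```python
def next_address_stub():
    # Stub for the untested address calculator: answers the calling
    # sequence with a fixed value so testing of the caller can proceed.
    return 0x1000

def allocate_record(size, next_address=next_address_stub):
    # Component under test: it calls the (possibly stubbed) calculator.
    start = next_address()
    return {"start": start, "end": start + size}

record = allocate_record(64)
```

Once the real address calculator is tested, it simply replaces the stub as the `next_address` argument.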

9.2.3 BIG-BANG INTEGRATION


When all components have been tested in isolation, it is tempting to mix them
together as the final system and see whether it works the first time. Many
programmers use this big-bang approach for small systems, but it is not practical
for large ones.
In fact, since big-bang testing has several disadvantages, it is not recommended for
any system. First, it requires both stubs and drivers to test the independent
components. Second, because all components are merged at once, it is difficult to
find the cause of any failure. Finally, interface faults cannot be distinguished easily
from other types of faults.
9.3 BLACK BOX TESTING
Black box testing involves testing without knowledge of the internal
workings of the item being tested. For example, when black box testing is applied
to software engineering, the tester knows only the "legal" inputs and what the
expected outputs should be, but not how the program actually arrives at those
outputs. Because of this, black box testing can be considered testing with
respect to the specifications; no other knowledge of the program is necessary.
For this reason, the tester and the programmer can be independent of one
another, avoiding programmer bias toward his own work. Test groups are often
used for this kind of testing. Also, due to the nature of black box testing, test
planning can begin as soon as the specifications are written.

The opposite of this is glass box testing, where test data are derived
from direct examination of the code to be tested. For glass box testing, the test
cases cannot be determined until the code has actually been written. Both of these
testing techniques have advantages and disadvantages, but when combined, they
help to ensure thorough testing of the product.
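A black-box test, then, checks legal inputs against the outputs the specification promises, with no reference to the implementation. A minimal sketch (in Python, with a hypothetical function; the classic triangle-classification example):

```python
def classify_triangle(a, b, c):
    # Implementation under test; a black-box tester never reads this body.
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Test cases derived purely from the specification: input -> expected output.
spec_cases = [
    ((3, 3, 3), "equilateral"),
    ((3, 3, 5), "isosceles"),
    ((3, 4, 5), "scalene"),
]
for args, expected in spec_cases:
    assert classify_triangle(*args) == expected
```

Note that the test table could have been written before any code existed, which is exactly why black-box test planning can begin as soon as the specifications are ready.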
9.4 WHITE BOX TESTING
White box testing uses an internal perspective of the system to design test
cases based on its internal structure. It is also known as glass box, structural, clear
box and open box testing. It requires programming skills to identify all paths
through the software. The tester chooses test case inputs to exercise all paths and to
determine the appropriate outputs. The electrical-hardware analogue is probing and
measuring every node in a circuit, e.g. in-circuit testing (ICT).
Since the tests are based on the actual implementation, when the
implementation changes the tests will probably have to change as well. For
instance, ICT needs updating if a component value changes, and needs a modified
or new fixture if the circuit changes.
This adds financial resistance to the change process, so buggy products
may stay buggy. Automated Optical Inspection (AOI) offers similar component-level
correctness checking without the cost of ICT fixtures; however, changes still
require test updates.
While white box testing is applicable at the unit, integration and system
levels of the software testing process, it is typically applied at the unit level.
Although it normally tests paths within a unit, it can also test paths between units
during integration, and between subsystems during a system-level test.
Though this method of test design can uncover an overwhelming number of
test cases, it might not detect unimplemented parts of the specification or missing
requirements. It does, however, guarantee that all paths through the test object are
executed.

Typical white box test design techniques include:
Control flow testing
Data flow testing
9.4.1 WHITE BOX TESTING STRATEGY


The white box testing strategy deals with the internal logic and structure of the
code. Tests written under this strategy incorporate coverage of the code as written:
its branches, paths, statements and internal logic.
In order to perform white box testing, the tester has to deal with the code
and hence should possess knowledge of coding and logic, i.e. the internal working
of the code. White box testing also requires the tester to look into the code and find
out which unit, statement or chunk of the code is malfunctioning.
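Control flow testing, listed above, can be sketched like this (a Python illustration with a hypothetical function): the tester reads the code and picks inputs that drive execution down every branch at least once.

```python
def grade(score):
    # Two branches: a white-box tester inspects this body and chooses
    # inputs that exercise each path through the function.
    if score >= 50:
        return "pass"
    return "fail"

# White-box test set: one input per path through the function.
assert grade(75) == "pass"   # exercises the true branch
assert grade(10) == "fail"   # exercises the false branch
```

A black-box tester, by contrast, could only choose these inputs from the specification; the white-box tester chooses them from the `score >= 50` condition itself, including the boundary values 49 and 50.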

CHAPTER -10
CONCLUSION
Dynamical neural networks are being increasingly utilized as simple model
nervous systems in autonomous agents research. As their use grows, a thorough
understanding of the dynamical capabilities of such networks will become
essential. In this paper, I have illustrated how the mathematical tools of dynamical
systems theory can be used to gain significant insight into the operation of small
continuous-time recurrent neural networks. Using a combination of elementary
analysis and numerical studies, I have given a fairly complete description of the
possible dynamical behavior and bifurcations of 1- and 2-neuron circuits, along
with a few specific results for larger networks. These results provide both
qualitative insight and, in many cases, quantitative formulae for predicting the
dynamical behavior of particular circuits and how that behavior changes as
network parameters are varied. The synaptic input diagrams described above are
especially useful for understanding the dynamics of 2-neuron circuits. In addition, I
have illustrated one simple method for gaining a qualitative understanding of
CTRNNs with time-varying inputs and presented an example of the complicated
dynamics that can arise in such cases.
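The standard CTRNN state equation is tau_i * dy_i/dt = -y_i + sum_j w_ji * sigma(y_j + theta_j) + I_i. A minimal forward-Euler sketch of a single self-connected neuron (in Python, with illustrative parameter values that are not taken from this report) shows how its state settles toward a stable equilibrium:

```python
import math

def sigma(x):
    # Standard logistic activation used in CTRNNs.
    return 1.0 / (1.0 + math.exp(-x))

def simulate_neuron(w=5.0, theta=-2.5, tau=1.0, I=0.0, y0=0.0, dt=0.01, steps=2000):
    # Forward-Euler integration of tau * dy/dt = -y + w * sigma(y + theta) + I.
    y = y0
    for _ in range(steps):
        dy = (-y + w * sigma(y + theta) + I) / tau
        y += dt * dy
    return y

y_final = simulate_neuron()
```

With this self-weight and bias the neuron is in the bistable regime, and from y0 = 0 the state converges to the lower stable fixed point, i.e. a value satisfying y = w * sigma(y + theta).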

10.1 FUTURE ENHANCEMENT
In future work, we will investigate the impact of other parameters on file
sharing in the network. This work can also be extended to large-scale networks
for faster data transmission.

REFERENCES
1. Abraham, R. H. and C. D. Shaw (1992). Dynamics - The Geometry of
Behavior. Redwood City, CA: Addison-Wesley.
2. Arnold, V. I., Ed. (1994). Dynamical Systems V: Bifurcation Theory
and Catastrophe Theory. New York: Springer-Verlag.
3. Atiya, A. and P. Baldi (1989). Oscillations and synchronizations in
neural networks: An exploration of the labeling hypothesis.
International Journal of Neural Systems 1(2): 103-124.
4. Beer, R. D. (1995a). A dynamical systems perspective on agent-environment interaction. Artificial Intelligence 72:173-215.
5. Beer, R. D. (1995b). Computational and dynamical languages for
autonomous agents. In R. Port and T. van Gelder (Eds.), Mind as
Motion: Explorations in the Dynamics of Cognition (pp. 121-147).
Cambridge, Mass.: MIT Press.
6. Beer, R. D. and J. C. Gallagher (1992). Evolving dynamical neural
networks for adaptive behavior. Adaptive Behavior 1: 91-122.
7. Blum, E. K. and X. Wang (1992). Stability of fixed points and
periodic orbits and bifurcations in analog neural networks.
Neural Networks 5: 577-587.
