


A Project Report Submitted to the

Andhra University in Partial Fulfillment

of the Requirements for the Award of P.G. Degree of


Submitted by


Under the Esteemed Guidance of

Sri C. M. N. SRINIVAS M.C.A., M.Tech.


(Affiliated to Andhra University)




This is to certify that this is a bonafide record of the dissertation work entitled


Master of Computer Applications in the Department of Computer Science (M.C.A.),
K.G.R.L College, Bhimavaram, during the period 2008-2011, in partial fulfillment of the
requirements for the award of M.C.A. This work has not been submitted to any other
University for the award of any degree. This work was carried out under the
authorization and guidance of the organization "8Sigma Softvill Technologies".

Internal Guide Head Of The Department

(N.Srinivas) (N. Srinivas)


This is to certify that this is a bonafide record of the dissertation work entitled


Master of Computer Applications in the PG Department of Computer Science, K.G.R.L
College, Bhimavaram, during the period 2008-2011, in partial fulfillment of the
requirements for the award of M.C.A. This work has not been submitted to any other
University for the award of any degree.

Internal Examiner External Examiner


I hereby declare that the project work titled "TRIPLE-A SECURE RGB


is submitted to the PG Department of Computer Science, K.G.R.L College, Bhimavaram,
affiliated to Andhra University, in partial fulfillment of the requirements for the award of
the Master of Computer Applications degree. This is original work done by me and has
not been submitted to any other Institution.





The pleasure, the achievement, the glory, the satisfaction, the reward, the
application and the construction of my project cannot be thought of without a few who,
apart from their regular schedule, spared valuable time for me. This acknowledgement is
not just a collection of words but also an account of my indebtedness. They have been a
guiding light and a source of inspiration towards the completion of the project.

I would like to thank Dr. M. Lakshmana Rao, Director of K.G.R.L. College of
P.G. Courses, Bhimavaram. I would like to express my deep sense of gratitude to Sri
N. Srinivas, Head of the Department of Computer Science, and my special gratitude to
my Internal Guide Sri C. M. N. Sridhar, Asst. Professor in Computer Science, and to all
of my lecturers for giving valuable suggestions during my project. I sincerely thank Sri
K. Srinivas, my college Librarian, and Sri A. Ramesh Naidu, M.C.A. lab in-charge, for
giving me the technical support.

I wish to express my thanks to one and all who directly or indirectly
helped me during my project work.


Table of Contents


Steganography is the art of hiding information inside another covering medium in
such a way that nobody except the receiver can detect the secret message and retrieve it.
Steganography (which means "covered writing" in Greek) is an old art that has been
practiced since the golden age of Greece, where techniques were recorded such as writing
a message on a wooden tablet and then covering it with wax, or tattooing a message on a
messenger's shaved head and letting his hair grow back before sending him to the
receiver, where his head was shaved again. Other techniques use invisible ink, microdots,
covert channels and character arrangement. Digital steganography has many applications
in today's life. It can be used for digital watermarking to protect copyrights, to tag notes
to digital images (like post-it notes attached to paper files), or to maintain the
confidentiality of valuable data against possible sabotage, theft, and unauthorized
viewing.

Image-based steganography techniques need an image to hide the data in. This
image is called a cover media. Digital images are stored in computer systems as an array
of points (pixels) where each pixel has three color components: Red, Green, and Blue
(RGB). Each pixel is represented with three bytes to indicate the intensity of these three
colors (RGB). Some techniques have been used for image steganography such as LSB,
SCC, Pixel Indicator and image intensity.

In LSB, the least significant bit of each pixel, for a specific color channel or for all
color channels, is replaced with a bit from the secret data. Although it is a simple
technique, the probability of detecting the hidden data is high. The SCC technique is an
enhancement: the color channel in which the secret data is hidden cycles for every bit
according to a specific pattern. For example, the first bit of the secret data is stored in the
LSB of the red channel, the second bit in the green channel, the third bit in the blue
channel, and so on. This technique is more secure than LSB, but it still suffers from the
risk that detecting the cycling pattern will reveal the secret data. It also has less capacity
than LSB. The pixel indicator technique is another image steganography technique,
where the two least significant bits of a specific color channel are used to indicate the
existence of secret data in the two least significant bits of the other two channels,
according to a set of predefined rules. A new idea of RGB steganography based on color
intensity has also been proposed; it is outside the focus of this work.

Even though the pixel indicator technique adds some randomization to harden the
detection of the secret data, its capacity varies depending on the actual values of the
indicator channel, so the actual capacity is unpredictable. Our suggested technique adds
more randomization: to the selection of the pixels in which the secret data is stored, to
the number of bits used to keep the secret data, and to the channels that are used to
store it.
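The LSB and SCC schemes described above can be sketched in a few lines. The following Python fragment is an illustrative sketch only (the project itself was implemented in C#); the function names and the image model, a plain list of [R, G, B] byte triples, are hypothetical stand-ins for real image I/O.

```python
def to_bits(data: bytes):
    """Serialize bytes to a bit list, least significant bit first."""
    return [(byte >> i) & 1 for byte in data for i in range(8)]

def from_bits(bits):
    """Inverse of to_bits."""
    return bytes(sum(b << i for i, b in enumerate(bits[j:j + 8]))
                 for j in range(0, len(bits), 8))

def scc_embed(pixels, secret: bytes):
    """Hide one secret bit per pixel in the LSB, cycling the colour
    channel R -> G -> B for every bit (the SCC pattern).  Using
    ch = 0 throughout would give plain single-channel LSB instead."""
    bits = to_bits(secret)
    if len(bits) > len(pixels):
        raise ValueError("cover image too small")
    stego = [list(p) for p in pixels]
    for i, bit in enumerate(bits):
        ch = i % 3                            # SCC channel cycling
        stego[i][ch] = (stego[i][ch] & ~1) | bit
    return stego

def scc_extract(pixels, n_bytes: int) -> bytes:
    """Re-walk the same cycling pattern to recover the secret."""
    return from_bits([pixels[i][i % 3] & 1 for i in range(n_bytes * 8)])
```

Note that the receiver only needs the cycling pattern and the message length, so anyone who discovers the pattern recovers the secret, which is exactly the weakness noted above.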


Digital steganography is the art of hiding information inside another covering medium
in such a way that nobody except the receiver can detect the secret message and retrieve it.

The user can set a different password for every message he sends. This enables the
manager to transmit the same image to two groups, but with two different passwords
carrying two different messages.

By using this system the image size will not change.

In addition, cryptographic encryption algorithms are applied to the hidden data.

1.2 SCOPE:

There is only one safe place for private data and messages: the place where
nobody looks for it. A file encrypted with algorithms like PGP is not readable, but
everybody knows that there is something hidden. Wouldn't it be nice if everyone could
open your encrypted files and see noisy photos of some old friends instead of your
private data? They surely wouldn't look for pieces of encrypted messages in the pixels.

The security can be considered as two layers: the hiding layer and the encryption
layer. Layer one is the hiding part of the Triple-A algorithm. Notice that the secret data is
scattered throughout the whole image; extracting the secret data without knowledge of
the seeds is almost impossible. On top of layer one, layer two uses the AES algorithm to
encrypt the data. Therefore, even if an attacker knows how to extract the data from the
image, it is still encrypted. Our algorithm has the same unpredictable message size as the
pixel indicator scheme, but it achieves a better maximum capacity ratio. Also, the
unpredictability in the pixel indicator scheme is a function of the image carrier (C),
which usually has a size of megabytes, whereas Triple-A depends on the key (K), which
is of much smaller size.
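The two-layer idea, encrypt first and then scatter the bits across key-selected pixels, can be sketched as follows. This is a hedged Python illustration, not the project's C# code: the Python standard library has no AES, so a key-seeded XOR keystream stands in for the AES layer, and `hide`/`reveal` are hypothetical names.

```python
import random

def keystream_xor(data: bytes, key: int) -> bytes:
    """Stand-in for the AES layer: XOR with a key-seeded
    pseudo-random keystream (symmetric, so it also decrypts)."""
    rng = random.Random(key)
    return bytes(b ^ rng.randrange(256) for b in data)

def hide(pixels, secret: bytes, key: int):
    """Layer 2 first (encrypt), then layer 1 (scatter the bits over a
    key-seeded random selection of pixel positions)."""
    cipher = keystream_xor(secret, key)
    bits = [(byte >> i) & 1 for byte in cipher for i in range(8)]
    order = random.Random(key).sample(range(len(pixels)), len(bits))
    stego = [list(p) for p in pixels]
    for bit, idx in zip(bits, order):
        stego[idx][0] = (stego[idx][0] & ~1) | bit   # red-channel LSB
    return stego

def reveal(pixels, n_bytes: int, key: int) -> bytes:
    """Regenerate the same pixel order from the key, then decrypt."""
    order = random.Random(key).sample(range(len(pixels)), n_bytes * 8)
    bits = [pixels[idx][0] & 1 for idx in order]
    cipher = bytes(sum(b << i for i, b in enumerate(bits[j:j + 8]))
                   for j in range(0, len(bits), 8))
    return keystream_xor(cipher, key)
```

With a real AES implementation (such as the .NET framework's AES classes used in the project) `keystream_xor` would be replaced by proper encryption; the scattering layer is unchanged, and without the key an attacker can recover neither the pixel order nor the plaintext.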


In the old days of Greece, some practices were recorded such as writing a message
on a wooden tablet and then covering it with wax, or tattooing a message on a messenger's
shaved head and letting his hair grow back before sending him to the receiver, where his
head was shaved again. Other techniques use invisible ink and microdots; these are
laborious, suffer from latency problems and are insecure.

In old mechanisms, the image size increases after embedding.

An increase in image size can expose or corrupt the hidden text in the image.

There is no security for the secret text.


We do not change the text itself, but we change the unseen attributes of the
text. These attributes are many, and it is impossible for web servers to track them all.
There are many steganographic methods, and tracking them all would waste huge
amounts of processing for uncertain results. Be aware that steganography is more
effective than encryption when used in the right way.

For stochastic modulation steganography, in order to achieve a higher hiding
capacity, the variance of the stego-noise needs to be increased sharply, which may result
in great degradation of the cover image.


In this application:

 Only authorized persons can handle the application.

 The secret data must be hidden in the image file only.


The Triple-A algorithm is implemented as a software package developed in C#.
The tool encrypts the data before hiding it, using the AES encryption scheme. The
resulting stego-images are tested and compared with the original images using
histograms generated by MATLAB, to check the level of noise or distortion caused by
the Triple-A algorithm. The results are compared with other stego-images generated
using the SCC algorithm, and the level of distortion and the capacity issues are
highlighted. Figure 3 shows an original carrier compared to the same carrier with a
secret embedded using both the SCC and Triple-A algorithms. At first glance, no
difference can be seen between the images, but the histograms of the images shown in
Fig. 4 show minor differences in the values of the R, G and B components.
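The histogram comparison performed here in MATLAB can be approximated with a short Python sketch: compute a 256-bin histogram per color channel and sum the absolute bin differences. Function names are hypothetical; the pixel model is a plain list of [R, G, B] triples.

```python
def channel_histograms(pixels):
    """One 256-bin histogram for each of the R, G, B channels."""
    hists = [[0] * 256 for _ in range(3)]
    for p in pixels:
        for ch in range(3):
            hists[ch][p[ch]] += 1
    return hists

def histogram_distance(img_a, img_b):
    """Total absolute bin difference across all three channels;
    small values mean the stego-image is hard to tell apart."""
    ha, hb = channel_histograms(img_a), channel_histograms(img_b)
    return sum(abs(x - y)
               for ca, cb in zip(ha, hb)
               for x, y in zip(ca, cb))
```

A small distance indicates little statistical distortion, matching the minor differences visible in the histograms of Fig. 4.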


8Sigma Softvill Technologies, a most promising IT solution providing company,
is a one-stop shop for all your business needs. Besides rendering professional and
comprehensive business solutions, we also cater to a range of services in software and
web technology. The acuminous insight of our team has helped set standards in this
ever-evolving industry. 8Sigma Softvill Technologies strives to achieve commercial
success through uncompromising customer satisfaction. Customer satisfaction is
paramount to us, and we seek it by providing quality services at competitive prices.

8 SIGMA SOFTVILL TECHNOLOGIES has web design & software
development experience in programming using PHP, C++, Perl, ASP, ASP.NET, JSP,
Visual Basic and Java, using a variety of databases like MySQL, MSSQL and Access, on
various platforms like Linux, Apache, Windows 98, 2000, 2003 & XP. We are highly
responsive and provide you an interactive cycle of web development - from prototype to
development, documentation and testing. We provide robust, secure and highly scalable
solutions to enhance your business.

Services :

We are highly competent in operating all the standard technology platforms
required to handle different domains of business. We offer a wide range of services
including professional web design and web development, customized software
development, e-commerce solutions & BPO services. Our team closely follows new
trends and effectively implements fresh ideas and novel approaches.


.NET :

Microsoft .NET framework represents a major step forward for Microsoft

developers, encompassing many of the object-oriented design disciplines and managed
code innovations that have become popular in e-business application development over
the last few years. Consequently, 8 Sigma Softvill Technologies has established a special
competency center to spread knowledge and promote best practices with Microsoft's
new .NET Framework (Asp .Net / Vb .Net / C# etc).

The objectives of this team involve:-

 Providing technical solutions implementing Microsoft .NET

 Resolving problems faced in applying Microsoft .NET technology

 Establishing a forum for knowledge sharing among developers

 Building competency in chosen Microsoft servers and services

 Developing reusable components that can be used across projects

 Conducting organization-wide training programs in Microsoft technologies


PHP is recursively known as PHP Hypertext Preprocessor. This open source

server side scripting language has become wildly popular over the past few years; many
developers now swear by it. It lets programmers create web pages with dynamic content
that can interact with databases. This is a huge advantage when you need to develop web
based software applications. If you're looking for something new to do with PHP, look no
further. We have it right here!

8 Sigma Softvill Technologies provides timely, efficient and affordable PHP
programming services, offering offshore PHP programming for both new and existing
dynamic websites running on the PHP, Apache and MySQL combination, which is fast
becoming the choice of the masses for delivering dynamic web content.


Java Competency Center helps clients realize the benefits of Enterprise Java J2EE
platforms, and related technologies including Web Services and J2ME. 8 Sigma Softvill
Technologies has built a competency center that focuses on skill building, knowledge
management and pioneering research in emerging Java technologies.

8 Sigma Softvill Technologies leverages offshore cost and scalability advantage

to significantly reduce development cost across various J2EE development. 8 Sigma
Softvill Technologies Java Competency Center uses deep platform expertise in
developing and delivering enterprise solutions. 8 Sigma Softvill Technologies reduces
software development costs by over 50% by leveraging competency expertise, offshore
cost and scalability.

8 Sigma Softvill Technologies has made significant investments in creating and

growing the Java Competency Center. With trained and experienced Java specialists,
engineers in 8 Sigma Softvill Technologies Competency Center conduct internal training
programs for continuous learning and hands on experience.


At 8 Sigma Softvill , we are striving to understand changing customer needs. We

want to make our customers' lives easier by simply making technology usable. We have
solid Information Systems Professionals, who with the help of world class tools and
equipment, study, design, develop, enhance, customize, implement, maintain and support
various aspects of Information Technology. We have the expertise and experience to help
you cut costs significantly without impacting product quality or delivery schedules.

8 Sigma Softvill Offshore Software Development, is committed to provide ever
increasing levels of customer satisfaction by offering the highest quality in software
development, e-commerce solutions, web site design, after sales and other IT enabled
services. For this, we use modern software development platforms and software
development tools.

We have top class software professionals like project managers, software

engineers, software programmers, software application developers, software quality
testers, web designers and technical writers with exclusive skill sets for this. Transparent
project management and change management practices that emphasize customer
communication at pre-determined intervals through e-mail, teleconferencing and video
conferencing ensure that the customer and project delivery teams carry a consistent
understanding of requirements and project status at all times.

Our area of software development takes its birth from the basic requirement of a
small vendor or even a kid, and grows up to fulfill the requirements of large corporations.


Create. Innovate. Be the best. Within this world, you are your own boss. You set
your own standards and you strive to meet them. Every team member is an asset, and 8
SIGMA SOFTVILL TECHNOLOGIES knows that only the best people can help
make it the best company. Providing impeccable services to clients requires people who
probe their business, understand it, and interpret it for the global web environment.
Empowering our people to deliver their best at all times is a continuous process at 8
SIGMA SOFTVILL TECHNOLOGIES.

The twin objectives of career development at 8 SIGMA SOFTVILL
TECHNOLOGIES are:

 To provide opportunities for employees to enhance their competencies and in turn
achieve career growth.

 To ensure that career development activities are aligned with organizational
objectives to achieve growth for the organization.



Assuming that a new system is to be developed, the next phase is system analysis.
Analysis involves a detailed study of the current system, leading to the specification of a
new system. Analysis is a detailed study of the various operations performed by a system
and their relationships within and outside the system. During analysis, data are collected
on the available files, decision points and transactions handled by the present system.
Interviews, on-site observation and questionnaires are the tools used for system analysis.
Using the following steps, it becomes easy to draw the exact boundary of the new system
under consideration:

 Keeping in view the problems and new requirements

 Working out the pros and cons, including new areas of the system


In the old days of Greece, some practices were recorded such as writing a message
on a wooden tablet and then covering it with wax, or tattooing a message on a messenger's
shaved head and letting his hair grow back before sending him to the receiver, where his
head was shaved again. Other techniques use invisible ink and microdots; these are
laborious, suffer from latency problems and are insecure.


1. In old mechanisms, the image size increases after embedding.

2. An increase in image size can expose or corrupt the hidden text in the image.

3. There is no security for the secret text.


The proposed system overcomes all the drawbacks of the existing system. It
conveys the information without any perceptible distortion. An adaptive image
steganography with high capacity and good security is proposed.

In the proposed system, a depth-varying adaptive image steganography is
proposed. It can achieve higher capacity while being effectively resistant to several
steganalytic algorithms. A new image-based steganography technique - called the
Triple-A algorithm - is proposed. It uses the same principle as LSB, where the secret is
hidden in the least significant bits of the pixels, but with more randomization in the
selection of the number of bits used and the color channels that are used. This technique
can be applied to RGB images, where each pixel is represented by three bytes indicating
the intensity of red, green, and blue in that pixel.

This randomization is expected to increase the security of the system.

By using this system the capacity of data is increased.

The size of the image file is not increased.


Only plain text data is used as the secret message.

Only BMP images are used as the cover media.
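The randomized selection at the heart of Triple-A - a key-seeded choice of both the channel and the embedding depth for each pixel - can be sketched as follows. This Python fragment is a simplified, hypothetical rendering of the idea (the actual Triple-A selection rules and the project's C# implementation differ); here one or two LSBs of one randomly chosen channel are used per pixel.

```python
import random

def triple_a_hide(pixels, secret: bytes, key: int):
    """For each pixel, a key-seeded PRNG picks which channel to use
    and how many low bits (1 or 2) to overwrite.  Both choices are
    hypothetical simplifications of the Triple-A selection rules."""
    bits = [(byte >> i) & 1 for byte in secret for i in range(8)]
    rng = random.Random(key)
    stego = [list(p) for p in pixels]
    i = 0
    for px in stego:
        if i >= len(bits):
            break
        ch = rng.randrange(3)        # random channel choice
        depth = rng.choice([1, 2])   # random embedding depth
        chunk = bits[i:i + depth]
        i += len(chunk)
        for j, bit in enumerate(chunk):
            px[ch] = (px[ch] & ~(1 << j)) | (bit << j)
    if i < len(bits):
        raise ValueError("cover image too small")
    return stego

def triple_a_reveal(pixels, n_bytes: int, key: int) -> bytes:
    """Replay the same PRNG decisions to locate and read the bits."""
    rng = random.Random(key)
    target, bits = n_bytes * 8, []
    for px in pixels:
        if len(bits) >= target:
            break
        ch = rng.randrange(3)
        depth = rng.choice([1, 2])
        for j in range(depth):
            if len(bits) < target:
                bits.append((px[ch] >> j) & 1)
    return bytes(sum(b << i for i, b in enumerate(bits[j:j + 8]))
                 for j in range(0, len(bits), 8))
```

Because the channel and depth decisions are replayed from the key, a receiver with the key recovers the message exactly, while an attacker without it faces a per-pixel selection that varies unpredictably.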


The preliminary investigation examines project feasibility: the likelihood that the
system will be useful to the organization. The main objective of the feasibility study is to
test the technical, operational and economical feasibility of adding new modules and
debugging the old running system. Any system is feasible given unlimited resources and
infinite time. The following aspects are considered in the feasibility study portion of the
preliminary investigation:

• Technical Feasibility

• Operational Feasibility

• Economical Feasibility


Evaluating the technical feasibility is an important part of a feasibility study. This
is because, at this point in time, there is not much detailed design of the system, making it
difficult to assess aspects like performance and cost (on account of the technology to be
deployed).

We used .NET technology, which is platform independent, so it can run on any
platform, and we used SQL Server as the database for data storage; it provides security
for the data.

This project needs only simple hardware components: a PIV 2.8 GHz processor
or above, 1 GB of RAM or above, and 40 GB of hard disk space or above.

The development environment of this system is Windows XP, Visual Studio
.NET 2005 Enterprise Edition, Internet Information Server 5.0 (IIS), Visual Studio .NET
Framework version 2.0 (minimal for deployment), and Microsoft Visual C# .NET.


Proposed projects are beneficial only if they can be turned into information
systems that meet the organization's operating requirements. Operational feasibility
aspects of the project are an important part of project implementation.

Some of the important issues raised to test the operational feasibility of a project
include the following:

 Is there sufficient support for the management from the users?

 Will the system be used and work properly if it is being developed and implemented?

 Will there be any resistance from the users that will undermine the possible application benefits?

This system is targeted to be in accordance with the above-mentioned issues.

Beforehand, the management issues and user requirements have been taken into
consideration. So there is no question of resistance from the users that can undermine the
possible application benefits.

The well-planned design would ensure the optimal utilization of the computer resources
and would help in the improvement of performance status.


A system that can be developed technically, and that will be used if installed, must
still be a good investment for the organization. In the economical feasibility study, the
development cost of creating the system is evaluated against the ultimate benefit derived
from the new system. Financial benefits must equal or exceed the costs.

The system is economically feasible. It does not require any additional hardware
or software. Since the interface for this system is developed using the existing resources
and technologies available at NIC, there is only nominal expenditure, and economical
feasibility is certain.



Outputs from computer systems are required primarily to communicate the results
of security in the network. They are also used to provide a permanent copy of the results
for later consultation. The various types of outputs in general are:

 This project provides network security using steganography.

 Internal outputs, whose destination is within the organization and which are the
user's main interface with the computer.

 Operational outputs, whose use is purely within the computer network.

 Interface outputs, which involve the user in communicating directly.

 Input data can be in four different forms: relational DB, text files, .xls and XML
files. For testing and demo purposes, data from any domain can be chosen. User-B can
provide business data as input.


The user must have a specified login name/user name and password. In the login
form, the user must enter a valid user name and password.

If he enters wrong details, a login-failed message is displayed.


If a new person wants to register, he must fill in the registration form, entering all
the fields present in the form. If any field in the form is left empty, a registration-failed
message is displayed.


After a successful login, the steganography image window opens. At this stage
the user enters the path of the image file that is to be used as the cover media, and a
secret key, which provides security for the data. The user enters the secret data in the
text field and clicks the Hide Message button to hide the data. The file containing the
image and the secret data can then be saved in the database.


At this stage the user selects the encrypted file, enters the valid key, and clicks
the Extract button. The secret message is displayed in the text field. The user may then
save the file in the database if required.


1. Secure access of confidential data (user’s details).

2. 24 X 7 availability

3. Better component design to get better performance at peak time

4. Flexible service based architecture will be highly desirable for future extension

Non-functional requirements are constraints on the operation of the system not
related to the function of the system. These requirements describe user-visible aspects
of the system that are not directly related to its functional behavior.


Usability is the ease of use and learnability of a human-made object. The object
of use can be a software application, website, process, or anything a human interacts
with. A usability study may be conducted as a primary job function by a usability analyst.

This application is highly usable because its fields are familiar to the user at
run time.


In general, reliability (systemic definition) is the ability of a person or system to
perform and maintain its functions in routine circumstances, as well as in hostile or
unexpected circumstances.

This application is available 24 x 7.

3.2.3 Performance:

The performance requirement describes the accuracy and responsiveness of the
system. This application uses better component design to get better performance at peak
time; the performance of the application also depends on the configuration of the system.

The application performs with high accuracy: the hidden data is accurately
decrypted at the receiver end.


This software is robust and platform independent because it is developed using
the .NET framework.

This application supports any text format, which can be hidden in the image file.


These are simply the constraints imposed by the client on the development of the
project, and they should be honored during development.

 The deployment platform of the system is .NET,SQLServer.

 Typical pseudo requirements are the platform and language on which the system
is implemented.

 Security is increased in this system.

 Capacity of hiding data is increased in this system.

Hardware Requirements:

Processor : PIV 2.8 GHz Processor and Above

Ram : RAM 1 GB and Above

Hard Disk : HDD 40 GB Hard Disk Space and Above

Software Requirements:

Front-End : VS .NET 2005

Coding Language : C#.Net

Operating System : Windows XP.

Back End : SQLSERVER 2005



The Microsoft .NET Framework is a software technology that is available with

several Microsoft Windows operating systems. It includes a large library of pre-coded
solutions to common programming problems and a virtual machine that manages the
execution of programs written specifically for the framework. The .NET Framework is a
key Microsoft offering and is intended to be used by most new applications created for
the Windows platform.

The pre-coded solutions that form the framework's Base Class Library cover a
large range of programming needs in a number of areas, including user interface, data
access, database connectivity, cryptography, web application development, numeric
algorithms, and network communications. The class library is used by programmers, who
combine it with their own code to produce applications.

Programs written for the .NET Framework execute in a software environment that
manages the program's runtime requirements. Also part of the .NET Framework, this
runtime environment is known as the Common Language Runtime (CLR). The CLR
provides the appearance of an application virtual machine so that programmers need not
consider the capabilities of the specific CPU that will execute the program. The CLR also
provides other important services such as security, memory management, and exception
handling. The class library and the CLR together compose the .NET Framework.

Principal design features

Interoperability :

Because interaction between new and older applications is commonly required,

the .NET Framework provides means to access functionality that is implemented in

programs that execute outside the .NET environment. Access to COM components is
provided in the System.Runtime.InteropServices and System.EnterpriseServices
namespaces of the framework; access to other functionality is provided using the
P/Invoke feature.

Common Runtime Engine:

The Common Language Runtime (CLR) is the virtual machine component of

the .NET framework. All .NET programs execute under the supervision of the CLR,
guaranteeing certain properties and behaviors in the areas of memory management,
security, and exception handling.

Base Class Library:

The Base Class Library (BCL), part of the Framework Class Library (FCL), is a
library of functionality available to all languages using the .NET Framework. The BCL
provides classes which encapsulate a number of common functions, including file reading
and writing, graphic rendering, database interaction and XML document manipulation.

Simplified Deployment :

Installation of computer software must be carefully managed to ensure that it does

not interfere with previously installed software, and that it conforms to security
requirements. The .NET framework includes design features and tools that help address
these requirements.


The design is meant to address some of the vulnerabilities, such as buffer

overflows, that have been exploited by malicious software. Additionally, .NET provides a
common security model for all applications.


The design of the .NET Framework allows it to theoretically be platform agnostic,

and thus cross-platform compatible. That is, a program written to use the framework
should run without change on any type of system for which the framework is
implemented. Microsoft's commercial implementations of the framework cover
Windows, Windows CE, and the Xbox 360. In addition, Microsoft submits the
specifications for the Common Language Infrastructure (which includes the core class
libraries, Common Type System, and the Common Intermediate Language), the C#
language, and the C++/CLI language to both ECMA and the ISO, making them available
as open standards. This makes it possible for third parties to create compatible
implementations of the framework and its languages on other platforms.


Visual overview of the Common Language Infrastructure (CLI)

Common Language Infrastructure:

The core aspects of the .NET framework lie within the Common Language
Infrastructure, or CLI. The purpose of the CLI is to provide a language-neutral platform
for application development and execution, including functions for exception handling,
garbage collection, security, and interoperability. Microsoft's implementation of the CLI
is called the Common Language Runtime or CLR.


The intermediate CIL code is housed in .NET assemblies. As mandated by

specification, assemblies are stored in the Portable Executable (PE) format, common on
the Windows platform for all DLL and EXE files. The assembly consists of one or more
files, one of which must contain the manifest, which has the metadata for the assembly.
The complete name of an assembly (not to be confused with the filename on disk)
contains its simple text name, version number, culture, and public key token. The public
key token is a unique hash generated when the assembly is compiled, thus two assemblies
with the same public key token are guaranteed to be identical from the point of view of
the framework. A private key can also be specified known only to the creator of the
assembly and can be used for strong naming and to guarantee that the assembly is from
the same author when a new version of the assembly is compiled (required to add an
assembly to the Global Assembly Cache).


All CIL is self-describing through .NET metadata. The CLR checks the metadata
to ensure that the correct method is called. Metadata is usually generated by language
compilers but developers can create their own metadata through custom attributes.
Metadata contains information about the assembly, and is also used to implement the
reflective programming capabilities of .NET Framework.


.NET has its own security mechanism with two general features: Code Access
Security (CAS), and validation and verification. Code Access Security is based on
evidence that is associated with a specific assembly. Typically the evidence is the source
of the assembly (whether it is installed on the local machine or has been downloaded
from the intranet or Internet). Code Access Security uses evidence to determine the
permissions granted to the code. Other code can demand that calling code is granted a
specified permission. The demand causes the CLR to perform a call stack walk: every
assembly of each method in the call stack is checked for the required permission; if any
assembly is not granted the permission a security exception is thrown.
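A sketch of what such a permission demand looks like in code. This uses the legacy System.Security.Permissions types, which exist only on the Windows .NET Framework (CAS was deprecated in later runtimes), so it is illustrative of the mechanism rather than portable:

```csharp
using System;
using System.Security.Permissions;

class CasDemo
{
    // A declarative demand: before the method body runs, the CLR walks the
    // call stack and checks that every caller's assembly has been granted
    // read access to C:\data; otherwise a SecurityException is thrown.
    [FileIOPermission(SecurityAction.Demand, Read = @"C:\data")]
    static void ReadSensitiveData()
    {
        Console.WriteLine("all callers hold the permission");
    }

    static void Main()
    {
        ReadSensitiveData();   // succeeds for fully trusted local code
    }
}
```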

When an assembly is loaded the CLR performs various tests. Two such tests are
validation and verification. During validation the CLR checks that the assembly contains
valid metadata and CIL, and whether the internal tables are correct. Verification is not so
exact. The verification mechanism checks to see if the code does anything that is 'unsafe'.
The algorithm used is quite conservative; hence occasionally code that is 'safe' does not
pass. Unsafe code will only be executed if the assembly has the 'skip verification'
permission, which generally means code that is installed on the local machine.

.NET Framework uses appdomains as a mechanism for isolating code running in a process. Appdomains can be created and code loaded into or unloaded from them
independent of other appdomains. This helps increase the fault tolerance of the
application, as faults or crashes in one appdomain do not affect rest of the application.
Appdomains can also be configured independently with different security privileges. This
can help increase the security of the application by isolating potentially unsafe code. The
developer, however, has to split the application into appdomains; it is not done by the CLR automatically.
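A sketch of appdomain isolation. The CreateDomain/Unload APIs below are .NET Framework-only (newer runtimes support just one appdomain), so this is illustrative:

```csharp
using System;

class AppDomainDemo
{
    static void Main()
    {
        // The process starts with a default appdomain.
        Console.WriteLine(AppDomain.CurrentDomain.FriendlyName);

        // Create a second, isolated appdomain and run code inside it.
        AppDomain sandbox = AppDomain.CreateDomain("Sandbox");
        sandbox.DoCallBack(SayHello);

        // Unloading the sandbox does not disturb the default domain.
        AppDomain.Unload(sandbox);
    }

    static void SayHello()
    {
        Console.WriteLine("running inside " + AppDomain.CurrentDomain.FriendlyName);
    }
}
```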

Class library

Namespaces in the BCL


System.CodeDom

System.Collections

System.Diagnostics

System.Globalization

System.IO

System.Resources

System.Text


Microsoft .NET Framework includes a set of standard class libraries. The class
library is organized in a hierarchy of namespaces. Most of the built-in APIs are part of either the System.* or Microsoft.* namespaces. It encapsulates a large number of common
functions, such as file reading and writing, graphic rendering, database interaction, and
XML document manipulation, among others. The .NET class libraries are available to
all .NET languages. The .NET Framework class library is divided into two parts: the
Base Class Library and the Framework Class Library.

The Base Class Library (BCL) includes a small subset of the entire class library
and is the core set of classes that serve as the basic API of the Common Language
Runtime. The classes in mscorlib.dll and some of the classes in System.dll and
System.Core.dll are considered to be a part of the BCL. The BCL classes are available in
both .NET Framework as well as its alternative implementations including .NET
Compact Framework, Microsoft Silverlight and Mono.

The Framework Class Library (FCL) is a superset of the BCL classes and refers
to the entire class library that ships with .NET Framework. It includes an expanded set of
libraries, including WinForms, ADO.NET, ASP.NET, Language Integrated Query,
Windows Presentation Foundation, Windows Communication Foundation among others.
The FCL is much larger in scope than standard libraries for languages like C++, and
comparable in scope to the standard libraries of Java.

Memory management:

The .NET Framework CLR frees the developer from the burden of managing
memory (allocating and freeing up when done); instead it does the memory management
itself. To this end, the memory allocated to instantiations of .NET types (objects) is done
contiguously from the managed heap, a pool of memory managed by the CLR. As long as
there exists a reference to an object, which might be either a direct reference to an object
or via a graph of objects, the object is considered to be in use by the CLR. When there is
no reference to an object, and it cannot be reached or used, it becomes garbage. However,
it still holds on to the memory allocated to it. .NET Framework includes a garbage
collector which runs periodically, on a separate thread from the application's thread, that
enumerates all the unusable objects and reclaims the memory allocated to them.

The .NET Garbage Collector (GC) is a non-deterministic, compacting, mark-and-sweep garbage collector. The GC runs only when a certain amount of memory has been
used or there is enough pressure for memory on the system. Since it is not guaranteed
when the conditions to reclaim memory are reached, the GC runs are non-deterministic.
Each .NET application has a set of roots, which are pointers to objects on the managed
heap (managed objects). These include references to static objects and objects defined as
local variables or method parameters currently in scope, as well as objects referred to by
CPU registers. When the GC runs, it pauses the application, and for each object referred
to in the root, it recursively enumerates all the objects reachable from the root objects and
marks them as reachable. It uses .NET metadata and reflection to discover the objects
encapsulated by an object, and then recursively walks them. It then enumerates all the objects on the heap (which were initially allocated contiguously) using reflection. All
objects not marked as reachable are garbage. This is the mark phase. Since the memory
held by garbage is not of any consequence, it is considered free space. However, this
leaves chunks of free space between objects which were initially contiguous. The objects
are then compacted together, by using memcpy to copy them over to the free space to
make them contiguous again. Any reference to an object invalidated by moving the
object is updated to reflect the new location by the GC. The application is resumed after
the garbage collection is over.

The GC used by .NET Framework is actually generational. Objects are assigned a generation; newly created objects belong to Generation 0. The objects that survive a
garbage collection are tagged as Generation 1, and the Generation 1 objects that survive
another collection are Generation 2 objects. The .NET Framework uses up to Generation
2 objects. Higher generation objects are garbage collected less frequently than lower
generation objects. This helps increase the efficiency of garbage collection, as older
objects tend to have a larger lifetime than newer objects. Thus, by removing older (and
thus more likely to survive a collection) objects from the scope of a collection run, fewer
objects need to be checked and compacted.
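The generation mechanics can be observed directly with the GC class. A small sketch; the outputs noted are typical workstation behaviour, not guaranteed, since collections are non-deterministic:

```csharp
using System;

class GenerationDemo
{
    static void Main()
    {
        object survivor = new object();
        Console.WriteLine(GC.GetGeneration(survivor));  // typically 0: freshly allocated

        GC.Collect();                                   // force a collection; the survivor is promoted
        Console.WriteLine(GC.GetGeneration(survivor));  // typically 1

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(survivor));  // typically 2, and it stays there

        Console.WriteLine(GC.MaxGeneration);            // 2: the highest generation used
    }
}
```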


Microsoft started development on the .NET Framework in the late 1990s, originally under the name of Next Generation Windows Services (NGWS). By late 2000
the first beta versions of .NET 1.0 were released.

.NET Framework version history:

Version    Version Number    Release Date
1.0        1.0.3705.0        2002-02-13
1.1        1.1.4322.573      2003-04-24
2.0        2.0.50727.42      2005-11-07
3.0        3.0.4506.30       2006-11-06
3.5        3.5.21022.8       2007-11-19

Client Application Development:

Client applications are the closest to a traditional style of application in Windows-based programming. These are the types of applications that display windows or forms on
the desktop, enabling a user to perform a task. Client applications include applications
such as word processors and spreadsheets, as well as custom business applications such
as data-entry tools, reporting tools, and so on. Client applications usually employ
windows, menus, buttons, and other GUI elements, and they likely access local resources such as the file system and peripherals such as printers. Another kind of client application
is the traditional ActiveX control (now replaced by the managed Windows Forms
control) deployed over the Internet as a Web page. This application is much like other
client applications: it is executed natively, has access to local resources, and includes
graphical elements.

In the past, developers created such applications using C/C++ in conjunction with
the Microsoft Foundation Classes (MFC) or with a rapid application development (RAD)
environment such as Microsoft® Visual Basic®. The .NET Framework incorporates
aspects of these existing products into a single, consistent development environment that
drastically simplifies the development of client applications.

The Windows Forms classes contained in the .NET Framework are designed to be
used for GUI development. You can easily create command windows, buttons, menus,
toolbars, and other screen elements with the flexibility necessary to accommodate
shifting business needs.
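A minimal sketch of this style of coding. It requires the Windows-only System.Windows.Forms assembly, and the form and control names are illustrative:

```csharp
using System;
using System.Windows.Forms;   // Windows-only GUI classes

class FormDemo
{
    [STAThread]
    static void Main()
    {
        var form = new Form { Text = "Sample" };   // a window, configured via simple properties
        var button = new Button { Text = "Close" };

        // Behaviour is attached through event-handling routines.
        button.Click += (sender, e) => form.Close();

        form.Controls.Add(button);
        Application.Run(form);                     // runs the message loop until the form closes
    }
}
```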

For example, the .NET Framework provides simple properties to adjust visual
attributes associated with forms. In some cases the underlying operating system does not
support changing these attributes directly, and in these cases the .NET Framework
automatically recreates the forms. This is one of many ways in which the .NET Framework integrates the developer interface, making coding simpler and more consistent.

Server Application Development

Server-side applications in the managed world are implemented through runtime hosts. Unmanaged applications host the common language runtime, which allows your
custom managed code to control the behavior of the server.

This model provides you with all the features of the common language runtime
and class library while gaining the performance and scalability of the host server.

The following illustration shows a basic network schema with managed code
running in different server environments. Servers such as IIS and SQL Server can
perform standard operations while your application logic executes through the managed code.

Server-side managed code

ASP.NET is the hosting environment that enables developers to use the .NET
Framework to target Web-based applications. However, ASP.NET is more than just a
runtime host; it is a complete architecture for developing Web sites and Internet-distributed objects using managed code. Both Web Forms and XML Web services use IIS and ASP.NET as the publishing mechanism for applications, and both have a collection of supporting classes in the .NET Framework.


The Relationship of C# to .NET

C# is a new programming language, and is significant in two respects:

 It is specifically designed and targeted for use with Microsoft's .NET Framework (a feature-rich platform for the development, deployment, and execution of distributed applications)

 It is a language based upon the modern object-oriented design methodology, and when designing it Microsoft has been able to learn from the experience of all the other similar languages that have been around in the 20 years or so since object-oriented principles came to prominence

One important thing to make clear is that C# is a language in its own right.
Although it is designed to generate code that targets the .NET environment, it is not itself
part of .NET. There are some features that are supported by .NET but not by C#, and you might be surprised to learn that there are actually features of the C# language that are not supported by .NET (for example, some instances of operator overloading).

However, since the C# language is intended for use with .NET, it is important to have an understanding of this Framework if we wish to develop applications in C# effectively. So, this chapter examines the Framework before turning to C# itself.

The Common Language Runtime:

Central to the .NET framework is its run-time execution environment, known as the Common Language Runtime (CLR) or the .NET runtime. Code running under the
control of the CLR is often termed managed code.

However, before it can be executed by the CLR, any source code that we develop
(in C# or some other language) needs to be compiled. Compilation occurs in two steps
in .NET:

1. Compilation of source code to Microsoft Intermediate Language (MSIL)

2. Compilation of IL to platform-specific code by the CLR

At first sight this might seem a rather long-winded compilation process. Actually,
this two-stage compilation process is very important, because the existence of the
Microsoft Intermediate Language (managed code) is the key to providing many of the
benefits of .NET. Let's see why.

Advantages of Managed Code:

Microsoft Intermediate Language (often shortened to "Intermediate Language", or "IL") shares with Java byte code the idea that it is a low-level language with a simple syntax (based on numeric codes rather than text), which can be very quickly translated into native machine code. Having this well-defined universal syntax for code has significant advantages.

Platform Independence

First, it means that the same file containing byte code instructions can be placed
on any platform; at runtime the final stage of compilation can then be easily
accomplished so that the code will run on that particular platform. In other words, by
compiling to Intermediate Language we obtain platform independence for .NET, in much
the same way as compiling to Java byte code gives Java platform independence.

You should note that the platform independence of .NET is only theoretical at
present because, at the time of writing, .NET is only available for Windows. However,
porting .NET to other platforms is being explored (see for example the Mono project, an
effort to create an open source implementation of .NET, at http://www.go-mono.com/).

Performance Improvement

Although we previously made comparisons with Java, IL is actually a bit more ambitious than Java byte code. Significantly, IL is always Just-In-Time compiled, whereas Java byte code was often interpreted. One of the disadvantages of Java was that, on execution, the process of translating from Java byte code to native executable resulted in a loss of performance (except in more recent cases, where Java is JIT-compiled on certain platforms).

Instead of compiling the entire application in one go (which could lead to a slow start-up time), the JIT compiler simply compiles each portion of code as it is called (just-in-time). When code has been compiled once, the resultant native executable is stored until the application exits, so that it does not need to be recompiled the next time that portion of code is run. Microsoft argues that this process is more efficient than compiling the entire application code at the start, because of the likelihood that large portions of any application code will not actually be executed in any given run. Using the JIT compiler, such code will never get compiled.

This explains why we can expect that execution of managed IL code will be
almost as fast as executing native machine code. What it doesn't explain is why Microsoft
expects that we will get a performance improvement. The reason given for this is that,
since the final stage of compilation takes place at run time, the JIT compiler will know
exactly what processor type the program will run on. This means that it can optimize the
final executable code to take advantage of any features or particular machine code
instructions offered by that particular processor.

Traditional compilers will optimize the code, but they can only perform
optimizations that will be independent of the particular processor that the code will run
on. This is because traditional compilers compile to native executable before the software
is shipped. This means that the compiler doesn't know what type of processor the code
will run on beyond basic generalities, such as that it will be an x86-compatible processor
or an Alpha processor. Visual Studio 6, for example, optimizes for a generic Pentium
machine, so the code that it generates cannot take advantage of hardware features of
Pentium III processors. On the other hand, the JIT compiler can do all the optimizations
that Visual Studio 6 can, and in addition to that it will optimize for the particular
processor the code is running on.

Language Interoperability:

We have seen how the use of IL enables platform independence, and how JIT compilation should improve performance. However, IL also facilitates language interoperability.

Simply put, you can compile to IL from one language, and this compiled code should
then be interoperable with code that has been compiled to IL from another language.

Intermediate Language

From what we learned in the previous section, Intermediate Language obviously plays a fundamental role in the .NET Framework. As C# developers, we now understand
that our C# code will be compiled into Intermediate Language before it is executed
(indeed, the C# compiler only compiles to managed code). It makes sense, then, that we
should now take a closer look at the main characteristics of IL, since any language that
targets .NET would logically need to support the main characteristics of IL too.

Here are the important features of the Intermediate Language:

 Object-orientation and use of interfaces

 Strong distinction between value and reference types

 Strong data typing

 Error handling through the use of exceptions

 Use of attributes

Support of Object Orientation and Interfaces:

The language independence of .NET does have some practical limits. In particular, IL, however it is designed, is inevitably going to implement some particular
programming methodology, which means that languages targeting it are going to have to
be compatible with that methodology. The particular route that Microsoft has chosen to
follow for IL is that of classic object-oriented programming, with single implementation
inheritance of classes.

Besides classic object-oriented programming, Intermediate Language also brings in the idea of interfaces, which saw their first implementation under Windows with COM. .NET interfaces are not the same as COM interfaces; they do not need to support any of the COM infrastructure (for example, they are not derived from IUnknown, and they do not have associated GUIDs). However, they do share with COM interfaces the idea that they provide a contract, and classes that implement a given interface must provide implementations of the methods and properties specified by that interface.
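The contract idea can be sketched in C# (the ILogger and ConsoleLogger names are illustrative): a class that claims to implement an interface must supply every member the interface specifies, and callers can work purely against the contract.

```csharp
using System;

// The contract: no GUIDs, no IUnknown – just required members.
interface ILogger
{
    void Log(string message);
}

// The implementing class must provide Log, or it will not compile.
class ConsoleLogger : ILogger
{
    public void Log(string message)
    {
        Console.WriteLine("LOG: " + message);
    }
}

class InterfaceDemo
{
    static void Main()
    {
        ILogger logger = new ConsoleLogger();   // callers see only the contract
        logger.Log("hello");                    // prints "LOG: hello"
    }
}
```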

Object Orientation and Language Interoperability

Working with .NET means compiling to the Intermediate Language, and that in
turn means that you will need to be programming using traditional object-oriented
methodologies. That alone is not, however, sufficient to give us language interoperability.
After all, C++ and Java both use the same object-oriented paradigms, but they are still not
regarded as interoperable. We need to look a little more closely at the concept of
language interoperability.

An associated problem was that, when debugging, you would still have to
independently debug components written in different languages. It was not possible to
step between languages in the debugger. So what we really mean by language
interoperability is that classes written in one language should be able to talk directly to
classes written in another language. In particular:

 A class written in one language can inherit from a class written in another language

 The class can contain an instance of another class, no matter what the languages of
the two classes are

 An object can directly call methods against another object written in another language

 Objects (or references to objects) can be passed around between methods

 When calling methods between languages, we can step between the method calls in the debugger, even where this means stepping between source code written in different languages

This is all quite an ambitious aim, but amazingly, .NET and the Intermediate Language have achieved it. For the case of stepping between methods in the debugger, this facility is really offered by the Visual Studio .NET IDE rather than by the CLR itself.

Strong Data Typing

One very important aspect of IL is that it is based on exceptionally strong data typing. What we mean by that is that all variables are clearly marked as being of a
particular, specific data type (there is no room in IL, for example, for the Variant data
type recognized by Visual Basic and scripting languages). In particular, IL does not
normally permit any operations that result in ambiguous data types.

For instance, VB developers will be used to being able to pass variables around
without worrying too much about their types, because VB automatically performs type
conversion. C++ developers will be used to routinely casting pointers between different
types. Being able to perform this kind of operation can be great for performance, but it
breaks type safety. Hence, it is permitted only in very specific circumstances in some of
the languages that compile to managed code. Indeed, pointers (as opposed to references)
are only permitted in marked blocks of code in C#, and not at all in VB (although they
are allowed as normal in managed C++). Using pointers in your code will immediately
cause it to fail the memory type safety checks performed by the CLR.

You should note that some languages compatible with .NET, such as VB.NET,
still allow some laxity in typing, but that is only possible because the compilers behind
the scenes ensure the type safety is enforced in the emitted IL.
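The effect of this strong typing is easy to see in C#: both the compiler and the CLR refuse ambiguous conversions, so they must be spelled out explicitly. A small sketch:

```csharp
using System;

class TypeSafetyDemo
{
    static void Main()
    {
        object boxed = "42";          // really a string, held through an object reference

        // int n = (int)boxed;        // would compile, but the CLR checks the real
        //                            // type at run time and throws InvalidCastException

        // The conversion has to be stated explicitly, step by step:
        int n = int.Parse((string)boxed);
        Console.WriteLine(n + 1);     // prints 43
    }
}
```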

Although enforcing type safety might initially appear to hurt performance, in
many cases this is far outweighed by the benefits gained from the services provided by
.NET that rely on type safety. Such services include:

 Language Interoperability

 Garbage Collection

 Security

 Application Domains

Common Type System (CTS)

This data type problem is solved in .NET through the use of the Common Type
System (CTS). The CTS defines the predefined data types that are available in IL, so that
all languages that target the .NET framework will produce compiled code that is
ultimately based on these types.

The CTS doesn't merely specify primitive data types, but a rich hierarchy of types, which includes well-defined points in the hierarchy at which code is permitted to define its own types. The hierarchical structure of the Common Type System reflects the single-inheritance object-oriented methodology of IL.

Common Language Specification (CLS)

The Common Language Specification works with the Common Type System to ensure language interoperability. The CLS is a set of minimum standards that
all compilers targeting .NET must support. Since IL is a very rich language, writers of
most compilers will prefer to restrict the capabilities of a given compiler to only support a
subset of the facilities offered by IL and the CTS. That is fine, as long as the compiler
supports everything that is defined in the CLS.
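In C#, the compiler can be asked to check CLS compliance with the CLSCompliant attribute. A sketch (the Calculator class is illustrative):

```csharp
using System;

// Ask the compiler to warn about public members that other
// .NET languages might not be able to consume.
[assembly: CLSCompliant(true)]

public class Calculator
{
    public int Add(int a, int b)    // int is a CLS type: fine
    {
        return a + b;
    }

    // public uint Total;           // uint is not in the CLS; uncommenting
    //                              // this draws compiler warning CS3003
}

class ClsDemo
{
    static void Main()
    {
        Console.WriteLine(new Calculator().Add(2, 3));   // prints 5
    }
}
```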

Garbage Collection

The garbage collector is .NET's answer to memory management, and in particular to the question of what to do about reclaiming memory that running applications ask for. Up until now there have been two techniques used on the Windows platform for deallocating memory that processes have dynamically requested from the heap:

 Make the application code do it all manually

 Make objects maintain reference counts

The .NET runtime relies on the garbage collector instead. This is a program
whose purpose is to clean up memory. The idea is that all dynamically requested memory
is allocated on the heap (that is true for all languages, although in the case of .NET, the
CLR maintains its own managed heap for .NET applications to use). Every so often,
when .NET detects that the managed heap for a given process is becoming full and
therefore needs tidying up, it calls the garbage collector. The garbage collector runs
through variables currently in scope in your code, examining references to objects stored
on the heap to identify which ones are accessible from your code – that is to say which
objects have references that refer to them. Any objects that are not referred to are deemed
to be no longer accessible from your code and can therefore be removed. Java uses a
similar system of garbage collection to this.


Security

.NET can really excel in terms of complementing the security mechanisms provided by Windows because it can offer code-based security, whereas Windows only really offers role-based security.

Role-based security is based on the identity of the account under which the
process is running, in other words, who owns and is running the process. Code-based
security on the other hand is based on what the code actually does and on how much the
code is trusted. Thanks to the strong type safety of IL, the CLR is able to inspect code
before running it in order to determine required security permissions. .NET also offers a mechanism by which code can indicate in advance what security permissions it will require to run.

The importance of code-based security is that it reduces the risks associated with
running code of dubious origin (such as code that you've downloaded from the Internet).
For example, even if code is running under the administrator account, it is possible to use
code-based security to indicate that that code should still not be permitted to perform
certain types of operation that the administrator account would normally be allowed to
do, such as read or write to environment variables, read or write to the registry, or to
access the .NET reflection features.

.NET Framework Classes

The .NET base classes are a massive collection of managed code classes that have
been written by Microsoft, and which allow you to do almost any of the tasks that were
previously available through the Windows API. These classes follow the same object
model as used by IL, based on single inheritance. This means that you can either
instantiate objects of whichever .NET base class is appropriate, or you can derive your
own classes from them.

The great thing about the .NET base classes is that they have been designed to be
very intuitive and easy to use. For example, to start a thread, you call the Start() method
of the Thread class. To disable a TextBox, you set the Enabled property of a TextBox
object to false. This approach will be familiar to Visual Basic and Java developers, whose
respective libraries are just as easy to use. It may however come as a great relief to C++
developers, who for years have had to cope with such API functions as GetDIBits(),
RegisterClassEx(), and IsEqualIID(), as well as a whole plethora of functions that
required Windows handles to be passed around.
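The thread case really is that simple; a sketch using the standard System.Threading classes:

```csharp
using System;
using System.Threading;

class ThreadDemo
{
    static void Main()
    {
        // Starting a thread is just constructing a Thread and calling Start().
        var worker = new Thread(() => Console.WriteLine("worker running"));
        worker.Start();
        worker.Join();               // wait for the worker to finish
        Console.WriteLine("done");   // printed after "worker running"
    }
}
```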

Namespaces

Namespaces are the way that .NET avoids name clashes between classes. They
are designed, for example, to avoid the situation in which you define a class to represent a
customer, name your class Customer, and then someone else does the same thing (quite a likely scenario – the proportion of businesses that have customers seems to be quite high).

A namespace is no more than a grouping of data types, but it has the effect that
the names of all data types within a namespace automatically get prefixed with the name
of the namespace. It is also possible to nest namespaces within each other. For example,
most of the general-purpose .NET base classes are in a namespace called System. The base class Array is in this namespace, so its full name is System.Array.

If a namespace is not explicitly supplied, then the type will be added to a nameless global namespace.
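A sketch of nested namespaces in C#; the names Acme and Billing are illustrative, not Framework namespaces:

```csharp
using System;

namespace Acme
{
    namespace Billing
    {
        public class Customer { }   // full name: Acme.Billing.Customer
    }
}

class NamespaceDemo
{
    static void Main()
    {
        // The namespace names prefix the type name automatically,
        // just as System prefixes Array to give System.Array.
        var customer = new Acme.Billing.Customer();
        Console.WriteLine(customer.GetType().FullName);   // prints "Acme.Billing.Customer"
    }
}
```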

Creating .NET Applications Using C#:

C# can be used to create console applications: text-only applications that run in a DOS window. You'll probably use console applications when unit testing class libraries,
and for creating Unix/Linux daemon processes. However, more often you'll use C# to
create applications that use many of the technologies associated with .NET. In this
section, we'll give you an overview of the different types of application that you can write
in C#.

Creating Windows Forms:

Although C# and .NET are particularly suited to web development, they still offer
splendid support for so-called "fat client" apps, applications that have to be installed on
the end-user's machine, where most of the processing takes place. This support comes from Windows Forms.

Windows Forms:

A Windows Form is the .NET answer to a VB 6 Form. To design a graphical window interface, you just drag controls from a toolbox onto a Windows Form. To
determine the window's behavior, you write event-handling routines for the form's
controls. A Windows Form project compiles to an EXE that must be installed alongside
the .NET runtime on the end user's computer. Like other .NET project types, Windows
Form projects are supported by both VB.NET and C#.

Windows Control:

Although Web Forms and Windows Forms are developed in much the same way,
you use different kinds of controls to populate them. Web Forms use Web Controls, and
Windows Forms use Windows Controls.

A Windows Control is a lot like an ActiveX control. After a Windows Control is implemented, it compiles to a DLL that must be installed on the client's machine. In fact,
the .NET SDK provides a utility that creates a wrapper for ActiveX controls, so that they
can be placed on Windows Forms. As is the case with Web Controls, Windows Control
creation involves deriving from a particular class, System.Windows.Forms.Control.

Windows Services:

A Windows Service (originally called an NT Service) is a program that is designed to run in the background in Windows NT/2000/XP (but not Windows 9x).
Services are useful where you want a program to be running continuously and ready to
respond to events without having been explicitly started by the user. A good example would be the World Wide Web Service on web servers, which listens for web requests from clients.

It is very easy to write services in C#. There are .NET Framework base classes
available in the System.ServiceProcess namespace that handle many of the boilerplate

tasks associated with services, and in addition, Visual Studio .NET allows you to create a
C# Windows Service project, which starts you out with the Framework C# source code
for a basic Windows service.
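A skeleton of such a service; SampleService is an illustrative name, and ServiceBase lives in the Windows-only System.ServiceProcess assembly, so this compiles and runs only on Windows:

```csharp
using System.ServiceProcess;   // Windows-only service base classes

class SampleService : ServiceBase
{
    public SampleService()
    {
        ServiceName = "SampleService";
    }

    protected override void OnStart(string[] args)
    {
        // begin the background work here
    }

    protected override void OnStop()
    {
        // release resources here
    }

    static void Main()
    {
        // Hand control to the Windows Service Control Manager; the EXE
        // cannot simply be double-clicked like an ordinary application.
        ServiceBase.Run(new SampleService());
    }
}
```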

The Role of C# In .Net Enterprise Architecture:

C# requires the presence of the .NET runtime, and it will probably be a few years before most clients – particularly most home machines – have .NET installed.
In the meantime, installing a C# application is likely to mean also installing the .NET
redistributable components. Because of that, it is likely that the first place we will see
many C# applications is in the enterprise environment. Indeed, C# arguably presents an
outstanding opportunity for organizations that are interested in building robust, n-tiered
client-server applications.

Using Visual Studio .NET, there is no need to open the Enterprise Manager from SQL Server. Visual Studio .NET has a SQL Servers tab within the Server Explorer that lists all of the connected servers that have SQL Server installed on them. Opening up a particular server tab gives five options:
 Database Diagrams
 Tables
 Views
 Stored Procedures
 Functions

To create a new diagram, right-click Database Diagrams and select New Diagram. The Add Tables dialog enables you to select one or more of the tables that you want in the visual diagram you are going to create. Visual Studio .NET looks at all the relationships between the tables and then creates a diagram that opens in the Document window.

Each table is represented in the diagram along with a list of all the columns that are available in that particular table. Each relationship between tables is represented by a connection line between those tables. The properties of a relationship can be viewed by right-clicking the relationship line.

The Server Explorer allows you to work directly with the tables in SQL Server. It gives a list of tables contained in the particular database selected. By double-clicking one of the tables, the table is seen in the Document window. This grid of data shows all the columns and rows of data contained in the particular table.

Data can be added to or deleted from the table grid directly in the Document window. To add a new row of data, move to the bottom of the table and type in a new row of data after selecting the first column of the first blank row. You can also delete a row of data from the table by right-clicking the gray box at the left end of the row and selecting Delete.
By right-clicking the gray box at the far left end of a row, the primary key can
be set for that particular column. Relationships to columns in other tables can be set
by selecting the Relationships option. To create a new table, right-click the Tables section
within the Server Explorer and select New Table. This opens the design view, which
lets you specify the columns and column details of the table.
To run queries against the tables in Visual Studio .NET, open the query
toolbar by choosing View -> Toolbars -> Query. To query a specific table, open that
table in the Document window. Then click the SQL button, which divides the Document
window into two panes: one for the query and one for the results gathered from the
query. The query is executed by clicking the Execute Query button, and the result is
produced in the lower pane of the Document window.

To create a new view, right-click the Views node and select New View. The Add
Table dialog box lets you select the tables from which the view is produced. The next
pane lets you customize the appearance of the data in the view.

The OLAP Services feature available in SQL Server version 7.0 is now called
SQL Server 2000 Analysis Services. The term OLAP Services has been replaced with the
term Analysis Services. Analysis Services also includes a new data mining component.
The Repository component available in SQL Server version 7.0 is now called Microsoft
SQL Server 2000 Meta Data Services. References to the component now use the term
Meta Data Services. The term repository is used only in reference to the repository
engine within Meta Data Services.

A SQL Server database consists of six types of objects.

A database is a collection of data about a specific topic.

We can work with a table in two views:
1. Design View
2. Datasheet View

Design View
To build or modify the structure of a table, we work in the table's design view. We
can specify what kind of data the table will hold.

Datasheet View
To add, edit or analyze the data itself, we work in the table's datasheet view mode.

A query is a question that is asked of the data. Access gathers the data that
answers the question from one or more tables. The data that makes up the answer is either
a dynaset (if you can edit it) or a snapshot (which cannot be edited). Each time we run a
query, we get the latest information in the dynaset. Access either displays the dynaset or
snapshot for us to view, or performs an action on it, such as deleting or updating.



Software design sits at the technical kernel of the software engineering process
and is applied regardless of the development paradigm and area of application. Design is
the first step in the development phase for any engineered product or system. The
designer's goal is to produce a model or representation of an entity that will later be built.
Once system requirements have been specified and analyzed, system design is
the first of the three technical activities (design, code and test) that are required to build
and verify software.

The importance of design can be stated with a single word: quality. Design is the place
where quality is fostered in software development. Design provides us with
representations of software that can be assessed for quality. Design is the only way that we can
accurately translate a customer's view into a finished software product or system.
Software design serves as a foundation for all the software engineering steps that follow.
Without a strong design we risk building an unstable system, one that will be difficult to
test and whose quality cannot be assessed until the last stage.

During design, progressive refinements of data structure, program structure, and
procedural detail are developed, reviewed and documented. System design can be viewed
from either a technical or a project management perspective. From the technical point of
view, design comprises four activities: architectural design, data structure design,
interface design and procedural design.

Systems design is the process or art of defining the architecture, components,
modules, interfaces, and data for a system to satisfy specified requirements. One could
see it as the application of systems theory to product development. There is some overlap
and synergy with the disciplines of systems analysis, systems architecture and systems
engineering.


In this system, two types of process modules are developed.

There is only one safe place for private data and messages: the place where
nobody looks for it. A file encrypted with algorithms like PGP is not readable, but
everybody knows that there is something hidden. Wouldn't it be nice if everyone could
open your encrypted files and see noisy photos of some old friends instead of your
private data? They surely wouldn't look for pieces of encrypted messages in the pixel
data.


1. Carrier Bitmap

2. Key/filename

3. Hide

4. Extract

Carrier Bitmap:

In this module you upload the image. To hide a message, open a bitmap file.

Key :

Enter a password or select a key file. The key file can be any file, another
bitmap for example. This password or key will be treated as a stream of bytes specifying
the space between two changed pixels. Text files are not recommended, because they may
result in a quite regular noise pattern. The longer the key file or password is, the less
regular the noise will appear.
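The step pattern can be sketched in a few lines. The application itself is C#; the following is an illustrative Python model (the function name and parameters are mine), showing how the key bytes determine the gaps between the pixels that carry message bytes, and why a short, repetitive key yields a regular spacing:

```python
def pixel_positions(key: bytes, message_len: int, width: int):
    """Yield the (x, y) pixel used for each message byte. Each key byte is
    the gap to the next carrier pixel (a zero byte is treated as a step of 1,
    and the key repeats if it is shorter than the message)."""
    x, y = 1, 0                         # pixel (0, 0) stores the message length
    for i in range(message_len):
        x += key[i % len(key)] or 1     # step forward by the next key byte
        while x > width - 1:            # wrap to the next pixel row
            x -= width - 1
            y += 1
        yield (x, y)

# a short, repetitive key gives an evenly spaced (hence more detectable) pattern
positions = list(pixel_positions(b"\x03\x05", message_len=4, width=10))
```

With the 2-byte key above, the carrier pixels fall at (4, 0), (9, 0), (3, 1) and (8, 1); a longer, less regular key spreads them less predictably.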


Next, enter the secret message or choose a file, and click the Hide button.
The application writes the length of the message in bytes into the first pixel. After that it
reads a byte from the message, reads another byte from the key, and calculates the
coordinates of the pixel to use for the message-byte. It increments or resets the color
component index to switch between the R, G and B components. Then it replaces the R,
G or B component of the pixel (according to the color component index) with the
message-byte, and repeats the procedure with the next byte of the message. At last, the
new bitmap is displayed. Save the bitmap by clicking the Save button. If the grayscale
flag is set, all components of the color are changed. Grayscale noise is less visible in most
pictures.


To extract a hidden message from a bitmap, open the bitmap file and specify the
password or key you used when hiding the message. Then choose a file to store the
extracted message in (or leave the field blank if you only want to view hidden Unicode
text), and click the Extract button. The application steps through the pattern specified by
the key and extracts the bytes from the pixels. At last, it stores the extracted stream in the
file and tries to display the message. Don't bother about the character chaos if your
message is not Unicode text; the data in the file will be all right. This works with every
kind of data: you can even hide a small bitmap inside a larger bitmap. If you are really
paranoid, you can encrypt your files with PGP or GnuPG before hiding them in bitmaps.
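The hide and extract steps above invert each other, which a compact round trip makes clear. This is a simplified Python sketch of the idea only (the application itself is C#, and this toy model omits the grayscale flag and the key-byte XOR masking): key bytes give the gap to the next carrier pixel, the message length goes into pixel (0, 0), and the R, G and B components are used in rotation.

```python
def _walk(key, count, width):
    """Yield (x, y, component) for each of `count` message bytes."""
    x, y = 1, 0                       # (0, 0) holds the message length
    for i in range(count):
        x += key[i % len(key)] or 1   # a zero key byte counts as a step of 1
        while x > width - 1:          # wrap to the next pixel row
            x -= width - 1
            y += 1
        yield x, y, i % 3             # rotate through R, G, B

def hide(bitmap, width, message, key):
    n = len(message)
    bitmap[(0, 0)] = [n >> 16 & 255, n >> 8 & 255, n & 255]
    for (x, y, comp), byte in zip(_walk(key, n, width), message):
        bitmap[(x, y)][comp] = byte   # overwrite one color component

def extract(bitmap, width, key):
    r, g, b = bitmap[(0, 0)]
    n = (r << 16) + (g << 8) + b      # message length from the first pixel
    return bytes(bitmap[(x, y)][comp] for x, y, comp in _walk(key, n, width))

# toy "bitmap": dict of mutable [r, g, b] pixels on a white 64x64 canvas
bitmap = {(x, y): [255, 255, 255] for x in range(64) for y in range(64)}
hide(bitmap, 64, b"any kind of data", key=b"\x05\x0b\x02")
recovered = extract(bitmap, 64, key=b"\x05\x0b\x02")
```

The same key must be supplied on both sides; with a different key, `_walk` visits different pixels and the extracted bytes are meaningless.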

Pretty Good Privacy:

PGP is a computer program that provides cryptographic privacy and

authentication. PGP is often used for signing, encrypting and decrypting e-mails to
increase the security of e-mail communications.

GNU Privacy Guard:

GnuPG or GPG is a free, open-source alternative to the PGP suite of
cryptographic software.


Every organization needs security for its data. By using this application, data
can be stored and sent securely.

First, the user enters a valid user id and password to log in. Then he enters the
data hiding/extract window.

If the user enters wrong values, a login-failed message is displayed.

If a new person wants to register, the person should fill in all the fields of the
registration form. If the person leaves any field empty, an error message is displayed.

After a successful login, a new window is displayed. This window is
used for data hiding/extraction.

Data hiding:

In the data hiding step the data is encrypted using the AES algorithm with a
secret key, which is entered by the user.

In this application we use a carrier bitmap as the cover medium. The key is used
to provide security for the data: using this key, the data is encrypted with the help of the
AES algorithm. Then the data is embedded into the carrier bitmap image.

After hiding the message, the file is sent to the receiver, and saved into the
database if required.
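The encrypt-then-embed pipeline can be sketched as follows. AES itself needs a cryptography library, so this Python illustration substitutes a stand-in XOR cipher keyed by a SHA-256-derived keystream; it is not AES and not secure, and exists only to show that the ciphertext, never the plaintext, is what gets embedded:

```python
import hashlib

def keystream_cipher(data: bytes, key: bytes) -> bytes:
    """Stand-in for AES (illustration only): XOR with a SHA-256-derived
    keystream. In the real application, replace this with AES."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        # extend the keystream one hash block at a time
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

secret = b"account numbers"
key = b"user-entered key"
ciphertext = keystream_cipher(secret, key)    # step 1: encrypt with the key
# step 2 (not shown): embed `ciphertext`, never `secret`, into the carrier bitmap
restored = keystream_cipher(ciphertext, key)  # the receiver reverses the XOR
```

Because the XOR keystream is its own inverse, running the same function with the same key restores the plaintext, mirroring how the receiver decrypts after extraction.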

Data Extracting:

In this step the data is extracted from the carrier bitmap image file sent by the
sender. The receiver can extract the secret message from that image file by using the
valid key, and save that file into the database if required.


The functional model, represented in UML with use case diagrams, describes the
functionality of the system from the user's point of view.


A use case is a collection of scenarios. An actor initiates a use case. After its
initiation, a use case may interact with other actors as well. A use case represents a
complete flow of events through the system, in the sense that it describes a series of
interactions that results from the initiation of the use case.

Sender use case diagram:

[Use case diagram: the Sender actor performs "Image Upload" and "Enter Password".]

Receiver's use case diagram:

[Use case diagram: the Receiver actor performs "Image Upload" and "Enter Password".]

Use case diagram:

[Use case diagram: the User/Admin actor performs "Upload Image to Extract",
"Extract Hidden Text" and "Save File".]

Use case for data hiding:

Use case name: DataHiding
Participating actors: Sender
Entry condition: The sender must have a valid user name and password.
Flow of events:
1. Select an image.
2. Enter a secret key.
3. Enter secret data.
4. Hide the text, then save and send.
Exit condition: Data hidden successfully.

Use case for data extracting:

Use case name: DataExtracting
Participating actors: Receiver
Entry condition: The receiver must have a valid user name and password.
Flow of events:
1. Select a file from the database (or a received file).
2. Enter the valid key.
3. Extract the data.
Exit condition: The data is displayed.


It describes the structure of the system in terms of objects, attributes, associations
and operations.


It describes the structure of the system in a class diagram. Each class is modeled as
a rectangle. This rectangle contains three parts: the first defines the class name,
the second describes the attributes, and the last describes the operations.

Class: A Class is a description for a set of objects that shares the same attributes,
and has similar operations, relationships, behaviors and semantics.

Generalization: Generalization is a relationship between a general element and a

more specific kind of that element. It means that the more specific element can be used
whenever the general element appears. This relation is also known as specialization or
inheritance link.

Realization: Realization is the relationship between a specialization and its

implementation. It is an indication of the inheritance of behavior without the inheritance
of structure.

Association: Association is represented by drawing a line between classes.

Associations represent structural relationships between classes and can be named to
facilitate model understanding. If two classes are associated, you can navigate from an
object of one class to an object of the other class.

Aggregation: Aggregation is a special kind of association in which one class represents

the larger class that consists of a smaller class. It has the meaning of a "has-a"
relationship.
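These relationships can be made concrete with a few hypothetical classes (the names here are illustrative, not taken from the project's class diagram); a Python sketch:

```python
class User:                          # general element
    def __init__(self, name):
        self.name = name

class Sender(User):                  # generalization: a Sender "is-a" User
    pass

class Message:
    def __init__(self, text):
        self.text = text

class CarrierBitmap:                 # aggregation: the carrier "has-a" Message
    def __init__(self, message):
        self.message = message       # association, navigable from the carrier

sender = Sender("alice")
carrier = CarrierBitmap(Message("secret"))
```

Generalization lets a `Sender` be used wherever a `User` is expected, while the aggregation lets us navigate from a `CarrierBitmap` object to its contained `Message`.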



It describes internal behavior of the system. This can be explained by the

following UML diagrams.


Sequence diagrams are used to formalize the behavior of the system and to
visualize the communication among objects. They are useful for identifying the
additional objects that participate in use cases.


An object can be viewed as an entity at a particular point in time with a specific
value, and as a holder of identity that has different values over time. Associations among
objects are not shown. When you place an object tag in the design area, a lifeline is
automatically drawn and attached to that object tag.


An actor represents a coherent set of roles that users of a system play when
interacting with the use cases of the system. An actor participates in use cases to
accomplish an overall purpose. An actor can represent the role of a human, a device, or
any other system.


A message is the sending of a signal from one sender object to one or more receiver
objects. It can also be the call of an operation on a receiver object by a caller object. The
arrow can be labeled with the name of the message (operation or signal) and its arguments.

Duration Message:

A message that indicates an action will cause transition from one state to another

Self Message:

A message that indicates an action will perform at a particular state and stay there.

Create Message:

A message that indicates an action that will perform between two states.

Sequence diagram for login:

[Sequence diagram: Sender/Receiver, loginCls, DB and Homepage objects exchange the
messages "enter userid, password()", "response yes()" and "response no()".]

This sequence diagram explains how the user logs in to the system. It contains
four objects. The user enters an id and password; these values are verified at the login
form and checked for validity against the database. If the values are not correct, the
system gives a failure message. Otherwise it proceeds to the next level.

Sequence diagram for registration:

[Sequence diagram: Sender/Receiver, RegisterCls, DB and Login objects exchange the
message "enter uservalues()".]

This sequence diagram explains how a new user registers with the system. In this
stage the user fills in all the fields present on the form. The inserted values are stored in
the database. Then the user can log in to the system.

Sequence diagram for hiding the plain text in an image file:

[Sequence diagram: the sender interacts with the image-upload, generating and
hide-with-encrypted-data objects via the messages "enter image()", "click on
generation()", "plain text()" and "send to receiver()".]

This sequence diagram explains how the sender merges the data into the image
file. The sender first uploads an image file and gives a key and plain text. The plain text
is encrypted using the AES algorithm with the key. Then the data is hidden in the image
file and sent to the receiver.

Sequence diagram for extracting plain text from an image file:

[Sequence diagram: the receiver interacts with the image, decrypt, plaintext and save
objects via the messages "enter image url()", "enter key()", "execute decrypt
algorithms()", "get plaintext()" and "click save()".]

This sequence diagram explains how the data is extracted from the image. The
receiver enters the received image file and the related key to decrypt the data. After
getting the data, the file is saved into the database.


State chart diagrams describe the behavior of an individual object as a number of
states and transitions between the states. A state represents a set of values for an object.


Hiding states:

Upload image
Enter key / any file as password
Enter text to hide
Click to Hide
Save hidden image

Extracting states:

Upload the hiding image
Enter key / any file as password to extract the message from the image
Click to Extract
Save secret message


An activity diagram describes a system in terms of activities. Activities are states
that represent sets of operations. Activity diagrams are similar to flow charts in that they
can be used to represent the control flow.

Registration Activity Diagram:

[Activity diagram: Enter user name and password -> Get the details -> submit ->
Validate details -> Get details -> submit -> Validate data -> Successfully registered.]

Login Activity Diagram:

[Activity diagram: Enter user name and password -> Get details -> Validate data ->
Accepted (yes) or Rejected (no).]

Admin Activity Diagram:

[Activity diagram: Enter login details -> Get the data -> submit -> Validate data ->
Enter text / Upload image -> Get the data -> submit -> Validate details and data ->
yes or no.]

User Activity Diagram:

[Activity diagram: Enter login details -> Get the data -> submit -> Validate data ->
Decrypt / Upload image -> Get the data -> Validate details and data -> yes or no.]

Flow Chart for hiding secret text in an image and extracting:

[Flow chart: Upload image -> Enter key/password -> check whether a key or a key file
was given -> hide the text in the image using the key/file.]



5.5 Database Dictionary:

6. Sample Code:

using System;
using System.Drawing;
using System.Windows.Forms;
using System.Text;
using System.IO;

namespace PictureKey {

    public class CryptUtility {

        public static void HideMessageInBitmap(Stream messageStream, Bitmap bitmap, Stream keyStream, bool useGrayscale){
            HideOrExtract(ref messageStream, bitmap, keyStream, false, useGrayscale);
            messageStream = null;
        }

        public static void ExtractMessageFromBitmap(Bitmap bitmap, Stream keyStream, ref Stream messageStream){
            HideOrExtract(ref messageStream, bitmap, keyStream, true, false);
        }

        private static void HideOrExtract(ref Stream messageStream, Bitmap bitmap, Stream keyStream, bool extract, bool useGrayscale){
            //Width of the gap to the next pixel used
            int currentStepWidth = 0;
            //Current byte in the key stream
            byte currentKeyByte;
            //Current byte in the key stream - reverse direction
            byte currentReverseKeyByte;
            //current position in the key stream
            long keyPosition;
            //maximum X and Y position
            int bitmapWidth = bitmap.Width - 1;
            int bitmapHeight = bitmap.Height - 1;
            //color component to use next (0 = R, 1 = G, 2 = B)
            int currentColorComponent = 0;
            Color pixelColor;
            Int32 messageLength;

            if(extract){
                //Read the length of the hidden message from the first pixel
                pixelColor = bitmap.GetPixel(0, 0);
                messageLength = (pixelColor.R << 16) + (pixelColor.G << 8) + pixelColor.B;
                messageStream = new MemoryStream(messageLength);
            }else{
                messageLength = (Int32)messageStream.Length;
                //16777215 = 2^24 - 1 is the largest length that fits into one RGB pixel
                if(messageStream.Length >= 16777215){
                    String exceptionMessage = "The message is too long, only 16777215 bytes are allowed.";
                    throw new Exception(exceptionMessage);
                }

                long countPixels = (bitmapWidth * bitmapHeight) - 1;
                //Pixels required - start with one pixel for length of message
                long countRequiredPixels = 1;
                //add up the gaps between used pixels (sum up the bytes of the key)
                while((keyStream.Position < keyStream.Length) && (keyStream.Position < messageStream.Length)){
                    countRequiredPixels += keyStream.ReadByte();
                }
                countRequiredPixels *= (long)System.Math.Ceiling(((float)messageStream.Length / (float)keyStream.Length));

                if(countRequiredPixels > countPixels){ //The bitmap is too small
                    String exceptionMessage = "The image is too small for this message and key. " + countRequiredPixels + " pixels are required.";
                    throw new Exception(exceptionMessage);
                }

                //Write length of the message into the first pixel
                int colorValue = messageLength;
                int red = colorValue >> 16;
                colorValue -= red << 16;
                int green = colorValue >> 8;
                int blue = colorValue - (green << 8);
                pixelColor = Color.FromArgb(red, green, blue);
                bitmap.SetPixel(0, 0, pixelColor);
            }

            //Reset the streams
            keyStream.Seek(0, SeekOrigin.Begin);
            messageStream.Seek(0, SeekOrigin.Begin);

            //Skip the first pixel, it carries the message length
            Point pixelPosition = new Point(1, 0);

            //Loop over the message and hide (or extract) each byte
            for(int messageIndex = 0; messageIndex < messageLength; messageIndex++){
                //repeat the key, if it is shorter than the message
                if(keyStream.Position == keyStream.Length){
                    keyStream.Seek(0, SeekOrigin.Begin);
                }
                //Get the next pixel-count from the key, use "1" if it's 0
                currentKeyByte = (byte)keyStream.ReadByte();
                currentStepWidth = (currentKeyByte == 0) ? (byte)1 : currentKeyByte;

                //jump to reverse-read position and read from the end of the stream
                keyPosition = keyStream.Position;
                keyStream.Seek(-keyPosition, SeekOrigin.End);
                currentReverseKeyByte = (byte)keyStream.ReadByte();
                //jump back to normal read position
                keyStream.Seek(keyPosition, SeekOrigin.Begin);

                //Perform line breaks, if current step is wider than the image
                while(currentStepWidth > bitmapWidth){
                    currentStepWidth -= bitmapWidth;
                    pixelPosition.Y++;
                }
                //Move X-position, wrapping to the next row if necessary
                if((bitmapWidth - pixelPosition.X) < currentStepWidth){
                    pixelPosition.X = currentStepWidth - (bitmapWidth - pixelPosition.X);
                    pixelPosition.Y++;
                }else{
                    pixelPosition.X += currentStepWidth;
                }

                pixelColor = bitmap.GetPixel(pixelPosition.X, pixelPosition.Y);

                if(extract){
                    //Undo the xor with the reverse-read key byte to get the hidden byte
                    byte foundByte = (byte)(currentReverseKeyByte ^ GetColorComponent(pixelColor, currentColorComponent));
                    messageStream.WriteByte(foundByte);
                    //Rotate color components
                    currentColorComponent = (currentColorComponent == 2) ? 0 : currentColorComponent + 1;
                }else{
                    //To add a bit of confusion, xor the byte with a byte read from the key in reverse direction
                    int currentByte = messageStream.ReadByte() ^ currentReverseKeyByte;
                    if(useGrayscale){
                        //Grayscale: change all components of the color
                        pixelColor = Color.FromArgb(currentByte, currentByte, currentByte);
                    }else{
                        //Change one component of the color to the message-byte
                        SetColorComponent(ref pixelColor, currentColorComponent, currentByte);
                    }
                    //Rotate color components
                    currentColorComponent = (currentColorComponent == 2) ? 0 : currentColorComponent + 1;
                    bitmap.SetPixel(pixelPosition.X, pixelPosition.Y, pixelColor);
                }
            }

            //the streams will be closed by the calling method
            bitmap = null;
            keyStream = null;
        }

        private static byte GetColorComponent(Color pixelColor, int colorComponent){
            byte returnValue = 0;
            switch(colorComponent){
                case 0:
                    returnValue = pixelColor.R;
                    break;
                case 1:
                    returnValue = pixelColor.G;
                    break;
                case 2:
                    returnValue = pixelColor.B;
                    break;
            }
            return returnValue;
        }

        private static void SetColorComponent(ref Color pixelColor, int colorComponent, int newValue){
            switch(colorComponent){
                case 0:
                    pixelColor = Color.FromArgb(newValue, pixelColor.G, pixelColor.B);
                    break;
                case 1:
                    pixelColor = Color.FromArgb(pixelColor.R, newValue, pixelColor.B);
                    break;
                case 2:
                    pixelColor = Color.FromArgb(pixelColor.R, pixelColor.G, newValue);
                    break;
            }
        }

        private static String UnTrimColorString(String color, int desiredLength){
            //left-pad the color string with zeros up to the desired length
            int difference = desiredLength - color.Length;
            if(difference > 0){
                color = new String('0', difference) + color;
            }
            return color;
        }
    }
}



Upload an image (e.g., a .bmp file).

Enter a password or select a key file.

Enter the secret message or choose a file, and click the Hide button.

Save the file in the specified location.

Extracting Process:

Open the extracting image from the specified location.

Enter a password or select a key file to extract.

Select the Extract tab on the form.

Click the "extract hidden text" button to extract the secret text from the image.

To save the content text, click the save browse button on the extract tab.




Software testing is a critical element of software quality assurance and represents
the ultimate review of specification, design and coding. In fact, testing is the one step in
the software engineering process that could be viewed as destructive rather than
constructive.

A strategy for software testing integrates software test case design methods into a
well-planned series of steps that result in the successful construction of software. Testing
is the set of activities that can be planned in advance and conducted systematically. The
underlying motivation of program testing is to affirm software quality with methods that
can be applied economically and effectively to both large- and small-scale systems.


The software engineering process can be viewed as a spiral. Initially system

engineering defines the role of software and leads to software requirement analysis where
the information domain, functions, behavior, performance, constraints and validation
criteria for software are established. Moving inward along the spiral, we come to design
and finally to coding. To develop computer software we spiral in along streamlines that
decrease the level of abstraction on each turn.

A strategy for software testing may also be viewed in the context of the spiral.
Unit testing begins at the vertex of the spiral and concentrates on each unit of the
software as implemented in source code. Testing progresses by moving outward along the
spiral to integration testing, where the focus is on the design and the construction of the
software architecture. Taking another turn outward on the spiral we encounter
validation testing, where requirements established as part of software requirements
analysis are validated against the software that has been constructed. Finally we arrive at
system testing, where the software and other system elements are tested as a whole.



Component Testing


Integration Testing




Unit testing focuses verification effort on the smallest unit of software design, the
module. The unit testing performed here is white-box oriented, and for some modules the
steps are conducted in parallel.


This type of testing ensures that

 All independent paths have been exercised at least once

 All logical decisions have been exercised on their true and false sides

 All loops are executed at their boundaries and within their operational bounds

 All internal data structures have been exercised to assure their validity.

To follow the concept of white box testing we tested each form independently, to
verify that the data flow is correct, that all conditions are exercised to check their
validity, and that all loops are executed at their boundaries.


The established technique of flow graphs with cyclomatic complexity was used to
derive test cases for all the functions. The main steps in deriving test cases were:

Use the design of the code and draw the corresponding flow graph.

Determine the cyclomatic complexity of the resultant flow graph, using the formula:

V(G)=E-N+2 or

V(G)=P+1 or

V(G)=Number Of Regions

Where V(G) is Cyclomatic complexity,

E is the number of edges,

N is the number of flow graph nodes,

P is the number of predicate nodes.

Determine the basis set of linearly independent paths.
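As a quick check that the three formulas agree, consider a small hypothetical flow graph (not one of the project's functions): a loop whose body contains an if/else, giving 7 nodes, 8 edges and 2 predicate nodes.

```python
# flow graph: node 2 is the loop test, node 3 the if/else inside the loop
edges = [(1, 2), (2, 3), (3, 4), (3, 5), (4, 6), (5, 6), (6, 2), (2, 7)]
nodes = {n for edge in edges for n in edge}

# predicate nodes are those with two outgoing edges
outdegree = {}
for a, _ in edges:
    outdegree[a] = outdegree.get(a, 0) + 1
predicates = [n for n, d in outdegree.items() if d == 2]

v_from_edges = len(edges) - len(nodes) + 2   # V(G) = E - N + 2
v_from_preds = len(predicates) + 1           # V(G) = P + 1
```

Both formulas give V(G) = 3 (and the planar drawing of this graph has three regions), so three linearly independent paths suffice as a basis set.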


In this part of the testing, each condition was tested for both its true and false
outcomes, and all the resulting paths were tested, so that each path that may be generated
by a particular condition is traced to uncover any possible errors.


This type of testing selects the paths of the program according to the locations of
definitions and uses of variables. This kind of testing was used only where local
variables were declared. The definition-use chain method was used in this type of
testing. It was particularly useful in nested statements.


In this type of testing all the loops are tested to all the limits possible. The
following exercise was adopted for all loops:

 All the loops were tested at their limits, just above them and just below them.

 All the loops were skipped at least once.

 For nested loops, test the innermost loop first and then work outwards.

 For concatenated loops the values of dependent loops were set with the help of
connected loop.

 Unstructured loops were resolved into nested loops or concatenated loops and tested
as above.
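Applied to a simple bounded loop (a hypothetical function, not project code), the checklist above translates into concrete test values at the limits, just inside them, and at the skip case:

```python
LIMIT = 100

def bounded_sum(n: int) -> int:
    """Sum 1..n; the loop body runs exactly n times, with n capped at LIMIT."""
    if n < 0 or n > LIMIT:
        raise ValueError("n out of range")
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

# skip the loop entirely (0), one pass (1), just below the limit, and at the limit
results = {n: bounded_sum(n) for n in (0, 1, LIMIT - 1, LIMIT)}
```

A value just above the limit (`LIMIT + 1`) should also be tested and is expected to raise, exercising the loop's upper boundary from the outside.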


Test Case for Login:

(Status: P = Passed, F = Failed)

1. Login successful
   Input: enter username and password
   Expected output: login success, split page displayed
   Actual output: split page displayed
   Status: P

2. Invalid user login
   Input: log in as a user with wrong login details
   Expected output: login failed, error message should be displayed
   Actual output: error message displayed
   Status: P

3. Validate username in login
   Input: log in with a blank username and correct password
   Expected output: error message should be displayed
   Actual output: error message displayed
   Status: P

4. Validate password in login
   Input: log in with a blank password and correct username
   Expected output: error message should be displayed
   Actual output: error message displayed
   Status: P

5. Username and password blank
   Input: log in with blank username and password
   Expected output: error message should be displayed
   Actual output: error message displayed
   Status: P
Test Case for registration:

(Status: P = Passed, F = Failed)

1. Insert values into the corresponding text areas
   Input: enter first name, last name, userid
   Expected output: values inserted, login page displayed
   Actual output: login page displayed
   Status: P

2. Insertion failed
   Input: register with existing login details
   Expected output: registration failed, error message should be displayed
   Actual output: error message displayed
   Status: P

4. Blank password for registration
   Input: enter username, leave password blank
   Expected output: registration successful, login page displayed
   Actual output: error message displayed
   Status: F

5. Blank username for registration
   Input: leave username blank
   Expected output: registration successful, login page displayed
   Actual output: error message displayed
   Status: F

6. Username and password blank
   Input: blank username and password
   Expected output: error message should be displayed
   Actual output: error message displayed
   Status: P

7. Username and password blank
   Input: blank username and password
   Expected output: values inserted
   Actual output: error message displayed
   Status: F
Test Case for selecting an image file and entering a secret message
(message hiding time):

(Status: P = Passed, F = Failed)

1. Open an image file
   Input: select a file
   Expected output: selected file path displayed
   Actual output: selected file path displayed
   Status: P

2. Open an image file
   Input: empty
   Expected output: no file path displayed
   Actual output: no file path displayed

3. Key
   Input: any secret key
   Expected output: key displayed as * symbols
   Actual output: key displayed as * symbols
   Status: P

4. Verify key and click the Hide text button
   Input: no key entered
   Expected output: message should be displayed
   Actual output: message displayed
   Status: P

5. Enter secret message
   Input: enter text
   Expected output: entered text displayed in the text box area
   Actual output: entered text displayed in the text box area
   Status: P

6. Enter secret message and click the Hide text button
   Input: no message entered (left empty)
   Expected output: message should be displayed
   Actual output: message displayed
   Status: P

Test Case for extracting hidden text from an image file:

(Status: P = Passed, F = Failed)

1. Select an image file which has the secret text
   Input: image file
   Expected output: the file path is displayed
   Actual output: file path displayed
   Status: P

2. Select an image file
   Input: no image file
   Expected output: no path will be displayed
   Actual output: no path displayed
   Status: P

3. Enter key value
   Input: valid key
   Expected output: secret text should be displayed
   Actual output: secret text displayed
   Status: P

4. Verify key
   Input: key not entered
   Expected output: message should be displayed
   Actual output: message displayed
   Status: P

5. Verify key for the selected image
   Input: invalid key entered
   Expected output: unreadable text should be displayed
   Actual output: unreadable text displayed
   Status: P
Test case for selecting an image file from the database in the extract hidden text step:

(Status: P = Passed, F = Failed)

1. Click on the load button
   Input: click on the load button
   Expected output: image file numbers displayed
   Actual output: image file numbers displayed
   Status: P

2. Validate the load button when the user has no images in the DB
   Input: click on the load button
   Expected output: nothing should be displayed
   Actual output: nothing displayed
   Status: P


The Triple-A concealment technique is introduced as a new method to hide digital data
inside an image-based medium. The algorithm adds more randomization by using two
different seeds generated from a user-chosen key in order to select the component(s) used
to hide the secret bits, as well as the number of bits used inside the RGB image
component. This randomization adds more security, especially if an active encryption
technique such as AES is used. The capacity ratio is increased above the SCC and pixel
indicator schemes. Triple-A has a capacity ratio of 14%, which can be increased if a
larger number of bits is used inside the component(s). As a final note, we can say that the
SCC algorithm is a special case of the Triple-A algorithm in which the number of bits
used is fixed and equal to 1 and Seed2 is restricted to [0,2] with a circular effect.


Steganography is used to conceal information, hiding it without any perceptible
distortion. In future, this process need not be applied to images alone; it can be extended
to video formats and also to audio formats.


 S. Dumitrescu, X. Wu, and N. Memon. On steganalysis of random LSB embedding
in continuous-tone images. In IEEE International Conference on Image Processing
(ICIP 2002), volume 3, pages 641–644, 2002.

 S. Dumitrescu, X. Wu, and Z. Wang. Detection of LSB steganography via sample
pair analysis. IEEE Transactions on Signal Processing, 51(7):1995–2007, 2003.

 J. Fridrich and M. Goljan. Digital image steganography using stochastic
modulation. In E. J. Delp III and P. W. Wong, editors, Security and Watermarking of
Multimedia Contents V, volume 5020 of Proc. SPIE, pages 191–202, Bellingham,
June 2003. SPIE.

 J. Fridrich, M. Goljan, and R. Du. Detecting LSB steganography in color and
gray-scale images. IEEE Multimedia, 8(4):22–28, 2001.

 N. Provos and P. Honeyman. Detecting steganographic content on the internet. In
NDSS'02: Network and Distributed System Security Symposium, pages 1–13, Reston,
Virginia, 2002. Internet Society.

 T. Sharp. An implementation of key-based digital signal steganography. In
IHW'01: the 4th International Workshop on Information Hiding, pages 13–26, London,
UK, 2001.