ABSTRACT
The main intention of introducing this system is to reduce the manual work at telephone
department offices. Every sort of task is performed by the system, such as registering different
types of applications, enquiries, and complaints, reducing much paperwork and the burden of
file storage.
The latest information is also readily available to the officials and executives whenever
they require it. The system further allows customers to enquire about their application progress
and bills, and to make directory enquiries, for example by telephone number or by name.
Where should the system be placed?
The organization benefits greatly by placing the system at its offices, bill payment
centres, and e-Seva centres. At the same time, the customers also benefit from using the
system: they can get the latest information they require in no time.
How is the system used?
Using the system is as simple as using a personal computer. Since end-user computing
is developing in our country, the system benefits both the organization and the customers. Every
step is clearly defined, and help is provided throughout the application. Even exceptions
are handled well to avoid confusion. The system can also be used in a multi-user environment;
as no updating is allowed, there is no possibility of deadlocks or aborted transactions.
How does the system benefit the customers?
The customers can get much out of the system. They can get the latest information they
require in no time. They no longer need to stand in queues for hours to make an enquiry,
submit an application, or do any other business with the corporation. They are welcome to
use the various services.
We are developing this project using the .NET Framework, with SQL Server as the
back end.
CONTENTS
1. Introduction
1.1 A brief overview of the Telecom Business Information System
1.1.1
1.2 Company Profile
1.3
1.4
2. Requirement Analysis
2.1 Introduction
2.2 Data Collection
2.2.1 Observation
2.3 Software Requirement Specification Document
2.3.1 Introduction
2.3.2 Problem Definition
2.3.3 Hardware requirements
2.3.4 Software requirements
2.3.5 Design Constraints
3. System Analysis
3.1 Module description
3.2 Feasibility analysis
3.3 Studying the existing system
4. Design Phase
4.1 Introduction
4.2 Flow charts
4.3 Data flow diagrams
5. Development Phase
5.1 Features of language
5.2 Servlet Technology
6. Testing Phase
7. Screens
8. Conclusion
9. Bibliography
1.1.
1.1.1
They can get the latest information they require in no time. They no longer need to stand in
queues for hours to make an enquiry, submit an application, or do any other business with the
corporation. They are welcome to use the various services.
Company Profile
Mysteries Resolved is a software development and consultancy company engaged in
leading-edge product making and consultancy across various platforms and areas. The
company was established in August 1999 by a group of enthusiastic and dynamic
engineering postgraduates, making it a four-year-old company.
Since its inception, the company has been involved in the latest cutting-edge
technologies, such as enterprise computing, distributed computing, and Internet technologies,
all realized in areas like Java, Visual C++, and Visual Basic. With the
exponential growth of the mobile industry, the company has also entered the area of
mobile programming solutions on the J2ME (Java 2 Micro Edition) platform, and it
claims to be the first of its kind to enter the mobile solutions arena. The company is
planning to enter BREW (Binary Runtime Environment for Wireless) related
technologies soon as part of its tie-up with an international mobile game provider.
The company has a total staff strength of eighteen, all technical, of which
three serve as project leaders, five as senior programmers, and the rest as junior
programmers. The areas of expertise of the company's technical staff divide them
into three core action groups.
The first group deals with the low-level embedded programming area, involving the skill
set of C, C++, UNIX internals, real-time operating systems, VxWorks, and device
drivers. This group handles the programming of real-time systems and device
programming, such as writing drivers for devices that are new to the market.
The second group works with Microsoft technologies, with a skill set of Visual C++, Visual
Basic, and Active Server Pages, along with distributed-computing skills such as COM and
DCOM. These people are commonly involved in designing dynamic web sites with
connectivity to corporate databases, and front-end solutions for various products such as
speech-recognition systems.
The third group concentrates on Sun technologies, with expertise in the
Java 2 Standard, Enterprise, and Micro editions. This group currently
focuses on mobile programming solutions. The company has the credit of
supplying mobile games to the world's best mobile companies.
The vision of the company is to establish itself as a strong leader in crucial areas of the
computing industry, with a strong commitment to adapting to the latest cutting-edge
technologies.
Those are,
1. REQUIREMENT ANALYSIS PHASE
2. DESIGN PHASE
3. DEVELOPMENT PHASE
4. CODING PHASE
5. TESTING PHASE
1.4.1. REQUIREMENT ANALYSIS PHASE:
This phase includes the identification of the problem. To identify the
problem, we have to gather information about it and understand the purpose of
evaluating it. We have to clearly understand the client's requirements and
the objectives of the project.
1.4.3 DEVELOPMENT PHASE:
The development phase includes choosing suitable software to solve the given
problem. The various facilities and the sophistication of the selected software allow a
better development of the solution.
1.4.4 CODING PHASE:
The coding phase translates the design of the system produced during the
design phase into code in a given programming language, which can be executed by a
computer and which performs the computation specified by the design.
1.4.5 TESTING PHASE:
Testing is done in various ways, such as testing the algorithm, the programming code,
and sample data; debugging also follows the testing above.
2.1 INTRODUCTION
2.2 DATA COLLECTION:
Observation of the Existing System:
In a typical telecom service provider scenario, customers submit their new connection
requests to the local DOT office. The DOT agent generally gives them forms to fill in,
which are subsequently scrutinized and verified against the DOT-provided features and services
as applicable, and further verifications about the customer are made. The local DOT office
also contacts the branch exchange to verify the services available and to identify whether the
exchange would need infrastructure upgrades. Traffic analysis, checks on the availability of
bandwidth, and other technical validations are made. The branch exchange then goes
through a sequence of verification and document-processing operations, which are replicated
at the city level and subsequently at the national exchange level. The entire process is very
time-consuming and involves tons of paperwork, mostly manual, which is both error-prone
and slow.
In the new system, customers would raise applications to the local DOT office,
which in turn gets in touch with the branch office and the city exchange; all the
customer details are finally updated and stored in a database at the national exchange
level, apart from being replicated in each of the lower-level databases.
The following diagram exhibits the typical telecom service provider scenario.
2.3 SOFTWARE REQUIREMENT SPECIFICATION DOCUMENT
What is SRS?
The Software Requirement Specification (SRS) is the starting point of the software
development activity. As systems grew more complex, it became evident that the goals of the
entire system could not be easily comprehended; hence the need for the requirement phase
arose. A software project is initiated by the client's needs. The SRS is the means of
translating the ideas in the minds of the clients (the input) into a formal document (the output
of the requirement phase).
The SRS phase consists of two basic activities:
1) Problem/Requirement Analysis:
This process is the harder and more nebulous of the two; it deals with understanding
the problem, the goals, and the constraints.
2) Requirement Specification:
Here the focus is on specifying what has been found during analysis. Issues such
as representation, specification languages and tools, and checking of the
specifications are addressed during this activity.
The requirement phase terminates with the production of the validated
SRS document. Producing the SRS document is the basic goal of this phase.
ROLE OF THE SRS:
The purpose of the Software Requirement Specification is to reduce the
communication gap between the clients and the developers. The SRS is the
medium through which the client's and users' needs are accurately specified.
It forms the basis of software development. A good SRS should satisfy all the
parties involved in the system.
2.3.1 INTRODUCTION:
2.3.1.1. PURPOSE:
The purpose of this document is to describe all external requirements
for the Telecom Business Information System. It also describes the interfaces for
the system.
2.3.1.2. SCOPE:
This document is the only one that describes the requirements of the
system. It is meant for use by the developers and will also be the basis for
validating the final delivered system. Any changes made to the requirements in
the future will have to go through a formal change-approval process. The
developer is responsible for asking for clarifications where necessary and will
not make any alterations without the permission of the client.
2.3.2 PROJECT DEFINITION
The Telecom Business Information System project has been divided into four modules.
They are:
1. Applications
2. Entries
3. Enquiries
4. Complaints

Module One
Applications
Module Three
Enquiries
This module has been divided into five sub-modules. They are:
Bill Enquiry
Paid Bill Enquiry
Changed Number Enquiry
Enquiry by Telephone Number
Application Enquiry

Module Four
Complaints
This module again is divided into three sub-modules. They are:
Line Disturbance
Phone Dead
Incorrect Billing
Software Requirements:
Platform:
Software:

Hardware Requirements:
Processor:
RAM: 256 MB
Hard Disk: 40 GB
Keyboard: 101 keys
Mouse:
Design Constraints:
The Telecom Business Information System requires huge resources, as hundreds of
thousands of users will require its services instantly; quick response times are needed.
The database should also be very large and robust, to maintain huge volumes of customer data.
SYSTEM ANALYSIS
Analysis is the detailed study of the various operations performed by a system and
their relationships within and outside of the system. A key question is: what must be
done to solve the problem? One aspect of analysis is defining the boundaries of the
system and determining whether or not the candidate system should consider other related
systems. During analysis, data are collected on the available files, decision points, and
transactions handled by the present system.
3.1. MODULE DESCRIPTION
This section attempts to describe each module of the project in brief, and the detailed
description of each of these modules is spread throughout this document.
The Telecom Business Information System project has been divided into four
modules. They are:
1. Applications
2. Entries
3. Enquiries
4. Complaints

1. Applications
This module has been divided into five sub-modules. They are:
1.1.
If a customer would like to take a new telephone connection, he or she has to fill in an
application form called "Application for New Phone Connection", which includes Name,
Address, Purpose (Residence/Business/Office), Facility (Local/STD/ISD), DD No., Bank
Name, and Amount, with the DD drawn on any nationalized bank.
1.2
1.3
1.4
If a customer wants any modification other than the phone no., ref. no., and address, he can
get the details changed, for example the purpose of the phone from Residence to Business,
or the facility from Local to STD.
1.5
The customer record is stored in a History file, which contains the list
of all cancelled connections.
Entries
This module is divided into the following sub-modules:
2.1
Bill Entry
2.2
2.3
2.4
Bill Entry
This entry is made by an entry operator after taking the phone call records from a
device. The Bill ID is generated automatically by the system. The operator
has to check how many phone calls the customer made and what the
amount is. The amount is calculated automatically by the system from the
calls made, classified as Local, STD, etc.
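The tariff rules are not spelled out in this document, but the automatic amount calculation described above could be sketched roughly as follows. The class name and the per-call rates (`BillCalculator`, 1.20, 2.40, 9.60) are hypothetical, invented for illustration only, and are not part of the actual system:

```csharp
using System;
using System.Collections.Generic;

class BillCalculator
{
    // Assumed tariffs per call for each call type; the real DOT tariff
    // table is not given in this document.
    static readonly Dictionary<string, decimal> RatePerCall = new Dictionary<string, decimal>
    {
        { "Local", 1.20m },
        { "STD",   2.40m },
        { "ISD",   9.60m }
    };

    // Amount = sum over call types of (number of calls * rate for that type).
    public static decimal Amount(Dictionary<string, int> callsByType)
    {
        decimal total = 0m;
        foreach (var entry in callsByType)
            total += entry.Value * RatePerCall[entry.Key];
        return total;
    }

    static void Main()
    {
        var calls = new Dictionary<string, int> { { "Local", 100 }, { "STD", 10 } };
        Console.WriteLine(BillCalculator.Amount(calls)); // 100*1.20 + 10*2.40 = 144.00
    }
}
```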
2.2
2.3
2.4
Enquiry
This module has been divided into five sub modules. They are
3.1
Bill Enquiry
3.2
3.3
3.4
3.5
Application Enquiry
3.1
Bill Enquiry
This is an enquiry made by a customer to know the amount of his bill,
by quoting his bill ID or phone number.
3.2
Paid Bill Enquiry
This is an enquiry made by a customer to know whether a bill has been paid or not.
3.3
3.4
Complaints
If a customer has any complaint, he has to come to the office and register it
by specifying his phone number and name. Complaints are responded to
immediately, according to their type:
1.
Line Disturbance
2.
Phone Dead
3.
Incorrect Billing.
to be operational. Some products may work very well at design and implementation
but may fail in the real-time environment. It includes the study of the additional
human resources required and their technical expertise.
3. Technical Feasibility: It refers to whether the software that
is available in the market fully supports the present application. It studies the pros
and cons of using particular software for the development and its feasibility. It
also studies the additional training that needs to be given to people to make the
application work.
Implementation Plan:
The main plan is for the proposed system to mimic the existing system as it is.
Study of the Existing System
The present system has obvious problems inhibiting growth and profitability.
Demand for telephone connections has been identified as the major growth
area. These telephone connections demand an improved, computerized system to
support them. At the same time, operations could be automated into an overall
system.
In the present system, work is done manually, so each and every
transaction takes much time to complete. Whenever a customer requires any
information, the searching process also takes a long time, and it is difficult to
find particular information in a file.
Disadvantages
1.
2.
3.
4.
5.
3.4 THE PROPOSED SYSTEM: The present system has obvious problems
inhibiting growth and profitability. Demand for telephone connections has been
identified as the major growth area. These telephone connections demand an
improved, computerized system to support them. At the same time, operations could
be automated into an overall system.
In the proposed system, the whole process is computerized, so each and
every transaction takes less time to complete. Whenever a customer requires any
information, the search also takes less time, and it is easy to find
particular customer information in a file.
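As a rough illustration of why the computerized search is faster, here is a minimal sketch of an in-memory lookup keyed by phone number. The names (`CustomerDirectory`, `Find`) are invented for this example and are not part of the actual system, which would query its SQL Server database instead:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical directory keyed by phone number; an indexed lookup is O(1),
// versus scanning paper files sequentially in the manual system.
class CustomerDirectory
{
    private readonly Dictionary<string, string> byPhone = new Dictionary<string, string>();

    public void Add(string phone, string name) { byPhone[phone] = name; }

    public string Find(string phone)
    {
        string name;
        return byPhone.TryGetValue(phone, out name) ? name : "not found";
    }
}

class SearchDemo
{
    static void Main()
    {
        var dir = new CustomerDirectory();
        dir.Add("040-1234567", "K. Rao");                 // sample data
        Console.WriteLine(dir.Find("040-1234567"));       // prints "K. Rao"
        Console.WriteLine(dir.Find("040-0000000"));       // prints "not found"
    }
}
```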
Advantages
1. A fast and more efficient service to all customers. Even with thousands
of customer records, searching is an easy task.
2.
3.
4.
5.
Disadvantage
1.
2.
3.
4.
Bills will be issued to customers on time, and the status of the bills will be
monitored schedule-wise.
5.
4.1 INTRODUCTION
Design is the first step in the development phase for applying any techniques and
principles to the purpose of defining a device, a process, or a system in sufficient
detail to permit its physical realization.
Once the software requirements have been analyzed and specified, software
design involves three technical activities (design, code generation, and testing)
that are required to build and verify the software.
The design activities are of main importance in this phase, because the
decisions made in this activity ultimately affect the success of the software
implementation and its ease of maintenance. These decisions have the final bearing
upon the reliability and maintainability of the system. Design is the only way to
accurately translate the customer's requirements into finished software or a
system.
Design is the place where quality is fostered in development. Software design is
a process through which requirements are translated into a representation of the
software. Software design is conducted in two steps; preliminary design is
concerned with the transformation of requirements into data.
4.2 FLOW CHARTS
Before solving a problem with the help of a computer, it is essential to plan the
solution step by step. Such planning is represented symbolically
with the help of a flow chart. It is an important tool of system analysts and
programmers for tracing the information flow and the logical sequence in data
processing. Logic is the essence of a flow chart.
The system analyst uses these to describe the data flow and operations of the data
processing cycle. A system flow chart defines the broad processing in the
organization, showing the origin of the data, the filing structure, the processing to be
performed, the output to be generated, and the necessity of any offline operations.
2. Program Flow Chart (or Computer Procedure Flow Chart)
Advantages:
Apart from the DFDs, flow charts have been helping the programmer to
develop the programming logic and to serve as the documentation for a
completed program; they have the following advantages:
Disadvantages:
1. Communication lines are not always easy to show.
2. The charts are sometimes complicated.
3. Reproduction is difficult.
4. They are hard to modify.
UML DIAGRAMS
SEQUENCE DIAGRAM
[Sequence diagram omitted. Its participants are Home, Applications, Applications for
new connection, Applications for phone transfer, Area Manager, View new connections
details, and Verify connection, exchanging the messages Enter(), New(), Transfer(),
Assign(), Get(), and Verify().]
COLLABORATION DIAGRAM
[Collaboration diagram omitted. Its numbered messages are: 1: Enter() (Home to
Applications), 2: New() (to Applications for new connection), 3: Transfer() (to
Applications for phone transfer), 4: Assign(), 5: Get() (Area Manager to View new
connections details), and 6: Verify() (to Verify connection).]
SCREEN SHOTS
DEVELOPMENT PHASE
INTRODUCTION
The goal of any system development is to develop and implement the system cost-
effectively, in a user-friendly way, and most suited to the users; analysis is the heart of the
process. Analysis is the study of the various operations performed by the system and their
relationships within and outside of the system. During analysis, data are collected on the
files, decision points, and transactions handled by the present system. Different kinds of
tools are used in analysis, of which the interview is a common one.
INITIAL INVESTIGATION
The first step in the system development life cycle is the identification of the need for
change to improve or enhance an existing system. An initial investigation of the existing
system was carried out. The present system of the telephone department is completely
manual, and many problems were identified during the initial study of the existing system.
VISUAL STUDIO
Microsoft Visual Studio is an Integrated Development Environment (IDE) from
Microsoft. It can be used to develop console and graphical user interface
applications, along with Windows Forms applications, web sites, web applications,
and web services, in both native and managed code, for all platforms
supported by Microsoft Windows, Windows Mobile, Windows CE, the .NET
Framework, the .NET Compact Framework, and Microsoft Silverlight.
Visual Studio includes a code editor supporting IntelliSense as well as code
refactoring. The integrated debugger works both as a source-level debugger and a
machine-level debugger. Other built-in tools include a forms designer for building
GUI applications, a web designer, a class designer, and a database schema designer. It
allows plug-ins to be added that enhance the functionality at almost every level, from
adding support for source control systems (like Subversion and Visual SourceSafe)
to adding new toolsets like editors and visual designers for domain-specific
languages, or toolsets for other aspects of the software development lifecycle
(like the Team Foundation Server client, Team Explorer).
Visual Studio supports languages by means of language services, which allow any
programming language to be supported (to varying degrees) by the code editor and
debugger, provided a language-specific service has been authored. Built-in
languages include C/C++ (via Visual C++), VB.NET (via Visual Basic .NET), and C#
(via Visual C#). Support for other languages, such as F#, M, Python, and Ruby,
has been made available via language services that must be
installed separately. It also supports XML/XSLT, HTML/XHTML, JavaScript, and
CSS. Language-specific versions of Visual Studio also exist which provide more
limited language services to the user. These individual packages are called Microsoft
Visual Basic, Visual J#, Visual C#, and Visual C++.
Microsoft provides "Express" editions of its Visual Studio 2008 components Visual
Basic, Visual C#, Visual C++, and Visual Web Developer at no cost. Visual Studio
2008 and 2005 Professional Editions, along with language-specific versions (Visual
Basic, C++, C#, J#) of Visual Studio 2005 are available for free to students as
downloads via Microsoft's DreamSpark program. Visual Studio 2010 is currently in
beta testing and can be downloaded by the general public at no cost.
ARCHITECTURE
Visual Studio does not support any programming language, solution, or tool intrinsically;
instead, it allows various kinds of functionality to be plugged in. Specific functionality is coded as
a VSPackage. When installed, the functionality is available as a Service. The IDE
provides three services: SVsSolution, which provides the ability to enumerate projects
and solutions; SVsUIShell, which provides windowing and UI functionality (including
tabs, toolbars and tool windows); and SVsShell, which deals with registration of
VSPackages. In addition, the IDE is also responsible for coordinating and enabling
communication between services. All editors, designers, project types and other tools are
implemented as VSPackages. Visual Studio uses COM to access the VSPackages. The
Visual Studio SDK also includes the Managed Package Framework (MPF), which is a set
of managed wrappers around the COM-interfaces that allow the Packages to be written in
.NET languages. However, MPF does not provide all the functionality exposed by the
Visual Studio COM interfaces. The services can then be consumed for creation of other
packages, which add functionality to the Visual Studio IDE.
Support for programming languages is added by using a specific VSPackage called a
Language Service. A language service defines various interfaces which the VSPackage
implementation can implement to add support for various functionality. Functionality that
can be added this way includes syntax coloring, statement completion, brace matching,
parameter information tooltips, member lists and error markers for background
compilation. If the interface is implemented, the functionality will be available for the
language. Language services are to be implemented on a per-language basis. The
implementations can reuse code from the parser or the compiler for the language.
Language services can be implemented either in native code or managed code. For native
code, either the native COM interfaces or the Babel Framework (part of Visual Studio
SDK) can be used. For managed code, the MPF includes wrappers for writing managed
language services.
Visual Studio does not include any source control support built in but it defines the
MSSCCI (Microsoft Source Code Control Interface) by implementing which source
control systems can integrate with the IDE. MSSCCI defines a set of functions that are
used to implement various source control functionality. MSSCCI was first used to
integrate Visual SourceSafe with Visual Studio 6.0 but was later opened up via the Visual
Studio SDK. Visual Studio .NET 2002 used MSSCCI 1.1, and Visual Studio .NET 2003
used MSSCCI 1.2. Both Visual Studio 2005 and 2008 use MSSCCI Version 1.3, which
adds support for rename and delete propagation as well as asynchronous opening.
Visual Studio supports running multiple instances of the environment (each with its own
set of VSPackages). The instances use different registry hives (see MSDN's definition of
the term "registry hive" in the sense used here) to store their configuration state and are
differentiated by their AppId (Application ID). The instances are launched by an AppId-specific .exe that selects the AppId, sets the root hive, and launches the IDE. VSPackages
registered for one AppId are integrated with other VSPackages for that AppId. The
various product editions of Visual Studio are created using the different AppIds. The
Visual Studio Express edition products are installed with their own AppIDs, but the
Standard, Professional and Team Suite products share the same AppId. Consequently, the
Express editions can be installed side-by-side with other editions, unlike the other
editions which update the same installation. The professional edition includes a superset
of the VSPackages in the standard edition and the team suite includes a superset of the
VSPackages in both other editions. The AppId system is leveraged by the Visual Studio
Shell in Visual Studio 2008.
Extensibility
Visual Studio allows developers to write extensions for Visual Studio to extend its
capabilities. These extensions "plug into" Visual Studio and extend its functionality.
Extensions come in the form of macros, add-ins, and packages. Macros represent
repeatable tasks and actions that developers can record programmatically for saving,
replaying, and distributing. Macros, however, cannot be used to implement new
commands or create tool windows. They are written using Visual Basic and are not
compiled. Add-Ins provide access to the Visual Studio object model and can interact with
the IDE tools. Add-Ins can be used to implement new functionality and can add new tool
windows. Add-Ins are plugged into the IDE via COM and can be created in any COM-compliant
language. Packages are created using the Visual Studio SDK and provide the
highest level of extensibility. They are used to create designers and other tools, as well as to
integrate other programming languages. The Visual Studio SDK provides both an
unmanaged and a managed API to accomplish these tasks. However, the managed
API isn't as comprehensive as the unmanaged one. Extensions are supported in the
Standard (and higher) versions of Visual Studio 2005. Express Editions do not support
hosting extensions.
Visual Studio 2008 introduced the Visual Studio Shell that allows for development of a
customized version of the IDE. The Visual Studio Shell defines a set of VSPackages that
provide the functionality required in any IDE. On top of that, other packages can be
added to customize the installation. The Isolated mode of the shell creates a new AppId
where the packages are installed. These are to be started with a different executable. It is
aimed for development of custom development environments, either for a specific
language or a specific scenario. The Integrated mode installs the packages into the AppId
of the Professional/Standard/Team System editions, so that the tools integrate into these
editions. The Visual Studio Shell is available as a free download.
After the release of Visual Studio 2008, Microsoft created the Visual Studio Gallery. It
serves as the central location for posting information about extensions to Visual Studio.
Community developers as well as commercial developers can upload information about
their extensions to Visual Studio .NET 2002 through Visual Studio 2008. Users of the site
can rate and review the extensions to help assess the quality of extensions being posted.
RSS feeds to notify users on updates to the site and tagging features are also planned.
C# .NET FEATURES
By design, C# is the programming language that most directly reflects the underlying
Common Language Infrastructure (CLI). Most of its intrinsic types correspond to value types implemented by the CLI framework. However, the language specification does not
state the code generation requirements of the compiler: that is, it does not state that a C#
compiler must target a Common Language Runtime, or generate Common Intermediate
Language (CIL), or generate any other specific format. Theoretically, a C# compiler
could generate machine code like traditional compilers of C++ or FORTRAN. In practice,
all existing compiler implementations target CIL.
Some notable C# distinguishing features are:
* There are no global variables or functions. All methods and members must be
declared within classes. Static members of public classes can substitute for global
variables and functions.
* Local variables cannot shadow variables of the enclosing block, unlike C and C++.
Variable shadowing is often considered confusing by C++ texts.
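A minimal sketch of the rule: the commented-out declaration below would be rejected by the C# compiler, whereas the equivalent C++ would compile silently.

```csharp
using System;

class ShadowingDemo
{
    static void Main()
    {
        int count = 1;
        {
            // int count = 2;  // illegal in C# (error CS0136): the inner 'count'
            //                 // would shadow the local in the enclosing block.
            int inner = 2;     // a differently named local must be used instead
            Console.WriteLine(count + inner); // prints 3
        }
    }
}
```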
* C# supports a strict Boolean datatype, bool. Statements that take conditions, such as
while and if, require an expression of a boolean type. While C++ also has a boolean type,
it can be freely converted to and from integers, and expressions such as if(a) require only
that a is convertible to bool, allowing a to be an int, or a pointer. C# disallows this
"integer meaning true or false" approach on the grounds that forcing programmers to use
expressions that return exactly bool can prevent certain types of programming mistakes
such as if (a = b) (use of = instead of ==).
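The strict-bool rule can be sketched as follows; both commented-out conditions fail to compile in C#, while the explicit comparison is accepted.

```csharp
using System;

class BoolDemo
{
    static void Main()
    {
        int a = 0, b = 1;
        // if (a) { }      // illegal: an int cannot be implicitly converted to bool
        // if (a = b) { }  // illegal: the assignment yields int, not bool
        if (a == b)        // the condition must be exactly of type bool
            Console.WriteLine("equal");
        else
            Console.WriteLine("not equal"); // prints "not equal"
    }
}
```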
* In C#, memory address pointers can only be used within blocks specifically marked
as unsafe, and programs with unsafe code need appropriate permissions to run. Most
object access is done through safe object references, which always either point to a "live"
object or have the well-defined null value; it is impossible to obtain a reference to a
"dead" object (one which has been garbage collected), or to random block of memory. An
unsafe pointer can point to an instance of a value-type, array, string, or a block of
memory allocated on a stack. Code that is not marked as unsafe can still store and
manipulate pointers through the System.IntPtr type, but it cannot dereference them.
* Managed memory cannot be explicitly freed; instead, it is automatically garbage
collected. Garbage collection addresses memory leaks by freeing the programmer of
responsibility for releasing memory which is no longer needed. C# also provides direct
support for deterministic finalization with the using statement (supporting the Resource
Acquisition Is Initialization idiom).
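The using statement mentioned above can be sketched with a small disposable type (`TempResource` is invented for this example); Dispose runs deterministically when the block exits, without waiting for the garbage collector.

```csharp
using System;

// A disposable resource: Dispose is called when the using block exits,
// even if an exception is thrown inside it.
class TempResource : IDisposable
{
    public TempResource() { Console.WriteLine("acquired"); }
    public void Dispose() { Console.WriteLine("released"); }
}

class UsingDemo
{
    static void Main()
    {
        using (var r = new TempResource())
        {
            Console.WriteLine("working");
        } // Dispose() runs here, deterministically
    }
}
```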
* Multiple inheritance is not supported, although a class can implement any number of
interfaces. This was a design decision by the language's lead architect to avoid
complication, avoid dependency hell and simplify architectural requirements throughout
CLI.
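For instance, a class can combine several interfaces even though it may extend only one base class (the interface and class names here are illustrative):

```csharp
using System;

interface IPrintable { void Print(); }
interface IStorable  { void Save(); }

// A class may implement any number of interfaces,
// but can inherit from only one base class.
class Document : IPrintable, IStorable
{
    public void Print() { Console.WriteLine("printing"); }
    public void Save()  { Console.WriteLine("saving"); }
}

class InterfaceDemo
{
    static void Main()
    {
        var d = new Document();
        d.Print();
        d.Save();
    }
}
```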
* C# is more typesafe than C++. The only implicit conversions by default are those
which are considered safe, such as widening of integers and conversion from a derived
type to a base type. This is enforced at compile-time, during JIT, and, in some cases, at
runtime. There are no implicit conversions between booleans and integers, nor between
enumeration members and integers (except for literal 0, which can be implicitly
converted to any enumerated type). Any user-defined conversion must be explicitly
marked as explicit or implicit, unlike C++ copy constructors and conversion operators,
which are both implicit by default.
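A sketch of user-defined conversions with both markings (the `Celsius` type is invented for illustration): the safe widening to double is marked implicit, while the narrowing direction must be requested with a cast.

```csharp
using System;

struct Celsius
{
    public readonly double Degrees;
    public Celsius(double degrees) { Degrees = degrees; }

    // Widening to double loses nothing, so it may be marked implicit...
    public static implicit operator double(Celsius c) => c.Degrees;

    // ...while the reverse conversion must be explicit.
    public static explicit operator Celsius(double d) => new Celsius(d);
}

class ConversionDemo
{
    static void Main()
    {
        Celsius c = (Celsius)21.5;  // explicit: a cast is required
        double d = c;               // implicit: no cast needed
        Console.WriteLine(d);       // prints 21.5
    }
}
```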
* Enumeration members are placed in their own scope.
* C# provides properties as syntactic sugar for a common pattern in which a pair of
methods, accessor (getter) and mutator (setter) encapsulate operations on a single
attribute of a class.
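A sketch of the property pattern (the `Account` type is invented for illustration): callers use field-like syntax while the getter and setter run behind it, which lets the setter validate its input.

```csharp
using System;

class Account
{
    private decimal balance;  // the backing field

    // The property bundles the accessor and mutator behind field syntax.
    public decimal Balance
    {
        get { return balance; }
        set
        {
            if (value < 0) throw new ArgumentOutOfRangeException(nameof(value));
            balance = value;
        }
    }
}

class PropertyDemo
{
    static void Main()
    {
        var a = new Account();
        a.Balance = 100m;              // invokes the setter
        Console.WriteLine(a.Balance);  // invokes the getter; prints 100
    }
}
```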
* Full type reflection and discovery is available.
* C# currently (as of 3 June 2008) has 77 reserved words.
Categories of datatypes
CTS separates datatypes into two categories[7]:
1. Value types
2. Reference types
Value types are plain aggregations of data. Instances of value types have neither
referential identity nor referential comparison semantics: equality and inequality
comparisons for value types compare the actual data values within the instances, unless
the corresponding operators are overloaded. Value types are derived from
System.ValueType, always have a default value, and can always be created and copied.
Some other limitations on value types are that they cannot derive from each other (but
can implement interfaces) and cannot declare a user-defined default (parameterless) constructor.
Examples of value types are some primitive types, such as int (a signed 32-bit integer),
float (a 32-bit IEEE floating-point number), char (a 16-bit Unicode codepoint), and
System.DateTime (which identifies a specific point in time to 100-nanosecond precision).
In contrast, reference types have the notion of referential identity - each instance of
reference type is inherently distinct from every other instance, even if the data within
both instances is the same. This is reflected in default equality and inequality
comparisons for reference types, which test for referential rather than structural equality,
unless the corresponding operators are overloaded (such as the case for System.String). In
general, it is not always possible to create an instance of a reference type, to copy an
existing instance, or to perform a value comparison on two existing instances, though
specific reference types can provide such services by exposing a public constructor or
implementing a corresponding interface (such as ICloneable or IComparable). Examples
of reference types are object (the ultimate base class for all other C# classes),
System.String (a string of Unicode characters), and System.Array (a base class for all C#
arrays).
Both type categories are extensible with user-defined types.
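The difference in copy and comparison semantics can be sketched with a minimal example; the Point types below are hypothetical:

```csharp
// A value type: assignment copies the data itself.
struct PointValue { public int X; public int Y; }

// A reference type: assignment copies only the reference.
class PointRef { public int X; public int Y; }

static class TypeCategoryDemo
{
    public static bool ValueCopyIsIndependent()
    {
        var a = new PointValue { X = 1, Y = 2 };
        var b = a;   // b is an independent copy of a's data
        b.X = 99;    // modifying b leaves a unchanged
        return a.X == 1;
    }

    public static bool ReferenceCopyIsShared()
    {
        var a = new PointRef { X = 1, Y = 2 };
        var b = a;   // a and b now refer to the same instance
        b.X = 99;    // the change is visible through a as well
        return a.X == 99;
    }
}
```

Both methods return true, reflecting the value/reference distinction described above.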
ASP.NET
ASP.NET is a web application framework developed and marketed by Microsoft to allow
programmers to build dynamic web sites, web applications and web services. It was first
released in January 2002 with version 1.0 of the .NET Framework, and is the successor to
Microsoft's Active Server Pages (ASP) technology. ASP.NET is built on the Common
Language Runtime (CLR), allowing programmers to write ASP.NET code using any
supported .NET language.
History
After the release of Internet Information Services 4.0 in 1997, Microsoft began
researching possibilities for a new web application model that would solve common
complaints about ASP, especially with regard to separation of presentation and content
and being able to write "clean" code. Mark Anders, a manager on the IIS team, and Scott
Guthrie, who had joined Microsoft in 1997 after graduating from Duke University, were
tasked with determining what that model would look like. The initial design was
developed over the course of two months by Anders and Guthrie, and Guthrie coded the
initial prototypes during the Christmas holidays in 1997.
The initial prototype was called "XSP"; Guthrie explained in a 2007 interview that,
"People would always ask what the X stood for. At the time it really didn't stand for
anything. XML started with that; XSLT started with that. Everything cool seemed to start
with an X, so that's what we originally named it." The initial prototype of XSP was done
using Java,[3] but it was soon decided to build the new platform on top of the Common
Language Runtime (CLR), as it offered an object-oriented programming environment,
garbage collection and other features that were seen as desirable but that Microsoft's
Component Object Model platform didn't support. Guthrie described this decision as a
"huge risk", as the success of their new web development platform would be tied to the
success of the CLR, which, like XSP, was still in the early stages of development, so
much so that the XSP team was the first team at Microsoft to target the CLR.
With the move to the Common Language Runtime, XSP was re-implemented in C#
(known internally as "Project Cool" but kept secret from the public), and renamed to
ASP+, as by this point the new platform was seen as being the successor to Active Server
Pages, and the intention was to provide an easy migration path for ASP developers.
Mark Anders first demonstrated ASP+ at the ASP Connections conference in Phoenix,
Arizona on May 2, 2000. Demonstrations to the wide public and initial beta release of
ASP+ (and the rest of the .NET Framework) came at the 2000 Professional Developers
Conference on July 11, 2000 in Orlando, Florida. During Bill Gates's keynote
presentation, Fujitsu demonstrated ASP+ being used in conjunction with COBOL, and
support for a variety of other languages was announced, including Microsoft's new Visual
Basic .NET and C# languages, as well as Python and Perl support by way of
interoperability tools created by ActiveState.
Once the ".NET" branding was decided on in the second half of 2000, it was decided to
rename ASP+ to ASP.NET. Mark Anders explained on an appearance on The MSDN
Show that year that, "The .NET initiative is really about a number of factors, its about
delivering software as a service, it's about XML and web services and really enhancing
the Internet in terms of what it can do .... we really wanted to bring its name more in line
with the rest of the platform pieces that make up the .NET framework."
After four years of development, and a series of beta releases in 2000 and 2001,
ASP.NET 1.0 was released on January 5, 2002 as part of version 1.0 of the .NET
Framework. Even prior to the release, dozens of books had been written about ASP.NET,
and Microsoft promoted it heavily as part of their platform for web services. Guthrie
became the product unit manager for ASP.NET, and development continued apace, with
version 1.1 being released on April 24, 2003 as a part of Windows Server 2003. This
release focused on improving ASP.NET's support for mobile devices.
ASP.NET pages, known officially as "web forms", are the main building block for application
development. Web forms are contained in files with an ".aspx" extension; in
programming jargon, these files typically contain static (X)HTML markup, as well as
markup defining server-side Web Controls and User Controls where the developers place
all the required static and dynamic content for the web page. Additionally, dynamic code
which runs on the server can be placed in a page within a block <% -- dynamic code -- %>, which is similar to other web development technologies such as PHP, JSP, and ASP,
but this practice is generally discouraged except for the purposes of data binding since it
requires more calls when rendering the page.
Note that this sample uses code "inline", as opposed to code behind.
<%@ Page Language="C#" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<script runat="server">
    protected void Page_Load(object sender, EventArgs e)
    {
        Label1.Text = DateTime.Now.ToLongDateString();
    }
</script>
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title>Sample page</title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        The current time is: <asp:Label runat="server" id="Label1" />
    </div>
    </form>
</body>
</html>
Code-behind model
Microsoft recommends dealing with dynamic program code by using the code-behind model, which places this code in a separate file or in a specially designated script tag.
tag. Code-behind files typically have names like MyPage.aspx.cs or MyPage.aspx.vb
(same filename as the ASPX file, with the final extension denoting the page language).
This practice is automatic in Microsoft Visual Studio and other IDEs. When using this
style of programming, the developer writes code to respond to different events, like the
page being loaded, or a control being clicked, rather than a procedural walk through the
document.
ASP.NET's code-behind model marks a departure from Classic ASP in that it encourages
developers to build applications with separation of presentation and content in mind. In
theory, this would allow a web designer, for example, to focus on the design markup with
less potential for disturbing the programming code that drives it. This is similar to the
separation of the controller from the view in model-view-controller frameworks.
Example
<%@ Page Language="C#" CodeFile="SampleCodeBehind.aspx.cs" Inherits="Website.SampleCodeBehind" AutoEventWireup="true" %>
The above tag is placed at the beginning of the ASPX file. The CodeFile property of the
@ Page directive specifies the file (.cs or .vb) acting as the code-behind while the Inherits
property specifies the Class the Page derives from. In this example, the @ Page directive
is included in SampleCodeBehind.aspx, then SampleCodeBehind.aspx.cs acts as the
code-behind for this page:
using System;

namespace Website
{
    public partial class SampleCodeBehind : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // ...
        }
    }
}
In this case, the Page_Load() method is called every time the ASPX page is requested.
The programmer can implement event handlers at several stages of the page execution
process to perform processing.
User controls
ASP.NET supports creating reusable components through the creation of User controls. A
user control follows the same structure as a Web form, except that such controls are
derived from the System.Web.UI.UserControl class, and are stored in ASCX files. Like
ASPX files, an ASCX file contains static HTML or XHTML markup, as well as markup
defining web control and other user controls. The code-behind model can be used.
Programmers can add their own properties, methods, and event handlers. An event
bubbling mechanism provides the ability to pass an event fired by a user control up to its
containing page.
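A minimal sketch of a user control may help; the file name (Footer.ascx), control name, and CompanyName property below are hypothetical, chosen only to illustrate the structure:

```aspx
<%@ Control Language="C#" ClassName="Footer" %>
<script runat="server">
    // A custom property added by the control's author.
    public string CompanyName { get; set; }
</script>
<div>
    Copyright <%= CompanyName %>
</div>
```

A containing page would register the control with an `<%@ Register %>` directive and then place it in markup like any other server control.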
Rendering technique
ASP.NET uses a visited composites rendering technique. During compilation, the
template (.aspx) file is compiled into initialization code which builds a control tree (the
composite) representing the original template. Literal text goes into instances of the
Literal control class, and server controls are represented by instances of a specific control
class. The initialization code is combined with user-written code (usually by the assembly
of multiple partial classes) and results in a class specific for the page. The page doubles
as the root of the control tree.
Actual requests for the page are processed through a number of steps. First, during the
initialization steps, an instance of the page class is created and the initialization code is
executed. This produces the initial control tree which is now typically manipulated by the
methods of the page in the following steps. As each node in the tree is a control
represented as an instance of a class, the code may change the tree structure as well as
manipulate the properties/methods of the individual nodes. Finally, during the rendering
step a visitor is used to visit every node in the tree, asking each node to render itself using
the methods of the visitor. The resulting HTML output is sent to the client.
After the request has been processed, the instance of the page class is discarded and with
it the entire control tree. This is usually a source of confusion among novice ASP.NET
programmers that rely on class instance members that are lost with every page
request/response cycle.
State management
ASP.NET applications are hosted in a web server and are accessed over the stateless
HTTP protocol. As such, if the application uses stateful interaction, it has to implement
state management on its own. ASP.NET provides various functionality for state
management in ASP.NET applications. Conceptually, Microsoft treats "state" as mostly
GUI state; problems may arise when an application needs to keep track of "data state",
such as a finite state machine that may be in a transient state between requests (lazy
evaluation) or that simply takes a long time to initialize.
Application state
Application state is a collection of user-defined variables that are shared by an ASP.NET
application. These are set and initialized when the Application_OnStart event fires on the
loading of the first instance of the application, and are available until the last instance exits.
Application state variables are accessed using the Application collection, which provides
a wrapper for the application state variables. Application state variables are identified by
name.
Session state
Session state is a collection of user-defined session variables, which are persisted during
a user session. These variables are unique to different instances of a user session, and are
accessed using the Session collection. Session variables can be set to be automatically destroyed after a defined period of inactivity, even if the
session does not end. At the client end, a user session is identified either by a cookie or by
encoding the session ID in the URL itself.
ASP.NET supports three modes of persistence for session variables:
In Process Mode
The session variables are maintained within the ASP.NET process. This is the fastest
way; however, in this mode the variables are destroyed when the ASP.NET process is
recycled or shut down. Since the application is recycled from time to time, this mode is
not recommended for critical applications; indeed, in practice it is not recommended for
any application.
ASPState Mode
In this mode, ASP.NET runs a separate Windows service that maintains the state
variables. Because the state management happens outside the ASP.NET process and .NET
Remoting must be utilized by the ASP.NET engine to access the data, this mode has a
negative impact on performance in comparison to the In Process mode, although this
mode allows an ASP.NET application to be load-balanced and scaled across multiple
servers. However, since the state management service runs independent of ASP.NET, the
session variables can persist across ASP.NET process shutdowns.
The same problem arises, though: since the session state server runs as a single instance,
it is still a single point of failure for session state. This service cannot be load
balanced, and it imposes restrictions on the types that can be stored in a session variable.
SqlServer Mode
In this mode, the state variables are stored in a database server, accessible using SQL.
Session variables can be persisted across ASP.NET process shutdowns in this mode as
well. The main advantage of this mode is that it allows the application to balance load
across a server cluster while sharing sessions between servers. This is the slowest method of
session state management in ASP.NET.
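The persistence mode is selected in the application's web.config file through the sessionState element. The sketch below shows all three alternatives side by side for comparison; only one sessionState element would appear in a real configuration, and the host names and connection string are placeholders:

```xml
<!-- In Process (default): fastest, but state dies with the worker process. -->
<sessionState mode="InProc" timeout="20" />

<!-- ASPState service: survives process recycles, allows load balancing. -->
<sessionState mode="StateServer"
              stateConnectionString="tcpip=stateserverhost:42424" />

<!-- SQL Server: slowest, but durable and shareable across a cluster. -->
<sessionState mode="SQLServer"
              sqlConnectionString="Data Source=dbhost;Integrated Security=SSPI" />
```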
View state
View state refers to the page-level state management mechanism, which is utilized by the
HTML pages emitted by ASP.NET applications to maintain the state of the web form
controls and widgets. The state of the controls is encoded and sent to the server at every
form submission in a hidden field known as __VIEWSTATE. The server sends back the
variable so that when the page is re-rendered, the controls render at their last state. At the
server side, the application might change the viewstate, if the processing results in
updating the state of any control. The states of individual controls are decoded at the
server, and are available for use in ASP.NET pages using the ViewState collection.[12]
[13]
The main use for this is to preserve form information across postbacks. So if a user fills
out a form but enters a wrong value, the form is automatically filled back in when the
page is sent back to the user for correction. Since view state is turned on by default and
serializes every control on the page regardless of whether it is actually used, it can add
considerable size to the page and increase transfer times. View state is on by default on
every page, but it is rarely needed. For forms it is often more efficient and
straightforward to manually send back only the form values you wish to preserve. View
state can also help with creating a dynamic page that updates itself automatically, but
AJAX is usually a more efficient and clear-cut way of achieving this.
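For that reason, view state is often switched off where it is not needed. It can be disabled per page or per control via the EnableViewState attribute; the control name below is illustrative:

```aspx
<%-- Per page: --%>
<%@ Page Language="C#" EnableViewState="false" %>

<%-- Per control: --%>
<asp:Label runat="server" id="StatusLabel" EnableViewState="false" />
```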
Other
Other means of state management that are supported by ASP.NET are cookies, caching,
and using the query string.
Template engine
When first released, ASP.NET lacked a template engine. Because the .NET framework is
object-oriented and allows for inheritance, many developers would define a new base
class that inherits from "System.Web.UI.Page", write methods here that render HTML,
and then make the pages in their application inherit from this new class. While this allows
for common elements to be reused across a site, it adds complexity and mixes source
code with markup. Furthermore, this method can only be visually tested by running the
application - not while designing it. Other developers have used include files and other
tricks to avoid having to implement the same navigation and other elements in every
page.
ASP.NET 2.0 introduced the concept of "master pages", which allow for template-based
page development. A web application can have one or more master pages, which
beginning with ASP.NET 3.5, can be nested.[14] Master templates have place-holder
controls, called ContentPlaceHolders to denote where the dynamic content goes, as well
as HTML and JavaScript shared across child pages.
Child pages use those ContentPlaceHolder controls, which must be mapped to the placeholder of the master page that the content page is populating. The rest of the page is
defined by the shared parts of the master page, much like a mail merge in a word
processor. All markup and server controls in the content page must be placed within the
ContentPlaceHolder control.
When a request is made for a content page, ASP.NET merges the output of the content
page with the output of the master page, and sends the output to the user.
The master page remains fully accessible to the content page. This means that the content
page may still manipulate headers, change title, configure caching etc. If the master page
exposes public properties or methods (e.g. for setting copyright notices) the content page
can use these as well.
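The master/content relationship can be sketched as follows; the file names and IDs are illustrative, not taken from any particular application:

```aspx
<%-- Site.master: the shared template with a placeholder. --%>
<%@ Master Language="C#" %>
<html>
<body>
    <asp:ContentPlaceHolder id="MainContent" runat="server" />
</body>
</html>

<%-- A content page mapped onto that placeholder. --%>
<%@ Page Language="C#" MasterPageFile="~/Site.master" %>
<asp:Content ContentPlaceHolderID="MainContent" runat="server">
    <p>Page-specific markup goes here.</p>
</asp:Content>
```

At request time, ASP.NET merges the page-specific content into the placeholder and sends the combined output to the user, as described above.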
Design Document
The entire system is projected with a physical diagram which specifies the
actual storage parameters that are physically necessary for any database to
be stored on to the disk. The overall idea of the system's existence is
derived from this diagram.
The relationships within the system are structured through a conceptual
ER-Diagram, which not only specifies the existential entities but also the
standard relations through which the system exists and the cardinalities
that are necessary for the system state to continue.
The context-level DFD is provided to give an idea of the functional inputs
and outputs that are achieved through the system; it depicts the input and
output standards at the highest level of the system's existence.
The data flow diagram provides additional information that is used during
the analysis of the information domain, and serves as a basis for the
modeling of functions.
ER-Diagrams
The set of primary components that are identified by the ERD are
Data object
Relationships
Attributes
The primary purpose of the ERD is to represent data objects and their
relationships.
A UML system is represented using five different views that describe the
system from distinctly different perspectives. Each view is defined by a set
of diagrams, as follows.
User Model View
This view represents the system from the user's perspective, describing
usage scenarios from the end user's point of view.
Structural Model View
In this model view the data and functionality are viewed from inside the
system; it models the static structures.
Behavioral Model View
This view represents the dynamic behavior of the system, depicting the
interactions of collections between the various structural elements
described in the user model and structural model views.
Implementation Model View
In this view the structural and behavioral aspects of the system are
represented as they are to be built.
Environmental Model View
In this view the structural and behavioral aspects of the environment in
which the system is to be implemented are represented.
TESTING
Software testing is a critical element of software quality assurance and represents
the ultimate review of specification, design and coding. Testing is the exposure of
the system to trial input to see whether it produces correct output.
Testing Phases:
Software testing phases include the following:
Test activities are determined and test data selected.
The test is conducted and test results are compared with the expected results.
There are various types of Testing:
Unit Testing:
Unit testing is essentially verification of the code produced during the
coding phase; the goal is to test the internal logic of the module/program.
This project was thoroughly tested by exposing it to various test cases
regarding correct event generation; having passed all of these tests, its
quality is reasonably assured.
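As a sketch of the idea, a unit test exercises one module's internal logic in isolation. The module and checks below are hypothetical, not taken from the project:

```csharp
using System;

// Hypothetical module under test: computes a telephone bill amount.
static class BillCalculator
{
    public static decimal Amount(int calls, decimal ratePerCall, decimal rental)
    {
        if (calls < 0) throw new ArgumentOutOfRangeException("calls");
        return rental + calls * ratePerCall;
    }
}

// A minimal hand-rolled unit test for the module's internal logic.
static class BillCalculatorTests
{
    public static void Run()
    {
        // Normal case: 100 calls at 0.50 plus 50 rental = 100.
        if (BillCalculator.Amount(100, 0.50m, 50m) != 100m)
            throw new Exception("normal case failed");

        // Boundary case: zero calls should yield just the rental.
        if (BillCalculator.Amount(0, 0.50m, 50m) != 50m)
            throw new Exception("boundary case failed");
    }
}
```

In practice such checks would usually be written with a unit testing framework, but the structure (one module, known inputs, expected outputs) is the same.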
Integration Testing:
All the tested modules are combined into sub systems, which are then tested. The
goal is to see if the modules are properly integrated, and the emphasis being on the
testing interfaces between the modules. On this project integration testing is done
mainly while implementing menus in a sample application such as Browser for
Mobiles.
System Testing:
It is mainly used to check whether the software meets its requirements. The reference
document for this process is the requirements document.
Acceptance Testing:
It is performed with realistic data of the client to demonstrate that the software is
working satisfactorily.
Testing Methods:
Testing is a process of executing a program with the intent of finding errors. If testing
is conducted successfully, it will uncover errors in the software. Testing can be
approached in two ways:
White Box Testing:
It is a test case design method that uses the control structures of the procedural
design to derive test cases. Using this testing, a software engineer can derive
test cases that:
Exercise all the logical decisions on their true and false sides.
Execute all loops at their boundaries and within their operational bounds.
Exercise the internal data structures to assure their validity.
Black Box Testing:
It is a test case design method used on the functional requirements of the software.
It will help a software engineer to derive sets of input conditions that will exercise
all the functional requirements of the program. Black Box testing attempts to find
errors in the following categories:
Incorrect or missing functions
Interface errors
Errors in data structures
Performance errors
Initialization and termination errors
By Black Box Testing we derive a set of test cases that satisfy the following criteria:
Test cases that reduce, by a count that is greater than one, the number of additional test
cases that must be designed to achieve reasonable testing.
Test cases that tell us something about the presence or absence of classes of errors,
rather than errors associated only with the specific test at hand.
Test Approach:
Testing can be done in two ways:
Bottom up approach
Top down approach
Bottom up Approach:
Testing can be performed starting from the smallest and lowest-level modules and
proceeding one at a time. For each module in bottom-up testing, a short program executes
the module and provides the needed data, so that the module is asked to perform the way
it will when embedded within the larger system. When bottom-level modules are tested,
attention turns to those on the next level that use the lower-level ones; they are tested
individually and then linked with the previously examined lower-level modules.
Top down approach:
This type of testing starts from the upper-level modules. Since the detailed activities
usually performed in the lower-level routines are not provided, stubs are written. A stub
is a module shell called by an upper-level module; when reached properly, it returns a
message to the calling module indicating that proper interaction occurred. No attempt is
made to verify the correctness of the lower-level module.
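The stub idea can be sketched as follows; the module names are hypothetical. The upper-level module is tested against a stub that merely reports it was reached:

```csharp
using System;

// The lower-level module is not yet written; this stub stands in for it.
// It only confirms that proper interaction occurred.
static class BillingStub
{
    public static string FetchBill(string phoneNumber)
    {
        return "stub reached for " + phoneNumber;  // no real lookup
    }
}

// Upper-level module under test; its own logic is what is being verified.
static class EnquiryModule
{
    public static string HandleEnquiry(string phoneNumber)
    {
        // Real validation logic lives here and is exercised by the test.
        if (string.IsNullOrEmpty(phoneNumber))
            throw new ArgumentException("phone number required");
        return BillingStub.FetchBill(phoneNumber);
    }
}
```

The test drives EnquiryModule and checks only that it interacts with the stub correctly; the real lower-level module would replace BillingStub later without changing the caller.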
BIBLIOGRAPHY
Books
Site Address
www.associatedcontent.com
www.members.tripod.com