Payroll Management System (FoxPro)
Slide -1 1 PAYROLL MANAGEMENT SYSTEM DISSERTATION SUBMITTED IN PARTIAL
FULFILLMENT OF THE REQUIREMENTS FOR THE AWARD OF THE DEGREE OF MASTER IN
COMPUTER MANAGEMENT (M.C.M.)
Slide -2 2 ABSTRACT
The project Payroll System aims at developing a system by which the payroll can be managed efficiently. The project was developed at IRCON INTERNATIONAL LIMITED, using FoxPro 2.6. Chapter 1 gives an introduction to the company, the project, the proposed software, the need for computerization, the importance of the work and the future scope of modification. Chapter 2 describes the feasibility study, the requirement analysis, and the software and hardware used. Chapter 3 describes the database design, screen design and report design. Chapter 4 gives the testing and implementation details. Chapter 5 gives the overall conclusion of the project and the work done. Chapter 6, at the end, is an annexure containing the Context Analysis Diagram, Data Flow Diagram, Flow Chart, Data Dictionary, Decision Tree and Input/Output sample screens.
Slide -3 3 LIST OF TABLES
TABLE NO.   DESCRIPTION                                   PAGE NO.
Table 3.1   Employees' personal & salary details          42
Table 3.2   Employees' Salary Slip record                 43
Slide -4 4 LIST OF FIGURES
FIGURE NO.  DESCRIPTION                                   PAGE NO.
Figure 2.1  Requirement Analysis                          29
Figure 3.1  Main Screen Window                            44
Figure 3.2  Data Entry Screen Window                      45
Figure 3.3  Data Print Screen Window                      46
Figure 3.4  Employees Detail Screen Window                47
Figure 3.5  Employees Record Screen Window                48
Figure 3.6  Pay Slip Format                               49
Figure 3.7  Final result of all Calculations in Pay Slips 50
Figure 4.1  Diagram showing Unit Testing Process          60
Figure 4.2  Diagram showing Unit Testing Module           60
Slide -5 5 CONTENTS
CHAPTER 1: INTRODUCTION                          Page No.
1.1 Introduction about Company                   11
1.2 Introduction about Project                   13
1.3 Present State of the Art                     16
1.4 Need for Computerization                     17
1.5 Proposed Software                            19
1.6 Importance of the Work                       21
1.7 Advantages and Limitations of the System     22
CHAPTER 2: SYSTEM ANALYSIS
2.1 Feasibility Study                            24
2.2 Analysis Methodology                         28
2.3 Choice of the Platforms                      33
2.3.1 S/W Used                                   33
2.3.2 H/W Used                                   34
CHAPTER 3: SYSTEM DESIGN
3.1 Design Methodology                           36
3.2 Database Design                              44
3.3 Form / Screen Design                         44
3.4 Report Design                                49
Slide -6 6
CHAPTER 4: TESTING AND IMPLEMENTATION
4.1 Testing Methodology                          52
4.2 Unit Testing                                 58
4.3 Module Testing                               62
4.4 System Testing                               63
4.5 Alpha / Beta Testing                         67
4.6 White Box / Black Box Testing                69
4.7 Implementation                               72
CHAPTER 5: CONCLUSION AND REFERENCES
5.1 Conclusion                                   76
5.2 Limitations of the System                    78
5.3 Future Scope for Modification                79
5.4 System Specification
5.4.1 H/W Requirement                            81
5.4.2 S/W Requirement                            82
5.5 References / Bibliography                    82
Slide -7 7
CHAPTER 6: ANNEXURES
A-1 Menu Flow Diagram
A-2 Structure Chart
A-3 CAD
A-4 DFD
A-5 ERD
A-6 Decision Table / Tree
A-7 Data Dictionary
A-8 Sample Input
A-9 Sample Output
Slide -8 8 CHAPTER 1 INTRODUCTION
• Introduction about Company
• Introduction about Project
• Need of Computerization
• Present State of the Art
• Proposed Software
• Importance of the Work
• Advantages & Limitations of the System
Slide -9 9 1.1 INTRODUCTION ABOUT COMPANY
IRCON (earlier known as Indian Railway Construction Company Ltd.), a public sector undertaking of the Ministry of Railways, Government of India, was established in 1976 with the objective of implementing multidisciplinary railway construction works in India and extending the benefit of the vast experience gained by Indian Railways to developing nations. IRCON, since then a pioneer in civil, mechanical, electrical, signalling and telecommunication works, has won ISO 9002 accreditation for all its operations in India and abroad. With the present emphasis on infrastructure projects, IRCON has geared itself to undertake projects on the concepts of BOT, BOOT, BOLT, BLT, BOO etc., in the execution of expressways, bypasses, bridges, flyovers, tunnels, rail upgradation, railway electrification, power projects, optical fibre and telecom, etc. The company has so far successfully completed around 59 projects in 13 countries, and has executed more than 100 varied types of projects in India. IRCON has recently been granted the status of "Mini Ratna" Category-1 by the Ministry of Railways.
Slide -10 10 AREAS OF OPERATION
• New railway lines and facilities.
• Strengthening, doubling and conversion of existing lines.
• Construction, strengthening, rebuilding and regirdering of railway bridges.
• Station buildings, workshops and production units for rolling stock, concrete sleepers and track components.
• Supply, installation, testing and commissioning of signalling and telecommunication systems, including modernization.
• Railway electrification, transmission lines and power stations.
• Operation and maintenance of railway systems and installations.
• Public buildings, commercial complexes and industrial structures.
• Roads, highways, bridges and flyovers.
• Runways and hangars for civil aviation.
• Mass rapid transit systems.
• Water supply and sewerage works.
• Privatization projects (BOT, BOOT, BOLT, BOO etc.) in surface transport and infrastructure.
• Leasing of locomotives.
Slide -11 11 1.2 INTRODUCTION ABOUT PROJECT
The project, Payroll System, is developed using FoxPro 2.6 on a Windows platform. As we already know, all the employees of a company get a salary for the work they do in the company. In this company every employee gets his salary after deductions are subtracted and benefits are added. The previous software for calculating salary used an outdated database, so there was a need to upgrade it with a new one. The main aim of this software is to print out the salary slips of all the employees in the company and to maintain the results in databases for future use.
Database Management System
Database technology has been described as one of the most rapidly growing areas of computer and information science. As a field it is still comparatively young; manufacturers and vendors did not begin to offer DBMS products until well into the 1960s. Despite its youth, however, the field has quickly become one of considerable importance, both practical and theoretical. The total amount of data now committed to databases can be measured, conservatively, in billions of bytes, and it is no exaggeration to say that many thousands of organizations are critically dependent on the continued and successful operation of a database system.
A database system is essentially nothing more than a computer-based record-keeping system, i.e., a system whose overall purpose is to record and maintain information. The information concerned can be anything that is deemed to be of significance to the organization the system is serving: anything, in other words, that may be necessary to the decision-making processes involved in the management of that organization. A database system involves four components: data, hardware, software and users.
DATA: The data stored in the system is partitioned into one or more databases. A database, then, is a repository for stored data. In general it is both integrated and shared. By integrated we mean that the database may be thought of as a unification of several distinct data files, with any redundancy among those files partially or wholly eliminated. By shared we mean that individual pieces of data in the database may be shared among several different users, in the sense that each of those users may have access to the same piece of data. Such sharing is really a consequence of the fact that databases are integrated.
HARDWARE: The hardware consists of secondary storage volumes (disks, drums, etc.) on which the database resides, together with the associated devices, control units, channels, and so forth.
SOFTWARE: Between the physical database itself and the users of the system is a layer of software usually called the Database Management System, or DBMS. All requests from users for access to the database are handled by the DBMS. One general function provided by the DBMS is thus the shielding of database users from hardware-level details.
USERS: There are three broad classes of users. First, there is the application programmer, responsible for writing application programs that use the database.
Slide -13 13 These application programs operate on the data in all the usual ways: retrieving information, creating new information, changing existing information. The programs themselves may be conventional batch applications, or they may be on-line programs designed to support an end user interacting with the system from an online terminal. The second class of users is the end user, accessing the database from a terminal. An end user may employ a query language provided as an integral part of the system, or he may invoke a user-written application program that accepts commands from the terminal and in turn issues requests to the DBMS on the end user's behalf. The third class of user is the Database Administrator, or DBA. The DBA is the person (or group of persons) responsible for overall control of the database system. The DBA's responsibilities include the following:
a) Deciding the information contents of the database.
b) Deciding the storage structure and access strategy.
c) Liaising with users.
d) Defining authorization checks and validation procedures.
e) Defining a strategy for backup and recovery.
f) Monitoring performance and responding to changes in requirements.
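The mediation described above, where application programs and end users reach the stored data only through the DBMS layer, can be sketched in miniature. The sketch below is illustrative Python, not part of the project's FoxPro code; the class and method names are invented for the example.

```python
# Toy sketch of the DBMS idea: callers go through this access layer,
# and the internal storage layout stays hidden from them.
# All names here are invented for illustration.

class PayrollDBMS:
    def __init__(self):
        self._storage = {}          # hidden "physical" storage

    def insert(self, emp_id, name, basic_pay):
        # The DBMS enforces integrity rules in one place.
        if emp_id in self._storage:
            raise ValueError("duplicate employee id")
        self._storage[emp_id] = {"name": name, "basic_pay": basic_pay}

    def lookup(self, emp_id):
        # Reads are likewise mediated; callers never see the layout.
        record = self._storage.get(emp_id)
        return dict(record) if record else None

db = PayrollDBMS()
db.insert(101, "A. Kumar", 12000)
print(db.lookup(101))  # {'name': 'A. Kumar', 'basic_pay': 12000}
```

Because every request passes through `insert` and `lookup`, the storage representation could change without affecting any caller, which is the shielding the text attributes to a DBMS.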
Slide -14 14 1.3 PRESENT STATE OF THE ART
The existing process of payroll generation was completely manual, and the following registers were maintained to keep track of the various pay components:
1. Employees Details Register
2. Allowances Register
3. Deductions Register
4. Leaves Register
5. Miscellaneous Expenditure Register
The salary had to be prepared by the 25th of every month, and the process started from the 2nd of the month. There were four clerks and one accountant who did the calculation and record keeping; so in all, there were five employees who were busy all the time in calculation work related to salary. Whenever an employee required a pay slip he had to submit a written application, based on which the exact salary structure had to be derived from the various different registers. Even such a minor document took a minimum of two days' work to prepare. Besides this, there was no up-to-date help available to the employees regarding their loan, income tax or leave situation. There was no integration between the personnel department and the accountants generating the salary records and, in turn, the salary slips. Overall, there was no standard system which could be relied upon.
Slide -15 15 1.4 NEED FOR COMPUTERIZATION
When I went through the detailed description of the problem, I observed that, as there are a large number of employees, the problem of quickly finding an employee's statement of salary becomes cumbersome. The records of the salary entitled and given have to be maintained in an efficient way. To locate a particular record, one has to go through the entire register, as there is no fully computerised system. Sometimes it is found that there are calculation errors in the records maintained, and it takes a lot of time to correct them. There is always a delay in giving the printed pay slips, and the accountant is not able to keep the records up to date, which causes serious problems. As the management wants to get information about all records, or a particular record, up to date, error-free and as quickly as possible, all these factors have given rise to this new computerised payroll system, which will serve all these necessities. The computerised system will greatly simplify the process of payroll management, and the results are error-free, up to date and generated in almost no time.
Information plays an important role in the policy making, planning and decision-making processes in any organizational setup. In today's environment, efficiency has become a critical factor for the success of any organization. In the present scenario, the volume of information required is increasing while the time in which it is to be made available is decreasing.
Slide -16 16 Such a changing environment has made it necessary that one start taking the help of methods that ensure:
• Better data organization.
• Efficient retrieval of information.
• Enhanced processing capabilities.
• Better communications.
Towards this objective, time demands that one should go in for computerization. In India both the public and private sectors have been introducing modernization programmes in administrative setups at all levels, bringing vast changes in day-to-day administrative functions so as to enable the administration to fulfil the increasing needs of the employees and the management of the organisation.
Slide -17 17 1.5 PROPOSED SOFTWARE
The Payroll System has been designed to maintain previous and current data related to the salary of the employees working in the organisation, in order to answer various types of queries frequently asked by the Government authorities, staff and others. This system covers the most frequently used queries that are answered by the Sales department and the Accounts section. The system generates error-free reports as and when required by the Government and the staff members. It automatically gets the information about the rates from the existing salary system and stores it in a database. So, as and when there is a revision, the Payroll System comes into action: the rates in this database become the previous rates and the new rates become the current rates. The system is designed on Windows using the MS-FoxPro 2.6 database system. In other words, the database and the queries are both designed on the same platform, so that more efficiency can be achieved and the data maintenance can be done in the same way.
The objective of this Payroll System is to provide the user an efficient way to manage the various operations which at present are handled manually. The system will take care of the following functions:
Slide -18 18
• Provide the user with the ease to store / retrieve data and information.
• Provide reports as and when required.
• Provide accurate details of the existing rates and the records contained in the database.
• Assist in correcting erroneous data through screens.
• Provide validations in the entry of data through user-friendly screens, to ensure the correctness of data to some extent.
The system will automatically fetch the latest records from the existing database table and show the results likewise, and calculations are made on the basis of the existing particulars. The most important benefit of all is the generation of reports and pay slips, which at present is done manually. This will reduce the time required in the generation of the reports and also prevent any errors which could otherwise have been made while entering data manually. All these options, along with their working and use, were explained to the management, who themselves evaluated the advantages and disadvantages of the proposed software.
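Two of the central functions listed above, validated data entry and automatic pay calculation, can be sketched roughly as follows. This is illustrative Python, not the project's FoxPro code; the field names and the net-pay formula (basic pay plus allowances minus deductions) are assumptions made for the example, not the project's actual rules.

```python
# Illustrative sketch: screen-style validation followed by pay-slip
# calculation. Field names and the formula are assumptions.

def validate(record):
    # Mirrors the screen-level validations described above.
    if not record.get("name"):
        raise ValueError("employee name is required")
    if record.get("basic_pay", 0) <= 0:
        raise ValueError("basic pay must be positive")

def pay_slip(record):
    validate(record)
    net = record["basic_pay"] + record["allowances"] - record["deductions"]
    return {"name": record["name"], "net_pay": net}

slip = pay_slip({"name": "A. Kumar", "basic_pay": 12000,
                 "allowances": 3000, "deductions": 1500})
print(slip["net_pay"])  # 13500
```

Keeping validation in one function is what lets the system catch erroneous data at entry time rather than after the slips are printed.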
Slide -19 19 1.6 IMPORTANCE OF THE WORK
When I went through the detailed description of the problem, I observed that computerizing the project Payroll Management System would improve the efficiency of the existing system. The proposed system will consist of various inter-related modules, so that they can function independently as well as joined together with other modules. A detailed study of the system was carried out to develop the computer-based Payroll Management System. In this regard several meetings were held with the concerned persons to understand the logic of the system, gather the facts for finding the solution of the problem, and improve the efficiency of the system. To understand the problem, the following steps were taken:
1. A detailed study of the existing system was done.
2. The objectives of the department were analyzed and strategies were made to attain these objectives.
• Data entry screens were designed.
• Key success variables were identified for meeting the organizational objectives.
• Suitable reports were designed.
• Different screens were written to accept the input data and generate the reports.
• User interaction was handled by designing suitable selection screens.
• The system was tested and implemented.
Slide -20 20 1.7 ADVANTAGES AND LIMITATIONS OF THE SYSTEM
The existing system is a partially manual system, in which the accountant uses a register to keep every type of record pertaining to the different heads of the salary. This system has poor performance, as it is not error-free and causes delays in the demand and supply of the data records. To measure and assess the performance of the system, the best source was used, i.e., the staff (to whom the salary slips are given). On the other hand, the proposed system is efficient, less error-prone and relatively fast. In the existing manual system no specific security precautions are taken to safeguard against improper activity; the data is exposed to outside means, as it is maintained in a register which can be accessed by any person. In the proposed system, on the other hand, we have centralized control of the precious data, which is secured from unscrupulous elements. In the existing manual system the salary slips which are given to the staff are not properly made, and the quality of printing is also very poor; the proposed system will provide all the necessary details right in the salary slips, which are printed neatly and in the right format.
The proposed system has a few limitations as well:
1. The cost of the system is very high as compared to the manual system, and the running and maintenance costs of the system are high.
2. If the software is installed on a network, then while one person is using the package the others are not able to access it.
3. The software is liable to manual faults, i.e., if the data entered is wrong then we cannot expect the system to make the corrections itself.
Slide -21 21 CHAPTER 2 SYSTEM ANALYSIS
• Feasibility Study
• Analysis Methodology
• Choice of the Platform
• Software Used
• Hardware Used
Slide -22 22 2.1 FEASIBILITY STUDY
The main objective of a feasibility study is to test the technical, operational and economic feasibility of developing a computer system. All projects are feasible, given unlimited resources and infinite time. It is both necessary and prudent to evaluate the feasibility of a project at the earliest possible time, that is, in the system study phase itself. Feasibility and risk analysis are related in many ways: if project risk is high, for any reason, the feasibility of producing quality software is reduced. A feasibility study is not warranted for systems in which the economic justification is obvious, technical risk is low, few legal problems are expected, and no reasonable alternative exists. However, if any of the preceding conditions fail, a study of that area should be conducted. During project engineering, however, we concentrate our attention on three primary areas of interest:
• Technical Feasibility
• Operational Feasibility
• Economic Feasibility
Slide -23 23 Technical Feasibility
During technical analysis, the technical merits of the system concept are evaluated, at the same time collecting additional information about performance, reliability, maintainability and producibility. In some cases it also includes a limited amount of research and design. Technical analysis begins with an assessment of the technical viability of the proposed system. What technologies are required to accomplish system function and performance? What new methods, algorithms or processes are required, and what is their development risk? How will these technology issues affect cost? The tools available for technical analysis are derived from mathematical modeling and optimization techniques, probability and statistics, queuing theory and control theory. It is important to note, however, that analytical evaluation is not always possible. Modeling is an effective mechanism for the technical analysis of computer-based systems. Technical feasibility is frequently the most difficult area to assess at this stage of the product engineering process. Because objectives, functions and performance are somewhat hazy, anything seems possible if the "right" assumptions are made. It is essential that the process of analysis and definition be conducted in parallel with an assessment of technical feasibility. The considerations that are normally associated with technical feasibility include:
Development risk: Can the system element be designed so that the necessary function and performance are achieved within the constraints uncovered during analysis?
Slide -24 24 Resource availability: Are skilled staff available to develop the system element in question? Are other necessary resources (hardware and software) available to build the system?
Technology: Has the relevant technology progressed to a state that will support the system?
During an evaluation of technical feasibility, a cynical, if not pessimistic, attitude should prevail; misjudgment at this stage can be disastrous. The considerations normally associated with technical feasibility thus include development risk, resource availability and technology. Ircon International Ltd., New Delhi, has Pentium machines connected to a Novell NetWare server, a Windows workgroup and a SCO UNIX server to provide a multi-user environment facility. IRCON staff were available to develop the system, and the management provided the latest hardware and software facilities for the successful completion of the project.
Operational Feasibility
In the manual system it is very difficult to maintain the huge amount of pay information. The development of the new system was started because of the requirements put forward by the management of the concerned department, so it is certain that the system developed is operationally feasible.
Slide -25 25 Economic Feasibility
Among the most important information contained in a feasibility study is the cost-benefit analysis: an assessment of the economic justification for a computer-based system project. Cost-benefit analysis delineates the costs of project development and weighs them against the tangible (i.e., measurable directly in currency) and intangible benefits of a system. Cost-benefit analysis is complicated by criteria that vary with the characteristics of the system to be developed, the relative size of the project, and the expected return on investment desired as part of a company's strategic plan. In addition, many benefits derived from computer-based systems are intangible (e.g., better design quality through iterative optimization, increased customer satisfaction through programmable control, and better business decisions through reformatted and pre-analyzed sales data). Direct quantitative comparisons may be difficult to achieve.
Slide -26 26 2.2 ANALYSIS METHODOLOGY
A complete understanding of software requirements is essential to the success of a software development effort. No matter how well designed or well coded, a poorly analyzed and specified program will disappoint the user and bring grief to the developer. The requirements analysis task is a process of discovery, refinement, modeling and specification. The software scope, initially established by the system engineer and refined during software project planning, is refined in detail. Models of the required data, information and control flow, and operational behavior are created. Alternative solutions are analyzed and allocated to various software elements. Both the developer and the customer take an active role in requirements analysis and specification. The customer attempts to reformulate a sometimes nebulous concept of software function and performance into concrete detail. The developer acts as interrogator, consultant and problem solver. Requirements analysis and specification may appear to be a relatively simple task, but appearances are deceiving. Communication content is very high, chances for misinterpretation or misinformation abound, and ambiguity is probable. The dilemma that confronts a software engineer may best be understood by repeating the statement of an anonymous (infamous?) customer: "I know you believe you understood what you think I said, but I am not sure you realize that what you heard is not what I meant."
Slide -27 27 REQUIREMENTS ANALYSIS
Requirements analysis is a software engineering task that bridges the gap between system-level software allocation and software design (Figure 2.1). Requirements analysis enables the system engineer to specify software function and performance, indicate software's interface with other system elements, and establish constraints that the software must meet. Requirements analysis allows the software engineer (often called the analyst in this role) to refine the software allocation and build models of the data, functional and behavioral domains that will be treated by the software. Requirements analysis provides the basis for the data, interface and procedural design. Finally, the requirements specification provides the developer and the customer with the means to assess quality once the software is built.
Figure 2.1: System Engineering → Software Requirements Analysis → Software Design
Slide -28 28 COMMUNICATION TECHNIQUES
Software requirements analysis always begins with communication between two or more parties. A customer has a problem that may be amenable to a computer-based solution. A developer responds to the customer's request for help. Communication has begun. But, as we have already noted, the road from communication to understanding is often full of potholes.
INITIATING THE PROCESS
The most commonly used analysis technique to bridge the communication gap between the customer and developer, and to get the communication process started, is to conduct a preliminary meeting or interview. The first meeting between a software engineer (the analyst) and the customer can be likened to the awkwardness of a first date between two adolescents. Neither person knows what to say or ask; both are worried that what they do say will be misinterpreted; both are thinking about where it might lead (both are likely to have radically different expectations here); both want to get the thing over with; but at the same time, both want it to be a success. Over the past two decades, investigators have identified analysis problems and their causes, and have developed a variety of modeling notations and corresponding sets of heuristics to overcome them. Each analysis method has a unique point of view. However, all analysis methods are related by a set of operational principles:
Slide -29 29
1. The information domain of a problem must be represented and understood.
2. The functions that the software is to perform must be defined.
3. The behavior of the software (as a consequence of external events) must be represented.
4. The models that depict information, function and behavior must be partitioned in a manner that uncovers detail in a layered (or hierarchical) fashion.
5. The analysis process should move from essential information toward implementation detail.
By applying these principles, the analyst approaches a problem systematically. The information domain is examined so that function may be understood more completely. Models are used so that the characteristics of function and behavior can be communicated in a compact fashion. Partitioning is applied to reduce complexity. Essential and implementation views of the software are necessary to accommodate the logical constraints imposed by processing requirements and the physical constraints imposed by other system elements. In addition to the operational analysis principles noted above, Davis suggests a set of guiding principles:
• Understand the problem before you begin to create the analysis model. There is a tendency to rush to a solution, even before the problem is understood. This often leads to elegant software that solves the wrong problem!
• Develop prototypes that enable a user to understand how human-machine interaction will occur. Since the perception of the quality of software is often based on the perception of the "friendliness" of the interface, prototyping (and the iteration that results) is highly recommended.
Slide -30 30
• Record the origin of and the reason for every requirement. This is the first step in establishing traceability back to the customer.
• Use multiple views of requirements. Building data, functional and behavioral models provides the software engineer with three different views. This reduces the likelihood that something will be missed and increases the likelihood that inconsistency will be recognized.
• Prioritize requirements. Tight deadlines may preclude the implementation of every software requirement. If an incremental process model is applied, those requirements to be delivered in the first increment must be identified.
• Work to eliminate ambiguity. Because most requirements are described in a natural language, the opportunity for ambiguity abounds. The use of formal technical reviews is one way to uncover and eliminate ambiguity.
A software engineer who takes these principles to heart is more likely to develop a software specification that will provide an excellent foundation for design.
Slide -31 31 2.3 CHOICE OF THE PLATFORM
I have selected Microsoft FoxPro 2.6 under the Windows environment, as the programming language as well as the database management system.
2.3.1 SOFTWARE USED
Why is FoxPro the choice of language for development? FoxPro is one of the leading database management system (DBMS) software packages for the PC. FoxPro is also called a relational database management system (RDBMS). Whatever the name may suggest, we find it very simple and easy to use. FoxPro helps us design database files as per the requirements and the specified format. It also helps us enter and manage the database files. FoxPro helps us edit, view and change data in the database through simple built-in commands. Once the database is ready, we can use it to retrieve selected information from the database tables. The retrieved information can be displayed or printed in the desired report format.
The best part is that the data stored in the database is flexible, i.e., we can change or modify the contents as well as the structure of the database any number of times. In spite of being simple, the FoxPro commands are very powerful and flexible. FoxPro has many other powerful commands for managing multiple database files, protection and documentation of files, automatic design of menus, etc. We can use FoxPro on a Local Area Network (LAN), where several persons work simultaneously on different machines (connected through cables) on single or different applications.
2.3.2 HARDWARE USED
PROCESSOR: Intel Pentium 200 MHz
VIDEO ADAPTER: VGA / Color
RAM: 32 MB
HARD DISK: 4.3 GB
FLOPPY DISK DRIVE: 1.44 MB
CD-ROM DRIVE: 48X
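The design / enter / retrieve / report cycle that this section attributes to FoxPro can be sketched with Python's built-in sqlite3 module standing in for FoxPro's table handling. The table and column names below are invented for illustration; this is not the project's actual schema.

```python
# Sketch of the create / enter / retrieve cycle described above,
# with sqlite3 standing in for FoxPro's table commands.
# Table and column names are invented for illustration.
import sqlite3

con = sqlite3.connect(":memory:")

# Design the database file as per the specified format.
con.execute("CREATE TABLE employee (emp_id INTEGER PRIMARY KEY,"
            " name TEXT, basic_pay REAL)")

# Enter and manage data (comparable to appending records in FoxPro).
con.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                [(101, "A. Kumar", 12000), (102, "S. Rao", 15000)])

# Retrieve selected information for a report.
rows = con.execute("SELECT name, basic_pay FROM employee"
                   " WHERE basic_pay > 13000").fetchall()
print(rows)  # [('S. Rao', 15000.0)]
```

The same three steps (define the table structure, enter records, query selected records for a report) are what the FoxPro commands mentioned in the text provide interactively.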
Slide -33 33 CHAPTER 3 SYSTEM DESIGN
• Design Methodology
• Database Design
• Form / Screen Design
• Report Design
Slide -34 34 3.1 DESIGN METHODOLOGY
DESIGN CONCEPT
The design of an information system produces the details that state how the system will meet the requirements identified during system analysis. System specialists often refer to this stage as logical design, in contrast to the process of developing program software, which is referred to as physical design. Systems analysts begin the process by identifying the reports and other outputs the system will produce. Then the specific data on each are pinpointed. Usually, designers sketch the form of a display as they expect it to appear when the system is complete. This may be done on paper or on a computer display, using one of the automated system tools available. The system design also describes the data to be input, calculated or stored. Individual data items and calculation procedures are written in detail. The procedures tell how to process the data and produce the output.
DESIGN OBJECTIVES
The following goals were kept in mind while designing the system:
• To reduce the manual work required in the existing system.
• To avoid the errors inherent in manual working, and hence make the outputs consistent and correct.
Slide -35 35 To improve the management of permanent information of the company by keeping it in
properly structured tables and provides facilities to update this information as efficiently as possible. To
make the system completely menu-driven and hence user friendly, this was necessary so that even non-
programmers could use the system efficiently and system could act as Catalyst in achieving objectives.
To make the system completely compatible i.e., it should ?fit in? in the total, integrated system. To design
the system in such a way that reduced future maintenance and enhancement times and efforts. To make
the system reliable, understandable and cost effective. DESIGN OVERVIEW The design stage takes the
final specification of the system from analysis stages and finds the best way of filling them, given the
technical environment and precious decision on required level of automation.
Slide -36 36 The system design is carried out in two phases: Architectural Design (High Level Design) and Detailed Design (Low Level Design).
HIGH LEVEL DESIGN The high level design maps the given system to a logical data structure. It involves:
- Identifying the entities: All the entities related to the module were identified, checked and consolidated.
- Identifying the relationships: The relationships between entities within and outside the system were identified.
- Attribute definition: The entities were identified and their field characteristics were specified.
- Normalization: The entities were normalized; after the first and second normal forms, third normal form was achieved for all entities of the system.
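As an illustration of the normalization step (a hypothetical sketch, not the project's actual schema; all field names are invented), a flat payroll record mixing employee and department facts can be split so that every non-key attribute depends only on its key:

```python
# Hypothetical illustration of normalizing a flat payroll record into
# third normal form (3NF); field names are invented for the example.
flat_rows = [
    {"emp_no": 1, "name": "A. Kumar", "dept_no": 10, "dept_name": "Accounts", "basic": 8000},
    {"emp_no": 2, "name": "S. Rao",   "dept_no": 10, "dept_name": "Accounts", "basic": 9500},
]

# Employee table: every non-key field depends only on emp_no.
employees = {r["emp_no"]: {"name": r["name"], "dept_no": r["dept_no"], "basic": r["basic"]}
             for r in flat_rows}

# Department table: dept_name depends on dept_no, so it moves to its own table,
# removing the transitive dependency emp_no -> dept_no -> dept_name.
departments = {r["dept_no"]: r["dept_name"] for r in flat_rows}

print(employees[2]["dept_no"], departments[10])
```

Storing the department name once, keyed by department number, is exactly what third normal form demands: update it in one place and every employee row stays consistent.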
Slide -37 37 LOW LEVEL DESIGN The low level design maps the logical model of the system to the physical database design. Tables were created for the system; entities and attributes were mapped into tables, with the name of each entity taken as the table name.
- User preferences: User preferences such as block names, validation of primary keys, layouts of blocks and fields, titles for blocks, and mandatory input field prompts were incorporated here.
- Generate the program: The program was generated based on the relationships specified and according to the user preferences.
- Program specification: The program specification was written for the masters, transactions, reports and queries. The logic for each field, block and window was written so that anyone who does not know the system would be able to code it.
Slide -38 38 SYSTEM DESIGN CONSIDERATIONS
DATA / FILE DESIGN For the design of any system, before the actual development, certain rules or considerations have to be agreed upon for standardization of the system and for consistency and integrity between its modules. The information for the system is stored in the database's tables. Each input, i.e., the emp_no, date, effective date, etc., is stored as a record in the database. The details of the latest records can be obtained from the table, which receives its latest records because it is linked to the tables in the calculation module.
INPUT DESIGN Any basic system needs input. In this system, the input is the basic salary, rates, month of request, etc. This input is obtained from the user in a very user-friendly manner, as all required input is displayed to the user in a form fully equipped with error checks and validation checks on all fields and pop-up screens.
FORM / WINDOW DESIGN Interactive forms / windows are designed to input data to the system. The goal of constructing a form is to make it easy to use and to let an operator enter data, from handwritten documents into the database tables, with as few mistakes as possible. While entering data, the operator should be able to verify the accuracy of the handwritten documents by checking the corresponding columns. Validation checks are made and default values are supplied for some of the fields.
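The validation-and-default behaviour described above can be sketched as follows. This is a hypothetical illustration: the field names, rules and default value are invented, and the actual system implements such checks as FoxPro screen validations rather than Python.

```python
# Hypothetical sketch of per-field validation with defaults, in the spirit of
# the form-level checks described above (the real system uses FoxPro screens).
def validate_entry(raw: dict) -> dict:
    record = {"department": "GENERAL"}  # default supplied when the field is blank
    record.update({k: v for k, v in raw.items() if v not in ("", None)})

    errors = []
    if not str(record.get("emp_no", "")).isdigit():
        errors.append("emp_no must be numeric")
    basic = record.get("basic", 0)
    if not (0 < basic <= 99999):          # width check mirroring a Numeric(5) field
        errors.append("basic out of range")
    if errors:
        raise ValueError("; ".join(errors))
    return record

print(validate_entry({"emp_no": "101", "basic": 8000, "department": ""}))
```

Rejecting a record at entry time, with a message naming every failed field, is what keeps the error checking "user friendly" rather than silently storing bad data.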
Slide -39 39 OUTPUT DESIGN Reports are the most essential requirement of any information system. Reports serve as a crucial interface between the end users and the system, as they help in all types of decision-making. For this reason, reports are designed so that they satisfy the needs of the end users and give as much essential information as possible in a single, simple report.
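A fixed-width report of this kind can be sketched as below. This is an illustrative Python mock-up only; the field names, amounts and layout are invented and do not reproduce the project's actual FoxPro pay-slip format (Figure 3.6).

```python
# Hypothetical sketch of a pay-slip style report; layout and field names
# are illustrative, not the project's actual FoxPro report definition.
def pay_slip(emp: dict) -> str:
    gross = emp["basic"] + emp["hra"] + emp["da"]
    net = gross - emp["pf"]
    lines = [
        "PAY SLIP".center(32, "-"),
        f"Name      : {emp['name']}",
        f"Basic     : {emp['basic']:>8}",
        f"HRA       : {emp['hra']:>8}",
        f"DA        : {emp['da']:>8}",
        f"Gross     : {gross:>8}",
        f"PF        : {emp['pf']:>8}",
        f"Net Pay   : {net:>8}",
    ]
    return "\n".join(lines)

print(pay_slip({"name": "A. Kumar", "basic": 8000, "hra": 1200, "da": 800, "pf": 960}))
```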
Slide -40 40 3.2 DATABASE DESIGN Only two database files are used, i.e., Emp_main and Emp_reco. Emp_main is the master database file, used to store all the necessary details pertaining to the employees' salaries as well as their personal details, while the Emp_reco database file is used to store all the past records of all the employees; this file contains the details of all the Pay Slips issued to the staff. 3.2.1 The structure of the database Emp_main.dbf is as under:

NO.  FIELD NAME   TYPE       WIDTH  DECIMAL
1    Name         Character  15
2    Designation  Character  15
3    Department   Character  12
4    Basic        Numeric    10     0
5    Executive    Logical    1
6    House        Logical    1

Table No. 3.1
Slide -41 41 The structure of the database Emp_reco.dbf is as under:

NO.  FIELD NAME  TYPE       WIDTH  DECIMAL
1    Name        Character  15
2    Month       Numeric    2      0
3    Year        Numeric    4      0
4    Emp_gross   Numeric    5      0
5    Emp_pf      Numeric    5      0

Table No. 3.2
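The two file structures can also be read as record types. The sketch below is illustrative Python only (the actual files are FoxPro .dbf tables, not Python objects); the comments map each attribute back to the type and width given above.

```python
from dataclasses import dataclass

# Illustrative models of the two .dbf structures described above; the real
# system stores these as FoxPro database files, not Python objects.
@dataclass
class EmpMain:            # master file: personal and salary details
    name: str             # Character 15
    designation: str      # Character 15
    department: str       # Character 12
    basic: int            # Numeric 10
    executive: bool       # Logical 1
    house: bool           # Logical 1

@dataclass
class EmpReco:            # history file: one row per pay slip issued
    name: str             # Character 15
    month: int            # Numeric 2
    year: int             # Numeric 4
    emp_gross: int        # Numeric 5
    emp_pf: int           # Numeric 5

slip = EmpReco("A. Kumar", 3, 1999, 10000, 960)
print(slip.year, slip.emp_gross)
```

The split mirrors the design decision in the text: current master data lives in one structure, while each issued pay slip appends an immutable history row to the other.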
Slide -42 42 3.3 FORM / SCREEN DESIGN MAIN SCREEN WINDOW Figure No. 3.1
Slide -43 43 DATA ENTRY SCREEN WINDOW Figure No. 3.2
Slide -44 44 DATA PRINT SCREEN WINDOW Figure No. 3.3
Slide -45 45 EMPLOYEES DETAILS SCREEN WINDOW Figure No. 3.4
Slide -46 46 EMPLOYEES RECORD SCREEN WINDOW Figure No. 3.5
Slide -47 47 3.4 REPORT DESIGN PAY SLIP FORMAT Figure No. 3.6
Slide -48 48 FINAL RESULT OF ALL CALCULATION IN PAY SLIP Figure No. 3.7
Slide -49 49 CHAPTER 4 TESTING AND IMPLEMENTATION
- Testing methodology
- Unit testing
- Module testing
- System testing
- Alpha / beta testing
- White box / black box testing
- Implementation manual
- Implementation
- Post implementation modification
Slide -50 50 4.1 TESTING METHODOLOGY Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design, and coding. The increasing visibility of software as a system element and the attendant costs associated with software failure are motivating forces for well-planned, thorough testing. It is not unusual for a software development organization to expend between 30 and 40 percent of total project effort on testing. In the extreme, testing of human-rated software (e.g., flight control, nuclear reactor monitoring) can cost three to five times as much as all other software engineering activities combined!
TESTING OBJECTIVES:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as-yet-undiscovered error.
3. A successful test is one that uncovers an as-yet-undiscovered error.
The above objectives imply a dramatic change in viewpoint. They move counter to the commonly held view that a successful test is one in which no errors are found. Our objective is to design tests that systematically uncover different classes of errors, and to do so with a minimum amount of time and effort.
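In that spirit, a good test case deliberately probes where errors are likely rather than confirming a comfortable input. The sketch below is hypothetical (the gross-pay routine and its rule are invented for illustration):

```python
# Hypothetical illustration: a test with a high probability of exposing an
# as-yet-undiscovered error targets boundaries, not typical mid-range values.
def gross_pay(basic: int, hra: int, da: int) -> int:
    if basic <= 0:
        raise ValueError("basic must be positive")
    return basic + hra + da

# A mid-range check that "passes" tells us little:
assert gross_pay(8000, 1200, 800) == 10000

# Boundary probes are the tests most likely to uncover an error:
try:
    gross_pay(0, 0, 0)          # lower boundary of basic
    boundary_rejected = False
except ValueError:
    boundary_rejected = True
print(boundary_rejected)
```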
Slide -51 51 If testing is conducted successfully (according to the objectives stated above), it will uncover errors in the software. As a secondary benefit, testing demonstrates that the software functions appear to be working according to specification and that performance requirements appear to have been met. In addition, data collected as testing is conducted provides a good indication of software reliability and some indication of software quality as a whole. But there is one thing that testing cannot do: testing cannot show the absence of defects; it can only show that software errors are present.
TESTING PRINCIPLES Before applying methods to design effective test cases, a software engineer must understand the basic principles that guide software testing. Davis suggests a set of testing principles, which are as follows:
- All tests should be traceable to user requirements: As we have seen, the objective of software testing is to uncover errors. It follows that the most severe defects (from the user's point of view) are those that cause the program to fail to meet its requirements.
- Tests should be planned long before testing begins: Test planning can begin as soon as the requirements model is complete. Detailed definition of test cases can begin as soon as the design model has been solidified. Therefore, all tests can be planned and designed before any code has been generated.
Slide -52 52
- The Pareto principle applies to software testing: Stated simply, the Pareto principle implies that 80 percent of all errors uncovered during testing will likely be traceable to 20 percent of all program modules. The problem, of course, is to isolate these suspect modules and to test them thoroughly.
- Testing should begin "in the small" and progress toward testing "in the large": The first tests planned and executed generally focus on individual program modules. As testing progresses, the focus shifts in an attempt to find errors in integrated clusters of modules, and ultimately in the entire system.
- Exhaustive testing is not possible: The number of path permutations for even a moderately sized program is exceptionally large. For this reason, it is impossible to execute every combination of paths during testing. It is possible, however, to adequately cover program logic and to ensure that all conditions in the procedural design have been exercised.
- To be most effective, testing should be conducted by an independent third party: By "most effective," we mean testing that has the highest probability of finding errors (the primary objective of testing). For this reason, the software engineer who created the system is not the best person to conduct all tests for the software.
TESTING FOR SPECIALIZED ENVIRONMENTS AND APPLICATIONS As computer software has become more complex, the need for specialized testing approaches has also grown. The white-box and black-box
Slide -53 53 testing methods are applicable across all environments. With modern GUI development tools, the creation of user interfaces has become less time-consuming and more precise. At the same time, the complexity of GUIs has grown, leading to more difficulty in the design and execution of test cases. Because modern GUIs have the same look and feel, a series of standard tests can be derived. The following questions can serve as a guideline for creating a series of generic tests for GUIs.
For windows:
- Will the window open properly based on related typed or menu-based commands?
- Can the window be resized, moved and scrolled?
- Is all data content contained within the window properly addressable with a mouse, function keys, directional arrows, and the keyboard?
- Does the window properly regenerate when it is overwritten and then recalled?
- Are the functions that relate to the window operational?
- Are the relevant pull-down menus, tool bars, scroll bars, dialog boxes, buttons, icons and other controls available and properly displayed for the window?
Slide -54 54
- When multiple windows are displayed, is the name of each window properly represented?
- Is the active window properly highlighted?
- If multitasking is used, are all windows updated at appropriate times?
- Do multiple or incorrect mouse picks within the window cause unexpected side effects?
- Does the window close properly?
For pull-down menus and mouse operations:
- Is the appropriate menu bar displayed in the appropriate context?
- Does the application menu bar display system-related features (e.g., a clock display)?
- Do pull-down operations work properly?
- Are all menu functions properly addressable by the mouse?
- Is it possible to invoke each menu function using its alternative text-based command?
- Are the names of menu functions self-explanatory?
Slide -55 55
- Is help available for each menu item?
- If the mouse has multiple buttons, are they properly recognized in context?
Data entry:
- Is alphanumeric data entry properly echoed and input to the system?
- Do graphical modes of data entry (e.g., a slide bar) work properly?
- Is invalid data properly recognized?
- Are data input/modification/deletion messages intelligible?
Because of the large number of permutations associated with GUI operations, testing should be approached using automated tools. A wide array of GUI testing tools is available to achieve this goal.
TESTING OF CLIENT/SERVER ARCHITECTURES Client/server (C/S) architectures represent a significant challenge for software testing. The distributed nature of client/server environments, the performance issues associated with transaction processing, the potential presence of a number of different hardware platforms, the complexities of network communication, the need to service multiple clients from a centralized (or in some cases, distributed) database, and the coordination requirements imposed on the server all combine to make testing of C/S architectures, and the software that resides within them, considerably more difficult than testing standalone applications.
Slide -56 56 4.2 UNIT TESTING Unit testing focuses verification effort on the smallest unit of software design: the module. Using the procedural design description as a guide, important control paths are tested to uncover errors within the boundary of the module. The relative complexity of the tests, and of the errors they uncover, is limited by the constrained scope established for unit testing. The unit test is normally white-box oriented, and the step can be conducted in parallel for multiple modules.
UNIT TEST CONSIDERATIONS The module interface is tested to ensure that information properly flows into and out of the program unit under test. The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution. Boundary conditions are tested to ensure that the module operates properly at the boundaries established to limit or restrict processing. All independent paths (basis paths) through the control structure are exercised to ensure that all statements in the module have been executed at least once. And finally, all error-handling paths are tested. Tests of data flow across a module interface are required before any other test is initiated: if data do not enter and exit properly, all other tests are moot. In his text on software testing, Myers proposes a checklist for interface tests:
- Number of input parameters equal to number of arguments?
- Parameter and argument attributes match?
Slide -57 57
- Parameter and argument units systems match?
- Input-only arguments altered?
- Global variable definitions consistent across modules?
- File attributes correct?
- OPEN/CLOSE statements correct?
- End-of-file conditions handled?
- Inconsistent data types?
UNIT TEST PROCEDURES Unit testing is normally considered an adjunct to the coding step. After source-level code has been developed, reviewed, and verified for correct syntax, unit test case design begins. A review of design information provides guidance for establishing test cases that are likely to uncover errors in each of the categories discussed above. Each test case should be coupled with a set of expected results. Because a module is not a standalone program, driver and/or stub software must be developed for each unit test. The unit test environment is illustrated in the following figure:
Slide -58 58 Figure 4.1
In most applications a driver is nothing more than a "main program" that accepts test case data, passes such data to the module to be tested, and prints the relevant results. Stubs serve to replace modules that are subordinate to (called by) the module to be tested. A stub or "dummy subprogram" uses the subordinate module's interface, may do minimal data manipulation, prints verification of entry, and returns.
[Figure 4.2 shows the unit-test environment: a driver passes test cases to the module to be tested, which calls stubs; the tests exercise the interface, local data structures, boundary conditions, independent paths and error-handling paths, and the results are reported.]
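A minimal driver-and-stub arrangement of this kind can be sketched as follows. This is hypothetical Python for illustration only; the module, the subordinate tax routine it calls, and the test data are all invented.

```python
# Hypothetical sketch of a unit-test driver and a stub. The "module under
# test" computes net pay and normally calls a subordinate tax routine; the
# stub replaces that routine so the module can be tested in isolation.
def tax_stub(gross: int) -> int:
    print("tax_stub entered")       # a stub prints verification of entry
    return 0                        # minimal, predictable behaviour

def net_pay(gross: int, pf: int, tax_fn=tax_stub) -> int:
    return gross - pf - tax_fn(gross)

def driver():
    # The driver feeds test-case data to the module and reports results.
    cases = [((10000, 960), 9040), ((5000, 600), 4400)]
    results = []
    for (gross, pf), expected in cases:
        actual = net_pay(gross, pf)
        results.append(actual == expected)
        print(f"net_pay{(gross, pf)} = {actual}, expected {expected}")
    return results

print(driver())
```

Note that the stub does the minimum needed to let the module run; its fixed return value makes the expected results predictable, exactly as the text recommends.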
Slide -59 59 Drivers and stubs represent overhead: both are software that must be developed but that is not delivered with the final software product. If drivers and stubs are kept simple, the actual overhead is relatively low. Unfortunately, many modules cannot be adequately unit tested with "simple" overhead software; in such cases, complete testing can be postponed until the integration test step (where drivers or stubs are also used). Unit testing is simplified when a module with high cohesion is designed. When a module addresses only one function, the number of test cases is reduced and errors can be more easily predicted and uncovered.
Slide -60 60 4.3 MODULE TESTING A module represents a logical element of the system. For a module to run satisfactorily, it must compile, process test data correctly, and tie in properly with other modules. Achieving an error-free module is the responsibility of the programmer. Module testing checks for two types of error: syntax and logic. A syntax error is a module statement that violates one or more rules of the language in which it is written; an improperly defined field dimension or an omitted keyword are common syntax errors. These errors are shown through error messages generated by the computer. A logic error, on the other hand, deals with incorrect data fields, out-of-range items, and invalid combinations. Since diagnostics do not detect logic errors, the programmer must examine the output carefully for them. When a module is tested, the actual output is compared with the expected output. When there is a discrepancy, the sequence of instructions must be traced to determine the problem. Breaking the program down into self-contained portions, each of which can be checked at certain key points, facilitates the process. The idea is to compare module values against desk-calculated values to isolate the module.
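Comparing module output against desk-calculated values can be sketched like so (illustrative Python; the deduction routine, rate and figures are invented for the example):

```python
# Hypothetical sketch: compare a module's actual output with expected,
# desk-calculated values to isolate logic errors the compiler cannot see.
def pf_deduction(basic: int, rate_percent: int = 12) -> int:
    return basic * rate_percent // 100

desk_calculated = {8000: 960, 9500: 1140, 12000: 1440}  # worked out by hand

discrepancies = {b: (pf_deduction(b), expected)
                 for b, expected in desk_calculated.items()
                 if pf_deduction(b) != expected}
print(discrepancies)  # an empty dict means actual output matches expectations
```

Any entry that does appear in the discrepancy dict points straight at the module and input value where tracing should begin.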
Slide -61 61 4.4 SYSTEM TESTING The last high-order testing step falls outside the boundary of software engineering and into the broader context of computer system engineering. Software, once validated, must be combined with other system elements (e.g., hardware, people, and databases). System testing verifies that all elements mesh properly and that overall system function and performance are achieved. Ultimately, software is incorporated with other system elements (e.g., new hardware and information), and a series of system integration and validation tests is conducted. These tests fall outside the scope of the software engineering process and are not conducted solely by the software developer. However, steps taken during software design and testing can greatly improve the probability of successful integration in the larger system. A classic system-testing problem is "finger pointing." This occurs when an error is uncovered and each system element developer blames the others for the problem. Rather than indulging in such nonsense, the software engineer should anticipate potential interfacing problems and:
- Design error-handling paths that test all information coming from other elements of the system.
- Conduct a series of tests that simulate bad data or other potential errors at the software interface.
- Record the results of tests to use as "evidence" if finger pointing does occur.
Slide -62 62
- Participate in the planning and design of system tests to ensure that the software is adequately tested.
System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system. Although each test has a different purpose, all work to verify that all system elements have been properly integrated and perform their allocated functions. In the sections that follow, we discuss the types of system tests that are worthwhile for software-based systems.
Recovery Testing Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic (performed by the system itself), re-initialization, checkpointing mechanisms, data recovery, and restart are each evaluated for correctness. If recovery requires human intervention, the mean time to repair is evaluated to determine whether it is within acceptable limits.
Security Testing Security testing attempts to verify that the protection mechanisms built into a system will in fact protect it from improper penetration. The system's security must, of course, be tested for invulnerability from frontal attack, but it must also be tested for invulnerability from flank or rear attack. During security testing, the tester plays the role(s) of the individual who desires to penetrate the system. Anything goes! The tester may attempt to acquire passwords through external clerical means, may attack the system with custom
Slide -63 63 software designed to break down any defenses that have been constructed; may overwhelm the system, thereby denying service to others; may purposely cause system errors, hoping to penetrate during recovery; may browse through insecure data, hoping to find the key to system entry; and so on. Given enough time and resources, good security testing will ultimately penetrate a system. The role of the system designer is to make the penetration cost greater than the value of the information that would be obtained.
Stress Testing Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. For example: special tests may be designed that generate 10 interrupts per second, when one or two is the average rate. Input data rates may be increased by an order of magnitude to determine how the input functions respond. Test cases that require maximum memory or other resources may be executed. Test cases that may cause thrashing in a virtual operating system may be designed. Test cases that may cause excessive hunting for disk-resident data may be created. Essentially, the tester attempts to break the program.
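A crude stress probe in this spirit might look like the following. This is a hypothetical Python sketch; the routine and the volume figures are arbitrary, chosen only to show the "order of magnitude beyond normal" idea.

```python
# Hypothetical stress-test sketch: drive a routine with input volumes an
# order of magnitude beyond the expected rate and check it still behaves.
def process_batch(records):
    return sum(r["gross"] - r["pf"] for r in records)

normal_load = [{"gross": 10000, "pf": 960}] * 100       # a typical month
stress_load = normal_load * 10                           # 10x the volume

baseline = process_batch(normal_load)
stressed = process_batch(stress_load)
print(stressed == baseline * 10)  # totals should scale linearly with volume
```

In a real stress test the interesting observations are failures: exhausted memory, wrong totals, or response times that grow far faster than the input volume.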
Slide -64 64 Performance Testing Performance tests are often coupled with stress testing and often
require both hardware and software instrumentation. That is, it is often necessary to measure resource
utilization (e.g., processor cycles) in an exacting fashion. External instrumentation can monitor execution
intervals, log events (e.g., interrupts) as they occur, and sample machine states on a regular basis. By
instrumenting a system, the tester can uncover situations that lead to degradation and possible system
failure.
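Software instrumentation of the kind described can be as simple as timing a routine over repeated runs (an illustrative Python sketch; the routine and run count are invented):

```python
import time

# Hypothetical sketch of software instrumentation: measure execution time
# of a routine over repeated runs and watch for degradation.
def instrumented(fn, *args, runs=5):
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(*args)
        samples.append(time.perf_counter() - start)
    return min(samples), max(samples)

def compute_payroll(n):
    return [i * 12 // 100 for i in range(n)]

fastest, slowest = instrumented(compute_payroll, 10_000)
print(fastest <= slowest)  # a large gap between the two hints at contention
```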
Slide -65 65 4.5 ALPHA AND BETA TESTING It is virtually impossible for a software developer to foresee how the customer will really use a program. Instructions for use may be misinterpreted; strange combinations of data may be regularly used; and output that seemed clear to the tester may be unintelligible to a user in the field. When custom software is built for one customer, a series of acceptance tests is conducted to enable the customer to validate all requirements. Conducted by the end user rather than the system developer, an acceptance test can range from an informal "test drive" to a planned and systematically executed series of tests. In fact, acceptance testing can be conducted over a period of weeks or months, thereby uncovering cumulative errors that might degrade the system over time. If software is developed as a product to be used by many customers, it is impractical to perform formal acceptance tests with each one. Most software product builders use a process called alpha and beta testing to uncover errors that only the end user seems able to find. A customer conducts the alpha test at the developer's site. The software is used in a natural setting with the developer "looking over the shoulder" of the user and recording errors and usage problems. Alpha tests are conducted in a controlled environment. The beta test is conducted at one or more customer sites by the end user(s) of the software. Unlike alpha testing, the developer is generally not present.
Slide -66 66 Therefore, the beta test is a "live" application of the software in an environment that cannot be controlled by the developer. The customer records all problems (real or imagined) that are encountered during beta testing and reports these to the developer at regular intervals. As a result of problems reported during the beta test, the software developer makes modifications and then prepares for release of the software product to the entire customer base.
Slide -67 67 4.6 WHITE BOX / BLACK BOX TESTING
White Box Testing White-box testing, sometimes called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using white-box testing methods, the software engineer can derive test cases that (1) guarantee that all independent paths within a module have been exercised at least once; (2) exercise all logical decisions on their true and false sides; (3) execute all loops at their boundaries and within their operational bounds; and (4) exercise internal data structures to assure their validity. A reasonable question might be posed at this juncture: why spend time and energy worrying about (and testing) logical minutiae when we might better expend effort ensuring that program requirements have been met? Stated another way, why don't we spend all of our energies on black-box tests? The answer lies in the nature of software defects. Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed. Errors tend to creep into our work when we design and implement functions, conditions, or control that are out of the mainstream. Everyday processing tends to be well understood (and well scrutinised), while "special case" processing tends to fall into the cracks. We often believe that a logical path is not likely to be executed when, in fact, it may be executed on a regular basis. The logical flow of a program is sometimes counterintuitive, meaning that our unconscious assumptions about the flow of
Slide -68 68 control and data may lead us to make design errors that are uncovered only once path testing commences. Typographical errors are random: when a program is translated into programming language source code, it is likely that some typing errors will occur. Many will be uncovered by syntax-checking mechanisms, but others will go undetected until testing begins. It is just as likely that a typo will exist on an obscure logical path as on a mainstream path. Each of these reasons provides an argument for conducting white-box tests. Black-box testing, no matter how thorough, may miss the kinds of errors noted above. As Beizer has stated: "Bugs lurk in corners and congregate at boundaries." White-box testing is far more likely to uncover them.
Black Box Testing Black-box testing focuses on the functional requirements of the software. That is, black-box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. Black-box testing is not an alternative to white-box techniques. Rather, it is a complementary approach that is likely to uncover a different class of errors than white-box methods. Black-box testing attempts to find errors in the following categories: (1) incorrect or missing functions, (2) interface errors, (3) errors in data structures or external database access, (4) performance errors, and (5) initialization and termination errors.
Slide -69 69 Unlike white-box testing, which is performed early in the testing process, black-box testing tends to be applied during the later stages of testing. Because black-box testing purposely disregards control structure, attention is focused on the information domain. Tests are designed to answer the following questions: How is functional validity tested? What classes of input will make good test cases? Is the system particularly sensitive to certain input values? How are the boundaries of a data class isolated? What data rates and data volume can the system tolerate? What effect will specific combinations of data have on system operation? By applying black-box techniques, we derive a set of test cases that satisfy the following criteria: (1) test cases that reduce, by a count that is greater than one, the number of additional test cases that must be designed to achieve reasonable testing, and (2) test cases that tell us something about the presence or absence of classes of errors, rather than errors associated only with the specific test at hand.
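Equivalence partitioning is one common way to meet these criteria: each class of input contributes one representative case, so a single test tells us something about a whole class of errors. The sketch below is hypothetical Python; the input classes and the routine are invented for illustration.

```python
# Hypothetical black-box sketch: derive test cases from input classes
# (equivalence partitioning) rather than from the code's internal paths.
def classify_basic(basic):
    if not isinstance(basic, int):
        return "invalid-type"
    if basic <= 0:
        return "invalid-range"
    return "valid"

# One representative per equivalence class stands in for the whole class.
cases = {"valid": 8000, "invalid-range": -5, "invalid-type": "abc"}
results = {expected: classify_basic(value) for expected, value in cases.items()}
print(results)
```

Because every other value in a class is expected to behave like its representative, three cases here do the work of thousands, which is precisely the "reduce by a count greater than one" criterion.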
Slide -70 70 4.7 IMPLEMENTATION An important aspect of a systems analyst's job is to make sure that the new design is implemented to established standards. The term implementation has different meanings, ranging from the conversion of a basic application to a complete replacement of a computer system. The procedure, however, is virtually the same. Implementation is used here to mean the process of converting a new or a revised system design into an operational one. Conversion is one aspect of implementation; the other aspects are the post-implementation review and software maintenance, which are covered later in the chapter. There are three types of implementation:
1. Implementation of a computer system to replace a manual system. The problems encountered are converting files, training users, creating accurate files, and verifying printouts for integrity.
2. Implementation of a new computer system to replace an existing one. This is usually a difficult conversion; if not properly planned, there can be many problems. Some large computer systems have taken as long as a year to convert.
3. Implementation of a modified application to replace an existing one, using the same computer. This type of conversion is relatively easy to handle, provided there are no major changes in the files.
Slide -71 71 File Conversion File conversion involves capturing data and creating a computer file from existing files. The problems are staff shortages for loading data and the specialized training necessary to prepare records in accordance with the new system specifications. In most cases, the vendor's staff or an outside agency performs this function for a flat fee. Copying the old files intact for the new system is the prime concern during conversion; the copying programs should produce identical files so that programs can be tested on both systems. At the outset, a decision is made to determine which files need copying. Personnel files must be kept, of course, but an accounts receivable file with many activities might not need copying. Instead, new customer accounts might be put on the new system, while running out the old accounts on the old system. Once it is determined that a particular file should be transferred, the next step is to specify the data to be converted: current files, year-end files, and so on. The files to be copied must be identified by name, along with the programmer who will do the copying and the method by which the accuracy of the copying will be verified; a file-comparison program is best used for this purpose.
Creating Test Files: The best method for gaining control of the conversion is to use well-planned test files for testing all new programs. Before production files are used to test live data, test files must be created on the old system, copied over to the new
Slide -72 72 system, and used for the initial test of each program. The test file should offer the following:
1. Predictable results.
2. Previously determined output results to check against, with a sampling of different types of records.
3. Printed results in seconds.
4. Simplified error-finding routines.
5. The ability to build from a small number of records out of the production files and then progressively alter the records until they become challenging to the programs.
Selecting the right records for testing is not easy. The user's key staff should get involved in the selection process. The trick is to select a reasonable number of uncomplicated records, as well as records that have caused problems in the past. User training: An analysis of user training focuses on two factors: user capabilities and the nature of the system being installed. Users range from the naïve to the highly sophisticated. Developmental research provides interesting insights into how naïve computer users think about their first exposure to a new system. They approach it as concrete learners, learning how to use the system without trying to understand which abstract principles determine its functions. The distinction between concrete and formal (student-type) learning says much about what one can expect from trainees in general.
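The file-comparison check recommended above for verifying the accuracy of copied files can be sketched briefly. The following is an illustrative Python sketch, not the project's FoxPro code; the record layout (an `emp_code` key plus data fields, exported as CSV) is an assumption made for the example:

```python
import csv

def compare_files(old_path, new_path, key="emp_code"):
    """Compare records copied from the old system against the new one.

    Returns a list of (key, field, old_value, new_value) mismatches,
    so the accuracy of the copying can be verified record by record.
    """
    def load(path):
        with open(path, newline="") as f:
            return {row[key]: row for row in csv.DictReader(f)}

    old_recs, new_recs = load(old_path), load(new_path)
    mismatches = []
    for k in sorted(set(old_recs) | set(new_recs)):
        o, n = old_recs.get(k), new_recs.get(k)
        if o is None or n is None:
            # Record exists on one system but not the other.
            mismatches.append((k, "<missing record>", o, n))
            continue
        for field in o:
            if o[field] != n.get(field):
                mismatches.append((k, field, o[field], n.get(field)))
    return mismatches

# Tiny demonstration with two hypothetical exports.
with open("old.csv", "w", newline="") as f:
    f.write("emp_code,name,basic\nE001,Kumar,8000\nE002,Singh,7500\n")
with open("new.csv", "w", newline="") as f:
    f.write("emp_code,name,basic\nE001,Kumar,8000\nE002,Singh,7000\n")
diffs = compare_files("old.csv", "new.csv")
print(diffs)  # the E002 'basic' field differs between the two systems
```

Keying on a single field keeps the check simple; in practice such a comparison would be run after each file is copied, and a non-empty mismatch list would block acceptance of that file's conversion.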
Slide -73 73 Forms and Displays Conversion: During this activity, old forms and displays are withdrawn
and new ones are instituted. Various controls are implemented to ensure the system's reliability, integrity,
and security. The activities implemented here were initiated early in the system design phase. Conversion
of Administrative Procedures: A final important activity in the conversion phase is setting up administrative
procedures for controlling the new system. This includes scheduling, determining job priorities on the
system, and implementing personnel policies for managing the system. The user is trained to handle
various emergencies and procedures. Most important, supervisors are trained on how the information is
gathered, produced, and presented to management.
Slide -74 74 CHAPTER 5: CONCLUSION AND REFERENCES. Contents: Conclusion; Limitations of the system; Future scope for modification; System specification (hardware and software required); References / Bibliography.
Slide -75 75 5.1 CONCLUSION The system the organization had been working with was a manual one. During the course of the project it was transformed from a manual into a computerized system. The new system offers many advantages: records can be searched quickly, printed Pay Slips are produced more efficiently, and the time taken to perform each activity has been shortened to a great extent. The software also has some limitations: if data longer than a field's width is entered, the system responds unpredictably, and the software works for only one user at a time, even when it is connected to the LAN. Overall, both the accountant and the staff are now happy with the system, as both benefit from its working.
Slide -76 76 5.2 LIMITATIONS OF THE SYSTEM The existing system is a partly manual one, in which the accountant uses a register to keep every type of record pertaining to the different heads of the salary. This system performs poorly: it is not error-free and causes delays in the demand and supply of data records. To measure and assess the performance of the system, the best source is the staff to whom the Salary Slips are given. The proposed system, by contrast, is efficient, far less error-prone, and comparatively fast. In the existing manual system no specific security precautions are taken to safeguard against improper activity; the data is exposed because it is maintained in a register, which can be accessed by any person. In the proposed system, by contrast, there is centralized control of the precious data, which is secured from unscrupulous elements. In the existing manual system the Salary Slips given to the staff are not properly made and the quality of printing is very poor, whereas the proposed system provides all the necessary details right in the Salary Slips, printed neatly and in the right format.
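To illustrate the kind of neatly formatted Salary Slip described above, here is a minimal sketch in Python (not the project's FoxPro report code; the salary heads, field names, and amounts are hypothetical examples):

```python
# Minimal sketch of a formatted pay-slip, assuming hypothetical salary heads.
def format_pay_slip(emp):
    earnings = emp["basic"] + emp["da"] + emp["hra"]
    deductions = emp["pf"] + emp["tax"]
    net = earnings - deductions
    lines = [
        "-" * 38,
        f"{'SALARY SLIP':^38}",            # centred heading
        "-" * 38,
        f"Employee : {emp['name']} ({emp['code']})",
        f"Basic    : {emp['basic']:>10.2f}",
        f"DA       : {emp['da']:>10.2f}",
        f"HRA      : {emp['hra']:>10.2f}",
        f"PF       : {emp['pf']:>10.2f}",
        f"Tax      : {emp['tax']:>10.2f}",
        f"Net Pay  : {net:>10.2f}",        # earnings minus deductions
        "-" * 38,
    ]
    return "\n".join(lines)

slip = format_pay_slip({"name": "A. Kumar", "code": "E042",
                        "basic": 8000.0, "da": 3200.0, "hra": 1600.0,
                        "pf": 960.0, "tax": 500.0})
print(slip)
```

Net pay here is simply earnings minus deductions; the actual system computes more salary heads, but the idea of fixed-width, right-aligned amounts is what gives the printed slip its neat format.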
Slide -77 77 The proposed system has a few limitations as well. First, the cost of the system is very high compared to the manual system, and its running and maintenance costs are also high. Second, if the software is installed on a network and one person is using the package, others are not able to access it. Third, the software is liable to manual faults: if the data entered is wrong, the system cannot be expected to make the corrections by itself.
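The network limitation above, where a second user cannot access the running package, is typical of systems that take exclusive control of their data files. A minimal, portable sketch of the idea in Python (the lock-file name is hypothetical, and this illustrates the general mechanism, not the actual FoxPro behaviour):

```python
import os

LOCK_FILE = "payroll.lock"  # hypothetical lock-file name

def acquire_lock():
    """Return True if this user may run the package, False if it is in use."""
    try:
        # O_EXCL makes creation atomic: only the first caller succeeds.
        fd = os.open(LOCK_FILE, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False

def release_lock():
    os.remove(LOCK_FILE)

first = acquire_lock()    # first user gets the package
second = acquire_lock()   # a second user on the LAN is refused
print(first, second)      # → True False
release_lock()
```

Allowing multiple simultaneous users would instead require record-level locking, which is one direction a future modification of the package could take.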
Slide -78 78 5.3 FUTURE SCOPE FOR MODIFICATION Since no system is ever really complete, this one will be maintained as changes are required by internal developments, such as new users or business activities, and by external developments, such as new legal requirements, industry standards, or competition. Because the system is based on a modular approach, with each module prepared independently of the others, any module can be modified to meet new requirements without affecting the other modules in any respect. As is rightly said, a package is complete only if it is flexible enough to accommodate future modifications, and with the passage of time every software package requires them.
Slide -79 79 5.4 SYSTEM SPECIFICATION
5.4.1 HARDWARE REQUIREMENT
PROCESSOR: Intel Pentium 200 MHz
VIDEO ADAPTER: VGA / Colour
RAM: 32 MB
HARD DISK: 4.3 GB
FLOPPY DISK DRIVE: 1.44 MB
CD-ROM DRIVE: 48X
5.4.2 SOFTWARE REQUIREMENT
OPERATING SYSTEM: Microsoft Windows 95
FOXPRO / LAN: Microsoft FoxPro 2.6 for Windows
Slide -80 80 5.5 REFERENCES / BIBLIOGRAPHY
- Elias M. Awad, "System Analysis and Design", Second Edition, Galgotia Publications (P) Ltd., 1998.
- V. Rajaraman, "An Introduction to Digital Computer Design", Fourth Edition, Prentice-Hall of India Private Limited, 1998.
- P.K. Sinha, "Computer Fundamentals", Second Edition, BPB Publications, 1992.
- Howard Dickler, "Programmer's Guide to FoxPro 2.6", Second Edition, BPB Publications, 1995.
- R.K. Taxali, "FoxPro 2.5 Made Simple for DOS and Windows", Second Edition, BPB Publications, 1996.