
SUMMER TRAINING

PROJECT REPORT
Submitted in partial fulfillment of the requirements for the degree of Bachelor of Technology (B.Tech)
By

Prateek Joshi
B.Tech(IT) Enrol No. A2305308112 Roll No. 1103

AMITY UNIVERSITY NOIDA (UTTAR PRADESH)

INDIA

TABLE OF CONTENTS

Acknowledgement
Abstract
About the organization
    Vision
    Mission
    Values
    Objectives
A. Graphical User Interface (GUI)
    History
    Precursors
    PARC user interface
    Evolution
    Components
    Post-WIMP interfaces
    User interface and interaction design
    Comparison to other interfaces
    Screenshots
B. Web browser
    History
    Function
    Features
    User interface
    Privacy and security
    Standards support
    Screenshots
C. Software Life Cycle
D. Software qualification testing
    Overview
    History
    Software testing topics
References

ACKNOWLEDGEMENT
I am very thankful to Mr. Ranjan Banerjee (AGM, D & E Software) for providing me with the opportunity to undergo practical training in the Design & Engineering Software Division of Bharat Electronics Limited under the guidance of Mr. B L Paliwal (Manager, D & E Software). He guided me from time to time and gave me dynamic ideas and suggestions, which enabled me to complete my training successfully.

Name: Prateek Joshi
B.Tech (IT)
Enrollment No. A2305308112
Roll No. 1103
AMITY UNIVERSITY UTTAR PRADESH

ABSTRACT
Capability Maturity Model Integration (CMMI) is a process improvement approach whose goal is to help organizations improve their performance. CMMI can be used to guide process improvement across a project, a division, or an entire organization. In software engineering and organizational development, CMMI is a process improvement approach that provides organizations with the essential elements for effective process improvement.

Graphical User Interface (GUI) is a type of user interface that allows users to interact with electronic devices with images rather than text commands. A GUI represents the information and actions available to a user through graphical icons and visual indicators such as secondary notation, as opposed to text-based interfaces, typed command labels or text navigation. The actions are usually performed through direct manipulation of the graphical elements. Designing a GUI for the CMMI client and server machines means setting up an interactive interface that serves client and server alike. The major difference is that the clients can only see the contents and state their requirements, whereas the server has complete control over the GUI and the database connected to it. We have all seen the interactive, advertisement-filled, catchy interfaces of websites such as Facebook, Yahoo and MSN. The main purpose of this project was to learn how to create such GUIs at the organizational level. The designed GUI caters to the needs of the CMMI model at the D&E-Software Division of Bharat Electronics Ltd. It has various hyperlinks that take you to the requested destination, which may be a PDF file or a web page. In conclusion, a GUI is a gateway with many more gateways inside it.

A web browser is a software application for retrieving, presenting, and traversing information resources on the World Wide Web. An information resource is identified by a Uniform Resource Identifier (URI) and may be a web page, image, video, or other piece of content. Hyperlinks present in resources enable users to easily navigate their browsers to related resources. Although browsers are primarily intended to access the World Wide Web, they can also be used to access information provided by web servers in private networks or files in file systems.
The major web browsers are Windows Internet Explorer, Mozilla Firefox, Google Chrome, Apple Safari, and Opera.

ABOUT THE ORGANIZATION


Bharat Electronics Limited (BEL) was set up at Bangalore, India, by the Government of India under the Ministry of Defence in 1954 to meet the specialized electronic needs of the Indian defence services. Over the years, it has grown into a multi-product, multi-technology, multi-unit company serving the needs of customers in diverse fields in India and abroad. BEL is among an elite group of public sector undertakings which have been conferred the Navratna status by the Government of India. In 2002, BEL became the first defence PSU to get operational Mini Ratna Category I status. In June 2007, BEL was conferred the prestigious Navratna status based on its consistent performance. BEL recorded a turnover of Rs. 4624 crores during 2008-09 and Rs. 5440 crores during 2010-11.

VISION
- To be a world-class enterprise in professional electronics.

MISSION
- To be a customer focussed, globally competitive company in defence electronics and in other chosen areas of professional electronics, through quality, technology and innovation.

VALUES
- Putting customers first.

- Working with transparency, honesty & integrity.
- Trusting and respecting individuals.
- Fostering team work.
- Striving to achieve high employee satisfaction.
- Encouraging flexibility & innovation.
- Endeavouring to fulfill social responsibilities.
- Proud of being a part of the organization.

OBJECTIVES
- To be a customer focussed company providing state-of-the-art products & solutions at competitive prices, meeting the demands of quality, delivery & service.
- To generate internal resources for profitable growth.
- To attain technological leadership in defence electronics through in-house R&D, partnership with defence/research laboratories & academic institutions.
- To give thrust to exports.
- To create a facilitating environment for people to realise their full potential through continuous learning & team work.
- To give value for money to customers & create wealth for shareholders.
- To constantly benchmark company's performance with best-in-class internationally.
- To raise marketing abilities to global standards.
- To strive for self-reliance through indigenisation.

A. GUI
In computing, a graphical user interface (GUI) is a type of user interface that allows users to interact with electronic devices with images rather than text commands. GUIs can be used in computers, hand-held devices such as MP3 players, portable media players or gaming devices, household appliances and office equipment. A GUI represents the information and actions available to a user through graphical icons and visual indicators such as secondary notation, as opposed to text-based interfaces, typed command labels or text navigation. The actions are usually performed through direct manipulation of the graphical elements. The term GUI is historically restricted to the scope of two-dimensional display screens with display resolutions able to describe generic information, in the tradition of the computer science research at PARC (Palo Alto Research Center). The term GUI earlier might have been applicable to other high-resolution types of interfaces that are non-generic, such as video games, or not restricted to flat screens, like volumetric displays.

I. History

An early-1990s-style Unix desktop running the X Window System graphical user interface.

II. Precursors

A precursor to GUIs was invented by researchers at the Stanford Research Institute, led by Douglas Engelbart. They developed the use of text-based hyperlinks manipulated with a mouse for the On-Line System. The concept of hyperlinks was further refined and extended to graphics by researchers at Xerox PARC, who went beyond text-based hyperlinks and used a GUI as the primary interface for the Xerox Alto computer. Most modern general-purpose GUIs are derived from this system. As a result, some people call this class of interface a PARC User Interface (PUI) (note that PUI is also an acronym for perceptual user interface). Ivan Sutherland developed a pointer-based system called Sketchpad in 1963. It used a light pen to guide the creation and manipulation of objects in engineering drawings.

III. PARC user interface


The PARC user interface consisted of graphical elements such as windows, menus, radio buttons, check boxes and icons. The PARC user interface employs a pointing device in addition to a keyboard. These aspects can be emphasized by using the alternative acronym WIMP, which stands for windows, icons, menus and pointing device.

IV. Evolution

The Xerox Star Workstation introduced the first commercial GUI operating system. Following PARC, the first GUI-centric computer operating model was the Xerox 8010 Star Information System in 1981, followed by the Apple Lisa (which presented the concept of menu bar as well as window controls) in 1983, the Apple Macintosh 128K in 1984, and the Atari ST and Commodore Amiga in 1985. The GUIs familiar to most people today are Microsoft Windows, Mac OS X, and X Window System interfaces. Apple, IBM and Microsoft used many of Xerox's ideas to develop products, and IBM's Common User Access specifications formed the basis of the user interface found in Microsoft Windows, IBM OS/2 Presentation Manager, and the Unix Motif toolkit and window manager. These ideas evolved to create the interface found in current versions of Microsoft Windows, as well as in Mac OS X and various desktop environments for Unix-like operating systems, such as Linux. Thus most current GUIs have largely common idioms.

V. Components

A GUI uses a combination of technologies and devices to provide a platform the user can interact with, for the tasks of gathering and producing information. A series of elements conforming to a visual language have evolved to represent information stored in computers. This makes it easier for people with few computer skills to work with and use computer software. The most common combination of such elements in GUIs is the WIMP ("window, icon, menu, pointing device") paradigm, especially in personal computers. The WIMP style of interaction uses a physical input device to control the position of a cursor and presents information organized in windows and represented with icons. Available commands are compiled together in menus, and actions are performed making gestures with the pointing device. A window manager facilitates the interactions between windows, applications, and the windowing system. The windowing system handles hardware devices such as pointing devices and graphics hardware, as well as the positioning of the cursor. In personal computers all these elements are modeled through a desktop metaphor, to produce a simulation called a desktop environment in which the display represents a desktop, upon which documents and folders of documents can be placed. Window managers and other software combine to simulate the desktop environment with varying degrees of realism.

VI. Post-WIMP interfaces


Smaller mobile devices such as PDAs and smartphones typically use the WIMP elements with different unifying metaphors, due to constraints in space and available input devices. Applications for which WIMP is not well suited may use newer interaction techniques, collectively named post-WIMP user interfaces.[6] As of 2011, some touch-screen-based operating systems such as Android and Apple's iOS (iPhone) use the class of GUIs named post-WIMP. These support styles of interaction using more than one finger in contact with a display, which allows actions such as pinching and rotating, which are unsupported by one pointer and mouse. Post-WIMP interfaces include 3D compositing window managers such as Compiz, Desktop Window Manager, and LG3D. Some post-WIMP interfaces may be better suited for applications which model immersive 3D environments, such as Google Earth.[8]

VII. User interface and interaction design


Designing the visual composition and temporal behavior of a GUI is an important part of software application programming. Its goal is to enhance the efficiency and ease of use for the underlying logical design of a stored program, a design discipline known as usability. Methods of user-centered design are used to ensure that the visual language introduced in the design is well tailored to the tasks it must perform. Typically, the user interacts with information by manipulating visual widgets that allow for interactions appropriate to the kind of data they hold. The widgets of a well-designed interface are selected to support the actions necessary to achieve the goals of the user.

A model-view-controller architecture allows for a flexible structure in which the interface is independent from and indirectly linked to application functionality, so the GUI can be easily customized. This allows the user to select or design a different skin at will, and eases the designer's work to change the interface as user needs evolve. Nevertheless, good user interface design relates to the user, not the system architecture. The visible graphical interface features of an application are sometimes referred to as "chrome". Larger widgets, such as windows, usually provide a frame or container for the main presentation content such as a web page, email message or drawing. Smaller ones usually act as a user-input tool.

A GUI may be designed for the rigorous requirements of a vertical market. This is known as an "application-specific graphical user interface". Among early application-specific GUIs was Gene Mosher's 1986 Point of Sale touchscreen GUI. Other examples of application-specific GUIs are:
- Self-service checkouts used in a retail store
- Automated teller machines (ATM)
- Airline self-ticketing and check-in
- Information kiosks in a public space, like a train station or a museum
- Monitors or control screens in an embedded industrial application which employ a real-time operating system (RTOS)

The latest cell phones and handheld game systems also employ application specific touchscreen GUIs. Newer automobiles use GUIs in their navigation systems and touch screen multimedia centers.
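The model-view-controller structure mentioned above can be illustrated with a short sketch. All class and method names here are hypothetical, chosen only to show how the interface stays independent of the application logic, so a different "skin" (view) can be swapped in without touching the model:

```python
class CounterModel:
    """Application state and logic, with no knowledge of any view."""
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1


class TextView:
    """One possible 'skin': renders the model as terse text."""
    def render(self, model):
        return f"count = {model.value}"


class VerboseView:
    """A second skin; swapping it in requires no change to the model."""
    def render(self, model):
        return f"The counter currently holds {model.value}"


class Controller:
    """Mediates user actions: updates the model, asks the view to redraw."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def on_click(self):
        self.model.increment()
        return self.view.render(self.model)


controller = Controller(CounterModel(), TextView())
print(controller.on_click())  # -> count = 1
```

Because the controller only calls `render`, replacing `TextView` with `VerboseView` changes the presentation without touching the model or controller.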

VIII. Comparison to other interfaces


1. Command-line interfaces

GUIs were introduced in reaction to the steep learning curve of command-line interfaces (CLIs), which require commands to be typed on the keyboard. Since the commands available in command-line interfaces can be numerous, complicated operations can be completed using a short sequence of words and symbols. This allows for greater efficiency and productivity once many commands are learned, but reaching this level takes some time because the command words are not easily discoverable and not mnemonic. WIMPs ("window, icon, menu, pointing device"), on the other hand, present the user with numerous widgets that represent and can trigger some of the system's available commands.

WIMPs extensively use modes, as the meaning of all keys and clicks on specific positions on the screen is redefined all the time. Command-line interfaces use modes only in limited forms, such as the current directory and environment variables. Most modern operating systems provide both a GUI and some level of a CLI, although the GUIs usually receive more attention. The GUI is usually WIMP-based, although occasionally other metaphors surface, such as those used in Microsoft Bob, 3dwm or File System Visualizer (FSV). Applications may also provide both interfaces, and when they do the GUI is usually a WIMP wrapper around the command-line version. This is especially common with applications designed for Unix-like operating systems. The latter used to be implemented first because it allowed the developers to focus exclusively on their product's functionality without bothering about interface details such as designing icons and placing buttons. Designing programs this way also allows users to run the program non-interactively, such as in a shell script.
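The layering described above, where a command-line (and later graphical) front end wraps the same underlying functionality, can be sketched as follows. The function and argument names are illustrative, not taken from any particular program:

```python
import argparse


def word_count(text):
    """Core functionality, independent of any interface."""
    return len(text.split())


def cli(argv=None):
    """Thin command-line wrapper: parses arguments, calls the core function.

    A GUI could later wrap word_count() the same way, and shell scripts can
    invoke this wrapper non-interactively.
    """
    parser = argparse.ArgumentParser(description="Count words in TEXT")
    parser.add_argument("text", help="text whose words should be counted")
    args = parser.parse_args(argv)
    return word_count(args.text)


print(cli(["hello brave new world"]))  # prints 4
```

Keeping `word_count` free of interface concerns is what lets both a CLI and a GUI sit on top of it without duplicating logic.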

2. Three-dimensional user interfaces

For typical computer displays, "three-dimensional" is a misnomer: their displays are two-dimensional. Semantically, however, most graphical user interfaces use three dimensions; in addition to height and width, they offer a third dimension of layering or stacking screen elements over one another. This may be represented visually on screen through an illusionary transparent effect, which offers the advantage that information in background windows may still be read, if not interacted with. Or the environment may simply hide the background information, possibly making the distinction apparent by drawing a drop shadow effect over it. Some environments use the methods of 3D graphics to project virtual three-dimensional user interface objects onto the screen. As the processing power of computer graphics hardware increases, this becomes less of an obstacle to a smooth user experience.
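The layering of screen elements described above amounts to maintaining a z-order. A toy model of this (with an entirely hypothetical API) might look like:

```python
class Desktop:
    """Keeps windows in back-to-front (z-order) sequence."""

    def __init__(self):
        self.windows = []           # index 0 is the rearmost window

    def open(self, name):
        self.windows.append(name)   # new windows open on top

    def raise_window(self, name):
        """Clicking a window moves it to the front of the stack."""
        self.windows.remove(name)
        self.windows.append(name)

    def topmost(self):
        return self.windows[-1]


d = Desktop()
d.open("editor")
d.open("browser")          # browser now covers the editor
d.raise_window("editor")   # user clicks the editor
print(d.topmost())         # -> editor
```

A real windowing system adds geometry, clipping and repainting, but the third "dimension" is essentially this ordering.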

3. Motivation

Three-dimensional GUIs are quite common in science fiction literature and movies, such as in Jurassic Park, which features Silicon Graphics' three-dimensional file manager, the "File System Navigator", an actual file manager that never got much widespread use as the user interface for a Unix computer. In fiction, three-dimensional user interfaces are often immersive environments like William Gibson's Cyberspace or Neal Stephenson's Metaverse. Three-dimensional graphics are currently mostly used in computer games, art and computer-aided design (CAD). There have been several attempts at making three-dimensional desktop environments, like Sun's Project Looking Glass or SphereXP from Sphere Inc. A three-dimensional computing environment could possibly be used for collaborative work. For example, scientists could study three-dimensional models of molecules in a virtual reality environment, or engineers could work on assembling a three-dimensional model of an airplane. This is a goal of the Croquet project and Project Looking Glass.

4. Technologies

The use of three-dimensional graphics has become increasingly common in mainstream operating systems, from creating attractive interfaces (eye candy) to functional purposes only possible using three dimensions. For example, user switching is represented by rotating a cube whose faces are each user's workspace, and window management is represented via a Rolodex-style flipping mechanism in Windows Vista (see Windows Flip 3D). In both cases, the operating system transforms windows on the fly while continuing to update the content of those windows. Interfaces for the X Window System have also implemented advanced three-dimensional user interfaces through compositing window managers such as Beryl, Compiz and KWin using the AIGLX or XGL architectures, allowing for the usage of OpenGL to animate the user's interactions with the desktop. Another branch in the three-dimensional desktop environment is the three-dimensional GUIs that take the desktop metaphor a step further, like BumpTop, where a user can manipulate documents and windows as if they were "real world" documents, with realistic movement and physics. The Zooming User Interface (ZUI) is a related technology that promises to deliver the representation benefits of 3D environments without their usability drawbacks of orientation problems and hidden objects. It is a logical advancement on the GUI, blending some three-dimensional movement with two-dimensional or "2.5D" vector objects.

5. Screen Shots

a. Form View

b. Code View

c. Running View

B. WEB BROWSER
A web browser is a software application for retrieving, presenting, and traversing information resources on the World Wide Web. An information resource is identified by a Uniform Resource Identifier (URI) and may be a web page, image, video, or other piece of content. Hyperlinks present in resources enable users to easily navigate their browsers to related resources. Although browsers are primarily intended to access the World Wide Web, they can also be used to access information provided by web servers in private networks or files in file systems. The major web browsers are Windows Internet Explorer, Mozilla Firefox, Google Chrome, Apple Safari, and Opera.

I. History

World Wide Web for NeXT, released in 1991, was the first web browser. The history of the web browser dates back to the late 1980s, when a variety of technologies laid the foundation for the first web browser, WorldWideWeb, by Tim Berners-Lee in 1991. That browser brought together a variety of existing and new software and hardware technologies. The introduction of the NCSA Mosaic web browser in 1993, one of the first graphical web browsers, led to an explosion in web use. Marc Andreessen, the leader of the Mosaic team at NCSA, soon started his own company, named Netscape, and released the Mosaic-influenced Netscape Navigator in 1994, which quickly became the world's most popular browser, accounting for 90% of all web use at its peak (see usage share of web browsers). Microsoft responded with its browser Internet Explorer in 1995 (also heavily influenced by Mosaic), initiating the industry's first browser war. By bundling Internet Explorer with Windows, Microsoft was able to leverage its dominance in the operating system market to take over the web browser market; Internet Explorer usage share peaked at over 95% by 2002. Opera debuted in 1996. It has never achieved widespread use, having less than 1% browser usage share as of February 2009 according to Net Applications, growing to 2.14% in April 2011; its Opera Mini version has an additive share, in April 2011 amounting to 1.11% of overall browser use, but is focused on the fast-growing mobile phone web browser market, being preinstalled on over 40 million phones. It is also available on several other embedded systems, including Nintendo's Wii video game console. In 1998, Netscape launched what was to become the Mozilla Foundation in an attempt to produce a competitive browser using the open source software model. That browser would eventually evolve into Firefox, which developed a respectable following while still in the beta stage of development; shortly after the release of Firefox 1.0 in late 2004, Firefox (all versions) accounted for 7.4% of browser use. As of April 2011, Firefox has a 21.63% usage share. Apple's Safari had its first beta release in January 2003; as of April 2011, it has a dominant share of Apple-based web browsing, accounting for just over 7.15% of the entire browser market. The most recent major entrant to the browser market is Google's Chrome, first released in September 2008. As of April 2011, it has an 11.94% usage share.

II. Function

The primary purpose of a web browser is to bring information resources to the user. This process begins when the user inputs a Uniform Resource Identifier (URI), for example http://en.wikipedia.org/, into the browser. The prefix of the URI determines how the URI will be interpreted. The most commonly used kind of URI starts with http: and identifies a resource to be retrieved over the Hypertext Transfer Protocol (HTTP). Many browsers also support a variety of other prefixes, such as https: for HTTPS, ftp: for the File Transfer Protocol, and file: for local files. Prefixes that the web browser cannot directly handle are often handed off to another application entirely. For example, mailto: URIs are usually passed to the user's default e-mail application, and news: URIs are passed to the user's default newsgroup reader.

In the case of http, https, file, and others, once the resource has been retrieved the web browser will display it. HTML is passed to the browser's layout engine to be transformed from markup to an interactive document. Aside from HTML, web browsers can generally display any kind of content that can be part of a web page. Most browsers can display images, audio, video, and XML files, and often have plugins to support Flash applications and Java applets. Upon encountering a file of an unsupported type or a file that is set up to be downloaded rather than displayed, the browser prompts the user to save the file to disk. Information resources may contain hyperlinks to other information resources. Each link contains the URI of a resource to go to. When a link is clicked, the browser navigates to the resource indicated by the link's target URI, and the process of bringing content to the user begins again.
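The prefix (scheme) dispatch described above can be sketched in a few lines. `urllib.parse` is Python's standard URI parser; the handler table and its entries are illustrative placeholders, not a real browser's implementation:

```python
from urllib.parse import urlparse

# Illustrative actions a browser might take for schemes it handles directly.
HANDLERS = {
    "http":  "fetch over HTTP",
    "https": "fetch over HTTPS",
    "ftp":   "fetch via FTP",
    "file":  "read from local file system",
}


def dispatch(uri):
    """Pick an action from the URI's scheme; unknown schemes are handed off
    to an external application (e.g. mailto: to the mail client)."""
    scheme = urlparse(uri).scheme
    return HANDLERS.get(scheme, "hand off to external application")


print(dispatch("http://en.wikipedia.org/"))  # -> fetch over HTTP
print(dispatch("mailto:user@example.com"))   # -> hand off to external application
```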

III. Features
Available web browsers range in features from minimal, text-based user interfaces with barebones support for HTML to rich user interfaces supporting a wide variety of file formats and protocols. Browsers which include additional components to support e-mail, Usenet news, and Internet Relay Chat (IRC), are sometimes referred to as "Internet suites" rather than merely "web browsers". All major web browsers allow the user to open multiple information resources at the same time, either in different browser windows or in different tabs of the same window. Major browsers also include pop-up blockers to prevent unwanted windows from "popping up" without the user's consent. Most web browsers can display a list of web pages that the user has bookmarked so that the user can quickly return to them. Bookmarks are also called "Favorites" in Internet Explorer. In addition, all major web browsers have some form of built-in web feed aggregator. In Mozilla Firefox, web feeds are formatted as "live bookmarks" and behave like a folder of bookmarks corresponding to recent entries in the feed.[12] In Opera, a more traditional feed reader is included which stores and displays the contents of the feed. Furthermore, most browsers can be extended via plug-ins, downloadable components that provide additional features.

IV. User interface


Most major web browsers have these user interface elements in common:

- Back and forward buttons to go back to the previous resource and forward again.
- A refresh or reload button to reload the current resource.
- A stop button to cancel loading the resource. In some browsers, the stop button is merged with the reload button.
- A home button to return to the user's home page.
- An address bar to input the Uniform Resource Identifier (URI) of the desired resource.
- A search bar to input terms into a search engine.
- A status bar to display progress in loading the resource, and the URI of links when the cursor hovers over them.
- Page zooming capability.

V. Privacy and security

Most browsers support HTTP Secure and offer quick and easy ways to delete the web cache, cookies, and browsing history. For a comparison of the current security vulnerabilities of browsers, see comparison of web browsers.

VI. Standards support


Early web browsers supported only a very simple version of HTML. The rapid development of proprietary web browsers led to the development of non-standard dialects of HTML, leading to problems with interoperability. Modern web browsers support a combination of standards-based and de facto HTML and XHTML, which should be rendered in the same way by all browsers.

VII. Screen Shots

a. Design View

b. Code View

c. Running View

C. SOFTWARE LIFE CYCLE (IEEE 12207)


IEEE/EIA 12207.0, "Standard for Information Technology - Software Life Cycle Processes", is a standard that establishes a common framework for software life cycle processes. This standard officially replaced MIL-STD-498 for the development of DoD software systems in May 1998. Other NATO nations may have adopted the standard informally or in parallel with MIL-STD-498. This standard defines a comprehensive set of processes that cover the entire life cycle of a software system, from the time a concept is made to the retirement of the software. The standard defines a set of processes, which are in turn defined in terms of activities. The activities are broken down into a set of tasks.

The processes are defined in three broad categories: Primary Life Cycle Processes, Supporting Life Cycle Processes, and Organisational Life Cycle Processes.
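The hierarchy of process categories, processes, and tasks can be modelled as nested data. The entries below are illustrative examples only, not a complete transcription of the standard:

```python
# A small, illustrative slice of the 12207-style hierarchy:
# category -> process -> list of tasks (entries are examples, not exhaustive).
life_cycle = {
    "Primary": {
        "Development": ["requirements analysis", "design", "coding", "testing"],
        "Maintenance": ["problem analysis", "modification"],
    },
    "Supporting": {
        "Quality assurance": ["process assurance", "product assurance"],
    },
    "Organisational": {
        "Training": ["plan training", "deliver training"],
    },
}


def tasks_of(category, process):
    """Look up the tasks of a process within a category."""
    return life_cycle[category][process]


print(tasks_of("Primary", "Development"))
```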

2. Process categories
a. Primary life cycle processes

- Acquisition process
- Supply process
- Development process
- Operation process
- Maintenance process

b. Supporting life cycle processes

- Audit process
- Configuration management process
- Joint review process
- Documentation process
- Quality assurance process
- Problem resolution process
- Verification process
- Validation process

c. Organizational processes

- Management process
- Infrastructure process
- Improvement process
- Training process

D. Software qualification testing


Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. Software testing also provides an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include, but are not limited to, the process of executing a program or application with the intent of finding software bugs (errors or other defects). Software testing can also be stated as the process of validating and verifying that a software program/application/product:

1. meets the business and technical requirements that guided its design and development;
2. works as expected; and
3. can be implemented with the same characteristics.

Software testing, depending on the testing method employed, can be implemented at any time in the development process. However, most of the test effort occurs after the requirements have been defined and the coding process has been completed. As such, the methodology of the test is governed by the software development methodology adopted. Different software development models will focus the test effort at different points in the development process. Newer development models, such as Agile, often employ test driven development and place an increased portion of the testing in the hands of the developer, before it reaches a formal team of testers. In a more traditional model, most of the test execution occurs after the requirements have been defined and the coding process has been completed.
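As a minimal illustration of executing a program with the intent of finding defects, consider a small unit test. The function under test and its requirement (a 10% discount on orders of 100 or more) are invented for the example:

```python
def discount(total):
    """Function under test: 10% discount for orders of 100 or more, else 0."""
    return round(total * 0.10, 2) if total >= 100 else 0.0


def test_discount():
    """Checks the behaviour against the stated requirement,
    including the boundary value at the threshold."""
    assert discount(50) == 0.0      # below threshold: no discount
    assert discount(100) == 10.0    # boundary value from the requirement
    assert discount(250) == 25.0    # typical value above the threshold
    return "all tests passed"


print(test_discount())
```

In practice such assertions would live in a test framework (e.g. a unit-testing library), which runs them automatically and reports failures; the principle of comparing actual against required behaviour is the same.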

1. Overview
Testing can never completely identify all the defects within software. Instead, it furnishes a criticism or comparison that compares the state and behavior of the product against oracles: principles or mechanisms by which someone might recognize a problem. These oracles may include (but are not limited to) specifications, contracts, comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, applicable laws, or other criteria.

Every software product has a target audience. For example, the audience for video game software is completely different from banking software. Therefore, when an organization develops or otherwise invests in a software product, it can assess whether the software product will be acceptable to its end users, its target audience, its purchasers, and other stakeholders. Software testing is the process of attempting to make this assessment. A study conducted by NIST in 2002 reports that software bugs cost the U.S. economy $59.5 billion annually. More than a third of this cost could be avoided if better software testing were performed.
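One of the oracles listed above, a comparable product, can take the form of a slow but obviously correct reference implementation against which the product under test is compared. A sketch, with hypothetical function names:

```python
def sum_fast(n):
    """Implementation under test: closed-form sum 1 + 2 + ... + n."""
    return n * (n + 1) // 2


def sum_oracle(n):
    """Reference implementation used as the test oracle: slow but
    straightforwardly correct."""
    return sum(range(1, n + 1))


def check_against_oracle(inputs):
    """Compare the product's behaviour with the oracle's; a non-empty
    result is how the tester 'recognizes a problem'."""
    return [n for n in inputs if sum_fast(n) != sum_oracle(n)]


print(check_against_oracle(range(0, 50)))  # -> [] (no mismatches found)
```

Note that agreement on the sampled inputs does not prove correctness in general; it only fails to find a defect, which is exactly the limitation the paragraph above describes.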
2. History

The separation of debugging from testing was initially introduced by Glenford J. Myers in 1979. Although his attention was on breakage testing ("a successful test is one that finds a bug"), it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification. Dave Gelperin and William C. Hetzel classified in 1988 the phases and goals in software testing in the following stages:
- Until 1956: Debugging oriented
- 1957-1978: Demonstration oriented
- 1979-1982: Destruction oriented
- 1983-1987: Evaluation oriented
- 1988-2000: Prevention oriented

3. Software testing topics


a. Scope

A primary purpose of testing is to detect software failures so that defects may be discovered and corrected. This is a non-trivial pursuit. Testing cannot establish that a product functions properly under all conditions; it can only establish that the product fails under specific conditions. The scope of software testing often includes both examination of code and execution of that code in various environments and conditions, asking two questions of the code: does it do what it is supposed to do, and does it do what it needs to do? In the current culture of software development, a testing organization may be separate from the development team. There are various roles for testing team members. Information derived from software testing may be used to correct the process by which software is developed.
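A small sketch of this limitation, using a hypothetical `apply_discount` function invented for illustration: the tests that are run can pass while a defect remains, because testing only exercises the specific conditions it tries.

```python
def apply_discount(price_cents, percent):
    # Hypothetical function: reduce a price (in cents) by a percentage.
    # Defect: integer division silently truncates fractional cents.
    return price_cents * (100 - percent) // 100

# Tests under specific conditions pass, which proves nothing about
# all conditions:
assert apply_discount(10000, 10) == 9000

# Only when a test happens to hit a triggering condition does the
# defect surface (2.7 cents is truncated to 2, not rounded):
assert apply_discount(3, 10) == 2
```

The second assertion documents the defective behavior rather than the intended one; a test suite that never tried small prices would report the function as working.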

b. Functional vs non-functional testing

Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer questions such as "can the user do this?" or "does this particular feature work?". Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance characteristics, behavior under certain constraints, or security. Non-functional requirements tend to be those that reflect the quality of the product, particularly from the suitability perspective of its users.
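The distinction can be sketched with a hypothetical `sort_records` function (the function, the record shape, and the one-second performance budget are all assumptions made for illustration):

```python
import time

def sort_records(records):
    # Hypothetical function under test: sort records by their "id" key.
    return sorted(records, key=lambda r: r["id"])

# Functional test: verifies a specific behavior from the requirements
# ("records are returned in ascending id order").
out = sort_records([{"id": 3}, {"id": 1}, {"id": 2}])
assert [r["id"] for r in out] == [1, 2, 3]

# Non-functional test: verifies a quality attribute (performance)
# rather than a feature; the 1-second budget is an assumed requirement.
data = [{"id": i} for i in range(100_000, 0, -1)]
start = time.perf_counter()
sort_records(data)
assert time.perf_counter() - start < 1.0
```

The first assertion would appear in a functional suite; the second belongs to a performance suite, since it can fail even when the feature itself works correctly.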

c. Defects and failures

Not all software defects are caused by coding errors. One common source of expensive defects is requirement gaps, e.g., unrecognized requirements that result in errors of omission by the program designer. A common source of requirement gaps is non-functional requirements such as testability, scalability, maintainability, usability, performance, and security. Software faults occur through the following process: a programmer makes an error (mistake), which results in a defect (fault, bug) in the software source code. If this defect is executed, in certain situations the system will produce wrong results, causing a failure. Not all defects will necessarily result in failures. For example, defects in dead code will never result in failures. A defect can turn into a failure when the environment is changed. Examples of these changes in environment include the software being run on a new hardware platform, alterations in source data, or interaction with different software. A single defect may result in a wide range of failure symptoms.
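The error-defect-failure chain can be made concrete with a deliberately buggy `average` function (a hypothetical example, not taken from any particular codebase):

```python
def average(values):
    # Defect (fault): the programmer's error was forgetting to handle
    # an empty input, leaving a division-by-zero bug in the source.
    return sum(values) / len(values)

# The defect exists in the source code at all times, but produces a
# failure only when executed in a triggering situation:
assert average([2, 4, 6]) == 4.0   # defect present, no failure observed

try:
    average([])                    # defect executed -> failure
    failed = False
except ZeroDivisionError:
    failed = True
assert failed
```

This mirrors the text: the fault is latent in the code, and the failure is only observed for the specific input (an empty list) that exercises it.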

d. Finding faults early

It is commonly believed that the earlier a defect is found, the cheaper it is to fix.[16] Studies of the cost of fixing a defect at different stages[17] suggest, for example, that a problem in the requirements found only post-release can cost 10 to 100 times more to fix than one caught during the requirements review.

e. Compatibility

A common cause of software failure (real or perceived) is a lack of compatibility with other application software, operating systems (or operating system versions, old or new), or target environments that differ greatly from the original (such as a terminal or GUI application intended to run on the desktop now being required to become a web application, which must render in a web browser). For example, in the case of a lack of backward compatibility, this can occur because the programmers develop and test software only on the latest version of the target environment, which not all users may be running. This results in the unintended consequence that the latest work may not function on earlier versions of the target environment, or on older hardware that earlier versions of the target environment were capable of using. Sometimes such issues can be fixed by proactively abstracting operating system functionality into a separate program module or library.

f. Input combinations and preconditions

A fundamental problem with software testing is that testing under all combinations of inputs and preconditions (initial state) is not feasible, even with a simple product. This means that the number of defects in a software product can be very large, and defects that occur infrequently are difficult to find in testing. More significantly, non-functional dimensions of quality (how the software is supposed to be, as opposed to what it is supposed to do), such as usability, scalability, performance, compatibility, and reliability, can be highly subjective; something that constitutes sufficient value to one person may be intolerable to another.

g. Static vs. dynamic testing

There are many approaches to software testing. Reviews, walkthroughs, or inspections are considered static testing, whereas actually executing programmed code with a given set of test cases is referred to as dynamic testing. Static testing can be (and unfortunately in practice often is) omitted. Dynamic testing takes place when the program itself is used for the first time (which is generally considered the beginning of the testing stage). Dynamic testing may begin before the program is 100% complete in order to test particular sections of code (modules or discrete functions). Typical techniques for this are either using stubs/drivers or execution from a debugger environment. For example, spreadsheet programs are, by their very nature, tested to a large extent interactively ("on the fly"), with results displayed immediately after each calculation or text manipulation.
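A minimal sketch of the stub/driver technique mentioned above, with all names (`charge_stub`, `place_order`) invented for illustration: the real payment gateway is assumed to be unfinished, so a stub with the same interface lets the order module be executed and dynamically tested anyway.

```python
def charge_stub(amount_cents):
    # Stub: a canned response standing in for the unfinished gateway,
    # so the module that depends on it can be executed now.
    return {"status": "approved", "charged": amount_cents}

def place_order(items, charge=charge_stub):
    # Module under test: totals the items and attempts a charge.
    total = sum(price for _, price in items)
    receipt = charge(total)
    return receipt["status"] == "approved"

# Driver: exercises the module with the stub wired in.
assert place_order([("book", 1500), ("pen", 200)]) is True
```

When the real gateway is ready, the stub is replaced by the genuine `charge` implementation and the same driver tests can be rerun.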

h. Software verification and validation

Software testing is used in association with verification and validation:

Verification: Have we built the software right? (i.e., does it match the specification?)
Validation: Have we built the right software? (i.e., is this what the customer wants?)

The terms verification and validation are commonly used interchangeably in the industry; it is also common to see these two terms incorrectly defined. According to the IEEE Standard Glossary of Software Engineering Terminology: Verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. Validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.

i. The software testing team

Software testing can be done by software testers. Until the 1980s the term "software tester" was used generally, but later it was also seen as a separate profession. Regarding the periods and the different goals in software testing, different roles have been established: manager, test lead, test designer, tester, automation developer, and test administrator.

j. Software quality assurance (SQA)

Though controversial, software testing may be viewed as part of the software quality assurance (SQA) process. In SQA, software process specialists and auditors are concerned with the software development process rather than just the artifacts such as documentation, code, and systems. They examine and change the software engineering process itself to reduce the number of faults that end up in the delivered software: the so-called defect rate. What constitutes an "acceptable defect rate" depends on the nature of the software; a flight simulator video game would have a much higher defect tolerance than software for an actual airplane. Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies. Software testing is an activity intended to detect defects in software by contrasting a computer program's expected results with its actual results for a given set of inputs. By contrast, QA (quality assurance) is the implementation of policies and procedures intended to prevent defects from occurring in the first place.

REFERENCES

http://www.bel-india.com/
http://www.csharp-station.com/Tutorial.aspx
http://en.wikipedia.org/wiki/Software_development_process
http://csharp.net-tutorials.com/
http://www.shellmethod.com/refs/SDLC.pdf
