
Why use Testing Tools?

During the development of an application, many builds are created as features are added
and defects are fixed. Because of this rapid development cycle, the same tests must be
run repeatedly. The biggest benefit of automated testing tools is the time they save.

Manual Testing
• Time Consuming
• Low Reliability
• Human Resources
• Inconsistent

Automated Testing
• Speed
• Repeatability
• Programming Capabilities
• Coverage
• Reliability
• Reusability

Which Test Cases to Automate?


• Tests that need to be run for every build of the application (Sanity Level)
• Tests that use multiple data values for the same actions (data-driven tests; see the sketch below)
• Tests that require detailed information from application internals (e.g., SQL, GUI
attributes)
• Stress/Load Testing

The more repetitive the execution, the better the candidate for automation.
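To illustrate the data-driven case, here is a minimal sketch using pytest's parametrize
feature. It assumes pytest is installed; parse_amount() is a hypothetical function
invented only for the example.

# A minimal data-driven test sketch; assumes pytest is installed.
# parse_amount() is a hypothetical function used only for illustration.
import pytest

def parse_amount(text: str) -> float:
    """Convert a currency string like '$1,250.50' to a float."""
    return float(text.replace("$", "").replace(",", ""))

@pytest.mark.parametrize("raw, expected", [
    ("$0.99", 0.99),
    ("$1,250.50", 1250.50),
    ("100", 100.0),
])
def test_parse_amount(raw, expected):
    # The same action runs once per data row, which is what makes
    # data-driven cases strong candidates for automation.
    assert parse_amount(raw) == expected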

Which Test Cases Not to Automate?


• Usability testing (how easy is the application to use?)
• One-time testing
• ASAP testing ("we need to test now!")
• Ad hoc/random testing (based on intuition and knowledge of the application)
• Tests without predictable results

Manual to Automated Testing


Manual: perform user actions → wait for processes to complete → check that the AUT
functions as expected → repeat the steps until all applications are verified.

Automated: generate an automated script → synchronize script playback to application
performance → add verification statements → run the test or suite of tests.
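To make the automated flow concrete, here is a minimal sketch of such a script in
Python. It assumes the third-party Selenium WebDriver package and a Chrome driver are
available; the URL and element ID are placeholders.

# A minimal sketch of an automated UI test: perform actions, synchronize
# playback with the application, then verify. Assumes the third-party
# `selenium` package; the URL and element ID are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")  # perform the user actions
    # Synchronize: wait for the application rather than sleeping blindly.
    banner = WebDriverWait(driver, timeout=10).until(
        EC.visibility_of_element_located((By.ID, "welcome-banner"))
    )
    # Verification statement: check that the AUT responded as expected.
    assert "Welcome" in banner.text
finally:
    driver.quit()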
Testing process
• Go through the application requirements documents such as BRD and SRS
• Identify the application’s risk areas, set priorities, and determine the scope and
limitations of the tests
• Determine test approaches and methods, such as unit, integration, functional,
system, load, performance and acceptance tests
• Determine test environment requirements (hardware, software, communications, etc.)
• Determine testing tools requirements
• Determine test input data requirements
• Prepare a test plan overview and then a detailed test plan; document both and
obtain the needed reviews and approvals
• Collect Test data and write Test cases
• Have needed reviews/inspections/approvals of Test cases
• Prepare Test environment and testware, obtain needed user
manuals/references/documents/configuration guides/installation guides
• Set up Test tracking processes/logging and archiving processes
• Obtain and install software releases
• Perform tests
• Evaluate and report results, and review defects with developers
• Attend status review meetings
• Track problems/bugs and fixes
• Retest as required
• Maintain and update Test plans, Test cases, Test environment and Testware
throughout the Software Development Life Cycle

Software Quality Assurance


Software QA involves monitoring and improving the entire software development process,
making sure that agreed-upon standards and procedures are followed, and ensuring that
problems are found and dealt with in time.
It is oriented toward prevention.

Software Testing
Testing involves operation of a system or application under controlled conditions (normal
and abnormal) and evaluating the results.

Why does software have bugs?


• Miscommunication or no communication
• Software complexity
• Programming errors
• Changing requirements
• Time pressures
• Poorly documented code and requirements
• Egos
• Software development tools
What is a Test plan?
A test plan is a document that describes the objectives, scope, approach and focus of a
software testing effort. It helps in validating the acceptability of a software product.
It consists of:

• Title
• Revision history of document including authors, dates and approvals
• Table of contents
• Purpose of document
• Objective of testing
• Software product overview
• Related documents such as BRD, SRS
• Standards and metrics such as SEI CMM, ISO and IEEE
• Naming conventions for variables, objects and windows
• Overall s/w project organization and personnel/contact-info/responsibilities
• Test organization and personnel/contact-info/responsibilities
• Testing priorities and focus
• Types of tests (unit, system, integration) and the scope and limitations of testing
• Test approach such as type of test, feature, functionality, process, system,
module
• Test environment – h/w, o/s, s/w, database and interfaces to other systems
• Tools for bug reporting, screen capture and file comparison
• Test tools to be used including versions and patches
• Script maintenance process and version control
• Reporting requirements and testing deliverables
• S/W entrance and exit criteria
• Appendix

Test case
A test case is a document that describes an input or action and its expected response,
in order to determine whether a feature of an application is working correctly. A test
case typically includes:
• Test case identifier/name/date/status (new or released for retest)
• Objective
• Application name and version
• Function, module, screen, object or feature under test (or where the bug occurred)
• Test conditions/set up
• Input data requirements
• Severity level
• Expected result and Actual result
• Tester details
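The fields above map naturally onto a structured record. Below is a minimal sketch
using a Python dataclass; every value shown is a hypothetical placeholder.

# A minimal sketch of a test case as a structured record; all values
# shown are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class TestCase:
    identifier: str            # test case identifier/name
    objective: str
    application: str           # application name and version
    module: str                # function, module, screen, object or feature
    setup: str                 # test conditions/set up
    input_data: dict           # input data requirements
    severity: str              # e.g. critical, major, minor
    expected_result: str
    actual_result: str = ""    # filled in during execution
    status: str = "new"        # new, or released for retest
    tester: str = ""

tc = TestCase(
    identifier="TC-LOGIN-001",
    objective="Verify login with valid credentials",
    application="OrderApp 2.3.1",
    module="Login screen",
    setup="User account exists and is active",
    input_data={"username": "demo", "password": "secret"},
    severity="major",
    expected_result="User lands on the dashboard",
)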

What should be done after a bug is found?


The bug needs to be communicated and assigned to developers who can fix it.
After the problem is resolved, fixes should be re-tested, and a determination made as
to whether regression testing is needed to check that the fixes didn't create problems
elsewhere.
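As a minimal sketch of re-testing a fix, the pytest test below pins down a hypothetical
fixed bug so it can be rerun in every future regression pass; the bug ID, the marker
and the discount_price() function are all invented for illustration.

# A minimal regression-test sketch; assumes pytest is installed.
# BUG-1042 and discount_price() are hypothetical.
import pytest

def discount_price(price: float, percent: float) -> float:
    # Hypothetical function; BUG-1042 was that it accepted a negative percent.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

@pytest.mark.regression  # custom marker (register it in pytest.ini)
def test_bug_1042_rejects_negative_discount():
    # Re-test the fix: the bad input must now be rejected, and the
    # normal path must still work (no new problems elsewhere).
    with pytest.raises(ValueError):
        discount_price(100.0, -5)
    assert discount_price(100.0, 10) == 90.0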
Testing Client/Server applications
Client/server applications are quite complex due to the multiple dependencies among
clients, data communications, hardware and servers. The focus should be on integration
and system testing. Additionally, load/stress/performance testing should be carried out
to determine the system's limitations and capabilities.

Testing WWW
Web applications are basically client/server applications, with a web server and client
browsers. Consideration should be given to the interactions between HTML pages, TCP/IP
communications, Internet connections, firewalls, applications that run in web pages
(such as applets, JavaScript and plug-in applications), and applications that run on the
server side (such as CGI scripts, database interfaces, logging applications, DHTML, ASP,
etc.).
Other considerations include:

• Variety of servers and browsers, with varying versions
• Variation in connection speeds
• Multiple standards and protocols
• What are the expected loads on the server, and what tools are needed for performance testing?
• Who is the target audience, what browsers will they use, and what connection
speeds will they have?
• Will downtime be allowed for server and content maintenance/upgrades?
• What kind of security (firewalls, passwords, encryption) will be required?
• Standards for page appearance and graphics, for the entire site or parts of it
• Validating internal and external links (a small automated sketch follows this list)
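One way to automate link validation is sketched below. It assumes the third-party
requests and beautifulsoup4 packages are installed; the start URL is a placeholder.

# A minimal link-check sketch; assumes the third-party `requests` and
# `beautifulsoup4` packages are installed. The start URL is a placeholder.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def check_links(page_url: str) -> None:
    html = requests.get(page_url, timeout=10).text
    for anchor in BeautifulSoup(html, "html.parser").find_all("a", href=True):
        link = urljoin(page_url, anchor["href"])  # resolve relative links
        if not link.startswith("http"):
            continue  # skip mailto:, javascript:, etc.
        try:
            # HEAD keeps traffic light; some servers may not support it.
            status = requests.head(link, allow_redirects=True, timeout=10).status_code
            if status >= 400:
                print(f"BROKEN ({status}): {link}")
        except requests.RequestException as exc:
            print(f"ERROR: {link} ({exc})")

check_links("https://example.com/")  # hypothetical starting page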

Guidelines for WWW Testing


• Pages should be 3-5 screens long at most, unless content is tightly focused on a
single topic; if larger, provide internal links within the page
• Page layouts and design elements should be consistent throughout a site
• Pages should be as browser independent as possible, or pages should be
provided or generated based on the browser type
• All pages should have links leading out of the page; there should be no dead-end pages
• Page owner, revision date and a link to a contact person or organization should
be included in each page

How is testing affected by object-oriented designs?


Object-oriented design can make it easier to trace from code to internal design, to
functional design, to requirements. It has little effect on black-box testing, while
white-box testing can be oriented toward the application's objects.

What if the software is so buggy it can hardly be tested?


Testers should keep reporting whatever bugs or blocking problems initially show up,
focusing on the critical ones. This kind of situation can severely affect schedules and
may indicate deeper problems in the software development process (such as insufficient
unit or integration testing, poor design, or improper build or release procedures).
Management should be notified and provided with documentation as evidence of the
problem.
How to know when to stop testing?
This is very difficult to determine: today's software applications are so complex, and
run in such interdependent environments, that complete testing can never be done.
Common factors in deciding when to stop are:
• Deadlines (release deadlines, testing deadlines, etc.)
• Test cases completed, with a certain percentage passed
• Coverage of code/functionality/requirements reaches a specific point
• Bug rate falls below a certain level
• Test budget depleted

What if there isn’t enough time for thorough testing?


We have to use risk analysis to determine where testing should be focused because it is
not possible to test every aspect of an application. This requires judgment skills,
common sense and experience.
Considerations can include:
• Which functionality is most important to the project’s intended purpose?
• Which functionality is most visible to users, and which has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which parts of the code are most complex, and thus most subject to errors?
• Which parts of the application were developed in a rush or panic mode?
• What kinds of tests could easily cover multiple functionalities?

What can be done if requirements are changing continuously?


• Work with the project’s stakeholders to understand how requirements might
change so that alternate test plans and strategies can be worked out in advance
• Ensure that code is well commented and well documented as it makes changes
easier for the developers
• Try to move new requirements to a phase 2 version of the application while using
the original requirements for the phase 1 version
• Help customers and management understand the scheduling impact and cost of
requirements changes
• Focus automated testing on application aspects that are most likely to remain
unchanged
• Focus less on detailed test plans and test cases and more on ad hoc testing

What is configuration management?


Configuration management covers the processes used to control, coordinate and track:
code, requirements, documentation, problems, change requests, designs,
tools/compilers/libraries/patches, the changes made to them, and who makes the changes.

What is Extreme Programming and what’s it got to do with testing?


• XP (created by Kent Beck) is a software development approach for small teams
on risk-prone projects with unstable requirements. Programmers are expected to
write unit and functional test code first, before the application code is developed
(a test-first sketch follows this list).
• Customers, QA and Testing people are expected to be an integral part of the
project team and to help develop scenarios for acceptance/black box testing.
• Acceptance tests are preferably automated, and are modified and rerun for each
of the frequent development iterations. Detailed requirements documentation is
not used, and frequent re-scheduling, re-estimating and re-prioritizing are expected.
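
A minimal illustration of the test-first idea, using Python's built-in unittest module.
In XP the tests would be written before the class under test exists; the small
hypothetical Account class is included here only so the sketch runs.

# A test-first sketch using Python's built-in unittest module. In XP, the
# tests below would be written before Account exists and would fail until
# a programmer implements Account to make them pass.
import unittest

class Account:
    """Minimal hypothetical implementation, written to make the tests pass."""
    def __init__(self) -> None:
        self.balance = 0

    def deposit(self, amount: int) -> None:
        self.balance += amount

    def withdraw(self, amount: int) -> None:
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

class AccountTest(unittest.TestCase):
    def test_deposit_increases_balance(self):
        account = Account()
        account.deposit(50)
        self.assertEqual(account.balance, 50)

    def test_withdraw_beyond_balance_is_rejected(self):
        with self.assertRaises(ValueError):
            Account().withdraw(10)

if __name__ == "__main__":
    unittest.main()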
