Welcome to Testing Solutions Group's on-line ISEB Foundation Certificate in Software Testing Course. This
course is intended to prepare students thoroughly to sit the ISEB examination on completion.
Details on how to register for the exam via the computer based Thomson Prometric test centres can be
accessed from the ISEB Examination link below or from the menu shown on the left.
The Foundation Certificate in Software Testing is aimed at anyone with an interest in testing. This certificate
provides visible evidence that the individual understands the basics of software testing. The qualification is
fast becoming regarded as an industry standard for software testers and is the mandatory pre-requisite to
the more comprehensive Practitioner Certificate.
This on-line course is based on the highly successful accredited TSG ISEB Foundation Certificate in Software
Testing course. With additional material, a structure to support on-line learning, interactive questions and
answers plus a proven technical delivery method, the course is fully accredited by ISEB.
During the six weeks from your registration date, you have the opportunity to work through this course at
your own pace. Mentors, from a number of service providers, have been assigned to this course to monitor
student progress and are available to answer any questions.
This means that you have access to some of the most highly qualified testers in the world who are also ISEB
and ISTQB accredited presenters of this course.
Mentors are available 24 hours a day, seven days a week. We aim to respond to all questions within 1
hour during weekdays and 4 hours during weekends.
Mentors may be contacted via The Learn Testing Forum using the 'click here for tutor support' button on the
main menu bar. Mentors will place their responses on the forum for the student to peruse. Students will be
notified by email when a response is posted.
All students may use The Learn Testing Forum to discuss points of interest appertaining to their current
course. The forum may also be accessed via the student's home page.
The course is structured into 6 lessons with tests at the end of each lesson and review questions at the end of
each section of the lesson. After lesson 4 and at the end of the course there are mock examination papers to
help you prepare for the actual ISEB examination.
Navigation through this course is via the toolbar on the left of your screen. It is worth taking a few minutes
now to familiarise yourself with how to navigate around the lessons.
It is VERY IMPORTANT that you familiarise yourself with the following course information prior to starting
lesson 1.
• Introduction to ISEB
• Course Audience
• Course Objectives
• Course Structure
• Course Timetable
• Course Documentation
• ISEB Examination
• ISEB Examination Hints and Tips
• Course Feedback
Introduction to ISEB
The Information Systems Examinations Panel is a division of the British Computer Society and was set up in
1990 to administer examinations and issue certificates in a variety of subjects in the field of information
systems engineering.
The examinations have been designed to enable candidates to demonstrate their individual competence and,
if successful, to obtain a practical professional qualification.
Through the Software Testing qualifications offered by ISEB, the Subject Panel has the following objectives:
• To gain industry recognition for testing as an essential and professional software engineering
specialism
• To provide a standard framework for the development of testers' careers through the BCS
Professional Development Scheme and the Industry Structure Model
• To enable professionally qualified testers to be recognised by employers, customers and peers, and
to raise the profile of testers
• To promote consistent and good testing practice within all software engineering disciplines
• To identify testing topics that are relevant and of value to the organisation
• To provide an opportunity for testers or those with an interest in testing to gain an industry
recognised qualification in the subject
• To enable software suppliers to hire certified testers and thereby gain commercial advantage over
their competitors by advertising their tester recruitment policy
The Foundation Certificate is a general foundation-level qualification appropriate as an entry level for those
interested in testing, and for all testing practitioners. The examination lasts one hour and consists of forty
multiple-choice questions. The examination will be 'closed book'. Delegates will be notified of their results by
post, approximately three weeks after the examination date.
Audience
The following is taken from the ISEB guidelines surrounding the eligibility and appropriateness of the
qualification and the associated course.
• Those who wish to demonstrate that they have a basic knowledge of software testing fundamentals,
principles and terminology
• Those who have at least a minimal background in either software development or software testing,
such as;
§ 6 months experience as a system or user acceptance tester
§ 6 months experience as a software developer
• Managers who wish to understand software testing better
• Those who have studied some computer science, programming or testing at 'A' level or beyond
• Those who are prepared to work hard to assimilate the information required to pass the exam
The entry requirements for this course as stated on the ISEB website are:
• The candidate should have a basic working knowledge of IT, and it is recommended that all
candidates attend an ISEB approved training course run by an accredited training provider.
• Candidates should note that the overall public exam pass rates are notably lower than for
candidates who have attended an accredited training course.
This course is fairly intensive, as there are many topics to cover from the Foundation Syllabus. See the Course
Timetable.
Course Objectives
• To cover the basic principles of testing
• To define a generic test process
• To describe the various testing models
• To describe the phases of testing throughout the System Development Lifecycle
• To cover the various aspects of test management
• To make delegates aware of Computer Aided Software Testing (CAST) Tools
• To enable delegates to use terms described in BS7925-1 - The Standard Glossary of Testing Terms
• To enable delegates to use terms described in BS7925-2 - The Standard for Software Component
Testing
• To enable delegates to pass the ISEB exam
Course Structure
The course comprises six lessons each divided into a number of sections.
At the end of each section there are a number of optional review questions.
At the end of each lesson is a test consisting of a set of review questions so that you may evaluate your
understanding of the lesson.
Remember that you are allowed 1.5 minutes for each question and the pass mark is 62.5%, i.e. 5 out of
every 8 questions set.
If you answer any of these review questions incorrectly, the correct answer will be identified, with an
explanation, after marking.
On completion of each lesson, and of the course as a whole, please take a few minutes to complete the
evaluation form to help us implement our policy of continually trying to improve our portfolio of courses.
Lesson 2
• Component Testing
• Integration Testing In The Small
• Functional System Testing
• Non-Functional System Testing
• Integration Testing In The Large
• Acceptance Testing
• Maintenance Testing
• Lesson 2 Summary
Lesson 5
• Organisation
• Configuration Management
• Test Estimation, Monitoring and Control
• Incident Management
• Standards for Testing
• Lesson 5 Summary
Course Timetable
The pace at which you work through the course is at your discretion; however, ISEB do give timing guidelines
for each syllabus topic. See the course documentation for full details.
The timings table is constructed from the ISEB timings without any allowance for answering the section or
lesson review questions.
When planning your course schedule do not forget to include time for reviews and some contingency.
Lesson 5
• Organisation: 15 min
• Configuration Management: 15 min
• Test Estimation, Monitoring and Control: 30 min
• Incident Management: 15 min
• Standards for Testing: 5 min
• Lesson 5 Summary
Total: 1hr 20m
Course Documentation
There are a number of supporting documents that you may view on screen or print a hard copy to facilitate
revision.
It is possible to print each lesson from the online course, however if you would like a printed and bound copy
of the course material then this can be purchased separately from TSG at a cost of £50 plus VAT plus
postage for delivery outside the UK.
London
EC4N 8BT
United Kingdom
The course including the mock examination papers is based on the ISEB Syllabus for The Foundation
Certificate in Software Testing. It is strongly recommended that you print and read the Syllabus. The
syllabus may be considered a précis of the course and is a key document to drive your revision.
During the course we shall refer to BS 7925-1:1998 - The Standard Glossary of Testing Terms and to BS
7925-2 - The Standard for Software Component Testing both produced by the British Computer Society
Special Interest Group in Software Testing (BCS SIGIST). Again, it is strongly recommended that you read
the relevant sections of these documents.
Please note that the detailed content of these draft versions may vary from the course notes, which
are compiled from the current standard.
ISEB Examination
The exam itself consists of 40 multiple-choice questions, each offering a choice of 1 from 4 possible answers.
You have 1 hour to complete the exam and the pass mark is 25 correct answers or more. The exam is closed
book, which means you need to revise and learn the topics covered on the course.
UK candidates may attend a public examination sitting; these are generally held quarterly at the BCS, 11
Mansfield Street, London, W1G 9NZ. Please contact ISEB for confirmation of the venue location on
your chosen date.
For UK and non-UK candidates ISEB is also offering a computer-based exam with Thomson Prometric under
the ISEB Certification Program. The exam is available to be taken at Authorised Prometric Test Centres
worldwide. You can find the location of the test centres, along with their address & contact details from the
Thomson Prometric web site - www.2test.com
• ISEB candidates can obtain ISEB qualifications throughout the year rather than at pre-set exam
dates
• Examinations can be booked with less than two days' notice
• Results are immediately available. However, ISEB will confirm the result by post, with your
Foundation Certificate in Software Testing, to your home address within three weeks.
• Computer-based exams eliminate the need for examination papers and answer sheets and give an
extra level of security.
If English is not your first language then you may use a translation dictionary, once the exam invigilator or
Authorised Prometric Test Centre has approved it.
ISEB Examination Hints and Tips
In the course we discuss testing techniques – have you thought about exam techniques?
The Foundation exam consists of 40 questions. You are allowed one hour, which is on average 1.5 minutes
per question.
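The exam arithmetic above can be checked with a short sketch. The figures (40 questions, 60 minutes, 62.5% pass mark) come from the course text; the variable names are purely illustrative:

```python
# Exam parameters as stated in the course text.
QUESTIONS = 40
MINUTES_ALLOWED = 60
PASS_PERCENT = 62.5

# Average time available per question.
minutes_per_question = MINUTES_ALLOWED / QUESTIONS

# Minimum number of correct answers needed to pass.
pass_mark = round(QUESTIONS * PASS_PERCENT / 100)

print(minutes_per_question, pass_mark)  # 1.5 25
```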
Most people find that the time allowed is sufficient to complete the exam, so there is no need to rush!
• Please, please read each question carefully and do the same for the answers
• Do a first pass and answer the easy questions (i.e. the ones you know the answer to)
• Then do a second pass for true / false lists and black box / white box techniques etc.
• All questions are worth one mark, so don't initially spend too much time on one question, return to
it after you have answered the others
• Make sure you answer ALL the questions; you are not deducted marks for wrong answers, so if you
are not sure take your best guess
• Watch out for the Negatives in the questions, for example change ‘NOT TRUE’ to ‘FALSE’ and
change ‘NOT FALSE’ to ‘TRUE’
• Where you have a number of statements to separate into true and false lists, tick and cross the
statements you are sure of and then look at the answers to see which options are eliminated. If you
are still not sure then take your best guess
Remember that if English is not your first language then you may take a translation dictionary
with you into the examination. Tell the invigilator that you have the dictionary as it must be
checked and approved prior to the start of the examination.
Course Feedback
We welcome all your comments on this course. We continually try to make improvements to the
course and have a further update planned prior to submitting the course to ISEB / ISTQB for re-
accreditation.
On completion of the course, please complete the course feedback form. Provided you have attempted all
the section, course and exam questions, you will receive a Certificate of Attendance. If you require a
duplicate certificate then you must request one via the forum.
We are also keen to monitor the effectiveness of this method of learning by measuring the success rate of
students who go on to take the ISEB Foundation Certificate exam.
Whilst you are not obliged to divulge whether you were successful or your exact exam mark, we would be
very pleased if you could let us know; this information will be treated in the strictest confidence.
If you have any other comments to make, would like information on other on-line courses, or locally
delivered tutor presented courses please contact us via the forum.
Lesson Introduction
Section Introduction
1.1.1
Testing terms vary between organisations so it is important that a standard set is used to avoid any
misunderstandings. There is no industry standard set of definitions although those laid out in BS 7925-1
are widely accepted and will be used for this course.
The British Computer Society (BCS) have formed many different specialist interest groups to address
different specialist areas within IT. The 'SIGIST' is the Special Interest Group in Software Testing and
they meet regularly to discuss subjects and issues pertinent to the group. You do not have to be a
member of the BCS to attend the meetings.
The standard was developed by SIGIST in order to provide definitions to aid communication in software
testing and related disciplines.
Other BCS specialist interest groups exist, including the Configuration Management Specialist Group and
the Quality Specialist Group.
Note : For each testing term, where the slide definition differs from the draft version
definition in BS 7925-1, the slide definition is the correct one.
Sources of information:
www.iseb.org.uk
www.testingstandards.co.uk
1.1.2
It is difficult to sum up in one statement exactly what testing is and many different definitions exist.
There is not one set of standard testing terms within the worldwide testing community, which has led to
different terms being used by different organisations.
1.1.3
If a system is implemented with little or no testing then the risk of failure is quite high. As testing is
performed and faults are detected and corrected, the risk of failure is lowered and the quality of the
system improved.
It is important to assess the risks of a particular system before testing commences. The more risks
and/or the higher the risks, the more testing should be performed.
Let's take an example: a company is implementing its new web site. The marketing has been successful
and the company is expecting lots of people to visit the site on the first day, so it will want to ensure
that they are impressed. The company's reputation is at stake and therefore the risk is very high. The
risk of failure can never be eliminated, but the company will want to have performed enough testing to
have the confidence that the system will perform as expected. Put another way, it has reduced the risk
of failure to an acceptable level.
1.1.4
Testing can never prove that no faults exist. Typically testing is done to find faults in the behaviour of
systems, to assess whether the system is ready for release, to raise the confidence that the system
works and to provide quantifiable results on which management can assess the risks of implementing a
system.
It is performed to demonstrate that a system meets its specification and does not display incorrect
behaviour when used as intended. But equally we must not forget the reverse of that and remember to
test for invalid scenarios/situations - this is also known as negative testing. This is effectively guessing at
what might go wrong.
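As a sketch of positive versus negative testing, consider a hypothetical input validator. The function and its acceptance rules are invented here for illustration, not taken from the course:

```python
def accept_age(value):
    """Hypothetical validator: accept whole-number ages from 0 to 130 only."""
    return (isinstance(value, int)
            and not isinstance(value, bool)   # bool is a subclass of int in Python
            and 0 <= value <= 130)

# Positive tests: valid inputs the specification says must be accepted.
assert accept_age(0) and accept_age(42) and accept_age(130)

# Negative tests: invalid inputs must be rejected, not silently accepted.
for bad in (-1, 131, "42", None, 3.5, True):
    assert not accept_age(bad)
```

The negative cases are exactly the "guessing at what might go wrong" described above: boundary violations, wrong types, and missing values.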
1. What is testing?
• Getting rid of errors
• A system that enables the user to do his job effectively and efficiently
• To implement something on the delivery date
Answers: c & c
Section Introduction
1.2.1
Unfortunately nobody is perfect and we all make mistakes. Sometimes this is a misunderstanding of
what is required of us, sometimes we are working under pressure such as delivery deadlines, and
sometimes we just get it wrong! Errors made early have a nasty habit of growing and getting worse.
If errors are present in software they may cause problems immediately but they can also lie dormant
and it may take a while before they surface. When errors have been made and lie undiscovered, the
delivered software will be defective, which can lead to failures, which could mean severe problems for
the business.
An example of such a failure is when a large supermarket chain had a '2 for 1' promotion on a hair
conditioner. A customer noticed that when she had purchased two conditioners, not only had she only
been charged for one of them, but also £3.50 had been deducted from her bill. She performed some
further 'research' and discovered that for every one of the special purchases, £3.50 was deducted from
the final bill.
Over the next few weeks the woman visited all of the supermarket's branches in her area and cleared
the shelves of the conditioners. She estimated she saved herself £1000!
When the company realised that there was a problem, they stopped the promotion. However, how
many more people found the loophole and exploited it?
Definitions
1.2.2
Error - a human error - we misunderstood what was required, we made a logic error, or we just got it
wrong. To err is human.
Fault - the manifestation of an error in software. Faults can usefully be seen as mistakes or defects that
reside in software and are the result of an earlier and undetected error. They are 'sitting' in there
somewhere waiting to be found. Physical testing is done to detect faults so that corrections can be
made and system failures avoided.
Failure - when the system does not do what is expected of it - due to a fault. The assumption is that
the system is now being executed and in the execution of the system a failure occurs. Not all faults
result in failures. It is possible for a fault to remain dormant because the part of the system where it
resides has not been exercised.
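These definitions can be shown in code. The function below contains a planted fault (the example is invented for illustration): an error by an imaginary programmer means the loop skips the second element. The fault stays dormant until that part of the input is actually exercised, at which point a failure occurs:

```python
def max_of(items):
    """Return the largest item, but with a planted fault for illustration."""
    best = items[0]
    for i in range(2, len(items)):  # FAULT: should be range(1, len(items))
        if items[i] > best:
            best = items[i]
    return best

# Fault dormant: the skipped element is not the maximum, so no failure occurs.
assert max_of([3, 1, 7]) == 7

# Fault exercised: the maximum sits in the skipped position, causing a FAILURE.
assert max_of([3, 9, 7]) == 7   # the correct answer would be 9
```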
1.2.3
The three definitions are taken from the Glossary of Terms, BS 7925-1, produced by the BCS SIGIST.
The definitions included in BS7925-1 are the ones the ISEB Foundation examination will be based upon.
Reliability
1.2.4
Following on from errors, faults and failures, a reliable system is one that has no failures, or at least
very few. If a system has been well tested against the agreed specification, the result should be a
reliable system that does what the user requires it to do. Generally, the more faults that are detected
and corrected, the more reliable the system will be. It is also important that the testing carried out
reflects the real use of the system.
How much reliability does the user need?
We have to ask the question: how reliable does the system need to be? If it is a business-critical
system, its reliability is much more important than that of one that is not. How often is the system
used? If it is used 24 hours a day, 7 days a week, then reliability is a key requirement.
Note: source of reliability definition – ISEB Foundation Certificate in Software Testing syllabus.
1.2.5
It is important that the requirements for reliability are thought about early in the development process.
The requirements should be stated in such a way that they influence the design. It is often too late to
introduce reliability through testing.
It is very easy for the user/business analyst to say 'the system must be reliable', but what do they
actually mean? The requirements must be detailed to the extent that tests can be created. In some
cases Service Level Agreements may be in place that give guidelines as to the expectations of the
application.
It is impossible to test every aspect of the system so 100% reliability cannot be guaranteed. However,
by assessing the importance of the business processes contained within the system, more emphasis
can be placed on the testing of those areas, thereby reducing the possibility of failure in these areas.
Reliability is measured over time. For example, once the system has gone live the number of failures
can be recorded during a certain time frame. The information gathered can be useful in determining
'problem' areas of a system (high error rate) and giving a feel for the risk involved in changing certain
parts of a system.
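Such measurement can be sketched as follows. The failure log and observation window are invented figures, and the metrics (failure rate, mean time between failures) are common reliability measures rather than anything mandated by the course:

```python
# Hypothetical failure log: hours since go-live at which each failure occurred.
failure_times = [120.0, 300.0, 420.0, 900.0]
observation_hours = 1000.0

failures = len(failure_times)
failure_rate = failures / observation_hours   # failures per hour of operation
mtbf = observation_hours / failures           # mean time between failures

print(failures, failure_rate, mtbf)  # 4 0.004 250.0
```

Tracking these figures per subsystem is one way to spot the 'problem' areas mentioned above.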
1.2.6
A human error can cause a fault to be introduced at any stage within the SDLC, for example:
Users are often unsure as to what they really want or are unable to express their exact requirements.
This can lead to the requirements being misinterpreted or misunderstood by the business analyst.
The system specification is created from the requirements. At this point it is possible for requirements
to be left out, for extra functionality to be added that was not an original requirement, or again for
requirements to be misinterpreted.
A further level of design document may be created from the system specification. Again it is possible
for functionality to be left out, added or misinterpreted.
The code is created by the developers who can also miss functionality, add functionality or again
misinterpret the design documents.
Different people are involved at each stage and mistakes can be made. The review process helps to
limit these mistakes - this is covered in Lesson 3 of the course.
1.2.7
In order to test the code produced, the programs need to be installed on a hardware platform, which in
turn can have different operating systems running. This environment is very important. It is possible
that the programs work in one environment and not in others.
It is important to get the test environments configured to reflect the live environment(s) and to
understand which environments are intended to be supported. It is not always possible to test all the
different environments. For example, what about web sites in the public domain? You cannot possibly
test every possible PC configuration with every combination of operating system and browser.
Decisions have to be made as to the scope of the testing.
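The way environment combinations multiply can be sketched with a hypothetical support matrix; the platform and browser names are illustrative only:

```python
from itertools import product

# Hypothetical support matrix: each pairing is a distinct test environment.
operating_systems = ["Windows", "Linux", "Mac OS"]
browsers = ["Internet Explorer", "Netscape", "Opera", "Mozilla"]

environments = list(product(operating_systems, browsers))
print(len(environments))  # 12 environments from just 3 x 4 options
```

Add hardware variants or OS versions and the count multiplies again, which is why the scope must be decided explicitly.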
Even once we enter the dynamic testing phases, it is still possible to make errors which introduce new
faults into the products – software, user guides or training material. These faults may be introduced if
test incidents are raised in error or if test incidents are incorrectly acted on.
1.2.8
As discussed earlier, human errors can cause a fault to be introduced at any stage within the SDLC and
depending upon where they are made, the results can be catastrophic.
The best-case scenario is where the specification and design are correct but an error has been introduced
at the construction stage. These errors can be detected and corrected; no rework of the design stage
or specification stage is needed.
The next scenario is where an error has been made at the design stage. This will mean that the system
has been built to the wrong design. In order to correct the errors, rework will be needed at the design
stage before corrections can be made in construction.
The worst scenario is where an error has been made at the specification stage. The original
specification may be correct, but if a User change is ‘lost’ or ignored then the specification does not
reflect what the user now wants and is erroneous. If the error is not detected then the design work will
be carried out against an incorrect specification, which means it will be wrong. No matter how well the
developers do their job, the construction will also be wrong because it has been carried out against an
incorrect design. The problem in this case is that nobody will be aware of the error because the
original error was not detected. This means that the system will be implemented with hidden errors and
in some cases this can be catastrophic.
An example of this last scenario is the Ariane 5 rocket, made by the European Space Agency.
On June 4th 1996, the maiden flight of the Ariane 5 ended in failure. About 40 seconds after lift-off, the
rocket veered off its flight path, broke up and exploded. The investigations after the failure revealed
that the navigational software used on the Ariane 5 was the same as that used for the Ariane 4 rocket.
The problem with this was that the Ariane 5 was larger and more powerful than the Ariane 4, and when
it was launched the software thought that the rocket was off-course and tried to correct it accordingly.
However, the rocket was actually on course, and the correction the computer made caused it to veer
off-course and subsequently caused its destruction.
In this situation the initial specification of the rocket had been wrong, it had been carried through all of
the development stages and caused a very costly disaster on the maiden flight.
1.2.9
Relating the scenarios mentioned previously to the graph above: if an error is detected in the
specification at the specification stage, then it is relatively cheap to fix. The specification can be
corrected and re-issued. Similarly if an error is detected in the design at the design stage then the
design can be corrected and re-issued. The same applies for construction.
If, however, an error is introduced in the specification and it is not detected until acceptance testing, or
even later still once the system has been implemented, then it will be much more expensive to fix.
This is because rework will be needed in design before changes can be made in construction and all the
testing will need to be repeated.
It is quite often the case that errors detected at a very late stage are not corrected, depending on how
serious they are, because the cost of doing so is too great. This can lead to users being unhappy with
the system that is finally delivered. In some cases, where the error is serious, the system may have to
be de-installed completely. We will see as we progress through the course that the key is to introduce
testing as early as possible in the development lifecycle so that errors are detected as early as
possible; a more mature test process is one that has early lifecycle testing.
1. When a Test Analyst talks to the System Designer and there is a misunderstanding in
the conversation, this is known as:
• A bug
• An error
• A fault
• A failure
Answers: b, c and b
Section Introduction
1.3.1
Exhaustive testing is where you have ensured that you have covered every possible test case. For
example this can include all possible values / combinations of the input data, through every possible
route of the code, on every possible type of environment.
Clearly it is quite impractical to try to cover everything, as this would take far too long and cost too
much money. Even with increasing the number of people creating / executing the tests it could not be
achieved for even small systems within a reasonable time frame.
The costs involved in trying to achieve this would far outweigh any benefits.
1.3.2
1.3.3
There are 10 possible valid numeric values but as well as the valid values we need to ensure that all
the invalid values are rejected. There are 26 uppercase alpha characters, 26 lower case, 6 special
characters (as shown in the example although there are more in practice) as well as a blank value. So
there would be at least 68 tests for this example of a 1-digit field.
In practice systems have more than 1 input field with the fields being of varying sizes. These tests
would be alongside others such as running the tests in different environments.
If we take an example where one screen has 15 input fields, each having 5 possible values, then to test
all of the input value combinations you would need 30,517,578,125 (5^15) tests! It is unlikely that the
project timescales would allow for this number of tests.
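The arithmetic behind that figure is simply the number of values per field raised to the power of the number of fields, since every field can vary independently:

```python
values_per_field = 5
fields = 15

# Every field varies independently, so the combinations multiply.
combinations = values_per_field ** fields
print(combinations)  # 30517578125
```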
1.3.4
1.3.5
There are various testing techniques that can be used to help create a comprehensive set of tests
without trying to cover all permutations. These are covered in Lesson 4 of the course.
It is also possible to select a set of tests to be run based on risk assessment. The concept is that if
you can't test everything then at least the high-risk areas can be tested.
1.3.6
Risk can be categorised in different ways - it could be that tests are categorised as high risk because of
the likelihood of failure. This could be because the area being tested is prone to errors or an area that
is very complicated. It could also have serious repercussions to the company if it is seen that a system
has failed. There has been a lot of bad press with regard to Company Web sites that have failed.
The company could also be at a disadvantage against fellow competitors if the system isn't delivered on
time or doesn't support certain functionality. For example 'Electronic Trading' is due to start on a
particular day - if your system isn't ready and a competitor's is then you could lose out. Another
example could be where a financial system does not have the ability to deal in Euros.
There was a lot of publicity in the press a few years ago when the London Ambulance Service put a
system live with very bad repercussions. They found that they were unable to deal with all the
emergency '999' calls and route ambulances to them within a reasonable time. In fact, at least one
death was attributed to the delay in an ambulance arriving. Safety-critical systems will have areas of
very high risk.
1.3.7
As we have discussed, no system is ever completely error-free; it is impossible to test everything.
Testing is about reducing the risk of failure to an acceptable level. The higher the risk (of whatever
type), the more testing will need to be done to have confidence that the system will perform as
specified.
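A minimal sketch of ranking test areas by risk, assuming an invented risk register with likelihood and impact each scored 1-5 (the areas and scores are illustrative, not from the course):

```python
# Hypothetical risk register: area -> (likelihood, impact), each scored 1-5.
areas = {
    "payment processing": (4, 5),
    "report layout": (2, 1),
    "user login": (3, 4),
}

# Exposure = likelihood x impact; test the highest-exposure areas first.
ranked = sorted(areas, key=lambda a: areas[a][0] * areas[a][1], reverse=True)
print(ranked)  # ['payment processing', 'user login', 'report layout']
```

The ordering tells you where to spend the limited testing budget first; the scoring scheme itself would normally come from the project's own risk assessment.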
1.3.8
The functionality of a system obviously needs to be tested and this is fairly straightforward in that it
either does what it is supposed to do or it doesn't. The non-functional attributes are concerned with
how well it does it.
It is much harder to test these non-functional attributes, as they are subjective and can be very hard to
define. Quite often you will see requirements such as 'the system must be easy to use', but how would
you test for this? The requirements need to be defined in such a way that tests can be created. In this
instance it might be that screen standards have been followed, that the screen flow follows the
business flow, that help text is available, and so on.
The quality of the system is improved by the fact that these attributes have been tested and are
working well. They will quite often give the 'competitive edge' against a competitor's similar product.
1.3.9
There are three 'quality' terms that are often used within testing - Quality Control, Quality Management
and Quality Assurance and it is worth looking at them in a bit more detail.
Throughout the SDLC we can introduce quality control points. These points are where we perform
activities to help improve the quality of the product. For example the activity could be a review of a
document, test planning or test execution.
It is important that these activities start early in the SDLC such as reviewing the User Requirements
and test planning from those requirements.
It is not possible just to add quality at the end through test execution.
1.3.10
It is up to the managers to decide and ensure that the correct process is in place, with quality control
points throughout the SDLC. This can also involve activities such as making sure that everyone has the
right level of training and, if particular tools are needed, that they are purchased and the team has
been trained in the necessary skills.
1.3.11
Quality Assurance is checking to ensure that the process is being followed (auditing) and making sure
that the process is as effective as possible. It is looking at where errors have occurred and introducing
activities to help reduce the number of errors.
For example, if through experience it was found that a lot of errors were attributed to
misunderstandings of the requirements document, then a review could be put in place to help remove
some of the ambiguities.
1.3.12
1.3.13
1.3.14
The standard is about customer satisfaction, continuous improvement, and a process approach that
considers the aims for the whole organisation.
It requires only six written procedures, and has a strong emphasis instead on the ability of people to do
the work, make decisions and so on.
The guidelines are to help you continuously improve beyond the requirements of the standard (because
the standard has a strong emphasis on continuous improvement, you cannot just get the certificate
and stop).
A careful reading of the elements demonstrates that the ISO quality management system “is not simply
an inspection process to eliminate any parts or services that do not meet a specific set of requirements.
Under ISO 9000, quality is ‘built in’, not ‘inspected in’.”
The Standard and the Guideline are being used throughout the software industry as a benchmark for
software product and process quality.
Information on the standards can be obtained from the British Standards website (www.bsi-global.com).
1.3.15
ISO 9000 is a standard for Quality Management systems that is accepted around the world. Currently
90 countries have adopted ISO 9000 as national standards. When you purchase a product or service
from a company that is registered to ISO 9000 standard, you have important assurances that the
quality of what you receive will be as the company stated.
In order to comply with the quality model a company needs to have processes in place and
documented.
It needs to follow these processes and be able to show that it has followed them. This could be by
documentation such as reports from a review, test execution results or test completion reports.
It also needs to keep metrics as to what and where errors have occurred so that these can be
subsequently analysed. This is to determine what changes could be made to improve the process.
The latest version of the standard requires measurement of customer satisfaction and also continuous
improvement of processes.
1.3.16
It does require that there are quality points throughout the process. For example this could be a review
of a document or a walkthrough of the code.
It also requires that any deliverables that the process states should be produced are checked.
1.3.17
However it does not actually look at the process in detail and say whether it is good or bad, or that you
are testing to the correct level. It is up to the company to decide what processes they put in place.
Information on the standards can be obtained from the British Standards website (www.bsi-
global.com).
1.3.18
Within large companies it is quite often the case that the business users involved in specifying the
requirements of a system are in one department and the developers and testers are in another
department - usually the IT Department. It is important that contracts exist between the departments
so that everyone is aware of what they need to do and what is expected of them.
It is also becoming increasingly popular to outsource either the development of projects, the testing of
projects or both. In any of these cases where an external supplier is involved the expectations must be
documented. Contracts are vital for outsourcing.
1.3.19
Good contracts must be clearly stated and leave no room for misinterpretation.
1.3.20
You must be aware of any legal and/or regulatory requirements that affect your industry sector or type
of application – ignorance is no defence!
1.3.21
It is important to define the scope of your testing before any testing is actually started. At any stage
within the SDLC the relevant documentation will help determine the scope of your testing. The testing
that is planned is usually defined in a Test Strategy or High Level Test Plan document - the planned
testing is quite often known as the 'Exit Criteria'. As well as stating what you plan to test, the 'Exit
Criteria' would include other details such as how well the tests need to have performed. For example, it
may state that you need to test everything and have found no serious errors, but that minor errors are
acceptable.
Testing is an iterative process and, as it is impossible to test everything, we could just keep going round
the loop indefinitely. If the 'Exit Criteria' have been stated in advance, then testing can stop when the
criteria have been met.
1.3.22
However, in practice it is not as easy as that, as testing seldom goes to plan. It may be that it takes
longer to fix errors than originally scheduled; you may be able to bring in extra staff, or you may have
staff leave. The implementation date may be able to be delayed, or it may not.
Whatever is planned initially, the Test Manager will need to make decisions during the testing phase to
keep the project on track. It is the Test Manager's responsibility to provide quantifiable information
(status reports) as to the test coverage and the problems found so that informed decisions can be
made as to whether the project is ready for implementation. The decision must not be subjective.
Ultimately there may still be outstanding problems and it may not have been possible to test lower risk
areas but as long as the system is deemed 'Fit for Purpose' then the job has been done.
Choosing the correct process to ensure that the product is ‘fit for purpose’
The testing performed by users prior to implementation to ensure that the product
is ‘fit for purpose’
BS 7925-2
IEEE 610-12
BS 7925-1
Requirements
Answers: c, c, a, c, a and c
Section Introduction
1.4.1
There are five stages in the Fundamental Test Process - these are shown above and will be explained in
detail in this section of the course.
This process can be applied to any phase of testing, for example, Acceptance testing, Functional
System testing - these phases of testing will be covered in lesson 2 of the course.
1.4.2
Test Planning
1.4.3
As described earlier it is essential that time is spent planning the testing to be performed.
If the Test Plan does not follow the test rules (the policy, strategy or standards) in any way, for
example by including some additional types of testing or by excluding any type of testing, then this
must be clearly stated.
1.4.4
The scope of the testing will be derived from the appropriate document, for example, at acceptance
test one might use requirements documentation and at component test one may use the program
specifications. If no documentation is available then it is important to seek out knowledgeable users
and find out what the requirements are. It is always important to document the findings for future
reference. Decisions will also be made as to any areas that are deemed out of scope.
1.4.5
The approach that will be taken will be decided – for example top down or bottom up along with the
appropriate stubs and drivers. This is discussed in Section 4 of the course.
Different test techniques are detailed in BS 7925-2 and are described in more detail later in the course
but it is at this point that you would identify which techniques are to be used.
Testing may well comprise different phases. It needs to be clearly defined what must be in place before
testing can start, as well as what needs to be achieved before it is deemed complete. This will include
the test coverage.
The test planning would also include details of the test environments to be used. It may be necessary
to purchase more equipment, software or tools.
Test Specification
1.4.6
It is at this point that the individual test cases can be created using the techniques defined in the test
planning phase.
1.4.7
A sample test case template can be found in Appendix A.
1.4.8
Some testing processes recognise that there is benefit to be gained by splitting the Specification phase
into two distinct activities. This is more likely to occur for functional system testing and user
acceptance testing.
A level of analysis is required providing more detail as to 'What' is to be tested before tests can be
designed detailing 'How' that might be achieved.
1.4.9
As with test planning, the source documents and the activities depend on the test phase.
At system test, the high-level business processes and non-functional attributes, along with their
associated business risk, should already have been identified in the planning stage. These are known as
high-level functions and can be broken down further, creating a test hierarchy of functions and
sub-functions.
At component test, the test objectives and conditions are drawn out using methods such as those in
BS 7925-2; we might wish to identify particular code paths to test, using techniques such as
branch/decision testing, or we might wish to analyse the logic in the program specification using
cause-effect graphing. We will look at these techniques in section 4 of the course.
Whatever the phase of testing being specified, it is important to identify an order of importance, or
priority, for the test conditions that have been identified.
1.4.10
Once the scope has been decided as to exactly ‘What’ needs to be tested, the Design activity is
concerned with the detail of ‘How’ this will be achieved.
At system test, the Test Specifications could be data driven and typically represent a business process
or a thread through the system, often known as ‘end to end’ tests. Alternatively they could be process
driven, designed to test one particular function.
At component test, the Test Specifications could relate to testing particular paths through the code, or
could be related to testing the functionality of the component.
They would contain the prerequisites such as the set up of the required test environment.
Test cases are created for the specifications, providing details of the input data to be entered, the
actions that need to be carried out and the expected outcome. They would also have a link to the test
condition(s) that they are testing (the objective of the test), thus providing traceability back to the
requirements.
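The shape of such a test case can be sketched as a simple record that links inputs, actions and expected outcome back to the condition(s) it covers. This is only an illustration: the field names, identifiers and values below are hypothetical, not taken from any standard or from the course's template in Appendix A.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """Illustrative test case record; all field names are assumptions."""
    case_id: str
    condition_ids: list   # links back to the test condition(s) covered
    prerequisites: str    # e.g. required test environment set-up
    input_data: dict      # the data to be entered
    actions: list         # the steps to be carried out
    expected_outcome: str # predicted result, defined before execution

# A hypothetical example showing traceability back to a requirement.
tc = TestCase(
    case_id="TC-001",
    condition_ids=["REQ-4.2"],
    prerequisites="Client database restored to baseline",
    input_data={"surname": "Smith", "account": "12345"},
    actions=["Open 'Add new client' screen", "Enter data", "Press Save"],
    expected_outcome="Client record saved and confirmation message shown",
)
assert "REQ-4.2" in tc.condition_ids
```

Because every case carries its condition identifiers, coverage against the original requirements can later be reported mechanically.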
More detail and a practical case study are provided in the TSG Practical Testing Processes course.
Test Execution
1.4.11
Test execution can only be started once a release of the software to be tested has been received.
Release notes accompanying the release will detail the inclusions and exclusions and from this
decisions can be made as to what tests to run.
Test Recording
1.4.12
Information will need to be recorded once the test has been executed - some of this information is
recorded in what is generally referred to as the Test Log.
The test log would include details such as the version of software being tested, the date and time of
execution and the name of the person performing the tests. Most importantly, it would also include the
actual outcome of the test – whether it has passed or failed. All of this information provides auditable
test results.
The actual outcomes of the tests would be analysed and compared to the predicted/expected outcomes
to look for any discrepancy.
1.4.13
If a discrepancy is found then the source of the fault needs to be found. It is not always the case that
the software is the problem; it could be that the test case has been written incorrectly, that the
environment has not been set up correctly or that the person performing the test has made a mistake.
Because this test has failed, it may mean that subsequent tests cannot be executed. Equally, executing
this test again may mean that previous tests have to be executed again. Either way, it will impact the
remaining testing to be carried out.
The information recorded needs to be in sufficient detail to allow subsequent defect causal analysis.
1.4.14
1.4.15
In order to determine how much of the testing you have covered it is important that the tests can be
traced back to the original requirements.
By recording whether the tests have passed or failed you can determine the status of the tests that you
have executed.
But it is not enough just to have the status of the tests without being able to relate it back to how
much of the original requirements you have covered. For example all the tests may be successful but
you may only have covered half of the original requirements.
It is also beneficial if risk has been associated with the tests, so that you can report on how well the
high-risk tests have performed compared with the tests for all of the risks. For example, you may have
covered 100% of the high-risk tests but only 70% of all the tests. Remember that faults should always be
logged – software, hardware, environment or documentation (including test procedures, test scripts or
test cases).
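The risk-based coverage figure above can be computed directly once each test carries a risk level and an executed flag. The sketch below reproduces the 100%/70% example; the data representation is an assumption made for illustration.

```python
def coverage_by_risk(tests):
    """Percentage of tests executed, for high-risk tests and overall.
    Each test is a (risk_level, executed) pair; a minimal sketch."""
    def pct(subset):
        subset = list(subset)
        return 100 * sum(1 for _, done in subset if done) // len(subset)
    high = pct(t for t in tests if t[0] == "high")
    overall = pct(tests)
    return high, overall

# 3 high-risk tests all executed; 7 of 10 tests executed in total.
tests = [("high", True), ("high", True), ("high", True),
         ("medium", True), ("medium", True),
         ("low", True), ("low", True),
         ("medium", False), ("low", False), ("low", False)]
assert coverage_by_risk(tests) == (100, 70)
```

Reporting both numbers side by side gives the quantifiable, non-subjective status information the Test Manager needs.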
1.4.16
1.4.17
The test coverage required would be determined in the earlier planning and specification stages. If it is
found that the coverage has not been met, then it could be for various reasons. This can only be
determined if there is traceability from the requirements through to the tests and on to the results of
the tests.
One reason could be that tests have failed because there is a fault in the software; in this case it will be
necessary to wait for a further release of the software before they can be repeated.
Another reason could be that there is a mistake in the definition of the test; in this case the test will
need to be corrected and re-run.
Also if there is functionality that hasn't been covered then more tests will need to be created and
executed.
Successful Tests
1.4.18
Testing is performed with the intention both of detecting errors and of demonstrating correctness. There
is quite a high probability that faults will be found during testing. This in turn means that each fault will
need to be fixed and the tests repeated. It may be necessary for this iteration to be repeated several
times before testing is deemed complete.
This iteration may delay the progress of the project and can be very costly. However, if you don't find
the faults and the system is implemented, then failures could occur, and the repercussions of those
failures may be many times more costly to correct.
Completion/Exit Criteria
1.4.19
The completion or exit criteria are used to determine when any phase of testing is complete. The exit
criteria from one phase will usually be the entry criteria to the next phase. For example the exit criteria
from system testing can form part or all of the entry criteria for acceptance testing.
They will be determined in the Test Planning stage although in practice they may be adjusted as the
project progresses.
It may be stated that all the tests created to give complete coverage need to be executed with no
faults outstanding regardless of the time taken or the cost involved.
However due to time or cost restraints it may be necessary that all the tests are not run or all the
faults are not fixed. In this case it may be stated that all the high risk tests are executed with no
serious faults.
In terms of faults, exit criteria may be expressed in different ways: Find rate – the number of faults
being found in testing over a specific period (e.g. if no critical faults are found in a period of one week's
testing, then testing can stop).
Severity – the number of outstanding faults of different levels of criticality (e.g. testing can stop if
there is no more than 1 critical fault outstanding and 3 major faults outstanding).
Sustainability – how long the system can be used in a production environment with the identified faults
outstanding (there may be parts of the system with faults recorded that will not be used until several
months hence, or acceptable workarounds to known faults may have been provided).
Clean-up rate – there is an acceptable level of outstanding faults in the software that will enable it to
exit testing.
It has to be remembered that exit criteria are unlikely to be expressed only in terms of faults – they
have to be combined with, for example, the amount of coverage required by testing.
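A combined criterion of that kind can be sketched as a single check. The thresholds below echo the severity example given earlier (no more than 1 critical and 3 major faults outstanding) plus a coverage requirement; the function name and the 100% coverage target are illustrative assumptions, not prescriptions.

```python
def exit_criteria_met(outstanding, coverage_pct):
    """Illustrative exit criterion combining fault severity counts with
    achieved test coverage; thresholds are example values only."""
    return (outstanding.get("critical", 0) <= 1
            and outstanding.get("major", 0) <= 3
            and coverage_pct >= 100)

assert exit_criteria_met({"critical": 1, "major": 3}, 100) is True
assert exit_criteria_met({"critical": 2, "major": 0}, 100) is False
assert exit_criteria_met({"critical": 0, "major": 0}, 90) is False
```

Note how the last case fails on coverage alone: a fault-free run is not sufficient if the planned tests have not all been executed.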
The Fundamental Test Process
Review Questions
1. The following are all stages in the Fundamental Test Process except
Test Planning
Test Design
Test Execution
Test Recording
Test environment
Scope of testing
Expected results
Running tests
Logging results
Test results
Faults detected
Answers: b, d, d and a
Section Introduction
In this section we will cover:
• Why do we test?
• Characteristics of the developer
• Characteristics of the tester
• The developer/tester relationship
• Independence
Why do we test?
1.5.1
There are many objectives of testing, such as to show that a system meets its specification, to assess
whether the software is ready for release and to raise confidence that the system works, but the primary
reason is to find faults in the software.
Because of this, testing is often perceived as a destructive process rather than a constructive one.
Developer Characteristics
1.5.2
The developers are normally specialists in certain areas, for example this may be Visual Basic, C++, or Web
development. Because these skill sets have been built up over a number of years (and at the company's
expense), these individuals are normally highly valued by the organisation as they have both the technical
knowledge and business knowledge. Money will have been spent in getting them trained in their specialist
subject.
The developers will have spent a lot of time creating the code; the product is their creation, their 'baby' and
they are often very sensitive to criticism.
Tester Characteristics
1.5.3
The testers need to be methodical - to be able to create tests and follow tests through in a structured way.
They also don't like to be beaten - if they find an intermittent fault they will continue until they determine
the exact actions that caused the fault. In fact they are happy finding faults!
Testers are often under-valued by the organisation - slowly, this is changing as organisations become more
mature and realise the importance of testing.
Often testers find themselves in the job because they have been placed there to learn the system or they
have been transferred from other areas on a temporary basis but many enjoy the work and choose to stay
on permanently. Quite often testers find themselves in the job with little or no training.
Testers need to be good communicators - they need to be diplomatic with the developers and often have to
liaise with the business users as well to determine the full scope of their testing.
1.5.4
Good communication between the tester and the developer is an important factor in the success or
otherwise of a project.
Any communications made must be constructive, not destructive - 'This program is a load of rubbish and it's
got thousands of errors' will not be received well.
Developers must always inform testers what changes have been made to an application - otherwise there is
the real danger of testers missing parts of the system to test because they are unaware of them. This would
normally be addressed in the form of release notes.
Faults found by testers must be reported clearly and concisely - 'The add new client screen doesn't work' will
not provide enough detail for the developers to fix the fault. It is also important that, when reporting the
fault, the tester does not try to provide the solution. Faults are normally reported on a problem tracking
system or an incident management system.
Independence
1.5.5
It is also generally accepted that if you review your own work you will not find the mistakes that someone
else will. This is why independent testing is recommended.
It is generally accepted that authors should not test their own work for the reasons noted above. However,
sometimes it can be expensive to build independence into the test process - there may not be enough
resource, and it may take longer for someone else to execute the tests.
1.5.6
There are various levels of independence. It will depend on the risk and costs involved as to how far this is
taken.
The last point - ‘Not chosen by a person’ is where tests are generated or chosen in some automated way, for
example by a test design tool or a test data generator.
The levels shown above indicate a hierarchy of independence, with the author testing their own product
being the least independent and tests not chosen by a person being the most independent.
v, iv, i, iii, i
iv, v, iii, i, ii
Answers: b and d
• What is re-testing?
• What is regression testing?
• Selection of regression test cases
What is re-testing?
1.6.1
As we have discussed, testing is an iterative process: faults are found, they are fixed and then they need to
be re-tested. This applies whether you are performing component testing, system testing or acceptance
testing.
1.6.2
What is regression testing?
1.6.3
When a change is made to a system it is quite possible that it can have a knock-on effect. It could be that
by making the change other areas that were previously working now stop working. The system could end
up worse than it originally was.
Regression testing is performed to give the confidence that this has not happened.
1.6.4
1.6.5
It is obvious that performing a full regression test every time a change is made is going to be very time
consuming and therefore very costly. Ideally you would want to test everything but in practice it is not
always possible. Difficult decisions sometimes need to be made based on risk.
For example, a large number of changes made to various areas of the system will carry a higher risk than a
small cosmetic change. If changes have been made to high-risk business processes, or to particularly
complicated areas, then these will be higher risk and should be included in regression testing.
Other decisions can also be made to combine test cases or drop repetitive tests. Sometimes if tests were
not run last time then they are included and some tests that were run last time are excluded.
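One possible way to mechanise this selection is sketched below: always keep high-risk tests and tests touching changed areas, and rotate back in tests that were skipped last cycle. This heuristic, and the data layout, are illustrative assumptions rather than a prescribed method.

```python
def select_regression_tests(tests, changed_areas):
    """Pick a regression subset (one possible heuristic): always include
    high-risk tests and tests covering changed areas; rotate skipped
    tests back in so nothing is left untested indefinitely."""
    selected = []
    for t in tests:
        if t["risk"] == "high" or t["area"] in changed_areas:
            selected.append(t)
        elif not t["run_last_time"]:   # rotate skipped tests back in
            selected.append(t)
    return selected

tests = [
    {"id": "T1", "risk": "high",   "area": "payments", "run_last_time": True},
    {"id": "T2", "risk": "low",    "area": "reports",  "run_last_time": True},
    {"id": "T3", "risk": "medium", "area": "login",    "run_last_time": False},
]
picked = [t["id"] for t in select_regression_tests(tests, {"login"})]
assert picked == ["T1", "T3"]
```

T2 is dropped this cycle because it is low risk, untouched by the change and was run last time; under this heuristic it would be picked up again next cycle.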
1.6.6
When a system is first being developed, changes are frequently being made, many faults are usually found
and the application is generally thought of as unstable. As time progresses the frequency of change lessens,
the faults (hopefully!) become fewer and the application becomes more stable. It is at this point that
there is less re-testing and more regression testing. Once the system is fairly stable it can be time-saving to
introduce test automation. By automating the tests you are more likely to be able to include all tests, as
opposed to having to make decisions as to which ones to include. Test automation is not always suitable,
and sometimes a combination of automated and manual tests is the preferred solution.
1.6.7
One of the attributes of a good test is that it is repeatable. If a fault is found, the developers will want to
recreate it to help them determine the cause. How many times have you heard 'It doesn't happen
when I do it'? This equally applies when you are re-testing or regression testing. You need to be sure that
you are repeating the test exactly to show that the fault has been fixed.
1. A Regression test is performed when each of the following has changed except
The software
The hardware
The environment
We need to ensure that unchanged areas of a system are still functioning correctly
We need to ensure that faults have been fixed
Answers: b and c
Expected Results
Section Introduction
1.7.1
Expected results are the predicted outcome of a test. It is essential that they are determined prior to
test execution.
The outcome is everything you expect to happen. The output is physical output such as a printed
report, a file of names and addresses or an updated database. The outcome of a test may include
physical output.
1.7.2
The expected results are compared to the actual results and then a decision can be made as to whether
the test has passed or failed.
1.7.3
Expected results should always be derived from the system documentation. For example this could be
User Requirements, System Specification or Detail Design document. The same documents that are
used to create the tests should also be used to predict the expected results.
If no documentation exists then it is difficult to know what tests to create and also what the expected
results should be. In this instance it is necessary to find out the requirements by investigation.
The 'Oracle Assumption' is that a tester can routinely identify the correct outcome of a test. This means
that the tester knows where to look to get the information from which the expected outcome can be
derived. The word 'Oracle' comes from Greek mythology and in this instance means the source of the
information. This may be the documentation, the existing system, or an individual's specialised
knowledge but it is never the code.
1.7.4
If the expected results have not been predicted in advance then you are leaving the judgement as to
whether the test has worked or not to the person executing the test. They might not fully understand
the implications of the test or may interpret a wrong result as a correct one, either way there is a
chance of them getting it wrong.
Expected Results
Review Questions
Test outputs
Answers: c and b
Prioritisation of Tests
Section Introduction
1.8.1
All too often testers are in the position where they do not have enough time and/or resources to run all
the intended tests. This can be because the development schedule has slipped and the software is
delivered later than expected or it could be that they are understaffed due to people leaving or
sickness.
If it is possible to put the implementation date back then this would help, but that is not always an option.
An example of this is where dates have been set external to the company - 'Electronic trading will be
available from 1/1/2001'. If a company wants to be able to offer this option to their clients then they
will need their system operational on that day, otherwise they may lose their clients to a competitor
whose system is ready.
In this instance the company will want to do the best it can in the time available, look at the results
and make the decision as to whether the system can be implemented.
How do we prioritise?
1.8.2
Tests must be prioritised in order of importance so that the most important tests can be performed
first. The best risk-reduction profile is achieved by addressing the higher risks (tests) as early as
practicable. Don’t forget that you cannot start testing without software, so agree the schedule with the
development manager in order that you receive the software in the best order.
Prioritisation basis
1.8.3
What are the factors that influence the prioritisation of our testing?
Business criticality – the risk to the business should any critical business process fail.
Severity of potential failure – the product of the likelihood and impact of the potential failures
identified by risk assessment.
Visibility of failure – the degree of negative ‘publicity’ due to a failure, for example Web systems.
1.8.4
User requirements and priorities - sometimes the users have specified what they see as important in
the User Requirements.
Likelihood of change - areas that frequently change or there is a high probability of change.
Technical criticality and complexity - areas of the system that are particularly complicated.
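Using the severity definition above (likelihood multiplied by impact), a prioritised running order falls out of a simple sort. The numeric scores below are illustrative assumptions; in practice they would come from the risk assessment.

```python
def prioritise(tests):
    """Order tests so the highest-severity ones run first, where
    severity = likelihood x impact (scores are example values)."""
    return sorted(tests,
                  key=lambda t: t["likelihood"] * t["impact"],
                  reverse=True)

tests = [
    {"id": "T1", "likelihood": 2, "impact": 5},   # severity 10
    {"id": "T2", "likelihood": 4, "impact": 4},   # severity 16
    {"id": "T3", "likelihood": 1, "impact": 3},   # severity 3
]
assert [t["id"] for t in prioritise(tests)] == ["T2", "T1", "T3"]
```

If testing is cut short, this ordering means the tests abandoned are the ones addressing the lowest risk.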
1.8.5
Whilst we can prioritise our testing, we must also consider the above points and ensure they have not
been compromised.
Prioritisation of Tests
Review Questions
Ease of testing
Answers: b and d
Lesson 1 Summary
In this lesson we have covered:
o Why is testing necessary?
o Definitions
o Reliability
o Errors and how they occur
o The cost of errors
o Why do we test?
o Characteristics of the developer
o Characteristics of the tester
o The developer/tester relationship
o Independence
o What is re-testing?
o What is regression testing?
o Selection of regression test cases
• Expected results
• Prioritisation of tests
Lesson 1 Review
A Bug
An Error
A Fault
A Failure
The probability that software will not cause the failure of a system for a
specified time under specified conditions
The likelihood that the system will not fail for a period of time specified by the
users
3. Exhaustive testing is
Choosing the correct process to ensure that the product is fit for purpose
ISO 9000
BS 7925-2
ISO 9001
BS 7925-1
When the live date is reached
Test cases
Performed to ensure that what was working is still working after another part of
the system has been changed
The re-run of tests to ensure what was not working has been corrected and is
now working
Because people can make assumptions about their own work when testing
Because not everybody is a skilled tester
15. BS 7925-1 is
16. Before executing a test we should ensure that all of the following are determined except:
17. After executing a test we should document all of the following except:
The date and time of test execution
Answers: d, b, c, d, b, c, c, d, b, b, c, b, c, d, b, b, and b
Section Introduction
2.1.1
The Waterfall model is also known as the Sequential model. It is where the development stages
occur sequentially, one after another, followed by testing in a single block at the end, after the code
has been delivered.
Unfortunately it is still widely used today due to the misconception that testing starts with the
delivery of code.
2.1.2
If you look at the project plan generated using the Waterfall model, you can see that none of the
test planning, test creation or test execution is performed until after the programs have been coded.
So what are the problems you are likely to encounter if this model is used?
2.1.3
Because testing does not start until after the code has been delivered, testers tend to
look at the delivered product and base their tests on what has been delivered rather than on the
documentation. When this happens, errors in the code are found but there is no check back to
the original requirements. This usually results in systems being delivered that are free from
code errors but are not what the users originally asked for or wanted.
In addition, as testing is started later in the SDLC any errors that are found that relate back to
the original requirements or system design will be more costly to correct.
It is quite often the case that some or all of the development stages take longer than originally
planned and it is not always possible to put back implementation dates. This leads to the testing
phase, as it is the last one, being squeezed. The test execution cannot happen until the code has
been delivered but if test preparation is performed earlier then there will be more time for test
execution.
The V Model
2.1.4
The V Model was introduced to address some of the problems associated with the Waterfall
model.
Within the V model testing is not seen as a phase that happens at the end of development. It is
recognised that for every stage of development an equal stage of testing needs to occur. It also
recognises that the test preparation, for example test planning and test creation can be
separated from test execution. The test preparation is not dependent on the code being delivered
and can occur much earlier in the SDLC.
It should be noted that the documentation shown on the left-hand side of the V model is not rigid
- organisations may call documents produced different names, may merge the documents shown,
or may have additional documents that they produce. The right-hand side of the V model is more
rigid in terms of naming conventions, but organisations may choose to exclude certain levels of
testing, depending upon the project in question (for example, if the system under test does not
integrate with any other system, then Integration Testing in the Large would not be performed).
2.1.5
When looking at the project plan for the V model you can see that the test preparation is
separated from test execution and occurs earlier in the SDLC.
2.1.6
The tests are created by reviewing and analysing the documentation. By reviewing the
documentation you are introducing a quality control point early in the SDLC. Any errors found
during the review or test preparation will be cheaper to fix than if they had been found later
during test execution.
By adhering to the V model testers are more likely to create the 'correct' tests as they are using
the requirements and system specification to derive test conditions rather than code, as has
previously been done.
Testing is now viewed as an activity to be performed throughout the development life cycle
rather than merely a phase at the end. By introducing these activities, or quality control, we are
more likely to create systems that are 'fit for purpose'.
2.1.7
The Spiral model differs quite significantly from the previous models shown. It is an incremental
approach to development and testing whereby the full system requirements may not be known at
the start of the project (i.e. the users know their basic requirements but do not really know
exactly what the complete system should look like). The initial requirements are defined,
designed, built and tested (with review points after each activity) and those requirements
enhanced and built upon in further iterations of the define, design, build and test activities. The
system is implemented at the end of the required number of iterations.
2.1.8
As previously stated, the Spiral model is an incremental approach to development and testing.
Another model is Rapid Application Development, or RAD; the reason for using this model is that
systems can be developed and implemented quickly (rapidly). With RAD, the system requirements
are fully understood at the start of the project; however, these requirements are not formally
documented. Instead they are split down into components/functions, and each one is then
defined, developed, built and tested in parallel, with a set amount of time allocated -
amendments/additions to requirements will only be accommodated if they can be fitted into the
allocated time. At the end of the time, the components/functions are assembled together into a
working version of the application.
DSDM (Dynamic Systems Development Method) was developed to put controls around RAD
developments, as the danger with this type of approach is that documentation is scant or non-
existent.
2.1.9
In order for these models to work it is essential that experienced, knowledgeable users are
included in the team. It is often the case that the knowledgeable users cannot be spared from
their day-to-day work and less experienced users are enrolled. It will then be difficult to ensure
that the user requirements are correct.
Because the system evolves with the Spiral model, or it is timeboxed with RAD, it is quite
common that little or no documentation is created. Certainly the term 'RAD' is often used as an
excuse not to document what has been done. This means that the only people who understand all
the detail of the system are the team involved. When they leave or are allocated to another
project then maintenance becomes a problem. To prevent this situation from occurring,
standards should be laid down that documentation, in whatever form (e.g. flipcharts) must be
stored.
2.1.10
Verification, Validation and Testing (V, V & T) can be thought of as good practice that can also be
applied to the earlier models.
Verification is checking that we are building the product right: each document is checked against
its preceding document to ensure functionality has not been introduced, lost or misinterpreted.
2.1.11
Validation is ensuring that we have built the right product - the product the user actually intended.
2.1.12
As discussed in Lesson 1, by detecting errors and ensuring they are fixed we are improving the
quality of the system and reducing the risk of failure.
Testing is the confirmation that we have built the product right and have built the right product.
Waterfall Model
V Model
W Model
Answers: b, b and a
Economics Of Testing
Section Introduction
• The benefits of early test design
• Finding defects in specs through test preparation
• The cost of faults versus the cost of testing
2.2.1
As we discussed when looking at the V model it is possible to split the test preparation from the
test execution. The test preparation, which includes test planning and test design can then be
performed earlier in the development life cycle.
2.2.2
By performing test design at the earliest possible stage in the SDLC, it follows that any errors
identified at this point will be rectified and will not be propagated through to the later stages of
system development. Therefore, the quality of the system is improved.
If an error is found in the test specification during system testing then it could mean that rework
is needed in the development stage, and that component testing needs to be repeated before
system testing can be repeated. If however the test design is carried out either at the same time
or prior to development then any errors detected during test design can be reported to
development saving both time and money.
2.2.3
It doesn't matter what specification we are talking about; it may be the Customer Requirements,
Functional Specification, Technical Specification or Module Specification - the same principle
applies.
In order to work out exactly what you are going to test, and to create your test cases, you need
to understand the detail of the specification. This gives rise to questioning any ambiguities and
identifying any errors, omissions or inconsistencies.
If faults are not detected in the documentation then the development work based on that
documentation will be wrong. One fault in the documentation can lead to many faults in
development, which multiplies the effect of the original error. Sometimes something that would
be simple to fix early on becomes complex and costly to fix later.
2.2.4
Testing is not a cheap activity - by finding a fault you are in fact causing more work to be done.
The fault and original error need to be corrected and any rework of any subsequent phases
performed. This costs both in time and money.
However, if testing is not performed and an undetected fault results in a system failure, the
results can be quite catastrophic. The implementation of a metropolitan ambulance system is a
good example: if a failure results in death, no amount of money will bring the dead back to life.
Where safety-critical systems are concerned, the cost of testing cannot be of prime importance.
It is difficult to measure exactly what the costs of fixing a failure are and you have to consider
not only correcting the fault itself but also the repercussions of it - any other areas that might be
affected. You can take this a step further as well and think about correcting the process that let
the fault slip through in the first place.
Organisations need to assess the cost of failure for their individual systems and put as much
testing in place as is warranted. Online trading systems have said that in the event of failure it
costs them £100,000 per minute in lost revenue.
Economics Of Testing
Review Questions
1. Why can early test design reduce the cost of fixing faults?
When a failure occurs it may take a long time to find the fault
There is no need to re-test faults found in early test design
Answers: c
2.3.1
Each organisation should have a Test Strategy in place. The Test Strategy will determine whether
there is to be a Test Plan for the whole project or for individual phases of testing within the
project. Normally, it would be defined at the project level but there may well be further
breakdowns of that plan for the individual phases of testing. Each of these plans would contain
more detail relating to that specific phase of testing.
We are going to look in more detail at what should be included in a Test Plan.
2.3.2
At the start of the project or phase it is important to identify exactly what you are going to test
and what you are not going to test. If this is clearly defined then there can be no
misunderstanding of what the scope of the testing is.
Risk Analysis
2.3.3
We have already determined that testing is a 'risk reduction process' and discussed the benefit of
placing more importance on the high risk areas, but who decides on what is high risk?
From a business critical point of view it is the users who must be consulted. The risks will be
determined following discussion and documented in the Test Plan.
Once this analysis has been done, emphasis should be placed on testing those high risk areas
with sufficient resource and time allocations.
2.3.4
However, apart from business criticality there are also other factors that need to be taken into
account when analysing risk. For example, the developers can give insight to what they see as
technically difficult or complex and the testers will know which particular areas are prone to error.
Risks to the project could also encompass time constraints, resource constraints, poor quality
code and poor quality testing. The test manager should be aware of these aspects.
The types of risk considered will depend upon the level at which the test plan is being written. A
Project Test Plan will address risks at a higher level; an Acceptance Test Plan might focus on
business criticality, whereas a Component Test Plan may focus on the technically complex areas.
Whatever the level, the risks must be documented in the Test Plan.
Wherever possible the Test Plan should also contain the contingencies. It is not enough just to be
able to identify what the risks to the project are, you must also specify what contingencies you
have put in place to manage them should they occur. For example, if there isn't sufficient time or
resource then it has to be stated that some testing may be jeopardised, and management will
have to make a judgement on that.
Regarding the quality of code and testing, although these are risks, there may well be quality
controls in place that could reduce this risk - if this is the case, then state it.
Test phases
2.3.5
If the Test Plan is being written at the project level then it will state what phases of testing are
going to be carried out for this particular project and what they include along with the key
responsibilities.
Each project has different requirements and not all phases are included for every project. In
some cases some of the phases are combined or not needed at all. For example if the system
doesn't integrate with any other system then integration testing in the large will not be needed.
More detail on the different phases of testing is given later in this lesson.
2.3.6
We have already discussed the concept of exit criteria and it is here in the Test Plan that they
would be documented.
The completion or exit criteria are used to determine when testing of each phase is complete. The
exit criteria from one phase of testing will usually be the entry criteria to the next phase. For
example the exit criteria from system testing will be the entry criteria for acceptance testing.
The Test Plan will state exactly what the deliverables are for each phase of testing. For example
the test scripts along with execution details and results from one phase may need to be provided
as entry criteria for the next. A test completion report documenting what has been done along
with results and reasons against what was intended to be done is also a common deliverable.
2.3.7
There are many things that need to be considered in the early stages of a project with regard to
the test environment. These need to be planned for in advance because quite often they can
involve a significant amount of cost and will have an effect on the project budget.
It is also often the case that other areas in the organisation are responsible for setting up the
environment and they need to be aware of what is expected of them and schedule their activities
accordingly. You do not want a delay to the testing because certain software had not been loaded
onto a PC or the PCs being used were not of a high enough specification.
The system being tested may link to another system external to the company, and it must be
ensured that the two systems can communicate. This may involve firewall configuration at both
sites or the setting up of additional comms lines.
It is in the Test Plan that all the test environment requirements would be stated.
2.3.8
You also need to plan in advance what test data you are going to use for your testing.
There are advantages and disadvantages of using live data and this needs to be considered. It
may be relatively quick to take a copy or an extract from a live file (a file that is currently in use
in the production environment) but will it satisfy all your testing requirements? There may be
new features in the new system that will not be covered by existing files, or features that have
been removed that now make the files invalid.
To comply with the Data Protection Act, data of a confidential nature may have to be changed.
Usually you are dealing with large quantities of data and it is impossible to check all the results.
2.3.9
The alternative to using live data is to create your own test data.
The disadvantage of this is that it can be quite time consuming to do, the advantage being that
you can create the specific conditions that you need to test the system and it is easier to check
the results.
If you are creating your own test data, you will know what has been used to test the application
and can repeat the input.
There are no Data Protection Act implications if you use your own test data.
Where possible, it can be very useful to use live data as a confidence check at the end of a phase
of testing.
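The advantages above can be illustrated with a short sketch. This assumes a hypothetical record layout and `make_test_customer` helper (neither comes from the course or any real system); the point is that hand-created data lets you target specific conditions, repeat the input exactly, and avoid Data Protection Act concerns.

```python
# Creating our own test data instead of extracting it from live files.
# The layout and helper are illustrative only.

def make_test_customer(customer_id, balance):
    return {
        "id": customer_id,
        "name": f"TEST-CUSTOMER-{customer_id:03d}",  # clearly not a real person
        "balance": balance,
    }

# Target the specific conditions we need, e.g. balances around zero:
cases = [make_test_customer(i, b) for i, b in enumerate([-0.01, 0.0, 0.01])]

assert [c["balance"] for c in cases] == [-0.01, 0.0, 0.01]
assert all(c["name"].startswith("TEST-") for c in cases)
```

Because the inputs are known, the expected results can be worked out in advance and the whole run repeated at will.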
Documentation requirements
2.3.10
Different documentation will be available at the different levels of testing. The documentation will
be more detailed the further you progress through the project. It is by analysing these
documents that the scope of the testing can be determined and documented in the Test Plan. It
must be remembered that you need to scope not only the areas of functionality but also the non-
functional, or quality, attributes.
The performance requirements of the non-functional attributes will also be determined from the
documentation.
Ideally you would want the documentation to be created or amended, but in practice this rarely
happens. It is not a solution for a tester to say 'I can't do my job because I don't have any
worthwhile documentation!' It is more appropriate to investigate what the requirements are by
seeking out the knowledgeable users and discussing them. These discussions/meetings should
always be documented.
2.3.11
The Institute of Electrical and Electronics Engineers has developed the IEEE 829-1998 Standard
for Software Test Documentation that aims at providing a standard structure to be used when
creating software testing documents, including Test Plans.
The standard specifies the form and content of the individual test documents, such as Test Case
Specifications, Test Logs, Test Incident Reports.
For a Test Plan, IEEE 829-1998 describes its purpose as prescribing the scope, approach,
resources and schedule of the testing activities, and identifying the items being tested, the
features being tested, the testing tasks to be performed, the personnel responsible for each task
and the risks associated with the plan.
The various headings that are prescribed for the Test Plan document are described on the
following pages.
2.3.12
Test Plan Identifier - a unique identifier assigned to the Test Plan. It may also indicate its version
and the level of software that it is related to.
Introduction - this is essentially the executive summary part of the Test Plan. It should state its
purpose and it may include references to documents or plans that are relevant to this Test Plan.
Test Items - this should include the items that are intended to be tested within the scope of the Test
Plan. Depending upon its level, this may detail different applications, business functions within an
application or an individual program.
Features to be Tested - this should identify all software features and combinations of software
features to be tested. The level of risk associated with each feature could also be identified at this
point.
Features not to be Tested - this should include all features and significant combinations of
features that will not be tested and the reason why.
Approach - this describes the overall approach to testing. For each major group of features or
combination of features, the approach should be specified that will ensure that the features are
adequately tested.
The approach should be specified in sufficient detail to permit identification of the major testing
tasks and estimation of the time required for each one.
The approach should also specify the level of comprehensiveness desired for testing. The
techniques that will be used to judge the comprehensiveness of the testing should also be
specified.
2.3.13
Item pass/fail criteria - this will specify the criteria to be used to determine whether each test
item has passed or failed.
Suspension criteria and resumption requirements - this will specify the criteria to be used to
suspend all or a portion of the testing activity on the test items associated with the Test Plan. The
testing activities that must be repeated when testing is resumed must also be specified.
Test deliverables - this will identify the deliverable documents, for example the Test Plan, Test
Design Specifications, Test Case Specifications, Test Logs and Test Incident Reports, Problem
Reports and corrective action. Test Tools and their output may also be included.
Testing tasks - this will identify the set of tasks necessary to prepare for and perform testing. Any
special skills and intertask dependence between tasks should also be identified.
Environmental needs - this will specify both the necessary and desired properties of the test
environment. It should also detail the physical requirements, such as hardware, communications
and system software. The level of security required for the test facilities should also be specified.
2.3.14
Responsibilities - this should detail key areas of responsibility. The responsibilities should include
managing, designing, preparing, executing, witnessing, checking and resolving. In addition those
responsible for providing the test items and the test environment should also be specified.
Staffing and training needs - the test staffing needs by skill level should be specified as well as
training options for providing the necessary skills.
Schedule - the schedule of the testing to be performed should be specified and be based on
realistic and validated estimates. All relevant milestones should be identified with their
relationship to the development process identified. The milestones will help in identifying any
slippage in the schedule caused by the test process.
Risks and contingencies - this should detail the high-risk assumptions of the Test Plan. Risks
could include lack of resource, hardware being unavailable and late delivery of software. It is
important that contingency plans are specified for each risk.
Approvals - the identification of who is required to approve the Test Plan and enable it to proceed
to the next level.
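The sixteen prescribed headings described above can usefully be captured as data, so that a draft plan can be checked against the standard. The `check_plan` helper below is an illustrative sketch, not part of IEEE 829 itself.

```python
# The Test Plan headings prescribed by IEEE 829-1998, as listed above.
IEEE_829_TEST_PLAN_SECTIONS = [
    "Test Plan Identifier",
    "Introduction",
    "Test Items",
    "Features to be Tested",
    "Features not to be Tested",
    "Approach",
    "Item Pass/Fail Criteria",
    "Suspension Criteria and Resumption Requirements",
    "Test Deliverables",
    "Testing Tasks",
    "Environmental Needs",
    "Responsibilities",
    "Staffing and Training Needs",
    "Schedule",
    "Risks and Contingencies",
    "Approvals",
]

def check_plan(draft_headings):
    # Return the prescribed sections missing from a draft plan.
    return [s for s in IEEE_829_TEST_PLAN_SECTIONS if s not in draft_headings]

missing = check_plan(["Test Plan Identifier", "Introduction", "Approach"])
print(f"{len(missing)} sections still to write")  # 13 sections still to write
```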
1. The high-level test plan, compiled according to IEEE 829, includes all the following
sections except
Approach to testing
Testing standards
Environmental needs
Test deliverables
Mandatory information to be included in every Test Plan
Answers: b, a and c
Component Testing
Section Introduction
• Definition
• BS 7925-2 Software Component Testing Standard
• The Component Test Process
Definition
2.4.1
Component testing is the lowest level of testing. It refers to the testing of the smallest items, the
building blocks of the system.
2.4.2
Component testing is also often known as unit testing. Where the individual component or unit is
a module it can be called module testing, and where it is a program it can be called program
testing.
2.4.3
The Standard defines a generic process for the testing of software components, which we will
now cover. This process follows the same principles as we discussed earlier in Lesson 1.
The Standard also defines test case design techniques and test measurement techniques. The
techniques are defined to help with the design of test cases and to quantify the testing
performed. Some of these will be covered later in Section 4.
2.4.4
Before component testing starts the Component Test Strategy and Project Component Test Plan
are specified.
The Project Component Test Plan specifies the dependencies between components and their
sequence. This may be influenced by overall project management and work scheduling
considerations.
Component Testing
Review Questions
BS 7295-2
BS 7529-2
BS 7925-2
2. The Standard for Software Component Testing contains all of the following except
Test case design techniques
Coding, Debugging, Sign-Off
4. The standard for component testing, BS 7925-2, contains all the following except
A component test process
Answers: d, b, d and c
Integration Testing In The Small
Section Introduction
• Definition
• Top-down testing
• Bottom-up testing
• Functional Incremental Testing
• Big Bang Testing
Definition
2.5.1
Integration testing in the small is the next stage on from component testing. It is concerned with
bringing individual components together to test them as a whole. The individual components may
work very well on their own but you need to check the interfaces and interaction between them.
For example, a file that is output from one program may be the input to another. If you have two
different developers working on the different programs they may have created the file differently.
This should not happen if the file layout has been documented and is available to both
developers; however, this is typically where errors can occur.
Through integration testing in the small we start assembling components into sub-systems. We
can then integrate the sub-systems together to create the overall system being delivered.
Top-down testing
2.5.2
There are different approaches for bringing the individual components together, the first one to
look at is top-down testing.
As the name suggests you start from the top level of the component hierarchy. Stubs are used to
simulate any components that the top level calls. Stubs are dummy components; they have a
name and they usually return a message like 'program name called' but other than that contain
no functionality.
Once the top-level component has been tested the components at the next level down are
introduced with the components that they call being replaced by stubs. This continues down each
level until all the components have been introduced and no more stubs are needed.
2.5.3
If we look at this example where component A is at the top level, B1 & B2 are at the second level
and C1, C2, C3 & C4 are at the third level.
2.5.4
In order to test component A first, a stub needs to be put in place for component B1 to simulate
the real B1. The same would apply for B2.
2.5.5
Once component A has been tested then you can move down the hierarchy and test component
B1.
2.5.6
Once A and B1 have been tested together then components C1 and C2 can be introduced.
A similar process will occur down the other leg of the hierarchy. In the first instance B2 will be
introduced with C3 and C4 being simulated with stubs. Once A and B2 have been tested together
then C3 and C4 can be introduced.
2.5.7
Because you are starting from the top and gradually adding in more functionality as it becomes
available you will have a working version of the system very early in the project, even if it
doesn't do everything. This will help in demonstrating that the design is correct.
Because more functionality is being added incrementally, any faults emerging will have
something to do with the new components or the interface to them.
However this approach will need a stub created for every component other than the top one.
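A minimal sketch of the idea, assuming the hypothetical A / B1 / B2 hierarchy above (none of these component names come from a real system):

```python
# Stubs: dummy components that report they were called, nothing more.
def stub_b1(data):
    return "component B1 called"

def stub_b2(data):
    return "component B2 called"

def component_a(data, b1=stub_b1, b2=stub_b2):
    # Top-level component under test; its collaborators are injected
    # so stubs can stand in until the real B1 and B2 are ready.
    return [b1(data), b2(data)]

# Component A can be tested before B1 or B2 exist:
print(component_a({"order": 1}))
# ['component B1 called', 'component B2 called']
```

As the real B1 and B2 are delivered they simply replace the stubs, working down the hierarchy.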
Bottom-up testing
2.5.8
Another approach to integration testing in the small is bottom-up testing. This is the opposite of
top-down testing: the lowest-level components are tested first.
In this approach 'Drivers' are used to simulate the higher-level components that might not yet
have been written. These are also commonly known as test harnesses and have to be put in
place to simulate the call to the component.
Again the process is repeated but this time working up the hierarchy.
2.5.9
Let's take the same example with the three levels of components.
2.5.10
This time we are working from the bottom up and start with testing components C1 and C2.
2.5.11
The same applies for components C3 and C4. A 'Driver' needs to be put in place to simulate
component B2.
2.5.12
This continues up with components B1 and B2 being introduced with a 'Driver' simulating
component A.
2.5.13
Because the approach is incremental you do not have to wait until all the components are ready
before you start to test.
As with top-down testing, because functionality is being added incrementally, any new faults or
failures emerging will have something to do with the new components or the interfaces to them.
Because you are starting from the bottom it is not until the top-level component is added at the
end that the complete system is in place. This can lead to design errors being detected late,
which can, as we discussed earlier, be very costly.
Drivers or Test Harnesses will need to be created for all levels other than at the lowest level.
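The mirror image of the stub sketch: here the hypothetical low-level components c1 and c2 already exist, and a driver (test harness) stands in for their not-yet-written parent B1. All names are illustrative.

```python
def c1(x):
    return x * 2        # low-level component, tested first

def c2(x):
    return x + 10       # low-level component, tested first

def driver_for_b1(value):
    # Test harness: makes the calls the real B1 would eventually make,
    # so C1 and C2 can be exercised before B1 exists.
    return c1(value), c2(value)

doubled, shifted = driver_for_b1(5)
assert (doubled, shifted) == (10, 15)
```

When the real B1 arrives, it replaces the driver and the process repeats one level up.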
2.5.14
By performing a risk assessment, the critical low-level components can be identified as well as
the widely used high-level components. Initial testing can then be focussed on these areas. Both
'Stubs' and 'Drivers' will need to be used.
This is probably the most common of all the approaches used on projects.
Big-bang testing
2.5.15
Big-bang testing is where all the components are tested together at the same time. Instead of
testing some of them together and gradually introducing more as they become available you wait
until they are all ready.
This makes it much more difficult to locate faults, as they could be in any one of the components;
the more components there are, the more difficult fault location becomes. Also, the more
components there are, the longer you have to wait until they have all been developed.
Any one of the previous three incremental approaches is recommended for larger systems.
However, if this is impractical and the big bang approach is the only option, then it may be that
old and new systems are run in parallel - this will enable the new system to be fully validated
against the old before implementation occurs.
Integration Testing In The Small
Review Questions
Test harnesses
Simulators
Stubs
Answers: c and d
Section Introduction
• Definition
• Requirements-based functional testing
• Business process functional testing
Definition
2.6.1
The functionality of the system is what the system does; it is the processes that it carries out. So
functional testing is testing to see that the system does what it is supposed to do.
For example showing that an account is opened correctly, the correct balances are calculated and
a statement is produced correctly.
2.6.2
Requirements based functional testing is where you are testing the functionality based on the
requirements. The requirements must be documented and the tests are derived from this
documentation rather than the delivered code.
2.6.3
Because the testers are writing their tests based on the requirements rather than the developed
code, they can start preparing their tests as soon as the requirements have been agreed. In some
cases they may be involved in the review of the requirements prior to agreement. By preparing
their tests earlier in the SDLC then any errors detected will be cheaper to fix and any valuable
test execution time will not have to be spent on test creation.
Testing against the requirements as opposed to the code will also ensure that the users are
getting a system that conforms to those requirements, which are actually what they want. It is
the requirements document that forms the contract of 'what is asked for' against 'what is
delivered'.
2.6.4
Business process functional testing is where testing is based upon the business processes of the
system.
The way in which different types of users will use the system is analysed and business scenarios
are created to reflect this use. For example, if it is a person's job to add new customer details to
a database, a scenario may be created where six new customers are added, repeatedly, as the
user would. Previous system testing may have checked that a new customer can be added but
this may identify further issues such as fields being cleared between adding records.
One important point to note is that business process functional testing should aim to test all the
functionality provided by the system, not just the most commonly used areas.
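The six-customers scenario above can be sketched as a test, assuming a hypothetical CustomerForm class (not from any real system): add records in a row, as the data entry clerk would, and check the form is cleared between each one - the kind of fault a single-record test would never reveal.

```python
class CustomerForm:
    def __init__(self):
        self.fields = {}
        self.saved = []

    def fill(self, name):
        self.fields["name"] = name

    def save(self):
        self.saved.append(dict(self.fields))
        self.fields.clear()   # omitting this clear is exactly the fault
                              # the repeated scenario is designed to catch

form = CustomerForm()
for i in range(6):
    form.fill(f"Customer {i + 1}")
    form.save()
    assert form.fields == {}, "fields not cleared between records"

assert len(form.saved) == 6
```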
2.6.5
User profiles are based on which functionality of the system each person will use. Not all users will
use all the functions of the system. It is important to understand who the end users of the
system will be and which features they will use.
If you take the above example the system consists of 7 Functions: A, B, C, D, E, F & G. User 1 is
the Manager and he will use functionality relevant to him, A, C & F. User 2 is the Supervisor and
she will use functionality relevant to her, B, C, & G. User 3 is the Data Entry Clerk and he will use
functionality relevant to him, C, D, & E.
There are some functions that they will all use such as C.
Where testing is being based on user profiles it is important that scenarios are set up for all the
different types of users to obtain full test coverage of the system.
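The worked example above can be expressed as data, checking that the three user profiles between them cover every function of the system (the function letters and role names are those used in the example):

```python
functions = {"A", "B", "C", "D", "E", "F", "G"}

user_profiles = {
    "Manager":          {"A", "C", "F"},
    "Supervisor":       {"B", "C", "G"},
    "Data Entry Clerk": {"C", "D", "E"},
}

# Union of everything any profile exercises:
covered = set().union(*user_profiles.values())
uncovered = functions - covered

print("uncovered functions:", sorted(uncovered))  # here: none
```

In this example the three profiles together reach all seven functions; any function left in `uncovered` would need an extra scenario.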
1. Which of the following statements about Functional System Testing is not true?
Requirements-based testing is a recognised approach
Answers: b and d
Section Introduction
• Definition
• Non-functional requirements
• Non-functional test types
Definition
2.7.1
Non-functional system testing is about testing everything that doesn't relate to the functionality
of the system. This type of testing covers aspects such as ease of use and performance. The
system may provide all the necessary functionality but if it is not easy to use or does not perform
very well then it will not be 'fit for purpose'.
Non-functional requirements
2.7.2
The non-functional requirements should be considered at the start of the project and documented
along with the functional requirements in the User Requirements, although in practice this
doesn't always happen.
There are some non-functional attributes such as usability that are not always specified but you
are still expected to test. 'We didn't think we had to tell you that the system must be easy to use,
we assumed you would know that' is a common complaint.
When non-functional requirements are specified they can be very broad statements such as 'The
system must be easy to use' or 'The system must perform well'. These requirements as they
stand cannot be tested and further detail is needed. In the case of usability you need to ask
'What makes the system easy to use?' Is it that it conforms to company standards, that help text
is provided or that the screen flow reflects the business flow? You can then test that it does
conform to company standards, that there is help text and that the screen flow does reflect the
business flow.
In the case of performance it needs to be stated what is expected of the system. For example: 'If 20 users all query the database at the same time, and the database is of a specific size, the response time must be 2 seconds or less'. The number of users will vary, the size of the
database will vary, the types of query they are performing will vary and the response times will
change accordingly, but the expectations in the different instances must be stated. Tests can
then be performed against these requirements.
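As a sketch of how such a requirement might be checked, the following simulates 20 concurrent users and measures the slowest response. The run_query function is a hypothetical stand-in for the real database query, and the threshold is the 2-second figure from the example above:

```python
# Illustrative sketch of a performance check: "with 20 concurrent users,
# each query must respond in 2 seconds or less". run_query is a
# placeholder for the real operation under test.
import time
from concurrent.futures import ThreadPoolExecutor

MAX_RESPONSE_SECONDS = 2.0
CONCURRENT_USERS = 20

def run_query():
    """Placeholder for the real database query."""
    time.sleep(0.01)  # simulate a fast query

def timed_query():
    start = time.perf_counter()
    run_query()
    return time.perf_counter() - start

def check_performance():
    """Fire CONCURRENT_USERS queries at once; return the slowest response time."""
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        times = list(pool.map(lambda _: timed_query(), range(CONCURRENT_USERS)))
    return max(times)

worst = check_performance()
```

In a real performance test the load profile, database size and query mix would each be varied, with a stated threshold for every combination.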
2.7.3
Load is concerned with identifying the operational profile. The live activities must be identified.
What happens when? Who does what and how often? What type and quantities of traffic are
generated on the system? When are high traffic activities scheduled? (for example, when are end
of tax year returns produced on the system?). Load is a means to an end!
Performance testing looks at response times for individual transactions or processes, with a
specific load profile.
Performance requirements should state the various response levels to be achieved under different conditions (e.g. fullness of master files). Remember to reflect the production environment (ref. load testing), including security features (e.g. virus checkers and firewalls).
Performance criteria are defined as the response time between 'x' and 'y', with a specific loading, measured between two known points.
2.7.4
With stress testing we are interested in knowing what happens when the limits of the system are exceeded. Will the system 'soft land'? Will it fail? Will it fail gracefully? Will it ignore the excess information, or lose it?
It also involves operating the application at the maximum design limit for a given (short) period of time. Under these circumstances, will the performance degrade?
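The 'fail gracefully or lose data' question can be illustrated with a toy example. The bounded buffer below is purely hypothetical; it shows the graceful behaviour a stress test would look for, namely excess input being rejected and counted rather than silently lost:

```python
# Illustrative sketch of stress-testing behaviour: push more input than
# the design limit and check the system refuses the excess gracefully
# instead of losing it silently. BoundedBuffer is a toy stand-in.

class BoundedBuffer:
    def __init__(self, limit):
        self.limit = limit      # the design limit
        self.items = []
        self.rejected = 0       # excess input is counted, not lost silently

    def submit(self, item):
        if len(self.items) >= self.limit:
            self.rejected += 1  # graceful: refuse and record the overflow
            return False
        self.items.append(item)
        return True

# Stress test: push 150% of the design limit through the buffer.
buf = BoundedBuffer(limit=100)
results = [buf.submit(i) for i in range(150)]
```

A system that 'soft lands' behaves like this buffer: everything up to the limit is processed, and the excess is visibly rejected rather than corrupting or discarding data.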
Security takes many forms: physical security (servers in a locked room), system access security (user profiles and password protection), audit trails (records of who did what, and when), and defence against attacks from the web (firewalls etc.).
2.7.5
Usability testing measures the suitability of the software for its users, and is directed at
measuring the effectiveness, efficiency and satisfaction with which specified users can achieve
specified goals in particular environments or contexts of use. Is the system effective, efficient,
error tolerant, engaging and easy to learn?
Storage is concerned with ensuring that the predicted database capacity is correct, that there is
enough server capacity to host the application – not only now, but in the future.
2.7.6
Volume testing is the testing of large volumes of data or transactions with the application. We
test within the design limits, using minimum (empty files), the ‘normal’, maximum and over the
maximum number of transactions and amount of standing data. Note that for functional testing
we would tend to use small volumes of data – we want to show the functionality works, and in
volume testing we show that those same functions work with large volumes.
Volumes may affect other areas of testing. For example, volume is related to performance: we may be looking at performance degradation as volume increases.
The volumes of data and transactions in the system may affect apparently unrelated areas of the system; for example, the volume of transactions from one process may swamp another process's ability to access the operating system queues.
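The volume levels described above (empty, normal, maximum and over the maximum) can be sketched as a simple boundary check. The process_batch function and the design limit are illustrative assumptions, not part of any real application:

```python
# Illustrative sketch of the four volume-test levels: empty, normal,
# maximum and over-maximum. MAX_RECORDS is an assumed design limit and
# process_batch is a stand-in for the real application function.

MAX_RECORDS = 10_000

def process_batch(records):
    """Placeholder function under test; rejects batches above the design limit."""
    if len(records) > MAX_RECORDS:
        raise ValueError("batch exceeds design limit")
    return len(records)

# The four volume levels to exercise.
volumes = {
    "empty": 0,
    "normal": 1_000,
    "maximum": MAX_RECORDS,
    "over maximum": MAX_RECORDS + 1,
}

results = {}
for name, count in volumes.items():
    try:
        results[name] = process_batch(list(range(count)))
    except ValueError:
        results[name] = "rejected"
```

The same functions shown to work in functional testing with small data sets are re-run here at each volume level, including one step beyond the stated limit.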
Installability testing ensures that the application installs on all designated platforms and that it can be de-installed. It should also be considered what happens if the install fails part way through.
2.7.7
Documentation testing is concerned with the content of user manuals, installation guides, on-line
and hardcopy help. Is the content suitable for the reading age of the user?
Recovery testing is concerned with ensuring that the system can be recovered after a failure. Is
the data up to the point of failure recoverable? Can users get back into the system?
In Summary
2.7.8
To summarise: whereas functional testing is more concerned with what a system does, non-functional testing is more concerned with how well it does it.
Non-functional system testing is not always given the importance that it deserves. This may be because it is harder to perform, or because it is not really understood. Quite often the system faults found are due to the non-functional attributes rather than the functional areas.
2.7.9
As mentioned earlier, ease of use and the performance of the application are areas that cause many problems. A system may be extremely well designed and give the users everything they require, but if the response times are slow, then the system will be perceived as being unfit for use.
However, when the non-functional attributes do meet their requirements, they bring additional quality to the system. This is why the non-functional attributes are sometimes known as the quality attributes.
All systems are written with a purpose in mind, usually with the intention of making money and being profitable. It is often the non-functional attributes that give a system the competitive edge over its competitors.
Review Questions
3. Volume, Usability, Performance, Storage, Installability and Security are collectively known as:
The six key non-functional areas to test
Hard to test
Quality attributes
Answer: Quality attributes
Integration Testing in the Large
Section Introduction
• Definition
• Types of integration
• Risks associated with integration
• Incremental approach to testing
• Non-incremental approach to testing
Definition
2.8.1
The definition of integration testing in the large is the same as the one for integration testing in the small. The difference here, though, is that we are referring to integrating separate systems rather than separate components. The activity is the same, but in this instance it is testing the interaction and interfaces between systems.
Types of integration
2.8.2
There are different types of integration to consider. It may be that you need to test the interfaces
between two internal systems. The development may be internal or outsourced but in either case
you would have some influence over the design of both systems. You would be testing both
systems individually before integration testing. Another instance is where a new system is being
developed to integrate with an existing one.
Another type of integration is a system integrating with a package. This may be providing information to the package or extracting information from it. Although it is sometimes possible to customise packages, generally you would not have the same level of influence over what data is captured or how the data is stored. In some instances the data may need to be converted in order to be accepted by the package.
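Such a conversion might look like the following sketch. The field names and target format are purely illustrative assumptions about what a package could require:

```python
# Illustrative sketch of converting internal data into the (assumed)
# format a package expects: upper-case names and ISO dates. None of the
# field names come from a real package.
from datetime import datetime

def convert_for_package(record):
    """Map an internal record to the hypothetical format the package accepts."""
    return {
        "CUST_NAME": record["name"].upper(),  # assumed: package wants upper case
        "CUST_DOB": datetime.strptime(record["dob"], "%d/%m/%Y").strftime("%Y-%m-%d"),
    }

converted = convert_for_package({"name": "Smith", "dob": "01/02/1999"})
```

Integration testing would then check both that the conversion is correct and that the package genuinely accepts the converted records.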
The last type of integration to look at is where a system integrates with an external system.
Information needs to be passed between companies - this can be via messages containing data in
a particular format and to a particular standard, for example, EDI - Electronic Data Interchange.
2.8.3
When the systems are integrated together there are various problems that can arise, the most
common being that the correct information is not passed from the source system to the target
system. This is often due to the misunderstanding or misinterpretation of the requirements.
Another problem is where the data passed by the source system corrupts the target system's data. It may be that the data is accepted by the target system but causes corruption of the target database, again through misunderstanding or misinterpretation of the data requirements.
The transfer of data between applications may be time critical, for example if you think of a web-
based front-end system passing information to a back-end system - if the data is not processed
by the back-end system in the timeframes required, the true status of the data is uncertain and
further enquiries/updates on that data could provide incorrect information.
When data is transferred between systems there is always the risk that data will get lost in the
transfer, especially if it is being transmitted externally.
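One common way to detect such loss is to send a checksum with the payload and verify it on arrival. The framing below is an illustrative sketch, not a real transfer protocol:

```python
# Illustrative sketch of detecting data loss in transfer: a checksum is
# sent with the payload and re-checked on arrival. hashlib is standard
# library; the framing itself is an assumption, not a real protocol.
import hashlib

def package_for_transfer(payload: bytes):
    """Return the payload together with its SHA-256 checksum."""
    return payload, hashlib.sha256(payload).hexdigest()

def verify_on_arrival(payload: bytes, checksum: str) -> bool:
    """True if the received payload matches the checksum sent with it."""
    return hashlib.sha256(payload).hexdigest() == checksum

data, digest = package_for_transfer(b"order 1234, qty 5")
ok = verify_on_arrival(data, digest)
corrupted = verify_on_arrival(data[:-1], digest)  # simulate truncation in transit
```

Tests for integration in the large can deliberately truncate or corrupt messages to confirm that the receiving system detects, rather than silently accepts, damaged transfers.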
2.8.4
When performing Integration Testing in the Large, the approach can be taken whereby the
systems are integrated in stages, possibly based on different areas of business functionality.
If this method is adopted, any errors that occur during integration can be easily located as
functionality is introduced in stages.
By adopting this approach, any errors that could result from integration, for example issues with
firewalls, will be identified at an early stage within this testing phase.
2.8.5
When performing Integration Testing in the Large, the approach can also be taken whereby the
systems are fully integrated when testing commences.
If this approach is taken, any errors that occur may be difficult to locate, as all system functionality is available from the start of testing.
By adopting this approach, any errors that result from integration (for example, issues with firewalls) will be identified at a relatively late stage within this testing phase.
Acceptance Testing
Section Introduction
In this section we will cover:
• Definition
• User Acceptance Testing
• User Acceptance Testing Approach
• Contract Acceptance Testing
• Alpha Testing
• Beta Testing
Definition
2.9.1
Acceptance testing is the testing of a deliverable, for acceptability, by the user of that
deliverable. There are different types of acceptance testing, for example user acceptance testing,
contract acceptance testing, alpha testing and beta testing - we will have a look at each of these.
2.9.2
User acceptance testing is generally the final stage of system validation. Once the 'testers' have
finished then the 'users' or 'customers' have the opportunity to perform their tests. Sometimes
an acceptance test team consists of both testers and users although the users should influence
the testing performed.
The end user should test all deliverables. The situation should not arise where a deliverable is
implemented and the users then realise it is not what they wanted.
Acceptance criteria should be stated in advance - it is the job of the 'builder' to ensure that the
system built meets those criteria.
2.9.3
The acceptance tests are based on the business activities performed by the users of the system.
There may be different business processes dependent on the user profile. For example in an
online ordering system the person recording the orders will perform different processes to a
manager who wants to see how many orders have been placed. All business activities must be
included in the tests performed.
As well as ensuring coverage of all the business activities the acceptance testing should also
include tests on the performance and quality attributes in expected business situations. For
example it may be that a user can complete a particular activity but they do not find it very
intuitive or easy to do so.
2.9.5
Acceptance testing should, where possible, be conducted in an environment that is as near to the live environment as possible. This will give a more realistic picture of how well the system is performing.
It has to be remembered that it is unlikely that the users will be using this system in isolation.
There may be other activities that the users perform in other systems during a normal working
day that have an effect on the system under test.
If users perform acceptance testing at their desks there is the danger that little testing gets done.
This is because the testing competes with their normal daily work. In these cases faults can be
missed due to the lack of time or attention to detail.
2.9.6
Contract acceptance testing is where a third party performs the testing. This may still be internal
within an organisation, perhaps a different department, or in some cases external.
As the name suggests, the testing carried out is based on an agreed contract. The scope of the testing and the acceptance criteria are laid out in the contract. Any exclusions would also be stated. If the acceptance criteria are met, the assumption is that the system is 'fit for purpose'.
2.9.7
It is very important that there are no ambiguities in the contract and that there is no room for
misinterpretation. The third party will only test according to the contract and you do not want
there to be any excuses or reasons why particular tests or a type of testing is not carried out.
Outsourcing is generally becoming more popular within IT and this is also true for testing
activities. If the testing is being outsourced then the appropriate contract must be put in place.
Alpha Testing
2.9.8
Alpha testing is another type of acceptance testing. It is prospective users of the system using it
as they intend to once it is live. The software remains under the control of the software owners
during alpha testing.
2.9.9
In Alpha testing the tests are carried out in a simulated or true operational environment and
reflect how the system will be used. The testing only starts when a stable version of the software
is available, typically after system testing.
If the system has been developed purely for use within an organisation then the prospective
users within it would perform the alpha testing. If the system were being released externally then
the external users could perform testing, but this would be performed at the software owner's
site, not their own site.
The key factor is that the system is not yet released externally.
Beta Testing
2.9.10
Beta testing is the next stage on from alpha testing. Again it is prospective users of the system
using it as they intend to once the system is live but this time the testing is carried out at their
own site.
2.9.11
Again the tests are carried out in a simulated or true operational environment and will only start
when a stable version of the software is available.
This type of testing is particularly useful for systems that are going to be used in many different
environments. Take for example a Payroll Package that is being sold to many different
companies. Each company may be running it in a different environment. There is no way the
company developing the product can possibly test it in all the different possible environments so
they may offer it out to existing or prospective users to effectively trial it. In some cases the beta
testing sites are selected because they are known to have a different environment to the other
beta test sites. For example one site may be selected because it is using an Oracle database
instead of Microsoft SQL Server, or using an NT operating system instead of Windows 9x. It is also quite common now for companies to make beta versions of products available for download from the web. As well as testing the beta version in their environment, the testers also get the opportunity to view any new or enhanced features early.
However, what does happen is that the beta testers will only test the features of the system that
are relevant to them, based on their user profile.
Review Questions
2. All of the following are types of Acceptance Testing with the exception of:
Beta Testing
User Testing
Alpha Testing
Security Testing
Answer: Security Testing
Maintenance Testing
Section Introduction
2.10.1
Maintenance testing is concerned with testing changes made to an existing system. As we all
know, change is inevitable. System requirements may change during development but they are
also likely to change once the system has gone live. The longer a system is live, the more likely
change will be needed.
There are several reasons why change may be necessary. Most commonly it is because the
business requirements have changed. It could also be due to changes in technology. Technology
is advancing so quickly that it may be necessary to introduce a new technique that, for example,
helps improve performance. There can also be outside influences that effect change. Examples of
this are changes to government legislation or the Year 2000.
2.10.2
If a system has been recently implemented and is well documented maintenance testing
shouldn't be a problem. However, maintenance testing can sometimes be necessary for very old
systems and this can be very problematic. For example, some systems requiring change for Y2K
were over 25 years old!
Whether the system is old or not, the tester needs to understand what the system is supposed to do. If there is a lack of documentation, or the documentation is out of date, then this will be difficult. It may be that when the system was first implemented there were key people who knew all about it, but as time goes on they may leave or move to another department; without their knowledge of the system, testing will again prove difficult. Both knowledge of a system and the quality of its documentation usually deteriorate over time.
The lack of system knowledge and/or lack of documentation make it harder to know what might
be affected when a change is made. If a change is made to a particular area of functionality it
might have repercussions on other areas of the system and this needs to be understood.
2.10.3
Initially a change to a system should be tested in isolation, perhaps using stubs and/or drivers to
ensure that it functions as expected.
The changed code should then be integrated with the rest of the application to ensure that it still
works as expected.
Finally, regression testing should be performed to ensure that the system as a whole still works as expected and that no unexpected errors are now occurring: what was working is still working.
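The three steps above (testing the change in isolation with a stub, then integrated, then regression testing) can be sketched as follows. All of the functions and figures here are illustrative stand-ins, not from a real system:

```python
# Illustrative sketch of maintenance testing: a changed price calculation
# is tested in isolation using a stub, then with the real lookup, then
# regression-tested. Every name and rate here is hypothetical.

def tax_rate_stub(country):
    """Stub standing in for a tax-rate lookup service not yet available."""
    return 0.20

def changed_price_calc(net, rate_lookup, country="UK"):
    """The changed code under test; the lookup is injected so a stub can be used."""
    return round(net * (1 + rate_lookup(country)), 2)

# 1. Test the change in isolation using the stub.
isolated = changed_price_calc(100.0, tax_rate_stub)

# 2. Integrate with the real (here: trivial) lookup.
def real_tax_rate(country):
    return {"UK": 0.20, "IE": 0.23}.get(country, 0.0)

integrated = changed_price_calc(100.0, real_tax_rate, "IE")

# 3. Regression: previously working behaviour must be unchanged.
regression_suite = [
    (changed_price_calc(100.0, real_tax_rate, "UK"), 120.0),
    (changed_price_calc(50.0, real_tax_rate, "UK"), 60.0),
]
```

Injecting the lookup function is one simple way to let the same changed code be exercised first with a stub and later with the real dependency.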
2.10.4
One of the risks of maintenance testing is that if the system is not fully understood, the changes that are made may be wrong, or may have an effect on another part of the system that the developers and/or testers are unaware of.
To try and alleviate this problem, regression testing is vital in ensuring that supposedly
unaffected parts of the system are indeed unaffected. The problem can sometimes be deciding
just how much regression testing to perform. The less that is known of a system the higher the
risk of failure.
Another risk is that the testing could take longer than originally planned because the full impact
of the change wasn't understood.
Lesson 2 Summary
In this lesson we have covered:
• Models for Testing
o The Waterfall model
o The V model
o The Spiral model
o Verification, Validation and Testing
• Economics of Testing
• Component Testing
o Definition
o BS 7925-2 Software Component Testing Standard
o The Component Test Process
• Integration Testing in the Small
o Definition
o Top-down testing
o Bottom-up testing
o Functional Incremental Testing
o Big Bang Testing
• Functional System Testing
o Definition
o Requirements-based functional testing
o Business process functional testing
• Non-Functional System Testing
o Definition
o Non-functional requirements
o Non-functional test types
• Integration Testing in the Large
o Definition
o Types of integration
o Risks associated with integration
o Incremental approach to testing
o Non-incremental approach to testing
• Acceptance Testing
o Definition
o User Acceptance Testing
o User Acceptance Testing Approach
o Contract Acceptance Testing
o Alpha Testing
o Beta Testing
• Maintenance Testing
o What is maintenance testing?
o Problems of maintenance testing
o How can we test changes?
o Risks of maintenance testing
Lesson 2 Review
Lesson 3 – Static Testing
Lesson Introduction
Section Introduction
• Definition
• Why review?
• What to review
• When to review
• Costs of reviews
• Benefits of reviews
Definition
3.1.1
This is the definition of a review as given in BS 7925-1. There are different levels of review, from very informal through to very formal. They all have their benefits and should be used throughout the SDLC.
Why review
3.1.2
We have already discussed the importance of testing early and that the earlier faults are found
the cheaper they are to fix. Reviews help to detect any faults early in the design and
development process because they are not dependent on the delivered code. This in turn will help
reduce the costs and improve the quality of the system.
For these reasons reviews are an important part of testing and are regarded as one of the best
test techniques.
What to review
3.1.3
All deliverables can be reviewed.
This includes any documents created in the analysis of the system such as User Requirements,
Functional Specifications, Technical Specifications and Component Specifications.
Any other type of design documentation can also be reviewed such as Data Flow Diagrams and
Entity Relationship Diagrams.
But it does not stop there: there is also benefit in reviewing the test plans and the tests themselves, as well as the code itself.
Lastly, the system documentation, such as user manuals and help text, can also benefit from reviews.
When to review
3.1.4
Whenever a deliverable has been produced it can be reviewed, and the earlier this occurs the
better.
Looking at the left hand side of the V Model, all the documents that are included there should be
reviewed. Reviews will be of most benefit when they occur early in the SDLC. There will be little
benefit if the User Requirements are reviewed after the system has been designed and the code
written.
We will look at the different types of review later but the reviews of documents such as User
Requirements, Functional Specifications and Technical Specifications are usually incremental. The
first review will usually raise some questions and issues. These will need to be resolved and the
document re-reviewed. It may also be the case that not all issues can be resolved at once. One issue may need to be resolved before others can be addressed, or resolving one issue may in turn raise further issues.
Reviews are therefore an iterative process.
Costs of reviews
3.1.5
Analysis has been performed on projects using reviews and it is estimated that on-going reviews
cost around 15% of the total development budget (See Note 1).
These costs include all the preparatory work required for the reviews to take place. For example
agreeing who will be involved in the review, agreeing a date and time for the review and sending
the document out prior to the review. Time must also be allowed for each reviewer to read the
document prior to the review meeting.
The costs also include the actual review meeting and any follow-up work required. Follow-up
work can sometimes involve investigation of issues, correcting the document and producing a
report detailing the outcome of the review. The report would detail the actions required, by
whom, and whether another review meeting is necessary. Metrics would be kept of the number of faults found which, when analysed later, could lead to process improvement.
Benefits of reviews
3.1.6
Reviews bring a number of benefits. For example, by reviewing the User Requirements you are making sure that you really understand what the users want, and the users are more likely to get a better quality product as a result.
If a fault is found late in the SDLC, earlier phases of testing may well have to be repeated once it
has been corrected. For example, if a fault is found in system testing it could mean changes need
to be made to the functional specification, which then ripple through technical specification,
program specification, code, unit testing and integration testing before work can be resumed in
system testing. By reviewing early you are reducing the likelihood of this happening. This leads to
less time being spent on rework, people are therefore more productive and the project timescales
are shorter.
An error that is made early in the SDLC can have serious repercussions later if it remains undetected. If the fault is found early then it is one fault found; if it is not found, it could spawn six faults later. By detecting the fault early, the total number of faults is likely to be reduced.
Because of all of the above, the final delivered product will be of a better quality. This will lead to less support being necessary, both to fix failures that occur after implementation and to make changes needed because the system did not achieve what was required. Support of the product can therefore focus on genuine changes to user requirements.
Types of Review
Section Introduction
• Review Types
• Goals
• Review Activities
• Roles and Responsibilities
• Deliverables
• Pitfalls
• Making the review work
Review Types
3.2.1
There are four main types of review, ranging from informal reviews, the least formal type, through to inspections, which are the most formal and detailed. Briefly: in an Informal Review the author requests comment but may ignore it; a Walkthrough is an educational activity; a Formal Technical Review aims to achieve a consensus of opinion; and an Inspection is verification of a work product.
The different review types are not mutually exclusive.
3.2.2
The informal review, as the name suggests, is the least formal of all reviews and is also known as
the 'Buddy' review. This review can occur at any time and this type of activity is largely
unplanned.
For example it can be one person saying to a colleague 'Can you have a look at this for me?' or
'What do you think of this?' It normally occurs between colleagues - on a one to one basis, and
could even be a conversation at a coffee machine. It could also be one person sending a
document via email for a colleague's comments. The comments are sent back via email and the
review item updated accordingly.
These types of review can help to build team spirit, with everyone taking responsibility for the finished article. They are a two-way communication process: interaction between the author and peers.
Although this is a very informal type of review, it is cheap and widely used - you may not even be aware that you are doing it!
3.2.3
A walkthrough is always led by the author of the review item. This may be a document or quite
often can be program code.
3.2.4
Unlike the informal review, the Formal Technical Review (also known as a Peer Review) is
both planned and documented and usually takes place in the form of a meeting. Preparation
needs to be done in advance by sending out the document so that it can be read prior to the
review meeting, organising a date, time and venue for the meeting, and importantly who is going
to attend.
It is very important to get the right people involved in the review - it should include those who
have real input to the content of the document, and subject specialists should be sought where
relevant.
There are two schools of thought as to whether the author should be present or not. The main
reason for including the author is that if there is anything that is not clear or there are any
misunderstandings they can explain immediately. The reason for not including the author is that
peers feel more able to criticise and point out possible issues if the author is not there. As for
management, if they do not have any direct input into the review (i.e. they are neither a peer nor
subject specialist) then they should not be involved in this type of review.
This type of review is more suited to documents such as User Requirements, Functional
Specifications and Technical Specifications rather than the code itself. It can also be useful for
Test Plans.
3.2.5
Inspections are the most formal and most detailed of all reviews, sometimes known as Fagan Inspections, named after Michael Fagan, who devised the 'Software Inspection Process' and produced various papers on inspections in the 1970s and 1980s.
The person leading the inspection is called the moderator and it is essential that they have been
trained and are fully aware of what to do. They can be thought of in the same way as a
Chairman: they are in charge. The moderator is never the author. All other participants have a specific role assigned. For example, one participant may be requested to inspect a document to ensure that all the requirements contained within it are testable. Another participant may be requested to ensure that the document follows company standards in terms of its presentation.
You must have a written document to perform an inspection.
As we have said inspections are the most formal type of review and follow a particular process
based on certain rules. The process makes use of checklists to make sure everything has been
covered and entry and exit criteria are defined.
This type of review is very time consuming and is not necessary for all types of documents and/or
all projects. Inspections may be considered where documents are regarded to be critical or the
project itself is of a safety critical nature.
Goals
3.2.6
The document is being reviewed to validate it against actual requirements and to verify that it
has been created to standard and contains everything it should.
It is vital that the review team work together to achieve a consensus of opinion regarding the
document being reviewed.
Another goal of the review process is that the review team work together to improve the quality
of the item being reviewed - it is a constructive, not destructive process.
Review Activities
3.2.7
The following activities should occur for all reviews except informal reviews which are unplanned
and rarely documented.
Planning - the review itself should be a planned activity. The participants will be told the date and
time of the review, sent the document in advance of the review and given time to prepare.
Overview meeting - this is a short meeting between main review participants to determine what
is to be reviewed, who the participants are and confirm the meeting date, time and duration.
Preparation - all review participants should take time to prepare properly. Participants should
spend as long preparing for the meeting as the meeting itself. Part of the planning exercise must
be to ensure that all participants have sufficient preparation time.
Review meeting - the meeting should take place as planned with the reviewers working together
to improve the quality of the document.
Report on review - the findings of the review must be documented, with the follow-up actions
clearly stated. This is usually in the form of minutes. The minutes of the meeting would also
detail who is responsible for each action and a finish-by date.
Follow-up meetings - a review is of no benefit unless the actions decided are followed up. It may
be necessary for another meeting to be planned to re-review the document once the changes
have been made. Remember reviews are an iterative process.
3.2.8
It is extremely important that time is taken to plan the review activities thoroughly – who is
going to be involved, when the review meeting is going to take place, that everything required to
carry out the review (source documents, reference documents, for example) is available, and that
participants are allocated sufficient time to carry out the activities assigned to them.
An overview meeting is likely to take place to ensure that everybody involved in the review
process understands what their specific roles and responsibilities are. Different roles can be
assigned to participants in order to give the review process more focus – instead of everybody
checking for everything, they look at certain aspects of the review item, for example, ensuring
that everything contained in the review item correctly reflects what is in the source document.
Once roles and responsibilities have been assigned, individual preparation can then commence.
3.2.9
Roles can be allocated to individuals where they check the review item against company
standards, or check that the document can be used as a source for the next activities – this could
be preparing an Acceptance Test Plan from the Customer Requirements document, or creating
Test Cases from the same document.
3.2.10
The review meeting needs to be managed, time-boxed and organised. The moderator/chairman
must make sure that there is not too much time spent on one topic to the detriment of others
and that the meeting is not dominated by a few people – everyone must be given an opportunity
to speak.
The moderator/chairman must also ensure that the meeting does not become unruly (lots of
people talking at the same time about different things) and that the focus is kept on the review
item.
The scribe is responsible for taking the minutes of the meeting and must document any remedial
actions required and any follow-up activities (further meetings, formal sign-off).
3.2.11
It is the author’s responsibility to carry out any changes required of the review item.
It is all participants’ responsibility to suggest any process improvements – to the review activity
itself and the structure of the review item (could it be made better in any way?)
Back to top
3.2.12
The reviewer (also known as Inspector) will prepare for the review by thoroughly reading the
document in question and identify and record any issues they may have. They should also read
any background material that will give them a fuller understanding of the document under
review. It is good if each reviewer is given a specific review role, so that all parts of the
document are reviewed. Roles might include:
• checking the review item has been built to the required standard
• making sure that the review item conforms to the requirements as specified by the
output of the preceding project stage, for example that all the requirements have been
included and nothing extra has been added
• making sure the review item can be used, for example, can tests be drawn up from it,
will it be possible to write the next level document
• checking the management summary does contain all the key points from the document
body.
The author’s responsibility is to prepare the document for review. They may be required to
attend the review, but this may not always be the case. The author must make the agreed
changes to the document and must issue it for re-review when necessary.
The moderator manages inspections. The moderator is fully trained in the inspection process
and ensures this process is followed. The moderator may be required to classify problems and
give guidance as to compliance with standards.
The manager has the responsibility to ensure that time is allocated in the overall project plan
for the review exercises. They will only attend the review meeting if they have a role to play. For
example they may attend as a reviewer if they have the appropriate input to make or as a
moderator for an inspection but they will never attend as a manager!
Back to top
Deliverables
3.2.13
The review may result in one or more actions to improve the review item. Once these actions
have been carried out, the item under review will have changed and improved as a result. This is
true whatever the review item may be.
Reviews may highlight weaknesses in the SDLC process in use - if these weaknesses are known
about, then something can be done about eradicating them.
Back to top
Pitfalls
3.2.14
In order to get the best from a review it is important that the review team are aware of their
responsibilities and know what is expected of them.
One of the objectives of the review is to ensure that the document has been written to company
standards - these standards need to be readily available for this to happen.
It is essential that the appropriate people are included in the review with enough knowledge to do
the review justice. Different people will be appropriate depending on the type of document being
reviewed. For example, if there is a particularly complicated technical area included in the review
item then someone with the required technical expertise must attend. If you are reviewing help
text then it would be advantageous for someone from the support area to attend. The manager
should not be there just because they are the manager.
The review process has to be given management support; it is essential that enough time be
allocated to all the review activities.
One of the biggest factors in the success or failure of reviews rests with the people that take part.
It must be remembered that it is the review item being reviewed, not the author. The reviewers
need to be diplomatic in their comments. Criticism is never received well and can sometimes be
very hard to take.
Back to top
3.2.15
There are various things that will help make the review work - the first and foremost thing to
remember is that the document is being reviewed, not the author.
It is important to ask information seeking questions such as 'Who requested this and why do they
want it?' 'What is the frequency of this report and when do they need it by?'
All issues must be graded for severity so that it can be decided which ones need immediate
action.
3.2.16
It must be remembered that the purpose of the review is to improve the quality of the document
being reviewed.
The review must be seen as a team activity with everyone working together for the good of the
project.
Most importantly, if issues are found then a workaround or solution must also be sought or
suggested. If reviews are seen as just finding more problems with little being resolved then there
will be a tendency to stop them.
Back to top
Types of Review
Review Questions
(ii), (iii) and (iv) are true, (i) and (v) are false
(ii), (iii) and (v) are true, (i) and (iv) are false
They are primarily seen as an educational exercise
Mark Paper
Answers: d, a and d
Static Analysis
Section Introduction
• Definition
• Simple Static Analysis
• Compiler-generated information
• Data-flow Analysis
• Control-flow Graphing and Complexity Analysis
Definition
3.3.1
Static Analysis is performed on program code. It involves analysing the code to find errors before
the code is executed.
When high-level languages (such as COBOL, VB or C++) are used to develop programs, the code
that the developers write is known as source code. The source code is then ‘compiled’ by a compiler.
There are different compilers for the different languages. The compilers can highlight errors that
need correcting. When the source code compiles successfully the compiler creates object code. It
is the object code generated from the compiler that is executed.
In this section we will be looking at what sort of problems we can find by reviewing the source
code and the information provided by compilers.
Back to top
3.3.2
As we have said, Static Analysis does not involve any execution of the program; it is carried out
prior to execution.
The review of the program code may uncover some or all of the following errors:
Unreachable code - this is code that has been written, but is never executed. As you follow the
logic of the program you will see that there is no way you can get to this piece of code.
Undeclared variables - a variable is a name attached to a piece of memory in which the value of
an item of data is stored. All variables in a program need to be defined, this determines their
characteristics such as size and format. If a variable has been used in a program but has not
been previously defined then it is known as an undeclared variable.
3.3.3
Parameter type mismatches - programs pass parameters between them. These have to be
defined in both the sending program and the receiving program. The mismatches occur where
they are defined differently. For example, the parameters are of a different size or are passed in
a different order to what was expected.
Uncalled functions and procedures - these are functions and procedures that have been defined
but are never called.
Syntax errors - these are mistakes in the construction of the program code, for example spelling
mistakes or incorrectly formatted program statements.
Array bound violations - if a multi-occurrence table is set up to hold data within a program and a
pointer is used to reference the data, an array bound violation occurs when the pointer is set to a
value higher than the maximum permissible. Data outside of the table is then referenced or
overwritten.
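The fault types above can be illustrated with a short Python sketch (our own example, not part of the course material); a static analyser or linter would flag all three problems without ever running the code:

```python
def classify(n):
    if n > 10:
        return "big"
    return "small"
    print("never reached")   # unreachable code: follows an unconditional return

def uncalled_helper():       # uncalled function: defined but never invoked
    return 42

def bad_sum(values):
    # 'total' is used before it is assigned -- the undeclared-variable fault;
    # Python raises UnboundLocalError at run time, whereas a static analyser
    # would flag it without executing the code
    for v in values:
        total = total + v
    return total
```

Running `bad_sum([1, 2])` raises `UnboundLocalError`, while a linter would also report the unreachable `print` and the uncalled helper.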
Back to top
Compiler-generated information
3.3.4
Remember from a previous slide that compilers translate source code into object code (machine
code). Any faults found by compilers are found through static analysis. The compiler is not
testing the program, but ensuring that it can actually be executed.
Compilers will highlight errors such as syntax errors and give you a line number where the error
has occurred.
Compilers can also assist in finding other errors as well - for example, uncalled functions,
whereby the developer merely has to interrogate the information provided by the compiler to
determine what functions or procedures have not been called.
Compiler generated information is also useful when the program needs amending. For example if
input values have changed for a field, the developer can easily locate all the occurrences of that
field in the program, check the operations being performed on it and amend accordingly.
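As an illustration (our own, using Python rather than a compiled language), Python's built-in `compile()` performs a comparable static check: it parses source code without executing it and reports the line number of any syntax error:

```python
# A hypothetical fragment with a missing colon after the condition.
source = "if x > 10\n    print('big')"

try:
    compile(source, "<demo>", "exec")   # parses only; nothing is executed
    message = "compiled cleanly"
except SyntaxError as error:
    # The compiler-style diagnostic includes the offending line number.
    message = "syntax error at line %d" % error.lineno
print(message)
```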
Back to top
Data-flow analysis
3.3.5
Data flow analysis identifies all the variables within a program, or piece of code and examines
their behaviour as the code is manually 'executed'.
The information obtained when performing the analysis can assist testers when constructing test
cases. They can see what data values need to be set in order to exercise all possible paths within
the code in question.
3.3.6
3.3.7
In this example, 'X' is the variable and it is being assigned a value of 10.
3.3.8
3.3.9
In this instance, the value of 'X' will determine the flow of program execution.
Let's look at some lines of code and work through identifying the variables and their D-use, C-use
and P-use.
3.3.10
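The slide containing the code for this example has not been reproduced in this text. The following pseudocode is our own reconstruction: it is consistent with the variables and the D-use, C-use and P-use positions described in the paragraphs below, but the exact operations are assumed.

1  READ B
2  A = B * 2
3  D = B + 1
4  C = B - 1
5  DO WHILE C > X
6     PRINT A
7     A = A + 1
8     X = A + C
   ENDDO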
The first stage in looking at this example is to determine the variables. There are five variables
used - A, B, C, D and X.
3.3.11
In taking this further you would identify where each of the variables are used and the type of
use.
This shows that A has a D-Use at statements 2 and 7 and a C-Use at statements 6, 7 and 8.
It shows that C has a D-Use at statement 4, a C-Use at statement 8 and a P-Use at statement 5.
It shows that X has a D-Use at statement 8 and a P-Use at statement 5.
3.3.12
Data flow analysis helps you identify the problems - in this small piece of code we can see that
there is a variable D that is defined but it is not used again, either for computational or predicate
use. In this instance we must ask the question why is the field there at all?
We can also see that there is a variable called X, which is defined at statement 8 but is used as a
predicate variable at statement 5. Because the variable is used before it is defined, we will not
know how many times the particular loop will be executed.
In some programming languages, variables can also be 'killed', which means they are not
available for use. Ideally, you would want to see a variable that is defined, used in some way and
then killed.
Data flow analysis can highlight invalid combinations of variable use, for example D-D-K, where a
variable is defined once, defined again, and then killed before being used. Another example is
U-D-K, where a variable is used before it is defined. These invalid combinations need to be
investigated and corrected prior to the module being executed.
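A simplified checker for such anomalies can be sketched in Python (our own illustration; real data-flow analysers work on the code itself, not on pre-extracted use sequences):

```python
def anomalies(uses):
    """Flag data-flow anomalies in a variable's use sequence.
    'D' = defined, 'U' = used, 'K' = killed (a simplified sketch)."""
    found = []
    # U-D-K style fault: the variable is used before any definition
    if uses and uses[0] == "U":
        found.append("used before defined")
    for a, b in zip(uses, uses[1:]):
        if a == "D" and b == "D":
            found.append("defined twice without use (D-D)")
        if a == "D" and b == "K":
            found.append("defined then killed without use (D-K)")
    return found

print(anomalies(["D", "D", "K"]))  # D-D-K: two anomalies flagged
print(anomalies(["U", "D", "K"]))  # U-D-K: used before defined, then D-K
print(anomalies(["D", "U", "K"]))  # ideal define-use-kill: no anomalies
```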
3.3.13
Data flow analysis can also be used to determine what test cases need to be created to exercise
the decisions in the code - these will be created around the P-use variables - in this example,
variables C and X.
Back to top
3.3.14
Control flow graphs describe the logic structure of software programs. Control flow graphing is a
method by which flows through the program logic are charted, using the code itself rather than
the program specification.
3.3.15
This is a very low level form of testing and is mostly performed by developers or programmers at
the component testing level.
This type of testing measures the amount of decision logic in a single program. It is used for two
related purposes. Firstly, it can be used to assess the number of test cases required to exercise
the program logic. Secondly, it will assist in making the software reliable, testable, and
manageable as the intricate detail of the program code is under scrutiny.
It should be used in addition to the testing derived from the requirements, to ensure that the
delivered code achieves the functionality as specified in the Module Specification.
3.3.16
Each flow graph consists of nodes and edges. The nodes represent computational statements or
expressions, and the edges represent transfer of control between the nodes. Together the nodes
and edges encompass an area known as a region.
In the diagram, the structure represents an ‘If Then Else Endif’ construct. Nodes are shown for
the ‘If’ and the ‘Endif’. Edges are shown for the ‘Then’ (the true path) and the ‘Else’ (the false
path). The Region is the area enclosed by the nodes and the edges.
3.3.17
There are four basic structures that are used within control-flow graphing.
The 'Do While' structure will execute a section of code whilst a field or indicator is set to a certain
value. For example,
DO WHILE X < 10
ADD A TO C
ENDDO
The 'If then Else' structure will execute one section of code if a field or indicator is set to a certain
value, or execute another section of code if it does not. For example,
IF A > 10
PRINT 'A is greater than 10'
ELSE
PRINT 'A is not greater than 10'
ENDIF
The 'Do Until' structure will execute a section of code until a field or indicator is set to a certain
value. For example,
DO
ADD A TO C
UNTIL X=10
The 'Go To' structure will divert the program execution to the program section in question.
3.3.18
The Cyclomatic Complexity of a module is calculated from the control-flow graph of that module.
Actually counting the graph's elements (its nodes and edges) requires establishing a counting
convention (tools that count Cyclomatic Complexity contain these conventions). The complexity
number is generally considered to provide a stronger measure of a program's structural
complexity than is provided by counting the lines of code.
Thomas McCabe introduced Cyclomatic Complexity in 1976 and it may be considered as a broad
measure of soundness and confidence for a program. This measure provides a single number that
can be compared to the complexity of other programs.
3.3.19
In a study conducted, a large number of programs were measured, and ranges of complexity
established that could help the developer determine a program's inherent risk and stability. The
resulting calibrated measure can be used in development, maintenance and re-engineering
situations to develop estimates of risk, cost, or program stability. Studies show a correlation
between a program's Cyclomatic Complexity and its error frequency. A high Cyclomatic
Complexity indicates that there is a higher risk of failure when, for example, the program is
amended.
Back to top
Static Analysis
Review Questions
IF RAINING
TAKE UMBRELLA
ELSE
IF COLD
TAKE COAT
ENDIF
ENDIF
2
IF HUNGRY
BUY SANDWICH
BUY CRISPS
ELSE
BUY SWEETS
ENDIF
IF THIRSTY
BUY 2 DRINKS
ELSE
BUY 1 DRINK
ENDIF
Mark Paper
Answers: c, a, d, c, and c
Take a look at the following examples of code and the control flow graphs that represent them.
You will be able to see how the cyclomatic complexity is calculated.
IF A > 10?
PRINT 'A is greater than 10'
ELSE
PRINT 'A is not greater than 10'
ENDIF
3.4.1
No. of regions + 1 or
No. of Edges - No. of Nodes + 2
IF P = T?
CARRY OUT THIS
IF X = 10?
CARRY OUT THE OTHER
ELSE
CARRY OUT THAT
ENDIF
ENDIF
3.4.2
No. of Edges - No. of Nodes + 2 = (5 - 4) + 2 = 3
Strictly speaking, control flow graphs should not show any statements, as they are only
concerned with depicting the logic structures within a piece of code. However, to ease
understanding, statements can be shown, as detailed below.
3.4.3
In this example, there are still 2 regions, but there are now 7 nodes and 8 edges. The cyclomatic
complexity can be calculated as follows: 8 - 7 + 2 = 3.
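These calculations can be captured in a small Python helper (our own sketch; the graph counts are the ones given in the text):

```python
def cyclomatic_complexity(edges=None, nodes=None, regions=None):
    # McCabe's measure, using the two equivalent formulas from the course:
    # V(G) = No. of regions + 1, or V(G) = No. of edges - No. of nodes + 2
    if regions is not None:
        return regions + 1
    return edges - nodes + 2

print(cyclomatic_complexity(regions=1))         # simple If-Then-Else graph: 2
print(cyclomatic_complexity(edges=5, nodes=4))  # nested-If graph: 3
print(cyclomatic_complexity(edges=8, nodes=7))  # same graph with statements shown: 3
```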
Lesson 3 Summary
In this lesson we have covered:
o Definition
o Why review?
o What to review
o When to review
o Costs of reviews
o Benefits of reviews
• Types of Review
o Review Types
o Goals
o Review Activities
o Roles and Responsibilities
o Deliverables
o Pitfalls
o Making the review work
• Static Analysis
o Definition
o Simple Static Analysis
o Compiler-generated information
o Data-flow Analysis
o Control-flow Graphing and Complexity Analysis
Lesson 3 Review
1. Static analysis is
The analysis of system documentation with the intent of creating test cases
3. Consider the following statements about Walkthroughs:
i), ii) and v) are true, iii) and iv) are false
iii) and iv) are true, i), ii) and v) are false
If A = 12 then….
A=C*D
For A = 1 to C do…..
C = 15
Syntax errors
Undeclared variables
Test coverage
It is planned
It is documented
It is led by a moderator
Control-flow graphing
Data-flow analysis
Program execution
Complexity analysis
Mark Paper
Answers: b, c, b, b, d, c and c
Lesson Introduction
Section Introduction
4.1.1
Back to top
4.1.2
The term 'black box' is used to denote that we are not aware of the internal construction of the
test item - it could just be a black box. What is important is that the tests are based on what the
system should do. The testing is derived from documentation such as User Requirements and
Functional Specifications.
This type of testing is used for testing the functionality of a system, which is why it is also known
as functional testing.
4.1.3
If we take a calculator as an example, we know that if you enter 3 + 4 the answer should be 7.
We know what we are entering and we know what the result should be but we are not concerned
with how the calculator does it.
There are various techniques that can be used to assist in black box testing and we will cover
these later in this section.
Back to top
4.1.4
Back to top
What is white box testing?
4.1.5
White box testing is analysing how the code works. Whilst also concerned with what the program
does, the main focus is on testing the precise construction details of the program design.
White box testing is primarily concerned with the way in which a program exercises its logic, i.e.
the different routes a program can take depending upon the input data. By performing white box
testing we can judge if all of the program code is being exercised and if this execution is being
performed in the most efficient manner.
White box testing is also known as structural or glass box testing because you are looking
internally at the structure of the program.
4.1.6
Using the same example of a calculator you would now be concerned with exactly how the results
were achieved.
Programs are constructed with various decision points (for example, if 'this' is true then do
something otherwise do something else). White box testing is used to test paths through the
program code and to ensure that there are no sections in the program that can't be reached.
Back to top
4.1.7
White box testing techniques are particularly useful in ensuring the quality of code and code
coverage at a low level. A developer, programmer or someone with a technical background
usually carries them out.
Black box testing techniques are useful in deriving test cases from documentation such as User
Requirements and Functional Specifications and are usually carried out by system and acceptance
testers. Programmers and developers can also use them against Program Specifications.
Black box testing is therefore relevant throughout the SDLC whereas, in general, white box
testing in conjunction with black box testing is more appropriate for unit testing and integration
testing, and becomes progressively less useful towards system and acceptance testing. System
and acceptance testers will tend to focus more on specifications and requirements than on the
code.
4.1.8
Therefore, in general, black box testing techniques are most suited to system testing, integration
testing in the large and user acceptance testing.
They allow a tester to create tests without prior knowledge of how the system is designed or
built. If carried out early, the process of creating the tests and discussing the expected results
with the designer may well influence the design and construction of the system. They assist in
the business validation of the application.
4.1.9
As stated earlier, in general, white box testing techniques are most suited to component testing
and integration testing in the small.
They assist in the technical validation of the code and will never do any more than demonstrate
that the program works ‘as coded’.
Back to top
4.1.10
We have previously discussed the fact that we cannot test everything and that exhaustive testing
is impossible. Various techniques have been developed to enable a systematic approach to both
black and white box testing. These techniques help to ensure that tests are repeatable and
provide a level of confidence as to the amount of test coverage.
Tools can assist in this process. If you are trying to ensure that every path has been covered in a
large program then it can be very time consuming and error prone if done manually. Tools are
particularly useful in white box testing to increase productivity and improve the quality of the
work.
Back to top
It is concerned with how a system or component is coded
Mark Paper
Answers: a and b
Section Introduction
• Equivalence partitioning
• Boundary value analysis
• Cause-effect graphing
• Other black box test techniques
Equivalence partitioning
4.2.1
We have already illustrated the fact that exhaustive testing is impractical to carry out due to time
and resource constraints. Software generalises in the way that it deals with subsets of data (for
example alpha input, numeric input). Test cases can be created for each subset identified, and
representative values for each subset chosen. Each subset can be thought of as a partition. This
type of testing is probably already performed informally by many testers; equivalence
partitioning provides a formal definition of the technique, and BS 7925-2 provides a standard
approach for the analysis and definition of test cases.
4.2.2
If we consider the requirement as detailed above, we can see that valid input values are between
17 and 75. Therefore our valid subset or partition value (the number we choose to ensure that a
valid value will be accepted into the input field) would need to be in the range 17 to 75 - normally
the value chosen would be somewhere in the middle of the range (46, for example).
4.2.3
We would also have to consider the subsets, or partitions that would result in data being
rejected. We have shown two invalid partitions above - one where the value entered needs to be
below 17 and one where the value needs to be above 75. We may also consider identifying
further invalid partitions, such as alpha values - upper and lower, and special characters. There is
no hard and fast rule regarding the number of invalid partitions identified - it is down to the
discretion of the tester to determine, but if it is felt that elements in an equivalence class are not
treated in the same way, smaller equivalence classes need to be created.
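A minimal Python sketch of this example (our own illustration; the representative values are our own choices):

```python
VALID_MIN, VALID_MAX = 17, 75   # the valid partition from the course example

def is_valid_age(age):
    return VALID_MIN <= age <= VALID_MAX

# One representative value per partition is enough for equivalence testing.
print(is_valid_age(46))   # valid partition: mid-range value, as the text suggests
print(is_valid_age(10))   # invalid partition: below 17
print(is_valid_age(80))   # invalid partition: above 75
```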
Back to top
4.2.4
Errors are often made concerning the limits of a field - in our example we have a range of 17 to
75, but is that range inclusive of those values, or exclusive? Unless the range is explicitly defined,
wrong assumptions can be made, and even if it is explicitly defined, errors can still be made! Test
cases that explore the boundaries of a field are more likely to uncover faults than those that do
not. Boundary value analysis is concerned with defining test cases on the boundary limit, and
either side of the boundary limit.
4.2.5
In our example, the boundaries are 17 and 75. The test cases created would need to test on each
boundary (17 and 75) and the least significant digit either side of each boundary (16, 18, 74 and
76).
There can also be the situation where outputs are generated, depending upon the input. In our
example, an acceptance letter needs to be generated for applicants in the age range 17 to 75. A
rejection letter needs to be generated for applicants below 17 and over 75. Test cases need to
include expected results incorporating the generation of the correct letter, depending upon the
values entered.
4.2.6
Boundary value analysis will always result in three test cases per boundary - one with a value
below the boundary, one with a value on the boundary and one with a value above the boundary.
The values above and below the boundary limit should be the least significant change possible for
the field - if it is expressed in whole numbers, the values above and below the limit would be
boundary limit +1 and boundary limit -1. If it is expressed to 2 decimal places, the values above
and below the limit would be boundary limit +0.01 and boundary limit -0.01.
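This rule can be sketched as a small Python helper (our own illustration, assuming a whole-number field with a step of 1 by default):

```python
def boundary_tests(boundary, step=1):
    # Three test values per boundary: one below, one on and one above it,
    # using the least significant change for the field (step)
    return [boundary - step, boundary, boundary + step]

print(boundary_tests(17))  # [16, 17, 18]
print(boundary_tests(75))  # [74, 75, 76]
```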
4.2.7
NOTE
There is an exercise on Equivalence Partitioning and Boundary Value Analysis after this section.
Back to top
Cause-effect graphing
4.2.8
This is a very rigorous black box test technique as test cases are created for every combination of
causes (each entry on the decision table), ensuring thorough testing.
It assists in translating the requirements and specifications from narrative into a decision table
and is also effective in ensuring that the logic defined within the requirements or specification is
correct.
4.2.9
We are going to use the operation of traffic lights to illustrate this technique.
Traffic lights have three separate lights of different colour – red, amber and green - and the
combination of these lights being on or off results in different actions being taken by road users.
The first step in cause-effect graphing is to identify the causes (or conditions). The three lights
give us three causes – as a default we are saying that their status will be ‘on’.
4.2.10
The next step when using this technique is to identify the actions (or effects) that the
combinations of lights being on and off can have. We have four effects in this scenario –
identified above. For clarity, the light combinations have been added.
4.2.11
Now we need to draw a cause-effect graph to show the relationship between the combinations of
conditions and actions.
There are four different graphing symbols that are used to map the causes to the effects.
4.2.12
The graph is built up by identifying a combination of causes and the resultant effect. The slide
above shows the combination of the red light being on AND the amber light being off AND the
green light being off should result in road users keeping stationary at the lights in question.
4.2.13
The next cause and effect can then be added. This shows that when the red light is on AND the
amber light is on AND the green light is off then the effect is to get ready to go.
4.2.14
The third cause and effect shows that when the red light is off AND the amber light is on AND the
green light is off then the effect is to get ready to stop.
4.2.15
Lastly, when the red light is off AND the amber light is off AND the green light is on, the effect is
to go.
4.2.16
This is quite a simple example and yet the results of the graphs can be quite confusing! The
intention is to ensure that a test is created for each cause and effect. This is sometimes easier to
see in a decision table.
4.2.17
This shows the resulting decision table from our example. Decision tables are very useful and
easily demonstrate the four test scenarios that need to be created to ensure that the correct
causes induce the correct effect. But this is not the complete picture, as we have to consider the
incorrect scenarios as well – for instance, what would happen if all three lights were on at the
same time? We would have to ensure that an error was recognised and appropriate actions
taken.
There is a formula that is used in conjunction with Cause Effect Graphing to determine the
number of test cases required to test all possible combinations of the identified conditions - this
is 2^n, where n is equal to the number of conditions identified. In this example it would be 2^3,
which is 8 (2 x 2 x 2).
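The decision table and the 2^n formula can be sketched in Python (our own illustration; the action names are paraphrased from the text, and any combination outside the table is treated as an error):

```python
from itertools import product

def road_user_action(red, amber, green):
    # Decision table from the traffic-light example (True = light on);
    # combinations not in the table are the incorrect scenarios
    table = {
        (True,  False, False): "stop",
        (True,  True,  False): "get ready to go",
        (False, True,  False): "get ready to stop",
        (False, False, True):  "go",
    }
    return table.get((red, amber, green), "error")

# 2^n combinations for n = 3 conditions, as the formula gives
all_combinations = list(product([True, False], repeat=3))
print(len(all_combinations))                 # 8
print(road_user_action(True, True, True))    # error (all three lights on)
```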
4.2.18
Back to top
Other black box test techniques
4.2.19
State transition testing looks at the different states a component may occupy, the transition from
one state to the next, the event that caused the transition and the resulting action. It is often
represented in a state transition diagram. It is most useful in process rich applications where
there are many changes of state.
Syntax testing is where the rules have been defined for the format of the data and test cases are
generated to ensure that valid syntaxes are accepted, and invalid ones are rejected. An example
of this would be validating dates for the format DD/MM/YYYY. Other examples of this are
validating the format of National Insurance Numbers and various types of account number.
NOTE
Do not confuse this black box technique with static analysis that we covered in Lesson 3 where
the source code is checked for syntax.
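A minimal Python sketch of syntax testing for the DD/MM/YYYY example (our own illustration; it checks the format only, not calendar validity):

```python
import re

# The format rule DD/MM/YYYY is the model: valid syntax must be accepted
# and invalid syntax rejected.
DATE_PATTERN = re.compile(r"^\d{2}/\d{2}/\d{4}$")

def has_valid_syntax(date_string):
    return bool(DATE_PATTERN.match(date_string))

print(has_valid_syntax("25/12/2023"))  # True
print(has_valid_syntax("25-12-2023"))  # False (wrong separator)
print(has_valid_syntax("5/12/2023"))   # False (DD must be two digits)
```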
Random Testing uses a model of the input domain of the component that defines the set of all
possible input values. The input distribution (normal, uniform, etc.) to be used in the generation
of random input values shall be based on the expected operational distribution of inputs. Where
no knowledge of this operational distribution is available then a uniform input distribution shall be
used.
Other techniques are defined as a 'catch all' in the standard where new techniques have been
created after the standard was published (1998). A technique can qualify under this heading as
long as it is freely available in the public domain, a source reference is provided and it is
documented in the same way as other techniques in the standard.
In BS 7925-2 test measurement techniques are defined for most black box test techniques. The
test measurement techniques are defined so that coverage of the item being tested can be
calculated. There are two techniques that do not have measurement techniques - Random and
Syntax. See BS 7925-2 for more information.
Black Box Test Techniques
Review Questions
State Transition
Syntax
Usability
Random
Recovery
3. A tax system is in place where any amount earned up to £1,500 is tax free, the next
£4,600 is taxed at 10%, the next £30,000 is taxed at 21% and anything above this is
taxed at 40%. Which of the following tests might be the result of designing tests for
only invalid equivalence classes?
£4,000
£30,000
£38,000
£3a,000
4. A tax system is in place where any amount earned up to £1,500 is tax free, the next
£4,600 is taxed at 10%, the next £30,000 is taxed at 21% and anything above this is
taxed at 40%. Which of the following is a valid boundary test?
£33,100
£1,601
£4,601
£36,101
25, 32, 38
26, 31, 33
24, 28, 39
6. Registration numbers for a health club are in the range 500-1500 inclusive for
seniors, 2000-4000 inclusive for adults and 5000-6000 inclusive for juniors. Which of
the following set of inputs would result in tests for boundary value analysis for adults?
1000, 2000, 3000 and 3000, 4000, 5000
7. Registration numbers for a health club are in the range 500-1500 inclusive for
seniors, 2000-4000 inclusive for adults and 5000-6000 inclusive for juniors. Which of
the following inputs would not be valid when designing tests for boundary value analysis
for all members?
501
3000
4001
6001
8. Registration numbers for a health club are in the range 500-1500 inclusive for
seniors, 2000-4000 inclusive for adults and 5000-6000 inclusive for juniors. Which of
the following sets of inputs are the result of designing tests for only valid equivalence
classes and valid boundaries for seniors?
499, 500, 501
9. All the following black box techniques have an associated measurement technique
except
Equivalence Partitioning
Syntax
State Transition
Mark Paper
Answers: b, c, d, d, c, c, b, c, and b
Section Introduction
In this section we will cover:
• Statement testing
• Branch/Decision testing
• Other white box test techniques
Statement testing
4.3.1
The objective of statement testing is to show that all executable statements within a program
have been executed at least once. An executable statement can be described as a line of source
code that will carry out some type of action in the application. For example:
'Add A to B'
'Display error message on screen'
'Read Customer file'
If all statements have been executed by a set of tests then you have achieved 100% statement
coverage, however if only half of the statements have been executed with a set of tests then you
have only achieved 50% statement coverage.
The aim is to achieve the maximum amount of statement coverage with the minimum number of
test cases.
4.3.2
In this example there are two variables - A and B. We are trying to determine what values we
need to set them to, to ensure every statement is exercised. In the diagram above, the
statements we want to exercise are contained within the rectangles. The diamonds contain
decisions (or branches) that control the flow of the processing. It is not necessary for statement
coverage to consider both outcomes of a decision statement (i.e. the 'true' and the 'false' route)
if there is no further statement to execute.
Whilst we have shown a flow diagram, the same logic can also be shown as a control flow
diagram as detailed below.
4.3.3
4.3.4
If we set A=25 and B=30 then we will exercise every statement and achieve 100% statement
coverage.
4.3.5
However, if we set A=25 and B=10 then we will not exercise every statement and not achieve
100% statement coverage.
In this example you can achieve 100% statement coverage with just one test, provided A and B have been set to the appropriate values.
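The flow diagram is not reproduced here, but logic consistent with the values quoted can be sketched in code. The thresholds of 20 and 15 are assumptions, chosen so that A=25 and B=30 exercise every statement while A=25 and B=10 do not.

```python
executed = set()  # statements exercised by the tests run so far

def component(a, b):
    # Thresholds 20 and 15 are assumed; they are consistent with the
    # values quoted in the text but the original diagram is not shown.
    if a > 20:
        executed.add("statement 1")      # e.g. 'Add A to B'
        if b > 15:
            executed.add("statement 2")  # e.g. 'Display message'

# One test, A=25 and B=30, executes both statements: 100% statement coverage.
component(25, 30)
assert executed == {"statement 1", "statement 2"}

# A=25 and B=10 reaches only the first statement: 50% statement coverage.
executed.clear()
component(25, 10)
assert executed == {"statement 1"}
```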
4.3.6
To summarise, statement testing is the simplest form of white box testing. Its aim is to ensure
that every executable statement within a program has been executed with the minimum number
of tests.
It is not concerned with testing every possible route through a program. If there is no further
statement to execute following a branch/decision then it is not necessary to create a test.
In practice, it is very difficult to achieve 100% statement coverage. There may be error routines
or exception handlers that are only executed under exceptional circumstances. Statement
coverage may also identify areas of unreachable code that can then be removed to improve
efficiency.
Branch/decision testing
4.3.7
The objective of branch/decision testing is to show that all the branches (or decisions) within a
program have been executed at least once.
If all branches/decisions within a program have been exercised by a given set of tests then 100%
branch/decision coverage has been achieved. However if only half of the branches/decisions have
been taken with a given set of tests then you have only achieved 50% branch/decision coverage.
Again, as with statement testing, the aim is to achieve the maximum amount of coverage with
the minimum number of tests.
NOTE
Branch Testing and Decision Testing are exactly the same for the Foundation Course.
4.3.8
If we take the same example as before, we have the same two variables - A & B. In this instance
we are trying to determine what values we need to set them to, to ensure that every
branch/decision is exercised. Again, this can also be represented by control flow diagrams with
the different paths through the program logic shown by ...
4.3.9
4.3.10
By setting A=15 you will ensure the 'No' of the first branch/decision is taken.
4.3.11
By setting A=25 and B=10 you will ensure the 'Yes' of the first branch/decision is taken and the
'No' of the second branch/decision is taken.
4.3.12
By setting A=25 and B=20 you will ensure the 'Yes' of the first branch/decision is taken and the
'Yes' of the second branch/decision is taken.
Therefore in this example you would need three tests to achieve 100% branch/decision coverage.
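The three branch/decision tests above can be sketched against the same assumed logic; the thresholds of 20 and 15 are again assumptions, chosen to be consistent with the values quoted.

```python
outcomes = set()  # branch outcomes taken by the tests run so far

def component(a, b):
    # Thresholds assumed; consistent with the values quoted in the text.
    if a > 20:
        outcomes.add("decision 1: Yes")
        if b > 15:
            outcomes.add("decision 2: Yes")
        else:
            outcomes.add("decision 2: No")
    else:
        outcomes.add("decision 1: No")

# The three tests from the text together take all four outcomes.
component(15, 0)   # decision 1: No
component(25, 10)  # decision 1: Yes, decision 2: No
component(25, 20)  # decision 1: Yes, decision 2: Yes
assert outcomes == {"decision 1: Yes", "decision 1: No",
                    "decision 2: Yes", "decision 2: No"}
```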
4.3.13
Branch/decision testing can be considered as the next logical progression from statement testing
in that now we are not only concerned with testing every statement but also both the true and
false outcomes from every branch/decision.
Compound conditions such as 'If X<Y or (X>Z and Y+Z = a) then….' can prove difficult when
determining branch/decision coverage - this is normally measured using a software tool, such as
Testbed by LDRA (see Lesson 6 for more details).
NOTE
There is an exercise on Statement and Branch/Decision Testing after this section.
4.3.14
NOTE
You need to be aware of all other White Box Test Techniques although you will not be asked
detailed questions on these in the exam.
4.3.15
If X is greater than 1 and Z = 0, then the program will print Z, otherwise no action is taken.
4.3.16
Branch Condition testing requires that each Boolean operand in a statement is evaluated for
being true or false. In this scenario, 100% branch condition coverage can be achieved with the 2
test cases illustrated, although there could be other combinations that could achieve the same,
albeit not so efficiently.
4.3.17
Branch condition combination testing requires every combination of Boolean operands in a statement to be evaluated. There is a mathematical formula that can be used to determine the number of test cases required for this test technique - this being 2^n, where n is the number of operands in the statement - in this example there would be 2^2, or 2 x 2 (4) test cases.
With four operands, 2^4 (or 16) test cases would be required. With this technique, although it is very thorough, a vast number of test cases can be created, depending upon the complexity of the code. An analysis of the risk of the component can determine if this technique should be used.
4.3.18
Modified Condition Decision testing is a pragmatic compromise and requires fewer test cases than
branch condition combination testing. This technique requires that test cases are created for each
Boolean operand that can independently affect the outcome of the decision.
In the example above, because the 'No' route will be taken if either X is not greater than 1 or Z is not equal to 0, the 'False/False' combination can be discarded, because test cases 2 and 3 cover this situation.
4.3.19
If our example is changed, and instead of the 'and' operator we have an 'or', then we can discard
the True/True combination, because only one of the values has to be set in order to go through
the 'Yes' route. We need the False/False combination in order to ensure that the 'No' route is
taken.
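A minimal sketch of the modified condition decision test set for the 'and' form of the condition from earlier in this section (X greater than 1 and Z equal to 0):

```python
def decision(x, z):
    """The compound condition from the text: 'Yes' when X > 1 and Z == 0."""
    return x > 1 and z == 0

# MCDC test cases for 'and': each operand is shown to independently change
# the outcome, so the False/False combination is not needed.
assert decision(2, 0) is True    # True/True   -> 'Yes' route
assert decision(0, 0) is False   # False/True  -> 'No' (first operand flipped)
assert decision(2, 1) is False   # True/False  -> 'No' (second operand flipped)
```

Three tests instead of the four required by branch condition combination testing; the saving grows quickly as more operands are added.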
4.3.20
Whilst BS 7925-2 describes the different test techniques and provides examples, it does not provide any guidance on when the different techniques should be used. There is no established consensus on which techniques are the most effective, only that the choice of techniques will be dependent upon such things as criticality and cost.
If test completion criteria were set at 100% (full statement coverage, full branch/decision
coverage etc.) it is possible to relate some techniques in an order, where they are shown to
subsume, or include other criteria. The diagram above is taken from BS 7925-2 and shows the
ordering of the various test techniques mentioned in this course.
4.3.21
Data flow testing aims to ensure that all data items have been used for the purpose intended. Many problems can be attributed to the use of the wrong variable, or to a variable being used incorrectly.
Linear Code Sequence And Jump is commonly known as LCSAJ testing. It is a method that
measures the jumps made between lines of code. Following the code through linearly the points
are identified where control 'jumps' to another point further down the code. Test cases are
created to cover each 'jump'.
Other techniques are defined as a 'catch all' in the standard where new techniques have been created after the standard was published (1998). A technique can qualify under this heading as long as it is freely available in the public domain, a source reference is provided and it is documented in the same way as other techniques in the standard.
In BS 7925-2 test measurement techniques are defined for all white box test techniques. The
test measurement techniques are defined so that coverage of the item being tested can be
calculated. See BS 7925-2 for more information.
LCSAJ
3. Study the following piece of code. What is the minimum number of tests required to
achieve 100% statement coverage and the minimum number of tests required to
achieve 100% branch/decision coverage.
1. IF TIRED
2. TAKE A NAP
3. IF RESTED
4. TAKE A WALK
5. ENDIF
6. ELSE
7. GO FOR JOG
8. ENDIF
2 statement, 2 branch
3 statement, 3 branch
2 statement, 3 branch
3 statement, 4 branch
4. Study the following piece of code. What is the minimum number of tests required to
achieve 100% statement coverage and the minimum number of tests required to
achieve 100% branch/decision coverage.
1. IF SUMMER
2. IF HOT
3. SUNBATHE
4. ELSE
5. GO SWIMMING
6. ENDIF
7. ELSE
8. IF SNOWING
9. BUILD SNOWMAN
10. ELSE
11. GO FOR WALK
12. ENDIF
13. ENDIF
4 statement, 4 branch
4 statement, 6 branch
3 statement, 4 branch
3 statement, 5 branch
5. Study the following piece of code. What is the minimum number of tests required to
achieve 100% statement coverage and the minimum number of tests required to
achieve 100% branch/decision coverage.
1. IF A=B
2. SET X
3. ENDIF
4. IF B=C
5. SET X=C
6. ELSE
7. SET X=D
8. ENDIF
2 statement, 2 branch
2 statement, 3 branch
3 statement, 3 branch
3 statement, 4 branch
Mark Paper
Answers: b, a, c, a and a
When you are given a piece of code one of the easiest ways to determine the paths through the
code is to draw a control flow graph. This is the same technique that we looked at in static
analysis - although now we will be considering the paths that are taken through the code when
particular tests are used at execution time.
IF P = T?
CARRY OUT THIS
IF X = 10?
CARRY OUT THE OTHER
ELSE
CARRY OUT THAT
ENDIF
ENDIF
4.4.1
There are 3 statements in this example - Carry Out This, Carry Out The Other and Carry Out That.
To achieve 100% statement coverage we must ensure that the tests we create execute each of
these statements. We also want to ensure we only use the minimum number of tests, so if it
were possible for 1 test to execute more than 1 statement then that would be good.
4.4.2
In this example we can achieve 100% statement coverage with two tests.
There are 2 branch/decisions in this example - If P=T and If X=10. To achieve 100%
branch/decision coverage we must ensure that the tests we create exercise each outcome of each
branch/decision. That means we want to execute both the 'Yes' and 'No' outcomes from each
branch/decision. We also want to ensure we only use the minimum number of tests, so if it were
possible for 1 test to execute 1 outcome from 1 branch/decision as well as another outcome from
another branch/decision then that would be good.
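The statement and branch coverage reasoning above can be sketched as a direct translation of the pseudocode; the concrete values chosen for P, T and X are arbitrary.

```python
statements = set()  # statements executed so far
outcomes = set()    # branch outcomes taken so far

def component(p, t, x):
    # Direct translation of the pseudocode in the text.
    if p == t:
        outcomes.add("P=T: Yes")
        statements.add("CARRY OUT THIS")
        if x == 10:
            outcomes.add("X=10: Yes")
            statements.add("CARRY OUT THE OTHER")
        else:
            outcomes.add("X=10: No")
            statements.add("CARRY OUT THAT")
    else:
        outcomes.add("P=T: No")

# Two tests reach all three statements (100% statement coverage)...
component(1, 1, 10)
component(1, 1, 5)
assert len(statements) == 3
# ...but the 'No' of P=T has not been taken, so a third test is needed
# for 100% branch/decision coverage.
assert "P=T: No" not in outcomes
component(1, 2, 10)
assert len(outcomes) == 4
```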
4.4.3
Error Guessing
Section Introduction
4.5.1
Error guessing is a form of negative testing. It is creating tests for situations that are known to
have caused problems in the past.
By using past experience a generic set of tests can be derived and used as appropriate. Here are
some of the known problem areas:
Error guessing is not a systematic form of testing and relies on the experience of the tester. In
some cases testers may attempt a type of test that they know has caused them a problem in the
past. Error guessing can be useful when used in addition to other test techniques.
Error Guessing
Review Questions
Mark Paper
Answers: c
Lesson 4 Summary
In this lesson we have covered:
• Equivalence partitioning
• Boundary value analysis
• Cause-effect graphing
• Other black box test techniques
• Statement testing
• Branch/Decision testing
• Other white box test techniques
• Error Guessing
Lesson 4 Review
1. Black box testing is also known as
Functional testing
System testing
Program testing
Equivalence Partitioning
Syntax testing
Branch testing
Cause-effect graphing
4. Error-guessing is
Is the division of all possible values a data item can take into classes
6. White box testing techniques are most appropriate when performing
Component testing
Maintenance testing
7. A field can have values entered in the range 100 to 999. Which of the following inputs
are results of designing tests for valid boundaries?
99 and 1001
98 and 998
Statement = 1, Branch/Decision = 3
Statement = 2, Branch/Decision = 2
Statement = 2, Branch/Decision = 3
Statement = 1, Branch/Decision = 2
9. Branch testing is
10. Equivalence partitioning can best described as
Testing one value in a class that is representative of all values in that class
Mark Paper
Answers: a, c, c, c, a, b, d, d, d and d
There are 30 questions in this Mock Exam, which are taken just from the first 4 Lessons of the
course. We suggest you allow yourself 40 minutes to do the Exam - the pass mark is 19.
In the next Mock Exam and the real Exam there will be 40 questions, the time allowed will be 1
hour and the pass mark is 25.
Some tips:
The process of exercising software to validate that systems are fit for purpose
The process of exercising software to validate that the system does what is
expected
Time
Faults found
Coverage criteria
Big-bang testing
Bottom-down testing
Top-down testing
Helpful
Cost effective
Expensive
Time consuming
Compilation
Requirements
Syntax
Objects
Regression testing is performed to ensure the original fault has been removed
Lack of training
Lack of documentation
9. When buying an item using a credit card the minimum value accepted is £10.00 and
the maximum value is £500.00. Which set of tests would be generated using Boundary
Value Analysis?
Risk analysis
Environment requirements
11. Consider the following statements
i. Quality control must be built into the whole SDLC
ii. Quality control starts as soon as the code is delivered
iii. Quality control is the activity performed to ensure that a product is ‘fit for purpose’
iv. The ‘quality’ attributes of a system include correctness and reliability.
v. Testing gives an assessment of the quality of software
i), ii), & iii) are true, iv) & v) are false
i), iii) & iv) are true, ii) & v) are false
12. A survey is being performed counting 3 different sandwich fillings. The initial choice
is ham or cheese. If cheese is chosen then a salad option is available. Consider the
following code
How many tests are needed to give 100% statement coverage and 100% branch coverage?
Statement 2, branch 3
Statement 3, branch 2
Statement 3, branch 3
Statement 2, branch 2
Error
Problem
Failure
Fault
The testing of the non-functional attributes
Where lower level components are then used to test higher level components
If X > 10 then
A = 10
Bx5
DO WHILE X < 8
16. Consider the following statements with regard to Black Box Testing. Which one is not
true?
Techniques such as Boundary Value Analysis can be used to derive test cases
17. Simple static analysis can help identify all of the following except
Unreachable code
Undeclared variables
Statement coverage
BS 7925-1
ISO 9000
BS 7925-2
19. Risk Assessment is important because
22. After executing a test, all of the following information is recorded, with the exception
of
Performed by users at the developers site
26. The testing techniques to be used within the testing phase of a project should be
identified in the High Level Test Plan. Where are they described?
BS 7925-2
In Test Cases
State Transition
Performance
Cause Effect
Syntax
28. All of the following are valid functional test objectives except:
To show that the account number is protected on the Customer Enquiry
screen
To ensure that the Order Request screen responds within a specified period of
time
To ensure that all mandatory fields on the Order Request screen are displayed
in red
To show that the total price field is not incremented when a product is not in
stock
29. Consider the following statements regarding Black and White box test techniques:
i. It is also known as Functional Testing
ii. LCSAJ is one of its recognised test techniques
iii. It is more suited to lower level types of testing
iv. It is concerned with what a system, or part of a system does
v. Syntax testing one of its recognised test techniques
i), ii) and v) relate to Black Box testing, iii) and iv) relate to White Box testing
ii) and iv) relate to Black Box testing, i), iii) and v) relate to White Box testing
iv) and v) relate to Black Box testing, i), ii) and iii) relate to White Box testing
i), iv) and v) relate to Black Box testing, ii) and iii) relate to White Box testing
By experienced testers
Mark Paper
Answers: b, c, b, b, c, a, b, d, c, c, b, c, d, b, b, c, d, d, d, d, d, b, b, c, c, c, b, b, d, and a
Organisation
Lesson Introduction
Section Introduction
5.1.1
Different companies deploy different organisational structures for performing their testing. Even
in the same company there may be different organisational structures for the different stages of
testing.
5.1.2
At the component testing level, it is very likely that the developer or development team will have
responsibility for testing.
As you progress up the right hand side of the V Model this is likely to change. At the systems
testing level, a separate team could be introduced to perform the testing, or even a different
company.
Acceptance testing could be under the control of the development team, or again could be a
separate department, team or company. Many companies have a quality assurance team who are
responsible for acceptance testing.
Whatever the organisational structure it is essential that each team is aware of their
responsibilities.
5.1.3
It is important to understand that there are many people that have valuable input into the 'Test
Team'. Not everyone is necessarily involved all the time, but a multi-disciplinary team with
specialist skills is usually necessary.
The client, the project sponsor, provides the budget and has the final sign-off for the project.
The project manager provides project management skills. Typically they would be managing the
project throughout its various stages and providing the client with progress reports.
The user provides the detailed business knowledge especially of current systems. Typically they
would be highlighting problems with the existing systems and defining the requirements of the
new system.
The business analyst provides knowledge of the business and also analysis skills. Typically they
would prepare the User Requirements from discussions with the users.
The systems analyst provides knowledge of system design. Typically they would prepare the
Functional Specification from the User Requirements.
The technical designer provides the technical detail to support the system design. Typically this
involves database design and administration.
5.1.4
The developer or programmer provides the programming knowledge and expertise. Typically they
would write the code and perform component testing. They can also assist in later stages of
testing by providing test harnesses and providing technical detail.
The independent tester provides testing expertise. Typically they would be involved in test
creation and execution. In some cases they may be test automation experts.
The test manager provides testing expertise and management skills. Typically they would be
responsible for managing the testing and providing test coverage and status reports.
The auditor provides knowledge of the audit requirements. Typically they would ensure that all
the audit requirements could be achieved.
The trainer provides knowledge of the training requirements. Typically they would ensure that
they would be able to teach the users how to use the new system.
The technical writer provides knowledge of the documentation requirements. Typically they would
write the User Manual and Help Text.
5.1.5
The support and help line teams will need to understand how the system will be used in the live
environment and will be required to provide support to live users once the system goes live.
Organisation
Review Questions
Mark Paper
Answers: b
Configuration Management
Section Introduction
What is Configuration Management?
5.2.1
A 'system' consists of many components (items), both software and hardware, and it is the combination of these that makes up the configuration. Configuration Management is the management of these items.
Configuration Management and Change Management are not quite the same. Change
Management manages the changes made to items and Configuration Management manages all
the items and the status of all the items of the system as a whole.
Symptoms of poor CM
5.2.2
It is necessary to understand that software exists in two forms, non-executable (source code)
and executable (object code). If there are errors found when a program is executed it may be
necessary to make changes to the source code. It is essential to be able to identify the source
code from which the program being executed was generated.
If changes were made to a program for a release of software and there are problems, it may be
necessary to identify exactly what changes were made.
5.2.3
If two people are changing the same source code at the same time then each of them will not see
the changes made by the other person. Their changes will be tested independently and may not
work when combined. Also, if they are unaware that someone else is changing the same
program, when the programs are copied back to the 'master' area, the second program to be
copied will overwrite the first program. This could result in the changes made to the first program
being lost.
Problems would also occur where, for example Customer Requirements or Business Process
documents are changed without the developers being made aware of the changes. In this
instance the required changes would not be applied to the programs.
5.2.4
Following on from that, if the developers change the programs and the testers have not been
made aware of the changes then they will not know to test them. This could result in changes
made 'going live' without being tested.
If changes are made to a document, source code or any other item and then a decision is made
not to go ahead it may be necessary to roll back to the previous version. The previous version
must always be saved.
Test Managers will need to be able to demonstrate what testing has been carried out for a release
of software along with the results. This is necessary to determine the test coverage.
5.2.5
If it is unclear what the requirements are, what test material is being used, or what test
environment the tests are being run in, it can all lead to more time being taken up in the
development and testing of the system, leading to missed project deadlines and error prone
code.
If changes to requirements have not been passed on to the developers, then it is likely that the
code that is implemented will not satisfy the user's requirements. This leads to the users being
unhappy with the final system and enhancements being needed once the system is 'live'.
When you are unable to determine the correct source code, or the changes made for a particular version, or even to locate a copy of the previous version of the program, it will be very difficult when changes need to be made. How can you change the source code when you are not sure which source code is correct? It is easier to make further changes to source code when you are able to identify the last changes made.
Configuration Identification
To ensure that projects don't suffer from the problems detailed above we need to ensure that Configuration Management is in place for all of the projects undertaken. There are four aspects of Configuration Management that we will cover now - these are Configuration Identification, Configuration Control, Status Accounting and Configuration Auditing.
5.2.6
You must be able to identify all items within a system, whether these are software or hardware.
5.2.7
Configuration Identification is the process of identifying all the items within a project that are
going to be subject to version control. All details of the items need to be recorded including the
status and version. The ability to be able to link versions of items back to requirements is
especially useful.
Configuration Control
5.2.8
Change is inevitable. The test process of executing tests and finding faults ensures that change
will be necessary to correct the faults found. Over time the requirements of a system may change
which in turn will lead to system changes.
A process needs to be put in place that ensures changes are carefully controlled and monitored. A
'master' copy needs to be kept so that if changes need to be made the item can effectively be
checked out from the 'master'. Details such as dates, times, version number and the person
checking the item out would be recorded. This avoids issues such as two people working on the
same item at the same time. A record needs to be kept of what changes have been made within
the item so that the changes related to this particular version can be identified. Once the changes
have been made and approved the item can be checked back in and it then becomes the
'master'.
Over time you will be able to see a history of what changes have been made to what
configuration items. It is essential that this process be put in place at the start of the project.
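The check-out/check-in control described above can be sketched minimally. The item name and people are hypothetical, a simple single-writer lock is assumed, and real configuration management tools record far more detail.

```python
import datetime

checked_out = {}  # item name -> (person, time checked out)

def check_out(item, person):
    """Take the 'master' copy for editing; refuse if someone else holds it."""
    if item in checked_out:
        raise RuntimeError(f"{item} already checked out by {checked_out[item][0]}")
    checked_out[item] = (person, datetime.datetime.now())

def check_in(item, person):
    """Return the item; the returned copy becomes the new 'master'."""
    assert checked_out[item][0] == person, "only the holder may check in"
    del checked_out[item]

check_out("billing.c", "alice")     # hypothetical item and people
try:
    check_out("billing.c", "bob")   # a second simultaneous check-out is refused
    assert False, "should have been refused"
except RuntimeError:
    pass
check_in("billing.c", "alice")
assert "billing.c" not in checked_out
```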
Status Accounting
5.2.9
Status accounting is the process of recording and reporting on the status of the configuration
items. This will be easier if a good control process is in place!
It provides the ability to determine the status of any requested changes since the original item was approved. These may be waiting for approval, approved but waiting to be implemented, or implemented.
Configuration Auditing
5.2.10
Configuration Auditing is ensuring that the control process that has been put in place is being
adhered to.
It is also to ensure that the configuration items actually reflect their characteristics as specified in
the requirements. For example if a version number and status need to be recorded against an
item then it is recorded.
Configuration Management can be very complicated in environments where mixed hardware and
software platforms are being used. There are various Configuration Management tools, such as
PVCS and AccuRev, which are becoming more widely available that can help in this process.
Configuration Management
Review Questions
Configuration accounting
Configuration control
Configuration identification
ii, iv, v, vi
iii, iv, v , vi
Mark Paper
Answers: b, c and d
Section Introduction
5.3.1
Test estimation is performed in order that we can determine how long activities are likely to take.
If we know we have 2 weeks to complete a piece of work, and we have estimated that it will take
20 man days to do the work, we will be able to understand the resource requirements. In this
example we would need 2 people for the 2 weeks, or 2 people for 1 week and 1 person for 2
weeks. Test estimation allows us to put together a plan of who will do what and when they will do
it.
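The head-count arithmetic in the example above can be sketched as a simple calculation, assuming a five-day working week:

```python
import math

def people_needed(effort_person_days, duration_working_days):
    """Minimum whole number of people to fit the effort into the duration."""
    return math.ceil(effort_person_days / duration_working_days)

# 20 person-days of work in 2 working weeks (10 working days) needs 2 people.
assert people_needed(20, 10) == 2
# Any remainder rounds up: 21 person-days in the same window needs 3 people.
assert people_needed(21, 10) == 3
```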
Performing the estimation and planning process can also help us to understand task dependencies. It is no good scheduling someone to work on a particular task if it is dependent on another task which is not scheduled to end until after this task has started.
By estimating tasks, the overall project timescales can be determined. If it is not possible to fit all the required tasks within the given timeframe, with the current resources available, then changes have to be made. In some cases it may be possible to move the project deadline to allow more time, or it might be possible to bring in extra resources. Where this is not possible the project may need to be de-scoped; this is where non-essential functionality is removed or delayed.
However, one important factor to remember when estimating is to add some contingency.
Problems can occur due to sickness, rework, or tasks taking longer than anticipated. Plans should
be updated to reflect the current situation and reworked as necessary.
5.3.2
When estimating the effort required for testing, it is not only the physical activity of test
execution that needs to be considered. In addition, the reviews, the planning activities, writing
the test cases, recording and checking the test results also need to be included in the process.
It must be remembered that testing is an iterative process and allowance must be made for any
necessary rework.
5.3.3
In order to obtain realistic project plans it is important that the estimates are as accurate as
possible. There are many different estimation techniques that can be used depending on what is
being estimated. For example if you are estimating how long it is going to take to write a
program there is an algorithm to use that combines the number of inputs & outputs, takes into
account the complexity and adds a weighting for the experience of the person completing the
task.
A metrics database is where the activities from previous testing projects are recorded and used
to assist in determining how long tasks of a similar nature will take.
If a metrics database is not in place, then completed project plans from previous projects could
be used to give information on the duration of previous, similar activities.
If none of the above is available, then the project team will have to rely on their experience to
judge how long tasks will take.
5.3.4
We need to monitor the testing activities in order to determine what progress is being made.
We need to understand whether our test plans are complete, the progress we have made in writing our test cases, how much testing has been performed, how successful it has been, how much testing is left to perform, and how the different areas of testing are progressing.
5.3.5
We can then determine if the estimates that we originally derived need adjusting - whether tasks
have taken longer than anticipated, shorter than anticipated or there have been other outside
factors that have affected the timescales/resources involved.
If tasks are not progressing as planned, corrective actions can be taken in order to manage them.
5.3.6
Testing can be monitored in different ways. If we look at how many tests have actually been
executed against how many tests need to be executed, we can determine the percentage of test
coverage.
We can also monitor the pass and failure rate of our tests; this will show us how successful our
testing has been.
It is very important to monitor the problems that have been found. We can monitor the number
that have been raised, the severity, how many are outstanding and how many have been fixed.
We will cover more on this in the Incident Management section.
We can also monitor particular business processes and determine which of those are 'fit for
purpose'. Effort should initially be concentrated on high-risk business areas - those that need to
be ready for 'day one'.
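The first two measures above - test coverage and pass rate - reduce to simple arithmetic. A minimal sketch (the function and argument names are our own):

```python
# Minimal sketch of two test-progress metrics (names are illustrative).

def progress_metrics(executed, planned, passed):
    """Return the percentage of planned tests executed, and the pass
    rate of those that have been executed."""
    coverage = 100.0 * executed / planned   # how far through the plan we are
    pass_rate = 100.0 * passed / executed   # how successful the testing was
    return coverage, pass_rate

coverage, pass_rate = progress_metrics(executed=150, planned=200, passed=120)
print(coverage)   # 75.0 - three quarters of the planned tests have run
print(pass_rate)  # 80.0 - four in five executed tests passed
```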
Test Control
5.3.7
In order to be able to control our testing activities we must monitor those activities.
By monitoring our testing, we can understand the current position. We know what has been
tested, what still needs to be tested and where there are problems.
By monitoring our testing we can recognise problem areas. Problems could be because the
testing is discovering a high percentage of errors or because a particular business area has not
been tested. It could be that some activities are taking longer than anticipated.
Once a problem has been recognised, steps can be taken to try and resolve it. This may be the
diversion of resources - more people to work on testing if necessary, or additional people to try
and sort out the application errors that have been found.
By monitoring testing, management have an early warning if there is not sufficient time to
complete the scheduled work. It may be possible to move the project deadline to allow more time
or where this is not possible the project may need to be de-scoped; this is where non-essential
functionality is removed or delayed.
Correcting Code
Re-tests
Regression tests
2. Test monitoring metrics may be:
Number of requirement documents
Answers: b and c
Incident Management
Section Introduction
• What is an incident
• Incidents and the test process
• Incident logging
• Beizer grading of incidents
• Tracking and analysis
What is an Incident?
5.4.1
An incident is any problem or query that occurs during testing that requires further investigation.
It is called an incident as opposed to a bug or error because not everything identified will be a
fault in the software.
Typically, an incident is raised when the expected results and the actual results of a test differ.
This will then require further investigation to determine whether the software is at fault or
whether the predicted results were wrong.
It is not only during test execution that incidents can be raised. The queries or issues found when
reviewing documents should also be raised as incidents. An incident can therefore be raised at
any point during the SDLC and it is important that an incident management system is in place
from the start of the project.
5.4.2
As incidents are raised and logged you will be able to see if a large number have been raised for
a particular function or business process. This may be due to the quality of the code or the
quality of the test cases but once the problem has been identified it can be addressed
accordingly.
The Test Manager is mainly concerned with two things - how much coverage has been achieved
and how many problems have occurred. The two figures go hand in hand in determining the
progress being made. The number of incidents raised per week usually reaches a peak and then
starts to tail off. The Test Manager will be very keen to know when this has happened.
Once the project has finished various statistics can be taken from the incidents raised to assist in
process improvement. For example you can look at what point in the SDLC the incident was
found and where in the SDLC the error occurred. If lots of incidents were found at acceptance
testing that related to ambiguities or inconsistencies within the Customer Requirements then it
could be that a review of the document did not take place or was not very effective. Once known
this can then be addressed.
Incident Logging
5.4.3
The logging of an incident is a very important task. There is no point in finding incidents if we do
not have enough information to rectify them. We also need to keep information to allow us to
follow the status of them through to completion and ensure that they are classified correctly to
assist in process improvement. There are various details that can be kept about an incident;
you may not be aware of them all when the incident is initially raised, and some details may
not be appropriate depending on the incident itself. For example:
• Incident identification - a unique reference to the incident.
• Date incident raised - the date the incident was raised. This is important information as it
allows analysis to be carried out on how long incidents have been outstanding.
• Classification - a classification of the type of incident found; this will assist in process
improvement.
• System reference - which area within the system the incident relates to. This is useful in
finding out how well particular areas of the system are doing.
• Status code - the current status of the incident. This will change as it is followed through
to resolution, for example: waiting for review, program being changed, ready for retest.
• Priority code - this indicates the severity of the error. In some cases it can also
trigger a 'Fix by Date' as per a service level agreement.
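A minimal sketch of how such an incident record might be held in code - the field names here are illustrative, not taken from any particular incident management tool:

```python
# Sketch of an incident record holding the details described above
# (field names are illustrative, not from a specific tool).
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Incident:
    incident_id: str       # unique reference to the incident
    date_raised: date      # when the incident was raised
    classification: str    # type of incident, to assist process improvement
    system_reference: str  # area of the system the incident relates to
    status: str = "open"   # e.g. waiting for review, ready for retest
    priority: int = 3      # severity; may trigger a 'Fix by Date'
    history: list = field(default_factory=list)  # audit trail of changes

    def days_outstanding(self, today):
        """How long the incident has been open - useful for analysis."""
        return (today - self.date_raised).days

inc = Incident("INC-042", date(2004, 3, 1), "code fault", "billing")
print(inc.days_outstanding(date(2004, 3, 15)))  # 14
```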
5.4.4
5.4.5
Beizer grades incident severity on a 10-point scale: 1 Mild, 2 Moderate, 3 Annoying,
4 Disturbing, 5 Serious, 6 Very Serious, 7 Extreme, 8 Intolerable, 9 Catastrophic and
10 Infectious.
5.4.6
The severity of an incident will determine how it will be prioritised and from that what action is to
be taken. Dr. Boris Beizer's grading of incidents is detailed above.
However, in practice, it is unlikely that there will be 10 different ways to react to the severity of
an incident. In practice there are usually 4 actions that can be taken:
• Fix it immediately
• Schedule it for the next release
• Fix it next time that piece of software is maintained
• Do nothing, it is not important enough
BORIS BEIZER, PhD, is an internationally known software consultant with almost four decades of
experience in the computer industry. A pioneer in software testing, he is the author of many
books on the subject, two of which, Software Testing Techniques and Software System Testing
and Quality Assurance, have long been regarded as standards in the field.
Boris Beizer, Van Nostrand Reinhold, 1990, ISBN 0-442-20672-0.
5.4.7
Once an incident has been raised it is important that it is tracked through to resolution. After
investigation it may be that the incident can be closed with no further work required, but in most
cases further work will be necessary. Many different departments could be involved, most
commonly the testers and the developers. It is important that an incident is monitored to ensure
that it doesn't get lost or held up along the way.
5.4.8
It is good practice that incidents should not be deleted. It is equally important that as they are
tracked through to resolution all the changed information is kept. This is normally in the form of
history logs, which are added to the incident.
The history logs enable you to see the workflow and are also a source for process improvement.
For example, if you can see that an incident has been assigned to the developers to fix, then
assigned for retest, and that this has occurred many times, you would want to investigate why.
5.4.9
The incident data is considered very valuable information. It is preferable if this data is stored in
some form of relational database such that queries can easily be performed.
The analysis of incidents is one of the factors used in determining whether the system is 'fit for
purpose'.
Incident Management
Review Questions
When environment support takes more than a day to fix the problem
A requirement is ambiguous
Answers: d and a
Section Introduction
• QA standards
• Industry-specific standards
• Testing standards
QA Standards
5.5.1
Testing standards are an attempt to introduce structure and discipline into the testing process.
They are there to assist the process not hinder it. If a particular standard is not producing any
benefit then it should be reviewed and changed accordingly.
QA Standards such as ISO 9000 simply specify that testing should be performed. However, there
is an expectation that you state what you intend to test and provide evidence of the testing
that was actually performed.
Industry-specific Standards
5.5.2
Industry specific standards such as the Railway signalling standard specify what level of testing
to perform. 'Level' refers to the depth of testing to be performed - for example statement,
branch/decision or branch condition combination. The level (or depth) of testing required can
relate to the safety critical nature of the system - the higher the criticality the greater the depth
of testing.
Testing standards
5.5.3
Testing Standards such as BS 7925-2 specify how the testing is to be performed and should be
referenced from the previous two types of standards.
IEEE 829
BS 7925-1
IEEE 829
BS 7925-2
The V model
Answers: a and c
Lesson 5 Summary
In this lesson we have covered:
• Organisation
• Configuration Management
o What is Configuration Management?
o Symptoms of poor Configuration Management
o Configuration Identification
o Configuration Control
o Status Accounting
o Configuration Auditing
• Test Estimation, Monitoring and Control
• Incident Management
o What is an incident
o Incidents and the test process
o Incident logging
o Beizer grading of incidents
o Tracking and analysis
• Standards for Testing
o QA standards
o Industry-specific standards
o Testing standards
Lesson 5 Review
The verification that configuration items reflect their defined physical and
functional characteristics
2. Consider the following statements regarding Incidents
iii) and v) are true, i), ii) and iv) are false
i), iii) and iv) are true, ii) and v) are false
The developers make changes to the program code without informing the
testers
There are changes made to system documentation without the developers being
aware
5. Test Monitoring should be performed for all of the following reasons except
The verification that the configuration management process is being adhered to
The verification that configuration items reflect their physical and functional
characteristics
8. QA Standards
Answers: c, d, b, a, d, c, b and d
Section Introduction
• What are CASE Tools?
It should be noted that all the suppliers mentioned on the following pages are only a few of the
many that exist in the marketplace. Neither ISEB nor TSG recommend or endorse specific
supplier tools.
6.1.1
There are many different types of CAST tool and they are designed to assist with the different
types of testing that are performed within the SDLC.
The different types of tools are suited for use by different sets of testers - from developers being
able to analyse how much of the code in their program has been exercised through to end-users
being able to simulate the loading capabilities of a system.
6.1.2
There is sometimes confusion between CAST tools and CASE tools. CASE tools are used to assist
in the requirements definition and design of a system. They provide a repository for all system
documentation and some can also provide automation in generating documents and program
code from predecessor documents. For example, the functional decomposition of a system can be
recorded in a CASE tool. Program specifications can then be generated from the functional
decomposition and then program code generated from the program specification.
CASE tools can also be used to assist in the logical and physical database designs.
Require no training
Answers: a and c
Section Introduction
6.2.1
If we look at the very first stage of the SDLC, we are concerned with capturing requirements.
When 'Requirements Testing' is referred to, it is concerned with looking at the requirements
documents: verifying that they meet required standards, validating that they represent any
predecessor documents, that they satisfy user requirements and that they are consistent. We are
also concerned with how the system may look on the screen.
Although there are no CAST tools that specifically cater for this type of testing, there are
Requirements Capturing tools which do provide some facilities that come under the testing
umbrella - for example, checking for consistency (verification) and allowing 'what if' scenarios
that animate the user requirements. This can be classed as an early form of testing.
6.2.2
The creation of Test Cases can be one of the lengthier activities in the testing process. In some
instances, test cases can be automatically generated, either from Program Specifications or from
the program code itself.
In order for Test Cases to be generated, there must be some use made of CASE tools, so that the
specifications and program code are recorded in a structured way. From this structured data, test
cases can then be created via the CASE tool.
Test Data Preparation tools
6.2.3
When testing a system, there may be a wealth of live data available in files or databases, but you
need to be selective about what data to use. If you have too much data, then it will be difficult to
determine what results you are expecting. Test Data Preparation tools enable the selection of
data from existing files, based on criteria that have been entered. For example, 'Select all clients
that have more than one bank account and have an overdraft limit of £30,000 or above'.
Once extracted, data may have to be manipulated to satisfy the new or amended requirements of
the system. Test Data Preparation tools enable this manipulation.
More sophisticated tools can deal with different file types and data from different sources in order
to create test files to be used as input to the system under test.
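The selection step can be illustrated in a few lines of code - here the example criterion from above, applied to some invented client records:

```python
# Sketch of the selection step of test-data preparation, using
# invented client records.
clients = [
    {"name": "Ames",  "accounts": 2, "overdraft_limit": 50_000},
    {"name": "Brown", "accounts": 1, "overdraft_limit": 40_000},
    {"name": "Cole",  "accounts": 3, "overdraft_limit": 10_000},
]

# 'Select all clients that have more than one bank account and have an
# overdraft limit of £30,000 or above.'
selected = [c for c in clients
            if c["accounts"] > 1 and c["overdraft_limit"] >= 30_000]

print([c["name"] for c in selected])  # ['Ames']
```

A real Test Data Preparation tool does the same kind of filtering, but against live files and databases and with facilities for then manipulating the extracted data.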
Examples of Test Data Preparation tools are the File-Aid suite of products from Compuware
(www.Compuware.com/products/fileaid) and Datatect from Banner Software
(www.Datatect.com).
6.2.4
We have covered Static Analysis earlier in the course. To reiterate, it is concerned with examining
program code rather than executing it.
By performing this examination, the quality of the software can be determined (for example,
comparing program code against company standards).
Static Analysis tools can also provide measurements of various characteristics of the code. One
characteristic that can be measured is cyclomatic complexity.
The tools used in this arena are often language specific, so depending on the programming
languages deployed within a company, one or more Static Analysis tools may be required.
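As an illustration of the kind of measurement involved, here is a toy calculation of cyclomatic complexity for Python source: one plus the number of decision points. Real Static Analysis tools are far more thorough; this sketch only recognises a few branch constructs.

```python
# Toy static-analysis measurement: cyclomatic complexity counted as
# 1 plus the number of decision points found in the source. Only a few
# Python branch constructs are recognised here.
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.BoolOp)

def cyclomatic_complexity(source):
    """Parse the source and count decision points, then add one."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES)
                    for node in ast.walk(tree))
    return decisions + 1

code = """
def grade(score):
    if score >= 70:
        return 'A'
    if score >= 50:
        return 'B'
    return 'C'
"""
print(cyclomatic_complexity(code))  # 3: two ifs plus one
```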
6.2.5
The use of a Dynamic Analysis tool will provide information about a program when it is executing.
During the execution of a program, it will need to use memory to store information. The more
memory a program requires for data storage, the less memory it has available to actually
execute in, so potentially it will execute slowly. It may well be that the amount of storage
required by the program has been over-estimated by the developer - for example, a table that
has been defined with 10,000 entries may only ever have the first 100 used. A Dynamic Analysis
tool will help to highlight problems such as these.
The execution of a program could also result in a memory leak, whereby a fault in a program
prevents it from freeing up memory it no longer needs. As a result, the program grabs more and
more memory until it finally crashes because there is no memory left.
Pointers are frequently used in programs to reference, for example, entries in a table. If the
pointer is set incorrectly initially, or corrupted during program execution, then the program may
well abend or produce incorrect results. Dynamic Analysis Tools can also detect this type of
problem.
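As a small illustration of the memory-monitoring side of dynamic analysis, the sketch below uses Python's built-in tracemalloc module to show memory growing across repeated calls to a deliberately leaky routine - the kind of pattern a Dynamic Analysis tool would flag:

```python
# Sketch of dynamic memory analysis using Python's built-in
# tracemalloc: snapshot traced memory before and after repeated calls
# to a suspect routine and compare.
import tracemalloc

def suspect_routine(store):
    # Simulates a leak: keeps appending data that is never released.
    store.append(bytearray(100_000))

leaky_store = []
tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()
for _ in range(10):
    suspect_routine(leaky_store)
after, _ = tracemalloc.get_traced_memory()
tracemalloc.stop()

growth = after - before
print(growth > 900_000)  # True - memory grew on every call, never freed
```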
6.2.6
When a program is being tested, we want to feel confident that all the executable statements
within it have been exercised. We also want to have confidence that all the functional areas of the
program have also been tested.
By giving careful consideration to the design of test cases, we may feel confident that we have
indeed covered all of the statements and all of the functionality. However, to be absolutely sure,
a Coverage Measurement tool can be used to provide us with the information we require.
Coverage Measurement tools record the program statements that have been exercised during a
test run. Reports are then generated indicating those that have been exercised and those that
haven't. They can also refer to the functional requirements of a system and indicate those
functions that have not been exercised.
In order to do this, the code is instrumented - extra statements are compiled into the program
and the tool uses these when monitoring the test execution. The instrumentation does not change
the functional operation of the system.
Examples of Coverage Measurement tools are McCabe Test from McCabe (www.McCabe.com) and
Testbed from LDRA (www.ldra.co.uk).
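The idea behind instrumentation can be shown in miniature with Python's sys.settrace hook, which records the lines executed during a test run; lines never recorded indicate statements the tests did not cover. This is a toy, not how the commercial tools above work internally.

```python
# Toy coverage measurement: sys.settrace plays the role of the
# instrumentation, recording which line numbers execute.
import sys

executed_lines = set()

def tracer(frame, event, arg):
    if event == "line":
        executed_lines.add(frame.f_lineno)
    return tracer

def absolute(x):       # the "program under test"
    if x < 0:
        return -x      # only reached for negative input
    return x

sys.settrace(tracer)
absolute(5)            # this test never exercises the x < 0 branch
sys.settrace(None)

# Only two lines of absolute() were recorded: the 'return -x' line
# never ran, so the single test did not achieve full statement coverage.
print(len(executed_lines))  # 2
```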
Debugging tools
6.2.7
When a program is being tested, it may produce the wrong results or abend at some point during
execution. Debugging tools are available to assist the developer in finding out what is going
wrong.
Whilst executing a program, the developer may detect that it is failing in a particular area of
code. When they execute the program in conjunction with a debugging tool, they can elect to
stop the program at a particular statement. The developer can then 'step' the program through
one statement at a time to see the manner in which it is executing and to try and determine
where the problem lies. They can also elect to carry on program execution and resume debugging
at a statement of their choice.
6.2.8
Debugging tools give the developer the ability to interrogate program variables. One reason that
a program can fail is that a variable contains a wrong value or has been corrupted - a debugging
tool will help in determining if this has happened. If there is a problem with a variable, the tool
will often allow you to modify it, so that program execution can continue to see if any further
problems can be discovered. Of course, the developer must remember to rectify the problem in
the program once they have finished debugging!!
If a problem has occurred because the logical flow through the program is wrong (for example,
the program branches to 'A' when it should branch to 'B'), then the developer can alter the path
the program is taking via the debugging tool.
6.2.9
There are Test Harness and Driver tools that are commercially available. However, this is one
area of testing that can equally utilise 'home grown' products, as what is required of the test
harness may be very specific to the system under test.
There are many programs in existence that do not have an immediate user interface, for example
a batch update of a sales ledger from all sales made in that day. If a change is made to the
program the developer may need to have a 'harness' around that program in order to be able to
test it, e.g. to be able to simulate the sales ledger input without having to enter sales figures.
Some commercially available test harnesses also have the ability to run automated test scripts,
giving a more complete solution in this area of testing.
6.2.10
If a system is interfacing to another external system and that external system is not readily
available, a test harness can be utilised in order to simulate the external system. At some stage
in the testing process a link will have to be made to the external system, but in early testing
phases a harness can simulate this and hopefully problems can be uncovered before the real link
is established and tested.
In some instances it is impractical to test links between programs. For example, a country's
military defence system will need to test that the correct alerts are transmitted if a nuclear
warhead is launched against the country - obviously they would want a test harness to simulate
the warhead being launched rather than having to use the real thing!
Examples of test harnesses are TestQuest from TestQuest (www.testquest.com) and TBRun from
LDRA (www.ldra.com).
6.2.11
Over the last couple of decades there has been a move away from 'green screen' or dumb-
terminal type systems to windows-based GUI (Graphical User Interface) systems. However, there
are still many of the former systems in existence and some are still being developed.
Capture-replay test tools can be used for testing the green screen type systems, whereby screen
input is captured by the tool and can then be 'replayed' as required. The capture-replay
facilities required for green screen systems are different from, and in some respects simpler
than, those for GUI-based systems. They are simpler because mouse movements, mouse clicks etc.
are not used and therefore do not have to be considered when testing.
When a captured session is replayed, any differences in screen outputs and formatting will be
identified and can then be investigated by the tester.
6.2.12
As tests are captured, the input data is captured too; again, this can be modified if required.
However, the most common use for this type of tool is for regression testing - when you are
testing that what was working before a change to another part of the system was made is still
working now.
6.2.13
As stated previously, there are also capture-replay tools for GUI based applications.
These tools are more complex, in as much as they have to capture mouse movements and
mouse clicks as well as the screen inputs.
In a windows-based application, there are different types of objects on a screen that all have to
be recognised and catered for in the tool. These objects are items such as windows, radio
buttons, pop-up windows and bit-map images.
6.2.14
As for the Character-based tools, the most common use for this type of tool is for regression
testing.
6.2.15
When we consider testing the performance of a system, we want to understand how it will
perform when the load of the system is representative of live use. For example, it is of no use to
have tested a system with 2 users adding and amending 10 client records an hour, when in
production use there will be 100 users adding and amending 5000 client records an hour. Load
generation tools enable a realistic number of transactions to be exercised against the system.
As well as simulating the load on the system, its transaction times under production
conditions also need to be tested. Again, the response times for two users when adding client
records may be acceptable, but it needs to be understood what happens when 100 client records
are added within a short space of time. Test transaction tools are available to monitor the
transaction times of specific actions within a system.
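A toy illustration of the two ideas combined - threads simulating concurrent users while each transaction is timed. The transaction itself is just a stand-in delay:

```python
# Sketch of load generation plus transaction timing: threads play the
# part of concurrent users, and each "transaction" is timed.
import threading
import time

def add_client_record():
    time.sleep(0.01)  # stand-in for the real transaction under test

def simulated_user(timings, transactions=5):
    for _ in range(transactions):
        start = time.perf_counter()
        add_client_record()
        timings.append(time.perf_counter() - start)

timings = []
users = [threading.Thread(target=simulated_user, args=(timings,))
         for _ in range(10)]
for u in users:
    u.start()
for u in users:
    u.join()

print(len(timings))        # 50 transactions recorded (10 users x 5 each)
print(max(timings) < 1.0)  # True - all responses well under a second
```

A real load tool would drive the actual system, scale to hundreds of virtual users and report response-time distributions rather than a single maximum.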
6.2.16
When both load generation and test transaction tools have been deployed, there will be
information available on how the system will perform once in production.
The load testing may highlight areas of a network that cannot cope with the amount of data
being transmitted - it could be that it will need reconfiguring to cope with the volume of data
expected. There may also be inefficient ways that the database handles additions and updates -
the DBA and/or developer may need to investigate how they can change the program interface
with the database in question.
The transaction testing may highlight areas of the system that have unacceptable response times
to the user. These areas will then need to be explored and possible alternative solutions sought.
Comparison tools
6.2.17
Comparison tools are concerned with comparing one 'item' with another and identifying the
differences.
They can be used to examine the source code of a program with its predecessor version and
highlight any differences between the two. There may be a situation where a developer has
changed a program and has not indicated what they have changed. The comparison tool will
highlight the changes that have been made and the source code can be flagged accordingly.
Comparison tools can also be used to compare files and databases - looking at 'before' and 'after'
images and indicating what fields have changed. This saves the tester time in as much as they
don't have to examine the file or database contents field by field, but can concentrate on what
has changed.
Screen images are another item that can be compared with this type of tool - screen standards
can quickly be validated, for example, is the 'Help' key always F1 and in the same position on
each screen?
When comparing items, there may be information that the tester wishes to disregard - for
example screen titles, company logos etc. on a screen, or particular fields on a database, for
example, date and timestamp fields. Comparison tools enable the tester to specify what areas of
a screen or what field values they want to ignore when they are performing the comparison.
Comparison tools can also be used to compare expected results with actual results. The users of
this particular tool will need to employ good testing practices, whereby the expected results are
specified prior to the testing being undertaken!
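The core of such a comparison - 'before' and 'after' records compared field by field, with nominated fields such as timestamps disregarded - can be sketched as:

```python
# Sketch of a comparison-tool core: compare 'before' and 'after'
# records field by field, skipping fields the tester chooses to ignore
# (such as date and timestamp fields).
def compare_records(before, after, ignore=()):
    """Return the names of fields whose values differ, excluding any
    the tester has chosen to disregard."""
    fields = set(before) | set(after)
    return sorted(f for f in fields
                  if f not in ignore and before.get(f) != after.get(f))

before = {"name": "Smith", "balance": 120, "updated": "2004-01-01 09:00"}
after  = {"name": "Smith", "balance": 95,  "updated": "2004-01-02 14:30"}

# Ignoring the timestamp leaves only the change that matters.
print(compare_records(before, after, ignore=("updated",)))  # ['balance']
```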
6.2.18
Test Management tools are concerned with managing the whole testing process.
Test Management is concerned with the creation and control of all testing material. This includes:
• recording project documentation (such as the user requirements and functional specification)
• defining what needs to be tested in the system via a functional decomposition
• defining how the system is to be tested by creating scripts and relating those back to the
functional decomposition
• defining when to test by creating a test schedule of the scripts that need to be executed
• identifying what happened when the scripts were run.
Throughout this Test Management process, traceability back to the original system requirements
is maintained.
Test Management tools can provide a wealth of information on the testing activity. They not only
provide information on how testing is progressing (the number of passes and failures), but can
also relate to the original requirements of the system and relate testing progress back to them.
Test Management tools can also assist in determining how a change to the system requirements
may impact the testing effort - they provide the ability to identify what functions in the functional
decomposition are affected by the change, and then consequently what scripts may have to be
amended.
Not only do Test Management tools provide the ability to manage the testing process, but they
also provide the ability to raise and track incidents that have resulted from the testing performed.
The Incident Management modules normally include some sort of workflow definition, which
defines how an incident should progress once it has been raised, and also a history of the
progression and the changes made to that incident. Incident Management modules provide
important management information and can highlight areas of a system that are causing
particular problems during testing.
Some Test Management tools have the ability to interface to test automation tools so that the
scripts that have been created can be automatically executed. Test results from the automation
tools can also be pulled back into the Management tools once the execution is complete.
Examples of Test Management tools are T-Plan Professional from T-Plan Limited
(http://www.t-plan.co.uk) and QADirector from Compuware
(http://www.compuware.com/products/qacenter/qadirector/).
Types of CAST Tool Review Questions
2. During functional testing of a program, an error in the code may be found using:
A comparison tool
A test harness
A debugging tool
3. Which of the following statements about a test management tool are true and which
are false?
i. Usually has an incident management module
ii. Generates test cases from specifications
iii. Is a repository for test logs
iv. Is a repository for test data
v. Provides automatic support for verification and validation of requirements
vi. Compares actual to expected results
i , iii, v, are true ii, iv, vi are false
Answers: b, c and c
Section Introduction
In this section we will cover:
6.3.1
There are many benefits of using CAST tools. The test automation tools can automate the
running of tests, making the testing process quicker and easier, and allowing the resources that
are available to be focussed on other areas of testing that cannot be automated or require a
higher level of expertise. Test Management tools provide management reporting capabilities,
providing information such as the percentage of test cases executed, the number of test cases
that have passed and failed (this can be further refined into passes and failures for each level of
risk/priority), details of test cases written and printed 'running orders' for test execution
schedules.
Tests that are executed via a CAST tool will always be the same - data cannot be transposed or
incorrectly typed when being entered, and the same keys will always be entered. Therefore you
can be confident that the testing being performed is consistent and correct - there is no room for
'user error'! CAST tools also ensure that when tests are repeated, they are exactly the same as
the last time the test was run - again, there is no room for user error.
CAST tools can be invaluable when performing regression testing - the tests involved may be
very lengthy and laborious, and an automation tool can considerably reduce the human effort
required. CAST tools are also of great benefit when conducting performance testing. If several
hundred people are required to simulate load testing it may not always be a practical proposition
- a tool can be used to simulate this number of users.
Test Management tools also provide a repository for testing material such as functional
decompositions and test scripts, and once defined they can be re-used many times, making them
an 'asset' of the company.
Different types of CAST tools can give a clear indication of the quality of the software - from
static and dynamic analysis tools that can highlight problems with program code through to Test
Management tools that can highlight problem areas of a system through analysis of Incidents
raised.
6.3.2
CAST tools can greatly assist in the testing process. However, as can be seen from the number of
different types of tools that we have covered in this course, there is not one single or simple
solution that can be deployed. Several tools may be required to address different testing
requirements.
By using CAST tools, it may be felt that the testing of a system is being comprehensively
addressed. However, at the beginning of every project the testing activities must be clearly
identified and any areas not covered by the testing tools must be identified and dealt with.
With automation tools, as opposed to management tools, the information supplied must be of a
good quality - for example a capture-replay tool will only test the parts of a system you have
'captured' - if you have not captured it, you will not test it!
CAST tools can significantly decrease testing timescales. This must be appreciated when
allocating resources for a project team. Sufficient developers must be available at the right time
to be able to deal with the problems that are found through testing, otherwise the time gained in
the testing process will be lost when trying to resolve the issues that have been found.
CAST tools cannot make up for bad testing practices. The use of a Test Management tool will help
to improve the testing process, but if, for example, there is not a clear understanding of what
should be tested during regression testing then a tool will not resolve this issue. Also, if the test
process is evolving in a company, practices may change frequently and the use of tools during
this period may not be clearly defined or understood.
6.3.3
1. Which of the following are benefits that may be accrued from the use of a CAST tool?
Improve poor testing practices
Answers: c
In this section we will cover:
6.4.1
Before embarking on the purchase of a CAST tool, some important questions need to be asked.
First, and most importantly, the company must determine whether it is ready to use CAST tools.
As stated earlier, there is little point using CAST tools on top of an immature or weak test
process - the process must be addressed first.
As there are so many different types of CAST tool available, the company must examine which
areas of testing would benefit most from the use of a tool. If the company has many
well-established systems that are changed quite frequently, then a capture-replay tool may give
the most benefit.
There may be different areas of the company that all feel they would benefit from the use of a
CAST tool, from developers through to user acceptance testers. Some sort of prioritisation needs
to be applied to their requirements - this could be on a cost or resource basis.
6.4.2
Once it has been decided what type of CAST tool is required, the actual requirements need to be
specified in much more detail.
The requirements should list everything that is needed from a tool. They may include a checklist
detailing the exact requirements - this can then be applied to each product in turn to determine
its suitability or otherwise.
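The checklist idea above can be sketched as a simple filter: each candidate product is checked against every requirement in turn, and only those meeting all of them survive. The tool names and requirement labels here are invented purely for illustration.

```python
# Hypothetical checklist of must-have requirements.
requirements = ["runs_on_windows", "oracle_support", "scripting_language"]

# Hypothetical candidate tools and the requirements each one meets.
candidates = {
    "ToolA": {"runs_on_windows", "oracle_support", "scripting_language"},
    "ToolB": {"runs_on_windows", "scripting_language"},
    "ToolC": {"oracle_support"},
}

def shortlist(candidates, requirements):
    """Keep only the tools that satisfy every requirement on the checklist."""
    return [name for name, features in candidates.items()
            if all(req in features for req in requirements)]

print(shortlist(candidates, requirements))  # → ['ToolA']
```

In practice a checklist would distinguish mandatory from desirable requirements, but even this crude filter shows how a long list of products can be narrowed down systematically.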
There are many CAST tools available for each type of testing, and it can be a slow and laborious
task to identify those that meet your requirements. To help with this task, research can be
conducted via the Web; there are also Tools Fairs that take place frequently - these can be a
useful way of seeing the products 'in action' without having to have the tool vendors visit your site.
6.4.3
Once a short list has been drawn up, some more detailed research can then be undertaken.
For example, will the tool run on all the platforms you need it to? Products can be quite specific
about the versions of software they interact with - for example, Windows NT only (not Windows
98 or 2000), or Oracle Version 8 onwards but not Version 7.
One very important factor when considering a tool is the cost involved. You have to understand
how many people will need to use the tool - some tool vendors have a sliding scale of licensing
costs depending on the number of licences bought. What are the maintenance costs involved? Do
you have to pay to upgrade to new releases of the product? Do you have to buy other
complementary tools as well as the one you want? All of these points have to be considered, not
just the base cost of the product.
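The point about looking beyond the base price can be illustrated with a rough total-cost-of-ownership calculation. All figures below are invented, and the sliding licence scale is an assumption for the sake of the example.

```python
def licence_cost(num_licences):
    """Hypothetical sliding scale: the unit price drops as volume rises."""
    if num_licences <= 5:
        unit = 1000
    elif num_licences <= 20:
        unit = 800
    else:
        unit = 600
    return num_licences * unit

def total_cost(num_licences, years, annual_maintenance, training_per_user):
    """Licences plus ongoing maintenance and training - not just the list price."""
    return (licence_cost(num_licences)
            + years * annual_maintenance
            + num_licences * training_per_user)

print(total_cost(10, years=3, annual_maintenance=2000, training_per_user=500))
# → 19000  (8000 licences + 6000 maintenance + 5000 training)
```

Here the base licence price is well under half of the three-year cost, which is exactly why maintenance, upgrades and training need to be in the comparison from the start.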
You may already have other tools in use, so whether a candidate tool interfaces with them can be
an important consideration. For example, does your Test Management tool interface with the test
automation tools you are investigating?
Another important factor to consider is what skills are required to use the tool. It may be that the
product is very intuitive, easy to use and people can be trained very easily (and therefore at a
reasonable cost). It may also be that there is a great deal to learn about the tool and training is
lengthy and therefore expensive. Also, is it a tool that is widely used? Can you easily recruit
people if you need to, or will you need to undertake training whenever a new person joins the
team or company? Is this an important issue for you to consider?
Once this process has been undertaken, you should have a much clearer idea as to what products
are suitable for you and which ones are not.
6.4.4
The final stage of the tool selection process is to actually select a tool! From all the investigations
that have been undertaken, it should be possible to draw up a list of products that will satisfy
your requirements.
A detailed questionnaire should then be drawn up - this should contain questions about how you
specifically want to use the product, its environmental requirements, costs, support and help line
facilities, future releases - everything you feel is relevant to your selection process.
At this point the tool vendors should be aware that you are interested in their product, and they
should be willing to come on site and give a demonstration. Some will also undertake a 'Proof of
Concept', where they show you how your data or information will fit into the tool in question.
You should be able to have a copy of the product for evaluation purposes. This evaluation activity
should involve looking at the requirements to see if the tool meets them and producing a report
on the findings.
From the evaluations undertaken and the responses to the questionnaire, an informed decision
can then be made as to which tool to purchase.
Tool implementation
6.4.5
Once a tool has been identified and purchased it is best to use it on a pilot project. It is not
recommended to implement a new tool on a large and/or very critical project - there are bound
to be issues early on that need resolving and if there is pressure to deliver a project the tool may
be unfairly dismissed as inappropriate or unsuitable.
For the pilot project, there may be training required - this should be undertaken before the tool is
used.
The objectives of the pilot should be identified and understood. These objectives should include
assessing what it was like to use the tool in practice and whether it provided any benefits. There
may be changes required to the current testing process - these need to be stated and understood.
Finally, the project should assess the benefits of using the tool, as well as the costs involved in
implementing it for all users.
It could be that at this stage in the process the tool is deemed unsuitable, and it is not
implemented anywhere else in the company. However, if the selection process has been rigorous
enough, this situation should not arise, and a strategy can be drawn up to roll out the tool to all
target users.
1. CAST readiness is
Obtaining the budget to buy a CAST tool
Interesting features
Arrange demos
Answers: c, c and d
Lesson 6 Summary
In this lesson we have covered:
o What are CAST Tools?
o What are CASE Tools?
Lesson 6 Review
Provides run time information on an executing program
The project involves the development of a new system and there is a great deal
of testing to perform
The developer
A test harness
6. The implementation of a CAST tool is best approached by
Ensuring that all potential users are suitably trained in its use before it is
implemented
Answers: a, c, c, b, c and a
There are 40 questions in this Mock Exam. You should allow yourself 1 hour to do the Exam - the
pass mark is 25. This is the same as when you take the real exam.
Some tips:
Information Systems Examinations Board
Foundation Certificate
in
Software Testing
SAMPLE PAPER 1b
an error
a fault
a failure
a defect
3. IEEE 829 test plan documentation standard contains all of the following except
test items
test deliverables
test tasks
test specifications
5. Order numbers on a stock control system can range between 10000 and 99999
inclusive. Which of the following inputs might be a result of designing tests for only valid
equivalence classes and valid boundaries?
testing a system function using only the software required for that function
9. Which of the following is the main purpose of the integration strategy for integration
testing in the small?
12. Given the following code, which statement is true about the minimum number of
test cases required for full statement and branch coverage?
Read p
Read q
IF p+q > 100 THEN
Print "Large"
ENDIF
IF p > 50 THEN
Print "p Large"
ENDIF
testing that the components that comprise the system function together
requirements
documentation
test cases
improvements suggested by users
16. Which of the following items would not come under Configuration Management?
operating systems
test documentation
live data
memory leaks
LCSAJ
syntax testing
performed by an Independent Test Team
21. Given the following types of tool, which tools would typically be used by developers,
and which by an independent system test team?
i. static analysis
ii. performance testing
iii. test management
iv. dynamic analysis
developers would typically use i and iv; test team ii and iii
developers would typically use ii and iv; test team i and iii
black box test design techniques all have an associated test measurement
technique
white box test design techniques all have an associated test measurement
technique
black box test measurement techniques all have an associated test design
technique
25. A typical commercial test execution tool would be able to perform all of the
following, EXCEPT:
re-testing ensures the original fault has been removed; regression testing
looks for unexpected side-effects
re-testing is done after faults are fixed; regression testing is done earlier
28. What type of review requires formal entry and exit criteria, including metrics:
walkthrough
inspection
management review
component testing
user acceptance testing
maintenance testing
32. Which expression best matches the following characteristics of the review processes:
s) inspection
t) peer review
u) informal review
v) walkthrough
s = 4 and 5, t = 3, u = 2, v = 1
s = 4, t = 3, u = 2 and 5, v = 1
s = 1 and 5, t = 3, u = 2, v = 4
s = 4 and 5, t = 1, u= 2, v = 3
usability testing
expected outcomes are derived from a specification, not from the code
ISO/IEC 12207
BS 7925-1
ANSI/IEEE 829
ANSI/IEEE 729
is not important
37. Which of the following is NOT included in the Test Plan document of the Test
Documentation Standard?
quality plans
no, because they are normally applied before testing
recovery testing
by inexperienced testers
Answers: c, c, d, d, c, a, b, a, c, d, b, b, d, c, d, c, b, c, b, a, a, d, a, d, a, a, c, b, d, c, b, a, d, a, b, b, c, c, b, and
a