
GUI Path Oriented Test Generation Algorithms

Izzat Alsmadi
Kenneth Magel
Department of Computer Science
North Dakota State University
{izzat.alsmadi, Kenneth.magel}@ndsu.edu

ABSTRACT
Testing software manually is a labor intensive process. Efficient automated testing can significantly reduce the overall cost of software development and maintenance. Graphical User Interface (GUI) code has characteristics that distinguish it from the rest of the project code. Generating test cases from the GUI code requires different algorithms from those usually applied in test case generation. We developed several automated GUI test generation algorithms that need no user involvement, except in defining the test inputs and preconditions, and that ensure test adequacy in the generated test cases. The test cases are generated from an XML GUI model, or tree, that represents the GUI structure. This work contributes to the goal of developing a fully automated GUI testing framework.

General Terms
User interface, Automatic test case generation.

Keywords
GUI, Test Automation, GUI Testing, Test Case Generation.

1. INTRODUCTION
Testing tries to answer the following questions (3): Does the system do what it should do? Does its behavior comply with its functional specifications (conformance testing)? How fast can the system perform its tasks (performance testing)? How does the system react if its environment does not behave as expected (robustness testing)? And how long can we rely on the correct functioning of the system (reliability testing)?
User interfaces have steadily grown richer, more interactive, and more sophisticated over time. In many applications, one of the major improvements suggested for new releases is a better user interface.
Test cases can be generated from the requirements, the design, or the actual GUI implementation. Although those three are expected to be consistent and related, they have different levels of abstraction. Requirements and design are usually at too high a level of abstraction for test cases to be generated from them directly. On the other hand, generating test cases from the GUI implementation model has to wait until the GUI is implemented, which usually happens late in development. Delaying GUI testing should not cause any problems, given that a tool automates the generation and execution process.
We designed a tool in C# that uses reflection to serialize the GUI control components, or widgets.
Certain control properties, those relevant to the user interface, are selected for serialization. The application then uses the XML file that is produced to build the GUI tree, or event flow graph, and to generate the test cases. Test case generation takes the tree structure into consideration, and test cases are selected with the effectiveness of the resulting test suite in mind. We will study the fault detection effectiveness of our test case selections.
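As an illustration of how the produced XML could be turned back into a tree for test generation, here is a small sketch under the same assumptions as above; the GuiNode type and the edge-counting helper are inventions for this sketch, not the paper's data structures.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

// A minimal node of the GUI tree built from the serialized XML model.
class GuiNode
{
    public string Name;
    public List<GuiNode> Children = new List<GuiNode>();

    // Build the tree recursively from the <Control> elements written by the serializer.
    public static GuiNode FromXml(XElement element)
    {
        var node = new GuiNode { Name = (string)element.Attribute("Name") ?? "unnamed" };
        node.Children.AddRange(element.Elements("Control").Select(FromXml));
        return node;
    }

    // Each parent-to-child link is one edge of the GUI model; the generation
    // algorithms and the effectiveness measure discussed later reason about these edges.
    public int CountEdges()
    {
        return Children.Count + Children.Sum(c => c.CountEdges());
    }
}
```

Loading the model with something like GuiNode.FromXml(XDocument.Load("gui.xml").Root.Element("Control")) then yields the tree that the generation algorithms in Section 3 walk.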
The algorithms developed to generate test cases from the GUI are novel. Two factors shaped the suggested algorithms: the first is generating valid test scenarios in which each edge is a legal edge in the actual GUI model; the second is ensuring a certain level of effectiveness in the generated test scenarios.
The next section introduces the related work. Section 3 lists the goals of this research and describes the work done toward those goals. Section 4 summarizes the developed GUI Auto tool. Section 5 presents the conclusion and future work.

2. RELATED WORK
Software testing is about checking the correctness of the system and confirming that the implementation conforms to the specifications. Conformance testing checks whether a black box Implementation Under Test (IUT) behaves correctly with respect to its specification. The work in this paper is related to test case generation algorithms, automatic test case generation, and GUI test case generation in software testing. Several approaches have been proposed for test case generation, mainly random, path-oriented, goal-oriented, and intelligent approaches (5), as well as domain testing, which includes equivalence partitioning, boundary-value testing, and the category-partition method (7). Path-oriented techniques generally use control flow information to identify a set of paths to be covered and generate the appropriate test cases for these paths. These techniques can further be classified as static and dynamic. Static techniques are often based on symbolic execution, whereas dynamic techniques obtain the necessary data by executing the program under test. Goal-oriented techniques identify test cases covering a selected goal, such as a statement or branch, irrespective of the path taken. Intelligent techniques of automated test case generation rely on complex computations to identify test cases. The real challenge in test generation is generating test cases that are capable of detecting faults in the IUT. We list below some of the work related to this paper. Goga (2) introduces an algorithm based on a probabilistic approach; it suggests combining test generation and test execution in one phase. Tretmans (3) studied test case generation algorithms for implementations that communicate via inputs and outputs, based on specifications using Labelled Transition Systems (LTS). In the MulSaw project (4), the team uses two complementary frameworks, TestEra and Korat, for specification-based test automation. To test a method, TestEra and Korat automatically generate all non-isomorphic test cases from the method's pre-condition and check its correctness using its post-condition as a test oracle. Several papers are related to that project. We have a similar approach that focuses on GUI testing: as explained earlier, one of the goals of our automatic generation of test scenarios is to produce non-isomorphic test scenarios.
We also check the results of the tests by comparing the output results with event tables generated from the specification; those event tables are similar to pre- and post-condition event tables. Williams (6) presented an overview of model-based software testing using UML. Prior to test case generation, we develop an XML model tree that represents the actual GUI as serialized from the implementation; test cases are then generated from the XML model. Turner and Robson (8) suggested a new technique for the validation of object-oriented programs which emphasizes the interaction between the features and the object's state. Each feature is considered a mapping from its starting or input states to its resultant or output states, affected by any stimuli. Tse, Chan, and Chen (9, 11) introduce normal forms for an axiom-based test case selection strategy for object-oriented programs, and equivalent sequences of operations as an integration approach for object-oriented test case generation. Orso and Silva (10) introduce some of the challenges that object-oriented technologies added to the process of software testing. Rajanna (12) studies the impact and usefulness of automated software testing tools during the maintenance phase of a software product, citing the pragmatic experience gained from the maintenance of a critical, active, and very large commercial software product as a case study. It demonstrates that most of the error patterns reported during maintenance are due to inadequate test coverage, which is often the outcome of manual testing, by relating the error patterns to the capability of various test data generators at detecting them. Stanford (13) is an example of using formal methods to define specifications through an object specification tool that checks for properties like correctness; it is hoped that the application produced by that project will form the groundwork for another tool capable of producing small, adequate test sets that can successfully verify that an implementation of the specification is correct.
In the specific area of GUI test case generation, Memon (14) has several papers about automatically generating test cases from the GUI using an AI planner. The process is not totally automatic: it requires the user to set the current and goal states, and the AI planner then finds the best way to reach the goal states given the current state. Another issue with this research is that it does not address the problem of the huge number of states that a GUI in even a small application can have, and hence it may generate too many test cases. The idea of defining the GUI state as the collective state of each control, so that a change of a single property in one control leads to a new state, is valid, but it is also the reason for producing the huge number of possible GUI states. In our research we considered an alternative definition of a GUI state. By generating an XML tree that represents the GUI structure, we can define the GUI state as embedded in this tree. This means that if the tree structure changes, which is something that can be checked automatically, we consider this a GUI state change. Although we generate this tree dynamically at run time, so that any change in the GUI is reflected in the current tree, this definition can still be helpful in cases where we want to trigger some events (like regression testing) if the GUI state changes.
Mikkolainen (15) discusses some issues related to GUI test automation challenges. Ames and Jie (16) present the concept of critical path testing for GUI test case generation. They define the critical paths as those paths that have "repeated" edges or events across many test cases. Their approach utilizes test cases created manually earlier through a capture/playback tool. Although this is expected to be an effective way of defining critical paths, it is not calculated automatically. As an alternative to defining critical paths at run time, in one of our algorithms we define static critical paths through the use of metric weights. The metric weight is calculated by counting all the children, or grandchildren, of a control. Other ways of defining critical paths are measuring delay times during execution, or manually locating critical paths from the specification. From the specification, a critical path can be a path that calls an external API, or that saves to or reads from an external file.

3. GOALS AND APPROACHES
The goals of this work are to produce GUI test generation algorithms and critical path selection techniques. The following is a summary of the progress completed in this area.
GUI Test Generation Algorithms:
As explained earlier, the algorithms created are heuristics. The goal is to generate unique test cases that represent legal test scenarios in the GUI tree. (A code sketch of two of these algorithms follows the list below.)
1. Random legal sequences. In this algorithm, the tool selects, for example, a random first level control. It then randomly selects a child of that control, and so on. For example, in a Notepad-like application, if the Notepad main menu is randomly selected as the first level, its children will be File, Edit, Format, View, and Help. If the File control is then randomly selected from those children, the children of File (Save, SaveAs, Exit, Close, Open, Print, etc.) will be the valid next level controls to select from, and so on.
2. Random selection excluding the previously selected control. In this algorithm, controls are randomly selected as in the first algorithm. The only difference is that if the current control was selected previously (in the test case just before this one), this control is excluded.
3. Excluding previously generated scenarios. Rather than excluding a previously selected control, this algorithm excludes all previously generated test cases or scenarios and verifies that each newly generated test case or scenario is unique. A scenario is generated, and if the test suite generated so far already contains the new scenario, it is excluded and the process of generating a new scenario starts again. With this algorithm, the application may stop before reaching the requested number of test cases if there are no more unique test scenarios to create.
4. Weight selection technique. Rather than giving the same selection probability to all candidate children, as in all the previous algorithms, in this algorithm any child that is randomly selected from the current node has its weight, or probability of being selected next time, reduced by a certain percentage. If the same control or node is selected again, its weight is reduced further, and so on.
Both algorithms 3 and 4 are designed to ensure test adequacy in the generated test suite. We are in the process of developing test execution and verification methods to measure the effectiveness of the selected algorithms.
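To make the heuristics concrete, below is a small, hedged C# sketch, assuming the GuiNode tree introduced earlier, of algorithm 1 (a random walk over legal edges) and algorithm 4 (the same walk with per-node weights that decay each time a node is chosen). The 50 percent decay and the HashSet-based uniqueness check (in the spirit of algorithm 3) are illustrative choices, not the paper's exact parameters.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class ScenarioGenerator
{
    readonly Random random = new Random();
    readonly Dictionary<GuiNode, double> weights = new Dictionary<GuiNode, double>();
    readonly HashSet<string> generated = new HashSet<string>();

    // Algorithms 1 and 4: walk legal edges from the root, picking one child per level.
    public List<GuiNode> GenerateScenario(GuiNode root, bool useWeights)
    {
        var scenario = new List<GuiNode> { root };
        var current = root;
        while (current.Children.Count > 0)
        {
            current = useWeights ? PickWeighted(current.Children)
                                 : current.Children[random.Next(current.Children.Count)];
            scenario.Add(current);
            if (useWeights)
                weights[current] = Weight(current) * 0.5;   // decay on selection (assumed 50%)
        }
        return scenario;
    }

    // Algorithm 3 flavour: returns false if this exact scenario was produced before.
    public bool IsNewScenario(List<GuiNode> scenario) =>
        generated.Add(string.Join(">", scenario.Select(n => n.Name)));

    double Weight(GuiNode node) => weights.TryGetValue(node, out var w) ? w : 1.0;

    // Roulette-wheel selection over the (possibly decayed) child weights.
    GuiNode PickWeighted(List<GuiNode> children)
    {
        double total = children.Sum(Weight);
        double pick = random.NextDouble() * total;
        foreach (var child in children)
        {
            pick -= Weight(child);
            if (pick <= 0) return child;
        }
        return children[children.Count - 1];
    }
}
```

Generating a suite is then a loop that calls GenerateScenario repeatedly, keeps only scenarios for which IsNewScenario returns true, and stops either when the requested number of test cases is reached or when no new unique scenario can be found, mirroring the stopping condition described for algorithm 3.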
We also define a test suite effectiveness measure that can be calculated automatically in the application in order to measure the quality of the suggested algorithms. Test suite effectiveness is defined as the ratio of the total number of edges discovered to the actual total number of edges in the Application Under Test (AUT).
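As a hedged illustration of that ratio, the helper below assumes the GuiNode tree and the scenario lists from the earlier sketches; it is not the tool's actual measurement code.

```csharp
using System.Collections.Generic;

static class SuiteEffectiveness
{
    // Effectiveness = discovered edges / total edges in the AUT, as a percentage.
    public static double Compute(GuiNode root, IEnumerable<List<GuiNode>> scenarios)
    {
        var discovered = new HashSet<(string Parent, string Child)>();
        foreach (var scenario in scenarios)
            for (int i = 0; i + 1 < scenario.Count; i++)
                discovered.Add((scenario[i].Name, scenario[i + 1].Name));

        return 100.0 * discovered.Count / root.CountEdges();
    }
}
```

For instance, a suite covering 150 of an AUT's 200 edges would score 75 percent on this measure (an invented illustration, not a number from the paper).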
Figure 1 shows the test effectiveness of the four algorithms explained earlier.

[Figure 1: a chart titled "Test Generation Effectiveness" plotting each algorithm's percentage effectiveness (series Effec. AI1 through AI4, y-axis 0 to 100) against the number of test cases generated (x-axis from 10 to 40,000).]

Figure 1: Test suite effectiveness for the 4 algorithms explained earlier.

The figure shows that the last two algorithms may reach about 100% effectiveness at around 300 generated test cases.

Critical Path Selection:
Here are some examples of critical paths:
1. An external API or command line interface accessing an application.
2. Paths that occur in many tests (in a regression testing database).
3. The most time consuming paths.
We developed two algorithms to calculate critical paths automatically in the AUT. Part of the future work is to calculate the effectiveness of the suggested algorithms.
1. Critical path selection through node weights. In this approach, each control has a metric weight that represents the count of all its children. For example, if the children of File are Save, SaveAs, Close, Exit, Open, Page Setup, and Print, then its metric weight is 7 (an alternative is to count all children and grandchildren). Then, for each generated scenario, the weight of that scenario is calculated as the sum of the weights of its individual selected controls. Figure 2 is a sample output listing the weights of some test scenarios for an AUT.

Test Sequence                                              Weight
NOTEPADMAINFILEPRINTPRINTTABPRINTBUTTON2                   28
NOTEPADMAINFILEPRINTPRINTTABPRINTLABEL3                    28
NOTEPADMAINFILEPRINTPRINTTABPRINTLABEL1                    28
NOTEPADMAINFILEPRINTPRINTTABPRINTBUTTON1                   28
NOTEPADMAINFILEPRINTPRINTTAB                               28
NOTEPADMAINFILEPRINT                                       28
NOTEPADMAINPAGESETUPPAGESETUPGROUPBOX2PAGESETUPLABEL7      34
NOTEPADMAINPAGESETUPPAGESETUPGROUPBOX2PAGESETUPLABEL6      34
NOTEPADMAINPAGESETUPPAGESETUPGROUPBOX2PAGESETUPTEXTBOX3    34
NOTEPADMAINPAGESETUPPAGESETUPGROUPBOX2PAGESETUPLABEL8      34
NOTEPADMAINPAGESETUPPAGESETUPGROUPBOX2PAGESETUPTEXTBOX1    34
NOTEPADMAINPAGESETUPPAGESETUPGROUPBOX2PAGESETUPTEXTBOX4    34
NOTEPADMAINPAGESETUPPAGESETUPGROUPBOX2PAGESETUPTEXTBOX2    34
Figure 2: Test scenario weights.

The algorithm then randomly selects one of the scenarios that share the same weight value. An experiment should be done to test whether scenarios that have the same weight can be represented by one test case or not.
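A hedged sketch of this weight computation, again assuming the GuiNode tree from the earlier sketches (counting direct children only, which is one of the two options the text mentions):

```csharp
using System.Collections.Generic;
using System.Linq;

static class CriticalPathWeights
{
    // Metric weight of a control: the number of its children
    // (the alternative mentioned in the text would count grandchildren as well).
    public static int NodeWeight(GuiNode node) => node.Children.Count;

    // Scenario weight: the sum of the weights of its individual selected controls.
    public static int ScenarioWeight(IEnumerable<GuiNode> scenario) => scenario.Sum(NodeWeight);
}
```

Grouping the generated scenarios by ScenarioWeight and keeping one representative per weight value would be one way to obtain the kind of reduction percentages reported in Figure 3.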
One alternative is to set a minimum weight required for selecting a test scenario and then generate all test scenarios whose weight is higher than the chosen cut-off. The two factors that affect the critical path weight are the number of nodes the path consists of and the weight of each edge. This technique can help us find the longest, or deepest, paths in an application. For example, all the weight values of 40 in the Notepad application belong to the node that has Page Setup; this is because it is the deepest one. Figure 3 is a table that shows the reduction percentage of the selected scenarios using this selection (given that scenarios with the same weight can be represented by a single one, as explained earlier).

AUT            Total number of    Reduction percentage
               test scenarios     (100 - selected scenarios/all scenarios) %
Notepad        174                94.25
FP Analysis    28                 82.14
WordNet        8                  75
Gradient       153                92.81
GUI Controls   51                 88.23
Hover          10                 90
Figure 3: Weight algorithm reduction percentage.

2. Critical path level reduction. In this approach, we select a test scenario randomly, and then, for the lower levels of the selected controls, we exclude from selection all controls that share a parent with a selected control. This reduction should not exceed half of the tree depth; for example, if the depth of the tree is 4 levels, we exclude controls only from levels 3 and 4. We assume that each test scenario starts from the main entry point and that three controls is the minimum required for a test scenario (like Notepad - File - Exit). We select five test scenarios, one after another, using the same reduction process described above (a code sketch of this reduction follows Figure 4). Figure 4 is a sample output for measuring test case reduction using this algorithm; the five resulting scenarios of each run are listed along with their total reduction.

Test Scenarios                                              Total percent of
                                                            test reduction %
NOTEPADMAIN, PRINTER, PRINTERBUTTON1,,,
NOTEPADMAIN,SAVE,SAVELABEL7,,
NOTEPADMAIN,EDIT,FIND,TABCONTROL1,TABFIND,FINDTABBTNNEXT
NOTEPADMAIN,FILE,PRINT,PRINTTAB,PRINTLABEL7,
NOTEPADMAIN,SAVE,SAVELABEL5                                 65.1

NOTEPADMAIN,FILE,PRINT,PRINTTAB,PRINTLISTBOX1
NOTEPADMAIN,FONT,FONTLABEL2
NOTEPADMAIN,HELPTOPICFORM,HELPTOPICS,SEARCH,BUTTON1
NOTEPADMAIN,FONT,FONTTEXTBOX2,,
NOTEPADMAIN, PRINTER, PRINTERBUTTON2,,,                     41.67

NOTEPADMAIN,FILE,PRINT,PRINTTAB,PRINTGROUPBOX1
NOTEPADMAIN, PAGESETUP,PRINTER,
NOTEPADMAIN,FONT,FONTLISTBOX2,,
NOTEPADMAIN,OPEN,OPENFILELABEL4,,
NOTEPADMAIN,SAVEAS,SAVEFILECOMBOBOX2,                       51.56
Figure 4: Level reduction sample test results.
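The paper does not give code for the level reduction; the following is a rough, hypothetical sketch of one reading of the rule: after a scenario is chosen, controls in the lower half of the tree's levels that share a parent with a chosen control are excluded from the subsequent selections. The GuiNode type is the one assumed in the earlier sketches.

```csharp
using System.Collections.Generic;
using System.Linq;

static class LevelReduction
{
    // Collect the controls to exclude from later selections: siblings of the chosen
    // controls, but only at levels deeper than half of the tree depth.
    public static HashSet<GuiNode> SiblingsToExclude(List<GuiNode> chosenScenario, int treeDepth)
    {
        var excluded = new HashSet<GuiNode>();
        int firstReducedLevel = treeDepth / 2 + 1;   // assumed reading of "half of the tree depth"

        // Level 0 is the main entry point; the control at index i of the scenario sits at level i.
        for (int level = firstReducedLevel; level < chosenScenario.Count; level++)
        {
            var parent = chosenScenario[level - 1];
            var chosen = chosenScenario[level];
            foreach (var sibling in parent.Children.Where(c => c != chosen))
                excluded.Add(sibling);
        }
        return excluded;
    }
}
```

A generator would call this after each of the five selections and skip the excluded controls when walking the tree for the next scenario; for a tree of depth 4, the exclusions apply at levels 3 and 4, matching the example above.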
4. GUI AUTO: THE DEVELOPED GUI TEST AUTOMATION FRAMEWORK TOOL
We are in the process of developing GUI Auto as an implementation of the suggested framework. In its first stage, the GUI Auto tool generates an XML file from the assembly of the AUT. It captures the GUI controls and their relations to each other.
It also captures selected properties of those controls that are relevant to the GUI. The generated XML file is then used to build a tree model that represents the tested application's user interface. Several test case generation algorithms are used to generate test cases automatically from the XML model. Test case selection and prioritization are developed to evaluate test adequacy when generating a certain number of test cases. Test execution can be triggered automatically to execute the output of any test case generation algorithm. We are currently building the test results verification process to compare the output of the executed test suites with the expected results. The generated files are in XML or comma-delimited formats that can be reused on different applications. A recent version of the tool can be found at http://www.cs.ndsu.nodak.edu/~alsmadi/GUI_Testing_Tool/.

5. CONCLUSION AND FUTURE WORK
In this paper we explained several GUI test generation algorithms and critical path test selection techniques. We studied test effectiveness statically by measuring the discovered parts relative to the total ones. Future work includes developing test execution and verification to measure the overall fault detection effectiveness of the generated test cases. Measuring the performance of the GUI test case generation algorithms is another proposed future track.

6. REFERENCES
1. Pettichord, Bret. Homebrew Test Automation. ThoughtWorks. Sep. 2004. www.io.com/~wazmo/papers/homebrew_test_automation_200409.pdf.
2. Goga, N. A Probabilistic Coverage for On-the-fly Test Generation Algorithms. Jan. 2003. fmt.cs.utwente.nl/publications/files/398_covprob.ps.gz.
3. Tretmans, Jan. Test Generation with Inputs, Outputs, and Quiescence. TACAS 1996: 127-146.
4. Software Design Group, MIT Computer Science and Artificial Intelligence Laboratory. 2006. http://sdg.csail.mit.edu/index.html.
5. Prasanna, M., S.N. Sivanandam, R. Venkatesan, and R. Sundarrajan. A Survey on Automatic Test Case Generation. Academic Open Internet Journal. Vol. 15. 2005.
6. Williams, Clay. Software Testing and the UML. ISSRE99. 1999. http://www.chillarege.com/fastabstracts/issre99/.
7. Beizer, Boris. Software Testing Techniques. Second Edition. New York, Van Nostrand Reinhold, 1990.
8. Turner, C.D., and D.J. Robson. The State-based Testing of Object-Oriented Programs. Proceedings of the 1993 IEEE Conference on Software Maintenance (CSM-93), Montreal, Quebec, Canada, Sep. 1993.
9. Tse, T.H., F.T. Chan, and H.Y. Chen. An Axiom-Based Test Case Selection Strategy for Object-Oriented Programs. University of Hong Kong, Hong Kong. 1994.
10. Orso, Alessandro, and Sergio Silva. Open Issues and Research Directions in Object-Oriented Testing. Italy. AQUIS98.
11. Tse, T.H., F.T. Chan, and H.Y. Chen. In Black and White: An Integrated Approach to Object-Oriented Program Testing. University of Hong Kong, Hong Kong. 1996.
12. Rajanna, V. Automated Software Testing Tools and Their Impact on Software Maintenance - An Experience. Tata Consultancy Services.
13. Stanford, Matthew. Object Specification Tool Using VTL. Master's dissertation. University of Sheffield. 2002.
14. Memon, Atif. Hierarchical GUI Test Case Generation Using Automated Planning. IEEE Transactions on Software Engineering. Vol. 27. 2001.
15. Mikkolainen, Markus. Automated Graphical User Interface Testing. 2006. www.cs.helsinki.fi/u/paakki/mikkolainen.pdf.
16. Ames, Alexander K., and Haward Jie. Critical Paths for GUI Regression Testing. www.cse.ucsc.edu/~sasha/proj/gui_testing.pdf.
