
Table of Contents

1. Introduction
2. Challenges of mobile application testing
3. Software Complexity/Network Challenges
4. Testing methods and Guidelines for testing
   Methods of mobile application testing
   Unit Testing
   Integration Testing
   System Testing
   Regression Testing
   Compatibility Testing
   Performance Testing & Stress Testing
   Black Box Testing/Functional Testing
   White Box Testing/Structural Testing
   UI Testing (User Interface)
5. JaBUTi/ME: White Box Testing
6. List of available testing tools
7. Conclusion

White box testing uses the internal mechanism of a system to create test cases, using calculated paths to discover unidentified bugs within the system. This type of testing tries to enforce the quality of the software system. White box testing is a cost-effective method and is often compared closely with black box testing. The two approaches serve the same overall purpose, although which one is more efficient and effective is widely debated. Black box testing concentrates mainly on the outputs of the system to identify bugs, but it can only act once a malfunction has occurred. As explained above, white box testing is used to ensure that the code is complete and meets the required standard of the software mechanism. Statistics have shown that a complete and precise systematic test design will identify the majority of bugs within a system.

When looking at white box testing in more detail, it involves checking and ensuring that every program statement is error free. White box testing allows:

- Data processing tests
- Calculation correctness tests
- Software qualification tests
- Maintainability tests
- Reusability tests

1. Introduction
Mobile devices are evolving and becoming more complex, with a growing variety of features and functionalities. Many applications that were originally deployed as desktop applications or web applications are now being ported to mobile devices. In this thesis, a mobile application is defined as an application that runs on mobile devices and takes contextual information as input. Mobile applications are either pre-installed on phones during manufacture or downloaded from an application store or other mobile software distribution platforms. According to Keane, an IT services firm, mobile applications can be categorized into standalone and enterprise applications. Standalone applications reside on the device and do not interface with external systems. Enterprise applications must meet business standards: they are developed to perform transactions that are resource-intensive and that must meet requirements for maintenance, administration and security. Enterprise applications interface with external systems through the Wireless Application Protocol (WAP) or the Hypertext Transfer Protocol (HTTP). Although mobile applications have limited computing resources, they are expected to be as agile and reliable as traditional applications. One of the best ways to judge whether a mobile application is agile and reliable is mobile application testing.

2. Challenges of mobile application testing

Unlike traditional testing, mobile application testing requires special test cases and techniques. The wide variety of mobile technologies, platforms, networks and devices presents a challenge when developing efficient strategies to test mobile software. This section discusses the challenges that have to be considered when testing mobile applications, in comparison with traditional applications such as desktop applications. Although many traditional software testing practices can be applied to the testing of mobile applications, there are numerous technical issues specific to mobile applications that need to be considered. Guidelines and methods used in testing traditional applications may not be directly applicable to a mobile environment. Traditional applications such as desktop applications run on personal computers and workstations, so desktop application testing is focused on a specific environment.

Complete applications are tested in categories such as GUI, functionality, load, and backend. For a client-server application, two different components are tested: the application is loaded on the server machine while a client component executes on every client machine. Client-server applications are tested in categories such as GUI on both sides, functionality, load, client-server interaction, and backend. This environment is mostly used in intranet networks, where the number of clients and servers, as well as their locations, is known in the test scenario. When testing mobile applications, additional test cases should be considered. The test phase should be able to answer these questions:
- How much battery life does the application use? What good is a mobile device that has to be supplied with electricity just to power the application?
- How does the application function with limited or no network connectivity? Minimally the application should not crash; ideally the user should not even notice a difference.
- How fast is the application? Even with slower processors and networks, users still expect desktop speeds out of their mobile devices.
- How quickly can users navigate the application? With limited attention spans, mobile devices need to be highly intuitive.
- How much data will the application need? Will users without unlimited data plans, or devices without large internal storage, be able to use the application?
- Will peripheral devices affect the application? Whether or not the application uses peripheral devices, these devices affect the processes running in the background, in turn affecting the application.

3. Software Complexity/Network Challenges

In addition to hardware-based challenges, testers must handle the complexity of the software environment of mobile devices, starting with the available mobile operating systems. To make certain that an application will work properly on the same range of mobile devices, all current versions of iOS 4, Windows Mobile, Windows Phone 7, Symbian, Android and RIM BlackBerry must be addressed. In traditional application testing, covering the current versions of the Windows, Apple Macintosh and Linux operating systems is adequate to make certain that a desktop application will work properly on most common personal computers. The rapid changes of the handset market require that testing methods for the changing cast of operating systems are maintained. Many mobile applications are developed using RAD (rapid application development), in which multiple versions of the software are quickly developed and assessed by end users. This rapid-fire cycle of coding and re-coding makes it impossible to assess how each change affects the application's performance, stability or security. Just as mobile operating systems are constantly changing, so are the networks, protocols and other key elements of the infrastructures used by network providers. Carriers worldwide are upgrading their networks from 2G to 3G, and even to 4G with LTE (Long Term Evolution) networks. Internet traffic will be upgraded from IPv4 to IPv6 as well. Mobile network carriers provide various levels of bandwidth and use different methods to tunnel their own traffic into the TCP/IP protocol used by the Web, changing how applications transmit and receive data. Carriers also use different web proxies to define which web sites users can access and how those sites should be displayed on the devices. All of these differences can affect the stability, performance or security of a mobile application, and they must be tested to assure the end-user experience. Tests must be built and scripts executed in order to check the interaction between the handset and the application and among the application's components. In addition, applications must be tested for compatibility with the networks of the devices they might run on.

4. Testing methods and Guidelines for testing

Although the mobile application testing process is based on traditional testing, mobile devices have different testing characteristics that must be kept in mind when deciding which testing methods to use for validation. In this chapter, the testing methods used in mobile application testing are briefly listed and recommendations for optimum testing are given.

Methods of mobile application testing


Unit Testing: Unit testing consists of functional and reliability testing in an Engineering environment.
Test cases are written after coding. The purpose of unit testing is to find (and remove) as many errors in
the mobile software as possible. Unit testing is also referred to as Component Testing.
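
As a minimal illustration (not tied to any particular mobile framework; the component and its names are hypothetical), a unit test isolates a single component and checks it against expected values:

```python
import unittest

def roaming_charge(megabytes, rate_per_mb=0.05):
    """Hypothetical component under test: compute a data roaming charge."""
    if megabytes < 0:
        raise ValueError("data volume cannot be negative")
    return round(megabytes * rate_per_mb, 2)

class RoamingChargeTest(unittest.TestCase):
    def test_typical_usage(self):
        self.assertEqual(roaming_charge(100), 5.0)

    def test_zero_usage(self):
        self.assertEqual(roaming_charge(0), 0.0)

    def test_negative_usage_rejected(self):
        # Error handling is part of the component's contract.
        with self.assertRaises(ValueError):
            roaming_charge(-1)
```

Running the file with `python -m unittest` executes all three cases; each test exercises the component in isolation, which is what distinguishes unit testing from the integration and system levels below.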

Integration Testing: Integration testing is testing where modules are combined and tested as a group.
Integration testing is any type of software testing that seeks to verify the interfaces between components
(modules) against a software design. Integration testing follows unit testing and precedes system testing.
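
The idea can be sketched with two hypothetical modules (all names are illustrative): each module may pass its own unit tests, while the integration test targets the interface between them.

```python
def parse_settings(text):
    """Module A: parse "key=value" lines into a dict of strings."""
    settings = {}
    for line in text.splitlines():
        if "=" in line:
            key, value = line.split("=", 1)
            settings[key.strip()] = value.strip()
    return settings

def apply_brightness(settings, default=50):
    """Module B: consume module A's output and return a brightness level."""
    try:
        return int(settings.get("brightness", default))
    except ValueError:
        return default  # fall back if module A delivered a non-numeric value

# The integration test exercises the combined path A -> B:
config = parse_settings("brightness = 80\nvolume = 3")
level = apply_brightness(config)   # expected: 80
```

A defect such as module A returning strings where module B expects integers is exactly the kind of interface mismatch that unit tests of A and B in isolation would miss.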

System Testing: System testing is conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. During system testing the entire mobile application is tested against all the specifications defined for it. System testing falls within the scope of black box testing, and does not require any knowledge of the inner design of the code or logic.

Regression testing: Regression testing resembles functional testing. A regression test allows a consistent, repeatable validation of each new release of a mobile application. Regression testing ensures that reported product defects have been corrected for each new release and that no new quality problems were introduced in the maintenance process. Although regression testing can be performed manually, it is often automated to reduce time and resources.

Compatibility Testing: Compatibility testing ensures compatibility of an application with different native device features. Compatibility testing can be performed manually or can be driven by an automated functional or regression test suite.

Performance Testing & Stress Testing: Performance testing can be applied to understand mobile application scalability. This sort of testing is particularly useful for identifying performance bottlenecks in high-use applications. Performance testing generally involves an automated test suite, as this allows easy simulation of a variety of normal, peak, and exceptional load conditions. Examples of the focus of performance testing are the behavior of a mobile application under low resources such as memory, and the behavior of a mobile web site when many mobile users access it simultaneously.
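
A simulated load test can be sketched as follows (a conceptual illustration: the "request" here is only a timed sleep standing in for a real server call, and all names are invented):

```python
import statistics
import threading
import time

def fetch_page(latencies, lock):
    """Stand-in for one user's request to a mobile web site."""
    start = time.perf_counter()
    time.sleep(0.01)                      # simulated server response time
    elapsed = time.perf_counter() - start
    with lock:                            # latencies list is shared
        latencies.append(elapsed)

def load_test(concurrent_users=20):
    """Launch N simultaneous 'users' and summarize observed latencies."""
    latencies, lock = [], threading.Lock()
    threads = [threading.Thread(target=fetch_page, args=(latencies, lock))
               for _ in range(concurrent_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return {"users": concurrent_users,
            "mean_s": statistics.mean(latencies),
            "max_s": max(latencies)}

result = load_test()
```

In a real performance test the thread body would issue actual network requests, and the user count would be stepped up from normal to peak to exceptional load to locate the bottleneck.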

Black Box Testing/ Functional Testing: Functional testing verifies the core functionality of a mobile application against its specification and checks for correct performance. This can involve testing of the application's user interface, APIs, database management, security, installation, and networking. Black box testing, or functional testing, is testing without knowledge of the internal workings of the item being tested. Apart from white box testing, most tests are functional in nature.

White Box Testing/ Structural Testing: White box testing is testing based on an analysis of the internal workings and structure of a piece of software. White box testing includes techniques such as branch testing and path testing. It is also known as structural testing and glass box testing.
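
Branch testing can be illustrated with a toy example (all names are invented; real tools derive the branch set automatically rather than by hand-placed probes):

```python
def classify_signal(strength, visited):
    """Toy function under test; `visited` records which branch ids ran."""
    if strength <= 0:
        visited.add("B1")        # branch: no signal
        return "none"
    visited.add("B2")            # branch: signal present
    if strength < 50:
        visited.add("B3")        # branch: weak signal
        return "weak"
    visited.add("B4")            # branch: strong signal
    return "strong"

ALL_BRANCHES = {"B1", "B2", "B3", "B4"}

def branch_coverage(test_inputs):
    """Fraction of branches exercised by a given test set."""
    visited = set()
    for s in test_inputs:
        classify_signal(s, visited)
    return len(visited & ALL_BRANCHES) / len(ALL_BRANCHES)

# One input exercises only part of the structure...
partial = branch_coverage([80])        # covers B2 and B4 only -> 0.5
# ...while inputs chosen from the branch structure reach full coverage.
full = branch_coverage([-1, 10, 80])   # covers B1, B2, B3, B4 -> 1.0
```

This is the white box idea in miniature: the test inputs are chosen from the code's internal structure, not from its specification alone.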

UI Testing (User Interface): UI testing is the process of testing an application with a graphical user interface to ensure correct behavior and state of the UI. This includes verification of data handling, control flows, states, and the display of windows and dialogs. An important aspect of mobile application testing is to ensure consistency of the GUI over various devices.

5. JaBUTi/ME: White Box Testing

White box testing is a technique based on the internal structure of a given implementation, from which the test requirements are derived. In general, white box testing criteria use a representation known as the Control Flow Graph (CFG) to abstract the structure of the program, or of a part of the program such as a procedure or method. JaBUTi (Java Bytecode Understanding and Testing Tool) is a complete tool suite for understanding and testing Java programs and Java-based components. JaBUTi differs from other testing tools in that it performs static analysis directly on Java bytecode, not on the Java source code. In order to apply structural testing techniques, a tool is necessary to perform static analysis, code instrumentation, requirement computation and coverage analysis.
JaBUTi performs the following tasks:
- Static analysis: the program is parsed and the control- and data-flow information is abstracted in the form of def-use graphs. Other information, such as call graphs, inter-method control-flow graphs and data-flow data, can also be gathered.
- Requirement computation: based on the information collected by the static analysis, JaBUTi computes the set of testing requirements. Such requirements can be a set of nodes, edges or def-use associations.
- Instrumentation: in order to measure coverage, i.e., to know which requirements have been exercised by the test cases, it is necessary to know which pieces of the code have been executed. The most common way to do this is by instrumenting the original code. Instrumentation consists of inserting extra code into the original program, in such a way that the instrumented program produces the same results as the original program and, in addition, produces a list of traces reporting the program execution.
- Execution of the instrumented code: a test is performed using the instrumented code instead of the original program. If it behaves inappropriately, a fault is detected. Otherwise, a trace report is generated and the quality of the test set can be assessed based on that report.
- Coverage analysis: confronting the testing requirements with the paths executed by the program, the tool can compute how many of the requirements have been covered. It can also give an indication of how to improve the test set by showing which requirements have not been satisfied.
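
The instrument-execute-analyze cycle described above can be sketched in a few lines. This is a conceptual Python analogy, not JaBUTi's actual mechanism (which rewrites Java bytecode); the probe here is a decorator, and all names are invented:

```python
TRACE = []  # the trace report produced by the instrumented program

def instrumented(node_id):
    """Plays the role of inserted probe code: record the executed node
    into the trace without changing the program's result."""
    def wrap(fn):
        def inner(*args, **kwargs):
            TRACE.append(node_id)
            return fn(*args, **kwargs)
        return inner
    return wrap

@instrumented("login")
def login(user):
    return f"hello {user}"

@instrumented("logout")
def logout():
    return "bye"

# Testing requirements, e.g. a set of CFG nodes computed beforehand.
REQUIREMENTS = {"login", "logout"}

def coverage(trace, requirements):
    """Confront requirements with the executed trace."""
    covered = requirements & set(trace)
    return covered, requirements - covered

login("alice")                     # run a test through the instrumented code
covered, missing = coverage(TRACE, REQUIREMENTS)
# missing == {"logout"} tells the tester how to improve the test set
```

Note that the instrumented `login` still returns the same result as the original; only the side channel (the trace) is added, which is the key property of instrumentation.
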
The extension of JaBUTi that deals with the mobile environment is named JaBUTi/ME. With JaBUTi/ME it is possible to execute test cases in the real environment and still apply structural testing criteria with the supporting tool.

Fig. 8: Server-based testing with JaBUTi/ME

When instrumenting code, some parameters are defined in JaBUTi that can be chosen when testing a mobile application. These parameters are: the address of the test server, the identification name of the program being tested, the name of the file used for temporary storage of trace data, the minimum amount of available memory, and whether or not to keep the connection.
- The address of the test server determines the IP address and the port to which the connection for sending trace data should be established.
- The identification name of the program being tested allows the instrumented code to identify itself to the test server when sending trace data.
- The name of the file used for temporary storage of trace data is optional. If provided, it instructs the code inserted in the instrumented program to store all the trace data until the end of execution and then send it at once to the test server. The data is stored in a temporary file in the mobile device's file system.
- The minimum amount of available memory helps the tester determine the amount of memory that may be used by the instrumented code to store trace data before deciding to send it to the test server.
- Keep or not the connection is a parameter that controls the connection behavior. The ordinary behavior of the instrumented code is to create a single connection to the test server at the beginning of the test case execution and keep it open until the program ends. If the cost of keeping the connection open is prohibitive, this parameter can be used to instruct the instrumented code to create a connection each time one is needed.
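
The buffering behavior governed by these parameters can be sketched as follows. This is a conceptual illustration only: the class, parameter names and "protocol" are invented, and a list stands in for the TCP connection to the test server, which JaBUTi/ME's real client would manage:

```python
class TraceClient:
    """Sketch of trace-sending behavior: buffer trace entries on the
    device, flush them to the server when the memory budget is reached."""

    def __init__(self, server, memory_budget=5, keep_connection=True):
        self.server = server                  # stands in for host:port
        self.memory_budget = memory_budget    # buffered entries allowed
        self.keep_connection = keep_connection
        self.buffer = []

    def record(self, trace_entry):
        self.buffer.append(trace_entry)
        # Flush when the budget for buffered trace data is reached.
        if len(self.buffer) >= self.memory_budget:
            self.flush()

    def flush(self):
        if not self.keep_connection:
            pass  # a real client would open a fresh connection here
        self.server.extend(self.buffer)       # "send" the trace data
        self.buffer.clear()

server_log = []
client = TraceClient(server_log, memory_budget=3)
for node in ["n1", "n2", "n3", "n4"]:
    client.record(node)       # third entry triggers an automatic flush
client.flush()                # end of execution: send remaining data
```

Raising the memory budget trades device memory for fewer transmissions, which is exactly the trade-off the minimum-available-memory and keep-connection parameters expose.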

6. List of available testing tools

7. Conclusion
Application development for mobile devices is evolving. The strategies presented in this thesis discuss the differences between mobile applications and traditional applications, and how important it is to plan a test strategy that is mobile-specific. Unique challenges, such as the device and software challenges of mobile devices, need to be considered, because traditional testing does not cover all the characteristics important for mobile applications. Automated test tools and appropriate test methods should be used for a successful testing result.
Below is a final guideline for testing mobile applications:
1) The network landscape and device landscape should be understood before testing, to identify bottlenecks.
2) Testing in the real environment should be conducted.
3) An adequate automation test tool should be used. Rules for an ideal tool are:
- One tool should support all desired platforms.
- The tool should support testing for various screen types, resolutions, and input mechanisms, such as touchpad and keypad.
- The tool should be connectable to the external system to carry out end-to-end testing.
4) The Weighted Device Platform Matrix method should be used to identify the most critical hardware/platform combinations to test. This method is especially useful when there are many hardware/platform combinations and little time to test.
5) The end-to-end functional flow should be checked at least once on all possible platforms.
6) Performance testing, GUI testing, and compatibility testing should be conducted using actual devices. Even if these tests can be done using emulators, testing with real devices is recommended.
7) Performance should be measured only under realistic conditions of wireless traffic and user load.
