

The Magazine for IT Security

February 2010

www.securityacts.com

free digital version

made in Germany

issue 2

Security incidents respect neither geographical nor time-zone nor administrative boundaries.

The Annual FIRST Conference focuses exclusively on the field of computer security incident handling and response. It addresses the global spread of computer networks and the common threats and problems faced by everyone involved.

Join us in Miami for this unique gathering of security professionals and learn first-hand the latest in incident prevention, detection and response best practices. Forge alliances and become a part of a globally trusted forum.

REGISTER TODAY!
HTTP://WWW.FIRST.ORG
HTTP://CONFERENCE.FIRST.ORG

Editorial

Dear Readers,
The first issue of Security Acts was very well received in the IT security community. We have received a lot of feedback from readers congratulating us, and we thank you for this. Creating a successful issue takes a lot of effort behind the scenes, and without the hard work and support of our colleagues we would not have achieved this.

I am looking forward to introducing new authors and some great new articles in the next issue of Security Acts. It is interesting to see that we are reaching a worldwide community with Security Acts, in some countries more than in others, but we will increase our marketing activities and make a point of reaching everyone.

I would really appreciate it if you would pass on the information about the magazine to all your interested colleagues and contacts, and I ask you to think about advertising your company in Security Acts. We are reaching numerous readers worldwide who are specifically in the IT security market.

In the past few weeks we have noticed that Google, Microsoft and the Chinese government have carried out a lot of marketing, in particular on the topic of IT security. I would appreciate it if one of you could write an article about this topic. I think it is important to provide the public with all the information.

Thank you for all your support, and I wish you a happy and successful 2010.

Yours sincerely

José Manuel Díaz Delgado


Editor


Contents

Editorial /3
Reader's Opinion /6
by Catalin Bobe
Web Vulnerability Scanners: Tools or Toys? /7
by Dave van Stein
What if I lose all my data? /11
by Mauro Stefano
Windows Identity Foundation and Windows Identity and Access Platform /14
by Manu Cohen-Yashar
Security Testing: Taking the path less travelled /17
by Ashish Khandelwal, Gunankar Tyagi, Anjan Kumar Nayak
Security Testing: Automated or Manual? /20
by Christian Navarrete
File Fuzzing Employing File Content Corruption to Test Software Security /23
by Rahul Verma
Column: IT Security Micro Governance - A Practical Alternative /28
by Sachar Paulus
Avoiding loss of sensitive information - as simple as 1-2-3 /30
by Peter Davin
IT Security Micro Governance - A Practical Alternative /33
by Ron Lepofsky
The CSO's Myopia /37
by Jordan M. Bonagura
Security@University - Talking about ICT security with two CTOs /52
by Diego Pérez Martínez and Francisco J. Sampalo Lainz
Masthead /42
Index Of Advertisers /42


ISSECO
SECURE SOFTWARE ENGINEERING

BE SAFE! START SECURE SOFTWARE ENGINEERING

Secure software engineering has become an increasingly important part of software quality, particularly due to the development of the Internet. While IT security measures can offer basic protection for the main areas of your IT systems, secure software is also critical for establishing a completely secure business environment.


Become an ISSECO Certified Professional for Secure Software Engineering to produce secure software throughout the entire development cycle. The qualification and certification standard includes:

- requirements engineering
- trust & threat modelling
- secure design
- secure coding
- secure testing
- secure deployment
- security response
- security metrics
- code and resource protection

PLEASE CONTACT:
Malte.Ullmann@isqi.org


WWW.ISSECO.ORG

Reader's Opinion
(issue 1, October 2009)
"The Human Face of Security - #1"
by Mike Murray

I wouldn't even say it was a "people problem". Because this is where my point of view differs greatly from Mike Murray's (and most other infosec professionals'). If we say "it's a user problem", all it means is that we blame the users for this state of affairs. I think that's wrong. The users do THEIR jobs, as much as we should do ours. But because we don't do ours properly, we "transfer" the blame onto them.

We, the infosec professionals, need to do a better job at imparting our knowledge to everyone else. That's why I am a great believer in awareness (as a continuous process) and less in (online) training (as a one-time event). To me, the next level of (information) security will happen when we motivate/persuade the users (end users, IT staff, executives and senior management) to integrate whatever is needed for their jobs and wellbeing into their day-to-day life and habits.
I bet you carry the keys to your home, car, maybe office. Every
day, no? Those are security devices. You lock the door every time
you leave your home or your car after you park it. Why? Why do
you carry the keys? They are heavy. They need to be taken care of
(you can't leave them on a table in a bar).
So what's happening here? Well, the way I see it, we, the infosec professionals, need to come down from our ivory towers and speak the same language as our audience. Just look at the awareness materials available on the Internet. Most are trivial, stupid and talk down to people. The first thought I have when looking at them is "Do you think I'm stupid?".
Yes, there is social engineering, and there are social networks.
Hackers and organised crime are getting better and better. I can
see it in the quality of the spam emails I get. I can see it in the
social networks I surf occasionally. We can still put firewalls and
IDS/IPSes and filtering mechanisms in place to stop people getting on Twitter, or Facebook or LinkedIn. But that's not what the
business wants, is it?
The solution? More and better awareness. Day in, day out. A continuous process.
Unfortunately, awareness takes time (you can't report on it tomorrow if you started it this morning). It is more difficult to measure (human behaviour is a difficult thing to measure, to start with). And it doesn't offer a predictable return on investment (if you install a firewall, that firewall will stay in the company until the end of its days and you control it, whereas an employee can leave at any time, taking any knowledge and time you invested in him with him).
Catalin Bobe, CISSP, CISM, CISA
SecureBase Consulting, CANADA

Would you like to comment on an article?
Please feel free to contact:
editorial@securityacts.com


Web Vulnerability Scanners: Tools or Toys?
by Dave van Stein

Executing a web application vulnerability scan can be a difficult and exhaustive process when performed manually. Automating this process is very welcome from a tester's point of view, hence the availability of many commercial and open-source tools nowadays. Open-source tools are often created specifically to aid in manual testing and perform one task very well, or combine several tasks with a GUI and reporting functions (e.g. W3AF), whereas commercial web vulnerability scanners, such as IBM Rational AppScan, HP WebInspect, Cenzic Hailstorm, and Acunetix Web Vulnerability Scanner, are all-in-one test automation tools designed to save time and improve coverage.
Web application vulnerability scanners basically combine a spidering engine for mapping a web application, a communication protocol scanner, a scanner for user input fields, a database of attack vectors, and a heuristic response scanner, together with a professional GUI and advanced reporting functions. Commercial vendors also provide frequent updates to the scanning engines and the attack vector database.
Evaluating web vulnerability scanners

Over the past years many vulnerability scanner comparisons have been performed1, 2, 3 and the most common conclusion is that the results are not consistent.

This great diversity in results can partially be explained by the lack of common testing criteria. Vulnerability scanners will typically be used by testers with various backgrounds, such as functional testers, network security testers, pen-testers, and developers, each of them having a different view on how to review these tools, which causes different interpretations of the results. Sometimes this leads to comparing web vulnerability scanners with other security-related products, which is like comparing apples and oranges1.
Another explanation is the diversity of the test basis. The vast number of technologies and ways to implement them in web applications makes it difficult to define a common test strategy. Each application requires a different approach, which is not always easy to achieve, making it hard to compare results. Finally, like testers, vendors also have different views on how to achieve their goals. Although vulnerability scanners might look the same on the outside, the different underlying technologies can make interpretation and comparison of results more difficult than they appear to be.
The Web Application Security Consortium (WASC) started a project in 2007 to construct a set of guidelines for evaluating web application security scanners on their identification of web application vulnerabilities and its completeness5. Unfortunately, this project has not reached its final stage yet, although a draft version has recently been presented6.

This article focuses on the difficulties of reviewing and using vulnerability scanners. It does not identify the best vulnerability scanner available, but discusses some of the strengths and weaknesses of these tools and gives insight into how to use them in a vulnerability analysis.
Using a web vulnerability scanner
Vulnerability scanners are like drills. Although the former are designed to find holes and the latter to create them, their usage is similar. Using a drill out of the box will possibly yield some results, but most likely not the desired ones. Without doing some research into the various configurations of the machine, the possible drill heads, and the material you are drilling into, you are more than likely to fail at drilling a good hole and may come across some surprises. Likewise, running a vulnerability scanner out of the box will probably show some results, but without reviewing the many configuration options and the structure of the test object, the results will not be optimal. The optimal configuration also differs in each situation. Before using a vulnerability scanner efficiently, it is necessary to understand how scanners operate and what can be tested.
In essence, scanners work in the following three steps:

1. identify a possible vulnerability
2. try to exploit the vulnerability
3. search for evidence of a successful exploit

For each of these steps to be executed in an efficient way, the scanner needs to be configured for the specific situation. Failing to configure one of these steps properly will cause the scanner to report incomplete and untrustworthy results, regardless of the success rate of the other two steps.
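To make these three steps concrete, here is a minimal (and deliberately naive) sketch of a scan loop in Python. The target URL, the parameter name and the payload/evidence pairs are hypothetical; a real scanner adds crawling, session handling and far richer evidence analysis.

```python
import requests

# Hypothetical target and parameter; a real scanner discovers these by spidering.
TARGET = "http://testsite.example/search"
PARAM = "q"

# Step 1: candidate vulnerabilities are represented as (payload, evidence) pairs.
CHECKS = [
    ("<script>alert(1)</script>", "<script>alert(1)</script>"),  # reflected XSS
    ("'", "SQL syntax"),                                          # SQL error disclosure
]

def scan():
    for payload, evidence in CHECKS:
        # Step 2: try to exploit by injecting the payload into the parameter.
        resp = requests.get(TARGET, params={PARAM: payload}, timeout=10)
        # Step 3: search the response for evidence of a successful exploit.
        if evidence.lower() in resp.text.lower():
            print(f"possible issue: payload {payload!r} reflected or triggered an error")
        else:
            print(f"no evidence for payload {payload!r}")

if __name__ == "__main__":
    scan()
```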


Knowing what to test


Before a vulnerability scanner can start looking for potential
problems, it first has to know what to test. Mapping a website is
essential to be able to efficiently scan for vulnerabilities. A scanner has to be able to log into the application, stay authenticated,
discover technologies in use, and find all the pages, scripts, and
other elements required for the functionality or security of the
application.
Most vulnerability scanners provide several options for logging in and website spidering that work for standard web applications, but when a combination of (custom) technologies is used, additional parameterization is needed.
Another parameter is the ability to choose or modify the user agent the spider uses. When a web application provides different functionality for different browsers or contains a mobile version, the spider has to be able to detect this. A scanner should also be able to detect when a website requires a certain browser to function properly.
After a successful login, a scanner has to be able to stay authenticated and to keep track of the state of the website. While this is no problem when standard mechanisms (e.g. cookies) are used, custom mechanisms in the URI can easily cause problems. Although most scanners are able to identify the (possible) existence of these problem-causing techniques, an automatic solution is rarely provided. When a tester does not know or understand the application, the techniques used, and the possible existence of such problems, the coverage of the test can be severely limited.
The Good

Vulnerability scanners are able to identify a wide range of vulnerabilities, each requiring a different approach. These vulnerabilities can roughly be divided into four aspects:

- information disclosure
- user input independent
- user input dependent
- logic errors

Information disclosure covers all errors that provide sensitive information about the system under test or about the owner and user(s) of the application. Error pages that reveal too much information can lead to identification of the technologies used and of insecure configuration settings. Standard install pages can help an attacker successfully attack a web application, whereas information such as e-mail addresses, user names, and phone numbers can help a social engineering attack. Some commercial vendors also check for entries in the Google Hacking Database7. Its use, however, is limited in the development or acceptance testing stage.

User-independent vulnerabilities cover insecure communications (e.g. sending passwords in clear text), storing passwords in an unencrypted or weakly encrypted cookie, predictable session identifiers, hidden fields, and enabled debugging options in the web server.


Checking for both information disclosure problems and user-independent vulnerabilities manually can be very time-consuming
and strenuous. Vulnerability scanners identify these types of errors efficiently almost by default.
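A few of these checks are simple enough to script directly. The sketch below, written in Python with the requests library, probes a hypothetical URL for verbose technology banners, a password form served over plain HTTP, and cookies set without the Secure flag; real scanners run hundreds of such signature checks.

```python
import requests

# Hypothetical target; replace with the application under test.
URL = "http://testsite.example/login"

def banner_and_transport_checks(url):
    resp = requests.get(url, timeout=10)

    # Information disclosure: verbose technology banners in response headers.
    for header in ("Server", "X-Powered-By", "X-AspNet-Version"):
        if header in resp.headers:
            print(f"[info] {header} header reveals: {resp.headers[header]}")

    # User-independent issue: a password form delivered over unencrypted HTTP
    # means credentials would be sent in clear text.
    if url.startswith("http://") and 'type="password"' in resp.text.lower():
        print("[warn] password field served over plain HTTP")

    # User-independent issue: session cookies set without the Secure flag.
    set_cookie = resp.headers.get("Set-Cookie", "")
    if set_cookie and "secure" not in set_cookie.lower():
        print("[warn] cookie set without the Secure flag")

banner_and_transport_checks(URL)
```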
The Bad

Bigger problems arise when testing for user-dependent vulnerabilities. These problems occur due to insecure processing of user input. The best-known vulnerabilities of this kind are cross-site scripting (XSS)8, 9, the closely related cross-site request forgery (CSRF)10, 11, and SQL injection12, 13. The challenge for automated scanning tools when testing for these vulnerabilities lies in detecting a potential vulnerability, exploiting the vulnerability, and detecting the results of a successful exploit.
SQL injection

SQL injections are probably the best-known vulnerabilities at the moment. This attack has already caused many website defacements and hacked databases. Although the simplest attack vectors are no longer a problem for most web applications, the more sophisticated variants can still pose a threat. Even when an application does not reveal any error messages or feedback on the attack, it can still be vulnerable to so-called blind SQL injections. Although some blind injections can be detected by vulnerability scanners, they cannot be used for complete coverage, mainly for performance reasons. Blind SQL injections typically take a long time to complete, especially when every field in an application is tested for these vulnerabilities. Most vendors acknowledge this limitation and provide a separate blind SQL injection tool to test a specific location in an application.
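The time-based variant of blind SQL injection also illustrates why full coverage is so slow: every probe deliberately waits. The Python sketch below sends a delay payload to a single parameter and compares response times; the URL, the parameter and the MySQL-style SLEEP() syntax are assumptions that would need adapting to the real backend.

```python
import requests

# Hypothetical target; the SLEEP() syntax assumes a MySQL backend.
URL = "http://testsite.example/item"
PARAM = "id"
DELAY = 5  # seconds the injected query should stall if the field is injectable

def time_based_blind_sqli(url, param):
    baseline = requests.get(url, params={param: "1"}, timeout=30).elapsed.total_seconds()
    payload = f"1 AND SLEEP({DELAY})"
    probed = requests.get(url, params={param: payload}, timeout=30).elapsed.total_seconds()

    # If the probed request takes roughly DELAY seconds longer than the baseline,
    # the injected SLEEP() most likely executed on the database server.
    if probed - baseline > DELAY * 0.8:
        print(f"[warn] parameter {param!r} looks vulnerable to blind SQL injection")
    else:
        print(f"no timing difference observed for {param!r}")

time_based_blind_sqli(URL, PARAM)
```

Multiply this one five-second probe by every parameter and every field in an application and the performance limitation described above becomes obvious.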
XSS and CSRF attacks

Cross-site scripting (and cross-site request forgery) attacks are probably the most underestimated vulnerabilities at this moment. The consequences of these errors might look relatively harmless or localized, but more sophisticated uses are discovered every day, such as hijacking VPN connections, bypassing firewalls, and gaining complete control over a victim's machine.

The main problem with XSS is that the possible attack vectors run into the millions (if not more). For example, an XSS thread on sla.ckers.org14 has been running since September 2007, contains close to 22,000 posts so far, and new vectors are posted almost daily.
There are several causes that contribute to the vast number of possible attack vectors:

- It is possible to exploit almost anything a browser can interpret, so not only the traditional SCRIPT and HTML tags, but also CSS templates, iframes, embedded objects, etc.
- It is possible to use tags recursively (e.g. <SCR<SCRIPT>IPT>) for applications that are known to filter out statements.
- It is possible to use all sorts of encoding (e.g. unicode15) in attack vectors.
- It is possible to combine two or more of these vectors, creating a new vector that is possibly not properly handled by an application or filtering mechanism.
With AJAX the possibilities increase exponentially. Obviously it is impossible to test all these combinations in one lifetime, not least because of the performance drop this would cause. Vulnerability scanners therefore provide a subset of the most common attack vectors, sometimes combined with fuzzing technologies. This list is, however, insufficient by nature, so additional attack vectors should be added or tested manually.
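Extending the attack vector list is easy to script. The sketch below generates a few variants of two generic base payloads (recursive tags, URL encoding, mixed case) that could be appended to a scanner's attack list or used during manual testing; it illustrates the mutation idea only and is nowhere near an exhaustive vector set.

```python
from urllib.parse import quote

BASE_PAYLOADS = [
    '<script>alert(1)</script>',
    '<img src=x onerror=alert(1)>',
]

def variants(payload):
    """Yield simple mutations of one XSS payload."""
    yield payload
    # Recursive tag trick for filters that strip '<script>' only once.
    yield payload.replace("<script>", "<scr<script>ipt>").replace("</script>", "</scr</script>ipt>")
    # URL-encoded form, for parameters that are decoded before rendering.
    yield quote(payload)
    # Mixed case, for naive case-sensitive blacklists.
    yield "".join(c.upper() if i % 2 else c.lower() for i, c in enumerate(payload))

if __name__ == "__main__":
    for base in BASE_PAYLOADS:
        for v in variants(base):
            print(v)
```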
Another problem is the diversity of XSS attacks; the two best-known types are reflective and stored attacks. With reflective attacks the result of the attack is transferred immediately back to the client, making analysis relatively simple. Stored or persistent attacks, on the other hand, are stored somewhere and are not immediately visible. The result might not even be visible to the logged-on user and may require logging in as another user and understanding the application logic to detect them.
Stored or persistent user input vulnerabilities can basically be checked in two ways:

- Exploit all user input fields in a web application and afterwards scan the application completely again for indications of successful exploits
- Check what is stored on the server after filtering and sanitizing

Most scanners opt for an implementation of the first method, but detecting all successful exploits is a difficult task. Especially when an application has different user roles or an extensive data flow, successful exploits can be hard to detect without understanding and taking into account the application logic.
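A minimal version of the first method can be sketched as a two-pass check: submit payloads carrying a unique marker, then re-crawl the application and search every page for that marker. The submission URL, field names and page list below are hypothetical, and a real implementation would also need to authenticate as each user role.

```python
import uuid
import requests

# Hypothetical endpoints; a real test derives these from the spidered site map.
SUBMIT_URL = "http://testsite.example/comments"
PAGES_TO_RECHECK = [
    "http://testsite.example/comments",
    "http://testsite.example/admin/moderation",
]

def stored_xss_two_pass():
    # Pass 1: inject a payload tagged with a unique marker so hits are unambiguous.
    marker = uuid.uuid4().hex
    payload = f'<script>alert("{marker}")</script>'
    requests.post(SUBMIT_URL, data={"author": "tester", "text": payload}, timeout=10)

    # Pass 2: re-visit the application and look for the unescaped payload.
    for page in PAGES_TO_RECHECK:
        body = requests.get(page, timeout=10).text
        if payload in body:
            print(f"[warn] stored payload rendered unescaped on {page}")
        elif marker in body:
            print(f"[info] marker found on {page}, but payload appears encoded/filtered")

stored_xss_two_pass()
```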
Acunetix uses an implementation of the second method in a technology called AcuSensor16. Although this technology shows good results in detecting, for example, stored XSS and blind SQL injection attacks, the biggest drawback is that it has to be installed on the web server in order to be used. This might not be problematic in a development or even acceptance environment, but in a production environment this is often not an option or simply not allowed.

The Ugly

The most difficult errors to find in a web application are application and business logic errors. Although these errors are usually a combination of other vulnerabilities, they also contain a functional element contributing to the problem. Examples of logic errors are password resets without proper authentication, or the possibility to order items in a webshop while bypassing the payment page. Since logic errors are a combination of security problems and flaws in the functional design of an application, practically all vulnerability scanners have problems detecting them. Commercial vendors like IBM and Cenzic do have modules for defining application logic attacks, but these are very basic and require extensive parameterization.

When testing for logic errors, manual testing is still necessary, although vulnerability scanners can be used for the repetitive or strenuous parts of the test. Practically all commercial vendors, but also e.g. Burp Suite Pro, offer an option to use the vulnerability scanner as a browser. Here the tester chooses the route through the application, while the scanner performs automatic checks in the background.
Conclusions

Vulnerability scanners can be very useful tools for improving the security and quality of web applications. However, as with any other testing tool, being aware of the limitations is essential for proper use. With their efficient scanning for communication problems and bad practices, they can save time and improve quality and security early in the development of web applications. When used for testing user input filtering and sanitizing, they can save time by rapidly injecting various attacks. However, manual review of the results is essential and, due to the limited set of attack vectors, additional manual testing remains necessary. Fully automated testing of business and application logic is not possible with vulnerability scanners. Here vulnerability scanners have the same limitations as other test automation tools. However, when used by experienced security testers to automate the most strenuous parts of the security testing process, they can save time and improve test coverage.
1 http://www.networkcomputing.com/rollingreviews/Web-ApplicationsScanners/
2 http://ha.ckers.org/blog/20071014/web-application-scanning-depthstatistics/
3 http://anantasec.blogspot.com/2009/01/web-vulnerability-scannerscomparison.html
4 http://en.hakin9.org/attachments/consumers_test.pdf
5 http://www.webappsec.org/projects/wassec/
6 http://sites.google.com/site/wassec/final-draft
7 http://johnny.ihackstuff.com/ghdb/
8 http://en.wikipedia.org/wiki/Cross-site_scripting
9 http://www.owasp.org/index.php/Cross-site_Scripting_(XSS)
10 http://en.wikipedia.org/wiki/Csrf
11 http://www.owasp.org/index.php/Cross-Site_Request_Forgery
12 http://en.wikipedia.org/wiki/SQL_injection
13 http://www.owasp.org/index.php/SQL_injection
14 http://sla.ckers.org/forum/read.php?2,15812
15 http://en.wikipedia.org/wiki/Unicode_and_HTML
16 http://www.acunetix.com/websitesecurity/rightwvs.htm


> About the author


Dave van Stein
is a senior test consultant
at ps_testware. He has
close to 8 years of experience in software and
acceptance testing and
started specializing in
Web Application Security
at the beginning of 2008.
Over the years, Dave has
gained experience with
many open-source and commercial testing tools and
has found a special interest in the more technical testing areas and virtualization techniques. Dave is active in the Dutch OWASP chapter, and he is both ISEB/ISTQB-certified and an EC-Council Certified Ethical Hacker.

Subscribe at:

www.securityacts.com

Your Ad here
sales@securityacts.com


What if I lose all my data?
by Mauro Stefano

This article describes solutions for saving data. These solutions are a possible first step towards a Disaster Recovery project, quickly feasible and at limited cost. The proposed solution fits times of economic hardship, or as long as you still consider the loss of data a non-relevant problem. The off-site storage solution is not an alternative to Disaster Recovery; it is only a first step towards a solution for the entire recovery of ICT services.


When we think of an information system, we immediately imagine a computer or rather one or more computer rooms with many
servers; in fact we are talking about a DC (Data Center).


We know very well that the DC with its computers holds all our data. Servers are only the instruments used to access and process our data. The data are the computer representation of our company, of our projects and of our knowledge; they are the assets of the Data Center. Without the data, much of the vital business information is lost.

We are well aware that many copies of small subsets of our data are present on the personal computers of our users; we know very well that other subsets are spread in multiple copies among our customers as well as among our suppliers. There are still other subsets on hard copy in our various offices. Finally, should it be of any use, you can rely on the employees' memories. However, is this the way to reconstruct the data in the event of accidental loss? Obviously, a structured system, well organized and periodically tested, makes it much more likely that we will be successful.
If you are involved in Disaster Recovery, and especially in the management of business continuity in the event of the loss of the DC, you will realize as we discuss these issues that you are working on a very complex project with a very low chance of ever being invoked.
Fortunately, the likelihood of actually having to use the Disaster Recovery systems, or to use alternative business continuity processes in order to continue our business in case of unavailability of the production information systems, is really negligible. Most ICT operators are prepared to deal with disasters that will never really need to be managed, aware that if they were not in a position to react, the risk would be to stop the business completely.

If we focus on Disaster Recovery more closely and split it into simple elements, we will see that it is achievable through the following elements:

1. Suitable rooms to accommodate an alternative DC
2. Computers and alternative disk(s), compatible with those normally used in production
3. A network connection between the recovery DC and the users' offices
4. Data and system configurations

If we imagine ourselves in the moment immediately after a disaster, for which we were not prepared, and look at the above
four elements one by one, we will see that the first three can
be designed from scratch, whilst the fourth, the data, cannot be
rebuilt from scratch. Data can only be restored. So we must have
a back-up copy.
It is no problem for us to hire a properly equipped DC; we can buy new computers compatible with the ones that have been destroyed, and we can ask our network provider to set up a new link to the new DC; but we cannot reconstruct our data from the various incomplete copies described above. Again, data can only be restored.

If I find myself in a period of economic constraints, and if my Data Center risk assessment allows me to accept a long outage period, I can temporarily dispense with a complex plan for Disaster Recovery, but at least I must take steps to prepare a regular remote copy (daily or weekly) of most of my data. Achieving this can be easier than you may imagine. In the following, we will look at some of the possible alternatives.


Copying of the cartridge

We have always been accustomed to dealing with what we call a component failure, i.e. the failure of a component in one of our systems. In some cases, this may lead to the loss of a subset of the data. We then restore the lost data from the regular copies (often daily) made for this purpose. At the same time, regular copies of data allow us to recover from possible application data corruption.

The daily back-up is usually a complete copy of the data, which is normally held in the same rooms in which the primary data are stored. The obvious problem is easy to see: in the event of a DC disaster, we would lose both copies. For data recovery purposes, the simplest solution would be to make a copy and take the tapes to which the daily and weekly data have been saved to a remote location.

This solution is viable, but could be difficult and costly in large-scale environments, where the cartridges are engaged almost continuously. In these cases, we also need to expand the cartridge libraries, since the production ones are nearly always already committed. This solution has another disadvantage in that it requires significant human activity, because we need people to remove the cartridges from the tape library, put them in a box, move them to a safe place far away, and then manage the reverse cycle. The data flow relating to this solution is represented in blue in the diagram.

Copy data on a deduplication system

Analyzing a data store in detail, we see that it contains many small subsets of data repeated many times over and only a few subsets that are really unique. This is easier to understand in an e-mail system, where we have at least two copies of the same message: one for the sender and one for the recipient. If there is more than one recipient, the number of copies increases. Whenever a mail is forwarded, the new one still contains the original one, so the number of copies of the original mail continues to increase. In the same way, all documents that have the company logo in the header or footer contain many copies of the company logo. By using a system to search for and delete duplicated copies of data, you can reduce the occupied disk space.

Through deduplication (duplication reduction), it is possible to define a file no longer as a string of bytes, but as a string of pointers to well-known distinct blocks. If data is not repeated, depending on the implementation, it is defined as new blocks or only as changes to existing blocks. In the case of small changes, depending on the individual implementation, either a new object is created which points to the original with an indication of the changes, or a new object is composed of pointers to the unchanged blocks and pointers to new chunks for each block that contains a change.

Today these systems do not always have adequate performance to be used for live data, but they can be used to reduce the disk space used by saved copies that have been generated but remain unused. Back-up systems normally hold multiple versions of the same files, more or less unchanged. In this case, a system to detect and remove duplication leads to an even more significant reduction of the disk space occupied by the back-up, since the multiple copies of the same files become only pointers to the same original objects.


We can therefore easily understand how a back-up system based on reducing duplication can significantly reduce the disk space required. Another advantage of back-up systems with deduplication is the capability to make a remote copy. In practice, it is possible to duplicate the data of the primary back-up at a second remote site on other disks. This system, as well as being automated to reduce human operations, also needs only low-speed network connections, because it carries only the few changed data blocks and the pointers to blocks that represent new files. In practice, it becomes possible to have a remote copy of back-up data by implementing Virtual Tape Libraries with deduplication features and a remote replication option in place of the Tape Libraries. The data flow relating to this solution is represented in green in the diagram.
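The block-and-pointer idea behind such back-up systems can be illustrated in a few lines of Python. The sketch below splits files into fixed-size chunks, stores each unique chunk only once under its hash, and represents every backed-up file as a list of chunk hashes; production systems add content-defined chunking, compression and the remote replication discussed above. The file names are hypothetical.

```python
import hashlib

CHUNK_SIZE = 4096          # fixed-size chunks; real systems often use variable-size chunking
block_store = {}           # hash -> chunk bytes (the deduplicated data pool)
catalog = {}               # file path -> list of chunk hashes (the "string of pointers")

def backup(path):
    hashes = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            digest = hashlib.sha256(chunk).hexdigest()
            # Store the chunk only if an identical one is not already present.
            block_store.setdefault(digest, chunk)
            hashes.append(digest)
    catalog[path] = hashes

def restore(path):
    # A file is rebuilt purely from pointers into the shared block store.
    return b"".join(block_store[h] for h in catalog[path])

if __name__ == "__main__":
    backup("report.doc")        # hypothetical file names
    backup("report_copy.doc")   # a near-identical copy adds almost no new blocks
    print(f"{len(catalog)} files backed up, {len(block_store)} unique blocks stored")
```

Because only previously unseen blocks enter the store, replicating the store to a remote site means transferring little more than the new blocks and the updated catalog, which is exactly why a low-speed link suffices.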
Copy data on remote disk

A third solution, which allows an almost continuous data alignment, is based on a disk-to-disk replica. In these solutions data are synchronously or asynchronously mirrored between the local disk system and a remote disk system, which are connected through a high-speed link. This solution requires not only more high-efficiency disk space, but also a high-speed network link. Obviously, this solution is adequate in cases where you intend to achieve a DR solution with little data loss and a fast recovery time. The data flow relating to this solution is represented in yellow in the diagram.

> About the author


Mauro Stefano
is Engagement Manager
in a large ICT company
preparing proposal solutions for different ICT
items. In the past he was
ICT Security Manager in
the IT department of a
large Italian automotive
company. Presently, he focuses his activities on Security and End-User Support services, although he has experience in all IT services, based on 24 years of international activity in the ICT sector.
In the past, Mauro has published several articles in an Italian specialized ICT magazine and has participated in several conferences and congresses as chairman or speaker.
Of the projects he has managed, the most relevant are the introduction and implementation of ICT Security at his first company, and the design of a large Disaster Recovery project proposal for one of the main customers of his present employer. He has had the opportunity to manage multiple innovative projects, such as the introduction of mobile computers in '93 and the utilization of TCP/IP on mainframes in the nineties.
Mauro holds multiple certifications: CGEIT, CISA, CISM
and ITIL.



Windows Identity Foundation and Windows Identity and Access Platform
by Manu Cohen-Yashar

Identity management is a complex problem, yet almost every application has to address it. The world of identity management is being revolutionized by the introduction of the WS-* standards. Federation, single sign-on and claim-based authorization are common requirements. The question that remains open is: how should this be implemented?


Every framework has to address the identity problem. In this article, I would like to introduce the .NET solution called "Windows Identity Foundation", previously known as the Geneva framework. Windows Identity Foundation (WIF) enables .NET developers to externalize identity logic from their application, improving developer productivity, enhancing application security, and enabling interoperability with applications written on other platforms. WIF can be used for on-premises software as well as for cloud services. WIF, which is part of the new Identity and Access product wave, gives applications a much richer and more flexible way to deal with identities by relying on the claims-based identity concept I described in a previous article.

Using WIF it is easy to implement a claim-based authorization system based on industry-standard protocols. WIF simplifies the creation of a security token service (STS), which is the center of every claim-based system, as well as the interaction with other existing STSs and resources.

The Windows Identity and Access platform includes several releases:

- Active Directory Federation Services 2.0
- Windows Identity Foundation
- Windows Cardspace 2.0


ADFS 2.0

ADFS 2.0 is the next generation of Active Directory Federation Services. At the core of ADFS 2.0 is a security token service (STS) that uses Active Directory as its identity store. The STS in ADFS 2.0 can issue security tokens to the caller using various protocols, including WS-Trust, WS-Federation and Security Assertion Markup Language (SAML) 2.0. SAML is the base standard for claim-based tokens, while the WS-* standards are all about communication and negotiation. To support old versions, the ADFS 2.0 STS supports both SAML 1.1 and SAML 2.0 token formats and all WS-* versions. ADFS 2.0 is designed with a clean separation between wire protocols and the internal token issuance mechanism. Different wire protocols are transformed into a standardized object model at the entrance of the system, while internally ADFS 2.0 uses the same object model for every protocol. This separation enables ADFS 2.0 to offer a clean extensibility model, independent of the intricacies of different wire protocols.


Windows Identity Foundation - System.IdentityModel

The identity problem is complex and the challenge is huge, but on the other hand it must be easy to create applications with advanced identity capabilities. WIF, together with Microsoft's Identity and Access Platform, allows exactly that. Security and identity are a global issue, and thus interoperability between all platforms is a necessity. Microsoft's Identity and Access Platform is based on well-known industry-standard protocols to make sure it fully complies with the interoperability requirement.

WIF is a framework for implementing claim-based identity in your applications. It can be used in any web application or web service, whether in the cloud or on-site. The goal was to make the interaction with claims easy. It is designed to unify and simplify claim-based applications. It builds on top of WCF's plumbing, handles all the required cryptography and implements all the related WS-* and SAML standards. WIF also introduces an HttpModule called the WS-Federation Authentication Module (FAM) that makes it trivial to implement WS-Federation in a browser-based application. Using WIF it is possible to create your own custom STS or connect to another identity provider with only a few lines of code. For example, when you build with WIF, you are shielded from all of the cryptographic heavy lifting. WIF decrypts the security token passed from the client, validates its signature, validates any proof keys, shreds the token into a set of claims, and presents them to you via an easy-to-consume object model.
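WIF itself is a .NET library, so the snippet below is not WIF code. It is only a language-neutral sketch, written in Python, of the claims idea it implements: once a token has been validated, it is shredded into individual claims, and authorization decisions are expressed against those claims rather than against a local user table. All names, URLs and values are illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    claim_type: str   # e.g. "role", "email", "age"
    value: str
    issuer: str       # which STS asserted this claim

def claims_from_token(token: dict) -> list[Claim]:
    """Pretend the token has already been decrypted and its signature validated
    (the plumbing a framework like WIF handles for you); here we only 'shred'
    it into individual claims."""
    issuer = token["issuer"]
    return [Claim(t, v, issuer) for t, v in token["claims"]]

def can_approve_orders(claims: list[Claim]) -> bool:
    # Authorization is expressed against claims, not against a local user table.
    return any(c.claim_type == "role" and c.value == "purchasing-manager"
               for c in claims if c.issuer == "https://sts.example.org")

# Hypothetical, already-validated token content.
token = {"issuer": "https://sts.example.org",
         "claims": [("name", "alice"), ("role", "purchasing-manager")]}
print(can_approve_orders(claims_from_token(token)))   # True
```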
Cardspace 2.0

Cardspace is an identity selector. To a user, a card in Cardspace represents his identity in a simple and friendly manner. It is very much like the ID card in your wallet or a personal card you distribute to your colleagues. The card is installed on the user's computer. The information contained in the card is not the user's identity itself; the card contains the information needed to fetch the identity information from the identity provider.

Cardspace is not a new technology. It was released with .NET Framework 3.0 back in 2006. Cardspace was not a huge success, because it was not easy to use. WIF will change that. WIF introduces all the plumbing needed to use Cardspace on the client and the infrastructure to build or use an identity provider on the server. Cardspace 2.0 also contains many performance improvements to make its use easy and comfortable.

The world of identity is going through a revolution with claim-based authorization systems. If you want to be up to date, I recommend taking a close look at WIF and at Microsoft's Identity and Access Platform.

> About the author


Manu Cohen-Yashar
is an international expert in application security and distributed systems. He currently consults for various enterprises worldwide, including German banks, architecting SOA-based, secure, reliable and scalable solutions.
As an experienced and acknowledged architect, Mr Cohen-Yashar was hosted by Ron Jacobs in an ARCast show and spoke about interoperability issues in the WCF age. Mr Cohen-Yashar is a Microsoft Certified Trainer and has trained thousands of IT professionals worldwide. He is the founder of an interoperability group, in cooperation with Microsoft Israel, in which distributed system experts meet and discuss interoperability issues. http://www.interopmatters.com/
A sought-after speaker at international conventions, Mr Cohen-Yashar lectures on distributed system technologies, with a specialization in WCF, in which he is considered one of the top experts in Israel. Manu won the best presentation award at CONQUEST 2007.
Mr Cohen-Yashar currently spends much of his time bringing application security into the development cycle of major software companies (Amdocs, Comverse, Elbit, IEI, the Israeli defense system). He provides consulting services on security technologies and methodologies (SDL etc.). Mr Cohen-Yashar is known as one of the top distributed system architects in Israel. As such, he offers lectures and workshops for architects who want to specialize in SOA, and leads the architecture process of many distributed projects.


Application Security
www.diazhilterscheid.com
How highly do you value your own and your customers' data? Do your applications reflect this value accordingly? Accidental or deliberate manipulation of data is something you can be protected against.

Talk to us about securing your systems. We will assist you in incorporating security issues into your IT development, starting with your system goals, the processes in your firm, or professional training for your staff.
as@diazhilterscheid.com

Co-Founder of ISSECO (International Secure Software Engineering Council)


Security Testing:
Taking the path less travelled
by Ashish Khandelwal, Gunankar Tyagi, Anjan Kumar Nayak

Have you ever thought about how easy it would be to write simple test cases that could reveal the security loopholes in your product? This is easier said than done. I have seen in my experience how often a security testing initiative gets stopped during implementation. First comes a whole series of justifications: why we need it, whether we really need it, what we will achieve, and so on. After all this comes the considerable effort of bringing the security testing framework to life. Even though the age-old Threat Modelling approach remains the best tool available, it still remains a distant dream for many to embrace. In our honest opinion it would just be a disgrace for the readers to ask questions such as "What is security testing?", "Why is security testing required?", "What is the importance of security testing?" and other questions along this line.
Defining security testing is no longer a challenge; however, the test techniques might vary from application testing to web testing. What still remains a distant dream is the acceptance of security testing in the SDLC. With the recent security threats posed in various circles and the emerging risks relating to data theft, it has become tougher to address security testing needs. Added to this is the distinct lack of business requirements specifying the security state of a product.
- Why do security testing? Although it is a disgraceful question to ask, it is always good to ask it to get your footing right.
- How to do security testing? This is where most of the initiatives get killed off, since no one has a clear picture of how to do it, and since the whole procedure is so undefined and unclear. Is threat modelling the only source available? Is there no other way to get started with it as a small venture?
- Who will do security testing? Do I need a seasoned security professional, or can I do this with my regular black-box testers? Can I leverage my existing resources by adding some security certification?

Trinity of Constraints

The most favoured Threat Model approach has the following three constraints, as shown in the diagram:

- Complexity: What is complex about a Threat Model? First of all, it is time-consuming, because you need to understand the product architecture in every detail. It involves first listing and then finding the possible interaction levels of each and every asset of the product.
- Connectivity: What are my chances of getting connected to the underlying product architecture? I have come across the very common complaint about the lack of product architectural documentation. Even if there are documents, who has the time to explain them to you? The other possibility is to walk through the millions of lines of the code base. But is that a wise option to follow?
- Changeability: Evolving development models such as Agile add more pain to the problem. The product features and their underlying architecture are subject to change, if not frequently.


Creating an adaptive testing approach that bridges the gap between the pains of Threat Modelling and starting your own security testing project is the best solution.

Adaptive Security Testing Approach

Adaptive security testing focuses more on finding security defects than on finding them early in the cycle, while the expertise level is still building. This concept originates from the fact that security testing is a part of functional testing, rather than something that is performed individually under supervision. Look at the security testing ladder as it progresses. As we go up the ladder, the expertise of the security tester increases, and this results in defects that could be functional as well as security-related. Each step up the ladder will optimize your results, but in turn requires consistent upgrading of your security testing skills and product knowledge. With this ladder, we have derived a two-way security testing approach termed the Adaptive Security Testing Approach.

In short, we break the complete effort into two categories:

a) Peripheral Security Testing: an entry-point-based, black-box security testing approach with the aim of quickly finding surface-level but critical security issues.
Use Case Scenario: fuzzing a GUI window

b) Adversarial Security Testing: an expertise-based, hostile security testing approach, which is carried out inside-out to reveal loopholes in the product.
Abuse Case Scenario: exploiting Windows ACLs

Adversarial testing is a successor of Peripheral testing. As we go up the ladder, our methodology changes and hence our testing priorities change. Each testing type is classified in terms of Inputs, Activities and Outputs, as seen in Figure 3. Inputs are the prerequisites required in order to follow the particular type of testing. Activities describe the flow for performing the particular type of testing. Output covers the results and the analysis part.

Figure 3

In its initial stage, the Adaptive Model is centred more on stimulating the security testing approach by focusing on finding early security defects rather than on studying the technology and product deeply in order to attack them. It induces enthusiasm and adaptability in such a zigzagged security arena. As we move along with this approach, we will start adapting our knowledge and constraints to maximize our potential and results.


Here, the tester needs to look at software risk analysis on a component-by-component, tier-by-tier, environment-by-environment level and needs to apply the principles of measuring threats, risks, vulnerabilities, and impacts at all of these levels. The entire security effort and progress should be recorded in the following ways:

1. Identify: Identify the security testing technique or the technology that will be tested, and also the component of the product/application that will be (and has been) subjected to the test. For example, it is easy to start with techniques like buffer overflow, privilege escalation, file/folder tampering, DoS, etc.
2. Explain: Give a detailed explanation of the concept and intent of the technique or technology in the context of your product/application. Against each of the identified techniques, evaluate your product's security rating, i.e. how vulnerable the product is to the identified technique. Once this is done, create the test scenario.
3. Execute and Record: Execute tests in the above context and list the scenarios that have been tested.
4. Report: Report the issues that have been found during the course of the testing and the status of the issues in the product/application. Most important of all is to document the test results, even when tests pass. This provides a test coverage view for the future.
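Even a lightweight, machine-readable record of these four steps pays off later when coverage questions come up. The sketch below shows one possible structure using plain Python dataclasses written out as JSON; the field names and the example entry are illustrative only, not a prescribed format.

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class SecurityTestRecord:
    technique: str                 # 1. Identify: technique/technology under test
    component: str                 #    ...and the component it was applied to
    rationale: str                 # 2. Explain: why this technique matters here
    scenarios: list = field(default_factory=list)   # 3. Execute and Record
    issues: list = field(default_factory=list)      # 4. Report (empty list = all passed)

log = [
    SecurityTestRecord(
        technique="buffer overflow",
        component="file import parser",
        rationale="parser handles untrusted files; long fields may overrun buffers",
        scenarios=["oversized filename", "1 MB single-line CSV field"],
        issues=[],  # documenting passes keeps a coverage view for the future
    )
]

with open("security_test_log.json", "w") as f:
    json.dump([asdict(r) for r in log], f, indent=2)
```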
Conclusion

No doubt, in the absence of guided test scenarios, a testing approach and clear requirements, gaining the so-called product management support for security testing becomes a tough task. It runs into a myriad of motivational issues, as success is not expected to come overnight (not even over months).

To make security testing a part of functional black-box testing, one would first of all need to create the necessary skill set, which can be built up through technical certifications, user group studies, and a basic understanding of security concepts and network/OS elements.

Treating security testing like any other testing type, rather than continuing to give it specialized treatment, would be a good start. Looking beyond network traffic, network security testing would also provide a good starting point. As mentioned at the beginning, these days data/information loss is also a security breach. So the thinking now needs to be about how this can be covered in security testing.

> About the authors


Ashish Khandelwal has
more than 5.5 years of
Software Testing experience. He works as a Senior QA Engineer with
McAfee Host DLP product
solution group. Being a
CEH, he is interested in
the latest Security Testing
trends, and continuously
seeks to improve software
security. Ashish works
towards becoming a technology solution consultant/architect by providing technical insight to different verticals in testing solutions.

Gunankar Tyagi, a Computer Science graduate,


has around 2 years of
experience solely in Black
Box testing. Gunankar has
earned many accolades
for his out-of-the-box testing skills and has proven
himself a respected tester
in a very short time. His areas of interest delve more
into Security Testing.

Anjan Kumar Nayak


has close to 8 years of
Software Testing experience, the last 3 years at
McAfee. As the Sr Project
Lead QA for McAfee
Host DLP product solution
group, Anjan manages the
end-to-end Quality Assurance and interacts continuously with customers
to understand and cater to their needs. A certified PMP,
Anjan has presented at many international conferences
and been continuously working towards enriching the
testing community knowledge base for the past 5 years.
His areas of interest mainly involve Test Process improvement with statistical test management/ end-point
solution performance testing and Security testing.



Security Testing:
Automated or Manual?
by Christian Navarrete

One of the hottest and most discussed topics among people involved in the security testing field is this: should security testing be based on automated or manual methods? What is the truth about using these tools to detect vulnerabilities in systems, networks or applications? Can these tools help an organization obtain good security results; can they identify weaknesses in order to put in place the required measures/defences to prevent real attacks by potential intruders? Moreover, are these automated efforts enough to accomplish the objective of detecting real vulnerabilities? In this article, we will cover some aspects of web application security testing, and we will see how manual testing can be an essential element, working alongside automated testing procedures to reduce false positives and to better define real vulnerabilities, which will ultimately have a significant impact on the overall security of the organization in question.
It is common practice within the industry to adopt a "run & report" approach to security testing, where the execution of an automated Vulnerability Assessment (AVA) tool is a common approach. The reports generated by such tools are often considered sufficient not just by mainstream companies but also by security specialists and contractors, and we question whether this approach leads to a false sense of security, since these results are often not double-checked manually. For some readers this is an experience they know all too well; for example, when an AVA monthly report delivered by an outsourcer includes high-severity vulnerability findings for an IIS web server even though the company only deploys Apache-based servers. This example demonstrates that small errors can lead to large gaps in companies' security, and an equivalently large exposure to the subsequent potential threats.
Automated vs Manual Security Testing

Within the market there are various automated security tools, many of them open-source, like Nessus (used for infrastructure vulnerability assessment) or Nikto (to perform web application vulnerability scanning). More sophisticated commercial tools exist containing advanced features like HTTP sniffing, fuzzers, session recording and manual requesters. While both of these classes of tools have some useful features, this is just one piece of the puzzle. These tools are good at detecting infrastructure-based and application-based vulnerabilities - even in many home-made applications; however, they crucially fail to address manual and business logic testing. Guidance often exists for both tool types (automated & manual) for assessing vulnerabilities: automated tools often use templates (e.g. some vulnerability scanners are based on the OWASP and SANS Top 10 vulnerabilities) or have their own predefined profile/template tests (which in turn look for infrastructure vulnerabilities, application vulnerabilities, weak passwords, high-risk alerts etc.). For manual security testing, various standards exist which help to complete this process in the best possible manner, for example OWASP (Testing Guide Project, http://www.owasp.org/index.php/Category:OWASP_Testing_Project) and PCI DSS Ref. 11.2, 11.3.1, 11.3.2 for web applications, and ISSAF (Information Systems Security Assessment Framework, http://www.oissg.org/issaf) for infrastructure testing - created as guidance for staff implementing such tests and initially reviewed globally by security professionals.
Manual and Business Logic Testing

What is manual and business logic testing? As indicated earlier, automated testing focuses on technical testing: the tool will process several test templates, which aim to detect application vulnerabilities like XSS, SQLi, generic injection attacks (HTML, LDAP, etc.) and CSRF, among others. For example, when assessing infrastructure, these tools will attempt to detect whether the web server is running a vulnerable version. However, what these tools based on such scripts fail to address is illustrated by the case of an in-house developed solution (in this case a web server) which has been deployed; here the goal of testing should be to ensure that no vulnerabilities exist which would allow a client's account to be compromised. It is easy to understand which event has a greater security impact: the XSS issue identified after authentication, or a parameter which does not validate data provided by the user and is thereby a potential window for a fraud event. This is where manual (and business logic) testing becomes relevant.
So far, we have described the tools and how they work and detect well-known vulnerabilities, but what about vulnerabilities that exist but are not exposed to the internet, vulnerabilities which do not exist in a vulnerability database or which do not have a vulnerability identifier? What about undisclosed/undetected vulnerabilities in the internal application? Simple: manual testing. Manual testing should be reinforced by business logic testing, which means that the testing should be focused on the business perspective at the same time as on the technical side. Performing manual testing ensures that the tester does not just cover technical issues, such as infrastructure or injections, but also covers other complex vulnerabilities that are harder to detect with a normal automated security scan, no matter how regularly it is executed. The formula for accomplishing this task is to think in a business manner and to create synergies with the technical aspects of security that affect the business. In the end this falls on the web servers, app servers and all the elements that contribute to making the business secure in a technical way.
The Tools to do the Job

The toolset for manual testing consists basically of MITM (Man-in-the-Middle) tools, such as Paros Proxy (http://www.parosproxy.org) or Burp Suite (http://portswigger.net/suite/), both free to download. These tools act as an intermediary between the tester's browser and the target application. This way, the tester is able to tamper with/modify the HTTP request BEFORE it is sent to the application. Here is a quick example: what about an HTML form which has JavaScript-based validation? By using this type of tool, you have total control over the application.


If, in our case, we do not want to be validated by this control, it can easily be bypassed by deleting the tags which load the validation control into the user's browser. Or another example: what if your logon application expects a simple username, and the user instead sends a malicious SQL statement that executes operating system commands and opens a reverse shell? Interesting, right?
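As a hedged illustration of the first example (the URL and field names below are invented for this sketch and are not part of the original text), client-side JavaScript validation offers no protection once the request is crafted outside the browser; the same effect is achieved by tampering with the request in Paros or Burp before it leaves the proxy:

    import requests

    # Hypothetical target and field names - the in-browser JavaScript check never runs here.
    payload = {
        "username": "admin' OR '1'='1",   # a value the client-side validation would have rejected
        "amount": "-9999",
    }
    response = requests.post("https://example.test/app/transfer", data=payload)
    print(response.status_code)
    print(response.text[:200])

The server-side application only ever sees the final HTTP request, which is why validation must always be enforced on the server as well.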
Then, Automated or Manual?
BOTH. When the power of a complete and up-to-date automated tool running in parallel, armed with well-crafted security test cases, is combined with a MITM tool, the security testing team is well prepared to perform a deep security analysis covering all the required aspects. This will provide a good security posture and help prevent internal and external attacks in the future.


File Fuzzing - Employing File Content Corruption to Test Software Security
by Rahul Verma

Introduction
Fuzzing is about finding possible security issues with software
through data corruption. The software in discussion might be a
desktop application, a network daemon, an API or anything you
could think of. Fuzzing is extensively used by security researchers
and large-scale product development companies. It has become
an essential part of the Security Development Life Cycle in many
organizations, and is known to find a high percentage of security
issues as compared to other techniques.
This paper focuses on file fuzzing, which is a special class of fuzzing dedicated to corrupting file formats. It is an easy-to-employ form of security testing and can be quickly put to work. Most software uses some sort of input in the form of files. The paper discusses the general uses and formats of such files and the data corruption strategies that can be employed.
The paper starts with a brief introduction of fuzzing and related
concepts and then digs deeper into the area of file fuzzing.

Wikipedia defines fuzzing as:
"Fuzz Testing or Fuzzing is a software testing technique that provides random data ("fuzz") to the inputs of a program. If the program fails (for example, by crashing, or by failing built-in code assertions), the defects can be noted."
Let's try to understand fuzzing a little further.
Fuzzing as a security testing technique
As indicated by its definition, fuzzing is all about sending malformed data as input to an application to locate bugs. Such bugs typically result in crashes, which after analysis can result in finding a vulnerability which makes the software exploitable in a certain way. A commonly discussed example of this sort is a buffer overflow vulnerability, which can allow an attacker to inject shellcode into the application at run time and make it execute malicious code, e.g. launch a remote shell.
Fuzzers are anti-parsers
As they say in the security world, "All input is malicious". In terms of fuzzing, we try to generate all sorts of malformed/malicious data. The software employs a lot of parsing routines to interpret the input and take decisions, e.g. buffer allocation, making calculations, type conversions, action execution etc. Fuzzing is all about breaking false assumptions or faulty code in such parsers. When malformed data is passed, it can trigger misallocation of memory or unintended interpretation of unsigned data in a signed context, causing buffer overflows and crashes. In code reviews, a problem may or may not map to a user input. In fuzzing, because such malformed data is directly tied to user input or to variables that can be manipulated by a user, any such issues can be directly exploited.
Fuzzing is essentially an automated testing technique
Fuzzing is essentially an automated testing technique. As the number of test cases executed can quickly become very large, it is an art to carry out fuzzing with a focus on the areas with the highest likelihood of locating vulnerabilities. It involves prioritization of tests based on analysis of the application, the related protocol(s) and past vulnerabilities in similar applications.
Fuzzing employed for the McAfee Anti-virus Engine
The McAfee anti-virus engine is subjected to file fuzzing using an in-house built, engine-specific tool developed by Tony Bartram in C++. Fuzzing is also carried out targeting specific file formats in a protocol-aware fashion. For this purpose, custom scripts are developed for sample generation using Python or Perl. Dedicated hardware rigs are used to carry out data corruption tests running round the clock for weekly builds.
Issues and Challenges
Making testers aware of this technique is the first challenge. Understanding it and implementing it is the next one!
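To help with that understanding, here is a deliberately minimal sketch of the idea in the definition above (the target command and file name are placeholders, not part of the original article): random bytes are written to an input file, the target is launched on it, and crashes or hangs are kept for later analysis.

    import os
    import random
    import subprocess

    TARGET_CMD = ["target_app", "input.bin"]   # placeholder application under test

    for i in range(100):
        data = bytes(random.getrandbits(8) for _ in range(1024))   # purely random "fuzz"
        with open("input.bin", "wb") as f:
            f.write(data)
        try:
            result = subprocess.run(TARGET_CMD, timeout=5)
            failed = result.returncode != 0        # crash or abnormal exit
        except subprocess.TimeoutExpired:
            failed = True                          # a hang is also a finding
        if failed:
            os.rename("input.bin", "crash_%03d.bin" % i)   # keep the offending sample

In practice, as the rest of this paper explains, purely random data rarely gets past the first parser checks, which is why format-aware and mutation-based approaches are preferred.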


Getting Started
Fuzzing has been considered to fit in the category of grey-box testing because of the nature of the analysis and automation involved, though it can be executed as black-box testing as well. The irony is that, despite this fact, the term and the related implementation are mostly unknown to software testers.
There is a good chunk of fuzzing work that can be taken up by a software tester, contrary to the general view that it is suitable only for security researchers. When testers locate a bug as part of their usual job, they are rarely responsible for analyzing which piece of code is actually responsible for the bug. Testers usually log the defect with test case details and their preliminary thoughts and analysis from outside the box. Fuzzing is no different, the only difference being the nature of the data that is submitted.
Fuzzing makes a software tester think beyond BVA and ECP, it makes him redefine his view of input to the application, and it extends the traditional approach to testing by bringing in a lot of possible test areas.

Pre-Requisites in Terms of Knowledge
Listed below are some of the concepts/technologies that one should be aware of before stepping into fuzzing:
The essence of security testing and common input-based attacks
Using a hex editor
A programming language of choice. Python is common in the world of fuzzing now. Older fuzzers were developed in C, but one can find some implementations in C#, Java and Perl as well.
How architecture (little endian / big endian) impacts binary packing of data
Programmatically dealing with binary files (reading and writing data) as per defined data types
Knowledge of concepts and modules related to hashing and compression
Patience - a lot of it. When developing samples or understanding file formats, it is all about hex data and not about fancy GUI-based testing. One has to be very patient during the file format analysis phase of file fuzzing.
Tools of the Trade
There are a lot of tools available for carrying out fuzzing of different kinds, namely file fuzzing, browser fuzzing, command-line fuzzing, environment variable fuzzing, web application fuzzing, ActiveX fuzzing and so on. As the focus of this paper is file fuzzing, readers can specifically look at:
FileFuzz, SpikeFile, NotSpikeFile (http://labs.idefense.com/software/fuzzing.php)
General-purpose frameworks like Peach (http://peachfuzzer.com/) and Sulley (http://www.fuzzing.org/fuzzing-software), and
Last but not least, an upcoming framework for the purpose - PyRAFT (http://pyraft.sourceforge.net), which is being actively developed by the author of this article.
Knowledge about what already exists in the area of fuzzing helps to understand practical implementations of different types of fuzzing. It helps in using or extending existing open-source tools, or in coming up with altogether new tools and frameworks by analyzing the code and execution methodology of existing ones.
If one looks specifically at file fuzzing, and at a specific kind of file, one can look for tools rather than frameworks. Even from a development perspective, writing a tool for a specific purpose is a lot easier than writing a general-purpose framework, because of all the design considerations involved.
Purpose of Input Files in Software
Software uses input files for various reasons. Some of the most common uses are the following:
An office productivity suite like MS Office or OpenOffice is all about creating and publishing files, e.g. documents, spreadsheets, presentations etc.
A media player uses media files of different formats to play audio/video
A browser uses HTML/XML/CSS files to show web content
An anti-virus software uses virus definition files to detect malware
License files are used to determine validity/expiry of software
Configuration files are used by web servers
Temporary files are written to disk by software to be read at a later stage
Understanding File Formats
At a high level, we can classify file formats into two categories: text and binary.
Text Formats
These can take two common forms. One form can typically be seen in configuration files, in which a plain text file is used where each line corresponds to a configuration setting and has a key-value pair separated by "::" or "=" or "=>" etc. This format can also be seen in log readers, where each line in the log is a comma-separated list of various parameters.
These days, XML is more popular for defining configuration files. It gives the freedom to create a much more complex structure, e.g. nested definitions. Other text formats are HTML files, which are again markup-language based files with tags and attributes.
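As a small, hedged illustration (the settings and values below are invented for the example), a line-oriented fuzzer for such key-value configuration files might simply replace the value part of each line with suspicious content:

    import random

    def fuzz_config_line(line):
        """Replace the value part of a 'key = value' line with an overlong or malformed value."""
        if "=" not in line:
            return line
        key, _, value = line.partition("=")
        mutations = ["A" * 10000, "-1", "%s%s%n", value.strip() * 100]
        return "%s= %s" % (key, random.choice(mutations))

    original = ["max_connections = 100", "log_path = /var/log/app.log"]
    for line in original:
        print(fuzz_config_line(line))

The same idea carries over to XML-based configurations, where element and attribute values (and the nesting itself) become the corruption targets.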

Binary Formats
These formats are more complex to analyze and are not human-readable. They can simply consist of binary packed data as per a set protocol, or they can be compiled data produced by proprietary compilers. One common format that binary files follow is the TLV format - Type-Length-Value - where the type is based on identifiers recognized by the software, the length gives the length of the data that follows, and then comes the value, i.e. the data itself. Typically, the type and length fields have a fixed number of bytes allocated to them, while the data part is flexible and depends on the length field. Such records are put in sequence, one after the other.
A very complex format of this nature is the SWF format, which has its tag identifiers based on a tag record header that takes 2 bytes. Instead of consuming the full 2 bytes, the format takes the first 10 bits as the tag identifier and the next 6 bits as the tag length. Amazing, isn't it? This is true for short tag formats; for long formats the approach changes.
Until we know the format of a file, it is like a black box, and the data corruption is also black-box corruption. As soon as you start to understand the file format and start data corruption taking the dependencies within the file format into consideration, it becomes grey-box fuzzing. You do not need to know the code that deals with it; the high-level logic of how the data is interpreted will suffice.
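To make the TLV idea concrete, here is a minimal sketch (the 2-byte type and 4-byte little-endian length fields are an assumed layout chosen for illustration, not a specific real format) of how such records can be built, and how a naive parser trusts the length field - exactly the kind of assumption a file fuzzer tries to break:

    import struct

    def build_tlv(records):
        """Pack (type, value) pairs as 2-byte type + 4-byte length + data (little endian)."""
        out = b""
        for rec_type, value in records:
            out += struct.pack("<HI", rec_type, len(value)) + value
        return out

    def parse_tlv(blob):
        """Naive parser: trusts the declared length, the kind of assumption fuzzing targets."""
        offset, records = 0, []
        while offset < len(blob):
            rec_type, length = struct.unpack_from("<HI", blob, offset)
            offset += 6
            records.append((rec_type, blob[offset:offset + length]))
            offset += length
        return records

    sample = build_tlv([(1, b"hello"), (2, b"\x00\x01\x02\x03")])
    print(parse_tlv(sample))

Corrupting the length field of a record in such a format is one of the simplest ways to probe how robust the real parser is.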

File Fuzzing - Putting TIGEMA on the Job
Fuzzing is easier understood if we split the process into steps. The fuzzing process can be remembered with the TIGEMA mnemonic, which stands for:
T - Target(s)
I - Input Vectors
G - Generate
E - Execute
M - Monitor
A - Analyze
Each of the above describes a distinct step in the process of fuzzing. One or more of them might work in conjunction or in parallel with each other. Figure 1 gives a visual snapshot of these steps in conjunction with each other. The following sections discuss the steps in detail with a view to file fuzzing.
Figure 1: Steps in Fuzzing
Identify Targets
This step can be approached in two ways in file fuzzing:
Identify the software to be fuzzed. Identify all input files that it takes. Shortlist the formats to be fuzzed. Fuzz them.
Identify the file format to be fuzzed. Identify all software that supports the identified file format. Shortlist the software to be fuzzed. Fuzz it.
The first approach is typically employed when testing the security of the product one is working on. Security researchers who find vulnerabilities in third-party software employ both of the above approaches, based on the context.
Identify Input Vectors (Files)
In the case of file fuzzing, the type of input vector is a file, but there can be multiple files that one wants to fuzz. E.g. an anti-virus would scan almost all existing known formats, so when fuzzing an anti-virus, one would typically fuzz a mixed set of formats. There are also situations where you fuzz different file formats for the same software, each with a separate purpose, e.g. a configuration file, a license file, the primary file format (document/media file) etc.
Generate Fuzz Data
This is the step where actual fuzzer development comes into the picture. Based on the inputs chosen, you make decisions about the kind of fuzzing you want to employ. This governs the quality and quantity of the fuzzed data you will produce for the inputs you have identified.
Execute
At this step you send (publish) the fuzzed data (for an input vector) to the target application. This might be a post-generation process, or it might run along with the generation process.
In the former, you first generate all the fuzzed data, write it to output files, and later send this data one by one to the application. You might require a lot of disk space in this case, depending on the kind of fuzzing: if you are fuzzing file formats and the size of the file is large, you might end up consuming a lot of disk space (or at worst running out of disk space). In some cases, this option might not be feasible at all.
So, in the case of file fuzzing, the latter approach is followed. You generate fuzz data and send it to the application. If the application crashes, a copy of the data is retained and the next fuzz iteration gets executed; otherwise the data is ignored (or deleted if on disk) and the fuzzing process continues. This way, only the fuzz data which is problematic is retained on disk (in the form of files, database entries etc.).
Monitor
This is done while you are sending the data to the application. It typically involves a debugger being attached to the application right from the beginning of the test. It might also involve monitoring the resource utilization on the box. If there is a crash, the fuzzer should be able to know about it. The debugger takes a dump of the application in the event of a crash for later analysis. The fuzzer then launches the application again, attaches the debugger and proceeds to the next fuzzing step.
The fuzzer should have a component which puts a cut-off limit on the running time of the application (called the time threshold) and monitors the related process. Time thresholds help in killing an application and proceeding with the next test case as part of the normal fuzzing process.
Analyze
The crash dump and the fuzz data that caused it are taken for analysis at this stage. This is typically taken up by a security researcher and/or a development team with knowledge of vulnerability analysis.
The software tester's job at this stage is to provide the required data to the mentioned team. Based on interest, a tester can learn basic crash dump analysis and be of further help.
Approaches for File Fuzzing
There are many factors which govern the way file fuzzing will be implemented. Some of the key factors one needs to consider are:
Specific File Fuzzer versus General File Fuzzer
Fuzzing a specific file format can be done using quick scripting with no time spent on designing reusable components, but when looking at fuzzing more than one file format using the same tool, framework design has to be considered, and the tool has to be split into classes/functions that can be employed when executing file fuzzing of various sorts.
OS Platform
The type of OS platform has a large impact on the way the tool is designed, because the fuzzer needs to understand how the OS handles the process, what debugging options are available, how resources can be monitored etc.
Data Corruption Method - Generation versus Mutation
At a broad level, a fuzzer can produce fuzz data in two ways: generation and mutation. In generation, the complete protocol is generated from scratch based on the knowledge of the protocol built into the fuzzer. This requires a lot of ground work to be done by reading relevant manuals and analysis. If no such published data is available, you will have to resort to reverse engineering skills, which most of the time is quite a complex task. The advantage is that you get complete control over the protocol and can get good code coverage.
Mutation is about capturing good data and then fuzzing various sections of that data. For this you use a baselined good file for mutating. The advantage is that you can get started with fuzzing efforts quickly, but one must take care of the internal dependencies of the fields and of optimum code coverage.
File Format - Ignoring versus Handling Internal Dependencies
In many protocols there are fields that depend on other fields, e.g. lengths, checksums etc. If you choose to abide by these conditions, the fuzzing process gets a little trickier than otherwise. A suggested way is to build these checks into the fuzzer you build and carry out tests in both ways - breaking the dependencies and abiding by them. This helps to unearth any false assumptions and also makes sure that the correct parsers are hit (of course by increasing the number of test cases executed significantly).
Blind Fuzzing versus Format-aware Fuzzing
A blind fuzzer has no knowledge of the underlying protocol. It is assigned the task of blindly corrupting or generating a data packet and sending it to the application. This is very easy to build, but results in wastage of CPU cycles and time by generating and testing data that is rejected outright, sometimes much before it actually reaches the target. A protocol-aware fuzzer is complex to build, but is more reliable and result-oriented.
Blind fuzzers are usually tied to the mutation approach, and protocol-aware fuzzers are tied to the generation approach (or to the mutation approach while abiding by the dependencies of the fields).
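As a hedged sketch tying these factors together (the 8-byte header layout and field offsets below are invented for illustration), the mutation approach corrupts a known-good baseline file at random positions and, when we choose to abide by internal dependencies, recalculates a declared length field so that the mutated sample still reaches the deeper parsers. Samples produced this way would then be fed to an execute/monitor loop like the one sketched earlier.

    import random
    import struct

    HEADER_SIZE = 8   # assumed layout: bytes 0-3 magic, bytes 4-7 payload length (little endian)

    def mutate(baseline, flips=8, fix_length=True):
        """Flip a few random bytes of a known-good sample; optionally repair the length field."""
        data = bytearray(baseline)
        for _ in range(flips):
            pos = random.randrange(len(data))
            data[pos] = random.randrange(256)
        if fix_length and len(data) >= HEADER_SIZE:
            # Abide by the internal dependency: keep the declared payload length consistent.
            struct.pack_into("<I", data, 4, len(data) - HEADER_SIZE)
        return bytes(data)

    with open("baseline.sample", "rb") as f:      # placeholder name for a captured good file
        baseline = f.read()

    for i in range(1000):
        sample = mutate(baseline, fix_length=(i % 2 == 0))   # test both ways, as suggested above
        with open("fuzzed_%04d.sample" % i, "wb") as out:
            out.write(sample)

Alternating between fixing and breaking the length field follows the suggestion above to test both with and without the internal dependencies intact.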

Conclusion
All in all, file fuzzing (or fuzzing in general) is a good and easy way
to test software for security issues. A software tester can further
contribute in the area by brushing up skills on threat modeling for
analyzing various input vectors and associated threats, code coverage to check the effectiveness of the fuzzing tool, core dump
analysis to understand the cause of the crashes captured, and
vulnerability analysis to associate crashes to a possible vulnerability that could be exploited.
Fuzzing should not be thought of as a replacement for other
forms of testing. It should be a new form of testing added to the
existing tests being conducted.

Definitions, Abbreviations and Acronyms


Fuzzing - Fuzz Testing or Fuzzing is a software testing technique that provides random data ("fuzz") to the inputs of a program. If the program fails (for example, by crashing, or by failing built-in code assertions), the defects can be noted.
Black Box Testing - Testing an application with little or no knowledge of the underlying implementation.
Grey Box Testing - Testing an application with knowledge of the logic / high-level implementation, but not of the exact code.
Threat Modeling - A method of assessing and documenting the security risks with a software application.
Vulnerability - A security exposure in an operating system or other system software or application software component.
Threat - Possibility that a vulnerability may be exploited to cause harm to a system, environment, or personnel.

References
Fuzzing: Brute Force Vulnerability Assessment - A book dedicated to the art of fuzzing by Sutton Michael, Greene Adam, Amini Pedram. Published by Addison-Wesley Professional.
Building a Fuzzing Framework: A Primer for Software Testers - Paper written by Rahul Verma (author of this paper) dealing with building a fuzzing framework. Selected for the TEST2008 Conference.
Wikipedia: Fuzz Testing - http://en.wikipedia.org/wiki/Fuzz_testing
Fuzzing.org: Fuzzing Software - http://www.fuzzing.org/fuzzing-software

> About the author


Rahul Verma
With more than 7 years of experience in the industry, Rahul has explored the areas of security testing, large-scale performance engineering and database migration projects. He currently leads the Anti-Malware Core QA team (MIC Labs) at McAfee India as a Senior Technical Lead. He is a core member of the McAfee Global Performance Testing Team and a Python trainer in the McAfee Automation Club.
Rahul has presented at several conferences and organizations including CONQUEST 2009 (Germany), STePIN, ISQT, TEST2008, Yahoo! India, McAfee, Applabs and STIG. His recent presentations were on the subjects of fuzzing, Performance Engineering COE, web application security, and User Behavior and Performance Perception Analysis (UBPPA). He received the Testing Thought Leadership Award at the TEST2008 conference for his Testing Perspective website (www.testingperspective.com), along with the Best Innovative Paper Award for his paper on the design of fuzzing frameworks. Rahul is a member of the Indian Testing Board and is associated as author/reviewer for Foundation and Advanced Level Certifications by ISTQB.

Rahul holds a B.Tech degree from REC Jalandhar (India).


He has been associated with professional theatre, music, poetry and stage anchoring for more than 12 years.


Column
IT Security Micro Governance
A Practical Alternative
Prof. Dr. Sachar Paulus
Professor for Corporate Security & Risk Management
Just a few days ago, Microsoft had to admit serious security issues in almost all of its web-enabled products, not only in the
browser, but also in e-mail and other productivity applications.
The recommendation of the German Federal Office for Information Security (BSI) was not to use products that use the browsing engine of Microsoft's Internet Explorer, including the browser itself in versions 6, 7 and 8.
Now, this is obviously a real threat to internet technology. Not so much because of the existence of the flaw itself - as most of you surely know, there is no such thing as 100% secure software - but because the internet-enabling of more and more applications adds additional risk. Let me explain this: of course, you can use another browser and other e-mail software, but would you really consider replacing the most standardized office productivity suite? Let alone that when switching applications most of the formatting will be gone. So the answer is probably no - and you will live with the risk of being attacked until the vendor has supplied patches solving the problem.
There is a risk in using different tools for different purposes, simply because there might be more flaws and attack vectors that have to be controlled. But using the same engine in different products is also risky, because patching probably won't happen at the same time for all of them. As long as there is an overview of where these components are used, the risk - though higher - can still be controlled. But as soon as one loses control over the usage of the components, not only the risk but also the probability of a communication crisis increases substantially.
By the way, note that Microsoft did an excellent job in managing
the _discovery_ of the flaw. It was communicated to Microsoft
using responsible disclosure, which is the best way to address
security flaws (the researcher spoke directly to Microsoft and did
not publish it directly, in order not to give potential attackers too
much time to develop attack software). However, this does not
help if one needs a number of weeks to identify in which products
the code is actually used - and consequently the flaw is present.
So the lessons learned from this issue are:


1. Keep track of where code is re-used.


2. Implement a responsible disclosure strategy with your
researcher community.
3. Be able to develop and install patches addressing the same
issue for multiple products simultaneously.
Obviously, developing secure software is more than just performing input encoding and avoiding buffer overruns...

> About the author


Sachar Paulus
is Professor for Corporate
Security and Risk Management in the department for
Business Administration
at Brandenburg University
of Applied Sciences. Sachar Paulus has a Ph.D. in
number theory and several publications on cryptography. He has been in the
business for more than 13
years, 8 of which with SAP, the world's largest business
software manufacturer, where he held various positions
related to security, among others Senior Vice President
Product Security and Chief Security Officer. He was a member of the RISEPTIS advisory board of the EC and a member of ENISA's permanent stakeholder group, and is one of the authors of the Draft Report of the Task Force on Interdisciplinary Research Activities applicable to the Future Internet of the EC. He is also President of ISSECO, a not-for-profit organization aiming at driving secure software development and standardizing qualification around secure software engineering.


Avoiding loss of sensitive information - as simple as 1-2-3
by Peter Davin

Information loss protection is often considered to be a costly pain, and such projects are often given low priority. Even if security measures are required by regional legislation, it is often difficult for companies to know where to start.
What's the problem?
IT security is about risk management: the cost of taking the risk of a major data breach versus the cost of avoiding the risk by implementing a data leak prevention solution. It is quite similar to getting insurance or taking the chance. A breach of confidential information is, however, more than just fines and penalties, as it reflects poorly upon the company's credibility and brings negative media exposure, which amounts to reducing the overall corporate value of the organization. But what happens when a company loses confidential information about individuals, be it employees or clients? Even though strict laws and regulations are in place, many companies have still not implemented any processes to prevent information from being lost or stolen. If the consequences of losing sensitive information are not clear, companies may take on the risk by taking no action until a serious incident has occurred.
What can be done?

Solutions addressing loss of sensitive information usually go under the name of Data Leak Prevention (DLP). DLP simply means making sure that sensitive data does not leave the organization's network unsecured, and that only the right people have access to the right information. In the last few years, companies and organizations have spent huge sums of money with a view to keeping the bad guys out of their networks, investing in firewalls and other filter technologies to protect against hackers, viruses, spam and spyware. A Ponemon study from 2009 shows that employees leaving a company are considered to be one of the largest security threats for organizations. So now it is time to look inward, and to monitor the workflow processes of information within the network and the protection methods used when critical information is stored and/or sent outside the enterprise network.
What to do?

Today, IT directors and security professionals focus their attention on stopping information from leaking out of the network. And
that challenge is much greater compared to inbound protection
issues.
This challenge cannot be solved based on technology solutions
only. Constantly informing and educating employees regarding
the importance of handling information in a secure way will be
necessary. Integrated coaching mechanisms, where employees
will be notified that some actions might expose security risks,
will become more common. An example is where a user sends an email containing confidential information such as bank details or insurance numbers. Content control mechanisms can detect that the content is likely to be sensitive, and suggest or even enforce encryption of the email. Another example might be where a user inserts a USB memory stick into the computer, and the system recognizes that the memory stick needs to be encrypted before letting the user store information on it. This again gives the user the choice of encrypting the memory stick or not using it. In other words, the user is presented with a solution instead of a problem. This type of personalized message, which informs and even educates users, would quickly become ineffective if it remained entirely static. It requires continuous updating of the content so that the user absorbs the information every time and does not just instinctively click past it.
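As a rough sketch of how such a content-control check might look (the patterns below are simplified and purely illustrative, not a complete DLP rule set), outgoing text can be matched against patterns for likely confidential data before the message leaves the network, and the sender can then be prompted to encrypt:

    import re

    # Simplified, illustrative patterns for data that is likely to be sensitive.
    PATTERNS = {
        "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "IBAN":        re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
        "SSN-like id": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def sensitive_matches(text):
        """Return the labels of all patterns found in the outgoing text."""
        return [label for label, rx in PATTERNS.items() if rx.search(text)]

    draft = "Hi, my card number is 4111 1111 1111 1111, please book the trip."
    hits = sensitive_matches(draft)
    if hits:
        print("This message appears to contain: %s. Consider sending it encrypted." % ", ".join(hits))

A real product would of course combine many more detection techniques (dictionaries, fingerprints, document classification), but the coaching principle is the same: detect, explain, and offer encryption as the easy next step.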
Where do we start?
Several of the leading suppliers of DLP solutions have developed
security platforms for companies and organizations so they can
easily find and implement the right product for their needs. Some
of the security providers are focused on developing modular platforms that allow companies and organizations to begin implementing the DLP solutions where they are needed the most. The
user/administrator can gradually let the solution include a growing number of user groups and security modules. With the help
of such a security platform, the company can manage their DLP
solution centrally. A modular security platform is one of the most
popular platform solutions today, since the initial cost is low and
can easily grow as needs increase.
Conclusion
The threats are real, and they can be very costly. However, a simple, cost-effective solution is available. It's just a matter of getting started, and with SEP, getting started has never been easier. So, just as you make sure that you have adequate insurance for your organization to avoid risks, make sure you also have insurance against data leak risks!

> About the author


Peter Davin is CEO of the
Swedish company Cryptzone AB. He has worked
in the software and communications industries in
Scandinavia and the US
for many years and is considered one of the veterans within the field of Data
Leak Prevention/Information Protection.


Peter has considerable experience and knowledge of developing and leading medium-sized businesses and has
helped to launch over 20 companies during his career.
In 2001 he started Secured eMail with the intention
of creating a company which would provide an affordable, easy-to-use, yet technologically advanced solution
for securing email communications. This idea quickly
evolved into many more products and solutions. The
company eventually developed into Cryptzone, which is
now a public company listed on the stock exchange in
Sweden, Stockholm.
In late 2009, Cryptzone acquired the IT security company AppGate Network Security and Peter is currently
working as CEO of both companies.
A graduate in engineering, Peter has an MBA from Penn
State University, USA, and a BA from the University of
Gothenburg, Sweden. He is also a board member of several other companies in Europe.



IT Security Micro Governance - A Practical Alternative
by Ron Lepofsky

Executive Summary
For most organizations, particularly for medium and small institutions, IT Governance is difficult to initiate and maintain as it is
an ongoing process. There are many subject experts, vendors,
and consultants that cater to implementation, but the inherent
difficulties and complexities make the implementation of it an
elusive goal for many.
Since Governance is, by definition, strategic and focused on long timeframes, it is not designed to deal with unexpected and potentially costly IT security threats - threats which can evolve into costly security events. A distraught client once described how a serious access breach within his organization could have been prevented if senior management had evaluated and acted upon his impromptu but appropriate recommendations to harden access controls.
The author proposes a modified process for responding to threats whose mitigation requires funds exceeding the annual IT security budget, and calls this micro Governance.

Definitions of IT Governance
IT Governance is a subset discipline of Corporate Governance focused on information technology (IT) systems and their performance and risk management. Various bodies of authority on the subject publish similar definitions of IT Governance, each with its own emphasis of intent. Four prominent authorities define IT governance on their web sites as follows:
1. ISACA: "provide the leadership, organizational structures and processes that ensure that the enterprise's IT sustains and extends the enterprise's strategies and objectives."
2. ITGI: "an effective IT governance framework that addresses strategic alignment, performance measurement, risk management, value delivery and resource management."
3. Forrester: "The act of establishing IT decision structures, processes, and communication mechanisms in support of the business objectives and tracking progress against fulfilling business obligations efficiently and consistently."
4. MIT Sloan School of Management: "IT governance is the process by which firms align actions with their performance goals and assign accountability for those actions and their outcomes."
The three predominant frameworks for implementing IT Governance are provided by ISACA, ITIL and ISO. In a more granular view, the ISO 38500:2008 guiding principles are organized into three prime sections, specifically Scope, Framework and Guidance. The framework comprises definitions, principles and a model. It sets out six principles for good corporate governance of IT:
Responsibility
Strategy
Acquisition
Performance
Conformance
Human behaviour
Significance of IT Security Governance for Compliance
Compliance violations may attract all manner of liability directly affecting a governance committee, such as fines and confinement for SOX, revocation of interconnection agreements with electrical utilities for NERC CIP, and violation notices from third party auditors for COBIT.
Examples of well-known regulatory frameworks and compliance standards are as follows:
Financial - SOX, Bill 109, Basel II, PCI, SAS 70
Electrical Infrastructure for North America - NERC CIP
Privacy - PIPEDA, Red Flag, GLB
Industry Best Practices - COBIT, ITIL

The Problem Statement
This covers the problems caused by insufficient Governance and the root causes of this problem.
Insufficient IT Governance Impedes the Security Team
In dynamic network environments, security issues can quickly appear where insufficient funds are planned to mitigate new security risks. An active IT Governance process is invaluable to deal with such issues.
Insufficient IT Governance:
Slows decision making.
Inhibits communication of risk and associated potential financial loss between the IT security team and executive management.
Inhibits attaining unplanned, sufficient IT security funding.
Barriers to Implementing IT Governance
Well-known barriers to attaining IT Governance are:
The all-encompassing scope of any Governance is a daunting challenge to face.
It is expensive.
It is time consuming.
IT security risk can be very difficult to quantify.
The executives may find it difficult to request additional funds, particularly where the IT security team has done an excellent job and there are no expensive security vulnerabilities.
A false sense of security makes cost-justifying security budgets difficult.
A Governance committee may get bogged down over confusion between the content of compliance frameworks and the compliance objectives.
Turf wars over accepting / relegating ownership of responsibilities for various aspects of IT compliance.
Maintaining longevity of the IT Governance process.
IT Security Micro Governance as a Practical Alternative
A simplified alternative to the barriers mentioned above creates a bite-sized micro process, which will provide the following value to a corporate entity:
Minimizes the liability of executives with respect to their fiduciary responsibilities for IT Governance.
Facilitates communications between the Governance Body and the IT Security Team regarding cost justification of unplanned or insufficient budget.
Provides a regular opportunity for the Security Team to convey top priorities with requests for expedited executive authorization.
Provides a regular opportunity for executives to convey business priorities that affect IT-related risks directly to those responsible for physically managing those risks.
Minimizes decision time and frustration levels by identifying bite-sized issues.
Steps to Implement IT Micro Governance
1. IT Security should identify the top-priority IT security risk(s) that require immediate decisions / funding by the executive team.
2. Estimate the ROI or potential cost avoidance by mitigating the risk(s).
3. Formally create a micro-Governance process to address the risk(s).
4. Engage a third party advisor to expedite the process.
5. Create a virtual (temporary) team to manage each risk management process.
6. Assign other management and employees as appropriate to the virtual team.
7. Identify a timeline to complete the project.
8. Identify a mechanism to test the degree of success of the mitigation.
9. Identify a timeline to report the degree of success back to the IT Governance Committee.
10. Assess whether ROI or cost avoidance goals were sufficiently met.*
11. Mandate longevity for the micro-Governance process by directing the virtual team to continue monitoring the process and reporting to the Governance Committee.
12. Integrate the process into the IT security operations / administration processes and disband the virtual team.
* It is difficult to obtain data that captures the prevention of a security threat based on a specific action taken. One empirical yet evidentiary-based method is to compare the frequency of similar threats before and after mediation steps are implemented.
To assist with calculating IT security-related risk, ROI / cost avoidance, and residual risk, Governance Committees (and IT security professionals) can contract third party expertise in these matters.
Example Situation
1. A CIO of a fictitious company identifies weak identity management as a significant risk to the privacy and integrity of corporate information, as well as to SOX compliance.
2. The problem has recently arisen due to several factors:
The external corporate auditors introduced new IT audit control points for monitoring unauthorized and attempted unauthorized accesses to critical servers and critical applications.
Corporate cost cutting has caused a reduction in the staff levels of the security administration group.
A cost-cutting reorganization has dramatically changed employees' roles and their needs to access various servers and applications.
The group of recently terminated employees, which includes IT security administrators, has raised the potential threat of malicious activity from ex-employees plus a diminished capacity for the corporation to adequately administer access privileges.
3. There are insufficient funds for a comprehensive upgrade to the identity management infrastructure to ensure reasonable compliance for SOX.
4. The problem is further obfuscated as the lack of any major security breach makes it appear to senior executives that there are no security threats.
5. Nonexistent IT Governance means decision making about the new risk will be delayed until the next year's budget cycle.
The IT Micro-Governance Solution
1. If the corporation does in fact have an IT Governance committee that is amenable to reacting quickly with micro-Governance decisions, then the CIO can identify to the Governance committee the business risks relating to weak identity management.
2. The Governance committee works with the CIO to estimate the cost to the corporation in the event of a security event at $5,000,000 per incident.
3. They build a business case modeled upon the chance of a security event occurring once per year.
a. The CIO estimates the first-year annual cost to technically mitigate the risk at $100,000, and $50,000 annually thereafter.
b. The first-year mitigation cost / annual loss expectation is $100,000 / $5,000,000 or 2%, and 1% thereafter.
c. The Governance committee decides the return is acceptable.
4. The IT Governance committee formally creates a specific task force and IT micro-Governance process to mitigate the identity management risk.
5. They engage a third party advisor to expedite the process, so that an aggressive date of fully tested implementation is 6 months.
6. They appoint virtual team leaders to manage each risk management process. The team leaders are comprised of two members of the IT Governance committee, the CIO, three members of the IT security team, 6 business line managers, a member of HR and a member of the CFO's team. They also have external security consultants and auditors to assist with testing and evaluating the effectiveness of the new process.
7. The virtual team leaders assign other employees to implement the project and to create an ongoing process to monitor, manage, and report on the proposed identity management process.
8. The team creates a detailed project plan to complete the project.
9. The third party consultants and auditors work with the team right from the beginning to design processes and mechanisms to test and report on the degree of success of the new identity management process.
10. The virtual team and IT Governance committee create a schedule for reporting / feedback / direction meetings as oversight for the new process, including:
a. Evaluating the degree of success of the initial implementation.
b. A subset of the virtual team continues to monitor and report to the Governance committee.
c. A third party with expertise in calculating IT security risk is assigned the task of re-evaluating the initial ROI or cost avoidance business model in terms of:
i. Was risk correctly estimated?
ii. Is there an ongoing evaluation of the degree of risk reduction?
iii. Can the new process and its budget be integrated into IT security operations / administration? Can the virtual team be disbanded?
Conclusion
Keep it simple.

Sources of Information - Governance Authorities






ISACA (Information Systems Audit and Control Association) - www.isaca.org
ITGI (IT Governance Institute) - www.itgi.org
Gartner Group - www.gartner.com
IBM - www-935.ibm.com/services/us/index.wss/offering/its/a1031003
SANS (SysAdmin, Audit, Network, Security Institute) - www.sans.org/reading_room/whitepapers/casestudies/corporate_governance_and_information_security_1382
The IT Metrics and Productivity Institute - http://www.itmpi.org/default.aspx?pageid=198
MIT Sloan School of Management - http://web.mit.edu/cisr/working%20papers/cisrwp349.pdf

> About the author


Ron Lepofsky,
B.A.SC. (Mech Eng), CISSP
Owner, ERE Information
Security and Compliance
Auditors. Founder and
President of an information
security audit and compliance company since
2000. The company is
called ERE Information
Security and Compliance
Auditors.
Previously founder and President of a data telecommunications services and product sales company called
PTI Telecommunications, founded in 1989. Graduated
in Mechanical Engineering, University of Toronto. Sales
representative for high tech companies until 1989, for:
Digital Equipment of Canada Ltd., Timeplex Canada Limited, and Data General Canada Ltd.
I contribute articles to publishers of information security, legal, and electrical utility periodicals, and frequently
speak at similarly related conferences.
Specialties:
IT security audits, forensics, server hardening, pen tests,
external vulnerability assessments, gap analysis, network
architecture security, policy, web sites, wireless, USB, employee internet abuse, laptop, firewalls, VPNs, Risk analysis: Compliance audits: SOX security, Bill 198 security,
ISO 17799, CobiT, COSO, ITIL. Privacy audits: PIPEDA,
HIPAA. Perpetual audit / monitoring of network security
and compliance. Writing security policy and procedures.


The CSO's Myopia
by Jordan M. Bonagura

Before reading this article, imagine what it would be like to manage your own company without your customers' data, or imagine what it would be like if your competitors got hold of these data...
Well, it has long been established that data are extremely valuable for companies. Your customer databases and the experience they have acquired through the years are fundamental, and they represent a great competitive advantage in this new corporate era.
With this in mind, we can see the importance of implementing specific policies in order to build a base which will guarantee that these data are safe.
Incidents related to security issues have recently increased to the point that IT management has become more and more complex, and the need for a new kind of professional, the CSO, has emerged.
The CSO has become the person responsible for risk areas and data security, and also for the definition and implementation of the security strategies and policies that the company will implement.

Figure 1

The in-box vision, commonly used at the time of creating these policies, is not enough to encompass the company's entire range of existing vulnerabilities. When we analyze the graphic published by Breach Security Labs in August 2009 in The Web Hacking Incidents Database 2009, which shows the vulnerabilities that were the hackers' favorites during the first half of 2009, we obviously and automatically realize that a high percentage of them come from a particular breach, SQL Injection (19%) - an opening for data theft. I say obviously because, as previously mentioned, data is one of the company's most valuable assets.
What vulnerabilities do hackers use?

Such policies are developed to reduce risks and their impacts, and to limit exposure to liability in all areas.
Figure 1 (top right) shows the direct relation between security enhancement and risk reduction. It shows that the higher the security, the lower the risks.
However, the major questions it addresses do not consider the need for good security professionals or the development of good policies. Every company must go through these steps when it decides to implement or organize such policies.

Source: Breach Security Labs


Such analyses are extremely relevant for a CSO, since they make it possible to enhance and update the logical control mechanisms (firewalls, anti-virus, IDS/IPS, etc.) and thus reduce the risks relating to the well-known breaches that are addressed by the company's established policies. Furthermore, it becomes possible to take into account new ways to exploit these breaches.

Everything sounds perfect now, doesn't it?
Unfortunately not! Let's refer to the Bible, where we find the line about "the foolish man who built his castle on the sand"...
The major problem is that every security policy is developed with an in-box vision, although a large range of well-known breaches exists outside the box. In other words, the ones experiencing the problems are the ones who can't see them.
If the CSO simply relies on his own policy, he will not be able to see what it does not cover, and he will be deceived by his pseudo-security. That is what I call the CSO's myopia. By believing in his defined policy, he thinks he can control the whole thing, when actually he is only controlling his whole policy.
I mean: sometimes we hide the key under the doormat and forget to lock the door...
One of the main problems with this myopia arises when we treat, for example, the risks concerning configuration and administration errors (Configuration/Admin Error, 8%) as in the graphic below.
What vulnerabilities do hackers use?
Source: Adapted from Breach Security Labs
This sort of error, besides being considered a breach in itself, may facilitate the identification and consequent exploitation of other breaches. A practical example is the directory listing of a web server exposing database configuration files.
One specific risk I would like to briefly mention in this context is that there are people in charge of the administration, and people also make mistakes. Some might argue that policies exist for this purpose and that they are there to be carried out precisely by the employees, yet it is worth emphasizing that policies require continuous review just as much as physical and logical mechanisms require updating. And the competent professionals involved in security matters require constant training, too.
Calm down! Not all is lost
It is often difficult to keep the out-of-the-box view 100% of the time when you are dedicated to the in box and, above all, to the idea that everything is under control. A very important recommendation, in my opinion, is to resort to specialized consulting professionals (pentesting), who are experts at analyzing breaches which are still not familiar to the company, as well as the different methods to exploit the ones already considered by your present policy.
Attitudes like this might contribute to reducing the problems coming from this managerial myopia.


Keep alert, keep safe!

> About the author


Jordan M. Bonagura
Jordan M. Bonagura is a
computer scientist, post
graduated in Business
Strategic Management,
Innovation and Teaching
(teaching
methodology
and research). He works
as a business consultant
and researcher in information security with emphasis on new breaches.
He is a lecturer in the area of information technology at various institutions, among them the Brazilian Institute of Advanced Technology (Veris/IBTA).
As a university professor he has conducted in-company training at several nationally recognized organizations, among them the National Institute for Space Research (INPE).

Security@University - Talking about ICT security with two CTOs
by Diego Pérez Martínez and Francisco J. Sampalo Lainz

Universities are very interesting entities in the ICT area. For example, the number of users of their information systems is large: a medium-sized university has between five thousand and thirty thousand users. The number of services provided to these users is also considerable: web applications, e-learning, email, network storage, VPN access, wireless access, VoIP services, mobility services, single sign-on, inter-university services...

Moreover, the technologies and systems that support these services are becoming more complicated. In short: the work multiplies while the team remains basically the same.

SG6 asked the Chief Technology Officers (CTOs) of two Spanish universities for their views on information security and the role it plays in the deployment of their ICT services.

Profile: Name, Degree, Position, Institution

Diego Pérez Martínez, Bachelor of Engineering in Computer Science, Chief Technology Officer (CTO) at the Information and Communication Technologies Service, University of Almería

Francisco J. Sampalo Lainz, Bachelor of Engineering in Computer Science, Chief Technology Officer (CTO) at the Information and Communication Technologies Service, Technical University of Cartagena

University Profile: Name, Web, Students, Professors, Service Staff and ICT Staff

University of Almería: www.ua.es, 12,500 students, 1,000 professors, 500 service staff and 60 ICT staff

Technical University of Cartagena: www.upct.es, 6,500 students, 550 professors, 375 service staff and 22 ICT staff

1. What are the main problems in the ICT area in an institution like yours?

The University of Almería, like all Spanish universities, faces a growing demand for services from its users, and for these services the users demand ever more availability, security and quality. Despite the moment of crisis we find ourselves in, ICT financing for the University of Almería is growing thanks to the awareness of the Andalusian Government and the management team of the University, so money is not a problem at this time.

In my opinion, there are three main problems with which the ICT area is struggling at the moment.

First. Alignment with the goals and priorities of the University. It's essential to reach an agreement that allows the efforts and resources devoted to the ICT area to be directed so that the strategic objectives of the University can be met, and also to show managers the competitive advantages that ICT can bring.
Second. Interaction and communication with users. "Customer service", sometimes in the management of incidents, other times in requests for new developments, is probably the aspect we must spend most time on. So it's important to improve communication and information with our users: training them in the use of computers and showing them the ICT services available to them. These actions will improve both the quality of service and the possibilities that users will discover.
Third. Quality and security in service deployment. Many times, and for various reasons, rapid and/or inexpensive deployments are performed. In such cases, the risk associated with offering an insecure or poorly developed service is not assessed. It's essential to devote sufficient time to the analysis and design phases, ensuring quality and security in all phases of deployment. It is also necessary to explain to managers that this additional cost is ultimately profitable.
Besides these three problem areas, there are also other issues
of a more internal or technical nature. For example: the lack of

technical staff to support the demands of the University, the internal organization of the ICT staff, the training of staff, the short life cycle of current software systems, ...
Note that I don't speak about money, which is a problem common to all areas of the organization ;-)
2. What are the main challenges?
Spanish universities are facing what is probably their biggest challenge in recent decades: adapting the university, and particularly its information systems, to the requirements of the European Higher Education Area. The contents and the lecturers are no longer the center of the educational process. The focus of attention is shifting towards the student.
Moreover, today's students are very different from those of a few years ago. Most of them have a computer, they have significant ICT skills, they participate in social networks, etc. The university has to adapt to this new kind of student; it needs to know how to approach them through Web 2.0 tools.
The challenge will be to find good solutions (I am not saying the best solutions) to the three problems raised above.
Also, I think another major challenge is to seek partnerships with other ICT teams, whether from universities or not, to develop common solutions to common problems.
3. Is information security one of the problems, or is it perhaps more of a challenge?
I do not know whether it is the right view or not, but information security is perceived basically as a problem.
In my opinion, security is inherent to the development of any service; logically, depending on the criticality of the service or of the data being handled, we must give it a higher or lower security level.
Besides, I think by now we all know that security is not just a technical problem; we must also take into account the role of the users, and we have to ask them for that effort as well.
4. If you do not mind my asking, what are your goals in information security?
Guaranteeing the availability, integrity and confidentiality of information is a fundamental obligation of the ICT department of a university. I do not perceive security itself as a goal, but as a tool. The goal is to support existing systems (and implement new ones) that provide the functionality they should in an efficient, effective and, of course, secure way.
In line with what I said before, we could state the following goals: include security considerations in the design and specification phases of services, provide training for users, and cooperate/coordinate with other agencies (for example, IRIS-CERT).


5. What direct experience have you had with information security?


As I mentioned before, security is taken into account in the design and implementation of all new information systems.
Moreover, we are aware that security requires specialization, which makes it difficult to have genuinely trained and up-to-date specialists on your own team. Furthermore, the team that designed and developed a system is not, in my opinion, the most appropriate team to audit its security. Even if they try to be totally impartial, a bias remains that is impossible to avoid. For this reason the Information and Communication Technologies Service decided last year to carry out two security audits through external companies.
The ICT service of the University of Almería cooperates with external entities like CICA or RedIRIS in the resolution of incidents in which a team of our University might be involved.
I suppose that this question refers to actions aimed at the treatment of security (and not to security incidents). In this case we can state the following:
Coordination with other institutions and support for forums (RedIRIS).
Implementation and integration of open-source security tools (Snort, Nagios, etc.) on top of the OSSIM platform, which we use as our platform for managing security incidents (see the sketch after this list).
Security audits of some of our public services, carried out by SG6 during 2008.
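To give a flavour of this kind of integration, the sketch below shows one possible way to normalize IDS alerts and forward them to a central collector over syslog. It is a minimal illustration only, assuming Snort's default "fast" alert format and a hypothetical collector host (ossim.example.edu); it does not describe the University of Almería's actual OSSIM configuration.

# Illustrative sketch only (not the University's actual setup): normalize
# Snort "fast" alerts and forward them to a central SIEM collector over
# syslog. COLLECTOR_HOST and the alert file path are assumptions.
import logging
import logging.handlers
import re

COLLECTOR_HOST = "ossim.example.edu"   # hypothetical collector address
COLLECTOR_PORT = 514                   # standard syslog UDP port

# Matches a typical Snort fast-alert line, e.g.:
# 01/15-22:26:04.877970 [**] [1:2003:8] Worm propagation [**]
#   [Priority: 2] {UDP} 203.0.113.5:1434 -> 10.0.0.12:1434
ALERT_RE = re.compile(
    r"\[\*\*\]\s*\[(?P<sid>[\d:]+)\]\s*(?P<msg>.*?)\s*\[\*\*\].*?"
    r"\{(?P<proto>\w+)\}\s*(?P<src>[\d.]+):?\d*\s*->\s*(?P<dst>[\d.]+):?\d*"
)

def build_logger():
    """Return a logger that ships events to the central collector via syslog."""
    handler = logging.handlers.SysLogHandler(address=(COLLECTOR_HOST, COLLECTOR_PORT))
    logger = logging.getLogger("ids-forwarder")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    return logger

def forward_alerts(alert_file, logger):
    """Read Snort fast alerts and forward one normalized summary per event."""
    with open(alert_file) as handle:
        for line in handle:
            match = ALERT_RE.search(line)
            if match:
                logger.info(
                    "snort sid=%s proto=%s src=%s dst=%s msg=%s",
                    match.group("sid"), match.group("proto"),
                    match.group("src"), match.group("dst"), match.group("msg"),
                )

if __name__ == "__main__":
    forward_alerts("/var/log/snort/alert", build_logger())

In a real deployment, OSSIM's own collection plugins would typically take the place of a hand-written forwarder like this; the sketch simply illustrates the normalize-and-centralize idea behind such an integration.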
6. How do you rate these experiences? Which was the most positive? Is there a negative side?
The experience with the company SG6, which carried out the audits of our network and virtual campus, was absolutely satisfactory. In my experience, everything is positive: security holes are discovered, anticipating potential problems; in an indirect way the technicians get trained; and a security culture is formed which permeates the ICT Service.
I can't mention anything negative. It is an experience which we will repeat.
All our experiences have been positive; perhaps the only negative thing we could find, in connection with the implementation of OSSIM, was the limited documentation and experience available for some open-source platforms, and the consequent difficulty in integrating different products.
7. If you had to give advice to a person who has never had any experience with information security, what would it be?
Humility. They should be aware that even if you are working very well, there will always be a bug or mistake through which the system can be attacked, and that an external audit carried out by good professionals is an opportunity, not a threat.
They should open their minds and accept that nobody is perfect, and work in cooperation, either with organizations in the same sector or with trusted technology partners.
8. What do you associate with each of the following concepts?
Penetration test? A necessity.
LOPD (Spanish Data Protection Law)? A danger for companies, even if they act in good faith.
ISO 27001? Something desirable.
Penetration test? A tool that provides information about the security status of the services; it must be used with a lot of caution and with guarantees.
LOPD (Spanish Data Protection Law)? I cannot say much: compulsory legislation that at least once a year makes us review our infrastructure, our practices, our organization and the persons in charge of the data.
ISO 27001? A security standard for information systems; to be honest, even if it is wrong to say so, I have not read it.
9. Do you think that an organization like yours runs the risk of suffering an intrusion?
Yes. Universities, by their very nature, are organizations with thousands of very heterogeneous users, and where one thing is of prime importance: availability.


Every one of us has read in the newspapers about security incidents in which universities are involved: in some cases as direct victims, because their information systems were attacked; in other cases as indirect victims, when their good name appears in connection with attacks launched from the university.
Of course, who doesn't?
10. Finally, looking ahead, what plans does your institution have in relation to information security for 2010?
First, finish implementing all the improvement actions arising from the past audits.
Second, continue carrying out partial security audits of our information systems.
Third, carry out the external audit that the Data Protection Law (LOPD) requires of us.
Fourth, study the possibility of obtaining ISO 27001 certification.
Besides meeting the challenges we have mentioned above, we would like to secure the electronic administration services we are going to provide, as well as reviewing the matters relating to compliance with the LOPD and adaptation to the new regulation (RD 1720/2007).

Masthead
EDITOR
Díaz & Hilterscheid
Unternehmensberatung GmbH
Kurfürstendamm 179
10707 Berlin, Germany
Phone: +49 (0)30 74 76 28-0

Fax: +49 (0)30 74 76 28-99

E-Mail: info@diazhilterscheid.de

Díaz & Hilterscheid is a member of Verband der Zeitschriftenverleger Berlin-Brandenburg e.V.


EDITORIAL
José Díaz

LAYOUT & DESIGN


Frenkelson Werbeagentur
WEBSITE
www.securityacts.com
ARTICLES & AUTHORS
editorial@securityacts.com
ADVERTISEMENTS
sales@securityacts.com
PRICE
online version: free of charge
ISSN
ISSN 1869-4977

In all publications Díaz & Hilterscheid Unternehmensberatung GmbH makes every effort to respect the copyright of graphics and texts used, to make use of its own graphics and texts and to utilise public domain graphics and texts.
All brands and trademarks mentioned, where applicable registered by third parties, are subject without restriction to the provisions of ruling labelling legislation and the rights of ownership of the registered owners. The mere mention of a trademark in no way allows the conclusion to be drawn that it is not protected by the rights of third parties.
The copyright for published material created by Díaz & Hilterscheid Unternehmensberatung GmbH remains the author's property. The duplication or use of such graphics or texts in other electronic or printed media is not permitted without the express consent of Díaz & Hilterscheid Unternehmensberatung GmbH.
The opinions expressed within the articles and contents herein do not necessarily express those of the publisher. Only the authors are responsible
for the content of their articles.
No material in this publication may be reproduced in any form without permission. Reprints of individual articles available.

Index Of Advertisers


Díaz & Hilterscheid GmbH: 16, 23, 36, 44
Cabildo de Gran Canaria: 43
iSQI
SELA: 32
Kanzlei Hilterscheid: 29


Training with a View
Also onsite training worldwide in German, English, Spanish, French at http://training.diazhilterscheid.com/
training@diazhilterscheid.com
"A casual lecture style by Mr. Lieblang, and dry, incisive comments in-between. My attention was correspondingly high. With this preparation the exam was easy."
Mirko Gossler, T-Systems Multimedia Solutions GmbH
"Thanks for the entertaining introduction to a complex topic and the thorough preparation for the certification. Who would have thought that ravens and cockroaches can be so important in software testing ..."
Gerlinde Suling, Siemens AG

Kurfürstendamm, Berlin Katrin Schülke

- subject to modifications -

08.02.10-10.02.10  Certified Tester Foundation Level - Kompaktkurs  Berlin
15.02.10-19.02.10  Certified Tester - TECHNICAL TEST ANALYST  Berlin
22.02.10-25.02.10  Certified Tester Foundation Level  Frankfurt am Main
22.02.10-26.02.10  Certified Tester Advanced Level - TESTMANAGER  Düsseldorf/Ratingen
24.02.10-26.02.10  Certified Professional for Requirements Engineering - Foundation Level  Berlin
01.03.10-03.03.10  ISSECO - Certified Professional for Secure Software Engineering  Berlin
08.03.10-10.03.10  Certified Tester Foundation Level - Kompaktkurs  München
15.03.10-17.03.10  Certified Tester Foundation Level - Kompaktkurs  Berlin
15.03.10-19.03.10  Certified Tester Advanced Level - TEST ANALYST  Düsseldorf
22.03.10-26.03.10  Certified Tester Advanced Level - TESTMANAGER  Berlin
12.04.10-15.04.10  Certified Tester Foundation Level  Berlin
19.04.10-21.04.10  Certified Tester Foundation Level - Kompaktkurs  Hamburg
21.04.10-23.04.10  Certified Professional for Requirements Engineering - Foundation Level  Berlin
28.04.10-30.04.10  Certified Tester Foundation Level - Kompaktkurs  Düsseldorf
03.05.10-07.05.10  Certified Tester Advanced Level - TESTMANAGER  Frankfurt am Main
03.05.10-07.05.10  Certified Tester - TECHNICAL TEST ANALYST  Berlin
10.05.10-12.05.10  Certified Tester Foundation Level - Kompaktkurs  Berlin
17.05.10-21.05.10  Certified Tester Advanced Level - TEST ANALYST  Berlin
07.06.10-09.06.10  Certified Tester Foundation Level - Kompaktkurs  Hannover
09.06.10-11.06.10  Certified Professional for Requirements Engineering - Foundation Level  Berlin
14.06.10-18.06.10  Certified Tester Advanced Level - TESTMANAGER  Berlin
21.06.10-24.06.10  Certified Tester Foundation Level  Dresden
