
Information Security Risk Management

Verification, Validation, and Evaluation in Information Security Risk Management


By surveying verification, validation, and evaluation methods referenced in information security risk management (ISRM) literature, the authors discuss in which ISRM phases particular methods should be applied and demonstrate appropriate methods with a real-world example.
Stefan Fenz, Vienna University of Technology
Andreas Ekelhart, SBA Research

Organizations that make extensive use of information technologies can be more efficient and productive. However, this ever-growing dependence on IT also leads to a dramatic increase in expensive information security incidents and failures.1 The US National Institute of Standards and Technology (NIST) defines information security risk management (ISRM) as a process that allows IT managers to balance the operational and economic costs of protective measures and achieve gains in mission capability by protecting the IT systems and data that support their organizations' missions.2 Accordingly, ISRM focuses on risks that can emerge from IT systems and data. (NIST defines risk as a function of the likelihood of a given threat source exercising a particular potential vulnerability and the resulting impact of that adverse event on the organization.2) Typical threats that might put an organization's mission at risk include fraud, erroneous decisions, loss of productive time, data inaccuracy, unauthorized data disclosure, and ultimately, loss of public confidence.

Since the 1970s, researchers have proposed several approaches to managing information security risks. The focus has been on ISRM as a whole and on related areas such as threat and vulnerability identification, risk calculation,3 and cost-benefit analysis of potential controls.4 (NIST defines vulnerability as a flaw or weakness in system security procedures, design, implementation, or internal controls that could be exercised and result in a security breach or violation of the system's security policy.2 Controls refer to risk-reducing measures.2)

Even after 40 years, ISRM remains a topic of ongoing research with several challenges.5 Most ISRM-related research aims to improve ISRM, but there's still a considerable lack of thorough verification, validation, and evaluation of the developed approaches and their implementation. One of the paramount problems in ISRM is that exact figures for probabilities, control effectiveness, and resulting loss can't be determined.6,7 Many potential threats are rare or depend on numerous parameters, which makes it impossible to provide reliable occurrence data. Nevertheless, current ISRM approaches generate solutions based on these uncertain input data. Due to this high degree of uncertainty, there's a risk that organizations might invest in inefficient security measures or ignore substantial threats. Moreover, organizations might believe the results are accurate and be unaware of their inherent uncertainty. At worst, organizations will only start to question their risk management approaches when already faced with substantial financial or even human loss.

Methodologically sound and comparable verification, validation, and evaluation results are crucial for measuring and understanding the implications of applied ISRM approaches. Only then is it possible to trust that security levels meet our expectations and to know whether security investments pay off.


Faced with numerous security options, ranging from complex encryption algorithms to human-resource management and legislation, organizations struggle when it comes to choosing and implementing the optimal (in terms of risk reduction and cost efficiency) set of controls for their environment.1 The central question for companies and experts alike is this: Which methods should we use in each phase to verify, validate, and evaluate ISRM approaches? We've compiled data by reviewing the available literature in ISRM-relevant, high-quality information systems journals. We also discuss each method's possibilities and limitations.

ISRM: Definitions and Phases

Based on NIST Special Publication 500-234 and other prior work,8,9 we use the following standard definitions throughout this article. Verification is the process of checking whether the proposed solution complies with the initial specification (for example, are the internal calculations correct?). Validation is the process of checking whether the proposed solution satisfies its expected requirements (for example, is the overall output correct?). Evaluation is the process of determining the significance, worth, or condition of the proposed solution (for example, can we identify environmental changes caused by the proposed solution?).

We analyzed ISRM phases and their output to derive a generic ISRM methodology (see Table 1). This model is based on a comparison of the ISRM methodologies CRAMM, NIST SP 800-30, OCTAVE, EBIOS, and ISO 27005. We selected these methodologies based on the European Network and Information Security Agency (ENISA) Risk Management Methodology Assessment (see http://rm-inv.enisa.europa.eu/rm_ra_methods.html), which covers 13 methodologies, from which we selected a mix of five commonly used international, US, and European methodologies. Because these ISRM methodologies have many features in common and only a few differences, we were able to derive a generic ISRM view, which we list in the first column of Table 1.

Potential Verification, Validation, and Evaluation Methods

We conducted a literature review of the field's major journals and magazines to gain an overview of available ISRM verification, validation, and evaluation methods. In addition to the top three IS research journals (MIS Quarterly, Information Systems Research, and the Journal of Management Information Systems), we analyzed the Communications of the ACM and IEEE Security & Privacy.10

Our review included ISRM-related articles published in these journals between 1990 and 2008. We grouped the methods used in these articles by approach and analyzed them with a view to applicability for individual ISRM phases.

Verification

We identified three methods used to verify the correctness of internal calculations: sensitivity analysis, internal results comparison, and simulation.

Sensitivity analysis. Sensitivity analysis involves testing the impact of input parameters on the model's results. By experimenting with ranges of parameter values, the modeler gains insights into system behavior and can identify weaknesses. Lili Sun, Rajendra Srivastava, and Theodore Mock suggested this method could be used to verify their methodology for information systems security (ISS) risk assessment, which uses an evidential reasoning approach under the Dempster-Shafer theory of belief functions.11 They defined ISS risk by the plausibility of ISS failures and used sensitivity analysis to evaluate the impact of parameters on the model's results. Ram Kumar, Sungjune Park, and Chandrasekar Subramaniam also applied sensitivity analysis in a simulation model for an information systems security countermeasure (ISSC) portfolio value.12 Their model combined risk assessment, disaster recovery, and countermeasure portfolio perspectives.
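As a minimal illustration of the technique (not of the belief-function or portfolio models just cited), the following Python sketch perturbs each input of a toy residual-risk formula by 10 percent and reports how strongly the output reacts; the formula and parameter values are assumptions chosen for the example:

```python
def residual_risk(threat_prob, vuln_prob, control_eff, impact):
    """Toy model: expected loss = threat probability x vulnerability probability
    x (1 - control effectiveness) x impact."""
    return threat_prob * vuln_prob * (1.0 - control_eff) * impact

# Assumed baseline parameter values for the example.
baseline = {"threat_prob": 0.2, "vuln_prob": 0.5, "control_eff": 0.8, "impact": 100_000}
base_value = residual_risk(**baseline)

# One-at-a-time sensitivity analysis: increase each parameter by 10 percent
# and record the relative change in the computed risk.
for name, value in baseline.items():
    perturbed = dict(baseline, **{name: value * 1.1})
    change = (residual_risk(**perturbed) - base_value) / base_value
    print(f"{name:12s} +10% input -> {change:+.1%} risk")
```

In this toy model, the output reacts most strongly to the control-effectiveness estimate, which is exactly the kind of insight a modeler looks for when identifying weaknesses.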
Internal results comparison. Internal results com-

Potential Verification, Validation, and Evaluation Methods


We conducted a literature review of the fields major journals and magazines to gain an overview of available ISRM verification, validation, and evaluation methods. In addition to the top-three IS research journalsMIS Quarterly, Information Systems Research, and the Journal of Management Information Systemswe analyzed the Communications of the ACM and IEEE Security & Privacy.10

parison involves manual paper-and-pencil calculations of the same problems (due to the expensive manual work, this method is generally applied only to small test cases), results calculated with other approaches, results generated by using other programs, or results of different code versions. Martin Feather and his colleagues present an internal results comparison method that uses the Defect Detection and Prevention (DDP) tool,13 which NASA uses as a risk-informed requirements engineering tool to aid early-life-cycle decision-making. The tool verification is conducted by comparing the DDPs results with traditional paper-and-pencil calculations, comparing DDP results with risk calculations done by other programs, comparing current DDP results with those computed by earlier DDP versions, and running internal consistency checks.
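The following sketch shows what such a comparison might look like in its simplest form; the test cases, values, and tolerance are hypothetical:

```python
# Hypothetical tool output for three small test cases, compared against
# values computed independently by hand (the paper-and-pencil reference).
tool_results = {"case-1": 0.030, "case-2": 0.125, "case-3": 0.480}
manual_results = {"case-1": 0.030, "case-2": 0.125, "case-3": 0.475}

TOLERANCE = 1e-3  # acceptable numerical deviation between the two calculations

for case, expected in manual_results.items():
    actual = tool_results[case]
    status = "OK" if abs(actual - expected) <= TOLERANCE else "MISMATCH"
    print(f"{case}: tool={actual:.3f} manual={expected:.3f} -> {status}")
```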
Simulation. Simulation involves testing an internal mathematical model's results by generating a high number of random input values (as in Monte Carlo simulation).

Table 1. Information security risk management (ISRM) phase mapping.

System characterization (output: inventory list of assets to be protected, including their acceptable risk level)
- CRAMM: Asset identification
- NIST SP 800-30: System characterization
- OCTAVE: Identify critical assets and corresponding security requirements; identify current security practices
- EBIOS: Organizational study; target system study; determination of the security study target; expression of security needs
- ISO 27005: Asset identification

Threat and vulnerability assessment (output: list of threats and corresponding vulnerabilities endangering the identified assets)
- CRAMM: Threat assessment; vulnerability assessment
- NIST SP 800-30: Threat identification; vulnerability identification; control analysis
- OCTAVE: Identify threats and organizational vulnerabilities; identify current technology vulnerabilities
- EBIOS: Study threat sources; study vulnerabilities; formalize threats
- ISO 27005: Identify threats; identify vulnerabilities

Risk determination (output: quantitative or qualitative risk figures and levels for identified threats; input: threat probability and magnitude of impact)
- CRAMM: Asset valuation; risk assessment
- NIST SP 800-30: Likelihood determination; impact analysis; risk determination
- OCTAVE: Risk determination for critical assets
- EBIOS: Compare threats with needs (risk determination)
- ISO 27005: Identify impact; assess threat likelihood; assess vulnerability likelihood; risk estimation

Control identification (output: list of potential controls that can mitigate the risks to an acceptable level)
- CRAMM: Countermeasure selection
- NIST SP 800-30: Control recommendations
- OCTAVE: Identify risk measures
- EBIOS: Formalize security objectives
- ISO 27005: Evaluate existing and planned controls

Control evaluation and implementation (output: list of cost-efficient controls that have to be implemented to reduce the risk to an acceptable level)
- CRAMM: Countermeasure recommendation
- NIST SP 800-30: Control evaluation; cost-benefit analysis; control selection; safeguard implementation plan development; control implementation
- OCTAVE: Protection strategy development; risk mitigation plan development
- EBIOS: Determine security levels; determine security requirements; determine security assurance requirements
- ISO 27005: Information security risk treatment (risk avoidance, risk transfer, risk reduction, or risk retention)
Mehmet Sahinoglu proposed a decision-tree model to quantify risks.14 He argued that the widely used qualitative models for measuring security risks are subjective, difficult to interpret, and lack a probabilistic framework, which might lead to expensive misjudgments in decision-making (due to over- or underestimation). He created a numerical range for vulnerabilities (probability of exploitation), threats (probability of occurrence), and countermeasures (countermeasure quality) in percentage values between 0 and 100. The residual risk is multiplied by a criticality value to obtain the final risk figure. Sahinoglu verified the model's mathematical accuracy with a Monte Carlo simulation.
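The sketch below illustrates the general idea of such a verification on a simplified security-meter-style formula (not Sahinoglu's actual implementation); the parameter values are assumptions, and the simulation should converge to the closed-form result:

```python
import random

def analytic_risk(vuln, threat, cm_quality, criticality):
    """Closed-form residual risk: vulnerability x threat x lack of countermeasure,
    scaled by criticality (all inputs expressed as fractions in [0, 1])."""
    return vuln * threat * (1.0 - cm_quality) * criticality

def simulated_risk(vuln, threat, cm_quality, criticality, trials=200_000):
    """Monte Carlo estimate of the same quantity: sample whether the threat occurs,
    the vulnerability is exploited, and the countermeasure fails."""
    hits = sum(
        1
        for _ in range(trials)
        if random.random() < threat
        and random.random() < vuln
        and random.random() >= cm_quality
    )
    return (hits / trials) * criticality

analytic = analytic_risk(vuln=0.30, threat=0.20, cm_quality=0.75, criticality=0.8)
simulated = simulated_risk(vuln=0.30, threat=0.20, cm_quality=0.75, criticality=0.8)
print(f"analytic = {analytic:.4f}, simulated = {simulated:.4f}")
# Verification passes if the two values agree within the expected sampling error.
assert abs(analytic - simulated) < 0.002
```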

Validation
To validate results, researchers have proposed using experts, alternate decision processes, and statistical evidence.
Experts. Nominating technology experts involves asking them to compare the findings of an applied approach with the results they would expect based on their intuition. An expert team or panel can consist of in-house and external members with knowledge and experience in the ISRM domain or the applied ISRM approach. This method is opinion-based, but it's still one of the few ways to validate ISRM results by accounting for various real-world parameters. In the case of discrepancies, the experts should be presented with the details of how the results were calculated; letting them recheck the results could persuade the experts of their correctness. Sun, Srivastava, and Mock also suggested using this method but didn't apply it.11 Feather and colleagues supplied an example application of this validation method;13 they conducted the DDP's result set validation by checking whether the findings usually agreed with the experts' intuition. In addition, they suggested two methods for a future and deeper validation of the result sets: conducting several long-term studies to strengthen the a priori likelihoods used, and independently running an alternate decision-making process on the same problem alongside the DDP tool.
Alternate decision processes. The approach Feather and colleagues proposed consisted of also running at least one alternate decision process on the exact same problem.13 To gain independent results, separate groups of experts should be involved in each study. Considering that it's impossible to objectively calculate risk values or to present the best solution for mitigating detected risks, a comparison of results gained with similar approaches is one way of establishing trust in the results. Identifying a decision process that handles the same input parameters and creates comparable results is no trivial matter. The subsequent comparison and interpretation of the result sets is another difficult endeavor and requires expert teams.

Statistical evidence. Using statistical evidence involves conducting or monitoring several studies to gather statistical evidence of a priori likelihoods. Depending on the target domain, a single study or a restricted time frame might be insufficient to predict the likelihoods of future events. As we already mentioned, correct likelihood values are a fundamental precondition for correct results. A priori values are available in internal reports, local crime statistics, fire reports, insurance data, and so forth. However, although these sources provide useful data for certain cases, no such data can be gathered for rare or very specific threats. Furthermore, although data for threats that lead to a loss of reputation (such as data loss and fraud) are available internally, organizations often refrain from making these data publicly available. In addition to Feather and colleagues,13 Sun, Srivastava, and Mock11 also suggested field studies to collect statistical evidence as a way to validate their results, but they didn't apply that approach in the paper under review.
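A minimal sketch of how such historical records might be turned into an a priori likelihood estimate appears below; the incident counts are hypothetical, and the Poisson assumption is a simplification:

```python
import math

def annual_rate(incident_counts):
    """Maximum-likelihood estimate of the yearly occurrence rate from
    per-year incident counts (assumes incidents follow a Poisson process)."""
    return sum(incident_counts) / len(incident_counts)

def rate_interval(incident_counts, z=1.96):
    """Approximate 95% confidence interval for the yearly rate, based on the
    normal approximation to the Poisson distribution."""
    n_years = len(incident_counts)
    rate = annual_rate(incident_counts)
    half_width = z * math.sqrt(rate / n_years)
    return max(0.0, rate - half_width), rate + half_width

# Hypothetical counts of phishing-related incidents over six years.
history = [3, 5, 2, 4, 6, 4]
low, high = rate_interval(history)
print(f"estimated rate: {annual_rate(history):.2f}/year "
      f"(95% CI roughly {low:.2f} to {high:.2f})")
```

The width of such an interval makes the residual uncertainty explicit, which is precisely why a single short study is rarely sufficient.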

Evaluation

In general, computer science is considered a positivist science characterized by the use of inductive natural science methods to generate authentic factual knowledge. According to Richard Baskerville,6 ISRM fails to meet positivist scientific criteria in two areas: input data such as probabilities and outage costs are estimated and can't be precisely measured, and the statistical models used to transform input data into risk values are too abstract to represent the essential attributes of the subject under study. Baskerville argues that ISRM's main benefit isn't its output in the form of predictive statistics.6 Rather, we can see it as a communication tool: it transforms and reduces highly specialized information security knowledge to fictitious monetary values that are compatible with the mindset of investment decision-makers at the management level. Steve Smithson and Rudy Hirschheim also pointed out that evaluation provides the basic feedback function to managers and forms a fundamental component of the organizational learning process.15 Evaluation is essential for problem diagnosis, planning, and reducing uncertainty. With a focus on the feedback function, reliable evaluation of conducted ISRM initiatives can justify new ISRM investments to senior management. Based on these assumptions, potential evaluation methods include management decision behavior analysis and assessment of corresponding changes in control quality.

Management decision behavior analysis. An example of analyzing management decisions is tracking changes in decisions over time in combination with awareness programs or the introduction of ISRM approaches. Potential observable parameters include, but are not limited to, the costs of IT security investments, the complexity of IT security investments, countermeasure quality, and the organization-wide IT security awareness level. Detmar Straub and Richard Welke applied management decision behavior analysis in comparative qualitative studies in two information services firms to identify a way to address the problem of risk mitigation plans of low effectiveness.16 Usually, such plans are approved by managers who are generally unaware of the full range of actions they could take to reduce risks. The identified approach to raise managers' awareness includes using a security risk planning model, training in security awareness, and using a countermeasure matrix analysis. The authors studied managerial responses to security situations using a mixed-method approach consisting of action research and interviews. They found that introducing a countermeasure matrix, in particular, led to managers pursuing more complex, multitiered security options. Michel Benaroch, Yossi Lichtenstein, and Karl Robinson also used this method to evaluate risk management plans developed in the course of their study, in which they empirically tested whether IT managers follow the logic of option-based risk management.17 They analyzed the risk management plans for 50 real IT investments developed by experienced managers. The findings indicate that IT managers' thinking and intuition correspond well to the logic of option-based risk management, which seeks to optimally control risk and maximize investment value by finding the most cost-effective combination of real options (such as stage, prototype, lease, or outsourcing options).17

Table 2. Information security risk management (ISRM) literature review results.

- Straub and Welke16: evaluation (management decision behavior analysis)
- Feather and colleagues13: verification (internal results comparison); validation (experts)
- Sahinoglu14: verification (simulation)
- Sun, Srivastava, and Mock11: verification (sensitivity analysis)
- Benaroch, Lichtenstein, and Robinson17: evaluation (management decision behavior analysis)
- Baker and Wallace1: evaluation (control quality assessment)
- Kumar, Park, and Subramaniam12: verification (sensitivity analysis)
Control quality assessment. Investigating the quality of implemented controls as an indicator for applied ISRM is part of a control quality assessment. Comparing survey results of different organizations, or of one organization over time, makes it possible to draw conclusions about the ISRM used. Wade Baker and Linda Wallace conducted a study among information security executives, managers, and technical specialists to evaluate the quality of security controls, which they categorized as technical, operational, or management controls.1 A group of 10 security experts from industry and academia assisted in identifying 80 security controls for the survey, derived from several international standards and regulations, such as British Standard 7799, NIST SP 800-53, and the Gramm-Leach-Bliley Act of 1999. Participants rated their current control implementation quality for each of the 80 practices on a seven-point scale. Based on the responses, the 80 controls were sorted by their rated quality. The authors then analyzed variations by organizational size and by industry, as well as the effect of control quality on the number of reported incidents.
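The following sketch shows, with hypothetical controls and ratings, how such survey responses could be aggregated and ranked; it illustrates the general idea, not Baker and Wallace's instrument:

```python
from statistics import mean

# Hypothetical 7-point ratings (1 = poor, 7 = excellent) from several
# respondents for a handful of controls; real surveys such as Baker and
# Wallace's cover 80 controls and many organizations.
ratings = {
    "Access control policy": [6, 5, 7, 6],
    "Security awareness training": [3, 4, 2, 3],
    "Incident response plan": [5, 4, 5, 6],
    "Backup and recovery": [7, 6, 6, 7],
}

# Sort controls by mean rated quality, weakest first, to highlight
# where an ISRM program is apparently underperforming.
ranked = sorted(ratings.items(), key=lambda item: mean(item[1]))

for control, scores in ranked:
    print(f"{mean(scores):.2f}  {control}")
```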

Review Results

Table 2 gives an overview of the verification, validation, and evaluation methods applied in the existing ISRM literature. Of these seven articles, four conducted a verification of their approaches, only one discussed the validity of its approach, and three included a section on evaluation. None of the articles covers all three phases.

The requirements against which a system must be tested are usually explicitly defined and formally stated. This makes it possible to objectively verify whether internal calculations deliver correct (in terms of the specification) results, and such tests are, in fact, often conducted automatically. This typically covers risk calculation algorithms and formulas.

According to Baskerville6 and George Cybenko,7 one of the elemental problems in ISRM is the uncertainty in probabilities and loss figures. Some potentially harmful events occur so rarely that no reliable occurrence data exist. In addition, events can depend on too many parameters, inside and outside an organization, that can't be calculated. Therefore, validating ISRM approaches, that is, telling whether their results are realistic and significant, is the most difficult of these three phases. None of the reviewed articles provides a mathematical model for solving this issue. Instead, some use expert teams as a possible option to validate results. Although this method is subjective, it can, when used correctly, establish trust or raise well-grounded doubts about the validity of the results under review.

The literature review showed that when it comes to evaluating ISRM, social research methods such as qualitative interviews and standardized questionnaires are used to obtain data. By using these well-established research methods, researchers can measure the impact of ISRM on any given organization.

How to Verify, Validate, and Evaluate ISRM Phases

Table 3. Information security risk management (ISRM) phase-specific verification, validation, and evaluation method framework.

Verification
- Sensitivity analysis: risk determination; control evaluation and implementation
- Internal results comparison: system characterization; threat and vulnerability assessment; risk determination; control identification; control evaluation and implementation
- Simulation: risk determination; control evaluation and implementation

Validation
- Experts: all ISRM phases
- Alternate decision process: all ISRM phases
- Statistical evidence: risk determination; control evaluation and implementation

Evaluation
- Management decision behavior analysis and control quality assessment: both methods evaluate the influence of the overall ISRM activities on the considered organization


Table 3 shows which methods we can use in each ISRM phase to verify, validate, or evaluate its output. The method-phase assignment in Table 3 is based on matching each method's required input data with each phase's output. For example, because the system characterization phase doesn't utilize mathematical models, but the sensitivity analysis method requires such a model to test its sensitivity, we can't make an assignment. However, if an automated tool is used in the system characterization phase, we can use the internal results comparison method to compare the results generated by the tool with, for example, manual paper-and-pencil results.

In general, successfully implementing ISRM depends on trust in the gained results. Relying on an incorrect model will result in incorrect data and security decisions. Likewise, the ISRM output must be validated to ensure a complete, correct decision basis; errors in early ISRM phases will affect all subsequent results. Evaluating ISRM implications provides insights into organizational changes, highlighting benefits and drawbacks, and helps us understand whether the selected ISRM approach is suited and sufficient for the organization. Ideally, every phase would be verified, validated, and evaluated with multiple methods to reach the highest level of security. In practice, however, this is often impossible due to time and budget constraints. Researchers and practitioners conducting ISRM decide which methods to use depending on their situation and available resources. The following points should be considered in the decision-making process.

Verification methods and their scope are chosen depending on existing knowledge and resources. Whether the output of the system characterization, threat and vulnerability assessment, and control identification phases can be verified depends on which methods are used in these phases. If automated tools (such as network scanners in the system characterization phase) are used, parts of the internal results comparison method (such as a paper-and-pencil calculation and different code versions) should be used to verify the tools' correctness. If we use mathematical models in the risk determination and control evaluation and implementation phases, sensitivity analysis and simulation methods should be used to verify their output.

Because incorrect results, produced by erroneous or incomplete input values, and deficiencies in approaches are always a threat, validation should never be omitted. How the output of the system characterization, threat and vulnerability assessment, and control identification phases is validated depends on the organization and its willingness to incorporate external experts. An expert team conducting validation offers various advantages, such as real-world parameter consideration and domain-specific solutions based on experience. Furthermore, it can cover all ISRM phases. If external experts can't be used (for example, due to security concerns or budget constraints), organizations should apply the alternate decision process method with in-house experts to validate the output. Because the risk determination and control evaluation and implementation phases require the support of automated calculations, we can apply the entire range of validation methods. Statistical evidence should be used whenever it's available. However, because the statistical evidence method deals mainly with historical data, it can't help validate the output of the system characterization, threat and vulnerability assessment, or control identification phases.

Finally, it's necessary to measure ISRM's impact on an organization to give management feedback and provide the basis for subsequent decisions. As Table 3 shows, evaluation can't be conducted for single ISRM phases. Instead, the management decision behavior analysis and control quality assessment methods should be used to evaluate the influence of the overall ISRM activities on the organization in question.
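For practitioners who want to operationalize this framework, a simple lookup structure along the following lines may help; it is a sketch that encodes part of the assignment discussed above, not a prescribed data model:

```python
# Phase-to-method framework sketched from the discussion above; a practitioner
# could use such a mapping to check which techniques apply to a given phase.
FRAMEWORK = {
    "system characterization": {
        "verification": ["internal results comparison"],
        "validation": ["experts", "alternate decision process"],
    },
    "risk determination": {
        "verification": ["sensitivity analysis", "internal results comparison", "simulation"],
        "validation": ["experts", "alternate decision process", "statistical evidence"],
    },
    # ...remaining phases follow the same pattern...
}

def applicable_methods(phase, activity):
    """Return the methods suggested for one phase and activity, if any."""
    return FRAMEWORK.get(phase, {}).get(activity, [])

print(applicable_methods("risk determination", "validation"))
```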

Which Methods Are Best for Each Phase?


Choosing the optimum method for each ISRM phase is often difficult. To demonstrate how to use our systematic overview in real-world settings, we illustrate it using the example of a fictitious financial institution, Capital, which wishes to implement an organization-wide ISRM approach.

For system characterization, the internal results comparison method ensures that inventory tools are working as expected. Experts, such as the IT administrator, validate the output for completeness. By using existing or manually generated inventory lists as reference documents, Capital can compare the completeness of the tool's output.

To assess threats and vulnerabilities, the internal results comparison verifies that ISRM tools generate lists containing the relevant threats and vulnerabilities that endanger the identified assets. For validation, Capital consults both in-house and external experts to check the threat and vulnerability assessment phase's output. In-house experts ensure that organization-specific threats and vulnerabilities are assessed; external experts help Capital discover threats and vulnerabilities that are new or were previously ignored or overlooked by the in-house experts.

Because the risk determination phase consists mainly of mathematical calculation, the underlying mathematical model must be verified using internal results comparisons and sensitivity analyses. Both methods let Capital detect flaws within the model, that is, small threat probability changes resulting in large risk-value changes. Capital uses statistical evidence to validate threat probabilities. In-house and external experts validate the final risk figures and the risk calculation input factors (probability and impact). This validation increases trust in the calculated risk figures.

The internal results comparison verifies whether control identification tools rely on correct input data and identification algorithms. In a workshop-based environment, in-house and external experts validate whether the identified controls meet the requirements of the organization-specific risk setting.

While in-house experts contribute their knowledge regarding already existing control implementations, external experts assist Capital in validating the overall control identification results.

Because the control evaluation and implementation phase can include mathematical models, Capital has to verify these models with internal results comparisons and sensitivity analyses. Both methods ensure that the underlying mathematical model behaves as expected for different input data combinations. Experts validate the output along with statistical data gathered in similar organizational settings, for example, the effectiveness of specific control implementations in similar environments. Expert validation ensures that appropriate control implementations are found for each of the identified risks.

After verification and validation, Capital evaluates the entire ISRM approach by analyzing management decision behavior and assessing changes in control quality. Management decision behavior analysis includes factors such as management's IT security awareness level, the number and value of IT security investment decisions, and the complexity of IT security investments. External experts assess changes in control quality to evaluate whether the IT security investments made generate a true benefit for Capital.

The verification, validation, and evaluation results help Capital find the most suitable ISRM approach for its organization-specific setting. After implementing the ISRM approach, Capital decides to continue to conduct evaluations (possibly on a yearly basis) to determine whether the approach still provides the expected benefit. Fundamental changes such as mergers or company culture changes might require a different ISRM approach.
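Returning to the system characterization step of this example, the following sketch, with hypothetical asset names, shows the kind of internal results comparison Capital could run between an inventory tool's output and a manually maintained reference list:

```python
# Hypothetical asset lists: one exported by an automated inventory tool,
# one maintained manually as a reference document.
tool_output = {"web-server-01", "db-server-01", "hr-laptop-17", "mail-gateway"}
reference_list = {"web-server-01", "db-server-01", "hr-laptop-17", "backup-nas"}

missing_from_tool = reference_list - tool_output   # assets the tool failed to find
unexpected_in_tool = tool_output - reference_list  # assets not in the reference list

print("Missed by the inventory tool:", sorted(missing_from_tool))
print("Not in the reference list:   ", sorted(unexpected_in_tool))
```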

ISRM researchers and practitioners are the primary user groups that profit from the methods we've identified here. Both groups take great interest in correct and comparable verification, validation, and evaluation results. Researchers need to show that their ISRM approaches perform as expected, and moreover, they require methods to systematically compare their work with other approaches. Depending on the focus of the ISRM research, they can target specific ISRM phases. After determining their focus, researchers can select suitable verification, validation, and evaluation methods. Practitioners must establish trust in potential or already implemented ISRM approaches. This usually requires verifying and validating all ISRM phases. Validation methods make it possible to determine whether the utilized ISRM approach satisfies the organization's requirements. The identified methods can also be used to compare different ISRM approaches and thereby help practitioners select the most favorable solution. Although organizations should conduct verification and validation at the beginning of the process, evaluation should be continuous so as to determine the benefit of the implemented approach.

So far, there are no standardized ISRM-specific methods for evaluating management decision behavior and control quality changes. This poses an open research challenge. To fill this gap, we need standardized interview guidelines and questionnaires to help organizations compare and track changes caused by ISRM. Only standardized and widely accepted evaluation methods allow organizations to reliably measure the success of their ISRM approaches. Furthermore, we must enhance and refine the proposed verification, validation, and evaluation method framework with feedback from the researchers and practitioners who use it.

We want to underline the importance of methodically sound and comparable verification, validation, and evaluation results and of measuring and understanding the implications of ISRM approaches. Only then is it possible to place trust in the expected level of security and to know whether security investments pay off. We hope that our findings and their implications stimulate both researchers and practitioners to view ISRM verification, validation, and evaluation as an elemental part of their work and that this article provides the necessary foundation to carry it out.
References
1. W. Baker and L. Wallace, "Is Information Security Under Control? Investigating Quality in Information Security Management," IEEE Security & Privacy, vol. 5, no. 1, 2007, pp. 36-44.
2. G. Stoneburner, A. Goguen, and A. Feringa, Risk Management Guide for Information Technology Systems: Recommendations of the National Institute of Standards and Technology, special publication 800-30, NIST, 2002; http://csrc.nist.gov/publications/nistpubs/800-30/sp800-30.pdf.
3. L. Bodin, L. Gordon, and M. Loeb, "Information Security and Risk Management," Comm. ACM, vol. 51, no. 4, 2008, pp. 64-68.
4. H. Cavusoglu, B. Mishra, and S. Raghunathan, "A Model for Evaluating IT Security Investments," Comm. ACM, vol. 47, no. 3, 2004, pp. 87-92.
5. S. Smith and E. Spafford, "Grand Challenges in Information Security: Process and Output," IEEE Security & Privacy, vol. 2, no. 1, 2004, pp. 69-71.
6. R. Baskerville, "Risk Analysis as a Source of Professional Knowledge," Computers & Security, vol. 10, no. 9, 1991, pp. 749-764.
7. G. Cybenko, "Why Johnny Can't Evaluate Security Risk," IEEE Security & Privacy, vol. 4, no. 1, 2006, p. 5.
8. D.R. Wallace et al., Reference Information for the Software Verification and Validation Process, special publication 500-234, NIST, 1996; http://hissa.nist.gov/HHRFdata/Artifacts/ITLdoc/234/val-proc.html.
9. J.A. Wentworth, R. Knaus, and H. Aougab, Verification, Validation and Evaluation of Expert Systems, US Dept. of Transportation, 1995.
10. K. Peffers and Y. Tang, "Identifying and Evaluating the Universe of Outlets for Information Systems Research: Ranking the Journals," J. Information Technology Theory and Application, vol. 5, no. 1, 2003, pp. 63-84.
11. L. Sun, R.P. Srivastava, and T.J. Mock, "An Information Systems Security Risk Assessment Model under the Dempster-Shafer Theory of Belief Functions," J. Management Information Systems, vol. 22, no. 4, 2006, pp. 109-142.
12. R. Kumar, S. Park, and C. Subramaniam, "Understanding the Value of Countermeasure Portfolios in Information Systems Security," J. Management Information Systems, vol. 25, 2008, pp. 241-280.
13. M.S. Feather et al., "Applications of Tool Support for Risk-Informed Requirements Reasoning," Int'l J. Computer Systems Science & Engineering, vol. 20, no. 1, 2005, pp. 5-17.
14. M. Sahinoglu, "Security Meter: A Practical Decision-Tree Model to Quantify Risk," IEEE Security & Privacy, vol. 3, no. 3, 2005, pp. 18-24.
15. S. Smithson and R. Hirschheim, "Analysing Information Systems Evaluation: Another Look at an Old Problem," European J. Information Systems, vol. 7, no. 3, 1998, pp. 158-174.
16. D. Straub and R. Welke, "Coping with Systems Risk: Security Planning Models for Management Decision Making," MIS Quarterly, vol. 22, no. 4, 1998, pp. 441-469.
17. M. Benaroch, Y. Lichtenstein, and K. Robinson, "Real Options in Information Technology Risk Management: An Empirical Validation of Risk-Option Relationships," MIS Quarterly, vol. 30, no. 4, 2006, pp. 827-864.
Stefan Fenz is a researcher at the Vienna University of Technology and SBA Research. His research interests include information security, with secondary interests in semantic technologies and named-entity recognition. Fenz has a PhD in computer science from the Vienna University of Technology. He is a member of the IFIP WG 11.1 on Information Security Management; IEEE Systems, Man, and Cybernetics Society; and the International Information Systems Security Certification Consortium (ISC2). Contact him at stefan.fenz@tuwien.ac.at.

Andreas Ekelhart is a researcher at SBA Research and a PhD candidate at the Institute of Software Technology and Interactive Systems at the Vienna University of Technology. His research interests include semantic applications and applied concepts of IT security, with a focus on information security risk management. Ekelhart has an MSc in business informatics and an MSc in software engineering and Internet computing from the Vienna University of Technology. He is a member of the IEEE Systems, Man, and Cybernetics Society and ISC2. Contact him at aekelhart@sba-research.org.
