Abstract
The Microsoft Security Development Lifecycle (SDL) Optimization Model is designed to
facilitate gradual,
consistent, and cost-effective implementation of the SDL by
development organizations outside of Microsoft. The model helps those responsible for
integrating security and privacy into their organization's software development lifecycle to
assess their current state and to gradually move their organizations towards the adoption of
the proven Microsoft program for producing more secure software. The SDL Optimization
Model enables development managers and IT policy makers to assess the state of
security in development. They can then create a vision and road map for reducing customer
risk by creating more secure and reliable software in a cost-effective, consistent, and
gradual manner. Although achieving security assurance requires long-term commitment, this
guide outlines a plan for attaining measurable process improvements quickly, with realistic
budgets and resources.
This is the third of five resource guides. It explains the key practices for organizations
beginning at the Basic level, identified as those with few or undefined software development
security practices. This guide introduces a self-assessment checklist of relevant capabilities
and advice for conducting and managing the practices to achieve these capabilities, and it
provides links to relevant resources where additional content can be found. You can use the
information contained in this guide to help you move from the Basic level to the
Standardized level. For a full description of the model, concepts, capabilities, and maturity
levels, please see the first guide in this series, Microsoft Security Development Lifecycle
(SDL) Optimization Model: Introduction to the Optimization Model.
For the latest information, more detailed descriptions, and the business benefits of the
Microsoft Security Development Lifecycle, go to http://www.microsoft.com/SDL.
The information contained in this document represents the current view of Microsoft Corporation on the issues discussed as of the
date of publication. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment
on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information presented after the date of publication.
This document is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED, OR STATUTORY, AS
TO THE INFORMATION IN THIS DOCUMENT OR INFORMATION REFERENCED OR LINKED TO BY THIS DOCUMENT.
Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of
this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means
(electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of
Microsoft Corporation.
Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject
matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this
document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.
© 2008 Microsoft Corporation. All rights reserved. This work is licensed under the Creative Commons Attribution-Non-Commercial
License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc/2.5/ or send a letter to Creative Commons,
543 Howard Street, 5th Floor, San Francisco, California, 94105, USA.
Microsoft, Microsoft Office Word, InfoPath, Visual Studio, Win32, Visual C#, Visual C++, SQL Server, ActiveX, and Windows are
either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.
All other trademarks are property of their respective owners.
Contents
Resource Guide Overview
  Audience
  SDL Optimization Levels
Preparing to Implement SDL Requirements
  Phased Approach
  Implementation services
Implementer Guide: Basic to Standardized
Capability Area: Training, Policy, and Organizational Capabilities
  Introduction
  Capability: Training
  Capability: Bug Tracking
Capability Area: Requirements and Design
  Introduction
  Capability: Risk Assessment
  Capability: Quality Gates
  Capability: Threat Modeling
Capability Area: Implementation
  Introduction
  Capability: Secure Coding Policies
  Capability: Cross-Site Scripting (XSS) and SQL Injection Defenses
Capability Area: Verification
  Introduction
  Capability: Dynamic Analysis and Application Scanning (Web Applications)
  Capability: Fuzzing
  Capability: Penetration Testing
Capability Area: Release and Response
  Introduction
  Capability: Final Security Review
  Capability: Project Archiving
  Capability: Response Planning and Execution
Audience
This document is designed for development managers and IT decision makers who
are responsible for planning, deploying, and governing security and privacy measures
in software development and who want to implement the practices and concepts of
the Microsoft Security Development Lifecycle (SDL).
Preparing to Implement SDL Requirements
The Standardized level is an important step on the road to the SDL. At the
Standardized level, security practices and standards are beginning to be introduced
into the development lifecycle. Organizations at this level are able to assess the
security and privacy risk of new projects and to select the best candidates for
implementing security and privacy practices into the development lifecycle. The
organization has realized the value of the SDL and has made the decision to embark
on the path of greater adoption. Security and privacy practices are only applied to a
few pilot projects. Much of the effort is spent at later phases of the lifecycle and in
security response, where improvements come at a greater cost than those incurred
with the more integrated and proactive practices at the higher optimization levels.
Phased Approach
Microsoft recommends a phased approach to meeting the requirements in each of
the SDL capability areas. The four phases are shown in the following illustration.
In the Assess phase, you determine the current capabilities and resources within your
organization.
In the Identify phase, you determine what you need to accomplish and which
capabilities you want to incorporate.
In the Evaluate and Plan phase, you determine what you need to do to implement the
capabilities outlined in the Identify phase.
In the Deploy phase, you execute the plan that you built in the previous phase.
Implementation services
Implementation services for the projects outlined in this document are provided by
Microsoft partners and Microsoft Services. For assistance in implementing the SDL
optimization improvements highlighted in the SDL Optimization Model Implementer
Resource Guides, please refer to the SDL Pro Network page on the SDL Web site, or
visit the Microsoft Services Web site.
Capability Area: Training, Policy, and Organizational Capabilities
Introduction
Training, Policy, and Organizational Capabilities is an SDL optimization capability area
and the foundation for implementing many capabilities in the SDL Optimization
Model. Ongoing activity in this area focuses on capabilities
and practices at an organizational level that cross many projects and can be
implemented in parallel with product release cycles. The main benefits of developing
these capabilities include: improved security awareness and skills, increased
standardization in security development practices, internal security metrics for
measuring effectiveness, and clearer executive support for security in development.
Capability: Training
Overview
The average developer or tester may know very little about building secure software.
Increasing their knowledge of the executive commitment to security, common
security concerns and pitfalls, and the resources available to them is critical for
enabling developers to create more secure code. Training is therefore one of the
foundational practices of the SDL.
Phase 1: Assess
The Assess phase involves identifying the requirements for training. Determine the
proper training content for your organization by considering:
Phase 2: Identify
The next phase involves identifying the resources available for curriculum creation
and delivery of a Basics of Secure Design, Development, and Test or equivalent
training course. Consider the following:
Phase 3: Evaluate and Plan
The general technical content was selected in the Identify phase, but several other
practices and checkpoints in the SDL Optimization Model should be completed before
the course materials can be finalized. The outputs of the Quality Gates and Secure
Coding Policies practices in this guide should be a part of the basic security course
material.
Phase 4: Deploy
The goal of the Deploy phase is to deliver the training to the engineering
organization. All developers and testers in the pilot teams should have completed the
basic training course.
Checkpoint: Training
Requirement
Determine training needs and content for developers and testers
in the organization; create or acquire appropriate curriculum.
Capability: Bug Tracking
Phase 1: Assess
The Assess phase involves gathering potential requirements for security bug
classification. Bugs should be security tagged in three categories:
Security Cause: What was the root cause of the vulnerability? The following
resources are useful examples of general software and Web-application-specific
vulnerability categorizations:
Web Application Security Consortium (WASC) Web Application Security
Statistics project
OWASP Top 10 2007 (The top 10 security vulnerabilities for 2007)
The MITRE Corporation's Common Weakness Enumeration
The Open Source Vulnerability Database
Note
There is a wide variety of methods for
categorizing the causes and types of security
vulnerabilities, but many are too complex to
address at the Basic to Standardized level. You don't want to present staff
members (who may have a minimum of security training and experience)
with a list of 50 options they may not understand. When starting out, keep
your categories broad, base them on the most common vulnerabilities
found in your organization's code and products, and don't shy away from
having a lot of bugs in the Unknown or Other categories.
Security Effect: Ask yourself: What security property can an attacker violate with
the vulnerability? Microsoft recommends categorizing security bugs with the STRIDE
(Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and
Elevation of Privilege) categorization. For more information, see Uncover Security
Design Flaws Using the STRIDE Approach.
How Found: Identifying which practices are most productive in uncovering security
vulnerabilities helps guide implementation of the SDL. Categories should correspond
to the SDL and other development lifecycle practices, such as:
Design Review
Threat Modeling
Code Analysis
Code Review
Functional Quality Assurance Test
Third-Party Penetration Test
Final Security Review
Externally Reported
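Taken together, the three tags can be sketched as a small extension to a bug record. The category values below are illustrative assumptions drawn from the lists above, not a prescribed taxonomy; your bug-tracking system's own field mechanism would hold them in practice.

```python
from dataclasses import dataclass

# Illustrative tag sets; the cause list in particular should be adapted to the
# vulnerabilities most common in your organization's code (broad is fine).
SECURITY_CAUSES = {"Buffer Overflow", "SQL Injection", "Cross-Site Scripting",
                   "Unknown", "Other"}
STRIDE_EFFECTS = {"Spoofing", "Tampering", "Repudiation", "Information Disclosure",
                  "Denial of Service", "Elevation of Privilege"}
HOW_FOUND = {"Design Review", "Threat Modeling", "Code Analysis", "Code Review",
             "Functional Quality Assurance Test", "Third-Party Penetration Test",
             "Final Security Review", "Externally Reported"}

@dataclass
class SecurityBug:
    """A bug record extended with the three security tags."""
    bug_id: int
    title: str
    cause: str      # root cause of the vulnerability
    effect: str     # STRIDE property an attacker can violate
    how_found: str  # which practice uncovered the bug

    def __post_init__(self):
        # Unfamiliar causes fall into the broad "Unknown" bucket rather than
        # forcing staff to pick from a long, confusing list.
        if self.cause not in SECURITY_CAUSES:
            self.cause = "Unknown"
        if self.effect not in STRIDE_EFFECTS:
            raise ValueError(f"not a STRIDE category: {self.effect}")
        if self.how_found not in HOW_FOUND:
            raise ValueError(f"unrecognized practice: {self.how_found}")

bug = SecurityBug(1042, "Search page echoes query unencoded",
                  cause="Cross-Site Scripting", effect="Tampering",
                  how_found="Code Review")
```

Once bugs carry all three tags, a simple query grouped by the how-found field shows which practices are actually surfacing vulnerabilities, which is exactly the signal that guides further SDL investment.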
Phase 2: Identify
In the Identify phase, the security expert team should identify the relevant bug-tracking and management systems, along with the people and processes necessary
to add the new security categorizations.
Phase 4: Deploy
In the Deploy phase, the engineering organization is trained to categorize new
vulnerabilities with the additional security categorizations. This might be part of the
general security basics training, or it might be simply by means of an e-mail
campaign.
Checkpoint: Bug Tracking
Requirement
Bug databases and tracking software can record and classify
vulnerabilities by security cause, effect, and method of discovery.
If you have completed the step listed above, your organization has met the minimum
requirements of the Standardized level for Bug Tracking. We recommend that you
follow additional best practices for bug tracking addressed in the SDL Process
Guidance at Microsoft MSDN.
Capability Area: Requirements and Design
Introduction
In Requirements and Design, new practices are first introduced into the lifecycle of
specific products and projects. As is generally well established in software
engineering, the later in the product lifecycle a bug is found, the more expensive it is
to fix; this is perhaps even more true for security vulnerabilities. Insecure designs, in
particular, resemble other nonfunctional requirements, such as scalability. Without
careful attention, mistakes can easily propagate and become extraordinarily costly to
fix. Assessing risk, analyzing security and privacy early in the lifecycle, and
identifying proper mitigations can drive the most dramatic cost savings in a well-optimized SDL practice. Ongoing activity in Requirements and Design focuses on
capabilities and practices that continue to front-load security effort where it can be
most effective.
Capability: Risk Assessment
Phase 1: Assess
The Assess phase involves gathering the requirements for Risk Assessment. The
overall goal should be to categorize each new project as high, medium, or low risk.
The risk score will represent the product of a best-effort guess at two factors:
At the Assess stage, an estimate should also be made of the appropriate size and
degree of detail for the risk questionnaire, based upon an understanding of the
diversity of products and projects and on the organizational tolerance for additional
project management overhead.
For more information, see the SDL Process Guidance (Phase 2: Design Phase: Risk
Analysis) and Chapter 8 and Chapter 9 of The Security Development Lifecycle.
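As a concrete sketch of such a score: the two factors below (exposure and business impact, each a best-effort guess on a 1-5 scale) and the band thresholds are assumptions for illustration only; the questionnaire your organization builds defines its own factors and cut-offs.

```python
def risk_score(exposure: int, impact: int) -> str:
    """Classify a new project as high, medium, or low risk.

    exposure and impact are best-effort guesses on a 1-5 scale; the
    factor names, scale, and band thresholds are illustrative
    assumptions. The score is the product of the two factors,
    bucketed into three bands.
    """
    for value in (exposure, impact):
        if not 1 <= value <= 5:
            raise ValueError("factors are scored 1-5 in this sketch")
    score = exposure * impact  # ranges from 1 to 25
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# An Internet-facing service handling customer data would score high,
# making it a strong candidate for an SDL pilot project:
risk_score(5, 4)  # "high"
```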
Phase 2: Identify
The Identify phase involves targeting the appropriate documentation method for
creating, auditing, and enforcing the risk questionnaire. This might be a Microsoft
Office Word document, an InfoPath form, or a simple Web application, or
opportunities might exist to integrate directly into Application Lifecycle Management
(ALM), Enterprise Resource Planning (ERP), or other project management systems
that drive the rest of the product lifecycle.
For your organization, emphasize issues that have produced bugs in the past
or that can lead to high-severity vulnerabilities as defined in your quality gates.
Phase 4: Deploy
The goal of the Deploy phase is to deliver the questionnaire and to get responses
from 80 percent of new projects during the project initiation or requirements lifecycle
phase in order to select the pilot SDL projects.
Note
At more advanced maturity levels, some of this risk assessment may be
automated for existing or legacy projects. If it isn't possible to develop and
require teams to complete such a questionnaire, early and inexpensive
automation may be a second-best alternative. If source code files can be
correlated to projects, for example, the rough number of hits from a security
Capability: Quality Gates
Phase 1: Assess
The Assess phase involves gathering requirements to build the bug ranking guideline.
Review the sample security and privacy quality gates (or bug bars) from the SDL
Process Guidance to gain an understanding of the form and content of the document
and to assess how it can be incorporated into or supplement existing bug
categorization and ranking tools in your organization.
Phase 2: Identify
In the Identify phase, the security expert team should gather the detailed
requirements for defining meaningful risk and bug ranking classifications and
categories. This will include input from:
For more information, see the Microsoft Privacy Guidelines for Developing Software
Products and Services and the Microsoft Security Response Center Security Bulletin
Severity Rating System.
Phase 4: Deploy
In the Deploy phase, these quality gates are rolled out to the SDL pilot projects as the
standard for categorizing and prioritizing security and privacy vulnerabilities.
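A minimal sketch of how such a gate might be enforced at release time follows; the severity labels and must-fix rules here are assumptions, far simpler than the sample bug bars in the SDL Process Guidance.

```python
# A minimal bug-bar check with assumed severity labels; real bug bars map
# detailed vulnerability characteristics (attack vector, authentication
# required, data exposed, ...) onto these severities first.
BUG_BAR = {
    "critical": "must fix before release",
    "important": "must fix before release",
    "moderate": "fix in next release unless exception approved",
    "low": "fix as resources allow",
}

def gate_blocks_release(open_bugs):
    """Return the open bugs that fail the quality gate.

    open_bugs is an iterable of (bug_id, severity) pairs; a bug blocks
    release if its severity maps to a must-fix rule. Unknown severities
    do not block here, so triage must assign a severity to every bug.
    """
    return [bug_id for bug_id, severity in open_bugs
            if BUG_BAR.get(severity) == "must fix before release"]

blockers = gate_blocks_release([(101, "critical"), (102, "low"),
                                (103, "important")])
# blockers == [101, 103]
```

At the Final Security Review, a non-empty blocker list means the release waits or a formal exception is reviewed and approved.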
Capability: Threat Modeling
Phase 1: Assess
Assess the readiness and availability of resources on the security expert team to
perform Threat Modeling for the selected SDL pilot projects.
Phase 2: Identify
For the SDL pilot projects, identify the features to be reviewed using the Risk
Assessment Capability. Many security bugs happen at the interface between
components owned by different teams, where the security requirements and
guarantees made by each side of the contract are not clearly defined. Threat
Modeling core system components with many dependencies helps expose these kinds
of vulnerabilities and provides valuable documentation for all of the clients of these key
systems.
Phase 4: Deploy
A representative from the security expert team conducts the Threat Modeling session
with the team. For more information on building threat models, see:
Checkpoint: Threat Modeling
Requirement
If you have completed the step listed above, your organization has met the minimum
requirements of the Standardized level for Threat Modeling.
Capability Area: Implementation
Introduction
Implementation focuses on security measures to eliminate and reduce the impact of
vulnerabilities in the construction of software. Ongoing Implementation activity
focuses on improving the use and sophistication of security tools and policies when
building software.
Capability: Secure Coding Policies
Phase 1: Assess
In the Assess phase, evaluate the set of languages, compilers, and tools that are in
scope for secure coding policies. Do not forget to include languages such as
JavaScript that are embedded in other artifacts or executed outside of the standard
system context but that may still present risks to customers and end users.
Phase 2: Identify
In the Identify phase, note the relevant practices, tools, and checklists for the
assessed areas of coverage. The Standardized level requires:
Using the latest compiler and linker because important defenses are added by
the tools.
Phase 4: Deploy
Guidelines and policies are rolled out to the development organization and should be
included as part of the Training curriculum. It may also be helpful, though not
required at the Standardized level, to encourage or require developers to read books
that provide a comprehensive view of secure development best practices, such as
Writing Secure Code, Second Edition.
Capability: Cross-Site Scripting (XSS) and SQL Injection Defenses
Phase 1: Assess
The Assess phase begins by determining the major platforms, languages, and
frameworks to target. Many of the tools available will also vary by platform and
development environment, so it is important to assess both what is in use and how
much variation exists across each development group's tool chain.
Phase 2: Identify
In the Identify phase, the set of tools available and appropriate for the target bug
classes and development platforms is investigated, and a few are selected for
piloting or competitive analysis.
A variety of tools suitable for both native and managed code development on the
Windows platform and with Visual Studio are available at the Microsoft SDL Tools
Repository.
The following is a list of useful free tools provided by Microsoft and other vendors.
This list is not intended to be comprehensive, as there are numerous commercial
code analysis tools in the market.
In the source code analysis space, Microsoft provides the following tools:
Passive Web application proxy scanners can be useful in identifying areas where XSS
and other vulnerabilities may be present. Two free and open source tools in this
space are:
At the code and framework level, several libraries and filters exist that can be
integrated directly into the application:
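Whatever libraries are selected, the two defenses reduce to the same patterns: encode untrusted data on output, and keep data out of SQL text with parameterized queries. A minimal sketch of both, using Python with the standard sqlite3 module as a stand-in for whatever framework and database driver are actually in use:

```python
import html
import sqlite3

def render_comment(user_input: str) -> str:
    """Defend against XSS by HTML-encoding untrusted data on output."""
    return f"<p>{html.escape(user_input)}</p>"

def find_user(conn, username: str):
    """Defend against SQL injection with a parameterized query: the
    driver keeps the data separate from the SQL text, so attacker
    input can never change the query's structure."""
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()

# Toy in-memory database for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

render_comment("<script>alert(1)</script>")
# "<p>&lt;script&gt;alert(1)&lt;/script&gt;</p>" -- script is inert text
find_user(conn, "alice' OR '1'='1")
# None -- the classic injection payload is treated as a literal name
```

The same two patterns exist in every mainstream web framework and data-access layer; the policy work is choosing one blessed mechanism per platform and requiring it.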
Phase 4: Deploy
The Deploy phase involves handing the tools off to selected members or teams in the
development organization and collecting feedback as to the effectiveness of the
tools.
Capability Area: Verification
Introduction
Verification is focused on security practices to discover weaknesses and to verify
security once software construction is functionally complete. This phase is important
because it helps to discover and eliminate vulnerabilities that might have been
introduced into the code at an earlier stage. Ongoing Verification activity focuses on
deeper and more sophisticated use of the tools and a more comprehensive code
review.
Capability: Dynamic Analysis and Application Scanning (Web Applications)
Overview
Attackers routinely run automated scanning tools against Web applications, so
defenders must utilize similar tools to find and eliminate these kinds of easily
discoverable weaknesses at the verification stage.
Phase 1: Assess
Most available scanners will work generically on all Web applications, regardless of
the underlying technology platform. However, there may be some edge cases or
framework-specific attacks that one tool or another can cover more deeply, and some
scanners may offer better compatibility and coverage for heavily AJAX-enabled
applications. These basic characteristics of the applications to be scanned should be
assessed to help with tool selection. Also assess which classes of vulnerabilities you
expect the tools to assist in discovering.
Phase 2: Identify
Identify tools to evaluate. Tools in this area vary from fully automated scanners to
managed services, to semi-manual browser toolbars that can integrate into the
normal functional testing process. Most offerings in this area are commercial
products, but free or open source tools to investigate include:
Multiple tools may be beneficial to increase coverage, and certain tools may only
offer specialized testing for one class of vulnerability and should be used in
conjunction with a more general scanner. Some tools, such as Nikto, can be used to
scan for Web server configuration vulnerabilities, but they do not target the discovery
of vulnerabilities in custom applications. These should always be used in conjunction
with an application-specific scanner.
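One of the simplest checks these scanners automate is injecting a unique marker into an input and looking for it reflected unencoded in the response. A sketch of that core check, operating on a response body already in hand so it stays self-contained (a real scanner would drive HTTP requests around it):

```python
import html
import uuid

def make_probe() -> str:
    """Generate a unique, harmless marker wrapped in script tags so it
    is unambiguous whether the application echoed it verbatim."""
    return f"<script>/*{uuid.uuid4().hex}*/</script>"

def reflected_unencoded(probe: str, response_body: str) -> bool:
    """True if the probe appears verbatim (unencoded) in the response,
    which suggests a reflected XSS vulnerability. An HTML-encoded echo
    of the probe is the safe behavior and does not match."""
    return probe in response_body

probe = make_probe()
vulnerable_page = f"<p>You searched for: {probe}</p>"          # echoed raw
safe_page = f"<p>You searched for: {html.escape(probe)}</p>"   # encoded

reflected_unencoded(probe, vulnerable_page)  # True
reflected_unencoded(probe, safe_page)        # False
```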
Phase 4: Deploy
During the Deploy phase, roll out the tool to the test organization, or have the
security expert group apply it to selected pilot projects. Gather data on the tool's
effectiveness, and tune it with the goal of making it an acceptable part of mandatory
practices.
Checkpoint: Dynamic Analysis and Application Scanning (Web Applications)
Requirement
The security expert team is working with the SDL pilot teams to
evaluate and deploy dynamic scanning tools for Web applications.
If you have completed the step listed above, your organization has met the minimum
requirements of the Standardized level for Dynamic Analysis. We recommend that
you follow additional best practices for dynamic analysis addressed in the SDL
Process Guidance at Microsoft MSDN.
Capability: Fuzzing
Overview
Complex parsers for file formats and custom network protocols are a frequent cause
of high-severity vulnerabilities, due to buffer overrun, integer overflow, and related
issues possible in languages like C and C++. Identifying these issues with traditional
testing and code review is a time-consuming and error-prone process. Fuzz testing,
the automated creation and execution of many test cases created by targeted and
random inputs, has proven itself as a cost-effective method for identifying security
vulnerabilities. Widely employed by attackers, fuzzing should be proactively utilized
by defenders in verifying their software.
For more information, see the following resources:
Fuzz testing
Fuzz Testing at Microsoft and the Triage Process
Fuzzing: Brute Force Vulnerability Discovery (ISBN: 0321446119), by Michael
Sutton, Adam Greene, and Pedram Amini (Addison Wesley Professional, 2007)
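The core loop of simple mutation fuzzing can be sketched in a few lines; real fuzzers add grammar-aware generation, coverage feedback, and crash triage. The toy parser below is a hypothetical stand-in for a file parser under test, with a classic bug: it trusts a length field from the input.

```python
import random

def mutate(data: bytes, rng: random.Random, flips: int = 8) -> bytes:
    """Produce one fuzz case by overwriting a few random bytes of a
    valid seed input with random values (dumb mutation fuzzing)."""
    buf = bytearray(data)
    for _ in range(min(flips, len(buf))):
        pos = rng.randrange(len(buf))
        buf[pos] = rng.randrange(256)
    return bytes(buf)

def fuzz(parse, seed: bytes, iterations: int = 1000) -> list:
    """Run the parser under test against mutated inputs and collect the
    cases that make it raise, for later triage and regression tests."""
    rng = random.Random(0)  # fixed seed: reproducible test cases
    crashes = []
    for _ in range(iterations):
        case = mutate(seed, rng)
        try:
            parse(case)
        except Exception:
            crashes.append(case)  # archive the exact failing input
    return crashes

def toy_parse(data: bytes) -> str:
    """Hypothetical parser: reads a declared length, then decodes that
    many bytes as ASCII; raises on malformed inputs."""
    length = data[0]
    return data[1:1 + length].decode("ascii")

crashes = fuzz(toy_parse, seed=b"\x05hello")
```

Even this crude loop turns up failing inputs quickly; the bugs it finds, and their fixes, should feed straight into the security-tagged bug-tracking practice described earlier.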
Phase 1: Assess
The Assess phase involves identifying what kinds of parsers will be required to be
fuzzed. At the Standardized level, fuzzing is required for all new file parsers written in
C or C++ that accept data across a trust boundary. Depending on the history of
vulnerabilities, it may be desirable to extend this mandate to cover all such parsers
in legacy code, in addition to network protocol parsers exposed to unauthenticated
data.
Phase 2: Identify
In the Identify phase, the security expert team should identify candidate fuzzers. A
wide variety of free and commercial tools are available to satisfy many requirements
and styles:
Fuzzing Software, a list of (mostly free) tools from the book Fuzzing: Brute Force
Vulnerability Discovery by Sutton, Greene, and Amini, including FileFuzz.
Peach 2 is a free, easy-to-use, extensible fuzzing platform. Peach is capable of
fuzzing just about anything you can imagine, including network based services,
RPC, COM/DCOM, SQL Stored Procedures, and file formats.
File Fuzzers, Fuzzbox, Windows IPC Fuzzing Tools, and Forensic Fuzzing Tools are
free fuzz testing libraries from iSEC Partners.
Defensics, a line of commercial black-box negative testing tools for developers
from Codenomicon.
Phase 4: Deploy
In the Deploy phase, the selected tool is deployed against eligible parsers. At the
Standardized level, the security expert team may do this, or it may train the relevant
teams in the testing organization to do so. Bugs identified through fuzzing, and the
remediation of those bugs, should be tracked and verified.
Checkpoint: Fuzzing
Requirement
Custom file format parsers implemented in native C or C++ code
have been fuzzed for the SDL pilot projects.
If you have completed the step listed above, your organization has met the minimum
requirements of the Standardized level for Fuzzing. We recommend that you follow
additional best practices for fuzzing addressed in the SDL Process Guidance at
Microsoft MSDN.
Capability: Penetration Testing
Phase 1: Assess
The Assess phase involves identifying which software modules to focus on for
penetration testing efforts, developing a budget for outside penetration testing, and
determining the necessary skills and criteria for potential vendors.
Phase 2: Identify
In the Identify phase, targets are selected for penetration tests, and vendors are
selected to respond to the Request for Proposal (RFP). Members of the Microsoft SDL
Phase 4: Deploy
Execute the penetration test, and perform recommended remediation.
Checkpoint: Penetration Testing
Requirement
Penetration testing by third parties, as appropriate, is completed.
If you have completed the step listed above, your organization has met the minimum
requirements of the Standardized level for Penetration Testing. We recommend that
you follow additional best practices for penetration testing addressed in the SDL
Process Guidance at Microsoft MSDN.
Capability Area: Release and Response
Introduction
This capability area focuses on security practices performed for final security
assurance before release. It also helps you to prepare for and execute responses in
the event that security vulnerabilities are discovered in production software. The Final
Security Review (FSR) is conducted to verify that all of the relevant SDL requirements
have been satisfied, facilitating more effective governance of security assurance.
After release of the software, response activities are critical to minimize risk to
customers and to remediate vulnerabilities in a less costly and more orderly manner.
Capability: Final Security Review
Phase 1: Assess
In the Assess phase, determine what resources are available to perform the FSR. Ask
yourself: How much time will the central security expert team have available, and
how many projects can reasonably be covered? How much time in the product
development lifecycle can be devoted to an FSR? For teams that have been diligently
following the SDL practices, the FSR should take no more than a day; however,
expect that some teams will have left some practices incomplete or bugs unfixed,
and they will have to spend time resolving those.
Also, verify that you have determined which quality gates must be enforced for
software to be released to customers.
Phase 2: Identify
Next, identify which features or projects are eligible for an FSR. It is important to do
this as early as possible so that appropriate time can be built into the release cycle.
Occasionally, a project may need to be delayed to complete unfinished security work,
but this should definitely not be the norm in the SDL. After the resource budget has
been assessed and the design requirement reviews have been completed, it should
be possible to pick FSR candidates. Teams that had high-risk scores or a history of
security vulnerabilities but that have not reported any security bugs during the rest
of the development lifecycle are also prime candidates for an FSR.
Phase 4: Deploy
The FSR is conducted prior to release of the software, likely concurrent with
functional regression testing. The central security expert team meets with the
development team and assesses their execution of the required SDL practices. At the
Standardized level, it is required that quality gates for bugs released to production be
enforced and that exceptions to these gates be formally reviewed and approved.
Bugs prioritized such that they do not fall under mandates should also be reviewed to
ensure that they have been properly rated according to the quality gates. The FSR
should also assess how well teams have complied with the other SDL mandates, such
as fuzzing, secure coding policies, and other current security practices.
Checkpoint: Final Security Review
Requirement
The security expert team can use risk analysis and results from
earlier SDL practices to identify candidate projects for Final
Security Review.
Quality gates exist for number and severity of bugs released to
production.
The security expert team verifies that products adhere to internal
policies and meet relevant external regulatory requirements.
The security expert team reviews and approves all of the
exceptions to the quality gates.
Capability: Project Archiving
Phase 1: Assess
The Assess phase involves identifying platforms and technologies where symbol
archiving is relevant.
Phase 2: Identify
In the Identify phase, the specific projects on these platforms that ship public binaries
are selected.
Phase 4: Deploy
Symbols are archived at every release and used by production support teams to
assist in researching security issues.
Checkpoint: Project Archiving
Requirement
Debug symbols are archived in a central location for all publicly
released builds.
If you have completed the step listed above, your organization has met the minimum
requirements of the Standardized level for Project Archiving. We recommend that you
follow additional best practices for project archiving addressed in the SDL Process
Guidance at Microsoft MSDN.
Capability: Response Planning and Execution
Phase 1: Assess
The Assess phase involves identifying what the response process must encompass.
The general characteristics of your application will also shape the response plan.
Rolling out security updates will typically be much simpler for Web applications or
software-as-a-service (SaaS) offerings than it will be for shrink-wrapped products.
Consider the following questions when setting goals for incident response and
determining which principals to involve and how:
Phase 2: Identify
In the Identify phase, components that may require security servicing are cataloged.
Ask yourself: What are the target response times for each, and how will they be
serviced? What third-party components may need updating? What kinds of bugs
require a special release, and which can wait until the next regular release cycle?
Phase 4: Deploy
To deploy the incident response plan, a public contact point for security issues is
publicized to collect notices from the public or security research community. Posting a
notice on your company's Web site is a good start. Common e-mail address choices
for responsible disclosure of vulnerabilities contacts (when a specific person cannot
be otherwise identified) include secure@yourcompanyname.com and
security@yourcompanyname.com. Even if you choose to publicize a different contact
address, mail to these addresses should be monitored.
Requirement
New code and projects have recorded contacts for incident
response, and a security response first responder contact point is
made available to clients and the general public.
If you have completed the step listed above, your organization has met the minimum
requirements of the Standardized level for Response Planning and Execution. We
recommend that you follow additional best practices for response planning and
execution addressed in the SDL Process Guidance at Microsoft MSDN.