
Code Review Tips

Contents:

1) Useful Tips
2) A Process for Performing Security Code Reviews
- Prioritize
- Review the code
- Rerun all available tools
- Dig deep into risky code
3) Purpose of Code Review
4) SQL Code Review Tips


Useful Tips:
Tip #1: First and foremost, review code often. I’ve worked with many
architects that do regular code reviews every one or two weeks. Think about
how much code you yourself could write in one eight-hour work day. Now take
that number of lines of code and multiply it by the number of developers on a
typical team, say five. Now, multiply that by five or ten work days. That’s a lot
of code to review! Performing a review daily or every other day gives you the
ability to review much smaller chunks and you’ll actually have better overall
coverage.
I can hear you groaning: “Managing the typical responsibilities of an
application architect over the course of a project is hard enough. How can I
possibly keep up with daily code reviews?” My answer is: “You won’t have to.”
The most critical part of any application development project is the first week.
It’s the point where the developers start to build the application. This is the
point where an architect should be the singular driving force behind that
activity, guiding it, nurturing it and helping it evolve to maturity. Daily code
reviews during this start-up period in the project life cycle can address and fix
future show-stopping defects before they ever have a chance to come to
fruition. And, if you play your cards right, you’ll be seeding the kind of
knowledge in the development team that will keep future issues from
happening (see Tip #4).
Tip #2: Find a static code analysis tool you like and start using it. I look
at these tools like the smoke detectors in my house: Whether my house is on
fire (serious) or my kids are burning a pizza in the oven (annoying, but not
necessarily serious), I want to know that there is smoke in my house and
where, approximately, it is. Use a static code analysis tool to find potential
“hot spots” in your code, go there and see what it looks like. I’ve personally
used FxCop and Compuware’s DevPartner Studio. Of the two, DevPartner
has way more horsepower and capabilities, but having FxCop built into Visual
Studio Team System makes it a lot more convenient to use.
Tip #3: Actually read the code. When you’ve locked on to your static
analysis tool of choice, don’t let it become a replacement for actually reading
the code. Again, a tool makes a good bird dog, but you still need to go retrieve
the duck. Reading the code will give you context into the decision making
process the developer used when he/she wrote the code. I’ve seen things pop
into an analysis window and was ready to go thump the developer on the
head, only to read the code and completely get why they went the direction
they did.
Tip #4: Separate style from substance. It’s easy to fall into the trap of
looking at someone else’s code and think, “That’s not how I would have done
it.” Try as hard as you can to divorce yourself from the style of someone’s
code and focus on the substance. The two questions I generally ask myself in
this instance are: “Will this code work?” and “How easy is it for me to figure
out what this code does just by reading it?” The answer to these two
questions will generally give me a good feel for the quality of the code.
Tip #5: Make code reviews a learning opportunity. Code reviews can be
an opportunity to develop or alienate people on your team, depending on your
approach. When doing a review, I obviously want to find as many defects and
other issues as I can, but I also want the individual who wrote that code to
learn from the outcome of the review so he/she will not make the same
mistakes in the future. Taking this approach, you’re ensuring better quality
code in your application while making the individuals writing the code better at
their job. Ultimately, you won’t need to be as hands-on because the team will
be better at writing code and will know what you expect.

Code Review:

A code review is, quite literally, debugging the code with your eyes,
trying to understand not only how it works, but what it's trying to do. The latter is
much harder to understand from reading code, because understanding why
things are the way they are requires looking at the meta level of the code.

Some of the benefits of a test review:

• Tests reveal the intention behind the code much better than the code itself.
That means it's easier to discover logical bugs in the code by reading a
test. "Why are you testing that it throws an exception when it should
always simply log errors?"
• Tests are declarative by default - the developer declares what the code is
supposed to be accomplishing, but the test itself is usually very easy to
read (or at least should be).
• Tests are faster and shorter to read and understand. I can review a test in
a tenth of the time it takes me to review the code. I'll usually flip between
the test and the code only if I need to, but I can usually find logical errors
in a test case in less than a minute or so, without even looking at the code.
That saves plenty of time for everyone.

Security Code Reviews

• Make sure you know what you’re doing.
• Prioritize.
• Review the code.
Prioritize

How do you prioritize a code review effort when you have, say, 1,000 files to
review? Various tactics that gravitate around attack surface and potential bug
density can provide a good place to start. Code compiled into higher attack
surface software requires deeper review than code in lower attack surface
components. This isn’t to say that you don’t need to review lower attack surface
software; I’m simply talking about priorities. We can use the following heuristics
to determine code review priority:

• Old code. Older code tends to have more security vulnerabilities than new
code because newer code often reflects a better understanding of security
issues. The definition of old is hard to quantify, but you should review in
depth any code you consider “legacy.”
• Code that runs by default. Attackers often go after installed code that runs
by default. Therefore, such code must be of better quality, and hence
reviewed earlier and deeper than code that doesn’t execute by default.
Code running by default increases an application’s attack surface, which
is a product of all code accessible to attackers.1
• Code that runs in elevated context. Code that runs in elevated
identities (Local System in Windows or root in *nix, for example) must be
of higher quality and requires earlier and deeper review because code
identity is another component of attack surface.
• Anonymously accessible code. Another attack-surface element, code that
anonymous users can access should be reviewed in greater depth than
code that only valid users and administrators can access.
• Code listening on a globally accessible network interface. Code that
listens by default on a network, especially the Internet, is obviously open
to substantial risk and must be reviewed in depth for security
vulnerabilities.
• Code written in C/C++/assembly language. Because these languages
have direct access to memory, buffer-manipulation vulnerabilities within
the code can lead to buffer overruns, which often lead to malicious code
execution. With this in mind, you should review code written in these
languages in depth for buffer-overrun vulnerabilities. Of course, other
languages have other catastrophic vulnerabilities, such as SQL-injection
vulnerabilities in Java, PHP, C#, or Perl.
• Code with a history of vulnerabilities. Code that’s had numerous past
security vulnerabilities probably has many more unless someone has
made a directed effort to remove them from it.
• Code that handles sensitive data. You must review code that handles
personal, financial, or confidential data to ensure that it doesn’t disclose
the data to untrusted users through potential weaknesses.
• Complex code. There is no good metric to define complex code, and
although some research at Microsoft suggests that cyclomatic complexity2
might be a good indicator of potential bug density, I know of no metric
that’s a great indicator of potential security-bug density. Nonetheless,
complex code often has bugs and some of them could well be security
vulnerabilities.
• Code that changes frequently. Code that exhibits churn often sees new
bugs introduced. Again, not all of these will be security vulnerabilities, but
given a stable set of code that’s updated only twice in a year versus code
that changes every day, the latter will probably have more vulnerabilities in
it.

Another prioritization technique is to get an estimate of the code’s bug density.


The capture-recapture method provides one good way to do this. Although it was
developed to determine wildlife populations,3 we can also use it to estimate bug
density. At the highest level, this involves the following process:

• Have a small team (TeamA) of perhaps just two or three people review
some code for security vulnerabilities (BugsA).
• Have a second team (TeamB) review the same code, also looking for
security vulnerabilities (BugsB).
• Note the quantity of vulnerabilities found by both teams (BugsAB).
• Estimate the number of vulnerabilities in the code using the formula:

BugsA / Estimate = BugsAB / BugsB.

For example, if the first team finds 10 security vulnerabilities and the second
team finds 12, with four in common, the capture-recapture method estimates 30
vulnerabilities in the code. In other words, the review team probably has a lot
more work to do. Note that, although a general bug count is interesting and
useful, we find no evidence at Microsoft of a correlation between general bug
quantity and security bug quantity.
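
As a quick illustration of the arithmetic (a minimal sketch of my own, not code from the article; the function name is made up), the estimate can be computed directly from the two teams’ findings:

#include <stdio.h>

/* Capture-recapture (Lincoln-Petersen) estimate:
   BugsA / Estimate = BugsAB / BugsB  =>  Estimate = BugsA * BugsB / BugsAB */
static double estimate_bug_count(int bugsA, int bugsB, int bugsAB)
{
    if (bugsAB == 0)
        return -1.0;   /* no overlap between the teams: the estimate is undefined */
    return (double)bugsA * (double)bugsB / (double)bugsAB;
}

int main(void)
{
    /* TeamA found 10, TeamB found 12, and 4 were found by both teams. */
    printf("Estimated vulnerabilities: %.0f\n", estimate_bug_count(10, 12, 4)); /* prints 30 */
    return 0;
}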

Review the code

The next step is where the work really begins. Reviewing code involves three
steps:

• Rerun all available code-analysis tools.
• Look for common vulnerability patterns.
• Dig deep into risky code.
Let’s look at each in detail, starting broad but shallow, digging deeper into risky
areas, and finally getting narrow but very deep.

Rerun all available tools

Most software shops have source-code analysis tools. They range from simple,
noisy grep-like tools that look for potential vulnerabilities to full-fledged static-
analysis tools that perform data flow analysis on the code under inspection. Even
the highest compiler warning levels (such as /W4 in Visual C++ or -Wall in gcc)
can indicate areas of code that require more scrutiny. Rerun all tools, and note
every warning or error, even those you’d normally reject as noisy. Some
warnings could actually be errors, or at least hide real errors. Warnings from
multiple tools can indicate code that needs more security review.

Look for common vulnerability patterns

The next step is to review for some of the most common and nefarious bug
types. Most notable among these are integer arithmetic issues, buffer overruns,
cryptographic issues, SQL injection, and cross-site scripting (XSS).

The best way to illustrate the process of searching for these bug types is to use a
simple decision graph. But first, it’s important to understand one simple rule: you
must always know what the attacker controls. If the attacker controls nothing,
there’s no security bug; if the attacker controls a great deal of data used in the
code, the potential for a security bug skyrockets. Consider, for example, the
following code, which shows a buffer overrun:

char t[3]; t[3] = 0;

Is this code a security bug? The answer is no. It is most certainly a bug, but the
attacker controls nothing. If the attacker controlled, say, the array index, then it
would be a first-class security bug.
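
To make the distinction concrete, here is a minimal hypothetical sketch (mine, not from the article) of the same defect when the attacker supplies the index:

#include <stdlib.h>

int main(int argc, char *argv[])
{
    char t[3];

    t[2] = 0;                    /* in-bounds write: merely a corrected version of the bug above */

    if (argc > 1) {
        int i = atoi(argv[1]);   /* the index now comes from outside the program */
        t[i] = 0;                /* first-class security bug: the attacker chooses the write location */
        /* a range check such as: if (i >= 0 && i < 3) t[i] = 0; would remove the issue */
    }
    return 0;
}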

Now let’s look at the common vulnerabilities. I don’t offer remedies here, but the
books listed in the sidebar have great advice on how to fix them.

Integer arithmetic vulnerabilities. The two flow charts in Figure 1 show the
process of reviewing code for integer arithmetic issues. Figure 1a illustrates the
process for finding vulnerabilities in memory allocation, and 1b shows how to find
them during array indexing.
Figure 1. Reviewing for integer overflow issues. (a) Dynamic memory-allocation
functions and (b) array indexing code must be correctly bounds-checked to make
sure there are no arithmetic errors that could lead to security vulnerabilities.

When determining if your code has any dynamic memory-allocation functions,
you can start by searching for *alloc and calls to operator new. That said, you’ll
still have to locate those rare cases in which memory-allocation functions don’t
have *alloc in their names. For example, the integer overflow that led to Microsoft
bulletin MS05-051 was in a function named GetMemSpace.4
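
To give a feel for the allocation case in Figure 1a, here is a hedged sketch of my own (not the MS05-051 code) showing how an attacker-controlled element count can wrap the size calculation passed to the allocator, and what a correctly bounded version looks like:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* 'count' comes from untrusted input. On a 32-bit size_t, count * sizeof(uint32_t)
   can overflow, producing a tiny allocation that the copy then overruns. */
uint32_t *copy_items_bad(const uint32_t *src, size_t count)
{
    uint32_t *dst = malloc(count * sizeof(uint32_t));   /* possible integer overflow */
    if (dst != NULL)
        memcpy(dst, src, count * sizeof(uint32_t));     /* heap overrun if the multiply wrapped */
    return dst;
}

/* Correctly bounded: reject any count that would overflow the multiplication. */
uint32_t *copy_items_checked(const uint32_t *src, size_t count)
{
    if (count > SIZE_MAX / sizeof(uint32_t))
        return NULL;
    uint32_t *dst = malloc(count * sizeof(uint32_t));
    if (dst != NULL)
        memcpy(dst, src, count * sizeof(uint32_t));
    return dst;
}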

Buffer-overrun vulnerabilities. Good tools can find some classes of buffer
overruns, but not all of them, so manual review is still important, particularly
because many tools focus on “dangerous functions” rather than data origin. The
first step is to identify all code that copies buffers, which isn’t restricted to strcpy.
You can start by locating while() or for(;;) loops or “safe” functions such as
memcpy, which programmers commonly use to copy buffers. The bug that led to
the Blaster worm was a simple while loop.
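
The Blaster code itself isn’t reproduced here, but the general shape to look for is an unbounded copy loop; the following sketch (simplified, with invented names) shows the pattern:

#include <wchar.h>

/* The pattern to flag in review: copying attacker-supplied data until a
   terminator is found, with no check against the destination size. */
void extract_name_bad(const wchar_t *untrusted, wchar_t *name /* fixed-size buffer */)
{
    while (*untrusted != L'\\')      /* the attacker decides when, or whether, this stops */
        *name++ = *untrusted++;      /* no bound on 'name': classic buffer overrun */
    *name = L'\0';
}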

Given the prevalence of buffer-overrun exploits that arise from these functions,
Microsoft has actually banned many C runtime library functions for new code, including:

• the strcpy, wcscpy family,
• the strcat, wcscat family,
• the strncpy, wcsncpy family,
• the sprintf, wsprintf family,
• the snprintf, wnsprintf family,
• the vsprintf, wvsprintf family,
• the _alloca family,
• the strtok family,
• the scanf family, and
• the gets family.

In The Security Development Lifecycle, Steve Lipner and I include a full list of
banned functions along with a header file, named banned.h.5

Note the last block in the flow chart in Figure 2: Is function correctly bounded? I
can’t stress enough how important this is. I’ve seen code in which the developer
replaced a potentially insecure function, such as strcpy, with an ostensibly secure
function such as strncpy, but got the buffer size wrong. This is the error that led to
the June 2006 Symantec remote-management-stack buffer overflow advisory;
you can see the coding error on the eEye Web site
(www.eeye.com/html/research/advisories/AD20060612.html).
Figure 2. Reviewing for buffer overflows. This chart shows the high-level flow
from identifying common coding constructs to determining whether they’re used
securely.
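
Here is a hedged sketch of that “right function, wrong bound” mistake (names and sizes are illustrative, not the Symantec code):

#include <string.h>

#define NAME_LEN 32

/* Incorrectly bounded: the limit is the *source* length, so a long 'src'
   still overruns the 32-byte destination. */
void set_name_bad(char dst[NAME_LEN], const char *src)
{
    strncpy(dst, src, strlen(src));
}

/* Correctly bounded: limit by the destination size and terminate explicitly,
   because strncpy does not null-terminate when it truncates. */
void set_name_ok(char dst[NAME_LEN], const char *src)
{
    strncpy(dst, src, NAME_LEN - 1);
    dst[NAME_LEN - 1] = '\0';
}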

Cryptographic vulnerabilities. The chart in Figure 3 shows the process for
finding cryptographic weaknesses in your code. If your code uses cryptographic
algorithms, you need to make sure the code has a base level of “hygiene.” That
means no hard-coded secrets, such as passwords or cryptographic keys, and no
banned algorithms. Microsoft has banned several algorithms for new code
including:
• DES (key size is too small)
• MD4, MD5 (broken)
• SHA-1 (showing signs of weakness)
• RC4 (unless reviewed by a cryptographer)

Figure 3. Reviewing for crypto issues. This chart shows the steps for determining
whether your code uses cryptographic constructs and, if it does, how to restrict
the code to using more robust algorithms and key sizes.

In some cases you have no choice but to use a banned algorithm because an
industry standard requires it. For example, digest authentication6 requires MD5.
But we avoid them whenever possible, or at least limit their use to code that isn’t
“security-related.”

SQL Injection vulnerabilities. SQL injection vulnerabilities are a prime attack
method for compromising sensitive data. The “we use a firewall” argument
simply doesn?t help most of the time, so you must make sure your code is as
free from SQL-injection vulnerabilities as possible.
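
One standard remedy is to keep untrusted input out of the SQL text entirely by using parameterized statements. As a minimal sketch only (using SQLite’s C API for illustration; the users table and column name are invented), a bound parameter is treated as data rather than as SQL:

#include <sqlite3.h>

int count_user(sqlite3 *db, const char *username, int *count)
{
    sqlite3_stmt *stmt = NULL;
    int rc = sqlite3_prepare_v2(db,
        "SELECT COUNT(*) FROM users WHERE name = ?1", -1, &stmt, NULL);
    if (rc != SQLITE_OK)
        return rc;

    /* A value such as "x' OR '1'='1" cannot change the query's structure here. */
    sqlite3_bind_text(stmt, 1, username, -1, SQLITE_TRANSIENT);

    if (sqlite3_step(stmt) == SQLITE_ROW)
        *count = sqlite3_column_int(stmt, 0);

    return sqlite3_finalize(stmt);
}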

Cross-site scripting vulnerabilities. XSS issues can lead to disclosure of a
Web user’s private data, and some XSS-based worms, such as the MySpace
XSS worm, have been seen in the wild. The part of the flow chart in Figure 5 that
requires explanation is with regard to defenses. You can use many defense-in-
depth tricks to help mitigate any XSS vulnerabilities you might miss. These
include HTML-encoding all output that’s derived from untrusted input, and
marking cookies as HttpOnly7 to help mitigate cookie-stealing XSS attacks.
Figure 4. Reviewing for SQL-injection issues. First you must determine whether
your application uses a SQL database; if it does, you must apply appropriate
SQL-statement construction hygiene.
Figure 5. Reviewing for cross-site scripting (XSS) issues. This chart shows the
high-level steps required to identify common XSS issues in Web server code.
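
As a rough sketch of the HTML-encoding defense mentioned above (a hand-rolled encoder for illustration only; in practice you would use your framework’s encoding routine):

#include <stdio.h>

/* Write 's' to 'out' with the characters that matter for XSS replaced by
   HTML entities, so untrusted input is rendered as text, not as markup. */
void html_encode(FILE *out, const char *s)
{
    for (; *s != '\0'; s++) {
        switch (*s) {
        case '<':  fputs("&lt;",   out); break;
        case '>':  fputs("&gt;",   out); break;
        case '&':  fputs("&amp;",  out); break;
        case '"':  fputs("&quot;", out); break;
        case '\'': fputs("&#39;",  out); break;
        default:   fputc(*s, out);       break;
        }
    }
}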

Dig deep into risky code

The process I described earlier finds only low-hanging vulnerabilities. The last
stage of the code-review process is to dig very deeply into risky code. But be
warned that this phase is slow going. Most people tire quickly of reviewing code,
so I suggest setting aside only two hours to review code and having no more
than two such sessions a day.

The first step is to assemble a very small team of usually no more than three or
four people:

• the code author, whose job is just to answer questions, not to play tour
guide,
• a subject-matter expert who is knowledgeable about the code, but isn’t
the code author, and
• a note-taker.

Including more than four generally slows the process down as people tend to
argue and get sidetracked on unrelated issues.
The next step is to identify the riskiest components; you should be able to turn to
the threat model for this list, or else you can determine potential risk in a less
formal manner by applying the heuristics I defined earlier. However you identify
the components, be sure to manually review all anonymously accessible code
that?s exposed to the Internet.

Once you document all these components, build a code review schedule. For
each component under review, determine where data enters the system (this
might be a network call via recv() or recvfrom()) and then trace the data from that
point forward, questioning the nature of the data and how it’s used in the code.
Consider the following questions when manipulating the data (a sketch of such a
trace follows the list):

• Are there logic or off-by-one errors (for example, “>” vs. “>=” or “||” vs.
“&&”)?
• Is the data correctly validated?
• Are buffer lengths constrained correctly?
• Are integer values range-checked correctly?
• Are pointers validated?
• Can the code become inefficient (for example, O(N²)) due to some
malformed data (for example, a hash table look-up becomes a list look-up8)?
• Are errors handled correctly?
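
Here is the sketch promised above: a hypothetical recv()-based handler (the two-byte length prefix and all names are invented) annotated with the kinds of checks these questions are probing for:

#include <stdint.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>

#define MAX_PAYLOAD 512

/* Data enters here; everything after recv() is attacker-controlled. */
int read_message(int sock, uint8_t payload[MAX_PAYLOAD])
{
    uint8_t buf[2 + MAX_PAYLOAD];
    ssize_t n = recv(sock, buf, sizeof(buf), 0);

    if (n < 2)                                  /* errors and short reads handled */
        return -1;

    uint16_t len = (uint16_t)((buf[0] << 8) | buf[1]);   /* length claimed by the sender */

    /* Integer range-checked and buffer length constrained; note that '>' vs '>='
       here is exactly the kind of off-by-one to question during the review. */
    if (len > MAX_PAYLOAD || len > n - 2)
        return -1;

    memcpy(payload, buf + 2, len);              /* copy is now correctly bounded */
    return (int)len;
}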

You can stop the code review when you get to a point in the code that correctly
validates the input. This might occur at more than one point, so make sure to
review all such code.

This process will not find all security vulnerabilities in your code, but it’s an
effective method for scanning large amounts of code quickly for common issues
and reviewing risky code in greater depth. A good source-code editor that lets
you quickly browse, perform lookups, and hop around the call structure is critical.
And never lose track of my greatest advice to you here: always understand what
the bad guy controls.

Purpose:

First, to make sure that the code being produced has sufficient
quality to be released. In other words, it's the acid test for whether the code
should be promoted to the next step in the process.

Second, as a teaching tool to help developers learn when and how to apply
techniques to improve code quality, consistency, and maintainability. Through
thoughtfully evaluating code on a recurring basis, developers have the
opportunity to learn different and potentially better ways of coding.
Techniques:

1. CodeReview.Checklist.Add().

Start creating your own checklist of the areas your code reviews will
cover. You may follow your company's coding standards to start
building your checklist. Go through your code with the checklist and
fix whatever you find. Not only will you reduce the number of things
that the team finds, you'll reduce the time to complete the code
review meeting, and everyone will be happy to spend less time in
the review.

2. CodingStandards.Update().

One of the challenges a developer faces in a company with
combative code review practices is that they frequently don't know
where the next problem will come from. Rapidly updating the coding
standards will help developers learn from past lessons and minimize
the chance of the same issues appearing in the next review
meeting.

3. Developer.Me != Developer.Code .

Remember, development is about creativity. If you have some bugs
in your code, this doesn't necessarily mean you are a bad developer.
Your colleagues who are reviewing the code generally aren't trying
to say that you're a bad developer (or person) by pointing out
something that you missed, or a better way of handling things.
They're doing what they're supposed to be doing by pointing out
better ways. Even if they're doing a bad job of conveying it, it's your
responsibility to hear past the attacking comments and focus on the
learning that you can get out of the process. You need to strive not
to get defensive.

4. Developer.Code.UnitTest().

Always remember that writing comprehensive unit tests will save a lot
of the time we would traditionally spend reviewing basic
functionality. Instead, we can focus on improving the code that has
already been shown to work.
5. Developer.Code.IsObjectOriented.

Make sure you are always following object-oriented best practices.
Every single piece of code can be written in the usual old structured
way, but that is not the way we think; refactor your code to be
object-oriented code.

6. Developer.Code.DesignPatterns.Add().

Whenever possible, use commonly known design patterns that suit
your situation. Design patterns are widely known in the developer
community, have earned its trust and confidence, and have been
tested in many situations, proving their fitness. So stay on the safe
side and use them when you need them.

7. Developer.Code.IsComplex.

Think agile and keep it simple all the time. Assumptions lead to
complex design and complex code. Focus on the current problem
and use the shortest way to solve it; this will reduce your code's
complexity and, in turn, the probability of bugs.

8. Developer.Code.HandleExceptions() .

The code required to handle exceptions (try/catch) is the cheapest
code in the world; it will cost you almost nothing, so don't forget to
handle both the expected and the generic exceptions in your code.

9. Developer.Code.Build.IsFreeOfCodeAnalysisWarning .

The ideal case is for your code to build without any code analysis
warnings. Always remember that a code analysis warning is your
friend; it helps you avoid future bugs and security vulnerabilities that
might affect your code. So pay attention to them.

10. Developer.Code.IsClean .

Follow the naming conventions that your company's coding standards
specify. Keep the outline of your code files neat and clean. Remove
any redundant code or comments. Organize your code into regions
with descriptive names. Avoid grouping multiple types in one code
file. Use XML tags to document types and members. Simply learn
the art of beautiful code.

SQL Code Review Tips

The document for stored procedure coding standards is long. I have identified a
few rules which I think are the most important. All the examples in this article are
written assuming the DBA is a member of the System Administrator (sa) role.
I use the following script to search in all stored procedures.

Listing 1: SQL Server 2000 example using system tables.

USE AdventureWorks;
GO
CREATE PROCEDURE dbo.usp_SearchStoredProcedure
    @SearchTerm VARCHAR(1000)
AS
SELECT DISTINCT so.name
FROM syscomments sc
INNER JOIN sysobjects so ON sc.id = so.id
WHERE sc.text LIKE '%' + @SearchTerm + '%'
  AND OBJECTPROPERTY(so.id, 'IsProcedure') = 1;
GO

The SQL Server 2005 version of the same search uses the sys.sql_modules catalog view:

USE AdventureWorks;
GO
CREATE PROCEDURE dbo.usp_SearchStoredProcedure
    @SearchTerm VARCHAR(1000)
AS
SELECT OBJECT_NAME(object_id)
FROM sys.sql_modules
WHERE Definition LIKE '%' + @SearchTerm + '%'
  AND OBJECTPROPERTY(object_id, 'IsProcedure') = 1;
GO

• All the columns in the SELECT clause must follow the two-part naming
convention: tablename.columnname. This will prevent ambiguity when
searching for that column in all the stored procedures. If you use the
stored procedure shown above to search for tablename.columnname, it will
give a more accurate result than columnname only.
• Usage of * in the SELECT clause is not recommended, for two reasons.
First, it decreases readability. Second, if columns retrieved in SELECT *
are not used by the application, performance is degraded.
All the stored procedures that use SELECT * can be listed using the stored
procedure displayed in listing 1, as follows:
USE AdventureWorks;
GO
EXEC dbo.usp_SearchStoredProcedure '*'
GO

The result set will return all the stored procedures which have used * anywhere
in the text. Using this method is much faster than going through all the stored
procedures and determining which ones use the '*' character.
• In the ORDER BY clause, it is advised to use tablename.columnname
instead of numbers to identify the ORDER BY sequence. Use ORDER BY
tablename.columnname1, tablename.columnname2,
tablename.columnname3, instead of ORDER BY 1, 2, 3. This method is
good for readability as well. It will be consistent with other SELECT
statements where columns are used in the ORDER BY clause but are not
used in the SELECT clause. To list all the stored procedures which have
an ORDER BY clause, run the stored procedure displayed in listing 1 as
follows:
USE AdventureWorks;
GO
EXEC dbo.usp_SearchStoredProcedure 'ORDER BY'
GO

• Dynamic queries reduce the performance of stored procedures, as usage
of EXEC or sp_executesql forces recompilation of the stored procedure.
To list all the Stored Procedures which have an EXEC or sp_executesql
clause, run the stored procedure displayed in listing 1 as follows:
USE AdventureWorks;
GO
EXEC dbo.usp_SearchStoredProcedure 'EXEC'
GO
EXEC dbo.usp_SearchStoredProcedure 'sp_executesql'
GO

• Formatting is important, but excessive spaces between different clauses
make it harder to search in the database by running a query on the system
tables.

Likewise, if you wish to search for any keyword or restricted word defined in your
company coding standard, the above stored procedure can be used to quickly
make a list of stored procedures using them. Although this example searches for
only one word, it can easily be modified to search for multiple words in a stored
procedure.
When searching the list of stored procedures, I usually run the following code,
which generates the commands that return the text of a stored procedure. This is
an easy way to see all the stored procedures in the result pane. This way, each
stored procedure does not have to be opened to view its text, and you do not
have to run sp_helptext to see it.
Listing 2 shows a stored procedure called
usp_SearchStoredProcedure_GenerateHelpText, which generates the code to
display the text of searched stored procedures.

Listing 2: Stored procedure to retrieve the text of searched stored
procedures.

USE AdventureWorks;
GO
CREATE PROCEDURE dbo.usp_SearchStoredProcedure_GenerateHelpText
    @SearchTerm VARCHAR(1000)
AS
SELECT DISTINCT 'EXEC sp_helptext ''' + s.name + '.' + OBJECT_NAME(m.object_id) + ''''
FROM sys.sql_modules m
INNER JOIN sys.objects o ON o.object_id = m.object_id
INNER JOIN sys.schemas s ON s.schema_id = o.schema_id
WHERE Definition LIKE '%' + @SearchTerm + '%'
  AND OBJECTPROPERTY(m.object_id, 'IsProcedure') = 1;
GO

First, change the results to display in text. Then, run the stored procedure in
listing 2 as follows:

USE AdventureWorks;
GO
EXEC dbo.usp_SearchStoredProcedure_GenerateHelpText 'employee'
GO

Result Set:

EXEC sp_helptext 'dbo.uspGetEmployeeManagers'
EXEC sp_helptext 'HumanResources.uspUpdateEmployeeLogin'

Notice that the result set contains the object prefixed with the object owner. This
is required for sp_helptext to execute without any error. If you want to skip this
step and go to the next step, where you can see the text of the stored procedures,
all you need to do is run the stored procedure in listing 3,
usp_SearchStoredProcedure_Generate_ViewSPText. This generates the text of
all the searched stored procedures.

Listing 3: Stored procedure that generates the text of all the searched
stored procedures.

USE AdventureWorks;
GO
CREATE PROCEDURE dbo.usp_SearchStoredProcedure_Generate_ViewSPText
    @SearchTerm VARCHAR(1000)
AS
DECLARE @SQL NVARCHAR(MAX)
SET @SQL = ''
SELECT @SQL = @SQL + ' EXEC sp_helptext ''' + s.name + '.' + OBJECT_NAME(m.object_id) + ''''
FROM sys.sql_modules m
INNER JOIN sys.objects o ON o.object_id = m.object_id
INNER JOIN sys.schemas s ON s.schema_id = o.schema_id
WHERE Definition LIKE '%' + @SearchTerm + '%'
  AND OBJECTPROPERTY(m.object_id, 'IsProcedure') = 1;
EXEC sp_executesql @SQL
GO

First, change the results to display in text. Then, run the procedure in listing 3 as
follows:

USE AdventureWorks;
GO
EXEC dbo.usp_SearchStoredProcedure_Generate_ViewSPText 'employee'
GO

This stored procedure will display the code text of all the matching stored
procedures in the results window. These tricks will help senior DBAs do code
reviews faster for all stored procedures.

Another interesting fact to note about listing 1: if you run the SQL Server 2000
version and the SQL Server 2005 version together in the same transaction, the
performance difference is quite noticeable. In the SQL Server 2005 version, the
query cost is 8% (relative to the batch). As shown in figure 1, in the SQL Server
2000 version the query cost is 92% (relative to the batch). Another reason to
migrate to SQL Server 2005!
Figure 1. Cost of the SQL Server query relative to the batch.

Listing 4: Find created/modified stored procedures.
USE AdventureWorks
GO
CREATE PROCEDURE dbo.usp_FindModifiedSP
    @intDay INT
AS
SELECT name, create_date, modify_date
FROM sys.objects
WHERE type = 'P'
  AND (DATEDIFF(D, create_date, GETDATE()) < @intDay
   OR DATEDIFF(D, modify_date, GETDATE()) < @intDay)
GO

• Sometimes it helps to check how many stored procedures have been changed
or created since the last code review. I create reports of changed/created
stored procedures and visually inspect a few of them. Create the stored
procedure shown in listing 4 and execute it as follows to find stored
procedures created or modified in the last 10 days:

USE AdventureWorks
GO
EXEC dbo.usp_FindModifiedSP 10
GO

You can pass any other value instead of 10. The above stored procedure will list
all user-created stored procedures as well as all system stored procedures. If a
stored procedure has not been modified after it was created, the modify and
create dates will be the same.
There are a few additional things that should be quickly inspected visually.
• Important logic in stored procedures and functions should be properly
documented.
• If possible, all the tables should be joined on indexed columns. That
improves performance significantly.
• Table names should be aliased for readability. I like to alias table names;
however, every organization has its own policy about table aliases.
• All newly created tables should have a primary key and a clustered index.

The rules mentioned here are not final; they can be modified as per
organization policy and business requirements. These rules are for pre-code
review. Final code review may include all the rules of the pre-code review,
including coding standards and performance testing.
