
Dutch Information Worker User Group

SharePoint® eMagazine

Why would I need a My Site?


Controlling the SandBox:
A real business necessity
PowerShell and SharePoint for beginners
FAST Search Server for Power Users
Introducing LINQ to SharePoint
Fortifying the Pillars of Governance with
SharePoint and DocAve
Logging and monitoring in SharePoint 2010
Building a FAQ with new CQWP
functionality in SharePoint 2010
Using the refinement panel to get more
insight into your search results
Nr.3
Sept. 2010
issued by

The new Enterprise Content Management features in SharePoint 2010
SharePoint® eMagazine - September 2010

Contents
Thank you! 3
Why would I need a My Site? 4
Controlling the SandBox: A real business necessity 10
PowerShell and SharePoint for beginners 19
FAST Search Server for Power Users 25
Introducing LINQ to SharePoint 35
About DIWUG 41
Fortifying the Pillars of Governance with SharePoint and DocAve 42
Logging and monitoring in SharePoint 2010 46
Building a FAQ with new CQWP functionality in SharePoint 2010 57
Using the refinement panel to get more insight into your search results 63
The new Enterprise Content Management features in SharePoint 2010 70
About the authors 86

Colophon
DIWUG SharePoint eMagazine
Nr. 3, September 2010
Publisher:
Stichting Dutch Information Worker User Group (DIWUG)
Third edition, September 2010
Editors:
Marianne van Wanrooij (lead)
marianne@diwug.nl
Mirjam van Olst
mirjam@diwug.nl

Special thanks to:


Hannah Swain (reviewer)
and all the authors and sponsors!
Design and layout:
Barth Sluyters
http://home.12move.nl/barthsluyters/
©2010. All rights reserved. No part of this magazine may be reproduced in any way
without prior written permission of DIWUG or the author. All trademarks mentioned in
this magazine are the property of their respective owners.


Thank you!
Editor’s note
Thank you (again) for downloading the DIWUG SharePoint eMagazine. Although
this is only the third issue, we have a special one for you: over 80 pages of
content for developers, IT pros and power users. We hope you'll enjoy reading this
magazine!
It seems that SharePoint 2010 is still gaining popularity. Although there was (and
is) an economic crisis, there is high demand for SharePoint expertise. Maybe this
is the reason why more and more companies are looking into SharePoint to try
to get a piece of the action. Although it's great to have more people on board in
the community, I don't think that management should decide overnight to "do
SharePoint". When somebody tells me he or she is a SharePoint guru, my next
question is always: "in which area?". There are only a few real SharePoint gurus!
SharePoint is a complex product with many components and many different kinds
of business solutions and possibilities. Each part of the SharePoint pie is an area of
expertise in its own right.
In this magazine some of these experts and enthusiasts have written about their
areas of expertise. Topics like My Sites, PowerShell, FAST Search, LINQ to
SharePoint, the Content Query Web Part, logging and monitoring, governance and
Enterprise Content Management show the variety of components and the complexity
of SharePoint 2010. But isn't that exactly what we like about SharePoint?

I hope you’ll enjoy this issue of the DIWUG SharePoint eMagazine. And don’t
forget to give us your feedback!

Marianne van Wanrooij


lead editor
DIWUG SharePoint eMagazine
Marianne@diwug.nl

This issue is sponsored by


See page

Getronics Consulting 9
European SharePoint Best Practices Conference 24
Sponsored article by AvePoint 42
Macaw 56
AvePoint 85

You are invited to click on the ads for direct contact.


Why would I need a My Site?


by Matthias Fonteyne
Microsoft SharePoint Server 2010 My Site functionality is the platform's solution for a
personal online environment. It provides personal and shared document management,
personal and colleagues' profile information, and knowledge about personal interests and news.
It's only one click away, via the login menu on every site.

Figure 1: Accessing the My Site.


This article provides an overview of the advantages, functionality and practical
examples of how to use this personal space in a professional environment. Pricing
and licensing will also be discussed. This should give you a comprehensive answer
to the often-heard question "Why would I need a My Site?".

Internal networking
Nobody can deny the influence of social media in both personal and professional
situations. Nowadays, the mix of personal and professional networks is a given.
Knowledge workers expand their professional network with colleagues worldwide.
The knowledge gathered through external and internal networking is becoming as
valuable as the knowledge gained from formal training. However, internal networking
can play an even bigger part in professional development than one might think.
Microsoft SharePoint Server 2010 offers employees a personalized platform to expose
their knowledge and expertise to colleagues within company walls. They can store
their personal contact information on their profile page. Anybody can read all about
their colleagues' profiles, activities and position within the organization. This makes
it easy to find the perfect sparring partner within one's own organization: employees
might discover unknown expertise in close colleagues or get to meet new people
with interests similar to their own. The links to profiles are available throughout the
entire farm.

Figure 2: The profile page header.


The header of the profile contains the worker's personal and contact information.
This information can be managed by the users themselves. Most fields have a privacy
setting, which allows employees to limit who can see them (e.g. a home phone
number).
The top zone provides a space to give a brief shout. It can be used to inform people
about current activities, questions or thoughts. Imagine a CEO giving a quick shout
on what his day looks like, a colleague shouting for some assistance with a blocking
issue or a secretary asking everybody to get their timesheets in on time. This can be
compared to a tweet on Twitter or a status update on Facebook or LinkedIn.

Knowledge sharing
Every organization has a massive collection of extremely valuable knowledge.
Wouldn't it be great if that useful knowledge was shared with colleagues, without
crossing the company's firewall? Of course, this knowledge should be filtered to
match your own interests and leave out information that is not interesting or
relevant. In the latest version of SharePoint, Microsoft SharePoint Server 2010,
tagging is introduced. It allows everybody to add specific keywords to content
items such as documents, pages and list items across entire site collections.
On the Tags and Notes tab, an overview is given of the user's activity regarding tags
and notes. Tagging can be done anywhere in the site collections and is aggregated
on this personal page. It gives co-workers an idea of the activity of colleagues, not
only in their own office, but in offices all over the world. This brings people closer
together and greatly stimulates international cooperation and knowledge sharing.
Can you imagine looking up a person in your company whom you briefly heard about?
Somebody told you he might be interesting to add to your international team. With
Microsoft SharePoint Server 2010 My Sites, his online profile is just a few clicks away.
You can read about his interests, expertise, past projects and current activity, get his
personal contact information and even leave him a note on his note board.

Wouldn’t it be great if that knowledge was shared with colleagues,


without crossing the company’s firewall?

Figure 3: The Tags and Notes tab.


Furthermore, everyone can follow certain tags to stay updated on all activities
regarding these particular subjects. This information is displayed on the start
page of their personal environment. Because users can choose the tags they prefer,
the communication preserves its to-the-point and sharply personalized character. It
gives users complete control, filtering out any undesired content. Knowledge sharing
requires sustained contributions to keep the medium alive and up-to-date. This
smart way of gathering information maximizes commitment and enthusiasm,
keeping internal knowledge flows alive and kicking.


This smart way of information gathering maximizes commitment and


enthusiasm, keeping internal knowledge flows alive and kicking.

Security
The most valuable asset of many companies is their in-house knowledge. Although
knowledge sharing is essential within and outside the company's premises, you would
like to limit the exchange of company-critical information with the outside
world. Of course, externally managed platforms such as LinkedIn or Twitter offer a
much larger audience. This hands your workers a huge knowledge exchange platform,
which can be extremely valuable in some scenarios. However, it might also form a
disclosure risk for company-confidential information. An internal, closed environment
like Microsoft SharePoint Server 2010 offers a secure place to store and exchange
sensitive information, minimizing the risk of leaks to the outside world.

Figure 4: Item-level Permission Management.


Within the Microsoft SharePoint Server 2010 environment, it is possible to manage
access and permissions at every possible level, ranging from the complete farm
down to one particular document, page or list item. Users are able to share information
with, or hide it from, their colleagues in an easy and granular manner. However, it is
advisable to manage security at the highest possible level, as assigning permissions at
the item level quickly becomes hard to manage. Item-level security should only be used
when an exception to its container is required. The standard method of profile
management is through the company's Active Directory users and groups, which can be
used in the SharePoint environment, including automated profile synchronization. This
is the first step towards minimizing maintenance effort and costs.
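As an illustration of item-level permissions, a server-side object model sketch could look like this (a hedged example: the URL, library name and account are placeholders, and the code only runs on a SharePoint 2010 server):

```csharp
using Microsoft.SharePoint;

class PermissionSketch
{
    // Break inheritance on a single item and grant one user read access.
    static void GrantReadOnSingleItem()
    {
        using (SPSite site = new SPSite("http://intranet"))
        using (SPWeb web = site.OpenWeb())
        {
            SPListItem item = web.Lists["Documents"].Items[0];

            // Stop inheriting permissions from the library, copying the current ones.
            item.BreakRoleInheritance(true);

            SPUser user = web.EnsureUser(@"DOMAIN\jdoe");
            SPRoleAssignment assignment = new SPRoleAssignment(user);
            assignment.RoleDefinitionBindings.Add(web.RoleDefinitions["Read"]);
            item.RoleAssignments.Add(assignment);
        }
    }
}
```

In line with the advice above, such item-level exceptions should stay rare; the default is to let items inherit from their container.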

Interfaces with external systems


The amount of data in the many different systems within every company requires
information exchange and synchronization. Microsoft SharePoint Server 2010's Business
Connectivity Services offers a method of importing and exchanging information from
other systems such as SAP, PeopleSoft and SQL Server, as well as other custom information
providers. This information is exposed within SharePoint as external lists. This makes
the data easy to combine with data stored within the SharePoint environment, treating
it as if it were all internal data.
The Microsoft SharePoint Server 2010 My Site includes the Organization Browser. This
interactive Silverlight control represents the location of the user in the organization.
It provides a rich and interactive user interface on the internal company organization.
This information is provided by the organization's Active Directory and is synchronized
with SharePoint's own user information store. This feature enables users to quickly
locate colleagues, find contact information and read about their own position within
the organization. Whenever the information in the Active Directory is updated, the
User Profile Synchronization job takes care of the update in the SharePoint user
information store. But why not imagine a custom solution with automated contact
information updates on everybody’s profile, based on back-end HR systems?


Figure 5: The standard Organization Browser, with Silverlight technology.


One might consider displaying up-to-date, personalized sales reports, based on
SAP sales data. These can be visualized on everybody’s own personal environment,
enhancing the feeling of responsibility and boosting productivity.
Each company has its own possibilities and challenges, but they all share the
need to expose their information within a single working environment. Microsoft
SharePoint Server 2010 offers this platform and the possibility to link custom
information providers as a transparent and simple front-end to the personalized
environment of the My Site.

Link custom information providers as a transparent and simple front-end

Centralized backup and storage


Every My Site offers a place for document storage called My Content. This environment
facilitates personal content management within a user's My Site. It allows document
and picture storage for both personal and shared use. There is also a blogging area,
which colleagues can read and comment on. All shared content is visible to
colleagues on the Content tab of the user's profile. This document storage can
replace local "My Documents" and "My Pictures" folders or shared folders on
network drives. In scenarios where policies prevent users from storing any data
on local or removable drives, online storage provides a perfect alternative.

Figure 6: User’s content tab, with personal and shared content.


All content in SharePoint is stored in one or more centralized SQL Server content
databases, not directly on the file system. The out-of-the-box document management
is used in the My Site content environment as well. This allows for complete version
control, approval workflows and access management. Documents in the Personal
Documents area are visible to the user only, while Shared Documents can be shared
with all or just selected system users.
Because of this centralized way of working, all storage and backup are managed by the
farm administrators. This has a positive effect on both the cost of control and the
complexity of these essential mechanisms, compared to a decentralized approach.
The Microsoft SharePoint Server 2010 Central Administration website has a built-in
interface for quick and easy backup schedules and restore scenarios.
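Besides the Central Administration interface, a full farm backup can also be scripted from the SharePoint 2010 Management Shell; a minimal sketch, where the backup share path is a placeholder:

```powershell
# Full farm backup to a UNC share the farm account can write to.
Backup-SPFarm -Directory \\backupserver\spbackup -BackupMethod Full
```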

A positive effect on both the cost-of-control and the complexity.

Pricing and Licensing


Microsoft SharePoint 2010 is available in three editions: SharePoint Foundation 2010,
SharePoint Server 2010 Standard and SharePoint Server 2010 Enterprise. SharePoint
Foundation 2010 can be installed and used with a valid Windows Server 2008 license
(depending on the server setup, a SQL Server license is required as well). However, to
use the My Site functionality, SharePoint Server needs to be installed and the license
must be upgraded to the Standard or Enterprise edition. Upgrading the license consists
of two parts. First, the company has to purchase a Microsoft SharePoint Server 2010
license for every server on which the system is running. Second, for every internal
user or device accessing the system, a Client Access License needs to be bought.
The purchase of a SharePoint Server 2010 farm with My Site functionality is
an investment in an environment where people can improve their productivity through
better internal networking and more efficient and secure communication. It doesn't
only give you the My Site functionality, but all the extended features of the SharePoint
Server 2010 platform, such as collaboration sites, online forms, search and much more.
Contact your Microsoft licensing partner for more information and detailed pricing.

Conclusion
Online social media are the newest way to share knowledge within the organization
and beyond. One way to make sure the organization's confidential information
remains within company walls is to provide these capabilities inside those same
company boundaries. Microsoft SharePoint Server 2010 offers this functionality in
a secure and closed environment. It helps knowledge workers stay up-to-date in
their knowledge area by exchanging valuable information with co-workers within
their own organization.
The out-of-the-box mechanisms for document management, access control, backup
and restore and Business Connectivity Services can lower the cost of control and
complexity of storage, and enhance transparency as well as ease of use for both
administrators and end users.
Microsoft SharePoint Server 2010 My Site is a complete, secure and mature
environment for knowledge sharing, internal networking and information exposure.
The added value of these processes is higher productivity, more openness and a
stronger team spirit. Every company with a strong knowledge sharing and collaboration
need should invest in Microsoft SharePoint Server 2010 My Sites and start working
more effectively.
Start discovering your in-house knowledge today!

... by the way ...


Using 'InPrivate Browsing' helps with Ribbon development!
Try it!
Wesley Hackett - http://weshackett.spaces.live.com/blog/

CONNECT
SHARE
DISCOVER

IN THE NEW WORLD OF WORK, EMPLOYEES INCREASINGLY MEET ONLINE


RATHER THAN IN REAL LIFE. AS A RESULT, THEY ARE EVER MORE DEPENDENT
ON INFORMATION AND COMMUNICATION TECHNOLOGY TO CONNECT WITH THEIR
PEERS, SHARE RELEVANT INFORMATION AND DISCOVER VALUABLE CONTENT.

GETRONICS CONSULTING PROVIDES PORTAL AND COLLABORATION SOLUTIONS,


AS WELL AS CONSULTANCY SERVICES TO HELP YOUR STAFF ADAPT TO THE
NEW WORLD OF WORK AND TO EMPOWER THEM WITH TOOLS TO CONNECT,
SHARE AND DISCOVER!

getronicsconsulting.com

Getronics Consulting

Controlling the SandBox:


A real business necessity
by Gustavo Velez
Sandbox solutions in SharePoint 2010 allow site collection administrators to deploy
SharePoint solutions in a contained and safe environment, giving them the authority
to manage the installation of components and applications without the intervention
of SharePoint Administrators and without access to the Central Administration.

For a complete review of the SharePoint Sandbox, its use and application, examine the article
“Sandboxed Solutions in SharePoint 2010” written by Mirjam van Olst in the
January 2010 SharePoint eMagazine issue (http://www.sdn.nl/LinkClick.aspx?fileticket
=nCffQTC9R7g%3d&tabid=139&mid=668&forcedownload=true).

Microsoft has decided that the sandboxed solution is the default solution type for SharePoint.
To underline this, the default solution type for new SharePoint projects in Visual Studio is
sandboxed (except when it is impossible to implement the solution in the sandbox).
However, as both choices (sandboxed and farm deployment) are available, it's up to you to
decide which one is best for your specific situation. That said, Microsoft expects an
explosion of commercial sandboxed solutions (these are not yet visible, as
SharePoint 2010 is only a few months old).

Sandboxes from a business perspective


From an enterprise perspective, it is possible to say that there are roughly three types
of companies using SharePoint at the moment:

n Small companies, where the SharePoint implementation probably has a
maximum of a few hundred users. These companies don't have the technical
capacity to develop their own solutions; if customization is necessary, they buy
commercial components. As the complete implementation is in the hands of a
very small number of administrators, it doesn't really matter to the company
whether the solutions are sandboxed or not.
n Medium-sized companies with a maximum of a few thousand users. They probably
have a skilled IT department that maintains tight control over the SharePoint
implementation. Possibly the company structure is highly centralized and the
administrators will not allow the use of sandboxed solutions.
n Large companies with a large number of users; probably highly decentralized
organizations. They generally have a well-functioning SharePoint Center of
Excellence that controls all governance aspects of the SharePoint implementation.
The architects in the Center of Excellence are now dealing with a
problem: they need to choose whether or not to allow the use of sandboxes.
Due to the decentralized nature of the company, it is desirable that users
can install their own customizations, relieving the administrators of heavy
obligations. On the other hand, they don't like the idea of a chaotic growth
of implemented customizations, which would make governance a nightmare,
including future migrations and the implementation of patches and service
packs.

As you can see, deciding on the use of sandboxed solutions is a governance issue, not
a technical one. As the number of users increases, the importance of governance
increases as well, and with it the importance of sandboxed solutions.


If the company decides to shut down the Sandbox, there are different options. Each
has advantages and disadvantages:

n Stop the "Microsoft SharePoint Foundation Sandboxed Code Service". The
complete sandbox infrastructure in the farm will not work: the Solutions Gallery
is still visible, but it generates an error if a user tries to upload a solution.
n Grant site collection administrator permissions only to SharePoint administrators.
Only administrators are then able to upload new solutions, eliminating
the advantage and flexibility of sandboxed solutions (users cannot install and
deploy customizations themselves). Users will not have access to the Solutions
Gallery. This must be implemented for each site collection separately.
n Remove admin rights for the Solutions Gallery. This has the same effect as the
previous option, but users are able to see the Solutions Gallery (though they
cannot use it).
n Reduce the available resources for the sandbox to zero. Very similar to the
first option, but the resources are assigned at the site collection level,
making more granular control possible.
n Use one or more sandbox validators.
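The first and fourth options can be scripted. A hedged sketch for the SharePoint 2010 Management Shell (the site URL is a placeholder; verify the service name and cmdlet parameters in your own farm):

```powershell
# Option 1: stop the Sandboxed Code Service on every server where it runs.
Get-SPServiceInstance |
    Where-Object { $_.TypeName -like "*Sandboxed Code Service*" } |
    Stop-SPServiceInstance

# Option 4: reduce the sandbox resource quota of one site collection to zero.
Set-SPSite -Identity "http://intranet/sites/teamsite" `
    -UserCodeMaximumLevel 0 -UserCodeWarningLevel 0
```

Note that the quota approach must be repeated for each site collection, which is exactly what makes it more granular.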

SharePoint Online is a bit of a special case. Sandboxed solutions were developed with
SharePoint Online in mind. In a SharePoint Online environment sandboxed solutions
will be the only way to deploy customizations to your sites.

Validators to control the sandbox


Each time a SharePoint solution is uploaded to the Solutions Gallery in any site
collection, SharePoint activates the sandbox validator infrastructure. Validators are
classes that inherit from SPSolutionValidator, are installed in the GAC and are activated
in the system. A validator checks the application against certain business rules and
allows or prevents its installation. This is the best way to allow the use of the sandbox
infrastructure without the need for any local configuration, exercising the necessary
governance in a centralized way.
It is possible to imagine different ways of implementing validators from a
governance point of view: allow new software only if it is signed with a special
certificate, enforce enterprise naming conventions, allow only certain types of
customizations (e.g. only web parts), etc. In a global enterprise scenario, the best
solution would be to combine the benefits of the sandbox with control over the
installed software by using validation based on strong name keys.
In an organization where everyone can develop components for SharePoint and install
them in the sandbox, the SharePoint Center of Excellence (CoE) can create a validator
that checks the public key used to compile the software: if the source code is not
signed with the enterprise key, the component cannot be installed. In this case, the
CoE owns the public and private keys, typically stored in a file with the extension .snk;
developers may create their own keys for development and testing purposes. After
approval by the CoE, the code would be recompiled with the enterprise key to allow
installation in the sandbox(es).
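Creating such an enterprise key pair is done with the Strong Name tool (sn.exe); a short sketch, with example file names:

```powershell
# Create the enterprise key pair (kept private by the CoE).
sn -k Enterprise.snk

# Extract the public key only, e.g. to hand out to developers.
sn -p Enterprise.snk EnterprisePublic.snk
```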

Creating a sandbox validator


To create a validator, start Visual Studio 2010 and create a new project using
the SharePoint 2010 "Empty SharePoint Project" template. Ensure that .NET
Framework 3.5 is used, assign a name to the project ("ValidatorKey" in the following
example) and select "Deploy as a farm solution" in the wizard.
Add a new class to the project and rename it accordingly ("ValidatorStrongName"
in the example). The first step is to decorate the class with its own identifier and
inherit from SPSolutionValidator:


[Guid("D84E464F-5056-4082-A75F-D10C0786DB89")]
class ValidatorStrongName : SPSolutionValidator

Listing 1: ValidatorStrongName.
SPSolutionValidator is a class in the Microsoft.SharePoint.UserCode namespace, so
a "using" directive for this namespace must be in place. A using directive for
Microsoft.SharePoint.Administration is necessary as well. The base source code for
the class would be:

class ValidatorStrongName : SPSolutionValidator
{
    private const string SolutionValidatorName = "Strong Name Validator";

    public ValidatorStrongName() { }

    public ValidatorStrongName(SPUserCodeService myUserCodeService)
        : base(SolutionValidatorName, myUserCodeService)
    {
        this.Signature = 10001;
    }

    public override void ValidateSolution(SPSolutionValidationProperties properties)
    {
        base.ValidateSolution(properties);
        properties.Valid = true;
    }

    public override void ValidateAssembly(SPSolutionValidationProperties properties, SPSolutionFile assembly)
    {
        base.ValidateAssembly(properties, assembly);
        properties.Valid = true;
    }
}

Listing 2: The base source code of the validator class.


The class needs two constructors: the default (empty) constructor and a second
constructor used to define the name of the validator and its signature property
(an integer). Validators use two methods that must be overridden:

n ValidateSolution, which allows working with the SharePoint solution. The
SPSolutionValidationProperties "properties" argument contains all the
necessary information about the solution, such as its name, the site collection
(SPSite) where the validator is acting and the files contained in the solution.
The most important property is the boolean "Valid", which controls whether
the solution will be accepted or rejected by the validator.
n ValidateAssembly, which follows the structure of the first method, but accepts a
second argument containing all the information about the solution's assembly file.
Again, the boolean property "Valid" makes it possible to control whether the
validated solution is accepted or rejected.

SharePoint 2010 installs one empty validator by default, called "Default solution
validator"; empty means that the "Valid" properties in both methods are hard-coded
to "true". This validator can be found in the Microsoft.SharePoint.UserCode
namespace as the class SPDefaultSolutionValidator. The class can be decompiled
to observe its code structure, which is similar to the code shown above.
In the example, we would like to validate that the SharePoint solution name respects
the established enterprise rules of a hypothetical company, which state that any
solution name must follow the structure "MyCompany.[Department].[SolutionName]".
To enforce this business rule, we can use a helper routine that splits the name into its
components and checks the first one:


private bool SolutionNameIsCorrect(string SolutionName)
{
    bool blnReturn = false;

    char[] mySplitter = { '.' };
    string[] NameParts = SolutionName.Split(mySplitter);
    if (NameParts[0].Equals(FirstPartSolutionName) == true)
        blnReturn = true;

    return blnReturn;
}

Listing 3: The helper routine.
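Outside SharePoint, the splitting logic can be exercised on its own. A standalone sketch (the class name and the "MyCompany" prefix are assumptions for illustration):

```csharp
using System;

class NameRuleDemo
{
    const string FirstPartSolutionName = "MyCompany";

    public static bool SolutionNameIsCorrect(string solutionName)
    {
        // The first dot-separated segment must match the enterprise prefix.
        string[] nameParts = solutionName.Split('.');
        return nameParts[0].Equals(FirstPartSolutionName);
    }

    static void Main()
    {
        Console.WriteLine(SolutionNameIsCorrect("MyCompany.HR.Timesheets")); // True
        Console.WriteLine(SolutionNameIsCorrect("Contoso.HR.Timesheets"));   // False
    }
}
```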


Then, the ValidateSolution method can use the routine:

public override void ValidateSolution(SPSolutionValidationProperties properties)
{
    base.ValidateSolution(properties);

    properties.Valid = SolutionNameIsCorrect(properties.Name);
}

Listing 4: The ValidateSolution method.


"FirstPartSolutionName" is a string constant or a dynamically retrieved value
containing the required prefix.
The second validation to implement in the example ensures that the assembly has
been signed with the enterprise strong name key of the hypothetical company.
There are different ways to read the public key from the assembly, but probably the
easiest is to use the Evidence property of System.Reflection.Assembly:

private bool SignatureIsCorrect(SPSolutionFile assembly)
{
    bool blnReturn = false;

    Assembly myAssembly = Assembly.Load(ConvertAssembly(assembly));
    IEnumerator keyEnumerator = myAssembly.Evidence.GetHostEnumerator();
    while (keyEnumerator.MoveNext())
    {
        if (keyEnumerator.Current.GetType() == typeof(StrongName))
        {
            StrongName myStrongName = (StrongName)keyEnumerator.Current;
            if (myStrongName.PublicKey.Equals(PublicKeyBlob) == true)
                blnReturn = true;
        }
    }

    return blnReturn;
}

Listing 5: The second validation.


The assembly comes into the routine as a ReadOnlyCollection<byte> object;
unfortunately, it is not possible to convert a ReadOnlyCollection to an array (the type
accepted by the System.Reflection.Assembly.Load method) without creating a clone
by iterating over it, breaking the contract of being read-only. This could be a
security concern, but because the clone is not exposed outside the class, it is an
acceptable risk. The routine "ConvertAssembly" accepts the ReadOnlyCollection
assembly file as an argument and returns its equivalent byte array.


private byte[] ConvertAssembly(SPSolutionFile assembly)
{
    ReadOnlyCollection<byte> readOnlyAssembly =
        new ReadOnlyCollection<byte>(assembly.OpenBinary());
    byte[] byteArrayAssembly = new byte[readOnlyAssembly.Count];
    readOnlyAssembly.CopyTo(byteArrayAssembly, 0);

    return byteArrayAssembly;
}

Listing 6: ConvertAssembly.
The SignatureIsCorrect helper routine uses a string constant "PublicKeyBlob", defined
at the beginning of the class, to perform the validation. For simplicity, the public key
value has been hard-coded, but it can be stored using any dynamic method (for
example in the web.config file). "Using" directives for the following namespaces are
necessary:

using System.Collections;
using System.Reflection;
using System.Security.Policy;
using System.Collections.ObjectModel;

Listing 7: "Using" directives for the required namespaces.


To create the comparison constant, it is necessary to first read the key from the
strong name key file (.snk) or from a signed assembly. This can be done using the
"sn" tool with its "-Tp" option to extract the blob from an assembly:

C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC> sn -Tp [pathTo\StrongNamedAssembly.dll]

Listing 8: Using the “sn”-tool.


Or using PowerShell:

$bytes = $null;
$assembly = "pathTo\StrongNamedAssembly.dll";
$bytes = [System.Reflection.Assembly]::LoadFrom($assembly).GetName().GetPublicKey();
$key = "";
for ($i = 0; $i -lt $bytes.Length; $i++)
{
    $key += "{0:x2}" -f $bytes[$i];
}
$key;

Listing 9: Extracting the public key with PowerShell.


To prevent tampering, the public key is always hashed: the "public key token" is the
last 8 bytes of the SHA-1 hash of the public key. Given the assembly, it is relatively
easy to validate the public key token as well, as the following routine shows:

private bool PublicKeyTokenIsCorrect(SPSolutionFile assembly)
{
    bool blnReturn = false;

    Assembly myAssembly = Assembly.Load(ConvertAssembly(assembly));
    Byte[] myPublicKeyTokenB = myAssembly.GetName().GetPublicKeyToken();
    string myPublicKeyTokenS = string.Concat(
        myPublicKeyTokenB.Select(myPKT => myPKT.ToString("x2")).ToArray());

    if (myPublicKeyTokenS.Equals(PublicKeyToken))
        blnReturn = true;

    return blnReturn;
}

Listing 10: Validating the public key token.


The "GetPublicKeyToken" method returns the token as a byte array; using a LINQ
expression, it is converted to a hex string and compared to the hard-coded value of
the original signature file. A "using" directive for System.Linq is necessary.
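The byte-to-hex conversion can be tried outside SharePoint; a standalone sketch (the token bytes are sample data only, here the well-known .NET Framework public key token):

```csharp
using System;
using System.Linq;

class TokenHexDemo
{
    static void Main()
    {
        // Sample data: the well-known .NET Framework public key token.
        byte[] token = { 0xb7, 0x7a, 0x5c, 0x56, 0x19, 0x34, 0xe0, 0x89 };

        // Same LINQ conversion as in the validator: each byte becomes two hex digits.
        string hex = string.Concat(token.Select(b => b.ToString("x2")).ToArray());
        Console.WriteLine(hex); // b77a5c561934e089
    }
}
```

Note that "x2" (rather than "x") guarantees a leading zero for bytes below 0x10, which matters when comparing the result to a fixed token string.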
To show in the SharePoint user interface that the validation has been rejected, the
class SPSolutionValidationProperties has two properties: one for an error message
("ValidationErrorMessage") and one to redirect to an aspx page ("ValidationErrorUrl").
To make use of this infrastructure, create an aspx page in the validator
project (in Visual Studio 2010, go to Project - Add New Item - Application Page), giving
it a distinctive name ("SolutionValidationPage.aspx" in the example). In the
PlaceHolderMain, add some text indicating that the solution cannot be installed, and in
the PlaceHolderPageTitle, add a title for the page itself:

<asp:Content ID="Main" ContentPlaceHolderID="PlaceHolderMain" runat="server">
    The validation of the SharePoint Solution has been rejected
</asp:Content>

<asp:Content ID="PageTitle" ContentPlaceHolderID="PlaceHolderPageTitle"
    runat="server">
    Validation Solution
</asp:Content>

Listing 11: Content for PlaceHolderMain and PlaceHolderPageTitle.


Then set the two error properties from ValidateSolution and ValidateAssembly.
The final source code of both methods will be:

public override void ValidateSolution(SPSolutionValidationProperties
    properties)
{
    base.ValidateSolution(properties);

    bool NameIsValid = SolutionNameIsCorrect(properties.Name);

    if (NameIsValid)
        properties.Valid = true;
    else
    {
        properties.Valid = false;
        properties.ValidationErrorMessage =
            "The Solution is invalid because the name is incorrect";
        properties.ValidationErrorUrl =
            "/_layouts/ValidatorKey/SolutionValidationPage.aspx";
    }
}

public override void ValidateAssembly(SPSolutionValidationProperties
    properties, SPSolutionFile assembly)
{
    base.ValidateAssembly(properties, assembly);

    bool SignatureIsValid = SignatureIsCorrect(assembly);
    bool KeyTokenIsValid = PublicKeyTokenIsCorrect(assembly);

    if (SignatureIsValid && KeyTokenIsValid)
        properties.Valid = true;
    else
    {
        properties.Valid = false;
        properties.ValidationErrorMessage =
            "The Solution is invalid because the Strong Name is incorrect";
        properties.ValidationErrorUrl =
            "/_layouts/ValidatorKey/SolutionValidationPage.aspx";
    }
}

Listing 12: The final source code of ValidateSolution and ValidateAssembly.
Finally, the validator must be activated when the project is installed in SharePoint.
This can be done by adding a feature and its event receiver. Add the feature to the
project, change its name to “ValidatorKey” and add the event receiver. Uncomment
the methods FeatureActivated and FeatureDeactivating. Add a “using” directive to
Microsoft.SharePoint.Administration.


In the FeatureActivated method, add the validator class to the
SPUserCodeService.Local.SolutionValidators collection. In the FeatureDeactivating
method, remove the class from the collection using its GUID (defined in the attribute
decorating the validator class):

public override void FeatureActivated(SPFeatureReceiverProperties
    properties)
{
    SPUserCodeService.Local.SolutionValidators.Add
        (new ValidatorStrongName(SPUserCodeService.Local));
}

public override void FeatureDeactivating(SPFeatureReceiverProperties
    properties)
{
    SPUserCodeService.Local.SolutionValidators.Remove
        (new Guid("D84E464F-5056-4082-A75F-D10C0786DB89"));
}

Listing 13: Adding and removing the validator in the feature event receiver.


Compile and deploy the project. Check that the validator has been correctly configured
by using PowerShell. To do this, open the “SharePoint 2010 Management Shell” and
use the following commands:

$spusercodeservice =
[Microsoft.SharePoint.Administration.SPUserCodeService]::Local;
$spusercodeservice.SolutionValidators;

Listing 14: Checking the correct configuration of the validator.


If the validator has been installed and activated, PowerShell will show its configuration:

Figure 1: The validator configuration shown in PowerShell.


To test the validator, create two web parts with Visual Studio 2010, one using the
correct naming and strong name key, the other using a different key file. The first one
can be installed without a problem; the second will show the error page when you
try to activate the Solution:

Figure 2: Rejection of Validation of the SharePoint Solution.


PowerShell and Visual Studio to control the Sandbox


As mentioned in Mirjam’s article which was cited at the beginning, SharePoint uses
fourteen metrics to calculate the resources used by each sandbox. The metrics and
their values can be visualized using PowerShell:

$spusercodeservice =
[Microsoft.SharePoint.Administration.SPUserCodeService]::Local
$spusercodeservice.ResourceMeasures

Listing 15: Visualizing the metrics and their values using PowerShell.

Figure 3: SharePoint 2010 Management Shell.


The picture shows only the last metric. Note that the most important properties for
the resources calculation are "MinimumThreshold", "ResourcesPerPoint" and
"AbsoluteLimit". PowerShell allows you to change these values as well: for example,
to change the "AbsoluteLimit" property of the "UnresponsiveprocessCount" metric,
use a script such as:

$SPUserCodeService =
    [Microsoft.SharePoint.Administration.SPUserCodeService]::Local
$oneFactor = $SPUserCodeService.ResourceMeasures["UnresponsiveprocessCount"]
$oneFactor.AbsoluteLimit = 3
$oneFactor.Update()

Listing 16: Script to change the "AbsoluteLimit" of a metric.


In a similar way, using a console application in Visual Studio 2010 (.NET Framework
3.5), it is possible to read the metrics:

static void ReadSandBoxFactors()
{
    SPUserCodeService userCodeService = SPUserCodeService.Local;
    SPResourceMeasureCollection resourceMeasureColl =
        userCodeService.ResourceMeasures;

    foreach (SPResourceMeasure resourceMeasure in resourceMeasureColl)
    {
        Console.WriteLine(resourceMeasure.Name + " - " +
            resourceMeasure.AbsoluteLimit.ToString() + " - " +
            resourceMeasure.DiskSizeRequired.ToString() + " - " +
            resourceMeasure.MinimumThreshold.ToString() + " - " +
            resourceMeasure.ResourcesPerPoint.ToString());
    }
}

Listing 17: Reading the metrics with a console application in Visual Studio 2010.
And change them:


static void ModifySandBoxFactor(string FactorName, double FactorNewValue)
{
    SPUserCodeService userCodeService = SPUserCodeService.Local;
    SPResourceMeasureCollection resourceMeasureColl =
        userCodeService.ResourceMeasures;

    SPResourceMeasure resourceMeasure =
        resourceMeasureColl[FactorName];
    resourceMeasure.AbsoluteLimit = FactorNewValue;
    resourceMeasure.Update();
}

Listing 18: Changing the metrics.


Finally, the validators are also reachable from the SharePoint object model, as the
following routine shows:

static void ReadValidators()
{
    SPUserCodeService userCodeService = SPUserCodeService.Local;
    SPSolutionValidatorCollection solutionValidatorColl =
        userCodeService.SolutionValidators;

    foreach (SPSolutionValidator solutionValidator in solutionValidatorColl)
    {
        Console.WriteLine(solutionValidator.Name + " - " +
            solutionValidator.Status.ToString());
    }
}

Listing 19: Reading the registered validators.

Conclusions
The SharePoint sandboxed approach is an important feature that offers a plethora
of opportunities by allowing users to install their own software. But freedom can
also create chaos. Sandbox validators can reduce the level of anarchy and maintain
control over the installed software without the need to shut down the sandbox.

The DIWUG SharePoint eMagazine is published quarterly and provides information
for a big audience of developers, IT pros, consultants and power users worldwide.
For additional information about DIWUG please go to page 41.


PowerShell and SharePoint for beginners


by Albert-Jan Schot
PowerShell offers a new command-line interface that allows you to manage several different
Microsoft products. It caters to both administrators and developers: whether you want to
automate and manage your SharePoint farm or just quickly write an update script for your
sites, PowerShell allows you to do both.

Hello PowerShell
PowerShell has been around for quite some time now. It started in the summer of
2005 with a public beta named Monad, and it has evolved into version 2.0 as of
August 2009. Since it is rolled out with Windows 7 and Windows Server 2008 R2,
most of us have probably seen it in some form. Being a command-line interpreter,
PowerShell looks a lot like the old command prompt (cmd.exe).

Figure 1: Default PowerShell Window.


As you might have seen, there are several PowerShell versions. When you search
for it in your start menu, you see three different programs: the default Windows
PowerShell, an x86 version and the PowerShell Integrated Scripting Environment
(ISE). Make sure you never use the x86 version for anything involving PowerShell
and SharePoint. SharePoint 2010 is completely x64 based, so using an x86
PowerShell will result in errors.

Since SharePoint 2010 is x64 based, only use the x64 version of PowerShell
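You can verify both the version and the bitness from the prompt itself; a tiny sketch using only built-in variables (no SharePoint required):

```powershell
# The engine version (the SharePoint 2010 Cmdlets require version 2.0):
$PSVersionTable.PSVersion

# [IntPtr]::Size is 8 in a 64-bit process and 4 in an x86 process:
if ([IntPtr]::Size -eq 8) { "x64 PowerShell" } else { "x86 PowerShell" }
```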

PowerShell allows you to work with the command line shown in Figure 1; that way
it works almost the same as stsadm. However, scripting is made a bit easier: you
now have a nice Integrated Scripting Environment, named the PowerShell ISE. If you
don't have the PowerShell ISE yet, you can easily enable it with PowerShell.

# Add the ServerManager module
Import-Module ServerManager

# Add the PowerShell ISE
Add-WindowsFeature PowerShell-ISE

Listing 1: Enable Windows PowerShell ISE.


The PowerShell ISE has three main panes: 1. the Script Pane, allowing you to write
your code and save it as a PowerShell Script File (.ps1); 2. the Output Pane, showing
any output you get from executing your scripts; and last but not least 3. the Command
Pane, allowing you to interact with your executed .ps1 scripts.


Figure 2: The PowerShell ISE.

Hello World: My first PowerShell script


Having started up our ISE, we can finally write some scripts. Before we dive into
managing our SharePoint sites, let's have a look at some basics of scripting for
PowerShell.
Creating variables in PowerShell is pretty straightforward: just use the $ to declare
one, and based on the value PowerShell sets the type. You can use # for single line
comments and <# #> for block comments, as you can see in Listing 2.

# A Hello World Example Comment
$string = "Hello world"
write-host -f blue $string

$int = 1
$int + $int + $int

<# you can also
use
block
comments
#>

Listing 2: Hello World example.


You can restrict the values your variable accepts by preceding the variable with the
desired ValueType (and thus restricting it). And you can retrieve all members of an
object with the "| Get-Member" or "| GM" options. Using "[string]" or "[int]" forces
the ValueType; whenever you try to save a value into a variable with an enforced
ValueType and the types do not match, you will get an error. In Listing 3 you can
find examples of how to enforce ValueTypes and how to retrieve all members of an
object.


[string]$stringa = "Some text"
[string]$stringb = 1234

[int]$newint = 1
[int]$newint = "Some text" #results in error

<# you can retrieve the type of a variable
with the variable.GetType() method
#>
$stringa.GetType() #returns a String
$stringb.GetType() #returns a String
$newint.GetType() #returns an Int

# you can also retrieve all members for an object
$stringa | Get-Member #or use | GM

Listing 3: Enforce ValueTypes and the Get-Member.


Other basic options also look familiar. Arrays can be declared by using "1..10" or
by using "1,3,5,7,9", and of course you can enforce the ValueType "[array]" if you
like. Regarding quotes: use ' (single quotes) if you don't want variable expansion,
use " (double quotes) if you do want variable expansion, and use @ to create a
multiline string (a here-string). Examples of these operations can be found in
Listing 4.

$arraya = 1..5
[array]$arrayb = 5..10
[array]$arrayc = 1,3,5,7,9

$arraya
$arrayb
$arrayc

# Quoting rules
$a = 'PowerShell World'
write-output 'Hello $a' # no variable expansion
"hello $a" #variable expansion
"Length of '$a' is $($a.Length)"

$multiline = @'
a
multiline
string
'@

Listing 4: Arrays, quotes and multiline strings.


You can also include existing PowerShell scripts within your own. The only thing
you have to remember is to set the ExecutionPolicy to Unrestricted, if that is not
already done; you can use Get-ExecutionPolicy to determine the current level.
The ExecutionPolicy determines which PowerShell scripts are allowed to run on your
computer, and by default it is set at a level that does not allow you to execute scripts
from within PowerShell.
There are four levels:
n Restricted (no scripts can be run, and PowerShell can only be used in interactive
mode).
n AllSigned (only scripts signed by a trusted publisher can be run).
n RemoteSigned (Downloaded scripts must be signed by a trusted publisher
before they can run).
n Unrestricted (No restrictions, all PowerShell scripts can run).

So in order to run either scripts you created yourself or scripts you downloaded from
the Internet, you have to use Set-ExecutionPolicy to set the policy to Unrestricted (see
Listing 5 for an example, which also shows that retrieving parameters and passing
them to another script is pretty easy). If you want to understand exactly how the
ExecutionPolicy works, and how it prevents bad things from happening, there are
several good MSDN articles on the security model of PowerShell.

#retrieve the current ExecutionPolicy
Get-ExecutionPolicy

Set-ExecutionPolicy Unrestricted

$Name = Read-Host "Please provide name"

.\ScriptToExecute.ps1 -World "Ola again $Name"

<# the ScriptToExecute.ps1 would contain
Param($world)
write-host -f green $world
#>

Listing 5: Including existing PowerShell scripts with parameters.

PowerShell and Reflection


Managing SharePoint with PowerShell leaves you with two different options. The
first one, which is also applicable to SharePoint 2007, is 'reflection'. It simply means
that you can code against the .NET Framework through PowerShell as you would
with console apps; all you have to do is reference the desired .NET objects. On the
other hand, you have the option to use the predefined Commandlets (Cmdlets). These
Cmdlets are a set of common command templates that are straightforward and
easy to use. By default you get 492 different SharePoint Cmdlets, and you are free to
develop custom ones. First let's have a look at reflection, since that will look familiar
to anyone who has ever done any programming for SharePoint.
To do anything with reflection, the first step is to load the desired assembly, and,
since we want to do things with SharePoint, we are going to load the Microsoft
SharePoint assembly. Once it is loaded we can create new objects and work with
those. Since you can bind any object to any variable (unless you forced the type),
you can easily create one object with the new-object Cmdlet and derive all others
from the first, as shown in Listing 6: an example of how to create SPSite and SPWeb
objects, retrieve all members and change the Title of a site.

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")

$mysite = new-object Microsoft.SharePoint.SPSite("http://portal.sp2010.dev")
$myweb = $mysite.Rootweb

$myweb | GM

$myweb.Title = "PowerShell Rocks"
$myweb.Update()

Listing 6: Creating SPSite & SPWeb objects and updating a Title.


Another handy option within PowerShell is the fact that it supports pipelining, which
means you just send objects down a pipe, filtering out the information you need.
Let's say you want to retrieve all webs. In the example of Listing 6, the $mysite
object has the AllWebs property, an SPWebCollection returning all the webs. Now
instead of retrieving all the webs you want to 'filter' them and return only webs with
a certain name. You can do so in a single line of code with the Where-Object filter,
and finally you can change the look of the result set with the "| Format-Table" (or
FT) option, so you have control over what information you want to show or return.
In Listing 7 you can see that once you have an object (an SPSite in this case), you
can use piping to select all sites where the title contains "Demo". You can use "*"
for wildcards, and the "-like" operator is not case sensitive.


[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")

$mysite = new-object Microsoft.SharePoint.SPSite("http://portal.sp2010.dev")

<# retrieve all webs whose title contains Demo
Only show the Title, the Url and whether it is the Rootweb #>
$mysite.AllWebs | Where-Object {$_.Title -like "*Demo*"} | FT Title, Url, IsRootWeb

Listing 7: Piping example.


If you have some background in programming for SharePoint, you probably noticed
that writing code for PowerShell is pretty straightforward; it's almost like it can do
everything code can do, but better. Using reflection, however, isn't always the way
to go. The power of reflection in PowerShell is mainly that you can access the
SharePoint object model without having to build and deploy console applications,
only to find out that you have to rebuild your code just to update a few things. So
for tasks that can be scripted, or for quick fixes on the server, you can use the
reflection options in PowerShell.
PowerShell Cmdlets
Since the power of PowerShell is all about the use of Cmdlets, reflection is not
always the way to go. The default Cmdlets are focused more on an administrator's
point of view, so using them will feel a bit different compared to the reflection
options. By default PowerShell provides you with a stunning 492 Cmdlets just for
SharePoint; if you are interested in an overview of all those Cmdlets, you can easily
use PowerShell to output them, as you can see in Listing 8.

Add-PSSnapin "Microsoft.SharePoint.PowerShell"
Get-Command -PSSnapin "Microsoft.SharePoint.PowerShell" | select name, definition

Listing 8: Get all SharePoint Cmdlets.


Listing 8 also shows that you have to add the so-called SharePoint snap-in first by
using the "Add-PSSnapin" Cmdlet. A snap-in is a container that contains all Cmdlets
of a certain type. There are several other snap-ins available for different tasks like
managing your server, Exchange, or VMware, and you are free to develop your own
snap-ins, as a few others already have for SharePoint.
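When a script may run more than once in the same session, it is good practice to guard the Add-PSSnapin call, because adding a snap-in that is already loaded throws an error. A small sketch:

```powershell
# Only add the SharePoint snap-in if it is not loaded yet:
if ((Get-PSSnapin -Name "Microsoft.SharePoint.PowerShell" -ErrorAction SilentlyContinue) -eq $null)
{
    Add-PSSnapin "Microsoft.SharePoint.PowerShell"
}
```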
The strength of Cmdlets becomes obvious just by looking at how easy certain tasks
are made. Listing 9 shows how you can start an incremental crawl on your Search
Service Application after checking that the crawl status is Idle.

$CrawlContent = Get-SPEnterpriseSearchServiceApplication |
    Get-SPEnterpriseSearchCrawlContentSource

if($CrawlContent.CrawlStatus -eq "Idle" ) {
    $CrawlContent.StartIncrementalCrawl()
}

Listing 9: Cmdlet example to start an incremental crawl.


As you can see, it's easy to retrieve the Service Application and the ContentSource
that is attached to that Service Application through piping, and start a crawl on it.
Since there are so many Cmdlets, you might find yourself a bit lost at your first
glance at all those options and commands. Just keep in mind that everything you
can do with stsadm can be done with PowerShell, even though it might take you a
few minutes more the first time.
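A good way to find your bearings is to let PowerShell do the searching for you; for example, to discover everything related to solutions (the exact result set depends on your installation):

```powershell
# List all SharePoint Cmdlets that act on solutions:
Get-Command -Noun SPSolution

# Show the built-in help, including usage examples, for one of them:
Get-Help Add-SPSolution -Examples
```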
Conclusion
PowerShell is a handy tool and a great substitute for STSADM. It provides a complete
toolset for both developers and administrators working with SharePoint, and it makes
their daily lives a lot easier. The only downside to PowerShell is its learning curve:
since there are so many Cmdlets, and they all allow you to pipe, it might get a bit
confusing learning how to use them. However, once you start working with PowerShell
you will get hooked and will never want to return to STSADM and console apps,
I promise you!


FAST Search Server for Power Users


by Mark van Dijk
Microsoft took a huge step forward in the market for enterprise search solutions
with the acquisition of FAST Search & Transfer (FAST) in early 2008. With FAST
Search Server 2010 for SharePoint added to their product line of search solutions,
Microsoft is now a truly competitive player in enterprise search.
This article focuses on what FAST Search for SharePoint has to offer for end users and
what power users can do to configure these features to meet their specific needs.

Key features
Let’s start by taking a look at an out-of-the-box search center using FAST Search for
SharePoint.

Figure 1: FAST Search Center.


In general, this screen looks quite similar to the end-user search center experience
when using (regular) SharePoint Server 2010 Search. However, there are some very
interesting enhancements that make this search experience more powerful than its
SharePoint Search counterpart. Also, as we'll see later in this article, it's possible to
add some other unique features to the search center that aren't available in
SharePoint Search.
Let’s walk through the enhancements and capabilities FAST Search for SharePoint
adds to the search experience.

Conversational Capabilities
Deep Results Refinement
The column on the left in figure 1 contains the refinement panel. It allows users to
filter search results on metadata. Refiners are available in SharePoint Search as well,
but FAST Search for SharePoint offers the concept of deep refiners. This feature adds
counts to each filter, depicting the total number of search results available with that
filter applied. Also, the displayed refiners are based on all results in the result set, not
just the first 100 search results, which is the default behavior in SharePoint Search.
Refiners can be added to the refinement panel for any managed property.


Managed Properties
You might be unfamiliar with the term “Managed Property”, so let me explain what
it means. When content is crawled and indexed, the crawler marks various aspects
of the content as a property and puts this information in a so called Property Store
(which is actually one of the search databases). These aspects can be metadata added
by the author of the content, but it can also be information that is “gathered” during
the crawl process, for instance the language of the content, or the file type (Word,
PowerPoint, etc.). In SharePoint, these properties are called “Crawled Properties”.
Crawled properties cannot be used by the query engine directly. For this, a crawled
property first needs to be mapped to a Managed Property. When a property is
“managed”, it can be used by the query engine. This means the property can be used
in search queries; it can also be used in the Advanced Search screen, in the Sort By
dropdown, in the Refinement Panel, etc.
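Besides the Central Administration pages used later in this article, crawled properties can also be mapped to managed properties from the FAST Search PowerShell snap-in. A hedged sketch; the property names "ProjectCode" and "ows_ProjectCode" are just placeholders for this example:

```powershell
# Create a new managed property of type Text (type id 1):
$managed = New-FASTSearchMetadataManagedProperty -Name "ProjectCode" -Type 1

# Look up the crawled property and map it to the managed property:
$crawled = Get-FASTSearchMetadataCrawledProperty -Name "ows_ProjectCode"
New-FASTSearchMetadataCrawledPropertyMapping -ManagedProperty $managed -CrawledProperty $crawled
```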

Property extraction
FAST Search for SharePoint adds a unique feature to the process of gathering
metadata, called metadata/property extraction. While processing the content,
FAST Search for SharePoint is able to extract metadata from content by matching
terms in a dictionary to terms used in the content. Out of the box, FAST Search
for SharePoint can identify company names, person names and geographic names/
locations in documents. The extracted properties can be used to improve the search
experience, for example by adding refiners based on these properties.

Result count
Right above the search results is the total number of search results. Obviously,
this information is also available in SharePoint Search. A very small but important
difference however, is that in FAST Search for SharePoint it says “1-10 of 61 results”,
where in SharePoint Search it would say “1-10 of about 61 results”. This difference is
explained by the fact that FAST Search for SharePoint is much better at knowing the
exact number of results for a given search request. The number shown by SharePoint
Search is only an educated guess.

Sort Results on Managed Properties


On the right of the result count is a dropdown box that allows the user to sort the results
by various properties. With FAST Search for SharePoint, it’s possible to sort by any
Managed Property, while with SharePoint Search it’s only possible to sort by relevance
or modified date. FAST Search for SharePoint also offers the ability to configure
multiple rank profiles to determine the relevance of search results in different ways. If
a new rank profile is configured, it’s added to the sort by options as well.
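Rank profiles are managed from the FAST Search PowerShell snap-in; creating an additional profile can be sketched as follows (the profile name is an assumption for this example):

```powershell
# List the existing rank profiles (a "default" profile is always present):
Get-FASTSearchMetadataRankProfile

# Create a new rank profile that can then be tuned and exposed for sorting:
New-FASTSearchMetadataRankProfile -Name "marketingrank"
```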

Figure 2: Navigating to Managed property management.


Let’s see how a managed property is configured so it can be used for sorting:
In Central Administration, under “Manage Service Applications”, click on the FAST
Search Service Application link (in this case “FASTQuery”). From this page, navigate
to “FAST Search Administration”, which is available in the top left navigation. Then
go to “Managed Properties”.
The list on this page shows all the available Managed Properties. In this example,
we’ll make sure the Content Type property is available for sorting. Find the Content
Type property and click it. Now check the “Sort property” (figure 3) checkbox and
click the OK button.

Figure 3: Mark as Sort Property.
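For completeness: the same "Sort property" flag can be set from PowerShell instead of the Managed Properties page — a hedged sketch using the FAST admin snap-in (check the exact managed property name, "ContentType" here, in your own property list):

```powershell
# Fetch the managed property and enable sorting on it:
$property = Get-FASTSearchMetadataManagedProperty -Name "ContentType"
$property.SortableType = "SortableEnabled"
$property.Update()
```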


Now we’ll go back to the search center and add the property to the Sort By dropdown
box. To do this, first put the page in edit mode and then modify the Search Actions
web part properties. In the tool pane, expand “Display Properties” and note that the
Content Type property is now available. Check the “Enabled” checkbox as shown in
figure 4 and then apply the changes to the web part.

Figure 4: Enable Content Type sorting.


Check in the changes to the page and perform a new search request. Note that we
can now sort our result set by Content Type as shown in figure 5.

Figure 5: Sorting by Content Type.

Similar Results
Each search result contains a link that says “Similar Results”. By clicking on this link,
the user performs a new search request to include documents that are similar to that
specific result. This feature is unique to FAST Search for SharePoint.

Result Collapsing
If multiple copies of the same document exist in a crawled source, the indexed copies
will have the same checksum. When performing a search request, FAST Search for
SharePoint will collapse these documents into one document in the result set,
preventing the same document from appearing multiple times in the search results.
If a search result is collapsed, it contains a link that says "Duplicates (n)", where n is
the number of duplicates of that piece of content available in the index. Clicking
this link performs a new search request that returns all duplicates as individual search
results. This feature is unique to FAST Search for SharePoint.

Visual Enhancements and Contextual Search


Thumbnails and Previews
Probably the most noticeable feature that’s only available in FAST Search for SharePoint
is the availability of thumbnails and previews for the search results. Thumbnails are
available for Microsoft Word and Microsoft PowerPoint documents, (inline) previews
are available for Microsoft PowerPoint documents (figure 6).

Figure 6: Thumbnails and Previews.


As you can see, the first page of presentations and documents is shown as a thumbnail
right next to each search result. This helps the user get to the relevant search result
more quickly.
The graphical previewer for PowerPoint presentations is based on Microsoft Silverlight.
Without having to open the document, the user can quickly determine whether this is
the presentation he was looking for.
The ability to open a Microsoft Office document in the browser by clicking the "View
In Browser" link is a nice feature as well, but not specific to FAST Search for SharePoint.

Contextual Search
Another unique feature that’s only available in FAST Search for SharePoint is the
concept of “contextual search”. This feature allows you to configure the search center
to react differently to specific groups of users, personalizing the search experience.
Configuring so called “user contexts” is the key point here. By defining user contexts
you’re basically defining audiences for your search center. These user contexts can
then be used to target Best Bets, Visual Best Bets, document promotions, document
demotions, site promotions and site demotions.

User Contexts
Let’s see how contextual search can be configured. We’ll start by adding a user context
to our search center. Note that the entire configuration regarding contextual search is
done at site collection level.


Go to the site settings page of the search center. If this isn’t the root web of the
site collection, make sure you go to the Top Level Site Settings. Then locate the
Site Collection Administration pane and click the link that says “FAST Search user
context“. Please be aware that the FAST links in the site collection settings will always
be available, even if FAST is not installed or configured in your environment. If you
click a FAST link when FAST is not installed, you will get an error message.
This page shows all User Contexts that have been configured previously. Add a new
User Context by clicking the “Add User Context” link in figure 7.

Figure 7: Add a new User Context.


By default, a user context can be constructed by choosing a name for the User
Context and then selecting values for the Office Location (SPS-Location) and/or Ask
Me About (SPS-Responsibility) User Profile Properties. We’ll take a small side step to
discuss how these User Context Properties can be configured to use different User
Profile Properties.
Managing these properties is quite easy, using the following two PowerShell
commands:

Get-SPEnterpriseSearchExtendedQueryProperty -SearchApplication "[Your FAST
query Search App Name]" -Identity "FASTSearchContextProperties"

Listing 1: Command used to determine the currently configured properties.


By default, this returns the following values: SPS-Location, SPS-Responsibility.
To modify the properties, use the second command:

Set-SPEnterpriseSearchExtendedQueryProperty -SearchApplication "[Your FAST
query Search App Name]" -Identity "FASTSearchContextProperties" -Value
"SPS-Location,SPS-Skills,Department"

Listing 2: Command used to modify the currently configured properties.


The Department property can now also be used to configure a User Context.
Back to our example, where we use the default User Context Properties
configuration. We'll create a user context "Project Management" that targets all
users that have "Project Management" in their "Ask Me About" profile property.
Fill out the form and click OK:


Figure 8: Creating the User Context.


We have now created a User Context which we can use to further configure our
contextual search experience.

Best Bets, Visual Best Bets, Document Promotions and Document Demotions
Before we start configuring (Visual) Best Bets and Document Promotions/Demotions,
let's see what the default search results look like when I'm searching for "content
management" as a logged-on user who meets the criteria of our previously created
User Context:

Figure 9: Search Results (not personalized).


Nothing special to see here, since we haven't configured anything for our User
Context yet. To configure (Visual) Best Bets or Document Promotions/Demotions, we
first need to add one or more keyword terms. When a query includes one of these
keyword terms (or one of its configured synonyms), Best Bets are displayed
prominently on the search results page. With document promotions and demotions,
administrators can move specific search results for a keyword to the top or the
bottom of the search results list.
Let’s create a keyword term for “content management” and add a Visual Best Bet.
We’ll also try to promote the “Gear Promo” document that’s currently on the bottom
of the page.
Again, go to the site settings page of the root web of the search center site collection.
This time click the FAST Search keywords link in the Site Collection Administration
pane. This page shows all keywords that have been configured previously. Click “Add
Keyword” to add a new keyword:

Figure 10: Managing Keywords.

... by the way ...

InfoPath administrator-approved deployment with PowerShell

$formPath = "pathtoform"
Test-SPInfoPathFormTemplate -Path $formPath
Install-SPInfoPathFormTemplate -Path $formPath

Albert-Jan Schot - Digiwijs.nl - Oranjestraat 215 - 2983 HR Ridderkerk - The Netherlands
M +31 622711904 - E appieschot@xs4all.nl - W http://www.digiwijs.nl


Let's create a Keyword for the keyword phrase "content management". Note that
we can add synonyms for our keyword. When users search for the keyword, results
for all synonyms will also be displayed in the search results. There's a distinction
between one-way synonyms and two-way synonyms. When users search for a
one-way synonym, only results for that synonym are returned. If a user searches for
a two-way synonym, results for the keyword and all other synonyms are returned in
addition to the results for this synonym. In our example we'll just add a keyword
without adding synonyms:

Figure 11: Adding a Keyword.


Now that we’ve created our keyword, we can configure Visual Best Bets, Best Bets,
Document Promotions and Document Demotions for this keyword. This can be done
from the Keywords overview page, or from the Keyword details page:

Figure 12: Adding Best Bets and Document Promotions/Demotions.


In this example, we'll add a Visual Best Bet and a Document Promotion to the keyword.
First, we'll add a Visual Best Bet. In my example, the URL for this Visual Best Bet points
to a small banner I uploaded to a document library. Visual Best Bets are not limited to
images; rich content such as HTML can also be used. In this case, a link pointing to an
HTML file would have done the job as well. We pick our previously created User Context
to target this best bet to a specific audience. Save the best bet by clicking OK:


Figure 13: Adding a Visual Best Bet.

Finally, we’ll add a Document Promotion for the “Gear Promo” document. This process
is quite similar to adding a Visual Best Bet, but in this case the URL points to the “Gear
Promo” document:

Figure 14: Adding a Document Promotion.


Let’s take a look at our search center now:

Figure 15: Personalized search center.


The promotional banner is now visible above the search results, and the “Gear Promo”
document is now the first search result.
Note that by default, the keyword itself is displayed as well, together with the
description we provided. This can be turned off in the web part properties of the
Best Bets web part.

Conclusion
It’s clear that FAST Search Server 2010 for SharePoint adds quite some value to the
search experience by providing enhancements and new capabilities to the enterprise
search features of SharePoint Server 2010. Visual search capabilities like thumbnails,
previews and best bets provide an efficient way for information workers to interact
with search results. On the other hand, conversational search capabilities like result
sorting, deep refinement, similar results and collapsing duplicate results provide ways
for information workers to interact with and refine their search results, so that they
can quickly find the information they require. Finally, by providing contextual search
capabilities, FAST Search for SharePoint adds the ability to personalize the search
experience in a powerful way.

... by the way ...

Did you know we also offer an eReader version of our eMagazines?
Have a look at http://www.diwug.nl

SharePoint® eMagazine - September 2010

Introducing LINQ to SharePoint


by Joe Capka
SharePoint 2010 adds the ability to use LINQ syntax to query as well as manipulate
the content database. While some SharePoint developers are familiar with LINQ, the
LINQ to SharePoint provider offers a whole new way of working with SharePoint
data. We will look at how to get LINQ to SharePoint up and running, as well as some
hints and limitations.

Introduction
Prior to SharePoint 2010 there were already numerous programmatic ways to get
content out of SharePoint. There was the Object Model, SP Web Services and of
course CAML queries. We now have an additional option: LINQ to SharePoint. While
some may wonder why we need yet another way, LINQ seems to have truly taken off
in many other .NET development circles and so perhaps we SharePoint folks should
consider it.
What is LINQ to SharePoint then? Loosely stated, LINQ is a query language that
you can write inline in your code. The “to SharePoint” part means that the query
is executed against the SharePoint content database: in other words against your
SharePoint lists, content types and list items. LINQ to SharePoint is actually a
latecomer to the game as other LINQ providers such as LINQ to SQL and LINQ to
XML have been around for some time. If you have used LINQ syntax with any of
these, you will not be surprised that the LINQ to SharePoint syntax is the same. This
has the benefit that you no longer have to know CAML to query SharePoint. LINQ
operates on strongly typed objects, so you don’t have to worry about typos (like in
CAML) and you get IntelliSense in Visual Studio. Not everything is an SPListItem anymore.
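As a quick illustration of that contrast, compare a query written against the classic server object model with its LINQ to SharePoint equivalent. This is only a sketch: the site URL, list name and Expires field are illustrative, and TestSiteDataContext and Announcement are the kind of generated classes we will meet later in this article.

```csharp
// Classic object model: field names are magic strings, values are untyped.
// A typo in "Expires" only surfaces at runtime.
using (SPSite site = new SPSite("http://TestSite"))
using (SPWeb web = site.OpenWeb())
{
    SPList list = web.Lists["MyAnnouncements"];
    foreach (SPListItem item in list.Items)
    {
        DateTime? expires = item["Expires"] as DateTime?;  // cast and hope
    }
}

// LINQ to SharePoint: strongly typed and IntelliSense-friendly.
using (TestSiteDataContext tdc = new TestSiteDataContext("http://TestSite"))
{
    foreach (Announcement a in tdc.MyAnnouncements)
    {
        DateTime? expires = a.Expires;  // checked at compile time
    }
}
```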

Entity Classes and SPMetal


Since few things in life are free, there is some extra setup work that needs to be done
in order to benefit from the great features offered by LINQ. Luckily Microsoft has
been kind enough to provide tools that automate most of this setup work for us.

[Microsoft.SharePoint.Linq.ContentTypeAttribute(Name="Announcement", Id="0x0104")]
public partial class Announcement : Item
{
    private string _body;
    private System.Nullable<System.DateTime> _expires;
    ...

    public Announcement() {
        this.OnCreated();
    }

    [Microsoft.SharePoint.Linq.ColumnAttribute(Name="Body", Storage="_body",
        FieldType="Note")]
    public string Body {
        get { ... }
        set { ... }
    }

    [Microsoft.SharePoint.Linq.ColumnAttribute(Name="Expires", Storage="_expires",
        FieldType="DateTime")]
    public System.Nullable<System.DateTime> Expires {
        get { ... }
        set { ... }
    }
}

Listing 1: Generated entity class for an Announcement (edited for conciseness).


There is no magic way for the LINQ to SharePoint provider to know what content we
have put into SharePoint, or to know how we want that content to be represented in
code. The link (no pun intended) that lets us code against strongly typed representations
of our content comes in the form of Entity Classes. These classes map SharePoint content
to an object oriented structure. Let’s see an example to make things clearer.
Listing 1 shows the entity class for an out-of-the-box SharePoint Announcement. Note
that most of the implementation has been removed to conserve space. We will not drill
into all the details of this class, but let's get an idea of what is going on here. The first
thing to notice is that we are looking at a strongly typed class called Announcement,
which inherits from another class named Item. This hierarchy should sound familiar if
you have worked with SharePoint before, as the Announcement content type inherits
from the Item content type. Next, notice that there are two public properties: Body
and Expires. If we look at Figure 1, we notice that these are two columns that are part
of the Announcement content type but are NOT inherited from the Item content type.

Figure 1: Announcement Content Type.


The last and perhaps most important thing to notice is the set of attributes on the
class and on its public properties. These attributes are what connect the classes to
the SharePoint elements. We can see that the class represents the Announcement
content type, referenced by name and Id. The public properties each represent a site
column in the Announcement content type, and are strongly typed to correspond
with the SharePoint column types.
Writing all these entity classes by hand would be tedious, error prone and completely
unnecessary. There is a tool from Microsoft that creates entity classes for us when
pointed at a SharePoint site. This tool is called SPMetal and lives in the BIN folder of
the 14 hive. The most basic use of SPMetal requires two parameters: one providing the
url of the SharePoint site we want it to model, and a second for the code file name we
want to generate. The tool is smart enough to infer the code language from the file
extension. This simple use is demonstrated in Listing 2.

SPMetal /web:http://TestSite /code:TestSite.cs

Listing 2: Basic use of SPMetal.


While this will work, it does generate quite a mess. Without additional parameters,
SPMetal will model all of the lists and content types, and there are usually quite a
few of those in any site. You will likely not be writing code against all those classes,
so having them generated creates unnecessary classes and potential for confusion.
You may also want to control things like the namespace for the generated classes, or
even the class names themselves. Luckily SPMetal is very flexible and takes a number
of parameters. One of these parameters is an XML file defining the exact output
desired. A full description is beyond the scope of this article, but let's look at an
example of SPMetal limited to generating only the classes required for an
announcement list called MyAnnouncements.

SPMetal /web:http://TestSite /code:TestSite.cs /namespace:L2SP /parameters:testsite.xml

Listing 3: Parameterized use of SPMetal for cleaner output.


Listing 3 shows the SPMetal command, where we have added two parameters.
First we specify that the namespace for the generated classes should be L2SP,
and second we specify a parameters XML file. The XML file contents are shown in
Listing 4.

<?xml version="1.0" encoding="utf-8"?>
<Web xmlns="http://schemas.microsoft.com/SharePoint/2009/spmetal">
  <List Name="MyAnnouncements"/>
  <ExcludeOtherLists/>
</Web>

Listing 4: Contents of SPMetal parameters file.


This is a very simple parameters file, but achieves exactly what we set out to do.
We first explicitly instruct SPMetal to generate a class for the MyAnnouncements list
and then instruct it to exclude any lists we have not asked for. This results in entity
classes being generated for the Announcement content type, the Item content type
(automatically detected as the base class of the Announcement content type) and a
strongly typed accessor for the MyAnnouncements list. It does not create any other
classes since we did not ask for them and also do not need them. Notice that although
we did not specify the need for the Announcement and Item entity classes, SPMetal
realizes these are required by the list and generates them.

Using Linq to SharePoint


One class that we have not discussed yet, but which is the real core of the LINQ to
SharePoint story, is Microsoft.SharePoint.Linq.DataContext. Those who are familiar
with LINQ to SQL will find the SharePoint incarnation of the DataContext very familiar.
This class is what manages the connection to the SharePoint content, and it is the class
that we use to instantiate objects based on the entity classes defined earlier. There are
two ways of using the DataContext class: directly, or by inheriting from it. When used
directly, we see code similar to that shown in Listing 5.

DataContext dc = new DataContext("http://TestSite");
EntityList<Announcement> MyAnnouncements =
    dc.GetList<Announcement>("MyAnnouncements");
IQueryable<Announcement> orderedAnnouncements =
    MyAnnouncements.OrderBy(a => a.Expires);

Listing 5: Using the DataContext directly.


This code uses the DataContext to open a connection to the site at http://TestSite.
It then gets a list named MyAnnouncements, specifying that the list contains items
of the class Announcement, which is the entity class we saw before. It then uses
standard LINQ syntax to get the announcements ordered by expiration date. The
advantage here is that no class derived from the DataContext is needed. However,
we still need to get the list using a string name, and that seems a bit messy. Luckily,
SPMetal automatically creates a class derived from the DataContext and lets us use
this derived class in a cleaner way. Let's examine the code in Listing 6.

TestSiteDataContext tdc = new TestSiteDataContext("http://TestSite");
EntityList<Announcement> MyAnnouncements = tdc.MyAnnouncements;
IQueryable<Announcement> orderedAnnouncements =
    MyAnnouncements.OrderBy(a => a.Expires);

Listing 6: Using the inherited DataContext.


This code is very similar to the previous code, with two exceptions. First, the code
is not using the standard DataContext class but rather the TestSiteDataContext
class generated when we used SPMetal. (The name of the generated data context
class is based on the name we gave our code file.) The second and more interesting
difference is that the TestSiteDataContext class has a strongly typed property named
MyAnnouncements which returns exactly the same object as the GetList method in
Listing 5. The code is presented like this for demonstration and comparison, but in
reality we would likely write the code as in Listing 7, where the query is performed
directly on the MyAnnouncements property of the TestSiteDataContext. This code is
shorter AND cleaner, always a win.

TestSiteDataContext tdc = new TestSiteDataContext("http://TestSite");
IQueryable<Announcement> orderedAnnouncements =
    tdc.MyAnnouncements.OrderBy(a => a.Expires);

Listing 7: Cleaner use of the inherited DataContext.

Querying data
Up to this point we have been looking at all the pieces required to get LINQ to
SharePoint to work properly. Now we finally get to use LINQ queries against SharePoint
and pull some data out. Let's start with a basic query and say we need to get all the
announcements that have expired or are expiring today. Also, we are only interested in
their ID and Title fields. Back in the CAML query days this would not necessarily have
been a 'basic' query. Listing 8 has the code to perform this query; let's go through it.

var TodaysExpirationIDs = from a in tdc.MyAnnouncements
    where a.Expires.HasValue && a.Expires.Value < DateTime.Today.AddDays(1)
    select new { a.Id, a.Title };

Listing 8: Basic LINQ to SharePoint query.


First we notice that the same TestSiteDataContext as before is being used. Then we
use standard LINQ syntax to ask for announcements out of the MyAnnouncements
list. However, we filter using the where clause to only get announcements that have
an expiration date and where the expiration date is less than tomorrow. Finally we
use projection into an anonymous type to create a new object containing only the ID
and Title of each announcement. For those new to LINQ, this may be a lot in one go
and I recommend you spend some time with the LINQ syntax. For those familiar with
LINQ, this will look perfectly normal. What is important here is that there is nothing
SharePoint specific about this query. Nevertheless it returns exactly what we need
and it fetched it from the SharePoint content database. Impressed yet?
The previous query was dubbed 'basic'. So what does a more complex query look like?
Let's have a look at a query that joins two lists together. First some background
information: in order to create this code, we need to create a custom list that has a
lookup field pointing at the Title field in the MyAnnouncements list. Suppose this new
list has a large number of items; we thus call it the LargeList. We then re-run SPMetal
with parameters such that the LargeList is included in the generated classes.
This time we wish to retrieve all the items from the LargeList that are associated with
an Announcement that has expired or is expiring today. We are interested in the
Announcement Id and the LargeList item Title as the result. Let’s look at the code in
Listing 9.

var TodaysExpirationItems = from i in tdc.LargeList
    where i.Announcement.Expires.HasValue &&
        i.Announcement.Expires.Value < DateTime.Today.AddDays(1)
    select new { i.Announcement.Id, i.Title };

Listing 9: LINQ to SharePoint query joining two lists.


Notice that this query is not much more complex to write than the previous one. The
first difference is that the source is the LargeList from the TestSiteDataContext
object and not the MyAnnouncements list. The LargeList has a Lookup field on the
MyAnnouncements list, and as can be seen, this results in an Announcement field
on each item from this list. We can therefore access all the properties of the linked
announcement, and we do so in the where clause. Finally we create the result collection
from information out of both the announcement and the large list item. We thus
implicitly join two SharePoint lists and return filtered results from this join. This
is much easier than trying to achieve this task in CAML, as we will see later in the
Seeing the CAML section.


A very important limitation needs to be mentioned at this point. LINQ to SharePoint
can ONLY join lists that have a Lookup field defined, as in the example above. It
will NOT join arbitrary lists using the join operator. See the reading list for more
information.

The Rest of CRURD


First off, I do mean CRURD. As we all know, SharePoint has a recycle bin and LINQ
to SharePoint is well aware of it. We therefore have the option to Delete or Recycle
items using LINQ to SharePoint. Listing 10 shows how one uses LINQ to SharePoint to
perform insertions, updates, recycling and deleting of data. As with querying, these
operations are quite easy to perform. The basis of it all is working with the strongly
typed entity classes and our derived DataContext.

Announcement newAn = new Announcement();
newAn.Title = "My New Announcement";
newAn.Expires = DateTime.Today.AddDays(5);
tdc.MyAnnouncements.InsertOnSubmit(newAn);

Announcement first = tdc.MyAnnouncements.FirstOrDefault();
first.Title = "Updated by LINQ";

Announcement two = tdc.MyAnnouncements.FirstOrDefault(
    a => a.Title == "Announcement Two");
tdc.MyAnnouncements.RecycleOnSubmit(two);

Announcement three = tdc.MyAnnouncements.FirstOrDefault(
    a => a.Title == "Announcement Three");
tdc.MyAnnouncements.DeleteOnSubmit(three);

tdc.SubmitChanges();

Listing 10: Insert, Update, Recycle and Delete data.


Performing inserts consists of creating an object of the type we want to insert, setting
the object’s properties as necessary and then calling the InsertOnSubmit method of
the list that we want to insert the object into. Updates are even easier, in that we
just pull the object out of the database with a query and update its properties as
needed. Deletions and Recycles consist of getting the object out of the database
using a query and then calling the RecycleOnSubmit or DeleteOnSubmit method of
the list where the object is located. For all these operations there is one more line of
code that absolutely needs to be included and that is the SubmitChanges call on the
data context. This line causes all the changes from the object model to be persisted
to the database. It is important to understand that until this method is called, all the
changes only exist in the object representation of the data, and the database has
not been changed at all. This has a number of implications, the discussion of which is
beyond the scope of this article, save one important one that I will mention. It is quite
likely that more than one thread will be running the same code, and that concurrency
conflicts occur when reading and writing to the content database. LINQ to SharePoint
has been prepared to deal with such conflicts (and others) and can even resolve them
for you. It all starts with the SubmitChanges call, which will throw a
ChangeConflictException if any data consistency conflicts occur. This means that you
should always wrap the SubmitChanges call in a try-catch block and deal with
conflicts. See the reading list for more information.
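As a sketch of what such a try-catch block could look like (the resolution policy shown, keeping our own changes, is just one option; see the reading list article for the full conflict-resolution story):

```csharp
try
{
    tdc.SubmitChanges();
}
catch (ChangeConflictException)
{
    // Inspect each conflicting object and decide how to resolve it.
    // RefreshMode.KeepChanges keeps our values for the fields we changed
    // and refreshes the remaining fields from the database.
    foreach (ObjectChangeConflict conflict in tdc.ChangeConflicts)
    {
        conflict.Resolve(RefreshMode.KeepChanges);
    }
    // Try again now that the conflicts have been resolved.
    tdc.SubmitChanges();
}
```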

Seeing the CAML


Time for the big secret to come out: we are not done with CAML after all. As great
as LINQ to SharePoint is, CAML is the fundamental query language of SharePoint.
Therefore it should come as no surprise that under the hood, LINQ to SharePoint
queries are translated to CAML and then executed against the content database. If
this seems anticlimactic, realize that LINQ to SharePoint is a great new abstraction
layer on top of something that worked quite well but was hard (for most of us) to
work with.


So what if you want to see the CAML that LINQ to SharePoint is generating? It is
quite easy to do, actually. The DataContext (derived or not) has a Log property that
accepts a TextWriter object. The DataContext will then write the queries it generates
to the TextWriter. My favorite means of testing this is using a literal in a web part
and writing the content of the log to the literal using a StringWriter. This can be seen
in Listing 11. Note that due to the deferred nature of LINQ queries, the query only
executes if it is used. Therefore simply inserting a query into the example will not
generate any Log output. Code that uses the query also needs to be inserted into this
listing, such as a foreach loop enumerating the results.

StringWriter sw = new StringWriter();
TestSiteDataContext tdc = new TestSiteDataContext("http://TestSite");
tdc.Log = sw;
... <Insert Query Code and Use here> ...
sw.Flush();
myLiteral.Text = Server.HtmlEncode(sw.ToString());

Listing 11: Setting the DataContext.Log.
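Because of that deferred execution, the placeholder in Listing 11 needs to both build and enumerate a query before anything appears in the log. A minimal sketch of what could go there, reusing the announcements list from the earlier listings:

```csharp
var expiring = from a in tdc.MyAnnouncements
               where a.Expires.HasValue
               select a.Title;

// Enumerating the results is what actually executes the query
// and causes the generated CAML to be written to the log.
foreach (string title in expiring)
{
    // ...
}
```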


Listing 12 is an example of the CAML query that was captured using this technique;
it is the CAML for the query in Listing 9. I will let you choose which you would prefer
to write.

<View>
  <Query>
    <Where>
      <And>
        <BeginsWith>
          <FieldRef Name="ContentTypeId" />
          <Value Type="ContentTypeId">0x0100</Value>
        </BeginsWith>
        <Lt>
          <FieldRef Name="AnnouncementExpires" IncludeTimeValue="TRUE" />
          <Value Type="Lookup">2010-07-08T00:00:00Z</Value>
        </Lt>
      </And>
    </Where>
  </Query>
  <ViewFields>
    <FieldRef Name="Announcement" LookupId="TRUE" />
    <FieldRef Name="Title" />
    <FieldRef Name="AnnouncementExpires" />
  </ViewFields>
  <ProjectedFields>
    <Field Name="AnnouncementExpires" Type="Lookup" List="Announcement"
        ShowField="Expires" />
  </ProjectedFields>
  <Joins>
    <Join Type="LEFT" ListAlias="Announcement">
      <!--List Name: MyAnnouncements-->
      <Eq>
        <FieldRef Name="Announcement" RefType="ID" />
        <FieldRef List="Announcement" Name="ID" />
      </Eq>
    </Join>
  </Joins>
  <RowLimit Paged="TRUE">2147483647</RowLimit>
</View>

Listing 12: Captured CAML.

Limitations
LINQ to SharePoint is not perfect. It offers a lot, but it has a few shortcomings. First,
LINQ to SharePoint does not work for anonymous users. This is bad news for those who
build public-facing web sites, since a large part of your content will likely need to be
accessible to anonymous users. Second, LINQ to SharePoint doesn't work across site
collections: you cannot query one site collection from another. Both these issues can be
solved with a workaround that involves manipulating the HttpContext, but it feels like
a hack and the performance implications are unknown. Third, LINQ to SharePoint does
not support all the queries that the LINQ syntax allows. See the reading list for more
details.
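For completeness, the commonly cited HttpContext workaround amounts to temporarily detaching from the current request, so that the DataContext falls back to the URL it was given instead of the current context. Treat this sketch as an unsupported hack (the target URL is illustrative) and test it carefully before relying on it:

```csharp
// WARNING: unsupported workaround; verify behavior and performance yourself.
HttpContext backup = HttpContext.Current;
HttpContext.Current = null;  // force the DataContext to use the URL below
try
{
    using (TestSiteDataContext tdc =
        new TestSiteDataContext("http://OtherSiteCollection"))
    {
        // query the other site collection here
    }
}
finally
{
    HttpContext.Current = backup;  // always restore the original context
}
```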

Conclusion
LINQ to SharePoint brings the power of LINQ queries into the SharePoint arena. While
CAML queries work great, they are not great to work with. LINQ abstracts CAML away
and lets developers query the content database in a much easier way. By providing
strongly typed objects for all CRUD operations, LINQ to SharePoint results in more
consistent, cleaner and simpler code.

Reading List
SPMetal Parameters file schema:
http://msdn.microsoft.com/en-us/library/ee535058.aspx
Writing data to SharePoint, including conflict resolution:
http://msdn.microsoft.com/en-us/library/ee538246.aspx
Unsupported LINQ Queries and Two-stage Queries:
http://msdn.microsoft.com/en-us/library/ee536585(office.14).aspx

About DIWUG
DIWUG is a platform for people who are interested in information worker solutions. Several
times a year, DIWUG organizes events where members can meet, share knowledge and see
interesting presentations in an informal setting.

The DIWUG SharePoint eMagazine is a free downloadable magazine written by MCMs,
MVPs and other well-known people from the SharePoint community. The target audience
is IT pros, developers and (power) end users. All articles are in English.

We’re looking for authors!


Of course we cannot do this without content. We're looking for articles on SharePoint
(2007/2010). These articles can be aimed at developers, (power) end users or IT pros. If
you're interested in writing an article, please let us know by emailing us a brief summary
of the topic you want to write about. If English is not your native language, don't hesitate
to contribute; we will help you with the English spelling and grammar.

... and sponsors


If you're interested in sponsoring this eMagazine, please let us know and we'll send you
the sponsorship options.

Contact information:
Marianne van Wanrooij; marianne@diwug.nl
Mirjam van Olst; mirjam@diwug.nl
Mart Muller; mart@diwug.nl
http://www.diwug.nl


Sponsored article by AvePoint

Fortifying the Pillars of Governance


with SharePoint and DocAve
By Christopher Musico – Business Content Editor, AvePoint

What exactly is Governance?


Governance, at first glance, may be one of those broad, umbrella-like terms that
can mean virtually anything. Definitions from different research firms and business
consultants do little to resolve the issue. According to Gartner Research, IT Governance
is “the set of processes that ensure the effective and efficient use of IT in enabling
an organization to achieve its goals”. Microsoft TechNet explains that IT Governance
is “the set of policies, roles, responsibilities, and processes that guide, direct, and
control how an organization’s business divisions and IT teams cooperate to achieve
business goals". Though these definitions help frame the issue, it is often difficult
for the IT administrator to define actual "good governance" practices from such broad
definitions.

So what can we elucidate from these general definitions? To begin our discussion,
let’s draw one important line in the sand: While IT plays an instrumental role in
the performance of “good governance”, governance is not simply a software- or
hardware-driven initiative. Let’s be clear: Governance is neither a technology nor a
tool. First and foremost, governance is a business decision.

This business decision must be made – and led – by the C-Suite. Executive-level
management must ensure that it is stewarding and fully supporting the initiative
in order to ensure it is truly effective. This executive-level sponsorship must cover
the execution of the governance initiative, establishing the procedures necessary to
implement governance, and purchasing the tools necessary to aid in that execution.
Notice that “purchasing tools to aid in execution” is last on the list. A clear plan must
be established first. Part of that plan must include getting all of the key organization
employees to the table to hammer out a successful governance strategy. This roster
can include the C-level executives, financial stakeholders, IT leaders, business division
leaders, information architects/taxonomists, compliance officers, development
leaders, knowledge workers, and trainers.

What’s at risk? Exposure to legal and compliance-related issues, much in the same
vein as Enron and other companies that fell victim to litigation leading to their
ultimate demise. When these companies did fall, and numerous compliance regulations
– including Sarbanes-Oxley – came out, many organizations reactively purchased
technology platforms such as Microsoft SharePoint to centralize and standardize their
businesses without proactively giving thought to proper governance. The end-result
was often a collaboration environment and digital asset repository burdened with
sprawl, decreased security, excessive administration costs, depressed user adoption,
and, yes, susceptibility to compliance-related issues – exactly what organizations were
trying to avoid. This problem still rears its head today – according to research from the
Association for Information and Image Management (AIIM), less than 50 percent of
SharePoint implementations were subject to a formal business case, and only half of
those required a financial justification. Consequently, most companies did not have a
management plan as to which of SharePoint’s features were to be used, and where.

We will get to why Microsoft SharePoint is an excellent platform upon which to
centralize and streamline organizations shortly, but it is important – whether your
organization is in the throes of taming an out-of-control technology environment or
is considering a move to the newly released Microsoft SharePoint Server 2010 – that
the following factors are considered before commencing your IT Governance project:


- Establish initial principles and goals: Identify tangible, measurable policies and
  standards in order to accurately quantify the initiative's benefit to the organization.
- Classify the business information and content: Ensure that your enterprise-wide
  information is organized and available in order to support your governance initiative.
- Develop an education strategy: Ensure that all end-users are given comprehensive
  training on the newly established governance policy, as well as the tools and
  resources necessary to properly utilize the technology platform.
- Craft an ongoing plan: Make sure the key governance stakeholders meet regularly
  to assess current progress and determine what, if any, actions should be taken to
  continue ensuring proper governance.

SharePoint: The Building Blocks of Governance


Microsoft SharePoint is the fastest growing server product in Microsoft history, and
for good reason: Organizations immediately recognize the platform’s ability to serve
as a centralized document repository, online collaboration workspace, development
platform, and a gateway for other mission-critical initiatives including enterprise
content management. SharePoint is truly unique, though, in that the many virtues of
the platform also serve as the fundamental reasons why it is challenging to effectively
govern.

SharePoint was intentionally created to act as a decentralized platform whose
end-users are the primary contributors of content and developers of processes. This
peculiar quality helps deliver four distinct benefits to the organization: pervasive
collaboration, delegated administration, user experience, and employee self-
service. However, these very benefits also are the prime suspects for SharePoint’s
governance challenges. Once the proper business plan and people are set in place for
your governance initiative, SharePoint administrators can meet these challenges by
addressing four key SharePoint management areas, or pillars:

- Site and Information Architecture: SharePoint Sites can be set up by an end-user
  quickly without requiring any pre-qualification, adherence to business protocols,
  or information architecture. This is great for end-user adoption, but disastrous
  for streamlined administration, potentially leading to hundreds of non-business
  aligned, duplicate, and redundant sites.
- Securities and Policies: SharePoint's decentralized nature also extends to security
  permissions and management. The goal is to ensure data access to only those so
  authorized, without over-burdening IT staff. Avoiding a manual and labor-intensive
  policy and permissions management process demands that certain elements – like
  departmental Team Sites – are governed by Active Directory permissions, while
  others are efficiently managed via SharePoint permissions.
- Operational Procedures: SharePoint administrators face a particular quandary with
  regard to SharePoint data protection and data accessibility: Administrators must
  ensure that the user experience promotes collaboration and creativity, yet govern
  it in such a way that service level agreements (SLAs) are kept, scalability is
  accounted for, geo-distribution and business continuity plans are established, and
  information from legacy systems does not get lost in the shuffle.
- Compliance: Administrators must manage the initiatives, technological controls,
  and procedural controls required to ensure that SharePoint's infrastructure, its
  users, and the information it supports operate under applicable laws, standards,
  and policies. This requires that appropriate auditing, archiving, reporting, and
  data protection strategies are put in place and documented.


Native SharePoint Governance Possibilities & Their Limitations


SharePoint provides the building blocks for common administration and governance
tasks, but it cannot natively perform all of the necessary tasks efficiently. SharePoint’s
native administration and governance tools do not allow for the performance of
global – or bulk – changes to configurations or settings. This holds true for discovery,
reporting, and rollback of changes as well. For example, with some exceptions,
administrators cannot efficiently discover, propagate, and retract customizations,
SharePoint Solutions, or SharePoint Designer elements throughout their various
development, testing, staging, and production environments. These types of changes
and customizations must often be managed manually, which can be both error-prone
and time-consuming.

The ability to discover and modify SharePoint securities is similarly limited, as native
SharePoint tools do not allow administrators to discover the deployment-wide
access/modification rights for a particular end-user or group without manually
searching each element within the entire deployment. Furthermore, in more complex
environments with complex inheritance, administrators cannot discover and modify
securities changes for an end-user or group without accessing the settings interface
of each SharePoint element or object for which that user has prescribed permissions.
There is really no native way to search, view, or modify all elements and objects for
which a specific end-user or group has permissions, leading to inefficient use of an
administrator’s time, and a poorly governed information security strategy.

SharePoint’s native capabilities with regard to the protection, access, and lifecycle
management of content and data are also limited. SharePoint cannot natively
back up and restore data and content at the item level with full fidelity, nor can it
ensure total farm backup of all SharePoint components, including the GAC, IIS, the
SharePoint Hive, and other front-end customizations. There is no native functionality
enabling organizations to optimize the performance of their SharePoint deployment
via business-rule aware archiving, or any way to ensure all geographically distributed
end-users accessing offline or satellite SharePoint farms have access to the same,
up-to-date information as those accessing the central, or main farm.

Additionally, out-of-the-box reporting, auditing, and archiving features for the sake
of meeting compliance obligations are not robust enough for organizations crafting
a comprehensive governance strategy. SharePoint cannot perform full-data capture
archive snapshots – including securities, metadata, and audit trail – retain them in
an immutable form, or maintain them outside of SharePoint’s SQL storage. Targeted
audit retrieval or reporting by user, object, and activity is also lacking in out-of-the-
box SharePoint deployments. Executing automated data pruning – based on globally
defined retention policies – is not possible natively, nor is tracking and reporting
upon all SharePoint activities from the object, user, or event perspective.

SharePoint provides the foundation for governance, but additional technical tools
are required – in conjunction with appropriate procedural protocols – to ensure truly
effective and efficient management of site and information architecture, securities
and policies, operational procedures, and compliance adherence.

DocAve and SharePoint: Closing the Governance Gap


The key is to adopt the technical tools that empower the IT administrator to carry
out his or her governance operations efficiently. This is what AvePoint's flagship
solution, the DocAve Software Platform, was designed for. Comprising more than
25 independently deployable modules, all piloted via a unified, browser-based
interface and built upon a fully distributed architecture, DocAve delivers the
software solutions organizations need to ensure their SharePoint deployments meet
all IT Governance standards, including:


- Administration: DocAve enables organizations to unify the management of all
  SharePoint permissions, users, objects, and content from a single interface for
  multiple SharePoint environments and instances.
- Storage Optimization: DocAve allows companies to keep their Microsoft SQL Server
  resources optimized with intelligent archiving, automated, real-time binary large
  object (BLOB) offloading via the EBS/RBS providers, and migration-free SharePoint
  connectivity to network and file-share content.
- Reporting & Testing: DocAve arms administrators with a fully customizable
  dashboard to access all mission-critical analytics regarding SharePoint
  infrastructure, health, and activity; build comprehensive audit and analytics
  reports for legal/regulatory review; and create fully customizable SharePoint
  testing environments to ensure optimal system architecture and enable informed
  strategic planning.
- Replication & Integration: DocAve offers real-time one-way, two-way, and
  one-to-many synchronization of SharePoint content and configurations, within or
  across farms, so all end-users have access to the most up-to-date information.
- Data Protection: DocAve delivers comprehensive SharePoint data protection, with
  support for granular, item-level through platform-level backup, and swift,
  full-fidelity recovery of SharePoint content and architecture.
- Regulatory Compliance: DocAve delivers the comprehensive auditing, flexible
  reporting, and automated life-cycle management tools organizations require to
  meet all regulatory obligations and best practice protocols, in addition to
  enabling and supporting a culture of proactive compliance for SharePoint
  deployments.
- Migration: DocAve ensures that companies using prior versions of SharePoint
  (including Windows SharePoint Services v2 and v3, SharePoint Portal Server 2003,
  and Microsoft Office SharePoint Server 2007), or a variety of legacy platforms,
  including Lotus Notes and Documentum, can perform a secure and lossless migration
  to Microsoft SharePoint Server 2010.

Two Platforms, One Comprehensive Governance Solution


True IT governance requires the leadership, organizational structures, and processes
that ensure information technology systems – such as SharePoint – sustain and extend
the organization’s strategies and objectives. Remember that governance is not
simply an internal administrative task, nor can it succeed without having executive
sponsorship and clear business planning executed before deploying any technology.
This ever-evolving puzzle includes site and information architecture, securities and
policies, operational procedures, and compliance. To successfully satisfy all four of
these pillars, key stakeholders must implement the appropriate tools. Anything short
of this, and effective governance is simply unachievable.

SharePoint's distinctive capabilities and architecture provide unique governance
challenges, and consequently dedicated SharePoint governance tools are required.
The platform's native tools do not provide the breadth of functionality or the agility
necessary to efficiently deliver governance in today's competitive and regulatory
landscape.

With the DocAve Software Platform, organizations will have the tools required to
successfully execute a robust governance strategy efficiently, meet their evolving
compliance objectives, and ensure optimal performance of their SharePoint platform.

SharePoint® eMagazine - September 2010

Logging and monitoring in SharePoint 2010
by Mirjam van Olst
Logging and monitoring in SharePoint 2010 is considerably better than in SharePoint
2007. There are a lot of features that do some form of logging or monitoring. There is
diagnostic logging, usage and health data collection, the SharePoint health analyzer,
web analytics, the logging database, the staging database and the reporting database,
administrative reports, health reports, information management usage reports, web
analytics reports and a whole host of logging related timer jobs. In this article I will try
to make sense of it all. What features are used to collect what type of data, where is
that data stored and where can reports about the different types of data be found?

Diagnostic Logging
You will use diagnostic logging to keep an eye on what is going on on your servers.
Diagnostic logging is used to collect event and trace information. If you used Share-
Point 2007 you are probably familiar with SharePoint’s diagnostic logging. However,
diagnostic logging has been greatly improved in SharePoint 2010.
Information gathered through diagnostic logging is stored in the Windows event logs
and in the so called ULS logs. By default these logs are stored in the C:\Program Files\
Common Files\Microsoft Shared\Web Server Extensions\14\LOGS\ folder. The ULS log
files have a .log extension. All data that is written to the ULS logs is also written to the
logging database so it can be used for reporting purposes later on.
You can configure the diagnostic logging settings by going into Central Administration
=> Monitoring => Configure diagnostic logging.

Figure 1: The monitoring screen in Central Administration.


In here you can change the severity for which you want to log events for each category.
By using event throttling for the ULS logs you can make sure that the logs aren’t
full of information that you are not interested in. This will make it easier for you to
read the messages in the logs and find what you are looking for. The readability of
the Windows event logs will increase even further if you enable the event log flood
protection. Enabling this feature will cause repeating messages to be suppressed until
conditions return to normal. This will prevent your event log from filling up with the
same repeating event. If flood protection kicks in it will add a marker to the log to
indicate that log throttling has started. By default messages will be suppressed if they
occur more than 5 times in 2 minutes. You can use PowerShell to adjust this setting.
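
For example, here is a sketch of adjusting the flood protection thresholds with the Set-SPDiagnosticConfig cmdlet from the SharePoint 2010 Management Shell; the values shown are just an illustration:

```powershell
# Show the current diagnostic logging configuration,
# including the event log flood protection settings.
Get-SPDiagnosticConfig

# Only suppress a repeating event after 10 occurrences in 5 minutes,
# instead of the default 5 occurrences in 2 minutes.
Set-SPDiagnosticConfig -EventLogFloodProtectionThreshold 10 `
                       -EventLogFloodProtectionTriggerPeriod 5
```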


Figure 2: Event throttling per category.


You can also change the default location for the ULS logs, limit the number of days
that log files are kept and the maximum amount of disk space that they can use. The
default amount of space they can take up is 1TB. Unless you have really large hard
disks in your SharePoint servers you probably want to change this to something more
reasonable!
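
These settings can be scripted as well. A minimal sketch using Set-SPDiagnosticConfig, where the path and the 50 GB limit are example values:

```powershell
# Move the ULS logs off the system drive, keep a week of log files,
# and cap the total disk space the trace logs may use.
Set-SPDiagnosticConfig -LogLocation "D:\SharePoint\Logs" `
                       -DaysToKeepLogs 7 `
                       -LogMaxDiskSpaceUsageEnabled:$true `
                       -LogDiskSpaceUsageGB 50
```

Keep in mind that the log location applies to every server in the farm, so the folder has to exist on all of them.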

Figure 3: ULS log configuration.


Another very cool and helpful new feature in SharePoint 2010 diagnostic logging is
the correlation ID. A correlation ID is a GUID that is generated at the start of a page
request and that is returned in the SPRequestGuid HTTP response header. This ID is
then made visible at every layer: in the ULS logs on the SharePoint web front end
server, in the logs on the SharePoint application server, and even in SQL Profiler,
where you can use the correlation ID to find what you are looking for.
The correlation ID surfaces in the browser in the popup you get if an error occurs, or
on the developer dashboard if no error occurred during your request. Being able to
follow the conversation across layers and servers makes it a lot easier to find out what
exactly happened during your request.
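
Once you have the correlation ID from the error popup or the developer dashboard, you can use it to filter the local ULS logs from PowerShell. A sketch, where the GUID is a made-up example; run it on each server you want to inspect, since the ULS logs are per server:

```powershell
# Pull all trace entries for a single request from this server's
# ULS logs, looking back one hour.
Get-SPLogEvent -StartTime (Get-Date).AddHours(-1) |
    Where-Object { $_.Correlation -eq "b4bbac41-27c7-4c90-85ab-783e82ef1e79" } |
    Format-List Timestamp, Area, Category, Level, Message
```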

Usage and health data collection


The second type of logging we will discuss is the logging of usage and health data.
You can start collecting usage and health data by going into Central Administration
=> Monitoring => Configure usage and health data collection.


Figure 4: Usage data collection.


At the top of the page you can enable usage data collection. You can also choose
which of the following events you want to log:
- Content Import Usage
- Content Export Usage
- Page Requests
- Feature Use
- Search Query Usage
- Site Inventory Usage
- Timer Jobs
- Rating Usage
If you enable usage data collection SharePoint will create a “Usage and Health Data
Collection Service Application” called WSS_UsageApplication. You cannot create
this service application manually from the manage service applications page. Once
the service application exists you can see it in the list of service applications on the
manage service applications page. Clicking on the WSS_UsageApplication link or
selecting manage in the manage service applications admin page will take you to the
usage and health data collection page.
The usage providers log to the file system. The next section on the page lets you
change where the log files are written to. By default they are stored at C:\Program
Files\Common Files\Microsoft Shared\Web Server Extensions\14\LOGS\. Usage data is
logged to .usage files. You can limit the maximum amount of space the usage files
can take up on the file system. The default amount is 5GB.
Be aware that enabling logging for all usage events will have a large impact on disk
usage on your SharePoint server. It can seriously hurt the performance of your farm.
Because of this you should only enable logging for those events that you are really
interested in. If you only want to look at usage reports for certain events once, or
every now and then, just enable logging for those events until you have the data you
need and then disable it again.
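
Enabling and disabling individual usage events can also be done from PowerShell. A sketch, assuming the default usage definition names; the paths and sizes are example values:

```powershell
# See which usage definitions exist and whether they are enabled.
Get-SPUsageDefinition

# Stop logging an event type you are not interested in right now.
Set-SPUsageDefinition -Identity "Page Requests" -Enable:$false

# Move the .usage files and lower the default 5GB cap.
Set-SPUsageService -UsageLogLocation "D:\SharePoint\UsageLogs" `
                   -UsageLogMaxSpaceGB 3
```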


If you have ever tried to open a .usage file you know that these files weren’t meant
to be a nice bedtime story. The data in them is binary. There are two log collection
timer jobs that are both needed to make the collected data useful. The first one is
the Microsoft SharePoint Foundation Usage Data Import timer job, which collects the
data from the .usage files on each SharePoint server in the farm and stores the
information in the logging database. By default this job will run every 30 minutes,
but you can adjust this if you want to.
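
If you want fresher data in your reports you can speed up that schedule. A sketch; the internal job name shown here is what I believe the import job is called, but verify it in your own farm with Get-SPTimerJob first:

```powershell
# Find the usage data import job and make it run every 15 minutes.
$job = Get-SPTimerJob | Where-Object { $_.Name -eq "job-usage-log-file-import" }
Set-SPTimerJob -Identity $job -Schedule "Every 15 minutes between 0 and 59"
```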

Figure 5: Health data collection and the logging database.

Figure 6: Health data collection related timer jobs.


The second timer job is the Microsoft SharePoint Foundation Usage Data Processing
job. This job is used for processing the data, so that it can be used to generate reports.
By default this job will only run once a day.
By checking the Enable health data collection checkbox and setting up a schedule
for the health data timer jobs you can enable health data collection. Health data is
not logged to the file system of the SharePoint servers. It is logged directly to the
logging database. We will talk about this database in great detail later.
If you click on the Health Logging Schedule link you will see a list of timer jobs that
are related to health logging. Many of them are enabled. Most of the diagnostic
data provider timer jobs however are disabled by default. There is a reason for this.
If you enable these timer jobs, and more specifically the Diagnostic Data Provider:
Performance Counters - Database Servers timer job and the Diagnostic Data Provider:
Performance Counters – Web Front Ends timer job they will extend the schema of
your logging database and they will start logging like crazy. So much so that the
amount of data stored in the logging database will get into the tens or hundreds
of gigabytes, if not terabytes. Writing all this data to the logging database will also
batter the hard drive of your SQL server so much that the logging database might
require its own SQL server.

Logging Database
At the bottom of the usage and health data collection page you can select a database
server and database name for the logging database. Be aware that this database is
sometimes called the usage database. Don’t let this confuse you, it’s the same thing.
All the information collected by the diagnostic providers and by the usage and health
timer jobs is eventually stored in the logging database. This means that this database
can get very busy and very big. If you are serious about using the health data collection
features, it can get so busy that Microsoft recommends giving this database its own
SQL server instance. The logging DB should never be stored on the same physical disks
as the search or content databases, because that will result in massive disk contention
for all three databases. If you plan to use the health data collection features you need
to carefully design your storage layer. It should be obvious from the above statements
that you have to pay close attention to what information you are logging and whether
you actually need that information. If you don’t need the information don’t collect it
and don’t consume unnecessary resources.
By default the logging database stores 14 days’ worth of data. By running a Power-
Shell script this can be extended to up to 31 days. Be aware that every time you
change the retention policy all tables in the logging database are emptied and you
will lose the data you collected up till that point.
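
The retention change mentioned above is done per usage definition. A sketch of extending retention to the maximum for all definitions:

```powershell
# Extend the logging database retention from the default 14 days to
# the 31-day maximum for every usage definition. Remember: changing
# retention empties the corresponding tables, so existing data is lost.
Get-SPUsageDefinition | ForEach-Object {
    Set-SPUsageDefinition -Identity $_ -DaysRetained 31
}
```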
The schema of the logging database is documented. This means that you can write
data to the logging database and you can read, query and build reports directly from
it. To make sure you don’t cause locks on the database it will be better to make a
snapshot of the logging database and use that for reading and querying the data.
The bottom line however is that the logging database is the first SharePoint database
that you are allowed to touch. It is even supported to change the database schema
by adding tables or stored procedures. Don't modify the schema of any of the default
objects in the database, as that could cause problems when SharePoint tries to write
data to it.
Here is an example of a query that you could run against the logging database.
This query will get you yesterday's slowest pages for the web application with web
application id d75c0a2e-2e15-4d1a-b300-7889fa86133f on the mysharepointservername
SharePoint server.
This query is just one example; you can generate an almost unlimited amount of
reports from the data in the logging database.


declare @stime datetime
declare @etime datetime
set @stime = getDate() - 1
set @etime = getDate()
exec dbo.proc_GetSlowestPages
    @StartTime = @stime,
    @EndTime = @etime,
    @WebApplicationId = 'd75c0a2e-2e15-4d1a-b300-7889fa86133f',
    @MachineName = 'mysharepointservername'

Listing 1: Requesting yesterday's slowest pages for a given web application
on a given server.

SharePoint Health Analyzer


The SharePoint Health Analyzer is a new health analysis tool that enables you to
proactively check for potential configuration, performance or usage problems. The
Health Analyzer runs predefined health rules against all servers in the farm. A health
rule runs a test and returns a status that tells you the outcome of the test. SharePoint
can even help you to resolve problems that are reported by some of the rules.
Each health rule falls in one of the following categories: Security, Performance,
Configuration, or Availability.
Even though the list of rules is just another SharePoint list, adding an item to this
list won't add another rule to the Health Analyzer. A health rule is actually a custom
assembly. The rules need to inherit from SPHealthAnalysisRule or
SPRepairableHealthAnalysisRule. All rules need to implement:
- A Check() method that returns Passed or Failed.
- Some string properties that explain the problems.
- Categories (ErrorLevel and SPHealthCategory).
- Optionally, a Repair() method (only for an SPRepairableHealthAnalysisRule) that
  returns Failed, Succeeded or Unnecessary.
The details page of a rule includes an option to Reanalyze Now, which runs the rule
again. This is useful if you think you have fixed the problem reported by the rule
and you don't want to wait until the next scheduled run of the timer job to check
whether the message will indeed disappear from the list of health analyzer problems.
Normally the health analyzer will use a bunch of timer jobs to check on all the rules.
The timer jobs can be found on the page with the timer jobs for health data collection
shown in figure 6. The health rules are run by one of the many timer jobs whose name
starts with “Health Analysis Job”.

Figure 7: Health Analyzer Rule Definitions.


To gather data the health analyzer timer jobs will get information from the servers
in the farm and from the logging database. This data will be used by the rules to
determine whether the farm is healthy and stable.
When a rule fails, the status is written to the Health Analyzer Reports list in Central
Administration and to the Windows event log. A message will also be displayed on
the homepage of Central Administration telling the administrator that the health
analyzer has detected an issue. The colour of the bar tells you how serious an issue
the health analyzer has found.
On the page in Central Administration displaying the Health Analyzer Reports list
you can click an alert to view more information about the problem and see steps to
resolve the problem. You can also change the settings for the rule that raised the
alert.
The Health Analyzer Reports list is just another SharePoint list, with the same options
other SharePoint lists have. This means you can for instance create your own views on
the list and put alerts on it so you get a warning when anything changes.

Figure 8: Health Analyzer warning on Central Administration home page.

Figure 9: Review Health Analyzer problems and solutions.


Be aware that there will always be messages in the list of problems and solutions.
They do no harm. Two examples of these are:
- The "Missing server side dependencies" message, which is there by default and
  reports a dll that is apparently missing on the server, even if you installed
  everything the way it should be installed.
- The "The server farm account should not be used for other services" message,
  which will pop up if you are running the user profile service. This service will
  by default use the farm account. There is no way around this; you can't change
  it, or your user profile synchronization won't work. You shouldn't use the farm
  account for any of the other services, as that would be a bad practice. This rule
  will be changed in a later hotfix, so it won't fire for the user profile service
  anymore. For now you can just ignore the message.
Don’t make it your life’s mission to get rid of every message in the Health Analyzer
list, but do keep track of messages in there so you will be able to spot potential
problems and solve them before they cause real damage to your farm.
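
If you have consciously decided that a particular rule does not apply to your environment, you can switch it off instead of letting it clutter the list. A sketch; the rule name is a placeholder you would take from the rule list:

```powershell
# List the registered health rules and their current state.
Get-SPHealthAnalysisRule | Select-Object Name, Enabled, Summary

# Disable a rule you have decided to ignore; it can be turned back
# on later with Enable-SPHealthAnalysisRule.
Disable-SPHealthAnalysisRule -Identity <RuleName>
```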

Web Analytics
You could argue that the web analytics features in SharePoint 2010 aren't really part
of the logging and monitoring features. However, they use a lot of the logging and
monitoring infrastructure, which is why I will cover them in this article anyway.
SharePoint 2010 has its own web analytics service application that can be created and
managed from the manage service applications page in Central Administration. When
creating the service application you will also create the staging and the reporting
databases that are used by the web analytics service application to store and query
the usage data.
The “Web Analytics Data Processing” timer job will (just like the usage data
collection timer jobs) use the .usage files to collect information about the farm
from the SharePoint servers. This makes the web analytics features the only log data
collection features that don’t use the logging database.
It also means that the web analytics features need the .usage files in order to
function. Because of this, usage data collection must be enabled for the web
analytics service to work. If you create the web analytics service application it will
automatically enable usage data collection for you. Be aware that if you manually
disable the usage data collection timer jobs or delete the usage data collection
service application, your web analytics will stop working.

Figure 10: Creating the web analytics service application.


The data in the staging database is retained for 30 days and moved into the reporting
database for longer term retention. The retention time for data in the reporting
database ranges from 1 to 25 months and can be selected when creating the web
analytics service application. The meat of the web analytics features is the ”Web
Analytics Web Service”, which contains most of the logic needed for the web analytics
features to work.

Reports
We have spent quite a lot of time on how and where SharePoint collects all its logging
and monitoring data. Of course this isn't any good if you can't view and create
reports that use all the collected data.
The diagnostic logging information is surfaced in the ULS logs and in the Windows
event logs. This information will mainly be used to track down the cause of problems
in your environment, and not necessarily to generate reports.
The usage and health data information that is stored in the logging database is
surfaced in reports on the view administrative reports and the view health reports
pages.
The messages generated by the health analyzer are available in the Review problems
and solutions list and in the Windows Event Viewer.
Web analytics reports are shown on the View Web Analytics reports page. The web
analytics reports show data collected per web application. From the site settings
pages the Site Collection Web Analytics reports and the Site Web Analytics reports
can be opened. These reports show similar information that will be relevant to the
current site or site collection. The site and site collection reports can be viewed by
site owners and site collection administrators, enabling them to get some insights
into the behavior of users on their sites without them needing access to Central
Administration.
Because a lot of data is stored in the logging database and it is possible to write your
own queries against this database, you can easily create your own custom reports
using Excel, SQL Server Reporting Services, or third-party reporting tools. This means
that you can create some really fancy reports about the data in the logging database
by just using pivot tables and slicers in Excel. A lot of reporting can be done without
even using SQL Server Reporting Services.

... by the way ...


Surveys, see only your own answer(s)
Surveys are a great tool in SharePoint, but there is one tricky setting: all submitted
answers are readable by all users who can access the survey!
How do you avoid this? Go to the Survey, Survey Settings, Advanced settings, and
check Read responses that were created by the user.
To avoid this problem in the future you can save an empty Survey with the above
settings as a template: Survey, Survey Settings, Save survey as template.

Tip from Jan Ligtenberg, trainer/ consultant APS IT-diensten,


j.ligtenberg@apsitprojecten.nl
Tel. 06 10634297


Conclusion
In this article we have looked at the different logging and monitoring features in
SharePoint 2010. Hopefully by now you have an understanding of which feature you
should use for what purpose, where the different features store their information
and where you can see the logged data being surfaced.
Table 1 provides a very short overview of the features, the places where they log and
the different reporting options.

Feature                  Logs to                             Data surfaced
Diagnostic logging       ULS logs, Windows event log,        ULS logs, Windows event log,
                         logging database                    custom reports
Usage data collection    .usage files, written into the      Administrative reports, health
                         logging database by timer job       reports, custom reports
Health data collection   Logging database                    Health reports, custom reports
Health analyzer          Health Analyzer Reports list        Review problems and solutions
                         and Windows event log               page, Windows event log
Web Analytics            Reads data from the .usage files    Web analytics reports in Central
                         and writes it to the staging        Administration and on the site
                         and reporting databases             settings pages

Table 1: Overview of logging and monitoring services in SharePoint 2010.
There are many logging and monitoring improvements in SharePoint 2010. Go
check them out now that you know what they are, where you can find them and
what they do.

... by the way ...


The gap between Read and Contribute
Sometimes you wish for a role like Contribute minus the Delete and Edit rights. If you create
a new role you can define all the required options. Go to Site Actions, Site Settings on the
top level of your Portal.
Click on Site permissions, in the Ribbon click on Permission Levels and Add a Permission Level.
Put a Name and Description in the desired fields and check the following options:
Add Items, View Application Pages, Use Client Integration Features. You will see some options
will be checked automatically.
Now you can use this role on Libraries and Lists!

Tip from Jan Ligtenberg, trainer/ consultant APS IT-diensten,


j.ligtenberg@apsitprojecten.nl
Tel. 06 10634297

What will you be doing next month?
You are building the worldwide intranet for a large multinational, which its more
than 25,000 users enjoy working with every day. You open the gates to critical ERP,
BI and HR systems and integrate them with the most modern Document Management
and Social Networking applications.

You are a specialist who knows their way around SharePoint 2010, Nintex Workflow
and WSF. Someone who is always looking for new knowledge and wants to be the
first to work with tomorrow's technology, but also someone who enjoys the success
the customer achieves today with your innovative solutions.

You are entrepreneurial and like to share your vision and knowledge with those
around you, both in person and online, and you work together with designers,
architects, developers and consultants on the best solutions for your customer. You
know how to combine work and private life seamlessly and make the optimal
contribution to your team independent of time and place.

Next month you will be working at Macaw.

SharePoint Consultants
SharePoint Developers
Project Managers

Macaw is the largest IT service provider in the Netherlands that specializes
exclusively in Microsoft technology. Our knowledge is bundled in four Solution
Centers that focus on realizing collaborative portals, ECM systems, internet and
commerce sites, business solutions, CRM systems, and application management and
hosting services.

For more information about these and other vacancies, visit our website
www.echtleukwerk.nl or contact our Recruitment department at 020-8510510 or
recruitment@macaw.nl.

Macaw

Building a FAQ with new CQWP functionality in SharePoint 2010
by Hannah Swain
In a real world case, the client wanted a list of frequently asked questions which they could easily
filter per category. Without using a custom solution, the best answer in MOSS 2007 would have
been to create a page for each category, then place a CQWP to filter the applicable questions.
This solution isn’t particularly satisfactory, as it is difficult to maintain. The final solution was to
use a managed metadata field to filter the query string, thanks to the new and improved Content
Query Web Part (CQWP) in SharePoint 2010.
This article will walk you through the process of creating a similar set-up for category filtering
using the CQWP. This is a no-code solution with only a little bit of help from SharePoint Designer
for creating a page layout.

Step 0: Taxonomies and term stores?


Taxonomies are used every day in the real world. In his work Systema Naturae
(1735), Carl Linnaeus categorized the world into the well-known animal, vegetable
and mineral kingdoms. These are further broken down into a ranked hierarchy. An
example which may be slightly closer to home is the Amazon.com website. Items on
the website are classified as being of the type books, clothing, kitchen supplies, etc.
Books are categorized according to genre and then subject. The taxonomy is simply a
structure used to classify information.
One of the great things about SharePoint 2010 is the ability to work with a taxonomy
– a central set of hierarchical terms which is accessible from multiple site collections.
This allows for the ability to categorize an item within the SharePoint environment.
Note that you can reach the term store via Central Administration or as an option on
the site settings page of each site in your site collection.

Step 1: Fill the term store


We need to save the list of FAQ Categories somewhere. My personal best practice is
that whenever there is a site column that I would normally use a choice or lookup for,
I now use the term store and make it a managed metadata column. In this demo case,
there is nothing else in this term store so the location is not very important.
Make a new term set and create the underlying terms. Make a term for each category.
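Filling the term store can be done through the UI, but it can also be scripted. A minimal sketch, assuming a site at http://intranet, English terms (LCID 1033) and hypothetical category names:

```powershell
$site = Get-SPSite http://intranet
$session = New-Object Microsoft.SharePoint.Taxonomy.TaxonomySession($site)
$store = $session.TermStores[0]              # first term store in the farm
$group = $store.CreateGroup("FAQ")
$termSet = $group.CreateTermSet("FAQ Category")
"General info", "Orders", "Support" | ForEach-Object {
    [void]$termSet.CreateTerm($_, 1033)      # 1033 = English
}
$store.CommitAll()                           # persist the changes
$site.Dispose()
```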

Figure 1: Filling the term store.


Step 2: Creating the content type


At the site collection level, create a content type which inherits from the content type
“Article Page”. I have chosen to use the Article Page content type because I want each
of my questions to be its own page – the same principles in this article can be applied
to most other content types such as document or item.
In this case, the site columns in Article Page map nicely to most of the site columns
that we will need:

FAQ information      Site column

Question             Title
Answer               Page Content

Table 1: Content type columns.


Finally, create a new site column of the type “Managed Metadata” called FAQ Category.
In the column settings, link it to the FAQ Category section of the term store; this setting
tells the managed metadata column where in the taxonomy to “attach” itself.

Figure 2: Linking the managed metadata column to the term store.

Step 3: Create the page layout


This is an optional step, but it’s necessary for examples which will be shown later on
in this article. Make a simple page layout which shows the title and page contents
site columns, linked to the FAQ content type, with the FAQ Category column in the
hidden metadata section. I went ahead and implemented two columns in the layout:
one is used for the already mentioned site columns, the other will be used later.


Figure 3: Creating the page layout in SharePoint Designer.


Remember that to work with SharePoint 2010, you need SharePoint Designer 2010.
SharePoint Designer 2010 is not compatible with SharePoint 2007, nor can you use
SharePoint Designer 2007 with SharePoint 2010. Both applications can be downloaded
free of charge from the Microsoft site.

Step 4: Add some content


Go ahead and create a few questions just to have something to play with in the rest
of this article. Don’t forget to add the content type to your page library, etc.

Step 5: Showing related questions


Here is where that extra column on the page layout comes in: we’re going to show
the related questions so that visitors can click straight through to more interesting
information. On the page layout in SharePoint Designer, add a CQWP. The easiest
way to configure the CQWP is to right-click it and then choose “Tag properties…”
from the context menu that appears. Configure it as follows:

Choose content type


Specifically choose the content type you are using – in this case, FAQ. This ensures
that pages which use other content types (such as default.aspx) will not turn up in
the query.

Additional filters
SharePoint 2010 introduces two new filters in the CQWP which are very useful. Note
that there is a question mark next to the heading in the web part – this has a good
summary which always helps me out when my memory of the correct context fails
me.

Filter option        Description

PageFieldValue       Allows you to use the current value of a field on the current page.
PageQueryString      Allows you to grab the value of a query string in the URL.

Table 2: CQWP filter options.


We will be using PageFieldValue to get the value of FAQ Category for the current
page. Using that, we can filter out other pages in the page library which have the
same category. The correct form is:
[PageFieldValue: FAQ Category]


Figure 4: Configuring the CQWP filter options.


When you view your page layout, it is perfectly normal that the CQWP only shows
dummy content. It will be visible when you check your pages via a browser – you can
see the pages that have the same FAQ Category.

Figure 5: Example of a FAQ page.


You can edit the CQWP further, for example to limit the number of results returned,
change the sort order, etc.

Step 6: FAQ overview page


We need a good way to show the available questions, so the last step will be creating
a page as an overview. Go ahead and use a blank web part page for this one, though
you can get as creative as you like with a custom page layout.

Summary Links web part


The first necessary web part is the Summary Links Web part. Put this in the right
column. For each FAQ Category, make a link. That link will point to the current page,
with a query string which is linked to the category.
Here “cat” is the query string variable. For each link, we are saying “put the variable
called ‘cat’ with the value of the FAQ Category into the URL”. This value will be picked
up on the page that the link points to.
It is very important that the category that you put into the query string is exactly the
same as the name of your category in the FAQ category managed metadata field.

Figure 6: Configuring a link in the Summary Links web part.


Make a link in the web part for each of your categories – each link needs to point to
the current page.
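As an illustration, with an overview page at a hypothetical URL and three hypothetical categories, the links could look like this:

```
http://intranet/pages/faqoverview.aspx?cat=General info
http://intranet/pages/faqoverview.aspx?cat=Orders
http://intranet/pages/faqoverview.aspx?cat=Support
```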

CQWP
The next step is to place a CQWP in the left column. This will be picking up the correct
FAQ pages for us.
The values for this CQWP are the same as for the CQWP when showing related
information. Instead of filtering with the PageFieldValue, we’ll be filtering with the
PageQueryString, to grab that variable out of the URL:

Figure 7: Configuring the CQWP query string filter settings.


When you save/check the page in, the CQWP will be empty. Click on one of the links
in your links web part to see items from that category.

Figure 8: Example of the CQWP filtering according to the Summary Links web part.
When you link to the page, make sure to include the query string with the category
that you want to show by default, e.g. http://www.test.com/pages/default.aspx?cat=General info.

Conclusions
In this article, we worked through all the steps for putting together a FAQ. This could
very easily be reused for any kind of information that needs to be filtered and shown
on a page, such as a news archive or a product catalogue. The taxonomy and managed
metadata features offer many new options within SharePoint 2010 besides the one
covered here – I expect many interesting solutions to pop up as more users start
working with them.


Using the refinement panel to get more insight into your search results
by Lennard van Leuven
The way knowledge workers get to their information is shifting from “contextual browsing”
to “parametric searching”. The amount of information companies are dealing with is becoming
enormous and the diversity in which content is used and reused is complex. Finding the infor-
mation you want by using only one context (for example the department that added the content)
means that you miss out on the opportunity to find information based on other aspects of the
content. This approach requires more enhanced search functionality.

The SharePoint 2010 refinement panel


Although the search functionality in SharePoint 2007 had improved dramatically
compared to SharePoint 2003, it still didn’t give you any insight into the search results.
You could search for a document with a specific metadata property and the search
results page would then tell you which documents matched your query. What it didn’t
tell you was what other refinements you could make to get closer to the content you
were looking for.
SharePoint 2010 has a new feature called the refinement panel, which gives you
insight into your search results. In this article I will describe how the search
functionality in SharePoint has evolved with the new refinement panel, how to
configure and customize the refinement panel and how the new search refinement
panel in SharePoint 2010 could become the tipping point for users to navigate to
information by using the search functionality.

“Searching or filtering”
In the last couple of years, good search functionality has become one of the most
important requirements when building a new corporate portal. More and more
people are becoming familiar with using a search engine. Since the relevance of the
search results has been improved greatly, users rely more on the search functionality
as the way to get to the organizational information.

Figure 1: An example of a refinement panel on a Dutch commerce site.


One of the big differences between searching for information on the internet (using
Google or Bing) and searching for information on your corporate portal is that the
information on your corporate portal is in a (semi-)controlled environment. This
makes it possible to enrich information with metadata when it is created or added.
The “structured” properties attached to the “unstructured” information such as
documents or webpages make it possible to search for information not only based on
keywords, but also by filtering the results based on unique properties. Searching for
information in a filtered set of search results gives much better results. For example,
finding that specific application form for attending a conference is much easier if you
perform a query for “conference application form” only within all application forms
instead of performing that query throughout the whole SharePoint portal.
Refining the number of results based on properties is used on almost all major
commerce sites. Figure 1 shows a screenshot of a Dutch commerce site that gives
visitors the opportunity to refine the number of found products based on different
categories.
MOSS 2007 didn’t provide any out of the box functionality for achieving this. The
CodePlex tool “Faceted Search” (http://facetedsearch.codeplex.com/) made it possible
to do refinement on your search results page. However, it also had some downsides,
such as the requirement to install the Microsoft Enterprise Library software; it was
also not possible to refine your search based on date ranges.

The refinement panel in SharePoint 2010


SharePoint 2010 comes with two built-in search web parts for refining your search
results: the “people refinement panel” and the “refinement panel”. The first one is
used for refining the results from a people search and the second one for refining
the results from all other searches. These refinement panel web parts are part of the
search web parts delivered in SharePoint 2010. The refinement panel web parts are
(automatically) connected to the Search Core Results web part.

Figure 2: An example of the out-of-the-box refinement panel web part.


By default, the refinement panel web parts show a number of refinements. The following
table shows the default metadata properties for each of the refinement panel web parts:

Refinement panel web part: Result type, Site, Author, Modified date, Tags.
People refinement panel web part: Focus, Job title, Organization, Managed metadata.

Table 1: Default metadata refinement properties.
The refinement panels can be configured with custom metadata properties. These
can either be managed metadata properties defined in the search service application
or taxonomy properties defined in the profile store. Both of the refinement panel
web parts work the same way. I will now focus on the basic refinement panel and
show you the most important web part properties.

Figure 3: The refinement panel web part properties.


Figure 3 shows the refinement panel web part properties. The most important
properties for configuring the refinement panel web part are:


Filter Category Definition — This XML string configures the categories by which you can refine the
search results. I will go deeper into this property later on.
Number of categories to display — The number of categories shown on the screen. Make sure this
number is equal to or higher than the number of categories you have configured in the “Filter
Category Definition” property, or your categories will not be shown.
Number of characters to display — If the values in the refinement categories are very long, the
values will be truncated to the number of characters set in this property.
Use Default Configuration — This checkbox resets the values of the web part properties to their
original values. By default it is checked. Make sure you uncheck this checkbox so that your
settings apply!
Data view properties – XSL editor — This property allows you to configure the look and feel of the
refinement panel. Like the core results web part, this is done by editing some XSLT code.

Table 2: Most important properties for configuring the refinement panel web part.

Customizing the refinement panel


With some code samples for the “Filter Category Definition” property, I will show
how you can perform some of the basic customizations on the refinement panel.

1. Adding custom metadata properties.


Before you can add your custom metadata properties, these properties first need
to be created as managed metadata properties. This is done in the Search Service
Application. After that, you can add the properties as filter categories. The following
example shows the XML for three custom metadata properties.

<?xml version="1.0" encoding="utf-8"?>
<FilterCategories>
  <Category Title="Afdeling"
            Description="De afdeling waar het document van afkomstig is"
            Type="Microsoft.Office.Server.Search.WebControls.ManagedPropertyFilterGenerator"
            MetadataThreshold="5"
            NumberOfFiltersToDisplay="4"
            MaxNumberOfFilters="20"
            SortBy="Frequency"
            SortByForMoreFilters="Name"
            SortDirection="Descending"
            SortDirectionForMoreFilters="Ascending"
            ShowMoreLink="True"
            MappedProperty="Afdeling"
            MoreLinkText="show more"
            LessLinkText="show fewer" />
  <Category Title="DocumentType"
            Description="Het type document"
            Type="Microsoft.Office.Server.Search.WebControls.ManagedPropertyFilterGenerator"
            MetadataThreshold="5"
            NumberOfFiltersToDisplay="4"
            MaxNumberOfFilters="20"
            SortBy="Frequency"
            SortByForMoreFilters="Name"
            SortDirection="Descending"
            SortDirectionForMoreFilters="Ascending"
            ShowMoreLink="True"
            MappedProperty="DocumentType"
            MoreLinkText="show more"
            LessLinkText="show fewer" />
  <Category Title="Vestiging"
            Description="Van welke vestiging is het document afkomstig"
            Type="Microsoft.Office.Server.Search.WebControls.ManagedPropertyFilterGenerator"
            MetadataThreshold="5"
            NumberOfFiltersToDisplay="4"
            MaxNumberOfFilters="20"
            SortBy="Frequency"
            SortByForMoreFilters="Name"
            SortDirection="Descending"
            SortDirectionForMoreFilters="Ascending"
            ShowMoreLink="True"
            MappedProperty="Vestiging"
            MoreLinkText="show more"
            LessLinkText="show fewer" />
</FilterCategories>

Listing 1: Adding custom metadata properties.


Most attributes are self-explanatory, but two need some explanation. The
attribute “MappedProperty” refers to the managed metadata property in the Search
Service Application. The MetadataThreshold attribute determines the minimum
number of results needed for the refinement category to be shown: if
MetadataThreshold is set to 10, the refinement category is only shown if the
combined number of results for that category is higher than 10.
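The managed metadata properties referenced by MappedProperty (Afdeling, DocumentType and Vestiging in this example) must exist in the Search Service Application before these categories can appear. Creating one of them can be sketched as follows with the SharePoint 2010 search cmdlets; the crawled property name ows_Afdeling is an assumption based on the usual naming for site columns, and a full crawl is needed afterwards:

```powershell
$ssa = Get-SPEnterpriseSearchServiceApplication

# Create the managed property (type 1 = Text)
$mp = New-SPEnterpriseSearchMetadataManagedProperty -SearchApplication $ssa `
    -Name "Afdeling" -Type 1

# Map it to the crawled property that the crawler created for the site column
$cp = Get-SPEnterpriseSearchMetadataCrawledProperty -SearchApplication $ssa `
    -Name "ows_Afdeling"
New-SPEnterpriseSearchMetadataMapping -SearchApplication $ssa `
    -ManagedProperty $mp -CrawledProperty $cp
```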

2. Show the number of matches


As in the example of the Dutch commerce site in figure 1, you get much more information
if the number of matches is shown. By default the SharePoint 2010 refinement
panel doesn’t show the result count. This is easy to achieve by adding the attribute
ShowCounts="Count".

<?xml version="1.0" encoding="utf-8"?>
<FilterCategories>
  <Category Title="Afdeling"
            Description="De afdeling waar het document van afkomstig is"
            Type="Microsoft.Office.Server.Search.WebControls.ManagedPropertyFilterGenerator"
            MetadataThreshold="5"
            NumberOfFiltersToDisplay="4"
            MaxNumberOfFilters="20"
            SortBy="Frequency"
            SortByForMoreFilters="Name"
            SortDirection="Descending"
            SortDirectionForMoreFilters="Ascending"
            ShowMoreLink="True"
            MappedProperty="Afdeling"
            MoreLinkText="show more"
            LessLinkText="show fewer"
            ShowCounts="Count" />
</FilterCategories>

Listing 2: The ShowCounts attribute.

3. Playing with dates


Some of the most interesting refinements are the ones based on date columns. Instead
of displaying results in chronological order, this allows you to group the search results
into a couple of logical time periods. This is done by adding a custom filter to your
refinement category. Custom filters can be added with a range mapping or a value
mapping and can be based on the data types Date, String or Number. Value mapping is
used for custom filters like the type of document (items with extension .docx or .pdf
are documents and items with extension .png or .jpg are images).

<CustomFilter CustomValue="Documents">
  <OriginalValue>docx</OriginalValue>
  <OriginalValue>pdf</OriginalValue>
</CustomFilter>
<CustomFilter CustomValue="Image">
  <OriginalValue>png</OriginalValue>
  <OriginalValue>jpg</OriginalValue>
</CustomFilter>

Listing 3: CustomFilter with value mapping.


Range mapping is used for date or numeric ranges. The following example shows a
custom filter that counts how many results were changed within the last week.

<CustomFilter CustomValue="Last week">
  <OriginalValue>-7..</OriginalValue>
</CustomFilter>

Listing 4: CustomFilter with range mapping.


The following example shows a complete refinement category that tells you how
many results were changed within the last week, month and year, as well as the
results that haven’t changed within the last year.

<Category Title="Modified Date"
          Description="When the item was last updated"
          Type="Microsoft.Office.Server.Search.WebControls.ManagedPropertyFilterGenerator"
          MetadataThreshold="5"
          NumberOfFiltersToDisplay="6"
          MaxNumberOfFilters="0"
          SortBy="Custom"
          ShowMoreLink="True"
          MappedProperty="Write"
          MoreLinkText="show more"
          LessLinkText="show fewer"
          ShowCounts="Count">
  <CustomFilters MappingType="RangeMapping"
                 DataType="Date"
                 ValueReference="Relative"
                 ShowAllInMore="False">
    <CustomFilter CustomValue="Last week">
      <OriginalValue>-7..</OriginalValue>
    </CustomFilter>
    <CustomFilter CustomValue="Last month">
      <OriginalValue>-31..</OriginalValue>
    </CustomFilter>
    <CustomFilter CustomValue="Last year">
      <OriginalValue>-365..</OriginalValue>
    </CustomFilter>
    <CustomFilter CustomValue="Older than a year">
      <OriginalValue>..-365</OriginalValue>
    </CustomFilter>
  </CustomFilters>
</Category>

Listing 5: Example of a date refinement category.

Customizing the look & feel


The refinement panel is used by default on the search results page, giving you the
opportunity to refine the results and get closer to the content you’re looking for.
Another way of using the refinement panel is without the search results web part.
A SharePoint portal with a large number of documents could have the refinement
panel on the homepage. This way the refinement panel gives users some insight
into the amount and diversity of the documents.


Figure 4: Refinement panel with customized look & feel.


This example can be realized by adding a hidden search results web part on the
homepage with a fixed query like IsDocument:1. This fixed/static query retrieves
all documents in the SharePoint portal. The customized refinement panel on the
homepage shows which metadata values are used and how many documents
match these values. To make this possible, the look and feel of the refinement panel
has been customized. Customizing the refinement panel is done in the same way as
customizing the search results web part: by changing some XSLT code.
The XSLT consists of a number of xsl:template sections. The most important ones
are the FilterCategory, Filter and FilterLink templates. The first loops through all
refinement categories and calls the Filter template for each value within the category.
The Filter template determines if the current value is the selected item, one of the
other values or the implied item. The implied item is the “Any DocumentType”
option that comes with each category to remove a selected refinement. For all three
selection options, the FilterLink template is called with a different value for the CSS
class parameter. The FilterLink template actually shows the refinement values.
The flexibility of the XSLT code gives you the ability to brand the refinement panel in
any way you want.

Conclusion
The new refinement panel web parts give the SharePoint 2010 search functionality
a big boost. Using a search engine only by querying full-text keywords isn’t enough
anymore. Applying metadata to information – tagging content on add/upload by the
author, or afterwards by users in the form of social tagging – makes it possible to look
at information from different angles, and it is essential for web parts like the
refinement panels to enrich the user experience of your SharePoint 2010 portal.

Links
http://www.cmswire.com/cms/document-management/sharepoint-2010-using-taxonomy-metadata-to-improve-search-discovery-007425.php
http://msdn.microsoft.com/en-us/library/ff625183(v=office.14).aspx


The new Enterprise Content Management features in SharePoint 2010
by Roel Hans Bethlehem, Tycho Bizot and Dolf den Ouden
This article will talk about the new SharePoint 2010 Enterprise Content Management (ECM)
features. It will go into the possibilities and limitations we found while working with these features.
The project which we (the authors) did for a global financial services firm covers scanning
physical mail, dealing with electronic mail, provisioning site collections for each client handled
by the firm, and organizing content automatically.

The following ECM features are a huge improvement on the 2007 version:

- Content Type Publishing Hub
- Managed Metadata
- Content Organizer
- In Place Records Management
- Document IDs

However, we did run into some limitations while trying to preserve metadata when
documents were moved, while using outgoing email, and while trying to do bulk
updates of metadata from the datasheet view.
In this article we will focus on how to set up the Content Type Publishing Hub and
the Managed Metadata Service Application, as well as how to overcome some of
the limitations by bending the SharePoint import/export framework and using the
dialog framework. We assume that the reader has some experience running the
SharePoint 2010 Management Shell (PowerShell) and is also familiar with the
SharePoint Object Model and C#.

Setting up the Managed Metadata Service Application


To take advantage of the new ECM features of SharePoint 2010, you will need to
provision the Managed Metadata Service Application. The Managed Metadata
Service Application is important in the following cases:

- You want to use a Content Type Publishing Hub to centrally manage all of your content types and publish them to subscribed web applications
- You want to offer taxonomy lists as a new type of site column called the Managed Metadata column

After the SharePoint 2010 installation, you can automatically provision the service
applications via the wizard. In this article, we will show you how to provision your
own managed metadata service using PowerShell.
We chose PowerShell as our weapon of choice for administering SharePoint. In order
to be successful, make sure that you run the SharePoint 2010 Management Shell as an
administrator, that you have farm rights and that you have SPShellAdmin rights
(otherwise you cannot execute PowerShell commands against SharePoint 2010). The
cmdlet Get-SPShellAdmin will show the privileged accounts.
Make sure that you have created a managed metadata service application and that the
managed metadata service instance is started (you can find this under Services on
Server in the Central Administration web site). You can check whether the service
is started by using the SharePoint PowerShell cmdlet Get-SPServiceInstance. If the
status is Disabled, then you will need to start the managed metadata service using the
cmdlet Start-SPServiceInstance -Identity [id of the service instance as shown below].
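Put together, finding and starting the instance could look like the following sketch; the TypeName filter matches the display name of the Managed Metadata Web Service:

```powershell
# Find the Managed Metadata Web Service instance and start it if necessary
$instance = Get-SPServiceInstance |
    Where-Object { $_.TypeName -like "*Managed Metadata*" }
if ($instance.Status -ne "Online") {
    Start-SPServiceInstance -Identity $instance.Id
}
```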


Figure 1: List of available service application instances.


Since we want to be able to reuse the script, we will create an XML file containing
all of the settings. Then we will create both the service application and the service
application connection using the settings file to provide the necessary settings:

<SharePointInstaller>
  <Services>
    <MetaDataService Name="DMS Metadata Service"
                     DatabaseServer="mydbserver"
                     DatabaseName="SharePoint_Metadata"
                     FailoverDatabaseServer=""
                     HubUri="http://foo.com/sites/contenttypehub">
      <ApplicationPool Name="SharePoint Hosted Services"
                       Account="CORP\SP_WorkerProcess" />
      <Proxy Name="DMS Metadata Service Proxy" Partitioned="false">
        <ProxyGroup Name="Default" />
      </Proxy>
    </MetaDataService>
  </Services>
</SharePointInstaller>

Listing 1: The settings file that is used to declare the settings.

function Start-MetaDataService([string]$settingsFile) {
    try {
        # Initialize settings
        [xml]$config = Get-Content $settingsFile
        $svcConfig = $config.SharePointInstaller.Services.MetaDataService

        # Managed account
        $ManagedAccount = Get-SPManagedAccount | Select-Object -First 1
        if ($ManagedAccount -eq $null)
        { throw "No Managed Accounts" }

        $farm = Get-SPFarm
        $buildversion = $farm.BuildVersion

        # Application pool
        $ApplicationPool = Get-SPServiceApplicationPool `
            $svcConfig.ApplicationPool.Name -ea SilentlyContinue
        if ($ApplicationPool -eq $null)
        {
            $ApplicationPool = New-SPServiceApplicationPool `
                $svcConfig.ApplicationPool.Name -Account $ManagedAccount
            if (-not $?)
            { throw "Failed to create an application pool" }
        }

        # Create a Taxonomy Service Application
        # This assumes we already have a managed metadata service...
        if ((Get-SPServiceApplication |
            ? { $_.DisplayName -eq $svcConfig.Name }) -eq $null)
        {
            Write-Host "Adding service $($svcConfig.Name)"
            New-SPMetadataServiceApplication -Name $svcConfig.Name `
                -ApplicationPool $svcConfig.ApplicationPool.Name `
                -SyndicationErrorReportEnabled `
                -ErrorAction SilentlyContinue `
                -ErrorVariable errApp

            if ($errApp) { throw $errApp }
        }

        # Create the proxy
        Write-Host "Creating proxy for the metadata service."
        New-SPMetadataServiceApplicationProxy -Name $svcConfig.Proxy.Name `
            -ServiceApplication $svcConfig.Name `
            -DefaultProxyGroup `
            -ErrorAction SilentlyContinue `
            -ErrorVariable errProxy

        if ($errProxy) { throw $errProxy }
    }
    catch { Write-Output $_ }
}
Start-MetaDataService "settings.xml"

Listing 2: PowerShell to provision the Managed Metadata Service Application.


Running the script will provision the managed metadata service application with the
specified database and application pool settings. We can now create the Content Type
Publishing Hub, publish content types within the hub and offer managed metadata
to subscribed web applications.

Figure 2: The managed metadata service application with two term sets.


Figure 3: The properties of the managed metadata service application show the url of
the Content Type Publishing Hub.

Figure 4: The properties of the managed metadata service proxy show the settings for
the connection to the Content Type Publishing Hub.


Creating the Content Type Publishing Hub and publishing the content types
To create the Content Type Publishing Hub, create a new site collection. We will be
using http://foo.com/sites/contenttypehub in this example. To make this site collection
a Content Type Publishing Hub, activate the following site collection feature:

Enable-SPFeature -Identity 9A447926-5937-44cb-857A-D3829301C73B -Url http://foo.com/sites/contenttypehub

Once you have activated the feature, you can create the content types that you want
to publish in the Content Type Publishing Hub. Just create the content types as you
would normally do, either through the UI or by using a SharePoint feature that defines
the content type, and activate that feature on the site collection. You will also need
to activate the document ID feature in all site collections that are subscribed to the
hub if content types in the hub use document IDs.
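Activating the document ID feature on every subscribed site collection can be scripted as well. A sketch, assuming that "docid" is the internal name of the Document ID Service site collection feature:

```powershell
# Activate the Document ID feature on all site collections in the farm
Get-SPSite -Limit All | ForEach-Object {
    Enable-SPFeature -Identity "docid" -Url $_.Url -ErrorAction SilentlyContinue
}
```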
In the example below, we publish the content types in the Content Type Publishing
Hub. We assume that all published content types are part of the content type group
called DMS Group.
When finished creating the content types, they need to be published so the subscribed
web application can consume the content types from the hub. Using the following
PowerShell script we can publish all content types in the Content Type Publishing
Hub site collection that are part of the DMS Group. First of all we need to extend the
settings.xml to provide some more settings:

[as child of SharePointInstaller in the settings file]

<ServiceSettings>
  <ContentHub Url="http://foo.com/sites/contenttypehub"
              ContentTypeGroup="DMS Group"></ContentHub>
</ServiceSettings>
Listing 3: The settings file is extended to support the Content Type Publishing Hub settings.
Note that the new Microsoft.SharePoint.Taxonomy assembly is loaded in order to
publish the content types.

... by the way ...


Corrupt or missing web parts
Every administrator will recognize this problem: closed web parts on a page. By default,
users are able to hide or remove a web part, which creates a personalized view of the
page.
Web parts that are corrupt, or that were not converted during a migration, can also
easily be deleted using this trick.
Append ?contents=1 to your URL, for example http://portal/default.aspx?contents=1.
Now you can remove or reset the web parts on the page.

Tip from Jan Ligtenberg, trainer/ consultant APS IT-diensten,


j.ligtenberg@apsitprojecten.nl
Tel. 06 10634297


function Subscribe-ContentTypes([string]$settingsFile)
{
    [xml]$settings = Get-Content $settingsFile
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint.Administration")
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint.Taxonomy")

    # Make sure that the content type hub uri is set
    $metaDataApp = Get-SPMetadataServiceApplication `
        -Identity $settings.SharePointInstaller.Services.MetaDataService.Name
    Set-SPMetadataServiceApplication -Identity $metaDataApp `
        -HubUri $settings.SharePointInstaller.ServiceSettings.ContentHub.Url

    $Thesite = New-Object Microsoft.SharePoint.SPSite([string]$settings.SharePointInstaller.ServiceSettings.ContentHub.Url)
    $Publisher = New-Object Microsoft.SharePoint.Taxonomy.ContentTypeSync.ContentTypePublisher($Thesite)
    $types = $Thesite.RootWeb.ContentTypes
    $types | ForEach-Object {
        if ($_.Group -eq [string]$settings.SharePointInstaller.ServiceSettings.ContentHub.ContentTypeGroup) {
            Write-Host "Name: $($_.Name) Group: $($_.Group)"
            if ($Publisher.IsPublished($_) -eq $true) {
                Write-Host "Already Published: $($Publisher.IsPublished($_))"
            }
            else {
                $Publisher.Publish($_)
                Write-Host "Published $($_.Name)"
            }
        }
    }
}

Listing 4: PowerShell script to publish the content types.


Run the script to publish the content types in the Content Type Publishing Hub.
Once the content type subscriber timer job has run, the web application and its site
collections will be synced with the Content Type Publishing Hub. Since the standard
setting will synchronize them only once every 59 minutes, you may want to run the
timer job(s) manually.

Figure 5: Content types in the Content Type Publishing Hub.


Figure 6: Site collection subscribed to the Content Type Publishing Hub.


Note that you can check the content type publishing error log if your content types
are not synchronized correctly.

Creating managed metadata term sets


Creating the term sets using the GUI is a rather labor intensive job. Luckily, we can
also import a comma-separated file to create a complete term set directly. The csv
file has a special format describing the hierarchy and availability of terms. There are
community tools to help us convert a csv file to the format that SharePoint can
handle:
http://www.wictorwilen.se/Post/Create-SharePoint-2010-Managed-Metadata-with-
Excel-2010.aspx
In this example we will be downloading a csv file containing the taxonomy of the
Netherlands from province level (provincie), to the councils within the provinces
(gemeente) and the towns and villages (woonplaats) that comprise these councils.
http://6pp.kvdb.net/exports/cities.csv

Woonplaats,Gemeente,Provincie,Latitude,Longitude
's-Graveland,Wijdemeren,Noord-Holland,,
's-Gravendeel,Binnenmaas,Zuid-Holland,,
's-Gravenmoer,Dongen,Noord-Brabant,51.66666670,4.80000000
's-Gravenpolder,Borsele,Zeeland,51.45000000,3.90000000

Listing 5: The original csv file.


After downloading the csv file, reading the data in Excel, culling the latitude and
longitude columns and creating the term set using the macros provided by Wictor
Wilen, we commit it to the Managed Metadata Service using PowerShell.

"Term Set Name","Term Set Description","LCID","Available for Tagging","Term Description","Level 1 Term","Level 2 Term","Level 3 Term","Level 4 Term","Level 5 Term","Level 6 Term","Level 7 Term"
"Plaatsen","Plaatsen in Nederland",,"False",,,,,,,,
,,,"True",,"Noord-Holland","Wijdemeren","'s-Graveland",,,,
,,,"True",,"Zuid-Holland","Binnenmaas","'s-Gravendeel",,,,
,,,"True",,"Noord-Brabant","Dongen","'s-Gravenmoer",,,,
,,,"True",,"Zeeland","Borsele","'s-Gravenpolder",,,,

Listing 6: The resulting csv after using the macro, ready for import into the Managed
Metadata Service Application.
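The transformation that Wictor Wilen's macro performs can also be sketched outside Excel. Below is a minimal Python illustration (an assumption of ours, not part of the original toolchain) that drops the coordinate columns and emits the 12-column hierarchy rows of the import format:

```python
import csv
import io

def build_termset_rows(source_csv, termset_name, description):
    """Convert 'Woonplaats,Gemeente,Provincie,...' rows into the
    7-level term set import format used by the Managed Metadata Service."""
    header = ["Term Set Name", "Term Set Description", "LCID",
              "Available for Tagging", "Term Description"] + \
             ["Level %d Term" % i for i in range(1, 8)]
    # first data row describes the term set itself
    rows = [header, [termset_name, description, "", "False", ""] + [""] * 7]
    reader = csv.DictReader(io.StringIO(source_csv))
    for rec in reader:
        # hierarchy: province -> council -> town; lat/long are discarded
        path = [rec["Provincie"], rec["Gemeente"], rec["Woonplaats"]]
        rows.append(["", "", "", "True", ""] + path + [""] * 4)
    return rows

source = """Woonplaats,Gemeente,Provincie,Latitude,Longitude
's-Graveland,Wijdemeren,Noord-Holland,,
's-Gravendeel,Binnenmaas,Zuid-Holland,,
"""
rows = build_termset_rows(source, "Plaatsen", "Plaatsen in Nederland")
```

Writing these rows back out with csv.writer (quoting all fields) yields the shape shown in Listing 6.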


Extend the settings file as follows and also make sure that the path to the csv file is
set correctly in the settings file.

<Metadata>
  <TermStore Name="DMS Metadata Service Proxy">
    <Group Name="Netherlands">
      <TermSet Name="Dutch places" File="C:\temp\cities.csv" />
    </Group>
  </TermStore>
</Metadata>

Listing 7: Extending the settings file to declare settings for the taxonomy.
Now you can use the following PowerShell function to create the term set in the
managed metadata service application:

function Create-Taxonomy([string]$settingsFile)
{
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint.Taxonomy")
    [xml]$config = Get-Content $settingsFile
    $Thesite = New-Object Microsoft.SharePoint.SPSite($config.SharePointInstaller.ServiceSettings.CentralAdministration.Url)
    Write-Host "Site url: $($Thesite.Url)"
    $session = New-Object Microsoft.SharePoint.Taxonomy.TaxonomySession($Thesite, $true)

    # foreach loop to read all term stores
    $session.TermStores | ForEach-Object {
        Write-Host "Termstore: $($_.Name)"
        if ($_.Name -eq $config.SharePointInstaller.ServiceSettings.Metadata.TermStore.Name)
        {
            $termstore = $_
        }
    }
    Write-Host "Selected termstore: $($termstore.Name)"
    $im = $termstore.GetImportManager()

    # if no matching group is found, create the group
    $config.SharePointInstaller.ServiceSettings.Metadata.TermStore.Group | ForEach-Object {
        $group = $null
        $termset = $null
        $groupName = $_.Name
        $termsetName = $_.TermSet.Name
        $termstore.Groups | ForEach-Object {
            Write-Host "Group: $($_.Name)"
            if ($_.Name -eq $groupName)
            {
                $group = $_
            }
        }
        if ($group -eq $null) {
            Write-Host "Creating group $groupName"
            $group = $termstore.CreateGroup($groupName)
        }
        #Write-Host "Selected group: $($group.Name)"

        $group.TermSets | ForEach-Object {
            Write-Host "Termset: $($_.Name)"
            if ($_.Name -eq $termsetName) {
                $termset = $_
            }
        }
        #Write-Host "Selected termset: $($termset.Name)"

        if ($termset -eq $null) {
            # read the csv file into a text reader
            $importFile = New-Object System.IO.StreamReader($_.TermSet.File)
            $importErrorMessage = ""
            $allTermsAdded = $false
            $termset = $im.ImportTermSet($group, $importFile,
                [ref] $allTermsAdded, [ref] $importErrorMessage)
        }
    }

    Write-Host "Committing changes to the taxonomy service application."
    $termstore.CommitAll()
}

Listing 8: Creating the term set using PowerShell.


After running the script the term set is imported and the term store now contains
a term set called “Plaatsen” (translation: “places”) which can be used in a managed
metadata site column. Figure 7 shows the Managed Metadata Service Application
after the transformed csv file is imported.

Figure 7: The metadata service showing the “plaatsen” taxonomy.


You can now use the “Plaatsen” term set in site columns in all sites that are
connected to the Managed Metadata Service Application. Just add a site column of
the managed metadata type and select the “Plaatsen” term set. Select the node from
where you want the managed metadata column to start offering terms. In this case
we select the node “Plaatsen”.
After adding the column to the library, you can select the town you are looking for
by browsing through the hierarchy in the term set or by typing in part of the term.

Figure 8: Selecting a term in a managed metadata column.


Figure 9: Type ahead functionality in a managed metadata column.

Moving documents from document library to document library
For our project, we needed the functionality to move documents and their associated
metadata, which had to meet the following requirements:
- Keep all versions
- Keep context information like modification date
- Allow movement between different site collections and web applications
- Allow movement between different document libraries

The first thing we did was check whether the out of the box functionality could help
us achieve our goals. We looked at the following options:
- SPFile.MoveTo
- Send Document to Repository (SharePoint Designer workflow activity)
- Export and import

SPFile.MoveTo: this method only allows us to move files between libraries within
the same site collection. Another minor detail is that moving a document counts as a
modification, so it changes the Modified timestamp.
Send Document to Repository: this only allows us to send files to a records center or
to a site collection that has the Content Organizer feature enabled. Unfortunately, in
our case the documents ended up in the content organizer’s drop off folder instead
of in the intended directory. This approach also modifies the document’s timestamp.
Export and import: this allows us to keep our versions and doesn’t modify the
timestamp, but it only allows us to move files to the same library within another site
collection.
After finding out that none of the out of the box features were suitable, we decided
to modify the export and import process. Before starting the export process it can
be configured using the SPExportSettings. One of the properties of the SPExportSet-
tings object is the FileCompression property. Setting the FileCompression property
to false will export your item and its setting files into separate files. Two of the files
created during the export are RootObjectMap.xml and Manifest.xml. We did a lot of
investigating and testing in order to figure out how we could edit the files so that
the item would be imported into our destination library.


We created an extension method for the SPListItem class that will first call our export
method using the predefined export settings.

private static string Export(string siteURL)
{
    string exportLocation = string.Empty;
    try
    {
        SPExportSettings exportSettings = new SPExportSettings();
        exportSettings.AutoGenerateDataFileName = true;
        exportSettings.ExportMethod = SPExportMethodType.ExportAll;
        exportSettings.SiteUrl = siteURL;
        exportSettings.IncludeSecurity = SPIncludeSecurity.All;
        exportSettings.IncludeVersions = SPIncludeVersions.All;
        exportSettings.FileCompression = false;

        SPExportObject exportObject = new SPExportObject();
        exportObject.IncludeDescendants = SPIncludeDescendants.None;
        exportObject.Type = SPDeploymentObjectType.ListItem;
        exportObject.Id = m_listItem.UniqueId;
        exportSettings.ExportObjects.Add(exportObject);

        using (SPExport export = new SPExport(exportSettings))
        {
            export.Run();
        }
        exportLocation = exportSettings.FileLocation +
            "\\" + exportSettings.BaseFileName;
    }
    catch (Exception ex)
    {
        throw new Exception(
            string.Format("Error exporting file from site: {0}", siteURL), ex);
    }
    return exportLocation;
}

Listing 9: Exporting an item.


The export method will return the location to which the files are exported. We can
then use this location to replace the source library url with the destination library url.

private static void ReplaceStringInFile(string filePath,
    string sourceUrl, string destinationUrl,
    System.Text.RegularExpressions.RegexOptions regexOptions)
{
    try
    {
        string content = string.Empty;
        using (StreamReader reader = new StreamReader(filePath))
        {
            content = reader.ReadToEnd();
            content = Regex.Replace(
                content, sourceUrl, destinationUrl, regexOptions);
        }

        using (StreamWriter writer = new StreamWriter(filePath))
        {
            writer.Write(content);
        }
    }
    catch (Exception ex)
    {
        throw new Exception(
            string.Format("Error replacing urls in {0}", filePath), ex);
    }
}

Listing 10: Replacing the source url with the destination url.
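One subtlety in ReplaceStringInFile is that the source URL is handed to Regex.Replace as a pattern, so regex metacharacters in the URL (the dots, for instance) can match more than intended; escaping the literal first is safer. A minimal illustration of the same idea, sketched here in Python rather than C# (the manifest snippet is a made-up example):

```python
import re

def replace_url(content, source_url, destination_url):
    # escape the literal URL so '.' and other metacharacters
    # are not interpreted as regex syntax
    pattern = re.escape(source_url)
    return re.sub(pattern, destination_url, content, flags=re.IGNORECASE)

manifest = '<File Url="http://foo.com/sites/src/Shared Documents/a.docx" />'
fixed = replace_url(manifest,
                    "http://foo.com/sites/src/Shared Documents",
                    "http://foo.com/sites/dest/Archive")
```

In the C# version the same effect can be had with Regex.Escape on the source URL before calling Regex.Replace.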


We only change the library urls within these files. It is not necessary to change the
site collection urls within these files because export and import does this for you.
The import only requires us to set UpdateVersions to SPUpdateVersions.Overwrite.
The reason for using Overwrite is explained below.
The import code is displayed in listing 11.

private static bool Import(string siteURL, string fileToImport)
{
    try
    {
        SPImportSettings importSettings = new SPImportSettings();

        importSettings.BaseFileName = Path.GetFileName(fileToImport);
        importSettings.FileLocation = Path.GetDirectoryName(fileToImport);
        importSettings.SiteUrl = siteURL;
        importSettings.RetainObjectIdentity = false;
        importSettings.IncludeSecurity = SPIncludeSecurity.All;
        importSettings.UpdateVersions = SPUpdateVersions.Overwrite;
        importSettings.UserInfoDateTime =
            SPImportUserInfoDateTimeOption.ImportAll;
        importSettings.FileCompression = false;

        using (SPImport import = new SPImport(importSettings))
        {
            import.Run();
        }
    }
    catch (Exception ex)
    {
        throw new Exception(
            string.Format("Error importing file: {0} to site: {1}",
                fileToImport, siteURL), ex);
    }
    return true;
}

Listing 11: Importing an item into the destination list.


Usually this will work well, however an error might occur if the source item name
is already in use in the destination library. SharePoint export and import does not
allow you to change filenames during the import and export process so we created
a method which checks the destination item name. If the destination item name is
unavailable, we will change the source item name into the available destination item
name before we export the item. We create a placeholder item in the destination
library right away to ensure that the verified unique name will not be taken during
the import process. This is why we set the UpdateVersions setting to
SPUpdateVersions.Overwrite. This enables us to replace the placeholder with the
imported file without leaving a trail in the history of the imported file.
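The name-collision logic described above amounts to probing the destination for a free name. A sketch of that probe in Python (the real implementation checks the SharePoint destination library, of course, and the "name(n).ext" pattern is our choice, not mandated by SharePoint):

```python
import os

def unique_name(name, taken):
    """Return name if it is free in the destination, else the first
    free candidate of the form name(1).ext, name(2).ext, ..."""
    if name not in taken:
        return name
    base, ext = os.path.splitext(name)
    n = 1
    while "%s(%d)%s" % (base, n, ext) in taken:
        n += 1
    return "%s(%d)%s" % (base, n, ext)
```

Once a free name is found, the source item is renamed before export and a placeholder with that name is created immediately in the destination, reserving it until the import overwrites it.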

Using the ribbon and dialog framework to bulk update metadata
The ribbon is one of the new features in SharePoint 2010 and probably the most
noticeable change for the end-users. Combined with the new dialog model, the
ribbon gives you the possibility to implement great new features that improve the
end-users’ productivity and give them the same (or better) user experience as the
out of the box functionality in SharePoint.
One of the things we implemented using the ribbon and dialog model during our
DMS project is enabling users to email selected documents directly from a document
library. The implementation is based on an article called “Customizing the SharePoint
ribbon” published in the SDN SharePoint 2010 eMagazine, written by Marianne van
Wanrooij. The end result is shown in the image below. Our custom button is added
to the ribbon between “E-mail a link” and “Alert Me”. Selecting one or more files
enables our button and clicking it loads the dialog which gives the user the possibility
to choose a recipient and type an accompanying message. When selecting a single
file, the same action can be started via the file’s context menu.


Figure 10: Emailing files from a document library.


As with most customizations we started out by adding a feature to our solution. This
feature deploys our custom actions, the images for our button and a layouts page.
The elements file in listing 12 contains the custom action definition.

<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <CustomAction
    Id="EMailFiles"
    Location="CommandUI.Ribbon"
    Sequence="5"
    Title="Email Files">
    <CommandUIExtension>
      <CommandUIDefinitions>
        <CommandUIDefinition
          Location="Ribbon.Documents.Share.Controls._children">
          <Button Id="Ribbon.Documents.Share.EMailFilesButton"
            Alt="Email Files"
            Sequence="15"
            Command="EmailFiles"
            Image32by32="/_layouts/images/demo/MailFile32.jpg"
            Image16by16="/_layouts/images/demo/MailFile16.jpg"
            LabelText="Email Files"
            TemplateAlias="o1"
          />
        </CommandUIDefinition>
      </CommandUIDefinitions>
      <CommandUIHandlers>
        ...
      </CommandUIHandlers>
    </CommandUIExtension>
  </CustomAction>
</Elements>

Listing 12: Setting up a custom action.


The MSDN page describing the location ids that can be used for custom actions is
incorrect; the easiest way to determine the location id for the desired position is to
take a look at the file 14\Template\Global\XML\CMDUI.XML. This file contains most of
the default custom actions for SharePoint 2010. If you have found the correct location,
make sure to use the same template alias. Depending on the location of your custom
action, one of the image sizes is used. When testing to see if your custom action is
displayed in the correct location, make sure to do plenty of IIS resets and clear your
cache, otherwise you might not see your changes show up.


The Command parameter of the Button element refers to the EmailFiles command
we defined in the CommandUIHandlers section. The JavaScript in this function is
responsible for opening our modal dialog page and it supplies a callback function
that handles the returned SP.UI.DialogResult by displaying it in the notification bar
that was introduced with SharePoint 2010. Enabling/disabling our button depending
on the number of documents the user has selected is also done in this JavaScript. You
could extend this and hide it if outgoing email isn’t configured on the server, like
Microsoft did with the Alert Me button.

<CommandUIHandlers>
  <CommandUIHandler Command="EmailFiles" CommandAction="javascript:
    function emailFilesCallback(dialogResult, returnValue)
    {
      if (dialogResult == SP.UI.DialogResult.OK)
      {
        SP.UI.Notify.addNotification(returnValue);
        SP.UI.ModalDialog.RefreshPage(SP.UI.DialogResult.OK);
      }
      else if (dialogResult == SP.UI.DialogResult.cancel)
      {
        SP.UI.Notify.addNotification(returnValue);
        SP.UI.ModalDialog.RefreshPage(SP.UI.DialogResult.cancel);
      }
    }

    var ctx = SP.ClientContext.get_current();
    var web = ctx.get_web();
    var items = SP.ListOperation.Selection.getSelectedItems(ctx);
    var fileIds = '';
    var k;
    for (k in items)
    {
      fileIds += '|' + items[k].id;
    }
    var options = {
      url: '{SiteUrl}/_layouts/TMF-DMS/EmailFile.aspx?fileIds=' + fileIds +
        '&amp;source=' + SP.ListOperation.Selection.getSelectedList(),
      title: 'Email Files',
      allowMaximize: true,
      showClose: true,
      width: 600,
      height: 500,
      dialogReturnValueCallback: emailFilesCallback };
    SP.UI.ModalDialog.showModalDialog(options);"
    EnabledScript="javascript:function itemsSelected()
    {
      var items = SP.ListOperation.Selection.getSelectedItems();
      var ci = CountDictionary(items);
      return (ci >= 1);
    }
    itemsSelected();" />
</CommandUIHandlers>

Listing 13: The JavaScript handling our custom action.


As you can see in the code in listing 13, we use the JavaScript Client Object Model
to retrieve the current context. The context leads us to the list so we can get the
selected items and build a string containing all item ids. This string, together with the
list id, is added to the querystring we use to call our dialog page. SharePoint replaces
{SiteUrl} at runtime.
The dialog page is a normal layouts page derived from LayoutsPageBase. If a layouts
page is used as a modal dialog, elements using the CSS class “s4-notdlg” are hidden,
giving the user the dialog experience. When loading the page, we retrieve the id of
the list the user performed the action on and split the ListItem ids. We use this
to retrieve the names of the selected items and perform some checks. In our case we
check whether the total size of the selected files is above a threshold specified in the
web.config and whether any folders are selected. If any of these checks gives a positive
result, we warn the user and take appropriate action.
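Because the JavaScript in Listing 13 prefixes every id with '|', the querystring value starts with a separator, and the server-side split must skip the empty leading entry. The parsing step can be sketched as follows (in Python for brevity; the actual page does the equivalent in C#):

```python
def parse_file_ids(query_value):
    # '|12|15|20' -> [12, 15, 20]; the empty segment produced by the
    # leading separator is skipped
    return [int(part) for part in query_value.split("|") if part]
```

With the list id from the querystring and these item ids, the page can look up each selected item in the source library.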


The sender is set to the current user’s email address and we use the SMTP server
specified in the farm’s outbound mail settings. We retrieve these settings via the
OutboundMailServiceInstance property of the current SPWebApplication object.

// Close the dialog
this.Page.Response.Clear();
this.Page.Response.Write(string.Format(CultureInfo.InvariantCulture,
    "<script type=\"text/javascript\">window.frameElement.commonModalDialogClose(1, 'Files Sent!');</script>"));
this.Page.Response.End();

Listing 14: Closing the dialog and returning the result.


If the message was sent or a failure occurred, we inform the user by returning a corres-
ponding dialog result value and an accompanying message. The dialog window is
closed; as the page isn’t refreshed, the user remains on the page he started the
action from. The result of the action is displayed in the notification area.
We added another custom action to our feature which enables the user to email a
single selected file via the context menu. It uses the same layouts page and has only
a small change in the JavaScript to disable the action if the selected item is a folder.
Thanks to our custom actions the user is now able to quickly send the selected files to
clients who do not have access to the SharePoint environment. While enthusiastic
about the increased productivity, the business has decided it might be too easy and
is now contemplating how to keep a grip on the data leaving the company. Options
being considered are auditing, carbon copying supervisors and limiting the receiving
email addresses a user can select.
The idea to select multiple files and perform an action on them is also used in the
Mailroom sites of the Document Management System we created. In the Mailroom
all physical mail sent to the company is scanned and ends up in the Inbox document
library of one of the mailroom sites. In the Mailroom, the Mailroom operator has to
edit the document metadata and select the client that the document belongs to. The
information about clients is stored in a Business Connectivity Services (BCS) list and is
retrieved from a backend system through a web service.

Figure 11: Bulk update client information through a dialog.


Before we implemented a custom action, the mailroom operators needed to edit
the properties of each incoming document to select the client that the document/
mail item belonged to. Using the same principle of sending the current list id and


the selected items to a layouts page, we enable the user to choose a client from
our BCS list and set the metadata for multiple documents in a single action. As you
can probably imagine, this increases productivity in a mailroom where hundreds of
documents pass through the inbox every single day.
Conclusion
In this article we have shown you how to set up the Managed Metadata Service
Application, how to get the Content Type Publishing Hub up and running and how to
create a taxonomy in the Managed Metadata Service Application. We have also shown
you some limitations of SharePoint 2010 and how you can deal with them. We believe
that the ECM features of SharePoint 2010, combined with existing features such as
import/export and the power of the dialog framework, provide great opportunities
in the ECM market for SharePoint developers.

About the authors

Fortifying the Pillars of Governance with SharePoint and DocAve
Christopher Musico is Business Content Editor at AvePoint. He assists in the execution
of the company’s global messaging strategy. Working closely with global organizations
and SharePoint experts, he writes AvePoint’s customer case studies, thought leadership
papers, corporate press releases, and advertisement copy.
Prior to his position at AvePoint, Christopher covered collaboration technologies at a
leading business trade publication, receiving multiple awards for his pieces on cloud
computing and generational trends.
Email: Christopher.Musico@avepoint.com
Why would I need a My Site?
Matthias Fonteyne is a Senior SharePoint Solution Builder at Giraffe IT,
a Dutch consulting firm offering services in HR, Finance, Change and
ICT. Giraffe IT is a business productivity services provider, dedicated to
help clients achieve their business needs through proven technology
and hands-on experience, mainly in SharePoint and Integration
solutions. He specializes in SharePoint Publishing Sites, branding with
SharePoint and SharePoint social media, in both Microsoft Office
SharePoint Server 2007 and Microsoft SharePoint Server 2010.
Email: Matthias.fonteyne@giraffe.nl
Twitter: @mfonteyne
Controlling the SandBox: A real business necessity
Gustavo Velez is a MCSD, MVP SharePoint, Solutions Manager for
Avanade (http://www.avanade.com). In his many years of experience
developing and working with Windows and Office applications,
especially SharePoint, Gustavo has given seminars/training in Share-
Point as well as doing private consultancy work. He is webmaster of
http://www.gavd.net, the only Spanish-language site dedicated to
SharePoint. Gustavo has created online training videos for Microsoft
MSDN about SharePoint 2007 and SharePoint 2010 and is the author
of four books about SharePoint programming.

PowerShell and SharePoint for beginners


Albert-Jan Schot is a SharePoint visionary and spends most of his
days on building things with and for SharePoint. From development
based on SharePoint to training end-users on their SharePoint
environment, SharePoint is the beating heart of his daily routine.
Web: http://www.tamtam.nl
Blog: http://blogs.tamtam.nl/appie
Email: appie@tamtam.nl or appieschot@xs4all.nl
Twitter: @appieschot

FAST Search Server for Power Users


Mark van Dijk currently works as a SharePoint developer and
architect at Macaw in the Netherlands. He is an experienced IT
professional with over ten years’ practical experience and a primary
focus on Microsoft technologies. He’s been working on the Share-
Point platform since 2004 and his current interest is in SharePoint’s
enterprise search technologies and establishing integrations with
Dynamics CRM/xRM.
Email: mark.van.dijk@macaw.nl
Twitter: @MarksPoint

Introducing LINQ to SharePoint


Joe Capka is a SharePoint consultant living in Amsterdam, working
for his own company XComplica. He has been working with Share-
Point from day one of his professional career and has had a love-hate
relationship with it ever since. Most of Joe’s work is on public facing
websites and his daily tasks usually revolve around making SharePoint
look and behave like anything but SharePoint.
Web: http://www.xcomplica.com
Blog: http://jcapka.blogspot.com
Email: jcapka@xcomplica.com
Twitter: @jcapka


Logging and monitoring in SharePoint 2010


Mirjam van Olst is a SharePoint Architect in the Information Worker
Solutions department of Macaw in the Netherlands. In this role she
has been helping companies to implement SharePoint solutions since
2004. Mirjam is a Microsoft Certified Master for SharePoint 2007.
Mirjam is co-organizer of DIWUG and a track owner of the Infor-
mation Worker track of the SDN.
Blog: http://www.sharepointchick.com.

Building a FAQ with new CQWP functionality in SharePoint 2010


Hannah Swain is a SharePoint consultant with a strange fascination
for information management, productivity and whatever else
happens to come her way.
Twitter: @hannahrswain

Using the refinement panel to get more insight into your


search results
Lennard van Leuven is a functional SharePoint specialist at Mavention.
His main focus is to translate the customers’ requirements and wishes
to a realistic and expandable SharePoint design. Lennard has been
involved in different large scale SharePoint implementations.
Web: http://www.mavention.nl
Email: Lennard.van.leuven@mavention.nl
Blog: http://www.mavention.nl/mavention-blog.aspx

The new Enterprise Content Management features in SharePoint 2010


Roel Hans Bethlehem is an experienced SharePoint solutions architect
with ten years of IT experience. He is the team lead for the Microsoft
Rapid Deployment Program for SharePoint 2010 at TMF (a financial
services company). He is a speaker at SDN, DIWUG and Sparked toolkit
sessions and spoke at the Poland .NET Roadshow 2010.
Web: http://www.sparked.nl or http://www.tsunami.nl
Email: Roelhans.bethlehem@sparked.nl or roelhans@tsunami.nl
Twitter: @rbethleh

Tycho Bizot worked for Logica CMG in the first years of his career
and worked on several SharePoint 2007 projects. He joined Sparked
in 2010 and is now working on several SharePoint 2010 and 2007
projects.
Web: http://www.sparked.nl or http://tycho.bizot.nl
Email: tycho.bizot@sparked.nl or Tycho@bizot.nl

Dolf den Ouden has always been passionate about IT. After high
school, he did two Technical Information studies resulting in his
bachelor degree. Dolf started his own company, developing several
websites and web applications. After finishing his degree in 2006
he started working at Logica and specialized in .NET and SharePoint
development. In 2009 he joined Sparked.
Web: http://www.dolfdenouden.nl or http://www.italkcode.net
Email: Dolf.den.ouden@sparked.nl
