
@CODEMagazine

Dynamic Languages, Web API, SSIS 2012

MAY JUN
2012
codemag.com - THE LEADING INDEPENDENT DEVELOPER MAGAZINE - CODE COMPONENT DEVELOPER MAGAZINE

An EPS Company

US $ 5.95 Can $ 8.95

Dynamic Languages

Sponsored by:

TABLE OF CONTENTS

Features

8 The Baker's Dozen Doubleheader: 26 New Features in SQL Server Integration Services 2012 (Part 2 of 2)
Kevin looks at 13 new features in SQL Server Integration Services 2012.
Kevin S. Goff

15 New at CODE Magazine!
Markus discusses Xiine, a Kickstarter project and the CODE Framework.
Markus Egger

16 SharePoint Applied: Visual Studio 11 Beta and SharePoint Development
Sahil explains which new features in Visual Studio 11 Beta he thinks will be most interesting to SharePoint 2010 developers.
Sahil Malik

20 The Danger of Dynamic Languages
Neal weighs in on the debate about functional vs. imperative languages and offers the argument that functional languages, used correctly, offer profound benefits.
Neal Ford

22 Business Web Page Layout Ideas for HTML5 Applications
Paul's article discusses how to use CSS3 to make your pages look better and how to use new HTML5 elements and attributes.
Paul D. Sheriff

28 Dynamic Languages 101
Ted discusses a few dynamic languages you've probably not seen before (Lua, Prolog, Scheme and Clojure) and how to use them from within traditional C#/Visual Basic applications.
Ted Neward

36 An Introduction to ASP.NET Web API
Rick writes about the new ASP.NET Web API, an exciting new addition to the ASP.NET stack that provides a new, well-designed HTTP framework for creating REST and AJAX APIs.
Rick Strahl

48 Grokking the DLR: Why it's Not Just for Dynamic Languages
Kevin reviews why so many developers don't know much about the Dynamic Language Runtime, why many have misconceptions about the DLR, and why developers should consider using the DLR as a communication tool, even if they never intend to use a dynamic programming language in their own application designs.
Kevin Hazzard

58 Building Productive, Powerful, and Reusable WPF (XAML) UIs with the CODE Framework
Markus walks through how using the themes and styles features in the CODE Framework, which is available for free, can make you a more productive developer and can make your applications easier to modify and maintain over their lifecycle.
Markus Egger

Columns

74 Managed Coder: On Abstraction
Ted Neward

Departments

6 Editorial
19 Advertisers Index
73 Code Compilers

Sponsored by:

US subscriptions are US $29.99 for one year. Subscriptions outside the US pay US $44.99. Payments should be made in US dollars drawn on a US bank. American Express, MasterCard, Visa, and Discover credit cards are accepted. The "Bill me" option is available only for US subscriptions. Back issues are available. For subscription information, send e-mail to subscriptions@code-magazine.com or contact customer service at 832-717-4445 ext 10. Subscribe online at codemag.com. CoDe Component Developer Magazine (ISSN # 1547-5166) is published bimonthly by EPS Software Corporation, 6605 Cypresswood Drive, Suite 300, Spring, TX 77379 U.S.A. POSTMASTER: Send address changes to CoDe Component Developer Magazine, 6605 Cypresswood Drive, Suite 300, Spring, TX 77379 U.S.A.


EDITORIAL

The Times they are A-changin'!


Two years ago we dedicated an entire issue of CODE Magazine to the concept of open source software. In my editorial titled "Open Source Software" (http://www.code-magazine.com/Article.aspx?quickid=0906011), I gave an overview of the landscape of Open Source Software (OSS) as it pertained to the .NET developer. I am amazed how much things have changed in just a few short years.

Open source has come to the Microsoft ecosystem in full force. For a company that only ten years ago called open source software "a cancer" (http://en.wikipedia.org/wiki/Steve_Ballmer#Free_and_open_source_software), the sea change is remarkable. Last month was full of news on the Microsoft open source front. We learned that Microsoft is one of the top 20 committers to the Linux kernel (http://www.linuxfoundation.org/news-media/announcements/2012/04/linux-foundation-releases-annual-linux-development-report). Not long after the news of Microsoft's commitment to the Linux kernel came the bombshell: Microsoft released ASP.NET MVC, Web API and the Razor view engine under an open source license. The Apache 2.0 license, to be exact! As Bob Dylan says: "The times they are a-changin'."

But wait, it gets better. Microsoft is also starting a new wholly owned subsidiary called Microsoft Open Technologies, Inc. (http://blogs.technet.com/b/port25/archive/2012/04/12/announcing-one-more-way-microsoft-will-engage-with-the-open-source-and-standards-communities.aspx). This organization will work with standards initiatives and open source projects. The press release is like a who's who of the open source ecosystem: Linux, Hadoop, MongoDB, PhoneGap, etc. The benefit of Microsoft supporting these projects will be felt around the world. I know a lot of my clients will look more favorably on adopting open source software now that Microsoft has demonstrated their commitment.

Why is Microsoft Making these Open Source Moves?

A number of questions arise from these new developments. I will try and answer a few of them here.

Wasn't MVC already OSS?

Yes and no. ASP.NET MVC was released under the MS-PL license. This license gave you access to the code, permission to fork it and create your own derivative, but it was missing a critical characteristic of more permissive licenses: Microsoft didn't allow outsiders to commit patches to the core source code. With Microsoft's announcement in March 2012, this is no longer true. Microsoft now takes submissions to the core source code. As a matter of fact, they already have, and it didn't take long for it to happen.

What does this announcement really mean?

You have access to the code and you can create your own fork of the code and make changes to that. When you make changes you can then submit them to the team for inclusion into the mainline product.

Now that the code is under Apache, what happens next?

I am highly interested to see how this process progresses. I think the development community will collectively turn an important corner when Microsoft adds non-Microsoft employees to the core team. What does this mean? Most OSS projects have core teams that are the final gatekeepers to accepting changes into the mainline source code tree. Currently there are no non-Microsoft employees with this ability. I don't expect this to happen by fiat. Adoption into the core team will happen organically, but I can see a person who frequently contributes patches eventually being added to the team.

How do you participate?

The Microsoft development team that maintains these projects has developed a good set of guidelines for developers to follow when submitting patches. A lot of the ideas for getting started are pretty simple: add a unit test, fix a defect, search for a TODO comment in the source code. Click over to this Web page for guidelines that can help you get started as a contributor: http://aspnetwebstack.codeplex.com/wikipage?title=Contributing&referringTitle=Home.

With Great Power Comes Great Responsibility

I recently had an interesting conversation about feature requests for ASP.NET MVC. This discussion occurred shortly after Microsoft's decision to open source big parts of the ASP.NET stack. The biggest takeaway I had is that software developers and consumers of Microsoft products need to change our world view when it comes to new features. In the case of ASP.NET MVC, Web API and the Razor view engine, the adoption of new features now lies partially in the hands of the community. Want a new template? Just do it! Want a new overload for a function? Just do it! Want to unseal all the classes? Just do it! (Please, someone, do it!) The responsibility for adding features no longer rests solely in the hands of Microsoft. It's up to us as a community to make these happen.

Rod Paddock


ONLINE QUICK ID 1206021

The Baker's Dozen Doubleheader: 26 New Features in SQL Server Integration Services 2012 (Part 2 of 2)

In the first game of this doubleheader (the last issue of CODE Magazine), I covered 13 new database and T-SQL features in SQL Server 2012. Well, it's the second game of the doubleheader, and the nightcap features 13 new features in SQL Server Integration Services 2012. SSIS has always been a good time, and now it's an even better tool with enhancements and improvements over prior versions. Even if you had a love/hate relationship with SSIS before, you'll find that Microsoft paid special attention to SSIS 2012.

The Baker's Dozen Potpourri: Miscellaneous new features in SSIS 2012

SQL Server 2012 Released to Manufacturing!


As of this writing (early March 2012), Microsoft has released SQL Server 2012 to manufacturing, and has set a release date of early April 2012 for general availability. With the new column store index, T-SQL features, SSIS features that I'll talk about in this article, and new Business Intelligence features that I'll talk about in subsequent articles, SQL Server 2012 is an industry game changer!

Starting Lineup for Game 2


Kevin S. Goff
Kgoff@KevinSGoff.NET
Kevin S. Goff has been a SQL Server MVP since 2010, and was a C# MVP from 2005-2009. He is currently a full-time SQL Server/Business Intelligence Practice Manager for SetFocus, LLC, a Microsoft Certified Partner for Learning Solutions.

Normally in Baker's Dozen tradition, I say, "What's on the menu?" This time, I'm saying, "The starting lineup is as follows:"

- New Development Editor: use of SQL Server Data Tools 2010
- New Shared Connection Managers to simplify the connection manager process across packages in a project
- SSIS parameters at the project, package, and task level
- Baker's Dozen Spotlight: a new variable expression task, to eliminate instances where scripts are necessary
- New UNDO/REDO functionality in the data flow editor
- New SSIS Expression Language features
- New native ODBC Data Flow Source and Destination Components
- Greatly improved recovery from data lineage and invalid metadata reference issues
- New Data Taps functionality to programmatically tap into a Data Flow pipeline
- New SSIS tasks to support Change Data Capture
- New deployment features in SSIS 2012
- A new SSIS Server Management Dashboard feature

Tip 1: New SSIS Development Environment using SQL Server Data Tools
Prior to SSIS 2012, SSIS developers used Business Intelligence Development Studio, which was a shell of Visual Studio 2008. Some who used SSIS 2008R2 were (understandably) upset that even the R2 version (released in 2010) still used the VS2008 shell, as opposed to the updated WPF-based Visual Studio 2010 shell. Fortunately, the planets now align: SSIS 2012 uses the WPF-based Visual Studio 2010 shell. The SSIS development editor is much more visually appealing. Although this might not be critical for experienced ETL developers, a better-looking UI will help with the appeal for new ETL developers. Figure 1 and Figure 2 show the control flow and data flow for an SSIS package in the new SSDT environment. (At the end of this article, I'll talk about what this package does.)

Figure 1: The control flow for an SSIS package in the new SSDT environment uses the WPF framework.

Figure 2: The data flow for an SSIS package in the new SSDT environment has the pipeline in blue instead of the old green color.


Tip 2: New Shared Connection Managers

In prior versions of SSIS, connection managers (for OLE DB, FTP, SMTP, flat file, and other connections) were scoped to individual packages, not projects. An SSIS project did not allow shared connections across packages inside the project. This meant that developers had to copy/paste connections and connection expressions from package to package. (There was a workaround: a developer could create a package template with the necessary base connections, and then create new packages from the base package. However, all SSIS does is create a copy from the base package template, with no lineage back to the template, so any changes to the connection manager in the template will not ripple through.) Figure 3 shows the new structure in SSIS 2012, where a developer can create project-level connection managers and then use them throughout the packages in the project. Any changes to the connection managers ripple through to packages that use them.

Figure 3: The new structure in SSIS 2012 lets you create project-level Connection Managers and use them throughout your project.

Tip 3: SSIS Parameters at the Project, Package, and Task Level

A common question in SSIS is how to pass parameters to packages. Prior to SSIS 2012, there was no direct way: you had to pass variables in parent-child configurations. This required some work and was moderately difficult to debug. Microsoft has added full parameter capabilities for SSIS projects and packages. Figure 4 shows an example of the new tab in the SSIS package editor.

Figure 4: The new tab in the SSIS package editor defines package parameters.

Tip 4: Baker's Dozen Spotlight: A New Variable Expression Task

For years, I've used SSIS and taught SSIS. As much as I love the product, there were a few features that I felt strongly should behave differently. For instance, any time I need to programmatically set the value of an SSIS variable (so that I can take advantage of the variable later in the package), I need to drop down to the basement in SSIS and write a short script in either C# or Visual Basic to manipulate the value of the variable (using the weakly-typed Dts.Variables collection). Certainly, developers shouldn't be afraid to write .NET code when the need arises. (I still vaguely recall my past life as a C# developer/MVP.) However, one can argue that mundane tasks such as incrementing/accumulating variables or building variables from dynamic expressions should be an appropriately unceremonious process.

The good news (actually, VERY good news) is that Microsoft recognized that a script task and extra steps were overkill for setting variable expressions, and created a new Variable Expressions task (Figure 5) in the Control Flow. This task allows you to use the SSIS expression language to manipulate SSIS variables at the package level, instead of the script level. While certainly not the largest enhancement in SSIS 2012, I personally find this a very welcome new task.
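For context, here is a minimal sketch of the pre-2012 approach described above: a C# Script Task that increments a package variable. The variable name User::RowCount is an assumption for illustration (it is not from the article), and it would need to be listed in the task's ReadWriteVariables property.

public void Main()
{
    // Dts.Variables is weakly typed, so a cast is needed to read the current value.
    // "User::RowCount" is a hypothetical variable name used only for this sketch.
    int current = (int)Dts.Variables["User::RowCount"].Value;

    // Write the incremented value back to the package variable.
    Dts.Variables["User::RowCount"].Value = current + 1;

    Dts.TaskResult = (int)ScriptResults.Success;
}

With the new Variable Expressions task, the same kind of increment can be written directly in the SSIS expression language, with no script project involved.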

Tip 5: An UNDO Feature (Finally!)


It does seem a bit strange to praise a new feature that you would assume already exists in a product, but until SQL Server 2012, SSIS never had an undo/redo feature in the development environment. Unless you're a perfect developer and a perfect typist (I'm certainly not, as my co-workers will attest), an undo feature is critical. It has existed in the development environment in SSRS for years and now exists in SSIS.

Figure 5: The new Variable Expressions task maintains SSIS variables without the need for an SSIS script task.

Tip 6: New SSIS Expression Language Features

The SSIS Expression Language (which is a cross between Visual Basic and C# syntax) has some new functions in SSIS 2012, as shown in Figures 6 and 7:

- REPLACENULL, as opposed to the prior method of using an ISNULL function with the ? and : immediate operators
- LEFT, for retrieving the N leftmost characters at the beginning of an expression
- TOKEN, to parse a string and return a specific token, and TOKENCOUNT, to parse a string and return the number of tokens

(A few sample expressions appear below.)

Figure 6: You can handle NULL values easily with the new REPLACENULL function.

Figure 7: The new SSIS functions identify a specific token or return the number of tokens.


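A few illustrative expressions of the kind you might place in the new task or in a derived column. The column names MiddleName and ProductCode are made up for the example, not taken from the article:

REPLACENULL(MiddleName, "")        replaces a NULL MiddleName with an empty string
LEFT(ProductCode, 3)               returns the three leftmost characters of ProductCode
TOKEN("2012-05-28", "-", 1)        returns "2012"
TOKENCOUNT("2012-05-28", "-")      returns 3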

Tip 7: Support for ODBC Data Sources and Data Flow Destinations

Those who have tried to integrate SSIS with ODBC connections will be happy to learn that SSIS 2012 contains new native ODBC data flow components. This is related to Microsoft's announcement that it will drop support for OLE DB after SQL Server 2012 in favor of ODBC. SSIS 2012 allows you to create an ODBC connection manager, specifying either a user or system DSN, or a custom ODBC connection string.
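For example, a DSN-less connection string of roughly this shape could be supplied to the new ODBC connection manager. The server and database names here are placeholders, not values from the article:

Driver={SQL Server Native Client 11.0};Server=(local);Database=AdventureWorks2008R2;Trusted_Connection=yes;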

Tip 8: Greatly Improved Recovery from Data Lineage and Invalid Metadata Reference Issues

Imagine that you give a young child a red lollipop and then a minute later you take away the lollipop. As the parent of a toddler, I know that even if you give the child a new red lollipop, the child will throw a fit. Data flow components prior to SSIS 2012 behaved similarly.

For instance, suppose you have an OLE DB destination that expects ten columns from the data flow pipeline from the previous component. Now suppose that you remove one of the columns from the prior component (perhaps one of the ten columns isn't used any longer). The OLE DB destination prior to SSIS 2012 complained and generated an error because of the invalid reference. Alternatively, suppose the OLE DB destination expected ten columns from a previous data flow component (component A), but now receives the same columns from a different component (component B). The OLE DB destination component prior to SSIS 2012 still complained that the lineage of the columns was from a different parent component.

Correcting these issues always meant a certain amount of surgery on the component in error, to force it to recognize the changes. SSIS always materialized both the pipeline and the parent lineage into each subsequent component in the pipeline. Each component contained specific information about what columns it expected and where they came from, and didn't respond well to changes. This, like other issues in prior versions of SSIS, was something that new developers had trouble grasping, and experienced developers just simply lived with.

The good news is that Microsoft has addressed this problem in SSIS 2012, and corrections to invalid metadata in a component are now much easier. Figures 8 and 9 demonstrate an example of this: a component (a flat file destination) expects a certain number of columns, and then we remove one of the columns from a previous data flow component. The pipeline still generates an error, but we can use a better interface (Figure 9) to resolve any invalid pipeline references.

Tip 9: Data Taps

Choosing a Baker's Dozen Spotlight feature for SSIS 2012 was a tough choice for me, as SSIS 2012 has so many great new features. The runner-up is SSIS Data Taps. Suppose you have an SSIS package that runs overnight, and you want the package to always write out the contents of a particular data flow to a flat file. In prior versions of SSIS, you could do this, but it meant introducing a new flat file destination into the package and possibly reworking the execution steps in the data flow. While possible, this wasn't a great solution. Fortunately, SSIS 2012 provides a more powerful and more elegant way to hook into (or tap) a specific data flow pipeline and write the contents of the pipeline to a specific destination.

Step 1: In the data flow pipeline (Figures 10 and 11), identify the specific identification string for the specific pipeline.

Step 2: Deploy the package to the SSIS server database. (I'll talk about this in the next tip.)

Step 3: In the SSIS server database, create an instance of an execution by using the SSIS system stored procedure called [catalog].[create_execution] (Listing 1). This returns an execution ID instance (basically an integer handle). Then use that instance ID to create a data tap by calling the SSIS system stored procedure [catalog].[add_data_tap] (also in Listing 1).

When the package executes on the server (through scheduling with SQL Server Agent or via any other execution method), the package always writes out a CSV file based on the output file referenced in the data tap!

SQL Server Integration Services 2012


SSIS has always been a very good tool. SSIS 2012 is now an outstanding tool, with an improved UI, more efficient handling of changes and anomalies, and new functions. If your job includes extracting data and loading data somewhere else, you should look at SSIS.

Tip 10: New Tasks for Change Data Capture


SQL Server 2008 introduced Change Data Capture, a feature in the database engine to automate the capture of changes and the logging of inserts/updates/deletes to audit trail history tables. SSIS 2012 introduces new control flow and data flow tasks for managing CDC processes and for reading data from CDC tables.

Figure 8: You can resolve invalid references in the pipeline using the new interface.

Figure 9: This is the new Resolve Invalid Data Flow Pipeline References Editor.


The control flow task is the CDC Control task, which allows an ETL developer to control the lifecycle of CDC processes. There are two CDC data flow components. The first is the CDC source, which allows you to open a CDC change tracking log table and read the rows into the pipeline. The second is the CDC splitter, which separates the rows in a change tracking table into three distinct pipelines: new rows from the change tracking table, updated rows (with the before and after values from the updates), and deleted rows.

Figure 10: To build a Data Flow Tap in SQL Server, you must first determine the identification string of the pipeline.

Figure 11: You can easily find the identification string for the pipeline (for use in a Data Tap).

Listing 1: SQL code in the SSIS service engine for Data taps

USE [SSISDB]

DECLARE @return_value int, @execution_id bigint
DECLARE @data_tap_id bigint

EXEC [catalog].[create_execution]
    @folder_name = N'SSIS2012DemoProjectFolder',
    @project_name = N'SSIS2012DemoProject',
    @package_name = N'ETLMergeExample.dtsx.dtsx',
    @execution_id = @execution_id OUTPUT

EXEC [catalog].[add_data_tap]
    @execution_id = @execution_id,
    @task_package_path = N'\Package\Foreach Loop Container\Data Flow - Process CSV filesProduct',
    @dataflow_path_id_string = N'Paths[Data Conversion.Data Conversion Output]',
    @data_filename = N'OutputDatatap.csv',
    @data_tap_id = @data_tap_id OUTPUT

Tip 11: A New Deployment Feature


Package deployment in prior versions of SSIS, while certainly functional, has always been a bit of pomp and circumstance. Fortunately, SSIS 2012 offers a brand new method of deploying packages to an SSIS Database catalog. First, you need to configure the instance of SQL Server. In Figure 12, you create a new SSIS Database Catalog before deploying SSIS Projects. Next, you need to enable CLR integration as part of creating the SSIS database catalog (Figure 13). Finally, back in SSDT (Visual Studio), you can deploy the project and all of the packages (Figure 14).

Tip 12: New Management Dashboard Feature


Once packages have been deployed to the new SSIS database catalog, you can access the catalog database and any package options (Figure 15), redefine parameters (Figure 16), redefine connection managers (Figure 17), and generate reports on package execution (Figure 18). Additionally, you can read about options to author reports against the SSIS Catalog database: http://blogs.msdn.com/b/mattm/archive/2011/08/01/report-authoring-on-the-ssis-catalog.aspx.

Tip 13: The Baker's Dozen Potpourri: Miscellaneous New Features in SSIS 2012

In addition to all the major features above, SSIS 2012 has plenty of additional features that further bolster the case for SSIS 2012 being a major and important new version. Here are some of the other features new in SSIS 2012:

- In SSIS 2005, there were many areas in the user interface where you had to type out a variable (as opposed to selecting from a list). SSIS 2008 took care of most of those areas, but left a small number unaddressed. SSIS 2012 has finally covered all the areas where a variable needs to be referenced.
- SSIS 2012 makes it easier to create a data viewer (fewer keystrokes).
- You can now populate a data flow row count as fast as a Nolan Ryan fastball. (Feel free to search for Nolan Ryan!)
- Expression results are no longer limited to 4,000 characters.
- SSIS 2012 allows developers to set breakpoints as part of Script component debugging. Additionally, Microsoft upgraded the scripting engine to VSTA 3.0. The SSIS Team Blog talks more about this here: http://blogs.msdn.com/b/mattm/archive/2012/01/13/script-component-debugging-in-ssis-2012.aspx.
- The Merge and Merge Join Transformations now use less memory than before. As a result, developers no longer need to set the MaxBuffersPerInput property (which was necessary to avoid consuming excess memory).
- You can now change the scope of a variable.

Figure 12: You must create a new SSIS Database Catalog before deploying SSIS Projects.

Figure 13: To create the SSIS Database Catalog, you must enable CLR integration and provide a password.


Figure 14: The new SSIS project deploy screen deploys SSIS projects to the SSISDB Catalog.

Figure 15: The SSIS Catalog database lists useful options after package deployment.

Figure 17: The SSIS Catalog database package options are used to redefine connection managers.

Figure 16: The SSIS Catalog database package options help you redefine parameters.

Figure 18: This report shows activity on the package execution.

Listing 2: Script to create staging table

use AdventureWorks2008R2
go
if exists (select * from sys.objects
           where object_id = object_id('dbo.TempStagingCurrencyRates'))
    DROP TABLE [dbo].[TempStagingCurrencyRates]
GO
CREATE TABLE [dbo].[TempStagingCurrencyRates] (
    [CurrencyRateDate] [datetime] NOT NULL,
    [FromCurrencyCode] [nchar](3) NOT NULL,
    [ToCurrencyCode] [nchar](3) NOT NULL,
    [AverageRate] [money] NOT NULL,
    [EndOfDayRate] [money] NOT NULL
)
GO


Listing 3: T-SQL Script to create a MERGE

USE [AdventureWorks2008R2]
GO
CREATE PROCEDURE [dbo].[MergeTempCurrencyRates]
as
begin
    DECLARE @MergeActions TABLE (ActionName varchar(10))

    MERGE Sales.CurrencyRate as T
    using [dbo].[TempStagingCurrencyRates] as S
       on T.CurrencyRateDate = S.CurrencyRateDate
      and T.FromCurrencyCode = S.FromCurrencyCode
      and T.ToCurrencyCode = S.ToCurrencyCode
    when not matched then
        insert (CurrencyRateDate, FromCurrencyCode, ToCurrencyCode, AverageRate, EndOfDayRate)
        Values (S.CurrencyRateDate, S.FromCurrencyCode, S.ToCurrencyCode, S.AverageRate, S.EndOfDayRate)
    when matched and (S.AverageRate <> T.AverageRate OR S.EndOfDayRate <> T.EndOfDayRate) then
        update set T.AverageRate = S.AverageRate,
                   T.EndOfDayRate = S.EndOfDayRate
    OUTPUT $action as ActionName into @MergeActions;

    -- bring back the # of insertions and the # of updates,
    -- so that SSIS can read them into 2 variables
    DECLARE @NumInserts INT, @NumUpdates INT
    SET @NumInserts = (select COUNT(*) FROM @MergeActions WHERE ActionName = 'INSERT')
    SET @NumUpdates = (SELECT COUNT(*) FROM @MergeActions WHERE ActionName = 'UPDATE')
    SELECT @NumInserts as NumInserts, @NumUpdates AS NumUpdates
END
GO


Just like other development paradigms have design patterns, this SSIS package represents a common SSIS design pattern: loading multiple sets of files into a temporary table, then using a T-SQL MERGE statement to insert/update the data. The alternate approach of inserting/updating individual rows to the production table will likely perform worse, compared to a single MERGE statement against a large set of data.

Final Note: Is SSIS as Great as Advertised? Yes!


As an instructor, one of the things I cover with students is the value of SQL Server Integration Services (SSIS). It's always important for people to understand the benefit of the tool, and an example-driven approach can go a long way toward seeing how SSIS can help with many data processing requirements.

Next Time Around in the Baker's Dozen

There are plenty of new features in the Analysis Services portion of SQL Server 2012, and I plan to cover them later in the year. However, the next installment of the Baker's Dozen will cover 13 different tips for optimizing T-SQL queries in SQL Server.

Kevin S. Goff

Article Project File


You can find the entire source code and project file on my website at www.KevinSGoff.net in the download area.

Figures 1 and 2 (along with Listings 2 and 3) show the control flow and data flow for a stripped-down version of a production SSIS package. The package does the following:

- Truncates a (staging) table that the package uses to temporarily hold new incoming data
- Retrieves a variable number of CSV (text) files from an FTP server (in this demo, currency exchange rate data, such as daily exchange rates from US to Mexico, US to Japan, etc.)
- Dynamically loops through the CSV files (where you don't know the names of the files at design time), opens the contents, performs some validations (such as checking that the currency codes are valid), and then inserts the data into the staging table
- If the number of files processed was greater than zero, calls a T-SQL stored procedure that utilizes a MERGE statement. The MERGE statement reads both the staging table and the production exchange rate table, and performs two actions: it inserts any rows that exist in the staging table but not the production table, and updates any rows where the rates have actually changed.


ONLINE QUICK ID 1206031

New at CODE Magazine!


A lot of new things are going on at CODE Magazine, both online and offline, and both directly associated with the magazine as well as efforts even more directly related to your development efforts. You may have already seen some of the things we do with CODE Consulting (www.codemag.com/consulting) and CODE Training (www.codemag.com/training), but today I would like to draw your attention to other things.

Xiine is available on many platforms. Go to Kickstarter and tell us if you want CODE Magazine available in ePub format. The CODE Framework is available on CodePlex.

New Versions of Xiine!


You may already be using our very own Xiine desktop reading software to read CODE Magazine digitally. (If not, get your free copy at www.Xiine.com and get immediate access to all issues of CODE Magazine you ever owned.) What you may not be aware of is that we now also have mobile versions of Xiine! Check out the versions for iOS in the Apple store (both iPhone and iPad are supported) as well as the Android versions (available in the official Android store as well as the Amazon.com Android Marketplace). The Android version also supports Amazon's new Kindle Fire as well as other tablets and Android-powered phones. But wait, there is more! We are currently also working on versions for Windows Phone 7 as well as Windows 8 Metro. Stay tuned and keep an eye out for these as well. All versions of Xiine are, of course, completely free of charge! Find links to all versions (desktop, phone, or otherwise) on www.Xiine.com!

CODE Magazine in ePub Kickstarter


CODE Magazine is available in a number of digital formats including Kindle, Xiine, PDF, HTML, and more. However, one format is currently absent: ePub! With ePub support, people could read CODE Magazine on devices such as the Sony Reader, the Nook, practically any other eBook reader available today (except for Kindle), iPads (with or without the Apple bookstore), desktop computers, and so on. We also assume readers would like it if we made ePub books available completely DRM free, so that is part of our vision. So why is CODE Magazine not available in ePub format today? Simply put, it is a matter of resources and priorities, and an associated evaluation of those. Would people like to read our magazine in this format? If yes, how many readers want it and how badly? Are there other thoughts and ideas readers would like to share with us? We simply do not know! For this reason, we decided to research the matter by creating a Kickstarter campaign (www.kickstarter.com). If you feel it would be a good idea to create an ePub version of CODE Magazine, then visit our Kickstarter campaign and let us know! You can search for CODE Magazine on the Kickstarter website, or visit www.codemag.com/kickstarter for a direct link! If enough people are interested, you may be able to read CODE Magazine in ePub format as soon as the next issue (and we would then also create the ePub format for all back issues as well).

CODE Framework
You may already be aware of this since we have been running a series of articles on this subject, but did you see we have released our very own framework, called CODE Framework, completely free and open source on CodePlex? If you are engaged in business application development, regardless of whether that happens on Windows, the Web, mobile platforms (Windows Phone, iOS, Android), services, or even Windows 8 Metro, you should check out this offering. The framework can be used in full, or you may just pick out some interesting nuggets you may want to use or get inspired by. The license associated with this open source project is particularly unrestrictive and allows you to do just about anything you want free of charge. (Note, however, that this is a supported product and premium support, training and consulting are available for those who desire it, but this is completely optional.) Oh, and make sure you read our series of articles on the subject! For more information, visit www.codemag.com/framework.

What's Next?

A lot of stuff, really. Make sure you join us at our website (www.codemag.com) as well as on our Facebook site (www.facebook.com/CODEMagazine) to stay up to date with new developments, and also to let us know about ideas and feedback you may have!

Markus Egger


ONLINE QUICK ID 1206041

SharePoint Applied: Visual Studio 11 Beta and SharePoint Development


SharePoint 2007, in many ways, was a v1 release. It was the first time .NET was properly applied to SharePoint, and from SharePoint 2007 onwards the product has done very well, partly because of the rich built-in functionality that comes with SharePoint and partly because of its extensible nature and the things developers can do with it.

As we are all painfully aware, the development tools for SharePoint 2007 were somewhere between absent and woefully inadequate. A huge part of that gap was filled by the community with tools such as WSPBuilder. If you have developed for SharePoint 2007, undoubtedly you must know about WSPBuilder (wspbuilder.codeplex.com). Things changed with Visual Studio 2010. With it, Microsoft released a very mature, well-thought-out toolset for developing SharePoint 2010 solutions. For a v1 release, it was surprisingly good. For the last several years of SharePoint 2010-based development, I haven't missed WSPBuilder.

We are at yet another cusp now. Microsoft is releasing new versions of everything: Windows, Office, SharePoint, Visual Studio, Metro apps, etc. Throughout 2012 we are going to experience a paradigm shift. The question this begets is: what is new in Visual Studio 11 Beta (rumored to be called Visual Studio 2012) specifically for SharePoint developers?

Sahil Malik
www.winsmarts.com
Sahil Malik is a Microsoft MVP, INETA speaker, a .NET author, consultant and trainer. Sahil loves interacting with fellow geeks in real time. His talks and trainings are full of humor and practical nuggets. You can find more about his trainings at http://www.winsmarts.com/training.aspx.

Silverlight Web Parts


While we argue about HTML5 vs. Silverlight, one thing we all can agree upon: developing Silverlight Web Parts in Visual Studio 2010 required too many repetitive manual steps. You had to create a SharePoint solution, a separate Silverlight project, and then, with a custom build action, copy the file over to a specific location. Argh! In Visual Studio 2011, these steps are simplified to a single step: Create a SharePoint 2010 Silverlight Web Part. That is basically it! Visual Studio 2011 now includes a project template that simplifies all those tasks.

New Designers: Lists, Content Types and Site Columns


Admit it, you've done it! You've enrolled in the SharePoint university of reverse engineering (http://blah.winsmarts.com/2008-2-Dev_Tip__The_SharePoint_University_of_Reverse_Engineering.aspx): first hand-create the content type you want, generate the XML for it, and then include that in your Visual Studio project. We have all done it, directly or indirectly using tools. This is no longer necessary. Visual Studio 11 Beta includes new designers for content types and lists, making it so much easier than before to author these SPIs (SharePoint Items). When you add a content type or a list definition into your Visual Studio project, a list designer or a Content Type Designer (Figure 1) opens up that lets you visually craft the structure of the content type or list definition. The best part: the most difficult part of authoring views for a list definition is now a matter of checking checkboxes for the columns you'd like to see in the view. You can see this in Figure 2. As far as site columns go, there is a new SPI called Site Column. You simply right-click on your Visual Studio project, choose to add a new site column, and then merrily edit the <Field> element in the newly added Elements.xml.

Visual Web Parts Now Support Sandboxed Solutions


We have had two kinds of Web Parts in Visual Studio templates: normal Web Parts (server controls) and visual Web Parts (user controls). The user control-based Web Parts had an artificial limitation. Historically, we have always deployed user controls, or .ascx files, in a directory called _controltemplates. This has been standard and nothing more. The issue, of course, is that _controltemplates resides on the physical disk, which requires a visual Web Part to be a farm solution. This is not necessary. In reality, you can have an .ascx land inside the content database using a module tag, and its code-behind be packaged inside a DLL, much like you'd do with a Web Part. While you could do this in Visual Studio 2010 also, Visual Studio 11 Beta now includes a project template to make it easy.

Publish Directly to Office 365

Developing with Visual Studio is interesting. When you press F5 in Visual Studio, the following actions take place:

- Deactivate Features
- Retract Solution
- Delete Solution
- Add Solution
- Deploy Solution
- Activate Features
- Launch Browser

Optionally, you may also have a resolution of conflicts and other similar commands. This approach works well, and historically we have developed on a machine that had Visual Studio and SharePoint installed locally. This is a necessity for SharePoint development. For farm solutions, this will continue to be the story.

But sandbox solutions are interesting. Sandbox solutions do not need this level of operating system access. They are simply uploaded to a document library called the solutions gallery, and they run directly from there. This is especially interesting for Office 365 developers. Office 365 developers need to work with a local instance of a SharePoint server, a bit similar to an on-premise SharePoint environment. They develop sandbox solutions, and they copy those over to Office 365 when they are done. Understand the steps: they purchase a subscription-based SharePoint installation so they don't have to run SharePoint, only to end up running on-premise SharePoint (albeit in a development environment) to develop for the subscription-based SharePoint. So that's not ideal.

With Visual Studio 11 Beta, you will now be able to deploy sandbox solutions directly from Visual Studio 11 Beta to Office 365, or, for that matter, deploy to any remote server. In order to do so, use the Publish command on the Build menu, select the Publish to SharePoint Site option and provide the remote server's URL, such as https://someremoteserver.sharepoint.microsoftonline.com. To publish a SharePoint solution to a local server, select the Publish to File System option and provide a local system path.

Figure 1: The new Content Type Designer.


FROM THE PRODUCERS OF CODE MAGAZINE

Consulting from the Most Trusted Source in .NET!


Looking for a company who can help you with your .NET projects, provide man-power and know-how, and reduce your overall risk? CODE Consulting is your perfect partner! CODE Consulting is the consulting, development, and custom software arm of CODE Magazine. CODE Consulting has access to the most extensive network of experts through both in-house staff as well as external sources. We do not claim to be an expert in every single technology, but through the vast network of CODE authors, trainers, and consultants, as well as MVP and RD networks, not to mention our extensive community involvement, we always have access to the expert in the industry for just about any topic. CODE Consulting can handle projects of any size, from single-day engagements to multi-year projects involving dozens of team members. We provide consulting and development services for .NET and Visual Studio, Expression Studio, as well as various server products such as Team Foundation Server, SQL Server, SharePoint, and more. We can help you with your Web, Windows, and Mobile applications (Windows Mobile, Windows Phone, Android, iOS, etc.). Our developers are experts in overall application architecture and design as well as the actual implementation, including UI and Interaction Design for Windows, Web, and Mobile UIs as well as modern NUIs. Our experienced staff can help organize and manage diverse teams. At CODE Consulting, we work with technologies and standards such as XAML, HTML (4 and 5), CSS, JavaScript, VB, C#, Silverlight, WPF, WCF, SOA, and much, much more.

See more details at: www.codemag.com/consulting

CONSULTING
An EPS Company


JavaScript Improvements
JavaScript is a strange animal. The biggest challenge with JavaScript is that only the browser truly knows what the full runtime will look like at runtime. It is an incredibly difficult task for an external tool, such as Visual Studio, to fully replace browser-based debugging, and I do not expect Visual Studio 11 Beta to be able to do that. However, with Visual Studio 11 Beta, you can now debug JavaScript in SharePoint projects. Also, IntelliSense is provided when coding JavaScript in SharePoint projects, and URL resolution for JavaScript is enabled for visual Web Parts in sandboxed solutions. This means that you can reference JavaScript files located in SharePoint's content database in your SharePoint projects in Visual Studio. The code is automatically included at build time.

Figure 2: The new List Designer.


Better Organization of SharePoint Templates


Visual Studio 2010 SharePoint project templates were (at least as I saw it):

- Empty SharePoint project
- A bunch of other templates that I almost never used
- And a couple of Import-related templates

After a few years of experience, and most developers agreeing with my assessment of the templates above, Visual Studio 11 Beta has a much cleaner layout of SharePoint project templates. Specifically, there are only five templates now:

- SharePoint 2010 project
- SharePoint 2010 Silverlight Web Part
- SharePoint 2010 Visual Web Part
- Import SharePoint 2010 solution package
- Import SharePoint 2010 workflow


Visual Studio 11 Beta includes new designers for content types and lists making it so much easier than before to author these SPIs (SharePoint Items).

Once you have created a SharePoint 2010 project, you can now add the following item templates:

- Application Page
- BDC model
- Content Type (which now also shows you a content type designer)
- Empty Element
- Event Receiver
- List (which now includes a list designer)
- Module
- Sequential Workflow and State Machine Workflow
- Silverlight Web Part
- Site Column
- Site Definition
- User Control
- Visual Web Part
- Web Part

In addition, Visual Studio 11 Beta now clearly tells you what works in a farm solution and what doesn't.

Improvements for Sandbox Solutions


Writing code for sandbox solutions today in Visual Studio 2010 involves a little bit of guesswork. Even though Visual Studio 2010 will try and help you out as much as it can in preventing you from using APIs not allowed in sandbox solutions, the reality is that if you insist on typing everything, the solution will still build and compile. With Visual Studio 2011, the compiler will show you an error if you try and use farm-only API calls. Also, the IntelliSense is improved so you see only relevant API calls in sandbox solutions.


Profiling

Profiling has long been available to .NET applications. Profiling helps you identify bottlenecks within your application. SharePoint is a complex product, and sometimes it is not very obvious to the programmer why a certain piece of code works faster than another, because the underlying API can be so complicated. With Visual Studio 11 Beta, profiling is now available to SharePoint applications.

This means that, in a SharePoint project, you can start using the Visual Studio Profiling Tools Performance Wizard to create a performance session. You do this by clicking on Launch Performance Wizard on the Analyze menu in Visual Studio 11 Beta. This will pop open a wizard that asks you some basic questions, like the parameters you would like to profile the application on, such as CPU usage. Alternatively, you can create a performance session in a unit test. You can do so by going to the Test Results window, opening the shortcut menu for the unit test and selecting Create Performance Session.

After creating a performance session, you simply use the application, and Visual Studio will then run a profile analysis on your application. This will then create a simple report for you to read, which will include a graph of CPU usage over time, a hierarchical function call stack, process and module view, functions view, etc. This will help you pinpoint any bottlenecks in your application. All this is not new, except that now you can do all this with SharePoint.

Summary
Looks like Microsoft is serious about making SharePoint development easier for all of us. But in fairness, because SharePoint is built on top of .NET, you will always see .NET tooling a step ahead of SharePoint. And that is okay, because a lot of things that happen in .NET sometimes do not gain traction. In the SharePoint world, we get tried and tested practices. Given everything else we can do with SharePoint, and development tools that constantly keep getting better, this continues to get more and more exciting.

What is your favorite Visual Studio 11 Beta SharePoint development feature? Let me know. Until then, happy SharePointing.

Sahil Malik

Advertisers Index
CODE Consulting - www.codemag.com/consulting
CODE Consulting / Mobile Apps - www.codemag.com/mobileapps
CODE Framework - www.codemag.com/framework
CODE Magazine - www.codemag.com/magazine
DevTeach Developers Conference - www.devteach.com
dtSearch - www.dtSearch.com
MadExpo - www.madexpo.us
SharePointTechCon - www.sptechcon.com
State of .NET - www.StateOfDotNet.com
Tech Conferences Inc. - www.devconnections.com
Telerik - www.telerik.com
Tower 48 - www.tower48.com
Xamalot - www.xamalot.com
Xiine - www.xiine.com

Advertising Sales: Tammy Ferguson 832-717-4445 ext 26 tammy@code-magazine.com

This listing is provided as a courtesy to our readers and advertisers. The publisher assumes no responsibility for errors or omissions.


ONLINE QUICK ID 1206051

The Danger of Dynamic Languages


Back in 2005, when Ruby on Rails started appearing on developers' radars, there was an explosion of blogs and articles discussing how dangerous these loosey-goosey languages were, with their hippy dynamic typing. And many predicted dire fates for companies foolish enough to take the plunge. Regular readers are certainly familiar with Ted Neward, who makes technology predictions each year on his blog. Here's what Ted said on January 1, 2006:

"Scripting languages will hit their peak interest period in 2006; Ruby conversions will be at its apogee, and it's likely that somewhere in the latter half of 2006 we'll hear about the first major Ruby project failure, most likely from a large consulting firm that tries to duplicate the success of Ruby's evangelists (Dave Thomas, David Geary, and the other Rubyists I know of from the NFJS tour) by throwing Ruby at a project without really understanding it. In other words, same story, different technology, same result. By 2007 the Ruby backlash will have begun."

In Ted's defense, he's sometimes correct in his predictions. In his defense, there were people who misunderstood how to apply the technology, and there were some failures, although nowhere nearly as many as pundits predicted.

Current day, dynamic languages thrive. Major websites like GroupOn and LivingSocial use Ruby on Rails to build and maintain industrial-strength sites, and even the website that spawned Rails, Basecamp, is still alive and well. The Rails framework itself was harvested out of the Basecamp Web application, illustrating that there is a Rails site that has survived all the messy parts of enterprise development (scalability, maintenance, user feature requests, etc.) for almost a decade. In fact, the most popular language in the world, JavaScript, has some of the most unforgiving aspects of any type system. I know that its popularity is accidental and not based on technical merit; my point is that even the most hostile of languages can be tamed when used properly.

For many language communities, rigorous testing added the safety net that static typing formerly provided, with, of course, a slew of other engineering benefits. Testing is well established in Ruby on Rails, and mature development teams that use JavaScript also heavily test it. Testing is critical for many popular dynamic languages because of the design of the language. An imperative, object-oriented language is designed to mutate state and move it around. If you think about the design features of object-oriented languages, many of them facilitate visibility and access of shared internal state: encapsulation, scoping, visibility, etc. When you mutate state in any language, testing is one of your best options to make sure that it's happening correctly.

But given the apparent lack of danger of dynamic languages when used properly, perhaps it's time we rethought the language characteristics that are important for the components of our applications. To do that, I need to discuss types.

Types of Types
Computer language types generally exist along two axes, pitting strong versus weak and dynamic versus static, as shown in Figure 1. Static typing indicates that you must specify types for variables and functions beforehand, whereas dynamic typing allows you to defer it. Strongly typed variables know their type, allowing reflection and instance checks, and they retain that knowledge. Weakly typed languages have less sense of what they point to. For example, C is a statically, weakly typed language: variables in C are really just a collection of bits, which can be interpreted in a variety of ways, to the joy and horror (sometimes simultaneously) of C developers everywhere. Java is strongly, statically typed: you must specify variable types, sometimes several times over, when declaring variables. Scala, C# and F# are also strongly, statically typed, but manage with much less verbosity by using type inference. Many times, the language can discern the appropriate type, allowing for less redundancy.

Figure 1: Two axes of computer languages.
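To make the distinction concrete, here is a tiny C# sketch (the values are invented for illustration): var gives you type inference while remaining statically typed, whereas dynamic defers type resolution until run time.

using System;

class TypeDemo
{
    static void Main()
    {
        var count = 5;            // statically typed; the compiler infers int
        // count = "five";        // would not compile: count's type is fixed at compile time

        dynamic anything = 5;     // dynamically typed; resolution happens at run time
        anything = "five";        // allowed: the variable can now hold a string

        Console.WriteLine("{0} / {1}", count, anything);
    }
}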

Neal Ford
nford@thoughtworks.com
Neal Ford is Software Architect and Meme Wrangler at ThoughtWorks, a global IT consultancy with an exclusive focus on end-to-end software development and delivery. He is also the designer and developer of applications, instructional materials, magazine articles, courseware, video/DVD presentations, and author and/or editor of six books spanning a variety of technologies, including the most recent The Productive Programmer. He focuses on designing and building large-scale enterprise applications. He is also an internationally acclaimed speaker, speaking at over 250 developer conferences worldwide, delivering more than 1000 talks. Check out his website at nealford.com.

Many times, the language can discern the appropriate type, allowing for less redundancy.
This diagram is not new; this distinction has existed for a long time. However, a new aspect has entered into the equation: functional programming.

Functional Functionality
Functional programming languages have a different design philosophy than imperative ones. Imperative languages try to make mutating state easier, and have lots of features for that purpose. Functional languages try to minimize mutable state and build more general-purpose machinery. When you find reusable code in an object-oriented system, you harvest it by capturing a class graph. It's no coincidence that every pattern in the Gang of Four book, Design Patterns: Elements of Reusable Object-Oriented Software, features one or more class diagrams. Functional reuse is a bit different. In functional programming languages, language designers have built general algorithmic machinery, based in part on the fascinating mathematics field of category theory, expecting data and customization via code or closure blocks. A common philosophy in the functional programming world, particularly in Lisp communities like Clojure, is to have only a few data structures (lists and maps) with many algorithms (filter, map, reduce, folds, etc.) that operate on them.



Doing so allows the designers to create hyper-efficient operations because they focus on just a few things. Another common philosophy in the functional world is to embrace immutability. When done at a low level, immutable data structures simplify many complex things: threading, serialization, etc. But functional doesn't dictate a typing system, as you can see in Figure 2. With their added reliance, even insistence, on immutability, the key differentiator between languages now isn't dynamic versus static, but imperative versus functional, with interesting implications for the way we build software.
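As a small illustration of the "few data structures, many algorithms" idea, here is the classic filter/map/reduce pipeline expressed with C#'s LINQ operators over a plain sequence of integers (the numbers are arbitrary):

using System;
using System.Linq;

class FilterMapReduce
{
    static void Main()
    {
        // filter (Where), map (Select), and reduce (Sum) composed over one simple sequence
        var sumOfOddSquares = Enumerable.Range(1, 10)
            .Where(n => n % 2 == 1)   // filter: keep the odd numbers
            .Select(n => n * n)       // map: square each one
            .Sum();                   // reduce: fold the results into a single value

        Console.WriteLine(sumOfOddSquares);  // prints 165
    }
}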


I also believe that DSLs will penetrate through all the layers of our systems, all the way to the bottom.

Figure 2: Functional languages do not dictate a typing system.

Polyglot Pyramids
In my blog back in 2006, I accidentally re-popularized the term Polyglot Programming (http://memeagora.blogspot.com/2006/12/polyglot-programming.html) and gave it a new meaning: taking advantage of modern runtimes to create applications that mix and match languages but not platforms. This was based on the realization that the Java and .NET platforms support over 200 languages between them, with the added suspicion that there is no one true language that can solve every problem. With modern managed runtimes, you can freely mix and match languages at the byte code level, utilizing the best one for a particular job.

After I published my article, my colleague Ola Bini published a follow-on paper discussing his Polyglot Pyramid, which suggests the way people might architect applications in the polyglot world, as shown in Figure 3. In Ola's original pyramid, he suggests using more static languages at the bottommost layers, where reliability is the highest priority. Next, he suggests using more dynamic languages for the application layers, utilizing friendlier and simpler syntax for building things like user interfaces. Finally, atop the heap, are Domain Specific Languages, built by developers to succinctly encapsulate important domain knowledge and workflow. Typically, DSLs are implemented in dynamic languages to leverage some of their capabilities in this regard.

This pyramid was a tremendous insight added to my original post, but upon reflection about current events, I've modified it. I now believe that typing is a red herring, distracting from the important characteristic, which is functional versus imperative. My new Polyglot Pyramid appears in Figure 4. I believe that the resiliency we crave comes not from static typing but from embracing functional concepts at the bottom. If all of your core APIs for heavy lifting, like data access, integration, etc., could assume immutability, all that code would be much simpler. Of course, it changes the way we build databases and other infrastructure, but the end result will be guaranteed stability at the core. Atop the functional core, use imperative languages to handle workflow, business rules, user interfaces, and other parts of the system where developer productivity is a priority. As in the original, DSLs sit on top, serving the same purpose. However, I also believe that DSLs will penetrate through all the layers of our systems, all the way to the bottom. This is exemplified by the ease with which you can write DSLs in languages like Scala (functional, statically strongly typed) and Clojure (functional, dynamically strongly typed) to capture important things in concise ways.

This is a huge change, but it has fascinating implications. To see a glimpse of this, check out the architecture of the brand new commercial product Datomic. It's a functional database that keeps a full-fidelity history of every change, allowing you to roll the database back in time to see snapshots of the past. In other words, an update doesn't destroy data; it creates a new version of it. Once you grok the implications of that, you may be answering to your boss about why you are destroying valuable historical trending data every time you update a record in your relational database. One cool Datomic use case: because you always have history, practices like Continuous Delivery, which relies on the ability to roll your database backwards and forwards in time, become trivial. Now, with relational databases, you use tools like Liquibase that have complex scripts to sync schema and data changes (best), you use snapshots to restore to known good restore points (just OK), or you do it manually (the horror!). Using an immutable database, you just move the time pointer backwards. Testing multiple versions of your application becomes trivial because you can directly synchronize schema and code changes. Datomic is built with Clojure, assuming functional constructs at the architectural level, and the top-of-stack implications are amazing.

Figure 3: Ola Bini's Polyglot Pyramid.

Figure 4: My new Polyglot Pyramid.

Summary
Don't believe people who tell you that dynamic languages are dangerous; too much evidence exists to the contrary. Rather, ask what makes them safe, and make sure you apply that to all your development, regardless of language type. Rather than stress about dynamic versus static, the much more interesting discussion now is functional versus imperative, and the implications of this change go deeper than the previous one. In the past, we've been designing imperatively using a variety of different languages. Switching to the functional style is a bigger shift than just learning a new syntax, but the beneficial effects can be profound. Neal Ford




ONLINE QUICK ID 1206061

Business Web Page Layout Ideas for HTML5 Applications


In most business applications, you create a common look and feel, data entry pages, and a method for navigating through the application. As you begin to work with HTML5, you will want to build these same features while taking advantage of what HTML5 offers to make your applications stand out from the crowd. In this article, you will be presented
with several common business Web pages that give you an idea of the power of HTML5 and CSS 3. Creating HTML Web pages and using CSS did not change drastically in HTML5, but there are now more options and some new elements that will make your job as a developer easier. You should note that this article is not going to explain how to put data or code behind the business Web pages you create; that process has not changed for most Web applications. This article covers how to use CSS3 to make your pages look better and how to use new HTML5 elements and attributes.

Often, one of the first things you do when building a new Web application is to create a home page and define how the user will navigate through the application. Figure 1 shows an example of a home page and a navigation system. While this is only a single line of menu items, you could make each of these menus have a drop-down list associated with it using a little jQuery code. One of the things you notice right away about the home page in Figure 1 is the drop shadows around each navigation button. You also notice that each button has a rounded corner. All of the buttons together sit atop a background that also has a drop shadow and a rounded corner. Furthermore, each of these elements also has a gradient color of light gray to gray. Although there were ways to accomplish drop shadows, rounded corners and gradients prior to CSS 3 and HTML5, it was not always easy for developers to create them. You often needed help from a graphic artist to create these effects. But now these graphical elements are a part of CSS 3 and can be created by a developer with a little help from some online tools such as ColorZilla, which I will talk about in the next section.

Listing 1 shows the complete HTML for the navigation page shown in Figure 1. As you can see, the markup is fairly simple. The new items in this HTML are the box-shadow and border-radius rules in the .mainMenu style selector, and the <nav> and <footer> elements in the main body of the HTML. The <nav> element is nothing more than semantic markup used to group links together to compose your main navigation area. Having a separate element allows you to use an element selector in CSS to style the <nav> element. In addition, <nav> allows search engines to determine that this is your main navigation area. The CSS rules box-shadow and border-radius used in the .mainMenu style provide the rounded corners of your main navigation and footer areas. Three versions of box-shadow and border-radius help account for the syntax differences between browsers. You can test these styles with Opera 11.61, Google Chrome 17.0, Safari 5.12, Firefox 9.01, and IE 9, and although the pages in this article may look slightly different on each one, they all work with HTML5 and CSS 3. If a particular browser does not support some specific feature of HTML5 or CSS 3, it simply downgrades to something that is similar in HTML4. You may also provide your own downgrade process by using some JavaScript or a tool such as Modernizr, available at www.modernizr.com.

The other thing you may notice from the navigation page in Figure 1 is that the navigation background and the hyperlinks have a slight gradient color. In other words, they are not just one color of gray; they start with a light gray at the top and gradually become a darker gray at the bottom. To accomplish this, add a class attribute called backColor to the <nav> element. This style class is defined in the style sheet named Styles.css.
Listing 2 shows the complete definition of this backColor style. Don't let the size of this listing scare you! This code was generated from a great website called ColorZilla (http://www.colorzilla.com/). This free online utility generates the correct CSS styles needed to create gradients for multiple browsers. The last new item on the home page is the <footer> element. Again, this is just a new semantic markup element. You can style the <footer> element exactly as you would style the <nav> element. You use the same class attribute, backColor, on this <footer> element. This adds the background color to the footer. Also, in the Styles.css, you will find the footer selector defined as follows:
footer {
    padding: 0.5em 0.5em 0.5em 0.5em;
    margin: 0.5em 0.5em 0.5em 0.5em;
    position: absolute;
    bottom: 0.2em;
    left: 0em;
    width: 95%;
}

Paul D. Sheriff
PSheriff@pdsa.com (714) 734-9792 Paul D. Sheriff is the President of PDSA, Inc. (www.pdsa.com) and a Microsoft Partner in Southern California. Paul acts as the Microsoft Regional Director for Southern California, assisting the local Microsoft offices with several of their events each year and being an evangelist for them. Paul has authored literally hundreds of books, webcasts, videos and articles on .NET, WPF, Silverlight, Windows Phone and SQL Server. Check out Paul's new code generator called Haystack at www.CodeHaystack.com.

Design tools available for HTML5 are proliferating at a rapid rate; this means a developer can make Web applications look better without the assistance of a graphic artist.

Figure 1: A navigation system in HTML5 can be surrounded with <nav> tags.




Listing 1: The HTML for the default page of your web application
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8" />
  <title>Business UI Samples</title>
  <link rel="stylesheet" type="text/css" href="Styles/Styles.css" />
  <style type="text/css">
    .mainMenu {
      color: White;
      float: none;
      text-decoration: none;
      display: inline-block;
      text-align: center;
      height: 0.5em;
      width: 5em;
      margin: 0.5em 0.5em 0.5em 0.5em;
      padding: 0.3em 1em 1.1em 1em;
      border: 0.09em solid black;
      box-shadow: 0.5em 0.5em rgba(0,0,0,0.8);
      -webkit-box-shadow: 0.5em 0.5em rgba(0,0,0,0.8);
      -moz-box-shadow: 0.5em 0.5em rgba(0,0,0,0.8);
      border-radius: 0.5em;
      -webkit-border-radius: 0.5em;
      -moz-border-radius: 0.5em;
    }
    nav {
      text-align: left;
      border-radius: 0.75em;
      -webkit-border-radius: 0.75em;
      -moz-border-radius: 0.75em;
    }
    p {
      margin-left: 1em;
    }
  </style>
</head>
<body>
  <nav class="backColor">
    <a href="Login.htm" class="mainMenu backColor">Login</a>
    <a href="ContactUs.htm" class="mainMenu backColor">Contact</a>
    <a href="Name.htm" class="mainMenu backColor">Name</a>
    <a href="Address.htm" class="mainMenu backColor">Address</a>
    <a href="User.htm" class="mainMenu backColor">User</a>
  </nav>
  <br /> <br /> <br />
  <p> Content goes in here...</p>
  <footer class="backColor"> Samples of Business UI </footer>
</body>
</html>

The rules above are applied to the <footer> element and the backColor class is also applied with the background color. Keeping your background color separate from your other style rules allows you to change the background color in one place without affecting any other style rules. You can also see this type of styling on the <a href> elements used for the main navigation.
<a href="Login.htm" class="mainMenu backColor">Login</a>

The <header> element is used to identify an area of the page that contains descriptive information about this particular Web page. Just like any other normal HTML element, you can then apply a style to the <header>. In the login page, the words at the top, "Please Login to Access this Application," are the header area. The <header> element in this Web page looks like the following.
<header class="backColor"> Please Login to Access this Application </header>

Working with HTML5 Today


If you are using Visual Studio 2010 and you are working with HTML5 and CSS 3, you need to download and install the Web Standards Update for Microsoft Visual Studio 2010 SP1, available at http://bit.ly/lWV98W. This update gives you IntelliSense for HTML5 and CSS 3. Visual Studio 11, which is due out in 2012, will have full IntelliSense support for HTML5 and CSS 3 built in. The CSS Editor in VS 11 has been greatly updated with many new features, such as auto-formatting of rules, hierarchical indentation, commenting and uncommenting that work on the complete CSS selector even if only part of it is selected, and a color picker for any CSS property that needs a color. The HTML5 editor has also been given many new features. You can read about all the new features in ASP.NET 4.5 at http://bit.ly/oOnLtz.

Again, notice the use of the backColor class attribute to apply the background gradient to the header. In the <head> tag of the login page you will find the style shown in Listing 3 for the <header> element to give it the look you see in Figure 2. Another new element in HTML5 is called <figure>. This element is used as a wrapper around any image you display on your page. There is an optional <figcaption> element that can be used to display a caption for your figure. You won't use a <figcaption> on this figure because it isn't necessary

In the class attribute on each of the <a href> elements, you apply two styles. The mainMenu selector controls foreground color, margin, padding, and other rules while the backColor selector applies the background color.

Login Page
Most applications require a user to authenticate by typing in a login ID and a password. The login page, shown in Figure 2, introduces a few more HTML5 elements and attributes. The new elements are <header> and <figure>, and the new attributes are autofocus, required, and placeholder. For these new attributes, your mileage will vary on the different browsers. Opera 11.61 is the only browser that seems to render HTML5 consistently with all of these new attributes. I recommend you download this browser in order to try out the samples in this article.

Figure 2: HTML5 contains new attributes, such as placeholder, to help you tell the user what they should enter in each field.




Listing 2: Gradients are a great way to make your web pages look more natural to users
.backColor {
    /* Old browsers */
    background: rgb(181,189,200);
    /* IE9 SVG, needs conditional override of filter to none */
    background: url(data:image/svg+xml;base64,PD94bWwgdm );
    /* FF3.6+ */
    background: -moz-linear-gradient(top, rgba(181,189,200,1) 0%,
        rgba(130,140,149,1) 36%, rgba(40,52,59,1) 100%);
    /* Chrome,Safari4+ */
    background: -webkit-gradient(linear, left top, left bottom,
        color-stop(0%, rgba(181,189,200,1)),
        color-stop(36%, rgba(130,140,149,1)),
        color-stop(100%, rgba(40,52,59,1)));
    /* Chrome10+,Safari5.1+ */
    background: -webkit-linear-gradient(top, rgba(181,189,200,1) 0%,
        rgba(130,140,149,1) 36%, rgba(40,52,59,1) 100%);
    /* Opera 11.10+ */
    background: -o-linear-gradient(top, rgba(181,189,200,1) 0%,
        rgba(130,140,149,1) 36%, rgba(40,52,59,1) 100%);
    /* IE10+ */
    background: -ms-linear-gradient(top, rgba(181,189,200,1) 0%,
        rgba(130,140,149,1) 36%, rgba(40,52,59,1) 100%);
    /* W3C */
    background: linear-gradient(top, rgba(181,189,200,1) 0%,
        rgba(130,140,149,1) 36%, rgba(40,52,59,1) 100%);
    /* IE6-8 */
    filter: progid:DXImageTransform.Microsoft.gradient(
        startColorstr='#b5bdc8', endColorstr='#28343b', GradientType=0 );
}

Listing 3: Styling the <header> element in the Login page


header {
    float: left;
    font-size: x-large;
    color: White;
    text-align: center;
    vertical-align: middle;
    margin: 1em 0.5em 1em 0.5em;
    padding: 1em 1em 1em 1em;
    border: none;
    box-shadow: 0.5em 0.5em rgba(0,0,0,0.8);
    -webkit-box-shadow: 0.5em 0.5em rgba(0,0,0,0.8);
    -moz-box-shadow: 0.5em 0.5em rgba(0,0,0,0.8);
    -webkit-border-radius: 1em;
    -moz-border-radius: 1em;
    border-radius: 1em;
}

Figure 3: The range input type renders as a slider on some browsers.

happen, which is a great improvement that you can take advantage of for all data-entry pages. Instead, you use the new autofocus attribute on the <input> element. You will also find that there are two other new attributes on the Login ID text box: required and placeholder.
<input type="text" name="txtLogin" class="textInput" autofocus required placeholder="Enter Your Login ID" />

for this particular page. The key image at the top-right of the login page is defined in the HTML as the following.
<figure>
  <img src="Images/KeyComputer.png" width="60" height="60" alt="Login" />
</figure>

In the <head> tag of the login page you will find a style for this figure that will make it look as shown in Figure 2.
figure {
    float: left;
    vertical-align: top;
    text-align: center;
    margin: 2.2em 2em 0em 3em;
}

The required attribute stops a page from posting the data unless something is entered into the Login ID text box. You may receive a pop-up balloon informing you that the particular field is required, depending on the browser you are using to run the page. The placeholder attribute is used to display watermark text within the input control. This text, such as "Enter Your Login ID", appears within the text box until the user moves into the control. Then it disappears. If the user

When you run this login Web page, you'll notice that your cursor is automatically placed on the Login ID text box. There is no JavaScript code required to make this






leaves the text box without filling any text into the box, the placeholder text re-appears.

<input type="range" min="21" max="110" step="1"
       id="age" value="30"
       onchange="ageOutput.value = age.value;" />
<img src="images/Plus.png" class="plusminus"
     onclick="age.value = Number(age.value) + 1;
              ageOutput.value = age.value;" />
<output id="ageOutput" />

Personal Information
The Personal Information Web page shown in Figure 3 contains many of the same elements and attributes as the Login page and the navigation page. However, there are a couple of new HTML5 features used in this page. A <datalist> element is used in combination with the list attribute to create the Salutation drop-down. The new input type, range, creates the slider used for Your Age. Next to the slider is an <output> element used to display the value from the range slider. In order to make this work, you do have to write a little bit of JavaScript. Let's first take a look at the Salutation drop-down. Instead of using <select> and <option> elements, the new <datalist> element can be used. This makes the input look more like the auto-complete lists users are used to on search engines and many other websites. Once a user starts typing into this text box, the list automatically drops down and is filtered by the characters the user types. The user may choose a value from the list, or enter a new value. Below is the HTML5 code needed to create the Salutation element.
<input type="text" name="salutation" class="textInput"
       autofocus list="salutationList" />
<datalist id="salutationList">
  <option value="Dr">Dr</option>
  <option value="Mr">Mr</option>
  <option value="Mrs">Mrs</option>
  <option value="Miss">Miss</option>
</datalist>

In addition to the JavaScript in the two image controls, you might want to write a little JavaScript code when the page loads to pre-populate the <output> element with the value in the range control.
<script type="text/javascript">
  window.addEventListener('load', function () {
    // Get the age output control
    var out = document.getElementById('ageOutput');
    // Get the age control
    var age = document.getElementById('age');
    out.value = age.value;
  }, false);
</script>

The <output> element is another new semantic element that you can style in any manner you see fit. Its purpose is to allow you to place some output data in a specific location on your page without having to use a <p> or <span> tag.

Other Pages
In the sample that you download for this article, you will find three other business Web pages that you might find useful. These pages use the same HTML5 elements, attributes and CSS 3 styles as the other pages that have been discussed in this article.

Notice that the <input> type is text, but adds the list attribute. The list attribute must be the ID of a valid <datalist> element. In the code above, the <datalist> is positioned right under the <input> element, but it can be anywhere on the Web page. Normal <option> elements are used to populate the data list. The HTML5 specification also says that you can attach a data attribute to the input type with a URI pointing to a valid XML file that can be used to fill the list.

The <input type="range"> displays a numeric slider in Opera, Chrome, and Safari. There are new attributes on the range called min, max and step. These attributes control the minimum value allowed, the maximum value allowed and by how much to increment the value property when the user moves the slider. In the Personal Information page, I added two images around the range input: a minus and a plus sign. To these images, I added some JavaScript to the onClick events to decrement and increment the <output> element respectively. Below is the HTML code used to create the slider and the <output> element.
<img src="images/Minus.png" class="plusminus" onclick="age.value = age.value - 1; ageOutput.value = age.value;" />

Contact Us
Having a Contact Us page in your Web application allows a user to give you feedback about the application, report a bug, or ask you for more information about your product. Figure 4 shows an example of a Contact Us page

Figure 4: Having a Contact Us page is a great way to get feedback from a user.




word to the user. When choosing a security question, be sure to make it something personal in nature that only the user would know. Look at the questions in the data list in Figure 6. These questions are things that only that user is likely to know.

Summary
In this article, you learned to use HTML5 and CSS 3 to create a variety of business application Web pages. Using rounded borders and drop shadows makes your pages look more modern. Employing linear gradients in your background colors helps your applications look more natural to new users. Taking advantage of the autofocus, required and placeholder attributes greatly simplifies your Web pages and allows you to get rid of a lot of JavaScript. Of course, all of this assumes that HTML5 can be rendered on all browsers that your users use. Right now, this is just not the case. So, you will still need to use some fallback mechanisms such as Modernizr (www.modernizr.com) to ensure that your HTML5 applications will work with older browsers. Paul D. Sheriff

Figure 5: This US Address page could be used in many Web applications where you must gather information from your users.

Figure 6: A Create User Profile page is needed in a Web application where you have users that sign in.

that uses placeholders, auto focus, drop shadows, linear gradients and many of the other techniques you have previously seen.

US Address Page
If you wish to gather address information from your user for processing an order, an Address page like the one shown in Figure 5 can come in handy. This page was created for addresses in the United States, but I'm sure you can modify it for your locale. Notice that the size of the Save button on this page is larger than the Cancel button. Making your default button larger than the other buttons is a great way to inform your user that this is the button that will be executed when they press the Enter key.

User Profile
When asking a user to fill out his or her profile for your site, it's a good idea to ask for a security question and answer. If the user ever forgets a password, you can prompt for the login ID and the security question selected on the User Profile page shown in Figure 6. When the user supplies the correct answer, you can email the new pass-




ONLINE QUICK ID 1206071

Dynamic Languages 101


Much hoopla has been generated across the community about dynamic languages; much of it is spoken in the same glowing terms normally reserved for unicorns and rainbows. Some of it is deserved, some of it isn't. All of it seems to surround two languages, JavaScript and Ruby, but in fact, several other languages, three of which I'll
present here, offer some distinctly interesting and useful features.

Twenty years ago, I took a contract building a standalone desktop application for a small firm. I was the die-hard, hard-core C++ developer who prided himself on knowing the four different kinds of smart pointers, not to mention all of the Gang-of-Four design patterns and the C++ template syntax. But as I scoped out the project, it became apparent to me that this program didn't cater much to C++. It was a lot of file manipulation, writing a file here, reading a file there, copying a file to this other place and so on. And they wanted a small database to store some configuration, and... The more I looked at it, the more it sounded like this was a problem for (dare I admit it?) Visual Basic. Long-time VB developers know what's coming next. I presented my position to the boss, an IT guy who clearly wanted to make the transition to development, and his response was emphatic. This is an application that the firm is going to depend on, so it has to be written in C++. When I asked him why, he said, point-blank, the way one lectures a small child: Visual Basic code is interpreted, so it's slow. It doesn't have a type system, so code written in it is buggy. And it runs on top of a garbage-collected virtual machine, which means it's bloated. And so on. I often wonder what that guy thinks of the success of the JVM, the CLR, Parrot, and LLVM, not to mention Ruby, Python, JavaScript and Perl. Well, maybe not Perl. Perl just sucks.

some logic within it (to differentiate between users, roles, servers, you name it). In fact, this latter example is hardly new: James Gosling, the inventor of Java, once said, "Every configuration file becomes a scripting language eventually." (I'm looking at you, app.config.) In some cases, the language in question can be compiled directly to bytecode and executed alongside the code we write in C# or VB (or F#, if you were paying attention to Aaron Erickson's article from the Mar/Apr 2012 issue of CODE Magazine, Quick ID: 1203081). This means we can take something of a Marxist approach to languages in a project: from each language, according to its abilities; to each project, according to its needs.

In this article, I want to show you a few dynamic languages you've probably not seen before, and how to use them from within traditional C#/VB applications. I'm not going to suggest that you use all of them at once, nor even that this is a must-do kind of thing. But, hey, if the shoe fits. Note that I'm not going to spend a lot of time talking about IronPython and IronRuby because they've already gotten some good press in the past. It's likely that readers of this magazine have run into them before. Note, also, that I'm not trying to jump into the dynamic vs. static debate and weigh in on the side of dynamic languages as being better than static languages (whatever that means). To be honest, I actually prefer languages that have a strong type system behind them, like F#, but much of that is because I tend to write code behind the scenes, rather than the front-end code which generally lends itself better to the dynamic end of the language spectrum. (I cite as exhibit A the incredible success of Ruby on Rails.) To be perfectly frank, I believe in a polyglot world, one in which we use a variety of languages to solve the world's IT problems. And these are three of the languages I think you should think about using to solve those problems, above and beyond the ones you already know.

Ted Neward
Ted Neward is an Architectural Consultant with Neudesic, LLC. He resides in the Pacific Northwest with his wife, dog, two sons, four cats, and eight laptops. You can reach Ted via Twitter at @tedneward, via email at , via his blog at , or by visiting the Denny's in Redmond at 2AM on most nights.

Rise of the Dynamics


One of the core facets, typically, that makes a language a scripting language (as opposed to a system language like C/C++ or Pascal) is that the scripting language is interpreted rather than compiled. In a world dominated by just-in-time compiled bytecode execution engines like the JVM and the CLR, these definitions get a little fuzzy. Let's not argue that point; it's not incredibly important anyway. What is interesting about dynamic languages is that the presence of that interpreter gives us an interesting ability, one that goes underutilized within most enterprise projects: we can embed the interpreter into another project and use the scripting language from within our own applications. Imagine business rules written by business analysts and stored in a database for your web application to retrieve and execute, for example. Or imagine being able to modify the code on the fly from within the application, usually in response to user requests or fast-moving business logic changes. Or imagine using the scripting language as a way of storing configuration data that has

Lua
Lua is probably the most widely used scripting language you've never heard of. The language itself is a freely-available, open-source, object-oriented(ish) language hosted at http://www.lua.org. The reason for its popularity is simple: Lua was designed from the beginning to be easily hosted from C/C++ code. This made it very attractive to game developers and designers, allowing them to write the high-performance code in C or C++ and the game rules and triggers and game logic in Lua, even opening up the Lua scripts to third parties (like gamers) to customize and extend. World of Warcraft does this, and it has spawned a cottage industry of gamers-turned-




programmers who customize their WoW experience with plugins and add-ons and extensions, making the WoW ecosystem just that much more fun and interesting. The original hosting interface for Lua is, as mentioned earlier, based on C/C++ code, but fortunately the Lua community is every bit as active as the Ruby or .NET communities. A .NET-based adapter layer, LuaInterface, hides the .NET-to-C++ interop parts, making it ridiculously trivial to host Lua code in a .NET application.

end

function Account:withdraw (amt)
    self.balance = self.balance - amt
end

function Account:deposit (amt)
    self.balance = self.balance + amt
end

function Account:getBalance()
    return self.balance
end

Getting Started with Lua


LuaInterface is, like Lua itself, an open-source project, and lives at http://code.google.com/p/luainterface/. Currently, the best way to use LuaInterface is to pull down the source, which is a trivial exercise in Subversion. Once it's on your machine, it's an equally trivial exercise to build it in Visual Studio. One note: there are a few mentions on the LuaInterface wiki about build errors around mixed-mode assemblies, but I didn't run into this. I did, however, find that the test executables (LuaRunner) were missing an app.config in the project, which breaks the build. It caused no harm to remove the app.config reference from the project and build, so I suggest doing so. Fact is, we don't really want the test executables anyway; we want the library for our own uses. Before we go much further, test the LuaInterface build: in the Built directory next to the LuaInterface directory, you should find a compiled LuaRunner.exe (assuming you got it to build). From the Built directory, run:

C:> LuaRunner ..\lua-5.1.4\test\hello.lua

And you should get the traditional greeting.

This is a class in Lua. To be more precise, Lua has tables, which aren't relational tables, but essentially dictionaries of name/value pairs. In fact, technically, this is a collection of functions stored in a table that will make up an object. So, for example, when I write the following after it:
a = Account:new { balance = 0 }
a:deposit(100.00)
print(a:getBalance())

The console will print 100. Without going a lot deeper into Lua syntax (which is a pretty fascinating subject in its own right, by the way), one thing that's important to point out is that Lua lacks classes entirely, just as JavaScript does; both are prototype-based object languages, in that inheritance is a matter of following the prototype chain to find methods that aren't found on the object directly. This also means that you can change the method definitions on a single object if you wish:
b = Account:new { balance = 0 }

function b:withdraw (amt)
    -- no limit!
end

function b:getBalance()
    return 100000.00
end

b:withdraw(10000.00)
print(b:getBalance())

Writing Lua
Lua is an object-oriented(ish) language in that, on the surface of it, it appears to have a lot of the same basic concepts that the traditional imperative language developer will find comfortable: primitives, objects, and so on. In Lua, everything is dynamically resolved, so types don't play a major factor in writing code. Functions can be either part of objects or stand-alone. The usual imperative flow-control primitives (if/else, for and so on) are here. Variables are untyped, though Lua does have a basic concept of type within it; specifically, variables are of only a few types: strings, numbers and so on. Readers familiar with everybody's favorite Web scripting language will probably have already figured this out: in many ways, Lua conceptually fits into the same category as JavaScript. Lua's syntax is arguably much simpler, though, with far fewer gotchas within the language. For example, consider the following:
Account = { balance = 0 }

function Account:new (o)
    o = o or {}
    setmetatable(o, self)
    self.__index = self
    return o

This concept, of an object having no class, and an object's behavior being entirely mutable at runtime, is core to understanding JavaScript, but Lua's syntax is just different enough to keep the C#/C++/Java developer from thinking that she's on familiar ground.

Hosting Lua
It's a simple matter to create a C# (or VB, but sorry VBers, some habits are just too hard to break) project and add the LuaInterface assemblies to the project. Specifically, both of the assemblies in the Built directory, lua51.dll and LuaInterface.dll, are required. The Lua interpreter is entirely compiled into managed code, so both are standard .NET IL assemblies, and thus there are no weird security permission issues to worry about. So, after doing the Add Reference thing in Visual Studio, try this:
using System;
using LuaInterface;

namespace LuaHost
{




    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("LuaHost v0.1");
            var lua = new Lua();
            lua.DoString("io.write(\"Hello, world, \"," +
                "\"from \",_VERSION,\"!\\n\")");
        }
    }
}

codeplex.com. Go grab it, install it, and fire up the Prolog.NET Workbench. (Just for the record, there's a second Prolog.NET implementation at http://prolog.hodroj.net/, which appears to be slightly newer, appears to work with Mono, and appears to have a similar kind of feature set as the CodePlex project version; I chose to use the first one, but I suspect either one would work just as well in practice.)

Writing Prolog
In the Command pane of the Workbench, type in the following snippet of Prolog, making sure to include the period (which is a statement terminator, like the ; in C#/Java/C++) at the end, then click Execute at the bottom of that pane:
likes(john,mary).

As you might well infer, this is essentially a hard-wired version of what we did at the command-line a few minutes ago: greet the world from Lua. Having gotten this far, its fairly easy to see how this could be expanded: one thought would be to create a REPL (Read-Eval-Print-Loop, an interactive console) that reads a line from the user, feeds it to Lua, repeat, and host that from within WinForms or WPF. Or even Visual Studio. (Which Microsoft already did, all you WoW players out there.) If youre a Web developer, write an ASP.NET handler that executes Lua on the server, a la Node, but using a language that was actually designed, instead of cobbled together over a weekend.

This line says that john likes mary, and Prolog accepts that into the system by responding with Success in the Transcript pane above the Command pane. This line, in Prolog terminology, is a fact. (Well, to Prolog its a factto Mary it may be an unfortunate situation resulting from having made eye contact with a smarmy co-worker.) We can assert other kinds of facts into Prolog; in fact, we can assert lots of different kinds of facts, because Prolog knows nothing about the meaning behind the words john, mary or likes, only that A and B are linked by C. So additional facts might look like:
likes(ted,macallan25). likes(jess,macallan25). likes(miguel,macallan25). likes(charlotte,redwine). valuable(gold). female(jess). female(charlotte). male(ted). male(miguel). gives(ted,redwine,charlotte).

Prolog
Most of the time when were writing code for a customer, we expect the customer to tell us how to get things done. There are some projects, however, where the customer doesnt exactly know the right answer ahead of time which makes it hard to know if the code is generating the right answer. Consider, for example, Sudoku puzzles. The puzzle always has an answer (assuming its a legitimate puzzle, of course), and we have ways of verifying if a potential answer is correct, but neither the developer nor the customer (the Sudoku player) has that answer in front of them. (If this seems like a spurious example, then consider certain kinds of simulations or forecasting or other dataanalysis kinds of work. At least with Sudoku we know we have one and only one right answer, so lets work with that for now.) While writing a Sudoku solver in C# can be done, back in the AI research days, Prolog was developed to do precisely this kind of thing: take facts asserted into the system, and when asked to examine an assertion, determine whether that assertion could be true given the facts present within the system. If that sounded like gibberish, stay with me for a second. Examples will help.

These facts tell us that Ted, Jess and Miguel like macallan25 while Charlotte likes redwine, gold is valuable, Charlotte and Jess are female while Miguel and Ted are male, and Ted gives redwine to Charlotte (probably to impress her on a date or something). These facts collectively form a database in Prolog, and, like the more familiar relational form, the Prolog database allows us to issue queries against it:
:- likes(ted, macallan25).

To Prolog, this is a question: does Ted like Macallan 25? Very much so, yes, and it turns out that Prolog agreesit will respond with a yes or success response, depending on the Prolog implementation youre using. In this particular case, Prolog is looking at the verb (likes) joining the two nouns (ted and macallan25, what Prolog calls objects), and determining if there is an V/N1/N2 pairing in the facts databaseand,

Getting Started with Prolog


First things first: we need a Prolog implementation, and fortunately there's one written in .NET (and thus easily hostable!). Prolog.NET is available at http://prolog.




as we saw earlier, there is, so it responds with a success response. But if we ask it a different query:
:- likes(ted, redwine).

Prolog comes back, correctly, with the answer jess. Prolog is, very simply, an inference engine, and it shares a lot of similarities to rules engines like Drools.NET or iLOG Rules, but in a language syntax, and something that we can call from .NET code. If these seem like simplistic scenarios, consider a trickier one: a fast-food restaurant chain needs a software system to help them manage employee schedules. Anyone who's ever worked as a manager of a restaurant (or assistant manager, when the manager decided to delegate that job to his high-school assistant manager to teach a sense of responsibility, and of course it had nothing to do with his absolute loathing of the task, not that I'm still bitter or anything) knows what a pain it is. Every employee has immutable schedule restrictions, particularly in a college town where schedules change with every quarter or semester, not to mention the complications around seniority and the implicit "more senior people get first pick at the schedule" and so on. This is exactly the kind of problem that Prolog excels at: we can assert each employee's schedule restrictions and preferences as facts, set up some rules about how often they can work (no back-to-back shifts, for example), and then let Prolog figure out the permutations of the schedule for final human approval. In fact, Prolog.NET has a sample along these lines (Listing 1). Walking through all of this is a bit beyond the scope of this article, but the code starts with some declarations of the days of the week, the shifts in the plant, and a definition that a WorkPeriod is a given shift/day combination. Then we get into the employee/shift combinations (a shiftAssignment) and the employee/day combinations (a dayAssignment), and finish with a declaration of rules that create the three-way binding between an employ-

Prolog will respond with a no or failure. Which totally isn't true, but Prolog only knows about the facts that were asserted into its database; if it's not in the database, then to Prolog it doesn't exist. Prolog will also allow you to put variables instead of objects into the query, and let Prolog fill the variable with the objects that match:
:- likes(Person, macallan25).

Here, Prolog knows that Person is a variable because it starts with a capital letter. (Yes, seriously. Prolog is that case-sensitive.) And it responds by telling us every object that likes macallan25, which in this case is three objects: miguel, ted and jess. Now, suppose that Ted likes anyone that is female and who in turn likes a particular kind of beverage. (Ed. note: Meaning, Ted likes a Person who is female that likes a particular beverage. Jess works because she is female (first clause) and because she likes the Beverage Ted passed in (Scotch).) We can express this in Prolog as a rule:
likes(ted,Person,Beverage) :-
    female(Person),
    likes(Person,Beverage).

Now we present it with the query, who does Ted like that likes Macallan 25?
:- likes(ted, Person, macallan25).

Listing 1: A Prolog schedule example


day(monday).
day(tuesday).
day(wednesday).
day(thursday).
day(friday).

shift(first).
shift(second).
shift(third).

workPeriod(workPeriod(Day,Shift)) :-
    day(Day),
    shift(Shift).

shiftAssignment(alice,first).
shiftAssignment(bob,second).
shiftAssignment(cathy,third).
shiftAssignment(doug,first).
shiftAssignment(emily,second).
shiftAssignment(fred,third).
% ... and so on
shiftAssignment(zack,second).

dayAssignment(alice,[monday,tuesday]).
dayAssignment(bob,[tuesday,wednesday]).
dayAssignment(cathy,[wednesday,thursday]).
dayAssignment(doug,[thursday,friday]).
dayAssignment(emily,[friday,monday]).
dayAssignment(fred,[monday,tuesday]).
% ... and so on
dayAssignment(zack,[monday,tuesday]).

workPeriodAssignment(workPeriodAssignment(Person,workPeriod(Day,Shift))) :-
    dayAssignment(Person,Days),
    containsItem(Days,Day),
    shiftAssignment(Person,Shift).

containsItem([Item|Items],Item).
containsItem([AnotherItem|List],Item) :-
    containsItem(List,Item).

workPeriodAssignments([],[]).
workPeriodAssignments([workPeriodAssignment(Person,WorkPeriod)|Assignments],
                      [WorkPeriod|WorkPeriods]) :-
    workPeriodAssignments(Assignments,WorkPeriods),
    workPeriodAssignment(workPeriodAssignment(Person,WorkPeriod)).

solve(Assignments) :-
    findall(WorkPeriod,workPeriod(WorkPeriod),WorkPeriods),
    workPeriodAssignments(Assignments,WorkPeriods).




Listing 2: Hosting Prolog in a C# project


using System;
using Prolog;
using Prolog.Code;

namespace PrologHost
{
    class Program
    {
        static void Main(string[] args)
        {
            var program = new Prolog.Program();

            var sentences = Parser.Parse("likes(ted,macallan25)");
            foreach (var cs in sentences)
                program.Add(cs);

            var querySentences = Parser.Parse(":- likes(Person,macallan25)");
            Query query = new Query(querySentences[0]);

            PrologMachine machine = PrologMachine.Create(program, query);
            ExecutionResults results = machine.RunToSuccess();
            Console.WriteLine("Results: {0}", results);

            foreach (var v in machine.QueryResults.Variables)
                Console.WriteLine(v.Name + " = " + v.Text);
        }
    }
}

ee, a shift, and a day to create a given schedule. It's a great non-trivial example to have a look at, plus it demonstrates the intersection of Prolog and .NET, since the sample itself is compiled into a small WPF app displaying the schedule permutations in a grid.
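As a rough sketch of how you might drive that schedule from C#, the fragment below reuses only the Prolog.NET types already shown in Listing 2 (Parser, Program, Query, PrologMachine); loading the rules from a file named schedule.pl is my own assumption for the example, not something the shipped sample does.

using System;
using System.IO;
using Prolog;
using Prolog.Code;

class ScheduleRunner
{
    static void Main(string[] args)
    {
        // Load the schedule facts and rules (the Listing 1 code) from a
        // text file; "schedule.pl" is a hypothetical file name.
        var program = new Prolog.Program();
        foreach (var sentence in Parser.Parse(File.ReadAllText("schedule.pl")))
            program.Add(sentence);

        // Ask Prolog to bind Assignments to a full schedule permutation.
        var querySentences = Parser.Parse(":- solve(Assignments)");
        var query = new Query(querySentences[0]);

        var machine = PrologMachine.Create(program, query);
        var results = machine.RunToSuccess();
        Console.WriteLine("Results: {0}", results);

        // Each bound variable (here, Assignments) comes back as text.
        foreach (var v in machine.QueryResults.Variables)
            Console.WriteLine(v.Name + " = " + v.Text);
    }
}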

operated upon. It is this meta facility that lends Lisps (and, therefore, Scheme) much of the power that is commonly ascribed to the languages within this family.

Getting Started with Scheme


Like the other two languages we've seen so far, an implementation of Scheme designed specifically for the CLR (or, to be more precise, the Dynamic Language Runtime (DLR) that runs on top of the CLR and serves as the core for IronPython, IronRuby and the dynamic keyword in C# 4.0) is available. Not surprisingly, it is called IronScheme, and it is available for download on CodePlex at http://ironscheme.codeplex.com in either source or precompiled binary form. Pull it down, install (or build) it, and fire up the IronScheme REPL to get a prompt to play with.

Hosting Prolog.NET
Like the LuaInterface situation earlier, hosting the Prolog.NET implementation is pretty straightforward (Listing 2). In a C# project, add the Prolog.dll assembly, found in the root of the Prolog.NET installation directory, to your project. Obtain a Prolog.Program instance, and use the Parser found in the Prolog namespace to capture Prolog facts and rules and define queries to be run against them. As you can see, facts and queries are parsed separately and added to the PrologMachine instance, and then executed. The API permits execution in a single-step fashion, allowing for on-the-fly examination of the machine during its processing, but for non-debugging scenarios, RunToSuccess() is the preferred approach.

Writing Scheme
As already mentioned, everything in Lisp is a list, so all programming in Scheme will basically be putting together ()-bracketed lists of things, typically in a Polish-notation fashion. So, for example, typing the following:
> (* 5 20)

Another Prolog
Another approach to Prolog-on-the-CLR is that taken by P#, a Prolog source-to-source translator that takes Prolog input and generates C# files that can be compiled into your assembly. You can find it at http://www.dcs.ed.ac.uk/home/jjc/psharp/psharp-1.1.3/dlpsharp.html if you are interested.

Yields a response of 100, because that is what applying the * (multiplication) function on arguments of 5 and 20 produces. This list-based syntax alone is fascinating because it means that Scheme can write functions that accept a varying number of parameters without significant difficulty (what the academics sometimes refer to as flexible arity), meaning that we can also write:
> (* 5 20 20)

Scheme
No conversation on dynamic languages can be called complete without a nod and a tip of the hat to one of the granddaddies of all languages, Lisp, and its Emacs-hosted cousin, Scheme. Scheme, like Lisp, is conceptually a very simple language (Everything is a list!) with some very mind-blowing concepts to the programmer who hasn't wandered outside of Visual Studio much. (Code is data! Data is code!) Scheme, as they say, is a Lisp, which means that it syntactically follows many of the same conventions that Lisp does: all program statements are in lists, bounded by parentheses, giving Scheme code the chance to either interpret the list as a method call or command, or do some processing on the list before passing it elsewhere to be

And get back 2000. Of course, if all we wanted was a reverse-reverse-Polish calculator, we'd ask some long-haired dude whose family name used to be Niewardowski to recite multiplication tables while walking backwards. Scheme also allows you to store values in named storage using (define):
> (define pi 3.14159)
> (define radius 10)
> (* pi (* radius radius))






Listing 3: Hosting IronScheme in a C# Console project


using System;
using Microsoft.Scripting;
using Microsoft.Scripting.Hosting;
using IronScheme;
using IronScheme.Hosting;
using IronScheme.Runtime;

namespace SchemeHost
{
    class Program
    {
        static void Main(string[] args)
        {
            var slp = ScriptDomainManager.CurrentManager
                .GetLanguageProvider(typeof(IronSchemeLanguageProvider));
            var se = slp.GetEngine();

            se.Evaluate(@"
                (define (foo x)
                  (let ((x (string-append x ""\n"")))
                    (display x)
                    x))");

            var foo = se.Evaluate("foo") as Callable;
            var result = foo.Call("hello world");
            Console.Write(result);

            var radius = se.Evaluate("(* 3.14159 (* 10 10))");
            Console.Write(radius);

            var bar = se.Evaluate(
                "(lambda x (for-each display (reverse x))(newline))") as Callable;
            bar.Call(1, 2, 3, 4, 5);

            Console.ReadLine();
        }
    }
}

Highly Recommended Books


Like most languages, you can't use them correctly after reading just a thousand words. So, for those who are interested in following up, the following books come highly recommended:

(define) isn't limited to just defining variables; we can also (define) new functions, like so:
> (define (square x) (* x x))
> (* pi (square radius))

Although it may look a little overwhelming, when you peer into it, a number of things leap out: there is a correlation between HTML tags and Scheme functions ((h2 ...), (form ...), and so on), and the open-ended nature of Scheme lists makes it easy to extend the language to incorporate templatized elements into the rendered HTML. For example, consider this snippet from the above:
`(div ,@(map display-entry blogdata))

Programming in Prolog
(Clocksin, Mellish)

Structure and Interpretation of Computer Programs
(Abelson and Sussman); teaches Scheme, available online

One of the things apparent when we look at Scheme code is that the distinctions between variables and methods are quite fuzzy when compared against languages like C# and VB. Is pi a function that returns a value, or is it a variable storing a value? And, quite honestly, do we care? Should we? (You might, but you shouldn't. Unlearn, young Jedi, unlearn.)

Hosting Scheme
If there's a theme to this article, it's that hosting language X is pretty easy, and IronScheme really is no different. From a new C# Console project, add three assembly references from the IronScheme root directory: Microsoft.Scripting.dll (the DLR), IronScheme.dll, and IronScheme.Closures.dll. See Listing 3. As you can see, getting an IronScheme engine up and running is pretty straightforward: just ask the DLR's ScriptDomainManager to give you an IronScheme engine instance. Once there, we only need to pass the Scheme expressions in, and IronScheme will hand back the results. If those expressions resolve into functions, such as in the case above with foo, then we need only cast them to Callable instances, and we can call through to them with no difficulty. Oh, and for the record? IronScheme is ridiculously easy to get started using on a Web MVC project, because the IronScheme authors have already built the necessary hooks (and Visual Studio integration!) to create an MVC application. In the IronScheme implementation, check out the websample directory, which contains a couple of different samples (as well as the IronScheme documentation). Configure an ASP.NET site around that directory, then hit it with an HTTP request of /blog, and explore the 100% IronScheme-written blog engine. Admittedly, it's pretty tiny, but then again, so is the code. And the IronScheme way to represent an HTML view isn't all that hard to read, either (Listing 4).

The Scheme Programming Language
(Dybvig); available online

This looks pretty innocuous, but here the power of Scheme's functional nature kicks in: we use the map function to take a function, display-entry, and map it over every element in the blogdata collection, which effectively iterates through the collection and generates the HTML for each entry. To those willing to look past the arcane ()-based syntax, Scheme offers all the power of a functional language, combined with the flexibility of a dynamic one. Is this likely to take over from ASP.NET MVC written in C# or VB any time soon? Maybe not, but long-time practitioners of Lisp and Scheme have often touted how easy it is to get things done in these languages thanks to the ability to build abstractions on top of abstractions, so maybe it's worth a research spike for a while, just to see.

Programming Clojure
(Halloway)

The Lua Programming Language
(www.lua.org)

Clojure-CLR
No discussion of a modern Lisp would be complete without mentioning Clojure, a Lisp originally born on the JVM, but since ported to the CLR. Clojure is a Lisp, but it's not Common Lisp or Scheme. Its creator, Rich Hickey, put some fascinating ideas about state and data into the language, making it a powerful tool for doing things in parallel. If you're a Java programmer, picking up Clojure is a highly-recommended step to take; if you're a .NET programmer, however, although still recommended, it's not quite as easy, owing to the fact that all of the documentation and articles and books on Clojure are focused specifically on the JVM and Java APIs. Still, for those willing to brace themselves for a little rough sailing at first, Clojure-CLR can be a powerful experiment, and it's a natural complement to learning Iron-




Listing 4: Using IronScheme to present an HTML view


(library (views blog) (export index entry edit add) (import (ironscheme) (ironscheme clr) (models blog) (ironscheme web) (ironscheme web views)) (dene (to-string obj) (clr-call Object ToString obj)) (dene (page-template . body) `(html (xmlns . http://www.w3.org/1999/xhtml) (head (title Blog in IronScheme) ,(css-link ~/styles/blog.css)) (body ,(display-menu) . ,body))) (dene (edit-page-template . body) (apply page-template (javascript-include ~/wmd/wmd.js) body)) (dene (display-menu) `(ul (class . menu) (li ,(action-link Home )) ,(if (string=? (user-name) admin) `(li ,(action-link Add entry add))) ,(if (user-authenticated?) (li (a (href . /auth/logout) Logout)) (li (a (href . /auth/login) Login))) (li (form (action . ,(action-url search)) (method . post) (input (type . text) (name . searchterm) (value . ,(or (form searchterm) ))) (input (type . submit) (value . Search)))) )) (dene (display-entry e) (let ((id (blog-entry-id e))) `(div (class . blog-entry) (div (class . blog-header) ,(action/id-link (blog-entrysubject e) entry id)) (div (class . blog-body) (no-escape ,(blog-entry-body e))) (span (class . blog-footer) posted by ,(blog-entry-author e) on ,(to-string (blog-entry-date e)) ,(when (string=? (user-name) admin) `(span ,(action/id-link edit edit id) ,(action/id-link delete delete id (onclick . return conrm(Are you sure?)) ))) )))) (dene-view (index blogdata pageindex) (page-template (h2 Blog in 100% IronScheme) `(div ,@(map display-entry blogdata)) (if (not (string=? search (context-item action))) `(span ,(if (= pageindex 1) (action-link << index)) ,(if (> pageindex 1) (action/id-link << previous (- pageindex 1))) ,(if (not (null? blogdata)) (action/id-link >> previous (+ pageindex 1))))) )) (dene-view (add) (edit-page-template (h2 Add entry) `(form (action . ,(action-url add)) (method . post) ,(make-label/input subject Subject text ) (textarea (style . width:500px;height:200px) (name . body) (id . body) ) (br) (input (type . submit))))) (dene-view (entry e) (page-template (h2 Blog in 100% IronScheme) (display-entry e))) (dene-view (edit e) (edit-page-template (h2 Edit entry) `(form (action . ,(action/id-url edit (blog-entry-id e))) (method . post) ,(make-label/input subject Subject text (blog-entrysubject e)) (textarea (style . width:500px;height:200px) (name . body) (id . body) ,(blog-entry-body e)) (br) (input (type . submit))))) )

Scheme. Clojure, unlike most Lisps, has no interpreter, meaning that Clojure-CLR is going to compile everything into IL, and thus eliminate concerns around the hideous performance of being an interpreted language.

Moving On
Certainly the crop of .NET languages doesn't end here. In fact, trying to trim the list down from all the languages I could have discussed was one of the hardest things about writing this article: languages like Cobra, Nemerle, Boo and the aforementioned IronPython and IronRuby are all powerful and useful languages that can significantly change the development arc of a project if used correctly.

No one language is going to be the silver bullet to all your development ills; what we gain in using a dynamic language, we lose in taking on some of the risks inherent in that language. For example, almost every Ruby developer I've ever talked to makes it very clear that in a Ruby project, unit tests are not just a nice-to-have, but an essential necessity to ensuring the project succeeds. The language offers a tremendous amount of flexibility, but at a price. At the end of the day, that's probably something that should be said about all the tools we use. Caveat emptor. Ted Neward




ONLINE QUICK ID 1206081

An Introduction to ASP.NET Web API


Microsoft recently released the ASP.NET MVC 4.0 beta and along with it, the brand spanking new ASP.NET Web API. Web API is an exciting new addition to the ASP.NET stack that provides a new, well-designed HTTP framework for creating REST and AJAX APIs (API is Microsoft's new jargon for a service, in case you're wondering). Although Web API
currently ships and installs with ASP.NET MVC 4, you can use Web API functionality in any ASP.NET project, including WebForms, WebPages and MVC, or none of the above. You can also self-host Web API in your own applications. Please note that this article is based on pre-release bits of ASP.NET Web API (pre-RC) and the API is still changing. The samples are built against the latest snapshot of the CodePlex ASP.NET Web Stack source, and some of the syntax and functions might change by the time Web API releases. Overall concepts apply, and I've been told that functionality is mostly feature complete, but things are still changing as I write this. Please refer to the latest code samples on GitHub for the final syntax of the examples.

WCF REST or ASP.NET AJAX with ASMX, it's a brand new platform rather than bolted-on technology that is supposed to work in the context of an existing framework. Web API is meant to handle any kind of HTTP input and produce output and status codes using the full spectrum of HTTP functionality available. There's much-improved support for content negotiation based on HTTP Accept headers, with the framework capable of detecting content that the client sends and requests, and automatically serving the appropriate data format in return. Many of the features favor convention over configuration, making it much easier to do the right thing without having to explicitly configure specific functionality. Although previous solutions accomplished this using a variety of WCF and ASP.NET features, Web API combines all this functionality into a single server-side HTTP framework that intrinsically understands the HTTP semantics and subtly drives you in the right direction for most operations. And when you need to customize or do something that isn't automatic, there are overrides for most behaviors, and even many low-level hook points that allow you to plug in custom functionality with relatively little effort.

Rick Strahl
rstrahl@west-wind.com
Rick Strahl is the big Kahuna and janitor at West Wind Technologies on Maui, Hawaii. The company specializes in Web and distributed application development, develops several commercial and free tools, and provides training and mentoring with a focus on .NET, IIS and Visual Studio. Rick's an ASP.NET Insider, a frequent contributor to magazines and books, and a frequent speaker at developer conferences and user groups. For more information, please visit: www.west-wind.com/weblog/

What's a Web API and Why Do We Need It?


HTTP APIs become increasingly important with the proliferation of devices that we use today. Most mobile devices, like phones and tablets, run apps that use data retrieved from the Web over HTTP. Desktop applications are also moving in this direction with more and more online content and synching, and Windows 8 promising an app-like experience. Likewise, many Web applications rely on rich client functionality to create and manipulate the browser user interface, using AJAX rather than server-generated HTML data to load up the user interface with data.

Results returned from these remote HTTP services are data rather than HTML markup. This data tends to be in XML or, more commonly today, in JSON (JavaScript Object Notation) format. Web API provides an easy way to build the backend code to handle these remote API calls using a flexible framework that is based very specifically around the semantics of the HTTP protocol.

Most mobile devices, like phones and tablets, run apps that use data retrieved from the Web over HTTP.

The .NET stack already includes a number of tools that provide the ability to create HTTP service backends. There's WCF REST for REST and AJAX, ASP.NET AJAX Services purely for AJAX and JSON, and you can always use plain HTTP Handlers for any sort of response but with minimal plumbing. You can also use plain MVC Controller Methods or even ASP.NET WebForms pages to generate arbitrary HTTP output.

Although all of these can accomplish the task of returning HTTP responses, none of them are optimized for the repeated tasks that an HTTP service has to deal with. If you are building sophisticated Web APIs on top of these solutions, you're likely to either repeat a lot of code or write significant plumbing code yourself to handle various API requirements consistently across requests.

A Better HTTP Experience

ASP.NET Web API differentiates itself from these other solutions in that it was built from the ground up around the HTTP protocol and its messaging semantics. Unlike WCF REST or ASP.NET AJAX with ASMX, it's a brand new platform rather than bolted-on technology that is supposed to work in the context of an existing framework. Web API is meant to handle any kind of HTTP input and produce output and status codes using the full spectrum of HTTP functionality available. There's much-improved support for content negotiation based on HTTP Accept headers, with the framework capable of detecting content that the client sends and requests and automatically serving the appropriate data format in return. Many of the features favor convention over configuration, making it much easier to do the right thing without having to explicitly configure specific functionality. Although previous solutions accomplished this using a variety of WCF and ASP.NET features, Web API combines all this functionality into a single server-side HTTP framework that intrinsically understands the HTTP semantics and subtly drives you in the right direction for most operations. And when you need to customize or do something that isn't automatic, there are overrides for most behaviors, and even many low-level hook points that allow you to plug in custom functionality with relatively little effort.

ASP.NET Web API differentiates itself from existing Microsoft solutions in that it was built from the ground up around the HTTP protocol and its messaging semantics.

Web API also requires very little in the way of configuration, so it's very quick and unambiguous to get started. To top it all off, you can also host Web API in your own applications or services. Above all, Web API makes it extremely easy to create arbitrary HTTP endpoints in an application without the overhead of a full framework like WebForms or ASP.NET MVC. Because Web API works on top of the core ASP.NET stack, you can plug Web APIs into any ASP.NET application.

Getting Started

I'll create a new empty ASP.NET application to demonstrate that Web API can work with any kind of ASP.NET project.

Make Sure ASP.NET MVC4 (Pre-Release) Is Installed

The first step is to make sure you have ASP.NET MVC 4 installed on your machine in order to get the required Web API libraries. If it isn't installed, you can download it from http://www.asp.net/web-api. Alternately, you can also download the latest ASP.NET MVC/Web API source code from the CodePlex site (aspnetwebstack.codeplex.com). Because the API is still in flux, I used CodePlex code for my samples. The samples include the current binaries, so to run them you don't actually need to download anything.
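As an aside, the self-hosting option mentioned earlier doesn't require IIS or an ASP.NET project at all. The following is only a minimal sketch, assuming the System.Web.Http.SelfHost assembly that ships with the Web API bits; the class names reflect the pre-release builds and may change:

using System;
using System.Web.Http;
using System.Web.Http.SelfHost;

class Program
{
    static void Main()
    {
        // Self-host configuration: same routing API, different hosting wrapper
        var config = new HttpSelfHostConfiguration("http://localhost:8080");
        config.Routes.MapHttpRoute(
            name: "AlbumApi",
            routeTemplate: "albums/{title}",
            defaults: new { title = RouteParameter.Optional,
                            controller = "AlbumApi" });

        using (var server = new HttpSelfHostServer(config))
        {
            server.OpenAsync().Wait();
            Console.WriteLine("Web API self-host listening on port 8080...");
            Console.ReadLine();
        }
    }
}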

Listing 1: Global.asax routing configuration for Web API


using System;
using System.Web.Routing;
using System.Web.Http;

namespace AspNetWebApi
{
    public class Global : System.Web.HttpApplication
    {
        protected void Application_Start(object sender, EventArgs e)
        {
            RouteTable.Routes.MapHttpRoute(
                name: "AlbumApi",
                routeTemplate: "albums/{title}",
                defaults: new
                {
                    title = RouteParameter.Optional,
                    controller = "AlbumApi"
                }
            );
        }
    }
}

Create a New ASP.NET Empty Project


Although you can create a new project based on the ASP.NET MVC/Web API template to quickly get up and running, I'll take you through the manual setup process, because one common use case is to add Web API functionality to an existing ASP.NET application. This process describes the steps needed to hook up Web API to any ASP.NET 4.0 application. Start by creating an ASP.NET Empty Project. Then create a new folder in the project called Controllers.

Add a Web API Controller Class


Once you have any kind of ASP.NET project open, you can add a Web API Controller class to it. Web API Controllers are very similar to MVC Controller classes, but they work in any kind of project. Add a new item to this folder by using the Add New Item option in Visual Studio and choose Web API Controller Class, as shown in Figure 1. Make sure that the name of the controller class includes Controller at the end of it, which is required in order for Web API routing to find it. Here, the name for the class is AlbumApiController.

Figure 1: This is how you create a new Controller Class in Visual Studio.

For this example, I'll use a Music Album model to demonstrate basic behavior of Web API. The model consists of albums and related songs, where an album has properties like Name, Artist and YearReleased, and a list of songs with a SongName and SongLength, as well as an AlbumId that links it to the album. You can find the code for the model (and the rest of these samples) on GitHub at: http://goo.gl/rA0cx. To add the file manually, create a new folder called Model, and add a new class Album.cs and copy the code into it. There's a static AlbumData class with a static CreateSampleAlbumData() method that creates a short list of albums on a static .Current member that I'll use for the examples. Before we look at what goes into the controller class though, let's hook up routing so we can access this new controller.

ASP.NET Web API Versions and Samples


At the time this article was written, ASP.NET Web API is officially in beta with a pending release candidate (RC) coming very soon, according to Microsoft. The Web API beta is part of the ASP.NET MVC 4.0 Beta and you can download it from: http://www.asp.net/web-api. In late March, Microsoft also opened up the MVC and Web API source code and published the current live builds on CodePlex (http://aspnetwebstack.codeplex.com). Amazingly, you now have access to live builds of ASP.NET MVC and Web API. Because Web API is still changing significantly, I used the latest CodePlex build for examples, as these bits are close to what you will see for the RC, rather than using the already out-of-date beta bits. I will be updating the samples as new versions come along, so make sure to grab the latest code from the GitHub sample site. The samples include the current binaries so they should work on any .NET 4.0 installation, since the Web API binaries are xcopy deployed in the bin folder of the project.

Hooking up Routing in Global.asax


To start, I need to perform the one required configuration task in order for Web API to work: I need to configure routing to the controller. Like MVC, Web API uses routing to provide clean, extension-less URLs to controller methods. Using an extension method on ASP.NET's static RouteTable class, you can use the MapHttpRoute() method (in the System.Web.Http namespace) to hook up the routing during Application_Start in global.asax.cs, as shown in Listing 1. This method configures Web API to direct URLs that start with an albums folder to the AlbumApiController class.

Routing in ASP.NET is used to create extension-less URLs and allows you to map segments of the URL to specific Route Value parameters. A route parameter, with a name inside curly brackets like {name}, is mapped to parameters on the controller methods. Route parameters can be optional, and there are two special route parameters, controller and action, that determine the controller to call and the method to activate, respectively.
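For illustration, here's a hedged sketch (not part of the article's sample) of a route that uses the special {action} parameter; the ProductApi controller, route name and URL are invented purely to show how URL segments map to route values:

// A hypothetical route: for GET /products/byid/42, {controller} resolves to
// ProductApiController, {action} to ById, and {id} to "42".
RouteTable.Routes.MapHttpRoute(
    name: "ProductApiAction",
    routeTemplate: "products/{action}/{id}",
    defaults: new { id = RouteParameter.Optional, controller = "ProductApi" }
);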

HTTP Verb Routing


Routing in Web API can route requests by HTTP Verb in addition to standard {controller},{action} routing. For the first examples, I use HTTP Verb routing, as shown in Listing 1. Notice that the route I've defined doesn't include an {action} route value or an action value in the defaults. Rather, Web API can use the HTTP Verb in this route to determine the method to call on the controller: a GET request maps to any method that starts with Get. So methods called Get() or GetAlbums() are matched by a GET request, and a POST request maps to a Post() or PostAlbum(). Web API matches a method by name and parameter signature to match a route, query string or POST values. In lieu of the method name, the [HttpGet, HttpPost, HttpPut, HttpDelete, etc.] attributes can also be used to designate the accepted verbs explicitly if you don't want to follow the verb naming conventions.
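For illustration, a minimal sketch of that attribute-based alternative; the AlbumsByArtist method is hypothetical and not part of the sample project:

// The method name doesn't start with "Get", so the [HttpGet] attribute
// explicitly marks it as a GET endpoint.
[HttpGet]
public IEnumerable<Album> AlbumsByArtist(string artist)
{
    return AlbumData.Current
                    .Where(alb => alb.Artist == artist)
                    .OrderBy(alb => alb.AlbumName);
}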

Web API shares many concepts of ASP.NET MVC, and the implementation of your API logic is done by implementing a subclass of the System.Web.Http.ApiController class. Each public method in the implemented controller is a potential endpoint for the HTTP API, as long as a matching route can be found to invoke it. The class name you create should end in Controller, which is how Web API matches the controller route value to figure out which class to invoke.

Inside the controller you can implement methods that take standard .NET input parameters and return .NET values as results. Web API's binding tries to match POST data, route values, form values or query string values to your parameters. Because the controller is configured for HTTP Verb based routing (no {action} parameter in the route), any methods that start with Getxxxx() are called by an HTTP GET operation. You can have multiple methods that match each HTTP Verb as long as the parameter signatures are different and can be matched by Web API.

In Listing 3, I create an AlbumApiController with two methods to retrieve a list of albums and a single album by its title. To access the first two requests, you can use the following URLs in your browser:

http://localhost/aspnetWebApi/albums
http://localhost/aspnetWebApi/albums/Dirty%20Deeds

Note that you're not specifying the actions of GetAlbum or GetAlbums in these URLs. Instead, Web API's routing uses the HTTP GET verb to route to these methods that start with Getxxx().

Routing in Web API works the way routing works in ASP.NET MVC, but adds the ability to route by HTTP Verb in lieu of specifying a controller action.
Although HTTP Verb routing is a good practice for REST style resource APIs, it's not required and you can still use more traditional routes with an explicit {action} route parameter. When {action} is supplied, the HTTP verb routing is ignored. I'll talk more about alternate routes later.

When you're finished with the initial creation of files, your project should look like Figure 2. Notice that adding a Web API controller to your project adds a long string of new assemblies to your project; Web API is designed in a very modular fashion. Web API (and MVC 4.0) is shipped as an add-on library and deploys the assemblies into your site's bin folder and can be xcopy deployed; no explicit installation is required.

Figure 2: The initial project has the new API Controller and the Album model.

Creating Your First Controller


Now it's time to create some controller methods to serve data. For these examples, I'll use a very simple Album and Songs model to play with, as shown in Listing 2.

Listing 2: The Album and Songs Model for the sample
public class Album
{
    public string Id { get; set; }

    [Required, StringLength(80)]
    public string AlbumName { get; set; }

    [StringLength(80)]
    public string Artist { get; set; }

    public int YearReleased { get; set; }
    public DateTime Entered { get; set; }

    [StringLength(128)]
    public string AlbumImageUrl { get; set; }

    [StringLength(200)]
    public string AmazonUrl { get; set; }

    public virtual List<Song> Songs { get; set; }
}

public class Song
{
    public string AlbumId { get; set; }

    [Required, StringLength(80)]
    public string SongName { get; set; }

    [StringLength(5)]
    public string SongLength { get; set; }
}
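The AlbumData helper referenced throughout these examples lives in the GitHub sample; the following is only a rough sketch of its shape (the member names follow the article's description, the data is placeholder):

// Rough sketch of the static AlbumData helper; the real implementation is in
// the GitHub sample and differs in detail.
public static class AlbumData
{
    public static List<Album> Current = CreateSampleAlbumData();

    public static List<Album> CreateSampleAlbumData()
    {
        return new List<Album>
        {
            new Album
            {
                Id = "1",
                AlbumName = "Dirty Deeds Done Dirt Cheap",
                Artist = "AC/DC",
                YearReleased = 1976,
                Entered = DateTime.Now,
                Songs = new List<Song>
                {
                    new Song { AlbumId = "1", SongName = "Problem Child" }
                }
            }
        };
    }
}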

Content Negotiation
When you access any of the URLs above from a browser, you get either an XML or JSON result returned back. The album list result for Chrome 17 and Internet Explorer 9 is shown in Figure 3. Notice that the results are not the same: Chrome returns an XML response and IE9 returns a JSON response. Whoa, what's going on here? Shouldn't we see the same result? Actually, no. Web API determines what type of content to return based on Accept headers. HTTP clients, like browsers, use Accept headers to specify what kind of content they'd like to see returned. Browsers generally ask for HTML first, followed by a few additional content types. Chrome (and most other major browsers) ask for:

Accept: text/html, application/xhtml+xml, application/xml;q=0.9, */*;q=0.8

IE9 asks for:


Accept: text/html, application/xhtml+xml, */*

Note that Chrome's Accept header includes application/xml, which Web API finds in its list of supported media types, and returns an XML response. IE9 doesn't include an Accept header type that works on Web API by default, and it returns its default format, which is JSON.

Figure 3: Web API responses can vary depending on the browser used, demonstrating Content Negotiation in action as these two browsers send different HTTP Accept headers.

Listing 3: A basic ApiController implementation based on HTTP Verb mapping

public class AlbumApiController : ApiController
{
    public IEnumerable<Album> GetAlbums()
    {
        var albums = AlbumData.Current.OrderBy(alb => alb.Artist);
        return albums;
    }

    public Album GetAlbum(string title)
    {
        var album = AlbumData.Current
                             .SingleOrDefault(alb => alb.AlbumName
                                                        .Contains(title));
        return album;
    }
}

This is an important and very useful feature that was missing from any previous Microsoft REST tools: Web API automatically switches output formats based on HTTP Accept headers. Nowhere in the server code above do you have to explicitly specify the output format. Rather, Web API determines what format the client is requesting based on the Accept headers and automatically returns the result based on the available formatters. This means that a single method can handle both XML and JSON results. Using this simple approach makes it very easy to create a single controller method that can return JSON, XML, ATOM or even OData feeds by providing the appropriate Accept header from the client. By default, you don't have to worry about the output format in your code.

Web API automatically switches output formats based on the HTTP Accept header of the request. The default content type if no matching Accept header is specified is JSON.

Note that you can still specify an explicit output format if you choose, either globally by overriding the installed formatters, or individually by returning a lower-level HttpResponseMessage instance and setting the formatter explicitly. There will be more on that in a minute.

Along the same lines, any content sent to the server via POST/PUT is parsed by Web API based on the HTTP Content-type of the data sent. The same formats allowed for output are also allowed on input. Again, you don't have to do anything in your code; Web API automatically performs the deserialization from the content.

Resources

Sample source code on GitHub: http://goo.gl/8mhIh
ASP.NET MVC and Web API source on CodePlex: http://aspnetwebstack.codeplex.com

Json.NET in Web API

In the current beta, the default JSON serializer is DataContractJsonSerializer, which has some substantial limitations. Post-beta builds integrate the popular Json.NET library as the default JSON serializer, which is a huge improvement over DataContractJsonSerializer in terms of supported functionality, performance and features. Serializers tend to have differences, and if you need to use a specific serializer to match older client code, it's easy to plug in a different one. I wrote a blog post originally meant for the beta to plug in Json.NET, but if you have a need for an alternate serializer you can use the techniques described in this post: http://goo.gl/fD7Lf

Accessing Web API JSON Data with jQuery

A very common scenario for Web API endpoints is to retrieve data for AJAX calls from the Web browser. Because JSON is the default format for Web API, it's easy to access data from the server using jQuery and its getJSON() method. This example receives the albums array from GetAlbums() and databinds it into the page using knockout.js.


$.getJSON("albums/", function (albums) { // make knockout template visible $(".album").show(); // create view object and attach array var view = { albums: albums }; ko.applyBindings(view); });

Figure 4 shows this and the next example's HTML output. You can check out the complete HTML and script code at http://goo.gl/Ix33C. The result from the getJSON() call is a JavaScript object of the server result, which comes back as a JavaScript array. In the code, I use knockout.js to bind this array into the UI, which as you can see, requires very little code, instead using knockout's data-bind attributes to bind server data to the UI. Of course, this is just one way to use the data; it's entirely up to you to decide what to do with the data in your client code.

Figure 4: The Album Display sample uses JSON data loaded from Web API.

Along the same lines, I can retrieve a single album to display when the user clicks on an album. The response returns the album information and a child array with all the songs. The code to do this is very similar to the last example, where I pulled the albums array:

$(".albumlink").live("click", function () {
    var id = $(this).data("id"); // title
    $.getJSON("albums/" + id, function (album) {
        ko.applyBindings(album, $("#divAlbumDialog")[0]);
        $("#divAlbumDialog").show();
    });
});

Here the URL looks like this: /albums/Dirty%20Deeds, where the title is the ID captured from the clicked element's data ID attribute.

Explicitly Overriding Output Format

When Web API automatically converts output using content negotiation, it does so by matching Accept header media types to the GlobalConfiguration.Configuration.Formatters and the SupportedMediaTypes of each individual formatter. You can add and remove formatters to globally affect what formats are available, and it's easy to create and plug in custom formatters. The example project includes a JSONP formatter that can be plugged in to provide JSONP support for requests that have a callback= querystring parameter. Adding, removing or replacing formatters is a global option you can use to manipulate content. It's beyond the scope of this introduction to show how it works, but you can review the sample code or check out my blog entry on the subject (http://goo.gl/UAzaR).

If automatic processing is not desirable in a particular Controller method, you can override the response output explicitly by returning an HttpResponseMessage instance. HttpResponseMessage is similar to ActionResult in ASP.NET MVC in that it's a common way to return an abstract result message that contains content. HttpResponseMessage is parsed by the Web API framework using standard interfaces to retrieve the response data, status code, headers and so on. Web API turns every response, including those Controller methods that return static results, into HttpResponseMessage instances. Explicitly returning an HttpResponseMessage instance gives you full control over the output and lets you bypass Web API's post-processing of the HTTP response on your behalf.

HttpResponseMessage allows you to customize the response in great detail. Web API's attention to detail in the HTTP spec really shows; many HTTP options are exposed as properties and enumerations with detailed IntelliSense comments. Even if you're new to building REST-based interfaces, the API guides you in the right direction for returning valid responses and response codes.

For example, assume that I always want to return JSON from the GetAlbums() controller method and ignore the default media type content negotiation. To do this, I can adjust the output format and headers as shown in Listing 4.

Listing 4: Returning an HttpResponseMessage for more control over HTTP output

public HttpResponseMessage GetAlbums()
{
    var albums = AlbumData.Current.OrderBy(alb => alb.Artist);

    var resp = new HttpResponseMessage(HttpStatusCode.OK);
    resp.Content = new ObjectContent<IEnumerable<Album>>(
                           albums, new JsonMediaTypeFormatter());

    resp.Headers.ConnectionClose = true;
    resp.Headers.CacheControl = new CacheControlHeaderValue();
    resp.Headers.CacheControl.Public = true;

    return resp;
}

This example returns the same IEnumerable<Album> value, but it wraps the response into an HttpResponseMessage so you can control the entire HTTP message result, including the headers, formatter and status code. In Listing 4, I explicitly specify the formatter using the JsonMediaTypeFormatter to always force the content to JSON.

If you prefer to use the default content negotiation with HttpResponseMessage results, you can create the Response instance using the Request.CreateResponse method:

var resp = Request.CreateResponse<IEnumerable<Album>>(
                       HttpStatusCode.OK, albums);

This hooks up the appropriate formatter from the active Request based on Content Negotiation.
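For completeness, here's a sketch of the global option mentioned above: dropping the XML formatter so that JSON is effectively the only built-in output format. This is only a sketch; the formatter collection details may differ between pre-release builds.

// Run once at startup, e.g. in Application_Start.
// Requires using System.Linq and System.Net.Http.Formatting.
var formatters = GlobalConfiguration.Configuration.Formatters;
var xmlFormatter = formatters.OfType<XmlMediaTypeFormatter>().FirstOrDefault();
if (xmlFormatter != null)
    formatters.Remove(xmlFormatter);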

If you want complete control over your HTTP output and the formatter used, you can return an HttpResponseMessage result rather than raw .NET values.

Non-Serialized Results

The output returned doesn't have to be a serialized value but can also be raw data, like strings, binary data or streams. You can use the HttpResponseMessage.Content object to set a number of common Content classes. Listing 5 shows how to return a binary image using the ByteArrayContent class from a Controller method.

Listing 5: Using HttpResponseMessage to return non-serialized content

[HttpGet]
public HttpResponseMessage AlbumArt(string title)
{
    var album = AlbumData.Current
                         .Where(alb => alb.AlbumName.StartsWith(title))
                         .FirstOrDefault();

    if (album == null)
    {
        return Request.CreateResponse<ApiMessageError>(
                   HttpStatusCode.NotFound,
                   new ApiMessageError("Album not found"));
    }

    // demo silliness downloading from Amazon
    var http = new WebClient();
    var imageData = http.DownloadData(album.AlbumImageUrl);

    // create response and return
    var result = new HttpResponseMessage(HttpStatusCode.OK);
    result.Content = new ByteArrayContent(imageData);
    result.Content.Headers.ContentType =
        new MediaTypeHeaderValue("image/jpeg");

    return result;
}

The image retrieval from Amazon is contrived, but it shows how to return binary data using ByteArrayContent. It also demonstrates that you can easily return multiple types of content from a single controller method, which is actually quite common. If an error occurs, such as a resource that can't be found or a validation error, you can return an error response to the client that's very specific to the error. In AlbumArt(), if the album can't be found, we want to return a 404 Not Found status (and realistically no error, as it's an image). Note that if you are not using HTTP Verb-based routing or not accessing a method that starts with Get/Post etc., you have to specify one or more HTTP Verb attributes on the method explicitly. Here, I used the [HttpGet] attribute to serve the image. Another option to handle the error could be to return a fixed placeholder image if no album can be matched or the album doesn't have an image.

When returning an error code, you can also return a strongly typed response to the client. For example, you can set the 404 status code and also return a custom error object (ApiMessageError is a class I defined) like this:

return Request.CreateResponse<ApiMessageError>(
           HttpStatusCode.NotFound,
           new ApiMessageError("Album not found")
);

If the album can be found, the image will be returned. The image is downloaded into a byte[] array, and then assigned to the result's Content property. I created a new ByteArrayContent instance and assigned the image's bytes and the content type so that it displays properly in the browser. There are other xxxContent() objects available: StringContent, StreamContent, ByteArrayContent, MultipartContent, and ObjectContent are at your disposal to return just about any kind of content. You can create your own Content classes if you frequently return custom types and handle the default formatter assignments that should be used to send the data out.

Although HttpResponseMessage results require more code than returning a plain .NET value from a method, it allows much more control over the actual HTTP processing than automatic processing. It also makes it much easier to test your controller methods, as you get a response object that you can check for specific status codes and output messages rather than just a result value.

Routing Again
Ok, let's get back to the image example: In order to return my album art image I'd like to use a URL like this:

http://localhost/aspnetWebApi/albums/Dirty%20Deeds/image

In order for this URL to work, I have to create a new Controller because my earlier routes pointed to the AlbumApiController using HTTP Verb routing. HTTP Verb-based routing is great for representing a single set of resources such as albums. You can map operations like add, delete, update and read easily using HTTP Verbs. However, because of the way that Web API routes to methods based on a name prefix (such as Getxxx() methods) or an HTTP Verb attribute, it's easy to use up these HTTP Verbs and end up with overlapping method signatures that result in route conflicts. In fact, I was unable to make the above URL work with any combination of HTTP Verb plus Custom routing using a single controller. There are a number of ways around this, but all involve additional controllers. I think it's easier to use explicit Action routing and then add custom routes if you need simpler URLs.

So in order to accommodate some of the other examples, I created another controller, AlbumRpcApiController, to handle all requests that are explicitly routed via actions (/albums/rpc/AlbumArt) or are custom routed with explicit routes defined in the HttpConfiguration. I added the AlbumArt() method to this new AlbumRpcApiController class.

For the image URL to work, you need a custom route placed before the default route from Listing 1.
RouteTable.Routes.MapHttpRoute(
    name: "AlbumApiActionImage",
    routeTemplate: "albums/{title}/image",
    defaults: new
    {
        title = RouteParameter.Optional,
        controller = "AlbumRpcApi",
        action = "AlbumArt"
    }
);

Now I can use either of the following URLs to access the image:

Custom route: (/albums/{title}/image)
http://localhost/aspnetWebApi/albums/PowerAge/image

Action route: (/albums/rpc/{action}/{title})
http://localhost/aspnetWebAPI/albums/rpc/albumart/PowerAge

SPONSORED SIDEBAR: Quick, Easy, Versatile Software with SOA

The basic idea behind Service Oriented Architecture (SOA) is simple: Create the functional part of your application independent from the user interface layer and use a standardized way to access the functional parts so they are accessible from any device on any platform, allowing for quick creation of new user interfaces and easier maintenance of existing ones. SOA is not tied to a specific technology. In fact, SOA's great advantage is that it spans languages and platforms and allows for the creation of clients in a variety of technologies. A great and productive way to create the SOA layer is with .NET and in particular, WCF (Windows Communication Foundation). CODE Consulting provides a wide range of services around SOA and WCF, ranging from training and mentoring, all the way to creation and implementation of the architecture. Markus Egger, Microsoft RD and MVP, will be teaching an increasingly popular class, A Day of SOA, on June 4, 2012, for only $399! Sign up at www.codemag.com/training. Find out more about the offerings from CODE Consulting at http://www.codemag.com/Consulting or send an e-mail to info@codemag.com for a free hour of consulting on SOA!

Sending Data to the Server

To send data to the server and add a new album, you can use an HTTP POST operation. Since I'm using HTTP Verb-based routing in the original AlbumApiController, I can implement a method called PostAlbum() to accept a new album from the client. Listing 6 shows the Web API code to add a new album.

The PostAlbum() method receives an album parameter, which is automatically deserialized from the POST buffer that the client sent. The data passed from the client can be either XML or JSON. Web API automatically figures out what format it needs to deserialize based on the content type and binds the content to the album object. Web API uses model binding to bind the request content to the parameter(s) of controller methods.

Like MVC, you can check the model by looking at ModelState.IsValid. If it's not valid, you can run through the ModelState.Values and check each binding for errors. When a binding error occurs, you'll want to return an HTTP error response, and it's best to do that with an HttpResponseMessage result. In Listing 6, I used the custom error class that holds a message and an array of detailed error messages for each binding error. I used this object as the content to return to the client along with my Conflict HTTP Status Code response.

If binding succeeds, the example returns a string with the name and date entered to demonstrate that you captured the data. Normally, a method like this should return a Boolean or no response at all (HttpStatusCode.NoContent). The sample uses a simple static list to hold albums, so once you've added the album using the Post operation, you can hit the /albums/ URL to see that the new album was added.

The client jQuery code to call the POST operation from the client with jQuery is shown in Listing 7. The code in Listing 7 creates an album object in JavaScript to match the structure of the .NET Album class. This object is passed to the $.ajax() function to send to the server as a POST. The data is turned into JSON and the content type set to application/json so that the server knows what to convert when deserializing into the Album instance.

Listing 6: Adding or updating an Album using a POST operation

public HttpResponseMessage PostAlbum(Album album)
{
    if (!this.ModelState.IsValid)
    {
        // my custom error class
        var error = new ApiMessageError() { message = "Model is invalid" };

        foreach (var prop in ModelState.Values)
        {
            if (prop.Errors.Any())
                error.errors.Add(prop.Errors.First().ErrorMessage);
        }

        // Return the error object as a response with an error code
        return Request.CreateResponse<ApiMessageError>(
                   HttpStatusCode.Conflict, error);
    }

    foreach (var song in album.Songs)
        song.AlbumId = album.Id;

    var matchedAlbum = AlbumData.Current
        .SingleOrDefault(alb => alb.Id == album.Id ||
                                alb.AlbumName == album.AlbumName);

    if (matchedAlbum == null)
        AlbumData.Current.Add(album);
    else
        matchedAlbum = album;

    // return a string to show that the value got here
    var resp = Request.CreateResponse(HttpStatusCode.OK);
    resp.Content = new StringContent(album.AlbumName + " " +
                       album.Entered.ToString(), Encoding.UTF8, "text/plain");
    return resp;
}
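As an aside, the same POST can also be issued from .NET client code using the HttpClient class that ships with the Web API bits (System.Net.Http). This is only a sketch; the base URL and the JSON payload are assumptions for illustration:

using (var client = new System.Net.Http.HttpClient())
{
    client.BaseAddress = new Uri("http://localhost/aspnetWebApi/");

    // Hand-built JSON payload matching the Album shape, for illustration only
    string json = "{\"Id\":\"10\",\"AlbumName\":\"Powerage\",\"Artist\":\"AC/DC\"}";
    var content = new System.Net.Http.StringContent(
        json, System.Text.Encoding.UTF8, "application/json");

    var response = client.PostAsync("albums/", content).Result;
    Console.WriteLine((int)response.StatusCode);
}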


Listing 7: Using jQuery to POST an album to the server


var id = new Date().getTime().toString();
this.album = {
    Id: id,
    AlbumName: "Power Age",
    Artist: "AC/DC",
    YearReleased: 1977,
    Entered: "2002-03-11T18:24:43.5580794-10:00",
    AlbumImageUrl: "http://...",
    AmazonUrl: "http://...",
    Songs: [
        { SongName: "Rock n Roll Damnation" },
        { SongName: "Downpayment Blues" },
        { SongName: "Riff Raff" }
    ]
};

$("#btnSendAlbum").click(function () {
    $.ajax({
        url: "albums/",
        type: "POST",
        contentType: "application/json",
        data: JSON.stringify(album),
        processData: false,
        beforeSend: function (xhr) {
            // explicitly request JSON
            xhr.setRequestHeader("Accept", "application/json");
        },
        success: function (result) {
            alert(result);
        },
        error: function (xhr, status, p3, p4) {
            alert(JSON.parse(xhr.responseText).message);
        }
    });
});

The jQuery code hooks up success and failure events. Success returns the result data, which is a string that's echoed back with an alert box. If an error occurs, jQuery returns the XHR instance and status code. You can check the XHR to see if a JSON object is embedded and if it is, you can extract it by deserializing it and accessing the .message property.

REST standards suggest that updates to existing resources should use PUT operations. REST standards aside, I'm not a big fan of separating out inserts and updates, so I tend to have a single method that handles both. But if you want to follow REST suggestions, you can create a PUT method that handles updates by forwarding the PUT operation to the POST method:
public HttpResponseMessage PutAlbum(Album album)
{
    return PostAlbum(album);
}

$(".removeimage").live("click", function () { var $el = $(this).parent(".album"); var txt = $el.nd("a").text(); $.ajax({ url: "albums/" + encodeURIComponent(txt), type: "DELETE", success: function (result) { $el.fadeOut(function() { $el.remove(); }); }, error: jqError }); });

Note the use of the DELETE verb in the $.ajax() call, which routes to DeleteAlbum on the server. DELETE is a non-content operation, so you supply a resource ID (the title) via route value or the querystring.

Routing Conicts
In all requests with the exception of the AlbumArt example, I used HTTP Verb routing that I set up in Listing 1. HTTP Verb Routing is a recommendation that is in line with typical REST access to HTTP resources. However, it takes quite a bit of effort to create REST-compliant API implementations based only on HTTP Verb routing. You saw one example that didnt really t the return of an image where I created a custom route albums/{title}. image that required creation of a second controller to work. HTTP Verb routing to a controller does not mix with custom or action routing to the same controller because of the limited mapping of HTTP Verbs imposed by HTTP Verb routing. To understand some of the problems with verb routing, lets look at another example. Lets say you create a GetSortableAlbums() method like this and add it to the original AlbumApiController accessed via HTTP Verb routing:
[Queryable] public IQueryable<Album> GetSortableAlbums() { var albums = Albums.OrderBy(alb => alb.Artist);

To make the corresponding $.ajax() call, all you have to change from Listing 7 is the type: from POST to PUT. To round out the server code, heres the DELETE verb controller method:
public HttpResponseMessage DeleteAlbum(string title) { var matched = Albums.Where(alb => alb.AlbumName == title) .SingleOrDefault(); if (matchedAlbum == null) return new HttpResponseMessage(HttpStatusCode.NotFound); Albums.Remove(matchedAlbum); return new HttpResponseMessage(HttpStatusCode.NoContent); }

To call this action method using jQuery, you can use:

44

An Introduction to ASP.NET Web API

codemag.com

return albums.AsQueryable(); }

If you compile this code and try to now access the /albums/ link, you get an error: Multiple Actions were found that match the request. HTTP Verb routing only allows access to one GET operation per parameter/route value match. If more than one method exists with the same signature, it doesn't work. As before, the solution to get this method to work is to throw it into another controller. Because I set up the AlbumRpcApiController, I can add the method there. First, I should rename the method to SortableAlbums() so I'm not using a Get prefix for the method. This also makes the action parameter look cleaner in the URL; it looks less like a method and more like a noun.
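For reference, here's a hedged sketch of what the renamed method could look like on AlbumRpcApiController (this exact code isn't shown in the article):

// No Get prefix, so verb routing won't claim it; the [HttpGet] attribute and
// the {action} route defined below make it reachable as a GET endpoint.
[HttpGet, Queryable]
public IQueryable<Album> SortableAlbums()
{
    return AlbumData.Current.OrderBy(alb => alb.Artist).AsQueryable();
}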

HTTP Verb Routing adds a whole new level of complexity when you're trying to shoehorn functionality into the handful of available HTTP Verbs. Think carefully if that's the route you want to take.

I can then create a new route that handles direct-action mapping:

RouteTable.Routes.MapHttpRoute(
    name: "AlbumApiAction",
    routeTemplate: "albums/rpc/{action}/{title}",
    defaults: new
    {
        title = RouteParameter.Optional,
        controller = "AlbumRpcApi"
    }
);

As I am explicitly adding a route segment, rpc, into the route template, I can now reference explicit methods in the Web API controller using URLs like this:

http://localhost/AspNetWebApi/albums/rpc/SortableAlbums

IQueryable<T> Results

Did you notice that the last example returned IQueryable<Album> as a result? Web API serializes the IQueryable<T> interface just fine as an array, but in addition, it also allows for using OData-style URI conventions (http://goo.gl/9nO3d) in the query string to filter the result if you specify a [Queryable] attribute on the method. You can sort and filter and limit the selection using OData commands that should be familiar from LINQ usage. For example:

http://localhost/AspNetWebApi/albums/rpc/SortableAlbums?$orderby=Artist&$top=2&$skip=1

Even though you get OData-style querying support, the output generated uses Web API's standard output generation logic, so you can create JSON or XML, depending on content negotiation or your explicit output mapping.

Although OData filtering is an interesting feature that gives the client a lot of control over certain operations (like skip and take and possibly sorting, which can be nice for grid displays), I'm not sure if that sort of logic really belongs in client code. More likely, you should expose methods in the API that natively include filtering parameters rather than using a direct querying mechanism like OData. Undoubtedly, some will find this approach appealing for quick and dirty operations where the client drives behavior.

HTTP Testing with Fiddler

If you're building REST or AJAX applications, it's very useful to test HTTP responses. With Web API, you get to set many options in the HTTP request, like Accept headers, so it's good to have a tool to test with. My personal favorite HTTP client testing tool is Fiddler (www.fiddlertool.com) from Eric Lawrence. I also like its Request Composer tool. You can capture requests, drag them onto the composer and then modify the HTTP headers and resend the request with the changed data. You can also save requests and play them back later, which is very useful for testing and debugging. If you haven't used Fiddler, it's a must-have tool; go get it and watch a couple of the tutorials on Eric's site to get started quickly.

Error Handling

I've already done some minimal error handling in the examples. For example, in Listing 6, I detected some known-error scenarios like model validation failing or a resource not being found and returned an appropriate HttpResponseMessage result. But what happens if your code just blows up or causes an exception? If you have a controller method, like this:

public void ThrowError()
{
    throw new InvalidOperationException("Your code!");
}

You can call it with this:

http://localhost/AspNetWebApi/albums/rpc/ThrowError

The default exception handling displays a 500-status response with the serialized exception on the local computer only. When you connect from a remote computer, Web API throws back an IIS 500 error with no data returned (IIS then adds its HTML error page). The behavior is configurable in the GlobalConfiguration:

GlobalConfiguration
    .Configuration
    .IncludeErrorDetailPolicy = IncludeErrorDetailPolicy.Never;

If you want more control over your error responses sent from code, you can throw explicit error responses yourself using HttpResponseException. When you throw an HttpResponseException, the response parameter is used to generate the output for the Controller action.

[HttpGet]
public void ThrowError()
{
    var resp = Request.CreateResponse<ApiMessageError>(
                   HttpStatusCode.BadRequest,
                   new ApiMessageError("Your code stinks!"));
    throw new HttpResponseException(resp);
}

Throwing an HttpResponseException stops the processing of the controller method and immediately returns the response you passed to the exception. Unlike other Exceptions fired inside of Web API, HttpResponseException bypasses the Exception Filters installed and instead just outputs the response you provide.


In this case, the serialized ApiMessageError result string is returned in the default serialization format, XML or JSON. You can pass any content to HttpResponseMessage, which includes creating your own exception objects and consistently returning error messages to the client. Here's a small helper method on the controller that you might use to send exception info back to the client consistently:

private void ThrowSafeException(string message,
    HttpStatusCode statusCode = HttpStatusCode.BadRequest)
{
    var errResponse = Request.CreateResponse<ApiMessageError>(
                          statusCode,
                          new ApiMessageError() { message = message });

    throw new HttpResponseException(errResponse);
}

You can then use it to output any captured errors from code:

public void ThrowError()
{
    try
    {
        List<string> list = null;
        list.Add("Rick");
    }
    catch(Exception ex)
    {
        ThrowSafeException(ex.Message);
    }
}

Another more global solution is to create an Exception Filter. Filters in Web API provide the ability to pre- and post-process controller method operations. An exception filter looks at all exceptions fired and then optionally creates an HttpResponseMessage result. Listing 8 shows an example of a basic Exception filter implementation.

Listing 8: Implementing an ExceptionFilter to automatically turn exceptions into object result messages

public class UnhandledExceptionFilter : ExceptionFilterAttribute
{
    public override void OnException(HttpActionExecutedContext context)
    {
        HttpStatusCode status = HttpStatusCode.BadRequest;

        var exType = context.Exception.GetType();

        if (exType == typeof(UnauthorizedAccessException))
            status = HttpStatusCode.Unauthorized;
        else if (exType == typeof(ArgumentException))
            status = HttpStatusCode.NotFound;

        var apiError = new ApiMessageError()
                           { message = context.Exception.Message };

        // create a new response and attach our ApiError object
        // which now gets returned on ANY exception result
        context.Response = context.Request
            .CreateResponse<ApiMessageError>(status, apiError);
    }
}

Filters can be assigned to individual controller methods like this:

[UnhandledExceptionFilter]
public void ThrowError()

Or you can register a filter globally in the HTTP Configuration:

GlobalConfiguration.Configuration.Filters.Add(
    new UnhandledExceptionFilter());

The latter is a great way to get global error trapping so that all errors (short of hard IIS errors and explicit HttpResponseException errors) return a valid error response that includes error information in the form of a known-error object. Using a filter like this allows you to throw an exception as you normally would and have your filter create the right response. You can even create some custom exceptions so you can differentiate your own exceptions from unhandled system exceptions.

This is just one example of how ASP.NET Web API is configurable and extensible. Exception filters are just one example of how you can plug into the Web API request flow to modify output. Many more hooks exist and I'll take a closer look at extensibility in a future article.

Web API combines the best of previous Microsoft REST and AJAX tools into a single framework that's highly functional, easy to work with, and extensible to boot!

Summary

Web API is a big improvement over previous Microsoft REST and AJAX toolkits. The key features to its usefulness are its ease of use with simple controller-based logic, familiar MVC-style routing, low configuration impact, extensibility at all levels and tight attention to exposing and making HTTP semantics easily accessible. Although none of the concepts used in Web API are new or radical, Web API combines the best of previous platforms into a single framework that's highly functional, easy to work with, and extensible to boot. I think that Microsoft has hit a home run with Web API.

Web API is currently in beta and getting close to a release candidate. It's slated to ship later this year, around the same time as Visual Studio 11 Beta and .NET 4.5. In the meantime, you can start using Web API today in its beta form with its Go Live license, or with the current code from aspnetwebstack.codeplex.com, if you're willing to keep up with the frequent changes.

Rick Strahl


ONLINE QUICK ID 1206091

Grokking the DLR: Why it's Not Just for Dynamic Languages
Many .NET developers have heard of the Dynamic Language Runtime (DLR) but they don't quite know what to make of it. Developers working in languages like C# and Visual Basic sometimes shirk dynamic programming languages because they fear the scalability problems that have historically been associated with using them. Also of concern is the fact that languages like Python and Ruby don't perform compile-time type checking, which can lead to runtime errors that are very costly to find and fix. These are valid concerns that may explain why the DLR hasn't enjoyed more popularity among mainstream .NET developers in the two years since its official release. After all, any .NET Runtime that has the words Dynamic and Language in its title must be strictly for creating and supporting languages like Python, right?

Kevin Hazzard
wkhazzard@gmail.com
Kevin Hazzard is a Microsoft MVP living in Richmond, Virginia. He has been married for twenty-three years and has children ranging in age from college to elementary school. He serves as a Director for CapTech Consulting, a midsized firm of more than three hundred consultants with offices in Richmond, Charlotte, Philadelphia and Washington, D.C., specializing in project management, business intelligence, and software and database development. Kevin is an advisory board member for the Information Systems and Technology program at his local community college, where he also taught C++ and C# as an adjunct professor for more than a decade. He further demonstrates his commitment to public education by serving as an elected member of his local county's K-12 School Board. Kevin is an organizer for several software developer community events including the Richmond Code Camp and the Mid-Atlantic Developer Expo (http://MADExpo.us).

Not so fast. While it's true that the DLR was conceived to support the Iron implementations of the Python and Ruby programming languages on the .NET Framework, the architecture of the DLR provides abstractions that go much deeper than that. Under the covers, the DLR offers a rich set of interfaces for performing runtime Inter-Process Communication (IPC). Over the years, developers have seen many tools from Microsoft for communicating between applications: DDE, DCOM, ActiveX, .NET Remoting, WCF, OData. The list just goes on and on. It's a seemingly unending parade of acronyms, each one representing a technology that has promised to make it easier to share data or to invoke remote code this year than it was using last year's technology. In this article, I'll show you why you may want to consider using the DLR as a communication tool, even if you never intend to use a dynamic programming language in your own application designs.

The Language of Languages


The first time I heard Jim Hugunin speak about the DLR, his talk surprised me. Jim created an implementation of the Python language to run on the Java Virtual Machine (JVM) known as Jython. At the time of the talk, he had recently joined Microsoft to create IronPython for .NET. Based on his background, I expected him to focus on the language, but Jim spent nearly the entire time talking about heady stuff like expression trees, dynamic call dispatch and call caching mechanisms instead. What Jim described was a set of runtime compiler services that would make it possible for any two languages to communicate with one another in a high performance way.

During that talk, I jotted down the term that popped into my mind as I heard Jim retell the architecture of the DLR: the language of languages. Four years later, that moniker still characterizes the DLR pretty well. With some real-world DLR experience under my belt, however, I've come to realize that the DLR isn't just for language interoperability. With dynamic type support now baked into C# and Visual Basic, the DLR has become a gateway from our favorite .NET languages to the data and code in any remote system, no matter what kind of hardware or software it may use.

To understand the idea of the DLR as a language-integrated IPC mechanism, let's begin with an example that has nothing to do with dynamic programming languages at all. Imagine two computing systems: one called the initiator and the other called the target. The initiator needs to invoke a function named foo on the target, passing some number of parameters and retrieving the results. After locating the target system, the initiator must bundle all of the necessary call information together in a format that can be understood by the target. At a minimum, this includes the name of the function and the parameters to be passed. The initiator then sends the request to the target. After unpacking the request and validating the parameters, the target may execute the foo function. Then the target system must package up the results, including any exceptions that may have occurred, and send them back to the initiator. Lastly, the initiator must unpack the results and respond appropriately. This request-response pattern is common, describing at a high level how almost every call-based IPC mechanism works.

The DynamicMetaObject Class


To understand how the architecture of the DLR fits this pattern, let's explore one of the DLR's central classes called the DynamicMetaObject. We'll begin by examining three of the twelve core methods in that type:

1. BindCreateInstance - create or activate an object
2. BindInvokeMember - call an encapsulated method
3. BindInvoke - execute the object (as a function)

When you need to call a method in a remote system, the first thing to do is create an instance of the type. Of course, not all systems are object-oriented, so the term instance may be metaphorical. In fact, the target service may be implemented as an object pool or as a singleton, so the terms activation or connection might apply as well as instantiation.
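To ground those three methods in a concrete (if purely hypothetical) bit of C#, consider the following sketch; RemoteProxy is an invented type that stands in for anything implementing IDynamicMetaObjectProvider and forwarding calls to a target system:

// Hypothetical illustration only: RemoteProxy is not a real DLR type.
dynamic target = new RemoteProxy("initiator-to-target-channel");

// Because target is dynamic, the compiler emits a DLR member-invocation
// binding for this call; the proxy's metaobject can translate it into a
// request for the remote "foo" function and unpack the response.
dynamic result = target.foo(42, "hello");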

Figure 1: The initiator invokes the foo function on the target.


Other runtime frameworks follow this same pattern. The Component Object Model (COM) provides the CoCreateInstance function for creating objects. With .NET Remoting, you might use the CreateInstance method of the System.Activator class. The DLR's DynamicMetaObject provides BindCreateInstance for a similar purpose.

After using the DLR's BindCreateInstance, the created thing you have in hand may be of a type that supports multiple methods. The metaobject's BindInvokeMember method is used to bind an operation that can invoke the function. In the graphical example from above, the string foo would be passed as a parameter to let the binder know that the member method by that name should be called. Also included with the parameter are useful bits of information like the argument count, argument names and a flag that says whether or not the binder should ignore case when trying to find the named member. After all, some languages are picky about the case of their symbols and some are not.

When the thing returned from BindCreateInstance is just a single function (or delegate) however, the metaobject's BindInvoke method is used instead. To make this clear, consider the following small bit of dynamic C# code:
delegate void IntWriter(int n);

void Main()
{
    dynamic Write = new IntWriter(Console.WriteLine);
    Write(5);
}

This code isn't the optimal way to write the number 5 to the console. A good developer would never do something so wasteful. However, this code illustrates the use of a dynamic variable that is a delegate, which can be called like a function. If the delegate type were derived to implement a DLR interface named IDynamicMetaObjectProvider, the BindInvoke method of the DynamicMetaObject that it returns would be called to attach an operation to do the work. This is because the C# compiler recognizes that the dynamic object called Write is being used syntactically like a function.

Now look at another bit of dynamic C# code to understand when BindInvokeMember might be emitted by the compiler instead:

class Writer : IDynamicMetaObjectProvider
{
    public void Write(int n)
    {
        Console.WriteLine(n);
    }

    // interface implementation omitted
}

void Main()
{
    dynamic Writer = new Writer();
    Writer.Write(7);
}

I've omitted the implementation of the interface in this small example because it would take lots of code to show you how to do that correctly. In a following section, however, we'll take a shortcut and implement a dynamic metaobject with just a few lines of code.

The important thing for you to understand at this point is that the C# compiler recognizes the statement Writer.Write(7) as a member access operation. What we often call the dot operator in C# is formally called the member access operator. The DLR code generated by the compiler in this case would ultimately call BindInvokeMember, passing the string Write and the integer argument 7 to an operation that can perform the invocation. In short, BindInvoke is used to call a dynamic object that is a function, while BindInvokeMember is used to call a method that is a member of a dynamic object.

What's with the Bind Prefixes?

You'll notice that the methods in the DynamicMetaObject begin with the term Bind. This is because they don't actually do the work. For example, the method called BindCreateInstance doesn't create instances. Instead, it binds an operation at runtime that creates objects. This may seem strange but as you study the DLR, you'll appreciate the indirection that binding affords you.

Access to Properties via DynamicMetaObject

It's clear to see in these two small examples that the C# compiler is using its language syntax to deduce which DLR binding operations should be performed. If you were using Visual Basic to access dynamic objects, the semantics of that language would be used instead. The member access (dot) operator doesn't just access methods, of course. You can access properties in C# using that same operator. The DLR metaobject provides three more useful methods to access properties on dynamic objects:

4. BindGetMember - get a property value
5. BindSetMember - set a property value
6. BindDeleteMember - delete a member

The purpose of BindGetMember and BindSetMember might be obvious, especially since you know that they pertain to the way that .NET uses the properties of a class. When the compiler evaluates get (or read) operations on a property of a dynamic object, it emits a call to BindGetMember. When the compiler evaluates set (or write) operations on a property, it makes sure that BindSetMember is emitted instead.

Metaobject

The term metaobject is not unique to the DLR. The prefix meta comes from Greek, where it simply means beside or after. Therefore, metadata is data that is beside the real data, representing it or providing access to it. The DLR's DynamicMetaObject accurately reflects this idea with respect to objects. A DLR metaobject operates alongside a real object, assisting with invocation or control.
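To make the property binding concrete, consider this small sketch; GetRemoteCustomer() is hypothetical and stands in for any factory that returns a dynamic object backed by a DLR metaobject:

// Hypothetical example: the returned object's type is assumed to implement
// IDynamicMetaObjectProvider.
dynamic customer = GetRemoteCustomer();

customer.Balance = 17.34;          // property write -> a SetMember binding
double balance = customer.Balance; // property read -> a GetMember binding,
                                   // then a Convert binding for the double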

Treating an Object as an Array

Some classes behave as containers for instances of other types. So, the DLR metaobject has methods to handle these cases. Each of the array-oriented metaobject methods ends with the term "Index":

7. BindGetIndex - get the value at a specific index
8. BindSetIndex - set the value at a specific index
9. BindDeleteIndex - delete the value at a specific index

To understand how BindGetIndex and BindSetIndex are used, imagine a DLR-enabled wrapper class called JavaBridge that can load Java class files and expose them to .NET code without a lot of ceremony. Such a class might be used to load a Java class file called Customer.class that contains some Object-Relational Mapping (ORM) code. A DLR metaobject can be created to invoke that ORM code from .NET in a very natural way. Here's an example in C# that shows how the JavaBridge might work in practice:

1: JavaBridge java = new JavaBridge();
2: dynamic customers = java.Load("Customer.class");
3: dynamic Jason = customers["Bock"];
4: Jason.Balance = 17.34;
5: customers["Wagner"] = new Customer("Bill");

Lines 3 and 5 in the listing above will be interpreted by the C# compiler as index accesses because of the index access ([]) operator. Behind the scenes, the custom DLR metaobject for the types exposed by the JavaBridge will then receive calls to their BindGetIndex and BindSetIndex methods, respectively, to pass calls to a waiting JVM via Java Remote Method Invocation (RMI). In this scenario, the DLR helps us to bridge the gap between C# and another statically-typed language, perhaps making it clearer why I call the DLR the language of languages. Just like the BindDeleteMember method, the BindDeleteIndex method is not intended for use from statically typed languages like C# and Visual Basic. Those languages have no way to express such a concept. However, you can establish a convention for deleting members from a class at runtime to get that kind of functionality if it's valuable to you. For example, setting an index to null, which can be expressed in C# and Visual Basic, could be interpreted by your metaobject to mean the same thing as BindDeleteMember.
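Under such a convention (purely hypothetical, continuing the JavaBridge thought experiment), a delete could be expressed like this:

customers["Bock"] = null;  // the metaobject could treat a null assignment as a deletion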

Conversion and Operations

The last group of DLR metaobject methods concerns the handling of operations and type conversions. These are:

10. BindConvert - convert an object to another type
11. BindBinaryOperation - invoke a binary operator on two supplied operands
12. BindUnaryOperation - invoke a unary operator on one supplied operand

The BindConvert method gets used whenever the compiler determines that it needs to convert a dynamic object to some known type. This happens implicitly when assigning the result of a dynamic invocation to a non-dynamic data type. For example, in the following small C# example, the assignment to the variable y forces a call to BindConvert in the emitted code.

dynamic x = 13;
int y = x + 11;

The BindBinaryOperation and BindUnaryOperation methods are used whenever an operator such as arithmetic addition (+) or increment (++) is encountered. In the example above, the addition of the dynamic variable x to the constant 11 will emit a call to the BindBinaryOperation method. Keep this tiny example in your mind's eye for a moment. We use it in the next section to grok another key DLR class known as the call site.

Deleting a Property?

The BindDeleteMember method of the DLR metaobject may be a bit puzzling if you've never worked with dynamic programming languages before. Dynamic languages like Python and Ruby allow you to add functions and properties to an object or its type on the fly. Of course, you can delete them, too. Since the DLR was designed to support dynamic language implementations, it makes sense for the BindDeleteMember method to be included in the metaobject definition. However, C# and Visual Basic have no syntax to support such a concept so those languages will never emit calls to BindDeleteMember, even if you implement that method in your metaobject.

Dynamic Dispatch via Call Sites

If the extent to which you use and understand the DLR never progressed beyond the dynamic keyword, you would probably never know what a call site was or that it even existed as a type in the .NET Framework. This humble type, formally known as CallSite<T>, exists in the System.Runtime.CompilerServices namespace. It's a powerhouse of metaprogramming goodness, jam packed with all sorts of performance-optimizing techniques that make your dynamic .NET code fast and efficient. I'll cover the performance aspects of the CallSite<T> class at the end of this article. Much of what call sites do in dynamic .NET code concerns runtime code generation and compilation. So, it's significant to note that the CallSite<T> class is implemented in a namespace that contains both of the words Runtime and CompilerServices. If the DLR is the language of languages, then the CallSite<T> class is one of its major grammatical constructs. Let's take a look at the tiny example from the last section one more time to get familiar with call sites and how compilers like C# inject them into our code:

dynamic x = 13;
int y = x + 11;

Figure 2: Flow of BinaryOperation and Convert.

From what you've learned so far, you know that calls to BindBinaryOperation and BindConvert will be emitted by the C# compiler for this bit of code. Rather than showing you the long Microsoft Intermediate Language (MSIL) disassembly of what the compiler produces, I've included Figure 2, a flowchart that describes the compiler's output instead. Remember that the C# compiler uses its own syntax to determine what actions are required on the dynamic type. In the current example, there are two operations to emit: the addition of variable x to an integer (Site2) and the conversion of the result into an integer (Site1). Each of these actions becomes a call site which is stored in a container for the enclosing method. As you can see in the flowchart in Figure 2, the call sites are created in reverse order in the beginning but invoked in the correct order at the end. You can see in the flowchart that the BindConvert and BindBinaryOperation metaobject methods are called just before the Create Call Site 1 and Create Call Site 2 steps, respectively. Yet, the invocation of the bound operations doesn't occur until the very end. Hopefully, the graphic helps you to understand that binding is not the same thing as invoking in the DLR. Moreover, binding happens once per the creation of each call site. The invocations, on the other hand, may occur many times over, reusing the initialized call sites to optimize performance. Before I dive into more of the performance optimizations that the DLR uses to make dynamic code efficient and fast, let's take a look at a simple way to implement the IDynamicMetaObjectProvider contract I mentioned earlier in one of your own classes.

A Simple Example, the Easy Way

At the heart of the DLR, Expression Trees are used to generate the functions attached by the twelve binding methods introduced earlier. While many developers have used Expression Trees indirectly via lambda expressions in Language Integrated Query (LINQ), few have the deep experience necessary to implement the complete IDynamicMetaObjectProvider contract very well. Fortunately, the .NET Framework includes a base class called DynamicObject that does a lot of the work for you. In this section, I'll show you how to build a dynamic, Open Data (OData) Protocol class based on the DLR's DynamicObject type, which contains the following twelve virtual methods:

1. TryCreateInstance
2. TryInvokeMember
3. TryInvoke
4. TryGetMember
5. TrySetMember
6. TryDeleteMember
7. TryGetIndex
8. TrySetIndex
9. TryDeleteIndex
10. TryConvert
11. TryBinaryOperation
12. TryUnaryOperation


Listing 1: Fetching movie data from Netflix


string movieTitle = Uri.EscapeDataString(
    PromptForUserInput("Enter a movie title: "));
string netflixQueryFormat =
    "http://odata.netflix.com/Catalog/" +
    "Titles?$filter=Name%20eq%20'{0}'";
string netflixUrl = String.Format(
    netflixQueryFormat, movieTitle);
DynamicOData netflixMovie = new DynamicOData();
netflixMovie.OnDataReady += OnNetflixMovieReady;
netflixMovie.FetchAsync(netflixUrl);

Do the names of those twelve virtual methods look familiar? They should since you just finished studying the members of the abstract DynamicMetaObject class which includes methods like BindCreateInstance and BindInvoke. The DynamicObject class implements the IDynamicMetaObjectProvider interface, which returns a DynamicMetaObject from its single method. The operations bound to the underlying metaobject implementation simply dispatch their calls to the methods beginning with "Try" in the DynamicObject instance. All you have to do is override the methods like TryGetMember and TrySetMember in a class that derives from DynamicObject and a metaobject working behind the scenes handles all the messy Expression Tree details.
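Before building the OData class, it may help to see the pattern at its smallest. The following sketch (not part of the article's sample code) simply stores whatever members you assign in a dictionary and hands them back on read:

using System.Collections.Generic;
using System.Dynamic;

// A tiny dynamic property bag: every member read or write is routed
// through TryGetMember/TrySetMember and backed by a dictionary.
public class Bag : DynamicObject
{
    readonly Dictionary<string, object> _values =
        new Dictionary<string, object>();

    public override bool TryGetMember(
        GetMemberBinder binder, out object result)
    {
        // Returning false tells the DLR the member could not be resolved.
        return _values.TryGetValue(binder.Name, out result);
    }

    public override bool TrySetMember(
        SetMemberBinder binder, object value)
    {
        _values[binder.Name] = value;
        return true;
    }
}

Used from C#, dynamic bag = new Bag(); bag.Title = "When Harry Met Sally"; stores the value, and reading bag.Title hands it back, with no Title property declared anywhere.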

A Dynamic OData Class


To see how this works in practice, I'll show you the design for a dynamic class that can fetch an arbitrary OData feed and parse it while providing a very native-looking syntax in C# or Visual Basic. The goal is to be able to connect to an OData feed as shown in Listing 1. After getting the title of a movie and formatting the query URL to get data from the Netflix OData service, a DynamicOData class is instantiated. The OnDataReady event is subscribed and the object is instructed to fetch the OData asynchronously. When the data is ready, the OnNetflixMovieReady method shown in Listing 2 will be called. It uses a helper method called Dump that simply writes formatted strings to the console window. After running the code in Listing 2 from the sample application, querying for data about the movie When Harry Met Sally, you'll see output on the console window that looks something like Figure 3. The first thing to notice about the OnNetflixMovieReady method shown in Listing 2 is that the movie parameter is marked as dynamic. This will force the C# compiler to create a site container for the method and store call sites within it. To make this clear, the first line of dynamic access code actually produces three call sites within the container:
movie.BoxArt.SmallUrl.Value

Listing 2: The OnDataReady event handler


static void OnNetflixMovieReady(dynamic movie)
{
    Dump("Netflix movie information");
    Dump("BoxArt SmallUrl = {0}",
        movie.BoxArt.SmallUrl.Value);
    Dump("BluRay is available = {0}",
        movie.BluRay.Available.Value);
    Dump("Updating the availability");
    movie.BluRay.Available.Value = true;
    Dump("BluRay is available = {0}",
        movie.BluRay.Available.Value);
    Dump("Runtime = {0} minutes",
        movie.Runtime.Value / 60);
}

Figure 3: Dynamically accessing Netflix data.

Do you see how the member access (dot) operator is used three times in this C# expression? Each of them will produce a call site within the site container for the OnNetflixMovieReady method. Of course, all of that happens behind the scenes. The C# compiler takes care of all that hard work for you.

Managing Data Inside Your Dynamic Class


The question is: how is it possible that properties like BoxArt and SmallUrl, which are very specific to the Netflix data, are made available to the C# code without any ceremony? The answer to that question is in the implementation of the TryGetMember virtual method which we'll explore in a bit. To understand how TryGetMember works, however, you first need to understand how the DynamicOData class manages its data internally. The portion of the DynamicOData class that initializes the data from the OData feed is shown in Listing 3.


Listing 3: Dynamic data management


public delegate void DataReady(dynamic obj);

public class DynamicOData : DynamicObject, IEnumerable
{
    public event DataReady OnDataReady;

    const string odataNamespace =
        "http://schemas.microsoft.com/ado/" +
        "2007/08/dataservices";
    const string metadataNamespace =
        odataNamespace + "/metadata";

    IEnumerable<XElement> _current = null;

    public void FetchAsync(string queryUrl)
    {
        WebClient client = new WebClient();
        client.DownloadStringCompleted +=
            OnDownloadCompleted;
        client.DownloadStringAsync(
            new Uri(queryUrl));
    }

    private void OnDownloadCompleted(
        object sender,
        DownloadStringCompletedEventArgs e)
    {
        string xml =
            (e != null && e.Error == null) ?
            e.Result : String.Empty;
        if (xml != null && xml.Trim().Length > 0)
        {
            var document = XDocument.Parse(xml);
            XNamespace ns = metadataNamespace;
            _current = document.Descendants(
                ns + "properties");
        }
        if (OnDataReady != null)
            OnDataReady(this);
    }

    // remainder of class omitted here

The DynamicOData class begins by setting up a delegate and an event to handle the OnDataReady event. Then a couple of namespaces are declared: one for data services common to the OData Entity Data Model (EDM) and another for the metadata. When parsing the output of an OData feed, these are necessary for addressing the Atom-encoded XML elements correctly. An IEnumerable<XElement> called _current serves as the storage for the DynamicOData node. The FetchAsync command starts the download of the XML document using a WebClient instance. When the transfer of the XML document is complete, the OnDownloadCompleted method is invoked where the XML text is parsed into an XDocument from which the <properties> elements are collected and stored in the _current enumeration. All of the OData we'll be using from any feed can be found in the <properties> collection. Listing 4 shows a subset of the <properties> collection as XML for one movie in the Netflix OData feed. Lastly, after the XML document has been parsed and queried, the OnDataReady event is fired to let the caller know that the object is ready for use.

Listing 4: Sample OData <properties>


<m:properties>
  <d:Name>When Harry Met Sally</d:Name>
  <d:AverageRating m:type="Edm.Double">
    3.8
  </d:AverageRating>
  <d:ReleaseYear m:type="Edm.Int32">
    1989
  </d:ReleaseYear>
  <d:Runtime m:type="Edm.Int32">
    5760
  </d:Runtime>
  <d:Rating>R</d:Rating>
  <d:BluRay>
    <d:Available m:type="Edm.Boolean">
      false
    </d:Available>
  </d:BluRay>
  <d:BoxArt>
    <d:SmallUrl>
      http://.../small/60000226.jpg
    </d:SmallUrl>
  </d:BoxArt>
</m:properties>

Implementing Member Access


Now that you know that the XML from the OData feed is managed as an enumeration of XElement objects, you're ready to understand how TryGetMember is implemented. Listing 5 shows an abbreviated version of the TryGetMember method to get started. The TryGetMember method shown in Listing 5 takes two parameters: one for the binder and an output parameter called result. If we're successful in locating the member named in the binder.Name property, I'll simply store the value into the result parameter which the DLR will hand back to the calling context. A special pseudo-property called Value is made available at the beginning of the TryGetMember method. I included this pseudo-property because I know that I will want to parse the OData feed with bits of chained property expressions that look like this:
movie.BoxArt.SmallUrl.Value

Because OData can be deeply nested like the sample shown in Listing 4, I want to be able to chain property accesses together fluently until I reach the node that has the value I'm interested in. In the case of the single C# statement above, I know that I want the Value of the SmallUrl property within the BoxArt property of the movie. The code to do that when the Value pseudo-property is encountered returns the _current enumeration's first XElement's Value property as a string, for now. I omitted some code at that point to keep things simple but we'll get back to it in a bit.


Listing 5: TryGetMember (abbreviated)


public override bool TryGetMember(
    GetMemberBinder binder, out object result)
{
    result = null;
    if (binder.Name == "Value")
    {
        XElement element = _current.ElementAt(0);
        result = _current.ElementAt(0).Value;
        // code omitted here for brevity
    }
    else
    {
        var attr = _current.ElementAt(0)
            .Attribute(XName.Get(binder.Name,
                metadataNamespace));
        if (attr != null)
            result = attr.Value;
        else
        {
            var items = _current.Descendants(
                XName.Get(binder.Name,
                    odataNamespace));
            if (items == null || items.Count() == 0)
                return false;
            result = new DynamicOData(items);
        }
    }
    return true;
}

Listing 6: Return OData strongly-typed


XAttribute typeAttr = element.Attribute(
    XName.Get("type", metadataNamespace));
if (typeAttr != null)
{
    string type = typeAttr.Value;
    if (type != null)
    {
        switch (type)
        {
            default:
                break;
            case "Edm.Boolean":
                result = Convert.ToBoolean(result);
                break;
            case "Edm.Int32":
                result = Convert.ToInt32(result);
                break;
        }
    }
}

If TryGetMember doesn't encounter the use of the Value pseudo-property, it processes the request first as if it were trying to obtain the value of an XML attribute, then as a named XML element. This is also special handling because of the nature of XML text. Some data that I want to access may be encoded as an XML attribute. Other data may be encoded as an XML element. For this implementation, I've decided that I don't want to have to use any kind of special syntax or another pseudo-property to get at attribute data. I've chosen by convention to return matching attributes if they exist, then matching elements. Of course, this won't work in every case but it does highlight the fact that when you're designing your own dynamic objects, you're in the driver's seat. In other words, working within the syntax of the host language, you're free to implement any kind of convention that makes sense for the semantics that you're trying to create. Finishing up Listing 5, when the specified binder.Name property doesn't reference a pseudo-property or an attribute, the queried Descendants of the _current XML elements are returned. It's important to note that the enumeration of XElement objects obtained this way isn't returned directly to the caller. I could do that, of course, but I want to be able to chain this result to another one. More importantly, I want my Value pseudo-property and XML attribute handling semantics to apply to the node that's returned. The easiest way to do that is to return the resulting XML nodes wrapped in a new DynamicOData object. A special constructor is provided to handle this case:

protected DynamicOData(
    IEnumerable<XElement> current)
{
    _current = new List<XElement>(current);
}

While the original DynamicOData object was created for fetching XML over the network, this special constructor creates a new one at the selected level of the XML hierarchy. The C# expression movie.BoxArt will return a new DynamicOData object having its _current variable scoped to the <d:BoxArt> node of the XML. Then using the member access (dot) operator on that object, followed by SmallUrl, will return another new DynamicOData object scoped to the <d:SmallUrl> node. Finally, accessing the Value pseudo-property on that last dynamic object stops the chain. To finish up our examination of TryGetMember, I need to address the code that I omitted from Listing 5. To do that, think about this line of code you saw earlier.
Dump("Runtime = {0} minutes", movie.Runtime.Value / 60);

How is it possible for the Value pseudo-property of the Runtime element, which is returned by TryGetMember as a string, to be divisible by the number 60 in C#? The answer is that it's not, of course. C# isn't that kind of dynamic language, at least not yet. To make this kind of code possible in a statically-typed language, we can take advantage of some hints in the OData data. OData's Entity Data Model (EDM) defines a handful of abstract types for things like integers, dates, Boolean values, etc. Some OData elements are marked with an attribute named type to tell you how the text (or nested XML) contained within an element should be interpreted. Listing 6 shows the code removed from Listing 5 that provides this functionality whenever the Value pseudo-property is accessed.
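For example, with those conversions in place and the sample data from Listing 4, integer- and Boolean-typed values come back as real .NET types, so ordinary C# operations on them just work (a small usage sketch inside OnNetflixMovieReady, not taken from the article's sample):

int year = movie.ReleaseYear.Value;            // Edm.Int32 arrives as System.Int32
bool onBluRay = movie.BluRay.Available.Value;  // Edm.Boolean arrives as System.Boolean
Dump("Released in {0}; available on Blu-ray: {1}", year, onBluRay);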


Of course, to keep things short and sweet, I didn't include conversions for all sixteen of OData's abstract types. The full source code for this example does include them, though. The code in Listing 6 looks at a Value node for an attribute named type. If it's found, the value of the attribute is checked against one of the sixteen known EDM data type names. If it matches, a conversion is performed to coerce the value into the expected data type. In this way, expressions like movie.Runtime.Value / 60 work correctly at runtime. With respect to member access, I've spent a lot of time talking about TryGetMember but no time talking about how to modify data dynamically. The Netflix OData feed that I've been working with so far is read-only but other OData feeds are read-write. I won't show the code here but it would be easy enough to add a Save method to the DynamicOData class to handle the update process if you needed that sort of functionality. The question is: how can I make modifications in a DynamicOData instance using the same fluent syntax that I've been using to read data? An overridden TrySetMember like this should do it:
public override bool TrySetMember(
    SetMemberBinder binder, object value)
{
    if (binder.Name == "Value")
    {
        _current.ElementAt(0).Value =
            value.ToString();
        return true;
    }
    return false;
}

With this new method in place, C# code like this becomes possible:

movie.BluRay.Available.Value = true;

Because the object returned for the Available node in that C# statement is a DynamicOData object, its handling of the Value pseudo-property actually writes to the same XDocument in memory that all of the DynamicOData objects reference throughout the call chain. The code in Listing 2 and the sample output shown in Figure 3 make this fairly clear. Go back and look at them. Before changing the BluRay.Available property, the value obtained from the Netflix service was false. After changing it, the value read by a separate DynamicOData object is reported as true. A hypothetical Save method within the DynamicOData would only need to detect these sorts of changes and use the OData protocol to update them on the server.

Accessing the Dynamic Object as an Array

When working with OData, lists of data are very common. To handle them well, you'll want to add some functionality to your dynamic data classes. You may have noticed in Listing 3 that the DynamicOData class implemented the IEnumerable contract. This isn't anything special required to enable dynamic typing. The code to do it is easy enough:

public IEnumerator GetEnumerator()
{
    foreach (var element in _current)
        yield return new DynamicOData(element);
}

Just as I returned each named node in TryGetMember as a new DynamicOData object to make chaining possible, the iterator shown here wraps each XElement in the _current collection as a new DynamicOData object so that all of the nice dynamic language semantics we want to apply to the XML document extend to each node. Here's a bit of test code that uses eBay's OData feed to find the top ten items on their site pertaining to the same movie that we queried Netflix about.

string ebayQueryFormat =
    "http://ebayodata.cloudapp.net/" +
    "Items?search={0}&$top=10";
string ebayUrl = String.Format(
    ebayQueryFormat, movieTitle);
DynamicOData ebayItems = new DynamicOData();
ebayItems.OnDataReady += OnEbayItemsReady;
ebayItems.FetchAsync(ebayUrl);

The Strange Land of the DLR

In 1961, Robert A. Heinlein wrote a most excellent science fiction novel entitled Stranger in a Strange Land. In the book, the Martian word grok is used to describe understanding so profound and intuitive that it leaves no separation between the observer and that which he observes. To grok a thing is to become one with it. The DLR represents something of a strange land for most software developers. However, it can be grokked by studying a few of its central classes. Once you reach that level of empathy with the DLR, your approach to software design might be forever changed, whether you choose to use the DLR or some other form of metaprogramming to make your applications more adaptive and natural-feeling.

I'm using the same DynamicOData type that I used to query Netflix. However, the query is a bit different since eBay provides a search verb to which the search term can be assigned. Listing 7 shows the OnEbayItemsReady method that is called when the data is loaded. The foreach loop shown in Listing 7 takes advantage of the IEnumerable implementation in my DynamicOData class. Inside that loop, since each returned item has been wrapped as a new DynamicOData instance, properties specific to the eBay OData feed like Id, Title and CurrentPrice become resolvable.

Listing 7: Enumerating eBay items

static void OnEbayItemsReady(dynamic ebayItems)
{
    Dump("eBay item information:");
    try
    {
        foreach (var item in ebayItems)
        {
            Dump("ID = {0}, Title = {1}, " +
                "CurrentPrice = {2:C}",
                item.Id.Value,
                item.Title.Value.Substring(0, 20),
                item.CurrentPrice.Value);
        }
    }
    catch (Exception ex)
    {
        Dump("{0}: {1}",
            ex.GetType().Name, ex.Message);
    }
    Dump("Press Enter to continue ...");
}

Of course, if I wanted to ascribe array-like semantics directly to the DynamicOData class, I could do so by overriding TryGetIndex as follows:


public override bool TryGetIndex(
    GetIndexBinder binder,
    object[] indexes, out object result)
{
    int ndx = (int)indexes[0];
    result = new DynamicOData(
        _current.ElementAt(ndx));
    return true;
}
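With that override in place, code inside a handler like OnEbayItemsReady, where ebayItems is typed as dynamic, can index into the object directly (a usage sketch, not from the article's sample code):

dynamic secondItem = ebayItems[1];   // routed through TryGetIndex
Dump("Second hit: {0}", secondItem.Title.Value);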

CallSite Creation Optimization


Dynamic code has the potential of being slow so the DLR uses many performance techniques to speed things up. One of the simplest ideas is checking the existence of call sites as shown in Figure 2. Since the containers for call sites are static (Shared in Visual Basic), call sites only need to be created once, speeding up code that runs repeatedly and avoiding the expense altogether for dynamic code that may never run at all.

This is a very simplistic implementation that assumes that my indexing strategy is purely numerical. Do you see the cast operation to coerce a single integer from the array? However, any sort of indexing I need is possible. The indexes parameter of the TryGetIndex method is an object array, meaning that the C# compiler will pass exactly what's provided by the caller. There may be one index value or a dozen of them. They could be strings or integers or even complex data types. The sky's the limit, as they say, so I'm free to get as creative as I like with the way in which the index parameters are implemented. Hopefully, the DynamicOData class I've shown here opens your eyes to the possibilities available to you when using the DLR. What I've created isn't about dynamic languages per se. It's true that C# and Visual Basic feel more dynamic when using a class that's powered by the IDynamicMetaObjectProvider contract. But C# and Visual Basic are still statically-typed languages under the hood. Deferral of some binding operations until runtime gives them a feeling of being just dynamic enough to make our code more expressive than it's ever been before. To finish up, let's spend a bit of time discussing the performance concerns that arise from the code you've seen here.

Rule and Call Site Caching

The biggest concern that many developers have with dynamic programming languages is performance. The DLR goes to extraordinary measures to address that concern. I've touched briefly on the fact that the CallSite<T> class exists within the namespace called System.Runtime.CompilerServices. Also in that namespace are a number of other classes that perform caching at a variety of levels. Using these types, the DLR implements three major levels of caching to speed up dynamically dispatched operations:

1. A global rule cache
2. A local rule cache
3. A polymorphic delegate cache

The rule caches are used to avoid spending computing resources when binding objects by type for specific call sites. If two objects of type string are passed to a dynamic method and an integer is returned, global and/or local rule caches may be updated to record that pathway to the dynamic code. This can speed up binding in future calls. The delegate cache, which is managed within the call site object itself, is called polymorphic because the delegates stored there can take many shapes depending on code that's encountered at runtime and the rules in the other caches used to generate them. As a runtime compiler service, the delegate cache is also sometimes referred to as inline. The reason for that term is that the expressions generated by the DLR and its binders are assembled into MSIL and Just-In-Time (JIT) compiled, just like any other .NET code. This runtime compilation happens in line with the normal flow and execution of your program. As you can imagine, turning dynamic code into compiled .NET code on the fly can make a massive, positive impact on the performance of the application. With the downloadable source code for this article, I've included a second project called PythonIntegration that interfaces some C# code to IronPython. I won't cover the application here because it's lengthy and would require a lot more space to describe. You'll need to download and install IronPython if you want to experiment with the PythonIntegration application, of course. What you'll discover is the vast difference between the static-to-dynamic language interoperability of the past compared to the high performance options offered by Microsoft's DLR. Some over-the-border operations from C# to Python, measured in tight repetition, are literally 100,000 times faster using the caching mechanisms that you get for free when using the DLR. These same caching tactics are applied when calling from C# to any other CLR-compliant language, too.
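If you want to observe the effect of call-site reuse yourself, a rough measurement along these lines will do (a sketch, not taken from the article's sample code; exact numbers vary by machine):

using System;
using System.Diagnostics;

class CallSiteTiming
{
    static void Main()
    {
        dynamic x = 13;
        int y;

        // The first use of the dynamic expression creates and binds the
        // call sites for the addition and the conversion.
        var sw = Stopwatch.StartNew();
        y = x + 11;
        sw.Stop();
        Console.WriteLine("First call: {0} ticks", sw.ElapsedTicks);

        // Later uses reuse the cached call sites, so the per-call cost
        // drops dramatically.
        sw.Restart();
        for (int i = 0; i < 1000000; i++)
            y = x + 11;
        sw.Stop();
        Console.WriteLine("One million cached calls: {0} ms",
            sw.ElapsedMilliseconds);
        Console.WriteLine("y = {0}", y);
    }
}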

Conclusion
The DLR isn't just about dynamic languages. It opens up a whole world of possibilities for communicating between disparate systems. As .NET's language of languages, the DLR enables the movement of code and data with a kind of fluidity and natural expressiveness that weren't possible beforehand. As you've seen, the language of a data model like OData can be mapped rather generically into the syntax of C# and Visual Basic using the DLR, increasing comprehension dramatically. Other call invocation systems like Java's Remote Method Invocation (RMI) can be mapped directly into our favorite languages as well, breathing life into existing code bases and increasing their overall business value. Because the DLR can shape the code and data of any other system into .NET so gracefully, the possibilities for using it should be limited only by your imagination.

Kevin Hazzard





Building Productive, Powerful, and Reusable WPF (XAML) UIs with the CODE Framework
In a prior installment of this series of articles about CODE Framework (CODE Framework: Writing MVVM/MVC WPF Applications, Jan/Feb 2012), I discussed how to use the WPF features of CODE Framework to create rich client applications in a highly productive and structured fashion reminiscent of creating ASP.NET MVC applications, although
with WPF MVVM concepts applied. In this article, I will dive deeper into the subject and discuss the unique benefits of the CODE Framework WPF components which enable developers to create the part of the UI that is actually visible in a highly productive and reusable manner. Most MVVM frameworks create great structure in setting up the overall infrastructure, but provide little in the way of actual UI development. "And here is where you create a user control that acts as the view" is how the story usually goes, and the developer is completely on her own in doing so. Not so in CODE Framework! Developers and designers alike can use many of the great (yet optional) features of the framework to quickly create great looking and completely stylable UIs. In fact, many of these features can be used even if your overall development framework is something else. You can simply bring these components into other setups as needed.

Markus Egger
megger@eps-software.com

Markus is an international speaker, having presented sessions at numerous conferences in North and South America and Europe. Markus has written many articles for publications including CODE Magazine, Visual Studio Magazine, MSDN Brazil, asp.netPRO, FoxPro Advisor, Fuchs, FoxTalk and Microsoft Office and Database Journal. Markus is the publisher of CODE Magazine. Markus is also the President and Chief Software Architect of EPS Software Corp., a custom software development and consulting firm located in Houston, Texas. He specializes in consulting for object-oriented development, Internet development, B2B, and Web Services. EPS does most of its development using Microsoft Visual Studio (.NET). Markus has also worked as a contractor on the Microsoft Visual Studio team, where he was mostly responsible for object modeling and other object- and component-related technologies.

Getting Started
When you create CODE Framework WPF applications, you can use as little or as much of the UI-specific features as you like. Just like in any other framework, you can create your view as a user control (or similar UI element) in a XAML file with a C# or VB code-behind file. Or, you can go all out and use the CODE Framework View UI element and go cold turkey without even a code-behind file (which has great advantages, as I will discuss as this article goes on). Or, you can simply use some of the convenient little features that might make general WPF development more straightforward and ease into the subject that way. Or you can mix and match any and all of those approaches. (NOTE: You can also create entire custom themes, which is not nearly as hard as it sounds, but that shall perhaps be the topic of a future article.)

Many CODE Framework features can be used individually and in combination with completely different frameworks.
Most WPF/XAML MVVM Frameworks provide great structure for the mechanics of the UI but not for the things you actually see. CODE Framework is different.

Let's start out with a few very simple examples (and for those of you who are looking for the mind-blowing features: bear with me, we are getting there!). Let's say you have a very simple UI, such as one based on a user control, perhaps one that creates a login UI with the option to enter a user name and password. Perhaps for that purpose, you arranged your user control into logical rows and columns using a Grid layout element. Something like the XAML shown in Listing 1 perhaps, which creates the UI shown in Figure 1. For those of you familiar with any of the XAML dialects, this type of UI definition is probably well-known to you. There are quite a few things that bug me about this setup, however, ranging from little annoyances to the fact that with the proper techniques, this same UI can probably be defined in just a handful of lines of code. Let's start out with the little things.

First, let's talk about the definition of the Grid. It is very convenient to arrange UIs using Grid elements. What is not so convenient is that the syntax for the definition of Grid rows and columns tends to be rather verbose. In fact, looking at Listing 1, you will notice that almost half the required code (13 lines in this example) went towards row and column definitions. And sometimes you really want to do fancy things with all the various settings you can put on rows and columns, but in the majority of scenarios I ever see, people only set row heights and column widths. For this reason, CODE Framework provides a more convenient way to define rows and columns in a Grid. To take advantage of this feature, make sure you have a reference to Code.Framework.Wpf.dll in your project (see sidebar for how to get CODE Framework if you do not have it already). Also, define the namespace for CODE Framework WPF controls at the top of your UI like this (all in one line):

xmlns:c="clr-namespace:CODE.Framework.Wpf.Controls;
    assembly=CODE.Framework.Wpf"

NOTE: If you are using a productivity tool such as Resharper, the tool will probably just add this line for you. Also, see the sidebar about XAML namespaces if you are not familiar with this feature.


Listing 1: A plain-WPF definition of a login UI


<UserControl x:Class="UITest.LoginControl"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
  <Grid>
    <Grid.RowDefinitions>
      <RowDefinition Height="Auto" />
      <RowDefinition Height="Auto" />
      <RowDefinition Height="Auto" />
      <RowDefinition Height="Auto" />
      <RowDefinition Height="25" />
      <RowDefinition Height="Auto" />
    </Grid.RowDefinitions>
    <Grid.ColumnDefinitions>
      <ColumnDefinition Width="*" />
      <ColumnDefinition Width="*" />
    </Grid.ColumnDefinitions>
    <Label Grid.ColumnSpan="2">Username:</Label>
    <TextBox Grid.ColumnSpan="2" Grid.Row="1" Width="250"
        Text="{Binding Username}" />
    <Label Grid.ColumnSpan="2" Grid.Row="2">Password:</Label>
    <PasswordBox Grid.ColumnSpan="2" Grid.Row="3" Width="250"/>
    <Button Margin="5" HorizontalAlignment="Right" MinWidth="70"
        Grid.Row="5" Content="Login" />
    <Button Margin="5" HorizontalAlignment="Left" MinWidth="70"
        Grid.Row="5" Grid.Column="1" Content="Cancel" />
  </Grid>
</UserControl>

Now that you have access to CODE Framework features in our UI, you can turn the 13-line Grid definition into the following:
<Grid c:GridEx.RowHeights="Auto,Auto,Auto,Auto,25,Auto"
      c:GridEx.ColumnWidths="*,*">

The result of this is exactly the same as the 13-liner, except it is more convenient. Note that this is not just more convenient for direct declaration of row heights and column widths, but it also makes it much easier to style the control. For instance, you could create a control style with the exact same settings like this:

<Style TargetType="Grid" x:Key="MyStyle">
  <Setter Property="c:GridEx.ColumnWidths" Value="*,*" />
  <Setter Property="c:GridEx.RowHeights"
      Value="Auto,Auto,Auto,Auto,25,Auto" />
</Style>

Figure 1: A simple hand-coded login UI in WPF.

Download Examples for this Article

This article comes with an accompanying sample solution you can use to follow the exact scenario described. You are also encouraged to take a look at other examples related to this article. Take a look at http://codemag.com/framework to see a list of links to resources that may be of interest to you.

This is much easier than it would be without this nifty feature! Note that we didn't even have to use a special control. We still use a standard Grid, but the framework allows us to set RowHeights and ColumnWidths properties by means of attached properties. (If you are not familiar with attached properties, see the sidebar.) You could also use the GridEx element directly instead of a Grid and get a few more features yet, but in many cases, you will simply find yourself using a few attached properties in addition to what you are already doing. This is a pattern you will see quite a few times in CODE Framework. In addition to the elements and controls you are already used to, there often is a control with almost the same name except for an "Ex" suffix that provides extended functionality. You can generally use either the Ex control, or just some attached properties provided by that class.

Another aspect that currently bugs me about this UI definition is the hardcoded width of the text elements. They are set to a width of 250 pixels. This may work fine for the current font settings, but what if someone changed the font, perhaps by changing the application's overall style, or by setting Windows system settings, or anything like that. In that case, you may not like the width of 250 pixels anymore, because the font may either be too large or too small to accommodate as much information as you want. A better way to set the width is to set the width to something that logically maps to about the same amount of text being visible regardless of font. (With proportional fonts, this is always a somewhat inexact science as you are dependent on which exact letters are being typed. But I want this to make some common sense once the UI pops up on the screen.) Using the CODE Framework, I'll show you how to set another attached property. This time, the property is defined on an element called View, which is part of the Code.Framework.Wpf.Mvvm.dll, so make sure you add that to your project references and make it accessible through a new namespace in your UI:

xmlns:m="clr-namespace:CODE.Framework.Wpf.Mvvm;
    assembly=CODE.Framework.Wpf.Mvvm"

Again, this should be just one line without spaces in your code, even though the way this long line displays in the magazine is in two separate lines. With that reference in your project, you can remove the Width setting from the two input controls and replace it with this:
m:View.WidthEx="25"

XAML Namespaces
XAML is a language that defines instantiation of objects and setting properties on those objects. To know which objects/controls are available, XAML defines XML namespaces. By default, a single namespace indicates to XAML that all the standard controls in the WPF or Silverlight or Metro namespaces are available. To use other controls (such as your own or such as third-party controls), you can define additional namespaces that consist of a prefix (such as "mvvm") and then link to a .NET namespace, making all the classes in that namespace available for use in XAML. The classes in that new namespace are simply referred to by namespace plus the class name, as in <mvvm:View />.

Note that the code changes from 250 to 25 as the value. The idea here is to say "I want a width that accommodates about 25 characters using the current font face and size, and assuming an average character width." You can try to experiment with font settings and run your app (sometimes this doesn't show up right away in the designer) to see how the width changes. Note that you could set this attached property on both the textbox and the password-box. In fact, you can set the WidthEx property on any UI element. It is a very generic setting in that sense. That is why it is defined on the View object. We simply needed a generic place to put this sort of setting, and the View object seemed to make the most sense for that.


Attached Properties
All XAML dialects (WPF, Silverlight, ...) allow for a concept called attached properties. This enables developers to create properties on an object and then attach those properties to another object. This is extremely helpful when one object needs to store a value specific to another object and keep track of it. For instance, the Grid class in WPF needs to know which row and column other elements are to be put into, and for that purpose needs to store those values for each child element. This is done by setting the Grid.Column and Grid.Row properties on completely different objects. You can use this same concept for a wide variety of things. Whenever an attached property is set, a method can be triggered that reacts to setting that property, allowing developers to do just about anything they want as a reaction to the property being set. For instance, you could create an arbitrary class called DnD with an attached property called Enabled. If this property is then set on an arbitrary second element (such as <Button DnD.Enabled="True" />) a handler method can be triggered that wires up everything needed to enable drag & drop on the target object, thus having effectively extended the target class with drag and drop capabilities, without that class ever being aware of it. As you can imagine, this provides for an extremely flexible and powerful system the CODE Framework takes extensive advantage of.
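To make the mechanics concrete, a minimal sketch of the hypothetical DnD.Enabled attached property described above could look like this in C# (the actual drag-and-drop wiring is only hinted at):

using System.Windows;

public static class DnD
{
    // The attached property itself, registered with a change handler.
    public static readonly DependencyProperty EnabledProperty =
        DependencyProperty.RegisterAttached(
            "Enabled", typeof(bool), typeof(DnD),
            new PropertyMetadata(false, OnEnabledChanged));

    // Get/Set methods are required so XAML can read and write DnD.Enabled.
    public static bool GetEnabled(DependencyObject obj)
    {
        return (bool)obj.GetValue(EnabledProperty);
    }

    public static void SetEnabled(DependencyObject obj, bool value)
    {
        obj.SetValue(EnabledProperty, value);
    }

    static void OnEnabledChanged(DependencyObject d,
        DependencyPropertyChangedEventArgs e)
    {
        var element = d as UIElement;
        if (element == null) return;

        // This is the hook: react to the property being set on any element
        // and wire up whatever the feature needs. Here we only flip AllowDrop.
        element.AllowDrop = (bool)e.NewValue;
    }
}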

There is one final example of a convenience feature I want to add to the UI. You may notice that the textbox is bound to a value (which presumably will be provided by some sort of view model or whatever else you have set as the data context). This is a very convenient setup and typical for an MVVM application. Note that the password-box, on the other hand, does not have a binding, because the text of a password-box simply isn't bindable in WPF. This is very inconvenient in an MVVM world, where you really want to bind just about anything to the view model. For this reason, we added the ability to bind a password-box by adding (you guessed it) an attached property that can be used like so right within the existing password-box:
c:PasswordBoxEx.Value="{Binding Password}"

There you are, with a fully bindable password-box! You'll find quite a few more of these convenience features in CODE Framework. These features range from simple visual aspects to code readability and productivity features (such as the Grid row and column definitions) to functional aspects (such as a bindable password box) to behavioral features such as allowing controls like ListBoxes and trees to be bound to meaningful WPF commands. The space I have in this article is too short to discuss them all, but I encourage you to explore some of these features on your own. Plus, we are adding new ones all the time! Another aspect of all this that is really important is that so far, we are still dealing with a rather simple user control that has little to do with the CODE Framework, other than us having brought in a few DLLs and then having used some very specific features. You can use those features in any WPF application, regardless of whether CODE Framework is a key part of your setup or not. You can pick and choose not just the DLLs and classes you want to use, but in some cases, you may only want to use a single attached property. The level of choice you have is quite granular, and that is a deliberate design feature of CODE Framework.

Taking Things Further

You've now got your feet wet and have used some very simple features of the framework. You can make your entire login UI screen definition much simpler by taking things to the next step. To do so, change the root element of the UI from UserControl to the View element provided by CODE Framework:

<m:View x:Class="UITest.LoginControl" ...

NOTE: Since this UI has a code-behind file, you also need to go to that file and change the inheritance structure to inherit from View rather than user control, otherwise you'll get a "cannot redefine base class" error.

You can think of a View as a generic and extremely flexible container for UI definitions. By default, you can think of a View as a Grid, although you can change that layout behavior to your liking (which is often done in styles). Since the View itself can act as a Grid, you do not need the Grid definition anymore, so you can remove that completely. In fact, you can remove a whole lot more, including the Grid.Column and Grid.Row as well as the Grid.ColumnSpan settings. The View has the ability to figure that stuff out on its own. To do so, it uses advanced layout styling. (To follow along with this example, remove all such layout information from your UI definition now.)

NOTE: Layout styling is a subject all on its own, and I have, in fact, written an article about layout styling called "Super Productivity: Using WPF and Silverlight's Automatic Layout Features in Business Applications," which appeared in the 2010 Nov/Dec issue of CODE Magazine. It has nothing to do with CODE Framework as such, but explains the concepts CODE Framework uses in general terms.

By default, the View uses a Grid as its layout strategy. Since you removed all layout information from the UI definition, the layout will now look exactly like it would in any WPF Grid: All the controls are piled on top of one another. Not exactly useful or what you want. To create a more useful layout, we can use a different style. One such style that is available in all CODE Framework themes/skins is called CODE.Framework-Layout-SimpleFormLayout. What does this style look like? Well, that depends on which theme you are using. Suppose you choose one of the CODE Framework standard themes, such as Metro or Battleship (Windows 95 look); it will create a vertical stack of controls. Take a look at the UI definition in Listing 2, which as you can see, is significantly simpler than the one in Listing 1. Nevertheless, the result still looks the same as the UI in Figure 1.

NOTE: If you have created your project from scratch or aren't using CODE Framework as your main framework, you need to make sure you add the desired theme DLL and merge in the theme root into your resources. (Using the CODE Framework templates, this is done automatically.) Assuming you want to use the Metro theme, add a reference to Code.Framework.Wpf.Theme.Metro.dll and add the following XAML to your App.xaml file to make sure the resource dictionaries that make up the Metro theme are available to your UI:

<Application.Resources>
  <ResourceDictionary>
    <ResourceDictionary.MergedDictionaries>
      <ResourceDictionary
          Source="pack://application:,,,/CODE.Framework.Wpf.Theme.Metro;component/ThemeRoot.xaml" />
    </ResourceDictionary.MergedDictionaries>
  </ResourceDictionary>
</Application.Resources>

When I first show developers a UI definition like the one in Listing 2, I generally make a point to draw their attention not to what's there but to what isn't there: a complete lack of any layout information. The listing defines only which controls we want and what they are bound to, and perhaps a few other business things such as the rough desired width of a control in a generic fashion. But the fact that you have a label at the top of the form, then a textbox a few pixels below, and so forth, is something that is completely driven by the style. And as such, it is also changeable by means of a style, so you could make it look completely different in the same Windows app, or you could even take this to different platforms such as Metro or Windows Phone, and have the style create an appropriate look for each specific platform.


Listing 2: Defining the same UI as in Listing 1, but using CODE Framework features
<m:View x:Class="UITest.LoginControl"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:c="clr-namespace:CODE.Framework.Wpf.Controls;assembly=CODE.Framework.Wpf"
    xmlns:m="clr-namespace:CODE.Framework.Wpf.Mvvm;assembly=CODE.Framework.Wpf.Mvvm"
    Style="{DynamicResource CODE.Framework-Layout-SimpleFormLayout}">
  <Label>Username:</Label>
  <TextBox m:View.WidthEx="25" Text="{Binding Username}" />
  <Label>Password:</Label>
  <PasswordBox m:View.WidthEx="25"
      c:PasswordBoxEx.Value="{Binding Password}"/>
  <StackPanel HorizontalAlignment="Center"
      VerticalAlignment="Bottom" Orientation="Horizontal">
    <Button Margin="5" MinWidth="70" Content="Login" />
    <Button Margin="5" MinWidth="70" Content="Cancel" />
  </StackPanel>
</m:View>

Note that the UI is not just a simple top-to-bottom stack, but the two buttons at the bottom are supposed to be at the same level horizontally. Since the style doesn't do that automatically, I added a StackPanel that handles these two buttons and let the style align the StackPanel as a whole. This is a fairly common thing to do. You may often have UIs you can have almost laid out entirely by some available style, except for one detail like these buttons. That doesn't mean you can't use the style. You simply use the style for what it does well and handle the rest (often the more interesting things) yourself. Composing UIs out of these different approaches is an important aspect.

So now you might wonder how you would know about the available styles. The simplest answer to that question is to use the CODE Framework Visual Studio Extensions (downloadable through the Extensions Manager in Visual Studio). This gives you CODE Framework-specific project templates, including one for Views. When you use that project template, a dialog pops up which lets you select the style for your view from a list (which is ever growing). This is a simple way to experiment with the different layout styles. Of course, you can also look into the individual CODE Framework theme source code projects (they all start with CODE.Framework.Wpf.Theme) and see which XAML resources are there. The layout ones all have names that start with [Theme]-Layout-, such as Metro-Layout-SimpleForm.xaml. We even make all these resource dictionaries available as a separate download for easy viewing of the WPF styles.

At this point you might wonder what the layout style you just used is defined as. You can see that code here:
<Style TargetType="ItemsControl"
    x:Key="CODE.Framework-Layout-SimpleFormLayout">
  <Setter Property="ItemsPanel">
    <Setter.Value>
      <ItemsPanelTemplate>
        <Layout:BidirectionalStackPanel
            ChildItemMargin="0,0,0,5" />
      </ItemsPanelTemplate>
    </Setter.Value>
  </Setter>
</Style>

More about Styling


CODE Framework uses many WPF/XAML techniques that are extremely powerful and quite simple, yet unfortunately not as widely understood as they should be. I previously wrote two related articles about working very productively in all XAML dialects by tapping into the power of styles and templates, specifically for the purposes of layout as well as styling ListBoxes. You can find these articles on http://codemag.com. You can also find quick and easy references to these articles on http://codemag.com/framework. The articles are not specific to CODE Framework, but they are recommended reading for all CODE Framework developers seeking to use any of the WPF or XAML features in the framework. They are also recommended reading for all XAML developers regardless of whether or not they intend to use the CODE Framework.


    <Setter Property="ItemsPanel">
        <Setter.Value>
            <ItemsPanelTemplate>
                <Layout:BidirectionalStackPanel ChildItemMargin="0,0,0,5" />
            </ItemsPanelTemplate>
        </Setter.Value>
    </Setter>
</Style>

This is really just a lengthy way of saying we want to use a BidirectionalStackPanel to lay out items in an ItemsControl. The View object is an ItemsControl, so this style can be applied to it (but also to any other ItemsControl, whether it is part of CODE Framework or not; again, you may see a design and philosophical pattern emerge here). You might ask: what is a bidirectional stack panel? Well, it's another one of those little convenience controls. It works much like a StackPanel in WPF, except for a few minor details. For one, it can stack things both ways. In other words: If you use a bidirectional stack panel with vertical orientation (the default), controls are put into the stack one after the other from top to bottom, except for those controls that have their individual vertical alignment set to Bottom, which are put in from the bottom up. In addition, this control allows setting the margin between the child items. (In this example, we set it so every control has a 5-pixel bottom margin.) In fact, this stack panel even has a special option for label and textbox types of controls, because we often have UIs with alternating label/control patterns, where the label goes with the control (such as the "Username:" label before the username textbox and so on), and one usually wants a little less margin after the label. If you run our example UI, you can see the bidirectional stack panel in action. Try to resize the login window and you will see that the login buttons always stick to the bottom of the window, since they are defined in their own StackPanel, which has its VerticalAlignment property set to Bottom.

A quick side remark about the two buttons: The simplest way to define this UI would be to not define these two buttons at all. If you think about it from a slightly more abstract viewpoint, you will notice that the login form provides a few very simple aspects. You can enter the user name and password, and textbox controls are generally a good way to achieve that in just about any environment. The buttons then allow you to trigger or cancel login. But are buttons really the best approach for that? That depends on the exact environment and the applied theme. In a conventional Windows setup, buttons may be great. In a touch environment, perhaps you want a different kind of button, or perhaps you want to use gestures. On a phone or in Windows 8 Metro, you may use buttons that are integrated with the device. The list goes on and on, and the point I am making is that you really do not know whether buttons are the best way to go. A better approach (which would also be more productive for developers) is to define that both the login and cancel actions are available, but leave it up to the applied theme/skin to decide how best to present these standard actions. In the CODE Framework, that is possible through the Actions collection that can be (optionally) present on view models. The style then simply picks that up and shows it in the UI. If you create a default CODE Framework WPF MVVM/MVC app, you will see a login screen that looks suspiciously like the one we are creating in this article, including the two buttons, but they are only defined on the view model. For more information on this technique, see the CODE Framework MVVM/MVC article as well as the WPF Layout article (see above for both).
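To tie these pieces together, here is a rough sketch of what such a login view could look like. This is a simplified illustration based on the description above, not the article's actual listing; the binding names (UserName, Password, Login, Cancel) are placeholders:

<mvvm:View xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:mvvm="clr-namespace:CODE.Framework.Wpf.Mvvm;assembly=CODE.Framework.Wpf.Mvvm"
    Title="Login"
    Style="{DynamicResource CODE.Framework-Layout-SimpleFormLayout}">
    <!-- Stacked top to bottom by the bidirectional stack panel -->
    <Label>Username:</Label>
    <TextBox Text="{Binding UserName}" />
    <Label>Password:</Label>
    <TextBox Text="{Binding Password}" />
    <!-- Bottom-aligned children are stacked from the bottom up -->
    <StackPanel Orientation="Horizontal" HorizontalAlignment="Right"
                VerticalAlignment="Bottom">
        <Button Content="Login" Command="{Binding Login}" />
        <Button Content="Cancel" Command="{Binding Cancel}" />
    </StackPanel>
</mvvm:View>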

Getting the CODE Framework


CODE Framework is completely free and open source. A good starting point for all things related to CODE Framework is http://codemag.com/framework. The easiest way to install the framework on your system is to install the CODE Framework Tools, which you can do through the Visual Studio Extensions Manager (found in the Tools menu). Simply search for "CODE Framework" (you may have to scroll down) and install the tools, which gives you access to CODE Framework templates. If you use these templates and you need certain CODE Framework DLLs, the tools will automatically retrieve the needed components from CodePlex without you having to do so manually (you will be prompted, however, for security reasons and to give you more control over what you want to retrieve and from where exactly). CodePlex is where the CODE Framework binaries as well as all the source code are kept. You can go to http://codeframework.codeplex.com to download specific versions as well as source code. CODE Framework is made available under the MIT license, which means you can pretty much do whatever you want with it and you will never be charged for anything. NOTE: For those seeking premium support, training, or consulting, CODE offers these options as well, but they are completely optional.

Figure 2: Creating a new View to implement the UIs shown in Figure 3 and Figure 4.

More Automatic Layout


The login form I've demonstrated so far is a very simple user interface. So let's see if we can do something more exciting. Let's create a customer edit form UI with the ability to search and add customers. To do this, I'll walk you through creating a brand new CODE Framework WPF MVVM/MVC project. If you have never done that before, check out my article about building CODE Framework WPF MVVM/MVC apps (see above). The quick version of this relatively simple process is this: If you do not have CODE Framework installed, search for the CODE Framework Tools using the Visual Studio Extensions Manager (Tools menu). Then, create a new solution and pick the WPF MVVM/MVC project template. If you do not have the CODE Framework assemblies, the template will offer to download them automatically from CodePlex, which you should do. When you create the new project, pick the Battleship theme as your default theme to start out with (but choose to generally include both Battleship and Metro themes).

Figure 3: The customer search UI with a Windows 95 ("Battleship") theme applied.

The first feature I'll show you how to add to the new project is a customer list and search interface. To do so, add a new Controller to the Controllers folder and add a Search() method. Then, go to the existing StartViewModel and add an action to its list of standard actions (or use one of the dummy ones that is already there), and when executed, call Controller.Action("Customer", "Search") to trigger the Search() method on the newly created controller. (NOTE: If you are not familiar with these steps but want to follow along, check out the previous CODE Framework WPF MVVM/MVC article; see above.) The detailed setup of the Controller and even the ViewModel do not matter for our purposes here. In this article we only care about the View. So let's go ahead and add the new View in the Views/Customer folder (you will have to add the Customer sub-folder to your Views folder). The ultimate goal is to create a user interface that looks like the UIs shown in Figure 3 and Figure 4 (which are the same UI but with different themes/skins applied).

The search UI has two distinct parts: The main or primary part of the UI shows the list of customers as the result of the search, and the secondary part contains three textboxes that allow the user to specify search criteria. As it turns out, quite a few UIs follow this primary/secondary UI pattern, where a large area occupies the main part of the UI and a secondary part provides additional features. Think of Windows Explorer showing a list of files in the main area and a tree in the secondary area (possibly with the tree area hidden). Or think of many of the Office applications and how they can show optional panels attached to the side of a screen. I am sure you can think of many more such examples. Since this type of UI setup is so common, CODE Framework provides default styles for this, known as the Primary/Secondary Form Layout. In fact, there are two slight variations on that theme: a general-purpose style of that name, as well as one that is specific to showing lists. UIs with lists in their main area tend to have a slightly different look than the ones that do not, so there are two options by default. For this example you want to use the one for lists. To create a new view with this style, add a new item to the Views/Customer folder and pick the CODE Framework View template. This shows the dialog shown in Figure 2. Note that this dialog lets you pick the Primary/Secondary style as one of the default options, which puts you right where you want to be.

As the next step, we define the list part of the view (the part that will ultimately show the customer search results). For now, all we are going to do is put a ListBox in the view and bind it to a Customers collection on the view model. (I am skipping the details of the view model here, but you can download the companion source code for this article to see the details of the view model definition.) To indicate to the view that this is the control you want to use for the primary area of the UI, you can set

the View.UIElementType attached property to "Primary". And that's about it for the core definition of the list. Observant readers may notice that I have not yet defined which fields I want to show in the list, how they are to be displayed, or anything of that nature. In fact, the list as it is defined right now will show an entry for each of the customers found, but it will show no actual field information, so the list is not very useful. However, we are not currently worried about that part, since each theme may want to show search results in a different way. So all I'll do for now is define that the list is based on a style called Customer-List, which I have yet to define. (See Listing 3 for the complete source of the view definition.)

The secondary part of the UI is going to host the search UI. The style I have chosen does a good job out of the box at placing the secondary UI part in an appropriate spot for each theme the user may choose. In fact, by default, this style is going to be somewhat intelligent and look at the dimensions of the secondary UI. If the secondary UI is tall and skinny, it will be put either to the right or the left of the main UI. If, on the other hand, the secondary UI is very wide but not very tall, the style will put that UI either at the top or the bottom. At least that is what happens in most themes. Of course, you can completely change the way this works, and you can change other aspects, such as the threshold at which it flips from one approach to another, or whether you want that behavior at all. (Take a look at the GridPrimarySecondary class for a list of all the properties you can set.) Some themes may also choose a completely different approach. Perhaps on smaller screen sizes, a theme could decide to only show the primary UI and float in the secondary UI only when needed. There is no limit to the options you can pick here except your imagination and perhaps some UI standards you may want to follow.

What all this means is that you do not have to worry in general terms about where the secondary UI is going to go. You only have to define it. But how do you define that UI exactly? After all, the secondary UI is really a collection of controls rather than just a single control. The answer is deceptively simple: You can simply put all of those controls into a container and then flag the container as the secondary UI. How the controls are laid out inside the container is a different matter. In the example you might want a layout you are already familiar with: the simple form layout with controls stacked top to bottom and some other controls stacked from the bottom up. To facilitate all this, you'll use a View object as the container (yes, a View element inside another View; there is nothing wrong with that) and set the style to the familiar simple form layout style. And voila! The UI is done. Quick and painless, yet extremely flexible and reusable.

Figure 4: The customer search UI in Windows 7 with the Metro theme applied.


Listing 3: The View definition for the Customer Search UI shown in Figure 3 and Figure 4
<mvvm:View xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:mvvm="clr-namespace:CODE.Framework.Wpf.Mvvm;assembly=CODE.Framework.Wpf.Mvvm"
    xmlns:c="clr-namespace:CODE.Framework.Wpf.Controls;assembly=CODE.Framework.Wpf"
    Title="Customer Search"
    Style="{DynamicResource CODE.Framework-Layout-ListPrimarySecondaryFormLayout}">

    <ListBox ItemsSource="{Binding Customers}"
             Style="{DynamicResource Customer-List}"
             c:ListBoxEx.Command="{Binding EditCustomer}"
             mvvm:View.UIElementType="Primary" />

    <mvvm:View UIElementType="Secondary"
               Style="{DynamicResource CODE.Framework-Layout-SimpleFormLayout}">
        <Label>Last Name:</Label>
        <TextBox Text="{Binding LastName}" />
        <Label>First Name:</Label>
        <TextBox Text="{Binding FirstName}" />
        <Label>Company:</Label>
        <TextBox Text="{Binding Company}" />
        <Button HorizontalAlignment="Right" Content="Search..."
                Command="{Binding SearchCustomers}" />
        <WrapPanel VerticalAlignment="Bottom">
            <Label FontSize="{DynamicResource FontSize-Smaller}">Legend:</Label>
            <Rectangle Fill="Red" Height="8" Width="8" />
            <Label FontSize="{DynamicResource FontSize-Smaller}">Inactive</Label>
            <Rectangle Fill="Green" Height="8" Width="8" />
            <Label FontSize="{DynamicResource FontSize-Smaller}">Active</Label>
        </WrapPanel>
    </mvvm:View>
</mvvm:View>
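Listing 3 binds to several members of the search view model: a Customers collection, the LastName, FirstName, and Company search criteria, and the SearchCustomers and EditCustomer commands. The actual view model is included in the downloadable sample; purely as an illustration, a bare-bones stand-in could look like the following sketch. It deliberately uses plain WPF types (ObservableCollection, ICommand) rather than any CODE Framework helper classes, and the class name is made up:

using System.Collections.ObjectModel;
using System.ComponentModel;
using System.Windows.Input;

public class CustomerSearchViewModel : INotifyPropertyChanged
{
    public CustomerSearchViewModel()
    {
        Customers = new ObservableCollection<CustomerQuickInformation>();
    }

    // Bound by the ListBox in Listing 3
    public ObservableCollection<CustomerQuickInformation> Customers { get; private set; }

    // Search criteria bound by the secondary UI
    public string LastName { get; set; }
    public string FirstName { get; set; }
    public string Company { get; set; }

    // Bound to the Search... button and to ListBoxEx.Command respectively
    public ICommand SearchCustomers { get; set; }
    public ICommand EditCustomer { get; set; }

    // A real implementation would raise this event from the property setters
    public event PropertyChangedEventHandler PropertyChanged;
}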



Some UIs cannot be laid out entirely in a fully automatic fashion, but are compositions made out of smaller individual UI segments that use automatic layout features individually.
What you've just done is an extremely important concept in the CODE Framework: You've used automatic layout features, but you aren't using the automatic layout system to lay out the entire form all at once. I would consider it very unlikely that you have many forms in your application which could all be laid out in one swoop by a single generic layout mechanism. However, by composing individual parts of the UI from pieces that can individually be laid out automatically, you can probably handle a very wide range of UIs. There will likely still be a certain percentage of your UIs (or parts of those UIs) that you have to lay out by hand, and that is OK. Being able to use automatic layout for the rest of the views, however, provides huge advantages in terms of developer productivity and long-term maintainability and reusability of your application. Understanding the ability to apply automatic layout features to sub-sections of your UIs is a big step towards becoming a super-productive WPF developer.

At this point, the example is still missing some functionality. You will want to launch a customer edit form when the user selects an item from the list (as well as when the New Customer button is clicked). Let's create some code that handles customer selections. Many developers would now create an event handler for events such as double-click in the code-behind file. Note, however, that our view doesn't even have a code-behind file at all. What's up with that? Well, for one, you can create regular views with code-behind files and use them in CODE Framework without problems if you wish to do so. (CODE Framework supports both compiled views, those with code-behind files, as well as loose XAML views that do not have any associated code-behind files.) Personally, I really like views without code-behind files for a number of reasons. For one, they are more generic and can be reused in more scenarios if they are not tied to a specific code-behind file and associated classes that may only be available in some XAML dialects. They are also not pre-compiled with a specific XAML dialect, which means that your application can do some pre-processing before loading the views. (For instance, there is no Label control in Silverlight, but the framework can handle that with a pre-processing step for a loose XAML file, but not for a compiled one.) Also, developers tend to put way too much code into code-behind files, which causes bad implementations and views that aren't very flexible or reusable. For instance, if you hook the ListBox's double-click event, you would be forever trapped in having to use a double-click. But what if you want to run the view in a touch scenario? Then you'd only want to single-tap. Or perhaps you want to run the view on a phone, and double-clicks may not apply there. Maybe you want to have a right-click option to trigger editing. And so on, and so on. By not providing a code-behind file to put inflexible code like this, developers are practically forced to write good code. (NOTE: You can always do the same thing you can do in code-behind files in behaviors. And if you really were to run into a scenario where this is not the case, first, please drop me an email, because I'd like to see this. Second, you could always use a code-behind file just for that view.) The example for this article will simply use actions to drive customer editing.




I have added a view action (command) called EditCustomer to the view model, and I can simply bind my ListBox to that action. As you may know, ListBoxes do not have a useful command setup for this purpose, so we added one in CODE Framework. Simply set ListBoxEx.Command and you are good to go. Better yet, ListBoxEx provides a few additional settings that specify whether commands are to be triggered on single click or double click. (Take a look at the downloadable source for details on that implementation.) Note that this sort of setup is also somewhat common in the CODE Framework. Whenever functionality useful to MVVM-style architecture and general coding without code-behind is missing, we try to add it. However, we can't possibly anticipate all scenarios developers may encounter and provide specific command bindings for them. Instead, we have a generic attached property on an object called Ex that provides an EventCommand property (as well as an EventCommands collection in case you need more than one), which allows binding any event to a command. For instance, if you wanted to bind a button's double-click event to a command, you could do it in the following fashion:
<Button Content="Hello">
    <c:Ex.EventCommand>
        <c:EventCommand Command="{Binding DoubleClickCommand}"
                        Event="MouseDoubleClick" />
    </c:Ex.EventCommand>
</Button>

Using Themes

So far, we have created a fully functional customer list with a severe lack of any real information. We can search for customers and we can see a list of customers, but the resulting list only shows the name of the bound class (CustomerQuickInformation) rather than useful data such as the customer's name. What is missing is a data template for each item in the list. Using a standard WPF (or any other XAML dialect, for that matter) technique, we can simply define an ItemTemplate for the ListBox to remedy the situation. However, if you were to do that right in the view definition, then the format of the list would be hard-coded and couldn't be changed specific to a certain theme or even a different platform such as a touch-enabled environment or a mobile device. A much better approach is to put the same definition into a resource dictionary (basically a separate XAML file). This is no more or less work than putting it directly into the view, so it is a good idea even if you never switch to a different theme or platform. Besides, you never know whether you may want to change to a different theme later. Chances are that 5 or 10 years down the road, you might want to create a face-lift for your application. (This also makes UI elements like item templates very nicely editable and designable with tools such as Expression Blend, which I highly recommend using.)

The CODE Framework has the ability to automatically manage resource dictionaries for you. Every view you create can optionally have additional resource dictionaries associated by simple naming convention. Using standard CODE Framework templates, you can create such resource dictionaries automatically (depending on the options you pick in the dialog shown in Figure 2). Figure 5 shows the search view source file with the associated resource dictionaries. The rules for loading resource dictionaries are simple. When loading a view (such as Search.xaml), the framework also searches for XAML files with a ".Layout." infix. Therefore, the Search.xaml view will always also load Search.Layout.xaml (if it exists) without you having to manually add that dictionary or having to merge it in yourself. (In fact, you should never load these dictionaries manually, to avoid having unwanted and confusing resources available in scenarios where they are not wanted.) You can add up to 20 layout resource dictionaries (such as Search.Layout.xaml, Search.Layout.0.xaml, Search.Layout.1.xaml, and so forth), which will all be loaded and provide a convenient place to put individual resources associated with your view without having to create a single monster resource dictionary.

Figure 5: The Search view definition automatically maintains four different resource dictionaries that are loaded as needed.

CODE Framework Default Styles

CODE Framework's WPF UI features are driven heavily by styles, resources, and templates. The question that is thus preeminent is "which resources are available to me?" There are several good ways to find out. One is to simply look at the source code, which is available on http://codeframework.codeplex.com. All the styles are defined in resource dictionaries (XAML files) that are relatively easy to look at. (They are also broken out into a separate source code download file for easy access.) Another way to find out is to use tools such as the provided View Template, which includes a dialog that allows picking a default style. You can then simply look at the created XAML file to see which style is used. Yet another approach is to use Expression Blend, which allows looking at copies of applied default styles.

Putting templates into theme-specific resources is no more or less work than putting them right into the view, which makes this a good idea even if you only plan to use a single theme. Not to mention that they work better with Expression Blend.

The CODE Framework also loads theme-specific (I am using the terms "theme" and "skin" interchangeably, as is common in the developer community) resource dictionaries. CODE Framework WPF applications have a global setting for the current theme, set on the Application object. The framework uses an ApplicationEx object which provides a Theme property. Based on this property, the framework loads different resources. For instance, if that property is set to "Metro", the Search.Metro.xaml resource is also loaded with the search view. If the theme was set to "Battleship", it would load Search.Battleship.xaml instead. This article isn't about creating new themes (I will write a future article about that), but you can freely expand on the theming system in CODE Framework and create your own themes or customize existing ones. So assuming you had created a theme called "BlueOcean" (note that there can't be spaces in theme names), the framework would try to load a file called Search.BlueOcean.xaml. In the example for this article, that file does not exist. Whenever that happens, the framework tries to load a default theme file, so it would load Search.Default.xaml if that file existed (as it does in the example for this article). The default file is a good place to put catch-all resources, but note that it is only loaded if no theme-specific file is found. It will not be loaded in addition to theme files, as some people incorrectly believe. (You can, however, manually create additional resource dictionaries that do not follow any naming convention and are thus not automatically handled by the CODE Framework, and add dictionary merge commands to the automatic dictionaries. This is great for creating and loading resources that are shared across theme-specific dictionaries. Among other scenarios, graphical assets such as icons may often be defined in this way.)
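Putting these rules together for the Search view used in this article, the files involved look roughly like this (which of them actually exist depends on the options picked in the dialog shown in Figure 2):

Views/Customer/
    Search.xaml               the view itself
    Search.Layout.xaml        always loaded together with the view (if present)
    Search.Battleship.xaml    loaded when the Battleship theme is active
    Search.Metro.xaml         loaded when the Metro theme is active
    Search.Default.xaml       fallback, loaded only when no theme-specific file exists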


It is common to only create theme-specific files for a handful of themes but use the default file for all others. Maybe you support five different themes in your application, four of which are simply different font and color variations on a basic Windows theme, while the fifth is a touch-specific option. You may create a default dictionary that is generally used and only create one additional one for the touch-specific setup.

Returning to the example at hand, let's create a definition for the customer list for our simple Windows 95 style (called "Battleship"). As you can see in Listing 3, the ListBox is defined as using a style called Customer-List. We can thus put a style definition with that key into our Search.Battleship.xaml file. A basic setup of this style can look like this:

<Style x:Key="Customer-List" TargetType="ListBox"
       BasedOn="{StaticResource {x:Type ListBox}}">
    <Setter Property="ItemTemplate">
        <Setter.Value>
            <DataTemplate>
                <Grid>
                    <!-- and so on... -->

As you can see, this style is defined to be applicable to ListBox elements. It is also based on the default style for ListBoxes. Themes may choose to completely redefine the way certain controls look (ListBoxes in Metro, for instance, have a different appearance than they do in Windows 7). I don't want to worry about what that specific look is, but I want to respect it. That is why I base it on the default. In addition, I then define the ItemTemplate as a DataTemplate. The exact details are omitted from the code snippet above, but the basic idea is simple. The template of each item is a Grid which has several columns. Within the first column I place a rectangle with a data-bound fill color based on whether or not the customer is active. Columns 2 and 3 have data-bound text elements to show the customer's name and company name. You can see the full code example in Listing 4.

Listing 4 has a few other details of interest. I want the ListBox to look like a data grid. For that purpose, I created a simple template for the entire ListBox that shows a header with labels for each column. I use a Grid to define this header element and I even allow for GridSplitter elements to resize some of the columns. The data template for each individual item defines the individual columns within each row (which are technically independent from every other row in the list) to have a width that is data-bound to the width of the header column. With that, I get a working grid control where each row shows its data in columns, and those columns are resizable through the header, as you would expect.

Listing 4 has one more interesting trick: As you may recall, the entire view uses a style called CODE.Framework-Layout-ListPrimarySecondaryFormLayout. This is a style defined by the framework that we intended to use directly, and our view was never designed to define a custom style for the layout. However, in this case (mainly to show this technique in this example), I decided to override that style anyway and create a style of the same name in the Battleship resource dictionary:

<Style TargetType="ItemsControl"
       x:Key="CODE.Framework-Layout-ListPrimarySecondaryFormLayout">

Since the resource dictionary specific to the view is loaded (internally) after the style of the same name provided by the framework, this style takes precedence and will be the one applied. (This is a standard XAML technique and useful for many things, with or without CODE Framework.) The actual definition of this style is copied almost 1:1 from the default definition as found in the CODE Framework source (which is available to everyone). The only thing I changed is that I increased the SecondaryUIElementAlignmentChangeSize property to 500, indicating that I want my secondary UI to be aligned across the top of the view unless it is taller than 500 pixels. (It isn't in our example, thus effectively forcing the search UI to be positioned at the top of the screen.) Figure 3 shows the result.

Now, all the example lacks is a definition of a look specific to Metro. To achieve the appearance shown in Figure 4, I'll follow the same steps as for the Battleship theme (except I won't mess with the overall layout style this time). The ListBox item template now is a simple Grid of a size hardcoded to 75x250 pixels. Within it, I placed two data-bound text elements as well as an icon (which I downloaded as a XAML-based vector image from www.Xamalot.com; see sidebar). To create a nice multi-column flow of elements within the list, I styled the items panel of the ListBox to use a WrapPanel element. Furthermore, I want slightly different select behavior. While in a regular Windows world I would expect to double-click a customer to edit it, in Metro I would expect to single-click (or single-tap in a touch environment) to achieve the same result. I can do this by adding the following to the definition of my ListBox style:

<Style TargetType="ListBox" x:Key="Customer-List"
       BasedOn="{StaticResource Metro-Control-ListBox}">
    <Setter Property="c:ListBoxEx.CommandTrigger" Value="Select" />

Xamalot.com
You can download artwork used in this article from www.Xamalot.com, a free source of clipart for developers specifically created to provide XAML-based art (although you can also download everything in bitmap-based formats such as JPG or PNG). To use XAML-based vector art (as shown in the Metro-styled list of customers in this article), simply find a clipart you like, choose to download it as a XAML Brush resource (or simply have the XAML displayed on the site) and copy it into your own resource dictionaries. The simplest way to display XAML-based art in WPF is to place a rectangle on the screen and use the downloaded art/brush as your fill color.


This is a small change, but it is also profound, as it shows that one can use styles not only to change visual aspects such as colors or layout, but even to drive the behavior of a user interface. (If all this re-styling of ListBoxes is new to you, check out my article about styling ListBoxes in XAML; see sidebar for more details.) Note how little effort actually went into creating these different themes, yet the results as shown in Figure 3 and Figure 4 are quite different in appearance and even behavior. Experiment with the application by running it with both themes. You can change themes directly in the App.xaml file, but you can also swap themes in the running application by using the menu items/tiles that are added for this purpose by default. Note that you can cause a style change any time you desire, either by triggering a SwitchThemeViewAction as shown in the StartViewModel class, or by simply setting the Theme property of the current application:
var app = (App.Current as ApplicationEx);
app.Theme = "Blue";


Listing 4: The complete definition of the Battleship-themed UI elements for the Search view
<ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:Controls="clr-namespace:CODE.Framework.Wpf.Controls;assembly=CODE.Framework.Wpf"
    xmlns:Layout="clr-namespace:CODE.Framework.Wpf.Layout;assembly=CODE.Framework.Wpf">

    <Style TargetType="ItemsControl"
           x:Key="CODE.Framework-Layout-ListPrimarySecondaryFormLayout">
        <Setter Property="ItemsPanel">
            <Setter.Value>
                <ItemsPanelTemplate>
                    <Layout:GridPrimarySecondary Margin="20" UIElementSpacing="15"
                        SecondaryUIElementAlignmentChangeSize="500"/>
                </ItemsPanelTemplate>
            </Setter.Value>
        </Setter>
        <Setter Property="Background" Value="{x:Null}" />
    </Style>

    <Style x:Key="Customer-List" TargetType="ListBox"
           BasedOn="{StaticResource {x:Type ListBox}}">
        <Setter Property="ItemTemplate">
            <Setter.Value>
                <DataTemplate>
                    <Grid>
                        <Grid.ColumnDefinitions>
                            <ColumnDefinition Width="{Binding Width, ElementName=column1, Mode=OneWay}" />
                            <ColumnDefinition Width="{Binding Width, ElementName=column2, Mode=OneWay}" />
                            <ColumnDefinition Width="{Binding Width, ElementName=column3, Mode=OneWay}" />
                            <ColumnDefinition Width="{Binding Width, ElementName=column4, Mode=OneWay}" />
                        </Grid.ColumnDefinitions>
                        <Rectangle Height="16" Width="16" Margin="2" Fill="{Binding IsActiveBrush}"
                                   VerticalAlignment="Center" HorizontalAlignment="Left" />
                        <TextBlock Grid.Column="1" Text="{Binding FullName}" />
                        <TextBlock Grid.Column="2" Text="{Binding Company}" />
                    </Grid>
                </DataTemplate>
            </Setter.Value>
        </Setter>
        <Setter Property="Template">
            <Setter.Value>
                <ControlTemplate TargetType="{x:Type ListBox}">
                    <Border x:Name="Bd" BorderBrush="{TemplateBinding BorderBrush}"
                            BorderThickness="{TemplateBinding BorderThickness}"
                            Background="{TemplateBinding Background}" Padding="1"
                            SnapsToDevicePixels="true">
                        <Controls:GridEx RowHeights="Auto,*">
                            <Grid>
                                <Grid.ColumnDefinitions>
                                    <ColumnDefinition Width="25" x:Name="column1" />
                                    <ColumnDefinition Width="300" x:Name="column2" />
                                    <ColumnDefinition Width="300" x:Name="column3" />
                                    <ColumnDefinition Width="*" x:Name="column4" />
                                </Grid.ColumnDefinitions>
                                <Grid.Background>
                                    <LinearGradientBrush StartPoint="0,0" EndPoint="0,1">
                                        <GradientStop Color="#E0E0E0" Offset="0" />
                                        <GradientStop Color="WhiteSmoke" Offset=".5" />
                                        <GradientStop Color="#D6D6D6" Offset="1" />
                                    </LinearGradientBrush>
                                </Grid.Background>
                                <Label Grid.Column="1">Name</Label>
                                <GridSplitter Grid.Column="1" HorizontalAlignment="Right"
                                              VerticalAlignment="Stretch" Width="1" />
                                <Label Grid.Column="2">Company</Label>
                                <GridSplitter Grid.Column="2" HorizontalAlignment="Right"
                                              VerticalAlignment="Stretch" Width="1" />
                            </Grid>
                            <ScrollViewer Focusable="false" Grid.Row="1"
                                          Padding="{TemplateBinding Padding}">
                                <ItemsPresenter SnapsToDevicePixels="{TemplateBinding SnapsToDevicePixels}"/>
                            </ScrollViewer>
                        </Controls:GridEx>
                    </Border>
                </ControlTemplate>
            </Setter.Value>
        </Setter>
    </Style>
</ResourceDictionary>

This causes the current application to unload all theme-specific resources and load the ones for the specified theme instead (following the rules described above). This works even for UIs already loaded, which will completely change look and behavior on the fly when you do this. This effect often blows developers away, as the results can be dramatic yet are very simple to achieve. Switch back and forth between the Battleship and the Metro themes to experiment with this feature of the framework.

The View Visualizer


When it comes to working with all these resources, one of the advantages is that during development, each resource remains very simple and manageable. However, things can get a bit tricky once the application is running and you are trying to figure out what is going on. (NOTE: This is an issue for all XAML applications and not specific to CODE Framework.) To remedy this situation, CODE Framework provides a View Visualizer, a developer tool that is turned on by default in App.xaml.cs (yet should be turned off before you deploy your application). The View Visualizer provides a list of all views currently running and shows details about which view and view model each UI is using. It also shows which controller has launched the UI, thus giving you all three pieces of information that go along with MVC scenarios. This is a very valuable tool to have, especially when it comes to maintaining applications, or when you are asked to work on a UI that was originally created by a different developer and you may not know where to find all the pieces.

However, the View Visualizer provides a lot more detail about each individual view. Once you select a view from the list of open views, you can not only see a live and zoomable visual of the view (useful for taking a close look at view details), but you can also see a hierarchical display of all the elements that make up the view. (During development, we would refer to this as the "document outline.") You can hover your mouse over each element in the view to see a preview of just that element (useful for identifying the specific element you are looking for in complex views) and you can then select an element to see additional details. Those details include a list of all resource dictionaries that are loaded and accessible to the selected element (which could be application-global, specific to the current view, or even specific to the current element) and all the resource dictionaries these dictionaries may be loading, and so on (dictionary merging can cause large hierarchies to be loaded).

Figure 6: The View Visualizer tool shows detailed information about all open views, the elements they are made of and details about associated resources and styles.

Figure 7: A Customer Edit view using the Metro style in Windows 7.

The View Visualizer provides tools for WPF similar in concept to what Firebug in Firefox provides for HTML.
In addition, you can choose to see all the styles and their individual settings that apply to the selected element. For those of you who have done web development and have used either the Internet Explorer Developer Tools or Firebug in Firefox, this is probably a familiar sight, as this part of the visualizer aims to provide the same information or its XAML equivalent. It allows you to easily see which styles are applied and why (they may be explicitly set by key or brought in implicitly based on the control type) and which styles these styles are based on, and so forth. You can also see which style settings have been overridden by subsequent styles as these settings are crossed out. Again, to web developers, this is probably a familiar sight, while a tool like this used to be sorely missing in WPF. Figure 6 shows the tool in action.

Editing Customers

At this point, the example is already quite interesting from a developer's point of view. Nevertheless, I also want to create a customer edit form to show a few more details. Fundamentally, the provided example (as shown in Figure 7) is relatively simple, yet it has some very interesting details. Listing 5 shows the definition of this view (which is stored in a single file with no additional resource dictionaries needed at all).


Listing 5: The view definition for our customer edit screen


<mvvm:View xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:mvvm="clr-namespace:CODE.Framework.Wpf.Mvvm;assembly=CODE.Framework.Wpf.Mvvm"
    Title="Edit Customer"
    Style="{DynamicResource CODE.Framework-Layout-EditFormLayout}">
    <Label>First Name:</Label>
    <TextBox Text="{Binding FirstName}" mvvm:View.WidthEx="25" />
    <Label>Last Name:</Label>
    <TextBox Text="{Binding LastName}" mvvm:View.WidthEx="25" />
    <Label>Company Name:</Label>
    <TextBox Text="{Binding Company}" mvvm:View.WidthEx="25" />
    <Label>Division:</Label>
    <TextBox Text="{Binding Division}" mvvm:View.WidthEx="25" />
    <Label mvvm:View.GroupBreak="True">Phone:</Label>
    <TextBox Text="{Binding Phone}" mvvm:View.WidthEx="25" />
    <Label>Fax:</Label>
    <TextBox Text="{Binding Fax}" mvvm:View.WidthEx="25" />
    <Label mvvm:View.ColumnBreak="True">Birthday:</Label>
    <Calendar SelectedDate="{Binding Birthday}" />
</mvvm:View>

The most interesting aspect of the edit view definition is not what is there, but what is not there. If you look closely, you will see that in true CODE Framework fashion, the view definition is very simple and only lists the different elements you want in the UI, what they are bound to, and a few abstract layout hints, such as the width of the elements in a generic and style-independent fashion, as well as group and column breaks. And that's it! The actual layout of this form is created by a style called Edit Form Layout, which produces the result shown in Figure 7. In this example, the style was able to lay out the entire form all at once, allowing you not just to be super-productive in the creation of the form but also to create a view that is highly reusable in other XAML dialects and even in completely different scenarios such as ASP.NET MVC, iOS, and Android. Is it realistic in real-world scenarios to have UI definitions that can be completely handled by a style like this? Well, as it turns out, most business applications tend to have a number of trivial edit forms that can indeed be handled in this fashion. Most serious forms, however, are likely to be more complex. In those cases, it is more realistic to apply these automatic layout styles to sub-sections of the view (like we did with the search screen) and compose larger views out of smaller areas that are laid out automatically (or, if need be, even manually).

Figure 8 shows the same view from Figure 7, but this time it runs on true Windows 8 using Windows 8 Metro (not to be confused with the Metro style which I applied to a Windows 7 WPF application throughout this article). While the view definition remains unchanged, the applied style chooses to present the view differently. Labels are now above their associated controls. Spacing is different. Font sizes are different. Yet using CODE Framework, we can still use the same exact view without changes. This not only saves a ton of work when moving to Metro, but it also allows you to reuse code that is already well tested. Another example of reusing the same view is shown in Figure 9, which shows the view running on Windows Phone 7. These examples lead us beyond the focus of this article and into different areas of the framework, such as the Windows 8 and Windows Phone specific implementations (and more). I do not have the space in this article to talk about those aspects in detail (these may be the subject of future installments of this column), but it is important to understand that if you are defining your UIs using the techniques described here, you are not just going to be very productive in creating your UIs, but you will also have created much more reusable UIs in the long run.

Figure 8: The unchanged customer edit view running as a true Windows 8 Metro application.

Some Loose Ends


Is that all there is to know about UI development in WPF with the CODE Framework? Not by a long shot. But I hope to have given you a good overview of the most important aspects and the overall paradigm, which should allow you to explore further steps on your own. There are a few odds and ends that have not fit well into the article up to this point that I would still like to point out.

Fonts
One is the use of fonts and font sizes. All default CODE Framework themes define default settings for font families and font sizes. Those are usually defined within each theme's resources (in the framework source code) in a file called Fonts.xaml. Most importantly, there is a resource called DefaultFont which is used to define the theme's default font family. You should use that reference rather than explicitly setting the font on any of your elements:
<!-- Good -->
<TextBlock FontFamily="{DynamicResource DefaultFont}" />

<!-- Bad -->
<TextBlock FontFamily="Segoe UI" />

If you find yourself needing different font families in your app, simply add more resources in your own resource dictionaries. A simple way to do so is to create your own Fonts.xaml file in the Themes/[Name] folder, which the default CODE Framework template already created for you (for instance, if you chose to include the Metro theme, simply add a Themes/Metro/Fonts.xaml file). Add a link to your merged dictionary from the root theme file (such as Metro.xaml in the case of the Metro theme), which will cause the framework to automatically load your font definition resource dictionary when that theme is applied.
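As a rough sketch (assuming the Metro theme folder described above; the MyHeaderFont key is a made-up example, and only DefaultFont is a framework-defined name), such a file and its merge reference could look like this:

<!-- Themes/Metro/Fonts.xaml -->
<ResourceDictionary
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <FontFamily x:Key="MyHeaderFont">Segoe UI Light</FontFamily>
</ResourceDictionary>

<!-- Inside Themes/Metro/Metro.xaml, merge the new dictionary: -->
<ResourceDictionary.MergedDictionaries>
    <ResourceDictionary Source="Fonts.xaml" />
</ResourceDictionary.MergedDictionaries>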

Figure 9: The same customer edit view running unchanged in definition, although with a different look, on Windows Phone 7.

Colors
A similar concept applies for colors. You should never apply a color explicitly, but you should always use styles for colors. This allows for much greater flexibility (not to mention consistency) across your application. It is not uncommon for applications to offer completely new skins using the simple trick of swapping the color dictionary. CODE Framework, by default, defines twelve different standard colors (three foreground, three background, three highlight, and three theme colors), all defined in the Colors.xaml file. Since WPF sometimes needs colors and sometimes needs brushes, there are brush equivalents for all these colors as well. (If you want to make changes, you only need to change the colors, as the brushes automatically use the colors to define themselves.) You may have noticed that all the Metro examples in this article use a blue background while the default template creates a red background. I achieved this simply by putting the following setting into my Metro-Colors.xaml file in the Metro theme folder:




<Color x:Key="CODE.Framework-ApplicationThemeColor1">Navy</Color>


Again, add to the list of colors if needed (and it is generally a good idea to create color as well as brush resources), but do not use explicit colors in your views and view-specific styles.
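For example, an additional application-specific color and its matching brush (the My-Accent keys below are hypothetical, not framework-defined) could be added to your color resource dictionary like this:

<Color x:Key="My-AccentColor">#FF2B5797</Color>
<SolidColorBrush x:Key="My-AccentBrush" Color="{StaticResource My-AccentColor}" />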

Message Boxes
A final feature I want to point out is one that deals with the common need for message boxes. It is very tempting to simply put a call to MessageBox.Show() in your view model, but this kills reuse as well as other aspects such as testability. To avoid this problem, CODE Framework offers its own message box feature. To show a message box, simply use the Controller.Message() method:
Controller.Message("Pretending to save data...", "Saving");

Fundamentally, this call supports the same parameters you would expect on the standard Windows message box. In fact, many styles choose to use the default message box feature to enable controller messages. However, many styles choose to display messages completely differently, in a way that is appropriate for the style (see Figure 10). For testability, the framework also allows for mocking of chosen values in message boxes, so you can place a call to a message box and simulate the user picking one of the available options without ever having to display a UI. CODE Framework message boxes also have quite a number of features that standard message boxes lack. As it turns out, CODE Framework message boxes are really just standardized UIs with a view definition and a view model. If you are placing a standard call to the Controller.Message() action, the framework automatically creates a standard view and view model appropriate to display the message box. However, you can also supply your own view and view model if you want. This allows much greater flexibility. For instance, you can easily create completely different captions for your buttons. Or, you could create complete custom views and models to add elements such as textboxes or drop-down lists to your message boxes. Since this feature uses the standard view/view-model architecture of the framework, you could create UIs with no limit to complexity. (However, most message boxes should probably not have hundreds of UI elements in real-world scenarios.)

Figure 10: The Metro theme uses a Metro-appropriate approach for displaying message boxes.

Getting You Started

And there you have it! While this isn't nearly all I would like to talk about, this should still give you an overview of the most important features and paradigms. For more information, download the example associated with this article and visit http://codemag.com/framework. Also, feel free to send me an email. I am always happy to help you out with CODE Framework questions. Furthermore, CODE is offering training classes for CODE Framework on a regular basis. Check out http://codemag.com/training for details. If you are interested in attending a class, contact the CODE training division at info@codemag.com and mention you read this article, which will give you a discount on attending the course.

Markus Egger





MANAGED CODER

Managed Coder: On Diversity


Writing software is hard, particularly when schedules keep programmers nose to the grindstone; every so often, its important to take a breather and look around the world and discover what we can ndironically, what we nd can often help us write software better.
Diversity in the workplace, declare the HR websites, is a good thing, crucial to innovation. Does this hold for programming languages? While I was in college at the University of California, Davis, diversity was a hot topic among the students. In fact, one year, a half-dozen students decided that the Latino Studies department wasnt diverse enough (apparently, if memory serves, all three professors were of Caucasian descent), and staged a hunger strike for diversity. I remember asking, as the hunger strike neared its second day, why diversity was a good thing. I also remember half the student body (it seemed) calling me an insensitive racist chauvinist pig. Despite all the words being hurled my way, I never actually heard an objective answer. Everybody knows that diversity in the workforce is good. Everybody knows that diversity in a student body is good. What I cant gure out, however, is how it makes things goodwhat about a diverse workforce, exactly, leads to benets not felt in a homogeneous one? ees, which (to my mind) makes senseprogramming is a pretty merit-based industry, when you think about it, given that neither the compilers nor the servers really care about the age, gender, ethnicity, political bias, or any other distinctive characteristics of the programmers involved. (It should be pointed out, however, that programs sometimes bring their own biases to the game. For years, UNIX developers could feel a 60s vibe by trying to use their build tool; typing make love at the command line produced a result not war?) So if diversity at the racial and ethnic level is an important part of creating a competitive and innovative workforce, then why do programming shops continually stress that they are a monocultural place to work? Why is We are a C# shop, just a C# shop, and we will always be a C# shop (until Microsoft kills it, anyway) somehow a good thing? to hire people who will know everything out of the gate. In fact, the polyglot project is already a fact of life. Consider the average Web project: How many different languages does the programmer need to know in order to be useful? A CLR language (C# or VB, usually), SQL, HTML, XML, JavaScript, CSS, a shell language (be that .bat les or PowerShell), were up to a half-dozen already and weve only considered a Web app. Senior developers will often need to know a few more on top of that, just to be able to interoperate with other projects from other platforms or languages, to boot. Plus all those frameworks. Of course, sometimes we just follow the herd: C# programmers get paid more than Visual Basic developers, so it must be the best language to use, right? That one wasnt even worth responding to. (I honestly heard that from a development team manager once.) Sometimes, it takes the form of:

Polygamy, Polyamory, Polyglot-amy, oh my!


Most often, when I hear the argument against a polyglot environment, the arguments fall into one of the following: Too many languages means we dont learn how to use any one of them well. How will we ever nd programmers who can write all these languages? Generally, when presented with these arguments, I call their bluff: what if we substitute the word languages with the word frameworks? Too many frameworks means we dont learn how to use any one of them well certainly applies to a couple of shops Ive been intimately involved with, sure. And How will we ever nd programmers who can write all these frameworks? Sure. Find me the .NET developer who knows ASP.NET MVC, Entity Framework, WCF, Workow, and the Base Class Library well. (Hell, for that matter, nd me somebody who feels like they know just the BCL well, and Ill show you a programmer who hasnt really taken a hard look at how many thousands of classes and tens-of-thousands of methods are in it.) Face it: programming is hard. The programming ecosystem is just far too rich and complex to expect

Of course, sometimes we just follow the herd: "C# programmers get paid more than Visual Basic developers, so it must be the best language to use, right?" That one wasn't even worth responding to. (I honestly heard it from a development team manager once.) Sometimes, it takes the form of: "Eh, <language> is good enough for everything we do, why change?" It's an argument I've heard over and over again from developers, and one I always hear in my head as, "Dude, I just got to the point where I got enough C# (or VB) under my belt to be hired somewhere, don't go rocking the boat! Learning is hard!" Then maybe you should go shopping, Barbie.

Thinking in Language
It makes me curious, though: again, going back to diversity arguments, what's so wrong with English? Why not just insist that everybody within the company learn English? I mean, English seems to work well enough for me, why shouldn't it work well enough for everybody? (No, I'm not really arguing that. I happen to enjoy being semi-fluent in French and German.) Monoglots of programming languages will be quick to point out that you can't have more than one lan- (Continued on page 73)


VENDORS: ADD A REVENUE STREAM BY OFFERING ESCROW TO YOUR CUSTOMERS!


Affordable High-Tech Digital Escrow


Tower 48 is the most advanced and affordable digital escrow solution available. Designed and built specifically for software and other digital assets, Tower 48 makes escrow inexpensive and hassle free. Better yet, as a vendor, you can turn escrow into a service you offer to your customers and create a new revenue stream for yourself. Regardless of whether you are a vendor who wants to offer this service to your customers, or a customer looking for extra protection, visit our web site to start a free and hassle-free trial account or to learn more about our services and digital escrow in general!

Visit www.Tower48.com for more information!

FROM THE PRODUCERS OF CODE MAGAZINE

Application Framework from the Most Trusted Source in .NET!


Brought to you by CODE Magazine, the CODE Framework is a free and open-source business application development framework, focusing on productivity, maintainability, and great application architecture. The CODE Framework provides a wide range of features, from various UI development approaches (Windows, Web, Mobile) to Middle-Tier and Service components, to data access, security, various utility features, and much, much more. CODE Framework also aims not to re-invent features that are already available, making "mashability" with other frameworks a high priority. To get the CODE Framework, visit www.codemag.com/framework or download various versions and tools from http://codeframework.codeplex.com.

See more details at: www.codemag.com/framework


