MAY JUN
2012
codemag.com - THE LEADING INDEPENDENT DEVELOPER MAGAZINE - CODE COMPONENT DEVELOPER MAGAZINE
An EPS Company
Dynamic Languages
Sponsored by:
TABLE OF CONTENTS
Features
8 The Baker's Dozen Doubleheader: 26 New Features in SQL Server Integration Services 2012 (Part 2 of 2)
Kevin looks at 13 new features in SQL Server Integration Services 2012.
48 Grokking the DLR: Why It's Not Just for Dynamic Languages
Kevin reviews why so many developers don't know much about the Dynamic Language Runtime, why many have misconceptions about the DLR, and why developers should consider using the DLR as a communication tool, even if they never intend to use a dynamic programming language in their own application designs.
Kevin S. Goff
Kevin Hazzard
Markus Egger
58 Building Productive, Powerful, and Reusable WPF (XAML) UIs with the CODE Framework
Markus walks through how the themes and styles features in the CODE Framework, which is available for free, can make you a more productive developer and can make your applications easier to modify and maintain over their lifecycle.
Markus Egger
Sahil Malik
Columns
74 Managed Coder: On Abstraction
Ted Neward
Neal Ford
Departments
6 Editorial
19 Advertisers Index
73 Code Compilers
Paul D. Sheriff
Ted Neward
Rick Strahl
US subscriptions are US $29.99 for one year. Subscriptions outside the US pay US $44.99. Payments should be made in US dollars drawn on a US bank. American Express, MasterCard, Visa, and Discover credit cards are accepted. Bill-me option is available only for US subscriptions. Back issues are available. For subscription information, send e-mail to subscriptions@code-magazine.com or contact customer service at 832-717-4445 ext 10. Subscribe online at codemag.com. CODE Component Developer Magazine (ISSN # 1547-5166) is published bimonthly by EPS Software Corporation, 6605 Cypresswood Drive, Suite 300, Spring, TX 77379 U.S.A. POSTMASTER: Send address changes to CODE Component Developer Magazine, 6605 Cypresswood Drive, Suite 300, Spring, TX 77379 U.S.A.
Table of Contents
EDITORIAL
pertained to the .NET developer. I am amazed how much things have changed in just a few short years. Open source has come to the Microsoft ecosystem in full force. For a company whose CEO only ten years ago called open source software a cancer (http://en.wikipedia.org/wiki/Steve_Ballmer#Free_and_open_source_software), the sea change is remarkable. Last month was full of news on the Microsoft open source front. We learned that Microsoft is one of the top 20 committers to the Linux kernel (http://www.linuxfoundation.org/news-media/announcements/2012/04/linux-foundation-releases-annual-linux-development-report). Not long after the news of Microsoft's commitment to the Linux kernel came the bombshell: Microsoft released ASP.NET MVC, Web API and the Razor view engine under an open source license. The Apache 2.0 license, to be exact! As Bob Dylan says: "The times they are a-changin'." But wait, it gets better. Microsoft is also starting a new wholly owned subsidiary called Microsoft Open Technologies, Inc. (http://blogs.technet.com/b/port25/archive/2012/04/12/announcing-one-more-way-microsoft-will-engage-with-the-open-source-and-standards-communities.aspx). This organization will work with standards initiatives and open source projects. The press release is like a who's who of the open source ecosystem: Linux, Hadoop, MongoDB, PhoneGap, etc. The benefit of Microsoft supporting these projects will be felt around the world. I know a lot of my clients will look more favorably on adopting open source software now that Microsoft has demonstrated its commitment.
code, permission to fork it and create your own derivative, but it was missing a critical characteristic of more permissive licenses: Microsoft didn't allow outsiders to commit patches to the core source code. With Microsoft's announcement in March 2012, this is no longer true. Microsoft now takes submissions to the core source code. As a matter of fact, they already have, and it didn't take long for it to happen.
sion occurred shortly after Microsoft's decision to open source big parts of the ASP.NET stack. The biggest takeaway I had is that software developers and consumers of Microsoft products need to change our world view when it comes to new features. In the case of ASP.NET MVC, Web API and the Razor view engine, the adoption of new features now lies partially in the hands of the community. Want a new template? Just do it! Want a new overload for a function? Just do it! Want to unseal all the classes? Just do it! (Please, someone, do it!) The responsibility for adding features no longer rests solely in the hands of Microsoft. It's up to us as a community to make these happen.
Rod Paddock
Editorial
The Baker's Dozen Doubleheader: 26 New Features in SQL Server Integration Services 2012 (Part 2 of 2)
In the first game of this doubleheader (the last issue of CODE Magazine), I covered 13 new database and T-SQL features in SQL Server 2012. Well, it's the second game of the doubleheader, and the nightcap features 13 new features in SQL Server Integration Services 2012. SSIS has always been a good time, and now it's an even better tool with enhancements and improvements over prior versions. Even if you had a love/hate relationship with SSIS before, you'll find that Microsoft paid special attention to SSIS 2012.
The Baker's Dozen Potpourri: Miscellaneous New Features in SSIS 2012
Normally, in Baker's Dozen tradition, I say, "What's on the menu?" This time, I'm saying, "The starting lineup is as follows:"

- New development editor: use of SQL Server Data Tools 2010
- New shared Connection Managers to simplify the connection manager process across packages in a project
- SSIS parameters at the project, package, and task level
- Baker's Dozen Spotlight: a new variable Expression Task, to eliminate instances where scripts are necessary
- New UNDO/REDO functionality in the data flow editor
- New SSIS expression language features
- New native ODBC data flow source and destination components
- Greatly improved recovery from data lineage and invalid metadata reference issues
- New Data Taps functionality to programmatically tap into a data flow pipeline
- New SSIS tasks to support Change Data Capture
- New deployment features in SSIS 2012
- A new SSIS Server Management Dashboard feature
Tip 1: New SSIS Development Environment using SQL Server Data Tools
Prior to SSIS 2012, SSIS developers used Business Intelligence Development Studio, which was a shell of Visual Studio 2008. Some who used SSIS 2008R2 were (understandably) upset that even the R2 version (released in 2010) still used the VS2008 shell, as opposed to the updated WPF-based Visual Studio 2010 shell. Fortunately, the planets now align: SSIS 2012 uses the WPF-based Visual Studio 2010 shell. The SSIS development editor is much more visually appealing. Although this might not be critical for experienced ETL developers, a better-looking UI will help with the appeal for new ETL developers. Figure 1 and Figure 2 show the control flow and data flow for an SSIS package in the new SSDT environment. (At the end of this article, I'll talk about what this package does.)
Figure 1: The control flow for an SSIS package in the new SSDT environment uses the WPF framework.
Figure 2: The data flow for an SSIS package in the new SSDT environment has the pipeline in blue instead of the old green color.
Figure 3: The new structure in SSIS 2012 lets you create project-level Connection Managers and use them throughout your project.
Figure 4: The new tab in the SSIS package editor defines package parameters.
Figure 5: The new Variable Expressions task maintains SSIS variables without the need for an SSIS script task.
Figure 6: You can handle NULL values easily with the new REPLACENULL function.
Figure 7: The new SSIS function identifies a specific token or returns the number of tokens.
TOKEN (to parse a string and return a specific token) and TOKENCOUNT (to parse a string and return the number of tokens). For example, TOKEN("2012-05-15", "-", 2) returns "05", and TOKENCOUNT("2012-05-15", "-") returns 3.
Tip 7: Support for ODBC Data Sources and Data Flow Destinations
Those who have tried to integrate SSIS with ODBC connections will be happy to learn that SSIS 2012 contains new native ODBC data flow components. This is related to Microsoft's announcement that it will drop support for OLE DB after SQL Server 2012 in favor of ODBC. SSIS 2012 allows you to create an ODBC connection manager, specifying either a user or system DSN, or a custom ODBC connection string.
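As a rough sketch, a custom ODBC connection string for such a connection manager might look like the following; the driver name, server, and database here are illustrative assumptions, not values from the article:

```
Driver={SQL Server Native Client 11.0};Server=localhost;Database=AdventureWorks2008R2;Trusted_Connection=yes;
```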
Tip 8: Greatly Improved Recovery from Data Lineage and Invalid Metadata Reference Issues
Imagine that you give a young child a red lollipop, and then a minute later you take away the lollipop. As the parent of a toddler, I know that even if you give the child a new red lollipop, the child will throw a fit. Data Flow components prior to SSIS 2012 behaved similarly. For instance, suppose you have an OLE DB destination that expects ten columns from the data flow pipeline from the previous component. Now suppose that you remove one of the columns from the prior component (perhaps one of the ten columns isn't used any longer). The OLE DB destination prior to SSIS 2012 complained and generated an error because of the invalid reference. Alternatively, suppose the OLE DB destination expected ten columns from a previous data flow component (component A), but now receives the same columns from a different component (component B). The OLE DB destination component prior to SSIS 2012 still complained that the lineage of the columns was from a different parent component. Correcting these issues always meant a certain amount of surgery on the component in error, to force it to recognize the changes. SSIS always materialized both the pipeline and the parent lineage into each subsequent component in the pipeline. Each component contained specific information about what columns it expected and where they came from, and didn't respond well to changes. This, like other issues in prior versions of SSIS, was something that new developers had trouble grasping, and experienced developers just simply lived with. The good news is that Microsoft has addressed this problem in SSIS 2012, and corrections to invalid metadata in a component are now much easier. Figures 8 and 9 demonstrate an example of this: a component (a flat file destination) expects a certain number of columns, and then we remove one of the columns from a previous data flow. The pipeline still generates an error, but we can use a better interface (Figure 9) to resolve any invalid pipeline references.
Figure 8: You can resolve invalid references in the pipeline using the new interface.
Figure 9: This is the new Resolve Invalid Data Flow Pipeline References Editor.
The control flow task is the CDC Control task, which allows an ETL developer to control the lifecycle of CDC processes. There are two CDC data flow components. The first is the CDC source, which allows you to open a CDC change tracking log table and read the rows into the pipeline. The second is the CDC splitter, which separates the rows in a change tracking table into three distinct pipelines: new rows from the change tracking table, updated rows (with the before and after values from the updates), and deleted rows.
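These components consume SQL Server's Change Data Capture feature, which must first be enabled on the source database and table before the CDC source has anything to read. A minimal sketch follows; the AdventureWorks table used here is an assumption for illustration:

```sql
-- Enable CDC at the database level (requires sysadmin)
USE AdventureWorks2008R2;
EXEC sys.sp_cdc_enable_db;
GO

-- Enable CDC for a specific source table; SQL Server creates the
-- change tracking table and capture job behind the scenes
EXEC sys.sp_cdc_enable_table
    @source_schema = N'Sales',
    @source_name   = N'CurrencyRate',
    @role_name     = NULL;  -- NULL = no gating database role
GO
```

Once the table is enabled, the CDC Control task and CDC source can track and read changes from the generated change table.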
Figure 10: To build a Data Flow Tap in SQL Server, you must first determine the identification string of the pipeline.
Figure 11: You can easily find the identification string for the pipeline (for use in a Data Tap).
Listing 1: SQL code in the SSIS service engine for Data Taps
USE [SSISDB]
GO

DECLARE @return_value int,
        @execution_id bigint,
        @data_tap_id  bigint

EXEC [catalog].[create_execution]
    @folder_name  = N'SSIS2012DemoProjectFolder',
    @project_name = N'SSIS2012DemoProject',
    @package_name = N'ETLMergeExample.dtsx',
    @execution_id = @execution_id OUTPUT

EXEC [catalog].[add_data_tap]
    @execution_id            = @execution_id,
    @task_package_path       = N'\Package\Foreach Loop Container\Data Flow - Process CSV files - Product',
    @dataflow_path_id_string = N'Paths[Data Conversion.Data Conversion Output]',
    @data_filename           = N'OutputDatatap.csv',
    @data_tap_id             = @data_tap_id OUTPUT
Tip 13: The Baker's Dozen Potpourri: Miscellaneous New Features in SSIS 2012
In addition to all the major features above, SSIS 2012 has plenty of additional features that further bolster the case for SSIS 2012 being a major and important new version. Here are some of the other features new in SSIS 2012:

- In SSIS 2005, there were many areas in the user interface where you had to type out a variable (as opposed to selecting from a list). SSIS 2008 took care of most of those areas, but left a small number unaddressed. SSIS 2012 has finally covered all the areas where a variable needs to be referenced.
- SSIS 2012 makes it easier to create a data viewer (fewer keystrokes).
- You can now populate a data flow row count as fast as a Nolan Ryan fastball. (Feel free to search for Nolan Ryan!)
- An expression result can now be longer than 4,000 characters.
- SSIS 2012 allows developers to set breakpoints as part of Script component debugging. Additionally, Microsoft upgraded the scripting engine to VSTA 3.0. The
Figure 12: You must create a new SSIS Database Catalog before deploying SSIS Projects.
Figure 13: To create the SSIS database catalog, you must enable CLR integration and provide a password.
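Behind the scenes, the catalog-creation dialog turns on CLR integration for the instance, because the SSIS catalog's internal stored procedures depend on SQLCLR. Done by hand, that step looks roughly like this:

```sql
-- Enable CLR integration on the instance; the SSISDB catalog
-- cannot be created while this option is off
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
```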
Figure 14: The new SSIS project deploy screen deploys SSIS projects to the SSISDB Catalog.
Figure 15: The SSIS Catalog database lists useful options after package deployment.
Figure 16: The SSIS Catalog database package options help you redefine parameters.
Figure 17: The SSIS Catalog database package options are used to redefine connection managers.
Figure 18: This report shows activity on the package execution.
Listing 2: Script to create staging table
USE AdventureWorks2008R2
GO

IF EXISTS (SELECT * FROM sys.objects
           WHERE object_id = OBJECT_ID('dbo.TempStagingCurrencyRates'))
    DROP TABLE [dbo].[TempStagingCurrencyRates]
GO

CREATE TABLE [dbo].[TempStagingCurrencyRates] (
    [CurrencyRateDate] [datetime] NOT NULL,
    [FromCurrencyCode] [nchar](3) NOT NULL,
    [ToCurrencyCode]   [nchar](3) NOT NULL,
    [AverageRate]      [money]    NOT NULL,
    [EndOfDayRate]     [money]    NOT NULL
)
GO
SSIS Team Blog talks more about this here: http://blogs.msdn.com/b/mattm/archive/2012/01/13/script-component-debugging-in-ssis-2012.aspx. The Merge and Merge Join transformations now use less memory than before. As a result, developers no longer need to set the MaxBuffersPerInput property (which was necessary to avoid consuming excess memory). You can now change the scope of a variable.
Just as other development paradigms have design patterns, this SSIS package represents a common SSIS design pattern: loading multiple sets of files into a temporary table, and then using a T-SQL MERGE statement to insert/update the data. The alternative approach of inserting/updating individual rows in the production table will likely perform worse than a single MERGE statement against a large set of data.
Figures 1 and 2 (along with Listings 2 and 3) show the control flow and data flow for a stripped-down version of a production SSIS package. The package does the following:

- Truncates a (staging) table that the package uses to temporarily hold new incoming data
- Retrieves a variable number of CSV (text) files from an FTP server (in this demo, currency exchange rate data, such as daily exchange rates from US to Mexico, US to Japan, etc.)
- Dynamically loops through the CSV files (where you don't know the names of the files at design time), opens the contents, performs some validations (such as checking that the currency codes are valid), and then inserts the data into the staging table
- If the number of files processed was greater than zero, calls a T-SQL stored procedure that utilizes a MERGE statement. The MERGE statement reads both the staging table and the production exchange rate table, and performs two actions: inserts any rows that exist in the staging table but not the production table, and updates any rows where the rates have actually changed.
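The MERGE step in that stored procedure can be sketched roughly as follows. This is an illustration rather than the article's actual procedure; the production table name (Sales.CurrencyRate from AdventureWorks) is an assumption here:

```sql
-- Upsert staged currency rates into the production table.
-- New rows are inserted; rows whose rates changed are updated.
MERGE Sales.CurrencyRate AS target
USING dbo.TempStagingCurrencyRates AS source
   ON  target.CurrencyRateDate = source.CurrencyRateDate
   AND target.FromCurrencyCode = source.FromCurrencyCode
   AND target.ToCurrencyCode   = source.ToCurrencyCode
WHEN MATCHED AND (target.AverageRate  <> source.AverageRate
               OR target.EndOfDayRate <> source.EndOfDayRate) THEN
    UPDATE SET target.AverageRate  = source.AverageRate,
               target.EndOfDayRate = source.EndOfDayRate
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CurrencyRateDate, FromCurrencyCode, ToCurrencyCode,
            AverageRate, EndOfDayRate)
    VALUES (source.CurrencyRateDate, source.FromCurrencyCode,
            source.ToCurrencyCode, source.AverageRate, source.EndOfDayRate);
```

Filtering the WHEN MATCHED clause on changed rates keeps the update set small, which matters when the staging table holds mostly unchanged rows.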
FRAMEWORK
Xiine is available on many platforms. Go to Kickstarter and tell us if you want CODE Magazine available in ePub format. The CODE Framework is available on CodePlex.
CODE Framework
You may already be aware of this since we have been running a series of articles on this subject, but did you see we have released our very own framework, called CODE Framework, completely free and open source on CodePlex? If you are engaged in business application development, regardless of whether that happens on Windows, the Web, mobile platforms (Windows Phone, iOS, Android), services, or even Windows 8 Metro, you should check out this offering. The framework can be used in full, or you may just pick out some interesting nuggets you may want to use or get inspired by. The license associated with this open source project is particularly unrestrictive and allows you to do just about anything you want free of charge. (Note, however, that this is a supported product, and premium support, training, and consulting are available for those who desire it, but this is completely optional.) Oh, and make sure you read our series of articles on the subject! For more information, visit www.codemag.com/framework.
What's Next?
A lot of stuff, really. Make sure you join us at our website (www.codemag.com) as well as on our Facebook site (www.facebook.com/CODEMagazine) to stay up to date with new developments, and also to let us know about ideas and feedback you may have! Markus Egger
Sahil Malik
www.winsmarts.com Sahil Malik is a Microsoft MVP, an INETA speaker, a .NET author, consultant, and trainer. Sahil loves interacting with fellow geeks in real time. His talks and trainings are full of humor and practical nuggets. You can find more about his trainings at http://www.winsmarts.com/training.aspx.
sandbox solutions, the reality is that even if you insisted on typing everything, the solution would still build and compile. With Visual Studio 11, the compiler will show you an error if you try to use farm-only API calls. Also, IntelliSense is improved so you see only relevant API calls in sandbox solutions.
JavaScript Improvements
JavaScript is a strange animal. The biggest challenge with JavaScript is that only the browser truly knows what the full runtime will look like at runtime. It is an incredibly difficult task for an external tool, such as Visual Studio, to fully replace browser-based debugging, and I do not expect Visual Studio 11 Beta to be able to do that. However, with Visual Studio 11 Beta, you can now debug JavaScript in SharePoint projects. Also, IntelliSense is provided when coding JavaScript in SharePoint projects, and URL resolution for JavaScript is enabled for visual Web Parts in sandboxed solutions. This means that you can reference JavaScript files located in SharePoint's content database in your SharePoint projects in Visual Studio. The code is automatically included at build time.
- Deactivate Features
- Retract Solution
- Delete Solution
- Add Solution
- Deploy Solution
- Activate Features
- Launch Browser
Optionally, you may also have conflict resolution and other similar commands. This approach works well, and historically we have developed on a machine that had Visual Studio and SharePoint installed locally. This is a necessity for SharePoint development. For farm solutions, this will continue to be the story. But sandbox solutions are interesting. Sandbox solutions do not need this level of operating-system access. They are simply uploaded to a document library called the solutions gallery, and they run directly from there. This is especially interesting for Office 365 developers. Office 365 developers need to work with a local instance of a SharePoint server, a bit similar to an on-premise SharePoint environment. They develop sandbox solutions, and they copy those over to Office 365 when they are done. Understand the steps: they purchase a subscription-based SharePoint installation so they don't have to run SharePoint, only to end up running on-premise SharePoint (albeit in a development environment) to develop for the subscription-based SharePoint. So that's not ideal. With Visual Studio 11 Beta, you will now be able to deploy sandbox solutions directly from Visual Studio 11 Beta to Office 365 or, for that matter, to any remote server. In order to do so, use the Publish command on the Build menu, select the Publish to SharePoint Site option, and provide the remote server's URL, such as https://someremoteserver.sharepoint.microsoftonline.com. To publish a SharePoint solution to a local server, select the Publish to File System option and provide a local system path.
Visual Studio 11 Beta includes new designers for content types and lists, making it so much easier than before to author these SPIs (SharePoint Project Items).
Once you have created a SharePoint 2010 project, you can now add the following item templates:

- Application Page
- BDC Model
- Content Type (which now also shows you a content type designer)
- Empty Element
- Event Receiver
- List (which now includes a list designer)
- Module
- Sequential Workflow and State Machine Workflow
- Silverlight Web Part
- Site Column
- Site Definition
- User Control
- Visual Web Part
- Web Part

In addition, Visual Studio 11 Beta now clearly tells you what works in a farm solution and what doesn't.
Profiling
Profiling has long been available to .NET applications. Profiling helps you identify bottlenecks within your application. SharePoint is a complex product, and sometimes it is not very obvious to the programmer why a certain piece of code works faster than another, because the underlying API can be so complicated. With Visual Studio 11 Beta, profiling is now available to SharePoint applications. This means that, in a SharePoint project, you can start using the Visual Studio Profiling Tools Performance Wizard to create a performance session. You do this by clicking Launch Performance Wizard on the Analyze menu in Visual Studio 11 Beta. This will pop open a wizard that asks you some basic questions, such as the parameters you would like to profile the application on, such as CPU usage. Alternatively, you can create a performance session in a unit test. You can do so by going to the Test Results window, opening the shortcut menu for the unit test, and selecting Create Performance Session. After creating a performance session, you simply use the application, and Visual Studio will then run a profile analysis on your application. This will then create a simple report for you to read, which will include a graph of CPU usage over time, a hierarchical function call stack, process and module views, a functions view, etc. This will help you pinpoint any bottlenecks in your application. All this is not new, except that now you can do all this with SharePoint.
Summary
It looks like Microsoft is serious, really serious, about making SharePoint development easier for all of us. But in fairness, because SharePoint is built on top of .NET, you will always see .NET tooling a step ahead of SharePoint. And that is okay, because a lot of things that happen in .NET sometimes do not gain traction. In the SharePoint world, we get tried and tested practices. Given everything else we can do with SharePoint, and development tools that constantly keep getting better, this continues to get more and more exciting. What is your favorite Visual Studio 11 Beta SharePoint development feature? Let me know. Until then, happy SharePointing. Sahil Malik
Advertisers Index
CODE Consulting: www.codemag.com/consulting
CODE Consulting / Mobile Apps: www.codemag.com/mobileapps
CODE Framework: www.codemag.com/framework
CODE Magazine: www.codemag.com/magazine
DevTeach Developers Conference: www.devteach.com
dtSearch: www.dtSearch.com
MadExpo: www.madexpo.us
SharePointTechCon: www.sptechcon.com
State of .NET: www.StateOfDotNet.com
Tech Conferences Inc.: www.devconnections.com
Xiine: www.xiine.com
www.telerik.com
www.xamalot.com
www.tower48.com
Gold Sponsor: Digital Escrow Services
This listing is provided as a courtesy to our readers and advertisers. The publisher assumes no responsibility for errors or omissions.
Types of Types
Computer language types generally exist along two axes, pitting strong versus weak and dynamic versus static, as shown in Figure 1. Static typing indicates that you must specify types for variables and functions beforehand, whereas dynamic typing allows you to defer it. Strongly typed variables know their type, allowing reflection and instance checks, and they retain that knowledge. Weakly typed languages have less sense of what they point to. For example, C is a statically, weakly typed language: variables in C are really just a collection of bits, which can be interpreted in a variety of ways, to the joy and horror (sometimes simultaneously) of C developers everywhere. Java is strongly, statically typed: you must specify variable types, sometimes several times over, when declaring variables. Scala, C# and F# are also strongly, statically typed, but manage with much less verbosity by using type inference. Many times, the language can discern the appropriate type, allowing for less redundancy.
Neal Ford
nford@thoughtworks.com Neal Ford is Software Architect and Meme Wrangler at ThoughtWorks, a global IT consultancy with an exclusive focus on end-to-end software development and delivery. He is also the designer and developer of applications, instructional materials, magazine articles, courseware, and video/DVD presentations, and author and/or editor of six books spanning a variety of technologies, including the most recent, The Productive Programmer. He focuses on designing and building large-scale enterprise applications. He is also an internationally acclaimed speaker, speaking at over 250 developer conferences worldwide and delivering more than 1,000 talks. Check out his website at nealford.com.
Many times, the language can discern the appropriate type, allowing for less redundancy.
This diagram is not new; this distinction has existed for a long time. However, a new aspect has entered into the equation: functional programming.
Functional Functionality
Functional programming languages have a different design philosophy than imperative ones. Imperative languages try to make mutating state easier, and have lots of features for that purpose. Functional languages try to minimize mutable state, and build more general-purpose machinery. When you find reusable code in an object-oriented system, you harvest it by capturing a class graph. It's no coincidence that every pattern in the Gang of Four book, Design Patterns: Elements of Reusable Object-Oriented Software, features one or more class diagrams. Functional reuse is a bit different. In functional programming languages, language designers have built general algorithmic machinery, based in part on the fascinating mathematics field of category theory, expecting data and customization via code or closure blocks. A common philosophy in the functional programming world, particularly in Lisp communities like Clojure, is to have only a few data structures (lists and maps) with many algorithms (filter, map, reduce, folds, etc.) that operate
20
codemag.com
on them. Doing so allows the designers to create hyper-efficient operations because they focus on just a few things. Another common philosophy in the functional world is to embrace immutability. When done at a low level, immutable data structures simplify many complex things: threading, serialization, etc. But functional doesn't dictate a typing system, as you can see in Figure 2. With their added reliance, even insistence, on immutability, the key differentiator between languages now isn't dynamic versus static, but imperative versus functional, with interesting implications for the way we build software.
ity. As in the original, DSLs sit on top, serving the same purpose. However, I also believe that DSLs will penetrate through all the layers of our systems, all the way to the bottom. This is exemplied by the ease in which you can write DSLs in languages like Scala (functional, statically strongly types) and Clojure (functional, dynamically strongly typed) to capture important things in concise ways.
I also believe that DSLs will penetrate through all the layers of our systems, all the way to the bottom.
This is a huge change, but it has fascinating implications. To see a glimpse of this, check out the architecture of the brand new commercial product Datomic. Its a functional database that keeps a full delity history of every change, allowing you to roll the database back in time to see snapshots of the past. In other words, an update doesnt destroy data; it creates a new version of it. Once you grok the implications of that, you may be answering to your boss about why you are destroying valuable historical trending data every time you update a record in your relational database. One cool Datomic use case: because you always have history, practices like Continuous Delivery, which relies on the ability to roll your database backwards and forwards in time, become trivial. Now, with relational databases, you use tools like Liquibase that have complex scripts to sync schema and data changes (best), you use snapshots to restore to known good restore points (just OK), or you do it manually (the horror!). Using an immutable database, you just move the time pointer backwards. Testing multiple versions of your application becomes trivial because you can directly synchronize schema and code changes. Datomic is built with Clojure, assuming functional constructs at the architectural level, and top of stack implications are amazing.
Polyglot Pyramids
In my blog back in 2006, I accidentally re-popularized the term [Polyglot Programming] (http://memeagora. blogspot.com/2006/12/polyglot-programming.html) and gave it a new meaning: taking advantage of modern runtimes to create applications that mix and match languages but not platforms. This was based on the realization that the Java and .NET platforms support over 200 languages between them, with the added suspicion that there is no one true language that can solve every problem. With modern managed runtimes, you can freely mix and match languages at the byte code level, utilizing the best one for a particular job. After I published my article, my colleague Ola Bini published a follow on paper discussing his Polyglot Pyramid, which suggests the way people might architect applications in the polyglot world, as shown in Figure 3. In Olas original pyramid, he suggests using more static languages at the bottommost layers, where reliability is the highest priority. Next, he suggests using more dynamic languages for the application layers, utilizing friendlier and simpler syntax for building things like user interfaces. Finally, atop the heap, are Domain Specic Languages, built by developers to succinctly encapsulate important domain knowledge and workow. Typically, DSLs are implemented in dynamic languages to leverage some of their capabilities in this regard. This pyramid was a tremendous insight added to my original post, but upon reection about current events, Ive modied it. I now believe that typing is a red herring, distracting from the important characteristic, which is functional versus imperative. My new Polyglot Pyramid appears in Figure 4. I believe that the resiliency we crave comes not from static typing but from embracing functional concepts at the bottom. If all of your core APIs for heavy lifting, like data access, integration, etc., could assume immutability, all that code would be much simpler. 
Of course, it changes the way we build databases and other infrastructure, but the end result will be guaranteed stability at the core. Atop the functional core, use imperative languages to handle workflow, business rules, user interfaces, and other parts of the system where developer productivity is a priority.
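To make the imperative-versus-functional contrast concrete, here is a minimal JavaScript sketch (my own illustration, not tied to any particular framework): the imperative habit mutates a record in place and destroys the old value, while the functional habit returns a new version and leaves the old one intact:

```javascript
// Imperative habit: update in place; the prior state is destroyed.
const mutableAccount = { owner: "pat", balance: 100 };
mutableAccount.balance = 50;   // the old balance of 100 is gone forever

// Functional habit: produce a new value and keep the old version.
const v1 = Object.freeze({ owner: "pat", balance: 100 });
const v2 = Object.freeze({ ...v1, balance: 50 }); // new version of the record

console.log(v1.balance); // 100, the "historical" record survives
console.log(v2.balance); // 50
```

Code built against frozen values like v1 and v2 never has to defend against someone else changing the data underneath it, which is exactly the stability-at-the-core argument above.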
Summary
Don't believe people that tell you that dynamic languages are dangerous; too much evidence exists to the contrary. Rather, ask what makes them safe, and make sure you apply that to all your development, regardless of language type. Rather than stress about dynamic versus static, the much more interesting discussion now is functional versus imperative, and the implications of this change go deeper than the previous one. In the past, we've been designing imperatively using a variety of different languages. Switching to the functional style is a bigger shift than just learning a new syntax, but the beneficial effects can be profound.

Neal Ford
codemag.com
21
Paul D. Sheriff
PSheriff@pdsa.com (714) 734-9792

Paul D. Sheriff is the President of PDSA, Inc. (www.pdsa.com) and a Microsoft Partner in Southern California. Paul acts as the Microsoft Regional Director for Southern California, assisting the local Microsoft offices with several of their events each year and being an evangelist for them. Paul has authored literally hundreds of books, webcasts, videos and articles on .NET, WPF, Silverlight, Windows Phone, and SQL Server. Check out Paul's new code generator called Haystack at www.CodeHaystack.com.
Design tools available for HTML5 are proliferating at a rapid rate; this means a developer can make Web applications look better without the assistance of a graphic artist.
Listing 1: The HTML for the default page of your web application
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>Business UI Samples</title>
    <link rel="stylesheet" type="text/css" href="Styles/Styles.css" />
    <style type="text/css">
        .mainMenu {
            color: White;
            float: none;
            text-decoration: none;
            display: inline-block;
            text-align: center;
            height: 0.5em;
            width: 5em;
            margin: 0.5em 0.5em 0.5em 0.5em;
            padding: 0.3em 1em 1.1em 1em;
            border: 0.09em solid black;
            box-shadow: 0.5em 0.5em rgba(0,0,0,0.8);
            -webkit-box-shadow: 0.5em 0.5em rgba(0,0,0,0.8);
            -moz-box-shadow: 0.5em 0.5em rgba(0,0,0,0.8);
            border-radius: 0.5em;
            -webkit-border-radius: 0.5em;
            -moz-border-radius: 0.5em;
        }
        footer {
            text-align: left;
            border-radius: 0.75em;
            -webkit-border-radius: 0.75em;
            -moz-border-radius: 0.75em;
        }
        p {
            margin-left: 1em;
        }
    </style>
</head>
<body>
    <nav class="backColor">
        <a href="Login.htm" class="mainMenu backColor">Login</a>
        <a href="ContactUs.htm" class="mainMenu backColor">Contact</a>
        <a href="Name.htm" class="mainMenu backColor">Name</a>
        <a href="Address.htm" class="mainMenu backColor">Address</a>
        <a href="User.htm" class="mainMenu backColor">User</a>
    </nav>
    <br /> <br /> <br />
    <p>Content goes in here...</p>
    <footer class="backColor">
        Samples of Business UI
    </footer>
</body>
</html>
The rules above are applied to the <footer> element and the backColor class is also applied with the background color. Keeping your background color separate from your other style rules allows you to change the background color in one place without affecting any other style rules. You can also see this type of styling on the <a href> elements used for the main navigation.
<a href="Login.htm" class="mainMenu backColor">Login</a>
The <header> element is used to identify an area of the page that contains descriptive information about this particular Web page. Just like any other normal HTML element, you can then apply a style to the <header>. In the login page, the words at the top, "Please Login to Access this Application," are the header area. The <header> element in this Web page looks like the following.
<header class="backColor"> Please Login to Access this Application </header>
Again, notice the use of the backColor class attribute to apply the background gradient to the header. In the <head> tag of the login page you will find the style shown in Listing 3 for the <header> element to give it the look you see in Figure 2. Another new element in HTML5 is called <figure>. This element is used as a wrapper around any image you display on your page. There is an optional <figcaption> element that can be used to display a caption for your figure. You won't use a figcaption on this figure because it isn't necessary for this particular page.
In the class attribute on each of the <a href> elements, you apply two styles. The mainMenu selector controls foreground color, margin, padding, and other rules while the backColor selector applies the background color.
Login Page
Most applications require a user to authenticate by typing in a login ID and a password. The login page, shown in Figure 2, introduces a few more HTML5 elements and attributes. The new elements are <header> and <figure>, and the new attributes are autofocus, required, and placeholder. For these new attributes, your mileage will vary across browsers. Opera 11.61 is the only browser that seems to render HTML5 consistently with all of these new attributes. I recommend you download this browser in order to try out the samples in this article.
Figure 2: HTML5 contains new attributes, such as placeholder, to help you tell the user what they should enter in each field.
Listing 2: Gradients are a great way to make your web pages look more natural to users
.backColor {
    /* Old browsers */
    background: rgb(181,189,200);
    /* IE9 SVG, needs conditional override of filter to none */
    background: url(data:image/svg+xml;base64,PD94bWwgdm );
    /* FF3.6+ */
    background: -moz-linear-gradient(top, rgba(181,189,200,1) 0%,
        rgba(130,140,149,1) 36%, rgba(40,52,59,1) 100%);
    /* Chrome,Safari4+ */
    background: -webkit-gradient(linear, left top, left bottom,
        color-stop(0%, rgba(181,189,200,1)),
        color-stop(36%, rgba(130,140,149,1)),
        color-stop(100%, rgba(40,52,59,1)));
    /* Chrome10+,Safari5.1+ */
    background: -webkit-linear-gradient(top, rgba(181,189,200,1) 0%,
        rgba(130,140,149,1) 36%, rgba(40,52,59,1) 100%);
    /* Opera 11.10+ */
    background: -o-linear-gradient(top, rgba(181,189,200,1) 0%,
        rgba(130,140,149,1) 36%, rgba(40,52,59,1) 100%);
    /* IE10+ */
    background: -ms-linear-gradient(top, rgba(181,189,200,1) 0%,
        rgba(130,140,149,1) 36%, rgba(40,52,59,1) 100%);
    /* W3C */
    background: linear-gradient(top, rgba(181,189,200,1) 0%,
        rgba(130,140,149,1) 36%, rgba(40,52,59,1) 100%);
    /* IE6-8 */
    filter: progid:DXImageTransform.Microsoft.gradient(
        startColorstr='#b5bdc8', endColorstr='#28343b', GradientType=0 );
}
Figure 3: The range input type renders as a slider on some browsers.

Needing no script for behavior like this is a great improvement that you can take advantage of for all data-entry pages. Instead, you use the new autofocus attribute on the <input> element. You will also find that there are two other new attributes on the Login ID text box: required and placeholder.
<input type="text" name="txtLogin" class="textInput" autofocus required placeholder="Enter Your Login ID" />
The key image at the top-right of the login page is defined in the HTML as the following.
<figure>
    <img src="Images/KeyComputer.png" width="60" height="60" alt="Login" />
</figure>
In the <head> tag of the login page you will find a style for this figure that will make it look as shown in Figure 2.
figure {
    float: left;
    vertical-align: top;
    text-align: center;
    margin: 2.2em 2em 0em 3em;
}
The required attribute stops a page from posting the data unless something is entered into the Login ID text box. You may receive a pop-up balloon informing you that the particular field is required, depending on the browser you are using to run the page. The placeholder attribute is used to display watermark text within the input control. This text, such as "Enter Your Login ID", appears within the text box until the user moves into the control. Then it disappears.
When you run this login Web page, you'll notice that your cursor is automatically placed in the Login ID text box. There is no JavaScript code required to make this happen.
If the user leaves the text box without filling in any text, the placeholder text reappears.
<input type="range" min="21" max="110" step="1"
       id="age" value="30"
       onchange="ageOutput.value = age.value;" />
<img src="images/Plus.png" class="plusminus"
     onclick="age.value = Number(age.value) + 1;
              ageOutput.value = age.value;" />
<output id="ageOutput"></output>
Personal Information
The Personal Information Web page shown in Figure 3 contains many of the same elements and attributes as the Login page and the navigation page. However, there are a couple of new HTML5 features used in this page. A <datalist> element is used in combination with the list attribute to create the Salutation drop-down. The new input type, range, creates the slider used for Your Age. Next to the slider is an <output> element used to display the value from the range slider. In order to make this work, you do have to write a little bit of JavaScript. Let's first take a look at the Salutation drop-down. Instead of using <select> and <option> elements, the new <datalist> element can be used. This makes the input look more like the auto-complete lists users are used to on search engines and many other websites. Once a user starts typing into this text box, the list automatically drops down and is filtered by the characters the user types. The user may choose a value from the list, or enter a new value. Below is the HTML5 code needed to create the Salutation element.
<input type="text" name="salutation" class="textInput"
       autofocus list="salutationList" />
<datalist id="salutationList">
    <option value="Dr">Dr</option>
    <option value="Mr">Mr</option>
    <option value="Mrs">Mrs</option>
    <option value="Miss">Miss</option>
</datalist>
In addition to the JavaScript in the two image controls, you might want to write a little JavaScript code when the page loads to pre-populate the <output> element with the value in the range control.
<script type="text/javascript">
    window.addEventListener('load', function () {
        // Get the age output control
        var out = document.getElementById('ageOutput');
        // Get the age control
        var age = document.getElementById('age');
        out.value = age.value;
    }, false);
</script>
The <output> element is another new semantic element that you can style in any manner you see fit. Its purpose is to allow you to place some output data in a specific location on your page without having to use a <p> or <span> tag.
Other Pages
In the sample that you download for this article, you will find three other business Web pages that you might find useful. These pages use the same HTML5 elements, attributes, and CSS 3 styles as the other pages that have been discussed in this article.
Notice that the <input> type is text, but adds the list attribute. The list attribute must be the ID of a valid <datalist> element. In the code above, the <datalist> is positioned right under the <input> element, but it can be anywhere on the Web page. Normal <option> elements are used to populate the data list. The HTML5 specification also says that you can attach a data attribute to the input type with a URI pointing to a valid XML file that can be used to fill the list. The <input type="range"> displays a numeric slider in Opera, Chrome, and Safari. There are new attributes on the range called min, max, and step. These attributes control the minimum value allowed, the maximum value allowed, and by how much to increment the value property when the user moves the slider. In the Personal Information page, I added two images around the range input: a minus and a plus sign. To these images, I added some JavaScript to the onClick events to decrement and increment the <output> element's value, respectively. Below is the HTML code used to create the slider and the <output> element.
<img src="images/Minus.png" class="plusminus"
     onclick="age.value = age.value - 1;
              ageOutput.value = age.value;" />
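One caution about inline handlers like these: an <input> element's value property is a string, so the + operator concatenates where the - operator coerces to a number. A quick JavaScript demonstration of the trap, plus the explicit-conversion fix:

```javascript
// input.value is always a string, e.g. "30", no matter the input type.
const value = "30";

console.log(value - 1);          // 29, "-" coerces strings to numbers
console.log(value + 1);          // "301", "+" concatenates instead!
console.log(Number(value) + 1);  // 31, convert explicitly before adding
```

This is why an increment handler should read Number(age.value) + 1 (or use valueAsNumber) rather than age.value + 1.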
Contact Us
Having a Contact Us page in your Web application allows a user to give you feedback about the application, report a bug, or ask you for more information about your product. Figure 4 shows an example of a Contact Us page that uses placeholders, autofocus, drop shadows, linear gradients, and many of the other techniques you have previously seen.
Figure 4: Having a Contact Us page is a great way to get feedback from a user.
When choosing a security question, be sure to make it something personal in nature that only the user would know. Look at the questions in the data list in Figure 6. These questions are things that only that user is likely to know.
Summary
In this article, you learned to use HTML5 and CSS 3 to create a variety of business application Web pages. Using rounded borders and drop shadows makes your pages look more modern. Employing linear gradients in your background colors helps your applications look more natural to new users. Taking advantage of the autofocus, required, and placeholder attributes greatly simplifies your Web pages and allows you to get rid of a lot of JavaScript. Of course, all of this assumes that HTML5 can be rendered on all browsers that your users use. Right now, this is just not the case. So, you will still need to use some fallback mechanisms such as Modernizr (www.modernizr.com) to ensure that your HTML5 applications will work with older browsers.

Paul D. Sheriff
Figure 5: This US Address page could be used in many Web applications where you must gather information from your users.
Figure 6: A Create User Profile page is needed in a Web application where you have users that sign in.
US Address Page
If you wish to gather address information from your user for processing an order, an Address page like the one shown in Figure 5 can come in handy. This page was created for addresses in the United States, but I'm sure you can modify it for your locale. Notice that the Save button on this page is larger than the Cancel button. Making your default button larger than the other buttons is a great way to inform your user that this is the button that will be executed when they press the Enter key.
User Profile
When asking a user to fill out his or her profile for your site, it's a good idea to ask for a security question and answer. If the user ever forgets a password, you can prompt for the login ID and the security question selected on the User Profile page shown in Figure 6. When the user supplies the correct answer, you can email the new password to the user.
Ted Neward
Ted Neward is an Architectural Consultant with Neudesic, LLC. He resides in the Pacific Northwest with his wife, dog, two sons, four cats, and eight laptops. You can reach Ted via Twitter at @tedneward, via email at , via his blog at , or by visiting the Denny's in Redmond at 2AM on most nights.
Lua
Lua is probably the most widely used scripting language you've never heard of. The language itself is a freely available, open-source, object-oriented(ish) language hosted at http://www.lua.org. The reason for its popularity is simple: Lua was designed from the beginning to be easily hosted from C/C++ code. This made it very attractive to game developers and designers, allowing them to write the high-performance code in C or C++ and the game rules and triggers and game logic in Lua, even opening up the Lua scripts to third parties (like gamers) to customize and extend. World of Warcraft does this, and it has spawned a cottage industry of gamers-turned-
programmers who customize their WoW experience with plugins and add-ons and extensions, making the WoW ecosystem just that much more fun and interesting. The original hosting interface for Lua is, as mentioned earlier, based on C/C++ code, but fortunately the Lua community is every bit as active as the Ruby or .NET communities. A .NET-based adapter layer, LuaInterface, hides the .NET-to-C++ interop parts, making it ridiculously trivial to host Lua code in a .NET application.
function Account:withdraw (amt)
    self.balance = self.balance - amt
end

function Account:deposit (amt)
    self.balance = self.balance + amt
end

function Account:getBalance ()
    return self.balance
end
This is a class in Lua. To be more precise, Lua has tables, which aren't relational tables, but essentially dictionaries of name/value pairs. In fact, technically, this is a collection of functions stored in a table that will make up an object. So, for example, when I write the following after it:
a = Account:new { balance = 0 }
a:deposit(100.00)
print(a:getBalance())
The console will print 100. Without going a lot deeper into Lua syntax (which is a pretty fascinating subject in its own right, by the way), one thing that's important to point out is that Lua lacks classes entirely, just as JavaScript does; both are prototype-based object languages, in that inheritance is a matter of following the prototype chain to find methods that aren't found on the object directly. This also means that you can change the method definitions on a single object if you wish:
b = Account:new { balance = 0 }

function b:withdraw (amt)
    -- no limit!
end

function b:getBalance ()
    return 100000.00
end

b:withdraw(10000.00)
print(b:getBalance())
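Since the comparison to JavaScript keeps coming up, here is a rough JavaScript rendering of the same trick (my sketch, not from the Lua documentation): "inheritance" is just a lookup chain, and any single object can shadow an inherited method without affecting its siblings:

```javascript
// A shared prototype plays the role of Lua's Account table.
const Account = {
  balance: 0,
  deposit(amt)  { this.balance += amt; },
  withdraw(amt) { this.balance -= amt; },
  getBalance()  { return this.balance; }
};

// "new" is just: make an object whose prototype chain points at Account.
const a = Object.create(Account);
a.balance = 0;
a.deposit(100);
console.log(a.getBalance()); // 100

// Per-object mutation: override methods on this one instance only.
const b = Object.create(Account);
b.balance = 0;
b.withdraw = function (amt) { /* no limit! */ };
b.getBalance = function () { return 100000.0; };
b.withdraw(10000);
console.log(b.getBalance()); // 100000
console.log(a.getBalance()); // 100, 'a' is untouched
```

Method lookup walks from the object to its prototype, which is essentially what Lua's __index metamethod arranges in the Account:new function.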
Writing Lua
Lua is an object-oriented(ish) language in that, on the surface of it, it appears to have a lot of the same basic concepts that the traditional imperative language developer will find comfortable: primitives, objects, and so on. In Lua, everything is dynamically resolved, so types don't play a major factor in writing code. Functions can be either part of objects or stand-alone. The usual imperative flow-control primitives (if/else, for, and so on) are here. Variables are untyped, though Lua does have a basic concept of type within it; specifically, variables are of only a few types: strings, numbers, and so on. Readers familiar with everybody's favorite Web scripting language will probably have already figured this out: in many ways, Lua conceptually fits into the same category as JavaScript. Lua's syntax is arguably much simpler, though, with far fewer gotchas within the language. For example, consider the following:
Account = { balance = 0 }

function Account:new (o)
    o = o or {}
    setmetatable(o, self)
    self.__index = self
    return o
end
This concept, of an object having no class, and an object's behavior being entirely mutable at runtime, is core to understanding JavaScript, but Lua's syntax is just different enough to keep the C#/C++/Java developer from thinking that she's on familiar ground.
Hosting Lua
It's a simple matter to create a C# (or VB, but sorry VBers, some habits are just too hard to break) project and add the LuaInterface assemblies to the project. Specifically, both of the assemblies in the Built directory, lua51.dll and LuaInterface.dll, are required. The Lua interpreter is entirely compiled into managed code, so both are standard .NET IL assemblies, and thus there are no weird security permission issues to worry about. So, after doing the Add Reference thing in Visual Studio, try this:
using System;
using LuaInterface;

namespace LuaHost
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("LuaHost v0.1");
            var lua = new Lua();
            lua.DoString("io.write(\"Hello, world, " +
                "from \",_VERSION,\"!\\n\")");
        }
    }
}
Prolog.NET is hosted at codeplex.com. Go grab it, install it, and fire up the Prolog.NET Workbench. (Just for the record, there's a second Prolog.NET implementation at http://prolog.hodroj.net/, which appears to be slightly newer, appears to work with Mono, and appears to have a similar kind of feature set as the CodePlex project version; I chose to use the first one, but I suspect either one would work just as well in practice.)
Writing Prolog
In the Command pane of the Workbench, type in the following snippet of Prolog, making sure to include the period (which is a statement terminator, like the ; in C#/Java/C++) at the end, then click Execute at the bottom of that pane:
likes(john,mary).
As you might well infer, this is essentially a hard-wired version of what we did at the command line a few minutes ago: greet the world from Lua. Having gotten this far, it's fairly easy to see how this could be expanded: one thought would be to create a REPL (Read-Eval-Print-Loop, an interactive console) that reads a line from the user, feeds it to Lua, repeats, and host that from within WinForms or WPF. Or even Visual Studio. (Which Microsoft already did, all you WoW players out there.) If you're a Web developer, write an ASP.NET handler that executes Lua on the server, a la Node, but using a language that was actually designed, instead of cobbled together over a weekend.
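The REPL idea is less work than it sounds, because the loop itself is trivial. A minimal JavaScript sketch of its shape, using eval as a stand-in for the hosted interpreter (in the real thing, that callback would be lua.DoString or a DLR engine's execute method), and an array as a stand-in for console input:

```javascript
// Read-Eval-Print-Loop, minus the interactive Read: each "line" of
// source is handed to an evaluator, and the printed results collected.
function repl(lines, evaluate) {
  const results = [];
  for (const line of lines) {
    results.push(String(evaluate(line))); // "Print" = stringify the result
  }
  return results;
}

// eval stands in here for whatever hosted engine evaluates the source.
const out = repl(["1 + 1", "'hello'.toUpperCase()"], (src) => eval(src));
console.log(out); // [ '2', 'HELLO' ]
```

Swap the array for a readline prompt and the arrow function for the hosted engine's evaluate call, and you have the interactive console described above.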
This line says that john likes mary, and Prolog accepts that into the system by responding with Success in the Transcript pane above the Command pane. This line, in Prolog terminology, is a fact. (Well, to Prolog it's a fact; to Mary it may be an unfortunate situation resulting from having made eye contact with a smarmy co-worker.) We can assert other kinds of facts into Prolog; in fact, we can assert lots of different kinds of facts, because Prolog knows nothing about the meaning behind the words john, mary or likes, only that A and B are linked by C. So additional facts might look like:
likes(ted,macallan25).
likes(jess,macallan25).
likes(miguel,macallan25).
likes(charlotte,redwine).
valuable(gold).
female(jess).
female(charlotte).
male(ted).
male(miguel).
gives(ted,redwine,charlotte).
Prolog
Most of the time when we're writing code for a customer, we expect the customer to tell us how to get things done. There are some projects, however, where the customer doesn't exactly know the right answer ahead of time, which makes it hard to know if the code is generating the right answer. Consider, for example, Sudoku puzzles. The puzzle always has an answer (assuming it's a legitimate puzzle, of course), and we have ways of verifying if a potential answer is correct, but neither the developer nor the customer (the Sudoku player) has that answer in front of them. (If this seems like a spurious example, then consider certain kinds of simulations or forecasting or other data-analysis kinds of work. At least with Sudoku we know we have one and only one right answer, so let's work with that for now.) While writing a Sudoku solver in C# can be done, back in the AI research days, Prolog was developed to do precisely this kind of thing: take facts asserted into the system, and when asked to examine an assertion, determine whether that assertion could be true given the facts present within the system. If that sounded like gibberish, stay with me for a second. Examples will help.
These facts tell us that Ted, Jess and Miguel like macallan25 while Charlotte likes redwine, gold is valuable, Charlotte and Jess are female while Miguel and Ted are male, and Ted gives redwine to Charlotte (probably to impress her on a date or something). These facts collectively form a database in Prolog, and, like the more familiar relational form, the Prolog database allows us to issue queries against it:
:- likes(ted, macallan25).
To Prolog, this is a question: does Ted like Macallan 25? Very much so, yes, and it turns out that Prolog agrees; it will respond with a yes or success response, depending on the Prolog implementation you're using. In this particular case, Prolog is looking at the verb (likes) joining the two nouns (ted and macallan25, what Prolog calls objects), and determining if there is a V/N1/N2 pairing in the facts database and,
as we saw earlier, there is, so it responds with a success response. But if we ask it a different query:
:- likes(ted, redwine).
To the query likes(ted, Person, macallan25), asking who Ted likes that likes Macallan 25, Prolog comes back, correctly, with the answer jess. Prolog is, very simply, an inference engine, and it shares a lot of similarities with rules engines like Drools.NET or iLOG Rules, but in a language syntax, and something that we can call from .NET code. If these seem like simplistic scenarios, consider a trickier one: a fast-food restaurant chain needs a software system to help them manage employee schedules. Anyone who's ever worked as a manager of a restaurant (or assistant manager, when the manager decided to delegate that job to his high-school assistant manager to teach a sense of responsibility, and of course it had nothing to do with his absolute loathing of the task, not that I'm still bitter or anything) knows what a pain it is. Every employee has immutable schedule restrictions, particularly in a college town where schedules change with every quarter or semester, not to mention the complications around seniority and the implicit "more senior people get first pick at the schedule," and so on. This is exactly the kind of problem that Prolog excels at: we can assert each employee's schedule restrictions and preferences as facts, set up some rules about how often they can work (no back-to-back shifts, for example), and then let Prolog figure out the permutations of the schedule for final human approval. In fact, Prolog.NET has a sample along these lines (Listing 1). Walking through all of this is a bit beyond the scope of this article, but the code starts with some declarations of the days of the week, the shifts in the plant, and a definition that a WorkPeriod is a given shift/day combination. Then we get into the employee/shift combinations (a shiftAssignment) and the employee/day combinations (a dayAssignment), and finish with a declaration of rules that create the three-way binding between an employee, a shift, and a day to create a given schedule.
To the likes(ted, redwine) query, Prolog will respond with a no or failure. Which totally isn't true, but Prolog only knows about the facts that were asserted into its database; if it's not in the database, then to Prolog it doesn't exist. Prolog will also allow you to put variables instead of objects into the query, and let Prolog fill the variable with the objects that match:
:- likes(Person, macallan25).
Here, Prolog knows that Person is a variable because it starts with a capital letter. (Yes, seriously. Prolog is that case-sensitive.) And it responds by telling us every object that likes macallan25, which in this case is three objects: miguel, ted and jess. Now, suppose that Ted likes anyone that is female and who in turn likes a particular kind of beverage. (Ed. note: Meaning, Ted likes a Person who is female that likes a particular beverage. Jess works because she is female (first clause) and because she likes the Beverage Ted passed in (Scotch).) We can express this in Prolog as a rule:
likes(ted, Person, Beverage) :-
    female(Person),
    likes(Person, Beverage).
Now we present it with the query, who does Ted like that likes Macallan 25?
:- likes(ted, Person, macallan25).
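To demystify what the engine is doing with a variable like Person, here is a deliberately tiny JavaScript matcher over a list of ground facts: a toy of my own, nowhere near real Prolog's unification, but it shows how a capitalized term becomes a binding to collect rather than a value to compare:

```javascript
// Facts as [verb, noun1, noun2] triples, echoing the database above.
// One-place facts are padded with null in this toy representation.
const facts = [
  ["likes", "ted", "macallan25"],
  ["likes", "jess", "macallan25"],
  ["likes", "miguel", "macallan25"],
  ["likes", "charlotte", "redwine"],
  ["female", "jess", null],
];

// Prolog's convention: a term starting with a capital letter is a variable.
const isVariable = (t) => typeof t === "string" && /^[A-Z]/.test(t);

// Return every binding of the query's variables that matches some fact.
function query(verb, n1, n2) {
  const answers = [];
  for (const [v, a, b] of facts) {
    if (v !== verb) continue;
    if (!isVariable(n1) && n1 !== a) continue; // constants must match exactly
    if (!isVariable(n2) && n2 !== b) continue;
    const binding = {};
    if (isVariable(n1)) binding[n1] = a;       // variables capture the value
    if (isVariable(n2)) binding[n2] = b;
    answers.push(binding);
  }
  return answers;
}

// :- likes(Person, macallan25).
console.log(query("likes", "Person", "macallan25"));
// [ { Person: 'ted' }, { Person: 'jess' }, { Person: 'miguel' } ]
```

Real Prolog layers rules, recursion, and backtracking on top of this matching step, but the core move is the same: try each fact, bind variables where the constants line up.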
It's a great non-trivial example to have a look at, plus it demonstrates the intersection of Prolog and .NET, since the sample itself is compiled into a small WPF app displaying the schedule permutations in a grid.
It is this meta facility that lends Lisps (and, therefore, Scheme) much of the power that is commonly ascribed to the languages within this family.
Hosting Prolog.NET
Like the LuaInterface situation earlier, hosting the Prolog.NET implementation is pretty straightforward (Listing 2). In a C# project, add the Prolog.dll assembly, found in the root of the Prolog.NET installation directory, to your project. Obtain a Prolog.Program instance, and use the Parser found in the Prolog namespace to capture Prolog facts and rules and define queries to be run against them. As you can see, facts and queries are parsed separately and added to the PrologMachine instance, and then executed. The API permits execution in a single-step fashion, allowing for on-the-fly examination of the machine during its processing, but for non-debugging scenarios, RunToSuccess() is the preferred approach.
Writing Scheme
As already mentioned, everything in Lisp is a list, so all programming in Scheme will basically be putting together ()-bracketed lists of things, typically in a Polish-notation fashion. So, for example, typing the following:
> (* 5 20)
Another Prolog
Another approach to Prolog-on-the-CLR is that taken by P#, a Prolog source-to-source translator that takes Prolog input and generates C# files that can be compiled into your assembly. You can find it at http://www.dcs.ed.ac.uk/home/jjc/psharp/psharp-1.1.3/dlpsharp.html if you are interested.
Yields a response of 100 because that is what applying the * (multiplication) function on arguments of 5 and 20 produces. This list-based syntax alone is fascinating because it means that Scheme can write functions that accept a varying number of parameters without significant difficulty (what the academics sometimes refer to as flexible arity), meaning that we can also write:
> (* 5 20 20)
Scheme
No conversation on dynamic languages can be called complete without a nod and a tip of the hat to one of the granddaddies of all languages, Lisp, and its Emacs-hosted cousin, Scheme. Scheme, like Lisp, is conceptually a very simple language (Everything is a list!) with some very mind-blowing concepts to the programmer who hasn't wandered outside of Visual Studio much. (Code is data! Data is code!) Scheme, as they say, is a Lisp, which means that it syntactically follows many of the same conventions that Lisp does: all program statements are in lists, bounded by parentheses, giving Scheme code the chance to either interpret the list as a method call or command, or do some processing on the list before passing it elsewhere to be operated upon.
And get back 2000. Of course, if all we wanted was a reverse-reverse-Polish calculator, we'd ask some long-haired dude whose family name used to be Niewardowski to recite multiplication tables while walking backwards. Scheme also allows you to store values in named storage using (define):
> (define pi 3.14159)
> (define radius 10)
> (* pi (* radius radius))
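The flexible-arity point deserves a side-by-side. In JavaScript (my illustration, not anything from the Scheme spec), rest parameters plus a fold give you the same variadic * with no special machinery:

```javascript
// A variadic multiply: accept any number of arguments and fold them
// together, starting from the multiplicative identity.
const mul = (...xs) => xs.reduce((acc, x) => acc * x, 1);

console.log(mul(5, 20));      // 100
console.log(mul(5, 20, 20));  // 2000
console.log(mul());           // 1, the identity, just like Scheme's (*)
```

The Scheme version simply makes this the default: every operator is a function applied to however many arguments the list happens to contain.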
namespace SchemeHost
{
    class Program
    {
        static void Main(string[] args)
        {
            var slp = ScriptDomainManager.CurrentManager
                .GetLanguageProvider(
                    typeof(IronSchemeLanguageProvider));
            var se = slp.GetEngine();
(define) isn't limited to just defining variables; we can also (define) new functions, like so:
> (define (square x) (* x x))
> (* pi (square radius))
Although it may look a little overwhelming, when you peer into it, a number of things leap out: there is a correlation between HTML tags and Scheme functions ((h2 ...), (form ...), and so on), and the open-ended nature of Scheme lists makes it easy to extend the language to incorporate templatized elements into the rendered HTML. For example, consider this snippet from the above:
`(div ,@(map display-entry blogdata))
Programming in Prolog
(Clocksin, Mellish)
One of the things apparent when we look at Scheme code is that the distinctions between variables and methods are quite fuzzy when compared against languages like C# and VB. Is pi a function that returns a value, or is it a variable storing a value? And, quite honestly, do we care? Should we? (You might, but you shouldnt. Unlearn, young Jedi, unlearn.)
Hosting Scheme
If there's a theme to this article, it's that hosting language X is pretty easy, and IronScheme really is no different. From a new C# Console project, add three assembly references from the IronScheme root directory: Microsoft.Scripting.dll (the DLR), IronScheme.dll, and IronScheme.Closures.dll. See Listing 3. As you can see, getting an IronScheme engine up and running is pretty straightforward: just ask the DLR's ScriptDomainManager to give you an IronScheme engine instance. Once there, we only need to pass the Scheme expressions in, and IronScheme will hand back the results. If those expressions resolve into functions, such as in the case above with foo, then we need only cast them to Callable instances, and we can call through to them with no difficulty. Oh, and for the record? IronScheme is ridiculously easy to get started using on a Web MVC project, because the IronScheme authors have already built the necessary hooks (and Visual Studio integration!) to create an MVC application. In the IronScheme implementation, check out the websample directory, which contains a couple of different samples (as well as the IronScheme documentation). Configure an ASP.NET site around that directory, then hit it with an HTTP request of /blog, and explore the 100% IronScheme-written blog engine. Admittedly, it's pretty tiny, but then again, so is the code. And the IronScheme way to represent an HTML view isn't all that hard to read, either (Listing 4).
This looks pretty innocuous, but here the power of Scheme's functional nature kicks in: we use the map function to take a function, display-entry, and map it over every element in the blogdata collection, which effectively iterates through the collection and generates the HTML for each entry. To those willing to look past the arcane ()-based syntax, Scheme offers all the power of a functional language, combined with the flexibility of a dynamic one. Is this likely to take over from ASP.NET MVC written in C# or VB any time soon? Maybe not, but long-time practitioners of Lisp and Scheme have often touted how easy it is to get things done in these languages thanks to the ability to build abstractions on top of abstractions, so maybe it's worth a research spike for a while, just to see.
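For C# readers, the map call above plays the same role as LINQ's Select. As a minimal sketch (the DisplayEntry function and blogdata collection here are hypothetical C# stand-ins for the Scheme versions, not code from the article):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class MapSketch
{
    // Hypothetical C# stand-in for the Scheme display-entry function
    static string DisplayEntry(string entry)
    {
        return "<div>" + entry + "</div>";
    }

    static void Main()
    {
        var blogdata = new List<string> { "First post", "Second post" };

        // Equivalent in spirit to: `(div ,@(map display-entry blogdata))
        var html = "<div>" +
                   string.Join("", blogdata.Select(DisplayEntry)) +
                   "</div>";

        Console.WriteLine(html);
    }
}
```

As in the Scheme version, the transformation over the collection is expressed as a single mapping step rather than an explicit loop.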
Clojure-CLR
No discussion of a modern Lisp would be complete without mentioning Clojure, a Lisp originally born on the JVM, but since ported to the CLR. Clojure is a Lisp, but it's not Common Lisp or Scheme. Its creator, Rich Hickey, put some fascinating ideas about state and data into the language, making it a powerful tool for doing things in parallel. If you're a Java programmer, picking up Clojure is a highly-recommended step to take; if you're a .NET programmer, however, although still recommended, it's not quite as easy, owing to the fact that all of the documentation, articles and books on Clojure are focused specifically on the JVM and Java APIs. Still, for those willing to brace themselves for a little rough sailing at first, Clojure-CLR can be a powerful experiment, and it's a natural complement to learning IronScheme. Clojure, unlike most Lisps, has no interpreter, meaning that Clojure-CLR compiles everything into IL, eliminating concerns about the poor performance of an interpreted language.
Moving On
Certainly the crop of .NET languages doesn't end here. In fact, trying to trim the list down from all the languages I could have discussed was one of the hardest things about writing this article: languages like Cobra, Nemerle, Boo, and the aforementioned IronPython and IronRuby are all powerful and useful languages that can significantly change the development arc of a project if used correctly.
No one language is going to be the silver bullet for all your development ills; what we gain in using a dynamic language, we lose in taking on some of the risks inherent in that language. For example, almost every Ruby developer I've ever talked to makes it very clear that in a Ruby project, unit tests are not just a nice-to-have, but a necessity for ensuring the project succeeds. The language offers a tremendous amount of flexibility, but at a price. At the end of the day, that's probably something that should be said about all the tools we use. Caveat emptor. Ted Neward
Rick Strahl
rstrahl@west-wind.com Rick Strahl is the big Kahuna and janitor at West Wind Technologies on Maui, Hawaii. The company specializes in Web and distributed application development, develops several commercial and free tools, and provides training and mentoring with a focus on .NET, IIS and Visual Studio. Rick's an ASP.NET Insider, a frequent contributor to magazines and books, and a frequent speaker at developer conferences and user groups. For more information, please visit: www.west-wind.com/weblog/
ASP.NET Web API differentiates itself from existing Microsoft solutions in that it was built from the ground up around the HTTP protocol and its messaging semantics.
Web API also requires very little in the way of configuration, so it's very quick and unambiguous to get started. To top it all off, you can also host the Web API in your own applications or services. Above all, Web API makes it extremely easy to create arbitrary HTTP endpoints in an application without the overhead of a full framework like WebForms or ASP.NET MVC. Because Web API works on top of the core ASP.NET stack, you can plug Web APIs into any ASP.NET application.
Most mobile devices, like phones and tablets, run apps that use data retrieved from the Web over HTTP.
Although all of these can accomplish the task of returning HTTP responses, none of them are optimized for the repeated tasks that an HTTP service has to deal with. If you are building sophisticated Web APIs on top of these solutions, you're likely to either repeat a lot of code or write significant plumbing code yourself to handle various API requirements consistently across requests.
Getting Started
I'll create a new empty ASP.NET application to demonstrate that Web API can work with any kind of ASP.NET project.
Web API libraries. If it isn't installed, you can download it from http://www.asp.net/web-api. Alternately, you can also download the latest ASP.NET MVC/Web API source code from the CodePlex site (aspnetwebstack.codeplex.com). Because the API is still in flux, I used CodePlex code for my samples. The samples include the current binaries, so to run them you don't actually need to download anything.
cs and copy the code into it. There's a static AlbumData class with a static CreateSampleAlbumData() method that creates a short list of albums on a static .Current property that I'll use for the examples. Before we look at what goes into the controller class, though, let's hook up routing so we can access this new controller.
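Listing 1 itself isn't reproduced in this excerpt. As a hedged sketch of what the text describes, the HTTP Verb-based route (with no {action} segment) registered in Global.asax's Application_Start would look roughly like this; the route name is an assumption, and the template is inferred from the article's /albums and /albums/{title} URLs:

```csharp
// Sketch of the HTTP Verb-based Web API route the article describes.
// No {action} parameter: the HTTP verb selects the controller method.
// Route name is an assumption; template matches the sample URLs.
RouteTable.Routes.MapHttpRoute(
    name: "AlbumVerbRoute",
    routeTemplate: "albums/{title}",
    defaults: new
    {
        title = RouteParameter.Optional,
        controller = "AlbumApi"   // resolves to AlbumApiController
    }
);
```

With this in place, a GET to /albums reaches a Getxxx() method, and a GET to /albums/Dirty%20Deeds binds the title route value to a method parameter.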
Figure 1: This is how you create a new Controller Class in Visual Studio.
with Get. So methods called Get() or GetAlbums() are matched by a GET request, and a POST request maps to a Post() or PostAlbum(). Web API matches a method by name and parameter signature against a route, query string or POST values. In lieu of the method name, the [HttpGet], [HttpPost], [HttpPut], [HttpDelete], etc. attributes can also be used to designate the accepted verbs explicitly if you don't want to follow the verb naming conventions.
Web API shares many concepts with ASP.NET MVC, and the implementation of your API logic is done by implementing a subclass of the System.Web.Http.ApiController class. Each public method in the implemented controller is a potential endpoint for the HTTP API, as long as a matching route can be found to invoke it. The class name you create should end in Controller, which is how Web API matches the controller route value to figure out which class to invoke. Inside the controller you can implement methods that take standard .NET input parameters and return .NET values as results. Web API's binding tries to match POST data, route values, form values or query string values to your parameters. Because the controller is configured for HTTP Verb-based routing (no {action} parameter in the route), any methods that start with Getxxxx() are called by an HTTP GET operation. You can have multiple methods that match each HTTP Verb as long as the parameter signatures are different and can be matched by Web API. In Listing 3, I create an AlbumApiController with two methods to retrieve a list of albums and a single album by its title. To access these two methods, you can use the following URLs in your browser:

http://localhost/aspnetWebApi/albums
http://localhost/aspnetWebApi/albums/Dirty%20Deeds

Note that you're not specifying the actions of GetAlbum or GetAlbums in these URLs. Instead, Web API's routing uses the HTTP GET verb to route to these methods that start with Getxxx().
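Listing 3 isn't included in this excerpt; as a sketch of the controller shape it describes (the Album model and the AlbumData.Current store are assumed from the text, not copied from the listing), it might look roughly like this:

```csharp
// Sketch of the verb-routed controller the article describes.
// Album and AlbumData.Current are assumed shapes from the text.
public class AlbumApiController : ApiController
{
    // GET /albums  (matched because the method name starts with Get)
    public IEnumerable<Album> GetAlbums()
    {
        return AlbumData.Current;
    }

    // GET /albums/{title}
    public Album GetAlbum(string title)
    {
        return AlbumData.Current
                        .SingleOrDefault(alb => alb.AlbumName == title);
    }
}
```

Because both methods start with Get and have different parameter signatures, Web API can route a GET with no title to the first and a GET with a title route value to the second.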
Routing in Web API works the way routing works in ASP.NET MVC, but adds the ability to route by HTTP Verb in lieu of specifying a controller action.
Although HTTP Verb routing is a good practice for REST-style resource APIs, it's not required, and you can still use more traditional routes with an explicit {action} route parameter. When {action} is supplied, the HTTP verb routing is ignored. I'll talk more about alternate routes later. When you're finished with the initial creation of files, your project should look like Figure 2. Notice that adding a Web API controller to your project adds a long string of new assemblies to your project; Web API is designed in a very modular fashion. Web API (and MVC 4.0) is shipped as an add-on library that deploys the assemblies into your site's bin folder and can be xcopy deployed; no explicit installation is required.
Figure 2: The initial project has the new API Controller and Album model.
Content Negotiation
When you access any of the URLs above from a browser, you get either an XML or JSON result returned back. The album list result for Chrome 17 and Internet Explorer 9 is shown in Figure 3. Notice that the results are not the same: Chrome returns an XML response and IE9 returns a JSON response. Whoa, what's going on here? Shouldn't we see the same result? Actually, no. Web API determines what type of content to return based on Accept headers. HTTP clients, like browsers, use Accept headers to specify what kind of content they'd like to see returned. Browsers generally ask for HTML first, followed by a few additional content types. Chrome (and most other major browsers) ask for:
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Note that Chrome's Accept header includes application/xml, which Web API finds in its list of supported media types, and returns an XML response. IE9 doesn't include an Accept header type that works on Web API by default, and it returns its default format, which is JSON. This is an important and very useful feature that was missing from previous Microsoft REST tools: Web API automatically switches output formats based on HTTP Accept headers. Nowhere in the server code above do you have to explicitly specify the output format. Rather, Web API determines what format the client is requesting based on the Accept headers and automatically returns the result based on the available formatters. This means that a single method can handle both XML and JSON results. Using this simple approach makes it very easy to create a single controller method that can return JSON, XML, ATOM or even OData feeds by providing the appropriate Accept header from the client. By default, you don't have to worry about the output format in your code.

Figure 3: Web API responses can vary depending on the browser used, demonstrating Content Negotiation in action as these two browsers send different HTTP Accept headers.

Web API automatically switches output formats based on the HTTP Accept header of the request. The default content type, if no matching Accept header is specified, is JSON.

Note that you can still specify an explicit output format if you choose, either globally by overriding the installed formatters, or individually by returning a lower-level HttpResponseMessage instance and setting the formatter explicitly. There will be more on that in a minute. Along the same lines, any content sent to the server via POST/PUT is parsed by Web API based on the HTTP Content-Type of the data sent. The same formats allowed for output are also allowed on input. Again, you don't have to do anything in your code; Web API automatically performs the deserialization from the content.

Resources

Sample source code on GitHub (http://goo.gl/8mhIh)
ASP.NET MVC and Web API source on CodePlex: http://aspnetwebstack.codeplex.com
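From the client side, the same negotiation can be driven explicitly. As a sketch (the URL assumes the article's sample application is running locally), an HttpClient request can pin the Accept header to force a JSON response regardless of browser defaults:

```csharp
// Sketch: request JSON explicitly by setting the Accept header.
// Assumes the article's sample site is running at this local URL.
var client = new HttpClient();
client.DefaultRequestHeaders.Accept.Add(
    new MediaTypeWithQualityHeaderValue("application/json"));

var response = client.GetAsync(
    "http://localhost/aspnetWebApi/albums").Result;

string json = response.Content.ReadAsStringAsync().Result;
```

Swapping the media type to application/xml in the same code would flip the response format, with no change to the server.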
HttpResponseMessage is similar to ActionResult in ASP.NET MVC in that it's a common way to return an abstract result message that contains content. HttpResponseMessage is parsed by the Web API framework using standard interfaces to retrieve the response data, status code, headers and so on. Web API turns every response, including those controller methods that return static results, into HttpResponseMessage instances. Explicitly returning an HttpResponseMessage instance gives you full control over the output and lets you bypass Web API's post-processing of the HTTP response on your behalf. HttpResponseMessage allows you to customize the response in great detail. Web API's attention to detail in the HTTP spec really shows; many HTTP options are exposed as properties and enumerations with detailed IntelliSense comments. Even if you're new to building REST-based interfaces, the API guides you in the right direction for returning valid responses and response codes. For example, assume that I always want to return JSON from the GetAlbums() controller method and ignore the default media type content negotiation. To do this, I can adjust the output format and headers as shown in Listing 4. This example returns the same IEnumerable<Album> value, but it wraps the response into an HttpResponseMessage so you can control the entire HTTP message re-
Figure 4 shows this and the next example's HTML output. You can check out the complete HTML and script code at http://goo.gl/Ix33C. The result from the getJSON() call is a JavaScript object of the server result, which comes back as a JavaScript array. In the code, I use knockout.js to bind this array into the UI, which, as you can see, requires very little code, instead using knockout's data-bind attributes to bind server data to the UI. Of course, this is just one way to use the data; it's entirely up to you to decide what to do with the data in your client code. Along the same lines, I can retrieve a single album to display when the user clicks on an album. The response returns the album information and a child array with all the songs. The code to do this is very similar to the last example, where I pulled the albums array:
$(".albumlink").live("click", function () {
    var id = $(this).data("id");  // title
    $.getJSON("albums/" + id, function (album) {
        ko.applyBindings(album, $("#divAlbumDialog")[0]);
        $("#divAlbumDialog").show();
    });
});
Here the URL looks like this: /albums/Dirty%20Deeds, where the title is the ID captured from the clicked element's data-id attribute.
method explicitly. Here, I used the [HttpGet] attribute to serve the image. Another option to handle the error could be to return a fixed placeholder image if no album can be matched or the album doesn't have an image.
If you want complete control over your HTTP output and the formatter used, you can return an HttpResponseMessage result rather than raw .NET values.
When returning an error code, you can also return a strongly typed response to the client. For example, you can set the 404 status code and also return a custom error object (ApiMessageError is a class I defined) like this:
return Request.CreateResponse<ApiMessageError>(
    HttpStatusCode.NotFound,
    new ApiMessageError("Album not found")
);
sult, including the headers, formatter and status code. In Listing 4, I explicitly specify the formatter using the JsonMediaTypeFormatter to always force the content to JSON. If you prefer to use the default content negotiation with HttpResponseMessage results, you can create the Response instance using the Request.CreateResponse method:
var resp = Request.CreateResponse<IEnumerable<Album>>(
    HttpStatusCode.OK, albums);
If the album can be found, the image will be returned. The image is downloaded into a byte[] array and then assigned to the result's Content property. I created a new ByteArrayContent instance and assigned the image's bytes and the content type so that it displays properly in the browser. There are other xxxContent() objects available: StringContent, StreamContent, ByteArrayContent, MultipartContent, and ObjectContent are at your disposal to return just about any kind of content. You can create your own Content classes if you frequently return custom types and handle the default formatter assignments that should be used to send the data out. Although HttpResponseMessage results require more code than returning a plain .NET value from a method, they allow much more control over the actual HTTP processing than automatic processing. They also make it much easier to test your controller methods, as you get a response object that you can check for specific status codes and output messages rather than just a result value.
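Listing 5 isn't reproduced in this excerpt. As a hedged sketch of the pattern it describes (the album lookup, the AlbumData store and the ImageUrl property are assumptions based on the text), returning binary content with ByteArrayContent looks roughly like this:

```csharp
// Sketch of Listing 5's pattern: return binary image bytes, or a
// typed 404 error if no album matches. Album/AlbumData/ImageUrl
// are assumed shapes from the article's description.
[HttpGet]
public HttpResponseMessage AlbumArt(string title)
{
    var album = AlbumData.Current
                         .SingleOrDefault(alb => alb.AlbumName == title);
    if (album == null)
        return Request.CreateResponse<ApiMessageError>(
            HttpStatusCode.NotFound,
            new ApiMessageError("Album not found"));

    // Contrived download, as in the article's Amazon example
    byte[] imageBytes = new WebClient().DownloadData(album.ImageUrl);

    var result = new HttpResponseMessage(HttpStatusCode.OK);
    result.Content = new ByteArrayContent(imageBytes);
    result.Content.Headers.ContentType =
        new MediaTypeHeaderValue("image/jpeg");
    return result;
}
```

Note how the same method returns two very different content types, binary image data on success and a serialized error object on failure, which is exactly the flexibility the text describes.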
This hooks up the appropriate formatter from the active Request based on Content Negotiation.
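Listing 4 itself isn't included in this excerpt. As a sketch of the idea it describes, forcing JSON regardless of the Accept header, the result can be wrapped in an HttpResponseMessage with an explicit formatter; exact type names shifted between Web API betas, so treat this as a sketch rather than the article's listing:

```csharp
// Sketch of Listing 4's idea: wrap the result and force the JSON
// formatter, bypassing content negotiation. Type names are from
// the Web API beta era and may differ slightly.
public HttpResponseMessage GetAlbums()
{
    var albums = AlbumData.Current.OrderBy(alb => alb.Artist);

    return new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new ObjectContent<IEnumerable<Album>>(
            albums, new JsonMediaTypeFormatter())
    };
}
```

The Request.CreateResponse call shown above is the negotiated alternative; this version pins the formatter instead.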
Non-Serialized Results
The output returned doesn't have to be a serialized value but can also be raw data, like strings, binary data or streams. You can use the HttpResponseMessage.Content object to set a number of common Content classes. Listing 5 shows how to return a binary image using the ByteArrayContent class from a controller method. The image retrieval from Amazon is contrived, but it shows how to return binary data using ByteArrayContent. It also demonstrates that you can easily return multiple types of content from a single controller method, which is actually quite common. If an error occurs, such as a resource that can't be found or a validation error, you can return an error response to the client that's very specific to the error. In GetAlbumArt(), if the album can't be found, we want to return a 404 Not Found status (and realistically no error, as it's an image). Note that if you are not using HTTP Verb-based routing or not accessing a method that starts with Get/Post etc., you have to specify one or more HTTP Verb attributes on the
Routing Again
OK, let's get back to the image example. In order to return my album art image, I'd like to use a URL like this:

http://localhost/aspnetWebApi/albums/Dirty%20Deeds/image

In order for this URL to work, I have to create a new controller, because my earlier routes pointed to the AlbumApiController using HTTP Verb routing. HTTP Verb-based routing is great for representing a single set of resources such as albums. You can map operations like add, delete, update and read easily using HTTP Verbs. However, because Web API routes to methods based on name prefix (such as Getxxx() methods) or HTTP Verb attributes, it's easy to use up these HTTP Verbs and end up
with overlapping method signatures that result in route conflicts. In fact, I was unable to make the above URL work with any combination of HTTP Verb plus custom routing using a single controller. There are a number of ways around this, but all involve additional controllers. I think it's easier to use explicit Action routing and then add custom routes if you need simpler URLs. So, in order to accommodate some of the other examples, I created another controller, AlbumRpcApiController, to handle all requests that are explicitly routed via actions (/albums/rpc/AlbumArt) or are custom routed with explicit routes defined in the HttpConfiguration. I added the AlbumArt() method to this new AlbumRpcApiController class. For the image URL to work, you need a custom route placed before the default route from Listing 1.
RouteTable.Routes.MapHttpRoute(
    name: "AlbumApiActionImage",
    routeTemplate: "albums/{title}/image",
    defaults: new
    {
        title = RouteParameter.Optional,
        controller = "AlbumRpcApi",
        action = "AlbumArt"
    }
);
based routing in the original AlbumApiController, I can implement a method called PostAlbum() to accept a new album from the client. Listing 6 shows the Web API code to add a new album. The PostAlbum() method receives an album parameter, which is automatically deserialized from the POST buffer that the client sent. The data passed from the client can be either XML or JSON. Web API automatically figures out what format it needs to deserialize based on the content type and binds the content to the album object. Web API uses model binding to bind the request content to the parameter(s) of controller methods. Like MVC, you can check the model by looking at ModelState.IsValid. If it's not valid, you can run through the ModelState.Values and check each binding for errors. When a binding error occurs, you'll want to return an HTTP error response, and it's best to do that with an HttpResponseMessage result. In Listing 6, I used the custom error class that holds a message and an array of detailed error messages for each binding error. I used this object as the content to return to the client along with my Conflict HTTP status code response. If binding succeeds, the example returns a string with the name and date entered to demonstrate that you captured the data. Normally, a method like this should return a Boolean or no response at all (HttpStatusCode.NoContent). The sample uses a simple static list to hold albums, so once you've added the album using the POST operation, you can hit the /albums/ URL to see that the new album was added. The client jQuery code to call the POST operation is shown in Listing 7. The code in Listing 7 creates an album object in JavaScript to match the structure of the .NET Album class. This object is passed to the $.ajax() function to send to the server as a POST. The data is turned into JSON and the content type set to application/json so that the server knows what to convert when deserializing into the Album instance.
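Listing 6 isn't reproduced in this excerpt. A hedged sketch of the flow it describes follows; ApiMessageError is the article's custom error class, but its errors collection, and the list-backed AlbumData store, are assumptions from the text:

```csharp
// Sketch of Listing 6's flow: bind, validate, return Conflict with
// a typed error on binding failure, otherwise acknowledge the add.
// ApiMessageError.errors and AlbumData's list shape are assumed.
public HttpResponseMessage PostAlbum(Album album)
{
    if (!ModelState.IsValid)
    {
        var error = new ApiMessageError("Model is invalid");
        foreach (var state in ModelState.Values)
            foreach (var err in state.Errors)
                error.errors.Add(err.ErrorMessage);  // assumed property

        return Request.CreateResponse<ApiMessageError>(
            HttpStatusCode.Conflict, error);
    }

    AlbumData.Current.Add(album);

    // The article echoes a confirmation string; NoContent also works
    return Request.CreateResponse<string>(
        HttpStatusCode.OK,
        album.AlbumName + " added at " + DateTime.Now);
}
```

The key point is that both the success and failure paths produce well-formed HTTP responses with appropriate status codes rather than raw exceptions.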
Now I can use either of the following URLs to access the image:

Custom route (/albums/{title}/image):
http://localhost/aspnetWebApi/albums/PowerAge/image

Action route (/albums/rpc/{action}/{title}):
http://localhost/aspnetWebAPI/albums/rpc/albumart/PowerAge
The jQuery code hooks up success and failure events. Success returns the result data, which is a string that's echoed back with an alert box. If an error occurs, jQuery returns the XHR instance and status code. You can check the XHR to see if a JSON object is embedded, and if it is, you can extract it by deserializing it and accessing the .message property. REST standards suggest that updates to existing resources should use PUT operations. REST standards aside, I'm not a big fan of separating out inserts and updates, so I tend to have a single method that handles both. But if you want to follow REST suggestions, you can create a PUT method that handles updates by forwarding the PUT operation to the POST method:
public HttpResponseMessage PutAlbum(Album album)
{
    return PostAlbum(album);
}
$(".removeimage").live("click", function () {
    var $el = $(this).parent(".album");
    var txt = $el.find("a").text();
    $.ajax({
        url: "albums/" + encodeURIComponent(txt),
        type: "DELETE",
        success: function (result) {
            $el.fadeOut(function() { $el.remove(); });
        },
        error: jqError
    });
});
Note the use of the DELETE verb in the $.ajax() call, which routes to DeleteAlbum on the server. DELETE is a non-content operation, so you supply a resource ID (the title) via a route value or the query string.
Routing Conflicts
In all requests, with the exception of the AlbumArt example, I used the HTTP Verb routing that I set up in Listing 1. HTTP Verb routing is a recommendation that is in line with typical REST access to HTTP resources. However, it takes quite a bit of effort to create REST-compliant API implementations based only on HTTP Verb routing. You saw one example that didn't really fit: the return of an image, where I created a custom route, albums/{title}/image, that required creation of a second controller to work. HTTP Verb routing to a controller does not mix with custom or action routing to the same controller because of the limited mapping of HTTP Verbs imposed by HTTP Verb routing. To understand some of the problems with verb routing, let's look at another example. Let's say you create a GetSortableAlbums() method like this and add it to the original AlbumApiController accessed via HTTP Verb routing:
[Queryable]
public IQueryable<Album> GetSortableAlbums()
{
    var albums = Albums.OrderBy(alb => alb.Artist);
To make the corresponding $.ajax() call, all you have to change from Listing 7 is the type: from POST to PUT. To round out the server code, here's the DELETE verb controller method:
public HttpResponseMessage DeleteAlbum(string title)
{
    var matchedAlbum = Albums.Where(alb => alb.AlbumName == title)
                             .SingleOrDefault();
    if (matchedAlbum == null)
        return new HttpResponseMessage(HttpStatusCode.NotFound);

    Albums.Remove(matchedAlbum);

    return new HttpResponseMessage(HttpStatusCode.NoContent);
}
    return albums.AsQueryable();
}
If you compile this code and try to now access the /albums/ link, you get an error: Multiple actions were found that match the request. HTTP Verb routing only allows access to one GET operation per parameter/route value match. If more than one method exists with the same signature, it doesn't work. As before, the solution to get this method to work is to throw it into another controller. Because I set up the AlbumRpcApiController, I can add the method there. First, I should rename the method to SortableAlbums() so I'm not using a Get prefix for the method. This also makes the action parameter look cleaner in the URL: it looks less like a method and more like a noun.
Although OData filtering is an interesting feature that gives the client a lot of control over certain operations (like skip and take and possibly sorting, which can be nice for grid displays), I'm not sure if that sort of logic really belongs in client code. More likely, you should expose methods in the API that natively include filtering parameters rather than using a direct querying mechanism like OData. Undoubtedly, some will find this approach appealing for quick-and-dirty operations where the client drives behavior.
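The alternative suggested above, exposing explicit filtering parameters instead of an open-ended query mechanism, can be sketched as plain method parameters. The Album class and method below are minimal stand-ins for the article's model, not code from its listings:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Minimal stand-in for the article's Album model (assumed shape)
public class Album
{
    public string AlbumName { get; set; }
    public string Artist { get; set; }
}

public static class AlbumQueries
{
    // Explicit, server-controlled sorting and paging parameters
    // instead of an open-ended [Queryable] OData endpoint
    public static IEnumerable<Album> SortableAlbums(
        IEnumerable<Album> albums, int skip, int take)
    {
        return albums.OrderBy(alb => alb.Artist)
                     .Skip(skip)
                     .Take(take);
    }
}

public class Program
{
    public static void Main()
    {
        var albums = new List<Album>
        {
            new Album { AlbumName = "Powerage", Artist = "AC/DC" },
            new Album { AlbumName = "Dirty Deeds", Artist = "AC/DC" },
            new Album { AlbumName = "Moving Pictures", Artist = "Rush" }
        };

        // Skip past the first album, then take the next two
        foreach (var alb in AlbumQueries.SortableAlbums(albums, 1, 2))
            Console.WriteLine(alb.AlbumName);
    }
}
```

The server stays in full control of what the client can ask for, which is the trade-off the paragraph above argues for.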
Error Handling
I've already done some minimal error handling in the examples. For example, in Listing 6, I detected some known error scenarios, like model validation failing or a resource not being found, and returned an appropriate HttpResponseMessage result. But what happens if your code just blows up or causes an exception? Say you have a controller method like this:
public void ThrowError()
{
    throw new InvalidOperationException("Your code!");
}
HTTP Verb routing adds a whole new level of complexity when you're trying to shoehorn functionality into the handful of available HTTP Verbs. Think carefully if that's the route you want to take.
I can then create a new route that handles direct-action mapping:
RouteTable.Routes.MapHttpRoute(
    name: "AlbumApiAction",
    routeTemplate: "albums/rpc/{action}/{title}",
    defaults: new
    {
        title = RouteParameter.Optional,
        controller = "AlbumRpcApi"
    }
);
You can call it with this: http://localhost/AspNetWebApi/albums/rpc/ThrowError The default exception handling displays a 500-status response with the serialized exception on the local computer only. When you connect from a remote computer, Web API throws back an IIS 500 error with no data returned (IIS then adds its HTML error page). The behavior is configurable in the GlobalConfiguration:
GlobalConfiguration
    .Configuration
    .IncludeErrorDetailPolicy =
        IncludeErrorDetailPolicy.Never;
As I am explicitly adding a route segment, rpc, into the route template, I can now reference explicit methods in the Web API controller using URLs like this: http://localhost/AspNetWebApi/albums/rpc/SortableAlbums
IQueryable<T> Results
Did you notice that the last example returned IQueryable<Album> as a result? Web API serializes the IQueryable<T> interface just fine as an array, but in addition, it also allows for using OData-style URI conventions (http://goo.gl/9nO3d) in the query string to filter the result if you specify a [Queryable] attribute on the method. You can sort, filter and limit the selection using OData commands that should be familiar from LINQ usage. For example: http://localhost/AspNetWebApi/albums/rpc/SortableAlbums?$orderby=Artist&$top=2&$skip=1 Even though you get OData-style querying support, the output generated uses Web API's standard output generation logic, so you can create JSON or XML depending on content negotiation or your explicit output mapping.
If you want more control over the error responses sent from your code, you can throw explicit error responses yourself using HttpResponseException. When you throw an HttpResponseException, the response parameter is used to generate the output for the controller action.
[HttpGet]
public void ThrowError()
{
    var resp = Request.CreateResponse<ApiMessageError>(
        HttpStatusCode.BadRequest,
        new ApiMessageError("Your code stinks!"));
    throw new HttpResponseException(resp);
}
Throwing an HttpResponseException stops the processing of the controller method and immediately returns the response you passed to the exception. Unlike other exceptions fired inside of Web API, HttpResponseException bypasses the Exception Filters installed and instead just outputs the response you provide.
Listing 8: Implementing an ExceptionFilter to automatically turn exceptions into object result messages
public class UnhandledExceptionFilter : ExceptionFilterAttribute
{
    public override void OnException(HttpActionExecutedContext context)
    {
        HttpStatusCode status = HttpStatusCode.BadRequest;

        var exType = context.Exception.GetType();
        if (exType == typeof(UnauthorizedAccessException))
            status = HttpStatusCode.Unauthorized;
        else if (exType == typeof(ArgumentException))
            status = HttpStatusCode.NotFound;

        var apiError = new ApiMessageError()
        {
            message = context.Exception.Message
        };

        // create a new response and attach our ApiError object
        // which now gets returned on ANY exception result
        context.Response = context.Request
            .CreateResponse<ApiMessageError>(status, apiError);
    }
}
In this case, the serialized ApiMessageError result string is returned in the default serialization format (XML or JSON). You can pass any content to HttpResponseMessage, which includes creating your own exception objects and consistently returning error messages to the client. Here's a small helper method on the controller that you might use to send exception info back to the client consistently:
private void ThrowSafeException(string message,
    HttpStatusCode statusCode = HttpStatusCode.BadRequest)
{
    var errResponse = Request.CreateResponse<ApiMessageError>(
        statusCode,
        new ApiMessageError() { message = message });
    throw new HttpResponseException(errResponse);
}
The latter is a great way to get global error trapping so that all errors (short of hard IIS errors and explicit HttpResponseException errors) return a valid error response that includes error information in the form of a known error object. Using a filter like this allows you to throw an exception as you normally would and have your filter create the right response. You can even create some custom exceptions so you can differentiate your own exceptions from unhandled system exceptions. This is just one example of how ASP.NET Web API is configurable and extensible. Exception filters are just one way you can plug into the Web API request flow to modify output. Many more hooks exist, and I'll take a closer look at extensibility in a future article.
Summary
Web API is a big improvement over previous Microsoft REST and AJAX toolkits. The keys to its usefulness are its ease of use with simple controller-based logic, familiar MVC-style routing, low configuration impact, extensibility at all levels, and tight attention to exposing HTTP semantics and making them easily accessible. Although none of the concepts used in Web API are new or radical, Web API combines the best of previous platforms into a single framework that's highly functional, easy to work with, and extensible to boot. I think that Microsoft has hit a home run with Web API. Web API is currently in beta and getting close to a release candidate. It's slated to ship later this year, around the same time as Visual Studio 11 and .NET 4.5. In the meantime, you can start using Web API today in its beta form with its Go Live license, or with the current code from aspnetwebstack.codeplex.com, if you're willing to keep up with the frequent changes. Rick Strahl
You can then use it to output any captured errors from code:
public void ThrowError()
{
    try
    {
        List<string> list = null;
        list.Add("Rick");
    }
    catch (Exception ex)
    {
        ThrowSafeException(ex.Message);
    }
}
Web API combines the best of previous Microsoft REST and AJAX tools into a single framework that's highly functional, easy to work with, and extensible to boot!
Another, more global solution is to create an Exception Filter. Filters in Web API provide the ability to pre- and post-process controller method operations. An exception filter looks at all exceptions fired and then optionally creates an HttpResponseMessage result. Listing 8 shows an example of a basic exception filter implementation. Filters can be assigned to individual controller methods like this:
[UnhandledExceptionFilter] public void ThrowError()
codemag.com
47
Grokking the DLR: Why its Not Just for Dynamic Languages
Many .NET developers have heard of the Dynamic Language Runtime (DLR), but they don't quite know what to make of it. Developers working in languages like C# and Visual Basic sometimes shy away from dynamic programming languages because they fear the scalability problems that have historically been associated with using them. Also of concern is the fact
that languages like Python and Ruby don't perform compile-time type checking, which can lead to runtime errors that are very costly to find and fix. These are valid concerns that may explain why the DLR hasn't enjoyed more popularity among mainstream .NET developers in the two years since its official release. After all, any .NET runtime that has the words Dynamic and Language in its title must be strictly for creating and supporting languages like Python, right?

Not so fast. While it's true that the DLR was conceived to support the Iron implementations of the Python and Ruby programming languages on the .NET Framework, the architecture of the DLR provides abstractions that go much deeper than that. Under the covers, the DLR offers a rich set of interfaces for performing runtime Inter-Process Communication (IPC). Over the years, developers have seen many tools from Microsoft for communicating between applications: DDE, DCOM, ActiveX, .NET Remoting, WCF, OData. The list just goes on and on. It's a seemingly unending parade of acronyms, each one representing a technology that has promised to make it easier to share data or to invoke remote code this year than it was using last year's technology. In this article, I'll show you why you may want to consider using the DLR as a communication tool, even if you never intend to use a dynamic programming language in your own application designs.

Years ago, I attended a talk in which Jim Hugunin walked through the DLR's design. During that talk, I jotted down the term that popped into my mind as I heard Jim retell the architecture of the DLR: the language of languages. Four years later, that moniker still characterizes the DLR pretty well. With some real-world DLR experience under my belt, however, I've come to realize that the DLR isn't just for language interoperability. With dynamic type support now baked into C# and Visual Basic, the DLR has become a gateway from our favorite .NET languages to the data and code in any remote system, no matter what kind of hardware or software it may use.

To understand the idea of the DLR as a language-integrated IPC mechanism, let's begin with an example that has nothing to do with dynamic programming languages at all. Imagine two computing systems: one called the initiator and the other called the target. The initiator needs to invoke a function named foo on the target, passing some number of parameters and retrieving the results. After locating the target system, the initiator must bundle all of the necessary call information together in a format that can be understood by the target. At a minimum, this includes the name of the function and the parameters to be passed. The initiator then sends the request to the target. After unpacking the request and validating the parameters, the target may execute the foo function. Then the target system must package up the results, including any exceptions that may have occurred, and send them back to the initiator. Lastly, the initiator must unpack the results and respond appropriately. This request-response pattern is common, describing at a high level how almost every call-based IPC mechanism works.

Kevin Hazzard

wkhazzard@gmail.com Kevin Hazzard is a Microsoft MVP living in Richmond, Virginia. He has been married for twenty-three years and has children ranging in age from college to elementary school. He serves as a Director for CapTech Consulting, a midsized firm of more than three hundred consultants with offices in Richmond, Charlotte, Philadelphia and Washington, D.C., specializing in project management, business intelligence, and software and database development. Kevin is an advisory board member for the Information Systems and Technology program at his local community college, where he also taught C++ and C# as an adjunct professor for more than a decade. He further demonstrates his commitment to public education by serving as an elected member of his local county's K-12 School Board. Kevin is an organizer for several software developer community events, including the Richmond Code Camp and the Mid-Atlantic Developer Expo (http://MADExpo.us).
Other runtime frameworks follow this same pattern. The Component Object Model (COM) provides the CoCreateInstance function for creating objects. With .NET Remoting, you might use the CreateInstance method of the System.Activator class. The DLR's DynamicMetaObject provides BindCreateInstance for a similar purpose. After using the DLR's BindCreateInstance, the created thing you have in hand may be of a type that supports multiple methods. The metaobject's BindInvokeMember method is used to bind an operation that can invoke the function. In the graphical example from above, the string foo would be passed as a parameter to let the binder know that the member method by that name should be called. Also included with the parameter are useful bits of information like the argument count, argument names, and a flag that says whether or not the binder should ignore case when trying to find the named member. After all, some languages are picky about the case of their symbols and some are not. When the thing returned from BindCreateInstance is just a single function (or delegate), however, the metaobject's BindInvoke method is used instead. To make this clear, consider the following small bit of dynamic C# code:
delegate void IntWriter(int n);

void Main()
{
    dynamic Write = new IntWriter(Console.WriteLine);
    Write(5);
}
This code isn't the optimal way to write the number 5 to the console. A good developer would never do something so wasteful. However, this code illustrates the use of a dynamic variable that is a delegate, which can be called like a function. If the delegate type were derived to implement a DLR interface named IDynamicMetaObjectProvider, the BindInvoke method of the DynamicMetaObject that it returns would be called to attach an operation to do the work. This is because the C# compiler recognizes that the dynamic object called Write is being used syntactically like a function. Now look at another bit of dynamic C# code to understand when BindInvokeMember might be emitted by the compiler instead:

class Writer : IDynamicMetaObjectProvider
{
    public void Write(int n)
    {
        Console.WriteLine(n);
    }
    // interface implementation omitted
}

void Main()
{
    dynamic Writer = new Writer();
    Writer.Write(7);
}

The important thing for you to understand at this point is that the C# compiler recognizes the statement Writer.Write(7) as a member access operation. What we often call the dot operator in C# is formally called the member access operator. The DLR code generated by the compiler in this case would ultimately call BindInvokeMember, passing the string Write and the integer argument 7 to an operation that can perform the invocation. In short, BindInvoke is used to call a dynamic object that is a function, while BindInvokeMember is used to call a method that is a member of a dynamic object.
Metaobject
The term metaobject is not unique to the DLR. The prefix meta comes from Greek, where it simply means beside or after. Therefore, metadata is data that is beside the real data, representing it or providing access to it. The DLR's DynamicMetaObject accurately reflects this idea with respect to objects. A DLR metaobject operates alongside a real object, assisting with invocation or control.
I've omitted the implementation of the interface in this small example because it would take lots of code to show you how to do that correctly. In a following section, however, we'll take a shortcut and implement a dynamic metaobject with just a few lines of code.
Lines 3 and 5 in the listing above will be interpreted by the C# compiler as index accesses because of the index access ([]) operator. Behind the scenes, the custom DLR metaobjects for the types exposed by the JavaBridge will then receive calls to their BindGetIndex and BindSetIndex methods, respectively, to pass calls to a waiting JVM via Java Remote Method Invocation (RMI). In this scenario, the DLR helps us to bridge the gap between C# and a statically-typed language, perhaps making it clearer why I call the DLR the language of languages. Just like the BindDeleteMember method, the BindDeleteIndex method is not intended for use from statically typed languages like C# and Visual Basic. Those languages have no way to express such a concept. However, you can establish a convention for deleting members from a class at runtime to get that kind of functionality if it's valuable to you. For example, setting an index to null, which can be expressed in C# and Visual Basic, could be interpreted by your metaobject to mean a delete operation.
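The index-access and delete-by-convention ideas above can be sketched in a few lines of C#. Note that the JavaBridge type and its Connect factory are assumptions drawn from the scenario described in the text, not a real library:

```csharp
// Hypothetical: 'bridge' is a dynamic object whose metaobject forwards
// index operations to a remote JVM, as described above.
dynamic bridge = JavaBridge.Connect("localhost:1099"); // assumed API

string value = bridge["app.title"]; // compiler emits a BindGetIndex operation
bridge["app.title"] = "New Title";  // compiler emits a BindSetIndex operation

// C# has no delete syntax, so by convention the metaobject could
// interpret a null assignment as a delete of that index.
bridge["app.title"] = null;
```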
The CallSite<T> class is a powerhouse of metaprogramming goodness, jam-packed with all sorts of performance-optimizing techniques that make your dynamic .NET code fast and efficient. I'll cover the performance aspects of the CallSite<T> class at the end of this article. Much of what call sites do in dynamic .NET code concerns runtime code generation and compilation. So, it's significant to note that the CallSite<T> class is implemented in a namespace that contains both of the words Runtime and CompilerServices. If the DLR is the language of languages, then the CallSite<T> class is one of its major grammatical constructs. Let's take a look at the tiny example from the last section one more time to get familiar with call sites and how compilers like the C# compiler inject them into our code:
dynamic x = 13;
int y = x + 11;
From what you've learned so far, you know that calls to BindBinaryOperation and BindConvert will be emitted by the C# compiler for this bit of code. Rather than showing you the long Microsoft Intermediate Language (MSIL) disassembly of what the compiler produces, I've included Figure 2, a flowchart that describes the compiler's output instead. Remember that the C# compiler uses its own syntax to determine what actions are required on the dynamic type. In the current example, there are two operations to emit: the addition of variable x to an integer (Site 2) and the conversion of the result into an integer (Site 1). Each of these actions becomes a call site, which is stored in a container for the enclosing method. As you can see in the flowchart in Figure 2, the call sites are created in reverse order in the beginning but invoked in the correct order at the end. You can see in the flowchart that the BindConvert and BindBinaryOperation metaobject methods are called just before the Create Call Site 1 and Create Call Site 2 steps, respectively. Yet, the invocation of the bound operations doesn't occur until the very end. Hopefully, the graphic helps you to understand that binding is not the same thing as invoking in the DLR. Moreover, binding happens once per the creation of each call site. The invocations, on the other hand, may occur many times over, reusing the initialized call sites to optimize performance. Before I dive into more of the performance optimizations that the DLR uses to make dynamic code efficient and fast, let's take a look at a simple way to implement the IDynamicMetaObjectProvider contract I mentioned earlier in one of your own classes.
Deleting a Property?
The BindDeleteMember method of the DLR metaobject may be a bit puzzling if you've never worked with dynamic programming languages before. Dynamic languages like Python and Ruby allow you to add functions and properties to an object or its type on the fly. Of course, you can delete them, too. Since the DLR was designed to support dynamic language implementations, it makes sense for the BindDeleteMember method to be included in the metaobject definition. However, C# and Visual Basic have no syntax to support such a concept, so those languages will never emit calls to BindDeleteMember, even if you implement that method in your metaobject.
The BindBinaryOperation and BindUnaryOperation methods are used whenever an operator such as arithmetic addition (+) or increment (++) is encountered. In the example above, the addition of the dynamic variable x to the constant 11 will emit a call to the BindBinaryOperation method. Keep this tiny example in your mind's eye for a moment. We use it to grok another key DLR class known as the call site.
Figure 3: Dynamically accessing Netflix data.

Implementing the complete IDynamicMetaObjectProvider contract very well takes a lot of code. Fortunately, the .NET Framework includes a base class called DynamicObject that does a lot of the work for you. In this section, I'll show you how to build a dynamic, Open Data (OData) Protocol class based on the DLR's DynamicObject type, which contains the following twelve virtual methods:

1. TryCreateInstance
2. TryInvokeMember
3. TryInvoke
4. TryGetMember
5. TrySetMember
6. TryDeleteMember
7. TryGetIndex
8. TrySetIndex
9. TryDeleteIndex
10. TryConvert
11. TryBinaryOperation
12. TryUnaryOperation

Do the names of those twelve virtual methods look familiar? They should, since you just finished studying the members of the abstract DynamicMetaObject class, which includes methods like BindCreateInstance and BindInvoke. The DynamicObject class implements IDynamicMetaObjectProvider, which returns a DynamicMetaObject from its single method. The operations bound to the underlying metaobject implementation simply dispatch their calls to the methods beginning with Try in the DynamicObject instance. All you have to do is override methods like TryGetMember and TrySetMember in a class that derives from DynamicObject, and a metaobject working behind the scenes handles all the messy Expression Tree details.
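As a minimal illustration of how little code DynamicObject demands, here is a sketch of a subclass that overrides just one of the twelve methods. The EchoObject class is my own invention for demonstration purposes, not part of the article's DynamicOData sample:

```csharp
using System.Dynamic;

// Answers any property read with the member's own name.
class EchoObject : DynamicObject
{
    public override bool TryGetMember(
        GetMemberBinder binder, out object result)
    {
        result = binder.Name; // no real lookup; just echo the name back
        return true;          // true means the bind succeeded
    }
}

// Usage:
//   dynamic echo = new EchoObject();
//   string s = echo.Anything;   // s == "Anything"
```

Every other dynamic operation (invocation, indexing, conversion, and so on) falls back to DynamicObject's default behavior, which throws a binder exception at runtime.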
Do you see how the member access (dot) operator is used three times in the C# expression? Each use will produce a call site within the site container for the OnNetflixMovieReady method. Of course, all of that happens behind the scenes. The C# compiler takes care of all that hard work for you.
The DynamicOData class begins by setting up a delegate and an event to handle the OnDataReady event. Then a couple of namespaces are declared: one for data services common to the OData Entity Data Model (EDM) and another for the metadata. When parsing the output of an OData feed, these are necessary for addressing the Atom-encoded XML elements correctly. An IEnumerable<XElement> called _current serves as the storage for the DynamicOData node. The FetchAsync command starts the download of the XML document using a WebClient instance. When the transfer of the XML document is complete, the OnDownloadCompleted method is invoked, where the XML text is parsed into an XDocument from which the <properties> elements are collected and stored in the _current enumeration. All of the OData we'll be using from any feed can be found in the <properties> collection. Listing 4 shows a subset of the <properties> collection as XML for one movie in the Netflix OData feed. Lastly, after the XML document has been parsed and queried, the OnDataReady event is fired to let the caller know that the object is ready for use.
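The fetch-and-parse flow just described can be sketched roughly as follows. The member names (FetchAsync, OnDownloadCompleted, OnDataReady, _current) come from the text, but the field for the metadata namespace (_m here) and other details are assumptions, since the full listing is not reproduced in this extract:

```csharp
using System;
using System.Linq;
using System.Net;
using System.Xml.Linq;

// Sketch of the DynamicOData download logic; _m and _current are
// fields on the class, and OnDataReady is the event described above.
public void FetchAsync(string url)
{
    var client = new WebClient();
    client.DownloadStringCompleted += OnDownloadCompleted;
    client.DownloadStringAsync(new Uri(url));
}

void OnDownloadCompleted(object sender,
    DownloadStringCompletedEventArgs e)
{
    var doc = XDocument.Parse(e.Result);
    // collect the <m:properties> elements that hold the entity data
    _current = doc.Descendants(_m + "properties").ToList();
    if (OnDataReady != null)
        OnDataReady(this); // signal the caller that data is ready
}
```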
Because OData can be deeply nested like the sample shown in Listing 4, I want to be able to chain property accesses together fluently until I reach the node that has the value I'm interested in. In the case of the single C# statement above, I know that I want the Value of the SmallUrl property within the BoxArt property of the movie. The code to do that when the Value pseudo-property is encountered returns the first XElement's Value property from the _current enumeration as a string, for now. I omitted some code at that point to keep things simple, but we'll get back to it in a bit.
If TryGetMember doesn't encounter the use of the Value pseudo-property, it processes the request first as if it were trying to obtain the value of an XML attribute, then as a named XML element. This is also special handling because of the nature of XML text. Some data that I want to access may be encoded as an XML attribute. Other data may be encoded as an XML element. For this implementation, I've decided that I don't want to have to use any kind of special syntax or another pseudo-property to get at attribute data. I've chosen by convention to return matching attributes if they exist, then matching elements. Of course, this won't work in every case, but it does highlight the fact that when you're designing your own dynamic objects, you're in the driver's seat. In other words, working within the syntax of the host language, you're free to implement any kind of convention that makes sense for the semantics that you're trying to create. Finishing up Listing 5, when the specified binder.Name property doesn't reference a pseudo-property or an attribute, the queried Descendants of the _current XML elements are returned. It's important to note that the enumeration of XElement objects obtained this way isn't returned directly to the caller. I could do that, of course, but I want to be able to chain this result to another one. More importantly, I want my Value pseudo-property and XML attribute handling semantics to apply to the node that's returned. The easiest way to do that is to return the resulting XML nodes wrapped in a new DynamicOData object. A special constructor is provided to handle this case:

protected DynamicOData(
    IEnumerable<XElement> current)
{
    _current = new List<XElement>(current);
}

While the original DynamicOData object was created for fetching XML over the network, this special constructor creates a new one at the selected level of the XML hierarchy. The C# expression movie.BoxArt will return a new DynamicOData object having its _current variable scoped to the <d:BoxArt> node of the XML. Then using the member access (dot) operator on that object, followed by SmallUrl, will return another new DynamicOData object scoped to the <d:SmallUrl> node. Finally, accessing the Value pseudo-property on that last dynamic object stops the chain. To finish up our examination of TryGetMember, I need to address the code that I omitted from Listing 5. To do that, think about this line of code you saw earlier.
Dump("Runtime = {0} minutes", movie.Runtime.Value / 60);
How is it possible for the Value pseudo-property of the Runtime element, which is returned by TryGetMember as a string, to be divisible by the number 60 in C#? The answer is that it's not, of course. C# isn't that kind of dynamic language, at least not yet. To make this kind of code possible in a statically-typed language, we can take advantage of some hints in the OData data. OData's Entity Data Model (EDM) defines a handful of abstract types for things like integers, dates, Boolean values, etc. Some OData elements are marked with an attribute named type to tell you how the text (or nested XML) contained within an element should be interpreted. Listing 6 shows the code removed from Listing 5 that provides this functionality whenever the Value pseudo-property is accessed.
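Listing 6 itself is not reproduced in this extract; a sketch of the kind of type coercion it performs might look like this. The _m field (the EDM metadata namespace) and the ConvertValue helper name are assumptions; the Edm type names are standard EDM primitive type names:

```csharp
// Sketch: coerce an OData value using the EDM type hint, if present.
object ConvertValue(XElement element)
{
    var typeAttr = element.Attribute(_m + "type");
    string text = element.Value;
    if (typeAttr == null)
        return text; // no hint: return the raw string

    switch (typeAttr.Value)
    {
        case "Edm.Int32":    return int.Parse(text);
        case "Edm.Boolean":  return bool.Parse(text);
        case "Edm.DateTime": return DateTime.Parse(text);
        // ...the remaining EDM types are handled similarly
        default:             return text;
    }
}
```

With a conversion like this in place, movie.Runtime.Value comes back as an int rather than a string, which is what lets the division by 60 compile and run.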
Of course, to keep things short and sweet, I didn't include conversions for all sixteen of OData's abstract types. The full source code for this example does include them, though. The code in Listing 6 looks at a Value node for an attribute named type. If it's found, the value of the attribute is checked against one of the sixteen known EDM data type names. If it matches, a conversion is performed to coerce the value into the expected data type. In this way, expressions like movie.Runtime.Value / 60 work correctly at runtime. With respect to member access, I've spent a lot of time talking about TryGetMember but no time talking about how to modify data dynamically. The Netflix OData feed that I've been working with so far is read-only, but other OData feeds are read-write. I won't show the code here, but it would be easy enough to add a Save method to the DynamicOData class to handle the update process if you needed that sort of functionality. The question is: how can I make modifications in a DynamicOData instance using the same fluent syntax that I've been using to read data? An overridden TrySetMember like this should do it:
public override bool TrySetMember(
    SetMemberBinder binder, object value)
{
    if (binder.Name == "Value")
    {
        _current.ElementAt(0).Value = value.ToString();
        return true;
    }
    return false;
}
public IEnumerator GetEnumerator()
{
    foreach (var element in _current)
        yield return new DynamicOData(element);
}
Just as I returned each named node in TryGetMember as a new DynamicOData object to make chaining possible, the iterator shown here wraps each XElement in the _current collection as a new DynamicOData object so that all of the nice dynamic language semantics we want to apply to the XML document extend to each node. Here's a bit of test code that uses eBay's OData feed to find the top ten items on their site pertaining to the same movie that we queried Netflix about.
string ebayQueryFormat =
    "http://ebayodata.cloudapp.net/" +
    "Items?search={0}&$top=10";
string ebayUrl = String.Format(
    ebayQueryFormat, movieTitle);

DynamicOData ebayItems = new DynamicOData();
ebayItems.OnDataReady += OnEbayItemsReady;
ebayItems.FetchAsync(ebayUrl);
It's the same pattern I used for fetching data from Netflix, and I'm using the same DynamicOData type that I used to query Netflix. However, the query is a bit different since eBay provides a search verb to which the search term can be assigned. Listing 7 shows the OnEbayItemsReady method that is called when the data is loaded. The foreach loop shown in Listing 7 takes advantage of the IEnumerable implementation in my DynamicOData class. Inside that loop, since each returned item has been wrapped as a new DynamicOData instance, properties specific to the eBay OData feed like Id, Title and CurrentPrice become resolvable. Of course, if I wanted to ascribe array-like semantics directly to the DynamicOData class, I could do so by overriding TryGetIndex as follows:

Listing 7: Enumerating eBay items
static void OnEbayItemsReady(dynamic ebayItems)
{
    Dump("eBay item information:");
    try
    {
        foreach (var item in ebayItems)
        {
            Dump("ID = {0}, Title = {1}, " +
                "CurrentPrice = {2:C}",
                item.Id.Value,
                item.Title.Value.Substring(0, 20),
                item.CurrentPrice.Value);
        }
    }
    catch (Exception ex)
    {
        Dump("{0}: {1}", ex.GetType().Name, ex.Message);
    }
    Dump("Press Enter to continue ...");
}
With this new method in place, C# code like this becomes possible:
movie.BluRay.Available.Value = true;
Because the object returned for the Available node in that C# statement is a DynamicOData object, its handling of the Value pseudo-property actually writes to the same XDocument in memory that all of the DynamicOData objects reference throughout the call chain. The code in Listing 2 and the sample output shown in Figure 3 make this fairly clear. Go back and look at them. Before changing the BluRay.Available property, the value obtained from the Netflix service was false. After changing it, the value read by a separate DynamicOData object is reported as true. A hypothetical Save method within DynamicOData would only need to detect these sorts of changes and use the OData protocol to update them on the server.
public override bool TryGetIndex( GetIndexBinder binder, object[] indexes, out object result) { int ndx = (int)indexes[0]; result = new DynamicOData( _current.ElementAt(ndx)); return true; }
This is a very simplistic implementation that assumes my indexing strategy is purely numerical. Do you see the cast operation that coerces a single integer from the array? However, any sort of indexing I need is possible. The indexes parameter of the TryGetIndex method is an object array, meaning that the C# compiler will pass exactly what's provided by the caller. There may be one index value or a dozen of them. They could be strings or integers or even complex data types. The sky's the limit, as they say, so I'm free to get as creative as I like with the way in which the index parameters are implemented. Hopefully, the DynamicOData class I've shown here opens your eyes to the possibilities available to you when using the DLR. What I've created isn't about dynamic languages per se. It's true that C# and Visual Basic feel more dynamic when using a class that's powered by the IDynamicMetaObjectProvider contract. But C# and Visual Basic are still statically-typed languages under the hood. Deferral of some binding operations until runtime gives them a feeling of being just dynamic enough to make our code more expressive than it's ever been before. To finish up, let's spend a bit of time discussing the performance concerns that arise from the code you've seen here.
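With that TryGetIndex override in place, index syntax works directly on the dynamic object. A short usage sketch, reusing the eBay handler shape from the earlier listings:

```csharp
// Sketch: TryGetIndex handles the [0]; TryGetMember handles
// the .Title and .Value accesses that follow it.
static void OnEbayItemsReady(dynamic ebayItems)
{
    dynamic first = ebayItems[0];
    Dump("First item: {0}", first.Title.Value);
}
```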
The DLR caches the delegates it compiles based on code that's encountered at runtime and the rules in the other caches used to generate them. As a runtime compiler service, the delegate cache is also sometimes referred to as inline. The reason for that term is that the expressions generated by the DLR and its binders are assembled into MSIL and Just-In-Time (JIT) compiled, just like any other .NET code. This runtime compilation happens in line with the normal flow and execution of your program. As you can imagine, turning dynamic code into compiled .NET code on the fly can make a massive, positive impact on the performance of the application. With the downloadable source code for this article, I've included a second project called PythonIntegration that interfaces some C# code to IronPython. I won't cover the application here because it's lengthy and would require a lot more space to describe. You'll need to download and install IronPython if you want to experiment with the PythonIntegration application, of course. What you'll discover is the vast difference between the static-to-dynamic language interoperability of the past compared to the high-performance options offered by Microsoft's DLR. Some over-the-border operations from C# to Python, measured in tight repetition, are literally 100,000 times faster using the caching mechanisms that you get for free when using the DLR. These same caching tactics are applied when calling from C# to any other CLR-compliant language, too.
Conclusion
The DLR isn't just about dynamic languages. It opens up a whole world of possibilities for communicating between disparate systems. As .NET's language of languages, the DLR enables the movement of code and data with a kind of fluidity and natural expressiveness that weren't possible beforehand. As you've seen, the language of a data model like OData can be mapped rather generically into the syntax of C# and Visual Basic using the DLR, increasing comprehension dramatically. Other call invocation systems like Java's Remote Method Invocation (RMI) can be mapped directly into our favorite languages as well, breathing life into existing code bases and increasing their overall business value. Because the DLR can shape the code and data of any other system into .NET so gracefully, the possibilities for using it should be limited only by your imagination. Kevin Hazzard
Building Productive, Powerful, and Reusable WPF (XAML) UIs with the CODE Framework
In a prior installment of this series of articles about CODE Framework ("CODE Framework: Writing MVVM/MVC WPF Applications," Jan/Feb 2012), I discussed how to use the WPF features of CODE Framework to create rich client applications in a highly productive and structured fashion reminiscent of creating ASP.NET MVC applications, although
with WPF MVVM concepts applied. In this article, I will dive deeper into the subject and discuss the unique benefits of the CODE Framework WPF components, which enable developers to create the part of the UI that is actually visible in a highly productive and reusable manner. Most MVVM frameworks create great structure in setting up the overall infrastructure, but provide little in the way of actual UI development. "And here is where you create a user control that acts as the view" is how the story usually goes, and the developer is completely on her own in doing so. Not so in CODE Framework! Developers and designers alike can use many of the great (yet optional) features of the framework to quickly create great-looking and completely stylable UIs. In fact, many of these features can be used even if your overall development framework is something else. You can simply bring these components into other setups as needed.

Let's say you have a very simple UI, such as one based on a user control, perhaps one that creates a login UI with the option to enter a user name and password. Perhaps for that purpose, you arranged your user control into logical rows and columns using a Grid layout element. Something like the XAML shown in Listing 1, perhaps, which creates the UI shown in Figure 1. For those of you familiar with any of the XAML dialects, this type of UI definition is probably well-known to you. There are quite a few things that bug me about this setup, however, ranging from little annoyances to the fact that with the proper techniques, this same UI can probably be defined in just a handful of lines of code. Let's start out with the little things.
Markus Egger
megger@eps-software.com Markus is an international speaker, having presented sessions at numerous conferences in North and South America and Europe. Markus has written many articles for publications including CODE Magazine, Visual Studio Magazine, MSDN Brazil, asp.netPRO, FoxPro Advisor, Fuchs, FoxTalk and Microsoft Office and Database Journal. Markus is the publisher of CODE Magazine. Markus is also the President and Chief Software Architect of EPS Software Corp., a custom software development and consulting firm located in Houston, Texas. He specializes in consulting for object-oriented development, Internet development, B2B, and Web Services. EPS does most of its development using Microsoft Visual Studio (.NET). Markus has also worked as a contractor on the Microsoft Visual Studio team, where he was mostly responsible for object modeling and other object- and component-related technologies.
Getting Started
When you create CODE Framework WPF applications, you can use as little or as much of the UI-specific features as you like. Just like in any other framework, you can create your view as a user control (or similar UI element) in a XAML file with a C# or VB code-behind file. Or, you can go all out and use the CODE Framework View UI element and go cold turkey without even a code-behind file (which has great advantages, as I will discuss as this article goes on). Or, you can simply use some of the convenient little features that might make general WPF development more straightforward and ease into the subject that way. Or you can mix and match any and all of those approaches. (NOTE: You can also create entire custom themes, which is not nearly as hard as it sounds, but that shall perhaps be the topic of a future article.)
Many CODE Framework features can be used individually and in combination with completely different frameworks.
First, let's talk about the definition of the Grid. It is very convenient to arrange UIs using Grid elements. What is not so convenient is that the syntax for the definition of Grid rows and columns tends to be rather verbose. In fact, looking at Listing 1, you will notice that almost half the required code (13 lines in this example) went towards row and column definitions. And sometimes you really want to do fancy things with all the various settings you can put on rows and columns, but in the majority of scenarios I ever see, people only set row heights and column widths. For this reason, CODE Framework provides a more convenient way to define rows and columns in a Grid. To take advantage of this feature, make sure you have a reference to Code.Framework.Wpf.dll in your project (see sidebar for how to get CODE Framework if you do not have it already). Also, define the namespace for CODE Framework WPF controls at the top of your UI like this (all in one line):
xmlns:c="clr-namespace:CODE.Framework.Wpf.Controls;assembly=CODE.Framework.Wpf"
Most WPF/XAML MVVM Frameworks provide great structure for the mechanics of the UI but not for the things you actually see. CODE Framework is different.
Let's start out with a few very simple examples (and for those of you who are looking for the mind-blowing features: bear with me, we are getting there!). Let's say
NOTE: If you are using a productivity tool such as ReSharper, the tool will probably just add this line for you. Also, see the sidebar about XAML namespaces if you are not familiar with this feature.
58
Building Productive, Powerful, and Reusable WPF (XAML) UIs with the CODE Framework
codemag.com
Now that you have access to CODE Framework features in your UI, you can turn the 13-line Grid definition into the following:
<Grid c:GridEx.RowHeights="Auto,Auto,Auto,Auto,25,Auto" c:GridEx.ColumnWidths="*,*">
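For comparison, the conventional definition this single attribute pair replaces looks roughly like this (a sketch in the style of Listing 1, which is not reproduced here, reconstructed from the row heights and column widths above):

```xml
<Grid>
    <Grid.RowDefinitions>
        <RowDefinition Height="Auto" />
        <RowDefinition Height="Auto" />
        <RowDefinition Height="Auto" />
        <RowDefinition Height="Auto" />
        <RowDefinition Height="25" />
        <RowDefinition Height="Auto" />
    </Grid.RowDefinitions>
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="*" />
        <ColumnDefinition Width="*" />
    </Grid.ColumnDefinitions>
</Grid>
```

Those 13 lines of row and column definitions collapse into the two attached properties shown above.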
like that. In that case, you may not like the width of 250 pixels anymore, because the font may either be too large or too small to accommodate as much information as you want. A better way is to set the width to something that logically maps to about the same amount of text being visible regardless of font. (With proportional fonts, this is always a somewhat inexact science, as you are dependent on which exact letters are typed. But I want this to make some common sense once the UI pops up on the screen.) Using the CODE Framework, I'll show you how to set another attached property. This time, the property is defined on an element called View, which is part of CODE.Framework.Wpf.Mvvm.dll, so make sure you add that to your project references and make it accessible through a new namespace in your UI:
xmlns:m="clr-namespace:CODE.Framework.Wpf.Mvvm;assembly=CODE.Framework.Wpf.Mvvm"
The result of this is exactly the same as the 13-liner, except it is more convenient. Note that this is not just more convenient for direct declaration of row heights and column widths, but it also makes it much easier to style the control. For instance, you could create a control style with the exact same settings like this:
<Style TargetType="Grid" x:Key="MyStyle">
    <Setter Property="c:GridEx.ColumnWidths" Value="*,*" />
    <Setter Property="c:GridEx.RowHeights" Value="Auto,Auto,Auto,Auto,25,Auto" />
</Style>
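A Grid can then pick up these settings simply by referencing the style (MyStyle being the hypothetical key from the snippet above):

```xml
<Grid Style="{StaticResource MyStyle}">
    <!-- row/column content goes here -->
</Grid>
```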
This is much easier than it would be without this nifty feature! Note that we didn't even have to use a special control. We still use a standard Grid, but the framework allows us to set RowHeights and ColumnWidths properties by means of attached properties. (If you are not familiar with attached properties, see the sidebar.) You could also use the GridEx element directly instead of a Grid and get a few more features yet, but in many cases, you will simply find yourself using a few attached properties in addition to what you are already doing. This is a pattern you will see quite a few times in CODE Framework. In addition to the elements and controls you are already used to, there often is a control with almost the same name except for an Ex suffix that provides extended functionality. You can generally use either the Ex control, or just some attached properties provided by that class. Another aspect that currently bugs me about this UI definition is the hardcoded width of the text elements. They are set to a width of 250 pixels. This may work fine for the current font settings, but what if someone changed the font, perhaps by changing the application's overall style, or by setting Windows system settings, or anything
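As mentioned, you could also use the GridEx element directly; assuming it exposes the same settings as regular properties (an assumption based on the Ex-control pattern described above), that would look along these lines:

```xml
<c:GridEx RowHeights="Auto,Auto,Auto,Auto,25,Auto" ColumnWidths="*,*">
    <!-- child controls -->
</c:GridEx>
```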
Again, this should be just one line without spaces in your code, even though this long line displays as two separate lines in the magazine. With that reference in your project, you can remove the Width setting from the two input controls and replace it with this:
m:View.WidthEx="25"
XAML Namespaces
XAML is a language that defines instantiation of objects and setting properties on those objects. To know which objects/controls are available, XAML defines XML namespaces. By default, a single namespace indicates to XAML that all the standard controls in the WPF or Silverlight or Metro namespaces are available. To use other controls (such as your own or third-party controls), you can define additional namespaces that consist of a prefix (such as mvvm) linked to a .NET namespace, making all the classes in that namespace available for use in XAML. The classes in that new namespace are simply referred to by namespace plus the class name, as in <mvvm:View />.
Note that the value changes from 250 to 25. The idea here is to say "I want a width that accommodates about 25 characters using the current font face and size, and assuming an average character width." You can experiment with font settings and run your app (sometimes this doesn't show up right away in the designer) to see how the width changes. Note that you could set this attached property on both the textbox and the password-box. In fact, you can set the WidthEx property on any UI element. It is a very generic setting in that sense. That is why it is defined on the View object. You simply needed a generic place to put this sort of setting, and the View object seemed to make the most sense for that.
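Set on both input controls (consistent with the login UI assembled in Listing 2), the two lines become:

```xml
<TextBox m:View.WidthEx="25" Text="{Binding Username}" />
<PasswordBox m:View.WidthEx="25" />
```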
Attached Properties
All XAML dialects (WPF, Silverlight, etc.) allow for a concept called attached properties. This enables developers to create properties on an object and then attach those properties to another object. This is extremely helpful when one object needs to store a value specific to another object and keep track of it. For instance, the Grid class in WPF needs to know which row and column other elements are to be put into, and for that purpose needs to store those values for each child element. This is done by setting the Grid.Column and Grid.Row properties on completely different objects. You can use this same concept for a wide variety of things. Whenever an attached property is set, a method can be triggered that reacts to setting that property, allowing developers to do just about anything they want as a reaction to the property being set. For instance, you could create an arbitrary class called DnD with an attached property called Enabled. If this property is then set on an arbitrary second element (such as <Button DnD.Enabled="True" />), a handler method can be triggered that wires up everything needed to enable drag & drop on the target object, thus effectively extending the target class with drag-and-drop capabilities without that class ever being aware of it. As you can imagine, this provides for an extremely flexible and powerful system that the CODE Framework takes extensive advantage of.
There is one final example of a convenience feature I want to add to the UI. You may notice that the textbox is bound to a value (which presumably will be provided by some sort of view model or whatever else you have set as the data context). This is a very convenient setup and typical for an MVVM application. Note that the password-box, on the other hand, does not have a binding, because the text of a password-box simply isn't bindable in WPF. This is very inconvenient in an MVVM world, where you really want to bind just about anything to the view model. For this reason, we added the ability to bind a password-box by adding (you guessed it) an attached property that can be used like so right within the existing password-box:
c:PasswordBoxEx.Value="{Binding Password}"
completely. In fact, you can remove a whole lot more, including the Grid.Column and Grid.Row as well as the Grid.ColumnSpan settings. The View has the ability to figure that stuff out on its own. To do so, it uses advanced layout styling. (To follow along with this example, remove all such layout information from your UI definition now.) NOTE: Layout styling is a subject all on its own, and I have, in fact, written an article about layout styling called "Super Productivity: Using WPF and Silverlight's Automatic Layout Features in Business Applications," which appeared in the 2010 Nov/Dec issue of CODE Magazine. It has nothing to do with CODE Framework as such, but explains the concepts CODE Framework uses in general terms. By default, the View uses a Grid as its layout strategy. Since you removed all layout information from the UI definition, the layout will now look exactly like it would in any WPF Grid: All the controls are piled on top of one another. Not exactly useful or what you want. To create a more useful layout, we can use a different style. One such style that is available in all CODE Framework themes/skins is called CODE.Framework-Layout-SimpleFormLayout. What does this style look like? Well, that depends on which theme you are using. Suppose you choose one of the CODE Framework standard themes, such as Metro or Battleship (Windows 95 look); it will create a vertical stack of controls. Take a look at the UI definition in Listing 2, which, as you can see, is significantly simpler than the one in Listing 1. Nevertheless, the result still looks the same as the UI in Figure 1. NOTE: If you have created your project from scratch or aren't using CODE Framework as your main framework, you need to make sure you add the desired theme DLL and merge the theme root into your resources. (Using the CODE Framework templates, this is done automatically.)
Assuming you want to use the Metro theme, add a reference to CODE.Framework.Wpf.Theme.Metro.dll and add the following XAML to your App.xaml file to make sure the resource dictionaries that make up the Metro theme are available to your UI:
<Application.Resources>
    <ResourceDictionary>
        <ResourceDictionary.MergedDictionaries>
            <ResourceDictionary Source="pack://application:,,,/CODE.Framework.Wpf.Theme.Metro;component/ThemeRoot.xaml" />
        </ResourceDictionary.MergedDictionaries>
    </ResourceDictionary>
</Application.Resources>
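If you prefer the Battleship theme instead, the same pattern presumably applies; assuming the analogous assembly name (CODE.Framework.Wpf.Theme.Battleship, an assumption on my part), the merged dictionary would be:

```xml
<ResourceDictionary Source="pack://application:,,,/CODE.Framework.Wpf.Theme.Battleship;component/ThemeRoot.xaml" />
```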
There you are, with a fully bindable password-box! You'll find quite a few more of these convenience features in CODE Framework. These features range from simple visual aspects, to code readability and productivity features (such as the Grid row and column definitions), to functional aspects (such as a bindable password-box), to behavioral features such as allowing controls like ListBoxes and trees to be bound to meaningful WPF commands. The space I have in this article is too short to discuss them all, but I encourage you to explore some of these features on your own. Plus, we are adding new ones all the time! Another aspect of all this that is really important is that so far, we are still dealing with a rather simple user control that has little to do with the CODE Framework, other than us having brought in a few DLLs and then having used some very specific features. You can use those features in any WPF application, regardless of whether CODE Framework is a key part of your setup or not. You can pick and choose not just the DLLs and classes you want to use, but in some cases, you may only want to use a single attached property. The level of choice you have is quite granular, and that is a deliberate design feature of CODE Framework.
NOTE: Since this UI has a code-behind file, you also need to go to that file and change the inheritance structure to inherit from View rather than user control; otherwise you'll get a "can't redefine base class" error. You can think of a View as a generic and extremely flexible container for UI definitions. By default, you can think of a View as a Grid, although you can change that layout behavior to your liking (which is often done in styles). Since the View itself can act as a Grid, you do not need the Grid definition anymore, so you can remove that
When I first show developers a UI definition like the one in Listing 2, I generally make a point to draw their attention not to what's there but to what isn't there: a complete lack of any layout information. The listing defines only which controls we want and what they are bound to, and perhaps a few other business things such as the rough desired width of a control in a generic fashion. But the fact that you have a label at the top of the form, then a textbox a few pixels below, and so forth, is something that is completely driven by the style. And as such, it is also changeable by means of a style, so you could make
Listing 2: Defining the same UI as in Listing 1, but using CODE Framework features
<m:View x:Class="UITest.LoginControl"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:c="clr-namespace:CODE.Framework.Wpf.Controls;assembly=CODE.Framework.Wpf"
    xmlns:m="clr-namespace:CODE.Framework.Wpf.Mvvm;assembly=CODE.Framework.Wpf.Mvvm"
    Style="{DynamicResource CODE.Framework-Layout-SimpleFormLayout}">
    <Label>Username:</Label>
    <TextBox m:View.WidthEx="25" Text="{Binding Username}" />
    <Label>Password:</Label>
    <PasswordBox m:View.WidthEx="25" c:PasswordBoxEx.Value="{Binding Password}" />
    <StackPanel HorizontalAlignment="Center" VerticalAlignment="Bottom" Orientation="Horizontal">
        <Button Margin="5" MinWidth="70" Content="Login" />
        <Button Margin="5" MinWidth="70" Content="Cancel" />
    </StackPanel>
</m:View>
it look completely different in the same Windows app, or you could even take this to different platforms such as Metro or Windows Phone and have the style create an appropriate look for each specific platform. Note that the UI is not just a simple top-to-bottom stack; the two buttons at the bottom are supposed to be at the same level horizontally. Since the style doesn't do that automatically, I added a StackPanel that handles these two buttons and let the style align the StackPanel as a whole. This is a fairly common thing to do. You may often have UIs that can be laid out almost entirely by some available style, except for one detail like these buttons. That doesn't mean you can't use the style. You simply use the style for what it does well and handle the rest (often the more interesting things) yourself. Composing UIs out of these different approaches is an important aspect. So now you might wonder how you would know about the available styles. The simplest answer to that question is to use the CODE Framework Visual Studio Extensions (downloadable through the Extensions Manager in Visual Studio). This gives you CODE Framework-specific project templates, including one for Views. When you use that template, a dialog pops up which lets you select the style for your view from a list (which is ever growing). This is a simple way to experiment with the different layout styles. Of course, you can also look into individual CODE Framework theme source code projects (they all start with CODE.Framework.Wpf.Theme) and see which XAML resources are there. The layout ones all have names that start with [Theme]-Layout-, such as Metro-Layout-SimpleForm.xaml. We even make all these resource dictionaries available as a separate download for easy viewing of the WPF styles. At this point you might wonder how the layout style you just used is defined. You can see that code here:
<Style TargetType="ItemsControl" x:Key="CODE.Framework-Layout-SimpleFormLayout">
    <Setter Property="ItemsPanel">
        <Setter.Value>
            <ItemsPanelTemplate>
                <Layout:BidirectionalStackPanel ChildItemMargin="0,0,0,5" />
            </ItemsPanelTemplate>
        </Setter.Value>
    </Setter>
</Style>
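The same pattern lends itself to custom variants. For instance, a hypothetical layout style with a wider 10-pixel gap between items might look like this (the key name is invented for illustration):

```xml
<Style TargetType="ItemsControl" x:Key="My-Layout-WideSpacing">
    <Setter Property="ItemsPanel">
        <Setter.Value>
            <ItemsPanelTemplate>
                <Layout:BidirectionalStackPanel ChildItemMargin="0,0,0,10" />
            </ItemsPanelTemplate>
        </Setter.Value>
    </Setter>
</Style>
```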
This is really just a lengthy way of saying we want to use a BidirectionalStackPanel to lay out items in an ItemsControl. The View object is an ItemsControl, so this style can be applied to it (but also to any other ItemsControl, whether it is part of CODE Framework or not; again, you may see a design and philosophical pattern emerge here). You might ask: what is a bidirectional stack panel? Well, it's another one of those little convenience controls. It works much like a StackPanel in WPF, except for a few minor details. For one, it can stack things both ways. In other words: If you use a bidirectional stack panel with vertical orientation (the default), then controls are put into the stack one after the other from top to bottom, except for those controls that have their individual vertical orientation set to Bottom, which are put in from the bottom up. In addition, this control allows setting the margin between the child items. (In this example, we set it so every control has a 5-pixel bottom margin.) In fact, this stack panel even has a special option for label and textbox types of controls, because we often have UIs with alternating label/control patterns, where the label goes with the control (such as the Username: label before the username textbox, and so on), and one usually wants a little less margin after the label. If you run our example UI, you can see the bidirectional stack panel in action. Try to resize the login window and you will see that the login buttons always stick to the bottom of the window, since they are defined in their own StackPanel, which has its VerticalAlignment property set to Bottom. A quick side remark about the two buttons: The simplest way to define this UI would be to not have to define these two buttons at all. If you think about it from a slightly more abstract viewpoint, you will notice that the login form provides a few very simple aspects. You can enter the user name and password, and textbox controls are generally a good way to achieve that in just about any environment. The buttons then allow you to trigger or cancel login. But are buttons really the best approach for that? That depends on the exact environment and the applied theme. In a conventional Windows setup, buttons may be great. In a touch environment, perhaps you want a different kind of button, or perhaps you want to use gestures. On a phone or in Windows 8 Metro, you may use buttons that are integrated with the device. The list goes on and on, and the point I am making is that you really do not know whether buttons are the best way to go.
A better approach (which would also be more productive for developers) is to define that both the login and cancel actions are available, but leave it up to the applied theme/skin to decide how to best present these standard actions. In the CODE Framework, that is possible through the Actions collection that can (optionally) be present on view models. The style then simply picks that up and shows it in the UI. If you create a default CODE Framework WPF MVVM/MVC app, you will see a login screen that looks suspiciously like the one we are creating in this article, including the two buttons, but they are only defined on the view model. For more information on this technique, see the CODE Framework MVVM/MVC article as well as the WPF Layout article (see above for both).
Figure 2: Creating a new View to implement the UIs shown in Figure 3 and Figure 4.
WPF MVVM/MVC project template. If you do not have the CODE Framework assemblies, the template will offer to download them automatically from CodePlex, which you should do. When you create the new project, pick the Battleship theme as your default theme to start out with (but choose to generally include both the Battleship and Metro themes). The first feature I'll show you how to add to the new project is a customer list and search interface. To do so, add a new Controller to the Controllers folder and add a Search() method. Then, go to the existing StartViewModel and add an action to its list of standard actions (or use one of the dummy ones that is already there), and when executed, call Controller.Action("Customer", "Search") to trigger the Search() method on the newly created controller. (NOTE: If you are not familiar with these steps but want to follow along, check out the previous CODE Framework WPF MVVM/MVC article; see above.) The detailed setup of the Controller and even the ViewModel do not matter for our purposes here. In this article we only care about the View. So let's go ahead and add the new View in the Views/Customer folder (you will have to add the Customer sub-folder to your Views folder). The ultimate goal is to create a user interface that looks like the UIs shown in Figure 3 and Figure 4 (which are the same UI but with different themes/skins applied). The search UI has two distinct parts: The main or primary part of the UI shows the list of customers as the result of the search, and the secondary part contains three textboxes that allow the user to specify search criteria. As it turns out, quite a few UIs follow this primary/secondary UI pattern, where a large area occupies the main part of the UI and a secondary part provides additional features. Think of Windows Explorer showing a list of files in the main area and a tree in the secondary area (possibly with the tree area hidden).
Or think of many of the Office applications and how they can show optional panels attached to the side of a screen. I am sure you can think of many more such examples. Since this type of UI setup is so common, CODE Framework provides default styles for it, known as the Primary/Secondary Form Layout. In fact, there are two slight variations on that theme: a general-purpose style of that name, as well as one that is specific to showing lists. UIs with lists in their main area tend to have a slightly different look than the ones that do not, so there are two options by default. For this example you want to use the one for lists. To create a new view with this style, add a new item to the Views/Customer folder and pick the CODE Framework View template. This shows the dialog shown in Figure 2. Note that this dialog lets you pick the Primary/Secondary style as one of the default options, which puts you right where you want to be. As the next step, we define the list part of the view (the part that will ultimately show the customer search results). For now, all we are going to do is put a ListBox in the view and bind it to a Customers collection on the view model. (I am skipping the details of the view model here, but you can download the companion source code for this article to see the details of the view model definition.) To indicate to the view that this is the control you want to use for the primary area of the UI, you can set
Figure 4: The customer search UI in Windows 7 with the Metro theme applied.
the View.UIElementType attached property to Primary. And that's about it for the core definition of the list. Observant readers may notice that I have not yet defined which fields I want to show in the list, how they are to be displayed, or anything of that nature. In fact, the list as it is defined right now will show an entry for each of the customers found, but it will show no actual field information, so the list is not very useful. However, we are not currently worried about that part, since each theme may want to show search results in a different way. So all I'll do for now is define that the list is based on a style called Customer-List, which I have yet to define. (See Listing 3 for the complete source of the view definition.) The secondary part of the UI is going to host the search UI. The style I have chosen is going to do a good job out of the box at placing the secondary UI part in an appropriate spot for each theme the user may choose. In fact, by default, this style is going to be somewhat intelligent and look at the dimensions of the secondary UI. If the secondary UI is tall and skinny, it will be put either to the right or the left of the main UI. If, on the other hand, the secondary UI is very wide but not very tall, the style will put that UI either at the top or the bottom. At least that is what happens in most themes. Of course, you can completely change the way this works, and you can change other aspects, such as the threshold at which it flips from one approach to another, or whether you want that behavior at all. (Take a look at the GridPrimarySecondary class for a list of all the properties you can set.) Some themes may also choose a completely different approach. Perhaps on smaller screen sizes, a theme could decide to only show the primary UI and float in the secondary UI only when needed.
There is no limit to the options you can pick here except your imagination and perhaps some UI standards you may want to follow. What all this means is that you do not have to worry in general terms about where the secondary UI is going to go. You only have to define it. But how do you define that UI exactly? After all, the secondary UI is really a collection of controls rather than just a single control. The answer is deceptively simple: You can simply put all of those controls into a container and then flag the container as the secondary UI. How the controls are laid out inside
Listing 3: The View definition for the Customer Search UI shown in Figure 3 and Figure 4
<mvvm:View xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:mvvm="clr-namespace:CODE.Framework.Wpf.Mvvm;assembly=CODE.Framework.Wpf.Mvvm"
    xmlns:c="clr-namespace:CODE.Framework.Wpf.Controls;assembly=CODE.Framework.Wpf"
    Title="Customer Search"
    Style="{DynamicResource CODE.Framework-Layout-ListPrimarySecondaryFormLayout}">
    <ListBox ItemsSource="{Binding Customers}"
        Style="{DynamicResource Customer-List}"
        c:ListBoxEx.Command="{Binding EditCustomer}"
        mvvm:View.UIElementType="Primary" />
    <mvvm:View UIElementType="Secondary"
        Style="{DynamicResource CODE.Framework-Layout-SimpleFormLayout}">
        <Label>Last Name:</Label>
        <TextBox Text="{Binding LastName}" />
        <Label>First Name:</Label>
        <TextBox Text="{Binding FirstName}" />
        <Label>Company:</Label>
        <TextBox Text="{Binding Company}" />
        <Button HorizontalAlignment="Right" Content="Search..."
            Command="{Binding SearchCustomers}" />
        <WrapPanel VerticalAlignment="Bottom">
            <Label FontSize="{DynamicResource FontSize-Smaller}">Legend:</Label>
            <Rectangle Fill="Red" Height="8" Width="8" />
            <Label FontSize="{DynamicResource FontSize-Smaller}">Inactive</Label>
            <Rectangle Fill="Green" Height="8" Width="8" />
            <Label FontSize="{DynamicResource FontSize-Smaller}">Active</Label>
        </WrapPanel>
    </mvvm:View>
</mvvm:View>
the container is a different matter. In the example you might want a layout you are already familiar with: the simple form layout, with controls stacked top to bottom and some other controls stacked from the bottom up. To facilitate all this, you'll use a View object as the container (yes, a View element inside another View; there is nothing wrong with that) and set the style to the familiar simple form layout style. And voilà! The UI is done. Quick and painless, yet extremely flexible and reusable.
Some UIs cannot be laid out entirely in a fully automatic fashion, but are compositions made out of smaller individual UI segments that use automatic layout features individually.
What you've just done is an extremely important concept in the CODE Framework: You've used automatic layout features, but you aren't using the automatic layout system to lay out the entire form all at once. I would consider it very unlikely that you have many forms in your application that could all be laid out in one swoop by a single generic layout mechanism. However, by composing individual parts of the UI from pieces that can individually be laid out automatically, you can probably handle a very wide range of UIs. There will likely still be a certain percentage of your UIs (or parts of those UIs) that you have to lay out by hand, and that is OK. Being able to use automatic layout for the rest of the views, however, provides huge advantages in terms of developer productivity and the long-term maintainability and reusability of your application. Understanding the ability to apply automatic layout features to sub-sections of your UIs is a big step towards becoming a super-productive WPF developer. At this point, the example is still missing some functionality. You will want to launch a customer edit form when the user selects an item from the list (as well as when the New Customer button is clicked). Let's create
some code that handles customer selections. Many developers would now create an event handler for events such as double-click in the code-behind file. Note, however, that our view doesn't even have a code-behind file at all. What's up with that? Well, for one, you can create regular views with code-behind files and use them in CODE Framework without problems if you wish to do so. (CODE Framework supports both compiled views, those with code-behind files, as well as loose XAML views that do not have any associated code-behind files.) Personally, I really like views without code-behind files for a number of reasons. For one, they are more generic and can be reused in more scenarios if they are not tied to a specific code-behind file and associated classes that may only be available in some XAML dialects. They are also not pre-compiled for a specific XAML dialect, which means that your application can do some pre-processing before loading the views. (For instance, there is no Label control in Silverlight, but the framework can handle that with a pre-processing step for a loose XAML file, but not a compiled one.) Also, developers tend to put way too much code into code-behind files, which causes bad implementations and views that aren't very flexible or reusable. For instance, if you hook the ListBox's double-click event, you would be forever trapped in having to use a double-click. But what if you want to run the view in a touch scenario? Then you'd only want to single-tap. Or perhaps you want to run the view on a phone, and double-clicks may not apply there. Maybe you want to have a right-click option to trigger editing. And so on, and so on. By not providing a code-behind file to put inflexible code like this into, developers are practically forced to write good code. (NOTE: You can always do the same thing in behaviors that you can do in code-behind files. And if you really were to run into a scenario where this is not the case, first, please drop me an email, because I'd like to see it.
Second, you could always use a code-behind file just for that view.) The example for this article will simply use actions to drive customer editing. I have added a view action (com-
Figure 5: The Search view definition automatically maintains four different resource dictionaries that are loaded as needed.
mand) called EditCustomer to the view model, and I can simply bind my ListBox to that action. As you may know, ListBoxes do not have a useful command setup for this purpose, so we added one in CODE Framework. Simply set ListBoxEx.Command and you are good to go. Better yet, ListBoxEx provides a few additional settings that specify whether commands are to be triggered on single click or double click. (Take a look at the downloadable source for details on that implementation.) Note that this sort of setup is also somewhat common in the CODE Framework. Whenever functionality useful to MVVM-style architecture and general coding without code-behind is missing, we try to add it. However, we can't possibly anticipate all scenarios developers may encounter and provide specific command bindings for them. Instead, we have a generic attached property on an object called Ex that provides an EventCommand property (as well as an EventCommands collection in case you need more than one), which allows binding any event to a command. For instance, if you wanted to bind a button's double-click event to a command, you could do it in the following fashion:
<Button Content="Hello">
  <c:Ex.EventCommand>
    <c:EventCommand Command="{Binding DoubleClickCommand}"
                    Event="MouseDoubleClick" />
  </c:Ex.EventCommand>
</Button>
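The ListBoxEx.Command setup described above might look roughly like the following sketch. The ListBoxEx.Command and ListBoxEx.CommandTrigger attached properties and the EditCustomer command come from the article; the ItemsSource binding name and the "DoubleClick" trigger value are illustrative assumptions (the article only shows "Select" explicitly):

```xml
<!-- Sketch: bind the customer list to the EditCustomer view action.
     ItemsSource name and the DoubleClick trigger value are assumed. -->
<ListBox ItemsSource="{Binding Customers}"
         c:ListBoxEx.Command="{Binding EditCustomer}"
         c:ListBoxEx.CommandTrigger="DoubleClick" />
```

Because the trigger is a property rather than a hard-wired event handler, a theme-specific style can later switch it (for instance, to single-click for touch) without touching the view.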
Using Themes

So far, we have created a fully functional customer list with a severe lack of any real information. We can search for customers and we can see a list of customers, but the resulting list only shows the name of the bound class (CustomerQuickInformation) rather than useful data such as the customer's name. What is missing is a data template for each item in the list. Using standard WPF technique (or that of any other XAML dialect, for that matter), we can simply define an ItemTemplate for the ListBox to remedy the situation. However, if you were to do that right in the view definition, the format of the list would be hardcoded and couldn't be changed for a specific theme or for a different platform such as a touch-enabled environment or a mobile device. A much better approach is to put the same definition into a resource dictionary (basically a separate XAML file). This is no more or less work than putting it directly into the view, so it is a good idea even if you never switch to a different theme or platform. Besides, you never know whether you may want to change to a different theme later. Chances are that 5 or 10 years down the road, you might want to create a face-lift for your application. (This also makes UI elements like item templates very nicely editable and designable with tools such as Expression Blend, which I highly recommend using.)

The CODE Framework has the ability to automatically manage resource dictionaries for you. Every view you create can optionally have additional resource dictionaries associated with it by simple naming convention. Using standard CODE Framework templates, you can create such resource dictionaries automatically (depending on the options you pick in the dialog shown in Figure 2). Figure 5 shows the search view source file with the associated resource dictionaries. The rules for loading resource dictionaries are simple. When loading a view (such as Search.xaml), the framework also searches for XAML files with a .Layout. infix. Therefore, the Search.xaml view will always also load Search.Layout.xaml (if it exists) without you having to add that dictionary or merge it in yourself. (In fact, you should never load these dictionaries manually, to avoid having unwanted and confusing resources available in scenarios where they are not wanted.) You can add up to 20 layout resource dictionaries (such as Search.Layout.xaml, Search.Layout.0.xaml, Search.Layout.1.xaml, and so forth), which will all be loaded and which provide a convenient place to put individual resources associated with your view without having to create a single monster resource dictionary.

Putting templates into theme-specific resources is no more or less work than putting them right into the view, which makes this a good idea even if you only plan to use a single theme. Not to mention that they work better with Expression Blend.

The CODE Framework also loads theme-specific resource dictionaries. (I am using the terms theme and skin interchangeably, as is common in the developer community.) CODE Framework WPF applications have a global setting for the current theme, set on the Application object. The framework uses an ApplicationEx object which provides a Theme property. Based on this property, the framework loads different resources. For instance, if that property is set to Metro, the Search.Metro.xaml resource is also loaded with the search view. If the theme were set to Battleship, it would load Search.Battleship.xaml instead. This article isn't about creating new themes (I will write a future article about that), but you can freely expand on the theming system in CODE Framework and create your own themes or customize existing ones. So assuming you had created a theme called BlueOcean (note that there can't be spaces in theme names), the framework would try to load a file called Search.BlueOcean.xaml. In the example for this article, that file does not exist. Whenever that happens, the framework tries to load a default theme file instead, so it would load Search.Default.xaml if that file existed (as it does in the example for this article). The default file is a good place to put catch-all resources, but note that it is only loaded if no theme-specific file is found. It will not be loaded in addition to theme files, as some people incorrectly believe. (You can, however, manually create additional resource dictionaries that do not follow any naming convention and are thus not automatically handled by the CODE Framework, and add dictionary merge commands to the automatic dictionaries. This is great for creating and loading resources that are shared across theme-specific dictionaries. Among other scenarios, graphical assets such as icons may often be defined in this way.)
It is common to create theme-specific files for only a handful of themes and use the default file for all others. Maybe you support five different themes in your application, four of which are simply different font and color variations on a basic Windows theme while the fifth is a touch-specific option. You may create a default dictionary that is generally used and only create one additional dictionary for the touch-specific setup.

Returning to the example at hand, let's create a definition for the customer list for our simple Windows 95 style (called Battleship). As you can see in Listing 3, the ListBox is defined as using a style called Customer-List. We can thus put a style definition with that key into our Search.Battleship.xaml file. A basic setup of this style can look like this:
<Style x:Key="Customer-List" TargetType="ListBox"
       BasedOn="{StaticResource {x:Type ListBox}}">
  <Setter Property="ItemTemplate">
    <Setter.Value>
      <DataTemplate>
        <Grid>
          <!-- and so on... -->
Xamalot.com

You can download the artwork used in this article from www.Xamalot.com, a free source of clipart for developers, specifically created to provide XAML-based art (although you can also download everything in bitmap-based formats such as JPG or PNG). To use XAML-based vector art (as shown in the Metro-styled list of customers in this article), simply find a clipart you like, choose to download it as a XAML Brush resource (or simply have the XAML displayed on the site), and copy it into your own resource dictionaries. The simplest way to display XAML-based art in WPF is to place a rectangle on the screen and use the downloaded art/brush as its fill.

As you can see, this style is defined to be applicable to ListBox elements. It is also based on the default style for ListBoxes. Themes may choose to completely redefine the way certain controls look (ListBoxes in Metro, for instance, have a different appearance than they do in Windows 7). I don't want to worry about what that specific look is, but I want to respect it. That is why I base my style on the default. In addition, I then define the ItemTemplate as a DataTemplate. The exact details are omitted from the code snippet above, but the basic idea is simple. The template of each item is a Grid with several columns. In the first column I place a rectangle with a data-bound fill color based on whether or not the customer is active. Columns 2 and 3 have data-bound text elements to show the customer's name and company name. You can see the full code example in Listing 4.

Listing 4 has a few other details of interest. I want the ListBox to look like a data grid. For that purpose, I created a simple template for the entire ListBox that shows a header with labels for each column. I use a Grid to define this header element and I even allow for GridSplitter elements to resize some of the columns. The data template for each individual item defines the individual columns within each row (which are technically independent from every other row in the list) to have a width that is data-bound to the width of the header column. With that, I get a working grid control where each row shows its data in columns and those columns are resizable through the header, as you would expect.

Listing 4 has one more interesting trick. As you may recall, the entire view uses a style called CODE.Framework-Layout-ListPrimarySecondaryFormLayout. This is a style defined by the framework that we intended to use directly; our view was never designed to define a custom style for the layout. However, in this case (mainly to show this technique in this example), I decided to override that style anyway and create a style of the same name in the Battleship resource dictionary. Since the resource dictionary specific to the view is loaded (internally) after the style of the same name provided by the framework, this style takes precedence and is the one applied. (This is a standard XAML technique, useful for many things with or without CODE Framework.) The actual definition of this style is copied almost 1:1 from the default definition as found in the CODE Framework source (which is available to everyone). The only thing I changed is that I increased the SecondaryUIElementAlignmentChangeSize property to 500, indicating that I want my secondary UI to be aligned across the top of the view unless it is taller than 500 pixels. (It isn't in our example, thus effectively forcing the search UI to be positioned at the top of the screen.) Figure 3 shows the result.

Now, all the example lacks is a definition of a look specific to Metro. To achieve the appearance shown in Figure 4, I'll follow the same steps as for the Battleship theme (except I won't mess with the overall layout style this time). The ListBox item template now is a simple Grid of a size hardcoded to 75x250 pixels. Within it, I placed two data-bound text elements as well as an icon (which I downloaded as a XAML-based vector image from www.Xamalot.com; see sidebar). To create a nice multi-column flow of elements within the list, I styled the items panel of the ListBox to use a WrapPanel element. Furthermore, I want slightly different selection behavior. While in a regular Windows world I would expect to double-click a customer to edit it, in Metro I would expect to single-click (or single-tap in a touch environment) to achieve the same result. I can do this by adding the following to the definition of my ListBox style:

<Style TargetType="ListBox" x:Key="Customer-List"
       BasedOn="{StaticResource Metro-Control-ListBox}">
  <Setter Property="c:ListBoxEx.CommandTrigger" Value="Select" />

This is a small change, but it is also a profound one, as it shows that styles can not only change visual aspects such as colors or layout but even drive the behavior of a user interface. (If all this re-styling of ListBoxes is new to you, check out my article about styling ListBoxes in XAML. See sidebar for more details.) Note how little effort actually went into creating these different themes, yet the results, as shown in Figure 3 and Figure 4, are quite different in appearance and even behavior. Experiment with the application by running it with both themes. You can change themes directly in the App.xaml file, but you can also swap themes in the running application by using the menu items/tiles that are added for this purpose by default. Note that you can cause a style change any time you desire, either by triggering a SwitchThemeViewAction as shown in the StartViewModel class, or by simply setting the Theme property of the current application:
var app = App.Current as ApplicationEx;
app.Theme = "Blue";
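Since ApplicationEx derives from the standard WPF Application object and exposes the Theme property described above, the startup theme can presumably also be set declaratively in App.xaml. The following is only a sketch; the exact root element name, the clr-namespace mapping, and the class name are assumptions, as the article only states that the framework uses an ApplicationEx object with a Theme property:

```xml
<!-- Sketch of App.xaml with a declarative startup theme.
     Namespace/assembly names below are assumed, not taken from the article. -->
<fw:ApplicationEx x:Class="MyApplication.App"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:fw="clr-namespace:CODE.Framework.Wpf.Mvvm;assembly=CODE.Framework.Wpf.Mvvm"
    Theme="Metro" />
```

Setting the property later at runtime (as in the C# snippet above) then swaps all theme-specific resources on the fly.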
Listing 4: The complete definition of the Battleship-themed UI elements for the Search view
<ResourceDictionary
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:Controls="clr-namespace:CODE.Framework.Wpf.Controls;assembly=CODE.Framework.Wpf"
    xmlns:Layout="clr-namespace:CODE.Framework.Wpf.Layout;assembly=CODE.Framework.Wpf">

  <Style TargetType="ItemsControl"
         x:Key="CODE.Framework-Layout-ListPrimarySecondaryFormLayout">
    <Setter Property="ItemsPanel">
      <Setter.Value>
        <ItemsPanelTemplate>
          <Layout:GridPrimarySecondary Margin="20" UIElementSpacing="15"
                                       SecondaryUIElementAlignmentChangeSize="500"/>
        </ItemsPanelTemplate>
      </Setter.Value>
    </Setter>
    <Setter Property="Background" Value="{x:Null}" />
  </Style>

  <Style x:Key="Customer-List" TargetType="ListBox"
         BasedOn="{StaticResource {x:Type ListBox}}">
    <Setter Property="ItemTemplate">
      <Setter.Value>
        <DataTemplate>
          <Grid>
            <Grid.ColumnDefinitions>
              <ColumnDefinition Width="{Binding Width, ElementName=column1, Mode=OneWay}" />
              <ColumnDefinition Width="{Binding Width, ElementName=column2, Mode=OneWay}" />
              <ColumnDefinition Width="{Binding Width, ElementName=column3, Mode=OneWay}" />
              <ColumnDefinition Width="{Binding Width, ElementName=column4, Mode=OneWay}" />
            </Grid.ColumnDefinitions>
            <Rectangle Height="16" Width="16" Margin="2"
                       Fill="{Binding IsActiveBrush}"
                       VerticalAlignment="Center" HorizontalAlignment="Left" />
            <TextBlock Grid.Column="1" Text="{Binding FullName}" />
            <TextBlock Grid.Column="2" Text="{Binding Company}" />
          </Grid>
        </DataTemplate>
      </Setter.Value>
    </Setter>
    <Setter Property="Template">
      <Setter.Value>
        <ControlTemplate TargetType="{x:Type ListBox}">
          <Border x:Name="Bd"
                  BorderBrush="{TemplateBinding BorderBrush}"
                  BorderThickness="{TemplateBinding BorderThickness}"
                  Background="{TemplateBinding Background}"
                  Padding="1" SnapsToDevicePixels="true">
            <Controls:GridEx RowHeights="Auto,*">
              <Grid>
                <Grid.ColumnDefinitions>
                  <ColumnDefinition Width="25" x:Name="column1" />
                  <ColumnDefinition Width="300" x:Name="column2" />
                  <ColumnDefinition Width="300" x:Name="column3" />
                  <ColumnDefinition Width="*" x:Name="column4" />
                </Grid.ColumnDefinitions>
                <Grid.Background>
                  <LinearGradientBrush StartPoint="0,0" EndPoint="0,1">
                    <GradientStop Color="#E0E0E0" Offset="0" />
                    <GradientStop Color="WhiteSmoke" Offset=".5" />
                    <GradientStop Color="#D6D6D6" Offset="1" />
                  </LinearGradientBrush>
                </Grid.Background>
                <Label Grid.Column="1">Name</Label>
                <GridSplitter Grid.Column="1" HorizontalAlignment="Right"
                              VerticalAlignment="Stretch" Width="1" />
                <Label Grid.Column="2">Company</Label>
                <GridSplitter Grid.Column="2" HorizontalAlignment="Right"
                              VerticalAlignment="Stretch" Width="1" />
              </Grid>
              <ScrollViewer Focusable="false" Grid.Row="1"
                            Padding="{TemplateBinding Padding}">
                <ItemsPresenter SnapsToDevicePixels="{TemplateBinding SnapsToDevicePixels}"/>
              </ScrollViewer>
            </Controls:GridEx>
          </Border>
        </ControlTemplate>
      </Setter.Value>
    </Setter>
  </Style>
</ResourceDictionary>
This causes the current application to unload all theme-specific resources and load the ones for the specified theme instead (following the rules described above). This works even for UIs that are already loaded, which will completely change their look and behavior on the fly. This effect often blows developers away, as the results can be dramatic yet are very simple to achieve. Switch back and forth between the Battleship and the Metro themes.
The source remains very simple and manageable. However, things can get a bit tricky once the application is running and you are trying to figure out what is going on. (NOTE: This is an issue for all XAML applications and not specific to CODE Framework.) To remedy this situation, CODE Framework provides a View Visualizer, a developer tool that is turned on by default in App.xaml.cs (yet should be turned off before you deploy your application). The View Visualizer provides a list of all views currently running and shows which view and view model each UI is using. It also shows which controller launched the UI, thus giving you all three pieces of information that go along with MVC scenarios. This is a very valuable tool to have, especially when it comes to maintaining applications, or when you are asked to work on a UI that was originally created by a different developer and you may not know where to find all the pieces.

The View Visualizer provides a lot more detail about each individual view, however. Once you select a view from the list of open views, you can not only see a live and zoomable visual of the view (useful for taking a close look at view details), but you can also see a hierarchical display of all the elements that make up the view. (During development, we referred to this as the document outline.) You can hover your mouse over each element in the view to see a preview of just that element (useful for identifying the specific element you are looking for in complex views) and you can then select an element to see additional details. Those details include a list of all resource dictionaries that are loaded and accessible to the selected element (which could be application-global, specific to the current view, or even specific to the current element) and all the resource dictionaries those dictionaries may be loading, and so on (dictionary merging can cause large hierarchies to be loaded).
Figure 6: The View Visualizer tool shows detailed information about all open views, the elements they are made of, and details about associated resources and styles.
The View Visualizer provides tools for WPF similar in concept to what Firebug in Firefox provides for HTML.
In addition, you can choose to see all the styles, and their individual settings, that apply to the selected element. For those of you who have done web development and used either the Internet Explorer Developer Tools or Firebug in Firefox, this is probably a familiar sight, as this part of the visualizer aims to provide the same information, or its XAML equivalent. It allows you to easily see which styles are applied and why (they may be explicitly set by key or brought in implicitly based on the control type), which styles those styles are based on, and so forth. You can also see which style settings have been overridden by subsequent styles, as these settings are crossed out. Again, to web developers this will feel familiar, while a tool like this used to be sorely missing in WPF. Figure 6 shows the tool in action.
Editing Customers

At this point, the example is already quite interesting from a developer's point of view. Nevertheless, I also want to create a customer edit form to show a few more details. Fundamentally, the provided example (as shown in Figure 7) is relatively simple, yet it has some very interesting details. Listing 5 shows the definition of this view (which is stored in a single file, with no additional resource dictionaries needed at all). The most interesting aspect of the edit view definition is not what is there, but what is not there. If you look closely, you will see that in true CODE Framework fashion, the view definition is very simple and only lists the different elements you want in the UI, what they are bound to, and a few abstract layout hints, such as the width of the elements (in a generic and style-independent fashion) as well as group and column breaks. And that's it! The actual layout of this form is created by a style called Edit Form Layout, which produces the result shown in Figure 7. In this example, the style was able to lay out the entire form all at once, allowing you not just to be super-productive in the creation of the form but also to create a view that is highly reusable in other XAML dialects and even in completely different scenarios such as ASP.NET MVC, iOS, and Android.

Is it realistic in real-world scenarios to have UI definitions that can be completely handled by a style like this? As it turns out, most business applications tend to have a number of trivial edit forms that can indeed be handled in this fashion. Most serious forms, however, are likely to be more complex. In those cases, it is more realistic to apply these automatic layout styles to sub-sections of the view (like we did with the search screen) and compose larger views out of smaller areas that are laid out automatically (or, if need be, even manually).

Figure 8 shows the same view from Figure 7, but this time it runs on true Windows 8 using Windows 8 Metro (not to be confused with the Metro style I applied to a Windows 7 WPF application throughout this article). While the view definition remains unchanged, the applied style chooses to present the view differently. Labels are now above their associated controls. Spacing is different. Font sizes are different. Yet using the CODE Framework, we can still use the same exact view without changes. This not only saves a ton of work when moving to Metro, but it also allows you to reuse code that is already well tested. Another example of reusing the same view is shown in Figure 9, which shows the view running on Windows Phone 7. These examples lead beyond the focus of this article and into different areas of the framework, such as the Windows 8 and Windows Phone specific implementations (and more). I do not have the space in this article to discuss those aspects in detail (they may be the subject of future installments of this column), but it is important to understand that if you are defining your UIs using the techniques described here, you are not just going to be very productive in creating your UIs; you will also have created much more reusable UIs in the long run.

Figure 8: The unchanged customer edit view running as a true Windows 8 Metro application.

Fonts

One theme-related detail worth pointing out is the use of fonts and font sizes. All default CODE Framework themes define default settings for font families and font sizes. Those are usually defined within each theme's resources (in the framework source code) in a file called Fonts.xaml. Most importantly, there is a resource called DefaultFont which defines the theme's default font family. You should use that reference rather than explicitly setting the font on any of your elements:

<!-- Good -->
<TextBlock FontFamily="{DynamicResource DefaultFont}" />

<!-- Bad -->
<TextBlock FontFamily="Segoe UI" />

If you find yourself needing different font families in your app, simply add more resources in your own resource dictionaries. A simple way to do so is to create your own Fonts.xaml file in the Themes/[Name] folder, which the default CODE Framework template already created for you (for instance, if you chose to include the Metro theme, simply add a Themes/Metro/Fonts.xaml file). Add a link to your merged dictionary from the root theme file (such as Metro.xaml in the case of the Metro theme), which will cause the framework to automatically load your font definition resource dictionary whenever that theme is applied.
Figure 9: The same customer edit view running unchanged in definition, although with a different look, on Windows Phone 7.
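The custom-font setup described in the Fonts discussion can be sketched as follows. The Themes/Metro folder layout and the DynamicResource lookup come from the article; the HeaderFont key and the exact merge markup inside Metro.xaml are illustrative assumptions:

```xml
<!-- Themes/Metro/Fonts.xaml (sketch; the HeaderFont key is an assumed example) -->
<ResourceDictionary
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
  <FontFamily x:Key="HeaderFont">Segoe UI Light</FontFamily>
</ResourceDictionary>

<!-- In the root theme file (Themes/Metro/Metro.xaml), merge the dictionary in
     so it loads automatically whenever the Metro theme is applied: -->
<ResourceDictionary.MergedDictionaries>
  <ResourceDictionary Source="Fonts.xaml" />
</ResourceDictionary.MergedDictionaries>
```

Views would then reference the new family the same way they reference DefaultFont, for example FontFamily="{DynamicResource HeaderFont}".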
Colors
A similar concept applies to colors. You should never apply a color explicitly; instead, always use styles for colors. This allows for much greater flexibility (not to mention consistency) across your application. It is not uncommon for applications to offer completely new skins using the simple trick of swapping the color dictionary. The CODE Framework, by default, defines twelve different standard colors (three foreground, three background, three highlight, and three theme colors), all defined in the Colors.xaml file. Since WPF sometimes needs colors and sometimes needs brushes, there are brush equivalents for all these colors as well. (If you want to make changes, you only need to change the colors, as the brushes use the colors to define themselves.) You may have noticed that all the Metro examples in this article use a blue background while the default template creates a red background. I achieved this simply by putting the following setting into my Metro-Colors.xaml file in the Metro theme folder:

<Color x:Key="CODE.Framework-ApplicationThemeColor1">Navy</Color>

Again, add to the list of colors if needed (it is generally a good idea to create color as well as brush resources), but do not use explicit colors in your views and view-specific styles.

Message Boxes

A final feature I want to point out is one that deals with the common need for message boxes. It is very tempting to simply put a call to MessageBox.Show() in your view model, but this kills reuse as well as other aspects such as testability. To avoid this problem, the CODE Framework offers its own message box feature. To show a message box, simply use the Controller.Message() method:

Controller.Message("Pretending to save data...", "Saving");

Fundamentally, this call supports the same parameters you would expect on the standard Windows message box. In fact, many styles choose to use the default message box feature to display controller messages. However, many styles choose to display messages completely differently, in a way that is appropriate for the style (see Figure 10). For testability, the framework also allows for mocking of chosen values in message boxes, so you can place a call to a message box and simulate the user picking one of the available options without ever having to display a UI.

CODE Framework message boxes also have quite a number of features that standard message boxes lack. As it turns out, CODE Framework message boxes are really just standardized UIs with a view definition and a view model. If you place a standard call to the Controller.Message() action, the framework automatically creates a standard view and view model appropriate to display the message box. However, you can also supply your own view and view model if you want. This allows much greater flexibility. For instance, you can easily create completely different captions for your buttons. Or, you could create completely custom views and models to add elements such as textboxes or drop-down lists to your message boxes. Since this feature uses the standard view/view-model architecture of the framework, you could create UIs of unlimited complexity. (However, most message boxes should probably not have hundreds of UI elements in real-world scenarios.)

Figure 10: The Metro theme uses a Metro-appropriate approach for displaying message boxes.
MANAGED CODER

"Eh, <language> is good enough for everything we do, why change?" This is an argument I've heard over and over again from developers, and one I always hear in my head as, "Dude, I just got to the point where I have enough C# (or VB) under my belt to be hired somewhere; don't go rocking the boat! Learning is hard!" Then maybe you should go shopping, Barbie.

To the Internet!

A quick Google search on "diversity" leads to more of the same chest-beating "It just is" kinds of articles (and a couple of college message boards), but one article (http://www.businessnewsdaily.com/1200-workforce-diversity-good-for-business.html) actually offers up a rationale: diversity, according to a Forbes study, leads to better innovation, which in turn leads to better competitiveness: "Companies have realized that diversity and inclusion are no longer separate from other parts of the business," said Stuart Feil, editorial director of Forbes Insights. "Organizations in the survey understand that different experiences and different perspectives build the foundation necessary to compete on a global scale." Although no articles I've found make the causation clear, the correlation between a diverse workplace and innovation (and thus competitive strength) seems relatively clear, at least according to the study cited. One of the key elements seems to be around attracting and retaining talented employees.

Thinking in Language

It makes me curious, though: again, going back to diversity arguments, what's so wrong with English? Why not just insist that everybody within the company learn English? I mean, English seems to work well enough for me; why shouldn't it work well enough for everybody? (No, I'm not really arguing that. I happen to enjoy being semi-fluent in French and German.) Monoglots of programming languages will be quick to point out that you can't have more than one language in your brain at a time, that taking the time and energy to master one will crowd out the others. Does that hold true for spoken language? If not, then why the difference between spoken and programming languages? If anything, programming languages are simpler than their spoken cousins (just ask any AI researcher). So it stands to reason that if I can't keep C# and ML in my head at the same time, I can't keep French and English there, either. And yet, somehow, people do this all the time. On the fly. During tense diplomatic negotiations. Far, far better than any computer ever could, for that matter. So perhaps similar kinds of languages, like the C family of languages (C++/Java/C#), are easy to hold simultaneously, but stretching across families (C family vs. Lisp family or ML family) is too far. Just like French (Latin family) and Chinese (Far Eastern family) are too... yeah. I guess somehow the Chinese and French have been able to talk to each other through interpreters, too.

And to the naysayers, I'll point out that sometimes the point of learning other programming languages isn't to try to populate your place of employment with every language under the sun, but to stretch your mind in interesting new directions and see that stretch rewarded with clarity when something new comes down the pipe. Think for a moment about C#. In C# 2.0, Microsoft introduced anonymous delegates: blocks of code that could be trapped into a reference and passed around like objects. If you were one of those programmers who came from the C/C++ family of languages (probably passing through Java on the way), then you didn't see much point to this new language feature, except to make event-handling code a little easier to write. But if you were a Lisp programmer, or a Ruby programmer, or a programmer of any other language that supports closures, you knew immediately what this feature was, and you knew immediately how it could be used to tremendous effect. Your designs could suddenly start taking advantage of it and get significantly simpler and cleaner. And when C# 3.0 started introducing more functional kinds of syntax and semantics, masquerading as LINQ, you again immediately recognized them as such and started writing code to take advantage of them. And when the Task Parallel Library and PLINQ came along, you just nodded knowingly during the session and started planning for how they would change your code.

Consider a simple concept like recursion, something most programmers learn in an early programming lesson. By the time they reach the point of writing production code, they've learned the lesson that recursion is slow compared to iteration, and they put the idea on the shelf. What they never realize, then, is that recursion is slow in a language that has to create a new stack frame for each method/function/procedure call, and that in some languages the compiler (or the runtime) can optimize the recursion into a single stack frame, regardless of how deep the recursion goes, using what's called tail recursion optimization. Or that some languages allow the passing of parameters by name, rather than by position (as C#, Java, C++, C, and so on do). Or, if you knew Lua or JavaScript and their idea of objects (which are essentially name/value pairs), you would realize that a Dictionary<string,object> containing name/value bindings, where the values could be those delegate instances, sounds awfully familiar and flexible. And so on. Just like knowing different spoken languages helps improve your understanding of your native tongue, knowing different programming languages helps improve your mental grip on the language used at home. It's not just a matter of an Eskimo having 27 different words for snow (which is a myth, by the way) and understanding that snow is somehow important to that culture; it's about seeing how different languages put concepts together, and how sometimes that changes the perception of the concept entirely.

Wrap-up

For some people, these words will fall on deaf ears. Workplace diversity may be well and good, but clearly technical diversity is a Bad Thing, and should be avoided at all costs. If Microsoft had wanted us to use dynamic languages, they would have built us one to use. If Microsoft had wanted us to use functional languages, they would have built it into the languages they gave us. And if Microsoft had wanted us to think outside of the box, they would have given us a new box to think within. And to all those who argue this perspective, I have only one piece of advice, handed down to me from a man I worked for many years ago and deeply respect to this day: Tell them that the phrase they will need to learn for their next job is, "Would you like fries with that?" Or, if you prefer, "Voulez-vous des frites avec ça?"

Ted Neward

May/June 2012, Volume 13, Issue 3
Group Publisher: Markus Egger
Associate Publisher: Rick Strahl
Editor-in-Chief: Rod Paddock
Managing Editor: Ellen Whitney
Content Editors: H. Kevin Fansler, Erik Ruthruff, Melanie Spiller
Writers In This Issue: Markus Egger, Neal Ford, Kevin S. Goff, Kevin Hazzard, Sahil Malik, Ted Neward, Paul D. Sheriff, Rick Strahl
Technical Reviewers: Markus Egger, Rod Paddock
Art & Layout: King Laurin GmbH, info@raffeiner.bz.it
Production: Franz Wimmer, King Laurin GmbH, 39057 St. Michael/Eppan, Italy
Printing: Fry Communications, Inc., 800 West Church Rd., Mechanicsburg, PA 17055
Advertising Sales: Tammy Ferguson, 832-717-4445 ext 26, tammy@code-magazine.com
Circulation & Distribution: General Circulation: EPS Software Corp.; Newsstand: Ingram Periodicals, Inc.; Media Solutions; Source Interlink International; The News Group
Subscriptions: Circulation Manager: Cleo Gaither, 832-717-4445 ext 10, subscriptions@code-magazine.com
US subscriptions are US $29.99 for one year. Subscriptions outside the US are US $44.99. Payments should be made in US dollars drawn on a US bank. American Express, MasterCard, Visa, and Discover credit cards accepted. The "bill me" option is available only for US subscriptions. Back issues are available. For subscription information, email subscriptions@code-magazine.com or contact customer service at 832-717-4445 ext 10. Subscribe online at www.code-magazine.com
CODE Component Developer Magazine, EPS Software Corporation / Publishing Division, 6605 Cypresswood Drive, Ste 300, Spring, Texas 77379; Phone: 832-717-4445; Fax: 832-717-4460