Architecture Guidance
Data Layer Guidelines
Data Layer
Design Considerations
Specific Design Considerations
Design Patterns
Designing Data Components
Choose Data Access Technology
Choose How to Retrieve and Persist Business Entities
Determine How to Connect to the Data Source
Determine Strategies for Handling Data Source Errors
Design Service Agents
Data Access Technology Matrix
Data Access Technologies
Object-Relational Data Access
Disconnected and Offline Data Access
SOA and Service Scenarios
N-tier and General Scenarios
Designing Business Entities
Choose the Representation
Choose a Design for Business Entities
Determine Serialization Support
Domain Driven Design
Entity Framework
Anti-Patterns To Avoid In N-Tier Applications
Understanding N-Tier
Custom Service or RESTful Service?
Anti-Pattern #1: Tight Coupling
Anti-Pattern #2: Assuming Static Requirements
Anti-Pattern #3: Mishandled Concurrency
Anti-Pattern #4: Stateful Services
Anti-Pattern #5: Two Tiers Pretending to be Three
Anti-Pattern #6: Undervaluing Simplicity
N-Tier Application Patterns
Change Set
DTOs
Simple Entities
Self-Tracking Entities
Implementing N-Tier with the Entity Framework
Concurrency Tokens
Serialization
Working with the ObjectStateManager
Patterns Other Than Simple Entities in .NET 3.5 SP1
API Improvements in .NET 4
Building N-Tier Apps with EF4
Self-Tracking Entities
Data Transfer Objects
Tips
Conclusion
Figures
Data Access Technologies
Current Data Technologies
Native Data Technologies
WCF Data Services
Future Data Technologies
Entity Framework
Comparing N-Tier Patterns with EF4
References
Book References
Patterns of Enterprise Application Architecture
Web References
MSDN
MSDN Patterns & Practices
InfoQ
Blogs
Architecture Guidance
Data Layer Guidelines
Data Layer
Design Considerations
Design Patterns
• General: Active Record, Data Mapper, Data Transfer Object, Domain Model, Query
Object, Repository, Row Data Gateway, Table Data Gateway, Table Module
• Batching: Parallel Processing, Partitioning
• Transactions: Capture Transaction Details, Coarse-Grained Lock, Implicit Lock,
Optimistic Offline Lock, Pessimistic Offline Lock, Transaction Script
• Choice is determined by the type of data and how data must be manipulated within
the application.
• Consider using the ADO.NET Entity Framework (EF) if you want to create a data
model and map it to a relational database, with the flexibility of keeping the
mapping schema separate from the object model. If using EF, also consider using
LINQ to Entities, which allows queries over strongly typed entities.
• Consider using WCF Data Services (formerly known as ADO.NET Data Services) if
developing a RIA or an n-tier rich client application, wanting to access data through
a resource-centric service interface. WCF Data Services is built on top of EF and
allows you to expose parts of an Entity Model through a REST interface.
• Consider using ADO.NET Core if you need a low level API for full control over data
access or if building an application that must support a disconnected data access
experience.
• Consider using ADO.NET Sync Services if designing an application that must support
occasionally connected scenarios, or requires collaboration between databases.
• Consider using LINQ to XML if using XML data in the application, and wanting to
execute queries using the LINQ syntax.
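To make the last option concrete, here is a minimal, self-contained sketch of a LINQ to XML query. The order/total schema is invented purely for illustration:

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

class Program
{
    static void Main()
    {
        // Hypothetical XML fragment standing in for application data.
        var doc = XDocument.Parse(
            "<orders>" +
            "<order id='1'><total>250</total></order>" +
            "<order id='2'><total>75</total></order>" +
            "</orders>");

        // LINQ to XML: query the document with LINQ syntax, no XPath needed.
        var largeOrders = from o in doc.Descendants("order")
                          where (int)o.Element("total") > 100
                          select (string)o.Attribute("id");

        foreach (var id in largeOrders)
            Console.WriteLine(id); // prints "1"
    }
}
```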
Choose How to Retrieve and Persist Business Entities
• Choose a strategy for populating business entities from the data store and for
persisting them back to the data store. An impedance mismatch exists between an
object-oriented data model and the relational data store. The most common
approaches use O/RM tools and frameworks to handle this mismatch.
• Consider using an O/RM framework that translates between domain entities and the
database. In a greenfield environment, use an O/RM tool to generate a schema to
support the object model and provide a mapping between the database and domain
entities. In a brownfield environment with an existing database schema, use an
O/RM tool for mapping between the domain model and relational model.
• A common pattern is domain driven design, based on modeling entities on objects
within a domain (see Designing Business Entities).
• Ensure that entities are grouped correctly to achieve a high level of cohesion. This
may mean adding objects to the domain model and grouping related entities into
aggregate roots.
• When working with Web applications or services, group entities and provide options
for partially loading domain entities with only the required data. This minimizes the
use of resources by avoiding holding initialized domain models for each user in
memory, and allows applications to handle higher user load.
• Identify how to connect to the data source, protect user credentials, and perform
transactions.
• Connections, Connection Pooling, Transactions and Concurrency
• Design an overall strategy to handle data source errors. All exceptions associated
with data sources should be caught by the data access layer. Exceptions concerning
the data itself, and data source access and timeout errors, should be handled in this
layer and passed to other layers only if the failures affect application responsiveness
or functionality.
• Exceptions, Retry Logic, Timeouts
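A minimal sketch of the retry-logic idea: catch only exceptions known to be transient (timeouts, in this example), retry a bounded number of times, and let everything else propagate. The attempt count and delay below are illustrative choices, not values prescribed by the guidance:

```csharp
using System;
using System.Threading;

static class DataAccessRetry
{
    // Retries an operation when it fails with a timeout; all other
    // exceptions propagate to the caller immediately.
    public static T Execute<T>(Func<T> operation, int maxAttempts, TimeSpan delay)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return operation();
            }
            catch (TimeoutException)
            {
                if (attempt == maxAttempts)
                    throw; // attempt budget spent: surface the failure
                Thread.Sleep(delay);
            }
        }
    }
}

class Program
{
    static void Main()
    {
        int calls = 0;
        // Simulated data source call that times out twice, then succeeds.
        int result = DataAccessRetry.Execute<int>(delegate
        {
            calls++;
            if (calls < 3) throw new TimeoutException();
            return 42;
        }, 5, TimeSpan.Zero);
        Console.WriteLine(result); // 42, after two retries
    }
}
```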
• ADO.NET Core provides facilities for the general retrieval, update, and management
of data. It includes providers for SQL Server, OLE DB, ODBC, SQL Server CE, and
Oracle databases.
• ADO.NET Data Services Framework exposes data using the Entity Data Model,
through RESTful Web services accessed over HTTP. The data can be addressed
directly using URIs. The Web service can be configured to return the data in Atom
or JavaScript Object Notation (JSON) format.
• ADO.NET Entity Framework provides a strongly typed data access experience over
relational databases. It moves the data model from the physical structure of
relational tables to a conceptual model that accurately reflects common business
objects. It introduces a common Entity Data Model within the ADO.NET
environment, allowing developers to define a flexible mapping to relational data.
This mapping helps to isolate applications from changes in the underlying storage
schema. It also supports LINQ to Entities, which provides LINQ support for business
objects exposed through the Entity Framework. When used as an O/RM product,
developers use LINQ to Entities against business objects, which Entity Framework
will convert to Entity SQL that is mapped against an Entity Data Model managed by
the Entity Framework. Developers also have the option of working directly with the
Entity Data Model and using Entity SQL in their applications.
• ADO.NET Sync Services is a provider included in the Microsoft Sync Framework, and
is used to implement synchronization for ADO.NET-enabled databases. It enables
data synchronization to be built into occasionally connected applications. It
periodically gathers information from the client database and synchronizes it with
the server database.
• Language Integrated Query (LINQ) provides class libraries that extend C# and
Visual Basic with native language syntax for queries. It is primarily a query
technology supported by different assemblies throughout the .NET Framework.
Queries can be performed against a variety of data formats, including DataSet
(LINQ to DataSet), XML (LINQ to XML), in-memory objects (LINQ to Objects),
ADO.NET Data Services (LINQ to Data Services), and relational data (LINQ to
Entities).
• LINQ to SQL provides a lightweight, strongly typed query solution against SQL
Server. LINQ to SQL is designed for easy, fast object persistence scenarios where
the classes in the mid-tier map very closely to database table structures. Starting
with .NET Framework 4.0, LINQ to SQL scenarios will be integrated and supported
by the ADO.NET Entity Framework; however, LINQ to SQL will continue to be a
supported technology.
• Custom business objects are common language runtime (CLR) objects that describe
entities in your system. The objects are created manually or using an O/RM
technology. Custom business objects are appropriate if complex business rules or
behavior must be encapsulated along with the related data. If custom business
objects need to be accessed across AppDomain, process, or physical boundaries, a
service layer can be implemented that provides access via Data Transfer Objects
(DTO) and operations that update or edit your custom business objects.
• DataSets are a form of in-memory database closely mapping to the actual database
schema. DataSets are typically used when building a data-oriented application
where the data in the application logic maps very closely to the database schema.
DataSets cannot be extended to encapsulate business logic or business rules.
Although DataSets can be serialized to XML, they should not be exposed across
process or service boundaries.
• XML is used to represent business entities only if the presentation layer requires it
or if application logic must work with the content based on its schema (for example,
a message routing system routing messages based on some well-known nodes in
the XML document). Using and manipulating XML can use large amounts of
memory.
• Domain Model is a design pattern that defines business objects representing real
world entities within the business domain. The business or domain entities contain
both behavior and structure (business rules and relationships are encapsulated
within the domain model). The domain model design requires in-depth analysis of
the business domain and typically does not map to the relational database models.
Consider using it when the business domain has complex business rules that relate
to the business domain, when designing a rich client and the domain model can be
initialized and held in memory, or when not working with a stateless business layer
that requires initialization of the domain model with every request.
• Table Module is a design pattern that defines entities based on tables or views
within a database. Operations used to access the database and populate the table
module entities are usually encapsulated within the entity, but can also be provided
by data access components. Consider using this design pattern if the tables or views
within the database closely represent the business entities, or if business logic and
operations relate to a single table or view.
• Custom XML objects represent deserialized XML data that can be manipulated within
the application code. Objects are instantiated from classes defined with attributes
that map properties within the class to elements and attributes within the XML
structure. Consider using custom XML objects if the consumed data is already in
XML format; XML data must be generated from non-XML data sources; or working
with read-only document-based data.
Domain Driven Design requires good understanding of the business domain mostly provided
to the development team by business domain experts. The whole team agrees to only use a
single language that is focused on the business domain, and which excludes any technical
jargon. Quite often, communication problems within development teams are due not only to
misunderstanding the language of the domain, but also to the fact that the domain's
language is itself ambiguous.
The domain model is expressed using entities, value objects, aggregate roots, repositories,
and domain services; organized into coarse areas of responsibility known as Bounded
Contexts:
• Entities are objects in the domain model that have a unique identity that does not
change throughout the state changes of the software. Entities encapsulate both
state and behavior.
• Value objects are objects in the domain model that are used to describe certain
aspects of a domain. They do not have a unique identity and are immutable (for
example a customer address object).
• Aggregate roots are entities that group logically related child entities or value
objects together, control access to them, and coordinate interactions between them.
• Repositories are responsible for retrieving and storing aggregate roots, typically
using an O/RM framework.
• Domain services represent operations, actions, or business processes and provide
functionality that refers to other objects in the domain model. At times, certain
functionality or an aspect of the domain cannot be mapped to any objects with a
specific life-cycle or identity; such functionality can be declared as a domain service
(for example, a catalog pricing service within the e-commerce domain).
While Domain Driven Design provides many technical benefits, such as maintainability, it
should be applied only to complex domains where the model and the linguistic processes
provide clear benefits in the communication of complex information, and in the formulation
of a common understanding of the domain.
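The building blocks above can be sketched in C#. The order-processing types and the quantity rule below are invented for illustration, not part of the guidance itself:

```csharp
using System;
using System.Collections.Generic;

// Value object: describes an aspect of the domain; immutable, no identity.
sealed class Address
{
    public readonly string Street;
    public readonly string City;
    public Address(string street, string city) { Street = street; City = city; }
}

// Entity: has a stable identity independent of its state.
class OrderLine
{
    public Guid Id { get; private set; }
    public string Product { get; private set; }
    public int Quantity { get; private set; }
    public OrderLine(string product, int quantity)
    {
        Id = Guid.NewGuid(); Product = product; Quantity = quantity;
    }
}

// Aggregate root: groups related entities and controls access to them.
class Order
{
    private readonly List<OrderLine> lines = new List<OrderLine>();
    public Guid Id { get; private set; }
    public Address ShipTo { get; private set; }
    public Order(Address shipTo) { Id = Guid.NewGuid(); ShipTo = shipTo; }

    // Child entities are modified only through the root, so the root can
    // enforce invariants (behavior lives with the data).
    public void AddLine(string product, int quantity)
    {
        if (quantity <= 0) throw new ArgumentException("quantity");
        lines.Add(new OrderLine(product, quantity));
    }
    public int LineCount { get { return lines.Count; } }
}

class Program
{
    static void Main()
    {
        var order = new Order(new Address("1 Main St", "Springfield"));
        order.AddLine("Widget", 3);
        Console.WriteLine(order.LineCount); // 1
    }
}
```

A repository in this sketch would load and store whole Order aggregates, never individual OrderLine rows.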
Entity Framework
Anti-Patterns To Avoid In N-Tier Applications
Understanding N-Tier
• The key difference is that REST services are resource-centric while custom services
are operation-centric.
• With REST, data is divided into resources, each resource is given a URL, and
standard CRUD operations are implemented on those resources.
• With custom services, any arbitrary method can be created and those operations
can be tailored to the specific needs of the application.
• ADO.NET Data Services in combination with the Entity Framework (EF) makes it
easy to create both RESTful services and clients to work with them. The framework
can provide more functionality to RESTful services automatically because the
services are constrained to follow a specific pattern.
• For many applications, the constraints of REST are just too much. For example,
sometimes the operations involve multiple resources at once.
• Often the ideal solution for an application is a mixture of REST and custom services.
• Loose coupling is more difficult than tight coupling, and often the performance is not
as good.
• Why introduce an interface and dependency injection? Why build an abstraction with
custom objects mapped to the database instead of filling a DataTable and passing it
around?
• In the short term you gain some efficiency with tight coupling, but in the long run
evolving the application can become almost impossible.
• When you have modules that work together closely within a tier, sometimes tight
coupling is the right choice, but in other cases, components need to be kept at
arm's length from one another.
• Tiers do not always change at the same rate. The trick is to identify which parts of
the application might have different rates of change and which parts are tightly
coupled to each other.
• First, consider the boundary between the database and the mid-tier. Using the EF
already helps here because its mapping system provides an abstraction between
mid-tier code and the database. The same questions should be considered between
the mid-tier and the client.
• A particularly common and painful example of this anti-pattern in action is an
architecture that uses table adapters (moves the data into a DataSet with the same
schema) to retrieve data from the database and Web services that exchange
DataSets with the client (tightly coupling the mid-tier to the client).
• The next anti-pattern comes up when developers try to simplify things by keeping
the context around across multiple service calls.
• Managing the context lifetime can get tricky quickly. When you have multiple clients
calling the services, you have to maintain a separate context for each client or risk
collisions between them. And even if you solve those issues, you will end up with
major scalability problems.
• These scalability problems are not only the result of tying up server resources for
every client. In addition you will have to guard against the possibility that a client
might start a unit of work, but never complete it, by creating an expiration scheme.
Further, if you decide that you need to scale your solution out by introducing a farm
with multiple mid-tier servers, then you will have to maintain session affinity to keep
a client associated with the same server where the unit of work began.
• The best solution is to avoid them altogether by keeping your mid-tier service
implementations stateless. If some information needs to be maintained for a unit of
work that extends across multiple service calls, then that information should be
maintained by the client.
Anti-Pattern #5: Two Tiers Pretending to be Three
• "Why can't you make the Entity Framework serialize queries across tiers?" "Oh, and
while you are at it, can you support initiating updates from another tier as well?"
• If you could create an Entity Framework ObjectContext on the client tier, execute
any Entity Framework query to load entities into that context, modify those entities,
and then have SaveChanges push an update from the client through the mid-tier to
the database server—if you could do all that, then why have the mid-tier at all? Why
not just expose the database directly?
Change Set
• The idea behind the change set pattern is to create a serializable container that can
keep the data needed for a unit of work together and, ideally, perform change
tracking automatically on the client. This approach also tends to be quite full-
featured and is easy to use on the mid-tier and on the client. DataSet is the most
common example of this pattern.
• Some of the downsides of this pattern:
◦ The change set pattern places significant constraints on the client because
the wire format tends to be very specific to the change set and hard to
make interoperable.
◦ The wire format is usually quite inefficient. Change sets are designed to
handle arbitrary schemas, so overhead is required to track the instance
schema.
◦ It is easy to end up tightly coupling two or more of the tiers, which
causes problems if they have different rates of change.
◦ It is easy to abuse the change set.
• Because it is so easy to put data into the change set, send it to the mid-tier, and
then persist, you can do so without verifying on the mid-tier that the changes you
are persisting are only of the type that you expect.
• This pattern is best used in cases where you have full control over client deployment
so that you can address the coupling and technology requirement issues. It is also
the right choice if you want to optimize for developer efficiency rather than runtime
efficiency. If you do adopt this pattern, be sure to validate any changes on the mid-
tier rather than blindly persisting whatever changes arrive.
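DataSet, the common example named above, can sketch the pattern end to end; the mid-tier call to GetChanges below is exactly where the recommended validation would happen before persisting:

```csharp
using System;
using System.Data;

class Program
{
    static void Main()
    {
        // Client side: a DataTable acts as the serializable change container.
        var table = new DataTable("Customer");
        table.Columns.Add("Id", typeof(int));
        table.Columns.Add("Name", typeof(string));
        table.Rows.Add(1, "Alice");
        table.AcceptChanges();              // baseline: rows are Unchanged

        table.Rows[0]["Name"] = "Alicia";   // tracked automatically
        table.Rows.Add(2, "Bob");           // new row, state Added

        // Mid-tier side: inspect the change set instead of blindly persisting.
        DataTable changes = table.GetChanges();
        foreach (DataRow row in changes.Rows)
            Console.WriteLine(row.RowState); // Modified, then Added
    }
}
```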
DTOs
• The intent of the Data Transfer Objects (DTOs) pattern is to separate the client and
the mid-tier by using different types to hold the data on the mid-tier and the data
on the client and in the messages sent between them. The DTO approach requires
the most effort to implement, but when implemented correctly, it can achieve the
most architectural benefits.
• You can develop and evolve your mid-tier and your client on completely separate
schedules because you can keep the data that travels between the two tiers in a
stable format regardless of changes made on either end. Naturally, at times you'll
need to add some functionality to both ends, but you can manage the rollout of that
functionality by building versioning plus backward and forward compatibility into the
code that maps the data to and from the transfer objects.
• Because you explicitly design the format of the data for when it transfers between
the tiers, you can use an approach that interoperates nicely with clients that use
technologies other than .NET. You can use a format that is very efficient to send
across the wire, or you can choose to exchange only a subset of an entity's data for
security reasons.
• The downside is the extra effort required to design three different sets of types for
essentially the same data and to map the information between the types.
• For many projects you might be able to achieve your goals with a pattern that
requires less effort.
Simple Entities
• The simple entities pattern reuses the mid-tier entity types on the client, striving to
keep the complexity of the data structure to a minimum and passing entity
instances directly to service methods. Only simple property modifications to entity
instances are allowed on the client. More complex operations, such as changing
relationships or accomplishing a combination of inserts, updates, and deletes,
should be represented in the structure of the service methods.
• No extra types are required and no effort has to be put into mapping data from one
type to another. If you can control deployment of the client, you can reuse the same
entity structures.
• The primary disadvantage is that more methods are usually required on the service
if you need to accomplish complex scenarios that touch multiple entities. This leads
either to chatty network traffic, where the client has to make many service calls to
accomplish a scenario, or to special-purpose service methods with many arguments.
• The simple entities approach is especially effective when you have relatively simple
clients or when the scenarios are such that operations are homogenous. Then the
service methods are generally either queries for read-only data, modifications to
one entity at a time without changing much in the way of relationships, or inserting
a set of related entities all at once for a specific entity.
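A sketch of what such a service surface might look like. The entity type, operations, and in-memory implementation are hypothetical; in a WCF service the interface and its members would additionally carry [ServiceContract]/[OperationContract] attributes:

```csharp
using System;
using System.Collections.Generic;

// Entity type reused as-is on both the mid-tier and the client.
class Customer { public string Id; public string Name; public string Phone; }

// Homogenous operations: a read-only query and a single-entity,
// properties-only update, as described above.
interface ICustomerService
{
    Customer GetCustomer(string id);
    void UpdateCustomer(Customer customer);
}

// Trivial in-memory implementation so the sketch is runnable.
class InMemoryCustomerService : ICustomerService
{
    private readonly Dictionary<string, Customer> store =
        new Dictionary<string, Customer>();

    public Customer GetCustomer(string id) { return store[id]; }
    public void UpdateCustomer(Customer customer) { store[customer.Id] = customer; }
}

class Program
{
    static void Main()
    {
        ICustomerService svc = new InMemoryCustomerService();
        svc.UpdateCustomer(new Customer { Id = "ALFKI", Name = "Alfreds" });
        Console.WriteLine(svc.GetCustomer("ALFKI").Name); // Alfreds
    }
}
```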
Self-Tracking Entities
• The self-tracking entities pattern is built on the simple entities pattern. It creates
smart entity objects that keep track of their own changes and changes to related
entities. To reduce constraints on the client, the entities are plain-old CLR objects
(POCO) that are not tied to any particular persistence technology. They just
represent the entities and some information about whether they are unchanged,
modified, new, or marked for deletion.
• Because the tracking information is built into the entities themselves and is specific
to their schema, the wire format can be more efficient than with a change set.
Because they are POCO, they make few demands on the client and interoperate
well. Because validation logic can be built into the entities themselves, you can
more easily remain disciplined about enforcing the intended operations for a
particular service method.
• There are two primary disadvantages for self-tracking entities compared to change
sets:
◦ A change set implementation can allow multiple change sets to be merged if
the client needs to call more than one service method to retrieve the data it
needs.
◦ The entity definitions are somewhat complicated because they include the
tracking information directly instead of keeping that information in a
separate structure outside the entities.
• Self-tracking entities are not as thoroughly decoupled as DTOs, and there are times
when more efficient wire formats can be created with DTOs.
• Nothing prevents you from using a mix of DTOs and self-tracking entities.
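A hand-written miniature of the idea, to show where the tracking information lives. The EF4 template generates much richer versions of this (including relationship tracking and a way to suspend tracking while the server materializes entities); the type and states here are a simplified sketch:

```csharp
using System;

// State the entity reports about itself (unchanged/modified/new/deleted).
enum TrackedState { Unchanged, Modified, Added, Deleted }

// A self-tracking POCO: no dependency on EF or any persistence DLL.
class Customer
{
    private string name;
    public string Id { get; set; }
    public TrackedState State { get; set; }

    public string Name
    {
        get { return name; }
        set
        {
            // The property setter records the change itself, so no
            // external change tracker is needed on the client.
            if (name != value && State == TrackedState.Unchanged)
                State = TrackedState.Modified;
            name = value;
        }
    }
}

class Program
{
    static void Main()
    {
        var c = new Customer { Id = "ALFKI" }; // State defaults to Unchanged
        c.Name = "Alfreds Futterkiste";
        Console.WriteLine(c.State); // Modified
    }
}
```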
Concurrency Tokens
• The best option for a concurrency token is a row version number. A row's
version automatically changes whenever any part of the row changes in the
database.
• The next best option is to use something like a time stamp and add a trigger to the
database that updates the time stamp whenever a row is modified.
• In the Entity Designer, select the property and set its Concurrency Mode to Fixed.
You can have more than one property in the same entity with Concurrency Mode set
to Fixed, but this is usually not necessary.
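The check that a Fixed-mode token buys you can be simulated in a few lines of plain C# (a made-up Row type stands in for the database; EF performs the equivalent comparison in the WHERE clause of its UPDATE statement):

```csharp
using System;

// Stand-in for a database row carrying a version-number concurrency token.
class Row
{
    public int Version;
    public string Value;
}

class Program
{
    // The update applies only if the stored version still matches the
    // version the client read; otherwise another writer got there first.
    public static bool TryUpdate(Row stored, int versionRead, string newValue)
    {
        if (stored.Version != versionRead)
            return false;        // conflict: the caller must refresh and retry
        stored.Value = newValue;
        stored.Version++;        // the database bumps rowversion automatically
        return true;
    }

    static void Main()
    {
        var row = new Row { Version = 1, Value = "a" };
        int seenByClientA = row.Version;
        int seenByClientB = row.Version;

        Console.WriteLine(TryUpdate(row, seenByClientA, "b")); // True
        Console.WriteLine(TryUpdate(row, seenByClientB, "c")); // False (stale token)
    }
}
```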
Serialization
The ObjectStateManager keeps track of the existence of each entity under its control; its key value; an EntityState value,
which can be unchanged, modified, added, or deleted; a list of modified properties; and the original
value of each modified property. When you retrieve an entity from the database, it is added to the
list of entities tracked by the state manager, and the entity and the state manager work together to
maintain the tracking information. If you set a property on the entity, the state of the entity
automatically changes to Modified, the property is added to the list of modified properties, and the
original value is saved. Similar information is tracked if you add or delete an entity. When you call
SaveChanges on the ObjectContext, this tracking information is used to compute the update
statements for the database. If the update completes successfully, deleted entities are removed
from the context, and all other entities transition to the unchanged state so that the process can
start over again.
• When sending entities to another tier, the automatic change tracking process is
interrupted. To perform an update on the mid-tier by using information from the
client, you need two special methods of ObjectContext:
◦ The Attach method tells the state manager to start tracking an entity. There
are two critical things about Attach to keep in mind:
▪ At the end of a successful call to Attach, the entity will always be in
the unchanged state. If you want to eventually get the entity into
some other state, such as modified or deleted, you need to take
additional steps to transition the entity to that state. The value an
entity's property has when you attach it will be considered the
original value for that property. The value of the concurrency token
when you attach the entity will be used for concurrency checks.
▪ If you attach an entity that is part of a graph of related entities, the
Attach method will walk the graph and attach each of the entities it
finds.
◦ The ApplyPropertyChanges method implements the other half of a
disconnected entity modification scenario. It looks in the
ObjectStateManager for another entity with the same key as its argument
and compares each regular property of the two entities. When it finds a
property that is different, it sets the property value on the entity in the state
manager to match the value from the entity passed as an argument to the
method. It is important to note that the method operates only on "regular"
properties and not on navigation properties, so it affects only a single entity,
not an entire graph. It was designed especially for the simple entities
pattern.
• The above mechanism adds some complication to the client, which needs to copy the
entity before modifying it. An alternative is to attach the modified entity and use
some lower-level APIs on the ObjectStateManager to tell it that the entity should be
in the modified state and that every property is modified.
• The ObjectStateManager mechanism can also be used in service methods that add and
delete entities.
• The change set pattern can be implemented. See the sample of this pattern written
with one of the prerelease betas of the EF. Consider creating an ObjectContext on
the client with only the conceptual model metadata and use that as a client-side
change tracker.
• Implementing DTOs is not that much more difficult with the first release of the EF
than it will be in later releases. You have to write your own code or use an
automatic mapper to move data between your entities and the DTOs. Consider
using LINQ projections to copy data from queries directly into your DTOs.
    public List<CustomerDTO> GetCustomerDTOs()
    {
        using (var ctx = new NorthwindEntities())
        {
            var query = from c in ctx.Customers
                        select new CustomerDTO()
                        {
                            Name = c.ContactName,
                            Phone = c.Phone
                        };
            return query.ToList();
        }
    }
• The EF will support complete persistence ignorance for entity classes (POCO),
allowing the creation of entities that have no dependencies on the EF or other
persistence-related DLLs. A single entity class used for persisting data with the EF
will also work on Silverlight or earlier versions of .NET. POCO helps isolate the
business logic in your entities from persistence concerns and makes it possible to
create classes with a clean, interoperable serialization format.
• Working with the ObjectStateManager will be easier because the state transition
constraints have been relaxed.
• The EF will allow building a model in which an entity exposes a foreign key property
that can be manipulated directly.
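For example, with a foreign key property exposed, a new order detail can point at its product directly (the entity and property names below are illustrative):

```csharp
// No Product entity is needed; the foreign key value is enough.
var detail = new Order_Detail
{
    ProductID = productId,  // set the FK property directly
    Quantity = 2
};
order.Order_Details.Add(detail);
```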
• The EF will use the T4 template engine to allow easy, complete control over the code
that is generated for entities. This allows Microsoft to release templates that
generate code for a variety of scenarios and usage patterns, and allows you to
customize those templates. One of the templates will produce classes that implement
the self-tracking entities pattern with no custom coding required on your part.
These features are used to implement the Self-Tracking Entities pattern in a template
(making it more accessible) and while DTOs still require the most work during initial
implementation, this process is also easier with EF4 (see figure).
Self-Tracking Entities
Start by creating an Entity Data Model that represents the conceptual entities and map it to
a database:
• Reverse engineer a model from an existing database
• Create a model from scratch and then generate a database to match
Replace the default code generation template with the Self-Tracking Entities template:
• Right-click the entity designer surface and choose Add Code Generation Item.
• Choose the Self-Tracking Entities template from the list of installed templates.
• This turns off default code generation and adds two templates: one generates the
ObjectContext, and the other generates entity classes. Separating these into two
templates makes it possible to split the code into separate assemblies, one for the
entity classes and one for the context.
The main advantage is that you can have your entity classes in an assembly that has no
dependencies on the Entity Framework. This way, the entity assembly and any business
logic implemented there can be shared by the mid-tier and the client if you want.
The context is kept in an assembly that has dependencies on both the entities and the EF:
• If the client is running .NET 4, you can just reference the entity assembly from the
client project.
• If your client is running an earlier version of .NET or is running Silverlight, you can
add links from the client project to the generated files and recompile the entity
source in that project (targeting the appropriate CLR).
Similarly, if new entities are added to a graph or entities are deleted from a graph, that
information is tracked:
• Since the state of each entity is tracked on the entity itself, the tracking mechanism
behaves as you would expect even when you relate entities retrieved from more
than one service call.
• If you establish a new relationship, just that change is tracked: the entities involved
stay in the same state, as though they had all been retrieved from a single service
call.
The context template adds the method ApplyChanges to the generated context. It attaches
a graph of entities to the context and sets the information in the ObjectStateManager to
match the information tracked on the entities. With the tracking information on the
entities and ApplyChanges, the generated code handles both change tracking and concurrency
concerns, two of the most difficult parts of correctly implementing an n-tier solution.
The GetProducts service method is used to retrieve reference data about the product
catalog on the client; a second service method retrieves a customer and a list of that
customer's orders:
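Sketches of such service methods, assuming the illustrative Northwind-style model used throughout (the method name GetCustomerAndOrders is an assumption):

```csharp
public List<Product> GetProducts()
{
    using (var ctx = new NorthwindEntities())
    {
        return ctx.Products.ToList();
    }
}

public Customer GetCustomerAndOrders(string customerId)
{
    using (var ctx = new NorthwindEntities())
    {
        // Include brings the orders back as part of the same graph.
        return ctx.Customers.Include("Orders")
                  .Single(c => c.CustomerID == customerId);
    }
}
```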
To illustrate client usage of self-tracking entities, consider creating an order with
appropriate order detail lines, updating part of the customer entity with the latest
contact information, and deleting any orders that have a null OrderDate (the system marks
rejected orders that way):
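A client-side sketch of those operations; the service proxy name, the entity names, and the MarkAsDeleted helper generated by the Self-Tracking Entities template are assumptions based on the scenario described above:

```csharp
// customer and product came from earlier service calls.
var order = new Order { OrderDate = DateTime.Now };
order.Order_Details.Add(new Order_Detail
{
    ProductID = product.ProductID,  // FK property only; no Product entity
    Quantity = 2
});
customer.Orders.Add(order);

// Update contact information; the entity records the change itself.
customer.ContactName = "New Contact Name";

// Delete rejected orders (the system marks them with a null OrderDate).
foreach (var rejected in customer.Orders
                                 .Where(o => o.OrderDate == null)
                                 .ToList())
{
    rejected.MarkAsDeleted();
}

// One service call sends the whole graph, with its change state, back.
serviceProxy.SubmitOrder(customer);
```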
Note that when creating the order detail entity for the new order, just the ProductID
property is set rather than the Product entity itself. This is the new foreign key relationship
feature in action. It reduces the amount of information that travels over the wire because
you serialize only the ProductID back to the mid-tier, not a copy of the product entity.
It’s in the implementation of the SubmitOrder service method that Self-Tracking Entities
really shines:
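A sketch of the method body; ApplyChanges is the method the context template generates:

```csharp
public bool SubmitOrder(Customer customer)
{
    using (var ctx = new NorthwindEntities())
    {
        // Read the change state recorded on the self-tracking entities
        // and replay it into the context's ObjectStateManager.
        ctx.Customers.ApplyChanges(customer);
        return ctx.SaveChanges() > 0;
    }
}
```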
The call to ApplyChanges reads the change information from the entities and applies it to
the context in a way that makes the result the same as if those changes had been
performed on entities attached to the context the whole time.
A second critical design principle is that you should develop separate, specific service
methods for each operation. Without these separate operations, you do not have a strong
contract representing what is and isn’t allowed between your two tiers, and properly
validating your changes can become impossible.
In DTOs, instead of sharing a single entity implementation between the mid-tier and the
client, you create a custom object that’s used only for transferring data over the service and
develop separate entity implementations for the mid-tier and the client:
• It isolates your service contract from implementation issues on the mid-tier and the
client, allowing that contract to remain stable even if the implementation on the
tiers changes.
• It allows you to control what data flows over the wire. You can avoid sending
unnecessary data or data the client is not allowed to access.
• The service contract is designed with the client scenarios in mind so that the data
can be reshaped between the mid-tier entities and the DTOs (maybe by combining
multiple entities into one DTO).
• These benefits come at the price of having to create and maintain one or two more
layers of objects and mappings.
The following code applies DTOs to the order submission example. Note the CustomerVersion
field, which contains the row version information used for concurrency checks on the
customer entity:
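A DTO for this scenario might be shaped roughly as follows (the type and member names are illustrative):

```csharp
[DataContract]
public class NewOrderDTO
{
    [DataMember] public string CustomerID { get; set; }
    [DataMember] public string ContactName { get; set; }

    // Row version of the customer row, used for concurrency checks.
    [DataMember] public byte[] CustomerVersion { get; set; }

    [DataMember] public List<NewOrderLine> Lines { get; set; }
}

[DataContract]
public class NewOrderLine
{
    [DataMember] public int ProductID { get; set; }
    [DataMember] public short Quantity { get; set; }
}
```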
The service method that accepts this DTO uses the same lower-level Entity Framework APIs
that the Self-Tracking Entities template uses to accomplish its tasks. First, you create a
graph of customer, order, and order detail entities based on the information in the DTO:
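The graph-construction step might be sketched like this, inside the service method (the Version property name is an assumption):

```csharp
// Rebuild an entity graph from the incoming DTO.
var customer = new Customer
{
    CustomerID = dto.CustomerID,
    ContactName = dto.ContactName,
    Version = dto.CustomerVersion  // row version for the concurrency check
};
var order = new Order { Customer = customer };
foreach (var line in dto.Lines)
{
    order.Order_Details.Add(new Order_Detail
    {
        ProductID = line.ProductID,
        Quantity = line.Quantity
    });
}
```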
Then you attach the graph to the context and set the appropriate state information:
ctx.Customers.Attach(customer);
var customerEntry = ctx.ObjectStateManager.GetObjectStateEntry(customer);
customerEntry.SetModified();
customerEntry.SetModifiedProperty("ContactName");
ctx.ObjectStateManager.ChangeObjectState(order, EntityState.Added);
foreach (var order_detail in order.Order_Details)
{
    ctx.ObjectStateManager.ChangeObjectState(order_detail, EntityState.Added);
}
return ctx.SaveChanges() > 0;
Flow:
• Attach the entire graph to the context: each entity is in the Unchanged state.
• Tell the ObjectStateManager to put the customer entity in the Modified state with
only the ContactName property marked as modified (the only customer info
provided by the DTO).
• Change the state of the order and each of its order details to Added.
• Apply changes to customer and order with SaveChanges.
Because you have a very specific DTO for each scenario, no change-state validation is
required: you implicitly interpret the DTO as you map the information from it into your
entities. Nevertheless, in many cases additional validation of the values or other
business rules is still required.
One other consideration is properly handling concurrency exceptions using the version
information of the customer entity included in the DTO. You can either map this exception
to a WCF fault for the client to resolve the conflict, or you can catch the exception and apply
some sort of automatic policy for handling the conflict.
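A sketch of both options; OptimisticConcurrencyException and Refresh are Entity Framework APIs, while the ConcurrencyFault contract is a hypothetical fault type:

```csharp
try
{
    return ctx.SaveChanges() > 0;
}
catch (OptimisticConcurrencyException)
{
    // Option 1: surface the conflict as a typed WCF fault so the
    // client can refresh its data and resolve the conflict itself.
    throw new FaultException<ConcurrencyFault>(new ConcurrencyFault());

    // Option 2 (instead): apply an automatic policy on the mid-tier,
    // e.g. ctx.Refresh(RefreshMode.ClientWins, customer);
    //      ctx.SaveChanges();
}
```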
Tips
Conclusion
The .NET 4 release of the Entity Framework makes the creation of architecturally sound n-
tier applications much easier. For most applications, it is recommended to start with the
Self-Tracking Entities template, which simplifies the process and enables the most reuse. If
you have different rates of change between service and client, or if you need absolute
control over your wire format, you should move up to a Data Transfer Objects
implementation. Regardless of which pattern you choose, always keep in mind the key
principles that the antipatterns and patterns represent and never forget to validate your
data before saving.
Figures
Data Access Technologies