To begin, let's look at a simple and somewhat common scenario. We have an unsorted list of names. We
want to go through each letter of the alphabet and print an alphabetized list of names for the current
letter. Here's a typical approach.
Module Module1
    Sub Main()
        Dim alphabet As String() = New String() {"A", "B", "C", "D", "E", "F", "G", "H", "I", "J", _
                                                 "K", "L", "M", "N", "O", "P", "Q", "R", "S", "T", _
                                                 "U", "V", "W", "X", "Y", "Z"}
        Dim names As String() = New String() {"Adam", "Dave", "John", "Alex", "Daryll", "Jacob", _
                                              "Christopher", "Bill", "Ronald", "Jeff"}
        For Each letter In alphabet
            Dim selected_names As New List(Of String)
            For Each name As String In names
                If name.StartsWith(letter) Then selected_names.Add(name)
            Next
            If selected_names.Count > 0 Then
                selected_names.Sort(AddressOf SortNamesMethod)
                Console.WriteLine("Names beginning with '" & letter & "'")
                For Each name As String In selected_names
                    Console.WriteLine(name)
                Next
                Console.WriteLine("")
            End If
        Next
        'Pause for the user
        Console.Read()
    End Sub

    Private Function SortNamesMethod(ByVal name1 As String, ByVal name2 As String) As Integer
        Return name1.CompareTo(name2)
    End Function
End Module
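Running the program against the sample data produces output along these lines (letters with no matching names are skipped):

```text
Names beginning with 'A'
Adam
Alex

Names beginning with 'B'
Bill

Names beginning with 'C'
Christopher

Names beginning with 'D'
Daryll
Dave

Names beginning with 'J'
Jacob
Jeff
John

Names beginning with 'R'
Ronald
```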
While this approach certainly accomplishes the job, you will see in a moment how LINQ can make the line
count smaller, the program flow more logical, and the code easier to maintain.
Extension Methods
Extension methods allow programmers to add useful methods to existing types. In our example, we will be
adding an extension method to the IEnumerable(Of String) type. The extension method will print the list of
strings to the console, as well as the list title.
In order to create an extension method, we will need to import the System.Runtime.CompilerServices
namespace:

Imports System.Runtime.CompilerServices
Next, we will need to create a new subroutine. The sub will be named ToConsole and will take two
parameters. The first parameter will define the type to which this method is being added, and the second
parameter will be the title that we want to give our list. Finally, we will need to add the Extension attribute
to the sub to alert the compiler that this method is an extension method.
<Extension()> _
Private Sub ToConsole(ByVal items As IEnumerable(Of String), _
                      ByVal title As String)
End Sub
Now we need to add the functionality. We will borrow it from the Main function above:
<Extension()> _
Private Sub ToConsole(ByVal items As IEnumerable(Of String), _
                      ByVal title As String)
    If items IsNot Nothing Then
        Console.WriteLine(title)
        For Each item In items
            Console.WriteLine(item)
        Next
        Console.WriteLine("")
    End If
End Sub
We can use this method on any object that implements the IEnumerable(Of String) interface. In our
example, the selected_names variable in the Main method implements this interface because the List(Of
String) implements IEnumerable(Of String). We can modify our code to look like this:
Imports System.Runtime.CompilerServices

Module Module1
    Sub Main()
        Dim alphabet As String() = New String() {"A", "B", "C", "D", "E", "F", "G", "H", "I", "J", _
                                                 "K", "L", "M", "N", "O", "P", "Q", "R", "S", "T", _
                                                 "U", "V", "W", "X", "Y", "Z"}
        Dim names As String() = New String() {"Adam", "Dave", "John", "Alex", "Daryll", "Jacob", _
                                              "Christopher", "Bill", "Ronald", "Jeff"}
        For Each letter In alphabet
            Dim selected_names As New List(Of String)
            For Each name As String In names
                If name.StartsWith(letter) Then selected_names.Add(name)
            Next
            If selected_names.Count > 0 Then
                selected_names.Sort(AddressOf SortNamesMethod)
                selected_names.ToConsole("Names beginning with '" & letter & "'")
            End If
        Next
        'Pause for the user
        Console.Read()
    End Sub

    Private Function SortNamesMethod(ByVal name1 As String, ByVal name2 As String) As Integer
        Return name1.CompareTo(name2)
    End Function

    <Extension()> _
    Private Sub ToConsole(ByVal items As IEnumerable(Of String), _
                          ByVal title As String)
        If items IsNot Nothing Then
            Console.WriteLine(title)
            For Each item In items
                Console.WriteLine(item)
            Next
            Console.WriteLine("")
        End If
    End Sub
End Module
Typically, an extension method is most useful in cases where you'd want that method in more than one
place. Our example is small, so this is not really necessary, but the exercise will help when dealing with
some of the built-in extension methods provided for LINQ.
Lambda Expression
Next, let's get rid of the SortNamesMethod. In a small application like this one, defining a function that is
used only once is not a problem, but in a very large application, these kinds of extra functions get to be
annoying and confusing. We will use a simple lambda expression instead of the function.
The List(Of String).Sort() method accepts a reference to a function as a parameter. It doesn't
care where the function exists or how it was declared, as long as the function takes two strings as
parameters and returns an integer indicating whether the first string is greater than, equal to, or less than
the second string. We can replace AddressOf SortNamesMethod with a lambda expression.
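The listing that showed this lambda was lost from this copy of the article; based on the surrounding description, the call would look like this (a reconstruction, not the author's original formatting):

```vbnet
' Replace the AddressOf reference with an inline lambda expression.
selected_names.Sort(Function(name1 As String, name2 As String) name1.CompareTo(name2))
```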
The compiler reads this lambda expression and creates a function for us. The lambda expression explicitly
declares its parameters, and the compiler is able to detect that the return type is an integer (because
name1.CompareTo(name2) returns an integer). Once the compiler has created the function, it replaces the
lambda expression with the address of the compiler-created function. The resulting binary code is pretty
much the same, but the benefit is that I no longer need to deal with that extra function floating around in
my code.
Let's begin with the Where method. For any IEnumerable(Of T), the Where method will return an
IEnumerable(Of T) containing all items in the list that match the predicate provided as a parameter. That's a
little confusing, so let's look at an example. We will be replacing the for loop, which filters the names list by
the first letter, with the Where extension method.
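The before-and-after listings were lost from this copy; reconstructed from the surrounding text and the complete program later in the article:

```vbnet
' Before: filtering with an explicit loop
Dim selected_names As New List(Of String)
For Each name As String In names
    If name.StartsWith(letter) Then selected_names.Add(name)
Next

' After: filtering with the Where extension method
selected_names = names.Where(Function(name As String) name.StartsWith(letter)).ToList()
```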
We've replaced the for-loop with an extension method. For the predicate parameter, we've used a lambda
expression that accepts a String parameter and returns a boolean. The lambda is transformed into a
function at compile time, and the Where extension method applies the function to each element in our
name list at run time. The Where extension method then returns an IEnumerable(Of String) containing all
items in our name list where the result of the predicate (StartsWith(letter)) is true. Then we use the
ToList() extension method to transform the IEnumerable(Of String) into a List(Of String) so that we can
assign the result back to the selected_names variable.
We can take this a step further by sorting the list as we filter it. Instead of applying the ToList extension
method directly to the result of the Where extension method, let's apply the OrderBy extension method
first. The OrderBy extension method takes a function that accepts a name from our list as a parameter and
returns some key by which to sort the list. In this case, we want to order the list by the name itself, so all
we need to do is return the name that was passed in as a parameter.
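Again the listings were lost from this copy; reconstructed from the complete program later in the article:

```vbnet
' Before: filter, then sort separately
selected_names = names.Where(Function(name As String) name.StartsWith(letter)).ToList()
selected_names.Sort(Function(name1 As String, name2 As String) name1.CompareTo(name2))

' After: filter and sort in a single statement
selected_names = names.Where(Function(name As String) name.StartsWith(letter)) _
                      .OrderBy(Function(name As String) name).ToList()
```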
The sort function later in the code is no longer necessary. Our complete code now looks like:
Imports System.Runtime.CompilerServices

Module Module1
    Sub Main()
        Dim alphabet As String() = New String() {"A", "B", "C", "D", "E", "F", "G", "H", "I", "J", _
                                                 "K", "L", "M", "N", "O", "P", "Q", "R", "S", "T", _
                                                 "U", "V", "W", "X", "Y", "Z"}
        Dim names As String() = New String() {"Adam", "Dave", "John", "Alex", "Daryll", "Jacob", _
                                              "Christopher", "Bill", "Ronald", "Jeff"}
        For Each letter In alphabet
            Dim selected_names As New List(Of String)
            selected_names = names.Where(Function(name As String) name.StartsWith(letter)) _
                                  .OrderBy(Function(name As String) name).ToList()
            If selected_names.Count > 0 Then
                selected_names.ToConsole("Names beginning with '" & letter & "'")
            End If
        Next
        'Pause for the user
        Console.Read()
    End Sub

    <Extension()> _
    Private Sub ToConsole(ByVal items As IEnumerable(Of String), _
                          ByVal title As String)
        If items IsNot Nothing Then
            Console.WriteLine(title)
            For Each item In items
                Console.WriteLine(item)
            Next
            Console.WriteLine("")
        End If
    End Sub
End Module
LINQ
Now that we've covered extension methods and lambda expressions, we can convert our selected_names
filter into a LINQ statement.
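The two listings were lost from this copy; a reconstruction of the two forms described in the text:

```vbnet
' Extension-method syntax
selected_names = names.Where(Function(name As String) name.StartsWith(letter)) _
                      .OrderBy(Function(name As String) name).ToList()

' Equivalent LINQ query syntax
selected_names = (From name In names _
                  Where name.StartsWith(letter) _
                  Order By name).ToList()
```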
This is the exact same statement written two ways. As you can see, LINQ is syntactic sugar that makes our
extension methods look prettier. Not all extension methods can be replaced with LINQ keywords, which is
why we must still call the ToList() extension method as we did before. We must also wrap the LINQ query in
parentheses so that ToList() is applied to the result of the entire LINQ statement instead of to the name
variable.
In addition to what we have seen, there is the Select extension method. This method accepts a name as a
parameter and returns whatever object you want to return. The result of applying this extension method to
a list is a new list of whatever type you choose to return. LINQ even allows us to return an anonymous type
(a new type inferred from the expression, with whatever properties we specify). Using the Select extension
method in LINQ, we can transform our code to look like this:
Imports System.Runtime.CompilerServices

Module Module1
    Sub Main()
        Dim alphabet As String() = New String() {"A", "B", "C", "D", "E", "F", "G", "H", "I", "J", _
                                                 "K", "L", "M", "N", "O", "P", "Q", "R", "S", "T", _
                                                 "U", "V", "W", "X", "Y", "Z"}
        Dim names As String() = New String() {"Adam", "Dave", "John", "Alex", "Daryll", "Jacob", _
                                              "Christopher", "Bill", "Ronald", "Jeff"}
        Dim rolodex = From list In (From letter In alphabet _
                                    Select Letter = letter, _
                                           Entries = (From name In names _
                                                      Where name.StartsWith(letter))) _
                      Where list.Entries.Count() > 0

        For Each page In rolodex
            page.Entries.ToConsole("Names beginning with '" & page.Letter & "'")
        Next
        'Pause for the user
        Console.Read()
    End Sub

    <Extension()> _
    Private Sub ToConsole(ByVal items As IEnumerable(Of String), _
                          ByVal title As String)
        If items IsNot Nothing Then
            Console.WriteLine(title)
            For Each item In items
                Console.WriteLine(item)
            Next
            Console.WriteLine("")
        End If
    End Sub
End Module
This creates a list of some anonymous type where the Entries property of the anonymous type has more
than zero items in it. We then take this list, apply the ToConsole extension method to the Entries property
of each item, and pass in a title built from the Letter property.
With fewer lines, this code is easier to read and more durable, as there are not as many lines of code to
break. As you can see, we have also made use of VB's ability to infer the type of a variable from the
assignment expression. Our rolodex variable is strongly typed, but since we used an anonymous type, there
is no way to declare the type in a typical Dim statement. By allowing VB to infer the type, we can sidestep
this requirement and still have the Letter and Entries properties appear in the IntelliSense list (along with
all of the advantages of compile-time type checking).
With the introduction of WPF, Microsoft began using DIUs (Device Independent Units). A DIU (also known as
a Device Independent Pixel, or DIP) is a measurement based on inches rather than hardware-specific pixels.
A DIU is defined as 1/96 of an inch (smaller than the point, which is defined as 1/72 of an inch). For a
standard 96 pixels-per-inch monitor, 96 DIU = 96 pixels. For a monitor with 120 pixels/inch, 96 DIU = 120
pixels. For a monitor with 60 pixels/inch, 96 DIU = 60 pixels. When a measurement doesn't work out to a
perfectly round number of pixels (as is often the case), WPF will automatically use anti-aliasing, or you have
the option of "snapping" to the nearest pixel if you don't want "fuzzy" outlines for your buttons, etc.
One kink is that all measurements, including text, are now measured in DIU. In Windows Forms, depending
on which control you were using, font size was often measured in points (a point is 1/72 of an inch and is
the measurement used in Word and other applications). So now when you specify font size 12, you are
actually specifying a 9-point font (which is probably smaller than you intended).
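The arithmetic behind these conversions is a pair of simple ratios; a minimal sketch:

```vbnet
Module DiuConversions
    ' 1 DIU = 1/96 inch; 1 point = 1/72 inch
    Function InchesToDiu(ByVal inches As Double) As Double
        Return inches * 96.0
    End Function

    Function PointsToDiu(ByVal points As Double) As Double
        Return points * 96.0 / 72.0  ' e.g. 9 points -> 12 DIU
    End Function
End Module
```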
To help with this transition, I've put together a quick chart to convert inches and points into DIU. I hope this
helps:
Units of Measurement

Inches    Points    DIU
1/96      3/4       1
1/72      1         1 1/3
1/48      1 1/2     2
1/32      2 1/4     3
1/24      3         4
1/16      4 1/2     6
1/12      6         8
1/8       9         12
5/36      10        13 1/3
1/6       12        16
3/16      13 1/2    18
1/4       18        24
1/3       24        32
1/2       36        48
3/4       54        72
1         72        96
1 1/2     108       144
2         144       192
3         216       288
4         288       384
5         360       480
8         576       768
10        720       960
11 1/2    828       1104
12        864       1152
Beginner | License: The Code Project Open License (CPOL) | VB (VB10), .NET (.NET3.5), LINQ
License
This article, along with any associated source code and files, is licensed under The Code Project Open License
(CPOL)
Cyborgx37
Location: United States
Member
How to serialize a list of LINQ entities (e.g., from DBML)
Today I was trying to save a list of entities that were generated by DBML. This code shows how you can
store your list in an XML string and put it in ViewState; after that, you get your list back.
Using the Code
You can use this code as you need. Here is class Serializator: public class...
Tags: .NET3.5, C#, .NET, LINQ ...
This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
When using LINQ to SQL, it can be very useful to see the SQL commands that are generated by your LINQ
expressions. Sometimes the results are surprising, and you might be able to improve performance by
tweaking the LINQ.
Tags: C#, SQL, .NET, LINQ
All you have to do is set the Log property of the DataContext object. For example:
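A minimal sketch of the technique (the NorthwindDataContext class and its Customers table are hypothetical names used for illustration, not part of the original tip; DataContext.Log is the real LINQ to SQL property):

```vbnet
' NorthwindDataContext and Customers are assumed names for illustration.
Using db As New NorthwindDataContext()
    db.Log = Console.Out  ' generated SQL statements are echoed to the console
    Dim londonCustomers = (From c In db.Customers _
                           Where c.City = "London").ToList()
End Using
```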
int ExecuteCommand(string command, params object[] parameters);
IEnumerable<TResult> ExecuteQuery<TResult>(string query, params object[] parameters);
IEnumerable ExecuteQuery(Type elementType, string query, params object[] parameters);
DbCommand GetCommand(IQueryable query);
ITable GetTable(Type type);
MetaModel Mapping { get; }
void SubmitChanges();
IEnumerable<TResult> Translate<TResult>(DbDataReader reader);
IEnumerable Translate(Type elementType, DbDataReader reader);
You'll notice that the GetTable<T> method is missing from the list. This is because this method cannot be
implemented by other classes, since there is no direct way to construct a System.Data.Linq.Table<>.
Microsoft did expose an ITable interface which contains the basic methods required by the table; however,
it isn't IEnumerable<T>, and cannot be used to write LINQ queries against. So, to get the functionality of
both ITable and IEnumerable<T> into the IDataContext with one method, I've created another method:
To keep things simple, the data structure that will be used will consist of Members, Articles, and Comments.
In order to map the generated LINQ to SQL DataContext to the IDataContext , we need to make a partial
class for the generated class and ensure it implements IDataContext . Since the generated DataContext
doesn't contain a definition for GetITable<T> , we also have to define this:
The EnumerableTable class is literally just a wrapper class to expose both the ITable and
IEnumerable<T> . This is essentially how we get around not being able to instantiate a LINQ Table<> object.
Each entity model definition is created by creating an interface that simply defines the properties of the model,
and then having the interface inherit from IBaseEntity which exposes the dependency on the
IDataContext and the basic methods that should be included in the entity such as Save and Delete . Each
LINQ to SQL entity then needs to implement the model by creating a partial class for it. Then, in order for
Dependency Injection to work, a constructor method is created for the LINQ to SQL classes that has an
IDataContext object as a parameter (when Unity constructs an object, it looks for the constructor with the
most parameters, and since this new constructor has more parameters than the default generated one, Unity
knows that the IDataContext is a dependency).
In this project, IArticle , IMember , and IComment will be manually created, and partial classes for
Article , Member , and Comment will need to be created to ensure that the LINQ to SQL classes implement
these interfaces.
A service is set up to expose methods to interact with the data for each table. A service interface needs to
be created to define any data function that should take place. It then needs to inherit from IBaseService<>,
which exposes the dependency on the IEntityServiceFactory (which in turn has a reference to the
IDataContext and all other data services). Once the interface is set up, the actual service class is created.
This class inherits from BaseService<>, which already defines the basic properties and methods required.
For Dependency Injection to work, a constructor for each service is created that has an
IEntityServiceFactory object as its parameter.
A class diagram of how the data services are setup:
The configuration section for Unity defines IoC containers. Each container maps interfaces to real objects, and in
each mapping, we can define the lifetime of the object that Dependency Injection creates. Since we only want
one DataContext created for the LINQ to SQL container, we can define it as a singleton. This maps the
IDataContext to a singleton of the LINQ to SQL generated object.
Now we need to map our data model interfaces to real objects; in this case, the generated LINQ to SQL classes.
And finally, we need to setup the data services. The syntax is really hard to look at, but this is how Microsoft
made the string syntax for defining generic types. What this is actually doing is mapping IBaseService<T> to
a real service. For example, the first mapping is mapping IBaseService<Member> to MemberService .
Now, we need a way to get Dependency Injection to build all of the objects for us. With the above configuration
in place, the IoC container will give us a LinqUnity.Linq.DataContext object when an IDataContext is
requested, a LinqUnity.Linq.Article object when an IArticle is requested, and so on. To get this to
work, the EntityServiceFactory class has been created which has some methods to get Unity to create
these objects for us:
This mapping could be defined in the XML as well, but then another custom class would need to be created to
create the Unity objects, etc... I wanted to keep the EntityServiceFactory as the basic object to use for
the framework so that implementation of this framework didn't require any knowledge of Unity. The default
constructor for the EntityServiceFactory will load the container defined in the configuration file called
DataLayer. Alternatively, you can pass a different container name to the overloaded constructor method.
Each service is dependent on the IEntityServiceFactory because each service may need a reference to the
IDataContext and potentially the other data services.
Normally, with LINQ to SQL, we would write queries based on the table properties generated on the
DataContext , such as:
or:
This can't be done with this framework, since neither Articles nor GetTable<T> is a member of
IDataContext. Instead, we need to use the custom GetITable<T> method that has been created to expose
an IEnumerable<T> object to query:
With the above syntax, our data service methods might look something like this:
As stated in the beginning of this article, one of the downfalls of this is that we're not querying directly against
the System.Data.Linq.Table<T> , so we lose the additional extension methods available on the
System.Data.Linq.Table<T> object as compared to the IEnumerable<T> object.
The EntityServiceFactory includes the basic methods for creating services and entities with all of their
dependencies wired up; however, a nicer implementation would be to extend this class and expose properties for
accessing each of the data services. In this example, this class is called ServiceFactory , and is quite simple
with three properties: CommentService , ArticleService , and MemberService . Each call to one of these
properties will return a new service object created from Dependency Injection. In its most simple form, one of
the properties may look like:
Policy Injection
Policy Injection is a simple AOP-style framework found in Microsoft's Enterprise Library. In this example,
we'll use Policy Injection to get logging and caching happening at the method level simply by attributing the
methods you want logged or cached. To implement Policy Injection, we change the above properties code to:
Policy Injection requires that an object extend MarshalByRefObject, or that it implement an interface
containing the methods that will be used in Policy Injection. Since all of our classes are interfaced, this is
really easy to do.
To cache the output of a method, all you have to do is add the CachingCallHandler :
Now, the output of SelectAll() will be cached for 5 minutes. Logging is just as easy; however, it requires
some entries in the configuration file (see the source code and Microsoft's documentation for more details):
The above will create a log entry before the method is called with the passed in parameter values, and after the
method is called with the value of the returned object. The configuration section for the logging application block
will allow you to configure exactly what is logged and how it is formatted.
Though attributing is quite easy, you can configure Policy Injection in the configuration file as well to dynamically
change what is cached, logged, etc... without recompiling. However, the methods that are targeted still need to
exist inside of an object that is wrapped or created with Policy Injection.
All you have to do to use the data services is create a ServiceFactory and access the properties to call the
appropriate methods. This will create a new IMember :
Behind the scenes, this has created a new Member object, and also called the InsertOnSubmit method of its
corresponding member ITable . To save the changes to the DataContext , we can just call:
Calling factory.DataContext.SubmitChanges() would also do the same thing (but I think the above is nicer
to use :)). LINQ to SQL doesn't have a nice way (as far as I know) to run an update on one entity or table;
it simply updates all changes made, so the Save() method is really just a wrapper for
DataContext.SubmitChanges().
Since we've declared the IDataContext to be a singleton, this means that we don't have to worry about which
DataContext created which entity, since it will always be the same when it is resolved from the factory. This
allows us to create different entities from different services, link them together, save the changes to the
database, and not have to worry about any errors regarding mismatched DataContext s:
factory.DataContext.SubmitChanges();
As mentioned in the beginning of this article, I've created a test data context called XDataContext which stores
data in XML files instead of a database. I've defined a second container in the configuration file which is exactly
the same as the SQL container; however, the IDataContext is mapped to this XDataContext instead of the
LINQ to SQL DataContext . I didn't create custom entities since the LINQ to SQL entities are quite simple to
begin with and already take care of the entity relationships.
To use this other container, all we have to do is construct the EntityServiceFactory with the name of the
container.
The XDataContext manages identity seeding and incrementing as well as tracking additions and deletions.
Points of interest
The "nice to have" methods such as Delete() and Save() that now exist on these entities also come with a
catch. Using the EntityServiceFactory 's CreateEntity<T> method to create an entity automatically
wires up the entity's dependencies with the IDataContext so that Save() and Delete() can be called.
However, when these entities are returned from a data source, they don't have their dependencies set up. In
order to get this working, we have to use the BuildEntity<T> method of the EntityServiceFactory to
wire up the dependencies for each object. This probably comes with a bit of a performance overhead. For
example, the SelectAll() method:
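A sketch of what such a method might look like; the table, field, and factory names are assumptions, not the article's actual code:

```csharp
// Hypothetical sketch of the described SelectAll(): each row returned
// from the data store is passed through BuildEntity<T> so its
// IDataContext dependency gets wired up.
public IEnumerable<IMember> SelectAll()
{
    return DataContext.Members
        .ToList()
        .Select(m => _factory.BuildEntity<IMember>(m));
}
```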
calls BuildEntity for each item returned from the data store. Considering there may be hundreds or
thousands of rows, this may come at a cost. However, apart from the BuildEntity performance overhead,
there's negligible overhead as compared to running normal LINQ to SQL with a lot of iterations.
On another note, I've read in quite a few places that serializing LINQ to SQL entities to XML is not possible
without some trickery, so for this example, I've just implemented IXmlSerializable and custom serialized
these objects.
Conclusion
I thought this was quite a fun exercise to show off some really cool technologies. It was also quite interesting
trying to get around Microsoft's LINQ to SQL class structure to implement mock services without using some sort
of type mocking library.
References
Here's more info on the technologies used:
License
This article, along with any associated source code and files, is licensed under The Code Project Open License
(CPOL)
Shannon Deminick is the Technical Director of The Farm Digital, a Sydney-based digital services agency.
Occupation: Web Developer
Company: The Farm Digital
Location: Australia
The Product entity contains all the different scenarios we have come across when detaching entities.
Category is a child entity, Item is a child entity list, and we have configured Descn to delay (lazy) load.
To detach the Product entity, the first thing we need to do is create a Product partial class. To create the
partial class for the Product entity, right-click the Product entity in the LINQ to SQL designer and select "View
Code"; a partial class is created for the Product entity. We will add the following method to the Product
partial class:
partial class Product
{
    public override void Detach()
    {
        if (null == PropertyChanging)
            return;

        PropertyChanging = null;
        PropertyChanged = null;
    }
}
First, a check is made to verify that the entity is attached to a context. This might be considered a bit of a hack,
but LINQ to SQL entities participate in the DataContext 's change notification through their
PropertyChanging and PropertyChanged event handlers. The DataContext tracks objects using the
INotifyPropertyChanging and INotifyPropertyChanged interfaces, which means that the
PropertyChanging and PropertyChanged events manage the attachment to the DataContext .
Checking whether these events are being handled tells me whether the entity is attached to a
DataContext .
If the entity is not attached to a DataContext , no work needs to be done. This check also eliminates the
possibility of circular references causing stack overflow issues. If the entity is attached to a DataContext ,
the event handlers for the PropertyChanging and PropertyChanged events are removed.
Now that the event handlers have been removed, changes to the entity are no longer tracked. However,
we aren't done detaching the Product entity: we must also detach all of its child entities, child lists, and
delay-loaded properties. So, we will do that now.
To implement the detach for all the child entities, lists and the delay loaded properties, we must create an
abstract base class called LinqEntityBase from which all the entities inherit.
Since we will implement some base methods to take advantage of reuse that will need to use the Detach()
method from each entity, an abstract detach method will be needed in the LinqEntityBase class. We
have already seen the Detach() for the Petshop Product entity. Each of the other entities in the
Petshop.dbml will need a detach specific to that entity. So, Product will now inherit LinqEntityBase .
So, now let's look at what we need to do to detach a child entity. We will add the following method that
specifically detaches a child entity to our LinqEntityBase class.
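A sketch of such a helper, assuming the abstract Detach() described above; System.Data.Linq.EntityRef<T> is the real framework type, but this is an illustration of the pattern, not the exact PLINQO code:

```csharp
// Illustrative helper for detaching a child entity (EntityRef<T>).
protected static EntityRef<T> Detach<T>(EntityRef<T> entityRef)
    where T : LinqEntityBase
{
    // Avoid triggering a lazy load: only touch the entity if it was
    // actually loaded or assigned.
    if (!entityRef.HasLoadedOrAssignedValue || entityRef.Entity == null)
        return new EntityRef<T>();

    entityRef.Entity.Detach();
    return new EntityRef<T>(entityRef.Entity);
}
```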
We must first determine if the entity has been loaded. The trick here is to not trigger any loading of entities. The
HasLoadedOrAssignedValue method tells us whether the entity has been loaded or not and we can avoid
any lazy loading of entities. Once we determine the entity has been loaded, the entity is detached and returned
as the target of a new EntityRef instance. If the entity has not been loaded, the property is set to a new
empty instance of EntityRef .
This line calls the Detach implementation that is specific in this case to the Category entity. Again, each
entity requires its own Detach method specific to that entity. We have implemented a Detach() method on
each entity similar to the process we are using for the Product entity.
All child lists for an entity must be detached as well. The ItemList property on the Product entity is an
EntitySet . Each ItemList in the EntitySet must be detached and the following method is needed to
accomplish this.
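A sketch of that helper (an illustration of the described pattern; on System.Data.Linq.EntitySet<T> the loaded check is the HasLoadedOrAssignedValues property):

```csharp
// Illustrative helper: detach every item and copy it into a fresh
// EntitySet that has no DataContext attached.
protected static EntitySet<T> Detach<T>(EntitySet<T> entitySet)
    where T : LinqEntityBase
{
    var detached = new EntitySet<T>();
    if (!entitySet.HasLoadedOrAssignedValues)
        return detached;    // never loaded; avoid lazy loading it now

    foreach (T item in entitySet)
        item.Detach();
    detached.AddRange(entitySet);
    return detached;
}
```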
As we mentioned before, HasLoadedOrAssignedValues is used to determine if the list has been loaded,
which avoids lazy loading the list. Each item in the ItemList must be detached and copied to a new
EntitySet that is not attached to a DataContext .
Lastly, any delay-loaded properties need to be detached. By updating the DBML, I have configured the
Descn property of the Product entity to be delay loaded.
A delay-loaded property also holds a connection to the DataContext , so it must be detached using the
third and last Detach method we will need in the base class.
As has been the pattern, we check whether the object has been loaded; if not, we return a default instance;
otherwise, we return a new instance of Link with the value of the object as the target of the instance.
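A sketch of that third helper (System.Data.Linq.Link<T> is the framework type used for delay-loaded members; again, an illustration rather than the exact PLINQO code):

```csharp
// Illustrative helper for delay-loaded members (Link<T>).
protected static Link<T> Detach<T>(Link<T> link)
{
    // A default Link<T> has no value and no DataContext connection.
    if (!link.HasLoadedOrAssignedValue)
        return default(Link<T>);

    return new Link<T>(link.Value);
}
```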
Now that we have added the necessary base methods for detaching the child entities, child entity sets and delay
loaded properties, we can complete the Product Detach method by adding a Detach call for the Category ,
ItemList and Descn properties. Below is the complete Product Detach() method:
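A sketch of the completed method; the designer-generated backing-field names ( _Category , _ItemList , _Descn ) are assumptions:

```csharp
partial class Product
{
    public override void Detach()
    {
        if (null == PropertyChanging)
            return;

        PropertyChanging = null;
        PropertyChanged = null;

        // Backing-field names are assumed to match the designer output.
        _Category = Detach(_Category);   // child entity (EntityRef<Category>)
        _ItemList = Detach(_ItemList);   // child list (EntitySet<Item>)
        _Descn = Detach(_Descn);         // delay-loaded property (Link<string>)
    }
}
```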
Using Detach
One way to take advantage of detach and reattaching Linq to SQL entities is to use the repository pattern. We
have setup a simple OrderRepository to get and save Orders.
    }

    public static Order Save(Order order)
    {
        using (var context = new PetshopDataContext())
        {
            if (order.OrderId > 0)
                context.Orders.Attach(order, true);
            else
                context.Orders.InsertOnSubmit(order);

            context.SubmitChanges();
            order.Detach();
        }
        return order;
    }
}
}
As you can see, each of these methods uses its own DataContext . There is no need to pass one around as a
parameter or hold it in a module-level variable. The entities are completely disconnected, which means you are
free to use them anywhere you like without worrying about the DataContext . The ability to detach makes
the repository pattern possible with LINQ to SQL. The code below interacts with the repository and is not
concerned with DataContext s or maintaining a connection to the database.
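A hypothetical consumer might look like the following (the Get method's existence on the repository is assumed from the "get and save Orders" description):

```csharp
// Consumer code: no DataContext anywhere in sight.
Order order = OrderRepository.Get(orderId);
// ...modify the detached order freely, pass it between layers...
OrderRepository.Save(order);
```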
Conclusion
As you can see, it is a bit of work to detach an entity from a datacontext . Also, as the object graph gets
more complicated, it can be tricky to ensure that the entity is completely detached. The PLINQO detach takes all
precautions when detaching from the datacontext and ensures an entity can be used in a disconnected
manner. The ability to use LINQ to SQL entities disconnected from the datacontext opens up many opportunities
for encapsulation and reuse.
PLINQO generates detach methods similar to the one we just built for the Product entity for each entity.
PLINQO figures out all the necessary child objects, lists and delay loaded properties that need to be detached
and makes sure the proper detach methods for those entities are executed when detaching an entity from the
datacontext . This means you do not have to worry about what is needed to detach your entities. PLINQO has
already taken care of it.
So, how do you detach entities when using PLINQO? Call the Detach method. DONE!
History
License
This article, along with any associated source code and files, is licensed under The Code Project Open License
(CPOL)
Eric J. Smith
Occupation: Architect
Company: CodeSmith Tools, LLC
Location: United States

Shannon Davidson
Occupation: Architect
Company: CodeSmith Tools
Location: United States
Copyright 2009 by Eric J. Smith, Shannon Davidson
Last Updated: 12 Jul 2009. Editor: Sean Ewington
Introduction
During my working experience, I had to process user error reports concerning one of our company's
products. These reports included call stack information intended to help us find the causes of errors.
We use an obfuscation tool on our production code, so the call stack information provided by an error report
requires some "hopping around" with an obfuscation map and manual text search. This "hopping" is not always
an easy thing to do: the obfuscation map is a huge XML file, more than 25 MB in size, and most text
editors do not cope well with that volume. Such editor limitations are understandable, since a typical
human-made file rarely exceeds 1 MB.
Our company uses the Dotfuscator tool; its Community Edition is shipped with Visual Studio. Obfuscation maps
have the same format in all editions of the obfuscator, so anyone can test this name resolving tool on their own code.
The obfuscation map is an XML file whose structure looks like this:
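The exact layout varies by edition; the fragment below only illustrates the elements this article relies on ( <type> , <name> , <newname> and the method/field entries), and the other element names are approximations:

```xml
<!-- Illustrative fragment; element names other than type/name/newname
     are approximations of the Dotfuscator map format. -->
<map>
  <type>
    <name>MyNamespace/MyClass</name>
    <newname>a</newname>
    <methodlist>
      <method>
        <name>DoWork</name>
        <signature>(System.String, int)</signature>
        <newname>b</newname>
      </method>
    </methodlist>
    <fieldlist>
      <field>
        <name>counter</name>
        <newname>c</newname>
      </field>
    </fieldlist>
  </type>
</map>
```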
You will notice that the obfuscated name is always placed in an optional <newname> element. If this element is
omitted, the object keeps its original name.
Next, we should define the user input. For example, we need to find a type with the obfuscated name "a".
Usually, we would search for the "<newname>a</newname>" string, but this finds all the types, methods, and
fields that have the obfuscated name "a" - several thousand results in a complex project. To reach our
search goal, we should analyze the parent element and check whether it is a <type> element.
Thus, the user usually supplies two parameters: the first is the obfuscation map file path, and the second
is an obfuscated name. There is also one more (implicit) parameter - the search result type
(type/method/field) - but we will try to infer it from the second parameter. Given the requirement of UI
simplicity, I think this is enough.
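The search starts by loading the map; a minimal sketch:

```csharp
// Load the obfuscation map and enumerate all <type> elements.
XElement map = XElement.Load(mapFilePath);
IEnumerable<XElement> types = map.Descendants("type");
```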
The main operation here is the map.Descendants("type") call, which returns all the <type> elements from
the XElement content.
The Descendants() method returns a flat collection of the descendant XML elements: child elements,
grandchild elements, and so on. So, if we write map.Descendants() , we will get an enumeration of all
XML elements in the map document. This method has an overload that filters the output collection by the
specified element name. I used this overload to filter out all elements except <type> .
Note: The filter name should be a fully qualified name; it means that if the filtered elements have a namespace,
the filter name must have it too.
Note: Keep in mind that Descendants uses deferred execution, meaning that the actual access to the
underlying XML is performed when you first access the Descendants result rather than when you call the
function.
map.Descendants("type") scans the whole XML tree for the specified element name; it is not the most
efficient solution, but it is the simplest. Direct element navigation, which avoids the full XML scan, would be
more productive. For example, we could use an expression such as:
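A direct-navigation sketch; the intermediate element name is an assumption about the map layout:

```csharp
// Navigate straight to the <type> elements instead of scanning the
// whole tree; "mapping" is an assumed intermediate element name.
var types = map.Element("mapping").Elements("type");
```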
Depending on the XML content, such an expression can give us a tenfold performance boost over the
Descendants call. But for this application, I prefer the simplicity of the Descendants function.
Now we have all the <type> elements, and need to find matches with the obfuscated name. I implement this
using a LINQ query:
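A sketch of such a query; this version assumes <newname> is present, and the optional-element case is discussed just below:

```csharp
var found =
    from type in map.Descendants("type")
    where type.Element("newname").Value == obfuscatedName
    select type;
```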
The types collection is filtered by matching each type's child <newname> element content against the passed
obfuscated name. This can also be done using the Where extension method with a lambda expression:
As stated before, the <newname> element is optional, so Element("newname") returns null when the type
is not obfuscated. To avoid a possible NullReferenceException, I've changed the LINQ query to the following:
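One way to write the null-safe version with let (a sketch):

```csharp
var found =
    from type in map.Descendants("type")
    // Fall back to <name> when the optional <newname> is missing.
    let name = type.Element("newname") ?? type.Element("name")
    where name.Value == obfuscatedName
    select type;
```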
This code will search types with obfuscated or original name matching obfuscatedName .
The let keyword introduces a new variable that holds the <newname> element, or the <name> element
when no <newname> element is present. Behind the scenes, this produces an anonymous type that pairs the
current <type> element with the <name> / <newname> element - something like this:
As we can see, there is a second Select call which, in conjunction with the anonymous type
projection, gives us some performance penalty, so I rewrote the query as follows:
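A sketch of the rewritten query, folding the fallback into the where clause so no extra projection is generated:

```csharp
var found =
    from type in map.Descendants("type")
    where (type.Element("newname") ?? type.Element("name")).Value == obfuscatedName
    select type;
```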
The next thing to do is to process complex type names. In XML, these names are separated by '/' instead of '.';
e.g., the name "MyClass.MyInternalClass" is represented by the string "MyClass/MyInternalClass". We just need
to replace "." with "/" in the obfuscatedName variable to allow a match:
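A one-line normalization suffices (sketch):

```csharp
// Complex (nested) type names use '/' in the map, so normalize the input.
obfuscatedName = obfuscatedName.Replace('.', '/');
```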
At last, we provide anonymous type projection that will help us to process the search results in C#:
After that, you can process the search result as you wish; for example, output it to the console:
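A sketch of the projection and console output together; the property names in the anonymous type are illustrative:

```csharp
var results =
    from type in map.Descendants("type")
    where (type.Element("newname") ?? type.Element("name")).Value == obfuscatedName
    select new
    {
        Name = type.Element("name").Value,
        NewName = (string)type.Element("newname"),  // null when not obfuscated
    };

foreach (var result in results)
    Console.WriteLine("{0} -> {1}", result.NewName ?? result.Name, result.Name);
```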
Summary
That's it: we have found the types matching the obfuscated name. In the next part, I will step deeper into LINQ
queries by providing field and method name resolving solutions.
Thanks for your time, and you are welcome to post any questions or suggestions.
Alexander Yegorov
Occupation: Technical Lead
Company: Devart (www.devart.com)
Location: Ukraine
Leveraging LINQ to XML: Querying an obfuscation map: Part 2
By Alexander Yegorov, posted 5 Oct 2009
A practical use of the LINQ to XML technology.
Introduction
This article describes a practical usage of LINQ to XML.
This is the second and last part of the "Leveraging LINQ to XML: Querying an obfuscation map" article cycle. It is
recommended to read the previous part first; it will give you a better understanding of the things described below.
In the previous part, we used LINQ to XML queries to search an obfuscation map for the original type name
given its obfuscated name. Now, I want to show how we can use more advanced LINQ queries for more
complex search tasks, such as searching for an original type member name.
Task definition
As previously defined, the user interface should be as simple as possible, so I decided to limit the UI to two
editors: the first provides the obfuscation map file name, and the other takes the user's search request.
Let's take a look at three possible user inputs used as search criteria:
1. a - this is, definitely, a search request for the obfuscated type "a".
2. a.b - this is less clear: it could be a complex type name "a.b", or a field named "b" of type "a". We
can't distinguish these cases, so we will search for both - a type and a field.
3. a.b(System.String, int) - this is, definitely, a method search request with the signature (string, int).
Note: The complete method signature contains a result type, but since .NET doesn't support overloads that
differ only by result type, we don't need it.
The member search task can be divided into two steps:
1. Type search.
2. Member search based on the first step's results.
We already have code to complete the first step; to use it, we just need to provide the obfuscated type name.
Thus, a list of more detailed steps is:
This code detects the signature presence by searching for the "(" symbol and replaces all "." with "/" inside the
signature using the indexed Select extension method.
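A sketch of that normalization, assuming the signature is everything from the "(" onward (variable names are illustrative):

```csharp
// Replace '.' with '/' only inside the signature part, using the
// indexed Select overload.
int sigStart = obfuscatedName.IndexOf('(');
obfuscatedName = new string(obfuscatedName
    .Select((c, i) => (c == '.' && sigStart >= 0 && i > sigStart) ? '/' : c)
    .ToArray());
```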
Now, we can split the obfuscated name apart simply by calling obfuscatedName.Split('.') .
The last substring can be the method signature or the field name.
The method signature requires some additional parsing to detect the argument types, plus an equality routine
to compare signatures. I implemented this logic in the Signature class. For code unity, I also
represent a field search request with the Signature class, so any type member search request is a
Signature instance. Signature has three main members:
bool IsMethod - indicates whether this is a method signature.
string MemberName - the name of the type member.
bool SygEquals(string sig) - checks whether the passed signature string matches this instance.
I will not provide the Signature class implementation details, because they are quite trivial; you can get
them from the code sample at the start of the article.
From the split name, we build two candidate types:
The first is a complete type that covers the whole search string (e.g., for the input "a.b", the type name is "a.b").
The other is an incomplete type that covers the input string except the last name element
(e.g., for the input "a.b", the type name is "a"). This incomplete type is required for the member search we
defined earlier.
Note: One more thing I want to show is how to declare an anonymous type array before you fill it with
data - for example, when you want to declare an anonymous type array, fill it with data depending on some
conditions, and then process it. In such a case, I use the following pattern:
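A sketch of the pattern; the element shape matches the complete/incomplete candidate types described here, and the variable names are illustrative:

```csharp
// Every branch produces the same anonymous type, so the array can be
// declared and filled conditionally without defining a named type.
var typeNames = hasMemberPart
    ? new[]
      {
          new { Name = completeTypeName, IsComplete = true  },
          new { Name = parentTypeName,   IsComplete = false },
      }
    : new[]
      {
          new { Name = completeTypeName, IsComplete = true },
      };
```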
Such an approach allows you not to define named types when you don't need them. I use this pattern to define
an array of types to search with a flag that indicates if this type is a complete one.
Now we need to rewrite the original type search query to use the typeNames array.
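One possible shape of that query, reusing the let pattern from Part 1 (a sketch):

```csharp
// Join the <type> elements against the candidate names; keep the
// completeness flag alongside each match.
var types =
    (from type in map.Descendants("type")
     let name = (type.Element("newname") ?? type.Element("name")).Value
     join candidate in typeNames on name equals candidate.Name
     select new { Type = type, candidate.IsComplete })
    .ToArray();
```

All the found type elements can then be pulled out with an expression such as `types.Select(t => t.Type)`.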
The join operation used here gives me easy filtering against the typeNames array. An anonymous
type projection is produced for the query result; it contains the found type element and the flag
indicating type completeness. To get all the found types, we can use a simple expression:
Note: LINQ queries use deferred execution; the execution of a query is deferred until the
moment you access the data and, moreover, the query is re-executed each time you access it. The following
code demonstrates this behavior:
var xml = new XElement("root",
    new XElement("child1"),
    new XElement("child2"));
var qry = xml.Elements();

foreach (var q in qry)
    Console.Write(q.Name + " ");
// Prints: "child1 child2".
Console.WriteLine();

// Add a new child element to show the "each time" execution.
xml.Add(new XElement("child3"));

foreach (var q in qry)
    Console.Write(q.Name + " ");
// Prints: "child1 child2 child3".
To avoid redundant executions, you can cache the query result by using the .ToArray() or .ToList()
extension methods. I intend to use the result of the types query both in the complete type search and the type
member search, so in the worst case, the types query will be executed twice. To avoid this, I cache the query
result using the .ToArray() function.
Now that we have found the types, we can proceed to the type member search. The straightforward approach is
to use a nested select :
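A sketch of the nested query; the signature variable and the way the signature text is read from the <method> element are assumptions:

```csharp
// Nested "from" = nested select: for each incomplete type, pick the
// methods whose signature matches the request.
var members =
    from t in types
    where !t.IsComplete
    from method in t.Type.Descendants("method")
    where signature.SygEquals(method.Value)   // signature extraction assumed
    select method;
```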
Here, we select methods from incomplete types and filter them by matching the Signature objects. This query
will be compiled to something like this:
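Roughly, the compiler turns the nested from into a SelectMany call; an approximation:

```csharp
// Approximate compiler translation of the nested query above.
var members = types
    .Where(t => !t.IsComplete)
    .SelectMany(t => t.Type.Descendants("method"),
                (t, method) => new { t, method })
    .Where(x => signature.SygEquals(x.method.Value))
    .Select(x => x.method);
```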
Here, we can see the SelectMany call with an anonymous type projection. In the general case, each type
element from the types array contains a collection of methods, so it looks like a two-dimensional collection that
consists of types where each type holds a methods collection. The SelectMany call flattens this two-dimension
collection to one-dimension, and then we filter it.
This is the general case; in our case, however, we will have a collection of types each holding at most one
matching method (because of our search filter). So, we have a redundant SelectMany call and an anonymous
type projection that will impact performance. We can't remove them using pure LINQ query syntax, but we can
combine LINQ with extension methods and lambda expressions to achieve the desired result:
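A sketch of the combined form:

```csharp
// SingleOrDefault replaces the SelectMany + projection; the final Where
// drops the nulls so an empty result stays an empty enumeration.
var members =
    (from t in types
     where !t.IsComplete
     select t.Type.Descendants("method")
         .SingleOrDefault(m => signature.SygEquals(m.Value)))
    .Where(m => m != null);
```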
Here, I have combined a LINQ select with the SingleOrDefault extension method, which lets me
remove the SelectMany call and the anonymous type projection. The final Where call filters the
default values out of the result, giving us an empty enumeration when nothing is found. Here is what it
compiles to:
At the end, we can project the members query result to an anonymous type that helps us process it
later.
There is no difference between the method and the field search thanks to the unified Signature class, so I
generalized this approach for both. Now, you can process the found members as you wish; for example,
output them to the console:
The complete solution can be downloaded from the link at the top of the article.
Summary
That's all. There are still many things to do: for example, the application could process a whole call stack and
retrieve its de-obfuscated version, and it would be handy to have some external API, e.g., a command line, but
this is out of the article's scope.
I should also mention some issues I faced during development. The first is that it is hard to decompose the
code, because anonymous types can't be used as method parameters. The other is that LINQ query debugging
is quite difficult (though thanks to LINQPad, not as hard as it could be). The rest are not so noticeable, so I
don't think they are worth mentioning here.
This article describes my first practical experience with the LINQ to XML technology. I hope you enjoyed
reading, and that the article's material brings you some new and useful experience that you can apply in your
own practice.
Alexander Yegorov
Occupation: Technical Lead
Company: Devart (www.devart.com)
Location: Ukraine
License
This article, along with any associated source code and files, is licensed under The Common Public License
Version 1.0 (CPL)
I am a student of The Holy Quranic Sciences Institute. There, we study the Holy
Quranic sciences and Islamic legislation.
http://WithDotNet.WordPress.com
Arabic technical content for developers
http://Islamtecture.WordPress.com
Arabic. Talks about Islam and Islamic legislation
My resume:
http://www.box.net/elsheimy-resume
My web application has other roles not listed in that image. This is why there are some permissions that none of
the listed roles are approved for, such as "Allow Access To Site/User/Role Maintenance". For this page, I wanted
to omit any rows from the display where none of the roles was approved for the permission. After all, these
permissions were irrelevant for these roles; displaying them would only confuse my users and invite support
questions.
The data is being displayed in an ASP.NET GridView that has been bound to a DataSet. To omit the rows, I
decided to filter the DataSet prior to assigning the GridView's DataSource property and calling its DataBind
method. Notice that the first column of the GridView contains a string describing the permission, and each
subsequent column contains a bool specifying whether that role is approved for the permission.
One thing not obvious from the image is that the number of roles is driven by the database and is not static. If
I add another role to my application, another column will appear here, and I wouldn't want to have to modify
this code to accommodate the additional role.
Below is my code to call the method that filters the DataSet. GetMyRoleData is just a dummy method to acquire
the needed data, and I do not show it in this example. I then call the FilterRows method, which deletes all the
rows from the DataSet for which no role has approval for that row's permission. I then set the GridView's
DataSource property and call the DataBind method. Nothing of note so far.
But instead of enumerating through each column and checking its value, LINQ can make the code much
simpler. Why not just use the Any operator? If any of the elements in the row's array of items is true, then I
don't want to delete the row. Alternatively, I could use the All operator and make sure they are all set to false,
but this would be less efficient because it requires every item to be checked; the Any operator can stop as soon
as an item is true.
There is one last complication: the first column of each DataRow is a string for the permission description, not
a bool for a role. This can easily be overcome with the OfType operator. Just call OfType on the row's array of
items, ItemArray, and specify that only elements of type bool should be returned. It's just that simple. Here is
the code.
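A minimal sketch of such a FilterRows method, assuming it receives the DataTable to filter:

```csharp
private static void FilterRows(DataTable table)
{
    foreach (DataRow row in table.Rows)
    {
        // OfType<bool>() skips the string description in the first column;
        // keep the row only if at least one role (bool column) is approved.
        if (!row.ItemArray.OfType<bool>().Any(approved => approved))
            row.Delete();
    }
}
```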
So, I enumerate through each DataRow in the DataTable's Rows collection. I then call the OfType operator on
the row's ItemArray to return only those items that are bools. Last, I call the Any operator with a
lambda expression returning true if any of the items is set to true. Very simple. I then call the DataRow's Delete
method if I got false back from the LINQ query. Just like that, my permissions have been filtered. Here is one
last image showing the GridView after calling the FilterRows method:
As you can see, LINQ can make many chores that much simpler.
// Note: the class declaration and the first half of Equals(TSource, TSource)
// were missing from the extracted text; they are reconstructed here from the
// surrounding code.
public class SelectorEqualityComparer<TSource, Tkey> : EqualityComparer<TSource>
{
    private readonly Func<TSource, Tkey> selector;

    public SelectorEqualityComparer(Func<TSource, Tkey> selector)
    {
        this.selector = selector;
    }

    public override bool Equals(TSource x, TSource y)
    {
        Tkey xKey = this.GetKey(x);
        Tkey yKey = this.GetKey(y);

        if (xKey != null)
        {
            return ((yKey != null) && xKey.Equals(yKey));
        }
        return (yKey == null);
    }

    public override int GetHashCode(TSource obj)
    {
        Tkey key = this.GetKey(obj);
        return (key == null) ? 0 : key.GetHashCode();
    }

    public override bool Equals(object obj)
    {
        SelectorEqualityComparer<TSource, Tkey> comparer =
            obj as SelectorEqualityComparer<TSource, Tkey>;
        return (comparer != null);
    }

    public override int GetHashCode()
    {
        return base.GetType().Name.GetHashCode();
    }

    private Tkey GetKey(TSource obj)
    {
        return (obj == null) ? (Tkey)(object)null : this.selector(obj);
    }
}
Now I can write code like this:

.Distinct(new SelectorEqualityComparer<Source, Key>(x => x.Field))
And, for improved readability, conciseness, expressiveness, and support for anonymous types, there is the
corresponding Distinct extension method:
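A sketch of such an extension method, the selector-based overload the text describes:

```csharp
public static IEnumerable<TSource> Distinct<TSource, TKey>(
    this IEnumerable<TSource> source,
    Func<TSource, TKey> selector)
{
    // Type inference supplies TSource and TKey at the call site,
    // which is why this also works with anonymous types.
    return source.Distinct(
        new SelectorEqualityComparer<TSource, TKey>(selector));
}
```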
Looks a lot better, doesn’t it? And it works with anonymous types.
Update: I had accidentally published the wrong version of the IEqualityComparer<T>.Equals method.
About the Author
Paulo Morgado
Occupation: Software Developer (Senior)
Company: Paulo Morgado
Location: Portugal
For the sample class and interface, the usage would be something like this:
UPDATED: The previous implementation was overcomplicated and had some string-based logic. Kudos to Nuno.
Inside the Code
We basically have two models we want to synchronize: the LINQ to SQL model and the database model. So let's start off by loading the LINQ to SQL model:
var asm = Assembly.LoadFrom(Options.AssemblyFile);       // load the assembly file
var type = asm.GetType(Options.TypeName, true);          // find the DataContext class using reflection
var model = new AttributeMappingSource().GetModel(type); // load the LINQ to SQL mapping model from the specified type
Now, we'll load the database model using SMO and the connection string we have.
var sb = new SqlConnectionStringBuilder(ConnectionString); // parse the connection string
var server = new Server(sb.DataSource);                    // connect to the database server
var db = server.Databases[sb.InitialCatalog];              // get the database
Now we have all the data we need, and it's simply a matter of iterating the LINQ to SQL model and locating the corresponding database objects. If they don't exist, we simply create them; otherwise, we verify their definition.
Prerequisites
This tool uses SQL Server Management Objects (SMO). The latest version can be found on the Microsoft SQL Server 2008 Feature Pack download page; here are the direct links:
SQLSysClrTypes.msi
SharedManagementObjects.msi
Usage
This tool is used like any standard command line tool, with the following syntax:
Example
This statement will synchronize (create and update) the MyDb database in the local machine's SqlExpress
instance using the MyApp.MyDataContext class located in the MyApp.exe assembly.
Recommendations
Specify both the /autocreate and /autoupdate options for maximum automation.
Create a batch file that executes this tool, and include it in your project.
In the early stages of development, run this batch file as a post-build step in your project file.
The sources include a sample project with a LINQ to SQL model of the well known Northwind database. To run
the sample:
Modify the LINQ to SQL model by editing the MyNorthwind.dbml file in Visual Studio; you can add a column, add a table, change the data type of a column, allow nulls on a column, etc.
Build the Samples project to reflect your changes.
Run the SyncMyNorthwindDb.bat batch file to synchronize your database.
Points of Interest
I think that this tool answers a basic need when using the LINQ to SQL Framework. We use it extensively at
CodeRun in both development and deployment. In this way, modifying the database is as simple as adding a
property.
License
This article, along with any associated source code and files, is licensed under The Microsoft Public License (Ms-
PL)
Introduction
Implementing parent-child hierarchies (for example, a Sale object and the SaleDetails associated with it) is one of the most common scenarios encountered when modeling the entities in a business domain. If you implement the business objects using classes, a collection of child objects will typically be stored in a List<T>. However, List<T> will prove anemic should you require a rich user interface built on top of the .NET Framework's support for data binding.

The typical solution is to wrap the List<T> in a BindingSource in order to take advantage of its design-time support for data binding. That road will only take you so far, as a critical feature will be absent: support for sorting.

This article seeks to remedy that by providing a custom implementation of BindingList<T> that automatically provides the methods required to support sorting on every property defined in type T.
Implementation Objectives
Support sorting on all properties by instantiating an instance of the custom implementation of BindingList<T>. E.g., write:

MySortableBindingList<SaleDetail> sortableSaleDetails =
    new MySortableBindingList<SaleDetail>();

and get the sorting functionality.
Motivating Example
To illustrate this approach, we shall model two classes, Sale and SaleDetail, as follows:
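The class definitions are missing from this copy of the article. A plausible reconstruction, inferred from the sample data further down (the property types are assumptions), is:

```csharp
// Reconstructed from the sample data below; types are assumed.
public class Sale
{
    public string Client { get; set; }
    public DateTime SaleDate { get; set; }
    public string Salesman { get; set; }
    public MySortableBindingList<SaleDetail> SaleDetails { get; set; }
}

public class SaleDetail
{
    public string Product { get; set; }
    public int Quantity { get; set; }
    public decimal UnitPrice { get; set; }
}
```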
The above classes are just simple enough to illustrate the main concepts behind the article; validation, persistence, error handling, etc., are beyond its scope.
Subclassing BindingList<T>
First, the code:
originalList = list;
populateBaseList(this, originalList);
}
protected override void ApplySortCore(PropertyDescriptor prop,
ListSortDirection direction) {
/*
Look for an appropriate sort method in the cache; if not found,
call CreateOrderByMethod to create one.
Apply it to the original list.
Notify any bound controls that the sort has been applied.
*/
sortProperty = prop;
var orderByMethodName = sortDirection ==
ListSortDirection.Ascending ? "OrderBy" : "OrderByDescending";
var cacheKey = typeof(T).GUID + prop.Name + orderByMethodName;
if (!cachedOrderByExpressions.ContainsKey(cacheKey)) {
CreateOrderByMethod(prop, orderByMethodName, cacheKey);
}
ResetItems(cachedOrderByExpressions[cacheKey](originalList).ToList());
ResetBindings();
sortDirection = sortDirection == ListSortDirection.Ascending ?
ListSortDirection.Descending : ListSortDirection.Ascending;
}
In a Nutshell
If, for instance, you create a MySortableBindingList<Sale> and sort on the Customer property, an
expression that conceptually looks something like Enumerable.OrderBy<Sale>(originalList, a =>
a.Customer) will be created and used to do the sorting.
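The CreateOrderByMethod implementation is not included in this copy. A sketch of how such a method can build and cache that expression follows; it assumes a cachedOrderByExpressions field of type Dictionary<string, Func<List<T>, IEnumerable<T>>> plus the usings System.Linq.Expressions and System.Reflection:

```csharp
private void CreateOrderByMethod(PropertyDescriptor prop,
    string orderByMethodName, string cacheKey)
{
    // Build the lambda a => a.<PropertyName> for the property being sorted.
    ParameterExpression sourceParameter = Expression.Parameter(typeof(List<T>), "source");
    ParameterExpression lambdaParameter = Expression.Parameter(typeof(T), "a");
    PropertyInfo propertyInfo = typeof(T).GetProperty(prop.Name);
    MemberExpression propertyAccess =
        Expression.MakeMemberAccess(lambdaParameter, propertyInfo);
    LambdaExpression keySelector = Expression.Lambda(propertyAccess, lambdaParameter);

    // Bind Enumerable.OrderBy / OrderByDescending to T and the property type;
    // the two-parameter overload is the one without a custom comparer.
    MethodInfo orderByMethod = typeof(Enumerable).GetMethods()
        .Single(m => m.Name == orderByMethodName && m.GetParameters().Length == 2)
        .MakeGenericMethod(typeof(T), propertyInfo.PropertyType);

    // Compile source => Enumerable.OrderBy(source, a => a.<PropertyName>) and cache it.
    var orderByExpression = Expression.Lambda<Func<List<T>, IEnumerable<T>>>(
        Expression.Call(orderByMethod, sourceParameter, keySelector), sourceParameter);
    cachedOrderByExpressions.Add(cacheKey, orderByExpression.Compile());
}
```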
The code to create the sample data and set up the data binding:
var sales = new List<Sale>() {
new Sale(){
Client = "Jahmani Mwaura",
SaleDate = new DateTime(2008,1,1),
Salesman = "Gachie",
SaleDetails = new MySortableBindingList<SaleDetail>(){
new SaleDetail(){
Product = "Sportsman",
Quantity = 1,
UnitPrice = 80
},
new SaleDetail(){
Product = "Tusker Malt",
Quantity = 2,
UnitPrice = 100
},
new SaleDetail(){
Product = "Alvaro",
Quantity = 1,
UnitPrice = 50
}
}
},
new Sale(){
Client = "Ben Kones",
SaleDate = new DateTime(2008,1,1),
Salesman = "Danny",
SaleDetails = new MySortableBindingList<SaleDetail>(){
new SaleDetail(){
Product = "Embassy Kings",
Quantity = 1,
UnitPrice = 80
},
new SaleDetail(){
Product = "Tusker",
Quantity = 5,
UnitPrice = 100
},
new SaleDetail(){
Product = "Novida",
Quantity = 3,
UnitPrice = 50
}
}
},
new Sale(){
Client = "Tim Kim",
SaleDate = new DateTime(2008,1,1),
Salesman = "Kiplagat",
SaleDetails = new MySortableBindingList<SaleDetail>(){
new SaleDetail(){
Product = "Citizen Special",
Quantity = 10,
UnitPrice = 30
},
new SaleDetail(){
Product = "Burn",
Quantity = 2,
UnitPrice = 100
}
}
}
};
saleBindingSource.DataSource = new MySortableBindingList<Sale>(sales);
}
Seeing it at work
You can download the samples at the top of the page and see it at work for yourself. I hope you enjoy.
Cheers!
History
December 2, 2008: Article posted.
1. In your project, add a reference to the LINQtoCSV.dll you generated during Installation.
2. The file will be read into an IEnumerable<T>, where T is a data class that you define. The data records read from the file will be stored in objects of this data class. You could define a data class along these lines:
using LINQtoCSV;
using System;
class Product
{
[CsvColumn(Name = "ProductName", FieldIndex = 1)]
public string Name { get; set; }
[CsvColumn(FieldIndex = 2, OutputFormat = "dd MMM HH:mm:ss")]
public DateTime LaunchDate { get; set; }
[CsvColumn(FieldIndex = 3, CanBeNull = false, OutputFormat = "C")]
public decimal Price { get; set; }
[CsvColumn(FieldIndex = 4)]
public string Country { get; set; }
[CsvColumn(FieldIndex = 5)]
public string Description { get; set; }
}
Although this example only uses properties, the library methods will recognize simple fields as well. Just
make sure your fields/properties are public.
The optional CsvColumn attribute allows you to specify whether a field/property is required, how it should
be written to an output file, etc. Full details are available here.
3. Import the LINQtoCSV namespace at the top of the source file where you'll be reading the file:
using LINQtoCSV;
4. Create a CsvFileDescription object, and initialize it with details about the file that you're going to
read. It will look like this:
CsvFileDescription inputFileDescription = new CsvFileDescription
{
SeparatorChar = ',',
FirstLineHasColumnNames = true
};
This allows you to specify what character is used to separate data fields (comma, tab, etc.), whether the
first record in the file holds column names, and a lot more (full details).
5. Create a CsvContext object. It is this object that exposes the Read and Write methods you'll use to read and write files.
6. Read the file into an IEnumerable<T> using the CsvContext object's Read method, like this:
IEnumerable<Product> products =
cc.Read<Product>("products.csv", inputFileDescription);
This reads the file products.csv into the variable products , which is of type IEnumerable<Product> .
7. You can now access products via a LINQ query, a foreach loop, etc.:
var productsByName =
from p in products
orderby p.Name
select new { p.Name, p.LaunchDate, p.Price, p.Description };
// or ...
foreach (Product item in products) { .... }
To make it easier to get an overview, here is the code again that reads from a file, but now in one go:
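The combined listing did not survive in this copy; putting the pieces from the numbered steps together, it reads along these lines:

```csharp
using LINQtoCSV;

CsvFileDescription inputFileDescription = new CsvFileDescription
{
    SeparatorChar = ',',              // fields are comma separated
    FirstLineHasColumnNames = true    // first record holds column names
};
CsvContext cc = new CsvContext();

// Deferred: the file is only opened when the IEnumerable is enumerated.
IEnumerable<Product> products =
    cc.Read<Product>("products.csv", inputFileDescription);

var productsByName =
    from p in products
    orderby p.Name
    select new { p.Name, p.LaunchDate, p.Price, p.Description };
```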
You'll find this same code in the SampleCode project in the sources.
Writing to a file
The optional CsvColumn attribute allows you to specify such things as what date and number formats to
use when writing each data field. Details for all CsvColumn properties (CanBeNull , OutputFormat , etc.)
are available here.
Although this example only uses properties, you can also use simple fields.
The Write method will happily use an anonymous type for T , so you can write the output of a LINQ query
right to a file. In that case, you obviously won't define T yourself. Later on, you'll see an example of this.
3. Import the LINQtoCSV namespace at the top of the source file where you'll be writing the file:
using LINQtoCSV;
4. Make sure the data is stored in an object that implements IEnumerable<T> , such as a List<T> , or
the IEnumerable<T> returned by the Read method.
5. Create a CsvFileDescription object, and initialize it with details about the file you will be writing,
along these lines:
CsvFileDescription outputFileDescription = new CsvFileDescription
{
SeparatorChar = '\t', // tab delimited
FirstLineHasColumnNames = false, // no column names in first record
FileCultureName = "nl-NL" // use formats used in The Netherlands
};
6. Create a CsvContext object, as when reading.
7. Invoke the Write method exposed by the CsvContext object to write the contents of your IEnumerable<T> to a file:
cc.Write(
products2,
"products2.csv",
outputFileDescription);
This writes the Product objects in the variable products2 to the file "products2.csv".
Here is the code again that writes a file, but now in one go:
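Again, the combined listing is missing from this copy; assembling the steps above gives roughly:

```csharp
using LINQtoCSV;

CsvFileDescription outputFileDescription = new CsvFileDescription
{
    SeparatorChar = '\t',             // tab delimited
    FirstLineHasColumnNames = false,  // no column names in first record
    FileCultureName = "nl-NL"         // use formats used in The Netherlands
};
CsvContext cc = new CsvContext();

// products2 is any IEnumerable<Product>, e.g. a List<Product>.
cc.Write(products2, "products2.csv", outputFileDescription);
```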
If you have a LINQ query producing an IEnumerable of anonymous type, writing that IEnumerable to a file
is no problem:
Here, a LINQ query selects all products for "Netherlands" from the variable products , and returns an
IEnumerable holding objects of some anonymous type that has the fields Name , LaunchDate , Price , and
Description . The Write method then writes those objects to the file products-Netherlands.csv.
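The query itself is not shown in this copy; it would look something like this (reusing the cc and outputFileDescription objects from the earlier examples):

```csharp
var productsNetherlands =
    from p in products
    where p.Country == "Netherlands"
    select new { p.Name, p.LaunchDate, p.Price, p.Description };

// Write<T> happily infers the anonymous type for T.
cc.Write(productsNetherlands, "products-Netherlands.csv", outputFileDescription);
```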
CsvContext.Write Overloads
Write<T>(IEnumerable<T> values, string fileName)
Write<T>(IEnumerable<T> values, string fileName, CsvFileDescription
fileDescription)
Write<T>(IEnumerable<T> values, TextWriter stream)
Write<T>(IEnumerable<T> values, TextWriter stream, CsvFileDescription
fileDescription)
Some interesting facts about these overloads:
CsvContext.Read Overloads
Read<T>(string fileName)
Read<T>(string fileName, CsvFileDescription fileDescription)
Read<T>(StreamReader stream)
Read<T>(StreamReader stream, CsvFileDescription fileDescription)
Some interesting facts about these overloads:
Deferred Reading
Here is how the Read overloads implement deferred reading:
When you invoke the Read method (which returns an IEnumerable<T> ), no data is read yet. If using a
file, the file is not yet opened.
When the Enumerator is retrieved from the IEnumerable<T> (for example, when starting a foreach
loop), the file is opened for reading. If using a stream, the stream is rewound (seek to start of the
stream).
Each time you retrieve a new object from the Enumerator (for example, while looping through a
foreach ), a new record is read from the file or stream.
When you close the Enumerator (for example, when a foreach ends or when you break out of it), the file
is closed. If using a stream, the stream is left unchanged.
If reading from a file, the file will be open for reading while you're accessing the IEnumerable<T> in a
foreach loop.
The file can be updated in between accesses. You could access the IEnumerable<T> in a foreach loop,
then update the file, then access the IEnumerable<T> again in a foreach loop to pick up the new
data, etc. You only need to call Read once at the beginning, to get the IEnumerable<T> .
CsvFileDescription
The Read and Write methods need some details about the file they are reading or writing, such as whether the
first record contains column names.
As shown in the Reading from a file and Writing to a file examples, you put those details in an object of type
CsvFileDescription , which you then pass to the Read or Write method. This prevents lengthy parameter
lists, and allows you to use the same details for multiple files.
SeparatorChar
QuoteAllFields
FirstLineHasColumnNames
EnforceCsvColumnAttribute
FileCultureName
TextEncoding
DetectEncodingFromByteOrderMarks
MaximumNbrExceptions
SeparatorChar
Type: char
Default: ','
Applies to: Reading and Writing
Example:
The character used to separate fields in the file. This would be a comma for CSV files, or a '\t' for a tab
delimited file.
You can use any character you like, except for white space characters or the double quote (").
QuoteAllFields
Type: bool
Default: false
Applies to: Writing only
Example:
When false , Write only puts quotes around data fields when needed, to avoid confusion - for example, when
the field contains the SeparatorChar or a line break.
FirstLineHasColumnNames
Type: bool
Default: true
Applies to: Reading and Writing
Example:
When reading a file, tells Read whether to interpret the data fields in the first record in the file as column
headers.
When writing a file, tells Write whether to write column headers as the first record of the file.
EnforceCsvColumnAttribute
Type: bool
Default: false
Applies to: Reading and Writing
Example:
When true , Read only reads data fields into public fields and properties with the [CsvColumn] attribute,
ignoring all other fields and properties. And, Write only writes the contents of public fields and properties with
the [CsvColumn] attribute.
FileCultureName
Type: string
Default: current system setting
Applies to: Reading and Writing
Example:
Different cultures use different ways to write dates and numbers. 23 May 2008 is 5/23/2008 in the United States
(en-US) and 23/5/2008 in Germany (de-DE). Use the FileCultureName field to tell Read how to interpret the
dates and numbers it reads from the file, and to tell Write how to write dates and numbers to the file.
By default, the library uses the current language/country setting on your system. So, if your system uses
French-Canadian (fr-CA), the library uses that culture unless you override it with FileCultureName .
The library uses the same culture names as the .NET "CultureInfo " class (full list of names).
TextEncoding
Type: Encoding
Default: Encoding.UTF8
Applies to: Reading and Writing
Example:
If the files that you read or write are in English, there is no need to set TextEncoding .
However, if you use languages other than English, the way the characters in your files are encoded may be an
issue. You will want to make sure that the encoding used by the library matches the encoding used by any other
programs (editors, spreadsheets) that access your files.
Specifically, if you write files with the Euro symbol, you may need to use Unicode encoding, as shown in the
example.
DetectEncodingFromByteOrderMarks
Type: bool
Default: true
Applies to: Reading only
Example:
Tells Read whether to detect the encoding of the input file by looking at the first three bytes of the file.
Otherwise, it uses the encoding given in the TextEncoding property.
MaximumNbrExceptions
Type: int
Default: 100
Applies to: Reading only
Example:
Sets the maximum number of exceptions that will be aggregated into an AggregatedException .
To not have any limit and read the entire file no matter how many exceptions you get, set MaximumNbrExceptions to -1.
For details about aggregated exceptions, see the error handling section.
CsvColumn Attribute
As shown in the Reading from a file and Writing to a file examples, you can decorate the public fields and
properties of your data class with the CsvColumn attribute to specify such things as the output format for date
and number fields.
Use of the CsvColumn attribute is optional. As long as the EnforceCsvColumnAttribute property of the
CsvFileDescription object you pass into Read or Write is false , those methods will look at all public
fields and properties in the data class. They will then simply use the defaults shown with each CsvColumn
property below.
Name
CanBeNull
NumberStyle
OutputFormat
FieldIndex
Name
Type: string
Default: Name of the field or property
Applies to: Reading and Writing
Example:
The Read and Write methods normally assume that the data fields in the file have the same names as the
corresponding fields or properties in the class. Use the Name property to specify another name for the data field.
CanBeNull
Type: bool
Default: true
Applies to: Reading only
Example:
[CsvColumn(CanBeNull = false)]
public DateTime LaunchDate { get; set; }
If false , and a record in the input file does not have a value for this field or property, then the Read method
generates a MissingRequiredFieldException exception.
FieldIndex
Type: int
Default: Int32.MaxValue
Applies to: Reading and Writing
Example:
This property is used for both reading and writing, but in slightly different ways.
Reading - The Read method needs to somehow associate data fields in the input file with field and properties in
the data class. If the file has column names in the first record, that's easy - Read simply matches the column
names with the names of the fields and properties in the data class.
However, if the file does not have column names in the first record, Read needs to look at the order of the data
fields in the data records to match them with the fields and properties in the data class. Unfortunately though,
the .NET framework does not provide a way to reliably retrieve that order from the class definition. So, you have
to specify which field/property comes before which field/property by giving the fields and properties a
CsvColumn attribute with the FieldIndex property.
The FieldIndex values do not have to start at 1, and they don't have to be consecutive. The Read and Write methods simply assume that a field/property comes before another field/property if its FieldIndex is lower.
Writing - The Write method uses the FieldIndex of each field or property to figure out in what order to
write the data fields to the output file. Field and properties without FieldIndex get written last, in random
order.
NumberStyle
Type: NumberStyles
Default: NumberStyles.Any
Applies to: Reading of numeric fields only
Example:
Allows you to determine what number styles are allowed in the input file (list of options).
By default, all styles are permitted, except for one special case. In order to accept hexadecimal numbers that do
not start with 0x, use NumberStyles.HexNumber , as shown in the example.
OutputFormat
Type: string
Default: "G"
Applies to: Writing only
Example:
Lets you set the output format of numbers and dates/times. The default "G" format works well for both dates
and numbers most of the time.
When writing a date/time or number field, the Write method first determines the type of the field (DateTime ,
decimal , double , etc.) and then calls the ToString method for that type, with the given OutputFormat .
So, in the example above, if LaunchDate is 23 November 2008, the field written to the file will be "23 Nov
08".
With many formats, the final result depends on the language/country of the file, as set in the
FileCultureName property of the CsvFileDescription object. So, if LaunchDate is 23 November 2008
and you specify the short date format:
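The attribute itself is missing from this copy; assuming the standard .NET short date pattern "d", it would be:

```csharp
// "d" is the culture-sensitive short date pattern.
[CsvColumn(OutputFormat = "d")]
public DateTime LaunchDate { get; set; }
```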
Then, the final value written to the output file will be "11/23/08" if you use US dates (FileCultureName is set
to "en-US"), but "23/11/08" if you use German dates (FileCultureName is set to "de-DE").
Error Handling
Exception
LINQtoCSVException
BadStreamException
CsvColumnAttributeRequiredException
DuplicateFieldIndexException
RequiredButMissingFieldIndexException
ToBeWrittenButMissingFieldIndexException
NameNotInTypeException
MissingCsvColumnAttributeException
TooManyDataFieldsException
TooManyNonCsvColumnDataFieldsException
MissingFieldIndexException
MissingRequiredFieldException
WrongDataFormatException
AggregatedException
When the Read and Write methods detect an error situation, they throw an exception with all information you
need to solve the problem. As you would expect, all exceptions are derived from the .NET class Exception .
In addition to such properties as StackTrace and Message , the Exception class exposes the Data
property. The Read and Write methods use that property to provide exception information in a way that is
easy for your code to read, while they provide error messages targeted at humans via the Message property.
The description for each exception (further below) shows what information is stored in the Data property.
Aggregating exceptions
When the Read method detects an error while reading data from a file, it does not throw an exception right
away, but stores it in a list of type List<Exception> . Then, after it has processed the file, it throws a single
exception of type AggregatedException , with the list of exceptions in its
Data["InnerExceptionsList"] property. This allows you to fix all problems with an input file in one go,
instead of one by one.
You can limit the number of exceptions that get aggregated this way by setting the MaximumNbrExceptions
property of the CsvFileDescription object that you pass to the Read method. By default,
MaximumNbrExceptions is set to 100. When the limit is reached, the AggregatedException is thrown
right away, with the list of exceptions aggregated so far.
Not all exceptions get aggregated! Before Read starts reading data from a file, it first processes column names,
CsvColumn attributes, etc. If something goes wrong during that preliminary stage, it throws an exception right
away.
Deferred reading
Keep in mind that due to deferred reading, you can get exceptions not only when you invoke the Read method,
but also when you access the IEnumerable<T> that is returned by the Read method.
Example
The following code reads a file and processes exceptions. To show how to use the Data property, it includes
some special processing for the DuplicateFieldIndexException - thrown when the Read and Write
methods detect two fields or properties with the same FieldIndex .
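The example listing is missing from this copy; a sketch of the described pattern (reusing the cc and inputFileDescription objects from the reading example) might be:

```csharp
try
{
    IEnumerable<Product> products =
        cc.Read<Product>("products.csv", inputFileDescription);

    // Deferred reading: row-level exceptions surface while enumerating.
    foreach (Product product in products)
    {
        Console.WriteLine(product.Name);
    }
}
catch (DuplicateFieldIndexException ex)
{
    // Thrown during the preliminary stage, before any data is read;
    // ex.Data carries the details of the offending FieldIndex.
    Console.WriteLine(ex.Message);
}
catch (AggregatedException ex)
{
    // One exception per bad input record, collected into a single list.
    var innerExceptionsList = (List<Exception>)ex.Data["InnerExceptionsList"];
    foreach (Exception inner in innerExceptionsList)
    {
        Console.WriteLine(inner.Message);
    }
}
```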
BadStreamException
Thrown when a stream passed to Read is either null or does not support Seek. The stream has to support Seek; otherwise, it cannot be rewound when the IEnumerable returned by Read is accessed.
CsvColumnAttributeRequiredException
This exception exposes the same properties as Exception .
Thrown when the CsvFileDescription object that has been passed to Read has both
FirstLineHasColumnNames and EnforceCsvColumnAttribute set to false .
If there are no column names in the file, then Read relies on the FieldIndex of each field or property in the
data class to match them with the data fields in the file. However, if EnforceCsvColumnAttribute is
false , that implies that fields or properties without the CsvColumn attribute can also be used to accept data,
while they do not have a FieldIndex .
DuplicateFieldIndexException
Additional Properties - This exception exposes the same properties as Exception , plus these additional
properties:
Thrown when two or more fields or properties have the same FieldIndex .
RequiredButMissingFieldIndexException
Additional Properties - This exception exposes the same properties as Exception , plus these additional
properties:
When there are no column names in the first record in the file (FirstLineHasColumnNames is false ), each
required field (CanBeNull attribute set to false ) must have a FieldIndex attribute, otherwise it cannot be
read from the file.
ToBeWrittenButMissingFieldIndexException
Additional Properties - This exception exposes the same properties as Exception , plus these additional
properties:
When writing a file without column names in the first record, you will want to make sure that the data fields
appear in each line in a well defined order. If that order were random, it would be impossible for some other
program to reliably process the file.
So, when the Write method is given a CsvFileDescription with FirstLineHasColumnNames as false ,
and it finds a field or property that doesn't have a FieldIndex , it throws a
ToBeWrittenButMissingFieldIndexException .
NameNotInTypeException
Additional Properties - This exception exposes the same properties as Exception , plus these additional
properties:
If the Read method is given a CsvFileDescription with FirstLineHasColumnNames as true , and one
of the column names in the first record in the file does not match a field or property, it throws a
NameNotInTypeException .
MissingCsvColumnAttributeException
Additional Properties - This exception exposes the same properties as Exception , plus these additional
properties:
The Read method may throw this exception when it is given a CsvFileDescription with both
FirstLineHasColumnNames and EnforceCsvColumnAttribute as true . When Read reads the column
names from the first record, one of those column names may match a field or property that doesn't have a
CsvColumn attribute, even though only fields and properties with a CsvColumn attribute can be used. When
that happens, Read throws a MissingCsvColumnAttributeException .
TooManyDataFieldsException
Additional Properties - This exception exposes the same properties as Exception , plus these additional
properties:
Data["LineNbr"] (int): Line in the input file with the excess data field
Data["FileName"] (string): Name of the input file
Thrown when a record in the input file has more data fields than there are public fields and properties in the
data class.
TooManyNonCsvColumnDataFieldsException
Additional Properties - This exception exposes the same properties as Exception , plus these additional
properties:
When only fields or properties that have a CsvColumn attribute are used (Read is given a
CsvFileDescription with EnforceCsvColumnAttribute as true ), and a record in the input file has
more data fields than there are fields and properties with the CsvColumn attribute, a
TooManyNonCsvColumnDataFieldsException is thrown.
MissingFieldIndexException
Additional Properties - This exception exposes the same properties as Exception , plus these additional
properties:
If there are no column names in the first record of the input file (Read is given a CsvFileDescription with
FirstLineHasColumnNames as false ), then Read relies on the FieldIndex of the fields and properties in
the data class to match them with the data fields in the file.
When a record in the input file has more data fields than there are fields and properties in the data class with a
FieldIndex , then a MissingFieldIndexException is thrown.
MissingRequiredFieldException
Additional Properties - This exception exposes the same properties as Exception , plus these additional
properties:
Thrown when a record from the input file does not have a value for a required field or property (CanBeNull
property of the CsvColumn attribute set to false ).
Empty strings and strings consisting of only white space need to be surrounded by quotes, so they are
recognized as something other than null .
These input lines both have the data fields "abc", null, and "def":
While this line has the data fields "abc", followed by the empty string, followed by "def":
and this line has the data fields "abc", followed by a string with three spaces, followed by "def":
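The example lines did not survive this extraction. As a hedged reconstruction (assuming standard CSV quoting, which is how the surrounding text describes the behavior):

```
abc,,def            ->  "abc", null, "def"
"abc",,"def"        ->  "abc", null, "def"
"abc","","def"      ->  "abc", empty string, "def"
"abc","   ","def"   ->  "abc", three spaces, "def"
```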
WrongDataFormatException
Additional Properties - This exception exposes the same properties as Exception , plus these additional
properties:
Thrown when a field has the wrong format. For example, a numeric field with the value "abc".
AggregatedException
Additional Properties - This exception exposes the same properties as Exception , plus these additional
properties:
License
This article, along with any associated source code and files, is licensed under The Code Project Open License
(CPOL)
Matt Perdeck Technical skills used in last 2 years: AJAX, JavaScript (5 years), ASP.NET 3.5 and 2.0
(C#, VB.NET), SQL Server 2000 and 2005, XML, XHTML, CSS, InsiteCreation CMS.
Other technical skills include: Visual C++ (5 years), PHP, C, Linux, HTTP, SMTP, POP3,
TCP/IP, PERL, embedded low-level multi-tasking software, CGI, MySQL, X.25, SNA.
Dutch national, with Australian permanent residence. Has worked in The Netherlands,
Australia, Slovakia and Thailand.
Currently on contract with a major international publishing house until 1 August 2008,
developing web sites.
To inspect what a DBML project item is attached to, right-click on it and click "Attach Renderers...".

The image below shows what your working environment will look like:
The DBML file located in the Scenario1 folder in the business project is linked with the
CustomizeDesigners\Scenario1\CustSmallChangesToDefaultLINQtoSQL class. The code to achieve
the desired modification is remarkably simple:
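The original snippet is not reproduced in this extraction. As a sketch of just the manipulation half (the helper name is invented; retrieving Microsoft's generated code is framework-specific and not shown), appending hand-written members to the generated output can be as simple as:

```csharp
using System;

static class GeneratedCodePostProcessor
{
    // Inserts extra, hand-written code just before the final closing brace
    // of the code produced by the standard LINQ to SQL generator.
    public static string AppendToGeneratedCode(string generatedCode, string extraCode)
    {
        int lastBrace = generatedCode.LastIndexOf('}');
        if (lastBrace < 0)
            return generatedCode + Environment.NewLine + extraCode;
        return generatedCode.Substring(0, lastBrace)
             + extraCode + Environment.NewLine
             + generatedCode.Substring(lastBrace);
    }
}
```

More intricate edits can use Regular Expressions instead of plain string surgery, as the article notes below.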
As you can see in the code above, we invoke the standard code generator to retrieve the code generated by
Microsoft's designer. Once you have it, you can manipulate it by adding code to it, or by replacing certain
portions of it with string manipulation routines or Regular Expressions.
To test, edit Business\Scenario1\CustomersLINQ.dbml in the Business project, save it, expand its children, and
open Business\Scenario1\CustomersLINQ.Designer.cs. You can see that the file contains extra code added to
Microsoft's code.
To debug, set a breakpoint in the Render method, launch a debug session, load the same solution in the second
Visual Studio instance, and perform the same steps as you do when you test.
The example code generator has another method, AddAttribute , that shows how to do a slightly more
complex manipulation. You can use Regular Expressions if the desired changes are more complex.
The easiest way to achieve this is to link the DBML file to two code generators. The first one invokes Microsoft's
code generator, and you might want to slightly alter the results as per Scenario 1. The second one generates the
extra code.
To properly build a code generator for the DBML file, you need to be able to deserialize it into a serializable
class instance. This is achieved by adding a schema to the code generators project and generating serializable
classes from it. The second code generator deserializes the DBML file into such a class and then uses it to
generate what is needed.
The DBML file schema can be found at C:\Program Files\Microsoft Visual Studio
9.0\Xml\Schemas\DbmlSchema.xsd supposing that you installed Visual Studio in the default location. This
schema has been added to the CustomizeDesigners project to Dbml\DbmlSchema.xsd. It is linked with the
CustomizeDesigners\XsdRenderers.rgt code generator that invokes the same mechanism as xsd.exe to create
serializable classes from schemas. Expand the children project items for DbmlSchema.xsd to see the serializable
classes.
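A sketch of that deserialization step (the Database and Table classes below are minimal hand-written stand-ins for what the schema-generated classes provide; the XML namespace is the standard DBML one):

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

// Minimal stand-ins for the serializable classes generated from DbmlSchema.xsd.
[XmlRoot("Database", Namespace = "http://schemas.microsoft.com/linqtosql/dbml/2007")]
public class Database
{
    [XmlAttribute] public string Name { get; set; }
    [XmlElement("Table")] public Table[] Tables { get; set; }
}

public class Table
{
    [XmlAttribute] public string Name { get; set; }
}

public static class DbmlLoader
{
    // Deserializes a DBML file into an object graph the code generator can walk.
    public static Database Load(TextReader dbml)
    {
        return (Database)new XmlSerializer(typeof(Database)).Deserialize(dbml);
    }
}
```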
Business\Scenario2\CustomersLINQ.dbml is linked to two code generators (right click on it and click "Attach
Renderers..." to inspect), and two .cs files get created/updated when you save it. The
Business\Scenario2\CustomersLINQ.Designer.cs file is generated by
CustomizeDesigners\Scenario2\SmallChangesToDefaultLINQtoSQL.cs exactly the same as per Scenario1.
As opposed to ASPX, you can define render methods that help you break down your code generation work into
more manageable units. Also note the <% RenderTables(); %> call just under the namespace. This calls a
function that is manually written in the code-behind.
The DBML file is deserialized into the _database property before the code generation takes place. Please note
the implementation of RenderTables which gets called from the template file. In its turn, it calls the
RenderTable(Table table) that is defined in the template.
To test it, edit Business\Scenario2\CustomersLINQ.dbml, save it, expand its children, and inspect the content of
the two children cs files.
To debug, set a breakpoint in the PreRender method, launch a debug session, load the same solution in the
second Visual Studio instance, and perform the same steps as you do when you test.
Its purpose is to be an example, providing the familiarity and structure to start from when building your own
heavily customized code generator that takes advantage of an already-made designer.
Final Note
The possibilities are limitless on what you can now do with the LINQtoSQL designer. Here are some examples:
You can have your own LINQtoSQL customizations that magically get created when you use the designer.
You can generate something different, like SQL scripts for the tables, and Stored Procedures used by the
DBML file.
You can use the designer for something completely different and unrelated to LINQtoSQL as long as the
designer does a good enough job for what you are trying to achieve.
radusib
Occupation: Architect
Location: New Zealand
var result = source.Query<Person, Tuple<string, double>>(
    "SELECT Address, Avg(Age) FROM this GROUP BY Address");
var result2 = source.Query<Person, Family>(
    "SELECT Address, Avg(Age) AS AverageAge FROM this GROUP BY Address");
var result3 = source.Query<Person>("SELECT * FROM this ORDER BY age");
var result4 = source.Query<Person, string>("SELECT DISTINCT address FROM this");
I'm a big fan of LINQ, especially LINQ to Objects. Code that might be tens of lines long using a foreach
iteration and evaluation can often be shortened to 1 or 2 lines. Anything that reduces code lines is a big plus for
maintainability and quality. As I've used LINQ more and more in both production and hobby code, I've found
that while the Enumerable extension methods are pretty easy to follow and work with, I continue to trip on
the "almost SQL" inline LINQ C# keywords. Perhaps I've been at this too long, but my fingers just won't start a
query statement with any other word than SELECT.

My personal shortcomings aside, I've found that LINQ has a more practical limitation, namely that those "almost
SQL" statements are still tightly coupled to an application's static structure. I was a big fan of the VARIANT
back in the day, and have always thought that IDispatchEx never really got a chance to show its real
potential outside of Microsoft apps before COM went out of style. Dynamic typing has a lot of advantages,
especially in increasingly large, complex, and distributed systems. Perhaps I should switch to Python, but C#
pays the bills.

Luckily, Microsoft has been adding dynamic typing features to .NET and C#. C#/.NET 4.0 is adding some
interesting features: the F# Tuple type has been made part of the BCL; C# gets a dynamic keyword and
access to the Dynamic Language Runtime; and the BCL gets an ExpandoObject, allowing even statically typed
languages like VB.NET and C# to take on some features of a duck-typed language. The combination of dynamic
and static typing within C# may be a powerful new addition, or it might end up being a Frankenstein with the
worst features of both approaches. It will be interesting to see how all of this plays out over time.

But I digress. The real reason for this article is that I've always wanted to write a runtime evaluation engine.
.NET 3.5 (with the addition of System.Linq.Expressions) and the dynamic typing features of C# 4.0 have
provided the right set of tools. So I gave it a whirl.
What came out the other end lets you take something like this:

var result = from p in source
             group p by p.Address into g
             where g.Average(p => p.Age) > 40
             select new { Address = g.Key, AverageAge = g.Average(p => p.Age) };

and replace it with something like this:

var result = source.Query<Person, dynamic>(
    "SELECT Address, Avg(Age) AS AverageAge FROM this GROUP BY Address HAVING AverageAge > 40");
Background

The basis for this article begins with a previous installment: Taking LINQ to SQL in the Other Direction. That
article describes the basic parsing and evaluation infrastructure used. Much of the underlying code is the same
(or at least started out the same), especially in the area of SQL parsing using the GOLD parser. A limitation in
that previous code was that the data being evaluated had to be in the form of an IDictionary<string,
object>, and that was also how the data was returned to the caller.
It also owes some inspiration to the Dynamic LINQ example that Microsoft published as part of the VS2008
Samples collection.
Caveat Emptor
The attached code and project were created with Microsoft Visual Studio 2010 Beta 2 and use some of the new
features of C# and .NET 4.0, such as:
The entry point in the API is a small set of extension methods that extend IEnumerable<T> and take a SQL
statement in the form of a string . There are two sets of overloads:
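The overload list itself is elided in this extraction. Judging from the usage examples earlier in the article, the shapes are presumably along these lines (a hedged sketch, not the actual declarations):

```csharp
// Results keep the element type of the source sequence.
public static IEnumerable<TSource> Query<TSource>(
    this IEnumerable<TSource> source, string sql);

// Results are projected into TResult: a named type, a Tuple, or dynamic.
public static IEnumerable<TResult> Query<TSource, TResult>(
    this IEnumerable<TSource> source, string sql);
```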
The parsing happens in the base class, and is largely the same as it was in the previous article.
Creating the evaluation functions is a matter of determining which parts of a SQL query are present and
generating a lambda for each one:
In order to support both querying a type's properties and its value, there is a value() function built in. This
allows queries such as the following to be differentiated:
Also, the this entry in the join chain portion of the FROM clause is merely a placeholder. It refers to the same
this that is passed to the extension methods or the IEnumerable<T> passed to the Evaluate methods. In
a future update, I'd like to add support for joins across multiple collections, but at the moment, that capability
isn't present.
Points of Interest
The most challenging part of this was creating result selectors. SQL return types are polymorphic depending on
the query. A SELECT query can return a subset of the input data without a type transformation, with queries
such as SELECT * FROM this WHERE Age = 40 . It can return a subset of the fields from the input data
type: SELECT name, age FROM this . It can return a single value in situations like SELECT Avg(Age)
FROM this . Or queries can return completely transformed types that are aggregations of the input data:
SELECT name, Avg(age) FROM this GROUP BY name .
The type parameters passed to the Query methods indicate both the type contained in the enumerable and the
type to create and return.
Subselection
For a query that will return the same type as passed in, both TSource and TResult will be the same. There is
an overload for this case that only takes one type parameter.
Single Properties
For selecting single properties from the source type, a lambda is created that returns the value of that property
from each source object. This doesn't require the creation and initialization of new objects:
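A sketch of how such a selector can be built with System.Linq.Expressions (the Person type and the helper name are illustrative; the article's actual code may differ):

```csharp
using System;
using System.Linq.Expressions;

public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public static class SinglePropertySelector
{
    // Compiles p => (TProp)p.<propertyName>, the shape used for
    // queries like "SELECT name FROM this".
    public static Func<TSource, TProp> Build<TSource, TProp>(string propertyName)
    {
        var p = Expression.Parameter(typeof(TSource), "p");
        var body = Expression.Convert(
            Expression.PropertyOrField(p, propertyName), typeof(TProp));
        return Expression.Lambda<Func<TSource, TProp>>(body, p).Compile();
    }
}
```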
Multiple Properties
Returning multiple properties requires creating new instances of TResult and populating them with the result
data. There are three approaches to doing this.
In most cases, it is expected that the return type has read/write properties for each of the fields in the SELECT
statement. In this case, if the source property name is not the same as the result property, the AS keyword can
be used to map the two. In the example below, the Person class has a property named Address , while
OtherPerson has Location :
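The original example is elided here; in LINQ-to-Objects terms, the projection that an AS mapping such as "SELECT Name, Address AS Location FROM this" produces is equivalent to something like this (Person and OtherPerson are illustrative types):

```csharp
using System.Collections.Generic;
using System.Linq;

public class Person { public string Name { get; set; } public string Address { get; set; } }
public class OtherPerson { public string Name { get; set; } public string Location { get; set; } }

public static class AsMappingExample
{
    // The AS keyword maps the source's Address onto the result's Location.
    public static IEnumerable<OtherPerson> Project(IEnumerable<Person> source)
    {
        return source.Select(p => new OtherPerson { Name = p.Name, Location = p.Address });
    }
}
```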
As a side note: all property names are evaluated in a case-insensitive fashion. If TSource or TResult has
both Property and property, exceptions will be thrown.
A special selector is created for the Tuple type which has a constructor that takes an argument for each
constituent property. The order of fields in the Select statement must match the order of arguments in the
constructor declaration.
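Written out by hand (illustrative types only), a query like "SELECT Name, Age FROM this" against the Tuple selector corresponds to the projection below; each constituent is filled from the SELECT field in the same position:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Person { public string Name { get; set; } public int Age { get; set; } }

public static class TupleSelectorExample
{
    // Constructor-argument order must line up with the SELECT field order.
    public static IEnumerable<Tuple<string, int>> Project(IEnumerable<Person> source)
    {
        return source.Select(p => new Tuple<string, int>(p.Name, p.Age));
    }
}
```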
Another special selector is created when dynamic is specified as TResult . In this case, you will always get a
collection of ExpandoObject s back. ExpandoObject implements IDictionary<string, object> to
store the set of dynamically assigned properties, and this interface is used to populate the return objects. It is
via this mechanism that this API goes from statically to dynamically typed. Something I noticed about the
ExpandoObject is that its property names are case sensitive. I don't know if that's good or bad, but for some
reason, I expected them not to be, since it would seem to mesh more with a dynamically typed environment.
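A small demonstration of both points, independent of the article's code: populating an ExpandoObject through its IDictionary interface, and the case sensitivity of the resulting property names:

```csharp
using System;
using System.Collections.Generic;
using System.Dynamic;

public static class ExpandoDemo
{
    public static void Run()
    {
        dynamic row = new ExpandoObject();

        // The dynamic result selector populates each row through this interface.
        var dict = (IDictionary<string, object>)row;
        dict["Address"] = "Main St";

        Console.WriteLine(row.Address);                  // dynamic access works
        Console.WriteLine(dict.ContainsKey("address"));  // False: keys are case sensitive
    }
}
```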
Grouping
By far the most challenging selector was GROUP BY . This involves not only a type transformation between
TSource and TResult but the calculation of properties on the return type as opposed to only assignment. It
took me a while to wrap my head around how to do this without calculating and caching some intermediate state
for each aggregate. In the end, I created a type, GroupByCall , to cache the delegate for each aggregate at
compile time for later invocation during evaluation.
{
var aggregateExpression = aggregate.GetCallExpression(
typeof(IEnumerable<TSource>), typeof(TSource), group);
groupingCall.Aggregates.Add(aggregate.Alias,
Expression.Lambda(aggregateExpression, group).Compile());
}
// create the call to the result selector
var key = Expression.Parameter(keyType, "key");
var groupingFunc = Expression.Call(Expression.Constant(groupingCall),
"GroupingFunc", new Type[] { keyType }, key, group);
var resultSelectorLambda = Expression.Lambda(groupingFunc, key, group);
// package all of that up in a call to Enumerable.GroupBy
var data = Expression.Parameter(typeof(IEnumerable<TSource>), "data"); // the input data
var groupByExpression = Expression.Call(typeof(Enumerable), "GroupBy",
new Type[] { typeof(TSource), keyType, typeof(TResult) },
data, keyLambda, resultSelectorLambda);
// create the lambda
return Expression.Lambda<Func<IEnumerable<TSource>,
IEnumerable<TResult>>>(groupByExpression, data);
}
Conclusion
The attached unit tests contain plenty of examples for different combinations of the above, but most of the basic
syntax of SQL SELECT should work. I've had a lot of fun with this code thus far, and am planning on updating it
with additional functionality. Hopefully, you find it useful or at the very least interesting.
History
Initial upload - 11/11/2009.
Don Kackman The first computer program I ever wrote was in BASIC on a TRS-80 Model I and it
looked something like: