
Introduction

ASTreeView is a powerful treeview with drag & drop, ajax loading, context menu, XML
import/export, checkboxes, node selection, and ajax-based adding/editing/deleting of nodes.

Background 

ASTreeView is a powerful treeview with drag & drop, ajax loading, context menu, XML
import/export, checkbox, selection, and adding/editing/deleting nodes with ajax.

ASTreeView is developed on the .NET Framework 2.0, and the demo project is a Visual
Studio 2005 project.

ASTreeView is FREE! That means you can use it anywhere!

I host the project on Google Code: http://code.google.com/p/astreeview/. Please
download the assembly and the demo, check out the demo, and use ASTreeView in
your project!

Using the code

Here are the functionalities ASTreeView supports:

1. Drag & drop.

Users can drag & drop nodes within the tree or even among trees!

See live demo: http://www.geekees.com/ASTreeViewDemo/ASTreeViewDemo1.aspx

<ct:ASTreeView ID="astvMyTree"
runat="server"
BasePath="~/Javascript/astreeview/"
DataTableRootNodeValue="0"
EnableRoot="false"
EnableNodeSelection="false"
EnableCheckbox="true"
EnableDragDrop="true"
EnableTreeLines="true"
EnableNodeIcon="true"
EnableCustomizedNodeIcon="true"
EnableContextMenu="true"
EnableDebugMode="false"
EnableContextMenuAdd="false" />

In Code Behind: 

protected void btnToggleDragDrop_Click( object sender, EventArgs e )
{
    this.astvMyTree.EnableDragDrop = !this.astvMyTree.EnableDragDrop;
}

protected void btnToggleTreeLines_Click( object sender, EventArgs e )
{
    this.astvMyTree.EnableTreeLines = !this.astvMyTree.EnableTreeLines;
}

protected void btnToggleNodeIcon_Click( object sender, EventArgs e )
{
    this.astvMyTree.EnableNodeIcon = !this.astvMyTree.EnableNodeIcon;
}

protected void btnToggleCheckbox_Click( object sender, EventArgs e )
{
    this.astvMyTree.EnableCheckbox = !this.astvMyTree.EnableCheckbox;
}

protected void btnToggleDefaultNodeIcon_Click( object sender, EventArgs e )
{
    this.astvMyTree.EnableCustomizedNodeIcon = !this.astvMyTree.EnableCustomizedNodeIcon;
}

protected void btnToggleContextMenu_Click( object sender, EventArgs e )
{
    this.astvMyTree.EnableContextMenu = !this.astvMyTree.EnableContextMenu;
}

2. Tree lines.

Enabling/disabling tree lines is available.

See live demo: http://www.geekees.com/ASTreeViewDemo/ASTreeViewDemo1.aspx

3. Tree node icons.

Developers can specify a customized icon for each node, use the default node icon, or
disable node icons.

See live demo: http://www.geekees.com/ASTreeViewDemo/ASTreeViewDemo1.aspx

4. Checkbox.

Three-state (checked, unchecked, half-checked) checkboxes are available.

See live demo: http://www.geekees.com/ASTreeViewDemo/ASTreeViewDemo1.aspx

5. Tree node context menu.

Users can edit/delete a node through the context menu by right-clicking the node. Ajax
edit/delete is supported.

See live demo: http://www.geekees.com/ASTreeViewDemo/ASTreeViewDemo3.aspx

6. Multi-data source supported.

Developers can bind different types of data sources (currently ASTreeView supports
DataTable and XML data sources), or they can create ASTreeViewNode objects and
append them to the tree in code (a minimal sketch follows below).

XML:

DataTable:

See live demo: http://www.geekees.com/ASTreeViewDemo/ASTreeViewDemo4.aspx
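For the in-code option, here is a minimal sketch that uses only members shown elsewhere
in this article (the ASTreeViewNode constructor and AppendChild). How the finished node
hierarchy is attached to the control is left to the demo project's binding code, since that
call is not shown here; the node texts are made-up sample data.

protected void Page_Load( object sender, EventArgs e )
{
    if( IsPostBack )
        return;

    // Build a small hierarchy in code.
    ASTreeViewNode root = new ASTreeViewNode( "root" );
    ASTreeViewNode fruits = new ASTreeViewNode( "Fruits" );
    ASTreeViewNode apple = new ASTreeViewNode( "Apple" );

    fruits.AppendChild( apple );   // nest a child under its parent
    root.AppendChild( fruits );

    // Bind 'root' to astvMyTree here; see the demo project for the exact binding call.
}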

7. Server-side events supported.

OnSelectedNodeChanged and OnCheckedNodeChanged are available.

See live demo: http://www.geekees.com/ASTreeViewDemo/ASTreeViewDemo2.aspx

Configuration:  

<ct:ASTreeView ID="astvMyTree"
    runat="server"
    BasePath="~/Javascript/astreeview/"
    DataTableRootNodeValue="0"
    EnableRoot="false"
    EnableNodeSelection="true"
    EnableCheckbox="true"
    EnableDragDrop="false"
    EnableTreeLines="true"
    EnableNodeIcon="true"
    EnableCustomizedNodeIcon="false"
    AutoPostBack="true"
    EnableDebugMode="false"
    EnableContextMenu="false"
    OnOnCheckedNodeChanged="astvMyTree_OnCheckedNodeChanged"
    OnOnSelectedNodeChanged="astvMyTree_OnSelectedNodeChanged" />

In Code Behind:


protected void astvMyTree_OnCheckedNodeChanged( object src, ASTreeViewNodeCheckedEventArgs e )
{
    string toConsole = string.Format( ">>OnCheckedNodeChanged checked: text:{0} value:{1} state:{2}",
        e.NodeText, e.NodeValue, e.CheckedState.ToString() );
    this.divConsole.InnerHtml += ( toConsole + "<br />" );
}

protected void astvMyTree_OnSelectedNodeChanged( object src, ASTreeViewNodeSelectedEventArgs e )
{
    string toConsole = string.Format( ">>OnSelectedNodeChanged selected: text:{0} value:{1}",
        e.NodeText, e.NodeValue );
    this.divConsole.InnerHtml += ( toConsole + "<br />" );
}

protected void btnGetSelectedNode_Click( object sender, EventArgs e )
{
    string toConsole = string.Empty;

    ASTreeViewNode selectedNode = astvMyTree.GetSelectedNode();
    if( selectedNode == null )
        toConsole = ">>no node selected.";
    else
        toConsole = string.Format( ">>node selected: text:{0} value:{1}",
            selectedNode.NodeText, selectedNode.NodeValue );

    this.divConsole.InnerHtml += ( toConsole + "<br />" );
}

protected void btnGetCheckedNodes_Click( object sender, EventArgs e )
{
    List<ASTreeViewNode> checkedNodes = this.astvMyTree.GetCheckedNodes( cbIncludeHalfChecked.Checked );
    StringBuilder sb = new StringBuilder();
    foreach( ASTreeViewNode node in checkedNodes )
        sb.Append( string.Format( "[text:{0}, value:{1}]<br />", node.NodeText, node.NodeValue ) );

    this.divConsole.InnerHtml += ( string.Format( ">>nodes checked: <div style='padding-left:20px;'>{0}</div>", sb.ToString() ) );
}
8. Ajax node loading supported.

Having thousands of nodes? No problem, ASTreeView supports loading nodes using ajax.

See live demo: http://www.geekees.com/ASTreeViewDemo/ASTreeViewDemo5.aspx

Configuration:

<ct:ASTreeView ID="astvMyTree"
    runat="server"
    BasePath="~/Javascript/astreeview/"
    DataTableRootNodeValue="0"
    EnableRoot="false"
    EnableNodeSelection="true"
    EnableCheckbox="true"
    EnableDragDrop="false"
    EnableTreeLines="true"
    EnableNodeIcon="true"
    EnableCustomizedNodeIcon="false"
    EnableContextMenu="true"
    EnableDebugMode="false"
    EnableAjaxOnEditDelete="true"
    AddNodeProvider="~/ASTreeViewDemo5.aspx"
    AdditionalAddRequestParameters="{'t2':'ajaxAdd'}"
    EditNodeProvider="~/ASTreeViewRenameNodeHandler.aspx"
    DeleteNodeProvider="~/ASTreeViewDeleteNodeProvider.aspx"
    LoadNodesProvider="~/ASTreeViewDemo5.aspx"
    AdditionalLoadNodesRequestParameters="{'t1':'ajaxLoad'}" />

In Code Behind: 

protected override void Render( HtmlTextWriter writer )
{
    if( Request.QueryString["t1"] == "ajaxLoad" )
    {
        string virtualParentKey = Request.QueryString["virtualParentKey"];

        string para = string.Empty;// "= 1";

        if( virtualParentKey == null )
            para = " is NULL";
        else
            para = "=" + virtualParentKey;

        string sql = @"SELECT p1.[ProductID] as ProductID, p1.[ProductName] as ProductName,
                        p1.[ParentID] as ParentID, p3.childNodesCount as ChildNodesCount
                        FROM [Products] p1
                        LEFT OUTER JOIN
                        (
                            SELECT COUNT(*) AS childNodesCount, p2.[ParentID] AS pId
                            FROM [Products] p2
                            GROUP BY p2.[ParentID]
                        ) p3
                        ON p1.[ProductID] = p3.pId
                        WHERE p1.[ParentID] " + para;

        DataTable dt = OleDbHelper.ExecuteDataset( base.NorthWindConnectionString, CommandType.Text, sql ).Tables[0];

        ASTreeViewNode root = new ASTreeViewNode( "root" );

        foreach( DataRow dr in dt.Rows )
        {
            string productName = dr["ProductName"].ToString();
            string productId = dr["ProductID"].ToString();
            string parentId = dr["ParentID"].ToString();
            int childNodesCount = 0;
            if( !string.IsNullOrEmpty( dr["ChildNodesCount"].ToString() ) )
                childNodesCount = int.Parse( dr["ChildNodesCount"].ToString() );

            ASTreeViewLinkNode node = new ASTreeViewLinkNode( productName, productId );
            node.VirtualNodesCount = childNodesCount;
            node.VirtualParentKey = productId;
            node.IsVirtualNode = childNodesCount > 0;
            node.NavigateUrl = "#";
            node.AddtionalAttributes.Add( new KeyValuePair<string, string>( "onclick", "return false;" ) );

            root.AppendChild( node );
        }

        HtmlGenericControl ulRoot = new HtmlGenericControl( "ul" );
        astvMyTree.TreeViewHelper.ConvertTree( ulRoot, root, false );
        foreach( Control c in ulRoot.Controls )
            c.RenderControl( writer );
    }
    else if( Request.QueryString["t2"] == "ajaxAdd" )
    {
        string addNodeText = Request.QueryString["addNodeText"];
        int parentNodeValue = int.Parse( Request.QueryString["parentNodeValue"] );

        string maxSql = "select max( productId ) from products";
        int max = (int)OleDbHelper.ExecuteScalar( base.NorthWindConnectionString, CommandType.Text, maxSql );
        int newId = max + 1;

        string sql = string.Format( @"INSERT INTO products( productid, Discontinued, productname, parentid )
                        values( {0}, 0, '{1}', {2} )",
            max + 1, addNodeText.Replace( "'", "''" ), parentNodeValue );

        int i = OleDbHelper.ExecuteNonQuery( base.NorthWindConnectionString, CommandType.Text, sql );

        ASTreeViewNode root = new ASTreeViewNode( "root" );

        ASTreeViewLinkNode node = new ASTreeViewLinkNode( addNodeText, newId.ToString() );
        node.NavigateUrl = "#";
        node.AddtionalAttributes.Add( new KeyValuePair<string, string>( "onclick", "return false;" ) );

        root.AppendChild( node );

        HtmlGenericControl ulRoot = new HtmlGenericControl( "ul" );
        astvMyTree.TreeViewHelper.ConvertTree( ulRoot, root, false );
        foreach( Control c in ulRoot.Controls )
            c.RenderControl( writer );
    }
    else
        base.Render( writer );
}
 

9. Multi-type tree nodes.

A tree node can be a hyperlink, or a LinkButton that performs a postback.

See live demos:

Hyper Link: http://www.geekees.com/ASTreeViewDemo/ASTreeViewDemo1.aspx

Link Button: http://www.geekees.com/ASTreeViewDemo/ASTreeViewDemo2.aspx

10. ASDropDownTree

ASDropDownTree inherits from ASTreeView and looks like a DropDownList; multi-selection
and single-selection are available through the control's configuration.

See live demo: http://www.geekees.com/ASTreeViewDemo/ASTreeViewDemo6.aspx

11. Drag & drop between (or even among) trees

Nodes can be dragged and dropped across trees.

See live demo: http://www.geekees.com/ASTreeViewDemo/ASTreeViewDemo7.aspx

Points of Interest

I spent two or three months in development, and ASTreeView is finally finished.
Now I would like to introduce it to you. Your feedback is appreciated!
History

Introduction

The discrepancies between object and relational models have long been noted, and are
referred to as the object-relational impedance mismatch.

What is more rarely noted, however, is how simple applications, often referred to as
CRUD (Create, Read, Update, Delete) applications, effectively demonstrate the striking
resemblance between relational models and UI structures. A trivial remark lies in the
correspondence between DB operators and CRUD functions: Create/Insert, Read/Select,
Update/Update, Delete/Delete.

The main idea of this article is to explore a model of UI components that would make
the impedance mismatch moot by linking the UI model directly to the database without
going through any intermediate object representation.

We argue that the key concept here is the representation of UI metadata, and that with
a proper set of fundamental UI widgets, most CRUD applications can be effectively
designed without hand coding. The first step toward automated code generation of
full-blown applications is to show that simple applications (CRUD in our example) can
be built automatically (even on the fly).

In this article, we will take a "to do list" application as an example and gather all the
meaningful information needed to fully describe the UI and the database mapping
required to add, view, edit, delete, and search records. This information is called UI
metadata (data about the UI data) and its structure is called a meta-model (a model of
the model, or model of the metadata).

Ideally, the metadata should only contain what differs from one CRUD application to
another, with as little redundancy as possible. What is the same across all applications
should be reusable code.

The different issues we need to consider for a meta-model are:

 Database mapping
 UI forms
 Field types
 Elements behavior
 Elements positioning
 Elements groups
 Elements appearance

To paraphrase Eric S. Raymond's original quote from The Cathedral and the Bazaar,
"smart data structures and dumb code works a lot better than the other way around",
in our case we could say "smart metadata structures and dumb code generation work a
lot better than the other way around" ;-)

The meta-model is the structure of the application definitions; therefore, decisions made
there have significant consequences on the volume and complexity of the code.

Database mapping

Database mapping for code generation can become a very complex task when
mapping all possible data structures and types of joins. We will not try to re-invent
Hibernate but just look at the minimal metadata needed to generate all necessary
SQL for CRUD.

In order to keep the metadata simple, we can use conventions and stick to as few
patterns as possible. In the limited scope of CRUD, we can consistently use primary
keys on columns called "ID" of type integer (or long), not null, and auto-increment.
Of course, other naming conventions and techniques are possible; you may choose
to call your primary keys "<table name>ID" and not set them as auto-increment, but
that will eventually result in more code and metadata (primary key column name,
data type, increment...). What is important is to stick to a convention so that we will
not have to define primary keys and joins in the metadata for each entity, or to code
all possible types of joins.

For our "to do list" example we will need a database schema like the following:

To manipulate the data in the former schema, different SQL syntaxes are possible,
and the SQL could be generated on the fly or coded once in stored procedures. We
will basically need SQL code like the following:
Create:
INSERT INTO EVOL_ToDo (ID, Title, Duedate, PriorityID,
CategoryID, Complete, Notes)
VALUES (...)

Read:
SELECT t.ID, t.Title, t.Duedate, t.PriorityID, t.CategoryID,
t.Complete, t.Notes, EVOL_ToDoPriority.name AS Priority,
EVOL_ToDoCategory.name AS Category
FROM EVOL_ToDo t left join EVOL_ToDoPriority on
t.PriorityID=EVOL_ToDoPriority.ID left join EVOL_ToDoCategory on
t.CategoryID=EVOL_ToDoCategory.ID
WHERE ...
ORDER BY ...

Update:
UPDATE EVOL_ToDo
SET Title=..., DueDate=...
WHERE ID=...

Delete:
DELETE FROM EVOL_ToDo
WHERE ID=...

In addition, we will also need the following two simple queries to populate our drop
downs of tasks categories and priorities with their lists of values:
SELECT ID, name FROM EVOL_ToDoPriority ORDER BY ID
and
SELECT ID, name FROM EVOL_ToDoCategory ORDER BY name

In order to generate all that SQL, we will only need the following information
(assuming all tables use "ID" as primary key):

Driving table: EVOL_ToDo

Fields:
 ID - Primary key - int
 Title - nvarchar
 Duedate - datetime
 PriorityID - int (LOV table = EVOL_ToDoPriority, display column = name, alias = Priority, sorted by = ID)
 CategoryID - int (LOV table = EVOL_ToDoCategory, display column = name, alias = Category, sorted by = name)
 Complete - bit
 Notes - nvarchar

Default list order: Duedate, PriorityID

So far, all this metadata is available directly from the database; there is no need yet
for external mapping. It may be completed with stored procedure names for all
database operations. In order to build the corresponding user interface, more
metadata will become necessary. Then it will make sense to have a separate
structure (a meta-model) to store our metadata.
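To make the mapping concrete, here is a minimal sketch (not Evolutility's actual code, and
with hypothetical type and method names) of how the Read SELECT shown above could be
assembled from per-field metadata, adding one LEFT JOIN per LOV field:

using System.Collections.Generic;
using System.Text;

// Hypothetical, simplified field description; the names mirror the metadata
// attributes described later (dbcolumn, dbtablelov, dbcolumnreadlov, dbcolumnread).
public class FieldMeta
{
    public string DbColumn;        // column in the driving table
    public string DbTableLov;      // LOV table (null for plain fields)
    public string DbColumnReadLov; // display column in the LOV table, e.g. "name"
    public string DbColumnRead;    // alias for the displayed LOV value
}

public static class CrudSqlBuilder
{
    // Builds the Read SELECT for a driving table, joining each LOV field to its table.
    public static string BuildSelect( string drivingTable, IEnumerable<FieldMeta> fields )
    {
        List<string> cols = new List<string>();
        StringBuilder joins = new StringBuilder();
        foreach( FieldMeta f in fields )
        {
            cols.Add( "t." + f.DbColumn );
            if( f.DbTableLov != null )
            {
                cols.Add( string.Format( "{0}.{1} AS {2}",
                    f.DbTableLov, f.DbColumnReadLov, f.DbColumnRead ) );
                joins.AppendFormat( " LEFT JOIN {0} ON t.{1}={0}.ID",
                    f.DbTableLov, f.DbColumn );
            }
        }
        return string.Format( "SELECT t.ID, {0} FROM {1} t{2}",
            string.Join( ", ", cols.ToArray() ), drivingTable, joins.ToString() );
    }
}

Fed with the EVOL_ToDo fields above, this produces a statement of the same shape as the
Read query shown earlier.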

UI forms

In order to interact with our database tables, we will need several different forms
(often called modes or views). These are the same for all CRUD applications: List,
View, Edit, and Search... and maybe Advanced Search and Export.

List: Displays the most important fields for multiple rows of records.

View: Displays all fields for viewing 1 record at a time.

Edit: Displays all fields for editing 1 record at a time.

Search: Query-by-example form for searching records.

Export: Export of selections in standard formats (Excel, HTML, CSV, XML, or SQL), one or
several records at the same time.

Advanced Search: Advanced form for queries such as "phone number starts with 415" or
"category equals Finances or Business"...

In most web applications today each mode is a different web page. In recent Web
2.0 rich internet applications the modes Export, Search and Advanced search are
often modal dialogs. Also, we sometimes see applications with a single page made of
2 different frames for the List mode and the Edit or View mode (Microsoft Outlook
style).

Whether you plan on using one page per mode, or have "the page be the
application", the different modes are still there and the same proposed meta-model
(and same models) can be used to describe it.

Field Types

For each field, the model should not describe every detail for every mode, but use
patterns. For example, a date field will appear as a text box with a date picker
button in Edit mode, but as a single string in View mode. Defining a field as "date"
should be enough; there is no need to describe the text field, the button, and the
popup calendar for every date field.

Field types are no longer just data types but become "behavioral types" or "UI types".
For example, fields for a URL, an email, or a phone number are different types for the
user because they behave differently, but they have the same data type in the
database (varchar or nvarchar). We need patterns associated with each field type.

The list of field types should at least include the following:

 boolean
 date, datetime, time
 decimal, integer
 document
 email, url
 formula
 image
 lov (list of values)
 text
 textmultiline

boolean

Boolean fields are Yes/No values displayed as checkboxes. A Boolean field is stored
as a numeric value (Yes=1, No=0 or null). The most efficient database column type
for it is bit.


date, datetime, time

Dates are displayed as an input box with a date picker in edit mode, and as a
formatted string in other modes. The Javascript for the date picker is an external JS
file which can be customized. Possible database column types are datetime or
smalldatetime.


decimal, integer

These types are used for numeric values. Decimal can be stored as data type money
or decimal. Integer can be smallint, int, bigint…



document

Documents are displayed as a link for download in view mode, as a text box with a
browse button for upload in edit mode, as a checkbox in the search and advanced
search modes. Like images, documents are stored on the file server and only the
filename is stored in the database.


email, url

Text value displayed as a text box in edit mode and hyperlink in other modes. These
can be stored as varchar, or nvarchar.


formula

SQL formula or sub-query. The calculation SQL is entered in the dbcolumn attribute
of the field. Fields of type formula cannot be edited by users.
Example of formula field:

<field type="formula" readonly="1" label="Photos"


format="0 'photos'" dbcolumnread="NBphotos"
dbcolumn="SELECT COUNT(*) FROM EVOL_Photo P WHERE
P.albumid=T.id" ...  >


image

Images are displayed as such in view mode, as a box with a browse button for
upload in edit mode, as a checkbox in the search and advanced search modes.
Images are stored on the file server, only the filename is stored in the database, as a
varchar or nvarchar.


lov (list of values)


Lists of values are choices of values displayed as drop-down lists in edit mode or as
the string of the selected value in view mode. They correspond to joins to secondary
tables in the database and are stored in the driving table as a number which is the
primary key of the value in the secondary table.

Using certain attributes of the field, it can become a many-to-many relationship
instead of a one-to-many.


text

This type is the most commonly used one. It is displayed as a text box in edit mode.
It is a string stored as varchar or nvarchar.


textmultiline

Fields of this type are displayed as big text boxes (HTML "textarea") and can
spread over several rows. They can be stored as text, varchar, or nvarchar.

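Summarizing the per-type storage notes above, here is a minimal sketch (a hypothetical
helper, not part of the article's framework) of a default mapping from UI field types to
SQL Server column types:

using System.Collections.Generic;

// Hypothetical helper: default SQL Server column type per UI field type,
// following the storage notes given for each type above.
// (formula fields map to a sub-query rather than a column, so they are omitted.)
public static class FieldTypeMapping
{
    public static readonly Dictionary<string, string> DefaultDbType =
        new Dictionary<string, string>
        {
            { "boolean",       "bit" },
            { "date",          "smalldatetime" },
            { "datetime",      "datetime" },
            { "time",          "datetime" },
            { "decimal",       "decimal" },      // or money
            { "integer",       "int" },          // or smallint, bigint
            { "document",      "nvarchar" },     // stores the file name only
            { "email",         "nvarchar" },
            { "url",           "nvarchar" },
            { "image",         "nvarchar" },     // stores the file name only
            { "lov",           "int" },          // key of the value in the LOV table
            { "text",          "nvarchar" },
            { "textmultiline", "nvarchar" }      // or text
        };
}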

The previous list is what I believe to be the minimal set of field types necessary to
build applications. Of course, more field types could be added if needed in your
application. For example, IP address, color, data size, temperature, or rich text
could be nice to implement as special widgets. SalesForce offers specific field types
for "Percent" and "Phone"; it also provides field types for relationships, but these
could also be considered "special lists of values".

You may implement slightly different patterns. The nice thing is that if you build a
metadata-driven application, you will only need to change the code once for all pages
to change accordingly. If you use full-text search, you may want to add a "contains
word" operator.

It is also useful to add "Is null" and "Is not null" (maybe called "Is empty" and "Is
not empty" depending on your users) in the advanced search (this is only useful for
fields that are not required).

Most field types map to a single database column, but list of values (LOV) fields
map to a key column in the driving table and need a join to a secondary table. Fields of
type "formula" may map to a sub-query, or be calculated by the database or the UI
depending on the implementation.

For our "to do list" example, we will need the following fields:

 Title (text)
 Due date (date)
 Priority (list of values)
 Category (list of values)
 Complete (boolean)
 Notes (multiline text)

Elements behavior

The toolbar for any kind of CRUD application will be the same by definition. Maybe
we should call it "SCRUD", with an additional "S" for "Search". It may also have a
"clone" button that can be quite useful for data entry. Some of the icons may be
repeated in the web form as buttons or links.

A lot of the client interaction code can be generic and should not require any
additional metadata. For example panels can be collapsible using the same code for
a to do list or a contact management application.

The field types already imply quite some behavior but we still need to introduce more
metadata for each field:

 Validation rules: Required, Min, Max, RegExp, MaxLength...
 Security: ReadOnly
 Flagging membership to modes: all fields will be in the Edit and View modes,
but the Search, Advanced Search, and List modes should only contain a subset
of the fields.
 Altering the appearance or behavior of specific fields, sometimes in specific
modes: label-edit, label-search, help, optional...

Adding behavioral information to our "to do list" example, we now have the following
field definitions:

 Title (text): required, maxlength = 255 characters
 Due date (date)
 Priority (list of values): required
 Category (list of values)
 Complete (boolean): labellist="C." (use an abbreviated label in lists)
 Notes (multiline text): maxlength = 1000 characters, height = 6

Although it was not needed in our example, we can expect the need for custom
validation. The metadata should allow specifying custom Javascript.

Elements positioning

With the metadata we have now gathered for our example, and the field type
patterns, we can already present an Edit form as follows.

As long as we do not have too many fields on the screen, this is perfectly fine. For
applications with a large number of fields, this layout involves a lot of vertical
scrolling for the user. Also, it is much easier for users when fields are visually
grouped into panels on the screen (and maybe tabs, which in the metadata could
simply be considered a different type of panel).

The most common positioning strategy is to decide on a specific number of columns
for the whole page. Labels can be to the left of fields (in their own column) as seen
in the previous screenshot, or on top of each field as in the next screenshot. This is
not ideal in cases like addresses, where it is much nicer to display a different number
of fields on each row and have fields of different widths on the same row, like the
following:

Let's go through the exercise with our to do list example. Assuming we could position
elements as we please, we could organize fields into 2 or 3 groups. It may seem
trivial, but I'll say it anyway because it is important and not always done: the
grouping of fields should match the user's mental model, not the developer's.

2 panels organization: Task and Status

or 3 panels organization: Task, Status, and Notes

Several different ways to specify element positions on the screen are available. We
could specify the top and left coordinates for each element, but each time we remove
or add an element we would have to change the coordinates of other elements to
reorganize the form.

My favorite positioning scheme is the "flow positioning" because it only requires one
variable per element: its width. Also, when adding or removing elements, there is no
need to make any change to other elements. In addition, it can be implemented in a
"forgiving way" where any value greater than 100% will be considered as 100%; a
field of width less than 100% between two fields of width 100% will also behave as
100%. Also, it supports cases like the earlier address example.

Fields and panels (groups) organization is described the same way. Elements are
placed sequentially, left to right, until a width of 100% is reached. Once the
maximum width is reached, the next panel will appear below the previous group of
panels.
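As an illustration only (this is not Evolutility's code), here is a minimal sketch of the
flow-positioning rule just described: elements fill a row left to right until 100% is used,
then wrap to a new row, with widths over 100% forgivingly clamped.

using System;
using System.Collections.Generic;

public static class FlowPositioning
{
    // Groups element widths (in percent) into rows, wrapping once a row is full.
    public static List<List<int>> FlowRows( IEnumerable<int> widths )
    {
        List<List<int>> rows = new List<List<int>>();
        List<int> current = new List<int>();
        int used = 0;

        foreach( int w in widths )
        {
            int width = Math.Min( w, 100 );            // forgiving: >100% is treated as 100%
            if( used + width > 100 && current.Count > 0 )
            {
                rows.Add( current );                   // row is full: start a new one
                current = new List<int>();
                used = 0;
            }
            current.Add( width );
            used += width;
        }
        if( current.Count > 0 )
            rows.Add( current );
        return rows;
    }
}

With the panel widths of the 3-panel layout below (62, 38, 100), the first two panels share
a row and the Notes panel wraps to its own row.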

Choosing the 3 panels presentation, and using the "flow positioning" scheme we
have the following description:

Panel: Task 62%
 Field: Title 100%
 Field: Due date 40%
 Field: Priority 60%

Panel: Status 38%
 Field: Category 100%
 Field: Complete 100%

Panel: Notes 100%
 Field: Notes 100%

Notice how minimalist this scheme is: by keeping the structure of the metadata
(elements order and nesting of fields into panels), and using 100% as the default
width, we only need 4 numbers to position all elements (fields and panels) on the
form.

The Edit and View modes follow the same positioning. For other modes, fields appear
or not based on the attributes search for the search form, searchadv for the advanced
search, and searchlist for the list. All fields present in the metadata will appear in the
export mode.

By the way, I chose a 62%-38% proportion instead of a more typical 50%-50%
because it approximates the golden ratio (62/38 is about 1.63, close to the golden
ratio of about 1.618), which is recognized as more aesthetically pleasing. If it worked
for ancient Greek architects, Da Vinci, and the French impressionists, it should also
work for application screens... and it is not any more difficult to code than other
proportions.

Standard ratio: 50% - 50%.

Golden ratio: 62% - 38%.

The difference is subtle, but at an unconscious level users should feel more
comfortable with proportions which remind them of their environment or their body.

Field labels positioning

In his latest book, Web Form Design: Filling in the Blanks, Luke Wroblewski dedicates
a full chapter to label positioning. In order to improve usability, I decided to follow
his guidelines.

One of the best practices he proposes (and I hadn't thought of before) is: "When
considering different label alignments for different forms in a single application, think
through the context versus consistency trade-off".

As a result, I used different label positioning schemes for the different forms as
follows:

 Top-aligned labels for the View & Edit forms, because "labels and input fields
are in close proximity, processing them requires little effort".
 Left-aligned labels for the Search & Advanced Search forms, because they "make
scanning the information required by a form easier".
 Labels with inputs for the Export form.

Elements groups

As we saw earlier, when we have a lot of fields it becomes useful to group them into
panels. Now, when we have too many panels to fit the screen without scrolling, we
should group them into tabs. Tabs can save a lot of screen real estate and help users
not be overwhelmed by too much information at once.
Elements appearance

In web applications today, element appearance is almost always set using CSS.
We can simply define a default CSS class for all fields, and another one for read-only
fields. Of course, it is possible and sometimes necessary to go further and have
different classes for different types of fields, or in different modes.

At the minimum we should have the following CSS classes:

For fields not passing validation, we should add a "FieldInvalid" CSS class. By using it
in conjunction with the "Field" CSS class, we can reduce the potential number of CSS
classes needed.

There is no need to specify in the model any of the CSS classes for elements which
stay the same for every application, like the toolbar, buttons, field labels... Actually, by
using a set of default CSS classes and only specifying the classes of the elements we
want to be special, we save a lot of time and effort and also enforce look & feel
consistency in the application.

In addition, we should add to the metadata a way to specify the file name of the
entity icon.
In our example, to make it stand out, we will change the class name of the field
"Title" from the default "Field" to "FieldMain". We will also specify an icon for the
entity.

Keeping it simple

So far, we looked at what needs to be accounted for to describe CRUD applications. We
could now decide to have one XML document for the database mapping (similar to
Hibernate's), and one XML document for each UI form (referring to the database
mapping document by field/property name). This may be needed for complex
applications, but in the scope of CRUD we can "collapse" all these documents into a
single one.

Keeping the number of XML documents as low as possible, and avoiding redundancy
will make maintenance easier and avoid many potential "metadata bugs" caused by
typos or mismatches across documents.

We can now define the elements of the Edit form of our "to do list" application, add
to each field its database mapping information, and flag fields we want in the Search,
Advanced search, and List forms. That will give us a single XML document like the
following which can be used to generate all SQL, and all UI forms.

<?xml version="1.0" encoding="UTF-8"?>
<form label="To Do" xmlns="http://www.evolutility.com">
<data entity="task" entities="tasks" icon="m-todo.gif"
dbtable="EVOL_ToDo" dborder="PriorityID, duedate" />
<panel label="Task" width="62" >
<field type="text" label="Title"
dbcolumn="title"
required="1" cssclass="fieldmain" maxlength="255"
width="100"
search="1" searchlist="1" searchadv="1" />
<field type="date" label="Due Date"
dbcolumn="duedate"
maxlength="10" width="40"
search="1" searchlist="1" searchadv="1" />
<field type="lov" label="Priority"
dbcolumn="PriorityID" dbcolumnread="Priority"
dbtablelov="EVOL_ToDoPriority" dbcolumnreadlov="name"
dborderlov="ID"
required="1" width="60"
search="1" searchlist="1" searchadv="1" />
</panel>
<panel label="Category" width="38">
<field type="lov" label="Category"
dbcolumn="CategoryID" dbcolumnread="Category"
dbtablelov="EVOL_ToDoCategory" dbcolumnreadlov="name"
dborderlov="name"
width="100"
search="1" searchlist="1" searchadv="1" />
<field type="boolean" label="Complete" labellist="C."
dbcolumn="Complete"
width="50" img="checkb.gif"
search="1" searchlist="1" searchadv="1" />
</panel>
<panel label="Notes" width="100">
<field type="textmultiline" label="" labeledit="Notes"
labellist="Notes"
dbcolumn="Notes"
maxlength="1000" width="100" height="6"
search="0" searchlist="0" searchadv="1" />
</panel>
</form>
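As a small illustration of how a single document like this can drive the code (a hypothetical
helper, not Evolutility's implementation), the metadata can be read with the standard
XmlDocument API:

using System;
using System.Xml;

// Hypothetical reader: dumps the driving table and the fields declared in the metadata.
public static class MetaModelReader
{
    public static void Dump( string path )
    {
        XmlDocument doc = new XmlDocument();
        doc.Load( path );

        // The document uses a default namespace, so a prefix must be mapped for XPath.
        XmlNamespaceManager ns = new XmlNamespaceManager( doc.NameTable );
        ns.AddNamespace( "e", "http://www.evolutility.com" );

        XmlNode data = doc.SelectSingleNode( "/e:form/e:data", ns );
        Console.WriteLine( "Driving table: " + data.Attributes["dbtable"].Value );

        foreach( XmlNode field in doc.SelectNodes( "/e:form/e:panel/e:field", ns ) )
        {
            Console.WriteLine( "{0} ({1}) -> column {2}",
                field.Attributes["label"].Value,
                field.Attributes["type"].Value,
                field.Attributes["dbcolumn"].Value );
        }
    }
}

From there, the same structure can feed both the SQL generation and the form rendering.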

Also, I find it very useful to have good default values. That way, things can be
changed if needed, but users can rely on the default values and not think about them
until needed. It doesn't simplify the meta-model, but it can make the models smaller.

Try the "to do list" application generated from that metadata live on Evolutility.org.

Meta-model for CRUD

Let's now gather an exhaustive list of the metadata encountered in the previous
paragraphs and come up with a formal language to fully describe CRUD applications.
We will also add information that was not needed in our example but will come in
handy for more complex applications.

As we just saw earlier, a CRUD application can be defined by a single XML document.
This XML document is composed of one form element containing one data element,
and one or more panel elements containing one or more field elements.

form element

form is the root element. It doesn't represent one single web form, but all necessary
web forms (Edit, View, List, Search, Advanced Search, Export) at once. It has the
optional attributes label, description, and version, and the required namespace
xmlns.

The form element contains one data element and one or more panel elements.
Fields are displayed or not on each web form based on the attributes search,
searchlist, and searchadv.

data element

The data element specifies the set of database objects used by the component
(driving table, stored procedures) and the associated icon and screen name.

entity - User's object name for the database object (for example, "task" for a to do
list application).

entities - Plural of entity (for example, "tasks" for the to do list).

icon - Filename of the record icon (the same one for all records). Example:
icon="todo.gif"

dbcolumnlead - Database column used as the record identifier for the user (not the
primary key). Example: dbcolumnlead="TITLE"

dbcolumnpk - Name of the primary key column used as the record identifier.
Example: dbcolumnpk="ID"

dbtable - Name of the driving table for the application.

dbwhere - SQL where clause to limit the dataset manipulated.

dborder - List of column names to include in the "order by" SQL clause. It is the
default sort option.

spdelete - Name and parameters of the stored procedure for deleting records (or
flagging them as deleted).

spget - Name and parameters of the stored procedure for retrieving a specific record.

splogin - Name and parameters of the stored procedure for checking users'
credentials. Example: splogin="EvoSP_Login @login, @password"

sppaging - Name and parameters of the stored procedure for paging search results.

panel element

The panel element is used to visually group fields together on the screen.

cssclass - CSS class for the panel. The default value is "Panel".

cssclasslabel - CSS class for the panel title. The default value is "PanelLabel".

label - Panel title.

optional - Skips displaying the panel if every field it contains is empty and optional
(View mode only).

width - Width of the panel as a percentage of the total width of the form. Example:
width="100" for the full width.

tab element

The tab element can be used to group panels together and view them one group at a
time. This element is optional.

cssclass - CSS class for the tab. The default value is "Tab".

cssclasslabel - CSS class for the tab title. The default value is "TabLabel".

label - Tab title.

field element

The field element represents a field on the screen and a database column at once.
It is the most used element and the one with the most attributes. Database columns
hidden from the user (like the primary key of the driving table) are not declared.

cssclass - CSS class name for the specific field (if we want to override the default).

cssclasslabel - CSS class name for the specific field label (in Edit and View modes).
The default value is "FieldLabel".

dbcolumn - Database column (SQL name) for the field.

dbcolumnread - Database column alias. Only useful for fields of type lov; otherwise
dbcolumnread must be the same as dbcolumn for the field.

dbcolumnreadlov - Column to show as the value in lists. The default value is "name".

dborderlov - Column name (or comma-separated list of column names) used to sort
the values in the drop-downs.

dbtablelov - Database table with the list of possible values for the drop-down.

dbwherelov - Extra SQL where clause to limit the list of values.

defaultvalue - Default value for the field, displayed while creating a new record.

format - Field format (for fields of type boolean, date, decimal, or integer).
Example: format="'$'#,##0.00"

height - Height of the field, in number of rows (defaults to 1 for all fields except
fields of type textmultiline).

help - Help tooltip for the field (only used in Edit mode).

img - Image to display (for fields of type "boolean" or "url") in Edit or View modes.

jsvalidation - Name of the Javascript method for the field validation.

label, labeledit, labellist - Field title (in different modes). Only label is mandatory;
it is the default for all other modes if they are not specified.

link - Forces the field to be displayed as a link to another web page. It can use the
following variables: @itemid, @userid, @fieldid.

linklabel - Displays a sentence or an image as the link. @fieldvalue in the string is
replaced by the field value at runtime.

linktarget - Directs the link click to a new browser window.

max - Maximum value allowed for the field.

maxlength - Maximum number of characters allowed for the field value.

min - Minimum value allowed for the field.

minlength - Minimum number of characters required for the field value to be valid.

optional - Determines if the field is displayed when empty (applies to View mode
only).

readonly - readonly=1 prevents editing of the field. readonly=2 prevents editing of
the field but allows typing when inserting a new record.

regexp - Regular expression to validate the field value against.

required - Determines if the field is required for saving the record. The Javascript for
client-side validation is automatically generated.

search - Determines if the field appears in the search form.

searchadv - Determines if the field appears in the advanced search form.

searchlist - Determines if the field appears as a column of the search results list.

type - The type of the field. As we saw earlier, possible values are: boolean (yes/no),
date, datetime, decimal, document, email, formula, image, integer, lov (list of
values), text, textmultiline, time, or url.

width - Width of the field as a percentage of the panel it belongs to.

Storing the metadata

Now that we have a meta-model, we need to store the metadata. Naturally, we have
the 2 common options: XML or database. This is where we reap some of the benefits
of a simple meta-model: the simpler the meta-model, the simpler the structure of the
XML or the database dictionary.

Using XML documents

 Pros: flexible, self-contained, easy to install, easy to modify.
 Cons: sometimes redundant across documents.
 Nice to have: an XSD for validation.

Using a database dictionary

 Pros: centralized, can mix data and metadata (even though it can be
dangerous), can use the same application to edit the data and the metadata.
 Cons: overhead of a database dictionary, difficult to edit without a custom tool.
 Must have: a database schema (a simple one is shown below).

  
The former schema could be enhanced by splitting the "EvoDico_Field" table into 2
separate tables, one for the database mapping and one for the UI information of
each field. This would be especially helpful when the same table is accessed by
different groups of users and some users are restricted to a subset of the fields.

About the implementation(s)

The proposed meta-model is relatively simple, ideal for CRUD, and all its patterns
work together. Other meta-models, and different sets of patterns, are definitely
possible. Depending on the level of control and customizability you want to achieve,
you may want to use a more complex one.

There are many advantages to keeping it simple. That way, non-programmers can
look at an application description and make modifications without having to go
through a big learning curve or read the doc (as we know most people don't).

The most interesting question about metadata driven applications may be about
when to perform code generation. Should we use the meta-model to generate code
at design time (then compile and run) or can we generate the code at run-time and
have the application modify itself?

The UI is described in a way that can be implemented on any platform.

I'm currently working on an open source CRUD framework called Evolutility, which is
an implementation of the meta-model described in this article using C# and
Javascript for ASP.NET, with SQL Server or MySQL. It can work with metadata
stored in XML or directly in the database (allowing the application to modify itself).
Thanks to users and contributors, it is available in Catalan, Chinese (simplified),
Danish, English, French, German, Hindi, Italian, Japanese, Portuguese, Romanian,
Spanish, and Turkish.

Live sample applications are available at www.evolutility.org. The Evolutility project is
open source (under Affero GPL v3 with dual licensing). Enjoy... and maybe improve it
and contribute your code.

A working implementation of this meta-model is available with sample applications in
my other articles, on on-the-fly code generation with Evolutility and on Wizard and
CRUD applications to build other CRUD applications.
Introduction

Tier Generator is a powerful tool for generating business and data layers in C#. It is
a code generation tool that helps users rapidly generate and deploy business and
data layers for their applications. The idea behind it is to provide a utility (tool) to
developers that is capable of quickly generating consistent and tested source code,
helping to get projects started sooner and finished faster.

Tier Generator connects to a Microsoft SQL Server database server and generates
business and data layers in C#. It also generates Stored Procedures for DML
operations.

Business layer

Tier Generator generates code in two layers (business and data). In the business layer,
it generates several classes for each table in the database. For example, if our database
contains the table Employee, Tier Generator will generate the following files:

 Employee
 EmployeeKeys
 EmployeeFactory
The Employee (business object) class contains the declaration of all instance fields
along with their properties. It also overrides the AddValidationRules method to
associate the validation rules with the properties of the business object, and it
contains an enum of all the fields.

public class Employee : BusinessObjectBase
{
    #region InnerClass

    public enum EmployeeFields
    {
        EmployeeID, Name, Password, Email, TeamID, DepartmentID, IsAdmin
    }

    #endregion

    #region Data Members

    int _employeeID;
    string _name;
    string _password;
    string _email;
    int _teamID;
    int _departmentID;
    bool _isAdmin;

    #endregion

    #region Properties

    public int EmployeeID
    {
        get { return _employeeID; }
        set
        {
            if (_employeeID != value)
            {
                _employeeID = value;
                PropertyHasChanged("EmployeeID");
            }
        }
    }

    public string Name
    {
        get { return _name; }
        set
        {
            if (_name != value)
            {
                _name = value;
                PropertyHasChanged("Name");
            }
        }
    }
    .
    .
    .

    #endregion

    #region Validation

    internal override void AddValidationRules()
    {
        ValidationRules.AddRules(new Validation.ValidateRuleNotNull("EmployeeID", "EmployeeID"));
        ValidationRules.AddRules(new Validation.ValidateRuleNotNull("Name", "Name"));
        ValidationRules.AddRules(new Validation.ValidateRuleStringMaxLength("Name", "Name", 50));
        ValidationRules.AddRules(new Validation.ValidateRuleStringMaxLength("Password", "Password", 50));
        ValidationRules.AddRules(new Validation.ValidateRuleStringMaxLength("Email", "Email", 100));
        ValidationRules.AddRules(new Validation.ValidateRuleNotNull("TeamID", "TeamID"));
        ValidationRules.AddRules(new Validation.ValidateRuleNotNull("DepartmentID", "DepartmentID"));
        ValidationRules.AddRules(new Validation.ValidateRuleNotNull("IsAdmin", "IsAdmin"));
    }

    #endregion
}
The EmployeeKeys (business object keys) class contains the list of primary keys of
the table.

public class EmployeeKeys
{
    #region Data Members

    int _employeeID;

    #endregion

    #region Constructor

    public EmployeeKeys(int employeeID)
    {
        _employeeID = employeeID;
    }

    #endregion

    #region Properties

    public int EmployeeID
    {
        get { return _employeeID; }
    }

    #endregion
}

The EmployeeFactory (business factory) class contains the methods for the
Insert, Delete, Update, and Select operations. It provides the following
methods for the DML operations:

 public bool Insert(Employee businessObject)
 public bool Update(Employee businessObject)
 public Employee GetByPrimaryKey(EmployeeKeys keys)
 public List<Employee> GetAll()
 public List<Employee> GetAllBy(Employee.EmployeeFields fieldName, object value)
 public bool Delete(EmployeeKeys keys)
 public bool Delete(Employee.EmployeeFields fieldName, object value)
The factory class performs the DML operations with the help of the data layer.

public class EmployeeFactory
{
    #region Data Members

    EmployeeSql _dataObject = null;

    #endregion

    #region Constructor

    public EmployeeFactory()
    {
        _dataObject = new EmployeeSql();
    }

    #endregion

    #region Public Methods

    public bool Insert(Employee businessObject)
    {
        if (!businessObject.IsValid)
        {
            throw new InvalidBusinessObjectException(
                businessObject.BrokenRulesList.ToString());
        }

        return _dataObject.Insert(businessObject);
    }

    public bool Update(Employee businessObject)
    {
        if (!businessObject.IsValid)
        {
            throw new InvalidBusinessObjectException(
                businessObject.BrokenRulesList.ToString());
        }

        return _dataObject.Update(businessObject);
    }

    public Employee GetByPrimaryKey(EmployeeKeys keys)
    {
        return _dataObject.SelectByPrimaryKey(keys);
    }

    public List<Employee> GetAll()
    {
        return _dataObject.SelectAll();
    }

    public List<Employee> GetAllBy(Employee.EmployeeFields fieldName, object value)
    {
        return _dataObject.SelectByField(fieldName.ToString(), value);
    }

    public bool Delete(EmployeeKeys keys)
    {
        return _dataObject.Delete(keys);
    }

    public bool Delete(Employee.EmployeeFields fieldName, object value)
    {
        return _dataObject.DeleteByField(fieldName.ToString(), value);
    }

    #endregion
}

Data Layer

The data access file generated by the Tier Generator contains the methods for DML
operations. It uses Stored Procedures for DML operations. The factory class methods
call the data layer methods for insertion and deletion.

class EmployeeSql : DataLayerBase
{

#region Public Methods

/// <summary>
/// insert new row in the table
/// </summary>
/// <param name="businessObject">business object</param>
/// <returns>true of successfully insert</returns>
public bool Insert(Employee businessObject)
{
SqlCommand sqlCommand = new SqlCommand();
sqlCommand.CommandText = "dbo.[sp_Employee_Insert]";
sqlCommand.CommandType = CommandType.StoredProcedure;

// Use base class' connection object
sqlCommand.Connection = MainConnection;

try
{
sqlCommand.Parameters.Add(new SqlParameter("@EmployeeID", SqlDbType.Int,
4,
ParameterDirection.Output,
false, 0, 0, "",
DataRowVersion.Proposed,
businessObject.EmployeeID));
sqlCommand.Parameters.Add(new SqlParameter("@Name", SqlDbType.NVarChar,
50, ParameterDirection.Input,
false, 0, 0, "",
DataRowVersion.Proposed,
businessObject.Name));
sqlCommand.Parameters.Add(new SqlParameter("@password",
SqlDbType.NVarChar,
50, ParameterDirection.Input,
false, 0, 0, "",
DataRowVersion.Proposed,
businessObject.Password));
sqlCommand.Parameters.Add(new SqlParameter("@Email", SqlDbType.NVarChar,
100, ParameterDirection.Input,
false, 0, 0, "",
DataRowVersion.Proposed,
businessObject.Email));
sqlCommand.Parameters.Add(new SqlParameter("@TeamID", SqlDbType.Int,
4, ParameterDirection.Input,
false, 0, 0, "",
DataRowVersion.Proposed,
businessObject.TeamID));
sqlCommand.Parameters.Add(new SqlParameter("@DepartmentID",
SqlDbType.Int,
4, ParameterDirection.Input,
false, 0, 0, "",
DataRowVersion.Proposed,
businessObject.DepartmentID));
sqlCommand.Parameters.Add(new SqlParameter("@IsAdmin", SqlDbType.Bit,
1, ParameterDirection.Input,
false, 0, 0, "",
DataRowVersion.Proposed,
businessObject.IsAdmin));
MainConnection.Open();

sqlCommand.ExecuteNonQuery();
businessObject.EmployeeID =
(int)sqlCommand.Parameters["@EmployeeID"].Value;

return true;
}
catch(Exception ex)
{
throw new Exception("Employee::Insert::Error occured.", ex);
}
finally
{
MainConnection.Close();
sqlCommand.Dispose();
}
}

#endregion
}

How to use

The code generated by the Tier Generator is easy to use. Open the generated project
in Visual Studio 2005 and compile it. Run the stored procedure script generated by
the Tier Generator against the database; you can find the SQL script file in the
generated folder.

Add a new Windows/web project to the existing solution and add a reference to the
DLL of the generated code. Add an app.config for Windows applications or a
web.config for web applications, and copy the connection string from the generated
app.config file; you will find this file in the generated folder.

<appSettings>
<add key="Main.ConnectionString"
value="Data Source=localhost;Initial Catalog=School;
User Id=sa;Password=sa" />
</appSettings>
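For reference, this is roughly how that key can be read at run time from application code (a minimal sketch; the generated DataLayerBase may read it in its own way internally):

using System.Configuration;

// Reads the connection string the Tier Generator placed in app.config/web.config
string connectionString =
    ConfigurationManager.AppSettings["Main.ConnectionString"];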

Here is the code sample for inserting a new record:

public void AddNewRecord()
{
Employee emp = new Employee();
emp.EmployeeID = 1;
emp.FirstName = "Shakeel";
emp.LastName = "Iqbal";
.
.
.
.

EmployeeFactory empFact = new EmployeeFactory();

empFact.Insert(emp);
}

The code sample for selecting all the records:

public void SelectAll()
{
EmployeeFactory empFact = new EmployeeFactory();
List<Employee> list = empFact.GetAll();

dataGrid1.DataSource = list;
}
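The other factory methods follow the same pattern. Here is a minimal sketch (using only the EmployeeFactory methods listed earlier; the field values are placeholders) of updating and then deleting a record by its primary key:

public void UpdateAndDeleteRecord()
{
    EmployeeFactory empFact = new EmployeeFactory();

    // Load a single record by its primary key
    Employee emp = empFact.GetByPrimaryKey(new EmployeeKeys(1));
    emp.Email = "new.email@company.com";

    // Update validates the business object before calling the data layer
    empFact.Update(emp);

    // Delete the record by its primary key
    empFact.Delete(new EmployeeKeys(1));
}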

Future enhancements

I have several enhancements planned for the next version of the Tier Generator. In
that version, I will improve the business and data layers, and I will also provide the
following features:

 Generate Windows application.
 Generate Web application.
 Generate Web Services.

License

Introduction
NArrange is a stand-alone, configurable .NET code beautification tool that
automatically organizes code members and elements within .NET classes. It allows
developers to easily sort class contents according to their style and layout
conventions. NArrange works by parsing source code files into a document object
model, arranging the elements, then rewriting the arranged source code.

NArrange helps reduce the amount of time developers spend arranging members
within source code files, and when used as part of check-in procedures, can also help
reduce source code repository conflicts. With NArrange, developers don't need to
worry about where they place a new member definition in a class; they can just type
away, and run NArrange prior to compilation. After formatting, the new member will
be automatically moved to the appropriate location in the source file. NArrange is not
only a great time saver, but it also helps enforce coding style standards across a
team when a common configuration file is used.

Obtaining NArrange

NArrange is an open source tool, and can be downloaded here.

Using NArrange

To demonstrate the common usage scenarios of NArrange, we'll start by creating a
new project in our C# editor, in this case, Visual Studio®, and add a stub class with
a constructor. Note that since NArrange is a stand-alone tool without any IDE
dependencies, it can be used along with any code editor or within automated build
processes.
Figure: New Project with a Class Stub

With the project saved, NArrange can then be run against the source code file,
project, or solution. For running NArrange from the command line, refer to Command
Line Usage, or to integrate with Visual Studio® as an external tool, see Microsoft®
Visual Studio® Setup. When running NArrange with an editor, it is recommended to
backup or commit your source first, or use the built-in backup feature, which makes
reverting formatting changes a snap. Either way, please be sure to protect your time
investment in your code prior to running the tool.
Figure: Running NArrange as a VS External Tool

After running NArrange, the constructor we added to the stub class will automatically
be enclosed in a "Constructors" region (see image below).

Figure: After Running NArrange Against the Stub Class

Since our stub class doesn't yet have any functionality, we'll then add a new method,
called Demo, that writes a friendly message to the console. Note that when adding
the new method, we're not taking any care with regards to its placement within the
class. In this case, we are adding it as the first member just inside the class body,
before the constructor.
Figure: Adding a New Member

After adding the new member, we'll run NArrange again against the source file or
project. The new method will be automatically moved to the proper location in the
file, and enclosed in an appropriate region (see below). The default configuration for
NArrange also sorts members within groups or regions, alphabetically, by member
name.
Figure: Rearranged Class with New Method

Scrolling up to the top of the source file, you will also notice that NArrange
automatically encloses the header comments in a region, and applies grouping and
sorting to using directives (see image below).
Figure: Header Region and Sorted using Directives

Command Line Usage

To arrange a file, just run the following:

>narrange-console <source file> [optional output file]

If an output file is not specified, the original source file will be overwritten.
Alternatively, you can run NArrange against a C# or VB project file or solution. When
arranging a project or solution, the original source files will be overwritten.

The command line usage is as follows:

narrange-console <input> [output] [/c:configuration] [/b] [/r] [/t]

input    Specifies the source code file, project or solution to arrange.

output   For a single source file, specifies the output file to write arranged code to.
         [Optional] If not specified the input source file will be overwritten.

/c       Configuration - Specifies the XML configuration file to use.
         [Optional] If not specified the default configuration will be used.

/b       Backup - Specifies to create a backup before arranging.
         [Optional] If not specified, no backup will be created. Only valid if an
         output file is not specified and cannot be used in conjunction with Restore.

/r       Restore - Restores arranged files from the latest backup.
         [Optional] When this flag is provided, no files will be arranged. Only valid
         if an output file is not specified and cannot be used in conjunction with Backup.

/t       Trace - Detailed logging.
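For example (the file and configuration names below are placeholders):

>narrange-console MyClass.cs
>narrange-console MySolution.sln /c:MyNArrangeConfig.xml /b
>narrange-console MySolution.sln /r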

Microsoft® Visual Studio® Setup

1. From the Tools menu, select External Tools.
2. Add a new entry for NArrange.
o For the command, select the location of narrange-console.exe.
o For arguments, pass the solution, project, or file path macro
depending on your preference (see the example below). When choosing this, you may
want to take into consideration your revision control system (i.e., whether or
not files are marked as read-only). NArrange cannot arrange read-only
files. It is recommended to pass the /b argument to specify that an
automatic backup be created in your user temp directory.
3. It is also recommended to set up a restore command using the same
parameters, replacing /b with /r. However, be careful when using restore, as
it will revert any edits made to a file since the last NArrange run.
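For instance, a typical external tool entry could look like the following. The command path is a placeholder, and $(ItemPath) is one of the standard Visual Studio external-tool macros; use the project or solution macro instead if you prefer to arrange more than the current file:

Title:      NArrange Current File
Command:    C:\Tools\NArrange\narrange-console.exe
Arguments:  $(ItemPath) /b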
Creating a Custom Configuration File

By default, NArrange uses a configuration that is, for the most part, compatible with
the member ordering rules defined by the Microsoft StyleCop source analysis tool. An
exception to this is file header regions.

To override the default arrangement of source code members, a custom
configuration file can be used. To create your own custom configuration file, you
should start by creating a copy of DefaultConfig.xml and renaming it appropriately.
Note that NArrange does not read DefaultConfig.xml, but rather it is provided as an
example for creating a custom configuration. The actual default configuration is
embedded within the NArrange core assembly.

The NArrange Configuration Editor, shown below, can be used to ease editing of the
XML configuration. It can be launched using narrange-config.exe.
Figure: Configuration Editor

The figure below shows sections from a sample XML configuration file. NArrange uses
the ordering of Element and Region tags in the configuration file to define the
ordering of code members in the output source files.
Figure: Sample NArrange Configuration File

By defining a Sort attribute for elements in a region, NArrange will sort those
elements alphabetically in the region by the specified attribute. Valid element
attributes are:

 Name - Returns the code element name.
 Access - Returns the element access.

Valid comparison value strings for this attribute are:

o None
o Public
o Protected
o Internal
o Protected, Internal
 Modifier - Returns element modifiers as a comma separated list.

Valid comparison value strings for this attribute are:

o Abstract
o Sealed
o Static
o Unsafe
o Virtual
o Override
o New
o ReadOnly
o Constant
o External
o Partial
 ElementType - Returns the NArrange element type.

Valid comparison value strings for this attribute are:

o Comment
o Attribute
o Using
o Namespace
o Region
o ConditionDirective
o Field
o Constructor
o Property
o Method
o Event
o Delegate
o Type
 Type - Returns the return type of the element (e.g. 'string').

For Type elements, this returns the sub-type: 'Class', 'Structure', 'Interface',
'Enum' or 'Module' (VB only).

 Attributes - Returns a comma separated list of the names of all attributes
associated with an element.

Attributes can also be qualified with a scope. For example $(Element.Name) or
$(Parent.ElementType).

You may notice that filter conditions can be applied to elements, which are used to
filter elements into the appropriate group or region. Filter expressions can use the
element attributes above using the $(AttributeHere) syntax. Filter expressions
use string comparison, and the following operators are available:

Operator Name              Syntax
Equality                   ==
Inequality                 !=
Contains                   :
Regular Expression match   =~
Logical Or                 Or
Logical And                And
Negate                     !(inner expression)

Strings in element expressions should be enclosed with single quotes.
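For example, a condition along the following lines (purely illustrative, composed only from the attributes and operators listed above) would match public methods whose names do not contain the text 'Helper':

$(Element.ElementType) == 'Method' And $(Element.Access) == 'Public' And !($(Element.Name) : 'Helper')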

Limitations

One of the biggest limitations with the current version of NArrange is that it cannot
handle many scenarios involving preprocessor directives (#pragma, #define, etc.).
Thus, you will get a warning message indicating that files containing unhandled
preprocessor directives cannot be parsed. Note that this is not an issue if the
preprocessor directive is within a member body.

For conditional compilation preprocessor directives (#if, #else, etc.), basic support
is available. So long as the conditional compilation preprocessor directive completely
encloses types/members in their entirety (with all attributes and comments),
NArrange will preserve the directive and process any enclosed elements. However,
the scenario where a directive starts at the class level and intersects a member
declaration in any manner is not supported, and will result in a parsing error.

Supported Languages

NArrange currently supports organization of C# and Visual Basic .NET source code
files. Although NArrange has been built and tested under the .NET Framework 2.0, it
includes support for many 3.0 and 3.5 language features such as:

 Partial Methods
 Extension Methods
 Object Initializers

Many 3.x language features are excluded from this list, such as LINQ and Lambda
Expressions. However, NArrange does not currently parse to the statement level of
constructors, methods, and properties, so these features are inherently supported.
NArrange has been designed with the intention of supporting many .NET languages.
If you are interested in implementing a parser/writer for an additional CLI language,
please contact an NArrange contributor through the SourceForge project site.
NArrange is still in the early stages of development, so any suggestions to help make
the framework more compatible with other languages would be greatly appreciated.

TOC

 Introduction
 Background
 Using the code
 SQL2ClassDemo
 SQL Server properties
 SQL Server treeview
 Output settings
 Source code settings
 Progress
 Using CodeDom and its limits (17 Jul)
 About
 Sample of HumanResources Employee class from the AdventureWorks
database (31 Jul)
 References
 History (9 Jan. 2009)

Introduction

Having worked with SQL-DMO, I found some limitations, so I decided to make my
own version of it, called SQLReader. It is a read-only DLL that reads SQL Server
metadata and puts it into classes that one can work with. SQLReader cannot change
anything on the SQL Server.

This demo consists of two projects:

 SQL2ClassDemo
 SQLReader

SQL2ClassDemo is the actual code generator, and the SQLReader is the code
generator information supplier.

Background

I work a lot with database-rich applications, and I needed an app that could make
class shells from a database table.
Using the code

Loading databases from a SQL Server instance is a straightforward process. Here, I
will show how to use the SQLReader, and afterwards, I will show how to use the
information to make classes from tables.

Here is how to load the server databases:

SQLServer SqlSrv = new SQLServer();

//Connecting to the local server
SqlSrv.ConnectionSetting.DataSource = Environment.MachineName;
SqlSrv.ConnectionSetting.IntegratedSecurity = false;
SqlSrv.ConnectionSetting.UserID = "sa";
SqlSrv.ConnectionSetting.Password = "P@ssw0rd";
SqlSrv.LoadDatabases();

Here is how we load just one database:

SqlSrv.LoadDatabases("AdventureWorks");

This is how we loop through the objects in the SQL Server database:

foreach (Database db in SqlSrv.Databases.Items)
{
    foreach (Table t in db.Tables.Items)
    {
        foreach (Column c in t.Columns.Items)
        {
            //Do something with the column...
        }

        foreach (Index i in t.Indexes.Items)
        {
            //Do something with the Index...
        }

        foreach (foreign_key fk in t.ForeignKeys.Items)
        {
            //Do something with the foreign_key...
        }

        foreach (KeyConstraint kc in t.Keys.Items)
        {
            //Do something with the KeyConstraint...
        }
    }
}

A full documentation of the SQLReader DLL is available in the ZIP file. Please look
into it. Or look here.
SQL2ClassDemo

1. Type the name or the IP of the server you want to connect to.
2. Choose if you want to connect with Integrated Security or not.
3. If you do not use the Integrated Security, please type the username and
password.
4. If you want to load only one database, then type the name of the database
and check the box.
5. Then, click Connect.
SQL Server properties

This section shows you how to list the SQLReader properties.


SQL Server treeview

In this section, you can select or deselect the databases / tables you want to make
an output file (source code file) from. You will also see the properties of the
SQLReader class for the selected item in the tree.
Output settings

This section allows you to set the output directory, where the source code files will
appear when done creating them.
Source code settings

In this section, you can set some basic settings that the code generator will apply
when creating the source code files. In the language group box, set which language
you want the source code to be in. Here, you can also set prefixes and suffixes for
the field names, if you desire that.

In the Class Setting group, some other values can be set, for example, which
modifiers the fields and properties shall have.
 Add Comments, will add information to the field about the SQL object, e.g.,
what data type the column has in the database and so on.
 Map MS_Description, will add the Description attribute to the property
with the same text as the description in the SQL Server Management Studio.
 Try to correct propertyname, will try to correct the name of the column
from the SQL table. Example: orderId = OrderId or Order id = Orderid.
 Try to correct property displayname, will try to correct the column name
from the SQL table. Example: UserId = User Id or OrderMadeByUser =
Order Made By User.
 Create Reference Object Property, will take any foreign keys related to
the table, and create properties that assign the corresponding class types of
the related table in the foreign key.
 User Databasename as Namespace, as it says, uses the database name as
the namespace name.
 Add Schema to the namespace, can be helpful if you use multiple schemas
within one database, and have the same table names under each schema.
 Class namespace imports, here you can select what kind of namespaces
you want to import into the class.
 Class BaseTypes, add or remove some base type objects to the class.
Progress

In this section, just click Create, and you are on your way.
You can just double click on the file to open it. Or navigate to the output folder,
which should look like this if you have selected the AdventureWorks database:
Using CodeDom

Using CodeDom to create source code has its limitations. Here's a short list of some
of the limits one may encounter. CodeDom cannot create:

 while loops (when trying to make one, CodeDom actually creates a for loop,
which looks very nasty)
 using statements
 foreach statements
 Static classes; instead it makes a sealed abstract class
 Variable plus-plus increment (i++); instead it makes (i = (i + 1))
 Comments on the same line as the code
 Child namespaces
But there are ways to get around this. I am working on a CodeDom cleaner
project which will correct this and produce nicer-looking code. CodeDom
version:

//If statements
if(CountVarUp)
{
i = (i + 1);
}

//For loop
for(int i = 0; i < 100; (i = (i + 1)))
{
//do something...
}
A programmer's version:
//If statements
if(CountVarUp)
i++;

//For loop
for(int i = 0; i < 100; i++)
//do something...

About

In this section, I added some basic information about this little application:
Sample of HumanResources Employee class from the
AdventureWorks database

Class layout:

 AdventureWorks.HumanResources

 public void Select(string ConnectionString)


 public int Insert(string ConnectionString)
 public int Update(string ConnectionString)
 public int Delete(string ConnectionString)
 public string[] GetSqlCommandStrings()
 private void AddFromRecordSet(SqlDataReader rs)
 private SqlParameter[] GetSqlParameters()

 internal static string _SQL_Select


 internal static string _SQL_Insert
 internal static string _SQL_Update
 internal static string _SQL_Delete
 public int EmployeeID
 public string NationalIDNumber
 public int ContactID
 public string LoginID
 public int ManagerID
 public string Title
 public DateTime BirthDate
 public string MaritalStatus
 public string Gender
 public DateTime HireDate
 public bool SalariedFlag
 public short VacationHours
 public short SickLeaveHours
 public bool CurrentFlag
 public Guid rowguid
 public DateTime ModifiedDate
 public Contact ContactID_Contact
 public Employee ManagerID_Employee
 public EmployeeAddressCollection EmployeeAddressCollection
 public EmployeeDepartmentHistoryCollection
EmployeeDepartmentHistoryCollection
 public EmployeePayHistoryCollection
EmployeePayHistoryCollection
 public JobCandidateCollection JobCandidateCollection
 public PurchaseOrderHeaderCollection
PurchaseOrderHeaderCollection
 public SalesPersonCollection SalesPersonCollection

[Description("Employee information such as salary, department, and title.")]


public class Employee : Object
{

#region Static SQL String Memebers


/// This field represents the full SELECT string for the table Employee,
/// with the WHERE clause.
internal static string _SQL_Select = "@"SELECT [EmployeeID],
[NationalIDNumber], " +
"[ContactID], [LoginID], [ManagerID], " +
"[Title], [BirthDate], [MaritalStatus], [Gender], [HireDate], " +
"[SalariedFlag], [VacationHours], [SickLeaveHours], " +
"[CurrentFlag], [rowguid], [ModifiedDate] FROM " +
"[HumanResources].[Employee] WHERE [EmployeeID]=@EmployeeID ";

/// This field represents the full INSERT INTO string for the table
/// Employee.
internal static string _SQL_Insert = @"INSERT INTO " +
"[HumanResources].[Employee] ([EmployeeID], [NationalIDNumber]," +
" [ContactID], [LoginID], [ManagerID], [Title], [BirthDate], " +
"[MaritalStatus], [Gender], [HireDate], [SalariedFlag], " +
"[VacationHours], [SickLeaveHours], [CurrentFlag], [rowguid], " +
"[ModifiedDate]) VALUES([EmployeeID], [NationalIDNumber], " +
"[ContactID], [LoginID], [ManagerID], [Title], [BirthDate], " +
"[MaritalStatus], [Gender], [HireDate], [SalariedFlag], " +
"[VacationHours], [SickLeaveHours], [CurrentFlag], [rowguid],
[ModifiedDate]) ";

/// This field represents the full UPDATE string for the table Employee,
/// with the WHERE clause.
internal static string _SQL_Update =
"@"UPDATE [HumanResources].[Employee] SET [EmployeeID] = @EmployeeID, " +
"[NationalIDNumber] = @NationalIDNumber, [ContactID] = " +
"@ContactID, [LoginID] = @LoginID, [ManagerID] = @ManagerID, " +
"[Title] = @Title, [BirthDate] = @BirthDate, " +
"[MaritalStatus] = @MaritalStatus, [Gender] = @Gender, [HireDate] =
@HireDate," +
" [SalariedFlag] = @SalariedFlag, [VacationHours] = @VacationHours, " +
"[SickLeaveHours] = @SickLeaveHours, [CurrentFlag] = " +
"@CurrentFlag, [rowguid] = @rowguid, [ModifiedDate] = " +
"@ModifiedDate WHERE [EmployeeID]=@EmployeeID ";

/// This field represents the DELETE string for the table Employee,
/// with the WHERE clause.
internal static string _SQL_Delete =
    "DELETE FROM [HumanResources].[Employee] WHERE [EmployeeID]=@EmployeeID ";
#endregion

#region Table Members

/// SQL Type:int - Primary key for Employee records.
private int _EmployeeID;
[Description("Primary key for Employee records.")]
[DisplayName("Employee ID")]
[Category("Primary Key")]
public int EmployeeID
{
get
{
try
{
return _EmployeeID;
}
catch (System.Exception err)
{
throw new Exception("Error getting EmployeeID", err);
}
}
set
{
try
{
_EmployeeID = value;
}
catch (System.Exception err)
{
throw new Exception("Error setting EmployeeID", err);
}
}
}

/// SQL Type:nvarchar - Unique national identification number such


/// as a social security number.
private string _NationalIDNumber;

[Description("Unique national identification number" +


" such as a social security number.")]
[DisplayName("National IDNumber")]
[Category("Column")]
public string NationalIDNumber
{
get
{
try
{
return _NationalIDNumber;
}
catch (System.Exception err)
{
throw new Exception("Error getting NationalIDNumber", err);
}
}
set
{
try
{
if ((value.Length <= 30))
{
_NationalIDNumber = value;
}
else
{
throw new OverflowException("Error setting" +
" NationalIDNumber, " +
"Length of value is to long. Maximum Length: 30");
}
}
catch (System.Exception err)
{
throw new Exception("Error setting NationalIDNumber", err);
}
}
}

/// SQL Type:int - Identifies the employee in the Contact table.


/// Foreign key to Contact.ContactID.
private int _ContactID;

[Description("Identifies the employee in the Contact table." +


" Foreign key to Contact.ContactID.")]
[DisplayName("Contact ID")]
[Category("Foreign Key")]
public int ContactID
{
get
{
try
{
return _ContactID;
}
catch (System.Exception err)
{
throw new Exception("Error getting ContactID", err);
}
}
set
{
try
{
_ContactID = value;
}
catch (System.Exception err)
{
throw new Exception("Error setting ContactID", err);
}
}
}

/// SQL Type:nvarchar - Network login.


private string _LoginID;

[Description("Network login.")]
[DisplayName("Login ID")]
[Category("Column")]
public string LoginID
{
get
{
try
{
return _LoginID;
}
catch (System.Exception err)
{
throw new Exception("Error getting LoginID", err);
}
}
set
{
try
{
if ((value.Length <= 512))
{
_LoginID = value;
}
else
{
throw new OverflowException("Error " +
"setting LoginID, " +
"Length of value is to long. " +
"Maximum Length: 512");
}
}
catch (System.Exception err)
{
throw new Exception("Error setting LoginID", err);
}
}
}

/// SQL Type:int - Manager to whom the employee is assigned.


/// Foreign Key to Employee.M
private int _ManagerID;

[Description("Manager to whom the employee is " +


"assigned. Foreign Key to Employee.M")]
[DisplayName("Manager ID")]
[Category("Foreign Key")]
public int ManagerID
{
get
{
try
{
return _ManagerID;
}
catch (System.Exception err)
{
throw new Exception("Error getting ManagerID", err);
}
}
set
{
try
{
_ManagerID = value;
}
catch (System.Exception err)
{
throw new Exception("Error setting ManagerID", err);
}
}
}

/// SQL Type:nvarchar - Work title such as Buyer or Sales


/// Representative.
private string _Title;

[Description("Work title such as Buyer or Sales Representative.")]


[DisplayName("Title")]
[Category("Column")]
public string Title
{
get
{
try
{
return _Title;
}
catch (System.Exception err)
{
throw new Exception("Error getting Title", err);
}
}
set
{
try
{
if ((value.Length <= 100))
{
_Title = value;
}
else
{
throw new OverflowException("Error " +
"setting Title, Length of value " +
"is to long. Maximum Length: 100");
}
}
catch (System.Exception err)
{
throw new Exception("Error setting Title", err);
}
}
}
/// SQL Type:datetime - Date of birth.
private System.DateTime _BirthDate;

[Description("Date of birth.")]
[DisplayName("Birth Date")]
[Category("Column")]
public System.DateTime BirthDate
{
get
{
try
{
return _BirthDate;
}
catch (System.Exception err)
{
throw new Exception("Error getting BirthDate", err);
}
}
set
{
try
{
_BirthDate = value;
}
catch (System.Exception err)
{
throw new Exception("Error setting BirthDate", err);
}
}
}

/// SQL Type:nchar - M = Married, S = Single


private string _MaritalStatus;

[Description("M = Married, S = Single")]


[DisplayName("Marital Status")]
[Category("Column")]
public string MaritalStatus
{
get
{
try
{
return _MaritalStatus;
}
catch (System.Exception err)
{
throw new Exception("Error getting MaritalStatus", err);
}
}
set
{
try
{
if ((value.Length <= 2))
{
_MaritalStatus = value;
}
else
{
throw new OverflowException("Error " +
"setting MaritalStatus, " +
"Length of value is to long. Maximum Length: 2");
}
}
catch (System.Exception err)
{
throw new Exception("Error setting MaritalStatus", err);
}
}
}

/// SQL Type:nchar - M = Male, F = Female


private string _Gender;

[Description("M = Male, F = Female")]


[DisplayName("Gender")]
[Category("Column")]
public string Gender
{
get
{
try
{
return _Gender;
}
catch (System.Exception err)
{
throw new Exception("Error getting Gender", err);
}
}
set
{
try
{
if ((value.Length <= 2))
{
_Gender = value;
}
else
{
throw new OverflowException("Error setting Gender, " +
"Length of value is to long. Maximum Length: 2");
}
}
catch (System.Exception err)
{
throw new Exception("Error setting Gender", err);
}
}
}

/// SQL Type:datetime - Employee hired on this date.


private System.DateTime _HireDate;

[Description("Employee hired on this date.")]


[DisplayName("Hire Date")]
[Category("Column")]
public System.DateTime HireDate
{
get
{
try
{
return _HireDate;
}
catch (System.Exception err)
{
throw new Exception("Error getting HireDate", err);
}
}
set
{
try
{
_HireDate = value;
}
catch (System.Exception err)
{
throw new Exception("Error setting HireDate", err);
}
}
}

/// SQL Type:Flag - Job classification. 0 = Hourly, not exempt from


/// collective bargaining. 1 = Salaried, exempt from collective bargaining.
private bool _SalariedFlag;

[Description("Job classification. 0 = Hourly, " +


"not exempt from collective bargaining." +
"1 = Salaried, exempt from collective bargaining.")]
[DisplayName("Salaried Flag")]
[Category("Column")]
public bool SalariedFlag
{
get
{
try
{
return _SalariedFlag;
}
catch (System.Exception err)
{
throw new Exception("Error getting SalariedFlag", err);
}
}
set
{
try
{
_SalariedFlag = value;
}
catch (System.Exception err)
{
throw new Exception("Error setting SalariedFlag", err);
}
}
}
/// SQL Type:smallint - Number of available vacation hours.
private short _VacationHours;

[Description("Number of available vacation hours.")]


[DisplayName("Vacation Hours")]
[Category("Column")]
public short VacationHours
{
get
{
try
{
return _VacationHours;
}
catch (System.Exception err)
{
throw new Exception("Error getting VacationHours", err);
}
}
set
{
try
{
_VacationHours = value;
}
catch (System.Exception err)
{
throw new Exception("Error setting VacationHours", err);
}
}
}

/// SQL Type:smallint - Number of available sick leave hours.


private short _SickLeaveHours;

[Description("Number of available sick leave hours.")]


[DisplayName("Sick Leave Hours")]
[Category("Column")]
public short SickLeaveHours
{
get
{
try
{
return _SickLeaveHours;
}
catch (System.Exception err)
{
throw new Exception("Error getting SickLeaveHours", err);
}
}
set
{
try
{
_SickLeaveHours = value;
}
catch (System.Exception err)
{
throw new Exception("Error setting SickLeaveHours", err);
}
}
}

/// SQL Type:Flag - 0 = Inactive, 1 = Active


private bool _CurrentFlag;

[Description("0 = Inactive, 1 = Active")]


[DisplayName("Current Flag")]
[Category("Column")]
public bool CurrentFlag
{
get
{
try
{
return _CurrentFlag;
}
catch (System.Exception err)
{
throw new Exception("Error getting CurrentFlag", err);
}
}
set
{
try
{
_CurrentFlag = value;
}
catch (System.Exception err)
{
throw new Exception("Error setting CurrentFlag", err);
}
}
}

/// SQL Type:uniqueidentifier - ROWGUIDCOL number uniquely identifying


/// the record. Used to support a merge replication sample.
private System.Guid _rowguid;

[Description(
"ROWGUIDCOL number uniquely identifying " +
"the record. Used to support a merge" +
"replication sample.")]
[DisplayName("rowguid")]
[Category("Column")]
public System.Guid rowguid
{
get
{
try
{
return _rowguid;
}
catch (System.Exception err)
{
throw new Exception("Error getting rowguid", err);
}
}
set
{
try
{
_rowguid = value;
}
catch (System.Exception err)
{
throw new Exception("Error setting rowguid", err);
}
}
}

/// SQL Type:datetime - Date and time the record was last updated.
private System.DateTime _ModifiedDate;
[Description("Date and time the record was last updated.")]
[DisplayName("Modified Date")]
[Category("Column")]
public System.DateTime ModifiedDate
{
get
{
try
{
return _ModifiedDate;
}
catch (System.Exception err)
{
throw new Exception("Error getting ModifiedDate", err);
}
}
set
{
try
{
_ModifiedDate = value;
}
catch (System.Exception err)
{
throw new Exception("Error setting ModifiedDate", err);
}
}
}
#endregion

#region Related Objects


/// Represents the foreign key object
private Contact _ContactID_Contact;

[Description("Represents the foreign key object of the type Contact")]


public Contact ContactID_Contact
{
get
{
try
{
return _ContactID_Contact;
}
catch (System.Exception err)
{
throw new Exception("Error getting ContactID_Contact", err);
}
}
set
{
try
{
_ContactID_Contact = value;
_ContactID = _ContactID_Contact.ContactID;
}
catch (System.Exception err)
{
throw new Exception("Error setting ContactID_Contact", err);
}
}
}

/// Represents the foreign key object


private Employee _ManagerID_Employee;

[Description("Represents the foreign key object of the type Employee")]


public Employee ManagerID_Employee
{
get
{
try
{
return _ManagerID_Employee;
}
catch (System.Exception err)
{
throw new Exception("Error getting ManagerID_Employee", err);
}
}
set
{
try
{
_ManagerID_Employee = value;
_ManagerID = _ManagerID_Employee.EmployeeID;
}
catch (System.Exception err)
{
throw new Exception("Error setting ManagerID_Employee", err);
}
}
}
#endregion

#region Related Object Collections


/// Represents the foreign key object
private EmployeeAddressCollection _EmployeeAddressCollection;

[Description("Represents the foreign key relation." +


" This is an Collection of Employee.")]
public EmployeeAddressCollection EmployeeAddressCollection
{
get
{
try
{
return _EmployeeAddressCollection;
}
catch (System.Exception err)
{
throw new Exception("Error getting EmployeeAddressCollection",
err);
}
}
set
{
try
{
_EmployeeAddressCollection = value;
}
catch (System.Exception err)
{
throw new Exception("Error setting EmployeeAddressCollection",
err);
}
}
}

/// Represents the foreign key object


private EmployeeDepartmentHistoryCollection
_EmployeeDepartmentHistoryCollection;

[Description("Represents the foreign key relation." +


" This is an Collection of Employee.")]
public EmployeeDepartmentHistoryCollection
EmployeeDepartmentHistoryCollection
{
get
{
try
{
return _EmployeeDepartmentHistoryCollection;
}
catch (System.Exception err)
{
throw new Exception("Error getting Employee" +
"DepartmentHistoryCollection", err);
}
}
set
{
try
{
_EmployeeDepartmentHistoryCollection = value;
}
catch (System.Exception err)
{
throw new Exception("Error setting Employee" +
"DepartmentHistoryCollection", err);
}
}
}

/// Represents the foreign key object


private EmployeePayHistoryCollection _EmployeePayHistoryCollection;

[Description("Represents the foreign key relation." +


" This is an Collection of Employee.")]
public EmployeePayHistoryCollection EmployeePayHistoryCollection
{
get
{
try
{
return _EmployeePayHistoryCollection;
}
catch (System.Exception err)
{
throw new Exception("Error getting
EmployeePayHistoryCollection", err);
}
}
set
{
try
{
_EmployeePayHistoryCollection = value;
}
catch (System.Exception err)
{
throw new Exception("Error setting
EmployeePayHistoryCollection", err);
}
}
}

/// Represents the foreign key object


private JobCandidateCollection _JobCandidateCollection;

[Description("Represents the foreign key relation. This is an Collection of


Employee.")]
public JobCandidateCollection JobCandidateCollection
{
get
{
try
{
return _JobCandidateCollection;
}
catch (System.Exception err)
{
throw new Exception("Error getting JobCandidateCollection",
err);
}
}
set
{
try
{
_JobCandidateCollection = value;
}
catch (System.Exception err)
{
throw new Exception("Error setting JobCandidateCollection",
err);
}
}
}

/// Represents the foreign key object


private PurchaseOrderHeaderCollection _PurchaseOrderHeaderCollection;

[Description("Represents the foreign key relation. This is an Collection of


Employee.")]
public PurchaseOrderHeaderCollection PurchaseOrderHeaderCollection
{
get
{
try
{
return _PurchaseOrderHeaderCollection;
}
catch (System.Exception err)
{
throw new Exception("Error getting
PurchaseOrderHeaderCollection", err);
}
}
set
{
try
{
_PurchaseOrderHeaderCollection = value;
}
catch (System.Exception err)
{
throw new Exception("Error setting
PurchaseOrderHeaderCollection", err);
}
}
}

/// Represents the foreign key object


private SalesPersonCollection _SalesPersonCollection;

[Description("Represents the foreign key relation. This is an Collection of


Employee.")]
public SalesPersonCollection SalesPersonCollection
{
get
{
try
{
return _SalesPersonCollection;
}
catch (System.Exception err)
{
throw new Exception("Error getting SalesPersonCollection", err);
}
}
set
{
try
{
_SalesPersonCollection = value;
}
catch (System.Exception err)
{
throw new Exception("Error setting SalesPersonCollection", err);
}
}
}
#endregion

#region Public Methods


public void Select(string ConnectionString)
{
try
{
SqlConnection Conn = new SqlConnection(ConnectionString);
SqlCommand Com = Conn.CreateCommand();
Com.CommandText = Employee._SQL_Select;
Com.Parameters.AddRange(GetSqlParameters());
Conn.Open();
SqlDataReader rs = Com.ExecuteReader();
while(rs.Read())
{
AddFromRecordSet(rs);
}
rs.Close();
Conn.Close();
rs.Dispose();
Com.Dispose();
Conn.Dispose();
}
catch (System.Exception )
{
throw;
}
}

public int Insert(string ConnectionString)


{
try
{
SqlConnection Conn = new SqlConnection(ConnectionString);
SqlCommand Com = Conn.CreateCommand();
Com.CommandText = Employee._SQL_Insert;
Com.Parameters.AddRange(GetSqlParameters());
Conn.Open();
int rowseffected = Com.ExecuteNonQuery();
Conn.Close();
Com.Dispose();
Conn.Dispose();
return rowseffected;
}
catch (System.Exception )
{
throw;
}
}

public int Update(string ConnectionString)


{
try
{
SqlConnection Conn = new SqlConnection(ConnectionString);
SqlCommand Com = Conn.CreateCommand();
Com.CommandText = Employee._SQL_Update;
Com.Parameters.AddRange(GetSqlParameters());
Conn.Open();
int rowseffected = Com.ExecuteNonQuery();
Conn.Close();
Com.Dispose();
Conn.Dispose();
return rowseffected;
}
catch (System.Exception )
{
throw;
}
}

public int Delete(string ConnectionString)


{
try
{
SqlConnection Conn = new SqlConnection(ConnectionString);
SqlCommand Com = Conn.CreateCommand();
Com.CommandText = Employee._SQL_Delete;
Com.Parameters.AddRange(GetSqlParameters());
Conn.Open();
int rowseffected = Com.ExecuteNonQuery();
Conn.Close();
Com.Dispose();
Conn.Dispose();
return rowseffected;
}
catch (System.Exception )
{
throw;
}
}

public string[] GetSqlCommandStrings()
{
try
{
string[] CommStr = new string[4];
CommStr[0] = "SELECT [EmployeeID], [NationalIDNumber], [ContactID], [LoginID], [ManagerID], [Title], [BirthDate]," +
    " [MaritalStatus], [Gender], [HireDate], [SalariedFlag], [VacationHours], [SickLeaveHours], [CurrentFlag], [rowguid]," +
    " [ModifiedDate] FROM [HumanResources].[Employee] WHERE [EmployeeID] = " + _EmployeeID;
CommStr[1] = "INSERT INTO [HumanResources].[Employee] ([EmployeeID], [NationalIDNumber], [ContactID], [LoginID]," +
    " [ManagerID], [Title], [BirthDate], " +
    "[MaritalStatus], [Gender], [HireDate], [SalariedFlag], [VacationHours], " +
    "[SickLeaveHours], [CurrentFlag], [rowguid], [ModifiedDate]) VALUES(" + _EmployeeID + ", '" +
    _NationalIDNumber.Replace("'","''") + "', " + _ContactID + ", '" +
    _LoginID.Replace("'","''") + "', " +
    _ManagerID + ", '" + _Title.Replace("'","''") +
    "', '" + _BirthDate.Replace("'","''") +
    "', '" + _MaritalStatus.Replace("'","''") +
    "', '" + _Gender.Replace("'","''") +
    "', '" + _HireDate.Replace("'","''") + "', " +
    _SalariedFlag + ", " + _VacationHours +
    ", " + _SickLeaveHours + ", " +
    _CurrentFlag + ", " + _rowguid + ", '" +
    _ModifiedDate.Replace("'","''") + "')";
CommStr[2] = "UPDATE [HumanResources].[Employee] SET [NationalIDNumber] = '" +
    _NationalIDNumber.Replace("'","''") + "' ,[ContactID] = " + _ContactID +
    " ,[LoginID] = '" + _LoginID.Replace("'","''") +
    "' ,[ManagerID] = " + _ManagerID +
    " ,[Title] = '" + _Title.Replace("'","''") + "' ,[BirthDate] = '" +
    _BirthDate.Replace("'","''") + "' ,[MaritalStatus] = '" +
    _MaritalStatus.Replace("'","''") +
    "' ,[Gender] = '" + _Gender.Replace("'","''") +
    "' ,[HireDate] = '" + _HireDate.Replace("'","''") +
    "' ,[SalariedFlag] = " + _SalariedFlag + " ,[VacationHours] = " +
    _VacationHours + " ,[SickLeaveHours] = " +
    _SickLeaveHours + " ,[CurrentFlag] = " + _CurrentFlag + " ,[rowguid] = " +
    _rowguid + " ,[ModifiedDate] = '" +
    _ModifiedDate.Replace("'","''") +
    "' WHERE [EmployeeID] = " + _EmployeeID;
CommStr[3] = "DELETE FROM [HumanResources].[Employee] WHERE [EmployeeID] = " + _EmployeeID;
return CommStr;
}
}
catch (System.Exception )
{
throw;
}
}
#endregion

#region Private Methods


private void AddFromRecordSet(SqlDataReader rs)
{
try
{
// if value from the recordset, to the EmployeeID
// field is NOT null then set the value.
if ((rs.IsDBNull(rs.GetOrdinal("EmployeeID")) == false))
{
EmployeeID = rs.GetInt32(rs.GetOrdinal("EmployeeID"));
}
// if value from the recordset, to the NationalIDNumber
// field is NOT null then set the value.
if ((rs.IsDBNull(rs.GetOrdinal(
"NationalIDNumber")) == false))
{
NationalIDNumber = rs.GetString(
rs.GetOrdinal("NationalIDNumber"));
}
// if value from the recordset, to the ContactID
// field is NOT null then set the value.
if ((rs.IsDBNull(rs.GetOrdinal("ContactID")) == false))
{
ContactID = rs.GetInt32(rs.GetOrdinal("ContactID"));
}
// if value from the recordset, to the LoginID
// field is NOT null then set the value.
if ((rs.IsDBNull(rs.GetOrdinal("LoginID")) == false))
{
LoginID = rs.GetString(rs.GetOrdinal("LoginID"));
}
// if value from the recordset, to the ManagerID
// field is NOT null then set the value.
if ((rs.IsDBNull(rs.GetOrdinal("ManagerID")) == false))
{
ManagerID = rs.GetInt32(rs.GetOrdinal("ManagerID"));
}
// if value from the recordset, to the Title
// field is NOT null then set the value.
if ((rs.IsDBNull(rs.GetOrdinal("Title")) == false))
{
Title = rs.GetString(rs.GetOrdinal("Title"));
}
// if value from the recordset, to the BirthDate
// field is NOT null then set the value.
if ((rs.IsDBNull(rs.GetOrdinal("BirthDate")) == false))
{
BirthDate = rs.GetDateTime(rs.GetOrdinal("BirthDate"));
}
// if value from the recordset, to the MaritalStatus
// field is NOT null then set the value.
if ((rs.IsDBNull(rs.GetOrdinal("MaritalStatus")) == false))
{
MaritalStatus =
rs.GetString(rs.GetOrdinal("MaritalStatus"));
}
// if value from the recordset, to the Gender field
// is NOT null then set the value.
if ((rs.IsDBNull(rs.GetOrdinal("Gender")) == false))
{
Gender = rs.GetString(rs.GetOrdinal("Gender"));
}
// if value from the recordset, to the HireDate
// field is NOT null then set the value.
if ((rs.IsDBNull(rs.GetOrdinal("HireDate")) == false))
{
HireDate = rs.GetDateTime(rs.GetOrdinal("HireDate"));
}
// if value from the recordset, to the SalariedFlag
// field is NOT null then set the value.
if ((rs.IsDBNull(rs.GetOrdinal("SalariedFlag")) == false))
{
SalariedFlag =
rs.GetBoolean(rs.GetOrdinal("SalariedFlag"));
}
// if value from the recordset, to the VacationHours
// field is NOT null then set the value.
if ((rs.IsDBNull(rs.GetOrdinal("VacationHours")) == false))
{
VacationHours = rs.GetInt16(rs.GetOrdinal("VacationHours"));
}
// if value from the recordset, to the SickLeaveHours
// field is NOT null then set the value.
if ((rs.IsDBNull(rs.GetOrdinal("SickLeaveHours")) == false))
{
SickLeaveHours = rs.GetInt16(rs.GetOrdinal("SickLeaveHours"));
}
// if value from the recordset, to the CurrentFlag
// field is NOT null then set the value.
if ((rs.IsDBNull(rs.GetOrdinal("CurrentFlag")) == false))
{
CurrentFlag = rs.GetBoolean(rs.GetOrdinal("CurrentFlag"));
}
// if value from the recordset, to the rowguid
// field is NOT null then set the value.
if ((rs.IsDBNull(rs.GetOrdinal("rowguid")) == false))
{
rowguid = rs.GetGuid(rs.GetOrdinal("rowguid"));
}
// if value from the recordset, to the ModifiedDate
// field is NOT null then set the value.
if ((rs.IsDBNull(rs.GetOrdinal("ModifiedDate")) == false))
{
ModifiedDate = rs.GetDateTime(rs.GetOrdinal("ModifiedDate"));
}
}
catch (SqlException sqlExc)
{
throw sqlExc;
}
catch (Exception Exc)
{
throw Exc;
}
}

private SqlParameter[] GetSqlParameters()


{
List<SqlParameter> SqlParmColl = new List<SqlParameter>();
try
{
SqlParmColl.Add(AdventureWorks.AddSqlParm(
"@EmployeeID", EmployeeID, SqlDbType.Int));
SqlParmColl.Add(AdventureWorks.AddSqlParm(
"@NationalIDNumber", NationalIDNumber, SqlDbType.NVarChar));
SqlParmColl.Add(AdventureWorks.AddSqlParm(
"@ContactID", ContactID, SqlDbType.Int));
SqlParmColl.Add(AdventureWorks.AddSqlParm(
"@LoginID", LoginID, SqlDbType.NVarChar));
SqlParmColl.Add(AdventureWorks.AddSqlParm(
"@ManagerID", ManagerID, SqlDbType.Int));
SqlParmColl.Add(AdventureWorks.AddSqlParm(
"@Title", Title, SqlDbType.NVarChar));
SqlParmColl.Add(AdventureWorks.AddSqlParm(
"@BirthDate", BirthDate, SqlDbType.DateTime));
SqlParmColl.Add(AdventureWorks.AddSqlParm(
"@MaritalStatus", MaritalStatus, SqlDbType.NChar));
SqlParmColl.Add(AdventureWorks.AddSqlParm(
"@Gender", Gender, SqlDbType.NChar));
SqlParmColl.Add(AdventureWorks.AddSqlParm(
"@HireDate", HireDate, SqlDbType.DateTime));
SqlParmColl.Add(AdventureWorks.AddSqlParm(
"@SalariedFlag", SalariedFlag, ));
SqlParmColl.Add(AdventureWorks.AddSqlParm(
"@VacationHours", VacationHours, SqlDbType.SmallInt));
SqlParmColl.Add(AdventureWorks.AddSqlParm(
"@SickLeaveHours", SickLeaveHours, SqlDbType.SmallInt));
SqlParmColl.Add(AdventureWorks.AddSqlParm(
"@CurrentFlag", CurrentFlag, ));
SqlParmColl.Add(AdventureWorks.AddSqlParm(
"@rowguid", rowguid, SqlDbType.UniqueIdentifier));
SqlParmColl.Add(AdventureWorks.AddSqlParm(
"@ModifiedDate", ModifiedDate, SqlDbType.DateTime));
return SqlParmColl.ToArray();
}
catch (Exception Exc)
{
throw Exc;
}
}
#endregion
}

References

 SQLReader website
 MSDN
o Database
o Objects
o Schema
o Table
o Column
o Foreign_key
o Foreign_key_column
o Indexes
o Index_columns
o Extended_properties
o Type
o Default_constraint
o Trigger
o Comment
o Identity_column

History

How to work with sample project :

Download the attachment file of this article and open the zip file.

First, run TestDb_Script.sql in SQL Server Management Studio (SSMS); it will create
a sample database named TestDB that contains a table named Student and some
records to work with.

Second, run DbHelper from the debug folder, initialize the connection options
(DataSource, User, Pass, …), and then click Connect. It will load all databases that
exist on the current DataSource.

Then, select TestDb database that we created earlier.

Because our database does not have any stored procedures yet, in outputType, select
Generate Stored Procedures, select a path (language is not important in this mode),
then click Generate Code. Open the file that has been created (like this: Your
Path\Sql\TestDB_Sps.Sql) with SSMS and run it against your TestDb database.

Then, run DbHelper again (or click Connect again if you didn't close it) and select
TestDb database.

In outputType, select Generate Code for Stored Procedures (DAL), and check
Generate Tables Classes (BLL). Select a language and path, then click Generate Code.
It will create, in your selected path, a class source file named DAL.cs or DAL.vb
which includes all the stored procedure and Data Access Layer code, and a folder
named BLL which contains TestDb.cs or TestDb.vb with all the table classes. You
must add these files to your solution and add the appropriate using directives to
your source code.

The new functionality in Database Helper v 2.0.0:
Background

When we create database applications, we must deal with the database repeatedly:
at the code level, for each interaction with the database, we must create a
connection, define the command to be executed, and finally get the results (if there
are any). Mixing this database-level code with our application code is troublesome
and is not best practice for high-performance database applications. To build
database applications in less time and with higher performance, it is best practice to
split the Data Access Layer from the main application code. For this job, Database
Helper is one of the applications that helps you produce your database applications
more easily and quickly.

Technical Information (V 1.0.0) :

Database Helper has been written in C# 2.0, and the source code is available. To
generate .NET code, I used the System.CodeDom and System.CodeDom.Compiler
namespaces. To load database information, I used the SQLDMO.dll library. This
library is good for the purpose, but the first time, it takes a long time to load the
database info.
Technical Information (v 2.0.0) :

In v2.0.0, I used SMO (SQL Server Management Objects) instead of DMO (Distributed
Management Objects) to work with SQL Server (2005) more easily, and I made full
use of multithreading (thanks to anonymous delegates, which avoid the trouble of
defining a new delegate and writing extra methods). I also optimized the
database-info loading algorithm, which makes loading database information about
10 times faster.

About Database Helper :

Database Helper is an open source application that helps you program and interact
with the database more effectively. It has three useful tools to:

1. Generate Stored Procedures Code in DataBase Level

2. Generate .net (Vb & C#) Single Class and Collection Class for each table in
database (BLL)

3. Generate .net (Vb & C#) Code for Stored Procedures as methods and some other
useful methods (DAL)

Generate Stored Procedures Code in Database Level :

This tool creates 5 stored procedures for each table (SelectAll, SelectRow, Insert,
Update, Delete). Thanks to Mr. Keramati for writing this tool and giving me its source
code to merge with my other tools; originally it generated the 5 procedures for a
selected table, and I customized it to generate the 5 stored procedures for all of the
tables in the database. Mr. Keramati has also published his source code on CodePlex
separately.

The output of this tool is a folder (named Sql) that contains a single .sql file
including all the stored procedures generated for all tables in the database.

Generate .Net Classes in Code Level (Business Logic Layer):

This tool creates two classes for each table. The first is the single-row class that
helps you write your code in a more strongly-typed way; the second is a collection
class (for the first class) that inherits from CollectionBase and holds a number of
single-class instances as rows (like a DataTable) in memory, so you can work with
many rows in a strongly-typed manner. You can even bind your
DataGridView.DataSource to this object. Classes are generated in either C# or
VB.NET, in a namespace that has the same name as the database.
The output of this tool is a folder (named BLL) that contains a .NET source file
including all the classes generated from the tables, in the database namespace (the
filename equals the database name).
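
As a rough illustration (the Student class, its properties, the StudentCollection name, and its Add method below are assumptions based on the TestDB sample; the generator derives the real names from your tables), the generated BLL classes can be used like this:

using TestDB; // namespace generated from the database name

StudentCollection students = new StudentCollection();

Student s = new Student();
s.StudentID = 1;           // hypothetical columns of the Student table
s.Name = "John";
students.Add(s);           // the CollectionBase-derived class holds typed rows

// the collection can be bound directly to a grid
dataGridView1.DataSource = students;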

Generate .Net Codes for Working with Stored Procedures (Data Access Layer):

This tool extracts all stored procedures from the database and creates a method for
each one; parameters are also defined on the methods (if they exist). In other
words, it creates the Data Access Layer for you.

All methods return a DataTable object; if the call has results, then
DataTable.Rows.Count > 0, otherwise the returned DataTable has no rows (the count
equals 0).

It also generates some useful methods for working and interacting with the
database:

1. The first constructor overload takes the full connection string as a parameter; you
can access the connection string via the ConnectionString property.

2. The second constructor overload takes two parameters, ServerName and
DatabaseName, and creates a trusted connection; you can access the connection
string via the ConnectionString property.

3. The third constructor overload takes ServerName, DatabaseName, UserName and
Password and creates an untrusted connection; you can access the connection string
via the ConnectionString property.

4. GetData(): this general-purpose method takes a SqlCommand as a parameter and
returns the query result as a DataTable object.

Note: you don't need to pass a connection to the constructor of the SqlCommand
object; it will be initialized inside the GetData() method.

5. ExecNonQuery(): this general-purpose method executes a command in non-query
mode; like GetData(), it takes a SqlCommand, executes it, and returns an int as the
result.

The output of this tool is a .NET source file named DAL, which contains a class with
the same name (DAL) in its database namespace.
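
Putting the pieces together, a minimal usage sketch might look like this (the Student_SelectAll wrapper name is an assumption based on the SelectAll procedure generated for the Student table; GetData, ExecNonQuery and the constructor overloads are the ones described above):

using System.Data;
using System.Data.SqlClient;
using TestDB;

DAL dal = new DAL("MyServer", "TestDB");   // second overload: trusted connection

// every stored-procedure wrapper returns a DataTable
DataTable students = dal.Student_SelectAll();
if (students.Rows.Count > 0)
    dataGridView1.DataSource = students;

// general-purpose helper: no connection needed on the SqlCommand
SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM Student");
DataTable result = dal.GetData(cmd);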

The problem:

You cannot call remote ASP.NET Web Service methods from a JavaScript, AJAX
client.
Example:

You have a Web Service at this address: http://a.com/service.asmx and you've
configured the service to work with AJAX clients:

[WebService
(Namespace = "http://www.hyzonia.com/gametypes/PopNDropLikeGame/WS2")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
[System.Web.Script.Services.ScriptService]
public class GameService : System.Web.Services.WebService
{
[WebMethod(EnableSession = true)]
public GameSessionResponse CreateGameSession(Guid questId)
{
...
}
}

And it works fine when you call its methods from a web page that is in this address:
http://a.com/page.htm:

$.ajax({
type: "POST",
url: "GameService.asmx/CreateGameSession",
data: "{questId: '" + questId + "'}",
cache: false,
contentType: "application/json; charset=utf-8",
dataType: "json",
success: function(response) {
Game._onSessionGot(response.d);
}
});

But the very same client-side code doesn't work from this address:
http://b.com/page.htm.

The problem in depth:

At first glance, it is a silly problem; to me, it is overprotection. After all, Web Services
are meant to be called by remote clients. The fact that browsers block access to Web
Services by AJAX calls is clearly contrary to the purpose of Web Services.

Interestingly, browser extensions like Flash and Silverlight also, by default, block
remote Web Services, but they provide a workaround. Unfortunately, no browser to
date supports this workaround for XMLHttpRequests. This "security measure"
seems odder when we notice that it is perfectly correct to import a JavaScript code
snippet from another domain using a script tag:

<script
src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js"
type="text/javascript">
</script>

The solution:

As it was said, Flash and Silverlight both support remote calls. You just need a client
access policy file to be hosted at the root of a.com
(http://a.com/clientaccesspolicy.xml):

<?xml version="1.0" encoding="utf-8"?>


<access-policy>
<cross-domain-access>
<policy>
<allow-from http-request-headers="SOAPAction">
<domain uri="*"/>
</allow-from>
<grant-to>
<resource path="/" include-subpaths="true"/>
</grant-to>
</policy>
</cross-domain-access>
</access-policy>

This file allows remote calls to be made from any other domain.

But in many situations, we want to call the Web Service methods directly from AJAX
clients. This need was the cause of the development of JSONP (JSON with padding)
protocol. As it was discussed, it is correct to have a <script> element that loads a
script from another domain. On the other hand, you may know that it is possible to
load scripts dynamically by a simple JavaScript trick (writing <script> tags) or
using this jQuery plug in. Now the bulbs are flickering! The solution is to access the
JSON Web Service by the src attribute of a <script> element. This is the whole
idea behind JSONP.

But there are a couple of problems needed to be solved for ASP.NET ASMX Web
Services before we can use them in a JSONP scenario.

1. ASP.NET Web Services by default only accept POST requests; a <script
src=""> element produces a GET request.
2. The result of the web method call must conform to JSONP, and as you can
guess, ASP.NET 3.5 by default doesn’t support it.

The solution to the first problem may seem trivial, we can easily enable GET calls to
web methods using the [ScriptMethod(UseHttpGet = true)] attribute. The
immediate problem is that when we mark a web method by this attribute, it can only
be called by GET requests. And remember, other clients (actually anything other
than JSONP clients) are supposed to communicate with the web service by POST
requests. I usually end up inheriting from the original Web Service and marking web
methods by the [ScriptMethod(UseHttpGet = true)] attribute in the derived
class. Therefore, I will have two ASMX Web Services, one using the original class
(expecting POST requests) and the other using the derived class (expecting GET
requests).

[WebMethod(), ScriptMethod(UseHttpGet = true)]


public override GameSessionResponse CreateGameSession(Guid questId)
{
return base.CreateGameSession(questId);
}

Note you may need to add this code snippet in web.config:

<system.web>
<webServices>
<protocols>
<add name="HttpGet"/>
</protocols>
</webServices>

</system.web>

There's another problem to be addressed on the client side. The client should call the
web method using the correct URL (it has to pass a correctly formed query string
that can be deserialized back to .NET objects on the server side). In the case of POST
requests, I'm used to using the JSON2 library to post data to ASP.NET ASMX Web
Services. The jQuery $.ajax method (when it is configured to use JSONP, using
dataType: "jsonp") creates query string parameters for the data objects it receives.
But the result is not usable for ASMX Web Services.

Luckily, there's a ready to use JQuery plug-in (jMsAjax) that has the required
algorithms for serializing a JavaScript object into a query string that can be parsed
by ASP.NET Web Services.

Using the plug-in, I created this function to serialize JavaScript objects into query
strings:
$.jmsajaxurl = function(options) {
    var url = options.url;
    url += "/" + options.method;
    if (options.data) {
        var data = "";
        for (var i in options.data) {
            if (data != "")
                data += "&";
            data += i + "=" + msJSON.stringify(options.data[i]);
        }
        url += "?" + data;
        data = null;
        options.data = "{}";
    }
    return url;
};

You will need jMsAjax for this code snippet to work.

Finally, this is a sample of client-side code using jQuery that calls an ASMX Web
Service using JSONP:

var url = $.jmsajaxurl({
    url: "http://hiddenobjects.hyzonia.com/services/GameService3.asmx",
    method: "Login",
    data: { email: "myemail@mydomain.com", password: "mypassword" }
});

$.ajax({
    cache: false,
    dataType: "jsonp",
    success: function(d) { console.log(d); },
    url: url + "&format=json"
});

Or equivalently:

$.getJSON(url + "&callback=?&format=json", function(data) {
    console.log(data);
});

When you call an ASP.NET Web Service method (that is configured to receive GET
requests) using code similar to the above, it returns XML. The problem is that
the Web Service expects to receive a request with a content type of
"application/json; charset=utf-8", and the <script> element simply
doesn't add this content type to the request. There's little we can do about this at the
client side; the easiest way to resolve the problem is to use an HTTP module on the
server. The HTTP module should add this content type to the requests before they are
processed by the Web Service handler.

On the other hand, a JSONP client expects the Web Service to return the call as a
string like this:

nameOfACallBackFunction(JSON_OBJECT_WEB_METHOD_RETURNED)

nameOfACallBackFunction must be given to the server by a parameter in the
query string. Different JSONP-compatible Web Services use different names for this
parameter, but usually it is named 'callback'. At least, this is what $.ajax()
automatically adds to the request in JSONP mode.

We have to modify the response stream that the server is returning. Luckily, in
ASP.NET, it is easy to apply a filter to the response.

I slightly modified this HTTP module, which I originally grabbed from a post on
elegantcode.com, to improve its performance:

public class JsonHttpModule : IHttpModule
{
    private const string JSON_CONTENT_TYPE =
        "application/json; charset=utf-8";

    public void Dispose()
    {
    }

    public void Init(HttpApplication app)
    {
        app.BeginRequest += OnBeginRequest;
        app.ReleaseRequestState += OnReleaseRequestState;
    }

    bool _Apply(HttpRequest request)
    {
        if (!request.Url.AbsolutePath.Contains(".asmx")) return false;
        if ("json" != request.QueryString.Get("format")) return false;
        return true;
    }

    public void OnBeginRequest(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;
        if (!_Apply(app.Context.Request)) return;

        // correct content type of request
        if (string.IsNullOrEmpty(app.Context.Request.ContentType))
        {
            app.Context.Request.ContentType = JSON_CONTENT_TYPE;
        }
    }

    public void OnReleaseRequestState(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;

        if (!_Apply(app.Context.Request)) return;

        // apply response filter to conform to JSONP
        app.Context.Response.Filter =
            new JsonResponseFilter(app.Context.Response.Filter, app.Context);
    }
}

public class JsonResponseFilter : Stream
{
    private readonly Stream _responseStream;
    private HttpContext _context;

    public JsonResponseFilter(Stream responseStream, HttpContext context)
    {
        _responseStream = responseStream;
        _context = context;
    }

    //...

    public override void Write(byte[] buffer, int offset, int count)
    {
        var b1 = Encoding.UTF8.GetBytes(
            _context.Request.Params["callback"] + "(");
        _responseStream.Write(b1, 0, b1.Length);
        _responseStream.Write(buffer, offset, count);
        var b2 = Encoding.UTF8.GetBytes(");");
        _responseStream.Write(b2, 0, b2.Length);
    }

    //...
}
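
The //... parts stand for the remaining Stream members that the listing omits. A minimal, illustrative completion (an assumption, not the author's exact code) simply delegates each of them to the wrapped response stream:

// Illustrative completion of the elided Stream members; each one just
// forwards to the wrapped response stream.
public override bool CanRead { get { return _responseStream.CanRead; } }
public override bool CanSeek { get { return _responseStream.CanSeek; } }
public override bool CanWrite { get { return _responseStream.CanWrite; } }
public override long Length { get { return _responseStream.Length; } }

public override long Position
{
    get { return _responseStream.Position; }
    set { _responseStream.Position = value; }
}

public override void Flush()
{
    _responseStream.Flush();
}

public override int Read(byte[] buffer, int offset, int count)
{
    return _responseStream.Read(buffer, offset, count);
}

public override long Seek(long offset, SeekOrigin origin)
{
    return _responseStream.Seek(offset, origin);
}

public override void SetLength(long length)
{
    _responseStream.SetLength(length);
}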

This HTTP module will be applied to each request to an .asmx file that has a
format=json in its query string. Note that you have to update web.config:

<system.web>
  <httpModules>
    <add name="JSONAsmx" type="JsonHttpModule, App_Code"/>
  </httpModules>
</system.web>

for IIS6, and:

<system.webServer>
  <modules>
    <add name="JSONAsmx" type="JsonHttpModule, App_Code"/>
  </modules>
</system.webServer>

for IIS7.

Now to test it, let's open the Web Service in a browser window. In my example,
http://hiddenobjects.hyzonia.com/services/GameService3.asmx/Login?email=e@e.com&password=p
should return XML, and
http://hiddenobjects.hyzonia.com/services/GameService3.asmx/Login?email="e@e.com"&password="p"&format=json&callback=myCallBackFunc
will return:

myCallBackFunc({"d":{"__type":"HLoginResponse",
"isSuccessful":false,"error":false,"authSessionId":null,
"nickName":null,"score":0}});

Don't worry about myCallBackFunc; jQuery nicely manages it so that the whole
business happens behind the scenes, and you can use the $.ajax success callback the very
same way you use it for a normal AJAX call.

We should note that JSONP has its own problems, especially… yes… in IE! All
versions of Internet Explorer have a 2,083-character limit for the URL of a request, which
means that you cannot send large data in GET requests to the server. Sometimes
this limitation leaves us with no choice but to use Flash or to create a proxy to the
remote Web Service in the local domain.
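
As an aside, the "proxy in the local domain" option is often the simplest escape hatch: the page calls a handler on its own domain, and that handler forwards the request to the remote service on the server side, so neither the cross-domain restriction nor the URL-length limit applies. Below is a rough, hypothetical sketch; the handler name, parameters and forwarding logic are assumptions, not part of this article's download:

// Proxy.ashx (hypothetical): a same-domain proxy that forwards GET calls
// to the remote ASMX service and relays the response to the browser.
using System.Net;
using System.Web;

public class RemoteServiceProxy : IHttpHandler
{
    // Assumed remote endpoint, borrowed from the example above.
    private const string RemoteBaseUrl =
        "http://hiddenobjects.hyzonia.com/services/GameService3.asmx";

    public void ProcessRequest(HttpContext context)
    {
        // e.g. Proxy.ashx?method=Login&email=e@e.com&password=p
        string method = context.Request.QueryString["method"];

        using (WebClient client = new WebClient())
        {
            // Forward every incoming query string parameter except "method" itself.
            foreach (string key in context.Request.QueryString)
            {
                if (key != "method")
                    client.QueryString[key] = context.Request.QueryString[key];
            }

            string result = client.DownloadString(RemoteBaseUrl + "/" + method);

            context.Response.ContentType = "text/xml";
            context.Response.Write(result);
        }
    }

    public bool IsReusable
    {
        get { return true; }
    }
}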

Introduction

Creating your first web service is incredibly easy. In fact, by using the wizards in
Visual Studio .NET you can have your first service up and running in minutes with no
coding.

For this example I have created a service called MyService in the /WebServices
directory on my local machine. The files will be created in the
/WebServices/MyService directory.

A new namespace will be defined called MyService, and within this namespace will
be a set of classes that define your Web Service. By default the following classes will
be created:

Global (in global.asax): Derived from HttpApplication. This file is the ASP.NET
equivalent of a standard ASP global.asa file.

WebService1 (in WebService1.cs): Derived from System.Web.Services.WebService.
This is your WebService class that allows you to expose methods that can be called
as WebServices.

There are also a number of files created:

AssemblyInfo.cs: Contains version and configuration information for your assembly.

web.config: Defines how your application will run (debug options, the use of cookies
etc).

MyService.disco: Discovery information for your service.

WebService1.asmx: Your WebService URL. Navigate to this file in a browser and you
will get back a user-friendly page showing the methods available, the parameters
required and the return values. Forms are even provided allowing you to test the
services through the web page.

bin\MyService.dll: The actual WebService component. This is created when you
build the service.

The class for your service that is created by default is called (in this case)
WebService1, and is within the MyService namespace. The code is partially shown
below.

namespace MyService
{
    ...
    /// <summary>
    /// Summary description for WebService1.
    /// </summary>
    [WebService(Namespace="http://codeproject.com/webservices/",
        Description="This is a demonstration WebService.")]
    public class WebService1 : System.Web.Services.WebService
    {
        public WebService1()
        {
            //CODEGEN: This call is required by the ASP+ Web Services Designer
            InitializeComponent();
        }

        ...

        [WebMethod]
        public string HelloWorld()
        {
            return "Hello World";
        }
    }
}

A default method HelloWorld is generated and commented out. Simply uncomment it
and build the project. Hey presto, you have a walking, talking WebService.

A WebService should be associated with a namespace. Your wizard-generated
service will have the namespace http://tempuri.org. If you compile and run the
service as-is you'll get a long, involved message indicating you should choose a new
namespace, so we add the namespace and the WebService description as follows:

[WebService(Namespace="http://codeproject.com/webservices/",
Description="This is a demonstration WebService.")]
public class WebService1 : System.Web.Services.WebService
{
...

To test the service you can right click on WebService1.asmx in the Solution Explorer
in Visual Studio and choose "View in Browser". The test page is shown below.

When invoked this returns the following:

Getting the demo application to run


If you downloaded the source code with this article then you will need to create a
directory 'WebServices' in your web site's root directory and extract the downloaded
zip into there. You should then have:

\WebServices
\WebServices\bin
\WebServices\WebService1.asmx
...

Navigating to http://localhost/WebServices/WebService1.asmx won't show you the
WebService because you need to ensure that the webservice's assembly is in the
application's /bin directory. You will also find that you can't load up the solution file
MyService.sln. To kill two birds with one stone you will need to fire up the IIS
management console, open your website's entry, right click on the WebServices
folder and click Properties. Click the 'Create' button to create a new application,
then press OK. The /WebServices directory is now an application, so the .NET
framework will load the WebService assembly from the /WebServices/bin directory,
and you will be able to load and build the MyService.sln solution.

Extending the example


So we have a WebService. Not particularly exciting, but then again we haven't
exactly taxed ourselves getting here. To make things slightly more interesting we'll
define a method that returns an array of custom structures.

Within the MyService namespace we'll define a structure called ClientData:

public struct ClientData
{
    public String Name;
    public int ID;
}

and then define a new method GetClientData. Note the use of the WebMethod
attribute in front of the method. This specifies that the method is accessible as a
WebService method.

[WebMethod]
public ClientData[] GetClientData(int Number)
{
    ClientData[] Clients = null;

    if (Number > 0 && Number <= 10)
    {
        Clients = new ClientData[Number];
        for (int i = 0; i < Number; i++)
        {
            Clients[i].Name = "Client " + i.ToString();
            Clients[i].ID = i;
        }
    }
    return Clients;
}

If we compile and then navigate to the .asmx page, we are presented with a
form that allows us to enter a value for the parameter. Entering a non-integer value
will cause a type error, and entering a value not in the range 1-10 will return a null
array. If, however, we manage to get the input parameter correct, we'll be presented
with the following XML file:

It's that easy.
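
Once the service builds, calling it from a .NET client is just as simple. The snippet below is a minimal sketch that assumes you have added a Web Reference to WebService1.asmx and that Visual Studio generated the proxy under a namespace called MyServiceProxy; the proxy namespace is an assumption, not something created by this article's download:

// Hypothetical console client for the WebService1 proxy.
using System;

class TestClient
{
    static void Main()
    {
        // The proxy class was generated by adding a Web Reference.
        MyServiceProxy.WebService1 service = new MyServiceProxy.WebService1();

        // Ask for three clients and print what comes back over SOAP.
        MyServiceProxy.ClientData[] clients = service.GetClientData(3);
        foreach (MyServiceProxy.ClientData client in clients)
        {
            Console.WriteLine("{0}: {1}", client.ID, client.Name);
        }
    }
}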

Caching WebServices

Often a WebService will return the same results over multiple calls, so it makes
sense to cache the information to speed things up a little. Doing so in ASP.NET is as
simple as adding a CacheDuration attribute to your WebMethod:

[WebMethod(CacheDuration = 30)]
public ClientData[] GetClientData(int Number)
{

The CacheDuration attribute specifies the length of time in seconds that the method
should cache the results. Within that time all responses from the WebMethod will be
the same.

You can also specify the CacheDuration using a constant member variable in your
class:

private const int CacheTime = 30; // seconds

[WebMethod(CacheDuration = CacheTime)]
public ClientData[] GetClientData(int Number)
{

Adding Descriptions to your WebMethods

In the default list of WebMethods created when you browse to the .asmx file it's nice
to have a description of each method posted. The Description attribute
accomplishes this.
[WebMethod(CacheDuration = 30,
Description="Returns an array of Clients.")]
public ClientData[] GetClientData(int Number)
{

Your default .asmx page will then look like the following:

There are other WebMethod attributes to control buffering, session state and
transaction support.
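
For completeness, here is a hedged example of what those settings look like on a WebMethod; the combination and values below are purely illustrative, not a recommendation from this article:

// Illustrative use of other WebMethod properties: response buffering,
// ASP.NET session state and automatic (COM+) transaction support.
[WebMethod(BufferResponse = true,
           EnableSession = true,
           TransactionOption = System.EnterpriseServices.TransactionOption.RequiresNew,
           Description = "Shows the other WebMethod attribute properties.")]
public string GetStatus()
{
    return "OK";
}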

Deploying the WebService

Now that we have a WebService it would be kind of nice to allow others to use it (call
me crazy, but...). Publishing your WebService on your server requires that your
solution be deployed correctly. On the Build menu of Visual Studio is a "Deploy"
option that, when first selected, starts a Wizard that allows you to add a Deployment
project to your solution. This creates an installation package that you can run on
your server which will create the necessary directories, set the correct parameters
and copy over the necessary files.

This doesn't really give you an idea of what, exactly, is happening, so we'll deploy
our MyService manually.

Deploying the application is done using the steps in Getting the demo application to
run. We need to create a directory for our service (or use an existing directory) for
our .asmx file, and we need to have the service's assembly in the application's bin/
directory. Either place the .asmx file in a subdirectory on your website and place the
assembly in the /bin folder in your website's root, or place the /bin in the
subdirectory containing the .asmx file and mark that directory as an application (see
above).

If you choose to create a separate directory and mark it as an application, then within
this directory you need to add the following files and directories:

MyService.asmx: This file acts as the URL for your service.
MyService.disco: The discovery document for your service.
web.config: Configuration file for your service that overrides default web settings (optional).
/bin: This directory holds the assembly for your service.
/bin/MyService.dll: The actual service assembly.

Introduction

I love the work of developing simple, reusable, and hopefully long lasting software
components. By simple I mean easy to use, not necessarily easy to design or
implement. As software engineers we should, in my opinion, work hard to make
complicated things look simple, not the other way around. On the other hand, I like
distributed computing, which is pretty complicated by nature. I get very excited if my
programs can jump from machine to machine, go through firewalls, and eventually
retrieve or update data on a legacy system (which usually means mainframe to
me :-).

At work we have a lot of existing programs designed initially for internal use. The
trend is to move functionalities provided by these programs to web servers so that
we can serve more customers over the internet. It is impractical to rewrite a lot of
these programs using the latest and greatest technology. Typically we wrap the
needed functionalities in various web methods and other programs will invoke these
methods. Most of our new web server programs are developed using .NET. To invoke
a web method from .NET code, according to the documentation, you need to add a
web reference to your project. A proxy class for the web service will be generated for
your project. You can also manually generate a proxy class using the wsdl.exe tool.
The .NET generated proxy class is hard-coded for a specific web service. You cannot
use the same proxy class to access a different web service. If your program needs to
use 10 different web services, you will have to generate 10 different proxy classes
and compile those classes into your code.

The .NET framework class SoapHttpClientProtocol is the base class for all
generated web service proxies. I have tried without success to use this base class to
build a reusable proxy class for general web services. Fortunately, I figured out my
own solution by using the SoapClient object in the Microsoft Soap Toolkit. The
SoapClient object "provides a client-side, high-level interface whose methods and
properties send a Simple Object Access Protocol (SOAP) request to the server and
process the response from the server". In order to invoke a web method in the most
flexible way, I have constructed a COM component which is a general web service
proxy. My proxy is a key component in the web service dispatcher program and it is
described in a separate article.

The web service dispatcher program


This program itself is a simple web service written in ASP.NET. It simplifies the work
of building and running web client/server programs. Here is how it works. First, we
register existing web methods on the web service dispatcher to make them available
to other programs. If you have a program that needs to access multiple web
methods implemented in different web services, instead of calling those methods
individually, you can write the code to call my web service dispatcher providing only
the name of the web method you want to call and the input arguments. The web
service dispatcher will forward your request to the corresponding server program and
also return the output to you. Again, your code needs to know only the name, the
input format, and the output format of the web method you want to invoke. So as
long as the name, the input format, and the output format of the web method do not
change, the server program that implements this particular web method can be
changed or moved to a different physical location without affecting your existing
code. In fact, the web method you want to invoke can be implemented in more than
one server program; some may be written in C++ and others in VB (or even in
Java).

Here is a summary of the main advantages of using the web service dispatcher.

 The programs that use various web methods need only to access the web
service dispatcher program.
 The server programs need not be implemented using .NET and they don't
have to be running on the Windows platform either. The same goes for client
programs. The dispatcher itself is implemented using .NET, however.
 Multiple server programs running on one or more machines can provide the
same web method (uniquely identified by its name string) through the web
service dispatcher so that the client programs can still function if one of the
server programs crashes.
 Multiple instances of the web service dispatcher can be used. The same server
program can make its web methods available through multiple dispatchers.

In order to use the web service dispatcher, we need to do the following.

1. Install and run an instance of the web service dispatcher program.
2. Register existing web methods on the dispatcher.
3. The other programs invoke the registered web methods through the dispatcher.

There are some restrictions on using the web service dispatcher. First, web methods
registered with the dispatcher can only take string arguments and the return value
must also be a string. This is to simplify the implementation of the web service
dispatcher. Since it is easy to use XML to represent complicated data types, this
restriction is really not a big deal. Secondly, a web method implementation must be
uniquely identified by the registered method name. For example, if two different
server programs each register a web method called GetPersonInfo on the same
instance of the dispatcher, then they must implement the same thing; you cannot
have one implementation return the birthday while the other returns the social
security number. The dispatcher assumes that all implementations of the same
method are the same and will randomly pick a server, if there is more than one, to
process a user request.
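
To make the string-in/string-out convention concrete, here is a minimal, hypothetical provider-side web method (shown in C#; the method name echoes the GetPersonInfo example above, but the XML shapes and code are illustrative assumptions, not code shipped with the dispatcher):

// Hedged sketch: a provider web method that respects the dispatcher's
// string-argument / string-result restriction by using XML payloads.
[WebMethod]
public string GetPersonInfo(string requestXml)
{
    // e.g. requestXml = "<Request><PersonId>42</PersonId></Request>"
    System.Xml.XmlDocument request = new System.Xml.XmlDocument();
    request.LoadXml(requestXml);
    string id = request.SelectSingleNode("/Request/PersonId").InnerText;

    // Pack the structured answer back into a single string.
    return "<Person><Id>" + id + "</Id><Name>Sample Name</Name></Person>";
}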

Here are the main public methods of the web service dispatcher.

RegisterMethod. This method takes a single string argument and returns a boolean
flag, true for success and false for failure. You can register multiple web methods
with only one call to this method. We will cover this method in detail later.

UnRegisterMethod. This is, of course, used to unregister a web method. Details
will be discussed later.

InvokeMethodX. Here X is a number ranging from 0 to 5. The first argument of this
method is the name string of the web method you want to invoke. If the web method
you want to invoke does not take any argument, then you use InvokeMethod0.
Otherwise, you can provide the input arguments by using InvokeMethod1,
InvokeMethod2, ..., and InvokeMethod5, depending on the number of
arguments. The returned value of InvokeMethodX is the empty string if an error
occurs; otherwise it is the output string of the web method you just invoked.

As you can see, we can use the web service dispatcher as some sort of "gateway"
and let all other programs register and access web methods through this gateway.
This will make the implementations of client and server programs easier and it is
likely a more flexible and distributed design. Now consider scalability and reliability:
what if there are too many users trying to access the web service dispatcher? Will it
become a bottleneck? What if the machine that runs the gateway goes down? The
answer is, we do not have to restrict ourselves to a single gateway.

For example, suppose we have 10 instances of server programs running on 10
different machines and each of them registers the same set of 15 web methods on 3
different instances of the web service dispatcher. If there are 300 simultaneous users (or
client programs) trying to call these 15 web methods, we could, in theory anyway,
divide these 300 users into 3 groups of roughly the same workload, and let each
group access a separate gateway. Please note that if one of the server instances
crashes, most client programs will not be affected because the remaining 9 server
instances are still providing the same set of web methods. The dispatcher program
will randomly pick a server instance if multiple exist for the same web method. This
example is purely imaginary and I have no resources to do any realistic testing.

How to install and use the web service dispatcher

The dispatcher has to be installed on a machine that already has the .NET framework
and the Microsoft Soap Toolkit 3.0 (you can still use Microsoft Soap Toolkit 2.0 by
modifying the web.config file).

Step 1. First, unzip the file WebServiceDispatcher.zip into a directory on the
target machine; you need to preserve the directory structure while unzipping.

Step 2. Create a virtual web directory WebServiceDispatcher pointing to the
above directory.

Then we need to register existing web methods with the dispatcher.

Step 3. Call the RegisterMethod method of the dispatcher.

The RegisterMethod method does not have to be called by the server instance that
implements the web method, but typically a server calls RegisterMethod to
register its web methods when starting up. The following VB script registers a web
method named GetData .

Option Explicit

Const WSDL_URL = _
    "http://localhost/WebServiceDispatcher/ServiceDispatcher.asmx?wsdl"

' Build the registration XML (a variable rather than Const, so the string
' can be concatenated across lines)
Dim INPUT_XML
INPUT_XML = "<Root><MethodList><Method>" & _
            "<MethodName>GetData</MethodName>" & _
            "<ProviderName>Server11</ProviderName>" & _
            "<ServiceURL>http://Server11.com/MyService/MyService.wsdl</ServiceURL>" & _
            "</Method></MethodList></Root>"

Dim spclt
Set spclt = CreateObject("MSSOAP.SoapClient")
spclt.mssoapinit WSDL_URL

If spclt.RegisterMethod(INPUT_XML) Then
    WScript.Echo "Web method registered"
Else
    WScript.Echo "Failed to register web method"
End If
Set spclt = Nothing

Here is an example of the input string to the RegisterMethod method. As you can
see, it is possible to register multiple web methods with one call.

<Root>
  <MethodList>
    <Method>
      <MethodName>GetName</MethodName>
      <InternalMethodName>GetName1</InternalMethodName>
      <ProviderName>NameProvider1</ProviderName>
      <ServiceURL>http://NameProvider1.com/WebService/WebService.wsdl</ServiceURL>
      <ProxyServer>MyProxy.Com</ProxyServer>
      <ProxyPort>91</ProxyPort>
      <AuthUser>Tester</AuthUser>
      <AuthPassword>123456</AuthPassword>
    </Method>
    <Method>
      <MethodName>GetName</MethodName>
      <InternalMethodName>GetName2</InternalMethodName>
      <ProviderName>NameProvider2</ProviderName>
      <ServiceURL>http://NameProvider2.com/WebService/WebService.wsdl</ServiceURL>
      <ProxyServer>MyProxy.Com</ProxyServer>
      <ProxyPort>91</ProxyPort>
      <AuthUser>Tester</AuthUser>
      <AuthPassword>123456</AuthPassword>
    </Method>
    <Method>
      <MethodName>GetAge</MethodName>
      <ProviderName>AgeProvider</ProviderName>
      <ServiceURL>http://AgeProvider.com/WebService/WebService.wsdl</ServiceURL>
    </Method>
  </MethodList>
</Root>

The <MethodName> string identifies the web method registered on the dispatcher.
The <ProviderName> string identifies a server instance that implements this web
method. The <InternalMethodName> string is the name of the web method on the
server instance that implements it, you can register the same web method from two
different server instances (two different providers) as demonstrated in the above
XML. It is assumed that <MethodName> and <InternalMethodName> are the same
in case <InternalMethodName> is not provided. When a request to invoke a web
method comes to the dispatcher, the dispatcher will randomly pick a provider if there
is more than one. The <ServiceURL> string is used by the dispatcher to access the
provider of a web method on behalf of the clients. The <ProxyServer> and the
<ProxyPort> strings are optional, they are needed only if there is a firewall between
the dispatcher and the provider of the web method. The <AuthUser> and the
<AuthPassword> strings are also optional; they are needed only if access to the
web method provider is restricted by the given user name and password.

It is OK to register a web method for the same provider more than once; the last
registration will override the previous ones. For example, if the <ServiceURL>
string was wrong when it was first registered, you can correct the information by
registering it again.

A server providing a web method through the dispatcher should call the
UnRegisterMethod method before shutting itself down. The format of the input
string for UnRegisterMethod is the same as that of RegisterMethod except
that you need only specify <MethodName> and <ProviderName>. If you don't
specify <ProviderName>, then the web method identified by <MethodName> will
be unregistered for all providers!

Step 4. Other programs access the registered web methods by calling
InvokeMethodX methods of the web service dispatcher.

Here is a VB script that calls the GetData web method registered in the above
script, assuming it takes only one input argument.

Option Explicit

Const WSDL_URL = _
    "http://localhost/WebServiceDispatcher/ServiceDispatcher.asmx?wsdl"

Dim spclt
Set spclt = CreateObject("MSSOAP.SoapClient")
spclt.mssoapinit WSDL_URL

Dim output
output = spclt.InvokeMethod1("GetData", "This is an input string")

If output <> "" Then
    WScript.Echo "Output: " & output
Else
    WScript.Echo "Failed to invoke web method"
End If

Set spclt = Nothing
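
Since most of our new web server programs are written in .NET, the same call can of course be made from C# as well. The snippet below is a hedged sketch that assumes a proxy class named ServiceDispatcher generated from the dispatcher's WSDL with wsdl.exe (or by adding a Web Reference); the proxy class name is an assumption:

// Hypothetical C# client for the dispatcher, using a wsdl.exe-generated proxy.
using System;

class InvokeGetData
{
    static void Main()
    {
        ServiceDispatcher dispatcher = new ServiceDispatcher();

        // InvokeMethod1: the method name plus one string argument, as described above.
        string output = dispatcher.InvokeMethod1("GetData", "This is an input string");

        if (output.Length > 0)
            Console.WriteLine("Output: " + output);
        else
            Console.WriteLine("Failed to invoke web method");
    }
}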

When writing real code, you should know what to send as the input arguments and
how to process the output. I have included code for a sample web service called
MathService which provides four web methods, Add, Subtract, Multiply, and
Divide . To install MathService , you need to unzip the MathService.zip file
into a directory on the target machine and create a virtual web directory named
MathService for it. Run the script RegisterMathService.vbs to register
methods of MathService on the dispatcher (assuming you already installed the
dispatcher). Run the script InvokeMathService.vbs to invoke the methods of
MathService through the dispatcher.

Other features of the web service dispatcher

The dispatcher program provides dynamic tracing capability. If you look at the
web.config file of the dispatcher, you may find the following text.

<appSettings>
<add key="TraceFilePrefix" value="Log\WebServiceDispatcherTrace" />
<add key="TraceLevel" value="40" />
<add key="TraceCleanup" value="7" />
<add key="SoapToolkitVersion" value="3.0" />
<add key="DataFile" value="Data\WebServiceDispatcherData" />
<add key="FailedAttemptLimit" value="3" />
</appSettings>

The value of TraceFilePrefix specifies where you want to create trace files. If
you set the value to c:\temp\Dispatcher, then trace files will be created in
directory c:\temp and the file name will be the string Dispatcher plus a datetime
stamp. By the way, a new trace file will be created for each day the dispatcher is
being used so that you won't get a gigantic trace file if the dispatcher keeps running
for months. The value of TraceLevel determines how much information you want
in the trace file. Level 0 means no tracing will be done. Level 10 will generate error
messages in the trace file. Level 20 means error plus warning messages. Level 30
will have additional messages indicating what internal functions have been called.
Level 40 provides the most detailed tracing, which includes all input strings and
output strings. You can dynamically invoke the SetTrace method on the dispatcher
to change TraceFilePrefix and TraceLevel.  In order to prevent the hard disk
from filling up, old trace files will be cleaned up automatically after 7 days (you can
modify this setting by changing the value of TraceCleanup in the web.config file).

Tracing for the dispatcher is implemented in the .NET component
Tools.TraceUtility.dll. This component is ported from a C++ utility described
in another codeproject article. The source code for this component is included in the
TraceUtility.zip file.

Suppose you are using the web service dispatcher in a production environment.
What happens if you have to restart IIS or reboot the machine? Do all server
programs have to re-register their web methods with the dispatcher? The answer is
no unless there is a real disaster (hard disk failure, blue screen of death, cannot
shutdown IIS, etc.). This is because when the dispatcher program is shutdown
normally, the registration information will be saved to a local file, and the
information will be restored from the file when the dispatcher is restarted.

As stated earlier, before a server providing a web method through the dispatcher is
shutdown, it should call UnRegisterMethod to clear the information about this
provider stored on the dispatcher. What happens if two server instances register the
same web method through the dispatcher and one of them becomes unavailable or
dies unexpectedly? Will the dispatcher be smart enough to always forward user
requests to the server instance that is still alive? The answer is no. The dispatcher
just randomly picks a provider for the web method to forward the request, therefore
some users may experience problems. However, the FailedAttemptLimit value in
the web.config file determines how many times a web method provider can fail to
respond to user requests. If the value is 10 , then the dispatcher will automatically
unregister a provider when it failed 10 times consecutively. If the
FailedAttemptLimit value is not specified in the web.config file, then the
default value will be 3. So if there are five server instances providing the same web
method through the dispatcher and four of them died a tragic death, then eventually
all user requests for this web method will be processed by the one instance that is
still alive!

Internal implementation of the dispatcher

The web service dispatcher program is implemented as a web service using ASP.NET.
It consists of the files web.config, Global.asax , ServiceDispatcher.asmx,
WebServiceDispatcher.dll , Tools.TraceUtility.dll , XYSoapClient.dll
(and the .NET generated Interop.XYSOAPCLIENTLib.dll ).

The XYSoapClient.dll is a regular COM DLL built with Visual C++ 6.0 which
implements the general proxy object for accessing various web services. It uses the
SoapClient object from the Microsoft Soap Toolkit (either version 3.0 or 2.0). The
reason I am using VC++ 6.0 instead of VC++ 7.0 to build this component is that I
want to be able to use it on older platforms (machines without the .NET framework,
etc.) and with older applications (VB 6.0 and VC++ 6.0 applications, etc.). The
source code for this project is in the XYSoapClient.zip file. I am going to write a
separate article to describe this component and give more C++ examples for
accessing web services.

Introduction

I did this stuff more than 2 years ago and just wanted to share it with "beginners" who
are learning why a web service can be very useful when bridging long distances and
being shared by many clients. This example shows you how to communicate with a DB
server on the Internet from a client application on your PC using the SOAP protocol.
Usually, Enterprise Information Systems (EIS) such as ERP, CRM and SCM are built for
internal use and installed on the LAN. We use a direct connection in that case and
almost never have to care about the speed of the connection. But what happens if we
implement an application with a DB server on the Internet (maybe not an EIS, but
conference or online-meeting software, for instance)? Can SOAP be one of the
possible answers?

Background

 First, we create a Web service that can connect to the database server (here
I use SQL Server) and run queries to return information or work with records
(a minimal sketch of such a web method follows below)
 Second, we create a Windows Forms client that can control the Web service
and display the information or do whatever we want with the DB
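
The snippet below is a minimal, hypothetical illustration of the first point. It is not the code from the article's ZIP (which relies on tungDbDriver.dll); the connection string, table and method names are assumptions:

// Hedged sketch of a web method that queries SQL Server and returns the
// result over SOAP as a DataSet.
using System.Data;
using System.Data.SqlClient;
using System.Web.Services;

public class DbService : WebService
{
    [WebMethod(Description = "Returns all customers as a DataSet.")]
    public DataSet GetCustomers()
    {
        using (SqlConnection connection = new SqlConnection(
            "Server=myDbServer;Database=myDb;User Id=myUser;Password=myPassword;"))
        {
            SqlDataAdapter adapter = new SqlDataAdapter(
                "SELECT * FROM Customers", connection);

            DataSet result = new DataSet();
            adapter.Fill(result);   // Fill opens and closes the connection as needed
            return result;
        }
    }
}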

Using the Code

You can find a full manual inside the ZIP file, together with a tutorial video. By the
way, since this code was written a long time ago, it's still in .NET 2003 and I'm not
sure if it can run on .NET 2005 (after conversion). And I'm sorry if someone asks
for tungDbDriver.dll inside the ZIP file: I lost that assembly's code, and in my
opinion that DLL is very simple to write. Since the main purpose of this article is to
give a tutorial on the advantages of web services, please forgive me about it. Thanks.

Points of Interest

I consider the most interesting point to be the way we can establish a connection with
a "remote" database server on the Internet. Imagine that the Yahoo or MSN chat
applications always require users to log in first. When a user keys in the username and
password, the client sends a message to the server, which checks that the user is
in the database with the password the user provided. I don't think they use the SOAP
protocol, but a client/server design with messages sent between the two ends is how
those applications communicate, similar to SOAP. There is "no direct connection"
between the client application and the database server.


Introduction

You've seen Napster, Gnucleus, Morpheus and similar file-sharing applications. If
you haven't, I'm sure you've heard about the exciting peer-to-peer (P2P) technology.
I don't doubt you've seen the big hype about web services, but you probably haven't
seen anything smarter than an HTML screen-scraping web service, or one exposing
some proprietary technology or software to the web-aware public.

This article demonstrates a virtual file system, based on P2P and web services, which
in all modesty I call "The Internet File System" or IFS. Unfortunately, big articles
look bad, so I'll show you how to use the library I've written, and if you are smart,
(and I know you are) you'll get the rest from the source code. Have a pleasant
reading...

What is Internet File System?

The Internet File System is an imaginary file system. A real Internet file system
could hardly exist, because of the variety of hardware and software platforms (OSes)
scattered around the world. The IFS simulates a file system by having a central
repository of folder and file links (shortcuts) published by a multitude of computers,
known as peers. Peers share files and folders in the repository, but do not use it to
actually store the files' contents. They communicate with each other instead, to
download (copy) the files between themselves.

The IFS Architecture

In this simple implementation, the repository is managed by the peers via a web
service (IFSWS), and the peer-to-peer communication is handled by a P2P
framework library (P2PLIB). Because it is complex to use the web service and the P2P
framework directly, I've built a stand-alone P2P server (P2PSRV), which
runs on the peer computers and handles all the P2P communication automatically.
Furthermore, to hide the details of the IFS implementation, I've built an Internet File
System library (IFSLIB) on top of IFSWS and P2PLIB, which exposes an easy-to-use
object model for manipulating the central repository, as well as easy copying
(downloading) of files between peers.

As you can see on the diagram above, there are two big parties here: the IFS Web
Service, and the peers:

The IFS Web Service

It is only responsible for storing and retrieving information - it just serves as a
repository. Actually, it registers/unregisters/logs in peers, publishes folders and files
and retrieves vital information about the peers, etc., but the coolest part of IFS is
done by the library.

The Peers

I'm not a native English speaker, I'm Bulgarian. That's why I can't explain exactly
what a peer is. In my dictionary, the word "peer" has just one meaning as a noun,
and it is not "a PC connected to the Internet that can communicate directly with
other PCs, using some communication protocol". However, that's the meaning of
peer I've put in this article.

There are 3 things that make the dumb PCs peers - these are the P2P library, the
P2P server, and the IFS library. The IFSLIB works on top of the web service, and the
P2P server on top of the library.

One can use the IFS library to build any kind of P2P application - a GUI-based Windows
Explorer-like IFS browser, console tools for automatic/scheduled download of files,
etc. I started working on an IFS browser, but due to the limited time (I started this
article to enter the web contest) could not finish it. However, I included the crippled
demo in the source code, so anyone who likes the idea (and can code in C#) could
finish (or rewrite) it. (Now, as I am updating the article two weeks later, the status
of the IFSBrowser is unchanged. That's because I thought that fixing bugs and
adding features to the library had a bigger priority, at least for me.)

Why this IFS is (or could be) cool?

Well, you have 3 scenarios when you have to copy a file from one peer to another.
To avoid tautology, let's name the interested peer (which wants to download a file)
P1, and the owner peer P2.

 Both peers are not behind firewall

No problem. P1 pulls the file from P2

 P2 is behind firewall

Ooops! P1 can't connect to P2 just that easily. So what can it do? It can
execute a web service method that will assign a task to P2 to push the file to
P1. The P2P server on P2's PC is scheduled to retrieve its tasks, using the
web service. When it gets a push task, it connects to P1 and pushes the file.
Is this easy? Yes, and I have almost implemented this feature.

 Both peers are behind firewall

Yeaah! What shall we do now? Well, if P1 could assign an upload task to P2,
the latter could upload the file, using the web service, to the web service's
server PC. When P2 is done, it could assign a download task to P1 and the
latter would download the file! Voila! This feature is not implemented at all but is
very easy to implement. However, because I'm pressed by a DEADline sooooo
badly, I don't promise that I'll implement it for the next week...

How it works?

How works what?:) OK, I'll try to focus and tell you how the library works. I won't
discuss how the basic functionality works, i.e. peer registration, log in, folder and file
publishing, etc. These are just entries in the database via the IFS web service. I'll
show you how a typical file download works in both of the easier scenarios, i.e. the
"no firewalls" and "owner is behind firewall" situations. And because I know you're
SMART guys (and I'm so lazy:) I haven't drawn any pictures, so read along...

No firewalls

 I'll use P1 and P2 again for the interested and the owner peer. P1 has an IfsFile
instance (see library objects below) and calls the Download method. The IfsFile
object has an OwnerPeer property, which returns an IfsPeer instance. The latter
has an IpEndPoint property, containing the IP address and port of the remote
peer. At this moment, the IFS library sends a "library pull" command to the P2P
server running on P1's PC. This command means that the library wants the server to
pull a file from a remote peer. The P2P server gets the remote peer's end point from
the command, as well as the file name P1 wants to pull and the folder in which to
download the file, and then sends a "peer pull" command to the remote peer (P2).
P2's P2P server accepts the command, gets the file size and CRC (CRC is not
implemented) and sends them back to P1. P1 now knows exactly how many bytes
to accept and starts to receive the bits P2 sends, writing them to the destination
folder. After P2 has sent all the bytes and P1 has received them, P1 sends an "OK"
response (or "ERROR" if something goes wrong) to P2 and closes the connection.
That's pretty much what a download is.

Owner behind firewall

 P1 assigns a "push task" to P2 via the IFS web service and "thinks" that the file has
been downloaded :) At some time, P2's P2P server reads its task via the IFSWS and
sees it has a "push task". P2P issues to itself (the task reader is in another thread) a
"library push" command. As the peer command listener thread receives the
command, it gets the file size and CRC (CRC again is not implemented) and forms a
"peer push" command. P2 connects to P1 (the task contains P1's end point) and
issues the "peer push" command and starts to stream the file. P1 gets the command,
receives the file and stores it in the destination folder, indicated by P2. (P2 knows
the destination folder, because P1 has sent it in the "push task".) That's all.

Using the library

There's a long way to go until you actually start using the library, but we'll get to
that stuff (Compiling and configuring IFS) soon. Now I just want to show you how
easy it is to use IFSLIB.

The library consists of only 4 classes that hide everything about the peer-to-peer and
web services stuff: IfsSystem, IfsPeer, IfsFolder and IfsFile.

In order to use IFS (after it is already set up), you should register in the repository
as a peer. There's nothing easier than that, as you'll see in a moment, but I'll warn
you about something first: unless you have registered or logged in as a peer with IFS, you
won't be able to use the IFS's most important object - the IfsSystem object. It is
implemented as a singleton to avoid having multiple instances of IFS peers on the
same peer computer. If you actually try to use even the simplest property of the
IfsSystem class, you'll get a runtime exception stating that you haven't logged
in/registered with IFS.

First, you have to register as a new peer:


// first, you'll have to get a "handle" of the IFS singleton object
IfsSystem __gc* ifs = IfsSystem::Instance;

// now you can register
IfsPeer __gc* peer = ifs->RegisterPeer (
    S"Stoyan Damov",    // alias
    S"Stoyan",          // login name
    S"Secret",          // password
    S"BG",              // country code (unused in this version)
    false);             // behind firewall or NAT?

Of course, you may not register more than one time in IFS (unless you're prepared
for exceptions), so once you have registered, the next time you should sign in:

IfsPeer __gc* peer = ifs->LoginPeer (S"Stoyan", S"Secret");

// in fact you can throw the peer away, you won't use it for
// anything, except for examining its properties

A peer (IfsPeer instance) has the following properties:

 ID - the unique peer ID in IFS
 Alias - peer's human-readable name
 Login - visible only to the peer (other peers see empty string)
 Password - ditto
 IPEndPoint - peer's IP address and port
 BehindFirewall - indicates whether the peer is behind firewall or NAT, i.e.
whether the peer's PC is "visible" or externally "connectable":)

Once you've registered or logged in, you can start using the IfsSystem object's
properties, the most important (and usable) of which is the RootFolder property,
which returns an IfsFolder object, representing the virtual root folder.

Working with folders

You can get the root folder just that easy:

IfsFolder __gc* root = ifs->RootFolder;

Each folder (including the root one) has several properties:

 ID - the unique folder ID in IFS
 Name
 Description
 OwnerPeer - the IfsPeer object, owner of the folder
 ParentFolder - the parent IfsFolder (evaluated on request and cached)
 PeerFolderPath - the physical folder on the owner peer's PC
 VirtualPath - the IFS logical path (evaluated on request and cached)
 Published - a boolean flag, indicating whether the folder has been published
or is a brand new IfsFolder instance

After you get to the root folder, there are many things you can do with it:

 publish a sub-folder in it
 get all of its sub-folders
 find sub-folders having a specified name (recursively)
 publish a file in it
 get all of its files
 find files having a specified name (recursively)

Actually you can perform these actions with all folder objects you get a pointer to,
and you can very easily get an arbitrary folder like this:

IfsFolder __gc* folder = IfsFolder::GetFolderByPath (S"./Docs/PDF");

Below, I am giving some examples of the aforementioned operations, and once
you've seen them, you can move on to some useful static methods of the IfsFolder
class.

You can publish a folder into an existing one:

IfsFolder __gc* subFolder = root->PublishFolderHere (
    S"VirtualFolderName",
    S"Description", // may be omitted in an overload
    S"c:\\physical\\folder\\path");

// and for more fun:
IfsFolder __gc* subSub = (root->PublishFolderHere (
    S"Folder",
    S"Description", // may be omitted in an overload
    S"c:\\physical\\folder"))->PublishFolderHere (
        S"SubFolder",
        S"c:\\physical\\folder\\subFolder");

or publish a brand new folder:

// when you know the destination path
IfsFolder __gc* folder = new IfsFolder (
    S"VirtualName",
    S"Description", // may be omitted in an overload
    S"c:\\physical\\folder\\path");
folder->Publish (S"./target/virtual/path");

// when you have the destination folder object
folder->Publish (targetFolder);

The lazy guys (this includes me) can publish folders using the static methods:

IfsFolder __gc* folder = IfsFolder::PublishFolder (
    S"VirtualName",
    S"Description", // may be omitted in an overload
    S"c:\\physical\\folder\\path",
    S"./target/virtual/path");

Oooh, I forgot to tell you how to rename a folder:

// I assume you got one already
folder->RenameTo (S"NewFolderName"); // wow! how difficult :)

There may (or may not) exist other methods (either static or instance ones) for
publishing a folder, but I think these were enough to show you how easily it is done.
Now, it's time to see what you can do with the published folders:

You can find sub-folders:

// the statement below will return all folders, at an arbitrary level
// below the "folder" one, whose name is "docs" (recursively)
ArrayList __gc* folders = folder->FindSubFolders (S"docs");

or get all folders:

// this statement will return all folders below the "folder" object
ArrayList __gc* folders = folder->GetFolders ();

or even find folders in the entire IFS:

ArrayList __gc* folders = IfsFolder::FindFolders (S"docs");

// the above is equivalent to:
ArrayList __gc* folders = root->FindSubFolders (S"docs");

Working with files

In the previous version of the article, I forgot to write a lot of things about the files. I
forgot to tell you that a file (IfsFile) has some useful properties:

 ID - the unique file ID in IFS
 Name - the file's name (w/o path)
 Description - guess what :)
 OwnerPeer - the IfsPeer object that owns the file :)
 PeerFilePath - the full path (including the file name) on the owner peer's PC
 Folder - the IfsFolder object where the file resides
 Published - a boolean flag, indicating whether the file has been published or is
a brand new IfsFile instance

Again, in the previous article, I mentioned that the IfsFolder has several instance
and static methods to publish a folder and a file. Do you know why the folder should
publish a file, and not a file publish itself? Because I was stupid. I was not able to
use the IfsFolder class in the IfsFile one, because I would create a cyclic
header include. Every C++ programmer knows s/he should not include the header,
but rather just declare the class in the header like class __gc* IfsFolder;.
That's what I did then, but it didn't work and I thought that either I suck, or Visual
C++ does. Well, I suck, but let me tell you why. I forgot that all classes in the IFS
library were wrapped in two namespaces. That's why either I should have written
class __gc* IfsFolder; inside the namespaces of the "IfsFile.h" header, or
wrap the declaration in the namespaces, like this: namespace
InternetFileSystem { namespace Library { public __gc class
IfsFolder; }}. So, that's what I did, and now the IfsFile class has six instance
or static methods for publishing.

You can create a brand new file like this:

IfsFile __gc* file = new IfsFile (
    S"fileName",
    S"file description", // may be omitted in an overload
    S"x:\\full\\path\\to\\fileName");

And publish it like that:

// a) calling the static Publish method (laziest)
IfsFile __gc* file = IfsFile::Publish (
    S"fileName",
    S"file description", // may be omitted in an overload
    S"x:\\full\\path\\to\\fileName",
    S"./ifs/target/path/");

// b) calling another static Publish method (you should have an
// IfsFolder before that) assuming you have the targetFolder,
// which is an instance of the IfsFolder class
IfsFile __gc* file = IfsFile::Publish (
    S"fileName",
    S"file description", // may be omitted in an overload
    S"x:\\full\\path\\to\\fileName",
    targetFolder);

// c) You have a brand new file and you want to publish it
file->Publish (S"./ifs/target/path/");

// d) You have a brand new file and an IfsFolder instance
// (targetFolder);
file->Publish (targetFolder);

You can get an IfsFile object in several ways:

// get the file (assuming you have the folder already)
IfsFile __gc* file = folder->GetFile (S"readme.txt");

// get a folder's files
ArrayList __gc* files = folder->GetFiles ();

// or search in the whole IFS for a given file
ArrayList __gc* files = IfsFolder::FindFiles (S"readme.txt");

// now you get a file like this:
IfsFile __gc* file = static_cast<IfsFile __gc*> (files->get_Item (0));

The typical scenario is to download a file from a remote peer:

// this may not happen instantly
file->Download (S"c:\\local\\folder");

Now, a file's folder is just its property Folder.

IfsFolder __gc* folder = file->Folder; // easier, I think :)

And finally, guess how a file is renamed... I'll leave it to your imagination, but the
method should look like RenameTo:)

There are more instance and static methods of the IfsFolder and the IfsFile
classes but you can see and learn them by browsing the source code.

The IFS Browser

Well, I tried to write a big example of how you can use the library. It is (will be) a
fully fledged Windows Explorer-like IFS browser, and I called it "IFS Browser" :). I
ran out of time, so I couldn't finish it, but I've implemented the basic functionality:

 Register peer
 Login peer
 Address bar, where you can type IFS URLs, e.g. ifs://pub/docs/
 A folder tree at the left - it works and expands folders on demand
 A file list at the right - it works and can rename files
 Up one folder toolbar button
 The menus and the toolbar buttons I think are worth having (most of them do
not work yet)
 See the Latest changes section for more...

You can implement all the other features in a couple of hours, believe me! However,
I'll implement them next week, so if you can wait, you'll get everything for free.
Below is a screenshot of the IFS Browser in action:

Compiling and configuring IFS

If you own a copy of Visual Studio .NET you don't have to do much more than compile
the solution file. But if you don't own one, please do yourself a BIG favor and buy
it; otherwise you'll have to wait a week, until I finish version 2 of this article and
explain the manual command-line compilation.
Compilation

Open the solution file in VS .NET and build it. This step will produce the following
binaries in the Bin folder:

 IFSBrowser.exe - the Windows Explorer-like IFS browser
 InternetFileSystem.P2PServer.exe - the peer-to-peer server
 IFSWebService.dll - the IFS web service proxy
 InternetFileSystem.Library.dll - the library you'd use
 InternetFileSystem.P2PFramework.dll - the framework, used by P2PSRV
 InternetFileSystemWebService.dll - the IFS web service

Configuration

Except for changing the ConnectionString setting, explained below, I can't remember
anything else. There's a configuration file that will be created automatically by the
P2P library (and the IFS library will add your IP address to it), where you can change
some parameters to fit your needs:

 Port - this is the port #, the P2P server will listen on (don't touch it, unless
you touch it everywhere and every peer in the network does the same)
 P2PFmkNamespace - this is the name of the namespace, where the P2P
framework classes reside, and is used by the server to instantiate the
appropriate type, when a request comes. If you change the namespace of the
peer-to-peer library, you should change this setting as well.
 ThreadSleepTime - this is the sleep time in milliseconds, which the server's
worker thread sleeps, waiting for requests on each loop
 ThreadJoinTime - this is the timeout in milliseconds given to the server to
stop, until the thread is forcefully aborted

The config file will be created in the common application folder, that's X:\Documents
and Settings\All Users\Application Data\ on Windows 2K and above.

Setting up the IFS database

I've chosen Microsoft SQL Server for the back-end of the IFS Web Service, because:

 I love Microsoft :)
 I have more experience with MS SQL Server than with any other database,
although even XML or plain text files could suffice to run the IFS (but you would have to
change a lot of the web service code)

The IFS database is so simple, that a simpler database could hardly exist. Just look
at the picture below and see why:
 

I think I shouldn't explain anything here, should I?

Now, to set up the database you just have to run one SQL script in your favorite tool
(osql, isql, isqlw [Query Analyzer]). The script is named InstallDatabase.sql and
resides in the Database folder of the zipped source code. It will create a database
called "IFS", its tables and its stored procedures. NOTE: you'll have to edit the
InstallDatabase.sql script and modify the physical location of the database, because I
had no time to even write a simple parameterized batch file. That's it. In order for
the IFSWS to work with the database, you have to change the ConnectionString
setting in the <appSettings> section of the Web Service's web.config file. I guess
that's all you should do. To uninstall the database, run the UninstallDatabase.sql
script or manually drop the database (which is what the script does).
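
For example, assuming a local SQL Server instance and Windows authentication (adjust the server name and credentials to your environment), the install and the web.config change might look like this:

osql -E -i InstallDatabase.sql

and, in the Web Service's web.config:

<appSettings>
    <add key="ConnectionString"
         value="server=(local);database=IFS;Trusted_Connection=yes" />
</appSettings>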

About MC++, some tricks 'n' traps

Why did I write IFS in Managed C++?

I've read and learned the "Managed extensions for C++" specification and the
migration guide as soon as Microsoft released them to the public. However, I am a
full-time developer and I don't have the time to play around with MC++, because of
the "mission impossible"-deadlines, because I study a lot more stuff (e.g. preparing
for 2 MCSD .NET exams, learning HLA, MASM, ATL/WTL 7, ATL Server, etc.) and last,
but not least, I have to pay attention (or whatever you call it :) to my wife. So I
from C# (daily job) to MC++ (nightly fun) and vice-versa. IFS is my first > 100 LOC
MC++ work (actually it is >5000 LOC), and I'm thankful to Chris Maunder for setting
up the Web development contest, helping me practice MC++.

In my opinion, MC++ is no more powerful than C# if you only use the .NET
Framework and the managed extensions (w/o IJW and unmanaged code). It is
actually slower to write MC++ code and you will forget the __gc* quite often on
your first 2,000 MC++ lines of code. Furthermore, you will get sick of Microsoft's
perverse syntax for functions returning managed arrays, like: unsigned char
ReturnsManagedCharArray () __gc[];. However, VC++ .NET is the best choice
for writing either managed or unmanaged applications, because you actually have
two languages, and an arsenal of SDKs.

Tricks and Traps

Well, the thoughts I'm about to share are not big tricks for those who already have
experience with the .NET framework and MC++, but I know there are some guys
who will appreciate them, and I wrote these before I wrote this paragraph :) You
may also want to know that I've hidden an MC++ compiler bug in the text, so keep
reading...

 Playing around with .NET since Beta 1, I found myself reinventing the wheel
more than a year later (now), which is one of the worst things one may do
as a programmer. Because I had no experience with the network classes, I've
almost re-implemented the NetworkStream and a couple of other classes. It
was no fun, and I felt damn stupid when I saw them in the docs. Do not be
tempted to re-invent the wheel. Don't excuse yourself, saying you have no
time to read and learn everything (like I do:). You have to. It will do you only
good, believe me.
 Do not expose your __gc classes' destructors with the public modifier, or
you'll see funny dtor() methods in the C# IDE. And then, learn the usage
of two visibility modifiers (e.g. private public) to hide the internal
(assembly) methods from the public.
 Do not forget the "get_" part on your properties, and do not forget the
__property keyword in front :). Then, remember that __property wants to
stay before the static keyword. Fate. (There's a small sketch of the __property convention after this list.)
 If you wrap your streams with reader/writers, let them own the stream. That
will save you from closing both, as closing the latter will also close the
underlying stream. This applies to the NetworkStream as well, which closes
the underlying socket.
 Do not omit the "S" prefix on hardcoded strings, or you will get errors from
the compiler, which tend to point to the right place, but show strange
messages sometimes. Furthermore, if you have N S"Hello" strings, they will
all point to the same String __gc* object (I guess only if you have turned
string interning ON)
 Reflection, in my opinion, is one of the coolest features of .NET. Learn it and
know it, as it can come in very handy sometimes (see the Server class'
HandleRequest method for reflection in action, saving the otherwise
inevitable switch [the switch is faster than reflection, though]); a small dispatch sketch follows this list.
 You can kill me for this, but do use gotos when other techniques will kill you.
(Imagine you want to check N conditions in a try-catch-__finally block and, if
the conditions are not met, you want to exit from the try block but execute
some code after the __finally block. What will you do, huh? Have N ifs
nested? Invent a break_block keyword? :) ) I even saw Jeff Richter using
goto to exit to the end of a try-catch block.
 For Microsoft's sake, do not rename your web service's files. Think up the
service name before you write it in the IDE's dialog box, please! I had tons of
problems!
 Do not modify the automatically created VS.NET files related to web services
(.config, .asmx, .disco, etc.) in the folder they were deployed to. Modify them in
your project's folder - otherwise, VC++ .NET will detect the changes and will
overwrite them the next time you recompile the WS.
 Here's the compiler bug: use static_cast to unbox enums. Do not use
dynamic_cast or __try_cast or the compiler will crash. I thought I had found
this bug first, but once I posted it, I learned that someone else had found it a
month ago :)
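
To make a couple of the points above concrete, here is a small sketch (not taken from the IFS sources - the class and method names are invented for illustration) showing the __property/get_ convention, a writer owning its stream, and reflection-based dispatch instead of a switch:

#using <mscorlib.dll>
#using <System.dll>

using namespace System;
using namespace System::IO;
using namespace System::Net::Sockets;
using namespace System::Reflection;

public __gc class Peer
{
public:
    // __property plus the mandatory get_/set_ prefixes
    __property String* get_Address() { return m_address; }
    __property void set_Address(String* value) { m_address = value; }

private:
    String* m_address;
};

// Let the writer own the stream: closing the writer closes the
// NetworkStream, and the NetworkStream closes the underlying socket.
void SendGreeting(Socket* socket)
{
    NetworkStream* stream = new NetworkStream(socket, true);
    StreamWriter* writer = new StreamWriter(stream);
    writer->WriteLine(S"HELLO");
    writer->Close(); // no need to close 'stream' or 'socket' separately
}

// Reflection-based dispatch: find the handler method by the request's
// name instead of writing a switch (the switch is faster, though).
void Dispatch(Object* handler, String* requestName)
{
    MethodInfo* method = handler->GetType()->GetMethod(
        String::Concat(S"Handle", requestName));
    if (method != 0)
        method->Invoke(handler, 0); // this sketch assumes no arguments
}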

A thought about warnings I wanted to share...

I dreamed several months ago (don't laugh) that I had invented a new programming
language (as if the current ones are not enough) and in my dream I named it "p". It
had a C++-like syntax, but it was very strange, because it had no control statements
like do/while/for, etc. Instead, it had built-in algorithms (like those in STL) that fit
every case in the world :) And one of the coolest features was that "p" could
throw exceptions and warnings! In fact, I think high-level languages like
C#/VB/Java... deserve such a feature. Just imagine you have a method that expects
some parameters, examines them but decides to do its job in a more efficient way,
ignoring the parameters you've passed to it. It could throw a warning, and you could
catch it only if you are interested in it, like this:

void IntelligentMethod (int someHintValue)
{
    int aBetterValue = CalculateBetterValue (someHintValue);
    if (aBetterValue != someHintValue)
        warn (new Warning (S"Ignoring someHintValue"));
    // ...proceed with aBetterValue
}
Yes, I know it could be done very easily with events, but it's just not the
same, just as typing op_Equality is not the same as typing ==. And really, I
miss HRESULTs! We don't have a severity, a facility, etc. We don't even have
some code. I know I could write my own ApplicationException-derived
exception with a code, etc., but the inevitable switch on the code will suck and
will break the idea of catching the right (and expected) exceptions, like:

catch (MyException*) { /* handle it */ }
catch (YourException*) { /* ditto */ }

In fact, this could be achieved in the following way:

// the base exceptions
public __gc class CriticalSeverityException : public Exception { ... };
public __gc class MediumSeverityException : public Exception { ... };
public __gc class LowSeverityException : public Exception { ... };

// the specific ones
public __gc class OutOfDiskSpaceException : public CriticalSeverityException { ... };
public __gc class AccessDeniedException : public CriticalSeverityException { ... };
public __gc class BusinessLogicException : public MediumSeverityException { ... };
// the one below is something like a warning...
public __gc class NearQuotaLimitException : public LowSeverityException { ... };
and now, we can handle the exceptions in the following manner:

try
{
    // do something, throwing exceptions
}
// this will catch both OutOfDiskSpace and AccessDenied exceptions
catch (CriticalSeverityException __gc* e) { /* whatever */ }
catch (BusinessLogicException __gc* e) { /* catch a specific one */ }
// catches all LowSeverity exceptions
catch (LowSeverityException __gc* e) { /* ... */ }
__finally { /* ... */ }
I'd really like to see one day a construct like this (or not exactly like this):

try
{
    // do something which throws exceptions and raises warnings
}
catch_warning (SpecificWarning __gc* w) { /* handle warning */ }
catch (Exception __gc*) { /* ... */ }
But enough. I must have lost my mind :) If you have some thoughts, share
them with me, and please comment on this one. I really want to know if
someone else thinks that ANY language needs warnings.

TODO(s): Oh, my!

I don't even want to start this section, but I have to. I want to share with
you what I wanted to put in IFS, but as it was developed for the contest, I
had no time. I will (eventually) add many more features once I have some
free time (which is never), but for those enthusiasts who want to improve
on it, here's the (LONG) list:

 you can't see them in the DB ERD, but they exist! -- the tables
Attributes, FileAttributes and FolderAttributes. I thought that it
would be very stupid to expand the Files and Folders tables for
some properties like size, author, last accessed, blah blah, etc. So I
initially designed the tables to support attributed files and folders,
but haven't implemented them in the IFS web service (though I've
implemented the prototypes in the IFS library). So it will be cool to
implement them one day (maybe the day before I retire :))
 search for folders and files with wildcards ("*", "?")
 ditto, with regular expressions (like in Linux)
 copy (download) a whole folder (+ recursively) - actually this one is
easy, one just has to use the IFSLIB to traverse and copy the folder (see the sketch after this list)
 upload/download Peer tasks to handle the "the 2 Peers are behind a
firewall" situation
 if 2 or more peers had registered the same file and a peer asks for it,
IFSWS should choose the closest peer (maybe based on the
CountryCode field) - currently, neither the "single-file-owned-by-
multiple-peers" case nor the "choosing-the-closest-peer" behavior is implemented
 write a full featured IFS Browser (GUI File manager)
 write a Peer-to-Peer server in ANSI C/C++, so the Linux guys can use
the IFS too
 handle SOCKS proxy, SSL connections, etc.
 a virtual file system of such magnitude can't go live without some sort
of load balancing - however, neither do I have a bunch of computers
(hey, I live in Bulgaria :), nor do I have the time to convert the current
IFS to a super-duper scalable one
 I can think up at least 20 more features...
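
For the recursive folder copy item above, the idea is just a depth-first walk over the IFS library's folder and file objects. The sketch below is purely illustrative: the IfsFolder/IfsFile members it uses (GetFolders, GetFiles, Name, Download) are assumed names and almost certainly differ from the real IFS library API, and it presumes the usual using namespace System / System::IO directives:

// assumed members only: IfsFolder::GetFolders()/GetFiles(), IfsFile::Name/Download()
void CopyFolder(IfsFolder* source, String* localPath)
{
    Directory::CreateDirectory(localPath);

    // download every file in this folder
    IfsFile* files __gc[] = source->GetFiles();
    for (int i = 0; i < files->Length; i++)
        files[i]->Download(Path::Combine(localPath, files[i]->Name));

    // then recurse into each subfolder
    IfsFolder* subFolders __gc[] = source->GetFolders();
    for (int i = 0; i < subFolders->Length; i++)
        CopyFolder(subFolders[i], Path::Combine(localPath, subFolders[i]->Name));
}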

Conclusion

Vigorous writing is concise. A sentence should contain no unnecessary
words, a paragraph no unnecessary sentences. [William Strunk Jr.]

Because I'm faaaar away from being a vigorous writer, thank you for reading the
article! It was my first article and I found that I write code 10 times faster
than plain text :) Frankly, I envy the famous book writers - they must be
really smart guys! Now, about the article: I hope you saw how easy it is to
use the IFS library. If you examine the source code coming with the article,
you'll see how easy it is to implement a simple Internet File System. I've
written it in two weeks (one week originally; now it is updated) in my spare time
(which is my sleep time), and I'm not a typist, so you can do it in even
less time. That's .NET - a RAD framework for today's rushing world. Bugs
happen, and they happen even at Microsoft, but you shouldn't let that stop
you from learning and practicing this new exciting technology, which in my
opinion will rule the development world in a year or so (tell me frankly,
have you ever seen a technology producing more than 150 books in less
than 6 months? I haven't.)

Reporting bugs

Yup, they exist. And they bite :) As of this writing, there are no bugs in IFS
(at least, I don't know of any). However, there's one 100% Microsoft bug: it
is either in the ImageList, the Resource manager, or the ToolBar
class. You put some images in an image list, you set the image list to a
toolbar, set the appropriate image indices to the toolbar buttons and you
expect them to show up, right? Wrong! Either they won't show up at all, or
one of them will show up everywhere! That's why I distribute the icons for
the toolbar and place them on the toolbar with code (a small sketch of this
workaround follows below). You should copy
IFSBrowser\Resources\*.ico to the Bin folder, or the IFS Browser will
crash. In the previous edition of the article, I said there was a third bug,
concerning exception handling. I kind of fixed that, and I added some
meaningful exceptions that the IFS library throws around :)
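
Here is a small sketch of that workaround (the icon file names are examples only; the real ones are the .ico files from IFSBrowser\Resources): the ImageList is filled in code and attached to the ToolBar, bypassing the designer-generated resources:

using namespace System::Drawing;
using namespace System::Windows::Forms;

void LoadToolbarIcons(ToolBar* toolbar, ImageList* images)
{
    // load the icons copied next to IFSBrowser.exe (example file names)
    images->Images->Add(new Icon(S"Up.ico"));
    images->Images->Add(new Icon(S"Refresh.ico"));
    toolbar->ImageList = images;
    // each ToolBarButton's ImageIndex is then set to match the Add() order
}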

The lack of documentation is a BIG bug. I promised I'd make some, but
unfortunately right now I'm under big pressure at work, so I'll generate the
.CHM help in the next version of the article. Sorry!
Send all other bugs (and cheers :) to stoyan_damov@hotmail.com. I'll be
more than glad to fix them. However, if you find a bug, please send it to me
(plus the fix, please)! Thanks!

Latest changes

I hope you haven't read the previous version of the article. Here's why:

25 Sep
 Yay! I got the "push task" running! The remote peer can now be
behind a firewall/NAT. (BTW, I underestimated the complexity and lost
several hours chasing (non)existent bugs :)
 Fixed 2 bugs (I'm ashamed to share) in the P2P library
 IFS library checks for null (0) arguments now and throws
ArgumentNullException
 Added a couple of stored procedures, fixed one
 Worked a little bit on the IFS Browser: added file and folder rename,
the browser navigates when you press ENTER (as expected), the up
button is now working no matter how you get to any folder (i.e. you
can type "ifs://./folder/subfolder" and when you press the Up button
on the toolbar, the browser will navigate to the parent folder).

24 Sep
 Edited this article and put the "Latest changes" section in it :)

21 Sep
 Fixed a couple of synchronization bugs in the P2P library
 Made the P2P library stream files in chunks

19 Sep
 Added Folder property to the IfsFile class
 Added MoveTo method(s) to IfsFile
 Added several publishing methods to IfsFile
 Rewrote a couple of stored procedures and wrote the missing ones :)
 Fixed some bugs resulting from invalid object(s) state
 Added proper access modifiers to all classes
 Added tasks support to the IFS web service and P2P server

16-18 Sep
 Fixed 2 trifling bugs
 Improved (rewritten) the NetHelper class
 Added (proper?) exception handling everywhere (anywhere I
remembered to do that:)

14 Sep
 Initial version (the one I hope you haven't seen)

Disclaimer

The software comes "AS IS", with all faults and with no warranties.
Please, take the best disclaimer from any open source license, read it and
memorize it. FREE software = NO WARRANTY :) However, I grant you
full rights to do ANYTHING with the source code (except sue me for it :), and
the only thing I ask is that you thank me in your mind :)
