Philippe Truche’s Blog

27 October 2008

PDC 08 Keynote – A Lukewarm Mood


Here I am at my third PDC (I attended PDC03 and PDC05), and I have to say that the mood at the keynote was the most lukewarm I have seen in the last three PDCs.  Perhaps it had to do with the topic: cloud computing.  Not that developers don’t care about it, but I think it tends not to be foremost in developers’ minds.  When I think of the operations people I work with, though, I suspect they would be impressed by the advances being made in manageability within the Microsoft platform.

It’s early in this PDC, but my take so far is that we are at a transition PDC.  Developers are still absorbing or getting educated on a large number of technologies like ASP.NET MVC, LINQ, Silverlight 2.0, etc.  This PDC might just be incremental, showing a glimpse of the direction in which Microsoft is heading, with opportunities for feedback from the developer community.

The rest of this week will tell.

12 August 2008

.NET Assembly Versioning Lifecycle

While it may seem that versioning assemblies in .NET is a black art, it really isn’t.  The difficulty lies in the lack of guidance in this area.  In fact, I had to read many blogs to put together an approach on a project I was working on a few years ago (see Suzanne Cook’s excellent blog post “When to Change File/Assembly Versions“).

I recently moved to a new project and found that this guidance is still helpful.  As a result, I decided to share it because it is still not discussed much, yet it plays an important part in the configuration management of a .NET application.

Version Attributes
The .NET Framework makes three separate version attributes available:

  • AssemblyVersion.  As discussed in Assembly Versioning, the .NET runtime uses the assembly version number for binding purposes.  It is used for .NET internal purposes only and should not be correlated to the product version.
  • AssemblyFileVersion.  Instructs a compiler to use a specific version number for the Win32 file version resource. The Win32 file version is not required to be the same as the assembly’s version number.
  • AssemblyInformationalVersion.  “The attribute defined by this class attaches additional version information to an assembly” for documentation purposes only.  It is text-based informational version and typically corresponds to the product’s marketing literature, packaging, or product name.  This data is never used at runtime.

[assembly: AssemblyVersion("3.0.0.0")]
[assembly: AssemblyFileVersionAttribute("3.0.2134.6636")]
[assembly: AssemblyInformationalVersion("3.0.1")]

Version Formats
Version formats are suggested for the assembly version (AssemblyVersionAttribute), the file version (AssemblyFileVersionAttribute), and the “product number” (AssemblyInformationalVersionAttribute).

Assembly Version Format
Version information for an assembly (set by using the AssemblyVersion attribute) consists of the following four values, as described in the Version Class in the .NET framework documentation.

  • Major Version. “Assemblies with the same name but different major versions are not interchangeable. This would be appropriate, for example, for a major rewrite of a product where backward compatibility cannot be assumed.”
  • Minor Version. “If the name and major number on two assemblies are the same, but the minor number is different, this indicates significant enhancement with the intention of backward compatibility. This would be appropriate, for example, on a point release of a product or a fully backward compatible new version of a product.”
  • Build Number.  “A difference in build number represents a recompilation of the same source. This would be appropriate because of processor, platform, or compiler changes.”
  • Revision.  “Assemblies with the same name, major, and minor version numbers but different revisions are intended to be fully interchangeable. This would be appropriate to fix a security hole in a previously released assembly.”

The assembly version number is used internally by .NET and should not matter to anyone outside of the development team.

File Version Format
This is the actual file version (set by using the AssemblyFileVersionAttribute attribute) and it satisfies the following goals:

  • It correlates the product binaries to the source files from which they were compiled (as long as labeling is performed in the source code control database).
  • It allows for the re-creation of older builds.
  • It clearly identifies upgrade and bug fix releases.
  • It clearly identifies which version of the source code is in production.

The File Version could follow this format:

  • Major Version. This is the internal version of the product and is assigned by the application team.  It should not change during the development cycle of a product release.
  • Minor Version. This is normally used when an incremental release of the product is planned rather than a full feature upgrade.  It is assigned by the application team, and it should not be changed during the development cycle of a product release.
  • Build Number. The build process usually generates this number.  Keep in mind that the numbers cannot be more than 5 digits.  One scheme for generating this number is what I call the ‘YearDaysCounter.’  The first 1-3 digits pertain to the day of the year on which the build is being performed.  It has the advantage of being sequential, going from 001 to 365 as time progresses through the year. The last 2 digits pertain to the iteration of the build for a particular day and thus allow a range of 01 through 99.
  • Revision. This could be assigned by the build team and could contain the reference number of the migration to the production environment.  When it is not known yet, a 0 is used until the number is issued.
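As a sketch, the ‘YearDaysCounter’ build number described above could be computed like this (the class and method names are my own, purely illustrative):

```csharp
using System;

public static class YearDaysCounter
{
    // Computes a build number from the day of the year (1-366) and a 2-digit
    // per-day build iteration (01-99). Day 32 (1 February), build 3, yields 3203.
    // The maximum, 36699, stays well within the 65535 limit for a version part.
    public static int GetBuildNumber(DateTime buildDate, int iterationOfDay)
    {
        if (iterationOfDay < 1 || iterationOfDay > 99)
            throw new ArgumentOutOfRangeException("iterationOfDay");
        return buildDate.DayOfYear * 100 + iterationOfDay;
    }
}
```

For example, the first build of the year produces build number 101, and the numbers remain sequential as the year progresses.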

Product Number Format
This is the product number that is communicated to stakeholders outside the development and build teams (e.g. Application XYZ is on release “3.0.1”).  It follows the simpler 3-part format <major>.<minor>.<maintenance>.  The following guidelines are useful in changing the product version number:

  • Major Release.  Increment this number when major functionality is being released.
  • Minor Release.  Increment this number when alterations or enhancements to existing functionality are made and change the end-user experience.
  • Maintenance Release.  Increment this number when bug fixes or performance enhancements are released that involve no functionality changes or material ‘look and feel’ changes to the user interface.

Like the assembly version number, the product version number should only be changed when a release goes to production.  It is set by using the AssemblyInformationalVersion attribute.

14 July 2008

Using the Enterprise Library Validation Block with WCF Asynchronous Operations

Filed under: .NET, WCF — Philippe Truche @ 3:58

I was recently surprised to find out firsthand that the Enterprise Library Validation Block throws a somewhat obscure ArgumentNullException("operation.SyncMethod") when the WCF AsyncPattern is employed.

By debugging into the Enterprise Library source code (EntLib 3.1), I came to understand that the Validation Block was coded to handle synchronous service operations only (using the FaultContract<ValidationFault> attribute on the service operation signature).  In our case, some of our operations also implement the asynchronous pattern, and we have Begin and End method pairs for each.

While I was aware that one-way operations do not allow faults to be returned, I wasn’t sure about asynchronous service operations (implemented as asynchronous on the service side).  I checked the .NET documentation and was able to confirm that FaultContract<T> is supported on service operations implemented asynchronously using the AsyncPattern property.  A difficult-to-find comment exists in the documentation of the FaultException<TDetail> generic class: “The FaultContractAttribute can be used to specify SOAP faults for both two-way service methods and for asynchronous method pairs.”

With this information in hand, I reasoned that I should be able to modify the Validation Block.  I made the code changes, ran the NUnit tests included in the Validation Block, and employed the modified assembly on our asynchronous service operations.  I made modifications to ValidationParameterInspector.cs and ValidationBehavior.cs.  The scope of the modifications included the following methods:

  • ValidationParameterInspector.ValidationParameterInspector
  • ValidationParameterInspector.BeforeCall
  • ValidationBehavior.HasValidationAssertions

Note that the modifications apply to both EntLib 3.1 and EntLib 4.0.  The modifications are included below:

Changes to ValidationParameterInspector.cs

/// <summary>
/// Constructor to initialize <see cref="InputValidators"/>.
/// </summary>
/// <remarks>
/// When the operation is a <see cref="OperationDescription.BeginMethod"/> because it implements
/// the <see cref="OperationContractAttribute.AsyncPattern"/>,
/// it will cause the <see cref="InputValidators"/> to have empty validators for the
/// <see cref="AsyncCallback">callback</see> and <see cref="Object">state</see> method
/// arguments.
/// </remarks>
/// <param name="operation">The <see cref="OperationDescription"/> of the operation.</param>
/// <param name="ruleSet">A <see cref="String"/> naming the ruleset to be used.</param>
public ValidationParameterInspector(OperationDescription operation, string ruleSet)
{
    // Check the synchronous method of the service first. If the operation implements the
    // OperationContractAttribute.AsyncPattern property, set the method to the BeginMethod.
    MethodInfo method = operation.SyncMethod;
    if (method == null) method = operation.BeginMethod;

    foreach (ParameterInfo param in method.GetParameters())
    {
        switch (param.Attributes)
        {
            case ParameterAttributes.Out:
            case ParameterAttributes.Retval:
                break;

            default:
                inputValidators.Add(CreateInputParameterValidator(param, ruleSet));
                inputValidatorParameterNames.Add(param.Name);
                break;
        }
    }
}

/// <summary>
/// Performs the validations.
/// </summary>
/// <param name="operationName">A <see cref="String"/> containing the name of the operation.</param>
/// <param name="inputs">
/// An <see cref="object"/> array containing the input parameters of the method as described on
/// the operation, and thus excluding input parameters resulting from the service’s implementation of the
/// <see cref="OperationContractAttribute.AsyncPattern"/> property.
/// </param>
/// <returns>null</returns>
public object BeforeCall(string operationName, object[] inputs)
{
    ValidationFault fault = new ValidationFault();

    // Because there could be fewer inputs than inputValidators, iterate
    // through the inputs to run the corresponding validations. This accounts
    // for asynchronous methods where the callback and state are defined in the
    // method but are not exposed outside the boundary of the service.
    for (int i = 0; i < inputs.Length; ++i)
    {
        ValidationResults results = inputValidators[i].Validate(inputs[i]);
        AddFaultDetails(fault, inputValidatorParameterNames[i], results);
    }

    if (!fault.IsValid)
    {
        throw new FaultException<ValidationFault>(fault);
    }

    return null;
}

Changes to ValidationBehavior.cs

/// <summary>
/// Returns a <see cref="Boolean"/> to indicate whether or not an operation has validation assertions.
/// </summary>
/// <remarks>
/// If the method implements the <see cref="OperationContractAttribute.AsyncPattern"/> property,
/// the BeginMethod is used instead of the SyncMethod because the SyncMethod is null.
/// </remarks>
/// <param name="operation">The <see cref="OperationDescription"/> of the operation.</param>
/// <returns>true if the operation has validation assertions; otherwise, false.</returns>
private bool HasValidationAssertions(OperationDescription operation)
{
    if (operation == null) throw new ArgumentNullException("operation");

    // Check the synchronous method of the service first. If the operation implements the
    // OperationContractAttribute.AsyncPattern property, set the method to the BeginMethod.
    MethodInfo methodInfo = operation.SyncMethod;
    if (methodInfo == null) methodInfo = operation.BeginMethod;

    return methodInfo.GetCustomAttributes(typeof(ValidatorAttribute), false).Length > 0 ||
        …;
}
I created a copy of this post on CodePlex.

2 June 2008

WCF, Conditional Compilation, and language differences

Filed under: .NET, WCF, Web Services — Philippe Truche @ 11:46

Perhaps you have defined WCF message contracts lately, and you want to be explicit about the protection level you require for your message parts.  I am taking the example of a message contract, but really my point is applicable to any element to which you wish to apply the ProtectionLevel enumeration.

In C#:

[MessageHeader(Name="Environment", MustUnderstand=true
#if !DEBUG
 , ProtectionLevel = ProtectionLevel.Sign
#endif
)]
public String Environment
{
    get { return _environment; }
    set { _environment = value; }
}

In VB.NET:

#If CONFIG = "Debug" Then
 <MessageHeader(Name:="Environment", MustUnderstand:=True)> _
 Public Property Environment() As String
#Else
 <MessageHeader(Name:="Environment", MustUnderstand:=True, ProtectionLevel:=ProtectionLevel.Sign)> _
 Public Property Environment() As String
#End If
 Get
 Return _environment
 End Get
 Set(ByVal Value As String)
 _environment = Value
 End Set
 End Property


As you can see, there are some key differences in how attributes are specified using conditional compilation:
  • In C#, I can toggle parts of the same attribute on and off.  In VB.NET, I must repeat the entire attribute (partial attributes are not allowed by the compiler).
  • Because of line continuation constraints in VB.NET, the line following the line being continued must be included inside the conditional compilation block.
  • Using VB.NET in Visual Studio (2005 or 2008), it is not clear which part of the conditional compilation is “active” based on the selected configuration.  In C#, the inactive block is grayed out.

Keep these differences in mind as you apply conditional compilation to WCF attributes in the language you write in.

24 October 2007

Services and Smart Clients

Filed under: Uncategorized — Philippe Truche @ 9:09

I was reviewing the Commonwealth Bank of Australia Case Study and comparing it with my own experiences developing two smart client applications using the Smart Client Software Factory and the Service Factory from Patterns and Practices.

One item in particular resonated loudly: the premise that services can be either private or public.  Public services are intended for broad consumption, and interoperability is key.  Private services are intended for consumption by the presentation tier and thus are tightly coupled to it.  Private services may or may not consume public services.

I very much agree with this premise.  In fact, I have struggled with designing services intended for a UI while trying to adhere to principles of service design.  This validates my own experience and is a helpful framework for structuring SOA discussions.

 What about your own experiences?  Do you agree with this premise?

19 October 2007

Mobile workers’ best friends

Filed under: Uncategorized — Philippe Truche @ 1:10

I read quite a few trade magazines, and every so often I read a segment on mobile workers and telecommuting.  While VPN has made it easier to stay connected with the office, there are a couple of “best friends” that have worked well for me.

The first one is Remote Desktop Connection.  Once I have the VPN established, I often use Remote Desktop Connection to access my PC at work.  This is great because it’s like being in the office.  I have access to all my applications and data in one easy step.

The other “best friend” I have developed lately is my Western Digital 250 GB 2.5″ external hard drive.  Using Virtual PC 2007, I created a virtual hard disk on my external HD.  I then built my virtual machine like I would a physical one.  The advantage is that I can carry my virtual machine in my pocket and fire it up from any computer that has Virtual PC 2007 and a USB 2.0 port.  At one point, I was pursuing a bootable thumb drive; having a Virtual PC hard disk on my external HD has turned out to be quite a welcome substitute.

12 September 2007

Visual Studio 2008 Beta 2 and the Smart Client Software Factory

Filed under: Uncategorized — Philippe Truche @ 3:13

Well, it wasn’t exactly painless, but I got to where I thought I should get: I installed the Smart Client Software Factory – May 2007 (SCSF) in Visual Studio 2008.

Here is a helpful resource in getting this done:

Nonetheless, it took a while to get there.  Here are the issues I ran into and the configuration that ultimately worked:

  • I used Virtual PC 2007 to do this in an isolated environment.
  • My first virtual PC was Windows XP with Service Pack 2.  I downloaded and installed Visual Studio 2008 Beta 2, Professional Edition, and could not see the Smart Client solution when creating new projects.  This seemed odd, so I installed the source code of the SCSF.  I proceeded to enable Guidance Package Development and ran into type load exceptions here, there, and everywhere.  Not good…  I researched and tried many different approaches, but nothing worked.
  • Then, talking with a friend made a lightbulb go off in my mind.  What if the particular edition of Visual Studio had an impact?  After all, this is a Beta 2 and there are bound to be some issues.  So I proceeded to download Visual Studio 2008 Beta 2, Team Suite Edition, on a prepackaged virtual hard drive.  The OS in this virtual machine is Windows Server 2003 SP2.  I performed the same SCSF installation steps as before, and, SUCCESS.  It worked.  I even exercised most recipes to make sure this was valid.  And it was.

Conclusion: SCSF can be used with Visual Studio 2008 Beta 2, and presumably with Visual Studio 2008 RTM once it’s available.  Note that I have not tested building an install from the SCSF source, so I cannot ascertain whether it could be modified under Visual Studio 2008.

5 September 2007

Smart Clients, Visual Studio 2008, and connecting the dots…

Filed under: Uncategorized — Philippe Truche @ 1:19

As I looked at the Patterns and Practices upcoming releases web site and remembered reading some interesting posts about Acropolis not being included in Visual Studio 2008 but getting released later in 2008, I am afraid I connected the dots…

Neither the Smart Client Software Factory (SCSF) nor the Composite UI Application Block (CAB) is going to see a Visual Studio 2008-compatible release.  And Acropolis won’t be there until later in 2008.  This tells me it’s safer to continue smart client development using Visual Studio 2005 until after Acropolis has been released.

I am currently installing the factory on a virtual PC in which I have installed Visual Studio 2008 so I can figure out what will be involved in getting this guidance package to run in Visual Studio 2008.  We’ll see how far I can get without running into a brick wall…  More on that later.

7 May 2007

Smart Clients and Service Granularity

Filed under: Uncategorized — Philippe Truche @ 12:58

In this post, I want to talk about service granularity in a smart client application.  By service granularity, I am referring to the continuum of the scope of data that can be returned by a call to a service.  On the low end of the continuum, I would get a single data element (that’s extreme, and nobody that I know of does this in today’s distributed applications – it’s more akin to CORBA and DCE, both of which are no longer current).  On the other extreme, I’d get a complete object graph for a given business entity.  In fact, the dilemma is really the granularity of the data transfer objects (refer to Martin Fowler’s documented enterprise application pattern).

In smart client applications, I often use several tabs to display information about something.  For example, I could have a customer screen with a personal information tab, account tab, preferences tab, and order history tab.  Now I have a choice: (1) make one call to a service and get a data transfer object representing the customer and related information for all the tabs in one fell swoop; or (2) make a service call to get a data transfer object for the personal information tab, then make 3 additional calls to the service in the background and fill the other tabs when the information comes back.  The advantage of the second approach is that the call to fill the first tab has a smaller payload and thus should produce a faster response.  The disadvantage is that a total of 4 calls were made, and it would be easy to fall into the trap of designing the services to fit a particular UI paradigm, a no-no in “service oriented architectures.”

I am trying to find a happy medium where I first think about what the service should look like from a business perspective, while accounting for different ways in which UIs might consume the service.

Please post your comments on this topic as I am interested in finding out what others’ experiences are in this area.

25 April 2007

Using Visual Studio Team System to load test web services

Filed under: .NET, Testing, VSTS, Web Services — Philippe Truche @ 1:55

I recently had to load test web services for the smart client application we have developed.  I wanted to be able to go through quick cycles (test, evaluate, adjust) without having to wait a few days for a load test slot in our LoadRunner lab, so I offered to use Visual Studio Team System to perform the tests.  I already had experience with Microsoft’s previous load test tool, Application Center Test, and I was pleased with VSTS’s version of a load test tool.  It is not quite as powerful as LoadRunner, but it is much more accessible and easier to use.

Anyway, I used VSTS successfully to load test web services and satisfy the following requirements:

  • Randomly pick data from a set for a given test.
  • Invoke the web services using HTTP POSTs (i.e. without having to create a client)
  • Use a configuration file to store the SOAP envelopes needed to make the web service calls.

Picking data randomly 

When using VSTS tests, test methods are decorated with the [TestMethod] attribute.  The [DataSource] attribute can also be used on a test method to indicate that the test is data-driven.  Caution: the test is executed as many times as there are rows in the data source.  That may be OK when performing unit tests, but it is not OK for a load test.  So how do you get around this?  Well, I got my data source to return only one row, picked randomly.  Using SQL Server 2005 Express, I defined views that select one random row using the following construct: “SELECT TOP(1) columnName1 AS alias1, columnName2 AS alias2, …, columnNameN AS aliasN FROM tableName ORDER BY NEWID();”
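As an illustration, a test method wired to such a view might look like the following sketch; the connection string, view name, and test names are hypothetical, not from our actual harness:

```csharp
using System.Data;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CustomerServiceLoadTest
{
    private TestContext testContextInstance;

    // VSTS injects the test context, which exposes the current data row.
    public TestContext TestContext
    {
        get { return testContextInstance; }
        set { testContextInstance = value; }
    }

    // Because the view returns a single row picked at random
    // (SELECT TOP(1) ... ORDER BY NEWID()), this data-driven test runs once
    // per invocation instead of once per row in the underlying table.
    [TestMethod]
    [DataSource("System.Data.SqlClient",
        @"Data Source=.\SQLEXPRESS;Initial Catalog=LoadTestData;Integrated Security=True",
        "RandomCustomerView", DataAccessMethod.Sequential)]
    public void GetCustomer_LoadTest()
    {
        DataRow row = TestContext.DataRow;
        // ... substitute the row's values into the SOAP envelope and POST it ...
    }
}
```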

You might wonder why I am using aliases on the column names.  The reason is to decouple the column names in the tables from the element names in the SOAP envelopes.  When the data source provides the row of data, a utility runs through the column names of the DataRow object and substitutes the data into the SOAP envelope based on the names of the columns found in the DataRow provided through the [DataSource] attribute.  This simple convention makes it very easy to parameterize the SOAP sent to the service.  Because the load test client should be as efficient as possible, I use a simple string parsing algorithm to perform the substitutions.
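That substitution convention can be sketched as follows; the {ColumnName} placeholder syntax is my own illustration, and the original utility may use a different token format:

```csharp
using System.Data;

public static class SoapParameterizer
{
    // Replaces each {ColumnName} token in the envelope template with the
    // corresponding value from the data row. Simple string parsing keeps the
    // load test client cheap compared to an XML-based approach.
    public static string Substitute(string envelopeTemplate, DataRow row)
    {
        string result = envelopeTemplate;
        foreach (DataColumn column in row.Table.Columns)
        {
            result = result.Replace(
                "{" + column.ColumnName + "}",
                row[column].ToString());
        }
        return result;
    }
}
```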

Invoke the web services using HTTP POSTs

I created a static class that uses an HttpWebRequest object to create the POST.   The result is returned into an HttpWebResponse object by simply doing this:

using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
{
    return response.StatusCode.ToString();
}

If the web server responds with error response codes (e.g. HTTP 401, HTTP 500), VSTS detects that and reports on the errors.  This is much better than Application Center Test where I was left to parse through the responses to make sure I did not get an error back.
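For reference, the full POST can be assembled along these lines.  This is a minimal sketch; the encoding and header choices are my assumptions, not the exact code from our test harness:

```csharp
using System.IO;
using System.Net;
using System.Text;

public static class SoapPoster
{
    // POSTs a SOAP envelope to a web service endpoint and returns the
    // HTTP status code as a string (e.g. "OK").
    public static string Post(string url, string soapAction, string envelope)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "POST";
        request.ContentType = "text/xml; charset=utf-8";
        request.Headers.Add("SOAPAction", soapAction);

        // Write the envelope into the request body.
        byte[] body = Encoding.UTF8.GetBytes(envelope);
        request.ContentLength = body.Length;
        using (Stream requestStream = request.GetRequestStream())
        {
            requestStream.Write(body, 0, body.Length);
        }

        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        {
            return response.StatusCode.ToString();
        }
    }
}
```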

Use a configuration file to store the SOAP envelopes needed to make the web service calls.

For this, my team member created an XML file that is loaded into an XmlDocument, from which XmlNodes are selected and stored into an object that holds the SOAPAction to perform, the SOAP envelope to send to the service, and the URL to use.  Yes, I know, this is very ASMX-centric and will likely need to change with WCF, but we were not developing a framework either.  We needed a point solution to get going quickly.
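A sketch of that approach, assuming a hypothetical file layout (the element names are mine, not our actual schema):

```csharp
using System.Xml;

// Holds everything needed to invoke one service operation over HTTP POST.
public class SoapOperation
{
    public string SoapAction;
    public string Envelope;
    public string Url;
}

public static class SoapConfigLoader
{
    // Reads a file shaped like:
    // <operations>
    //   <operation name="GetCustomer">
    //     <soapAction>...</soapAction><envelope>...</envelope><url>...</url>
    //   </operation>
    // </operations>
    public static SoapOperation Load(string path, string operationName)
    {
        XmlDocument document = new XmlDocument();
        document.Load(path);
        XmlNode node = document.SelectSingleNode(
            "/operations/operation[@name='" + operationName + "']");

        SoapOperation operation = new SoapOperation();
        operation.SoapAction = node.SelectSingleNode("soapAction").InnerText;
        operation.Envelope = node.SelectSingleNode("envelope").InnerText;
        operation.Url = node.SelectSingleNode("url").InnerText;
        return operation;
    }
}
```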

Leave a comment if you have any questions or comments.

