Philippe Truche’s Blog

11 July 2012

Deploying web applications to multiple environments using Microsoft Web Deploy

Filed under: .NET, TFS, Web Deploy — Tags: — Philippe Truche @ 8:54

There is one thing application development teams can agree on: deploying applications can be difficult and time-consuming, and the time spent supporting deployments is time not spent on features.

Fortunately, the technologies for deploying web applications have matured quite a bit in the past few years.  In a recent project I made use of Microsoft Web Deploy to deploy two web applications to five environments.  It sounds pretty straightforward until you consider the following:

  • One of the web applications was deployed to both external and internal web server clusters; when deployed on the internal cluster, it had a different configuration that enabled the behaviors desired for internal users.
  • Some of the environments included an AppFabric Cache host running locally on the web server while other environments depended on remote cache hosts.
  • One of the web applications, mostly ASP.NET Web Forms, contained a Silverlight “island.”  The web forms and the Silverlight applications were developed independently and we needed to be able to deploy them independently because their migration schedules did not always coincide.

This provided a significant set of challenges, but with some trial and error we achieved a “no-touch” deployment, much to the application development team’s joy.  In this article, I do not cover the basics of using Microsoft Web Deploy, though I provide a synopsis of the toolset.  Instead, I focus on the more challenging areas so that readers can apply the techniques outlined here in their own projects.

The Toolset

  • Visual Studio 2010.  In addition to Web.config, there is now Web.Debug.config and Web.Release.config.  This is not limited to the default build configurations: if you create your own build configuration, for example MyBuildConfig, then you can create Web.MyBuildConfig.config.  But what is the point of these new files?  Visual Studio applies the XDT (XML Document Transform) transformations specified in those files to the Web.config.  For example, when creating a release web deployment package, you might want to modify the compilation element to remove the debug attribute: instead of the default <compilation debug="true" targetFramework="4.0" defaultLanguage="cs"/>, you want <compilation targetFramework="4.0" defaultLanguage="cs"/> (debug defaults to "false" when omitted).
  • msdeploy.exe.  This command-line utility is part of the Microsoft Web Deploy toolset, itself part of the Microsoft IIS stack.  Web Deploy allows web applications to be deployed to target servers and makes deployment-time modifications to the Web.config file.  How does this work?  You add a parameters.xml file to your Web project in Visual Studio.  When the web deployment package is created, this file is processed into a projectName.SetParameters.xml file located in the folder in which the deployment package is created.  In Visual Studio 2010, this is by default in the directory tree created in the obj folder; in Visual Studio 2012, you specify the destination folder when you create the publish profile using the “Web Deploy Package” publishing method.
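To make the transformation idea concrete, here is a minimal sketch of a Web.Release.config file that does nothing but strip the debug attribute.  The xdt namespace declaration and Transform syntax are the standard ones; the file itself is illustrative rather than taken from the project:

```xml
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.web>
    <!-- Remove the debug attribute for Release builds;
         debug defaults to "false" when omitted. -->
    <compilation xdt:Transform="RemoveAttributes(debug)" />
  </system.web>
</configuration>
```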

To summarize, you use Visual Studio to modify the Web.config file based on your target configuration (Debug, Release, or your own custom configuration) regardless of the target environment, while you use msdeploy.exe to deploy and modify the Web.config file at deployment time based on environment factors (e.g. database connection strings, service URLs, etc.).
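To give a flavor of the deployment side, a packaged application might be pushed to a server with a command along these lines.  The -verb:sync and -setParamFile switches are standard msdeploy options; the server name, package path, and parameter file name are placeholders:

```
msdeploy.exe -verb:sync ^
  -source:package="c:\temp\WebFormsApp.zip" ^
  -dest:auto,computerName="webserver01" ^
  -setParamFile:"c:\temp\TEST.SetParameters.xml"
```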

That was a very short introduction to the tools at our disposal and their purpose.  Let’s go into much more detail about how you use these tools to handle a number of common scenarios.

Build Configuration Transformations

Visual Studio can create a version of the Web.config file for each build configuration – by default, Debug and Release – and there are a number of transformations you can specify to obtain configuration files appropriate for each. There is another use of this feature that perhaps does not get highlighted much: the ability to create configuration files targeted at servers. The Web.config file in a Visual Studio web project is targeted at local development on the developers’ machines; rarely is it “server ready.” Transforming this file makes it possible to produce Web.config files targeted at server deployments.

In the list that follows, I highlight a number of transformations that can be applied to make server-ready Web.config files regardless of the target configuration (Debug or Release).

  • Inserting configuration sections not used in local development. This is handy when using server products that aren’t available on the developers’ machines. For example, you might use the ASP.NET State Service on your developer machines but switch to AppFabric Cache on the servers.
  • Alter logging sections so that you log to files when in Visual Studio but log to a database when on the server.
  • Alter SMTP so that emails generated by the web site go to a folder when working from Visual Studio but go to the target SMTP server when running from servers.
  • Insert assemblies into the /system.web/compilation/assemblies node. Back to the AppFabric Cache example, the DLLs are present on the server because the product is installed there but it is usually not installed on the developers’ machines. The ability to insert those when making the web deployment package is most useful.
  • Alter the session state timeouts. Often, it is useful to have long timeouts on local machines because debugging requires it; once on the server, however, you should have shorter timeouts.
  • Alter web server behaviors. For example, it is convenient to allow browsing directories from the web browser locally but is not appropriate on the servers. Likewise, you can set your static content cache expiration far out into the future and set up the removal of Etags if not using them. Another useful application is to set up redirects from HTTP to HTTPS – great on the server but not desirable locally.
  • Remove sections not needed on the server. Some of the profiling tools you might use require an ASP.NET development server while you may be using IIS Express locally and using IIS 7.x on the servers. This makes the management of your configuration file a bit tricky since system.web/customErrors, system.web/httpHandlers, and system.web/httpModules apply to the ASP.NET development server (pre-IIS 7.x) while system.webServer/httpErrors, system.webServer/handlers, and system.webServer/modules are used by IIS 7.x. I recommend maintaining both system.web and system.webServer in the Web.config file and simply removing unwanted system.web sections when making Web.config files targeting server deployments.
  • WCF logging. You can set up message logging locally so that every message is recorded but remove message logging on the servers.
  • WCF metadata. You may want to have your MEX endpoints available in development but remove those from other environments.
  • Debug items. While you may want the compilation debug attribute set to true in the Debug configuration, you definitely should not see it in Release Web.config files.
  • If using web forms in a web farm, you will want to use the same validation and decryption keys across all nodes.
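As an illustration of two of these items (session state timeouts and WCF metadata), the corresponding transform entries might look like this.  This is a sketch following the standard ASP.NET and WCF configuration schemas; the element placement and values are illustrative, not taken from the project:

```xml
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.web>
    <!-- Shorter session timeout (in minutes) on the servers -->
    <sessionState timeout="20" xdt:Transform="SetAttributes(timeout)" />
  </system.web>
  <system.serviceModel>
    <behaviors>
      <serviceBehaviors>
        <behavior>
          <!-- Stop publishing WCF metadata outside of development -->
          <serviceMetadata httpGetEnabled="false" xdt:Transform="SetAttributes(httpGetEnabled)" />
        </behavior>
      </serviceBehaviors>
    </behaviors>
  </system.serviceModel>
</configuration>
```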

To illustrate these points, let’s take the following sample Web.config file:

<?xml version="1.0"?>
<configuration>

  <configSections>
    <section name="loggingConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.LoggingSettings, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=9c844884b2afcb9e" requirePermission="true" />
    <section name="cachingConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Caching.Configuration.CacheManagerSettings, Microsoft.Practices.EnterpriseLibrary.Caching, Version=5.0.414.0, Culture=neutral, PublicKeyToken=9c844884b2afcb9e" requirePermission="true" />
  </configSections>

  <cachingConfiguration defaultCacheManager="Cache Manager">
    <cacheManagers>
      <add name="Cache Manager" type="Microsoft.Practices.EnterpriseLibrary.Caching.CacheManager, Microsoft.Practices.EnterpriseLibrary.Caching, Version=5.0.414.0, Culture=neutral, PublicKeyToken=9c844884b2afcb9e" expirationPollFrequencyInSeconds="60" maximumElementsInCacheBeforeScavenging="1000" numberToRemoveWhenScavenging="10" backingStoreName="NullBackingStore" />
    </cacheManagers>
    <backingStores>
      <add type="Microsoft.Practices.EnterpriseLibrary.Caching.BackingStoreImplementations.NullBackingStore, Microsoft.Practices.EnterpriseLibrary.Caching, Version=5.0.414.0, Culture=neutral, PublicKeyToken=9c844884b2afcb9e" name="NullBackingStore" />
    </backingStores>
  </cachingConfiguration>

  <connectionStrings>
    <add name="EntLibErrorLogging" connectionString="EntLibLoggingConnString" providerName="System.Data.SqlClient" />
    <add name="Application" connectionString="ApplicationConnString" providerName="System.Data.EntityClient" />
  </connectionStrings>

  <loggingConfiguration name="Logging Application Block" defaultCategory="General" logWarningsWhenNoCategoriesMatch="true">
    <listeners>
      <add name="Event Log Listener" type="Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.FormattedEventLogTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging"
           listenerDataType="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.FormattedEventLogTraceListenerData, Microsoft.Practices.EnterpriseLibrary.Logging"
           source="CSWS Logging" formatter="Text Formatter" log="" machineName="." traceOutputOptions="None" />
      <add name="Rolling Flat File Trace Listener" type="Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.RollingFlatFileTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging"
           listenerDataType="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.RollingFlatFileTraceListenerData, Microsoft.Practices.EnterpriseLibrary.Logging"
           header="~~~~~~~~~~~~ Begin Log Entry ~~~~~~~~~~~~~~~~"
           footer="~~~~~~~~~~~~ End Log Entry ~~~~~~~~~~~~~~~~~~"
           formatter="Text Formatter"
           rollFileExistsBehavior="Increment" rollInterval="Day" rollSizeKB="1500" />
      <add name="Database Trace Listener" type="Microsoft.Practices.EnterpriseLibrary.Logging.Database.FormattedDatabaseTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging.Database"
           listenerDataType="Microsoft.Practices.EnterpriseLibrary.Logging.Database.Configuration.FormattedDatabaseTraceListenerData, Microsoft.Practices.EnterpriseLibrary.Logging.Database"
           databaseInstanceName="EntLibErrorLogging" writeLogStoredProcName="WriteLog"
           addCategoryStoredProcName="AddCategory" formatter="Text Formatter"
           traceOutputOptions="None" filter="All" />
    </listeners>
    <formatters>
      <add name="Text Formatter" type="Microsoft.Practices.EnterpriseLibrary.Logging.Formatters.TextFormatter, Microsoft.Practices.EnterpriseLibrary.Logging"
           template="&lt;FormattedMessage&gt;&lt;LocalTimestamp&gt;{timestamp(local)}&lt;/LocalTimestamp&gt;&lt;Category&gt;{category}&lt;/Category&gt;&lt;Title&gt;{title}&lt;/Title&gt;&lt;Severity&gt;{severity}&lt;/Severity&gt;&lt;ProcessId&gt;{localProcessId}&lt;/ProcessId&gt;&lt;Win32ThreadId&gt;{win32ThreadId}&lt;/Win32ThreadId&gt;&lt;ExtendedProperties&gt;{dictionary({key} - {value})}&lt;/ExtendedProperties&gt;&lt;ExceptionHandlerMessage&gt;{message}&lt;/ExceptionHandlerMessage&gt;&lt;/FormattedMessage&gt;" />
    </formatters>
    <categorySources>
      <add name="General" switchValue="All">
        <listeners>
          <add name="Rolling Flat File Trace Listener" />
        </listeners>
      </add>
      <add name="Exceptions" switchValue="All">
        <listeners>
          <add name="Rolling Flat File Trace Listener" />
        </listeners>
      </add>
    </categorySources>
    <specialSources>
      <!-- The All Events special category receives all log entries. -->
      <allEvents name="All Events" switchValue="All" />
      <!-- The Unprocessed Category special category receives all log entries that are not processed by a source category (such as entries that specify a category that is not configured). -->
      <notProcessed name="Unprocessed Category" switchValue="All">
        <listeners>
          <add name="Rolling Flat File Trace Listener" />
        </listeners>
      </notProcessed>
      <!-- The Logging Errors & Warnings special category receives log entries for errors and warnings that occur during the logging process. -->
      <errors name="Logging Errors &amp; Warnings" switchValue="All">
        <listeners>
          <add name="Event Log Listener" />
        </listeners>
      </errors>
    </specialSources>
  </loggingConfiguration>

  <system.diagnostics>
    <sources>
      <source name="System.ServiceModel" switchValue="Information, ActivityTracing" propagateActivity="true">
        <listeners>
          <add name="RollingXmlFileTraceListener" />
        </listeners>
      </source>
      <source name="System.ServiceModel.MessageLogging">
        <listeners>
          <add name="RollingXmlFileTraceListener" />
        </listeners>
      </source>
    </sources>
    <sharedListeners>
      <!-- The RollingXmlFileTraceListener also supports the following attributes.  Note that the attributes
           are EntLib logging block attributes and operate in exactly the same way.  All are optional.
           rollSizeKB             => The size in kilobytes that the file can reach before it rolls over.
                                     Defaults to 1024 (i.e. 1 MB).
           timeStampPattern       => The format of the date that is appended to the name of the new file.
                                     Defaults to "yyyy-MM-dd".
           rollInterval           => The time interval that determines when the file rolls over.
                                     Possible values are None (the default), Minute, Hour, Day, Week, Month, and Year.
           rollFileExistsBehavior => The behavior that occurs when the roll file is created.
                                     Possible values are Overwrite (the default), which overwrites the existing file;
                                     and Increment, which creates a new file. -->
      <add name="RollingXmlFileTraceListener"
           type="Framework.ServiceModel.Logging.RollingXmlFileTraceListener, Framework.ServiceModel" />
    </sharedListeners>
  </system.diagnostics>

  <system.net>
    <mailSettings>
      <smtp from="user@address.domain" deliveryMethod="PickupDirectoryFromIis">
        <network host="localhost" port="25" defaultCredentials="true" />
      </smtp>
    </mailSettings>
  </system.net>

  <system.serviceModel>
    <bindings>
      <basicHttpBinding>
        <binding name="Client.ReportServer.BasicHttp.BindingConfig" maxBufferSize="524288" maxReceivedMessageSize="524288">
          <security mode="Transport">
            <transport clientCredentialType="Windows" />
          </security>
        </binding>
      </basicHttpBinding>
    </bindings>
    <client>
      <endpoint address="" binding="basicHttpBinding" bindingConfiguration="Client.ReportServer.BasicHttp.BindingConfig" contract="Proxy.ReportExecutionServiceSoap" name="ReportExecutionServiceSoap" />
      <endpoint address="" binding="basicHttpBinding" bindingConfiguration="Client.ReportServer.BasicHttp.BindingConfig" contract="ReportingServiceProxy.ReportingService2005Soap" name="ReportingService2005Soap" />
    </client>
    <diagnostics wmiProviderEnabled="true">
      <messageLogging logEntireMessage="true" logMalformedMessages="true" logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="true" maxMessagesToLog="3000" />
    </diagnostics>
  </system.serviceModel>

  <system.web>
    <authentication mode="None" />
    <compilation debug="true" batch="false" targetFramework="4.5" defaultLanguage="cs">
      <assemblies>
        <add assembly="Microsoft.Practices.EnterpriseLibrary.Validation, Version=5.0.414.0, Culture=neutral, PublicKeyToken=9c844884b2afcb9e" />
      </assemblies>
    </compilation>
    <customErrors mode="Off" />
    <httpHandlers>
      <!-- This only matters if using the Visual Studio Web Development Server.  In IIS and IIS Express, /system.webServer/handlers is used. -->
    </httpHandlers>
    <httpModules>
      <!-- This only matters if using the Visual Studio Web Development Server.  In IIS and IIS Express, /system.webServer/modules is used. -->
    </httpModules>
    <httpRuntime executionTimeout="300" maxRequestLength="51200" />
    <sessionState mode="StateServer" timeout="1440" />
    <trace enabled="true" localOnly="false" />
  </system.web>

  <system.webServer>
    <defaultDocument>
      <files>
        <clear />
        <add value="public/default.aspx" />
      </files>
    </defaultDocument>
    <staticContent>
      <mimeMap fileExtension=".svg" mimeType="image/svg+xml" />
    </staticContent>
    <validation validateIntegratedModeConfiguration="false" />
  </system.webServer>

</configuration>
This web.config file is fine for use on developers’ workstations – session state is kept in the ASP.NET state service, logging targets local files, and emails go to file folders so developers don’t actually send emails.

To prepare the Web.config file for a Release configuration build to servers, the first step is to apply transformations to it using the Web.Release.config file. The syntax is described fairly well on MSDN. Here is the transformation file:

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">

  <configSections>
    <section name="dataCacheClient"
             type="Microsoft.ApplicationServer.Caching.DataCacheClientSection, Microsoft.ApplicationServer.Caching.Core, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
             allowLocation="true" allowDefinition="Everywhere"
             xdt:Transform="InsertAfter(/configuration/configSections/section[@name='cachingConfiguration'])" />
  </configSections>

  <dataCacheClient requestTimeout="180000" channelOpenTimeout="18000" maxConnectionsToServer="10" xdt:Transform="InsertAfter(/configuration/connectionStrings)">
    <securityProperties mode="Transport" protectionLevel="EncryptAndSign" />
    <transportProperties connectionBufferSize="500000" maxBufferPoolSize="2147483647" maxBufferSize="2147483647" maxOutputDelay="2"
                         channelInitializationTimeout="86400000" receiveTimeout="86400000" />
    <!-- Placeholder replaced with the environment's actual host elements at deployment time -->
    <hosts>ENV_DRIVEN_CONFIG_SET_BY_PARAM_TEXT_REPLACE</hosts>
  </dataCacheClient>

  <loggingConfiguration>
    <listeners xdt:Locator="XPath(add[@name='Event Log Listener'])" xdt:Transform="Remove" />
  </loggingConfiguration>
  <loggingConfiguration xdt:Locator="XPath(categorySources/add[@name='General']/listeners/add[@name='Rolling Flat File Trace Listener'])"
                        name="Database Trace Listener" xdt:Transform="SetAttributes(name)" />
  <loggingConfiguration xdt:Locator="XPath(categorySources/add[@name='Exceptions']/listeners/add[@name='Rolling Flat File Trace Listener'])"
                        name="Database Trace Listener" xdt:Transform="SetAttributes(name)" />
  <loggingConfiguration xdt:Locator="XPath(specialSources/notProcessed/listeners/add[@name='Rolling Flat File Trace Listener'])"
                        name="Database Trace Listener" xdt:Transform="SetAttributes(name)" />
  <loggingConfiguration xdt:Locator="XPath(specialSources/errors/listeners/add[@name='Event Log Listener'])"
                        name="Rolling Flat File Trace Listener" xdt:Transform="SetAttributes(name)" />

  <system.diagnostics>
    <sources xdt:Transform="Replace">
      <source name="System.ServiceModel" switchValue="Warning" propagateActivity="true">
        <listeners>
          <add name="RollingXmlFileTraceListener" />
        </listeners>
      </source>
    </sources>
  </system.diagnostics>

  <system.net>
    <mailSettings xdt:Transform="Replace">
      <smtp from="user@address.domain" />
    </mailSettings>
  </system.net>

  <system.serviceModel>
    <diagnostics wmiProviderEnabled="true" xdt:Transform="Replace" />
  </system.serviceModel>

  <system.web>
    <compilation xdt:Transform="RemoveAttributes(debug)" />
    <compilation batch="true" xdt:Transform="SetAttributes(batch)">
      <assemblies>
        <add assembly="Microsoft.ApplicationServer.Caching.Client, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35" xdt:Transform="Insert" />
        <add assembly="Microsoft.ApplicationServer.Caching.Core, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35" xdt:Transform="Insert" />
      </assemblies>
    </compilation>

    <customErrors xdt:Transform="Remove" />

    <httpHandlers xdt:Transform="Remove" />

    <httpModules xdt:Transform="Remove" />

    <!-- decryptionKey is supplied per environment via Web Deploy parameters -->
    <machineKey validationKey="1A7A288183C4302270B248DA1FB4454AC3F287C9D759D4203843D3C3D5FA86B809E05EC353D8CD1F48B894A45BC5349109335C5A77163799ABBA8EB710BE68C6"
                decryptionKey=""
                validation="SHA1" decryption="AES" xdt:Transform="Insert" />

    <sessionState mode="Custom" timeout="120" customProvider="AppFabricCacheSessionStoreProvider" xdt:Transform="Replace">
      <providers>
        <!-- cacheName and sharedId are parameterized per environment at deployment time;
             the provider type shown is the standard AppFabric session state provider -->
        <add name="AppFabricCacheSessionStoreProvider"
             type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache"
             cacheName="ApplicationSessionStore" sharedId="AppSession" />
      </providers>
    </sessionState>

    <trace xdt:Transform="Remove" />
  </system.web>

  <system.webServer>
    <directoryBrowse enabled="false" xdt:Transform="InsertAfter(/configuration/system.webServer/defaultDocument)" />

    <staticContent xdt:Transform="Replace">
      <clientCache httpExpires="Sun, 29 Mar 2020 00:00:00 GMT" cacheControlMode="UseExpires" />
    </staticContent>

    <rewrite xdt:Transform="InsertBefore(/configuration/system.webServer/staticContent)">
      <outboundRules>
        <rule name="Remove ETag">
          <match serverVariable="RESPONSE_ETag" pattern=".+" />
          <action type="Rewrite" value="" />
        </rule>
      </outboundRules>
    </rewrite>

    <validation xdt:Transform="Remove" />
  </system.webServer>

  <location path="public/Application" xdt:Transform="Insert">
    <system.webServer>
      <rewrite>
        <rules>
          <rule name="Redirect to HTTPS" stopProcessing="true">
            <match url="(.*)" />
            <conditions>
              <add input="{HTTPS}" pattern="^OFF$" />
            </conditions>
            <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
          </rule>
        </rules>
      </rewrite>
    </system.webServer>
  </location>

</configuration>

As the transformation file shows, the following transformations are accomplished:

  • A dataCacheClient configuration section is defined after the cachingConfiguration section in the configSections node.
  • The dataCacheClient section is inserted after the connectionStrings section.
  • In the logging configuration section, any listener targeting the event log is removed and all rolling flat file listeners are replaced by database listeners.
  • The smtp section is replaced with the desired SMTP configuration.
  • In the system.web section:
  • the compilation debug attribute is removed and the batch attribute is set to true
  • the AppFabric assemblies are added to the compilation section
  • the customErrors, httpHandlers, and httpModules sections are removed.
  • a machineKey section is inserted
  • the sessionState section is replaced with one appropriate for use with AppFabric Caching.
  • the trace section is removed
  • In the system.webServer section:
  • directoryBrowse is set to false to disallow directory browsing
  • a staticContent section is inserted to ensure content is set to expire far into the future
  • IIS rewriting rules are specified via a rewrite section
  • the validation section is removed because we removed the httpHandlers and httpModules sections from system.web (the validation section is only useful when running the application locally using the ASP.NET development server for code profiling purposes – use IIS Express otherwise).
  • A location section is inserted to define redirection to HTTPS for one of the areas of the application.

When you use Visual Studio to publish the application to a Web Deploy package, the output window shows the work being performed.  This is really important, and I recommend always reviewing the messages in the output window.

2>—— Publish started: Project: WebFormsApp, Configuration: Release Any CPU ——
2>Transformed Web.config using C:\Projects\WebFormsApp\WebFormsApp\Web.Release.config into obj\Release\TransformWebConfig\transformed\Web.config.
2>Auto ConnectionString Transformed obj\Release\TransformWebConfig\transformed\Web.config into obj\Release\CSAutoParameterize\transformed\Web.config.
2>Copying all files to temporary location below for package/publish:
2>Packaging into c:\temp\
2>Adding declared parameter 'EntLibErrorLogging-Web.config Connection String'.
2>Adding declared parameter 'Application-Web.config Connection String'.
2>Package “” is successfully created as single file at the following location:
========== Publish: 1 succeeded, 0 failed, 0 skipped ==========

As the messages show, the Web.config file was transformed into .\obj\Release\TransformWebConfig\transformed\Web.config.  This is great because it allows you to perform a comparison between the original Web.config file and the transformed file.  When a transformation can’t be applied, Visual Studio shows detailed messages about which transformations failed.  I can’t stress enough how important it is to review the messages to make sure the transformations took place.

The other message to notice at the bottom is the name and location of the package that was created, assuming you configured Visual Studio to ZIP up the files to be deployed into a single file.  If you choose to use a folder instead, the message would show the location of the folder created.

Let’s take a look at where Visual Studio stores its files to do the transformation work.  Simply click the Show All Files button in the Solution Explorer and the following structure is revealed, in line with the messages shown in the output window.


I like to copy the paths of the original and transformed Web.config files and compare them to validate that the transformation gave me the results I expected.  Here is a comparison report of the Web.config files.  The items struck out were removed or modified by the transformations; the inserted or updated values are shown in color.


So far all we have managed to do is perform transformations on the configuration file to get it “server ready.”  In the next section, I show how you can apply transformations to make Web.config files targeted at each environment.

Deployment transformations

As you might have noticed in reviewing the messages from the output window when publishing your application to a web deploy package, Visual Studio automatically applies transformations to the connection strings so as to allow specifying connection strings for each environment.  Let’s review the relevant messages:

  • Adding declared parameter 'EntLibErrorLogging-Web.config Connection String'.
  • Adding declared parameter 'Application-Web.config Connection String'.
  • For this sample script, you can change the deploy parameters by changing the following file: c:\temp\WebFormsApp.SetParameters.xml.

Without you having to do anything, Visual Studio applied transformations to the connection strings and created a SetParameters.xml file that you can use when performing the deployment with msdeploy.exe or the IIS console for Web Deploy.  Let’s take a look at this file:

<?xml version="1.0" encoding="utf-8"?>
<parameters>
  <setParameter name="IIS Web Application Name" value="" />
  <setParameter name="EntLibErrorLogging-Web.config Connection String" value="EntLibLoggingConnString-TEST" />
  <setParameter name="Application-Web.config Connection String" value="ApplicationConnString-TEST" />
</parameters>

As you see, this file includes key value pairs.  The name is the key for the transformation to be applied and the value is the target value for deployment.  Think of this file as a template: you can make as many copies of it as you desire.  For example, you might make a UAT.SetParameters.xml and a PROD.SetParameters.xml file to deploy to the UAT and PROD environments.  You would then modify the value of each key value pair so that it is appropriate for your target environment.
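For example, a PROD.SetParameters.xml file derived from the template above might look like this (the values shown are placeholders for illustration):

```xml
<?xml version="1.0" encoding="utf-8"?>
<parameters>
  <setParameter name="IIS Web Application Name" value="Default Web Site/WebFormsApp" />
  <setParameter name="EntLibErrorLogging-Web.config Connection String" value="EntLibLoggingConnString-PROD" />
  <setParameter name="Application-Web.config Connection String" value="ApplicationConnString-PROD" />
</parameters>
```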

I know, I know.  This is the really exciting part about this technology.  And so the questions you are probably asking yourself now are: (1) how can I parameterize additional parts of the Web.config, and (2) how do I use these SetParameters.xml files in deployments (or hand them off to the operations team to perform the deployments)?

Making Custom Parameters

To show how to do this, I will continue building on the sample we have used thus far.  First things first: you need to create a parameters.xml file and include it in the root of your web application project.  Here is a picture of what it should look like in the Solution Explorer.


This file is very simple in structure.  It is described at a basic level in this Microsoft How To, and in more detail by the IIS team (the product owner of the Web Deploy technology) in Web Deploy Parameterization.

In our example, here are parameters we want to introduce so that we can vary them per environment:

  • In the transformed configuration file, we inserted a machineKey section into configuration/system.web/ using a Web.Release.config XDT transformation.  We want to vary the validationKey and decryptionKey attributes in each environment so that all web servers in the cluster for a given environment use the same keys.
  • In the transformed configuration file, we replaced the ASP.NET session state service with the AppFabric Caching provider.  We need to specify a different cacheName and sharedId for each environment.
  • We need to specify a different fileName for the Enterprise Library Logging rolling flat file listener for each environment.
  • We need to be able to set up a different user name and password for SQL Server Reporting Services for each environment.
  • In the transformed configuration file, we inserted a placeholder for the AppFabric Cache client configuration in configuration/dataCacheClient/hosts.  We need to replace this placeholder with the actual host configuration.  The reason why we use a placeholder is that for some environments, only one cache host is used while in other environments we use up to three cache hosts.  Using a placeholder allows Web Deploy to inject an entire XML string as a replacement for the placeholder thus varying the number of <host> children nodes of <hosts>.
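To make the placeholder technique concrete, an environment with three cache hosts would supply an XML-escaped value along these lines in its SetParameters.xml file (the host names are placeholders):

```xml
<setParameter name="ENV_DRIVEN_CONFIG_SET_BY_PARAM_TEXT_REPLACE"
              value="&lt;host name='cache01' cachePort='22233'/&gt;&lt;host name='cache02' cachePort='22233'/&gt;&lt;host name='cache03' cachePort='22233'/&gt;" />
```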

These are some of the needs we experienced as we planned our deployments.  Your needs will certainly vary and you will need to define your own custom parameters.  For illustration purposes, I will show how we specified parameters for the validation and decryption keys, how we set up the AppFabric cache name, shared ID, and cache hosts.

To do this we set up the parameters as follows:

<parameters>

  <parameter name="validationKey"
             description="Please provide the validationKey value.">
    <parameterEntry kind="XmlFile"
                    scope="\\web.config$"
                    match="/configuration/system.web/machineKey/@validationKey" />
  </parameter>

  <parameter name="decryptionKey"
             description="Please provide the decryptionKey value.">
    <parameterEntry kind="XmlFile"
                    scope="\\web.config$"
                    match="/configuration/system.web/machineKey/@decryptionKey" />
  </parameter>

  <parameter name="Cache Name"
             description="Please provide the name of the AppFabric Cache.">
    <parameterEntry kind="XmlFile"
                    scope="\\web.config$"
                    match="/configuration/system.web/sessionState/providers/add[@name='AppFabricCacheSessionStoreProvider']/@cacheName" />
  </parameter>

  <parameter name="Cache Shared ID"
             description="Please provide the shared ID of the AppFabric Cache.">
    <parameterEntry kind="XmlFile"
                    scope="\\web.config$"
                    match="/configuration/system.web/sessionState/providers/add[@name='AppFabricCacheSessionStoreProvider']/@sharedId" />
  </parameter>

  <parameter name="ENV_DRIVEN_CONFIG_SET_BY_PARAM_TEXT_REPLACE"
             description="Please set the configuration of the AppFabric Cache client as an XML-escaped string."
             defaultValue="&lt;host name='localhost' cachePort='22233'/>">
    <parameterEntry kind="TextFile"
                    scope="\\web.config$"
                    match="ENV_DRIVEN_CONFIG_SET_BY_PARAM_TEXT_REPLACE" />
  </parameter>

</parameters>

When we publish the application, the c:\temp\WebFormsApp.SetParameters.xml file now contains our new custom parameter entries as key value pairs.  Let’s open this file and make the modifications needed for the TEST environment we are going to deploy to.  With the modifications, the file’s content should look something like this (note that the validation and decryption keys were obtained from a machineKey generator):

<?xml version="1.0" encoding="utf-8"?>
<parameters>
  <setParameter name="IIS Web Application Name" value="" />
  <setParameter name="validationKey" value="23EB0A37C95089029B610813A8B63F55F3639F2306D8019089953681D92E50372401211A07F229B831D8895A559CFB755CECEACBEDD168CB0E3C4B60F61B4D84" />
  <setParameter name="decryptionKey" value="B890EEBAED18D018BD6A35435623BB3CEB91BF11B375C0814DDEFF6AB0B25052" />
  <setParameter name="Cache Name" value="ApplicationSessionStore-TEST" />
  <setParameter name="Cache Shared ID" value="AppSession-TEST" />
  <setParameter name="ENV_DRIVEN_CONFIG_SET_BY_PARAM_TEXT_REPLACE" value="&lt;host name='' cachePort='22233'/&gt;" />
  <setParameter name="EntLibErrorLogging-Web.config Connection String" value="EntLibLoggingConnString-TEST" />
  <setParameter name="Application-Web.config Connection String" value="ApplicationConnString-TEST" />
</parameters>

To deploy the application package to a target location, you will need to have Microsoft Web Deploy installed on your target server.  You can get it here: Web Deploy 3.0.  Because we are going to deploy locally for the purposes of this article, make sure to include the Remote Agent Service in addition to the Web Deployment Framework when you install Web Deploy 3.0.

If you don’t have IIS Express installed on your machine yet, go ahead and download and install it (you can find it here: Internet Information Services (IIS) 7.5 Express).  Open the applicationhost.config file located in C:\Users\your-username\Documents\IIS Express\config and find the system.applicationHost/sites node.  Add a new site node as follows (choosing an ID that does not conflict with IDs already present in your config file):

<site name="" id="6" serverAutoStart="true">
    <application path="/" applicationPool="Clr4IntegratedAppPool">
        <virtualDirectory path="/" physicalPath="C:\temp\deployed\WebFormsApp" />
    </application>
    <application path="/WebFormsApp" applicationPool="Clr4IntegratedAppPool">
        <virtualDirectory path="/" physicalPath="C:\temp\deployed\WebFormsApp\WebFormsApp" />
    </application>
    <bindings>
        <binding protocol="http" bindingInformation="*:9876:localhost" />
    </bindings>
</site>

Create the folder C:\Temp\deployed\WebFormsApp.  Open a command prompt and change directory to C:\Program Files\IIS Express.  Issue the command iisexpress /siteId:site-id where site-id is the site ID you set up in applicationhost.config.  This should start the web server:
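Putting those steps together, the commands look like this (assuming the site ID of 6 used in the configuration snippet above; substitute your own site ID):

```bat
cd /d "C:\Program Files\IIS Express"
iisexpress /siteId:6
```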


Now that the web server is running, we are ready to deploy the package to it.  To do this, we are going to use the Visual Studio-generated *.deploy.cmd file.  In my case, the file is named WebFormsApp.deploy.cmd because my Visual Studio web application project is named WebFormsApp.  This generated file comes with a readme – I encourage you to take a look at it.  Also take a look at the command file itself, in particular the msdeploy.exe command.  You may modify this command file to customize it to your needs – for example, we introduced support for environments so that if we call the CMD file with PROD, it looks for PROD.SetParameters.xml.

For now, you might find it easier to start with the Visual Studio-generated command batch script to see what it does with msdeploy.exe, and then go from there.  The heart of Microsoft Web Deploy is msdeploy.exe; when you call the CMD file, it generates an msdeploy.exe command.  Let’s go ahead and try it.  I called the CMD file with /Y to indicate I want the deployment to take place and /L to indicate I am using IIS Express locally.
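For reference, the invocation looks like this, run from the folder containing the package (the /T flag is a standard flag of the generated script that performs a trial run without applying any changes):

```bat
REM Trial run first (no changes applied), then the actual deployment:
WebFormsApp.deploy.cmd /T /L
WebFormsApp.deploy.cmd /Y /L
```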


We can now compare the Visual Studio web config file that was made for Release (located in [project root]\obj\Release\transformed\) and the one that was just deployed to C:\Temp\deployed\WebFormsApp.  Here are the differences, as designed.



Microsoft has come a long way in providing tools that allow application builders to plan for deployments to various environments and that allow operations teams to deploy the packages they are given.  What I showed you in this article is but a small sample of what you can do.  You can also integrate your database deployment and extend Web Deploy to perform synchronizations not provided out of the box.

Happy deployments!


20 August 2011

Throwing SOAP Faults in a WCF service being mindful of non-WCF clients

Filed under: .NET, WCF, Web Services — Tags: , , , , , , — Philippe Truche @ 9:28

I was engaged recently to design and write a WCF service that is to be consumed by an ASP.NET 2.0 client.  While I was aware that with a WCF service and a WCF client, using a System.ServiceModel.FaultException<TDetail> was ideal for handling exception conditions, I wasn’t sure how this would be handled by .NET 2.0, since FaultException<TDetail> was introduced in .NET 3.0 – in .NET 2.0, SOAP faults are caught with a System.Web.Services.Protocols.SoapException.

So off to trial and error I went and here is what I proposed.

Firstly, it is important to write a WCF service that can be consumed by a WCF client as well as a non-WCF client.  To this end, throwing SOAP faults should be done in a manner that both WCF clients and non-WCF clients can understand.  My first step was to make sure I had a standard WCF service with a WCF client that could understand the SOAP faults.  I created a DataContract for the SOAP fault I wanted to throw when certain conditions were met:

/// <summary>
/// Thrown when no matching MPI is found.
/// </summary>
[DataContract(Namespace = WcfConfiguration.XmlNamespace)]
public class MpiNotFoundFault
{
	private string _mpi;

	/// <summary>
	/// Gets or sets the MPI.
	/// </summary>
	[DataMember]
	public string Mpi
	{
		get { return _mpi; }
		set { _mpi = value; }
	}
}
I  then specified my FaultContract on the service interface definition as follows:

[ServiceContract(Namespace = WcfConfiguration.XmlNamespace)]
public interface IMyPinService 
{
	[OperationContract]
	[FaultContract(typeof(MpiNotFoundFault), Name = "MpiNotFoundFault", Namespace = WcfConfiguration.XmlNamespace, Action = WcfConfiguration.XmlNamespace + "/MpiNotFoundFault")] 
	bool RequestPin(string mpi, DateTime dob);
}

So far, so good.  Now, let’s see how we might throw this SOAP fault in the service implementation.  For the WCF client to work, I threw the exception as follows:

public bool RequestPin(string mpi, DateTime dob) 
{
	// Code elided
	throw new FaultException<MpiNotFoundFault>(new MpiNotFoundFault() { Mpi = mpi });
}

Then, in the WCF client, all I have to do is this:

// With a WCF client, we can catch specific exceptions and get to each fault contract's 
// contents by accessing the ex.Detail property.
catch (FaultException<MpiNotFoundFault> ex)
{
	Console.WriteLine("MPI {0} was not found.", ex.Detail.Mpi);
}

Yes, WCF does a lot of work for us.  However, when your client is not WCF, SOAP faults get a little more complicated.  Fortunately, there is a SOAP specification to refer to: the SOAP 1.2 specification on SOAP faults.  Notice I am introducing you to SOAP 1.2, not SOAP 1.1.  Yet in the .NET 2.0 framework, when you use wsdl.exe to create a proxy class, the SOAP 1.1 protocol is the default.  That’s somewhat of a problem because in SOAP 1.1, faults may only have a faultcode and a faultstring.  It wasn’t until SOAP 1.2 that hierarchical codes and subcodes were defined.

Why is this important you ask?  Well, if in my ASP.NET client (that does not know anything about WCF) all I have is a SoapException, then I’d like to use the code and subcode of the faults to communicate detail about the exception (of course, my service consumer is a trusted source and will filter the information appropriately before passing it on to its users, but that’s another topic entirely).

Here is what I want to do.  In my ASP.NET client, I want to check the code and subcode to understand the nature of the problem.  My ASP.NET catch statement looks like this:

// In WCF we can catch specific exceptions by using FaultException<T> where T is one of the fault contracts 
// included in the service operation. In contrast, ASP.NET web services only allow for catching SoapException. 
// To get to the detail, use the SOAP Fault Code and Subcode as shown below. 
catch (SoapException ex)
{
	// The SOAP fault code always contains the string "CredentialValidationRequestFailed"
	Debug.WriteLine(ex.Code.Name); // prints the string "CredentialValidationRequestFailed"
	// The SOAP fault subcode contains the actual SOAP fault name without the Fault suffix
	Debug.WriteLine(ex.SubCode.Code.Name); // prints the string "MpiNotFound"
}

By doing this, I can see that my call failed because of a CredentialValidationRequestFailed code.  This code could occur for any number of reasons.  In this example, it occurred because of an MpiNotFound subcode, but I have another 3 subcodes that could have been the cause of the fault.
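To act on those subcodes, the client can branch on the subcode name.  Here is a hedged sketch – only the MpiNotFound subcode appears in this article, so the other cases are left as placeholders:

```csharp
catch (SoapException ex)
{
    if (ex.Code.Name == "CredentialValidationRequestFailed")
    {
        switch (ex.SubCode.Code.Name)
        {
            case "MpiNotFound":
                // Tell the user the MPI could not be matched.
                break;
            default:
                // Handle the remaining subcodes defined by the service.
                break;
        }
    }
}
```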

So what do I need to do to achieve this? Well, I need to use SOAP 1.2 for sure.  On the service side, the service was configured with a basicHttpBinding. By default, this binding uses SOAP 1.1, and that cannot be changed. To use SOAP 1.2 for message encoding, I create a custom binding. My configuration file now looks like this:

      <bindings>
        <customBinding>
          <binding name="basicHttpSoap12Binding">
            <textMessageEncoding messageVersion="Soap12"/>
            <httpTransport/>
          </binding>
        </customBinding>
      </bindings>
      <services>
        <service name="MySoap12Service">
          <endpoint address="" binding="customBinding" bindingConfiguration="basicHttpSoap12Binding"
                    contract="IMyPinService" />
        </service>
      </services>

On the client side, I use wsdl.exe to generate the proxy class; instead of letting the default SOAP 1.1 protocol apply, I pass in the protocol switch to specify the SOAP 1.2 protocol should be used as follows:  wsdl.exe /protocol:SOAP12. This causes the generated proxy to specify the SOAP 1.2 protocol in its constructor:

public partial class MyService : System.Web.Services.Protocols.SoapHttpClientProtocol {
    // code elided
    public MyService() {
        this.SoapVersion = System.Web.Services.Protocols.SoapProtocolVersion.Soap12;
        // code elided
    }
    // code elided
}
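For reference, a complete wsdl.exe invocation might look like this (the service URL and output file name are hypothetical – substitute your own endpoint):

```bat
wsdl.exe /protocol:SOAP12 /out:MyServiceProxy.cs http://localhost/MySoap12Service.svc?wsdl
```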

Then, I need to throw my SOAP faults with a bit more information.  Here is the revised code snippet for throwing a SOAP fault that can now be understood by non-WCF clients:

throw new FaultException<MpiNotFoundFault>(new MpiNotFoundFault() { Mpi = mpi },
	String.Format(CultureInfo.InvariantCulture, "MPI '{0}' not found.", mpi),
	new FaultCode("CredentialValidationRequestFailed", new FaultCode("MpiNotFound", WcfConfiguration.XmlNamespace)));

That’s it.  I throw the FaultException<MpiNotFoundFault> exception, passing in a new MpiNotFoundFault object into the constructor, then I pass in the reason as the “MPI ‘xyz’ not found” string, and then I create my code and subcodes.  And voila, I have happy WCF clients and happy non-WCF clients.

On the wire, the SOAP fault looks like this:

<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope">
  <s:Header />
  <s:Body>
    <s:Fault>
      <s:Code>
        <s:Value>CredentialValidationRequestFailed</s:Value>
        <s:Subcode>
          <s:Value xmlns:a="MyNamespace">a:MpiNotFound</s:Value>
        </s:Subcode>
      </s:Code>
      <s:Reason>
        <s:Text xml:lang="en-US">MPI '123123' not found.</s:Text>
      </s:Reason>
      <s:Detail>
        <MpiNotFoundFault xmlns="MyNamespace" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
          <Mpi>123123</Mpi>
        </MpiNotFoundFault>
      </s:Detail>
    </s:Fault>
  </s:Body>
</s:Envelope>

22 March 2011

Unit Testing 101 – Fundamentals of Unit Testing

Filed under: .NET, Testing — Tags: , , — Philippe Truche @ 8:40

This post is the first of my 3 posts on unit testing, as introduced in my post Unit Testing – A Curriculum.  Unit Testing 101 is targeted at audiences that have little to no familiarity with unit testing.  This might include junior developers and even managers of software development teams.

Table Of Contents

  • What? Who? Why?
  • Deciding what to test…
  • …and more importantly, when to write the tests.
  • Popular unit testing frameworks available to .NET
  • Unit test prototype
  • Writing your first unit tests

What Is Unit Testing? Who Writes The Tests? Why Do It?

The consensus is that Kent Beck introduced unit testing.  There are a number of definitions of unit testing – personally I like the definition from the book Pragmatic Unit Testing: “unit tests are performed to prove that a piece of code does what the developer thinks it should do.”  The key word is “developer.”  The developers get to write the unit test code.

This leads me to my next point.  Unit testing isn’t a testing activity; in fact, RUP clearly identifies unit testing as belonging to the implementation discipline.

So if this is an activity that is performed as the source code is being developed, then what are some of the benefits that developers get from writing unit tests?

  • They get living documentation of how the code works.
  • They get the ability to refactor code quickly and safely through automated regression tests.
  • It improves the low-level design of their classes.
  • It reduces risk. Writing tests helps drive the code quality up, thus reducing the number of bugs found much later in the software development lifecycle (SDLC).

In fact, unit tests are such a part of development that Michael Feathers defined legacy code as any code that does not have tests.

Deciding What To Test…

“What should I test?”  This is an interesting question I get often enough from developers and managers who are new to unit testing.  The question in itself is disconcerting because it reveals just how little is understood about unit testing and often stands as a euphemism for “What is the minimal amount of testing that I can get away with?”

I don’t have a real easy answer to this question.  There are no hard and fast rules that I can provide someone with.  Rather, I prefer to remind developers that the more they write unit tests, the more they improve their design abilities and the better the software they write.

Would I test a class without behaviors?  Probably not.  But what if I have an entity on which I add validation attributes (using the EntLib Validation Block or Data Annotations)?  I would definitely test the validations to ensure that the validators attached to properties are behaving as expected.

So my general rule is to write unit tests that focus on behaviors, and only where there is value in performing this activity.
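As an illustration of testing validation attributes on an entity, here is a hedged sketch using Data Annotations and NUnit – the Customer class and its attributes are hypothetical, not code from the project described here:

```csharp
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using NUnit.Framework;

public class Customer
{
    [Required]
    public string Name { get; set; }
}

[TestFixture]
public class CustomerValidationTests
{
    [Test]
    public void Validate_NameMissing_FailsValidation()
    {
        var customer = new Customer(); // Name deliberately left null
        var results = new List<ValidationResult>();

        // Run all validation attributes attached to the entity's properties.
        bool isValid = Validator.TryValidateObject(
            customer, new ValidationContext(customer, null, null), results, true);

        Assert.IsFalse(isValid);
    }
}
```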

…And More Importantly, When To Write The Tests.

This is definitely the more important question.

As a developer, you get the most value from creating the unit tests when creating the code.  In fact, rarely does it make sense to go back and write unit tests after the code has been written.  Instead, you should write your tests as you write the source code.  Do I sometimes write my test before the code?  Yes – when I have a good idea of what I’d like the API to look like.  Sometimes I write the test after creating the initial source code.  Most often, I find that writing source code and unit tests is a bit of a chicken-and-egg problem – which one comes first?

When dealing with bugs identified by testing teams or customers, you should definitely write tests before fixing the bugs.  The reason for this is twofold:

  • It helps reproduce the issue
  • It creates a test for a missing scenario

Popular Unit Testing Frameworks On The .NET Platform

I often get the question: “Which unit testing framework do you use?”  There are a number of frameworks specifically for .NET – most are open source and one is Microsoft’s very own.  They are all fairly similar, and I think which one you use comes down to personal preference.

The more important point is not to confuse test runners with testing frameworks.  For example, I can run MbUnit tests (part of Gallio) from within Microsoft Visual Studio, and vice versa, I can run Microsoft unit tests from the Gallio Icarus runner.

This being said, here is a list of unit testing frameworks.  It includes, but is not limited to:

  • NUnit
  • MbUnit (part of Gallio)
  • MSTest (Microsoft’s unit testing framework, built into Visual Studio)

Qualities of Unit Tests

In preparing this course, I researched a few sites that would help establish a guideline as to what a good unit test should look like.  I ended up settling on a set of quality attributes that unit tests should have.  These quality attributes were presented by Peli de Halleux in an advanced unit testing session in Spain.  Of course, my own experience also shaped this list.

Unit tests should have the following quality attributes:

  • Atomic.
    • No dependencies between tests.  If test[i] must be run after test[j], you have an issue.  With some unit test frameworks (e.g. MbUnit), you can specify the order in which you want to run tests.  Still, it is something you should avoid.
    • Tests can be run in any order.
    • Tests can be run repeatedly.  I can’t tell you how many times I’ve seen tests that only work the second time and on subsequent runs.  That usually points to some initialization issues.  Fix them!
    • Tests can be run concurrently.  If you have thousands of tests and you are running a continuous integration process, this is rather important.  In general, when a test fails unless it is run on its own, there is some sort of shared state issue.  Fix it!
  • Trustworthy.
    • Tests should run every time on any machine.  “It works on my machine” is a team development killer.  I have seen tests where a local resource from the user’s folder is used!  If doing a Get Latest on the solution does not get me all required dependencies, it is not going to work.  Moreover, if one or more tests are failing at any point in time, how can the tests be trusted?  Suppose you make a change to the source code and there were 50 failing tests before you started: you make the change, run the tests, and there are still 50 failing tests – which tests failed?  Are they still the same ones?  The fact is, you don’t know.  But when all tests pass and a change leads to one or more failing tests, you can evaluate the impact of the change and either fix the source or fix the tests.
    • Beware “integration” tests and separate them into their own projects with the word “integration” in them.  Unit tests have to run fast – tests that have dependencies on databases or are otherwise not self-contained should be separated out.  My personal preference is to put them into their own test projects.  I expect these tests will take longer.  At the same time, I expect that unit tests will run very fast.  This is really critical for continuous integration to be efficient.
  • Maintainable.
    • Test code is code also.  The same design principles we apply to source code should be applied to test code.  This is important because tests change over time to adjust for changing source code, so it is important that the tests be maintainable too.
    • Avoid repeating code.  You can use fixture setup and test setup methods to refactor otherwise repeating code.  You can also create helper classes to set up test data.
  • Readable.
    • This is really important.  I should be able to open any tests and understand them.  Otherwise, it is difficult to maintain the tests.
    • There are many ways to name test methods.  I really like a suggestion I heard on a podcast: follow the pattern <Method_Being_Tested>_<Scenario>_<Expected_Behavior>.  It works pretty well for me; I encourage you to try it.

A Unit Test Prototype

I have been testing for many years, and I found a pattern repeated over and over.  I did not have a name for it, but once I started working with RhinoMocks (a mocking framework – I will introduce this in the 201 or 301 course), I learned a name that I thought described well what I’d seen in my tests and other people’s tests.

The pattern is named the Arrange-Act-Assert (AAA) pattern.

  • Arrange.  Populate any objects and/or create any necessary Mocks or Stubs required by your test.
  • Act. Execute the code being tested.
  • Assert.  Verify that expectations were met or report failure.

Asserting is the critical step in any unit test.  If you are not making an assertion, then you are not really unit testing.  How many assertions should there be, you might ask?  Preferably one and only one.  On occasion I may have a couple of assertions if they are strongly related and it does not make sense to split the test into two tests.  The problem with having too many assertions in a test is that when the test fails, it can be difficult to understand right away what the problem is (remember the single responsibility principle?).
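As a minimal sketch of the Arrange-Act-Assert pattern (NUnit syntax; the Calculator class is a hypothetical subject under test):

```csharp
using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    // Named following <Method_Being_Tested>_<Scenario>_<Expected_Behavior>
    [Test]
    public void Add_TwoPositiveNumbers_ReturnsTheirSum()
    {
        // Arrange
        var calculator = new Calculator();

        // Act
        int sum = calculator.Add(2, 3);

        // Assert – one and only one assertion
        Assert.AreEqual(5, sum);
    }
}
```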

Writing Your First Unit Tests

While I was teaching the class, I showed the students samples I’d put together in Visual Studio.  For the readers of this blog post, I encourage you to try the tutorials provided by the unit testing frameworks.  In particular, I find NUnit to be a good place to start because the tutorial is well written and easy to follow.  But that does not mean you can’t work through other tutorials.

Coming next, Unit Testing 201 and 301.

Until then, happy unit testing!

11 March 2011

Unit Testing – A Curriculum

Filed under: .NET, Testing — Tags: , , , , , — Philippe Truche @ 3:36

The subject of unit testing is one that comes over and over in my professional experience. I have been asked many times to teach developers how to unit test. It seems like the words “unit testing” somehow got some level of fame and everyone wants to claim that they are doing it. But to do it well and to do it consistently seems elusive. Worse yet, there tends to be a dichotomy between development managers and the developers about what unit testing really is and why we unit test in the first place.

When asked to put a webinar together, I decided to break it down into 3 separate classes. My first one is called Unit Testing 101 and covers the fundamentals of unit testing. Though it focuses primarily on concepts, it also shows a basic unit test so as to give a flavor. I then created the second course: Unit Testing 201 – Intermediate Unit Testing. In this course, I look at techniques for handling dependencies and introduce basic concepts of test doubles and mocking frameworks. Finally, I reserved the most advanced techniques for Unit Testing 301 – Advanced Unit Testing: I cover compiler techniques in Visual Studio to access members that aren’t public, how to address dependencies without necessarily resorting to an IoC container, techniques for faking out the HttpContext, employing last-resort frameworks like Microsoft’s Moles to stub out System.DateTime for example, how to manipulate the configuration file, how to host WCF in process (I know, this is bordering on integration tests), and a basic overview of Microsoft’s Pex framework.

Yes, it sounds like there is a lot to know to write good unit tests. In fact, just like anything else – practice makes perfect.

In my next post, I will go through the contents of Unit Testing 101.

12 August 2008

.NET Assembly Versioning Lifecycle

While it may seem that versioning assemblies in .NET is a black art, it really isn’t.  The difficulty lies in the lack of guidance in this area.  In fact, I had to read many blogs to put together an approach on a project I was working on a few years ago (see Suzanne Cook’s excellent blog post “When to Change File/Assembly Versions”). 

I recently moved to a new project and found that this guidance is still helpful.  As a result, I decided to share it because it is still something that is not discussed much, yet it plays an important part of the configuration management of a .NET application.

Version Attributes
The .NET framework makes available three (3) separate version attributes:

  • AssemblyVersion.  As discussed in Assembly Versioning, the .NET runtime uses the assembly version number for binding purposes.  It is used for .NET internal purposes only and should not be correlated to the product version.
  • AssemblyFileVersion.  Instructs a compiler to use a specific version number for the Win32 file version resource. The Win32 file version is not required to be the same as the assembly’s version number.
  • AssemblyInformationalVersion.  “The attribute defined by this class attaches additional version information to an assembly” for documentation purposes only.  It is text-based informational version and typically corresponds to the product’s marketing literature, packaging, or product name.  This data is never used at runtime.

[assembly: AssemblyVersion("")]
[assembly: AssemblyFileVersionAttribute("3.0.2134.6636")]
[assembly: AssemblyInformationalVersion("3.0.1")]

Version Formats
Version formats are suggested for the assembly version (AssemblyVersionAttribute), the file version (AssemblyFileVersionAttribute), and the “product number” (AssemblyInformationalVersionAttribute).

Assembly Version Format
Version information for an assembly (set by using the AssemblyVersion attribute) consists of the following four values, as described in the Version Class in the .NET framework documentation.

  • Major Version. “Assemblies with the same name but different major versions are not interchangeable. This would be appropriate, for example, for a major rewrite of a product where backward compatibility cannot be assumed.”
  • Minor Version. “If the name and major number on two assemblies are the same, but the minor number is different, this indicates significant enhancement with the intention of backward compatibility. This would be appropriate, for example, on a point release of a product or a fully backward compatible new version of a product.”
  • Build Number.  “A difference in build number represents a recompilation of the same source. This would be appropriate because of processor, platform, or compiler changes.”
  • Revision.  “Assemblies with the same name, major, and minor version numbers but different revisions are intended to be fully interchangeable. This would be appropriate to fix a security hole in a previously released assembly.”

The assembly version number is used internally by .NET and should not matter to anyone outside of the development team.

File Version Format
This is the actual file version (set by using the AssemblyFileVersionAttribute attribute) and it satisfies the following goals:

  • It correlates the product binaries to the source files from which they were compiled (as long as labeling is performed in the source code control database)
  • It allows for the re-creation of older builds.
  • It clearly identifies upgrade and bug fix releases.
  • It clearly identifies which version of the source code is in production.

The File Version could follow this format:

  • Major Version. This is the internal version of the product and is assigned by the application team.  It should not change during the development cycle of a product release.
  • Minor Version. This is normally used when an incremental release of the product is planned rather than a full feature upgrade.  It is assigned by the application team, and it should not be changed during the development cycle of a product release.
  • Build Number. The build process usually generates this number.  Keep in mind that the numbers cannot be more than 5 digits.  One scheme for generating this number is what I call the ‘YearDaysCounter.’  The first 1-3 digits pertain to the day of the year on which the build is being performed.  It has the advantage of being sequential, going from 001 to 365 as time progresses through the year. The last 2 digits pertain to the iteration of the build for a particular day and thus allow a range of 01 through 99.
  • Revision. This could be assigned by the build team and could contain the reference number of the migration to the production environment.  When it is not known yet, a 0 is used until the number is issued.
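The ‘YearDaysCounter’ scheme above can be sketched as follows (the date and daily build counter are made-up values for illustration):

```csharp
using System;

// The day of the year (001-365) forms the leading digits; the daily
// build counter (01-99) forms the trailing two digits.
int dayOfYear = new DateTime(2008, 8, 12).DayOfYear; // 225 (2008 is a leap year)
int buildOfDay = 3;                                  // third build of the day
int buildNumber = dayOfYear * 100 + buildOfDay;      // 22503
```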

Product Number Format
This is the product number that is communicated to stakeholders outside the development and build teams (e.g. Application XYZ is on release “3.0.1”).  It follows the simpler 3-part format <major>.<minor>.<maintenance>.  The following guidelines are useful in changing the product version number:

  • Major Release.  Increment this number when major functionality is being released.
  • Minor Release.  Increment this number when alterations or enhancements to existing functionality is made and changes the end user experience.
  • Maintenance Release: Increment this number when bug fixes are released or performance enhancements are made that exclude functionality changes or material ‘look and feel’ changes to the user interface.

Like the assembly version number, the product version number should be changed after going to production.  It is set by using the AssemblyInformationalVersion attribute.

14 July 2008

Using the Enterprise Library Validation Block with WCF Asynchronous Operations

Filed under: .NET, WCF — Tags: , , , , — Philippe Truche @ 3:58

I was recently surprised to find out first hand that the Enterprise Library Validation Block throws a somewhat obscure exception: ArgumentNullException(“operation.SyncMethod”) when the WCF AsyncPattern is employed.

By debugging into the Enterprise Library source code (EntLib 3.1), I came to understand that the Validation Block was coded to handle synchronous service operations only (using the FaultContract<ValidationFault> attribute on the service operation signature).  In our case, some of our operations also implement the asynchronous pattern, and we have Begin and End method pairs for each.

While I was aware that one-way operations do not allow faults to be returned, I wasn’t sure about asynchronous service operations (implemented as asynchronous on the service side).  I checked the .NET documentation and was able to confirm that FaultContract<T> is supported on service operations implemented asynchronously using the AsyncPattern property.  A difficult-to-find comment exists in the documentation of the FaultException<TDetail> generic class: “The FaultContractAttribute can be used to specify SOAP faults for both two-way service methods and for asynchronous method pairs.”

With this information in hand, I reasoned that I should be able to modify the validation block.  I made the code changes, ran the NUnit tests included in the validation block, and employed the modified assembly on our asynchronous service operations.  I made modifications to ValidationParameterInspector.cs and ValidationBehavior.cs.  The scope of the modifications included the following methods:

  • ValidationParameterInspector.ValidationParameterInspector
  • ValidationParameterInspector.BeforeCall
  • ValidationBehavior.HasValidationAssertions

Note that the modifications apply to both EntLib 3.1 and EntLib 4.0.  The modifications are included below:

Changes to ValidationParameterInspector.cs

/// <summary>
/// Constructor to initialize <see cref="InputValidators"/>.
/// </summary>
/// <remarks>
/// When the operation is a <see cref="OperationDescription.BeginMethod"/> because it implements
/// the <see cref="OperationContractAttribute.AsyncPattern"/>,
/// it will cause the <see cref="InputValidators"/> to have empty validators for the
/// <see cref="AsyncCallback">callback</see> and <see cref="Object">state</see> method
/// arguments.
/// </remarks>
/// <param name="operation">The <see cref="OperationDescription"/> of the operation.</param>
/// <param name="ruleSet">A <see cref="String"/> naming the ruleset to be used.</param>
public ValidationParameterInspector(OperationDescription operation, string ruleSet)
{
    // Check the synchronous method of the service first. If the operation implements the
    // OperationContractAttribute.AsyncPattern property, set the method to the BeginMethod.
    MethodInfo method = operation.SyncMethod;
    if (method == null) method = operation.BeginMethod;

    foreach (ParameterInfo param in method.GetParameters())
    {
        switch (param.Attributes)
        {
            case ParameterAttributes.Out:
            case ParameterAttributes.Retval:
                break;
            default:
                inputValidators.Add(CreateInputParameterValidator(param, ruleSet));
                inputValidatorParameterNames.Add(param.Name);
                break;
        }
    }
}

/// <summary>
/// Performs the validations.
/// </summary>
/// <param name="operationName">A <see cref="String"/> containing the name of the operation.</param>
/// <param name="inputs">
/// An <see cref="object"/> array containing the input parameters of the method as described on
/// the operation, and thus excluding input parameters resulting from the service's implementation of the
/// <see cref="OperationContractAttribute.AsyncPattern"/> property.
/// </param>
/// <returns>null</returns>
public object BeforeCall(string operationName, object[] inputs)
{
    ValidationFault fault = new ValidationFault();

    // Because there could be fewer inputs than inputValidators, iterate
    // through the inputs to run the corresponding validations. This accounts
    // for asynchronous methods where the callback and state are defined in the
    // method but are not exposed outside the boundary of the service.
    for (int i = 0; i < inputs.Length; ++i)
    {
        ValidationResults results = inputValidators[i].Validate(inputs[i]);
        AddFaultDetails(fault, inputValidatorParameterNames[i], results);
    }

    if (!fault.IsValid)
    {
        throw new FaultException<ValidationFault>(fault);
    }

    return null;
}

Changes to ValidationBehavior.cs

/// <summary>
/// Returns a <see cref="Boolean"/> to indicate whether or not an operation has validation assertions.
/// </summary>
/// <remarks>
/// If the method implements the <see cref="OperationContractAttribute.AsyncPattern"/> property,
/// the BeginMethod is used instead of the SyncMethod because the SyncMethod is null.
/// </remarks>
/// <param name="operation"></param>
/// <returns></returns>
private bool HasValidationAssertions(OperationDescription operation)
{
    if (operation == null) throw new ArgumentNullException("operation");

    // Check the synchronous method of the service first. If the operation implements the
    // OperationContractAttribute.AsyncPattern property, set the method to the BeginMethod.
    MethodInfo methodInfo = operation.SyncMethod;
    if (methodInfo == null) methodInfo = operation.BeginMethod;

    return methodInfo.GetCustomAttributes(typeof(ValidatorAttribute), false).Length > 0 ||
        // code elided
I created a copy of this post on CodePlex.

2 June 2008

WCF, Conditional Compilation, and language differences

Filed under: .NET, WCF, Web Services — Tags: — Philippe Truche @ 11:46

Perhaps you have defined WCF message contracts lately, and you want to be explicit about the protection level you require for your message parts.  I am taking the example of a message contract, but really my point is applicable to any element to which you wish to apply the ProtectionLevel enumeration.

In C#:

[MessageHeader(Name="Environment", MustUnderstand=true
#if !DEBUG
 , ProtectionLevel = ProtectionLevel.Sign
#endif
)]
public String Environment
{
 get { return _environment; }
 set { _environment = value; }
}

In VB.NET:

#If CONFIG = "Debug" Then
 <MessageHeader(Name:="Environment", MustUnderstand:=True)> _
 Public Property Environment() As String
#Else
 <MessageHeader(Name:="Environment", MustUnderstand:=True, ProtectionLevel:=ProtectionLevel.Sign)> _
 Public Property Environment() As String
#End If
 Get
 Return _environment
 End Get
 Set(ByVal Value As String)
 _environment = Value
 End Set
 End Property


As you can see, there are some key differences in how attributes are specified using conditional compilation:
  • In C#, I can toggle parts of the same attribute on and off.  In VB.NET, I must repeat the entire attribute (no parts allowed by the compiler).
  • Because of line continuation constraints in VB.NET, the line following the line being continued must be included inside the conditional compilation block.
  • Using VB.NET in Visual Studio (2005 or 2008), it is not clear which part of the conditional compilation is “active” based on the selected configuration.  In C#, the inactive block is grayed out.

Keep these “consequences” in mind as you apply conditional compilation to WCF attributes in the language you write in.

25 April 2007

Using Visual Studio Team System to load test web services

Filed under: .NET, Testing, VSTS, Web Services — Philippe Truche @ 1:55

I recently had to load test web services for the smart client application we have developed.  I wanted to be able to go through quick cycles (test, evaluate, adjust) without having to wait a few days for a load test slot in our LoadRunner lab, so I offered to use Visual Studio Team System to perform the tests.  I already had experience with Microsoft’s previous load test tool – Application Center Test.  I was pleased with VSTS’ version of a load test tool.  Not quite as powerful as LoadRunner, but much more accessible and easier to use.

Anyway, I used VSTS successfully to load test web services and satisfy the following requirements:

  • Randomly pick data from a set for a given test.
  • Invoke the web services using HTTP POSTs (i.e. without having to create a client).
  • Use a configuration file to store the SOAP envelopes needed to make the web service calls.

Picking data randomly 

When using VSTS tests, the test methods are decorated with the [TestMethod] attribute.    The [DataSource] attribute can also be used on the test method to indicate that the test is data-driven.  Caution: the test is executed as many times as there are rows in the data source.  That may be OK when performing unit tests, but it is not OK for a load test.  So how do you get around this?  Well, I got my data source to return only one row picked randomly.  Using SQL Server 2005 Express, I defined views that would select one row of data using the following construct: “SELECT TOP(1) columnName1 AS alias1, columnName2 AS alias2, …. , columnNameN as aliasN FROM tableName ORDER BY NEWID();”

You might wonder why I am using aliases on the column names.  The reason I did this was to decouple the column names in the tables from the element names in the SOAP envelopes.  When the data source provides the row of data, I have a utility run through the column names of the DataRow object and substitute the data in the SOAP envelope based on the names of the columns found in the DataRow provided through the [DataSource] attribute.  This simple convention makes it very easy to parameterize the SOAP sent to the service.  Because the load test client should be as efficient as possible, I use a simple string parsing algorithm to perform the substitutions.
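A minimal sketch of such a substitution utility is shown below.  The SoapParameterizer name and the @ColumnName@ token format are my own assumptions for illustration, not the project’s actual code; the idea is simply that each column name in the DataRow doubles as a placeholder in the SOAP envelope template.

```csharp
using System;
using System.Data;

// Hypothetical sketch: each column name in the DataRow is treated as a token
// in the SOAP envelope template and replaced with that column's value.
// The @ColumnName@ token convention is an assumption for illustration.
public static class SoapParameterizer
{
    public static string Substitute(string soapTemplate, DataRow row)
    {
        string result = soapTemplate;
        foreach (DataColumn column in row.Table.Columns)
        {
            // Simple string replacement keeps the load test client lightweight.
            result = result.Replace("@" + column.ColumnName + "@",
                                    row[column].ToString());
        }
        return result;
    }
}
```

Because the SQL views alias each column, the aliases map directly onto these tokens without tying the envelopes to the physical table schema.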

Invoke the web services using HTTP POSTs

I created a static class that uses an HttpWebRequest object to create the POST.   The result is returned into an HttpWebResponse object by simply doing this:

using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
{
    return response.StatusCode.ToString();
}

If the web server responds with error response codes (e.g. HTTP 401, HTTP 500), VSTS detects that and reports on the errors.  This is much better than Application Center Test where I was left to parse through the responses to make sure I did not get an error back.
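For context, here is a sketch of what such a static helper could look like end to end.  The SoapPoster name, the method signature, and the header handling are my assumptions, not the actual class from the project.

```csharp
using System;
using System.IO;
using System.Net;
using System.Text;

// Hypothetical sketch of a static helper that POSTs a SOAP envelope to an
// ASMX endpoint using HttpWebRequest. Names and signature are assumptions.
public static class SoapPoster
{
    public static string Post(string url, string soapAction, string envelope)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "POST";
        request.ContentType = "text/xml; charset=utf-8";
        // ASMX services dispatch on the SOAPAction header, quoted per SOAP 1.1.
        request.Headers.Add("SOAPAction", "\"" + soapAction + "\"");

        byte[] body = Encoding.UTF8.GetBytes(envelope);
        request.ContentLength = body.Length;
        using (Stream requestStream = request.GetRequestStream())
        {
            requestStream.Write(body, 0, body.Length);
        }

        // An error status (e.g. HTTP 401, HTTP 500) surfaces as a WebException,
        // which VSTS records as a test failure.
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        {
            return response.StatusCode.ToString();
        }
    }
}
```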

Use a configuration file to store the SOAP envelopes needed to make the web service calls.

For this, my team member created an XML file that is loaded into an XmlDocument, from which XmlNodes are selected and stored into an object that holds the SOAPAction to perform, the SOAP envelope to send to the service, and the Url to use.  Yes, I know, this is very ASMX-centric and will likely need to change with WCF, but we were not developing a framework either.  We needed a point solution to get going quickly.
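A hypothetical sketch of that configuration approach follows.  The XML layout, the element names, and the SoapCallDefinition class are assumptions for illustration; only the general shape (XmlDocument, node selection, a holder object) comes from the description above.

```csharp
using System.Xml;

// Hypothetical holder for one web service call: SOAPAction, envelope, and Url.
// The file layout assumed here is:
// <calls><call name="..."><soapAction/><url/><envelope/></call></calls>
public class SoapCallDefinition
{
    public string SoapAction;
    public string Url;
    public string Envelope;

    public static SoapCallDefinition Load(string path, string callName)
    {
        XmlDocument document = new XmlDocument();
        document.Load(path);

        // Select the <call> node for the requested call by its name attribute.
        XmlNode node = document.SelectSingleNode(
            "/calls/call[@name='" + callName + "']");

        SoapCallDefinition call = new SoapCallDefinition();
        call.SoapAction = node.SelectSingleNode("soapAction").InnerText;
        call.Url = node.SelectSingleNode("url").InnerText;
        // InnerXml preserves the envelope's element structure for later substitution.
        call.Envelope = node.SelectSingleNode("envelope").InnerXml;
        return call;
    }
}
```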

Leave a comment if you have any questions or comments.

24 March 2007

Encrypting Enterprise Library Sections in the *.exe.config file

Filed under: .NET — Philippe Truche @ 4:49

I got a question on encrypting Enterprise Library sections in the application configuration file.  I must admit this was somewhat tricky, so here is some additional information on this topic.  To encrypt an Enterprise Library (EL) section, you must copy the EL section handler assemblies to the system directory.  To do this, go to the File System Editor in the Setup project and click on the System Folder; then, include the EL section handlers.  For example, to encrypt the section, you must include Microsoft.Practices.EnterpriseLibrary.Common.dll and Microsoft.Practices.EnterpriseLibrary.Security.dll in the System Folder of the Setup project.  The reason for this is that msiexec.exe is installed in the system folder (usually C:\Windows\System32) and looks for dependent assemblies there.  Here is a screenshot of what this looks like in the setup project:

File System Editor - System Folder

The config file, once encrypted, will look like this:

EL section encrypted in config file

Hopefully this post addresses the areas that might have been ambiguous in my previous posts.

17 March 2007

Smart Client and Web Application Development

Filed under: .NET — Philippe Truche @ 12:59

In the past 6 months, I have had the opportunity to develop a smart client application, using the CAB, the Smart Client Software Factory, the Web Service Software Factory, and the Enterprise Library 2.0.  I have worked on web application development since 2000, and client/server before that.

I was asked recently how smart client is different from a web application, whether it’s easier or more difficult to develop.  I think to answer this properly, one really needs to think about the architectural differences.  Let’s start with a web application.  In my experience, the layers of the application reside on the same physical tier.  I am pretty sure this is true in the majority of cases, and there are good reasons for it.  It is much easier to pass business entity objects between the UI, business, and data layers when those layers reside on a web server (typically in a single application domain), and it certainly does not prevent scaling out since you can replicate this setup on a number of servers grouped together in a cluster (a.k.a. web farm).  As outlined by the patterns and practices group in “Application Architecture for .NET: Designing Applications and Services”, the Web Farm With Local Business Logic is a common deployment architecture and has the advantage of providing the best performance.

Now on to the smart client.  The deployment pattern we have implemented is named “Rich Client with Web Service Access” in the same Application Architecture for .NET: Designing Applications and Services document.  This setup, along with the alternate “Rich Client with Remote Components”, naturally involves multiple physical tiers.  The mere fact that a client is deployed to end user computers means that there are multiple physical tiers.

This leads me to the following conclusion.  If you are going to develop a smart client application, you cannot escape the realities of distributed computing.  Comparing a web application with local business logic to a smart client application is not a fair comparison because the web application deployed in such a way does not involve distributed computing – objects are passed between the layers without worrying about network latency or serialization.  A fairer comparison is a web application with remote business logic on one side and the smart client application on the other, because both deal with multiple physical tiers.

What do you think?  Please post your comments as I am interested in hearing your opinions.
