Philippe Truche’s Blog

11 July 2012

Deploying web applications to multiple environments using Microsoft Web Deploy

Filed under: .NET, TFS, Web Deploy — Tags: — Philippe Truche @ 8:54

There is one thing application development teams can appreciate: deploying applications can be difficult and time-consuming, and the time spent supporting deployments is time not spent building features.

Fortunately, the technologies for deploying web applications have matured quite a bit in the past few years.  In a recent project I used Microsoft Web Deploy to deploy two web applications to five environments.  It sounds pretty straightforward until you consider the following:

  • One of the web applications was deployed to both external and internal web server clusters, and when deployed on the internal cluster it had a different configuration enabling the behaviors desired for internal users.
  • Some of the environments included an AppFabric Cache host running locally on the web server, while other environments depended on remote cache hosts.
  • One of the web applications, mostly ASP.NET Web Forms, contained a Silverlight “island.”  The web forms and the Silverlight applications were developed independently, and we needed to be able to deploy them independently because their migration schedules did not always coincide.

This presented a significant set of challenges, but with some trial and error we achieved a “no-touch” deployment, much to the application development team’s joy.  In this article, I do not cover the basics of using Microsoft Web Deploy, though I provide a synopsis of the toolset.  Instead, I focus on the more challenging areas so that readers can apply the techniques outlined here in their own projects.

The Toolset

  • Visual Studio 2010.  In addition to Web.config, there is now Web.Debug.config and Web.Release.config.  This is not limited to the default build configurations: if you create your own build configuration, for example MyBuildConfig, then you can create Web.MyBuildConfig.config.  But what is the point of these new files?  Visual Studio applies the XDT (XML-Document-Transform) transformations specified in those files to the Web.config.  For example, when creating a release web deployment package, you might want to modify the compilation element to remove the debug attribute: instead of the default <compilation debug="true" targetFramework="4.0" defaultLanguage="cs"/>, you want <compilation targetFramework="4.0" defaultLanguage="cs"/> (debug defaults to "false" when omitted).  A minimal example of such a transform is sketched just after this list.
  • msdeploy.exe.  This command-line utility is part of the Microsoft Web Deploy toolset, itself part of the Microsoft IIS stack.  Web Deploy allows web applications to be deployed to target servers and makes deployment-time modifications to the Web.config file.  How does this work?  You add a parameters.xml file to your Web project in Visual Studio.  When the web deployment package is created, this file is processed into a projectName.SetParameters.xml file located in the folder in which the deployment package is created.  In Visual Studio 2010, this is by default in the directory tree created in the obj folder; in Visual Studio 2012, you specify the destination folder when you create the publish profile using the “Web Deploy Package” publishing method (see http://msdn.microsoft.com/en-us/library/dd465323.aspx for additional information).
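To make the mechanism concrete, here is a minimal sketch of a Web.Release.config that performs the debug-attribute removal described above.  The xdt namespace declaration is what enables the Transform attributes; the complete transform file used in this article appears further down.

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.web>
    <!-- Remove the debug attribute from the compilation element when building the Release configuration. -->
    <compilation xdt:Transform="RemoveAttributes(debug)" />
  </system.web>
</configuration>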

To summarize, you use Visual Studio to make modifications to the Web.config file based on your target configuration (Debug, Release, or your own custom configuration) regardless of the target environment, while you use msdeploy.exe to deploy and make changes to the Web.config file at deployment time based on environment factors (e.g. database connection strings, service URLs, etc.).

That was a very short introduction to the tools at our disposal and their purpose.  Let’s go into much more detail about how you use these tools to handle a number of common scenarios.

Build Configuration Transformations

Visual Studio can create a version of the Web.config file for each build configuration – by default, Debug and Release – and there are a number of transformations you can specify to obtain configuration files appropriate for each. There is another use of this feature that perhaps does not get highlighted much: the ability to make configuration files targeted at servers. The Web.config file in a Visual Studio web project is targeted at local development on the developers’ machines. Rarely is it “server ready.” Using Visual Studio’s ability to transform this file makes it possible to produce Web.config files targeted at server deployments.

In the list that follows, I highlight a number of transformations that can be applied to make server-ready Web.config files regardless of the target configuration (Debug or Release).

  • Inserting configuration sections not used in local development. This is handy when using server products that aren’t available locally on the developers’ machines. For example, you might use the ASP.NET State Service on your developer machines but switch to AppFabric Cache on the servers.
  • Alter logging sections so that you log to files when in Visual Studio but log to a database when on the server.
  • Alter SMTP so that emails generated by the web site go to a folder when working from Visual Studio but go to the target SMTP server when running from servers.
  • Insert assemblies into the /system.web/compilation/assemblies node. Back to the AppFabric Cache example: the DLLs are present on the server because the product is installed there, but it is usually not installed on the developers’ machines. The ability to insert those assemblies when making the web deployment package is most useful.
  • Alter the session state timeouts. Often, it is useful to have long timeouts on local machines because debugging requires it; once on the server, however, you should have shorter timeouts.
  • Alter web server behaviors. For example, it is convenient to allow browsing directories from the web browser locally but is not appropriate on the servers. Likewise, you can set your static content cache expiration far out into the future and set up the removal of Etags if not using them. Another useful application is to set up redirects from HTTP to HTTPS – great on the server but not desirable locally.
  • Remove sections not needed on the server. Some of the profiling tools you might use require the ASP.NET development server, while you may be using IIS Express locally and IIS 7.x on the servers. This makes the management of your configuration file a bit tricky, since system.web/customErrors, system.web/httpHandlers, and system.web/httpModules apply to the ASP.NET development server (pre-IIS 7.x), while system.webServer/httpErrors, system.webServer/handlers, and system.webServer/modules are used by IIS 7.x. I recommend maintaining both system.web and system.webServer in the Web.config file and simply removing unwanted system.web sections when making Web.config files targeting server deployments.
  • WCF logging. You can set up message logging locally so that every message is recorded but remove message logging on the servers.
  • WCF metadata. You may want to have your MEX endpoints available in development but remove those from other environments.
  • Debug items. While you may want the compilation element’s debug attribute set to true in the Debug configuration, you definitely should not see it in Release Web.config files.
  • If using web forms in a web farm, you will want to use the same validation and decryption keys across all nodes.

To illustrate these points, let’s take the following sample Web.config file:

<?xml version="1.0"?>

<configuration>

  <configSections>
    <section name="loggingConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.LoggingSettings, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=9c844884b2afcb9e" requirePermission="true" />
    <section name="cachingConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Caching.Configuration.CacheManagerSettings, Microsoft.Practices.EnterpriseLibrary.Caching, Version=5.0.414.0, Culture=neutral, PublicKeyToken=9c844884b2afcb9e" requirePermission="true" />
  </configSections>

  <cachingConfiguration defaultCacheManager="Cache Manager">
    <cacheManagers>
      <add name="Cache Manager" type="Microsoft.Practices.EnterpriseLibrary.Caching.CacheManager, Microsoft.Practices.EnterpriseLibrary.Caching, Version=5.0.414.0, Culture=neutral, PublicKeyToken=9c844884b2afcb9e" expirationPollFrequencyInSeconds="60" maximumElementsInCacheBeforeScavenging="1000" numberToRemoveWhenScavenging="10" backingStoreName="NullBackingStore" />
    </cacheManagers>
    <backingStores>
      <add type="Microsoft.Practices.EnterpriseLibrary.Caching.BackingStoreImplementations.NullBackingStore, Microsoft.Practices.EnterpriseLibrary.Caching, Version=5.0.414.0, Culture=neutral, PublicKeyToken=9c844884b2afcb9e" name="NullBackingStore" />
    </backingStores>
  </cachingConfiguration>

  <connectionStrings>
    <add name="EntLibErrorLogging" connectionString="EntLibLoggingConnString" providerName="System.Data.SqlClient" />
    <add name="Application" connectionString="ApplicationConnString" providerName="System.Data.EntityClient" />
  </connectionStrings>

  <loggingConfiguration name="Logging Application Block" defaultCategory="General" logWarningsWhenNoCategoriesMatch="true">
    <listeners>
      <add name="Event Log Listener" type="Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.FormattedEventLogTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging"
                listenerDataType="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.FormattedEventLogTraceListenerData, Microsoft.Practices.EnterpriseLibrary.Logging"
                source="CSWS Logging" formatter="Text Formatter"  log="" machineName="." traceOutputOptions="None" />
      <add name="Rolling Flat File Trace Listener" type="Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.RollingFlatFileTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging"
           listenerDataType="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.RollingFlatFileTraceListenerData, Microsoft.Practices.EnterpriseLibrary.Logging"
           fileName="entlib_processed_exceptions.log"
           header="~~~~~~~~~~~~ Begin Log Entry ~~~~~~~~~~~~~~~~"
           footer="~~~~~~~~~~~~ End Log Entry ~~~~~~~~~~~~~~~~~~"
           formatter="Text Formatter"
           rollFileExistsBehavior="Increment" rollInterval="Day" rollSizeKB="1500" />
      <add name="Database Trace Listener" type="Microsoft.Practices.EnterpriseLibrary.Logging.Database.FormattedDatabaseTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging.Database"
          listenerDataType="Microsoft.Practices.EnterpriseLibrary.Logging.Database.Configuration.FormattedDatabaseTraceListenerData, Microsoft.Practices.EnterpriseLibrary.Logging.Database"
          databaseInstanceName="EntLibErrorLogging" writeLogStoredProcName="WriteLog"
          addCategoryStoredProcName="AddCategory" formatter="Text Formatter"
          traceOutputOptions="None" filter="All" />
    </listeners>
    <formatters>
      <add name="Text Formatter" type="Microsoft.Practices.EnterpriseLibrary.Logging.Formatters.TextFormatter, Microsoft.Practices.EnterpriseLibrary.Logging"
           template="&amp;lt;FormattedMessage&gt;&amp;lt;LocalTimestamp&gt;{timestamp(local)}&amp;lt;/LocalTimestamp&gt;&amp;lt;Category&gt;{category}&amp;lt;/Category&gt;&amp;lt;Title&gt;{title}&amp;lt;/Title&gt;&amp;lt;Severity&gt;{severity}&amp;lt;/Severity&gt;&amp;lt;ProcessId&gt;{localProcessId}&amp;lt;/ProcessId&gt;&amp;lt;Win32ThreadId&gt;{win32ThreadId}&amp;lt;/Win32ThreadId&gt;&amp;lt;ExtendedProperties&gt;{dictionary({key} - {value})}&amp;lt;/ExtendedProperties&gt;&amp;lt;ExceptionHandlerMessage&gt;{message}&amp;lt;/ExceptionHandlerMessage&gt;&amp;lt;/FormattedMessage&gt;" />
    </formatters>
    <categorySources>
      <add name="General" switchValue="All" >
        <listeners>
          <add name="Rolling Flat File Trace Listener" />
        </listeners>
      </add>
      <add name="Exceptions" switchValue="All" >
        <listeners>
          <add name="Rolling Flat File Trace Listener" />
        </listeners>
      </add>
    </categorySources>
    <specialSources>
      <!-- The All Events special category receives all log entries. -->
      <allEvents name="All Events" switchValue="All" />
      <!-- The Unprocessed Category special category receives all log entries that are not processed by a source category (such as entries that specify a category that is not configured). -->
      <notProcessed name="Unprocessed Category" switchValue="All" >
        <listeners>
          <add name="Rolling Flat File Trace Listener" />
        </listeners>
      </notProcessed>
      <!-- The Logging Errors & Warnings special category receives log entries for errors and warnings that occur during the logging process-->
      <errors name="Logging Errors &amp; Warnings" switchValue="All" >
        <listeners>
          <add name="Event Log Listener" />
        </listeners>
      </errors>
    </specialSources>
  </loggingConfiguration>

  <system.diagnostics>
    <sources>
      <source name="System.ServiceModel" switchValue="Information, ActivityTracing" propagateActivity="true">
        <listeners>
          <add name="RollingXmlFileTraceListener" />
        </listeners>
      </source>
      <source name="System.ServiceModel.MessageLogging">
        <listeners>
          <add name="RollingXmlFileTraceListener" />
        </listeners>
      </source>
    </sources>
    
    <sharedListeners>  
    <!-- The RollingXmlFileTraceListener also supports the following attributes.  Note that the attributes
       are EntLib logging block attributes and operate in exactly the same way.  All are optional.
       
       rollSizeKB             => The size in kilobytes that the file can reach before it rolls over.
                                 Defaults to 1024 (i.e. 1 MB).
       timeStampPattern       => The format of the date that is appended to the name of the new file.
                                 Defaults to "yyyy-MM-dd".
       rollInterval           => The time interval that determines when the file rolls over.
                                 Possible values are None (the default), Minute, Hour, Day, Week, Month, and Year.
       rollFileExistsBehavior => The behavior that occurs when the roll file is created.
                                 Possible values are Overwrite (the default), which overwrites the existing file;
                                 and Increment, which creates a new file. -->
  
      <add name="RollingXmlFileTraceListener"
          type="Framework.ServiceModel.Logging.RollingXmlFileTraceListener, Framework.ServiceModel"
          rollSizeKB="67108864"
          timeStampPattern="dd-MM-yyyy"
          rollFileExistsBehavior="Increment"
          initializeData="csws.svclog"
          traceOutputOptions="Timestamp"/>
    </sharedListeners>
  </system.diagnostics>

  <system.net>
    <mailSettings>
      <smtp from="user@address.domain" deliveryMethod="PickupDirectoryFromIis">
        <network host="localhost" port="25" defaultCredentials="true" />
      </smtp>
    </mailSettings>
  </system.net>

  <system.serviceModel>
    <bindings>
      <basicHttpBinding>
        <binding name="Client.ReportServer.BasicHttp.BindingConfig" maxBufferSize="524288" maxReceivedMessageSize="524288">
          <security mode="Transport">
            <transport clientCredentialType="Windows" />
          </security>
        </binding>

      </basicHttpBinding>
    </bindings>
    <client>
      <endpoint address="https://dev.sqlrpt.net/ReportServer/ReportExecution2005.asmx" binding="basicHttpBinding" bindingConfiguration="Client.ReportServer.BasicHttp.BindingConfig" contract="Proxy.ReportExecutionServiceSoap" name="ReportExecutionServiceSoap" />
      <endpoint address="https://dev.sqlrpt.net/ReportServer/ReportService2005.asmx" binding="basicHttpBinding" bindingConfiguration="Client.ReportServer.BasicHttp.BindingConfig" contract="ReportingServiceProxy.ReportingService2005Soap" name="ReportingService2005Soap" />
    </client>
    <diagnostics wmiProviderEnabled="true">
      <messageLogging logEntireMessage="true" logMalformedMessages="true" logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="true" maxMessagesToLog="3000" />
    </diagnostics>
  </system.serviceModel>

  <system.web>
    <authentication mode="None" />

    <compilation debug="true" batch="false" targetFramework="4.5" defaultLanguage="cs">
      <assemblies>
        <add assembly="Microsoft.Practices.EnterpriseLibrary.Validation, Version=5.0.414.0, Culture=neutral, PublicKeyToken=9c844884b2afcb9e" />
      </assemblies>
    </compilation>

    <customErrors mode="Off" />

    <httpHandlers>
      <!-- This only matters if using the Visual Studio Web Development Server.  In IIS and IIS Express, /system.webServer/handlers is used.-->
    </httpHandlers>

    <httpModules>
      <!-- This only matters if using the Visual Studio Web Development Server.  In IIS and IIS Express, /system.webServer/modules is used.-->
    </httpModules>

    <httpRuntime executionTimeout="300" maxRequestLength="51200"/>

    <sessionState mode="StateServer" timeout="1440" />

    <trace enabled="true" localOnly="false" />
  </system.web>

  <system.webServer>

    <defaultDocument>
      <files>
        <clear />
        <add value="public/default.aspx" />
      </files>
    </defaultDocument>

    <staticContent>
      <mimeMap fileExtension=".svg" mimeType="image/svg+xml" />
    </staticContent>

    <validation validateIntegratedModeConfiguration="false" />

  </system.webServer>

</configuration>

This web.config file is fine for use on developers’ workstations – session state is kept in the ASP.NET state service, logging targets local files, and emails go to file folders so developers don’t actually send emails.

To prepare the Web.config file for a Release configuration build to servers, the first step is to apply transformations to it using the Web.release.config file. The syntax is described fairly well by MSDN. Here is the transformation file:

<?xml version="1.0"?>

<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">

  <configSections>
    <section name="dataCacheClient" 
             type="Microsoft.ApplicationServer.Caching.DataCacheClientSection, Microsoft.ApplicationServer.Caching.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" 
             allowLocation="true"  allowDefinition="Everywhere" 
             xdt:Transform="InsertAfter(/configuration/configSections/section[@name='cachingConfiguration'])"/>
  </configSections>

  <dataCacheClient requestTimeout="180000" channelOpenTimeout="18000" maxConnectionsToServer="10" xdt:Transform="InsertAfter(/configuration/connectionStrings)">
    <hosts>ENV_DRIVEN_CONFIG_SET_BY_PARAM_TEXT_REPLACE</hosts>
    <securityProperties mode="Transport" protectionLevel="EncryptAndSign" />
    <transportProperties connectionBufferSize="500000" maxBufferPoolSize="2147483647" maxBufferSize="2147483647" maxOutputDelay="2"
                         channelInitializationTimeout="86400000" receiveTimeout="86400000" />
  </dataCacheClient>

  <loggingConfiguration>
    <listeners xdt:Locator="XPath(add[@name='Event Log Listener'])" xdt:Transform="Remove"/>
  </loggingConfiguration>

  <loggingConfiguration xdt:Locator="XPath(categorySources/add[@name='General']/listeners/add[@name='Rolling Flat File Trace Listener'])"
                  name="Database Trace Listener" xdt:Transform="SetAttributes(name)"/>
  <loggingConfiguration xdt:Locator="XPath(categorySources/add[@name='Exceptions']/listeners/add[@name='Rolling Flat File Trace Listener'])"
                  name="Database Trace Listener" xdt:Transform="SetAttributes(name)"/>
  <loggingConfiguration xdt:Locator="XPath(specialSources/notProcessed/listeners/add[@name='Rolling Flat File Trace Listener'])"
                  name="Database Trace Listener" xdt:Transform="SetAttributes(name)"/>
  <loggingConfiguration xdt:Locator="XPath(specialSources/errors/listeners/add[@name='Event Log Listener'])"
                  name="Rolling Flat File Trace Listener" xdt:Transform="SetAttributes(name)"/>

  <system.diagnostics>
    <sources xdt:Transform="Replace">
      <source name="System.ServiceModel" switchValue="Warning" propagateActivity="true">
        <listeners>
          <add name="RollingXmlFileTraceListener"/>
        </listeners>
      </source>
    </sources>
  </system.diagnostics>

  <system.net>
    <mailSettings xdt:Transform="Replace">
      <smtp from="user@address.domain">
        <network
          host="localhost"
          port="25"
          defaultCredentials="true"
          />
      </smtp>
    </mailSettings>
  </system.net>

  <system.serviceModel>
    <diagnostics wmiProviderEnabled="true" xdt:Transform="Replace" />
  </system.serviceModel>

  <system.web>

    <compilation xdt:Transform="RemoveAttributes(debug)" />
    <compilation batch="true" xdt:Transform="SetAttributes(batch)">
      <assemblies>
        <add assembly="Microsoft.ApplicationServer.Caching.Client, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" xdt:Transform="Insert"/>
        <add assembly="Microsoft.ApplicationServer.Caching.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" xdt:Transform="Insert"/>
      </assemblies>
    </compilation>

    <customErrors xdt:Transform="Remove" />

    <httpHandlers xdt:Transform="Remove" />

    <httpModules xdt:Transform="Remove" />

    <machineKey validationKey="1A7A288183C4302270B248DA1FB4454AC3F287C9D759D4203843D3C3D5FA86B809E05EC353D8CD1F48B894A45BC5349109335C5A77163799ABBA8EB710BE68C6"
               decryptionKey="81B5F289477B6DAC9589B31B5F3B33BF1F03F2F5CF4D611887A4E8AC6B6D32A0"
               validation="SHA1" decryption="AES" xdt:Transform="Insert" />

    <sessionState mode="Custom" timeout="120" customProvider="AppFabricCacheSessionStoreProvider" xdt:Transform="Replace">
      <providers>
        <add
          name="AppFabricCacheSessionStoreProvider"
          type="Microsoft.ApplicationServer.Caching.DataCacheSessionStoreProvider"
          cacheName="MyApplicationCache"
          sharedId="AppCache"/>
      </providers>
    </sessionState>

    <trace xdt:Transform="Remove"/>

  </system.web>

  <system.webServer>

    <directoryBrowse enabled="false" xdt:Transform="InsertAfter(/configuration/system.webServer/defaultDocument)"/>

    <staticContent xdt:Transform="Replace">
      <clientCache httpExpires="Sun, 29 Mar 2020 00:00:00 GMT" cacheControlMode="UseExpires" />
    </staticContent>

    <rewrite xdt:Transform="InsertBefore(/configuration/system.webServer/staticContent)">
      <outboundRules>
        <rule name="Remove ETag">
          <match serverVariable="RESPONSE_ETag" pattern=".+" />
          <action type="Rewrite" value="" />
        </rule>
      </outboundRules>
    </rewrite>

    <validation xdt:Transform="Remove"/>

  </system.webServer>

  <location path="public/Application" xdt:Transform="Insert">
    <system.webServer>
      <rewrite>
        <rules>
          <rule name="Redirect to HTTPS" stopProcessing="true">
            <match url="(.*)" />
            <conditions>
              <add input="{HTTPS}" pattern="^OFF$" />
            </conditions>
            <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
          </rule>
        </rules>
      </rewrite>
    </system.webServer>
  </location>

</configuration>

As the transformation file shows, the following transformations are accomplished:

  • A dataCacheClient configuration section is defined after the cachingConfiguration section in the configSection node.
  • The dataCacheClient section is inserted after the connectionStrings section.
  • In the logging configuration section, the Event Log listener is removed from the listeners collection, the category sources and the unprocessed special source are switched from the rolling flat file listener to the database listener, and the logging-errors special source is switched to the rolling flat file listener.
  • The smtp section is replaced with the desired SMTP configuration.
  • In the system.web section:
    • the compilation debug attribute is removed and the batch attribute is set to true
    • the AppFabric assemblies are added to the compilation section
    • the customErrors, httpHandlers, and httpModules sections are removed
    • a machineKey section is inserted
    • the sessionState section is replaced with one appropriate for use with AppFabric Caching
    • the trace section is removed
  • In the system.webServer section:
    • directoryBrowse is set to false to disallow directory browsing
    • a staticContent section is inserted to ensure content is set to expire far into the future
    • IIS rewriting rules are specified via a rewrite section
    • the validation section is removed because we removed the httpHandlers and httpModules sections from system.web (the validation section is only useful when running the application locally using the ASP.NET development server for code profiling purposes – use IIS Express otherwise)
  • A location section is inserted to define redirection to HTTPS for one of the areas of the application.

When you use Visual Studio to publish the application to a Web Deploy package, the output window shows the work being performed.  This is really important, and I recommend always reviewing the messages in the output window.

2>—— Publish started: Project: WebFormsApp, Configuration: Release Any CPU ——
2>Transformed Web.config using C:\Projects\WebFormsApp\WebFormsApp\Web.Release.config into obj\Release\TransformWebConfig\transformed\Web.config.
2>Auto ConnectionString Transformed obj\Release\TransformWebConfig\transformed\Web.config into obj\Release\CSAutoParameterize\transformed\Web.config.
2>Copying all files to temporary location below for package/publish:
2>obj\Release\Package\PackageTmp.
2>Packaging into c:\temp\WebFormsApp.zip.
2>Adding declared parameter ‘EntLibErrorLogging-Web.config Connection String’.
2>Adding declared parameter ‘Application-Web.config Connection String’.
2>Package “WebFormsApp.zip” is successfully created as single file at the following location:
2>file:///c:/temp
========== Publish: 1 succeeded, 0 failed, 0 skipped ==========

As the messages show, the Web.config file was transformed into .\obj\Release\TransformWebConfig\transformed\Web.config.  This is great because it allows you to perform a comparison between the original Web.config file and the transformed file.  When a transformation can’t be applied, Visual Studio shows detailed messages about which transformations failed.  I can’t stress enough how important it is to review the messages to make sure the transformations took place.

The other message to notice at the bottom is the name and location of the package that was created, assuming you configured Visual Studio to ZIP up the files to be deployed into a single file.  If you choose to use a folder instead, the message would show the location of the folder created.

Let’s take a look at where Visual Studio stores its files to do the transformation work.  Simply click the Show All Files button in the Solution Explorer and the following structure is revealed, in line with the messages shown in the output window.

[Screenshot: Solution Explorer with Show All Files enabled, showing the transformed files under the obj folder]

I like to copy the paths of the original and transformed Web.config files to validate that my transformation gave me the results I expected.  Here is a comparison report of the Web.config files.  The items stricken out were removed or modified by the transformations; the inserted or updated values are shown in color.

[Screenshot: comparison report between the original and transformed Web.config files]

So far all we have managed to do is perform transformations on the configuration file to get it “server ready.”  In the next section, I show how you can apply transformations to make Web.config files targeted at each environment.

Deployment transformations

As you might have noticed in reviewing the messages from the output window when publishing your application to a web deploy package, Visual Studio automatically applies transformations to the connection strings so as to allow specifying connection strings for each environment.  Let’s review the relevant messages:

  • Adding declared parameter ‘EntLibErrorLogging-Web.config Connection String’.
  • Adding declared parameter ‘Application-Web.config Connection String’.
  • For this sample script, you can change the deploy parameters by changing the following file: c:\temp\WebFormsApp.SetParameters.xml.

Without you having to do anything, Visual Studio applied transformations to the connection strings and created a SetParameters.xml file that you can use when performing the deployment with msdeploy.exe or the IIS console for Web Deploy.  Let’s take a look at this file:

<?xml version="1.0" encoding="utf-8"?>
<parameters>
  <setParameter name="IIS Web Application Name" value="test.example.com" />
  <setParameter name="EntLibErrorLogging-Web.config Connection String" value="EntLibLoggingConnString-TEST" />
  <setParameter name="Application-Web.config Connection String" value="ApplicationConnString-TEST" />
</parameters>

As you can see, this file consists of key-value pairs.  The name is the key for the transformation to be applied, and the value is the target value for deployment.  Think of this file as a template.  You can make as many copies of this file as you desire.  For example, you might make a UAT.SetParameters.xml and a PROD.SetParameters.xml file to deploy to the UAT and PROD environments.  You would then modify the value of each key-value pair so that it is appropriate for your target environment.

I know, I know.  This is the really exciting part about this technology.  The questions you are probably asking yourself now are: (1) how can I parameterize additional parts of the Web.config, and (2) how do I use these SetParameters.xml files in deployments (or how do I hand this off to the operations team to perform the deployments)?

Making Custom Parameters

To show how, I will continue to build on the sample we have used thus far.  First things first: you need to create a parameters.xml file and include it in the root of your web application project.  Here is what it should look like in the Solution Explorer.

[Screenshot: Solution Explorer showing the parameters.xml file at the root of the web application project]

This file is very simple in structure.  It is described at a basic level in this Microsoft How To.  More detail is provided by the IIS team (the product owner of the Web Deploy technology) in Web Deploy Parameterization.

In our example, here are parameters we want to introduce so that we can vary them per environment:

  • In the transformed configuration file, we inserted a machineKey section into configuration/system.web/ using a Web.Release.config XDT transformation.  We want to vary the validationKey and decryptionKey attributes in each environment so that all web servers in the cluster for a given environment use the same keys.
  • In the transformed configuration file, we replaced the ASP.NET session state service with the AppFabric Caching provider.  We need to specify a different cacheName and sharedId for each environment.
  • We need to specify a different fileName for the Enterprise Library Logging rolling flat file listener for each environment.
  • We need to be able to set up a different user name and password for SQL Server Reporting Services for each environment.
  • In the transformed configuration file, we inserted a placeholder for the AppFabric Cache client configuration in configuration/dataCacheClient/hosts.  We need to replace this placeholder with the actual host configuration.  The reason we use a placeholder is that some environments use only one cache host while others use up to three.  Using a placeholder allows Web Deploy to inject an entire XML string as a replacement, thus varying the number of <host> child nodes of <hosts>.

These are some of the needs we experienced as we planned our deployments.  Your needs will certainly vary and you will need to define your own custom parameters.  For illustration purposes, I will show how we specified parameters for the validation and decryption keys, how we set up the AppFabric cache name, shared ID, and cache hosts.

To do this we set up the parameters as follows:

<parameters>

  <parameter name="validationKey"
             description="Please provide the validationKey value."
             defaultValue="1A7A288183C4302270B248DA1FB4454AC3F287C9D759D4203843D3C3D5FA86B809E05EC353D8CD1F48B894A45BC5349109335C5A77163799ABBA8EB710BE68C6"
             tags="">
    <parameterEntry kind="XmlFile"
      scope="\\Package\\PackageTmp\\Web\.config$"
      match="/configuration/system.web/machineKey/@validationKey" />
  </parameter>

  <parameter name="decryptionKey"
             description="Please provide the decryptionKey value."
             defaultValue="81B5F289477B6DAC9589B31B5F3B33BF1F03F2F5CF4D611887A4E8AC6B6D32A0"
             tags="">
    <parameterEntry kind="XmlFile"
      scope="\\Package\\PackageTmp\\Web\.config$"
      match="/configuration/system.web/machineKey/@decryptionKey" />
  </parameter>

  <parameter name="Cache Name"
            description="Please provide the name of the AppFabric Cache."
            defaultValue="ApplicationSessionStore"
            tags="">
    <parameterEntry kind="XmlFile"
                    scope="\\web.config$"
                    match="/configuration/system.web/sessionState/providers/add[@name='AppFabricCacheSessionStoreProvider']/@cacheName" />
  </parameter>

  <parameter name="Cache Shared ID"
            description="Please provide the shared ID of the AppFabric Cache."
            defaultValue="AppSession"
            tags="">
    <parameterEntry kind="XmlFile"
                    scope="\\web.config$"
                    match="/configuration/system.web/sessionState/providers/add[@name='AppFabricCacheSessionStoreProvider']/@sharedId" />
  </parameter>

  
  <parameter name="ENV_DRIVEN_CONFIG_SET_BY_PARAM_TEXT_REPLACE"
             description="Please set the configuration of the AppFabric Cache client as an XML-escaped string."
             defaultValue="&lt;host name='localhost' cachePort='22233'/>"
             tags="">
    <parameterEntry kind="TextFile"
                    scope="\\Package\\PackageTmp\\Web\.config$"
                    match="ENV_DRIVEN_CONFIG_SET_BY_PARAM_TEXT_REPLACE" />
  </parameter>

</parameters>

When we publish the application, the c:\temp\WebFormsApp.SetParameters.xml file now contains our new custom parameter entries as key-value pairs.  Let’s open this file and make the modifications needed for the TEST environment we are going to deploy to.  With the modifications, the file’s content should look something like this (note that I obtained the validation and decryption keys from http://aspnetresources.com/tools/machineKey):

<?xml version="1.0" encoding="utf-8"?>
<parameters>
  <setParameter name="IIS Web Application Name" value="test.example.com" />
  <setParameter name="validationKey" value="23EB0A37C95089029B610813A8B63F55F3639F2306D8019089953681D92E50372401211A07F229B831D8895A559CFB755CECEACBEDD168CB0E3C4B60F61B4D84" />
  <setParameter name="decryptionKey" value="B890EEBAED18D018BD6A35435623BB3CEB91BF11B375C0814DDEFF6AB0B25052" />
  <setParameter name="Cache Name" value="ApplicationSessionStore-TEST" />
  <setParameter name="Cache Shared ID" value="AppSession-TEST" />
  <setParameter name="ENV_DRIVEN_CONFIG_SET_BY_PARAM_TEXT_REPLACE" value="&lt;host name='test.example.com' cachePort='22233'/&gt;" />
  <setParameter name="EntLibErrorLogging-Web.config Connection String" value="EntLibLoggingConnString-TEST" />
  <setParameter name="Application-Web.config Connection String" value="ApplicationConnString-TEST" />
</parameters>

To deploy the application package to a target location, you will need to have Microsoft Web Deploy installed on your target server.  You can get it here: Web Deploy 3.0.  Because we are going to deploy locally for the purposes of this article, make sure to include the Remote Agent Service in addition to the Web Deployment Framework when you install Web Deploy 3.0.

If you don’t have IIS Express installed on your machine yet, go ahead and download and install it (you can find it here: Internet Information Services (IIS) 7.5 Express).  Open the applicationhost.config file located in C:\Users\your-username\Documents\IIS Express\config and find the system.applicationHost/sites node.  Add a new site node as follows (choosing an ID that does not conflict with IDs already present in your config file):

<site name="test.example.com" id="6" serverAutoStart="true">
                <application path="/" applicationPool="Clr4IntegratedAppPool">
                    <virtualDirectory path="/" physicalPath="C:\temp\deployed\WebFormsApp" />
                </application>
                <application path="/WebFormsApp" applicationPool="Clr4IntegratedAppPool">
                    <virtualDirectory path="/" physicalPath="C:\temp\deployed\WebFormsApp\WebFormsApp" />
                </application>
                <bindings>
                    <binding protocol="http" bindingInformation="*:9876:localhost" />
                </bindings>
            </site>

Create the folder C:\Temp\deployed\WebFormsApp.  Open a command prompt and change directory to C:\Program Files\IIS Express.  Issue the command iisexpress /siteId:site-id where site-id is the site ID you set up in applicationhost.config.  This should start the web server:

[Screenshot: IIS Express command prompt output showing the site running]

Now that the web server is running, we are ready to deploy the package to it.  To do this, we are going to use the Visual Studio-generated *.deploy.cmd file.  In my case, the file is named WebFormsApp.deploy.cmd because my Visual Studio web application project is named WebFormsApp.  This generated file comes with a readme – I encourage you to take a look at it.  Also take a look at the command file itself, in particular the msdeploy.exe command.  You may modify this command file to customize it to your needs – for example, we introduced support for environments so that if we call the CMD file with PROD, it looks for PROD.SetParameters.xml.

For now, you might find it easier to start by using the Visual Studio-generated command batch script to see what it does with msdeploy.exe, and then go from there.  The heart of Microsoft Web Deploy is msdeploy.exe: when you call the CMD file, it builds and runs an msdeploy.exe command.  Let’s go ahead and try it.  I called the CMD file with /Y to indicate I want the deployment to take place and /L to indicate I am deploying to IIS Express locally.
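If you open the generated CMD file, the msdeploy.exe invocation it builds is roughly of the following shape – the paths below are the ones used in this article, and the exact set of arguments comes from the generated file itself, so treat this as an illustration rather than the literal command:

msdeploy.exe -verb:sync ^
  -source:package="c:\temp\WebFormsApp.zip" ^
  -dest:auto ^
  -setParamFile:"c:\temp\WebFormsApp.SetParameters.xml"

The -verb:sync argument synchronizes the destination with the package, -dest:auto lets the IIS Web Application Name parameter drive where the content lands, and -setParamFile points at the environment-specific SetParameters.xml file.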

[Screenshot: output from running WebFormsApp.deploy.cmd with /Y and /L]

We can now compare the Visual Studio web config file that was made for Release (located in [project root]\obj\Release\transformed\) and the one that was just deployed to C:\Temp\deployed\WebFormsApp.  Here are the differences, as designed.

[Screenshot: comparison between the Release-transformed Web.config and the deployed Web.config]

Conclusion

Microsoft has come a long way in terms of providing tools that allow application builders to plan for deployments to various environments and allow operations teams to deploy the packages they are provided.  What I showed you in this article is but a few of the things you can do.  You can also integrate your database deployment and extend Web Deploy to perform synchronizations not provided out of the box.

Happy deployments!

20 August 2011

Throwing SOAP Faults in a WCF service being mindful of non-WCF clients

Filed under: .NET, WCF, Web Services — Tags: , , , , , , — Philippe Truche @ 9:28

I was engaged recently to design and write a WCF service that is to be consumed by an ASP.NET 2.0 client.  While I was aware that with a WCF service and a WCF client, using a System.ServiceModel.FaultException<TDetail> was ideal for handling exception conditions, I wasn’t sure how this was going to be handled by .NET 2.0, since FaultException<TDetail> was introduced in .NET 3.0 – in .NET 2.0, SOAP faults are caught with a System.Web.Services.Protocols.SoapException.

So off to trial and error I went and here is what I proposed.

Firstly, it is important to write a WCF service that can be consumed by a WCF client as well as a non-WCF client.  To this end, throwing SOAP faults should be done in a manner that both WCF clients and non-WCF clients can understand.  My first step was to make sure I had a standard WCF service with a WCF client that could understand the SOAP faults.  I created a DataContract for the SOAP fault I wanted to throw when certain conditions were met:

///<summary>
/// Thrown when no matching MPI is found.
/// </summary>
[DataContract(Namespace = WcfConfiguration.XmlNamespace)]
public class MpiNotFoundFault
{
	private string _mpi;

	///<summary>
	/// Gets or sets the MPI.
	///</summary>
	[DataMember]
	public string Mpi
	{
		get { return _mpi; }
		set { _mpi = value; }
	}
}

I then specified my FaultContract on the service interface definition as follows:

[ServiceContract]
public interface IMyPinService 
{
	[OperationContract] 
	[FaultContract(typeof(MpiNotFoundFault), Name ="MpiNotFoundFault", Namespace = WcfConfiguration.XmlNamespace, Action = WcfConfiguration.XmlNamespace +"/MpiNotFoundFault")] 
	bool RequestPin(string mpi, DateTime dob);
}

So far, so good.  Now, let’s see how we might throw this SOAP fault in the service implementation.  For the WCF client to work, I threw the exception as follows:

public bool RequestPin(string mpi, DateTime dob) 
{
	// Code elided
	throw new FaultException<MpiNotFoundFault>(new MpiNotFoundFault() { Mpi = mpi });
}

Then, in the WCF client, all I have to do is this:

// With a WCF client, we can catch specific exceptions and get to each fault contract's
// contents by accessing the ex.Detail property.
catch (FaultException<MpiNotFoundFault> ex)
{
	Console.WriteLine("MPI {0} was not found.", ex.Detail.Mpi);
}

Yes, WCF does a lot of work for us.  However, when your client is not WCF, SOAP faults get a little bit more complicated.  Fortunately, there is a SOAP specification that can be referred to.  The SOAP 1.2 specification on SOAP faults is located here: http://www.w3.org/TR/soap12-part1/#soapfault.  Notice I am introducing you to SOAP 1.2, not SOAP 1.1.  Yet, in the .NET 2.0 framework, when you use wsdl.exe to create a proxy class, the SOAP 1.1 protocol is the default.  That’s somewhat of a problem because in SOAP 1.1, faults may only have a faultcode and a faultstring.  It wasn’t until SOAP 1.2 that hierarchical codes and subcodes were defined.  To see the difference between the two, see http://hadleynet.org/marc/whatsnew.html#S3.1.4.
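For contrast, here is roughly what a SOAP 1.1 fault looks like – a flat faultcode and faultstring with an optional detail element, and no place for a hierarchy of subcodes (the values shown are illustrative):

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <soap:Fault>
      <faultcode>soap:Client</faultcode>
      <faultstring>MPI '123123' not found.</faultstring>
      <detail />
    </soap:Fault>
  </soap:Body>
</soap:Envelope>

The SOAP 1.2 equivalent, with its nested Code/Subcode structure, is shown on the wire at the end of this post.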

Why is this important, you ask?  Well, if in my ASP.NET client (which does not know anything about WCF) all I have is a SoapException, then I’d like to use the code and subcode of the fault to communicate detail about the exception (of course, my service consumer is a trusted source and will filter the information appropriately before passing it on to its users, but that’s another topic entirely).

Here is what I want to do: in my ASP.NET client, I want to check the code and subcode to understand the nature of the problem.  Here is what my ASP.NET catch statement looks like:

// In WCF we can catch specific exceptions by using FaultException<T> where T is one of the fault contracts 
// included in the service operation. In contrast, ASP.NET web services only allow for catching SoapException. 
// To get to the detail, use the SOAP Fault Code and Subcode as shown below. 
catch (SoapException ex)
{
	// The SOAP fault code always contains the string "CredentialValidationRequestFailed"
	Debug.WriteLine(ex.Code.Name); // prints the string "CredentialValidationRequestFailed"
	// The SOAP fault subcode contains the actual SOAP fault name without the Fault suffix
	Debug.WriteLine(ex.SubCode.Code.Name); // prints the string "MpiNotFound"
}

By doing this, I can see that my call failed because of a CredentialValidationRequestFailed code.  This code could occur for any number of reasons.  In this example, it occurred because of an MpiNotFound subcode, but I have another 3 subcodes that could have been the cause of the fault.

So what do I need to do to achieve this?  Well, I need to use SOAP 1.2 for sure.  On the service side, the service would normally be configured with a basicHttpBinding.  By default, this binding uses SOAP 1.1 and that cannot be changed.  To use SOAP 1.2 for message encoding, I create a custom binding.  My configuration file now looks like this:

  <system.serviceModel>
    <bindings>
      <customBinding>
        <binding name="basicHttpSoap12Binding">
          <textMessageEncoding messageVersion="Soap12"/>
          <httpTransport/>
        </binding>
      </customBinding>
    </bindings>
    <services>
      <service name="MySoap12Service">
        <endpoint address="" binding="customBinding" bindingConfiguration="basicHttpSoap12Binding"
          bindingNamespace="MySoap12ServiceNamespace"
          contract="MySoap12Service">
        </endpoint>
      </service>
    </services>
  </system.serviceModel>

On the client side, I use wsdl.exe to generate the proxy class; instead of letting the default SOAP 1.1 protocol apply, I pass in the protocol switch to specify that the SOAP 1.2 protocol should be used, as follows: wsdl.exe /protocol:SOAP12.  This causes the generated proxy to specify the SOAP 1.2 protocol in its constructor:

public partial class MyService : System.Web.Services.Protocols.SoapHttpClientProtocol {
    // code elided
        
    public MyService() {
        this.SoapVersion = System.Web.Services.Protocols.SoapProtocolVersion.Soap12;
        // code elided
    }
    
    // code elided
}

Then, I need to throw my SOAP faults with a bit more information.  Here is the revised code snippet for throwing a SOAP fault that can now be understood by non-WCF clients:

throw new FaultException<MpiNotFoundFault>(new MpiNotFoundFault() { Mpi = mpi },
	String.Format(CultureInfo.InvariantCulture, "MPI '{0}' not found.", mpi),
	new FaultCode("CredentialValidationRequestFailed", new FaultCode("MpiNotFound", WcfConfiguration.XmlNamespace)));

That’s it.  I throw the FaultException<MpiNotFoundFault> exception, passing a new MpiNotFoundFault object into the constructor; then I pass in the reason as the “MPI ‘xyz’ not found” string; and then I create my code and subcode.  And voilà, I have happy WCF clients and happy non-WCF clients.

On the wire, the SOAP fault looks like this:

<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope">
  <s:Header />
  <s:Body>
    <s:Fault>
      <s:Code>
        <s:Value>s:CredentialValidationRequestFailed</s:Value>
        <s:Subcode>
          <s:Value xmlns:a="MyNamespace">a:MpiNotFound</s:Value>
        </s:Subcode>
      </s:Code>
      <s:Reason>
        <s:Text xml:lang="en-US">MPI '123123' not found.</s:Text>
      </s:Reason>
      <s:Detail>
        <MpiNotFoundFault xmlns="MyNamespace" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
          <Mpi>123123</Mpi>
        </MpiNotFoundFault>
      </s:Detail>
    </s:Fault>
  </s:Body>
</s:Envelope>

22 March 2011

Unit Testing 101 – Fundamentals of Unit Testing

Filed under: .NET, Testing — Tags: , , — Philippe Truche @ 8:40

This post is the first of my 3 posts on unit testing, as introduced in http://philippetruche.wordpress.com/2011/03/11/unit-testing-a-curriculum/.  Unit Testing 101 is targeted at audiences that have little to no familiarity with unit testing.  This might include junior developers and even managers in software development.

Table Of Contents

  • What? Who? Why?
  • Deciding what to test…
  • …and more importantly, when to write the tests.
  • Popular unit testing frameworks available to .NET
  • Unit test prototype
  • Writing your first unit tests

What Is Unit Testing? Who Writes The Tests? Why Do It?

The consensus is that Kent Beck introduced unit testing.  There are a number of definitions of unit testing – personally I like the definition from the book Pragmatic Unit Testing: “unit tests are performed to prove that a piece of code does what the developer thinks it should do.”  The key word is “developer.”  The developers get to write the unit test code.

This leads me to my next point.  Unit Testing isn’t a testing activity; in fact, RUP clearly identifies unit testing as belonging to the implementation discipline (see http://en.wikipedia.org/wiki/RUP).

So if this is an activity that is performed as the source code is being developed, then what are some of the benefits that developers get from writing unit tests?

  • They get living documentation of how the code works.
  • They get the ability to refactor code quickly and safely through automated regression tests.
  • It improves the low-level design of their classes.
  • It reduces risk. Writing tests helps drive the code quality up, thus reducing the number of bugs found much later in the software development lifecycle (SDLC).

In fact, unit tests are such a part of development that Michael Feathers defined legacy code as any code that does not have tests (see http://www.objectmentor.com/resources/articles/WorkingEffectivelyWithLegacyCode.pdf)

Deciding What To Test…

“What should I test?”  This is an interesting question I get often enough from developers and managers who are new to unit testing.  The question in itself is disconcerting because it reveals just how little is understood about unit testing and often stands as a euphemism for “What is the minimal amount of testing that I can get away with?”

I don’t have a real easy answer to this question.  There are no hard and fast rules that I can provide someone with.  Rather, I prefer to remind developers that the more they write unit tests, the more they improve their design abilities and the better the software they write.

Would I test a class without behaviors?  Probably not.  But what if I have an entity on which I add validation attributes (using the EntLib Validation Block or Data Annotations)?  I would definitely test the validations to ensure that the validators attached to properties are behaving as expected.
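As an illustration, here is a minimal sketch of such a validation test written with NUnit and Data Annotations; the Customer entity and its Required attribute are made up for the example:

using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using NUnit.Framework;

public class Customer
{
	// Hypothetical entity used only to illustrate testing a validation attribute.
	[Required]
	public string Name { get; set; }
}

[TestFixture]
public class CustomerValidationTests
{
	[Test]
	public void Validate_MissingName_ReturnsValidationError()
	{
		// The Name property is intentionally left null.
		var customer = new Customer();
		var results = new List<ValidationResult>();

		// Run the Data Annotations validators attached to the entity's properties.
		bool isValid = Validator.TryValidateObject(
			customer, new ValidationContext(customer, null, null), results, true);

		// The missing required value should be reported.
		Assert.IsFalse(isValid);
	}
}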

So my general rule is to write unit tests that focus on behaviors and where there is value in performing this activity.

…And More Importantly, When To Write The Tests.

This is definitely the more important question.

As a developer, you get the most value from creating the unit tests as you create the code.  In fact, rarely does it make sense to go back and write unit tests after the code has been written.  Instead, you should write your tests as you write the source code.  Do I sometimes write my test before the code?  Yes – when I have a good idea what I’d like the API to look like.  Sometimes I write the test after creating the initial source code.  Most often, I find that writing source code and unit tests is a bit of a chicken-and-egg problem – which one came first?

When dealing with bugs identified by testing teams or customers, you should definitely write tests before fixing the bugs.  The reason for this is twofold:

  • It helps reproduce the issue
  • It creates a test for a missing scenario

Popular Unit Testing Frameworks On The .NET Platform

I often get the question: “Which unit testing framework do you use?”  There are a number of frameworks specifically for .NET – most are open source and one is Microsoft’s very own.  They are all fairly similar, and I think which one you use comes down to personal preference.

The more important point is not to confuse the test runners and the testing frameworks.  For example, I can run MbUnit tests (part of Gallio) from within Microsoft Visual Studio and, vice versa, I can run Microsoft unit tests from the Gallio Icarus runner.

This being said, here is a list of unit testing frameworks.  It includes, but is not limited to:

  • NUnit
  • MbUnit (part of the Gallio bundle)
  • MSTest (Microsoft’s own unit testing framework, built into Visual Studio)

Qualities of Unit Tests

In preparing this course, I researched a few sites that would help establish a guideline as to what a good unit test should look like.  I ended up settling on a set of quality attributes that unit tests should have.  These quality attributes were presented by Peli De Halleux in an advanced unit testing session in Spain (see http://channel9.msdn.com/blogs/channel9spain/microsoft-pexmoles–advanced-unit-testing-aspects-13).  Of course, my own experience also shaped this list.

Unit tests should have the following quality attributes:

  • Atomic.
    • No dependencies between tests.  If test[i] must be run after test[j], you have an issue.  With some unit test frameworks (e.g. MbUnit), you can specify the order in which you want to run tests.  Still, it is something you should avoid.
    • Tests can be run in any order.
    • Tests can be run repeatedly.  I can’t tell you how many times I’ve seen tests that only work the second time and on subsequent runs.  That usually points to some initialization issues.  Fix them!
    • Tests can be run concurrently.  If you have thousands of tests and you are running a continuous integration process, this is rather important.  In general, when a test fails unless it is run on its own, there is some sort of shared state issue.  Fix it!
  • Trustworthy.
    • Tests should run every time on any machine.  “It works on my machine” is a team development killer.  The tests should run on any machine.  I have seen tests where a local resource from the user’s folder is used!  If doing a Get Latest on the solution does not get me all required dependencies, this is not going to work.  And if there are one or more failing tests at any point in time, how can the tests be trusted?  What if you make some changes to the source code and there were 50 failing tests before you started?  You make the change, run the tests, and there are still 50 failing tests (which tests failed?  are they still the same ones?).  The fact is, you don’t know.  But when all tests pass and you make a change that leads to one or more failing tests, you can evaluate the impact of your change and either fix the source or fix the tests.
    • Beware “integration” tests and separate them into their own projects with the word “integration” in them.  Unit tests have to run fast – tests that have dependencies on databases or are otherwise not self-contained should be separated out.  My personal preference is to put them into their own test projects.  I expect these tests will take longer.  At the same time, I expect that unit tests will run very fast.  This is really critical for continuous integration to be efficient.
  • Maintainable.
    • Test code is code also.  The same design principles we apply to source code should be applied to test code.  This is important because tests change over time to adjust for changing source code, so it is important that the tests be maintainable too.
    • Avoid repeating code.  You can use fixture setup and test setup methods to refactor otherwise repeating code.  You can also create helper classes to set up test data.
  • Readable.
    • This is really important.  I should be able to open any tests and understand them.  Otherwise, it is difficult to maintain the tests.
    • There are many ways to name the test methods.  I really like the suggestion I’d heard on a Podcast and explained here: http://osherove.com/blog/2005/4/3/naming-standards-for-unit-tests.html.  It follows this pattern: <Method_Being_Tested>_<Scenario>_<Expected Behavior>.  It works pretty well for me; I encourage you to try it.

A Unit Test Prototype

I have been testing for many years, and I found a pattern repeated over and over.  I did not have a name for it, but once I started working with RhinoMocks (a mocking framework – I will introduce this in the 201 or 301 course), I learned a name that I thought described well what I’d seen in my tests and other people’s tests.

The pattern is named the Arrange-Act-Assert (AAA) pattern.

  • Arrange.  Populate any objects and/or create any necessary Mocks or Stubs required by your test.
  • Act. Execute the code being tested.
  • Assert.  Verify that expectations were met or report failure.

Asserting is the critical step in any unit test.  If you are not making an assertion, then you are not really unit testing.  How many assertions should there be, you might ask?  Preferably one and only one.  On occasion I may have a couple of assertions if they are strongly related and it does not make sense to split the test into two tests.  The problem with having too many assertions in a test is that when the test fails, it can be difficult to understand right away what the problem is (remember the single responsibility principle?).
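To make the pattern concrete, here is a minimal sketch of a unit test laid out as Arrange-Act-Assert, written with NUnit; the Calculator class is made up for the example, and the test name follows the naming convention mentioned earlier:

using NUnit.Framework;

public class Calculator
{
	// Hypothetical class under test, used only to illustrate the AAA layout.
	public int Add(int x, int y)
	{
		return x + y;
	}
}

[TestFixture]
public class CalculatorTests
{
	[Test]
	public void Add_TwoPositiveNumbers_ReturnsTheirSum()
	{
		// Arrange: create the object under test and its inputs.
		var calculator = new Calculator();

		// Act: execute the code being tested.
		int result = calculator.Add(2, 3);

		// Assert: one assertion keeps the reason for a failure obvious.
		Assert.AreEqual(5, result);
	}
}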

Writing Your First Unit Tests

While I was teaching the class, I showed the students samples I’d put together in Visual Studio.  For the readers of this blog post, I encourage you to try the tutorials provided by the unit testing frameworks.  In particular, I find NUnit to be a good place to start because the tutorial is well written and easy to follow.  But that does not mean you can’t work through other tutorials.

Coming next, Unit Testing 201 and 301.

Until then, happy unit testing!

11 March 2011

Unit Testing – A Curriculum

Filed under: .NET, Testing — Tags: , , , , , — Philippe Truche @ 3:36

The subject of unit testing is one that comes up over and over in my professional experience. I have been asked many times to teach developers how to unit test. It seems like the words “unit testing” somehow got some level of fame and everyone wants to claim that they are doing it. But doing it well and doing it consistently seems elusive. Worse yet, there tends to be a dichotomy between development managers and developers about what unit testing really is and why we unit test in the first place.

When asked to put a webinar together, I decided to break it down into 3 separate classes. My first one is called Unit Testing 101 and covers the fundamentals of unit testing. Though it focuses primarily on concepts, it also shows a basic unit test to give a flavor of what one looks like. I then created the second course, Unit Testing 201 – Intermediate Unit Testing. In this course, I look at techniques for handling dependencies and introduce basic concepts of test doubles and mocking frameworks. Finally, I reserved the most advanced techniques for Unit Testing 301 – Advanced Unit Testing: compiler techniques in Visual Studio to access members that aren’t public, how to address dependencies without necessarily resorting to an IoC container, techniques for faking out the HttpContext, employing last-resort frameworks like Microsoft’s Moles to stub out System.DateTime, how to manipulate the configuration file, how to host WCF in process (I know, this is bordering on integration tests), and a basic overview of Microsoft’s Pex framework.

Yes, it sounds like there is a lot to know to write good unit tests. In fact, just like anything else – practice makes perfect.

In my next post, I will go through the contents of Unit Testing 101.

29 June 2010

How to sort projects alphabetically in Visual Studio

Filed under: Uncategorized — Philippe Truche @ 1:45

This is really handy, especially when there are many projects in Visual Studio.  From http://blog.catenalogic.com/post/2009/01/09/Sort-Visual-Studio-2008-Projects-alphabetically-inside-Solution-Folders.aspx, here is how you do it:

  1. Right-click on a project and select rename (or simply select a project, wait 1 second, and click it again, or press F2).
  2. Don’t change the name, simply select another project with your mouse.

This will cause Visual Studio to re-sort the projects.

I also highly recommend the PowerCommands for Visual Studio 2008 because it allows you to collapse all the projects in a solution with a single click.

15 June 2010

How to get FxCop to take into account SuppressMessage attributes in your code

Filed under: Uncategorized — Philippe Truche @ 9:07

Every so often, I forget that non-team editions of Visual Studio do not define the CODE_ANALYSIS constant by default.  If you are wondering why FxCop seems to be ignoring your SuppressMessage attributes in your source code, this is probably the reason why.  See FAQ: Why does FxCop ignore my in-code (SuppressMessageAttribute) suppressions? [David Kean] for more information on this topic.
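
As a quick illustration, consider a suppression like the one below (the class and the justification text are hypothetical; CA1031 is just an example rule).  SuppressMessageAttribute is itself marked [Conditional("CODE_ANALYSIS")], so unless CODE_ANALYSIS is listed under Project Properties > Build > Conditional compilation symbols, the attribute is stripped at compile time and FxCop never sees the suppression.

    using System.Diagnostics.CodeAnalysis;

    public class LegacyImporter
    {
        // If CODE_ANALYSIS is not defined for this build configuration, the attribute
        // below is omitted from the compiled assembly and FxCop still reports CA1031.
        [SuppressMessage("Microsoft.Design", "CA1031:DoNotCatchGeneralExceptionTypes",
            Justification = "One bad record must not fail the whole batch; errors are logged.")]
        public void Import()
        {
            try { /* read and import records */ }
            catch (System.Exception) { /* log and continue */ }
        }
    }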

16 February 2010

How to find which assembly is referencing a specific assembly

Filed under: Uncategorized — Philippe Truche @ 10:17

If you have ever worked with an application that has a large number of assemblies, it can be daunting at times to manage the entire set.  In particular, you will find that developers may reference external assemblies that perhaps they should not be referencing.  For instance, would you want to deploy an ASP.NET application where developers reference and use types in System.Windows.Forms?

Here is how you can find those pesky references in your build output directory.  From a command line in the path of your build’s binaries, type in the following command:

(FOR %i IN (*.dll) DO ILDASM /TEXT /PUBONLY /ITEM=Assembly %i | FINDSTR /L System.Windows.Forms) > Output.txt

Then, open Output.txt and search for “.assembly extern System.Windows.Forms”.  This will show you which assemblies reference System.Windows.Forms, if any.
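
If you prefer to stay in managed code, here is a small reflection-based sketch that performs the same check (the program name and argument handling are my own; it loads each assembly into a reflection-only context, so no code in the inspected assemblies runs, and native DLLs are simply skipped).

    using System;
    using System.IO;
    using System.Linq;
    using System.Reflection;

    class FindReferences
    {
        static void Main(string[] args)
        {
            string folder = args.Length > 0 ? args[0] : ".";
            string target = args.Length > 1 ? args[1] : "System.Windows.Forms";

            foreach (string file in Directory.GetFiles(folder, "*.dll"))
            {
                try
                {
                    // Reflection-only load: inspects metadata without executing any code.
                    Assembly assembly = Assembly.ReflectionOnlyLoadFrom(file);
                    if (assembly.GetReferencedAssemblies().Any(name => name.Name == target))
                    {
                        Console.WriteLine("{0} references {1}", Path.GetFileName(file), target);
                    }
                }
                catch (BadImageFormatException)
                {
                    // Not a managed assembly (e.g., a native DLL); skip it.
                }
            }
        }
    }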

19 November 2009

Day 3 – What’s New in WCF 4

Filed under: Uncategorized — Philippe Truche @ 2:02

Configuration is improved and is now similar to ASP.NET. 

  • Convention over configuration allows default endpoint configurations to be created by the framework.  Those default endpoints will be very helpful for simple scenarios where you want to get going quickly without the overhead of configuration (a minimal self-hosting sketch follows this list).
  • Bindings with an empty name act as defaults at the top of the config hierarchy.  Nice!
  • Behaviors now follow config inheritance rules, just like ASP.NET configuration.
  • Config-based activation.  SVC files are no longer required and can be replaced by config entries.  This is really cool if you have a lot of SVC files to manage.  As a result, you can replace 20 SVC files for example with a single config file.
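
To show the default-endpoint convention concretely, here is a minimal self-hosting sketch; the calculator contract and the base address are hypothetical.  With a base address but no endpoints defined in code or config, WCF 4 adds default endpoints when Open() is called (ServiceHost.AddDefaultEndpoints() does the same thing explicitly).

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface ICalculator
    {
        [OperationContract]
        int Add(int a, int b);
    }

    public class CalculatorService : ICalculator
    {
        public int Add(int a, int b) { return a + b; }
    }

    class Program
    {
        static void Main()
        {
            var host = new ServiceHost(typeof(CalculatorService),
                new Uri("http://localhost:8000/calculator"));

            // No AddServiceEndpoint calls and no <endpoint> entries in config:
            // WCF 4 creates a default endpoint per base address at Open() time.
            host.Open();

            Console.WriteLine("Endpoints created: {0}", host.Description.Endpoints.Count);
            Console.ReadLine();
            host.Close();
        }
    }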

Monitoring WCF Apps.

  • AppFabric is integral to monitoring.  Search for Windows AppFabric to get an overview of this new product from Microsoft.  A dashboard is integrated into the IIS 7 console.  This is really nice and makes monitoring much easier to visualize than watching WCF performance counters in perfmon.  It does not replace specialty tools like AVIcode Intercept Studio or BMC AppSight, but it is better than nothing.

Message Pump as a Service.

  • RoutingService is a new feature.  You host it like any other service.  It supports RequestReply, sessionful RequestReply, OneWay, sessionful OneWay, and sessionful Duplex.  You build a message filter table that gets evaluated at runtime.  The RoutingService then performs the actions specified by matching filters (a minimal hosting sketch follows this list).
  • The filter table can be replaced at runtime to respond to network changes for example.
  • Using a Routing Service enables scenarios like:
    • Protocol bridging.  Examples: net.tcp to basic HTTP; SOAP 1.1 to SOAP 1.2.
    • Security bridging.
    • Alternate endpoints.  You can use this for failover routing.  This one got applause.  Very cool!
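
Config entries are the more common way to set up the router, but a short programmatic hosting sketch shows the moving parts.  The addresses below are hypothetical, and the single MatchAllMessageFilter simply forwards every message to one destination; a real filter table would have several entries.

    using System;
    using System.Collections.Generic;
    using System.ServiceModel;
    using System.ServiceModel.Description;
    using System.ServiceModel.Dispatcher;
    using System.ServiceModel.Routing;   // reference System.ServiceModel.Routing.dll

    class RouterHost
    {
        static void Main()
        {
            // Destination endpoint the router forwards to (address is hypothetical).
            var destination = new ServiceEndpoint(
                ContractDescription.GetContract(typeof(IRequestReplyRouter)),
                new BasicHttpBinding(),
                new EndpointAddress("http://localhost:8001/PrimaryService"));

            // The filter table: MatchAll routes every incoming message to the destination list.
            var config = new RoutingConfiguration();
            config.FilterTable.Add(new MatchAllMessageFilter(),
                new List<ServiceEndpoint> { destination });

            var host = new ServiceHost(typeof(RoutingService));
            host.AddServiceEndpoint(typeof(IRequestReplyRouter),
                new BasicHttpBinding(), "http://localhost:8000/Router");
            host.Description.Behaviors.Add(new RoutingBehavior(config));

            host.Open();
            Console.WriteLine("Router listening; press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }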

Discovery

  • Ad-hoc discovery.  Clients can multicast probe messages to discover services on the network.  Matching services send probe match messages back to the client.  The scale is limited by the transport being used (a minimal client sketch follows this list).
  • Managed discovery.  A discovery proxy receives unicast hello (announcement) messages from services.  Multicast probe messages from clients are intercepted by the discovery proxy; the proxy replies to the clients that sent the probes, at which point those clients switch to sending unicast messages to the proxy.
  • New classes to look for.  ContractDescription, DynamicEndpoint, ServiceDiscoveryBehavior, AnnouncementService, UdpAnnouncementEndpoint, FindCriteria, and EndpointDiscoveryMetadata.
  • Demo.  Client was able to respond to a service being taken down and re-discover where else it could go and start using a backup service that was brought online before taking down the primary service.  This was really cool!
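
To make the ad-hoc variant concrete, here is a minimal client-side probe sketch.  The ICalculator contract is hypothetical, and for a service to answer, its host would need a ServiceDiscoveryBehavior and a UdpDiscoveryEndpoint added; this sketch covers the client side only.

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Discovery;   // reference System.ServiceModel.Discovery.dll

    [ServiceContract]
    public interface ICalculator   // hypothetical contract to search for
    {
        [OperationContract]
        int Add(int a, int b);
    }

    class DiscoverEndpoints
    {
        static void Main()
        {
            // Ad-hoc discovery: the probe goes out as a UDP multicast on the local network.
            var discoveryClient = new DiscoveryClient(new UdpDiscoveryEndpoint());

            var criteria = new FindCriteria(typeof(ICalculator))
            {
                Duration = TimeSpan.FromSeconds(5),   // stop probing after 5 seconds
                MaxResults = 5
            };

            FindResponse response = discoveryClient.Find(criteria);
            discoveryClient.Close();

            foreach (EndpointDiscoveryMetadata endpointMetadata in response.Endpoints)
            {
                Console.WriteLine(endpointMetadata.Address);
            }
        }
    }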

It seems to me that managed discovery is the better model for enterprise discovery of services.  I can see the applicability of discovery in the projects I am currently working on; it could simplify the hub-and-spoke model we are currently using.

It is interesting to me that this functionality is similar to what BizTalk can do with the ESB Toolkit.  I can’t wait to see what models are going to emerge and what role BizTalk will play in a WCF 4 environment.

The demos on this drew a lot of applause from the audience.  :-)

18 November 2009

Day 2 at the PDC – Impressions from the keynote

Filed under: Uncategorized — Philippe Truche @ 3:07

Well, Microsoft never disappoints with the second-day keynote.  I was wondering why breakfasts had been dropped and the traditional Universal Studios party taken away.  This morning’s keynote gave the answer: all PDC attendees are receiving a brand new laptop, custom-configured for developers.  NICE!!!!

Now on to my favorite talk.  Scott Guthrie spoke and showed Silverlight 4.  I no longer have an excuse to delay getting up to speed on Silverlight.  As far as business applications are concerned, there is a concerted effort in Silverlight 4 to address their typical concerns.  It is now available as a beta, and the release is planned for the first half of 2010.  Silverlight 4, here I come!

28 October 2008

PDC08 – Day 1 Recap

Filed under: Uncategorized — Philippe Truche @ 3:37

Quite a few sessions were offered throughout the day.  I picked a few based on my interests, and wanted to share my take on some of my favorite sessions today (Monday 27 October 2008).

  • Scott Hanselman’s session (TL49 Microsoft .NET Framework: Overview and Applications for Babies).  This session was based on a set of demos centered around the BabySmash application; it ties into the current food court offerings of the .NET framework, and also included some elements of upcoming .NET 4.0 features.  OK, so what was so likeable about Scott’s delivery?  Well, he is used to speaking to audiences; after all, he hosts the Hanselminutes podcast.  The other thing I really liked: he comes from the point of view that he is a developer who knows C#, but he is not an expert on Silverlight 2, the Microsoft Surface, or WPF for that matter.  And it is with this premise that he makes a convincing point that these technologies are not that hard to pick up.  Granted, he had some “insider” help.  Still, I could not help but think that he was rather convincing and effectively acting as an evangelist.  Great demos tying in a number of technologies, from WPF to Silverlight, and yes, even the Surface with its touch capabilities.  Thanks a bunch, Scott.
  • Phil Haack on the ASP.NET MVC framework, with a segment on StackOverflow.com by Jeff Atwood (PC21 ASP.NET MVC: A New Framework for Building Web Applications).  This was a really good session too.  I have to admit I had not gotten a chance to look into ASP.NET MVC much, and this session filled my knowledge gap in no time.  I came to appreciate the ASP.NET MVC framework as another food vendor in the cafeteria; it does not intend to replace the Web Forms model, it only intends to provide an alternative.  Effectively, this is an additional tool in the toolbox.  Use the Phillips head when you need a Phillips head.
    • Here is what I liked about it:
      • The developer has to know how HTTP works.  Not a bad thing in my book.  I have seen too many developers make mistakes in web forms because of their lack of understanding of HTTP.
      • It is naturally “search engine optimized” because of its alignment with REST principles.
      • Developers have to embrace HTML and in fact have complete control over the HTML being emitted.  I think Phil Haack’s analogy about transmissions is not bad at all.  Think of web forms as an automatic transmission, and think of ASP.NET MVC as a manual transmission.  Not a bad analogy indeed.
    • Here is where I think it falls short today:
      • Lack of support for bi-directional data binding.  In my experience, developers spend too much time pushing data into the UI and coding the events for the changes made to the data by the user.  Better bi-directional data binding is needed.  So it’s not there in ASP.NET MVC, but it is there in ASP.NET 2.0 and is also there in Spring.net.  As far as I understood during the session, however, bi-directional data binding is planned in future versions of the ASP.NET MVC framework.

Well, that’s it for day 1.
