Azure CDN – Cache busting CSS image references and minification

In my previous post I discussed my AzureCdn.Me NuGet package, which can be used to add a cache-busting query string to your CSS file references. In this post we look at cache busting the image references contained within your CSS files, and minifying the result.

Cache Busters

Inexplicably, the Azure CDN doesn’t ship with a big red reset button that lets you clear the CDN’s cache. This means that if you upload a new version of an image or file that’s currently cached, it may take hours or even days before the CDN refreshes its copy.

As I outlined in my previous post, the easiest way round this is to add a query string to the resource reference that differs on each deploy. The CDN then sees the resource as changed and pulls down a fresh copy.
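To illustrate the idea (the CDN hostname and version number here are made up for the example):

```html
<!-- The cached copy keeps being served, even after you deploy a new site.css -->
<link href="http://az1234.vo.msecnd.net/content/site.css" rel="stylesheet" />

<!-- Changing the query string on each deploy forces the CDN to refetch -->
<link href="http://az1234.vo.msecnd.net/content/site.css?v=1.0.0.1" rel="stylesheet" />
```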

All well and good, but as often as not your CSS files will themselves contain image references, e.g.:

.lolcat1
{
    background-image: url("./images/lolcat1.jpg");
    background-size : 100% 100%;
}

Now if you modify that image it won’t be updated on the CDN, which is bad news.

Introducing AzureCdn.Me.Nant

Any regular reader of my blog will know I’m a big fan of Nant, so I thought I’d write a build task to append a cache-busting query string onto the end of each image reference. I spent a bit of time investigating whether anyone else had addressed this problem; to my surprise, I couldn’t find anything.

In the course of that investigation I took a look at YUI Compressor, an open source tool that minifies your CSS and JavaScript. I downloaded the code and realised I could enhance it to add a cache buster prior to doing the minification. Anyone interested can check out my fork on GitHub here.

Usage

I packaged up my changes as a NuGet package, AzureCdn.Me.Nant, which you can install into your build project.

Step 1 – You need to ensure all image references within your css are quoted, eg:

  • good – url("image.png")
  • good – url('image.png')
  • bad – url(image.png) – no quotes

If image refs aren’t quoted, the YUI Compressor code won’t pick them up.

Step 2 – Add a task similar to this into your build file, change the params to suit:

<loadtasks assembly=".\lib\Yahoo.Yui.Compressor.Build.Nant.dll" verbose="true" />
<target name="CompressCss">
    <echo message="Compressing files"/>
    <cssCompressor
        deleteSourceFiles="false"
        outputFile="${buildspace.src.dir}/AzureCdnMe.Sample.Web/cdn/content/minified.css"
        compressionType="Standard"
        loggingType="Info"
        preserveComments="false"
        lineBreakPosition="-1"
        cdnQueryString="1.01"
    >
    <sourceFiles>
        <include name="..\AzureCdnMe.Sample.Web\cdn\content\azurecdnme.css" />
        <include name="..\AzureCdnMe.Sample.Web\cdn\content\bootstrap.css" />
        <include name="..\AzureCdnMe.Sample.Web\cdn\content\bootstrap-responsive.css" />
     </sourceFiles>
     </cssCompressor>
</target>

The new property I’ve added is cdnQueryString. When the task runs it will both minify your CSS and cache bust your image references, by appending the supplied version number as a query string.
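For example, with cdnQueryString set to 1.01, the lolcat rule from earlier should come out of the compressor looking something like this (I’m assuming a v= style key here; check your own build output for the exact format):

```css
.lolcat1{background-image:url("./images/lolcat1.jpg?v=1.01");background-size:100% 100%}
```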

Referencing your code

Once you’ve minified the CSS you need to ensure your solution uses the new minified version.  If you install my AzureCdn.Me package you can find a helper method that will allow you to determine if you are in debug mode or release, eg:

if (Html.IsInDebugMode())
{
    <link href="@Url.AzureCdnContent("~/Content/bootstrap.css")" rel="stylesheet"  />
    <link href="@Url.AzureCdnContent("~/Content/bootstrap-responsive.css")" rel="stylesheet"  />
    <link href="@Url.AzureCdnContent("~/Content/azurecdnme.css")" rel="stylesheet"  />
}
else
{
    <link href="@Url.AzureCdnContent("~/Content/minified.css")" rel="stylesheet" type="text/css" />
}

Show me the code

You can find a working sample of the above here, where it’s hopefully obvious what is going on.

Conclusions

If anyone is interested, it would be easy to enhance the YUI Compressor code with an MSBuild task that adds the cdnQueryString parameter. The cdnQueryString param also works in the same way if you want to minify any JavaScript.

Automating Visual Studio 2010 builds and deployments with Nant.Builder

Part 2 in my Visual Studio 2010 Turbo series

  1. Visual Studio 2010 Workflow
  2. Automating Your Builds with Nant.Builder
  3. DIY AppHarbor – Deploying Your Builds onto Windows Azure

In this post I look at using Nant and my Nant.Builder nuget package to quickly get your builds automated; from there it should be simple for you to integrate with a CI tool of your choice.

Update (27/07/12) – Anoop Shetty has put together an awesome post on using Nant.Builder here.  Thanks Anoop 🙂

Nant and Nant.Builder

I’ve been using Nant for years now, it’s a great tool for scripting and automating tedious build and deployment tasks.  Some might say it’s getting a bit long in the tooth, and it’s not as hip as rake etc.  However, I find it perfectly usable, with a learning curve that’s not too steep, it’s very well documented and it’s updated usually once or twice a year.

I’ve recently been doing more and more with Nuget, and I’m increasingly finding it a very powerful way of quickly setting up new projects.  One task that always takes a bit of time is setting up a build script for the new project.  Usually I’d cut and paste an existing script and hack out the bits that needed changed.  This was painful, and I wanted to get rid of this boring step, so Nant.Builder was born.

Installing and Integrating Nant

Hopefully you’ve followed part 1 of this enthralling series, so if you haven’t get Nant installed, download the latest stable build and extract it c:\dev\tools\nant-0.91.  If like me you’re trying to do more from the command line, add the bin directory into your Path environment var, ie C:\dev\tools\nant-0.91\bin

Open PowerShell and type nant; you should see something like this (don’t worry about the Failure message for now):

NAnt 0.91 (Build 0.91.4312.0; release; 22/10/2011)
Copyright (C) 2001-2011 Gerry Shaw
http://nant.sourceforge.net

Nant can also be launched from Visual Studio. Go to the Tools | External Tools menu option, click Add and complete as per the screenshot, ensuring you tick the Use Output window option. You can now launch Nant from VS.

Install Nant.Builder

If you followed the first installment of this series you should have your new solution in your workspace. Now let’s set up Nant.Builder:

  • Add a new empty project and name it <yoursolutionname>.Build, ensuring you save it in the src directory. We’ll use this project to hold our build scripts.

  • We don’t want the compiler to build this project so click Build | Configuration Manager.  Untick build on any configurations

  • Now we can install Nant.Builder from NuGet. Run the following command from the Package Manager console:
install-package nant.builder -projectname <yoursolutionname>.Build
  • You now have Nant.Builder installed into your .Build project 🙂

Configure Nant.Builder for your solution

I’ve tried to keep configuration to the bare minimum, as the whole point is to keep things fast.

  • Open the Nant.Build file.
  • Set the solution.name property to the name of your solution, in our example SampleSolution
  • If you’ve set up your workspace as described in the Workspace blog, you won’t need to edit solution.src.dir.  If you don’t save your projects in a source dir, and save them in the same directory as the .sln file, edit this property to blank, ie “”
  • Set the solution.projects property to a comma separated list (no spaces) of all the projects contained in your solution, in our example SampleSolution.Services,SampleSolution.Tests,SampleSolution.Web
  • Set the release.configuration property to the configuration you want the solution to be compiled under, default is Release
  • If you’re not using CI, you can manually set the version number.  Nant.Builder will then version all your dlls with the version number you specify.  If you are using CCNet, Nant.Builder will pick up the version number from CCNet
  • Set the company.name property to the name of your company, this will also be added to the Assembly.Info, so users can see who created the dll
  • So in our sample we have this:
<!--The name of your solution, please overwrite the default -->
<property name="solution.name" value="SampleSolution"/>

<!-- If your projects reside in a different directory from the .sln file specify it here, or leave empty if not -->
<property name="solution.src.dir" value="src" />

<!-- Comma separated list of projects contained in your solution -->
<property name="solution.projects" value="SampleSolution.Services,SampleSolution.Tests,SampleSolution.Web" />

<!-- Set the configuration for compilation, typically release, but may be custom -->
<property name="release.configuration" value="Release" />

<!-- Manually set version, if using CCNet this will be overwritten later -->
<property name="version.tag" value="1.0.0.1"/>
<property name="company.name" value="iainhunter.wordpress.com" />

If you’ve followed the first tutorial you shouldn’t need to change anything in GlobalBuildSettings.xml.  However, if you have a different workspace, buildspace, or have msbuild4 located in a non-standard location, set the values appropriately or you’ll get errors.

Running Nant

We can now run Nant from the command line: open PowerShell, navigate to your build directory, eg C:\dev\work\SampleSolution\src\SampleSolution.Build, then type nant. Your solution should build, or report any errors and warnings.

Alternatively, in Visual Studio open the Nant.Build file, then run the new Nant tool you created above from the Tools menu.

Now if you navigate to your builds directory, C:\dev\builds\SampleSolution, you should see your build, and if you look at one of the dlls you should see it has been versioned according to your instructions.

Next steps

Nant.Builder is available on GitHub here, so feel free to fork it or send me a patch if you think it can be improved. I’m planning to add a few enhancements, like an MSDeploy task etc; we’ll see how time allows.

Next time

We alter Nant.Builder to automatically deploy your solution onto Windows Azure

Visual Studio 2010 Turbo – Workflow

Recently I’ve been attempting to streamline my workflow, to help me Get Things Done 🙂  So I thought I’d share some of that work, in a mini-series of blog posts:

  1. Visual Studio Workflow
  2. Automating Your Builds with Nant.Builder
  3. DIY AppHarbor – Deploying Your Builds onto Windows Azure

Tool Up

I won’t spend much time expounding on the tools that I use, as excellent guides are but a Google search away, but I’d recommend installing and using the following:

  • Powershell – Yeah it’s a bit clunkier than bash, but it’s extremely powerful and lets you easily automate your day to day life
  • Console2 – A nice way of working on the command line and PowerShell – read Hanselman’s excellent guide
  • Git – As a looooong time SVN user and aficionado I was reluctant to move to Git, but now I have I have to confess I’m enjoying the experience
  • PoshGit – Work with Git in Powershell, read Haack’s excellent guide to getting it set up here and here.
  • Nant – My build tool of choice, I’ll talk about it a bit more in follow up posts.
  • Nuget – Jump start your projects with this excellent package manager, and then upload some packages of your own 🙂

Configure your workspace

I’d encourage all dev teams to configure their workspace in the same way.  That way you can jump onto a colleague’s machine and you’ll know where to go to find the code if pairing etc.  Also it makes it easy to share buildscripts around the team.  I set up my workspace as follows:

C:\dev 
    \builds 
    \releases 
    \tools 
    \work

So all work is done in the C:\dev dir.  We then create 4 subdirs:

  • builds – used solely by your build tool to copy files into for compilation, running unit tests etc
  • releases – used by your build tool to copy your solution in a format that can be easily released, ideally any subfolders here would be dated or versioned
  • tools –  contains any tools you use to help with developing code, ie Nant, NUnit etc
  • work – the main event, contains all your various solutions

So for example we might have:

C:\dev 
	\builds 
		\DemoApp1 
	\releases 
		\DemoApp1-Build-120508-0915 
		\DemoApp1-Build-120508-1115 
		\DemoApp2-Build-120507-1506 
	\tools 
		\Nant-0.91 
		\Nunit-2.6 
	\work 
		\DemoApp1 
		\DemoApp2

Creating a new empty solution

A number of years ago I read the excellent Code Leader, in which the author advised using a tool called TreeSurgeon to set up a new solution. The tool is slightly long in the tooth now and could do with being updated, but you can follow the guide below to quickly define a new solution in the same style:

  • Create a new blank solution in c:\dev\work and name it after your project  – eg SampleSolution

  • Your solution will open in Visual Studio and will contain only the solution file. Now you can add projects to your solution; you might want to add a Web project, a Services project and a Tests project.
  • So right-click on the solution file and select Add | New Project
  • As per .Net convention you should call the various projects <solutionname>.<projecttype>.  So we’ll add SampleSolution.Web, SampleSolution.Tests, SampleSolution.Services.
  • When you add the project ensure you set the location of the project to the /src folder within your solution

  • Right-click on the solution file again and select Enable Nuget Package Restore. Click Yes when you get the pop-up about wanting Nuget to manage package restores for you. A couple of nuget files will be added to the root of your solution.

We should now have a nice neat solution, with all projects saved in the src dir, eg:

Next Steps

Now, to test that NuGet is working as expected, let’s add a package; let’s add the awesome Twitter Bootstrap. Run the following command in the Package Manager console:

install-package TwitterBootstrap

Twitter Bootstrap will be installed, and if we now look at our solution dir we’ll see a new packages dir containing Bootstrap and its dependencies.

Now that we’re happy you can add your solution to the source code management tool of your choice, I’d recommend checking out Git.  BTW with nuget installed, you do NOT add the Packages directory to Source Control.

Next time

I’ll look at using Nant, to quickly build our solution.

Testing with Selenium Webdriver, Visual Studio and NUnit

Update 20/06/12 – Updated NareshScaler to work with the new IEDriverServer that ships with the latest version of Selenium Webdriver

We have a multi-tenant solution at work.  As we added tenants we were happy to discover that our app scaled nicely, but we were sad to discover that Naresh, our lone QA superstar, did not.  We discovered (unsurprisingly) it wasn’t possible to stick to a fixed release schedule and do regression/integration testing manually.  So some additional automation was required.

Integration Testing Automation Requirements

Our requirements were

  1. Naresh could create integration tests with minimum oversight from the team
  2. Tests needed to be created quickly without a lot of additional coding
  3. Ideally tools would run on Visual Studio

Ruling out Specflow

I’d been keen to investigate Specflow and the BDD style of integration testing.  However, it quickly became apparent that this would require a significant effort in time to successfully wire up the tests, and would require a reasonable amount of dev to ensure each test passed.  Thus failing requirements 1 and 2.

Ruling in Selenium Webdriver

We already had a bit of experience with Selenium, and we found this excellent blog post from Stephen Walther. Reading it we realised that Selenium Webdriver met our requirements perfectly. We could install Selenium IDE into Firefox and export the recorded scripts as C# Webdriver classes. We could then add the classes into a simple test-runner and we’d be able to scale Naresh 🙂

Some Selenium Pitfalls

However, it’s not all good news.  One thing to point out is that there can be a certain amount of flakiness when the various selenium drivers are trying to locate elements on your page.  Ie a test will fail once, then pass again later.  This is obviously far from ideal, but overall I think the benefits out-weigh the drawbacks.   Test that fail consistently can be investigated.

One way to minimise these failures would be to run the tests in only one browser, the Firefox driver seems the most reliable.  If you’re not doing loads of Javascript this is probably safe enough.

Introducing NareshScaler

I wrote NareshScaler to allow Naresh to quickly add each Selenium macro into the test-runner.  I’d also been wanting to try out creating a nuget package for a while, so this seemed like a perfect chance to give it a go.

You can install NareshScaler into your Integration Test project using the nuget package manager.  Once successfully installed, you should add your Selenium Webdriver class file(s).

Then simply mark each class as inheriting from NareshScalerTest.

You will now have to override the RunSeleniumTests method.  You simply need to wire up the driver and list any test methods in your selenium Webdriver class, ie:

[TestFixture]
public class NugetOrgTest : NareshScalerTest
{
    private IWebDriver driver;
    private string baseURL;

    [SetUp]
    public void SetupTest()
    {
        baseURL = "http://nuget.org/";
    }

    public void Test_That_NareshScaler_Exists_On_NugetOrg()
    {
        driver.Navigate().GoToUrl(baseURL + "/");
        driver.FindElement(By.Id("searchBoxInput")).Clear();
        driver.FindElement(By.Id("searchBoxInput")).SendKeys("NareshScaler");
        driver.FindElement(By.Id("searchBoxSubmit")).Click();
        driver.FindElement(By.LinkText("Naresh Scaler")).Click();
    }

    public override void RunSeleniumTests(IWebDriver webDriver)
    {
        driver = webDriver;
        Test_That_NareshScaler_Exists_On_NugetOrg();
    }
}

We use NUnit here, so once everything is wired up you should be able to point NUnit at your integration tests dll and see all your tests running in IE, Firefox and Chrome, which I have to admit is pretty cool when you see everything running automatically.
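From the command line that would look something like this (the dll path is illustrative; adjust to your project):

```
nunit-console.exe src\MyApp.IntegrationTests\bin\Release\MyApp.IntegrationTests.dll
```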

Additionally, NareshScaler includes a Nant build file to allow you to wire your integration tests into CruiseControl etc. I’ve added a sample so you can hopefully see how it works. Hope you find it useful.

The Mythical Version 1.0

As a breed us hackers are perfectionists.  Tinkering away at that algorithm, worrying about the size of that switch statement, wondering about abstracting away some detail.  But always, always with the aim of improving our code base.

Many of our number are also a bunch of nit-picking, passive-aggressive, show-boating arseholes.  Although these traits are kind of endearing once you realise that optimus1337, who is currently comparing you to Hitler, is probably 19, his Mum thinks he’s a wonderful lad, and he helps his Gran with her shopping at the weekends.

However, there is an unfortunate consequence of these two character traits: it can make it very intimidating to put out your opinion or share some code with your peers. We’ll hoard code, or practice at home, but not want to put something out there because it’s not perfect, or we won’t contribute to a project for fear that we’ll be shouted down, or that what we produce won’t meet some sort of arbitrary ultra-geek standard.

This attitude can be seen in the insanely conservative version numbers we give any code that we are brave enough to put out into the wide world, ie MyProject – v0.0.001. For example, I’m a massive fan of the Nant project and have been using it to build my solutions for the last 4 years. In that time the project has gone from version 0.86Beta1 to the recently released v0.91. In the entire 4 years I’ve been using it, it’s been as solid as a rock, and I haven’t had one issue with it, ever!

There’s no such thing as done

All developers implicitly understand that no project is ever finished, or any piece of code ever completely bug free, or that couldn’t be refactored.  Which makes a “done” project as rare as the legendary unicorn.

A few years back, projects started versioning themselves after the year/month they were deployed, ie Ubuntu 12.04, Office 2010 etc. I was very cynical, thinking this was just a marketing ploy to make us download/purchase the latest version.

However, I’ve lately realised that this versioning scheme has the benefit of indicating that this software is just that year’s version, or that month’s version.  It doesn’t say this software has reached mythical v1 status, it just says this is the stuff we think is good enough to release now.  The marketing aspect is just a fringe benefit 🙂

Conclusion

So don’t worry about joining the melting pot – jump right in.  Release version 12.3.09 of that idea you’ve been working on.  You can still conform to semantic versioning, and tell the likes of optimus1337 “Dude, relax. The code’s not done, it just the stuff I wanted to share, and BTW that’s not how you spell Goebbels ;-)”

Agile Manifesto – The First Amendment

Riddle me this:

Q – What’s the #1 difference that an agile approach gives you versus old-skool software development methodologies (hint clue’s in the name)

A – Quickly or “agilely” getting working code into your end-users hands

It’s this answer that an awful lot of people miss, and it’s why I propose the first amendment to the Agile Manifesto:

“Any agile process should strive to get working software into the hands of end-users as quickly as possible.”

Let me now explain my reasoning:

There is an awful lot of guff talked about agile, for example, I’ve held my head in my hands after reading some recent blog posts discussing Cynefin and complexity thinking.

This sort of pontificating rubbish is what gives the consulting industry a bad name.  The main elements of developing software are the same as they’ve ever been from the Waterfall method on down – Requirement gathering, Design, Implementation, Verification (Testing) and Maintenance.  Agile (in any of its forms) isn’t a silver bullet and doesn’t magic away any of these elements, it’s just a different way of mixing these basic ingredients to build the same cake.

To stretch the baking metaphor – proponents of agile will tell you that with a Waterfall methodology you might end up with a Dundee cake, when the users really needed an Eccles Cake.  This is exactly right, but the fundamental point is you’re only going to know the users wanted an Eccles Cake if you show them the first slice of Dundee cake, and they say “I’m sorry but you have misunderstood my requirement”.

So what? The point is that only the end-user can know if the product works for them.  However, often the end-user never sees the product until the end of a 12 month development phase, much like the Waterfall method.  What gives?

This problem is common in web development. You win a nice fat contract to build out a site for your customer (let’s call them AcmeCorp). The dev team apply their shiny agile methodology and ship working code to AcmeCorp every few weeks. AcmeCorp looks at it and supplies comments etc; you ask for clarifications, get more requirements and so on and so forth.

12 months later AcmeCorp launches their shiny new site, but it doesn’t get the traffic they expected, or users don’t like the features, or it doesn’t appeal to the target demographic.  Spot the problem?

You might as well have managed the project Waterfall-style: you did 12 sprints/iterations/leans/kanbans/cynefines, but from the end-user point of view it was a big-bang delivery. You’ve been doing CAT (Customer Acceptance Testing) when you should have been doing UAT. Arguably there may have been some benefit in getting feedback from your customer, but if you are not getting feedback from end-users as early as possible in the project, you’re asking for major trouble.

Conclusion

Agile teams must strive to get feedback from the end-users, not just the customer, as early in the project as possible, as the feature you and the customer are convinced is the killer app may well be something your end-users don’t care about, or don’t understand how to use effectively.

UAT is the name of the game, and Agile teams should impress upon their customers the importance of putting the product under the noses of the real end-users as early as is feasible.

Notes on MVCConf 2

MvcConf is one of the new breed of “virtual” conferences where lazy geeks, myself proudly among them, get to take in a number of presentations from the comfort of their office chair. The conference is primarily aimed at .NET web developers working with the ASP.NET MVC framework; however, some topics would be of interest to web developers working on other platforms.

The second MvcConf was broadcast on Feb 8th, and I tuned in to see the great and good present on a number of interesting topics.  Here follows my initial notes.  Hopefully I’ll do a follow up post on some of the other presentations.

You can watch the videos of the presentations here

Scott Guthrie – Keynote

The MVC team will continue to work to an annual release cycle, so expect MVC4 circa March 2012.

  • The aim for the platform is to be evolutionary, and to not break any work from earlier versions of the framework.
  • The MVC team are concentrating on HTML5, Javascript, Mobile, Async and Cloud technologies, among others.
  • There will be more focus on clientside Javascript work
    • More JQuery integration
    • JQuery Datagrid, sponsored by Microsoft
  • There will be more training resources at http://asp.net/mvc
  • Entity Framework – Code First will become Microsoft’s default approach to ORM
  • More work on resource management, built-in minification, jQuery grid, jQuery templates
  • Visual Studio SP1 will have HTML5 intellisense improvements and JS tooling improvements

Phil Haack – The NuGet-y Goodness of Delivering Packages

  • Excellent intro into NuGet package management, very similar to the Ruby Gems concept
  • I thought the interesting point was not just about pulling packages into your own solution, but that it is reasonably trivial to turn your own solution into a NuGet package.
  • It would also be possible to deliver your solution to a customer as a NuGet package that they could install, and possibly integrate into their existing solution.
  • If it was an open source solution you could add it to NuGet.org, so every other NuGet user would have access to it.
  • One downside is that the current version doesn’t allow you to specify where in the solution the packages will be installed to – not an issue for new projects, but annoying for existing projects.  This will be addressed in a future release.

Eric Sowell – Evolving Practices in Using jQuery and Ajax in ASP.NET MVC Applications

  • Very good and entertaining speaker
  • Eric gave a good introduction to using Jquery and Ajax in a modular and reusable fashion
  • Was interesting to see him refactor and debug Javascript on the fly
  • His advice was to create a separate API area for Ajax/Json work, which, funnily enough, is the conclusion we came to about 2 weeks ago, for the same reasons he outlines.
  • He also covers the “Returning Partial Views as Json” conundrum that we’ve been  struggling with.  Instinctively I prefer his solution of returning raw Json objects, but as Eric himself says there can be good reasons for returning partial views to JS.
  • Finally he demonstrates how to make JS functions more re-usable across your app, which seemed very useful to me (as an admitted JS dabbler)

Vaidy Gopalakrishnan – IIS Express

  • Introduction to IISExpress which is going to ship with Visual Studio 2010 SP1.
  • Basically once we have IISExpress I’d say every developer in the land is going to switch to using it over the inbuilt Cassini.  It allows for a more accurate test, and is infinitely more flexible and powerful than Cassini
  • Vaidy showed how to configure IISExpress to enable external traffic and SSL certificates to be tested on your local machine (firewall allowing)

Steven Smith – Improving ASP.NET MVC Application Performance

  • Nice presentation on the generalities of performance testing, followed by a deep dive into the specifics of using Visual Studio Load Testing, with tips and techniques for improving your app’s performance.
  • Number one impressive thing was the sheer power of the Visual Studio Load Testing tooling, very comprehensive.
  • You should define your performance metrics in terms of
    • Required page execution time
    • Requests per second
    • Time to last byte (TTLB)
  • With these defined you know when performance is good enough and you’re not needlessly tweaking when you could be adding features
  • The Isis tool on Codeplex was mentioned as a tool that can help diagnose some perf problems: http://isis.codeplex.com/
  • Output caching offers the biggest bang for buck in terms of performance, up to 300% or more, but it can be difficult to integrate with pages that are doing writes or have user-specific content on them, such as shopping carts etc
  • He then outlined how you can do data caching for these situations; it’s more complex but can offer good perf increases. His blog at http://stevesmithblog.com/ shows some examples.
  • Finished with a bunch of tips specific to MVC3 about increasing performance.
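To make the output-caching tip above concrete: in MVC3 it’s just an attribute on a controller action. A minimal sketch (the duration is chosen arbitrarily for illustration):

```csharp
using System.Web.Mvc;

public class HomeController : Controller
{
    // Cache the rendered HTML for 60 seconds for all users.
    // Avoid this on pages with user-specific content (carts, account info).
    [OutputCache(Duration = 60, VaryByParam = "none")]
    public ActionResult Index()
    {
        return View();
    }
}
```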

John Sheehan – Intro to Building Twilio Apps with ASP.NET MVC

  • Really just dipped into this, but it was an extremely cool demo showing a Twilio and ASP.NET MVC3 application being used to build a text-to-speech app, send SMSs or do order tracking over the phone.
  • Having watched this I would definitely use Twilio to do any work with SMS or telecoms

Troels Thomsen – Deploy ASP.NET MVC with No Effort

  • Again, I just dipped into this; Troels demonstrated hosting applications on AppHarbor
  • AppHarbor works with Git, so you can push your app onto AppHarbor and it can build and run your tests and if everything passes can deploy your site.
  • The demo was a bit wobbly, so it’s definitely not production-ready, but it’s a cool idea and might be worth investigating for dev and test platforms.

SEO Friendly Strings, with a simple ToSeo() extension

In web development you’ll spend a lot of time converting blog titles, image titles and page titles into an SEO-friendly format, ie:

iains-blogs-rock-my-world

In the past I would have written a static helper utility method to do this, eg

string convertMe = "Iain’s Blogs Rock My World";
string seoString = Utils.ConvertToSeo(convertMe);

However C# 3 gave us the awesome power of extension methods, which in the formal words of MSDN are:

Extension methods enable you to “add” methods to existing types without creating a new derived type, recompiling, or otherwise modifying the original type.

Or to put it another way, you can add methods to classes you didn’t write, be they classes in the framework or classes written by charlatans like my good self.

So we want to add a ToSeo() method onto the string object. To perform this black magic you must ensure the following is true:

  1. Your extension method must be static
  2. The first parameter specifies which type the method operates on, and the parameter must be preceded by the this keyword
  3. The extension method must live in a public static class

So in our case we have

using System.Text;

namespace HuzuSocial.Web.Helpers
{
    public static class Extensions
    {
        public static string ToSeo(this string str)
        {
            if (str == null)
                return null;
            else
            {
                // Remove any punctuation
                var sb = new StringBuilder();
                foreach (char c in str)
                {
                    if (!char.IsPunctuation(c))
                        sb.Append(c);
                }

                // Replace spaces with dashes and return
                return sb.ToString().ToLower().Replace(" ", "-");
            }
        }
    }
}

And now our new ToSeo() extension is available on all strings, as long as we have a reference to our extensions class.
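For example (assuming the Extensions class above is referenced; the apostrophe counts as punctuation and is stripped):

```csharp
using HuzuSocial.Web.Helpers;

var slug = "Iain's Blogs Rock My World".ToSeo();
// slug == "iains-blogs-rock-my-world"
```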

I think this is a definite improvement over the old utils class approach, and gives us a more flexible and natural way of adding functionality.