> Mike Valenty

Testing Unity Container Configuration


Before Inversion of Control, I would use configuration for a connection string, port number or other boring piece of information. Nowadays however, configuration can be a pretty hairy part of the application in its own right. Not necessarily the XML kind of configuration, just configuration. You know, the place where you use the "new" keyword and essentially break all the principles you worked so hard to protect in the rest of your application. Uncle Bob referred to this as "the other side of the wall" in a podcast with Scott Hanselman.

Son, we live in a world that has walls, and those walls have to be guarded by men with guns. Who’s gonna do it? You? You, Lieutenant Weinberg? I have a greater responsibility than you can possibly fathom… – Colonel Nathan R. Jessep

In my case, I wanted to inject an encryption provider to do Rijndael encryption with a specific vector and all that. Since I already had another IEncryptionProvider registered with the container, I named the new one.

Container.Configure<InjectedMembers>().ConfigureInjectionFor<RijndaelEncryptionProvider>(
    "MyEncryptionProvider",
    new InjectionConstructor(
        new ResolvedParameter<NoOpSaltStrategy>(),
        new RijndaelConfig { Hash = "SHA1", Vector = "notreal", Iterations = 2, Size = 256 }));

Container.Configure<InjectedMembers>().ConfigureInjectionFor<MyService>(
    new InjectionConstructor(
        new ResolvedParameter<IEncryptionProvider>("MyEncryptionProvider"),
        new ResolvedParameter<DataDropConfigSection>()));
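For what it's worth, later versions of Unity fold this style of configuration into `RegisterType`, so the same wiring would look roughly like this (a sketch; the type names come from the example above, and the exact API shape depends on your Unity version):

```csharp
// Roughly equivalent registration using the RegisterType API from later
// Unity releases. Type names are taken from the example above; treat this
// as a sketch rather than a drop-in replacement.
container.RegisterType<IEncryptionProvider, RijndaelEncryptionProvider>(
    "MyEncryptionProvider",
    new InjectionConstructor(
        new ResolvedParameter<NoOpSaltStrategy>(),
        new RijndaelConfig { Hash = "SHA1", Vector = "notreal", Iterations = 2, Size = 256 }));

container.RegisterType<MyService>(
    new InjectionConstructor(
        new ResolvedParameter<IEncryptionProvider>("MyEncryptionProvider"),
        new ResolvedParameter<DataDropConfigSection>()));
```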

What is a good way to make sure I’m wiring up Unity properly? I could just go black box and make sure my service can decrypt a string encrypted with a particular vector. That would be pretty BDDish and provide good insulation from volatile configuration code. In this application however, doing a legit integration test was a PITA for other reasons and I wanted some quick feedback on my Unity configuration.
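That black-box test might have looked something like this (a hypothetical sketch; `Encrypt` and `Decrypt` are illustrative method names, not necessarily the real interface):

```csharp
[Test]
public void Should_round_trip_a_string_through_the_configured_provider()
{
    // Hypothetical black-box check: resolve the fully wired service and
    // verify that a value encrypted with the configured vector decrypts
    // back to the original. Method names here are illustrative only.
    var service = container.Resolve<MyService>();

    var cipherText = service.Encrypt("secret");

    Assert.AreEqual("secret", service.Decrypt(cipherText));
}
```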

A coworker and I decided to use a container extension to watch all the dependencies that were resolved and then tell us if we got the right one. The unit test looked like this:

[TestFixture]
public class EncryptionProviderTests
{
    private IUnityContainer container;
    private WasResolvedContainerExtension wasResolvedContainerExtension;

    [SetUp]
    public void SetUp()
    {
        wasResolvedContainerExtension = new WasResolvedContainerExtension();

        container = new UnityContainer()
            .AddExtension(wasResolvedContainerExtension)
            .AddNewExtension<WebMvcContainerExtension>();
    }

    [TearDown]
    public void TearDown()
    {
        container.Dispose();
    }

    [Test]
    public void Should_inject_named_instance_of_encryption_provider()
    {
        var service = container.Resolve<MyService>();

        AssertNamedInstanceWasResolved<IEncryptionProvider>("MyEncryptionProvider");
    }

    private void AssertNamedInstanceWasResolved<T>(string name)
    {
        Assert.IsTrue(wasResolvedContainerExtension.WasResolved<T>(name));
    }
}

Granted it’s a little magical, but at least it reads well and a failure is pretty easy to track down from that point. This is what the container extension looks like:

public class WasResolvedContainerExtension : UnityContainerExtension
{
    private WasResolvedBuilderStrategy strategy;

    protected override void Initialize()
    {
        strategy = new WasResolvedBuilderStrategy();

        Context.Strategies.Add(strategy, UnityBuildStage.Creation);
    }

    public bool WasResolved<T>()
    {
        return WasResolved<T>(null);
    }

    public bool WasResolved<T>(string name)
    {
        return strategy.WasResolved<T>(name);
    }
}

The real work is done in the BuilderStrategy which looks like:

public class WasResolvedBuilderStrategy : BuilderStrategy
{
    private IList<NamedTypeBuildKey> buildKeys = new List<NamedTypeBuildKey>();

    public override void PreBuildUp(IBuilderContext context)
    {
        buildKeys.Add((NamedTypeBuildKey)context.BuildKey);
    }

    public bool WasResolved<T>()
    {
        return WasResolved<T>(null);
    }

    public bool WasResolved<T>(string name)
    {
        var buildKey = buildKeys.FirstOrDefault(k =>
            typeof(T).IsAssignableFrom(k.Type) && k.Name == name);

        return buildKey.Type != null;
    }
}

And there you have it, a pretty quick way to test Unity configuration.

Definition of Done


If you don’t know where you are going, you will wind up somewhere else. – Yogi Berra

When asked how a project is going, most programmers will offer one of two discrete responses. It's either "I just started looking at the code" or "I'm done, I just need to clean up a few things." Upon further investigation, I have found that done means the hard part has been figured out and there is usually an IDE output window or a funky test web page running on localhost that can demonstrate this status.

The problem is that the hard part is really the fun part, and the actual hard part is the "…I just need to clean up a few things." So, to remind me and the rest of the team what done means, we have the following definition posted prominently on the wall.

1) Unit Tested

This doesn’t need much of an explanation, but having a formal definition posted on the wall is a good reminder for a team new to unit testing.

Unit testing by itself is important, but the real boost comes from using a CI server. We use TeamCity for our .NET projects and phpUnderControl for PHP projects.

2) Acceptance Tested

For our team, acceptance testing means that we deploy our new code to a demo server and write Selenium tests for it. We export the Selenium tests as PHPUnit fixtures that CruiseControl will run whenever our svn repository is updated. Before the Selenium tests are run, we need to update our demo server, so we have CruiseControl call http://ourdemoserver.com/svn-update.php first.

3) Packaged For Deployment

For our .NET projects, we use a homegrown tool for packaging and deploying. It can stop/start IIS, register/unregister COM+ objects, and roll back across a farm. For PHP projects, we simply hand-roll a zip file and use some lightweight scripts to unpack the files on the server with rollback ability.

4) No Increased Technical Debt

I ask myself whether this code is going to be an asset that makes us stronger and able to respond more quickly to future business opportunities, or a fragile liability that I will need to carefully tip-toe around 30 seconds after it's deployed.

Just like a parent thinks their kid is the cutest kid ever, it can be hard to look at your own work when you’ve got your head wrapped around it and come to terms with the fact that you’re about to deploy some legacy code. I usually grab a coworker and walk through things while paying close attention to the only valid measurement of code quality.

Lunch-n-Learn Videos


These are the lunch-n-learn videos we’ve watched in the last few months (that I can remember). They are listed roughly in the order watched with the most recent ones at the top.

The Joys and Pains of a Long Lived Codebase – Jeremy Miller

Kona 3: Learning Behavior Driven Development (BDD) – Rob Conery

Best Practices in Javascript Library Design – John Resig

Facebook: Science and the Social Graph – Aditya Agarwal

Digg, An Infrastructure in Transition – Joe Stump

Ajax Performance – Douglas Crockford

High Performance Web Sites: 14 Rules for Faster Pages – Steve Souders

Agile Project Management: Lessons Learned at Google – Jeff Sutherland

10 Ways to Screw Up with Scrum and XP – Henrik Kniberg

The Principles of Agile Design by Bob Martin – Robert Martin

Introduction to Domain Specific Languages – Martin Fowler

10 Ways to Improve Your Code – Neal Ford

The Renaissance of Craftsmanship – Robert Martin

Martinizing Is Not Refactoring


A friend of mine, Keith, used the term martinizing (as in Uncle Bob) for the process of cleaning code. The term has taken on a very specific meaning and it’s worth a few words.

Martinizing is similar to refactoring in that it does not change the observable behavior of the code, but the goal is different. When I refactor, I am changing the design, usually in an effort to add a new feature in an open-closed manner.

When I martinize, I am telling a story. The most important story being the desired behavior, but also a story of the hard earned knowledge acquired along the way. If I spend hours distilling some business concept, I want to leave a trail for the next guy to understand that a simple property assignment or conditional statement isn't so simple. And of course the perfect way to punctuate that message is with a well written unit test.

Don’t Eat a Donut on the Way Home From the Gym


I must warn you, this post is about unit testing software, not donuts. With that said, I’ve been pondering the dilemma of how much time to spend writing unit tests and today I had a moment of clarity that I couldn’t help but share.

I was pairing with a coworker on Friday and we were both feeling a bit unproductive because we had spent so much time writing unit tests for a relatively small bit of code. In fact, we spent more time writing tests than we spent writing the code to make the tests pass. To make matters worse, we spent more time refactoring the unit tests than we did refactoring the code that made the tests pass. We were agonizing over the readability of the tests as if the tests were more important than the code, and that left me with an uneasy feeling over the weekend.

Today I found a bit of inner peace as I embraced the notion that the tests are more important than the code that makes them pass. Think about that for a moment. The challenge in writing software is not the implementation. Syntax, structures and algorithms are the easy parts. The real hard earned knowledge won through experience comes in the form of specifications: understanding the intricate complexities and desired behavior of your software in the wild.

Writing software can be like swimming in the ocean when a thick fog rolls in. The mechanics of swimming are learned easily, but the real challenge is knowing which direction to swim so that you get back to the shore before you run out of energy. After a day of programming, the real asset is not the implementation of your feature, it's the increased understanding of your problem domain. This understanding is captured in the form of executable requirements.

When there is a bug in your system, chances are it's a bug in your understanding of how your system should behave under a particular set of conditions. Burying an innocuous if statement in the middle of some method deep in the stack is a horrible way to reap the reward of a day spent spelunking through code. It's like eating a donut on the way home from the gym.

The legacy you leave is the unit test; it tells the story of the hard fought knowledge, and the readability of your test is more important than the readability of your implementation. Tell the next developer what's important through an executable specification that reads like one.