Monday Links: NDC Conference

This is another episode where I share the NDC Conference talks I watched and liked. This time, it's about JavaScript, history, and design.

How JavaScript Happened: A Short History of Programming Languages - Mark Rendle

This is a journey from FORTRAN to ALGOL to LISP to JavaScript. It explains why we still use if for conditionals, i in loops, and * for multiplication. Spoiler alert: it's because of FORTRAN.

Apache Kafka in 1 hour for C# Developers - Guilherme Ferreira

Clusters, Topics, Partitions, producers/consumers? This is a good first-time introduction to Kafka. The presenter uses kafkaflow and confluent-kafka-dotnet for the demo application.

Keynote: Why web tech is like this - Steve Sanderson

I found this one on r/programming (before the Reddit blackout). Informative! It feels like time traveling through the operating systems and tools used to create a Web page.

Pilot Critical Decision Making skills - Clifford Agius

The lesson from this one is to come up with a list of things that could go wrong, and then prepare and train for them. Follow the TDODAR approach: Time, Diagnosis, Options, Decision, Assign, and Review.

Intentional Code - Minimalism in a World of Dogmatic Design

I like the idea that “software really is literature.” Not in the sense of literate programming, but in the sense of a narrative to express ideas, where every line of code matters. I like the example of how a piece of code improves just by removing a few blank lines.

Another idea I liked is: “You don’t want everything to look the same.” We don’t want all applications to use Domain-Driven Design with Event Sourcing and microservices. Often, architectural patterns only add cognitive load and extra complexity.

The presenter suggests: “sitting and looking at it (at a piece of code) and working out how it makes you feel. And then when you feel something, try to understand why it feels that way.”

Voilà! Another Monday Links. What tech conferences do you follow? Do you also follow NDC Conference? What are your favorite presentations? Until next Monday Links.

In the meantime, don’t miss the previous Monday Links on Personal Moats, Unfair Advantage, and Quitting.

Happy coding!

TIL: How to pass a DataTable as a parameter with OrmLite

These days I use OrmLite a lot. Almost every single day. In one of my client’s projects, OrmLite is the de facto ORM. Today I needed to pass a list of identifiers as a DataTable to an OrmLite SqlExpression. I didn’t want to write plain old SQL queries and use the embedded Dapper methods inside OrmLite. This is what I found out after a long debugging session.

To pass a DataTable with a list of identifiers as a parameter to OrmLite methods, create a custom converter for the DataTable type. Then use ConvertToParam() to pass it as a parameter to methods that use raw SQL strings.

As an example, let’s find all directors from a list of movie Ids. I know a simple JOIN will get our backs covered here. But bear with me. Let’s imagine this is a more involved query.

1. Create two entities and a table type

These are the Movie and Director classes,

public class Movie
{
    [AutoIncrement]
    public int Id { get; set; }

    [StringLength(256)]
    public string Name { get; set; }

    [Reference]
    // ^^^^^
    public Director Director { get; set; }
}

public class Director
{
    [AutoIncrement]
    public int Id { get; set; }

    [References(typeof(Movie))]
    public int MovieId { get; set; }
    //         ^^^^^
    // OrmLite expects a foreign key back to the Movie table

    [StringLength(256)]
    public string FullName { get; set; }
}

In our database, let’s define the table type for our list of identifiers. Like this,

CREATE TYPE dbo.IntList AS TABLE(Id INT NULL);
A data table... Photo by Marvin Meyer on Unsplash

2. Pass a DataTable to a SqlExpression

Now, to the actual OrmLite part,

using NUnit.Framework;
using ServiceStack.DataAnnotations;
using ServiceStack.OrmLite;
using System;
using System.Data;
using System.Data.SqlClient;
using System.Threading.Tasks;

namespace PlayingWithOrmLiteAndDataTables;

public class DataTableAsParameterTest
{
    [Test]
    public async Task LookMaItWorks()
    {
        // 1. Register our custom converter
        OrmLiteConfig.DialectProvider = SqlServerDialect.Provider;
        OrmLiteConfig.DialectProvider.RegisterConverter<DataTable>(new SqlServerDataTableParameterConverter());
        //                                                          ^^^^^

        var connectionString = "...Any SQL Server connection string here...";
        var dbFactory = new OrmLiteConnectionFactory(connectionString);
        using var db = dbFactory.Open();

        // 2. Populate some movies
        var titanic = new Movie
        {
            Name = "Titanic",
            Director = new Director
            {
                FullName = "James Cameron"
            }
        };
        await db.SaveAsync(titanic, references: true);

        var privateRyan = new Movie
        {
            Name = "Saving Private Ryan",
            Director = new Director
            {
                FullName = "Steven Spielberg"
            }
        };
        await db.SaveAsync(privateRyan, references: true);

        var pulpFiction = new Movie
        {
            Name = "Pulp Fiction",
            Director = new Director
            {
                FullName = "Quentin Tarantino"
            }
        };
        await db.SaveAsync(pulpFiction, references: true);

        // 3. Populate a DataTable with some Ids
        var movieIds = new DataTable();
        movieIds.Columns.Add("Id", typeof(int));
        movieIds.Rows.Add(2);
        //              ^^^^^
        // This should be Saving Private Ryan's Id

        // 4. Write the SqlExpression
        // Imagine this is a more complex query. I know!
        var query = db.From<Director>();

        var tableParam = query.ConvertToParam(movieIds);
        //                     ^^^^^
        query = query.CustomJoin(@$"INNER JOIN {tableParam} ids ON Director.MovieId = ids.Id");
        //            ^^^^^
        // We're cheating here. We know the table name! I know.

        // 5. Enjoy!
        var spielberg = await db.SelectAsync(query);
        Assert.IsNotNull(spielberg);
        Assert.AreEqual(1, spielberg.Count);
    }
}

Notice we first registered our SqlServerDataTableParameterConverter. More on that later!

After populating some records, we wrote a query using OrmLite’s SqlExpression syntax and joined to our table parameter with the CustomJoin() method. Also, we needed to convert our DataTable into a parameter with the ConvertToParam() method before referencing it.

We cheated a bit. Our Director class has the same name as our table. If that weren’t the case, we could use the GetQuotedTableName() method, for example.
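As a minimal sketch, reusing query and tableParam from the test above, that alternative could look like this,

// Ask the dialect provider for the quoted table name
// instead of hard-coding "Director" in the raw SQL string
var tableName = db.GetDialectProvider()
    .GetQuotedTableName(ModelDefinition<Director>.Definition);
query = query.CustomJoin(@$"INNER JOIN {tableParam} ids ON {tableName}.MovieId = ids.Id");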

3. Write an OrmLite custom converter for DataTable

And this is our SqlServerDataTableParameterConverter,

// This converter only works when passing DataTable
// as a parameter to OrmLite methods. It doesn't work
// with OrmLite LoadSelectAsync method.
public class SqlServerDataTableParameterConverter : OrmLiteConverter
{
    public override string ColumnDefinition
        => throw new NotImplementedException("Only use to pass DataTable as parameter.");

    public override void InitDbParam(IDbDataParameter p, Type fieldType)
    {
        if (p is SqlParameter sqlParameter)
        {
            sqlParameter.SqlDbType = SqlDbType.Structured;
            sqlParameter.TypeName = "dbo.IntList";
            //                       ^^^^^ 
            // This should be our table type name
            // The same name as in the database
        }
    }
}

This converter only works when passing a DataTable as a parameter. That’s why its ColumnDefinition throws a NotImplementedException. I tested it with the SelectAsync() method. It doesn’t work with the LoadSelectAsync() method. That method doesn’t parameterize its internal queries, which would bloat our database’s plan cache. Take a look at the OrmLite LoadSelectAsync() source code on GitHub here and here to see what I mean.

To make this converter work with LoadSelectAsync(), we would need to implement the ToQuotedString() method and return the DataTable content as a comma-separated list of identifiers. Exercise left to the reader!
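As a rough, untested starting point for that exercise, we could override ToQuotedString() to inline the ids. This assumes the raw SQL uses them in an IN clause instead of a JOIN,

// An untested sketch: inline the DataTable content, e.g. "1,2,3",
// instead of binding a table-valued parameter
// (requires: using System.Linq;)
public override string ToQuotedString(Type fieldType, object value)
{
    if (value is not DataTable table)
        return base.ToQuotedString(fieldType, value);

    var ids = table.Rows.Cast<DataRow>().Select(row => row["Id"].ToString());
    return string.Join(",", ids);
}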

4. Write a convenient extension method

And, for compactness, let’s put that CustomJoin() call into a beautiful extension method that infers the table and column names to join on,

public static class SqlExpressionExtensions
{
    public static SqlExpression<T> JoinToDataTable<T>(this SqlExpression<T> self, Expression<Func<T, int>> expression, DataTable table)
    {
        var sourceDefinition = ModelDefinition<T>.Definition;

        var property = self.Visit(expression);
        var parameter = self.ConvertToParam(table);

        // Expected SQL: INNER JOIN @0 ON ("Parent"."EvaluatedExpression" = "@0"."Id")
        var onExpression = @$"ON ({self.SqlTable(sourceDefinition)}.{self.SqlColumn(property.ToString())} = ""{parameter}"".""Id"")";
        var customSql = $"INNER JOIN {parameter} {onExpression}";
        self.CustomJoin(customSql);

        return self;
    }
}

We can use it like,

// Before:
// var query = db.From<Director>();
// var tableParam = query.ConvertToParam(movieIds);
// query = query.CustomJoin(@$"INNER JOIN {tableParam} ids ON Director.MovieId = ids.Id");

// After: 
var query = db.From<Director>()
              .JoinToDataTable<Director>(d => d.MovieId, movieIds);

Voilà! That is what I learned (or hacked) today. These are the things we only find out by reading the source code of our libraries. Another thought: the thing with ORMs is that the moment we need to write complex queries, we stretch their features until they break. Often, we’re better off writing dynamic SQL queries. I know, I know! Nobody wants to write dynamic SQL queries by hand. Maybe ask ChatGPT?

If you want to read more about OrmLite and its features, check how to automatically insert and update audit fields with OrmLite and some lessons I learned after working with OrmLite.

Happy coding!

Monday Links: Personal Moats, Unfair Advantage, and Quitting

This is a career-only episode. These are five links I found interesting in the last month.

Build Personal Moats

From this post, the best career advice is to build a personal moat: “a set of unique and accumulating competitive advantages in the context of your career.” It continues describing good moats and how to find yours.

About personal moats:

  • “Ask others: What’s something that’s easy for me to do but hard for others?”
  • “Ideally you want this personal moat to help you build career capital in your sleep.”
  • “If you were magically given 10,000 hours to be amazing at something, what would it be? The more clarity you have on this response, the better off you’ll be.”

Read full article

Want an unfair advantage in your tech career? Consume content meant for other roles

This post is about building a competitive advantage by consuming content targeted at other roles. It’s a mechanism to create more empathy, gain understanding, and work better in cross-functional teams, among other things. It also suggests a list of roles we can start learning about.

Read full article

"Hey boss. I quit. Good luck" Photo by Boston Public Library on Unsplash

Career Advice No One Gave Me: Give a Lot of Notice When You Quit

This is gold! There are lots of posts on the Internet about interviewing, but few about quitting. This one is about how to quit while leaving doors open. It has concrete examples of how to “drop the bomb.”

Read full article

My 20-Year Career is Technical Debt or Deprecated

Reading this post, I realized I’ve jumped between companies always rewriting old applications. An old ASP.NET WebForms app to a Console App. (Don’t ask me why!) Another old ASP.NET WebForms app to an ASP.NET Web API project. An old Python scheduler to an ASP.NET Core project with HostedServices. History repeats itself, I guess. We’re writing the legacy applications of tomorrow.

Let’s embrace that, quoting the post, “Given enough time, all your code will get deleted.”

Read full article

What you give up when moving into engineering management

Being a Manager requires different skills than being an Individual Contributor. Often, people get promoted to the Management track (without any training) only because they’re good developers. Arrrgggg! I’ve seen managers who are only good developers…and projects at risk because of that. This post shares why it’s hard to make the switch and what we lose by moving to the Management track: focus time, for example.

Read full article

Voilà! Another Monday Links. Do you think you have a personal moat or an unfair advantage? What is it? What are your quitting experiences? Until next Monday Links.

In the meantime, don’t miss the previous Monday Links on Interviewing, Zombies, and Burnout.

Happy coding!

Let's refactor a test: Speed up a slow test suite

Do you have fast unit tests? This is how I sped up a slow test suite from one of my client’s projects by reducing the delay between retry attempts and initializing slow-to-build dependencies only once. There’s a lesson behind this refactoring session.

Make sure to have a fast test suite that every developer can run after every code change. The slower the tests, the less frequently they’re run.

I learned to gather some metrics before rushing to optimize anything. I learned that lesson while trying to optimize a slow room-searching feature. These are the tests and their execution times before any changes:

Slow tests

Of course, I blurred some names for obvious reasons. I focused on two projects: Api.Tests (3.3 min) and ReservationQueue.Tests (18.9 sec).

I had an even slower test project, Data.Tests. It contained integration tests using a real database. Those tests could probably benefit from simpler test values. But I didn’t want to tune stored procedures or queries.

This is what I found and did to speed up this test suite.

Step 1: Reduce delays between retries

Inside the Api.Tests, I found tests for services with a retry mechanism. And, inside the unit tests, I had to wait more than three seconds between every retry attempt. C’mon, these are unit tests! Nobody needs or wants to wait between retries here.

My first solution was to reduce the delay between retry attempts to zero.

Set retryWaitSeconds = 0

Some tests built retry policies manually and passed them to services. I only needed to pass 0 as a delay. Like this,

Making retryWaitSeconds = 0
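In code, the change was in this spirit. This is a made-up sketch with Polly-style policies, not the client’s actual code; SomeService stands in for the real service,

// Before: retryWaitSeconds = 3
var retryWaitSeconds = 0;
//                     ^^^^^
var retryPolicy = Policy.Handle<HttpRequestException>()
    .WaitAndRetryAsync(2, _ => TimeSpan.FromSeconds(retryWaitSeconds));
var service = new SomeService(retryPolicy);
//                ^^^^^
// A made-up service that receives a retry policy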

A simple Bash one-liner to find and replace a pattern got my back covered here.

Pass RetryOptions without delay

Some other tests used an EventHandler base class. After running a command handler wrapped in a database transaction, we needed to call other internal microservices. We used event handlers for that. This is the EventHandlerBase,

public abstract class EventHandlerBase<T> : IEventHandler<T>
{
    protected RetryOptions _retryOptions;

    protected EventHandlerBase()
    {
        _retryOptions = new RetryOptions();
        //              ^^^^^
        // By default, it has:
        // MaxRetries = 2
        // RetryDelayInSeconds = 3
    }

    public async Task ExecuteAsync(T eventArgs)
    {
        try
        {
            await BuildRetryPolicy().ExecuteAsync(async () => await HandleAsync(eventArgs));
        }
        catch (Exception ex)
        {
            // Sorry, something wrong happened...
            // Log things here like good citizens of the world...
        }
    }

    private AsyncPolicy BuildRetryPolicy()
    {
        return Policy.Handle<HttpRequestException>()
            .WaitAndRetryAsync(
                _retryOptions.MaxRetries,
                (retryAttempt) => TimeSpan.FromSeconds(Math.Pow(_retryOptions.RetryDelayInSeconds, retryAttempt)),
                //                ^^^^^
                (exception, timeSpan, retryCount, context) =>
                { 
                    // Log things here like good citizens of the world...
                });
    }

    public virtual void SetRetryOptions(RetryOptions retryOptions)
    //                  ^^^^^
    {
        _retryOptions = retryOptions;
    }

    protected abstract Task HandleAsync(T eventArgs);
}

Notice one thing: the EventHandlerBase didn’t receive a RetryOptions in its constructor. All event handlers had, by default, a 3-second delay. Even the ones inside unit tests. Arrrgggg! And the EventHandlerBase used an exponential backoff: with the default options, that’s a 3-second wait after the first attempt and a 9-second wait after the second. Arrrgggg! That explained why I had those slow tests.

The perfect solution would have been to make all child event handlers receive the right RetryOptions. But that would have required changing the production code and probably retesting some parts of the app.

Instead, I went through all the builder methods inside tests and passed a RetryOptions without delay. Like this,

Adding a RetryOptions
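Roughly like this, using the SetRetryOptions() method from the EventHandlerBase above. SomeEventHandler is a made-up name,

// Inside a hypothetical builder method in a test class
var handler = new SomeEventHandler(/* fakes for its dependencies */);
handler.SetRetryOptions(new RetryOptions
{
    MaxRetries = 2,
    RetryDelayInSeconds = 0
    //                    ^^^^^
    // No more waiting between retries inside unit tests
});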

After removing that delay between retries, the Api.Tests ran faster.

Step 2: Initialize AutoMapper only once

Inside the ReservationQueue.Tests, the other slow test project, I found some tests using AutoMapper. Oh, boy! AutoMapper! I have a love-and-hate relationship with AutoMapper. I shared my thoughts about it in a past Monday Links episode.

Some of the tests inside ReservationQueue.Tests looked like this,

[TestClass]
public class ACoolTestClass
{
    private class TestBuilder
    {
        public Mock<ISomeService> SomeService { get; set; } = new Mock<ISomeService>();

        private IMapper mapper = null;

        internal IMapper Mapper
        //               ^^^^^
        {
            get
            {
                if (mapper == null)
                {
                    var services = new ServiceCollection();
                    services.AddMapping();
                    //       ^^^^^

                    var provider = services.BuildServiceProvider();
                    mapper = provider.GetRequiredService<IMapper>();
                }

                return mapper;
            }
        }

        public ServiceToTest Build()
        {
            return new ServiceToTest(Mapper, SomeService.Object);
            //                       ^^^^^
        }

        public TestBuilder SetSomeService()
        {
            // Make the fake SomeService instance return some hard-coded values...
            return this;
        }
    }

    [TestMethod]
    public void ATest()
    {
        var builder = new TestBuilder()
                        .SetSomeService();
        var service = builder.Build();
        
        service.DoSomething();

        // Assert something here...
    }

    // Imagine more tests that follow the same pattern...
}

These tests used a private TestBuilder class to create a service with all its dependencies replaced by fakes. Except for AutoMapper’s IMapper.

To create IMapper, these tests had a property that used the same AddMapping() method used in the Program.cs file. It was an extension method with hundreds and hundreds of type mappings. Like this,

public static IServiceCollection AddMapping(this IServiceCollection services)
{
    var configuration = new MapperConfiguration((configExpression) =>
    {
        // Literally hundreds of single-type mappings here...
        // Hundreds and hundreds...
    });

    configuration.AssertConfigurationIsValid();
    services.AddSingleton(configuration.CreateMapper());

    return services;
}
A collapsed, hundreds-of-lines-long AddMapping() method

The thing is that every single test created a new instance of the TestBuilder class. And, by extension, a new IMapper instance for every test. And creating an IMapper instance is expensive. Arrrgggg!

A better solution would have been to use AutoMapper Profiles and only load the profiles needed in each test class. But that would have been a long and painful refactoring session.
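For reference, that alternative could have looked something like this. It’s only a sketch; the profile and mapped types are made up,

// A hypothetical profile grouping only the mappings one test class needs
public class ReservationProfile : Profile
{
    public ReservationProfile()
    {
        CreateMap<Reservation, ReservationDto>();
        // ...a handful of mappings instead of hundreds...
    }
}

// Then, inside the test setup, load a single profile instead of calling AddMapping()
var configuration = new MapperConfiguration(config => config.AddProfile<ReservationProfile>());
var mapper = configuration.CreateMapper();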

Use MSTest ClassInitialize attribute

Instead of creating an instance of IMapper for every test, I created it only once per test class. I used the MSTest [ClassInitialize] attribute. It decorates a static method that runs before all the test methods of a class. That was exactly what I needed.

To learn about all MSTest attributes, check Meziantou’s MSTest v2: Test lifecycle attributes.

My sample test class using [ClassInitialize] looked like this,

[TestClass]
public class ACoolTestClass
{
    private static IMapper Mapper;
    //                     ^^^^^

    [ClassInitialize]
    // ^^^^^
    public static void TestClassSetup(TestContext context)
    //                 ^^^^^
    {
        var services = new ServiceCollection();
        services.AddMapping();
        //       ^^^^^

        var provider = services.BuildServiceProvider();
        Mapper = provider.GetRequiredService<IMapper>();
    }

    private class TestBuilder
    {
        public Mock<ISomeService> SomeService { get; set; } = new Mock<ISomeService>();

        // No more IMapper initializations here

        public ServiceToTest Build()
        {
            return new ServiceToTest(Mapper, SomeService.Object);
            //                       ^^^^^
        }

        public TestBuilder SetSomeService()
        {
            // Return some hardcoded values from ISomeService methods...
            return this;
        }
    }

    // Same tests as before...
}

I needed to replicate this change in other test classes that used AutoMapper.

After reducing the delay between retry attempts and creating IMapper once per test class, these were the final execution times,

Faster tests

That’s under a minute! They used to run in ~3.5 minutes.

Voilà! That’s how I sped up this test suite. Apart from reducing delays between retry attempts in our tests and initializing AutoMapper once per test class, the lesson to take home is to have a fast test suite. A test suite we can run after every code change. Because the slower the tests, the less frequently we run them. And we want our backs covered by tests all the time.

To read more about unit testing, check my refactoring sessions to remove duplicated emails and update email statuses. And don’t miss my Unit Testing 101 series, where I cover everything from naming conventions to best practices.

Happy testing!

Let's refactor a test: Update email statuses

Let’s continue refactoring some tests for an email component. Last time, we refactored two tests that remove duplicated email addresses before sending an email. This time, let’s refactor two more tests: ones that check we change an email status once we receive a “webhook” notification from a third-party email service.

Here are the tests to refactor

If you missed the last refactoring session, these tests belong to an email component in a Property Management Solution. This component stores all emails before sending them and keeps track of their status changes.

These two tests check we change the recipient status to either “delivered” or “complained.” Of course, the original test suite had more tests. We only need one or two tests to prove a point.

using Moq;
using System;
using System.Threading;
using System.Threading.Tasks;
using Xunit;

namespace AcmeCorp.Email.Tests;

public class UpdateStatusCommandHandlerTests
{
    [Fact]
    public async Task Handle_ComplainedStatusOnlyOnOneRecipient_UpdatesStatuses()
    {
        var fakeRepository = new Mock<IEmailRepository>();
        var handler = BuildHandler(fakeRepository);

        var command = BuildCommand(withComplainedStatusOnlyOnCc: true);
        //                         ^^^^^
        await handler.Handle(command, CancellationToken.None);

        fakeRepository.Verify(t => t.UpdateAsync(
            It.Is<Email>(d =>
                d.Recipients[0].LastDeliveryStatus == DeliveryStatus.ReadyToBeSent
                //         ^^^^^
                && d.Recipients[1].LastDeliveryStatus == DeliveryStatus.Complained)),
                //            ^^^^^
            Times.Once());
    }

    [Fact]
    public async Task Handle_DeliveredStatusToBothRecipients_UpdatesStatuses()
    {
        var fakeRepository = new Mock<IEmailRepository>();
        var handler = BuildHandler(fakeRepository);

        var command = BuildCommand(withDeliveredStatusOnBoth: true);
        //                         ^^^^^
        await handler.Handle(command, CancellationToken.None);

        fakeRepository.Verify(t => t.UpdateAsync(
            It.Is<Email>(d =>
                d.Recipients[0].LastDeliveryStatus == DeliveryStatus.Delivered
                //         ^^^^^
                && d.Recipients[1].LastDeliveryStatus == DeliveryStatus.Delivered)),
                //            ^^^^^
            Times.Once());
    }

    private static UpdateStatusCommandHandler BuildHandler(
        Mock<IEmailRepository> fakeRepository)
    {
        fakeRepository
            .Setup(t => t.GetByIdAsync(It.IsAny<Guid>()))
            .ReturnsAsync(BuildEmail());

        return new UpdateStatusCommandHandler(fakeRepository.Object);
    }

    private static UpdateStatusCommand BuildCommand(
        bool withComplainedStatusOnlyOnCc = false,
        bool withDeliveredStatusOnBoth = false
        // Imagine more flags for other combinations
        // of statuses. Like opened, bounced, and clicked
    )
        // Imagine building a large object graph here
        // based on the parameter flags
        => new UpdateStatusCommand();

    private static Email BuildEmail()
        => new Email(
            "A Subject",
            "A Body",
            new[]
            {
                Recipient.To("to@email.com"),
                Recipient.Cc("cc@email.com")
            });
}

I slightly changed some test and method names. But those are some of the real tests I had to refactor.

What’s wrong with those tests? Did you notice it?

These tests use Moq to create a fake for the IEmailRepository, and the BuildHandler() and BuildCommand() factory methods to reduce the noise and keep our tests simple.

Photo by Towfiqu barbhuiya on Unsplash

What’s wrong?

Let’s take a look at the first test. Inside the Verify() method, why is Recipients[1] the one expected to have the Complained status? What if we change the order of recipients?

Based on the scenario in the test name, “complained status only on one recipient,” and the withComplainedStatusOnlyOnCc parameter passed to BuildCommand(), we might think Recipients[1] is the email’s cc address. But the test hides the order of recipients. We would have to inspect the BuildHandler() method to see the email injected into the handler and check the order of its recipients.

In the second test, since we expect all recipients to have the same status, we don’t care much about the order of recipients.

We shouldn’t hide anything in builders or helpers and then rely on those hidden assumptions in other parts of our tests. That makes our tests difficult to follow. And we shouldn’t make our readers decode our tests.

Explicit is better than implicit

Let’s rewrite our tests to avoid passing flags like withComplainedStatusOnlyOnCc and withDeliveredStatusOnBoth, and to avoid verifying against a hidden recipient order. Instead of passing a flag for every possible combination of statuses to BuildCommand(), let’s create one object mother per status, explicitly passing the email addresses we want.

Like this,

public class UpdateStatusCommandHandlerTests
{
    [Fact]
    public async Task Handle_ComplainedStatusOnlyOnOneRecipient_UpdatesStatuses()
    {
        var addresses = new[] { "to@email.com", "cc@email.com" };
        var repository = new Mock<IEmailRepository>()
                            .With(EmailFor(addresses));
                            //    ^^^^^
        var handler = BuildHandler(repository);

        var command = UpdateStatusCommand.ComplaintFrom("to@email.com");
        //                                ^^^^^
        await handler.Handle(command, CancellationToken.None);

        repository.VerifyUpdatedStatusFor(
        //         ^^^^^
            ("to@email.com", DeliveryStatus.Complained),
            ("cc@email.com", DeliveryStatus.ReadyToBeSent));
    }

    [Fact]
    public async Task Handle_DeliveredStatusToBothRecipients_UpdatesStatuses()
    {
        var addresses = new[] { "to@email.com", "cc@email.com" };
        var repository = new Mock<IEmailRepository>()
                            .With(EmailFor(addresses));
                            //    ^^^^^
        var handler = BuildHandler(repository);

        var command = UpdateStatusCommand.DeliveredTo(addresses);
        //                                ^^^^^
        await handler.Handle(command, CancellationToken.None);
                
        repository.VerifyUpdatedStatusForAll(DeliveryStatus.Delivered);
        //         ^^^^^
    }
}

First, instead of creating a fake EmailRepository with a hidden email object, we wrote a With() method. And to make things more readable, we renamed BuildEmail() to EmailFor() and passed the destinations explicitly to it. We can read it like mock.With(EmailFor(anAddress)).

Next, instead of using a single BuildCommand() with a flag for every combination of statuses, we created one object mother per status: ComplaintFrom() and DeliveredTo(). Again, we passed the email addresses we expected to have either complained or delivered statuses.

Lastly, for our Assert part, we created two custom Verify methods: VerifyUpdatedStatusFor() and VerifyUpdatedStatusForAll(). In the first test, we passed to VerifyUpdatedStatusFor() an array of tuples with the email address and its expected status.
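Here’s a minimal sketch of those helpers. I’m assuming here that a Recipient exposes its Address; adjust to the real domain types,

using Moq;
using System;
using System.Linq;

public static class EmailRepositoryMockExtensions
{
    // Stub the repository to return the given email
    public static Mock<IEmailRepository> With(this Mock<IEmailRepository> mock, Email email)
    {
        mock.Setup(t => t.GetByIdAsync(It.IsAny<Guid>()))
            .ReturnsAsync(email);
        return mock;
    }

    // Verify every (address, status) pair got the expected status
    public static void VerifyUpdatedStatusFor(
        this Mock<IEmailRepository> mock,
        params (string Address, DeliveryStatus Status)[] expected)
    {
        mock.Verify(t => t.UpdateAsync(
            It.Is<Email>(e => expected.All(pair =>
                e.Recipients.Single(r => r.Address == pair.Address)
                            .LastDeliveryStatus == pair.Status))),
            Times.Once());
    }

    // Verify all recipients got the same status
    public static void VerifyUpdatedStatusForAll(
        this Mock<IEmailRepository> mock,
        DeliveryStatus status)
    {
        mock.Verify(t => t.UpdateAsync(
            It.Is<Email>(e => e.Recipients.All(r => r.LastDeliveryStatus == status))),
            Times.Once());
    }
}

And EmailFor() is essentially the BuildEmail() method from the original version, renamed and receiving the destination addresses as a parameter.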

Voilà! That was another refactoring session. When we write unit tests, we should strive for a balance between implicit code to reduce the noise in our tests and explicit code to make things easier to follow.

In the original version of these tests, we hid the order of recipients when building emails. But then we relied on that order when writing assertions. Let’s not be like magicians pulling code we had hidden somewhere else.

Also, let’s use extension methods and object mothers like With(), EmailFor(), and DeliveredTo() to create a small “language” in our tests, striving for readability. The next person writing tests will copy the existing ones. That will make their life easier.

For more refactoring sessions, check these two: store and update OAuth connections and generate payment reports. And don’t miss my Unit Testing 101 series, where I cover everything from naming conventions to best practices.

Happy testing!