Monday Links: Personal Moats, Unfair Advantage, and Quitting

This is a career-only episode. These are five links I found interesting in the last month.

Build Personal Moats

From this post, the best career advice is to build a personal moat: “a set of unique and accumulating competitive advantages in the context of your career.” It goes on to describe what makes a good moat and how to find yours.

About personal moats:

  • “Ask others: What’s something that’s easy for me to do but hard for others?”
  • “Ideally you want this personal moat to help you build career capital in your sleep.”
  • “If you were magically given 10,000 hours to be amazing at something, what would it be? The more clarity you have on this response, the better off you’ll be.”

Read full article

Want an unfair advantage in your tech career? Consume content meant for other roles

This post suggests building a competitive advantage by consuming content targeted at other roles. It’s a way to create more empathy, gain understanding, and work better in cross-functional teams, among other benefits. It also suggests a list of roles we can start learning about.

Read full article

"Hey boss. I quit. Good luck" Photo by Boston Public Library on Unsplash

Career Advice No One Gave Me: Give a Lot of Notice When You Quit

This is gold! There are lots of posts on the Internet about interviewing, but few about quitting. This one is about how to quit while leaving doors open. It has concrete examples of how to “drop the bomb.”

Read full article

My 20-Year Career is Technical Debt or Deprecated

Reading this post, I realized I’ve jumped between companies always to rewrite old applications. An old ASP.NET WebForms app to a Console App. (Don’t ask me why!) Another old ASP.NET WebForms app to an ASP.NET Web API project. An old Python scheduler to an ASP.NET Core project with HostedServices. History repeats itself, I guess. We’re writing the legacy applications of tomorrow.

Let’s embrace that, quoting the post, “Given enough time, all your code will get deleted.”

Read full article

What you give up when moving into engineering management

Being a Manager requires different skills than being an Individual Contributor. Often, people get promoted to the Management track (without any training) only because they’re good developers. Arrrgggg! I’ve seen managers who are only good developers…and projects at risk because of that. This post shares why it’s hard to make the change and what we lose by moving to the Management track, such as focus time.

Read full article

Voilà! Another Monday Links. Do you think you have a personal moat or an unfair advantage? What is it? What are your quitting experiences? Until next Monday Links.

In the meantime, don’t miss the previous Monday Links on Interviewing, Zombies, and Burnout.

Happy coding!

Let's refactor a test: Speed up a slow test suite

Do you have fast unit tests? This is how I sped up a slow test suite from one of my client’s projects by reducing the delay between retry attempts and initializing slow-to-build dependencies only once. There’s a lesson behind this refactoring session.

Make sure to have a fast test suite that every developer can run after every code change. The slower the tests, the less frequently they’re run.

I learned to gather some metrics before rushing to optimize anything. I learned that lesson while trying to optimize a slow room-searching feature. These are the tests and their execution times before any changes:

Slow tests

Of course, I blurred some names for obvious reasons. I focused on two projects: Api.Tests (3.3 min) and ReservationQueue.Tests (18.9 sec).

There was an even slower test project, Data.Tests. It contained integration tests hitting a real database. Those tests could probably benefit from simpler test values. But I didn’t want to tune stored procedures or queries.

This is what I found and did to speed up this test suite.

Step 1: Reduce delays between retries

Inside the Api.Tests, I found tests for services with a retry mechanism. And, inside the unit tests, I had to wait more than three seconds between every retry attempt. C’mon, these are unit tests! Nobody needs or wants to wait between retries here.

My first solution was to reduce the delay between retry attempts to zero.

Set retryWaitSeconds = 0

Some tests built retry policies manually and passed them to services. I only needed to pass 0 as a delay. Like this,

Making retryWaitSeconds = 0
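
Since the diff was a screenshot, here’s a rough sketch of the kind of change, using Polly’s WaitAndRetryAsync(). The variable and service names are approximations, not the real ones,

// A sketch only: names are made up
var retryWaitSeconds = 0; // It used to be 3, adding seconds of real waiting to every retrying test
var retryPolicy = Policy
    .Handle<HttpRequestException>()
    .WaitAndRetryAsync(2, _ => TimeSpan.FromSeconds(retryWaitSeconds));

var service = new SomeService(retryPolicy /*, fakes for the other dependencies */);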

A simple Bash one-liner to find and replace that pattern covered my back here.

Pass RetryOptions without delay

Some other tests used an EventHandler base class. After running a command handler wrapped in a database transaction, we needed to call other internal microservices. We used event handlers for that. This is the EventHandlerBase,

public abstract class EventHandlerBase<T> : IEventHandler<T>
{
    protected RetryOptions _retryOptions;

    protected EventHandlerBase()
    {
        _retryOptions = new RetryOptions();
        //              ^^^^^
        // By default, it has:
        // MaxRetries = 2
        // RetryDelayInSeconds = 3
    }

    public async Task ExecuteAsync(T eventArgs)
    {
        try
        {
            await BuildRetryPolicy().ExecuteAsync(async () => await HandleAsync(eventArgs));
        }
        catch (Exception ex)
        {
            // Sorry, something wrong happened...
            // Log things here like good citizens of the world...
        }
    }

    private AsyncPolicy BuildRetryPolicy()
    {
        return Policy.Handle<HttpRequestException>()
            .WaitAndRetryAsync(
                _retryOptions.MaxRetries,
                (retryAttempt) => TimeSpan.FromSeconds(Math.Pow(_retryOptions.RetryDelayInSeconds, retryAttempt)),
                //                ^^^^^
                (exception, timeSpan, retryCount, context) =>
                { 
                    // Log things here like good citizens of the world...
                });
    }

    public virtual void SetRetryOptions(RetryOptions retryOptions)
    //                  ^^^^^
    {
        _retryOptions = retryOptions;
    }

    protected abstract Task HandleAsync(T eventArgs);
}

Notice one thing: the EventHandlerBase didn’t receive a RetryOptions in its constructor. All event handlers had, by default, a 3-second delay. Even the ones inside unit tests. Arrrgggg! And the EventHandlerBase used an exponential backoff. Arrrgggg! That explained why I had those slow tests.

The perfect solution would have been to make all child event handlers receive the right RetryOptions. But it would have required changing the Production code and probably retesting some parts of the app.
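
Roughly, that production change would have looked like this. This is only a sketch, not the actual change,

public abstract class EventHandlerBase<T> : IEventHandler<T>
{
    protected RetryOptions _retryOptions;

    // Child handlers (and their tests) would pass the right options explicitly
    protected EventHandlerBase(RetryOptions retryOptions)
    {
        _retryOptions = retryOptions;
    }

    // Same ExecuteAsync() and BuildRetryPolicy() as before...
}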

Instead, I went through all the builder methods inside tests and passed a RetryOptions without delay. Like this,

Adding a RetryOptions without delay
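
In code, the builder methods inside tests ended up doing something like this. The handler and dependency names are made up, and I’m assuming RetryOptions exposes settable MaxRetries and RetryDelayInSeconds properties,

// A sketch only: the real builders create each handler with all its fakes
var handler = new SomeEventHandler(fakeSomeService.Object);
handler.SetRetryOptions(new RetryOptions
{
    MaxRetries = 1,
    RetryDelayInSeconds = 0 // No waiting between retries inside unit tests
});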

After removing that delay between retries, the Api.Tests ran faster.

Step 2: Initialize AutoMapper only once

Inside ReservationQueue.Tests, the other slow test project, I found some tests using AutoMapper. Oh, boy! AutoMapper! I have a love-and-hate relationship with it. I shared my thoughts on AutoMapper in a past Monday Links episode.

Some of the tests inside ReservationQueue.Tests looked like this,

[TestClass]
public class ACoolTestClass
{
    private class TestBuilder
    {
        public Mock<ISomeService> SomeService { get; set; } = new Mock<ISomeService>();

        private IMapper mapper = null;

        internal IMapper Mapper
        //               ^^^^^
        {
            get
            {
                if (mapper == null)
                {
                    var services = new ServiceCollection();
                    services.AddMapping();
                    //       ^^^^^

                    var provider = services.BuildServiceProvider();
                    mapper = provider.GetRequiredService<IMapper>();
                }

                return mapper;
            }
        }

        public ServiceToTest Build()
        {
            return new ServiceToTest(Mapper, SomeService.Object);
            //                       ^^^^^
        }

        public TestBuilder SetSomeService()
        {
            // Make the fake SomeService instance return some hard-coded values...
        }
    }

    [TestMethod]
    public void ATest()
    {
        var builder = new TestBuilder()
                        .SetSomeService();
        var service = builder.Build();
        
        service.DoSomething();

        // Assert something here...
    }

    // Imagine more tests that follow the same pattern...
}

These tests used a private TestBuilder class to create a service with all its dependencies replaced by fakes. Except for AutoMapper’s IMapper.

To create IMapper, these tests had a property that used the same AddMapping() method used in the Program.cs file. It was an extension method with hundreds and hundreds of type mappings. Like this,

public static IServiceCollection AddMapping(this IServiceCollection services)
{
    var configuration = new MapperConfiguration((configExpression) =>
    {
        // Literally hundreds of single-type mappings here...
        // Hundreds and hundreds...
    });

    configuration.AssertConfigurationIsValid();
    services.AddSingleton(configuration.CreateMapper());

    return services;
}
A collapsed AddMapping() method with hundreds of mappings

The thing is that every single test created a new instance of the TestBuilder class. And, by extension, an instance of IMapper for every test. And creating an instance of IMapper is expensive. Arrrgggg!

A better solution would have been to use AutoMapper Profiles and only load the profiles needed in each test class. That would have been a long and painful refactoring session.
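
For reference, that profile-based approach would look roughly like this. The type names here are made up,

// One Profile per feature instead of one giant AddMapping() method
public class ReservationProfile : Profile
{
    public ReservationProfile()
    {
        CreateMap<Reservation, ReservationDto>();
        // ...only the mappings this feature needs
    }
}

// Then, each test class loads only the profiles it needs
var configuration = new MapperConfiguration(cfg => cfg.AddProfile<ReservationProfile>());
var mapper = configuration.CreateMapper();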

Use MSTest ClassInitialize attribute

Instead of creating an instance of IMapper when running every test, I did it only once per test class. I used MSTest’s [ClassInitialize] attribute. It decorates a static method that runs before all the test methods of a class. That was exactly what I needed.

To learn about all MSTest attributes, check Meziantou’s MSTest v2: Test lifecycle attributes.

My sample test class using [ClassInitialize] looked like this,

[TestClass]
public class ACoolTestClass
{
    private static IMapper Mapper;
    //                     ^^^^^

    [ClassInitialize]
    // ^^^^^
    public static void TestClassSetup(TestContext context)
    //                 ^^^^^
    {
        var services = new ServiceCollection();
        services.AddMapping();
        //       ^^^^^

        var provider = services.BuildServiceProvider();
        Mapper = provider.GetRequiredService<IMapper>();
    }

    private class TestBuilder
    {
        public Mock<ISomeService> SomeService { get; set; } = new Mock<ISomeService>();

        // No more IMapper initializations here

        public ServiceToTest Build()
        {
            return new ServiceToTest(Mapper, SomeService.Object);
            //                       ^^^^^
        }

        public TestBuilder SetSomeService()
        {
            // Return some hardcoded values from ISomeService methods...
        }
    }

    // Same tests as before...
}

I needed to replicate this change in other test classes that used AutoMapper.

After reducing the delay between retry attempts and creating IMapper once per test class, these were the final execution times,

Faster tests

That’s under a minute! They used to run in ~3.5 minutes.

Voilà! That’s how I sped up this test suite. Apart from reducing delays between retry attempts in our tests and initializing AutoMapper once per test class, the lesson to take home is to have a fast test suite. A test suite we can run after every code change. Because the slower the tests, the less frequently we run them. And we want our backs covered by tests all the time.

To read more about unit testing, check my refactoring sessions to remove duplicated emails and update email statuses. And don’t miss my Unit Testing 101 series, where I cover everything from naming conventions to best practices.

Happy testing!

Let's refactor a test: Update email statuses

Let’s continue refactoring some tests for an email component. Last time, we refactored two tests that remove duplicated email addresses before sending an email. This time, let’s refactor two more tests. These ones check that we change an email status once we receive a “webhook” from a third-party email service.

Here are the tests to refactor

If you missed the last refactoring session, these tests belong to an email component in a Property Management Solution. This component stores all emails before sending them and keeps track of their status changes.

These two tests check we change the recipient status to either “delivered” or “complained.” Of course, the original test suite had more tests. We only need one or two tests to prove a point.

using Moq;

namespace AcmeCorp.Email.Tests;

public class UpdateStatusCommandHandlerTests
{
    [Fact]
    public async Task Handle_ComplainedStatusOnlyOnOneRecipient_UpdatesStatuses()
    {
        var fakeRepository = new Mock<IEmailRepository>();
        var handler = BuildHandler(fakeRepository);

        var command = BuildCommand(withComplainedStatusOnlyOnCc: true);
        //                         ^^^^^
        await handler.Handle(command, CancellationToken.None);

        fakeRepository.Verify(t => t.UpdateAsync(
            It.Is<Email>(d =>
                d.Recipients[0].LastDeliveryStatus == DeliveryStatus.ReadyToBeSent
                //         ^^^^^
                && d.Recipients[1].LastDeliveryStatus == DeliveryStatus.Complained)),
                //            ^^^^^
            Times.Once());
    }

    [Fact]
    public async Task Handle_DeliveredStatusToBothRecipients_UpdatesStatuses()
    {
        var fakeRepository = new Mock<IEmailRepository>();
        var handler = BuildHandler(fakeRepository);

        var command = BuildCommand(withDeliveredStatusOnBoth: true);
        //                         ^^^^^
        await handler.Handle(command, CancellationToken.None);

        fakeRepository.Verify(t => t.UpdateAsync(
            It.Is<Email>(d =>
                d.Recipients[0].LastDeliveryStatus == DeliveryStatus.Delivered
                //         ^^^^^
                && d.Recipients[1].LastDeliveryStatus == DeliveryStatus.Delivered)),
                //            ^^^^^
            Times.Once());
    }

    private static UpdateStatusCommandHandler BuildHandler(
        Mock<IEmailRepository> fakeRepository)
    {
        fakeRepository
            .Setup(t => t.GetByIdAsync(It.IsAny<Guid>()))
            .ReturnsAsync(BuildEmail());

        return new UpdateStatusCommandHandler(fakeRepository.Object);
    }

    private static UpdateStatusCommand BuildCommand(
        bool withComplainedStatusOnlyOnCc = false,
        bool withDeliveredStatusOnBoth = false
        // Imagine more flags for other combinations
        // of statuses. Like opened, bounced, and clicked
    )
        // Imagine building a large object graph here
        // based on the parameter flags
        => new UpdateStatusCommand();

    private static Email BuildEmail()
        => new Email(
            "A Subject",
            "A Body",
            new[]
            {
                Recipient.To("to@email.com"),
                Recipient.Cc("cc@email.com")
            });
}

I slightly changed some test and method names. But those are some of the real tests I had to refactor.

What’s wrong with those tests? Did you notice it?

These tests use Moq to create a fake for the IEmailRepository, and the BuildHandler() and BuildCommand() factory methods to reduce the noise and keep our tests simple.

Photo by Towfiqu barbhuiya on Unsplash

What’s wrong?

Let’s take a look at the first test. Inside the Verify() method, why is Recipients[1] the one expected to have the Complained status? What if we change the order of recipients?

Based on the scenario in the test name, “complained status only on one recipient,” and the withComplainedStatusOnlyOnCc parameter passed to BuildCommand(), we might guess Recipients[1] is the email’s cc address. But the test hides the order of recipients. We would have to inspect the BuildHandler() method to see the email injected into the handler and check the order of its recipients.

In the second test, since we expect all recipients to have the same status, we don’t care much about the order of recipients.

We shouldn’t hide anything in builders or helpers and then rely on those hidden assumptions in other parts of our tests. That makes our tests difficult to follow. And we shouldn’t make our readers decode our tests.

Explicit is better than implicit

Let’s rewrite our tests to avoid passing flags like withComplainedStatusOnlyOnCc and withDeliveredStatusOnBoth, and to avoid verifying against a hidden recipient order. Instead of passing a flag to BuildCommand() for every possible combination of statuses, let’s create one object mother per status, explicitly passing the email addresses we want.

Like this,

public class UpdateStatusCommandHandlerTests
{
    [Fact]
    public async Task Handle_ComplainedStatusOnlyOnOneRecipient_UpdatesStatuses()
    {
        var addresses = new[] { "to@email.com", "cc@email.com" };
        var repository = new Mock<IEmailRepository>()
                            .With(EmailFor(addresses));
                            //    ^^^^^
        var handler = BuildHandler(repository);

        var command = UpdateStatusCommand.ComplaintFrom("to@email.com");
        //                                ^^^^^
        await handler.Handle(command, CancellationToken.None);

        repository.VerifyUpdatedStatusFor(
        //         ^^^^^
            ("to@email.com", DeliveryStatus.Complained),
            ("cc@email.com", DeliveryStatus.ReadyToBeSent));
    }

    [Fact]
    public async Task Handle_DeliveredStatusToBothRecipients_UpdatesStatuses()
    {
        var addresses = new[] { "to@email.com", "cc@email.com" };
        var repository = new Mock<IEmailRepository>()
                            .With(EmailFor(addresses));
                            //    ^^^^^
        var handler = BuildHandler(repository);

        var command = UpdateStatusCommand.DeliveredTo(addresses);
        //                                ^^^^^
        await handler.Handle(command, CancellationToken.None);
                
        repository.VerifyUpdatedStatusForAll(DeliveryStatus.Delivered);
        //         ^^^^^
    }
}

First, instead of creating a fake EmailRepository with a hidden email object, we wrote a With() method. And to make things more readable, we renamed BuildEmail() to EmailFor() and passed the destinations explicitly to it. We can read it like mock.With(EmailFor(anAddress)).

Next, instead of using a single BuildCommand() with a flag for every combination of statuses, we created one object mother per status: ComplaintFrom() and DeliveredTo(). Again, we passed the email addresses we expected to have either complained or delivered statuses.

Lastly, for our Assert part, we created two custom Verify methods: VerifyUpdatedStatusFor() and VerifyUpdatedStatusForAll(). In the first test, we passed to VerifyUpdatedStatusFor() an array of tuples with the email address and its expected status.
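
To give an idea, here’s a rough sketch of those custom Verify methods and the With() helper using Moq. The Recipient.Address property, the class name, and the exact signatures are assumptions; the real helpers live next to the test class,

using System;
using System.Linq;
using Moq;

internal static class EmailRepositoryTestExtensions
{
    // Makes the fake repository return the given email when the handler loads it
    public static Mock<IEmailRepository> With(
        this Mock<IEmailRepository> fakeRepository, Email email)
    {
        fakeRepository
            .Setup(t => t.GetByIdAsync(It.IsAny<Guid>()))
            .ReturnsAsync(email);

        return fakeRepository;
    }

    // Verifies UpdateAsync() received an email with every address in its expected status
    public static void VerifyUpdatedStatusFor(
        this Mock<IEmailRepository> fakeRepository,
        params (string Address, DeliveryStatus Status)[] expected)
    {
        fakeRepository.Verify(t => t.UpdateAsync(
            It.Is<Email>(e => expected.All(pair =>
                e.Recipients.Any(r =>
                    r.Address == pair.Address
                    && r.LastDeliveryStatus == pair.Status)))),
            Times.Once());
    }

    // Verifies every recipient ended up with the same status
    public static void VerifyUpdatedStatusForAll(
        this Mock<IEmailRepository> fakeRepository,
        DeliveryStatus expectedStatus)
    {
        fakeRepository.Verify(t => t.UpdateAsync(
            It.Is<Email>(e => e.Recipients.All(r => r.LastDeliveryStatus == expectedStatus))),
            Times.Once());
    }
}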

Voilà! That was another refactoring session. When we write unit tests, we should strive for a balance between implicit code to reduce the noise in our tests and explicit code to make things easier to follow.

In the original version of these tests, we hid the order of recipients when building emails. But then we relied on that order when writing assertions. Let’s not be like magicians pulling code we had hidden somewhere else.

Also, let’s use extension methods and object mothers like With(), EmailFor(), and DeliveredTo() to create a small “language” in our tests, striving for readability. The next person writing tests will copy the existing ones. Let’s make their life easier.

For more refactoring sessions, check these two: store and update OAuth connections and generate payment reports. And don’t miss my Unit Testing 101 series, where I cover everything from naming conventions to best practices.

Happy testing!

Monday Links: Interviewing, Zombies, and Burnout

For this Monday Links, I’d like to share five reads about interviewing, motivation, and career. These are five articles I found interesting in the past month or two.

Programming Interviews Turn Normal People into A-Holes

This is a good take on the hiring process, but from the perspective of someone who was the hiring manager. Two things I like about this one: “Never ask for anything that can be googled,” and “Decide beforehand what questions you will ask because I find that not given any instructions, people will resort to asking trivia or whatever sh*t library they are working on.”

I’ve been in those interviews that feel like an interrogation. The only thing missing was a table in the middle of a dark room with a two-way mirror. Like in spy movies. Arrrggg!

Read full article

Demotivating a (Skilled) Programmer

From this article, a message for bosses: “…(speaking about salaries, dual monitors, ping pong tables) These things are ephemeral, though. If you don’t have him working on the core functionality of your product, with tons of users, and an endless supply of difficult problems, all of the games of ping pong in the world won’t help.”

Read full article

Simpson's Gloves Pty Ltd, Richmond, circa 1932. Photo by Museums Victoria on Unsplash

How to Turn Software Engineers Into Zombies

This is a post with a sarcastic tone and a good lesson. These are some of its ideas to turn software engineers into walking dead bodies:

  • “Always give them something to implement, not to solve”
  • “When it comes time for promotion come up with some credible silly process they have to go through in order to get something”
  • “If you keep hiring, you will always pay less per employee.”

I have to confess I never saw that third point coming. Those are only three ideas; the post has even more.

Read full article

How to Spot Signs of Burnout Culture Before You Accept a Job

We only have one chance to make a first impression. Often, the first impression we get of a company is the job listing itself. This post shows some clues to read between the lines and detect a toxic culture before accepting a job.

I ran from companies with “work under pressure” or “fast-paced changing environment” anywhere in the job description. Often that screams: “We don’t know what we’re doing, but we’re already late.” Arrgggg!

Read full article

Numbers To Know For Managing (Software Teams)

I read this one with a bit of skepticism. I was expecting sprint velocity, planned story points, etc. But I found some interesting metrics, like “five is the number of comments on a document before turning it into a meeting” and “one is the number of times to reverse a resignation.”

Read full article

Voilà! Another Monday Links. Have you ever been in one of those interrogations? Sorry, I meant interviews. Does your company track sprint velocity and story points? What metrics does it track instead? Until next Monday Links.

In the meantime, don’t miss the previous Monday Links on Passions, Estimates, and Methodologies.

Happy coding!

Goodbye, NullReferenceException: Separate State in Separate Objects

So far in this series about NullReferenceException, we have used nullable operators and C# 8.0 Nullable References to avoid null and learned about the Option type as an alternative to null.

Let’s see how to design our classes to avoid null when representing optional values.

Instead of writing a large class with methods that expect some nullable properties to be not null at some point, we’re better off using separate classes to avoid dealing with null and getting NullReferenceException.

Multiple states in the same object

Often we keep all possible combinations of properties of an object in a single class.

For example, on an e-commerce site, we create a User class with an email, password, and credit card. But since we don’t need the credit card details to create new users, we declare the CreditCard property as nullable.

Let’s write a class to represent either regular or premium users. We should only store credit card details for premium users to charge a monthly subscription.

public record User(Email Email, SaltedPassword Password)
{
    public CreditCard? CreditCard { get; internal set; }
    //              ^^^
    // Only for Premium users. We declare it nullable

    public void BecomePremium(CreditCard creditCard)
    {
        // Imagine we send an email, validate the credit card
        // details, etc.
        //
        // Beep, beep, boop
        CreditCard = creditCard;
    }

    public void ChargeMonthlySubscription()
    {
        // CreditCard might be null here.
        //
        // Nothing is preventing us from calling it
        // for regular users
        CreditCard.Pay();
        // ^^^^^
        // Boooom, NullReferenceException
    }
}

Notice that the CreditCard property only has a value for premium users. We expect it to be null for regular users. And nothing prevents us from calling ChargeMonthlySubscription() on regular users (when CreditCard is null). We have a potential source of NullReferenceException.

We ended up with a class with nullable properties and methods that should only be called when some of those properties aren’t null.

Inside ChargeMonthlySubscription(), we could add some null checks before using the CreditCard property. But, if we have other methods that need other properties not to be null, our code will get bloated with null checks all over the place.
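
For example, the null-check route we’re trying to avoid would look something like this inside the User class above,

public void ChargeMonthlySubscription()
{
    // A defensive check we'd have to repeat in every method
    // that expects CreditCard to have a value
    if (CreditCard is null)
    {
        throw new InvalidOperationException("Only premium users have a credit card");
    }

    CreditCard.Pay();
}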

Let's separate our state... Photo by Will Francis on Unsplash

Separate State in Separate Objects

Instead of checking for null inside ChargeMonthlySubscription(), let’s create two separate classes to represent regular and premium users.

public record RegularUser(Email Email, SaltedPassword Password)
{
    // No nullable CreditCard anymore

    public PremiumUser BecomePremium(CreditCard creditCard)
    {
        // Imagine we send an email, validate the credit card
        // details, etc.
        return new PremiumUser(Email, Password, creditCard);
    }
}

public record PremiumUser(Email Email, SaltedPassword Password, CreditCard CreditCard)
{
    // Do stuff of Premium Users...

    public void ChargeMonthlySubscription()
    {
        // CreditCard is not null here.
        CreditCard.Pay();
    }
}

Notice we wrote two separate classes: RegularUser and PremiumUser. We don’t have methods that should be called only when some optional properties have a value. And we don’t need to check for null anymore. For premium users, we’re sure we have their credit card details. We eliminated a possible source of NullReferenceException.

We’re better off writing separate classes than writing a single large class with nullable properties that only have values at some point.
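
Now the compiler has our back. Here’s a quick usage sketch, assuming we already have an email, password, and credit card at hand,

var regularUser = new RegularUser(email, password);
// regularUser.ChargeMonthlySubscription(); // This doesn't even compile

var premiumUser = regularUser.BecomePremium(creditCard);
premiumUser.ChargeMonthlySubscription(); // CreditCard is guaranteed to have a value here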

I learned about this technique after reading Domain Modeling Made Functional. The book uses the mantra: “Make illegal state unrepresentable.” In our example, the illegal state is the CreditCard being null for regular users. We made it unrepresentable by writing two classes.

Voilà! This is another technique to prevent null and NullReferenceException: avoid classes that only use some optional state at some point in the object’s lifecycle. We should split all possible combinations of optional state into separate classes. Put separate state in separate objects.

Don’t miss the other posts in this series, what the NullReferenceException is and how to prevent it, Nullable Operators and References, and the Option type and LINQ XOrDefault methods.

Join my course C# NullReferenceException Demystified on Udemy and learn the principles, features, and strategies to avoid this exception in just 1 hour and 5 minutes.

Happy coding!