Monday Links: Passions, Estimates, and Methodologies

Welcome to the first Monday Links of 2023. These are five reads I found interesting last month. This time, software methodologies turned out to be a recurring theme.

Why I’m Glad I Lack Passion to Be a Programmer

After a couple of years of working as a software engineer, I started to embrace simplicity. Software exists to satisfy a business need. Perfect software only exists in books. That’s why I started to see the big goal and only use libraries/tools/concepts when there’s a compelling reason to do so. Not all applications need to use Domain-Driven Design with Event Sourcing.

I liked this one: “My ideal for software development is to find the simplest solution to the practical problem.” I’m not a passionate programmer anymore either.

Read full article

11 Laws of Software Estimation for Complex Work

I can relate to this story. It happened to a friend of a friend of mine. One day, his CEO came in saying he had just closed a really big deal, but he didn’t know what they were going to do. Arrrggg!

Apart from a relatable story, this post contains 11 laws about estimations. For example: all estimations are simply guesses, and by the time developers have enough information to give more accurate estimations, close to the end of a project, it’s already too late.

Read full article

User Interface Design: 5 Rules of Thumb

This article contains some basic principles for designing good UIs. I don’t like those “are you sure you want to do this?” messages. Based on this article, they’re not a good idea.

Read full article

Why Do Many Developers Consider Scrum to Be an Evil Scam?

Like any widespread idea, SCRUM gets perverted over time. I’ve been in teams where SCRUM is only adopted to micromanage developers using daily meetings. And the next thing you know, daily meetings become a narrative of how busy developers are to avoid getting fired.

Read full article

Why don’t software development methodologies work?

This is an old article I found on the Hacker News front page. It showed, years ago, what everybody is complaining about these days. You only need to visit Hacker News or r/programming once a month to see the same complaints.

I’ve worked on everything from small projects with no methodology at all to larger projects with SCRUM as a religion. I’ve been there.

I like this paragraph from the article: “My own experience, validated by Cockburn’s thesis and Frederick Brooks in No Silver Bullet, is that software development projects succeed when the key people on the team share a common vision, what Brooks calls ‘conceptual integrity.’“

Apart from the main article, it has really good comments. This is one that resonates with me about the one methodology:

“A single technical lead with full authority to make decisions, with a next tier assistant, associated technical staff, and a non-technical support person. the achievement of the team is then determined by the leadership of the team. the size of the team and project complexity is then limited by the leader and her ability to understand the problem and assign tasks.”

For me, the most successful projects are the ones with a small team where everybody knows each other, the main goal, and what to do.

Read full article

Voilà! Another Monday Links. Do you consider yourself a passionate developer? What’s your experience with software methodologies?

In the meantime, don’t miss the previous Monday Links on 40-year Programmer, Work and Burnout.

Happy coding!

Best of 2022

In 2022, I wrote two major series of posts: one about SQL Server performance tuning and another one about LINQ.

Once, one of my clients asked me to “tune” some stored procedures, and that inspired me to take a close look at the performance tuning world. Last year, I took Brent Ozar’s Mastering courses and decided to share some of the things I learned.

On the other hand, I updated my Quick Guide to LINQ to use new C# features and wrote a bunch of new posts about LINQ. In fact, I released a text-based course about LINQ on Educative: Getting Started with LINQ. LINQ is my favorite C# feature, ever.

I kept writing my Monday Links posts. And I decided to have my own Advent of Code. I prefer to call it: Advent of Posts. I wrote 22 posts in December, one post per day until Christmas Eve. I missed a couple of days, but I consider it a “mission accomplished.”

These are the five posts I wrote in 2022 that you read the most. In case you missed any of them, here they are:

  1. TIL: How to optimize Group by queries in SQL Server. This post has one of the lessons I learned after following Brent Ozar’s Mastering courses. Well, this one is about using CTEs to speed up queries with GROUP BY.
  2. SQL Server Index recommendations: Just listen to them. Again, these are some of the lessons I learned in Brent Ozar’s Mastering Index Tuning course. I shared why we shouldn’t blindly add the index recommendations from query plans. They’re only clues. We can do better than that. I have to confess I added every single index recommendation I got before learning these lessons.
  3. Working with ASP.NET Core IDistributedCache Provider for NCache. This is a post I wrote in collaboration with Alachisoft, NCache creators. It introduces NCache and shows how to use it as a DistributedCache provider with ASP.NET Core.
  4. Four new LINQ methods in .NET 6: Chunk, DistinctBy, Take, XOrDefault. After many years, with the .NET 6 release, LINQ received new methods and overloads. In this post, I show some of them.
  5. How to use LINQ GroupBy method? Another post to get started with LINQ. It covers three GroupBy use cases.

Voilà! These were your 5 favorite posts. I hope you enjoyed reading them as much as I enjoyed writing them. Probably, you found shorter versions of these posts on my dev.to account. Or the versions some random guy copy-pasted into his own website, pretending I was an author on his programming site. Things you find out when you google your own user handle. Arrggg!

If any of my posts have helped you and you want to support my work, check my Getting Started with LINQ course on Educative or buy me a coffee, tea, or hamburger by downloading my ebooks from my Gumroad page.

Don’t miss my best of 2021.

Thanks for reading, and happy coding in 2023!

Let's refactor a test: Remove duplicated emails

This post is part of my Advent of Code 2022.

Recently, I’ve been reviewing pull requests as one of my main activities. This time, let’s refactor two tests I found in one code review session. The two tests check that an email doesn’t have duplicated addresses before sending it. But they have a common mistake: testing private methods directly. Let’s refactor these tests to use the public facade of the code under test.

Always write unit tests using the public methods of a class or a group of classes. Don’t make private methods public and static to test them directly. Test the observable behavior of classes instead.

Here are the tests to refactor

These tests belong to an email component in a Property Management Solution. This component stores all emails before sending them.

These are two tests to check we don’t try to send an email to the same address twice. Let’s pay attention to the class name and the method under test.

public class SendEmailCommandHandlerTests
{
    [Fact]
    public void CreateRecipients_NoDuplicates_ReturnsSameRecipients()
    {
        var toEmailAddresses = new List<string>
        {
            "toMail1@mail.com", "toMail2@mail.com"
        };
        var ccEmailAddresses = new List<string>
        {
            "ccMail3@mail.com", "ccMail4@mail.com"
        };

        var recipients = SendEmailCommandHandler.CreateRecipients(toEmailAddresses, ccEmailAddresses);
        //                                       ^^^^^
        
        recipients.Should().BeEquivalentTo(
          new List<Recipient>
          {
              Recipient.To("toMail1@mail.com"),
              Recipient.To("toMail2@mail.com"),
              Recipient.Cc("ccMail3@mail.com"),
              Recipient.Cc("ccMail4@mail.com")
          });
    }

    [Fact]
    public void CreateRecipients_Duplicates_ReturnsRecipientsWithoutDuplicates()
    {
        var toEmailAddresses = new List<string>
        {
            "toMail1@mail.com", "toMail2@mail.com", "toMail1@mail.com"
        };
        var ccEmailAddresses = new List<string>
        {
            "ccMail1@mail.com", "toMail2@mail.com"
        };

        var recipients = SendEmailCommandHandler.CreateRecipients(toEmailAddresses, ccEmailAddresses);
        //                                       ^^^^^

        recipients.Should().BeEquivalentTo(
          new List<Recipient>
          {
              Recipient.To("toMail1@mail.com"),
              Recipient.To("toMail2@mail.com"),
              Recipient.Cc("ccMail1@mail.com"),
          });
    }
}

I slightly changed some names. But those are the real tests I had to refactor.

What’s wrong with those tests? Did you notice it? Also, can you point out where the duplicates are in the second test?

To have more context, here’s the SendEmailCommandHandler class that contains the CreateRecipients() method,

using MediatR;
using Microsoft.Extensions.Logging;
using MyCoolProject.Commands;
using MyCoolProject.Shared;

namespace MyCoolProject;

public class SendEmailCommandHandler : IRequestHandler<SendEmailCommand, TrackingId>
{
    private readonly IEmailRepository _emailRepository;
    private readonly ILogger<SendEmailCommandHandler> _logger;

    public SendEmailCommandHandler(
        IEmailRepository emailRepository,
        ILogger<SendEmailCommandHandler> logger)
    {
        _emailRepository = emailRepository;
        _logger = logger;
    }

    public async Task<TrackingId> Handle(SendEmailCommand command, CancellationToken cancellationToken)
    {
        // Imagine some validations and initializations here...

        var recipients = CreateRecipients(command.Tos, command.Ccs);
        //               ^^^^^
        var email = Email.Create(
            command.Subject,
            command.Body,
            recipients);

        await _emailRepository.CreateAsync(email);

        return email.TrackingId;
    }

    public static IEnumerable<Recipient> CreateRecipients(IEnumerable<string> tos, IEnumerable<string> ccs)
    //                                   ^^^^^
        => tos.Select(Recipient.To)
              .UnionBy(ccs.Select(Recipient.Cc), recipient => recipient.EmailAddress);
}

public record Recipient(EmailAddress EmailAddress, RecipientType RecipientType)
{
    public static Recipient To(string emailAddress)
        => new Recipient(emailAddress, RecipientType.To);

    public static Recipient Cc(string emailAddress)
        => new Recipient(emailAddress, RecipientType.Cc);
}

public enum RecipientType
{
    To, Cc
}

The SendEmailCommandHandler processes all requests to send an email. It grabs the input parameters, creates an Email instance, and stores it using a repository. It uses the free MediatR library to wire up commands and command handlers.

Also, it parses the raw email addresses into a list of Recipient with the CreateRecipients() method. That’s the method under test in our two tests. Here, Recipient and EmailAddress work like Value Objects.
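
I’m not showing the full EmailAddress class here. For Recipient.To() and Recipient.Cc() to compile with plain strings, it would need something like an implicit conversion from string. A hypothetical sketch,

// Hypothetical sketch: an implicit conversion from string lets
// Recipient.To("toMail1@mail.com") compile without an explicit new EmailAddress(...)
public record EmailAddress
{
    public string Value { get; }

    public EmailAddress(string value)
    {
        if (string.IsNullOrWhiteSpace(value) || !value.Contains('@'))
            throw new ArgumentException("Invalid email address", nameof(value));

        Value = value;
    }

    public static implicit operator EmailAddress(string value) => new(value);
}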

Now can you notice what’s wrong with our tests?

What’s wrong?

Our two unit tests test a private method directly. That’s not the appropriate way of writing unit tests. We shouldn’t test internal state and private methods. We should test them through the public facade of our logic under test.

In fact, someone made the CreateRecipients() method public to test it,

Diff showing a private method made public
Someone made the internals public to write tests

Making private methods public to test them is the most common mistake in unit testing.

For our case, we should write our tests using the SendEmailCommand class and the Handle() method.

Don’t expose private methods

Let’s make the CreateRecipients() method private again. And let’s write our tests using the SendEmailCommand and SendEmailCommandHandler classes.

This is the test to validate that we remove duplicates,

[Fact]
public async Task Handle_DuplicatedEmailInTosAndCc_CallsRepositoryWithoutDuplicates()
{
    var duplicated = "duplicated@email.com";
    //  ^^^^^
    var tos = new List<string> { duplicated, "tomail@mail.com" };
    var ccs = new List<string> { duplicated, "ccmail@mail.com" };

    var fakeRepository = new Mock<IEmailRepository>();

    var handler = new SendEmailCommandHandler(
        fakeRepository.Object,
        Mock.Of<ILogger<SendEmailCommandHandler>>());

    // Let's write a factory method that receives these two email lists
    var command = BuildCommand(tos: tos, ccs: ccs);
    //            ^^^^^
    await handler.Handle(command, CancellationToken.None);

    // Let's write some assert/verifications in terms of the Email object
    fakeRepository
        .Verify(t => t.CreateAsync(It.Is<Email>(email => /* Assert something here using email.Recipients */ true)));
    // Or, even better let's write a custom Verify()
    //
    // fakeRepository.WasCalledWithoutDuplicates();
}

private static SendEmailCommand BuildCommand(IEnumerable<string> tos, IEnumerable<string> ccs)
    => new SendEmailCommand(
        "Any Subject",
        "Any Body",
        tos,
        ccs);

Notice we wrote a BuildCommand() method to create a SendEmailCommand with only the email addresses. That’s what we care about in this test. This way, we reduce the noise in our tests. And, to make our test values obvious, we declared a duplicated variable and used it in both lists of destination email addresses.

To write the Assert part of this test, we can use the Verify() method from the fake repository to check that the duplicated email appears only once. Or we can use Moq’s Callback() method to capture the Email being saved and write some assertions on it. Even better, we can create a custom assertion for that. Maybe a WasCalledWithoutDuplicates() method.
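
For example, here’s a sketch of the Callback() alternative. I’m assuming we can read the Recipients back from the captured Email,

// A sketch of Moq's Callback() approach, as an alternative to Verify().
// It assumes Email exposes its Recipients.
Email? savedEmail = null;
fakeRepository
    .Setup(t => t.CreateAsync(It.IsAny<Email>()))
    .Returns(Task.CompletedTask)
    .Callback<Email>(email => savedEmail = email);

await handler.Handle(command, CancellationToken.None);

// OnlyHaveUniqueItems() comes from FluentAssertions, the same library
// behind the BeEquivalentTo() calls in the original tests
savedEmail!.Recipients.Should().OnlyHaveUniqueItems(recipient => recipient.EmailAddress);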

That’s one of the two original tests. The other one is left as an exercise to the reader.

Voilà! That was today’s refactoring session. To take home: we shouldn’t test private methods, and we should always write tests using the public methods of the code under test. We can remember this principle with the mnemonic: “Don’t let others touch our private parts.” That’s how I remember it.

For more refactoring sessions, check these two: store and update OAuth connections and generate payment reports. Don’t miss my Unit Testing 101 series, where I cover everything from naming conventions to best practices.

Happy coding!

To Value Object or Not To: How I choose Value Objects

This post is part of my Advent of Code 2022.

Today, I reviewed a pull request and had a conversation about when to use Value Objects instead of primitive values. This is the code that started the conversation and my rationale for promoting a primitive value to a Value Object.

Prefer Value Objects to encapsulate validations or custom methods on a primitive value. Otherwise, if a primitive value doesn’t have a meaningful “business” sense and is only passed around, consider using the primitive value with a good name for simplicity.

In case you’re not familiar with Domain-Driven Design and its artifacts: a Value Object represents a concept that doesn’t have an “identity” in a business domain. Value Objects are immutable and compared by value.

Value Objects represent elements of “broader” concepts. For example, in a Reservation Management System, we can use a Value Object to represent the payment method of a Reservation.
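
For example, here’s a minimal sketch of a hypothetical PaymentMethod Value Object using a C# record,

// A hypothetical PaymentMethod Value Object from a Reservation Management System.
// C# records give us immutability and value-based equality for free,
// the two defining traits of Value Objects.
public record PaymentMethod(string Name)
{
    public static readonly PaymentMethod Cash = new("Cash");
    public static readonly PaymentMethod CreditCard = new("CreditCard");
}

// Two instances with the same value are equal:
// new PaymentMethod("Cash") == PaymentMethod.Cash // true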

TimeStamp vs DateTime

This is the piece of code that triggered my comment during the code review.

public class DeliveryNotification : ValueObject
{
    public Recipient Recipient { get; init; }
    
    public DeliveryStatus Status { get; init; }
    
    public TimeStamp TimeStamp { get; init; }
    //     ^^^^^^

    protected override IEnumerable<object?> GetEqualityComponents()
    {
        yield return Recipient;
        yield return Status;
        yield return TimeStamp;
    }
}

public class TimeStamp : ValueObject
{
    public DateTime Value { get; }

    private TimeStamp(DateTime value)
    {
        Value = value;
    }
    
    public static TimeStamp Create()
    {
        return new TimeStamp(SystemClock.Now);
    }

    protected override IEnumerable<object> GetEqualityComponents()
    {
        yield return Value;
    }
}

public enum DeliveryStatus
{
    Created,
    Sent,
    Opened,
    Failed
}

We wanted to record when an email is sent, opened, or failed. We relied on a third-party email provider to notify our system about these email events. The DeliveryNotification has a recipient, a status, and a timestamp.

The ValueObject base class is Vladimir Khorikov’s ValueObject implementation.
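
In case you haven’t seen that implementation, this is a simplified sketch of the idea behind it. The real implementation handles more edge cases,

// A simplified sketch of a ValueObject base class, in the spirit of
// Vladimir Khorikov's implementation. Equality is based on the
// components each child class chooses to expose.
public abstract class ValueObject
{
    protected abstract IEnumerable<object?> GetEqualityComponents();

    public override bool Equals(object? obj)
        => obj is ValueObject other
           && GetType() == other.GetType()
           && GetEqualityComponents().SequenceEqual(other.GetEqualityComponents());

    public override int GetHashCode()
        => GetEqualityComponents()
            .Aggregate(0, (hash, component) => HashCode.Combine(hash, component));
}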

Notice the TimeStamp class. It’s only a wrapper around the DateTime class. Mmmm…

Sand clock
Photo by Alexandar Todov on Unsplash

Promote Primitive Values to Value Objects

I’d dare to say that using a TimeStamp instead of a simple DateTime in the DeliveryNotification class was overkill. I guess “when we have a hammer, everything looks like a finger.”

This is my rationale to choose between value objects and primitive values:

  1. If we need to enforce a domain rule or perform a business operation on a primitive value, let’s use a Value Object.
  2. If we only pass a primitive value around, but it represents a concept in the domain language, let’s wrap it in a record to give it a meaningful name.
  3. Otherwise, let’s stick to the plain primitive values.

In our TimeStamp class, apart from Create(), we didn’t have any other methods. We might validate if the inner date is in this century. But that won’t be a problem. I don’t think that code will live that long.

Also, there are cleaner ways of writing tests that use DateTime than relying on a static SystemClock. Maybe it would be a better idea if we could override the SystemClock inner date from our tests.
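
For example, a hypothetical overridable clock could look like this. The original SystemClock isn’t shown in the pull request code,

// A hypothetical sketch of a test-friendly SystemClock.
// Tests pin the clock to a known date and restore it afterward.
public static class SystemClock
{
    private static Func<DateTime> _now = () => DateTime.UtcNow;

    public static DateTime Now => _now();

    public static void Set(DateTime fixedNow) => _now = () => fixedNow;

    public static void Reset() => _now = () => DateTime.UtcNow;
}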

I’d take a simpler route and use a plain DateTime value. I don’t think there’s a business case for TimeStamp here.

public class DeliveryNotification : ValueObject
{
    public Recipient Recipient { get; init; }
    
    public DeliveryStatus Status { get; init; }
    
    public DateTime TimeStamp { get; init; }
    //     ^^^^^^

    protected override IEnumerable<object?> GetEqualityComponents()
    {
        yield return Recipient;
        yield return Status;
        yield return TimeStamp;
    }
}

// Or alternative, to use the same domain language
//
// public record TimeStamp(DateTime Value);

public enum DeliveryStatus
{
    Created,
    Sent,
    Opened,
    Failed
}

If, in the “email sending” domain, business analysts or stakeholders use “timestamp,” then, for the sake of a ubiquitous language, we can add a simple record TimeStamp to wrap the date. Like record TimeStamp(DateTime Value).

Voilà! That’s a practical option to decide between Value Objects and primitive values. For me, the key is asking if there’s a meaningful domain concept behind the primitive value. Otherwise, we would end up either with too many Value Objects or with primitive obsession.

If you want to read more about Domain-Driven Design, check my takeaways from these two books: Hands-on Domain-Driven Design with .NET Core and Domain Modeling Made Functional.

Happy coding!

Dump and Load to squash old migrations

This post is part of my Advent of Code 2022.

Recently, I stumbled upon the article Get Rid of Your Old Database Migrations. The author shows how Clojure, Ruby, and Django use the “Dump and Load” approach to compact or squash old migrations. This is how I implemented the “Dump and Load” approach in one of my client’s projects.

1. Export database objects and reference data with schemazen

In one of my client’s projects, we had so many migration files that we started to group them inside folders named after the year and month. Squashing migrations sounded like a good idea here.

For example, for a three-month project, we wrote 27 migration files. This is the Migrator project,

List of migration files in one of my projects
27 migration files for a short-term project

For those projects, we use Simple.Migrations to apply migration files and a bunch of custom C# extension methods to write the Up() and Down() steps. Since we don’t use an all-batteries-included migration framework, I needed to generate the dump of all database objects.

I found schemazen on GitHub, a CLI tool to “script and create SQL Server objects quickly.”

This is how to script all objects and export data from reference tables with schemazen,

dotnet schemazen script --server (localdb)\\MSSQLLocalDB \
    --database <YourDatabaseName> \
    --dataTablesPattern=\"(.*)(Status|Type)$\" \
    --scriptDir C:/someDir

Notice I used --dataTablesPattern option with a regular expression to only export the data from the reference tables. In this project, we named our reference tables with the suffixes “Status” or “Type.” For example, PostStatus or ReceiptType.

I could simply export the objects from SQL Server directly. But those script files contain a lot of noise in the form of default options. Schemazen does it cleanly.

Schemazen generates one folder per object type and one file per object. And it exports data in a TSV format. I checked its source code and didn’t find an option to export INSERT statements, though.

Schemazen generates a folder structure like this,

 |-data
 |-defaults
 |-foreign_keys
 |-tables
 props.sql
 schemas.sql

After this first step, I had the database objects. But I still needed to write the actual migration file.

Piles of used cars and trucks waiting to be recycled
Photo by Randy Laybourne on Unsplash

2. Process schemazen exported files

To write the squash migration file, I wanted to have all scripts in a single file and turn the TSV files with the exported data into INSERT statements.

I could write a C# script file, but I wanted to stretch my Bash/Unix muscles. After some Googling, I came up with this,

# It grabs the output from schemazen and compacts all dump files into a single one
FILE=dump.sql

# Merge all files into a single one
for folder in 'tables/' 'defaults/' 'foreign_keys/'
do
    find $folder -type f \( -name '*.sql' ! -name 'VersionInfo.sql' \) | while read f ;
    do
        cat $f >> $FILE;
    done
done

# Remove GO keywords and blank lines
sed -i '/^GO/d' $FILE
sed -i '/^$/d' $FILE

# Turn tsv files into INSERT statements
for file in data/*tsv;
do
    echo "INSERT INTO $file(Id, Name) VALUES" | sed -e "s/data\///" -e "s/\.tsv//" >> $FILE
    cat $file | awk '{print "("$1",\047"$2"\047),"}' >> $FILE
    echo >> $FILE
    
    sed -i '/^$/d' $FILE
    sed -i '$ s/,$//g' $FILE
done

The first part merges all the separate object files into a single one. I filtered out the VersionInfo table. That’s Simple.Migrations’ table to keep track of already-applied migrations.

The second part removes the GO keywords and blank lines.

And the last part turns the TSV files into INSERT statements. It grabs each table name from the file name, removing the base path and the TSV extension. It assumes reference tables only have an id and a name.
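
For example, a hypothetical data/PostStatus.tsv with two tab-separated rows, (1, Draft) and (2, Published), would turn into this INSERT statement,

INSERT INTO PostStatus(Id, Name) VALUES
(1,'Draft'),
(2,'Published')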

With this compact script file, I removed the old migration files except the last one. For the project in the screenshot above, I kept Migration0027. Then, I used all the SQL statements from the dump file in the Up() step of that migration. I had a squash migration after that.
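
The squash migration looked roughly like this simplified sketch. With Simple.Migrations, each migration overrides the Up() and Down() methods,

// A simplified sketch of the squash migration. The real Up() step
// contains all the SQL statements from dump.sql.
using SimpleMigrations;

[Migration(27, "Squash all previous migrations")]
public class Migration0027 : Migration
{
    protected override void Up()
        => Execute(@"/* All the CREATE TABLE, defaults, foreign key,
                       and INSERT statements from dump.sql go here */");

    protected override void Down()
    {
        // Nothing to roll back to: this migration is the new baseline.
    }
}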

Voilà! That’s how I squashed old migrations in one of my client’s projects using schemazen and a Bash script. The idea is to squash our migrations after every stable release of our projects. From the reference article, one commenter said he uses this approach once or twice a year. Another one, after every breaking change.

By the way, I recently got interested in the Unix tools again. Check how to replace keywords in a file name and content with Bash and how to create an ASP.NET Core API project structure with the dotnet CLI.

Happy coding!