To Value Object or Not To: How I choose Value Objects

This post is part of my Advent of Code 2022.

Today I reviewed a pull request and had a conversation about when to use Value Objects instead of primitive values. This is the code that started the conversation and my rationale to promote a primitive value to a Value Object.

Prefer Value Objects to encapsulate validations or custom methods on a primitive value. Otherwise, if a primitive value doesn’t have a meaningful “business” sense and is only passed around, consider using the primitive value with a good name for simplicity.

In case you’re not familiar with Domain-Driven Design and its artifacts: a Value Object represents a concept that doesn’t have an “identifier” in a business domain. Value Objects are immutable and compared by value.

Value Objects represent elements of “broader” concepts. For example, in a Reservation Management System, we can use a Value Object to represent the payment method of a Reservation.
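To make the “compared by value” part concrete, here’s a minimal sketch using a C# record. The PaymentMethod name and its properties are made up for illustration:

```csharp
var cardA = new PaymentMethod("CreditCard", "4242");
var cardB = new PaymentMethod("CreditCard", "4242");

// Both instances hold the same values, so they compare as equal
Console.WriteLine(cardA == cardB); // True

// A hypothetical payment method for a Reservation, modeled as a C# record.
// Records are immutable by default and compared by value, not by reference.
public record PaymentMethod(string Type, string Last4Digits);
```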

TimeStamp vs DateTime

This is the piece of code that triggered my comment during the code review.

public class DeliveryNotification : ValueObject
{
    public Recipient Recipient { get; init; }
    
    public DeliveryStatus Status { get; init; }
    
    public TimeStamp TimeStamp { get; init; }
    //     ^^^^^^

    protected override IEnumerable<object?> GetEqualityComponents()
    {
        yield return Recipient;
        yield return Status;
        yield return TimeStamp;
    }
}

public class TimeStamp : ValueObject
{
    public DateTime Value { get; }

    private TimeStamp(DateTime value)
    {
        Value = value;
    }
    
    public static TimeStamp Create()
    {
        return new TimeStamp(SystemClock.Now);
    }

    protected override IEnumerable<object> GetEqualityComponents()
    {
        yield return Value;
    }
}

public enum DeliveryStatus
{
    Created,
    Sent,
    Opened,
    Failed
}

We wanted to record when an email is sent, opened, and clicked. We relied on a third-party Email Provider to notify our system about these email events. The DeliveryNotification has an email address, status, and timestamp.

The ValueObject base class is Vladimir Khorikov’s ValueObject implementation.

Notice the TimeStamp class. It’s only a wrapper around the DateTime class. Mmmm…

Sand clock
Photo by Alexandar Todov on Unsplash

Promote Primitive Values to Value Objects

I’d dare to say that using a TimeStamp instead of a simple DateTime in the DeliveryNotification class was overkill. I guess “when we have a hammer, everything looks like a finger.”

This is my rationale to choose between value objects and primitive values:

  1. If we need to enforce a domain rule or perform a business operation on a primitive value, let’s use a Value Object.
  2. If we only pass a primitive value around and it represents a concept in the domain language, let’s wrap it in a record to give it a meaningful name.
  3. Otherwise, let’s stick to the plain primitive values.
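As a sketch of the first point, here’s a hypothetical EmailAddress Value Object that enforces a domain rule on creation. The validation is deliberately naive, and the ValueObject base class is the same one used above:

```csharp
// Hypothetical example: an EmailAddress that can't be created
// in an invalid state, so the rule lives in one place.
public class EmailAddress : ValueObject
{
    public string Value { get; }

    private EmailAddress(string value) => Value = value;

    public static EmailAddress Create(string value)
    {
        // Deliberately naive check, just to show where validation goes
        if (string.IsNullOrWhiteSpace(value) || !value.Contains('@'))
            throw new ArgumentException("Invalid email address", nameof(value));

        return new EmailAddress(value);
    }

    protected override IEnumerable<object> GetEqualityComponents()
    {
        yield return Value;
    }
}
```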

In our TimeStamp class, apart from Create(), we didn’t have any other methods. We might validate that the inner date is in this century. But that won’t be a problem. I don’t think that code will live that long.

Also, there are cleaner ways of writing tests that rely on DateTime than using a static SystemClock. It would be a better idea if we could overwrite the SystemClock’s internal date from our tests.
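For example, one common alternative is a small clock abstraction instead of a static class. The IClock interface and these names are hypothetical:

```csharp
// A hypothetical clock abstraction: production code depends on IClock,
// and tests swap in a fake with a fixed date.
public interface IClock
{
    DateTime Now { get; }
}

public class SystemClock : IClock
{
    public DateTime Now => DateTime.UtcNow;
}

// In tests, control "now" explicitly
public class FakeClock : IClock
{
    public DateTime Now { get; set; }
}
```

Production code registers SystemClock in the dependency container, while tests pass a FakeClock with a known date.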

I’d take a simpler route and use a plain DateTime value. I don’t think there’s a business case for TimeStamp here.

public class DeliveryNotification : ValueObject
{
    public Recipient Recipient { get; init; }
    
    public DeliveryStatus Status { get; init; }
    
    public DateTime TimeStamp { get; init; }
    //     ^^^^^^

    protected override IEnumerable<object?> GetEqualityComponents()
    {
        yield return Recipient;
        yield return Status;
        yield return TimeStamp;
    }
}

// Or alternatively, to use the same domain language
//
// public record TimeStamp(DateTime Value);

public enum DeliveryStatus
{
    Created,
    Sent,
    Opened,
    Failed
}

If business analysts or stakeholders in the “email sending” domain use the word “timestamp,” for the sake of a ubiquitous language, we can add a simple record to wrap the date. Like record TimeStamp(DateTime Value).

Voilà! That’s a practical way to decide between Value Objects and primitive values. For me, the key is asking if there’s a meaningful domain concept behind the primitive value. Otherwise, we would end up with too many Value Objects or with primitive obsession.

If you want to read more about Domain-Driven Design, check my takeaways from these books Hands-on Domain-Driven Design with .NET Core and Domain Modeling Made Functional.

Happy coding!

Dump and Load to squash old migrations

This post is part of my Advent of Code 2022.

Recently, I stumbled upon the article Get Rid of Your Old Database Migrations. The author shows how Clojure, Ruby, and Django use the “Dump and Load” approach to compact or squash old migrations. This is how I implemented the “Dump and Load” approach in one of my client’s projects.

1. Export database objects and reference data with schemazen

In one of my client’s projects, we had so many migration files that we started to group them inside folders named after the year and month. Squashing migrations sounded like a good idea here.

For example, for a three-month project, we wrote 27 migration files. This is the Migrator project,

List of migration files in one of my projects
27 migration files for a short-term project

For those projects, we use Simple.Migrations to apply migration files and a bunch of custom C# extension methods to write the Up() and Down() steps. Since we don’t use an all-batteries-included migration framework, I needed to generate the dump of all database objects.

I found schemazen on GitHub, a CLI tool to “script and create SQL Server objects quickly.”

This is how to script all objects and export data from reference tables with schemazen,

dotnet schemazen script --server (localdb)\MSSQLLocalDB
    --database <YourDatabaseName>
    --dataTablesPattern="(.*)(Status|Type)$"
    --scriptDir C:/someDir

Notice I used the --dataTablesPattern option with a regular expression to export only the data from the reference tables. In this project, we named our reference tables with the suffixes “Status” or “Type.” For example, PostStatus or ReceiptType.

I could have exported the objects directly from SQL Server. But those script files contain a lot of noise in the form of default options. Schemazen does it cleanly.

Schemazen generates one folder per object type and one file per object. And it exports data in TSV format. Digging through its source code, I didn’t find an option to export INSERT statements instead, though.

Schemazen generates a folder structure like this,

 |-data
 |-defaults
 |-foreign_keys
 |-tables
 props.sql
 schemas.sql

After this first step, I had the database objects. But I still needed to write the actual migration file.

Piles of used cars and trucks waiting to be recycled
Photo by Randy Laybourne on Unsplash

2. Process schemazen exported files

To write the squash migration file, I wanted to have all scripts in a single file and turn the TSV files with the exported data into INSERT statements.

I could write a C# script file, but I wanted to stretch my Bash/Unix muscles. After some Googling, I came up with this,

# It grabs the output from schemazen and compacts all dump files into a single one
FILE=dump.sql

# Merge all files into a single one
for folder in 'tables/' 'defaults/' 'foreign_keys/'
do
    find $folder -type f \( -name '*.sql' ! -name 'VersionInfo.sql' \) | while read f ;
    do
        cat $f >> $FILE;
    done
done

# Remove GO keywords and blank lines
sed -i '/^GO/d' $FILE
sed -i '/^$/d' $FILE

# Turn tsv files into INSERT statements
for file in data/*tsv;
do
    echo "INSERT INTO $file(Id, Name) VALUES" | sed -e "s/data\///" -e "s/\.tsv//" >> $FILE
    cat $file | awk '{print "("$1",\047"$2"\047),"}' >> $FILE
    echo >> $FILE
    
    sed -i '/^$/d' $FILE
    sed -i '$ s/,$//g' $FILE
done

The first part merges all the separate object files into a single one. I filtered out the VersionInfo table. That’s Simple.Migrations’ table to keep track of already applied migrations.

The second part removes the GO keywords and blank lines.

And the last part turns the TSV files into INSERT statements. It grabs the table name from each file name by removing the base path and the TSV extension. It assumes reference tables only have an id and a name.
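To see what that awk command produces, here’s a quick demo with a hypothetical PostStatus.tsv file. (\047 is the octal escape for a single quote inside an awk string.)

```shell
# Hypothetical reference data: an Id and a Name per row, tab-separated
printf '1\tDraft\n2\tPublished\n' > PostStatus.tsv

# Same awk command as in the script: wrap each row as a VALUES tuple
awk '{print "("$1",\047"$2"\047),"}' PostStatus.tsv
# prints:
# (1,'Draft'),
# (2,'Published'),
```

The trailing comma on the last tuple is what the final sed command in the script removes.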

With this compact script file, I removed the old migration files except the last one. For the project in the screenshot above, I kept Migration0027. Then, I used all the SQL statements from the dump file in the Up() step of that migration. After that, I had a squash migration.
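The squash migration ends up looking roughly like this. This is only a sketch, assuming Simple.Migrations’ Migration base class and its Execute() helper; the SQL string is a placeholder for the contents of dump.sql:

```csharp
using SimpleMigrations;

// A sketch of a squash migration, assuming Simple.Migrations' API.
// The real Up() contains all the statements from the generated dump.sql.
[Migration(27, "Squash all previous migrations")]
public class Migration0027 : Migration
{
    protected override void Up()
    {
        Execute(@"
            -- Contents of dump.sql go here:
            -- CREATE TABLE statements, defaults, foreign keys,
            -- and INSERTs for the reference tables
        ");
    }

    protected override void Down()
    {
        // A squash migration isn't meant to be rolled back
        throw new NotSupportedException();
    }
}
```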

Voilà! That’s how I squashed old migrations in one of my client’s projects using schemazen and a Bash script. The idea is to squash our migrations after every stable release of our projects. From the reference article, one commenter said he follows this approach once or twice a year. Another one, after every breaking change.

By the way, recently, I got interested in the Unix tools again. Check how to replace keywords in a file name and content with Bash and how to create ASP.NET Core Api project structure with dotnet cli.

Happy coding!

Lessons I learned as a code reviewer

This post is part of my Advent of Code 2022.

In the past month, for one of my clients, I became a default reviewer. I had the chance to check everybody else’s code and advocate for change. After dozens of Pull Requests (PRs) reviewed, these are the lessons I learned.

I’ve noticed that most of the comments fall into two categories. I will call them “babysitting” and “surprising solution.”

1. Babysitting

In these projects, before opening a PR, we have to cover all major code changes with tests, have zero analyzer warnings, and format all C# code. But try to guess what the most common comments are. Things like “please write tests to cover this method,” “address analyzers warnings,” and “run CodeMaid to format this code.”

As a reviewee, before opening a PR, wear the reviewer hat and review your own code. It’s frustrating when the code review process becomes a slow and expensive linting process.

To have a smooth code review, let’s automate some of the things checked during the review process. For example, let’s clean and format our files with a Git hook or Visual Studio extension. And let’s turn all warnings into compilation errors.
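For example, turning warnings into errors is a one-property change in the .csproj file. A minimal sketch, to be adjusted to each project’s conventions:

```xml
<!-- In the .csproj: fail the build on any compiler or analyzer warning -->
<PropertyGroup>
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
</PropertyGroup>
```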

For example, with this idea of automation in mind, I ended up writing a Git pre-commit hook to format sql files.

2. Surprising solution

Apart from making developers follow conventions, the next most common comments are clarification comments. Things like “why did you do that? Possibly, this is a simpler way.” Often, it’s easy when there’s a clear and better solution. Like when a developer used semaphores to prevent concurrent access to dictionaries. We have concurrent collections for that.
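For reference, a minimal sketch of that alternative with ConcurrentDictionary. The cache and key names are made up:

```csharp
using System.Collections.Concurrent;

// Instead of guarding a Dictionary with semaphores or locks,
// ConcurrentDictionary handles concurrent access internally.
var cache = new ConcurrentDictionary<string, int>();

// GetOrAdd is atomic per key: no explicit locking needed
var visits = cache.GetOrAdd("home-page", _ => 0);

// AddOrUpdate atomically increments the existing entry
cache.AddOrUpdate("home-page", 1, (_, count) => count + 1);

Console.WriteLine(cache["home-page"]); // 1
```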

As a reviewee, use the PR description and comments to give enough context to avoid unnecessary discussion. The most frustrating PRs are the ones with only a ticket number in their title. Show the entry point of your changes, tell why you chose a particular design, and signal places where you aren’t sure if there’s a better way.

Voilà! These are some of the lessons I learned after being a reviewer. The next time you open a PR, review your own code first and give enough context to your reviewers.

But the one thing that improves code reviews the most is short PRs. PRs everyone could review in 10 or 15 minutes without too much discussion. As a reviewer, I wouldn’t mind reviewing multiple short PRs in a working session rather than one single long PR that exhausts all my mental energy.

Also, as a reviewer, I learned to stop using leading or tricky questions. And I taught reviewees to use simple test values to write good unit tests.

If you’re new to code reviews, check these Tips for better code reviews.

Happy coding!

Lessons I learned from my ex-coworkers about software engineering

This post is part of my Advent of Code 2022.

For better or worse, we all have something to learn from our bosses and co-workers.

These are three lessons I learned from three of my ex-coworkers and ex-bosses about software engineering, designing, and programming.

I didn’t take the time to thank them when I worked with them. This is my thank you note.

1. Inspire Change

From Edgardo, the most senior of all developers, I learned to inspire change. He didn’t talk too much. But when he did, everybody listened.

He always brought new ideas to improve our development process. Instead of doing things himself, he planted a seed in us: “Hey, what if we do something? Think of a way of achieving something else.”

He was the kind of guy who inspired the trust to ask him anything, not only about coding. I’d tap his shoulder: “Hey, Edgardo. I have a question about life,” and he’d drop whatever he was doing to listen, answer, and inspire us all.

In emergencies, while everybody panicked, Edgardo was calm, going through log files and running diagnostics.

2. Stand on the shoulders of giants

From Javier, the architect, I learned to stand on the shoulders of giants.

When we ran into issues, he always said “you’re not the first one solving that problem” and “smarter people have already solved that.” He made us look out there first.

Every time I’m tempted to start something from scratch, I start looking at GitHub. Maybe I can stand on somebody else’s shoulders.

Recently, a coworker told me that reading an authorization token from a custom header with ASP.NET Core was impossible. And my first thought was: “we’re not the first ones doing that.” After some Googling, we found we definitely can do that. Javier was right!
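A sketch of one way to do it, assuming the app uses ASP.NET Core’s JWT bearer authentication; the “X-Auth-Token” header name is made up:

```csharp
using Microsoft.AspNetCore.Authentication.JwtBearer;

var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.Events = new JwtBearerEvents
        {
            // Read the token from a custom header instead of "Authorization"
            OnMessageReceived = context =>
            {
                // "X-Auth-Token" is a hypothetical header name
                if (context.Request.Headers.TryGetValue("X-Auth-Token", out var token))
                {
                    context.Token = token;
                }
                return Task.CompletedTask;
            }
        };
    });
```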

Also, from Javier, I learned to read other people’s source code. He believed that’s the way of learning from others: by looking at their code.

3. Identify your users and their goals

From Pedro, the boss, I learned to keep in mind who our end users are.

More than once, I remember Pedro asking designers to change fonts and increase their size. He said: “you aren’t the one who’s going to use this app. This is for your dad and granddad. This is for oldies.”

Also, from Pedro, I learned to optimize for the most frequent scenario. Once we had to read and validate XML files, Pedro suggested storing the XML documents first and then validating them and continuing with the rest of the processing. Because “90% of the time, those documents are valid.”

Voilà! These are some of the lessons I learned from some of my past coworkers. What have you learned from your coworkers and bosses? I bet they have something to teach you.

For more career lessons, check things I wished I knew before becoming a software engineer, ten lessons learned after one year of remote work, and things I learned after a failed project.

Happy coding!

Three Postmortem Lessons From a "Failed" Project

This post is part of my Advent of Code 2022.

Software projects don’t fail because of the tech stack, programming languages, or frameworks.

Sure, choosing the right tech stack is critical for the success of a software project. But software projects fail due to unclear expectations and communication issues. Even unclear expectations are a communication issue too.

This is the story of a failed software project, killed by unclear expectations and poor communication.

This was a one-month project to integrate a Property Management System (PMS) with a third-party Guest Messaging System. The idea was to sync reservations and guest data to this third-party system, so hotels could send their guests reminders and Welcome messages.

Even though our team delivered the project, we made some mistakes and I learned a lesson or two.

1. Minimize Moving Parts

Before starting the project, we worked with an N-tier architecture. Controllers call Services that use Repositories to talk to a database. There’s nothing wrong with that.

But, the new guideline was to “start doing DDD.” That’s not a bad thing per se. The thing was: almost nobody in the team was familiar with DDD.

I had worked with some of the DDD artifacts before. But, we didn’t know what the upper management wanted with “start doing DDD.”

With this decision, a one-month project ended up being behind schedule.

After reading posts and sneaking into GitHub template projects, two or three weeks later, we agreed on the project structure, aggregate and entity names, and an overall approach. We were already late.

For such a small project with a tight schedule, there was no room for experimentation.

“The best tool for a job is the tool you already know.” At that time, the best tool for my team was N-tier architecture.

For my future projects, I will minimize the moving parts.

2. Define a Clear Path

We agreed on reading the “guests” and “reservations” tables inside a background processor to call the third-party APIs. And we started working on it.

But another team member was analyzing how to implement an Event-Driven solution with a message queue.

Our team member didn’t realize that his solution required “touching” some parts of the Reservation lifecycle, with all the development and testing effort that implied.

Although his idea might have been the right solution in theory, we had already chosen the low-risk solution. He wasted time we could have used on something else.

For my future projects, I will define a clear path and have everybody on-boarded.

3. Don’t Get Distracted. Cross the Finish Line

With a defined solution and everybody working on it, the team lead decided to put the project back on track with more meetings and ceremonies.

We started to estimate with poker planning.

During some of the planning sessions, we joked about putting “in one month” as the completion date for all tickets and stopped doing those meetings.

Why should everyone in the team vote for an estimation on a task somebody else was already working on? We all knew what we needed and what everybody else was doing.

It was time to focus on the goal and not get distracted by unproductive ceremonies or meetings. I don’t mean stop writing unit tests or doing code reviews. Those were the minimum safety procedures for the team.

For my future projects, I will focus on crossing the finish line.

Voilà! These are the postmortem lessons I learned from this project. Although tech choice plays a role in the success of a project, I found that “people and interactions” are way more important than choosing the right libraries and frameworks.

I bet we can find failed projects using the most state-of-the-art technologies or programming languages.

Like marriages, software projects fail because of unclear expectations and poor communication. This was one of them.

For more career lessons, check my five lessons after five years as a software developer, ten lessons learned after one year of remote work and things I wished I knew before working as a software engineer.

Happy coding!