Lessons I learned from my ex-coworkers about software engineering

This post is part of my Advent of Code 2022.

For better or worse, we all have something to learn from our bosses and co-workers.

These are three lessons I learned from three of my ex-coworkers and ex-bosses about software engineering, designing, and programming.

I didn’t take the time to thank them when I worked with them. This is my thank you note.

1. Inspire Change

From Edgardo, the most senior of all developers, I learned to inspire change. He didn’t talk too much. But when he did, everybody listened.

He always brought new ideas to improve our development process. Instead of doing things himself, he planted a seed in us: “Hey, what if we do something? Think of a way of achieving something else.”

He was the kind of guy you trusted enough to ask anything, not only about coding. I’d tap his shoulder: “Hey, Edgardo. I have a question about life,” and he’d drop whatever he was doing to listen, answer, and inspire us all.

In emergencies, while everybody panicked, Edgardo was calm, going through log files and running diagnostics.

2. Stand on the shoulders of giants

From Javier, the architect, I learned to stand on the shoulders of giants.

When we ran into issues, he always said “you’re not the first one solving that problem” and “smarter people have already solved that.” He made us look out there first.

Every time I’m tempted to start something from scratch, I start looking at GitHub. Maybe I can stand on somebody else’s shoulders.

Recently, a coworker told me that reading an authorization token from a custom header with ASP.NET Core was impossible. And my first thought was: “we’re not the first ones doing that.” After some Googling, we found we definitely could. Javier was right!
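For the curious, here is roughly what that solution can look like. This is a sketch, not the code from that project: the header name X-Auth-Token, the middleware class, and the validation step are all made-up placeholders.

```csharp
using Microsoft.AspNetCore.Http;

// Sketch: read an authorization token from a custom header in ASP.NET Core.
// "X-Auth-Token" and the validation step are hypothetical placeholders.
public class CustomHeaderTokenMiddleware
{
    private readonly RequestDelegate _next;

    public CustomHeaderTokenMiddleware(RequestDelegate next)
        => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        if (!context.Request.Headers.TryGetValue("X-Auth-Token", out var token)
            || string.IsNullOrEmpty(token))
        {
            context.Response.StatusCode = StatusCodes.Status401Unauthorized;
            return;
        }

        // Validate the token here before letting the request through...
        await _next(context);
    }
}

// Registered in Program.cs with:
// app.UseMiddleware<CustomHeaderTokenMiddleware>();
```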

Also, from Javier, I learned to read other people’s source code. He believed that’s the way to learn from others: by reading their code.

3. Identify your users and their goals

From Pedro, the boss, I learned to keep in mind who our end users are.

More than once, I remember Pedro asking designers to change fonts and increase their size. He said: “you aren’t the one who’s going to use this app. This is for your dad and granddad. This is for oldies.”

Also, from Pedro, I learned to optimize for the most frequent scenario. Once we had to read and validate XML files, Pedro suggested storing the XML documents first and then validating them and continuing with the rest of the processing. Because “90% of the time, those documents are valid.”

Voilà! These are some of the lessons I learned from my past coworkers. What have you learned from your coworkers and bosses? I bet they have something to teach you.

For more career lessons, check things I wished I knew before becoming a software engineer, ten lessons learned after one year of remote work, and things I learned after a failed project.

Happy coding!

Three Postmortem Lessons From a "Failed" Project

Software projects don’t fail because of the tech stack, programming languages, or frameworks.

Sure, choosing the right tech stack is critical for the success of a software project. But software projects fail due to unclear expectations and communication issues. And unclear expectations are themselves a communication issue.

This is the story of a software project killed by unclear expectations and poor communication.

This was a one-month project to integrate a Property Management System (PMS) with a third-party Guest Messaging System. The idea was to sync reservations and guest data to this third-party system, so hotels could send their guests reminders and welcome messages.

Even though our team delivered the project, we made some mistakes and I learned a lesson or two.

1. Minimize Moving Parts

Before starting the project, we worked with an N-tier architecture. Controllers call Services that use Repositories to talk to a database. There’s nothing wrong with that.

But, the new guideline was to “start doing DDD.” That’s not a bad thing per se. The thing was: almost nobody in the team was familiar with DDD.

I had worked with some of the DDD artifacts before. But, we didn’t know what the upper management wanted with “start doing DDD.”

With this decision, a one-month project ended up being behind schedule.

After reading posts and sneaking into GitHub template projects, two or three weeks later, we agreed on the project structure, aggregate and entity names, and an overall approach. We were already late.

For such a small project with a tight schedule, there was no room for experimentation.

“The best tool for a job is the tool you already know.” At that time, the best tool for my team was N-tier architecture.

For my future projects, I will minimize the moving parts.

2. Define a Clear Path

We agreed on reading the “guests” and “reservations” tables inside a background processor to call the third-party APIs. And we started working on it.

But another team member was analyzing how to implement an Event-Driven solution with a message queue.

Our team member didn’t realize that his solution required “touching” some parts of the Reservation lifecycle, with all the development and testing effort that implied.

Although his idea might have been the right solution in theory, we had already chosen the low-risk one. He spent time we could have used on something else.

For my future projects, I will define a clear path and get everybody on board.

3. Don’t Get Distracted. Cross the Finish Line

With a defined solution and everybody working on it, the team lead decided to put the project back on track with more meetings and ceremonies.

We started to estimate with planning poker.

During some of the planning sessions, we joked about putting “in one month” as the completion date for all tickets and stopped doing those meetings.

Why should everyone in the team vote for an estimation on a task somebody else was already working on? We all knew what we needed and what everybody else was doing.

It was time to focus on the goal and not get distracted by unproductive ceremonies or meetings. I don’t mean stop writing unit tests or doing code reviews. Those were the minimum safety procedures for the team.

For my future projects, I will focus on crossing the finish line.

Voilà! These are the postmortem lessons I learned from this project. Although tech choice plays a role in the success of a project, I found that “people and interactions” are way more important than choosing the right libraries and frameworks.

I bet we can find failed projects built with the most state-of-the-art technologies and programming languages.

Like marriages, software projects fail because of unclear expectations and poor communication. This was one of them.

For more career lessons, check my five lessons after five years as a software developer, ten lessons learned after one year of remote work and things I wished I knew before working as a software engineer.

Happy coding!

Six helpful extension methods I use to work with Collections

LINQ is the perfect way to work with collections. It’s declarative and immutable. But, from time to time, I take some extension methods with me to the projects I work with. These are some extension methods to work with collections.

1. Check if a collection is null or empty

These are three methods to check if a collection is null or empty. They’re wrappers around the LINQ Any method.

public static bool IsNullOrEmpty<T>([NotNullWhen(false)] this IEnumerable<T>? collection)
    => collection == null || collection.IsEmpty();

public static bool IsNotNullOrEmpty<T>([NotNullWhen(true)] this IEnumerable<T>? collection)
    => !collection.IsNullOrEmpty();

public static bool IsEmpty<T>(this IEnumerable<T> collection)
    => !collection.Any();

Notice we used the [NotNullWhen] attribute to let the compiler know if the source collection is null. This way, when we turn on the nullable references feature, the compiler can generate more accurate warnings. If we don’t add this attribute, we get some false positives. Like this one,

IEnumerable<Movie>? movies = null;

if (movies.IsNotNullOrEmpty())
{
    movies.First();
    // ^^^^^
    // CS8604: Possible null reference argument for parameter 'source'
    //
    // But we don't want this warning here...
}

2. EmptyIfNull

In the same spirit as DefaultIfEmpty, let’s create a method to return an empty collection if the source collection is null. This way, we can “go with the flow” by nesting this new method with other LINQ methods.

public static IEnumerable<T> EmptyIfNull<T>(this IEnumerable<T>? enumerable)
   => enumerable ?? Enumerable.Empty<T>();

For example, we can write,

someNullableCollection.EmptyIfNull().Select(DoSomething);
//                     ^^^^^

Instead of writing,

someNullableCollection?.Select(DoSomething) ?? Enumerable.Empty<SomeType>();

I found this idea in Passion for Coding’s Null Handling with Extension Methods.

3. Enumerated

The LINQ Select() method has an overload to map elements using their position in the source collection. We can use it like this,

movies.Select((movie, position) => DoSomething(movie, position));

Inspired by Swift’s enumerated() method, we can write a wrapper around this Select() overload. Like this,

public static IEnumerable<(int Index, TResult Item)> Enumerated<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, TResult> selector)
    => source.Select((t, index) => (index, selector(t)));

public static IEnumerable<(int Index, TSource Item)> Enumerated<TSource>(this IEnumerable<TSource> source)
    => source.Enumerated(e => e);

For example, we can write,

foreach (var (index, movie) in movies.Enumerated())
{
    Console.WriteLine($"[{index}]: {movie.Name}");
}

.NET 9 introduced the Index method that works like our Enumerated(). We don’t need to roll our own method anymore in recent versions of .NET.
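With it, the previous loop becomes:

```csharp
// .NET 9+: Enumerable.Index pairs each element with its position
foreach (var (index, movie) in movies.Index())
{
    Console.WriteLine($"[{index}]: {movie.Name}");
}
```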

Voilà! These are some of my favorite extension methods to work with collections. Some of them are workarounds to avoid the NullReferenceException when working with collections. What extension methods do you use often?

If you want to learn more about LINQ, read my Quick Guide to LINQ.

Want to write more expressive code for collections? Join my course, Getting Started with LINQ on Udemy and learn everything you need to know to start working productively with LINQ—in less than 2 hours.

Happy coding!

How to create ASP.NET Core Api project structure with dotnet cli

While looking at C# Advent 2022, I found the Humble Toolsmith page and its post Create Test Solutions by Scripting the Dotnet CLI.

That post reminded me I have my own script to create the folder structure for ASP.NET Core API projects. Currently, I work with a client where I engage in short (3-5 month) projects. Every now and then, I create new projects. And this is the type of task we don’t do often and always forget how to do. Why not script it?

How to create project structure with dotnet cli

This is the script I use to create the source and test projects with the references between them for an ASP.NET Core API project:

# Change to suit your own needs
Prefix=Acme.CoolProject
#      ^^^^^
# Change it to use your project name prefix

# 1. Create solution
dotnet new sln --name $Prefix.Api

# 2. Create src projects
# Create class libraries
for name in 'Data' 'Domain' 'Infrastructure' 'Messages'
do
# Optionally:
# dotnet new classlib -o $Prefix.$name/src -n $name
dotnet new classlib -o src/$Prefix.$name
dotnet sln add src/$Prefix.$name/$Prefix.$name.csproj --in-root
done

# Create Console projects
dotnet new console -o src/$Prefix.Data.Migrator
dotnet sln add src/$Prefix.Data.Migrator/$Prefix.Data.Migrator.csproj --in-root

# Create Api projects
dotnet new webapi -o src/$Prefix.Api
dotnet sln add src/$Prefix.Api/$Prefix.Api.csproj --in-root

# Api depends on Data, Infrastructure, and Messages
for dependsOn in 'Data' 'Infrastructure' 'Messages'
do
dotnet add src/$Prefix.Api/$Prefix.Api.csproj reference src/$Prefix.$dependsOn/$Prefix.$dependsOn.csproj
done

# Data depends on Domain and Infrastructure
for dependsOn in 'Domain' 'Infrastructure'
do
dotnet add src/$Prefix.Data/$Prefix.Data.csproj reference src/$Prefix.$dependsOn/$Prefix.$dependsOn.csproj
done

# Data.Migrator depends on Data
dotnet add src/$Prefix.Data.Migrator/$Prefix.Data.Migrator.csproj reference src/$Prefix.Data/$Prefix.Data.csproj

# Infrastructure depends on Domain and Messages
for dependsOn in 'Domain' 'Messages'
do
dotnet add src/$Prefix.Infrastructure/$Prefix.Infrastructure.csproj reference src/$Prefix.$dependsOn/$Prefix.$dependsOn.csproj
done

# 3. Create test projects
for name in 'Api' 'Data' 'Domain' 'Infrastructure'
do
dotnet new mstest -o tests/$Prefix.$name.Tests
dotnet sln add tests/$Prefix.$name.Tests/$Prefix.$name.Tests.csproj -s Tests
dotnet add tests/$Prefix.$name.Tests/$Prefix.$name.Tests.csproj reference src/$Prefix.$name/$Prefix.$name.csproj
done

# 4. Copy template files
# .gitignore, .editorconfig, .dockerignore
# Copy dotfiles
for file in $(ls -I "*.cs" ~/Documents/_Projects/_FolderStructure/Templates/)
do
cp ~/Documents/_Projects/_FolderStructure/Templates/$file .
done

# 5. Cleanup
find . -name "WeatherForecastController.cs" -type f -delete
find . -name "WeatherForecast.cs" -type f -delete

find . -name "Class1.cs" -type f -delete
find . -name "UnitTest1.cs" -type f -delete

When I need to create a new project, I only change the Prefix at the top of the file.

Notice this script copies some template files (.gitignore, .editorconfig, .dockerignore) from a shared location.

This script creates a project structure like this:

ASP.NET Core API project structure inside Visual Studio

And a folder structure like this:

│   Acme.CoolProject.Api.sln
├───src
│   ├───Acme.CoolProject.Api
│   ├───Acme.CoolProject.Data
│   ├───Acme.CoolProject.Data.Migrator
│   ├───Acme.CoolProject.Domain
│   ├───Acme.CoolProject.Infrastructure
│   └───Acme.CoolProject.Messages
└───tests
    ├───Acme.CoolProject.Api.Tests
    ├───Acme.CoolProject.Data.Tests
    ├───Acme.CoolProject.Domain.Tests
    └───Acme.CoolProject.Infrastructure.Tests

In the Messages project, I put input and output view models. And, when doing CQRS, I put commands and queries. In the Migrator project, I put the Simple.Migrations runner and migrations to update the database schema.

With small tweaks, we can change the folder structure to have the component folders on top and the /src and /test folders inside them. Like,

│   Acme.CoolProject.Api.sln
├───Api
    ├───src
    └───tests

We can even create folders and csproj files with shorter names by passing the -n flag and a name to the dotnet new command.

How to update the csproj files with Powershell

Then, to update the csproj files, like turning nullable warnings into errors or adding a root namespace, instead of doing it by hand, I tweak this PowerShell script:

$projects = Get-ChildItem -Filter *.csproj -Recurse -Exclude *Tests*.csproj
    
$projects | foreach { 
    try
    {
        $path = $_.FullName;

        $proj = [xml](Get-Content $path);
        
        $propertyGroup = $proj.Project.PropertyGroup  | where { -not [String]::IsNullOrWhiteSpace($_.TargetFramework) };

        $shouldSave = $false
        if($propertyGroup.RootNamespace -eq $null)
        {
            $RootNamespace = $propertyGroup.ParentNode.ParentNode.CreateElement('RootNamespace');
            $propertyGroup.AppendChild($RootNamespace) | Out-Null;
            $propertyGroup.RootNamespace = "Acme.CoolProject";
            $shouldSave = $true
        }
        
        if($shouldSave)
        {
            $proj.Save($path);
            Write-Host "RootNamespace added to $path"
        }
    }
    catch
    {
        Write-Host $path ([System.Environment]::NewLine)
        $_
    }
}

Voilà! That’s how I create the folder and project structure for one of my clients. This is another script that saved my day! Kudos to Humble Toolsmith for inspiring me to write this one.

To read more content, check How to quickly rename projects inside a Visual Studio solution and How to rename a keyword in file contents and names.

Happy coding!

How to write good unit tests: Use simple test values

These days I had to review some code that had one method to merge dictionaries. This is one of the suggestions I gave during that review to write good unit tests.

To write good unit tests, write the Arrange part of tests using the simplest test values that exercise the scenario under test. Avoid building large object graphs and using magic numbers in the Arrange part of tests.

Here are the tests I reviewed

These are two of the unit tests I reviewed. They test the Merge() method.

using MyProject;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.Collections.Generic;
using System.Linq;

namespace MyProject.Tests;

[TestClass]
public class DictionaryExtensionsTests
{
    [TestMethod]
    public void Merge_NoDuplicates_DoesNotMergeNullAndEmptyOnes()
    {
        var me = new Dictionary<int, int>
        {
            { 1, 10 }, { 2, 20 }, { 3, 30 }
        };
        var empty = new Dictionary<int, int> { };
        var one = new Dictionary<int, int>
        {
            { 4, 40 }
        };
        var two = new Dictionary<int, int>
        {
            { 5, 50 }, { 6, 60 }, { 7, 70 }, { 8, 80}, { 9, 90 }
        };
        var three = new Dictionary<int, int>
        {
            { 10, 100 }, { 11, 110 }
        };
        var four = new Dictionary<int, int>
        {
            { 12, 120 }, { 13, 130 }, { 14, 140 }, { 15, 150 },
            { 16, 160 }, { 17, 170 }, { 18, 180 }, { 19, 190 }
        };

        var merged = me.Merge(one, empty, null, two, null, three, null, null, four, empty);
        //              ^^^^^

        Assert.AreEqual(19, merged.Keys.Count);
        var keyRange = Enumerable.Range(1, merged.Keys.Count);
        foreach (var entry in merged)
        {
            Assert.IsTrue(keyRange.Contains(entry.Key));
            Assert.AreEqual(entry.Key * 10, entry.Value);
        }
    }

    [TestMethod]
    public void Merge_DuplicateKeys_ReturnNoDuplicates()
    {
        var me = new Dictionary<int, int>
        {
            { 1, 10 }, { 2, 20 }, { 3, 30 }, { 4, 40 },
            { 5, 50 }, { 6, 60 }, { 7, 70 }
        };
        var one = new Dictionary<int, int>
        {
            { 1, 1 }, { 2, 2 }, { 8, 80 }
        };
        var two = new Dictionary<int, int>
        {
            { 3, 3 }, { 9, 90 }
        };
        var three = new Dictionary<int, int>
        {
            { 4, 4 }, { 5, 5 }, { 6, 6 }, { 7, 7 }, { 10, 100 }
        };

        var merged = me.Merge(one, two, three);
        //              ^^^^^

        Assert.AreEqual(10, merged.Keys.Count);
        var keyRange = Enumerable.Range(1, merged.Keys.Count);
        foreach (var entry in merged)
        {
            Assert.IsTrue(keyRange.Contains(entry.Key));
            Assert.AreEqual(entry.Key * 10, entry.Value);
        }
    }
}

Yes, those are the real tests I had to review. I slightly changed the namespaces and the test names.
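The post doesn’t show the Merge() method itself. Judging by how the tests use it, it behaves something like this sketch, which is my reconstruction, not the code I reviewed:

```csharp
public static class DictionaryExtensions
{
    // A reconstruction of the method under test: merge 'source' with any
    // number of other dictionaries, skipping null ones and keeping the
    // value already present when a key appears more than once.
    public static Dictionary<TKey, TValue> Merge<TKey, TValue>(
        this IDictionary<TKey, TValue> source,
        params IDictionary<TKey, TValue>?[] others)
        where TKey : notnull
    {
        var result = new Dictionary<TKey, TValue>(source);
        foreach (var other in others)
        {
            if (other == null)
            {
                continue;
            }

            foreach (var (key, value) in other)
            {
                result.TryAdd(key, value);
            }
        }

        return result;
    }
}
```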

What’s wrong?

Let’s take a closer look at the first test. Do we need six dictionaries to test the Merge() method? No! And do we need 19 items? No! We can still cover the same scenario with only two single-item dictionaries without duplicate keys.

And let’s write separate tests to deal with edge cases. Let’s write one test to work with null and another one with an empty dictionary. Again two dictionaries will be enough for each test.

Having too many dictionaries with too many items made us write that funny foreach with a funny multiplication inside. That’s why some of the values are multiplied by 10, and others aren’t. We don’t need that with a simpler scenario.

Unit tests should only have assignments without branching or looping logic.

Looking at the second test, we noticed it followed the same pattern as the first one. Too many items and a weird foreach with a multiplication inside.

Let's embrace simplicity. Photo by Samantha Gades on Unsplash

Write tests using simple test values

Let’s write our tests using simple test values to prepare our scenario under test.

[TestMethod]
public void Merge_NoDuplicates_DoesNotMergeNullAndEmptyOnes()
{
    var one = new Dictionary<int, int>
    {
        { 1, 10 }
    };
    var two = new Dictionary<int, int>
    {
        { 2, 20 }
    };

    var merged = one.Merge(two);
    //               ^^^^^

    Assert.AreEqual(2, merged.Keys.Count);

    Assert.IsTrue(merged.ContainsKey(1));
    Assert.IsTrue(merged.ContainsKey(2));
}

// One test to Merge a dictionary with an empty one
// Another test to Merge a dictionary with a null one

[TestMethod]
public void Merge_DuplicateKeys_ReturnNoDuplicates()
{
    var duplicateKey = 1;
    //  ^^^^^
    var one = new Dictionary<int, int>
    {
        { duplicateKey, 10 }, { 2, 20 }
        //  ^^^^^
    };
    var two = new Dictionary<int, int>
    {
        { duplicateKey, 10 }, { 3, 30 }
        //  ^^^^^
    };
    var merged = one.Merge(two);
    //               ^^^^^

    Assert.AreEqual(3, merged.Keys.Count);

    Assert.IsTrue(merged.ContainsKey(duplicateKey));
    Assert.IsTrue(merged.ContainsKey(2));
    Assert.IsTrue(merged.ContainsKey(3));
}

Notice this time, we boiled down the Arrange part of the first test to only two dictionaries with one item each, without duplicates.

And for the second test, the one for duplicates, we wrote a duplicateKey variable and used it as the key in both dictionaries to make the test scenario obvious. This way, after reading the test name, we don’t have to decode where the duplicate keys are.

Since we wrote simple tests, we could remove the foreach in the Assert parts and the funny multiplications.

The tests for the null and empty cases are exercises left to the reader. They’re not difficult to write.
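If you want to see them anyway, here’s one way they could look, assuming Merge() ignores null and empty dictionaries as the original tests suggest:

```csharp
[TestMethod]
public void Merge_EmptyDictionary_ReturnsOnlySourceItems()
{
    var one = new Dictionary<int, int>
    {
        { 1, 10 }
    };
    var empty = new Dictionary<int, int>();

    var merged = one.Merge(empty);

    Assert.AreEqual(1, merged.Keys.Count);
    Assert.IsTrue(merged.ContainsKey(1));
}

[TestMethod]
public void Merge_NullDictionary_ReturnsOnlySourceItems()
{
    var one = new Dictionary<int, int>
    {
        { 1, 10 }
    };
    Dictionary<int, int>? other = null;

    var merged = one.Merge(other);

    Assert.AreEqual(1, merged.Keys.Count);
    Assert.IsTrue(merged.ContainsKey(1));
}
```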

Voilà! That’s another tip to write good unit tests. Let’s strive to have tests easier to follow with simple test values. Here we used dictionaries, but we can follow this tip when writing integration tests for the database. Often to prepare our test data, we insert multiple records when only one or two are enough to prove our point.

Also, I wrote other posts about how to write good unit tests. One to reduce noisy tests and use explicit test values and another one to write a failing test first. Don’t miss my Unit Testing 101 series where I cover more subjects like this one.

Happy testing!