How to write good unit tests: Use simple test values

This post is part of my Advent of Code 2022.

These days, I had to review some code that had a method to merge dictionaries. This is one of the suggestions I gave during that review to write good unit tests.

To write good unit tests, write the Arrange part of tests using the simplest test values that exercise the scenario under test. Avoid building large object graphs and using magic numbers in the Arrange part of tests.

Here are the tests I reviewed

These are two of the unit tests I reviewed. They test the Merge() method.

using MyProject;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.Collections.Generic;
using System.Linq;

namespace MyProject.Tests;

[TestClass]
public class DictionaryExtensionsTests
{
    [TestMethod]
    public void Merge_NoDuplicates_DoesNotMergeNullAndEmptyOnes()
    {
        var me = new Dictionary<int, int>
        {
            { 1, 10 }, { 2, 20 }, { 3, 30 }
        };
        var empty = new Dictionary<int, int> { };
        var one = new Dictionary<int, int>
        {
            { 4, 40 }
        };
        var two = new Dictionary<int, int>
        {
            { 5, 50 }, { 6, 60 }, { 7, 70 }, { 8, 80 }, { 9, 90 }
        };
        var three = new Dictionary<int, int>
        {
            { 10, 100 }, { 11, 110 }
        };
        var four = new Dictionary<int, int>
        {
            { 12, 120 }, { 13, 130 }, { 14, 140 }, { 15, 150 },
            { 16, 160 }, { 17, 170 }, { 18, 180 }, { 19, 190 }
        };

        var merged = me.Merge(one, empty, null, two, null, three, null, null, four, empty);
        //              ^^^^^

        Assert.AreEqual(19, merged.Keys.Count);
        var keyRange = Enumerable.Range(1, merged.Keys.Count);
        foreach (var entry in merged)
        {
            Assert.IsTrue(keyRange.Contains(entry.Key));
            Assert.AreEqual(entry.Key * 10, entry.Value);
        }
    }

    [TestMethod]
    public void Merge_DuplicateKeys_ReturnNoDuplicates()
    {
        var me = new Dictionary<int, int>
        {
            { 1, 10 }, { 2, 20 }, { 3, 30 }, { 4, 40 },
            { 5, 50 }, { 6, 60 }, { 7, 70 }
        };
        var one = new Dictionary<int, int>
        {
            { 1, 1 }, { 2, 2 }, { 8, 80 }
        };
        var two = new Dictionary<int, int>
        {
            { 3, 3 }, { 9, 90 }
        };
        var three = new Dictionary<int, int>
        {
            { 4, 4 }, { 5, 5 }, { 6, 6 }, { 7, 7 }, { 10, 100 }
        };

        var merged = me.Merge(one, two, three);
        //              ^^^^^

        Assert.AreEqual(10, merged.Keys.Count);
        var keyRange = Enumerable.Range(1, merged.Keys.Count);
        foreach (var entry in merged)
        {
            Assert.IsTrue(keyRange.Contains(entry.Key));
            Assert.AreEqual(entry.Key * 10, entry.Value);
        }
    }
}

Yes, those are the real tests I had to review. I slightly changed the namespaces and the test names.
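
The code under review didn't include the Merge() method itself. For context, here's a rough sketch of what such an extension method might look like. This is an assumption on my part, not the reviewed implementation:

public static class DictionaryExtensions
{
    // A hypothetical Merge(): copy entries from the other dictionaries,
    // skipping null inputs and keeping the existing value on duplicate keys
    public static Dictionary<TKey, TValue> Merge<TKey, TValue>(
        this Dictionary<TKey, TValue> me,
        params Dictionary<TKey, TValue>[] others)
    {
        var result = new Dictionary<TKey, TValue>(me);
        foreach (var other in others)
        {
            if (other == null)
            {
                continue;
            }

            foreach (var entry in other)
            {
                // TryAdd() keeps the existing value when the key is already there
                result.TryAdd(entry.Key, entry.Value);
            }
        }
        return result;
    }
}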

What’s wrong?

Let’s take a closer look at the first test. Do we need six dictionaries to test the Merge() method? No! And do we need 19 items? No! We can still cover the same scenario with only two single-item dictionaries without duplicate keys.

And let’s write separate tests to deal with edge cases. Let’s write one test to work with null and another one with an empty dictionary. Again two dictionaries will be enough for each test.

Having too many dictionaries with too many items made us write that funny foreach with a funny multiplication inside. That's why, in the Arrange part, some values are multiplied by 10 and others aren't. We don't need any of that with a simpler scenario.

Unit tests should only have assignments without branching or looping logic.

Looking at the second test, we noticed it followed the same pattern as the first one: too many items and a weird foreach with a multiplication inside.

A simple bedroom
Let's embrace simplicity. Photo by Samantha Gades on Unsplash

Write tests using simple test values

Let’s write our tests using simple test values to prepare our scenario under test.

[TestMethod]
public void Merge_NoDuplicates_DoesNotMergeNullAndEmptyOnes()
{
    var one = new Dictionary<int, int>
    {
        { 1, 10 }
    };
    var two = new Dictionary<int, int>
    {
        { 2, 20 }
    };

    var merged = one.Merge(two);
    //               ^^^^^

    Assert.AreEqual(2, merged.Keys.Count);

    Assert.IsTrue(merged.ContainsKey(1));
    Assert.IsTrue(merged.ContainsKey(2));
}

// One test to Merge a dictionary with an empty one
// Another test to Merge a dictionary with a null one

[TestMethod]
public void Merge_DuplicateKeys_ReturnNoDuplicates()
{
    var duplicateKey = 1;
    //  ^^^^^
    var one = new Dictionary<int, int>
    {
        { duplicateKey, 10 }, { 2, 20 }
        //  ^^^^^
    };
    var two = new Dictionary<int, int>
    {
        { duplicateKey, 10 }, { 3, 30 }
        //  ^^^^^
    };
    var merged = one.Merge(two);
    //               ^^^^^

    Assert.AreEqual(3, merged.Keys.Count);

    Assert.IsTrue(merged.ContainsKey(duplicateKey));
    Assert.IsTrue(merged.ContainsKey(2));
    Assert.IsTrue(merged.ContainsKey(3));
}

Notice this time, we boiled down the Arrange part of the first test to only two dictionaries with one item each, without duplicates.

And for the second one, the one for duplicates, we wrote a duplicateKey variable and used it in both dictionaries as the key to make the test scenario obvious. This way, after reading the test name, we don't have to decode where the duplicate keys are.

Since we wrote simple tests, we could remove the foreach in the Assert parts and the funny multiplications.

The tests for the null and empty cases are exercises left to the reader. They're not difficult to write.
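
But, as a starting point, here's a sketch of the empty case, assuming Merge() simply skips empty inputs:

[TestMethod]
public void Merge_EmptyDictionary_ReturnsSameEntries()
{
    var one = new Dictionary<int, int>
    {
        { 1, 10 }
    };
    var empty = new Dictionary<int, int>();

    var merged = one.Merge(empty);

    Assert.AreEqual(1, merged.Keys.Count);
    Assert.IsTrue(merged.ContainsKey(1));
}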

Voilà! That’s another tip to write good unit tests. Let’s strive for tests that are easier to follow, using simple test values. Here we used dictionaries, but we can follow this tip when writing integration tests against a database. Often, to prepare our test data, we insert multiple records when only one or two are enough to prove our point.

Also, I wrote other posts about how to write good unit tests: one about reducing noisy tests and using explicit test values, and another about writing a failing test first. Don’t miss my Unit Testing 101 series where I cover more subjects like this one.

Happy testing!

TIL: Five or more lessons I learned after working with Hangfire and OrmLite

This post is part of my Advent of Code 2022.

These days I finished another internal project while working with one of my clients. I worked on connecting a Property Management System with a third-party Point of Sale. I had to work with Hangfire and OrmLite. I used Hangfire to replace ASP.NET Core BackgroundServices. Today I want to share some of the technical things I learned along the way.

1. Hangfire lazy-loads configurations

Hangfire lazy-loads configurations. We have to retrieve services from the ASP.NET Core dependencies container instead of using the static alternatives.

I faced this issue after trying to run Hangfire in non-development environments without registering the Hangfire dashboard. This was the exception message I got: “JobStorage.Current property value has not been initialized.” When registering the Dashboard, Hangfire loads some of those configurations. That’s why “it worked on my machine.”

These two issues in the Hangfire GitHub repo helped me figure this out: issue #1991 and issue #1967.

This was the fix I found in those two issues:

using Hangfire;
using Microsoft.Extensions.DependencyInjection;
using MyCoolProjectWithHangfire.Jobs;

namespace MyCoolProjectWithHangfire;

public static class WebApplicationExtensions
{
    public static void ConfigureRecurringJobs(this WebApplication app)
    {
        // Before, using the static version:
        //
        // RecurringJob.AddOrUpdate<MyCoolJob>(
        //    MyCoolJob.JobId,
        //    x => x.DoSomethingAsync());
        // RecurringJob.Trigger(MyCoolJob.JobId);

        // After:
        //
        var recurringJobManager = app.Services.GetRequiredService<IRecurringJobManager>();
        // ^^^^^
        recurringJobManager.AddOrUpdate<MyCoolJob>(
            MyCoolJob.JobId,
            x => x.DoSomethingAsync());

        recurringJobManager.Trigger(MyCoolJob.JobId);
    }
}
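
For context, this is roughly how that extension method could be wired up in Program.cs. This is a sketch; the in-memory storage and the rest of the registration are assumptions, not my client's setup:

using Hangfire;
using MyCoolProjectWithHangfire;

var builder = WebApplication.CreateBuilder(args);

// Assumed registration; use your real job storage here
builder.Services.AddHangfire(config => config.UseInMemoryStorage());
builder.Services.AddHangfireServer();

var app = builder.Build();

// Registering the Dashboard also makes Hangfire load some of its configuration
app.UseHangfireDashboard();
app.ConfigureRecurringJobs();

app.Run();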

2. Hangfire Dashboard in non-Local environments

By default, Hangfire only shows the Dashboard for local requests. A coworker pointed that out. It’s in plain sight in the Hangfire Dashboard documentation. Arrrggg!

To make it work in other non-local environments, we need an authorization filter. Like this,

public class AllowAnyoneAuthorizationFilter : IDashboardAuthorizationFilter
{
    public bool Authorize(DashboardContext context)
    {
        // Everyone is more than welcome...
        return true;
    }
}

And we pass it when registering the Dashboard middleware. Like this,

app.UseHangfireDashboard("/hangfire", new DashboardOptions
{
    Authorization = new [] { new AllowAnyoneAuthorizationFilter() }
});

3. In-Memory Hangfire SucceededJobs method

For the In-Memory Hangfire implementation, the SucceededJobs() method from the monitoring API returns jobs from most recent to oldest. There’s no need for pagination. Look at the Reverse() method in the SucceededJobs() source code.

I had to find out why an ASP.NET health check was only working the first time. It turned out that the code was paginating the successful jobs, always looking for the oldest successful jobs. Like this,

public class HangfireSucceededJobsHealthCheck : IHealthCheck
{
    private const int CheckLastJobsCount = 10;

    private readonly TimeSpan _period;

    public HangfireSucceededJobsHealthCheck(TimeSpan period)
    {
        _period = period;
    }

    public Task<HealthCheckResult> CheckHealthAsync(HealthCheckContext context, CancellationToken cancellationToken = default)
    {
        var isHealthy = true;

        var monitoringApi = JobStorage.Current.GetMonitoringApi();

        // Before:
        // It used pagination to bring the oldest 10 jobs
        //
        // var succeededCount = (int)monitoringApi.SucceededListCount();
        // var succeededJobs = monitoringApi.SucceededJobs(succeededCount - CheckLastJobsCount, CheckLastJobsCount);
        //                                                 ^^^^^

        // After:
        // SucceededJobs returns jobs from newest to oldest 
        var succeededJobs = monitoringApi.SucceededJobs(0, CheckLastJobsCount);
        //                                            ^^^^^  

        var successJobsCount = succeededJobs.Count(x => x.Value.SucceededAt.HasValue
                                  && x.Value.SucceededAt > DateTime.UtcNow - _period);

        var result = successJobsCount > 0
            ? HealthCheckResult.Healthy("Yay! We have succeeded jobs.")
            : new HealthCheckResult(
                context.Registration.FailureStatus, "Nein! We don't have succeeded jobs.");
        
        return Task.FromResult(result);
    }
}

This is so confusing that there’s an issue on the Hangfire repo asking for clarification. Not all storage implementations return successful jobs in reverse order. Arrrggg!

4. Prevent Concurrent execution of Hangfire jobs

Hangfire has an attribute to prevent the concurrent execution of the same job: DisableConcurrentExecutionAttribute. Source.

[DisableConcurrentExecution(timeoutInSeconds: 60)]
// ^^^^^
public class MyCoolJob
{
    public async Task DoSomethingAsync()
    {
        // Beep, beep, boop...
    }
}

We can even change the resource being locked to avoid running jobs with the same parameters at the same time. For example, we can run only one job per client at a time, like this,

public class MyCoolJob
{
    [DisableConcurrentExecution("clientId:{0}", 60)]
    //                          ^^^^^
    public async Task DoSomethingAsync(int clientId)
    {
        // Beep, beep, boop...
    }
}

5. OrmLite IgnoreOnUpdate, SqlScalar, and CreateIndex

OrmLite has an [IgnoreOnUpdate] attribute. I found it while reading the OrmLite source code. When using SaveAsync(), OrmLite omits properties marked with this attribute when generating the UPDATE statement. Source.
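
A sketch of how it looks on a model, using a hypothetical Account class:

public class Account
{
    [AutoIncrement]
    public int Id { get; set; }

    public string Name { get; set; }

    [IgnoreOnUpdate]
    // OrmLite leaves this column out of the generated UPDATE statements
    public DateTime CreatedDate { get; set; }
}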

OrmLite’s QueryFirst() method requires an explicit transaction as a parameter, unlike SqlScalar(), which reuses the transaction from the input database connection. Source. I learned this because I had a DoesIndexExist() method inside a database migration, and it failed with the message “ExecuteReader requires the command to have a transaction…” This is what I had to change,

private static bool DoesIndexExist(IDbConnection connection, string tableName, string indexName)
{
    var doesIndexExistSql = @$"
      SELECT CASE WHEN EXISTS (
        SELECT * FROM sys.indexes
        WHERE name = '{indexName}'
        AND object_id = OBJECT_ID('{tableName}')
      ) THEN 1 ELSE 0 END";
    
    // Before:
    // return connection.QueryFirst<bool>(doesIndexExistSql);
    //                   ^^^^^
    // Exception: ExecuteReader requires the command to have a transaction...

    // After:
    var result = connection.SqlScalar<int>(doesIndexExistSql);
    //                      ^^^^^
    return result > 0;
}

Again, by looking at the OrmLite source code, I found that the CreateIndex() method, by default, creates indexes with names like idx_TableName_FieldName. So we can omit the index name parameter when working with this method. Source
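
For example, assuming an open OrmLite connection db and the hypothetical Account class from the earlier snippet, we could write:

// OrmLite names the index following the idx_TableName_FieldName
// convention, so there's no need to pass a name explicitly
db.CreateIndex<Account>(x => x.Name);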

Voilà! That’s what I learned from this project. It gave me the idea to stop and reflect on what I learn from every project I work on. I really enjoyed figuring out the issue with the health check. It made me read the source code of the In-Memory storage for Hangfire.

For more content, check how I use the IgnoreOnUpdate attribute to automatically insert and update audit fields with OrmLite, how to pass a DataTable as a parameter with OrmLite, and how to replace BackgroundServices with a lite Hangfire.

Happy coding!

Four Lessons I Wished I Knew Before Becoming a Software Engineer

This post is part of my Advent of Code 2022.

It has been more than 10 years since I started working as a Software Engineer.

I began designing reports by hand using iTextSharp. And by hand, I mean drawing lines and pixels on a blank canvas. Arrggg!

I used Visual Studio 2010 and learned about LINQ for the first time those days.

Then I moved to some sort of full-stack role writing DotNetNuke modules with Bootstrap and Knockout.js.

In more recent years, I switched to work as a backend engineer. I got tired of getting feedback on colors, alignment, and other styling issues. They’re important. But that’s not the work I enjoy doing.

If I could start all over again, these are the four lessons I wish I had known before becoming a Software Engineer.

1. Find a Way To Stand Out: Make Yourself Different

Learning a second language is a perfect way to stand out. I’m a bit biased since language learning is one of my hobbies.

For most of us, standing out means learning English as a second language.

A second language opens doors to new markets, professional relationships, and job opportunities. And, you can brag about a second and third language on your CV.

After an interview, you can be remembered for the languages you speak. “Ah! The guy who speaks languages.”

2. Never Stop Learning

Let’s be honest. University will teach you lots of subjects. You probably won’t need most of them, and the ones you do need, you’ll have to study on your own.

You will have to study books, watch online conferences, and read blog posts. Never stop learning! That’s what will keep you in the game in the long run.

But, it can be daunting if you try to learn everything about everything. “Learn something about everything, and everything about something,” says popular wisdom.

Libraries and frameworks come and go. Stick to the principles.

Desktop, laptop and notebook
Always keep learning. Photo by Iewek Gnos on Unsplash

3. Have an Escape Plan

There is no safe place to work. Period! Full stop!

Companies lay off employees without further notice or apparent reason. You can get seriously injured or sick. You won’t be able to work forever.

If you’re reading this from the future, ask your parents or grandparents about the year 2020. Lots of people lost their jobs or got their salaries cut in half within a few days. And there was nothing they could do about it.

Have an escape plan. A side income, your own business, a hobby you can turn into a profitable idea. You name it!

Apart from an escape plan, have an emergency fund. The book “The Simple Path to Wealth” calls emergency funds “F-you” money. Keep enough savings in your account so you don’t have to worry about when to leave a job, or about when the choice isn’t yours.

4. Have an Active Online Presence

If I could do something different, I would have an active online presence way earlier.

Be active online. Have a blog, a LinkedIn profile, or a professional profile on any other social network. Use social networks to your advantage.

In the beginning, you might think you don’t know enough to start writing or start a blog. But you can share what you learn, the resources you use to learn, and your sources of inspiration. You can learn in public and show your work.

Voilà! These are four lessons I wish I had known before starting a software engineering career. Remember, every journey is different and we’re all figuring out life. For sure, my circumstances have been different from yours, and that’s why I chose these four lessons.

In any case, “Your career is your responsibility, not your employer’s.” I learned that from The Clean Coder.

Interested in more career lessons? Check five lessons I learned in my first five years as a software engineer, ten lessons learned after one year of remote work, and a case against massive unrequested refactorings.

Happy coding!

TIL: How to automatically insert and update audit fields with OrmLite

This post is part of my Advent of Code 2022.

These days I had to work with OrmLite. I had to follow the convention of adding audit fields to all of the database tables. Instead of setting them manually, I wanted to populate them when using OrmLite’s SaveAsync() method. This is how to automatically insert and update audit fields with OrmLite.

1. Create a 1-to-1 mapping between two tables

Let’s store our favorite movies. Let’s create two classes, Movie and Director, to represent a one-to-one relationship between movies and their directors.

using ServiceStack.DataAnnotations;

public interface IAudit
{
    DateTime CreatedDate { get; set; }

    DateTime UpdatedDate { get; set; }
}

public class Movie : IAudit
{
    [AutoIncrement]
    public int Id { get; set; }

    [StringLength(256)]
    public string Name { get; set; }

    [Reference]
    // ^^^^^^^
    public Director Director { get; set; }

    [Required]
    public DateTime CreatedDate { get; set; }

    [Required]
    public DateTime UpdatedDate { get; set; }
}

public class Director : IAudit
{
    [AutoIncrement]
    public int Id { get; set; }

    [References(typeof(Movie))]
    // ^^^^^^
    public int MovieId { get; set; }
    //         ^^^^^
    // OrmLite expects a foreign key back to the Movie table

    [StringLength(256)]
    public string FullName { get; set; }

    [Required]
    public DateTime CreatedDate { get; set; }

    [Required]
    public DateTime UpdatedDate { get; set; }
}

Notice we used OrmLite’s [Reference] to tie every director to its movie. With these two classes, OrmLite expects two tables and a foreign key from Director pointing back to Movie. Also, we used IAudit to add the CreatedDate and UpdatedDate properties. We will use this interface in the next step.

2. Use OrmLite Insert and Update Filters

To automatically set CreatedDate and UpdatedDate when inserting and updating movies, let’s use OrmLite InsertFilter and UpdateFilter. With them, we can manipulate our records before putting them in the database.

Let’s create a unit test to show how to use those two filters,

using Microsoft.VisualStudio.TestTools.UnitTesting;
using ServiceStack.DataAnnotations;
using ServiceStack.OrmLite;

namespace OrmLiteAuditFields;

[TestClass]
public class PopulateAuditFieldsTest
{
    [TestMethod]
    public async Task SaveAsync_InsertNewMovie_PopulatesAuditFields()
    {
        OrmLiteConfig.DialectProvider = SqlServerDialect.Provider;
        OrmLiteConfig.InsertFilter = (command, row) =>
        //            ^^^^^
        {
            if (row is IAudit auditRow)
            {
                auditRow.CreatedDate = DateTime.UtcNow;
                //       ^^^^^
                auditRow.UpdatedDate = DateTime.UtcNow;
                //       ^^^^^
            }
        };
        OrmLiteConfig.UpdateFilter = (command, row) =>
        //            ^^^^^
        {
            if (row is IAudit auditRow)
            {
                auditRow.UpdatedDate = DateTime.UtcNow;
                //       ^^^^^
            }
        };

        var connectionString = "...Any SQL Server connection string here...";
        var dbFactory = new OrmLiteConnectionFactory(connectionString, SqlServerDialect.Provider);

        using var db = dbFactory.Open();

        var movieToInsert = new Movie
        {
            Name = "Titanic",
            // We're not setting CreatedDate and UpdatedDate here...
            Director = new Director
            {
                FullName = "James Cameron"
                // We're not setting CreatedDate and UpdatedDate here, either...
            }
        };
        await db.SaveAsync(movieToInsert, references: true);
        //       ^^^^^
        // We insert "Titanic" for the first time
        // With "references: true", we also insert the director

        var insertedMovie = await db.SingleByIdAsync<Movie>(movieToInsert.Id);
        Assert.IsNotNull(insertedMovie);
        Assert.AreNotEqual(default, insertedMovie.CreatedDate);
        Assert.AreNotEqual(default, insertedMovie.UpdatedDate);
    }
}

Notice we defined the InsertFilter and UpdateFilter, and inside them, we checked whether the row being inserted or updated implemented the IAudit interface to then set the audit fields with the current timestamp.

To insert a movie and its director, we used SaveAsync() with the optional parameter references set to true. We didn’t explicitly set the CreatedDate and UpdatedDate properties before inserting a movie.

Internally, OrmLite’s SaveAsync() either inserts an object or updates it if it already exists. It uses the property annotated as the primary key to check if the object is already in the database.

Instead of using filters, we can annotate our audit fields with [Default(OrmLiteVariables.SystemUtc)]. With this attribute, OrmLite will create a default constraint. But this only works for the first insertion, not for future updates on the same record.
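
For reference, that alternative looks like this. A sketch on the Movie class from above:

public class Movie : IAudit
{
    // ...same properties as before...

    [Required]
    [Default(OrmLiteVariables.SystemUtc)]
    // OrmLite creates a DEFAULT constraint for this column,
    // but it only covers the initial INSERT, not later UPDATEs
    public DateTime CreatedDate { get; set; }
}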

3. Add [IgnoreOnUpdate] for future updates

To support future updates using the OrmLite SaveAsync(), we need to annotate the CreatedDate property with the attribute [IgnoreOnUpdate] in the Movie and Director classes. Like this,

public class Movie : IAudit
{
    [AutoIncrement]
    public int Id { get; set; }

    [StringLength(256)]
    public string Name { get; set; }

    [Reference]
    public Director Director { get; set; }

    [Required]
    [IgnoreOnUpdate]
    // ^^^^^^^^^^^^
    public DateTime CreatedDate { get; set; }

    [Required]
    public DateTime UpdatedDate { get; set; }
}

public class Director : IAudit
{
    [AutoIncrement]
    public int Id { get; set; }

    [References(typeof(Movie))]
    public int MovieId { get; set; }

    [StringLength(256)]
    public string FullName { get; set; }

    [Required]
    [IgnoreOnUpdate]
    // ^^^^^^^^^^^^
    public DateTime CreatedDate { get; set; }

    [Required]
    public DateTime UpdatedDate { get; set; }
}

Internally, when generating the SQL query for an UPDATE statement, OrmLite doesn’t include properties annotated with [IgnoreOnUpdate]. Source. Also, OrmLite has similar attributes for insertions and queries: [IgnoreOnInsert] and [IgnoreOnSelect].

Let’s modify our previous unit test to insert and update a movie,

using Microsoft.VisualStudio.TestTools.UnitTesting;
using ServiceStack.DataAnnotations;
using ServiceStack.OrmLite;

namespace OrmLiteAuditFields;

[TestClass]
public class PopulateAuditFieldsTest
{
    [TestMethod]
    public async Task SaveAsync_InsertNewMovie_PopulatesAuditFields()
    {
        // Same OrmLiteConfig as before...
        var connectionString = "...Any SQL Server connection string here...";
        var dbFactory = new OrmLiteConnectionFactory(connectionString, SqlServerDialect.Provider);

        using var db = dbFactory.Open();

        var movieToInsert = new Movie
        {
            Name = "Titanic",
            // We're not setting CreatedDate and UpdatedDate here...
            Director = new Director
            {
                FullName = "James Cameron"
                // We're not setting CreatedDate and UpdatedDate here, either...
            }
        };
        await db.SaveAsync(movieToInsert, references: true);
        //       ^^^^^
        // 1.
        // We insert "Titanic" for the first time
        // With "references: true", we also insert the director

        await Task.Delay(1_000);
        // Let's give it some time...

        var movieToUpdate = new Movie
        {
            Id = movieToInsert.Id,
            //   ^^^^^
            Name = "The Titanic",
            // We're not setting CreatedDate and UpdatedDate here...
            Director = new Director
            {
                Id = movieToInsert.Director.Id,
                //   ^^^^^
                FullName = "J. Cameron"
                // We're not setting CreatedDate and UpdatedDate here, either...
            }
        };
        await db.SaveAsync(movieToUpdate, references: true);
        //       ^^^^^
        // 2.
        // To emulate a repository method, we create a new Movie
        // object with the same ids but updated movie and director names
    }
}

Often, when we work with repositories to abstract our data access layer, we update objects using the identifier of an already-inserted object and a second object with the properties to update. Something like UpdateAsync(movieId, aMovieWithSomePropertiesChanged).
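
A minimal sketch of such a repository method, reusing the dbFactory from the test above. UpdateAsync() and its signature are my own naming, not an OrmLite API:

public async Task UpdateAsync(int movieId, Movie aMovieWithSomePropertiesChanged)
{
    using var db = dbFactory.Open();

    // Reuse the id of the already-inserted movie, so SaveAsync()
    // finds it and runs an UPDATE instead of an INSERT
    aMovieWithSomePropertiesChanged.Id = movieId;
    await db.SaveAsync(aMovieWithSomePropertiesChanged);
}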

Notice that this time, after inserting the movie, we created a separate Movie instance (movieToUpdate), keeping the same ids and updating the other properties. We used the same SaveAsync() as before.

At this point, if we don’t annotate the CreatedDate properties with [IgnoreOnUpdate], we get the exception: “System.Data.SqlTypes.SqlTypeException: SqlDateTime overflow. Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM.”

We don’t want to change the CreatedDate on updates. That’s why in the UpdateFilter we only set UpdatedDate. But since we used a different Movie instance in the second SaveAsync() call, CreatedDate stayed uninitialized (default(DateTime), the year 0001, which is outside SQL Server’s datetime range) when OrmLite ran the UPDATE statement. That’s why we got that exception.

Voilà! That’s how to automate audit fields with OrmLite. After reading the OrmLite source code, I found out about these filters and attributes. I learned the lesson of reading the source code of our dependencies from a past Monday Links episode.

To read more about OrmLite, check how to pass a DataTable as a parameter with OrmLite, how to join to subqueries with OrmLite, and five lessons I learned while working with OrmLite.

Happy coding!

TIL: How to replace keywords in a file name and content with Bash

This post is part of my Advent of Code 2022.

These days I needed to replace all occurrences of one keyword with another in source files and file names. In one of my client’s projects, I had to query one microservice to list a type of account and store it in an intermediate database. After a change in requirements, I had to query for another type of account and rename every place where I used the old one. This is what I learned.

1. Find and Replace inside Visual Studio

My original solution was to use Visual Studio to replace “OldAccount” with “NewAccount” in all .cs files in my solution. I used the “Replace in Files” menu by pressing Ctrl + Shift + H,

Visual Studio 'Replace in Files' menu
Visual Studio 'Replace in Files' menu

This step replaced all occurrences inside source files. For example, it renamed class names from IOldAccountService to INewAccountService. To rename variables, I repeated the same replace operation using lowercase patterns.

With the “Replace in Files” menu, I covered file content. But I still had to change the filenames. For example, I needed to rename IOldAccountService.cs to INewAccountService.cs. I did it by hand. Luckily, I didn’t have to rename many of them. “There must be a better way,” I thought.

2. Find and Replace with Bash

After renaming my files by hand, I realized I could have used the command line to replace both the content and the file names. I use Git Bash anyway, so I have access to most Unix commands.

Replace ‘old’ with ‘new’ inside all .cs files

This is how to replace “Old” with “New” in all .cs files, Source

grep -irl --include \*.cs "Old" | xargs sed -bi 's/Old/New/g'

With the grep command, we look for all .cs files (--include \*.cs) containing the “Old” keyword, no matter the case (-i flag), inside all child folders (-r), showing only the file path (-l flag).

We could use the first command, before the pipe, to only list the .cs files containing a keyword.
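
That is, just the part before the pipe:

grep -irl --include \*.cs "Old"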

Then, with the sed command, we replace the file content in place (-i flag), changing all occurrences of “Old” with “New” (s/Old/New/g). Notice the g option in the replacement pattern; it replaces every occurrence on each line, not just the first one. To avoid messing with line endings, we use the -b flag. Source

If our filenames contain spaces, which would be weird in source files, but just in case, we need to tell grep and xargs to use a different separator,

grep -irlZ --include \*.cs "Old" | xargs -0 sed -bi 's/Old/New/g'

This time, we pass -Z to grep and -0 to xargs, to separate filenames with the null character instead of newlines. Source

This first command and its variation do what Visual Studio’s “Replace in Files” does.

Rename ‘old’ with ‘new’ in filenames

Instead of renaming files by hand, this is how to replace “Old” with “New” in file names, Source

find . -path ./TestCoverageReport -prune -o -type f -name "*Old*" -print \
  | sed 'p;s/Old/New/' \
  | xargs -d '\n' -n 2 mv

This time, we’re using the find command to “find” all files (-type f), with “Old” anywhere in their names (-name "*Old*"), inside the current folder (.), excluding the TestCoverageReport folder (-path ./TestCoverageReport -prune).

Optionally, we can exclude multiple folders by wrapping them inside parentheses, like, Source

find . \( -path ./FolderToExclude -o -path ./AnotherFolderToExclude \) \
    -prune -o -type f -name "*Old*" -print

Then, we feed the output to the sed command to generate the new names, replacing “Old” with “New.” This time, we’re using the p option to also print each original name before its replacement. Up to this point, our command returns something like this,

./Some/Folder/AFileWithOld.cs
./Some/Folder/AFileWithNew.cs
...

With the last part, we split the sed output by the newline character (-d '\n') and pass pairs of filenames (-n 2) to the mv command to finally rename the files.

Another alternative to sed followed by mv would be to use the rename command, like this,

find . -path ./TestCoverageReport -prune -o -type f -name "*Old*" -print \
  | xargs rename 's/Old/New/g'

Voilà! That’s how to replace a keyword in the content and names of files. It took me some time to figure this out, but we can rename files with two one-liners. It will save us time in the future. Kudos to StackOverflow.

To read more productivity tips and tools, check these programs that save me 100 hours, how to format commit messages with hooks, and how to rename Visual Studio projects.

Happy coding!