How I got rid of two recurring review comments (Git hook, VS extension)

These are two things I always forgot to do when sending my code to code review. To save my reviewers and me some time, I decided to do something about it. This is how I got rid of two recurring comments I kept getting on my code reviews.

For a project I was working on, I had to include the ticket number in every commit message and add the Async suffix to all asynchronous C# methods. I forgot these two conventions every time I created my Pull Requests.

1. How to add ticket numbers in commit messages

Add ticket numbers to Git commit messages using a prepare-commit-msg hook. This hook lets us edit the default commit message before the commit is created, so we can use it to enforce naming conventions and run other custom actions on our commit messages.

I was already naming my feature branches with the ticket number. Then, with a bash script, I could read the ticket number from the branch name and prepend it to the commit message.

This is a prepare-commit-msg hook to prepend commit messages with the ticket number from branch names.

#!/bin/bash
# prepare-commit-msg hook: prepend the ticket number from the branch name
FILE=$1
MESSAGE=$(cat "$FILE")
# Extract a ticket number like ABC-123 from the current branch name
TICKET=[$(git branch --show-current | grep -Eo '^(\w+/)?(\w+[-_])?[0-9]+' | grep -Eo '(\w+[-])?[0-9]+' | tr "[:lower:]" "[:upper:]")]
# Skip if there's no ticket number or the message already starts with it
if [[ $TICKET == "[]" || "$MESSAGE" == "$TICKET"* ]]; then
  exit 0;
fi

echo "$TICKET $MESSAGE" > "$FILE"

If I name my feature branch feat/ABC-123-my-awesome-branch, then when I commit my code, Git rewrites my commit message to look like [ABC-123] My awesome commit.

I wrote about this hook on my list of Programs that saved me 1000 hours, where I list the Git aliases, Visual Studio extensions, and other online tools that save me some valuable time.

2. Don’t miss Async suffix on asynchronous C# methods

Another convention I always forgot about was adding the Async suffix to asynchronous C# methods.

Use a .editorconfig file

After a bit of Googling, a coworker came up with this StackOverflow answer: use a .editorconfig file to get errors on async methods missing the Async suffix.

This is how to enforce the Async suffix inside a .editorconfig file,

[*.cs]

# Async methods should have "Async" suffix
dotnet_naming_rule.async_methods_end_in_async.symbols = any_async_methods
dotnet_naming_rule.async_methods_end_in_async.style = end_in_async
dotnet_naming_rule.async_methods_end_in_async.severity = error

dotnet_naming_symbols.any_async_methods.applicable_kinds = method
dotnet_naming_symbols.any_async_methods.applicable_accessibilities = *
dotnet_naming_symbols.any_async_methods.required_modifiers = async

dotnet_naming_style.end_in_async.required_prefix = 
dotnet_naming_style.end_in_async.required_suffix = Async
dotnet_naming_style.end_in_async.capitalization = pascal_case
dotnet_naming_style.end_in_async.word_separator =

But the .editorconfig rule enforces the Async suffix even on the Main method and test names. MainAsync looks weird. Also, it misses method declarations returning Task or Task<T> on interfaces. Arrggg!

AsyncMethodNameFixer Visual Studio extension

To add a warning on asynchronous C# methods missing the Async suffix, use the AsyncMethodNameFixer extension on Visual Studio.

With the AsyncMethodNameFixer extension, I get warnings when I don’t include the Async suffix on methods and interfaces. And it doesn’t complain about the Main method or test methods.

But, to enforce this convention across the team, I have to rely on the other developers having this extension installed. With the .editorconfig, all the naming rules travel with the code itself when anyone clones the repository.

Voilà! That’s how I got rid of these two recurring comments during code review. For more productive code reviews, read my Tips and Tricks for Better Code Reviews.

For more extensions to make you more productive with Visual Studio, check my Visual Studio Setup for C#.

If you’re new to Git, check my Git Guide for Beginners and my Git guide for TFS Users.

Happy coding!

How I take notes

Some time ago, I commented on a discussion on dev.to titled How do you make notes?. Here is my long reply.

Plain text and Markdown

I love plain text. It’s future-proof. I can use any text editor to edit text files. Notepad++, SublimeText, Vim, Visual Studio Code, you name it. I use plain text for almost everything.

I write all my notes using Markdown. It’s formatted plain text that renders to HTML. Markdown is already used in README files on GitHub and almost everywhere on the Internet.

I write and organize my notes with Notable. It’s clean and simple.

A pencil and a notebook
Photo by Jan Kahánek on Unsplash

todo.txt and Zettelkasten

For my task list, I have a todo.txt file. One task per line. Each task has a priority, due date, and optional tags.

For ideas and future tasks, I have a later.txt. Once a task is done, I move it to a done.txt file. I keep it as a brag document.

I have one note per file for every blog post, podcast, video, and book I find interesting. I group these notes using the tag: “til,” short for “Today I learned.” I write the date, source, key points, and my reaction.

I capture ideas and thoughts on my phone. From the book Pragmatic Thinking and Learning, I learned to “always have something to keep notes.”

Recently I found a note-taking system: Zettelkasten. In short, we write what we learn in our own words on a card, give it an index number, and connect it to our existing notes. Although there are editors for Zettelkasten, I have my own slip-box and keep my cards with pen and paper.

UPDATE (Jun 2023): These days, I’ve been playing with a todo.txt per day and a Bash script to start a new workday by importing the pending tasks from the previous day. Also, in the same spirit of todo.txt, I found calendar.txt, a plain text calendar, and this Bash one-liner to open the calendar.txt file at the current date with Vim.

To read more about the Zettelkasten method, check my takeaways from the book How to Take Smart Notes. For more ideas on plain text files, check these ten clever uses for plain text on Lifehacker.com.

Happy note-taking!

How to read configuration values in ASP.NET Core

Let’s see how to read and overwrite configuration values with ASP.NET Core 6.0 using the Options pattern.

To read configuration values following the Options pattern, add a new section in the appsettings.json file, create a matching class, and register it into the dependencies container using the Configure() method.

Macaroons in the showcase of a pastry shop
Those are Macaroon options. Not the Options pattern. Photo by Vered Caspi on Unsplash

Let’s see how to use, step by step, the Options pattern to read configuration values.

1. Change the appsettings.json file

After creating a new ASP.NET Core API project with Visual Studio or the dotnet tool from a terminal, let’s add the values we want to configure to the appsettings.json file, inside a new JSON object.

Let’s add a couple of configuration values inside a new MySettings object in our appsettings.json file like this,

{
  "MySettings": {
    "AString": "Hello, there!",
    "ABoolean": true,
    "AnInteger": 1,
    "AnArray": ["hello", ",", "there", "!"]
  }
}

Inside the appsettings.json file, we can use booleans, integers, and arrays, not only strings.

2. Create and bind a configuration class

Then, let’s create a matching configuration class for our configuration section in the appsettings.json file.

We should name our configuration class after our section name and its properties after the keys inside our section.

This is the configuration class for our MySettings section,

public class MySettings
{
    public string AString { get; set; }
    public bool ABoolean { get; set; }
    public int AnInteger { get; set; }
    public string[] AnArray { get; set; }
}

Next, let’s bind our configuration class to our custom section and register it into the built-in dependencies container. In our Program.cs file, let’s use the Configure() method for that,

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();

var mySettingsSection = builder.Configuration.GetSection("MySettings");
builder.Services.Configure<MySettings>(mySettingsSection);
//               ^^^^^

var app = builder.Build();
app.MapControllers();
app.Run();

As an alternative, we can use GetRequiredSection() instead. It throws an InvalidOperationException if we forget to add the configuration section in our appsettings.json file.
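
For example, this is the same registration using GetRequiredSection() instead,

var mySettingsSection = builder.Configuration.GetRequiredSection("MySettings");
//                                            ^^^^^
builder.Services.Configure<MySettings>(mySettingsSection);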

3. Use sections and subsections

Let’s use sections and subsections to group our configuration values on appsettings.json files.

Now let’s say MySettings is inside another section: AllMyCoolSettings. We need a new AllMyCoolSettings class containing a MySettings property like this,

public class AllMyCoolSettings
{
    public MySettings MySettings { get; set; }
    //     ^^^^^
}

Then, in the Configure() method, we separate the section and subsection names using a colon, :, like this,

var mySettings = builder.Configuration.GetSection("AllMyCoolSettings:MySettings");
//                                                 ^^^^^
builder.Services.Configure<MySettings>(mySettings);

4. Inject an IOptions interface

To use these configuration values, let’s add an IOptions<T> parameter in the constructor of our service or controller.

Let’s create a simple controller that prints one of our configured values,

[Route("api/[controller]")]
public class ValuesController : Controller
{
    private readonly MySettings _mySettings;

    public ValuesController(
        IOptions<MySettings> mySettingsOptions)
        // ^^^^^
    {
        _mySettings = mySettingsOptions.Value;
        //                              ^^^^^
    }

    [HttpGet]
    public string Get()
    {
        return _mySettings.AString;
        //     ^^^^^   
    }
}

The IOptions<T> interface has a property Value. It holds an instance of our configuration class with the parsed values from the appsettings.json file.

In our controller we use the injected MySettings like any other object instance.

By default, if we forget to add a configuration value in the appsettings.json file, ASP.NET Core doesn’t throw any exception. Instead, ASP.NET Core initializes the configuration class to its default values.

That’s why it’s a good idea to always validate for missing configuration values inside constructors.
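
For example, this is one way of doing it: a sketch of our controller’s constructor with a simple guard clause for the AString value,

public ValuesController(IOptions<MySettings> mySettingsOptions)
{
    _mySettings = mySettingsOptions.Value;

    if (string.IsNullOrEmpty(_mySettings.AString))
    {
        // Fail fast instead of running with a half-empty configuration
        throw new ArgumentException("Missing 'MySettings:AString' configuration value");
    }
}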

For unit testing, let’s use the method Options.Create() with an instance of the MySettings class we want to use. We don’t need a stub or mock for that!
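
For example, this is a sketch of how we could build our ValuesController inside a unit test with Options.Create(). It assumes a using Microsoft.Extensions.Options; directive,

var mySettings = new MySettings
{
    AString = "Hello from a test"
};
var controller = new ValuesController(Options.Create(mySettings));
//                                    ^^^^^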

5. Use separate configuration files per environment

Let’s separate our configuration values into different configuration files per environment.

By default, ASP.NET Core creates two JSON files: appsettings.json and appsettings.Development.json. But we could have other configuration files, too.

If ASP.NET Core doesn’t find a value in an environment-specific file, it reads the default appsettings.json file instead.

ASP.NET Core reads the current environment from the ASPNETCORE_ENVIRONMENT environment variable.

On a development machine, we can use the launchSettings.json file to set environment variables.

For example, let’s override one configuration value using an environment variable in our launchSettings.json file,

{
  "profiles": {
    "<YourSolutionName>": {
      "commandName": "Project",
      "applicationUrl": "http://localhost:5000",

      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development",
        // ^^^^^
        "MySettings__AString": "This value comes from an environment variable"
        // ^^^^^
      }
    }
  }
}

By default, ASP.NET Core reads configuration values from environment variables, too.

Environment variables have a higher precedence than JSON files.

For example, if we set an environment variable MySettings__AString, ASP.NET Core will use that value instead of the one on the appsettings.json file.

Notice that the separator for sections and subsections inside environment variables is a double underscore, __.

6. Embrace PostConfigure

After registering our configuration classes, we can override their values using the PostConfigure() method.

I used PostConfigure() when refactoring a legacy application. I grouped related values in the appsettings.json file into sections. But I couldn’t rename the existing environment variables to match the new names. I did something like this instead,

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();

var mySettingsSection = builder.Configuration.GetSection("MySettings");
builder.Services.Configure<MySettings>(mySettingsSection);

builder.Services.PostConfigure<MySettings>(options =>
//               ^^^^^
{
    var anOldSetting = Environment.GetEnvironmentVariable("AnOldSettingName");
    //  ^^^^^
    if (!string.IsNullOrEmpty(anOldSetting))
    {
        options.AString = anOldSetting;
        //      ^^^^^
    }
});

var app = builder.Build();
app.MapControllers();
app.Run();

Conclusion

Voilà! That’s how to read configuration values with ASP.NET Core 6.0. Apart from the IOptions interface we used here, ASP.NET Core has IOptionsSnapshot and IOptionsMonitor. Also, we can read values from INI files, XML files, or Azure Key Vault.
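
For example, if we wanted our controller to pick up configuration changes without restarting the application, a sketch of the same constructor with IOptionsSnapshot would look like this,

public ValuesController(IOptionsSnapshot<MySettings> mySettingsSnapshot)
//                      ^^^^^
{
    // Unlike IOptions, IOptionsSnapshot recomputes the options on every request
    _mySettings = mySettingsSnapshot.Value;
}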

In the days of the old ASP.NET framework, we had a ConfigurationManager class and a web.config file to read configuration values. Those days are gone! We have JSON files now.

For more ASP.NET Core content, check how to create a caching layer, how to create a CRUD API with Insight.Database, and how to use background services with Hangfire.

Happy coding!

How to keep your database schema updated with Simple.Migrations

Do you email SQL scripts to a coworker to update your database schema? Do you update your database schema by hand? I’ve been there. Let’s find out about database migrations with Simple.Migrations.

A database migration is a uniquely identified operation to create, update or delete database objects. Migrations are a more scalable and maintainable alternative to running SQL scripts directly into your database to update its schema.

Let’s migrate

Migrations allow you to create and set up testing and development environments easily.

Have you ever needed to create a new environment to reproduce an issue? But you forgot to run one or two scripts to create some columns in a table, and your application couldn’t even start. Then, you realized you had two problems.

Migrations help you to keep your database schema and your SQL scripts under version control in sync.

No more emails with database scripts!

How to keep your database schema updated with .NET Core and Simple.Migrations
Photo by Barth Bailey on Unsplash

Simple.Migrations

Simple.Migrations “is a simple bare-bones migration framework for .NET Core”. It provides “a set of simple, extendable, and composable tools for integrating migrations into your application”.

Simple.Migrations has out-of-the-box database providers for SQL Server, SQLite, PostgreSQL, and MySQL. But, you can create your own provider too.

Let’s create our first migration for an employees database using SQL Server.

1. Create a new Employees table

First, in a Console application, install the Simple.Migrations NuGet package. Then, create a class CreateEmployees inheriting from the Migration base class. Don’t forget to add the using SimpleMigrations; statement.

With Simple.Migrations, all migrations should override two methods: Up and Down.

The Up() method should contain the database operation to apply. For example, creating a new table, adding a new column to an existing table, etc. And the Down() method should contain the steps to roll back that operation. Remember, we want to apply and roll back migrations.

For our first migration, the Up() method will have the SQL statement to create the Employees table. And, the Down() method, the statement to remove it.

You can use the Execute() method from the Migration class to run your SQL statements. But you also have a Connection property of type DbConnection if you prefer to bring your own database layer or ORM of choice.

A migration should be uniquely identified.

You need to annotate your migration with a version number using the [Migration] attribute. Either a consecutive number or a timestamp-like number is fine.

Make sure not to repeat the same version number. Otherwise, you will get a MigrationLoadFailedException.

This is our CreateEmployees migration with CREATE TABLE and DROP TABLE statements.

using SimpleMigrations;

[Migration(1)]
public class CreateEmployees : Migration
{
    protected override void Up()
    {
        Execute(@"CREATE TABLE Employees
                (
                    [Id] [int] PRIMARY KEY IDENTITY(1,1) NOT NULL,
                    [SSO] [varchar](24) NOT NULL,
                    [FirstName] [varchar](255) NOT NULL,
                    [MiddleName] [varchar](255) NOT NULL,
                    [LastName] [varchar](255) NOT NULL,
                    [Salary] [decimal](18) NOT NULL,
                    [CreatedDate] [datetime] NULL,
                    [UpdatedDate] [datetime] NULL
                )");
    }

    protected override void Down()
    {
        Execute(@"DROP TABLE Employees");
    }
}

I know, I know…Yes, I copied the SQL statement from SQL Server Management Studio Database Designer.

2. Apply our first migration

The next step is to update the Console application to run this migration.

Inside the Main() method of your console application, create a connection to your database and use the SimpleMigrator class. Its constructor needs the assembly containing the migrations and a database provider.

For our example, the MssqlDatabaseProvider is the appropriate provider.

With the SimpleMigrator class, you can use two methods: MigrateTo() and MigrateToLatest().

MigrateTo() applies a specific version to your database. And MigrateToLatest(), all versions up to the latest one. Before using either of these two methods, make sure to call the Load() method.

The Main() method of our console application looks like this.

using System.Data.SqlClient;
using SimpleMigrations;
using SimpleMigrations.DatabaseProvider;

class Program
{
    static void Main(string[] args)
    {
        var connString = @"Data Source=(localdb)\MSSQLLocalDB;Initial Catalog=Payroll;Integrated Security=True;";
        using (var connection = new SqlConnection(connString))
        {
            var databaseProvider = new MssqlDatabaseProvider(connection);
            var migrator = new SimpleMigrator(typeof(AssemblyWithYourMigrations).Assembly, databaseProvider);
            migrator.Load();
            migrator.MigrateToLatest();
        }
    }
}

Run your console application to apply your first migration.

Simple.Migrations creates a dbo.VersionInfo table on your database. This table keeps track of the applied migrations. It should look like this one.

Id  Version  AppliedOn             Description
1   1        8/13/2020 4:24:18 PM  CreateEmployees

3. Add a column to an existing table

Now, suppose you need to add a Type column to the Employees table.

This time, create an AddTypeToEmployee class with the SQL statements needed. Remember, you need a different version number.

For example, the AddTypeToEmployee will look like this.

[Migration(2)]
public class AddTypeToEmployee : Migration
{
    protected override void Up()
    {
        Execute(@"ALTER TABLE Employees
                  ADD Type VARCHAR(8) NULL");
    }

    protected override void Down()
    {
        Execute(@"ALTER TABLE Employees
                  DROP COLUMN Type");
    }
}

Again, run the console application. Notice how the Employees and VersionInfo tables have changed on your database.

4. A runner

Finally, you can create a runner to apply your migrations. Simple.Migrations has a predefined console runner.

So far, our Console application always applies all the migrations up to the latest one. We need more flexibility to apply or roll back any specific migration.

Let’s use .NET Core configuration options to move the connection string to a settings file. We have ours hardcoded in our Console application.

For this, you need to install two NuGet packages:

  • Microsoft.Extensions.Configuration, and
  • Microsoft.Extensions.Configuration.Json

Then, create an appsettings.json file with your connection string. Mine looks like this.

{
  "ConnectionStrings": {
    "YourConnString": "Data Source=(localdb)\\MSSQLLocalDB;Initial Catalog=Payroll;Integrated Security=True;"
  }
}

If you’re using different environments (QA, Staging, for example), you can read the environment name from an environment variable.

Then, with the ConfigurationBuilder class, you can load the appropriate JSON file with your connection string per environment.

After using the console runner, our console application looks like this.

class Program
{
    static void Main(string[] args)
    {
        var connString = Configuration().GetConnectionString("YourConnString");
        using (var connection = new SqlConnection(connString))
        {
            var databaseProvider = new MssqlDatabaseProvider(connection);
            var migrator = new SimpleMigrator(typeof(AssemblyWithYourMigrations).Assembly, databaseProvider);

            var consoleRunner = new ConsoleRunner(migrator);
            consoleRunner.Run(args);

            Console.ReadKey();
        }
    }

    public static IConfigurationRoot Configuration()
    {
        var environmentName = Environment.GetEnvironmentVariable("DOTNET_ENVIRONMENT");

        var configurationBuilder = new ConfigurationBuilder()
            .AddJsonFile("appsettings.json")
            .AddJsonFile($"appsettings.{environmentName}.json", optional: true);
        return configurationBuilder.Build();
    }
}

Simple.Migrations’ default ConsoleRunner reads the commands up (migrate to the latest version), to (migrate to a specific version), and down (revert back to a version). If the command arguments you provide are invalid, you will get a help message.

Conclusion

Voilà! That’s how we can keep our database schema up to date with migrations and Simple.Migrations.

Migrations are a better alternative to running scripts directly into your database. You can use migrations to create constraints and indexes too. With migrations, your database structure is under version control and reviewed as it should be.

Your mission, Jim, should you decide to accept it, is to add a Payments table with a relation to the Employees table. It should contain an id, a date, a paid value and the employee id. This post will self-destruct in five seconds. Good luck, Jim.

To learn more about reading configuration files in ASP.NET Core, read Configuration and the Options pattern in ASP.NET Core. Speaking of ASP.NET Core and databases, check How to create a CRUD API with Insight.Database.

Also, check how to simplify your migrations by squashing old migration files.

Happy migration time!

How to create fakes with Moq. And what I don't like about it

A recurring task when we write unit tests is creating replacements for collaborators. If we’re writing unit tests for an order delivery system, we don’t want to charge a credit card every time we run our tests. This is how we can create fakes using Moq.

Fakes or test doubles are testable replacements for dependencies and external systems. Fakes could return a fixed value or throw an exception to test the logic around the dependency they replace. Fakes can be created manually or with a mocking library like Moq.

Think of fakes or test doubles like body or stunt doubles in movies. They substitute an actor in life-threatening scenes without showing their face. In unit testing, fakes replace external components.

How to write your own fakes

We can write our own fakes by hand or use a mocking library.

If we apply the Dependency Inversion Principle, the D of SOLID, our dependencies are well abstracted using interfaces. Each service receives its collaborators instead of building them directly.

To write a fake by hand, we create a class that implements the interface we want to replace. Then, in Visual Studio, from the “Quick Actions and Refactorings” menu, we choose the “Implement interface” option. Et voilà! We have our own fake.
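
For example, this is a minimal hand-written fake. It’s only a sketch, assuming an IStockService interface with a single IsStockAvailable() method and the Order class from the Moq example below,

public interface IStockService
{
    bool IsStockAvailable(Order order);
}

// A hand-written fake that always reports stock as available
public class AlwaysInStockService : IStockService
{
    public bool IsStockAvailable(Order order)
        => true;
}

Inside a test, we pass an instance of AlwaysInStockService to the class under test instead of the real stock service.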

But, if we need to create lots of fake collaborators, a mocking library can make things easier. Mocking libraries are an alternative to writing our own fakes manually. They offer a friendly API to create fakes for an interface or an abstract class. Let’s see Moq, one of them!

Moq, a mocking library

Moq is a mocking library that “is designed to be a very practical, unobtrusive and straight-forward way to quickly setup dependencies for your tests”.

Moq, “the most popular and friendly mocking library for .NET”

From Moq

Create fakes with Moq…Action!

Let’s see Moq in action! Let’s start with an OrderService that uses an IPaymentGateway and IStockService. This OrderService checks if an item has stock available to charge a credit card when placing a new order. Something like this,

public class OrderService 
{
    private readonly IPaymentGateway _paymentGateway;
    private readonly IStockService _stockService;
    
    public OrderService(IPaymentGateway paymentGateway, IStockService stockService)
    {
        _paymentGateway = paymentGateway;
        _stockService = stockService;
    }
    
    public OrderResult PlaceOrder(Order order)
    {
        if (!_stockService.IsStockAvailable(order))
        {
            throw new OutOfStockException();
        }
        
        _paymentGateway.ProcessPayment(order);
            
        return new OrderResult(order);
    }
}

To test this service, let’s create replacements for the real payment gateway and stock service. We want to check what the OrderService class does when there’s stock available and when there isn’t.

For our test name, let’s follow the naming convention from The Art of Unit Testing. With this naming convention, a test name shows the entry point, the scenario, and the expected result separated by underscores.

Of course, that’s not the only naming convention. There are other ways to name our tests.

[TestClass]
public class OrderServiceTests
{
    [TestMethod]
    public void PlaceOrder_StockAvailable_CallsProcessPayment()
    {
        var fakePaymentGateway = new Mock<IPaymentGateway>();

        var fakeStockService = new Mock<IStockService>();
        fakeStockService
            .Setup(t => t.IsStockAvailable(It.IsAny<Order>()))
            .Returns(true);
        var orderService = new OrderService(fakePaymentGateway.Object, fakeStockService.Object);

        var order = new Order();
        orderService.PlaceOrder(order);

        fakePaymentGateway
            .Verify(t => t.ProcessPayment(order), Times.Once);
    }
}

What happened here? First, it creates a fake for IPaymentGateway with new Mock<IPaymentGateway>(). Moq can create fakes for classes too.

Then, it creates another fake for IStockService. This fake returns true when the method IsStockAvailable() is called with any order as a parameter.

Next, it uses the Object property of the Mock class to create instances of the fakes. With these two instances, it builds the OrderService.

Finally, using the Verify() method, it checks if the method ProcessPayment() was called once. A passing test now!
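
For the opposite scenario, when there’s no stock available, a test could look like the following sketch. It assumes MSTest’s Assert.ThrowsException() and reuses the same fakes,

[TestMethod]
public void PlaceOrder_NoStockAvailable_ThrowsAndDoesNotProcessPayment()
{
    var fakePaymentGateway = new Mock<IPaymentGateway>();

    // This time, the fake stock service reports no stock available
    var fakeStockService = new Mock<IStockService>();
    fakeStockService
        .Setup(t => t.IsStockAvailable(It.IsAny<Order>()))
        .Returns(false);
    var orderService = new OrderService(fakePaymentGateway.Object, fakeStockService.Object);

    var order = new Order();
    Assert.ThrowsException<OutOfStockException>(() => orderService.PlaceOrder(order));

    // The payment gateway should never be called if there's no stock
    fakePaymentGateway
        .Verify(t => t.ProcessPayment(It.IsAny<Order>()), Times.Never);
}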

Cut!…What I don’t like about Moq

Moq is easy to use. We can start using it in minutes! We only need to read the README and quickstart files in the documentation. But…

For Moq, everything is a mock: Mock<T>. But strictly speaking, not everything is a mock. There’s a difference between stubs and mocks.

The xUnit Test Patterns book presents a detailed classification of fakes or doubles: fakes, stubs, mocks, dummies, and spies. And, The Art of Unit Testing book reduces this classification to only three types: fakes, stubs, and mocks.

Other libraries use Fake, Substitute, or Stub/Mock instead of only Mock.

Moq has chosen this simplification to make it easier to use. But, this could lead us to misuse the term “mock.” So far, I have deliberately used the word “fake” instead of “mock” for a reason.

For Moq, MockRepository is a factory of mocks. We can verify all mocks created from this factory in a single call. But, a repository is a pattern to abstract creating and accessing records in a data store. We will find OrderRepository or EmployeeRepository. Are MockSession or MockGroup better alternatives? Probably. Naming is hard anyway.
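
For reference, this is a minimal sketch of MockRepository with the fakes from our example. Create() builds each mock, and VerifyAll() checks all their setups in a single call,

var repository = new MockRepository(MockBehavior.Default);

var fakePaymentGateway = repository.Create<IPaymentGateway>();
var fakeStockService = repository.Create<IStockService>();

// ...setups and the code under test go here...

// Verify the setups on every mock created from this repository
repository.VerifyAll();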

Conclusion

Voilà! That’s how we create fakes and mocks with Moq. Moq is a great library! It keeps its promise. It’s easy to set up dependencies in our tests. We need to know only a few methods to start using it. We only need: Setup, Returns, Throws, and Verify. It has chosen to lower the barrier of writing tests. Give it a try! To mock or not to mock!

If you use Moq often, avoid typing the same method names all the time with these snippets I wrote for Visual Studio.

For more tips on writing unit tests, check my posts on how to write good unit tests by reducing noise and writing failing tests. And don’t miss the rest of my Unit Testing 101 series where I cover more subjects like this one.

Want to write readable and maintainable unit tests in C#? Join my course Mastering C# Unit Testing with Real-world Examples on Udemy and learn unit testing best practices while refactoring real unit tests from my past projects. No more tests for a Calculator class.

Happy mocking time!