Programs that saved you 100 hours (Online tools, Git aliases and Visual Studio extensions)

Today I saw this Hacker News thread about Programs that saved you 100 hours. I want to show some of the tools that have saved me a lot of time. Probably not 100 hours yet.

1. Online Tools

  • JSON Utils It converts a JSON file into C# classes. You can generate C# properties with attributes and change their casing. Visual Studio has this feature as “Paste JSON as Classes”. But, it doesn’t change the property names from camelCase in our JSON strings to PascalCase in our C# classes.

  • NimbleText It applies a replace pattern to every single item of an input dataset. I don’t need to type crazy key sequences, like playing the drums. For example, it’s useful to generate SQL INSERT or UPDATE statements from sample data in CSV format.

  • jq play An online version of jq, a JSON processor. It lets you slice, filter, map, and transform JSON data.
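As an example of the first tool: given a small JSON payload, JSON Utils generates a C# class with attributes that map the camelCase names to PascalCase properties. A sketch of the kind of output you get (the Person class and its properties are made-up names):

```csharp
using Newtonsoft.Json;

// For a payload like: { "firstName": "Alice", "isActive": true }
// a tool like JSON Utils generates something along these lines
public class Person
{
    [JsonProperty("firstName")]
    public string FirstName { get; set; }

    [JsonProperty("isActive")]
    public bool IsActive { get; set; }
}
```

With those attributes, JsonConvert.DeserializeObject<Person>(json) maps the camelCase names in the JSON to the PascalCase properties.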

2. Git Aliases and Hooks

Aliases

I use Git from the command line most of the time. I have created or copied some aliases for my everyday workflows. These are some of my Git aliases:

alias gs='git status -sb' 
alias ga='git add ' 
alias gco='git checkout -b ' 
alias gc='git commit ' 
alias gacm='git add -A && git commit -m ' 
alias gcm='git commit -m ' 

alias gpo='git push origin -u ' 
alias gconf='git diff --name-only --diff-filter=U'

Not Git related, but I have also created some aliases to use the Pomodoro technique.

alias pomo='sleep 1500 && echo "Pomodoro" && tput bel' 
alias sb='sleep 300 && echo "Short break" && tput bel' 
alias lb='sleep 900 && echo "Long break" && tput bel'

I don’t need fancy applications or distracting websites. Only three aliases.

Hook to format commit messages

I work on a project that uses a branch naming convention. I need to include the type of task and the task number in the branch name. For example, feat/ABC123-my-branch. And, every commit message should include the task number too. For example, [ABC123] My awesome commit. I found a way to automate that with a prepare-commit-msg hook.

With this hook, I don’t need to memorize every task number. I only need to include the ticket number when creating my branches. This is the Git hook I use:

#!/bin/bash
# prepare-commit-msg hook: grab the ticket number from the
# current branch name and prepend it to the commit message
FILE=$1
MESSAGE=$(cat "$FILE")
TICKET=[$(git rev-parse --abbrev-ref HEAD | grep -Eo '^(\w+/)?(\w+[-_])?[0-9]+' | grep -Eo '(\w+[-])?[0-9]+' | tr "[:lower:]" "[:upper:]")]

# Do nothing if there's no ticket number in the branch name
# or if the message already starts with it
if [[ $TICKET == "[]" || "$MESSAGE" == "$TICKET"* ]]; then
  exit 0;
fi

echo "$TICKET $MESSAGE" > "$FILE"

This hook grabs the ticket number from the branch name and prepends it, wrapped in square brackets, to my commit messages.

3. Visual Studio extensions

I use Visual Studio almost every working day. I rely on extensions to simplify some of my work. These are some of the extensions I use:

  • CodeMaid It’s like a janitor. It helps me to clean extra spaces and blank lines, remove and sort using statements, insert blank lines between properties, and much more.

  • MappingGenerator I found this extension recently and it has been a time saver. Do you need to initialize an object with default values? Do you need to create a view model or DTO from a business object? MappingGenerator has you covered!

Voilà! These are the tools that have saved me 100 hours! If you want to try more Visual Studio extensions, check my Visual Studio Setup for C#. If you’re new to Git, check my Git Guide for Beginners and my Git Guide for TFS Users.

Happy coding!

ASP.NET Core Guide for ASP.NET Framework Developers

If you are a C# developer, chances are you have heard about this new .NET Core thing and the new version of the ASP.NET framework. You can continue to work with ASP.NET Web API or any other framework from the old ASP.NET you’ve known for years. But, ASP.NET Core is here to stay.

In case you missed it, “ASP.NET Core is a cross-platform, high-performance, open-source framework for building modern, cloud-based, Internet-connected applications.” “ASP.NET Core is a redesign of ASP.NET 4.x, with architectural changes that result in a leaner, more modular framework.”

ASP.NET Core has brought a lot of new features. For example, cross-platform development and deployment, built-in dependency injection, middlewares, health checks, out-of-the-box logging providers, hosted services, API versioning, and much more.

Don’t worry if you haven’t started working with ASP.NET Core yet. This is a new framework with lots of new features, but it has kept many features from the previous version. So, you will feel at home.

TL;DR

  1. You can create projects from the command line.
  2. NuGet packages are listed on the csproj files.
  3. csproj files don’t list .cs files anymore.
  4. There’s no Web.config, you have a JSON file instead.
  5. There’s no Global.asax, you have Startup.cs instead.
  6. You have a brand new dependency container.

1. Every journey begins with the first step

toddler's standing in front of beige concrete stair
Photo by Jukan Tateisi on Unsplash

If you are adventurous, download and install the .NET Core SDK and create a new empty web project from Visual Studio. These are the files you get:

|____appsettings.Development.json
|____appsettings.json
|____Program.cs
|____Properties
| |____launchSettings.json
|____<YourProjectName>.csproj
|____Startup.cs

ASP.NET Core has been created with other operating systems and IDEs in mind. Now, you can create a project, compile it, and run the tests from the command line.

For example, to create a new empty Web project from the command line, you can use dotnet new web.

2. Where is the packages.config file?

If you install a NuGet package into your brand new ASP.NET Core project, one thing you will notice is the missing packages.config file. If you remember, it was an XML file that listed the installed packages.

But, where in the world are those packages referenced in ASP.NET Core projects? In the csproj file of your project!

Now, a csproj file looks like this:

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Newtonsoft.Json" Version="12.0.3" />
  </ItemGroup>

</Project>

NuGet packages are referenced under an ItemGroup in a PackageReference node. There you are, Newtonsoft.Json! Goodbye, packages.config file!

3. Wait! What happened to csproj files?

Csproj files have been simplified too. Before, a csproj file listed every single file in the project. All your files with the .cs extension were in it. Now, every .cs file within the folder structure of the project is part of it.

Before, things started to get complicated as time went by and the number of files increased. Sometimes, merge conflicts were a nightmare. There were files under version control not included in the csproj file. Were they meant to be excluded because they didn’t apply anymore? Or somebody tried to solve a merge conflict and forgot to include them? This problem is no more!

4. Where is the Web.config file?

Another missing file is the Web.config file. Instead, you have a JSON file: the appsettings.json file. You can use strings, integers, booleans, and arrays in your config file.

There is even support for sections and subsections. Before, if you wanted to achieve that, you had to come up with a naming convention for your keys. For example, prepending the section and subsection name in every key name.

Probably, you have used ConfigurationManager all over the place in your code to read configuration values. Now, you can have a class with properties mapped to a section or subsection of your config file. And you can inject it into your services.

// appsettings.json
{
    "MySettings": {
        "ASetting": "ASP.NET Core rocks",
        "AnotherSetting": true
    }
}

public class MySettings
{
    public string ASetting { get; set; }
    public bool AnotherSetting { get; set; }
}

public class YourService
{
    public YourService(IOptions<MySettings> settings)
    {
        // etc
    }
}

You still need to register that configuration into the dependency container. More on that later!

Additionally, you can override keys per environment. You can use the name of your environment in the file name. For example, appsettings.Development.json or appsettings.QA.json. You can specify the current environment with an environment variable or in the launchSettings.json file.

There’s even support for sensitive settings that you don’t want to put under version control: the secrets.json file. You can manage this file from the command line too.

5. Where is the Global.asax file?

Yet another missing file: Global.asax. You used it to perform actions on application or session events. For example, when the application started or ended. It was the place to do one-time setups, register filters, or define routes.

But now, we use the Startup.cs file. It contains the initialization and all the settings needed to run the application. A Startup.cs file looks like this:

public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }
        
    public void ConfigureServices(IServiceCollection services)
    {
    }

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
    }
}

It has two methods: ConfigureServices() and Configure().

The Configure() method replaces the Global.asax file. It creates the app’s request processing pipeline. This is the place to register a filter or a default route for your controllers.

And the ConfigureServices() method configures the services to be injected into the dependency container…Wait, what?

6. A brand new dependency container

Prior to ASP.NET Core, if you wanted to apply dependency injection, you had to bring your own container and wire up the discovery of services for your controllers. For example, you had an XML file to map your interfaces to your classes, or you did some assembly scanning to do it automatically.

Now, a brand new dependency container is included out of the box. You can inject dependencies into your services, filters, middlewares, and controllers. It lacks some of the features of your favorite dependency container, but it is meant to suit “90% of the scenarios.”

If you are familiar with the vocabulary of other containers, AddTransient(), AddScoped(), and AddSingleton() will ring a bell. These are the lifetimes of the injected services, ranging from the shortest to the longest.

More specifically, a transient service is created every time a new instance is requested. A scoped service is created once per request. And a singleton service is created only once per application lifetime.

To register your services, do it inside the ConfigureServices() method of the Startup class. This is also where you bind your classes to a section or subsection of the config file.

// In the Startup.cs file

public void ConfigureServices(IServiceCollection services)
{
    services.AddTransient<IMyService, MyService>();
    
    var section = Configuration.GetSection("MySettings");
    services.Configure<MySettings>(section);
}

7. Conclusion

You have only scratched the surface of ASP.NET Core. You have learned about some of the changes ASP.NET Core has brought. But, if you haven’t started with ASP.NET Core, go and try it. You may be surprised by how things are done now.

UPDATE (Oct 2023): I wrote this post back in the day when ASP.NET Core was brand new. This is the post I wish I had read back then. In recent versions, ASP.NET Core simplified configurations by ditching Startup.cs files. All other concepts remain the same.

This post was originally published on exceptionnotfound.net as part of Guest Writer Program. Thanks Matthew for editing this post.

For more ASP.NET Core content, read how to read configuration values, how to create a caching layer, and how to create a CRUD API with Insight.Database.

Happy coding!

The Art of Unit Testing: Takeaways

This is THE book to learn how to write unit tests. It goes from the definition of a unit test to how to implement unit testing in your organization. It covers the subject extensively.

“The Art of Unit Testing” teaches us to treat unit tests with the same attention and care we treat production code. For example, we should have test reviews instead of only code reviews.

These are some of the main ideas from “The Art Of Unit Testing.”

TL;DR

  1. Write trustworthy tests
  2. Have a unit test project per project and a test class per class
  3. Keep a set of always-passing unit tests
  4. Use “UnitOfWork_Scenario_ExpectedBehaviour” for your test names
  5. Use builders instead of SetUp methods

1. Write Trustworthy Tests

Write trustworthy tests. A test is trustworthy if you don’t have to debug it to make sure it passes.

To write trustworthy tests, avoid any logic in your tests. If you have conditionals and loops in your tests, you have logic in them.

You can find logic in helper methods, fakes, and assert statements. Avoid logic in the assert statements; use hardcoded values instead.

Tests with logic are hard to read and replicate. A unit test should consist of method calls and assert statements.
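For example, here is a sketch of a trustworthy test with MSTest. The DiscountCalculator class and its method are made up for illustration; the point is the hardcoded expected value in the assert, with no conditionals or loops anywhere in the test:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// A made-up class under test
public class DiscountCalculator
{
    public decimal ApplyDiscount(decimal price, decimal percentage)
        => price - (price * percentage / 100m);
}

[TestClass]
public class DiscountCalculatorTests
{
    [TestMethod]
    public void ApplyDiscount_TenPercent_ReturnsDiscountedPrice()
    {
        var calculator = new DiscountCalculator();

        var result = calculator.ApplyDiscount(price: 100m, percentage: 10m);

        // A hardcoded value, not: 100m - (100m * 10m / 100m)
        Assert.AreEqual(90m, result);
    }
}
```

If that test fails, you know the production code is wrong. You don’t have to debug the test to rule out a bug in the expected value.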

2. Organize Your Tests

Have a unit test project per project and a test class per class. You should be able to easily find the tests for your classes and methods.

Create separate projects for your unit and integration tests. Add the suffixes “UnitTests” and “IntegrationTests” accordingly. For a project Library, name your test projects Library.UnitTests and Library.IntegrationTests.

Create tests inside a file with the same name as the tested code, adding the suffix “Tests”. For MyClass, your tests should be inside MyClassTests. Also, you can group features in separate files by adding the feature name as a suffix. For example, MyClassTests.AnAwesomeFeature.

3. Have a Safe Green Zone

Keep a set of always-passing unit tests. You will need some configurations for your integration tests: a database connection, environment variables, or some files in a folder. Integration tests will fail if those configurations aren’t in place. So, developers could ignore some failing tests, and real issues, because of those missing configurations.

Therefore, separate your unit tests from your integration tests. Put them into different projects. This way, you will distinguish between a missing setup and an actual problem with your code.

A failing test should mean a real problem, not a false positive.

The Art of Unit Testing Takeaways
Whangarei Falls, Whangarei, New Zealand. Photo by Tim Swaan on Unsplash

4. Use a Naming Convention

Use UnitOfWork_Scenario_ExpectedBehaviour for your test names. You can read it as follows: when calling “UnitOfWork” with “Scenario”, then it “ExpectedBehaviour”.

In this naming convention, a unit of work is any logic exposed through a public method that returns a value, changes the system state, or makes an external invocation.

With this naming convention, the logic under test, the inputs, and the expected result are all clear from the test name. You will end up with long test names, but it’s OK to have long test names for the sake of readability.
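For example, for a made-up Account class with a Withdraw method, a test following this convention could look like this (when calling Withdraw with a valid amount, then it reduces the balance):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// A made-up class under test
public class Account
{
    public decimal Balance { get; private set; }

    public Account(decimal balance) => Balance = balance;

    public void Withdraw(decimal amount) => Balance -= amount;
}

[TestClass]
public class AccountTests
{
    // UnitOfWork: Withdraw, Scenario: ValidAmount, ExpectedBehaviour: ReducesBalance
    [TestMethod]
    public void Withdraw_ValidAmount_ReducesBalance()
    {
        var account = new Account(balance: 100m);

        account.Withdraw(40m);

        Assert.AreEqual(60m, account.Balance);
    }
}
```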

5. Prefer Builders over SetUp methods

Use builders instead of SetUp methods. Tests should be isolated from other tests. Sometimes, SetUp methods create shared state among your tests. You will find tests that pass in isolation but fail alongside other tests, and tests that need to be run many times to pass.

Often, SetUp methods end up with initialization code that only some tests need. Tests should create their own world. Initialize what’s needed inside every test using builders.
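For example, instead of creating a shared object in a SetUp method, every test can build exactly what it needs. A sketch with a made-up Order class and its builder:

```csharp
// A made-up class used in tests
public class Order
{
    public string Customer { get; }
    public decimal Total { get; }

    public Order(string customer, decimal total)
    {
        Customer = customer;
        Total = total;
    }
}

// A builder with sensible defaults; each test overrides only what it cares about
public class OrderBuilder
{
    private string _customer = "Any customer";
    private decimal _total = 0m;

    public OrderBuilder WithCustomer(string customer)
    {
        _customer = customer;
        return this;
    }

    public OrderBuilder WithTotal(decimal total)
    {
        _total = total;
        return this;
    }

    public Order Build() => new Order(_customer, _total);
}
```

Inside a test, you would write something like: var order = new OrderBuilder().WithTotal(100m).Build(); No shared state, and the test shows only the values it depends on.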

Voilà! These are my main takeaways. Unit testing is a broad subject, and The Art of Unit Testing covers almost all you need to know about it. The main lesson from this book is to write readable, maintainable, and trustworthy tests. Remember, the next person reading your tests will be you.

“Your tests are your safety net, so do not let them rot.”

If you’re new to unit testing, start reading my Unit Testing 101. You will write your first unit test in C# with MSTest. For more naming conventions, check how to name your unit tests.

Happy testing!

Pipeline pattern: An assembly line of steps

You need to do a complex operation made of smaller consecutive tasks. These tasks might change from client to client. This is how you can use the Pipeline pattern to achieve that. Let’s implement the Pipeline pattern in C#.

With the Pipeline pattern, a complex task is divided into separate steps. Each step is responsible for a piece of the logic of that complex task. Like an assembly line, steps in a pipeline are executed one after the other, depending on the output of previous steps.

TL;DR Pipeline pattern is like the enrich pattern with factories. Pipeline = Command + Factory + Enricher

When to use the Pipeline pattern?

You can use the Pipeline pattern if you need to do a complex operation made of smaller tasks or steps. If a single task of this complex operation fails, you want to mark the whole operation as failed. Also, the tasks in your operation may vary per client or type of operation.

Some common scenarios for the Pipeline pattern are booking a room, generating an invoice, or creating an order.

Let’s use the Pipeline pattern

A pipeline is like an assembly line in a factory. Each workstation in an assembly line adds a part until the product is assembled. For example, in a car factory, there are separate stations to put on the doors, the engine, and the wheels.

With the Pipeline pattern, you can create reusable steps to perform each action in your “assembly line”. Then, you run these steps one after the other in a pipeline.

For example, in an e-commerce system to sell an item, you need to update the stock, charge a credit card, send a delivery order and notify the client.

Pipeline pattern in C#
Photo by Lenny Kuhne on Unsplash

Let’s implement our own pipeline

First, create a command/context class for the inputs of the pipeline.

public class BuyItemCommand : ICommand
{
    // Item code, quantity, credit card information, etc
}

Then, create one class for each workstation of your assembly line. These are the steps.

In our e-commerce example, steps will be UpdateStockStep, ChargeCreditCardStep, SendDeliveryOrderStep and NotifyClientStep.

public class UpdateStockStep : IStep<BuyItemCommand>
{
    public Task ExecuteAsync(BuyItemCommand command)
    {
        // Put your own logic here
        return Task.CompletedTask;
    }
}
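These snippets rely on ICommand, IStep<T>, and IPipeline abstractions that the post doesn’t define. A minimal sketch of them, assuming only what the snippets use, could be:

```csharp
using System.Threading.Tasks;

// Marker interface for the pipeline's input
public interface ICommand { }

// A single workstation in the assembly line
public interface IStep<TCommand> where TCommand : ICommand
{
    Task ExecuteAsync(TCommand command);
}

// The assembly line itself: runs all its steps
public interface IPipeline
{
    Task ExecuteAsync();
}
```

The IPipelineBuilder interface used below would follow the same shape, with a single CreatePipeline method that takes the command and returns an IPipeline.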

Next, we need a builder to create our pipeline with its steps. Since the steps may vary depending on the type of operation or the client, you can load your steps from a database or configuration files.

For our e-commerce example, we don’t need to create a delivery order when we sell an eBook. In that case, we need to build two pipelines: BuyPhysicalItemPipeline for products that require shipping and BuyDigitalItemPipeline for products that don’t.

But, let’s keep it simple. Let’s create a BuyItemPipelineBuilder.

public class BuyItemPipelineBuilder : IPipelineBuilder
{
    private readonly IStep<BuyItemCommand>[] Steps;

    public BuyItemPipelineBuilder(IStep<BuyItemCommand>[] steps)
    {
        Steps = steps;
    }

    public IPipeline CreatePipeline(BuyItemCommand command)
    {
        // Build the pipeline with the injected steps. You could
        // load these steps from a database or configuration files
        return new BuyItemPipeline(command, Steps);
    }
}

Now, create the pipeline to run all its steps. It will have a loop to execute each step.

public class BuyItemPipeline : IPipeline
{
    private readonly BuyItemCommand Command;
    private readonly IStep<BuyItemCommand>[] Steps;

    public BuyItemPipeline(BuyItemCommand command, IStep<BuyItemCommand>[] steps)
    {
        Command = command;
        Steps = steps;
    }

    public async Task ExecuteAsync()
    {
        foreach (var step in Steps)
        {
            await step.ExecuteAsync(Command);
        }
    }
}

Also, you can use the Decorator pattern to perform orthogonal actions on the execution of the pipeline or every step. You can run the pipeline inside a database transaction, log every step or measure the execution time of the pipeline.
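For example, a logging decorator can wrap any IPipeline without touching the pipeline itself. A sketch, using plain Console logging and redeclaring the IPipeline interface so the snippet stands alone:

```csharp
using System;
using System.Threading.Tasks;

// The pipeline abstraction, repeated here for completeness
public interface IPipeline
{
    Task ExecuteAsync();
}

// Decorator: same interface, extra behavior around the wrapped pipeline
public class LoggingPipelineDecorator : IPipeline
{
    private readonly IPipeline _inner;

    public LoggingPipelineDecorator(IPipeline inner)
    {
        _inner = inner;
    }

    public async Task ExecuteAsync()
    {
        Console.WriteLine("Pipeline started");
        await _inner.ExecuteAsync();
        Console.WriteLine("Pipeline finished");
    }
}
```

To use it, wrap the pipeline before running it: var pipeline = new LoggingPipelineDecorator(builder.CreatePipeline(command)); The same idea works for transactions or timing, and for decorating individual steps.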

Now that everything is in place, let’s run our pipeline.

var command = new BuyItemCommand();
var steps = new IStep<BuyItemCommand>[]
{
    new UpdateStockStep(),
    new ChargeCreditCardStep()
};
var builder = new BuyItemPipelineBuilder(steps);
var pipeline = builder.CreatePipeline(command);

await pipeline.ExecuteAsync();

Some steps of the pipeline can be delayed for later processing. The user doesn’t have to wait for every step to finish to continue interacting with the system. You can schedule the execution of some steps in background jobs for later processing. For example, you can use Hangfire or roll your own queue mechanism (Kiukie…Ahem, ahem).

Conclusion

Voilà! This is the Pipeline pattern. You can find it out there or implement it on your own. Depending on the expected load of your pipeline, you could use Azure Functions or any other queue mechanism to run your steps.

I have used and implemented this pattern before. I used it in an invoicing platform to generate documents. Each document and client type had a different pipeline.

Also, I have used it in a reservation management system. I had separate pipelines to create, modify and cancel reservations.

PS: You can take a look at Pipelinie to see more examples. Pipelinie offers abstractions and default implementations to roll your own pipelines and builders.

All ideas and contributions are more than welcome!

canro91/Pipelinie - GitHub

Clean Code: Takeaways

Clean Code will change the way you code. It doesn’t teach how to code in a particular language. Instead, it teaches how to produce code that’s easy to read, grasp, and maintain. Although the code samples are in Java, all concepts translate to other languages.

Clean Code starts by defining what clean code is, collecting quotes from book authors and other well-known people in the field. It covers the subject from variables to functions to architectural design.

The whole concept of Clean Code is based on the premise that code should be optimized to be read. It’s true we, as programmers, spend way more time reading code than actually writing it.

These are the three chapters I found most instructive. The whole book is instructive. But, if I could only read a few chapters, I would read the following ones.

Naming

The first concept after the definition of clean code is naming things. This chapter encourages names that reveal intent and are easy to pronounce. And, it warns against punny or funny names.

Instead of writing int d; // elapsed time in days, write int elapsedTimeInDays.

Instead of writing, genymdhms, write generationTimestamp.

Instead of writing, HolyHandGrenade, write DeleteItems.

Comments

Clean Code is better than bad code with comments.

We all have heard that commenting our code is the right thing to do. But, this chapter shows what actually needs comments.

Have you seen comments like i++; // Increment i before? Have you written one? I did once.

Don’t use a comment when a well-named function or variable can say the same thing.
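For example, a comment explaining a condition can become a method whose name says the same thing. A sketch with a made-up Employee class, in the spirit of the book’s own example:

```csharp
public class Employee
{
    public int Age { get; set; }
    public bool IsHourly { get; set; }

    // Before, callers wrote:
    //   // check if the employee is eligible for full benefits
    //   if (employee.IsHourly && employee.Age > 65) { ... }
    // Now the method name says what the comment used to say:
    public bool IsEligibleForFullBenefits() => IsHourly && Age > 65;
}
```

Callers simply write if (employee.IsEligibleForFullBenefits()) and the comment disappears.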

Don’t keep the list of changes and authors in comments at the top of your files. That’s what version control systems are for.

Functions

There is one entire chapter devoted to functions. It recommends writing short and concise functions.

“Functions should do one thing. They should do it well”.

This chapter discourages functions with boolean parameters. Those functions have to handle both the true and false scenarios. Then, they won’t do only one thing.
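For example, instead of one method hiding two behaviors behind a flag, write two methods that each do one thing. A sketch with a made-up DocumentStore class:

```csharp
using System;
using System.Collections.Generic;

public class DocumentStore
{
    private readonly Dictionary<string, string> _documents
        = new Dictionary<string, string>();

    // Avoid: public void Save(string name, string content, bool shouldOverwrite)
    // The flag forces the method to handle two scenarios.

    // Prefer two methods, each doing one thing:
    public void Save(string name, string content)
    {
        if (_documents.ContainsKey(name))
            throw new InvalidOperationException($"'{name}' already exists");
        _documents[name] = content;
    }

    public void Overwrite(string name, string content)
        => _documents[name] = content;
}
```

Now every call site reads as plain English: store.Save(...) or store.Overwrite(...), with no mysterious true or false argument to decode.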

Voilà! These are the three chapters I found the most instructive and challenging. If you could only read a few chapters, read those ones. Clean Code should be obligatory reading for every single developer. Teachers should, at least, point students to this book. This book doesn’t deserve to be read; it deserves to be studied. If you’re new to the Clean Code concept, grab a copy and study it.

If you’re interested in my takeaways from other books, take a look at Clean Coder and The Art of Unit Testing.

Happy reading!