Pragmatic Thinking and Learning: Three takeaways

This book sits between learning and programming. It’s a learning guide for developers. Well, it’s more than that.

“Pragmatic Thinking and Learning” starts with an expertise model, moves to an analogy of how the brain works, and ends with how to capture new ideas and organize your learning.

Since learning is the best skill to have, this book is valuable and worth reading. You will find helpful tips for your learning and your everyday work. For example: always keep a piece of paper with you to jot down ideas, and prefer ink over memory. At the end of each chapter, there are exercises to add to your routine and practice daily.

Expertise model

You aren’t a novice or an expert at everything. You are in different stages per skill.

  • Novices: They don’t know how to respond to mistakes. They only want to achieve an immediate goal. They need a recipe.
  • Advanced Beginners: They try tasks on their own. They don’t have a big picture yet. They start to incorporate advice from past experiences.
  • Competent: They can create mental models of the problem domain. They troubleshoot problems on their own. They can mentor and don’t bother experts so much.
  • Proficient: They want to understand concepts. They can correct previous tasks and perform better next time. They know when to apply principles in context.
  • Expert: They work from intuition. They know which details to focus on and which to ignore.

“Once you truly become an expert, you become painfully aware of just how little you really know.”

Let the R-mode be free

Roughly speaking, the brain is a single-bus dual-core processor. Only one CPU can access memory banks at a time.

A computer processor
Another single-bus multi-core processor. Photo by Christian Wiediger on Unsplash

Our brain works in two modes: linear mode (or L-mode) and rich mode (or R-mode). Coursera Learning How to Learn course calls these two modes: focus and diffuse mode. You need these two modes to work together.

The L-mode is rational and the R-mode is asynchronous.

The L-mode or focus mode works when you are actively studying a subject. You’re sitting in front of your computer or a textbook figuring out the details of that subject.

The R-mode or diffuse mode, on the other hand, works when you are away from the subject you’re studying. Have you ever woken up with the solution to a problem you left at work the day before? Have you come up with a solution in the shower? That’s the R-mode or diffuse mode.

Since most of the thinking happens away from the keyboard, let the R-mode be free:

  • Use metaphors
  • Turn the problem on its head. Look at your problem from a different perspective. Try to come up with 3 or 4 ways to deliberately cause the problem you are debugging.
  • Try oblique strategies. For example, for musicians: how would your composition sound from outside the room? In a dark room? For developers: what would your library look like once it’s finished? How would someone use it?
  • Change your routine
  • Go for a walk. Have you seen children playing “the floor is lava,” trying not to step on the leaves on the ground? That’s the idea.

Bonus

  • Keep an engineering log. Trust ink over memory.
  • Breathe, don’t hiss. Know when to temporarily step away from your problem. Sometimes you only need to sleep on it.
  • Trust intuition but verify
  • Consider the context. There’s intention behind mistakes.
  • Prefer aesthetics. Strive for good design; it really works better. It might remind you of well-crafted and well-packaged expensive cellphones and laptops.
  • Always have something with you to take notes. You never know where you will be when your next idea shows up.
  • Learn something by doing or building. Instead of dissecting a frog, build one.

Happy thinking and learning!

How to create a CRUD API with ASP.NET Core and Insight.Database

Almost all web applications we write talk to a database. We could write our own database access layer, but we might end up with a lot of boilerplate code. Let’s use Insight.Database to create a CRUD API for a catalog of products with ASP.NET Core.

1. Why use an ORM?

An object-relational mapping (ORM) is a library that converts objects to database records and vice-versa.

ORMs vary in size and features. We can find ORMs that manipulate database objects and generate SQL statements. Also, we can find micro-ORMs that make us write SQL queries.

We can roll our own database access layer. But, an ORM helps us to:

  • Open and close connections, commands, and readers
  • Parse query results into C# objects
  • Prepare input values to avoid SQL injection attacks
  • Write less code. And less code means fewer bugs.
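To see what we’d save, here’s a sketch of the raw ADO.NET boilerplate a single query takes without an ORM. The Products table and Product class come later in this post; the connection string is assumed to exist.

```csharp
using Microsoft.Data.SqlClient;

// A sketch: what every query looks like without an ORM
using var connection = new SqlConnection(connectionString);
await connection.OpenAsync();

using var command = new SqlCommand(
    "SELECT Id, Name, Price, Description FROM Products", connection);
using var reader = await command.ExecuteReaderAsync();

var products = new List<Product>();
while (await reader.ReadAsync())
{
    // Map every column by hand
    products.Add(new Product
    {
        Id = reader.GetInt32(0),
        Name = reader.GetString(1),
        Price = reader.GetDecimal(2),
        Description = reader.IsDBNull(3) ? null : reader.GetString(3)
    });
}
```

An ORM collapses all of that into a single method call.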

2. What is Insight.Database?

Insight.Database is a “fast, lightweight .NET micro-ORM.”

Insight.Database is the .NET micro-ORM that nobody knows about because it’s so easy, automatic, and fast (and well-documented) that nobody asks questions about it on StackOverflow.

Insight.Database maps properties from C# classes to query parameters and query results back to C# classes with almost no mapping code.

Insight.Database supports record post-processing, too. We can manipulate records as they are read. For example, I used this feature to trim whitespace-padded strings from a legacy database without using Trim() in every mapping class.

Unlike other ORMs like OrmLite, Insight.Database doesn’t generate SQL statements for us. We have to write our own SQL queries or stored procedures. In fact, the Insight.Database documentation recommends calling our database through stored procedures.

Trendy apparel store
Let's create our catalog of products. Photo by Clark Street Mercantile on Unsplash

3. A CRUD application with Insight.Database and ASP.NET Core

Let’s create a simple CRUD application for a catalog of products using Insight.Database.

Before we begin, we should have a SQL Server instance up and running. For example, we could use SQL Server Express LocalDB, shipped with Visual Studio.

Of course, Insight.Database has providers to work with other databases like MySQL, SQLite, or PostgreSQL.

Create the skeleton

First, let’s create an “ASP.NET Core Web API” application with Visual Studio or dotnet cli for our catalog of products. Let’s call our solution: ProductCatalog.

After creating our API project, we will have a file structure like this one:

|____appsettings.Development.json
|____appsettings.json
|____Controllers
| |____WeatherForecastController.cs
|____ProductCatalog.csproj
|____Program.cs
|____Properties
| |____launchSettings.json
|____WeatherForecast.cs

Let’s delete the files WeatherForecast.cs and WeatherForecastController.cs. Those are sample files. We won’t need them for our catalog of products.

Now, let’s create a ProductController inside the Controllers folder. In Visual Studio, let’s choose the template: “API Controller with read/write actions.” We will get a controller like this:

using Microsoft.AspNetCore.Mvc;

namespace ProductCatalog.Controllers;

[Route("api/[controller]")]
[ApiController]
public class ProductsController : ControllerBase
{
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new string[] { "value1", "value2" };
    }

    [HttpGet("{id}")]
    public string Get(int id)
    {
        return "value";
    }

    [HttpPost]
    public void Post([FromBody] string value)
    {
    }

    [HttpPut("{id}")]
    public void Put(int id, [FromBody] string value)
    {
    }

    [HttpDelete("{id}")]
    public void Delete(int id)
    {
    }
}

We’re using implicit usings and file-scoped namespaces. Those are some recent C# features.

If we run the project and make a GET request, we see two results.

GET request
A request to our GET endpoint using curl

We’re ready to start!

Get all products

Create the database

Let’s create a database ProductCatalog and a Products table inside our SQL Server instance. We could use a table designer or write the SQL statement in SQL Server Management Studio.

A product will have an ID, name, price, and description.

CREATE TABLE [dbo].[Products]
(
    [Id] INT NOT NULL PRIMARY KEY IDENTITY,
    [Name] VARCHAR(50) NOT NULL,
    [Price] DECIMAL NOT NULL,
    [Description] VARCHAR(255) NULL
)

It’s a good idea to version control our database schema and, even better, use database migrations. Let’s keep it simple for now.

Modify GET

Let’s create a Product class in a new Models folder. And let’s name the properties of the Product class after the columns of the Products table. Insight.Database will map the two for us.

namespace ProductCatalog.Models;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
    public string Description { get; set; }
}

Next, let’s modify the first Get method in the ProductController class to return an IEnumerable<Product> instead of IEnumerable<string>.

We need to add using ProductCatalog.Models; at the top of our file.

using Microsoft.AspNetCore.Mvc;
using ProductCatalog.Models;
//    ^^^^^

namespace ProductCatalog.Controllers;

[Route("api/[controller]")]
[ApiController]
public class ProductsController : ControllerBase
{
    [HttpGet]
    public IEnumerable<Product> Get()
    //                 ^^^^^
    {
    }
    
    // ...
}

Now, let’s install the Insight.Database NuGet package. After that, let’s update the Get() method to query the database with a stored procedure called GetAllProducts. We need the QueryAsync() extension method from Insight.Database.

using Insight.Database;
//    ^^^^^
using Microsoft.AspNetCore.Mvc;
using Microsoft.Data.SqlClient;
//    ^^^^^
using ProductCatalog.Models;

namespace ProductCatalog.Controllers;

[Route("api/[controller]")]
[ApiController]
public class ProductsController : ControllerBase
{
    [HttpGet]
    public async Task<IEnumerable<Product>> Get()
    {
        var connection = new SqlConnection(@"Data Source=(localdb)\MSSQLLocalDB;Initial Catalog=ProductCatalog;Integrated Security=True");
        return await connection.QueryAsync<Product>("GetAllProducts");
        //                      ^^^^^
    }
    
    // ...
}

I know, I know…We will refactor this shortly…By the way, let’s not version control passwords or any sensitive information, please.

Create GetAllProducts stored procedure

Now, let’s write the GetAllProducts stored procedure.

Depending on our workplace, we should follow a naming convention for stored procedures. For example, sp_<TableName>_<Action>.

CREATE PROCEDURE [dbo].[GetAllProducts]
AS
BEGIN
    SELECT Id, Name, Price, Description
    FROM dbo.Products;
END
GO

To try things out, let’s insert a product,

INSERT INTO Products(Name, Price, Description)
VALUES ('iPhone SE', 399, 'Lots to love. Less to spend.')

And, if we make another GET request, we should find the new product,

GET request showing one new product
A GET with curl showing one product

It’s a good idea not to return models or business objects from our API methods. Instead, we should create view models or DTOs with only the properties a consumer of our API will need. But let’s keep it simple.
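As a sketch of that idea, a view model could expose only what a consumer needs. ProductViewModel is a hypothetical name, not used elsewhere in this post.

```csharp
namespace ProductCatalog.Models;

// A trimmed-down version of Product for API responses
public class ProductViewModel
{
    public string Name { get; set; }
    public decimal Price { get; set; }
}
```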

Use appsettings.json file

Let’s clean what we have. First, let’s move the connection string to the appsettings.json file. That’s how we should use configuration values with ASP.NET Core.

"ConnectionStrings": {
    "ProductCatalog": "Data Source=(localdb)\\MSSQLLocalDB;Initial Catalog=ProductCatalog;Integrated Security=True;"
}

Next, let’s register a SqlConnection into the dependencies container using AddScoped(). This will create a new connection on every request. Insight.Database opens and closes database connections for us.

using Microsoft.Data.SqlClient;
//    ^^^^^

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();

var connectionString = builder.Configuration.GetConnectionString("ProductCatalog");
//  ^^^^^
builder.Services.AddScoped(provider => new SqlConnection(connectionString));
//               ^^^^^

var app = builder.Build();
app.MapControllers();
app.Run();

Now, let’s update our ProductController to add a field and a constructor with a SqlConnection parameter.

using Insight.Database;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Data.SqlClient;
using ProductCatalog.Models;

namespace ProductCatalog.Controllers;

[Route("api/[controller]")]
[ApiController]
public class ProductsController : ControllerBase
{
    private readonly SqlConnection _connection;
    //               ^^^^^

    public ProductsController(SqlConnection connection)
    //     ^^^^^
    {
        _connection = connection;
    }

    [HttpGet]
    public async Task<IEnumerable<Product>> Get()
    {
        return await _connection.QueryAsync<Product>("GetAllProducts");
        //     ^^^^^
    }

    // ...
}

After this refactoring, our Get() should continue to work. Hopefully!

Pagination with OFFSET-FETCH

If our table grows, we don’t want to retrieve all products in a single database call. That would be slow!

Let’s query a page of results each time instead. Let’s add two parameters to the Get() method and the GetAllProducts stored procedure: pageIndex and pageSize.

using Insight.Database;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Data.SqlClient;
using ProductCatalog.Models;

namespace ProductCatalog.Controllers;

[Route("api/[controller]")]
[ApiController]
public class ProductsController : ControllerBase
{
    private readonly SqlConnection _connection;

    public ProductsController(SqlConnection connection)
    {
        _connection = connection;
    }

    [HttpGet]
    public async Task<IEnumerable<Product>> Get(
        int pageIndex = 1,
        int pageSize = 10)
    {
        var parameters = new
        //  ^^^^^
        {
            PageIndex = pageIndex - 1,
            PageSize = pageSize
        };
        return await _connection.QueryAsync<Product>(
            "GetAllProducts",
            parameters);
            // ^^^^^
    }

    // ...
}

We used an anonymous object with the query parameters. These property names should match the stored procedure parameters.

Next, let’s modify the GetAllProducts stored procedure to add two new parameters and update the SELECT statement to use the OFFSET/FETCH clauses.

ALTER PROCEDURE [dbo].[GetAllProducts]
    @PageIndex INT = 0,
    -- ^^^^^
    @PageSize INT = 10
    -- ^^^^^
AS
BEGIN
    SELECT Id, Name, Price, Description
    FROM dbo.Products
    ORDER BY Name
    -- ^^^^^
    OFFSET (@PageIndex*@PageSize) ROWS FETCH NEXT @PageSize ROWS ONLY;
    -- ^^^^^
END
GO

Our stored procedure uses a zero-based page index. That’s why we passed pageIndex - 1 from our C# code.

If we add more products to our table, we should get a subset of products on GET requests.

If you want to practice more, create an endpoint to find a product by id. You should change the appropriate Get() method and create a new stored procedure: GetProductById.
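As a starting point, the controller side of that exercise might look like this. It’s a sketch: it assumes a GetProductById stored procedure with an @Id parameter, and uses Insight.Database’s SingleAsync() method to read a single record.

```csharp
[HttpGet("{id}")]
public async Task<Product> Get(int id)
{
    // SingleAsync() maps the first record to a Product
    return await _connection.SingleAsync<Product>(
        "GetProductById",
        new { Id = id });
}
```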

Insert a new product

Modify POST

To insert a new product, let’s create an AddProduct class inside the Models folder. It should have three properties: name, price, and description. That’s what we want to store for our products.

namespace ProductCatalog.Models;

public class AddProduct
{
    public string Name { get; set; }
    public decimal Price { get; set; }
    public string Description { get; set; }
}

Next, let’s update the Post() method in the ProductController to use AddProduct as a parameter. This time, we need the ExecuteAsync() method instead.

using Insight.Database;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Data.SqlClient;
using ProductCatalog.Models;

namespace ProductCatalog.Controllers;

[Route("api/[controller]")]
[ApiController]
public class ProductsController : ControllerBase
{
    private readonly SqlConnection _connection;

    public ProductsController(SqlConnection connection)
    {
        _connection = connection;
    }

    // ...

    [HttpPost]
    public async Task Post([FromBody] AddProduct request)
    {
        var product = new
        {
            Name = request.Name,
            Price = request.Price,
            Description = request.Description
        };
        await _connection.ExecuteAsync("AddProduct", product);
        //                ^^^^^
    }

    // ...
}

Create AddProduct stored procedure

Next, let’s create the AddProduct stored procedure. It will have a single INSERT statement.

CREATE PROCEDURE AddProduct
    @Name VARCHAR(50),
    @Price DECIMAL(18, 0),
    @Description VARCHAR(255) = NULL
AS
BEGIN
    INSERT INTO Products(Name, Price, Description)
    VALUES (@Name, @Price, @Description)
END
GO

We should validate input data, of course. For example, a name and price should be required. We could use a library like FluentValidation for that.

Finally, to add a new product, let’s make a POST request with a JSON body. It should include a name, price, and description. We will see our new product if we make another GET request.
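For example, the request body could look like this, reusing the sample product from earlier:

```json
{
    "name": "iPhone SE",
    "price": 399,
    "description": "Lots to love. Less to spend."
}
```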

POST followed by a GET request
A POST request followed by a GET request using curl

Did you notice we didn’t need any mapping code? We named our classes to match the stored procedure parameters and results. Great!

4. Conclusion

Voilà! That’s how to use Insight.Database to retrieve results and execute actions with stored procedures using the QueryAsync() and ExecuteAsync() methods. If we follow naming conventions, we won’t need any mapping code. With Insight.Database, we keep our data access layer to a few lines of code.

Your mission, Jim, should you decide to accept it, is to complete the Put() and Delete() methods to finish all CRUD operations. This post will self-destruct in five seconds. Good luck, Jim.

For more ASP.NET Core content, check how to write a caching layer with Redis and how to compress responses. If you’re coming from the old ASP.NET Framework, check my ASP.NET Core Guide for ASP.NET Framework Developers.

Happy coding!

Programs that saved you 100 hours (Online tools, Git aliases and Visual Studio extensions)

Today I saw this Hacker News thread about Programs that saved you 100 hours. I want to show some of the tools that have saved me a lot of time. Probably not 100 hours yet.

1. Online Tools

  • JSON Utils It converts JSON into C# classes. We can generate C# properties with attributes and change their casing. Visual Studio has this feature as “Paste JSON as Classes.” But it doesn’t change the property names from camelCase in our JSON strings to PascalCase in our C# classes.

  • NimbleText It applies a replace pattern to every single item of an input dataset. I don’t need to type crazy key sequences, like playing the drums. For example, it’s useful to generate SQL INSERT or UPDATE statements from sample data in CSV format.

  • jq play An online version of jq, a JSON processor. It lets us slice, filter, map, and transform JSON data.

2. Git Aliases and Hooks

Aliases

I use Git from the command line most of the time. I have created or copied some aliases for my everyday workflows. These are some of my Git aliases:

alias gs='git status -sb' 
alias ga='git add ' 
alias gco='git checkout -b ' 
alias gc='git commit ' 
alias gacm='git add -A && git commit -m ' 
alias gcm='git commit -m ' 

alias gpo='git push origin -u ' 
alias gconf='git diff --name-only --diff-filter=U'

Not Git related, but I have also created some aliases to use the Pomodoro technique.

alias pomo='sleep 1500 && echo "Pomodoro" && tput bel' 
alias sb='sleep 300 && echo "Short break" && tput bel' 
alias lb='sleep 900 && echo "Long break" && tput bel'

I don’t need fancy applications or distracting websites. Only three aliases.

Hook to format commit messages

I work on a project that uses a branch naming convention. I need to include the type of task and the task number in the branch name. For example, feat/ABC123-my-branch. And every commit message should include the task number too. For example, ABC123 My awesome commit. I found a way to automate that with a prepare-commit-msg hook.

With this hook, I don’t need to memorize every task number. I only need to include the ticket number when creating my branches. This is the Git hook I use:

#!/bin/bash
# prepare-commit-msg hook: prepend the ticket number from the branch name
FILE=$1
MESSAGE=$(cat "$FILE")
TICKET=[$(git rev-parse --abbrev-ref HEAD | grep -Eo '^(\w+/)?(\w+[-_])?[0-9]+' | grep -Eo '(\w+[-])?[0-9]+' | tr "[:lower:]" "[:upper:]")]
# Skip if there's no ticket in the branch name or the message already starts with it
if [[ $TICKET == "[]" || "$MESSAGE" == "$TICKET"* ]]; then
  exit 0;
fi

echo "$TICKET $MESSAGE" > "$FILE"

This hook grabs the ticket number from the branch name and prepends it to my commit messages.

3. Visual Studio extensions

I use Visual Studio almost every working day. I rely on extensions to simplify some work. These are some of the extensions I use:

  • CodeMaid It’s like a janitor. It helps me clean extra spaces and blank lines, remove and sort using statements, insert blank lines between properties, and much more.

  • MappingGenerator I found this extension recently, and it has been a time saver. Do you need to initialize an object with default values? Do you need to create a view model or DTO from a business object? MappingGenerator has you covered!

Voilà! These are the tools that have saved me 100 hours! If you want to try more Visual Studio extensions, check my Visual Studio Setup for C#. If you’re new to Git, check my Git Guide for Beginners and my Git guide for TFS Users.

Happy coding!

ASP.NET Core Guide for ASP.NET Framework Developers

If you are a C# developer, chances are you have heard about this new .NET Core thing and the new version of the ASP.NET framework. You can continue to work with ASP.NET Web API or any other framework from the old ASP.NET you’ve known for years. But, ASP.NET Core is here to stay.

In case you missed it, “ASP.NET Core is a cross-platform, high-performance, open-source framework for building modern, cloud-based, Internet-connected applications.” “ASP.NET Core is a redesign of ASP.NET 4.x, with architectural changes that result in a leaner, more modular framework.”

ASP.NET Core has brought a lot of new features. For example, cross-platform development and deployment, built-in dependency injection, middlewares, health checks, out-of-the-box logging providers, hosted services, API versioning, and much more.

Don’t worry if you haven’t started to work with ASP.NET Core yet. This is a new framework with lots of new features, but it keeps many features from the previous version. So, you will feel at home.

TL;DR

  1. You can create projects from the command line.
  2. NuGet packages are listed on the csproj files.
  3. csproj files don’t list .cs files anymore.
  4. There’s no Web.config; you have a JSON file instead.
  5. There’s no Global.asax, you have Startup.cs instead.
  6. You have a brand new dependency container.

1. Every journey begins with the first step

toddler's standing in front of beige concrete stair
Photo by Jukan Tateisi on Unsplash

If you are adventurous, download and install the .NET Core SDK and create a new empty web project from Visual Studio. These are the files you get:

|____appsettings.Development.json
|____appsettings.json
|____Program.cs
|____Properties
| |____launchSettings.json
|____<YourProjectName>.csproj
|____Startup.cs

ASP.NET Core has been created with other operating systems and IDEs in mind. Now, you can create a project, compile it, and run the tests from the command line.

For example, to create a new empty Web project from the command line, you can use dotnet new web.

2. Where is the packages.config file?

If you install a NuGet package into your brand new ASP.NET Core project, one thing you might notice is the missing packages.config file. If you remember, it was an XML file that listed the installed packages.

But, where in the world are those packages referenced in ASP.NET Core projects? In the csproj file of your project!

Now, a csproj file looks like this:

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Newtonsoft.Json" Version="12.0.3" />
  </ItemGroup>

</Project>

NuGet packages are referenced under ItemGroup in a PackageReference node. There you are Newtonsoft.Json! Goodbye, packages.config file!

3. Wait! What happened to csproj files?

Csproj files have been simplified too. Before, a csproj file listed every single file in the project. All your files with a .cs extension were in it. Now, every .cs file within the folder structure of the project is part of it.

Before, things started to get complicated as time went by and the number of files increased. Sometimes, merge conflicts were a nightmare. There were files under version control not included in the csproj file. Were they meant to be excluded because they didn’t apply anymore? Or somebody tried to solve a merge conflict and forgot to include them? This problem is no more!

4. Where is the Web.config file?

Another missing file is Web.config. Instead, you have a JSON file: the appsettings.json file. You can use strings, integers, booleans, and arrays in your config file.

There is even support for sections and subsections. Before, if you wanted to achieve that, you had to come up with a naming convention for your keys. For example, prepending the section and subsection name in every key name.

Probably, you have used ConfigurationManager all over the place in your code to read configuration values. Now, you can have a class with properties mapped to a section or subsection of your config file. And you can inject it into your services.

// appsettings.json
{
    "MySettings": {
        "ASetting": "ASP.NET Core rocks",
        "AnotherSetting": true
    }
}
public class MySettings
{
    public string ASetting { get; set; }
    public bool AnotherSetting { get; set; }
}

public class YourService
{
    public YourService(IOptions<MySettings> settings)
    {
        // etc
    }
}

You still need to register that configuration into the dependency container. More on that later!

Additionally, you can override keys per environment. You can use the name of your environment in the file name. For example, appsettings.Development.json or appsettings.QA.json. You can specify the current environment with an environment variable or in the launchSettings.json file.
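For example, an appsettings.Development.json file could override a single key from the MySettings section above, leaving the rest untouched:

```json
{
    "MySettings": {
        "ASetting": "ASP.NET Core rocks, in Development"
    }
}
```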

There’s even support for sensitive settings that you don’t want to version control: secrets.json file. You can manage this file from the command line too.

5. Where is the Global.asax file?

Yet another missing file: Global.asax. You used it to perform actions on application or session events, for example, when the application started or ended. It was the place to do one-time setups, register filters, or define routes.

But now we use the Startup.cs file. It contains the initialization and all the settings needed to run the application. A Startup.cs file looks like this:

public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }
        
    public void ConfigureServices(IServiceCollection services)
    {
    }

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
    }
}

It has two methods: ConfigureServices() and Configure().

The Configure() method replaces the Global.asax file. It creates the app’s request processing pipeline. This is the place to register a filter or a default route for your controllers.

And ConfigureServices() is where you configure the services to be injected into the dependency container…Wait, what?

6. A brand new dependency container

Prior to ASP.NET Core, if you wanted to apply dependency injection, you had to bring your own container and wire up the discovery of services for your controllers. For example, you had an XML file to map your interfaces to your classes, or you did some assembly scanning to do it automatically.

Now, a brand new dependency container is included out-of-the-box. You can inject dependencies into your services, filters, middlewares, and controllers. It lacks some of the features from your favorite dependency container, but it is meant to suit “90% of the scenarios.”

If you are familiar with the vocabulary from other containers, AddTransient(), AddScoped(), and AddSingleton() ring a bell. These are the lifetimes of the injected services, ranging from the shortest to the longest.

More specifically, a transient service is created every time one is requested. A scoped service is created once per request. And a singleton service is created only once per application lifetime.
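As a sketch, inside ConfigureServices() the three lifetimes look like this. The service names are hypothetical, just to illustrate typical choices:

```csharp
// New instance every time the service is resolved
services.AddTransient<IEmailRenderer, EmailRenderer>();

// One instance per HTTP request
services.AddScoped<IShoppingCart, ShoppingCart>();

// One instance for the whole application lifetime
services.AddSingleton<IClock, SystemClock>();
```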

To register your services, you have to do it inside the ConfigureServices() method of the Startup class. You also bind your classes to a section or subsection of the config file here.

// In the Startup.cs file

public void ConfigureServices(IServiceCollection services)
{
    services.AddTransient<IMyService, MyService>();
    
    var section = Configuration.GetSection("MySettings");
    services.Configure<MySettings>(section);
}

7. Conclusion

You have only scratched the surface of ASP.NET Core. You have learned about some of the changes ASP.NET Core has brought. But, if you haven’t started with ASP.NET Core, go and try it. You may be surprised by how things are done now.

UPDATE (Oct 2023): I wrote this post back in the day when ASP.NET Core was brand new. This is the post I wish I had read back then. In recent versions, ASP.NET Core simplified configurations by ditching Startup.cs files. All other concepts remain the same.

This post was originally published on exceptionnotfound.net as part of the Guest Writer Program. Thanks, Matthew, for editing this post.

For more ASP.NET Core content, read how to read configuration values, how to create a caching layer, and how to create a CRUD API with Insight.Database.

Happy coding!

The Art of Unit Testing: Takeaways

This is THE book to learn how to write unit tests. It starts from the definition of a unit test to how to implement them in your organization. It covers the subject extensively.

“The Art of Unit Testing” teaches us to treat unit tests with the same attention and care we treat production code. For example, we should have test reviews instead of only code reviews.

These are some of the main ideas from “The Art Of Unit Testing.”

TL;DR

  1. Write trustworthy tests
  2. Have a unit test project per project and a test class per class
  3. Keep a set of always-passing unit tests
  4. Use “UnitOfWork_Scenario_ExpectedBehaviour” for your test names
  5. Use builders instead of SetUp methods

1. Write Trustworthy Tests

Write trustworthy tests. A test is trustworthy if you don’t have to debug it to make sure it passes.

To write trustworthy tests, avoid any logic in your tests. If you have conditionals and loops in your tests, you have logic in them.

You can find logic in helper methods, fakes, and assert statements. Avoid logic in the assert statements, use hardcoded values instead.

Tests with logic are hard to read and replicate. A unit test should consist of method calls and assert statements.

2. Organize Your Tests

Have a unit test project per project and a test class per class. You should easily find tests for your classes and methods.

Create separate projects for your unit and integration tests. Add the suffix “UnitTests” and “IntegrationTests” accordingly. For a project Library, name your tests projects Library.UnitTests and Library.IntegrationTests.

Create tests inside a file with the same name as the tested code adding the suffix “Tests”. For MyClass, your tests should be inside MyClassTests. Also, you can group features in separate files by adding the feature name as a suffix. For example, MyClassTests.AnAwesomeFeature.

3. Have a Safe Green Zone

Keep a set of always-passing unit tests. You will need some configurations for your integration tests: a database connection, environment variables, or some files in a folder. Integration tests will fail if those configurations aren’t in place. So, developers could ignore some failing tests, and real issues, because of those missing configurations.

Therefore, separate your unit tests from your integration tests. Put them into different projects. This way, you will distinguish between a missing setup and an actual problem with your code.

A failing test should mean a real problem, not a false positive.

The Art of Unit Testing Takeaways
Whangarei Falls, Whangarei, New Zealand. Photo by Tim Swaan on Unsplash

4. Use a Naming Convention

Use UnitOfWork_Scenario_ExpectedBehaviour for your test names. You can read it as follows: when calling “UnitOfWork” with “Scenario”, then it “ExpectedBehaviour”.

In this naming convention, a unit of work is any logic exposed through public methods that returns a value, changes the system state, or makes an external invocation.

With this naming convention, the logic under test, the inputs, and the expected result are clear. You will end up with long test names, but it’s OK to have long test names for the sake of readability.
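For example, with MSTest, a test name following this convention could look like this. The Withdraw method is hypothetical, just to show the three parts of the name:

```csharp
[TestMethod]
public void Withdraw_AmountLargerThanBalance_ThrowsException()
{
    // Read it as: when calling Withdraw with an amount larger
    // than the balance, then it throws an exception
    // ... arrange, act, assert
}
```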

5. Prefer Builders over SetUp methods

Use builders instead of SetUp methods. Tests should be isolated from other tests. Sometimes, SetUp methods create shared state among your tests. You will find tests that pass in isolation but fail alongside other tests, and tests that need to run many times to pass.

Often, SetUp methods end up with initialization for only some tests. Tests should create their own world. Initialize what’s needed inside every test using builders.
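As a sketch, instead of initializing a shared object in a SetUp method, each test could build exactly what it needs. The ProductBuilder and Product here are hypothetical, just to show the pattern:

```csharp
// Every test creates its own world with a builder
public class ProductBuilder
{
    private string _name = "Any name";
    private decimal _price = 100m;

    public ProductBuilder WithPrice(decimal price)
    {
        _price = price;
        return this;
    }

    public Product Build()
        => new Product { Name = _name, Price = _price };
}

[TestMethod]
public void Buy_ProductWithNegativePrice_ThrowsException()
{
    var product = new ProductBuilder()
        .WithPrice(-1m)
        .Build();
    // ... act and assert
}
```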

Voilà! These are my main takeaways. Unit testing is a broad subject, and The Art of Unit Testing covers almost everything you need to know about it. The main lesson from this book is to write readable, maintainable, and trustworthy tests. Remember, the next person reading your tests will be you.

“Your tests are your safety net, so do not let them rot.”

If you’re new to unit testing, start reading my Unit Testing 101. You will write your first unit test in C# with MSTest. For more naming conventions, check how to name your unit tests.

Happy testing!