The Clean Coder: Three Takeaways

The Clean Coder is the second book in the Clean Code trilogy. It should be mandatory reading for any professional programmer. These are my main takeaways.

The Clean Coder isn't about programming itself. It's about the professional practice of programming. It covers everything from what professionalism means to testing strategies, pressure, and time management.

Professionalism

Your career is your responsibility, not your employer’s

Professionalism is all about taking responsibility.

  • Do no harm: Don't release code you aren't certain about. If QA or a user finds a bug, you should be surprised. Make sure to take steps to prevent it from happening in the future.
  • Know how it works: Every line of code should be tested. Professional developers test their code.
  • Know your domain: It’s unprofessional to code your spec without any knowledge of the domain.
  • Practice: It's what you do when you aren't getting paid, so that you will be paid well.
  • Be calm and decisive under pressure: Enjoy your career, don't do it under pressure. Avoid situations that cause stress, like committing to deadlines you can't meet.
  • Meetings are necessary and costly. It's unprofessional to attend too many of them. When a meeting gets boring, be polite and ask if your presence is still needed.
Your career is your responsibility, not your employer's
One of my favorite quotes from The Clean Coder

Say No/Say Yes

Say. Mean. Do

Professionals have the courage to say no to their managers. Also, as a professional, you don't have to say yes to everything. But you should find a creative way to make a yes possible.

  • There is no "trying". Say no and offer a trade-off. "Try" is taken as a yes, and outcomes are expected accordingly.
  • You can't commit to things you don't control. But you can commit to some actions. For example, if you need somebody else to finish a dependency, create an interface and meet with the person responsible for it.
  • Raise the flag. If you don't tell someone you have a problem as soon as possible, nobody will be able to help you in time.
  • Saying yes at the cost of professionalism isn't the way to solve problems.

Coding

It could be considered unprofessional not to use TDD

  • If you are tired or distracted, do not code. Coding requires concentration. Otherwise, you will end up rewriting your work.
  • Be polite! Remember you will be the next one interrupting someone else. Use a failing test to let you know where you were after an interruption.
  • Debugging time is as expensive as coding time. Reduce your debugging time to almost 0. Use TDD, instead.
  • When you are late, raise the flag and be honest. It isn't OK to keep saying you're fine until the very end and then not deliver your task.
  • Be honest about finishing your work. The worst attitude is when you say you’re done when you actually aren’t.
  • Ask for help. It’s unprofessional to remain stuck when there is help available.

Voilà! These are my main points from The Clean Coder. Do you see why I think it should be mandatory reading? Oh, I almost missed one more thing: an estimate isn't a date, but a range of dates.

If you want to read other summaries, check Clean Code: Takeaways and Pragmatic Thinking and Learning: Takeaways.

A Beginner's Guide to Git: A Guide to Time Travel

Do you store your files in folders named after the date of your changes? I did that back in school with my class projects. There's a better way! Let's use Git and GitHub to version control our projects.

$ ls
Project-2020-04-01/
Project-Final/
Project-Final-Final/
Project-ThisIsNOTTheFinal/
Project-ThisIsTheTrueFinal/

1. What is a Version Control System?

First, what is a version control system? A version control system (VCS) is a piece of software that keeps track of changes to a file or set of files.

A version control system, among other things, allows us to:

  • Revert a file or the entire project to a previous state
  • Follow all changes of a file through its lifetime
  • See who has modified a file

To better understand this concept, let’s use an analogy.

A version control system is like a time machine. With it, we can go backward in time, create timelines and merge two separate timelines. We don’t travel to historical events in time, but to checkpoints in our project.

DeLorean time machine
DeLorean from Back to the Future. Photo by JMortonPhoto.com & OtoGodfrey.com / CC BY-SA

2. Centralized vs Distributed

There is one distinction that sets version control systems apart: centralized vs distributed.

A centralized VCS requires a server to perform any operation on your project. We need to connect to a server and download all our files before we can start working. If this server goes down, we can't work. Bye, bye, productivity! Team Foundation Server (TFS) from Microsoft is a centralized VCS.

A distributed VCS, on the other hand, doesn't need a central server in the same sense. Each user has a complete copy of the entire project, and most operations are performed against this local copy. We can work offline. A two-hour flight without internet? No problem. For example, Git is a distributed VCS.

If you’re coming from TFS, I’ve written a Git Guide for TFS Users.

3. What’s Git anyways?

Up to this point, Git doesn't need any further introduction. From its official page, "Git is a free and open-source distributed version control system designed to handle everything from small to very large projects with speed and efficiency."

How to Install and Setup Git

We can install Git from its official page. There we can find instructions to install Git using package managers for all major operating systems.

Before starting to work, we need some one-time setups. We need to configure a name and an email. This name and email will appear in the file history of any file we create or modify.

Let's go to the command line. In the next two commands, replace "John Doe" and "johndoe@example.com" with your own name and email.

$ git config --global user.name "John Doe"
$ git config --global user.email johndoe@example.com

We can change this name and email between projects. If we want to use different names and emails for work and personal projects, we're covered: Git can manage different identities per folder.
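For example, Git supports conditional includes in its configuration. Here is a minimal sketch, assuming a hypothetical ~/work/ folder for work repositories and a separate config file called ~/.gitconfig-work:

# ~/.gitconfig
[user]
    name = John Doe
    email = johndoe@example.com
# Use a different identity for any repository under ~/work/
[includeIf "gitdir:~/work/"]
    path = ~/.gitconfig-work

# ~/.gitconfig-work
[user]
    email = johndoe@work-example.com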

How to Create a Git repository

There are two ways to start working with Git. From scratch or from an existing project.

If we are starting from scratch, inside the folder we want to version control, let’s use init. Like this,

$ git init

After running git init, Git creates a hidden folder called .git inside our project folder. Git keeps everything under the hood in this folder.

If we have an existing project, we need to use clone, instead.

# Replace <url> with the actual url of the project
# For example, https://github.com/canro91/Parsinator
$ git clone <url>

Did you notice it? The command is called clone because we're getting a copy of everything the server has about the project.

How to Add new files

Let’s start working! Let’s create new files or change existing ones in our project. Next, we want Git to keep track of these files. We need three commands: status, add and commit.

First, status shows the pending changes in our files. add puts files in the staging area. And commit creates a checkpoint in the history of our project.

# git status will show pending changes in files
$ git status
# Create a README file using your favorite text editor.
# Add some content
# See what has changed now
$ git status
$ git add README
$ git commit -m 'Add README file'

After using the commit command, Git knows about a file called README. We have a commit (a checkpoint) in our project we can go back to. Git has stored our changes. This commit has a unique code (a SHA-1) and an author.

We can use log to see all commits created so far.

# You will see your previous commit here
$ git log

What’s the Staging area?

The staging area or index is a concept that makes Git different.

The staging area is an intermediate area to review files before committing them.

It’s like making our files wait in a line before keeping track of them. This allows us to commit only a group of files or portions of a single file.
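For example, to stage only some portions of a single file, we can use the patch mode of the add command and pick the changes we want, one by one:

# Review each change in README and answer y/n to stage it or not
$ git add -p README
# Commit only what we staged
$ git commit -m 'Commit only part of my changes'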

If you're coming from TFS, notice you need two steps to store your changes: add, to include files in the staging area, and commit, to create a checkpoint from them. With TFS, you only "check in" your files.

How to Ignore Files

Sometimes we don't need to version control certain files or folders. For example, log files, third-party libraries, and files and folders generated by compilation or by our IDE.

If we’re starting from scratch, we need to do this only once. But if we’re starting from an existing project, chances are somebody already did it.

We need to create a .gitignore file with the patterns of files and folders we want to ignore. We can use this file globally or per project.
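For example, to ignore files globally, we can keep the patterns in a separate file and point Git to it with a one-time configuration. A minimal sketch, assuming a hypothetical file called ~/.gitignore_global:

# Tell Git where our global ignore file lives
$ git config --global core.excludesFile ~/.gitignore_global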

There is a collection of .gitignore templates on GitHub, organized per language and IDE.

For example, to ignore the node_modules folder, the .gitignore file will contain

node_modules/

Git won't track any file matching the patterns in the .gitignore file. Run git status to verify it.

How to write Good Commit Messages

A good commit message should tell why the change is needed, what problem it fixes, and any side effects it might have.

Please, please don't use "uploading changes" or anything like that in your commit messages.

Depending on our workplace or project, we may have to follow a naming convention for our commit messages. For example, we might have to include the type of change (feature, test, bug, or refactor) followed by a task number from a bug-tracking tool. If we need to follow a convention like this one, Git can help us format our commit messages.
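For example, Git can pre-fill our commit messages from a template file. A minimal sketch, assuming a hypothetical template file called ~/.gitmessage with our convention written as comment lines:

# Use ~/.gitmessage as the starting point for every commit message
$ git config --global commit.template ~/.gitmessage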

Keep your commits small and focused. Work with incremental commits. And, don’t commit changes that break your project.

4. Branching and merging

What’s a Git Branch?

Using the time machine analogy, a branch is a separate timeline. Changes in a timeline don’t interfere with changes in other timelines.

Branching is one of the most awesome Git features. Git branches are lightweight and fast compared to branches in other VCSs.

When starting a repository, Git creates a default branch called master. Let's create a new branch called "testing". For this, we need the branch command followed by the branch name.

# Create a new branch called testing
$ git branch testing
# List all branches you have
$ git branch
# Move to the new testing branch
$ git checkout testing
# Modify the README file
# For example, add a new line
# "For example, how to create branches"
$ git status
$ git add README
$ git commit -m 'Add example to README'

Now, let’s switch back to the master branch and see what happened to our files there. To switch between branches, we need the checkout command.

# Now move back to the master branch
$ git checkout master
# See how README hasn't changed
# Modify the README file again
# For example, add "Git is so awesome, isn't it?"
$ git status
$ git add README
$ git commit -m 'Add Git awesomeness'
# See how these changes live in different timelines
$ git log --oneline --graph --all

We have created two branches. Now let's see how we can combine the work we have in both of them.

Grays Sports Almanac
Grays Sports Almanac. Photo by Mike Mozart, CC BY 2.0, via Wikimedia Commons

How to Merge Two Branches

Merging two branches is like combining two separate timelines.

Continuing with the time travel analogy, merging is like when Marty goes back in time to retrieve the almanac and almost runs into himself. Or when Captain America goes back to New York in 2012 and ends up fighting the other Captain America. You get the idea!

Let's create a new branch and merge it into master. We need the merge command.

# Move to master
$ git checkout master
# Create a new branch called hotfix and move there
$ git checkout -b hotfix
# Modify README file
# For example, add a new line
# "Create branches with Git is soooo fast!"
$ git status
$ git add README
$ git commit -m 'Add another example to README'
# Move back to master. Here master is the destination of all changes on hotfix
$ git checkout master
$ git merge hotfix
# See how changes from hotfix are now in master
# Since hotfix is merged, get rid of it
$ git branch -d hotfix

Here we created a branch called hotfix and merged it into master. But we still have some changes on our testing branch. Let's merge that branch and see what happens.

$ git checkout master
$ git merge testing
# BOOM. You have a conflict
# Notice you have pending changes in README
$ git status
# Open README, notice you will find some weird characters.
#
# For example,
# <<<<<<< HEAD
# Content in master branch
# =======
# Content in testing branch
# >>>>>>> testing
#
# You have to remove these weird things
# Modify the file to include only the desired changes
$ git status
$ git add README
# Commit to signal conflicts were solved
$ git commit
# Remove testing after merge
$ git branch -d testing

If you're coming from TFS, notice that you need to move first to the branch you want to merge into. You merge from the destination branch, not from the source branch.

How to Move to the Previous Branch

In the same spirit as cd -, we can go back to the previously visited branch using git checkout -. This command is an alias for git checkout @{-1}, where @{-1} refers to the last branch you were on.

# Starting from master
$ git checkout -b a-new-branch
# Do some work in a-new-branch
$ git checkout master
# Do some work in master
$ git checkout -
# Back to a-new-branch

Git master convention

By convention, the main timeline is called master. But starting with Git 2.28, when we run git init, Git looks for the configuration value init.defaultBranch to replace the "master" name. Common alternatives to "master" are main, primary, or default.
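For example, to make every new repository start on a branch called main:

$ git config --global init.defaultBranch main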

For existing repositories, we can follow this Scott Hanselman post to rename our master branch.

GitFlow Branching Model

Git encourages working with branches. Git branches are cheap and lightweight. We can create branches per task or feature.

There is a convention for branch creation, GitFlow. It suggests feature, release, and hotfix branches.

With Gitflow, we should have a develop branch where everyday work happens. Every new task starts in a separate feature branch taken from develop. Once we’re done with our task, we merge our feature branch back to develop.
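A minimal sketch of that workflow, assuming a hypothetical task called add-login:

# Start the task from develop
$ git checkout develop
$ git checkout -b feature/add-login
# ...work, add and commit as usual...
# When the task is done, merge the feature branch back into develop
$ git checkout develop
$ git merge feature/add-login
$ git branch -d feature/add-login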

5. GitHub: Getting our code to the cloud

Up until now, all our work lives on our own computer. But what if we want our project to live somewhere else? We need a hosting solution. Among the most popular hosting solutions for Git, we can find GitHub, GitLab, and Bitbucket.

It's important to distinguish between Git and GitHub: Git != GitHub. Git is the version control system, and GitHub is a hosting solution for Git projects.

No matter what hosting we choose, our code isn't synced with the server automatically. We have to do it ourselves. Let's see how.

How to create a repository from GitHub

To create a repository from a GitHub account, go to "Your Repositories" and click "New". We need a name and a description. We can create either public or private repositories with GitHub.

Then, we need to associate the GitHub endpoint with our local project. Endpoints are called remotes. Now we can upload, or push, our local changes to the cloud.

# Replace this url with your own
$ git remote add origin https://github.com/canro91/GitDemo.git
# push uploads the local master to a branch called master in the remote too
$ git push -u origin master
# Head to your GitHub account and refresh

6. Cheatsheet

Here you have all the commands we have used so far.

Command Function
git init Create a repo in the current folder
git clone <url> Clone an existing repo from url
git status List pending changes
git add <file> Add file to the staging area
git commit -m '<message>' Create a commit with message
git log List committed files in current branch
git log --oneline --graph --all List committed files in all branches
git branch <branch-name> Create a new branch
git checkout <branch-name> Change current branch to branch-name
git checkout -b <branch-name> Create a new branch and move to it
git merge <branch-name> Merge branch-name into current branch
git remote add <remote-name> <url> Add a new remote pointing to url
git push -u <remote-name> <branch-name> Push branch-name to remote-name

7. Conclusion

Voilà! We have learned the most frequent Git commands for everyday use. We used Git from the command line, but most IDEs offer Git integration through plugins or extensions. Now try to use Git from your favorite IDE.

If you want to practice these concepts, follow the First Contributions repo on GitHub to open your first pull request to an open-source project.

Your mission, Jim, should you decide to accept it, is to get the latest changes from your repository. After pushing, clone your project in another folder, change any file and push this change. Next, go back to your first folder, modify another file and try to push. This post will self-destruct in five seconds. Good luck, Jim.
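A hint for that last step: we haven't covered it in this guide, but git pull downloads the changes the remote has and merges them into your current branch.

# Bring the remote changes into your local master before pushing again
$ git pull origin master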

Happy Git time!

Pragmatic Thinking and Learning: Three Takeaways

This book sits between learning and programming. It's like a learning guide for developers. Well, it's more than that.

"Pragmatic Thinking and Learning" starts with an expertise model, moves to an analogy of how the brain works, and ends with how to capture new ideas and organize your learning.

Since learning is the best skill to have, this book is valuable and worth reading. You will find helpful tips for your learning and your everyday work. For example, always keep a piece of paper with you to write down ideas, and prefer ink over memory. At the end of each chapter, there are exercises to add to your routine and practice daily.

Expertise model

You aren’t a novice or an expert at everything. You are in different stages per skill.

  • Novices: They don’t know how to respond to mistakes. They only want to achieve an immediate goal. They need a recipe.
  • Advanced Beginners: They try tasks on their own. They don't have the big picture yet. They start to incorporate advice from past experiences.
  • Competent: They can create mental models of the problem domain. They troubleshoot problems on their own. They can mentor and don’t bother experts so much.
  • Proficient: They want to understand concepts. They can correct previous tasks and perform better next time. They know when to apply principles in context.
  • Experts: They work from intuition. They know which details to focus on and which details to ignore.

“Once you truly become an expert, you become painfully aware of just how little you really know”

Let the R-mode be free

Roughly speaking, the brain is a single-bus dual-core processor. Only one CPU can access memory banks at a time.

A computer processor
Another single-bus multiple core processor. Photo by Christian Wiediger on Unsplash

Our brain works in two modes: linear mode (or L-mode) and rich mode (or R-mode). Coursera's Learning How to Learn course calls these two modes focused and diffuse. You need both modes working together.

The L-mode is rational and the R-mode is asynchronous.

The L-mode or focused mode works when you are actively studying a subject. You're sitting in front of your computer or a textbook figuring out the details of that subject.

The R-mode or diffuse mode, on the other hand, works when you are away from the subject you're studying. Have you ever woken up with the solution to a problem you left at work the day before? Have you come up with a solution in the shower? That's the R-mode or diffuse mode.

Since most of the thinking happens away from the keyboard, let the R-mode be free:

  • Use metaphors
  • Turn the problem on its head. Look at your problem from a different perspective. Try to come up with 3 or 4 ways to deliberately cause the problem you are debugging.
  • Try oblique strategies. For example, for musicians: how would your composition sound from outside the room? In a dark room? For developers: how would your library look once it's finished? How would someone use it?
  • Change your routine
  • Go for a walk. Have you seen children playing "don't step on the leaves" or "the floor is lava"? That's the idea.

Bonus

  • Keep one engineering log. Trust ink over memory.
  • Breathe, don't hiss. Know when to temporarily step away from your problem. Sometimes you only need to sleep on it.
  • Trust intuition but verify
  • Consider the context. There’s intention behind mistakes.
  • Prefer aesthetics. Strive for good design; it really works better. It might remind you of well-crafted and well-packaged expensive cellphones and laptops.
  • Always have something with you to take notes. You don't know where you will be when your next idea shows up.
  • Learn something by doing or building. Instead of dissecting a frog, build one.

Happy thinking and learning!

How to create a CRUD API with ASP.NET Core and Insight.Database

Almost all web applications we write talk to a database. We could write our own database access layer, but we might end up with a lot of boilerplate code. Let’s use Insight.Database to create a CRUD API for a catalog of products with ASP.NET Core.

1. Why use an ORM?

An object-relational mapper (ORM) is a library that converts objects to database records and vice versa.

ORMs vary in size and features. We can find full-blown ORMs that manipulate database objects and generate SQL statements for us. And we can find micro-ORMs that leave writing the SQL queries to us.

We could roll our own database access layer. But an ORM helps us to:

  • Open and close connections, commands, and readers
  • Parse query results into C# objects
  • Prepare input values to avoid SQL injection attacks
  • Write less code. And less code means fewer bugs.

2. What is Insight.Database?

Insight.Database is a “fast, lightweight .NET micro-ORM.”

Insight.Database is the .NET micro-ORM that nobody knows about because it’s so easy, automatic, and fast (and well-documented) that nobody asks questions about it on StackOverflow.

Insight.Database maps properties from C# classes to query parameters and query results back to C# classes with almost no mapping code.

Insight.Database supports record post-processing, too. We can manipulate records as they are read. For example, I used this feature to trim whitespace-padded strings from a legacy database without calling Trim() in every mapping class.

Unlike other ORMs like OrmLite, Insight.Database doesn't generate SQL statements for us. We have to write our own SQL queries or stored procedures. In fact, the Insight.Database documentation recommends calling our database through stored procedures.

Trendy apparel store
Let's create our catalog of products. Photo by Clark Street Mercantile on Unsplash

3. A CRUD application with Insight.Database and ASP.NET Core

Let’s create a simple CRUD application for a catalog of products using Insight.Database.

Before we begin, we should have a SQL Server instance up and running. For example, we could use SQL Server Express LocalDB, shipped with Visual Studio.

Of course, Insight.Database has providers to work with other databases like MySQL, SQLite, or PostgreSQL.

Create the skeleton

First, let’s create an “ASP.NET Core Web API” application with Visual Studio or dotnet cli for our catalog of products. Let’s call our solution: ProductCatalog.

After creating our API project, we will have a file structure like this one:

|____appsettings.Development.json
|____appsettings.json
|____Controllers
| |____WeatherForecastController.cs
|____ProductCatalog.csproj
|____Program.cs
|____Properties
| |____launchSettings.json
|____WeatherForecast.cs

Let’s delete the files WeatherForecast.cs and WeatherForecastController.cs. Those are sample files. We won’t need them for our catalog of products.

Now, let’s create a ProductController inside the Controllers folder. In Visual Studio, let’s choose the template: “API Controller with read/write actions.” We will get a controller like this:

using Microsoft.AspNetCore.Mvc;

namespace ProductCatalog.Controllers;

[Route("api/[controller]")]
[ApiController]
public class ProductsController : ControllerBase
{
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new string[] { "value1", "value2" };
    }

    [HttpGet("{id}")]
    public string Get(int id)
    {
        return "value";
    }

    [HttpPost]
    public void Post([FromBody] string value)
    {
    }

    [HttpPut("{id}")]
    public void Put(int id, [FromBody] string value)
    {
    }

    [HttpDelete("{id}")]
    public void Delete(int id)
    {
    }
}

We’re using implicit usings and file-scoped namespaces. Those are some recent C# features.

If we run the project and make a GET request, we see two results.

GET request
A request to our GET endpoint using curl

We’re ready to start!

Get all products

Create the database

Let’s create a database ProductCatalog and a Products table inside our SQL Server instance. We could use a table designer or write the SQL statement in SQL Server Management Studio.

A product will have an ID, name, price, and description.

CREATE TABLE [dbo].[Products]
(
    [Id] INT NOT NULL PRIMARY KEY IDENTITY,
    [Name] VARCHAR(50) NOT NULL,
    [Price] DECIMAL NOT NULL,
    [Description] VARCHAR(255) NULL
)

It’s a good idea to version control our database schema and, even better, use database migrations. Let’s keep it simple for now.

Modify GET

Let’s create a Product class in a new Models folder. And let’s name the properties of the Product class after the columns of the Products table. Insight.Database will map the two for us.

namespace ProductCatalog.Models;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
    public string Description { get; set; }
}

Next, let’s modify the first Get method in the ProductController class to return an IEnumerable<Product> instead of IEnumerable<string>.

We need to add using ProductCatalog.Models; at the top of our file.

using Microsoft.AspNetCore.Mvc;
using ProductCatalog.Models;
//    ^^^^^

namespace ProductCatalog.Controllers;

[Route("api/[controller]")]
[ApiController]
public class ProductsController : ControllerBase
{
    [HttpGet]
    public IEnumerable<Product> Get()
    //                 ^^^^^
    {
    }
    
    // ...
}

Now, let's install the Insight.Database NuGet package. After that, let's update the Get() method to query the database with a stored procedure called GetAllProducts. We need the QueryAsync() extension method from Insight.Database.

using Insight.Database;
//    ^^^^^
using Microsoft.AspNetCore.Mvc;
using Microsoft.Data.SqlClient;
//    ^^^^^
using ProductCatalog.Models;

namespace ProductCatalog.Controllers;

[Route("api/[controller]")]
[ApiController]
public class ProductsController : ControllerBase
{
    [HttpGet]
    public async Task<IEnumerable<Product>> Get()
    {
        var connection = new SqlConnection(@"Data Source=(localdb)\MSSQLLocalDB;Initial Catalog=ProductCatalog;Integrated Security=True");
        return await connection.QueryAsync<Product>("GetAllProducts");
        //                      ^^^^^
    }
    
    // ...
}

I know, I know…We will refactor this shortly…By the way, let’s not version control passwords or any sensitive information, please.

Create GetAllProducts stored procedure

Now, let’s write the GetAllProducts stored procedure.

Depending on our workplace, we should follow a naming convention for stored procedures. For example, sp_<TableName>_<Action>.

CREATE PROCEDURE [dbo].[GetAllProducts]
AS
BEGIN
    SELECT Id, Name, Price, Description
    FROM dbo.Products;
END
GO

To try things out, let’s insert a product,

INSERT INTO Products(Name, Price, Description)
VALUES ('iPhone SE', 399, 'Lots to love. Less to spend.')

And, if we make another GET request, we should find the new product,

GET request showing one new product
A GET with curl showing one product

It’s a good idea not to return models or business objects from our API methods. Instead, we should create view models or DTOs with only the properties a consumer of our API will need. But let’s keep it simple.

Use appsettings.json file

Let’s clean what we have. First, let’s move the connection string to the appsettings.json file. That’s how we should use configuration values with ASP.NET Core.

"ConnectionStrings": {
    "ProductCatalog": "Data Source=(localdb)\\MSSQLLocalDB;Initial Catalog=ProductCatalog;Integrated Security=True;"
}

Next, let's register a SqlConnection in the dependency container using AddScoped(). This creates a new connection on every request. Insight.Database opens and closes the database connection for us.

using Microsoft.Data.SqlClient;
//    ^^^^^

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();

var connectionString = builder.Configuration.GetConnectionString("ProductCatalog");
//  ^^^^^
builder.Services.AddScoped(provider => new SqlConnection(connectionString));
//               ^^^^^

var app = builder.Build();
app.MapControllers();
app.Run();

Now, let’s update our ProductController to add a field and a constructor with a SqlConnection parameter.

using Insight.Database;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Data.SqlClient;
using ProductCatalog.Models;

namespace ProductCatalog.Controllers;

[Route("api/[controller]")]
[ApiController]
public class ProductsController : ControllerBase
{
    private readonly SqlConnection _connection;
    //               ^^^^^

    public ProductsController(SqlConnection connection)
    //     ^^^^^
    {
        _connection = connection;
    }

    [HttpGet]
    public async Task<IEnumerable<Product>> Get()
    {
        return await _connection.QueryAsync<Product>("GetAllProducts");
        //     ^^^^^
    }

    // ...
}

After this refactoring, our Get() should continue to work. Hopefully!

Pagination with OFFSET-FETCH

If our table grows, we don’t want to retrieve all products in a single database call. That would be slow!

Let's query a page of results at a time instead. Let's add two parameters to the Get() method and the GetAllProducts stored procedure: pageIndex and pageSize.

using Insight.Database;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Data.SqlClient;
using ProductCatalog.Models;

namespace ProductCatalog.Controllers;

[Route("api/[controller]")]
[ApiController]
public class ProductsController : ControllerBase
{
    private readonly SqlConnection _connection;

    public ProductsController(SqlConnection connection)
    {
        _connection = connection;
    }

    [HttpGet]
    public async Task<IEnumerable<Product>> Get(
        int pageIndex = 1,
        int pageSize = 10)
    {
        var parameters = new
        //  ^^^^^
        {
            PageIndex = pageIndex - 1,
            PageSize = pageSize
        };
        return await _connection.QueryAsync<Product>(
            "GetAllProducts",
            parameters);
            // ^^^^^
    }

    // ...
}

We used an anonymous object for the query parameters. Its property names should match the stored procedure parameters.

Next, let's modify the GetAllProducts stored procedure to add the two new parameters and update the SELECT statement to use the OFFSET/FETCH clauses.

ALTER PROCEDURE [dbo].[GetAllProducts]
    @PageIndex INT = 1,
    -- ^^^^^
    @PageSize INT = 10
    -- ^^^^^
AS
BEGIN
    SELECT Id, Name, Price, Description
    FROM dbo.Products
    ORDER BY Name
    -- ^^^^^
    OFFSET (@PageIndex*@PageSize) ROWS FETCH NEXT @PageSize ROWS ONLY;
    -- ^^^^^
END
GO

Our stored procedure uses a zero-based page index. That's why we passed pageIndex - 1 from our C# code.

If we add more products to our table, we should get a subset of products on GET requests.

If you want to practice more, create an endpoint to search for a product by id. You should change the appropriate Get() method and create a new stored procedure: GetProductById.
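Here is a minimal sketch of what that endpoint could look like, assuming a hypothetical GetProductById stored procedure with an @Id parameter. It uses Insight.Database's SingleAsync() method to map a single record:

[HttpGet("{id}")]
public async Task<Product> Get(int id)
{
    // GetProductById is a hypothetical stored procedure with an @Id parameter
    return await _connection.SingleAsync<Product>("GetProductById", new { Id = id });
}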

Insert a new product

Modify POST

To insert a new product, let’s create an AddProduct class inside the Models folder. It should have three properties: name, price, and description. That’s what we want to store for our products.

namespace ProductCatalog.Models;

public class AddProduct
{
    public string Name { get; set; }
    public decimal Price { get; set; }
    public string Description { get; set; }
}

Next, let’s update the Post() method in the ProductController to use AddProduct as a parameter. This time, we need the ExecuteAsync() method instead.

using Insight.Database;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Data.SqlClient;
using ProductCatalog.Models;

namespace ProductCatalog.Controllers;

[Route("api/[controller]")]
[ApiController]
public class ProductsController : ControllerBase
{
    private readonly SqlConnection _connection;

    public ProductsController(SqlConnection connection)
    {
        _connection = connection;
    }

    // ...

    [HttpPost]
    public async Task Post([FromBody] AddProduct request)
    {
        var product = new
        {
            Name = request.Name,
            Price = request.Price,
            Description = request.Description
        };
        await _connection.ExecuteAsync("AddProduct", product);
        //                ^^^^^
    }

    // ...
}

Create AddProduct stored procedure

Next, let’s create the AddProduct stored procedure. It will have a single INSERT statement.

CREATE PROCEDURE AddProduct
    @Name VARCHAR(50),
    @Price DECIMAL(18, 0),
    @Description VARCHAR(255) = NULL
AS
BEGIN
    INSERT INTO Products(Name, Price, Description)
    VALUES (@Name, @Price, @Description)
END
GO

We should validate input data, of course. For example, a name and price should be required. We could use a library like FluentValidation for that.
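For example, a validator with FluentValidation could look like this sketch. The rules here are only assumptions about our catalog, and we would still need to register the validator with ASP.NET Core:

using FluentValidation;
using ProductCatalog.Models;

namespace ProductCatalog.Validators;

public class AddProductValidator : AbstractValidator<AddProduct>
{
    public AddProductValidator()
    {
        // Name and Price are required; lengths match the Products table columns
        RuleFor(p => p.Name).NotEmpty().MaximumLength(50);
        RuleFor(p => p.Price).GreaterThan(0);
        RuleFor(p => p.Description).MaximumLength(255);
    }
}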

Finally, to add a new product, let's make a POST request with a JSON body. It should include a name, price, and description. We will see our new product if we make another GET request.

POST followed by a GET request
A POST request followed by a GET request using curl

Did you notice we didn’t need any mapping code? We named our classes to match the stored procedure parameters and results. Great!

4. Conclusion

Voilà! That's how to use Insight.Database to retrieve results and execute actions with stored procedures using the QueryAsync() and ExecuteAsync() methods. If we follow naming conventions, we won't need any mapping code. With Insight.Database, we keep our data access code down to a few lines.

Your mission, Jim, should you decide to accept it, is to change the Update() and Delete() methods to complete all CRUD operations. This post will self-destruct in five seconds. Good luck, Jim.

For more ASP.NET Core content, check how to write a caching layer with Redis and how to compress responses. If you're coming from the old ASP.NET Framework, check my ASP.NET Core Guide for ASP.NET Framework Developers.

Happy coding!

Programs that saved you 100 hours (Online tools, Git aliases and Visual Studio extensions)

Today I saw this Hacker News thread about Programs that saved you 100 hours. I want to share some of the tools that have saved me a lot of time. Probably not 100 hours yet, though.

1. Online Tools

  • JSON Utils: It converts a JSON file into C# classes. We can generate C# properties with attributes and change their casing. Visual Studio has this feature as "Paste JSON as Classes", but it doesn't change the property names from camelCase in our JSON strings to PascalCase in our C# classes.

  • NimbleText: It applies a replace pattern to every item of an input dataset, so I don't have to type crazy key sequences, like playing the drums. For example, it's useful for generating SQL insert or update statements from sample data in CSV format.

  • jq play: An online version of jq, a JSON processor. It lets us slice, filter, map, and transform JSON data (see the example after this list).
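For example, this is a quick jq sketch to pull only the product names out of a JSON array. The sample data is made up:

$ echo '[{"name":"iPhone SE","price":399},{"name":"Pixel 4a","price":349}]' | jq '.[].name'
"iPhone SE"
"Pixel 4a"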

2. Git Aliases and Hooks

Aliases

I use Git from the command line most of the time. I have created or copied some aliases for my everyday workflows. These are some of my Git aliases:

alias gs='git status -sb' 
alias ga='git add ' 
alias gco='git checkout -b ' 
alias gc='git commit ' 
alias gacm='git add -A && git commit -m ' 
alias gcm='git commit -m ' 

alias gpo='git push origin -u ' 
alias gconf='git diff --name-only --diff-filter=U'

Not Git related, but I have also created some aliases to use the Pomodoro technique.

alias pomo='sleep 1500 && echo "Pomodoro" && tput bel' 
alias sb='sleep 300 && echo "Short break" && tput bel' 
alias lb='sleep 900 && echo "Long break" && tput bel'

I don’t need fancy applications or distracting websites. Only three aliases.

Hook to format commit messages

I work on a project that uses a branch naming convention. I need to include the type of task and the task number in the branch name. For example, feat/ABC123-my-branch. And every commit message should include the task number too. For example, ABC123 My awesome commit. I found a way to automate that with a prepare-commit-msg hook.

With this hook, I don't need to memorize every task number. I only need to include the ticket number when creating my branches. This is the Git hook I use,

#!/bin/bash
FILE=$1
MESSAGE=$(cat "$FILE")
TICKET=[$(git rev-parse --abbrev-ref HEAD | grep -Eo '^(\w+/)?(\w+[-_])?[0-9]+' | grep -Eo '(\w+[-])?[0-9]+' | tr "[:lower:]" "[:upper:]")]
if [[ $TICKET == "[]" || "$MESSAGE" == "$TICKET"* ]];then
  exit 0;
fi

echo "$TICKET $MESSAGE" > $FILE

This hook grabs the ticket number from the branch name and prepends it to my commit messages.

3. Visual Studio extensions

I use Visual Studio almost every working day. I rely on extensions to simplify some of my work. These are some of the extensions I use:

  • CodeMaid: It's like a janitor. It helps me clean extra spaces and blank lines, remove and sort using statements, insert blank lines between properties, and much more.

  • MappingGenerator: I found this extension recently and it has been a time saver. Need to initialize an object with default values? Need to create a view model or DTO from a business object? MappingGenerator has us covered!

Voilà! These are the tools that have saved me 100 hours! If you want to try more Visual Studio extensions, check my Visual Studio Setup for C#. If you're new to Git, check my Git Guide for Beginners and my Git Guide for TFS Users.

Happy coding!