How to add an in-memory and a Redis-powered cache layer with ASP.NET Core 3

Imagine you have a service SettingsService that makes a REST request with an HttpClient. This service calls a microservice to get the configuration of a property. But it takes a couple of seconds to respond and it is accessed frequently. It would be great to have this value stored somewhere else for faster response times. Let’s see how to use caching for this!

A cache is a storage layer used to speed up future requests. Reading from a cache is faster than computing data every time it is requested. Let’s add caching to this SettingsService using ASP.NET Core.

In-Memory approach

Let’s start with an ASP.NET Core 3.1 API site with a controller that uses your SettingsService class. First, install the Microsoft.Extensions.Caching.Memory NuGet package. Then, register the in-memory cache in the ConfigureServices method of the Startup class. You need to use the AddMemoryCache method.

// Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    services.AddMemoryCache(options =>
    {
        options.SizeLimit = 1024;
    });
    
    // ...
}

Since memory isn’t infinite, you want to limit the number of items stored in the cache. Make use of SizeLimit. It is the maximum number of “slots” or “places” the cache can hold. Also, you need to tell how many “places” a cache entry takes when stored. More on that later!

Decorator pattern

Next, let’s use the decorator pattern to add caching to the existing SettingsService without modifying it. To do that, create a new CachedSettingsService. It should implement the same interface as SettingsService. That’s the trick!

Also, you need a constructor receiving IMemoryCache and ISettingsService, the decorated service. Then, in the GetSettingsAsync method, you will use the decorated service to call your existing method if the value isn’t cached.

public class CachedSettingsService : ISettingsService
{
    private readonly IMemoryCache _cache;
    private readonly ISettingsService _settingsService;

    public CachedSettingsService(IMemoryCache cache, ISettingsService settingsService)
    {
        _cache = cache;
        _settingsService = settingsService;
    }

    public async Task<Settings> GetSettingsAsync(int propertyId)
    {
        var key = $"{nameof(propertyId)}:{propertyId}";
        return await _cache.GetOrSetValueAsync(key, async () => await _settingsService.GetSettingsAsync(propertyId));
    }
}

Limits and Expiration Time

Now, let’s create the GetOrSetValueAsync extension method. It will check first if a key is in the cache. Otherwise, it will use a factory method to compute the value and store it. This method receives a custom MemoryCacheEntryOptions to overwrite the default values.

Make sure to use expiration times when storing items. You can choose between sliding and absolute expiration times:

  • SlidingExpiration will reset the expiration time every time an entry is used before it expires
  • AbsoluteExpirationRelativeToNow will expire the entry after the given time, no matter how many times it’s been used
  • If you use both, the entry will expire when the first of the two times expires

If parents used SlidingExpiration, kids would never stop watching TV or using smartphones!

Don’t forget to include a size for each cache entry, if you use SizeLimit when registering the cache into the dependency container. This Size tells how many “places” from SizeLimit an entry takes. When this limit is reached, the cache won’t store any more entries until some of them expire. For more details, see Use SetSize, Size, and SizeLimit to limit cache size.

public static class MemoryCacheExtensions
{
    // Make sure to adjust these values to suit your own defaults...
    public static readonly MemoryCacheEntryOptions DefaultMemoryCacheEntryOptions
        = new MemoryCacheEntryOptions
        {
            AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(60),
            SlidingExpiration = TimeSpan.FromSeconds(10),
            Size = 1
        };

    public static async Task<TObject> GetOrSetValueAsync<TObject>(this IMemoryCache cache, string key, Func<Task<TObject>> factory, MemoryCacheEntryOptions options = null)
        where TObject : class
    {
        if (cache.TryGetValue(key, out object value))
        {
            return value as TObject;
        }

        var result = await factory();

        options ??= DefaultMemoryCacheEntryOptions;
        cache.Set(key, result, options);

        return result;
    }
}

Registration

To start using the new CachedSettingsService, you need to register it into the container. Back to the Startup class! Register the existing SettingsService and the new decorated service. Alternatively, you can use Scrutor to register your decorator, as shown after the next code block.

// Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    services.AddTransient<SettingsService>();
    services.AddTransient<ISettingsService>(provider =>
    {
        var cache = provider.GetRequiredService<IMemoryCache>();
        var settingsService = provider.GetRequiredService<SettingsService>();
        return new CachedSettingsService(cache, settingsService);
    });

    // The same as before...
    services.AddMemoryCache(options =>
    {
        options.SizeLimit = 1024;
    });
    
    // ...
}
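If you prefer not to wire up the decorator by hand, you can let Scrutor do it. Here is a minimal sketch, assuming the Scrutor NuGet package is installed and SettingsService is registered against its interface first:

// Startup.cs, using Scrutor
public void ConfigureServices(IServiceCollection services)
{
    // Register the real implementation against the interface...
    services.AddTransient<ISettingsService, SettingsService>();
    // ...then let Scrutor wrap it with the caching decorator
    services.Decorate<ISettingsService, CachedSettingsService>();

    services.AddMemoryCache(options =>
    {
        options.SizeLimit = 1024;
    });
}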

Remember to remove cached entries when you update or delete entities in your own code. You don’t want to use an old value or, even worse, a deleted value. In this case, you need the Remove method.
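For example, here is a minimal sketch of invalidation, assuming a hypothetical UpdateSettingsAsync method on the decorator:

// Inside CachedSettingsService
public async Task UpdateSettingsAsync(int propertyId, Settings settings)
{
    // Update through the decorated service first
    await _settingsService.UpdateSettingsAsync(propertyId, settings);

    // Then evict the stale entry so the next read hits the real service
    var key = $"{nameof(propertyId)}:{propertyId}";
    _cache.Remove(key);
}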

There are only two hard things in Computer Science: cache invalidation and naming things.

– Phil Karlton

From TwoHardThings

Unit Tests

Let’s see how you can create a test for this decorator. You will need to create a fake for the decorated service. Then, assert it’s called only once after two consecutive calls to the cached method. Let’s use Moq.

[TestClass]
public class CachedSettingsServiceTests
{
    [TestMethod]
    public async Task GetSettingsAsync_ByDefault_UsesCachedValues()
    {
        var memoryCache = new MemoryCache(Options.Create(new MemoryCacheOptions()));
        var fakeSettingsService = new Mock<ISettingsService>();
        fakeSettingsService.Setup(t => t.GetSettingsAsync(It.IsAny<int>()))
                           .ReturnsAsync(new Settings());
        var service = new CachedSettingsService(memoryCache, fakeSettingsService.Object);

        var propertyId = 1;
        var settings = await service.GetSettingsAsync(propertyId);

        fakeSettingsService.Verify(t => t.GetSettingsAsync(propertyId), Times.Once);

        settings = await service.GetSettingsAsync(propertyId);
        fakeSettingsService.Verify(t => t.GetSettingsAsync(propertyId), Times.Once);
    }
}

Distributed approach

Now, let’s move to the distributed cache. A distributed cache layer lives on a separate server, so you aren’t limited to the memory of the server running your API site. A distributed cache makes sense when your site runs behind a load balancer, along with many instances of the same site. For more advantages, see Distributed caching in ASP.NET Core.

There is an implementation of the distributed cache using Redis for ASP.NET Core. Redis is “an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker”.

Using a distributed cache is similar to the in-memory approach. This time, you need to install the Microsoft.Extensions.Caching.StackExchangeRedis NuGet package and use the AddStackExchangeRedisCache method in your ConfigureServices method. Also, you need a Redis connection string and an InstanceName. The InstanceName groups entries with a prefix. It’s helpful when using a single Redis server with different sites.

Notice there are two similar NuGet packages to use Redis with ASP.NET Core: Microsoft.Extensions.Caching.Redis and Microsoft.Extensions.Caching.StackExchangeRedis.

// Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    services.AddTransient<SettingsService>();
    services.AddTransient<ISettingsService>(provider =>
    {
        var cache = provider.GetRequiredService<IDistributedCache>();
        var settingsService = provider.GetRequiredService<SettingsService>();
        return new CachedSettingsService(cache, settingsService);
    });

    services.AddStackExchangeRedisCache(options =>
    {
        var redisConnectionString = Configuration.GetConnectionString("Redis");
        options.Configuration = redisConnectionString;

        var assemblyName = Assembly.GetExecutingAssembly().GetName();
        options.InstanceName = assemblyName.Name;
    });   
}
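For completeness, the matching “Redis” connection string could live in your appsettings.json file. A minimal sketch, assuming a Redis instance listening on the default local port:

"ConnectionStrings": {
    "Redis": "localhost:6379"
}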

Redecorate

Make sure to change the cache interface from IMemoryCache to IDistributedCache. Go to your CachedSettingsService class and the ConfigureServices method.

public class CachedSettingsService : ISettingsService
{
    private readonly IDistributedCache _cache;
    private readonly ISettingsService _settingsService;

    public CachedSettingsService(IDistributedCache cache, ISettingsService settingsService)
    {
        _cache = cache;
        _settingsService = settingsService;
    }

    public async Task<Settings> GetSettingsAsync(int propertyId)
    {
        var key = $"{nameof(propertyId)}:{propertyId}";
        return await _cache.GetOrSetValueAsync(key, async () => await _settingsService.GetSettingsAsync(propertyId));
    }
}

Now, let’s create a new GetOrSetValueAsync extension method to use the distributed cache. You need to use asynchronous methods and a serializer for the values to cache. These methods are GetStringAsync and SetStringAsync. This time you don’t need sizes for cache entries.

public static class DistributedCacheExtensions
{
    public static readonly DistributedCacheEntryOptions DefaultDistributedCacheEntryOptions
        = new DistributedCacheEntryOptions
        {
            AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(60),
            SlidingExpiration = TimeSpan.FromSeconds(10),
        };

    public static async Task<TObject> GetOrSetValueAsync<TObject>(this IDistributedCache cache, string key, Func<Task<TObject>> factory, DistributedCacheEntryOptions options = null)
        where TObject : class
    {
        var result = await cache.GetValueAsync<TObject>(key);
        if (result != null)
        {
            return result;
        }

        result = await factory();

        await cache.SetValueAsync(key, result, options);

        return result;
    }

    private static async Task<TObject> GetValueAsync<TObject>(this IDistributedCache cache, string key)
        where TObject : class
    {
        var data = await cache.GetStringAsync(key);
        if (data == null)
        {
            return default;
        }

        return JsonConvert.DeserializeObject<TObject>(data);
    }

    private static async Task SetValueAsync<TObject>(this IDistributedCache cache, string key, TObject value, DistributedCacheEntryOptions options = null)
        where TObject : class
    {
        var data = JsonConvert.SerializeObject(value);

        await cache.SetStringAsync(key, data, options ?? DefaultDistributedCacheEntryOptions);
    }
}

Unit Tests

For unit testing, you can use MemoryDistributedCache, an in-memory implementation of IDistributedCache. From the previous unit test, you need to replace the IMemoryCache dependency with var memoryCache = new MemoryDistributedCache(Options.Create(new MemoryDistributedCacheOptions()));.
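Here is a minimal sketch of the adapted test, assuming the same fake ISettingsService as before:

[TestMethod]
public async Task GetSettingsAsync_WithDistributedCache_UsesCachedValues()
{
    // In-memory stand-in for Redis, good enough for unit tests
    var cache = new MemoryDistributedCache(Options.Create(new MemoryDistributedCacheOptions()));
    var fakeSettingsService = new Mock<ISettingsService>();
    fakeSettingsService.Setup(t => t.GetSettingsAsync(It.IsAny<int>()))
                       .ReturnsAsync(new Settings());
    var service = new CachedSettingsService(cache, fakeSettingsService.Object);

    var propertyId = 1;
    await service.GetSettingsAsync(propertyId);
    await service.GetSettingsAsync(propertyId);

    fakeSettingsService.Verify(t => t.GetSettingsAsync(propertyId), Times.Once);
}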

Conclusion

Voilà! Now you know how to cache the results of a slow service using an in-memory and a distributed approach by implementing the decorator pattern in your ASP.NET Core API sites. Additionally, you can turn the cache layer on or off using a toggle in your appsettings file, like in the sketch below. If you need to cache outside of an ASP.NET Core site, you can use libraries like CacheManager, Foundatio and Cashew. Happy caching time!
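For the on/off toggle, here is a minimal sketch, assuming a hypothetical "UseCache" flag in appsettings.json:

// Startup.cs
services.AddTransient<SettingsService>();
if (Configuration.GetValue<bool>("UseCache"))
{
    // Wrap the real service with the caching decorator
    services.AddTransient<ISettingsService>(provider =>
    {
        var cache = provider.GetRequiredService<IDistributedCache>();
        var settingsService = provider.GetRequiredService<SettingsService>();
        return new CachedSettingsService(cache, settingsService);
    });
}
else
{
    // No caching: expose the real service directly
    services.AddTransient<ISettingsService, SettingsService>();
}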

The Clean Coder: Three Takeaways

The Clean Coder is the second book in the Clean Code trilogy. It should be mandatory reading for any professional programmer. This book isn’t about programming itself. It’s about the professional practice of programming. It covers everything from what professionalism is to testing strategies, pressure and time management.

Professionalism

Your career is your responsibility, not your employer’s

Professionalism is all about taking responsibility.

  • Do not harm: Do not release code you aren’t certain about. If QA or a user finds a bug, you should be surprised. Make sure to take steps to prevent it from happening in the future.
  • Know how it works: Every line of code should be tested. Professional developers test their code.
  • Know your domain: It’s unprofessional to code your spec without any knowledge of the domain.
  • Practice: It’s what you do when you aren’t getting paid so you will be paid well.
  • Be calm and decisive under pressure: Enjoy your career, don’t work under pressure. Avoid situations that cause stress. For example, committing to deadlines you can’t meet.
  • Meetings are necessary and costly. It’s unprofessional to attend too many meetings. When a meeting gets boring, be polite and ask if your presence is still needed.

Say No/Say Yes

Say. Mean. Do

Professionals have the courage to say no to their managers. Also, as a professional, you don’t have to say yes to everything. But you should find a creative way to make a yes possible.

  • There is no “trying”. Say no and offer a trade-off. “Try” is taken as a yes and outcomes are expected accordingly
  • You can’t commit to things you don’t control. But, you can commit to some actions. For example, if you need somebody else to finish a dependency, create an interface and meet with the person responsible.
  • Raise the flag. If you don’t tell someone you have a problem as soon as possible, you won’t have someone to help you in time.
  • Saying yes while dropping professionalism is not the way to solve problems.

Coding

It could be considered unprofessional not to use TDD

  • If you are tired or distracted, do not code. Coding requires concentration. Otherwise, you will end up rewriting your work.
  • Be polite! Remember you will be the next one interrupting someone else. Use a failing test to let you know where you were after an interruption.
  • Debugging time is as expensive as coding time. Reduce your debugging time to almost 0. Use TDD, instead.
  • When you are late, raise the flag and be honest. It isn’t OK to keep saying you’re fine up until the end and then not deliver your task.
  • Be honest about finishing your work. The worst attitude is when you say you’re done when you actually aren’t.
  • Ask for help. It’s unprofessional to remain stuck when there is help available.

How to version control your projects with Git. A guide to time travel

Have you ever tried to version control your projects by appending dates to their names? Are you doing it now? I have to confess I did it back in school with my class projects. If you’re doing it this way, there is a better way! Read on!

$ ls
Project-2020-04-01/
Project-Final/
Project-Final-Final/
Project-ThisIsNOTTheFinal/
Project-ThisIsTheTrueFinal/

You can find the presentation version of this post here.

Version Control System

First, what is a version control system? A version control system, VCS, is a piece of code that keeps track of changes in a file or set of files, so you can later access a specific change.

A version control system, among other things, allows you to

  • Revert a file or the entire project to a previous state
  • Follow all changes of a file through its lifetime
  • See who has modified a file

To better understand it, let’s use an analogy. A version control system is a time machine. With it you can go backwards in time, create timelines and merge two separate timelines. You don’t travel to historical events in time, but to checkpoints in your project.

Centralized vs Distributed

There is a distinction that sets version control systems apart: centralized vs distributed.

A centralized VCS requires a server to perform any operation on your project. You need to connect to this server to download all your files to start working. If this server goes down, users can’t work. Bye, bye, productivity. Team Foundation Server (TFS) from Microsoft is a centralized VCS.

But a distributed VCS doesn’t need a centralized server in the same sense. Each user has a complete copy of the entire project. Most operations are performed against this local copy, so you can work offline. A two-hour flight without internet? No problem. For example, Git is a distributed VCS.

If you’re coming from TFS, I’ve written a Git Guide for TFS Users

What’s Git anyways?

Up to this point, Git doesn’t need any further introduction. From its official page, “Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.”

Install and Setup

You can install Git from its official page. You can find instructions to install it using package managers for all major OS’s.

Before starting to work, you need some one-time setup. You need to configure a name and an email. This name and email will appear in the history of any file you create or modify. Let’s go to the command line.

# Replace "John Doe" and "johndoe@example.com" with your own name and email
$ git config --global user.name "John Doe"
$ git config --global user.email johndoe@example.com

You can change this name and email between projects. If you want to use different names and emails for work, you’re covered. You can manage different accounts between folders.
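For example, here is a minimal sketch using Git conditional includes (available since Git 2.13), assuming your work projects live under a ~/work/ folder:

# ~/.gitconfig
[includeIf "gitdir:~/work/"]
    path = ~/.gitconfig-work

# ~/.gitconfig-work
[user]
    name = John Doe
    email = johndoe@work.example.com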

Let’s get started

There are two ways to start working with Git. From scratch or from an existing project.

# If you are starting from scratch
$ git init

# If you have an existing project
# Replace <url> with the actual url of the project
# For example, https://github.com/canro91/Parsinator
$ git clone <url>

Here, git init creates a hidden folder called .git inside your project folder. Git keeps everything under the hood in this folder.

Did you notice? The command name is clone, since you are getting a copy of everything the server has about a project.

Adding new files

Let’s start working! Create new files or change existing ones in your project. Next, you want Git to keep track of these files. You need three commands: status, add and commit.

# git status will show pending changes in files
$ git status
# Create a README file using your favorite text editor.
# Add some content
# See what has changed now
$ git status
$ git add README
$ git commit -m 'Add README file'

Now Git knows about a file called README. You have a commit (a checkpoint) in your project you can go back to. Git has stored your changes. This commit has a unique code (a SHA-1) and an author.

You can use log to see all commits created so far.

# You will see your previous commit here
$ git log

Staging area

The staging area or index is a concept that makes Git different. This is an intermediate area where you can review your files before committing them. It’s like making your files wait in a line before keeping track of them. This allows you to commit only a group of files or portions of a single file.

If you’re coming from TFS, notice you need two steps to store your changes. These are: add to include files into the staging area and commit to create a checkpoint from them.

Ignoring files

Sometimes you don’t need to version control certain files or folders. For example, log files, third-party libraries, files and folders generated by compilation or by your IDE.

If you’re starting from scratch, you need to do this only once. But if you’re starting from an existing project, chances are somebody already did it.

You need to create a .gitignore file with the patterns of files and folders you want to ignore. You can use this file globally or per project. There is a collection of gitignore templates for your language and your IDE.

For example, to ignore the node_modules folder, the .gitignore file will contain

node_modules/

Git won’t notice any files matching the patterns included in the .gitignore file. Run git status to verify it.
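If you want a single ignore file applied to every repository on your machine, you can point Git to a global one. A minimal sketch, assuming a file at ~/.gitignore_global:

# Use ~/.gitignore_global as a global ignore file for all repositories
$ git config --global core.excludesFile ~/.gitignore_global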

Commit messages

Please, please don’t use “uploading changes” or any variations.

A good commit message should tell why the change is needed, what problem it fixes and any side effect it might have.

Depending on your workplace or project, you have to follow a naming convention for your commit messages. For example, using a code for the type of change (feature, test, bug or refactor) and a task number.
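For example, a message following a hypothetical convention like that could look like this:

# Type of change plus a task number, then a short summary
$ git commit -m 'feat: TASK-123 Add pagination to the product list'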

Commit messages are important! I heard from a friend who received a comment in a job application exercise about his commit messages. So, when it’s time to shine, shine even for your commit messages.

Branching and merging

Branching

Using the time machine analogy, a branch is a separate timeline. Changes in one timeline don’t interfere with changes in other timelines. Timelines are called branches. By convention, the main timeline is called master.

This is one of the most awesome Git features. Git branches are lightweight and fast compared to other VCS.

# Create a new branch called testing
$ git branch testing
# List all branches you have
$ git branch
# Move to the new testing branch
$ git checkout testing
# Modify the README file
# For example, add a new line
# "For example, how to create branches"
$ git status
$ git add README
$ git commit -m 'Add example to README'

# Now move back to the master branch
$ git checkout master
# See how README hasn't changed
# Modify the README file again
# For example, add "Git is so awesome, isn't it?"
$ git status
$ git add README
$ git commit -m 'Add Git awesomeness'
# See how these changes live in different timelines
$ git log --oneline --graph --all

Merging

Merging two branches is combining two separate timelines.

SPOILER ALERT: It’s like when Marty goes to the past to get back the almanac and he is about to run into himself. Or when Captain America goes back to New York in 2012 and ends up fighting the other Captain America. You get the idea.

Let’s create a new branch and merge it to master.

# Move to master
$ git checkout master
# Create a new branch called hotfix and move there
$ git checkout -b hotfix
# Modify README file
# For example, add a new line
# "Create branches with Git is soooo fast!"
$ git status
$ git add README
$ git commit -m 'Add another example to README'
# Move back to master. Here master is the destination of all changes on hotfix
$ git checkout master
$ git merge hotfix
# See how changes from hotfix are now in master
# Since hotfix is merged, get rid of it
$ git branch -d hotfix

Here you have created a branch called hotfix and merged it into master. But you still have some changes on your testing branch. Let’s merge this branch and see what happens.

$ git checkout master
$ git merge testing
# BOOM. You have a conflict
# Notice you have pending changes in README
$ git status
# Open README, notice you will find some weird characters.
#
# For example,
# <<<<<<< HEAD
# Content in master branch
# =======
# Content in testing branch
# >>>>>>> testing
#
# You have to remove these weird things
# Modify the file to include only the desired changes
$ git status
$ git add README
# Commit to signal conflicts were solved
$ git commit
# Remove testing after merge
$ git branch -d testing

Git encourages working with branches. You can start to create branches per task or feature. There is a convention for branch creation, GitFlow. It suggests feature, release and hotfix branches.
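For example, branch names under a convention like GitFlow could look like this (the names here are made up):

$ git checkout -b feature/add-shopping-cart
$ git checkout -b release/1.2.0
$ git checkout -b hotfix/fix-login-redirect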

If you’re coming from TFS, notice you need to move first to the branch you want to merge into. You merge from the destination branch, not from the source branch.

Getting it to the cloud

Up until now, all your work lives in your own computer. But what if you want your project to live outside your computer? You need a hosting solution. Among the most popular hosting solutions for Git, you can find GitHub, GitLab and Bitbucket.

It’s important to distinguish between Git and Github. Git != GitHub. Git is the version control system and GitHub is the hosting solution for Git projects.

No matter what hosting you choose, your code isn’t synced automatically with your server. You have to do it yourself. Let’s see how.

You can create a repo from your GitHub account. Go to Your Repositories. You need to use a name and a description. The wizard will give you some instructions to start from scratch or from an existing project.

# Associate an endpoint to your local project. 
# Endpoints are called remotes.
# Replace this url with your own
$ git remote add origin https://github.com/canro91/GitDemo.git
# push uploads the local master to a branch called master in the remote too
$ git push -u origin master
# Head to your GitHub account and refresh

Conclusion

Voilà! You have learned the most frequent commands for everyday use. You know how to use Git from the command line. But most IDEs offer Git integration through plugins or extensions. Now try to use Git from your favorite IDE. If you want to continue practicing these concepts, follow First Contributions to open your first Pull Request. Happy Git time!

Your mission, Jim, should you decide to accept it, is to get the latest changes from your repository. After pushing, clone your project in another folder, change any file and push this change. Next, go back to your first folder, modify another file and try to push. This post will self-destruct in five seconds. Good luck, Jim.

Pragmatic Thinking and Learning: Three takeaways

This book sits between learning and programming. It’s like a learning guide for developers. Well, it’s more than that. It starts with an expertise model, moves to an analogy of how the brain works, and ends with how to capture new ideas and organize your learning.

Since learning is the best skill to have, this book is valuable and worth reading. You will find helpful tips for your learning and your everyday work. For example, always keep a piece of paper with you to note ideas and prefer ink over memory. At the end of each chapter, there are exercises to add into your routine and practice daily.

Expertise model

You aren’t a novice or an expert at everything. You are in different stages per skill.

  • Novices: They don’t know how to respond to mistakes. They only want to achieve an immediate goal. They need a recipe.
  • Advanced Beginners: They try tasks on their own. They don’t have a big picture yet. They start to incorporate advice from past experiences.
  • Competent: They can create mental models of the problem domain. They troubleshoot problems on their own. They can mentor and don’t bother experts so much.
  • Proficient: They want to understand concepts. They can correct previous tasks and perform better next time. They know when to apply principles in context.
  • Expert: They work from intuition. They know which details to focus on and which details to ignore.

“Once you truly become an expert, you become painfully aware of just how little you really know”

Let the R-mode be free

Roughly speaking, the brain is a single-bus dual-core processor. Only one CPU can access memory banks at a time.

Our brain works in two modes: linear mode (or L-mode) and rich mode (or R-mode). Learning to Learn calls these two modes: focus and diffuse mode. You need these two modes to work together.

The L-mode is rational and the R-mode is asynchronous. The L-mode or focus mode is when you are actively studying a subject. You’re sitting in front of your computer or a textbook figuring out the details of that subject. But the diffuse mode is when you are doing something else and that subject starts to make more sense all of a sudden. Have you woken up with the solution to a problem you left at work the day before? Have you come up with a solution in the shower? That’s the R-mode or diffuse mode.

Since most of the thinking happens away from the keyboard, let the R-mode be free:

  • Use metaphors
  • Put something head-up. Look at your problem from a different perspective. Try to come up with 3 or 4 ways to deliberately cause the problem you’re debugging.
  • Try oblique strategies. For example, for musicians: how would your composition sound from outside the room? In a dark room? For developers: what would your library look like once it’s finished? How would someone use it?
  • Change your routine
  • Go for a walk. Have you seen children playing at not stepping on the leaves on the floor? Or pretending the floor is lava? That’s the idea.

Bonus

  • Have one engineering log. Trust ink over memory
  • Breathe, don’t hiss. Know when to temporarily step away from your problem. Sometimes you only need to sleep on it.
  • Trust intuition but verify
  • Consider the context. There’s intention behind mistakes.
  • Prefer aesthetics. Strive for good design, it really works better. It might remind you of well-crafted and well-packed expensive cellphones and laptops.
  • Always have something with you to take notes. You don’t know where you will be when your next idea shows up.
  • Learn something by doing or building. Instead of dissecting a frog, build one.

How to create a CRUD API with ASP.NET Core and Insight.Database

A common task when working with web applications is querying the database. You want to store and retrieve information from your database. If you choose plain ADO.NET, you end up writing boilerplate code. But you could use an ORM instead.

Why use an ORM?

An ORM, Object-Relational Mapper, is a library that translates between your program and your database. It converts objects to database records and vice versa.

ORMs vary in size and features. You can find ORMs that create and maintain your database objects and generate SQL statements. Also, you can find micro-ORMs that make you write SQL queries.

You can roll your own database access layer. But an ORM helps you to:

  • Open and close connections, commands and readers
  • Parse query results into C# objects
  • Prepare input values to avoid SQL injection attacks
  • Write less code. Less code, less bugs

Insight.Database

Insight.Database is a “fast, lightweight .NET micro-ORM”. It allows you to query your database with almost no mapping code. It maps fields/properties to query parameters and columns to fields/properties. Also, you can make extra changes to records as they’re being read. For example, you can trim whitespace-padded string columns from a legacy database.

Insight.Database is the .NET micro-ORM that nobody knows about because it’s so easy, automatic, and fast, (and well-documented) that nobody asks questions about it on StackOverflow.

Insight.Database recommends calling your database through stored procedures. It doesn’t generate any SQL statements for you.

Roll-up your sleeves

Let’s create a simple CRUD application for a catalog of products. You should have the latest version of the ASP.NET Core SDK and a database engine installed. Let’s use the SQL Server Express LocalDB shipped with Visual Studio.

Of course, you can use another database. Insight.Database has providers to work with MySql, SQLite or PostgreSQL. For a list of all providers, see Database providers.

Create the skeleton

Let’s create an ASP.NET Core Web application from Visual Studio for our catalog of products. Choose API as the project type. Let’s call it ProductCatalog. You will have a file structure like this one:

|____appsettings.Development.json
|____appsettings.json
|____Controllers
| |____WeatherForecastController.cs
|____ProductCatalog.csproj
|____Program.cs
|____Properties
| |____launchSettings.json
|____Startup.cs
|____WeatherForecast.cs

You can delete the files WeatherForecast.cs and WeatherForecastController.cs.

Now, let’s create a ProductController inside the Controllers folder. You can choose the template with read/write actions. You will get a class like this:

[Route("api/[controller]")]
[ApiController]
public class ProductController : ControllerBase
{
    // GET: api/Product
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new string[] { "value1", "value2" };
    }

    // GET: api/Product/5
    [HttpGet("{id}", Name = "Get")]
    public string Get(int id)
    {
        return "value";
    }

    // POST: api/Product
    [HttpPost]
    public void Post([FromBody] string value)
    {
    }

    // PUT: api/Product/5
    [HttpPut("{id}")]
    public void Put(int id, [FromBody] string value)
    {
    }

    // DELETE: api/Product/5
    [HttpDelete("{id}")]
    public void Delete(int id)
    {
    }
}

Now, if you run the project and make a GET request to https://localhost:44343/api/Product, you will get two result values. The port number may be different for you.

[
    "value1",
    "value2"
]

You’re ready to start!

Get all products

Create the database

Let’s create a database ProductCatalog and a Products table. Feel free to use a table designer or write the SQL statement. A product will have an id, name, price and description.

CREATE TABLE [dbo].[Products]
(
    [Id] INT NOT NULL PRIMARY KEY IDENTITY,
    [Name] VARCHAR(50) NOT NULL,
    [Price] DECIMAL NOT NULL,
    [Description] VARCHAR(255) NULL
)

It’s a good idea to version control the table definitions and the stored procedures. But, let’s keep it simple for now.

Modify GET

Let’s create a Product class inside a new Models folder. Modify the first Get method to return an IEnumerable<Product> instead of IEnumerable<string>. Don’t forget to add using ProductCatalog.Models; at the top of your file.

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
    public string Description { get; set; }
}

Now, install the Insight.Database NuGet package. After that, update the body of the Get method to query the database with a stored procedure called GetAllProducts. You will need the Query extension method from Insight.Database.

using ProductCatalog.Models;
using System.Data.SqlClient;

[HttpGet]
public IEnumerable<Product> Get()
{
    var connection = new SqlConnection(@"Data Source=(localdb)\MSSQLLocalDB;Initial Catalog=ProductCatalog;Integrated Security=True");
    return connection.Query<Product>("GetAllProducts");
}

I know, I know…We will refactor this in the next steps…By the way, don’t version control passwords or any sensitive information, please.

Create the GetAllProducts stored procedure

Now, you need the GetAllProducts stored procedure. Depending on your workplace, you may have to follow a naming convention. For example, sp_Products_GetAll.

CREATE PROCEDURE [dbo].[GetAllProducts]
AS
BEGIN
    SELECT Id, Name, Price, Description
    FROM dbo.Products
END
GO

To try things out, add a product with an INSERT statement on your database.

INSERT INTO Products(Name, Price, Description)
VALUES ('iPhone SE', 399, 'Lots to love. Less to spend.')

And, if you make another GET request, you will find the new product. Yay!

[
    {
        "id": 1,
        "name": "iPhone SE",
        "price": 399,
        "description": "Lots to love. Less to spend."
    }
]

It’s a good idea not to return model or business objects from your API methods. It’s recommended to create view models or DTO’s with only the properties a consumer of your API will need. But, let’s keep it simple.

Refactor

Let’s clean what we have. First, move the connection string to the appsettings.json file.

"ConnectionStrings": {
    "ProductCatalog": "Data Source=(localdb)\\MSSQLLocalDB;Initial Catalog=ProductCatalog;Integrated Security=True;"
}

Next, register a SqlConnection in the ConfigureServices method of the Startup class. This will create a new connection on every request.

public void ConfigureServices(IServiceCollection services)
{
    services.AddScoped((provider) => new SqlConnection(Configuration.GetConnectionString("ProductCatalog")));
    services.AddControllers();
}

Now, update ProductController to add a field and a constructor.

private readonly SqlConnection _connection;

public ProductController(SqlConnection connection)
{
    _connection = connection;
}

// GET: api/Product
[HttpGet]
public IEnumerable<Product> Get()
{
    return _connection.Query<Product>("GetAllProducts");
}

Notice, the properties in the Product class match the column names in the Products table. After this refactoring, the Get method should continue to work.

Pagination

If your table grows, you don’t want to retrieve all products in a single database call. Let’s query a page of results each time. For this, you will need two parameters in the Get method and in the GetAllProducts stored procedure.

[HttpGet]
public IEnumerable<Product> Get(int pageIndex = 1, int pageSize = 10)
{
    var parameters = new { PageIndex = pageIndex - 1, PageSize = pageSize };
    return _connection.Query<Product>("GetAllProducts", parameters);
}

We have used an anonymous object for the parameters. These names should match the parameter names in the stored procedure definition. Notice, the stored procedure expects a zero-based page index, so we have used pageIndex - 1.

Next, modify GetAllProducts to add two new parameters: PageIndex and PageSize. And update the SELECT statement to use the OFFSET-FETCH clauses.

ALTER PROCEDURE [dbo].[GetAllProducts]
    @PageIndex INT = 1,
    @PageSize INT = 10
AS
BEGIN
    SELECT Id, Name, Price, Description
    FROM dbo.Products
    ORDER BY Name
    OFFSET @PageIndex * @PageSize ROWS FETCH NEXT @PageSize ROWS ONLY;
END
GO

If you add more products to the table, you will see how you retrieve a subset of products on GET requests.

If you want to practice more, create an endpoint to search for a product by id. You should change the appropriate Get method and create a new stored procedure: GetProductById.
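Here is a minimal sketch of that endpoint, assuming a GetProductById stored procedure that takes an @Id parameter and returns a single row:

// GET: api/Product/5
[HttpGet("{id}", Name = "Get")]
public Product Get(int id)
{
    var parameters = new { Id = id };
    // Query returns a list; take the first match or null (needs using System.Linq)
    return _connection.Query<Product>("GetProductById", parameters).FirstOrDefault();
}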

Insert a new product

First, inside a new ViewModels folder, create an AddProduct class. It should have three properties: name, price and description.

public class AddProduct
{
    public string Name { get; set; }
    public decimal Price { get; set; }
    public string Description { get; set; }
}

Next, update the Post method in the ProductController to receive an AddProduct parameter. This time, since you will insert a new product, you need the Execute method instead of Query.

[HttpPost]
public void Post([FromBody] AddProduct request)
{
    var product = new Product
    {
        Name = request.Name,
        Price = request.Price,
        Description = request.Description
    };
    _connection.Execute("AddProduct", product);
}

Next, create the AddProduct stored procedure. It will have a single INSERT statement.

CREATE PROCEDURE AddProduct
    @Name varchar(50),
    @Price decimal(18, 0),
    @Description varchar(255) = NULL
AS
BEGIN
    INSERT INTO Products(Name, Price, Description)
    VALUES (@Name, @Price, @Description)
END
GO

You need to validate input data, of course. For example, name and price are required. You can use annotations and a model validator or a library like FluentValidation.
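For example, here is a minimal sketch with data annotations; with the [ApiController] attribute, ASP.NET Core returns a 400 response automatically when validation fails:

using System.ComponentModel.DataAnnotations;

public class AddProduct
{
    [Required]
    public string Name { get; set; }

    // Price must be a positive amount; the upper bound here is arbitrary
    [Range(0.01, 1000000)]
    public decimal Price { get; set; }

    public string Description { get; set; }
}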

Finally, to add a new product, make a POST request with a json body. It should include the name, price and description for the new product. You will see your product if you make another GET request.

POST https://localhost:44343/api/Product
{
    "name": "iPhone 11 Pro",
    "price": 999,
    "description": "Pro cameras. Pro display. Pro performance."
}

Conclusion

Voilà! You know how to use Insight.Database to retrieve results and execute actions with stored procedures using the Query and Execute methods. If you stick to naming conventions, you won’t need any mapping code. Insight.Database helps you keep your data access to a few lines of code.

Your mission, Jim, should you decide to accept it, is to change the Update and Delete methods to complete all CRUD operations. This post will self-destruct in five seconds. Good luck, Jim.