If you are a C# developer, chances are you have heard about this new .NET Core thing and the new version of the ASP.NET framework. You can continue to work with ASP.NET Web API or any other framework from the old ASP.NET you’ve known for years. But, ASP.NET Core is here to stay.
In case you missed it, “ASP.NET Core is a cross-platform, high-performance, open-source framework for building modern, cloud-based, Internet-connected applications.” “ASP.NET Core is a redesign of ASP.NET 4.x, with architectural changes that result in a leaner, more modular framework.”
ASP.NET Core has brought a lot of new features. For example, cross-platform development and deployment, built-in dependency injection, middlewares, health checks, out-of-the-box logging providers, hosted services, API versioning, and much more.
Don’t worry if you haven’t started working with ASP.NET Core yet. This is a new framework with lots of new features, but it carries over many concepts from the previous version. So, you will feel at home.
TL;DR
You can create projects from the command line.
NuGet packages are listed on the csproj files.
csproj files don’t list .cs files anymore.
There’s no Web.config, you have a json file instead.
There’s no Global.asax, you have Startup.cs instead.
You have a brand new dependency container.
1. Every journey begins with the first step
If you are adventurous, download and install the ASP.NET Core SDK and create a new empty web project from Visual Studio. These are the files you get from it.
ASP.NET Core has been created with other operating systems and IDEs in mind. Now, you can create a project, compile it, and run the tests from the command line.
For example, to create a new empty Web project from the command line, you can use dotnet new web.
2. Where is the packages.config file?
If you installed a NuGet package into your brand new ASP.NET Core project, one thing you may have noticed is the missing packages.config file. If you remember, it’s an XML file that lists the installed packages.
But, where in the world are those packages referenced in ASP.NET Core projects? In the csproj file of your project!
NuGet packages are referenced under ItemGroup in a PackageReference node. There you are Newtonsoft.Json! Goodbye, packages.config file!
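As an illustration, a minimal csproj file for a web project might look like this (the target framework and package version here are just examples):

```xml
<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Newtonsoft.Json" Version="12.0.3" />
  </ItemGroup>

</Project>
```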
3. Wait! What happened to csproj files?
Csproj files have been simplified too. Before, a csproj file listed every single file in the project. All your files with the .cs extension were in it. Now, every .cs file within the folder structure of the project is part of it.
Before, things started to get complicated as time went by and the number of files increased. Sometimes, merge conflicts were a nightmare. There were files under version control not included in the csproj file. Were they meant to be excluded because they didn’t apply anymore? Or somebody tried to solve a merge conflict and forgot to include them? This problem is no more!
4. Where is the Web.config file?
Another missing file is the Web.config file. Instead, you have a JSON file: the appsettings.json file. You can use strings, integers, booleans, and arrays in your config file.
There is even support for sections and subsections. Before, if you wanted to achieve that, you had to come up with a naming convention for your keys. For example, prepending the section and subsection name in every key name.
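For example, an appsettings.json file with a section and a subsection could look like this (all key names here are made up for illustration):

```json
{
  "MySettings": {
    "ApiUrl": "https://api.example.com",
    "TimeoutInSeconds": 30,
    "Retry": {
      "Enabled": true,
      "DelaysInSeconds": [ 1, 2, 5 ]
    }
  }
}
```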
Probably, you have used ConfigurationManager all over the place in your code to read configuration values. Now, you can have a class with properties mapped to a section or subsection of your config file. And you can inject it into your services.
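For example, a hypothetical MySettings class could map to the "MySettings" section of the config file, with property names matching the keys. It can then be injected through the IOptions&lt;T&gt; abstraction:

```csharp
// A hypothetical settings class. Property names match the keys
// in the "MySettings" section of appsettings.json
public class MySettings
{
    public string ApiUrl { get; set; }
    public int TimeoutInSeconds { get; set; }
}

// Injected into a service via IOptions<T>
// (from the Microsoft.Extensions.Options namespace)
public class MyService
{
    private readonly MySettings _settings;

    public MyService(IOptions<MySettings> options)
    {
        _settings = options.Value;
    }
}
```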
You still need to register that configuration into the dependency container. More on that later!
Additionally, you can override keys per environment. You can use the name of your environment in the file name. For example, appsettings.Development.json or appsettings.QA.json. You can specify the current environment with an environment variable or in the launchSettings.json file.
There’s even support for sensitive settings that you don’t want to version control: secrets.json file. You can manage this file from the command line too.
5. Where is the Global.asax file?
Yet another missing file: Global.asax. You used it to perform actions on application or session events. For example, when application started or ended. It was the place to do one-time setups, register filters, or define routes.
But now we use the Startup.cs file. It contains the initialization and all the settings needed to run the application. A Startup.cs file looks like this:
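Here is a minimal sketch of a Startup class, in the shape used by ASP.NET Core 3.x:

```csharp
public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    // Register services into the dependency container
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();
    }

    // Build the app's request processing pipeline
    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }

        app.UseRouting();
        app.UseEndpoints(endpoints => endpoints.MapControllers());
    }
}
```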
It has two methods: ConfigureServices() and Configure().
The Configure() method replaces the Global.asax file. It creates the app’s request processing pipeline. This is the place to register a filter or a default route for your controllers.
And the ConfigureServices() is to configure the services to be injected into the dependency container…Wait, what?
6. A brand new dependency container
Prior to ASP.NET Core, if you wanted to apply dependency injection, you had to bring your own container and roll the discovery of services for your controllers yourself. For example, you had an XML file to map your interfaces to your classes, or you did some assembly scanning to do it automatically.
Now, a brand new dependency container is included out-of-the-box. You can inject dependencies into your services, filters, middlewares, and controllers. It lacks some of the features from your favorite dependency container, but it is meant to suit “90% of the scenarios.”
If you are familiar with the vocabulary from other containers, AddTransient(), AddScoped(), and AddSingleton() will ring a bell. These are the lifetimes of the injected services, ranging from the shortest to the longest.
More specifically, a transient service is created every time one is requested. A scoped service is created once per request. And a singleton service is created only once per application lifetime.
To register your services, you have to do it inside of the ConfigureServices() method of the Startup class. Also, you bind your classes to a section or subsection of the config file here.
```csharp
// In the Startup.cs file
public void ConfigureServices(IServiceCollection services)
{
    services.AddTransient<IMyService, MyService>();

    var section = Configuration.GetSection("MySettings");
    services.Configure<MySettings>(section);
}
```
7. Conclusion
You have only scratched the surface of ASP.NET Core. You have learned about some of the changes ASP.NET Core has brought. But, if you haven’t started with ASP.NET Core, go and try it. You may be surprised by how things are done now.
UPDATE (Oct 2023): I wrote this post back in the day when ASP.NET Core was brand new. This is the post I wish I had read back then. In recent versions, ASP.NET Core simplified configurations by ditching Startup.cs files. All other concepts remain the same.
This post was originally published on exceptionnotfound.net as part of Guest Writer Program. Thanks Matthew for editing this post.
This is THE book to learn how to write unit tests. It starts from the definition of a unit test to how to implement them in your organization. It covers the subject extensively.
“The Art of Unit Testing” teaches us to treat unit tests with the same attention and care we treat production code. For example, we should have test reviews instead of only code reviews.
These are some of the main ideas from “The Art Of Unit Testing.”
TL;DR
Write trustworthy tests
Have a unit test project per project and a test class per class
Keep a set of always-passing unit tests
Use “UnitOfWork_Scenario_ExpectedBehaviour” for your test names
Use builders instead of SetUp methods
1. Write Trustworthy Tests
Write trustworthy tests. A test is trustworthy if you don’t have to debug it to make sure it passes.
To write trustworthy tests, avoid any logic in your tests. If you have conditionals and loops in your tests, you have logic in them.
You can find logic in helper methods, fakes, and assert statements. Avoid logic in assert statements; use hardcoded values instead.
Tests with logic are hard to read and replicate. A unit test should consist of method calls and assert statements.
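To illustrate, here is a small MSTest sketch. The PriceCalculator class and its CalculateTotal method are hypothetical; the point is that the expected value is hardcoded instead of computed:

```csharp
[TestClass]
public class PriceCalculatorTests
{
    [TestMethod]
    public void CalculateTotal_TwoItems_ReturnsSumOfPrices()
    {
        var calculator = new PriceCalculator();

        var total = calculator.CalculateTotal(10.0m, 20.0m);

        // Hardcoded expected value: no logic in the assert
        Assert.AreEqual(30.0m, total);
    }
}
```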
2. Organize Your Tests
Have a unit test project per project and a test class per class. You should easily find tests for your classes and methods.
Create separate projects for your unit and integration tests. Add the suffix “UnitTests” and “IntegrationTests” accordingly. For a project Library, name your tests projects Library.UnitTests and Library.IntegrationTests.
Create tests inside a file with the same name as the tested code adding the suffix “Tests”. For MyClass, your tests should be inside MyClassTests. Also, you can group features in separate files by adding the feature name as a suffix. For example, MyClassTests.AnAwesomeFeature.
3. Have a Safe Green Zone
Keep a set of always-passing unit tests. You will need some configurations for your integration tests: a database connection, environment variables, or some files in a folder. Integration tests will fail if those configurations aren’t in place. So, developers could ignore some failing tests, and real issues, because of those missing configurations.
Therefore, separate your unit tests from your integration tests. Put them into different projects. This way, you will distinguish between a missing setup and an actual problem with your code.
A failing test should mean a real problem, not a false positive.
4. Use a Naming Convention
Use UnitOfWork_Scenario_ExpectedBehaviour for your test names. You can read it as follows: when calling “UnitOfWork” with “Scenario”, then it “ExpectedBehaviour”.
In this naming convention, a Unit of Work is any logic exposed through public methods that return a value, change the system state, or make an external invocation.
With this naming convention, the logic under test, the inputs, and the expected result are all clear. You will end up with long test names, but it’s OK to have long test names for the sake of readability.
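For example, a test following this convention might look like this (BankAccount and InsufficientFundsException are hypothetical names):

```csharp
// UnitOfWork_Scenario_ExpectedBehaviour
[TestMethod]
public void Withdraw_AmountLargerThanBalance_ThrowsInsufficientFundsException()
{
    var account = new BankAccount(balance: 100);

    Assert.ThrowsException<InsufficientFundsException>(
        () => account.Withdraw(200));
}
```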
5. Prefer Builders over SetUp methods
Use builders instead of SetUp methods. Tests should be isolated from other tests. Sometimes, SetUp methods create shared state among your tests. You will find tests that pass in isolation but fail alongside other tests, and tests that need to be run many times to pass.
Often, SetUp methods end up with initialization for only some tests. Tests should create their own world. Initialize what’s needed inside every test using builders.
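As a sketch, a hypothetical OrderBuilder lets each test create exactly the object it needs, with sensible defaults, instead of relying on state shared through a SetUp method:

```csharp
// A hypothetical builder: each test builds its own Order,
// overriding only the values that matter for that test
public class OrderBuilder
{
    private string _client = "AnyClient";
    private decimal _total = 0m;

    public OrderBuilder WithClient(string client)
    {
        _client = client;
        return this;
    }

    public OrderBuilder WithTotal(decimal total)
    {
        _total = total;
        return this;
    }

    public Order Build() => new Order(_client, _total);
}

// Inside a test:
// var order = new OrderBuilder().WithTotal(100m).Build();
```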
Voilà! These are my main takeaways. Unit testing is a broad subject. The Art of Unit Testing covers almost everything you need to know about it. The main lesson from this book is to write readable, maintainable, and trustworthy tests. Remember, the next person reading your tests will be you.
“Your tests are your safety net, so do not let them rot.”
If you’re new to unit testing, start reading my Unit Testing 101. You will write your first unit test in C# with MSTest. For more naming conventions, check how to name your unit tests.
You need to do a complex operation made of smaller consecutive tasks. These tasks might change from client to client. This is how you can use the Pipeline pattern to achieve that. Let’s implement the Pipeline pattern in C#.
With the Pipeline pattern, a complex task is divided into separate steps. Each step is responsible for a piece of logic of that complex task. Like an assembly line, steps in a pipeline are executed one after the other, depending on the output of previous steps.
TL;DR Pipeline pattern is like the enrich pattern with factories. Pipeline = Command + Factory + Enricher
When to use the Pipeline pattern?
You can use the pipeline pattern if you need to do a complex operation made of smaller tasks or steps. If a single task of this complex operation fails, you want to mark the whole operation as failed. Also, the tasks in your operation vary per client or type of operation.
Some common scenarios to use the pipeline pattern are booking a room, generating an invoice or creating an order.
Let’s use the Pipeline pattern
A pipeline is like an assembly line in a factory. Each workstation in an assembly line adds a part until the product is assembled. For example, in a car factory, there are separate stations to put in the doors, the engine, and the wheels.
With the pipeline pattern, you can create reusable steps to perform each action in your “assembly line”. Then, you run these steps one after the other in a pipeline.
For example, in an e-commerce system to sell an item, you need to update the stock, charge a credit card, send a delivery order and notify the client.
Let’s implement our own pipeline
First, create a command/context class for the inputs of the pipeline.
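For the e-commerce example, a command class and the step abstraction could look like this (property names are illustrative; assuming every step exposes a single ExecuteAsync() method):

```csharp
// A hypothetical command with the inputs the steps need
public class BuyItemCommand
{
    public int ItemId { get; set; }
    public int Quantity { get; set; }
    public string CreditCardNumber { get; set; }
    public string ClientEmail { get; set; }
}

// Every step of the pipeline implements this interface
public interface IStep<TCommand>
{
    Task ExecuteAsync(TCommand command);
}
```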
Then, create one class per each workstation of your assembly line. These are the steps.
In our e-commerce example, steps will be UpdateStockStep, ChargeCreditCardStep, SendDeliveryOrderStep and NotifyClientStep.
```csharp
public class UpdateStockStep : IStep<BuyItemCommand>
{
    public Task ExecuteAsync(BuyItemCommand command)
    {
        // Put your own logic here
        return Task.CompletedTask;
    }
}
```
Next, we need a builder to create our pipeline with its steps. Since the steps may vary depending on the type of operation or the client, you can load your steps from a database or configuration files.
For our e-commerce example, we don’t need to create a delivery order when we sell an eBook. In that case, we need to build two pipelines: BuyPhysicalItemPipeline for products that require shipping and BuyDigitalItemPipeline for products that don’t.
But, let’s keep it simple. Let’s create a BuyItemPipelineBuilder.
```csharp
public class BuyItemPipelineBuilder : IPipelineBuilder
{
    private readonly IStep<BuyItemCommand>[] Steps;

    public BuyItemPipelineBuilder(IStep<BuyItemCommand>[] steps)
    {
        Steps = steps;
    }

    public IPipeline CreatePipeline(BuyItemCommand command)
    {
        // Create your pipeline here...
        var updateStockStep = new UpdateStockStep();
        var chargeCreditCardStep = new ChargeCreditCardStep();

        var steps = new IStep<BuyItemCommand>[] { updateStockStep, chargeCreditCardStep };
        return new BuyItemPipeline(command, steps);
    }
}
```
Now, create the pipeline to run all its steps. It will have a loop to execute each step.
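A sketch of that pipeline class could look like this (assuming an IPipeline interface with a single ExecuteAsync() method):

```csharp
public class BuyItemPipeline : IPipeline
{
    private readonly BuyItemCommand _command;
    private readonly IStep<BuyItemCommand>[] _steps;

    public BuyItemPipeline(BuyItemCommand command, IStep<BuyItemCommand>[] steps)
    {
        _command = command;
        _steps = steps;
    }

    public async Task ExecuteAsync()
    {
        // Run each step in order. An exception thrown by any step
        // fails the whole pipeline
        foreach (var step in _steps)
        {
            await step.ExecuteAsync(_command);
        }
    }
}
```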
Also, you can use the Decorator pattern to perform orthogonal actions on the execution of the pipeline or every step. You can run the pipeline inside a database transaction, log every step or measure the execution time of the pipeline.
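For example, a hypothetical logging decorator could wrap any step without changing the step itself:

```csharp
// A decorator that logs before and after the wrapped step runs.
// It implements the same IStep<T> interface as the step it wraps
public class LoggingStep : IStep<BuyItemCommand>
{
    private readonly IStep<BuyItemCommand> _inner;

    public LoggingStep(IStep<BuyItemCommand> inner)
    {
        _inner = inner;
    }

    public async Task ExecuteAsync(BuyItemCommand command)
    {
        Console.WriteLine($"Starting {_inner.GetType().Name}");
        await _inner.ExecuteAsync(command);
        Console.WriteLine($"Finished {_inner.GetType().Name}");
    }
}
```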
Now everything is in place, let’s run our pipeline.
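Putting it all together, running the pipeline could look like this (the command values are made up, and builder is an instance of the BuyItemPipelineBuilder sketched above):

```csharp
var command = new BuyItemCommand
{
    ItemId = 1,
    Quantity = 2,
    CreditCardNumber = "1234-5678-9012-3456",
    ClientEmail = "client@example.com"
};

var pipeline = builder.CreatePipeline(command);
await pipeline.ExecuteAsync();
```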
Some steps of the pipeline can be delayed for later processing. The user doesn’t have to wait for every step to finish before continuing to interact with the system. You can schedule the execution of some steps in background jobs for later processing. For example, you can use Hangfire or roll your own queue mechanism (Kiukie…Ahem, ahem)
Conclusion
Voilà! This is the Pipeline pattern. You can find it out there or implement it on your own. Depending on the expected load of your pipeline, you could use Azure Functions or any other queue mechanism to run your steps.
I have used and implemented this pattern before. I used it in an invoicing platform to generate documents. Each document and client type had a different pipeline.
Also, I have used it in a reservation management system. I had separate pipelines to create, modify and cancel reservations.
PS: You can take a look at Pipelinie to see more examples. Pipelinie offers abstractions and default implementations to roll your own pipelines and builders.
All ideas and contributions are more than welcome!
Clean Code will change the way you code. It doesn’t teach how to code in a particular language. But, it teaches how to produce code that is easy to read, grasp, and maintain. Although the code samples are in Java, all concepts can be translated to other languages.
Clean Code starts by defining what clean code is, collecting quotes from book authors and other well-known people in the field. It covers the subject of Clean Code from variables to functions to architectural design.
The whole concept of Clean Code is based on the premise that code should be optimized to be read. It’s true that we, as programmers, spend way more time reading code than actually writing it.
These are the three chapters I found most instructive. The whole book is instructive. But, if I could only read a few chapters, I would read the next ones.
Naming
The first concept after the definition of Clean Code is naming things. This chapter encourages names that reveal intent and are easy to pronounce. And, to avoid punny or funny names.
Instead of writing int d; // elapsed time in days, write int elapsedTimeInDays.
Instead of writing, genymdhms, write generationTimestamp.
Instead of writing, HolyHandGrenade, write DeleteItems.
Comments
Clean Code is better than bad code with comments.
We all have heard that commenting our code is the right thing to do. But, this chapter shows what actually needs comments.
Have you seen this kind of comment before? i++; // Increment i. Have you ever written one? I did once.
Don’t use a comment when a function or variable can be used.
Don’t keep the list of changes and authors in comments at the top of your files. That’s what version control systems are for.
Functions
There is an entire chapter devoted to functions. It recommends writing short and concise functions.
“Functions should do one thing. They should do it well”.
This chapter discourages functions with boolean parameters. They have to handle both the true and the false scenarios. Then, they don’t do only one thing.
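As a quick sketch of that idea (the method names here are made up), split the function instead of passing a flag:

```csharp
// Instead of a boolean parameter that makes the function do two things...
// SendEmail(message, attachFile: true);

// ...write two functions that each do one thing
public void SendEmail(Message message) { /* ... */ }
public void SendEmailWithAttachment(Message message, Attachment attachment) { /* ... */ }
```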
Voilà! These are the three chapters I find the most instructive and challenging. If you could only read a few chapters, read those ones. Clean Code should be obligatory reading for every single developer. Teachers should, at least, point students to this book. This book doesn’t just deserve to be read, it deserves to be studied. If you’re new to the Clean Code concept, grab a copy and study it.
Are you new to code reviews? Do you know what to look for in a code review? Do you feel frustrated with your code review? I’ve been there too. Let’s see some tips I learned and found to improve our code reviews.
Code review is a stage of the software development process where a piece of code is examined to find bugs, security flaws, and other issues. Often reviewers follow a coding standard and style guide while reviewing code.
TL;DR
For the reviewer: Be nice. Remember you are reviewing the code, not the writer.
For the reviewee: Don’t take it personally. Every code review is an opportunity to learn.
For all the dev team: Reviews take time too. Add them to your estimates.
Advantages of Code Reviews
Code reviews are a great tool to identify bugs before the code gets shipped to end users. Sometimes we only need another pair of eyes to spot unnoticed issues in our code.
Also, code reviews ensure that the quality of the code doesn’t degrade as the project moves forward. They help to spread knowledge inside a team and mentor new members.
Now that we know what code reviews are good for, let’s see what to look for during code reviews and tips for each role in the review process.
What to look for in a code review?
If we’re new to code reviews and we don’t know what’s going to be reviewed in our code…or if we have been asked to review somebody else’s code and we don’t know what to look for, we can start looking at this:
Does the code:
Compile on somebody else’s machine? If you have a Continuous Integration/Continuous Deployment (CI/CD) tool, we can easily check if the code compiles and all tests pass.
Include unit or integration tests?
Introduce new bugs?
Follow current standards?
Reimplement things? Is some logic already implemented in the standard library or an extension method?
Build things the hard way?
Kill performance?
Have duplication? Has the code been copied and pasted?
It’s a good idea to have a checklist next to us while reviewing the code. We can create our own checklist or use somebody else’s as a reference, like Doctor McKayla’s Code Review Checklist.
For the reviewer
Before we start any review, let’s understand the context around the code we’re about to review.
A good idea is to start by looking at the unit tests and to look at the “diff” of the code twice. One pass for the general picture and another for the details.
If we’re a code reviewer, let’s:
Be humble. We all have something to learn.
Take out the person when giving feedback. We are reviewing the code, not the author.
Be clear. We may review code from juniors, mid-level, or seniors, even from non-native speakers of our language. Everybody has different levels of experience. Obvious things for us aren’t obvious for somebody else.
Give actionable comments. Let’s not use tricky questions to make the author change something. Let’s give clear and actionable comments instead. For example, what do you think about this method name? vs I think X would be a better name for this method. Could we change it?
Always give at least one positive remark. For example: It looks good to me (LGTM), good choice of names.
Use questions instead of commands or orders. For example, Could this be changed? vs Change it.
Use “we” instead of “you”. We’re part of the development process too. We’re also responsible for the code we’re reviewing.
Instead of showing an ugly code, teach. Let’s link to resources to explain even more. For example, blog posts and StackOverflow questions.
Review only the code that has changed. Let’s stop saying things like Now you’re here, change that method over there too.
Find bugs instead of style issues. Let’s rely on linters, compiler warnings, and IDE extensions to find styling issues.
Recently, I found out about Conventional Comments. With this convention, we start our comments with labels to show the type of comments (suggestion, nitpick, question) and their nature (blocking, non-blocking, if-minor).
Before asking someone to review our code, let’s review our own code. For example, let’s check if we wrote enough tests and followed the naming conventions and styling guidelines.
It’s a good idea to wait for the CI/CD to build and run all tests before asking someone to review our changes or assign reviewers in a web tool. This will save time for our reviewers and us.
If we’re a reviewee, let’s:
Stop taking it personally. It’s the code under review, not us.
Give context. Let’s give enough context to our reviews. We can write an explanatory title and a description of what our code does and what decisions we made.
Keep your work short and focused. Let’s not make reviewers go through thousands of lines of code in a single review session. For example, we can separate changes in business logic from formatting/styling.
Keep all the discussion online. If we contact reviewers by chat or email, let’s bring relevant comments to the reviewing tool for others to see them.
For team management
If we’re on the management side, let’s:
Make code reviews have the highest priority. We don’t want to wait days until we get our code reviewed.
Remember code reviews are as important as writing code. They take time too. Let’s add them to our estimates.
Have as a reviewer someone familiar with the code being reviewed. Otherwise, we will get styling and formatting comments. People judge what they know. That’s a cognitive bias.
Have at least two reviewers. For example, as reviewees, let’s pick the first reviewer. Then that reviewer will choose another one, until the two of them agree.
Voilà! These are the tips I’ve learned while reviewing other people’s code and getting mine reviewed too. Code reviews can be frustrating. Especially when they end up being a discussion about styling issues and naming variables. I know, I’ve been there.
One of my lessons as a code reviewer is to use short and focused review sessions. I prefer to have short sessions in a day than a single long session that drains all my energy. Also, I include a suggestion or example of the change to be made in every comment. I want to leave clear and actionable comments on every code review.