Welcome to the first Monday Links of 2023. These are five reads I found interesting last month. This time, software methodologies turned out to be a recurring theme.
Why I’m Glad I Lack Passion to BE a Programmer
After a couple of years of working as a software engineer, I started to embrace simplicity. Software exists to satisfy a business need. Perfect software only exists in books. That’s why I started to see the big goal and only use libraries/tools/concepts when there’s a compelling reason to do so. Not all applications need to use Domain-Driven Design with Event Sourcing.
I liked this one: My ideal for software development is to find the simplest solution to the practical problem. I’m not a passionate programmer anymore either.
I can relate to this story. It happened to a friend of a friend of mine. One day, his CEO came in saying he had just closed a really big deal, but he didn’t know yet what they were going to do. Arrrggg!
Apart from a relatable story, this post contains 11 rules about estimations. For example: all estimations are simply guesses, and by the time developers have enough information to give more accurate estimations, close to the end of a project, it’s already too late.
This article contains some basic principles for designing good UIs. I don’t like those “are you sure you want to do this?” messages. And, based on this article, they’re not a good idea.
Why Do Many Developers Consider Scrum to Be an Evil Scam?
Like any widespread idea, Scrum gets perverted over time. I’ve been on teams where Scrum was only adopted to micromanage developers using daily meetings. And the next thing you know, daily meetings become a narrative of how busy developers are, to avoid getting fired.
Why don’t software development methodologies work?
This is an old article I found on the Hacker News front page. It showed, years ago, what everybody is complaining about these days. You only need to visit Hacker News or r/programming once a month to see it.
I’ve been in everything from small projects with no methodology at all to larger projects with Scrum as a religion. I’ve been there.
I like this paragraph from the article: “My own experience, validated by Cockburn’s thesis and Frederick Brooks in No Silver Bullet, is that software development projects succeed when the key people on the team share a common vision, what Brooks calls ‘conceptual integrity.’“
Apart from the main article, it has really good comments. This one resonates with me, about the only methodology that works:
“A single technical lead with full authority to make decisions, with a next tier assistant, associated technical staff, and a non-technical support person. the achievement of the team is then determined by the leadership of the team. the size of the team and project complexity is then limited by the leader and her ability to understand the problem and assign tasks.”
For me, the most successful projects are the ones with a small team whose members know each other, and where everyone knows the main goal and what to do.
In 2022, I wrote two major series of posts: one about SQL Server performance tuning and another one about LINQ.
Once, one of my clients asked me to “tune” some stored procedures, and that inspired me to take a closer look at the performance tuning world. Last year, I took Brent Ozar’s Mastering courses and decided to share some of the things I learned.
On the other hand, I updated my Quick Guide to LINQ to use new C# features and wrote a bunch of new posts about LINQ. In fact, I released a text-based course about LINQ on Educative: Getting Started with LINQ. That’s my favorite C# feature, ever.
I kept writing my Monday Links posts. And I decided to have my own Advent of Code. I prefer to call it: Advent of Posts. I wrote 22 posts in December, one post per day until Christmas Eve. I missed a couple of days, but I consider it a “mission accomplished.”
These are the five posts I wrote in 2022 that you read the most. In case you missed any of them, here they are:
TIL: How to optimize Group by queries in SQL Server. This post has one of the lessons I learned after following Brent Ozar’s Mastering courses. Well, this one is about using CTEs to speed up queries with GROUP BY.
SQL Server Index recommendations: Just listen to them. Again, these are some of the lessons I learned in Brent Ozar’s Mastering Index Tuning course. I shared why we shouldn’t blindly create the index recommendations from query plans. They’re only clues. We can do better than that. I have to confess I added every single index recommendation I got before learning these lessons.
Voilà! Those were your 5 favorite posts. I hope you enjoy them as much as I enjoyed writing them. You probably found shorter versions of these posts on my dev.to account. Or the versions some random guy copy-pasted into his own website, pretending I was an author on his programming site. Things you find out when you google your own user handle. Arrggg!
Recently, I’ve been reviewing pull requests as one of my main activities. This time, let’s refactor two tests I found on one code review session. The two tests check if an email doesn’t have duplicated addresses before sending it. But, they have a common mistake: testing private methods directly. Let’s refactor these tests to use the public facade of methods.
Always write unit tests using the public methods of a class or a group of classes. Don’t make private methods public and static to test them directly. Test the observable behavior of classes instead.
Here are the tests to refactor
These tests belong to an email component in a Property Management Solution. This component stores all emails before sending them.
These are two tests to check we don’t try to send an email to the same addresses. Let’s pay attention to the class name and method under test.
I slightly changed some names. But those are the real tests I had to refactor.
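In essence, they looked like this. This is a sketch with placeholder names and values, not the literal tests:

```csharp
[Fact]
public void CreateRecipients_NoDuplicates_ReturnsAllRecipients()
{
    var tos = new List<string> { "tomail@mail.com" };
    var ccs = new List<string> { "ccmail@mail.com" };

    // Calling the method under test directly
    var recipients = SendEmailCommandHandler.CreateRecipients(tos, ccs);

    Assert.Equal(2, recipients.Count());
}

[Fact]
public void CreateRecipients_DuplicatedEmails_DoesNotReturnDuplicates()
{
    var tos = new List<string> { "email@mail.com", "tomail@mail.com" };
    var ccs = new List<string> { "email@mail.com", "ccmail@mail.com" };

    // Calling the method under test directly
    var recipients = SendEmailCommandHandler.CreateRecipients(tos, ccs);

    Assert.Equal(3, recipients.Count());
}
```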
What’s wrong with those tests? Did you notice it? Also, can you point out where the duplicates are in the second test?
To have more context, here’s the SendEmailCommandHandler class that contains the CreateRecipients() method,
```csharp
using MediatR;
using Microsoft.Extensions.Logging;
using MyCoolProject.Commands;
using MyCoolProject.Shared;

namespace MyCoolProject;

public class SendEmailCommandHandler : IRequestHandler<SendEmailCommand, TrackingId>
{
    private readonly IEmailRepository _emailRepository;
    private readonly ILogger<SendEmailCommandHandler> _logger;

    public SendEmailCommandHandler(
        IEmailRepository emailRepository,
        ILogger<SendEmailCommandHandler> logger)
    {
        _emailRepository = emailRepository;
        _logger = logger;
    }

    public async Task<TrackingId> Handle(SendEmailCommand command, CancellationToken cancellationToken)
    {
        // Imagine some validations and initializations here...

        var recipients = CreateRecipients(command.Tos, command.Ccs);
        //               ^^^^^
        var email = Email.Create(
            command.Subject,
            command.Body,
            recipients);

        await _emailRepository.CreateAsync(email);

        return email.TrackingId;
    }

    public static IEnumerable<Recipient> CreateRecipients(IEnumerable<string> tos, IEnumerable<string> ccs)
        // ^^^^^
        => tos.Select(Recipient.To)
              .UnionBy(
                  ccs.Select(Recipient.Cc),
                  recipient => recipient.EmailAddress);
}

public record Recipient(EmailAddress EmailAddress, RecipientType RecipientType)
{
    public static Recipient To(string emailAddress)
        => new Recipient(emailAddress, RecipientType.To);

    public static Recipient Cc(string emailAddress)
        => new Recipient(emailAddress, RecipientType.Cc);
}

public enum RecipientType
{
    To,
    Cc
}
```
The SendEmailCommandHandler processes all requests to send an email. It grabs the input parameters, creates an Email class, and stores it using a repository. It uses the free MediatR library to roll commands and command handlers.
Also, it parses the raw email addresses into a list of Recipient with the CreateRecipients() method. That’s the method under test in our two tests. Here the Recipient and EmailAddress work like Value Objects.
Now can you notice what’s wrong with our tests?
What’s wrong?
Our two unit tests test a private method directly. That’s not the appropriate way of writing unit tests. We shouldn’t test internal state and private methods. We should test them through the public facade of our logic under test.
In fact, someone made the CreateRecipients() method public to test it,
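```csharp
public static IEnumerable<Recipient> CreateRecipients(IEnumerable<string> tos, IEnumerable<string> ccs)
// ^^^^^^
// Public and static, only to test it
    => tos.Select(Recipient.To)
          .UnionBy(
              ccs.Select(Recipient.Cc),
              recipient => recipient.EmailAddress);
```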
For our case, we should write our tests using the SendEmailCommand class and the Handle() method.
Don’t expose private methods
Let’s make CreateRecipients() private again. And let’s write our tests using the SendEmailCommand and SendEmailCommandHandler classes.
This is the test to validate that we remove duplicates,
```csharp
[Fact]
public async Task Handle_DuplicatedEmailInTosAndCc_CallsRepositoryWithoutDuplicates()
{
    var duplicated = "duplicated@email.com";
    //  ^^^^^
    var tos = new List<string> { duplicated, "tomail@mail.com" };
    var ccs = new List<string> { duplicated, "ccmail@mail.com" };

    var fakeRepository = new Mock<IEmailRepository>();
    var handler = new SendEmailCommandHandler(
        fakeRepository.Object,
        Mock.Of<ILogger<SendEmailCommandHandler>>());

    // Let's write a factory method that receives these two email lists
    var command = BuildCommand(tos: tos, ccs: ccs);
    //            ^^^^^
    await handler.Handle(command, CancellationToken.None);

    // Let's write some assert/verifications in terms of the Email object
    fakeRepository.Verify(t => t.CreateAsync(
        It.Is<Email>(/* Assert something here using Recipients */),
        It.IsAny<CancellationToken>()));

    // Or, even better, let's write a custom Verify()
    //
    // fakeRepository.WasCalledWithoutDuplicates();
}

private static SendEmailCommand BuildCommand(
    IEnumerable<string> tos,
    IEnumerable<string> ccs)
    => new SendEmailCommand(
        "Any Subject",
        "Any Body",
        tos,
        ccs);
```
Notice we wrote a BuildCommand() method to create a SendEmailCommand only with the email addresses. That’s what we care about in this test. This way we reduce the noise in our tests. And, to make our test values obvious, we declared a duplicated variable and used it in both destination email addresses.
To write the Assert part of this test, we can use the Verify() method from the fake repository to check we store the duplicated email only once. Or we can use Moq’s Callback() method to capture the Email being saved and write some assertions on it. Even better, we can create a custom assertion for that. Maybe, a WasCalledWithoutDuplicates() method.
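Here’s a minimal sketch of the Callback() alternative. It assumes CreateAsync() also receives a cancellation token, as in the Verify() call above, and that Email exposes its Recipients:

```csharp
[Fact]
public async Task Handle_DuplicatedEmailInTosAndCc_SavesEmailWithoutDuplicates()
{
    var duplicated = "duplicated@email.com";
    var tos = new List<string> { duplicated, "tomail@mail.com" };
    var ccs = new List<string> { duplicated, "ccmail@mail.com" };

    var fakeRepository = new Mock<IEmailRepository>();
    // Capture the Email being saved to assert on it later
    Email? savedEmail = null;
    fakeRepository
        .Setup(t => t.CreateAsync(It.IsAny<Email>(), It.IsAny<CancellationToken>()))
        .Callback<Email, CancellationToken>((email, _) => savedEmail = email);
    var handler = new SendEmailCommandHandler(
        fakeRepository.Object,
        Mock.Of<ILogger<SendEmailCommandHandler>>());
    var command = BuildCommand(tos: tos, ccs: ccs);

    await handler.Handle(command, CancellationToken.None);

    // Only three unique addresses should survive the duplicate removal
    Assert.NotNull(savedEmail);
    Assert.Equal(3, savedEmail!.Recipients.Count());
}
```

And, if we repeat that verification in many tests, a WasCalledWithoutDuplicates() could be an extension method like this one:

```csharp
public static class FakeEmailRepositoryExtensions
{
    // A possible custom assertion: the saved Email has no repeated addresses
    public static void WasCalledWithoutDuplicates(this Mock<IEmailRepository> fakeRepository)
        => fakeRepository.Verify(t => t.CreateAsync(
            It.Is<Email>(email => email.Recipients.Count()
                == email.Recipients.DistinctBy(r => r.EmailAddress).Count()),
            It.IsAny<CancellationToken>()));
}
```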
That’s one of the two original tests. The other one is left as an exercise to the reader.
Voilà! That was today’s refactoring session. To take home, we shouldn’t test private methods and always write tests using the public methods of the code under test. We can remember this principle with the mnemonic: “Don’t let others touch our private parts.” That’s how I remember it.
Today I reviewed a pull request and had a conversation about when to use Value Objects instead of primitive values. This is the code that started the conversation and my rationale to promote a primitive value to a Value Object.
Prefer Value Objects to encapsulate validations or custom methods on a primitive value. Otherwise, if a primitive value doesn’t have a meaningful “business” sense and is only passed around, consider using the primitive value with a good name for simplicity.
In case you’re not familiar with Domain-Driven Design and its artifacts: a Value Object represents a concept that doesn’t have an “identifier” in a business domain. Value Objects are immutable and compared by value.
Value Objects represent elements of “broader” concepts. For example, in a Reservation Management System, we can use a Value Object to represent the payment method of a Reservation.
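For example, a minimal sketch of that Reservation example, using a C# record for the Value Object (all names here are made up):

```csharp
// A Reservation is an Entity (it has an Id); its payment method is a Value Object
public class Reservation
{
    public Guid Id { get; init; }
    public PaymentMethod PaymentMethod { get; init; } = PaymentMethod.Cash;
}

// Records are immutable and compared by value out of the box
public record PaymentMethod(string Name)
{
    public static readonly PaymentMethod Cash = new("Cash");
    public static readonly PaymentMethod CreditCard = new("CreditCard");
}
```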
TimeStamp vs DateTime
This is the piece of code that triggered my comment during the code review.
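It looked something like this. This is a sketch, with names slightly changed like before:

```csharp
public class DeliveryNotification : ValueObject
{
    public Recipient Recipient { get; init; }
    public DeliveryStatus Status { get; init; }
    public TimeStamp TimeStamp { get; init; }
    //     ^^^^^ Not a DateTime

    protected override IEnumerable<object?> GetEqualityComponents()
    {
        yield return Recipient;
        yield return Status;
        yield return TimeStamp;
    }
}

// Only a wrapper around DateTime
public class TimeStamp : ValueObject
{
    public DateTime Value { get; }

    private TimeStamp(DateTime value)
    {
        Value = value;
    }

    public static TimeStamp Create()
        => new TimeStamp(SystemClock.Now);

    protected override IEnumerable<object?> GetEqualityComponents()
    {
        yield return Value;
    }
}
```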
We wanted to record when an email is sent, opened, and clicked. We relied on a third-party Email Provider to notify our system about these email events. The DeliveryNotification has an email address, status, and timestamp.
Notice the TimeStamp class. It’s only a wrapper around the DateTime class. Mmmm…
Promote Primitive Values to Value Objects
I’d dare to say that using a TimeStamp instead of a simple DateTime in the DeliveryNotification class was overkill. I guess when we have a hammer, everything looks like a finger.
This is my rationale to choose between value objects and primitive values:
If we need to enforce a domain rule or perform a business operation on a primitive value, let’s use a Value Object.
If we only pass a primitive value around and it represents a concept in the language domain, let’s wrap it around a record to give it a meaningful name.
Otherwise, let’s stick to the plain primitive values.
In our TimeStamp class, apart from Create(), we didn’t have any other methods. We might want to validate that the inner date is in this century. But that won’t be a problem. I don’t think that code will live that long.
Also, there are cleaner ways of writing tests that use DateTime than relying on a static SystemClock. If we keep it, at the very least we should be able to overwrite the SystemClock inner date from our tests.
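For example, here’s a minimal sketch of one cleaner alternative: injecting a clock abstraction instead of reading a static one. IClock and FixedClock are hypothetical names:

```csharp
public interface IClock
{
    DateTime Now { get; }
}

// The production clock
public class SystemClock : IClock
{
    public DateTime Now => DateTime.UtcNow;
}

// In tests, inject a fixed clock instead of overwriting a static property
public class FixedClock : IClock
{
    private readonly DateTime _now;

    public FixedClock(DateTime now) => _now = now;

    public DateTime Now => _now;
}
```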
I’d take a simpler route and use a plain DateTime value. I don’t think there’s a business case for TimeStamp here.
```csharp
public class DeliveryNotification : ValueObject
{
    public Recipient Recipient { get; init; }
    public DeliveryStatus Status { get; init; }
    public DateTime TimeStamp { get; init; }
    //     ^^^^^^

    protected override IEnumerable<object?> GetEqualityComponents()
    {
        yield return Recipient;
        yield return Status;
        yield return TimeStamp;
    }
}

// Or, alternatively, to use the same domain language:
//
// public record TimeStamp(DateTime Value);

public enum DeliveryStatus
{
    Created,
    Sent,
    Opened,
    Failed
}
```
If, in the “email sending” domain, business analysts or stakeholders use “timestamp,” for the sake of a ubiquitous language, we can add a simple record TimeStamp to wrap the date. Like record TimeStamp(DateTime Value).
Voilà! That’s a practical way to decide between Value Objects and primitive values. For me, the key is asking if there’s a meaningful domain concept behind the primitive value. Otherwise, we’d end up with too many Value Objects, or obsessed with primitive values.
Recently, I stumbled upon the article Get Rid of Your Old Database Migrations. The author shows how Clojure, Ruby, and Django use the “Dump and Load” approach to compact or squash old migrations. This is how I implemented the “Dump and Load” approach in one of my client’s projects.
1. Export database objects and reference data with schemazen
In one of my client’s projects, we had so many migration files that we started to group them inside folders named after the year and month. Squashing migrations sounded like a good idea there.
For example, for a three-month project, we wrote 27 migration files. This is the Migrator project,
For those projects, we use Simple.Migrations to apply migration files and a bunch of custom C# extension methods to write the Up() and Down() steps. Since we don’t use an all-batteries-included migration framework, I needed to generate the dump of all database objects.
I found schemazen on GitHub, a CLI tool to “script and create SQL Server objects quickly.”
This is how to script all objects and export data from reference tables with schemazen,
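```bash
# A sketch of the schemazen call; server, database, and output folder
# are placeholders for the real ones
dotnet schemazen script \
    --server localhost \
    --database MyDatabase \
    --scriptDir ./dump \
    --dataTablesPattern ".*(Status|Type)$"
```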
Notice I used the --dataTablesPattern option with a regular expression to only export the data from the reference tables. In this project, we named our reference tables with the suffixes “Status” or “Type.” For example, PostStatus or ReceiptType.
I could simply export the objects from SQL Server directly. But those script files contain a lot of noise in the form of default options. Schemazen does it cleanly.
Schemazen generates one folder per object type and one file per object. And it exports data in a TSV format. I didn’t find an option in its source code to export INSERT statements, though.
After this first step, I had the database objects. But I still needed to write the actual migration file.
2. Process schemazen exported files
To write the squash migration file, I wanted to have all scripts in a single file and turn the TSV files with the exported data into INSERT statements.
I could write a C# script file, but I wanted to stretch my Bash/Unix muscles. After some Googling, I came up with this,
```bash
# It grabs the output from schemazen and compacts all dump files into a single one
FILE=dump.sql

# Merge all files into a single one
for folder in 'tables/' 'defaults/' 'foreign_keys/'
do
    find $folder -type f \( -name '*.sql' ! -name 'VersionInfo.sql' \) | while read f; do
        cat $f >> $FILE;
    done
done

# Remove GO keywords and blank lines
sed -i '/^GO/d' $FILE
sed -i '/^$/d' $FILE

# Turn tsv files into INSERT statements
for file in data/*tsv;
do
    echo "INSERT INTO $file(Id, Name) VALUES" | sed -e "s/data\///" -e "s/\.tsv//" >> $FILE
    cat $file | awk '{print "("$1",\047"$2"\047),"}' >> $FILE
    echo >> $FILE
    sed -i '/^$/d' $FILE
    sed -i '$ s/,$//g' $FILE
done
```
The first part merges all separate object files into a single one. I filtered out the VersionInfo table. That’s Simple.Migrations’ table to keep track of already-applied migrations.
The second part removes the GO keywords and blank lines.
And the last part turns the TSV files into INSERT statements. It grabs table names from the file name and removes the base path and the TSV extension. It assumes reference tables only have an id and a name.
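For example, a hypothetical data/PostStatus.tsv with two rows would end up in dump.sql as:

```sql
INSERT INTO PostStatus(Id, Name) VALUES
(1,'Draft'),
(2,'Published')
```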
With this compact script file, I removed the old migration files except the last one. For the project in the screenshot above, I kept Migration0027. Then, I used all the SQL statements from the dump file in the Up() step of that migration. I had a squash migration after that.
Voilà! That’s how I squashed old migrations in one of my client’s projects using schemazen and a Bash script. The idea is to squash our migrations after every stable release of our projects. In the reference article, one commenter said he follows this approach once or twice a year. Another one, after every breaking change.