These days I had to review some code that had one method to merge dictionaries. This is one of the suggestions I gave during that review to write good unit tests.
To write good unit tests, write the Arrange part of tests using the simplest test values that exercise the scenario under test. Avoid building large object graphs and using magic numbers in the Arrange part of tests.
Here are the tests I reviewed
These are two of the unit tests I reviewed. They test the Merge() method.
Yes, those are the real tests I had to review. I slightly changed the namespaces and the test names.
What’s wrong?
Let’s take a closer look at the first test. Do we need six dictionaries to test the Merge() method? No! And do we need 19 items? No! We can still cover the same scenario with only two single-item dictionaries without duplicate keys.
And let’s write separate tests to deal with edge cases. Let’s write one test to work with null and another one with an empty dictionary. Again two dictionaries will be enough for each test.
Having too many dictionaries with too many items made us write that funny foreach with a funny multiplication inside. That’s why some of the values are multiplied by 10, and others aren’t. We don’t need that with a simpler scenario.
Unit tests should only have assignments without branching or looping logic.
Looking at the second test, we noticed it followed the same pattern as the first one. Too many items and a weird foreach with a multiplication inside.
Write tests using simple test values
Let’s write our tests using simple test values to prepare our scenario under test.
[TestMethod]
public void Merge_NoDuplicates_DoesNotMergeNullAndEmptyOnes()
{
    var one = new Dictionary<int, int> { { 1, 10 } };
    var two = new Dictionary<int, int> { { 2, 20 } };

    var merged = one.Merge(two);
    //               ^^^^^

    Assert.AreEqual(2, merged.Keys.Count);
    Assert.IsTrue(merged.ContainsKey(1));
    Assert.IsTrue(merged.ContainsKey(2));
}

// One test to Merge a dictionary with an empty one
// Another test to Merge a dictionary with a null one

[TestMethod]
public void Merge_DuplicateKeys_ReturnNoDuplicates()
{
    var duplicateKey = 1;
    //  ^^^^^
    var one = new Dictionary<int, int>
    {
        { duplicateKey, 10 },
        { 2, 20 }
        // ^^^^^
    };
    var two = new Dictionary<int, int>
    {
        { duplicateKey, 10 },
        { 3, 30 }
        // ^^^^^
    };

    var merged = one.Merge(two);
    //               ^^^^^

    Assert.AreEqual(3, merged.Keys.Count);
    Assert.IsTrue(merged.ContainsKey(duplicateKey));
    Assert.IsTrue(merged.ContainsKey(2));
    Assert.IsTrue(merged.ContainsKey(3));
}
Notice this time, we boiled down the Arrange part of the first test to only two dictionaries with one item each, without duplicates.
And for the second one, the one for duplicates, we declared a duplicateKey variable and used it as the key in both dictionaries to make the test scenario obvious. This way, after reading the test name, we don't have to decode where the duplicate keys are.
Since we wrote simple tests, we could remove the foreach in the Assert parts and the funny multiplications.
The tests for the null and empty cases are exercises left to the reader. They're not difficult to write.
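If you want a head start, here's a sketch of what those two tests could look like. It assumes Merge() simply skips null and empty dictionaries; adjust the expected counts to whatever your Merge() actually does.

[TestMethod]
public void Merge_EmptyDictionary_ReturnsOnlyOriginalKeys()
{
    var one = new Dictionary<int, int> { { 1, 10 } };
    var empty = new Dictionary<int, int>();

    var merged = one.Merge(empty);

    Assert.AreEqual(1, merged.Keys.Count);
    Assert.IsTrue(merged.ContainsKey(1));
}

[TestMethod]
public void Merge_NullDictionary_ReturnsOnlyOriginalKeys()
{
    var one = new Dictionary<int, int> { { 1, 10 } };
    Dictionary<int, int> nullOne = null;

    // Assumption: Merge() treats a null dictionary like an empty one
    // instead of throwing
    var merged = one.Merge(nullOne);

    Assert.AreEqual(1, merged.Keys.Count);
    Assert.IsTrue(merged.ContainsKey(1));
}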
Voilà! That’s another tip to write good unit tests. Let’s strive to write tests that are easier to follow, using simple test values. Here we used dictionaries, but we can follow this tip when writing integration tests for the database. Often, to prepare our test data, we insert multiple records when only one or two are enough to prove our point.
These days I finished another internal project while working with one of my clients. I worked to connect a Property Management System with a third-party Point of Sales. I had to work with Hangfire and OrmLite. I used Hangfire to replace ASP.NET BackgroundServices. Today I want to share some of the technical things I learned along the way.
1. Hangfire lazy-loads configurations
Hangfire lazy loads configurations. We have to retrieve services from the ASP.NET Core dependencies container instead of using static alternatives.
I faced this issue after trying to run Hangfire in non-development environments without registering the Hangfire dashboard. This was the exception message I got: “JobStorage.Current property value has not been initialized.” When registering the Dashboard, Hangfire loads some of those configurations. That’s why “it worked on my machine.”
These two issues in the Hangfire GitHub repo helped me figure this out: issue #1991 and issue #1967.
This was the fix I found in those two issues:
using Hangfire;
using MyCoolProjectWithHangfire.Jobs;
using Microsoft.Extensions.Options;

namespace MyCoolProjectWithHangfire;

public static class WebApplicationExtensions
{
    public static void ConfigureRecurringJobs(this WebApplication app)
    {
        // Before, using the static version:
        //
        // RecurringJob.AddOrUpdate<MyCoolJob>(
        //     MyCoolJob.JobId,
        //     x => x.DoSomethingAsync());
        // RecurringJob.Trigger(MyCoolJob.JobId);

        // After:
        //
        var recurringJobManager = app.Services.GetRequiredService<IRecurringJobManager>();
        //                                     ^^^^^
        recurringJobManager.AddOrUpdate<MyCoolJob>(
            MyCoolJob.JobId,
            x => x.DoSomethingAsync());
        recurringJobManager.Trigger(MyCoolJob.JobId);
    }
}
2. Hangfire Dashboard in non-Local environments
By default, Hangfire only shows the Dashboard for local requests. A coworker pointed that out. It’s in plain sight in the Hangfire Dashboard documentation. Arrrggg!
To make it work in other non-local environments, we need an authorization filter. Like this,
public class AllowAnyoneAuthorizationFilter : IDashboardAuthorizationFilter
{
    public bool Authorize(DashboardContext context)
    {
        // Everyone is more than welcome...
        return true;
    }
}
And we pass that filter in when registering the Dashboard. Like this,
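Here's a minimal sketch of that registration. The "/hangfire" route is just the usual default; adjust it to your own setup.

app.UseHangfireDashboard("/hangfire", new DashboardOptions
{
    // Plug in our filter so the Dashboard also works outside localhost
    Authorization = new[] { new AllowAnyoneAuthorizationFilter() }
});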
3. SucceededJobs() returns jobs in reverse order with In-Memory storage
For the In-Memory Hangfire implementation, the SucceededJobs() method from the monitoring API returns jobs from most recent to oldest. There’s no need for pagination. Look at the Reverse() method in the SucceededJobs() source code.
I had to find out why an ASP.NET health check was only working the first time. It turned out that the code was paginating the successful jobs, always looking for the oldest successful jobs. Like this,
public class HangfireSucceededJobsHealthCheck : IHealthCheck
{
    private const int CheckLastJobsCount = 10;

    private readonly TimeSpan _period;

    public HangfireSucceededJobsHealthCheck(TimeSpan period)
    {
        _period = period;
    }

    public Task<HealthCheckResult> CheckHealthAsync(
        HealthCheckContext context,
        CancellationToken cancellationToken = default)
    {
        var monitoringApi = JobStorage.Current.GetMonitoringApi();

        // Before:
        // It used pagination to bring the oldest 10 jobs
        //
        // var succeededCount = (int)monitoringApi.SucceededListCount();
        // var succeededJobs = monitoringApi.SucceededJobs(succeededCount - CheckLastJobsCount, CheckLastJobsCount);
        //                                   ^^^^^

        // After:
        // SucceededJobs returns jobs from newest to oldest
        var succeededJobs = monitoringApi.SucceededJobs(0, CheckLastJobsCount);
        //                                ^^^^^
        var successJobsCount = succeededJobs.Count(x =>
            x.Value.SucceededAt.HasValue
            && x.Value.SucceededAt > DateTime.UtcNow - _period);

        var result = successJobsCount > 0
            ? HealthCheckResult.Healthy("Yay! We have succeeded jobs.")
            : new HealthCheckResult(context.Registration.FailureStatus, "Nein! We don't have succeeded jobs.");

        return Task.FromResult(result);
    }
}
This is so confusing that there’s an issue on the Hangfire repo asking for clarification. Not all storage implementations return successful jobs in reverse order. Arrrggg!
4. Prevent Concurrent execution of Hangfire jobs
Hangfire has an attribute to prevent the concurrent execution of the same job: DisableConcurrentExecutionAttribute. Source.
We can even change the resource being locked to avoid executing jobs with the same parameters at the same time. For example, we can run only one job per client simultaneously, like this,
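This is a sketch, not the exact attribute usage from the project: it assumes a Hangfire version where DisableConcurrentExecution accepts a custom resource pattern and replaces "{0}" with the first job argument (the client id here). The job and resource names are made up.

public class SyncSalesJob
{
    // Only one SyncSalesJob per clientId runs at a time
    [DisableConcurrentExecution("SyncSalesJob:{0}", 60)]
    public async Task DoSomethingAsync(int clientId)
    {
        // Talk to the third-party Point of Sale for this client...
        await Task.CompletedTask;
    }
}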
5. OrmLite IgnoreOnUpdate, SqlScalar, and CreateIndex
OrmLite has an [IgnoreOnUpdate] attribute. I found this attribute while reading the OrmLite source code. When using SaveAsync(), OrmLite omits properties marked with this attribute when generating the SQL statement. Source.
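For example, something like this (a sketch with a made-up Account class),

public class Account
{
    [AutoIncrement]
    public int Id { get; set; }

    public string Name { get; set; }

    // SaveAsync() leaves this column out of the UPDATE statements it generates
    [IgnoreOnUpdate]
    public DateTime CreatedDate { get; set; }
}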
OrmLite’s QueryFirst() method requires an explicit transaction as a parameter, unlike SqlScalar(), which uses the transaction already attached to the input database connection. Source. I learned this because I had a DoesIndexExist() method inside a database migration and it failed with the message “ExecuteReader requires the command to have a transaction…“ This is what I had to change,
private static bool DoesIndexExist<T>(IDbConnection connection, string tableName, string indexName)
{
    var doesIndexExistSql = @$"
        SELECT CASE WHEN EXISTS (
            SELECT * FROM sys.indexes
            WHERE name = '{indexName}'
            AND object_id = OBJECT_ID('{tableName}')
        ) THEN 1 ELSE 0 END";

    // Before:
    // return connection.QueryFirst<bool>(doesIndexExistSql);
    //                   ^^^^^
    // Exception: ExecuteReader requires the command to have a transaction...

    // After:
    var result = connection.SqlScalar<int>(doesIndexExistSql);
    //                      ^^^^^
    return result > 0;
}
Again, by looking at the OrmLite source code, I found that the CreateIndex() method, by default, creates indexes with names like idx_TableName_FieldName. So we can omit the index name parameter when working with this method. Source
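For example (a sketch, reusing the made-up Account class from above),

// db is an open IDbConnection from an OrmLiteConnectionFactory.
// No index name passed: OrmLite generates one following the
// idx_TableName_FieldName convention mentioned above
db.CreateIndex<Account>(x => x.Name);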
Voilà! That’s what I learned from this project. It gave me the idea to stop and reflect on what I learn from every project I work on. I really enjoyed figuring out the issue with the health check. It made me read the source code of the In-memory storage for Hangfire.
It has been more than 10 years since I started working as a Software Engineer.
I began designing reports by hand using iTextSharp. And by hand, I mean drawing lines and pixels on a blank canvas. Arrggg!
I used Visual Studio 2010 and learned about LINQ for the first time those days.
Then I moved to some sort of full-stack role writing DotNetNuke modules with Bootstrap and Knockout.js.
In more recent years, I switched to work as a backend engineer. I got tired of getting feedback on colors, alignment, and other styling issues. They’re important. But that’s not the work I enjoy doing.
If I could start all over again, these are four lessons I wish I had known before becoming a Software Engineer.
1. Find a Way To Stand Out: Make Yourself Different
Learning a second language is a perfect way to stand out. I’m a bit biased since language learning is one of my hobbies.
For most of us, standing out means learning English as a second language.
A second language opens doors to new markets, professional relationships, and job opportunities. And, you can brag about a second and third language on your CV.
After an interview, you can be remembered for the languages you speak. “Ah! The guy who speaks languages.”
2. Never Stop Learning
Let’s be honest. University will teach you lots of subjects. You probably won’t need most of them, and the ones you do need, you’ll have to study on your own.
You will have to study books, watch online conferences, and read blog posts. Never stop learning! That will keep you in the game in the long run.
But, it can be daunting if you try to learn everything about everything. “Learn something about everything, and everything about something,” says popular wisdom.
Libraries and frameworks come and go. Stick to the principles.
3. Have an Escape Plan
There is no safe place to work. Period! Full stop!
Companies lay off employees without notice or apparent reason. You can get seriously injured or sick. You won’t be able to work forever.
If you’re reading this from the future, ask your parents or grandparents about the year 2020. Lots of people lost their jobs or got their salaries cut in half in a few days. And there was nothing they could do about it.
Have an escape plan. A side income, your own business, a hobby you can turn into a profitable idea. You name it!
Apart from an escape plan, have an emergency fund. The book “The Simple Path to Wealth” calls emergency funds “F-you money.” Keep enough savings in your account so you don’t have to worry about when to leave a job, or about the moment when that choice isn’t yours.
4. Have an Active Online Presence
If I could do something different, I would have an active online presence way earlier.
Be active online. Have a blog, a LinkedIn profile, or a professional profile on any other social network. Use social networks to your advantage.
In the beginning, you might think you don’t know enough to start writing or start a blog. But you can share what you learn, the resources you use to learn, and your sources of inspiration. You can learn in public and show your work.
Voilà! These are four lessons I wish I had known before starting a career as a Software Engineer. Remember, every journey is different and we’re all figuring out life. For sure, my circumstances have been different from yours, and that’s why I chose these four lessons.
In any case, “Your career is your responsibility, not your employer’s.” I learned that from The Clean Coder.
These days I had to work with OrmLite. I had to follow the convention of adding audit fields to all of the database tables. Instead of adding them manually, I wanted to populate them when using OrmLite’s SaveAsync() method. This is how to automatically insert and update audit fields with OrmLite.
1. Create a 1-to-1 mapping between two tables
Let’s store our favorite movies. Let’s create two classes, Movie and Director, to represent a one-to-one relationship between movies and their directors.
public interface IAudit
{
    DateTime CreatedDate { get; set; }

    DateTime UpdatedDate { get; set; }
}

public class Movie : IAudit
{
    [AutoIncrement]
    public int Id { get; set; }

    [StringLength(256)]
    public string Name { get; set; }

    [Reference]
    // ^^^^^^^
    public Director Director { get; set; }

    [Required]
    public DateTime CreatedDate { get; set; }

    [Required]
    public DateTime UpdatedDate { get; set; }
}

public class Director : IAudit
{
    [AutoIncrement]
    public int Id { get; set; }

    [References(typeof(Movie))]
    // ^^^^^^
    public int MovieId { get; set; }
    // ^^^^^
    // OrmLite expects a foreign key back to the Movie table

    [StringLength(256)]
    public string FullName { get; set; }

    [Required]
    public DateTime CreatedDate { get; set; }

    [Required]
    public DateTime UpdatedDate { get; set; }
}
Notice we used OrmLite’s [Reference] attribute to tie every director to their movie. With these two classes, OrmLite expects two tables and a foreign key from Director pointing back to Movie. Also, we used IAudit to add the CreatedDate and UpdatedDate properties. We will use this interface in the next step.
2. Use OrmLite Insert and Update Filters
To automatically set CreatedDate and UpdatedDate when inserting and updating movies, let’s use OrmLite InsertFilter and UpdateFilter. With them, we can manipulate our records before putting them in the database.
Let’s create a unit test to show how to use those two filters,
using ServiceStack.DataAnnotations;
using ServiceStack.OrmLite;

namespace OrmLiteAuditFields;

[TestClass]
public class PopulateAuditFieldsTest
{
    [TestMethod]
    public async Task SaveAsync_InsertNewMovie_PopulatesAuditFields()
    {
        OrmLiteConfig.DialectProvider = SqlServerDialect.Provider;

        OrmLiteConfig.InsertFilter = (command, row) =>
        //            ^^^^^
        {
            if (row is IAudit auditRow)
            {
                auditRow.CreatedDate = DateTime.UtcNow;
                //       ^^^^^
                auditRow.UpdatedDate = DateTime.UtcNow;
                //       ^^^^^
            }
        };

        OrmLiteConfig.UpdateFilter = (command, row) =>
        //            ^^^^^
        {
            if (row is IAudit auditRow)
            {
                auditRow.UpdatedDate = DateTime.UtcNow;
                //       ^^^^^
            }
        };

        var connectionString = "...Any SQL Server connection string here...";
        var dbFactory = new OrmLiteConnectionFactory(connectionString, SqlServerDialect.Provider);
        using var db = dbFactory.Open();

        var movieToInsert = new Movie
        {
            Name = "Titanic",
            // We're not setting CreatedDate and UpdatedDate here...
            Director = new Director
            {
                FullName = "James Cameron"
                // We're not setting CreatedDate and UpdatedDate here, either...
            }
        };
        await db.SaveAsync(movieToInsert, references: true);
        //       ^^^^^
        // We insert "Titanic" for the first time
        // With "references: true", we also insert the director

        var insertedMovie = await db.SingleByIdAsync<Movie>(movieToInsert.Id);
        Assert.IsNotNull(insertedMovie);
        Assert.AreNotEqual(default, insertedMovie.CreatedDate);
        Assert.AreNotEqual(default, insertedMovie.UpdatedDate);
    }
}
Notice we defined the InsertFilter and UpdateFilter, and inside them, we checked whether the row to be inserted or updated implements the IAudit interface, and if so, set the audit fields to the current timestamp.
To insert a movie and its director, we used SaveAsync() with the optional parameter references set to true. We didn’t explicitly set the CreatedDate and UpdatedDate properties before inserting a movie.
Internally, OrmLite SaveAsync() either inserts or updates an object if it exists in the database. It uses the property annotated as the primary key to find if the object already exists in the database.
Instead of using filters, we can use [Default(OrmLiteVariables.SystemUtc)] to annotate our audit fields. With this attribute, OrmLite will create a default constraint. But, this will work only for the first insertion. Not for future updates on the same record.
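For reference, that alternative looks something like this,

public class Movie : IAudit
{
    // ...same properties as before...

    // OrmLite creates a DEFAULT constraint for this column:
    // it covers the initial INSERT, but not later UPDATEs
    [Default(OrmLiteVariables.SystemUtc)]
    public DateTime CreatedDate { get; set; }
}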
3. Add [IgnoreOnUpdate] for future updates
To support future updates using the OrmLite SaveAsync(), we need to annotate the CreatedDate property with the attribute [IgnoreOnUpdate] in the Movie and Director classes. Like this,
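Here's a sketch of the two classes; everything except the new [IgnoreOnUpdate] annotation is the same as in step 1,

public class Movie : IAudit
{
    // ...Id, Name, and Director same as before...

    [Required]
    [IgnoreOnUpdate]
    // ^^^^^
    public DateTime CreatedDate { get; set; }

    [Required]
    public DateTime UpdatedDate { get; set; }
}

public class Director : IAudit
{
    // ...Id, MovieId, and FullName same as before...

    [Required]
    [IgnoreOnUpdate]
    // ^^^^^
    public DateTime CreatedDate { get; set; }

    [Required]
    public DateTime UpdatedDate { get; set; }
}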
Internally, when generating the SQL query for an UPDATE statement, OrmLite doesn’t include properties annotated with [IgnoreOnUpdate]. Source. Also, OrmLite has similar attributes for insertions and queries: [IgnoreOnInsertAttribute] and [IgnoreOnSelectAttribute].
Let’s modify our previous unit test to insert and update a movie,
using ServiceStack.DataAnnotations;
using ServiceStack.OrmLite;

namespace OrmLiteAuditFields;

[TestClass]
public class PopulateAuditFieldsTest
{
    [TestMethod]
    public async Task SaveAsync_InsertNewMovie_PopulatesAuditFields()
    {
        // Same OrmLiteConfig as before...

        var connectionString = "...Any SQL Server connection string here...";
        var dbFactory = new OrmLiteConnectionFactory(connectionString, SqlServerDialect.Provider);
        using var db = dbFactory.Open();

        var movieToInsert = new Movie
        {
            Name = "Titanic",
            // We're not setting CreatedDate and UpdatedDate here...
            Director = new Director
            {
                FullName = "James Cameron"
                // We're not setting CreatedDate and UpdatedDate here, either...
            }
        };
        await db.SaveAsync(movieToInsert, references: true);
        //       ^^^^^
        // 1.
        // We insert "Titanic" for the first time
        // With "references: true", we also insert the director

        await Task.Delay(1_000);
        // Let's give it some time...

        var movieToUpdate = new Movie
        {
            Id = movieToInsert.Id,
            //   ^^^^^
            Name = "The Titanic",
            // We're not setting CreatedDate and UpdatedDate here...
            Director = new Director
            {
                Id = movieToInsert.Director.Id,
                //   ^^^^^
                FullName = "J. Cameron"
                // We're not setting CreatedDate and UpdatedDate here, either...
            }
        };
        await db.SaveAsync(movieToUpdate, references: true);
        //       ^^^^^
        // 2.
        // To emulate a repository method, for example,
        // we're creating a new Movie object updating
        // movie and director names using the same Ids
    }
}
Often, when we work with repositories to abstract our data access layer, we update objects using the identifier of an already-inserted object and another object with the properties to update. Something like, UpdateAsync(movieId, aMovieWithSomePropertiesChanged).
Notice this time, after inserting a movie for the first time, we created a separate Movie instance (movieToUpdate) keeping the same ids and updating the other properties. We used the same SaveAsync() as before.
At this point, if we don’t annotate the CreatedDate properties with [IgnoreOnUpdate], we get the exception: “System.Data.SqlTypes.SqlTypeException: SqlDateTime overflow. Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM.”
We don’t want to change the CreatedDate on updates. That’s why in the UpdateFilter we only set UpdatedDate. And since we’re using a different Movie instance in the second SaveAsync() call, CreatedDate stays at its default value, DateTime.MinValue, which is outside the range SQL Server accepts for DATETIME columns. That’s why, without [IgnoreOnUpdate], OrmLite tries to write that value in the UPDATE statement and we get that exception.
Voilà! That’s how to automate audit fields with OrmLite. I found out about these filters and attributes by reading the OrmLite source code. I learned the lesson of reading the source code of our dependencies from a past Monday Links episode.
These days I needed to rename all occurrences of one keyword with another in source files and file names. In one of my client’s projects, I had to query one microservice to list a type of account to store it in an intermediate database. After a change in requirements, I had to query for another type of account and rename every place where I used the old one. This is what I learned.
1. Find and Replace inside Visual Studio
My original solution was to use Visual Studio to replace “OldAccount” with “NewAccount” in all .cs files in my solution. I used the “Replace in Files” menu by pressing Ctrl + Shift + H.
After this step, I replaced all occurrences inside source files. For example, it renamed class names from IOldAccountService to INewAccountService. To rename variables, I repeated the same replace operation but using lowercase patterns.
With the “Replace in Files” menu, I covered file content. But I still had to change the filenames. For example, I needed to rename IOldAccountService.cs to INewAccountService.cs. I did it by hand. Luckily, I didn’t have to rename many of them. “There must be a better way!”, I thought.
2. Find and Replace with Bash
After renaming my files by hand, I realized I could have used the command line to replace both the content and the file names. I use Git Bash anyway, so I have access to most Unix commands.
Replace ‘old’ with ‘new’ inside all .cs files
This is how to replace “Old” with “New” in all .cs files, Source
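Here's a sketch of that one-liner, put together from the flags explained below,

grep -i -r -l --include \*.cs "Old" . | xargs sed -i -b "s/Old/New/g"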
With the grep command, we look for all .cs files (--include \*.cs) containing the “Old” keyword, no matter the case (-i flag), inside all child folders (-r), showing only the file path (-l flag).
We could use the first command, before the pipe, to only list the .cs files containing a keyword.
Then, with the sed command, we replace the file content in place (-i flag), changing all occurrences of “Old” with “New” (s/Old/New/g). Notice the g option in the replacement pattern. To avoid messing with line endings, we use the -b flag. Source
If our filenames contain spaces (unusual in source files, but just in case), we need to tell grep and xargs to use a different separator,
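For example, something like this (a sketch: -Z makes grep separate file names with a null character, and -0 makes xargs read them that way),

grep -i -r -l -Z --include \*.cs "Old" . | xargs -0 sed -i -b "s/Old/New/g"

Replace ‘old’ with ‘new’ in file names

Now, to rename the files themselves, this is the kind of one-liner we can use. It's a sketch based on the steps described next,

find . -path ./TestCoverageReport -prune -type f -o -name "*Old*" \
    | sed -e "p;s/Old/New/g" \
    | xargs -d "\n" -n 2 mv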
This time, we’re using the find command to “find” all files (-type f), with “Old” anywhere in their names (-name "*Old*"), inside the current folder (.), excluding the TestCoverageReport folder (-path ./TestCoverageReport -prune).
Optionally, we can exclude multiple folders by wrapping them inside parentheses, like this, Source
find . \( -path ./FolderToExclude -o -path ./AnotherFolderToExclude \) -prune -type f -o -name "*Old*"
Then, we pipe those file names into the sed command to generate the new names, replacing “Old” with “New.” This time, we’re using the p option to print the “before” pattern. Up to this point, our command returns something like this,
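(The paths are illustrative, reusing the IOldAccountService example from before. Each original file name is followed by the new name sed generated for it.)

./Services/IOldAccountService.cs
./Services/INewAccountService.cs
./Services/OldAccountService.cs
./Services/NewAccountService.cs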
With the last part, we split the sed output by the newline character and pass groups of two file names to the mv command to finally rename the files.
Another alternative to sed followed by mv would be to use the rename command, like this,
find . -path ./TestCoverageReport -prune -type f -o -name "*Old*" \
    | xargs rename 's/Old/New/g'
Voilà! That’s how to replace a keyword in the content and name of files. It took me some time to figure this out. But, we can rename files with two one-liners. It will save us time in the future. Kudos to StackOverflow.