This is another episode where I share the talks from NDC Conference I watched and liked. This time, it's about JavaScript, history, and design.
How JavaScript Happened: A Short History of Programming Languages - Mark Rendle
This is a journey from FORTRAN to ALGOL to LISP to JavaScript. It explains why we still use if for conditionals, i in loops, and * for multiplication. Spoiler alert: it's because of FORTRAN.
Apache Kafka in 1 hour for C# Developers - Guilherme Ferreira
Clusters, Topics, Partitions, producers/consumers? This is a good first-time introduction to Kafka. The presenter uses kafkaflow and confluent-kafka-dotnet for the demo application.
Keynote: Why web tech is like this - Steve Sanderson
I found this one on r/programming (before the Reddit blackout). Informative! It feels like time traveling through operating systems and tools to create a Web page.
Pilot Critical Decision Making skills - Clifford Agius
The lesson from this one is to come up with a list of things that could go wrong and prepare and train for that. Follow TDODAR approach: Time, Diagnosis, Options, Decision, Assign, and Review.
Intentional Code - Minimalism in a World of Dogmatic Design
I like the idea that “software really is literature.” Not in the sense of literate programming, but in the sense of a narrative to express ideas where every line of code matters. I like the example of how a piece of code improves just by removing a few blank lines.
Another idea I liked is: “You don’t want everything to look the same.” We don’t want all applications to use Domain-Driven Design with Event Sourcing and microservices. Often, architectural patterns only add cognitive load and extra complexity.
The presenter suggests: “sitting and looking at it (at a piece of code) and working out how it makes you feel. And then when you feel something, try to understand why it feels that way.”
Voilà! Another Monday Links. What tech conferences do you follow? Do you also follow NDC Conference? What are your favorite presentations? Until next Monday Links.
These days I use OrmLite a lot. Almost every single day. In one of my client’s projects, OrmLite is the de facto ORM. Today I needed to pass a list of identifiers as a DataTable to an OrmLite SqlExpression. I didn’t want to write plain old SQL queries and use the embedded Dapper methods inside OrmLite. This is what I found out after a long debugging session.
To pass a DataTable with a list of identifiers as a parameter to OrmLite methods, create a custom converter for the DataTable type. Then use ConvertToParam() to pass it as a parameter to methods that use raw SQL strings.
As an example, let’s find all movies from a list of director Ids. I know a simple JOIN will get our backs covered here. But bear with me. Let’s imagine this is a more involved query.
1. Create two entities and a table type
These are the Movie and Director classes,
```csharp
public class Movie
{
    [AutoIncrement]
    public int Id { get; set; }

    [StringLength(256)]
    public string Name { get; set; }

    [Reference]
    // ^^^^^
    public Director Director { get; set; }
}

public class Director
{
    [AutoIncrement]
    public int Id { get; set; }

    [References(typeof(Movie))]
    public int MovieId { get; set; }
    // ^^^^^
    // OrmLite expects a foreign key back to the Movie table

    [StringLength(256)]
    public string FullName { get; set; }
}
```
In our database, let’s define the table type for our list of identifiers. Like this,
```sql
CREATE TYPE dbo.IntList AS TABLE (Id INT NULL);
```
2. Pass a DataTable to a SqlExpression
Now, to the actual OrmLite part,
```csharp
using NUnit.Framework;
using ServiceStack.DataAnnotations;
using ServiceStack.OrmLite;
using System;
using System.Data;
using System.Data.SqlClient;
using System.Threading.Tasks;

namespace PlayingWithOrmLiteAndDataTables;

public class DataTableAsParameterTest
{
    [Test]
    public async Task LookMaItWorks()
    {
        // 1. Register our custom converter
        OrmLiteConfig.DialectProvider = SqlServerDialect.Provider;
        OrmLiteConfig.DialectProvider.RegisterConverter<DataTable>(
            new SqlServerDataTableParameterConverter());
        //      ^^^^^

        var connectionString = "...Any SQL Server connection string here...";
        var dbFactory = new OrmLiteConnectionFactory(connectionString);
        using var db = dbFactory.Open();

        // 2. Populate some movies
        var titanic = new Movie
        {
            Name = "Titanic",
            Director = new Director { FullName = "James Cameron" }
        };
        await db.SaveAsync(titanic, references: true);

        var privateRyan = new Movie
        {
            Name = "Saving Private Ryan",
            Director = new Director { FullName = "Steven Spielberg" }
        };
        await db.SaveAsync(privateRyan, references: true);

        var pulpFiction = new Movie
        {
            Name = "Pulp Fiction",
            Director = new Director { FullName = "Quentin Tarantino" }
        };
        await db.SaveAsync(pulpFiction, references: true);

        // 3. Populate a DataTable with some Ids
        var movieIds = new DataTable();
        movieIds.Columns.Add("Id", typeof(int));
        movieIds.Rows.Add(2);
        //                ^^^^^
        // This should be Saving Private Ryan's Id

        // 4. Write the SqlExpression
        // Imagine this is a more complex query. I know!
        var query = db.From<Director>();
        var tableParam = query.ConvertToParam(movieIds);
        //                     ^^^^^
        query = query.CustomJoin(@$"INNER JOIN {tableParam} ids ON Director.MovieId = ids.Id");
        //            ^^^^^
        // We're cheating here. We know the table name! I know.

        // 5. Enjoy!
        var spielberg = await db.SelectAsync(query);
        Assert.IsNotNull(spielberg);
        Assert.AreEqual(1, spielberg.Count);
    }
}
```
Notice we first registered our SqlServerDataTableParameterConverter. More on that later!
After populating some records, we wrote a query using OrmLite SqlExpression syntax and a JOIN to our table parameter using the CustomJoin(). Also, we needed to convert our DataTable into a parameter with the ConvertToParam() method before referencing it.
We cheated a bit. Our Director class has the same name as our table. If that’s not the case, we could use the GetQuotedTableName() method, for example.
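If the class name and the table name didn't match, a sketch like this could ask the dialect provider for the quoted table name instead of hard-coding it in the JOIN. It assumes OrmLite's GetModelMetadata() extension method to get the model definition; treat it as a sketch, not the article's tested code.

```csharp
// A sketch: asking the dialect provider for the quoted table
// name instead of hard-coding "Director" in the JOIN
var dialect = OrmLiteConfig.DialectProvider;
var tableName = dialect.GetQuotedTableName(typeof(Director).GetModelMetadata());

var query = db.From<Director>();
var tableParam = query.ConvertToParam(movieIds);
query = query.CustomJoin(
    @$"INNER JOIN {tableParam} ids ON {tableName}.MovieId = ids.Id");
```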
3. Write an OrmLite custom converter for DataTable
And this is our SqlServerDataTableParameterConverter,
```csharp
// This converter only works when passing DataTable
// as a parameter to OrmLite methods. It doesn't work
// with OrmLite LoadSelectAsync method.
public class SqlServerDataTableParameterConverter : OrmLiteConverter
{
    public override string ColumnDefinition
        => throw new NotImplementedException("Only use to pass DataTable as parameter.");

    public override void InitDbParam(IDbDataParameter p, Type fieldType)
    {
        if (p is SqlParameter sqlParameter)
        {
            sqlParameter.SqlDbType = SqlDbType.Structured;
            sqlParameter.TypeName = "dbo.IntList";
            //                       ^^^^^
            // This should be our table type name
            // The same name as in the database
        }
    }
}
```
This converter only works when passing DataTable as a parameter. That’s why it has a NotImplementedException. I tested it with the SelectAsync() method. It doesn’t work with the LoadSelectAsync() method. This last method doesn’t parameterize internal queries. It will bloat our database’s plan cache. Take a look at OrmLite LoadSelectAsync() source code on GitHub here and here to see what I mean.
To make this converter work with the LoadSelectAsync(), we would need to implement the ToQuotedString() and return the DataTable content as a comma-separated list of identifiers. Exercise left to the reader!
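For the curious, a rough, untested sketch of that exercise could look like this override inside the converter. It inlines the DataTable rows as a comma-separated list of identifiers instead of a proper parameter, which is exactly the plan-cache bloat we wanted to avoid.

```csharp
// Untested sketch: inline the DataTable content as a
// comma-separated list of identifiers instead of a parameter.
// Requires System.Linq.
public override string ToQuotedString(Type fieldType, object value)
{
    if (value is DataTable dataTable)
    {
        var ids = dataTable.Rows
            .Cast<DataRow>()
            .Select(row => row["Id"].ToString());
        return string.Join(",", ids);
    }
    return base.ToQuotedString(fieldType, value);
}
```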
4. Write a convenient extension method
And, for compactness, let’s put that CustomJoin() into a beautiful extension method that infers the table and column name to join to,
```csharp
// Before:
// var query = db.From<Director>();
// var tableParam = query.ConvertToParam(movieIds);
// query = query.CustomJoin(@$"INNER JOIN {tableParam} ids ON Director.MovieId = ids.Id");

// After:
var query = db.From<Director>()
              .JoinToDataTable<Director>(d => d.MovieId, movieIds);
```
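A possible implementation of that JoinToDataTable() extension method could look like this sketch. It assumes OrmLite's GetModelMetadata() and GetQuotedColumnName() helpers, and that the table parameter always joins on an Id column; adapt as needed.

```csharp
using System;
using System.Data;
using System.Linq.Expressions;
using ServiceStack.OrmLite;

public static class SqlExpressionExtensions
{
    // Sketch of an extension method that infers the table and
    // column names for the CustomJoin() to the DataTable parameter
    public static SqlExpression<T> JoinToDataTable<T>(
        this SqlExpression<T> query,
        Expression<Func<T, object>> columnSelector,
        DataTable dataTable)
    {
        // Convert the DataTable into a parameter, like before
        var tableParam = query.ConvertToParam(dataTable);

        // Infer the quoted table name from the model definition
        var dialect = query.DialectProvider;
        var tableName = dialect.GetQuotedTableName(typeof(T).GetModelMetadata());

        // Extract the column name from the member expression
        // (int members arrive wrapped in a Convert node)
        var body = columnSelector.Body is UnaryExpression unary
            ? unary.Operand
            : columnSelector.Body;
        var columnName = dialect.GetQuotedColumnName(
            ((MemberExpression)body).Member.Name);

        return query.CustomJoin(
            $"INNER JOIN {tableParam} ids ON {tableName}.{columnName} = ids.Id");
    }
}
```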
Voilà! That is what I learned (or hacked) today. Things we only find out when reading the source code of our libraries. Another thought: the thing with ORMs is the moment we need to write complex queries, we stretch out ORM features until they break. Often, we’re better off writing dynamic SQL queries. I know, I know! Nobody wants to write dynamic SQL queries by hand. Maybe ask ChatGPT?
This is a career-only episode. These are five links I found interesting in the last month.
Build Personal Moats
From this post, the best career advice is to build a personal moat: “a set of unique and accumulating competitive advantages in the context of your career.” It continues describing good moats and how to find yours.
About personal moats:
“Ask others: What’s something that’s easy for me to do but hard for others?”
“Ideally you want this personal moat to help you build career capital in your sleep.”
“If you were magically given 10,000 hours to be amazing at something, what would it be? The more clarity you have on this response, the better off you’ll be.”
Want an unfair advantage in your tech career? Consume content meant for other roles
This post suggests building a competitive advantage by consuming content targeted at other roles. It's a mechanism to create more empathy, gain understanding, and work better in cross-functional teams, among other benefits. It also suggests a list of roles we can start learning about.
Career Advice No One Gave Me: Give a Lot of Notice When You Quit
This is gold! There are lots of posts on the Internet about interviewing, but few about quitting. This one is about how to quit while leaving doors open. It has concrete examples of how to “drop the bomb.”
Reading this post, I realized I've jumped from company to company always rewriting old applications. An old ASP.NET WebForms app to a Console App. (Don’t ask me why!) An old ASP.NET WebForms app again to an ASP.NET Web API project. An old Python scheduler to an ASP.NET Core project with HostedServices. History repeats itself, I guess. We’re writing the legacy applications of tomorrow.
Let’s embrace that, quoting the post, “Given enough time, all your code will get deleted.”
What you give up when moving into engineering management
Being a Manager requires different skills than being an Individual Contributor. Often, people get promoted to the Management track (without any training) only because they’re good developers. Arrrgggg! I’ve seen managers that are only good developers…and projects at risk because of that. This post shares why it’s hard to make the change and what we lose by moving to the Management track: focus time, for example.
Voilà! Another Monday Links. Do you think you have a personal moat or an unfair advantage? What is it? What are your quitting experiences? Until next Monday Links.
Do you have fast unit tests? This is how I sped up a slow test suite from one of my client’s projects by reducing the delay between retry attempts and initializing slow-to-build dependencies only once. There’s a lesson behind this refactoring session.
Make sure to have a fast test suite that every developer could run after every code change. The slower the tests, the less frequently they’re run.
I learned to have some metrics before rushing to optimize anything. I learned it while trying to optimize a slow room searching feature. These are the tests and their execution time before any changes:
Of course, I blurred some names for obvious reasons. I focused on two projects: Api.Tests (3.3 min) and ReservationQueue.Tests (18.9 sec).
I had a slower test project, Data.Tests. It contained integration tests using a real database. Probably those tests could benefit from simple test values. But I didn’t want to tune stored procedures or queries.
This is what I found and did to speed up this test suite.
Step 1: Reduce delays between retries
Inside the Api.Tests, I found tests for services with a retry mechanism. And, inside the unit tests, I had to wait more than three seconds between every retry attempt. C’mon, these are unit tests! Nobody needs or wants to wait between retries here.
My first solution was to reduce the delay between retry attempts to zero.
Set retryWaitSeconds = 0
Some tests built retry policies manually and passed them to services. I only needed to pass 0 as a delay. Like this,
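The original snippet isn't shown here, but a hedged example of the idea, using Polly's WaitAndRetryAsync() (ServiceToTest is a made-up name for illustration), could look like this:

```csharp
// Hypothetical test setup: pass a zero delay to the retry
// policy so unit tests don't wait between attempts
var retryWaitSeconds = 0;
// ^^^^^
// No waiting between retry attempts inside unit tests

var retryPolicy = Policy
    .Handle<HttpRequestException>()
    .WaitAndRetryAsync(
        retryCount: 2,
        sleepDurationProvider: _ => TimeSpan.FromSeconds(retryWaitSeconds));

var service = new ServiceToTest(retryPolicy);
```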
Some other tests used an EventHandler base class. After running a command handler wrapped in a database transaction, we needed to call other internal microservices. We used event handlers for that. This is the EventHandlerBase,
```csharp
public abstract class EventHandlerBase<T> : IEventHandler<T>
{
    protected RetryOptions _retryOptions;

    protected EventHandlerBase()
    {
        _retryOptions = new RetryOptions();
        // ^^^^^
        // By default, it has:
        // MaxRetries = 2
        // RetryDelayInSeconds = 3
    }

    public async Task ExecuteAsync(T eventArgs)
    {
        try
        {
            await BuildRetryPolicy()
                .ExecuteAsync(async () => await HandleAsync(eventArgs));
        }
        catch (Exception ex)
        {
            // Sorry, something wrong happened...
            // Log things here like good citizens of the world...
        }
    }

    private AsyncPolicy BuildRetryPolicy()
    {
        return Policy
            .Handle<HttpRequestException>()
            .WaitAndRetryAsync(
                _retryOptions.MaxRetries,
                (retryAttempt) => TimeSpan.FromSeconds(
                    Math.Pow(_retryOptions.RetryDelayInSeconds, retryAttempt)),
                // ^^^^^
                (exception, timeSpan, retryCount, context) =>
                {
                    // Log things here like good citizens of the world...
                });
    }

    public virtual void SetRetryOptions(RetryOptions retryOptions)
    // ^^^^^
    {
        _retryOptions = retryOptions;
    }

    protected abstract Task HandleAsync(T eventArgs);
}
```
Notice one thing: the EventHandlerBase didn’t receive a RetryOptions in its constructor. All event handlers had, by default, a 3-second delay. Even the ones inside unit tests. Arrrgggg! And the EventHandlerBase used an exponential backoff. Arrrgggg! That explained why I had those slow tests.
The perfect solution would have been to make all child event handlers receive the right RetryOptions. But it would have required changing the Production code and probably retesting some parts of the app.
Instead, I went through all the builder methods inside tests and passed a RetryOptions without delay. Like this,
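A hedged sketch of one of those builder methods (SomeEventHandler is a made-up name; SetRetryOptions() comes from the EventHandlerBase above) could look like this:

```csharp
// Hypothetical builder method inside the tests, now passing
// a RetryOptions without any delay between attempts
private static SomeEventHandler BuildEventHandler()
{
    var handler = new SomeEventHandler(/* all fake dependencies here... */);
    handler.SetRetryOptions(new RetryOptions
    {
        MaxRetries = 2,
        RetryDelayInSeconds = 0
        // ^^^^^
        // No waiting between retries inside unit tests
    });
    return handler;
}
```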
After removing that delay between retries, the Api.Tests ran faster.
Step 2: Initialize AutoMapper only once
Inside the ReservationQueue.Tests, the other slow test project, I found some tests using AutoMapper. Oh, boy! AutoMapper! I have a love-and-hate relationship with AutoMapper. I shared about AutoMapper in a past Monday Links episode.
Some of the tests inside ReservationQueue.Tests looked like this,
```csharp
[TestClass]
public class ACoolTestClass
{
    private class TestBuilder
    {
        public Mock<ISomeService> SomeService { get; set; } = new Mock<ISomeService>();

        private IMapper mapper = null;

        internal IMapper Mapper
        // ^^^^^
        {
            get
            {
                if (mapper == null)
                {
                    var services = new ServiceCollection();
                    services.AddMapping();
                    // ^^^^^
                    var provider = services.BuildServiceProvider();
                    mapper = provider.GetRequiredService<IMapper>();
                }
                return mapper;
            }
        }

        public ServiceToTest Build()
        {
            return new ServiceToTest(Mapper, SomeService.Object);
            //                       ^^^^^
        }

        public TestBuilder SetSomeService()
        {
            // Make the fake SomeService instance return some hard-coded values...
        }
    }

    [TestMethod]
    public void ATest()
    {
        var builder = new TestBuilder().SetSomeService();
        var service = builder.Build();

        service.DoSomething();

        // Assert something here...
    }

    // Imagine more tests that follow the same pattern...
}
```
These tests used a private TestBuilder class to create a service with all its dependencies replaced by fakes. Except for AutoMapper’s IMapper.
To create IMapper, these tests had a property that used the same AddMapping() method used in the Program.cs file. It was an extension method with hundreds and hundreds of type mappings. Like this,
```csharp
public static IServiceCollection AddMapping(this IServiceCollection services)
{
    var configuration = new MapperConfiguration((configExpression) =>
    {
        // Literally hundreds of single-type mappings here...
        // Hundreds and hundreds...
    });
    configuration.AssertConfigurationIsValid();

    services.AddSingleton(configuration.CreateMapper());

    return services;
}
```
The thing is that every single test created a new instance of the TestBuilder class. And, by extension, an instance of IMapper for every test. And creating an instance of IMapper is expensive. Arrrgggg!
A better solution would have been to use AutoMapper Profiles and only load the profiles needed in each test class. That would have been a long and painful refactoring session.
Use MSTest ClassInitialize attribute
Instead of creating an instance of IMapper when running every test, I did it only once per test class. I used the MSTest [ClassInitialize] attribute. It decorates a static method that runs before all the test methods of a class. That was exactly what I needed.
My sample test class using [ClassInitialize] looked like this,
```csharp
[TestClass]
public class ACoolTestClass
{
    private static IMapper Mapper;
    // ^^^^^

    [ClassInitialize]
    // ^^^^^
    public static void TestClassSetup(TestContext context)
    // ^^^^^
    {
        var services = new ServiceCollection();
        services.AddMapping();
        // ^^^^^
        var provider = services.BuildServiceProvider();
        Mapper = provider.GetRequiredService<IMapper>();
    }

    private class TestBuilder
    {
        public Mock<ISomeService> SomeService { get; set; } = new Mock<ISomeService>();

        // No more IMapper initializations here

        public ServiceToTest Build()
        {
            return new ServiceToTest(Mapper, SomeService.Object);
            //                       ^^^^^
        }

        public TestBuilder SetSomeService()
        {
            // Return some hardcoded values from ISomeService methods...
        }
    }

    // Same tests as before...
}
```
I needed to replicate this change in other test classes that used AutoMapper.
After reducing the delay between retry attempts and creating IMapper once per test class, these were the final execution times,
That’s under a minute! They used to run in ~3.5 minutes.
Voilà! That’s how I sped up this test suite. Apart from reducing delays between retry attempts in our tests and initializing AutoMapper once per test class, the lesson to take home is to have a fast test suite. A test suite we can run after every code change. Because the slower the tests, the less frequently we run them. And we want our backs covered by tests all the time.
Let’s continue refactoring some tests for an email component. Last time, we refactored two tests that remove duplicated email addresses before sending an email. This time, let’s refactor two more tests. But these ones check that we change an email status once we receive a “webhook” from a third-party email service. Let’s refactor them.
Here are the tests to refactor
If you missed the last refactoring session, these tests belong to an email component in a Property Management Solution. This component stores all emails before sending them and keeps track of their status changes.
These two tests check we change the recipient status to either “delivered” or “complained.” Of course, the original test suite had more tests. We only need one or two tests to prove a point.
```csharp
using Moq;
using Xunit;

namespace AcmeCorp.Email.Tests;

public class UpdateStatusCommandHandlerTests
{
    [Fact]
    public async Task Handle_ComplainedStatusOnlyOnOneRecipient_UpdatesStatuses()
    {
        var fakeRepository = new Mock<IEmailRepository>();
        var handler = BuildHandler(fakeRepository);
        var command = BuildCommand(withComplainedStatusOnlyOnCc: true);
        //            ^^^^^

        await handler.Handle(command, CancellationToken.None);

        fakeRepository
            .Verify(t => t.UpdateAsync(It.Is<Email>(d =>
                d.Recipients[0].LastDeliveryStatus == DeliveryStatus.ReadyToBeSent
                // ^^^^^
                && d.Recipients[1].LastDeliveryStatus == DeliveryStatus.Complained)),
                //  ^^^^^
                Times.Once());
    }

    [Fact]
    public async Task Handle_DeliveredStatusToBothRecipients_UpdatesStatuses()
    {
        var fakeRepository = new Mock<IEmailRepository>();
        var handler = BuildHandler(fakeRepository);
        var command = BuildCommand(withDeliveredStatusOnBoth: true);
        //            ^^^^^

        await handler.Handle(command, CancellationToken.None);

        fakeRepository
            .Verify(t => t.UpdateAsync(It.Is<Email>(d =>
                d.Recipients[0].LastDeliveryStatus == DeliveryStatus.Delivered
                // ^^^^^
                && d.Recipients[1].LastDeliveryStatus == DeliveryStatus.Delivered)),
                //  ^^^^^
                Times.Once());
    }

    private static UpdateStatusCommandHandler BuildHandler(
        Mock<IEmailRepository> fakeRepository)
    {
        fakeRepository
            .Setup(t => t.GetByIdAsync(It.IsAny<Guid>()))
            .ReturnsAsync(BuildEmail());

        return new UpdateStatusCommandHandler(fakeRepository.Object);
    }

    private static UpdateStatusCommand BuildCommand(
        bool withComplainedStatusOnlyOnCc = false,
        bool withDeliveredStatusOnBoth = false
        // Imagine more flags for other combinations
        // of statuses. Like opened, bounced, and clicked
    )
        // Imagine building a large object graph here
        // based on the parameter flags
        => new UpdateStatusCommand();

    private static Email BuildEmail()
        => new Email(
            "A Subject",
            "A Body",
            new[]
            {
                Recipient.To("to@email.com"),
                Recipient.Cc("cc@email.com")
            });
}
```
I slightly changed some test and method names. But those are some of the real tests I had to refactor.
What’s wrong with those tests? Did you notice it?
These tests use Moq to create a fake for the IEmailRepository, and the BuildHandler() and BuildCommand() factory methods to reduce the noise and keep our tests simple.
What’s wrong?
Let’s take a look at the first test. Inside the Verify() method, why is Recipients[1] the one expected to have the Complained status? What if we change the order of recipients?
Based on the scenario in the test name, “complained status only on one recipient,” and the withComplainedStatusOnlyOnCc parameter passed to BuildCommand(), we might guess Recipients[1] is the email’s cc address. But the test hides the order of recipients. We would have to inspect the BuildHandler() method to see the email injected into the handler and check the order of its recipients.
In the second test, since we expect all recipients to have the same status, we don’t care much about the order of recipients.
We shouldn’t hide anything in builders or helpers and later use those hidden assumptions in other parts of our tests. That makes our tests difficult to follow. And we shouldn’t make our readers decode our tests.
Explicit is better than implicit
Let’s rewrite our tests to avoid passing flags like withComplainedStatusOnlyOnCc and withDeliveredStatusOnBoth, and verifying on a hidden recipient order. Instead of passing flags for every possible combination of statuses to BuildCommand(), let’s create one object mother per status, explicitly passing the email addresses we want.
First, instead of creating a fake EmailRepository with a hidden email object, we wrote a With() method. And to make things more readable, we renamed BuildEmail() to EmailFor() and passed the destinations explicitly to it. We can read it like mock.With(EmailFor(anAddress)).
Next, instead of using a single BuildCommand() with a flag for every combination of statuses, we created one object mother per status: ComplaintFrom() and DeliveredTo(). Again, we passed the email addresses we expected to have either complained or delivered statuses.
Lastly, for our Assert part, we created two custom Verify methods: VerifyUpdatedStatusFor() and VerifyUpdatedStatusForAll(). In the first test, we passed to VerifyUpdatedStatusFor() an array of tuples with the email address and its expected status.
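Putting those three changes together, the refactored tests could look like this sketch. The exact signatures of With(), EmailFor(), ComplaintFrom(), DeliveredTo(), and the custom Verify methods are my assumption based on the description above.

```csharp
[Fact]
public async Task Handle_ComplainedStatusOnlyOnOneRecipient_UpdatesStatuses()
{
    var toAddress = "to@email.com";
    var ccAddress = "cc@email.com";
    // With() plants the email; EmailFor() makes the recipients explicit
    var fakeRepository = new Mock<IEmailRepository>()
        .With(EmailFor(toAddress, ccAddress));
    var handler = BuildHandler(fakeRepository);
    // One object mother per status, with explicit addresses
    var command = ComplaintFrom(ccAddress);

    await handler.Handle(command, CancellationToken.None);

    // Tuples of (address, expected status): no hidden recipient order
    fakeRepository.VerifyUpdatedStatusFor(new[]
    {
        (toAddress, DeliveryStatus.ReadyToBeSent),
        (ccAddress, DeliveryStatus.Complained)
    });
}

[Fact]
public async Task Handle_DeliveredStatusToBothRecipients_UpdatesStatuses()
{
    var toAddress = "to@email.com";
    var ccAddress = "cc@email.com";
    var fakeRepository = new Mock<IEmailRepository>()
        .With(EmailFor(toAddress, ccAddress));
    var handler = BuildHandler(fakeRepository);
    var command = DeliveredTo(toAddress, ccAddress);

    await handler.Handle(command, CancellationToken.None);

    // All recipients share the same expected status
    fakeRepository.VerifyUpdatedStatusForAll(DeliveryStatus.Delivered);
}
```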
Voilà! That was another refactoring session. When we write unit tests, we should strive for a balance between implicit code to reduce the noise in our tests and explicit code to make things easier to follow.
In the original version of these tests, we hid the order of recipients when building emails. But then we relied on that order when writing assertions. Let’s not be like magicians pulling code we had hidden somewhere else.
Also, let’s use extension methods and object mothers like With(), EmailFor(), and DeliveredTo() to create a small “language” in our tests, striving for readability. The next person writing tests will copy the existing ones. That will make their life easier.