These days, I’ve been reading “The E-Myth Revisited” by Michael Gerber, and I noticed I’ve developed my own method of reading books. Today, I want to share it: this is how I read non-fiction books for better retention.
To read books, I follow a method based mainly on the Zettelkasten method, described in the book “How to Take Smart Notes.” With the Zettelkasten method, we keep literature notes and permanent notes and follow a process to convert the former into the latter. It boils down to creating connections between notes.
I adapted the Zettelkasten method to use plain text instead of pieces of paper. I’m a big fan of plain text.
The key to retaining more from the books we read is to read actively: read looking for answers and connect what we read to previous learnings.
This is the six-step process I follow to read non-fiction books.
Step 1: Intention
I switched from reading a book just for the sake of reading to reading a book to answer questions.
I don’t jump into a book with the same attitude when I’m just curious about a subject as when I want to answer a particular question. I also learned I don’t have to read books from cover to cover.
For example, I started reading “The E-Myth Revisited” to learn how to run a solo consulting practice.
Step 2: Overview
Then, I get a grasp of the book and its content. I use reviews, summaries, podcast interviews, or anything else that helps me understand the overall content of the book.
Recently, I started to experiment with Copilot for this step. I ask Copilot to generate an executive summary of a book, for example.
Step 3: Note
Then, I create a new Markdown file for the book note. For the title of every book note, I use the date and the book title.
I divide each book note into two halves. The first half is for questions and connections, and the second half is for the actual notes.
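As a sketch, a new book note could start like this (the exact headings are made up for illustration, not part of the original Zettelkasten method):

```markdown
# 2024-03-20 The E-Myth Revisited

## Questions and connections

- What do I want to answer by reading this book?
- Links to other notes and my critique go here

## Notes

- Interesting parts, quotes, and ideas in my own words
```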
Also, I link to the new book note from the book index and the subject index. The book index is a note that links to all books I have read, in alphabetical order. The subject index is a note that references all other notes related to a specific subject. These two types of notes are entry points into my note vault.
For example, I could link to “The E-Myth Revisited” note from a “Consulting” index note.
Step 4: Question
After creating a new note, I read the table of contents, introduction, and conclusion looking for the book’s structure and interesting topics. I skim through the book to find anything that grabs my attention: boxes, graphs, and tables.
In the first half of the book note, I write questions I have about the subject and questions that arose after skimming the book. I got this idea of asking questions about a book before reading from Jim Kwik’s speed reading videos on YouTube.
If I decide not to read the book from cover to cover, I create an index of the chapters and sections I want to read or the ones I don’t want to. I keep this index for future reference. I learned this idea from SuperOrganizers’ Surgical Reading.
Step 5: Read
Then, I read the book while taking note of interesting parts and quotes in the second half of my book note. I try not to copy and paste passages from the book into my notes but to write things in my own words, except for quotes.
After every chapter, I stop to recall the main ideas from that chapter.
Also, while reading the book, I answer the initial questions in the first half of the note.
Step 6: Connections
While reading or after finishing a chapter, I notice connections with other subjects and my existing knowledge. I use the first half of the note to write these connections and link to other notes.
This is the step where I write my book critique: how this expands or contradicts anything else I’ve learned.
For these connections and critique, the Zettelkasten method recommends a separate set of notes: the permanent notes. The original Zettelkasten proponent used separate handwritten notes and slip boxes for his permanent notes. I keep my permanent notes in the same file but in the first half. This way, the next time I open it, I find my connections and critique first.
Voilà! That’s how I read non-fiction books. It’s a combination of the Zettelkasten method with a pre-reading step and my own note structure.
“Prediction is very difficult, especially if it’s about the future.” But here it goes.
On March 12th, 2024, Cognition Labs released Devin, “the first AI software engineer.” This announcement triggered an interesting conversation with a group of colleagues and ex-coworkers. The group was divided between despair and embracing change.
Two opposing views
We all agreed that Devin, and AI in general, won’t take our jobs, at least not in the foreseeable future. But it will change the landscape for sure.
This is where the meme “AI needs well-written and unambiguous requirements, so we’re still safe” holds true.
Before the 2020 pandemic, we were living in a boom. We only needed “software engineer” as the title in our LinkedIn profiles to have dozens of recruiters offering “life-changing opportunities” every week.
That boom is over.
In 2023 and 2024, we experienced massive layoffs. We all knew someone in our inner circle who was laid off. It was a crazy time: one job listing, hundreds of applicants, and radio silence after sending a CV.
One part of the group believed that software engineering, at least the way we know it, would disappear in less than 10 years. They expected to see more layoffs and unemployment. They were planning escape routes away from this industry.
The other part of the group believed the world would still need software engineers, at least to oversee what AI does. This group brought up the subject of working conditions for future software engineers. Maybe they will come from underdeveloped countries, with extremely low wages and poor working conditions, to fix the “oops” moments of AI software engineers.
My own predictions
In 2034, knowing programming and coding by itself won’t be enough. We will need to master a business domain or area of expertise and use programming in that context, mainly with AI.
Rather than being mundane code monkeys, our role will look more like that of Product Managers. AI will automate the coding part of our job. We will work more as Requirement Writers and Prompt Engineers. Essentially, we will all be Engineering Managers overseeing a group of Devins.
We will see more Renaissance men and women, well-versed in different areas of knowledge, managing different AIs to achieve the goal of entire teams.
In the meantime, if somebody else writes requirements and we, software engineers, merely translate those requirements into code, we’ll be out of business.
Voilà! That’s how I envision Software Engineering in 2034: more human interaction and business understanding to identify requirements and prompts for AI Software Engineers. No more zero-value tasks like manual testing, code generation, and pointless meetings. AI will handle it all.
LINQ doesn’t get new features with each release of the .NET framework. It just works. This time, .NET 9 introduced two new LINQ methods: CountBy() and Index(). Let’s take a look at them.
1. CountBy
CountBy() groups the elements of a collection by a key and counts the occurrences of each key. With CountBy(), there’s no need to group the elements of a collection first to count their occurrences.
For example, let’s count all the movies in our catalog by release year, of course, using CountBy().
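Here’s a sketch of what that could look like. The Movie record and the sample catalog are made up for illustration:

```csharp
var movies = new List<Movie>
{
    new("Shrek", 2001),
    new("Monsters, Inc.", 2001),
    new("Finding Nemo", 2003)
};

// Count movies per release year, without grouping first
var countByReleaseYear = movies.CountBy(movie => movie.ReleaseYear);

foreach (var (year, count) in countByReleaseYear)
{
    Console.WriteLine($"{year}: [{count}]");
}

record Movie(string Name, int ReleaseYear);
```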
CountBy() returns a collection of KeyValuePair with the key in the first position and the count in the second one.
By the way, if that Console application doesn’t look like one, it’s because we’re using three recent C# features: top-level statements, records, and global using statements.
CountBy() has the same spirit as DistinctBy(), MinBy(), MaxBy(), and other LINQ methods from .NET 6.0. With these methods, we apply an action directly on a collection using a key selector. We don’t need to filter or group a collection first to apply that action.
2. Index
Index projects every element of a collection alongside its position in the collection.
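As a sketch, using a made-up movie catalog again (the Movie record is hypothetical):

```csharp
var movies = new List<Movie>
{
    new("Shrek"),
    new("Monsters, Inc."),
    new("Finding Nemo")
};

// Index() returns tuples of (Index, Item), with positions starting at 0
foreach (var (index, movie) in movies.Index())
{
    Console.WriteLine($"{index}: {movie.Name}");
}
// 0: Shrek
// 1: Monsters, Inc.
// 2: Finding Nemo

record Movie(string Name);
```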
If we take a look at the Index source code on GitHub, it’s a foreach loop with a counter in its body. Nothing fancy!
Voilà! Those are two new LINQ methods in .NET 9.0: CountBy() and Index(). It seems the .NET team is bringing to the standard library the methods we needed to roll ourselves before.
If you want to write more expressive code to work with collections, check out my course Getting Started with LINQ on Educative, where I cover everything from what LINQ is to refactoring conditionals with LINQ, plus its new methods and overloads up to .NET 6.0. All you need to know to start using LINQ in your everyday coding.
Starting with .NET 8.0, we have a better alternative for testing logging and logging messages. We don’t need to roll our own mocks anymore. Let’s learn how to use the new FakeLogger<T> inside our unit tests.
.NET 8.0 introduces FakeLogger<T>, an in-memory logging provider designed for unit testing. It provides methods and properties, such as LatestRecord, to inspect the log entries recorded inside unit tests.
Let’s revisit our post on unit testing logging messages. In that post, we used a Mock<ILogger<T>> to verify that we logged the exception message thrown inside a controller method. This was the controller we wanted to test,
using Microsoft.AspNetCore.Mvc;

namespace FakeLogger.Controllers;

[ApiController]
[Route("[controller]")]
public class SomethingController : ControllerBase
{
    private readonly IClientService _clientService;
    private readonly ILogger<SomethingController> _logger;

    public SomethingController(IClientService clientService, ILogger<SomethingController> logger)
    {
        _clientService = clientService;
        _logger = logger;
    }

    [HttpPost]
    public async Task<IActionResult> PostAsync(AnyPostRequest request)
    {
        try
        {
            // Imagine that this service does something interesting...
            await _clientService.DoSomethingAsync(request.ClientId);
            return Ok();
        }
        catch (Exception exception)
        {
            _logger.LogError(exception,
                "Something horribly wrong happened. ClientId: [{clientId}]",
                request.ClientId);
            // ^^^^^^^^
            // Logging things like good citizens of the world...
            return BadRequest();
        }
    }
}

// Just for reference...Nothing fancy here
public interface IClientService
{
    Task DoSomethingAsync(int clientId);
}

public record AnyPostRequest(int ClientId);
1. Creating a FakeLogger
Let’s test the PostAsync() method, but this time let’s use the new FakeLogger<T> instead of a mock with Moq.
To use the new FakeLogger<T>, let’s first install the NuGet package Microsoft.Extensions.Diagnostics.Testing.
Here’s the test,
using FakeLogger.Controllers;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Logging.Testing;
//                                  ^^^^^
using Moq;

namespace FakeLogger.Tests;

[TestClass]
public class SomethingControllerTests
{
    [TestMethod]
    public async Task PostAsync_Exception_LogsException()
    {
        var clientId = 123456;
        var fakeClientService = new Mock<IClientService>();
        fakeClientService
            .Setup(t => t.DoSomethingAsync(clientId))
            .ThrowsAsync(new Exception("Expected exception..."));
        //   ^^^^^
        // 3...2...1...Boom...

        // Look, ma! No mocks here...
        var fakeLogger = new FakeLogger<SomethingController>();
        //               ^^^^^

        var controller = new SomethingController(fakeClientService.Object, fakeLogger);
        //                                                                 ^^^^^

        var request = new AnyPostRequest(clientId);
        await controller.PostAsync(request);

        // Warning!!!
        //var expected = $"Something horribly wrong happened. ClientId: [{clientId}]";
        //Assert.AreEqual(expected, fakeLogger.LatestRecord.Message);
        // ^^^^^^^^
        // Do not expect exactly the same log message thrown from PostAsync()

        // Even better:
        fakeLogger.VerifyWasCalled(LogLevel.Error, clientId.ToString());
        //         ^^^^^
    }
}
We needed a using for Microsoft.Extensions.Logging.Testing. Yes, that’s different from the NuGet package name.
We wrote new FakeLogger<SomethingController>() and passed it around. That’s it.
2. Asserting on FakeLogger
The FakeLogger<T> has a LatestRecord property that captures the last log entry recorded. Its type is FakeLogRecord, which contains a Level, Message, and Exception. If no logs have been recorded, accessing LatestRecord throws an InvalidOperationException with the message “No records logged.”
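As a minimal sketch, outside any test class and with a made-up log message, just to show the API:

```csharp
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Logging.Testing;

var fakeLogger = new FakeLogger<object>();
fakeLogger.LogError("Something horribly wrong happened");

// LatestRecord is a FakeLogRecord with Level, Message, and Exception
Console.WriteLine(fakeLogger.LatestRecord.Level);   // Error
Console.WriteLine(fakeLogger.LatestRecord.Message); // Something horribly wrong happened
```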
But, for the Assert part of our test, we followed the lesson from our previous post on testing logging messages: do not expect exact matches of logging messages in assertions. Otherwise, any change in the structure of our logging messages will make our test break, even if the underlying business logic remains unchanged.
Instead of expecting exact matches of the logging messages, we wrote an extension method VerifyWasCalled(). This method receives a log level and a substring as parameters. Here it is,
public static void VerifyWasCalled<T>(this FakeLogger<T> fakeLogger, LogLevel logLevel, string message)
{
    var hasLogRecord = fakeLogger.Collector
        //                        ^^^^^
        .GetSnapshot()
        // ^^^^^
        .Any(log => log.Level == logLevel
                    && log.Message.Contains(message, StringComparison.OrdinalIgnoreCase));
        //                        ^^^^^

    if (hasLogRecord)
    {
        return;
    }

    // Output:
    //
    // Expected log entry with level [Warning] and message containing 'Something else' not found.
    // Log entries found:
    // [15:49.229, error] Something horribly wrong happened. ClientId: [123456]
    var exceptionMessage =
        $"Expected log entry with level [{logLevel}] and message containing '{message}' not found."
        + Environment.NewLine
        + "Log entries found:"
        + Environment.NewLine
        + string.Join(Environment.NewLine, fakeLogger.Collector.GetSnapshot().Select(l => l));

    throw new AssertFailedException(exceptionMessage);
}
First, we used Collector and GetSnapshot() to grab a reference to the collection of log entries recorded. Then, we checked we had a log entry with the expected log level and message. Next, we wrote a handy exception message showing the log entries recorded.
Voilà! That’s how to write tests for logging messages using FakeLogger<T> instead of mocks.
If we only want to create a logger inside our tests without asserting anything on it, let’s use NullLogger<T>. But if we want to check we’re logging exceptions, like good citizens of the world, let’s use the new FakeLogger<T> and avoid tying our tests to details like the log count and the exact log messages, which makes our tests harder to maintain. In any case, we don’t need to roll our own mocks to test logging anymore.
I ran an experiment. Maybe it was fear of missing out. I decided to use AI to help me launch a new course. This is how I used Copilot and the prompts I used.
I got this idea after watching one of Brent Ozar’s Office Hours videos on YouTube, where he shared that he keeps ChatGPT open all the time and uses it as a junior employee. I decided to run a similar experiment, but for launching a new course on unit testing, one of my favorite subjects.
1. Lesson content and materials
I planned the lesson content and recorded and edited all video lessons myself. #madebyahuman
For the editing part, I used Adobe Podcast to remove background noise. My neighbor’s dog started to bark every time I hit record. And the other day at home, somebody made a smoothie with a loud blender while I was recording. Arrrggg! The only downside is that my voice sounds auto-tuned, especially at the end of words.
I used Copilot for its convenience: I don’t need to create an account. I even made Microsoft Edge open Copilot in a default tab. I only need to press the Windows key, type “Edge,” and I’m right there.
These are the prompts I used.
Generate a list of title ideas for my course
You’re an expert on course creation, programming, and SEO, give me a list of titles for a course to teach insert subject here
Rewrite my draft for a landing page
Now you’re an expert on online writing, SEO, marketing, and copywriting, please help me improve this landing page for an online course to increase sales and conversions. Make sure to use a friendly and conversational tone.
This is my landing page:
insert landing page here
Turn a landing page into a script for an introductory video
Now turn that last version of the landing page into a list of paragraphs and sentences I can use to create a PowerPoint presentation. Keep it short and to the point. Use only 10 paragraphs. I will turn each paragraph into a slide
2. Online Marketing
Once I got all the lesson content and a landing page ready, I moved to the promotion part.
These are the prompts I used.
Write an email inviting readers to buy this new course
You’re an expert on copywriting and email marketing, give me n ideas for a short email to invite a reader who already took some action to buy my new video course: insert course name here. In that course, I insert brief description here. Offer a promo code for a limited time. Use friendly and engaging language.
Write a call to action for my posts
You’re an expert copywriter, give me n ideas for a two-sentence paragraph to promote my new course insert course name here. I’d like to use that paragraph at the end of posts on my blog to invite my readers to join the course. Use a friendly and conversational tone.
Write a launching post on LinkedIn
You’re an expert on copywriting, LinkedIn, and personal branding. Give me n ideas for a LinkedIn post to promote my new course insert title here based on the landing page of the course. Use a friendly and conversational tone.
This is the landing page of the course:
insert landing page here
Nothing fancy. I followed the pattern: “Act as X, do Y for me based on some input. This is the input.”
I don’t use the exact same words Copilot gives me. I change the words I wouldn’t normally use to make it sound like me.
Voilà! That’s how I used AI, especially Copilot, to help me launch my new testing course. On a Saturday morning, I ended up with a landing page and the script for an intro video for the course. What a productive morning!
I use AI the same way Jim Kwik, the brain coach, recommends in one of his YouTube videos: “AI (artificial intelligence) to enhance HI (human intelligence), not to replace it.” I don’t want AI to take away the pleasure of doing what I like to do. It’s my assistant.