Ultralearning: Takeaways

In Ultralearning, Scott Young describes the strategy behind his own learning challenges, like the “MIT Challenge in 1 year” and “A year without English.” Let’s learn what Ultralearning is all about. These are my takeaways.

Ultralearning is a self-directed and intense strategy for learning any subject. Ultralearning projects can help you advance your career or excel at a particular subject, and they’re a compelling alternative to traditional learning methods.

1. Before starting

Before starting an ultralearning project, answer why, what, and how you are going to ultralearn.

Why?

First, identify why you’re learning a subject and the effect you want to achieve. Are you learning the subject to get a specific result? Are you driven only by curiosity?

For example, are you learning to code to get a promotion? Or do you want to learn a new language to go on a trip? Those are two different motivations.

What?

Next, determine the concepts, facts, and procedures you need to learn.

  • A concept is something you need to understand instead of memorizing.
  • A fact is something you need to memorize. Facts are useful only if you can recall them later.
  • A procedure is something you need to practice.

For example, when learning a foreign language, vocabulary and expressions are facts, but pronunciation is a procedure.

How?

After answering Why and What, select your resources. Spend about 10% of your learning time doing research. Use this research time to find how people are learning that subject.

Look for course syllabi, textbooks, boot camps, and experts in the field. Filter out anything that won’t help you achieve your goal.


2. During

Learn in context

Learning should happen in the context where the skills will be applied. Most of the learning happens while doing the thing you want to get good at. For example, solve problem sets instead of watching lectures, and learn a language through conversations instead of vocabulary lists.

Try project-based learning and immersive learning. For example, learn how to create a website in a month or go on a trip to learn a new language.

Prefer short study sessions

Spread shorter study sessions over a long period. Find a balance between long study sessions on a single topic and shorter sessions on different subjects. It’s better to have shorter sessions, between 15 minutes and an hour.

If you find yourself procrastinating, follow the 5-minute rule: start and sustain for 5 minutes. Also, you can try the Pomodoro technique: alternate 25-minute practice sessions with 5-minute breaks.


Identify bottlenecks

Identify the bottleneck components in your learning. Separate your skill into sub-skills and practice each one. Imagine a musician who practices the tricky parts of a piece in isolation and then plays the whole piece.

Recall instead of concept mapping

Recalling works better than concept mapping and passively reviewing notes. Recall concepts and facts often; your memory is a leaky bucket. Try spaced-repetition software or flashcards. After watching a lecture, write down all you can remember. When practicing, avoid looking at your resources.

Voilà! Those are my takeaways from the Ultralearning book. It changed how I approach learning. Instead of overloading my brain with information, I start by creating a plan and a list of learning resources.

For more learning content, check my takeaways from Pragmatic Thinking and Learning, one of my favorite books on the subject, and my advice on starting an Ultralearning project to become a Software Engineer.

Happy ultralearning!

Let's Go: Learn Go in 30 days

Do you want to learn a new programming language but don’t know what language to choose? Have you heard about Go? Well, let’s learn Go in 30 days!

From its official page, Go is “an open source programming language that makes it easy to build simple, reliable, and efficient software”.

Go is a popular language. According to the Stack Overflow Developer Survey, Go has been in the top 10 most admired/desired languages and the top 15 most popular languages since 2020.

Docker, Kubernetes, and a growing list of projects use Go.

Why choose Go?

Go reduces the complexity of writing concurrent software.

Go relies on two constructs: channels and goroutines. Out of the box, they give us a “queue” (a channel) and lightweight “threads” (goroutines) to write to it and read from it.

In other languages, we would need error-prone code to achieve similar results: threads, locks, semaphores, and so on.

Rob Pike, one of the creators of Go, explains channels and goroutines in his talk Concurrency is not parallelism.
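
To make this concrete, here’s a minimal sketch (my own example, not from the talk) of a goroutine writing to a channel while the main function reads from it:

package main

import "fmt"

func main() {
    messages := make(chan string)

    // A goroutine writes to the channel...
    go func() {
        messages <- "Hello from a goroutine!"
        close(messages)
    }()

    // ...while main reads from it. Receiving blocks
    // until a value arrives or the channel is closed.
    for message := range messages {
        fmt.Println(message)
    }
}

No threads, no locks, no semaphores: the channel does the synchronization.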

How to learn Go? Methodology

To learn a new programming language, library or framework, stop passively reading tutorials and copy-pasting code you find online.

Instead, follow these two principles:

1. Learn something by doing. This is one of the takeaways from the book Pragmatic Thinking and Learning. Instead of watching videos or skimming books, recreate examples and build mini-projects.

2. Don’t Copy and Paste. Instead of copy-pasting, read the sample code, “cover” it and reproduce it without looking at it. If you get stuck, search online instead of going back to the sample. For exercises, read the instructions and try to solve them by yourself. Then, check your solution.

“Instead of dissecting a frog, build one”.

― Andy Hunt, Pragmatic Thinking and Learning

Resources

Before starting to build something with Go, we can get a general overview of the language with the Pluralsight course Go Big Picture.

To grasp the main concepts, we can follow Learn Go with tests. It teaches Go using the concept of Test-Driven Development (TDD). Red, green, and refactor.
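
For example, the first step in the TDD cycle is a test. Here’s a minimal sketch of what a Go test looks like, with a hypothetical Sum function defined next to it for brevity:

// sum_test.go. Run it with: go test
package sum

import "testing"

// Sum is a hypothetical function under test
func Sum(a, b int) int {
    return a + b
}

func TestSum(t *testing.T) {
    if got := Sum(1, 2); got != 3 {
        t.Errorf("Sum(1, 2) = %d; want 3", got)
    }
}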

Other helpful resources are Go by Example and the Go documentation.

“To me, legacy code is simply code without tests.”

― Michael C. Feathers, Working Effectively with Legacy Code

Basic

Intermediate

Advanced

Fullstack

You can find more project ideas here: 40 project ideas for software engineers, What to code, Build your own x and Project-based learning.

Conferences

Conclusion

Go was designed to reduce the clutter and complexity of other languages. Go’s syntax is like C’s. Go is like C on steroids. Goodbye, C pointer arithmetic! Go doesn’t include features common in other languages, like inheritance or exceptions. Yes, Go doesn’t have exceptions.
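
Instead of exceptions, Go functions return error values for the caller to check explicitly. A minimal sketch of this style, with an illustrative divide function:

package main

import (
    "errors"
    "fmt"
)

// Instead of throwing an exception, divide returns an error value
func divide(a, b float64) (float64, error) {
    if b == 0 {
        return 0, errors.New("division by zero")
    }
    return a / b, nil
}

func main() {
    result, err := divide(10, 0)
    if err != nil {
        fmt.Println("error:", err)
        return
    }
    fmt.Println(result)
}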

However, Go is batteries-included. You have a testing and benchmarking library, a formatter, and a race-condition detector. Coming from C#, you might still miss assertions like the ones from NUnit or xUnit.

Aren’t you curious about a language without exceptions? Happy Go time!

You can find my own 30-day journey following the resources from this post in LetsGo.

canro91/LetsGo - GitHub

How to add an in-memory and a Redis-powered cache layer with ASP.NET Core

Let’s say we have a SlowService that calls a microservice and we need to speed it up. Let’s see how to add a caching layer to a service using ASP.NET Core 6.0.

A cache is a storage layer used to speed up future requests. Reading from a cache is faster than computing data or retrieving it from an external source on every request. ASP.NET Core has built-in abstractions for a caching layer using memory and Redis.

1. In-Memory cache

Let’s start with an ASP.NET Core 6.0 API project with a controller that uses our SlowService class.

First, let’s install the Microsoft.Extensions.Caching.Memory NuGet package. Then, let’s register the in-memory cache using the AddMemoryCache() method.

In our Program.cs file, let’s do this,

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();

builder.Services.AddTransient<ISlowService, SlowService>();

builder.Services.AddMemoryCache(options =>
//               ^^^^^
{
    options.SizeLimit = 1_024;
    //      ^^^^^
});

var app = builder.Build();
app.MapControllers();
app.Run();

Since memory isn’t infinite, we need to limit the number of items stored in the cache. Let’s use SizeLimit. It sets the number of “slots” or “places” the cache can hold. Also, we need to tell the cache how many “places” each entry takes when stored. More on that later!

Decorate a service to add caching

Next, let’s use the decorator pattern to add caching to the existing SlowService without modifying it.

To do that, let’s create a new CachedSlowService. It should implement the same interface as SlowService. That’s the trick!

The CachedSlowService needs a constructor receiving IMemoryCache and ISlowService. This last parameter will hold a reference to the existing SlowService.

Then, inside the decorator, we will call the existing service if we don’t have a cached value.

public class CachedSlowService : ISlowService
{
    private readonly IMemoryCache _cache;
    private readonly ISlowService _slowService;

    public CachedSlowService(IMemoryCache cache, ISlowService slowService)
    //     ^^^^^
    {
        _cache = cache;
        _slowService = slowService;
    }

    public async Task<Something> DoSomethingSlowlyAsync(int someId)
    {
        var key = $"{nameof(someId)}:{someId}";
        return await _cache.GetOrSetValueAsync(
        //                  ^^^^^
            key,
            () => _slowService.DoSomethingSlowlyAsync(someId));
    }
}

Set Size, Limits, and Expiration Time

Let’s always use expiration times when caching items.

Let’s choose between sliding and absolute expiration times:

  • SlidingExpiration resets the expiration time every time an entry is used before it expires.
  • AbsoluteExpirationRelativeToNow expires an entry after a fixed time, no matter how many times it’s been used.
  • If we use both, the entry expires when the first of the two times expires.
If parents used SlidingExpiration, kids would never stop watching Netflix or using smartphones!

Let’s always add a size to each cache entry. This Size tells how many “places” from SizeLimit an entry takes.

When the SizeLimit value is reached, the cache won’t store new entries until some expire.

Now that we know about expiring entries, let’s create the GetOrSetValueAsync() extension method. It checks first if a key is in the cache. Otherwise, it uses a factory method to compute and store a value into the cache. This method receives a custom MemoryCacheEntryOptions to override the default values.

public static class MemoryCacheExtensions
{
    // Make sure to adjust these values to suit your own defaults...
    public static readonly MemoryCacheEntryOptions DefaultMemoryCacheEntryOptions
        = new MemoryCacheEntryOptions
        {
            AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(60),
            // ^^^^^
            SlidingExpiration = TimeSpan.FromSeconds(10),
            // ^^^^^
            Size = 1
            // ^^^^^
        };

    public static async Task<TObject> GetOrSetValueAsync<TObject>(
        this IMemoryCache cache,
        string key,
        Func<Task<TObject>> factory,
        MemoryCacheEntryOptions options = null)
            where TObject : class
    {
        if (cache.TryGetValue(key, out object value))
        {
            return value as TObject;
        }

        var result = await factory();

        options ??= DefaultMemoryCacheEntryOptions;
        cache.Set(key, result, options);

        return result;
    }
}

Register a decorated service

To start using the new CachedSlowService, let’s register it into the dependency container.

Let’s register the existing SlowService and the new decorated service,

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();

// Before:
//builder.Services.AddTransient<ISlowService, SlowService>();

// After:
builder.Services.AddTransient<SlowService>();
//               ^^^^^
builder.Services.AddTransient<ISlowService>(provider =>
//               ^^^^^
{
    var cache = provider.GetRequiredService<IMemoryCache>();
    var slowService = provider.GetRequiredService<SlowService>();
    return new CachedSlowService(cache, slowService);
    //         ^^^^^
});

builder.Services.AddMemoryCache(options =>
{
    options.SizeLimit = 1_024;
});

var app = builder.Build();
app.MapControllers();
app.Run();

As an alternative, we can use Scrutor, an “assembly scanning and decoration” library, to register our decorators.
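
For example, assuming the Scrutor NuGet package is installed, the manual factory registration above could be reduced to something like this sketch:

builder.Services.AddTransient<ISlowService, SlowService>();
// Scrutor replaces the ISlowService registration with the decorator,
// passing the original SlowService to its constructor
builder.Services.Decorate<ISlowService, CachedSlowService>();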

Let’s use the Remove() method to delete cached entries when the underlying data changes. We don’t want to read outdated or deleted values from our cache by mistake.
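
For example, here’s a hypothetical Invalidate() method we could add to our decorator, a sketch assuming entries use the same key format as before:

public void Invalidate(int someId)
{
    // Remove the entry so the next call hits the real service again
    var key = $"{nameof(someId)}:{someId}";
    _cache.Remove(key);
}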

There are only two hard things in Computer Science: cache invalidation and naming things.

– Phil Karlton

From TwoHardThings

Unit Test a decorated service

Let’s see how to test our decorator.

We need a fake for the decorated service, and to verify it’s called only once after two consecutive calls to the decorator. Let’s use Moq to create fakes.

[TestClass]
public class CachedSlowServiceTests
{
    [TestMethod]
    public async Task DoSomethingSlowlyAsync_ByDefault_UsesCachedValues()
    {
        var cacheOptions = Options.Create(new MemoryCacheOptions());
        var memoryCache = new MemoryCache(cacheOptions);
        //                ^^^^^
        var fakeSlowService = new Mock<ISlowService>();
        fakeSlowService
            .Setup(t => t.DoSomethingSlowlyAsync(It.IsAny<int>()))
            .ReturnsAsync(new Something());
        var service = new CachedSlowService(memoryCache, fakeSlowService.Object);
        //            ^^^^^

        var someId = 1;
        await service.DoSomethingSlowlyAsync(someId);
        await service.DoSomethingSlowlyAsync(someId);
        //            ^^^^^
        // Yeap! Twice!
        
        fakeSlowService.Verify(t => t.DoSomethingSlowlyAsync(someId), Times.Once);
        // Yeap! Times.Once!
    }
}

Now, let’s move to the distributed cache.

2. Distributed cache with Redis

A distributed cache layer lives in a separate server. We aren’t limited to the memory of our application server.

A distributed cache is helpful when we share our cache server among many applications or our application runs behind a load balancer.

Redis and ASP.NET Core

Redis is “an open source (BSD licensed), in-memory data structure store, used as a database, cache, and message broker.” ASP.NET Core supports distributed caching with Redis.

Using a distributed cache with Redis is like using the in-memory implementation. We need the Microsoft.Extensions.Caching.StackExchangeRedis NuGet package and the AddStackExchangeRedisCache() method.

Now our CachedSlowService should depend on IDistributedCache instead of IMemoryCache.

Also, we need a Redis connection string and an optional InstanceName. With an InstanceName, we group cache entries under a prefix.

Let’s register a distributed cache with Redis like this,

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();

builder.Services.AddTransient<SlowService>();
builder.Services.AddTransient<ISlowService>(provider =>
{
    var cache = provider.GetRequiredService<IDistributedCache>();
    //                                      ^^^^^
    var slowService = provider.GetRequiredService<SlowService>();
    return new CachedSlowService(cache, slowService);
    //         ^^^^^
});

builder.Services.AddStackExchangeRedisCache(options =>
//               ^^^^^
{ 
    options.Configuration = "localhost";
    //      ^^^^^
    // I know, I know! We should put it in an appsettings.json
    // file instead.
    
    var assemblyName = Assembly.GetExecutingAssembly().GetName();
    options.InstanceName = assemblyName.Name;
    //      ^^^^^
});

var app = builder.Build();
app.MapControllers();
app.Run();

It’s a good idea to read our Redis connection string from a configuration file instead of hardcoding one.
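
For example, here’s a sketch assuming a hypothetical "Redis" entry under ConnectionStrings in our appsettings.json file:

// In appsettings.json:
// {
//   "ConnectionStrings": {
//     "Redis": "localhost:6379"
//   }
// }
builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = builder.Configuration.GetConnectionString("Redis");
});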

In previous versions of ASP.NET Core, we also had the Microsoft.Extensions.Caching.Redis NuGet package. It’s deprecated. It uses an older version of the StackExchange.Redis client.

Redecorate a service

Let’s change our CachedSlowService to use IDistributedCache instead of IMemoryCache,

public class CachedSlowService : ISlowService
{
    private readonly IDistributedCache _cache;
    private readonly ISlowService _slowService;

    public CachedSlowService(IDistributedCache cache, ISlowService slowService)
    //                       ^^^^^
    {
        _cache = cache;
        _slowService = slowService;
    }

    public async Task<Something> DoSomethingSlowlyAsync(int someId)
    {
        var key = $"{nameof(someId)}:{someId}";
        return await _cache.GetOrSetValueAsync(
            key,
            () => _slowService.DoSomethingSlowlyAsync(someId));
    }
}

Now let’s create a new GetOrSetValueAsync() extension method to use IDistributedCache instead.

This time, we need the GetStringAsync() and SetStringAsync() methods. Also, we need a serializer to cache objects. Let’s use Newtonsoft.Json.

public static class DistributedCacheExtensions
{
    public static readonly DistributedCacheEntryOptions DefaultDistributedCacheEntryOptions
        = new DistributedCacheEntryOptions
        {
            AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(60),
            // ^^^^^
            SlidingExpiration = TimeSpan.FromSeconds(10),
            // ^^^^^
            
            // We don't need Size here anymore...
        };

    public static async Task<TObject> GetOrSetValueAsync<TObject>(
        this IDistributedCache cache,
        string key,
        Func<Task<TObject>> factory,
        DistributedCacheEntryOptions options = null)
            where TObject : class
    {
        var result = await cache.GetValueAsync<TObject>(key);
        if (result != null)
        {
            return result;
        }

        result = await factory();
        await cache.SetValueAsync(key, result, options);

        return result;
    }

    private static async Task<TObject> GetValueAsync<TObject>(
        this IDistributedCache cache,
        string key)
            where TObject : class
    {
        var data = await cache.GetStringAsync(key);
        if (data == null)
        {
            return default;
        }

        return JsonConvert.DeserializeObject<TObject>(data);
    }

    private static async Task SetValueAsync<TObject>(
        this IDistributedCache cache,
        string key,
        TObject value,
        DistributedCacheEntryOptions options = null)
            where TObject : class
    {
        var data = JsonConvert.SerializeObject(value);

        await cache.SetStringAsync(key, data, options ?? DefaultDistributedCacheEntryOptions);
    }
}

With IDistributedCache, we don’t need sizes in the DistributedCacheEntryOptions when caching entries.

Unit Test a decorated service

For unit testing, let’s use MemoryDistributedCache, an in-memory implementation of IDistributedCache. This way, we don’t need a Redis server in our unit tests.

Let’s replace the MemoryCache dependency with the MemoryDistributedCache like this,

var cacheOptions = Options.Create(new MemoryDistributedCacheOptions());
var memoryCache = new MemoryDistributedCache(cacheOptions);         

With this change, our unit test now looks like this,

[TestClass]
public class CachedSlowServiceTests
{
    [TestMethod]
    public async Task DoSomethingSlowlyAsync_ByDefault_UsesCachedValues()
    {
        var cacheOptions = Options.Create(new MemoryDistributedCacheOptions());
        var memoryCache = new MemoryDistributedCache(cacheOptions);
        //                ^^^^^
        // This time, we're using an in-memory implementation
        // of IDistributedCache
        var fakeSlowService = new Mock<ISlowService>();
        fakeSlowService
            .Setup(t => t.DoSomethingSlowlyAsync(It.IsAny<int>()))
            .ReturnsAsync(new Something());
        var service = new CachedSlowService(memoryCache, fakeSlowService.Object);
        //            ^^^^^

        var someId = 1;
        await service.DoSomethingSlowlyAsync(someId);
        await service.DoSomethingSlowlyAsync(someId);
        // Yeap! Twice again!

        fakeSlowService.Verify(t => t.DoSomethingSlowlyAsync(someId), Times.Once);
    }
}

We don’t need that many changes to migrate from the in-memory to the Redis implementation.

Conclusion

Voilà! That’s how we cache the results of a slow service using an in-memory and a distributed cache with ASP.NET Core 6.0. Additionally, we can turn the caching layer on or off with a toggle in our appsettings.json file to register either the decorated or the raw service.
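
As a sketch, assuming a hypothetical "UseCaching" flag in our appsettings.json file:

var useCaching = builder.Configuration.GetValue<bool>("UseCaching");
if (useCaching)
{
    // Register the decorated service, like we did before
    builder.Services.AddTransient<SlowService>();
    builder.Services.AddTransient<ISlowService>(provider =>
        new CachedSlowService(
            provider.GetRequiredService<IDistributedCache>(),
            provider.GetRequiredService<SlowService>()));
}
else
{
    // Register the raw service, without any caching
    builder.Services.AddTransient<ISlowService, SlowService>();
}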

For more ASP.NET Core content, read how to compress responses and how to serialize dictionary keys. To read more about unit testing, check my Unit Testing 101 guide where I share what I’ve learned about unit testing all these years.

Happy caching time!

The Clean Coder: Three Takeaways

The Clean Coder is the second book in the Clean Code trilogy. It should be mandatory reading for any professional programmer. These are my main takeaways.

The Clean Coder isn’t about programming itself. It’s about the professional practice of programming. It covers everything from what professionalism means to testing strategies, pressure, and time management.

Professionalism

Your career is your responsibility, not your employer’s

Professionalism is all about taking responsibility.

  • Do no harm: Don’t release code you aren’t certain about. If QA or a user finds a bug, you should be surprised. Take steps to prevent it from happening in the future.
  • Know how it works: Every line of code should be tested. Professional developers test their code.
  • Know your domain: It’s unprofessional to code your spec without any knowledge of the domain.
  • Practice: It’s what you do when you aren’t getting paid, so you will be paid well.
  • Be calm and decisive under pressure: Enjoy your career; don’t work under pressure. Avoid situations that cause stress, like committing to deadlines you can’t meet.
  • Meetings are necessary and costly: It’s unprofessional to attend too many meetings. When a meeting gets boring, be polite and ask if your presence is still needed.

Say No/Say Yes

Say. Mean. Do

Professionals have the courage to say no to their managers. As a professional, you don’t have to say yes to everything, but you should find a creative way to make a yes possible.

  • There is no “trying.” Say no and offer a trade-off. “Try” is taken as a yes, and outcomes are expected accordingly.
  • You can’t commit to things you don’t control, but you can commit to some actions. For example, if you need somebody else to finish a dependency, create an interface and meet with the person responsible.
  • Raise the flag. If you don’t tell someone you have a problem as soon as possible, you won’t have anyone to help you on time.
  • Saying yes while dropping professionalism isn’t the way to solve problems.

Coding

It could be considered unprofessional not to use TDD

  • If you are tired or distracted, don’t code. Coding requires concentration, and you will only end up rewriting your work.
  • Be polite! Remember you will be the next one interrupting someone else. Use a failing test to remind you where you were after an interruption.
  • Debugging time is as expensive as coding time. Reduce your debugging time to almost zero by using TDD instead.
  • When you are late, raise the flag and be honest. It isn’t OK to keep saying you’re fine until the very end and then fail to deliver.
  • Be honest about finishing your work. The worst attitude is saying you’re done when you actually aren’t.
  • Ask for help. It’s unprofessional to remain stuck when there is help available.

Voilà! These are my main points from The Clean Coder. Do you see why I think it should be mandatory reading? Oh, I almost missed another thing: an estimate isn’t a date, but a range of dates.

If you want to read other summaries, check Clean Code: Takeaways and Pragmatic Thinking and Learning: Takeaways.

A beginner's Guide to Git. A guide to time travel

Do you store your files in folders named after the date of your changes? I did it back in school with my class projects. There’s a better way! Let’s use Git and GitHub to version control our projects.

$ ls
Project-2020-04-01/
Project-Final/
Project-Final-Final/
Project-ThisIsNOTTheFinal/
Project-ThisIsTheTrueFinal/

1. What is a Version Control System?

First, what is a version control system? A version control system (VCS) is a piece of software that keeps track of changes to a file or set of files.

A version control system, among other things, allows us to:

  • Revert a file or the entire project to a previous state
  • Follow all changes of a file through its lifetime
  • See who has modified a file

To better understand this concept, let’s use an analogy.

A version control system is like a time machine. With it, we can go backward in time, create timelines and merge two separate timelines. We don’t travel to historical events in time, but to checkpoints in our project.


2. Centralized vs Distributed

There is one distinction that sets version control systems apart: centralized vs distributed.

A centralized VCS requires a server to perform any operation on your project. We need to connect to a server to download all our files before we can start working. If this server goes down, we can’t work. Bye, bye, productivity! Team Foundation Server (TFS) from Microsoft is a centralized VCS.

But a distributed VCS doesn’t need a centralized server in the same sense. Each user has a complete copy of the entire project, and most operations are performed against this local copy. We can work offline. A two-hour flight without internet? No problem. For example, Git is a distributed VCS.

If you’re coming from TFS, I’ve written a Git Guide for TFS Users.

3. What’s Git anyways?

Up to this point, Git doesn’t need any further introduction. From its official page, “Git is a free and open-source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.”

How to Install and Setup Git

We can install Git from its official page. There we can find instructions to install Git using package managers for all major OS’s.

Before starting to work, we need some one-time setups. We need to configure a name and an email. This name and email will appear in the file history of any file we create or modify.

Let’s go to the command line. In the next two commands, replace “John Doe” and “johndoe@example.com” with your own name and email.

$ git config --global user.name "John Doe"
$ git config --global user.email johndoe@example.com

We can change this name and email between projects. If we want to use different names and emails for work and personal projects, we’re covered: Git can pick a different configuration depending on the folder we’re working in.
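
For example, here’s a sketch using conditional includes (available since Git 2.13), assuming our work projects live under ~/work/:

# In ~/.gitconfig
[includeIf "gitdir:~/work/"]
    path = ~/work/.gitconfig

# In ~/work/.gitconfig
[user]
    name = John Doe
    email = johndoe@company.com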

How to Create a Git repository

There are two ways to start working with Git. From scratch or from an existing project.

If we are starting from scratch, inside the folder we want to version control, let’s use init. Like this,

$ git init

After running git init, Git creates a hidden folder called .git inside our project folder. Git keeps everything it needs under the hood in this folder.

If we have an existing project, we need to use clone, instead.

# Replace <url> with the actual url of the project
# For example, https://github.com/canro91/Parsinator
$ git clone <url>

Did you notice it? The command is named clone because we’re getting a copy of everything the server has about the project.

How to Add new files

Let’s start working! Let’s create new files or change existing ones in our project. Next, we want Git to keep track of these files. We need three commands: status, add and commit.

First, status shows the pending changes in our files. Then, add puts files into the staging area. And commit creates a checkpoint in the history of our project.

# git status will show pending changes in files
$ git status
# Create a README file using your favorite text editor.
# Add some content
# See what has changed now
$ git status
$ git add README
$ git commit -m 'Add README file'

After using the commit command, Git knows about a file called README. We have a commit (a checkpoint) in our project we can go back to. Git has stored our changes. This commit has a unique code (a SHA-1) and an author.

We can use log to see all commits created so far.

# You will see your previous commit here
$ git log

What’s the Staging area?

The staging area or index is a concept that makes Git different.

The staging area is an intermediate area to review files before committing them.

It’s like making our files wait in a line before keeping track of them. This allows us to commit only a group of files or portions of a single file.

If you’re coming from TFS, notice you need two steps to store your changes. These are: add to include files into the staging area and commit to create a checkpoint from them. With TFS, you only “check-in” your files.
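
For example, to stage only some portions (hunks) of a modified file, we can use the patch mode of add:

# Interactively choose which hunks of README to stage
$ git add -p README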

How to Ignore Files

Sometimes we don’t need to version control certain files or folders. For example, log files, third-party libraries, files, and folders generated by compilation or by our IDE.

If we’re starting from scratch, we need to do this only once. But if we’re starting from an existing project, chances are somebody already did it.

We need to create a .gitignore file with the patterns of files and folders we want to ignore. We can use this file globally or per project.

There is a collection of gitignore templates on GitHub for all major languages and IDEs.

For example, to ignore the node_modules folder, the .gitignore file will contain

node_modules/

Git won’t track files matching the patterns in the .gitignore file. Run git status to see it.

How to write Good Commit Messages

A good commit message should tell why the change is needed, what problem it fixes and any side effect it might have.

Please, please don’t use “uploading changes” or anything like that in your commit messages.

Depending on our workplace or project, we have to follow a naming convention for our commit messages. For example, we have to include the type of change (feature, test, bug, or refactor) followed by a task number from a bug tracking software. If we need to follow a convention like this one, Git can format the commit messages for us.
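
For example, here’s a sketch using a commit message template, assuming a hypothetical ~/.gitmessage file with our convention:

# Create a template with the convention to follow
$ echo "[feature|bug|refactor][task-number]: Subject" > ~/.gitmessage
# Tell Git to preload it every time we commit
$ git config --global commit.template ~/.gitmessage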

Keep your commits small and focused. Work with incremental commits. And, don’t commit changes that break your project.

4. Branching and merging

What’s a Git Branch?

Using the time machine analogy, a branch is a separate timeline. Changes in a timeline don’t interfere with changes in other timelines.

Branching is one of the most awesome Git features. Git branches are lightweight and fast when compared to other VCS.

When starting a Git repository, Git creates a default branch called master. Let’s create a new branch called “testing”. For this, we need the command branch followed by the branch name.

# Create a new branch called testing
$ git branch testing
# List all branches you have
$ git branch
# Move to the new testing branch
$ git checkout testing
# Modify the README file
# For example, add a new line
# "For example, how to create branches"
$ git status
$ git add README
$ git commit -m 'Add example to README'

Now, let’s switch back to the master branch and see what happened to our files there. To switch between branches, we need the checkout command.

# Now move back to the master branch
$ git checkout master
# See how README hasn't changed
# Modify the README file again
# For example, add "Git is so awesome, isn't it?"
$ git status
$ git add README
$ git commit -m 'Add Git awesomeness'
# See how these changes live in different timelines
$ git log --oneline --graph --all

We have created two branches. Now, let’s see how we can combine the work from both.


How to Merge Two Branches

Merging two branches is like combining two separate timelines.

Continuing with the time travel analogy, merging is like when Marty goes to the past to get back the almanac and is about to run into himself. Or when Captain America goes back to New York in 2012 and ends up fighting the other Captain America. You get the idea!

Let’s create a new branch and merge it into master. We need the merge command.

# Move to master
$ git checkout master
# Create a new branch called hotfix and move there
$ git checkout -b hotfix
# Modify README file
# For example, add a new line
# "Create branches with Git is soooo fast!"
$ git status
$ git add README
$ git commit -m 'Add another example to README'
# Move back to master. Here master is the destination of all changes on hotfix
$ git checkout master
$ git merge hotfix
# See how changes from hotfix are now in master
# Since hotfix is merged, get rid of it
$ git branch -d hotfix

Here we have created a branch called hotfix and merged it into master. But we still have some changes on our testing branch. Let’s merge it to see what will happen.

$ git checkout master
$ git merge testing
# BOOM. You have a conflict
# Notice you have pending changes in README
$ git status
# Open README, notice you will find some weird characters.
#
# For example,
# <<<<<<< HEAD
# Content in master branch
# =======
# Content in testing branch
# >>>>>>> testing
#
# You have to remove these weird things
# Modify the file to include only the desired changes
$ git status
$ git add README
# Commit to signal conflicts were solved
$ git commit
# Remove testing after merge
$ git branch -d testing

If you’re coming from TFS, notice you need to move first to the branch you want to merge into. You merge from the destination branch, not from the source branch.

How to Move to the Previous Branch

In the same spirit of cd -, we can go to the previously visited branch using git checkout -. This last command is an alias for git checkout @{-1}. And, @{-1} refers to the last branch you were on.

# Starting from master
$ git checkout -b a-new-branch
# Do some work in a-new-branch
$ git checkout master
# Do some work in master
$ git checkout -
# Back to a-new-branch

Git master convention

By convention, the main timeline is called master. But starting from Git 2.28, when we run git init, Git looks for the configuration value init.defaultBranch to replace the “master” name. Other alternatives for “master” are main, primary, or default.
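
For example, to make every new repository start on a branch called main:

# New repos created with 'git init' will start on 'main'
$ git config --global init.defaultBranch main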

For existing repositories, we can follow this Scott Hanselman post to rename our master branch.

GitFlow Branching Model

Git encourages working with branches. Git branches are cheap and lightweight. We can create branches per task or feature.

There is a convention for branch creation, GitFlow. It suggests feature, release, and hotfix branches.

With Gitflow, we should have a develop branch where everyday work happens. Every new task starts in a separate feature branch taken from develop. Once we’re done with our task, we merge our feature branch back to develop.

5. GitHub: Getting our code to the cloud

Up until now, all our work lives on our computers. But, what if we want our project to live outside? We need a hosting solution. Among the most popular hosting solutions for Git, we can find GitHub, GitLab and Bitbucket.

It’s important to distinguish between Git and GitHub. Git != GitHub. Git is the version control system and GitHub is the hosting solution for Git projects.

No matter what hosting we choose, our code isn’t synced automatically with our server. We have to do it ourselves. Let’s see how.

How to create a repository from GitHub

To create a repository from a GitHub account, go to “Your Repositories” and click on “New”. We need a name and a description. We can create either public or private repositories with GitHub.

Then, we need to associate the GitHub endpoint with our local project. Endpoints are called remotes. Now we can upload or push our local changes to the cloud.

# Replace this url with your own
$ git remote add origin https://github.com/canro91/GitDemo.git
# push uploads the local master to a branch called master in the remote too
$ git push -u origin master
# Head to your GitHub account and refresh

6. Cheatsheet

Here you have all the commands we have used so far.

  • git init: Create a repo in the current folder
  • git clone <url>: Clone an existing repo from url
  • git status: List pending changes
  • git add <file>: Add file to the staging area
  • git commit -m '<message>': Create a commit with message
  • git log: List commits in the current branch
  • git log --oneline --graph --all: List commits in all branches
  • git branch <branch-name>: Create a new branch
  • git checkout <branch-name>: Switch to branch-name
  • git checkout -b <branch-name>: Create a new branch and switch to it
  • git merge <branch-name>: Merge branch-name into the current branch
  • git remote add <remote-name> <url>: Add a new remote pointing to url
  • git push -u <remote-name> <branch-name>: Push branch-name to remote-name

7. Conclusion

Voilà! We have learned the most frequent Git commands for everyday use. We used Git from the command line, but most IDEs offer Git integration through plugins or extensions. Now try to use Git from your favorite IDE.

If you want to practice these concepts, follow the repo First Contributions on GitHub to open your first Pull Request to an open-source project.

Your mission, Jim, should you decide to accept it, is to get the latest changes from your repository. After pushing, clone your project in another folder, change any file and push this change. Next, go back to your first folder, modify another file and try to push. This post will self-destruct in five seconds. Good luck, Jim.

Happy Git time!