What if you could throw containers at a cluster of nodes as easily as at a single node? This course will teach you how to control a cluster with all the simplicity of managing a single docker engine, thanks to Docker Swarm Mode.
The docker engine on a single node revolutionized how we run applications. But in production environments, you need more capacity and reliability than a single node can offer. In this course, Getting Started with Docker Swarm Mode, you'll learn how to control a cluster of nodes as easily as managing a single node. First, you'll discover how to set up a swarm, add nodes, and launch services that describe containers. Next, you'll explore how to route traffic to the cluster and within the cluster, and how to ensure containers recover from failures. Then, you'll learn how to roll out updates and deploy multi-application stacks. Finally, you'll cover how to leverage integrated health checking and protect sensitive information with secrets. By the time you're done with this course, you'll have the knowledge to deploy your own swarm.
Wes Higbee is passionate about helping companies achieve remarkable results with technology and software. He’s had extensive experience developing software and working with teams to improve how software is developed to meet business objectives. Wes launched Full City Tech to leverage his expertise to help companies delight customers.
Course Overview Hi. My name is Wes Higbee. Welcome to my course, Getting Started with Docker Swarm Mode. Just this morning, I was doing a TeamCity demo that requires a server to be up and running, that has a website, also multiple agents to perform work, and an external database as well. Now for demo purposes, I was able to spin this up quickly on a single machine with a Docker engine. But in production environments, infrastructure like this oftentimes needs to span multiple nodes so that we have the capacity for the agents to quickly get your work done. It'd be nice to keep the benefits of a single node when we move to multiple nodes, and that's what we're going to talk about in this course with Docker Swarm Mode. We'll start out by looking at how we can set up a swarm, how we can add multiple nodes to that swarm, and how we can schedule a service that spans across those nodes, a service being something like a website. We'll take a look at how we can scale that website up. We'll take a look at how we can access that website externally, so how we can route into that website to be able to find it on whatever node it's running on, and we'll also talk about how that website might use some internal database or maybe an internal API if it needs further information, and how that website can find that internal database or API if they're running on the swarm as well. We'll take a look at the power of reconciling a desired state to make sure your application is always running. We'll look at rolling updates, container to container networking. We'll take a look at using stacks to simplify how we deploy multiple applications with a single Docker compose file. We'll also see health checking. And then we'll wrap up with protecting sensitive information using secrets. By the time we're done with this course, you'll know how to set up your own swarm that can handle any work that you throw at it. Let's get started.
Adding Nodes Now that we've seen a single node, and we're familiar with how we launch a container, and we can relate that back to what we've done in the past with docker run and docker-compose, we now have docker service create, and even though things aren't exactly the same with the service, they're pretty similar. And at the end of the day, we get a container. Now that we understand that, let's move to multiple nodes and see how a service can scale across nodes. So we'll move from our single node setup with a couple of containers for a service into a model where we have multiple nodes, and we'll see how we can launch containers for different services across all those nodes. And the beautiful thing is there's not much to do other than add in the nodes, and then everything else is going to take care of itself with all the commands that we've been using thus far. If we have multiple nodes, then the swarm managers will just go ahead and transparently dispatch the work across multiple nodes instead of just a single node.
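As a rough sketch, the whole workflow comes down to a handful of commands. The address, token, and image below are placeholders, not values from the course:

```shell
# On the first node: initialize the swarm (this node becomes a manager).
# The --advertise-addr value is a placeholder; use the node's own IP.
docker swarm init --advertise-addr 10.0.0.10

# The init output prints a ready-made join command for the other nodes,
# roughly of this shape (the token here is a placeholder):
docker swarm join --token SWMTKN-1-xxxx 10.0.0.10:2377

# Back on the manager: confirm the nodes are part of the swarm...
docker node ls

# ...and create a service; the managers dispatch its tasks across nodes.
docker service create --name web --replicas 3 nginx
```

Nothing about `docker service create` changes when nodes are added; the scheduling across nodes happens transparently.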
Ingress Routing and Publishing Ports We've already seen how we can access services, for example, a website or a Web API. We've seen how we can access these when we have deployed them to our cluster, even when we have multiple instances of a Web API up and running. We can see how we can access these via a published port. In this module, though, I want to step behind the scenes and talk a little bit more about how this works and give you a good understanding of how you get traffic into a swarm, because that's a major consideration: how you're going to bring in outside traffic and allow that outside traffic to access the applications or services that you have running on your cluster.
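As a minimal sketch of the published-port mechanism (the node address below is a placeholder): publishing a port exposes the service on that port on every node in the swarm, and the ingress routing mesh forwards each request to a node that is actually running a task.

```shell
# Publish container port 80 on port 8080 of every node in the swarm.
docker service create --name api --replicas 2 --publish 8080:80 nginx

# A request to any node's address on 8080 reaches one of the tasks,
# even if no task is running on the node that received the request.
curl http://10.0.0.10:8080
```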
Reconciling a Desired State Maintaining a desired state is a big part of the value proposition when moving to Docker Swarm. So in addition to being able to manage multiple nodes and being able to throw work at those nodes, we get to manage that work from the perspective of reconciling a desired state. So instead of explicitly specifying what it is we want to run, we describe it in a declarative style. It's kind of like the captain of a ship that sets a course to go from maybe New York City to London. The captain sets sail and points at London and then periodically checks the direction of the ship. If the ship has turned slightly, then the captain can readjust the course of the ship, because from time to time the ship will get out of sync with the direction that it should be headed. But, as long as we correct course frequently enough, we can make sure we are always close enough to the desired state to get where we need to go. And when things get off, we can quickly fix them. In this module, I want to look at more examples to make sure that we drive home the point of what exactly it means to reconcile a desired state.
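A quick sketch of what this reconciliation looks like in practice (service and image names are illustrative):

```shell
# Declare the desired state: 3 replicas of a web service.
docker service create --name web --replicas 3 nginx

# Simulate a failure on one node by force-removing one of its containers.
docker rm -f $(docker ps -q --filter name=web | head -n 1)

# Moments later the swarm notices the drift from the desired state and
# starts a replacement task; the failed task shows up in the history.
docker service ps web
```

We never re-run the create command after the failure; the swarm corrects course on its own, like the captain periodically adjusting the heading.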
Rolling Updates A natural extension of the topic of reconciling a desired state is to answer the question: how do we update our applications? So if we have a service that's deployed running version 5 of our application, and we want to deploy version 5.1 or version 6, how do we go about doing that? And before I tell you, do you want to take a guess?
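The answer, sketched with a hypothetical image name and tags: we just update the desired state, and the swarm rolls tasks over to the new version according to the update settings.

```shell
# Service running a hypothetical version 5.0 image; update settings say
# replace one task at a time, pausing 10 seconds between replacements.
docker service create --name app --replicas 4 \
  --update-parallelism 1 --update-delay 10s \
  myorg/app:5.0

# Declare the new desired state; the swarm performs the rolling update.
docker service update --image myorg/app:5.1 app

# Watch old tasks shut down and new ones come up during the rollout.
docker service ps app
```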
Deploying with Stacks This is probably my favorite module of this course. So far we have been typing out these long docker service create statements that have a lot of flags, and we're only beginning, so we're only using a subset of the possible flags that we could be using. We're also running docker service update quite a bit to make changes to our services. In this module, we are going to see how we can put all those flags into a file, a config file, a .yml file just like docker-compose, in fact using the compose v3 format. And we'll see how we can use this to not only create services but update services, all for the convenience of a single file that specifies the desired state or definition for each service, so that we don't need to deal with things like passing flags to create versus passing flags to update if the service already exists. This is a huge timesaver, so let's get started.
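As a minimal sketch (stack name, file name, and port mapping are illustrative), the flags move into a compose v3 file and one command handles both create and update:

```shell
# Write a minimal compose v3 file describing the desired state.
cat > stack.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx
    ports:
      - "8080:80"
    deploy:
      replicas: 3
EOF

# Create the services in the file as a stack named "demo".
docker stack deploy -c stack.yml demo

# After editing stack.yml, rerunning the exact same command updates the
# existing services in place: no separate create vs. update flags.
docker stack deploy -c stack.yml demo

# List the services that belong to the stack.
docker stack services demo
```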
Health Checking Obviously Docker takes care of restarting a failed container if our application exits, for example. But just knowing that a process is running doesn't always tell us if that process is healthy. In the case of a website, we may have a process up and running, but the web server might not yet be ready to accept requests, and if we start routing traffic to that web server before it can handle requests, then we're going to start dropping that traffic. So there's a level of knowing that a service is healthy, and specifically an instance of that service in the form of a task, and actually a container. We can know that it's healthy by performing more advanced checks than just whether or not the process is running, and that's what we refer to as health checking. That's what we'll see in this module, and we'll see how it integrates into everything else we've seen thus far in the course, so that we can provide robust health checks of our application and virtually eliminate the possibility that we send traffic somewhere before that destination is ready to handle it.
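A hedged sketch of attaching a health check at service creation (the check command assumes curl exists inside the image, and the URL is a placeholder):

```shell
# Probe the web root every 10 seconds; after 3 consecutive failures the
# container is marked unhealthy and the task is replaced.
docker service create --name web \
  --health-cmd "curl -f http://localhost/ || exit 1" \
  --health-interval 10s \
  --health-retries 3 \
  nginx

# docker ps shows the health status (starting/healthy/unhealthy)
# alongside each container's state; tasks only receive routed traffic
# once their container reports healthy.
docker ps --filter name=web
```

Health checks can also be baked into the image with a HEALTHCHECK instruction in the Dockerfile, with these flags overriding it per service.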