Intro to Docker - Part 2 (Networking, Docker Compose)
Hey, I’m Dalia and this is part 2 of my Intro to Docker video. In part 1, we covered the fundamentals of Docker, including why you’d use Docker, what images are, and what containers are. Then we created a container for a simple hello world application. If you haven’t watched part one, I highly recommend going back and watching it; I’ve included a link in the description. In this video, I want to build on that knowledge and explain a few more concepts that are important to understand when using Docker, especially around networking, which quickly comes up as soon as your application starts using multiple containers. I also want to explain why and how to use Docker Compose.
But before we talk about networking and Docker Compose, I want to show you how to containerize a more advanced application than our Hello World, one that uses more than one container. This will lay the groundwork needed to discuss networking. We'll start by going back to the web application we talked about in the previous video. Let’s say our application is a form that takes in user input and saves it to the database.
In my case, my application is using Tomcat as its application server and MySQL as its database. Don’t worry too much about what exactly I’m using as much as the fact that I’m using different technologies in my application. Originally, when I started developing this application, I had to manually install Tomcat on my machine and then also install MySQL, which was a bit of a pain. I also helped a few developers run the application on their machines, which convinced me that I could really use Docker to minimize the setup necessary to get my application up and running.
Let’s go ahead and do that! We’ll containerize this application by first creating a database container. Then, we'll create a container for my application code and everything it needs to run, including the app server and JVM. Let’s switch to our IDE and, before we do anything, make sure Docker is up and running. I’ll run docker version. The output looks good. Now we’ll create the MySQL container that my application will use to persist its data. You can use these same basic steps for any database you want to set up.
Like we mentioned in the previous video, in order to create a container, we need an image. Luckily, for most databases, there are official images available on DockerHub that you can simply pull and use right away. We’ll pull the official mysql image by running the docker pull command and passing in mysql. At this point, if you want to use a specific version of mysql, you can specify a colon and a version. I’ll leave it off which will pull the latest mysql image. Now that I have my image, I can create a container from it. We’ll create a container and give it a name - app-db.
I want to start this container in detached mode so our terminal isn’t taken over by the database logs. If you’ve ever worked with databases, you’ve probably had to do some configuration, like setting a username or a password. In Docker, you do so by passing in environment variables when you first create your container. For example, for MySQL databases, we are required to set a root password for the database or it won't come up successfully.
I can do that by specifying the -e option, which stands for environment variables, and passing in the environment variable’s key and value. I could also use an environment file to specify my environment variables, but in this case, I’ll keep things simple and pass them in directly. I also don’t want to have to initialize a database for my tables myself, so I’ll pass in the MYSQL_DATABASE environment variable and give my database a name like myDB.
Finally, I need to specify the image that will be used to create this container, which is the mysql image we just pulled. Let’s run this command. We see our container’s ID, which tells us that our container was created and started.
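Putting those options together, the full command might look roughly like this - the root password value here is just a placeholder, and myDB matches the database name I'll reference later:

```
docker pull mysql
docker run -d --name app-db \
  -e MYSQL_ROOT_PASSWORD=my-secret-pw \
  -e MYSQL_DATABASE=myDB \
  mysql
```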
We also see it listed in the list of containers when we run docker ps. Now just because the container came up, doesn’t mean the process in this container was able to start successfully which is why I always like to check the container logs. I'll do that by running the docker logs command and passing in the container name.
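The two checks look something like this:

```
docker ps            # confirm the app-db container is listed and up
docker logs app-db   # check the MySQL startup output
```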
I can also use the ID if I want. Our container logs show us that our database has started successfully and is ready for us to persist to it. I want to pause here and mention that there is some debate about whether there are benefits to containerizing your databases for applications running in production. There are lots of valid arguments on both sides, which you’ll want to consider when you’re ready for your application to go to production. However, for development environments where data integrity isn’t a concern, containerizing your database can be very beneficial because of how easy and fast it is to set up a database with Docker.
Another point I want to make is that with this current setup, my data will be wiped away whenever I recreate my container. For my development environment, that’s fine. But if you’re thinking about using databases in production and are wondering how that works, Docker allows you to store data long term using volumes, which let you store your data on the host filesystem. Now that we have our database container up and running, let’s create our second container, which will have our web application. We'll start with similar steps to what we did with the Hello World container but build on top of that. Before we start, let’s take a quick look at our application code.
Our application has a JSP file containing the form that the user sees when they navigate to the app. Then once the user submits a form, myServlet is called which creates an object containing the information passed in by the user and persists it to the database. The persistence.xml file specifies the URL that the application is using to connect to the database. This will be important later.
Now, let’s containerize this application. In order to do that, we first need to create an image. We'll do so by creating a Dockerfile that contains the instructions on how to create an image with our application code, app server, and JVM. The first line of our Dockerfile will define the base image that we’ll build our own image on top of.
For our hello world image, all we needed was a JVM. This time, our application doesn’t just need a JVM, it also needs an application server. For Tomcat, there is an official image available that includes both the app server and the JVM. I’ll specify the tomcat image name, and this time I’m going to be a bit more opinionated about my version and use Tomcat version 10 with version 11 of the JDK.
Next, I need to add the application code from my host machine into the image filesystem. I’m going to rebuild our war file and make sure it’s ready to go into our image. Our war file is built and is available under the target directory.
We can go ahead and use it in our Dockerfile. For my second argument, I need to specify the target directory where the war will be placed in the image. For an app like hello world, I was able to place the application files basically wherever I wanted on the image filesystem. However, since I’m building this image on top of a Tomcat base image, I need to drop my files in the directory where Tomcat expects them in order to run the application when Tomcat starts in the container.
For Tomcat, that location is /usr/local/tomcat/webapps/. Finally, we need the command that we want to run when the container starts. For Tomcat, you can invoke the catalina script, which starts the Tomcat server. So, this Dockerfile basically says: build my image by taking a base image that includes Java 11 and Tomcat 10, drop my web application war in the appropriate Tomcat directory, and then start Tomcat. Let’s build this image. We’ll run the docker build command and give our image a name, “my-web-app”, with a 1.0 tag. Then, I'll specify a '.' for my current directory.
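As a rough sketch, the Dockerfile described here could look like the following - the exact Tomcat image tag and the war file name depend on your setup and aren't spelled out in the video, so treat them as assumptions:

```
# Base image with Tomcat 10 and JDK 11 (the exact tag may differ)
FROM tomcat:10-jdk11

# Copy the built war into the directory Tomcat serves applications from
COPY target/MyWebApp.war /usr/local/tomcat/webapps/

# Start the Tomcat server when the container starts
CMD ["catalina.sh", "run"]
```

And the build command:

```
docker build -t my-web-app:1.0 .
```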
Looks like our image was created successfully. Now we’re ready to create our application’s container. We’ll give our container a name - “app” and start it in detached mode. Then finally specify the image name that this container is based on. Looks like my container has started. If we take a look at the logs, looks like the server started successfully.
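For reference, creating and checking that container looks something like this:

```
docker run -d --name app my-web-app:1.0
docker logs app
```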
Let’s go to our browser and try to reach our application at localhost:8080/MyWebApp/. Well, we actually get an error, and if you’ve ever worked on web applications, you probably dread seeing this page. This is a great time to start talking about how networking works in Docker. In the previous video, we talked about how containers are well-isolated units, which is why you can run multiple containers without them trampling all over each other. This concept of isolation especially applies to networking.
Let’s go back to our original application stack without Docker. On my machine, there are a number of ports that processes and services run on and receive requests at. In my original environment, the Tomcat server was installed directly on my machine and ran on port 8080, which is the default port for Tomcat. This allowed me to reach my application by making an HTTP request to localhost port 8080. Now, in my new environment that uses Docker, there is an extra layer of isolation with my containers.
Meaning that while Tomcat has started and is listening on port 8080 inside our application container, Docker doesn’t actually know what’s going on in the container. To visualize this, I like to think back to the ship carrying a bunch of containers without knowing anything about the containers' contents. In order to change that, there needs to be some extra information about the containers. So, there are a couple of extra details we need to give Docker in order to make the container visible outside of Docker. First, we need to tell Docker that there is a process in the application container that listens on a specific port at runtime. We do this by exposing the port. For example, for our application container, we need to expose port 8080 since Tomcat will be listening on that port.
When we expose the port, we’re exposing it to Docker, which allows you to communicate between Docker containers, but that still doesn’t open it up outside of Docker. In order to make your ports available outside of Docker, you need to tell Docker to publish your port and bind it to a port on the host machine. Let’s see what this looks like in action. First, let’s expose the 8080 port where the Tomcat server will be running. I can do so by including the EXPOSE instruction in my Dockerfile along with the port I want to expose. Since we made adjustments to the Dockerfile, we need to rebuild our image.
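That's a one-line addition to the Dockerfile:

```
# Document that Tomcat listens on port 8080 inside the container
EXPOSE 8080
```

Then we rebuild with the same docker build command as before.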
Now Docker will know that this container has a process listening on port 8080. Since Docker knows about this port, we can now tell Docker to bind it to a host machine port. We’ll want to do so when we create a new container. But before we do that, let’s do a little cleanup and get rid of the container we created earlier since we will no longer be using it. We’ll run the docker rm command with the --force option, which will stop the container and remove it. You could also run docker stop first instead of passing the --force option. By the way, if you’re ever wondering what Docker options are available to you, you can use the --help option.
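The cleanup and help commands look like this:

```
docker rm --force app   # stop and remove the old app container in one step
docker run --help       # list the options available for docker run
```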
Okay, now that we’ve gotten rid of our old application container, let’s recreate it with the same name, but this time I’ll pass in the -p option, which stands for publish. We need to specify the port on the host machine to bind to my container’s port, then add a colon and specify the container port that the host port will be bound to.
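With the publish option added, the run command looks roughly like this:

```
docker run -d --name app -p 8080:8080 my-web-app:1.0
```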
This tells Docker to make port 8080 available outside of Docker and binds that port to the host machine’s port 8080. We’ll finish writing our command and create our new app container. Now, let’s try to reach the application again… Awesome! Looks like we can now reach our application running in our container at localhost port 8080! I want to make a quick side note here. For this container, we bound the container’s port to the same port number on the host machine. But we didn’t have to do that. I could have chosen any other host machine port to bind to my container port.
For example, say I want to run a second instance of my application. When my second app container starts, the Tomcat server in the container will be running on the default 8080 port exposed in my Dockerfile. The fact that both containers are running on port 8080 is not a problem in this case because they’re running within their own isolated containers.
However, if I want to make the second app container available outside of Docker, I need to bind the container port to an open host port. In fact, if I try to create a second container and pass in the same 8080:8080 port binding, I’ll get an error because port 8080 on the host machine is already allocated to my first app container. Instead, I need to bind my container port to an open host port, for example, 8081. We’ll do that by executing the same docker run but this time with 8081 as the host port. By the way, I always have to look up which port gets listed first for the -p option. So,
just remember that you specify the host machine port first then you specify the container port you want that host machine port to be bound to. Now if we run this command, we see that our second app container was created successfully. And now I can access a second instance of my application using the 8081 port URL. This is also a good demonstration of how easy Docker makes it to bring up several instances of your application and scale your containers up and down. If you ever want to see your port bindings, you can run docker ps and see them there.
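For the second instance, only the host port changes - the container name app2 here is just a placeholder, since the video doesn't mention what it's called:

```
docker run -d --name app2 -p 8081:8080 my-web-app:1.0
docker ps   # the PORTS column shows the port bindings for each container
```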
I don’t really need two instances of my application so I’ll go ahead and delete my second container. You can refer to containers using their name or id. I’ll use the container’s id this time - I can even use as little as the first 4 characters of the ID. And looks like we’re back to having our two containers. We successfully made our application reachable to the outside world. However, there is another aspect of networking that we haven’t talked much about yet which is communication between Docker containers. In fact, our application isn’t fully working right now. If I try to persist any data, the request
won’t go through. That’s because my application container can’t reach the database container. Let’s see why this is the case and learn more about networking in the process. In Docker, there are several types of networks that allow you to create secure networks for communication between containers. Bridge networks are the most commonly used type. In fact, if you create a network and don’t specify a type, a bridge network will be created for you by default. I’d like to set up a bridge network to handle communication between my two containers.
I can do that by calling the docker network create command and passing in a network name, “app-network”. The network is created. Let’s take a look at all the networks we have now. You’ll notice the new network in this list. You’ll also notice three other networks which Docker creates by default: a host network, which removes the network isolation you have between
a container and its host machine; a none network, which disables all networking; and a bridge network, which containers are attached to by default. The recommended practice is to create a dedicated network for containers that are meant to be connected to each other instead of using this default bridge network, which is shared between all containers on the same Docker daemon. Now that we have our network, we need to connect both our containers to it. We’ll start with the app-db container. We’ll call docker network connect, specify the network’s name, then the container that we want to connect to the network. Our app-db container is now connected to our app-network.
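Those network commands look like this:

```
docker network create app-network          # create a user-defined bridge network
docker network ls                          # list all networks, including the defaults
docker network connect app-network app-db  # attach the database container to it
```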
Next, we need to connect the app container to the network. However, before we do that, we need to change how the app connects to the database by changing the URL. Let’s go to the application file that specifies the database URL.
In our original setup without Docker, our app was using localhost to access the MySQL database installed directly on the machine. Instead, we want to connect my application to the database container. We can do so by referencing the name of the database container, which is app-db. The reason this will work is that the two containers will be connected to their own dedicated network, where Docker resolves container names to the right addresses.
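So the connection URL in persistence.xml changes roughly like this - the exact URL isn't shown in the video, so the format here (MySQL's default port 3306 and the myDB database name) is an assumption:

```
# before: MySQL installed directly on the host
jdbc:mysql://localhost:3306/myDB

# after: MySQL running in the app-db container on the shared network
jdbc:mysql://app-db:3306/myDB
```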
And now that we’ve made a change to the application code, we need to update our image with the new code. So, we’ll rebuild the application file that goes into our image, then I’ll rebuild my image and delete my old container, since I’ll be recreating it with the new image. There are several ways to make the process of updating your application code easier and more automated when working with Docker - I talk about that in my Docker in IntelliJ IDEA video - but for now, we’ll take the long way. As I’m recreating my app container, I can pass in the network I want my container to use along with the docker run command instead of having to call docker network connect separately. You can do it either way. Now that the app container is connected to the app-network, we’ve got both containers communicating with each other. Let’s see if our application is fully working and can reach the database.
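For reference, the rebuild-and-recreate steps might look roughly like this, assuming the Maven build used elsewhere in the video and the names from earlier:

```
mvn install                        # rebuild the war
docker build -t my-web-app:1.0 .   # rebuild the image with the updated code
docker rm --force app              # remove the old app container
docker run -d --name app -p 8080:8080 --network app-network my-web-app:1.0
```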
Awesome! My application container is now properly connected to the database container and I can persist my data successfully!
By now, you’ve probably lost track of the exact commands we need to execute to get our application up and running - I know I have. If someone asked me for these commands, I’d be going back through all of my command line history trying to figure out which ones to provide. Thankfully, I don’t actually need to do that if I use Docker Compose. Docker Compose is a tool that allows you to define your application services and basically lets you codify your run commands. Let’s see how we can use Docker Compose for our application. First, we need to create a YAML file called docker-compose.yml.
If you’re not familiar with YAML, it’s a language commonly used for configuration and is made up of key-value pairs. It’s an alternative to XML or JSON. The first key-value pair we need to specify is the version of the Compose file format that this file will be using. I'll specify the “version” key and set the value to “3”, since it's currently the latest major version of the Compose file format. Next, we define the services that make up the application.
In our case, we have two services, one is the database service and the other is the app service. BTW, when you’re writing YAML files, be very careful about your indentation. In this case the IDE is helping me out and putting in the right indentation but if you’re making any edits, pay attention to that. I’ve been bit a few times by that. When I write my docker compose files, I like to look at the run command I used to get my container working. Let’s take a look at our run command and see how we codify that in my docker compose file. First, I’ll call my service app-db, that way my application can use the same URL to access the database and I don't have to update my application.
Then, I’ll list the image that this service uses, which is mysql. Then, I’ll list the environment variables that I want to pass when the container starts. And that’s all I need for the first service. For my second service, I'll call it “app”. You can either specify the name of the image we built earlier, my-web-app, or, what I like to do in my dev environments to simplify the environment setup, have Docker Compose build the image for me if it doesn’t exist. I do so by specifying build colon '.', which will look
for a Dockerfile in the current directory and build the image before it starts the container. Then, I’ll list my port bindings that will make my app service available outside of Docker. Then, I’ll declare a dependency on the db service so that my application service doesn’t come up until my database service has started. I'll show the full file in a moment. Now, we’re ready to use our docker-compose file to bring up our application. Before I do that, I’ll do some cleanup and delete the containers we created earlier using the docker run commands. This will help us avoid port
binding errors when docker compose tries to allocate port 8080 on the host machine. Then, we’ll run the docker-compose up command which will bring our application up by creating and starting our two containers. In the output, we see that docker compose created two containers for us. The default name of the container comes from the base directory of our project underscore then the name of the service.
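Putting all of that together, here's roughly what the compose file and the commands we just ran might look like - the root password value is again just a placeholder:

```
version: "3"
services:
  app-db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: my-secret-pw   # placeholder value
      MYSQL_DATABASE: myDB
  app:
    build: .          # build the image from the Dockerfile in this directory
    ports:
      - "8080:8080"   # publish the app outside of Docker
    depends_on:
      - app-db        # start the database service first
```

```
docker rm --force app app-db
docker-compose up
```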
If we run docker ps, we now see the two containers that Docker Compose created for us without us executing any docker run commands. Let’s see if our application has started and is running successfully using Docker Compose… And if we persist our data, it works too. I want to make a quick note here: if you noticed, the new containers that Docker Compose created were able to talk to each other without us having to create a new network and connect those containers to it. That’s because Docker Compose automatically creates a bridge network for the application services you’ve defined in your docker-compose.yml file
and attaches your containers to it. So you don’t have to manually do it yourself. In fact, if we run docker network ls, we’ll see the network that docker compose created for the two containers. I want to end my video by showing you why everything we covered here matters.
The original problem I had was that the process to get my application running was long and tedious. So I decided to set up my application to work with Docker to simplify this process. Let’s say I just got a request from Helen, a developer on my team who wants to work on my application. First, I need to help her set up her development environment and get the application building and running locally. Let’s
see what that looks like now that I’ve got my application working with Docker. Dalia: Hey Helen Helen: Hey Dalia. How are you? Good. How are you?
I'm alright. Thank you. I'm looking forward to this. Yeah, let's start by cloning my web application repository. I'm going to send you the link on Slack. Amazing. Okay. Awesome, can you clone this repo? I certainly can. I will clone in IntelliJ IDEA. JetBrains Toolbox. That's correct. Open link. Ok. Clone Repository. Yep. Checking the directory. All looks good. I shall press Clone. Let's full screen IntelliJ IDEA as well.
Now, do I trust Dalia? I think I do. So let's trust this project. Yes. Awesome and do you have Docker running? I do. I do. It's not showing up there in my little icons but I promise you, it is running. All right. We'll find out soon enough. Great! Okay, you'll want to build the application war however you usually do that.
Okay. Let's do it this way. Let's do build, build artifacts and my web app war build. And previously in this video I used the terminal with Maven install. You can do whatever you'd like. It looks like Helen's preferred method is to do it through IntelliJ IDEA. Awesome! So now let's go and run docker compose up.
Okay. Should I do that in my terminal? Yes, let's do that. You could also do it from IntelliJ IDEA, but let's do it through the terminal. All right. I will use the terminal. So Option+F12 for me, docker-compose up. Let's start that running.
All right. Looks like that's off. Great. I like to give it a few seconds. Sure. I think it's done.
Yeah. I think that looks good. Let's try to reach the application so you can go to your browser and try to go to localhost:8080/MyWebApp/ Okay. localhost:8080/MyWebApp/. Here we go. There it is! Oh, it's an important form. Well, I will fill it in then.
My name is Helen. That one is easy. What is your favorite fruit? That one is not easy. Let's go with grapes and submit. Brilliant! It's working Dalia! And as you can see, because I've set up my application to work with Docker, Helen didn't have to worry about manually installing Tomcat or installing MySQL databases, or even getting just the right Java version. And you'll also notice that she's on macOS and I'm on Windows. And thankfully I did not have to worry about showing her how to get any of these requirements on her operating system. Anyways, thank you so much, Helen. And I'm looking forward to collaborating with you more.
Thank you, Dalia. I'm really excited and that was super easy. So thanks for walking me through it. Thanks, bye! See ya! And this wraps up our second Intro to Docker video. I hope you’ve found this video helpful and now have a better understanding of Docker. For your next video, check out my Docker in IntelliJ IDEA video I've linked in the description to learn how to use IntelliJ IDEA to make developing applications with Docker easier. Thanks for watching!