How to Build and Run Node Apps with Docker and Compose

Hello, DockerCon! My name is Kathleen Juell and I am a developer at DigitalOcean. Primarily I work on our Community web property, doing a mixture of full-stack application development and managing our pipeline and deployment process via Concourse to Kubernetes. Today I'm here to talk to you about how to build and run Node applications with Docker and Compose.

But first, just a bit of background on this talk and its topic by way of DigitalOcean's Community. Some of you may be familiar with the Community platform already: we have a wide range of tutorials, questions and answers, and developer tools designed to spread awareness and knowledge of open source technologies. The basis for this talk is actually a series that lives on Community called From Containers to Kubernetes with Node.js. The motivation for that series was my sense that there is often a divide between guides that discuss containerization and deployment and those that discuss application development, so I was interested in how a full-stack series would do, a series that brought those two things together. I also want to quickly mention that we now have a wonderful ebook version of the series that you can download for free from Community. Finally, one more resource: I have a series called Rails on Containers that also lives on Community and aims to do much the same thing as the Node series, taking a full-stack approach to application development and containerization.

OK, so today I'm here to talk to you about building and running Node apps with Docker and Compose. First I'll talk a bit about what goes into building an image, then I'll talk about how to wire up a development setup with Compose, and finally I'll cover some things that you will want to think about when you're setting up for deployment.

But first, some hot takes. In all seriousness, before launching into any specifics it's worth spending a minute getting reacquainted with the problem containers are designed to address. This slide, for example, shows a Flickr-like application that has a user management piece, a photo management piece, a database adapter, and a front-end piece. The application gets loaded as a whole onto a virtual machine. Scaling therefore involves provisioning more machines, and if any part of the application changes, the whole thing needs to be reloaded. Things like application-level upgrades become tedious and error-prone, with lots of "works on my machine" ambiguity thrown in for fun.

In a microservice-based architecture, however, we split the app up into microservices: collections of loosely coupled services that each perform a single function. Containers are the underlying foundation for this architecture, and they make it possible to manage groups of identical workloads, or deployments, and endpoints that expose groups of containers, or services. As you've probably gathered from this image, these are key concepts when you're working with container orchestrators like Kubernetes.

So today we're going to talk a bit about what you can do with containers specifically. Before we get into any code, it's worth taking a minute to talk about containers and virtual machines together. A good example of a virtual machine would be a DigitalOcean Droplet, a remote server.
These servers allow you to run multiple whole systems on a single physical host. A hypervisor manages the multiple running machines and shares hardware resources between them. This is great because it allows for application sandboxing and versioning, and it's way more efficient than running several physical hosts, but you still have some bloat, like a full operating system per machine.

Containers are like virtual machines, but they provide some additional advantages. They accomplish the same goals of sandboxing and consistent, reproducible runtime environments much more efficiently. Running containers means that you don't need a full operating system, just a container runtime like Docker.

Container image files are generally much smaller than virtual machine files. The spin-up time for a container is generally much quicker. Containers tend to be more performant than virtual machines. And finally, there are lots and lots of pre-built, pre-configured images available that are officially maintained: Python, Node.js, and so on.

Now that we've covered some of that higher-level material on containers, let's drill into how you might build an image for a Node application in particular. First you want to think about your base image and some of the choices that go with that. You can find a list of officially maintained images, along with explanatory resources, on Docker Hub. In our case we're going to use an Alpine image for our Node base, and using an Alpine or slim image is a great way to minimize the size of your final image. Some things are worth keeping in mind when you think about Alpine images: package availability and compatibility with other systems might be different from what you expect. Alpine uses the musl C library, while many other distros, like Ubuntu, use glibc, so depending on your needs that could complicate things for you. There have also been differences in how these libraries handle DNS resolution, which can really matter in a Kubernetes environment, for example. For this image we're going to use the Alpine Node base because none of those things affect us.

Once we set our base with the FROM instruction, we can add our container-level dependencies. The application that we're building here doesn't have any additional requirements, but let's say we wanted to add a package so we could interact with our application files on the container: we could then add a RUN instruction with apk add to install that package. In cases where we need multiple packages, we can chain these dependencies into a single RUN instruction. This will help us keep our image layers to a minimum and decrease our overall image size. Chaining the package index update to the add instruction, as we've done here, will also prevent unintended consequences: let's say our update layer is cached, for example, but a new package is added to our application. That could cause us problems, but if we chain things together it will let us bust the cache appropriately. The --no-cache flag here prevents the package index from being cached locally, so we don't need to add additional instructions to clean it up after our package installation.

Next we're going to set our working directory and user. Here we get to take advantage of the fact that our image base has a node user that we can use to avoid running our container as root. In the same way that you would want to avoid running processes on a virtual machine as root, it's a good idea to restrict privileges in a Dockerized environment by running processes as a non-root user where possible.

Our next step will be copying our application code over and installing our project-level dependencies. When copying your code, again, it's a good idea to think about cache busting. What we don't want is for our node modules to be rebuilt any time we make a change to our application code, unless we're actually changing our dependencies.
If we separate out a copy of package.json and package-lock.json from our application code copy, we can avoid that situation. This is a great pattern to follow with other stacks, too: if this were a Rails app, for example, you would definitely want to do the same thing with your Gemfile. Here we can also use the --chown flag when we're copying our application code, to set the appropriate permissions on the code for our node user. And finally, you can see that we're copying the application code from the root of our project over to the working directory that we specified above.

Next we'll add our EXPOSE and CMD instructions. With EXPOSE, we indicate which port the app will be listening on for connections. Then, as our last instruction, we specify a command to run the application with; in this case that will be node app.js, which uses our project's app.js file.
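
Put together, the Dockerfile we've been describing looks roughly like this. It's a sketch based on the series, so treat the Node image tag, the paths, and the port as illustrative; the apk line is only there to show the chaining pattern, since this app doesn't need extra system packages.

    FROM node:10-alpine

    # (Hypothetical) system packages would be chained into a single RUN
    # with --no-cache so the package index isn't cached locally:
    # RUN apk add --no-cache curl git

    # Create the app directory and hand it to the non-root node user
    # that ships with the official image.
    RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app

    WORKDIR /home/node/app

    # Copy the dependency manifests separately so the npm install layer
    # is only rebuilt when dependencies actually change.
    COPY package*.json ./

    USER node
    RUN npm install

    # Copy the application code with the right ownership for the node user.
    COPY --chown=node:node . .

    # The port the app listens on, and the command that starts it.
    EXPOSE 8080
    CMD [ "node", "app.js" ]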

If we need to provide a greater level of specificity here, for example if we had some tasks that needed to run once our dependencies have been installed and our code copied, we can also add an ENTRYPOINT instruction that points to a script that accomplishes those tasks for us. To build out your scripts, you can always look at the source code for any image to see what its default command is, and then write your scripts and commands accordingly. In this case, the image's default command is node.

OK, so we can interact with containers in many ways. Here are some things we're going to do. We're going to build the image using the docker build command. Any time you build images, you can look at them with the docker images command, which can be very useful if, say, you're experimenting with different bases, or you want to implement build stages to minimize your final application image size. We're going to run our image with docker run. We'll be using the -d flag, which runs the container in the background. The -p flag publishes the port on the container and maps it to the port that we specify on the host, which in our case will be 80. We can always look at our containers with docker ps, and the -a flag will give us everything, even containers that have stopped. docker logs will give us our logs, and we can always exec into a container using docker exec. Here, for example, is the command we would use if we wanted to exec into our Node application container and get a running shell in it. Because we're using that Alpine image, there's no bash by default, so this is the command we would use to get a shell.

All right, demo time. I'm going to clear all that, and you can see that in our directory we have a Dockerfile already. It does not have a unique name, so when we build we are not going to use the -f flag, which we could use to specify a different Dockerfile. We're going to build and tag this; we'll call it docker-demo, and we'll specify the current directory as the build context. You can see that I've cleared my build cache, so we're getting everything fresh. All right, cool. Now we can run this with docker run, calling the docker-demo image: we'll run it in the background and publish to port 80 on the host. OK, now we can go over to localhost and we can see our shark application. Looking good. All right, one final step: we're going to list our containers and stop the one that we just built. There it goes. And we'll remove that container as well. Cool.
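
Collected in one place, the commands from that demo look roughly like this; the image tag is the one used here, the container port is whatever the Dockerfile EXPOSEs, and the container IDs are placeholders.

    # Build the image from the Dockerfile in the current directory and tag it.
    docker build -t docker-demo .

    # Run it in the background, mapping port 80 on the host
    # to the port the app listens on in the container.
    docker run -d -p 80:8080 docker-demo

    # List containers, including stopped ones, and check the logs.
    docker ps -a
    docker logs <container-id>

    # Get a shell inside the container; Alpine doesn't ship bash, so use sh.
    docker exec -it <container-id> sh

    # Stop and remove the container when you're done.
    docker stop <container-id>
    docker rm <container-id>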

All right. So we now have our application running, or at least we've seen how we can do that. But as you probably saw, our application form has what looks like an input field, so we're going to need to persist some data. In order to do this, we're going to add a database service to our setup. We can do this using multiple containers with a tool called Compose, which allows us to define multi-container setups. We're going to walk through how we could wire up a database service with MongoDB to persist our precious shark application data.

First, though, we want to take care of a few things on the application side to ensure that things run smoothly when we add Compose. Before we add any code, it's worth thinking about what Compose is doing and how that might affect our application code. A service in Compose is an abstraction that allows us to point to a running container. Using Compose and architecting our application as a collection of services brings it in line with twelve-factor principles, so before we set up a Compose file it's worth thinking about the work we need to do on the application side with that definition of a service, and with those twelve-factor principles, in mind. The twelve-factor principles that matter to us here are storing the config in the environment, separated from our code, and treating backing services as attached resources. We're not running Mongo locally, right? We're not working on a virtual machine that's running Mongo, and we're not working with an assigned database host, like a separate virtual machine running Mongo. So we need to make sure that our code can work with a database host that's dynamically assigned.

For example, if we had a Node application that was already using Mongo, let's say on the same host, we would probably already have database connection information and methods defined in a db.js file. What we need to do is go in, pull those values out of that file, and find a way to pass them in dynamically. Here, for example, is a db.js file that hard-codes some connection constants. We can make this dynamic by using Node's process.env property, which returns an object that contains the user environment. So instead of hard-coding the Mongo connection details, we can read them from the environment. This means we'll need to set them elsewhere.

A local hidden file can be a good way to start storing secrets apart from your code. If you take this approach, be sure to add that file to your .gitignore and .dockerignore files. Here is a .env file with the variables that we saw earlier in our db.js. If you're working in an orchestrated environment, I highly recommend a credentials manager like Vault, which can provide far more security than a local file.

OK, so in db.js we're also going to add some resiliency to our code by specifying some parameters around connection attempts. Here we're defining some options that allow us to set how we retry connections and deal with successes and failures. And finally, in package.json we want to make sure we have nodemon, which will automatically restart our application when we make changes, because, especially in a development setup, we don't want to have to do that manually each time.
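
To make that concrete, here is roughly what the environment-driven connection setup in db.js looks like in the series; it assumes the app connects through Mongoose, and the options object holds the kind of retry parameters just mentioned.

    // db.js: read connection details from the environment instead of hard-coding them.
    const mongoose = require('mongoose');

    const {
      MONGO_USERNAME,
      MONGO_PASSWORD,
      MONGO_HOSTNAME,
      MONGO_PORT,
      MONGO_DB
    } = process.env;

    // Resiliency around connection attempts.
    const options = {
      useNewUrlParser: true,
      reconnectTries: Number.MAX_VALUE,
      reconnectInterval: 500,
      connectTimeoutMS: 10000
    };

    const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;

    // Log success or failure rather than crashing on a transient error.
    mongoose.connect(url, options)
      .then(() => console.log('MongoDB is connected'))
      .catch((err) => console.log(err));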

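For reference, the hidden .env file those variables come from might look something like this; the values are placeholders, and the file stays out of version control and out of the image via .gitignore and .dockerignore.

    MONGO_USERNAME=sammy
    MONGO_PASSWORD=your_password
    MONGO_PORT=27017
    MONGO_DB=sharkinfo
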
OK, so now we get to write our Compose file. The first thing we're going to do is tell Compose what image to use for our nodejs service. Here, because we're working in development, we're going to build the image locally using that Dockerfile, located in the context of our current directory. Next, we're going to tell Compose which environment file to use to load that information; in this case it's just the .env file we just saw. We also want to make sure that we're specifying the Mongo hostname correctly: in our case that's going to point to our db database service, which we're going to create next.

You'll notice that we've added some volumes here as well. Bind mounts like this one are a key part of developing with Compose, because this mount maps the code in our working directory on the host into the container. So we can still work in this local project directory, and the work that we do is immediately available and accessible in the container. However, working with bind mounts can lead to some confusion, so it's worth keeping a few points in mind. Whatever is on the host will hide what's on the container wherever the two are not identical. So the specific changes you've made on the container, introducing only what's necessary, or the experimental environment you've spun up to see how it does, could be overwritten by whatever you have locally. In these cases a named volume can be a really handy tool. Specifically, here we want only the node modules that we've specified for this version of the project to be present on the container. Or let's say that, instead of a longer-running project where we run into those dangers, we have just cloned this repo and haven't installed anything locally: we want to make sure we don't have an empty local directory overwriting what's installed on the container. This named volume persists the node modules that we installed with our Dockerfile instruction and mounts those contents into the container, which hides the bind mount. It's helpful to think about doing this with any dependency that you want to version or avoid rebuilding on boot.

And then a final word about volumes for Mac users: because Macs are not running a Linux kernel natively, file system mounts do not have the same guarantees as they do on a Linux system. Fine-tuning your mount consistency between container and host, using things like delegated mounts, is one way to deal with this, and it will make your load times a lot faster.

We can then add our ports option to map port 80 on the host to 8080 on the container, and a command that overrides the command we specified in the image. In this case we're running the app with nodemon, to ensure those automatic reloads happen after we make changes, and we're also using the wait-for tool, a wrapper script that uses netcat to poll whether or not a specific host and port are accepting TCP connections. This is to avoid any unintended consequences if, say, our application were to try to connect to our database before the database's startup tasks are complete. Compose also has a depends_on option to establish an order of dependency when starting services, but that order is based on whether or not a container is running rather than on its readiness.
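
Putting the nodejs service together with the db service and the named volumes we'll walk through next, the Compose file looks roughly like this; it follows the series, so treat the ports, paths, and volume names as illustrative rather than definitive.

    version: '3'

    services:
      nodejs:
        build:
          context: .
          dockerfile: Dockerfile
        env_file: .env
        environment:
          - MONGO_USERNAME=$MONGO_USERNAME
          - MONGO_PASSWORD=$MONGO_PASSWORD
          - MONGO_HOSTNAME=db
          - MONGO_PORT=$MONGO_PORT
          - MONGO_DB=$MONGO_DB
        ports:
          - "80:8080"
        volumes:
          # Bind mount for live development; whatever is on the host hides
          # what's on the container where the two differ.
          - .:/home/node/app
          # Named volume so the node_modules installed in the image are not
          # hidden by a local (possibly empty) directory.
          - node_modules:/home/node/app/node_modules
        # Wait until the database accepts connections, then start the app
        # with nodemon so it reloads on changes.
        command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon app.js

      db:
        # Official Mongo image; pin a specific version in practice.
        image: mongo
        env_file: .env
        environment:
          - MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
          - MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
        volumes:
          # Named volume so application data survives container restarts.
          - dbdata:/data/db

    volumes:
      dbdata:
      node_modules: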

Next we're going to build this db database service. For our database image we can use the official Mongo image rather than building our own locally and then pushing it up and pulling it back down; that's because we don't need to do anything specific to the image. Next, in our environment option, we're loading the environment file, but we're also making use of some of the variables that Mongo provides for us out of the box. MONGO_INITDB_ROOT_USERNAME will create our root user, with root privileges, defined in the admin authentication database, and MONGO_INITDB_ROOT_PASSWORD will set that user's password. In cases where we wanted a user with restricted permissions, for example, we would create a script to accomplish that and then mount it into the /docker-entrypoint-initdb.d directory on the container. Finally, we're going to add a named volume to persist our application data so that it's not lost between container restarts, and at the end of the file we're going to add a top-level volumes key for these named volumes.

OK. Again, there are many different ways that we can interact with our containers and services. We're going to use docker-compose up -d to build our services and run the containers in the background. We can always list everything with docker-compose ps, and we can get our logs with docker-compose logs. And again, we can exec into our containers with docker-compose exec, so depending on the base you're using, keep in mind the point I mentioned earlier about the Alpine shell. Then finally, docker-compose down will take down our containers and our defined or default network; in our case that's a default bridge network.

All right, back to the demo. We're going to run docker-compose up -d. Awesome. Now we'll visit a fresh localhost again, and we've got our sharks page, so we can add some sharks. I'm going to go with the megalodon shark, which is an ancient shark, and we'll submit that. Cool, there it is. Let's create a new shark: a whale shark, those are large. Awesome. OK, now let's actually test our persistence by taking this down. Our containers are down, but we want to make sure that our application data has persisted, so I'm going to bring everything back up; note that I'm not destroying the volume. Cool. Now I reload, and my data has persisted. We can also check that by going back to the shark info page, and there we go. All right, cool.

So far we have talked about some things that are specific to development setups and Compose, along with some things that are generally applicable, like named volumes and decoupling credentials from application code. Building on this information, I want to briefly touch on some of the things that you will want to implement when you're getting ready to deploy to production. When you're working in production, you're typically going to be building and pushing application images to some type of container registry.
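
As a rough sketch of that workflow (the registry namespace and tag here are purely illustrative), you might build and tag a versioned image, push it, and then point Compose at the pushed image instead of a local build context.

    # Build and tag a versioned image, then push it to your registry.
    docker build -t registry.example.com/myorg/node-app:1.0.0 .
    docker push registry.example.com/myorg/node-app:1.0.0

    # In the production Compose file, reference the pushed image instead of
    # building from a local context:
    #
    #   services:
    #     nodejs:
    #       image: registry.example.com/myorg/node-app:1.0.0
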
So, instead of building your application locally as part of your Compose workflow, you will likely build and push a versioned image to a repository, which you will then pull down to run. With volumes, you definitely do not want any bind mounts between your local application code and your container running in production.

Instead, you can use a named volume, which allows you to mount the code that you've deployed into other containers for reuse. This is really effective if you need to share your application code between containers. You will also likely want to add a web server to your setup. With a web server you can do a few things, including adding specificity to how your application handles requests, and you can also get certs for your application. You have a few choices when adding your config to your web server container: a bind mount on a config directory or file will work; you can also build the image locally, push it up, and copy your config over as part of that process; and in that case you can also use a named volume to persist the config.

To get your certs from, say, a certificate authority like Let's Encrypt, you have a few choices. You can always obtain them on the virtual machine as you would without containers. However, you can also work with a Certbot service, using the officially supported Certbot image, which allows you to go through the entire process of obtaining your certs with containers. You would get the certs, mount them as volumes, and then mount those into your web server container. And you can see how the app code volume here is being shared between all three of those services: the app, the web server, and Certbot. If you'd like to learn more about that process, since I'm glossing over it a bit, I have an article on Community called How To Secure a Containerized Node.js Application with Nginx, Let's Encrypt, and Docker Compose, which walks through the process in way more detail than I have here.
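
Since I'm glossing over the details, here is only a rough sketch of the shape of that production setup, not a working configuration; the service names, volume names, mount paths, and Certbot arguments are illustrative, and the article I just mentioned walks through the real thing.

    services:
      nodejs:
        image: registry.example.com/myorg/node-app:1.0.0
        volumes:
          # Deployed application code shared through a named volume.
          - app-code:/home/node/app

      webserver:
        image: nginx:alpine
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - app-code:/var/www/html          # reuse the app code for webroot challenges
          - ./nginx-conf:/etc/nginx/conf.d  # bind-mounted server configuration
          - certbot-etc:/etc/letsencrypt    # certificates obtained by the certbot service

      certbot:
        image: certbot/certbot
        volumes:
          - app-code:/var/www/html
          - certbot-etc:/etc/letsencrypt
        # Webroot plugin: proves control of the domain via files served by nginx.
        command: certonly --webroot --webroot-path=/var/www/html --email you@example.com --agree-tos --no-eff-email -d example.com

    volumes:
      app-code:
      certbot-etc: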

OK, so that's it for me. Thank you for listening, I look forward to engaging with your questions, and enjoy the rest of DockerCon, folks!
