Next Generation CI/CD with GKE and Tekton (Cloud Next '19)


So, welcome everyone to our session on next-generation CI/CD with GKE and Tekton. I'm Kim Lewandowski, and I'm a product manager at Google. And hey everybody, I'm Christie Wilson. I'm also from Google Cloud, and I'm an engineering lead on Tekton.

Before we get started, a quick show of hands: how many of you are running Kubernetes workloads today? Wow, quite a few, awesome. And how many of you are practicing CI/CD? OK, awesome. And then using Jenkins? Nice. Something else? OK, cool, good mix.

Today we're going to cover some basics, and we're going to talk about a new project called Tekton that we've been working on. We're excited to have two guest speakers joining us today to talk about their integrations with Tekton. We're going to briefly cover Tekton governance, and then finally what's in the pipeline for us.

So we're here to talk to you about the next generation of CI/CD, and we think that the key to taking a huge leap forward with CI/CD is cloud native technology. I, for one, found myself using this term "cloud native" all the time, but I realized that I didn't actually know what it meant, so this is specifically what cloud native means. Applications that are cloud native are open source; their architecture is microservices in containers, as opposed to monolithic apps on VMs; those containers are dynamically orchestrated; and we optimize resource utilization and scale as needed.

The key to all of this is containers. Containers have really changed the way that we distribute software. Instead of building system-specific binaries, installing those, and installing a web of dependencies, we can package up the binaries with all of the dependencies and configuration they need and then distribute that. But what do you do if you have a lot of containers? That's where Kubernetes comes in.
Kubernetes is a platform for dynamically orchestrating containers. You can tell it how to run your container, what other services it needs, what storage it needs, and Kubernetes takes care of the rest.

In addition to that, Kubernetes abstracts away the underlying hardware, so you get functionality like: if a machine that's running your container goes down, it'll be automatically rescheduled to a machine that's up. And in Google Cloud we have a hosted offering of Kubernetes called Google Kubernetes Engine, or GKE.

So this is what cloud native ends up looking like for most of us: we use containers as our most basic building block, then we dynamically orchestrate those containers with Kubernetes and control our resource utilization. And these are the technologies that we're using to build cloud native CI/CD.

For those not familiar with CI/CD, and it sounds like most of you are, it's really a set of practices to get your code built, tested, and deployed. CI pipelines are usually kicked off by a pre-submit workflow and determine what code can be merged into a branch, and then there's CD, which is what code changes you then deploy, either automatically or manually.

And what we've learned is that there's not really a one-size-fits-all solution. There are projects that just want something simple, something that works out of the box, and then there are companies with really complex requirements and processes that they must follow as their code travels from source to production. So it's an exciting time for us in this new cloud native world for CI/CD, because CI/CD systems are really going through a transformation. CI systems can now be centered around containers, we can dynamically orchestrate those containers, and using serverless methodologies we can control resource costs. And with well-defined, conformant APIs, we can take advantage of that power and not be locked in.

But in this new world there's a lot of room for improvement. Problems that existed before are still true today.
Some are just downright harder: if we break our services into microservices, they inherently consist of more pieces, have more dependencies, and can be difficult to manage. And the terminology is all over the place; the same words can mean different things depending on the tool. And there are a lot of tools. It seems like every week a new tool is announced, so I can't even keep up with all of them, and I know that our customers are having challenges making their own tooling decisions. It's great to have this many choices, but it can often lead to fragmentation, confusion, and complexity.

But when you squint at all these continuous delivery solutions, at their core they all start to look the same. They have a concept of source code, access, artifacts, build results, etc. And the end goal is always the same: get my code from source to production as quickly and securely as possible.

So at Google we took a step back, and after a few brainstorming sessions we asked ourselves if we could do the same thing for CI/CD that Kubernetes did for containers. That is, could we collaborate with industry leaders, in the open, to define a common set of components and guidelines for CI/CD systems to build, test, and deploy code anywhere?

And that is what the Tekton project is all about. Tekton is a shared set of open source, cloud native building blocks for CI/CD systems. Even though Tekton runs on Kubernetes, the goal is to target any platform, any language, and any framework, whether that's GKE, on-prem, multi-cloud, hybrid cloud, tribrid cloud, you name it.

So Tekton started as a project within Knative, with people who got very excited to be able to build images on Kubernetes. But very quickly they wanted to do more: they wanted to run tests on those images, and they wanted to define more complex pipelines. As enthusiasm grew, we decided to move it out into its own GitHub org, where it became Tekton.
So again, the vision of this project is CI/CD building blocks that are composable, declarative, and reproducible. We want to make it super easy and fast to build custom, extensible layers on top of these building blocks, so engineers can take an entire CI/CD pipeline and run it against their own infrastructure, or they can take pieces of that pipeline and run them in isolation. The more vendors that support Tekton, the more choices users will have, and they'll be able to plug and play different pieces from multiple vendors with the same pipeline definition.

So Tekton is a collaborative effort, and we're already working on this project with companies including CloudBees, Red Hat, and IBM, and we made a super big effort to make it easy for new contributors to join us. Again, Pipelines is our first building block for Tekton, and now Christie will be diving deeper.

So, Tekton Pipelines is all about cloud native components for defining CI/CD pipelines, and I'm going to go into a bit of detail about how it works and how it's implemented. The first thing to understand is that it's implemented using Kubernetes CRDs. CRD stands for custom resource definition, and it's a way of extending Kubernetes itself. Out of the box, Kubernetes comes with resources like pods, deployments, and services, but through CRDs you can define your own resources, and then you create binaries called controllers that act on those resources. So what CRDs have we added for Tekton Pipelines?

Our most basic building block is something we call a step. This is actually a Kubernetes container spec, which is an existing type. A container spec lets you specify an image and everything you need to run it: what environment variables to use, what arguments, what volumes, etc. The first new type we added is called a Task. A Task lets you combine steps: you define a sequence of steps, which run in sequential order on the same Kubernetes node. Our next new type is called a Pipeline. A Pipeline lets you combine Tasks together, and you can define the order that these Tasks run in: that can be sequentially, it can be concurrently, or you can create complicated graphs. The Tasks aren't guaranteed to execute on the same node, but through the Pipeline you can take the outputs from one Task and pass them as inputs to the next Task.
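As a concrete illustration of a Task combining steps, here is a minimal sketch using the early v1alpha1 API shape described in the talk; the names, images, and commands are illustrative, not taken from the session:

```yaml
# A hypothetical Task: two steps (plain Kubernetes container specs)
# that run in order on the same node.
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: build-and-test
spec:
  steps:
    - name: run-tests              # step 1: a container spec
      image: golang:1.12
      command: ["go", "test", "./..."]
    - name: build                  # step 2: runs after step 1
      image: golang:1.12
      command: ["go", "build", "-o", "app", "."]
```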
So being able to define these more complex graphs will really speed up your pipelines. For example, in this pipeline we can get some of the faster activities out of the way in parallel first, like linting and running unit tests. Then, as we run into some of the slower steps like running integration tests, we can do some of our other slower activities, like building images and setting up the test environment for our end-to-end tests.

Tasks and Pipelines are types you define once and use again and again. To actually invoke them, you use PipelineRuns and TaskRuns, which are our next new types. These actually invoke the Pipelines and Tasks, but to do that you need runtime information, like what image registry to use or what Git repo to run against, and for that you use our fifth and final type: PipelineResources.

So altogether we added five CRDs. We have Tasks, which are made up of steps; we have Pipelines, which are made up of Tasks; we invoke those using TaskRuns and PipelineRuns; and finally we provide runtime information with PipelineResources. And decoupling this runtime information gives us a lot of power, because suddenly you can take the same pipeline that you use to push to production and safely run it against your pull requests. Or you can shift even further left, and suddenly your contributors can run that same pipeline against their own infrastructure.

So this is what the architecture looks like at a very high level. Users interact with Kubernetes to create Pipelines and Tasks, which are stored in Kubernetes itself. Then, when the user wants to run them, the user creates the runs, which are picked up by a controller managed by Kubernetes, and the controller realizes them by creating the appropriate pods and container instances. Cool.
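Putting the remaining types together, a Pipeline plus its runtime objects might look roughly like the sketch below. This assumes the early v1alpha1 API; the task and resource names are illustrative, and the exact ordering fields varied across early releases:

```yaml
# A hypothetical Pipeline wiring two Tasks together, with ordering.
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: build-test-deploy
spec:
  tasks:
    - name: lint
      taskRef: {name: lint-task}
    - name: build
      taskRef: {name: build-and-test}
      runAfter: [lint]             # build runs only after lint
---
# Runtime information lives in a PipelineResource, so the same
# Pipeline can point at production, a pull request, or a fork.
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: my-repo
spec:
  type: git
  params:
    - {name: url, value: https://github.com/example/app}
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun                  # actually kicks off the Pipeline
metadata:
  name: build-test-deploy-run-1
spec:
  pipelineRef: {name: build-test-deploy}
```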
So today I'm excited to welcome engineers from CloudBees and TriggerMesh to talk about how they've been integrating with the Tekton project, and I want to highlight that they were able to do this very quickly because we put a ton of time into onboarding new collaborators. First, I'd like to introduce Andrew Bayer on stage to talk to us about how Jenkins X is integrating with Tekton.

Hi, I'm Andrew Bayer. I'm an engineer at CloudBees, working on pipelines both in Jenkins and Jenkins X. So, who here has heard of Jenkins X? That's a lot of people. Who here has played with it, or is using it, etc.? Good, good. In case you're not familiar with Jenkins X, let me try, and probably fail, to explain it very well, and then get corrected. Jenkins X is a new CI/CD experience for Kubernetes. It's designed to run on Kubernetes and target Kubernetes. You can use it for building traditional and cloud native workloads. You can create new applications or import existing applications into Kubernetes, and take advantage of various quickstarts and build packs that let you get the initial setup of the project, etc., without having to do it all by hand.

You get fully automated CI/CD integrating with GitHub via Prow, so a lot of automation and GitOps, promotions, etc., without you actually having to go click stuff by hand. It's got promotions; it's got staging, dev, and prod environment integration; and a whole lot of other magic. I'm here specifically to talk about the part about how Jenkins X is using Tekton Pipelines.

A user is not necessarily going to know that they're using Tekton Pipelines behind the scenes. We have our own ways of defining your pipelines in Jenkins X, either via a standard build pack or when you define your own pipeline using our YAML syntax. Then at runtime, when a build gets kicked off, Jenkins X translates that pipeline into the CRDs that are necessary to run a Tekton pipeline, and then Jenkins X monitors the pipeline execution, etc. That means that, like I said, the user isn't directly interacting with Tekton; the user is interacting with Jenkins X. That means we can do a lot of things on our side without having to worry about exactly how the user is going to interact.

So why are we using Tekton Pipelines? Like I said, I've been working on Jenkins pipelines as well for a while now, and what we've come to learn is that pipeline execution really should be standardized and commoditized. CI/CD tools all over the place have reinvented the wheel many times, and there's no reason for us to keep doing that. So I'm really excited about that. And we really like that we can translate our syntax into what's necessary for the pipelines to actually execute, so that we're still able to provide an opinionated and curated experience for Jenkins X users and pipeline authors, without them having to worry about getting exactly the right syntax and verbosity, etc. And it gets us away from the traditional Jenkins world of a long-running JVM controlling all execution, which is, you know, good. But the best part is, as
Kim mentioned, how great it is to contribute and get involved with Tekton Pipelines. I only got involved with this at all starting in November, and we've been able to contribute significantly to the project: helping figure out direction, fixing bugs, integrating it with Jenkins X, and getting this all to the point of being pretty much production-ready in just a few months. That's phenomenal. It's just been a great experience, an incredibly welcoming community, and just a lot of fun.

I don't have an actual demo, exactly. Let me go back again, sorry. Of course my screen went to sleep, hold on. What I wanted to show you here was just quickly how much of a difference there is between the syntax a user is authoring and what Tekton actually needs to run, and why we think that's valuable. This is roughly what an obviously brain-dead simple pipeline in Jenkins X would look like: just, you know, 26 lines. And then when we transform it, well, it's a lot more than that. But because we're able to generate that, and not require the user to author it all by hand, we're able to inject Jenkins X's opinions about environment variables, about what images should be used, and a lot more. It's been really great for us to have the full power of Tekton Pipelines behind the scenes and during execution, without needing to make the user worry about all of those details all the time. So that's been really productive for us. So, thank you.
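For a rough sense of what that authoring side looked like, here is a minimal sketch of a Jenkins X YAML pipeline of the kind described; the exact schema varied across Jenkins X versions, and the stage and command names here are illustrative:

```yaml
# A hypothetical jenkins-x.yml: a short, opinionated definition that
# Jenkins X expands into the much larger set of Tekton CRDs at runtime.
buildPack: none
pipelineConfig:
  pipelines:
    release:
      pipeline:
        stages:
          - name: build
            steps:
              - command: make test     # expanded into a Tekton step
              - command: make build    # images, env vars, etc. injected
```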

Next, I'd like to introduce Chris from TriggerMesh to talk about the work he's been doing.

Yeah, so thank you, Kim; thanks, Andrew. Hi everyone, my name is Chris Baumbauer, a developer with TriggerMesh and also one of the co-authors of TriggerMesh's Aktion. So, starting off with Aktion: this came out as a way of tying in the GitHub Actions workflow, once it was announced last October, with the Tekton pipeline approach. The idea being that we can translate that workflow into the various resources that Tekton Pipelines makes available, feed that into your Kubernetes cluster, and then either experiment with it or even create additional hooks so that it can receive external stimuli through something like Knative's eventing service: things such as a pull request from something like GitHub, or maybe some other web form being filled out, that triggers that build or that workflow in the background to produce the result, ultimately allowing you to run things from anywhere.

So, a little bit of a mapping exercise for how the terminology lines up. With GitHub Actions, you have the concept of the workflow; the workflow is the equivalent of Tekton Pipelines' overall Pipeline. This is where anything and everything runs, task-wise. Then, as far as the GitHub action goes, that is the equivalent of the single step or Task, where it's going to be that one container that runs that command and produces the output. I'll also call out one of the other components within the GitHub action, called `uses`. This is for defining the image that you want to run within your Tekton pipeline Task. Whereas Tekton pipeline Tasks expect
a particular image, what we end up translating with the GitHub actions, or at least we will once I finish my pull request, is the full support that GitHub has for their actions: being able to point to another GitHub repo, or point to some other local directory within that repo that you have predefined, to build the Docker image that you can then feed into the Task to work the magic, as it were.

So, as far as our little pretty picture goes: with TriggerMesh's Aktion we actually have two commands that handle everything. Down at the bottom we have the create command. This one creates the Tasks, creates the associated PipelineResources, and you can also use it to create the TaskRun or PipelineRun object for a one-shot invocation, usually if you want to test something out to see if you've got your workflow working just right, or if you wanted something else to call into this. And up above, we also create a Knative eventing sink, as well as the associated transceiver, which creates a serverless service within Knative to handle the creation and invocation of that TaskRun object.

So, to give you a quick demo of what we have. Let's see if I can... oh, perfect. What we're looking at right now is the customary hello world example. You'll note, up at the top, we have our workflow defined, this one being more of our Pipeline object. The `resolves` indicates all of the actions that would be associated with our workflow, and the `on` indicates the type of event that would happen within your repo. Then, right below that, for the action, you'll have some kind of identifier, our naming; the fact that we're using CentOS as our base image; and of course we're just going to run hello
with a specification for the environment variable. We also see that `args` allows you to pass in either a string or an array of strings, which is one of the nice things about their language, and as far as our translation goes, we take care of that for you.

So, with the Aktion command itself, as mentioned beforehand, you have your create and you have your launch. One of the things we had originally started working on was our own implementation of the parser for the GitHub Actions workflow syntax. But since GitHub was kind enough to open source theirs back in February, so that experimental projects such as these can make use of it, we've started transitioning to that one. So now ours acts more as a sanity check on whatever workflows you feed in, to ensure they do what they're supposed to do. As far as the common global arguments, we do allow for passing in your Git repository.
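To make the mapping concrete, a hello-world workflow like the one in the demo (a CentOS base image running a command with an environment variable) might translate into a Tekton Task along these lines. This is a hedged sketch of the idea, not Aktion's exact output, and all names are illustrative:

```yaml
# Hypothetical translation: the workflow maps to a Pipeline, each
# GitHub action maps to a Task, and the action's `uses`, `runs`,
# `args`, and `env` map onto the step's container spec.
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: hello-world
spec:
  steps:
    - name: hello                # one action becomes one step
      image: centos:7            # from the action's `uses`
      command: ["echo"]          # from the action's `runs`
      args: ["Hello, world"]     # from the action's `args`
      env:
        - name: GREETING         # from the action's `env` block
          value: hello
```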

When it comes to creating things such as the collection of Tasks, the create command is used not only to create a PipelineResource that could be referenced by additional steps, in case we wanted to add things to it, but also, in the case of specifying that local directory, so that we know which repo to pull from to build the Docker image.

So now we'll feed in our hello world. We just feed everything in as is, and we have our simple Task object. You'll see our steps have actually been broken down; we have our environment variables; we have our new and improved image; and we also have our name, which is "ben cooper 95," to resemble their traditional naming scheme. Then we can pass in -t to produce our TaskRun object, and then our favorite kubectl apply -f. Well, hopefully it will still talk to my Kubernetes cluster. And we have our objects created. And it looks like it just finished, so we have our true, we have our success, and we also have a pod name, so we can go take a look at any output, in the case of failures, or if there was something you wanted to grab from any of the success messages, and other things along those lines. And then, if we look at launch: the one thing it does require is that we pass in a task, and this one also requires that you specify a GitHub repo. All right. And this one... so here we have our eventing: the source definition specifying going into GitHub, the request for pulling in our credentials, and also the task that it'll create and fire. So that, I believe, is pretty much it.

Awesome. So thank you, Andrew and Chris, again, for sharing your work with us. Like I said before, we're not doing this alone, and Tekton is actually part of a new foundation called the Continuous Delivery Foundation. This is an open foundation where we're collaborating with industry leaders on best
practices and guidelines for the next generation of CI/CD systems.

The initial projects of the CDF include Tekton, Jenkins, Spinnaker, and Jenkins X. Now, you've seen Tekton's integration with Jenkins X; we're also excited that we're starting to integrate with Spinnaker as well. These are the current members of the CDF, and we're really excited to work with them on our mission to help companies practice continuous delivery and bring value to their customers as fast and securely as possible. If you want to learn more about the CDF, please check out cd.foundation to get involved, or just to watch what's going on.

All right, so what's coming next? For the CDF, we have a co-located summit coming up at KubeCon Barcelona on May 20th. If you're interested in what's coming down the pipeline for Tekton Pipelines, we're currently working on some exciting features like conditional execution, event triggering, and more. And for Tekton itself, we're looking forward to expanding the scope and adding more projects; we recently had a dashboard project added.

So, for takeaways: if you're interested in contributing to Tekton, or you're interested in integrating with Tekton, please check out our contributing guide in our GitHub repo. It has information about how to get started developing, and also how you can join our Slack channel, our working group meetings, etc. If you're an end user, check out Jenkins X, check out TriggerMesh's Aktion, and watch this space for more exciting Tekton integrations in the future. And that's it. Thanks so much for listening to our talk.

2019-04-13 12:56


