Getting Started with Mirantis k0s


Thank you, everyone, for joining us. Today we're getting started with Mirantis k0s, the new Kubernetes distribution that arrived last November. I'm really excited about tonight's guest: Jussi Nummelin of Mirantis. He's a Senior Principal Engineer there, and also the tech lead for the new Kubernetes distribution k0s. Tonight we're exploring the what, why, and how of k0s with Jussi. I hope you enjoy this session — and if you want to see more webinars like this, you can ping us on Twitter (our handle is listed here in the YouTube chat), or send us an email or a tweet, and we can figure out what you're interested in seeing. With that, thanks again to Jussi for his time, and over to Jussi.

All right — welcome, everyone. I'm here to talk about k0s, the new Kubernetes distro we've been working on for a while. I work as a Senior Principal Engineer at Mirantis, and I've been working with cloud native technologies for the past eight-plus years — even well before they were called cloud native, before the CNCF was even born, back when I started working with containers. I'm still hard-headed enough to be excited about building tools that help others take on these cloud native technologies and make people's lives easier.

The standard legal disclaimer: whatever features I mention on these slides are not legally binding us to provide those features. The usual stuff.

Maybe a couple of words about Mirantis — what we do, and why we created k0s, since that was touched on in the meetup invitation. Mirantis as a company has been working with different cloud technologies for quite a few years now, mainly creating technologies and solutions to help you ship code faster, whether you're running on public or private clouds. One of our flagship things is, and has been, OpenStack — something we still strongly believe in, along with the marriage of Kubernetes and OpenStack — and of course the world of containers, through the acquisition of the Docker Enterprise side of things, plus Kubernetes and everything around it. So that's our mission: ship code faster with cloud native technologies, Kubernetes, and containers.

First things first: "k-zero-s" is how you pronounce the project name. That's the official pronunciation — we've actually documented it in our docs. You might hear me say it differently in different places, and nobody is going to give you any trouble if you pronounce it differently, so don't worry about it.

k0s is our attempt to create a Kubernetes distribution that is really simple to use, but is still a fully certified Kubernetes distribution. For those of you who are newer to the world of Kubernetes: the CNCF maintains a conformance test suite — basically a set of automated tests that run against a cluster and verify whether you actually have a solid, certifiable Kubernetes.

Of course, k0s is not the first distro around — it's not even the first distro that I or the team have built — so we drew a lot of inspiration from existing work: for example a distribution called Pharos, which I had my hands on in the past, and k3s, which is of course a fantastic project — in the k0s architecture and concepts there's some amount of similarity with k3s. Alpine Linux is another place we drew quite a bit of inspiration from. At the same time, we have to acknowledge that we're really standing on the shoulders of giants: we couldn't be doing k0s if there weren't things like Kubernetes to start with. We're really standing on the shoulders of the open source giants.

Since we launched the k0s project publicly in November, we've had pretty good early traction. Looking purely at GitHub, we already have more than 3,000 stargazers, and the binaries we build have been downloaded more than 6,000 times. These are more like vanity metrics, but at least for us as the core team and core contributors of the project, they're a signal that k0s does something there are real use cases for, and that people seem to like what we're building — which is a good thing, of course.

Here's what I'll cover today: the basic questions for the project — why would we create something like k0s in the first place — then we'll dive a bit into the technical details of what it is and how we build things. On the last part of the slides I'll share a bit of the vision: the next big things we're planning to work on. A demo? Of course there's going to be a demo, and of course it's a live demo — be warned, you know how live demos can go, so I might end up doing a live debugging session. Let's see how it goes.

All right, about the name — why the zero? The zero came from the thinking that we want zero friction in k0s. What I mean by friction is the things that add complexity — the small annoying things you have to do when installing and setting up a Kubernetes cluster. What we're trying to get to is zero friction between you starting with k0s and having a fully conformant Kubernetes cluster up and running. You shouldn't need 15 years of Linux experience plus another seven years of Kubernetes experience in your pocket to figure out how this works.

Another zero is zero dependencies — something I'll dive into quite deeply in this presentation. k0s is a single binary, so there are basically no host operating system dependencies, at least at the package level. It's actually not truly zero at the moment, because Kubernetes itself and some of the kube components still need a couple of other binaries on the system — but we're trying to get it to really be zero, and our team has pushed some upstream patches to get rid of some of those external binary dependencies in Kubernetes itself. So basically you can ship the same binary to Red Hat or Ubuntu or any of the Linux
distros, and it works exactly the same way.

And then of course zero cost: k0s is an open source project with a free-to-use license attached — Apache 2.0, which is a good license — so you can quite easily build your own solutions and products on top of it. That's where the zero in the name comes from.

Why k0s? The goal we set when we first started thinking about k0s was versatility: we have a very diverse set of use cases where we want and need to run Kubernetes today. It's not only us — many others, like the folks from the Rancher and k3s world, are seeing this adoption of Kubernetes toward edge-type use cases too. So we need something versatile enough that on the one hand you can deploy it on the typical big cloud providers like Amazon, or on your private OpenStack, and on the other hand you can deploy Kubernetes on smaller devices, closer to the edge of the network. While doing that, our thinking was: yes, we want to take Kubernetes into that world, but we want it to be pure upstream Kubernetes — we don't want to maintain a fork. Maintaining a fork of anything decent in size is a lot of work, and if you look at the Kubernetes code base, it's a huge monster — we definitely didn't want to start maintaining a fork of Kubernetes.

In my past I've worked on projects that provide packages for different operating systems — RPMs and debs and whatnot — and from that experience I really don't want to do that anymore. Hence we settled on the single-package, single-binary packaging for k0s. All this versatility isn't only about giving you an easy way to set up a Kubernetes cluster; it also means we — and you — have a base to build higher-level solutions on top of. We have some ideas and plans for how to utilize k0s ourselves, and hopefully you and the community have plenty of use cases where you can use k0s too.

One of the biggest differentiators k0s has compared to most other Kubernetes distros is something we call control plane isolation. In most Kubernetes distros, the control plane — the Kubernetes API server, scheduler, and those components — actually runs on a node, in containers, using the kubelet on that node. We wanted a setup where there is no possibility for a user to schedule any workloads on the controller nodes. We'll dive into this more, but it's probably the biggest differentiator in k0s: by default the control plane is fully isolated from the rest of the cluster. That's pretty cool, and it enables a very versatile deployment architecture.

That gives us a nice bridge to a couple of terminology notes. The control plane, in the world of Kubernetes, describes the set of nodes that run the Kubernetes management components: the API server, scheduler, controller manager, and so on. A controller is a node that runs these control plane components. In the k0s world — and in the documentation too — you might also see the term server being used; we use controller and server pretty interchangeably. We should really settle on one single term, and we actually have a GitHub issue on that. The worker plane is, of course, the set of machines running the user workloads — in containers, with the help of some Kubernetes components — and a worker is a single node that runs those container workloads.

All right, the architecture of k0s. With k0s you still get Kubernetes, so the architecture is pretty much the standard Kubernetes architecture: you have the Kubernetes components — API server, scheduler, controller manager, and so on — and on the worker plane you have nodes running the kubelet, kube-proxy, and some other components. It's still a standard, conformant Kubernetes setup. What's different in k0s is that everything — all the Kubernetes components, plus a few other things — is packaged as a single binary, and all of the Kubernetes components are managed by k0s itself. As a user, you don't have to worry about or figure out how to manage the Kubernetes API components or anything else; you just start k0s, and k0s itself manages the Kubernetes components for you.

As I already mentioned, all of the Kubernetes pieces we bundle are 100% upstream — we don't maintain forks of Kubernetes or any of the related components. If we need to fix something, we push it upstream. What that means for us is that we can adapt to upstream changes pretty fast: whenever there's a new version of Kubernetes out, it's usually only a couple of days before we have it packaged in k0s. One example — not a Kubernetes versioning example, but a containerd one — was a containerd CVE a month or two ago (I can't remember exactly anymore): it was less than 24 hours after the CVE was announced that we had a new k0s package out with the fix. Working with upstream really lets us focus on the k0s parts of the solution, so we don't waste effort maintaining a fork.

So yes, everything in k0s is packaged as a single binary. One way to think about it is as a self-extracting binary: there are other binaries within the k0s binary. All the binaries we package in — all of the Kubernetes components and everything — we compile in our own build pipelines, from fully upstream sources, because we want fully statically compiled binaries. That means the same binary works across different operating systems — across Linux distros, at least — so you can use the same binary with Red Hat and with Ubuntu and whatnot. (Of course we have different builds for different architectures, but that's another question.) We don't have to maintain all the RPMs and debs — if you've ever had to work on Linux distro packaging, it sounds like a simple thing, but it's actually quite a lot of work to make everything work with different versions of Debians and RPMs. In the end we have a single binary that's easy to ship around: if you want to set up your cluster, just dump this single binary onto all of the nodes and you're ready to go.

Another way to think about k0s is as a glorified process manager. All the Kubernetes components and other dependencies — containerd, kubelet, everything — run as what we call naked processes: just plain processes, no containers in the picture in that sense. That allows us to run the k0s components — a controller, for example — in environments where you can't run containers at all, and it means fewer dependencies: you don't need a container runtime on the controller nodes, for example. It also lets us tighten security at least slightly: k0s sets up different users on the Linux system for the different components, and it bootstraps and runs the Kubernetes component processes as those different users — so not everything runs as root. On the control plane there's no container runtime, no kubelet, and no containers running by default. What that really means is that as a Kubernetes user you have no way to schedule your workloads onto the controller nodes, whether on purpose or by accident. In most Kubernetes distros it's really hard to control who can schedule workloads on which nodes in the cluster — although the Kubernetes RBAC access controls are quite strong, that particular thing isn't easily controllable via RBAC.

As I've mentioned many times — and I keep mentioning it because it's the main thing that differentiates k0s from other solutions — this control plane isolation gives you a lot of versatility in your deployment options. With it you can run your control plane on, say, a public cloud, and run your worker plane in your private data center, behind firewalls and everything. That's doable with this sort of control plane isolation.

Another of the mottos we have on the k0s team is "batteries included, but swappable". Although we have defaults and provide most things out of the box, there are ways for users to swap things out. By default we ship with containerd as the container runtime, but you can of course use k0s with your own container runtime implementation — whether that's the Mirantis container runtime (what was previously called Docker Enterprise), the Docker community edition, CRI-O, or whatever. As long as it complies with the CRI interfaces and definitions, you can use it with k0s too — though then it's up to you to ship those binaries and packages. The same goes for the CNI, the container network implementation — the networking implementation of your Kubernetes cluster. By default we ship with Calico as the CNI, but you can configure k0s not to set up Calico and do the CNI configuration yourself. You can also bring your own container storage implementation into the cluster: we don't ship k0s bundled with any storage provider — we made that choice deliberately, at least for now.
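As a concrete illustration of these swappable defaults, here is a hedged sketch of a k0s.yaml that turns off the bundled CNI and (via the embedded kine component, covered next) swaps the datastore. The field names follow the k0s ClusterConfig schema as I understand it, and the DSN and socket path are made-up examples — verify everything against the k0s docs for your version.

```shell
# Sketch of a k0s.yaml swapping two bundled defaults (illustrative values).
mkdir -p /tmp/k0s-demo
cat > /tmp/k0s-demo/k0s.yaml <<'EOF'
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
spec:
  network:
    provider: custom   # skip bundled Calico; you ship your own CNI manifests
  storage:
    type: kine
    kine:
      dataSource: "mysql://k0s:secret@tcp(db.example.com:3306)/k0s"  # hypothetical DSN
EOF

# Pointing a worker at an existing Docker runtime instead of the bundled
# containerd (hypothetical socket path; requires Docker on the node):
#   k0s worker --cri-socket docker:unix:///var/run/docker.sock <join-token>

grep -q 'provider: custom' /tmp/k0s-demo/k0s.yaml && echo "swappable-defaults sketch written"
```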
There are simply so many storage providers out there, with a lot of differences between them — one provider is good for one use case, another is good for a different one — so we can't really make an educated choice for you about which one to bundle in.

Thanks to an open source project and component called kine, which originates from the folks at Rancher and the k3s world, we also embed kine into k0s. With kine you can swap out the default etcd storage on the control plane for something like MySQL (or any compatible database), PostgreSQL, or SQLite for single-node cases. That again adds versatility to the deployment options: if you don't want the pain of managing etcd, you can use something like a hosted MySQL — or whatever compatible database your cloud provider offers — as the backend for the control plane. kine is really fantastic for that.

OK, let's dive into some of the details of how we actually implement things before we attempt the live demo. The single binary: when we build k0s, we build the binary first, and at build time we generate offset positions for the different embedded binaries. Say we embed the kubelet binary into the k0s binary: we know the size of the kubelet binary, so we can calculate its offset position within the k0s binary, and we add that information into the k0s binary itself. Then at runtime we can easily seek to the right position and extract the binaries out of the k0s binary — so it really is a sort of self-extracting binary.
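To make the offset idea concrete, here is a toy shell round-trip — not the actual k0s mechanics, just the same seek-and-extract principle, using `dd` and a fixed-size header as stand-ins:

```shell
# Toy illustration of offset-based embedding: append a payload after a
# fixed-size 64-byte header, then extract it again by seeking to its offset.
printf 'payload: pretend this is an embedded kubelet binary' > payload.bin
printf '%-64s' 'HEADER v1' > combined.bin   # exactly 64 bytes, space-padded
cat payload.bin >> combined.bin             # embed the payload after the header

offset=64
size=$(wc -c < payload.bin)

# Seek past the header and pull out exactly the embedded bytes.
dd if=combined.bin of=extracted.bin bs=1 skip="$offset" count="$size" 2>/dev/null
cmp payload.bin extracted.bin && echo "extracted payload matches original"
```

The real k0s build records such offsets inside the binary itself; this sketch just shows why knowing a size and an offset is enough to extract an embedded file without loading anything else.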
For those of you who have worked with Go: there are a lot of existing solutions for embedding static content into your binaries — whether that's other binaries, as in our use case, or static websites or anything else. But most of the existing solutions share a problem: the data is embedded as Go variables, which means it gets loaded into memory. When we tried those, we saw that, yes, it's nice and works almost perfectly, but it ended up keeping a lot of things in memory that we don't need — we certainly don't need the kubelet binary sitting in the memory of the k0s process all the time. And again, everything is statically linked, so there are no OS dependencies at the package level, and you can use the single binary on any of the Linux distros.

With the control plane now isolated from the worker plane, how do the two halves talk? Kubernetes has a lesser-known concept called the egress selector. If you configure your control plane components — mainly the API server — to use the egress selector, you're telling the Kubernetes API server that whenever it needs to call something outside itself — some external API, or, typically, a node's kubelet API — the call is actually made through the egress selector, which basically acts as a proxy of sorts. Then, on the worker plane, we set up a component called the konnectivity agent.
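Upstream, this wiring is expressed with an `EgressSelectorConfiguration` file handed to the API server. A hedged sketch follows — the socket path is illustrative and the API version may differ between Kubernetes releases, so treat this as a shape, not authoritative k0s configuration:

```shell
# Sketch of an egress selector config telling the API server to proxy
# cluster-bound traffic through a konnectivity server's UNIX socket.
# Paths and versions are illustrative, not taken from a live k0s install.
mkdir -p /tmp/k0s-demo
cat > /tmp/k0s-demo/egress-selector.yaml <<'EOF'
apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
- name: cluster
  connection:
    proxyProtocol: GRPC
    transport:
      uds:
        udsName: /run/konnectivity-server/konnectivity-server.sock
EOF

# The API server would then be started with something like:
#   kube-apiserver --egress-selector-config-file=/tmp/k0s-demo/egress-selector.yaml ...
grep -q 'EgressSelectorConfiguration' /tmp/k0s-demo/egress-selector.yaml && echo "egress selector sketch written"
```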
The konnectivity agent runs as a DaemonSet, so it's basically running on each of the worker nodes, and the agent is configured to call home: it opens a socket connection to the konnectivity server, and that konnectivity server is what's configured as the egress selector. So what happens when the API server needs to connect to a worker node — say you're getting the logs of your pods, or exec'ing into a pod, the cases where the API server has to connect to the kubelet running on the node? The API server sees that it's configured to use the egress selector, so it calls the konnectivity server. The konnectivity server figures out that we're targeting this specific node in the cluster, sees that it has an open socket which was opened by the agent on that node, and uses it as a sort of reverse tunnel to connect to the node. What that really means is that you can have firewalls allowing only outbound access in front of the worker nodes. This way I can run my control plane in a cloud, for example, and run the workers in my private data center — or emulate a private data center on my Mac using virtual machines or containers or whatnot. That's something cool. k3s does a similar thing, but technically they do it a bit differently, and as far as I know this is one of the reasons they had to fork Kubernetes — to inject this sort of functionality. (As far as I know — I might be wrong, which wouldn't be a new thing for me.) We do this with standard Kubernetes configuration, and the konnectivity server is one of the Kubernetes SIG (special interest group) projects that we're using — and hopefully contributing to in the future.

Joining new nodes — this is something I'll demo. We use bearer tokens to authenticate the join process between the nodes. Let's see whether I'm able to demo it, but the tokens are actually valid kubeconfigs, used by the different components — the kubelet, for example. We use these tokens for both worker and controller joins. A worker join means connecting the kubelet and node components on a node to the control plane components. When we run the join process for a controller, we also need to sync up things like the CA — the certificate authority created for the cluster — and, if we're using etcd as the datastore for the control plane, we have to reconfigure etcd to accept a new member into the cluster, and so on. All of that is handled dynamically by k0s.

There are two ways to extend the out-of-the-box experience with k0s. First, we have the manifest bundle functionality: there's a special directory on the host where you can create subdirectories and dump your custom manifests, and they will be loaded automatically. Say you want to ship your own CNI, for example: just drop all your manifests into that folder and they get applied automatically. The other way is extensions as Helm charts: in the k0s YAML configuration you declare which repositories to use for charts and which charts to install with which values, and when k0s bootstraps, it applies and sets up those Helm charts for you automatically. So you have one place where you control the base cluster features you want to provide to your users.
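A hedged sketch of what that extensions section can look like in k0s.yaml — the repository URL and chart name are real public examples, but the field names follow the k0s ClusterConfig schema only as I understand it, so verify against the docs for your version:

```shell
# Sketch of the Helm-chart extension mechanism in k0s.yaml (illustrative).
mkdir -p /tmp/k0s-demo
cat > /tmp/k0s-demo/k0s-extensions.yaml <<'EOF'
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
spec:
  extensions:
    helm:
      repositories:
      - name: prometheus-community
        url: https://prometheus-community.github.io/helm-charts
      charts:
      - name: prometheus
        chartname: prometheus-community/prometheus
        namespace: monitoring
        values: |
          server:
            retention: 3d
EOF

# Custom manifests can instead be dropped under a subdirectory of the
# manifest bundle directory (/var/lib/k0s/manifests/<name>/, if I recall
# the default path correctly) and are applied automatically.
grep -q 'chartname: prometheus-community/prometheus' /tmp/k0s-demo/k0s-extensions.yaml && echo "extensions sketch written"
```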
sort of one place where you control your your kind of base cluster features what you want to provide for your for your users if you're running this sort of a platform for your developer teams for example and yes we do support windows so we we have marked windows support as us as highly experimental we know that there are some some super rough edges on the on that well on case heroes there's some rough edges still we we know that they are there but but for windows there's even even a few more of those that we need and and one of the things that that we are trying to work out is is the dependency on the on on existing docker or or miranda's container runtime on windows so currently you can use k0s only when you already install docker or or mcr on on your windows notes we were planning to ship with uh with embedded container d also for windows but that seems that there's actually actually a container d uh i'm not sure whether to call it buck or a feature that renders it it it's not so usable or it doesn't actually work with the cni providers so we were actually having a bit of trouble figuring out how to make that happen but basically the intro experience is pretty same for for windows workers also you you run the workers stuff give it a a path to the to the docker socket and and basically that's it so that's pretty neat that you have almost the same same user experience on on windows workers too when joining joining the windows notes into the cluster okay quick words about what what we're planning to do in in the near future before running a super quick demo uh so we're targeting a sort of release per month uh kind of release payments uh we are hoping to get get that 1-0 release out on on the first half of the year and then we we of course do batch patch releases then whenever needed whenever there's a there's a bug critical enough to be fixed quickly or or whether there's a some cv bad cv on on on some of the kubernetes or other other components or or things like 
that uh we are hoping to get get another really hard deal during this month during january there's gonna be a few more categories optimization and then we are actually coming out with that with another or k0ctl to allow your you to build your multi-node kind of life cycle manager for all you know k-0s clusters then we've got quite a lot of bug fixes then for the for the near future we are planning to work on on to support like fully air-gapped environments built-in functionalities for backup restore use cases then of course kubernetes is gonna gonna come out with 121 series of releases in next couple of months i appreciate joint tokens to to make life easier for cases like auto scaling clusters and those sort of things uh there's been some discussions people seem to need need or want to have like pre-built virtual machine image images for k0 and then of course we we we are thinking of and and we know few places where we need to optimize things quite up quite a bit actually so uh so that that's something that that's in our our radar but of course it's a it's an open source project so so uh you as a as a community you have of course a saying also what what what you need and what you want to have so so uh just open the open the issues and enhancement requests on what we can do about it and of course even better if you open pull requests directly uh we have documents up and running on on docs.k0s project io uh there's of course the github people a lot of information there and then we we are sharing the the slack channel on on on our kind of neighboring team with with the lens so so on on the lens slack there's a dedicated k0 channel channel so so uh join that slack and and join the k0s channel to to have a have a chat with people okay i'm i'm running quite late on them on the schedule but but i'll run a super super quick demo on things so for for the demo i'm i'm just applying a a some terraform stuff just to create me a couple of couple of test boxes on on hatchner 
Cloud. I should have applied this beforehand, because it takes a while. The Terraform stuff is nothing special: just controllers and workers, small virtual machines, and it actually runs a script to get the k0s binary in place as part of the provisioning. For the demo I'm using the beta 1 release of the 0.10 series, so, as the version implies, it's a beta; hopefully it'll actually work. I wanted to use the beta because it has nice enhancements to the installation of k0s. Of course, for production use cases you shouldn't really curl a script and pipe it to execution like this, that's a known security thing, so don't do this in production; this is for demo purposes. Still creating... okay, I see one of the nodes completed. It usually doesn't take this long. What did I say about live demos being a good idea? I did try this last night and it worked perfectly, but the gods of computing are not with me today. Still creating... it usually finishes in around a minute and 36 seconds. Okay, it actually did finish. So basically I now have four boxes on my hands: one of them I'll use as a controller, and the rest are workers. So, ssh root@ the controller's IP, yes, I trust it. k0s version: yep, the beta is in place. One of the enhancements coming in 0.10 is this install command. Because this is the controller, also known as the server, I want to install the server parts. What that does for me is create the users for the different kube components and processes, and it installs k0s as a systemd service, because this is an Ubuntu box, so systemd is available. What I can do next is systemctl start the k0s server service, and voilà: within a couple of minutes you'll see the k0s server process running, and it actually manages etcd, the API server, everything for me.
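The controller bootstrap from the demo boils down to a few commands. This is a sketch based on the 0.10-beta CLI used in the session; the subcommand and service names (`k0s install server`, `k0sserver`) are as spoken in the demo and may differ in later releases, where the server role was renamed to controller.

```shell
# Controller bootstrap, as in the demo (k0s 0.10-beta era; names may differ
# in later releases). Don't curl|sh in production -- fetch and verify instead.
curl -sSLf https://get.k0s.sh | sh

# Install the control plane as a service (init system is auto-detected);
# this also creates the system users the kube components run as.
k0s install server          # later releases: k0s install controller

# Start it; etcd, the API server, scheduler, etc. run as child processes.
systemctl start k0sserver
```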
All right, let's connect Lens into the picture. This is the administrator kubeconfig; it's long as hell, but with my Lens dashboard running I paste in that config, and then of course I have to change the localhost address to whatever the controller's IP address was. And voilà, the cluster. Of course the workloads are all pending, because there are no workers connected to the cluster yet; you can see that the controllers don't run any of the workloads. So let's connect at least one worker into the cluster: ssh root@ the worker, yes, I trust it. To be able to join a worker node into the cluster I need a token, something I can use to authenticate the worker so that it's actually allowed to connect to the cluster. For that I first create the token on the controller: k0s token create --role worker. This creates a super long token that I can use as the way to authenticate the worker into the cluster. I'll add that as a file on the worker with just the token in it. Then, again, I want the systemd service for k0s, so that whenever the node boots, the k0s parts boot with it: k0s install worker, pointing it at the token file. You have better places to put these sorts of files, but it's here now, and that's it, the service is installed. Then systemctl start the k0s worker service, and in a minute or two we'll see that we actually get a node in the cluster in Lens. Yep, worker 0 connected. Of course it takes a while until it gets into the Ready state; it needs to pull all the images for Calico and whatnot. And then I could rinse and repeat the same process for all the other nodes.
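The worker join flow just described, as a sketch. Again these are 0.10-era command and service names, and the token file path is illustrative:

```shell
# On the controller: mint a join token for a worker node.
k0s token create --role worker > worker-token   # one long token string

# Copy worker-token to the worker node, then on the worker:
k0s install worker --token-file /root/worker-token
systemctl start k0sworker    # node joins and starts pulling Calico images
```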
The process is exactly the same: get the token onto the node one way or another, then run the worker with that token, and you have everything set up. Yep, node conditions Ready. Some of the pods are still being created... okay, all of the pods are now created on that node. Green, everything works out of the box. Then you can of course scale the number of worker nodes basically as much as you need, and after that it's standard Kubernetes, so use Lens or any of your Kubernetes tools to connect to the cluster and be done with it. All right, thanks Jesse, it's been a very thoughtful session for me as a presenter and host; I learned a lot of cool stuff. Before the session I was confused about how to compare k3s with k0s, how it's implemented, how it's configured, how it's used, and during this discussion I learned all the very cool stuff: the why of it, the what of it, the how of it. It's been a very exciting session for me. I have also collected some questions from the audience in my inbox. One of the questions is: in a k0s cluster, why are we using systemctl services rather than static pods or another way of bootstrapping the control plane components? Okay, so that basically comes from the fact that we want to have the control plane isolated from the cluster. If I go to this controller node that I created and use standard ps, for example, you see that when the k0s server process starts, it actually creates, spawns, and manages the other components as plain Linux processes. So I don't have any containerd, I don't have any Docker, and there's no kubelet or anything running on this controller node.
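You can see that isolation yourself; a sketch of what the demo shows on the controller:

```shell
# On the controller: control-plane components are plain child processes of
# the k0s supervisor -- no kubelet, no container runtime.
ps -ef | grep -E 'k0s|kube-apiserver|kube-scheduler|etcd'

# From any machine with the admin kubeconfig: the controller is not a Node
# object at all, so nothing can be scheduled onto it.
kubectl get nodes        # lists only the workers
```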
This really means that, as a user, if I go to my Lens, to my nodes, I don't see the controllers as part of the cluster at all. They're not something I can schedule my workloads onto. So the control plane is fully isolated from the cluster and from the workloads, I mean from a scheduling perspective, and that's why we do it the way we do it, without a kubelet or container runtimes. Okay, but the next question on this topic: there are some distros of Linux where systemd is not available, so if I use a k0s cluster, do I need an operating system with systemd already in place? No, you don't need it. The k0s install command is able to figure out whether you have systemd in place or not, and it also supports openrc, which is used for example in Alpine Linux, so it can inject the service as an openrc service too. But even if you're running something exotic that has neither systemd nor openrc, it doesn't matter how you start the k0s process. What I could actually do is systemctl stop the k0s service, which will of course stop all the control plane components, so I don't have etcd or anything running anymore, and then run k0s server as a plain process in the foreground. So we don't have any hard dependencies on systemd or anything. Okay, so to recap: if you have a system where systemd is not available, k0s can't set itself up as a systemd service there, but it also has no hard dependency on systemd to run the management control plane? Yep. It's then just up to you to figure out how to get it started when the computer starts.
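Concretely, dropping the service and running the control plane in the foreground looks like this (a sketch; the 0.10-era `k0s server` subcommand was later renamed `k0s controller`):

```shell
systemctl stop k0sserver   # stops etcd, the API server, and friends
k0s server                 # same control plane, now a plain foreground process
# Ctrl-C tears it down again; any init system (or none) can manage this.
```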
Also, it's a single binary: we don't need to be experts on RPM or deb dependencies or any other packaging. You just run the k0s install command, and this single binary wraps the API server, controller manager, etcd, and all kinds of things. Yes, so when you've started the k0s stuff, what you'll see under /var/lib/k0s (this is of course the path we use as the data directory, and it's configurable) is a directory called bin, and you'll see that we actually extract all these kube binaries there. Okay, and it has a similar opinion to Rancher about decomposing the data plane: the server storage, where etcd is swappable with SQLite or dqlite via the existing kine project, is pluggable, and for the network you're using Calico as the CNI right now, but it's completely pluggable, whatever you want to use. Yes, so if we do k0s default-config and pipe it out... I hate the fact that Go's YAML serialization puts everything in alphanumeric order, because that's not the logical order in which people usually read things. For example, for the storage, by default we use type etcd, but you can also configure kine with MySQL or SQLite backends or whatever. And for the network, by default we use Calico as the provider, with some default configuration for it, but of course you can say, okay, we're going to use a custom provider, and that tells k0s that it will not configure any CNI for the cluster.
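For illustration, the storage and network parts of that config might look like this. This is a sketch using 0.10-era field names; the exact schema and the kine data-source string are assumptions, so check the `k0s default-config` output of your own version:

```yaml
apiVersion: k0s.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s
spec:
  storage:
    type: kine                 # default is etcd
    kine:
      # illustrative data source: SQLite instead of a full etcd cluster
      dataSource: sqlite:///var/lib/k0s/db/state.db
  network:
    provider: custom           # default: calico; custom = k0s configures no CNI
```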
It's then fully up to you to configure whatever CNI you want to use in the cluster: you could use Weave or Flannel or anything else, Multus or whatnot. Okay, I actually think Calico is a very good default choice, because in the CNI space they're doing tremendous stuff, like the eBPF support, so Calico is the default choice for many people, but you can use Cilium, Weave Net, Flannel, or any other as you need. Next question: does k0s have support for integration with cloud providers, either now, planned, or in progress? We already do support that. For that you need to configure the worker, let me grab a connection to another worker node, with this enable-cloud-provider flag. That basically adds the cloud-provider flag to the kubelet. We don't support any of the in-tree Kubernetes cloud providers, so you always have to deploy the cloud provider support as pods in the cluster, which is the recommended way to do it nowadays anyway. We don't add the Amazon integrations or anything; we actually compile the Kubernetes components without the cloud providers in them currently. Okay, one last thing on this: let's say the controller is here on my laptop and the worker nodes are in, say, Azure or AWS or GCP, some workers in GCP, some in Azure or AWS, because people have use cases around hybrid cloud providers and want to manage their workloads there. Is that in the scope of the project? Well, let me put it this way.
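The worker-side flag mentioned here can be sketched as follows (flag name as spoken in the session; the token path is illustrative):

```shell
# Tell the kubelet to expect an external cloud provider; the actual cloud
# controller manager is then deployed separately, as pods in the cluster.
k0s install worker --enable-cloud-provider --token-file /root/worker-token
systemctl start k0sworker
```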
k0s doesn't have, and we as a project don't have, any strong opinions on any of the cloud providers, so you're of course free to use whatever you want. But if you want the same cluster spanning many different cloud providers, you'll probably see a lot of trouble on the networking part: you might have a hard time connecting nodes from different cloud providers into the same cluster. That's just a limitation on the networking side, at least partially. You might be able to get around at least some of those issues by using something we also have configurable on Calico, the WireGuard option, so that might get you out of some of the trouble. But I have to be honest: I haven't tried to run Kubernetes with cloud provider integrations enabled for different clouds in the same cluster, and I'm not actually sure that's even possible, because then you'd have competing implementations for things like, let's say I create a service of type LoadBalancer: who actually implements the load balancer now, Google or Amazon? I don't know. So I'm not sure whether having multiple cloud providers configured in the same cluster can even work. Probably not. Okay, so as I understand it, we can run k0s on Windows and Linux as well? Yes, but on Windows we only support the worker parts. Okay, worker parts; for the control plane you have to have a Linux machine or virtual machine somewhere to manage the cluster. And if, as a beginner, I see the Kubernetes API server, controller manager, scheduler, and etcd, is anything new in a k0s cluster, or is it going to be a replica of upstream Kubernetes? It's a full upstream Kubernetes.
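The Calico WireGuard option mentioned here is a config toggle; a sketch with assumed 0.10-era field names (verify against your version's default config):

```yaml
spec:
  network:
    provider: calico
    calico:
      wireguard: true    # encrypt pod-to-pod traffic between nodes
```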
I mean, what you get is still Kubernetes. We don't change or hide the fact that it's Kubernetes, and we don't build fancy layers on top of it or anything. It's Kubernetes, but as you saw, it's that simple to get your Kubernetes, and you don't have to figure out the different packages you'd need and the dependencies between them and whatnot. Okay, so it's just a single command, k0s install, and a service, and you have all the major components of Kubernetes extracted from one binary and installed on your system. You don't have to be an expert in dependency management or host packages, you have support for different CRIs, CNIs, and CSIs, and it stays really close to upstream, so anything changed there is going to be seen in k0s immediately, because it's not a fork: you're not maintaining a fork repo, it's pure upstream Kubernetes. Yes, exactly. And there are other distributions around, MicroK8s, kubeadm, k3s, and many others; you've learned the great things from them, seen some use cases where they don't fit, and put in the effort to fill those gaps with the k0s project, for example where k3s or MicroK8s isn't feasible, like where systemd isn't configured, so you're not so dependent on systemd services. And if systemd is not installed in the operating system, does k0s take some steps to install systemd and then run on that system? No, we don't install systemd... as I said, we don't have any hard dependencies on systemd at all. If your system has systemd, like most
of the Linux distros do, then k0s kind of injects itself as a systemd unit. But as I said, you're fully free to run k0s in whatever way you can bootstrap the process: whether it's a startup script of your own, whether you do it manually every time the computer reboots, or whether you do it with a cron tab or anything, it doesn't matter to k0s how you do it, as long as the k0s server process gets started. We don't really care how. The installation as a systemd service is just a helper function, so that you don't have to type in a full service definition yourself; that's essentially what k0s install does: it creates the systemd service stub, the service unit, and that's it. Okay, to wrap the questions up: we've covered that you learned some things from MicroK8s and the other distributions and filled those gaps in. So go ahead: if you wanted to summarize how k0s is similar to and different from the other distributions, how would you summarize it in a few words? Okay, so I've intentionally tried to avoid really comparing things with the other distros, because the distros are different; it's like comparing bananas to apples or something. I mean, there are use cases for different distros in different places and those sorts of things, but the main difference in k0s compared to the others is the fact that the control plane is isolated. That allows versatility in the deployment architecture of k0s, and that was probably the main thing that led us to create k0s in the first place. That's it, thanks.
To wrap up, it's been a very exciting session for me, and hopefully everybody watching learned some cool stuff from you. We really appreciate the time and effort of you and the k0s team in presenting this talk with us, so we can understand the motivation, the what, why, and how of k0s, and how to learn it. We're really glad you spent your time with us and hope to see you in the future as well. All the good wishes for your project and to the k0s team; keep doing good stuff, and hopefully everybody watching this session enjoyed the time spent with us. Thank you Jesse, thank you Mirantis and the team for giving your energy, time, and knowledge. Thank you everyone, bye bye Jessie.

2021-02-02
