Now don't get me wrong, sometimes Telegraf and Grafana and Prometheus and InfluxDB, sometimes all of these heavyweight monitoring systems have a place. They are robust, they are battle-tested, and let's be honest, sometimes it can be really fun to know the exact nitty-gritty details about your remote system. Sometimes, though, all you want is a lightweight dashboard to show you, well, this host has been using this much CPU today and has this much disk space left and my hard drives are running at this temperature. If that sounds like you and you don't fancy setting up an entire stack of stuff, Beszel might be what you've been looking for. It's a lightweight, simple way of
monitoring servers, not just Linux but you can also run it on Windows too. I'll show you how to do that towards the end of the video, and also I'm going to show you how to connect a remote Proxmox system, in my case running in England, across the Atlantic Ocean back here to Raleigh, North Carolina, using nothing but Tailscale, with no ports open in your firewall. I'm Alex from Tailscale and today we're going to look at Beszel. It's really easy to get started with Beszel and thankfully it's completely self-hosted, it's completely free, and it supports a whole bunch of monitoring and alerting stuff too.
So you can plug it into Telegram, Slack, Discord, email, ntfy, like a whole bunch of stuff. Go take a look at the docs for yourself if that sounds of interest to you. I'll also cover the notification stuff at the end of the video briefly, but essentially the thing that caught my attention about this project is just how lightweight it is. The agent uses about 8 megabytes of RAM when it's in use, and the hub not much more than that. I can also plug it into OIDC — you remember in a previous video I talked about tsidp? That's an OIDC provider, so you can actually authenticate to Beszel using Tailscale. I'm not going to show you how to do that today. What I am going to do, though, is show you how to configure the Beszel hub to run inside an LXC container on top of Proxmox using Docker. I'm also gonna show you how to configure the binary agent on Linux and Windows 11 too, as a service. That's not as easy as you might think. And then we're also gonna configure Proxmox like I talked about, across the Atlantic, so I can monitor servers no matter where they are in the world using Beszel and Tailscale.
So Beszel works on the concept of a hub-and-spoke type model. And being that I'm using Tailscale as part of this solution, it doesn't actually matter where I run the centralized hub of Beszel. So I'm gonna run this on a Proxmox node in my little home lab cluster that I use for all of these demos. It's just running on one of those little Dell one-liter small form factor PCs.
You don't need anything super powerful to run Beszel. That's one of my favorite things about it, just how lightweight it is. So you can see here, I've just got a standard sort of Proxmox cluster, nothing special going on. I'm just going to create a very basic LXC container here. I'm going to run Docker inside the LXC container and then spin up the Beszel hub inside the LXC inside Docker.
Too much inception? I need a spinning top here, don't I? I think I'm in the real world. Anyway, so that's what I'm going to do, and I will cut to after I've created the LXC. So now the LXC container is created. It's completely blank. There's no Docker on it.
There's no Tailscale in it or anything. And I've just got this little helper script. I need curl for this, of course.
And Debian doesn't ship with curl because it's too bloaty. *sigh* Anyway, so what I want to show you is just a little helper script I've got over here at sh.ktz.me/lxc.sh.
Very basic. It's just calling the Tailscale install script and the Docker install script. If you don't trust me to host that script, you can go ahead and do your own thing. I'm just going to pipe that to sh and then let that run in the background.
So it's going to install Docker inside the LXC. It's also going to install Tailscale as well. So the end goal here, for me at least, is to have an environment where I have Tailscale installed and Docker installed. That's because I'm going to use Docker to run the Beszel hub and then obviously Tailscale to connect everything together.
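For reference, my helper script is essentially just chaining the two official convenience installers, something like this (this is a sketch of the idea — go read sh.ktz.me/lxc.sh yourself to see exactly what it does):

```shell
# Roughly what the helper script does: run the official
# Tailscale and Docker install one-liners in sequence.
curl -fsSL https://tailscale.com/install.sh | sh
curl -fsSL https://get.docker.com | sh
```
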
So all of the remote nodes that I'm going to be monitoring — the hub and those remote nodes need to talk to each other over SSH. They need to be routable between each other, which is why I'm installing Tailscale everywhere I possibly can. Tailscale is now installed, but of course, because this is an LXC, we're going to need to add a couple of extra permissions underneath.
Obviously, if you're not running an LXC, this won't apply to you. But in the Tailscale documentation, you can see there are a couple of lines that you need to add to your LXC configuration. So I'm going to go to this file here, /etc/pve/lxc/ and then the container number. So I'm going to get a shell on the Clarkson host and edit /etc/pve/lxc/100.conf.
I'm literally just going to add these two lines at the bottom and then write that file out. I'll do a pct stop 100, which will stop the Beszel hub container, and then a pct start 100, which will, funnily enough, start the container. And that's just so it picks up the config changes, which allow it to bind to the /dev/net/tun device underneath.
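Concretely, on the Proxmox host it looks like this — the two config lines are the ones from Tailscale's LXC documentation, and 100 is my container ID:

```shell
# On the Proxmox host: allow the unprivileged container to use
# /dev/net/tun (lines per Tailscale's Proxmox/LXC docs).
cat >> /etc/pve/lxc/100.conf <<'EOF'
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
EOF

# Restart the container so the new config is picked up.
pct stop 100 && pct start 100
```
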
This is a containerized thing. Again, it's not a Beszel-specific thing, but if you want to deploy Tailscale inside an LXC, that's how you go about doing it. Okay, so now if we go into our LXC container and do a tailscale up --ssh, the Tailscale daemon is running and everybody is happy. So I'm going to copy my login URL right here. It's going to ask me to sign in as usual with my Tailscale authentication provider. I'm going to pick my standard demo Google account that I use for all of this stuff and voila, beszel-hub is now on my tailnet, as we can see here. Fully qualified domain name of beszel-hub, on my tailnet, in my case velociraptor. Now I'm aware I tell you this in every video, but I can't assume everybody watches every single video. So in the DNS tab of the Tailscale admin console, there is this option here, tailnet name. If you want a fancy slash funny name for your tailnet, we give everybody's tailnet a
ts.net domain for free. You can roll your own name here. So if you want to come up with something other than — I think by default it's like tail, hold on, let me go here. Yeah, so my default name was tail6e5bf. Rolls right off the tongue. So what I wanted to do was use the Tailscale name generator to come up with something I could actually say on camera, or memorize, whatever you prefer. There's a whole long list of these names; go ahead and roll yourself a DNS name. They're all free, everybody gets one. The other thing you're going to want to do is enable HTTPS certificates and MagicDNS, in order for all of the different layers of what we're about to do to work with TLS certificates and all the rest of it. So with that taken care of, we now have the
beszel-hub LXC container on our tailnet, and we're going to want to go ahead now and create the Beszel hub container itself. Now according to the Beszel documentation there are two ways to do this. One is to run the hub as a binary directly on the host. I'm deciding to eschew that option; I just prefer managing services in Docker. Your mileage may vary, but the ease with which I can manage services using Docker is just what I prefer. So
that's what I'm going to go ahead and do, because this is my video. They have a couple of options here. The first one is basically a two-in-one compose file. We are just going to deploy the hub inside this LXC container. We're not going to deploy the agent quite yet; we'll get to that in a moment.
So what we're going to want to do is modify this Beszel service definition so that we can put it onto our tailnet. And lucky for you, dear viewer, I have a file just here which is going to allow us to drop the Beszel hub directly onto our tailnet inside Docker. This is leveraging stuff that we've done previously on the channel — Docker deep dives, all that kind of stuff. There'll be links to the previous Docker videos I've done going into all of the nitty-gritty details. If you just copy and paste this Docker Compose file, you should be good to go.
So we're spinning up two containers. Essentially one container is going to handle attaching Beszel to our tailnet, and of course we have the Beszel hub application itself. I'm going to copy and paste that entire file inside my LXC container, which is running as beszel-hub. I'm going to SSH in as root using Tailscale SSH. No SSH keys required.
Chef's kiss. Okay, so I'm going to create myself a compose.yaml file, and I'm just going to simply copy and paste everything from VS Code straight into here. There are a couple of other things we're gonna have to set up on the tailnet side. So we need to create a tag. I don't know if you noticed, but in the Docker Compose file there is a tag here called tag:beszel, and we can actually use something inside Tailscale's ACL and grants features to limit which nodes can see which other nodes. So there could be a situation where you have a remote monitoring node and you think to yourself, well, I don't really want that seeing everything else on my tailnet. And so by using these tags, we can limit the scope
of the view of the world that these things have. And then it just makes things a lot more secure. So let's go ahead and create that tag in our ACLs. First of all, it's straightforward enough. All we want to do is literally just create this line here of tag:beszel. You can call the tag whatever you would like. Then the other thing that we're going to want to do is go ahead and generate ourselves an OAuth client.
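In the tailnet policy file, the tag definition is just one entry in `tagOwners` — the owner shown here is an assumption for illustration; use whichever user or group should own the tag in your tailnet:

```json
{
  "tagOwners": {
    "tag:beszel": ["autogroup:admin"]
  }
}
```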
Generating an OAuth client has changed a little bit in recent weeks. We want to generate ourselves an OAuth client that's going to allow the container to generate an auth key when it starts. So all we need to select is auth keys, under our OAuth client, and then of course add the tag that we want to put in here. In my case it's tag:beszel, and then I'm just going to click generate client. I'm going to copy this secret into my compose.yaml file, and then the other thing that we have to do before we press go is just verify that we have this beszel-hub.json file in place.
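Here's a hedged sketch of the compose file I'm describing. The sidecar pattern is the standard Tailscale-in-Docker approach; the image names are what's on Docker Hub at the time of writing, and the exact paths and environment values here are assumptions — compare against the snippet in the accompanying Git repo:

```shell
# Write a sketch of the compose file. The tailscale sidecar joins
# the tailnet; beszel shares its network namespace, so the serve
# config can proxy straight to the hub's local port 8090.
cat > compose.yaml <<'EOF'
services:
  tailscale:
    image: tailscale/tailscale:latest
    hostname: beszel                          # name shown on the tailnet
    environment:
      - TS_AUTHKEY=tskey-client-REPLACE-ME?ephemeral=false
      - TS_EXTRA_ARGS=--advertise-tags=tag:beszel
      - TS_SERVE_CONFIG=/config/beszel-hub.json
      - TS_STATE_DIR=/var/lib/tailscale
    volumes:
      - ${PWD}/beszel-hub/tailscale/state:/var/lib/tailscale
      - ${PWD}/beszel-hub/tailscale/config:/config
    devices:
      - /dev/net/tun:/dev/net/tun
    cap_add:
      - net_admin
    restart: unless-stopped

  beszel:
    image: henrygd/beszel:latest
    network_mode: service:tailscale           # share the sidecar's netns
    volumes:
      - ${PWD}/beszel-hub/data:/beszel_data
    depends_on:
      - tailscale
    restart: unless-stopped
EOF
```

The OAuth client secret from the admin console goes into TS_AUTHKEY.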
So where are we looking for that? At the moment, we are mounting a volume here, and this file will enable Tailscale Serve. Tailscale Serve is a really easy way of — think of it like a reverse proxy for your service on your tailnet. You don't have to worry about spinning up a separate reverse proxy anywhere else, and you'll get a TLS certificate for your hub service. There are a couple of moving pieces, I'm aware, but it is worth it. So in the accompanying Git repo, which, by the way, will include a snippet for the Docker Compose file I've just pasted for you, I'll also include this little snippet, and this is really fun. This is the
JSON output of Tailscale Serve. So what we can do is feed that into the container and have the Beszel hub show up on our tailnet without doing any other configuration. It's all done completely programmatically. So where do we need to put this file? Well, for today at least, my volumes are all referencing the present working directory, which is /root. So I'm going to do a mkdir -p for /root/beszel-hub/tailscale/config, and then in that directory that we just created I am going to put beszel-hub.json. Boom. And so this is essentially going to proxy port 8090 from inside the container out to port 443 on our tailnet, and in the process, because ts.net is owned by Tailscale, we do a little bit of magic behind the scenes to enable TLS certificates for free. So wish me luck: if I now type a docker compose up, hopefully this is going to pull the Beszel hub image, it's also going to pull the Tailscale image too, and then it's going to add the two to our tailnet, and hopefully everything will just work. So if we go back now to our Tailscale admin console and view beszel-hub, we can see that that's on our tailnet as before — this of course is the LXC container — and then the container itself, beszel, is right here. So now if I copy this fully qualified domain name and paste it into my browser, you can see that I've got a full TLS certificate. I've got the little padlock. Connection is secure. If we look at the certificate here, it's a Let's Encrypt certificate that was generated this morning.
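For reference, the beszel-hub.json serve config itself looks roughly like this — this is the JSON shape `tailscale serve` expects, with the `${TS_CERT_DOMAIN}` placeholder substituted by the container at startup, and port 8090 matching the hub (I'm writing to a local directory here for illustration; in the video this lives under /root/beszel-hub/tailscale/config):

```shell
# Create the serve config mapping tailnet HTTPS :443 to the hub's
# local port 8090.
mkdir -p beszel-hub/tailscale/config
cat > beszel-hub/tailscale/config/beszel-hub.json <<'EOF'
{
  "TCP": { "443": { "HTTPS": true } },
  "Web": {
    "${TS_CERT_DOMAIN}:443": {
      "Handlers": { "/": { "Proxy": "http://127.0.0.1:8090" } }
    }
  }
}
EOF
```
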
So I'm just going to go ahead and create myself an account here, tailandscales@gmail.com. This is where we can actually start getting into the configuration of Beszel itself now. Hooray. And we have a working hub. That's it. That's all it took. So let's just recap the steps, because I'm always aware that when I'm explaining things, it takes a long time to explain what is ostensibly just a two or three step process. We created an LXC. We updated the packages. We installed Tailscale and then Docker as well inside the LXC. Arguably, you could get
away without installing Tailscale inside the LXC, but I like to do that just because of Tailscale SSH, so I can get in there and do stuff. Once we had the bare necessities of Tailscale and Docker in place inside the LXC, we modified a compose.yaml file, we created the OAuth client on the Tailscale side, and we made sure our DNS was set up. It's actually quite a few steps, isn't it? And then we also made sure that our Tailscale Serve configuration was in the right directory as well. And then we created it. It was as simple as that. Very straightforward 97-step process. But the upshot is we now have the Beszel hub running. Now we can start to connect
different agents from across wherever your infrastructure is running. So we're going to start just by monitoring these three Proxmox hosts on the same LAN, and then we'll do a couple of other more exciting things later on. Now the real fun can start. This is where we're going to start installing the agent on multiple different operating systems. I'm going to focus primarily on Linux here, and we've already shown you how to spin things up using Docker Compose for the hub. So why don't we jump to installing the agent on top of a Proxmox host and connect the Beszel hub to an agent to start with. So in the Beszel documentation, there is an agent installation page.
You can see that they support running the agent as both a Docker container and also a binary. Now I've kind of shown you the gist of running a Beszel container through the hub already. So I'm going to deploy my first agent as a binary on top of Proxmox, on top of Clarkson, my primary host in this Proxmox cluster. They've got a one-line install script. There are a bunch of manual steps here if you would prefer, but I'm lazy and I don't actually mind these one-liners too much, especially when I'm doing these demos.
So I'm going to run this install-agent script, and now it's going to ask me for my SSH key. So I need to jump back to Beszel, back to the hub that we deployed earlier, and click on this button up here: add a system. I'm going to call it Clarkson. And for the host or IP, I could use a Tailscale IP, or I could use, in this case, the local LAN IP, because it's routable on my local LAN subnet.
But just for the sake of argument, I am going to use the Tailscale IP, just to prove to you that that works. And I suppose in doing so, it also shows that this host could literally be anywhere in the world. I also need to make sure I select binary here, too. I nearly missed that.
Now, there is a one-line option here of copy Linux command. I'm actually going to do that and see what difference that makes to the install command right here. Oh, I see.
So it just embeds the SSH key directly as an argument that it passes to the script. Nice. Okay, so now I've done that.
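The command the hub copies to your clipboard is shaped roughly like this — the URL, flags, and port here are from memory and should be treated as assumptions; use the exact string the "copy Linux command" button gives you:

```shell
# Illustrative shape only: fetch the agent install script and pass
# the hub's public SSH key (and listen port) in as arguments.
curl -sL https://get.beszel.dev -o install-agent.sh
chmod +x install-agent.sh
./install-agent.sh -p 45876 -k "ssh-ed25519 AAAA...your-hub-public-key"
```
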
It should just be hands-off-keyboard time, apart from answering questions. Daily updates for the agent? Yes, please. That sounds good. And now if we go back to Beszel, it's already working. Goodness, I love this software. So many times with these things there are all these little gotchas, but we can see here it's pulling in the IP address — the Tailscale IP address — uptime is zero hours because I only rebooted it just before I started recording, the kernel is a Proxmox PVE 6.8 kernel, and here is the CPU. I told you it wasn't a powerful system: an i5-6600T. And you can see here that it's now monitoring quite a few little things. It's picked up things like the NVMe sensors, the core temperatures of the CPUs, all that kind of stuff. Network bandwidth, you can see, is still pretty low, so we probably need to let this run for a little while in order to have some actually useful stats. But just to give you some kind of an idea of the scaling you can get with Beszel when you have it across multiple different agents and hosts and that kind of thing, here's what my personal deployment of Beszel looks like. So I'm going to go to the grid view here, and you can see I've got, what's that, six, seven hosts, two of which happen to be the same. I'm going to get onto Windows in just a minute, because there are a whole couple of extra steps to run a service on Windows — that is a barrel of laughs. But essentially you can see I've got the Home Assistant plugin turned on here, so you can monitor things like Home Assistant. There is an add-on for Beszel in here. So if I literally just search for Beszel — I think I had to put a custom repository in. I won't get too much into the Home Assistant side of things in this video, but you can see there's a custom repo here for the Beszel agent on Home Assistant.
I can monitor Linux hosts. Deep Thought is another Proxmox host I have. Morphnix is a NixOS host that I'm monitoring. This is actually my primary media server.
So if I go back, I don't know, 24 hours, you can see last night — this is round about when we were watching some TV, you know, or doing some downloads or something. Not a big spike of stuff going on, really. But I like that I can see what each of my different containers is up to in terms of RAM usage over time. There's just some nice stuff in there. Disk throughput, again, 120.
I guess there was some kind of a ZFS scrub going on or something like that. It's just really nice to be able to spot these patterns, and you can see all of my hard drives, for example, temperature-wise: pretty, pretty stable. You can do things as well like add and monitor extra file systems by passing extra environment variables to the agent, either through a Docker container's environment variables or just by setting them on the host itself. So we're going to dig into the documentation slightly over here and just take a look at the environment variables section. You can see here are all the different things that you can configure — the sysfs paths for sensors and that kind of thing in case certain things aren't being picked up, or indeed a whitelist of temperature sensors to monitor if too much stuff is being picked up. Where it really started to pique my interest was GPU monitoring.
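Before we get to GPUs, a quick illustration of those environment variables on a binary agent installed as a systemd service. The variable names EXTRA_FILESYSTEMS and SENSORS are the ones I remember from the docs, and the mount point and sensor names are made-up examples — double-check all of this against the current documentation:

```shell
# Hypothetical systemd override for the beszel-agent unit:
# monitor an extra mount and whitelist specific temperature sensors.
sudo systemctl edit beszel-agent
# ...then in the editor add:
#   [Service]
#   Environment="EXTRA_FILESYSTEMS=/mnt/tank"
#   Environment="SENSORS=coretemp,nvme"
sudo systemctl restart beszel-agent
```
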
Now, I've got a couple of Ollama instances running in this house, just testing stuff again for this channel and doing some stuff with Home Assistant and their LLM integrations. And you can see that Beszel uses nvidia-smi to monitor NVIDIA GPUs. Now my personal installation of Beszel has a node in it called nix-nv-llama, and this is a NixOS host with an NVIDIA GPU passed through to it, on top of an AMD EPYC 7402 system that I have in my basement, and I wanted a way of monitoring the GPU power draw. You can see at idle it's sat right now at 15 watts, give or take — it's drawing 15 watts doing absolutely nothing at all. But we can see, if I want to, you know, solve FizzBuzz in TypeScript for example, it's now going to hit my nix-nv-llama host, running fully locally using the latest Llama 3.2 model. This is using Open WebUI, which again is a bit of a departure from Beszel, but I'm just trying to show you that if you put load on these things, you can actually monitor that in pretty much real time inside Beszel. And you can see that my A4000 GPU here has had some stuff come through on it. So I'm a huge fan of this Beszel agent, but where it gets interesting, at least, is how it picks up that nvidia-smi path. So in my case, I worked with my good friend Claude to write a NixOS module — a very simple, basic module — for deploying the Beszel agent on top of Nix. If you're interested in that, the source code will be linked in the description down below. You have to make sure that your PATH includes nvidia-smi, and the way to do that, for me at least, in Nix was to include the current system bin path as part of my PATH in the service configuration, as you can see here for the systemd unit. This also applies to the Windows installation a little bit later on, because in WSL2, nvidia-smi isn't quite where Beszel expected it to be. I'm connected in here to a Windows 11 desktop. This just so happens to be my gaming desktop next door, and the reason I wanted to monitor it was because it's got an NVIDIA 3080 in it. What I wanted to do was spin up the Beszel agent in WSL2, because that was the way the developer of Beszel suggested we go about running the agent on Windows. The downside of doing it in WSL2, though, is that it's running inside a virtual machine, and as such the agent only has a view of the world that matches the view the virtual machine has. So it's got the same limitations on CPU and memory and all the rest of it. So whilst it does work under WSL2, and I've written a blog post explaining how to do it, I'm actually not going to show you the WSL2 part today, because I figured out how to compile the native Go binary for Windows, and then you get all of the metrics from your Windows host directly through into the agent as well. Now, what I must say at the beginning of this is that instructions for both the WSL2 and the native compiled Windows service versions are available on my personal blog. But you must proceed at your own risk, because there is a GitHub issue on the Beszel project stating that Windows Defender finds an issue with the agent.exe file once it's compiled. I didn't personally run into this issue, but you might.
And I just want to, you know, in the interest of full transparency and disclosure, just say proceed at your own risk.
Okay, here be dragons. Now, compiling this is actually pretty straightforward if you've got access to a Linux box. And of course, if you're on Windows these days, you do, through WSL2, have access to a Linux box. So what we can do is, well, first of all, we've got to make sure that we've got Go installed on our Ubuntu desktop. So sudo apt install golang-go, I think is what it is.
Yes, already installed, because I've already done it before I made the video, but such is life. Next up, we want to clone the Git repo. Now the end result that I'm aiming for here is to be running the agent.exe file as a native binary on the Windows host, but running it as a Windows service. And that was the tricky part for me that took a little bit of figuring out, using something called NSSM, the Non-Sucking Service Manager.
We'll get to that. Don't worry. We will get there. But in the meantime, you need to compile the agent for Windows. So clone the Git repo, go into the beszel directory, and then the next bit: we want to go into beszel/beszel/cmd/agent.
So we're already inside beszel. So change directory into beszel/cmd/agent. And then we need to compile it.
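Putting those steps together, the cross-compile looks roughly like this — the directory layout is as described in the video and may have changed upstream, so treat the paths as assumptions:

```shell
# Clone the repo and cross-compile the agent for Windows on amd64.
git clone https://github.com/henrygd/beszel.git
cd beszel/beszel/cmd/agent
GOOS=windows GOARCH=amd64 go build -o agent.exe .
# agent.exe can now be copied over to the Windows machine.
```
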
So we're going to put these command line flags in place: GOOS=windows and GOARCH=amd64. I can list all the files that are on my desktop, which means I can literally just copy agent.exe out of there — not cd, I want copy — copy agent.exe onto my desktop. And now on my Windows machine I should have agent.exe. The next step is to create a directory under Program Files; I just called mine beszel-agent, but you can call it whatever you want. You can see I've actually got some previous stuff in there from where I was messing about — you don't need half of this stuff, you literally just need the agent. Okay, once you've copied the agent file over into your Program Files directory — I'm not gonna do that, because mine's working just fine in the background — we need to go ahead and configure the NSSM side of things. Now this is a super old project. I honestly hadn't been to this website in maybe a decade or more, and I'm not sure it's seen any updates since then — the website, at least — but it doesn't matter, it still works. So, NSSM. I use Chocolatey to manage packages on my Windows hosts, because I'm a Linux guy and I like the command line. So we go to PowerShell, run as administrator, and we can just do choco install nssm, and that's going to pull down the NSSM binary for us and put it onto our path. So now we can do things like create services, start them, stop them, and all the rest of it. And you can see in here that we have a very simple set of instructions to follow. We're going to install a service named beszel-agent pointing to the exe file that we just compiled and put into our Program Files directory — obviously make sure these paths match up with what you created a few moments ago. And then, in terms of configuring the host itself within Beszel, we need to grab the public SSH key from Beszel.
So we would go to add system like this and then binary and we'd call this Windows. And I think in my case, it's 7.54. I would copy the SSH key just here, not the Linux command this time.
I'm going to add this system. You can see it's just called Windows for right now. But if I go back onto my Windows machine here, what I'd want to do is run this nssm set beszel-agent command. We're basically just setting an environment variable — same as we would do with a systemd service or something like that — of KEY equals and then the SSH key. And then we quite simply start the beszel-agent service.
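In full, the elevated-PowerShell side looks something like this — AppEnvironmentExtra is NSSM's parameter for per-service environment variables, and the key value here is a placeholder for your hub's actual public key:

```shell
# From an elevated PowerShell prompt on the Windows machine:
choco install nssm -y

# Register agent.exe as a service and hand it the hub's public key.
nssm install beszel-agent "C:\Program Files\beszel-agent\agent.exe"
nssm set beszel-agent AppEnvironmentExtra KEY="ssh-ed25519 AAAA...hub-key"
nssm start beszel-agent

# Check it came up:
nssm status beszel-agent
```
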
And that's that. It should just pick it up right away. And I'm not going to break my existing deployment just for the sake of this demo because I don't have loads of Windows systems in this house to demonstrate this on. But you can see that Windows 11 is up. I just turned it on a few minutes ago before recording this segment, which is why the graph was empty. And you can see it picks up everything I've got in there.
So I've got a Windows 11 desktop. Uptime is 44 hours, even though most of that was just spent in suspend and sleep mode. Here's the specific kernel build of Windows that I'm running, and it's got a 9800X3D in it. And then if we scroll down, we can see that the GPU power draw is picked up.
So it's using the native NVIDIA tooling to pick up some of the GPU metrics and that kind of thing. I'm actually kind of amazed that this is working: Hogwarts Legacy, over Windows Remote Desktop, of all things. I'm not even doing anything fancy like Moonlight or anything like that. The general point, though, is just to show you that I've put some load on the GPU, and you can see that the CPU's gone up, memory's gone up a little bit, disk I/O, all that kind of stuff. But GPU power draw now is starting to get to where we would expect it to be running a AAA title these days. So for me, I think Beszel is really useful across multiple operating systems.
And there are ways to run Beszel with launchd on macOS as well, which I won't get into today, because it's a pain, to be honest. But it does work.
So the gist of Beszel here is that you can create a very lightweight way of monitoring your entire tailnet using nothing but a very simple couple of commands to add things, via the public SSH key authentication model that it uses to talk between the agent and the hub. And so in situations where something like Prometheus and Grafana is a bit too heavyweight for you, or something like Netdata might be a bit too big and a bit too heavy, Beszel is perfect. It's a beautiful middle ground above Uptime Kuma, which is just monitoring things like a ping or a web request and that kind of stuff — although Uptime Kuma is awesome in its own right. This is much more useful for monitoring system stats: GPU power draw, CPU temperature, disk temperatures, whether your file system fills up, and all that kind of stuff. And I haven't even touched upon any of the notification features that Beszel offers either. It supports the Shoutrrr library, which supports, as you can see, notifications for Discord, email, IFTTT, Mattermost, ntfy — that's great, because that's another self-hosted notifications stack, so you can keep everything in-house if you need to — Slack as well, Telegram, my goodness. This tool is legit, and I think Beszel is going to become a real part of my arsenal moving forward. So as you can see, it's a really quick and easy way to monitor multiple hosts across not just your LAN but also your tailnet too. And again, just look how easy this is to add a new host. This host is in England. I'm in North Carolina, and we're gonna add a host on the other side of the Atlantic to Beszel in a couple of minutes.
So I'm gonna go in here, I'm gonna name this Snowball, which is the name of the host. I'm gonna go into my tailnet and grab the IP of Snowball. So Snowball is over here, and I'm gonna do this.
I'm gonna copy the 100-dot-whatever address, put that in here, copy the Linux command, and then I'm gonna go to my terminal window and just copy and paste the install-agent script in there. And I've just added a host on the other side of the world to Beszel using Tailscale in — how long was that? Like 30 seconds. So there we are. I have just added a host on the other side of the Atlantic to my Beszel hub using Tailscale as the connecting fabric. As long as those two hosts can actually see each other — you know, they're both on the same tailnet and are routable.
You can check that with tailscale ping on the command line. Make sure you've not got any ACLs or grants in the way to prevent them from seeing each other — it's very easily done once you start messing around with those things. Those two hosts can now see each other, and my Beszel hub can now monitor that remote Linux host in England with no firewall rules open or any other configuration.
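For example, a quick connectivity sanity check from the hub's shell (the hostname here is this demo's):

```shell
# From the beszel-hub LXC, confirm the remote agent host is
# reachable over the tailnet before adding it in the UI.
tailscale ping snowball
tailscale status | grep -i snowball
```
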
Projects like this are why this job, for me, is so easy: I find this stuff really fun, just connecting stuff together that perhaps shouldn't be talking to each other. Using Beszel I can now monitor this host remotely, and I didn't really have to do a whole bunch to make that happen. Ah, it's so cliche, it sounds so cheeseball, but I love Tailscale. So a big thank you to those of you that have made it to the end of this rather long video today, and until next time, thank you so much for watching. I hope you have a wonderful 2025. I've been Alex from Tailscale. Thank you.