I Set Up My OWN Private Cloud! Here's how YOU can too!


What if I told you there was an easy button for OpenStack? Easy as in you could have it up and running with just a few commands, complete with a beautiful user interface, and end up with your own self-hosted private cloud at your fingertips to start using and learning on. If you've ever wanted to learn or run a private cloud, this video is for you. Hey there, homelabbers, selfhosters, IT pros, and engineers. Rich here. It's been a while since we've dived into a virtualization platform, let alone an actual private cloud platform, and I think it's high time that we remedy that. Right around the

beginning of April, Platform 9 announced the release of their community edition of Private Cloud Director, which they offer with absolutely no limitations, as in it's fully featured and entirely community supported. Private Cloud Director CE is built on top of OpenStack, KVM, and Kubernetes, and they've taken the complexity out of deploying all of that yourself. So, in this video, we're going to dive in and take a look at Platform 9's Private Cloud Director Community Edition. And as we always do, let's learn a bit more about Platform 9 first. Platform 9 Systems was founded in 2013 as a cloud infrastructure management company that provides SaaS-based solutions to manage and operate private, edge, and hybrid clouds. Its main focus is making Kubernetes, OpenStack, and other open-source infrastructure easy to deploy and maintain, particularly in on-premises or hybrid environments. In

January of 2017, the company launched the industry's first infrastructure-agnostic SaaS-managed Kubernetes service along with Fission, an open-source serverless framework built on Kubernetes. In December of 2020, Platform 9 released Managed Bare Metal, enabling SaaS management of physical servers for various workloads, including Kubernetes and databases. In November 2024, Private Cloud Director was launched. PCD is a cloud-managed

on-premises virtualization platform designed to be an alternative to VMware's vCloud Director. During that same time, Platform 9 also released automated tools to easily migrate from VMware to Private Cloud Director. In March of 2025, Platform 9 introduced the free community edition of Private Cloud Director at KubeCon + CloudNativeCon Europe 2025, allowing IT teams to experiment with private cloud management without limitations or hidden fees. During that same month, Platform 9 announced a partner program to assist enterprises in migrating from VMware to Platform 9's Private Cloud Director, aiming to reduce costs and migration timelines. I think

we should spend a moment here and set some levels of understanding as to what Private Cloud Director is, who Platform 9 targets as customers, and whether something like PCD is right for you. First off, Private Cloud Director is not explicitly intended to be a replacement for traditional enterprise data center virtualization as you would typically know it with VMware vCenter, Proxmox, or XCP-ng. It is entirely able to do that, but you need to think bigger. PCD targets companies that run private clouds either for internal use or as a service for others. The best way I can describe this is to think about public clouds like AWS, Azure, or GCP. You, as

a customer, have pre-canned VM options to choose from. In the business, we typically call these t-shirt sizes. You get a few options for predefined VM sizes. You choose an image, whether it's

Windows or Linux, and then you press the go button and the VM is built. This is entirely different than how you run VMs in Proxmox or ESXi where you can customize every component of the virtual machine like the disk, RAM, CPU count, and then you manually build out the VM and the OS that's part of it. Private Cloud Director offers you the ability to become your own private cloud hosting service. You define the t-shirt sizes and you provide the OS images to be deployed and your customers can then deploy what they need. In essence, this

is not your typical home-lab-friendly Swiss Army knife virtualization. It's very much infrastructure-as-a-service style management. PCD also features multi-tenancy to segment out and separate different customers, has decoupled cluster and regional functionality, offers API-first consumption, and allows you to set limits and quotas on how much your customers can consume as well. While that might turn off many of you when it comes to considering trying out PCD in your home lab, I'm sure there are a lot of you out there who see this as an opportunity to learn what it's like behind the scenes of hyperscalers and cloud service providers and have the opportunity to build one out yourself and learn how to manage them. And that is pretty cool. One last thing in this

video, I'm going to go higher level. I'm going to give you a quick overview and rundown of the installation process and then jump right into the GUI for PCD so you can have a good look at what the management UI looks like. If you guys really want a full how-to video on installing PCD Community Edition, get down in those comments and let me know. Also, keep in mind that on Platform 9's YouTube channel, they have a quick six-minute video that shows you the entire installation process start to finish, and they do a really good job walking through it. I'll drop a link to that video in the description. Let's discuss

PCD's physical architecture and the minimum requirements for running Private Cloud Director. PCD effectively has two major physical components: the Private Cloud Director management system and the hypervisors, or hosts, that run the workloads. The PCD CE management system is responsible for the user interface, workload deployment, orchestration, and management of your private cloud. And

workload hosts run your workloads, be that virtual machines, containers, and so on. Any number of these components can be physical or virtual, and how you choose to deploy them depends on your hardware and objectives. Let's get the

minimum requirements out of the way for the Private Cloud Director. First, PCD requires a minimum of Ubuntu 22.04 on an AMD64 cloud image. You'll need a minimum of 12 CPUs or cores for PCD, with a recommended amount of 16. PCD requires 32 GB of RAM to function properly. And PCD requires 50 GB of local storage, with 100 GB being recommended. For your hypervisor hosts, you'll need a minimum of an Ubuntu 22.04 AMD64 cloud image. You'll

need a minimum of eight CPU cores. You'll also need a minimum of 16 GB of RAM, and you'll need at least 50 GB for local storage. Obviously, these minimums don't provide you with much room for running virtual workloads, so I recommend allocating as many resources as you can spare for your VM workloads. For my installation, I opted

to install the Private Cloud Director Community Edition on Ubuntu 22.04 Desktop running as a VM in Proxmox. I opted for a GUI because part of the installation process can require you to access the PCD web GUI via the local machine itself. For my hypervisor hosts, I opted to install Ubuntu 22.04 Server on my quad-node 2U server, which I built a while back for testing. Each node features two

Xeon Gold 6132 CPUs, 12 GB of RAM, and approximately 3 TB of local storage. Let's get through a quick rundown of the installation now. All right, let's get the Private Cloud Director Community Edition management system built first.

I'm following the instructions on Platform 9's website, which you can see in the left pane. And in the right pane is my freshly built and updated Ubuntu Desktop 22.04 VM's command line via SSH. First thing we need to do is become root by typing in sudo -i and hitting enter. Now we'll copy the install script

command shown in the left window pane, paste it into the right window pane, and hit enter. The setup and installation process for PCD CE takes a while; in fact, the installation output even states that it can take around 45 minutes to complete.
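For reference, the whole bootstrap boils down to a couple of commands. Here's a minimal sketch of what that looks like; the script URL below is a placeholder, so copy the actual one-liner from Platform 9's install page:

```
# Become root on the PCD CE management VM
sudo -i

# Run Platform 9's installer script. The URL below is a placeholder --
# copy the current one-liner from the Private Cloud Director CE install page.
bash <(curl -fsSL https://example.platform9.io/pcd-ce-install.sh)
```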

So, go grab a cup of coffee, a White Monster, and listen to some Creed or something else to pass the time. Once the installation process is completed, you'll be provided with the URL, user, and initial password for your admin user. Grab that so we can log into the management interface. One thing to note:

you'll need to either create DNS records for PCD CE in your internal DNS server to resolve the specific host names PCD uses, or you'll need to add entries to the local hosts file on whatever machine you're accessing the PCD GUI from. It's all listed in their installation guide, so you can find more details there.
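If you go the hosts-file route, it's a one-line entry per machine. A minimal sketch, assuming your PCD management VM lives at 192.168.1.50 (swap in your own IP, and check the install guide for any additional host names your version expects):

```
# /etc/hosts entry on the machine you'll browse to the PCD GUI from
# (replace 192.168.1.50 with the IP of your PCD management system)
192.168.1.50    pcd-community.pf9.io
```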

Now, let's get logged in, configure our cluster settings, and add some nodes to our cluster. All right, let's get logged into PCD, talk about cluster blueprints, and add some hosts to our cluster. Don't worry, we'll do a deep dive of the GUI after this initial setup stuff. To get to the GUI of Private Cloud Director, we'll head up to the address bar and enter the address pcd-community.pf9.io. The site has a self-signed cert, so you'll see the typical warning in your browser; just accept and move on. Our next step is to get logged into the GUI using the default user and password from the PCD installation. To do this, we'll need to head up to the top right corner of the authentication window and click on use local credentials, since PCD is using local auth and not SSO. Now, we'll toss in

the username and password from the installation and log in. Welcome to the private cloud director dashboard. Again, we'll dig in deeper here post setup, but initially we need to create our cluster blueprint and get our hosts added. To do this, we'll start by heading over to infrastructure on the left and then select cluster blueprint. In PCD, the cluster blueprint is basically the template that defines all of your standard configurations for your hosts in your cluster. The blueprint is applied to every host you add to your cluster, so each host is identically configured and ready to use. You can

configure things like enabling VM high availability in case you lose a host, automating resource rebalancing to spread workloads evenly across your cluster, defining your standard network configuration, defining your persistent storage backends, customizing cluster defaults, and of course, giving your cluster a name. After you've configured your cluster blueprint as you like, you then add hosts, which will have the blueprint applied when they're added. Adding hosts to your cluster is really a two-part process. You'll need to have a

base install of Ubuntu on whatever physical or virtual hardware you're planning on using for your hosts. And then you'll need to install the host agent onto your hosts to configure them and add them to your PCD cluster. To start the host agent install on your hosts, we'll swing over to cluster hosts under infrastructure on the left. I'm going to go split screen again so we can better see the installation process. Obviously, we have no hosts yet since this is a fresh install. So, we'll head up and click add new hosts in the top right. Private Cloud Director gives you

all the commands you'll need to execute on your fresh hosts to install the host agent and get those hosts added to your cluster. One quick note, you need to be root to execute these tasks. So, if you're not already, become root first.

Let's run through this now. First off is to copy the first command, paste it into the SSH session on our soon-to-be host, and hit enter. Next, we'll copy the second command using the copy button, toss that into our SSH session on the right, and hit enter. You'll be asked for the password for your admin account, so enter that and hit enter. If you require

a proxy URL to get out to the internet, you'll need to enter that here. I don't have one, so I'll leave that blank and hit enter. Next, you'll be asked for your MFA token. Unless you've set up MFA for your user, you can ignore this. I haven't, so I'll just hit enter to skip it. All right, last step here is to copy in the last command on the left, paste it into the SSH session on the right, and hit the go button. Now, we sit back

and wait for the host agent installation to complete. The good news is it doesn't take nearly as long as the PCD install. Of course, how long it takes depends on your connection speed to the internet to download packages and the hardware you're installing it on. And done. Now, let's check and make sure that our new host appears in PCD. So, we'll swing over to

the left window pane, where I still have my PCD GUI open, and click refresh in cluster hosts. And there's my newly added host listed. The name P9-Node1 is the host name of the system I built. Awesome. If you run into any issues adding your hosts, make sure you've imported the SSL certificate of your PCD. Initially, I had this problem because I'm terrible at reading, apparently, but it's clearly stated in the setup guide.
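On Ubuntu hosts, trusting that certificate is a quick two-step. This is just a sketch; the certificate file name and where you fetch it from come from Platform 9's setup guide, so treat the path below as illustrative:

```
# On each workload host, add PCD's self-signed certificate to the system trust store.
# The file name and how you obtain it come from Platform 9's setup guide.
sudo cp pcd-ca.crt /usr/local/share/ca-certificates/pcd-ca.crt
sudo update-ca-certificates
```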

After you've added all of the hosts you want in your cluster, you'll need to authorize those hosts to be part of the cluster and define their roles. It's a simple process that I've already completed. As you can see here, I have four hosts, and they all have the same host network config applied. They're all enabled to function as hypervisors for running workloads, one is defined as the VM image library, and all have the same persistent storage applied. All right,

you got this far. Let's dig into the GUI for Platform 9's Private Cloud Director Community Edition. Let's get logged in. I'm still using local credentials, so I'll swing over and click use local credentials to get to the local login screen. Once I enter in

my admin account's email and password, I'll click sign in below. All right, welcome to the dashboard of Private Cloud Director. Again, the GUI is nice and clean and follows the typical approach of having your navigation on the left-hand side and your content on the right. The overview dashboard is broken into five main cards of information. Across the top, we have a card for virtual machines that includes the quantities and the status of the VMs that are part of the current tenant. Next over is

a card for the state of the available hypervisors in your cluster. And down below, we have detailed tenant quota usage information, including details on compute quota, storage quota, and network quota usage. Keep in mind that everything you see here is dependent on the tenant that you're viewing by default. There's a slider option at the

top to allow you to see all tenant info as well. My deployment currently has only a single tenant, so we're good to go here. Let's dig into the rest of the PCD GUI. Now, over on the left under infrastructure is the familiar cluster blueprint section we showed earlier. Let's take a look at my deployed blueprint. For my blueprint, I've enabled VM high availability because I've added four hosts to my cluster.

You'll need a minimum of four hosts to use this feature. VM high availability works the same as it does in VMware and other standard hypervisors: if a host crashes or goes offline, PCD detects the loss of the host and starts those workloads again on other hosts in the cluster. Keep in mind you need to have your VMs hosted on persistent storage that's available to all of the hosts, just like in other hypervisors. Next is

automatic resource scheduling. Yep, PCD has automated load balancing for free. I've configured mine to spread the workloads across the hosts in my cluster versus consolidating them on a few hosts, and I've left the default time frame for rebalancing at 20 minutes. Down in cluster network parameters, I've set my default internal domain name and left both DVR and virtual networking enabled. In host network configurations, I've defined the interface that has access to my physical network on each of my hosts and defined what that interface can be used for. Down in persistent storage

connectivity, I've defined my NFS connection for the external storage my hosts will use for persistent storage. PCD supports a lot of different connection types for external persistent storage, from NFS, Ceph, and LVM to NVMe over TCP, Fibre Channel, and more, and it includes support for many of the big-name SAN storage providers on the market today. Down in customize cluster defaults, I've left this set to the default settings. And finally, down in cluster name, I've named my cluster 2GT cluster. Let's check out my hosts now. Over on the left under cluster hosts, you can see the four hosts that I've added, including the one you saw me add earlier in the video. The three main cards in the middle give us macro stats of overall CPU, memory, and storage usage in the cluster. And below in the

host list table, we can see a variety of different stats on the hosts themselves. Under host aggregates on the left, I don't have anything defined. Host aggregates allow you to create smaller collections of hosts that have different features. Say, for example, you had hosts with GPUs in them: you could create an aggregate of just those hosts to target workloads specifically at them.
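Since aggregates are a standard OpenStack construct under the hood, you can also manage them from the CLI. A rough sketch, assuming a GPU-equipped host named p9-node2 (the names here are made up for illustration):

```
# Create a host aggregate for GPU-equipped hosts and add a host to it
openstack aggregate create gpu-hosts
openstack aggregate add host gpu-hosts p9-node2

# Tag the aggregate so flavors or scheduler policies can match against it
openstack aggregate set --property gpu=true gpu-hosts
```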

In virtual machines, we can see all the actively deployed VMs for the specific tenant. The cards across the top provide macro stats about the VMs: the number of vCPUs, RAM, and storage provisioned to them. Interacting with VMs in PCD is a lot different from what you'd expect to see in traditional data center hypervisors. For example, if we select my Rocky test VM, additional functions become available. From there, you can quickly start, stop, access the console, and delete the VM. Under the power

actions dropdown, you have a few additional functions like hard reboot and rescue. Under the other dropdown, you can further manage a VM with options to rename it, create a snapshot, add or remove a public or private IP, edit security groups, edit metadata, resize the VM, migrate it, and rebuild it. Clicking on the VM's name in the list sends you to an overview page that provides a detailed overview of the guest workload, but noticeably missing are the VM controls you might typically expect coming from other hypervisor platforms. Let's quickly deploy a virtual machine to show you what that process is like. We'll head back over to our list of VMs again and then click deploy new VM at the top. Your first stop is to choose where your VM will boot from. Your options are to

boot from an image, create a new volume for the VM, or use an existing volume. VMs that boot from an image don't have persistence, meaning once that VM is shut down, any changes are lost as the VM is essentially booting from a predefined image. Choosing a new volume or an existing volume allows you to create a VM that has persistence and any changes made will survive a reboot. I want this VM to have permanent storage, so I'll select new volume. The next step is to define the size of the volume. For

the sake of this demo, I'll leave it set to 20 GB. Under storage type is where you choose where to store your new persistent volume, and in the list, we can see my NFS shared storage entry. So I'll select

that one. Now we need to select the image we want to use for the VM. Images can be pre-built deployments, like the Linux cloud images you see here with the Kali image, Rocky image, and the Noble image. And they can also be ISO images

as well, like the Ubuntu desktop image shown. Keep in mind these cloud images aren't used for installing Linux. They're completely pre-built Linux images that are essentially ready to deploy. No

installation necessary. Again, think private cloud: your customer chooses an image and it's ready. They don't manually install the OS themselves. We'll choose my Rocky image and move on. Next, we need to choose our VM t-shirt size or, as PCD calls them, flavors. Again,

these are predefined virtual machine hardware configurations. When you deploy a VM, you don't have granular control over the quantity of vCPUs, RAM, or storage. You have to choose from one of the predefined available sizes. We'll look at creating custom flavors in a moment. I'll grab small here, and we'll move on. Now, we need to select which network we want our VM to connect to.

I've got two networks defined: a server network, which puts the VM directly on my LAN, and a virtual network called VertNet0, which is a virtual network between the hosts in my cluster. I'll grab

VertNet0 and move on. Nearly done here. We need to provide a name for the VM. You can select a predefined SSH

key if you've added one to your configuration. If you're running a cloud image, you can enable cloud-init and customize the deployed cloud image, including changing the default password in your image, assign a security group to your VM, add any metadata you'd like, and finally deploy the virtual machine. Now, back at the list of VMs, we can see my new VM being deployed. Once the VM shows a state of active, we can select it and check out the console.
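Because PCD is OpenStack under the hood, that whole wizard has a rough CLI equivalent. Here's a sketch using the standard OpenStack client; the flavor, image, network, and key names are just the ones from my walkthrough, so substitute your own:

```
# Boot a VM from a new 20 GB volume built from the Rocky cloud image,
# using the "small" flavor and attaching it to the vertnet0 network
openstack server create \
  --flavor small \
  --image rocky-cloud \
  --boot-from-volume 20 \
  --network vertnet0 \
  --key-name my-ssh-key \
  rocky-test-vm
```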

Moving on, server groups allow you to create affinity and anti-affinity policies for virtual machines. I don't currently have any defined, but creating policies like this is useful if you need to make sure certain VMs are or aren't together on the same host. This is especially useful in fault-tolerance scenarios, where spreading VMs across different hosts can prevent downtime in case of a host failure. All right, let's talk about images. Here's where you can upload VM images that can be used to create virtual machines. All of the major distributions have cloud images available for their OSs, so it's incredibly easy to search for your distribution of choice, download their image, and upload it to PCD. PCD also supports other image types like ISOs, VMware VMDKs, and Microsoft VHD and VHDX images.
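As with everything else, images can also be pushed in via the OpenStack CLI if you'd rather script it. A minimal sketch, assuming you've already downloaded a qcow2 cloud image (the file and image names below are illustrative):

```
# Upload a previously downloaded qcow2 cloud image into the image library
openstack image create \
  --disk-format qcow2 \
  --container-format bare \
  --file Rocky-9-GenericCloud.x86_64.qcow2 \
  --public \
  rocky-9-cloud
```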

Under flavors, you can define the predefined virtual machine sizes you want your users to be able to deploy in your private cloud. The flavors you see listed here are the defaults, but we can easily create our own by heading over to create flavor in the top right. We'll need to give our flavor a name, select the number of vCPUs the configuration will have, the amount of available RAM, and the size of the disk deployed. You can also customize metadata, match host aggregates, and make the flavor available publicly to all tenants on the system.
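The same thing from the CLI is basically a one-liner; this sketch creates a flavor along the lines of the "small" one I used earlier (the name and sizes are mine, adjust to taste):

```
# Create a custom flavor: 2 vCPUs, 4 GB of RAM, 40 GB disk, visible to all tenants
openstack flavor create --vcpus 2 --ram 4096 --disk 40 --public my-small
```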

Under storage and volumes, we can see all actively deployed volumes in our tenant, and if we select one, we can quickly edit that volume, create a snapshot, or delete it. Under the actions dropdown, we have options to upload the volume as an image, if we want to use it to deploy other VMs, and to attach the volume. In volume types, we can see the available volume types in the cluster. I have two: the NFS persistent volume type that I deployed as part of my cluster blueprint and the default that comes out of the box. Down in snapshots is where you can manage and manipulate any active snapshots you have on your systems. Currently, my list is

empty because I don't have any active snaps. Let's move on to networking. Networking is where you create and manage the software-defined networks that are part of your private cloud. Physical networks are where you define the networks that physically exist in your infrastructure. I've created just one

here to connect out to the rest of my infrastructure and the internet, and it's defined as a flat layer 2 network. PCD supports flat, VLAN, and VXLAN network types as part of its physical networks. Virtual networks are internal, tenant-facing networks that utilize the physical networks defined previously. Virtual networks can also support flat, VLAN, and VXLAN network types. Routers work exactly like you'd expect them to. Defining a router in PCD allows

you to create network connections between different networks to route traffic. For example, the virtual router I've created allows traffic in my VertNet0 to route out to my physical network.
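If you prefer the CLI for this kind of plumbing, the building blocks map straight onto standard OpenStack networking commands. A sketch along the lines of my setup (the network, subnet, and router names are mine, and the external network name is whatever you called your physical network):

```
# Create a virtual tenant network with a subnet
openstack network create vertnet0
openstack subnet create --network vertnet0 --subnet-range 10.0.0.0/24 vertnet0-subnet

# Create a router, attach the tenant subnet, and set the physical network as its gateway
openstack router create vrouter0
openstack router add subnet vrouter0 vertnet0-subnet
openstack router set --external-gateway server-network vrouter0
```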

Public IPs is where you can define and configure actual public IP addresses for your private cloud. Again, think cloud services here: if you were a cloud service provider with public IP addresses, you could configure those addresses to be available to be applied to the VMs you deploy, the same way you would if you rented a public IP address from AWS or Azure. Clearly, that's beyond what I'm doing here, so obviously I don't have any configured. Lastly, in networking are security groups. Security groups are firewall policies that you can define to control the flow of traffic. The default policy that's defined in the installation allows outbound traffic only from VMs. I've created an additional policy called allow all, which, when applied to a VM, allows traffic to flow without any filtering applied.
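For something more targeted than allow-all, security groups and rules can also be built from the CLI. A small sketch that creates a group permitting inbound SSH (the group name is made up for the example):

```
# Create a security group and allow inbound SSH (TCP/22) from anywhere
openstack security group create allow-ssh
openstack security group rule create --ingress --protocol tcp --dst-port 22 allow-ssh
```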

Okay, on to orchestration and stacks. Stacks are orchestrated deployments of infrastructure resources like VMs, networks, volumes, and routers, defined and deployed together using templates. Let's say you needed an automated deployment for a complex web application. You could build a stack that automatically deployed a couple of web servers, placed them behind a load balancer, connected them to a private subnet, attached a router to give them access to an external network, and provisioned persistent storage, all automatically. All of this can be executed via API and quickly spun up and torn down as needed. It's basically infrastructure as code using a YAML-based Heat Orchestration Template.
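To make that concrete, here's about the smallest useful Heat template I can sketch, plus the commands to launch and tear it down. The flavor, image, and network names are the ones from my environment, so swap in yours:

```
# Write a bare-bones Heat Orchestration Template describing a single server
cat > demo-stack.yaml <<'EOF'
heat_template_version: 2021-04-16
resources:
  web_server:
    type: OS::Nova::Server
    properties:
      flavor: small
      image: rocky-cloud
      networks:
        - network: vertnet0
EOF

# Spin the stack up, and tear it down again when you're done
openstack stack create -t demo-stack.yaml demo-stack
openstack stack delete demo-stack
```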

Under access and security, you can define SSH keys that you can then add to a VM deployment to make SSH authentication into your VMs painless and easy. I've added my public SSH key from one of my systems for just that purpose. API access provides you with a full list of all of the API endpoints for Private Cloud Director and the underlying OpenStack services, and you can also grab your openstack.rc variables at the bottom.
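That RC file is what makes all of the CLI sketches in this video work. Source it in a shell that has the OpenStack client installed and you're talking to your private cloud, roughly like this (the file name is whatever you downloaded from the API access page):

```
# Load the credentials and endpoints exported from PCD's API access page,
# then talk to the underlying OpenStack services with the standard client
source ./openstack.rc
openstack server list
openstack network list
openstack flavor list
```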

Tenants is where you can create, edit, and remove any tenants you want on your private cloud. By default, the service tenant is the only tenant, and you can happily live within that tenant or create any number of tenants to best suit your deployment needs. Users is where you'll find all of your local users who can access your PCD infrastructure. You can quickly create,

edit, and remove users as you wish and grant them access to any tenant defined on your platform. Groups are collections of users with specific permissions applied to them just like groups in any other system. I don't have any defined here, which is why this is empty. And

finally, roles. User roles are predefined in PCD. And you have three different roles to select from.

Administrator has full control over everything, a read-only user can log in and view resources in the cloud, and a self-service user can create new VMs based on the predefined images, flavors, and networks on the system. Self-service users are basically your customers, who can only use the configurations you've created to deploy and manage their own VMs. Well, that was a lot of

information, and we only really scratched the surface in terms of what PCD can do here. Since you've stuck around to this point, let's get to my final thoughts here. First things first, I have massive respect for any company that creates a community or home lab version of their software. I think it says so much about them as a business and their dedication to the community and the users of their products. This is how you build the next generation of cloud engineers. You give them the ability to learn the software and platforms on their own for free. That is

how I learned. In terms of what I think of Platform 9's Private Cloud Director Community Edition, I'm really impressed. The installation was easy. Just a few commands here and there, and their installation scripts do all of the heavy lifting for you. These guys are running on top of OpenStack here, people. I don't know if you've ever tried to deploy it on your own, but it is not friendly. Not at all. Platform 9

somehow has managed to take all the headaches out of OpenStack and wrap it in an incredibly easy-to-use management interface. I will admit, at first it took me a while to wrap my head around the paradigm of running your own private cloud compared to my background in traditional data center virtualization. But I've got to tell you, this all makes me want to run my own private cloud service out of my garage. Once you grasp the entirely different approach to VM provisioning, the rest comes easy. And to be honest, in

retrospect, I can't believe I've been deploying virtual machines the old-fashioned way. There are some gotchas to be aware of if you're planning on giving PCD Community Edition a go. First, because of the way the deployment works, your PCD management system has to be a separate system outside of your cluster hosts from a management perspective. And for anyone who's come

from the world of VMware, Nutanix, or XCP-ng, your management plane typically runs as a VM inside of your cluster. And that's achievable because you build your hypervisors first and then create your management VM on your hypervisors. With PCD, that is not possible because you have to have your PCD management plane up first and then add your hosts to the cluster. So you kind of have a chicken-and-egg scenario there. Also, to run PCD, you need to have a considerable amount of hardware resources to provide to it. As a reminder, PCD's requirements

are 32 GB of RAM and 12 cores minimum, which makes sense when you understand all that's going on behind the scenes with OpenStack and all of the requisite components. So just be aware you're not going to be running this on a potato, as one of our Discord members is fond of saying. Another thing I didn't mention during my deployment overview is that PCD Community Edition is hardcoded to only answer to the URL pcd-community.pf9.io, and there's no way to change that. This means that it's up to you to have your own DNS server infrastructure running in your network, and you have to create a new zone to resolve those addresses so all of your systems can access the GUI, or you have to manually add the DNS entries to the hosts file on every system you want to use to manage PCD. I'd love to be able to define my own URL for all of my private cloud stuff. So, I hope that

that changes in the future. And lastly, there doesn't seem to be a way via the PCD GUI to manage or manipulate certificates for your deployment. I'm sure this can be done via the command line for OpenStack, but I'd really love to see that as a management function from within the PCD GUI, because otherwise there are hoops to jump through in terms of dealing with self-signed certificates. All of that being said, there are so many reasons for you to download and try this out if you have the hardware to run it. If you're at all

curious about what running a private cloud is like, or if you're a user looking to get off Broadcom's death spiral of licensing hell, or you're working on improving your cloud infrastructure engineering skills, there's no reason not to deploy the Community Edition of PCD today. And that, friends, will do it for this video. If you liked it, throw us a sub and a like. And if you have a beef with anything I've said here, let me know in the comments below. Special thank you to our YouTube members. You guys

help keep the lights on, and we thank you for it. If you'd like to help support the channel, consider becoming a member or buying some of our swag. It'll help keep us... helps keep... keeps us making videos. You know what I'm trying to say. And now that you've finished watching this video, how about checking out this playlist over here with the great virtualization videos we've done in the past? You're

looking for your next great enterprise virtualization platform? We can help.
