I might be deep into vintage computers and retro network gear—but when it comes to my day-to-day work, I’m all about efficiency and modern tech. A few months back, I gave my NAS systems a major refresh and consolidation. The result? I slashed my constant IT-related power usage from 900 watts to just 700. Not bad, right? And if I rewind two years, when my setup was still eating 1.5 kilowatts... yeah, that’s a major difference! Now, as I mentioned in my last Behind-the-Scenes video, my Cisco switches and access points alone were pulling a solid 290 watts—and still didn’t give me enough 10 Gbit ports for my growing demands. So today, we’re diving into how I finally swapped out that aging Cisco gear for some sleek, modern Ubiquiti hardware—and whether I managed to shave off yet another 200 watts in the process.
Let's talk about legacy network technology. I'm THE PHINTAGE COLLECTOR and these are my stories. So what you're looking at is the connectivity rack. Right now, it holds a 24-port Cisco 3750-E PoE switch from 2010, a couple of routers, and some other gear. Since it's a PoE switch, it powers three LAP1142N access points — also from 2010. That switch is cross-connected via two stack cables to another Cisco 48-port switch in my other rack — same model line, but from 2008. All of this gear — switches and access
points — is 15 to 17 years old. Which means: yes, it's vintage. And no, it's not exactly power-efficient. As you can see, the current draw is 290 watts total. About 250 watts go to the switches, around 33 watts to the access points,
and the rest to other PoE devices — which we'll come back to in a bit. MR KNOW-IT-ALL: Ah, come on — what’s all the fuss? Just buy new gear and save some power. Of course — but it's not that simple. There were quite a few factors involved,
and that led me on a days-long journey through spec sheets and power charts. Why? Because I had very specific requirements: • A permanent power reduction of at least 200 watts • At least one 24-port switch in each rack, with a minimum of 16 PoE ports • Enough 10G ports for my NAS, video workstation, and future expandability • CLI and API access, ideally with web and app management, plus SNMP for legacy monitoring • Support for dot1q VLANs, LACP, and Spanning Tree • And last but not least, staying within a budget of 2000 CHF / 2500 USD So I started comparing: Cisco and Cisco Meraki — both of which I know professionally — and HPE Aruba, which we use at work and of which I even had some actual test units on hand. I also looked into Ubiquiti, MikroTik, TP-Link and others. I built a pretty exhaustive list: features, required accessories, prices — and most importantly, the power consumption from each vendor's datasheet. Now, remember those two other PoE consumers? Those are actually HPE Aruba access points that I had deployed alongside the Ciscos. They use less than half the power of the Cisco APs.
I hadn’t planned to include APs in my upgrade, but at that point — why keep burning 40 watts when I could cut that in half and upgrade to Wi-Fi 6? From there, I worked through multiple scenarios, combining different gear. In the end, I picked Ubiquiti — not because the others were bad, but because it hit the right balance for me: Good value, solid feature set, and it’s just comfortable to use. Also, I could run my own device controller locally, without needing to rely on cloud management — something not every vendor allows. But even then, I had to ask myself: am I really improving things by replacing two switches with two others plus adding a third 10G switch? Didn’t I want consolidation? Even within Ubiquiti, I narrowed it down to four possible setups — some minimal, some more generous with 10G ports.
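For what it's worth, ranking those setups boils down to simple math: upfront price plus the electricity each option burns around the clock. Here's a minimal sketch of that comparison; all prices and wattages below are made-up placeholders, not my actual figures.

```python
# Rank candidate setups by total cost of ownership over a few years:
# upfront hardware price plus the electricity a 24/7 load consumes.
# All numbers are illustrative placeholders, not real quotes.

KWH_PRICE_CHF = 0.25      # assumed electricity price per kWh
HORIZON_YEARS = 5

def tco(upfront_chf: float, watts: float, years: int = HORIZON_YEARS) -> float:
    """Upfront cost plus energy cost for a constant 24/7 draw."""
    kwh_per_year = watts * 24 * 365 / 1000
    return upfront_chf + kwh_per_year * KWH_PRICE_CHF * years

setups = {
    "minimal":       tco(upfront_chf=1200, watts=70),
    "middle ground": tco(upfront_chf=1600, watts=80),
    "generous 10G":  tco(upfront_chf=2000, watts=95),
}

for name, cost in sorted(setups.items(), key=lambda kv: kv[1]):
    print(f"{name:>14}: {cost:7.0f} CHF over {HORIZON_YEARS} years")
```

The point of sorting by TCO rather than price alone is that a cheaper box with a higher constant draw can easily end up costing more over the hardware's lifetime.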
I initially priced everything with Ubiquiti’s official accessories, but let me tell you — after 25 years in networking, no-name modules usually work just fine. Of course, there’s always a chance of getting a dud — like this DAC cable that caused port flapping. But more on that later. Price-wise, the choice could've been easy: go cheap or settle for a middle ground. But I also needed to factor in power savings. So I ran the numbers — calculated the yearly
power savings, the cost offset, and how quickly the investment would pay for itself. Now, vendor specs are nice, but often a bit optimistic. Luckily, Ubiquiti’s community came through — people started posting real-world power draw data.
And while not every device was covered, it was clear that even though the APs were rated at 9 watts, the average was closer to 3–4 watts. So I created a second power model based on those real-world numbers — much more realistic, and with it, a clearer picture of my actual savings. MR KNOW-IT-ALL: Alright, alright — we get it, your TCO math checks out. Now how about showing us the gear already? Alright, so here it is: four WiFi 6 access points, the USW-24-POE with 24 Gigabit ports (16 of them PoE), and the USW-Pro-Aggregation, offering 28 SFP+ ports for 1/10G plus 4 SFP28 ports for 25G. While unboxing the gear, I noticed there’s very little plastic — mostly cardboard or antistatic foam.
The switches come fully loaded: power cable, rack-mount brackets, all the screws, and even a printed quick-start guide. The Pro Aggregation switch also came with a surprise — a 1-meter DAC cable I had completely overlooked. Nice touch. Even though it’s probably too early for this, I hooked up my amp meter just out of curiosity. It’s not very precise, and it flaps between 0 and 0.1 amps because of its limited resolution.
Still, 0.1 amps at 230V gives us about 23 watts. The access points also use minimal plastic packaging — the small parts come in a plastic bag, and the quick-start guide is sealed in a protective sleeve. The latter feels unnecessary, honestly.
Here’s a fun little detail: the box includes a cardboard drilling template — complete with a tiny built-in spirit level. The template is genuinely helpful, but the spirit level? Kind of a gimmick. Cute, yes — but let’s be real: you’ll use it once, smile, and it ends up in the bin. Once I wired everything up, I repeated the (still not very precise) measurement — this time it showed 0.2 amps, or roughly 46 watts.
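For the record, here's the arithmetic behind those readings, including just how coarse a 0.1-amp resolution really is. A tiny sketch, nothing more:

```python
MAINS_VOLTS = 230          # Swiss mains voltage
RESOLUTION_AMPS = 0.1      # smallest step the cheap amp meter can display

def watts(amps: float, volts: float = MAINS_VOLTS) -> float:
    """Apparent power for a displayed current reading (V times I)."""
    return amps * volts

# A displayed 0.2 A really means "somewhere between 0.15 A and 0.25 A",
# so the uncertainty is half a display step times 230 V in either direction.
reading = 0.2
low  = watts(reading - RESOLUTION_AMPS / 2)   # 34.5 W
high = watts(reading + RESOLUTION_AMPS / 2)   # 57.5 W
print(f"{watts(reading):.0f} W nominal, anywhere from {low:.1f} to {high:.1f} W")
```

In other words, "roughly 46 watts" carries an error band of more than ten watts either way, which is why I treat these early readings as a snapshot, not a result.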
Of course, that’s with zero config and no actual traffic. Once the transceivers are seated and the network’s live, the numbers will shift. But still — it’s a decent early snapshot. If you're already familiar with Ubiquiti, you might’ve noticed something missing: I didn’t mention — or get — a Cloud Key. The Cloud Key is Ubiquiti’s hardware controller that connects to your local network and provides the management interface for UniFi gear. None of the UniFi devices has a built-in configuration web UI — so you do need some form of controller. But here’s the nice part: Ubiquiti also
offers a software version of the UniFi Network Controller that runs on Windows or Linux. So as mentioned earlier, you’re not tied to the cloud. You can keep all your logs and config data local. And no — you don’t need the hardware-based Cloud Key. A virtual machine works just fine, as long as you have somewhere to host it. In my case, I wanted to run it on
one of my Synology NAS servers. In a previous episode, where I consolidated my NAS setup, I mentioned upgrading the Synology RackStation to 32 gigs of RAM — exactly to support use cases like this. Even though I’ve got a VMware cluster, I deliberately wanted to keep the UniFi controller more standalone. That way, I can shut down some of the VM nodes occasionally to save power, while the Synology — which only sips about 60 watts — stays online 24/7. To run VMs on Synology, you need to install
the Virtual Machine Manager. That’s a bit beyond the scope of this video, but for completeness: besides enabling Open vSwitch, I also had to deal with LACP. My NAS was configured to use all four 1-Gig interfaces in an LACP bundle, so I had to switch it over to use the Balance-TCP algorithm. With that sorted, I spun up a new VM and installed Ubuntu. Now, to get the UniFi Network Controller up and running, you have two options: do it manually — the hard way — or use a community-developed script that handles everything for you.
Spoiler: use the script. It’s super straightforward. Thanks to that, my UniFi Controller was up and running within minutes. Well — almost. It did complain about an invalid TLS certificate. But again, the same developer provides another helper script that works with Certbot to fetch a proper Let’s Encrypt certificate. That’s where I hit a small issue.
The documentation for the Certbot integration could be a bit better. I had to dig through various sources to figure out exactly which parameters go into acmedns.conf, and how to properly format the required JSON file. But eventually, I got it all working — including
integration with my acme-dns setup — and ended up with a valid, trusted certificate. When logging in for the first time, you'll be asked to create an account with Ubiquiti online. So even though all configuration data is managed locally, the login procedure, including multi-factor authentication, relies on a global user account. That's because it also allows integration with UniFi's cloud services — for example, app-based remote control from outside your network, if you want that. But if you don't, you can disable this default behaviour for a truly isolated management experience.
As soon as I logged in to my UniFi network server, it automagically discovered all my devices. You then go through a so-called adoption process, which pairs each device with the network server and also updates it to the latest firmware version. You can run your devices with DHCP addressing if you want. Personally, I prefer to assign static addresses to infrastructure devices, which has the added benefit that they can be managed with 3rd-party software — for example for SNMP monitoring, just as I enabled it right here. Don't forget to also configure SNMP access in the global settings. I then recreated all my existing VLANs. Of course, a traditional Cisco switch
would offer way more functionality, but believe me, you won't need most of it in many networks. Ubiquiti still has everything I need: multicast-DNS autodiscovery, IGMP Snooping, which may be needed for applications like IPTV, the aforementioned Spanning Tree protocol for loop protection, and more. There was only one limitation I ran into: VLAN IDs above 4009 cannot be configured, even though technically IDs up to 4094 are valid. Just my luck — I indeed had a few in that range, so I had to renumber them. No big deal, but still an annoyance.
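Renumbering by hand is tedious, so I'd script at least the mapping of old to new IDs. A minimal sketch; the example VLAN IDs are invented, and in reality you'd still have to touch every trunk and interface config that references them:

```python
# Map VLAN IDs that fall into UniFi's blocked range (above 4009) onto
# free IDs below the cutoff, leaving everything else unchanged.
# The example IDs below are invented for illustration.

BLOCKED_ABOVE = 4009

def renumber(vlans: set[int]) -> dict[int, int]:
    """Return an old_id -> new_id map, reusing the highest free IDs below the cutoff."""
    mapping = {}
    free = (v for v in range(BLOCKED_ABOVE, 0, -1) if v not in vlans)
    for old in sorted(v for v in vlans if v > BLOCKED_ABOVE):
        mapping[old] = next(free)
    return mapping

print(renumber({1, 10, 20, 4010, 4040, 4094}))
# e.g. maps 4010 -> 4009, 4040 -> 4008, 4094 -> 4007
```

Working downward from 4009 keeps the renumbered VLANs visually close to their old range, which makes the migration easier to follow in the configs.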
One can also define port profiles with common settings, which can then easily be applied as templates to different ports. I'll definitely be using that feature to define a common denominator for my network ports. WiFi works in a similar fashion: you define the various SSIDs and their characteristics globally. This includes whether they're broadcast on all Access Points simultaneously, only on specific ones, or on a defined group of devices. I won't make things more complex than necessary. With four access points in total, my intent is to broadcast the SSIDs everywhere anyway.
One neat feature, which definitely helps with optimal deployment, is InnerSpace. It lets you upload floor plans and place the Access Points on them. By drawing the walls and defining their characteristics, InnerSpace automatically updates the coverage area. Not necessarily something I need: when we built our house many years ago, I planned the ethernet outlets to support optimal placement of the access points. Another neat feature the UniFi server supports is running channel optimization on a daily basis, adjusting the Access Points for channel collisions and signal strength. That was definitely more cumbersome to configure in the Cisco world.
Finally, it was time to swap out the old devices. I started by shifting the existing gear around to make space and build the new network in parallel — minimizing disruption as much as possible, apart from the inevitable rewiring chaos. And my wife, a formidable downtime detector, proudly confirmed she didn’t notice a thing. Perfect. At this stage, I began reconfiguring selected ports to become LACP link aggregation bundles, allowing me to connect my NAS servers with multiple ports: • 4× 1-Gigabit for my older NAS, now repurposed as a backup target • 2× 10-Gigabit for each of my newer NAS systems MR KNOW-IT-ALL: “But that’s only six! You could’ve easily gone with the 8-port switch!” Sure, but I also need 10G links to my workstations.
That would’ve maxed it out right away. And even though it might seem silly to “waste” precious SFP+ slots on RJ45 1-Gigabit modules right now, I’m confident I’ll have more 10G-capable gear in the future. When that time comes, swapping those modules will be easy. Plus, this setup gives me full flexibility — mixing 1G and 10G connections across RJ45, Twinax DAC, and fiber-optic modules as needed. Though fiber will only be a temporary solution, as I plan to transition everything to DAC cables to further reduce power consumption.
We’ll get into that in just a moment when we go through the final optimization steps. Of course, I also encountered some issues. For example, this red wire here, which leads into the inverter.
Although I'm using RJ45 SFP modules on the Pro Aggregation switch, they can only negotiate down to 1 Gigabit. And apparently the inverter only has a 100-Megabit interface, so it wouldn't sync — which means I can't hook it up to the lower switch, the one closer by. But the cable was too short to reach the other switch, so I temporarily used an RJ45 coupler and a second cable to extend it into the second rack. I'll have to dedicate some time later to swap that cable, which unfortunately has a proprietary plug on one end. The other issue I ran into was related to those 3rd-party accessories I mentioned earlier. Turns out, one of the Twinax DAC cables wasn’t quite right — it caused port flapping.
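Incidentally, this kind of mismatch is easy to catch from the Linux side, because the kernel exposes the negotiated link speed in sysfs. A small sketch; the interface name and expected speed are assumptions for illustration:

```python
from pathlib import Path

# The kernel reports the negotiated link speed (in Mbit/s) under
# /sys/class/net/<iface>/speed. Comparing that against what the link
# *should* negotiate flags a dud module or cable immediately.

def link_ok(iface: str, expected_mbits: int, sysfs: str = "/sys/class/net") -> bool:
    """True if the interface negotiated at least the expected speed."""
    speed_file = Path(sysfs) / iface / "speed"
    try:
        return int(speed_file.read_text().strip()) >= expected_mbits
    except (OSError, ValueError):
        return False  # link down, interface missing, or speed unreadable

# e.g. link_ok("eth4", 10_000) returning False would point at a bad DAC
```

Running a check like this across all bundle members after cabling work would have surfaced the bad cable before the LACP bundle ever started flapping.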
I noticed this on one of my NAS servers, where the mce0 interface kept going up and down. Digging into it with ifconfig, I saw that the DAC module wasn’t even recognized as a 10G adapter — just a 1-Gigabit one. Swapping in another cable solved the problem. The port was correctly detected as 10G, the flapping stopped, and the LACP bundle finally came up cleanly. So, yeah — keep that in mind. Stuff like this can happen with no-name modules, especially when you’re, like me, ordering straight off AliExpress. I did it deliberately to save some money. And to be fair, my experience over the years has mostly been good. But you
can expect the occasional dud. Vendor-branded modules usually come with better quality assurance — which can justify the higher price. If you’re not the gambling type, 3rd-party branded modules are still a solid, affordable alternative. And if you are okay taking the occasional risk for an even lower price, no-name modules are usually just fine — just order an extra one or two as spares, and you'll be prepared. So, beyond all the technological uplift and its merits — what was the actual outcome on the power-saving front? Let’s rewind to my last optimization round, where I shaved off around 200 watts of permanent, IT-driven power draw.
Here’s a report from my energy supplier dated January 2025 — and it shows I used about 100 kWh less compared to January 2024. Now, a constant 200-watt reduction should translate to about 144 kWh a month. But, it’s not a straight line — because we’re dealing with both permanent load and dynamic load. In my case, that includes things like the heat
pump, which is temperature-driven. It consumes more when it’s colder and less when it’s milder — so it fluctuates seasonally. Still, when looking at the broader picture — including data from 2023 and 2022 — we clearly see a downward trend thanks to all those little and not-so-little power-saving actions over the past years.
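The 144 kWh figure above is just the constant-load conversion, and the gap to the observed 100 kWh is the dynamic load moving around. The arithmetic, as a tiny sketch:

```python
def monthly_kwh(watts: float, days: int = 30) -> float:
    """Energy for a constant 24/7 draw over one month."""
    return watts * 24 * days / 1000

expected = monthly_kwh(200)   # 144.0 kWh for a constant 200 W reduction
observed = 100                # kWh difference, January 2025 vs January 2024
print(f"expected {expected:.0f} kWh saved, observed {observed} kWh; "
      f"the ~{expected - observed:.0f} kWh gap is dynamic load (heat pump etc.)")
```

That gap is exactly why a single month-over-month bill comparison can understate (or overstate) a permanent saving; the multi-year trend is the more honest metric.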
Altogether, I've cut down my monthly consumption by about 1200 kWh. That’s… huge! Also, I specifically chose January for comparison because — despite having photovoltaics on the roof — January typically brings zero solar production. So this isn't offset energy; it's energy I actually had to buy. MR. KNOW-IT-ALL: But doesn’t that mean your amortization math is wrong because of the free solar energy? Not really. The investment pays off either way.
If it’s reducing my grid power consumption, great — that’s money saved. If it’s offsetting solar energy usage, that’s protecting the ROI from my PV system investment. So from a financial standpoint, it balances out — it’s neutral in that regard. MR. KNOW-IT-ALL: Alright, alright… but did you manage to cut it down to 500 watts in the end? Almost! Here’s the chart: you can see the baseline power usage after my last optimization round, sitting just above 700 watts — generally stable around 720 watts. There's a noticeable spike during the transition, where both the old and new switches were running in parallel. Then comes the big drop — when
I finally shut down the old gear. After that, my smart meter consistently reported about 560 watts — so that’s a solid 160-watt reduction right there. If we zoom in a little further, you’ll notice smaller dips in power usage too. Those came from incremental optimizations: I had a bunch of small devices still powered by individual 5V power bricks — and when you have a lot of those, the parasitic losses add up. So I started migrating these to PoE. Now, none of them were PoE-ready out of the box — so I used PoE splitters.
These come in all shapes and sizes — like this bulky unit here… …but newer ones are much more compact, like this one. Just powering four of those devices via PoE instead of wall bricks brought the draw down from ~20 watts to about 11 watts. Then there are the access points. The UniFi ones draw about 16 watts total, compared to 30 watts for the old Ciscos — plus another 8 watts saved from the two Aruba units I was using before. And it’s not over yet. Every watt counts — and there’s still room to squeeze a few more out.
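Summing up those incremental wins gives a feel for what "every watt counts" means in money. A quick sketch with the numbers from above; the electricity price is an assumption:

```python
KWH_PRICE_CHF = 0.25   # assumed electricity price per kWh

def yearly_chf(watts_saved: float) -> float:
    """What a constant 24/7 saving is worth per year."""
    return watts_saved * 24 * 365 / 1000 * KWH_PRICE_CHF

# Watt savings from the individual optimizations described above.
savings_w = {
    "wall bricks -> PoE splitters": 20 - 11,   # four small devices
    "Cisco APs -> UniFi APs":       30 - 16,   # old vs new PoE draw
    "Aruba APs retired":            8,
}

total = sum(savings_w.values())
print(f"{total} W less, worth roughly {yearly_chf(total):.0f} CHF per year")
```

Individually each item looks negligible, but together these three alone add up to about 31 watts of permanent draw.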
Now, during the transition phase, I had to temporarily put some fiber optics in place. Cool? Yes. Efficient? Not quite. From a power perspective, fiber optics aren’t great — especially on short distances, where they offer no real advantage over copper, other than looking cooler in a rack. A typical optical module, whether it’s short-range or long-range, can easily draw anywhere between 1 and 3.5 watts, and that’s not even counting higher-bandwidth ones. Compare that to Twinax DAC cables, which usually come in at 500 milliwatts or less per port, depending on length. That's much more efficient.
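One detail that's easy to forget: every link has a module at both ends, so the per-module figures double. A sketch using ballpark per-module wattages consistent with the ranges above (the link count is an assumption for illustration):

```python
# Ballpark per-module draw in watts; both ends of a link count.
MODULE_WATTS = {"optic": 1.5, "dac": 0.4}

def link_watts(medium: str) -> float:
    """Total module power for one link (two modules, one per end)."""
    return 2 * MODULE_WATTS[medium]

links = 6  # e.g. temporary 10G fiber runs during the transition
saved = links * (link_watts("optic") - link_watts("dac"))
print(f"swapping {links} fiber links to DAC saves about {saved:.1f} W")
```

So even a handful of short-range fiber runs can quietly burn an extra ten-plus watts compared to passive DAC, which is why the swap shows up on the meter at all.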
Unfortunately, when I kicked off the upgrade weekend, I didn’t have those DAC cables on hand, so I had to roll with 10G optics to get things running. Only after the DACs arrived could I finally swap out all the optics — and that gave me one last nice little dip in power usage, bringing me to a steady-state consumption of around 510 watts. And guess what? That lined up almost exactly with my original educated guess.
So yeah — mission accomplished. The setup now does everything I wanted, with the power draw right where I hoped it’d land. You might say I’m a bit too obsessed with power savings. But let’s rewind to 2021, before the photovoltaic system was even in place. Back then, I was hitting around 1500 kWh per month during summer — that’s a continuous draw of 2 kilowatts, and 1.5 kilowatts of that was just my IT workload. So investing in solar and optimizing my gear was not only a logical step — it was essential, especially with European energy prices going bananas after the Ukraine crisis. Now, 2022 is a bit messy in the data — we
built the solar mid-year, so the numbers are skewed. But when you factor everything in, here’s the rough trend: • 2022: ~24 MWh • 2023: ~19.4 MWh (first big IT power optimization) • 2024: ~18.6 MWh (smaller tweaks here and there) • 2025: Still counting — but we’re
trending in the right direction. MR KNOW-IT-ALL: Hmm. I don’t know. I was expecting a bigger long-term drop... Well, I didn’t tell you the full story yet. We also ditched our combustion-engine car and switched to a battery-electric vehicle — and some of the energy savings from my IT actually went straight into charging the car. But even with that shift, I'm close to energy net-zero — generating almost as much as I consume. And these latest IT optimizations help maintain that balance. Bottom line: you can reduce your power usage through lifecycle upgrades. Of course, it doesn’t make sense to replace
every gadget just for the sake of it. But once something reaches the end of its useful life, swapping it for a more efficient option can bring huge benefits. Personally I haven’t sacrificed a single thing — except wasting power. What do you think about all this? Let me know down in the comments! I’m THE PHINTAGE COLLECTOR, and this was my special behind-the-scenes story for today. Thanks for watching — and see you again for the usual retro computing shenanigans after a quick two-week spring break in early May!
2025-04-26 06:14