Homelab re-organization project conclusions
Several months ago I began a rather extensive homelab re-organization project, which I started documenting here on my blog as well. Since it has been a while since my last update, I wanted to gather the more recent progress into one post. Let's get to it.
What started this re-organization?
I sort of dove head-first into having a homelab: I needed the hardware for a project I am working on, but wanted to go about it in a budget-friendly way and learn more about server management in the process. The software side interests me most, but having hardware at home also gave me a better understanding of the hardware itself, and of how best to set things up when dealing with multiple physical servers.
After running one of my servers 24/7, however, and as my project progressed, I started running the second server alongside the first, as I needed the dedicated hardware for development. I was enjoying my setup, as it helped me progress well.
Until I started getting phone calls from my electricity company.
Pay more first, or pay more later
Instead of purchasing brand-new parts to build a new main machine, I bought my servers with budget in mind. The thinking was that for relatively little money I would get much more bang-for-buck performance. That thinking is still true. However, I should have researched more.
While the basic idea holds for everyone, regardless of where you live, your country or state may have much higher electricity costs than others. So when deciding what generation of hardware to purchase, keeping this in mind is very important. In retrospect I should have invested more up-front in server hardware and gone with the next generation of CPUs instead of the ones I chose. Even after replacing all server CPUs with their generation's most power-efficient models, things still add up quickly.
Besides running at least one server, I was of course also running my main machine. I managed to reduce each machine's power consumption to reasonable levels, but with all of them running at the same time, it adds up. Where I live, electricity usage is billed in tiers: there are usage thresholds and, once crossed, you enter a higher-cost tier. Combine this with needing to run the air conditioning at least some of the time during the peak of summer, and you have a bill that didn't just increase; it tripled.
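To illustrate why a few always-on machines hurt so much under this kind of billing, here is a minimal sketch of tiered pricing. The tier boundaries and rates below are made up for the example and are not my provider's actual numbers:

```python
# Hypothetical tiered electricity pricing. Each tier's rate applies only
# to the usage that falls inside that tier. The caps and rates here are
# invented for illustration.
TIERS = [
    (120, 0.20),            # first 120 kWh at 0.20 per kWh
    (300, 0.28),            # usage from 120 up to 300 kWh at 0.28 per kWh
    (float("inf"), 0.40),   # everything beyond 300 kWh at 0.40 per kWh
]

def tiered_cost(kwh: float) -> float:
    """Total cost for `kwh` of usage under the tiered rates above."""
    cost, prev_cap = 0.0, 0.0
    for cap, rate in TIERS:
        used = min(kwh, cap) - prev_cap
        if used <= 0:
            break
        cost += used * rate
        prev_cap = cap
    return cost

print(tiered_cost(250))  # moderate usage, mostly billed in the second tier
print(tiered_cost(600))  # 300 kWh of this lands entirely in the top tier
```

The point is that the extra kilowatt-hours from always-on servers are billed at the *highest* marginal rate you reach, which is how a bill can triple rather than grow proportionally.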
Saving money: failed
So one of my main reasons for going the homelab route unfortunately failed. The cost of running these servers wasn't just marginally higher; it simply wasn't worth it. It would be far more affordable to run what I needed in the cloud, or to replace all this hardware with one more modern machine that could run much more efficiently.
And so, I migrated everything onto one server initially, which kind of worked but severely impacted my ability to develop comfortably. I sold one of the two Intel servers while I prepared to offload my project-related tasks to VPS instances.
Once I had moved these to the cloud, I had to figure out how to handle storage. I was using my Dell as a file server, among other things, with eight 1TB hard drives in a RAID6 setup (another area where I leaned too far in the budget-friendly-up-front direction), so I had to find a way to safely move and store this data elsewhere.
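For context on the capacity involved: RAID6 reserves two drives' worth of space for parity, so an array of N equal drives yields (N − 2) × drive size of usable space. A quick sketch with the array described above:

```python
# RAID6 keeps two drives' worth of capacity as parity, so the array
# survives any two simultaneous drive failures at the cost of that space.
def raid6_usable_tb(drives: int, size_tb: float) -> float:
    """Usable capacity of a RAID6 array of `drives` equal-size disks."""
    if drives < 4:
        raise ValueError("RAID6 requires at least 4 drives")
    return (drives - 2) * size_tb

print(raid6_usable_tb(8, 1.0))  # eight 1TB drives -> 6.0 TB usable
```

Which is why a single 8TB drive could comfortably absorb everything the array held, albeit without the array's redundancy.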
I ended up ordering an 8TB Western Digital drive, with the intention of shucking it and installing it into my aging Hackintosh. Long-term I would like to re-evaluate this setup and add another drive for local redundancy. One benefit of having storage back in my main machine is that I can finally start using Backblaze personal backup again. I was not happy when I had to switch away from it and set up a custom backup solution using B2 (the Google Cloud backup route felt hacky in the wrong way).
I sold the Dell shortly thereafter, and now only have one Intel server left, which is up for sale too. My router is still running OPNsense, although now that I am moving away from a rack-type setup, I would like to replace it with a smaller, fanless option down the road.
Where to go from here
I have learned a tremendous amount while playing with, setting up, and using these servers. While I have been using servers one way or another for years now, it was a new experience to deal with them on a hardware level. I felt sad that my plan had failed to achieve its goal, but I learned a lot from the mistakes I made, so it's not a total loss.
Moving forward, I basically had two ideas:
1) Use the funds from the sold hardware to purchase a newer-generation server instead. Not only would a newer-generation server be more energy efficient, it would also have much more performance to boot. Where I live, it would probably take all the funds from selling these three servers just to buy one newer-generation server, but it would certainly suit my performance needs.
2) No matter how efficient a rack-mount server is, it's big, bulky, and usually loud. Newer-generation servers can be quieter, but no matter how you spin (ha!) it, it is much easier to build a quiet tower computer with large, silent fans. A tower is easier to store, too. Going this route, I can also pick exactly the hardware I need, which is a nice bonus. In terms of up-front cost, however, this is likely the most expensive option.
I am pretty much decided on building a new PC. Hopefully sometime early next year I will set out to spec out a new computer that can serve both as my "servers" for development needs, as well as my main work machine. I am interested in trying a full-AMD build, but have some long-term concerns about Hackintoshing it.
I'm also thinking of having it run Linux as the host OS and potentially running macOS in a VM, with proper GPU passthrough and what-not. I have tested this setup on both the Dell R510 and my current desktop, using Debian as the host OS, and it has shown that this might be a good route. It would give me full, native Linux for things like Docker, while still letting me keep macOS as my main OS. Although I do wonder how much of a challenge it would be to move to Linux as my main OS entirely, something I have been contemplating more lately.
So that's where things stand right now. You might not see any more physical-server-related posts, photos, or videos here for the foreseeable future, but I will try to bring Learn With Me along for more software-specific topics. I have used a lot of tools and services for the first time this year, and I think some of them might make for interesting topics, but that is for another time.
It was a pretty fun ride, but I'm happy to have an electricity bill that isn't alarm-bell-inducing anymore. I'm excited to build a new machine, too, and as soon as I start on that, I will be sure to cover it here as well.