Homelab – January 2023
I started my homelab journey with an HP Compaq desktop machine with an Intel E8400 CPU and 16 GB of RAM. It was running Hyper-V and it was doing its job. One day I was looking at the network settings of a virtual switch and saw a checkbox labelled “Allow management operating system to share this network adapter”. I didn’t know what it did, so I removed the check mark, applied the settings and lost the connection to my host, since it had a single network card shared by the VMs and the host. In most cases this wouldn’t be a problem, but my host was 100 kilometers away at my parents’ house and had no monitor connected to it. I had to do an “autoconfig”, which means I had to sit in a car, drive to the location and fix it.
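For anyone curious, that checkbox maps to the AllowManagementOS property of the Hyper-V virtual switch. A minimal sketch of checking and restoring it from PowerShell, assuming you still have console or out-of-band access (the switch name “External” is just a placeholder):

```powershell
# Show whether the management OS shares the physical NIC behind the switch
Get-VMSwitch -Name "External" | Select-Object Name, AllowManagementOS

# Re-enable sharing so the host keeps its own vNIC on the single physical adapter
Set-VMSwitch -Name "External" -AllowManagementOS $true
```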
After that a colleague introduced me to vSphere and ESXi. I saw something I liked and started thinking about a bigger and better homelab. So I got on eBay and found a used HPE ProLiant DL380 G6 server with two Intel Xeon L5630 CPUs, 144 GB of RAM and four 146 GB 10k SAS drives. It was a great piece of hardware and I used it for a couple of years.
Later on I found another eBay auction, this time for a used Dell PowerEdge R710 with two Intel Xeon E5620 CPUs, 72 GB of RAM and no disks. I won that auction, later added some disks that I also got on eBay, and so I had my first cluster. It could hardly be called a great cluster, but to me it was. A small two-bay Synology DS716+II served as shared storage for virtual machines, while some stayed on local disks. A Mikrotik CSS326-24G-2S+RM switch connected all three components; it has 24 Gigabit ports and two 10G SFP+ ports.
A few years back I got an opportunity to go for a newer solution and bought three HPE ProLiant DL360p Gen8 servers, each with two Intel Xeon E5-2620 v2 CPUs, 128 GB of RAM, an HP P420i storage controller with 1 GB of cache and four LFF slots. They also came with dual-port 10G SFP+ connectivity, which meant my lab was going 10G. So I needed 10G networking too, and after some research I found a great 16-port 10G switch made by Mikrotik, model CRS317-1G-16S+RM. It was a great investment that allowed me to push higher bandwidth to my storage and implement all-flash vSAN on those three nodes. I had a few consumer SSDs in each node (only one disk group per node), for a total capacity of about 6 TB. This setup allowed me to really learn vSAN, test configurations and see what SPBM (Storage Policy Based Management) is and how it works.
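To give a rough idea of what SPBM looks like in practice, here is a minimal PowerCLI sketch of a vSAN policy that tolerates one host failure, assigned to a VM. I’m assuming PowerCLI’s SPBM cmdlets and the vSAN capability name here, and the policy and VM names are placeholders, so treat it as an illustration rather than a recipe:

```powershell
# Build a vSAN storage policy: tolerate one host failure (RAID-1 mirroring).
# "VSAN.hostFailuresToTolerate" is the assumed capability name from the vSAN provider.
$ftt     = Get-SpbmCapability -Name "VSAN.hostFailuresToTolerate"
$rule    = New-SpbmRule -Capability $ftt -Value 1
$ruleSet = New-SpbmRuleSet -AllOfRules $rule
$policy  = New-SpbmStoragePolicy -Name "vsan-ftt1" -AnyOfRuleSets $ruleSet

# Assign the policy to a (placeholder) VM; vSAN then places and mirrors
# the VM's objects according to the policy.
Get-VM "test-vm" | Get-SpbmEntityConfiguration |
    Set-SpbmEntityConfiguration -StoragePolicy $policy
```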
Later on some of those SSDs failed, and I decided that vSAN with consumer-grade disks is not such a good idea. So I did some research and in the end decided to get a NAS as the primary storage for my homelab. I picked the Synology DS1821+ at the beginning of 2021, as it was a new box with a lot of great features at the time. I added four Seagate Exos X16 16 TB drives and moved some of my vSAN SSDs to this NAS. I now have a total of about 40 TB of HDD and about 2 TB of SSD usable capacity.
I also moved into a house that we rebuilt, and my homelab now sits in a dedicated space under the stairs.

The DL360p Gen8 nodes are still serving me well, but are currently only turned on when I need them. Last year I got my hands on a DL380p Gen8 with two Intel E5-2670 v2 CPUs and 512 GB of RAM. It also runs only when I need it.
My main vSphere host, the one running all my “prod” virtual machines, is an HPE DL380 Gen9 that I also got last year in an eBay auction. It has two Intel E5-2680 v4 CPUs and 64 GB of DDR4 RAM. I am in the process of buying more RAM and some enterprise SSDs to lower the provisioning times of nested environments.
That covers the hardware and how I built my homelab. Now we get to the part where I tell you what is running on it and how it all started.
Over time my lab also grew on the software side, from just ESXi on one host to vCenter and my first cluster. I also did a lot of testing with Microsoft products, especially Windows Server and the roles it can host. I set up my own Active Directory domain, file server, print server, Exchange, SQL and so on. All of this was built, torn down and rebuilt, as it always goes in the process of learning. As my role in the company changed and my focus shifted to maintaining and developing our VMware cloud offering, I got more into VMware products like the vRealize Suite, NSX, Cloud Director and so on. So it was only logical to test those products in my homelab as well. I deployed nested ESXi hosts that acted as my testing ground for NSX and VCD; the NSX Managers and vRealize products ran on the bare-metal ESXi nodes, while the workloads I was testing were provisioned on the nested hosts.
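For reference, standing up a nested ESXi host is mostly just creating a VM with the right guest OS type and exposing hardware virtualization to it. A rough PowerCLI sketch, with placeholder names and sizing (the guest ID depends on the ESXi version you install):

```powershell
# Create a VM intended to run ESXi itself (names, sizing and port group are lab placeholders)
$vm = New-VM -Name "nested-esxi-01" -VMHost (Get-VMHost "lab-host-01") `
             -NumCpu 4 -MemoryGB 16 -DiskGB 40 `
             -NetworkName "Nested-PG" -GuestId "vmkernel65Guest"

# Expose hardware-assisted virtualization so the nested hypervisor can run its own VMs
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.NestedHVEnabled = $true
$vm.ExtensionData.ReconfigVM($spec)
```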
All of this was getting a bit chaotic and problematic to maintain, so I decided to delete it and rebuild it all. Because of some major projects at work I have not got around to that yet, but I have made a plan and decided what I want and need running in my lab to test certain scenarios, so I hope to share more posts on that soon.
Stay tuned 🙂