When I decided to build a homelab, I wasn’t looking to recreate an enterprise data center in my home. I wanted something practical: powerful enough to run real workloads, efficient enough not to destroy my electric bill, and quiet enough to live in my home office. Most importantly, I wanted to learn modern infrastructure practices, clustering, and high availability without the overhead of managing rack-mounted servers.
The question that nagged at me was deceptively simple: what hardware do I actually need?
I spent weeks researching options, analyzing power consumption, comparing costs, and trying to understand what would give me the best learning platform without turning my office into a server room. What I eventually built was a 3-node cluster of business-class mini PCs that hits the sweet spot between capability, efficiency, and cost.
This is the story of how I chose my homelab hardware, the alternatives I considered, and what I learned along the way.

The Vision
My goals for this homelab were clear from the start. I wanted to learn enterprise-grade technologies and DevOps practices hands-on. Reading documentation and watching tutorials only gets you so far. I needed to actually build and break things, implement clustering, experiment with high availability, and understand how distributed systems work in practice.
I also wanted the flexibility to run multiple technologies. Proxmox for virtualization. Kubernetes clusters. Docker containers. PostgreSQL databases. Gitea for version control. Maybe even service mesh and observability stacks. This meant I needed real compute power, not just single-board computers running lightweight services.
But I also had constraints. My spouse would not tolerate a loud, heat-generating server rack in our home. My electric utility would not appreciate a 400W idle power draw running 24/7. And my budget wouldn’t stretch to buying brand-new enterprise equipment.
My Requirements
After researching and soul-searching, here’s what mattered to me:
Must-Haves:
- Run Proxmox VE with multiple VMs and containers
- 3-node cluster for quorum and learning high availability concepts
- Low power consumption (100-150W total under typical load)
- Quiet enough for home office environment
- Room for growth (upgradeable RAM and storage)
- Gigabit Ethernet minimum
- x86 architecture for broad software compatibility
Nice-to-Haves:
- Enterprise-grade reliability
- Small form factor (1L or smaller)
- Support for NVMe storage
- Intel vPro for remote management
- Under $1500 total budget
These requirements immediately ruled out several common homelab paths. Raspberry Pi clusters were too limited. Enterprise servers were too loud and power-hungry. Gaming PCs were overkill for most tasks and didn’t teach clustering. That left me looking at business-class mini PCs.
The Hardware I Chose
Node 1: Master - Lenovo ThinkCentre M920q
- CPU: Intel Core i7-8700T (6 cores, 12 threads)
  - Why: 35W TDP with excellent single-thread performance
  - Boosts to 4.0GHz for demanding workloads
- RAM: 32GB DDR4
  - Why: Room for multiple VMs and Kubernetes control plane
  - Expandable to 64GB if needed
- Storage: 512GB NVMe SSD
  - Why: Fast boot times and low latency for VMs
  - Can add 2.5” SATA drive for expansion
- Power: ~65W typical load
- Cost: ~$400 used on eBay
- Role: Primary cluster node, runs critical services and control planes
Why this as master? The extra RAM and CPU threads make it perfect for running the Proxmox cluster coordinator, Kubernetes control plane, and resource-intensive workloads like databases and CI/CD runners.
Nodes 2 & 3: Workers - Dell OptiPlex 3060 Micro
- CPU: Intel Core i5-8500T (6 cores, 6 threads)
  - Why: Great multi-core performance for containerized workloads
  - 35W TDP keeps power consumption low
- RAM: 16GB DDR4 each
  - Why: Sufficient for worker nodes running distributed containers
  - Can upgrade to 32GB each if workloads demand it
- Storage: 256GB NVMe SSD each
  - Why: Enough for OS, container images, and local volumes
  - SATA bay available for expansion
- Power: ~35W typical load each
- Cost: ~$250 each used
- Role: Worker nodes for distributed workloads, container orchestration, and application services
Why Dell? Consistent hardware across both worker nodes means predictable performance. The Dell OptiPlex Micro line is extremely common on the used market due to corporate refresh cycles, making them affordable and easy to find. They also have excellent Linux compatibility.
Network: TP-Link TL-SG108
- Specs: 8-port Gigabit unmanaged switch
- Why:
  - Unmanaged means simple and reliable
  - Metal case for good heat dissipation
  - Fanless for silent operation
  - Inexpensive (~$25)
- Future: May upgrade to managed switch for VLAN support and network monitoring
Total Cluster Stats
- CPU: 18 physical cores, 24 threads total
- RAM: 64GB total across all nodes
- Storage: 1TB NVMe total
- Power: ~135W typical load (measured with Kill-A-Watt)
- Noise: Virtually silent in home office (under 30 dB)
- Cost: ~$900 for the nodes plus ~$25 for the switch (used market, 2024)
The Alternatives I Considered
Option 1: Used Enterprise Servers (Dell R720, HP DL380 G8)
Pros: Massive compute power, redundant PSUs, hot-swap drives, true enterprise features
Cons:
- 200-400W idle power consumption
- Loud (60+ dB even with fan mods)
- Large form factor (2U-4U rack mount)
- Expensive to ship (100+ lbs)
- High ongoing electricity costs
Why I passed: The electricity alone ruled it out. I calculated that running an R720 24/7 would cost about $30-40/month at my local rates, compared to $15/month for the whole mini PC cluster; over a few years, that difference adds up to the price of the cluster itself. Plus, my spouse would absolutely not tolerate the noise. Enterprise servers are built for data centers with isolated server rooms, not home offices.
Option 2: Intel NUCs (NUC 11/12)
Pros: Small, efficient, modern CPUs, great build quality
Cons:
- Expensive new ($800+ per unit)
- Limited expandability compared to business mini PCs
- Harder to find used in quantity
- Some models lack dual NICs for advanced networking
Why I passed: Budget was the killer here. Three NUCs would have cost $2400+ even with modest specs. The business mini PCs gave me 90% of the capability at 40% of the cost. When you’re building a homelab for learning, that price difference matters.
Option 3: Raspberry Pi Cluster (4x Pi 4 8GB)
Pros: Super energy efficient, cheap, great for learning Kubernetes basics
Cons:
- ARM architecture limits software compatibility
- No nested virtualization support
- Limited to containers only (no full VMs)
- SD card reliability issues
- Limited RAM and CPU for real workloads
Why I passed: I wanted to run Proxmox and full VMs, experiment with x86 software, and have the power to run actual production-like workloads. ARM would have limited what I could learn and experiment with. Raspberry Pi clusters are great for Kubernetes learning, but I wanted more flexibility.
Option 4: Repurposed Gaming PC
Pros: Already owned, powerful GPU, lots of RAM, great single-node performance
Cons:
- 300W+ power consumption
- Single point of failure (no clustering or HA learning)
- Loud under load
- Overkill for most homelab tasks
- Doesn’t teach distributed systems concepts
Why I passed: A single powerful machine doesn’t teach you clustering, high availability, quorum, or distributed systems. Those are the concepts I most wanted to learn. One big box is just a standalone server. Three smaller nodes form a real cluster with real HA capabilities.
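To make the quorum point concrete, here's a minimal sketch of the majority math that clustered systems like Proxmox (via Corosync) and Kubernetes' etcd rely on; the node counts are purely illustrative:

```python
def quorum(nodes: int) -> int:
    """Smallest majority of voting members needed to keep the cluster operating."""
    return nodes // 2 + 1

for n in (1, 2, 3, 4, 5):
    survives = n - quorum(n)
    print(f"{n} nodes: quorum = {quorum(n)}, tolerates {survives} failure(s)")
```

Two nodes still tolerate zero failures, which is why three is the practical minimum for learning high availability.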
Lessons from Hardware Selection
What I Got Right
- Mini PCs were the sweet spot: Perfect balance of power, efficiency, and cost for homelab use
- Buying used saved 60%+: Business mini PCs depreciate fast but last forever with proper care
- Matched CPU generations: All 8th-gen Intel means consistent performance, features, and behavior
- Prioritized one powerful master node: Having 32GB RAM on the master was crucial for control planes
- Left room to grow: Can still upgrade RAM and storage as my needs expand
What I’d Do Differently
- More storage on workers: 256GB fills up faster than expected with container images and volumes
- Started with a managed switch: Would make VLANs and network monitoring easier from day one
- Checked BIOS before buying: One Dell arrived with a password-locked BIOS (fixable but annoying)
- Bought extra power bricks upfront: Had to order one separately later ($30 + shipping delay)
- Verified all specs before purchase: One listing said “NVMe” but only had SATA M.2 support
What Surprised Me
- How quiet mini PCs are: Fan noise is barely audible even under sustained load
- How efficient they are: $15/month electricity for the entire cluster running 24/7
- How capable they are: Currently running 15+ VMs/containers with room to spare
- How available they are: Thousands of units available from corporate refresh cycles
- How reliable business hardware is: These machines were built to run all day, every day
Power Consumption Analysis
This was a critical factor in my decision. I measured actual power consumption with a Kill-A-Watt meter:
| State | Power Draw | Cost/Month* |
|---|---|---|
| Idle (all 3 nodes) | 45W | $5 |
| Typical load (5-6 VMs) | 135W | $15 |
| Heavy load (15+ containers) | 220W | $25 |
| Max stress test | 280W | $32 |
*Based on $0.12/kWh electricity rate
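If you want to rerun the estimate for your own electricity rate, the math is simple. Here's a minimal sketch, assuming a constant average draw running 24/7 (the table rounds these raw estimates up a bit to stay conservative):

```python
def monthly_cost(avg_watts: float, rate_per_kwh: float = 0.12) -> float:
    """Electricity cost per month for a constant average draw, running 24/7."""
    kwh_per_month = avg_watts / 1000 * 24 * 30
    return kwh_per_month * rate_per_kwh

for label, watts in [("idle", 45), ("typical", 135), ("heavy", 220), ("stress", 280)]:
    print(f"{label:>8}: {watts:>4}W -> ~${monthly_cost(watts):.2f}/month")
```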
Comparison:
- My old gaming PC: 300W idle, $35/month to run 24/7
- Enterprise Dell R720: 250W idle, $30/month minimum
- My mini PC cluster: 135W typical, $15/month
ROI: The power savings alone pay for the hardware cost difference compared to enterprise servers in about 3 years. But the real win is having enterprise-like infrastructure that I can actually afford to run continuously.
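To put a rough number on that payback estimate, here's a quick sketch; the used-R720 price below is a hypothetical placeholder, not something I actually priced out, but the shape of the math holds:

```python
# All figures in USD; the used-R720 price is a hypothetical placeholder.
cluster_cost = 925           # three mini PCs plus the switch, used prices
used_r720_cost = 400         # assumed; actual listings vary widely
monthly_savings = 30 - 15    # R720 electricity (~$30/mo) vs. cluster (~$15/mo)

payback_years = (cluster_cost - used_r720_cost) / monthly_savings / 12
print(f"Hardware premium pays back in ~{payback_years:.1f} years")  # ~2.9
```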
Where to Buy
What worked for me:
- eBay: Best prices, widest selection, buy from corporate sellers with return policies
- Amazon Renewed: Good for warranties, slightly higher prices but more protection
- Local business surplus stores: Can find excellent deals if you have them nearby
- Facebook Marketplace/Craigslist: Hit or miss, but can find local deals without shipping
What to avoid:
- Unknown sellers with no return policy
- Units sold without RAM/storage unless you verified the total cost
- Damaged or dented cases (may indicate drops or rough handling)
- “For Parts” listings unless you know exactly what’s broken
- Listings with stock photos instead of actual unit photos
Pre-Purchase Checklist
Before buying each unit, I verified:
- CPU generation and specs (8th gen Intel or newer recommended)
- RAM slots and maximum capacity
- Storage type (NVMe M.2 preferred over SATA M.2)
- Network ports (onboard Gigabit minimum)
- BIOS not corporate-locked (asked seller explicitly)
- Seller rating and return policy
- Includes power adapter and any mounting hardware
- Linux compatibility (business PCs are usually excellent)
Building the Cluster
With hardware selected, purchased, and tested, I had my three nodes ready to build a real cluster. The next step was installing Proxmox VE and configuring them into a proper high-availability cluster.
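For a small taste of where that leads, here's a hedged sketch of checking cluster membership from Python once Proxmox is up, using the proxmoxer client library; the host address and credentials are placeholders:

```python
from proxmoxer import ProxmoxAPI  # pip install proxmoxer requests

# Placeholder address and credentials for the master node's API.
pve = ProxmoxAPI("192.168.1.10", user="root@pam",
                 password="change-me", verify_ssl=False)

# Every node that has joined the cluster, with its current status.
for node in pve.nodes.get():
    print(node["node"], node["status"])

# Cluster-wide view, including whether the cluster currently has quorum.
for entry in pve.cluster.status.get():
    print(entry.get("type"), entry.get("name"), entry.get("quorate", ""))
```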
What’s Next: In the next article, I’ll cover the complete homelab infrastructure build, from installing Proxmox to deploying Kubernetes, implementing infrastructure as code with Terraform, and building a production-like environment for learning and experimentation.
Reflections
Choosing homelab hardware taught me that constraints drive better decisions. The limits of power consumption, noise, and budget forced me to think carefully about what I actually needed versus what would be nice to have.
The mini PC route isn’t the most glamorous. It doesn’t have the cachet of a rack full of enterprise servers or the cutting-edge performance of the latest hardware. But it gave me exactly what I needed: a real cluster that I can actually afford to run, that’s quiet enough to live with, and that’s powerful enough to learn enterprise technologies.
Most importantly, it’s been reliable. These business-class machines were built to run all day, every day in corporate environments. They bring that same reliability to the homelab world.
Now that I have the hardware foundation in place, the real learning begins: building the infrastructure, deploying services, implementing high availability, and understanding how these technologies work in practice.
The hardware is just the beginning. The journey continues with the software that brings it all to life.