
My Azure Stack HCI Home Lab - Part 1


 

Warnings:

  • "Azure Stack HCI" has been re-branded/renamed to "Azure Local"

  • ECC RAM might be a hard requirement now, where it wasn't before. That means it may not be possible to use the MS-01 workstations to build a cluster anymore. I need to do some testing on this, and I will update this post when I do. (There's a quick way to check what your hardware reports just below.)
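
If you want to see whether Windows even reports ECC-capable memory on a given machine, here's a quick sketch using the built-in CIM cmdlets. This is just my ad-hoc check, not an official validation step; a MemoryErrorCorrection value of 5 or 6 means ECC.

    # Does Windows report ECC-capable memory on this node?
    # MemoryErrorCorrection: 3 = None, 4 = Parity, 5 = Single-bit ECC, 6 = Multi-bit ECC
    Get-CimInstance -ClassName Win32_PhysicalMemoryArray |
        Select-Object -Property MemoryErrorCorrection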


 

If you haven't already, please check out the other parts of the series:


  • Part 1: The physical hardware (you're reading it now)

  • Part 2: Prerequisite Steps & Driver problems

  • Part 3: Creating the cluster

  • Part 4: Workload: AKS - Azure Kubernetes Service

  • Part 5: Workload: AVD - Azure Virtual Desktop


 

I have a long history working with Hyper-V Failover Clusters. At one point in my career, I had a job where I traveled the globe installing Hyper-V 2008 R2 clusters in many different factory locations. But nowadays, I work mostly in the cloud and don't deal with physical hardware anymore. The last Hyper-V cluster I built was running Hyper-V 2016.


Azure Stack HCI has reignited my interest in Hyper-V clusters. The newest release, version 23H2, in particular caught my eye, and I was eager to try it out, so I started looking into my options. There is a solution called HCIBox, which is essentially a giant Azure VM that uses nested virtualization to build a Stack HCI cluster for you. While that solution is very cool, it's just too easy: I wouldn't actually be building anything physical, just clicking a few buttons in the browser. I really wanted to build my own physical cluster at home.


Previously, my home lab was made up of EliteDesk 800 G3 Mini systems from HP. Don't get me wrong, these are great little systems for a home lab. However, they don't have the networking and storage horsepower required to run Azure Stack HCI. So, I decided to build a new home lab with the goal of creating, running, and playing around with an Azure Stack HCI 23H2 cluster.


Not long after I decided to rebuild my home lab, I found a mini PC that quickly caught my interest: the Minisforum MS-01. These are a solid fit for my intended use case, and the expandability is incredible for such a tiny machine. Each one comes with an Intel 12th/13th gen CPU, supports up to 96GB of DDR5 RAM, holds up to 3 NVMe drives, and has dual 10Gb SFP+ ports, dual 2.5Gb Ethernet ports, dual USB4 ports, and finally, a PCI-Express 4.0 x8 slot. In my opinion, these make for the perfect small-form-factor home lab machines. So, I promptly ordered 2 of them ... and then had to wait about 3 months as they were out of stock, bummer.


Fast forward a few months, and I finally received the systems. After a little trial and error, I was able to build my Azure Stack HCI cluster. In this series of blog posts, I'm going to tell you about the trials and tribulations I went through to make it all work, so buckle in.

 

The Hardware


While I was waiting for my systems to arrive, the first thing I did was search for a solution that would allow me to mount both of my MS-01 workstations in my server rack. I found a seller on Etsy named HiveTechSolutions who sold the perfect 3D-printed mount, which you can see below.



What I absolutely love about this mount is that it has keystone jacks at the front, which I can use to pass through connections from the back of the system. Below, you can see that for each system I'm passing through both of the 2.5Gb network ports, the HDMI port, and a DAC cable from one of the 10Gb SFP+ ports. You'll also see that I have network cards in each of the PCI-Express slots, which directly connect the systems to each other. These are Mellanox ConnectX-4 cards with dual 25Gb SFP28 ports and RDMA support. I found them used on eBay for very cheap, and they came with the small-form-factor bracket I needed as well. These cards will carry the storage traffic for the cluster, but more on that later; a quick sketch of how to check their RDMA support follows below.
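
Since I'll lean on those RDMA capabilities later for the storage network, here's roughly how to confirm Windows sees them, sketched with the standard NetAdapter and SMB cmdlets. The port names below are placeholders; substitute whatever names your ConnectX-4 ports show up with.

    # List adapters and whether RDMA is enabled on each
    Get-NetAdapterRdma | Format-Table Name, Enabled

    # Enable RDMA on the 25Gb ports (these adapter names are just examples)
    Enable-NetAdapterRdma -Name "SLOT 4 Port 1", "SLOT 4 Port 2"

    # Verify SMB sees the interfaces as RDMA-capable
    Get-SmbClientNetworkInterface | Format-Table FriendlyName, RdmaCapable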



Below you'll see a picture of everything fully assembled and running in my rack.



In the end, I configured each system with the following specs:

  • CPU: Intel Core i5-12600H (Azure recognizes this as 12 cores)

  • RAM: 64 GB (32 GB x 2)

  • SSD:

    • 1TB NVMe drive x1 for the Operating System

    • 2TB NVMe drives x2 for the cluster storage

  • PCIe: Mellanox ConnectX-4 LX


All told, the cluster has 24 cores, 128GB of RAM, 8TB (raw) of NVMe cluster storage, 10Gb networking, and 25Gb networking. All of this fits in a compact 2U of rack space!
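
If you want to sanity-check that each node actually presents all of that to Windows, a quick per-node inventory sketch like this does the trick (nothing here is specific to Azure Stack HCI):

    # CPU model and core count (should report 12 cores on the i5-12600H)
    Get-CimInstance Win32_Processor | Select-Object Name, NumberOfCores

    # Installed RAM in GB (should report 64 per node)
    (Get-CimInstance Win32_ComputerSystem).TotalPhysicalMemory / 1GB

    # NVMe drives, including the two 2TB drives destined for cluster storage
    Get-PhysicalDisk | Select-Object FriendlyName, BusType, MediaType, Size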


 

That's pretty much everything I wanted to cover in Part 1. Coming up in Part 2, I'll start talking about my initial attempts to build a cluster in Azure.


