
My Azure Local Home Lab v2.0 - Part 1

  • Writer: Nathan
  • 11 hours ago
  • 4 min read

Please check out all parts of this series:


  • Part 1: The physical hardware (you're reading it now)

  • Part 2: Coming Soon!



Lab 1.0 versus Lab 2.0


I've previously written a whole series of articles on the 1.0 version of my Azure Local home lab (formerly known as Azure Stack HCI). You can find that series here.


Initially, I had no plans to upgrade my lab. But then Minisforum announced their MS-02 Ultra line of mini PCs. After seeing how cool they were, it didn't take me long to decide to start building my new lab.


So, when dealing with Azure Local clusters, what benefits do the MS-02 Ultra systems give me over my previous MS-01 systems?


Memory: The MS-02 Ultra supports ECC memory, whereas the MS-01 does not. Microsoft has updated Azure Local to make ECC memory a requirement. You can work around this requirement if you need to, but with the MS-02 Ultras you don't need to worry about any workarounds.
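

If you want to double-check what a node actually reports to the OS, here's a quick PowerShell check I'd run. It's just a rough sanity check, not an official validation step; it reads the standard Win32_PhysicalMemoryArray WMI class, where a MemoryErrorCorrection value of 6 means multi-bit ECC:

```powershell
# Does this node report ECC-capable memory to the OS?
# MemoryErrorCorrection: 3 = None, 5 = Single-bit ECC, 6 = Multi-bit ECC
Get-CimInstance -ClassName Win32_PhysicalMemoryArray |
    Select-Object MemoryErrorCorrection
```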


On top of that, the MS-02 Ultra supports a higher maximum of 256 GB of memory, whereas the MS-01 tops out at 96 GB (depending on your CPU).


Networking: Another benefit is that the built-in SFP networking was upgraded. The older MS-01 systems have dual 10Gb ports (from an Intel X710 controller), while the newer MS-02 Ultras have dual 25Gb ports (from an Intel E810 controller).


Expansion: The older MS-01 systems only come with one PCIe slot, which runs at PCIe 4.0 x8. The MS-02 Ultra systems offer better expandability, but keep in mind this also comes at the cost of a larger chassis. The MS-02 Ultra has two free slots: one runs at PCIe 5.0 x16, and the other runs at PCIe 4.0 x4.


In my clusters, I use the PCIe slots for Mellanox cards, which carry the storage traffic. Having a full PCIe 5.0 x16 slot in the MS-02 Ultra opens up many possibilities in the Mellanox world. For this 2.0 version of the cluster, I decided to go with Mellanox ConnectX-5 EX cards, which offer dual 100Gb ports. I found these cards on eBay, and while they are not exactly cheap, they offer the best price/performance ratio I could find. Stepping up to the ConnectX-6 series and later would have added much more cost.
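

Storage traffic between Azure Local nodes typically runs over RDMA, so it's worth confirming that Windows actually sees the ConnectX-5 ports as RDMA-capable before building anything on top of them. A rough check looks something like this (the adapter names and descriptions will differ on your system):

```powershell
# List the Mellanox ports Windows sees, with their link speed
Get-NetAdapter | Where-Object InterfaceDescription -like '*ConnectX*' |
    Select-Object Name, InterfaceDescription, LinkSpeed

# Confirm RDMA is enabled on the adapters that will carry storage traffic
Get-NetAdapterRdma | Select-Object Name, Enabled
```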


Storage: The last benefit I will discuss is that the MS-02 Ultra has better storage options than the MS-01:

  • The MS-02 Ultra comes with a total of 4 M.2 slots:

    • One PCIe 5.0 x4

    • Three PCIe 4.0 x4

  • The MS-01 is still respectable and comes with a total of 3 M.2 slots. However, they run at varying speeds:

    • One PCIe 4.0 x4

    • One PCIe 3.0 x4

    • One PCIe 3.0 x2


Final specs for a single system in my v2.0 cluster:


  • CPU: Intel Core Ultra 9 285HX (Azure recognizes this as 24 cores)

  • RAM: DDR5 64 GB ECC (32 GB x 2)

  • SSD:

    • 1TB NVMe PCIe 5.0 x4 drive for the operating system

    • 3x 1TB NVMe PCIe 4.0 x4 drives for the cluster storage

  • PCIe: Mellanox ConnectX-5 EX


All told, the cluster has 48 CPU cores, 128 GB of ECC RAM, 6 TB (raw) of NVMe cluster storage, 25Gb networking, and 100Gb networking.
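

For the curious, once the OS is on the PCIe 5.0 drive, a quick way to sanity-check that the three data drives show up as poolable NVMe is something like the snippet below. This is just a rough sketch; the drive names will obviously vary:

```powershell
# The OS drive should show CanPool = False; the three data NVMe drives should be poolable
Get-PhysicalDisk |
    Select-Object FriendlyName, BusType, MediaType, CanPool,
        @{ Name = 'SizeGB'; Expression = { [math]::Round($_.Size / 1GB) } } |
    Format-Table -AutoSize
```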



Racking the Hardware


I failed on my first attempt at racking these systems. I bought a shelf for my 19-inch rack and tried to place both systems horizontally next to each other on the shelf. However, this didn't work out: when placed side by side, the systems are too wide and won't fit in the rack. But, man, is it close! I contemplated taking the rubber feet off the sides of the systems, which might have allowed them to fit, but I didn't want to damage the systems. Plus, that also would have blocked off some of the ventilation holes on the sides. So, this idea was scratched.


Next, I decided to dip my toes into the world of 10-inch mini racks and bought my first one. I went with an 8U mini rack, a pair of heavy-duty shelves, and a keystone panel to help me pass through some of the connections for easier access. As you can see below, I think it turned out nicely!



You might be wondering: what is that weird PCI bracket with a hole in it and a cable coming out of it? I'll save that for Part 2! 😊



Connecting everything together



The wiring diagram is fairly simple for this build.


The ConnectX-5 cards are used for the two cluster storage networks. I just directly connect them with some QSFP28 DAC cables.


The Intel E810 ports are used for combined cluster traffic (management & compute). These ports connect to my 10Gb switch, so they do not operate at their full 25Gb capacity. However, I plan to upgrade to a switch with 25Gb ports, so hopefully these will soon run at full speed.
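

To give a rough idea of how this wiring could map onto the cluster configuration, here's a sketch of what the Network ATC intents might look like for this layout: the two E810 ports as a combined management + compute intent, and the two ConnectX-5 ports as the storage intent. The adapter names below are placeholders, and the actual intent setup is Part 2 territory:

```powershell
# Placeholder adapter names - check yours with: Get-NetAdapter | Select-Object Name, InterfaceDescription

# Combined management + compute intent over the onboard Intel E810 ports
Add-NetIntent -Name 'MgmtCompute' -Management -Compute -AdapterName 'E810 Port 1', 'E810 Port 2'

# Storage intent over the directly connected ConnectX-5 ports
Add-NetIntent -Name 'Storage' -Storage -AdapterName 'ConnectX-5 Port 1', 'ConnectX-5 Port 2'
```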



That's pretty much everything I wanted to cover in Part 1. Coming up in Part 2, I'll start talking about building the cluster in Azure.
