*****     ***    ******     ***    **   **  **  **    ***    *******
**        ** **   **   **   ** **   **   **  ** **    ** **     ***  
 *****   **   **  ******   **   **  *******  ****    **   **    ***  
     **  *******  **  **   *******  **   **  ** **   *******    ***  
 *****   **   **  **   **  **   **  **   **  **  **  **   **    ***  

SRV.HOME - Getting Started

Chassis and Parts

So far, I’ve got most of the stuff to get started, sans a couple of parts.

I’m basing this build on the Dell R720XD, which recently became affordable on the second-hand market and will be replacing my R710.

The R7xx series is well known for being a great all-round machine, on account of its ability to take a lot of RAM and CPU, and is generally seen as a great virtualization platform.

The R720XD takes this a step further by allowing for better CPU upgrades and a ton more storage - from 6 LFF drive bays up to 12 (plus a couple of SFF bays). That’s double the 3.5” HDDs in the same space, which especially pleases the data hoarder in me.

Notes about Using the R720XD

Some notes from my purchasing experience, for anyone who wants to avoid discovering these one at a time and waiting a month to sort it all out:

Flex Bay

R720XDs don’t come with the rear “Flex Bay” board by default - this is the part that hooks up the two 2.5” SFF drives slotted into the back of the server, and it’s something I only discovered after receiving the chassis and had to order after the fact. If you’re in a similar situation, the Flex Bay card that matches the 3.5” LFF version of the R720XD is Dell part number “0JDG3”; I found it for $100 on eBay.

PERC cards

Dell servers usually have an option for an add-in RAID controller that comes as a daughterboard to the motherboard, allowing the hardware capabilities to change without being tied to whatever is integrated on the board. In previous generations this was a PCI-E slot that sat deeper within the server and didn’t take up any real estate on the rear of the chassis, leaving room for customer add-in cards; now it’s a daughterboard right off the motherboard using a proprietary connector.

It’s really handy to use the PERC cards since they don’t take up a PCI-E slot, but none of the Dell versions of the card support straight HBA mode without causing problems with operating system drivers if you try to do so. The popular solution used to be grabbing a PCI-E card and hooking the server’s backplane into that instead, since replacing the firmware on the PERC card usually causes a frozen boot - Dell checks the firmware on the card and rejects anything not “Dell”.

The other problem with the PERC cards is that this generation doesn’t seem to universally love disks, and the disks I have set aside for this project don’t provide much hope. All six show as “Status: Blocked” in the PERC BIOS, report a lower capacity than they actually have, and don’t show up to the operating system at all. There are multiple possibilities: even though this generation of card should support >2TB disks, that could be a conflict. The drive firmware could also be a conflict, as they’re not “Dell drives”, even though internet reading says a PERC update should have allowed any drive. Failed drives would explain all the facts, if they didn’t show up just fine on another HBA.
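For reference, the sanity checks I’d run with a drive hooked to a plain HBA look something like the sketch below. The device name is an assumption, smartctl comes from the smartmontools package, and the script prints the commands by default rather than running them:

```shell
#!/bin/sh
# Sketch of per-drive sanity checks on a plain HBA. /dev/sdb is an assumed
# device name - check lsblk first. DRY_RUN=1 prints instead of executing.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run lsblk -b -o NAME,SIZE,MODEL    # does the kernel see the full capacity?
run smartctl -i /dev/sdb           # identity: model, firmware, rated capacity
run smartctl -H /dev/sdb           # overall SMART health verdict
```

If the drives pass all three here but get “Blocked” on the PERC, that points at the controller rather than the disks.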

So the easiest solution is to go to an IT mode HBA - a Host Bus Adapter that presents each drive to the operating system without doing anything fancy, and with fewer compatibility checks.

I recently found that The Art of Server figured out a way to flash IT firmware onto the integrated-slot cards, leaving the hex values of the PCI ID and such in place while installing different firmware - the card still talks to the system over PCI-E, even if on 13th-gen servers and newer it’s over a proprietary connector. I’d recommend checking his stuff out; it’s pretty nifty that he figured it out.

Anyway, I picked up an H310 Mini flashed to IT firmware for $65 - I don’t really trust myself to play around with flashing firmware when I want to get this up and running soon, but it doesn’t look like the most impossible process in the world. As of this writing, I’m waiting on that card to arrive in the mail.

The Drives

Right now I have a small array - 6 drives of 2TB each - to replace the monolithic, non-redundant single 4TB drive I’ve been living off for the past few years. That drive has recently been giving me some grief, thus prompting this whole foray into upgrading the whole server.

6 drives currently fill up half the slots, and I plan on filling the other half with much larger capacity drives when I can afford it, moving the data to that array and retiring the 2TBs at that point.

I’d like to go with ZFS for the filesystem on the drives, since data redundancy and integrity are becoming more of a priority. ZFS on Linux doesn’t currently consider ZFS-as-root stable, and besides, I’d kind of like the data segregated from the OS itself. This is where the rear Flex Bay comes into play.

The Flex Bay offers two more drives out the back of the server that would be easy to throw into a mirrored array and run the operating system from. That’s the plan, re-using some 500GB drives from the machine currently serving as the temporary srv.home. Some SSDs partitioned out for a SLOG and L2ARC would be nice too, but as I’m only running gigabit at home, I don’t think I’d get much use from that.
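To make the plan concrete, here’s a sketch of the pool layout I have in mind. The pool name, device paths, and the choice of raidz2 are all assumptions at this point, not a final decision, and the script prints the commands rather than running them:

```shell
#!/bin/sh
# Sketch of the intended ZFS layout - pool name "tank", device names, and
# raidz2 are assumptions. DRY_RUN=1 prints the commands instead of running.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# Six 2TB disks in raidz2: roughly 8TB usable, survives any two drive failures.
run zpool create -o ashift=12 tank raidz2 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# If SSDs ever show up, SLOG and L2ARC would bolt on afterwards like so:
run zpool add tank log mirror /dev/sdg1 /dev/sdh1
run zpool add tank cache /dev/sdg2 /dev/sdh2
```

In practice I’d feed zpool /dev/disk/by-id/ paths rather than sdX names, since those survive reboots and cable reshuffles.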

The Software

Previously, I’ve been running ESXi on the metal to get my fill of independent operating systems and virtual networking for both home-server and home-lab use without powering on a dozen different systems. I now want to run a more legitimate file server than “4TB USB drive passed through to a VM”, and that doesn’t seem terribly possible with ESXi - at least not without multiple HBAs or onboard SATA, which the R720XD doesn’t provide. In theory, passing one of the HBAs through via PCI-E passthrough could work, which might be a more valid solution for someone building from a tower or a Supermicro server that isn’t as tightly integrated.

Instead, I’ve decided to run Ubuntu/Debian on the bare metal, handle the networking with bridge interfaces, and hook KVM virtual machines in on top of that. The bare-metal server will handle the ZFS pool and share it with a Nextcloud instance running in a VM. I’m not yet decided on NFS, CIFS, etc., or what the exact architecture will be, but I’ll make it work. I’ve always used the home server projects as a learning experience anyway.
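The bridge part of that is only a few commands. A rough sketch with iproute2, assuming the onboard NIC shows up as eno1 (interface names are an assumption; the script prints rather than executes):

```shell
#!/bin/sh
# Sketch of a host bridge for KVM guests. "eno1" and "br0" are assumed
# names. DRY_RUN=1 prints the commands instead of executing them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run ip link add br0 type bridge    # create the bridge device
run ip link set eno1 master br0    # enslave the physical NIC to it
run ip link set br0 up
run ip link set eno1 up
run dhclient br0                   # the host now gets its address on br0
```

Guests then attach with virt-install’s --network bridge=br0 (or the equivalent libvirt XML), and making this survive a reboot means writing it into netplan or /etc/network/interfaces rather than running it by hand.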

I’m already running KVM on a temporary server that’s holding my virtual machines for now - I migrated it all when I wanted to empty out the R710 and try to upgrade it, but the upgrade process ended up convincing me to get double the drive bays in the R720XD.

In the virtual machines, I’ll likely continue my habit of running a “network in a box”: I run pfSense, plug the server directly into the modem, and then into an access point / switch for the physical networking. This seems to be the most power-efficient way to run what I feel I need to run without powering on more hardware. Technically, pfSense would be better on its own hardware, but I’d rather not pull another 100 watts at idle or buy an appliance. I may also switch to OPNsense - undecided as of now.

Misc Hardware

Also procured are a couple of UPS units - one for the desk and one for the mini-rack this will reside in. I’ll be hooking the UPS’s management USB interface into the server to enable an automatic graceful shutdown on a power outage lasting more than a few minutes (don’t wanna shut down for a brown-out or a tripped breaker if I don’t have to). I’ll document that journey as I get to it.
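The usual tool for this on Linux is Network UPS Tools (NUT). I haven’t set this up yet, so the fragment below is only a sketch of the shape of the config, assuming a USB-connected UPS that the usbhid-ups driver recognizes - the UPS name, user, and password are placeholders:

```
# /etc/nut/ups.conf - driver definition (usbhid-ups covers most USB UPSes)
[rackups]
    driver = usbhid-ups
    port = auto

# /etc/nut/upsmon.conf - shut the host down when the UPS reports
# on-battery AND low-battery (newer NUT spells "master" as "primary")
MONITOR rackups@localhost 1 upsmon_user changeme master
SHUTDOWNCMD "/sbin/shutdown -h +0"
```

That also needs a matching upsmon_user entry in upsd.users; by default the trigger is the UPS’s low-battery threshold rather than a wall-clock timer, so “ride out the first few minutes” behavior is upssched territory.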

Status to Date

I have the chassis, I have the drives, I have the Flex Bay. I’m waiting on the H310 Mini Mono turned HBA to turn up in the mail, and then it’ll be time to configure everything.

This project, on completion, will likely get edited down into a more concise entry for the “Projects” side of the website, where it’s more about documentation than it is about stream-of-consciousness rant.