Description

This machine replaced a QNAP TS-559 Pro+ NAS (file/media server and Transmission), an Ivy Bridge Xpenology box (IP cameras and Minecraft server), and a Skylake gaming computer (passed on to my child as a Minecraft/Sims machine). Add up the cores, memory, and storage of all three, and this one machine, with 12 cores, 64GB of memory, and 89TB of storage, still has more than all of them combined! This is definitely not a computer for everyone; it's more of a workstation/server hybrid. I chose a 4U server chassis to allow for up to 15 trayless hot-swap drives. 4U provides enough height for a GPU to be mounted normally (vertically) in the motherboard, and it also allows about 156mm of clearance above the motherboard for a CPU cooler.

I reused my existing GPU, power supply, peripherals, a couple of SATA SSDs, and three 2TB SATA HDDs. I tried to use the Wraith Prism cooler and a pair of stock 120mm fans, but temperatures got out of control when the GPU was at full throttle. I cut the case with a rotary tool to accept a pair of 140mm fans (one intake, one exhaust), two 80mm side fans for intake, and two 80mm rear fans for exhaust. I replaced the stock cooler with a Scythe Ninja. Unfortunately, the Scythe Ninja is taller than the specifications indicate (or than I interpreted them to indicate): it reaches 161mm above the motherboard rather than 155mm. I had to drill 12 holes in the top of the case to accommodate the unplanned height.

I chose the Asus Pro WS X570-ACE motherboard for its PCIe lane magic. While the chipset still only has 20 PCIe lanes (plus 24 on the 3900X), through some PCIe 3/4 wizardry it can still run three cards in x8 mode. That's like getting an extra 4 PCIe lanes! It's still far short of the 64 lanes on X399 or 72 lanes on TRX40, but it's kind of like Threadripper-lite on the relatively inexpensive consumer platform. Another bonus is that this board has an onboard U.2 port that can be used with an SFF-8643 mini-SAS HD cable to control an additional four SATA drives, for a total of eight. Given the workstation branding, it's frankly ridiculous that the ACE doesn't have 10GbE LAN or a USB 3 Gen 2 header.

I prefer to stick with the QVL when selecting ECC memory. In the case of the ACE motherboard, this limits you to three varieties of 2133-CL15 DIMMs and four varieties of 2666-CL19 (all at 1.2V). Of the three vendors, only SK Hynix makes readily available and identifiable DIMMs; good luck finding V-Color or Innodisk. The HMA82GU7CJR8N-VK, however, is sold by Supermicro and a variety of other vendors. I chose to purchase Supermicro-branded DIMMs, as they cost $75, about the same as the other vendors. One nice thing about purchasing server memory is that it is abundantly clear exactly what you are getting: according to the SK Hynix product sheet, it's 2Rx8 Hynix C-die. In overclocking I found it completely stable at 2666-CL14. Above 2666 I would get WHEA errors, but no crashes until 3466.

I found an LSI 9400-16i on eBay for $247.50, which seems like a heck of a deal on a SAS3 controller with four ports that is even NVMe-compatible. Most importantly, it only uses eight lanes of PCIe 3. On any other X570 motherboard, I would have just used up the very last of my PCIe lanes! (x8 GPU & x8 HBA)
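To make the lane math concrete, here's a rough back-of-the-envelope tally in Python. It assumes the ACE's top two full-length slots split the CPU's x16 into x8/x8 and that the third x8 slot is fed by the chipset, and that the GPU and HBA sit in the CPU-fed slots; the slot labels are just illustrative, not anything from the board's manual.

```python
# Rough lane-budget tally for an x8/x8/x8 layout like the X570-ACE's
# (assumed: slots 1-2 split the CPU's x16, slot 3 hangs off the chipset).
CPU_SLOT_LANES = 16      # CPU lanes normally dedicated to a single x16 GPU slot
CHIPSET_SLOT_LANES = 8   # lanes assumed to be routed from the chipset to slot 3

slots = {
    "slot 1 (GPU)":  {"source": "cpu",     "width": 8},
    "slot 2 (HBA)":  {"source": "cpu",     "width": 8},
    "slot 3 (open)": {"source": "chipset", "width": 0},  # nothing installed yet
}

cpu_used = sum(s["width"] for s in slots.values() if s["source"] == "cpu")
chipset_used = sum(s["width"] for s in slots.values() if s["source"] == "chipset")

print(f"CPU slot lanes used:     {cpu_used}/{CPU_SLOT_LANES}")          # 16/16
print(f"Chipset slot lanes used: {chipset_used}/{CHIPSET_SLOT_LANES}")  # 0/8
```

On a typical X570 board the GPU and HBA together would exhaust the CPU-fed slot lanes; here the chipset-fed x8 slot stays open for a future card.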

In addition to the drives I already owned, I added ten He8 drives (Sun/Oracle branded) that I bought with two years of datacenter use on them. $11/TB is ridiculously cheap for quality HGST drives. I set up the ten He8 drives in a two-way mirror pool, yielding 35.7TB of effective storage space; this is used for shared network storage, media, programs, qBittorrent, etc. I set up the Samsung HDDs in a simple pool yielding 5.5TB of storage; this is used by Blue Iris for IP camera video storage. The Sabrent Rocket is the primary system drive, the WD Blue SATA SSD is used for games, and the OCZ Vertex SATA SSD for Adobe Creative Suite.
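For anyone checking the capacity numbers, here's a quick sanity check in Python. It assumes the He8s are the 8TB model and that Windows reports capacity in binary units (TiB) even though it labels them "TB"; Storage Spaces also reserves a little for pool metadata, which is why the reported 35.7 comes in a bit under the raw math.

```python
# Back-of-the-envelope usable-capacity check for the two pools.
def tb_to_tib(tb_decimal: float) -> float:
    """Convert vendor-decimal terabytes to binary tebibytes."""
    return tb_decimal * 1e12 / 2**40

# Two-way mirror: every byte is written twice, so usable space is half the raw total.
he8_raw_tb = 10 * 8                      # ten 8TB He8 drives (assumed capacity)
he8_usable = tb_to_tib(he8_raw_tb) / 2
print(f"He8 mirror pool: ~{he8_usable:.1f} TiB usable before pool metadata")  # ~36.4

# Simple (non-resilient) pool: usable space is just the sum of the drives.
samsung_raw_tb = 3 * 2                   # three 2TB Samsung HDDs
samsung_usable = tb_to_tib(samsung_raw_tb)
print(f"Samsung simple pool: ~{samsung_usable:.1f} TiB usable")               # ~5.5
```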

The spinning drives are mounted in iStarUSA racks. Note that the single-slot rack (BPN-DE110HD) does not come with a fan, so I added a 40mm Scythe Mini Kaze fan. The 2-pin fan header on the iStarUSA rack is smaller than standard (mini-GPU header?). I had to grind down the sides of the Scythe 2-pin connector to allow it to fit. The SATA SSDs are mounted in the EZDIY-FAB faceplate. That faceplate isn't designed for this purpose, but it has holes that will work for lightweight SSDs.

A couple of notes on the iStarUSA D-410 chassis. It has tremendous expansion up front in the ten half-height drive bays, but having only two 120mm fan mounts makes it a poor match for a powerful GPU consuming a few hundred watts plus a powerhouse CPU. Also note that the case comes with no instructions; not that it really needs them, but you do get to figure out the wiring yourself, which isn't too difficult. The LEDs for the HDD indicator and power were mounted in the wrong slots in my case: the blue power LED should be on the left side, and the red HDD LED on the right.

Fans and mesh filters are mounted with flat head 6-32 machine screws: 1.5" (Hillman #46508-F) except for the top four on the rear, which are 2" (Hillman #46508-Q) with OO flat neoprene faucet washers (Hillman #46087-A), 6-32 hex nuts (Hillman #46508-H), and #6 nylon washers (58056-K). The neoprene faucet washers do a great job of absorbing any vibration between the screw and the case, and fit very nicely with the flat head screw.


Comments

  • 1 month ago
  • 3 points

it's a monster

  • 1 month ago
  • 3 points

Of the variety that Doctor Frankenstein built!

  • 1 month ago
  • 3 points

WOW, Feature pls!

  • 1 month ago
  • 3 points

This is the coolest trashiest high end build I've ever seen. I mean that in a good way.

  • 1 month ago
  • 2 points

Absolutely amazing!

  • 1 month ago
  • 1 point

I don't see the RAM anywhere in the list.

  • 1 month ago
  • 1 point

It's listed under Custom, since PCPartPicker doesn't generally list server memory. It is Supermicro MEM-DR416L-HL01-EU26 16GB ECC DDR4 2666 (SK Hynix C-Die HMA82GU7CJR8N-VK): https://www.newegg.com/supermicro-16gb-288-pin-ddr4-sdram/p/1X5-000K-002A7

  • 1 month ago
  • 1 point

Very nice and impressive build.

I noticed you use Storage Spaces, though. I just wanted to warn you that if something goes wrong with it, it goes terribly wrong. You don't have to take my word for it; just take a look at the Microsoft forums. I used it in the past as well and wasted weeks fixing everything. I would suggest going with RAID or a program like DrivePool or SnapRAID, depending on how you want your data managed.

  • 1 month ago
  • 1 point

Thanks for the warning; that is what the off-site backup is for.

  • 1 month ago
  • 1 point

This is an incredible build! Thanks for sharing. Where I work, we are currently looking to build a media server so that all the video editors can get their footage from a centralized location. This is definitely a great guide for that!

  • 1 month ago
  • 2 points

If you are just building file storage and don't need it to function as a workstation as well, you wouldn't need the GPU, which is the source of most of the heat. That would allow a more traditional server: dual 10GbE NICs, an HBA, and a whole lot of drive bays would do the trick! Spinning drives are inexpensive, but if you are serving video files to a workgroup in a timely manner, you should probably consider solid-state drives. My drive array will saturate my 1GbE LAN, but it wouldn't come close to saturating a 10GbE network. My sustained sequential read speed is around 260 MB/s (~2,000 Mbps), which is more than sufficient for my purposes.
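As a quick sanity check on where the bottleneck sits (just unit conversion, nothing measured beyond the 260 MB/s figure above):

```python
# Compare the array's sustained read speed with 1GbE and 10GbE link capacity.
def mb_per_s_to_mbps(mb_per_s: float) -> float:
    return mb_per_s * 8  # 1 byte = 8 bits

array_read = 260              # MB/s sustained sequential read from the pool
gigabit_cap = 1000 / 8        # 1GbE tops out at 125 MB/s before protocol overhead
ten_gigabit_cap = 10000 / 8   # 10GbE tops out at 1250 MB/s

print(f"Array read:  {array_read} MB/s (~{mb_per_s_to_mbps(array_read):.0f} Mbps)")
print(f"1GbE limit:  {gigabit_cap:.0f} MB/s  -> the network is the bottleneck")
print(f"10GbE limit: {ten_gigabit_cap:.0f} MB/s -> the spinning array is the bottleneck")
```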

  • 1 month ago
  • 1 point

Thank you for your answer! Getting 260 MB/s is incredible. We currently use a very old Mac Pro, and our max speed has been 40 MB/s. May I ask which OS you are using? And what configuration/software are you using to share the drives with your workgroup?

Thanks again!

  • 29 days ago
  • 1 point

Transferring multi-gigabyte video files would be a real time-suck at 40 MB/s! My OS choice was based on my desire for a single consolidated device that would replace the server devices I had previously been using while also functioning as a gaming machine/workstation. For the latter requirement, Windows Pro was the obvious choice. For the former, Windows Server or a variety of Linux permutations would work great, except that I also wanted to run Blue Iris, which is Windows-only. I originally went with Windows Pro as the host OS, with Windows Server 2019 running in Hyper-V. However, not having direct access to the pooled volumes from the host OS was annoying... so I am now running Windows 10 Pro for Workstations v1909 (the Workstations edition allows creation of ReFS volumes, which Microsoft removed from Pro in v1709) for everything but MineOS, which runs in a Hyper-V virtual machine.

I am utilizing Storage Spaces for drive pooling, which is super easy to set up and far more flexible than traditional RAID. For example, you can pool drives of dissimilar sizes and types. I chose not to mix dissimilar drives, but you could, which is nice for future expansion flexibility. I am sharing with SMB. Note that in Storage Spaces, mirroring is far faster (>5x) than parity (parity would be around the speed of your current system), though it is less efficient in terms of usable capacity. Given the cost of storage these days, losing 50% to two-way mirroring isn't really that big of a deal.
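To illustrate the capacity trade-off described above, here's a simplified sketch; it ignores Storage Spaces' column, interleave, and metadata details, so treat the parity figure as a rough approximation rather than what the GUI would report.

```python
# Rough usable-capacity comparison for a pool of n identical drives.
def usable_fraction(n_drives: int, layout: str) -> float:
    if layout == "simple":
        return 1.0                        # no resiliency, all space usable
    if layout == "two-way mirror":
        return 0.5                        # every byte is stored twice
    if layout == "single parity":
        return (n_drives - 1) / n_drives  # roughly one drive's worth goes to parity
    raise ValueError(f"unknown layout: {layout}")

drives, size_tb = 10, 8
for layout in ("simple", "two-way mirror", "single parity"):
    usable = drives * size_tb * usable_fraction(drives, layout)
    print(f"{layout:>15}: ~{usable:.0f} TB usable of {drives * size_tb} TB raw")
```

Parity wins on capacity, but as noted above, its writes are several times slower than mirroring's.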

This hardware is way overkill for simple file sharing. The i5 Ivy Bridge-based Xpenology box that this partially replaced was just as good for that purpose, and it was running on several-generations-old hardware with only 16GB of RAM. That Xpenology build was far better for Time Machine, which might be a consideration if you are using macOS workstations. This hardware can run Premiere (or Red Dead Redemption, etc.) concurrently with Blue Iris, qBittorrent, Plex Server, and MineOS in Hyper-V. The last two screenshots show thread and GPU utilization while running FFmpeg and everything else concurrently. Everything gets utilized, but nothing is running at 100%.

I mentioned 10GbE NICs, but really, that would only be a benefit if you also upgraded all of your client machines and network hardware to 10GbE. That can be an expensive proposition, and it's probably not necessary if you are already getting by with 40 MB/s. However, you might benefit from teaming the pair of 1GbE NICs found on many motherboards, including the Asus X570-ACE in this build. I am personally using those NICs independently, so they have separate IP addresses.

  • 29 days ago
  • 1 point

In case I wasn't clear, local sustained read/write speeds are about 260 MB/s. Of course, that saturates a gigabit network (1GbE), so transfers over the LAN are a bit under the theoretical maximum of 125 MB/s (1,000 Mbps). You should get sustained large-file transfer speeds over a gigabit network of around 95-110 MB/s from just about any solution. If you were to implement 10GbE networking, add an extra zero onto the end of all those numbers; except for the cost, maybe add a couple of extra zeros for that!