Description

This is my first build since the 486 days. I'd been using a Mac Mini as a Plex server for a few years, with my media stored on a Synology 411j, which started to run out of space about a year ago. After doing some research, I decided to build my own server, rather than upgrade to a larger Synology.

This build houses 10x3TB WD Red drives (yes, they all fit in the case without any cooling issues). Because I need to be able to transcode up to 5 HD Plex streams at any given time, I decided to use a Xeon E3-1231 v3 (its PassMark score is just under 10,000). So far, the CPU has never been stressed above 80% (and that was during a 15-minute window when I had 3 transcodes running along with the generation of BIF index files for all of my TV shows).
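
For context, here's a quick back-of-the-envelope sizing sketch (my addition, not part of the original notes) using the commonly quoted Plex guideline of roughly 2,000 PassMark per simultaneous 1080p transcode; the exact figures below are approximations.

```python
# Rough sizing check using Plex's oft-quoted rule of thumb of ~2,000 PassMark
# per 1080p transcode. The CPU score below is approximate ("just under 10,000").
STREAMS_NEEDED = 5
PASSMARK_PER_1080P_TRANSCODE = 2000   # rough guideline, not a hard number
CPU_PASSMARK = 9900                   # approx. Xeon E3-1231 v3

required = STREAMS_NEEDED * PASSMARK_PER_1080P_TRANSCODE
print(f"Needed: ~{required} PassMark, available: ~{CPU_PASSMARK}")
# -> Needed: ~10000 PassMark, available: ~9900 (close enough for 5 HD streams)
```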

I've flashed the onboard SAS controller to IT mode, which gives FreeNAS/ZFS direct access to the disks. Temperatures hover between 25 and 37°C for the drives, and 30 and 60°C for the CPU. The SAS controller typically sits between 36 and 42°C.

The case came with 3 fans, and I bought 3 additional fans (exact same version as the ones that came with the case, except the 3 I bought are PWM 4-pin fans [sidenote: couldn't find them as an option here on PCPartPicker, so my build list is slightly wrong]). All three fans on the 'hard drive' side of the case are hooked up through the SMART powered case-fan switch. All three fans on the 'CPU' side of the case are the 4-pin PWM fans and are hooked up directly to the motherboard (along with the stock CPU cooler).

FreeNAS is, largely by nature, a headless system, so there is no need for a graphics card (or even integrated graphics on the CPU). The motherboard has its own VGA chip and supports IPMI, so I can manage it entirely remotely. I have never plugged a keyboard or mouse into this computer!

My boot device is a pair of mirrored 16GB USB flash drives. My zpool consists of 2 vdevs of 5x3TB drives, both in RAIDz1. This means that although I have 30 TB of raw storage, 6 TB of it is used for parity. In theory, I can lose a drive in either vdev and the zpool will repair itself once I put in a replacement drive. I can't, however, lose 2 drives at the same time in the same vdev, as that would take down the entire pool! All of the media on this server is either backed up elsewhere, or I still have the original Blu-rays/DVDs. My boot mirror is scrubbed once a month, and my zpool is scrubbed twice a month. Short SMART tests run every night, and long ones once a week.
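
To make the parity math explicit, here's a small sketch (my addition, ignoring ZFS metadata overhead and the TB-vs-TiB distinction) of the usable capacity for this layout:

```python
def raidz_usable_tb(drives: int, size_tb: float, parity: int) -> float:
    """Usable capacity of one RAIDz vdev: total drives minus parity drives."""
    return (drives - parity) * size_tb

# Two RAIDz1 vdevs of 5x3TB each
raw = 2 * 5 * 3
usable = 2 * raidz_usable_tb(drives=5, size_tb=3, parity=1)
print(f"Raw: {raw} TB, usable: {usable} TB, parity overhead: {raw - usable} TB")
# -> Raw: 30 TB, usable: 24.0 TB, parity overhead: 6.0 TB
```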

A few build notes: right-angle SATA cables were essential, due to there being very little space between the hard drives and the power supply. The fan cables can be run along the top of the case (just under the top panel and above the drive cages) to keep wires out of the way.

Currently I have just over 11 TiB of media stored on this server, with approximately 11 TiB left to grow into. When (haha, not if) I run out of space, I plan on buying another Node 804 to house 2 additional vdevs of 4x4TB Red drives, connected back to this machine by way of a SAS9200-8e card, giving me a total of 48 TB of usable storage space.

EDIT: I've gone ahead and migrated my old pool into a new pool comprised of one vdev of 8x4TB Reds in RAIDz2, plus my original ten 3TB Reds as a second 10x3TB RAIDz2 vdev. I feel a bit safer about my data now that each vdev has 2 drives of redundancy. As I stated above, I added a second Node 804, a SAS9200-8e card, and all of the required cabling, and now have approximately 48TB of usable storage space.
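
The same back-of-the-envelope arithmetic as above (again my sketch, ignoring ZFS overhead) for the new RAIDz2 layout:

```python
def raidz_usable_tb(drives: int, size_tb: float, parity: int) -> float:
    """Usable capacity of one RAIDz vdev: total drives minus parity drives."""
    return (drives - parity) * size_tb

new_pool = raidz_usable_tb(8, 4, parity=2) + raidz_usable_tb(10, 3, parity=2)
print(f"Usable: {new_pool} TB")  # -> 24 + 24 = 48.0 TB, 2-drive redundancy per vdev
```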

Comments

  • 49 months ago
  • 3 points

Is this you YiFY??? If so, please come back!

  • 49 months ago
  • 2 points

I miss YiFY too bro

  • 49 months ago
  • 2 points

Nope, not YiFY. I prefer CyTSuNee releases.

  • 49 months ago
  • 1 point

Hmmm, I'm gonna have to look them up next time. Thanks for the info!

  • 49 months ago
  • 2 points

Damn! This makes my 12TB movie/media server look like a thumb drive. :-D

  • 49 months ago
  • 1 point

Haha I felt the same way when I got started! I had originally intended to do a bigger build with a 4U server case that would house 24 drives, but decided that WAF (wife acceptance factor) was more important!

  • 49 months ago
  • 1 point

I was going to build my own server, and even started putting the parts in the online cart, but I just happened to be cruising the slick-deals-type sites and ran across a short-lived deal from Dell on their entry-level PowerEdge T20 server: Haswell Xeon processor (E3-1225 v3), 4GB RAM, and free shipping for $200. I upped the RAM to 12GB, stuffed in 4x3TB WD drives, and run it all from an old Intel 320 Series SSD.

  • 49 months ago
  • 2 points

Wow, that's a fantastic deal! You can always grow your storage with an LSI 9200-8e plugged into a PCIe slot. It will allow 8 additional drives in a separate enclosure!

  • 49 months ago
  • 1 point

This is my favorite FreeNAS build here and very closely resembles what I have in my parts list. Great job!

  • 49 months ago
  • 1 point

Great to hear, and thanks for the compliment! Everything in my build worked from day 1 without any issues, so if your parts are similar, it sounds like you'll have a solid, stable build!

  • 48 months ago
  • 1 point

Making a NAS? I had no idea how to make a NAS. Thanks.

  • 48 months ago
  • 1 point

This is a very open-ended question. A NAS is typically a computer with a large storage capacity that is capable of sharing that capacity over the entire local network (or external network via FTP or other protocols).

Is there a specific question you might have that would help me share an answer?

  • 48 months ago
  • 1 point

Firstly, I'm new to building PCs, and by the sound of it you know what's really going on. I like this build; the mid tower with room for 10 bays and the motherboard are good choices. But if I were to hook this up directly to my TV, what video card would you recommend, or should I just use it through my network with the Plex app on my TV? Also, how is power consumption? Do you leave it running 24/7? Is the power supply enough for all the hard drives?

  • 48 months ago
  • 1 point

Why would you want to hook this [my build] up directly to your TV?

Plex Server and Plex Players do very different things. I had both running on a Mac Mini for years, and there wasn't too much of an issue (I'd say there was a restart every 2 weeks or so). The problem was, every time I restarted the computer because of Plex Player problems, it also shut down the server (for a minute or two). My friends complained about the downtime, and I was quickly running out of room, so I built this. For reference, this is NOT a Plex Player box.

I can't comment too much on video cards. I know that Plex Player software does just fine with most hardware (I am using a Chromebox as a full featured client, and people use Pis all the time). Use pretty much anything with a video card, and Plex will work.

Power consumption is relatively high, because I've decided to leave all drives spinning all the time (it decreases search time by a LOT). It averages 55-60 watts at "idle" and up to 180 watts when it's transcoding 4-5 streams. Because I share my library with a few people, yes, I leave it up/on 24/7.
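
For a rough sense of running cost (my numbers below, not the builder's; the electricity rate is a placeholder), the idle figure works out to roughly:

```python
IDLE_WATTS = 60          # upper end of the quoted idle draw
RATE_PER_KWH = 0.12      # hypothetical electricity rate, varies by region

kwh_per_month = IDLE_WATTS * 24 * 30 / 1000
print(f"~{kwh_per_month:.0f} kWh/month, ~${kwh_per_month * RATE_PER_KWH:.2f}/month at idle")
# -> ~43 kWh/month, ~$5.18/month
```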

Yes, my power supply is quite enough, thank you. I bought it with the specific knowledge/intent NOT to have a graphics card (since my motherboard has IPMI and a VGA chip). This build was specifically for a server, not a multi-use machine. IF (and that's a total "I'd never actually do this" IF) I were to design this as a multi-use machine, I'd consider the AMD Radeon R9 390X and change my power supply to a Corsair RM1000i. That would add an extra $500 or so.

Hope that answers your questions!

  • 48 months ago
  • 1 point

Thank you for the reply. I had been looking at other builds and most had 700W+ power supplies, so I wasn't sure, that's all. I didn't mean to offend you or anything.

I'm not going to put a GPU in it now; I understand it's just a server.

I'm currently storing all my media on external hard drives and constantly swapping them between TVs, so I want to build a media server I can access from one spot.

I really liked your system; it really stood out being the small form factor it is with all those drives *drools*. So thanks for putting your build up on the site, and thanks for responding. Big help.

  • 48 months ago
  • 1 point

No offense was taken! I was just trying to stress a point that I prefer to keep my machines separate for the most part.

Ultimately, you want to look at 2 things when selecting a PSU. 1) What would a possible total maximum load look like for all of your components, in terms of both wattage AND voltage? 2) Most PSUs hit their maximum efficiency somewhere around 50% of total load. Ideally, you want a PSU whose ~50% point sits at your average running load, so you're only using the electricity needed to power the system and not building an expensive heater. (In a hypothetical example, a 100% efficient PSU would draw 1W of power from your wall and turn it into 1W of power for your machine. PSUs get closest to that 100% efficiency around 50% of their overall rating. This is where the Bronze, Silver, Gold, etc. ratings come into play: a Gold-rated PSU will hit about 90% efficiency at 50% of its total capacity, meaning it draws 1W from the wall and gives the machine 0.9W, while the other 0.1W becomes waste heat.)
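
As a worked version of that efficiency example (a sketch of my own, assuming a Gold-class unit running near its ~90% sweet spot):

```python
def wall_draw(dc_load_w: float, efficiency: float):
    """Return (watts drawn from the wall, watts lost as heat) for a given DC load."""
    from_wall = dc_load_w / efficiency
    return from_wall, from_wall - dc_load_w

drawn, wasted = wall_draw(dc_load_w=180, efficiency=0.90)  # ~peak transcoding load
print(f"Wall draw: {drawn:.0f} W, waste heat: {wasted:.0f} W")  # -> 200 W, 20 W
```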

Given the usage scenario you're describing, all you'd really need is the Plex Server app running on a machine (with access to your drives) in your internal network, and Plex Player apps on your TVs, phones, etc. When I started using Plex years ago (when it was still in .8 development), I had a bunch of externals plugged into an iMac. Then I moved on to a cheap Synology box and continued to run the Plex Server app on the iMac, while ensuring the iMac had full time access to the Synology (I just treated the Synology as a dumb box for storage). I moved on to the Mac Mini doing the same thing, and then I built this as a long term solution, since the Synology was out of space.

Thank you for the kind words. I'm very happy with the way it turned out, as it has a relatively small footprint and looks pretty sleek overall (hey, if my wife doesn't complain, then I did a good job)!

  • 47 months ago
  • 1 point

Thanks for the details on your build. I have the case and bought the same board, but was going for an i3 instead. Decided to go for the E3 to future proof myself a little :) and I know it works well from your reports. I am wondering if I need to flash the SAS controller to IT mode as well for Unraid, or was that a FreeNAS thing?

  • 47 months ago
  • 1 point

Sure thing, and not a terrible idea to future proof :) Everything about this build has been rock solid. I couldn't be happier with the components, and am loving the system for its overall design and function.

Definitely not an expert when it comes to Unraid, but my understanding is that it's better to let Unraid talk to 'dumb' disks directly, rather than through an 'interpreter' like a RAID card (even if you tell the RAID card not to RAID, it still receives, processes, and re-sends the information).

When it comes to these software RAID OSes (like Unraid, FreeNAS/FreeBSD, etc.), you really want to let the OS do all the communicating with the disks, rather than rely on/trust that the RAID card will pass the exact information along. I'd bet that enough searching on the lime-tech forums would turn up quite a few tutorials on flashing RAID cards from IR to IT mode...

http://forums.servethehome.com/processors-motherboards/2115-supermicro-x10sl7-f-vs-x10slh-f.html
http://forums.servethehome.com/raid-controllers-host-bus-adapters/1734-flashing-lsi2308-x9srh-7f-mode.html
https://lime-technology.com/forum/index.php?topic=27800.149 (post 149 gives actual flashing instructions)

Seems like people were having the most success with their WD Red drives if they did flash to IT, but others said it worked right out of the box (in IR). For what it's worth, there's a very similar discussion in the FreeNAS/FreeBSD forums...

Either way, good luck with your build! Hope it all goes smoothly, boots up the first time, and serves its intended purpose for years to come!

  • 47 months ago
  • 1 point

Hey, thanks very much for the efforts in pointing me in the right direction.

I am struggling to get it to POST with more than 2x8GB modules installed at the moment, so I am not quite ready for any RAID card flashing :)

All modules work in slots 1+2, but I get 4 beeps in any other configuration.

Time for some more googling before I RMA anything.

  • 47 months ago
  • 1 point

I swear I wrote you back yesterday, but it looks like I didn't... oops!

My immediate thought is RMA - that seems like a very odd error, and one that you won't want to deal with down the road. Either way, I hope it all worked out.

[comment deleted]
  • 47 months ago
  • 1 point

Glad to hear! Looking forward to checking out your build once it's built, stable and you have time to post it here!

  • 47 months ago
  • 1 point

That power supply only has 4 SATA connectors; how did you power the rest of your SATA drives? Thanks!

  • 47 months ago
  • 1 point

Some more questions

1) Where are the two mirrored USB sticks? 2) How did you mirror them? 3) Where did you buy the matching 4-pin fans? I can't find 4-pin versions of that model.

Thanks!

  • 47 months ago
  • 2 points

Hey there!

SATA: I used a total of 3 of these (combined, they powered 10 drives and the case fan controller): http://www.amazon.com/StarTech-com-Power-Splitter-Adapter-PYO4SATA/dp/B0086OGN9E/ One of the splitters allowed a run up and behind the drive cages, where I plugged the end into the fan controller. The second and third plugged into one of the SATA cables that came with the PSU, and those two powered the 8 drives in the cages. I then used one of these: http://www.amazon.com/gp/product/B000067SLY to connect the Molex cable that came with the PSU to the 2 drives on the motherboard side of the case.

USB mirrored sticks: one is mounted directly in the middle of the motherboard (right below the CPU fan, above the CE, you can see the top of a SanDisk USB drive), and the second is plugged into a USB 2.0 port on the back of the case. Mirroring your boot drive is an option when creating your FreeNAS boot device: make sure both sticks are plugged in, select both during installation, and FreeNAS will mirror the boot drive.
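
If you want to confirm the mirror is healthy after install, something like this works from the FreeNAS shell (my sketch, not an official procedure; it assumes the boot pool uses the default name freenas-boot from FreeNAS 9.3+, so adjust if yours differs):

```python
import subprocess

def pool_health(pool: str = "freenas-boot") -> str:
    """Return the HEALTH column from `zpool list` for the given pool."""
    out = subprocess.run(["zpool", "list", "-H", "-o", "health", pool],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

print(pool_health())  # ONLINE if both USB sticks are healthy, DEGRADED if one drops
```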

PWM fans: They're actually Arctic fans, which seem to be pretty much identical to the fans that came with the case: http://www.amazon.com/gp/product/B00H3T1KBE

  • 47 months ago
  • 1 point

I can't thank you enough for this build and the response. The last step is to decide if I want to wait for Skylake support or just go with a v3 Xeon

  • 47 months ago
  • 1 point

No prob! I'm a fan of learning, so I'm glad to have helped!

In terms of Skylake, totally your move - I enjoy having a chip that has been tested by tons of people in their respective builds, so I'd stay with the v3 for a server-style build. If I were doing something less server-y and a little more day-to-day, I'd consider waiting (but in the end, I'd probably cave and not wait!)

  • 47 months ago
  • 1 point

I bought basically your entire build, minus 2 WD drives, plus 2 SSDs for Plex/Emby to be installed on. Can't wait! Thanks again.

  • 45 months ago
  • 1 point

I'm confused about the StarTech power splitter adapters. I have 2-3 of them needing to power 8 drives. What are you plugging them into to get power?

I have the same board and case. Do you have your CPU fan plugged into FAN1? Does it ramp up and then back down in a continuous cycle?

  • 45 months ago
  • 1 point

The StarTech power splitters plug into the SATA power cables that come with your power supply (one end of each SATA power cable is a 6-pin connector that goes directly into the power supply; the one or two SATA drive-side power plugs on the other end are what you plug the splitters into).

The motherboard's FAN1-4 headers all operate based on CPU temperature, meaning you can plug your CPU fan into any of them. FANA is designated for an add-on card. My CPU fan is plugged into FAN2, just because that worked well with my setup. I have the 4-pin fans on the motherboard side of the case plugged into FAN1, FAN3, and FAN4. On the drive cage side of the case, I have 3 fans plugged into the case's fan controller.

Hope that helps!

  • 43 months ago
  • 1 point

Very clever! I never thought a SATA splitter existed! Here I was thinking I would need to go for the 650w Seasonic to get 10x SATA (to future proof), when really all I need is the 450w or 550w Seasonic and the splitter(s).

Though I'm running unRAID with 6 SATA drives at the moment and have a GTX 1070 passed through to a Windows VM for gaming, so my power requirements are a little different than yours!

  • 43 months ago
  • 1 point

Yep, handy little guys they are. Just make sure you distribute your splitters, rather than 'daisy chain' back to one SATA PSU connector, so you don't create voltage problems at initial spinup.

Yeah, running the GTX definitely creates a different power profile than a box with a bunch of hard drives!

  • 47 months ago
  • 1 point

This is a terrific build and I am tempted to copy it exactly. How would you update it if you were to do it again today? Are there newer CPU options that tempt you? Perhaps a newer motherboard to help future proof some more? Thanks for the great example to work from.

  • 47 months ago
  • 1 point

Honestly, I did my homework and wouldn't change a thing. It's a great build, completely solid, and runs without any issues! The motherboard is about the best thing I could have hoped for with this style of build. I'd been looking at doing a full rackmount with 9200-8i's and the Supermicro X9SCM-F, but ended up getting a way higher wife approval factor with this build, and am very happy. I'm not too worried about this machine becoming archaic; it's got a chip with a PassMark right at 10k, and I figure that will be fine for the foreseeable future!

  • 47 months ago
  • 1 point

Sweet, thanks for the quick reply.

  • 46 months ago
  • 2 points

Hey, I checked out your saved parts list and wanted to give you the correct link for the 3 PWM fans I used in my build: http://www.amazon.com/gp/product/B00H3T1KBE At the time of my build, PCPartPicker didn't have them listed, so I put something similar in my saved parts as a placeholder. Use the link I just sent if you want actual 4-pin fans for the MB side of your case!

  • 47 months ago
  • 1 point

At the moment, most Skylake motherboards aren't fully supported by FreeNAS/FreeBSD, so you'll have to wait if you want to move to the Skylake architecture.

  • 46 months ago
  • 1 point

Surprisingly, with the help of people on the Reddit buildapc community for a Plex server, I've been directed to almost exactly the same configuration, with the exception of the motherboard and HDD size.

  • Any particular reason you went with the X10SL7-F over the X10SLL-F?
  • Was your choice to go with the 3TB Reds instead of the 4TB Reds a cost decision?
  • Lastly, you haven't found any reason to use a different CPU Cooler such as the Cooler Master Hyper 212 EVO?
  • 46 months ago
  • 2 points

Hey there!

The SL7 has an onboard SAS chip, which is essentially a built-in RAID/SAS controller, giving you 8 additional SAS/SATA ports without occupying a PCIe slot. This becomes important if you plan on expanding your machine down the road, as you'll need those PCIe slots for RAID/SAS controller cards (which typically run around $100; google IBM M1015). Long story short, think about your long-term goals for the machine and buy an appropriate MB.

I went with 3TB Reds due to my vdev setup, and also cost. In FreeNAS, resilver (rebuild) times vary according to how much data needs to be rebuilt. 4TB drives are a bit more expensive and will take longer to rebuild. I have taken what some would consider a somewhat risky approach by building a machine with RAIDz1 vdevs, as opposed to RAIDz2. The 3TB drives take a bit less time to resilver, which makes me happy, and the cost is nice for replacement drives.
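
For a very rough feel of the difference, here's a worst-case resilver estimate (entirely my assumption of ~100 MB/s sustained rebuild speed; actual resilvers only copy allocated data, so real times are usually shorter):

```python
def worst_case_resilver_hours(drive_tb: float, mb_per_sec: float = 100) -> float:
    """Hours to rewrite an entire drive at a sustained rate (pessimistic bound)."""
    return drive_tb * 1_000_000 / mb_per_sec / 3600

for size_tb in (3, 4):
    print(f"{size_tb} TB drive: ~{worst_case_resilver_hours(size_tb):.1f} h")
# -> 3 TB: ~8.3 h, 4 TB: ~11.1 h
```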

So far, I haven't found a reason to consider replacing the stock cooler, but I'm not sure what your usage scenario will be. I share my Plex server with a few friends, and we typically use a few tablets around the home. My machine also runs Sonarr, Couchpotato, NZBGet, PlexPy, PlexConnect, and an occasional VM. I live in a relatively warm climate, and the ambient temperature in my home stays between 65-75°F, depending on the season. I've seen the CPU max out at 80°C for very short spurts. My CPU hangs out around 40°C more often than not. With that kind of thermal load, I don't see the need for anything but the stock cooler.

Hope that helps!

  • 44 months ago
  • 1 point

Saving this for later.

  • 43 months ago
  • 1 point

Did you end up building a similar machine? I'm in the process of building my expansion unit (in a separate box). I'll be creating a new pool of 8x4TB Reds in a RAIDz2 config, connecting that pool via the LSI card to the main machine, and then migrating my existing pool to the more robust/safe RAIDz2 volume I just created. Once it's all said and done, I can then destroy the original pool and re-add all ten 3TB Reds from my initial build into the new pool, with a safer RAIDz2 config, ending up with 48TB of RAIDz2-protected storage!

  • 43 months ago
  • 1 point

OH, nice. I haven't done anything yet, still doing some research and accumulating monetary investors.

  • 43 months ago
  • 1 point

Right on- good luck with your continued research and financial accumulation!

[comment deleted]
  • 49 months ago
  • 2 points

With regard to NASes being vulnerable to miners/ransomware, I actually used to see SSH brute-force attacks fairly frequently when I was using the Synology! I never actually had any issues, just some random IP addresses in China and India trying to get in. I haven't had that issue with my FreeNAS setup. I'd recommend limiting which ports you open and ensuring that any open ports require password entry before a user can interface with whatever web GUI the port forwards to. I have not, nor will I ever, forward the port which allows direct access to the FreeNAS web GUI, as that would be asking for trouble. [I think Synology may be a bit more vulnerable here, as it can be set up to easily allow access to the main system from an external network.]

FreeNAS is an appliance version of FreeBSD, which is a very stable OS capable of using ZFS as its file system. If you do end up going the route of building your own server, I highly recommend doing the research to see if ZFS is right for you. As long as you use ECC RAM, your file system will be extremely resilient to data corruption, which is important (at least to me) when considering media archival/storage (I mean, I would hate to lose pictures of my 1-year-old son to bit rot)!

Additionally, FreeNAS can use FreeBSD ports and packages, in addition to having its own set of 'plugins,' which are a form of simplified FreeBSD packages. I currently have Plex Server, NZBGet, Sonarr, Couchpotato, Transmission, and PlexPy installed in their own individual jails (sandboxes). The jail architecture is another feature of FreeNAS/BSD that I really enjoy, as it ensures that applications only have access to the specific datasets (folders/file trees) that you explicitly allow, rather than system-level access. This further protects your system: even if you were to find and install a port/package called 'MineMyServer,' or something silly like that, it wouldn't have access to your actual system files unless you chose to give it access.

I agree, Synology has built its reputation as a solid/stable NAS system, and as such, it commands a high price. I wanted my server to be capable of transcoding 4-5 streams, and to get that level of performance in a Synology, I would have had to pay quite a pretty penny. QNAP has some interesting options, but in the end, I decided that I'd find more joy in learning a new OS, and would have more flexibility in choosing components. Overall, I'm very pleased with my server and don't miss the days of Synology at all.

If you're serious about building your own server, I can give you a few pointers, most of which are common sense and things you probably have already thought of.

1) Do your homework: read the forums, and start making a list of questions you have.

2) "Respect your elders": there are a lot of people out there who have already wrestled with the same questions and issues you may be having. Don't reinvent the wheel!

3) Be honest about your usage needs: had I not needed the transcoding capability, I would have been just fine building a system with a much lower-powered CPU. I don't need graphics, so I didn't shop for a card that would never get used.

4) Be honest about your level of experience, and what level of experience is needed to maintain your system: I say this more from the mindset that you may have to teach someone else how to maintain it. My wife, for example, knows how to request new files through the various web GUIs, but could not, for the life of her, navigate the system structure through the terminal. FreeNAS can be maintained nicely through a web GUI, which is one of the reasons I went this route as opposed to something like ZFSonLinux.

Good luck, and sorry for being so long-winded!

[comment deleted]
  • 49 months ago
  • 1 point

ECC RAM is a bit of a Pandora's box in the ZFS community. Some people will seemingly never speak to you again if you even mention a build without ECC; others couldn't care less. To me, ECC RAM was worth the extra $50 or so (per 16 GB) because of my usage situation. I'll direct you to this forum post for an in-depth explanation: https://forums.freenas.org/index.php?threads/ecc-vs-non-ecc-ram-and-zfs.15449/

Now, to me, the important thing to remember is that this type of error (where RAM corrupts the file) is the same across all file systems. ZFS, however, has a series of features that allows its creators to state that data stored in a ZFS file system will NOT become corrupt (a claim other systems cannot make). That claim is only valid when you use ECC RAM. Typically, data is written to RAM first and then to your HDDs, so if the data is corrupted by a RAM error, it will be corrupt on the HDDs as well. ECC RAM ensures that this doesn't happen. A ZFS file system with non-ECC RAM runs the risk of the original data being corrupted, checksummed, and written into parity as a valid file; it can then be read back (into that bad RAM), which corrupts the file further, since ZFS will try to repair it based on the already-bad copy stored in parity (here, ZFS's desire to fix files is the culprit).
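
To illustrate the checksum idea in the abstract (this is not ZFS's actual checksum algorithm or on-disk format, just a toy example of mine): a checksum computed at write time can only detect corruption that happens after it was computed; if bad RAM mangles the data before checksumming, the garbage is stored as 'valid'.

```python
import hashlib

block = bytearray(b"irreplaceable family photos")
checksum = hashlib.sha256(block).hexdigest()   # computed when the block is written

block[5] ^= 0x01                               # simulate a bit flip after the fact
if hashlib.sha256(block).hexdigest() != checksum:
    print("Mismatch detected -> a healthy copy/parity would be used for repair")
```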

Long story short, to me, ECC RAM was worth the extra $$. Every person's use case is different, and it may not be important to you. I agree that trying to pinpoint a bit of corrupt data would be like finding a needle in a haystack. Like I said, however, data on the HDDs can be corrupt in the sense that it may have already been corrupted when it was written from RAM, so take that into consideration.

[comment deleted]
  • 49 months ago
  • 1 point

I'd expect RAM errors to occur at a pretty consistent rate across all architectures, which should be very low overall. I guess when it all comes down to it, my research led me to choose ECC RAM, but everyone has to make their own build decisions.

Good luck with your build!