Custom DAS
A modified server case to allow for storage expansion
Preface
I host a number of servers in my home. Many of these servers are virtualized, but of course, something physical has to exist to actually host them. One such physical system is my NAS (Network Attached Storage). This system acts as a core server for my network, providing storage for a variety of my VMs; this lets their virtual drives stay far smaller and lets me increase their available capacity as simply as adding new drives to the NAS. For the past few years, I have been using a DAS (Direct Attached Storage) system that houses additional drives in another chassis and connects them to my NAS over SAS.
Until now this system has worked fine for me. However, I recently started hosting a Plex server, a workload that, as many will know, can consume large quantities of storage. To meet the demands of this use case I have been diligently adding hard drives to my DAS, but I have now run out of physical space in it. As a result, I have had to get creative with my storage upgrade, which meant modifying an existing server case to suit my needs.
In the world of server storage there are systems known as disk shelves: systems that act as a DAS by connecting multiple hot-swappable drives to another remote system. These are exceedingly useful, as they allow for quick drive replacement, can be daisy-chained, and have excellent airflow to keep the drives running cool. Unfortunately, these pieces of retired enterprise hardware can be hard to get hold of in the great white north, and when they are available it is typically at an exorbitant cost, since most units ship from the US. As a result, those of us outside of the US have to get tricky with our disk storage.
Luckily, although disk shelves themselves are often expensive in Canada, not all enterprise hardware is priced alike. By combining multiple pieces of retired enterprise hardware it is possible to build a reasonable facsimile of a disk shelf, albeit with a reduced feature set. This post will take you through the process I followed to build such a system.
Background
Before we get into the design of my system, we should cover a bit of background. Most consumer-grade computers use an interface known as SATA to communicate with their disks (let's not get into PCIe today). This interface provides a fast connection between the system and the drive, but it has a limitation: most systems have a relatively small number of SATA ports available. One can add capacity with PCIe SATA cards, but only as long as the drives fit within one system; once you are out of space for PCIe cards or drives, you can't add any more. There are SATA port-multiplier cards that provide multiple SATA ports from one main port, but these generally don't provide many extra ports and cannot be chained. The final nail in the coffin for expanding SATA is that it is not a standard that likes to be moved outside of a host system. eSATA does allow for external SATA connections, but it is only rated for a distance of 6.6 ft and would require a large number of cables for minimal extra capacity.
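If you are curious which interface the drives in your own system are using, a quick check is possible from the command line. Here is a minimal sketch, assuming a Linux host with util-linux's lsblk installed:

    import subprocess
    from collections import Counter

    # List SCSI-layer block devices with their transport (sata, sas, usb, ...)
    # and tally drives per transport. Assumes a Linux host with lsblk available.
    out = subprocess.run(
        ["lsblk", "--scsi", "--noheadings", "--output", "NAME,TRAN"],
        capture_output=True, text=True, check=True,
    ).stdout

    counts = Counter()
    for line in out.splitlines():
        fields = line.split()
        if len(fields) >= 2:
            counts[fields[1]] += 1

    for transport, n in sorted(counts.items()):
        print(f"{transport}: {n} drive(s)")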
After all this talk of SATA you might wonder how on earth systems with many dozens of drives work. One such method is through the use of SAS, or Serial Attached SCSI. SAS is an interface built first and foremost for the enterprise, with many features not seen in SATA. The features we will take advantage of today are its much simpler and more effective multiplexing, and its native support for acting as an interconnect between systems. Unlike SATA, where a given port can only be multiplexed once, SAS allows multiple SAS multiplexers (expanders) to be daisy-chained, meaning that the only limits to SAS expansion are the number of drives your controller can index and the speed reduction that comes with many iterations of daisy-chaining.
In my case, I use an LSI 9200-16e SAS HBA, an interface card that provides 4 SFF-8088 connectors for SAS interfacing. The SFF-8088 connector is intended as an external interconnect between two SAS systems. Each connector carries 4 SAS lanes, meaning that my card can support 16 drives at full speed (in the case of my card, 6 Gb/s SAS or 3 Gb/s SATA per lane). However, with multiplexing, the card can support up to 512 drives. Each disk is detected at the maximum speed of the slowest part of the SAS network, and drives will only start to see speed degradation once we exceed 4 disks' worth of bandwidth on a single SFF-8088 link. So by making use of multiplexers, we can get up to 4 disks' worth of bandwidth over each of these connectors, and as many drives on each as we could want, up to 512.
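To make that bandwidth reasoning concrete, here is a back-of-the-envelope sketch. It assumes 3 Gb/s per lane (the speed my expanders sync at) and ignores 8b/10b encoding and protocol overhead, so treat the numbers as upper bounds:

    # Per-drive bandwidth when N drives share one 4-lane SFF-8088 link.
    # Assumes 3 Gb/s per lane; real throughput is lower after 8b/10b
    # encoding and protocol overhead.
    LANE_GBPS = 3.0
    LANES_PER_LINK = 4

    def per_drive_gbps(n_drives: int) -> float:
        link_gbps = LANE_GBPS * LANES_PER_LINK
        # Up to 4 drives each sync at a full lane's speed; beyond that,
        # the drives share the link's total bandwidth.
        return min(LANE_GBPS, link_gbps / n_drives)

    for n in (4, 12, 24):
        print(f"{n:>2} drives: ~{per_drive_gbps(n):.2f} Gb/s each")

With 24 spinning hard drives behind one link, each still gets roughly 0.5 Gb/s, which is comfortably above what most mechanical drives sustain anyway.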
For this project I will be using a very well documented multiplexer known as the 'HP SAS Expander'. These cards are abundant, effective, and most importantly cheap! Even given the costs of enterprise hardware in Canada, these cards can be had for $20-40 on eBay, and although their default speed is 3 Gb/s SAS, that is still more than fast enough for my needs. Depending on the model you purchase, you may even be able to flash them to sync at 6 Gb/s, although I will leave that as an exercise for the reader, as I have not done so for this project (at least not yet). By making use of these cards we can get up to 8 SAS connections from a single SFF-8088 connection, although in my case I am only using 6 of these connections per card, with 2 expanders providing capacity for up to 48 total drives.
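As a sanity check on those numbers, the drive capacity of my topology works out as follows (using my configuration of six downstream ports per expander and one drive per lane on the pass-through backplanes):

    # Drive capacity of my topology: 2 HP SAS Expanders, each using 6 of
    # its SFF-8087 connectors downstream, with 4 drives per connector
    # (one drive per lane via the pass-through backplanes).
    EXPANDERS = 2
    PORTS_PER_EXPANDER = 6
    DRIVES_PER_PORT = 4

    total = EXPANDERS * PORTS_PER_EXPANDER * DRIVES_PER_PORT
    print(f"Total drives supported: {total}")  # -> 48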
Hardware
With the background out of the way, it is time to discuss the hardware I have chosen for this project. I will try to provide links for the parts used, but links are subject to change as listings are taken down.
  • 1x Rosewill RSV-L4500 Server Case - link
  • 1x Corsair RM750x Modular Power Supply - link
  • 1x Corsair Molex Cable - I grabbed one of these from another system but you may be able to find one online
  • 2x HP SAS Expanders - link
  • 4x SuperMicro SAS826A Backplanes - link
  • 2x Powered PCIe Risers - link
  • 14x SFF-8087 Cables - link
  • 1x dual SFF-8088 to SFF-8087 Adapter - link
  • 2x SFF-8088 Cables - link
  • Plenty of 3D Printer Filament
  • Self Adhesive Foam
The use of these specific backplanes isn't required, but my design files are meant for them, so other backplanes will require customization. I selected them because they were cheap, and because they are pass-through backplanes they can actually sync at faster SAS speeds than their rated 3 Gb/s. It may be difficult to locate an additional Molex cable for the power supply; unfortunately, I do not have an ideal solution for this, so it is left as an exercise for the reader. You may also wish to use shorter SFF-8087 cables, as mine were a little long and required a fair bit of work to get into place. Finally, the SFF-8088 cables will need to be a length of your choosing to bridge the distance between the DAS and the host; in my case I used 2 m cables.
Design
The design of this project took place over the course of about 2 weeks, and all design files are available on Thingiverse. Assembly of the parts is done with M3 heat-set inserts and matching M3 cap head bolts. However, for the bracket faceplates, button head bolts are mandatory due to clearance with the drives. Most of these parts attach to the case through holes that do not exist in the prints; these will have to be added, either by modifying the files or by drilling them as needed.
Most of the parts are unique, but in total you will need 4 completed bracket bottom units, all of the bracket tops, one PCIe bracket, one each of the spacer and spacer bottom, all four drive spacers, two fan mounts, and one center fan mount. There are some issues with these designs, noted on the Thingiverse page, that I intend to fix in the future.
Assembly
The assembly of this unit is done as follows:
  • Start by connecting all the Molex power connectors and SFF-8087 cables to the backplanes, as installing them after assembly is very difficult
  • Assemble the backplane brackets by connecting the short and long parts using threaded inserts and M3 bolts
  • Connect all four backplane brackets together using inserts and M3 bolts
  • Align the backplanes to the backplane brackets and place the backplane top pieces (they will go together like a puzzle)
  • Insert M3 button head screws to create a flat cover for the backplanes; the final unit should be very rigid
  • Install the spacer and narrow spacer into the front of the case (they should be flush with the face plate)
  • Attach fans to the fan brackets and install them to the spacers
  • Assemble the drive spacer unit by combining all four pieces using threaded inserts and M3 bolts; the center piece is held together using an M3 nut instead of an insert
  • Remove the PCIe brackets from the SAS expanders and insert the cards into the PCIe risers
  • Install the SAS expanders with PCIe risers into the PCIe bracket (Note: make sure to have fans installed on your SAS expanders, I used the 60mm fans that came with the case, but you can purchase other fans for this)
  • Mount the backplane assembly into the case on a layer of self-adhesive foam (I made holes and bolted it in)
  • Attach power to the front and expander fans (I used a SATA to Molex adapter to reuse the fans that came with the case), connect SATA power to the PCIe risers, jump the 24-pin cable so the PSU runs without a motherboard (short PS_ON to ground), and install all cables into the PSU
  • Connect 2 SFF-8087 cables between the adapter bracket and port 7 on each expander
  • Install the adapter to the back of the case
  • Install the PCIe bracket to the side of the case (again done by making holes and bolting together)
  • Connect the 12 SFF-8087 cables from the backplanes to the expanders, six per expander
  • Lash together the SFF-8087 cables (I zip tied mine to the PCIe bracket)
  • Install the drives and place the spacer over top of them, attaching it to the lid of the case using double-sided tape
  • Install the assembled system and connect SFF-8088 cables to the host system
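With everything cabled, it is worth confirming that the host actually enumerates every drive behind the expanders. Here is a minimal sketch, assuming a Linux host with lsblk installed; EXPECTED is a placeholder for however many drives you have installed:

    import subprocess

    # After assembly, confirm the host sees every SAS-attached drive.
    # EXPECTED is a placeholder; set it to the number of drives installed.
    EXPECTED = 12

    out = subprocess.run(
        ["lsblk", "--scsi", "--noheadings", "--output", "NAME,TRAN,MODEL"],
        capture_output=True, text=True, check=True,
    ).stdout

    sas_drives = [line for line in out.splitlines() if line.split()[1:2] == ["sas"]]
    print(f"Detected {len(sas_drives)} SAS drive(s)")
    for line in sas_drives:
        print(" ", line)
    if len(sas_drives) != EXPECTED:
        print(f"WARNING: expected {EXPECTED}; check cabling and expander power")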
Conclusion
This project turned out exceedingly well in my opinion, being far more rigid and well-assembled than I could have anticipated. It does rely on a lot of "made up" mounting, but it is easy enough to modify for more proper mounting options. My SAS expanders are not flashed for 6 Gb/s operation, so each link syncs at a maximum of 3 Gb/s, but this is still more than enough for my purposes. Other potential changes include using different backplanes, possibly ones with an integrated expander. Finally, a larger PSU will be needed in the future, as the 750 W unit I used will not be able to handle a full load of 48 drives. Beyond this, I intend to create a spacer to allow the top cover of the case to be installed, and I hope to improve the mounting solutions. Hopefully this helps someone make something better from my initial designs and allows affordable disk storage to become more available.
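On that PSU point, a rough power budget shows why 750 W falls short at full capacity. The per-drive figures below are assumptions based on typical 3.5" drive specs, so check your own drives' datasheets:

    # Rough PSU sizing for a full 48-drive load. Per-drive figures are
    # assumptions (typical 3.5" drive specs); check your drives' datasheets.
    DRIVES = 48
    SPINUP_W = 25.0    # assumed peak draw per drive during spin-up
    IDLE_W = 8.0       # assumed steady-state draw per drive
    OVERHEAD_W = 50.0  # assumed fans, expanders, and conversion losses

    print(f"Worst case (simultaneous spin-up): {DRIVES * SPINUP_W + OVERHEAD_W:.0f} W")
    print(f"Steady state: {DRIVES * IDLE_W + OVERHEAD_W:.0f} W")

Under these assumptions the worst case lands around 1250 W, so a larger PSU, staggered spin-up (if your drives support it), or both will be needed before filling all 48 bays.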