As with my other articles involving the iSCSI Enterprise Target (IET), the project that drove the desire to attempt a custom storage server was a backup-to-disk solution. The requirements were cost, space, and speed. For the most part, using the case described below along with IET, all three were met.
Case. SuperMicro 3U rack-mount case, model SC836A-R1200B ($1100). My first prototype used the SuperMicro SC932T-760B case ($800), which had a triple-redundant 760-watt power supply and 15 SATA bays with 15 SATA connectors, requiring miniSAS-to-SATA fan-out cables. The power supply could not handle the motherboard, RAID controller, and system fans – a bad design. The new case is also a 3U, but it holds 16 SATA/SAS drives and its backplane has 4 miniSAS connectors.
RAID Controller. The first prototype used the LSI MegaRAID 84016E ($600), which in my opinion is a very good controller – however, it does not work well with the IET iSCSI target software. During the initial stages of the iSCSI SAN project, I moved up to the more expensive Adaptec 31605 ($850?) because it was on sale ($500) and because the LSI produced borderline-pathetic performance (15 MBytes/sec) in a RAID 5 configuration under the IET iSCSI target. The Adaptec 31605 holds up strong and produces nearly 5x the throughput (a 3GB file copy in around 50 seconds…) with the same drives and the same RAID level, 5. Both controllers have 4 miniSAS SFF-8087 connectors, which allow up to 16 SATA devices through fan-out cables or a backplane. In the current configuration, I run miniSAS on the controller to miniSAS on the backplane.
Update: I recommend using OpenSolaris, COMSTAR and ZFS for all custom iSCSI SANs – in my opinion, there is no better solution available at the moment. In addition, the LSI MegaRAID 84016E with the battery kit is a solid controller with OpenSolaris and COMSTAR.
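For those who want a starting point, a minimal COMSTAR export on OpenSolaris looks roughly like the sketch below. The pool layout, device names, and volume size are placeholder examples, not the exact configuration used here.

```shell
# Build a ZFS pool and carve out a volume to export (names/devices are examples)
zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
zfs create -V 500g tank/lun0

# Enable the STMF framework and register the volume as a SCSI logical unit
svcadm enable stmf
sbdadm create-lu /dev/zvol/rdsk/tank/lun0

# Create an iSCSI target and expose the LU (use the GUID printed by sbdadm)
svcadm enable -r svc:/network/iscsi/target:default
itadm create-target
stmfadm add-view <lu-guid-from-sbdadm>
```

From there, any initiator that can reach the box should be able to discover and log in to the target.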
Storage SATA hard drives. I have been using two sizes of drives: 640GB in the SAN that will serve a VMware environment, creating 2 RAID 60 sets (not completely tested yet), and 1.5TB in the backup-to-disk server, creating one large 22TB RAID 5 set. As a side note, NewEgg seems to have the best pricing when it comes to drives.
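The RAID 5 math works out as expected: with one drive's worth of capacity lost to parity, a case full of 1.5TB drives nets roughly the 22TB mentioned above (assuming all 16 bays are populated):

```shell
# RAID 5 usable capacity: (drives - 1) x drive size; 16 bays of 1.5TB drives assumed
awk 'BEGIN { print (16 - 1) * 1.5 " TB usable" }'
```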
Boot controller / drives. In the new SC836A-R1200B case, I had the option of two 2.5” drive bays. I am using the SATA ports on the motherboard and mirroring between the two drives with the ZFS file system.
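Attaching the second drive to the root pool is essentially a one-liner; the sketch below assumes an OpenSolaris-style root pool named rpool, and the device names are examples only.

```shell
# Attach the second 2.5" drive to the existing root pool (device names are examples)
zpool attach rpool c1t0d0s0 c1t1d0s0

# Put the boot loader on the new mirror half so either disk can boot
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0

# Wait for the resilver to complete, then verify
zpool status rpool
```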
Network Interfaces. In my first prototype I used an Intel dual-port PCI Express x2 card, and within RedHat I bonded the ports together using LACP (mode=4). In the VMware environment, I am using a PCI Express x4 Intel quad-port card. For those wondering about jumbo frames: although I am currently using the Dell PowerConnect 5448, which supports jumbo frames, I am not using them at this time.
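On RedHat, an LACP bond of the two ports is configured with a pair of files along these lines; the address and interface names are placeholders, and the matching switch ports must be configured for LACP as well.

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0 (address is a placeholder)
DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
BOOTPROTO=none
ONBOOT=yes
BONDING_OPTS="mode=4 miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for the second port, eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes
```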
Motherboard/RAM/CPU. My first prototype used an ASUS board that had one PCI Express slot and around 3 PCI slots; my second was a GigaByte board, which performed better but was still missing a key CORE component: ECC RAM. With the first motherboard (ASUS), the idea was to use the PCI slots for the network interfaces – however, after I purchased the motherboard, I realized/remembered that the throughput of a standard PCI bus cannot even support one gigabit Ethernet port at full duplex, let alone two or three bonded. I am currently using the Intel Server Board S3200SH with 8GB of ECC DDR2 800 RAM and an Intel Core 2 Duo…
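The back-of-the-envelope numbers bear this out: the classic 32-bit/33MHz PCI bus peaks at about 132 MB/s shared across every device on the bus, while a single gigabit port running full duplex can move 250 MB/s:

```shell
# Peak throughput in MB/s: shared 32-bit/33MHz PCI bus vs one full-duplex gigabit NIC
echo "PCI bus:  $((32 * 33 / 8)) MB/s (shared by all devices on the bus)"
echo "GigE NIC: $((2 * 1000 / 8)) MB/s (125 MB/s in each direction)"
```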
While building both the prototype and other SANs, I ran into some issues, most of which were overcome with some engineering or additional parts purchases.
Make sure the motherboard you will be using has enough expansion slots and bus bandwidth to support the required throughput of the RAID controller and the network interfaces. PCI Express comes in many sizes, from x1 to x16. IBM wrote a great article about the differences between each speed: http://www.redbooks.ibm.com/abstracts/tips0456.html
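As a rough rule of thumb for first-generation PCI Express, each lane signals at 2.5 GT/s, which after 8b/10b encoding works out to about 250 MB/s per lane per direction, so per-slot throughput scales with lane count:

```shell
# PCIe 1.x: ~250 MB/s per lane per direction after 8b/10b encoding
for lanes in 1 2 4 8 16; do
    echo "x$lanes: $((lanes * 250)) MB/s per direction"
done
```

A quad-port gigabit card needs roughly 500 MB/s flat out, which is why it sits comfortably in an x4 slot.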
Each motherboard configuration is different. When purchasing the motherboard, make sure the cables from the power supply are long enough to reach the ports on the motherboard. This is typically not a problem with a standard case; however, with a SuperMicro server case, in both instances some power connectors were not reachable. With the SC932T-760B case, the power supply only had a single 4-pin 12V plug for the CPU, which means it would only support a dual-core processor. The SC836A-R1200B has an 8-pin (dual 4-pin) 12V plug, which supports a quad-core.
The SuperMicro case uses a proprietary power/reset/hard-drive LED cable; if you do not use a SuperMicro server motherboard, you may need to purchase the fan-out adapter cable. Additionally, in the SC932T-760B case, I had to manually extend the power-switch cable.
For the optional 2.5” bays, you will need to purchase drive sliders for the SC836A-R1200B case to hold those drives. SuperMicro has parts available for this purpose: MCP-220-00024-01 and MCP-220-00007-01.
As a final note, the SC836A-R1200B case was not assembled properly in my instance. I had to disable the alarm for a disconnected case fan; the backplane has headers for 4 fans, but the new case design (released in June 09) only uses 3 fans.