Installing two PERC H200s

Homelab Jul 08, 2019

Ever since acquiring the Dell R510 —which came with a PERC H700— I had intended to replace that card with an often-recommended H200. This particular card is popular because it can be found relatively cheaply, and it can be flashed to so-called IT mode, which makes the card pass all connected hard drives directly through to the host.

This is part one of my homelab re-organization project.

The benefit of this over a hardware RAID solution is that you can instead rely on a software solution, such as ZFS. Your system (as well as tools like smartmontools) will also be able to read and react to SMART status changes, as opposed to your RAID card normally doing that (or you, through the RAID controller). There are probably more benefits (and downsides) to using an H200 which other, more knowledgeable people have already described in detail, so let's skip ahead to me actually installing two of them, and why I went with two.
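To give a rough idea of what that looks like in practice (a sketch only; /dev/sda and the like are assumed device names, so adjust for your system), with the drives passed straight through you can query their SMART status from the host itself:

```shell
# Check the overall SMART health of a passed-through drive directly
# from the host; with a non-IT-mode RAID card this would normally be
# hidden behind the controller.
smartctl -H /dev/sda

# Loop over all drives, printing a one-line health summary for each.
# The existence check skips the literal glob when no /dev/sd* exists.
for dev in /dev/sd[a-z]; do
    [ -e "$dev" ] || continue
    printf '%s: ' "$dev"
    smartctl -H "$dev" | grep -i 'overall-health'
done
```

smartd (also part of smartmontools) can additionally be left running to notify you when a drive starts reporting problems.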

More often than not, the H200 you'll find is of the variety that comes with a PCI bracket. You can remove this bracket by undoing two screws, and the card will then fit in the storage slot no problem. Once the firmware is flashed, this setup works perfectly.
Out with the old, in with the "new."

Installation

After flashing the firmware of both cards (something I will try to cover in a separate post, as information on this can be a bit hard to find in complete and usable form), I installed one in the R510's "storage" PCI slot, and the other in one of the normal PCI slots.

The H200 in the storage slot is hooked up to the SAS backplane using the same cables as the H700 was. So long as you install the H200 in the storage slot, the cables will be long enough.

One of the H200s —with its PCI bracket removed— installed in the "storage" PCI slot.

For the second card I used a Mini-SAS to 4x SATA cable, with the cables going around and through the same area where the other SAS cables come through, connecting directly to the two boot drives.

The 12-bay version of the R510 comes with space for two internal 2.5" drives. The drives that came with my R510 were two 146GB Seagate Savvio 10k SAS drives. I replaced these with two Kingston A400 480GB SSDs. While certainly not the greatest of SSDs, they're very affordable, and they offer me enough space for the main OS and the few VMs I plan on running on here.

The two SSDs installed. I can't use the original cables as they're SAS-only, unfortunately.

Now here comes a bit of a challenge. As you may or may not know, the Dell R510 doesn't have a single spare SATA power cable, nor any place to easily obtain one. To resolve this, some people choose to solder wires directly onto the power output's relevant voltage lines, but I didn't want to go that route.

Instead, I used a 15-pin SATA power Y-splitter cable. Snip off one of its plastic sides, and it plugs in perfectly to the existing SAS cable. I had also purchased these as an alternative choice. Either one will work; I just went with the one that only required one cable to be plugged in.

It's pretty soft plastic and comes off very easily with small pliers or even your fingers.
It may not win any beauty prizes, but it'll definitely serve its purpose.

While it may look a wee bit wonky, it certainly does its job, and it's not as tricky or iffy as soldering wires in places the system doesn't expect them. This is, in essence, exactly what the system expects; the data line is just plugged in elsewhere.

It would've been handy if the SAS to SATA cable had right-angle plugs, but it fits at least.

And that's it. The system boots fine and recognizes both cards, something I was a little worried about, since identical(-looking) cards can sometimes cause conflicts, but not in this case. You can enable or disable bootability on either card, and within each specify which drives should be booted from, and in what order. As far as I can tell, everything works as it should.
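As a quick sanity check from the OS side (a sketch, assuming a Linux host with pciutils and lsscsi installed), you can confirm both HBAs are enumerated:

```shell
# The H200 is based on the LSI SAS2008 chip, so two entries should
# show up here once both cards are recognized.
lspci | grep -i 'SAS2008'

# lsscsi shows which drives hang off which SCSI host adapter, which
# is useful to verify that the backplane drives and the two boot
# SSDs really sit on separate cards.
lsscsi
```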

One minor remaining issue is that the front LED lights don't currently work fully as they should. I say this is minor because I've seen multiple posts online about this, and oftentimes it seems to be caused by something done (or not done) during firmware flashing, so I don't think it has anything to do with two cards being installed. Also, only one of the two cards is actually plugged into the backplane, of course. I'll probably have to reflash that card and make sure I actually have the very latest files.

Both H200 cards installed (one under the 4-NIC network card)

What I'm now able to do is pass the entire H200 through to a virtual machine, while maintaining the ability to boot from the two drives, which right now I have running as a btrfs mirror. I really did not want to have to boot from a USB stick or something, and this way I won't have to, while keeping fast read/write speeds to the main OS drive(s).
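For what it's worth, a two-disk btrfs mirror like that can be created in one go. This is only a sketch: /dev/sda and /dev/sdb are assumed device names, and mkfs.btrfs will wipe whatever is on them.

```shell
# Create a btrfs filesystem that mirrors both data (-d) and
# metadata (-m) across the two SSDs. Destructive!
mkfs.btrfs -f -L os-mirror -d raid1 -m raid1 /dev/sda /dev/sdb

# Mount it (either device name works) and confirm that data and
# metadata both report RAID1.
mount /dev/sda /mnt
btrfs filesystem usage /mnt
```

With raid1 on both data and metadata, the OS keeps running if one SSD dies, and `btrfs replace` can swap a new drive in later.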

Once I get to setting up the passthrough part of this project, I will write about it here too. For now, I hope this was useful to some of you. If you were wondering whether it's possible to install two H200s and, if so, how to get power to those additional/separate hard drives/SSDs, this might be a good way to achieve it without resorting to soldering.

If you have any questions or feedback, you can find me on Twitter, right here.

Thank you.