When I set up unRAID as a VM on the Mordor ESXi host, I reviewed the available solutions for configuring data drives for use in unRAID. I concluded that the optimal solution would be to use VMDirectPath I/O to pass the disks through directly to unRAID, but I could not accomplish that successfully, and settled on using Raw Device Mapping. But now I’ve got the IBM ServeRAID M1015 PCI-Express card I purchased on eBay (also available from Amazon), and I’m all set to install it and upgrade my setup!
What follows is a step-by-step procedure of a successful install.
Below it I go over the pitfalls I actually encountered, as a useful reference of what to avoid.
- Before installing the card, go into BIOS config, under Boot settings, and change the PCI ROM Priority from Legacy ROM to EFI Compatible ROM:
I’m not completely sure what this is about, but I saw somewhere that it is needed for EFI-based motherboards.
- After unpacking the loot, I connected the cables to the ports on the LSI card (first one connected in the picture):
- The RAID card is PCI-Express 2 x8, so I installed it in the first available PCIe slot on my motherboard with suitable performance. This was the PCIEX16_1 slot on my ASUS P8Z68-V Pro s1155 (the navy blue one), which operates either in x16 mode or dual x8/x8 mode (whatever that means…)
- Power up, with all HDDs and USBs unplugged (to avoid booting into anything, and to have enough time to see the on-screen messages).
- Press Ctrl+C to start the configuration utility, and verify the firmware is indeed as expected (here: SAS9210-8i, 14-IT)
- Once I verified that the card was working and flashed appropriately, I powered off the server, connected a single NAS HDD via the RAID card, and reconnected the other HDDs and USBs as they were before. I wanted to make sure I could configure ESXi and unRAID to work with this single HDD without damaging any data, before moving the other HDDs to the RAID card.
- Power up and enter the configuration utility again, now seeing that the connected HDD is detected:
- Exit the configuration utility, and let the server boot into ESXi.
- On a Windows computer, connect to the ESXi host using vSphere Client, and verify that the RAID card is recognized by ESXi in the host Configuration tab (Storage Adapters):
- Make the RAID card available for passthrough, by activating the Advanced Settings view in the Configuration tab, and opening the Mark device for passthrough dialog by clicking the Edit… link:
- Find the card in the list (in my case it’s LSI Logic / Symbios LSI2008) and mark it:
- The device will appear in the list of passthrough devices, with a note that it will take effect only after the host is restarted:
- So restart the host, and make sure the device was configured for passthrough successfully:
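Under the hood, marking a device for passthrough ends up as an entry in the host’s `/etc/vmware/esx.conf`. As a rough sketch (the exact PCI address and vmhba name below are assumptions from my setup; the key layout may differ between ESXi versions, so treat this as illustrative only), the fragment looks something like:

```
/device/000:001:00.0/owner = "passthru"
/device/000:001:00.0/vmkname = "vmhba2"
```

This is only worth knowing for troubleshooting; the vSphere Client dialog above is the supported way to set it.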
- Now edit the settings of the unRAID VM:
- Note that the RDM still exists, so start by removing the RDM related to the HDD that is on the RAID card (Hard disk 2 in my case, recognized based on the HDD identifier that appears in the RDM name):
- Launch the Add Hardware wizard and add a new PCI Device:
- Select the LSI Logic device from the list:
- Confirm and Finish the wizard:
- Confirm the changes to the unRAID VM:
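For reference, the wizard’s changes land in the VM’s `.vmx` file as `pciPassthru` entries. A sketch of what mine looked like afterwards; the address is from my system and serves only as an illustration (SAS2008-based cards report PCI vendor `0x1000`, device `0x0072`, and `systemId` is a host-specific identifier that ESXi fills in by itself):

```
pciPassthru0.present = "TRUE"
pciPassthru0.vendorId = "0x1000"
pciPassthru0.deviceId = "0x0072"
pciPassthru0.id = "01:00.0"
```

There should be no need to edit this by hand, but it is handy for confirming which physical device the VM actually got.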
- Power up the unRAID VM and let it boot and stuff, until the web admin interface is available:
- It is visible that the passthrough worked successfully, and unRAID was able to detect the RAID card and the HDD connected through it, because the HDD is available for the array (disk1 in the screenshot above). The HDD can be recognized by its identifier, and by the fact that unRAID is able to get a Temperature reading, which it cannot do for the RDM’ed HDDs.
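The temperature point can also be checked from the unRAID console: with the disk passed through, `smartctl -A /dev/sdX` returns the full SMART attribute table, which is what unRAID reads. A minimal sketch of pulling the value out of that output (the sample line is illustrative, not captured from my disk):

```shell
# disk_temp: extract the raw Temperature_Celsius value from `smartctl -A`
# output piped into it. Assumes the standard 10-column SMART attribute table,
# where the raw value is the last field.
disk_temp() {
    awk '/Temperature_Celsius/ { print $10 }'
}

# Illustrative usage (a real run would be: smartctl -A /dev/sdb | disk_temp):
echo "194 Temperature_Celsius 0x0022 112 103 000 Old_age Always - 35" | disk_temp
```

An RDM’ed disk typically returns nothing useful here, which is exactly the difference unRAID’s web interface shows.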
- Start the array and check around the user shares to make sure all the data is intact (specifically data that is stored on that specific HDD).
- Shut down the VM and the host, and move the other HDDs to the RAID card.
- Power up the server, and enter the LSI Configuration utility again to make sure the HDDs are detected by the RAID card:
- Exit the utility and let the server boot into ESXi.
- Edit the settings of the unRAID VM again, and remove the remaining RDMs:
This will also remove the SCSI controller 1 device, because it is no longer needed. That’s OK.
- Power up the unRAID VM as before, and repeat the previous steps regarding the web admin, array start, and data check. It’s also a good idea to run a parity check on the array at this point.
- After everything was configured and working properly, I did some extra cleanup:
- Manually delete the RDM files that were created for raw device mapping – they are not needed anymore.
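A small sketch of how the leftover mapping files can be located from the ESXi shell before deleting them. The datastore path in the usage comment is hypothetical, and it’s worth eyeballing the list before any `rm`:

```shell
# find_rdm_descriptors: list RDM mapping files under a datastore path, so they
# can be reviewed before deletion. Virtual-mode RDM descriptors end in
# -rdm.vmdk, physical-mode (passthrough) ones in -rdmp.vmdk.
find_rdm_descriptors() {
    find "$1" -name '*-rdm.vmdk' -o -name '*-rdmp.vmdk' 2>/dev/null
}

# Hypothetical usage on the host:
#   find_rdm_descriptors /vmfs/volumes/datastore1
```

Only the small descriptor files are deleted this way; the data lives on the raw disks themselves, which is the whole point of RDM.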
- Back up the unRAID VM settings and config files.
That concludes a successful installation procedure for a RAID card, including passing the card through to a VM with VMDirectPath I/O.
Some pitfalls I’ve encountered and dealt with during the procedure:
- Good idea to connect the cables before inserting the card into the slot! Much better access that way.
- At first I tried installing the card into the PCIEX16_2 slot (the gray one), which operates in x8 mode. It didn’t work – the card was not detected by the BIOS at all, and I don’t know why. My theory: the BIOS detects no device on PCIEX16_1, so it doesn’t proceed to check the other slots.
- I actually didn’t know what it should look like when the card is detected successfully, so I had a hard time figuring out that it wasn’t…
- I looked around the BIOS GUI for clues – nothing.
- I tried various keyboard combos to get to some configuration screen (Ctrl+C) – nothing (well, sort of… Ctrl+H got me into the configuration of something else – some PXE thingie).
- During boot I looked for on-screen messages, but they went by too fast! So I tried taking a video of the boot process and watching it in slow motion…
- Finally, I decided to try the card in another PC, where I could see what it looks like when it is detected, and confirm that Ctrl+C is the correct combo to get to the configuration screen.
That’s it. Go back to the Mordor project page, or keep browsing around.