Enhancing Mordor unRAID With IBM ServerRAID M1015

When I set up unRAID as a VM on the Mordor ESXi host, I reviewed the available solutions for configuring data drives for use in unRAID. I reached the conclusion that the optimal solution would be to use VMDirectPath I/O to pass the disks through directly to unRAID, but I could not accomplish that successfully, and settled on using Raw Device Mapping (RDM). But now I have the IBM ServerRAID M1015 PCI-Express card I purchased on eBay (also available from Amazon), and I’m all set to install it and upgrade my setup!

The loot:

The package included one IBM RAID card, already cross-flashed to LSI-9210-8i IT firmware, and two SFF-8087-to-4-SATA breakout cables (refer to the unRAID forums for more on LSI firmware versions and unRAID).

Following is a step-by-step procedure of a successful install.

Below it, I go over the pitfalls I actually encountered, as a useful reference for what to avoid.

Happy-flow Installation

  1. Before installing the card, go into the BIOS config, under Boot settings, and change the PCI ROM Priority from Legacy ROM to EFI Compatible ROM:
    I’m not completely sure what this is about, but I read somewhere that it is needed for EFI-based motherboards.
  2. After unpacking the loot, I connected the cables to the ports on the LSI (first one connected in picture):
  3. The RAID card is PCI-Express 2.0 x8, so I installed it in the first available PCIe slot on my motherboard with suitable performance. This was the PCIEX16_1 slot on my ASUS P8Z68-V Pro s1155 (navy blue), which operates either in x16 mode, or in dual x8/x8 mode when the second navy blue slot is also populated – either way, at least x8 is available, which is all the card can use.
  4. Power up, with all HDDs and USB drives unplugged (to prevent booting into an OS, and to have enough time to read the on-screen messages).
  5. Hit Ctrl+C when the LSI banner appears to start the configuration utility, and verify the firmware is indeed as expected (here: SAS9210-8i, 14-IT).
  6. Once I verified that the card is working and flashed appropriately, I powered off the server, connected a single NAS HDD via the RAID card, and connected the other HDDs and USB drives as they were before. This is because I wanted to see that I could configure ESXi and unRAID to work with this single HDD, without damaging any data, before moving the other HDDs to the RAID card.
  7. Power up and enter the configuration utility again, now seeing that the connected HDD is detected:
  8. Exit the configuration utility, and let the server boot into ESXi.
  9. On a Windows computer, connect to the ESXi host using the vSphere Client, and verify that the RAID card is recognized by ESXi in the host Configuration tab (Storage Adapters) – a command-line alternative is sketched after this list:
  10. Make the RAID card available for passthrough by activating the Advanced Settings view in the Configuration tab, and opening the “Mark device for passthrough” dialog by clicking the Edit… link:
  11. Find the card in the list (in my case it’s LSI Logic / Symbios LSI2008) and mark it:
  12. The device will appear in the list of passthrough devices, with a note that it will take effect only after the host is restarted:
  13. So restart the host, and make sure the device was configured for passthrough successfully (see the note after this list for a way to double-check this from the ESXi shell):
  14. Now edit the settings of the unRAID VM:
  15. Note that the RDMs still exist, so start by removing the RDM related to the HDD that is now on the RAID card (Hard disk 2 in my case, recognized by the HDD identifier that appears in the RDM file name; see after this list for matching RDM files to physical disks from the command line):
  16. Launch the Add Hardware wizard and add a new PCI Device:
  17. Select the LSI Logic device from the list:
  18. Confirm and Finish the wizard:
  19. Confirm the changes to the unRAID VM:
  20. Power up the unRAID VM and let it boot and do its thing, until the web admin interface is available:
  21. You can see that the passthrough worked, and that unRAID detected the RAID card and the HDD connected through it, because the HDD is available for the array (disk1 in the screenshot above). The HDD can be recognized by its identifier, and by the fact that unRAID gets a temperature reading for it – something it cannot do for the RDM’ed HDDs.
  22. Start the array and spot-check the user shares to verify that all data is intact (specifically data that is stored on that specific HDD).
  23. Shut down the VM and the host, and move the other HDDs to the RAID card.
  24. Power up the server, and enter the LSI Configuration utility again to make sure the HDDs are detected by the RAID card:
  25. Exit the utility and let the server boot into ESXi.
  26. Edit the settings of the unRAID VM again, and remove the remaining RDMs:
    This will also remove the SCSI controller 1 device, because it is no longer needed. That’s OK.
  27. Power up the unRAID VM as before, and repeat the previous steps regarding the web admin, array start, and data check. It’s also a good idea to run a parity check on the array at this point.
  28. After everything is configured and working properly, I did some extra cleanup:
    1. Manually delete the RDM files that were created for raw device mapping – they are not needed anymore (see the cleanup sketch after this list).

    2. Back up the unRAID VM settings and config files.
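
As a command-line alternative to step 9: if SSH (or the local ESXi Shell) is enabled on the host, you can verify that ESXi sees the card without the vSphere Client. This is just a sketch – the exact output varies between ESXi versions:

    # List PCI devices and look for the LSI controller
    lspci | grep -i lsi

    # List the storage adapters known to ESXi; the card should show up
    # as a vmhba using the mpt2sas driver (e.g. vmhba2)
    esxcfg-scsidevs -a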
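
Similarly, for step 13: to my understanding, ESXi 5.x stores the passthrough assignment in /etc/vmware/esx.conf, so after the reboot a quick grep should show the card’s PCI address marked as owned by passthru (treat this as an assumption to verify on your ESXi version):

    # The passthrough assignment should appear as an owner = "passthru"
    # entry for the card's PCI address
    grep -i passthru /etc/vmware/esx.conf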
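
Regarding step 15: if it is not obvious which RDM pointer file maps to which physical disk, vmkfstools can query the mapping. The datastore and file names below are hypothetical placeholders – substitute your own:

    # Query an RDM pointer file to see which raw device it maps to
    # (prints the vml.* identifier of the backing disk)
    vmkfstools -q /vmfs/volumes/datastore1/unRAID/unraid-disk2-rdm.vmdk

    # Cross-reference the vml.* identifier against the physical disks
    ls -l /dev/disks/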
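
Finally, for the cleanup in step 28: the RDM pointer files are best deleted with vmkfstools rather than a plain rm, and the unRAID configuration lives in the config folder on the flash drive, so backing it up is a simple copy. Again, the paths below are placeholders:

    # On the ESXi host: properly delete an RDM pointer file
    # (removes both the descriptor and the -rdm mapping file)
    vmkfstools -U /vmfs/volumes/datastore1/unRAID/unraid-disk2-rdm.vmdk

    # On the unRAID console: copy the config folder off the flash drive
    # (e.g. to a disk share) as a backup
    cp -r /boot/config /mnt/disk1/backups/unraid-config-backup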

That concludes a successful installation procedure for the RAID card, including passing the card through to a VM with VMDirectPath I/O.

Pitfalls

Some pitfalls I’ve encountered and dealt with during the procedure:

  • It’s a good idea to connect the cables before inserting the card into the slot! Access is much better that way.
  • At first I tried installing it into the PCIEX16_2 slot (the gray one), which operates in x8 mode. It didn’t work – the card was not detected by the BIOS at all, and I don’t know why. My theory: the BIOS detects no device in PCIEX16_1, so it doesn’t proceed to check the other slots.
  • I actually didn’t know what it should look like when the card is detected successfully, so I had a hard time figuring out that it wasn’t…
  • I looked around the BIOS GUI for clues – nothing.
  • I tried various keyboard combos to get to some configuration screen (Ctrl+S, Ctrl+H, Ctrl+C) – nothing (well, sort of… Ctrl+H got me into the configuration of something else – some PXE thingie).
  • During boot I looked for on-screen messages, but they went by too fast! So I tried taking a video of the boot process and watching it in slow motion…
  • Finally, I decided to try the card in another PC, where I could see what it looks like when it is detected, and confirm that Ctrl+C is the correct combo to get to the configuration screen.

Conclusion

That’s it. Go back to the Mordor project page, or keep browsing around.

2 Comments
  • Choque
    February 21, 2014

    Hey, thanks for the posts! Are there any updates on your project? Seeing that your Index post covers a lot more points than the ones I am able to find 🙂

    • itamaro
      February 22, 2014

      Hi, and thanks for the feedback!
      You’re absolutely right about the index post covering more points than posts I published. It’s my time management that’s at fault here… 🙂
      For some reason, when I find the time, I find myself working on new sub-projects instead of making progress on the backlog of unwritten posts… (for example, I’m now working on a remote control-and-monitor web-app for the ESXi server and main VMs)
      In any case, of the unwritten posts so far – which one would be most interesting to you?
