Saturday, November 26, 2016

My Setup with OpenZFS on Mac OS X



My new OpenZFS on Mac OS X setup in my cabinet: raidz over 4x 1TB SSDs on two AC-powered USB 3.0 hubs. I ran into these gotchas:
  • The USB 3.0 UASP adapter I'm using sometimes changes the disk serial number, which makes it hard for zpool to find the disk after an unplug or reboot. Fix: doing a zpool export followed by a zpool import immediately after pool creation renames the disks in the vdev to use media IDs, which are more robust. The media IDs are only created when the disks are added to the pool and automatically partitioned and formatted (with the GPT scheme), so you can't create a pool with disks referenced by media IDs in the first place.
  • If a disk is lost due to a serial number change, the only way to get it back is to create the missing symlink under /var/run/disk/by-serial manually, then export and import the pool again. ZFS does not allow replacing a disk that is still part of a pool.
  • When running zpool import SomePool with umask 0077, the parent ".." directory of /Volumes/SomePool becomes accessible only to root, and all other users get permission denied, which causes the Finder to crash when creating folders. The Finder also displays existing directories (created in Terminal) as blank documents with no content. Just export and rerun zpool import after setting umask 0022.
  • ZFS by default is case sensitive. Some application bundles are built with internal paths whose case doesn't match the actual filenames, which causes a variety of errors when opening the application, such as "You can't open the application _______ because it may be damaged or incomplete" or "The application cannot be opened because its executable is missing." I ended up creating a dedicated Applications filesystem under the pool with the casesensitivity=insensitive option (which can only be set when creating a filesystem).
  • Remember to run zfs set com.apple.mimic_hfs=on on the filesystem if you're planning to store a Photos library on it.
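To make the export/import dance concrete, here's roughly the sequence I mean (a sketch; the pool name tank and the disk device names are placeholders for your own):

```shell
# Create the pool from whole disks; at this point the vdevs are
# referenced by device identifiers that a flaky adapter can change.
sudo zpool create tank raidz disk2 disk3 disk4 disk5

# Export and immediately re-import: on re-import, the pool picks up
# the media-ID symlinks, which survive unplugs and reboots.
sudo zpool export tank
sudo zpool import tank

# Verify the vdevs are now referenced by media ID.
zpool status tank
```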
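Concretely, the last two points look something like this (a sketch; tank and the dataset names are placeholders):

```shell
# casesensitivity can only be set at creation time, so make a
# dedicated dataset just for application bundles.
sudo zfs create -o casesensitivity=insensitive tank/Applications

# Pretend to be HFS+ so apps like Photos accept the volume.
sudo zfs create tank/Photos
sudo zfs set com.apple.mimic_hfs=on tank/Photos
```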

Update on November 27:

I'm still experimenting with this setup, and in the process managed to toast an older 512GB SSD because one of my older USB 3.0 to SATA adapters had a bad contact, causing rapid power losses that fried the NAND. Trying to read or write the problematic blocks would cause the drive to stall. But I managed to wipe it (under Mac OS X) by using VirtualBox running Ubuntu. Make sure the VM's settings enable a USB 3.0 (xHCI) controller, or you'll get VERR_PDM_NO_USB_PORTS when attaching the USB/SATA adapter to the VM. Assuming the device shows up in the Linux guest as /dev/sdb (please double check, as this will be destructive), run:
apt install sg3-utils
num_blocks=...  # number of 512-byte logical blocks; note /proc/partitions reports 1 KiB blocks, so double that figure
sg_unmap --lba=0 --num=${num_blocks} /dev/sdb
After this, the SSD is empty, a clean slate, and I could then format it.
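One thing to watch when computing num_blocks: /proc/partitions reports sizes in 1 KiB blocks, while sg_unmap counts the drive's logical blocks (512 bytes on typical SATA SSDs), so the figure has to be doubled. A sketch, using a made-up size for illustration:

```shell
# Example value copied from the drive's row of /proc/partitions (1 KiB units).
kib_blocks=500107608

# Convert to 512-byte logical blocks for sg_unmap.
num_blocks=$((kib_blocks * 2))
echo "$num_blocks"
```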

Initially it seemed like a good idea to use a USB 3.0 hub and a bunch of USB 3.0 to SATA adapters to build the disk array, because it's much cheaper than an enclosure, but I've had many problems with it. The USB 3.0 hub sometimes incorrectly recognizes some adapters as USB 2.0 only, which limits the speed of the whole array to USB 2.0 speed. I've also experienced flaky ports causing checksum errors. There are too many components that could go wrong, and did go wrong, so I'm now getting a proper RAID enclosure as I should have in the first place.

Update on November 28:

The drive died again after writing some odd gigabytes of data to it. It seems that the UNMAP command limits the number of blocks it can operate on at once, so I didn't actually wipe the whole disk. I also tried an ATA secure erase, but the drive died again, so that didn't seem to free up the blocks either. I concocted a script:
for ((i = 0; i < num_blocks; i += 8)); do
  # If the drive stalls and drops off the bus, wait until it reappears.
  while [[ ! -b /dev/sdb ]]; do read -rp 'waiting for disk...'; done
  printf '\r%d' "$i"   # progress indicator
  sg_unmap --lba="$i" --num=8 /dev/sdb
done
Even though it stalls from time to time and requires manual intervention, it does seem that the more blocks I unmap, the more I can write, so I'm hopeful.
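Rather than hardcoding a chunk size of 8, the drive's actual per-command limit can be queried from the SCSI Block Limits VPD page (a sketch; sg_vpd ships in the same sg3-utils package, and /dev/sdb is the same placeholder as above):

```shell
# "Maximum unmap LBA count" in the output is the largest number of
# blocks a single UNMAP command may cover on this drive.
sg_vpd --page=bl /dev/sdb
```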

Update on November 30:

It turned out the aforementioned SSD didn't die after all; it was another flaky USB/SATA adapter. I replaced it, and the drive is working merrily. In case anyone's wondering, those IOcrest adapters seem to have a high failure rate. In any case, my USB woes are solved altogether by the Akitio Thunder2 Mini.
