For my new home NAS I wanted a RAIDZ2 setup, and I wanted to use Arch Linux as the host system because it is the distribution I have the most experience with. Another must-have was full disk encryption. Native encryption, however, is part of ZFS pool version 30, which is currently only available on Solaris, so we have to run ZFS on top of LUKS instead. For this I will use the Arch ZFS kernel module by Jesus Alvarez.
Getting started
First we need to add the archzfs repository to our /etc/pacman.conf:
[demz-repo-core]
SigLevel = Required DatabaseOptional TrustedOnly
Server = http://demizerone.com/$repo/$arch
Then add the signing key to pacman's trusted key list:
sudo pacman-key -r 0EE7A126
sudo pacman-key --lsign-key 0EE7A126
And finally update your packages list and install archzfs:
sudo pacman -Syy
sudo pacman -S archzfs
Enable the ZFS service so it starts at boot:
sudo systemctl enable zfs.service
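Before touching any disks, it is worth confirming that the kernel module actually loads on your running kernel (a quick sanity check, not part of the original instructions):

```shell
# Load the ZFS kernel module and verify it is active
sudo modprobe zfs
lsmod | grep zfs   # should list zfs and its dependencies (spl, etc.)
```

If modprobe fails, the archzfs package likely does not match your installed kernel version; update both before continuing.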
Format your drives
I am using 7x Western Digital Red 3TB drives, which support Advanced Format (4K physical sectors).
To use them efficiently we have to create a GUID Partition Table (GPT) and a primary partition that is properly aligned to those 4K sectors.
For this I used parted like this:
sudo parted -a optimal /dev/sdb
GNU Parted 3.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
(parted) mkpart primary 2048s 100%
(parted) p
Model: ATA WDC WD30EFRX-68A (scsi)
Disk /dev/sdb: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 3001GB 3001GB
(parted) unit s
(parted) p
Model: ATA WDC WD30EFRX-68A (scsi)
Disk /dev/sdb: 5860533168s
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 2048s 5860532223s 5860530176s
(parted)
"mklabel gpt" creates the partition table, and "mkpart primary 2048s 100%" aligns the primary partition at 1MiB (sector 2048) and lets it take up the rest of the drive. This alignment works with most HDDs/SSDs. You can also switch to sector units (unit s) and check that the start sector is a multiple of 8 to be sure everything went fine.
Repeat this step for every drive you want to use in your setup.
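Instead of repeating the interactive parted session for each disk, the same steps can be scripted. This is a sketch; the device list matches my seven drives, so double-check yours before running it:

```shell
#!/bin/sh
# CAUTION: destroys the existing partition table on every listed drive.
for dev in sdb sdc sdd sde sdf sdg sdh; do
    echo "Partitioning /dev/$dev"
    # -s: scripted (non-interactive) mode; same commands as the session above
    sudo parted -s -a optimal "/dev/$dev" mklabel gpt mkpart primary 2048s 100%
done
```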
Encrypt your drives
I used this command to encrypt my drives:
sudo cryptsetup --cipher aes-xts-plain64 --key-size 512 --hash sha512 --iter-time 5000 --use-urandom --verify-passphrase luksFormat /dev/sdb1
WARNING!
========
This will overwrite data on /dev/sdb1 irrevocably.
Are you sure? (Type uppercase yes): YES
Enter LUKS passphrase:
Verify passphrase:
You can read about all the details in the ArchWiki, but in short: we encrypt each drive using AES in XTS mode with a 512-bit key, which gives an effective AES key strength of 256 bits.
Also repeat this step for every drive.
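If you want to double-check the parameters that were actually written to a LUKS header, you can dump it (read-only, safe to run):

```shell
sudo cryptsetup luksDump /dev/sdb1
# Look for "Cipher name: aes", "Cipher mode: xts-plain64"
# and "MK bits: 512" to confirm the settings above.
```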
Mount your drives
This command will open your freshly encrypted drive and map it to /dev/mapper/data0:
sudo cryptsetup luksOpen /dev/sdb1 data0
Do this for all drives, e.g. use data1, data2, data3, and so on.
Create the ZFS RAIDZ2 pool
Now it's time to create the ZFS setup, but first we need the IDs of the disks we want to use. Executing the following command will give you output like this:
ls -lah /dev/disk/by-id/
total 0
[...]
lrwxrwxrwx 1 root root 10 Apr 28 18:27 dm-name-data0 -> ../../dm-0
lrwxrwxrwx 1 root root 10 Apr 28 18:27 dm-name-data1 -> ../../dm-1
lrwxrwxrwx 1 root root 10 Apr 28 18:27 dm-name-data2 -> ../../dm-2
lrwxrwxrwx 1 root root 10 Apr 28 18:27 dm-name-data3 -> ../../dm-3
lrwxrwxrwx 1 root root 10 Apr 28 18:27 dm-name-data4 -> ../../dm-4
lrwxrwxrwx 1 root root 10 Apr 28 18:27 dm-name-data5 -> ../../dm-5
lrwxrwxrwx 1 root root 10 Apr 28 18:27 dm-name-data6 -> ../../dm-6
[...]
I cut out everything unimportant, but you can easily identify the IDs of your disks by the names you gave them when unlocking.
Now that you have the ids, you can create RAIDZ2 like this:
sudo zpool create -m /mnt/data -o ashift=12 tank raidz2 dm-name-data0 dm-name-data1 dm-name-data2 dm-name-data3 dm-name-data4 dm-name-data5 dm-name-data6
-m /mnt/data: Where we want to mount the filesystem
-o ashift=12: Pool is optimized for 4K sector disks
tank: The name of the pool
raidz2: We want to create raidz2
dm-name-data0...: A white space separated list of disk ids we got above to use for this pool
This should execute without any output.
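If you want to preview the layout before committing, zpool create supports a dry run via -n; it prints the configuration it would create without touching the disks:

```shell
sudo zpool create -n -m /mnt/data -o ashift=12 tank raidz2 \
    dm-name-data0 dm-name-data1 dm-name-data2 dm-name-data3 \
    dm-name-data4 dm-name-data5 dm-name-data6
```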
Now you can check your pool like this:
sudo zpool status
pool: tank
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
tank ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
dm-name-data0 ONLINE 0 0 0
dm-name-data1 ONLINE 0 0 0
dm-name-data2 ONLINE 0 0 0
dm-name-data3 ONLINE 0 0 0
dm-name-data4 ONLINE 0 0 0
dm-name-data5 ONLINE 0 0 0
dm-name-data6 ONLINE 0 0 0
errors: No known data errors
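You can also compare raw and usable capacity. Note that zpool list reports the raw pool size including parity, while with RAIDZ2 two disks' worth of space go to parity, so usable capacity is roughly (7-2) x 3TB:

```shell
sudo zpool list        # raw size of the pool, parity included
sudo zfs list tank     # usable space after parity, as seen by datasets
```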
Add a dataset to your pool
Now that we have our main storage pool it's time to add a dataset (in this case for my downloads) like this:
sudo zfs create tank/Downloads
Check the new dataset:
sudo zfs list
NAME USED AVAIL REFER MOUNTPOINT
tank 101M 12.5T 288K /mnt/data
tank/Downloads 100M 12.5T 100M /mnt/data/Downloads
You should now be able to cd to /mnt/data/Downloads and create some files and folders.
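Datasets are also the place for per-directory tuning. As a sketch (the property names are standard ZFS; the values are suggestions, not part of my original setup):

```shell
# Enable lightweight compression for the whole pool (inherited by datasets)
sudo zfs set compression=lz4 tank
# Give the Downloads dataset a quota so it cannot fill the entire pool
sudo zfs set quota=500G tank/Downloads
# Verify both properties
sudo zfs get compression,quota tank/Downloads
```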
Script to mount everything automatically
Here is the script I use to unlock everything after a boot; just enter the password once and everything comes up:
#!/bin/sh
# Read the passphrase once, without echoing it to the terminal
printf "Enter password: "
stty -echo
read password
stty echo
echo ""

i=0
for dev in sdb sdc sdd sde sdf sdg sdh; do
    echo "Unlocking /dev/${dev}1 -> data$i"
    echo "$password" | cryptsetup luksOpen "/dev/${dev}1" "data$i"
    i=$((i + 1))
done

echo "Restarting ZFS"
systemctl restart zfs.service
zfs list
The ZFS pool should be mounted automatically once the disks are unlocked.
However, I had the problem that datasets added later were not mounted right away; they only appeared magically after a few minutes. I solved this by deleting "/etc/zfs/zpool.cache" and running "sudo zpool import".
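For reference, the cache reset described above boils down to the following two commands ("tank" being the pool name used throughout this post):

```shell
# Remove the stale cache file, then let ZFS rescan and re-import the pool
sudo rm /etc/zfs/zpool.cache
sudo zpool import tank
```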