Install Native ZFS for Linux
Add the PPA for zfs-native and install ubuntu-zfs:
sudo add-apt-repository ppa:zfs-native/stable
sudo aptitude update
sudo aptitude install ubuntu-zfs
Create the ZFS pool (zpool)
There are several different types of ZFS pools that can be created. I didn’t need anything too fancy, so I just went with a mirror (similar to RAID-1). If I decide to add more capacity later, I can simply add another mirror to my pool; I won’t have to mess with destroying my existing ZFS pool and creating a new one with more disks or anything like that.
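As a sketch of what that later expansion would look like (the disk IDs below are hypothetical placeholders, not real drives):

```shell
# Hypothetical disk IDs: replace with the new drives' /dev/disk/by-id/ names.
# 'zpool add' attaches a second mirror vdev to the pool; existing data stays
# where it is, and new writes are striped across both mirrors.
sudo zpool add earth mirror \
    /dev/disk/by-id/ata-EXAMPLE_DISK_1 \
    /dev/disk/by-id/ata-EXAMPLE_DISK_2
```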
Find the IDs for the disks that will be used to create the ZFS pool.
ls -l /dev/disk/by-id/
The output will vary from system to system, but the two relevant lines for my two HDDs showed up as
lrwxrwxrwx 1 root root 9 Jan 29 21:10 ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0438258 -> ../../sdb
lrwxrwxrwx 1 root root 9 Jan 29 21:10 ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0429693 -> ../../sdc
To create the ZFS mirrored pool, use the following command (replacing the pool name and disk IDs as necessary).
sudo zpool create earth mirror /dev/disk/by-id/ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0438258 /dev/disk/by-id/ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0429693
Note: by adding the disks using their persistent /dev/disk/by-id names instead of names like /dev/sdb, I can swap hard drives or change which disks are plugged into which ports on the motherboard without worrying about the disks being assigned a different /dev/sdX name.
Create ZFS Filesystems
Instead of simply creating directories in the pool, ZFS filesystems can be used to give more fine-grained control over the data that you store in your zpool. If you wanted to use deduplication or compression (or any of ZFS’s features) for some (but not all) of the files in your pool, it can be enabled for specific filesystems without directly affecting the settings for the other filesystems in the pool.
For example, if you wanted to create separate filesystems for backups and downloads, you could run
sudo zfs create earth/Backup
sudo zfs create earth/Downloads
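As a sketch of that per-filesystem control (compression is a standard ZFS property; the datasets are the ones created above):

```shell
# Enable compression only for Downloads; Backup is left untouched.
sudo zfs set compression=on earth/Downloads

# Properties are per-dataset; -r recurses through earth and all of its
# children, showing where each value was set or inherited from.
zfs get -r compression earth
```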
To see all of your filesystems, use
zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
earth             191G  1.60T   200K  /earth
earth/Backup     5.31G  1.60T  5.31G  /earth/Backup
earth/Downloads  5.63G  1.60T  5.63G  /earth/Downloads
Monitor pool status
To confirm that the pool has been created, run zpool status as root:
  pool: earth
 state: ONLINE
  scan: none requested
config:

        NAME                                          STATE     READ WRITE CKSUM
        earth                                         ONLINE       0     0     0
          mirror-0                                    ONLINE       0     0     0
            ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0429693  ONLINE       0     0     0
            ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0438258  ONLINE       0     0     0

errors: No known data errors
zpool iostat returns:
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
earth        191G  1.63T      2      0   241K  2.47K
zpool iostat -v earth returns:
                                                 capacity     operations    bandwidth
pool                                          alloc   free   read  write   read  write
--------------------------------------------  -----  -----  -----  -----  -----  -----
earth                                          191G  1.63T      2      0   240K  2.47K
  mirror                                       191G  1.63T      2      0   240K  2.47K
    ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0429693      -      -      1      0   123K  4.33K
    ata-WDC_WD20EZRX-19D8PB0_WD-WCC4M0438258      -      -      1      0   120K  4.33K
--------------------------------------------  -----  -----  -----  -----  -----  -----
Scrub the zpool
Scrubbing is ZFS’s way of verifying the integrity of all of the data in your zpool. ZFS verifies data as it is read, but most of the data in a pool is not read very often, so it would otherwise go unchecked for long stretches. Regularly scrubbing the zpool catches silent corruption before it can eat into your redundancy. How often to scrub depends on the quality of the drives in the pool; the ZFS Best Practices Guide suggests weekly scrubbing when using consumer-grade drives.
# weekly scrub of 'earth' zfs pool
0 0 * * 6  root  /sbin/zpool scrub earth
The status of the scrub can be shown using zpool status.
Mount ZFS Filesystems at Startup
Edit /etc/rc.local to include commands to mount the ZFS filesystems you want mounted:
zfs mount -O earth/Backup
zfs mount -O earth/Downloads
By default, ZFS will not mount a filesystem on top of a non-empty directory; the -O option forces the mount anyway. On my system, I found that during the boot process Crashplan would create files in my “Backup” directory and prevent the Backup filesystem from being mounted. The first time this happened, it took me about a minute (of thinking my data was gone and mentally preparing to restore from backups) to figure out what had happened; until then, I thought ZFS had somehow selectively destroyed most of the directories in that filesystem (except for the one containing the Crashplan files).
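One way to make the rc.local mounts a bit more defensive (a sketch; mountpoint(1) comes from util-linux and is present on Ubuntu) is to only force the mount when the filesystem isn't already mounted:

```shell
# 'mountpoint -q' exits 0 only if the path is already a mount point,
# so the forced overlay mount runs only when it is actually needed.
if ! mountpoint -q /earth/Backup; then
    zfs mount -O earth/Backup
fi
```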
Samba Sharing
ZFS for Ubuntu has built-in support for sharing ZFS filesystems via Samba, which actually makes sharing a ZFS filesystem easier than sharing a non-ZFS directory. To share a filesystem, use
zfs set sharesmb=on earth/Backup
For whatever reason, I had to turn Samba sharing back on for each ZFS filesystem after every reboot in order for it to show up on the network, even though sharing had already been enabled before rebooting. Since I had already added commands to /etc/rc.local to mount each filesystem, I added a few more lines to turn on sharing for the filesystems that I wanted shared.
zfs mount earth/Backup
zfs set sharesmb=on earth/Backup
zfs mount earth/Downloads
zfs set sharesmb=on earth/Downloads
Compression
Compression can be enabled for a particular ZFS filesystem like so:
zfs set compression=on earth/Documents
Note that compression is not retroactive: if you want compression, enable it before writing data to the filesystem, since data already on disk will not be compressed after the fact.
To check whether compression is enabled for any given ZFS filesystem, run
zfs get compression earth/Documents
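Because compression only applies to newly written data, the compressratio property is a useful sanity check: it reports the ratio actually achieved for the data currently on disk.

```shell
# A ratio of 1.00x suggests the data was written before compression was enabled.
zfs get compressratio earth/Documents
```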
Snapshots
Snapshots can be automatically created and managed using the excellent zfs-auto-snapshot utility. All you have to do is install it:
sudo add-apt-repository ppa:zfs-native/stable
sudo apt-get update
sudo apt-get install zfs-auto-snapshot
Manual snapshots can be created using
zfs snapshot filesystem@snapname
Snapshots can be destroyed using
zfs destroy filesystem@snapname
List all zfs filesystem snapshots using the following command
zfs list -t snapshot
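To narrow the listing to a single filesystem, zfs list can recurse (-r) into one dataset and sort the results by creation time:

```shell
# Only snapshots of earth/Backup and its descendants, oldest first.
zfs list -t snapshot -r -s creation earth/Backup
```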
Linux command line
Individual files from a snapshot can be accessed by simply browsing to the hidden .zfs/snapshot directory at the root of the mounted zfs filesystem.
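For example, restoring a single file from an hourly snapshot is just an ordinary copy (the snapshot and file names here are hypothetical):

```shell
# Snapshots appear as read-only directories under .zfs/snapshot;
# restoring one file is a plain copy back into the live filesystem.
cp /earth/Backup/.zfs/snapshot/zfs-auto-snap_hourly-2014-01-29-2100/some-file.txt \
   /earth/Backup/some-file.txt
```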
Rolling back a filesystem to a snapshot older than the most recent one destroys all intermediate snapshots. The command to roll back a zfs filesystem to a previous snapshot is
zfs rollback filesystem@snapname
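When the target is not the most recent snapshot, zfs rollback refuses to run unless -r is given, since the newer snapshots have to be destroyed:

```shell
# -r destroys any snapshots newer than 'snapname'
# before rolling the filesystem back to it.
zfs rollback -r filesystem@snapname
```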
Previous Versions on Windows client (using Samba share)
In the /etc/samba/smb.conf file, add the following to the [global] configuration in order to see hourly snapshots under Previous Versions in Windows. If you want to expose frequent or daily snapshots instead, update the shadow: format line accordingly. After updating the Samba server configuration, run
service smbd restart
to apply the changes.
[global]
# zfs auto snap previous versions stuff
shadow: snapdir = .zfs/snapshot
shadow: sort = desc
shadow: format = zfs-auto-snap_hourly-%Y-%m-%d-%H%M
vfs objects = shadow_copy2
ZFS status updates (email)
Update: hacky scripts like the ones below are no longer necessary; see zed for event-based notifications.
This assumes that you have ssmtp installed on your server and that it is properly configured to send email.
Edit the system-wide crontab:
sudo nano /etc/crontab
and add something like the following line to get simple status updates (in this case, every Saturday morning at 0200). Also, don’t forget to change the email address to the one you want the status updates sent to.
0 2 * * 6 root STATUS="`/sbin/zpool status`" ; echo -e "Subject: ZFS Status\n\n$STATUS" | ssmtp NotAReal.Address@fbi.gov.biz
zpool degraded state alerts
See this shell script.
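As a rough sketch of such a script (assuming ssmtp is configured as above; the pool name and address are the placeholders used earlier in this post):

```shell
#!/bin/sh
# Sketch of a cron-driven alert: emails only when the pool is not healthy.
# 'zpool list -H -o health' prints just the pool's health (e.g. ONLINE, DEGRADED).
POOL=earth
HEALTH="$(/sbin/zpool list -H -o health "$POOL")"
if [ "$HEALTH" != "ONLINE" ]; then
    {
        printf 'Subject: ZFS pool %s is %s\n\n' "$POOL" "$HEALTH"
        /sbin/zpool status "$POOL"
    } | ssmtp NotAReal.Address@fbi.gov.biz
fi
```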