Hi, Pablo here
Why I Put My VMs on a ZFS Mirror
Part 1 of 3 in my "First ZFS Degradation" series. Also read Part 2: Diagnosing the Problem and Part 3: The Fix.
Why This Series Exists
A few weeks into running my new homelab server, I stumbled upon something I wasn't expecting to see that early: my ZFS pool was in "DEGRADED" state. One of my two mirrored drives had gone FAULTED.
This was the first machine I had set up with a ZFS mirror, precisely so I could deal with disk issues smoothly, without losing data or taking downtime. Spotting the problem was a pain in the ass, but I was also kind of happy: it gave me a chance to practice exactly the sort of disk maintenance I'd built this new server to handle.
But here's the thing: when I was in the middle of it, I couldn't find a single resource that walked through the whole experience in detail. Plenty of docs explain what ZFS is. Plenty of forum posts have people asking "help my pool is degraded." But nothing that said "here's what it actually feels like to go through this, step by step, with all the commands and logs and reasoning behind the decisions."
So I wrote it down. I took a lot of notes during the process and crafted a more or less organized story from them. This three-part series is for fellow amateur homelabbers who are curious about ZFS, maybe a little intimidated by it, and want to know what happens when things go sideways. I wish I had found a very detailed log like this when I was researching ZFS initially. Hope it helps you.
The Server and Disks
My homelab server is a modest but capable box I built in late 2025. It has decent consumer hardware, nothing remarkable. The only detail that matters for this story is that it currently has three disks in it:
- OS Drive: Kingston KC3000 512GB NVMe. Proxmox lives here.
- Data Drives: Two Seagate IronWolf Pro 4TB drives (ST4000NT001). This is where my Proxmox VMs get their disks stored.
The two IronWolf drives are where this story takes place. I labeled them AGAPITO1 and AGAPITO2 because... well, every pair of drives deserves a silly name. I have issues remembering serial numbers.
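If you want to see which serial belongs to which device on your own machine, a quick generic check like this does it (the output obviously varies per box; this isn't mine):

```
# Show block devices with model and serial number.
lsblk -o NAME,MODEL,SERIAL,SIZE

# The same serials show up in the stable by-id paths that ZFS likes to use.
ls -l /dev/disk/by-id/ | grep -i ata
```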
The server runs Proxmox and hosts most of my self-hosted life: personal services, testing VMs, and my Bitcoin infrastructure (which I share over at bitcoininfra.contrapeso.xyz). If this pool goes down, everything goes down.
Why ZFS?
I'll be honest: I didn't overthink this decision. ZFS is the default storage recommendation for Proxmox, it has a reputation for being rock-solid, and I'd heard enough horror stories about silent data corruption to want something with checksumming built in.
What I was most interested in was the ability to define RAID setups in software and easily deal with disks going in and out of them. I had never gone beyond the naive "one disk for the OS, one disk for data" setup in previous servers. After having disks fail on me in previous boxes, I decided it was time to gear up and do it properly this time. My main concern initially was just saving time: it's messy when a "simple" host has disk issues, and I hoped mirroring would let me spend less time cleaning up disasters.
Why a Mirror?
When I set up the pool, I had two 4TB drives. That gave me a few options:
- Stripe (or two standalone disks): Maximum space (8TB usable across both drives), zero redundancy. One bad sector and you're crying.
- Mirror: Half the space (4TB usable from 8TB raw), but everything is written to both drives. One drive can completely die and you lose nothing.
- RAIDZ: Needs at least 3 drives, gives you parity-based redundancy. More space-efficient than mirrors at scale.
I went with the mirror for a few reasons.
First, I only had two drives to start with, so RAIDZ wasn't even an option yet.
Second, mirrors are simple. Data goes to both drives. If one dies, the other has everything. No parity calculations, no write penalties, no complexity.
Third (and this is the one that sold me), mirrors let you expand incrementally. With ZFS, you can add more mirror pairs (more "vdevs") to your pool later. You can even mix sizes: start with two 4TB drives, add two 8TB drives later, and ZFS will use all of it. RAIDZ traditionally hasn't given you that flexibility: once you set your vdev width, you're stuck with it (recent OpenZFS releases added RAIDZ expansion, but it's a newer and more involved operation).
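To make that concrete, here's roughly what growing the pool with a second mirror pair would look like later on. The drive paths below are placeholders, not hardware I actually own:

```
# Hypothetical: add a second mirror vdev (two new drives) to the pool.
# Replace the by-id paths with your actual drives.
zpool add proxmox-tank-1 mirror \
  /dev/disk/by-id/ata-NEWDRIVE_SERIAL_A \
  /dev/disk/by-id/ata-NEWDRIVE_SERIAL_B

# The pool then stripes data across mirror-0 and the new mirror-1.
zpool status proxmox-tank-1
```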
When Would RAIDZ Make More Sense?
If you're starting with 4+ drives and you want to maximize usable space, RAIDZ starts looking attractive:
| Configuration | Drives | Usable Space | Fault Tolerance |
|---|---|---|---|
| Mirror | 2 | 50% | 1 drive |
| RAIDZ1 | 3 | ~67% | 1 drive |
| RAIDZ1 | 4 | 75% | 1 drive |
| RAIDZ2 | 4 | 50% | 2 drives |
| RAIDZ2 | 6 | ~67% | 2 drives |
RAIDZ2 is popular for larger arrays because it can survive two drive failures, which matters more as you add drives (more drives = higher chance of one failing during a resilver).
But for a two-drive homelab that might grow to four drives someday, I felt a mirror was the right call. I can always add another mirror pair later.
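Just for comparison, if I were starting fresh with four drives, a RAIDZ2 pool would be a single command too. Purely hypothetical, with placeholder device names:

```
# Hypothetical four-drive RAIDZ2 pool (survives any two drive failures).
# Placeholder names; use /dev/disk/by-id/ paths in practice.
zpool create tank2 raidz2 \
  /dev/disk/by-id/ata-DRIVE_A \
  /dev/disk/by-id/ata-DRIVE_B \
  /dev/disk/by-id/ata-DRIVE_C \
  /dev/disk/by-id/ata-DRIVE_D
```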
The Pool: proxmox-tank-1
My ZFS pool is called proxmox-tank-1. Here's what it looks like when everything is healthy:
```
  pool: proxmox-tank-1
 state: ONLINE
config:

        NAME                                 STATE     READ WRITE CKSUM
        proxmox-tank-1                       ONLINE       0     0     0
          mirror-0                           ONLINE       0     0     0
            ata-ST4000NT001-3M2101_WX11TN0Z  ONLINE       0     0     0
            ata-ST4000NT001-3M2101_WX11TN2P  ONLINE       0     0     0
```
That's it. One pool, one mirror vdev, two drives. The drives are identified by their serial numbers (the WX11TN0Z and WX11TN2P parts), which is important — ZFS uses stable identifiers so it doesn't get confused if Linux decides to shuffle around /dev/sda and /dev/sdb.
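For reference, a mirror pool laid out like that is created with one command along these lines. I'm reconstructing it from the layout above rather than quoting my shell history (the Proxmox UI can also do the equivalent for you):

```
# Roughly how a two-disk mirror pool like this gets created,
# using the stable by-id paths rather than /dev/sdX names.
# ashift=12 matches 4K-sector drives; check what suits your disks.
zpool create -o ashift=12 proxmox-tank-1 mirror \
  /dev/disk/by-id/ata-ST4000NT001-3M2101_WX11TN0Z \
  /dev/disk/by-id/ata-ST4000NT001-3M2101_WX11TN2P
```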
All my Proxmox VMs store their virtual disks on this pool. When I create a new VM, I point its storage at proxmox-tank-1 and ZFS handles the rest.
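If you're curious what that looks like on the ZFS side, each VM disk lives as a zvol on the pool. The exact names depend on your VM IDs, so treat this as a generic check rather than my real output:

```
# List the zvols backing VM disks on this pool.
# Proxmox names them like proxmox-tank-1/vm-<vmid>-disk-<n>.
zfs list -t volume -r proxmox-tank-1
```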
What Could Possibly Go Wrong?
Everything was humming along nicely. VMs were running fine and I was feeling pretty good about my setup.
Then, a few weeks in, I was poking around the Proxmox web UI and noticed something that caught my eye.
The ZFS pool was DEGRADED. One of my drives — AGAPITO1, serial WX11TN0Z — was FAULTED.
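For what it's worth, you don't need the web UI to catch this. The same information is one command away on the host:

```
# Quick health check: prints "all pools are healthy" when nothing is wrong,
# and a full report for any pool that isn't.
zpool status -x

# Full detail for this specific pool, including per-device error counters.
zpool status proxmox-tank-1
```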
In Part 2, I'll walk through how I diagnosed what was actually wrong. Spoiler: the drive itself was fine. The problem was much dumber than that.
Continue to Part 2: Diagnosing the Problem