Date: Sat, 29 Feb 2020 11:46:29 +0100 (CET)
From: Trond Endrestøl <trond.endrestol@ximalas.info>
To: byrnejb@harte-lyne.ca
Cc: freebsd-questions@freebsd.org
Subject: Re: ZFS i/o error on boot unable to start system
Message-ID: <alpine.BSF.2.22.395.2002291119450.6036@enterprise.ximalas.info>
In-Reply-To: <eb8f8f32fcf5559774daf3a772a1ad2e.squirrel@webmail.harte-lyne.ca>
References: <eb8f8f32fcf5559774daf3a772a1ad2e.squirrel@webmail.harte-lyne.ca>
On Fri, 28 Feb 2020 08:51-0500, James B. Byrne via freebsd-questions wrote:

> I have reported this on the forums as well.
>
> FreeBSD-12.1p2
> raidz2 on 4x8TB HDD (reds)
> root on zfs
>
> We did a hot restart of this host this morning and received the following on
> the console:
>
> ZFS: i/o error - all block copies unavailable
> ZFS: failed to read pool zroot directory object
> gptzfsboot: failed to mount default pool zroot
>
> FreeBSD/x86 boot
> ZFS: i/o error - all block copies unavailable
> ZFS: can't find dataset 0
> Default: zroot/<0x0>
> boot:
>
> What has happened? How do I get this system back up and online?

Inspect the hardware. Make sure all cables are connected; maybe you have a 
bad disk cable.

Boot off the install media and ensure all drives are visible in the dmesg. 
Next, use gpart show -p to ensure all the GPTs are valid and all the 
partitions are properly listed.

If all checks out, try to import the pool. You might need to forcefully 
import the pool, as it is already marked as being in use. Do not attempt to 
rewind the pool to the last checkpoint.

If the import is successful, start a scrub, and run zpool status -v 
multiple times to inspect the results as they come in.

Last year, ada0 in one of my systems went so bad that the primary GPT got 
corrupted, and the boot partition and the swap partition near the beginning 
of the drive were unavailable. I booted off ada1, which was a mirror of 
ada0. Luckily, ZFS stores enough metadata to know where the ZFS partition 
resided on ada0, and thus ZFS was able to continue as if nothing bad had 
happened; this was confirmed by running a scrub. I opted to replace all 
five drives in that system, as they had been in continuous use for a little 
over seven years.

> My first thought is that in modifying rc.conf to change some ip4 address
> assignments I may have done something else inadvertently which has caused
> this. I cannot think of any other changes made since the system was last
> restarted at noon yesterday.

The mere act of editing and saving a file might have had an adverse impact 
on ZFS, but I suspect something else is at play.

Let us know how things are progressing.

-- 
Trond.
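For readers following the advice above, the initial drive and partition-table checks from the install media's live shell look roughly like this. The device names are illustrative; a 4-drive raidz2 box will typically show ada0 through ada3, but yours may differ:

```shell
# List the drives the CAM layer attached, then review the boot messages
# for any attach/detach or I/O errors.
camcontrol devlist
dmesg | grep -E '^(ada|da)[0-9]'

# Show every GPT with provider names. Look for the word "CORRUPT" in the
# output, and confirm that each drive still carries its freebsd-boot,
# freebsd-swap and freebsd-zfs partitions.
gpart show -p
```

If a drive is missing from devlist entirely, suspect cabling or the drive itself before blaming the pool.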
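The import step described above might be sketched as follows, assuming the pool name zroot from the boot messages. Importing read-only under an alternate root first is a cautious choice, not something the thread prescribes:

```shell
# From the live environment: list pools visible for import and note
# their reported health before touching anything.
zpool import

# Force the import under an alternate root. -f overrides the
# "pool was in use by another system" check; -o readonly=on avoids
# writing to a possibly damaged pool on the first attempt.
zpool import -f -o readonly=on -R /mnt zroot
```

Once satisfied the pool is intact, export it and re-import without readonly=on so a scrub can repair anything it finds.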
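The scrub-and-watch loop suggested above, again assuming the pool name zroot:

```shell
# Start a scrub; it runs in the background.
zpool scrub zroot

# Poll the results as they come in. Repeat until the scrub completes,
# and note any checksum errors or files listed as damaged.
zpool status -v zroot
```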