Date:      Wed, 26 Feb 2020 18:09:40 +0100
From:      Willem Jan Withagen <wjw@digiware.nl>
To:        FreeBSD Filesystems <freebsd-fs@freebsd.org>
Subject:   ZFS pools in "trouble"
Message-ID:  <71e1f22a-1261-67d9-e41d-0f326bf81469@digiware.nl>

Hi,

I'm using my pools in what is perhaps a rather awkward way, as the
underlying storage for my Ceph cluster:
	1 disk per pool, with log and cache on SSD
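For reference, each pool was created more or less like this (a sketch;
the device and partition names here are placeholders, not my actual ones):
----
# one data disk per pool, log and cache on SSD partitions
zpool create osd_2 da2 log gpt/zil2 cache gpt/l2arc2
----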

For one reason or another one of the servers has crashed and now does 
not really want to read several of the pools:
----
   pool: osd_2
  state: UNAVAIL
Assertion failed: (reason == ZPOOL_STATUS_OK), file 
/usr/src/cddl/contrib/opensolaris/cmd/zpool/zpool_main.c, line 5098.
Abort (core dumped)
----

The code at that line reads:
----
         default:
                 /*
                  * The remaining errors can't actually be generated, yet.
                  */
                 assert(reason == ZPOOL_STATUS_OK);

----
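If I read that right, any status value that the switch does not handle
explicitly lands in this default case and aborts the whole command. A
minimal standalone sketch of the pattern (not the actual zpool_main.c
code; the enum is abbreviated and the last value is a placeholder):
----
#include <stdio.h>

/* Abbreviated stand-in for the real zpool_status_t enum. */
typedef enum {
        ZPOOL_STATUS_OK,
        ZPOOL_STATUS_MISSING_DEV_R,
        ZPOOL_STATUS_UNHANDLED          /* placeholder for anything the switch predates */
} zpool_status_t;

static void
show_status(zpool_status_t reason)
{
        switch (reason) {
        case ZPOOL_STATUS_MISSING_DEV_R:
                printf("one or more devices could not be opened\n");
                break;
        /* ... one case per status that zpool(8) knows how to report ... */
        default:
                /*
                 * The real code does:
                 *         assert(reason == ZPOOL_STATUS_OK);
                 * so any unexpected status aborts. Printing it instead
                 * would show what the damaged pools actually return.
                 */
                fprintf(stderr, "unhandled pool status %d\n", (int)reason);
                break;
        }
}

int
main(void)
{
        show_status(ZPOOL_STATUS_UNHANDLED);    /* trips the default case */
        return (0);
}
----
Replacing the assert with a print of the raw value, as in the sketch,
would at least tell me which status these pools are reporting.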
And this has already happened on 3 of the disks.
Running:
FreeBSD 12.1-STABLE (GENERIC) #0 r355208M: Fri Nov 29 10:43:47 CET 2019

Now this is a test cluster, so no harm done in terms of data loss.
And the Ceph cluster can probably rebuild everything, as long as I do 
not lose too many disks.

But the problem is also that not all disks are recognized by the 
kernel, and not all of them end up mounted. So I first need to remove a 
pool to get more disks online.

Is there anything I can do to get them back online?
Or is this a lost cause?

--WjW


