Date:      Wed, 17 Apr 2019 11:12:34 +0200 (CEST)
From:      Trond Endrestøl <Trond.Endrestol@fagskolen.gjovik.no>
To:        FreeBSD stable <freebsd-stable@freebsd.org>
Subject:   Re: ZFS parallel mounting gone wrong?
Message-ID:  <alpine.BSF.2.21.9999.1904171040070.81396@mail.fig.ol.no>
In-Reply-To: <alpine.BSF.2.21.9999.1904151512400.81396@mail.fig.ol.no>
References:  <alpine.BSF.2.21.9999.1904151512400.81396@mail.fig.ol.no>

On Mon, 15 Apr 2019 15:24+0200, Trond Endrestøl wrote:

> I upgraded a non-critical system running amd64 stable/12 to r346220.
> 
> During multiuser boot, not all ZFS filesystems below zroot/usr/local 
> were mounted.

Some more explanation is in order:

This system has two 7-year-old pools that complement each other.

/usr/local comes mostly from the zroot pool, but other children come 
from the zdata pool. The intermediate filesystems have their canmount 
property set to off, and mountpoints are specified at the top level 
only. The same goes for other parts of the filesystem hierarchy, such 
as /var/db and /var/spool.
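
To make that concrete, a hierarchy like this is typically built along 
these lines; the commands below are only an illustration, not the ones 
actually used seven years ago:

zfs create -o canmount=off -o mountpoint=/usr enterprise_zroot/usr
zfs create enterprise_zroot/usr/local
zfs create -o canmount=off -o mountpoint=/usr enterprise_zdata/usr
zfs create -o canmount=off enterprise_zdata/usr/local
zfs create enterprise_zdata/usr/local/pgsql
# enterprise_zdata/usr/local/pgsql inherits /usr/local/pgsql as its
# mountpoint, so it only makes sense to mount it after
# enterprise_zroot/usr/local has been mounted.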

I just upgraded to stable/12, global r346269, local r346268. During 
a routine "zfs mount -av" performed in single-user mode, the kernel 
proceeded to mount a child filesystem (enterprise_zdata/var/db/mysql) 
without its parent filesystems being mounted first.
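
The symptom is easy to verify after the fact with something along 
these lines (shown only as an illustration of the check):

zfs mount | sort -k 2                                            # what is mounted, sorted by mountpoint
zfs get -H -o name,value mounted enterprise_zroot/var/db         # says "no" ...
zfs get -H -o name,value mounted enterprise_zdata/var/db/mysql   # ... while this says "yes"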

I rebooted back to r345628 from March 28th, and this kernel has no 
problems correctly mounting the ZFS filesystems in parallel. That BE 
used LLVM 7.0.1 from base as its system compiler.
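
For the record, switching between BEs for this kind of bisection goes 
roughly like this; the BE name below is made up for the example:

bectl list                  # show the available boot environments and the active one
bectl activate 12-r345628   # boot the older BE on the next reboot
shutdown -r now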

Rebooting into r346220 (April 15th) or r346269 (April 17th) clearly 
shows problems mounting filesystems in the correct order. These BEs 
were compiled using LLVM 8.0.0 from base.

Maybe the system compiler is irrelevant.

The names of the pools might also be a factor: the zdata pool precedes 
the zroot pool in alphanumerical order.

Maybe there is a bug in the code, or the code breaks when parts of the 
filesystem hierarchy are built from multiple pools.
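
If anyone wants to poke at this without a pair of seven-year-old 
pools, a small file-backed reproduction along these lines might do; 
the pool and dataset names are made up, and I have not verified that 
this actually triggers the bug:

truncate -s 256m /tmp/tadata.img /tmp/tzroot.img
zpool create tadata /tmp/tadata.img
zpool create tzroot /tmp/tzroot.img
zfs create -o canmount=off -o mountpoint=/mnt/test tzroot/usr
zfs create tzroot/usr/local
zfs create -o canmount=off -o mountpoint=/mnt/test tadata/usr
zfs create -o canmount=off tadata/usr/local
zfs create tadata/usr/local/pgsql
# unmount only the two test leaves, then let the parallel code remount them
zfs unmount tadata/usr/local/pgsql
zfs unmount tzroot/usr/local
zfs mount -va   # does tadata/usr/local/pgsql come back before tzroot/usr/local?
# cleanup: zpool destroy tadata; zpool destroy tzroot; rm /tmp/tadata.img /tmp/tzroot.img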

Here's an attempt at explaining how this fits together:

zfs list -ro name,canmount,mountpoint enterprise_zroot/usr enterprise_zdata/usr enterprise_zroot/var enterprise_zdata/var
[the list has been slightly edited, moving zdata below zroot and adding an empty line]

NAME                                                   CANMOUNT  MOUNTPOINT
enterprise_zroot/usr                                        off  /usr
enterprise_zroot/usr/compat                                  on  /usr/compat
enterprise_zroot/usr/local                                   on  /usr/local
enterprise_zroot/usr/local/certs                             on  /usr/local/certs
enterprise_zroot/usr/local/etc                               on  /usr/local/etc
enterprise_zroot/usr/local/etc/shellkonfig3                  on  /usr/local/etc/shellkonfig3
enterprise_zroot/usr/local/etc/shellkonfig3/head             on  /usr/local/etc/shellkonfig3/head
enterprise_zroot/usr/local/etc/shellkonfig3/stable-10        on  /usr/local/etc/shellkonfig3/stable-10
enterprise_zroot/usr/local/etc/shellkonfig3/stable-11        on  /usr/local/etc/shellkonfig3/stable-11
enterprise_zroot/usr/local/etc/shellkonfig3/stable-8         on  /usr/local/etc/shellkonfig3/stable-8
enterprise_zroot/usr/local/etc/shellkonfig3/stable-9         on  /usr/local/etc/shellkonfig3/stable-9
enterprise_zroot/usr/local/info                              on  /usr/local/info
enterprise_zroot/usr/local/var                               on  /usr/local/var
enterprise_zroot/usr/obj                                     on  /usr/obj
enterprise_zroot/usr/ports                                   on  /usr/ports
enterprise_zroot/usr/ports/distfiles                         on  /usr/ports/distfiles
enterprise_zroot/usr/ports/local                            off  /usr/ports/local
enterprise_zroot/usr/ports/packages                          on  /usr/ports/packages
enterprise_zroot/usr/ports/workdirs                          on  /usr/ports/workdirs
enterprise_zroot/usr/src                                     on  /usr/src
enterprise_zdata/usr                                        off  /usr
enterprise_zdata/usr/local                                  off  /usr/local
enterprise_zdata/usr/local/moodledata                        on  /usr/local/moodledata
enterprise_zdata/usr/local/pgsql                             on  /usr/local/pgsql
enterprise_zdata/usr/local/restaurering                      on  /usr/local/restaurering
enterprise_zdata/usr/local/www                               on  /usr/local/www
enterprise_zdata/usr/local/www/moodle                        on  /usr/local/www/moodle

enterprise_zroot/var                                        off  /var
enterprise_zroot/var/Named                                   on  /var/Named
enterprise_zroot/var/account                                 on  /var/account
enterprise_zroot/var/audit                                   on  /var/audit
enterprise_zroot/var/cache                                  off  /var/cache
enterprise_zroot/var/cache/ccache                            on  /var/cache/ccache
enterprise_zroot/var/cache/synth                             on  /var/cache/synth
enterprise_zroot/var/crash                                   on  /var/crash
enterprise_zroot/var/db                                      on  /var/db
enterprise_zroot/var/db/darkstat                             on  /var/db/darkstat
enterprise_zroot/var/db/dkim                                 on  /var/db/dkim
enterprise_zroot/var/db/etcupdate                            on  /var/db/etcupdate
enterprise_zroot/var/db/hyperv                               on  /var/db/hyperv
enterprise_zroot/var/db/ntp                                  on  /var/db/ntp
enterprise_zroot/var/db/pkg                                  on  /var/db/pkg
enterprise_zroot/var/db/ports                                on  /var/db/ports
enterprise_zroot/var/db/sup                                  on  /var/db/sup
enterprise_zroot/var/empty                                   on  /var/empty
enterprise_zroot/var/log                                     on  /var/log
enterprise_zroot/var/mail                                    on  /var/mail
enterprise_zroot/var/munin                                   on  /var/munin
enterprise_zroot/var/run                                     on  /var/run
enterprise_zroot/var/spool                                   on  /var/spool
enterprise_zroot/var/spool/cvsup                             on  /var/spool/cvsup
enterprise_zroot/var/synth                                   on  /var/synth
enterprise_zroot/var/synth/builders                          on  /var/synth/builders
enterprise_zroot/var/synth/live_packages                     on  /var/synth/live_packages
enterprise_zroot/var/tmp                                     on  /var/tmp
enterprise_zroot/var/unbound                                 on  /var/unbound
enterprise_zdata/var                                        off  /var
enterprise_zdata/var/db                                     off  /var/db
enterprise_zdata/var/db/mysql                                on  /var/db/mysql
enterprise_zdata/var/db/mysql_secure                         on  /var/db/mysql_secure
enterprise_zdata/var/db/mysql_tmpdir                         on  /var/db/mysql_tmpdir
enterprise_zdata/var/db/postgres                             on  /var/db/postgres
enterprise_zdata/var/db/postgres/data11                      on  /var/db/postgres/data11
enterprise_zdata/var/db/postgres/data11/base                 on  /var/db/postgres/data11/base
enterprise_zdata/var/db/postgres/data11/pg_wal               on  /var/db/postgres/data11/pg_wal
enterprise_zdata/var/db/postgres/data96                      on  /var/db/postgres/data96
enterprise_zdata/var/db/postgres/data96/base                 on  /var/db/postgres/data96/base
enterprise_zdata/var/db/postgres/data96/pg_xlog              on  /var/db/postgres/data96/pg_xlog
enterprise_zdata/var/db/prometheus                           on  /var/db/prometheus
enterprise_zdata/var/db/prometheus/data                      on  /var/db/prometheus/data
enterprise_zdata/var/db/prometheus/data/wal                  on  /var/db/prometheus/data/wal
enterprise_zdata/var/spool                                  off  /var/spool
enterprise_zdata/var/spool/bareos                            on  /var/spool/bareos
enterprise_zdata/var/spool/ftp                               on  /var/spool/ftp

Using this remount script in single-user mode brings order to chaos:

#!/bin/sh

# To be run while in singleuser mode,
# preferably (re)booted directly to singleuser mode.

PATH="/bin:/sbin:/usr/bin:/usr/sbin:/rescue"
export PATH

killall devd
killall moused

umount /usr/compat/linux/dev/fd
umount /usr/compat/linux/dev
umount /usr/compat/linux/proc
umount /usr/compat/linux/sys

zfs unmount -a

# Mount the enterprise_zroot datasets first, skipping anything with
# canmount=off, the pool's root dataset, the boot environments under
# enterprise_zroot/ROOT, and the do-not-destroy placeholder.
for fs in `zfs list -Hro canmount,name enterprise_zroot |
           grep -v '^off' |
           grep -v 'enterprise_zroot$' |
           grep -v 'enterprise_zroot/ROOT' |
           grep -v 'enterprise_zroot/do-not-destroy' |
           awk '{print $2}'`; do
  zfs mount ${fs}
done

# Then mount the enterprise_zdata datasets the same way.
for fs in `zfs list -Hro canmount,name enterprise_zdata |
           grep -v '^off' |
           grep -v 'enterprise_zdata$' |
           grep -v 'enterprise_zdata/do-not-destroy' |
           awk '{print $2}'`; do
  zfs mount ${fs}
done

mount -al

echo "You may now attempt to exit to multiuser mode ..."

# EOF

-- Trond.

Date:      Wed, 17 Apr 2019 12:41:16 +0300
From:      "Andrey V. Elsukov" <ae@FreeBSD.org>
To:        Trond Endrestøl <Trond.Endrestol@fagskolen.gjovik.no>,
           FreeBSD stable <freebsd-stable@freebsd.org>
Subject:   Re: Panic during reboot involving softclock_call_cc(), nd6_timer()
           and nd6_dad_start()
Message-ID:  <be4be84a-549d-eba9-6f7a-9b8f0efa3c04@FreeBSD.org>
In-Reply-To: <alpine.BSF.2.21.9999.1904151524220.81396@mail.fig.ol.no>
References:  <alpine.BSF.2.21.9999.1904151524220.81396@mail.fig.ol.no>

On 15.04.2019 16:31, Trond Endrestøl wrote:
> Has anyone else witnessed a panic during reboot involving 
> softclock_call_cc(), nd6_timer(), and nd6_dad_start()?
> 
> The stack trace goes more or less like this:
> 
> db_trace_self_wrapper()
> vpanic()
> panic()
> trap_fatal()
> trap()
> calltrap()
> nd6_dad_start()
> nd6_timer()
> softclock_call_cc()
> softclock()
> ithread_loop()
> fork_exit()
> fork_trampoline()
> 
> This was last seen while transitioning from r345628 to r346220 on 
> amd64 stable/12.

Hi,

do you have the exact panic message and/or a backtrace from the core
dump? It would be good to submit a PR about such problems.
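
(For reference, the backtrace is usually pulled out of a saved crash
dump roughly like this; the paths and dump number below are the
defaults and only an illustration:)

kgdb /boot/kernel/kernel /var/crash/vmcore.0   # "bt" inside kgdb prints the backtrace;
                                               # kgdb comes from the devel/gdb package
less /var/crash/info.0                         # dump header, including the panic string
less /var/crash/core.txt.0                     # summary generated by crashinfo(8)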

--
WBR, Andrey V. Elsukov


--zDSNDJ1vDRQQMOQzD6czJNOhKP2EyxQmF--

--lKLtm7DNWRtYUeCc0Lw8PZEWgm4ew9agc
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Comment: Using GnuPG with Thunderbird - https://www.enigmail.net/

iQEzBAEBCAAdFiEE5lkeG0HaFRbwybwAAcXqBBDIoXoFAly29LwACgkQAcXqBBDI
oXraEgf+LdzH0ZILT9jNUW19NuSCLHH+TRvEMdAC5HrYODOxaNwW6rwGFpOBzjXG
3JdTwGzevnOj00aRkVaBfkt+gM49QFgXeVuRl4NxSMgG2RaXYpiB0kIpcc8Erx7R
e2IBg4vpFlEqqOkB6ESKquE9cA4XR+1BqdMH05NrXoQlUi4vm2pUw8YERKhBAypb
xJyU7IoALMS/4uA/fAlPXx5A7lzRuCZ+HmqjVrAMcmBkHTUfivEZkIafGAxypb5V
Kg3UdVJMV8ttKFA0jBtLxwzbvaLa+h9ORUxtrsL2N1dOEXjHGfQpGOwWfrKw5GO6
BkIFfvGHfYZ7xe8CL4c8L1nfavIk0A==
=4XV1
-----END PGP SIGNATURE-----

--lKLtm7DNWRtYUeCc0Lw8PZEWgm4ew9agc--


