Date:      Thu, 16 Mar 2017 10:04:27 -0600
From:      Warner Losh <imp@bsdimp.com>
To:        Pete French <petefrench@ingresso.co.uk>
Cc:        Andriy Gapon <avg@freebsd.org>, stable@freebsd.org
Subject:   Re: mountroot failing on zpools in Azure after upgrade from 10 to 11 due to lack of waiting for da0
Message-ID:  <CANCZdfpUa6GX2OVT70g4fCM2SwAcdN2ghMFO9UPeN+DC3Pa+6Q@mail.gmail.com>
In-Reply-To: <CANCZdfpx7gO8O-+t41HwS5tkjakzMntw7WJ9N5pnR+88DzJL=Q@mail.gmail.com>
References:  <6b397d83-e802-78ca-e24e-6d0713f07212@FreeBSD.org> <E1coUAY-0000ou-8i@dilbert.ingresso.co.uk> <CANCZdfpx7gO8O-+t41HwS5tkjakzMntw7WJ9N5pnR+88DzJL=Q@mail.gmail.com>

[[ stupid mouse ]]

On Thu, Mar 16, 2017 at 10:01 AM, Warner Losh <imp@bsdimp.com> wrote:
> On Thu, Mar 16, 2017 at 6:06 AM, Pete French <petefrench@ingresso.co.uk> wrote:
>>> I don't like the delay and retry approach at all.
>>
>> It's not ideal, but it is what we do for UFS, after all...
>>
>>> Imagine that you told the kernel that you want to mount your root from a ZFS
>>> pool which is on a USB drive which you have already thrown out.  Should the
>>> kernel just keep waiting for that pool to appear?
>>
>> I'm not talking about an infinite loop here, just making it honour
>> the 'vfs.mountroot.timeout' setting like it does for UFS. So it
>> should wait for the timeout I have set and then proceed as it would if
>> there had been no timeout. Default behaviour is for it to behave as it
>> does now; it's only when you need the retry that you enable it.
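
As a concrete illustration of the setup being asked for: a minimal
/boot/loader.conf sketch, assuming a ZFS root pool named "rpool" with
the usual boot-environment layout (the pool and dataset names are
illustrative, not taken from this thread):

    # Retry the root mount for up to 30 seconds before giving up.
    # Today this timeout is honoured while waiting for a UFS device,
    # but not when the root is a ZFS pool.
    vfs.mountroot.timeout="30"
    vfs.root.mountfrom="zfs:rpool/ROOT/default"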
>
> Put another way: With UFS it keeps retrying until the timeout expires.
> If the first try succeeds, the boot is immediate.
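
For reference, the UFS-side wait that implements this lives in
sys/kern/vfs_mountroot.c; paraphrased and simplified here (names and
error handling approximate, not a verbatim copy), it is essentially:

    /*
     * Poll for the named root device every hz/10 ticks until
     * vfs.mountroot.timeout seconds have elapsed.  If the device
     * is already present, the first check succeeds and the boot
     * continues with no delay at all.
     */
    if (dev[0] != '\0' && !parse_mount_dev_present(dev)) {
            printf("mountroot: waiting for device %s...\n", dev);
            delay = hz / 10;
            timeout = root_mount_timeout * hz;
            do {
                    pause("rmdev", delay);
                    timeout -= delay;
            } while (timeout > 0 && !parse_mount_dev_present(dev));
            if (timeout <= 0)
                    error = ENODEV;
    }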
>
>> Right now this works for UFS, but not for ZFS, which is an inconsistency
>> that I don't like, and it also means I am being forced down a UFS root
>> path if I require this.
>
> Yes. ZFS is special, but I don't think the assumptions behind its
> specialness are quite right:
>
>         /*
>          * In case of ZFS and NFS we don't have a way to wait for
>          * specific device.  Also do the wait if the user forced that
>          * behaviour by setting vfs.root_mount_always_wait=1.
>          */
>         if (strcmp(fs, "zfs") == 0 || strstr(fs, "nfs") != NULL ||
>             dev[0] == '\0' || root_mount_always_wait != 0) {
>                 vfs_mountroot_wait();
>                 return (0);
>         }
>
> So you can make it always succeed by forcing the wait, but that's lame...
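
For anyone who wants that lame-but-working behaviour today, the escape
hatch named in the comment above is a plain loader tunable; in
/boot/loader.conf:

    # Always run vfs_mountroot_wait() before mounting root,
    # regardless of filesystem type.
    vfs.root_mount_always_wait="1"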

Later we check to see if a device by a given name is present. Since
ZFS doesn't present its pool names as devices to the rest of the
system, that's not going to work quite right. That's the real reason
ZFS is special: it isn't that we can't wait for individual devices,
it's that we can't wait for the 'mount token' that names what to
mount to become 'ready'. NFS suffers from the same problem, but
because its 'device' is stateless and therefore always ready, the
problem isn't as noticeable.
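
A timeout-honouring wait for ZFS would therefore have to poll for the
pool by name rather than for a device node. A minimal sketch of the
shape such a loop might take; zfs_root_pool_present() is invented for
illustration (it stands in for whatever spa-level lookup would answer
"is this pool importable yet?") and is not a function in the tree:

    /*
     * Hypothetical: retry until the named root pool becomes
     * reachable or vfs.mountroot.timeout expires.  The probe
     * helper below does not exist; it marks where a real
     * implementation would ask ZFS about the pool.
     */
    static int
    wait_for_zfs_pool(const char *pool)
    {
            int delay, timeout;

            delay = hz / 10;
            timeout = root_mount_timeout * hz;
            while (timeout > 0 && !zfs_root_pool_present(pool)) {
                    pause("zfsroot", delay);
                    timeout -= delay;
            }
            return (zfs_root_pool_present(pool) ? 0 : ENODEV);
    }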

Warner


