Date:      Fri, 30 Sep 2016 15:36:06 -0400
From:      Mark Saad <nonesuch@longcount.org>
To:        dweimer@dweimer.net
Cc:        Priyadarshan <bsd@bontempi.net>, owner-freebsd-stable@freebsd.org,  FreeBSD Stable <freebsd-stable@freebsd.org>
Subject:   Re: File Name Too Long?
Message-ID:  <CAMXt9NZV_gMTLg1e2wf37i7Xk-urCB2mJmP-WMAQuUd1jkUmVA@mail.gmail.com>
In-Reply-To: <7cb903d1b035e6bc1d311c1c37f57fd7@dweimer.net>
References:  <974bc572bedda786fdc18a41085952c1@dweimer.net> <c522f5de02a7e101be4c7e1b1ed85b70@dweimer.net> <1475246346.1079471.741974313.29D6C2CB@webmail.messagingengine.com> <7cb903d1b035e6bc1d311c1c37f57fd7@dweimer.net>

On Fri, Sep 30, 2016 at 10:52 AM, Dean E. Weimer <dweimer@dweimer.net>
wrote:

> On 2016-09-30 9:39 am, Priyadarshan wrote:
>
>> On Fri, 30 Sep 2016, at 14:34, Dean E. Weimer wrote:
>>
>>> On 2016-09-29 9:32 am, Dean E. Weimer wrote:
>>> > I discovered, unfortunately by deleting a jail by accident, that my
>>> > backup process isn't working. At least it was only the operating
>>> > system part of the jail; I still have all the data, so I just need to
>>> > reinstall the operating system. While the ports are building, I
>>> > started to investigate the cause, because the backup logs reported
>>> > everything was fine.
>>> >
>>> > I have a custom pre-backup script I wrote that takes snapshots of my
>>> > ZFS datasets, and then mounts those under /mnt/backup with nullfs
>>> > mount points to the .zfs/snapshot/.. directories. I then back those up
>>> > rather than the live file system, which lets me stop some services
>>> > that don't restore from a running state correctly and restart them
>>> > right after the snapshot, so downtime is only a couple of minutes
>>> > instead of the full length of the backups.
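>>> >
>>> > In outline, the script does something like this (a minimal sketch;
>>> > the dataset, mountpoint, and snapshot names here are placeholders,
>>> > not my real ones):
>>> >
>>> >   #!/bin/sh
>>> >   # Snapshot the dataset, then nullfs-mount the snapshot's contents
>>> >   # read-only under the backup staging area.
>>> >   DATASET=zraid/jails/example           # placeholder dataset
>>> >   MNT=/jails/example                    # its mountpoint
>>> >   SNAP=bsnap                            # snapshot name
>>> >   zfs snapshot "${DATASET}@${SNAP}"
>>> >   mkdir -p "/mnt/backup${MNT}"
>>> >   mount -t nullfs -o ro "${MNT}/.zfs/snapshot/${SNAP}" "/mnt/backup${MNT}"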
>>> >
>>> > It appeared to be running perfectly, without errors, but apparently
>>> > the script isn't reporting some nullfs mount failures. So why are they
>>> > failing? It turns out mount thinks the file name is too long, yet the
>>> > mount(2) man page states this:
>>> >
>>> > [ENAMETOOLONG]     A component of a pathname exceeded 255
>>> >                    characters, or the entire length of a path name
>>> >                    exceeded 1023 characters.
>>> >
>>> > I can see that at some point under this I may reach that 1023 limit,
>>> > but which of the total 71 characters in this path is the problem?
>>> >
>>> > /jails/unifi/ROOT/.zfs/snapshot/11.0-RELEASE-r306379-2016.09.28--bsnap
>>> >
>>> > root@freebsd:/jails/unifi/ROOT/.zfs/snapshot # ls
>>> > ls: 11.0-RELEASE-r306379-2016.09.28--bsnap: File name too long
>>> >
>>> > I thought maybe it's a ZFS-specific error, and ran across this:
>>> > http://lists.freebsd.org/pipermail/freebsd-fs/2010-March/007964.html
>>> >
>>> > [..snip..]
>>> > From looking at the code, I think you're hitting this limit:
>>> >
>>> >       /*
>>> >        * Be ultra-paranoid about making sure the type and fspath
>>> >        * variables will fit in our mp buffers, including the
>>> >        * terminating NUL.
>>> >        */
>>> >       if (strlen(fstype) >= MFSNAMELEN || strlen(fspath) >= MNAMELEN)
>>> >               return (ENAMETOOLONG);
>>> >
>>> > in vfs_domount() or vfs_donmount().
>>> >
>>> > This is a FreeBSD limit caused by the statfs structure:
>>> >
>>> > /*
>>> >  * filesystem statistics
>>> >  */
>>> > [...]
>>> > #define MNAMELEN        88              /* size of on/from name bufs */
>>> > [...]
>>> > struct statfs {
>>> > [...]
>>> >       char    f_mntfromname[MNAMELEN];/* mounted filesystem */
>>> >       char    f_mntonname[MNAMELEN];  /* directory on which mounted */
>>> > };
>>> >
>>> > When you list the .zfs/snapshot/ directory (especially with the -l
>>> > option) ZFS mounts snapshots on lookup, and it is this mount that fails.
>>> > [..snip..]
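>>> >
>>> > If that's the limit, then any snapshot whose full dataset@snapshot
>>> > name won't fit in f_mntfromname can be listed with something like
>>> > this (a sketch; the 88 is MNAMELEN from the excerpt above):
>>> >
>>> >   # Print the length and name of every snapshot too long to auto-mount.
>>> >   zfs list -H -t snapshot -o name | \
>>> >       awk 'length($0) >= 88 { print length($0), $0 }'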
>>> >
>>> > I can seemingly do anything else with the snapshot (clone, send,
>>> > receive); it's just that I am unable to access the files on it through
>>> > .zfs/snapshot/..
>>> >
>>> > I am trying to work out what the limit actually is, because this one
>>> > works:
>>> >
>>> > /jails/webmail/usr-local-subversion/.zfs/snapshot/usr-local-subversion--bsnap
>>> >
>>> > It's longer in total length than most of the ones that are failing:
>>> >
>>> > /jails/unifi/ROOT/.zfs/snapshot/11.0-RELEASE-r306379-2016.09.28--bsnap
>>> >
>>> > So it appears that it's in the name, and not the mount point.
>>> >
>>> > This one works as well; it's my ZFS boot environment on the main
>>> > system:
>>> > zraid/ROOT/11.0-RELEASE-r306379-2016.09.28
>>> > snapshot is /.zfs/snapshot/11.0-RELEASE-r306379-2016.09.28--bsnap
>>> >
>>> > So it's not just the last component of the ZFS dataset name, which is
>>> > in this case the same.
>>> >
>>> > I am trying to wrap my head around this and find where the limit is so
>>> > I can adjust my naming conventions and actually get backups of all of
>>> > my data. It turns out none of my jail operating system paths are being
>>> > backed up; fortunately, at least all of the data file systems for the
>>> > jails are.
>>>
>>> I found a solution: I was naming the snapshots with the dataset name,
>>> which I think was causing the issue.
>>>
>>> The following didn't seem too long to be an issue:
>>> /jails/unifi/ROOT/.zfs/snapshot/11.0-RELEASE-r306379-2016.09.28--bsnap
>>>
>>> But apparently the full snapshot name was
>>> zraid/jails/unifi/11.0-RELEASE-r306379-2016.09.28@11.0-RELEASE-r306379-2016.09.28--bsnap
>>>
>>> Still not sure how it adds up to too long; both full paths together
>>> aren't over 255, at 160, but apparently something else is hit in there.
>>> I was able to easily modify my backup script to not include the last
>>> part of the dataset in the snapshot name and simply use -bsnap- as the
>>> name. It appears to avoid all the issues, and my backups from last
>>> night include all the files.
>>>
>>> /jails/unifi/ROOT/.zfs/snapshot/-bsnap-
>>> zraid/jails/unifi/11.0-RELEASE-r306379-2016.09.28@-bsnap-
>>>
>>> The total path now only adds up to 98. I haven't done any testing yet
>>> to find out where the limit is hit; the longest combination of these I
>>> had last night would have added up to 135, and that worked.
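>>>
>>> A quick way to compare a snapshot's effective mount-from name against
>>> that limit (a sketch; the 88 is MNAMELEN, hard-coded here):
>>>
>>>   old='zraid/jails/unifi/11.0-RELEASE-r306379-2016.09.28@11.0-RELEASE-r306379-2016.09.28--bsnap'
>>>   new='zraid/jails/unifi/11.0-RELEASE-r306379-2016.09.28@-bsnap-'
>>>   for name in "$old" "$new"; do
>>>       # strlen(name) >= MNAMELEN fails the kernel check quoted earlier
>>>       if [ ${#name} -ge 88 ]; then
>>>           echo "too long (${#name} chars): $name"
>>>       else
>>>           echo "ok (${#name} chars): $name"
>>>       fi
>>>   done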
>>>
>>> --
>>> Thanks,
>>>     Dean E. Weimer
>>>     http://www.dweimer.net/
>>>
>>
>>
>> This may be related:
>>
>> http://iocage.readthedocs.io/en/latest/known-issues.html#cha
>> racter-mount-path-limitation
>>
>> Priyadarshan
>>
>
>
Having run into this before with NFS exports, and pulling my hair out, I
came across this fix. I know it's from a long time ago, but I am willing
to put up a bounty to get this bumped to 512:

http://www.secnetix.de/olli/FreeBSD/mnamelen.hawk
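
For reference, a sketch of what such a bump involves: raise the MNAMELEN
definition quoted above in sys/mount.h and rebuild. Since that changes
the size of struct statfs it breaks the ABI, so it needs a full world
rebuild, not just a kernel (simplified; see the Handbook for the full
update procedure):

  # after raising MNAMELEN in /usr/src/sys/sys/mount.h
  cd /usr/src
  make buildworld buildkernel
  make installkernel
  # reboot, then:
  make installworld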


> Thanks, that's probably it: the original snapshot name with its full
> dataset path added up to 89. With that in mind, I can edit my script to
> throw a warning if this limit is hit, so that my backup logs will let me
> know if a dataset gets missed. I need to edit it anyway so that a warning
> gets logged on the mount failure that was already occurring. It looks like
> I suppressed the errors so that the script returned success and didn't
> make the Bacula backup job fail (meaning the data that did get mounted
> would still be backed up), but forgot to write the error to the log.
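>
> In the script that will look something like this (a sketch; the variable
> names here are mine, not the script's actual ones):
>
>   # Warn before snapshotting if dataset@snapshot won't fit in MNAMELEN.
>   if [ $(( ${#DATASET} + 1 + ${#SNAP} )) -ge 88 ]; then
>       echo "WARNING: ${DATASET}@${SNAP} hits MNAMELEN, will not mount" >> "$LOGFILE"
>   fi
>   # And log nullfs mount failures instead of swallowing them.
>   mount -t nullfs -o ro "$SRC" "$DST" || \
>       echo "WARNING: nullfs mount of $SRC failed" >> "$LOGFILE"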
>
> --
> Thanks,
>    Dean E. Weimer
>    http://www.dweimer.net/
> _______________________________________________
> freebsd-stable@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-stable
> To unsubscribe, send any mail to "freebsd-stable-unsubscribe@freebsd.org"
>



-- 
mark saad | nonesuch@longcount.org


