Date: Sun, 27 Jan 2013 16:34:20 -0500
From: grarpamp <grarpamp@gmail.com>
To: freebsd-fs@freebsd.org
Subject: Re: ZFS slackspace, grepping it for data
Message-ID: <CAD2Ti28WBAgJTu2480Cs88LGmTG%2BOFXe_XDL4GXxuNuTPrDVaA@mail.gmail.com>
In-Reply-To: <CAD2Ti2-1ROTxQXNA6FzWtcgnMoaAzvfcdh__zH7AVC7zCPsyzw@mail.gmail.com>
References: <CAD2Ti2-1ROTxQXNA6FzWtcgnMoaAzvfcdh__zH7AVC7zCPsyzw@mail.gmail.com>
> zdb -mmm pool_name

Ahh, I saw this later too, thanks. Seems I've got 425k free ranges
to scan among 25k free txgs. This will take a while, but it's still
a nice feature. I doubt it was meant for this purpose, though; more
likely for debugging ZFS structures and data issues.

> for on-disk offset add 0x400000

If I remember correctly. I could check for it with a string search
near the head of the data. Does that fs-to-disk offset stay the same
throughout the fs?

The minimum range size appears to be 4 KiB (245k ranges' worth), with
another 75k at 8 KiB and 100k more on up to 32 KiB. So I'm not sure
yet whether using zdb to collect the slack will perform any worse
than supplying the list to dd, or even writing some C to avoid the
shell overhead and, further, to read the disk directly.

I occasionally get failed assertions and core dumps with various zdb
operations. Is there interest in ticketing them?

Assertion failed: (object_count == usedobjs (0x0 == 0x1e33ec)), file
/re8/src/cddl/usr.sbin/zdb/../../../cddl/contrib/opensolaris/cmd/zdb/zdb.c,
line 1649.
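For the "supply the list to dd" route, something like the sketch below
is what I had in mind. It assumes the free ranges have already been
parsed out of the zdb -mmm output into hex (offset, size) pairs, that
the 0x400000 label/boot-header shift is correct for the pool, and that
/dev/da0 stands in for the actual vdev device; all three are
assumptions to verify, not a tested recipe:

```shell
#!/bin/sh
# Hedged sketch: turn a free-range (fs_offset, size) pair into a dd
# command. Does NOT parse zdb -mmm output itself; feed it pre-parsed
# hex pairs. Emits commands rather than running them, so the offsets
# can be sanity-checked before touching the disk.

VDEV=${VDEV:-/dev/da0}   # placeholder vdev device, not from the thread
SHIFT=$((0x400000))      # fs-offset -> on-disk-offset shift (verify!)

# emit_dd <hex_fs_offset> <hex_size>: print one dd command that would
# read that slack range from the raw device.
emit_dd() {
    disk_off=$(( $1 + SHIFT ))   # POSIX arithmetic accepts 0x constants
    count=$(( $2 ))
    printf 'dd if=%s bs=1 skip=%s count=%s\n' "$VDEV" "$disk_off" "$count"
}

# Example: a 4 KiB free range at fs offset 0x1000
emit_dd 0x1000 0x1000
```

With 425k ranges, bs=1 per-byte copies would be painfully slow; a real
run would want to split each range into a large block size plus a
remainder, which is part of why a small C reader may win.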