Date:      Thu, 12 Jan 2012 20:10:36 -0800
From:      Garrett Cooper <yanegomi@gmail.com>
To:        Stephen Montgomery-Smith <stephen@missouri.edu>
Cc:        Andre Goree <andre@drenet.info>, Doug Barton <dougb@freebsd.org>, freebsd-stable@freebsd.org, Rob Clark <rpclark@tds.net>
Subject:   Re: GENERIC make buildkernel error / fails - posix_fadvise
Message-ID:  <CAGH67wTTRA_CA52dHTtq7oDX4LOLj-E-osfTnEgaEUwfs+eYDA@mail.gmail.com>
In-Reply-To: <4F0FA3C8.2020608@missouri.edu>
References:  <20120111161110.4258969c.rpclark@tds.net> <CAN-pd=cPY=Eg1RintaBx6GAon3FsLm-X0h6yvSBxzq=EZ5ukbg@mail.gmail.com> <20120112200843.2a348d2f.rpclark@tds.net> <4F0F8E6F.8000909@FreeBSD.org> <CAGH67wTFy_2JAGOX=VmUx2KRmFcRrv8aaE3H4EkiqcOxmk09Hw@mail.gmail.com> <4F0FA3C8.2020608@missouri.edu>

On Thu, Jan 12, 2012 at 7:23 PM, Stephen Montgomery-Smith
<stephen@missouri.edu> wrote:
> On 01/12/2012 09:11 PM, Garrett Cooper wrote:
>
>> +1. And it's faster yet when you can run parallel copies of rm on
>> different portions of the directory tree (e.g. xargs, find [..] -exec)
>> as rm is O(n).
>
>
> I have always wondered about that!  I thought that the main bottleneck in
> "rm -r" might be deleting directories which are not in the disk cache, which
> then have to be copied from the disk.  Copying two different parts of the
> disk into cache - well it has to be done one at a time whether the jobs
> asking for the copy of the disk are going concurrently or consecutively.
>
> And perhaps two instances of "rm -r" acting on different parts of the hard
> drive will cause disk thrashing, so that they might even slow each other
> down.

Not really. The cost is dominated by the fact that rm(1) has to dive
into the kernel once per unlink(2) syscall. Per Prof. McKusick's
teaching and the way things are layered from libc down to disk, each
call traverses roughly: rm -> syscall -> top half of the kernel ->
VFS -> filesystem (in my case ZFS) -> GEOM -> SCSI layer -> storage
controller/disk driver. Because that path is long, the job gets super
expensive as n becomes large, so the best way to amortize the cost is
to run more instances in parallel, since there should be a relatively
low chance of lock contention (assuming you're not contending on
Giant, you don't overprovision your system, etc.). See the loop in
.../bin/rm/rm.c:rm_tree(char **) if you're curious where things
bottleneck.
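As a rough illustration of the per-file syscall cost described above (the path and file count here are made-up demo values, not anything from this thread), every file removed is one unlink(2) round-trip through that whole stack:

```shell
# Hypothetical demo: each file deleted below costs one unlink(2)
# round-trip through libc -> kernel -> VFS -> filesystem -> disk.
demo=/tmp/unlinkdemo
mkdir -p "$demo"
i=1
while [ "$i" -le 1000 ]; do
  : > "$demo/f$i"
  i=$((i + 1))
done

# Serial removal: ~1000 unlink(2) calls issued by a single process.
time rm -r "$demo"
```

The wall-clock time scales linearly with the number of entries, which is the O(n) behavior being discussed.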

> But this is all guess work on my part.
>
> If I am wrong, and "rm -r" does work faster when working in parallel on
> different parts,

Yes. Fewer modifications to directory entries -> less VFS locking
contention and less needless write-back to disk -> better.

> then why doesn't someone write the "rm" command to fork
> copies of itself that work on different parts of large trees?

Perhaps. The point is that my systems can do more work in parallel
with a larger number of rm's, which improves throughput. I learned
this the hard way when I started deleting a directory prepopulated
with pjd's fstest: the serial, O(n) approach took hours to prune even
a small fraction (<10% of 4 mil. or 40 mil. files) of the directory
entries. When I did the same work in parallel (I have 8 regular cores
and 8 SMT cores), the process took less than an hour to complete.
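A sketch of the parallel approach mentioned earlier in the thread (the directory layout and worker count below are illustrative assumptions, not the actual fstest setup):

```shell
# Hypothetical layout: /tmp/rmdemo/{a,b,c,d}, each holding many files.
mkdir -p /tmp/rmdemo
for d in a b c d; do
  mkdir -p "/tmp/rmdemo/$d"
  i=1
  while [ "$i" -le 200 ]; do
    : > "/tmp/rmdemo/$d/f$i"
    i=$((i + 1))
  done
done

# Fan the unlink(2) work out over 4 rm processes, one per subtree,
# instead of letting one rm walk the whole tree serially.
find /tmp/rmdemo -mindepth 1 -maxdepth 1 -type d -print0 \
  | xargs -0 -n 1 -P 4 rm -r

rmdir /tmp/rmdemo
```

Note that -P is a BSD/GNU xargs extension rather than POSIX, and the speedup depends on how much of the cost is CPU/lock time versus seek-bound I/O, which is exactly the question Stephen raises above.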

Thanks,
-Garrett
