Date:      Sun, 11 Jul 2010 15:28:18 -0700
From:      nukunuku@sbcglobal.net (Richard Lee)
To:        Jeremy Chadwick <freebsd@jdc.parodius.com>
Cc:        freebsd-stable@freebsd.org, Richard Lee <ricky@csua.berkeley.edu>
Subject:   Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).
Message-ID:  <20100711222818.GA37207@catsspat.iriomote>
In-Reply-To: <20100711214546.GA81873@icarus.home.lan>
References:  <20100711182511.GA21063@soda.CSUA.Berkeley.EDU> <20100711204757.GA81084@icarus.home.lan> <20100711211213.GA36377@catsspat.iriomote> <20100711214546.GA81873@icarus.home.lan>

On Sun, Jul 11, 2010 at 02:45:46PM -0700, Jeremy Chadwick wrote:
> On Sun, Jul 11, 2010 at 02:12:13PM -0700, Richard Lee wrote:
> > On Sun, Jul 11, 2010 at 01:47:57PM -0700, Jeremy Chadwick wrote:
> > > On Sun, Jul 11, 2010 at 11:25:12AM -0700, Richard Lee wrote:
> > > > This is on clean FreeBSD 8.1 RC2, amd64, with 4GB memory.
> > > > 
> > > > The closest I found by Googling was this:
> > > > http://forums.freebsd.org/showthread.php?t=9935
> > > > 
> > > > And it talks about all kinds of little tweaks, but in the end, the
> > > > only thing that actually works is the stupid 1-line perl code that
> > > > forces the kernel to free the memory allocated to (non-zfs) disk
> > > > cache, which is the "Inact"ive memory in "top."
> > > > 
> > > > I have a 4-disk raidz pool, but that's unlikely to matter.
> > > > 
> > > > Try to copy large files from non-zfs disk to zfs disk.  FreeBSD will
> > > > cache the data read from non-zfs disk in memory, and free memory will
> > > > go down.  This is as expected, obviously.
> > > > 
> > > > Once there's very little free memory, one would expect whatever is
> > > > more important to kick out the cached data (Inact) and make memory
> > > > available.
> > > > 
> > > > But when almost all of the memory is taken by the disk cache (of the
> > > > non-zfs file system), the ZFS disks start thrashing like mad and the
> > > > write throughput drops to single-digit MB/s.
> > > > 
> > > > I believe it should be extremely easy to duplicate.  Just plug in a
> > > > big USB drive formatted in UFS (msdosfs will likely do the same), and
> > > > copy large files from that USB drive to zfs pool.
> > > > 
> > > > Right after clean boot, gstat will show something like 20+MB/s
> > > > movement from USB device (da*), and occasional bursts of activity on
> > > > zpool devices at very high rate.  Once free memory is exhausted, zpool
> > > > devices will change to constant low-speed activity, with disks
> > > > thrashing constantly.
> > > > 
> > > > I tried enabling/disabling prefetch, messing with vnode counts,
> > > > vfs.zfs.vdev.min/max_pending, etc.  The only thing that works is that
> > > > stupid perl 1-liner (perl -e '$x="x"x1500000000'), which returns the
> > > > activity to that seen right after a clean boot.  It doesn't last very
> > > > long, though, as the disk cache again consumes all the memory.
> > > > 
> > > > Copying files between zfs devices doesn't seem to affect anything.
> > > > 
> > > > I understand the zfs subsystem has its own memory/cache management.
> > > > Can a zfs expert please comment on this?
> > > > 
> > > > And is there a way to force the kernel to not cache non-zfs disk data?
> > > 
> > > I believe you may be describing two separate issues:
> > > 
> > > 1) ZFS using a lot of memory but not freeing it as you expect
> > > 2) Lack of disk I/O scheduler
> > > 
> > > For (1), try this in /boot/loader.conf and reboot:
> > > 
> > > # Disable UMA (uma(9)) for ZFS; amd64 was moved to exclusively use UMA
> > > # on 2010/05/24.
> > > # http://lists.freebsd.org/pipermail/freebsd-stable/2010-June/057162.html
> > > vfs.zfs.zio.use_uma="0"
> > > 
> > > For (2), you may try gsched_rr:
> > > 
> > > http://svnweb.freebsd.org/viewvc/base/releng/8.1/sys/geom/sched/README?view=markup
> > > 
> > > -- 
> > > | Jeremy Chadwick                                   jdc@parodius.com |
> > > | Parodius Networking                       http://www.parodius.com/ |
> > > | UNIX Systems Administrator                  Mountain View, CA, USA |
> > > | Making life hard for others since 1977.              PGP: 4BD6C0CB |
> > 
> > vfs.zfs.zio.use_uma is already 0.  It looks to be the default, as I never
> > touched it.
> 
> Okay, just checking, because the default did change at one point, as the
> link in my /boot/loader.conf denotes.  Here's further confirmation (same
> thread), the first confirming on i386, the second confirming on amd64:
> 
> http://lists.freebsd.org/pipermail/freebsd-stable/2010-June/057168.html
> http://lists.freebsd.org/pipermail/freebsd-stable/2010-June/057239.html
> 
> > And in my case, Wired memory is stable at around 1GB.  It's
> > the Inact memory that takes off, but only if reading from non-zfs file
> > system.  Without other file systems, I can keep moving files around and
> > see no adverse slowdown.  I can also scp huge files from another system
> > into the zfs machine, and it doesn't affect memory usage (as reported by
> > top), nor does it affect performance.
> 
> Let me get this straight:
> 
> The system has ZFS enabled (kernel module loaded), with a 4-disk raidz1
> pool defined and used in the past (Wired being @ 1GB, due to ARC).  The
> same system also has UFS2 filesystems.  The ZFS pool vdevs consist of
> their own dedicated disks, and the UFS2 filesystems also have their own
> disk (which appears to be USB-based).

Yes, correct.

I have:
ad4 (an old 200GB SATA drive, UFS2, the main system drive)
ad8, ad10, ad12, ad14 (1TB SATA drives), part of the raidz1 pool and nothing else
da0, an external 1TB USB disk (though I don't think the problem is USB-specific)

Status looks like this:
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        uchuu       ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad8     ONLINE       0     0     0
            ad10    ONLINE       0     0     0
            ad12    ONLINE       0     0     0
            ad14    ONLINE       0     0     0

errors: No known data errors

> When any sort of read I/O is done on the UFS2 filesystems, Inact
> skyrockets, and as a result this impacts performance of ZFS.
>
> If this is correct: can you remove USB from the picture and confirm the
> problem still happens?  This is the first I've heard of the UFS caching
> mechanism "spiraling out of control".

To rule out USB involvement, I did the following.

Without any USB drive attached at all, I copied a large 7GB file from the
zfs pool to the system drive (internal ad4, UFS2).  This alone caused Inact
memory to top out, since the kernel caches whatever is read from or written
to the normal (UFS) file system.  Despite Inact topping out, I didn't notice
any slowdown while copying *from* zfs to the UFS drive (ad4), but I'm not
100% sure; it certainly wasn't obvious whether there was any effect.  Maybe
zfs reads aren't as badly affected.
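
For anyone who wants to repeat this, the test was essentially just a cp
plus watching gstat and top; something like the following (paths are made
up for illustration -- assume the pool is mounted at /uchuu and ad4's file
system at /mnt/ad4):

    # zfs -> UFS2: read a large file from the pool, write it to ad4
    cp /uchuu/bigfile.img /mnt/ad4/bigfile.img

    # in other terminals: per-disk throughput, and processes sorted by
    # resident size (watch the Inact and Wired figures in the header)
    gstat -I 1s
    top -o res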

Now, I copied that large file from ad4 back to the zpool (to somewhere
other than where the original file lives, of course), and this *was*
noticeably affected.  It started out the same way: ad4 reading near its
maximum platter speed (40-50MB/s), and the zfs pool doing occasional
bursts of writes at higher bandwidth.  That didn't last very long, though,
possibly because memory was already fully consumed (or close to it).  The
ad4 read then slowed to below 20MB/s, and the zfs writes became constant
and slower too, instead of the quick, bursty write behavior.

Note that I was watching all of this with gstat.
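
Besides gstat, the memory side can be checked with plain sysctls; roughly
something like this (these are page counts, so multiply by
vm.stats.vm.v_page_size for bytes; the arcstats one is the ARC size that
shows up under Wired):

    # inactive and free page counts
    sysctl vm.stats.vm.v_inactive_count vm.stats.vm.v_free_count

    # current ZFS ARC size in bytes
    sysctl kstat.zfs.misc.arcstats.size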

It wasn't as slow as USB drive -> zfs, but that may just be due to USB
overhead.

While this was happening, I ran that perl code to force the kernel to give
up some memory, and it went back to speedy behavior, again until the
UFS caching took all the memory.
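
For reference, the "workaround" is literally just that one-liner from the
forum thread; 1.5GB is simply a bit less than the RAM in this box, so
adjust to taste:

    # allocate ~1.5GB in a throwaway process and exit; the VM reclaims
    # Inact pages to satisfy the allocation
    perl -e '$x="x"x1500000000'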

It's as if the kernel doesn't know to throw away Inact pages in response
to its own internal activity (zfs activity), even though a user process
asking for memory makes it throw them out in an instant.  That's not a
qualified statement, of course; just thinking out loud.

--rich


