Date:      Thu, 21 Jul 2011 20:25:35 +0200
From:      Martin Matuska <mm@FreeBSD.org>
To:        Ivan Voras <ivoras@freebsd.org>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: ZFS and large directories - caveat report
Message-ID:  <4E286F1F.6010502@FreeBSD.org>
In-Reply-To: <j09hk8$svj$1@dough.gmane.org>
References:  <j09hk8$svj$1@dough.gmane.org>

Quoting:
... The default record size ZFS utilizes is 128K, which is good for many
storage servers that will harbor larger files. However, when dealing
with many files that are only a matter of tens of kilobytes, or even
bytes, considerable slowdown will result. ZFS can easily alter the
record size of the data to be written through the use of attributes.
These attributes can be set at any time through the use of the "zfs set"
command. To set the record size attribute perform "zfs set
recordsize=32K pool/share". This will set the recordsize to 32K on share
"share" within pool "pool". This type of functionality can even be
implemented on nested shares for even more flexibility. ...

Read more:
http://www.articlesbase.com/information-technology-articles/improving-file-system-performance-utilizing-dynamic-record-sizes-in-zfs-4565092.html#ixzz1SlWZ7BM5
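
For completeness, this is roughly what the quoted article describes. A
sketch only: "pool/share" is the placeholder dataset name taken from the
quote, and keep in mind that recordsize only applies to files written
after the change; existing files keep the record size they were created
with.

# check the current record size of the dataset
zfs get recordsize pool/share
# lower it to 32K for datasets dominated by small files
zfs set recordsize=32K pool/share
# child datasets (the article's "nested shares") inherit the new value
# unless they override it themselves
zfs get -r recordsize pool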



On 21. 7. 2011 17:45, Ivan Voras wrote:
> I'm writing this mostly for future reference / archiving and also if
> someone has an idea on how to improve the situation.
>
> A web server I maintain was hit by DoS, which has caused more than 4
> million PHP session files to be created. The session files are sharded
> in 32 directories in a single level - which is normally more than
> enough for this web server as the number of users is only a couple of
> thousand. With the DoS, the number of files per shard directory rose
> to about 130,000.
>
> The problem is: ZFS has proven horribly inefficient with such large
> directories. I have other, more loaded servers with similarly bad /
> large directories on UFS where the problem is not nearly as serious as
> here (probably due to the large dirhash). On this system, any
> operation that touches even just the parent of these 32 shards (e.g.
> "ls") takes seconds, and a simple "find | wc -l" on one of the shards
> had not finished after 30 minutes, at which point I stopped it. Another
> symptom is that SIGINT-ing such a find process takes 10-15 seconds to
> complete (sic! this likely means the kernel operation cannot be
> interrupted for that long).
>
> This wouldn't be a problem by itself, but operations on such
> directories eat IOPS - clearly visible with the "find" test case,
> making the rest of the services on the server suffer as collateral
> damage. Apparently there is a huge amount of seeking being done, even
> though I would think that for read operations all the data would be
> cached - and somehow the seeking from this operation takes priority /
> livelocks other operations on the same ZFS pool.
>
> This is on a fresh 8-STABLE AMD64, pool version 28 and zfs version 5.
>
> Is there an equivalent of the UFS dirhash memory setting for ZFS? (i.e.
> the size of the metadata cache)
>
> _______________________________________________
> freebsd-fs@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-fs
> To unsubscribe, send any mail to "freebsd-fs-unsubscribe@freebsd.org"
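
As for the dirhash question at the end: I do not know of a direct ZFS
equivalent, but the metadata portion of the ARC can be inspected and its
ceiling raised. A rough sketch of what I would look at first (sysctl
names as I remember them from 8-STABLE, the 512 MB figure is only an
example, and vfs.zfs.arc_meta_limit may be settable only as a loader
tunable rather than at runtime on that branch):

# how much of the ARC currently holds metadata, and the configured ceiling
sysctl kstat.zfs.misc.arcstats.arc_meta_used
sysctl kstat.zfs.misc.arcstats.arc_meta_limit
# for comparison, the UFS dirhash knobs on the UFS machines
sysctl vfs.ufs.dirhash_maxmem vfs.ufs.dirhash_mem
# raise the ARC metadata ceiling to 512 MB (value in bytes),
# e.g. via /boot/loader.conf:  vfs.zfs.arc_meta_limit="536870912"
sysctl vfs.zfs.arc_meta_limit=536870912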


-- 
Martin Matuska
FreeBSD committer
http://blog.vx.sk



