Date: Sun, 28 Jun 2009 13:54:18 -0400
From: Nathanael Hoyle <nhoyle@hoyletech.com>
To: Dan Naumov <dan.naumov@gmail.com>
Cc: freebsd-fs@freebsd.org
Subject: Re: read/write benchmarking: UFS2 vs ZFS vs EXT3 vs ZFS RAIDZ vs Linux MDRAID
Message-ID: <4A47AE4A.6090705@hoyletech.com>
In-Reply-To: <cf9b1ee00906280402g40dcd4b2p81dbf18612495d02@mail.gmail.com>
References: <cf9b1ee00906261636m5d09966ag6d7e1b7557ada709@mail.gmail.com> <4A4725FA.80505@modulus.org> <cf9b1ee00906280330s1f500266xdcbfb1462deda7f8@mail.gmail.com> <4A4747A0.6040902@modulus.org> <cf9b1ee00906280402g40dcd4b2p81dbf18612495d02@mail.gmail.com>
<entire previous conversation snipped, in part due to top-posting; removed geom from CC since this post doesn't reference it>

The clear distinction between the two sets of performance tests you two have run is that Dan's are highly random, short I/Os, while Andrew's are large sequential transfers.

Large sequential transfers necessarily engage all of the disks in the pool regardless of the parity strategy, so the implied penalty of ZFS reading every drive for each block is mostly theoretical there, and RAIDZ actually performs much as RAID 5 typically would.

In the case of Dan's highly random, short I/Os, each read itself is trivial, so the overhead of spinning/seeking all the disks to reassemble the full block and validate its checksum is inordinately high.

The implication of these two benchmarks is clear as well: ZFS RAIDZ can be an excellent choice for large storage capacity with reasonable performance on large sequential workloads, but should be avoided where many small random transfers will be occurring.

-Nathanael
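P.S. The spindle arithmetic above can be sketched in a few lines of Python. This is only a back-of-envelope model under assumed numbers (the per-disk IOPS figure and disk count below are hypothetical, not taken from either benchmark): because a RAIDZ block spans a full stripe, every small random read engages all the spindles, so the pool delivers roughly single-disk random-read IOPS, whereas a classic RAID 5 array can serve independent small reads from different disks concurrently.

```python
# Back-of-envelope random-read model for an N-disk array.
# Assumptions (hypothetical): each spindle delivers PER_DISK_IOPS
# small random reads per second; service times are seek-dominated.

PER_DISK_IOPS = 100.0  # assumed figure for a 7200 RPM disk
N_DISKS = 5            # assumed pool size

def raidz_random_read_iops(n_disks: int, per_disk_iops: float) -> float:
    # Every block occupies a full (dynamic-width) stripe, so one small
    # read seeks all data disks at once: throughput is gated by a
    # single disk's random-read rate, regardless of n_disks.
    return per_disk_iops

def raid5_random_read_iops(n_disks: int, per_disk_iops: float) -> float:
    # Each small read normally touches only the one disk holding that
    # chunk (parity is read only on rebuild/error), so independent
    # reads spread across the array and scale with spindle count.
    return n_disks * per_disk_iops

if __name__ == "__main__":
    print("RAIDZ:", raidz_random_read_iops(N_DISKS, PER_DISK_IOPS))  # ~100 IOPS
    print("RAID5:", raid5_random_read_iops(N_DISKS, PER_DISK_IOPS))  # ~500 IOPS
```

For large sequential transfers the distinction disappears, since both layouts stream from all data spindles at once, which is why the two sets of benchmarks point in opposite directions.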