From: krad <kraduk@gmail.com>
To: Matthew Seaman
Cc: freebsd-questions@freebsd.org
Date: Tue, 22 Nov 2011 09:52:41 +0000
Subject: Re: Setting up ZFS - Filesystem Properties and Installing on Root

> It seems to me that you would only need disk 1 to have boot, swap, and
> zfs, and the other 3 disks only have one partition (using the entire
> drive) for zfs's pool.

As others have mentioned, there is the redundancy issue, but you will also
never see the benefit, as the zfs vdev (like any other RAID system) is
sized by the smallest unit in the group. I.e. if you have 4 x 1 TB drives,
and you end up with 3 x 1 TB slices plus 950 GB available on your boot
drive, then all the storage you will get is 4 x 950 GB minus the parity
data. Therefore make all your drive layouts identical and mirror any boot
partitions across them all, or just two, and use the other two for swap,
or some combination of the two.

Another way to do it is to boot off a USB stick, although you should be
able to boot off a native raidz these days without too much hassle. If you
do run into issues with booting off zfs, try these recompiled boot blocks,
as I never have issues with them:

http://people.freebsd.org/~pjd/zfsboot/

If you are using 4k-sector disks, which there is a fairly good chance you
are, make sure you create the pool with ashift=12 using the gnop trick.
Otherwise you may experience bad disk performance.

http://www.leidinger.net/blog/2011/05/03/another-root-on-zfs-howto-optimized-for-4k-sector-drives/

With regards to dedup: unless you have bucket loads of RAM (32+ GB) and/or
an SSD dedicated to L2ARC, stay away from it. You will almost certainly
find that very quickly the DDT won't fit into RAM, and when that happens
the pool takes a serious performance dive, due to every write incurring
many, many reads to retrieve the DDT information. It also may not be worth
it with your dataset. To test what you might achieve, do a zdb -S on the
pool to see your expected dedup ratio.

In terms of disk layout this is fairly arbitrary and you have a lot of
choice.
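The gnop trick works roughly like this (a sketch only, not run verbatim:
the gpt labels and pool layout below are placeholders, and the linked
howto has the full walkthrough). gnop inserts a pass-through provider that
advertises 4096-byte sectors, zpool create picks up ashift=12 from it, and
after an export/import the gnop layer can be dropped, since ashift is
recorded permanently in the vdev:

# create a 4k-sector gnop provider on top of one real partition
# (label names are examples; only one device per vdev needs this,
# as zfs uses the largest logical sector size it sees in the vdev)
gnop create -S 4096 /dev/gpt/disk0

# build the pool against the .nop device so it is created with ashift=12
zpool create system-4k raidz /dev/gpt/disk0.nop \
    /dev/gpt/disk1 /dev/gpt/disk2 /dev/gpt/disk3

# export, remove the gnop layer, re-import; ashift stays at 12
zpool export system-4k
gnop destroy /dev/gpt/disk0.nop
zpool import system-4k

# verify
zdb system-4k | grep ashift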
This is what I use, and I loosely based it on OpenSolaris:

system-4k/be                          26.6G   207G   252K  /system-4k/be
system-4k/be/root20110930             1.73G   207G  1.31G  legacy
system-4k/be/root20111011             2.03G   207G  1.69G  legacy
system-4k/be/root20111023             1.98G   207G  1.68G  /system-4k/be/root20111023
system-4k/be/root20111028             2.00G   207G  1.68G  /system-4k/be/root20111028
system-4k/be/root20111112             2.08G   207G  1.76G  /system-4k/be/root20111112
system-4k/be/tmp                       360K   209G   360K  /tmp
system-4k/be/usr-local                3.30G   207G  3.30G  /usr/local/
system-4k/be/usr-obj                   728M   207G   728M  /usr/obj
system-4k/be/usr-ports                2.05G   207G  1.51G  /usr/ports
system-4k/be/usr-ports/distfiles       547M   207G   547M  /usr/ports/distfiles
system-4k/be/usr-src                   705M   207G   705M  /usr/src
system-4k/be/var                      2.04G   213G   816M  /var
system-4k/be/var/log                  1.21G   213G  1.21G  /var/log
system-4k/be/var/mysql                34.0M   213G  34.0M  /var/db/mysql

Every time I do a make installworld and installkernel I create a new root
fs. This way I can easily flip back and forth between different OS builds
if I want to. I use this simple script to set it up for me. It's not
perfect, but it works well enough:

$ cat /usr/local/scripts/install_world
#!/usr/local/bin/bash

if [ $UID != 0 ] ; then
    echo "you're not root !!"
    exit 1
fi

date=`date '+%Y%m%d'`
oroot=`grep "vfs.root.mountfrom=\"zfs:system-4k/" /boot/loader.conf | \
    sed -e "s#^.*\"zfs:system-4k/be/##" -e "s#\"##"`
nroot="root$date"
snap="autoup-$RANDOM"
zpool=system-4k

export DESTDIR=/$zpool/be/$nroot

if [ "$oroot" = "$nroot" ] ; then
    echo "i cant update twice in one day"
    exit 1
fi

echo building in $zpool/be/$nroot

# clone the current root into the new boot environment, install the new
# kernel and world into it, merge config changes, then repoint its
# loader.conf and reinstall the boot blocks
zfs snapshot $zpool/be/$oroot@$snap &&
zfs send $zpool/be/$oroot@$snap | mbuffer -m 500M | \
    zfs receive -vv $zpool/be/$nroot &&
cd /usr/src &&
make installkernel &&
mount_nullfs /var $DESTDIR/var &&
mergemaster -p -D $DESTDIR &&
make installworld &&
mergemaster -D $DESTDIR &&
sed -i -e "s#$zpool/be/$oroot#$zpool/be/$nroot#" $DESTDIR/boot/loader.conf &&
echo "Installing boot records.." &&
zpool status system-4k | grep -A 2 mirror | grep ad | sed -e "s/p[0-9]//" | \
    while read a b; do
        gpart bootcode -b /zfsboot/pmbr -p /zfsboot/gptzfsboot -i 1 $a
    done &&
cp -v /zfsboot/zfsloader $DESTDIR/boot/. &&
echo -en "\n\nNow run these two commands to make the changes live, and reboot:

zfs set mountpoint=legacy $zpool/be/$nroot
zpool set bootfs=$zpool/be/$nroot $zpool\n\n"
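The sed step that repoints vfs.root.mountfrom at the new boot environment
is the part worth trying in isolation before trusting it with the real
/boot/loader.conf. A minimal sketch against a throwaway copy (the BE names
are taken from the listing above; the scratch path is just an example):

```shell
# work on a scratch copy, never the live /boot/loader.conf
demo=/tmp/loader.conf.demo
printf 'vfs.root.mountfrom="zfs:system-4k/be/root20111112"\n' > "$demo"

zpool=system-4k
oroot=root20111112
nroot=root20111122   # e.g. root$(date '+%Y%m%d')

# the same substitution the script performs on the new BE's loader.conf
sed -i -e "s#$zpool/be/$oroot#$zpool/be/$nroot#" "$demo"

cat "$demo"
# vfs.root.mountfrom="zfs:system-4k/be/root20111122"
```

Note that `sed -i -e` behaves slightly differently on FreeBSD's sed (the
word after -i is taken as a backup suffix, so this leaves a "-e" backup
file behind); it works, but it is worth knowing why the stray file appears.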