Date: Wed, 12 Oct 2011 12:31:16 -0500 (CDT)
From: Larry Rosenman <ler@lerctr.org>
To: Tom Evans
Cc: freebsd-fs@freebsd.org
Subject: Re: AF (4096 byte sector) drives: Can you mix/match in a ZFS pool?

On Wed, 12 Oct 2011, Tom Evans wrote:

> On Wed, Oct 12, 2011 at 4:11 PM, Larry Rosenman wrote:
>> I have a root-on-ZFS box with 6 drives, all 400G (except one 500G), in a
>> pool.
>>
>> I want to upgrade to 2T or 3T drives, but was wondering if you can
>> mix/match while doing the drive-by-drive replacement.
>>
>> This is on 9.0-BETA3, if that matters.
>>
>> Thanks!
>>
>
> Hi Larry
>
> I'm in a similar position. I have a 2 x 6 x 1.5TB raidz system,
> configured a while ago when I wasn't aware enough of 4k-sector drives,
> and so ZFS is configured to use 512-byte sectors (ashift=9). All of
> the drives in it were 512-byte-sector drives, until one of them
> failed.
>
> At that point, I couldn't lay my hands on a large-capacity drive that
> still used 512-byte sectors, so I replaced it with a 4k-sector drive,
> made sure it was aligned correctly, and hoped for the best. The
> performance sucks (500MB/s reads -> 150MB/s!), but it 'works':
> all my data is safe.
>
> The solution is to make sure that all your vdevs, whether they are
> backed by disks that have 512-byte or 4k sectors, are created with 4k
> sectors (ashift=12). It won't negatively affect your older disks, and
> you won't end up in the position I am in, where I need to recreate the
> pool to fix the issue (and have 12TB of data with nowhere to put it!).

I wish I had asked this question BEFORE I made the box root-on-ZFS on
Saturday.
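For next time, as I understand Tom's advice, the way to force ashift=12
at creation time on FreeBSD is the gnop(8) trick: hand zpool(8) one
member that claims 4k sectors, and it picks the larger sector size for
the whole vdev. A rough, untested sketch using my labels (the exact zdb
invocation may vary by version):

     # overlay one provider with a fake 4k-sector device
     gnop create -S 4096 /dev/gpt/disk0

     # create the raidz through the .nop node; ZFS chooses ashift=12
     # because the largest member sector size is 4096
     zpool create zroot raidz gpt/disk0.nop gpt/disk1 gpt/disk2 \
         gpt/disk3 gpt/disk4 gpt/disk5

     # ashift is recorded in the pool labels, so the overlay can go away
     zpool export zroot
     gnop destroy /dev/gpt/disk0.nop
     zpool import zroot

     # verify -- should report: ashift: 12
     zdb -C zroot | grep ashift

The drive-by-drive swap itself should just be 'zpool replace zroot
gpt/diskN gpt/newdiskN' per member, and once the last small disk is
replaced the pool can grow (autoexpand is a pool property in v28).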
Here's what I have:

  pool: zroot
 state: ONLINE
  scan: scrub repaired 0 in 0h20m with 0 errors on Sat Oct  8 22:21:50 2011
config:

        NAME           STATE     READ WRITE CKSUM
        zroot          ONLINE       0     0     0
          raidz1-0     ONLINE       0     0     0
            gpt/disk0  ONLINE       0     0     0
            gpt/disk1  ONLINE       0     0     0
            gpt/disk2  ONLINE       0     0     0
            gpt/disk3  ONLINE       0     0     0
            gpt/disk4  ONLINE       0     0     0
            gpt/disk5  ONLINE       0     0     0

errors: No known data errors

zroot:
    version: 28
    name: 'zroot'
    state: 0
    txg: 185
    pool_guid: 6776217281607456243
    hostname: ''
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 6776217281607456243
        children[0]:
            type: 'raidz'
            id: 0
            guid: 1402298321185619698
            nparity: 1
            metaslab_array: 30
            metaslab_shift: 34
            ashift: 9
            asize: 2374730514432
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 9076139076816521807
                path: '/dev/gpt/disk0'
                phys_path: '/dev/gpt/disk0'
                whole_disk: 1
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 1302481463702775221
                path: '/dev/gpt/disk1'
                phys_path: '/dev/gpt/disk1'
                whole_disk: 1
                create_txg: 4
            children[2]:
                type: 'disk'
                id: 2
                guid: 15500000621616879018
                path: '/dev/gpt/disk2'
                phys_path: '/dev/gpt/disk2'
                whole_disk: 1
                create_txg: 4
            children[3]:
                type: 'disk'
                id: 3
                guid: 11011035160331724516
                path: '/dev/gpt/disk3'
                phys_path: '/dev/gpt/disk3'
                whole_disk: 1
                create_txg: 4
            children[4]:
                type: 'disk'
                id: 4
                guid: 17522530679015716424
                path: '/dev/gpt/disk4'
                phys_path: '/dev/gpt/disk4'
                whole_disk: 1
                create_txg: 4
            children[5]:
                type: 'disk'
                id: 5
                guid: 16647118440423800168
                path: '/dev/gpt/disk5'
                phys_path: '/dev/gpt/disk5'
                whole_disk: 1
                create_txg: 4

So, is there a way to change/fix/whatever this setup and not have to
copy 40+G of data?

Thanks for the reply!

-- 
Larry Rosenman                     http://www.lerctr.org/~ler
Phone: +1 512-248-2683                 E-Mail: ler@lerctr.org
US Mail: 430 Valona Loop, Round Rock, TX 78681-3893