Date:      Wed, 12 Oct 2011 12:31:16 -0500 (CDT)
From:      Larry Rosenman <ler@lerctr.org>
To:        Tom Evans <tevans.uk@googlemail.com>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: AF (4096 byte sector) drives: Can you mix/match in a ZFS pool?
Message-ID:  <alpine.BSF.2.00.1110121229570.17730@lrosenman.dyndns.org>
In-Reply-To: <CAFHbX1K5O+GeO-9LixbLi=V=77O5D1g93UC+QSN4-s0hEm0aDw@mail.gmail.com>
References:  <4E95AE08.7030105@lerctr.org> <CAFHbX1K5O+GeO-9LixbLi=V=77O5D1g93UC+QSN4-s0hEm0aDw@mail.gmail.com>

On Wed, 12 Oct 2011, Tom Evans wrote:

> On Wed, Oct 12, 2011 at 4:11 PM, Larry Rosenman <ler@lerctr.org> wrote:
>> I have a root on ZFS box with 6 drives, all 400G (except one 500G) in a
>> pool.
>>
>> I want to upgrade to 2T or 3T drives, but was wondering if you can mix/match
>> while doing the drive-by-drive replacement.
>>
>> This is on 9.0-BETA3 if that matters.
>>
>> Thanks!
>>
>
> Hi Larry
>
> I'm in a similar position. I have a 2 x 6 x 1.5TB raidz system,
> configured a while ago when I wasn't aware enough of 4k sector drives,
> and so ZFS is configured to use 512 byte sectors (ashift=9). All of
> the drives in it were 512 byte sector drives, until one of them
> failed.
>
> At that point, I couldn't lay my hands on a large capacity drive that
> still used 512 byte sectors, so I replaced it with a 4k sector drive,
> made sure it was aligned correctly, and hoped for the best. The
> performance sucks (500MB/s reads -> 150MB/s reads!), but it 'works' and
> all my data is safe.
>
> The solution is to make sure that all your vdevs, whether they are
> backed by disks that have 512 byte or 4k sectors, are created with 4k
> sectors (ashift=12). It won't negatively affect your older disks, and
> you won't end up in the position I am in, where I need to recreate the
> pool to fix the issue (and have 12TB of data with nowhere to put it!)
>
I wish I had asked this question BEFORE I made the box Root on ZFS on Saturday.
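
For the archives: from what I gather, forcing ashift=12 at pool creation
time on FreeBSD is usually done with the gnop(8) trick, roughly like this
(just a sketch, reusing my gpt labels for illustration; a root pool of
course needs the usual extra steps for bootcode and mountpoints on top):

     # gnop create -S 4096 /dev/gpt/disk0
     # zpool create zroot raidz gpt/disk0.nop gpt/disk1 gpt/disk2 \
           gpt/disk3 gpt/disk4 gpt/disk5
     # zpool export zroot
     # gnop destroy /dev/gpt/disk0.nop
     # zpool import zroot

Since ashift is picked per vdev from the largest sector size seen at
creation, one 4k .nop provider in the raidz should be enough to get
ashift=12 for the whole vdev.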

Here's what I have:


   pool: zroot
  state: ONLINE
  scan: scrub repaired 0 in 0h20m with 0 errors on Sat Oct  8 22:21:50 2011
config:

 	NAME           STATE     READ WRITE CKSUM
 	zroot          ONLINE       0     0     0
 	  raidz1-0     ONLINE       0     0     0
 	    gpt/disk0  ONLINE       0     0     0
 	    gpt/disk1  ONLINE       0     0     0
 	    gpt/disk2  ONLINE       0     0     0
 	    gpt/disk3  ONLINE       0     0     0
 	    gpt/disk4  ONLINE       0     0     0
 	    gpt/disk5  ONLINE       0     0     0

errors: No known data errors


zroot:
     version: 28
     name: 'zroot'
     state: 0
     txg: 185
     pool_guid: 6776217281607456243
     hostname: ''
     vdev_children: 1
     vdev_tree:
         type: 'root'
         id: 0
         guid: 6776217281607456243
         children[0]:
             type: 'raidz'
             id: 0
             guid: 1402298321185619698
             nparity: 1
             metaslab_array: 30
             metaslab_shift: 34
             ashift: 9
             asize: 2374730514432
             is_log: 0
             create_txg: 4
             children[0]:
                 type: 'disk'
                 id: 0
                 guid: 9076139076816521807
                 path: '/dev/gpt/disk0'
                 phys_path: '/dev/gpt/disk0'
                 whole_disk: 1
                 create_txg: 4
             children[1]:
                 type: 'disk'
                 id: 1
                 guid: 1302481463702775221
                 path: '/dev/gpt/disk1'
                 phys_path: '/dev/gpt/disk1'
                 whole_disk: 1
                 create_txg: 4
             children[2]:
                 type: 'disk'
                 id: 2
                 guid: 15500000621616879018
                 path: '/dev/gpt/disk2'
                 phys_path: '/dev/gpt/disk2'
                 whole_disk: 1
                 create_txg: 4
             children[3]:
                 type: 'disk'
                 id: 3
                 guid: 11011035160331724516
                 path: '/dev/gpt/disk3'
                 phys_path: '/dev/gpt/disk3'
                 whole_disk: 1
                 create_txg: 4
             children[4]:
                 type: 'disk'
                 id: 4
                 guid: 17522530679015716424
                 path: '/dev/gpt/disk4'
                 phys_path: '/dev/gpt/disk4'
                 whole_disk: 1
                 create_txg: 4
             children[5]:
                 type: 'disk'
                 id: 5
                 guid: 16647118440423800168
                 path: '/dev/gpt/disk5'
                 phys_path: '/dev/gpt/disk5'
                 whole_disk: 1
                 create_txg: 4

So, is there a way to change/fix this setup in place without having to copy
40+G of data somewhere else and back?
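
If the answer turns out to be "no, ashift is baked in at vdev creation and
the pool has to be rebuilt", I assume the shuffle would be a recursive
send/receive to a scratch pool and back, something like this (snapshot and
scratch pool names made up):

     # zfs snapshot -r zroot@migrate
     # zfs send -R zroot@migrate | zfs receive -F -d scratch
       ... recreate zroot with ashift=12 (gnop trick above) ...
     # zfs send -R scratch@migrate | zfs receive -F -d zroot

At least 40+G is a lot easier to park somewhere temporarily than your 12TB.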

Thanks for the reply!

-- 
Larry Rosenman                     http://www.lerctr.org/~ler
Phone: +1 512-248-2683                 E-Mail: ler@lerctr.org
US Mail: 430 Valona Loop, Round Rock, TX 78681-3893


