From owner-freebsd-fs@FreeBSD.ORG Wed Jun 29 08:44:40 2011
From: Edho P Arief <edhoprima@gmail.com>
Date: Wed, 29 Jun 2011 15:44:19 +0700
To: freebsd-fs@freebsd.org
Subject: Re: zpool raidz2 missing space?

On Wed, Jun 29, 2011 at 2:10 PM, Edho P Arief wrote:
> My zpool seems to be missing ~500G of space. One of the disks was
> originally sized at around 1.65T, which probably caused it. I've since
> replaced the partition, so the pool should show the full 4*1.8T
> (~7.2T), but it still shows the old capacity (4*1.65T, ~6.6T).
>
> What should be done? I've tried an export/import cycle but the result
> is the same.
>

Sorry for the noise; it seems a reboot cycle solved it.
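For the record, the space accounting in the output below works out as
expected: raidz2 over four disks spends two disks' worth of space on
parity, so the 7.06T of raw pool space in zpool list leaves roughly half
usable, matching USED + AVAIL in zfs list (2.23T + 1.14T = 3.37T, the
remainder being allocation overhead).

Also, for anyone hitting the same thing: on a v28 pool like this one, the
expansion can probably be picked up without a reboot, since v28 has the
autoexpand pool property and zpool online accepts -e to grow a vdev in
place. I haven't re-tested it here, so treat this as a sketch; the gptid
below is just an example taken from the status output, not necessarily
the replaced device:

  # let the pool absorb grown vdevs automatically from now on
  zpool set autoexpand=on dpool

  # or expand one replaced device by hand
  zpool online -e dpool gptid/0dc1601d-9f95-11e0-9a98-0030678cf5c1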
[root@einhart ~]# zpool status
  pool: dpool
 state: ONLINE
 scan: scrub in progress since Wed Jun 29 15:35:22 2011
    24.4G scanned out of 4.61T at 55.4M/s, 24h6m to go
    0 repaired, 0.52% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        dpool                                           ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/fe13fc94-9bfe-11e0-bd6e-0030678cf5c1  ONLINE       0     0     0
            gptid/0dc1601d-9f95-11e0-9a98-0030678cf5c1  ONLINE       0     0     0
            gptid/1e76f2ad-9d5d-11e0-997b-0030678cf5c1  ONLINE       0     0     0
            gptid/8d23200a-9d5c-11e0-997b-0030678cf5c1  ONLINE       0     0     0

errors: No known data errors

[root@einhart ~]# zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
dpool  7.06T  4.61T  2.45T    65%  1.00x  ONLINE  -

[root@einhart ~]# glabel status
                                        Name  Status  Components
  gptid/94d650c8-a05d-11e0-b636-0030678cf5c1     N/A  ad4p1
  gptid/0dc1601d-9f95-11e0-9a98-0030678cf5c1     N/A  ad4p5
  gptid/1b214d9b-9f95-11e0-9a98-0030678cf5c1     N/A  ad4p9
  gptid/025527eb-9d5d-11e0-997b-0030678cf5c1     N/A  ad6p1
  gptid/1e76f2ad-9d5d-11e0-997b-0030678cf5c1     N/A  ad6p5
  gptid/2570c3bb-9d5d-11e0-997b-0030678cf5c1     N/A  ad6p9
  gptid/6b426779-9d5c-11e0-997b-0030678cf5c1     N/A  ad8p1
  gptid/8d23200a-9d5c-11e0-997b-0030678cf5c1     N/A  ad8p5
  gptid/99ee1556-9d5c-11e0-997b-0030678cf5c1     N/A  ad8p9
  gptid/31d15ce9-9bfe-11e0-bd6e-0030678cf5c1     N/A  ad10p1
  gptid/fe13fc94-9bfe-11e0-bd6e-0030678cf5c1     N/A  ad10p5
  gptid/128af023-9bff-11e0-bd6e-0030678cf5c1     N/A  ad10p9
                                   ufs/root0     N/A  mirror/gm0a
                                 label/swap0     N/A  stripe/gs0b
                                   ufs/home0     N/A  stripe/gs0d

[root@einhart ~]# zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
dpool                  2.23T  1.14T   174K  legacy
dpool/data             2.23T  1.14T  2.23T  legacy
dpool/data/documents    888M  1.14T   888M  legacy
dpool/jails             267M  1.14T   186K  legacy
dpool/jails/debian      267M  1.14T   267M  legacy
dpool/ports-distfiles  2.61G  1.14T  2.61G  /usr/ports/distfiles
dpool/ports-tmp        90.6M  1.14T  90.6M  /.ports-tmp
dpool/src.cvs           562M  1.14T   562M  /usr/src.cvs
dpool/srv              31.0M  1.14T  31.0M  legacy
dpool/usr.obj           221K  1.14T   209K  /usr/obj
dpool/usr.src          2.22G  1.14T  2.22G  /usr/src

[root@einhart ~]# df -h
Filesystem               Size    Used   Avail Capacity  Mounted on
/dev/ufs/root0           9.7G    6.5G    2.5G    72%    /
devfs                    1.0k    1.0k      0B   100%    /dev
/dev/ufs/home0            46G    755M     42G     2%    /usr/home
procfs                   4.0k    4.0k      0B   100%    /proc
linprocfs                4.0k    4.0k      0B   100%    /compat/linux/proc
dpool/data               3.4T    2.2T    1.1T    66%    /data
dpool/srv                1.1T     31M    1.1T     0%    /srv
dpool/data/documents     1.1T    888M    1.1T     0%    /data/documents
dpool/jails              1.1T    186k    1.1T     0%    /jails
dpool/jails/debian       1.1T    266M    1.1T     0%    /jails/debian
dpool/ports-tmp          1.1T     90M    1.1T     0%    /.ports-tmp
dpool/usr.obj            1.1T    209k    1.1T     0%    /usr/obj
dpool/ports-distfiles    1.1T    2.6G    1.1T     0%    /usr/ports/distfiles
dpool/usr.src            1.1T    2.2G    1.1T     0%    /usr/src
dpool/src.cvs            1.1T    562M    1.1T     0%    /usr/src.cvs
/data/documents          1.1T    888M    1.1T     0%    /usr/home/edho/Documents
/data/downloads          3.4T    2.2T    1.1T    66%    /usr/home/edho/Downloads

[root@einhart ~]# zdb
dpool:
    version: 28
    name: 'dpool'
    state: 0
    txg: 407886
    pool_guid: 5265065684459342039
    hostid: 4266313884
    hostname: 'einhart'
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 5265065684459342039
        children[0]:
            type: 'raidz'
            id: 0
            guid: 10113259324866791715
            nparity: 2
            metaslab_array: 23
            metaslab_shift: 36
            ashift: 12
            asize: 7314369150976
            is_log: 0
            children[0]:
                type: 'disk'
                id: 0
                guid: 850395506991012944
                path: '/dev/gptid/fe13fc94-9bfe-11e0-bd6e-0030678cf5c1'
                phys_path: '/dev/gptid/fe13fc94-9bfe-11e0-bd6e-0030678cf5c1'
                whole_disk: 0
                DTL: 164
            children[1]:
                type: 'disk'
                id: 1
                guid: 11140108939464482570
                path: '/dev/gptid/0dc1601d-9f95-11e0-9a98-0030678cf5c1'
                phys_path: '/dev/gptid/0dc1601d-9f95-11e0-9a98-0030678cf5c1'
                whole_disk: 1
                DTL: 173
            children[2]:
                type: 'disk'
                id: 2
                guid: 2470764073478818097
                path: '/dev/gptid/1e76f2ad-9d5d-11e0-997b-0030678cf5c1'
                phys_path: '/dev/gptid/1e76f2ad-9d5d-11e0-997b-0030678cf5c1'
                whole_disk: 0
                DTL: 168
            children[3]:
                type: 'disk'
                id: 3
                guid: 3492436401681256292
                path: '/dev/gptid/8d23200a-9d5c-11e0-997b-0030678cf5c1'
                phys_path: '/dev/gptid/8d23200a-9d5c-11e0-997b-0030678cf5c1'
                whole_disk: 0
                DTL: 165

--
O< ascii ribbon campaign - stop html mail - www.asciiribbon.org