From: Eric Anderson <anderson@centtech.com>
Date: Wed, 20 Dec 2006 09:43:25 -0600
To: Arone Silimantia
Cc: freebsd-fs@freebsd.org
Subject: Re: quotas safe on >2 TB filesystems in 6.1-RELEASE ?
Message-ID: <45895A1D.2010105@centtech.com>
In-Reply-To: <626700.55454.qm@web58612.mail.re3.yahoo.com>
List-Id: Filesystems <freebsd-fs.freebsd.org>

On 12/20/06 09:15, Arone Silimantia wrote:
> Eric,
>
> Thanks for your comments and help - your posts on this
> list are much appreciated.
Comments in line below:

> --- Eric Anderson wrote:
>
>> With 9 TB without any journaling, you might run into
>> problems if you crash and need to fsck - the number
>> of files you could have on the file system could
>> well require more memory/time than you have
>> available.
>
> Hmmm...the time required is more dependent on inodes
> than on size of data / size of files, right ?
>
> My 9 TB dataset uses about 36 million inodes.
>
> Any comments on that number ? Large ? Pedestrian ?
> Typical ?

Sounds like you have a lot of larger files (~250k per file on average,
possibly), which helps the fsck times. 36 million inodes should be
fsck'able with enough memory (maybe ~3 GB-ish? That's a wild guess).

I have two 10 TB file systems; one has 180 million inodes. I don't
attempt to fsck it, because it would take a very long time and I might
run out of memory (I have 8 GB). I use GJOURNAL on these, and am very
happy with it (thanks Pawel!). df snippet:

Filesystem                     1K-blocks       Used      Avail Capacity     iused      ifree %iused  Mounted on
/dev/label/vol10-data.journal 9925732858 8780187028  351487202    96% 180847683 1102147515    14%   /vol10
/dev/label/vol11-data.journal 9925732858 2598987846 6532686384    28%  49422705 1233572493     4%   /vol11

I also have several 2 TB partitions (set up before gjournal was
available) that are *FULL*, each with about 25-45 million inodes. Those
fsck in about 4-7 hours each, using between 1 GB and 3 GB of memory to
do so.

> I am hoping that that could be fsck'd (modern Hitachi
> SATA drives, RAID-6, 3ware) in 48 hours ... or am I
> way off ?

That depends on the drives, really, and maybe on caching. The
geom_cache module (still beta, probably, and not yet in the src tree, I
believe) is said to improve fsck times. 48 hours is a long time, and I
*think* it should complete within that time frame, but you'd really
have to test it to be sure.
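[Editor's note: Eric's "~250k per file" figure follows directly from the
numbers in the thread; a quick shell check (assuming decimal units for
the 9 TB figure):]

```shell
# 9 TB of data spread across 36 million inodes (decimal units assumed).
bytes=$((9 * 1000 * 1000 * 1000 * 1000))
inodes=36000000
avg=$((bytes / inodes))
echo "average file size: ${avg} bytes"   # about 250 KB, matching the estimate
```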
>>> But I do absolutely need to run quotas (both user
>>> and group) on this 9 TB array. I also need to
>>> successfully use all quota admin tools (repquota,
>>> edquota, quota, etc.)
>>>
>>> Can I get an assurance that this is totally safe,
>>> sane, and fit to run in a mission critical, data
>>> critical environment ? Anyone doing it currently ?
>>> Any comments or warnings of _any kind_ much
>>> appreciated.
>>
>> I don't think anyone will say 'I promise it will
>> work' of course, but I would start by using the
>> latest 6-STABLE source since there have been quite a
>> number of updates to file-system-related code since
>> 6.1.
>
> Ok, but all of the CLI tools (edquota, repquota,
> quota, quotacheck, quotaon) are known-good for
> "bigdisk" ?
>
> And there are no known "quotas just don't work with
> bigdisk" problems ?
>
> I was hoping someone out there was running quotas
> with 6.1-RELEASE on a >2 TB filesystem and could
> report favorably...

I'm not certain. There were some bugs in quotas that were recently
fixed (Kris Kennaway, I think, reported them and saw the fixes into the
tree); prior to that I hit those bugs consistently, so I stopped using
quotas. I haven't tried since the fixes went in, and the fixes (if I
recall correctly) had to do with background fsck (soft updates, maybe)
and not the size of the disk.

You could try this in a mock-up environment: create a sparse file and
use it with mdconfig, newfs it, enable quotas, mount it, and then use a
script to create a massive number (36-million-ish) of files of about
200k each, owned by random users, in a similar fashion to your real
data, and see if all goes well. You can try your fsck that way too.

Eric

--
------------------------------------------------------------------------
Eric Anderson        Sr. Systems Administrator        Centaur Technology
An undefined problem has an infinite number of solutions.
------------------------------------------------------------------------
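[Editor's note: the mock-up Eric suggests could be sketched roughly as
below. This is a hypothetical outline, not a tested recipe: the backing
file path, md unit number, mount point, and fstab line are all
illustrative assumptions; it must run as root on a FreeBSD system whose
kernel is built with quota support (`options QUOTA`); and actually
populating ~36 million files would itself take many hours.]

```sh
# Hypothetical mock-up of the quota/fsck test (run as root on FreeBSD).
# All paths and the md unit number are made up for illustration.
truncate -s 9T /scratch/quota-test.img        # sparse backing file
mdconfig -a -t vnode -f /scratch/quota-test.img -u 9
newfs -U /dev/md9                             # UFS2 with soft updates
mkdir -p /mnt/quotatest

# Quota options are read from /etc/fstab by quotacheck/quotaon:
echo '/dev/md9 /mnt/quotatest ufs rw,userquota,groupquota 0 0' >> /etc/fstab
mount /mnt/quotatest
quotacheck /mnt/quotatest
quotaon /mnt/quotatest

# ...populate with ~36 million files of ~200k each under assorted uids,
# then exercise edquota/repquota/quota against the result...

# Finally, time a full foreground fsck of the populated file system:
umount /mnt/quotatest
time fsck -t ufs -y /dev/md9
```

Note that the sparse image only consumes real disk as files are
written, so the host volume would still need room for the roughly 7 TB
(36 million x 200 KB) the populated test set would occupy.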