From owner-freebsd-stable@FreeBSD.ORG Wed Jul 26 23:46:20 2006
Message-Id: <200607262345.k6QNjGv2012721@fire.jhs.private>
Date: Thu, 27 Jul 2006 01:45:16 +0200
From: "Julian H. Stacey" <jhs@flat.berklix.net>
To: Sven Willenberger
Cc: freebsd-stable@freebsd.org, Feargal Reilly
Subject: Re: filesystem full error with inumber
In-Reply-To: Message from Sven Willenberger of "Wed, 26 Jul 2006 13:07:19 EDT." <44C7A147.9010106@dmv.com>
List-Id: Production branch of FreeBSD source code

Sven Willenberger wrote:
> Feargal Reilly presumably uttered the following on 07/24/06 11:48:
> > On Mon, 24 Jul 2006 17:14:27 +0200 (CEST)
> > Oliver Fromme wrote:
> >
> >> Nobody else has answered so far, so I try to give it a shot ...
> >>
> >> The "filesystem full" error can happen in three cases:
> >> 1. The file system is running out of data space.
> >> 2. The file system is running out of inodes.
> >> 3. The file system is running out of non-fragmented blocks.
> >>
> >> The third case can only happen on extremely fragmented
> >> file systems, which happens very rarely, but maybe it's
> >> a possible cause of your problem.
> >
> > I rebooted that server, and df then reported that disk at 108%,
> > so it appears that df was reporting incorrect figures prior to
> > the reboot. Having cleaned up, it appears by my best
> > calculations to be showing correct figures now.
> >
> >> > kern.maxfiles: 20000
> >> > kern.openfiles: 3582
> >>
> >> Those have nothing to do with "filesystem full".
> >
> > Yeah, that's what I figured.
> >
> >> > Looking again at dumpfs, it appears to say that this is
> >> > formatted with a block size of 8K, and a fragment size of
> >> > 2K, but tuning(7) says: [...]
> >> > Reading this makes me think that when this server was
> >> > installed, the block size was dropped from the 16K default
> >> > to 8K for performance reasons, but the fragment size was
> >> > not modified accordingly.
> >> >
> >> > Would this be the root of my problem?
> >>
> >> I think a bsize/fsize ratio of 4/1 _should_ work, but it's
> >> not widely used, so there might be bugs hidden somewhere.
> >
> > Such as df not reporting the actual data usage, which is now my
> > best working theory. I don't know what df bases its figures on;
> > perhaps it either slowly got out of sync or, more likely, got
> > things wrong once the disk filled up.
> >
> > I'll monitor it to see if this happens again, but hopefully
> > won't keep that configuration around for too much longer anyway.
> >
> > Thanks,
> > -fr.
>
> One of my machines that I recently upgraded to 6.1 (6.1-RELEASE-p3)
> is also exhibiting df reporting wrong data usage numbers. Notice the
> negative "Used" numbers below:

A negative number isn't an example of a programming error; it just
means the system is now using the last bit of space that only root
can use. For insight, try for example:

	man tunefs
	reboot
	boot -s
	tunefs -m 2 /dev/da0s1e

then decide what level of -m you want; the default is 8 to 10, I recall.

> df -h
> Filesystem      Size    Used   Avail Capacity  Mounted on
> /dev/da0s1a     496M     63M    393M    14%    /
> devfs           1.0K    1.0K      0B   100%    /dev
> /dev/da0s1e     989M   -132M    1.0G   -14%    /tmp
> /dev/da0s1f      15G    478M     14G     3%    /usr
> /dev/da0s1d      15G   -1.0G     14G    -8%    /var
> /dev/md0        496M    228K    456M     0%    /var/spool/MIMEDefang
> devfs           1.0K    1.0K      0B   100%    /var/named/dev
>
> Sven

-- 
Julian Stacey.  Consultant Unix Net & Sys. Eng., Munich.  http://berklix.com
Mail in Ascii, HTML=spam.  Ihr Rauch = mein allergischer Kopfschmerz.
(Your smoke = my allergic headache.)
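A side note on the arithmetic behind the df output quoted above: df(1)
derives its columns from statfs-style block counters, and the minfree
reserve (8% by default) means Capacity can legitimately exceed 100%
once root dips into the reserve, while a negative Used implies the
free-block count has drifted above the total. A minimal Python sketch,
with field names following struct statfs and made-up example numbers,
not figures from the systems above:

```python
def df_columns(f_blocks, f_bfree, f_bavail):
    """Mimic how BSD df(1) computes Used / Avail / Capacity.

    f_blocks: total data blocks
    f_bfree:  free blocks, including the root-only minfree reserve
    f_bavail: blocks available to non-root (goes negative once root
              eats into the reserve)
    """
    used = f_blocks - f_bfree
    avail = f_bavail
    denom = used + avail
    capacity = 100.0 * used / denom if denom else 100.0
    return used, avail, capacity

# Nearly full fs with an 8% (80-block) reserve partly consumed by root:
# Capacity exceeds 100%, like the "108%" Feargal saw.
print(df_columns(1000, 50, -30))     # used=950, avail=-30, capacity ~103%

# Free-count drift (f_bfree > f_blocks) produces the negative Used and
# negative Capacity pattern in Sven's df listing.
print(df_columns(1000, 1130, 1050))  # used=-130, capacity ~-14%
```

The sketch only models the column arithmetic; it says nothing about why
the kernel's counters drifted in the first place.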
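On the bsize/fsize point raised above: the FreeBSD default geometry
keeps an 8:1 block-to-fragment ratio (16K blocks, 2K fragments), so
halving the block size to 8K without also shrinking the fragment size
leaves the unusual 4:1 ratio the posters discuss. A trivial check of
that arithmetic, in Python purely for illustration:

```python
def bf_ratio(bsize, fsize):
    # Block-size to fragment-size ratio of a UFS file system,
    # both given in bytes.
    return bsize // fsize

print(bf_ratio(16384, 2048))  # 8 -- the FreeBSD default geometry
print(bf_ratio(8192, 2048))   # 4 -- 8K blocks with unchanged 2K frags
print(bf_ratio(8192, 1024))   # 8 -- what 8K blocks would need to keep 8:1
```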