Date:      Mon, 29 Nov 2004 03:48:32 +0100
From:      Michael Nottebrock <michaelnottebrock@gmx.net>
To:        freebsd-current@freebsd.org
Subject:   fsck shortcomings
Message-ID:  <41AA8E00.2050401@gmx.net>

I recently had a filesystem go bad on me in such a way that it was recognized 
as far larger than it actually is, causing fsck to fail while trying to allocate 
an equally astronomical amount of memory (and my machine already had 1 Gig of 
RAM + 2 Gig of swap available).
I just newfs'd and I'm now in the process of restoring the data. However, I 
googled a bit on this, and it seems that this kind of fs corruption occurs 
quite often, in particular after power failures.

Is there really no way that fsck could be made smarter about dealing with 
seemingly huge filesystems? Also, how much memory would be required to 
fsck a _real_ 11TB filesystem?
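
For the second question, here is the back-of-envelope calculation I've been 
making. The per-object costs are my guesses -- roughly one bit per fragment for 
the block usage map and a few bytes per inode for state and link counts -- and 
the newfs parameters are what I believe are the current defaults (16k/2k blocks 
and fragments, one inode per 8k), so take the constants as assumptions rather 
than a description of what fsck_ffs really allocates:

/* fsckmem.c -- guesstimate fsck memory use for an 11TB filesystem. */
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	/* Assumed newfs defaults: 2k fragments, one inode per 8k of data. */
	const uint64_t fssize = 11ULL << 40;		/* 11 TB */
	const uint64_t fragsize = 2048;
	const uint64_t bytes_per_inode = 8192;

	uint64_t nfrags = fssize / fragsize;
	uint64_t ninodes = fssize / bytes_per_inode;

	/*
	 * Guessed per-object costs: 1 bit per fragment for the block map,
	 * ~4 bytes per inode for state, type and a 16-bit link count.
	 */
	uint64_t blockmap = nfrags / 8;
	uint64_t inodemem = ninodes * 4;

	printf("fragments: %ju  inodes: %ju\n",
	    (uintmax_t)nfrags, (uintmax_t)ninodes);
	printf("block map: ~%ju MB  inode state: ~%ju MB  total: ~%ju MB\n",
	    (uintmax_t)(blockmap >> 20), (uintmax_t)(inodemem >> 20),
	    (uintmax_t)((blockmap + inodemem) >> 20));
	return (0);
}

That comes out somewhere around 6 GB here, which, if my guesses are anywhere 
near right, would also explain why 1 Gig + 2 Gig of swap didn't get me far once 
the superblock started claiming something even bigger. A lower inode density 
(newfs -i) would shrink the inode part considerably.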

-- 
    ,_,   | Michael Nottebrock               | lofi@freebsd.org
  (/^ ^\) | FreeBSD - The Power to Serve     | http://www.freebsd.org
    \u/   | K Desktop Environment on FreeBSD | http://freebsd.kde.org
