Date:      Thu, 22 Jul 2004 10:26:07 -0700
From:      Nate Lawson <nate@root.org>
To:        Scott Long <scottl@freebsd.org>
Cc:        cvs-src@freebsd.org
Subject:   Re: cvs commit: src/sys/kern kern_shutdown.c
Message-ID:  <40FFF8AF.5090805@root.org>
In-Reply-To: <40FFF46A.2080703@freebsd.org>
References:  <200407212045.i6LKjHvX090599@palm.tree.com> <40FEE569.2010209@elischer.org> <40FEE6CA.3090005@samsco.org> <20040722092441.GH3001@cirb503493.alcatel.com.au> <40FFEB86.2050209@root.org> <40FFF46A.2080703@freebsd.org>

Scott Long wrote:
> Nate Lawson wrote:
>> Peter Jeremy wrote:
>>> You still wind up with unwritten data in RAM, just less of it.
>>>
>>> How much effort would be required to add journalling to UFS or UFS2?
>>> How big a gain does journalling give you over soft-updates?
>>
>>
>> Kirk pointed out something to me the other day that many people don't 
>> think about.  None of the journaling systems has had its recovery mode 
>> fully tested, especially on very large systems (dozens of TB).  It turns 
>> out that memory pressure from per-allocation-unit state is a big 
>> problem when you are trying to recover a huge volume.
>>
>> Just because it says "journaling" doesn't make it good.
> 
> You are very correct that there are issues like this, and that's why I 
> said that it would take a while to chase out the bugs and make it 
> production quality.  However, given the enterprise nature of Sun, I'd
> say it's a bit of a stretch to think that they haven't tested their
> f/s on multi-terabyte arrays.

I was referring to the herd of Linux journaling systems.
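To put rough numbers on the recovery memory pressure, here is a
back-of-the-envelope sketch.  The 4 KB allocation unit and 32 bytes of
in-core state per unit are purely illustrative assumptions, not figures
from any particular filesystem; with them, a 12 TB volume works out to
roughly 96 GB of tracking state during replay:

#include <stdio.h>
#include <stdint.h>

int
main(void)
{
	/* Illustrative assumptions, not measurements of any real filesystem. */
	uint64_t volume_bytes   = 12ULL << 40;	/* 12 TB volume */
	uint64_t unit_size      = 4096;		/* assumed allocation unit size */
	uint64_t state_per_unit = 32;		/* assumed in-core state per unit */

	uint64_t units = volume_bytes / unit_size;
	uint64_t state_bytes = units * state_per_unit;

	printf("allocation units:    %llu\n", (unsigned long long)units);
	printf("recovery state (GB): %llu\n",
	    (unsigned long long)(state_bytes >> 30));
	return (0);
}

That is far more RAM than most machines have, which is the kind of
pressure that makes recovery on a huge volume painful.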

> Even Apple advertises multi-terabyte
> storage with their XServe, so I'd be surprised if they hadn't done at
> least some testing there.

2 TB?

-- 
-Nate
