Date:      Wed, 02 Nov 2011 16:14:35 +0200
From:      Daniel Kalchev <daniel@digsys.bg>
To:        Lee Dilkie <Lee@Dilkie.com>
Cc:        freebsd-fs@freebsd.org
Subject:   Re: Default inode number too low in FFS nowadays?
Message-ID:  <4EB1504B.3080107@digsys.bg>
In-Reply-To: <4EB14A47.8010107@Dilkie.com>
References:  <B888842A-7DB4-491B-93E3-A376745019F5@sarenet.es> <20111102131311.GA56941@icarus.home.lan> <4EB1476A.3070204@digsys.bg> <4EB14A47.8010107@Dilkie.com>



On 02.11.11 15:48, Lee Dilkie wrote:
>
> On 11/2/2011 1:36 PM, Daniel Kalchev wrote:
>>
>>
>> On 02.11.11 15:13, Jeremy Chadwick wrote:
>>> On Wed, Nov 02, 2011 at 12:57:33PM +0100, Borja Marcos wrote:
>>>> Today I've come across an issue long ago forgotten :) Running out 
>>>> of i-nodes.
>>>
>> Just for the completeness of it, one would use ZFS and be done with 
>> this issue. :-)
>
> Are you suggesting that ZFS be the default FS?

Not really. Perhaps we might consider something like that for 10.0 or 
11.0 -- today too many people are still wary of ZFS, and there are 
already trivial ways to do a ZFS-only FreeBSD install, so there is no 
need to hurry.

> My only concern with ZFS is that it still appears to be in flux and 
> have some issues. I don't know, from monitoring this list, if those 
> are issues that heavy load users experience and ZFS is as stable as 
> UFS or if it isn't. I just know I see issues being raised.
>

Personally, I have two issues with ZFS: memory use and... that it very 
quickly exposes bad hardware. I am currently at something like ~85% of 
my systems farm converted to ZFS-only. In the process, too many 
components proved to be bad. Disks that previously seemed 'wonderful' 
display CRC errors under ZFS. Guess what --- these disks were happily 
reading/writing garbage with UFS and nobody ever noticed!
This is a serious "issue" with going to ZFS... one that has prompted me 
to convert every active system to ZFS-only, although that requires much 
more memory.

Another issue I have with ZFS is that it is not (yet) trivial to use for 
read-only installs, especially for the root filesystem. I have a 
multitude of systems that mount all their 'system' partitions read-only 
(UFS), with only the data partitions writable. I have yet to discover 
how one does this with ZFS only.
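That said, ZFS does have a per-dataset readonly property that might get 
part of the way there. A sketch of what I have in mind -- the pool name 
'zroot' and the dataset layout below are placeholders, not a tested 
recipe:

```shell
# Hypothetical layout: pool "zroot" with system datasets under
# zroot/ROOT and data under zroot/data. Adjust names to your setup.

# Mark the system datasets read-only; ZFS enforces this per dataset,
# roughly analogous to mounting a UFS partition with -o ro.
zfs set readonly=on zroot/ROOT
zfs set readonly=on zroot/usr

# Data datasets stay writable (readonly is inherited, so override it).
zfs set readonly=off zroot/data

# To apply updates, flip the property temporarily:
zfs set readonly=off zroot/ROOT
# ... install updates ...
zfs set readonly=on zroot/ROOT
```

Whether this behaves sanely for the root dataset at boot time is exactly 
the part I have not verified.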

Yet another issue, more pronounced with v28 than with v15, is that when 
your zpool gets full, performance becomes abysmal. That is particularly 
bad for systems that are nearly full most of the time --- easily fixable 
with larger disks, I know...
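Until the disks get bigger, watching pool occupancy is the only 
mitigation I know of. A simple cron-able check -- the 80% threshold is 
an arbitrary example, not a magic number:

```shell
# Print how full each pool is; performance degrades sharply as a pool
# approaches capacity.
zpool list -H -o name,capacity

# Warn about any pool at or above an example threshold of 80%.
zpool list -H -o name,capacity | while read name cap; do
    pct=${cap%%%}    # strip the trailing '%' from e.g. "85%"
    [ "$pct" -ge 80 ] && echo "WARNING: pool $name is ${pct}% full"
done
```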

Yet another issue with ZFS is that while traditional UNIX partitioning 
semantics are local (partitions a, b, c on drive1 are distinct from 
partitions a, b, c on drive2), ZFS pool names are global. You cannot 
have two pools named 'system' on the same machine, and that makes some 
long-standing habits difficult to apply. The same trouble exists with 
GEOM/GPT labels, so we may just have to grow up.
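If I understand the import semantics correctly, a colliding pool can at 
least be renamed at import time, which softens the problem when juggling 
a foreign system disk. A sketch, where 'zroot' and 'zroot2' are 
placeholder names:

```shell
# Two pools both named "zroot" cannot be imported at once, but a pool
# can be given a new name as it is imported:
zpool import zroot zroot2

# Or import it under an alternate root to inspect a foreign system disk
# without its mountpoints clashing with the running system's hierarchy:
zpool import -R /mnt zroot zroot2
```

The rename sticks, of course, so this does not really restore the old 
local-names habit; it just makes the collision survivable.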

Other than that, my experience with ZFS has been more than wonderful.

Daniel
