Date:      Sat, 4 Sep 2004 13:29:50 -0300 (ADT)
From:      "Marc G. Fournier" <scrappy@hub.org>
To:        Julian Elischer <julian@elischer.org>
Cc:        freebsd-current@freebsd.org
Subject:   Re: vnode leak in FFS code ... ?
Message-ID:  <20040904131131.A812@ganymede.hub.org>
In-Reply-To: <41394D0B.1050004@elischer.org>
References:  <20040901151405.G47186@ganymede.hub.org> <20040901200257.GA92717@afields.ca> <41365746.2030605@samsco.org> <20040902013534.GD9327@afields.ca> <20040901224632.O72978@ganymede.hub.org> <20040904004706.O812@ganymede.hub.org> <41394D0B.1050004@elischer.org>

On Fri, 3 Sep 2004, Julian Elischer wrote:

> Marc G. Fournier wrote:
>> 
>> Just as a followup to this ... the server crashed on Thursday night around 
>> 22:00ADT, only just came back up after a very long fsck ... with all 62 VMs 
>> started up, and 1008 processes running, vnodes currently look like:
>
> are you using nullfs at all on your vms?

No, I stopped using that over a year ago, figuring that it was exacerbating 
the problems we were having back then ... the only thing we used nullfs for 
at that time was to 'identify' which files were specific to a VM vs. a file 
on the template ... we moved to using NFS to do the same thing ...

The only things we use are unionfs and NFS ...

Basically, we do a 'mount_union -b <template> <vm>', where the template is a 
shared file system containing common applications, in order to reduce the 
overall disk space used by each client.  So, for instance, on one of our 
servers we have a 'template' VM that, when we need to add or upgrade an 
application, we start up, log into, and install from ports ... then we 
rsync that template to the live server(s) so that those apps are available 
within all the VMs ...
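
Concretely, that boils down to something along these lines on a live server 
(the host and path names here are only for illustration, not our actual 
layout):

    # pull the updated template over from the box we install ports on
    rsync -a --delete template-host:/vm/template/ /vm/template/

    # attach the shared template *below* a VM's own files, so anything
    # the VM hasn't changed itself is read from the template
    mount_union -b /vm/template /vm/client1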

We then use NFS to mount the 'base' file system for each VM, which contains 
only the changed files specific to that VM (i.e. config files, any apps the 
client happens to have installed, etc.), and use that to determine storage 
usage ...

There is only one NFS mount point, which covers the whole file system; we 
don't do a mount per VM or anything like that ...
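
So, on the accounting side, it is basically just something like this 
(server name and paths made up for the example):

    # one NFS mount covering the whole /vm file system on the live server
    mount -t nfs vmserver:/vm /nfs/vmserver

    # a client's storage usage is just the size of its own 'base' directory,
    # since the shared template files never physically live there
    du -sk /nfs/vmserver/client1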

So, in the case of the system whose vnode count has risen quite high, with 
60 VMs ... there would be (a rough sketch of the per-VM mounts follows the 
list):

5 UFS mounts
 	- /, /var, /tmp, /usr and /vm
 	- /vm is where the virtual machines run off of
1 NFS mount
60 UNIONFS mount points
 	- one for each VM
60 procfs mount points
 	- one for each VM
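
Spelled out, the per-VM portion of those mounts is essentially the following 
(client paths again just illustrative):

    # each VM gets the shared template unioned in below it, plus its own procfs
    for vm in /vm/client*; do
        mount_union -b /vm/template ${vm}
        mount -t procfs proc ${vm}/proc
    done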

Thanks to the work that David and Tor put in last summer on vnlru, this 
works quite well, with the occasional crash when a 'fringe bug' gets 
tweaked ... our record uptime on a server, in this configuration, so far 
is 106 days ...

The part that hurts the most is that the longer the server is up and 
running, the greater the chance of a 12+ hour fsck run due to all the 
ZERO LENGTH DIRECTORY errors :(

Whenever I get a good core dump, I try to post a report to GNATS, but 
between everyone focusing on 5.x and those crying "unionfs is broken", 
the reports tend to sit in limbo ... although most of the bugs I am able 
to find most likely exist in the 5.x code as well, and fixing them would 
go one more step towards improving unionfs ...

----
Marc G. Fournier           Hub.Org Networking Services (http://www.hub.org)
Email: scrappy@hub.org           Yahoo!: yscrappy              ICQ: 7615664


