Date:      Sun, 14 Oct 2007 02:15:41 -0500
From:      Mike Pritchard <mpp@mail.mppsystems.com>
To:        Nikolay Pavlov <qpadla@gmail.com>
Cc:        arch@freebsd.org, James Gritton <jamie@gritton.org>, Marko Zec <zec@freebsd.org>, Julian Elischer <julian@elischer.org>, freebsd-arch@freebsd.org
Subject:   Re: kernel level virtualisation requirements.
Message-ID:  <20071014071541.GA63551@mail.mppsystems.com>
In-Reply-To: <200710131021.03861.qpadla@gmail.com>
References:  <470E5BFB.4050903@elischer.org> <470FD0DC.5080503@gritton.org> <200710131021.03861.qpadla@gmail.com>

On Sat, Oct 13, 2007 at 10:20:58AM +0300, Nikolay Pavlov wrote:
> On Friday 12 October 2007 22:54:04 James Gritton wrote:
> > Julian Elischer wrote:
> >  > What I'd like to see is a bit of a 'a-la-carte' virtualisation
> >  > ability.
> > ...
> >  > My question to you, the reader, is:
> >  > what aspects of virtualisation (the appearance of multiple instances
> >  > of some resource) would you like to see in the system?
> >
> > Filesystem quotas, without the need for each jail to have its own mount
> > point.
> 
> Strange, but IMHO it would be better to slightly reverse this statement:
> Filesystem quotas _with_ the need for each jail to have its own mount
> point, but without the need to maintain them in fstab (like it is in
> ZFS), because you gain the ability to maintain jails at the filesystem
> level (snapshots, cloning, dump, restore and so on).

Let me start with this: the current quota system requires a mount point
to maintain the quota data file pointer information (currently a UFS mount
point), and a pointer in the i-node to the quota struct for that UID/GID.
The idea of multiple mount points sharing quotas strikes me as a novel
idea right now... Hmm.  That concept fits with ZFS better too...

I've been working on bumping quotas up a level.  Instead of maintaining
the data in the file system layer, I have them working at the vnode layer.
There is no real reason quotas need to know anything about the data below
the vnode layer (e.g. move the i-node info up into the vnode, and the
information maintained in the ufsmount struct up into the mount struct).
Quotas just need to know the ID, its type (uid/gid), which data file
to get/write the limit/usage info from/to, and what to add to or subtract
from that data.
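
To make that concrete, here is a minimal sketch of the kind of
filesystem-independent interface I mean.  The names (vquota,
vquota_chgblocks) are invented for illustration and are not the actual
patch; the point is that nothing below the vnode layer is needed:

    /*
     * Sketch only -- hypothetical names, not the real code.
     * All the accounting needs is an ID, its type, the quota data
     * file to read/write, and a delta to apply.
     */
    #include <stdint.h>
    #include <errno.h>

    struct vnode;                            /* opaque here */

    enum vq_type { VQ_USER, VQ_GROUP };      /* uid vs. gid quota */

    struct vquota {
            enum vq_type     vq_type;        /* which kind of ID */
            uint32_t         vq_id;          /* uid/gid being charged */
            struct vnode    *vq_datavp;      /* vnode of the quota data file */
            uint64_t         vq_bsoftlimit;  /* warn above this many blocks */
            uint64_t         vq_bhardlimit;  /* fail allocations above this */
            uint64_t         vq_curblocks;   /* current usage */
    };

    /* Charge (delta > 0) or credit (delta < 0) blocks against an ID. */
    static int
    vquota_chgblocks(struct vquota *vq, int64_t delta)
    {
            if (delta > 0 && vq->vq_bhardlimit != 0 &&
                vq->vq_curblocks + (uint64_t)delta > vq->vq_bhardlimit)
                    return (EDQUOT);
            vq->vq_curblocks += delta;
            /* ...then write the updated record back through vq_datavp... */
            return (0);
    }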

My goal on this was to be able to extend quotas to work on non-UFS file
systems.  I have them 85% or so working on tmpfs right now as my test case.

I have never done anything with jails.  If someone who knows a little bit
about quotas and more about jails wants to get together with me on that,
I'm open to guidance.  If I'm going to do this major change to quotas, I'd
like to be able to make it work in a jailed environment.  I think that
should be possible (and I'm not sure why they don't work now, since
I think they should, but like I said, no jail experience here...).

And from googling on ZFS, it does sound like there is a need for
quotas, even with the ZFS quotas that are available now, although
ZFS looks messy from a quota standpoint.
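
For context, the ZFS quotas available now are per-dataset caps rather than
per-UID/GID accounting, which is why a generic quota layer would still
matter there.  The dataset name below is just an example:

    # Cap the whole dataset at 10 GB; this limits the jail's file
    # system, not individual users or groups inside it.
    zfs set quota=10G tank/jails/www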

> point, but without the need to maintain them in fstab (like it is in
> ZFS), because you gain the ability to maintain jails at the filesystem
> level (snapshots, cloning, dump, restore and so on).

Our utilities (like quotaon) require that quotas be specified in /etc/fstab,
and from comments in the code, SunOS didn't require that.  Other than
the fact that we allow the admin to specify an alternate location for the
quota data files in fstab, there is no reason not to let the commands try.
We would just have to add some options to allow the commands to access a
different quota data file.  If the admin screws up, I guess we let them shoot
themselves in the foot.  It's easy to add warnings saying:

"***** The /sandbox file system does not have quotas enabled in /etc/fstab,
continuing with quota database files /sandbox/user.quota, /sandbox/group.quota"
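
For comparison, this is how quotaon finds quota-enabled file systems today:
the userquota/groupquota mount options in /etc/fstab mark the file system,
with the quota data files living in the file system root unless an
alternate path is given there.  A typical entry (device name made up):

    # Hypothetical device, current option syntax: enable user and
    # group quotas on /sandbox.
    /dev/da0s1e   /sandbox   ufs   rw,userquota,groupquota   2   2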

I'm sure there are some more issues, but quotas are basically a (very simple)
database maintained by the kernel.  As long as it can read/write that data
for the file system in question, they should work.  The biggest issue
is finding the correct places to insert the quota update calls in each
file system.
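
For reference, the "database" is just an array of fixed-size records in the
quota data file, indexed by UID or GID.  The existing UFS record looks
roughly like this (paraphrased from struct dqblk; types are approximate):

    #include <stdint.h>

    struct dqblk {
            uint32_t dqb_bhardlimit;  /* absolute block limit */
            uint32_t dqb_bsoftlimit;  /* preferred block limit */
            uint32_t dqb_curblocks;   /* blocks currently allocated */
            uint32_t dqb_ihardlimit;  /* absolute inode limit */
            uint32_t dqb_isoftlimit;  /* preferred inode limit */
            uint32_t dqb_curinodes;   /* inodes currently allocated */
            int32_t  dqb_btime;       /* grace period for block overage */
            int32_t  dqb_itime;       /* grace period for inode overage */
    };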

And from working on the utilities to remove the UFS-only restriction,
I was thinking it might be desirable to move back to that model from SunOS:
let the commands work and let them fail at the syscall level if
quotas are not supported.  This would require more work, since
each command would now need an option to specify the location of the
quota data files, but it is feasible (and not that hard, really)...
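
Purely to illustrate the kind of option I mean (the -f flag below is made
up; nothing like it exists in quotaon today):

    # Hypothetical syntax: point quotaon at an explicit user quota
    # data file instead of relying on an /etc/fstab entry.
    quotaon -u -f /sandbox/user.quota /sandbox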

Sorry, I got a bit more in-depth there than I intended, but I welcome any
input or help.
-- 
Mike Pritchard
mpp @ FreeBSD.org
"If tyranny and oppression come to this land, it will be in the guise
of fighting a foreign enemy."  - James Madison (1787)
