Date:      Thu, 22 Jun 95 12:21:15 MDT
From:      terry@cs.weber.edu (Terry Lambert)
To:        pete@dsw.com (Pete Kruckenberg)
Cc:        freebsd-hackers@freebsd.org
Subject:   Re: Disk quotas: why broken, when fixed?
Message-ID:  <9506221821.AA03268@cs.weber.edu>
In-Reply-To: <Pine.LNX.3.91.950622105819.368D-100000@dsw.dsw.com> from "Pete Kruckenberg" at Jun 22, 95 10:58:34 am

> I've read several comments about disk quotas in 2.0.5R, most of them 
> saying that they're broken. Could somebody take a sec to tell me exactly 
> what is broken and why? I'd love to try to fix it, but I'd like a little 
> insight beforehand.
> 
> Is anyone working on fixing disk quotas yet? I wouldn't want to duplicate 
> efforts, so let me know now before I get started.
> 
> A short history of when disk quotas were broken, why they got broken, and 
> what might be needed to fix them would be very helpful.

I'm not currently working on quotas.

I don't want to work on quotas for the near future; it seems to me that
the quota mechanism should itself be abstracted as a file system layer
instead of being embedded everywhere the way it currently is, so that it
would apply
to all of the file systems equally.

By the same token, the bottom end of the UFS interface (incorrectly)
consumes kernel internals instead of consuming the vfs interface;
this is also largely bad, but an obvious result of not having an anonymous
block layer (on top of which one could easily implement policies like
striping or volume spanning).

It also seems to me that quota operation on a per-inode basis is *bad*,
and that internal use should be vnode-based instead.  This is all part
of the general BSD use of inodes, rather than vnodes, as the primary
objects for caching and the like; vnodes would be more orthogonal given
the stacking interface.

On the general issue of quotas, there is one *blatant* failure mode:
the requirement that a user space utility pass in the quotaon request
as a system call, instead of quotas being treated as a mount option.
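For context, the user space flow looks roughly like this (option and
utility names as in BSD-derived systems; treat the paths and devices as
illustrative):

```
# /etc/fstab: the quota option is *noted* here, but the mount itself
# does not enable anything...
/dev/sd0e  /home  ufs  rw,userquota  2 2

# ...a separate utility must still make the quotaon call per
# file system, after the mount has already happened:
quotacheck /home
quotaon /home
```

The window between the mount and the quotaon call, and the dependence
on the utility being run correctly at all, is exactly the failure mode
described above.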

Because of this, if you fail to specify a different quota file for each
file system, you can easily run into problems.  There is no computation
of transitive closure over the graph: the key to a record is the inode
number alone instead of the inode number plus the device (when in
reality a proper implementation would probably key on the UID and the
device).


There are several other potential race conditions.  You can close them
by explicitly turning off the quotas before unmount and by maintaining
the quota file for a mounted file system on the file system itself.

Both of these (potentially) reduce the usefulness of quotas, especially
the pre-unmount disabling in light of the system shutdown procedure.
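Concretely, closing the first race means always using the safe ordering
below (path illustrative), including in the shutdown scripts:

```
# Disable quotas before the unmount, never after:
quotaoff /home
umount /home
```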


Anyway, it's pretty ugly code and I don't want to look at it any more. 8^).


					Terry Lambert
					terry@cs.weber.edu
---
Any opinions in this posting are my own and not those of my present
or previous employers.
