Date:      Tue, 14 Jan 1997 10:44:42 -0700 (MST)
From:      Terry Lambert <terry@lambert.org>
To:        avalon@coombs.anu.edu.au (Darren Reed)
Cc:        terry@lambert.org, stesin@gu.net, karpen@ocean.campus.luth.se, hackers@FreeBSD.org
Subject:   Re: truss, trace ??
Message-ID:  <199701141744.KAA00127@phaeton.artisoft.com>
In-Reply-To: <199701140337.UAA16894@coyote.Artisoft.COM> from "Darren Reed" at Jan 14, 97 02:37:13 pm

> In some mail from Terry Lambert, sie said:
                                   ^^^ -- Is this British, or what?

> The way I see it, there some things to consider which you may (or may
> not) want to `work' with cyclic files:
> 
> * offset - when you pass byte n of an n byte cyclic file, should lseek tell
>            you that you're at byte n+1 or 0 ?
> 
>            Does it make sense to return n+1 if it can't lseek to that
>            absolute position ?  Would lseek() be hacked to go to position
>            x as x % n ?

I think you are getting confused with a VMS C library bug here, where
it put the lseek/tell boundaries on records instead of bytes, and failed
to advance the record pointer after applying the carriage-return carriage
control to the stream.

Even so, there remains the problem of reclaiming the blocks at the front
of the file, even if there was unused room in the inode to store an
offset.

> * blocks - why do you need to shuffle blocks around ?  Why not just adjust
>            the offset pointer once you get to the end ?  (In effect, the
>            write is done in 2 parts: first to the end of the file, the
>            second from the start).

Because you must deallocate blocks.

Also: what happens after two years of running this log file, when the
end offset exceeds the largest value the inode's offset field can store,
even if you have hacked it to have a start offset some number of bytes
from the end offset?

You *must* move block offsets or you will face this problem.

> * readers - if a reader is open and at position y and the next write will
>            go from x to x+n where x+n > y, does the writer block ?  (Consider
>            that all data from y around to x is valid).

No.  The reader gets bad data.  File offsets are associated with fd's,
not with vnodes or with on-disk inode data.

No matter how you do it, you could be "more"ing the log file, go on
vacation, and come back two weeks later and hit "space".


> I guess you're thinking of what happens when you keep appending to a file
> ... (open - write - close).  I don't see that non-block-sized record
> files can exist as cyclic files properly under Unix, eg:
> 
> I have a 30,000 byte cyclic file.  I write 1 byte to it, making 30,001.
> This isn't enough to delete the first block, but you must append it.
> (hmmm, would this mean the first block would be a fragment - would it even
> work ?)
> 
> Anyone for O_CYCLIC ?

You *could* do it, if you were extent or log based.  See the VIVAFS
paper; get it from the University of Kentucky, or from the proceedings
of usenix (ftp.sage.usenix.org).

In reality, I'm thinking of wtmp, and making sure the file starts on
a wtmp record boundary.

Short of defining file-specific truncators, I think you are SOL.


					Terry Lambert
					terry@lambert.org
---
Any opinions in this posting are my own and not those of my present
or previous employers.


