Date:      21 Aug 1995 22:09:12 +0800
From:      peter@haywire.dialix.com (Peter Wemm)
To:        freebsd-hackers@freebsd.org
Subject:   Re: Making a FreeBSD NFS server
Message-ID:  <41a428$1aa$1@haywire.DIALix.COM>
References:  <9508201948.AA23045@cs.weber.edu>, <199508210345.WAA29762@bonkers.taronga.com>

peter@bonkers.taronga.com (Peter da Silva) writes:

>In article <9508201948.AA23045@cs.weber.edu>,
>Terry Lambert <terry@cs.weber.edu> wrote:
>>Unless you are running everything on the same box, it's impossible to
>>provide inter-machine consistency guarantees.  That's why NFS is the
>>way it is.

>Oh, crap. You handle machine failures the same way you handle disk failures.
>If you can't handle disk failures you shouldn't have a stateful *local* file
>system. For conventional file I/O you can get pretty much the same recovery
>semantics both ways (client reloads state), and for non-file I/O you get the
>choice of no access at all or error returns. I'll take the error returns.

>I've used stateless and stateful remote file systems, and I'll take stateful
>any day. I'd much rather type:

>	tar tvfB //xds13/dev/rmt0

>Than:

>	rsh xds13 dd if=/dev/rmt0 | tar tvfb -

>And it's awful nice to be able to set up a getty on //modem1/dev/ttyc4. And
>being able to get open-count semantics on temp files. And accessing named
>pipes over the net. And "fsck //hurtsystem/dev/rw0a". And so on...

>I really miss OpenNET.

AT&T's RFS did this too..  It's a pity the implementation was so
grossly unportable and inherently slow, and that the crash recovery
was practically useless.

Just as a quick summary of the way RFS worked for those who've not had
the ...umm... ``experience'' of dealing with it:
- The system calls for files on remote machines were intercepted,
packaged up, and sent to the remote system for execution (see the
sketch after this list)..
- The data was copied back and forth between the client's user space
and the server with a remote copyin/copyout.
- It *guaranteed* proper unix semantics on the remote fs... none of
the kludgey stuff that NFS has to do..
- It was slow.. There was a lot of latency because of the numerous
transfers across the network.
- It was reliable... it used a connection-oriented link, like TCP.
The problem with the implementation was that the recovery from a
dropped link really sucked..  This was not an inherent flaw of the
design, however, just bad implementation choices.
- You had access to remote devices..
- It was totally unsupportive of heterogeneous environments.  Since
ioctl()s were interpreted and executed on the remote system, you had
to have both systems using the same ioctl numbers.....
- Because of the remote access, you could do really cool things like
"ps -r remotesystem" or "chroot /remotesystem truss -p processid" -
since /proc, /dev/kmem and /unix could be exported.
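
To make the mechanism concrete, here's a toy user-space sketch of the
forwarding idea.  The wire format, opcodes, and function names below
are all made up for illustration; the real RFS did this inside the
kernel, with proper marshalling and connection management:

#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Hypothetical wire format -- one request per forwarded syscall.
 * Note that shipping raw structs assumes both ends lay them out
 * identically: the same portability trap the ioctl numbers hit. */
struct rfs_req {
	int	op;		/* RFS_OPEN or RFS_READ */
	int	fd;		/* remote fd, for RFS_READ */
	int	flags;		/* open() flags, for RFS_OPEN */
	long	count;		/* byte count, for RFS_READ */
	char	path[256];	/* pathname, for RFS_OPEN */
};

struct rfs_rep {
	long	retval;		/* syscall return value */
	int	err;		/* errno on failure */
};

enum { RFS_OPEN = 1, RFS_READ = 2 };

/* Client side: one network round trip per syscall -- hence the
 * latency.  'net' is an already-connected stream socket. */
long
rfs_call(int net, struct rfs_req *rq, struct rfs_rep *rep)
{
	write(net, rq, sizeof *rq);
	read(net, rep, sizeof *rep);
	return (rep->retval);
}

/* Server side: execute the forwarded syscall for real, then ship
 * status (and, for reads, the data) back -- the "remote copyout". */
void
rfs_serve(int net, struct rfs_req *rq)
{
	struct rfs_rep rep = { -1, 0 };
	char buf[8192];
	long n;

	switch (rq->op) {
	case RFS_OPEN:
		rep.retval = open(rq->path, rq->flags);
		break;
	case RFS_READ:
		n = rq->count < (long)sizeof(buf) ?
		    rq->count : (long)sizeof(buf);
		rep.retval = read(rq->fd, buf, n);
		break;
	}
	if (rep.retval < 0)
		rep.err = errno;
	write(net, &rep, sizeof rep);		/* status first... */
	if (rq->op == RFS_READ && rep.retval > 0)
		write(net, buf, rep.retval);	/* ...then the data */
}

Because the server runs the real open() and read(), the client gets
full unix semantics for free -- and pays for it with a round trip on
every call.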

An RFS-type design is far more suitable for a cluster of closely
coupled systems than NFS is.  Like Peter says:
"getty vt100 /modemserver/dev/term/A46" and "tar tvf /tapeserver/dev/tape" 

We have old SVR4 machines here that have RFSv2..  These examples are
real.. (we don't use them since we don't have any need, but I used to
keep the config handy for showing people..  Alas, I disabled the RFS
code last time I rebuilt the kernels.)

This might be fun to implement one day for a FreeBSD "cluster"...
Provided it's done right, of course, and isn't "marketed" as a
generic network filesystem, like AT&T's droids tried to do.

-Peter


