Date:      Fri, 19 Jul 2002 15:33:00 -0700 (PDT)
From:      Julian Elischer <julian@elischer.org>
To:        Richard Sharpe <rsharpe@ns.aus.com>
Cc:        freebsd-hackers@freebsd.org, jra@samba.org
Subject:   Re: Any problems with Using sendfile 
Message-ID:  <Pine.BSF.4.21.0207191531190.91009-100000@InterJet.elischer.org>
In-Reply-To: <Pine.LNX.4.33.0207200834570.4109-100000@ns.aus.com>

The Samba people said that there is a reason they cannot use any sendfile
implementation. They explained it to me once and I think I got it, but I
have since forgotten it. It had something to do with what happens if the
session is aborted or broken in some way, but I forget the details.

On Sat, 20 Jul 2002, Richard Sharpe wrote:

>  Hi,
>  
> I did some testing a couple of days ago with sendfile under FreeBSD and 
> Samba, and observed that in pulling 500MB of data from files on a server I 
> could achieve about 45MB/s over GigE (I have a problem in my switch, I 
> think) but that CPU utilization was at 100% without sendfile and 50% with 
> sendfile.
>  
> This is a big improvement, but there are potential problems. These 
> problems are due to the fact that once you start sending, if anything 
> changes, you cannot stop and say oops, I screwed up. If you promised to 
> send 64kB, you have to send all of it or drop the connection.
>  
> The problem is exacerbated by the fact that the Linux sendfile call does 
> not seem to allow you to specify the header on the call, so you are forced 
> to send the header from userspace and the data from the kernel, and you 
> therefore introduce a window during which things can go wrong. For example, 
> the file can be truncated or deleted, and I don't think SMB allows you to 
> send zeros for parts of the file that are not there. This is especially an 
> issue if you don't have kernel oplocks and you have UNIX users sharing 
> files with Windows users.
> 
> The way Samba does this normally is that it assembles the data in 
> userspace and then sends the response. If a problem occurs, it can 
> determine this before sending anything at all, and can send an 
> error response instead and does not need to drop the connection.
> 
> However, FreeBSD's sendfile implementation allows you to specify the 
> header to be sent in the call, and I believe that it also locks the vnode 
> prior to trying anything so you are protected against the file changing 
> under you. 
> 
> The only other thing that it could perhaps do is to pin all the pages 
> before trying to send anything. Thus, if any error occurs, it can return 
> to the user saying, sorry, I could not do this; you send an error message.
> That is, errors are handled in a recoverable way.
> 
> The question is, does it gain you anything by demanding that the pages 
> involved (up to 16) be pinned before starting to write on the socket? It 
> increases pressure on memory, but it might be that the only problems that 
> could occur mean that really bad things have happened anyway (like the 
> file system has gone because the disk has died), so it might be that there 
> is no need to be worried about this aspect.
>  
> Are there any comments from here?
> 
> Regards
> -----
> Richard Sharpe, rsharpe@ns.aus.com, rsharpe@samba.org, 
> sharpe@ethereal.com
> 
> 
> To Unsubscribe: send mail to majordomo@FreeBSD.org
> with "unsubscribe freebsd-hackers" in the body of the message
> 

