Date:      Fri, 5 Apr 1996 02:59:13 -0800 (PST)
From:      asami@cs.berkeley.edu (Satoshi Asami)
To:        davidg@root.com
Cc:        current@freebsd.org, nisha@cs.berkeley.edu, tege@matematik.su.se, hasty@rah.star-gate.com, dyson@freebsd.org
Subject:   Re: fast memory copy for large data sizes
Message-ID:  <199604051059.CAA24750@silvia.HIP.Berkeley.EDU>
In-Reply-To: <199604051021.CAA00222@Root.COM> (message from David Greenman on Fri, 05 Apr 1996 02:21:48 -0800)

 * >    size     libc             ours
 * >      32  15.258789 MB/s   6.103516 MB/s 
 * >      64  20.345052 MB/s  15.258789 MB/s
 * >     128  17.438616 MB/s  15.258789 MB/s
 * 
 *    This would be a big lose in the kernel since just about all
 * bcopy's fall into this range _except_ disk I/O block copies.

Of course we need to put in a cut-off size below which we fall back to
the old routine; that is what we did when we stuck it in the kernel.
Sorry if I didn't mention that in my previous mail.  The purpose of
collecting these data was to see where this threshold should be.
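
Roughly, the dispatch looks something like the sketch below.  This is
only an illustration: the name fastcopy() is made up, and the 128-byte
cut-off is a placeholder, since the real value is exactly what the
numbers above are supposed to pin down.

    #include <stddef.h>
    #include <string.h>

    /*
     * Hypothetical cut-off; the benchmark above is meant to tell us
     * the real value.  Below it the plain integer copy wins, above it
     * the FP-based routine does.
     */
    #define FASTCOPY_CUTOFF 128

    /* Stand-in for the FP-based copy routine (name is hypothetical). */
    extern void fastcopy(void *dst, const void *src, size_t len);

    static void
    bcopy_dispatch(const void *src, void *dst, size_t len)
    {
            if (len < FASTCOPY_CUTOFF)
                    memcpy(dst, src, len);   /* small copies: old routine */
            else
                    fastcopy(dst, src, len); /* large copies: FP routine */
    }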

 * 								I know
 * this can be done better using other techniques (non-FP, see hackers
 * mail from about 3 months ago). You should talk to John Dyson who's
 * also working on this.

I have that mail and tried what was in there, but it wasn't as fast as
the FP copies.  Maybe I screwed something up; I'll try again tomorrow.
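
For reference, the core of the FP-copy idea is to move the data 64 bits
at a time through the FPU, using integer loads and stores so the bits
are never interpreted as floating-point values (every 64-bit integer
round-trips exactly through the 64-bit significand of extended
precision).  A minimal i386/GCC sketch follows; fp_copy_blocks() is a
made-up name, and a real routine would also have to save and restore
the FPU state and handle unaligned heads and tails, all omitted here.

    #include <stddef.h>

    /* Copy nblocks 32-byte blocks through the x87 FPU stack. */
    static void
    fp_copy_blocks(void *dst, const void *src, size_t nblocks)
    {
            const char *s = src;
            char *d = dst;

            while (nblocks--) {
                    __asm__ __volatile__ (
                            "fildq   0(%0)\n\t"   /* push 4 quadwords...   */
                            "fildq   8(%0)\n\t"
                            "fildq  16(%0)\n\t"
                            "fildq  24(%0)\n\t"
                            "fistpq 24(%1)\n\t"   /* ...and pop them back  */
                            "fistpq 16(%1)\n\t"   /* out in reverse order  */
                            "fistpq  8(%1)\n\t"
                            "fistpq  0(%1)\n\t"
                            : : "r" (s), "r" (d) : "memory");
                    s += 32;
                    d += 32;
            }
    }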

Satoshi


