Date:      Fri, 05 Apr 1996 11:06:31 -0600
From:      Jon Loeliger <jdl@jdl.com>
To:        davidg@Root.COM
Cc:        asami@cs.berkeley.edu (Satoshi Asami), current@FreeBSD.org, nisha@cs.berkeley.edu, tege@matematik.su.se, hasty@rah.star-gate.com
Subject:   Re: fast memory copy for large data sizes 
Message-ID:  <199604051706.LAA14190@chrome.jdl.com>
In-Reply-To: Your message of "Fri, 05 Apr 1996 02:21:48 PST." <199604051021.CAA00222@Root.COM> 

So, like David Greenman was saying to me just the other day:
> >Here are the kind of numbers we are seeing, and hope you will see, if
> >you run the program attached at the end of this mail:
> >
> > 90MHz Pentium (silvia), SiS chipset, 256KB cache:
> >
> >    size     libc             ours
> >      32  15.258789 MB/s   6.103516 MB/s 
> >      64  20.345052 MB/s  15.258789 MB/s
> >     128  17.438616 MB/s  15.258789 MB/s
> 
>    This would be a big lose in the kernel since just about all bcopy's fall
> into this range _except_ disk I/O block copies. I know this can be done
> better using other techniques (non-FP, see hackers mail from about 3 months
> ago).

Don't know how much it would cost (in performance), but would it
make sense to have a simple size-based cutoff test for the two
different algorithms?
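
For illustration, a minimal sketch of the kind of size-based cutoff being
suggested.  The 1024-byte threshold, the routine names, and the stand-in
large_copy() are assumptions for the sake of the example, not code from the
FreeBSD kernel or from Satoshi's patch; the real threshold would be tuned
from measurements like the ones quoted above.

    #include <stddef.h>
    #include <string.h>

    #define LARGE_COPY_CUTOFF 1024          /* bytes; tune from benchmarks */

    /* Stand-in for the FPU-based copy; here it just calls memcpy(). */
    static void
    large_copy(void *dst, const void *src, size_t len)
    {
            memcpy(dst, src, len);
    }

    /* Dispatch: small copies take the ordinary path, big ones the fast one. */
    void
    copy_dispatch(void *dst, const void *src, size_t len)
    {
            if (len >= LARGE_COPY_CUTOFF)
                    large_copy(dst, src, len);
            else
                    memcpy(dst, src, len);
    }

The cost of the test itself is one compare and branch per call, which is why
the question of whether it is worth it hinges on how often small copies occur.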

jdl


