From owner-freebsd-current Tue Aug 26 11:57:58 1997
Return-Path:
Received: (from root@localhost) by hub.freebsd.org (8.8.7/8.8.7) id LAA00266 for current-outgoing; Tue, 26 Aug 1997 11:57:58 -0700 (PDT)
Received: from MindBender.serv.net (root@mindbender.serv.net [205.153.153.98]) by hub.freebsd.org (8.8.7/8.8.7) with ESMTP id LAA00251 for ; Tue, 26 Aug 1997 11:57:47 -0700 (PDT)
Received: from localhost.HeadCandy.com (michaelv@localhost.HeadCandy.com [127.0.0.1]) by MindBender.serv.net (8.7.5/8.7.3) with SMTP id LAA23349; Tue, 26 Aug 1997 11:57:12 -0700 (PDT)
Message-Id: <199708261857.LAA23349@MindBender.serv.net>
X-Authentication-Warning: MindBender.serv.net: Host michaelv@localhost.HeadCandy.com [127.0.0.1] didn't use HELO protocol
To: Simon Shapiro
cc: "Jordan K. Hubbard", current@freebsd.org, Ollivier Robert
Subject: Re: IDE vs SCSI was: flags 80ff works (like anybody doubted it)
In-reply-to: Your message of Mon, 25 Aug 97 23:24:50 -0700.
Date: Tue, 26 Aug 1997 11:57:10 -0700
From: "Michael L. VanLoon -- HeadCandy.com"
Sender: owner-freebsd-current@freebsd.org
X-Loop: FreeBSD.org
Precedence: bulk

>Hi "Jordan K. Hubbard"; On 25-Aug-97 you wrote:

>> Hmmm.  If we're going to talk SCSI perf, let's get seriously SCSI here
>> then:  Quantum XP39100W drive on 2940UW controller:
>>
>> root@time-> dd if=/dev/rsd0 of=/dev/null count=1600 bs=64k
>> 1600+0 records in
>> 1600+0 records out
>> 104857600 bytes transferred in 10.974902 secs (9554309 bytes/sec)

>* Given unlimited CPU cycles, IDE is much ``better'' than SCSI:
>  a. Much cheaper.  A simple IDE interface costs about $0.11 to build.
>  b. Much simpler code.
>  c. Much shorter latencies on a given command.
>  d. Runs sequential tests much faster.

You forgot a condition:  Given unlimited CPU cycles, and a limited
budget, IDE is much ``better'' than SCSI.

>* But consider this:
>  a. How do you put more than 2 devices on a cable?
>  b. How do you make the cable longer than a child's step?
>  c. How do you issue multiple commands to multiple devices, and allow
>     them to disconnect and re-connect when done?
>  d. How do you allow command sequences to be optimized by the device?

   e. How do you get simultaneous, pipelined processing on all drives at
      once in a stripe set?

>Answer: By replacing IDE with SCSI :-)

>Why do you guys always evaluate your disk systems with huge sequential
>reads?  How many times do you actually use your computer to do such I/O?
>(Yes, I burn rubber on my truck; it excites the boys to no end :-)

It's just one way to measure.  Honestly, not the best.  I think most of
us use more than this one measurement.

>Even access to a raw device is limited (for excellent reasons) to 64K
>at a time.  Measure your performance in operations/sec and you will be
>headed in the right direction.  Load the system with multiple processes
>and you will start getting an idea of how useful the system is as a
>server.

And start loading it with processes while accessing multiple drives:
interleaved swap, various disk-accessing processes, and/or striped
partitions.  You'll really wish you were using SCSI in that scenario.

>Example:
>This discussion is based on st.c, a random I/O generator I wrote some
>time ago, when I was trying to decide between Linux, FreeBSD, Solaris,
>Unixware and NT (just to keep management happy).  St.c will randomly
>read from a file (or raw device; I always test raw devices, as
>filesystem performance is not what I am being paid for, and I am a very
>insignificant ``expert'' there).  You can ask st.c to either write back
>the read data, to write a pattern, to sequentially access the disk (two
>different ways), to lock, to flush caches, etc.  You get the idea.
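For anyone who wants to try this kind of measurement themselves, here is
a minimal sketch of a multi-process random-read tester.  This is not
Simon's st.c (which isn't posted here); the device name, NBLOCKS, NPROCS
and run length below are illustrative assumptions you would set for your
own hardware.

/*
 * rnd-read.c -- a sketch of a multi-process random-read benchmark in
 * the spirit of st.c as described above.  NOT the real st.c; the device
 * name, NBLOCKS, NPROCS and SECONDS are assumptions.
 */
#include <sys/types.h>
#include <sys/wait.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define BLKSZ   (64 * 1024)     /* raw-device transfers are capped at 64K */
#define NPROCS  16              /* concurrent reader processes */
#define SECONDS 30              /* length of the timed run */
#define NBLOCKS 10000L          /* device size in BLKSZ units (assumption) */

int
main(int argc, char **argv)
{
        const char *dev = (argc > 1) ? argv[1] : "/dev/rsd0";
        int i;

        for (i = 0; i < NPROCS; i++) {
                if (fork() == 0) {              /* child: one reader */
                        char *buf = malloc(BLKSZ);
                        int fd = open(dev, O_RDONLY);
                        time_t stop = time(NULL) + SECONDS;
                        unsigned long ops = 0;

                        if (fd < 0 || buf == NULL)
                                exit(1);
                        srandom(getpid());
                        while (time(NULL) < stop) {
                                /* random 64K-aligned seek, then one read */
                                off_t blk = random() % NBLOCKS;
                                if (lseek(fd, blk * BLKSZ, SEEK_SET) == -1 ||
                                    read(fd, buf, BLKSZ) == -1)
                                        break;
                                ops++;
                        }
                        printf("pid %d: %lu ops (%.1f ops/sec)\n",
                            (int)getpid(), ops, (double)ops / SECONDS);
                        exit(0);
                }
        }
        while (wait(NULL) > 0)                  /* reap all the readers */
                ;
        return (0);
}

Summing the per-process numbers while you raise NPROCS gives you exactly
the operations/sec-under-load curve Simon is talking about.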
>FreeBSD (-current, as of last Friday) will start saturating, losing I/O
>rate, at around 256 processes.  This may be due to the hardware used, or
>maybe because of some other reason.  Since this is exactly where we want
>to be, we did not bother to find out why.
>
>Under 2.2, we see the saturation point at about 900 disk I/O ops/sec.
>Under 3.0 we see just over 1,400.  Again, the test method was different,
>so these results are not directly comparable.  Our target was a proven
>800.  We are happy.

I would think the disk subsystem would be the primary limiting factor
here.  What mix of controllers and drives were these tests run on?

It would also be interesting to run this simulation against a striped
set of SCSI drives.  And it would be enlightening to see the same test
run against your striped set of IDE drives.

-----------------------------------------------------------------------------
  Michael L. VanLoon                           michaelv@MindBender.serv.net
       --<  Free your mind and your machine -- NetBSD free un*x  >--
   NetBSD working ports: 386+PC, Mac 68k, Amiga, Atari 68k, HP300, Sun3,
       Sun4/4c/4m, DEC MIPS, DEC Alpha, PC532, VAX, MVME68k, arm32...
   NetBSD ports in progress: PICA, others...
-----------------------------------------------------------------------------