Date:      Mon, 03 Jun 2002 14:34:50 -0700
From:      Nate Lawson <nate@root.org>
To:        current@freebsd.org, stable@freebsd.org
Subject:   AIO test results
Message-ID:  <5.1.0.14.2.20020603143427.025b6810@cryptography.securesites.com>



I have been working on several projects that use AIO to break up the
latency of large, sequential reads.  In my code, I was surprised to find
what appears to be a performance problem: splitting a 256k read into
four 64k reads was actually quite a bit slower than a single 256k read
(about 3 times slower, in fact).  So I cooked up a small demo program
called aioex (linked below) to exercise AIO under different
configurations.
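
For reference, the pattern being tested boils down to something like the
sketch below.  This is not the actual aioex.c, just a simplified
illustration: it uses gettimeofday() instead of RDTSC, and the test file
path is only a placeholder.

/*
 * Sketch: queue NCHUNKS sequential aio_read()s of CHUNK bytes each and
 * report when each one completes.  Set CHUNK to 256k and NCHUNKS to 1
 * to compare against a single large read.
 */
#include <sys/types.h>
#include <sys/time.h>
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define CHUNK   (64 * 1024)
#define NCHUNKS 4

static double
now_ms(void)
{
        struct timeval tv;

        gettimeofday(&tv, NULL);
        return (tv.tv_sec * 1000.0 + tv.tv_usec / 1000.0);
}

int
main(void)
{
        struct aiocb cb[NCHUNKS];
        const struct aiocb *list[NCHUNKS];
        char *buf;
        double start;
        int fd, i;

        fd = open("/mnt/testvol", O_RDONLY);    /* placeholder test file */
        if (fd < 0) {
                perror("open");
                return (1);
        }
        buf = malloc(CHUNK * NCHUNKS);

        /* Queue every chunk of one sequential region up front. */
        memset(cb, 0, sizeof(cb));
        start = now_ms();
        for (i = 0; i < NCHUNKS; i++) {
                cb[i].aio_fildes = fd;
                cb[i].aio_buf = buf + i * CHUNK;
                cb[i].aio_nbytes = CHUNK;
                cb[i].aio_offset = (off_t)i * CHUNK;
                if (aio_read(&cb[i]) != 0) {
                        perror("aio_read");
                        return (1);
                }
                list[i] = &cb[i];
        }

        /* Wait for each request in order and print when it completed. */
        for (i = 0; i < NCHUNKS; i++) {
                while (aio_error(&cb[i]) == EINPROGRESS)
                        aio_suspend(&list[i], 1, NULL);
                printf("chunk %d done at %.2f ms (%ld bytes)\n",
                    i, now_ms() - start, (long)aio_return(&cb[i]));
        }

        free(buf);
        close(fd);
        return (0);
}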

I've included the test results in the tgz as well.  Please ignore the
absolute latency of some of the responses, since that is a function of
the drive itself (especially the ~10 ms head seeks).  I'm more concerned
with the average latency of the subsequent completion handlers.  In the
best tests, 4 64k reads took 15-17 ms total while 1 256k read took 5-7
ms.  Since the reads are all sequential, I would expect read-ahead to
begin improving things as the test went on, but it didn't.

With "dd bs=64k if=/mnt/testvol of=/dev/null", the drive gets around 45
MB/sec sustained, which works out to around 1.5 ms per 64k block and
fits perfectly with the results for the 256k AIO reads.
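
(Spelling out the arithmetic: 65536 bytes / 45 MB/sec is about 1.4 ms
per 64k chunk, so a 256k transfer works out to roughly 6 ms, right in
the 5-7 ms range measured for the single 256k AIO read.)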

In the test results, "prio" means I used rtprio to give aioex realtime
priority (all other processes had normal priority).  I also ran the test
with max_aio_procs set to both 1 aiod and 4 aiods.  Finally, I tried
different chunk sizes (e.g. 2 requests of 128k each).  All timing was
done with RDTSC.  Be sure to update CYCLES_PER_SEC in aioex.c for your
CPU and enable "options VFS_AIO" in your kernel.  The "runit" script
will make it easier to send back results, since it prints your
configuration along with the results.
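
In case it helps when reading the code, the timing amounts to something
like this (a simplified sketch for i386; only the CYCLES_PER_SEC name
comes from aioex.c, the rest is illustrative):

/* RDTSC-based timing sketch (i386).  CYCLES_PER_SEC must match your
 * CPU clock, e.g. 500000000ULL for a 500 MHz Celeron. */
#include <stdio.h>

#define CYCLES_PER_SEC 500000000ULL     /* edit for your CPU */

static __inline unsigned long long
rdtsc(void)
{
        unsigned long long tsc;

        /* RDTSC loads the 64-bit cycle counter into EDX:EAX. */
        __asm__ __volatile__ ("rdtsc" : "=A" (tsc));
        return (tsc);
}

int
main(void)
{
        unsigned long long before, after;

        before = rdtsc();
        /* ... operation being timed goes here ... */
        after = rdtsc();

        printf("%.3f ms\n",
            (after - before) * 1000.0 / (double)CYCLES_PER_SEC);
        return (0);
}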

My initial tests seem to show that giving aioex realtime priority helps
a small amount, and limiting the system to 1 aiod helps a bit too.  (I'm
guessing the latter is because requests can complete out of order with
multiple aiods under the current scheduler.)

I'd really appreciate it if anyone could check my results and advise.
I've run this past Alan Cox, but he is currently busy.  The tests were
done on 4-STABLE with a 500 MHz Celeron, 128 MB RAM, an Adaptec 2940U2W,
and a Quantum Atlas 10K3 drive.  I use -current as well and would be
interested in others' results on that platform, hence the cc.

    http://www.root.org/~nate/aioex.tgz

Thanks,
Nate





