Date:      Fri, 19 Aug 2005 09:06:20 -0600
From:      Scott Long <scottl@samsco.org>
To:        Vasim Valejev <vasim@human-capital.ru>
Cc:        freebsd-amd64@freebsd.org
Subject:   Re: Slow PCI-X controller performance (adaptec 2130SLP)
Message-ID:  <4305F56C.8090405@samsco.org>
In-Reply-To: <00d601c5a88e$1cb8c260$2107a8c0@vasimwork>
References:  <00d601c5a88e$1cb8c260$2107a8c0@vasimwork>

Vasim Valejev wrote:
> Hi !
> 
> I've tried to set up FreeBSD-CURRENT on a dual Opteron box with an Adaptec
> 2130SLP PCI-X RAID controller (256MB memory). My tests showed that the
> maximum transfer rate from the controller to the OS was about 132MB/s. That
> is very strange, since the PCI-X maximum should be about 1GB/s. Why is the
> controller so slow? Is it a poor driver, or my testing method?
> 
> To test the transfer rate, I just ran the command "dd if=/dev/aacd1
> of=/dev/null bs=8k count=4k" many times (maybe I'm wrong, but the controller
> should cache this request in its memory). Changing hw.aac.iosize_max to 98k
> had very little effect.
> 
> My system config:
> 
> Tyan K8SD Pro (S2882-D) motherboard (onboard SATA controller turned off)
> 2x Opteron 244
> 2GB memory (2x512MB RAM sticks per CPU)
> Adaptec 2130SLP RAID controller with 256MB memory and backup battery (tried
> every PCI-X slot on the motherboard with the same results)
> Six Maxtor Atlas 15K2 36GB drives (8E036J0) on the SCSI channel (but only
> four were used for /dev/aacd1, RAID-5 with a 64K stripe)
> 
> FreeBSD 7.0-CURRENT/amd64 (also tested with 5.4-RELEASE/amd64 and
> 6.0-BETA2/i386; no difference).
> 
> Vasim V.
> 

Whenever people complain about slow performance with AAC products and 
try to point the finger at the driver, I ask them to repeat their 
benchmark under Linux and see how it compares.  Then they realize just 
how good they have it with FreeBSD.  I've worked very hard over the
years to make the FreeBSD driver the fastest driver available for
AAC cards.

Your particular test is testing the following things (a quick way to
re-run it is sketched after the list):

1) disk transfer speed
2) controller firmware efficiency
3) cache memory bandwidth on the controller
4) stripe alignment of the requests
5) PCI-X bandwidth
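As a rough starting point, and assuming the same /dev/aacd1 array from
your report, re-run the read at several block sizes.  bs=8k issues a lot
of small transactions, and dd prints bytes/sec on its own, so the
pattern is easy to see:

    # Each pass reads 512MB, which is larger than the controller's
    # 256MB cache, so no pass can be served entirely from cache.
    for pair in "8k 65536" "64k 8192" "256k 2048" "1m 512"; do
        set -- ${pair}
        echo "bs=$1 count=$2:"
        dd if=/dev/aacd1 of=/dev/null bs=$1 count=$2
    done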

Your disk transfer speed is going to be about 70MB/s per disk.  Given
that you are doing RAID-5 on 4 disks, you'll ideally get 3x70MB/s =
210MB/s.  The reason that you multiply by 3 and not 4 is that for a
given stripe, only 3 disks contain data, with the 4th disk containing
parity information.  Also note that you are dealing with Ultra320;
adding more disks than you have now will start to saturate the bus,
whose usable bandwidth is probably between 260-300MB/s depending on a
lot of factors.

The controller firmware efficiency part is hard to quantify.  Is the
firmware doing a good job of servicing requests from the OS in a timely
fashion and also servicing completions from the disks?  There have been
significant observable problems with this in the past.  Also, maybe the
controller is reading and recomputing parity with each operation.  This
is a feature of high-reliability disk systems and is not typically a
feature of Adaptec controllers, but it's a possibility.  Try running
your test with RAID-0 instead, as sketched below.
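One rough way to check, if you can afford to re-create the container
(the RAID level is set in the controller BIOS or Adaptec's management
tools, so there's no host-side command for it here): run the same
sequential read against a RAID-0 container built from the same four
disks.  If RAID-0 is dramatically faster, the firmware's RAID-5 path is
the likely bottleneck:

    # Same 2GB sequential read, first on the RAID-5 container, then
    # again after rebuilding it as RAID-0 from the same disks.
    dd if=/dev/aacd1 of=/dev/null bs=1m count=2048

    # In another terminal, watch steady-state per-device throughput:
    iostat aacd1 1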

Cache memory bandwidth is very important.  Usually when you create an
array on an Adaptec card, it defaults to having the array read cache
turned on.  I cannot stress enough how incredibly useless this feature
is.  What it means is that data coming off the drive must travel across
the controller's internal bus to the cache, then travel back across the
same bus to the PCI bridge to the host.  Since the data has to make two
trips across the same medium, latency and contention are introduced.
Read caches on RAID controllers are completely worthless in all but the
most specific tests.  Turn it off, and make sure that the write cache is
turned on (which will significantly help write performance; just make
sure that you have a battery attached to the card when doing RAID-5, or
else you'll risk data corruption).  Having the read cache on is likely
a very significant factor in your test.
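Even before touching the controller settings, you can get a feel for the
cache's effect from the host side.  A sketch (same device as above; the
32MB region is just your original bs=8k count=4k test, which fits easily
in the 256MB cache, while a 2GB pass cannot):

    # Re-read a small region: the second pass may be fed from the
    # controller's read cache (two trips across its internal bus).
    dd if=/dev/aacd1 of=/dev/null bs=8k count=4096
    dd if=/dev/aacd1 of=/dev/null bs=8k count=4096

    # Read far more than 256MB: this cannot be cache-fed, so it shows
    # what the disks and the PCI-X path can really sustain.
    dd if=/dev/aacd1 of=/dev/null bs=1m count=2048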

Stripe alignment matters because most RAID firmware is optimized to work
in terms of reading and writing full stripes of data across all disks at
the same time.  If a request is sent down that isn't aligned with a
stripe boundary, then two stripe reads/writes have to be issued,
impacting performance.  Since you are reading from the raw aacd1 device,
this probably isn't an issue, but you might try playing with different
offsets to 'dd' (sketched below) to see if it changes anything.
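A possible offset sweep (again assuming /dev/aacd1; with a 64K stripe
over 3 data disks, a full stripe is 192K, and dd's skip= is counted in
units of bs, so bs=8k skip=N starts the read N*8KB into the device):

    # Compare a stripe-aligned start against starts shifted 8KB, 32KB,
    # and 64KB into the array; each pass reads 512MB.
    for skip in 0 1 4 8; do
        echo "start offset: $((skip * 8))KB"
        dd if=/dev/aacd1 of=/dev/null bs=8k skip=${skip} count=65536
    done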

Scott


