Date:      Thu, 14 Jun 2007 20:03:20 -0400
From:      Kris Kennaway <kris@obsecurity.org>
To:        Chuck Swiger <cswiger@mac.com>
Cc:        smp@FreeBSD.org, performance@FreeBSD.org, current@FreeBSD.org, Kris Kennaway <kris@obsecurity.org>
Subject:   Re: BIND 9.4.1 performance on FreeBSD 6.2 vs. 7.0
Message-ID:  <20070615000320.GA94458@rot13.obsecurity.org>
In-Reply-To: <449EAA15-A4BC-4AAE-B3ED-B65E7A079877@mac.com>
References:  <20070614084817.GA81087@rot13.obsecurity.org> <449EAA15-A4BC-4AAE-B3ED-B65E7A079877@mac.com>

On Thu, Jun 14, 2007 at 04:53:01PM -0700, Chuck Swiger wrote:
> Hi, Kris--
>
> This was interesting, thanks for putting together the testing and
> graphs.
>
> On Jun 14, 2007, at 1:48 AM, Kris Kennaway wrote:
> >I have been benchmarking BIND 9.4.1 recursive query performance on an
> >8-core Opteron, using the resperf utility (dns/dnsperf in ports).  The
> >query data set was taken from www.freebsd.org's httpd-access.log with
> >some of the highly aggressive robot IP addresses pruned out (to avoid
> >huge numbers of repeated queries against a small subset of addresses,
> >which would skew the results).
>
> It's at least arguable that doing queries against a data set
> including a bunch of repeats is "skewed" in a more realistic
> fashion. :-)  A quick look at some of the data sources I have handy,
> such as http access logs or Squid proxy logs, suggests that (for
> example) out of a database of 17+ million requests, there were only
> 46000 unique IPs involved.

There were still lots of repeats; it's just that some of them were
repeated hundreds of thousands of times.  I stripped out about a dozen
of those (googlebots, I'm looking at you ;-), leaving a distribution
that was less biased towards the top end.
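
For what it's worth, the pruning was nothing sophisticated.  A rough
sketch of the kind of thing I mean (a hypothetical script, not the
exact one I used; it assumes a combined-format httpd-access.log with
the client IP in the first field, and writes the "name type" data
file format that resperf/dnsperf expect):

    # Drop the heaviest-hitting client IPs from an httpd access log and
    # emit PTR queries in resperf/dnsperf data file format.
    from collections import Counter

    TOP_N = 12          # roughly "about a dozen" heavy hitters to drop

    with open("httpd-access.log") as f:
        ips = [line.split()[0] for line in f if line.strip()]
    heavy = {ip for ip, _ in Counter(ips).most_common(TOP_N)}

    with open("queries.txt", "w") as out:
        for ip in ips:
            octets = ip.split(".")
            if ip in heavy or len(octets) != 4:
                continue        # skip heavy hitters and non-IPv4 entries
            # reverse the octets to form the in-addr.arpa name
            out.write(".".join(reversed(octets)) + ".in-addr.arpa PTR\n")

Something like "resperf -s <server> -d queries.txt" then replays the
resulting data file against the test box.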

> You might find it interesting to compare doing queries against your
> raw and filtered datasets, just to see what kind of difference you
> get, if any.

Cached queries perform much better, as you might expect.  As a rough
estimate, I was getting query rates exceeding 120000 qps when serving
entirely out of cache, and I don't think I had reached the upper bound
yet.

> >Testing was done over a Broadcom gigabit Ethernet cable connected
> >back-to-back between two identical machines.  named was restarted in
> >between tests to flush the cache.
>
> What was the external network connectivity in terms of speed?  The
> docs suggest you need something like 16 Mbps up / 8 Mbps down of
> connectivity in order to get up to 50K requests/sec....

I wasn't seeing anything close to this, so I guess it depends on how
much data is being returned by the queries (I was doing PTR lookups).
I forget the exact numbers, but traffic wasn't exceeding about
10 Mbit/s in either direction, which should have been well within link
capacity.  The lock profiling data also bears out the interpretation
that it was BIND that was becoming saturated and not the hardware.
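
To put rough numbers on it (back-of-the-envelope only; the packet
sizes below are assumptions about typical PTR queries and responses,
not measurements from the test):

    # Back-of-envelope DNS bandwidth; payload sizes are guesses
    # (small PTR query ~45 bytes of DNS payload, response ~100 bytes),
    # plus ~42 bytes of Ethernet/IP/UDP framing per packet.
    def mbit_per_sec(qps, payload_bytes, overhead_bytes=42):
        return qps * (payload_bytes + overhead_bytes) * 8 / 1e6

    for qps in (10000, 50000):
        print("%6d qps: ~%.0f Mbit/s queries, ~%.0f Mbit/s responses"
              % (qps, mbit_per_sec(qps, 45), mbit_per_sec(qps, 100)))

Even at the 50K requests/sec figure you mention, that works out to a
few tens of Mbit/s in each direction, far below what a gigabit link
can carry.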

> [ ... ]
> >It would be interesting to test BIND performance when acting as an
> >authoritative server, which probably has very different performance
> >characteristics; the difficulty there is getting access to a suitably
> >interesting and representative zone file and query data.
>
> I suppose you could also set up a test nameserver which claims to be
> authoritative for all of in-addr.arpa, and set up a bunch (65K?) /16
> reverse zone files, and then test against real unmodified IPs, but it
> would be easier to do something like this:
>
> Set up a nameserver which is authoritative for 1.10.in-addr.arpa (i.e.,
> the reverse zone for 10.1/16), and use a zonefile with the $GENERATE
> directive to populate your PTR records:
>
> $TTL    86400
> $origin 1.10.in-addr.arpa.
>
> @       IN      SOA     localhost. hostmaster.localhost. (
>         1       ; serial (YYYYMMDD##)
>         3h      ; Refresh 3 hours
>         1h      ; Retry   1 hour
>         30d     ; Expire  30 days
>         1d )    ; Minimum 24 hours
>
> @       NS      localhost.
>
> $GENERATE 0-255 $.0 PTR ip-10-1-0-$.example.com.
> $GENERATE 0-255 $.1 PTR ip-10-1-1-$.example.org.
> $GENERATE 0-255 $.2 PTR ip-10-1-2-$.example.net.
> ; ...etc...
>
> ...and then feed it a query database consisting of PTR lookups.  If
> you wanted to, you could take your existing IP database, and glue the
> last two octets of the real IPs onto 10.1 to produce a reasonable
> assortment of IPs to perform a reverse lookup upon.
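
The gluing step would be easy enough; something along these lines
(a hypothetical sketch, with made-up file names, assuming the client
IPs have already been extracted one per line) would map the real
addresses into your synthetic 10.1/16 reverse zone:

    # Map real client IPs onto 10.1.x.y and emit PTR queries against
    # the 1.10.in-addr.arpa zone, in resperf/dnsperf data file format.
    with open("client-ips.txt") as src, open("auth-queries.txt", "w") as out:
        for line in src:
            octets = line.strip().split(".")
            if len(octets) != 4:
                continue                # skip anything that isn't IPv4
            third, fourth = octets[2], octets[3]
            # 10.1.<third>.<fourth> reverses to <fourth>.<third>.1.10.in-addr.arpa
            out.write("%s.%s.1.10.in-addr.arpa PTR\n" % (fourth, third))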

I could construct something like this, but I'd prefer a more
"realistic" workload (i.e. an uneven distribution of queries against
different subsets of the data).  I don't have a good idea of what
"realistic" means here, which makes it hard to construct one from
scratch.  Fortunately I have an offer from someone for access to a
large real-world zone file and a large sample of queries.

Kris
