Date:      Wed, 2 Mar 2011 11:47:48 -0900
From:      Henrik Hudson <lists@rhavenn.net>
To:        freebsd-questions@freebsd.org
Subject:   Re: FreeBSD Performance
Message-ID:  <20110302204748.GA3416@alucard.int.rhavenn.net>
In-Reply-To: <E8BCB1A0-A22F-4491-9383-62BD5C96FA0A@gmail.com>
References:  <201102272143.p1RLhr0J027801@mail.r-bonomi.com> <E8BCB1A0-A22F-4491-9383-62BD5C96FA0A@gmail.com>

On Wed, 02 Mar 2011, David wrote:

> 
> On Feb 27, 2011, at 4:43 PM, Robert Bonomi wrote:
> 
> >> From owner-freebsd-questions@freebsd.org  Sun Feb 27 14:54:09 2011
> >> From: David <cyber366@gmail.com>
> >> Date: Sun, 27 Feb 2011 15:46:03 -0500
> >> To: freebsd-questions@freebsd.org
> >> Subject: FreeBSD Performance
> >> 
> >> Hello All:
> >> 
> >> I am curious... does anyone know of a reasonably priced commodity server 
> >> capable of sourcing/sinking 10 Gbps of data from/to disk via 2 x 10 GE 
> >> network interfaces? Any ideas on how hard this would be to do with 
> >> FreeBSD?
> >> 
> >> I know of a proprietary linux-based system, but looking for open-source 
> >> FreeBSD based system.

I know it's not FreeBSD, but check out Nexenta (NexentaStor
Community Edition). As long as you're willing to spend some money on
the hardware, you should be able to reach the performance you're
looking for. Basically, ZFS raidz pools with some high-end SSDs set
up as cache devices, plus JBOD controllers with 10K rpm SAS drives,
should get you 12TB without issue.

Also, FreeBSD 9 with its updated ZFS version "should" get you to
the same spot, assuming you can find a well-supported NIC, JBOD
controller, SSDs, and SAS drives. You're probably talking $10K-$30K
of hardware, depending on what you want to spend / need. However,
that would mean running CURRENT, which may or may not be what you
want.

ZFS with the right JBOD controller and memory caches really does it
just as well as a hardware RAID card, if not better, and it's much
more flexible.
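As a rough sketch of what that layout looks like in ZFS terms (device
names here are hypothetical; "cache" is the SSD read cache (L2ARC) and
"log" is the SSD intent log (ZIL)):

```shell
# Hypothetical pool: three 4-disk raidz vdevs of SAS drives on a JBOD
# controller, an SSD as read cache (L2ARC), and mirrored SSD log (ZIL).
zpool create tank \
    raidz da0 da1 da2 da3 \
    raidz da4 da5 da6 da7 \
    raidz da8 da9 da10 da11 \
    cache ada0 \
    log mirror ada1 ada2

# Verify the layout and watch per-vdev throughput while load testing:
zpool status tank
zpool iostat -v tank 5
```

Striping across multiple raidz vdevs is what gets you the aggregate
bandwidth; a single wide raidz won't.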

We've got a 20TB HA setup with Nexenta right now with a high-end
DDR3 memory SSD, a raidz SSD set, and sets of 4-disk raidz 250GB
SAS drives on JBOD controllers spread over 2 shelves, and we
outperform a similarly sized NetApp setup by a good margin.
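For what it's worth, the raw throughput requirement is easy to
sanity-check (a back-of-envelope sketch, decimal units; the 3-4
GB/sec and ~200 GB/minute figures quoted further down include
headroom for filesystem and RAID overhead):

```python
# Back-of-envelope check of the disk bandwidth and capacity figures.
GBPS_PER_LINK = 10   # each 10 GE interface, gigabits per second
NUM_LINKS = 2

total_gbps = GBPS_PER_LINK * NUM_LINKS        # 20 Gb/s aggregate
bytes_per_sec = total_gbps * 1e9 / 8          # 2.5 GB/s to disk
gb_per_minute = bytes_per_sec * 60 / 1e9      # 150 GB per minute
tb_per_hour = bytes_per_sec * 3600 / 1e12     # 9 TB per hour

print(f"{bytes_per_sec/1e9:.1f} GB/s, "
      f"{gb_per_minute:.0f} GB/min, {tb_per_hour:.0f} TB/hr")
# -> 2.5 GB/s, 150 GB/min, 9 TB/hr
```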


> 
> Thanks for the comments Robert...
> 
> > A lot depends on what you need to do with the data.
> 
> At the moment, I'm just looking to see if anyone has tried anything similar.
> I have a detailed set of requirements/results, but wanted to keep things simple initially.
> For now, let's just say there are two use cases:
> 
> 1. Record 10 Gbps of data received from 2 x 10 GE cards onto a hard disk array.
> 2. Play back 10 Gbps of data over 2 x 10 GE cards onto the network.
> 
> >  Do you need just the 'contents' of the network packets -- i.e. are you
> >  trying to send/receive a single stream of data -- or do you need 
> >  complete headers, augmented with timestamps, such that you can re-
> >  construct/replay what was 'seen on the wire'?
> 
> Just contents is fine.
> 
> >  Is the box 'dedicated' to receiving (or sending), and does -nothing-else-
> >  while that operation is in process? or do you need to sample the data in
> >  real-time as well?
> 
> Dedicated.
> 
> >  Another question is _how_long_ you need to handle the 2x10gbit/sec of 
> >  data. a few seconds? a few tens of seconds?  minutes? hours?
> 
> One hour (for now).
> 
> >  If you need to 'go to disk' in real-time, you're looking at needing
> >  at least 3-4 gigabyte/sec of bandwidth to disk.  No commodity drives 
> >  provide that kind of capacity, so you're looking at multiple drives 
> >  'in parallel' -- the logical equivalent of a 'striped' RAID array.  
> >  Probably 12-16 spindles paralleled.  Best handled with _hardware_ 
> >  raid, directly in the disk controller, but I don't know of a commodity 
> 
> Yes.
> 
> >  controller that supports enough spindles to give that bandwidth.
> >  This means one is best off doing it in the application software itself,
> >  rather than trusting the O/S to get it right.
> 
> Yes.
> 
> >   You're also looking at a _big_ disk array. Around 200 gigs for ONE 
> >  MINUTE of data.  Need 'only' an hour?  That's merely 12 terabytes.
> 
> Yes :)
> 
> > The O/S is -relatively- unimportant. <wry grin>
> 
> OK. As a recent convert to FreeBSD, I was hoping you would tell me 
> that the clean architecture and efficient implementation of FreeBSD would solve
> all of my problems :)
> 
> > You need _good_ network cards, with good drivers -- preferably ones where
> > most of the network stack can be off-loaded onto the card itself.
> 
> Yes. Something like TOE, batched interrupts, etc.
> 
> > You also need good disk controllers, ideally semi-autonomous (like SCSI),
> > with fairly large data buffers.
> 
> Yes.
> 
> OK. Thanks for the comments, that is helpful. I would be very interested
> to hear if anyone has had experience implementing a system like
> this (or close to it). I'm trying to decide whether I should try this myself
> or proceed with the current linux-based system.
> 
> 
> 
> 
> 
> 
> _______________________________________________
> freebsd-questions@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-questions
> To unsubscribe, send any mail to "freebsd-questions-unsubscribe@freebsd.org"

-- 
Henrik Hudson
lists@rhavenn.net
-----------------------------------------
"God, root, what is difference?" Pitr; UF 



