Date:      Thu, 9 Sep 2004 23:41:26 +0300
From:      Vlad GALU <dudu@diaspar.rdsnet.ro>
To:        freebsd-net@freebsd.org
Subject:   Re: [TEST/REVIEW] Netflow implementation
Message-ID:  <20040909234126.3f1c7cf3.dudu@diaspar.rdsnet.ro>
In-Reply-To: <20040909200052.GD12168@cell.sick.ru>
References:  <20040905121111.GA78276@cell.sick.ru> <4140834C.3000306@freebsd.org> <20040909171018.GA11540@cell.sick.ru> <414093DE.A6DC6E67@freebsd.org> <Pine.BSF.4.53.0409091743120.51837@e0-0.zab2.int.zabbadoz.net> <41409CB5.836DE816@freebsd.org> <20040909193507.GA12168@cell.sick.ru> <4140B603.8E979D72@freebsd.org> <20040909200052.GD12168@cell.sick.ru>

On Fri, 10 Sep 2004 00:00:52 +0400
Gleb Smirnoff <glebius@freebsd.org> wrote:

> On Thu, Sep 09, 2004 at 09:58:59PM +0200, Andre Oppermann wrote:
> A> Do you really log all Netflow packets to disk to be able to provide
> A> details to the customer?  Or do you aggregate the details on the
> A> collector?
> 
> Full netflow dumps are stored on disk for about 2-3 months, aggregated
> data goes into billing programs and is stored for years. This is
> common practice here.

	This made me raise my eyebrow. I wrote a small tool that we use in
production at RDS: http://freshmeat.net/projects/glflow. The way I
designed it, it cleans up the flow tree once in a while and removes
'old' flows (those that haven't had any packet matching them in the
last X seconds). The problem is that I currently have about 400-500k
active flows on a 700 Mbps link, and every 10 seconds the software
removes about 100-200k of them in no more than 0.2-0.3 seconds. Of
course, I couldn't possibly send them over a socket somewhere else at
that speed, so instead I open a tempfile, mmap() it and write the
expired flows into the buffer. When the buffer exceeds a
programmatically chosen number of packets, it is msync()-ed,
munmap()-ed and a new file is opened.
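
	For concreteness, here is a minimal sketch of that mmap()-backed
dump buffer. The record layout (struct flow_rec), the rotation constants
and the helper names are made up for illustration; they are not glflow's
actual code:

/*
 * Sketch of an mmap()-backed dump buffer for expired flows.
 * struct flow_rec and the constants below are illustrative only.
 */
#include <sys/mman.h>
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

struct flow_rec {               /* hypothetical expired-flow record */
        uint32_t src, dst;
        uint16_t sport, dport;
        uint64_t packets, bytes;
};

#define DUMP_RECS  262144       /* rotate after this many records */
#define DUMP_BYTES (DUMP_RECS * sizeof(struct flow_rec))

static int              dump_fd = -1;
static struct flow_rec *dump_map;
static size_t           dump_cnt;

static int
dump_open(const char *path)
{
        dump_fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0644);
        if (dump_fd == -1)
                return (-1);
        if (ftruncate(dump_fd, DUMP_BYTES) == -1)
                return (-1);
        dump_map = mmap(NULL, DUMP_BYTES, PROT_READ | PROT_WRITE,
            MAP_SHARED, dump_fd, 0);
        if (dump_map == MAP_FAILED)
                return (-1);
        dump_cnt = 0;
        return (0);
}

static void
dump_close(void)
{
        /* flush only the part actually written, then unmap */
        msync(dump_map, dump_cnt * sizeof(struct flow_rec), MS_SYNC);
        munmap(dump_map, DUMP_BYTES);
        /* shrink the file to what was used before closing it */
        ftruncate(dump_fd, dump_cnt * sizeof(struct flow_rec));
        close(dump_fd);
}

/* called for every expired flow; rotates to a new file when full */
static void
dump_flow(const struct flow_rec *fr, const char *next_path)
{
        dump_map[dump_cnt++] = *fr;
        if (dump_cnt == DUMP_RECS) {
                dump_close();
                dump_open(next_path);
        }
}
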
	Do you happen to have a better storage model? I've been trying to
dump these binary files into SQL, but for a 42 MB binary log the
necessary SQL storage came to about 150 MB, which is a bit beyond
reasonable, considering that the software dumps a binary file every 5
to 10 seconds.

P.S. I haven't yet tried aggregating the flows between reading them
from the binary file and inserting the data into SQL. I assumed it
would take too much time to keep up with the newly created dumps.
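
	In case it helps the discussion, a rough sketch of what such
pre-aggregation could look like: flow records sharing the same src/dst
pair are summed before the SQL insert, so far fewer rows get written.
The names and the fixed-size hash table are illustrative only, not
existing glflow code:

#include <stdint.h>

#define AGG_BUCKETS 65536

struct agg_ent {
        uint32_t src, dst;
        uint64_t packets, bytes;
        int      used;
};

static struct agg_ent agg_tab[AGG_BUCKETS];

/* add one expired flow record into the aggregation table */
static void
agg_add(uint32_t src, uint32_t dst, uint64_t packets, uint64_t bytes)
{
        uint32_t h = (src ^ dst) % AGG_BUCKETS;
        uint32_t probes = 0;

        /* open addressing with linear probing */
        while (agg_tab[h].used &&
            (agg_tab[h].src != src || agg_tab[h].dst != dst)) {
                if (++probes == AGG_BUCKETS)
                        return;         /* table full; drop (sketch only) */
                h = (h + 1) % AGG_BUCKETS;
        }

        agg_tab[h].src = src;
        agg_tab[h].dst = dst;
        agg_tab[h].packets += packets;
        agg_tab[h].bytes += bytes;
        agg_tab[h].used = 1;
}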

> 
> -- 
> Totus tuus, Glebius.
> GLEBIUS-RIPN GLEB-RIPE
> _______________________________________________
> freebsd-net@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-net
> To unsubscribe, send any mail to "freebsd-net-unsubscribe@freebsd.org"
> 


----
If it's there, and you can see it, it's real.
If it's not there, and you can see it, it's virtual.
If it's there, and you can't see it, it's transparent.
If it's not there, and you can't see it, you erased it.
