Date:      Thu, 04 Oct 2007 08:43:31 -0400
From:      Steve Bertrand <iaccounts@ibctech.ca>
To:        questions@freebsd.org
Subject:   Managing very large files
Message-ID:  <4704DFF3.9040200@ibctech.ca>

Hi all,

I've got a 28GB tcpdump capture file that I need to break down into a
series of files of roughly 100,000 lines each, preferably without having
to read the entire file into memory at once.

I need to run a few Perl processes on the data in the file, but AFAICT,
doing so on the entire original file is asking for trouble.

Is there any way to accomplish this, preferably with the ability to
incrementally name each newly created file?
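For what it's worth, here is a minimal sketch of one way to do this in
Perl, assuming the capture has already been written out as plain text;
the input name, output prefix, and chunk size are placeholders, not
anything from the capture itself. It reads one line at a time, so the
whole file never sits in memory, and it numbers each output file
incrementally:

#!/usr/bin/perl
# Sketch: split a large text file into ~100,000-line chunks,
# streaming line by line rather than slurping the whole file.
use strict;
use warnings;

my $lines_per_chunk = 100_000;
my $in_file         = 'capture.txt';   # placeholder input name
my $prefix          = 'capture';       # placeholder output prefix

open my $in, '<', $in_file or die "Cannot open $in_file: $!";

my ($count, $chunk) = (0, 0);
my $out;

while (my $line = <$in>) {
    # Start a new output file every $lines_per_chunk lines,
    # named capture.000, capture.001, and so on.
    if ($count % $lines_per_chunk == 0) {
        close $out if $out;
        my $name = sprintf '%s.%03d', $prefix, $chunk++;
        open $out, '>', $name or die "Cannot open $name: $!";
    }
    print {$out} $line;
    $count++;
}

close $out if $out;
close $in;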

TIA,

Steve


