Date:      Thu, 4 Oct 2007 16:51:30 -0400
From:      Jerry McAllister <jerrymc@msu.edu>
To:        freebsd-questions@freebsd.org
Subject:   Re: Managing very large files
Message-ID:  <20071004205130.GB89374@gizmo.acns.msu.edu>
In-Reply-To: <20071003225108.GB46149@demeter.hydra>
References:  <4704DFF3.9040200@ibctech.ca> <20071003200013.GD45244@demeter.hydra> <47054A1D.2000701@ibctech.ca> <200710042222.25488.wundram@beenic.net> <47054C2E.8040304@ibctech.ca> <20071003225108.GB46149@demeter.hydra>

On Wed, Oct 03, 2007 at 04:51:08PM -0600, Chad Perrin wrote:

> On Thu, Oct 04, 2007 at 04:25:18PM -0400, Steve Bertrand wrote:
> > Heiko Wundram (Beenic) wrote:
> > > On Thursday, 04 October 2007 22:16:29, Steve Bertrand wrote:
> > >> This is what I am afraid of. Just out of curiosity, if I did try to read
> > >> the entire file into a Perl variable all at once, would the box panic,
> > >> or as the saying goes 'what could possibly go wrong'?
> > > 
> > > Perl almost certainly wouldn't make the box panic (at least I hope
> > > not :-)), but it would barf and quit at the point where it can't
> > > allocate any more memory (because all memory is in use).  Meanwhile,
> > > your swap would have filled up completely and the box would have
> > > become totally unresponsive, which clears up the instant the Perl
> > > process dies or quits.
> > > 
> > > Try it. ;-) (at your own risk)
> > 
> > LOL, on a production box?...nope.
> > 
> > Hence why I asked here, probing if someone has made this mistake before
> > I do ;)
> > 
> > The reason for the massive file size was my haste in running out of the
> > office on Friday and forgetting to kill the tcpdump process before the
> > weekend began.
> 
> Sounds like you may want a Perl script to automate managing your
> tcpdumps.
> 
> Just a thought.

Yes.
Actually, you can open that file in Perl, read it sequentially, and
write the data back out to smaller files in chunks of whatever size
you want.  Make up each output name with a counter in it to produce
the many chunk files.  As you go, you can also pull off whatever data
or statistics you want to accumulate, and you could even decide some
of it isn't worth keeping and cut the chunk sizes down further.  But
you have to close each chunk file as you finish it, or you will run
out of file descriptors.  So the loop needs a counter tracking how
much has been written to the current chunk, plus an open and a close
for every chunk file.

////jerry

> 
> -- 
> CCD CopyWrite Chad Perrin [ http://ccd.apotheon.org ]
> Kent Beck: "I always knew that one day Smalltalk would replace Java.  I
> just didn't know it would be called Ruby."
> _______________________________________________
> freebsd-questions@freebsd.org mailing list
> http://lists.freebsd.org/mailman/listinfo/freebsd-questions
> To unsubscribe, send any mail to "freebsd-questions-unsubscribe@freebsd.org"


