Date:      Fri, 05 Oct 2007 08:33:47 -0400
From:      Steve Bertrand <iaccounts@ibctech.ca>
To:        Jorn Argelo <jorn@wcborstel.com>
Cc:        FreeBSD Questions <freebsd-questions@freebsd.org>
Subject:   Re: Managing very large files
Message-ID:  <47062F2B.60208@ibctech.ca>
In-Reply-To: <4705F12B.1000501@wcborstel.com>
References:  <4704DFF3.9040200@ibctech.ca>	<200710041458.22743.wundram@beenic.net>	<20071003200013.GD45244@demeter.hydra> <47054A1D.2000701@ibctech.ca> <4705F12B.1000501@wcborstel.com>

> Check out Tie::File on CPAN. This Perl module treats every line in a
> file as an array element, and the array element is loaded into memory
> when it's being requested. In other words: This will work great with
> huge files such as these, as not the entire file is loaded into memory
> at once.
> 
> http://search.cpan.org/~mjd/Tie-File-0.96/lib/Tie/File.pm

Thanks to everyone who replied to me regarding this issue.

The above appears to be my best approach.

Although I have not yet had time to look into Tie::File (I've never
used that module before), I will.

So long as I can read chunks of the file, load the data into variables
(I like the array approach above), and process each chunk independently
without loading the entire file into memory at once, this should do
exactly what I need.
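For anyone searching the archives later, the approach above might look
something like the following. This is only a minimal sketch: the filename
"records.txt" is made up, and here a tiny stand-in file is created so the
snippet runs as-is, but the same tie works on a multi-gigabyte file since
Tie::File fetches records from disk on demand.

```perl
#!/usr/bin/perl
# Sketch of lazy, line-at-a-time access to a large file via Tie::File.
use strict;
use warnings;
use Tie::File;

# "records.txt" is a hypothetical filename; create a tiny stand-in so
# the example is runnable. With a real multi-GB file, skip this part.
my $file = 'records.txt';
open my $fh, '>', $file or die "open: $!";
print $fh "alpha\nbravo\ncharlie\n";
close $fh;

# The optional memory => N argument caps Tie::File's internal cache
# (in bytes); records are read on demand, so the whole file is never
# slurped into memory at once.
tie my @lines, 'Tie::File', $file, memory => 20_000_000
    or die "tie: $!";

for my $i (0 .. $#lines) {
    my $line = $lines[$i];    # only this record is fetched from disk
    print "record $i: $line\n";
}

untie @lines;
```

Note that with the default autochomp behaviour the record separator is
stripped, so each array element is the bare line; writing back to an
element rewrites that line in the file.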

Tks!

Steve


