Date:      Fri, 4 Nov 2005 23:04:22 +0300
From:      "Andrew P." <infofarmer@gmail.com>
To:        Kirk Strauser <kirk@strauser.com>
Cc:        freebsd-questions@freebsd.org
Subject:   Re: Fast diff command for large files?
Message-ID:  <cb5206420511041204y6a4120eq5198f4f1fd4426de@mail.gmail.com>
In-Reply-To: <200511041129.17912.kirk@strauser.com>
References:  <200511040956.19087.kirk@strauser.com> <436B8ADF.4000703@mac.com> <200511041129.17912.kirk@strauser.com>

On 11/4/05, Kirk Strauser <kirk@strauser.com> wrote:
> On Friday 04 November 2005 10:22, Chuck Swiger wrote:
>
> > Multigigabyte?  Find another approach to solving the problem, a text-based
> > diff is going to require excessive resources and time.  A 64-bit platform
> > with 2 GB of RAM & 3GB of swap requires ~1000 seconds to diff ~400MB.
>
> There really aren't many options.  For the patient, here's what's happening:
>
> Our legacy application runs on FoxPro.  Our web application runs on a
> PostgreSQL database that's a mirror of the FoxPro tables.
>
> We do the mirroring by running a program that dumps the FoxPro tables out as
> tab-delimited files.  Thus far, we'd been using PostgreSQL's "copy from"
> command to read those files into the database.  In reality, though, a very,
> very small percentage of rows in those tables actually change.  So, I wrote
> a program that takes the output of diff and converts it into a series of
> "delete" and "insert" commands; benchmarking shows that this is roughly 300
> times faster in our use.
>
> And that's why I need a fast diff.  Even if it takes as long as the database
> bulk loads, we can run it on another server and use 20 seconds of CPU for
> PostgreSQL instead of 45 minutes.  The practical upshot is that the
> database will never get sluggish, even if the other "diff server" is loaded
> to the gills.
> --
> Kirk Strauser
>
>
>
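[For readers of the archive: Kirk doesn't post his converter, but the approach
he describes can be sketched roughly as follows. The table name, column layout,
and the assumption that the first tab-separated field is the primary key are
all illustrative, and a real version would need proper SQL quoting/escaping.]

```python
def diff_to_sql(diff_lines, table, key_col):
    """Turn plain `diff old.tsv new.tsv` output into SQL statements.

    Hypothetical sketch: lines starting with "< " exist only in the old
    dump (row removed or changed -> DELETE); lines starting with "> "
    exist only in the new dump (row added or changed -> INSERT).
    Assumes the first tab-separated field is the primary key and does
    no SQL escaping, so it is illustrative only.
    """
    stmts = []
    for line in diff_lines:
        if line.startswith('< '):
            key = line[2:].rstrip('\n').split('\t')[0]
            stmts.append(f"DELETE FROM {table} WHERE {key_col} = '{key}';")
        elif line.startswith('> '):
            fields = line[2:].rstrip('\n').split('\t')
            vals = ", ".join(f"'{f}'" for f in fields)
            stmts.append(f"INSERT INTO {table} VALUES ({vals});")
    return stmts
```

A changed row shows up as both a "<" and a ">" line, so it becomes a DELETE
followed by an INSERT, which matches the delete/insert scheme described above.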

Does the overall order of lines change every time
you dump the tables? If not, is there any inexpensive
way to sort them (not alphabetically, just so the
order stays the same between dumps)? If the order is
or can be made stable, then there's a trivial solution
(a few lines of Perl, or a hundred lines of C) that
will run at roughly the speed of I/O.
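[The trivial solution Andrew alludes to is a single merge-style pass over the
two dumps. A minimal sketch, assuming both files are sorted by the same
comparison so lines can be compared directly:]

```python
def sorted_diff(old_lines, new_lines):
    """Yield ('-', line) for rows only in old, ('+', line) for rows only in new.

    Assumes both inputs are sorted identically (the stable ordering
    discussed above); a single merge pass then finds every difference,
    so the cost is dominated by reading the two files once.
    """
    old, new = iter(old_lines), iter(new_lines)
    a, b = next(old, None), next(new, None)
    while a is not None or b is not None:
        if a == b:
            # Row unchanged: advance both streams.
            a, b = next(old, None), next(new, None)
        elif b is None or (a is not None and a < b):
            # Row present only in the old dump: it was deleted.
            yield ('-', a)
            a = next(old, None)
        else:
            # Row present only in the new dump: it was inserted.
            yield ('+', b)
            b = next(new, None)
```

Because it never holds more than one line per file in memory, this handles
multigigabyte dumps in O(1) space, unlike a general-purpose diff.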


