Date:      Mon, 07 Nov 2005 11:40:46 -0500
From:      "francisco@natserv.net" <francisco@natserv.net>
To:        Kirk Strauser <kirk@strauser.com>
Cc:        freebsd-questions@freebsd.org
Subject:   Re: Fast diff command for large files?
Message-ID:  <cone.1131381646.500858.17113.1000@zoraida.natserv.net>
References:  <200511040956.19087.kirk@strauser.com> <436B8ADF.4000703@mac.com> <200511041129.17912.kirk@strauser.com>

Kirk Strauser writes:

> Our legacy application runs on FoxPro.  Our web application runs on a 
> PostgreSQL database that's a mirror of the FoxPro tables.

I had the same setup a while back.
A few suggestions.
* Add a date/changed field in FoxPro and update it on every write.
* If only recent records are updated, only copy those over.
* Add a "changed" flag in the FoxPro tables.
* Load the entire FoxPro tables every time and do a delete/reload within a 
single transaction.
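To illustrate the last point, here's a minimal sketch of a delete/reload inside one transaction. It uses Python's sqlite3 and a made-up `mirror` table just for demonstration; the same pattern applies to a PostgreSQL mirror loaded from rows exported out of FoxPro.

```python
import sqlite3

# Hypothetical mirror table standing in for the PostgreSQL copy.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mirror (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO mirror VALUES (1, 'stale'), (2, 'stale')")

# Fresh rows as they would come out of the FoxPro export (assumption).
fresh_rows = [(1, "alice"), (2, "bob"), (3, "carol")]

# Delete and reload inside a single transaction: readers see either the
# old snapshot or the new one, never an empty or half-loaded table.
with conn:  # commits on success, rolls back on any exception
    conn.execute("DELETE FROM mirror")
    conn.executemany("INSERT INTO mirror VALUES (?, ?)", fresh_rows)

print(conn.execute("SELECT COUNT(*) FROM mirror").fetchone()[0])  # → 3
```

The `with conn:` block is what makes the reload atomic; if the insert fails partway, the delete is rolled back too.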

The ideal situation is if you can somehow segregate your older, unchangeable 
data, and copy only the recent records every day, even if they have 
not changed.
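A sketch of that incremental copy, again with sqlite3 and an invented `changed` date column (the column name and the way `last_sync` is persisted are assumptions, not part of the original setup):

```python
import sqlite3

# Hypothetical source table with a "changed" date maintained on the
# FoxPro side; the sync job copies only rows touched since the last run.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE source (id INTEGER PRIMARY KEY, name TEXT, changed TEXT)"
)
conn.executemany("INSERT INTO source VALUES (?, ?, ?)", [
    (1, "old",   "2005-01-01"),
    (2, "fresh", "2005-11-06"),
    (3, "fresh", "2005-11-07"),
])

# Date of the previous successful sync, persisted somewhere (assumption).
last_sync = "2005-11-01"

# ISO date strings compare correctly as text, so a plain > works here.
recent = conn.execute(
    "SELECT id, name FROM source WHERE changed > ?", (last_sync,)
).fetchall()
print(recent)  # only rows 2 and 3 would be copied over to PostgreSQL
```

Only the `recent` rows get shipped to PostgreSQL each night, which is why segregating the old, unchangeable data pays off.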

What type of system is this? In particular, can any record be modified, or 
are only recent records changed?


