From owner-freebsd-questions@FreeBSD.ORG Mon Apr 3 12:38:41 2006
Message-ID: <4431174C.8020506@axis.nl>
Date: Mon, 03 Apr 2006 14:38:36 +0200
From: Olaf Greve <o.greve@axis.nl>
To: "Daniel A."
Cc: freebsd-questions
Subject: Re: How can I increase the shell's (or specific application's) memory limit?

Hi Daniel,

> Generally, I think it's bad programming practice to retrieve such big
> datasets if it is possible to do otherwise.

I definitely agree that it is bad practice, and in that respect I'm inclined towards doing batch loading, as you suggest, too.
However, there's some data aggregation I'll have to take into account, and as it involves testing for the presence of specific tables in a merge table set, I'd have to rewrite part of that logic. All doable, of course, and no big issue either, but it would be a lot faster for me if I could simply increase the memory limit... Still, I very much hear you, and I know that what you suggest _is_ the proper approach, so I may end up doing that too. ;)

Also: there is another, perhaps more elegant (read: robust) way: a hybrid solution between the PHP script and mysqldump. I can then use PHP to work out the batches, and retrieve them using a (set of) command-line mysqldump call(s). The generated batches can then be dumped directly into the proper merge tables. The only catch is that I currently left join data directly into the merge tables, so I'd first have to do a blunt dump of the left-hand side of the data, then of the right-hand side(s) (both to temp tables), and afterwards left join them into the eventual merge tables. This is the main reason why I hadn't chosen this solution: at present I can combine all of these steps in one query... :/

If someone knows a clean way to increase the memory limit, I'd be happy to hear about it. If not, I'll do some rewriting...

Cheers,
Olafo
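
PS: a rough sketch of what I mean by the mysqldump batching, in case it helps the discussion. The database, table, and column names and the id range are made up; it assumes a numeric primary key to slice on:

```shell
#!/bin/sh
# Sketch: pull a big table out in fixed-size slices with mysqldump --where,
# so no single step has to hold the whole dataset in memory.
# DB/TABLE/column names and the 0..900000 id range are placeholder assumptions.
DB=mydb
TABLE=big_source_table
STEP=100000
MAX=900000

start=0
while [ "$start" -le "$MAX" ]; do
    end=$((start + STEP - 1))
    # --no-create-info: data only, since the target merge tables already exist
    mysqldump --no-create-info \
        --where="id BETWEEN ${start} AND ${end}" \
        "$DB" "$TABLE" >> batches.sql
    start=$((start + STEP))
done
```

Each slice could of course also be piped straight into a mysql client for the temp tables instead of being appended to a file.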
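
PPS: for completeness, these are the knobs I've been looking at so far for the memory limit itself. A sketch only; the 512M value is a placeholder, not a recommendation:

```shell
#!/bin/sh
# Sketch of the usual memory-limit knobs for a PHP CLI script on FreeBSD.

# 1) PHP's own limit, overridable per run without touching php.ini:
#      php -d memory_limit=512M script.php
#    or inside the script: ini_set('memory_limit', '512M');

# 2) The shell's data-segment limit, inherited by child processes:
ulimit -d unlimited 2>/dev/null || echo "raising ulimit -d needs more headroom"
ulimit -d

# 3) Persistent per-login-class limits: datasize-cur / datasize-max in
#    /etc/login.conf, then rebuild with `cap_mkdb /etc/login.conf` and re-login.
```

Raising the shell limit only helps if PHP's own memory_limit is raised as well, since PHP enforces its limit before the OS one is ever hit.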