From: Adam Vande More <amvandemore@gmail.com>
To: Peter Ross
Cc: freebsd-emulation@freebsd.org
Date: Wed, 13 Jul 2011 22:02:12 -0500
Subject: Re: Network problems while running VirtualBox
List-Id: Development of Emulators of other operating systems (freebsd-emulation@freebsd.org)

On Wed, Jul 13, 2011 at 8:55 PM, Peter Ross wrote:

> I am running
> named on the same box. I have seen some errors there over time as well:
>
> Apr 13 05:17:41 bind named[23534]: internal_send: 192.168.50.145#65176:
> Cannot allocate memory
> Jun 21 23:30:44 bind named[39864]: internal_send: 192.168.50.251#36155:
> Cannot allocate memory
> Jun 24 15:28:00 bind named[39864]: internal_send: 192.168.50.251#28651:
> Cannot allocate memory
> Jun 28 12:57:52 bind named[2462]: internal_send: 192.168.165.154#1201:
> Cannot allocate memory
> Jul 13 19:43:05 bind named[4032]: internal_send: 192.168.167.147#52736:
> Cannot allocate memory
>
> coming from a sendmsg(2).
>
> My theory there is: my scp sends a lot of data at the same time, while
> named is sending a lot of data over time - both increasing the likelihood
> of the error.

That doesn't really answer the question of whether using a different ssh
binary helps, but I'm guessing it won't. You can try different scp options
(encryption algorithm, compression, -l, and -v) to see if any clues are
gained.

>> Do you have any more info about the threshold of file size for when this
>> problem starts occurring? Is it always the same?
>
> No, it varies. Usually after a few GB. E.g. the last one lasted 11GB, but
> I have had failures below 8GB transferred before.

My machine specs are fairly similar to yours, although this is mostly a
desktop system (virtualbox-ose-4.0.10). I am unable to reproduce this error
after several attempts at scp'ing a 20GB /dev/random file around. I assume
this would have been enough to trigger it on your system?

>> E.g. if VBox has 2 GB mapped out and you get an error at a certain file
>> size, does reducing the VBox memory footprint allow a larger file to be
>> successfully sent?
>
> Given that the amount of data is random just now, I cannot imagine how to
> get reliable numbers in this experiment.

I suspect this has less to do with actual memory and more to do with some
other buffer-like bottleneck. Does tuning any of the network buffers make
any difference?
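A couple of queue/buffer limits worth checking are listed just below; it can
help to record their current values before and after any change so you know
what you are comparing against. Something like this sketch (plain sh; it
just reports "not available" for any OID the running system doesn't have):

```shell
#!/bin/sh
# Print the current value of each queue/buffer limit mentioned in this
# thread. The sysctl names are FreeBSD's; on systems that lack an OID
# the loop reports "not available" instead of failing.
show_limits() {
    for oid in net.inet.ip.intr_queue_maxlen \
               net.link.ifqmaxlen \
               kern.ipc.nmbclusters; do
        printf '%s = %s\n' "$oid" \
            "$(sysctl -n "$oid" 2>/dev/null || echo 'not available')"
    done
}

show_limits
```

While reproducing the scp failure, `netstat -m` is also worth watching for
"requests for mbufs denied", since exhausted mbuf clusters would fit the
ENOBUFS-style symptoms described.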
A couple to try:

net.inet.ip.intr_queue_maxlen
net.link.ifqmaxlen
kern.ipc.nmbclusters

If possible, does changing the VM from bridged to NAT networking, or
vice-versa, result in any behavior change?

--
Adam Vande More