From: miyamoto moesasji <miyamoto.31b@gmail.com>
To: freebsd-stable@freebsd.org
Date: Sat, 1 Jan 2011 22:52:15 +0000
Subject: tmpfs runs out of space on 8.2-PRERELEASE, zfs related?

While setting up tmpfs (so not tmpmfs) on a machine running ZFS (v15, zfs v4) on 8.2-PRERELEASE, I run out of space on the tmpfs when copying a ~4.6 GB file from the ZFS filesystem to the memory disk. The machine has 8 GB of memory backed by swap on the hard disk, so I expected the file to copy to memory without problems. In detail, this is what happens.

Right after rebooting the machine, the tmpfs has 8 GB available:

---
hge@PulsarX4:~/ > df -hi /tmp
Filesystem    Size    Used   Avail  Capacity  iused  ifree  %iused  Mounted on
tmpfs         8.2G     12K    8.2G      0%       19    39M     0%   /tmp
---

Subsequently copying a ~4.6 GB file from a location in the ZFS pool to the memory filesystem fails with a "no space left" message:

---
hge@PulsarX4:~/ > cp ~/temp/large.iso /tmp/large_file
cp: /tmp/large_file: No space left on device
---

After this the tmpfs has shrunk to just 2.7 GB, obviously much less than the 8.2 GB available before the copy operation. At the same time there are still free inodes left, so that does not appear to be the problem. Output of df after the copy:

---
hge@PulsarX4:~/ > df -hi /tmp
Filesystem    Size    Used   Avail  Capacity  iused  ifree  %iused  Mounted on
tmpfs         2.7G    2.7G    1.4M    100%       19   6.4k     0%   /tmp
---

A quick search turns up the following bug report for Solaris:
http://bugs.opensolaris.org/bugdatabase/view_bug.do;jsessionid=e4ae9c32983000ef651e38edbba1?bug_id=6804661
It appears closely related: there too a file larger than 50% of memory is copied to a tmpfs, and the way to reproduce it looks identical to what I did here.
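For completeness: /tmp is currently mounted with the tmpfs defaults (no explicit size), so its reported size follows whatever memory appears free. One thing I have not tried yet is giving the mount a fixed upper bound via the tmpfs "size" mount option; a minimal fstab sketch (the 6g value is only an example, not something I have tested on this box):

---
# /etc/fstab -- example only: pin the tmpfs to a fixed size instead of
# letting it follow the free-memory estimate
tmpfs   /tmp   tmpfs   rw,mode=1777,size=6g   0   0
---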
As it might help spot the problem, below is the information on the ZFS ARC size taken from the output of zfs-stats.

Before the copy:

---
System Memory Statistics:
        Physical Memory:                        8161.74M
        Kernel Memory:                           511.64M
        DATA:                        94.27%      482.31M
        TEXT:                         5.73%       29.33M

ARC Size:
        Current Size (arcsize):       5.88%      404.38M
        Target Size (Adaptive, c):  100.00%     6874.44M
        Min Size (Hard Limit, c_min): 12.50%     859.31M
        Max Size (High Water, c_max):  ~8:1     6874.44M
---

After the copy:

---
System Memory Statistics:
        Physical Memory:                        8161.74M
        Kernel Memory:                          3326.98M
        DATA:                        99.12%     3297.65M
        TEXT:                         0.88%       29.33M

ARC Size:
        Current Size (arcsize):      46.99%     3230.55M
        Target Size (Adaptive, c):  100.00%     6874.44M
        Min Size (Hard Limit, c_min): 12.50%     859.31M
        Max Size (High Water, c_max):  ~8:1     6874.44M
---

Unfortunately I have difficulty interpreting this much further, so suggestions on how to prevent this behaviour (or how to troubleshoot it further) would be appreciated, as my feeling is that this should not happen.
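My working guess from the numbers above is that the ARC growing from ~400M to ~3.2G during the copy is what eats the space the tmpfs thought it still had. If that is right, one thing I could try (untested, and the 4G value below is just a guess for this 8 GB box) is capping the ARC in /boot/loader.conf:

---
# /boot/loader.conf -- example only: limit the ZFS ARC so it cannot grow
# into memory that tmpfs is counting on (takes effect after a reboot)
vfs.zfs.arc_max="4G"
---

But I would rather understand whether tmpfs is really supposed to shrink like this before papering over it with a tunable.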