From owner-freebsd-questions@freebsd.org Wed Sep 9 15:45:32 2015
Subject: Re: Storage question
From: Paul Kraus
Date: Wed, 9 Sep 2015 11:45:27 -0400
To: FreeBSD Questions

On Sep 9, 2015, at 11:15, William A. Mahaffey III wrote:

> I have heard that filling your zpool is a *BAD* thing,

Filling a zpool is _fatal_. With copy-on-write and no free space you cannot even remove a file.

> but it can be for any FS, just maybe a bit worse for ZFS.

I create a dummy dataset (filesystem) with a reservation of 1 GB, a quota of 1 GB, and no mount point, and call it "do-not-remove", or "dnr" for short :-) If the zpool is small I make the quota and reservation 512 MB. If I accidentally fill the zpool, I can then _stop_ whatever is filling it, remove the reservation from "dnr", and proceed to remove files.

The other thing to remember is that ZFS (without compression) is _less_ space efficient.
The checksums and metadata take up more room than for UFS. ZFS also has a steep performance drop when you pass a certain percentage in use, and that is NOT a fixed number; it varies with workload. For my VM hosts I need to keep the zpool less than 85-90% full, or performance drops so far that the VMs' I/O starts timing out.

> I am going to study that option a bit more. The online docs all seem to show swap within the zpool as well; does that work OK, performance-wise? It would simplify installation; however, I am planning to script that, so a bit of 'extra' effort for separate swap partitions is not an issue. I have always thought that separate swap partitions, directly kernel managed, were the best for swap performance if/when it gets down to that, no?

Swap on a zvol is a bad idea (as you may have already found). The issue, as best I can tell, is that ZFS grabs all the RAM it can, while swap is used in low-RAM situations; there are times when swap is trying to write to the zvol just as ZFS is trying to decrease its RAM usage (because of memory pressure), so its performance drops. Functionally it becomes a race state that ends badly.

--
Paul Kraus
paul@kraus-haus.org
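For the separate, kernel-managed swap partition the poster favors, a dedicated freebsd-swap partition listed in /etc/fstab is all that is needed. The device name `ada0p3` below is hypothetical; substitute your own partition:

```
# /etc/fstab entry for a dedicated swap partition,
# swapped on directly by the kernel (no zvol involved)
/dev/ada0p3	none	swap	sw	0	0
```

The kernel then pages to the raw partition without going through ZFS, avoiding the memory-pressure feedback loop described above.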
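The "do-not-remove" reservation trick above can be sketched with standard zfs(8) commands. The pool name `tank` is a placeholder, not from the thread; these need a live ZFS pool and root privileges to actually run:

```shell
# Pin 1 GB of pool space in an empty, unmountable dataset
# (use 512M for both properties on a small pool).
zfs create -o reservation=1G -o quota=1G -o mountpoint=none tank/dnr

# If the pool ever fills, drop the reservation to get
# breathing room, then delete files as usual:
zfs set reservation=none tank/dnr
```

The quota keeps anything from ever writing into the dataset, while the reservation guarantees the pool always holds that much space back.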
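To stay under the 85-90% threshold mentioned above, the fill level can be checked from a cron job. A minimal sketch in portable sh: the `printf` line and pool names are made-up sample data standing in for real `zpool list -H -o name,capacity` output:

```shell
# Parse "name capacity" pairs and warn past 85% full.
# On a real system, replace the printf with:
#   zpool list -H -o name,capacity |
printf 'tank 87%%\nbackup 42%%\n' |
while read -r pool cap; do
    pct=${cap%\%}               # strip the trailing "%"
    if [ "$pct" -ge 85 ]; then
        echo "WARNING: $pool is ${pct}% full"
    fi
done
```

With the sample input this prints `WARNING: tank is 87% full` and stays silent for the pool at 42%.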