From owner-svn-doc-projects@FreeBSD.ORG Tue Aug 13 05:46:43 2013 Return-Path: Delivered-To: svn-doc-projects@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id C5A5EC14; Tue, 13 Aug 2013 05:46:43 +0000 (UTC) (envelope-from gabor@FreeBSD.org) Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id B3BC629CC; Tue, 13 Aug 2013 05:46:43 +0000 (UTC) Received: from svn.freebsd.org ([127.0.1.70]) by svn.freebsd.org (8.14.7/8.14.7) with ESMTP id r7D5kh1T042427; Tue, 13 Aug 2013 05:46:43 GMT (envelope-from gabor@svn.freebsd.org) Received: (from gabor@localhost) by svn.freebsd.org (8.14.7/8.14.5/Submit) id r7D5khRh042426; Tue, 13 Aug 2013 05:46:43 GMT (envelope-from gabor@svn.freebsd.org) Message-Id: <201308130546.r7D5khRh042426@svn.freebsd.org> From: Gabor Kovesdan Date: Tue, 13 Aug 2013 05:46:43 +0000 (UTC) To: doc-committers@freebsd.org, svn-doc-projects@freebsd.org Subject: svn commit: r42534 - projects/db5/share/xsl X-SVN-Group: doc-projects MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-BeenThere: svn-doc-projects@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: SVN commit messages for doc projects trees List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 13 Aug 2013 05:46:43 -0000 Author: gabor Date: Tue Aug 13 05:46:43 2013 New Revision: 42534 URL: http://svnweb.freebsd.org/changeset/doc/42534 Log: - svnref is now revnumber Modified: projects/db5/share/xsl/freebsd-xhtml-common.xsl Modified: projects/db5/share/xsl/freebsd-xhtml-common.xsl ============================================================================== --- projects/db5/share/xsl/freebsd-xhtml-common.xsl Mon Aug 12 17:51:22 2013 (r42533) +++ projects/db5/share/xsl/freebsd-xhtml-common.xsl Tue Aug 13 05:46:43 2013 (r42534) @@ -180,7 +180,7 @@ - + From owner-svn-doc-projects@FreeBSD.ORG Tue Aug 13 08:28:28 2013 Return-Path: Delivered-To: svn-doc-projects@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id 6110C3EE; Tue, 13 Aug 2013 08:28:28 +0000 (UTC) (envelope-from gabor@FreeBSD.org) Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 4B1E3234C; Tue, 13 Aug 2013 08:28:28 +0000 (UTC) Received: from svn.freebsd.org ([127.0.1.70]) by svn.freebsd.org (8.14.7/8.14.7) with ESMTP id r7D8SSAd007041; Tue, 13 Aug 2013 08:28:28 GMT (envelope-from gabor@svn.freebsd.org) Received: (from gabor@localhost) by svn.freebsd.org (8.14.7/8.14.5/Submit) id r7D8SPP9007025; Tue, 13 Aug 2013 08:28:25 GMT (envelope-from gabor@svn.freebsd.org) Message-Id: <201308130828.r7D8SPP9007025@svn.freebsd.org> From: Gabor Kovesdan Date: Tue, 13 Aug 2013 08:28:25 +0000 (UTC) To: doc-committers@freebsd.org, svn-doc-projects@freebsd.org Subject: svn commit: r42535 - in projects/db5: de_DE.ISO8859-1/htdocs de_DE.ISO8859-1/htdocs/releases de_DE.ISO8859-1/share/xml 
en_US.ISO8859-1/articles/committers-guide en_US.ISO8859-1/articles/releng en_U... X-SVN-Group: doc-projects MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-BeenThere: svn-doc-projects@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: SVN commit messages for doc projects trees List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 13 Aug 2013 08:28:28 -0000 Author: gabor Date: Tue Aug 13 08:28:25 2013 New Revision: 42535 URL: http://svnweb.freebsd.org/changeset/doc/42535 Log: - MFH Added: projects/db5/share/pgpkeys/markm.key - copied, changed from r42534, head/share/pgpkeys/markm.key projects/db5/share/security/advisories/FreeBSD-SA-13:07.bind.asc - copied unchanged from r42534, head/share/security/advisories/FreeBSD-SA-13:07.bind.asc projects/db5/share/security/advisories/FreeBSD-SA-13:08.nfsserver.asc - copied unchanged from r42534, head/share/security/advisories/FreeBSD-SA-13:08.nfsserver.asc projects/db5/share/security/patches/SA-13:07/ - copied from r42534, head/share/security/patches/SA-13:07/ projects/db5/share/security/patches/SA-13:08/ - copied from r42534, head/share/security/patches/SA-13:08/ Modified: projects/db5/de_DE.ISO8859-1/htdocs/releases/index.xml projects/db5/de_DE.ISO8859-1/htdocs/where.xml projects/db5/de_DE.ISO8859-1/share/xml/news.xml projects/db5/de_DE.ISO8859-1/share/xml/press.xml projects/db5/de_DE.ISO8859-1/share/xml/release.l10n.ent projects/db5/en_US.ISO8859-1/articles/committers-guide/article.xml projects/db5/en_US.ISO8859-1/articles/releng/article.xml projects/db5/en_US.ISO8859-1/books/fdp-primer/docbook-markup/chapter.xml projects/db5/en_US.ISO8859-1/books/fdp-primer/writing-style/chapter.xml projects/db5/en_US.ISO8859-1/books/fdp-primer/xml-primer/chapter.xml projects/db5/en_US.ISO8859-1/books/handbook/config/chapter.xml projects/db5/en_US.ISO8859-1/books/handbook/geom/chapter.xml projects/db5/en_US.ISO8859-1/books/handbook/l10n/chapter.xml projects/db5/en_US.ISO8859-1/books/handbook/mirrors/chapter.xml projects/db5/en_US.ISO8859-1/books/handbook/network-servers/chapter.xml projects/db5/en_US.ISO8859-1/books/handbook/security/chapter.xml projects/db5/en_US.ISO8859-1/books/porters-handbook/book.xml projects/db5/en_US.ISO8859-1/books/porters-handbook/uses.xml projects/db5/en_US.ISO8859-1/htdocs/administration.xml projects/db5/en_US.ISO8859-1/htdocs/community/mailinglists.xml projects/db5/en_US.ISO8859-1/htdocs/donations/donors.xml projects/db5/en_US.ISO8859-1/htdocs/internal/machines.xml projects/db5/en_US.ISO8859-1/htdocs/releases/9.2R/schedule.xml projects/db5/en_US.ISO8859-1/share/xml/release.l10n.ent projects/db5/ja_JP.eucJP/articles/contributing/article.xml projects/db5/ja_JP.eucJP/books/handbook/bsdinstall/chapter.xml projects/db5/ja_JP.eucJP/books/handbook/cutting-edge/chapter.xml projects/db5/ja_JP.eucJP/books/handbook/mirrors/chapter.xml projects/db5/ja_JP.eucJP/htdocs/community/mailinglists.xml projects/db5/ja_JP.eucJP/htdocs/internal/machines.xml projects/db5/ja_JP.eucJP/share/xml/news.xml projects/db5/ja_JP.eucJP/share/xml/release.l10n.ent projects/db5/ru_RU.KOI8-R/articles/contributing/article.xml projects/db5/ru_RU.KOI8-R/articles/cvs-freebsd/article.xml projects/db5/ru_RU.KOI8-R/articles/hubs/article.xml projects/db5/ru_RU.KOI8-R/books/handbook/bsdinstall/chapter.xml projects/db5/ru_RU.KOI8-R/books/handbook/install/chapter.xml projects/db5/ru_RU.KOI8-R/books/porters-handbook/Makefile projects/db5/share/pgpkeys/pgpkeys-developers.xml 
projects/db5/share/pgpkeys/pgpkeys.ent projects/db5/share/pgpkeys/tdb.key projects/db5/share/xml/advisories.xml projects/db5/share/xml/authors.ent projects/db5/share/xml/events2013.xml projects/db5/share/xml/news.xml projects/db5/share/xml/release.ent projects/db5/share/xsl/freebsd-common.xsl Directory Properties: projects/db5/ (props changed) projects/db5/de_DE.ISO8859-1/ (props changed) projects/db5/en_US.ISO8859-1/ (props changed) projects/db5/ja_JP.eucJP/ (props changed) projects/db5/ru_RU.KOI8-R/ (props changed) projects/db5/share/ (props changed) Modified: projects/db5/de_DE.ISO8859-1/htdocs/releases/index.xml ============================================================================== --- projects/db5/de_DE.ISO8859-1/htdocs/releases/index.xml Tue Aug 13 05:46:43 2013 (r42534) +++ projects/db5/de_DE.ISO8859-1/htdocs/releases/index.xml Tue Aug 13 08:28:25 2013 (r42535) @@ -2,7 +2,7 @@ - + ]> @@ -59,7 +59,9 @@ Errata

-

Release &rel2.current; (Februar 2011) +

Produktion (alt)

+ +

Release &rel2.current; (Juni 2013) Announcement : Release Notes : @@ -69,17 +71,6 @@ Errata

-

Produktion (alt)

- -

Release &rel3.current; (Februar 2011) - - Announcement : - Release Notes : - Hardware Notes : - Readme : - Errata -

-

Zukünftige Versionen

@@ -127,6 +118,18 @@ +
  • 8.3 (April 2012) + + Announcement: + Release Notes: + Installation + Instructions: + Hardware Notes: + Readme: + Errata + +
  • +
  • 8.2 (Februar 2011) Announcement: Modified: projects/db5/de_DE.ISO8859-1/htdocs/where.xml ============================================================================== --- projects/db5/de_DE.ISO8859-1/htdocs/where.xml Tue Aug 13 05:46:43 2013 (r42534) +++ projects/db5/de_DE.ISO8859-1/htdocs/where.xml Tue Aug 13 08:28:25 2013 (r42535) @@ -58,7 +58,7 @@ Version & Plattform Distribution - ISO + ISO Release
    Notes Hardware
    Notes Installation
    Notes @@ -139,14 +139,14 @@ [Distribution] [ISO] - + @@ -17,6 +17,23 @@ 2013 + + + 6 + + + Fixing Network Attached Storage with commodity hardware + and BSD + http://boingboing.net/2013/06/23/fixing-network-attached-storag.html + Boing Boing + http://boingboing.net/ + 23. Juni 2013 + Ben Laurie +

    Ben Laurie beschreibt, warum er ein proprietäres NAS durch ein + auf &os; basierendes NAS mit Standardhardware ersetzt hat.

    +
    +
    + 2 Modified: projects/db5/de_DE.ISO8859-1/share/xml/release.l10n.ent ============================================================================== --- projects/db5/de_DE.ISO8859-1/share/xml/release.l10n.ent Tue Aug 13 05:46:43 2013 (r42534) +++ projects/db5/de_DE.ISO8859-1/share/xml/release.l10n.ent Tue Aug 13 08:28:25 2013 (r42535) @@ -2,7 +2,7 @@ @@ -102,35 +102,47 @@ amd64
    (x86-64, x64) - [Distribution] - [ISO] + [Distribution] + [ISO] i386 - [Distribution] - [ISO] + [Distribution] + [ISO] + + + ia64 + [Distribution] + [ISO] + + + + powerpc + [Distribution] + [ISO] + + - sparc64 - [Distribution] - [ISO] + powerpc64 + [Distribution] + [ISO] - powerpc64 - [Distribution] - [ISO] + sparc64 + [Distribution] + [ISO] - --> Modified: projects/db5/en_US.ISO8859-1/articles/committers-guide/article.xml ============================================================================== --- projects/db5/en_US.ISO8859-1/articles/committers-guide/article.xml Tue Aug 13 05:46:43 2013 (r42534) +++ projects/db5/en_US.ISO8859-1/articles/committers-guide/article.xml Tue Aug 13 08:28:25 2013 (r42535) @@ -379,7 +379,7 @@ Subversion can be installed from the &os; Ports - Collection, by issuing the following commands: + Collection by issuing these commands: &prompt.root; cd /usr/ports/devel/subversion &prompt.root; make clean install @@ -590,7 +590,7 @@ &os; Ports Tree Branches and Layout In svn+ssh://svn.freebsd.org/ports, - ports refers repository root of the + ports refers to the repository root of the ports tree. In general, most &os; port work will be done within @@ -710,8 +710,8 @@ It is possible to anonymously check out the &os; repository with Subversion. This will give access to a - read-only tree that can be updated, but not committed - to. To do this, use the following command: + read-only tree that can be updated, but not committed back + to the main repository. To do this, use the following command: &prompt.user; svn co https://svn0.us-west.FreeBSD.org/base/head /usr/src @@ -801,8 +801,8 @@ Most new source files should include a - $&os;$ string in the - new file. On commit, svn will expand + $&os;$ string near the start of the + file. On commit, svn will expand the $&os;$ string, adding the file path, revision number, date and time of commit, and the username of the committer. Files which Modified: projects/db5/en_US.ISO8859-1/articles/releng/article.xml ============================================================================== --- projects/db5/en_US.ISO8859-1/articles/releng/article.xml Tue Aug 13 05:46:43 2013 (r42534) +++ projects/db5/en_US.ISO8859-1/articles/releng/article.xml Tue Aug 13 08:28:25 2013 (r42535) @@ -135,10 +135,10 @@ including HEAD, assuming that the system management interfaces are not used. - In the interim period between releases, monthly snapshots are + In the interim period between releases, weekly snapshots are built automatically by the &os; Project build machines and made available for download from ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/. + class="resource">ftp://ftp.FreeBSD.org/pub/FreeBSD/snapshots/. The widespread availability of binary release snapshots, and the tendency of our user community to keep up with -STABLE development with Subversion and make @@ -153,6 +153,18 @@ quality assurance activities ramp up pending a major release. + In addition to installation ISO snapshots, weekly virtual + machine images are also provided for use with + VirtualBox, + qemu, or other popular emulation + software. The virtual machine images can be downloaded from + ftp://ftp.FreeBSD.org/pub/FreeBSD/snapshots/VM-IMAGES/. + + The virtual machine images are approximately 150MB &man.xz.1; + compressed, and contain a 10GB sparse filesystem when attached to + a virtual machine. + Bug reports and feature requests are continuously submitted by users throughout the release cycle. 
Problems reports are entered into our GNATS database @@ -173,7 +185,7 @@ security fixes and additions are merged onto the release branch. In addition to source updates via Subversion, binary patchkits are available to keep systems on the - RELENG_X_Y + releng/X.Y branches updated. @@ -256,8 +268,8 @@ Sixty days before the anticipated release, the source repository enters a code freeze. During this - time, all commits to the -STABLE branch must be approved by the - &a.re;, the approval process is technically enforced by the + time, all commits to the -STABLE branch must be approved by + &a.re;. The approval process is technically enforced by a pre-commit hook. The kinds of changes that are allowed during this period include: @@ -324,7 +336,7 @@ In all examples below, $FSVN refers to the location of the &os; Subversion repository, - svn+ssh://svn.freebsd.org/base/. + svn+ssh://svn.FreeBSD.org/base/. The layout of &os; branches in Subversion is Modified: projects/db5/en_US.ISO8859-1/books/fdp-primer/docbook-markup/chapter.xml ============================================================================== --- projects/db5/en_US.ISO8859-1/books/fdp-primer/docbook-markup/chapter.xml Tue Aug 13 05:46:43 2013 (r42534) +++ projects/db5/en_US.ISO8859-1/books/fdp-primer/docbook-markup/chapter.xml Tue Aug 13 08:28:25 2013 (r42535) @@ -2551,7 +2551,7 @@ IMAGES= chapter1/fig1.png url="&url.books.handbook;/svn.html#svn-intro"SVN introductionulink, then pick the nearest mirror from the list of ulink - url="&url.books.handbook;/subversion-mirrors.html"Subversion + url="&url.books.handbook;/svn-mirrors.html"Subversion mirror sitesulink.para Appearance: @@ -2560,7 +2560,7 @@ IMAGES= chapter1/fig1.png url="&url.books.handbook;/svn.html#svn-intro">SVN introduction, then pick the nearest mirror from the list of Subversion + url="&url.books.handbook;/svn-mirrors.html">Subversion mirror sites. Usage for article links: Modified: projects/db5/en_US.ISO8859-1/books/fdp-primer/writing-style/chapter.xml ============================================================================== --- projects/db5/en_US.ISO8859-1/books/fdp-primer/writing-style/chapter.xml Tue Aug 13 05:46:43 2013 (r42534) +++ projects/db5/en_US.ISO8859-1/books/fdp-primer/writing-style/chapter.xml Tue Aug 13 08:28:25 2013 (r42535) @@ -276,7 +276,7 @@ <!doctype…>. - + Acronyms Acronyms should be defined the first time they appear in a @@ -291,7 +291,7 @@ acronym tags. - + Indentation The first line in each file starts with no indentation, @@ -332,10 +332,10 @@ . - + Tag Style - + Tag Spacing Tags that start at the same indent as a previous tag @@ -371,7 +371,7 @@ - + Separating Tags Tags like itemizedlist which will @@ -401,7 +401,7 @@ - + Whitespace Changes Do not commit changes @@ -421,7 +421,7 @@ ignored by translators. - + Non-Breaking Space Avoid line breaks in places where they look ugly or make Modified: projects/db5/en_US.ISO8859-1/books/fdp-primer/xml-primer/chapter.xml ============================================================================== --- projects/db5/en_US.ISO8859-1/books/fdp-primer/xml-primer/chapter.xml Tue Aug 13 05:46:43 2013 (r42534) +++ projects/db5/en_US.ISO8859-1/books/fdp-primer/xml-primer/chapter.xml Tue Aug 13 08:28:25 2013 (r42535) @@ -75,7 +75,7 @@ Consider this text:
    - To remove /tmp/foo use + To remove /tmp/foo, use &man.rm.1;. &prompt.user; rm /tmp/foo @@ -108,9 +108,9 @@ The previous example is actually represented in this document like this: - paraTo remove filename/tmp/foofilename use &man.rm.1;.para + paraTo remove filename/tmp/foofilename, use &man.rm.1;.para -screen&prompt.user; userinputrm /tmp/foouserinputscreen +screen&prompt.user; userinputrm /tmp/foouserinputscreen The markup is clearly separate from the content. Modified: projects/db5/en_US.ISO8859-1/books/handbook/config/chapter.xml ============================================================================== --- projects/db5/en_US.ISO8859-1/books/handbook/config/chapter.xml Tue Aug 13 05:46:43 2013 (r42534) +++ projects/db5/en_US.ISO8859-1/books/handbook/config/chapter.xml Tue Aug 13 08:28:25 2013 (r42535) @@ -270,7 +270,6 @@ sshd_enable="YES" keyrate="fast" defaultrouter="10.1.1.254" - @@ -278,7 +277,6 @@ defaultrouter="10.1.1.254"hostname="node1.example.org" ifconfig_fxp0="inet 10.1.1.1/8" - @@ -608,9 +606,9 @@ PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin Users who wish to begin their own crontab file from scratch, without the - use of a template, can use crontab -e. This - will invoke the default editor with an empty file. When this - file is saved, it will be automatically installed by + use of a template, can use crontab -e. + This will invoke the default editor with an empty file. When + this file is saved, it will be automatically installed by &man.crontab.1;. In order to remove a user &man.crontab.5; completely, @@ -633,12 +631,13 @@ PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin Using &man.rc.8; Under &os; In 2002, &os; integrated the NetBSD &man.rc.8; system for - system initialization. The files listed in /etc/rc.d provide basic services - which can be controlled with the , - , and options - to &man.service.8;. For instance, &man.sshd.8; can be restarted - with the following command: + system initialization. The files listed in + /etc/rc.d provide basic + services which can be controlled with the + , , and + options to &man.service.8;. For + instance, &man.sshd.8; can be restarted with the following + command: &prompt.root; service sshd restart @@ -682,7 +681,9 @@ PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin &prompt.root; service sshd rcvar # sshd -$sshd_enable=YES +# +sshd_enable="YES" +# (default: "") The # sshd line is output from the @@ -1260,15 +1261,15 @@ round-trip min/avg/max/stddev = 0.700/0. host. This can happen if no default route is specified or if a cable is unplugged. Check the output of netstat -rn and make sure there is a - valid route to the host. If there is not, read . + valid route to the host. If there is not, read + . ping: sendto: Permission denied error messages are often caused by a misconfigured firewall. If a firewall is enabled on &os; but no rules have been defined, the default policy is to deny all traffic, even - &man.ping.8;. Refer to for more information. + &man.ping.8;. Refer to + for more information. Sometimes performance of the card is poor or below average. In these cases, try setting the media @@ -1312,9 +1313,9 @@ round-trip min/avg/max/stddev = 0.700/0. given interface, there must be one address which correctly represents the network's netmask. Any other addresses which fall within this network must have a netmask of all - 1s, expressed as either 255.255.255.255 or 0xffffffff. + 1s, expressed as either + 255.255.255.255 or + 0xffffffff. 
For example, consider the case where the fxp0 interface is connected to two @@ -1322,18 +1323,18 @@ round-trip min/avg/max/stddev = 0.700/0. netmask of 255.255.255.0 and 202.0.75.16 with a netmask of 255.255.255.240. The system - is to be configured to appear in the ranges 10.1.1.1 through 10.1.1.5 and 202.0.75.17 through 202.0.75.20. Only the first address - in a given network range should have a real netmask. All the - rest (10.1.1.2 through 10.1.1.5 and 202.0.75.18 through 202.0.75.20) must be configured with - a netmask of 255.255.255.255. + is to be configured to appear in the ranges + 10.1.1.1 through + 10.1.1.5 and + 202.0.75.17 through + 202.0.75.20. Only the first + address in a given network range should have a real netmask. + All the rest (10.1.1.2 through + 10.1.1.5 and + 202.0.75.18 through + 202.0.75.20) must be configured + with a netmask of + 255.255.255.255. The following /etc/rc.conf entries configure the adapter correctly for this scenario: @@ -1347,7 +1348,6 @@ ifconfig_fxp0_alias4="inet 202.0.75.17 n ifconfig_fxp0_alias5="inet 202.0.75.18 netmask 255.255.255.255" ifconfig_fxp0_alias6="inet 202.0.75.19 netmask 255.255.255.255" ifconfig_fxp0_alias7="inet 202.0.75.20 netmask 255.255.255.255" - @@ -1394,8 +1394,8 @@ ifconfig_fxp0_alias7="inet 202.0.75.20 n syslogd_flags in /etc/rc.conf. Refer to &man.syslogd.8; for more information on the arguments, and &man.rc.conf.5;, - and for more information about + and + for more information about /etc/rc.conf and the &man.rc.8; subsystem. @@ -1535,8 +1535,8 @@ cron.* facilities, refer to &man.syslog.3; and &man.syslogd.8;. For more information about /etc/syslog.conf, its syntax, and more - advanced usage examples, see &man.syslog.conf.5; and . + advanced usage examples, see &man.syslog.conf.5; and + . @@ -1630,14 +1630,14 @@ cron.* &man.newsyslog.8; further instructions, such as how to compress the rotated file or to create the log file if it is missing. The last two fields are optional, and - specify the PID file of a process - and a signal number to send to that process when the file - is rotated. For more information on all fields, valid + specify the + PID file of a + process and a signal number to send to that process when the + file is rotated. For more information on all fields, valid flags, and how to specify the rotation time, refer to - &man.newsyslog.conf.5;. Since &man.newsyslog.8; is run - from &man.cron.8;, it can not rotate files more often than - it is run from &man.cron.8;. + &man.newsyslog.conf.5;. Since &man.newsyslog.8; is run from + &man.cron.8;, it can not rotate files more often than it is + run from &man.cron.8;. @@ -1733,9 +1733,8 @@ cron.* resolv.conf - How a - &os; system accesses the Internet Domain Name System - (DNS) is controlled by + How a &os; system accesses the Internet Domain Name + System (DNS) is controlled by &man.resolv.conf.5;. The most common entries to @@ -1889,13 +1888,13 @@ kern.maxproc: 1044 kern.maxfiles: 2088 -> 5000 Settings of sysctl variables are usually either strings, - numbers, or booleans, where a a boolean is 1 + numbers, or booleans, where a boolean is 1 for yes or 0 for no. To automatically set some variables each time the machine boots, add them to /etc/sysctl.conf. For - more information, refer to &man.sysctl.conf.5; and . + more information, refer to &man.sysctl.conf.5; and + . <filename>sysctl.conf</filename> @@ -1921,7 +1920,6 @@ kern.logsigexit=0 # Prevent users from seeing information about processes that # are being run under another UID. 
security.bsd.see_other_uids=0 - @@ -2187,16 +2185,16 @@ device_probe_and_attach: cbb0 attach ret data blocks of a file did not find their way out of the buffer cache onto the disk by the time of the crash, &man.fsck.8; recognizes this and repairs the file system - by setting the file length to - 0. Additionally, the implementation is - clear and simple. The disadvantage is that meta-data - changes are slow. For example, rm -r - touches all the files in a directory sequentially, but each - directory change will be written synchronously to the - disk. This includes updates to the directory itself, to - the inode table, and possibly to indirect blocks allocated - by the file. Similar considerations apply for unrolling - large hierarchies using tar -x. + by setting the file length to 0. + Additionally, the implementation is clear and simple. The + disadvantage is that meta-data changes are slow. For + example, rm -r touches all the files in a + directory sequentially, but each directory change will be + written synchronously to the disk. This includes updates to + the directory itself, to the inode table, and possibly to + indirect blocks allocated by the file. Similar + considerations apply for unrolling large hierarchies using + tar -x. The second approach is to use asynchronous meta-data updates. This is the default for a UFS @@ -2264,7 +2262,7 @@ device_probe_and_attach: cbb0 attach ret in use are marked as such in their blocks and inodes. After a crash, the only resource allocation error that occurs is that resources are marked as used - which are actually free. &man.fsck.8; + which are actually free. &man.fsck.8; recognizes this situation, and frees the resources that are no longer used. It is safe to ignore the dirty state of the file system after a crash by forcibly mounting it @@ -2379,7 +2377,7 @@ device_probe_and_attach: cbb0 attach ret compile software. The most important table set by maxusers is the maximum number of processes, which is set to - 20 + 16 * maxusers. If + 20 + 16 * maxusers. If maxusers is set to 1, there can only be 36 simultaneous processes, including @@ -2491,12 +2489,11 @@ device_probe_and_attach: cbb0 attach ret The net.inet.ip.portrange.* - &man.sysctl.8; - variables control the port number ranges automatically bound - to TCP and UDP - sockets. There are three ranges: a low range, a default - range, and a high range. Most network programs use the - default range which is controlled by + &man.sysctl.8; variables control the port number ranges + automatically bound to TCP and + UDP sockets. There are three ranges: a + low range, a default range, and a high range. Most network + programs use the default range which is controlled by net.inet.ip.portrange.first and net.inet.ip.portrange.last, which default to 1024 and 5000, @@ -2568,12 +2565,12 @@ device_probe_and_attach: cbb0 attach ret conditions, but it can also result in higher &man.ping.8; times over slow links, though still much lower than without the inflight algorithm. In such cases, try reducing this - parameter to 15, - 10, or 5 and - reducing net.inet.tcp.inflight.min - to a value such as 3500 to get the - desired effect. Reducing these parameters should be done - as a last resort only. + parameter to 15, 10, + or 5 and reducing + net.inet.tcp.inflight.min to a value such + as 3500 to get the desired effect. + Reducing these parameters should be done as a last resort + only. 
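 As with the other &man.sysctl.8; variables discussed earlier, such a tuning change can be tried at runtime and, if it helps, persisted in /etc/sysctl.conf. A minimal sketch, using only the illustrative value mentioned above:

&prompt.root; sysctl net.inet.tcp.inflight.min=3500

 The corresponding line, net.inet.tcp.inflight.min=3500, can then be added to /etc/sysctl.conf so that it is applied again at boot.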
@@ -2632,9 +2629,9 @@ kern.maxvnodes: 100000 Adding a new hard drive for swap gives better performance than adding a partition on an existing drive. Setting up - partitions and hard drives is explained in while discusses partition + partitions and hard drives is explained in + while + discusses partition layouts and swap partition size considerations. Use &man.swapon.8; to add a swap partition to the system. @@ -2643,7 +2640,6 @@ kern.maxvnodes: 100000 &prompt.root; swapon /dev/ada1s1b - It is possible to use any partition not currently mounted, even if it already contains data. Using &man.swapon.8; on a partition that contains data will @@ -2683,7 +2679,6 @@ kern.maxvnodes: 100000 - The GENERIC kernel already includes the memory disk driver (&man.md.4;) required for this operation. When building a custom kernel, @@ -2759,8 +2754,8 @@ kern.maxvnodes: 100000 temperature increases unexpectedly. This section provides comprehensive information about - ACPI. References will be provided for further - reading. + ACPI. References will be provided for + further reading. What Is ACPI? @@ -2977,13 +2972,12 @@ kern.maxvnodes: 100000 Most &os; developers watch &a.current;, but one should submit problems to &a.acpi.name; to be sure it is seen. Be patient when waiting for a response. If the bug is not - immediately apparent, submit a - PR using &man.send-pr.1;. When entering a - PR, include the same information as - requested above. This helps developers to track the problem - and resolve it. Do not send a PR without - emailing &a.acpi.name; first as it is likely that the problem - has been reported before. + immediately apparent, submit a PR using + &man.send-pr.1;. When entering a PR, + include the same information as requested above. This helps + developers to track the problem and resolve it. Do not send a + PR without emailing &a.acpi.name; first as + it is likely that the problem has been reported before. @@ -3276,8 +3270,9 @@ hw.acpi.s4bios: 0 ASL, use &man.acpidump.8;. Include both , to show the contents of the fixed tables, and , to disassemble the - AML. Refer to for an example syntax. + AML. Refer to + for an example + syntax. The simplest first check is to recompile the ASL to check for errors. 
Warnings can Modified: projects/db5/en_US.ISO8859-1/books/handbook/geom/chapter.xml ============================================================================== --- projects/db5/en_US.ISO8859-1/books/handbook/geom/chapter.xml Tue Aug 13 05:46:43 2013 (r42534) +++ projects/db5/en_US.ISO8859-1/books/handbook/geom/chapter.xml Tue Aug 13 08:28:25 2013 (r42535) @@ -824,6 +824,314 @@ mountroot> + *** DIFF OUTPUT TRUNCATED AT 1000 LINES *** From owner-svn-doc-projects@FreeBSD.ORG Tue Aug 13 08:46:18 2013 Return-Path: Delivered-To: svn-doc-projects@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id 727099FF; Tue, 13 Aug 2013 08:46:18 +0000 (UTC) (envelope-from gabor@FreeBSD.org) Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 5E8CA247A; Tue, 13 Aug 2013 08:46:18 +0000 (UTC) Received: from svn.freebsd.org ([127.0.1.70]) by svn.freebsd.org (8.14.7/8.14.7) with ESMTP id r7D8kHRG014022; Tue, 13 Aug 2013 08:46:17 GMT (envelope-from gabor@svn.freebsd.org) Received: (from gabor@localhost) by svn.freebsd.org (8.14.7/8.14.5/Submit) id r7D8kHG9014021; Tue, 13 Aug 2013 08:46:17 GMT (envelope-from gabor@svn.freebsd.org) Message-Id: <201308130846.r7D8kHG9014021@svn.freebsd.org> From: Gabor Kovesdan Date: Tue, 13 Aug 2013 08:46:17 +0000 (UTC) To: doc-committers@freebsd.org, svn-doc-projects@freebsd.org Subject: svn commit: r42536 - projects/db5/share/xml X-SVN-Group: doc-projects MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-BeenThere: svn-doc-projects@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: SVN commit messages for doc projects trees List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 13 Aug 2013 08:46:18 -0000 Author: gabor Date: Tue Aug 13 08:46:17 2013 New Revision: 42536 URL: http://svnweb.freebsd.org/changeset/doc/42536 Log: - Drop table constraints, which do not help too much Modified: projects/db5/share/xml/freebsd.sch Modified: projects/db5/share/xml/freebsd.sch ============================================================================== --- projects/db5/share/xml/freebsd.sch Tue Aug 13 08:28:25 2013 (r42535) +++ projects/db5/share/xml/freebsd.sch Tue Aug 13 08:46:17 2013 (r42536) @@ -58,11 +58,6 @@ You cannot use both colname and spanname attributes on table entries. - - Programlisting is not allowed in tables (in section ). - The screen element is not allowed in tables (in section ). - Footnote is not allowed in tables (in section ). 
- From owner-svn-doc-projects@FreeBSD.ORG Wed Aug 14 22:21:16 2013 Return-Path: Delivered-To: svn-doc-projects@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id 8082128E; Wed, 14 Aug 2013 22:21:16 +0000 (UTC) (envelope-from wblock@FreeBSD.org) Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 6B6A42B8C; Wed, 14 Aug 2013 22:21:16 +0000 (UTC) Received: from svn.freebsd.org ([127.0.1.70]) by svn.freebsd.org (8.14.7/8.14.7) with ESMTP id r7EMLGVA074307; Wed, 14 Aug 2013 22:21:16 GMT (envelope-from wblock@svn.freebsd.org) Received: (from wblock@localhost) by svn.freebsd.org (8.14.7/8.14.5/Submit) id r7EMLF8w074299; Wed, 14 Aug 2013 22:21:15 GMT (envelope-from wblock@svn.freebsd.org) Message-Id: <201308142221.r7EMLF8w074299@svn.freebsd.org> From: Warren Block Date: Wed, 14 Aug 2013 22:21:15 +0000 (UTC) To: doc-committers@freebsd.org, svn-doc-projects@freebsd.org Subject: svn commit: r42540 - in projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook: . bsdinstall filesystems zfs X-SVN-Group: doc-projects MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-BeenThere: svn-doc-projects@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: SVN commit messages for doc projects trees List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 14 Aug 2013 22:21:16 -0000 Author: wblock Date: Wed Aug 14 22:21:15 2013 New Revision: 42540 URL: http://svnweb.freebsd.org/changeset/doc/42540 Log: Split the ZFS content into a separate chapter. 
Added: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml - copied, changed from r42538, projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/Makefile projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/book.xml projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/bsdinstall/chapter.xml projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/chapters.ent projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/Makefile ============================================================================== --- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/Makefile Wed Aug 14 21:50:46 2013 (r42539) +++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/Makefile Wed Aug 14 22:21:15 2013 (r42540) @@ -278,6 +278,7 @@ SRCS+= serialcomms/chapter.xml SRCS+= users/chapter.xml SRCS+= virtualization/chapter.xml SRCS+= x11/chapter.xml +SRCS+= zfs/chapter.xml # Entities SRCS+= chapters.ent Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/book.xml ============================================================================== --- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/book.xml Wed Aug 14 21:50:46 2013 (r42539) +++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/book.xml Wed Aug 14 22:21:15 2013 (r42540) @@ -235,6 +235,7 @@ &chap.audit; &chap.disks; &chap.geom; + &chap.zfs; &chap.filesystems; &chap.virtualization; &chap.l10n; Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/bsdinstall/chapter.xml ============================================================================== --- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/bsdinstall/chapter.xml Wed Aug 14 21:50:46 2013 (r42539) +++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/bsdinstall/chapter.xml Wed Aug 14 22:21:15 2013 (r42540) @@ -1411,7 +1411,7 @@ Trying to mount root from cd9660:/dev/is Another partition type worth noting is freebsd-zfs, used for partitions that will contain a &os; ZFS filesystem. See - . &man.gpart.8; shows more + . &man.gpart.8; shows more of the available GPT partition types. Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/chapters.ent ============================================================================== --- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/chapters.ent Wed Aug 14 21:50:46 2013 (r42539) +++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/chapters.ent Wed Aug 14 22:21:15 2013 (r42540) @@ -38,6 +38,7 @@ + Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml ============================================================================== --- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml Wed Aug 14 21:50:46 2013 (r42539) +++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml Wed Aug 14 22:21:15 2013 (r42540) @@ -34,7 +34,7 @@ UFS which has been modernized as UFS2. Since &os; 7.0, the Z File System ZFS is also available as a native file - system. + system. See for more information. 
In addition to its native file systems, &os; supports a multitude of other file systems so that data from other @@ -96,1439 +96,6 @@ - - The Z File System (ZFS) - - The Z file system, originally developed by &sun;, - is designed to future proof the file system by removing many of - the arbitrary limits imposed on previous file systems. ZFS - allows continuous growth of the pooled storage by adding - additional devices. ZFS allows you to create many file systems - (in addition to block devices) out of a single shared pool of - storage. Space is allocated as needed, so all remaining free - space is available to each file system in the pool. It is also - designed for maximum data integrity, supporting data snapshots, - multiple copies, and cryptographic checksums. It uses a - software data replication model, known as - RAID-Z. RAID-Z provides - redundancy similar to hardware RAID, but is - designed to prevent data write corruption and to overcome some - of the limitations of hardware RAID. - - - ZFS Features and Terminology - - ZFS is a fundamentally different file system because it - is more than just a file system. ZFS combines the roles of - file system and volume manager, enabling additional storage - devices to be added to a live system and having the new space - available on all of the existing file systems in that pool - immediately. By combining the traditionally separate roles, - ZFS is able to overcome previous limitations that prevented - RAID groups being able to grow. Each top level device in a - zpool is called a vdev, which can be a simple disk or a RAID - transformation such as a mirror or RAID-Z array. ZFS file - systems (called datasets), each have access to the combined - free space of the entire pool. As blocks are allocated the - free space in the pool available to of each file system is - decreased. This approach avoids the common pitfall with - extensive partitioning where free space becomes fragmentated - across the partitions. - - - - - - zpool - - A storage pool is the most basic building block - of ZFS. A pool is made up of one or more vdevs, the - underlying devices that store the data. A pool is - then used to create one or more file systems - (datasets) or block devices (volumes). These datasets - and volumes share the pool of remaining free space. - Each pool is uniquely identified by a name and a - GUID. The zpool also controls the - version number and therefore the features available - for use with ZFS. - &os; 9.0 and 9.1 include - support for ZFS version 28. Future versions use ZFS - version 5000 with feature flags. This allows - greater cross-compatibility with other - implementations of ZFS. - - - - - vdev Types - - A zpool is made up of one or more vdevs, which - themselves can be a single disk or a group of disks, - in the case of a RAID transform. When multiple vdevs - are used, ZFS spreads data across the vdevs to - increase performance and maximize usable space. - - - - Disk - The most basic type - of vdev is a standard block device. This can be - an entire disk (such as - /dev/ada0 - or - /dev/da0) - or a partition - (/dev/ada0p3). - Contrary to the Solaris documentation, on &os; - there is no performance penalty for using a - partition rather than an entire disk. - - - - - File - In addition to - disks, ZFS pools can be backed by regular files, - this is especially useful for testing and - experimentation. Use the full path to the file - as the device path in the zpool create command. - All vdevs must be atleast 128 MB in - size. 
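 As a quick illustration of a file-backed pool for experimentation, where the file path, size, and pool name are examples only and the file satisfies the 128 MB minimum noted above:

&prompt.root; truncate -s 256M /tmp/zfs-test-file
&prompt.root; zpool create testpool /tmp/zfs-test-file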
- - - - - Mirror - When creating a - mirror, specify the mirror - keyword followed by the list of member devices - for the mirror. A mirror consists of two or - more devices, all data will be written to all - member devices. A mirror vdev will only hold as - much data as its smallest member. A mirror vdev - can withstand the failure of all but one of its - members without losing any data. - - - - A regular single disk vdev can be - upgraded to a mirror vdev at any time using - the zpool attach - command. - - - - - - RAID-Z - - ZFS implements RAID-Z, a variation on standard - RAID-5 that offers better distribution of parity - and eliminates the "RAID-5 write hole" in which - the data and parity information become - inconsistent after an unexpected restart. ZFS - supports 3 levels of RAID-Z which provide - varying levels of redundancy in exchange for - decreasing levels of usable storage. The types - are named RAID-Z1 through Z3 based on the number - of parity devinces in the array and the number - of disks that the pool can operate - without. - - In a RAID-Z1 configuration with 4 disks, - each 1 TB, usable storage will be 3 TB - and the pool will still be able to operate in - degraded mode with one faulted disk. If an - additional disk goes offline before the faulted - disk is replaced and resilvered, all data in the - pool can be lost. - - In a RAID-Z3 configuration with 8 disks of - 1 TB, the volume would provide 5TB of - usable space and still be able to operate with - three faulted disks. Sun recommends no more - than 9 disks in a single vdev. If the - configuration has more disks, it is recommended - to divide them into separate vdevs and the pool - data will be striped across them. - - A configuration of 2 RAID-Z2 vdevs - consisting of 8 disks each would create - something similar to a RAID 60 array. A RAID-Z - group's storage capacity is approximately the - size of the smallest disk, multiplied by the - number of non-parity disks. 4x 1 TB disks - in Z1 has an effective size of approximately - 3 TB, and a 8x 1 TB array in Z3 will - yeild 5 TB of usable space. - - - - - Spare - ZFS has a special - pseudo-vdev type for keeping track of available - hot spares. Note that installed hot spares are - not deployed automatically; they must manually - be configured to replace the failed device using - the zfs replace command. - - - - - Log - ZFS Log Devices, also - known as ZFS Intent Log (ZIL) - move the intent log from the regular pool - devices to a dedicated device. The ZIL - accelerates synchronous transactions by using - storage devices (such as - SSDs) that are faster - compared to those used for the main pool. When - data is being written and the application - requests a guarantee that the data has been - safely stored, the data is written to the faster - ZIL storage, then later flushed out to the - regular disks, greatly reducing the latency of - synchronous writes. Log devices can be - mirrored, but RAID-Z is not supported. When - specifying multiple log devices writes will be - load balanced across all devices. - - - - - Cache - Adding a cache vdev - to a zpool will add the storage of the cache to - the L2ARC. Cache devices cannot be mirrored. - Since a cache device only stores additional - copies of existing data, there is no risk of - data loss. - - - - - - Adaptive Replacement - Cache (ARC) - - ZFS uses an Adaptive Replacement Cache - (ARC), rather than a more - traditional Least Recently Used - (LRU) cache. 
An - LRU cache is a simple list of items - in the cache sorted by when each object was most - recently used; new items are added to the top of the - list and once the cache is full items from the bottom - of the list are evicted to make room for more active - objects. An ARC consists of four - lists; the Most Recently Used (MRU) - and Most Frequently Used (MFU) - objects, plus a ghost list for each. These ghost - lists tracks recently evicted objects to provent them - being added back to the cache. This increases the - cache hit ratio by avoiding objects that have a - history of only being used occasionally. Another - advantage of using both an MRU and - MFU is that scanning an entire - filesystem would normally evict all data from an - MRU or LRU cache - in favor of this freshly accessed content. In the - case of ZFS since there is also an - MFU that only tracks the most - frequently used objects, the cache of the most - commonly accessed blocks remains. - - - - L2ARC - - The L2ARC is the second level - of the ZFS caching system. The - primary ARC is stored in - RAM, however since the amount of - available RAM is often limited, - ZFS can also make use of cache - vdevs. Solid State Disks (SSDs) - are often used as these cache devices due to their - higher speed and lower latency compared to traditional - spinning disks. An L2ARC is entirely optional, but - having one will significantly increase read speeds for - files that are cached on the SSD - instead of having to be read from the regular spinning - disks. The L2ARC can also speed up deduplication - since a DDT that does not fit in - RAM but does fit in the - L2ARC will be much faster than if - the DDT had to be read from disk. - The rate at which data is added to the cache devices - is limited to prevent prematurely wearing out the - SSD with too many writes. Until - the cache is full (the first block has been evicted to - make room), writing to the L2ARC is - limited to the sum of the write limit and the boost - limit, then after that limited to the write limit. A - pair of sysctl values control these rate limits; - vfs.zfs.l2arc_write_max controls - how many bytes are written to the cache per second, - while vfs.zfs.l2arc_write_boost - adds to this limit during the "Turbo Warmup Phase" - (Write Boost). - - - - Copy-On-Write - - Unlike a traditional file system, when data is - overwritten on ZFS the new data is written to a - different block rather than overwriting the old data - in place. Only once this write is complete is the - metadata then updated to point to the new location of - the data. This means that in the event of a shorn - write (a system crash or power loss in the middle of - writing a file) the entire original contents of the - file are still available and the incomplete write is - discarded. This also means that ZFS does not require - a fsck after an unexpected shutdown. - - - - Dataset - - Dataset is the generic term for a ZFS file - system, volume, snapshot or clone. Each dataset will - have a unique name in the format: - poolname/path@snapshot. The root - of the pool is technically a dataset as well. Child - datasets are named hierarchically like directories; - for example mypool/home, the home - dataset is a child of mypool and inherits properties - from it. This can be expended further by creating - mypool/home/user. This grandchild - dataset will inherity properties from the parent and - grandparent. 
It is also possible to set properties - on a child to override the defaults inherited from the - parents and grandparents. ZFS also allows - administration of datasets and their children to be - delegated. - - - - Volume - - In additional to regular file system datasets, - ZFS can also create volumes, which are block devices. - Volumes have many of the same features, including - copy-on-write, snapshots, clones and - checksumming. Volumes can be useful for running other - file system formats on top of ZFS, such as UFS or in - the case of Virtualization or exporting - iSCSI extents. - - - - Snapshot - - The copy-on-write - design of ZFS allows for nearly instantaneous - consistent snapshots with arbitrary names. After - taking a snapshot of a dataset (or a recursive - snapshot of a parent dataset that will include all - child datasets), new data is written to new blocks (as - described above), however the old blocks are not - reclaimed as free space. There are then two versions - of the file system, the snapshot (what the file system - looked like before) and the live file system; however - no additional space is used. As new data is written - to the live file system, new blocks are allocated to - store this data. The apparent size of the snapshot - will grow as the blocks are no longer used in the live - file system, but only in the snapshot. These - snapshots can be mounted (read only) to allow for the - recovery of previous versions of files. It is also - possible to rollback - a live file system to a specific snapshot, undoing any - changes that took place after the snapshot was taken. - Each block in the zpool has a reference counter which - indicates how many snapshots, clones, datasets or - volumes make use of that block. As files and - snapshots are deleted, the reference count is - decremented; once a block is no longer referenced, it - is reclaimed as free space. Snapshots can also be - marked with a hold, - once a snapshot is held, any attempt to destroy it - will return an EBUY error. Each snapshot can have - multiple holds, each with a unique name. The release - command removes the hold so the snapshot can then be - deleted. Snapshots can be taken on volumes, however - they can only be cloned or rolled back, not mounted - independently. - - - - Clone - - Snapshots can also be cloned; a clone is a - writable version of a snapshot, allowing the file - system to be forked as a new dataset. As with a - snapshot, a clone initially consumes no additional - space, only as new data is written to a clone and new - blocks are allocated does the apparent size of the - clone grow. As blocks are overwritten in the cloned - file system or volume, the reference count on the - previous block is decremented. The snapshot upon - which a clone is based cannot be deleted because the - clone is dependeant upon it (the snapshot is the - parent, and the clone is the child). Clones can be - promoted, reversing this - dependeancy, making the clone the parent and the - previous parent the child. This operation requires no - additional space, however it will change the way the - used space is accounted. - - - - Checksum - - Every block that is allocated is also checksummed - (which algorithm is used is a per dataset property, - see: zfs set). ZFS transparently validates the - checksum of each block as it is read, allowing ZFS to - detect silent corruption. 
If the data that is read - does not match the expected checksum, ZFS will attempt - to recover the data from any available redundancy - (mirrors, RAID-Z). You can trigger the validation of - all checksums using the scrub - command. The available checksum algorithms include: - - fletcher2 - fletcher4 - sha256 - The fletcher algorithms are faster, - but sha256 is a strong cryptographic hash and has a - much lower chance of a collisions at the cost of some - performance. Checksums can be disabled but it is - inadvisable. - - - - Compression - - Each dataset in ZFS has a compression property, - which defaults to off. This property can be set to - one of a number of compression algorithms, which will - cause all new data that is written to this dataset to - be compressed as it is written. In addition to the - reduction in disk usage, this can also increase read - and write throughput, as only the smaller compressed - version of the file needs to be read or - written. - LZ4 compression is only available after &os; - 9.2 - - - - - Deduplication - - ZFS has the ability to detect duplicate blocks of - data as they are written (thanks to the checksumming - feature). If deduplication is enabled, instead of - writing the block a second time, the reference count - of the existing block will be increased, saving - storage space. In order to do this, ZFS keeps a - deduplication table (DDT) in - memory, containing the list of unique checksums, the - location of that block and a reference count. When - new data is written, the checksum is calculated and - compared to the list. If a match is found, the data - is considered to be a duplicate. When deduplication - is enabled, the checksum algorithm is changed to - SHA256 to provide a secure - cryptographic hash. ZFS deduplication is tunable; if - dedup is on, then a matching checksum is assumed to - mean that the data is identical. If dedup is set to - verify, then the data in the two blocks will be - checked byte-for-byte to ensure it is actually - identical and if it is not, the hash collision will be - noted by ZFS and the two blocks will be stored - separately. Due to the nature of the - DDT, having to store the hash of - each unique block, it consumes a very large amount of - memory (a general rule of thumb is 5-6 GB of ram - per 1 TB of deduplicated data). In situations - where it is not practical to have enough - RAM to keep the entire DDT in - memory, performance will suffer greatly as the DDT - will need to be read from disk before each new block - is written. Deduplication can make use of the L2ARC - to store the DDT, providing a middle ground between - fast system memory and slower disks. It is advisable - to consider using ZFS compression instead, which often - provides nearly as much space savings without the - additional memory requirement. - - - - Scrub - - In place of a consistency check like fsck, ZFS - has the scrub command, which reads - all data blocks stored on the pool and verifies their - checksums them against the known good checksums stored - in the metadata. This periodic check of all the data - stored on the pool ensures the recovery of any - corrupted blocks before they are needed. A scrub is - not required after an unclean shutdown, but it is - recommended that you run a scrub at least once each - quarter. ZFS compares the checksum for each block as - it is read in the normal course of use, but a scrub - operation makes sure even infrequently used blocks are - checked for silent corruption. 
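 For example, a scrub can be started by hand and its progress checked afterwards; here mypool stands in for the name of the pool being checked:

&prompt.root; zpool scrub mypool
&prompt.root; zpool status mypool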
- - - - Dataset Quota - - ZFS provides very fast and accurate dataset, user - and group space accounting in addition to quotes and - space reservations. This gives the administrator fine - grained control over how space is allocated and allows - critical file systems to reserve space to ensure other - file systems do not take all of the free space. - ZFS supports different types of quotas: the - dataset quota, the reference - quota (refquota), the - user - quota, and the - group quota. - - Quotas limit the amount of space that a dataset - and all of its descendants (snapshots of the - dataset, child datasets and the snapshots of those - datasets) can consume. - - - Quotas cannot be set on volumes, as the - volsize property acts as an - implicit quota. - - - - - Reference - Quota - - A reference quota limits the amount of space a - dataset can consume by enforcing a hard limit on the - space used. However, this hard limit includes only - space that the dataset references and does not include - space used by descendants, such as file systems or - snapshots. - - - - User - Quota - - User quotas are useful to limit the amount of - space that can be used by the specified user. - - - - Group - Quota - - The group quota limits the amount of space that a - specified group can consume. - - - - Dataset - Reservation - - The reservation property makes - it possible to guaranteed a minimum amount of space - for the use of a specific dataset and its descendants. - This means that if a 10 GB reservation is set on - storage/home/bob, if another - dataset tries to use all of the free space, at least - 10 GB of space is reserved for this dataset. If - a snapshot is taken of - storage/home/bob, the space used - by that snapshot is counted against the reservation. - The refreservation - property works in a similar way, except it - excludes descendants, such as - snapshots. - Reservations of any sort are useful - in many situations, such as planning and testing the - suitability of disk space allocation in a new - system, or ensuring that enough space is available - on file systems for audio logs or system recovery - procedures and files. - - - - Reference - Reservation - - The refreservation property - makes it possible to guaranteed a minimum amount of - space for the use of a specific dataset - excluding its descendants. This - means that if a 10 GB reservation is set on - storage/home/bob, if another - dataset tries to use all of the free space, at least - 10 GB of space is reserved for this dataset. In - contrast to a regular reservation, - space used by snapshots and decendant datasets is not - counted against the reservation. As an example, if a - snapshot was taken of - storage/home/bob, enough disk - space would have to exist outside of the - refreservation amount for the - operation to succeed because descendants of the main - data set are not counted by the - refreservation amount and so do not - encroach on the space set. - - - - Resilver - - When a disk fails and must be replaced, the new - disk must be filled with the data that was lost. This - process of calculating and writing the missing data - (using the parity information distributed across the - remaining drives) to the new drive is called - Resilvering. - - - - - - - - - What Makes ZFS Different - - ZFS is significantly different from any previous file - system owing to the fact that it is more than just a file - system. 
ZFS combines the traditionally separate roles of - volume manager and file system, which provides unique - advantages because the file system is now aware of the - underlying structure of the disks. Traditional file systems - could only be created on a single disk at a time, if there - were two disks then two separate file systems would have to - be created. In a traditional hardware RAID - configuration, this problem was worked around by presenting - the operating system with a single logical disk made up of - the space provided by a number of disks, on top of which the - operating system placed its file system. Even in the case of - software RAID solutions like GEOM, the UFS - file system living on top of the RAID - transform believed that it was dealing with a single device. - ZFS's combination of the volume manager and the file system - solves this and allows the creation of many file systems all - sharing a pool of available storage. One of the biggest - advantages to ZFS's awareness of the physical layout of the - disks is that ZFS can grow the existing file systems - automatically when additional disks are added to the pool. - This new space is then made available to all of the file - systems. ZFS also has a number of different properties that - can be applied to each file system, creating many advantages - to creating a number of different filesystems and datasets - rather than a single monolithic filesystem. - - - - <acronym>ZFS</acronym> Quick Start Guide - - There is a start up mechanism that allows &os; to - mount ZFS pools during system - initialization. To set it, issue the following - commands: - - &prompt.root; echo 'zfs_enable="YES"' >> /etc/rc.conf -&prompt.root; service zfs start - - The examples in this section assume three - SCSI disks with the device names - da0, - da1, - and da2. - Users of SATA hardware should instead use - ada - device names. - - - Single Disk Pool - - To create a simple, non-redundant ZFS - pool using a single disk device, use - zpool: - - &prompt.root; zpool create example /dev/da0 - - To view the new pool, review the output of - df: - - &prompt.root; df -Filesystem 1K-blocks Used Avail Capacity Mounted on -/dev/ad0s1a 2026030 235230 1628718 13% / -devfs 1 1 0 100% /dev -/dev/ad0s1d 54098308 1032846 48737598 2% /usr -example 17547136 0 17547136 0% /example - - This output shows that the example - pool has been created and mounted. It - is now accessible as a file system. Files may be created - on it and users can browse it, as seen in the following - example: - - &prompt.root; cd /example -&prompt.root; ls -&prompt.root; touch testfile -&prompt.root; ls -al -total 4 -drwxr-xr-x 2 root wheel 3 Aug 29 23:15 . -drwxr-xr-x 21 root wheel 512 Aug 29 23:12 .. --rw-r--r-- 1 root wheel 0 Aug 29 23:15 testfile - - However, this pool is not taking advantage of any - ZFS features. To create a dataset on - this pool with compression enabled: - - &prompt.root; zfs create example/compressed -&prompt.root; zfs set compression=gzip example/compressed - - The example/compressed dataset is now - a ZFS compressed file system. Try - copying some large files to /example/compressed. 
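To get an idea of how effective the compression is after copying those files, the compressratio property can be queried; this is a small illustrative addition that reuses the example/compressed dataset created above:

&prompt.root; zfs get compression,compressratio example/compressed

On &os; 9.2 and later, where the LZ4 algorithm is available, compression=lz4 may be chosen instead of gzip for a lower CPU cost; which algorithm works best depends on the data being stored.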
- - Compression can be disabled with: - - &prompt.root; zfs set compression=off example/compressed - - To unmount a file system, issue the following command - and then verify by using df: - - &prompt.root; zfs umount example/compressed -&prompt.root; df -Filesystem 1K-blocks Used Avail Capacity Mounted on -/dev/ad0s1a 2026030 235232 1628716 13% / -devfs 1 1 0 100% /dev -/dev/ad0s1d 54098308 1032864 48737580 2% /usr -example 17547008 0 17547008 0% /example - - To re-mount the file system to make it accessible - again, and verify with df: - - &prompt.root; zfs mount example/compressed -&prompt.root; df -Filesystem 1K-blocks Used Avail Capacity Mounted on -/dev/ad0s1a 2026030 235234 1628714 13% / -devfs 1 1 0 100% /dev -/dev/ad0s1d 54098308 1032864 48737580 2% /usr -example 17547008 0 17547008 0% /example -example/compressed 17547008 0 17547008 0% /example/compressed - - The pool and file system may also be observed by viewing - the output from mount: - - &prompt.root; mount -/dev/ad0s1a on / (ufs, local) -devfs on /dev (devfs, local) -/dev/ad0s1d on /usr (ufs, local, soft-updates) -example on /example (zfs, local) -example/data on /example/data (zfs, local) -example/compressed on /example/compressed (zfs, local) - - ZFS datasets, after creation, may be - used like any file systems. However, many other features - are available which can be set on a per-dataset basis. In - the following example, a new file system, - data is created. Important files will be - stored here, the file system is set to keep two copies of - each data block: - - &prompt.root; zfs create example/data -&prompt.root; zfs set copies=2 example/data - - It is now possible to see the data and space utilization - by issuing df: - - &prompt.root; df -Filesystem 1K-blocks Used Avail Capacity Mounted on -/dev/ad0s1a 2026030 235234 1628714 13% / -devfs 1 1 0 100% /dev -/dev/ad0s1d 54098308 1032864 48737580 2% /usr -example 17547008 0 17547008 0% /example -example/compressed 17547008 0 17547008 0% /example/compressed -example/data 17547008 0 17547008 0% /example/data - - Notice that each file system on the pool has the same - amount of available space. This is the reason for using - df in these examples, to show that the - file systems use only the amount of space they need and all - draw from the same pool. The ZFS file - system does away with concepts such as volumes and - partitions, and allows for several file systems to occupy - the same pool. - - To destroy the file systems and then destroy the pool as - they are no longer needed: - - &prompt.root; zfs destroy example/compressed -&prompt.root; zfs destroy example/data -&prompt.root; zpool destroy example - - - - - <acronym>ZFS</acronym> RAID-Z - - There is no way to prevent a disk from failing. One - method of avoiding data loss due to a failed hard disk is to - implement RAID. ZFS - supports this feature in its pool design. RAID-Z pools - require 3 or more disks but yield more usable space than - mirrored pools. - - To create a RAID-Z pool, issue the - following command and specify the disks to add to the - pool: - - &prompt.root; zpool create storage raidz da0 da1 da2 - - - &sun; recommends that the number of devices used in - a RAID-Z configuration is between - three and nine. For environments requiring a single pool - consisting of 10 disks or more, consider breaking it up - into smaller RAID-Z groups. If only - two disks are available and redundancy is a requirement, - consider using a ZFS mirror. Refer to - &man.zpool.8; for more details. 
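As an illustrative sketch of that guideline (all device and pool names below are hypothetical), a larger set of disks can be split into several RAID-Z vdevs within a single pool, with data striped across them:

&prompt.root; zpool create bigpool raidz da0 da1 da2 da3 raidz da4 da5 da6 da7

or, with only two disks available and redundancy still required, a simple mirror can be used instead:

&prompt.root; zpool create mpool mirror da0 da1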
- - - This command creates the storage - zpool. This may be verified using &man.mount.8; and - &man.df.1;. This command makes a new file system in the - pool called home: - - &prompt.root; zfs create storage/home - - It is now possible to enable compression and keep extra - copies of directories and files using the following - commands: - - &prompt.root; zfs set copies=2 storage/home -&prompt.root; zfs set compression=gzip storage/home - - To make this the new home directory for users, copy the - user data to this directory, and create the appropriate - symbolic links: - - &prompt.root; cp -rp /home/* /storage/home -&prompt.root; rm -rf /home /usr/home -&prompt.root; ln -s /storage/home /home -&prompt.root; ln -s /storage/home /usr/home - - Users should now have their data stored on the freshly - created /storage/home. Test by - adding a new user and logging in as that user. - - Try creating a snapshot which may be rolled back - later: - - &prompt.root; zfs snapshot storage/home@08-30-08 - - Note that the snapshot option will only capture a real - file system, not a home directory or a file. The - @ character is a delimiter used between - the file system name or the volume name. When a user's - home directory gets trashed, restore it with: - - &prompt.root; zfs rollback storage/home@08-30-08 - - To get a list of all available snapshots, run - ls in the file system's - .zfs/snapshot - directory. For example, to see the previously taken - snapshot: - - &prompt.root; ls /storage/home/.zfs/snapshot - - It is possible to write a script to perform regular - snapshots on user data. However, over time, snapshots - may consume a great deal of disk space. The previous - snapshot may be removed using the following command: - - &prompt.root; zfs destroy storage/home@08-30-08 - - After testing, /storage/home can be made the - real /home using - this command: - - &prompt.root; zfs set mountpoint=/home storage/home - - Run df and - mount to confirm that the system now - treats the file system as the real - /home: - - &prompt.root; mount -/dev/ad0s1a on / (ufs, local) -devfs on /dev (devfs, local) -/dev/ad0s1d on /usr (ufs, local, soft-updates) -storage on /storage (zfs, local) -storage/home on /home (zfs, local) -&prompt.root; df -Filesystem 1K-blocks Used Avail Capacity Mounted on -/dev/ad0s1a 2026030 235240 1628708 13% / -devfs 1 1 0 100% /dev -/dev/ad0s1d 54098308 1032826 48737618 2% /usr -storage 26320512 0 26320512 0% /storage -storage/home 26320512 0 26320512 0% /home - - This completes the RAID-Z - configuration. To get status updates about the file systems - created during the nightly &man.periodic.8; runs, issue the - following command: - - &prompt.root; echo 'daily_status_zfs_enable="YES"' >> /etc/periodic.conf - - - - Recovering <acronym>RAID</acronym>-Z - - Every software RAID has a method of - monitoring its state. 
The status of - RAID-Z devices may be viewed with the - following command: - - &prompt.root; zpool status -x - *** DIFF OUTPUT TRUNCATED AT 1000 LINES *** From owner-svn-doc-projects@FreeBSD.ORG Wed Aug 14 22:29:08 2013 Return-Path: Delivered-To: svn-doc-projects@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id BD20C37F; Wed, 14 Aug 2013 22:29:08 +0000 (UTC) (envelope-from gabor@FreeBSD.org) Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 9B66C2BC2; Wed, 14 Aug 2013 22:29:08 +0000 (UTC) Received: from svn.freebsd.org ([127.0.1.70]) by svn.freebsd.org (8.14.7/8.14.7) with ESMTP id r7EMT85e075598; Wed, 14 Aug 2013 22:29:08 GMT (envelope-from gabor@svn.freebsd.org) Received: (from gabor@localhost) by svn.freebsd.org (8.14.7/8.14.5/Submit) id r7EMT83B075595; Wed, 14 Aug 2013 22:29:08 GMT (envelope-from gabor@svn.freebsd.org) Message-Id: <201308142229.r7EMT83B075595@svn.freebsd.org> From: Gabor Kovesdan Date: Wed, 14 Aug 2013 22:29:08 +0000 (UTC) To: doc-committers@freebsd.org, svn-doc-projects@freebsd.org Subject: svn commit: r42541 - in projects/db5/share: mk xml xsl X-SVN-Group: doc-projects MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-BeenThere: svn-doc-projects@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: SVN commit messages for doc projects trees List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 14 Aug 2013 22:29:08 -0000 Author: gabor Date: Wed Aug 14 22:29:07 2013 New Revision: 42541 URL: http://svnweb.freebsd.org/changeset/doc/42541 Log: - Improve generated epub file * Generate EPUB3 * Wrap programlistings if necessary * Use grey background and smaller fonts for programlistings * Justify main text * Do not break after table caption Added: projects/db5/share/xml/docbook-epub.css.xml (contents, props changed) Modified: projects/db5/share/mk/doc.docbook.mk projects/db5/share/xsl/freebsd-epub.xsl Modified: projects/db5/share/mk/doc.docbook.mk ============================================================================== --- projects/db5/share/mk/doc.docbook.mk Wed Aug 14 22:21:15 2013 (r42540) +++ projects/db5/share/mk/doc.docbook.mk Wed Aug 14 22:29:07 2013 (r42541) @@ -299,13 +299,12 @@ ${DOC}.html.tar: ${DOC}.html ${LOCAL_IMA ${DOC}.epub: ${DOC}.parsed.xml ${LOCAL_IMAGES_LIB} ${LOCAL_IMAGES_PNG} \ ${CSS_SHEET} ${XSLTPROC} ${XSLTPROCOPTS} ${XSLEPUB} ${DOC}.parsed.xml - ${ECHO} "application/epub+zip" > mimetype - ${CP} ${CSS_SHEET} OEBPS/ .if defined(LOCAL_IMAGES_LIB) || defined(LOCAL_IMAGES_PNG) - ${CP} ${LOCAL_IMAGES_LIB} ${LOCAL_IMAGES_PNG} OEBPS/ +.for f in ${LOCAL_IMAGES_LIB} ${LOCAL_IMAGES_PNG} + ${CP} ${f} OEBPS/ +.endfor .endif - ${ZIP} ${ZIPOPTS} ${DOC}.epub mimetype - ${ZIP} ${ZIPOPTS} -Dr ${DOC}.epub OEBPS META-INF + ${ZIP} ${ZIPOPTS} -r -X ${DOC}.epub mimetype OEBPS META-INF # TXT -------------------------------------------------------------------- Added: projects/db5/share/xml/docbook-epub.css.xml ============================================================================== --- /dev/null 00:00:00 1970 (empty, because file is newly added) +++ projects/db5/share/xml/docbook-epub.css.xml Wed Aug 14 22:29:07 
2013 (r42541) @@ -0,0 +1,196 @@ + + Modified: projects/db5/share/xsl/freebsd-epub.xsl ============================================================================== --- projects/db5/share/xsl/freebsd-epub.xsl Wed Aug 14 22:21:15 2013 (r42540) +++ projects/db5/share/xsl/freebsd-epub.xsl Wed Aug 14 22:29:07 2013 (r42541) @@ -3,14 +3,61 @@ - + - + + + ../xml/docbook-epub.css.xml + +figure after +example before +equation after +table before +procedure before + + + + + + + + + + + + + + + + + + + + + + + + + + + + + From owner-svn-doc-projects@FreeBSD.ORG Wed Aug 14 23:34:16 2013 Return-Path: Delivered-To: svn-doc-projects@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id 88C568C4; Wed, 14 Aug 2013 23:34:16 +0000 (UTC) (envelope-from wblock@FreeBSD.org) Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 734D52F67; Wed, 14 Aug 2013 23:34:16 +0000 (UTC) Received: from svn.freebsd.org ([127.0.1.70]) by svn.freebsd.org (8.14.7/8.14.7) with ESMTP id r7ENYGYA021850; Wed, 14 Aug 2013 23:34:16 GMT (envelope-from wblock@svn.freebsd.org) Received: (from wblock@localhost) by svn.freebsd.org (8.14.7/8.14.5/Submit) id r7ENYGR9021849; Wed, 14 Aug 2013 23:34:16 GMT (envelope-from wblock@svn.freebsd.org) Message-Id: <201308142334.r7ENYGR9021849@svn.freebsd.org> From: Warren Block Date: Wed, 14 Aug 2013 23:34:16 +0000 (UTC) To: doc-committers@freebsd.org, svn-doc-projects@freebsd.org Subject: svn commit: r42542 - projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs X-SVN-Group: doc-projects MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-BeenThere: svn-doc-projects@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: SVN commit messages for doc projects trees List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 14 Aug 2013 23:34:16 -0000 Author: wblock Date: Wed Aug 14 23:34:16 2013 New Revision: 42542 URL: http://svnweb.freebsd.org/changeset/doc/42542 Log: Whitespace-only fixes. Translators, please ignore. Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml ============================================================================== --- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Wed Aug 14 22:29:07 2013 (r42541) +++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Wed Aug 14 23:34:16 2013 (r42542) @@ -15,723 +15,729 @@ - The Z File System (ZFS) + The Z File System (ZFS) - The Z file system, originally developed by &sun;, - is designed to future proof the file system by removing many of - the arbitrary limits imposed on previous file systems. ZFS - allows continuous growth of the pooled storage by adding - additional devices. ZFS allows you to create many file systems - (in addition to block devices) out of a single shared pool of - storage. Space is allocated as needed, so all remaining free - space is available to each file system in the pool. It is also - designed for maximum data integrity, supporting data snapshots, - multiple copies, and cryptographic checksums. 
It uses a - software data replication model, known as - RAID-Z. RAID-Z provides - redundancy similar to hardware RAID, but is - designed to prevent data write corruption and to overcome some - of the limitations of hardware RAID. - - - ZFS Features and Terminology - - ZFS is a fundamentally different file system because it - is more than just a file system. ZFS combines the roles of - file system and volume manager, enabling additional storage - devices to be added to a live system and having the new space - available on all of the existing file systems in that pool - immediately. By combining the traditionally separate roles, - ZFS is able to overcome previous limitations that prevented - RAID groups being able to grow. Each top level device in a - zpool is called a vdev, which can be a simple disk or a RAID - transformation such as a mirror or RAID-Z array. ZFS file - systems (called datasets), each have access to the combined - free space of the entire pool. As blocks are allocated the - free space in the pool available to of each file system is - decreased. This approach avoids the common pitfall with - extensive partitioning where free space becomes fragmentated - across the partitions. - - - - - - zpool - - A storage pool is the most basic building block - of ZFS. A pool is made up of one or more vdevs, the - underlying devices that store the data. A pool is - then used to create one or more file systems - (datasets) or block devices (volumes). These datasets - and volumes share the pool of remaining free space. - Each pool is uniquely identified by a name and a - GUID. The zpool also controls the - version number and therefore the features available - for use with ZFS. - &os; 9.0 and 9.1 include - support for ZFS version 28. Future versions use ZFS - version 5000 with feature flags. This allows - greater cross-compatibility with other - implementations of ZFS. - - - - - vdev Types - - A zpool is made up of one or more vdevs, which - themselves can be a single disk or a group of disks, - in the case of a RAID transform. When multiple vdevs - are used, ZFS spreads data across the vdevs to - increase performance and maximize usable space. - - - - Disk - The most basic type - of vdev is a standard block device. This can be - an entire disk (such as - /dev/ada0 - or - /dev/da0) - or a partition - (/dev/ada0p3). - Contrary to the Solaris documentation, on &os; - there is no performance penalty for using a - partition rather than an entire disk. - - - - - File - In addition to - disks, ZFS pools can be backed by regular files, - this is especially useful for testing and - experimentation. Use the full path to the file - as the device path in the zpool create command. - All vdevs must be atleast 128 MB in - size. - - - - - Mirror - When creating a - mirror, specify the mirror - keyword followed by the list of member devices - for the mirror. A mirror consists of two or - more devices, all data will be written to all - member devices. A mirror vdev will only hold as - much data as its smallest member. A mirror vdev - can withstand the failure of all but one of its - members without losing any data. - - - - A regular single disk vdev can be - upgraded to a mirror vdev at any time using - the zpool The Z file system, originally developed by &sun;, + is designed to future proof the file system by removing many of + the arbitrary limits imposed on previous file systems. ZFS + allows continuous growth of the pooled storage by adding + additional devices. 
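As a hypothetical sketch of this kind of growth (mypool and the da devices are placeholders, and the pool is assumed to already consist of RAID-Z vdevs of the same width), an additional vdev can be added to a live pool and the new capacity becomes available immediately:

&prompt.root; zpool add mypool raidz da3 da4 da5
&prompt.root; zpool list mypool

Note that adding a top-level vdev is effectively a one-way operation, so the pool layout should be planned before running zpool add.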
ZFS allows you to create many file systems + (in addition to block devices) out of a single shared pool of + storage. Space is allocated as needed, so all remaining free + space is available to each file system in the pool. It is also + designed for maximum data integrity, supporting data snapshots, + multiple copies, and cryptographic checksums. It uses a + software data replication model, known as + RAID-Z. RAID-Z provides + redundancy similar to hardware RAID, but is + designed to prevent data write corruption and to overcome some + of the limitations of hardware RAID. + + + ZFS Features and Terminology + + ZFS is a fundamentally different file system because it + is more than just a file system. ZFS combines the roles of + file system and volume manager, enabling additional storage + devices to be added to a live system and having the new space + available on all of the existing file systems in that pool + immediately. By combining the traditionally separate roles, + ZFS is able to overcome previous limitations that prevented + RAID groups being able to grow. Each top level device in a + zpool is called a vdev, which can be a simple disk or a RAID + transformation such as a mirror or RAID-Z array. ZFS file + systems (called datasets), each have access to the combined + free space of the entire pool. As blocks are allocated the + free space in the pool available to of each file system is + decreased. This approach avoids the common pitfall with + extensive partitioning where free space becomes fragmentated + across the partitions. + + + + + + zpool + + A storage pool is the most basic building block of + ZFS. A pool is made up of one or more vdevs, the + underlying devices that store the data. A pool is then + used to create one or more file systems (datasets) or + block devices (volumes). These datasets and volumes + share the pool of remaining free space. Each pool is + uniquely identified by a name and a + GUID. The zpool also controls the + version number and therefore the features available for + use with ZFS. + + + &os; 9.0 and 9.1 include support for ZFS version + 28. Future versions use ZFS version 5000 with + feature flags. This allows greater + cross-compatibility with other implementations of + ZFS. + + + + + vdev Types + + A zpool is made up of one or more vdevs, which + themselves can be a single disk or a group of disks, in + the case of a RAID transform. When multiple vdevs are + used, ZFS spreads data across the vdevs to increase + performance and maximize usable space. + + + + + Disk - The most basic type + of vdev is a standard block device. This can be + an entire disk (such as + /dev/ada0 + or + /dev/da0) + or a partition + (/dev/ada0p3). + Contrary to the Solaris documentation, on &os; + there is no performance penalty for using a + partition rather than an entire disk. + + + + + File - In addition to + disks, ZFS pools can be backed by regular files, + this is especially useful for testing and + experimentation. Use the full path to the file + as the device path in the zpool create command. + All vdevs must be atleast 128 MB in + size. + + + + + Mirror - When creating a + mirror, specify the mirror + keyword followed by the list of member devices + for the mirror. A mirror consists of two or + more devices, all data will be written to all + member devices. A mirror vdev will only hold as + much data as its smallest member. A mirror vdev + can withstand the failure of all but one of its + members without losing any data. 
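For instance, a pool backed by a single three-way mirror vdev, able to survive the loss of any two of its three (hypothetical) member disks, could be created with:

&prompt.root; zpool create mpool mirror da0 da1 da2

As the note that follows explains, an existing single-disk vdev can also be converted into a mirror later with zpool attach.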
+ + + regular single disk vdev can be upgraded to + a mirror vdev at any time using the + zpool attach - command. - - - - - - RAID-Z - - ZFS implements RAID-Z, a variation on standard - RAID-5 that offers better distribution of parity - and eliminates the "RAID-5 write hole" in which - the data and parity information become - inconsistent after an unexpected restart. ZFS - supports 3 levels of RAID-Z which provide - varying levels of redundancy in exchange for - decreasing levels of usable storage. The types - are named RAID-Z1 through Z3 based on the number - of parity devinces in the array and the number - of disks that the pool can operate - without. - - In a RAID-Z1 configuration with 4 disks, - each 1 TB, usable storage will be 3 TB - and the pool will still be able to operate in - degraded mode with one faulted disk. If an - additional disk goes offline before the faulted - disk is replaced and resilvered, all data in the - pool can be lost. - - In a RAID-Z3 configuration with 8 disks of - 1 TB, the volume would provide 5TB of - usable space and still be able to operate with - three faulted disks. Sun recommends no more - than 9 disks in a single vdev. If the - configuration has more disks, it is recommended - to divide them into separate vdevs and the pool - data will be striped across them. - - A configuration of 2 RAID-Z2 vdevs - consisting of 8 disks each would create - something similar to a RAID 60 array. A RAID-Z - group's storage capacity is approximately the - size of the smallest disk, multiplied by the - number of non-parity disks. 4x 1 TB disks - in Z1 has an effective size of approximately - 3 TB, and a 8x 1 TB array in Z3 will - yeild 5 TB of usable space. - - - - - Spare - ZFS has a special - pseudo-vdev type for keeping track of available - hot spares. Note that installed hot spares are - not deployed automatically; they must manually - be configured to replace the failed device using - the zfs replace command. - - - - - Log - ZFS Log Devices, also - known as ZFS Intent Log (ZIL) - move the intent log from the regular pool - devices to a dedicated device. The ZIL - accelerates synchronous transactions by using - storage devices (such as - SSDs) that are faster - compared to those used for the main pool. When - data is being written and the application - requests a guarantee that the data has been - safely stored, the data is written to the faster - ZIL storage, then later flushed out to the - regular disks, greatly reducing the latency of - synchronous writes. Log devices can be - mirrored, but RAID-Z is not supported. When - specifying multiple log devices writes will be - load balanced across all devices. - - - - - Cache - Adding a cache vdev - to a zpool will add the storage of the cache to - the L2ARC. Cache devices cannot be mirrored. - Since a cache device only stores additional - copies of existing data, there is no risk of - data loss. - - - - - - Adaptive Replacement - Cache (ARC) - - ZFS uses an Adaptive Replacement Cache - (ARC), rather than a more - traditional Least Recently Used - (LRU) cache. An - LRU cache is a simple list of items - in the cache sorted by when each object was most - recently used; new items are added to the top of the - list and once the cache is full items from the bottom - of the list are evicted to make room for more active - objects. An ARC consists of four - lists; the Most Recently Used (MRU) - and Most Frequently Used (MFU) - objects, plus a ghost list for each. 
These ghost - lists tracks recently evicted objects to provent them - being added back to the cache. This increases the - cache hit ratio by avoiding objects that have a - history of only being used occasionally. Another - advantage of using both an MRU and - MFU is that scanning an entire - filesystem would normally evict all data from an - MRU or LRU cache - in favor of this freshly accessed content. In the - case of ZFS since there is also an - MFU that only tracks the most - frequently used objects, the cache of the most - commonly accessed blocks remains. - - - - L2ARC - - The L2ARC is the second level - of the ZFS caching system. The - primary ARC is stored in - RAM, however since the amount of - available RAM is often limited, - ZFS can also make use of + + + + + + RAID-Z - + ZFS implements RAID-Z, a variation on standard + RAID-5 that offers better distribution of parity + and eliminates the "RAID-5 write hole" in which + the data and parity information become + inconsistent after an unexpected restart. ZFS + supports 3 levels of RAID-Z which provide + varying levels of redundancy in exchange for + decreasing levels of usable storage. The types + are named RAID-Z1 through Z3 based on the number + of parity devinces in the array and the number + of disks that the pool can operate + without. + + In a RAID-Z1 configuration with 4 disks, + each 1 TB, usable storage will be 3 TB + and the pool will still be able to operate in + degraded mode with one faulted disk. If an + additional disk goes offline before the faulted + disk is replaced and resilvered, all data in the + pool can be lost. + + In a RAID-Z3 configuration with 8 disks of + 1 TB, the volume would provide 5TB of + usable space and still be able to operate with + three faulted disks. Sun recommends no more + than 9 disks in a single vdev. If the + configuration has more disks, it is recommended + to divide them into separate vdevs and the pool + data will be striped across them. + + A configuration of 2 RAID-Z2 vdevs + consisting of 8 disks each would create + something similar to a RAID 60 array. A RAID-Z + group's storage capacity is approximately the + size of the smallest disk, multiplied by the + number of non-parity disks. 4x 1 TB disks + in Z1 has an effective size of approximately + 3 TB, and a 8x 1 TB array in Z3 will + yeild 5 TB of usable space. + + + + + Spare - ZFS has a special + pseudo-vdev type for keeping track of available + hot spares. Note that installed hot spares are + not deployed automatically; they must manually + be configured to replace the failed device using + the zfs replace command. + + + + + Log - ZFS Log Devices, also + known as ZFS Intent Log (ZIL) + move the intent log from the regular pool + devices to a dedicated device. The ZIL + accelerates synchronous transactions by using + storage devices (such as + SSDs) that are faster + compared to those used for the main pool. When + data is being written and the application + requests a guarantee that the data has been + safely stored, the data is written to the faster + ZIL storage, then later flushed out to the + regular disks, greatly reducing the latency of + synchronous writes. Log devices can be + mirrored, but RAID-Z is not supported. When + specifying multiple log devices writes will be + load balanced across all devices. + + + + + Cache - Adding a cache vdev + to a zpool will add the storage of the cache to + the L2ARC. Cache devices cannot be mirrored. 
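Both log and cache vdevs are normally added to an existing pool with zpool add; in this hypothetical sketch the pool name and the ada devices are placeholders, the log is mirrored, and the cache, as just noted, is not:

&prompt.root; zpool add mypool log mirror ada1 ada2
&prompt.root; zpool add mypool cache ada3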
+ Since a cache device only stores additional + copies of existing data, there is no risk of + data loss. + + + + + + Adaptive Replacement + Cache (ARC) + + ZFS uses an Adaptive Replacement Cache + (ARC), rather than a more + traditional Least Recently Used + (LRU) cache. An + LRU cache is a simple list of items + in the cache sorted by when each object was most + recently used; new items are added to the top of the + list and once the cache is full items from the bottom + of the list are evicted to make room for more active + objects. An ARC consists of four + lists; the Most Recently Used (MRU) + and Most Frequently Used (MFU) + objects, plus a ghost list for each. These ghost + lists tracks recently evicted objects to provent them + being added back to the cache. This increases the + cache hit ratio by avoiding objects that have a + history of only being used occasionally. Another + advantage of using both an MRU and + MFU is that scanning an entire + filesystem would normally evict all data from an + MRU or LRU cache + in favor of this freshly accessed content. In the + case of ZFS since there is also an + MFU that only tracks the most + frequently used objects, the cache of the most + commonly accessed blocks remains. + + + + L2ARC + + The L2ARC is the second level + of the ZFS caching system. The + primary ARC is stored in + RAM, however since the amount of + available RAM is often limited, + ZFS can also make use of cache - vdevs. Solid State Disks (SSDs) - are often used as these cache devices due to their - higher speed and lower latency compared to traditional - spinning disks. An L2ARC is entirely optional, but - having one will significantly increase read speeds for - files that are cached on the SSD - instead of having to be read from the regular spinning - disks. The L2ARC can also speed up SSDs) are + often used as these cache devices due to their higher + speed and lower latency compared to traditional spinning + disks. An L2ARC is entirely optional, but having one + will significantly increase read speeds for files that + are cached on the SSD instead of + having to be read from the regular spinning disks. The + L2ARC can also speed up deduplication - since a DDT that does not fit in - RAM but does fit in the - L2ARC will be much faster than if - the DDT had to be read from disk. - The rate at which data is added to the cache devices - is limited to prevent prematurely wearing out the - SSD with too many writes. Until - the cache is full (the first block has been evicted to - make room), writing to the L2ARC is - limited to the sum of the write limit and the boost - limit, then after that limited to the write limit. A - pair of sysctl values control these rate limits; - vfs.zfs.l2arc_write_max controls - how many bytes are written to the cache per second, - while vfs.zfs.l2arc_write_boost - adds to this limit during the "Turbo Warmup Phase" - (Write Boost). - - - - Copy-On-Write - - Unlike a traditional file system, when data is - overwritten on ZFS the new data is written to a - different block rather than overwriting the old data - in place. Only once this write is complete is the - metadata then updated to point to the new location of - the data. This means that in the event of a shorn - write (a system crash or power loss in the middle of - writing a file) the entire original contents of the - file are still available and the incomplete write is - discarded. This also means that ZFS does not require - a fsck after an unexpected shutdown. 
- - - - Dataset - - Dataset is the generic term for a ZFS file - system, volume, snapshot or clone. Each dataset will - have a unique name in the format: - poolname/path@snapshot. The root - of the pool is technically a dataset as well. Child - datasets are named hierarchically like directories; - for example mypool/home, the home - dataset is a child of mypool and inherits properties - from it. This can be expended further by creating - mypool/home/user. This grandchild - dataset will inherity properties from the parent and - grandparent. It is also possible to set properties - on a child to override the defaults inherited from the - parents and grandparents. ZFS also allows - administration of datasets and their children to be - delegated. - - - - Volume - - In additional to regular file system datasets, - ZFS can also create volumes, which are block devices. - Volumes have many of the same features, including - copy-on-write, snapshots, clones and - checksumming. Volumes can be useful for running other - file system formats on top of ZFS, such as UFS or in - the case of Virtualization or exporting - iSCSI extents. - - - - Snapshot - - The copy-on-write - design of ZFS allows for nearly instantaneous - consistent snapshots with arbitrary names. After - taking a snapshot of a dataset (or a recursive - snapshot of a parent dataset that will include all - child datasets), new data is written to new blocks (as - described above), however the old blocks are not - reclaimed as free space. There are then two versions - of the file system, the snapshot (what the file system - looked like before) and the live file system; however - no additional space is used. As new data is written - to the live file system, new blocks are allocated to - store this data. The apparent size of the snapshot - will grow as the blocks are no longer used in the live - file system, but only in the snapshot. These - snapshots can be mounted (read only) to allow for the - recovery of previous versions of files. It is also - possible to DDT that does not fit in + RAM but does fit in the + L2ARC will be much faster than if the + DDT had to be read from disk. The + rate at which data is added to the cache devices is + limited to prevent prematurely wearing out the + SSD with too many writes. Until the + cache is full (the first block has been evicted to make + room), writing to the L2ARC is + limited to the sum of the write limit and the boost + limit, then after that limited to the write limit. A + pair of sysctl values control these rate limits; + vfs.zfs.l2arc_write_max controls how + many bytes are written to the cache per second, while + vfs.zfs.l2arc_write_boost adds to + this limit during the "Turbo Warmup Phase" (Write + Boost). + + + + Copy-On-Write + + Unlike a traditional file system, when data is + overwritten on ZFS the new data is written to a + different block rather than overwriting the old data in + place. Only once this write is complete is the metadata + then updated to point to the new location of the data. + This means that in the event of a shorn write (a system + crash or power loss in the middle of writing a file) the + entire original contents of the file are still available + and the incomplete write is discarded. This also means + that ZFS does not require a fsck after an unexpected + shutdown. + + + + Dataset + + Dataset is the generic term for a ZFS file system, + volume, snapshot or clone. Each dataset will have a + unique name in the format: + poolname/path@snapshot. 
The root of + the pool is technically a dataset as well. Child + datasets are named hierarchically like directories; for + example mypool/home, the home dataset + is a child of mypool and inherits properties from it. + This can be expended further by creating + mypool/home/user. This grandchild + dataset will inherity properties from the parent and + grandparent. It is also possible to set properties + on a child to override the defaults inherited from the + parents and grandparents. ZFS also allows + administration of datasets and their children to be + delegated. + + + + Volume + + In additional to regular file system datasets, ZFS + can also create volumes, which are block devices. + Volumes have many of the same features, including + copy-on-write, snapshots, clones and checksumming. + Volumes can be useful for running other file system + formats on top of ZFS, such as UFS or in the case of + Virtualization or exporting iSCSI + extents. + + + + Snapshot + + The copy-on-write + + design of ZFS allows for nearly instantaneous consistent + snapshots with arbitrary names. After taking a snapshot + of a dataset (or a recursive snapshot of a parent + dataset that will include all child datasets), new data + is written to new blocks (as described above), however + the old blocks are not reclaimed as free space. There + are then two versions of the file system, the snapshot + (what the file system looked like before) and the live + file system; however no additional space is used. As + new data is written to the live file system, new blocks + are allocated to store this data. The apparent size of + the snapshot will grow as the blocks are no longer used + in the live file system, but only in the snapshot. + These snapshots can be mounted (read only) to allow for + the recovery of previous versions of files. It is also + possible to rollback - a live file system to a specific snapshot, undoing any - changes that took place after the snapshot was taken. - Each block in the zpool has a reference counter which - indicates how many snapshots, clones, datasets or - volumes make use of that block. As files and - snapshots are deleted, the reference count is - decremented; once a block is no longer referenced, it - is reclaimed as free space. Snapshots can also be - marked with a hold, - once a snapshot is held, any attempt to destroy it - will return an EBUY error. Each snapshot can have - multiple holds, each with a unique name. The release - command removes the hold so the snapshot can then be - deleted. Snapshots can be taken on volumes, however - they can only be cloned or rolled back, not mounted - independently. - - - - Clone - - Snapshots can also be cloned; a clone is a - writable version of a snapshot, allowing the file - system to be forked as a new dataset. As with a - snapshot, a clone initially consumes no additional - space, only as new data is written to a clone and new - blocks are allocated does the apparent size of the - clone grow. As blocks are overwritten in the cloned - file system or volume, the reference count on the - previous block is decremented. The snapshot upon - which a clone is based cannot be deleted because the - clone is dependeant upon it (the snapshot is the - parent, and the clone is the child). Clones can be - promoted, reversing this - dependeancy, making the clone the parent and the - previous parent the child. This operation requires no - additional space, however it will change the way the - used space is accounted. 
- - - - Checksum - - Every block that is allocated is also checksummed - (which algorithm is used is a per dataset property, - see: zfs set). ZFS transparently validates the - checksum of each block as it is read, allowing ZFS to - detect silent corruption. If the data that is read - does not match the expected checksum, ZFS will attempt - to recover the data from any available redundancy - (mirrors, RAID-Z). You can trigger the validation of - all checksums using the scrub - command. The available checksum algorithms include: - - fletcher2 - fletcher4 - sha256 - The fletcher algorithms are faster, - but sha256 is a strong cryptographic hash and has a - much lower chance of a collisions at the cost of some - performance. Checksums can be disabled but it is - inadvisable. - - - - Compression - - Each dataset in ZFS has a compression property, - which defaults to off. This property can be set to - one of a number of compression algorithms, which will - cause all new data that is written to this dataset to - be compressed as it is written. In addition to the - reduction in disk usage, this can also increase read - and write throughput, as only the smaller compressed - version of the file needs to be read or - written. - LZ4 compression is only available after &os; - 9.2 - - - - - Deduplication - - ZFS has the ability to detect duplicate blocks of - data as they are written (thanks to the checksumming - feature). If deduplication is enabled, instead of - writing the block a second time, the reference count - of the existing block will be increased, saving - storage space. In order to do this, ZFS keeps a - deduplication table (DDT) in - memory, containing the list of unique checksums, the - location of that block and a reference count. When - new data is written, the checksum is calculated and - compared to the list. If a match is found, the data - is considered to be a duplicate. When deduplication - is enabled, the checksum algorithm is changed to - SHA256 to provide a secure - cryptographic hash. ZFS deduplication is tunable; if - dedup is on, then a matching checksum is assumed to - mean that the data is identical. If dedup is set to - verify, then the data in the two blocks will be - checked byte-for-byte to ensure it is actually - identical and if it is not, the hash collision will be - noted by ZFS and the two blocks will be stored - separately. Due to the nature of the - DDT, having to store the hash of - each unique block, it consumes a very large amount of - memory (a general rule of thumb is 5-6 GB of ram - per 1 TB of deduplicated data). In situations - where it is not practical to have enough - RAM to keep the entire DDT in - memory, performance will suffer greatly as the DDT - will need to be read from disk before each new block - is written. Deduplication can make use of the L2ARC - to store the DDT, providing a middle ground between - fast system memory and slower disks. It is advisable - to consider using ZFS compression instead, which often - provides nearly as much space savings without the - additional memory requirement. - - - - Scrub - - In place of a consistency check like fsck, ZFS - has the scrub command, which reads - all data blocks stored on the pool and verifies their - checksums them against the known good checksums stored - in the metadata. This periodic check of all the data - stored on the pool ensures the recovery of any - corrupted blocks before they are needed. 
A scrub is - not required after an unclean shutdown, but it is - recommended that you run a scrub at least once each - quarter. ZFS compares the checksum for each block as - it is read in the normal course of use, but a scrub - operation makes sure even infrequently used blocks are - checked for silent corruption. - - - - Dataset Quota - - ZFS provides very fast and accurate dataset, user - and group space accounting in addition to quotes and - space reservations. This gives the administrator fine - grained control over how space is allocated and allows - critical file systems to reserve space to ensure other - file systems do not take all of the free space. - ZFS supports different types of quotas: the - dataset quota, the + + + + Clone + + Snapshots can also be cloned; a clone is a writable + version of a snapshot, allowing the file system to be + forked as a new dataset. As with a snapshot, a clone + initially consumes no additional space, only as new data + is written to a clone and new blocks are allocated does + the apparent size of the clone grow. As blocks are + overwritten in the cloned file system or volume, the + reference count on the previous block is decremented. + The snapshot upon which a clone is based cannot be + deleted because the clone is dependeant upon it (the + snapshot is the parent, and the clone is the child). + Clones can be promoted, reversing + this dependeancy, making the clone the parent and the + previous parent the child. This operation requires no + additional space, however it will change the way the + used space is accounted. + + + + Checksum + + Every block that is allocated is also checksummed + (which algorithm is used is a per dataset property, see: + zfs set). ZFS transparently validates the checksum of + each block as it is read, allowing ZFS to detect silent + corruption. If the data that is read does not match the + expected checksum, ZFS will attempt to recover the data + from any available redundancy (mirrors, RAID-Z). You + can trigger the validation of all checksums using the + scrub + command. The available checksum algorithms include: + + + + fletcher2 + + + + fletcher4 + + + + sha256 + + + + The fletcher algorithms are faster, but sha256 is a + strong cryptographic hash and has a much lower chance of + a collisions at the cost of some performance. Checksums + can be disabled but it is inadvisable. + + + + Compression + + Each dataset in ZFS has a compression property, + which defaults to off. This property can be set to one + of a number of compression algorithms, which will cause + all new data that is written to this dataset to be + compressed as it is written. In addition to the + reduction in disk usage, this can also increase read and + write throughput, as only the smaller compressed version + of the file needs to be read or written. + + + LZ4 compression is only available after &os; + 9.2 + + + + + Deduplication + + ZFS has the ability to detect duplicate blocks of + data as they are written (thanks to the checksumming + feature). If deduplication is enabled, instead of + writing the block a second time, the reference count of + the existing block will be increased, saving storage + space. In order to do this, ZFS keeps a deduplication + table (DDT) in memory, containing the + list of unique checksums, the location of that block and + a reference count. When new data is written, the + checksum is calculated and compared to the list. If a + match is found, the data is considered to be a + duplicate. 
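As a hedged sketch of how this is switched on (mypool/backups is a hypothetical dataset), deduplication is a per-dataset property, and the pool-wide deduplication ratio can be checked afterwards with zpool list:

&prompt.root; zfs set dedup=on mypool/backups
&prompt.root; zpool list mypool

Setting dedup=verify instead of on enables the byte-for-byte comparison described in this entry.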
When deduplication is enabled, the checksum + algorithm is changed to SHA256 to + provide a secure cryptographic hash. ZFS deduplication + is tunable; if dedup is on, then a matching checksum is + assumed to mean that the data is identical. If dedup is + set to verify, then the data in the two blocks will be + checked byte-for-byte to ensure it is actually identical + and if it is not, the hash collision will be noted by + ZFS and the two blocks will be stored separately. Due + to the nature of the DDT, having to + store the hash of each unique block, it consumes a very + large amount of memory (a general rule of thumb is + 5-6 GB of ram per 1 TB of deduplicated data). + In situations where it is not practical to have enough + RAM to keep the entire DDT in memory, *** DIFF OUTPUT TRUNCATED AT 1000 LINES *** From owner-svn-doc-projects@FreeBSD.ORG Thu Aug 15 01:04:54 2013 Return-Path: Delivered-To: svn-doc-projects@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id BD6EAD7F; Thu, 15 Aug 2013 01:04:54 +0000 (UTC) (envelope-from wblock@FreeBSD.org) Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id AA1592704; Thu, 15 Aug 2013 01:04:54 +0000 (UTC) Received: from svn.freebsd.org ([127.0.1.70]) by svn.freebsd.org (8.14.7/8.14.7) with ESMTP id r7F14sKS055833; Thu, 15 Aug 2013 01:04:54 GMT (envelope-from wblock@svn.freebsd.org) Received: (from wblock@localhost) by svn.freebsd.org (8.14.7/8.14.5/Submit) id r7F14snc055832; Thu, 15 Aug 2013 01:04:54 GMT (envelope-from wblock@svn.freebsd.org) Message-Id: <201308150104.r7F14snc055832@svn.freebsd.org> From: Warren Block Date: Thu, 15 Aug 2013 01:04:54 +0000 (UTC) To: doc-committers@freebsd.org, svn-doc-projects@freebsd.org Subject: svn commit: r42543 - projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs X-SVN-Group: doc-projects MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-BeenThere: svn-doc-projects@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: SVN commit messages for doc projects trees List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 15 Aug 2013 01:04:54 -0000 Author: wblock Date: Thu Aug 15 01:04:54 2013 New Revision: 42543 URL: http://svnweb.freebsd.org/changeset/doc/42543 Log: Fix IDs. Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml ============================================================================== --- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Wed Aug 14 23:34:16 2013 (r42542) +++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Thu Aug 15 01:04:54 2013 (r42543) @@ -33,7 +33,7 @@ designed to prevent data write corruption and to overcome some of the limitations of hardware RAID. - + ZFS Features and Terminology ZFS is a fundamentally different file system because it @@ -58,7 +58,7 @@ zpool + id="zfs-term-zpool">zpool A storage pool is the most basic building block of ZFS. 
A pool is made up of one or more vdevs, the @@ -82,7 +82,7 @@ vdev Types + id="zfs-term-vdev">vdev Types A zpool is made up of one or more vdevs, which themselves can be a single disk or a group of disks, in @@ -92,7 +92,7 @@ - + Disk - The most basic type of vdev is a standard block device. This can be an entire disk (such as @@ -107,7 +107,7 @@ - + File - In addition to disks, ZFS pools can be backed by regular files, this is especially useful for testing and @@ -118,7 +118,7 @@ - + Mirror - When creating a mirror, specify the mirror keyword followed by the list of member devices @@ -133,13 +133,13 @@ regular single disk vdev can be upgraded to a mirror vdev at any time using the zpool attach + linkend="zfs-zpool-attach">attach command. - + RAID-Z - ZFS implements RAID-Z, a variation on standard RAID-5 that offers better distribution of parity @@ -183,7 +183,7 @@ - + Spare - ZFS has a special pseudo-vdev type for keeping track of available hot spares. Note that installed hot spares are @@ -193,7 +193,7 @@ - + Log - ZFS Log Devices, also known as ZFS Intent Log (ZIL) move the intent log from the regular pool @@ -214,7 +214,7 @@ - + Cache - Adding a cache vdev to a zpool will add the storage of the cache to the L2ARC. Cache devices cannot be mirrored. @@ -227,7 +227,7 @@ Adaptive Replacement + id="zfs-term-arc">Adaptive Replacement Cache (ARC) ZFS uses an Adaptive Replacement Cache @@ -260,7 +260,7 @@ L2ARC + id="zfs-term-l2arc">L2ARC The L2ARC is the second level of the ZFS caching system. The @@ -268,7 +268,7 @@ RAM, however since the amount of available RAM is often limited, ZFS can also make use of cache + linkend="zfs-term-vdev-cache">cache vdevs. Solid State Disks (SSDs) are often used as these cache devices due to their higher speed and lower latency compared to traditional spinning @@ -277,7 +277,7 @@ are cached on the SSD instead of having to be read from the regular spinning disks. The L2ARC can also speed up deduplication + linkend="zfs-term-deduplication">deduplication since a DDT that does not fit in RAM but does fit in the L2ARC will be much faster than if the @@ -299,7 +299,7 @@ Copy-On-Write + id="zfs-term-cow">Copy-On-Write Unlike a traditional file system, when data is overwritten on ZFS the new data is written to a @@ -316,7 +316,7 @@ Dataset + id="zfs-term-dataset">Dataset Dataset is the generic term for a ZFS file system, volume, snapshot or clone. Each dataset will have a @@ -338,7 +338,7 @@ Volume + id="zfs-term-volum">Volume In additional to regular file system datasets, ZFS can also create volumes, which are block devices. @@ -352,10 +352,10 @@ Snapshot + id="zfs-term-snapshot">Snapshot The copy-on-write + linkend="zfs-term-cow">copy-on-write design of ZFS allows for nearly instantaneous consistent snapshots with arbitrary names. After taking a snapshot @@ -373,7 +373,7 @@ These snapshots can be mounted (read only) to allow for the recovery of previous versions of files. It is also possible to rollback + linkend="zfs-zfs-snapshot">rollback a live file system to a specific snapshot, undoing any changes that took place after the snapshot was taken. Each block in the zpool has a reference counter which @@ -382,11 +382,11 @@ are deleted, the reference count is decremented; once a block is no longer referenced, it is reclaimed as free space. Snapshots can also be marked with a hold, + linkend="zfs-zfs-snapshot">hold, once a snapshot is held, any attempt to destroy it will return an EBUY error. Each snapshot can have multiple holds, each with a unique name. 
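A short hypothetical example of placing and inspecting a hold (the tag name backup_ref is arbitrary; the snapshot name reuses the one from the quick start section):

&prompt.root; zfs hold backup_ref storage/home@08-30-08
&prompt.root; zfs holds storage/home@08-30-08

While the hold is in place, zfs destroy on that snapshot fails with EBUSY.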
The release + linkend="zfs-zfs-snapshot">release command removes the hold so the snapshot can then be deleted. Snapshots can be taken on volumes, however they can only be cloned or rolled back, not mounted @@ -395,7 +395,7 @@ Clone + id="zfs-term-clone">Clone Snapshots can also be cloned; a clone is a writable version of a snapshot, allowing the file system to be @@ -417,7 +417,7 @@ Checksum + id="zfs-term-checksum">Checksum Every block that is allocated is also checksummed (which algorithm is used is a per dataset property, see: @@ -427,7 +427,7 @@ expected checksum, ZFS will attempt to recover the data from any available redundancy (mirrors, RAID-Z). You can trigger the validation of all checksums using the - scrub + scrub command. The available checksum algorithms include: @@ -452,7 +452,7 @@ Compression + id="zfs-term-compression">Compression Each dataset in ZFS has a compression property, which defaults to off. This property can be set to one @@ -471,7 +471,7 @@ Deduplication + id="zfs-term-deduplication">Deduplication ZFS has the ability to detect duplicate blocks of data as they are written (thanks to the checksumming @@ -511,7 +511,7 @@ Scrub + id="zfs-term-scrub">Scrub In place of a consistency check like fsck, ZFS has the scrub command, which reads all @@ -530,7 +530,7 @@ Dataset Quota + id="zfs-term-quota">Dataset Quota ZFS provides very fast and accurate dataset, user and group space accounting in addition to quotes and @@ -541,11 +541,11 @@ ZFS supports different types of quotas: the dataset quota, the reference + linkend="zfs-term-refquota">reference quota (refquota), the - user + user quota, and the - group + group quota. Quotas limit the amount of space that a dataset @@ -562,7 +562,7 @@ Reference + id="zfs-term-refquota">Reference Quota A reference quota limits the amount of space a @@ -575,7 +575,7 @@ User + id="zfs-term-userquota">User Quota User quotas are useful to limit the amount of space @@ -584,7 +584,7 @@ Group + id="zfs-term-groupquota">Group Quota The group quota limits the amount of space that a @@ -593,7 +593,7 @@ Dataset + id="zfs-term-reservation">Dataset Reservation The reservation property makes @@ -607,7 +607,7 @@ storage/home/bob, the space used by that snapshot is counted against the reservation. The refreservation + linkend="zfs-term-refreservation">refreservation property works in a similar way, except it excludes descendants, such as snapshots. @@ -622,7 +622,7 @@ Reference + id="zfs-term-refreservation">Reference Reservation The refreservation property @@ -634,7 +634,7 @@ dataset tries to use all of the free space, at least 10 GB of space is reserved for this dataset. In contrast to a regular reservation, + linkend="zfs-term-reservation">reservation, space used by snapshots and decendant datasets is not counted against the reservation. As an example, if a snapshot was taken of @@ -649,7 +649,7 @@ Resilver + id="zfs-term-resilver">Resilver When a disk fails and must be replaced, the new disk must be filled with the data that was lost. This @@ -663,7 +663,7 @@ - + What Makes ZFS Different ZFS is significantly different from any previous file system @@ -694,7 +694,7 @@ than a single monolithic filesystem. 
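A small sketch of what that looks like in practice (the dataset names under mypool are hypothetical): properties such as compression and copies can be set independently on sibling datasets that all draw from the same pool of free space:

&prompt.root; zfs create mypool/logs
&prompt.root; zfs set compression=gzip mypool/logs
&prompt.root; zfs create mypool/home
&prompt.root; zfs set copies=2 mypool/home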
- + <acronym>ZFS</acronym> Quick Start Guide There is a start up mechanism that allows &os; to mount @@ -1071,108 +1071,108 @@ errors: No known data errors - + <command>zpool</command> Administration - + Creating & Destroying Storage Pools - + Adding & Removing Devices - + Dealing with Failed Devices - + Importing & Exporting Pools - + Upgrading a Storage Pool - + Checking the Status of a Pool - + Performance Monitoring - + Splitting a Storage Pool - + <command>zfs</command> Administration - + Creating & Destroying Datasets - + Creating & Destroying Volumes - + Renaming a Dataset - + Setting Dataset Properties - + Managing Snapshots - + Managing Clones - + ZFS Replication - + Dataset, User and Group Quotes To enforce a dataset quota of 10 GB for @@ -1276,7 +1276,7 @@ errors: No known data errors &prompt.root; zfs get quota storage/home/bob - + Reservations @@ -1307,53 +1307,53 @@ errors: No known data errors &prompt.root; zfs get refreservation storage/home/bob - + Compression - + Deduplication - + Delegated Administration - + ZFS Advanced Topics - + ZFS Tuning - + Booting Root on ZFS - + ZFS Boot Environments - + Troubleshooting - + ZFS on i386 Some of the features provided by ZFS @@ -1417,7 +1417,7 @@ vfs.zfs.vdev.cache.size="5M" - + Additional Resources From owner-svn-doc-projects@FreeBSD.ORG Thu Aug 15 01:08:24 2013 Return-Path: Delivered-To: svn-doc-projects@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id 43541DFD; Thu, 15 Aug 2013 01:08:24 +0000 (UTC) (envelope-from wblock@FreeBSD.org) Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 2FB212738; Thu, 15 Aug 2013 01:08:24 +0000 (UTC) Received: from svn.freebsd.org ([127.0.1.70]) by svn.freebsd.org (8.14.7/8.14.7) with ESMTP id r7F18ORS056619; Thu, 15 Aug 2013 01:08:24 GMT (envelope-from wblock@svn.freebsd.org) Received: (from wblock@localhost) by svn.freebsd.org (8.14.7/8.14.5/Submit) id r7F18OET056618; Thu, 15 Aug 2013 01:08:24 GMT (envelope-from wblock@svn.freebsd.org) Message-Id: <201308150108.r7F18OET056618@svn.freebsd.org> From: Warren Block Date: Thu, 15 Aug 2013 01:08:24 +0000 (UTC) To: doc-committers@freebsd.org, svn-doc-projects@freebsd.org Subject: svn commit: r42544 - projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs X-SVN-Group: doc-projects MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-BeenThere: svn-doc-projects@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: SVN commit messages for doc projects trees List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 15 Aug 2013 01:08:24 -0000 Author: wblock Date: Thu Aug 15 01:08:23 2013 New Revision: 42544 URL: http://svnweb.freebsd.org/changeset/doc/42544 Log: Move Terms section to end. 
Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml ============================================================================== --- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Thu Aug 15 01:04:54 2013 (r42543) +++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Thu Aug 15 01:08:23 2013 (r42544) @@ -33,636 +33,6 @@ designed to prevent data write corruption and to overcome some of the limitations of hardware RAID. - - ZFS Features and Terminology - - ZFS is a fundamentally different file system because it - is more than just a file system. ZFS combines the roles of - file system and volume manager, enabling additional storage - devices to be added to a live system and having the new space - available on all of the existing file systems in that pool - immediately. By combining the traditionally separate roles, - ZFS is able to overcome previous limitations that prevented - RAID groups being able to grow. Each top level device in a - zpool is called a vdev, which can be a simple disk or a RAID - transformation such as a mirror or RAID-Z array. ZFS file - systems (called datasets), each have access to the combined - free space of the entire pool. As blocks are allocated the - free space in the pool available to of each file system is - decreased. This approach avoids the common pitfall with - extensive partitioning where free space becomes fragmentated - across the partitions. - - - - - - zpool - - A storage pool is the most basic building block of - ZFS. A pool is made up of one or more vdevs, the - underlying devices that store the data. A pool is then - used to create one or more file systems (datasets) or - block devices (volumes). These datasets and volumes - share the pool of remaining free space. Each pool is - uniquely identified by a name and a - GUID. The zpool also controls the - version number and therefore the features available for - use with ZFS. - - - &os; 9.0 and 9.1 include support for ZFS version - 28. Future versions use ZFS version 5000 with - feature flags. This allows greater - cross-compatibility with other implementations of - ZFS. - - - - - vdev Types - - A zpool is made up of one or more vdevs, which - themselves can be a single disk or a group of disks, in - the case of a RAID transform. When multiple vdevs are - used, ZFS spreads data across the vdevs to increase - performance and maximize usable space. - - - - - Disk - The most basic type - of vdev is a standard block device. This can be - an entire disk (such as - /dev/ada0 - or - /dev/da0) - or a partition - (/dev/ada0p3). - Contrary to the Solaris documentation, on &os; - there is no performance penalty for using a - partition rather than an entire disk. - - - - - File - In addition to - disks, ZFS pools can be backed by regular files, - this is especially useful for testing and - experimentation. Use the full path to the file - as the device path in the zpool create command. - All vdevs must be atleast 128 MB in - size. - - - - - Mirror - When creating a - mirror, specify the mirror - keyword followed by the list of member devices - for the mirror. A mirror consists of two or - more devices, all data will be written to all - member devices. A mirror vdev will only hold as - much data as its smallest member. A mirror vdev - can withstand the failure of all but one of its - members without losing any data. 
- - - regular single disk vdev can be upgraded to - a mirror vdev at any time using the - zpool attach - command. - - - - - - RAID-Z - - ZFS implements RAID-Z, a variation on standard - RAID-5 that offers better distribution of parity - and eliminates the "RAID-5 write hole" in which - the data and parity information become - inconsistent after an unexpected restart. ZFS - supports 3 levels of RAID-Z which provide - varying levels of redundancy in exchange for - decreasing levels of usable storage. The types - are named RAID-Z1 through Z3 based on the number - of parity devinces in the array and the number - of disks that the pool can operate - without. - - In a RAID-Z1 configuration with 4 disks, - each 1 TB, usable storage will be 3 TB - and the pool will still be able to operate in - degraded mode with one faulted disk. If an - additional disk goes offline before the faulted - disk is replaced and resilvered, all data in the - pool can be lost. - - In a RAID-Z3 configuration with 8 disks of - 1 TB, the volume would provide 5TB of - usable space and still be able to operate with - three faulted disks. Sun recommends no more - than 9 disks in a single vdev. If the - configuration has more disks, it is recommended - to divide them into separate vdevs and the pool - data will be striped across them. - - A configuration of 2 RAID-Z2 vdevs - consisting of 8 disks each would create - something similar to a RAID 60 array. A RAID-Z - group's storage capacity is approximately the - size of the smallest disk, multiplied by the - number of non-parity disks. 4x 1 TB disks - in Z1 has an effective size of approximately - 3 TB, and a 8x 1 TB array in Z3 will - yeild 5 TB of usable space. - - - - - Spare - ZFS has a special - pseudo-vdev type for keeping track of available - hot spares. Note that installed hot spares are - not deployed automatically; they must manually - be configured to replace the failed device using - the zfs replace command. - - - - - Log - ZFS Log Devices, also - known as ZFS Intent Log (ZIL) - move the intent log from the regular pool - devices to a dedicated device. The ZIL - accelerates synchronous transactions by using - storage devices (such as - SSDs) that are faster - compared to those used for the main pool. When - data is being written and the application - requests a guarantee that the data has been - safely stored, the data is written to the faster - ZIL storage, then later flushed out to the - regular disks, greatly reducing the latency of - synchronous writes. Log devices can be - mirrored, but RAID-Z is not supported. When - specifying multiple log devices writes will be - load balanced across all devices. - - - - - Cache - Adding a cache vdev - to a zpool will add the storage of the cache to - the L2ARC. Cache devices cannot be mirrored. - Since a cache device only stores additional - copies of existing data, there is no risk of - data loss. - - - - - - Adaptive Replacement - Cache (ARC) - - ZFS uses an Adaptive Replacement Cache - (ARC), rather than a more - traditional Least Recently Used - (LRU) cache. An - LRU cache is a simple list of items - in the cache sorted by when each object was most - recently used; new items are added to the top of the - list and once the cache is full items from the bottom - of the list are evicted to make room for more active - objects. An ARC consists of four - lists; the Most Recently Used (MRU) - and Most Frequently Used (MFU) - objects, plus a ghost list for each. 
These ghost - lists tracks recently evicted objects to provent them - being added back to the cache. This increases the - cache hit ratio by avoiding objects that have a - history of only being used occasionally. Another - advantage of using both an MRU and - MFU is that scanning an entire - filesystem would normally evict all data from an - MRU or LRU cache - in favor of this freshly accessed content. In the - case of ZFS since there is also an - MFU that only tracks the most - frequently used objects, the cache of the most - commonly accessed blocks remains. - - - - L2ARC - - The L2ARC is the second level - of the ZFS caching system. The - primary ARC is stored in - RAM, however since the amount of - available RAM is often limited, - ZFS can also make use of cache - vdevs. Solid State Disks (SSDs) are - often used as these cache devices due to their higher - speed and lower latency compared to traditional spinning - disks. An L2ARC is entirely optional, but having one - will significantly increase read speeds for files that - are cached on the SSD instead of - having to be read from the regular spinning disks. The - L2ARC can also speed up deduplication - since a DDT that does not fit in - RAM but does fit in the - L2ARC will be much faster than if the - DDT had to be read from disk. The - rate at which data is added to the cache devices is - limited to prevent prematurely wearing out the - SSD with too many writes. Until the - cache is full (the first block has been evicted to make - room), writing to the L2ARC is - limited to the sum of the write limit and the boost - limit, then after that limited to the write limit. A - pair of sysctl values control these rate limits; - vfs.zfs.l2arc_write_max controls how - many bytes are written to the cache per second, while - vfs.zfs.l2arc_write_boost adds to - this limit during the "Turbo Warmup Phase" (Write - Boost). - - - - Copy-On-Write - - Unlike a traditional file system, when data is - overwritten on ZFS the new data is written to a - different block rather than overwriting the old data in - place. Only once this write is complete is the metadata - then updated to point to the new location of the data. - This means that in the event of a shorn write (a system - crash or power loss in the middle of writing a file) the - entire original contents of the file are still available - and the incomplete write is discarded. This also means - that ZFS does not require a fsck after an unexpected - shutdown. - - - - Dataset - - Dataset is the generic term for a ZFS file system, - volume, snapshot or clone. Each dataset will have a - unique name in the format: - poolname/path@snapshot. The root of - the pool is technically a dataset as well. Child - datasets are named hierarchically like directories; for - example mypool/home, the home dataset - is a child of mypool and inherits properties from it. - This can be expended further by creating - mypool/home/user. This grandchild - dataset will inherity properties from the parent and - grandparent. It is also possible to set properties - on a child to override the defaults inherited from the - parents and grandparents. ZFS also allows - administration of datasets and their children to be - delegated. - - - - Volume - - In additional to regular file system datasets, ZFS - can also create volumes, which are block devices. - Volumes have many of the same features, including - copy-on-write, snapshots, clones and checksumming. 
- Volumes can be useful for running other file system - formats on top of ZFS, such as UFS or in the case of - Virtualization or exporting iSCSI - extents. - - - - Snapshot - - The copy-on-write - - design of ZFS allows for nearly instantaneous consistent - snapshots with arbitrary names. After taking a snapshot - of a dataset (or a recursive snapshot of a parent - dataset that will include all child datasets), new data - is written to new blocks (as described above), however - the old blocks are not reclaimed as free space. There - are then two versions of the file system, the snapshot - (what the file system looked like before) and the live - file system; however no additional space is used. As - new data is written to the live file system, new blocks - are allocated to store this data. The apparent size of - the snapshot will grow as the blocks are no longer used - in the live file system, but only in the snapshot. - These snapshots can be mounted (read only) to allow for - the recovery of previous versions of files. It is also - possible to rollback - a live file system to a specific snapshot, undoing any - changes that took place after the snapshot was taken. - Each block in the zpool has a reference counter which - indicates how many snapshots, clones, datasets or - volumes make use of that block. As files and snapshots - are deleted, the reference count is decremented; once a - block is no longer referenced, it is reclaimed as free - space. Snapshots can also be marked with a hold, - once a snapshot is held, any attempt to destroy it will - return an EBUY error. Each snapshot can have multiple - holds, each with a unique name. The release - command removes the hold so the snapshot can then be - deleted. Snapshots can be taken on volumes, however - they can only be cloned or rolled back, not mounted - independently. - - - - Clone - - Snapshots can also be cloned; a clone is a writable - version of a snapshot, allowing the file system to be - forked as a new dataset. As with a snapshot, a clone - initially consumes no additional space, only as new data - is written to a clone and new blocks are allocated does - the apparent size of the clone grow. As blocks are - overwritten in the cloned file system or volume, the - reference count on the previous block is decremented. - The snapshot upon which a clone is based cannot be - deleted because the clone is dependeant upon it (the - snapshot is the parent, and the clone is the child). - Clones can be promoted, reversing - this dependeancy, making the clone the parent and the - previous parent the child. This operation requires no - additional space, however it will change the way the - used space is accounted. - - - - Checksum - - Every block that is allocated is also checksummed - (which algorithm is used is a per dataset property, see: - zfs set). ZFS transparently validates the checksum of - each block as it is read, allowing ZFS to detect silent - corruption. If the data that is read does not match the - expected checksum, ZFS will attempt to recover the data - from any available redundancy (mirrors, RAID-Z). You - can trigger the validation of all checksums using the - scrub - command. The available checksum algorithms include: - - - - fletcher2 - - - - fletcher4 - - - - sha256 - - - - The fletcher algorithms are faster, but sha256 is a - strong cryptographic hash and has a much lower chance of - a collisions at the cost of some performance. Checksums - can be disabled but it is inadvisable. 
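As a hedged sketch of the checksum property and the scrub command just described (the pool and dataset names here are illustrative assumptions):

&prompt.root; zfs set checksum=sha256 mypool/data
&prompt.root; zpool scrub mypool
&prompt.root; zpool status mypool

zpool status shows the progress of the scrub and any checksum errors that were detected and, where redundancy allows, repaired.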
- - - - Compression - - Each dataset in ZFS has a compression property, - which defaults to off. This property can be set to one - of a number of compression algorithms, which will cause - all new data that is written to this dataset to be - compressed as it is written. In addition to the - reduction in disk usage, this can also increase read and - write throughput, as only the smaller compressed version - of the file needs to be read or written. - - - LZ4 compression is only available after &os; - 9.2 - - - - - Deduplication - - ZFS has the ability to detect duplicate blocks of - data as they are written (thanks to the checksumming - feature). If deduplication is enabled, instead of - writing the block a second time, the reference count of - the existing block will be increased, saving storage - space. In order to do this, ZFS keeps a deduplication - table (DDT) in memory, containing the - list of unique checksums, the location of that block and - a reference count. When new data is written, the - checksum is calculated and compared to the list. If a - match is found, the data is considered to be a - duplicate. When deduplication is enabled, the checksum - algorithm is changed to SHA256 to - provide a secure cryptographic hash. ZFS deduplication - is tunable; if dedup is on, then a matching checksum is - assumed to mean that the data is identical. If dedup is - set to verify, then the data in the two blocks will be - checked byte-for-byte to ensure it is actually identical - and if it is not, the hash collision will be noted by - ZFS and the two blocks will be stored separately. Due - to the nature of the DDT, having to - store the hash of each unique block, it consumes a very - large amount of memory (a general rule of thumb is - 5-6 GB of ram per 1 TB of deduplicated data). - In situations where it is not practical to have enough - RAM to keep the entire DDT in memory, - performance will suffer greatly as the DDT will need to - be read from disk before each new block is written. - Deduplication can make use of the L2ARC to store the - DDT, providing a middle ground between fast system - memory and slower disks. It is advisable to consider - using ZFS compression instead, which often provides - nearly as much space savings without the additional - memory requirement. - - - - Scrub - - In place of a consistency check like fsck, ZFS has - the scrub command, which reads all - data blocks stored on the pool and verifies their - checksums them against the known good checksums stored - in the metadata. This periodic check of all the data - stored on the pool ensures the recovery of any corrupted - blocks before they are needed. A scrub is not required - after an unclean shutdown, but it is recommended that - you run a scrub at least once each quarter. ZFS - compares the checksum for each block as it is read in - the normal course of use, but a scrub operation makes - sure even infrequently used blocks are checked for - silent corruption. - - - - Dataset Quota - - ZFS provides very fast and accurate dataset, user - and group space accounting in addition to quotes and - space reservations. This gives the administrator fine - grained control over how space is allocated and allows - critical file systems to reserve space to ensure other - file systems do not take all of the free space. - - ZFS supports different types of quotas: the - dataset quota, the reference - quota (refquota), the - user - quota, and the - group - quota. 
- - Quotas limit the amount of space that a dataset - and all of its descendants (snapshots of the dataset, - child datasets and the snapshots of those datasets) - can consume. - - - Quotas cannot be set on volumes, as the - volsize property acts as an - implicit quota. - - - - - Reference - Quota - - A reference quota limits the amount of space a - dataset can consume by enforcing a hard limit on the - space used. However, this hard limit includes only - space that the dataset references and does not include - space used by descendants, such as file systems or - snapshots. - - - - User - Quota - - User quotas are useful to limit the amount of space - that can be used by the specified user. - - - - Group - Quota - - The group quota limits the amount of space that a - specified group can consume. - - - - Dataset - Reservation - - The reservation property makes - it possible to guaranteed a minimum amount of space for - the use of a specific dataset and its descendants. This - means that if a 10 GB reservation is set on - storage/home/bob, if another - dataset tries to use all of the free space, at least - 10 GB of space is reserved for this dataset. If a - snapshot is taken of - storage/home/bob, the space used by - that snapshot is counted against the reservation. The - refreservation - property works in a similar way, except it - excludes descendants, such as - snapshots. - - Reservations of any sort are useful in many - situations, such as planning and testing the - suitability of disk space allocation in a new system, - or ensuring that enough space is available on file - systems for audio logs or system recovery procedures - and files. - - - - Reference - Reservation - - The refreservation property - makes it possible to guaranteed a minimum amount of - space for the use of a specific dataset - excluding its descendants. This - means that if a 10 GB reservation is set on - storage/home/bob, if another - dataset tries to use all of the free space, at least - 10 GB of space is reserved for this dataset. In - contrast to a regular reservation, - space used by snapshots and decendant datasets is not - counted against the reservation. As an example, if a - snapshot was taken of - storage/home/bob, enough disk space - would have to exist outside of the - refreservation amount for the - operation to succeed because descendants of the main - data set are not counted by the - refreservation amount and so do not - encroach on the space set. - - - - Resilver - - When a disk fails and must be replaced, the new - disk must be filled with the data that was lost. This - process of calculating and writing the missing data - (using the parity information distributed across the - remaining drives) to the new drive is called - Resilvering. - - - - - - What Makes ZFS Different @@ -1019,443 +389,1073 @@ config: errors: No known data errors - As shown from this example, everything appears to be - normal. - + As shown from this example, everything appears to be + normal. + + + + Data Verification + + ZFS uses checksums to verify the + integrity of stored data. These are enabled automatically + upon creation of file systems and may be disabled using the + following command: + + &prompt.root; zfs set checksum=off storage/home + + Doing so is not recommended as + checksums take very little storage space and are used to check + data integrity using checksum verification in a process is + known as scrubbing. 
To verify the data + integrity of the storage pool, issue this + command: + + &prompt.root; zpool scrub storage + + This process may take considerable time depending on the + amount of data stored. It is also very I/O + intensive, so much so that only one scrub may be run at any + given time. After the scrub has completed, the status is + updated and may be viewed by issuing a status request: + + &prompt.root; zpool status storage + pool: storage + state: ONLINE + scrub: scrub completed with 0 errors on Sat Jan 26 19:57:37 2013 +config: + + NAME STATE READ WRITE CKSUM + storage ONLINE 0 0 0 + raidz1 ONLINE 0 0 0 + da0 ONLINE 0 0 0 + da1 ONLINE 0 0 0 + da2 ONLINE 0 0 0 + +errors: No known data errors + + The completion time is displayed and helps to ensure data + integrity over a long period of time. + + Refer to &man.zfs.8; and &man.zpool.8; for other + ZFS options. + + + + + <command>zpool</command> Administration + + + + + Creating & Destroying Storage Pools + + + + + + Adding & Removing Devices + + + + + + Dealing with Failed Devices + + + + + + Importing & Exporting Pools + + + + + + Upgrading a Storage Pool + + + + + + Checking the Status of a Pool + + + + + + Performance Monitoring + + + + + + Splitting a Storage Pool + + + + + + + <command>zfs</command> Administration + + + + + Creating & Destroying Datasets + + + + + + Creating & Destroying Volumes + + + + + + Renaming a Dataset + + + + + + Setting Dataset Properties + + + + + + Managing Snapshots + + + + + + Managing Clones + + + + + + ZFS Replication + + + + + + Dataset, User and Group Quotes + + To enforce a dataset quota of 10 GB for + storage/home/bob, use the + following: + + &prompt.root; zfs set quota=10G storage/home/bob + + To enforce a reference quota of 10 GB for + storage/home/bob, use the + following: + + &prompt.root; zfs set refquota=10G storage/home/bob + + The general format is + userquota@user=size, + and the user's name must be in one of the following + formats: + + + + POSIX compatible name such as + joe. + + + + POSIX numeric ID such as + 789. + + + + SID name + such as + joe.bloggs@example.com. + + + + SID + numeric ID such as + S-1-123-456-789. + + + + For example, to enforce a user quota of 50 GB for a + user named joe, use the + following: + + &prompt.root; zfs set userquota@joe=50G + + To remove the quota or make sure that one is not set, + instead use: + + &prompt.root; zfs set userquota@joe=none + + + User quota properties are not displayed by + zfs get all. + Non-root users can only see their own + quotas unless they have been granted the + userquota privilege. Users with this + privilege are able to view and set everyone's quota. + + + The general format for setting a group quota is: + groupquota@group=size. + + To set the quota for the group + firstgroup to 50 GB, + use: + + &prompt.root; zfs set groupquota@firstgroup=50G - - Data Verification + To remove the quota for the group + firstgroup, or to make sure that + one is not set, instead use: - ZFS uses checksums to verify the - integrity of stored data. These are enabled automatically - upon creation of file systems and may be disabled using the - following command: + &prompt.root; zfs set groupquota@firstgroup=none - &prompt.root; zfs set checksum=off storage/home + As with the user quota property, + non-root users can only see the quotas + associated with the groups that they belong to. However, + root or a user with the + groupquota privilege can view and set all + quotas for all groups. 
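Since the per-user and per-group quota properties are not shown by zfs get all, they can be queried by name instead. A sketch reusing the joe, firstgroup, and storage/home/bob names from the examples above:

&prompt.root; zfs get userquota@joe storage/home/bob
&prompt.root; zfs get groupquota@firstgroup storage/home/bob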
- Doing so is not recommended as - checksums take very little storage space and are used to check - data integrity using checksum verification in a process is - known as scrubbing. To verify the data - integrity of the storage pool, issue this - command: + To display the amount of space consumed by each user on + the specified filesystem or snapshot, along with any specified + quotas, use zfs userspace. For group + information, use zfs groupspace. For more + information about supported options or how to display only + specific options, refer to &man.zfs.1;. - &prompt.root; zpool scrub storage + Users with sufficient privileges and + root can list the quota for + storage/home/bob using: - This process may take considerable time depending on the - amount of data stored. It is also very I/O - intensive, so much so that only one scrub may be run at any - given time. After the scrub has completed, the status is - updated and may be viewed by issuing a status request: + &prompt.root; zfs get quota storage/home/bob + - &prompt.root; zpool status storage - pool: storage - state: ONLINE - scrub: scrub completed with 0 errors on Sat Jan 26 19:57:37 2013 -config: + + Reservations - NAME STATE READ WRITE CKSUM - storage ONLINE 0 0 0 - raidz1 ONLINE 0 0 0 - da0 ONLINE 0 0 0 - da1 ONLINE 0 0 0 - da2 ONLINE 0 0 0 + -errors: No known data errors + The general format of the reservation + property is + reservation=size, + so to set a reservation of 10 GB on + storage/home/bob, use: - The completion time is displayed and helps to ensure data - integrity over a long period of time. + &prompt.root; zfs set reservation=10G storage/home/bob - Refer to &man.zfs.8; and &man.zpool.8; for other - ZFS options. - - + To make sure that no reservation is set, or to remove a + reservation, use: - - <command>zpool</command> Administration + &prompt.root; zfs set reservation=none storage/home/bob - + The same principle can be applied to the + refreservation property for setting a + refreservation, with the general format + refreservation=size. 
- - Creating & Destroying Storage Pools + To check if any reservations or refreservations exist on + storage/home/bob, execute one of the + following commands: - + &prompt.root; zfs get reservation storage/home/bob +&prompt.root; zfs get refreservation storage/home/bob - - Adding & Removing Devices + + Compression - - Dealing with Failed Devices + + Deduplication - - Importing & Exporting Pools + + Delegated Administration *** DIFF OUTPUT TRUNCATED AT 1000 LINES *** From owner-svn-doc-projects@FreeBSD.ORG Thu Aug 15 01:15:10 2013 Return-Path: Delivered-To: svn-doc-projects@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id ABAA0F32; Thu, 15 Aug 2013 01:15:10 +0000 (UTC) (envelope-from wblock@FreeBSD.org) Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 98FE927DB; Thu, 15 Aug 2013 01:15:10 +0000 (UTC) Received: from svn.freebsd.org ([127.0.1.70]) by svn.freebsd.org (8.14.7/8.14.7) with ESMTP id r7F1FAaa060163; Thu, 15 Aug 2013 01:15:10 GMT (envelope-from wblock@svn.freebsd.org) Received: (from wblock@localhost) by svn.freebsd.org (8.14.7/8.14.5/Submit) id r7F1FAFG060161; Thu, 15 Aug 2013 01:15:10 GMT (envelope-from wblock@svn.freebsd.org) Message-Id: <201308150115.r7F1FAFG060161@svn.freebsd.org> From: Warren Block Date: Thu, 15 Aug 2013 01:15:10 +0000 (UTC) To: doc-committers@freebsd.org, svn-doc-projects@freebsd.org Subject: svn commit: r42545 - projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs X-SVN-Group: doc-projects MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-BeenThere: svn-doc-projects@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: SVN commit messages for doc projects trees List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 15 Aug 2013 01:15:10 -0000 Author: wblock Date: Thu Aug 15 01:15:10 2013 New Revision: 42545 URL: http://svnweb.freebsd.org/changeset/doc/42545 Log: Factor out the valign="top" from the big table rows into a single entry in tbody. Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml ============================================================================== --- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Thu Aug 15 01:08:23 2013 (r42544) +++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Thu Aug 15 01:15:10 2013 (r42545) @@ -851,10 +851,9 @@ vfs.zfs.vdev.cache.size="5M" - + - zpool + zpool A storage pool is the most basic building block of ZFS. A pool is made up of one or more vdevs, the @@ -877,8 +876,7 @@ vfs.zfs.vdev.cache.size="5M" - vdev Types + vdev Types A zpool is made up of one or more vdevs, which themselves can be a single disk or a group of disks, in @@ -1022,8 +1020,7 @@ vfs.zfs.vdev.cache.size="5M" - Adaptive Replacement + Adaptive Replacement Cache (ARC) ZFS uses an Adaptive Replacement Cache @@ -1055,8 +1052,7 @@ vfs.zfs.vdev.cache.size="5M" - L2ARC + L2ARC The L2ARC is the second level of the ZFS caching system. 
The @@ -1094,8 +1090,7 @@ vfs.zfs.vdev.cache.size="5M" - Copy-On-Write + Copy-On-Write Unlike a traditional file system, when data is overwritten on ZFS the new data is written to a @@ -1111,8 +1106,7 @@ vfs.zfs.vdev.cache.size="5M" - Dataset + Dataset Dataset is the generic term for a ZFS file system, volume, snapshot or clone. Each dataset will have a @@ -1133,8 +1127,7 @@ vfs.zfs.vdev.cache.size="5M" - Volume + Volume In additional to regular file system datasets, ZFS can also create volumes, which are block devices. @@ -1147,8 +1140,7 @@ vfs.zfs.vdev.cache.size="5M" - Snapshot + Snapshot The copy-on-write @@ -1190,8 +1182,7 @@ vfs.zfs.vdev.cache.size="5M" - Clone + Clone Snapshots can also be cloned; a clone is a writable version of a snapshot, allowing the file system to be @@ -1212,8 +1203,7 @@ vfs.zfs.vdev.cache.size="5M" - Checksum + Checksum Every block that is allocated is also checksummed (which algorithm is used is a per dataset property, see: @@ -1247,8 +1237,7 @@ vfs.zfs.vdev.cache.size="5M" - Compression + Compression Each dataset in ZFS has a compression property, which defaults to off. This property can be set to one @@ -1266,8 +1255,7 @@ vfs.zfs.vdev.cache.size="5M" - Deduplication + Deduplication ZFS has the ability to detect duplicate blocks of data as they are written (thanks to the checksumming @@ -1306,8 +1294,7 @@ vfs.zfs.vdev.cache.size="5M" - Scrub + Scrub In place of a consistency check like fsck, ZFS has the scrub command, which reads all @@ -1325,8 +1312,7 @@ vfs.zfs.vdev.cache.size="5M" - Dataset Quota + Dataset Quota ZFS provides very fast and accurate dataset, user and group space accounting in addition to quotes and @@ -1357,8 +1343,7 @@ vfs.zfs.vdev.cache.size="5M" - Reference + Reference Quota A reference quota limits the amount of space a @@ -1370,8 +1355,7 @@ vfs.zfs.vdev.cache.size="5M" - User + User Quota User quotas are useful to limit the amount of space @@ -1379,8 +1363,7 @@ vfs.zfs.vdev.cache.size="5M" - Group + Group Quota The group quota limits the amount of space that a @@ -1388,8 +1371,7 @@ vfs.zfs.vdev.cache.size="5M" - Dataset + Dataset Reservation The reservation property makes @@ -1417,8 +1399,7 @@ vfs.zfs.vdev.cache.size="5M" - Reference + Reference Reservation The refreservation property @@ -1444,8 +1425,7 @@ vfs.zfs.vdev.cache.size="5M" - Resilver + Resilver When a disk fails and must be replaced, the new disk must be filled with the data that was lost. 
This From owner-svn-doc-projects@FreeBSD.ORG Thu Aug 15 01:21:24 2013 Return-Path: Delivered-To: svn-doc-projects@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id 6D35020D; Thu, 15 Aug 2013 01:21:24 +0000 (UTC) (envelope-from wblock@FreeBSD.org) Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 5A5482860; Thu, 15 Aug 2013 01:21:24 +0000 (UTC) Received: from svn.freebsd.org ([127.0.1.70]) by svn.freebsd.org (8.14.7/8.14.7) with ESMTP id r7F1LOqw063313; Thu, 15 Aug 2013 01:21:24 GMT (envelope-from wblock@svn.freebsd.org) Received: (from wblock@localhost) by svn.freebsd.org (8.14.7/8.14.5/Submit) id r7F1LOM8063312; Thu, 15 Aug 2013 01:21:24 GMT (envelope-from wblock@svn.freebsd.org) Message-Id: <201308150121.r7F1LOM8063312@svn.freebsd.org> From: Warren Block Date: Thu, 15 Aug 2013 01:21:24 +0000 (UTC) To: doc-committers@freebsd.org, svn-doc-projects@freebsd.org Subject: svn commit: r42546 - projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs X-SVN-Group: doc-projects MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-BeenThere: svn-doc-projects@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: SVN commit messages for doc projects trees List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 15 Aug 2013 01:21:24 -0000 Author: wblock Date: Thu Aug 15 01:21:23 2013 New Revision: 42546 URL: http://svnweb.freebsd.org/changeset/doc/42546 Log: Remove role= attributes from acronym tags. Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml ============================================================================== --- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Thu Aug 15 01:15:10 2013 (r42545) +++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Thu Aug 15 01:21:23 2013 (r42546) @@ -564,27 +564,23 @@ errors: No known data errors - POSIX compatible name such as + POSIX compatible name such as joe. - POSIX numeric ID such as + POSIX numeric ID such as 789. - SID name + SID name such as joe.bloggs@example.com. - SID + SID numeric ID such as S-1-123-456-789. 
From owner-svn-doc-projects@FreeBSD.ORG Thu Aug 15 02:01:37 2013 Return-Path: Delivered-To: svn-doc-projects@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id 2635035E; Thu, 15 Aug 2013 02:01:37 +0000 (UTC) (envelope-from wblock@FreeBSD.org) Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 135332A74; Thu, 15 Aug 2013 02:01:37 +0000 (UTC) Received: from svn.freebsd.org ([127.0.1.70]) by svn.freebsd.org (8.14.7/8.14.7) with ESMTP id r7F21bTc079225; Thu, 15 Aug 2013 02:01:37 GMT (envelope-from wblock@svn.freebsd.org) Received: (from wblock@localhost) by svn.freebsd.org (8.14.7/8.14.5/Submit) id r7F21b11079224; Thu, 15 Aug 2013 02:01:37 GMT (envelope-from wblock@svn.freebsd.org) Message-Id: <201308150201.r7F21b11079224@svn.freebsd.org> From: Warren Block Date: Thu, 15 Aug 2013 02:01:37 +0000 (UTC) To: doc-committers@freebsd.org, svn-doc-projects@freebsd.org Subject: svn commit: r42547 - projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs X-SVN-Group: doc-projects MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-BeenThere: svn-doc-projects@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: SVN commit messages for doc projects trees List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 15 Aug 2013 02:01:37 -0000 Author: wblock Date: Thu Aug 15 02:01:36 2013 New Revision: 42547 URL: http://svnweb.freebsd.org/changeset/doc/42547 Log: Fix numerous punctuation, spelling, and phrasing problems, stuff the chapter full of acronym tags. Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml ============================================================================== --- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Thu Aug 15 01:21:23 2013 (r42546) +++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Thu Aug 15 02:01:36 2013 (r42547) @@ -15,7 +15,7 @@ - The Z File System (ZFS) + The Z File System (<acronym>ZFS</acronym>) The Z file system, originally developed by &sun;, is designed to future proof the file system by removing many of @@ -34,10 +34,10 @@ of the limitations of hardware RAID. - What Makes ZFS Different + What Makes <acronym>ZFS</acronym> Different - ZFS is significantly different from any previous file system - owing to the fact that it is more than just a file system. ZFS + ZFS is significantly different from any previous file system + owing to the fact that it is more than just a file system. ZFS combines the traditionally separate roles of volume manager and file system, which provides unique advantages because the file system is now aware of the underlying structure of the disks. @@ -48,17 +48,17 @@ around by presenting the operating system with a single logical disk made up of the space provided by a number of disks, on top of which the operating system placed its file system. 
Even in - the case of software RAID solutions like - GEOM, the UFS file system living on top of + the case of software RAID solutions like + GEOM, the UFS file system living on top of the RAID transform believed that it was - dealing with a single device. ZFS's combination of the volume + dealing with a single device. ZFS's combination of the volume manager and the file system solves this and allows the creation of many file systems all sharing a pool of available storage. - One of the biggest advantages to ZFS's awareness of the physical - layout of the disks is that ZFS can grow the existing file + One of the biggest advantages to ZFS's awareness of the physical + layout of the disks is that ZFS can grow the existing file systems automatically when additional disks are added to the pool. This new space is then made available to all of the file - systems. ZFS also has a number of different properties that can + systems. ZFS also has a number of different properties that can be applied to each file system, creating many advantages to creating a number of different filesystems and datasets rather than a single monolithic filesystem. @@ -69,10 +69,13 @@ There is a start up mechanism that allows &os; to mount ZFS pools during system initialization. To - set it, issue the following commands: + enable it, add this line to /etc/rc.conf: - &prompt.root; echo 'zfs_enable="YES"' >> /etc/rc.conf -&prompt.root; service zfs start + zfs_enable="YES" + + Then start the service: + + &prompt.root; service zfs start The examples in this section assume three SCSI disks with the device names @@ -132,7 +135,7 @@ drwxr-xr-x 21 root wheel 512 Aug 29 2 &prompt.root; zfs set compression=off example/compressed - To unmount a file system, issue the following command and + To unmount a file system, use zfs umount and then verify by using df: &prompt.root; zfs umount example/compressed @@ -143,7 +146,7 @@ devfs 1 1 0 /dev/ad0s1d 54098308 1032864 48737580 2% /usr example 17547008 0 17547008 0% /example - To re-mount the file system to make it accessible again, + To re-mount the file system to make it accessible again, use zfs mount and verify with df: &prompt.root; zfs mount example/compressed @@ -211,11 +214,11 @@ example/data 17547008 0 175 There is no way to prevent a disk from failing. One method of avoiding data loss due to a failed hard disk is to implement RAID. ZFS - supports this feature in its pool design. RAID-Z pools + supports this feature in its pool design. RAID-Z pools require 3 or more disks but yield more usable space than mirrored pools. - To create a RAID-Z pool, issue the + To create a RAID-Z pool, issue the following command and specify the disks to add to the pool: @@ -226,7 +229,7 @@ example/data 17547008 0 175 RAID-Z configuration is between three and nine. For environments requiring a single pool consisting of 10 disks or more, consider breaking it up into smaller - RAID-Z groups. If only two disks are + RAID-Z groups. If only two disks are available and redundancy is a requirement, consider using a ZFS mirror. Refer to &man.zpool.8; for more details. @@ -312,7 +315,7 @@ devfs 1 1 0 storage 26320512 0 26320512 0% /storage storage/home 26320512 0 26320512 0% /home - This completes the RAID-Z + This completes the RAID-Z configuration. To get status updates about the file systems created during the nightly &man.periodic.8; runs, issue the following command: @@ -325,8 +328,8 @@ storage/home 26320512 0 26320512 Every software RAID has a method of monitoring its state. 
The status of - RAID-Z devices may be viewed with the - following command: + RAID-Z devices may be viewed with this + command: &prompt.root; zpool status -x @@ -724,19 +727,19 @@ errors: No known data errors Some of the features provided by ZFS are RAM-intensive, so some tuning may be required to provide - maximum efficiency on systems with limited RAM. + maximum efficiency on systems with limited RAM. Memory At a bare minimum, the total system memory should be at - least one gigabyte. The amount of recommended RAM depends - upon the size of the pool and the ZFS features which are - used. A general rule of thumb is 1GB of RAM for every 1TB + least one gigabyte. The amount of recommended RAM depends + upon the size of the pool and the ZFS features which are + used. A general rule of thumb is 1 GB of RAM for every 1 TB of storage. If the deduplication feature is used, a general - rule of thumb is 5GB of RAM per TB of storage to be - deduplicated. While some users successfully use ZFS with - less RAM, it is possible that when the system is under heavy + rule of thumb is 5 GB of RAM per TB of storage to be + deduplicated. While some users successfully use ZFS with + less RAM, it is possible that when the system is under heavy load, it may panic due to memory exhaustion. Further tuning may be required for systems with less than the recommended RAM requirements. @@ -745,8 +748,8 @@ errors: No known data errors Kernel Configuration - Due to the RAM limitations of the &i386; platform, users - using ZFS on the &i386; architecture should add the + Due to the RAM limitations of the &i386; platform, users + using ZFS on the &i386; architecture should add the following option to a custom kernel configuration file, rebuild the kernel, and reboot: @@ -777,7 +780,7 @@ vfs.zfs.arc_max="40M" vfs.zfs.vdev.cache.size="5M" For a more detailed list of recommendations for - ZFS-related tuning, see ZFS-related tuning, see . @@ -826,22 +829,22 @@ vfs.zfs.vdev.cache.size="5M" - ZFS Features and Terminology + <acronym>ZFS</acronym> Features and Terminology - ZFS is a fundamentally different file system because it - is more than just a file system. ZFS combines the roles of + ZFS is a fundamentally different file system because it + is more than just a file system. ZFS combines the roles of file system and volume manager, enabling additional storage devices to be added to a live system and having the new space available on all of the existing file systems in that pool immediately. By combining the traditionally separate roles, - ZFS is able to overcome previous limitations that prevented - RAID groups being able to grow. Each top level device in a - zpool is called a vdev, which can be a simple disk or a RAID - transformation such as a mirror or RAID-Z array. ZFS file + ZFS is able to overcome previous limitations that prevented + RAID groups being able to grow. Each top level device in a + zpool is called a vdev, which can be a simple disk or a RAID + transformation such as a mirror or RAID-Z array. ZFS file systems (called datasets), each have access to the combined - free space of the entire pool. As blocks are allocated the - free space in the pool available to of each file system is - decreased. This approach avoids the common pitfall with + free space of the entire pool. As blocks are allocated from + the pool, the space available to each file system + decreases. This approach avoids the common pitfall with extensive partitioning where free space becomes fragmentated across the partitions. 
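To see this shared free space in practice, a minimal sketch (the pool name mypool is an assumption) is to create two datasets and compare their available space:

&prompt.root; zfs create mypool/projects
&prompt.root; zfs create mypool/home
&prompt.root; zfs list -r mypool

Both datasets report the same available space, and allocating blocks in either one reduces the space available to the other, because both draw from the same pool.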
@@ -852,7 +855,7 @@ vfs.zfs.vdev.cache.size="5M"zpool A storage pool is the most basic building block of - ZFS. A pool is made up of one or more vdevs, the + ZFS. A pool is made up of one or more vdevs, the underlying devices that store the data. A pool is then used to create one or more file systems (datasets) or block devices (volumes). These datasets and volumes @@ -860,14 +863,14 @@ vfs.zfs.vdev.cache.size="5M"GUID. The zpool also controls the version number and therefore the features available for - use with ZFS. + use with ZFS. - &os; 9.0 and 9.1 include support for ZFS version - 28. Future versions use ZFS version 5000 with + &os; 9.0 and 9.1 include support for ZFS version + 28. Future versions use ZFS version 5000 with feature flags. This allows greater cross-compatibility with other implementations of - ZFS. + ZFS. @@ -876,8 +879,8 @@ vfs.zfs.vdev.cache.size="5M"A zpool is made up of one or more vdevs, which themselves can be a single disk or a group of disks, in - the case of a RAID transform. When multiple vdevs are - used, ZFS spreads data across the vdevs to increase + the case of a RAID transform. When multiple vdevs are + used, ZFS spreads data across the vdevs to increase performance and maximize usable space. @@ -899,7 +902,7 @@ vfs.zfs.vdev.cache.size="5M" File - In addition to - disks, ZFS pools can be backed by regular files, + disks, ZFS pools can be backed by regular files, this is especially useful for testing and experimentation. Use the full path to the file as the device path in the zpool create command. @@ -930,21 +933,21 @@ vfs.zfs.vdev.cache.size="5M" - RAID-Z - - ZFS implements RAID-Z, a variation on standard - RAID-5 that offers better distribution of parity - and eliminates the "RAID-5 write hole" in which + RAID-Z - + ZFS implements RAID-Z, a variation on standard + RAID-5 that offers better distribution of parity + and eliminates the "RAID-5 write hole" in which the data and parity information become - inconsistent after an unexpected restart. ZFS - supports 3 levels of RAID-Z which provide + inconsistent after an unexpected restart. ZFS + supports 3 levels of RAID-Z which provide varying levels of redundancy in exchange for decreasing levels of usable storage. The types - are named RAID-Z1 through Z3 based on the number + are named RAID-Z1 through RAID-Z3 based on the number of parity devinces in the array and the number of disks that the pool can operate without. - In a RAID-Z1 configuration with 4 disks, + In a RAID-Z1 configuration with 4 disks, each 1 TB, usable storage will be 3 TB and the pool will still be able to operate in degraded mode with one faulted disk. If an @@ -952,8 +955,8 @@ vfs.zfs.vdev.cache.size="5M" - In a RAID-Z3 configuration with 8 disks of - 1 TB, the volume would provide 5TB of + In a RAID-Z3 configuration with 8 disks of + 1 TB, the volume would provide 5 TB of usable space and still be able to operate with three faulted disks. Sun recommends no more than 9 disks in a single vdev. If the @@ -961,53 +964,53 @@ vfs.zfs.vdev.cache.size="5M" - A configuration of 2 RAID-Z2 vdevs + A configuration of 2 RAID-Z2 vdevs consisting of 8 disks each would create - something similar to a RAID 60 array. A RAID-Z + something similar to a RAID-60 array. A RAID-Z group's storage capacity is approximately the size of the smallest disk, multiplied by the - number of non-parity disks. 4x 1 TB disks - in Z1 has an effective size of approximately - 3 TB, and a 8x 1 TB array in Z3 will - yeild 5 TB of usable space. + number of non-parity disks. 
Four 1 TB disks + in RAID-Z1 has an effective size of approximately + 3 TB, and an array of eight 1 TB disks in RAID-Z3 will + yield 5 TB of usable space. - Spare - ZFS has a special + Spare - ZFS has a special pseudo-vdev type for keeping track of available hot spares. Note that installed hot spares are not deployed automatically; they must manually be configured to replace the failed device using - the zfs replace command. + zfs replace. - Log - ZFS Log Devices, also + Log - ZFS Log Devices, also known as ZFS Intent Log (ZIL) move the intent log from the regular pool - devices to a dedicated device. The ZIL + devices to a dedicated device. The ZIL accelerates synchronous transactions by using storage devices (such as SSDs) that are faster - compared to those used for the main pool. When + than those used for the main pool. When data is being written and the application requests a guarantee that the data has been safely stored, the data is written to the faster - ZIL storage, then later flushed out to the + ZIL storage, then later flushed out to the regular disks, greatly reducing the latency of synchronous writes. Log devices can be - mirrored, but RAID-Z is not supported. When - specifying multiple log devices writes will be - load balanced across all devices. + mirrored, but RAID-Z is not supported. If + multiple log devices are used, writes will be + load balanced across them. Cache - Adding a cache vdev to a zpool will add the storage of the cache to - the L2ARC. Cache devices cannot be mirrored. + the L2ARC. Cache devices cannot be mirrored. Since a cache device only stores additional copies of existing data, there is no risk of data loss. @@ -1019,7 +1022,7 @@ vfs.zfs.vdev.cache.size="5M"Adaptive Replacement Cache (ARC) - ZFS uses an Adaptive Replacement Cache + ZFS uses an Adaptive Replacement Cache (ARC), rather than a more traditional Least Recently Used (LRU) cache. An @@ -1032,8 +1035,8 @@ vfs.zfs.vdev.cache.size="5M"MRU) and Most Frequently Used (MFU) objects, plus a ghost list for each. These ghost - lists tracks recently evicted objects to provent them - being added back to the cache. This increases the + lists track recently evicted objects to prevent them + from being added back to the cache. This increases the cache hit ratio by avoiding objects that have a history of only being used occasionally. Another advantage of using both an MRU and @@ -1041,14 +1044,14 @@ vfs.zfs.vdev.cache.size="5M"MRU or LRU cache in favor of this freshly accessed content. In the - case of ZFS since there is also an + case of ZFS, since there is also an MFU that only tracks the most frequently used objects, the cache of the most commonly accessed blocks remains. - L2ARC + L2ARC The L2ARC is the second level of the ZFS caching system. The @@ -1060,11 +1063,11 @@ vfs.zfs.vdev.cache.size="5M"SSDs) are often used as these cache devices due to their higher speed and lower latency compared to traditional spinning - disks. An L2ARC is entirely optional, but having one + disks. An L2ARC is entirely optional, but having one will significantly increase read speeds for files that are cached on the SSD instead of having to be read from the regular spinning disks. 
The - L2ARC can also speed up L2ARC can also speed up deduplication since a DDT that does not fit in RAM but does fit in the @@ -1089,35 +1092,35 @@ vfs.zfs.vdev.cache.size="5M"Copy-On-Write Unlike a traditional file system, when data is - overwritten on ZFS the new data is written to a + overwritten on ZFS the new data is written to a different block rather than overwriting the old data in place. Only once this write is complete is the metadata then updated to point to the new location of the data. This means that in the event of a shorn write (a system - crash or power loss in the middle of writing a file) the + crash or power loss in the middle of writing a file), the entire original contents of the file are still available and the incomplete write is discarded. This also means - that ZFS does not require a fsck after an unexpected + that ZFS does not require a &man.fsck.8; after an unexpected shutdown. Dataset - Dataset is the generic term for a ZFS file system, + Dataset is the generic term for a ZFS file system, volume, snapshot or clone. Each dataset will have a unique name in the format: poolname/path@snapshot. The root of the pool is technically a dataset as well. Child datasets are named hierarchically like directories; for - example mypool/home, the home dataset - is a child of mypool and inherits properties from it. - This can be expended further by creating + example, mypool/home, the home dataset, + is a child of mypool and inherits properties from it. + This can be expanded further by creating mypool/home/user. This grandchild dataset will inherity properties from the parent and grandparent. It is also possible to set properties on a child to override the defaults inherited from the - parents and grandparents. ZFS also allows + parents and grandparents. ZFS also allows administration of datasets and their children to be delegated. @@ -1125,12 +1128,12 @@ vfs.zfs.vdev.cache.size="5M" Volume - In additional to regular file system datasets, ZFS + In additional to regular file system datasets, ZFS can also create volumes, which are block devices. Volumes have many of the same features, including copy-on-write, snapshots, clones and checksumming. Volumes can be useful for running other file system - formats on top of ZFS, such as UFS or in the case of + formats on top of ZFS, such as UFS or in the case of Virtualization or exporting iSCSI extents. @@ -1141,7 +1144,7 @@ vfs.zfs.vdev.cache.size="5M"The copy-on-write - design of ZFS allows for nearly instantaneous consistent + design of ZFS allows for nearly instantaneous consistent snapshots with arbitrary names. After taking a snapshot of a dataset (or a recursive snapshot of a parent dataset that will include all child datasets), new data @@ -1202,15 +1205,15 @@ vfs.zfs.vdev.cache.size="5M"Checksum Every block that is allocated is also checksummed - (which algorithm is used is a per dataset property, see: - zfs set). ZFS transparently validates the checksum of - each block as it is read, allowing ZFS to detect silent + (the algorithm used is a per dataset property, see: + zfs set). ZFS transparently validates the checksum of + each block as it is read, allowing ZFS to detect silent corruption. If the data that is read does not match the - expected checksum, ZFS will attempt to recover the data - from any available redundancy (mirrors, RAID-Z). You - can trigger the validation of all checksums using the - scrub - command. 
The available checksum algorithms include: + expected checksum, ZFS will attempt to recover the data + from any available redundancy, like mirrors or RAID-Z. Validation of all checksums can be triggered with + the + scrub + command. Available checksum algorithms include: @@ -1235,7 +1238,7 @@ vfs.zfs.vdev.cache.size="5M" Compression - Each dataset in ZFS has a compression property, + Each dataset in ZFS has a compression property, which defaults to off. This property can be set to one of a number of compression algorithms, which will cause all new data that is written to this dataset to be compressed as it is written. In addition to the reduction in disk usage, this can also increase read and write throughput, as only the smaller compressed version of the file needs to be read or written. @@ -1245,7 +1248,7 @@ vfs.zfs.vdev.cache.size="5M" - LZ4 compression is only available after &os; + LZ4 compression is only available after &os; 9.2 @@ -1253,12 +1256,12 @@ vfs.zfs.vdev.cache.size="5M" Deduplication - ZFS has the ability to detect duplicate blocks of + ZFS has the ability to detect duplicate blocks of data as they are written (thanks to the checksumming feature). If deduplication is enabled, instead of writing the block a second time, the reference count of the existing block will be increased, saving storage - space. In order to do this, ZFS keeps a deduplication + space. To do this, ZFS keeps a deduplication table (DDT) in memory, containing the list of unique checksums, the location of that block and a reference count. When new data is written, the @@ -1266,25 +1269,25 @@ vfs.zfs.vdev.cache.size="5M"SHA256 to - provide a secure cryptographic hash. ZFS deduplication + provide a secure cryptographic hash. ZFS deduplication is tunable; if dedup is on, then a matching checksum is assumed to mean that the data is identical. If dedup is set to verify, then the data in the two blocks will be checked byte-for-byte to ensure it is actually identical and if it is not, the hash collision will be noted by - ZFS and the two blocks will be stored separately. Due + ZFS and the two blocks will be stored separately. Due to the nature of the DDT, having to store the hash of each unique block, it consumes a very large amount of memory (a general rule of thumb is 5-6 GB of RAM per 1 TB of deduplicated data). In situations where it is not practical to have enough - RAM to keep the entire DDT in memory, - performance will suffer greatly as the DDT will need to + RAM to keep the entire DDT in memory, + performance will suffer greatly as the DDT will need to be read from disk before each new block is written. - Deduplication can make use of the L2ARC to store the - DDT, providing a middle ground between fast system - memory and slower disks. It is advisable to consider - using ZFS compression instead, which often provides + Deduplication can make use of the L2ARC to store the + DDT, providing a middle ground between fast system + memory and slower disks. Consider + using ZFS compression instead, which often provides nearly as much space savings without the additional memory requirement.
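A hedged sketch of the compression and deduplication properties discussed above, using a hypothetical mypool/data dataset; lz4 assumes a pool new enough to support it:

&prompt.root; zfs set compression=lz4 mypool/data    # compress newly written blocks
&prompt.root; zfs get compressratio mypool/data      # observed compression ratio
&prompt.root; zfs set dedup=on mypool/data           # checksum-based deduplication
&prompt.root; zfs set dedup=verify mypool/data       # byte-for-byte check on checksum match
&prompt.root; zpool list mypool                      # the DEDUP column reports the dedup ratio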
@@ -1292,7 +1295,7 @@ vfs.zfs.vdev.cache.size="5M" Scrub - In place of a consistency check like fsck, ZFS has + In place of a consistency check like &man.fsck.8;, ZFS has the scrub command, which reads all data blocks stored on the pool and verifies their checksums them against the known good checksums stored @@ -1300,7 +1303,7 @@ vfs.zfs.vdev.cache.size="5M"ZFS compares the checksum for each block as it is read in the normal course of use, but a scrub operation makes sure even infrequently used blocks are checked for @@ -1310,14 +1313,14 @@ vfs.zfs.vdev.cache.size="5M" Dataset Quota - ZFS provides very fast and accurate dataset, user - and group space accounting in addition to quotes and + ZFS provides very fast and accurate dataset, user + and group space accounting in addition to quotas and space reservations. This gives the administrator fine grained control over how space is allocated and allows critical file systems to reserve space to ensure other file systems do not take all of the free space. - ZFS supports different types of quotas: the + ZFS supports different types of quotas: the dataset quota, the reference quota (refquota), the @@ -1378,7 +1381,7 @@ vfs.zfs.vdev.cache.size="5M"storage/home/bob, the space used by + storage/home/bob, the space used by that snapshot is counted against the reservation. The refreservation @@ -1428,7 +1431,7 @@ vfs.zfs.vdev.cache.size="5M" + resilvering. From owner-svn-doc-projects@FreeBSD.ORG Thu Aug 15 02:28:44 2013 Return-Path: Delivered-To: svn-doc-projects@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:1900:2254:206a::19:1]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id 5171BBCC; Thu, 15 Aug 2013 02:28:44 +0000 (UTC) (envelope-from wblock@FreeBSD.org) Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 3CBD02B6D; Thu, 15 Aug 2013 02:28:44 +0000 (UTC) Received: from svn.freebsd.org ([127.0.1.70]) by svn.freebsd.org (8.14.7/8.14.7) with ESMTP id r7F2SiYG088136; Thu, 15 Aug 2013 02:28:44 GMT (envelope-from wblock@svn.freebsd.org) Received: (from wblock@localhost) by svn.freebsd.org (8.14.7/8.14.5/Submit) id r7F2SiIN088135; Thu, 15 Aug 2013 02:28:44 GMT (envelope-from wblock@svn.freebsd.org) Message-Id: <201308150228.r7F2SiIN088135@svn.freebsd.org> From: Warren Block Date: Thu, 15 Aug 2013 02:28:44 +0000 (UTC) To: doc-committers@freebsd.org, svn-doc-projects@freebsd.org Subject: svn commit: r42548 - projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs X-SVN-Group: doc-projects MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-BeenThere: svn-doc-projects@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: SVN commit messages for doc projects trees List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 15 Aug 2013 02:28:44 -0000 Author: wblock Date: Thu Aug 15 02:28:43 2013 New Revision: 42548 URL: http://svnweb.freebsd.org/changeset/doc/42548 Log: Whitespace-only fixes. Translators, please ignore. 
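Stepping back briefly to the spare and resilver entries in the diff above: replacing a failed disk and watching the resilver is roughly as follows (device names are hypothetical; note that the shipped command-line tools spell this operation zpool replace):

&prompt.root; zpool replace mypool ada1 ada4   # swap failed ada1 for the new ada4
&prompt.root; zpool status mypool              # reports "resilver in progress" and percent done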
Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml ============================================================================== --- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Thu Aug 15 02:01:36 2013 (r42547) +++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Thu Aug 15 02:28:43 2013 (r42548) @@ -36,32 +36,35 @@ What Makes <acronym>ZFS</acronym> Different - ZFS is significantly different from any previous file system - owing to the fact that it is more than just a file system. ZFS - combines the traditionally separate roles of volume manager and - file system, which provides unique advantages because the file - system is now aware of the underlying structure of the disks. - Traditional file systems could only be created on a single disk - at a time, if there were two disks then two separate file - systems would have to be created. In a traditional hardware + ZFS is significantly different from any + previous file system owing to the fact that it is more than just + a file system. ZFS combines the + traditionally separate roles of volume manager and file system, + which provides unique advantages because the file system is now + aware of the underlying structure of the disks. Traditional + file systems could only be created on a single disk at a time, + if there were two disks then two separate file systems would + have to be created. In a traditional hardware RAID configuration, this problem was worked around by presenting the operating system with a single logical disk made up of the space provided by a number of disks, on top of which the operating system placed its file system. Even in the case of software RAID solutions like - GEOM, the UFS file system living on top of - the RAID transform believed that it was - dealing with a single device. ZFS's combination of the volume - manager and the file system solves this and allows the creation - of many file systems all sharing a pool of available storage. - One of the biggest advantages to ZFS's awareness of the physical - layout of the disks is that ZFS can grow the existing file - systems automatically when additional disks are added to the - pool. This new space is then made available to all of the file - systems. ZFS also has a number of different properties that can - be applied to each file system, creating many advantages to - creating a number of different filesystems and datasets rather - than a single monolithic filesystem. + GEOM, the UFS file system + living on top of the RAID transform believed + that it was dealing with a single device. + ZFS's combination of the volume manager and + the file system solves this and allows the creation of many file + systems all sharing a pool of available storage. One of the + biggest advantages to ZFS's awareness of the + physical layout of the disks is that ZFS can + grow the existing file systems automatically when additional + disks are added to the pool. This new space is then made + available to all of the file systems. ZFS + also has a number of different properties that can be applied to + each file system, creating many advantages to creating a number + of different filesystems and datasets rather than a single + monolithic filesystem. @@ -69,7 +72,8 @@ There is a start up mechanism that allows &os; to mount ZFS pools during system initialization. 
To - enable it, add this line to /etc/rc.conf: + enable it, add this line to + /etc/rc.conf: zfs_enable="YES" @@ -135,8 +139,9 @@ drwxr-xr-x 21 root wheel 512 Aug 29 2 &prompt.root; zfs set compression=off example/compressed - To unmount a file system, use zfs umount and - then verify by using df: + To unmount a file system, use + zfs umount and then verify by using + df: &prompt.root; zfs umount example/compressed &prompt.root; df @@ -146,8 +151,9 @@ devfs 1 1 0 /dev/ad0s1d 54098308 1032864 48737580 2% /usr example 17547008 0 17547008 0% /example - To re-mount the file system to make it accessible again, use zfs mount - and verify with df: + To re-mount the file system to make it accessible again, + use zfs mount and verify with + df: &prompt.root; zfs mount example/compressed &prompt.root; df @@ -214,9 +220,9 @@ example/data 17547008 0 175 There is no way to prevent a disk from failing. One method of avoiding data loss due to a failed hard disk is to implement RAID. ZFS - supports this feature in its pool design. RAID-Z pools - require 3 or more disks but yield more usable space than - mirrored pools. + supports this feature in its pool design. + RAID-Z pools require 3 or more disks but + yield more usable space than mirrored pools. To create a RAID-Z pool, issue the following command and specify the disks to add to the @@ -727,31 +733,35 @@ errors: No known data errors Some of the features provided by ZFS are RAM-intensive, so some tuning may be required to provide - maximum efficiency on systems with limited RAM. + maximum efficiency on systems with limited + RAM. Memory At a bare minimum, the total system memory should be at - least one gigabyte. The amount of recommended RAM depends - upon the size of the pool and the ZFS features which are - used. A general rule of thumb is 1 GB of RAM for every 1 TB - of storage. If the deduplication feature is used, a general - rule of thumb is 5 GB of RAM per TB of storage to be - deduplicated. While some users successfully use ZFS with - less RAM, it is possible that when the system is under heavy - load, it may panic due to memory exhaustion. Further tuning - may be required for systems with less than the recommended - RAM requirements. + least one gigabyte. The amount of recommended + RAM depends upon the size of the pool and + the ZFS features which are used. A + general rule of thumb is 1 GB of RAM for every + 1 TB of storage. If the deduplication feature is used, + a general rule of thumb is 5 GB of RAM per TB of + storage to be deduplicated. While some users successfully + use ZFS with less RAM, + it is possible that when the system is under heavy load, it + may panic due to memory exhaustion. Further tuning may be + required for systems with less than the recommended RAM + requirements. Kernel Configuration - Due to the RAM limitations of the &i386; platform, users - using ZFS on the &i386; architecture should add the - following option to a custom kernel configuration file, - rebuild the kernel, and reboot: + Due to the RAM limitations of the + &i386; platform, users using ZFS on the + &i386; architecture should add the following option to a + custom kernel configuration file, rebuild the kernel, and + reboot: options KVA_PAGES=512 @@ -831,20 +841,22 @@ vfs.zfs.vdev.cache.size="5M" <acronym>ZFS</acronym> Features and Terminology - ZFS is a fundamentally different file system because it - is more than just a file system. 
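As a companion to the memory-tuning advice above, a minimal /boot/loader.conf sketch for a RAM-constrained system might look like the following; the values are illustrative assumptions, not recommendations, and should be sized to the machine:

# /boot/loader.conf (illustrative values only)
vm.kmem_size="512M"            # cap kernel memory on small systems
vfs.zfs.arc_max="256M"         # limit the ARC so other workloads keep RAM
vfs.zfs.prefetch_disable="1"   # prefetch is of little benefit with very little RAM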
ZFS combines the roles of - file system and volume manager, enabling additional storage - devices to be added to a live system and having the new space - available on all of the existing file systems in that pool - immediately. By combining the traditionally separate roles, - ZFS is able to overcome previous limitations that prevented - RAID groups being able to grow. Each top level device in a - zpool is called a vdev, which can be a simple disk or a RAID - transformation such as a mirror or RAID-Z array. ZFS file - systems (called datasets), each have access to the combined - free space of the entire pool. As blocks are allocated from - the pool, the space available to each file system - decreases. This approach avoids the common pitfall with + ZFS is a fundamentally different file + system because it is more than just a file system. + ZFS combines the roles of file system and + volume manager, enabling additional storage devices to be added + to a live system and having the new space available on all of + the existing file systems in that pool immediately. By + combining the traditionally separate roles, + ZFS is able to overcome previous limitations + that prevented RAID groups being able to + grow. Each top level device in a zpool is called a vdev, which + can be a simple disk or a RAID transformation + such as a mirror or RAID-Z array. + ZFS file systems (called datasets), each have + access to the combined free space of the entire pool. As blocks + are allocated from the pool, the space available to each file + system decreases. This approach avoids the common pitfall with extensive partitioning where free space becomes fragmentated across the partitions. @@ -855,21 +867,22 @@ vfs.zfs.vdev.cache.size="5M"zpool A storage pool is the most basic building block of - ZFS. A pool is made up of one or more vdevs, the - underlying devices that store the data. A pool is then - used to create one or more file systems (datasets) or - block devices (volumes). These datasets and volumes - share the pool of remaining free space. Each pool is - uniquely identified by a name and a + ZFS. A pool is made up of one or + more vdevs, the underlying devices that store the data. + A pool is then used to create one or more file systems + (datasets) or block devices (volumes). These datasets + and volumes share the pool of remaining free space. + Each pool is uniquely identified by a name and a GUID. The zpool also controls the version number and therefore the features available for use with ZFS. - &os; 9.0 and 9.1 include support for ZFS version - 28. Future versions use ZFS version 5000 with - feature flags. This allows greater - cross-compatibility with other implementations of + &os; 9.0 and 9.1 include support for + ZFS version 28. Future versions + use ZFS version 5000 with feature + flags. This allows greater cross-compatibility with + other implementations of ZFS. @@ -879,9 +892,10 @@ vfs.zfs.vdev.cache.size="5M"A zpool is made up of one or more vdevs, which themselves can be a single disk or a group of disks, in - the case of a RAID transform. When multiple vdevs are - used, ZFS spreads data across the vdevs to increase - performance and maximize usable space. + the case of a RAID transform. When + multiple vdevs are used, ZFS spreads + data across the vdevs to increase performance and + maximize usable space. @@ -901,12 +915,12 @@ vfs.zfs.vdev.cache.size="5M" - File - In addition to - disks, ZFS pools can be backed by regular files, - this is especially useful for testing and - experimentation. 
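For instance, a throwaway pool backed by plain files, handy for the testing mentioned above, might be created like this (paths and sizes are arbitrary assumptions):

&prompt.root; truncate -s 512M /tmp/vdev0 /tmp/vdev1   # backing files, above the 128 MB minimum
&prompt.root; zpool create testpool /tmp/vdev0 /tmp/vdev1
&prompt.root; zpool status testpool
&prompt.root; zpool destroy testpool                   # discard the experiment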
Use the full path to the file - as the device path in the zpool create command. - All vdevs must be atleast 128 MB in + File - In addition to disks, + ZFS pools can be backed by + regular files, this is especially useful for + testing and experimentation. Use the full path to + the file as the device path in the zpool create + command. All vdevs must be atleast 128 MB in size. @@ -934,86 +948,93 @@ vfs.zfs.vdev.cache.size="5M" RAID-Z - - ZFS implements RAID-Z, a variation on standard - RAID-5 that offers better distribution of parity - and eliminates the "RAID-5 write hole" in which + ZFS implements + RAID-Z, a variation on standard + RAID-5 that offers better + distribution of parity and eliminates the + "RAID-5 write hole" in which the data and parity information become - inconsistent after an unexpected restart. ZFS - supports 3 levels of RAID-Z which provide - varying levels of redundancy in exchange for - decreasing levels of usable storage. The types - are named RAID-Z1 through RAID-Z3 based on the number - of parity devinces in the array and the number - of disks that the pool can operate - without. - - In a RAID-Z1 configuration with 4 disks, - each 1 TB, usable storage will be 3 TB - and the pool will still be able to operate in - degraded mode with one faulted disk. If an - additional disk goes offline before the faulted - disk is replaced and resilvered, all data in the - pool can be lost. - - In a RAID-Z3 configuration with 8 disks of - 1 TB, the volume would provide 5 TB of - usable space and still be able to operate with - three faulted disks. Sun recommends no more - than 9 disks in a single vdev. If the - configuration has more disks, it is recommended - to divide them into separate vdevs and the pool - data will be striped across them. - - A configuration of 2 RAID-Z2 vdevs - consisting of 8 disks each would create - something similar to a RAID-60 array. A RAID-Z - group's storage capacity is approximately the - size of the smallest disk, multiplied by the - number of non-parity disks. Four 1 TB disks - in RAID-Z1 has an effective size of approximately - 3 TB, and an array of eight 1 TB disks in RAID-Z3 will - yield 5 TB of usable space. + inconsistent after an unexpected restart. + ZFS supports 3 levels of + RAID-Z which provide varying + levels of redundancy in exchange for decreasing + levels of usable storage. The types are named + RAID-Z1 through + RAID-Z3 based on the number of + parity devinces in the array and the number of + disks that the pool can operate without. + + In a RAID-Z1 configuration + with 4 disks, each 1 TB, usable storage will + be 3 TB and the pool will still be able to + operate in degraded mode with one faulted disk. + If an additional disk goes offline before the + faulted disk is replaced and resilvered, all data + in the pool can be lost. + + In a RAID-Z3 configuration + with 8 disks of 1 TB, the volume would + provide 5 TB of usable space and still be + able to operate with three faulted disks. Sun + recommends no more than 9 disks in a single vdev. + If the configuration has more disks, it is + recommended to divide them into separate vdevs and + the pool data will be striped across them. + + A configuration of 2 + RAID-Z2 vdevs consisting of 8 + disks each would create something similar to a + RAID-60 array. A + RAID-Z group's storage capacity + is approximately the size of the smallest disk, + multiplied by the number of non-parity disks. 
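Applying that capacity rule: six 1 TB disks in RAID-Z2 give roughly 1 TB × (6 − 2) = 4 TB of usable space. A sketch with hypothetical device names:

&prompt.root; zpool create storage raidz2 ada0 ada1 ada2 ada3 ada4 ada5
&prompt.root; zpool list storage    # SIZE counts all six disks, parity included
&prompt.root; zfs list storage      # AVAIL reflects the roughly 4 TB of usable space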
+ Four 1 TB disks in RAID-Z1 + has an effective size of approximately 3 TB, + and an array of eight 1 TB disks in + RAID-Z3 will yield 5 TB of + usable space. - Spare - ZFS has a special - pseudo-vdev type for keeping track of available - hot spares. Note that installed hot spares are - not deployed automatically; they must manually - be configured to replace the failed device using + Spare - + ZFS has a special pseudo-vdev + type for keeping track of available hot spares. + Note that installed hot spares are not deployed + automatically; they must manually be configured to + replace the failed device using zfs replace. - Log - ZFS Log Devices, also - known as ZFS Intent Log (ZIL) - move the intent log from the regular pool - devices to a dedicated device. The ZIL - accelerates synchronous transactions by using - storage devices (such as - SSDs) that are faster - than those used for the main pool. When - data is being written and the application - requests a guarantee that the data has been - safely stored, the data is written to the faster - ZIL storage, then later flushed out to the - regular disks, greatly reducing the latency of - synchronous writes. Log devices can be - mirrored, but RAID-Z is not supported. If - multiple log devices are used, writes will be - load balanced across them. + Log - ZFS + Log Devices, also known as ZFS Intent Log + (ZIL) move the intent log from + the regular pool devices to a dedicated device. + The ZIL accelerates synchronous + transactions by using storage devices (such as + SSDs) that are faster than + those used for the main pool. When data is being + written and the application requests a guarantee + that the data has been safely stored, the data is + written to the faster ZIL + storage, then later flushed out to the regular + disks, greatly reducing the latency of synchronous + writes. Log devices can be mirrored, but + RAID-Z is not supported. If + multiple log devices are used, writes will be load + balanced across them. Cache - Adding a cache vdev to a zpool will add the storage of the cache to - the L2ARC. Cache devices cannot be mirrored. - Since a cache device only stores additional - copies of existing data, there is no risk of - data loss. + the L2ARC. Cache devices + cannot be mirrored. Since a cache device only + stores additional copies of existing data, there + is no risk of data loss. @@ -1022,51 +1043,53 @@ vfs.zfs.vdev.cache.size="5M"Adaptive Replacement Cache (ARC) - ZFS uses an Adaptive Replacement Cache - (ARC), rather than a more - traditional Least Recently Used - (LRU) cache. An - LRU cache is a simple list of items - in the cache sorted by when each object was most - recently used; new items are added to the top of the - list and once the cache is full items from the bottom - of the list are evicted to make room for more active - objects. An ARC consists of four - lists; the Most Recently Used (MRU) - and Most Frequently Used (MFU) - objects, plus a ghost list for each. These ghost - lists track recently evicted objects to prevent them - from being added back to the cache. This increases the - cache hit ratio by avoiding objects that have a - history of only being used occasionally. Another - advantage of using both an MRU and - MFU is that scanning an entire - filesystem would normally evict all data from an - MRU or LRU cache - in favor of this freshly accessed content. 
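On a running &os; system, the ARC described above can be observed with sysctl; the statistic names below are assumed from the stock ZFS kstat names, so adjust as needed:

&prompt.root; sysctl kstat.zfs.misc.arcstats.size    # current ARC size in bytes
&prompt.root; sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
&prompt.root; sysctl vfs.zfs.arc_max                 # configured upper bound, if set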
In the - case of ZFS, since there is also an + ZFS uses an Adaptive Replacement + Cache (ARC), rather than a more + traditional Least Recently Used (LRU) + cache. An LRU cache is a simple list + of items in the cache sorted by when each object was + most recently used; new items are added to the top of + the list and once the cache is full items from the + bottom of the list are evicted to make room for more + active objects. An ARC consists of + four lists; the Most Recently Used + (MRU) and Most Frequently Used + (MFU) objects, plus a ghost list for + each. These ghost lists track recently evicted objects + to prevent them from being added back to the cache. + This increases the cache hit ratio by avoiding objects + that have a history of only being used occasionally. + Another advantage of using both an + MRU and MFU is + that scanning an entire filesystem would normally evict + all data from an MRU or + LRU cache in favor of this freshly + accessed content. In the case of + ZFS, since there is also an MFU that only tracks the most - frequently used objects, the cache of the most - commonly accessed blocks remains. + frequently used objects, the cache of the most commonly + accessed blocks remains. - L2ARC + L2ARC The L2ARC is the second level of the ZFS caching system. The primary ARC is stored in RAM, however since the amount of available RAM is often limited, - ZFS can also make use of cache + ZFS can also make use of + cache vdevs. Solid State Disks (SSDs) are often used as these cache devices due to their higher speed and lower latency compared to traditional spinning - disks. An L2ARC is entirely optional, but having one - will significantly increase read speeds for files that - are cached on the SSD instead of - having to be read from the regular spinning disks. The + disks. An L2ARC is entirely + optional, but having one will significantly increase + read speeds for files that are cached on the + SSD instead of having to be read from + the regular spinning disks. The L2ARC can also speed up deduplication since a DDT that does not fit in @@ -1092,48 +1115,51 @@ vfs.zfs.vdev.cache.size="5M"Copy-On-Write Unlike a traditional file system, when data is - overwritten on ZFS the new data is written to a - different block rather than overwriting the old data in - place. Only once this write is complete is the metadata - then updated to point to the new location of the data. - This means that in the event of a shorn write (a system - crash or power loss in the middle of writing a file), the - entire original contents of the file are still available - and the incomplete write is discarded. This also means - that ZFS does not require a &man.fsck.8; after an unexpected + overwritten on ZFS the new data is + written to a different block rather than overwriting the + old data in place. Only once this write is complete is + the metadata then updated to point to the new location + of the data. This means that in the event of a shorn + write (a system crash or power loss in the middle of + writing a file), the entire original contents of the + file are still available and the incomplete write is + discarded. This also means that ZFS + does not require a &man.fsck.8; after an unexpected shutdown. Dataset - Dataset is the generic term for a ZFS file system, - volume, snapshot or clone. Each dataset will have a - unique name in the format: - poolname/path@snapshot. The root of - the pool is technically a dataset as well. 
Child - datasets are named hierarchically like directories; for - example, mypool/home, the home dataset, - is a child of mypool and inherits properties from it. - This can be expanded further by creating - mypool/home/user. This grandchild - dataset will inherity properties from the parent and - grandparent. It is also possible to set properties - on a child to override the defaults inherited from the - parents and grandparents. ZFS also allows - administration of datasets and their children to be - delegated. + Dataset is the generic term for a + ZFS file system, volume, snapshot or + clone. Each dataset will have a unique name in the + format: poolname/path@snapshot. The + root of the pool is technically a dataset as well. + Child datasets are named hierarchically like + directories; for example, + mypool/home, the home dataset, is a + child of mypool and inherits + properties from it. This can be expanded further by + creating mypool/home/user. This + grandchild dataset will inherity properties from the + parent and grandparent. It is also possible to set + properties on a child to override the defaults inherited + from the parents and grandparents. + ZFS also allows administration of + datasets and their children to be delegated. Volume - In additional to regular file system datasets, ZFS - can also create volumes, which are block devices. - Volumes have many of the same features, including - copy-on-write, snapshots, clones and checksumming. - Volumes can be useful for running other file system - formats on top of ZFS, such as UFS or in the case of + In additional to regular file system datasets, + ZFS can also create volumes, which + are block devices. Volumes have many of the same + features, including copy-on-write, snapshots, clones and + checksumming. Volumes can be useful for running other + file system formats on top of ZFS, + such as UFS or in the case of Virtualization or exporting iSCSI extents. @@ -1142,41 +1168,40 @@ vfs.zfs.vdev.cache.size="5M"Snapshot The copy-on-write - - design of ZFS allows for nearly instantaneous consistent - snapshots with arbitrary names. After taking a snapshot - of a dataset (or a recursive snapshot of a parent - dataset that will include all child datasets), new data - is written to new blocks (as described above), however - the old blocks are not reclaimed as free space. There - are then two versions of the file system, the snapshot - (what the file system looked like before) and the live - file system; however no additional space is used. As - new data is written to the live file system, new blocks - are allocated to store this data. The apparent size of - the snapshot will grow as the blocks are no longer used - in the live file system, but only in the snapshot. - These snapshots can be mounted (read only) to allow for - the recovery of previous versions of files. It is also - possible to rollback - a live file system to a specific snapshot, undoing any - changes that took place after the snapshot was taken. - Each block in the zpool has a reference counter which + linkend="zfs-term-cow">copy-on-write design of + ZFS allows for nearly instantaneous + consistent snapshots with arbitrary names. After taking + a snapshot of a dataset (or a recursive snapshot of a + parent dataset that will include all child datasets), + new data is written to new blocks (as described above), + however the old blocks are not reclaimed as free space. 
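Before going further with snapshots, the dataset and volume terminology above can be tried out with a short sketch; the pool, dataset, and size choices here are hypothetical:

&prompt.root; zfs create mypool/home                 # child dataset
&prompt.root; zfs set compression=gzip mypool/home   # property set on the parent
&prompt.root; zfs create mypool/home/user            # grandchild inherits compression
&prompt.root; zfs get -o name,property,value,source compression mypool/home/user
&prompt.root; zfs create -V 4G mypool/vol0           # a volume, exposed as /dev/zvol/mypool/vol0
&prompt.root; newfs /dev/zvol/mypool/vol0            # for example, put a UFS file system on it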
+ There are then two versions of the file system, the + snapshot (what the file system looked like before) and + the live file system; however no additional space is + used. As new data is written to the live file system, + new blocks are allocated to store this data. The + apparent size of the snapshot will grow as the blocks + are no longer used in the live file system, but only in + the snapshot. These snapshots can be mounted (read + only) to allow for the recovery of previous versions of + files. It is also possible to + rollback a live + file system to a specific snapshot, undoing any changes + that took place after the snapshot was taken. Each + block in the zpool has a reference counter which indicates how many snapshots, clones, datasets or volumes make use of that block. As files and snapshots are deleted, the reference count is decremented; once a block is no longer referenced, it is reclaimed as free - space. Snapshots can also be marked with a hold, - once a snapshot is held, any attempt to destroy it will - return an EBUY error. Each snapshot can have multiple - holds, each with a unique name. The release - command removes the hold so the snapshot can then be - deleted. Snapshots can be taken on volumes, however - they can only be cloned or rolled back, not mounted + space. Snapshots can also be marked with a + hold, once a + snapshot is held, any attempt to destroy it will return + an EBUY error. Each snapshot can have multiple holds, + each with a unique name. The + release command + removes the hold so the snapshot can then be deleted. + Snapshots can be taken on volumes, however they can only + be cloned or rolled back, not mounted independently. @@ -1206,13 +1231,16 @@ vfs.zfs.vdev.cache.size="5M"Every block that is allocated is also checksummed (the algorithm used is a per dataset property, see: - zfs set). ZFS transparently validates the checksum of - each block as it is read, allowing ZFS to detect silent - corruption. If the data that is read does not match the - expected checksum, ZFS will attempt to recover the data - from any available redundancy, like mirrors or RAID-Z). Validation of all checksums can be triggered with - the - scrub + zfs set). ZFS + transparently validates the checksum of each block as it + is read, allowing ZFS to detect + silent corruption. If the data that is read does not + match the expected checksum, ZFS will + attempt to recover the data from any available + redundancy, like mirrors or RAID-Z). + Validation of all checksums can be triggered with the + scrub command. Available checksum algorithms include: @@ -1238,90 +1266,96 @@ vfs.zfs.vdev.cache.size="5M" Compression - Each dataset in ZFS has a compression property, - which defaults to off. This property can be set to one - of a number of compression algorithms, which will cause - all new data that is written to this dataset to be - compressed as it is written. In addition to the - reduction in disk usage, this can also increase read and - write throughput, as only the smaller compressed version - of the file needs to be read or written. + Each dataset in ZFS has a + compression property, which defaults to off. This + property can be set to one of a number of compression + algorithms, which will cause all new data that is + written to this dataset to be compressed as it is + written. In addition to the reduction in disk usage, + this can also increase read and write throughput, as + only the smaller compressed version of the file needs to + be read or written. 
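The snapshot, rollback, and hold behaviour described above might look like this in practice (dataset and tag names are made up):

&prompt.root; zfs snapshot -r mypool/home@pre-upgrade    # recursive snapshot of all children
&prompt.root; zfs hold keep mypool/home@pre-upgrade      # destroying it now fails with EBUSY
&prompt.root; zfs rollback mypool/home@pre-upgrade       # discard changes made since the snapshot
&prompt.root; zfs release keep mypool/home@pre-upgrade   # drop the hold
&prompt.root; zfs destroy -r mypool/home@pre-upgrade     # the snapshot can now be removed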
- LZ4 compression is only available after &os; - 9.2 + LZ4 compression is only + available after &os; 9.2 Deduplication - ZFS has the ability to detect duplicate blocks of - data as they are written (thanks to the checksumming - feature). If deduplication is enabled, instead of - writing the block a second time, the reference count of - the existing block will be increased, saving storage - space. To do this, ZFS keeps a deduplication - table (DDT) in memory, containing the - list of unique checksums, the location of that block and - a reference count. When new data is written, the - checksum is calculated and compared to the list. If a - match is found, the data is considered to be a - duplicate. When deduplication is enabled, the checksum - algorithm is changed to SHA256 to - provide a secure cryptographic hash. ZFS deduplication - is tunable; if dedup is on, then a matching checksum is - assumed to mean that the data is identical. If dedup is - set to verify, then the data in the two blocks will be - checked byte-for-byte to ensure it is actually identical - and if it is not, the hash collision will be noted by - ZFS and the two blocks will be stored separately. Due - to the nature of the DDT, having to - store the hash of each unique block, it consumes a very - large amount of memory (a general rule of thumb is - 5-6 GB of ram per 1 TB of deduplicated data). - In situations where it is not practical to have enough - RAM to keep the entire DDT in memory, - performance will suffer greatly as the DDT will need to - be read from disk before each new block is written. - Deduplication can make use of the L2ARC to store the - DDT, providing a middle ground between fast system - memory and slower disks. Consider - using ZFS compression instead, which often provides - nearly as much space savings without the additional - memory requirement. + ZFS has the ability to detect + duplicate blocks of data as they are written (thanks to + the checksumming feature). If deduplication is enabled, + instead of writing the block a second time, the + reference count of the existing block will be increased, + saving storage space. To do this, + ZFS keeps a deduplication table + (DDT) in memory, containing the list + of unique checksums, the location of that block and a + reference count. When new data is written, the checksum + is calculated and compared to the list. If a match is + found, the data is considered to be a duplicate. When + deduplication is enabled, the checksum algorithm is + changed to SHA256 to provide a secure + cryptographic hash. ZFS + deduplication is tunable; if dedup is on, then a + matching checksum is assumed to mean that the data is + identical. If dedup is set to verify, then the data in + the two blocks will be checked byte-for-byte to ensure + it is actually identical and if it is not, the hash + collision will be noted by ZFS and + the two blocks will be stored separately. Due to the + nature of the DDT, having to store + the hash of each unique block, it consumes a very large + amount of memory (a general rule of thumb is 5-6 GB + of ram per 1 TB of deduplicated data). In + situations where it is not practical to have enough + RAM to keep the entire + DDT in memory, performance will + suffer greatly as the DDT will need + to be read from disk before each new block is written. + Deduplication can make use of the + L2ARC to store the + DDT, providing a middle ground + between fast system memory and slower disks. 
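Before turning deduplication on, its likely benefit can be estimated; zdb can simulate the DDT for an existing pool (pool name hypothetical, and the exact output format varies):

&prompt.root; zdb -S mypool    # prints a simulated DDT histogram and an estimated dedup ratio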
Consider + using ZFS compression instead, which + often provides nearly as much space savings without the + additional memory requirement. Scrub - In place of a consistency check like &man.fsck.8;, ZFS has - the scrub command, which reads all - data blocks stored on the pool and verifies their - checksums them against the known good checksums stored - in the metadata. This periodic check of all the data - stored on the pool ensures the recovery of any corrupted - blocks before they are needed. A scrub is not required - after an unclean shutdown, but it is recommended that - you run a scrub at least once each quarter. ZFS - compares the checksum for each block as it is read in - the normal course of use, but a scrub operation makes - sure even infrequently used blocks are checked for - silent corruption. + In place of a consistency check like &man.fsck.8;, + ZFS has the scrub + command, which reads all data blocks stored on the pool + and verifies their checksums them against the known good + checksums stored in the metadata. This periodic check + of all the data stored on the pool ensures the recovery + of any corrupted blocks before they are needed. A scrub + is not required after an unclean shutdown, but it is + recommended that you run a scrub at least once each + quarter. ZFS compares the checksum + for each block as it is read in the normal course of + use, but a scrub operation makes sure even infrequently + used blocks are checked for silent corruption. Dataset Quota - ZFS provides very fast and accurate dataset, user - and group space accounting in addition to quotas and - space reservations. This gives the administrator fine - grained control over how space is allocated and allows - critical file systems to reserve space to ensure other - file systems do not take all of the free space. + ZFS provides very fast and + accurate dataset, user and group space accounting in + addition to quotas and space reservations. This gives + the administrator fine grained control over how space is + allocated and allows critical file systems to reserve + space to ensure other file systems do not take all of + the free space. - ZFS supports different types of quotas: the - dataset quota, the ZFS supports different types of + quotas: the dataset quota, the reference quota (refquota), the user @@ -1381,9 +1415,9 @@ vfs.zfs.vdev.cache.size="5M"storage/home/bob, the space used by - that snapshot is counted against the reservation. The - storage/home/bob, + the space used by that snapshot is counted against the + reservation. 
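A sketch of the quota and reservation properties discussed above, using the storage/home/bob dataset from the text and a hypothetical user joe:

&prompt.root; zfs set quota=10G storage/home/bob        # dataset and all descendants
&prompt.root; zfs set refquota=10G storage/home/bob     # excludes snapshots and children
&prompt.root; zfs set reservation=5G storage/home/bob   # guarantee space for this dataset
&prompt.root; zfs set userquota@joe=1G storage/home     # per-user space limit
&prompt.root; zfs get quota,refquota,reservation storage/home/bob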
The refreservation property works in a similar way, except it excludes descendants, such as From owner-svn-doc-projects@FreeBSD.ORG Thu Aug 15 02:39:22 2013 Return-Path: Delivered-To: svn-doc-projects@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [8.8.178.115]) (using TLSv1 with cipher ADH-AES256-SHA (256/256 bits)) (No client certificate requested) by hub.freebsd.org (Postfix) with ESMTP id 69CF5FAE; Thu, 15 Aug 2013 02:39:22 +0000 (UTC) (envelope-from wblock@FreeBSD.org) Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.freebsd.org (Postfix) with ESMTPS id 579622BDE; Thu, 15 Aug 2013 02:39:22 +0000 (UTC) Received: from svn.freebsd.org ([127.0.1.70]) by svn.freebsd.org (8.14.7/8.14.7) with ESMTP id r7F2dM4h091909; Thu, 15 Aug 2013 02:39:22 GMT (envelope-from wblock@svn.freebsd.org) Received: (from wblock@localhost) by svn.freebsd.org (8.14.7/8.14.5/Submit) id r7F2dMmC091908; Thu, 15 Aug 2013 02:39:22 GMT (envelope-from wblock@svn.freebsd.org) Message-Id: <201308150239.r7F2dMmC091908@svn.freebsd.org> From: Warren Block Date: Thu, 15 Aug 2013 02:39:22 +0000 (UTC) To: doc-committers@freebsd.org, svn-doc-projects@freebsd.org Subject: svn commit: r42549 - projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs X-SVN-Group: doc-projects MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-BeenThere: svn-doc-projects@freebsd.org X-Mailman-Version: 2.1.14 Precedence: list List-Id: SVN commit messages for doc projects trees List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 15 Aug 2013 02:39:22 -0000 Author: wblock Date: Thu Aug 15 02:39:21 2013 New Revision: 42549 URL: http://svnweb.freebsd.org/changeset/doc/42549 Log: Rewrite the introductory paragraph, fix miscellaneous errors. Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Modified: projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml ============================================================================== --- projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Thu Aug 15 02:28:43 2013 (r42548) +++ projects/zfsupdate-201307/en_US.ISO8859-1/books/handbook/zfs/chapter.xml Thu Aug 15 02:39:21 2013 (r42549) @@ -17,21 +17,33 @@ The Z File System (<acronym>ZFS</acronym>) - The Z file system, originally developed by &sun;, - is designed to future proof the file system by removing many of - the arbitrary limits imposed on previous file systems. ZFS - allows continuous growth of the pooled storage by adding - additional devices. ZFS allows you to create many file systems - (in addition to block devices) out of a single shared pool of - storage. Space is allocated as needed, so all remaining free - space is available to each file system in the pool. It is also - designed for maximum data integrity, supporting data snapshots, - multiple copies, and cryptographic checksums. It uses a - software data replication model, known as - RAID-Z. RAID-Z provides - redundancy similar to hardware RAID, but is - designed to prevent data write corruption and to overcome some - of the limitations of hardware RAID. + The Z File System + (ZFS) was developed at &sun; to address many of + the problems with current file systems. 
There were three major + design goals: + + + + Data integrity: checksums are created when data is written + and checked when data is read. If on-disk data corruption is + detected, the user is notified and recovery methods are + initiated. + + + + Pooled storage: physical storage devices are added to a + pool, and storage space is allocated from that shared pool. + Space is available to all file systems, and can be increased + by adding new storage devices to the pool. + + + + Performance: + + + + A complete list of ZFS features and + terminology is shown in . What Makes <acronym>ZFS</acronym> Different @@ -1168,7 +1180,7 @@ vfs.zfs.vdev.cache.size="5M"Snapshot The copy-on-write design of + linkend="zfs-term-cow">copy-on-write (COW) design of ZFS allows for nearly instantaneous consistent snapshots with arbitrary names. After taking a snapshot of a dataset (or a recursive snapshot of a @@ -1259,7 +1271,7 @@ vfs.zfs.vdev.cache.size="5M" @@ -1278,7 +1290,7 @@ vfs.zfs.vdev.cache.size="5M" LZ4 compression is only - available after &os; 9.2 + available after &os; 9.2.
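Since the LZ4 note above ties the feature to a &os; version, whether a given pool can use it can be checked against the pool's version or feature flags; a hedged sketch with a hypothetical pool name:

&prompt.root; zpool upgrade -v                 # lists the versions and feature flags this system supports
&prompt.root; zpool get version mypool         # legacy version number, or "-" for feature-flag pools
&prompt.root; zfs set compression=lz4 mypool   # only accepted when the pool supports LZ4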