From owner-svn-doc-all@FreeBSD.ORG Fri Jan 18 23:26:14 2013
Return-Path:
Delivered-To: svn-doc-all@freebsd.org
Received: from mx1.freebsd.org (mx1.FreeBSD.org [8.8.178.115])
	by hub.freebsd.org (Postfix) with ESMTP id 7BE2B72A;
	Fri, 18 Jan 2013 23:26:14 +0000 (UTC)
	(envelope-from wblock@FreeBSD.org)
Received: from svn.freebsd.org (svn.freebsd.org [IPv6:2001:1900:2254:2068::e6a:0])
	by mx1.freebsd.org (Postfix) with ESMTP id 6DB15DBC;
	Fri, 18 Jan 2013 23:26:14 +0000 (UTC)
Received: from svn.freebsd.org ([127.0.1.70])
	by svn.freebsd.org (8.14.5/8.14.5) with ESMTP id r0INQEjO085358;
	Fri, 18 Jan 2013 23:26:14 GMT (envelope-from wblock@svn.freebsd.org)
Received: (from wblock@localhost)
	by svn.freebsd.org (8.14.5/8.14.5/Submit) id r0INQEBM085357;
	Fri, 18 Jan 2013 23:26:14 GMT (envelope-from wblock@svn.freebsd.org)
Message-Id: <201301182326.r0INQEBM085357@svn.freebsd.org>
From: Warren Block
Date: Fri, 18 Jan 2013 23:26:14 +0000 (UTC)
To: doc-committers@freebsd.org, svn-doc-all@freebsd.org,
	svn-doc-head@freebsd.org
Subject: svn commit: r40681 - head/en_US.ISO8859-1/books/handbook/filesystems
X-SVN-Group: doc-head
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: svn-doc-all@freebsd.org
X-Mailman-Version: 2.1.14
Precedence: list
List-Id: SVN commit messages for the entire doc trees (except for "user",
	"projects", and "translations")
X-List-Received-Date: Fri, 18 Jan 2013 23:26:14 -0000

Author: wblock
Date: Fri Jan 18 23:26:13 2013
New Revision: 40681
URL: http://svnweb.freebsd.org/changeset/doc/40681

Log:
  Whitespace-only fixes for the filesystems chapter.  Translators, please
  ignore.

  Patch from dru on freebsd-doc, plus additional indentation fixes for
  ZFS section and a few other miscellaneous whitespace problems.
Submitted by:	Dru Lavigne

Modified:
  head/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml

Modified: head/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml
==============================================================================
--- head/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml	Fri Jan 18 22:30:06 2013	(r40680)
+++ head/en_US.ISO8859-1/books/handbook/filesystems/chapter.xml	Fri Jan 18 23:26:13 2013	(r40681)
@@ -47,17 +47,18 @@
     (ZFS). There are different levels of support for the various file
-    systems in &os;. Some will require a kernel module to be loaded,
-    others may require a toolset to be installed. This chapter is
-    designed to help users of &os; access other file systems on their
-    systems, starting with the &sun; Z file
+    systems in &os;. Some will require a kernel module to be
+    loaded, others may require a toolset to be installed. This
+    chapter is designed to help users of &os; access other file
+    systems on their systems, starting with the &sun; Z file
     system. After reading this chapter, you will know:

-        The difference between native and supported file systems.
+        The difference between native and supported file
+        systems.

@@ -113,10 +114,11 @@
     ZFS Tuning

     The ZFS subsystem utilizes much of
-    the system resources, so some tuning may be required to provide
-    maximum efficiency during every-day use. As an experimental
-    feature in &os; this may change in the near future; however,
-    at this time, the following steps are recommended.
+    the system resources, so some tuning may be required to
+    provide maximum efficiency during every-day use. As an
+    experimental feature in &os; this may change in the near
+    future; however, at this time, the following steps are
+    recommended.

     Memory

@@ -127,9 +129,10 @@
     several other tuning mechanisms in place.
Some people have had luck using fewer than one gigabyte - of memory, but with such a limited amount of physical memory, - when the system is under heavy load, it is very plausible - that &os; will panic due to memory exhaustion. + of memory, but with such a limited amount of physical + memory, when the system is under heavy load, it is very + plausible that &os; will panic due to memory + exhaustion. @@ -138,11 +141,12 @@ It is recommended that unused drivers and options be removed from the kernel configuration file. Since most devices are available as modules, they may be loaded - using the /boot/loader.conf file. + using the /boot/loader.conf + file. - Users of the &i386; architecture should add the following - option to their kernel configuration file, rebuild their - kernel, and reboot: + Users of the &i386; architecture should add the + following option to their kernel configuration file, + rebuild their kernel, and reboot: options KVA_PAGES=512 @@ -158,11 +162,11 @@ Loader Tunables - The kmem address space should be - increased on all &os; architectures. On the test system with - one gigabyte of physical memory, success was achieved with the - following options which should be placed in - the /boot/loader.conf file and the system + The kmem address space should + be increased on all &os; architectures. On the test system + with one gigabyte of physical memory, success was achieved + with the following options which should be placed in the + /boot/loader.conf file and the system restarted: vm.kmem_size="330M" @@ -170,9 +174,9 @@ vm.kmem_size_max="330M" vfs.zfs.arc_max="40M" vfs.zfs.vdev.cache.size="5M" - For a more detailed list of recommendations for ZFS-related - tuning, see - . + For a more detailed list of recommendations for + ZFS-related tuning, see . 
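Taken together, the loader tunables rewrapped in this hunk amount to the following /boot/loader.conf fragment. These are the values the chapter reports for its one-gigabyte test system, not general-purpose recommendations:

```
# /boot/loader.conf -- test-system values quoted in the chapter,
# not general recommendations
vm.kmem_size="330M"
vm.kmem_size_max="330M"
vfs.zfs.arc_max="40M"
vfs.zfs.vdev.cache.size="5M"
```

After editing the file, the system must be restarted for the tunables to take effect, as the chapter notes.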
@@ -184,23 +188,25 @@ vfs.zfs.vdev.cache.size="5M" - &prompt.root; echo 'zfs_enable="YES"' >> /etc/rc.conf + &prompt.root; echo 'zfs_enable="YES"' >> /etc/rc.conf &prompt.root; /etc/rc.d/zfs start - The remainder of this document assumes three - SCSI disks are available, and their device names - are da0, - da1 - and da2. - Users of IDE hardware may use the - ad - devices in place of SCSI hardware. + The remainder of this document assumes three + SCSI disks are available, and their + device names are + da0, + da1 + and da2. + Users of IDE hardware may use the + ad + devices in place of SCSI hardware. Single Disk Pool - To create a simple, non-redundant ZFS pool using a - single disk device, use the zpool command: + To create a simple, non-redundant ZFS + pool using a single disk device, use the + zpool command: &prompt.root; zpool create example /dev/da0 @@ -239,8 +245,8 @@ drwxr-xr-x 21 root wheel 512 Aug 29 2 The example/compressed is now a ZFS compressed file system. Try copying - some large files to it by copying them to - /example/compressed. + some large files to it by copying them to /example/compressed. The compression may now be disabled with: @@ -307,8 +313,8 @@ example/data 17547008 0 175 amount of available space. This is the reason for using df through these examples, to show that the file systems are using only the amount of space - they need and will all draw from the same pool. - The ZFS file system does away with concepts + they need and will all draw from the same pool. The + ZFS file system does away with concepts such as volumes and partitions, and allows for several file systems to occupy the same pool. Destroy the file systems, and then destroy the pool as they are no longer @@ -332,28 +338,31 @@ example/data 17547008 0 175 As previously noted, this section will assume that three SCSI disks exist as devices da0, da1 - and da2 (or ad0 - and beyond in case IDE disks are being used). 
To create a - RAID-Z pool, issue the following - command: + and da2 (or + ad0 and beyond in case IDE disks + are being used). To create a RAID-Z + pool, issue the following command: &prompt.root; zpool create storage raidz da0 da1 da2 - &sun; recommends that the amount of devices used in a - RAID-Z configuration is between three and nine. If your needs - call for a single pool to consist of 10 disks or more, consider - breaking it up into smaller RAID-Z groups. If - you only have two disks and still require redundancy, consider using - a ZFS mirror instead. See the &man.zpool.8; - manual page for more details. + + &sun; recommends that the amount of devices used + in a RAID-Z configuration is between + three and nine. If your needs call for a single pool to + consist of 10 disks or more, consider breaking it up into + smaller RAID-Z groups. If you only + have two disks and still require redundancy, consider + using a ZFS mirror instead. See the + &man.zpool.8; manual page for more details. + The storage zpool should have been - created. This may be verified by using the &man.mount.8; and - &man.df.1; commands as before. More disk devices may have - been allocated by adding them to the end of the list above. - Make a new file system in the pool, called - home, where user files will eventually be - placed: + created. This may be verified by using the &man.mount.8; + and &man.df.1; commands as before. More disk devices may + have been allocated by adding them to the end of the list + above. Make a new file system in the pool, called + home, where user files will eventually + be placed: &prompt.root; zfs create storage/home @@ -529,13 +538,14 @@ errors: No known data errors &prompt.root; zfs set checksum=off storage/home This is not a wise idea, however, as checksums take - very little storage space and are more useful when enabled. There - also appears to be no noticeable costs in having them enabled. 
- While enabled, it is possible to have ZFS - check data integrity using checksum verification. This - process is known as scrubbing. To verify the - data integrity of the storage pool, issue - the following command: + very little storage space and are more useful when enabled. + There also appears to be no noticeable costs in having them + enabled. While enabled, it is possible to have + ZFS check data integrity using checksum + verification. This process is known as + scrubbing. To verify the data integrity of + the storage pool, issue the following + command: &prompt.root; zpool scrub storage @@ -571,178 +581,187 @@ errors: No known data errors - ZFS Quotas + ZFS Quotas - ZFS supports different types of quotas; the refquota, the - general quota, the user quota, and the group quota. This - section will explain the basics of each one, and include some - usage instructions. - - Quotas limit the amount of space that a dataset and its - descendants can consume, and enforce a limit on the amount of - space used by filesystems and snapshots for the descendants. - In terms of users, quotas are useful to limit the amount of - space a particular user can use. - - - Quotas cannot be set on volumes, as the - volsize property acts as an implicit - quota. - - - The refquota, - refquota=size, - limits the amount of space a dataset can consume by enforcing - a hard limit on the space used. However, this hard limit does - not include space used by descendants, such as file systems or - snapshots. - - To enforce a general quota of 10 GB for - storage/home/bob, use the - following: - - &prompt.root; zfs set quota=10G storage/home/bob - - User quotas limit the amount of space that can be used by - the specified user. The general format is - userquota@user=size, - and the user's name must be in one of the following - formats: - - - - POSIX - compatible name (e.g., joe). - - - POSIX - numeric ID (e.g., 789). - - - SID - name (e.g., - joe.bloggs@example.com). 
- - - SID - numeric ID (e.g., - S-1-123-456-789). - - - - For example, to enforce a quota of 50 GB for a user - named joe, use the - following: - - &prompt.root; zfs set userquota@joe=50G - - To remove the quota or make sure that one is not - set, instead use: - - &prompt.root; zfs set userquota@joe=none - - User quota properties are not displayed by - zfs get all. Non-root - users can only see their own quotas unless they have been - granted the userquota privilege. Users - with this privilege are able to view and set everyone's - quota. - - The group quota limits the amount of space that a - specified user group can consume. The general format is - groupquota@group=size. - - To set the quota for the group - firstgroup to 50 GB, - use: - - &prompt.root; zfs set groupquota@firstgroup=50G - - To remove the quota for the group - firstgroup, or make sure that one - is not set, instead use: - - &prompt.root; zfs set groupquota@firstgroup=none - - As with the user quota property, - non-root users can only see the quotas - associated with the user groups that they belong to, however - a root user or a user with the - groupquota privilege can view and set all - quotas for all groups. - - The zfs userspace subcommand displays - the amount of space consumed by each user on the specified - filesystem or snapshot, along with any specified quotas. - The zfs groupspace subcommand does the - same for groups. For more information about supported - options, or only displaying specific options, see - &man.zfs.1;. - - To list the quota for - storage/home/bob, if you have the - correct privileges or are root, - use the following: + ZFS supports different types of quotas; the + refquota, the general quota, the user quota, and + the group quota. This section will explain the + basics of each one, and include some usage + instructions. 
+ + Quotas limit the amount of space that a dataset + and its descendants can consume, and enforce a limit + on the amount of space used by filesystems and + snapshots for the descendants. In terms of users, + quotas are useful to limit the amount of space a + particular user can use. + + + Quotas cannot be set on volumes, as the + volsize property acts as an + implicit quota. + + + The refquota, + refquota=size, + limits the amount of space a dataset can consume + by enforcing a hard limit on the space used. However, + this hard limit does not include space used by descendants, + such as file systems or snapshots. + + To enforce a general quota of 10 GB for + storage/home/bob, use the + following: + + &prompt.root; zfs set quota=10G storage/home/bob + + User quotas limit the amount of space that can + be used by the specified user. The general format + is + userquota@user=size, + and the user's name must be in one of the following + formats: + + + + POSIX compatible name + (e.g., joe). + + + + POSIX + numeric ID (e.g., + 789). + + + + SID name + (e.g., + joe.bloggs@example.com). + + + + SID + numeric ID (e.g., + S-1-123-456-789). + + + + For example, to enforce a quota of 50 GB for a user + named joe, use the + following: + + &prompt.root; zfs set userquota@joe=50G + + To remove the quota or make sure that one is not set, + instead use: + + &prompt.root; zfs set userquota@joe=none + + User quota properties are not displayed by + zfs get all. + Non-root users can only see their own + quotas unless they have been granted the + userquota privilege. Users with this + privilege are able to view and set everyone's quota. + + The group quota limits the amount of space that a + specified user group can consume. The general format is + groupquota@group=size. 
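All of the quota commands in this section share one shape, `zfs set property=size dataset`. Purely as a sketch — the dataset and principal names are the handbook's own examples, and nothing below touches a real pool — a small shell helper can assemble the command lines for review before they are run on a ZFS-capable system:

```shell
#!/bin/sh
# Dry-run sketch: build the zfs(8) quota command lines described above
# instead of executing them.  Names (joe, firstgroup, storage/home/bob)
# are the handbook's examples; no pool is touched.
quota_cmd() {
    # $1 = property (quota, userquota@user, or groupquota@group)
    # $2 = size, or "none" to clear; $3 = dataset
    printf 'zfs set %s=%s %s\n' "$1" "$2" "$3"
}

quota_cmd quota 10G storage/home/bob
quota_cmd userquota@joe 50G storage/home/bob
quota_cmd groupquota@firstgroup none storage/home/bob
```

Printing the commands first makes the pattern behind each variant easy to compare; piping the output to sh(1) on a system with ZFS would actually apply the settings.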
+ + To set the quota for the group + firstgroup to 50 GB, + use: + + &prompt.root; zfs set groupquota@firstgroup=50G + + To remove the quota for the group + firstgroup, or make sure that one + is not set, instead use: + + &prompt.root; zfs set groupquota@firstgroup=none + + As with the user quota property, + non-root users can only see the quotas + associated with the user groups that they belong to, however + a root user or a user with the + groupquota privilege can view and set all + quotas for all groups. + + The zfs userspace subcommand displays + the amount of space consumed by each user on the specified + filesystem or snapshot, along with any specified quotas. + The zfs groupspace subcommand does the + same for groups. For more information about supported + options, or only displaying specific options, see + &man.zfs.1;. + + To list the quota for + storage/home/bob, if you have the + correct privileges or are root, use the + following: - &prompt.root; zfs get quota storage/home/bob + &prompt.root; zfs get quota storage/home/bob - ZFS Reservations + ZFS Reservations + + ZFS supports two types of space reservations. + This section will explain the basics of each one, + and include some usage instructions. + + The reservation property makes it + possible to reserve a minimum amount of space guaranteed + for a dataset and its descendants. This means that if a + 10 GB reservation is set on + storage/home/bob, if disk + space gets low, at least 10 GB of space is reserved + for this dataset. The refreservation + property sets or indicates the minimum amount of space + guaranteed to a dataset excluding descendants, such as + snapshots. As an example, if a snapshot was taken of + storage/home/bob, enough disk space + would have to exist outside of the + refreservation amount for the operation + to succeed because descendants of the main data set are + not counted by the refreservation + amount and so do not encroach on the space set. 
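Putting the two properties side by side, a minimal session on a ZFS-capable &os; system might look like the following. This is only a sketch of the command pattern: the dataset name is the chapter's example, the commands require a real pool, and the output of the get command depends on that pool, so it is omitted:

```shell
# Requires a FreeBSD system with ZFS and the chapter's example dataset.
# reservation counts the dataset and its descendants (snapshots included);
# refreservation guarantees space for the dataset itself only.
zfs set reservation=10G storage/home/bob
zfs set refreservation=10G storage/home/bob
zfs get reservation,refreservation storage/home/bob
```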
+ + Reservations of any sort are useful in many + situations, for example planning and testing the + suitability of disk space allocation in a new system, or + ensuring that enough space is available on file systems + for system recovery procedures and files. + + The general format of the reservation + property is +reservation=size, + so to set a reservation of 10 GB on + storage/home/bobthe below command is + used: + + &prompt.root; zfs set reservation=10G storage/home/bob - ZFS supports two types of space reservations. This - section will explain the basics of each one, and include - some usage instructions. - - The reservation property makes it - possible to reserve a minimum amount of space guaranteed for a - dataset and its descendants. This means that if a 10 GB - reservation is set on storage/home/bob, - if disk space gets low, at least 10 GB of space is - reserved for this dataset. The - refreservation property sets or indicates - the minimum amount of space guaranteed to a dataset excluding - descendants, such as snapshots. As an example, if a snapshot - was taken of storage/home/bob, enough - disk space would have to exist outside of the - refreservation amount for the operation to - succeed because descendants of the main data set are not - counted by the refreservation amount and - so do not encroach on the space set. - - Reservations of any sort are useful in many situations, - for example planning and testing the suitability of disk space - allocation in a new system, or ensuring that enough space is - available on file systems for system recovery procedures and - files. 
- - The general format of the reservation - property is - reservation=size, - so to set a reservation of 10 GB on - storage/home/bobthe below command is - used: - - &prompt.root; zfs set reservation=10G storage/home/bob - - To make sure that no reservation is set, or to remove a - reservation, instead use: - - &prompt.root; zfs set reservation=none storage/home/bob - - The same principle can be applied to the - refreservation property for setting a - refreservation, with the general format - refreservation=size. - - To check if any reservations or refreservations exist on - storage/home/bob, execute one of the - following commands: + To make sure that no reservation is set, or to remove a + reservation, instead use: - &prompt.root; zfs get reservation storage/home/bob + &prompt.root; zfs set reservation=none storage/home/bob + + The same principle can be applied to the + refreservation property for setting a + refreservation, with the general format + refreservation=size. + + To check if any reservations or refreservations exist on + storage/home/bob, execute one of the + following commands: + + &prompt.root; zfs get reservation storage/home/bob &prompt.root; zfs get refreservation storage/home/bob @@ -760,12 +779,13 @@ errors: No known data errors The &man.ext2fs.5; file system kernel implementation was written by Godmar Back, and the driver first appeared in &os; 2.2. In &os; 8 and earlier, the code is licensed under - the GNU Public License, however under &os; 9, - the code has been rewritten and it is now licensed under the - BSD license. + the GNU Public License, however under &os; + 9, the code has been rewritten and it is now licensed under + the BSD license. The &man.ext2fs.5; driver will allow the &os; kernel - to both read and write to ext2 file systems. + to both read and write to ext2 file + systems. 
    First, load the kernel loadable module:

@@ -776,6 +796,7 @@ errors: No known data errors
     &prompt.root; mount -t ext2fs /dev/ad1s1 /mnt
+
   XFS
@@ -815,6 +836,7 @@ errors: No known data errors
     metadata. This can be used to quickly create a read-only
     filesystem which can be tested on &os;.
+
   ReiserFS
@@ -826,7 +848,8 @@ errors: No known data errors
     access ReiserFS file systems and read their contents, but not
     write to them, currently.

-    First, the kernel-loadable module needs to be loaded:
+    First, the kernel-loadable module needs to be
+    loaded:

     &prompt.root; kldload reiserfs