From owner-freebsd-doc@FreeBSD.ORG Mon Apr 7 01:27:33 2008 Return-Path: Delivered-To: doc@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 77D21106566C for ; Mon, 7 Apr 2008 01:27:33 +0000 (UTC) (envelope-from mariamayer.online@freenet.de) Received: from mout4.freenet.de (mout4.freenet.de [IPv6:2001:748:100:40::2:6]) by mx1.freebsd.org (Postfix) with ESMTP id 19C048FC0C for ; Mon, 7 Apr 2008 01:27:33 +0000 (UTC) (envelope-from mariamayer.online@freenet.de) Received: from [195.4.92.11] (helo=1.mx.freenet.de) by mout4.freenet.de with esmtpa (Exim 4.69) (envelope-from ) id 1Jig8y-0007rD-1F for doc@freebsd.org; Mon, 07 Apr 2008 03:27:32 +0200 Received: from [194.74.163.97] (port=2085 helo=Golden) by 1.mx.freenet.de with esmtpa (ID mariamayer.online@freenet.de) (port 587) (Exim 4.69 #12) id 1Jig8x-0007Nq-PY for doc@freebsd.org; Mon, 07 Apr 2008 03:27:32 +0200 Date: Mon, 7 Apr 2008 02:27:33 +0100 X-Mailer: IBM email Message-ID: <20080407012733265.0BA6693A73394E2F@Golden> From: mariamayer.online@freenet.de To: doc@freebsd.org Sender: mariamayer.online@freenet.de Mime-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 8bit Cc: Subject: YOUR NAME HAS BEEN LISTED X-BeenThere: freebsd-doc@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Documentation project List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 07 Apr 2008 01:27:33 -0000 UK ESSEX PROMO 05th April, 2008 Dear Winner, This is to inform you that your email ID has won you the sum of £500,000.00 (five hundred thousand pounds) in the new year UK ESSEX PROMOTION conducted with the computer balloting system and your email won with the winning numbers stated below; Coupon Number: H88H78 Online Number: 65/76/90 Ticket Number: 965/435/7855 PIN: UK765XX This is a millennium scientific computer game in which email addresses were used and it is a promotional program aimed at encouraging internet users; therefore you do not need to buy ticket to enter for the game. The lottery is sponsored by a regional organization belonging to the World Lottery Association (WLA) which represents 147 lotteries from 81countries, with combined annual revenues in excess of US$102 billion. In your best interest, we request that you keep the entire details of your award strictly from public notice until the process of transferring your claims has been completed. This is in accordance with section 13(1) (n) of the national gambling act as adopted in 1993 and amended on 3rd July 1996 by the constitutional assembly. This is to protect winners and to avoid misappropriation of funds. PAYMENT OF PRIZE AND CLAIM Winners shall be paid in accordance with his/her Settlement Centre. Your Prize Award must be claimed no later than 15 days from date of Draw Notification. Any prize not claimed within this period will be forfeited. In pursuant to your prize collection, kindly contact the processing officer in charge of your region by completing the form below for due verification, then send via fax or email to the processing officer with information below for guildance on how to notarize your file to enable us make payment of your prize money to you. Essex Program Claims Department Tel/Fax: +(44) 704 090 0534, 700 607 9654 Contact Person: Joe M. Anderson E-Mail: joemanderson@searchmachine.com (1)NAME OF BENEFICIARY:_______________________________________________ (2)ADDRESS:__________________________________________________________ (3)TELEPHONE:_________________________________________________________ (4)FAX:________________________________________________________________ (5)OCCUPATION:_______________________________________________________ (6)NATIONALITY:_______________________________________________________ (7)CITY:_______________________________________________________________ (8)COUNTRY:___________________________________________________________ (9)COUPON NUMBERS:____________________________________________________ (10)TICKET NUMBERS:____________________________________________________ (11)WINNER'S EMAIL ADDRESS:____________________________________________ I WANT TO BE PAID BY: a/BANK TRANSFER. b/PERSONAL COLLECTION. Send your details by fax if you find it difficult sending an email. Congratulations from all our staff. Sincerely Yours, Maria Mayer. International Co-ordinator Copyright (c) 2008 Euro Millions Inc. From owner-freebsd-doc@FreeBSD.ORG Mon Apr 7 11:06:06 2008 Return-Path: Delivered-To: freebsd-doc@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 942C6106564A for ; Mon, 7 Apr 2008 11:06:06 +0000 (UTC) (envelope-from owner-bugmaster@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 836328FC0A for ; Mon, 7 Apr 2008 11:06:06 +0000 (UTC) (envelope-from owner-bugmaster@FreeBSD.org) Received: from freefall.freebsd.org (gnats@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.2/8.14.2) with ESMTP id m37B66J5047846 for ; Mon, 7 Apr 2008 11:06:06 GMT (envelope-from owner-bugmaster@FreeBSD.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.2/8.14.1/Submit) id m37B65Jc047842 for freebsd-doc@FreeBSD.org; Mon, 7 Apr 2008 11:06:05 GMT (envelope-from owner-bugmaster@FreeBSD.org) Date: Mon, 7 Apr 2008 11:06:05 GMT Message-Id: <200804071106.m37B65Jc047842@freefall.freebsd.org> X-Authentication-Warning: freefall.freebsd.org: gnats set sender to owner-bugmaster@FreeBSD.org using -f From: FreeBSD bugmaster To: FreeBSD doc list Cc: Subject: Current unassigned doc problem reports X-BeenThere: freebsd-doc@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Documentation project List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 07 Apr 2008 11:06:06 -0000 Current FreeBSD problem reports The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions including experimental development code and obsolete releases. Bugs can be in one of several states: o - open A problem report has been submitted, no sanity checking performed. a - analyzed The problem is understood and a solution is being sought. f - feedback Further work requires additional information from the originator or the community - possibly confirmation of the effectiveness of a proposed solution. p - patched A patch has been committed, but some issues (MFC and / or confirmation from originator) are still open. r - repocopy The resolution of the problem report is dependent on a repocopy operation within the CVS repository which is awaiting completion. s - suspended The problem is not being worked on, due to lack of information or resources. This is a prime candidate for somebody who is looking for a project to do. If the problem cannot be solved at all, it will be closed, rather than suspended. c - closed A problem report is closed when any changes have been integrated, documented, and tested -- or when fixing the problem is abandoned. Critical problems Serious problems S Tracker Resp. Description -------------------------------------------------------------------------------- o docs/27605 doc [patch] Cross-document references () s docs/35678 doc docproj Makefiles for web are broken for paths with sp o docs/61605 doc [request] Improve documentation for i386 disk geometry o docs/84932 doc new document: printing with an Epson ALC-3000N on Free o docs/98115 doc Missing parts after rendering handbook to RTF format o docs/106135 doc [request] articles/vinum needs to be updated o docs/110253 doc [patch] rtprio(1): remove processing starvation commen p docs/112935 doc [patch] newfs_msdos(8): document 4.3g limit on files w o docs/116080 doc PREFIX is documented, but not the more important LOCAL o docs/118902 doc [patch] wrong signatures in d2i_RSAPublicKey man pages o docs/119545 doc books/arch-handbook/usb/chapter.sgml formatting o docs/120456 doc ath(4) needs to specify requirement on wlan_scan_sta o docs/121193 doc Err in Release notes (twa driver number wrong) o docs/121312 doc RELNOTES_LANG breaks release if not en_US.ISO8859-1 14 problems total. Non-critical problems S Tracker Resp. Description -------------------------------------------------------------------------------- s docs/20028 doc ASCII docs should reflect tags in the sourc o docs/24786 doc missing FILES descriptions in sa(4) o docs/26286 doc *printf(3) etc should gain format string warnings a docs/30008 doc [patch] French softupdates document should be translat s docs/33589 doc [patch] to doc.docbook.mk to post process .tex files. o docs/36432 doc Proposal for doc/share/mk: make folded books using psu o docs/36449 doc symlink(7) manual doesn't mention trailing slash, etc. o docs/38982 doc [patch] developers-handbook/Jail fix o docs/40423 doc Keyboard(4)'s definition of parameters to GETFKEY/SETF o docs/41089 doc pax(1) -B option does not mention interaction with -z o docs/43823 doc [PATCH] update to environ(7) manpage o docs/47818 doc [patch] ln(1) manpage is confusing o docs/48101 doc [patch] add documentation on the fixit disk to the FAQ o docs/50211 doc [PATCH] doc.docbook.mk: fix textfile creation o docs/53271 doc bus_dma(9) fails to document alignment restrictions o docs/53596 doc Updates to mt(1) manual page o docs/53751 doc bus_dma(9) incorrectly documents BUS_DMA_ALLOCNOW s docs/54752 doc bus_dma explained in ISA section in Handbook: should b o docs/57388 doc [patch] INSTALL.TXT enhancement: mention ok prompt o docs/59044 doc [patch] doc.docbook.mk does not properly handle a sour o docs/59477 doc Outdated Info Documents at http://docs.freebsd.org/inf o docs/59835 doc ipfw(8) man page does not warn about accepted but mean o docs/61301 doc [patch] Manpage patch for aue(4) to enable HomePNA fun o docs/69861 doc [patch] usr.bin/csplit/csplit.1 does not document POSI o docs/70652 doc [patch] New man page: portindex(5) o docs/75865 doc comments on "backup-basics" in handbook o docs/75995 doc hcreate(3) documentation(?) bug o docs/76333 doc [patch] ferror(3): EOF indicator can be cleared by not o docs/78138 doc [patch] Error in pre-installation section of installat o docs/78240 doc [patch] handbook: replace with aroun o docs/78480 doc Networked printer setup unnecessarily complex in handb o docs/82595 doc 25.5.3 Configuring a bridge section of the handbook ne o docs/84265 doc [patch] chmod(1) manpage omits implication of setting o docs/84268 doc chmod(1) manpage's BUGS entry is either wrong or too c o docs/84317 doc fdp-primer doesn't show class=USERNAME distinctively o docs/84670 doc [patch] tput(1) manpage missing ENVIRONMENT section wi o docs/84806 doc mdoc(7) manpage has section ordering problems o docs/84956 doc [patch] intro(5) manpage doesn't mention API coverage o docs/85118 doc [PATCH] opiekey(1) references non-existing opiegen(1) o docs/85128 doc loader.conf(5) autoboot_delay incompletly described o docs/85187 doc [patch] find(1) manpage missing block info for -ls o docs/86342 doc bikeshed entry of Handbook is wrong o docs/87857 doc ifconfig(8) wireless options order matters o docs/87936 doc Handbook chapter on NIS/YP lacks good information on a o docs/88512 doc [patch] mount_ext2fs(8) man page has no details on lar o docs/91149 doc read(2) can return EINVAL for unaligned access to bloc o docs/91506 doc ndis(4) man page should be more specific about support o docs/92626 doc jail manpage should mention disabling some periodic sc o docs/94625 doc [patch] growfs man page -- document "panic: not enough o docs/95139 doc FAQ to move filesystem to new disk fails: incorrect pe o docs/96207 doc Comments of a sockaddr_un structure could confuse one o docs/98974 doc Missing tunables in loader(8) manpage o docs/99506 doc FreeBSD Handbook addition: IPv6 Server Settings o docs/100196 doc man login.conf does explain not "unlimited" o docs/100242 doc sysctl(3) description of KERN_PROC is not correct anym o docs/101464 doc sync ru_RU.KOI8-R/articles/portbuild/article.html with o docs/102148 doc The description of which Intel chips have EM64T is out o docs/102719 doc [patch] ng_bpf(4) example leads to unneeded promiscuos o docs/104403 doc man security should mention that the usage of the X Wi o docs/104879 doc Howto: Listen to IMA ADPCM .wav files on FreeBSD box o docs/105608 doc fdc(4) debugging description staled o docs/105997 doc sys/kern/sys_pipe.c refer to tuning(7), but there is n o docs/107432 doc Handbook's default partitioning schema is out-of-date o docs/108101 doc /boot/default/loader.conf contains an incorrect commen a docs/108980 doc list of missing man pages o docs/109115 doc add Ultra 450 to hardware list for sparc64 o docs/109201 doc [request]: manual for callbootd f docs/109226 doc [request] No manual entry for sntp o docs/109972 doc No manual entry for zless/bzless o docs/109973 doc No manual entry for c++filt o docs/109975 doc No manual entry for elf2aout o docs/109977 doc No manual entry for ksu o docs/109981 doc No manual entry for post-grohtml o docs/109983 doc No manual entry for protoize o docs/110061 doc [PATCH] tuning(7) missing reference to vfs.read_max o docs/110062 doc [patch] mount_nfs(8) fails to mention a failure condit o docs/110376 doc [patch] add some more explanations for the iwi/ipw fir o docs/110692 doc wi(4) man page doesn't say WPA is not supported o docs/110999 doc carp(4) should document unsupported interface types o docs/111147 doc hostapd.conf is not documented o docs/111263 doc [request] Information on $EDITOR variable in section 3 o docs/111265 doc [request] Clarify how to set common shell variables o docs/111425 doc Missing chunks of text in historical manpages o docs/112481 doc bug in ppp.linkup example o docs/112682 doc Handbook GEOM_GPT explanation does not provide accurat o docs/112804 doc groff(1) command should be called to explicitly use "p o docs/113194 doc [patch] [request] crontab.5: handling of day-in-month o docs/114139 doc mbuf(9) has misleading comments on M_DONTWAIT and M_TR o docs/114184 doc [patch] [ndis]: add info to man 4 ndis o docs/114371 doc [patch][ipv6] rtadvd.con(5) should show how to adverti o docs/115000 doc [PATCH] nits and updates to FAQs (part 1) o docs/115065 doc [patch] sync ps.1 with p_flag and keywords o docs/115921 doc Booting from pst(4) is not supported o docs/116116 doc mktemp (3) re/move note o docs/116480 doc sysctl(3) description of kern.file no longer applies s o docs/117013 doc mount_smbfs(8) doesn't document -U (username) argument f docs/117308 doc Clarification of /etc/defaults/devfs.rules status o docs/117747 doc 'break' system call needs a man page o docs/117798 doc formatting oddity in sysmouse(4) o docs/118214 doc close(2) error returns incomplete f docs/118332 doc man page for top does not describe STATE column wait e o docs/118545 doc loader tunables kern.dfldsiz and friends nearly undocu o docs/119329 doc [patch] Fix misleading man 1 split o docs/119338 doc gprof(1) refers to unmentioned option "-c" s docs/119404 doc [request] events page should list only last 2 years wo a docs/119536 doc a few typos in French handbook (basics) o docs/119746 doc l10n chapter of handbook (Russian Language) o docs/119907 doc Ports compatibility o docs/120024 doc resolver(5) and hosts(5) need updated for IPv6 o docs/120040 doc handbook: diskless operation: populate root doesn't po o docs/120125 doc [patch] Installing FreeBSD 7.0 via serial console and o docs/120248 doc [patch] getaddrinfo() implementation on FreeBSD 7 is i o docs/120357 doc [patch] zone.9 - document also the _arg versions of th o docs/120539 doc Inconsistent ipfw's man page o docs/120628 doc PAE documentation errror in handbook s docs/120917 doc [request]: Man pages mising for thr_xxx syscalls o docs/121173 doc [patch] mq_getattr(2): mq_flags mistakenly described a o docs/121197 doc [patch] edits to books/porters-handbook o docs/121321 doc Handbook should reflect new pf.conf defaults s docs/121541 doc [request] no man pages for wlan_scan_ap o docs/121565 doc dhcp-options(5) manpage incorrectly formatted omitting o docs/121585 doc [handbook] Wrong multicast specification o docs/121648 doc [patch] add portmaster(8) to man-refs.ent o docs/121713 doc man page for su contains errornous example. o docs/121721 doc telnetd(8) not describing -X authentication types o docs/121821 doc [patch] wpa_supplicant.conf.5 - provide pointer to sam o docs/121863 doc IPSEC handbook update for FreeBSD 7 and later o docs/121871 doc ftpd does not interpret configuration files as documen o docs/121952 doc Handbook chapter on Network Address Translation wrong o docs/122052 doc minor update on handbook section 20.7.1 o docs/122053 doc updaze on vinum(4) reference to newfs(8) o docs/122351 doc [patch] update to PF section of handbook o docs/122470 doc [patch] exit status on fetch(1) manual page o docs/122476 doc [handbook] [patch] Misleading doc for adding new users 134 problems total. From owner-freebsd-doc@FreeBSD.ORG Mon Apr 7 19:09:43 2008 Return-Path: Delivered-To: freebsd-doc@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 389A11065674 for ; Mon, 7 Apr 2008 19:09:43 +0000 (UTC) (envelope-from abeuke@lancope.com) Received: from talbot.electric.net (talbot.electric.net [72.35.23.19]) by mx1.freebsd.org (Postfix) with ESMTP id D89378FC1C for ; Mon, 7 Apr 2008 19:09:42 +0000 (UTC) (envelope-from abeuke@lancope.com) Received: from 1JiwTS-0000qa-TJ by talbot.electric.net with emc1-ok (Exim 4.67) (envelope-from ) id 1JiwTT-0000rG-U9 for freebsd-doc@FreeBSD.org; Mon, 07 Apr 2008 11:53:47 -0700 Received: by emcmailer; Mon, 07 Apr 2008 11:53:47 -0700 Received: from [209.182.185.10] (helo=lchqmr01.lancope.com) by talbot.electric.net with esmtp (Exim 4.67) (envelope-from ) id 1JiwTS-0000qa-TJ for freebsd-doc@FreeBSD.org; Mon, 07 Apr 2008 11:53:46 -0700 Received: from lchqex01.lancope.local (unknown [10.201.0.21]) by lchqmr01.lancope.com (Postfix) with ESMTP id 7C31DA581 for ; Mon, 7 Apr 2008 14:05:18 +0000 (UTC) Received: from lchqex02.lancope.local ([10.201.0.25]) by lchqex01.lancope.local with Microsoft SMTPSVC(6.0.3790.3959); Mon, 7 Apr 2008 14:53:45 -0400 X-MimeOLE: Produced By Microsoft Exchange V6.5 Content-class: urn:content-classes:message MIME-Version: 1.0 Date: Mon, 7 Apr 2008 14:53:44 -0400 Message-ID: X-MS-Has-Attach: X-MS-TNEF-Correlator: Thread-Topic: Website priase and Link suggestions for FreeBSD Thread-Index: AciY4LSRDxgW7jqITLaMQRVhBiM3ag== X-Priority: 1 Priority: Urgent Importance: high From: "Alicia Beuke" To: X-OriginalArrivalTime: 07 Apr 2008 18:53:45.0589 (UTC) FILETIME=[B4F1A250:01C898E0] X-Outbound-IP: 209.182.185.10 X-Env-From: abeuke@lancope.com X-Virus-Status: Scanned by VirusSMART (c) Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: quoted-printable X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Cc: Subject: Website priase and Link suggestions for FreeBSD X-BeenThere: freebsd-doc@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Documentation project List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 07 Apr 2008 19:09:43 -0000 To Whom It May Concern: =20 While browsing security appliances websites, I came across your page about hardware vendors. After spending some time browsing your site, I was highly impressed with the breadth and depth of high quality of information and saw it was a really great resource for network and security professionals. =20 Our company, Lancope.com, works in the Network Behavior Analysis industry. Our product, StealthWatch (a hardware appliance product), leverages NetFlow or sFlow to gather flow data that helps provide company's end-to-end network visibility for both network and security administrators. =20 Since you list links to other companies similar to ours, I am curious if you feel that a link to Lancope.com would be of help to your website visitors. If so, I was wondering if you felt that your visitors would benefit from adding a link to the Lancope.com website on this page: http://www.freebsd.org/commercial/hardware.html =20 =20 If so, please let us know if it would help to have suggestions for how the link could be displayed and if there is anything that we can do to help. =20 Thank you for the consideration! =20 Kind Regards, =20 Alicia Beuke www.lancope.com =20 =20 Alicia Beuke =20 Marketing Specialist=20 Lancope, Inc. | 3650 Brookside Pkwy | Suite 400 | Alpharetta, GA | 30022 abeuke@lancope.com | O: 770-225-3128 =20 Lancope(r) Optimizing Security and Network Operations(tm)=20 =20 StealthWatch(tm) - the most widely used Network Behavior Analysis solution STREAMLINE security and network operations, REDUCE time and resources, and ELIMINATE cost and complexity www.lancope.com =20 =20 From owner-freebsd-doc@FreeBSD.ORG Tue Apr 8 02:50:04 2008 Return-Path: Delivered-To: freebsd-doc@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 667B81065673 for ; Tue, 8 Apr 2008 02:50:04 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 556F38FC29 for ; Tue, 8 Apr 2008 02:50:04 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (gnats@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.2/8.14.2) with ESMTP id m382o4el030634 for ; Tue, 8 Apr 2008 02:50:04 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.2/8.14.1/Submit) id m382o45J030633; Tue, 8 Apr 2008 02:50:04 GMT (envelope-from gnats) Date: Tue, 8 Apr 2008 02:50:04 GMT Message-Id: <200804080250.m382o45J030633@freefall.freebsd.org> To: freebsd-doc@FreeBSD.org From: "Ben Kaduk" Cc: Subject: Re: docs/122470: [patch] exit status on fetch(1) manual page X-BeenThere: freebsd-doc@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Ben Kaduk List-Id: Documentation project List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 08 Apr 2008 02:50:04 -0000 The following reply was made to PR docs/122470; it has been noted by GNATS. From: "Ben Kaduk" To: "Jaakko Heinonen" Cc: FreeBSD-gnats-submit@freebsd.org Subject: Re: docs/122470: [patch] exit status on fetch(1) manual page Date: Mon, 7 Apr 2008 22:24:12 -0400 Hello, On 4/5/08, Jaakko Heinonen wrote: [snip] > fetch(1) manual page states that fetch(1) exits with status zero or one: > > EXIT STATUS > The fetch command returns zero on success, or one on failure. If multi- > ple URLs are listed on the command line, fetch will attempt to retrieve > each one of them in turn, and will return zero only if they were all suc- > cessfully retrieved. > > However it's possible that it exits with status other than zero or one: > > $ fetch -h foo > usage: fetch [-146AFMPRUadlmnpqrsv] [-N netrc] [-o outputfile] > [-S bytes] [-B bytes] [-T seconds] [-w seconds] > [-h host -f file [-c dir] | URL ...] > $ echo $? > 64 > It seems that fetch(1) can also return EX_USAGE when it is not invoked as it is expecting; this macro is defined to be 64 in /usr/include/sysexits.h We might as well change the man page to say ``EX_USAGE is returned when .Nm is not invoked properly.'' -Ben Kaduk From owner-freebsd-doc@FreeBSD.ORG Tue Apr 8 14:37:19 2008 Return-Path: Delivered-To: freebsd-doc@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id CAEA31065678 for ; Tue, 8 Apr 2008 14:37:19 +0000 (UTC) (envelope-from federicogalvezdurand@yahoo.com) Received: from web58006.mail.re3.yahoo.com (web58006.mail.re3.yahoo.com [68.142.236.114]) by mx1.freebsd.org (Postfix) with SMTP id 42E918FC32 for ; Tue, 8 Apr 2008 14:37:19 +0000 (UTC) (envelope-from federicogalvezdurand@yahoo.com) Received: (qmail 26305 invoked by uid 60001); 8 Apr 2008 14:10:37 -0000 DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com; h=X-YMail-OSG:Received:Date:From:Subject:To:In-Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:Message-ID; b=nP7NMNaKtkrxKLOgliCpE5JepnvbZ66eKbuImyJ4TZIwBYRGX/LF5F5nW8V+RzLigVBS/SPQ8vtjt+WQn7QVgfxUpeyS9CeP0JArmpojs1m0w+jZp/OCrN1Fd/40Wh1NiCrTpSL86A5/BFtg+d/CFA1CaJIcbufph34KS2a49Rw=; X-YMail-OSG: AfW11FQVM1krs94JNZQLPK9yZ7jLmqs.gPbYWz9O Received: from [83.76.200.90] by web58006.mail.re3.yahoo.com via HTTP; Tue, 08 Apr 2008 07:10:36 PDT Date: Tue, 8 Apr 2008 07:10:36 -0700 (PDT) From: Federico Galvez-Durand To: FreeBSD-gnats-submit@FreeBSD.org, freebsd-doc@FreeBSD.org In-Reply-To: <200803241540.m2OFe3Qq016618@freefall.freebsd.org> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="0-968980677-1207663836=:26150" Content-Transfer-Encoding: 8bit Message-ID: <539168.26150.qm@web58006.mail.re3.yahoo.com> Cc: Subject: Re: docs/122052: minor update on handbook section 20.7.1 X-BeenThere: freebsd-doc@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Documentation project List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 08 Apr 2008 14:37:19 -0000 --0-968980677-1207663836=:26150 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: 8bit Content-Id: Content-Disposition: inline Well, now the minor update is not that minor. Find attached a patch file. This patch -> deprecates: handbook/vinum-object-naming.html handbook/vinum-access-bottlenecks.html handbook/vinum/vinum-concat.png handbook/vinum/vinum-raid10-vol.png handbook/vinum/vinum-simple-vol.png handbook/vinum/vinum-striped.png handbook/vinum/vinum-mirrored-vol.png handbook/vinum/vinum-raid5-org.png handbook/vinum/vinum-striped-vol.png creates: handbook/vinum-disk-performance-issues.html handbook.new/vinum/vinum-concat.png handbook.new/vinum/vinum-raid01.png handbook.new/vinum/vinum-raid10.png handbook.new/vinum/vinum-simple.png handbook.new/vinum/vinum-raid0.png handbook.new/vinum/vinum-raid1.png handbook.new/vinum/vinum-raid5.png updates: all remaining handbook/vinum-*.html handbook/raid.html handbook/virtualization.html. I think I cannot attach the new PNG files here. Please, advise how to submit them. . ____________________________________________________________________________________ You rock. That's why Blockbuster's offering you one month of Blockbuster Total Access, No Cost. http://tc.deals.yahoo.com/tc/blockbuster/text5.com --0-968980677-1207663836=:26150 Content-Type: text/plain; name="patch01.txt" Content-Description: 2837643882-patch01.txt Content-Disposition: inline; filename="patch01.txt" diff -r -u handbook.orig/docbook.css handbook/docbook.css --- handbook.orig/docbook.css 2008-03-22 05:33:04.000000000 +0100 +++ handbook/docbook.css 2008-04-05 15:28:57.000000000 +0200 @@ -129,6 +129,26 @@ color: #000000; } +TABLE.CLASSTABLE { + border-collapse: collapse; + border-top: 2px solid gray; + border-bottom: 2px solid gray; +} + +TABLE.CLASSTABLE TH { + border-top: 2px solid gray; + border-right: 1px solid gray; + border-left: 1px solid gray; + border-bottom: 2px solid gray; +} + +TABLE.CLASSTABLE TD { + border-top: 1px solid gray; + border-right: 1px solid gray; + border-left: 1px solid gray; + border-bottom: 1px solid gray; +} + .FILENAME { color: #007a00; } diff -r -u handbook.orig/vinum-config.html handbook/vinum-config.html --- handbook.orig/vinum-config.html 2008-03-22 05:43:54.000000000 +0100 +++ handbook/vinum-config.html 2008-04-08 14:56:10.000000000 +0200 @@ -7,8 +7,8 @@ - - + + @@ -22,7 +22,7 @@ -Prev Chapter 20 The Vinum Volume Manager
-

20.8 Configuring -Vinum

+

20.6 Configuring Vinum

The GENERIC kernel does not contain Vinum. It is possible to build a special kernel which includes Vinum, but this is not recommended. The standard way to start Vinum is as a kernel module (kld). You do -not even need to use kldload(8) for Vinum: +not even need to use + +kldload(8) +for Vinum: when you start gvinum(8), it checks +href="http://www.FreeBSD.org/cgi/man.cgi?query=gvinum&sektion=8"> +gvinum(8), +it checks whether the module has been loaded, and if it is not, it loads it automatically.

-

20.8.1 Startup

+

20.6.1 Preparing a disk

+

Vinum needs a + +bsdlabel(8), +on your disk. +

Assuming +/dev/ad1 +is the device in use and your Vinum Volume will use the whole disk, it is advisable to initialize the device with a single Slice, using + +fdisk(8). The following command creates +a single Slice +s1 +over the whole disk +/dev/ad1. +

+
+#fdisk -vI ad1
+
+ +

After creating +the disk Slice, it can be labeled: +

+
+#bsdlabel -w ad1s1
+
+ +

The bsdlabel utility can not write an adequate label for Vinum automatically, you need to edit the standard label:

+
+#bsdlabel -e ad1s1
+
+

Will show you something similar to this:

+
+# /dev/ad1s1:
+8 partitions:
+#        size   offset    fstype   [fsize bsize bps/cpg]
+  a:  1048241       16    unused        0     0     0                    
+  c:  1048257        0    unused        0     0         # "raw" part, don't edit
+
+ +

You need to edit the partitions. Once this disk is not bootable (it could be, see section +Section 20.7), you could rename partition +a +to partition +h +and replace +fstype +unused +with +vinum. +The fields fsize bsize bps/cpg have no meaning for +fstype vinum. +

+
+# /dev/ad1s1:
+8 partitions:
+#        size   offset    fstype   [fsize bsize bps/cpg]
+  c:  1048257        0    unused        0     0         # "raw" part, don't edit
+  h:  1048241       16     vinum                    
+
+
+ +
+

20.6.2 Configuration File

+

This file can be placed anywhere in your system. After executing the instructions in this file + +gvinum(8) + will not use it anymore. Everything is stored in a database. But you should keep this file in a safe place, you may need it in case of a Volume crash. +

+

The following configuration creates a Volume named +Simple +containing a drive named +DiskB +based on the device +/dev/ad1s1. The +plex +organization is +concat +and contains only one +subdisk (sd) +. +

+
+drive diskB device /dev/ad1s1h
+volume Simple 
+	plex org concat
+	sd drive diskB
+
+
+ +
+

20.6.3 Creating a Volume

+ +

Once you have prepared your disk and created a configuration file, you can use + +gvinum(8) +to create a Volume. +

+
+#gvinum create Simple
+1 drive:
+D diskB                 State: up	/dev/ad1s1h	A: 0/511 MB (0%)
+
+1 volume:
+V Simple                State: up	Plexes:       1	Size:        511 MB
+
+1 plex:
+P Simple.p0           C State: up	Subdisks:     1	Size:        511 MB
+
+1 subdisk:
+S Simple.p0.s0          State: up	D: diskB        Size:        511 MB
+
+ + +

At this point, a new entry has been created for your Volume:

+ +
+#ls -l /dev/gvinum
+crw-r-----  1 root  operator    0,  89 Mar 26 17:17 /dev/gvinum/Simple
+
+/dev/gvinum/plex:
+total 0
+crw-r-----  1 root  operator    0,  86 Mar 26 17:17 Simple.p0
+
+/dev/gvinum/sd:
+total 0
+crw-r-----  1 root  operator    0,  83 Mar 26 17:17 Simple.p0.s0
+
+ +
+ +
+

20.6.4 Starting a Volume

+ +

After creating a Volume you need to allow the system access to the objects:

+
+#gvinum start Simple
+
+ +

The Starting process can be slow, depending on the size of the subdisk or subdisks contained in your plex. Enter gvinum and use the option +l +to see whether the status of all your subdisks is already +"up" +. +

+

A message is printed by gvinum for each subdisk's Start Process completed.

+
+ +
+

20.6.5 Creating a File System

+ +

After having created a Volume, you need to create a file system using + +newfs(8) +

-

Vinum stores configuration information on the disk slices in essentially the same form +

+#newfs /dev/gvinum/Simple
+
+

If no errors are reported, you should check the file system:

+
+#fsck -t ufs /dev/gvinum/Simple
+
+

If no errors are reported, you can mount the file system:

+
+#mount /dev/gvinum/Simple /mnt
+
+ +

At this point, if everything seems to be right, it is desirable to reboot your machine and perform the following test:

+
+#fsck -t ufs /dev/gvinum/Simple
+
+

If no errors are reported, you can mount the file system:

+
+#mount /dev/gvinum/Simple /mnt
+
+

If everything looks fine now, then you have succeeded creating a Vinum Volume.

+ +
+ +
+

20.6.6 Mounting a Volume Automatically

+ +

In order to have your Volumes mounted automatically you need two things:

+
    +
  • +Set + geom_vinum_load="YES" +in +/boot/loader.conf. +
  • +
  • +Add an entry in + /etc/fstab +for your Volume (e.g. Simple). The mountpoint in this example is the directory + /space . See + +fstab(5) +and + +mount(8) +for details. +
  • +
    +#
    +# Device                Mountpoint  FStype  Options     Dump    Pass#
    +#
    +[...]
    +/dev/gvinum/Simple      /space      ufs     rw          2       2
    +
    + +
+

Your Volumes will be checked by + +fsck(8) +at boot time if you specify non zero values for + Dump +and + Pass +fields.

+ +
+ +
+

20.6.7 Troubleshooting

+ +
+

20.6.7.1 Creating a File System

+

The process of Starting a Volume may take long; you must be sure this process has been completed before creating a file system. At the moment this manual is written, + +newfs(8) +will not complain if you try to create a file system and the Starting process is still in progress. Even running + +fsck(8) +on your new file system may tell you everything is OK. But most probably you will not be able to use the Volume later on, after rebooting your machine. +

+ +

In case your Volume does not pass the checkup, you may try to repeat the process one more time:

+
+#gvinum start Simple
+#newfs /dev/gvinum/Simple
+#fsck -t ufs /dev/gvinum/Simple
+
+

If everything looks fine, then reboot your machine.

+
+#shutdown -r now
+
+

Then execute again:

+
+#fsck -t ufs /dev/gvinum/Simple
+#mount /dev/gvinum/Simple /mnt
+
+

It should work without problem.

+ +
+ +
+

20.6.8 Miscellaneous Notes

+ +

Vinum stores configuration information on disk slices in essentially the same form as in the configuration files. When reading from the configuration database, Vinum recognizes a number of keywords which are not allowed in the configuration files. For example, a disk configuration might contain the following text:

@@ -86,18 +343,11 @@ to identify drives correctly even if they have been assigned different UNIX® drive IDs.

-
-

20.8.1.1 Automatic -Startup

- -
-
-

Note: This information only relates to the historic Vinum implementation. Gvinum always features an automatic -startup once the kernel module is loaded.

-
+
+

20.6.8 Differences for FreeBSD 4.X

+

In order to start Vinum automatically when you boot the system, ensure that you have the following line in your /etc/rc.conf:

@@ -119,8 +369,7 @@ does not matter which drive is read. After a crash, however, Vinum must determine which drive was updated most recently and read the configuration from this drive. It then updates the configuration if necessary from progressively older drives.

-
-
+
diff -r -u handbook.orig/vinum-data-integrity.html handbook/vinum-data-integrity.html --- handbook.orig/vinum-data-integrity.html 2008-03-22 05:43:54.000000000 +0100 +++ handbook/vinum-data-integrity.html 2008-04-08 13:00:38.000000000 +0200 @@ -7,7 +7,7 @@ - + @@ -22,7 +22,7 @@ -Prev Chapter 20 The Vinum Volume Manager
-

20.4 Data -Integrity

+

20.4 Data Integrity

-

The final problem with current disks is that they are unreliable. Although disk drive -reliability has increased tremendously over the last few years, they are still the most -likely core component of a server to fail. When they do, the results can be catastrophic: -replacing a failed disk drive and restoring data to it can take days.

- -

The traditional way to approach this problem has been mirroring, keeping two copies of the data on different -physical hardware. Since the advent of the RAID -levels, this technique has also been called RAID level -1 or RAID-1. Any write to the volume writes -to both locations; a read can be satisfied from either, so if one drive fails, the data -is still available on the other drive.

+

Although disk drive reliability has increased tremendously over the last few years, they are still the most likely core component of a server to fail. When they do, the results can be catastrophic: replacing a failed disk drive and restoring data to it can take a long time.

-

Mirroring has two problems:

+

The traditional way to approach this problem has been mirroring, keeping two copies of the data on different physical hardware. Since the advent of the RAID levels, this technique has also been called RAID level 1 or RAID-1.

+ +

An alternative solution is using an +error-correcting code. +This strategy is implemented in the RAID levels 2, 3, 4, 5 and 6. Of these, RAID-5 is the most interesting; for each data block a simple +parity check code is generated and stored as part of each stripe. +For arrays with large number of disks, the RAID-5 might not provide enough protection; in this case more complex error-correcting codes (e.g. Reed-Solomon) may provide better results. +

+

RAID levels can be nested to create other RAID configurations with improved resilience. Of these, RAID 0+1 and RAID 1+0 are explained here. Under certain conditions, these arrays can work in degraded mode with up to N/2 broken disks. However, having two Disks broken can stop the arrays, if they fail in the right positions. In both cases, having only one disk down is fully tolerated.

+

+Therefore, whenever you think about a failure in a RAID-0+1 or RAID-1+0 you are considering either the probability of having two disks failing at the same time or not having replaced the first broken disk before the second fails. On top of that, the second disk needs to fail in a very specific position inside the array. +

+

+In modern storage facilities, critical mission arrays are implemented using HOT PLUG technology, allowing a broken Disk to be replaced without having to stop the array. The probability of having a second Disk failing before having replaced the first Disk broken is mathematically clear. However the Possibility of such an event must be estimated first and it depends mainly on security policies and stock management paradigms beyond the scope of this discussion.

+

Therefore, a more interesting discussion about RAID-0+1 and RAID-1+0 reliability should be based on the Mean Time Before Failure (MTBF) of the devices in use and on other variables provided by the disk drive constructor and the storage facility administration. +

+

For the sake of simplicity, all disks (N) in a RAID are considered to have the same capacity (CAP) and R/W characteristics. This is not mandatory in all cases.

+

In the Figures, Data stored in a RAID is represented by (X;Y;Z). Data striped along an array of disks is represented by (X0,X1,X2...; Y0,Y1,Y2...; Z0,Z1,Z2...). +

+ +
+

20.4.1 RAID-1: Mirror

+

In a mirrored array, any write to the volume writes to both disks; a read can be satisfied from either disk, so if one fails, the data is still available on the other one.

+ +
+

+

Figure 20-3. RAID-1 Organization

+
  • -

    The price. It requires twice as much disk storage as a non-redundant solution.

    +

    The total storage capacity is CAP*N/2.

  • -

    The performance impact. Writes must be performed to both drives, so they take up twice -the bandwidth of a non-mirrored volume. Reads do not suffer from a performance penalty: -it even looks as if they are faster.

    +

    The Write performance is impacted because all data must be stored in both drives, so it takes up twice the bandwidth of a non-mirrored volume. Reads do not suffer from a performance penalty. +

-

An alternative solution is parity, implemented in the RAID levels 2, 3, 4 and 5. Of these, RAID-5 is the most interesting. As implemented in Vinum, it is -a variant on a striped organization which dedicates one block of each stripe to parity of -the other blocks. As implemented by Vinum, a RAID-5 -plex is similar to a striped plex, except that it implements RAID-5 by including a parity block in each stripe. As required -by RAID-5, the location of this parity block changes -from one stripe to the next. The numbers in the data blocks indicate the relative block -numbers.

+
+ +
+

20.4.2 RAID-5

-

+

As implemented in Vinum, it is a variant on a plex striped organization which dedicates one block of each stripe to parity of the other blocks (Px,Py,Pz). +As required by RAID-5, the location of this parity block changes from one stripe to the next. The numbers in the data blocks indicate the relative block numbers (X0,X1,Px; Y0,Py,Y1; Pz,Z0,Z1;...).

-

Figure 20-3. RAID-5 Organization

+

+

Figure 20-4. RAID-5 Organization

+
+ +
    +
  • +The total capacity of the array is equal to (N-1)*CAP. +

  • +
  • +At least 3 disks are necessary. +

  • +
  • +Read access is similar to that of striped organizations but write access is significantly slower. In order to update (write) one striped block you need to read the other striped blocks and compute the parity block again before writing the new block and the new parity. This effect can be mitigated by using systems with large R/W cache memory, then you do not need to read the other blocks again in order to compute the new parity. +

  • + +
  • +If one drive fails, the array can continue to operate in degraded mode: a read from one of the remaining accessible drives continues normally, but a read from the failed drive is recalculated from the corresponding block on all the remaining drives. +

  • +
-

-
-
-

Compared to mirroring, RAID-5 has the advantage of -requiring significantly less storage space. Read access is similar to that of striped -organizations, but write access is significantly slower, approximately 25% of the read -performance. If one drive fails, the array can continue to operate in degraded mode: a -read from one of the remaining accessible drives continues normally, but a read from the -failed drive is recalculated from the corresponding block from all the remaining -drives.

+ +
+

20.4.3 RAID-0+1

+ +

+In Vinum, a RAID-0+1 can be straightforward constructed by concatenating two striped plex. In this array, resilience is improved and more than one disk can fail without compromising the functionality. Performance is degraded when the array is forced to work without the full set of disks. +

+
+

+

Figure 20-5. RAID-0+1 Organization

+
    +
  • +The total storage capacity is CAP*N/2. +

  • +
  • +At least 4 disks are necessary. +

  • +
  • +This array will stop working when one disk fails in each of the mirrors (e.g. DiskB and DiskF) but it could work in degraded mode with N/2 disks down as long as they are all in the same mirror (e.g. DiskE, DiskF and DiskG). + +

  • +
+ +
+ +
+

20.4.4 RAID-1+0

+ +

+In Vinum, a RAID-1+0 can not be constructed by a simple manipulation of plexes. You need to construct the mirrors (e.g., m0, m1, m3...) first and then use these mirrors into a striped plex. +In this array, resilience is improved and more than one disk can fail without compromising the functionality. Performance is degraded when the array is forced to work without the full set of disks. +

+ +
+

+

Figure 20-6. RAID-1+0 Organization

+
+ +
    +
  • +The total storage capacity is CAP*N/2. +

  • +
  • +At least 4 disks are necessary. +

  • +
  • +This array will stop working when two disks fail in the same mirrors (e.g. DiskB and DiskC) but it could work in degraded mode with N/2 disks down as long as they are not in the same mirror (e.g. DiskB, DiskE and DiskF). + +

  • +
+
+ +
-

20.6 Some -Examples

- -

Vinum maintains a configuration -database which describes the objects known to an individual system. Initially, -the user creates the configuration database from one or more configuration files with the -aid of the gvinum(8) utility -program. Vinum stores a copy of its configuration database on each disk slice (which -Vinum calls a device) under its -control. This database is updated on each state change, so that a restart accurately -restores the state of each Vinum object.

- +

20.8 Vinum Examples

+

+All Disks in the following examples are identical in capacity (512 Mb) and R/W characteristics. However, the size reported by +gvinum(8) + is 511 Mb. This is normal in a real case, when the Disk is not exactly 536870912 bytes and some space (approx. 8 KB) is reserved by the +bsdlabel(8). +The size used for the stripes is 256k in all examples. +

+

+For the sake of simplicity, only three stripes out of many are represented in the Figures. +

-

20.6.1 The Configuration File

+

20.8.1 A Simple Volume

The configuration file describes individual Vinum objects. The definition of a simple volume might be:

-    drive a device /dev/da3h
-    volume myvol
-      plex org concat
-        sd length 512m drive a
+#cat simple.conf
+drive diskB device /dev/ad1s1h
+volume Simple 
+	plex org concat
+	sd drive diskB
 

This file describes four Vinum objects:

@@ -67,79 +66,65 @@

The drive line describes a disk partition (drive) and its location relative to the underlying hardware. It is given the symbolic name a. This separation of the symbolic names +class="emphasis">diskB. This separation of the symbolic names from the device names allows disks to be moved from one location to another without confusion.

  • The volume line describes a -volume. The only required attribute is the name, in this case myvol.

    +Vinum Volume. The only required attribute is the name, in this case Simple.

  • -

    The plex line defines a plex. +

    The plex line defines a Vinum Plex. The only required parameter is the organization, in this case concat. No name is necessary: the system automatically generates a name from the volume name by adding the suffix .px, -where x is the number of the plex +class="EMPHASIS">.p${x}, +where ${x} is the number of the plex in the volume. Thus this plex will be called myvol.p0.

    +class="EMPHASIS">Simple.p0.

  • -

    The sd line describes a subdisk. +

    The sd line describes a Vinum subdisk. The minimum specifications are the name of a drive on which to store it, and the length of the subdisk. As with plexes, no name is necessary: the system automatically assigns names derived from the plex name by adding the suffix .sx, -where x is the number of the +class="EMPHASIS">.s${x}, +where ${x} is the number of the subdisk in the plex. Thus Vinum gives this subdisk the name myvol.p0.s0.

    +class="EMPHASIS">Simple.p0.s0.

  • -

    After processing this file, gvinum(8) produces the -following output:

    - -
    -      # gvinum -> create config1
    -      Configuration summary
    -      Drives:         1 (4 configured)
    -      Volumes:        1 (4 configured)
    -      Plexes:         1 (8 configured)
    -      Subdisks:       1 (16 configured)
    -     
    -    D a                     State: up       Device /dev/da3h        Avail: 2061/2573 MB (80%)
    -    
    -    V myvol                 State: up       Plexes:       1 Size:        512 MB
    -    
    -    P myvol.p0            C State: up       Subdisks:     1 Size:        512 MB
    -    
    -    S myvol.p0.s0           State: up       PO:        0  B Size:        512 MB
    -
    - -

    This output shows the brief listing format of gvinum(8). It is -represented graphically in Figure -20-4.

    +

    After processing this file, +gvinum(8) +produces the following output:

    -

    +
    +# gvinum create simple.conf
    +1 drive:
    +D diskB                 State: up	/dev/ad1s1h	A: 0/511 MB (0%)
    +
    +1 volume:
    +V Simple                State: up	Plexes:       1	Size:        511 MB
    +
    +1 plex:
    +P Simple.p0           C State: up	Subdisks:     1	Size:        511 MB
    +
    +1 subdisk:
    +S Simple.p0.s0          State: up	D: diskB        Size:        511 MB
    +
    -

    Figure 20-4. A Simple Vinum Volume

    -

    +

    +

    Figure 20-4. A Simple Vinum Volume

    -
    -

    This figure, and the ones which follow, represent a volume, which contains the plexes, which in turn contain the subdisks. In this trivial example, the volume contains one plex, and the plex contains one subdisk.

    @@ -147,181 +132,320 @@

    This particular volume has no specific advantage over a conventional disk partition. It contains a single plex, so it is not redundant. The plex contains a single subdisk, so there is no difference in storage allocation from a conventional disk partition. The -following sections illustrate various more interesting configuration methods.

    +following sections illustrate more interesting configuration methods.

    +
    -

    20.6.2 Increased Resilience: -Mirroring

    +

    20.8.2 RAID-1: Mirrored set

    -

    The resilience of a volume can be increased by mirroring. When laying out a mirrored -volume, it is important to ensure that the subdisks of each plex are on different drives, +

    The resilience of a volume can be increased by mirroring +(Section 20.4.1). +When laying out a mirrored volume, it is important to ensure that the subdisks of each plex are on different drives, so that a drive failure will not take down both plexes. The following configuration mirrors a volume:

    -   drive b device /dev/da4h
    -    volume mirror
    -      plex org concat
    -        sd length 512m drive a
    -      plex org concat
    -        sd length 512m drive b
    +#cat mirror.conf
    +drive diskB device /dev/ad1s1h
    +drive diskC device /dev/ad2s1h
    +volume Mirror
    +	plex org concat
    +	sd drive diskB
    +	plex org concat
    +	sd drive diskC
     
    -

    In this example, it was not necessary to specify a definition of drive a again, since Vinum keeps track of all -objects in its configuration database. After processing this definition, the +

    +After processing this definition, the configuration looks like:

    -   Drives:         2 (4 configured)
    -    Volumes:        2 (4 configured)
    -    Plexes:         3 (8 configured)
    -    Subdisks:       3 (16 configured)
    -    
    -    D a                     State: up       Device /dev/da3h        Avail: 1549/2573 MB (60%)
    -    D b                     State: up       Device /dev/da4h        Avail: 2061/2573 MB (80%)
    -
    -    V myvol                 State: up       Plexes:       1 Size:        512 MB
    -    V mirror                State: up       Plexes:       2 Size:        512 MB
    -  
    -    P myvol.p0            C State: up       Subdisks:     1 Size:        512 MB
    -    P mirror.p0           C State: up       Subdisks:     1 Size:        512 MB
    -    P mirror.p1           C State: initializing     Subdisks:     1 Size:        512 MB
    -  
    -    S myvol.p0.s0           State: up       PO:        0  B Size:        512 MB
    -    S mirror.p0.s0          State: up       PO:        0  B Size:        512 MB
    -    S mirror.p1.s0          State: empty    PO:        0  B Size:        512 MB
    +#gvinum create mirror.conf
    +2 drives:
    +D diskC                 State: up	/dev/ad2s1h	A: 0/511 MB (0%)
    +D diskB                 State: up	/dev/ad1s1h	A: 0/511 MB (0%)
    +
    +1 volume:
    +V Mirror                State: up	Plexes:       2	Size:        511 MB
    +
    +2 plexes:
    +P Mirror.p1           C State: up	Subdisks:     1	Size:        511 MB
    +P Mirror.p0           C State: up	Subdisks:     1	Size:        511 MB
    +
    +2 subdisks:
    +S Mirror.p1.s0          State: up	D: diskC        Size:        511 MB
    +S Mirror.p0.s0          State: up	D: diskB        Size:        511 MB
     
    -

    Figure 20-5 shows the structure -graphically.

    - -

    -
    -

    Figure 20-5. A Mirrored Vinum Volume

    - -

    +

    +

    Figure 20-5. A RAID-1 Vinum Volume

    -
    -
    -

    In this example, each plex contains the full 512 MB of address space. As in the -previous example, each plex contains only a single subdisk.

    -

    20.6.3 Optimizing Performance

    +

    20.8.3 RAID-0: Striped set

    -

    The mirrored volume in the previous example is more resistant to failure than an -unmirrored volume, but its performance is less: each write to the volume requires a write -to both drives, using up a greater proportion of the total disk bandwidth. Performance +

    The RAID-1 volume in the previous example is more resistant to failure than a +simple volume, but it has inferior Writing performance because each Write to the volume requires a Write +to both drives, using a greater percentage of the total disk bandwidth. Performance considerations demand a different approach: instead of mirroring, the data is striped -across as many disk drives as possible. The following configuration shows a volume with a -plex striped across four disk drives:

    +(Section 20.3.2) +across as many disk drives as possible. This configuration does not provide data protection against failure. +The following configuration shows a volume with a +plex striped across three disk drives:

    + +
    +#cat striped.conf
    +drive diskB device /dev/ad1s1h
    +drive diskC device /dev/ad2s1h
    +drive diskD device /dev/ad3s1h
    +volume Stripes
    +	plex org striped 256k
    +	sd drive diskB
    +	sd drive diskC
    +	sd drive diskD
    +
    -   drive c device /dev/da5h
    -    drive d device /dev/da6h
    -    volume stripe
    -    plex org striped 512k
    -      sd length 128m drive a
    -      sd length 128m drive b
    -      sd length 128m drive c
    -      sd length 128m drive d
    -
    - -

    As before, it is not necessary to define the drives which are already known to Vinum. -After processing this definition, the configuration looks like:

    - -
    -   Drives:         4 (4 configured)
    -    Volumes:        3 (4 configured)
    -    Plexes:         4 (8 configured)
    -    Subdisks:       7 (16 configured)
    -  
    -    D a                     State: up       Device /dev/da3h        Avail: 1421/2573 MB (55%)
    -    D b                     State: up       Device /dev/da4h        Avail: 1933/2573 MB (75%)
    -    D c                     State: up       Device /dev/da5h        Avail: 2445/2573 MB (95%)
    -    D d                     State: up       Device /dev/da6h        Avail: 2445/2573 MB (95%)
    -  
    -    V myvol                 State: up       Plexes:       1 Size:        512 MB
    -    V mirror                State: up       Plexes:       2 Size:        512 MB
    -    V striped               State: up       Plexes:       1 Size:        512 MB
    -  
    -    P myvol.p0            C State: up       Subdisks:     1 Size:        512 MB
    -    P mirror.p0           C State: up       Subdisks:     1 Size:        512 MB
    -    P mirror.p1           C State: initializing     Subdisks:     1 Size:        512 MB
    -    P striped.p1            State: up       Subdisks:     1 Size:        512 MB
    -  
    -    S myvol.p0.s0           State: up       PO:        0  B Size:        512 MB
    -    S mirror.p0.s0          State: up       PO:        0  B Size:        512 MB
    -    S mirror.p1.s0          State: empty    PO:        0  B Size:        512 MB
    -    S striped.p0.s0         State: up       PO:        0  B Size:        128 MB
    -    S striped.p0.s1         State: up       PO:      512 kB Size:        128 MB
    -    S striped.p0.s2         State: up       PO:     1024 kB Size:        128 MB
    -    S striped.p0.s3         State: up       PO:     1536 kB Size:        128 MB
    +#gvinum create striped.conf
    +3 drives:
    +D diskD                 State: up	/dev/ad3s1h	A: 0/511 MB (0%)
    +D diskC                 State: up	/dev/ad2s1h	A: 0/511 MB (0%)
    +D diskB                 State: up	/dev/ad1s1h	A: 0/511 MB (0%)
    +
    +1 volume:
    +V Stripes               State: up	Plexes:       1	Size:       1534 MB
    +
    +1 plex:
    +P Stripes.p0          S State: up	Subdisks:     3	Size:       1534 MB
    +
    +3 subdisks:
    +S Stripes.p0.s2         State: up	D: diskD        Size:        511 MB
    +S Stripes.p0.s1         State: up	D: diskC        Size:        511 MB
    +S Stripes.p0.s0         State: up	D: diskB        Size:        511 MB
     

    -

    Figure 20-6. A Striped Vinum Volume

    -

    +

    +

    Figure 20.6. A Striped Vinum Volume

    +
    + +
    + +
    +

    20.8.4 RAID-5: Striped set with distributed parity

    +

    RAID-1 resilience can be improved by using a striped array with distributed parity, this configuration is known as RAID-5 +(Section 20.4.2). + The cost of this strategy is the space consumed by the parity data (usually the size of one disk of the array) and slower read/write access. The minimum number of disks required is 3 and the array continues operating, in degraded mode, when one disk fails. +

    + +
    +#cat raid5.conf
    +drive diskB device /dev/ad1s1h
    +drive diskC device /dev/ad2s1h
    +drive diskD device /dev/ad3s1h
    +volume Raid5 
    +	plex org raid5 256k
    +	sd drive diskB
    +	sd drive diskC
    +	sd drive diskD
    +
    + + +
    +#gvinum create raid5.conf
    +3 drives:
    +D diskD                 State: up	/dev/ad3s1h	A: 0/511 MB (0%)
    +D diskC                 State: up	/dev/ad2s1h	A: 0/511 MB (0%)
    +D diskB                 State: up	/dev/ad1s1h	A: 0/511 MB (0%)
    +
    +1 volume:
    +V Raid5                   State: up	Plexes:       1	Size:       1023 MB
    +
    +1 plex:
    +P Raid5.p0             R5 State: up	Subdisks:     3	Size:       1023 MB
    +
    +3 subdisks:
    +S Raid5.p0.s2             State: up	D: diskD        Size:        511 MB
    +S Raid5.p0.s1             State: up	D: diskC        Size:        511 MB
    +S Raid5.p0.s0             State: up	D: diskB        Size:        511 MB
    +
    + +
    +

    +

    Figure 20-7. A RAID-5 Vinum Volume

    -
    -
    -

    This volume is represented in Figure -20-6. The darkness of the stripes indicates the position within the plex address -space: the lightest stripes come first, the darkest last.

    -

    20.6.4 Resilience and -Performance

    +

    20.8.5 RAID 0+1

    -

    With sufficient hardware, it is +

    With sufficient hardware, it is possible to build volumes which show both increased resilience and increased performance compared to standard UNIX® partitions. A typical -configuration file might be:

    +configuration file for a RAID-0+1 +(Section 20.4.3). +might be:

    + +
    +#cat raid01.conf
    +drive diskB device /dev/da0s1h
    +drive diskC device /dev/da1s1h
    +drive diskD device /dev/da2s1h
    +drive diskE device /dev/da3s1h
    +drive diskF device /dev/da4s1h
    +drive diskG device /dev/da5s1h
    +volume RAID01
    +	plex org striped 256k
    +		sd drive diskB
    +		sd drive diskC
    +		sd drive diskD
    +	plex org striped 256k
    +		sd drive diskE
    +		sd drive diskF
    +		sd drive diskG
    +
    -   volume raid10
    -      plex org striped 512k
    -        sd length 102480k drive a
    -        sd length 102480k drive b
    -        sd length 102480k drive c
    -        sd length 102480k drive d
    -        sd length 102480k drive e
    -      plex org striped 512k
    -        sd length 102480k drive c
    -        sd length 102480k drive d
    -        sd length 102480k drive e
    -        sd length 102480k drive a
    -        sd length 102480k drive b
    +# gvinum create raid01.conf
    +6 drives:
    +D diskG                 State: up	/dev/da5s1h	A: 0/511 MB (0%)
    +D diskF                 State: up	/dev/da4s1h	A: 0/511 MB (0%)
    +D diskE                 State: up	/dev/da3s1h	A: 0/511 MB (0%)
    +D diskD                 State: up	/dev/da2s1h	A: 0/511 MB (0%)
    +D diskC                 State: up	/dev/da1s1h	A: 0/511 MB (0%)
    +D diskB                 State: up	/dev/da0s1h	A: 0/511 MB (0%)
    +
    +1 volume:
    +V RAID01               State: up	Plexes:       2	Size:       1535 MB
    +
    +2 plexes:
    +P RAID01.p1          S State: up	Subdisks:     3	Size:       1535 MB
    +P RAID01.p0          S State: up	Subdisks:     3	Size:       1535 MB
    +
    +6 subdisks:
    +S RAID01.p1.s2         State: up	D: diskG        Size:        511 MB
    +S RAID01.p1.s1         State: up	D: diskF        Size:        511 MB
    +S RAID01.p1.s0         State: up	D: diskE        Size:        511 MB
    +S RAID01.p0.s2         State: up	D: diskD        Size:        511 MB
    +S RAID01.p0.s1         State: up	D: diskC        Size:        511 MB
    +S RAID01.p0.s0         State: up	D: diskB        Size:        511 MB
     

    The subdisks of the second plex are offset by two drives from those of the first plex: this helps ensure that writes do not go to the same subdisks even if a transfer goes over two drives.

    -

    Figure 20-7 represents the -structure of this volume.

    - -

    +
    +

    +

    Figure 20-8. A RAID-0+1 Vinum Volume

    +
    -
    -

    Figure 20-7. A Mirrored, Striped Vinum Volume

    +
    -

    -
    -
    +
    +

    20.8.5 RAID 1+0

    + +

    With sufficient hardware, it is possible to build volumes which show both increased resilience and increased performance +compared to standard UNIX® partitions in more than one way. The RAID-1+0 configuration differs from RAID-0+1 in the way mirrors and stripes are used. A typical configuration file for a RAID-1+0 +(Section 20.4.4). +might be:

    + +
    +#cat raid10_ph1.conf
    +drive diskB device /dev/da0s1h
    +drive diskC device /dev/da1s1h
    +drive diskD device /dev/da2s1h
    +drive diskE device /dev/da3s1h
    +drive diskF device /dev/da4s1h
    +drive diskG device /dev/da5s1h
    +volume m0
    +	plex org concat
    +		sd drive diskB
    +	plex org concat
    +		sd drive diskC
    +volume m1
    +	plex org concat
    +		sd drive diskD
    +	plex org concat
    +		sd drive diskE
    +volume m2
    +	plex org concat
    +		sd drive diskF
    +	plex org concat
    +		sd drive diskG
    +
    +#cat raid10_ph2.conf
    +drive dm0 device /dev/gvinum/m0
    +drive dm1 device /dev/gvinum/m1
    +drive dm2 device /dev/gvinum/m2
    +
    +volume RAID10
    +	plex org striped 256k
    +		sd drive dm0
    +		sd drive dm1
    +		sd drive dm2
    +
    + +
    +#gvinum create raid10_ph1.conf
    +#gvinum create raid10_ph2.conf
    +
    + +
    +# gvinum list
    +9 drives:
    +D dm2                   State: up	/dev/gvinum/sd/m2.p0.s0	A: 0/511 MB (0%)
    +D dm1                   State: up	/dev/gvinum/sd/m1.p0.s0	A: 0/511 MB (0%)
    +D dm0                   State: up	/dev/gvinum/sd/m0.p0.s0	A: 0/511 MB (0%)
    +D diskG                 State: up	/dev/da5s1h	A: 0/511 MB (0%)
    +D diskF                 State: up	/dev/da4s1h	A: 0/511 MB (0%)
    +D diskE                 State: up	/dev/da3s1h	A: 0/511 MB (0%)
    +D diskD                 State: up	/dev/da2s1h	A: 0/511 MB (0%)
    +D diskC                 State: up	/dev/da1s1h	A: 0/511 MB (0%)
    +D diskB                 State: up	/dev/da0s1h	A: 0/511 MB (0%)
    +
    +4 volumes:
    +V RAID10                State: up	Plexes:       1	Size:       1534 MB
    +V m2                    State: up	Plexes:       2	Size:        511 MB
    +V m1                    State: up	Plexes:       2	Size:        511 MB
    +V m0                    State: up	Plexes:       2	Size:        511 MB
    +
    +7 plexes:
    +P RAID10.p0           S State: up	Subdisks:     3	Size:       1534 MB
    +P m2.p1               C State: up	Subdisks:     1	Size:        511 MB
    +P m2.p0               C State: up	Subdisks:     1	Size:        511 MB
    +P m1.p1               C State: up	Subdisks:     1	Size:        511 MB
    +P m1.p0               C State: up	Subdisks:     1	Size:        511 MB
    +P m0.p1               C State: up	Subdisks:     1	Size:        511 MB
    +P m0.p0               C State: up	Subdisks:     1	Size:        511 MB
    +
    +9 subdisks:
    +S RAID10.p0.s2          State: up	D: dm2          Size:        511 MB
    +S RAID10.p0.s1          State: up	D: dm1          Size:        511 MB
    +S RAID10.p0.s0          State: up	D: dm0          Size:        511 MB
    +S m2.p1.s0              State: up	D: diskG        Size:        511 MB
    +S m2.p0.s0              State: up	D: diskF        Size:        511 MB
    +S m1.p1.s0              State: up	D: diskE        Size:        511 MB
    +S m1.p0.s0              State: up	D: diskD        Size:        511 MB
    +S m0.p1.s0              State: up	D: diskC        Size:        511 MB
    +S m0.p0.s0              State: up	D: diskB        Size:        511 MB
    +
    + +
    +

    +

    Figure 20-9. A RAID-1+0 Volume

    +
    diff -r -u handbook.orig/vinum-intro.html handbook/vinum-intro.html --- handbook.orig/vinum-intro.html 2008-03-22 05:43:54.000000000 +0100 +++ handbook/vinum-intro.html 2008-04-08 14:23:40.000000000 +0200 @@ -3,12 +3,12 @@ -Disks Are Too Small +Introduction - + @@ -25,7 +25,7 @@ Prev Chapter 20 The Vinum Volume Manager -Next @@ -34,14 +34,24 @@
    -

    20.2 Disks Are Too -Small

    +

    20.2 Introduction

    + +

    +Since computers begun to be used as data storage devices the issue of ensuring a safe operation has been studied. +

    +

    +Different strategies have been developed, one of the most interesting is the Redundant Arrays of Inexpensive Disks (RAID). +The term RAID was first defined by David A. Patterson, Garth A. Gibson and Randy Katz at the University of California, Berkeley in 1987. They studied the possibility of using two or more drives to appear as a single device to the host system and published a paper: "A Case for Redundant Arrays of Inexpensive Disks (RAID)" in June 1988 at the SIGMOD conference. However, the idea of using redundant disk arrays was first patented by Norman Ken Ouchi at IBM. This patent was awarded in 1978 (U.S. patent 4,092,732) titled "System for recovering data stored in failed memory unit." The claims for this patent describe what would later be named RAID-5 with full stripe writes. This 1978 patent also acknowledges that disk mirroring or duplexing (RAID-1) and protection with dedicated parity (RAID-4) were prior art at the time the patent was deposited. +

    +

    +VINUM is a Volume Manager and can be understood as a Software capable of implementing RAID-0, RAID-1 and RAID-5 specifications. Nowadays, hardware RAID-Controllers are very popular and some of them have significant better performance than a similar Software RAID approach. Nevertheless, a Software Volume Manager can provide more flexibility and can also be used in conjunction with a hardware controller. +

    +

    +Since FreeBSD RELEASE 5.0, VINUM has been integrated under the GEOM framework +(Chapter 19), +which also provides an alternative way of implementing RAID-0 and RAID-1. +

    -

    Disks are getting bigger, but so are data storage requirements. Often you will find -you want a file system that is bigger than the disks you have available. Admittedly, this -problem is not as acute as it was ten years ago, but it still exists. Some systems have -solved this by creating an abstract device which stores its data on a number of -disks.

    diff -r -u handbook.orig/vinum-objects.html handbook/vinum-objects.html --- handbook.orig/vinum-objects.html 2008-03-22 05:43:54.000000000 +0100 +++ handbook/vinum-objects.html 2008-04-08 14:34:32.000000000 +0200 @@ -8,7 +8,7 @@ - + @@ -25,7 +25,7 @@ Prev Chapter 20 The Vinum Volume Manager -Next @@ -36,15 +36,14 @@

    20.5 Vinum Objects

    -

    In order to address these problems, Vinum implements a four-level hierarchy of -objects:

    +

    Vinum implements a four-level hierarchy of objects:

    • The most visible object is the virtual disk, called a volume. Volumes have essentially the same properties as a UNIX® disk drive, though there are some minor -differences. They have no size limitations.

      +differences. Their size is not limited by the size of an individual drive.

    • @@ -103,31 +102,34 @@

      20.5.3 Performance Issues

      -

      Vinum implements both concatenation and striping at the plex level:

      +

      Vinum implements Concatenation, Striping and RAID-5 at the plex level:

      • -

        A concatenated plex uses the +

        A Concatenated plex uses the address space of each subdisk in turn.

      • -

        A striped plex stripes the data +

        A Striped plex stripes the data across each subdisk. The subdisks must all have the same size, and there must be at least two subdisks in order to distinguish it from a concatenated plex.

      • +
      • +Like a striped plex, a RAID-5 plex stripes the data across each subdisk. The subdisks +must all have the same size, and there must be at least three subdisks, otherwise mirroring would be more efficient. +
      -

      20.5.4 Which Plex -Organization?

      +

      20.5.4 Which Plex Organization?

      -

      The version of Vinum supplied with FreeBSD 7.0 implements two kinds of plex:

      +

      The version of Vinum supplied with FreeBSD 7.0 implements three kinds of plex:

      • -

        Concatenated plexes are the most flexible: they can contain any number of subdisks, +

        Concatenated plexes are the most flexible: they can contain any number of subdisks, and the subdisks may be of different length. The plex may be extended by adding additional subdisks. They require less CPU time than striped plexes, though the difference in CPU overhead @@ -136,29 +138,30 @@

      • -

        The greatest advantage of striped (RAID-0) plexes +

        The greatest advantage of Striped (RAID-0) plexes is that they reduce hot spots: by choosing an optimum sized stripe (about 256 kB), you can even out the load on the component drives. The disadvantages of this approach are (fractionally) more complex code and restrictions on subdisks: they must be all the same size, and extending a plex by adding new subdisks is so complicated that Vinum currently does not implement it. Vinum imposes an additional, trivial restriction: a striped plex -must have at least two subdisks, since otherwise it is indistinguishable from a +must have at least two subdisks, otherwise it is indistinguishable from a concatenated plex.

      • +
      • +RAID-5 plexes are effectively an extension of striped plexes. Compared to striped +plexes, they offer the advantage of fault tolerance, but the disadvantages of higher +storage cost and significantly higher CPU overhead, particularly for writes. The code +is an order of magnitude more complex than for concatenated and striped plexes. Like +striped plexes, RAID-5 plexes must have equal-sized subdisks and cannot currently be +extended. Vinum enforces a minimum of three subdisks for a RAID-5 plex, since any +smaller number would not make sense +
      -

      Table 20-1 summarizes the advantages -and disadvantages of each plex organization.

      -
      -

      Table 20-1. Vinum Plex Organizations

      +

      Table 20-1. Vinum Plex Organizations: advantages and disadvantages

      - -----+
      @@ -171,7 +174,7 @@ - + @@ -179,18 +182,76 @@ - + + + + + + + + + +
      Plex type
      concatenatedConcatenated 1 yes no
      stripedStriped 2 no yes High performance in combination with highly concurrent access
      RAID-53noyesHighly reliable storage, efficient read access, data update has moderate performance
      +
      + +
      +

      20.5.5 Object Naming

      + +

      Vinum assigns default names to plexes and subdisks, although they +may be overridden. Overriding the default names is not recommended: experience with the +VERITAS volume manager, which allows arbitrary naming of objects, has shown that this +flexibility does not bring a significant advantage, and it can cause confusion.

      + +

      Names may contain any non-blank character, but it is recommended to restrict them to +letters, digits and the underscore characters. The names of volumes, plexes and subdisks +may be up to 64 characters long, and the names of drives may be up to 32 characters +long.

      + +

      Vinum objects are assigned device nodes in the hierarchy /dev/gvinum. All volumes get direct entries there too. +

      + +
        + +
      • +

        The directories /dev/gvinum/plex, and /dev/gvinum/sd contain device nodes for each plex and for +each subdisk, respectively.

        +

        For each Volume created, there will be a /dev/gvinum/My-Volume-Name entry.

        +
      • +
      +
      + +
      +

      20.5.6 Differences for FreeBSD 4.X

      + +

      Vinum objects are assigned device nodes in the hierarchy /dev/vinum. +

      +
        +
      • +

        The control devices /dev/vinum/control and /dev/vinum/controld, used by +gvinum(8) +and the Vinum daemon respectively.

        +
      • + +
      • +

        A directory /dev/vinum/drive with entries for each drive. +These entries are in fact symbolic links to the corresponding disk nodes.

        +
      • + + +
      +
      + diff -r -u handbook.orig/vinum-root.html handbook/vinum-root.html --- handbook.orig/vinum-root.html 2008-03-22 05:43:54.000000000 +0100 +++ handbook/vinum-root.html 2008-04-08 14:28:55.000000000 +0200 @@ -3,12 +3,12 @@ -Using Vinum for the Root Filesystem +Using Vinum for the Root File system - + @@ -25,7 +25,7 @@ Prev Chapter 20 The Vinum Volume Manager -Next @@ -34,110 +34,56 @@
      -

      20.9 Using Vinum for the Root -Filesystem

      +

      20.7 Using Vinum for the Root +File system

      -

      For a machine that has fully-mirrored filesystems using Vinum, it is desirable to also -mirror the root filesystem. Setting up such a configuration is less trivial than -mirroring an arbitrary filesystem because:

      +

      For a machine that has fully-mirrored file systems using Vinum, it is desirable to also +mirror the root file system. Setting up such a configuration is less trivial than +mirroring an arbitrary file system because:

      • -

        The root filesystem must be available very early during the boot process, so the Vinum +

        The root file system must be available very early during the boot process, so the Vinum infrastructure must already be available at this time.

      • -

        The volume containing the root filesystem also contains the system bootstrap and the +

        The volume containing the root file system also contains the system bootstrap and the kernel, which must be read using the host system's native utilities (e. g. the BIOS on PC-class machines) which often cannot be taught about the details of Vinum.

      In the following sections, the term “root volume” is generally used to -describe the Vinum volume that contains the root filesystem. It is probably a good idea +describe the Vinum volume that contains the root file system. It is probably a good idea to use the name "root" for this volume, but this is not technically required in any way. All command examples in the following sections assume this name though.

      -

      20.9.1 Starting up Vinum Early Enough -for the Root Filesystem

      +

      20.7.1 Starting up Vinum Early Enough +for the Root File system

      -

      There are several measures to take for this to happen:

      - -
        -
      • -

        Vinum must be available in the kernel at boot-time. Thus, the method to start Vinum -automatically described in Section -20.8.1.1 is not applicable to accomplish this task, and the start_vinum parameter must actually not be set when the following setup is being arranged. The -first option would be to compile Vinum statically into the kernel, so it is available all -the time, but this is usually not desirable. There is another option as well, to have /boot/loader (Section -12.3.3) load the vinum kernel module early, before starting the kernel. This can be -accomplished by putting the line:

        +

        Vinum must be available in the kernel at boot-time. +Add the following line to your /boot/loader.conf (Section +12.3.3) in order to load the Vinum kernel module early enough.

         geom_vinum_load="YES"
         
        -

        into the file /boot/loader.conf.

        -
      • - -
      • -
        -
        -

        Note: For Gvinum, all -startup is done automatically once the kernel module has been loaded, so the procedure -described above is all that is needed. The following text documents the behaviour of the -historic Vinum system, for the sake of older setups.

        -
        -
        - -

        Vinum must be initialized early since it needs to supply the volume for the root -filesystem. By default, the Vinum kernel part is not looking for drives that might -contain Vinum volume information until the administrator (or one of the startup scripts) -issues a vinum start command.

        - -
        -
        -

        Note: The following paragraphs are outlining the steps needed for FreeBSD 5.X -and above. The setup required for FreeBSD 4.X differs, and is described below in Section 20.9.5.

        -
        -
        - -

        By placing the line:

        - -
        -vinum.autostart="YES"
        -
        - -

        into /boot/loader.conf, Vinum is instructed to automatically -scan all drives for Vinum information as part of the kernel startup.

        - -

        Note that it is not necessary to instruct the kernel where to look for the root -filesystem. /boot/loader looks up the name of the root device -in /etc/fstab, and passes this information on to the kernel. -When it comes to mount the root filesystem, the kernel figures out from the device name -provided which driver to ask to translate this into the internal device ID (major/minor -number).

        -
      • -
      -

      20.9.2 Making a Vinum-based Root +

      20.7.2 Making a Vinum-based Root Volume Accessible to the Bootstrap

      Since the current FreeBSD bootstrap is only 7.5 KB of code, and already has the burden -of reading files (like /boot/loader) from the UFS filesystem, +of reading files (like /boot/loader) from the UFS file system, it is sheer impossible to also teach it about internal Vinum structures so it could parse the Vinum configuration data, and figure out about the elements of a boot volume itself. Thus, some tricks are necessary to provide the bootstrap code with the illusion of a -standard "a" partition that contains the root filesystem.

      +standard "a" partition that contains the root file system.

      For this to be possible at all, the following requirements must be met for the root volume:

      @@ -153,9 +99,9 @@

    Note that it is desirable and possible that there are multiple plexes, each containing -one replica of the root filesystem. The bootstrap process will, however, only use one of +one replica of the root file system. The bootstrap process will, however, only use one of these replica for finding the bootstrap and all the files, until the kernel will -eventually mount the root filesystem itself. Each single subdisk within these plexes will +eventually mount the root file system itself. Each single subdisk within these plexes will then need its own "a" partition illusion, for the respective device to become bootable. It is not strictly needed that each of these faked "a" partitions is located at the same offset within its device, @@ -186,18 +132,18 @@

     # bsdlabel -e devname
    +class="REPLACEABLE">${devname}
     

    for each device that participates in the root volume. devname must be either the name of the disk (like ${devname} must be either the name of the disk (like da0) for disks without a slice (aka. fdisk) table, or the name of the slice (like ad0s1).

    If there is already an "a" partition on the device -(presumably, containing a pre-Vinum root filesystem), it should be renamed to something +(presumably, containing a pre-Vinum root file system), it should be renamed to something else, so it remains accessible (just in case), but will no longer be used by default to -bootstrap the system. Note that active partitions (like a root filesystem currently +bootstrap the system. Note that active partitions (like a root file system currently mounted) cannot be renamed, so this must be executed either when being booted from a “Fixit” medium, or in a two-step process, where (in a mirrored situation) the disk that has not been currently booted is being manipulated first.

    @@ -209,7 +155,7 @@ partition can be taken verbatim from the calculation above. The "fstype" should be 4.2BSD. The "fsize", "bsize", and "cpg" values should best be chosen to match the actual filesystem, +class="LITERAL">"cpg" values should best be chosen to match the actual file system, though they are fairly unimportant within this context.

    That way, a new "a" partition will be established that @@ -225,20 +171,20 @@

     # fsck -n /dev/devnamea
    +class="REPLACEABLE">${devname}a
     

    It should be remembered that all files containing control information must be relative -to the root filesystem in the Vinum volume which, when setting up a new Vinum root -volume, might not match the root filesystem that is currently active. So in particular, +to the root file system in the Vinum volume which, when setting up a new Vinum root +volume, might not match the root file system that is currently active. So in particular, the files /etc/fstab and /boot/loader.conf need to be taken care of.

    At next reboot, the bootstrap should figure out the appropriate control information -from the new Vinum-based root filesystem, and act accordingly. At the end of the kernel +from the new Vinum-based root file system, and act accordingly. At the end of the kernel initialization process, after all devices have been announced, the prominent notice that shows the success of this setup is a message like:

    @@ -248,7 +194,7 @@
    -

    20.9.3 Example of a Vinum-based Root +

    20.7.3 Example of a Vinum-based Root Setup

    After the Vinum root volume has been set up, the output of gvinum @@ -293,7 +239,7 @@ class="LITERAL">"offset" parameter is the sum of the offset within the Vinum partition "h", and the offset of this partition within the device (or slice). This is a typical setup that is necessary to avoid the problem -described in Section 20.9.4.3. It can also +described in Section 20.7.4.3. It can also be seen that the entire "a" partition is completely within the "h" partition containing all the Vinum data for this device.

    @@ -303,13 +249,13 @@

    -

    20.9.4 Troubleshooting

    +

    20.7.4 Troubleshooting

    If something goes wrong, a way is needed to recover from the situation. The following list contains few known pitfalls and solutions.

    -

    20.9.4.1 System Bootstrap Loads, but +

    20.7.4.1 System Bootstrap Loads, but System Does Not Boot

    If for any reason the system does not continue to boot, the bootstrap can be @@ -324,26 +270,26 @@

    When ready, the boot process can be continued with a boot -as. The options -as will request the kernel to ask for -the root filesystem to mount (-a), and make the boot process -stop in single-user mode (-s), where the root filesystem is +the root file system to mount (-a), and make the boot process +stop in single-user mode (-s), where the root file system is mounted read-only. That way, even if only one plex of a multi-plex volume has been mounted, no data inconsistency between plexes is being risked.

    -

    At the prompt asking for a root filesystem to mount, any device that contains a valid -root filesystem can be entered. If /etc/fstab had been set up +

    At the prompt asking for a root file system to mount, any device that contains a valid +root file system can be entered. If /etc/fstab had been set up correctly, the default should be something like ufs:/dev/gvinum/root. A typical alternate choice would be something like ufs:da0d which could be a hypothetical partition that -contains the pre-Vinum root filesystem. Care should be taken if one of the alias "a" partitions are entered here that are actually reference to the subdisks of the Vinum root device, because in a mirrored setup, this would only mount one -piece of a mirrored root device. If this filesystem is to be mounted read-write later on, +piece of a mirrored root device. If this file system is to be mounted read-write later on, it is necessary to remove the other plex(es) of the Vinum root volume since these plexes would otherwise carry inconsistent data.

    -

    20.9.4.2 Only Primary Bootstrap +

    20.7.4.2 Only Primary Bootstrap Loads

    If /boot/loader fails to load, but the primary bootstrap @@ -352,12 +298,12 @@ point, using the space key. This will make the bootstrap stop in stage two, see Section 12.3.2. An attempt can be made here to boot off an alternate partition, like the partition containing the -previous root filesystem that has been moved away from "a" +previous root file system that has been moved away from "a" above.

    -

    20.9.4.3 Nothing +

    20.7.4.3 Nothing Boots, the Bootstrap Panics

    This situation will happen if the bootstrap had been destroyed by the Vinum @@ -381,9 +327,32 @@

    -

    20.9.5 Differences for +

    20.7.5 Differences for FreeBSD 4.X

    +

    Vinum must be initialized early since it needs to supply the volume for the root +file system. By default, the Vinum kernel part is not looking for drives that might +contain Vinum volume information until the administrator (or one of the startup scripts) +issues a vinum start command.

    + +

    By placing the line:

    + +
    +vinum.autostart="YES"
    +
    + +

    into /boot/loader.conf, Vinum is instructed to automatically +scan all drives for Vinum information as part of the kernel startup.

    + +

    Note that it is not necessary to instruct the kernel where to look for the root +file system. /boot/loader looks up the name of the root device +in /etc/fstab, and passes this information on to the kernel. +When it comes to mount the root file system, the kernel figures out from the device name +provided which driver to ask to translate this into the internal device ID (major/minor +number).

    + + +

    Under FreeBSD 4.X, some internal functions required to make Vinum automatically scan all disks are missing, and the code that figures out the internal ID of the root device is not smart enough to handle a name like /dev/vinum/root @@ -402,7 +371,7 @@ listed, nor is it necessary to add each slice and/or partition explicitly, since Vinum will scan all slices and partitions of the named drives for valid Vinum headers.

    -

    Since the routines used to parse the name of the root filesystem, and derive the +

    Since the routines used to parse the name of the root file system, and derive the device ID (major/minor number) are only prepared to handle “classical” device names like /dev/ad0s1a, they cannot make any sense out of a root volume name like /dev/vinum/root. For that reason, Vinum @@ -422,7 +391,7 @@ name of the root device string being passed (that is, "vinum" in our case), it will use the pre-allocated device ID, instead of trying to figure out one itself. That way, during the usual automatic startup, it can continue to mount the Vinum -root volume for the root filesystem.

    +root volume for the root file system.

    However, when boot -a has been requesting to ask for entering the name of the root device manually, it must be noted that this routine still cannot @@ -447,7 +416,7 @@ accesskey="P">Prev Home -Next @@ -455,7 +424,7 @@ Configuring Vinum Up -Virtualization +Vinum Examples

    diff -r -u handbook.orig/vinum-vinum.html handbook/vinum-vinum.html --- handbook.orig/vinum-vinum.html 2008-03-22 05:43:54.000000000 +0100 +++ handbook/vinum-vinum.html 2008-04-08 14:40:26.000000000 +0200 @@ -8,7 +8,7 @@ - + @@ -42,21 +42,20 @@
    20.1 Synopsis
    -
    20.2 Disks Are Too Small
    +
    20.2 Introduction
    -
    20.3 Access Bottlenecks
    +
    20.3 Disk Performance Issues
    20.4 Data Integrity
    20.5 Vinum Objects
    -
    20.6 Some Examples
    +
    20.6 Configuring Vinum
    -
    20.7 Object Naming
    +
    20.7 Using Vinum for the Root File system
    -
    20.8 Configuring Vinum
    +
    20.8 Vinum Examples
    -
    20.9 Using Vinum for the Root Filesystem
    @@ -86,7 +85,9 @@ users safeguard themselves against such issues is through the use of multiple, and sometimes redundant, disks. In addition to supporting various cards and controllers for hardware RAID systems, the base FreeBSD system includes the Vinum Volume Manager, a block -device driver that implements virtual disk drives. vinum(4) +that implements virtual disk drives. Vinum is a so-called Volume Manager, a virtual disk driver that addresses these three problems. Vinum provides more flexibility, performance, and reliability than @@ -100,12 +101,13 @@

    Note: Starting with FreeBSD 5, Vinum has been rewritten in order to fit into the GEOM architecture (Chapter 19), retaining the original ideas, -terminology, and on-disk metadata. This rewrite is called gvinum (for GEOM -vinum). The following text usually refers to gvinum(8) +(for GEOM vinum). The following text usually refers to Vinum as an abstract name, regardless of the implementation -variant. Any command invocations should now be done using the gvinum command, and the name of the kernel module has been changed +variant. Any command invocations should now be done using the +gvinum(8) +command, and the name of the kernel module has been changed from vinum.ko to geom_vinum.ko, and all device nodes reside under /dev/gvinum instead of /dev/vinum. As of FreeBSD 6, the old Vinum implementation is no @@ -132,7 +134,7 @@ UFS Journaling Through GEOM Up -Disks Are Too Small +Introduction

    --- /dev/null 2008-04-08 15:00:00.000000000 +0200 +++ handbook/vinum-disk-performance-issues.html 2008-04-08 15:09:49.000000000 +0200 @@ -0,0 +1,148 @@ + + + + +Disk Performance Issues + + + + + + + + + + + +
    +

    20.3 Disk Performance Issues

    + +

    Modern systems frequently need to access data in a highly concurrent manner. For +example, large FTP or HTTP servers can maintain thousands of concurrent sessions and have +multiple 100 Mbit/s connections to the outside world. +

    + +

    +The most critical parameter is the load that a transfer places on the subsystem, in other words the time +for which a transfer occupies a drive. +

    + +

    In any disk transfer, the drive must first position the heads, wait for the first +sector to pass under the read head, and then perform the transfer. These actions can be +considered to be atomic: it does not make any sense to interrupt them. +The data transfer time is negligible compared to the time taken for positioning the heads.

    + +

    The traditional and obvious solution to this bottleneck is “more +spindles”: rather than using one large disk, it uses several smaller disks with the +same aggregate storage space. Each disk is capable of positioning and transferring +independently, so the effective throughput increases by a factor close to the number of +disks used.

    + +

    The exact throughput improvement is, of course, smaller than the number of disks +involved: although each drive is capable of transferring in parallel, there is no way to +ensure that the requests are evenly distributed across the drives. Inevitably the load on +one drive will be higher than on another.

    + +

    The evenness of the load on the disks is strongly dependent on the way the data is +shared across the drives. In the following discussion, it is convenient to think of the +disk storage as a large number of data sectors which are addressable by number, rather +like the pages in a book. +

    + +
    +

    20.3.1 Concatenation

    + +

    The most obvious method is to divide the virtual disk into +groups of consecutive sectors the size of the individual physical disks and store them in +this manner, rather like taking a large book and tearing it into smaller sections. This +method is called concatenation and +has the advantage that the disks are not required to have any specific size +relationships. It works well when the access to the virtual disk is spread evenly about +its address space. When access is concentrated on a smaller area, the improvement is less +marked. Figure 20-1 illustrates +the sequence in which storage units are allocated in a concatenated organization.

    + +

    + +
    +

    +

    Figure 20-1. Concatenated Organization

    +
    + +
    + +
    +

    20.3.2 Striping

    + +

    An alternative mapping is to divide the address space into smaller, equal-sized +components and store them sequentially on different devices. For example, the first 256 +sectors may be stored on the first disk, the next 256 sectors on the next disk and so on. +After filling the last disk, the process repeats until the disks are full. This mapping +is called striping or RAID-0. Striping requires somewhat +more effort to locate the data, and it can cause additional I/O load where a transfer is +spread over multiple disks, but it can also provide a more constant load across the +disks. Figure 20-2 illustrates +the sequence in which storage units are allocated in a striped organization.

    + +

    + +
    +

    +

    Figure 20-2. Striped Organization

    +
    + +
    + + + +

    This, and other documents, can be downloaded from ftp://ftp.FreeBSD.org/pub/FreeBSD/doc/.

    + +

    For questions about FreeBSD, read the documentation before contacting <questions@FreeBSD.org>.
    +For questions about this documentation, e-mail <doc@FreeBSD.org>.

    + + + --- handbook.orig/virtualization.html 2008-03-22 05:43:54.000000000 +0100 +++ handbook/virtualization.html 2008-04-08 15:14:45.000000000 +0200 @@ -7,8 +7,8 @@ - + @@ -23,7 +23,7 @@ -Prev -Prev Home @@ -126,7 +126,7 @@ -Using Vinum for the Root Filesystem +Vinum Examples Up FreeBSD as a Guest OS --- handbook.orig/raid.html 2008-03-22 05:43:54.000000000 +0100 +++ handbook/raid.html 2008-04-08 15:43:16.000000000 +0200 @@ -93,8 +93,8 @@

    Next, consider how to attach them as part of the file system. You should research both -vinum(8) (vinum(4) (Chapter 20) and ccd(4). In this @@ -309,17 +309,18 @@

    18.4.1.2 The Vinum Volume Manager

    -

    The Vinum Volume Manager is a block device driver which implements virtual disk +

    The Vinum Volume Manager is a block device driver +vinum(4) +which implements virtual disk drives. It isolates disk hardware from the block device interface and maps data in ways which result in an increase in flexibility, performance and reliability compared to the -traditional slice view of disk storage. vinum(8) implements +traditional slice view of disk storage. Vinum implements the RAID-0, RAID-1 and RAID-5 models, both individually and in combination.

    -

    See Chapter 20 for more information about vinum(8).

    +

    See Chapter 20 for more information about most recent Vinum implementation, gvinum(8), under the Geom architecture +Chapter 19 +.

    --0-968980677-1207663836=:26150-- From owner-freebsd-doc@FreeBSD.ORG Tue Apr 8 14:40:03 2008 Return-Path: Delivered-To: freebsd-doc@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id F296F106566C for ; Tue, 8 Apr 2008 14:40:02 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id DDE108FC0C for ; Tue, 8 Apr 2008 14:40:02 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (gnats@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.2/8.14.2) with ESMTP id m38Ee2il007967 for ; Tue, 8 Apr 2008 14:40:02 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.2/8.14.1/Submit) id m38Ee2rv007966; Tue, 8 Apr 2008 14:40:02 GMT (envelope-from gnats) Date: Tue, 8 Apr 2008 14:40:02 GMT Message-Id: <200804081440.m38Ee2rv007966@freefall.freebsd.org> To: freebsd-doc@FreeBSD.org From: Federico Galvez-Durand Cc: Subject: Re: docs/122052: minor update on handbook section 20.7.1 X-BeenThere: freebsd-doc@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Federico Galvez-Durand List-Id: Documentation project List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 08 Apr 2008 14:40:03 -0000 The following reply was made to PR docs/122052; it has been noted by GNATS. From: Federico Galvez-Durand To: FreeBSD-gnats-submit@FreeBSD.org, freebsd-doc@FreeBSD.org Cc: Subject: Re: docs/122052: minor update on handbook section 20.7.1 Date: Tue, 8 Apr 2008 07:10:36 -0700 (PDT) --0-968980677-1207663836=:26150 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: 8bit Content-Id: Content-Disposition: inline Well, now the minor update is not that minor. Find attached a patch file. This patch -> deprecates: handbook/vinum-object-naming.html handbook/vinum-access-bottlenecks.html handbook/vinum/vinum-concat.png handbook/vinum/vinum-raid10-vol.png handbook/vinum/vinum-simple-vol.png handbook/vinum/vinum-striped.png handbook/vinum/vinum-mirrored-vol.png handbook/vinum/vinum-raid5-org.png handbook/vinum/vinum-striped-vol.png creates: handbook/vinum-disk-performance-issues.html handbook.new/vinum/vinum-concat.png handbook.new/vinum/vinum-raid01.png handbook.new/vinum/vinum-raid10.png handbook.new/vinum/vinum-simple.png handbook.new/vinum/vinum-raid0.png handbook.new/vinum/vinum-raid1.png handbook.new/vinum/vinum-raid5.png updates: all remaining handbook/vinum-*.html handbook/raid.html handbook/virtualization.html. I think I cannot attach the new PNG files here. Please, advise how to submit them. . ____________________________________________________________________________________ You rock. That's why Blockbuster's offering you one month of Blockbuster Total Access, No Cost. http://tc.deals.yahoo.com/tc/blockbuster/text5.com --0-968980677-1207663836=:26150 Content-Type: text/plain; name="patch01.txt" Content-Description: 2837643882-patch01.txt Content-Disposition: inline; filename="patch01.txt" diff -r -u handbook.orig/docbook.css handbook/docbook.css --- handbook.orig/docbook.css 2008-03-22 05:33:04.000000000 +0100 +++ handbook/docbook.css 2008-04-05 15:28:57.000000000 +0200 @@ -129,6 +129,26 @@ color: #000000; } +TABLE.CLASSTABLE { + border-collapse: collapse; + border-top: 2px solid gray; + border-bottom: 2px solid gray; +} + +TABLE.CLASSTABLE TH { + border-top: 2px solid gray; + border-right: 1px solid gray; + border-left: 1px solid gray; + border-bottom: 2px solid gray; +} + +TABLE.CLASSTABLE TD { + border-top: 1px solid gray; + border-right: 1px solid gray; + border-left: 1px solid gray; + border-bottom: 1px solid gray; +} + .FILENAME { color: #007a00; } diff -r -u handbook.orig/vinum-config.html handbook/vinum-config.html --- handbook.orig/vinum-config.html 2008-03-22 05:43:54.000000000 +0100 +++ handbook/vinum-config.html 2008-04-08 14:56:10.000000000 +0200 @@ -7,8 +7,8 @@ - - + + @@ -22,7 +22,7 @@ -Prev Chapter 20 The Vinum Volume Manager
    -

    20.8 Configuring -Vinum

    +

    20.6 Configuring Vinum

    The GENERIC kernel does not contain Vinum. It is possible to build a special kernel which includes Vinum, but this is not recommended. The standard way to start Vinum is as a kernel module (kld). You do -not even need to use kldload(8) for Vinum: +not even need to use + +kldload(8) +for Vinum: when you start gvinum(8), it checks +href="http://www.FreeBSD.org/cgi/man.cgi?query=gvinum&sektion=8"> +gvinum(8), +it checks whether the module has been loaded, and if it is not, it loads it automatically.

    -

    20.8.1 Startup

    +

    20.6.1 Preparing a disk

    +

    Vinum needs a + +bsdlabel(8), +on your disk. +

    Assuming +/dev/ad1 +is the device in use and your Vinum Volume will use the whole disk, it is advisable to initialize the device with a single Slice, using + +fdisk(8). The following command creates +a single Slice +s1 +over the whole disk +/dev/ad1. +

    +
     +#fdisk -vI ad1
     +
    + +

    After creating +the disk Slice, it can be labeled: +

    +
     +#bsdlabel -w ad1s1
     +
    + +

    The bsdlabel utility can not write an adequate label for Vinum automatically, you need to edit the standard label:

    +
     +#bsdlabel -e ad1s1
     +
    +

    Will show you something similar to this:

    +
     +# /dev/ad1s1:
     +8 partitions:
     +#        size   offset    fstype   [fsize bsize bps/cpg]
     +  a:  1048241       16    unused        0     0     0                    
     +  c:  1048257        0    unused        0     0         # "raw" part, don't edit
     +
    + +

    You need to edit the partitions. Once this disk is not bootable (it could be, see section +Section 20.7), you could rename partition +a +to partition +h +and replace +fstype +unused +with +vinum. +The fields fsize bsize bps/cpg have no meaning for +fstype vinum. +

    +
     +# /dev/ad1s1:
     +8 partitions:
     +#        size   offset    fstype   [fsize bsize bps/cpg]
     +  c:  1048257        0    unused        0     0         # "raw" part, don't edit
     +  h:  1048241       16     vinum                    
     +
    +
    + +
    +

    20.6.2 Configuration File

    +

    This file can be placed anywhere in your system. After executing the instructions in this file + +gvinum(8) + will not use it anymore. Everything is stored in a database. But you should keep this file in a safe place, you may need it in case of a Volume crash. +

    +

    The following configuration creates a Volume named +Simple +containing a drive named +DiskB +based on the device +/dev/ad1s1. The +plex +organization is +concat +and contains only one +subdisk (sd) +. +

    +
     +drive diskB device /dev/ad1s1h
     +volume Simple 
     +	plex org concat
     +	sd drive diskB
     +
    +
    + +
    +

    20.6.3 Creating a Volume

    + +

    Once you have prepared your disk and created a configuration file, you can use + +gvinum(8) +to create a Volume. +

    +
     +#gvinum create Simple
     +1 drive:
     +D diskB                 State: up	/dev/ad1s1h	A: 0/511 MB (0%)
     +
     +1 volume:
     +V Simple                State: up	Plexes:       1	Size:        511 MB
     +
     +1 plex:
     +P Simple.p0           C State: up	Subdisks:     1	Size:        511 MB
     +
     +1 subdisk:
     +S Simple.p0.s0          State: up	D: diskB        Size:        511 MB
     +
    + + +

    At this point, a new entry has been created for your Volume:

    + +
     +#ls -l /dev/gvinum
     +crw-r-----  1 root  operator    0,  89 Mar 26 17:17 /dev/gvinum/Simple
     +
     +/dev/gvinum/plex:
     +total 0
     +crw-r-----  1 root  operator    0,  86 Mar 26 17:17 Simple.p0
     +
     +/dev/gvinum/sd:
     +total 0
     +crw-r-----  1 root  operator    0,  83 Mar 26 17:17 Simple.p0.s0
     +
    + +
    + +
    +

    20.6.4 Starting a Volume

    + +

    After creating a Volume you need to allow the system access to the objects:

    +
     +#gvinum start Simple
     +
    + +

    The Starting process can be slow, depending on the size of the subdisk or subdisks contained in your plex. Enter gvinum and use the option +l +to see whether the status of all your subdisks is already +"up" +. +

    +

    A message is printed by gvinum for each subdisk's Start Process completed.

    +
    + +
    +

    20.6.5 Creating a File System

    + +

    After having created a Volume, you need to create a file system using + +newfs(8) +

    -

    Vinum stores configuration information on the disk slices in essentially the same form +

     +#newfs /dev/gvinum/Simple
     +
    +

    If no errors are reported, you should check the file system:

    +
     +#fsck -t ufs /dev/gvinum/Simple
     +
    +

    If no errors are reported, you can mount the file system:

    +
     +#mount /dev/gvinum/Simple /mnt
     +
    + +

    At this point, if everything seems to be right, it is desirable to reboot your machine and perform the following test:

    +
     +#fsck -t ufs /dev/gvinum/Simple
     +
    +

    If no errors are reported, you can mount the file system:

    +
     +#mount /dev/gvinum/Simple /mnt
     +
    +

    If everything looks fine now, then you have succeeded creating a Vinum Volume.

    + +
    + +
    +

    20.6.6 Mounting a Volume Automatically

    + +

    In order to have your Volumes mounted automatically you need two things:

    +
      +
    • +Set + geom_vinum_load="YES" +in +/boot/loader.conf. +
    • +
    • +Add an entry in + /etc/fstab +for your Volume (e.g. Simple). The mountpoint in this example is the directory + /space . See + +fstab(5) +and + +mount(8) +for details. +
    • +
       +#
       +# Device                Mountpoint  FStype  Options     Dump    Pass#
       +#
       +[...]
       +/dev/gvinum/Simple      /space      ufs     rw          2       2
       +
      + +
    +

    Your Volumes will be checked by + +fsck(8) +at boot time if you specify non zero values for + Dump +and + Pass +fields.

    + +
    + +
    +

    20.6.7 Troubleshooting

    + +
    +

    20.6.7.1 Creating a File System

    +

    The process of Starting a Volume may take long; you must be sure this process has been completed before creating a file system. At the moment this manual is written, + +newfs(8) +will not complain if you try to create a file system and the Starting process is still in progress. Even running + +fsck(8) +on your new file system may tell you everything is OK. But most probably you will not be able to use the Volume later on, after rebooting your machine. +

    + +

    In case your Volume does not pass the checkup, you may try to repeat the process one more time:

    +
     +#gvinum start Simple
     +#newfs /dev/gvinum/Simple
     +#fsck -t ufs /dev/gvinum/Simple
     +
    +

    If everything looks fine, then reboot your machine.

    +
     +#shutdown -r now
     +
    +

    Then execute again:

    +
     +#fsck -t ufs /dev/gvinum/Simple
     +#mount /dev/gvinum/Simple /mnt
     +
    +

    It should work without problem.

    + +
    + +
    +

    20.6.8 Miscellaneous Notes

    + +

    Vinum stores configuration information on disk slices in essentially the same form as in the configuration files. When reading from the configuration database, Vinum recognizes a number of keywords which are not allowed in the configuration files. For example, a disk configuration might contain the following text:

    @@ -86,18 +343,11 @@ to identify drives correctly even if they have been assigned different UNIX® drive IDs.

    -
    -

    20.8.1.1 Automatic -Startup

    - -
    -
    -

    Note: This information only relates to the historic Vinum implementation. Gvinum always features an automatic -startup once the kernel module is loaded.

    -
    +
    +

    20.6.8 Differences for FreeBSD 4.X

    +

    In order to start Vinum automatically when you boot the system, ensure that you have the following line in your /etc/rc.conf:

    @@ -119,8 +369,7 @@ does not matter which drive is read. After a crash, however, Vinum must determine which drive was updated most recently and read the configuration from this drive. It then updates the configuration if necessary from progressively older drives.

    -
    -
    +
    diff -r -u handbook.orig/vinum-data-integrity.html handbook/vinum-data-integrity.html --- handbook.orig/vinum-data-integrity.html 2008-03-22 05:43:54.000000000 +0100 +++ handbook/vinum-data-integrity.html 2008-04-08 13:00:38.000000000 +0200 @@ -7,7 +7,7 @@ - + @@ -22,7 +22,7 @@ -Prev Chapter 20 The Vinum Volume Manager
    -

    20.4 Data -Integrity

    +

    20.4 Data Integrity

    -

    The final problem with current disks is that they are unreliable. Although disk drive -reliability has increased tremendously over the last few years, they are still the most -likely core component of a server to fail. When they do, the results can be catastrophic: -replacing a failed disk drive and restoring data to it can take days.

    - -

    The traditional way to approach this problem has been mirroring, keeping two copies of the data on different -physical hardware. Since the advent of the RAID -levels, this technique has also been called RAID level -1 or RAID-1. Any write to the volume writes -to both locations; a read can be satisfied from either, so if one drive fails, the data -is still available on the other drive.

    +

    Although disk drive reliability has increased tremendously over the last few years, they are still the most likely core component of a server to fail. When they do, the results can be catastrophic: replacing a failed disk drive and restoring data to it can take a long time.

    -

    Mirroring has two problems:

    +

    The traditional way to approach this problem has been mirroring, keeping two copies of the data on different physical hardware. Since the advent of the RAID levels, this technique has also been called RAID level 1 or RAID-1.

    + +

    An alternative solution is using an +error-correcting code. +This strategy is implemented in the RAID levels 2, 3, 4, 5 and 6. Of these, RAID-5 is the most interesting; for each data block a simple +parity check code is generated and stored as part of each stripe. +For arrays with large number of disks, the RAID-5 might not provide enough protection; in this case more complex error-correcting codes (e.g. Reed-Solomon) may provide better results. +

    +

    RAID levels can be nested to create other RAID configurations with improved resilience. Of these, RAID 0+1 and RAID 1+0 are explained here. Under certain conditions, these arrays can work in degraded mode with up to N/2 broken disks. However, having two Disks broken can stop the arrays, if they fail in the right positions. In both cases, having only one disk down is fully tolerated.

    +

    +Therefore, whenever you think about a failure in a RAID-0+1 or RAID-1+0 you are considering either the probability of having two disks failing at the same time or not having replaced the first broken disk before the second fails. On top of that, the second disk needs to fail in a very specific position inside the array. +

    +

    +In modern storage facilities, critical mission arrays are implemented using HOT PLUG technology, allowing a broken Disk to be replaced without having to stop the array. The probability of having a second Disk failing before having replaced the first Disk broken is mathematically clear. However the Possibility of such an event must be estimated first and it depends mainly on security policies and stock management paradigms beyond the scope of this discussion.

    +

    Therefore, a more interesting discussion about RAID-0+1 and RAID-1+0 reliability should be based on the Mean Time Before Failure (MTBF) of the devices in use and on other variables provided by the disk drive constructor and the storage facility administration. +

    +

    For the sake of simplicity, all disks (N) in a RAID are considered to have the same capacity (CAP) and R/W characteristics. This is not mandatory in all cases.

    +

    In the Figures, Data stored in a RAID is represented by (X;Y;Z). Data striped along an array of disks is represented by (X0,X1,X2...; Y0,Y1,Y2...; Z0,Z1,Z2...). +

    + +
    +

    20.4.1 RAID-1: Mirror

    +

    In a mirrored array, any write to the volume writes to both disks; a read can be satisfied from either disk, so if one fails, the data is still available on the other one.

    + +
    +

    +

    Figure 20-3. RAID-1 Organization

    +
    • -

      The price. It requires twice as much disk storage as a non-redundant solution.

      +

      The total storage capacity is CAP*N/2.

    • -

      The performance impact. Writes must be performed to both drives, so they take up twice -the bandwidth of a non-mirrored volume. Reads do not suffer from a performance penalty: -it even looks as if they are faster.

      +

      The Write performance is impacted because all data must be stored in both drives, so it takes up twice the bandwidth of a non-mirrored volume. Reads do not suffer from a performance penalty. +

    -

    An alternative solution is parity, implemented in the RAID levels 2, 3, 4 and 5. Of these, RAID-5 is the most interesting. As implemented in Vinum, it is -a variant on a striped organization which dedicates one block of each stripe to parity of -the other blocks. As implemented by Vinum, a RAID-5 -plex is similar to a striped plex, except that it implements RAID-5 by including a parity block in each stripe. As required -by RAID-5, the location of this parity block changes -from one stripe to the next. The numbers in the data blocks indicate the relative block -numbers.

    +
    + +
    +

    20.4.2 RAID-5

    -

    +

    As implemented in Vinum, it is a variant on a plex striped organization which dedicates one block of each stripe to parity of the other blocks (Px,Py,Pz). +As required by RAID-5, the location of this parity block changes from one stripe to the next. The numbers in the data blocks indicate the relative block numbers (X0,X1,Px; Y0,Py,Y1; Pz,Z0,Z1;...).

    -

    Figure 20-3. RAID-5 Organization

    +

    +

    Figure 20-4. RAID-5 Organization

    +
    + +
      +
    • +The total capacity of the array is equal to (N-1)*CAP. +

    • +
    • +At least 3 disks are necessary. +

    • +
    • +Read access is similar to that of striped organizations but write access is significantly slower. In order to update (write) one striped block you need to read the other striped blocks and compute the parity block again before writing the new block and the new parity. This effect can be mitigated by using systems with large R/W cache memory, then you do not need to read the other blocks again in order to compute the new parity. +

    • + +
    • +If one drive fails, the array can continue to operate in degraded mode: a read from one of the remaining accessible drives continues normally, but a read from the failed drive is recalculated from the corresponding block on all the remaining drives. +

    • +
    -

    -
    -
    -

    Compared to mirroring, RAID-5 has the advantage of -requiring significantly less storage space. Read access is similar to that of striped -organizations, but write access is significantly slower, approximately 25% of the read -performance. If one drive fails, the array can continue to operate in degraded mode: a -read from one of the remaining accessible drives continues normally, but a read from the -failed drive is recalculated from the corresponding block from all the remaining -drives.

    + +
    +

    20.4.3 RAID-0+1

    + +

    +In Vinum, a RAID-0+1 can be straightforward constructed by concatenating two striped plex. In this array, resilience is improved and more than one disk can fail without compromising the functionality. Performance is degraded when the array is forced to work without the full set of disks. +

    +
    +

    +

    Figure 20-5. RAID-0+1 Organization

    +
      +
    • +The total storage capacity is CAP*N/2. +

    • +
    • +At least 4 disks are necessary. +

    • +
    • +This array will stop working when one disk fails in each of the mirrors (e.g. DiskB and DiskF) but it could work in degraded mode with N/2 disks down as long as they are all in the same mirror (e.g. DiskE, DiskF and DiskG). + +

    • +
    + +
    + +
    +

    20.4.4 RAID-1+0

    + +

    +In Vinum, a RAID-1+0 can not be constructed by a simple manipulation of plexes. You need to construct the mirrors (e.g., m0, m1, m3...) first and then use these mirrors into a striped plex. +In this array, resilience is improved and more than one disk can fail without compromising the functionality. Performance is degraded when the array is forced to work without the full set of disks. +

    + +
    +

    +

    Figure 20-6. RAID-1+0 Organization

    +
    + +
      +
    • +The total storage capacity is CAP*N/2. +

    • +
    • +At least 4 disks are necessary. +

    • +
    • +This array will stop working when two disks fail in the same mirrors (e.g. DiskB and DiskC) but it could work in degraded mode with N/2 disks down as long as they are not in the same mirror (e.g. DiskB, DiskE and DiskF). + +

    • +
    +
    + +
    -

    20.6 Some -Examples

    - -

    Vinum maintains a configuration -database which describes the objects known to an individual system. Initially, -the user creates the configuration database from one or more configuration files with the -aid of the gvinum(8) utility -program. Vinum stores a copy of its configuration database on each disk slice (which -Vinum calls a device) under its -control. This database is updated on each state change, so that a restart accurately -restores the state of each Vinum object.

    - +

    20.8 Vinum Examples

    +

    +All Disks in the following examples are identical in capacity (512 Mb) and R/W characteristics. However, the size reported by +gvinum(8) + is 511 Mb. This is normal in a real case, when the Disk is not exactly 536870912 bytes and some space (approx. 8 KB) is reserved by the +bsdlabel(8). +The size used for the stripes is 256k in all examples. +

    +

    +For the sake of simplicity, only three stripes out of many are represented in the Figures. +

    -

    20.6.1 The Configuration File

    +

    20.8.1 A Simple Volume

    The configuration file describes individual Vinum objects. The definition of a simple volume might be:

     -    drive a device /dev/da3h
     -    volume myvol
     -      plex org concat
     -        sd length 512m drive a
     +#cat simple.conf
     +drive diskB device /dev/ad1s1h
     +volume Simple 
     +	plex org concat
     +	sd drive diskB
      

    This file describes four Vinum objects:

    @@ -67,79 +66,65 @@

    The drive line describes a disk partition (drive) and its location relative to the underlying hardware. It is given the symbolic name a. This separation of the symbolic names +class="emphasis">diskB. This separation of the symbolic names from the device names allows disks to be moved from one location to another without confusion.

  • The volume line describes a -volume. The only required attribute is the name, in this case myvol.

    +Vinum Volume. The only required attribute is the name, in this case Simple.

  • -

    The plex line defines a plex. +

    The plex line defines a Vinum Plex. The only required parameter is the organization, in this case concat. No name is necessary: the system automatically generates a name from the volume name by adding the suffix .px, -where x is the number of the plex +class="EMPHASIS">.p${x}, +where ${x} is the number of the plex in the volume. Thus this plex will be called myvol.p0.

    +class="EMPHASIS">Simple.p0
    .

  • -

    The sd line describes a subdisk. +

    The sd line describes a Vinum subdisk. The minimum specifications are the name of a drive on which to store it, and the length of the subdisk. As with plexes, no name is necessary: the system automatically assigns names derived from the plex name by adding the suffix .sx, -where x is the number of the +class="EMPHASIS">.s${x}, +where ${x} is the number of the subdisk in the plex. Thus Vinum gives this subdisk the name myvol.p0.s0.

    +class="EMPHASIS">Simple.p0.s0.

  • -

    After processing this file, gvinum(8) produces the -following output:

    - -
     -      # gvinum -> create config1
     -      Configuration summary
     -      Drives:         1 (4 configured)
     -      Volumes:        1 (4 configured)
     -      Plexes:         1 (8 configured)
     -      Subdisks:       1 (16 configured)
     -     
     -    D a                     State: up       Device /dev/da3h        Avail: 2061/2573 MB (80%)
     -    
     -    V myvol                 State: up       Plexes:       1 Size:        512 MB
     -    
     -    P myvol.p0            C State: up       Subdisks:     1 Size:        512 MB
     -    
     -    S myvol.p0.s0           State: up       PO:        0  B Size:        512 MB
     -
    - -

    This output shows the brief listing format of gvinum(8). It is -represented graphically in Figure -20-4.

    +

    After processing this file, +gvinum(8) +produces the following output:

    -

    +
     +# gvinum create simple.conf
     +1 drive:
     +D diskB                 State: up	/dev/ad1s1h	A: 0/511 MB (0%)
     +
     +1 volume:
     +V Simple                State: up	Plexes:       1	Size:        511 MB
     +
     +1 plex:
     +P Simple.p0           C State: up	Subdisks:     1	Size:        511 MB
     +
     +1 subdisk:
     +S Simple.p0.s0          State: up	D: diskB        Size:        511 MB
     +
    -

    Figure 20-4. A Simple Vinum Volume

    -

    +

    +

    Figure 20-4. A Simple Vinum Volume

    -
    -

    This figure, and the ones which follow, represent a volume, which contains the plexes, which in turn contain the subdisks. In this trivial example, the volume contains one plex, and the plex contains one subdisk.

    @@ -147,181 +132,320 @@

    This particular volume has no specific advantage over a conventional disk partition. It contains a single plex, so it is not redundant. The plex contains a single subdisk, so there is no difference in storage allocation from a conventional disk partition. The -following sections illustrate various more interesting configuration methods.

    +following sections illustrate more interesting configuration methods.

    +
    -

    20.6.2 Increased Resilience: -Mirroring

    +

    20.8.2 RAID-1: Mirrored set

    -

    The resilience of a volume can be increased by mirroring. When laying out a mirrored -volume, it is important to ensure that the subdisks of each plex are on different drives, +

    The resilience of a volume can be increased by mirroring +(Section 20.4.1). +When laying out a mirrored volume, it is important to ensure that the subdisks of each plex are on different drives, so that a drive failure will not take down both plexes. The following configuration mirrors a volume:

     -   drive b device /dev/da4h
     -    volume mirror
     -      plex org concat
     -        sd length 512m drive a
     -      plex org concat
     -        sd length 512m drive b
     +#cat mirror.conf
     +drive diskB device /dev/ad1s1h
     +drive diskC device /dev/ad2s1h
     +volume Mirror
     +	plex org concat
     +	sd drive diskB
     +	plex org concat
     +	sd drive diskC
      
    -

    In this example, it was not necessary to specify a definition of drive a again, since Vinum keeps track of all -objects in its configuration database. After processing this definition, the +

    +After processing this definition, the configuration looks like:

     -   Drives:         2 (4 configured)
     -    Volumes:        2 (4 configured)
     -    Plexes:         3 (8 configured)
     -    Subdisks:       3 (16 configured)
     -    
     -    D a                     State: up       Device /dev/da3h        Avail: 1549/2573 MB (60%)
     -    D b                     State: up       Device /dev/da4h        Avail: 2061/2573 MB (80%)
     -
     -    V myvol                 State: up       Plexes:       1 Size:        512 MB
     -    V mirror                State: up       Plexes:       2 Size:        512 MB
     -  
     -    P myvol.p0            C State: up       Subdisks:     1 Size:        512 MB
     -    P mirror.p0           C State: up       Subdisks:     1 Size:        512 MB
     -    P mirror.p1           C State: initializing     Subdisks:     1 Size:        512 MB
     -  
     -    S myvol.p0.s0           State: up       PO:        0  B Size:        512 MB
     -    S mirror.p0.s0          State: up       PO:        0  B Size:        512 MB
     -    S mirror.p1.s0          State: empty    PO:        0  B Size:        512 MB
     +#gvinum create mirror.conf
     +2 drives:
     +D diskC                 State: up	/dev/ad2s1h	A: 0/511 MB (0%)
     +D diskB                 State: up	/dev/ad1s1h	A: 0/511 MB (0%)
     +
     +1 volume:
     +V Mirror                State: up	Plexes:       2	Size:        511 MB
     +
     +2 plexes:
     +P Mirror.p1           C State: up	Subdisks:     1	Size:        511 MB
     +P Mirror.p0           C State: up	Subdisks:     1	Size:        511 MB
     +
     +2 subdisks:
     +S Mirror.p1.s0          State: up	D: diskC        Size:        511 MB
     +S Mirror.p0.s0          State: up	D: diskB        Size:        511 MB
      
    -

    Figure 20-5 shows the structure -graphically.

    - -

    -
    -

    Figure 20-5. A Mirrored Vinum Volume

    - -

    +

    +

    Figure 20-5. A RAID-1 Vinum Volume

    -
    -
    -

    In this example, each plex contains the full 512 MB of address space. As in the -previous example, each plex contains only a single subdisk.

    -

    20.6.3 Optimizing Performance

    +

    20.8.3 RAID-0: Striped set

    -

    The mirrored volume in the previous example is more resistant to failure than an -unmirrored volume, but its performance is less: each write to the volume requires a write -to both drives, using up a greater proportion of the total disk bandwidth. Performance +

    The RAID-1 volume in the previous example is more resistant to failure than a +simple volume, but it has inferior Writing performance because each Write to the volume requires a Write +to both drives, using a greater percentage of the total disk bandwidth. Performance considerations demand a different approach: instead of mirroring, the data is striped -across as many disk drives as possible. The following configuration shows a volume with a -plex striped across four disk drives:

    +(Section 20.3.2) +across as many disk drives as possible. This configuration does not provide data protection against failure. +The following configuration shows a volume with a +plex striped across three disk drives:

    + +
     +#cat striped.conf
     +drive diskB device /dev/ad1s1h
     +drive diskC device /dev/ad2s1h
     +drive diskD device /dev/ad3s1h
     +volume Stripes
     +	plex org striped 256k
     +	sd drive diskB
     +	sd drive diskC
     +	sd drive diskD
     +
     -   drive c device /dev/da5h
     -    drive d device /dev/da6h
     -    volume stripe
     -    plex org striped 512k
     -      sd length 128m drive a
     -      sd length 128m drive b
     -      sd length 128m drive c
     -      sd length 128m drive d
     -
    - -

    As before, it is not necessary to define the drives which are already known to Vinum. -After processing this definition, the configuration looks like:

    - -
     -   Drives:         4 (4 configured)
     -    Volumes:        3 (4 configured)
     -    Plexes:         4 (8 configured)
     -    Subdisks:       7 (16 configured)
     -  
     -    D a                     State: up       Device /dev/da3h        Avail: 1421/2573 MB (55%)
     -    D b                     State: up       Device /dev/da4h        Avail: 1933/2573 MB (75%)
     -    D c                     State: up       Device /dev/da5h        Avail: 2445/2573 MB (95%)
     -    D d                     State: up       Device /dev/da6h        Avail: 2445/2573 MB (95%)
     -  
     -    V myvol                 State: up       Plexes:       1 Size:        512 MB
     -    V mirror                State: up       Plexes:       2 Size:        512 MB
     -    V striped               State: up       Plexes:       1 Size:        512 MB
     -  
     -    P myvol.p0            C State: up       Subdisks:     1 Size:        512 MB
     -    P mirror.p0           C State: up       Subdisks:     1 Size:        512 MB
     -    P mirror.p1           C State: initializing     Subdisks:     1 Size:        512 MB
     -    P striped.p1            State: up       Subdisks:     1 Size:        512 MB
     -  
     -    S myvol.p0.s0           State: up       PO:        0  B Size:        512 MB
     -    S mirror.p0.s0          State: up       PO:        0  B Size:        512 MB
     -    S mirror.p1.s0          State: empty    PO:        0  B Size:        512 MB
     -    S striped.p0.s0         State: up       PO:        0  B Size:        128 MB
     -    S striped.p0.s1         State: up       PO:      512 kB Size:        128 MB
     -    S striped.p0.s2         State: up       PO:     1024 kB Size:        128 MB
     -    S striped.p0.s3         State: up       PO:     1536 kB Size:        128 MB
     +#gvinum create striped.conf
     +3 drives:
     +D diskD                 State: up	/dev/ad3s1h	A: 0/511 MB (0%)
     +D diskC                 State: up	/dev/ad2s1h	A: 0/511 MB (0%)
     +D diskB                 State: up	/dev/ad1s1h	A: 0/511 MB (0%)
     +
     +1 volume:
     +V Stripes               State: up	Plexes:       1	Size:       1534 MB
     +
     +1 plex:
     +P Stripes.p0          S State: up	Subdisks:     3	Size:       1534 MB
     +
     +3 subdisks:
     +S Stripes.p0.s2         State: up	D: diskD        Size:        511 MB
     +S Stripes.p0.s1         State: up	D: diskC        Size:        511 MB
     +S Stripes.p0.s0         State: up	D: diskB        Size:        511 MB
      

    -

    Figure 20-6. A Striped Vinum Volume

    -

    +

    +

    Figure 20.6. A Striped Vinum Volume

    +
    + +
    + +
    +

    20.8.4 RAID-5: Striped set with distributed parity

    +

    RAID-1 resilience can be improved by using a striped array with distributed parity, this configuration is known as RAID-5 +(Section 20.4.2). + The cost of this strategy is the space consumed by the parity data (usually the size of one disk of the array) and slower read/write access. The minimum number of disks required is 3 and the array continues operating, in degraded mode, when one disk fails. +

    + +
     +#cat raid5.conf
     +drive diskB device /dev/ad1s1h
     +drive diskC device /dev/ad2s1h
     +drive diskD device /dev/ad3s1h
     +volume Raid5 
     +	plex org raid5 256k
     +	sd drive diskB
     +	sd drive diskC
     +	sd drive diskD
     +
    + + +
     +#gvinum create raid5.conf
     +3 drives:
     +D diskD                 State: up	/dev/ad3s1h	A: 0/511 MB (0%)
     +D diskC                 State: up	/dev/ad2s1h	A: 0/511 MB (0%)
     +D diskB                 State: up	/dev/ad1s1h	A: 0/511 MB (0%)
     +
     +1 volume:
     +V Raid5                   State: up	Plexes:       1	Size:       1023 MB
     +
     +1 plex:
     +P Raid5.p0             R5 State: up	Subdisks:     3	Size:       1023 MB
     +
     +3 subdisks:
     +S Raid5.p0.s2             State: up	D: diskD        Size:        511 MB
     +S Raid5.p0.s1             State: up	D: diskC        Size:        511 MB
     +S Raid5.p0.s0             State: up	D: diskB        Size:        511 MB
     +
    + +
    +

    +

    Figure 20-7. A RAID-5 Vinum Volume

    -
    -
    -

    This volume is represented in Figure -20-6. The darkness of the stripes indicates the position within the plex address -space: the lightest stripes come first, the darkest last.

    -

    20.6.4 Resilience and -Performance

    +

    20.8.5 RAID 0+1

    -

    With sufficient hardware, it is +

    With sufficient hardware, it is possible to build volumes which show both increased resilience and increased performance compared to standard UNIX® partitions. A typical -configuration file might be:

    +configuration file for a RAID-0+1 +(Section 20.4.3). +might be:

    + +
     +#cat raid01.conf
     +drive diskB device /dev/da0s1h
     +drive diskC device /dev/da1s1h
     +drive diskD device /dev/da2s1h
     +drive diskE device /dev/da3s1h
     +drive diskF device /dev/da4s1h
     +drive diskG device /dev/da5s1h
     +volume RAID01
     +	plex org striped 256k
     +		sd drive diskB
     +		sd drive diskC
     +		sd drive diskD
     +	plex org striped 256k
     +		sd drive diskE
     +		sd drive diskF
     +		sd drive diskG
     +
     -   volume raid10
     -      plex org striped 512k
     -        sd length 102480k drive a
     -        sd length 102480k drive b
     -        sd length 102480k drive c
     -        sd length 102480k drive d
     -        sd length 102480k drive e
     -      plex org striped 512k
     -        sd length 102480k drive c
     -        sd length 102480k drive d
     -        sd length 102480k drive e
     -        sd length 102480k drive a
     -        sd length 102480k drive b
     +# gvinum create raid01.conf
     +6 drives:
     +D diskG                 State: up	/dev/da5s1h	A: 0/511 MB (0%)
     +D diskF                 State: up	/dev/da4s1h	A: 0/511 MB (0%)
     +D diskE                 State: up	/dev/da3s1h	A: 0/511 MB (0%)
     +D diskD                 State: up	/dev/da2s1h	A: 0/511 MB (0%)
     +D diskC                 State: up	/dev/da1s1h	A: 0/511 MB (0%)
     +D diskB                 State: up	/dev/da0s1h	A: 0/511 MB (0%)
     +
     +1 volume:
     +V RAID01               State: up	Plexes:       2	Size:       1535 MB
     +
     +2 plexes:
     +P RAID01.p1          S State: up	Subdisks:     3	Size:       1535 MB
     +P RAID01.p0          S State: up	Subdisks:     3	Size:       1535 MB
     +
     +6 subdisks:
     +S RAID01.p1.s2         State: up	D: diskG        Size:        511 MB
     +S RAID01.p1.s1         State: up	D: diskF        Size:        511 MB
     +S RAID01.p1.s0         State: up	D: diskE        Size:        511 MB
     +S RAID01.p0.s2         State: up	D: diskD        Size:        511 MB
     +S RAID01.p0.s1         State: up	D: diskC        Size:        511 MB
     +S RAID01.p0.s0         State: up	D: diskB        Size:        511 MB
      

    The subdisks of the second plex are offset by two drives from those of the first plex: this helps ensure that writes do not go to the same subdisks even if a transfer goes over two drives.

    -

    Figure 20-7 represents the -structure of this volume.

    - -

    +
    +

    +

    Figure 20-8. A RAID-0+1 Vinum Volume

    +
    -
    -

    Figure 20-7. A Mirrored, Striped Vinum Volume

    +
    -

    -
    -
    +
    +

    20.8.5 RAID 1+0

    + +

    With sufficient hardware, it is possible to build volumes which show both increased resilience and increased performance +compared to standard UNIX® partitions in more than one way. The RAID-1+0 configuration differs from RAID-0+1 in the way mirrors and stripes are used. A typical configuration file for a RAID-1+0 +(Section 20.4.4). +might be:

    + +
     +#cat raid10_ph1.conf
     +drive diskB device /dev/da0s1h
     +drive diskC device /dev/da1s1h
     +drive diskD device /dev/da2s1h
     +drive diskE device /dev/da3s1h
     +drive diskF device /dev/da4s1h
     +drive diskG device /dev/da5s1h
     +volume m0
     +	plex org concat
     +		sd drive diskB
     +	plex org concat
     +		sd drive diskC
     +volume m1
     +	plex org concat
     +		sd drive diskD
     +	plex org concat
     +		sd drive diskE
     +volume m2
     +	plex org concat
     +		sd drive diskF
     +	plex org concat
     +		sd drive diskG
     +
     +#cat raid10_ph2.conf
     +drive dm0 device /dev/gvinum/m0
     +drive dm1 device /dev/gvinum/m1
     +drive dm2 device /dev/gvinum/m2
     +
     +volume RAID10
     +	plex org striped 256k
     +		sd drive dm0
     +		sd drive dm1
     +		sd drive dm2
     +
    + +
     +#gvinum create raid10_ph1.conf
     +#gvinum create raid10_ph2.conf
     +
    + +
     +# gvinum list
     +9 drives:
     +D dm2                   State: up	/dev/gvinum/sd/m2.p0.s0	A: 0/511 MB (0%)
     +D dm1                   State: up	/dev/gvinum/sd/m1.p0.s0	A: 0/511 MB (0%)
     +D dm0                   State: up	/dev/gvinum/sd/m0.p0.s0	A: 0/511 MB (0%)
     +D diskG                 State: up	/dev/da5s1h	A: 0/511 MB (0%)
     +D diskF                 State: up	/dev/da4s1h	A: 0/511 MB (0%)
     +D diskE                 State: up	/dev/da3s1h	A: 0/511 MB (0%)
     +D diskD                 State: up	/dev/da2s1h	A: 0/511 MB (0%)
     +D diskC                 State: up	/dev/da1s1h	A: 0/511 MB (0%)
     +D diskB                 State: up	/dev/da0s1h	A: 0/511 MB (0%)
     +
     +4 volumes:
     +V RAID10                State: up	Plexes:       1	Size:       1534 MB
     +V m2                    State: up	Plexes:       2	Size:        511 MB
     +V m1                    State: up	Plexes:       2	Size:        511 MB
     +V m0                    State: up	Plexes:       2	Size:        511 MB
     +
     +7 plexes:
     +P RAID10.p0           S State: up	Subdisks:     3	Size:       1534 MB
     +P m2.p1               C State: up	Subdisks:     1	Size:        511 MB
     +P m2.p0               C State: up	Subdisks:     1	Size:        511 MB
     +P m1.p1               C State: up	Subdisks:     1	Size:        511 MB
     +P m1.p0               C State: up	Subdisks:     1	Size:        511 MB
     +P m0.p1               C State: up	Subdisks:     1	Size:        511 MB
     +P m0.p0               C State: up	Subdisks:     1	Size:        511 MB
     +
     +9 subdisks:
     +S RAID10.p0.s2          State: up	D: dm2          Size:        511 MB
     +S RAID10.p0.s1          State: up	D: dm1          Size:        511 MB
     +S RAID10.p0.s0          State: up	D: dm0          Size:        511 MB
     +S m2.p1.s0              State: up	D: diskG        Size:        511 MB
     +S m2.p0.s0              State: up	D: diskF        Size:        511 MB
     +S m1.p1.s0              State: up	D: diskE        Size:        511 MB
     +S m1.p0.s0              State: up	D: diskD        Size:        511 MB
     +S m0.p1.s0              State: up	D: diskC        Size:        511 MB
     +S m0.p0.s0              State: up	D: diskB        Size:        511 MB
     +
    + +
    +

    +

    Figure 20-9. A RAID-1+0 Volume

    +
    diff -r -u handbook.orig/vinum-intro.html handbook/vinum-intro.html --- handbook.orig/vinum-intro.html 2008-03-22 05:43:54.000000000 +0100 +++ handbook/vinum-intro.html 2008-04-08 14:23:40.000000000 +0200 @@ -3,12 +3,12 @@ -Disks Are Too Small +Introduction - + @@ -25,7 +25,7 @@ Prev Chapter 20 The Vinum Volume Manager -Next @@ -34,14 +34,24 @@
    -

    20.2 Disks Are Too -Small

    +

    20.2 Introduction

    + +

    +Since computers begun to be used as data storage devices the issue of ensuring a safe operation has been studied. +

    +

    +Different strategies have been developed, one of the most interesting is the Redundant Arrays of Inexpensive Disks (RAID). +The term RAID was first defined by David A. Patterson, Garth A. Gibson and Randy Katz at the University of California, Berkeley in 1987. They studied the possibility of using two or more drives to appear as a single device to the host system and published a paper: "A Case for Redundant Arrays of Inexpensive Disks (RAID)" in June 1988 at the SIGMOD conference. However, the idea of using redundant disk arrays was first patented by Norman Ken Ouchi at IBM. This patent was awarded in 1978 (U.S. patent 4,092,7 32) titled "System for recovering data stored in failed memory unit." The claims for this patent describe what would later be named RAID-5 with full stripe writes. This 1978 patent also acknowledges that disk mirroring or duplexing (RAID-1) and protection with dedicated parity (RAID-4) were prior art at the time the patent was deposited. +

    +

    +VINUM is a Volume Manager and can be understood as a Software capable of implementing RAID-0, RAID-1 and RAID-5 specifications. Nowadays, hardware RAID-Controllers are very popular and some of them have significant better performance than a similar Software RAID approach. Nevertheless, a Software Volume Manager can provide more flexibility and can also be used in conjunction with a hardware controller. +

    +

    +Since FreeBSD RELEASE 5.0, VINUM has been integrated under the GEOM framework +(Chapter 19), +which also provides an alternative way of implementing RAID-0 and RAID-1. +

    -

    Disks are getting bigger, but so are data storage requirements. Often you will find -you want a file system that is bigger than the disks you have available. Admittedly, this -problem is not as acute as it was ten years ago, but it still exists. Some systems have -solved this by creating an abstract device which stores its data on a number of -disks.

    diff -r -u handbook.orig/vinum-objects.html handbook/vinum-objects.html --- handbook.orig/vinum-objects.html 2008-03-22 05:43:54.000000000 +0100 +++ handbook/vinum-objects.html 2008-04-08 14:34:32.000000000 +0200 @@ -8,7 +8,7 @@ - + @@ -25,7 +25,7 @@ Prev Chapter 20 The Vinum Volume Manager -Next @@ -36,15 +36,14 @@

    20.5 Vinum Objects

    -

    In order to address these problems, Vinum implements a four-level hierarchy of -objects:

    +

    Vinum implements a four-level hierarchy of objects:

    • The most visible object is the virtual disk, called a volume. Volumes have essentially the same properties as a UNIX® disk drive, though there are some minor -differences. They have no size limitations.

      +differences. Their size is not limited by the size of an individual drive.

    • @@ -103,31 +102,34 @@

      20.5.3 Performance Issues

      -

      Vinum implements both concatenation and striping at the plex level:

      +

      Vinum implements Concatenation, Striping and RAID-5 at the plex level:

      • -

        A concatenated plex uses the +

        A Concatenated plex uses the address space of each subdisk in turn.

      • -

        A striped plex stripes the data +

        A Striped plex stripes the data across each subdisk. The subdisks must all have the same size, and there must be at least two subdisks in order to distinguish it from a concatenated plex.

      • +
      • +Like a striped plex, a RAID-5 plex stripes the data across each subdisk. The subdisks +must all have the same size, and there must be at least three subdisks, otherwise mirroring would be more efficient. +
      -

      20.5.4 Which Plex -Organization?

      +

      20.5.4 Which Plex Organization?

      -

      The version of Vinum supplied with FreeBSD 7.0 implements two kinds of plex:

      +

      The version of Vinum supplied with FreeBSD 7.0 implements three kinds of plex:

      • -

        Concatenated plexes are the most flexible: they can contain any number of subdisks, +

        Concatenated plexes are the most flexible: they can contain any number of subdisks, and the subdisks may be of different length. The plex may be extended by adding additional subdisks. They require less CPU time than striped plexes, though the difference in CPU overhead @@ -136,29 +138,30 @@

      • -

        The greatest advantage of striped (RAID-0) plexes +

        The greatest advantage of Striped (RAID-0) plexes is that they reduce hot spots: by choosing an optimum sized stripe (about 256 kB), you can even out the load on the component drives. The disadvantages of this approach are (fractionally) more complex code and restrictions on subdisks: they must be all the same size, and extending a plex by adding new subdisks is so complicated that Vinum currently does not implement it. Vinum imposes an additional, trivial restriction: a striped plex -must have at least two subdisks, since otherwise it is indistinguishable from a +must have at least two subdisks, otherwise it is indistinguishable from a concatenated plex.

      • +
      • +RAID-5 plexes are effectively an extension of striped plexes. Compared to striped +plexes, they offer the advantage of fault tolerance, but the disadvantages of higher +storage cost and significantly higher CPU overhead, particularly for writes. The code +is an order of magnitude more complex than for concatenated and striped plexes. Like +striped plexes, RAID-5 plexes must have equal-sized subdisks and cannot currently be +extended. Vinum enforces a minimum of three subdisks for a RAID-5 plex, since any +smaller number would not make sense +
      -

      Table 20-1 summarizes the advantages -and disadvantages of each plex organization.

      -
      -

      Table 20-1. Vinum Plex Organizations

      +

      Table 20-1. Vinum Plex Organizations: advantages and disadvantages

      - -----+
      @@ -171,7 +174,7 @@ - + @@ -179,18 +182,76 @@ - + + + + + + + + + +
      Plex type
      concatenatedConcatenated 1 yes no
      stripedStriped 2 no yes High performance in combination with highly concurrent access
      RAID-53noyesHighly reliable storage, efficient read access, data update has moderate performance
      +
      + +
      +

      20.5.5 Object Naming

      + +

      Vinum assigns default names to plexes and subdisks, although they +may be overridden. Overriding the default names is not recommended: experience with the +VERITAS volume manager, which allows arbitrary naming of objects, has shown that this +flexibility does not bring a significant advantage, and it can cause confusion.

      + +

      Names may contain any non-blank character, but it is recommended to restrict them to +letters, digits and the underscore characters. The names of volumes, plexes and subdisks +may be up to 64 characters long, and the names of drives may be up to 32 characters +long.

      + +

      Vinum objects are assigned device nodes in the hierarchy /dev/gvinum. All volumes get direct entries there too. +

      + +
        + +
      • +

        The directories /dev/gvinum/plex, and /dev/gvinum/sd contain device nodes for each plex and for +each subdisk, respectively.

        +

        For each Volume created, there will be a /dev/gvinum/My-Volume-Name entry.

        +
      • +
      +
      + +
      +

      20.5.6 Differences for FreeBSD 4.X

      + +

      Vinum objects are assigned device nodes in the hierarchy /dev/vinum. +

      +
        +
      • +

        The control devices /dev/vinum/control and /dev/vinum/controld, used by +gvinum(8) +and the Vinum daemon respectively.

        +
      • + +
      • +

        A directory /dev/vinum/drive with entries for each drive. +These entries are in fact symbolic links to the corresponding disk nodes.

        +
      • + + +
      +
      + diff -r -u handbook.orig/vinum-root.html handbook/vinum-root.html --- handbook.orig/vinum-root.html 2008-03-22 05:43:54.000000000 +0100 +++ handbook/vinum-root.html 2008-04-08 14:28:55.000000000 +0200 @@ -3,12 +3,12 @@ -Using Vinum for the Root Filesystem +Using Vinum for the Root File system - + @@ -25,7 +25,7 @@ Prev Chapter 20 The Vinum Volume Manager -Next @@ -34,110 +34,56 @@
      -

      20.9 Using Vinum for the Root -Filesystem

      +

      20.7 Using Vinum for the Root +File system

      -

      For a machine that has fully-mirrored filesystems using Vinum, it is desirable to also -mirror the root filesystem. Setting up such a configuration is less trivial than -mirroring an arbitrary filesystem because:

      +

      For a machine that has fully-mirrored file systems using Vinum, it is desirable to also +mirror the root file system. Setting up such a configuration is less trivial than +mirroring an arbitrary file system because:

      • -

        The root filesystem must be available very early during the boot process, so the Vinum +

        The root file system must be available very early during the boot process, so the Vinum infrastructure must already be available at this time.

      • -

        The volume containing the root filesystem also contains the system bootstrap and the +

        The volume containing the root file system also contains the system bootstrap and the kernel, which must be read using the host system's native utilities (e. g. the BIOS on PC-class machines) which often cannot be taught about the details of Vinum.

      In the following sections, the term “root volume” is generally used to -describe the Vinum volume that contains the root filesystem. It is probably a good idea +describe the Vinum volume that contains the root file system. It is probably a good idea to use the name "root" for this volume, but this is not technically required in any way. All command examples in the following sections assume this name though.

      -

      20.9.1 Starting up Vinum Early Enough -for the Root Filesystem

      +

      20.7.1 Starting up Vinum Early Enough +for the Root File system

      -

      There are several measures to take for this to happen:

      - -
        -
      • -

        Vinum must be available in the kernel at boot-time. Thus, the method to start Vinum -automatically described in Section -20.8.1.1 is not applicable to accomplish this task, and the start_vinum parameter must actually not be set when the following setup is being arranged. The -first option would be to compile Vinum statically into the kernel, so it is available all -the time, but this is usually not desirable. There is another option as well, to have /boot/loader (Section -12.3.3) load the vinum kernel module early, before starting the kernel. This can be -accomplished by putting the line:

        +

        Vinum must be available in the kernel at boot-time. +Add the following line to your /boot/loader.conf (Section +12.3.3) in order to load the Vinum kernel module early enough.

          geom_vinum_load="YES"
          
        -

        into the file /boot/loader.conf.

        -
      • - -
      • -
        -
        -

        Note: For Gvinum, all -startup is done automatically once the kernel module has been loaded, so the procedure -described above is all that is needed. The following text documents the behaviour of the -historic Vinum system, for the sake of older setups.

        -
        -
        - -

        Vinum must be initialized early since it needs to supply the volume for the root -filesystem. By default, the Vinum kernel part is not looking for drives that might -contain Vinum volume information until the administrator (or one of the startup scripts) -issues a vinum start command.

        - -
        -
        -

        Note: The following paragraphs are outlining the steps needed for FreeBSD 5.X -and above. The setup required for FreeBSD 4.X differs, and is described below in Section 20.9.5.

        -
        -
        - -

        By placing the line:

        - -
         -vinum.autostart="YES"
         -
        - -

        into /boot/loader.conf, Vinum is instructed to automatically -scan all drives for Vinum information as part of the kernel startup.

        - -

        Note that it is not necessary to instruct the kernel where to look for the root -filesystem. /boot/loader looks up the name of the root device -in /etc/fstab, and passes this information on to the kernel. -When it comes to mount the root filesystem, the kernel figures out from the device name -provided which driver to ask to translate this into the internal device ID (major/minor -number).

        -
      • -
      -

      20.9.2 Making a Vinum-based Root +

      20.7.2 Making a Vinum-based Root Volume Accessible to the Bootstrap

      Since the current FreeBSD bootstrap is only 7.5 KB of code, and already has the burden -of reading files (like /boot/loader) from the UFS filesystem, +of reading files (like /boot/loader) from the UFS file system, it is sheer impossible to also teach it about internal Vinum structures so it could parse the Vinum configuration data, and figure out about the elements of a boot volume itself. Thus, some tricks are necessary to provide the bootstrap code with the illusion of a -standard "a" partition that contains the root filesystem.

      +standard "a" partition that contains the root file system.

      For this to be possible at all, the following requirements must be met for the root volume:

      @@ -153,9 +99,9 @@

    Note that it is desirable and possible that there are multiple plexes, each containing -one replica of the root filesystem. The bootstrap process will, however, only use one of +one replica of the root file system. The bootstrap process will, however, only use one of these replica for finding the bootstrap and all the files, until the kernel will -eventually mount the root filesystem itself. Each single subdisk within these plexes will +eventually mount the root file system itself. Each single subdisk within these plexes will then need its own "a" partition illusion, for the respective device to become bootable. It is not strictly needed that each of these faked "a" partitions is located at the same offset within its device, @@ -186,18 +132,18 @@

      # bsdlabel -e devname
     +class="REPLACEABLE">${devname}
      

    for each device that participates in the root volume. devname must be either the name of the disk (like ${devname} must be either the name of the disk (like da0) for disks without a slice (aka. fdisk) table, or the name of the slice (like ad0s1).

    If there is already an "a" partition on the device -(presumably, containing a pre-Vinum root filesystem), it should be renamed to something +(presumably, containing a pre-Vinum root file system), it should be renamed to something else, so it remains accessible (just in case), but will no longer be used by default to -bootstrap the system. Note that active partitions (like a root filesystem currently +bootstrap the system. Note that active partitions (like a root file system currently mounted) cannot be renamed, so this must be executed either when being booted from a “Fixit” medium, or in a two-step process, where (in a mirrored situation) the disk that has not been currently booted is being manipulated first.

    @@ -209,7 +155,7 @@ partition can be taken verbatim from the calculation above. The "fstype" should be 4.2BSD. The "fsize", "bsize", and "cpg" values should best be chosen to match the actual filesystem, +class="LITERAL">"cpg" values should best be chosen to match the actual file system, though they are fairly unimportant within this context.

    That way, a new "a" partition will be established that @@ -225,20 +171,20 @@

      # fsck -n /dev/devnamea
     +class="REPLACEABLE">${devname}a
      

    It should be remembered that all files containing control information must be relative -to the root filesystem in the Vinum volume which, when setting up a new Vinum root -volume, might not match the root filesystem that is currently active. So in particular, +to the root file system in the Vinum volume which, when setting up a new Vinum root +volume, might not match the root file system that is currently active. So in particular, the files /etc/fstab and /boot/loader.conf need to be taken care of.

    At next reboot, the bootstrap should figure out the appropriate control information -from the new Vinum-based root filesystem, and act accordingly. At the end of the kernel +from the new Vinum-based root file system, and act accordingly. At the end of the kernel initialization process, after all devices have been announced, the prominent notice that shows the success of this setup is a message like:

    @@ -248,7 +194,7 @@
    -

    20.9.3 Example of a Vinum-based Root +

    20.7.3 Example of a Vinum-based Root Setup

    After the Vinum root volume has been set up, the output of gvinum @@ -293,7 +239,7 @@ class="LITERAL">"offset" parameter is the sum of the offset within the Vinum partition "h", and the offset of this partition within the device (or slice). This is a typical setup that is necessary to avoid the problem -described in Section 20.9.4.3. It can also +described in Section 20.7.4.3. It can also be seen that the entire "a" partition is completely within the "h" partition containing all the Vinum data for this device.

    @@ -303,13 +249,13 @@

    -

    20.9.4 Troubleshooting

    +

    20.7.4 Troubleshooting

    If something goes wrong, a way is needed to recover from the situation. The following list contains few known pitfalls and solutions.

    -

    20.9.4.1 System Bootstrap Loads, but +

    20.7.4.1 System Bootstrap Loads, but System Does Not Boot

    If for any reason the system does not continue to boot, the bootstrap can be @@ -324,26 +270,26 @@

    When ready, the boot process can be continued with a boot -as. The options -as will request the kernel to ask for -the root filesystem to mount (-a), and make the boot process -stop in single-user mode (-s), where the root filesystem is +the root file system to mount (-a), and make the boot process +stop in single-user mode (-s), where the root file system is mounted read-only. That way, even if only one plex of a multi-plex volume has been mounted, no data inconsistency between plexes is being risked.

    -

    At the prompt asking for a root filesystem to mount, any device that contains a valid -root filesystem can be entered. If /etc/fstab had been set up +

    At the prompt asking for a root file system to mount, any device that contains a valid +root file system can be entered. If /etc/fstab had been set up correctly, the default should be something like ufs:/dev/gvinum/root. A typical alternate choice would be something like ufs:da0d which could be a hypothetical partition that -contains the pre-Vinum root filesystem. Care should be taken if one of the alias "a" partitions are entered here that are actually reference to the subdisks of the Vinum root device, because in a mirrored setup, this would only mount one -piece of a mirrored root device. If this filesystem is to be mounted read-write later on, +piece of a mirrored root device. If this file system is to be mounted read-write later on, it is necessary to remove the other plex(es) of the Vinum root volume since these plexes would otherwise carry inconsistent data.

    -

    20.9.4.2 Only Primary Bootstrap +

    20.7.4.2 Only Primary Bootstrap Loads

    If /boot/loader fails to load, but the primary bootstrap @@ -352,12 +298,12 @@ point, using the space key. This will make the bootstrap stop in stage two, see Section 12.3.2. An attempt can be made here to boot off an alternate partition, like the partition containing the -previous root filesystem that has been moved away from "a" +previous root file system that has been moved away from "a" above.

    -

    20.9.4.3 Nothing +

    20.7.4.3 Nothing Boots, the Bootstrap Panics

    This situation will happen if the bootstrap had been destroyed by the Vinum @@ -381,9 +327,32 @@

    -

    20.9.5 Differences for +

    20.7.5 Differences for FreeBSD 4.X

    +

    Vinum must be initialized early since it needs to supply the volume for the root +file system. By default, the Vinum kernel part is not looking for drives that might +contain Vinum volume information until the administrator (or one of the startup scripts) +issues a vinum start command.

    + +

    By placing the line:

    + +
     +vinum.autostart="YES"
     +
    + +

    into /boot/loader.conf, Vinum is instructed to automatically +scan all drives for Vinum information as part of the kernel startup.

    + +

    Note that it is not necessary to instruct the kernel where to look for the root +file system. /boot/loader looks up the name of the root device +in /etc/fstab, and passes this information on to the kernel. +When it comes to mount the root file system, the kernel figures out from the device name +provided which driver to ask to translate this into the internal device ID (major/minor +number).

    + + +

    Under FreeBSD 4.X, some internal functions required to make Vinum automatically scan all disks are missing, and the code that figures out the internal ID of the root device is not smart enough to handle a name like /dev/vinum/root @@ -402,7 +371,7 @@ listed, nor is it necessary to add each slice and/or partition explicitly, since Vinum will scan all slices and partitions of the named drives for valid Vinum headers.

    -

    Since the routines used to parse the name of the root filesystem, and derive the +

    Since the routines used to parse the name of the root file system, and derive the device ID (major/minor number) are only prepared to handle “classical” device names like /dev/ad0s1a, they cannot make any sense out of a root volume name like /dev/vinum/root. For that reason, Vinum @@ -422,7 +391,7 @@ name of the root device string being passed (that is, "vinum" in our case), it will use the pre-allocated device ID, instead of trying to figure out one itself. That way, during the usual automatic startup, it can continue to mount the Vinum -root volume for the root filesystem.

    +root volume for the root file system.

    However, when boot -a has been requesting to ask for entering the name of the root device manually, it must be noted that this routine still cannot @@ -447,7 +416,7 @@ accesskey="P">Prev Home -Next @@ -455,7 +424,7 @@ Configuring Vinum Up -Virtualization +Vinum Examples

    diff -r -u handbook.orig/vinum-vinum.html handbook/vinum-vinum.html --- handbook.orig/vinum-vinum.html 2008-03-22 05:43:54.000000000 +0100 +++ handbook/vinum-vinum.html 2008-04-08 14:40:26.000000000 +0200 @@ -8,7 +8,7 @@ - + @@ -42,21 +42,20 @@
    20.1 Synopsis
    -
    20.2 Disks Are Too Small
    +
    20.2 Introduction
    -
    20.3 Access Bottlenecks
    +
    20.3 Disk Performance Issues
    20.4 Data Integrity
    20.5 Vinum Objects
    -
    20.6 Some Examples
    +
    20.6 Configuring Vinum
    -
    20.7 Object Naming
    +
    20.7 Using Vinum for the Root File system
    -
    20.8 Configuring Vinum
    +
    20.8 Vinum Examples
    -
    20.9 Using Vinum for the Root Filesystem
    @@ -86,7 +85,9 @@ users safeguard themselves against such issues is through the use of multiple, and sometimes redundant, disks. In addition to supporting various cards and controllers for hardware RAID systems, the base FreeBSD system includes the Vinum Volume Manager, a block -device driver that implements virtual disk drives. vinum(4) +that implements virtual disk drives. Vinum is a so-called Volume Manager, a virtual disk driver that addresses these three problems. Vinum provides more flexibility, performance, and reliability than @@ -100,12 +101,13 @@

    Note: Starting with FreeBSD 5, Vinum has been rewritten in order to fit into the GEOM architecture (Chapter 19), retaining the original ideas, -terminology, and on-disk metadata. This rewrite is called gvinum (for GEOM -vinum). The following text usually refers to gvinum(8) +(for GEOM vinum). The following text usually refers to Vinum as an abstract name, regardless of the implementation -variant. Any command invocations should now be done using the gvinum command, and the name of the kernel module has been changed +variant. Any command invocations should now be done using the +gvinum(8) +command, and the name of the kernel module has been changed from vinum.ko to geom_vinum.ko, and all device nodes reside under /dev/gvinum instead of /dev/vinum. As of FreeBSD 6, the old Vinum implementation is no @@ -132,7 +134,7 @@ UFS Journaling Through GEOM Up -Disks Are Too Small +Introduction

    --- /dev/null 2008-04-08 15:00:00.000000000 +0200 +++ handbook/vinum-disk-performance-issues.html 2008-04-08 15:09:49.000000000 +0200 @@ -0,0 +1,148 @@ + + + + +Disk Performance Issues + + + + + + + + + + + +
    +

    20.3 Disk Performance Issues

    + +

    Modern systems frequently need to access data in a highly concurrent manner. For +example, large FTP or HTTP servers can maintain thousands of concurrent sessions and have +multiple 100 Mbit/s connections to the outside world. +

    + +

    +The most critical parameter is the load that a transfer places on the subsystem, in other words the time +for which a transfer occupies a drive. +

    + +

    In any disk transfer, the drive must first position the heads, wait for the first +sector to pass under the read head, and then perform the transfer. These actions can be +considered to be atomic: it does not make any sense to interrupt them. +The data transfer time is negligible compared to the time taken for positioning the heads.

    + +

    The traditional and obvious solution to this bottleneck is “more +spindles”: rather than using one large disk, it uses several smaller disks with the +same aggregate storage space. Each disk is capable of positioning and transferring +independently, so the effective throughput increases by a factor close to the number of +disks used.

    + +

    The exact throughput improvement is, of course, smaller than the number of disks +involved: although each drive is capable of transferring in parallel, there is no way to +ensure that the requests are evenly distributed across the drives. Inevitably the load on +one drive will be higher than on another.

    + +

    The evenness of the load on the disks is strongly dependent on the way the data is +shared across the drives. In the following discussion, it is convenient to think of the +disk storage as a large number of data sectors which are addressable by number, rather +like the pages in a book. +

    + +
    +

    20.3.1 Concatenation

    + +

    The most obvious method is to divide the virtual disk into +groups of consecutive sectors the size of the individual physical disks and store them in +this manner, rather like taking a large book and tearing it into smaller sections. This +method is called concatenation and +has the advantage that the disks are not required to have any specific size +relationships. It works well when the access to the virtual disk is spread evenly about +its address space. When access is concentrated on a smaller area, the improvement is less +marked. Figure 20-1 illustrates +the sequence in which storage units are allocated in a concatenated organization.

    + +

    + +
    +

    +

    Figure 20-1. Concatenated Organization

    +
    + +
    + +
    +

    20.3.2 Striping

    + +

    An alternative mapping is to divide the address space into smaller, equal-sized +components and store them sequentially on different devices. For example, the first 256 +sectors may be stored on the first disk, the next 256 sectors on the next disk and so on. +After filling the last disk, the process repeats until the disks are full. This mapping +is called striping or RAID-0. Striping requires somewhat +more effort to locate the data, and it can cause additional I/O load where a transfer is +spread over multiple disks, but it can also provide a more constant load across the +disks. Figure 20-2 illustrates +the sequence in which storage units are allocated in a striped organization.

    + +

    + +
    +

    +

    Figure 20-2. Striped Organization

    +
    + +
    + + + +

    This, and other documents, can be downloaded from ftp://ftp.FreeBSD.org/pub/FreeBSD/doc/.

    + +

    For questions about FreeBSD, read the documentation before contacting <questions@FreeBSD.org>.
    +For questions about this documentation, e-mail <doc@FreeBSD.org>.

    + + + --- handbook.orig/virtualization.html 2008-03-22 05:43:54.000000000 +0100 +++ handbook/virtualization.html 2008-04-08 15:14:45.000000000 +0200 @@ -7,8 +7,8 @@ - + @@ -23,7 +23,7 @@ -Prev -Prev Home @@ -126,7 +126,7 @@ -Using Vinum for the Root Filesystem +Vinum Examples Up FreeBSD as a Guest OS --- handbook.orig/raid.html 2008-03-22 05:43:54.000000000 +0100 +++ handbook/raid.html 2008-04-08 15:43:16.000000000 +0200 @@ -93,8 +93,8 @@

    Next, consider how to attach them as part of the file system. You should research both -vinum(8) (vinum(4) (Chapter 20) and ccd(4). In this @@ -309,17 +309,18 @@

    18.4.1.2 The Vinum Volume Manager

    -

    The Vinum Volume Manager is a block device driver which implements virtual disk +

    The Vinum Volume Manager is a block device driver +vinum(4) +which implements virtual disk drives. It isolates disk hardware from the block device interface and maps data in ways which result in an increase in flexibility, performance and reliability compared to the -traditional slice view of disk storage. vinum(8) implements +traditional slice view of disk storage. Vinum implements the RAID-0, RAID-1 and RAID-5 models, both individually and in combination.

    -

    See Chapter 20 for more information about vinum(8).

    +

    See Chapter 20 for more information about most recent Vinum implementation, gvinum(8), under the Geom architecture +Chapter 19 +.

    --0-968980677-1207663836=:26150-- From owner-freebsd-doc@FreeBSD.ORG Tue Apr 8 16:10:00 2008 Return-Path: Delivered-To: freebsd-doc@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id F019910656A8 for ; Tue, 8 Apr 2008 16:10:00 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id CCE4A8FC25 for ; Tue, 8 Apr 2008 16:10:00 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (gnats@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.2/8.14.2) with ESMTP id m38GA0pr014655 for ; Tue, 8 Apr 2008 16:10:00 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.2/8.14.1/Submit) id m38GA0dK014654; Tue, 8 Apr 2008 16:10:00 GMT (envelope-from gnats) Resent-Date: Tue, 8 Apr 2008 16:10:00 GMT Resent-Message-Id: <200804081610.m38GA0dK014654@freefall.freebsd.org> Resent-From: FreeBSD-gnats-submit@FreeBSD.org (GNATS Filer) Resent-To: freebsd-doc@FreeBSD.org Resent-Reply-To: FreeBSD-gnats-submit@FreeBSD.org, Thomas Abthorpe Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 82F17106566C; Tue, 8 Apr 2008 16:01:54 +0000 (UTC) (envelope-from tabthorpe@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 576B98FC17; Tue, 8 Apr 2008 16:01:54 +0000 (UTC) (envelope-from tabthorpe@FreeBSD.org) Received: from freefall.freebsd.org (tabthorpe@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.2/8.14.2) with ESMTP id m38G1s9L014593; Tue, 8 Apr 2008 16:01:54 GMT (envelope-from tabthorpe@freefall.freebsd.org) Received: (from tabthorpe@localhost) by freefall.freebsd.org (8.14.2/8.14.1/Submit) id m38G1sHw014592; Tue, 8 Apr 2008 16:01:54 GMT (envelope-from tabthorpe) Message-Id: <200804081601.m38G1sHw014592@freefall.freebsd.org> Date: Tue, 8 Apr 2008 16:01:54 GMT From: Thomas Abthorpe To: FreeBSD-gnats-submit@FreeBSD.org X-Send-Pr-Version: 3.113 Cc: tabthorpe@FreeBSD.org, miwi@FreeBSD.org Subject: docs/122578: Use of DESTDIR in portstree is deprecated, remove reference in Porters Handbook X-BeenThere: freebsd-doc@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Thomas Abthorpe List-Id: Documentation project List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 08 Apr 2008 16:10:01 -0000 >Number: 122578 >Category: docs >Synopsis: Use of DESTDIR in portstree is deprecated, remove reference in Porters Handbook >Confidential: no >Severity: non-critical >Priority: low >Responsible: freebsd-doc >State: open >Quarter: >Keywords: >Date-Required: >Class: change-request >Submitter-Id: current-users >Arrival-Date: Tue Apr 08 16:10:00 UTC 2008 >Closed-Date: >Last-Modified: >Originator: Thomas Abthorpe >Release: FreeBSD 7.0-STABLE i386 >Organization: FreeBSD.GoodKing.Ca >Environment: System: FreeBSD freefall.freebsd.org 7.0-STABLE FreeBSD 7.0-STABLE #33: Fri Feb 29 00:53:41 UTC 2008 simon@freefall.freebsd.org:/usr/src/sys/i386/compile/FREEFALL i386 >Description: - Remove section refering to DESTDIR in portstree >How-To-Repeat: >Fix: --- ph-book.patch begins here --- Index: book.sgml =================================================================== RCS file: /home/dcvs/doc/en_US.ISO8859-1/books/porters-handbook/book.sgml,v retrieving revision 1.914 diff -u -r1.914 book.sgml --- book.sgml 6 Apr 2008 21:48:28 -0000 1.914 +++ book.sgml 8 Apr 2008 15:47:04 -0000 @@ -8893,24 +8893,13 @@ - <makevar>PREFIX</makevar> and <makevar>DESTDIR</makevar> + <makevar>PREFIX</makevar> PREFIX determines the location where the port will install. It is usually /usr/local, or /opt. User can set PREFIX to anything he wants. Your port must respect this variable. - DESTDIR, if set by user, determines the - complete alternative environment, usually a jail, or an installed - system mounted elsewhere than /. - A port will actually install into - DESTDIR/PREFIX, and register - with the package database in DESTDIR/var/db/pkg. - As DESTDIR is handled automatically by the - ports infrastructure via calling &man.chroot.8;, you do not - need any modifications or any extra care to write - DESTDIR-compliant ports. - The value of PREFIX will be set to LOCALBASE (default /usr/local). If --- ph-book.patch ends here --- >Release-Note: >Audit-Trail: >Unformatted: From owner-freebsd-doc@FreeBSD.ORG Tue Apr 8 16:16:05 2008 Return-Path: Delivered-To: freebsd-doc@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 50AF4106564A; Tue, 8 Apr 2008 16:16:05 +0000 (UTC) (envelope-from miwi@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 0434F8FC0A; Tue, 8 Apr 2008 16:16:05 +0000 (UTC) (envelope-from miwi@FreeBSD.org) Received: from freefall.freebsd.org (miwi@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.2/8.14.2) with ESMTP id m38GG4s8016252; Tue, 8 Apr 2008 16:16:04 GMT (envelope-from miwi@freefall.freebsd.org) Received: (from miwi@localhost) by freefall.freebsd.org (8.14.2/8.14.1/Submit) id m38GG3kx016248; Tue, 8 Apr 2008 16:16:03 GMT (envelope-from miwi) Date: Tue, 8 Apr 2008 16:16:03 GMT Message-Id: <200804081616.m38GG3kx016248@freefall.freebsd.org> To: miwi@FreeBSD.org, freebsd-doc@FreeBSD.org, miwi@FreeBSD.org From: miwi@FreeBSD.org Cc: Subject: Re: docs/122578: [patch] Use of DESTDIR in portstree is deprecated, remove reference in Porters Handbook X-BeenThere: freebsd-doc@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Documentation project List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 08 Apr 2008 16:16:05 -0000 Synopsis: [patch] Use of DESTDIR in portstree is deprecated, remove reference in Porters Handbook Responsible-Changed-From-To: freebsd-doc->miwi Responsible-Changed-By: miwi Responsible-Changed-When: Tue Apr 8 16:16:03 UTC 2008 Responsible-Changed-Why: I'll take it. http://www.freebsd.org/cgi/query-pr.cgi?pr=122578 From owner-freebsd-doc@FreeBSD.ORG Tue Apr 8 17:54:18 2008 Return-Path: Delivered-To: freebsd-doc@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 684E41065678 for ; Tue, 8 Apr 2008 17:54:18 +0000 (UTC) (envelope-from tom@ctors.net) Received: from yorgi.telenet-ops.be (yorgi.telenet-ops.be [195.130.133.69]) by mx1.freebsd.org (Postfix) with ESMTP id 2BB6D8FC15 for ; Tue, 8 Apr 2008 17:54:18 +0000 (UTC) (envelope-from tom@ctors.net) Received: from edna.telenet-ops.be (edna.telenet-ops.be [195.130.132.58]) by yorgi.telenet-ops.be (Postfix) with ESMTP id 15137680B2E for ; Tue, 8 Apr 2008 19:43:32 +0200 (CEST) Received: from localhost (localhost.localdomain [127.0.0.1]) by edna.telenet-ops.be (Postfix) with SMTP id 2DE08E4018 for ; Tue, 8 Apr 2008 19:43:30 +0200 (CEST) Received: from [10.55.55.10] (dD5761519.access.telenet.be [213.118.21.25]) by edna.telenet-ops.be (Postfix) with ESMTP id 21ED6E400D for ; Tue, 8 Apr 2008 19:43:19 +0200 (CEST) Message-ID: <47FBAEDE.30804@ctors.net> Date: Tue, 08 Apr 2008 19:43:58 +0200 From: Tom Van Looy User-Agent: Thunderbird 2.0.0.12 (X11/20080227) MIME-Version: 1.0 To: freebsd-doc@freebsd.org Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Subject: Handbook, wrong path to smb.conf.default X-BeenThere: freebsd-doc@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Documentation project List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 08 Apr 2008 17:54:18 -0000 The handbook mentions a wrong path to smb.conf.default: --- network-samba.html.old 2008-04-08 19:33:24.551066400 +0200 +++ network-samba.html 2008-04-08 19:35:29.534212800 +0200 @@ -60,7 +60,7 @@

    27.9.2 Configuration

    A default Samba configuration file is installed as /usr/local/share/examples/smb.conf.default. This file must be +class="FILENAME">/usr/local/share/examples/samba/smb.conf.default. This file must be copied to /usr/local/etc/smb.conf and customized before Samba can be used.

    From owner-freebsd-doc@FreeBSD.ORG Tue Apr 8 18:40:04 2008 Return-Path: Delivered-To: freebsd-doc@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 6B40A106564A for ; Tue, 8 Apr 2008 18:40:04 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 59AA28FC0A for ; Tue, 8 Apr 2008 18:40:04 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (gnats@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.2/8.14.2) with ESMTP id m38Ie45J027555 for ; Tue, 8 Apr 2008 18:40:04 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.2/8.14.1/Submit) id m38Ie4NS027554; Tue, 8 Apr 2008 18:40:04 GMT (envelope-from gnats) Date: Tue, 8 Apr 2008 18:40:04 GMT Message-Id: <200804081840.m38Ie4NS027554@freefall.freebsd.org> To: freebsd-doc@FreeBSD.org From: John Ferrell Cc: Subject: Re: docs/122351: [patch] update to PF section of handbook X-BeenThere: freebsd-doc@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: John Ferrell List-Id: Documentation project List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 08 Apr 2008 18:40:04 -0000 The following reply was made to PR docs/122351; it has been noted by GNATS. From: John Ferrell To: bug-followup@FreeBSD.org, jdferrell3@yahoo.com Cc: Subject: Re: docs/122351: [patch] update to PF section of handbook Date: Tue, 8 Apr 2008 11:03:20 -0700 (PDT) --0-2114477508-1207677800=:37890 Content-Type: multipart/alternative; boundary="0-380745449-1207677800=:37890" --0-380745449-1207677800=:37890 Content-Type: text/plain; charset=us-ascii Minor change to diff file. (New diff attached) John --0-380745449-1207677800=:37890 Content-Type: text/html; charset=us-ascii
    Minor change to diff file.  (New diff attached)

    John
    --0-380745449-1207677800=:37890-- --0-2114477508-1207677800=:37890 Content-Type: text/plain; name="chapter.sgml.diff.txt" Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="chapter.sgml.diff.txt" LS0tIGNoYXB0ZXIuc2dtbC5vcmlnCTIwMDgtMDMtMjggMTc6MDc6MDEuMDAw MDAwMDAwIC0wNDAwCisrKyBjaGFwdGVyLnNnbWwJMjAwOC0wNC0wMSAxNDox NDoxOS4wMDAwMDAwMDAgLTA0MDAKQEAgLTE4Miw2ICsxODIsMTcgQEAKICAg PC9zZWN0MT4KIAogICA8c2VjdDEgaWQ9ImZpcmV3YWxscy1wZiI+CisgICAg ICA8c2VjdDFpbmZvPiAgICAKKyAgICAgICAgPGF1dGhvcmdyb3VwPgorICAg ICAgICAgIDxhdXRob3I+CisgICAgICAgICAgICA8Zmlyc3RuYW1lPkpvaG48 L2ZpcnN0bmFtZT4KKyAgICAgICAgICAgIDxzdXJuYW1lPkZlcnJlbGw8L3N1 cm5hbWU+CisgICAgICAgICAgICA8Y29udHJpYj5SZXZpc2VkIGFuZCB1cGRh dGVkIGJ5IDwvY29udHJpYj4KKyAgICAgICAgICAgIDwhLS0gMjQgTWFyY2gg MjAwOCAtLT4KKyAgICAgICAgICA8L2F1dGhvcj4KKyAgICAgICAgPC9hdXRo b3Jncm91cD4KKyAgICAgIDwvc2VjdDFpbmZvPgorCiAgICAgPHRpdGxlPlRo ZSBPcGVuQlNEIFBhY2tldCBGaWx0ZXIgKFBGKSBhbmQKICAgICAgIDxhY3Jv bnltPkFMVFE8L2Fjcm9ueW0+PC90aXRsZT4KIApAQCAtMTkyLDYwICsyMDMs NjUgQEAKICAgICA8L2luZGV4dGVybT4KIAogICAgIDxwYXJhPkFzIG9mIEp1 bHkgMjAwMyB0aGUgT3BlbkJTRCBmaXJld2FsbCBzb2Z0d2FyZSBhcHBsaWNh dGlvbgotICAgICAga25vd24gYXMgPGFjcm9ueW0+UEY8L2Fjcm9ueW0+IHdh cyBwb3J0ZWQgdG8gJm9zOyBhbmQgd2FzIG1hZGUKLSAgICAgIGF2YWlsYWJs ZSBpbiB0aGUgJm9zOyBQb3J0cyBDb2xsZWN0aW9uOyB0aGUgZmlyc3QgcmVs ZWFzZSB0aGF0Ci0gICAgICBjb250YWluZWQgPGFjcm9ueW0+UEY8L2Fjcm9u eW0+IGFzIGFuIGludGVncmF0ZWQgcGFydCBvZiB0aGUKLSAgICAgIGJhc2Ug c3lzdGVtIHdhcyAmb3M7Jm5ic3A7NS4zIGluIE5vdmVtYmVyIDIwMDQuCi0g ICAgICA8YWNyb255bT5QRjwvYWNyb255bT4gaXMgYSBjb21wbGV0ZSwgZnVs bHkgZmVhdHVyZWQgZmlyZXdhbGwKKyAgICAgIGtub3duIGFzIDxhY3Jvbnlt PlBGPC9hY3JvbnltPiB3YXMgcG9ydGVkIHRvICZvczsgYW5kIAorICAgICAg bWFkZSBhdmFpbGFibGUgaW4gdGhlICZvczsgUG9ydHMgQ29sbGVjdGlvbi4g IFJlbGVhc2VkIGluIDIwMDQsIAorICAgICAgJm9zOyA1LjMgd2FzIHRoZSBm aXJzdCByZWxlYXNlIHRoYXQgY29udGFpbmVkIAorICAgICAgPGFjcm9ueW0+ UEY8L2Fjcm9ueW0+IGFzIGFuIGludGVncmF0ZWQgcGFydCBvZiB0aGUgYmFz ZSBzeXN0ZW0uCisgICAgICA8YWNyb255bT5QRjwvYWNyb255bT4gaXMgYSBj b21wbGV0ZSwgZnVsbC1mZWF0dXJlZCBmaXJld2FsbAogICAgICAgdGhhdCBo YXMgb3B0aW9uYWwgc3VwcG9ydCBmb3IgPGFjcm9ueW0+QUxUUTwvYWNyb255 bT4gKEFsdGVybmF0ZQogICAgICAgUXVldWluZykuICA8YWNyb255bT5BTFRR PC9hY3JvbnltPiBwcm92aWRlcyBRdWFsaXR5IG9mIFNlcnZpY2UKLSAgICAg ICg8YWNyb255bT5Rb1M8L2Fjcm9ueW0+KSBiYW5kd2lkdGggc2hhcGluZyB0 aGF0IGFsbG93cwotICAgICAgZ3VhcmFudGVlaW5nIGJhbmR3aWR0aCB0byBk aWZmZXJlbnQgc2VydmljZXMgYmFzZWQgb24gZmlsdGVyaW5nCi0gICAgICBy dWxlcy4gIFRoZSBPcGVuQlNEIFByb2plY3QgZG9lcyBhbiBvdXRzdGFuZGlu ZyBqb2Igb2YKLSAgICAgIG1haW50YWluaW5nIHRoZSBQRiBVc2VyJ3MgR3Vp ZGUgdGhhdCBpdCB3aWxsIG5vdCBiZSBtYWRlIHBhcnQgb2YKLSAgICAgIHRo aXMgaGFuZGJvb2sgZmlyZXdhbGwgc2VjdGlvbiBhcyB0aGF0IHdvdWxkIGp1 c3QgYmUgZHVwbGljYXRlZAotICAgICAgZWZmb3J0LjwvcGFyYT4KLQotICAg IDxwYXJhPk1vcmUgaW5mbyBjYW4gYmUgZm91bmQgYXQgdGhlIFBGIGZvciAm b3M7IHdlYiBzaXRlOiA8dWxpbmsKLQl1cmw9Imh0dHA6Ly9wZjRmcmVlYnNk LmxvdmUycGFydHkubmV0LyI+PC91bGluaz4uPC9wYXJhPgotCi0gICAgPHNl Y3QyPgotICAgICAgPHRpdGxlPkVuYWJsaW5nIFBGPC90aXRsZT4KLQotICAg ICAgPHBhcmE+UEYgaXMgaW5jbHVkZWQgaW4gdGhlIGJhc2ljICZvczsgaW5z dGFsbCBmb3IgdmVyc2lvbnMgbmV3ZXIKLQl0aGFuIDUuMyBhcyBhIHNlcGFy YXRlIHJ1biB0aW1lIGxvYWRhYmxlIG1vZHVsZS4gIFRoZSBzeXN0ZW0KLQl3 aWxsIGR5bmFtaWNhbGx5IGxvYWQgdGhlIFBGIGtlcm5lbCBsb2FkYWJsZSBt b2R1bGUgd2hlbiB0aGUKLQlyYy5jb25mIHN0YXRlbWVudCA8bGl0ZXJhbD5w Zl9lbmFibGU9IllFUyI8L2xpdGVyYWw+IGlzIHVzZWQuCi0JVGhlIGxvYWRh YmxlIG1vZHVsZSB3YXMgY3JlYXRlZCB3aXRoICZtYW4ucGZsb2cuNDsgbG9n Z2luZwotCWVuYWJsZWQuPC9wYXJhPgotCi0gICAgICA8bm90ZT4KLQk8cGFy YT5UaGUgbW9kdWxlIGFzc3VtZXMgdGhlIHByZXNlbmNlIG9mIDxsaXRlcmFs Pm9wdGlvbnMKLQkgICAgSU5FVDwvbGl0ZXJhbD4gYW5kIDxsaXRlcmFsPmRl dmljZSBicGY8L2xpdGVyYWw+LiAgVW5sZXNzCi0JICA8bGl0ZXJhbD5OT0lO RVQ2PC9saXRlcmFsPiBmb3IgJm9zOyBwcmlvciB0byA2LjAtUkVMRUFTRSBh bmQKLQkgIDxsaXRlcmFsPk5PX0lORVQ2PC9saXRlcmFsPiBmb3IgbGF0ZXIg cmVsZWFzZXMgKGZvciBleGFtcGxlIGluCi0JICAmbWFuLm1ha2UuY29uZi41 Oykgd2FzIGRlZmluZWQgZHVyaW5nIHRoZSBidWlsZCwgaXQgYWxzbwotCSAg cmVxdWlyZXMgPGxpdGVyYWw+b3B0aW9ucyBJTkVUNjwvbGl0ZXJhbD4uPC9w YXJhPgotICAgICAgPC9ub3RlPgorICAgICAgKDxhY3JvbnltPlFvUzwvYWNy b255bT4pIGZ1bmN0aW9uYWxpdHkuPC9wYXJhPgogCi0gICAgICA8cGFyYT5P bmNlIHRoZSBrZXJuZWwgbW9kdWxlIGlzIGxvYWRlZCBvciB0aGUga2VybmVs IGlzIHN0YXRpY2FsbHkKLQlidWlsdCB3aXRoIFBGIHN1cHBvcnQsIGl0IGlz IHBvc3NpYmxlIHRvIGVuYWJsZSBvciBkaXNhYmxlCi0JPGFwcGxpY2F0aW9u PnBmPC9hcHBsaWNhdGlvbj4gd2l0aCB0aGUgPGNvbW1hbmQ+cGZjdGw8L2Nv bW1hbmQ+Ci0JY29tbWFuZC48L3BhcmE+Ci0KLSAgICAgIDxwYXJhPlRoaXMg ZXhhbXBsZSBkZW1vbnN0cmF0ZXMgaG93IHRvIGVuYWJsZQotCTxhcHBsaWNh dGlvbj5wZjwvYXBwbGljYXRpb24+OjwvcGFyYT4KLQotICAgICAgPHNjcmVl bj4mcHJvbXB0LnJvb3Q7IDx1c2VyaW5wdXQ+cGZjdGwgLWU8L3VzZXJpbnB1 dD48L3NjcmVlbj4KLQotICAgICAgPHBhcmE+VGhlIDxjb21tYW5kPnBmY3Rs PC9jb21tYW5kPiBjb21tYW5kIHByb3ZpZGVzIGEgd2F5IHRvIHdvcmsKLQl3 aXRoIHRoZSA8YXBwbGljYXRpb24+cGY8L2FwcGxpY2F0aW9uPiBmaXJld2Fs bC4gSXQgaXMgYSBnb29kCi0JaWRlYSB0byBjaGVjayB0aGUgJm1hbi5wZmN0 bC44OyBtYW51YWwgcGFnZSB0byBmaW5kIG91dCBtb3JlCi0JaW5mb3JtYXRp b24gYWJvdXQgdXNpbmcgaXQuPC9wYXJhPgorICAgIDxwYXJhPlRoZSBPcGVu QlNEIFByb2plY3QgZG9lcyBhbiBvdXRzdGFuZGluZyBqb2Igb2YKKyAgICAg IG1haW50YWluaW5nIHRoZSAKKyAgICAgIDx1bGluayB1cmw9Imh0dHA6Ly93 d3cub3BlbmJzZC5vcmcvZmFxL3BmLyI+UEYgRkFRPC91bGluaz4uICAKKyAg ICAgIEFzIHN1Y2gsIHRoaXMgc2VjdGlvbiBvZiB0aGUgaGFuZGJvb2sgd2ls bCBmb2N1cyBvbiAgCisgICAgICA8YWNyb255bT5QRjwvYWNyb255bT4gYXMg aXQgcGVydGFpbnMgdG8gJm9zOyB3aGlsZSBwcm92aWRpbmcgCisgICAgICBz b21lIGdlbmVyYWwgaW5mb3JtYXRpb24gcmVnYXJkaW5nIHVzYWdlLiAgRm9y IGRldGFpbGVkIHVzYWdlIAorICAgICAgaW5mb3JtYXRpb24gcGxlYXNlIHJl ZmVyIHRvIHRoZSAKKyAgICAgIDx1bGluayB1cmw9Imh0dHA6Ly93d3cub3Bl bmJzZC5vcmcvZmFxL3BmLyI+UEYgRkFRPC91bGluaz4uICAgICAgCisgICAg ICA8L3BhcmE+CisKKyAgICA8cGFyYT5Nb3JlIGluZm9ybWF0aW9uIGFib3V0 IDxhY3JvbnltPlBGPC9hY3JvbnltPiBmb3IgJm9zOyAKKyAgICAgIGNhbiBi ZSBmb3VuZCBhdCAKKyAgICAgIDx1bGluayB1cmw9Imh0dHA6Ly9wZjRmcmVl YnNkLmxvdmUycGFydHkubmV0LyI+PC91bGluaz4uPC9wYXJhPgorCisgICAg PHNlY3QyPgorICAgICAgPHRpdGxlPlVzaW5nIHRoZSBQRiBsb2FkYWJsZSBr ZXJuZWwgbW9kdWxlPC90aXRsZT4KKworICAgICAgPHBhcmE+U2luY2UgdGhl IHJlbGVhc2Ugb2YgJm9zOyA1LjMsIFBGIGhhcyBiZWVuIGluY2x1ZGVkIGlu IHRoZSAKKyAgICAgICAgYmFzaWMgaW5zdGFsbCBhcyBhIHNlcGFyYXRlIHJ1 biB0aW1lIGxvYWRhYmxlIG1vZHVsZS4gIFRoZSAKKyAgICAgICAgc3lzdGVt IHdpbGwgZHluYW1pY2FsbHkgbG9hZCB0aGUgUEYga2VybmVsIG1vZHVsZSB3 aGVuIHRoZSAKKyAgICAgICAgJm1hbi5yYy5jb25mLjU7IHN0YXRlbWVudCA8 bGl0ZXJhbD5wZl9lbmFibGU9IllFUyI8L2xpdGVyYWw+IAorICAgICAgICBp cyBwcmVzZW50LiAgSG93ZXZlciwgdGhlIDxhY3JvbnltPlBGPC9hY3Jvbnlt PiBtb2R1bGUgd2lsbCAKKyAgICAgICAgbm90IGxvYWQgaWYgdGhlIHN5c3Rl bSBjYW5ub3QgZmluZCBhIDxhY3JvbnltPlBGPC9hY3JvbnltPiAKKyAgICAg ICAgcnVsZXNldCBjb25maWd1cmF0aW9uIGZpbGUuICBUaGUgZGVmYXVsdCBs b2NhdGlvbiBpcyAKKyAgICAgICAgPGZpbGVuYW1lPi9ldGMvcGYuY29uZjwv ZmlsZW5hbWU+LiAgSWYgeW91ciAKKyAgICAgICAgPGFjcm9ueW0+UEY8L2Fj cm9ueW0+IHJ1bGVzZXQgaXMgbG9jYXRlZCBzb21ld2hlcmUgZWxzZSB1c2Ug CisgICAgICAgIDxvcHRpb24+cGZfcnVsZXM9IjxyZXBsYWNlYWJsZT4vcGF0 aC9wZi5ydWxlczwvcmVwbGFjZWFibGU+Ijwvb3B0aW9uPgorICAgICAgICB0 byBzcGVjaWZ5IHRoZSBsb2NhdGlvbi48L3BhcmE+CisKKyAgICAgICAgPG5v dGU+CisgICAgICAgICAgPHBhcmE+QXMgb2YgJm9zOyA3LjAgdGhlIHNhbXBs ZSA8ZmlsZW5hbWU+cGYuY29uZjwvZmlsZW5hbWU+IHRoYXQgCisgICAgICAg ICAgICB3YXMgaW4gPGZpbGVuYW1lPi9ldGMvPC9maWxlbmFtZT4gaGFzIGJl ZW4gbW92ZWQgdG8gCisgICAgICAgICAgICA8ZmlsZW5hbWU+L3Vzci9zaGFy ZS9leGFtcGxlcy9wZi88L2ZpbGVuYW1lPi4gIEZvciAmb3M7IHZlcnNpb25z IAorICAgICAgICAgICAgcHJpb3IgdG8gNy4wIHRoZXJlIGlzIGFuIDxmaWxl bmFtZT4vZXRjL3BmLmNvbmY8L2ZpbGVuYW1lPiBieSAKKyAgICAgICAgICAg IGRlZmF1bHQuPC9wYXJhPgorICAgICAgICA8L25vdGU+CisKKyAgICAgIDxw YXJhPlRoZSA8YWNyb255bT5QRjwvYWNyb255bT4gbW9kdWxlIGNhbiBhbHNv IGJlIGxvYWRlZCBtYW51YWxseSAKKyAgICAgICAgZnJvbSB0aGUgY29tbWFu ZCBsaW5lOjwvcGFyYT4KKworICAgICAgPHNjcmVlbj4mcHJvbXB0LnJvb3Q7 IDx1c2VyaW5wdXQ+a2xkbG9hZCBwZi5rbzwvdXNlcmlucHV0Pjwvc2NyZWVu PgorCisgICAgICA8cGFyYT5UaGUgbG9hZGFibGUgbW9kdWxlIHdhcyBjcmVh dGVkIHdpdGggJm1hbi5wZmxvZy40OyBlbmFibGVkIAorICAgICAgICAgd2hp Y2ggcHJvdmlkZXMgc3VwcG9ydCBmb3IgbG9nZ2luZy4gIElmIHlvdSBuZWVk IG90aGVyIAorICAgICAgICAgPGFjcm9ueW0+UEY8L2Fjcm9ueW0+IGZlYXR1 cmVzIHlvdSB3aWxsIG5lZWQgdG8gY29tcGlsZSAKKyAgICAgICAgIDxhY3Jv bnltPlBGPC9hY3JvbnltPiBzdXBwb3J0IGludG8gdGhlIGtlcm5lbC48L3Bh cmE+ICAKICAgICA8L3NlY3QyPgogCiAgICAgPHNlY3QyPgotICAgICAgPHRp dGxlPktlcm5lbCBvcHRpb25zPC90aXRsZT4KKyAgICAgIDx0aXRsZT5QRiBr ZXJuZWwgb3B0aW9uczwvdGl0bGU+CiAKICAgICAgIDxpbmRleHRlcm0+CiAJ PHByaW1hcnk+a2VybmVsIG9wdGlvbnM8L3ByaW1hcnk+CkBAIC0yNjUsMjIg KzI4MSwyNyBAQAogCTxzZWNvbmRhcnk+ZGV2aWNlIHBmc3luYzwvc2Vjb25k YXJ5PgogICAgICAgPC9pbmRleHRlcm0+CiAKLSAgICAgIDxwYXJhPkl0IGlz IG5vdCBhIG1hbmRhdG9yeSByZXF1aXJlbWVudCB0aGF0IHlvdSBlbmFibGUg UEYgYnkKLQljb21waWxpbmcgdGhlIGZvbGxvd2luZyBvcHRpb25zIGludG8g dGhlICZvczsga2VybmVsLiAgSXQgaXMKLQlvbmx5IHByZXNlbnRlZCBoZXJl IGFzIGJhY2tncm91bmQgaW5mb3JtYXRpb24uICBDb21waWxpbmcgUEYKLQlp bnRvIHRoZSBrZXJuZWwgY2F1c2VzIHRoZSBsb2FkYWJsZSBtb2R1bGUgdG8g bmV2ZXIgYmUKLQl1c2VkLjwvcGFyYT4KLQotICAgICAgPHBhcmE+U2FtcGxl IGtlcm5lbCBjb25maWcgUEYgb3B0aW9uIHN0YXRlbWVudHMgYXJlIGluIHRo ZQotCTxmaWxlbmFtZT4vdXNyL3NyYy9zeXMvY29uZi9OT1RFUzwvZmlsZW5h bWU+IGtlcm5lbCBzb3VyY2UgYW5kCi0JYXJlIHJlcHJvZHVjZWQgaGVyZTo8 L3BhcmE+CisgICAgICA8cGFyYT5XaGlsZSBpdCBpcyBub3QgbmVjZXNzYXJ5 IHRoYXQgeW91IGNvbXBpbGUKKyAgICAgICAgPGFjcm9ueW0+UEY8L2Fjcm9u eW0+IHN1cHBvcnQgaW50byB0aGUgJm9zOyBrZXJuZWwsIHlvdSBtYXkgd2Fu dCAKKyAgICAgICAgdG8gZG8gc28gdG8gdGFrZSBhZHZhbnRhZ2Ugb2Ygb25l IG9mIFBGJ3MgYWR2YW5jZWQgZmVhdHVyZXMgdGhhdCAKKyAgICAgICAgaXMg bm90IGluY2x1ZGVkIGluIHRoZSBsb2FkYWJsZSBtb2R1bGUsIG5hbWVseSAm bWFuLnBmc3luYy40Oy4gIAorICAgICAgICBwZnN5bmMgaXMgYSBwc2V1ZG8t ZGV2aWNlIHRoYXQgZXhwb3NlcyBjZXJ0YWluIGNoYW5nZXMgdG8KKyAgICAg ICAgdGhlIHN0YXRlIHRhYmxlIHVzZWQgYnkgPGFjcm9ueW0+UEY8L2Fjcm9u eW0+LiAgcGZzeW5jIGNhbiBiZSAKKyAgICAgICAgcGFpcmVkIHdpdGggJm1h bi5jYXJwLjQ7IHRvIGNyZWF0ZSBmYWlsb3ZlciBmaXJld2FsbHMgdXNpbmcg CisgICAgICAgIDxhY3JvbnltPlBGPC9hY3JvbnltPi4gIE1vcmUgaW5mb3Jt YXRpb24gb24gCisgICAgICAgIDxhY3JvbnltPkNBUlA8L2Fjcm9ueW0+IGNh biBiZSBmb3VuZCBpbiAKKyAgICAgICAgPGxpbmsgbGlua2VuZD0iY2FycCI+ Y2hhcHRlciAyOTwvbGluaz4gb2YgdGhlIGhhbmRib29rLjwvcGFyYT4KKwor ICAgICAgPHBhcmE+VGhlIDxhY3JvbnltPlBGPC9hY3JvbnltPiBrZXJuZWwg b3B0aW9ucyBjYW4gYmUgZm91bmQgaW4gCisJPGZpbGVuYW1lPi91c3Ivc3Jj L3N5cy9jb25mL05PVEVTPC9maWxlbmFtZT4gYW5kIGFyZSByZXByb2R1Y2Vk IAorICAgICAgICBiZWxvdzo8L3BhcmE+CiAKICAgICAgIDxwcm9ncmFtbGlz dGluZz5kZXZpY2UgcGYKIGRldmljZSBwZmxvZwogZGV2aWNlIHBmc3luYzwv cHJvZ3JhbWxpc3Rpbmc+CiAKICAgICAgIDxwYXJhPjxsaXRlcmFsPmRldmlj ZSBwZjwvbGl0ZXJhbD4gZW5hYmxlcyBzdXBwb3J0IGZvciB0aGUKLQk8cXVv dGU+UGFja2V0IEZpbHRlcjwvcXVvdGU+IGZpcmV3YWxsLjwvcGFyYT4KKwk8 cXVvdGU+UGFja2V0IEZpbHRlcjwvcXVvdGU+IGZpcmV3YWxsICgmbWFuLnBm LjQ7KS48L3BhcmE+CiAKICAgICAgIDxwYXJhPjxsaXRlcmFsPmRldmljZSBw ZmxvZzwvbGl0ZXJhbD4gZW5hYmxlcyB0aGUgb3B0aW9uYWwKIAkmbWFuLnBm bG9nLjQ7IHBzZXVkbyBuZXR3b3JrIGRldmljZSB3aGljaCBjYW4gYmUgdXNl ZCB0byBsb2cKQEAgLTI4OCwyMSArMzA5LDE1IEBACiAJY2FuIGJlIHVzZWQg dG8gc3RvcmUgdGhlIGxvZ2dpbmcgaW5mb3JtYXRpb24gdG8gZGlzay48L3Bh cmE+CiAKICAgICAgIDxwYXJhPjxsaXRlcmFsPmRldmljZSBwZnN5bmM8L2xp dGVyYWw+IGVuYWJsZXMgdGhlIG9wdGlvbmFsCi0JJm1hbi5wZnN5bmMuNDsg cHNldWRvIG5ldHdvcmsgZGV2aWNlIHRoYXQgaXMgdXNlZCB0byBtb25pdG9y Ci0JPHF1b3RlPnN0YXRlIGNoYW5nZXM8L3F1b3RlPi4gIEFzIHRoaXMgaXMg bm90IHBhcnQgb2YgdGhlCi0JbG9hZGFibGUgbW9kdWxlIG9uZSBoYXMgdG8g YnVpbGQgYSBjdXN0b20ga2VybmVsIHRvIHVzZQotCWl0LjwvcGFyYT4KLQot ICAgICAgPHBhcmE+VGhlc2Ugc2V0dGluZ3Mgd2lsbCB0YWtlIGVmZmVjdCBv bmx5IGFmdGVyIHlvdSBoYXZlIGJ1aWx0Ci0JYW5kIGluc3RhbGxlZCBhIGtl cm5lbCB3aXRoIHRoZW0gc2V0LjwvcGFyYT4KKwkmbWFuLnBmc3luYy40OyBw c2V1ZG8tbmV0d29yayBkZXZpY2UgdGhhdCBpcyB1c2VkIHRvIG1vbml0b3IK Kwk8cXVvdGU+c3RhdGUgY2hhbmdlczwvcXVvdGU+LjwvcGFyYT4KICAgICA8 L3NlY3QyPgogCiAgICAgPHNlY3QyPgogICAgICAgPHRpdGxlPkF2YWlsYWJs ZSByYy5jb25mIE9wdGlvbnM8L3RpdGxlPgogCi0gICAgICA8cGFyYT5Zb3Ug bmVlZCB0aGUgZm9sbG93aW5nIHN0YXRlbWVudHMgaW4KLQk8ZmlsZW5hbWU+ L2V0Yy9yYy5jb25mPC9maWxlbmFtZT4gdG8gYWN0aXZhdGUgUEYgYXQgYm9v dAotCXRpbWU6PC9wYXJhPgorICAgICAgPHBhcmE+VGhlIGZvbGxvd2luZyAm bWFuLnJjLmNvbmYuNTsgc3RhdGVtZW50cyBjb25maWd1cmUKKwk8YWNyb255 bT5QRjwvYWNyb255bT4gYW5kICZtYW4ucGZsb2cuNDsgYXQgYm9vdDo8L3Bh cmE+CiAKICAgICAgIDxwcm9ncmFtbGlzdGluZz5wZl9lbmFibGU9IllFUyIg ICAgICAgICAgICAgICAgICMgRW5hYmxlIFBGIChsb2FkIG1vZHVsZSBpZiBy ZXF1aXJlZCkKIHBmX3J1bGVzPSIvZXRjL3BmLmNvbmYiICAgICAgICAgIyBy dWxlcyBkZWZpbml0aW9uIGZpbGUgZm9yIHBmCkBAIC0zMTIsMjIgKzMyNywx MTAgQEAKIHBmbG9nX2ZsYWdzPSIiICAgICAgICAgICAgICAgICAgIyBhZGRp dGlvbmFsIGZsYWdzIGZvciBwZmxvZ2Qgc3RhcnR1cDwvcHJvZ3JhbWxpc3Rp bmc+CiAKICAgICAgIDxwYXJhPklmIHlvdSBoYXZlIGEgTEFOIGJlaGluZCB0 aGlzIGZpcmV3YWxsIGFuZCBoYXZlIHRvIGZvcndhcmQKLQlwYWNrZXRzIGZv ciB0aGUgY29tcHV0ZXJzIGluIHRoZSBMQU4gb3Igd2FudCB0byBkbyBOQVQs IHlvdQotCWhhdmUgdG8gZW5hYmxlIHRoZSBmb2xsb3dpbmcgb3B0aW9uIGFz IHdlbGw6PC9wYXJhPgorCXBhY2tldHMgZm9yIHRoZSBjb21wdXRlcnMgb24g dGhlIExBTiBvciB3YW50IHRvIGRvIE5BVCwgeW91CisJd2lsbCBuZWVkIHRo ZSBmb2xsb3dpbmcgb3B0aW9uIGFzIHdlbGw6PC9wYXJhPgogCiAgICAgICA8 cHJvZ3JhbWxpc3Rpbmc+Z2F0ZXdheV9lbmFibGU9IllFUyIgICAgICAgICAg ICAjIEVuYWJsZSBhcyBMQU4gZ2F0ZXdheTwvcHJvZ3JhbWxpc3Rpbmc+CiAg ICAgPC9zZWN0Mj4KIAogICAgIDxzZWN0Mj4KKyAgICAgIDx0aXRsZT5DcmVh dGluZyBGaWx0ZXJpbmcgUnVsZXM8L3RpdGxlPgorCisgICAgICA8cGFyYT48 YWNyb255bT5QRjwvYWNyb255bT4gcmVhZHMgaXRzIGNvbmZpZ3VyYXRpb24g cnVsZXMgZnJvbSAKKyAgICAgICAgJm1hbi5wZi5jb25mLjU7ICg8ZmlsZW5h bWU+L2V0Yy9wZi5jb25mPC9maWxlbmFtZT4gYnkgCisgICAgICAgIGRlZmF1 bHQpIGFuZCBpdCBtb2RpZmllcywgZHJvcHMsIG9yIHBhc3NlcyBwYWNrZXRz IGFjY29yZGluZyB0byAKKyAgICAgICAgdGhlIHJ1bGVzIG9yIGRlZmluaXRp b25zIHNwZWNpZmllZCB0aGVyZS4gIFRoZSAmb3M7IAorICAgICAgICBpbnN0 YWxsYXRpb24gaW5jbHVkZXMgc2V2ZXJhbCBzYW1wbGUgZmlsZXMgbG9jYXRl ZCBpbiAKKyAgICAgICAgPGZpbGVuYW1lPi91c3Ivc2hhcmUvZXhhbXBsZXMv cGYvPC9maWxlbmFtZT4uICBQbGVhc2UgcmVmZXIgdG8gCisgICAgICAgIHRo ZSA8dWxpbmsgdXJsPSJodHRwOi8vd3d3Lm9wZW5ic2Qub3JnL2ZhcS9wZi8i PlBGIEZBUTwvdWxpbms+IAorICAgICAgICBmb3IgY29tcGxldGUgY292ZXJh Z2Ugb2YgPGFjcm9ueW0+UEY8L2Fjcm9ueW0+IHJ1bGVzZXRzLjwvcGFyYT4K KworICAgICAgPHdhcm5pbmc+CisJPHBhcmE+V2hlbiBicm93c2luZyB0aGUg PHVsaW5rIHVybD0iaHR0cDovL3d3dy5vcGVuYnNkLm9yZy9mYXEvcGYvIj5Q RiBGQVE8L3VsaW5rPiwgCisgICAgICAgICAgcGxlYXNlIGtlZXAgaW4gbWlu ZCB0aGF0IGRpZmZlcmVudCB2ZXJzaW9ucyBvZiAmb3M7IGNvbnRhaW4gCisg ICAgICAgICAgZGlmZmVyZW50IHZlcnNpb25zIG9mIFBGOjwvcGFyYT4KKwor ICAgICAgICA8aXRlbWl6ZWRsaXN0PgorICAgICAgICAgIDxsaXN0aXRlbT4K KyAgICAgICAgICAgIDxzaW1wYXJhPiZvczsgNS54IC0gPGFjcm9ueW0+UEY8 L2Fjcm9ueW0+IGlzIGF0IE9wZW5CU0QgMy41PC9zaW1wYXJhPgorICAgICAg ICAgICAgPC9saXN0aXRlbT4KKworICAgICAgICAgIDxsaXN0aXRlbT4KKyAg ICAgICAgICAgIDxzaW1wYXJhPiZvczsgNi54IC0gPGFjcm9ueW0+UEY8L2Fj cm9ueW0+IGlzIGF0IE9wZW5CU0QgMy43PC9zaW1wYXJhPgorICAgICAgICAg ICAgPC9saXN0aXRlbT4KKworICAgICAgICAgIDxsaXN0aXRlbT4KKyAgICAg ICAgICAgIDxzaW1wYXJhPiZvczsgNy54IC0gPGFjcm9ueW0+UEY8L2Fjcm9u eW0+IGlzIGF0IE9wZW5CU0QgNC4xPC9zaW1wYXJhPgorICAgICAgICAgICAg PC9saXN0aXRlbT4KKyAgICAgICAgPC9pdGVtaXplZGxpc3Q+CisgICAgICA8 L3dhcm5pbmc+CisKKyAgICAgIDxwYXJhPlRoZSAmYS5wZjsgaXMgYSBnb29k IHBsYWNlIHRvIGFzayBxdWVzdGlvbnMgYWJvdXQKKwljb25maWd1cmluZyBh bmQgcnVubmluZyB0aGUgPGFjcm9ueW0+UEY8L2Fjcm9ueW0+CisJZmlyZXdh bGwuICBEbyBub3QgZm9yZ2V0IHRvIGNoZWNrIHRoZSBtYWlsaW5nIGxpc3Qg YXJjaGl2ZXMKKwliZWZvcmUgYXNraW5nIHF1ZXN0aW9ucyE8L3BhcmE+Cisg ICAgPC9zZWN0Mj4KKworICAgIDxzZWN0Mj4KKyAgICAgIDx0aXRsZT5Xb3Jr aW5nIHdpdGggUEY8L3RpdGxlPgorCisgICAgICA8cGFyYT5Vc2UgJm1hbi5w ZmN0bC44OyB0byBjb250cm9sIDxhY3JvbnltPlBGPC9hY3JvbnltPi4gIEJl bG93IAorICAgICAgICBhcmUgc29tZSB1c2VmdWwgY29tbWFuZHMgKGJlIHN1 cmUgdG8gcmV2aWV3IHRoZSAmbWFuLnBmY3RsLjg7IAorICAgICAgICBtYW4g cGFnZSBmb3IgYWxsIGF2YWlsYWJsZSBvcHRpb25zKToKKwk8L3BhcmE+CisK KyAgICAgICAgPGluZm9ybWFsdGFibGUgZnJhbWU9Im5vbmUiIHBnd2lkZT0i MSI+CisgICAgICAgICAgPHRncm91cCBjb2xzPSIyIj4KKyAgICAgICAgICAg IDx0aGVhZD4KKyAgICAgICAgICAgICAgPHJvdz4KKyAgICAgICAgICAgICAg ICA8ZW50cnk+Q29tbWFuZDwvZW50cnk+CisgICAgICAgICAgICAgICAgPGVu dHJ5PlB1cnBvc2U8L2VudHJ5PgorICAgICAgICAgICAgICA8L3Jvdz4KKyAg ICAgICAgICAgIDwvdGhlYWQ+CisKKyAgICAgICAgICAgIDx0Ym9keT4KKyAg ICAgICAgICAgICAgPHJvdz4KKyAgICAgICAgICAgICAgICA8ZW50cnk+PGNv bW1hbmQ+cGZjdGwgLWU8L2NvbW1hbmQ+PC9lbnRyeT4KKyAgICAgICAgICAg ICAgICA8ZW50cnk+RW5hYmxlIFBGPC9lbnRyeT4KKyAgICAgICAgICAgICAg PC9yb3c+CisKKyAgICAgICAgICAgICAgPHJvdz4KKyAgICAgICAgICAgICAg ICA8ZW50cnk+PGNvbW1hbmQ+cGZjdGwgLWQ8L2NvbW1hbmQ+PC9lbnRyeT4K KyAgICAgICAgICAgICAgICA8ZW50cnk+RGlzYWJsZSBQRjwvZW50cnk+Cisg ICAgICAgICAgICAgIDwvcm93PgorCisgICAgICAgICAgICAgIDxyb3c+Cisg ICAgICAgICAgICAgICAgPGVudHJ5Pjxjb21tYW5kPnBmY3RsIC1GIGFsbCAt ZiAvZXRjL3BmLmNvbmY8L2NvbW1hbmQ+PC9lbnRyeT4KKyAgICAgICAgICAg ICAgICA8ZW50cnk+Rmx1c2ggYWxsIHJ1bGVzIChuYXQsIGZpbHRlciwgc3Rh dGUsIHRhYmxlLCBldGMuKSBhbmQgcmVsb2FkIGZyb20gdGhlIGZpbGUgPGZp bGVuYW1lPi9ldGMvcGYuY29uZjwvZmlsZW5hbWU+PC9lbnRyeT4KKyAgICAg ICAgICAgICAgPC9yb3c+CisKKyAgICAgICAgICAgICAgPHJvdz4KKyAgICAg ICAgICAgICAgICA8ZW50cnk+PGNvbW1hbmQ+cGZjdGwgLXMgWyBydWxlcyB8 IG5hdCB8IHN0YXRlIF08L2NvbW1hbmQ+PC9lbnRyeT4KKyAgICAgICAgICAg ICAgICA8ZW50cnk+UmVwb3J0IG9uIHRoZSAgZmlsdGVyIHJ1bGVzLCBuYXQg cnVsZXMsIG9yIHN0YXRlIHRhYmxlPC9lbnRyeT4KKyAgICAgICAgICAgICAg PC9yb3c+CisKKyAgICAgICAgICAgICAgPHJvdz4KKyAgICAgICAgICAgICAg ICA8ZW50cnk+PGNvbW1hbmQ+cGZjdGwgLXZuZiAvZXRjL3BmLmNvbmY8L2Nv bW1hbmQ+PC9lbnRyeT4KKyAgICAgICAgICAgICAgICA8ZW50cnk+Q2hlY2sg PGZpbGVuYW1lPi9ldGMvcGYuY29uZjwvZmlsZW5hbWU+IGZvciBlcnJvcnMs IGJ1dCBkbyBub3QgbG9hZCBydWxlc2V0PC9lbnRyeT4KKyAgICAgICAgICAg ICAgPC9yb3c+CisKKyAgICAgICAgICAgIDwvdGJvZHk+CisgICAgICAgICAg PC90Z3JvdXA+CisgICAgICAgIDwvaW5mb3JtYWx0YWJsZT4KKyAgICA8L3Nl Y3QyPgorCisgICAgPHNlY3QyPgogICAgICAgPHRpdGxlPkVuYWJsaW5nIDxh Y3JvbnltPkFMVFE8L2Fjcm9ueW0+PC90aXRsZT4KIAotICAgICAgPHBhcmE+ PGFjcm9ueW0+QUxUUTwvYWNyb255bT4gaXMgb25seSBhdmFpbGFibGUgYnkg Y29tcGlsaW5nIHRoZQotCW9wdGlvbnMgaW50byB0aGUgJm9zOyBLZXJuZWwu ICA8YWNyb255bT5BTFRRPC9hY3JvbnltPiBpcyBub3QKLQlzdXBwb3J0ZWQg YnkgYWxsIG9mIHRoZSBhdmFpbGFibGUgbmV0d29yayBjYXJkIGRyaXZlcnMu CSBQbGVhc2UKLQlzZWUgdGhlICZtYW4uYWx0cS40OyBtYW51YWwgcGFnZSBm b3IgYSBsaXN0IG9mIGRyaXZlcnMgdGhhdCBhcmUKLQlzdXBwb3J0ZWQgaW4g eW91ciByZWxlYXNlIG9mICZvczsuICBUaGUgZm9sbG93aW5nIG9wdGlvbnMg d2lsbAotCWVuYWJsZSA8YWNyb255bT5BTFRRPC9hY3JvbnltPiBhbmQgYWRk IGFkZGl0aW9uYWwKLQlmdW5jdGlvbmFsaXR5LjwvcGFyYT4KKyAgICAgIDxw YXJhPjxhY3JvbnltPkFMVFE8L2Fjcm9ueW0+IGlzIG9ubHkgYXZhaWxhYmxl IGJ5IGNvbXBpbGluZyAKKyAgICAgICAgc3VwcG9ydCBmb3IgaXQgaW50byB0 aGUgJm9zOyBrZXJuZWwuICA8YWNyb255bT5BTFRRPC9hY3JvbnltPiBpcyAK KyAgICAgICAgbm90IHN1cHBvcnRlZCBieSBhbGwgb2YgdGhlIGF2YWlsYWJs ZSBuZXR3b3JrIGNhcmQgZHJpdmVycy4JIAorICAgICAgICBQbGVhc2Ugc2Vl IHRoZSAmbWFuLmFsdHEuNDsgbWFudWFsIHBhZ2UgZm9yIGEgbGlzdCBvZiBk cml2ZXJzIAorICAgICAgICB0aGF0IGFyZSBzdXBwb3J0ZWQgaW4geW91ciBy ZWxlYXNlIG9mICZvczsuPC9wYXJhPgorCisgICAgICA8cGFyYT5UaGUgZm9s bG93aW5nIGtlcm5lbCBvcHRpb25zIHdpbGwgZW5hYmxlIAorICAgICAgICA8 YWNyb255bT5BTFRRPC9hY3JvbnltPiBhbmQgYWRkIGFkZGl0aW9uYWwgZnVu Y3Rpb25hbGl0eToKKyAgICAgICAgPC9wYXJhPgogCiAgICAgICA8cHJvZ3Jh bWxpc3Rpbmc+b3B0aW9ucyAgICAgICAgIEFMVFEKIG9wdGlvbnMgICAgICAg ICBBTFRRX0NCUSAgICAgICAgIyBDbGFzcyBCYXNlcyBRdWV1aW5nIChDQlEp CkBAIC0zNzMsMzYgKzQ3Niw2IEBACiAJVGhpcyBvcHRpb24gaXMgcmVxdWly ZWQgb24gPGFjcm9ueW0+U01QPC9hY3JvbnltPgogCXN5c3RlbXMuPC9wYXJh PgogICAgIDwvc2VjdDI+Ci0KLSAgICA8c2VjdDI+Ci0gICAgICA8dGl0bGU+ Q3JlYXRpbmcgRmlsdGVyaW5nIFJ1bGVzPC90aXRsZT4KLQotICAgICAgPHBh cmE+VGhlIFBhY2tldCBGaWx0ZXIgcmVhZHMgaXRzIGNvbmZpZ3VyYXRpb24g cnVsZXMgZnJvbSB0aGUKLQkmbWFuLnBmLmNvbmYuNTsgZmlsZSBhbmQgaXQg bW9kaWZpZXMsIGRyb3BzIG9yIHBhc3NlcyBwYWNrZXRzCi0JYWNjb3JkaW5n IHRvIHRoZSBydWxlcyBvciBkZWZpbml0aW9ucyBzcGVjaWZpZWQgdGhlcmUu ICBUaGUgJm9zOwotCWluc3RhbGxhdGlvbiBjb21lcyB3aXRoIGEgZGVmYXVs dAotCTxmaWxlbmFtZT4vZXRjL3BmLmNvbmY8L2ZpbGVuYW1lPiB3aGljaCBj b250YWlucyB1c2VmdWwgZXhhbXBsZXMKLQlhbmQgZXhwbGFuYXRpb25zLjwv cGFyYT4KLQotICAgICAgPHBhcmE+QWx0aG91Z2ggJm9zOyBoYXMgaXRzIG93 biA8ZmlsZW5hbWU+L2V0Yy9wZi5jb25mPC9maWxlbmFtZT4KLQl0aGUgc3lu dGF4IGlzIHRoZSBzYW1lIGFzIG9uZSB1c2VkIGluIE9wZW5CU0QuICBBIGdy ZWF0Ci0JcmVzb3VyY2UgZm9yIGNvbmZpZ3VyaW5nIHRoZSA8YXBwbGljYXRp b24+cGY8L2FwcGxpY2F0aW9uPgotCWZpcmV3YWxsIGhhcyBiZWVuIHdyaXR0 ZW4gYnkgT3BlbkJTRCB0ZWFtIGFuZCBpcyBhdmFpbGFibGUgYXQKLQk8dWxp bmsgdXJsPSJodHRwOi8vd3d3Lm9wZW5ic2Qub3JnL2ZhcS9wZi8iPjwvdWxp bms+LjwvcGFyYT4KLQotICAgICAgPHdhcm5pbmc+Ci0JPHBhcmE+V2hlbiBi cm93c2luZyB0aGUgcGYgdXNlcidzIGd1aWRlLCBwbGVhc2Uga2VlcCBpbiBt aW5kIHRoYXQKLSAgICAgZGlmZmVyZW50IHZlcnNpb25zIG9mICZvczsgY29u dGFpbiBkaWZmZXJlbnQgdmVyc2lvbnMgb2YgcGYuICBUaGUKLSAgICAgPGFw cGxpY2F0aW9uPnBmPC9hcHBsaWNhdGlvbj4gZmlyZXdhbGwgaW4gJm9zOyA1 LlggaXMgYXQgdGhlIGxldmVsCi0gICAgIG9mIE9wZW5CU0QgdmVyc2lvbiAz LjUgYW5kIGluICZvczsgNi5YIGlzIGF0IHRoZSBsZXZlbCBvZiBPcGVuQlNE Ci0gICAgIHZlcnNpb24gMy43LjwvcGFyYT4KLSAgICAgIDwvd2FybmluZz4K LQotICAgICAgPHBhcmE+VGhlICZhLnBmOyBpcyBhIGdvb2QgcGxhY2UgdG8g YXNrIHF1ZXN0aW9ucyBhYm91dAotCWNvbmZpZ3VyaW5nIGFuZCBydW5uaW5n IHRoZSA8YXBwbGljYXRpb24+cGY8L2FwcGxpY2F0aW9uPgotCWZpcmV3YWxs LiAgRG8gbm90IGZvcmdldCB0byBjaGVjayB0aGUgbWFpbGluZyBsaXN0IGFy Y2hpdmVzCi0JYmVmb3JlIGFza2luZyBxdWVzdGlvbnMuPC9wYXJhPgotICAg IDwvc2VjdDI+CiAgIDwvc2VjdDE+CiAKICAgPHNlY3QxIGlkPSJmaXJld2Fs bHMtaXBmIj4K --0-2114477508-1207677800=:37890-- From owner-freebsd-doc@FreeBSD.ORG Tue Apr 8 22:39:16 2008 Return-Path: Delivered-To: freebsd-doc@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 7733D106564A for ; Tue, 8 Apr 2008 22:39:16 +0000 (UTC) (envelope-from itetcu@FreeBSD.org) Received: from it.buh.tecnik93.com (it.buh.tecnik93.com [81.196.204.98]) by mx1.freebsd.org (Postfix) with ESMTP id 2F2F18FC0C for ; Tue, 8 Apr 2008 22:39:16 +0000 (UTC) (envelope-from itetcu@FreeBSD.org) Received: from it.buh.tecnik93.com (localhost [127.0.0.1]) by it.buh.tecnik93.com (Postfix) with ESMTP id D87B32C50D0D; Wed, 9 Apr 2008 01:22:01 +0300 (EEST) Date: Wed, 9 Apr 2008 01:21:55 +0300 From: Ion-Mihai Tetcu To: "Alicia Beuke" Message-ID: <20080409012155.2673801a@it.buh.tecnik93.com> In-Reply-To: References: X-Mailer: Claws Mail 3.3.1 (GTK+ 2.12.9; i386-portbld-freebsd7.0) Mime-Version: 1.0 Content-Type: multipart/signed; boundary="Sig_/XOfZGX.zsatE8e..Jr.2FQv"; protocol="application/pgp-signature"; micalg=PGP-SHA1 Cc: freebsd-doc@FreeBSD.org Subject: Re: Website priase and Link suggestions for FreeBSD X-BeenThere: freebsd-doc@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Documentation project List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Tue, 08 Apr 2008 22:39:16 -0000 --Sig_/XOfZGX.zsatE8e..Jr.2FQv Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: quoted-printable On Mon, 7 Apr 2008 14:53:44 -0400 "Alicia Beuke" wrote: > To Whom It May Concern: >=20 > =20 >=20 > While browsing security appliances websites, I came across your page > about hardware vendors. After spending some time browsing your site, I > was highly impressed with the breadth and depth of high quality of > information and saw it was a really great resource for network and > security professionals. >=20 > Our company, Lancope.com, works in the Network Behavior Analysis > industry. Our product, StealthWatch (a hardware appliance product), > leverages NetFlow or sFlow to gather flow data that helps provide > company's end-to-end network visibility for both network and security > administrators. >=20 > Since you list links to other companies similar to ours, I am curious if > you feel that a link to Lancope.com would be of help to your website > visitors. If so, I was wondering if you felt that your visitors would > benefit from adding a link to the Lancope.com website on this page: > http://www.freebsd.org/commercial/hardware.html =20 Does your software run on FreeBSD? Is you appliance based on FreeBSD? What is the connection with FreeBSD? Searching 'FreeBSD' on your site: You searched for : FreeBSD No records found - try again (0.17s) The page you quote above says, at the beginning: "The power, flexibility, and reliability of FreeBSD attract a wide variety of users and vendors. Here you will find vendors offering commercial products and/or services for FreeBSD." Do you offer such services? BTW, also at the beginning is the recommend way to get listed: "If your company supports a FreeBSD-compatible product or service that should be added to this page, please fill out a problem report for category www. Submissions should be in HTML and a medium-sized paragraph in length." HTH, --=20 IOnut - Un^d^dregistered ;) FreeBSD "user" "Intellectual Property" is nowhere near as valuable as "Intellect" FreeBSD committer -> itetcu@FreeBSD.org, PGP Key ID 057E9F8B493A297B --Sig_/XOfZGX.zsatE8e..Jr.2FQv Content-Type: application/pgp-signature; name=signature.asc Content-Disposition: attachment; filename=signature.asc -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.8 (FreeBSD) iEYEARECAAYFAkf78AkACgkQBX6fi0k6KXsXuwCg1e02U7hHLpinzX1gcX472WqU LfMAn3fS+mY1XUfgJ4Jl8lVCc5jrNqXI =PtvB -----END PGP SIGNATURE----- --Sig_/XOfZGX.zsatE8e..Jr.2FQv-- From owner-freebsd-doc@FreeBSD.ORG Wed Apr 9 13:59:59 2008 Return-Path: Delivered-To: freebsd-doc@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 8CC3A106566C for ; Wed, 9 Apr 2008 13:59:59 +0000 (UTC) (envelope-from remko@elvandar.org) Received: from websrv01.jr-hosting.nl (websrv01.jr-hosting.nl [78.47.69.233]) by mx1.freebsd.org (Postfix) with ESMTP id 4F5E88FC26 for ; Wed, 9 Apr 2008 13:59:59 +0000 (UTC) (envelope-from remko@elvandar.org) Received: from localhost ([::1] helo=galain.elvandar.org) by websrv01.jr-hosting.nl with esmtpa (Exim 4.69 (FreeBSD)) (envelope-from ) id 1JjaNP-000A4q-Uu; Wed, 09 Apr 2008 15:30:12 +0200 Received: from 194.74.82.3 (SquirrelMail authenticated user remko) by galain.elvandar.org with HTTP; Wed, 9 Apr 2008 15:30:12 +0200 (CEST) Message-ID: <42599.194.74.82.3.1207747812.squirrel@galain.elvandar.org> In-Reply-To: <47FBAEDE.30804@ctors.net> References: <47FBAEDE.30804@ctors.net> Date: Wed, 9 Apr 2008 15:30:12 +0200 (CEST) From: "Remko Lodder" To: "Tom Van Looy" User-Agent: SquirrelMail/1.4.13 MIME-Version: 1.0 Content-Type: text/plain;charset=iso-8859-1 Content-Transfer-Encoding: 8bit X-Priority: 3 (Normal) Importance: Normal Cc: freebsd-doc@freebsd.org Subject: Re: Handbook, wrong path to smb.conf.default X-BeenThere: freebsd-doc@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: remko@elvandar.org List-Id: Documentation project List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Apr 2008 13:59:59 -0000 On Tue, April 8, 2008 7:43 pm, Tom Van Looy wrote: > The handbook mentions a wrong path to smb.conf.default: > > > --- network-samba.html.old 2008-04-08 19:33:24.551066400 +0200 > +++ network-samba.html 2008-04-08 19:35:29.534212800 +0200 > @@ -60,7 +60,7 @@ >

    27.9.2 > Configuration

    > >

    A default Samba configuration file is > installed as -class="FILENAME">/usr/local/share/examples/smb.conf.default. This > file must be > +class="FILENAME">/usr/local/share/examples/samba/smb.conf.default. > This file must be > copied to /usr/local/etc/smb.conf and > customized before class="APPLICATION">Samba can be used.

    > fixed! thanks! -- /"\ Best regards, | remko@FreeBSD.org \ / Remko Lodder | remko@EFnet X http://www.evilcoder.org/ | / \ ASCII Ribbon Campaign | Against HTML Mail and News From owner-freebsd-doc@FreeBSD.ORG Wed Apr 9 17:00:05 2008 Return-Path: Delivered-To: freebsd-doc@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 8A17D1065672; Wed, 9 Apr 2008 17:00:05 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 3881B8FC21; Wed, 9 Apr 2008 17:00:05 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (gnats@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.2/8.14.2) with ESMTP id m39H05HT055095; Wed, 9 Apr 2008 17:00:05 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.2/8.14.1/Submit) id m39H05dg055094; Wed, 9 Apr 2008 17:00:05 GMT (envelope-from gnats) Resent-Date: Wed, 9 Apr 2008 17:00:05 GMT Resent-Message-Id: <200804091700.m39H05dg055094@freefall.freebsd.org> Resent-From: FreeBSD-gnats-submit@FreeBSD.org (GNATS Filer) Resent-To: freebsd-doc@FreeBSD.org Resent-Cc: keramida@FreeBSD.org Resent-Reply-To: FreeBSD-gnats-submit@FreeBSD.org, Giorgos Keramidas Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 18614106564A for ; Wed, 9 Apr 2008 16:57:09 +0000 (UTC) (envelope-from keramida@ceid.upatras.gr) Received: from igloo.linux.gr (igloo.linux.gr [62.1.205.36]) by mx1.freebsd.org (Postfix) with ESMTP id 828CC8FC1D for ; Wed, 9 Apr 2008 16:57:08 +0000 (UTC) (envelope-from keramida@ceid.upatras.gr) Received: from kobe.laptop (vader.bytemobile-rio.ondsl.gr [83.235.57.37]) (authenticated bits=128) by igloo.linux.gr (8.14.2/8.14.2/Debian-3) with ESMTP id m39Gb7Sx015808 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT) for ; Wed, 9 Apr 2008 19:37:33 +0300 Received: from kobe.laptop (kobe.laptop [127.0.0.1]) by kobe.laptop (8.14.2/8.14.2) with ESMTP id m39GapUQ003698 for ; Wed, 9 Apr 2008 19:36:51 +0300 (EEST) (envelope-from keramida@kobe.laptop) Received: (from keramida@localhost) by kobe.laptop (8.14.2/8.14.2/Submit) id m39GanQk003697; Wed, 9 Apr 2008 19:36:49 +0300 (EEST) (envelope-from keramida) Message-Id: <200804091636.m39GanQk003697@kobe.laptop> Date: Wed, 9 Apr 2008 19:36:49 +0300 (EEST) From: Giorgos Keramidas To: FreeBSD-gnats-submit@FreeBSD.org X-Send-Pr-Version: 3.113 X-GNATS-Notify: keramida@FreeBSD.org Cc: Subject: docs/122604: make-localhost is gone, handbook needs to be updated X-BeenThere: freebsd-doc@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Giorgos Keramidas List-Id: Documentation project List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Apr 2008 17:00:05 -0000 >Number: 122604 >Category: docs >Synopsis: make-localhost is gone, handbook needs to be updated >Confidential: no >Severity: non-critical >Priority: low >Responsible: freebsd-doc >State: open >Quarter: >Keywords: >Date-Required: >Class: doc-bug >Submitter-Id: current-users >Arrival-Date: Wed Apr 09 17:00:04 UTC 2008 >Closed-Date: >Last-Modified: >Originator: Giorgos Keramidas >Release: FreeBSD 8.0-CURRENT i386 >Organization: >Environment: System: FreeBSD kobe 8.0-CURRENT FreeBSD 8.0-CURRENT #0: Sun Mar 9 02:16:50 EET 2008 build@kobe:/home/build/obj/home/build/src/sys/KOBE i386 >Description: As noted by Dr Matthew J Seaman in freebsd-questions, the Handbook needs an update now that make-localhost is gone: : Date: Wed, 09 Apr 2008 17:01:31 +0100 : From: Matthew Seaman < m.seaman (at) infracaninophile (dot) co (dot) uk > : Subject: Re: "make-localhost" not there (DNS/named setup on 7.0) : To: Ewald Jenisch < a (at) jenisch (dot) at > : Cc: freebsd-questions at freebsd.org : Message-ID: <47FCE85B.4040505@infracaninophile.co.uk> : : The handbook is slightly out of date and the 'make-localhost' script : is now history. Instead, the system comes with pre-installed : /etc/namedb/master/localhost-forward.db and .../localhost-reverse.db : zone files and the appropriate configuration shown in the example : named.conf file. Basically, ignore the paragraph in the handbook that : says 'run make-localhost' and nowadays base your named.conf on the : sample configuration supplied with the system rather than the examples : shown in the handbook. >How-To-Repeat: >Fix: I'll try to handle this. >Release-Note: >Audit-Trail: >Unformatted: From owner-freebsd-doc@FreeBSD.ORG Wed Apr 9 17:11:52 2008 Return-Path: Delivered-To: freebsd-doc@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 37903106566B; Wed, 9 Apr 2008 17:11:52 +0000 (UTC) (envelope-from keramida@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 0EB908FC16; Wed, 9 Apr 2008 17:11:52 +0000 (UTC) (envelope-from keramida@FreeBSD.org) Received: from freefall.freebsd.org (keramida@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.2/8.14.2) with ESMTP id m39HBpXh055624; Wed, 9 Apr 2008 17:11:51 GMT (envelope-from keramida@freefall.freebsd.org) Received: (from keramida@localhost) by freefall.freebsd.org (8.14.2/8.14.1/Submit) id m39HBp5a055620; Wed, 9 Apr 2008 17:11:51 GMT (envelope-from keramida) Date: Wed, 9 Apr 2008 17:11:51 GMT Message-Id: <200804091711.m39HBp5a055620@freefall.freebsd.org> To: keramida@FreeBSD.org, keramida@FreeBSD.org, freebsd-doc@FreeBSD.org, keramida@FreeBSD.org From: keramida@FreeBSD.org Cc: Subject: Re: docs/122604: make-localhost is gone, handbook needs to be updated X-BeenThere: freebsd-doc@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Documentation project List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Apr 2008 17:11:52 -0000 Synopsis: make-localhost is gone, handbook needs to be updated Responsible-Changed-From-To: freebsd-doc->keramida Responsible-Changed-By: keramida Responsible-Changed-When: Wed Apr 9 17:11:10 UTC 2008 Responsible-Changed-Why: I'll handle this. http://www.freebsd.org/cgi/query-pr.cgi?pr=122604 From owner-freebsd-doc@FreeBSD.ORG Wed Apr 9 19:40:05 2008 Return-Path: Delivered-To: freebsd-doc@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4CA1E1065673 for ; Wed, 9 Apr 2008 19:40:05 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 2522F8FC19 for ; Wed, 9 Apr 2008 19:40:05 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (gnats@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.2/8.14.2) with ESMTP id m39Je5pp067789 for ; Wed, 9 Apr 2008 19:40:05 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.2/8.14.1/Submit) id m39Je4XP067788; Wed, 9 Apr 2008 19:40:04 GMT (envelope-from gnats) Resent-Date: Wed, 9 Apr 2008 19:40:04 GMT Resent-Message-Id: <200804091940.m39Je4XP067788@freefall.freebsd.org> Resent-From: FreeBSD-gnats-submit@FreeBSD.org (GNATS Filer) Resent-To: freebsd-doc@FreeBSD.org Resent-Reply-To: FreeBSD-gnats-submit@FreeBSD.org, Gabor PALI Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B34281065673 for ; Wed, 9 Apr 2008 19:32:59 +0000 (UTC) (envelope-from pali.gabor@googlemail.com) Received: from fk-out-0910.google.com (fk-out-0910.google.com [209.85.128.186]) by mx1.freebsd.org (Postfix) with ESMTP id 40F468FC1D for ; Wed, 9 Apr 2008 19:32:54 +0000 (UTC) (envelope-from pali.gabor@googlemail.com) Received: by fk-out-0910.google.com with SMTP id b27so4106543fka.11 for ; Wed, 09 Apr 2008 12:32:53 -0700 (PDT) Received: by 10.82.154.5 with SMTP id b5mr817615bue.10.1207768006389; Wed, 09 Apr 2008 12:06:46 -0700 (PDT) Received: from pgj@FreeBSD.org ( [80.98.116.90]) by mx.google.com with ESMTPS id 35sm4703109nfu.36.2008.04.09.12.06.44 (version=TLSv1/SSLv3 cipher=OTHER); Wed, 09 Apr 2008 12:06:45 -0700 (PDT) Received: by pgj@FreeBSD.org (sSMTP sendmail emulation); Wed, 9 Apr 2008 21:06:42 +0200 Message-Id: <47fd13c5.2315300a.4b6b.ffffca38@mx.google.com> Date: Wed, 9 Apr 2008 21:06:42 +0200 From: "Gabor PALI" Sender: =?UTF-8?B?UMOBTEkgR8OhYm9yIErDoW5vcw==?= To: FreeBSD-gnats-submit@FreeBSD.org X-Send-Pr-Version: 3.113 Cc: Subject: docs/122608: [PATCH] Typo Fix for Committer's Guide (SGML) X-BeenThere: freebsd-doc@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Gabor PALI List-Id: Documentation project List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Apr 2008 19:40:05 -0000 >Number: 122608 >Category: docs >Synopsis: [PATCH] Typo Fix for Committer's Guide (SGML) >Confidential: no >Severity: non-critical >Priority: low >Responsible: freebsd-doc >State: open >Quarter: >Keywords: >Date-Required: >Class: doc-bug >Submitter-Id: current-users >Arrival-Date: Wed Apr 09 19:40:04 UTC 2008 >Closed-Date: >Last-Modified: >Originator: Gabor PALI >Release: FreeBSD 6.3-STABLE i386 >Organization: >Environment: System: FreeBSD disznohal 6.3-STABLE FreeBSD 6.3-STABLE #4: Fri Apr 4 23:29:43 CEST 2008 dezzy@disznohal:/usr/obj/usr/src/sys/GENERIC_ i386 >Description: Article titled "Commiter's Guide" contains an unnecessary 'X' before "cvs commit" in section "CVS Operations". It has a replaceable tag, so it may have a purpose, but it cannot be found anywhere else in the document. >How-To-Repeat: >Fix: Patch attached with submission follows: --- committers-guide.patch.diff begins here --- --- committers-guide.old/article.sgml 2008-04-09 18:53:54.000000000 +0200 +++ committers-guide/article.sgml 2008-04-09 18:54:11.153029605 +0200 @@ -349,7 +349,7 @@ alias scvs cvs -d user@ncvs.FreeBSD.org:/home/ncvs This way they can do all CVS operations - locally and use Xcvs commit for committing + locally and use cvs commit for committing to the official CVS tree. If you wish to add something which is wholly new (like contrib-ified sources, etc), cvs import should be used. --- committers-guide.patch.diff ends here --- >Release-Note: >Audit-Trail: >Unformatted: From owner-freebsd-doc@FreeBSD.ORG Wed Apr 9 19:44:27 2008 Return-Path: Delivered-To: freebsd-doc@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 5DA531065670 for ; Wed, 9 Apr 2008 19:44:27 +0000 (UTC) (envelope-from federicogalvezdurand@yahoo.com) Received: from web58005.mail.re3.yahoo.com (web58005.mail.re3.yahoo.com [68.142.236.113]) by mx1.freebsd.org (Postfix) with SMTP id E77428FC1B for ; Wed, 9 Apr 2008 19:44:26 +0000 (UTC) (envelope-from federicogalvezdurand@yahoo.com) Received: (qmail 14686 invoked by uid 60001); 9 Apr 2008 19:44:25 -0000 DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com; h=X-YMail-OSG:Received:Date:From:Subject:To:In-Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:Message-ID; b=ZrU2LbA3XzE9rFiWsE6BEQ/bnkBIGUOiBbfLUrmsbM4uLOD7aUH8z+CrlE+eYLj8BLVT0qO41zWABXC9ULhfpvqcjDREkfY6TMGG7R4fkp7z5BD5o1NRNU+xn66YxSf2Ec5US8onOvWgcPQMZk/r46v17tyja/XygjfpZJ06oCA=; X-YMail-OSG: p0zvCq4VM1k3ExpyHPbR.ttDRJbcolwdVKK5JZa_PqisjdrZRe.TI2S1mm0IqHGzy1ucle0qsTKTpwXJCtwh5I0r2YU.PnsbODDke6UyUn9b2xMRPP6PPRiokTq3R6m3do2nLbAEPjdh6CI- Received: from [83.77.240.72] by web58005.mail.re3.yahoo.com via HTTP; Wed, 09 Apr 2008 12:44:25 PDT Date: Wed, 9 Apr 2008 12:44:25 -0700 (PDT) From: Federico Galvez-Durand To: FreeBSD-gnats-submit@FreeBSD.org, freebsd-doc@FreeBSD.org, Remko In-Reply-To: <200803241540.m2OFe3Qq016618@freefall.freebsd.org> MIME-Version: 1.0 Content-Type: text/plain; charset=iso-8859-1 Content-Transfer-Encoding: 8bit Message-ID: <743130.13683.qm@web58005.mail.re3.yahoo.com> Cc: Subject: Re: docs/122052: minor update on handbook section 20.7.1 X-BeenThere: freebsd-doc@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Documentation project List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Apr 2008 19:44:27 -0000 the new PNG files are here: http://del.ufrj.br/~federico.besnard/test/vinum_png.tgz Fico. __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From owner-freebsd-doc@FreeBSD.ORG Wed Apr 9 19:50:05 2008 Return-Path: Delivered-To: freebsd-doc@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C50061065670 for ; Wed, 9 Apr 2008 19:50:05 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 99EB68FC13 for ; Wed, 9 Apr 2008 19:50:05 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (gnats@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.2/8.14.2) with ESMTP id m39Jo55n068777 for ; Wed, 9 Apr 2008 19:50:05 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.2/8.14.1/Submit) id m39Jo5Xx068776; Wed, 9 Apr 2008 19:50:05 GMT (envelope-from gnats) Date: Wed, 9 Apr 2008 19:50:05 GMT Message-Id: <200804091950.m39Jo5Xx068776@freefall.freebsd.org> To: freebsd-doc@FreeBSD.org From: Federico Galvez-Durand Cc: Subject: Re: docs/122052: minor update on handbook section 20.7.1 X-BeenThere: freebsd-doc@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Federico Galvez-Durand List-Id: Documentation project List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Wed, 09 Apr 2008 19:50:05 -0000 The following reply was made to PR docs/122052; it has been noted by GNATS. From: Federico Galvez-Durand To: FreeBSD-gnats-submit@FreeBSD.org, freebsd-doc@FreeBSD.org, Remko Cc: Subject: Re: docs/122052: minor update on handbook section 20.7.1 Date: Wed, 9 Apr 2008 12:44:25 -0700 (PDT) the new PNG files are here: http://del.ufrj.br/~federico.besnard/test/vinum_png.tgz Fico. __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From owner-freebsd-doc@FreeBSD.ORG Thu Apr 10 17:30:28 2008 Return-Path: Delivered-To: freebsd-doc@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id BE4DB106564A; Thu, 10 Apr 2008 17:30:28 +0000 (UTC) (envelope-from remko@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 94ADB8FC1C; Thu, 10 Apr 2008 17:30:28 +0000 (UTC) (envelope-from remko@FreeBSD.org) Received: from freefall.freebsd.org (remko@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.2/8.14.2) with ESMTP id m3AHUSZ8008630; Thu, 10 Apr 2008 17:30:28 GMT (envelope-from remko@freefall.freebsd.org) Received: (from remko@localhost) by freefall.freebsd.org (8.14.2/8.14.1/Submit) id m3AHUSWE008626; Thu, 10 Apr 2008 17:30:28 GMT (envelope-from remko) Date: Thu, 10 Apr 2008 17:30:28 GMT Message-Id: <200804101730.m3AHUSWE008626@freefall.freebsd.org> To: pgj@FreeBSD.org, remko@FreeBSD.org, freebsd-doc@FreeBSD.org From: remko@FreeBSD.org Cc: Subject: Re: docs/122608: [PATCH] Typo Fix for Committer's Guide (SGML) X-BeenThere: freebsd-doc@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Documentation project List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 10 Apr 2008 17:30:28 -0000 Synopsis: [PATCH] Typo Fix for Committer's Guide (SGML) State-Changed-From-To: open->closed State-Changed-By: remko State-Changed-When: Thu Apr 10 17:30:28 UTC 2008 State-Changed-Why: I dont think this is needed, the X refers to 'Scvs, Dcvs, Pcvs' or rather Source CVS tree, Doc/WWW CVS tree and Ports CVS tree'. A simple cvs commit doesn't do the job for committers, they have aliases pointing to the proper CVS Tree :-). Thanks for the submission though! It's really appreciated! http://www.freebsd.org/cgi/query-pr.cgi?pr=122608 From owner-freebsd-doc@FreeBSD.ORG Thu Apr 10 19:40:01 2008 Return-Path: Delivered-To: freebsd-doc@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id AFDE91065671 for ; Thu, 10 Apr 2008 19:40:01 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 874848FC26 for ; Thu, 10 Apr 2008 19:40:01 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (gnats@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.2/8.14.2) with ESMTP id m3AJe1nc018211 for ; Thu, 10 Apr 2008 19:40:01 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.2/8.14.1/Submit) id m3AJe1X1018210; Thu, 10 Apr 2008 19:40:01 GMT (envelope-from gnats) Resent-Date: Thu, 10 Apr 2008 19:40:01 GMT Resent-Message-Id: <200804101940.m3AJe1X1018210@freefall.freebsd.org> Resent-From: FreeBSD-gnats-submit@FreeBSD.org (GNATS Filer) Resent-To: freebsd-doc@FreeBSD.org Resent-Reply-To: FreeBSD-gnats-submit@FreeBSD.org, Gabor PALI Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id CE96F106567B for ; Thu, 10 Apr 2008 19:34:27 +0000 (UTC) (envelope-from pali.gabor@googlemail.com) Received: from ug-out-1314.google.com (ug-out-1314.google.com [66.249.92.170]) by mx1.freebsd.org (Postfix) with ESMTP id 5AAFE8FC16 for ; Thu, 10 Apr 2008 19:34:27 +0000 (UTC) (envelope-from pali.gabor@googlemail.com) Received: by ug-out-1314.google.com with SMTP id y2so1601771uge.37 for ; Thu, 10 Apr 2008 12:34:26 -0700 (PDT) Received: by 10.78.100.2 with SMTP id x2mr2048019hub.52.1207856065953; Thu, 10 Apr 2008 12:34:25 -0700 (PDT) Received: from pgj@FreeBSD.org ( [80.98.116.90]) by mx.google.com with ESMTPS id 35sm17449997nfu.36.2008.04.10.12.34.24 (version=TLSv1/SSLv3 cipher=OTHER); Thu, 10 Apr 2008 12:34:25 -0700 (PDT) Received: by pgj@FreeBSD.org (sSMTP sendmail emulation); Thu, 10 Apr 2008 21:34:22 +0200 Message-Id: <47fe6bc1.2315300a.4b6b.ffffd91b@mx.google.com> Date: Thu, 10 Apr 2008 21:34:22 +0200 From: "Gabor PALI" Sender: =?UTF-8?B?UMOBTEkgR8OhYm9yIErDoW5vcw==?= To: FreeBSD-gnats-submit@FreeBSD.org X-Send-Pr-Version: 3.113 Cc: Subject: docs/122635: [patch] Fix for Section 8.6 of Handbook X-BeenThere: freebsd-doc@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Gabor PALI List-Id: Documentation project List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Thu, 10 Apr 2008 19:40:01 -0000 >Number: 122635 >Category: docs >Synopsis: [patch] Fix for Section 8.6 of Handbook >Confidential: no >Severity: non-critical >Priority: low >Responsible: freebsd-doc >State: open >Quarter: >Keywords: >Date-Required: >Class: doc-bug >Submitter-Id: current-users >Arrival-Date: Thu Apr 10 19:40:01 UTC 2008 >Closed-Date: >Last-Modified: >Originator: Gabor PALI >Release: FreeBSD 6.3-STABLE i386 >Organization: >Environment: System: FreeBSD disznohal 6.3-STABLE FreeBSD 6.3-STABLE #4: Fri Apr 4 23:29:43 CEST 2008 dezzy@disznohal:/usr/obj/usr/src/sys/GENERIC_ i386 >Description: In Chapter 8 (Configuring the FreeBSD Kernel), Section 8.6 (If Something Goes Wrong) it says there are five categories of trouble can occur when building custom kernels, but only four of them are listed. One can fix this by updating their count or by mentioning a fifth category. My patch implements the former. >How-To-Repeat: >Fix: I suggest the following patch: --- kernelconfig.patch.diff begins here --- --- chapter.sgml.orig 2008-04-09 18:44:43.000000000 +0200 +++ chapter.sgml 2008-04-10 20:52:19.995020476 +0200 @@ -1368,7 +1368,7 @@ If Something Goes Wrong - There are five categories of trouble that can occur when + There are four categories of trouble that can occur when building a custom kernel. They are: --- kernelconfig.patch.diff ends here --- >Release-Note: >Audit-Trail: >Unformatted: From owner-freebsd-doc@FreeBSD.ORG Sat Apr 12 14:05:43 2008 Return-Path: Delivered-To: doc@FreeBSD.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 29040106564A for ; Sat, 12 Apr 2008 14:05:43 +0000 (UTC) (envelope-from jakub_lach@mailplus.pl) Received: from tur.go2.pl (tur.go2.pl [193.17.41.50]) by mx1.freebsd.org (Postfix) with ESMTP id E29698FC1E for ; Sat, 12 Apr 2008 14:05:42 +0000 (UTC) (envelope-from jakub_lach@mailplus.pl) Received: from rekin18.go2.pl (rekin18.go2.pl [193.17.41.40]) by tur.go2.pl (o2.pl Mailer 2.0.1) with ESMTP id 88DE1234141 for ; Sat, 12 Apr 2008 15:43:44 +0200 (CEST) Received: from o2.pl (unknown [10.0.0.68]) by rekin18.go2.pl (Postfix) with SMTP id 01FAB53D79 for ; Sat, 12 Apr 2008 15:43:42 +0200 (CEST) From: =?UTF-8?Q?Jakub_Lach?= To: doc@FreeBSD.org Mime-Version: 1.0 Message-ID: <32d5ddab.623f6295.4800bc87.f24b8@o2.pl> Date: Sat, 12 Apr 2008 15:43:35 +0200 X-Originator: 81.210.72.12 Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable Cc: Subject: FreeBSD 7.0-RELEASE Hardware Notes - minor error/corruption X-BeenThere: freebsd-doc@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Documentation project List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Apr 2008 14:05:43 -0000 Hello, It=20appears=20to=20me=20that=20"not"=20is=20ommited=20in=20section=20reg= arding=20snd=5Femu10kx sound=20drivers.=20(=20http://www.freebsd.org/releases/7.0R/hardware.html= =20) "[i386,amd64]=20The=20snd=5Femu10kx(4)=20driver=20does=20support=20the=20= following=20sound=20cards=20(although=20they=20are=20named=20similar=20to= =20some=20supported=20ones): Creative=20Sound=20Blaster=20Live!=2024-Bit,=20identified=20by=20as Creative=20Sound=20Blaster=20Audigy=20LS=20/=20ES,=20identified=20by=20as= All=20other=20Creative=20sound=20cards=20with=20-DAT=20chipsets. All=20Creative=20X-Fi=20series=20sound=20cards." Compare=20to=20note=20on=20snd=5Femu10kx(4): http://www.freebsd.org/cgi/man.cgi?query=3Dsnd=5Femu10kx&sektion=3D4&manp= ath=3DFreeBSD+7.0-RELEASE =20=20The=20snd=5Femu10kx=20driver=20does=20not=20support=20the=20followi= ng=20sound=20cards =20=20=20=20=20(=20although=20they=20are=20named=20similar=20to=20some=20= supported=20ones=20): =20=20=20=20=20+o=09=20Creative=20Sound=20Blaster=20Live!=2024-Bit,=20ide= ntified=20by=20FreeBSD=20as =09=20"emu10k1x=20Soundblaster=20Live!=205.1". =20=20=20=20=20+o=09=20Creative=20Sound=20Blaster=20Audigy=20LS=20/=20ES,= =20identified=20by=20FreeBSD=20as =09=20"CA0106-DAT=20Audigy=20LS". =20=20=20=20=20+o=09=20All=20other=20Creative=20sound=20cards=20with=20-D= AT=20chipsets. =20=20=20=20=20+o=09=20All=20Creative=20X-Fi=20series=20sound=20cards. Keep=20up=20the=20good=20work! -=20Jakub=20Lach From owner-freebsd-doc@FreeBSD.ORG Sat Apr 12 20:15:12 2008 Return-Path: Delivered-To: freebsd-doc@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 7AB33106564A for ; Sat, 12 Apr 2008 20:15:12 +0000 (UTC) (envelope-from gnemmi@gmail.com) Received: from an-out-0708.google.com (an-out-0708.google.com [209.85.132.240]) by mx1.freebsd.org (Postfix) with ESMTP id 15E808FC15 for ; Sat, 12 Apr 2008 20:15:11 +0000 (UTC) (envelope-from gnemmi@gmail.com) Received: by an-out-0708.google.com with SMTP id c14so271936anc.13 for ; Sat, 12 Apr 2008 13:15:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:from:to:subject:date:user-agent:mime-version:content-type:content-transfer-encoding:content-disposition:message-id; bh=KyvdO1q5tukrdzN8z83TjNw8iGi45GebtrhsetHtqT8=; b=dYAstG38m5JSYozz5XS/KfCqlDoy4TJlJ8gzbn7silpMLqL8tCRYbWtnRR9EHiudbnbLmNFJHkrEaFrhbYakgN9SboOGKOZFs4JYPjC4CJS9FWKAwT6AKEbDwdXJGiXGr0vzKVV6id0IDqT1zO4XRz3ks4GKU+v5Zk+f2VVpIgY= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=from:to:subject:date:user-agent:mime-version:content-type:content-transfer-encoding:content-disposition:message-id; b=Vh1uyDi3DGal4z8vxKtl7vTo0NQmOhc3EKd+Hg10ow3Gp/zHM8moYv1mgfqgyLI3/6AAeiCJA6YrLFgkKEkVwMj4UK7IsA0L1wgJOEkVgTcyX3OsEVgJVsMr0gBU+lEdlmMDMbvsmKV7+slgEjOLKkeiCrG5UA3S01nsdESunG4= Received: by 10.100.240.17 with SMTP id n17mr8488249anh.49.1208029795118; Sat, 12 Apr 2008 12:49:55 -0700 (PDT) Received: from 87-228-114-200.fibertel.com.ar ( [200.114.228.87]) by mx.google.com with ESMTPS id 9sm9444107wrl.31.2008.04.12.12.49.53 (version=TLSv1/SSLv3 cipher=OTHER); Sat, 12 Apr 2008 12:49:54 -0700 (PDT) From: Gonzalo Nemmi To: freebsd-doc@freebsd.org Date: Sat, 12 Apr 2008 16:49:43 -0300 User-Agent: KMail/1.9.9 MIME-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Content-Disposition: inline Message-Id: <200804121649.43315.gnemmi@gmail.com> Subject: FreeBSD Handbook (4th Edition??) X-BeenThere: freebsd-doc@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Documentation project List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Apr 2008 20:15:12 -0000 Hello to you all ! First post in here =) I was wondering .. are there any plans to publish a FreeBSD Handbook 4th Edition? According to freebsdmall.com, the 3rd Edition is " completely up to date for the latest FreeBSD 4.x and 5.x versions." .. which was enough for 4.x and 5.x releases ... but taking into consideration that FreeBSD has already past 6.x .. 7.x is on the go .. and 8.x is looming on the horizon .. spending $ 59.95 on the 3rd Edition ( both books ) doesn't look like a really enticing offer. Hope to hear from you soon =) Blessings PS: I was in doubt on whether this e-mail should have been sent to the doc or the advocacy list .. I thought doc was the most reasonable option .. If I was wrong, please accept my apologies. --- Gonzalo Nemmi From owner-freebsd-doc@FreeBSD.ORG Sat Apr 12 20:18:56 2008 Return-Path: Delivered-To: freebsd-doc@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id B0BCC106566B for ; Sat, 12 Apr 2008 20:18:56 +0000 (UTC) (envelope-from murray@stokely.org) Received: from ug-out-1314.google.com (ug-out-1314.google.com [66.249.92.169]) by mx1.freebsd.org (Postfix) with ESMTP id 309748FC12 for ; Sat, 12 Apr 2008 20:18:55 +0000 (UTC) (envelope-from murray@stokely.org) Received: by ug-out-1314.google.com with SMTP id y2so167026uge.37 for ; Sat, 12 Apr 2008 13:18:55 -0700 (PDT) Received: by 10.67.116.6 with SMTP id t6mr1274611ugm.76.1208031534399; Sat, 12 Apr 2008 13:18:54 -0700 (PDT) Received: by 10.67.20.17 with HTTP; Sat, 12 Apr 2008 13:18:54 -0700 (PDT) Message-ID: <2a7894eb0804121318o2a7a99d2x606dd4abf2de5867@mail.gmail.com> Date: Sat, 12 Apr 2008 13:18:54 -0700 From: "Murray Stokely" To: "Gonzalo Nemmi" In-Reply-To: <200804121649.43315.gnemmi@gmail.com> MIME-Version: 1.0 Content-Type: text/plain; charset=ISO-8859-1 Content-Transfer-Encoding: 7bit Content-Disposition: inline References: <200804121649.43315.gnemmi@gmail.com> Cc: freebsd-doc@freebsd.org Subject: Re: FreeBSD Handbook (4th Edition??) X-BeenThere: freebsd-doc@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Documentation project List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Apr 2008 20:18:56 -0000 I think to be honest we need some more chapters and new content before we publish a new printed edition. There is certainly more up to date online content but not as much has changed as between previous printed editions. In the mean time, one option is to use one of the print on demand places to get a cheap bound copy of the current online handbook. - Murray On Sat, Apr 12, 2008 at 12:49 PM, Gonzalo Nemmi wrote: > Hello to you all ! > First post in here =) > I was wondering .. are there any plans to publish a FreeBSD Handbook 4th > Edition? > > According to freebsdmall.com, the 3rd Edition is " completely up to date for > the latest FreeBSD 4.x and 5.x versions." .. which was enough for 4.x and 5.x > releases ... but taking into consideration that FreeBSD has already past > 6.x .. 7.x is on the go .. and 8.x is looming on the horizon .. spending $ > 59.95 on the 3rd Edition ( both books ) doesn't look like a really enticing > offer. > > Hope to hear from you soon =) > Blessings > > PS: I was in doubt on whether this e-mail should have been sent to the doc or > the advocacy list .. I thought doc was the most reasonable option .. If I was > wrong, please accept my apologies. > > --- > Gonzalo Nemmi > _______________________________________________ > freebsd-doc@freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-doc > To unsubscribe, send any mail to "freebsd-doc-unsubscribe@freebsd.org" > From owner-freebsd-doc@FreeBSD.ORG Sat Apr 12 21:20:03 2008 Return-Path: Delivered-To: freebsd-doc@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C020A10656AA for ; Sat, 12 Apr 2008 21:20:03 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 74A5D8FC17 for ; Sat, 12 Apr 2008 21:20:03 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (gnats@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.2/8.14.2) with ESMTP id m3CLK3TT046833 for ; Sat, 12 Apr 2008 21:20:03 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.2/8.14.1/Submit) id m3CLK3iC046832; Sat, 12 Apr 2008 21:20:03 GMT (envelope-from gnats) Resent-Date: Sat, 12 Apr 2008 21:20:03 GMT Resent-Message-Id: <200804122120.m3CLK3iC046832@freefall.freebsd.org> Resent-From: FreeBSD-gnats-submit@FreeBSD.org (GNATS Filer) Resent-To: freebsd-doc@FreeBSD.org Resent-Reply-To: FreeBSD-gnats-submit@FreeBSD.org, Gabor PALI Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id D6D59106567F for ; Sat, 12 Apr 2008 21:13:14 +0000 (UTC) (envelope-from pali.gabor@googlemail.com) Received: from fg-out-1718.google.com (fg-out-1718.google.com [72.14.220.158]) by mx1.freebsd.org (Postfix) with ESMTP id 7A6FC8FC43 for ; Sat, 12 Apr 2008 21:13:11 +0000 (UTC) (envelope-from pali.gabor@googlemail.com) Received: by fg-out-1718.google.com with SMTP id 16so1011403fgg.35 for ; Sat, 12 Apr 2008 14:13:08 -0700 (PDT) Received: by 10.86.28.5 with SMTP id b5mr9090709fgb.79.1208034788075; Sat, 12 Apr 2008 14:13:08 -0700 (PDT) Received: from pgj@FreeBSD.org ( [80.98.116.90]) by mx.google.com with ESMTPS id y2sm14755366mug.9.2008.04.12.14.13.05 (version=TLSv1/SSLv3 cipher=OTHER); Sat, 12 Apr 2008 14:13:07 -0700 (PDT) Received: by pgj@FreeBSD.org (sSMTP sendmail emulation); Sat, 12 Apr 2008 23:13:04 +0200 Message-Id: <480125e3.02e2660a.02ad.3019@mx.google.com> Date: Sat, 12 Apr 2008 23:13:04 +0200 From: "Gabor PALI" Sender: =?UTF-8?B?UMOBTEkgR8OhYm9yIErDoW5vcw==?= To: FreeBSD-gnats-submit@FreeBSD.org X-Send-Pr-Version: 3.113 Cc: Subject: docs/122698: [patch] Wrong Closing Tag in FreeBSD Glossary X-BeenThere: freebsd-doc@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: Gabor PALI List-Id: Documentation project List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Apr 2008 21:20:03 -0000 >Number: 122698 >Category: docs >Synopsis: [patch] Wrong Closing Tag in FreeBSD Glossary >Confidential: no >Severity: non-critical >Priority: low >Responsible: freebsd-doc >State: open >Quarter: >Keywords: >Date-Required: >Class: doc-bug >Submitter-Id: current-users >Arrival-Date: Sat Apr 12 21:20:03 UTC 2008 >Closed-Date: >Last-Modified: >Originator: Gabor PALI >Release: FreeBSD 6.3-STABLE i386 >Organization: >Environment: System: FreeBSD disznohal 6.3-STABLE FreeBSD 6.3-STABLE #4: Fri Apr 4 23:29:43 CEST 2008 dezzy@disznohal:/usr/obj/usr/src/sys/GENERIC_ i386 >Description: There is a wrong closing tag for near the definition of ``Request for Comments'' in FreeBSD Glossary (spotted in: en_US). >How-To-Repeat: >Fix: -> --- freebsd-glossary.patch.diff begins here --- Index: freebsd-glossary.sgml =================================================================== RCS file: /doc/en_US.ISO8859-1/share/sgml/glossary/freebsd-glossary.sgml,v retrieving revision 1.28 diff -u -r1.28 freebsd-glossary.sgml --- freebsd-glossary.sgml 12 May 2007 13:12:14 -0000 1.28 +++ freebsd-glossary.sgml 12 Apr 2008 21:02:41 -0000 @@ -1629,7 +1629,7 @@ A set of documents defining Internet standards, protocols, and so forth. See www.rfc-editor.org. - + Also used as a general term when someone has a suggested change and wants feedback. --- freebsd-glossary.patch.diff ends here --- >Release-Note: >Audit-Trail: >Unformatted: From owner-freebsd-doc@FreeBSD.ORG Sat Apr 12 21:41:00 2008 Return-Path: Delivered-To: freebsd-doc@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 827361065673; Sat, 12 Apr 2008 21:41:00 +0000 (UTC) (envelope-from brueffer@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 4447C8FC1D; Sat, 12 Apr 2008 21:41:00 +0000 (UTC) (envelope-from brueffer@FreeBSD.org) Received: from freefall.freebsd.org (brueffer@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.2/8.14.2) with ESMTP id m3CLf0MG049145; Sat, 12 Apr 2008 21:41:00 GMT (envelope-from brueffer@freefall.freebsd.org) Received: (from brueffer@localhost) by freefall.freebsd.org (8.14.2/8.14.1/Submit) id m3CLf0YS049141; Sat, 12 Apr 2008 23:41:00 +0200 (CEST) (envelope-from brueffer) Date: Sat, 12 Apr 2008 23:41:00 +0200 (CEST) Message-Id: <200804122141.m3CLf0YS049141@freefall.freebsd.org> To: pgj@FreeBSD.org, brueffer@FreeBSD.org, freebsd-doc@FreeBSD.org From: brueffer@FreeBSD.org Cc: Subject: Re: docs/122698: [patch] Wrong Closing Tag in FreeBSD Glossary X-BeenThere: freebsd-doc@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Documentation project List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Apr 2008 21:41:00 -0000 Synopsis: [patch] Wrong Closing Tag in FreeBSD Glossary State-Changed-From-To: open->closed State-Changed-By: brueffer State-Changed-When: Sat Apr 12 23:40:42 CEST 2008 State-Changed-Why: Committed, thanks! http://www.freebsd.org/cgi/query-pr.cgi?pr=122698 From owner-freebsd-doc@FreeBSD.ORG Sat Apr 12 21:45:24 2008 Return-Path: Delivered-To: freebsd-doc@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 2C4211065671; Sat, 12 Apr 2008 21:45:24 +0000 (UTC) (envelope-from brueffer@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id E2EDE8FC1A; Sat, 12 Apr 2008 21:45:23 +0000 (UTC) (envelope-from brueffer@FreeBSD.org) Received: from freefall.freebsd.org (brueffer@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.2/8.14.2) with ESMTP id m3CLjNgR049244; Sat, 12 Apr 2008 21:45:23 GMT (envelope-from brueffer@freefall.freebsd.org) Received: (from brueffer@localhost) by freefall.freebsd.org (8.14.2/8.14.1/Submit) id m3CLjN3v049240; Sat, 12 Apr 2008 23:45:23 +0200 (CEST) (envelope-from brueffer) Date: Sat, 12 Apr 2008 23:45:23 +0200 (CEST) Message-Id: <200804122145.m3CLjN3v049240@freefall.freebsd.org> To: pgj@FreeBSD.org, brueffer@FreeBSD.org, freebsd-doc@FreeBSD.org From: brueffer@FreeBSD.org Cc: Subject: Re: docs/122635: [patch] Fix for Section 8.6 of Handbook X-BeenThere: freebsd-doc@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Documentation project List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Apr 2008 21:45:24 -0000 Synopsis: [patch] Fix for Section 8.6 of Handbook State-Changed-From-To: open->closed State-Changed-By: brueffer State-Changed-When: Sat Apr 12 23:45:06 CEST 2008 State-Changed-Why: Committed, thanks! http://www.freebsd.org/cgi/query-pr.cgi?pr=122635 From owner-freebsd-doc@FreeBSD.ORG Sat Apr 12 21:50:03 2008 Return-Path: Delivered-To: freebsd-doc@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id C9FD5106566C for ; Sat, 12 Apr 2008 21:50:03 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id A5BF78FC13 for ; Sat, 12 Apr 2008 21:50:03 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (gnats@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.2/8.14.2) with ESMTP id m3CLo3Tr049366 for ; Sat, 12 Apr 2008 21:50:03 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.2/8.14.1/Submit) id m3CLo3Uo049360; Sat, 12 Apr 2008 21:50:03 GMT (envelope-from gnats) Date: Sat, 12 Apr 2008 21:50:03 GMT Message-Id: <200804122150.m3CLo3Uo049360@freefall.freebsd.org> To: freebsd-doc@FreeBSD.org From: dfilter@FreeBSD.ORG (dfilter service) Cc: Subject: Re: docs/122698: commit references a PR X-BeenThere: freebsd-doc@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: dfilter service List-Id: Documentation project List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Apr 2008 21:50:03 -0000 The following reply was made to PR docs/122698; it has been noted by GNATS. From: dfilter@FreeBSD.ORG (dfilter service) To: bug-followup@FreeBSD.org Cc: Subject: Re: docs/122698: commit references a PR Date: Sat, 12 Apr 2008 21:40:47 +0000 (UTC) brueffer 2008-04-12 21:40:31 UTC FreeBSD doc repository Modified files: en_US.ISO8859-1/share/sgml/glossary freebsd-glossary.sgml Log: Correct closing tag. PR: 122698 Submitted by: pgj Revision Changes Path 1.29 +1 -1 doc/en_US.ISO8859-1/share/sgml/glossary/freebsd-glossary.sgml _______________________________________________ cvs-all@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/cvs-all To unsubscribe, send any mail to "cvs-all-unsubscribe@freebsd.org" From owner-freebsd-doc@FreeBSD.ORG Sat Apr 12 21:50:06 2008 Return-Path: Delivered-To: freebsd-doc@hub.freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 286E810656BF for ; Sat, 12 Apr 2008 21:50:06 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (freefall.freebsd.org [IPv6:2001:4f8:fff6::28]) by mx1.freebsd.org (Postfix) with ESMTP id 03F5B8FC20 for ; Sat, 12 Apr 2008 21:50:06 +0000 (UTC) (envelope-from gnats@FreeBSD.org) Received: from freefall.freebsd.org (gnats@localhost [127.0.0.1]) by freefall.freebsd.org (8.14.2/8.14.2) with ESMTP id m3CLo5F4049392 for ; Sat, 12 Apr 2008 21:50:05 GMT (envelope-from gnats@freefall.freebsd.org) Received: (from gnats@localhost) by freefall.freebsd.org (8.14.2/8.14.1/Submit) id m3CLo5HP049391; Sat, 12 Apr 2008 21:50:05 GMT (envelope-from gnats) Date: Sat, 12 Apr 2008 21:50:05 GMT Message-Id: <200804122150.m3CLo5HP049391@freefall.freebsd.org> To: freebsd-doc@FreeBSD.org From: dfilter@FreeBSD.ORG (dfilter service) Cc: Subject: Re: docs/122635: commit references a PR X-BeenThere: freebsd-doc@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list Reply-To: dfilter service List-Id: Documentation project List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Apr 2008 21:50:06 -0000 The following reply was made to PR docs/122635; it has been noted by GNATS. From: dfilter@FreeBSD.ORG (dfilter service) To: bug-followup@FreeBSD.org Cc: Subject: Re: docs/122635: commit references a PR Date: Sat, 12 Apr 2008 21:45:03 +0000 (UTC) brueffer 2008-04-12 21:44:53 UTC FreeBSD doc repository Modified files: en_US.ISO8859-1/books/handbook/kernelconfig chapter.sgml Log: Only four trouble categories are mentioned, not five. PR: 122635 Submitted by: pgj Revision Changes Path 1.181 +1 -1 doc/en_US.ISO8859-1/books/handbook/kernelconfig/chapter.sgml _______________________________________________ cvs-all@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/cvs-all To unsubscribe, send any mail to "cvs-all-unsubscribe@freebsd.org" From owner-freebsd-doc@FreeBSD.ORG Sat Apr 12 23:24:43 2008 Return-Path: Delivered-To: freebsd-doc@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id 4BF46106567B for ; Sat, 12 Apr 2008 23:24:43 +0000 (UTC) (envelope-from do_not_reply@pentagonlight.com) Received: from treehouse.forest.net (treehouse.forest.net [216.168.37.80]) by mx1.freebsd.org (Postfix) with ESMTP id 10D738FC1F for ; Sat, 12 Apr 2008 23:24:42 +0000 (UTC) (envelope-from do_not_reply@pentagonlight.com) Received: from adsl-75-49-102-1.dsl.pltn13.sbcglobal.net (account marketing@pentalite.com [75.49.102.1] verified) by treehouse.forest.net (CommuniGate Pro SMTP 4.3.9) with ESMTPA id 404138212 for freebsd-doc@freebsd.org; Sat, 12 Apr 2008 15:21:15 -0700 From: "PentagonLight" To: "freebsd-doc" Date: Sat, 12 Apr 2008 15:24:41 -0700 Organization: PentagonLight MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_NextPart_000_0000_01C6527E.AE8904D0" Message-ID: X-Content-Filtered-By: Mailman/MimeDel 2.1.5 Subject: K2 Porcupine Light in Action X-BeenThere: freebsd-doc@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Documentation project List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Apr 2008 23:24:43 -0000 This is a multi-part message in MIME format. ------=_NextPart_000_0000_01C6527E.AE8904D0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: 8bit This is a text part of the message. It is shown for the users of old-style e-mail clients ------=_NextPart_000_0000_01C6527E.AE8904D0-- From owner-freebsd-doc@FreeBSD.ORG Sat Apr 12 23:40:54 2008 Return-Path: Delivered-To: freebsd-doc@freebsd.org Received: from mx1.freebsd.org (mx1.freebsd.org [IPv6:2001:4f8:fff6::34]) by hub.freebsd.org (Postfix) with ESMTP id E0C4C106566C for ; Sat, 12 Apr 2008 23:40:54 +0000 (UTC) (envelope-from glen.j.barber@gmail.com) Received: from yw-out-2324.google.com (yw-out-2324.google.com [74.125.46.30]) by mx1.freebsd.org (Postfix) with ESMTP id 533EF8FC23 for ; Sat, 12 Apr 2008 23:40:54 +0000 (UTC) (envelope-from glen.j.barber@gmail.com) Received: by yw-out-2324.google.com with SMTP id 2so397011ywt.13 for ; Sat, 12 Apr 2008 16:40:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:date:from:to:subject:message-id:references:mime-version:content-type:content-disposition:in-reply-to:list-id:user-agent; bh=OJBosrWB/NOEj4k20C1FpJ9FUqxgUqUK9fVT6gpL5Nk=; b=lysxU1nxHV2mbiZcpxBA8OUoRgElefVthPjVqqh39AJMzip3+QkqbkLt9LJKnM0v+u4o7PnAOmE6w8LYQMcDKx6t4Az/I6cK6k6z4PQKf3q2DyW8Uyc7I+WZpDEa0Fi8D59kMTfr0v0yQW4lSPXBkkvD4ShroQoqCkXzXFSjWXU= DomainKey-Signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=date:from:to:subject:message-id:references:mime-version:content-type:content-disposition:in-reply-to:list-id:user-agent; b=KiyzQX7syEWzQZytFdVWHA2np9UI/5A8zOf2UiXRpmLLr81AGQm2g+GRkX5GfP8llVItTGB6C6nE362v46PUm5miAQkkmfS1MXEVMNDxXYPLWWgAllBDzBeJ4igBff6Wz8u0Di0z3b+IhhsWyu6dZmI0P9uaMoef8Z9tjOp7OP0= Received: by 10.150.154.5 with SMTP id b5mr4790057ybe.207.1208042188175; Sat, 12 Apr 2008 16:16:28 -0700 (PDT) Received: from orion.hexidigital.org ( [24.238.59.126]) by mx.google.com with ESMTPS id 7sm7863773ywo.1.2008.04.12.16.16.26 (version=TLSv1/SSLv3 cipher=OTHER); Sat, 12 Apr 2008 16:16:26 -0700 (PDT) Date: Sat, 12 Apr 2008 19:16:21 -0400 From: Glen Barber To: freebsd-doc@freebsd.org Message-ID: <20080412231621.GA59810@orion.hexidigital.org> References: <200804121649.43315.gnemmi@gmail.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <200804121649.43315.gnemmi@gmail.com> User-Agent: Mutt/1.5.17 (2007-11-01) Subject: Re: FreeBSD Handbook (4th Edition??) X-BeenThere: freebsd-doc@freebsd.org X-Mailman-Version: 2.1.5 Precedence: list List-Id: Documentation project List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Sat, 12 Apr 2008 23:40:55 -0000 Gonzalo Nemmi said: > > According to freebsdmall.com, the 3rd Edition is " completely up to date for > the latest FreeBSD 4.x and 5.x versions." .. which was enough for 4.x and 5.x > releases ... but taking into consideration that FreeBSD has already past > 6.x .. 7.x is on the go .. and 8.x is looming on the horizon .. spending $ > 59.95 on the 3rd Edition ( both books ) doesn't look like a really enticing > offer. On top of whatever 'real' answer you are provided here, may I suggest taking a look at 'Absolute FreeBSD: 2nd Edition' by Michael W. Lucas. It is a good read (though some topics aren't too detailed). It covers 7.0-RELEASE, but I haven't found anything that doesn't also work on 6.3-RELEASE. HTH. -- Glen Barber http://www.dev-urandom.com/