From owner-freebsd-stable@FreeBSD.ORG Sun Dec 22 03:59:25 2013
From: "Teske, Devin"
To: Adam McDougall
Subject: Re: bsdinstall, zfs booting, gpt partition order suitable for volume expansion
Date: Sun, 22 Dec 2013 03:59:20 +0000
Message-ID: <919B8365-6F27-4CFE-9DF8-7D32D805AEA8@fisglobal.com>
In-Reply-To: <52B659E0.8020904@egr.msu.edu>
References: <20131210175323.GB1728@egr.msu.edu> <93C924DB-E760-4830-B5E2-3A20160AD322@fisglobal.com> <2D40298B-39FA-4BA9-9AC2-6006AA0E0C9C@fisglobal.com> <73E28A82-E9FE-4B25-8CE6-8B0543183E7F@fisglobal.com> <20131218135326.GM1728@egr.msu.edu> <1AD35F39-35EB-4AAD-B4B1-AF21B2B6F6BA@fisglobal.com> <20131218163145.GA1630@egr.msu.edu> <52B4C3FE.2050706@egr.msu.edu> <52B659E0.8020904@egr.msu.edu>
Cc: "stable@freebsd.org", Devin Teske, "Teske, Devin"
Reply-To: Devin Teske
List-Id: Production branch of FreeBSD source code

On Dec 21, 2013, at 7:17 PM, Adam McDougall wrote:

> On 12/20/2013 17:26, Adam McDougall wrote:
>> On 12/19/2013 02:19, Teske, Devin wrote:
>>>
>>> On Dec 18, 2013, at 8:31 AM, Adam McDougall wrote:
>>>
>>>> [snip]
>>>> I have posted /tmp/bsdinstall_log at: http://p.bsd-unix.net/ps9qmfqc2
>>>>
>>>
>>> I think this logging stuff I put so much effort into is really paying
>>> dividends. I'm finding it really easy to debug issues that others have
>>> run into.
>>>
>>>> The corresponding procedure:
>>>>
>>>> VirtualBox, created VM with 4 2.0TB virtual hard disks
>>>> Install
>>>> Continue with default keymap
>>>> Hostname: test
>>>> Distribution Select: OK
>>>> Partitioning: ZFS
>>>> Pool Type/Disks: stripe, select ada0-3 and hit OK
>>>> Install
>>>> Last Chance! YES
>>>
>>> I've posted the following commits to 11.0-CURRENT:
>>>
>>> http://svnweb.freebsd.org/base?view=revision&revision=259597
>>> http://svnweb.freebsd.org/base?view=revision&revision=259598
>>>
>>> As soon as a new ISO is rolled, can you give the above another go?
>>> I rolled my own ISO with the above and tested cleanly.
>>
>> I did some testing with 11.0-HEAD-r259612-JPSNAP: a 4-disk raidz and a
>> 4-disk mirror worked, and a 1-3 disk stripe worked, but a 4-disk stripe
>> got "ZFS: i/o error - all block copies unavailable", although where this
>> happens during the loader varies.
>> Sometimes the loader would fault,
>> sometimes it just can't load the kernel, sometimes it prints some of the
>> color text, and sometimes it doesn't even get that far. Might depend on
>> the install? Also, I did not try exhaustive combinations such as 2-3
>> disks in a mirror, 4 in a raidz2, or anything more than 4 disks. I'll
>> try to test a 10 ISO tomorrow if I can, either a fresh JPSNAP or RC3 if
>> it is ready by the time I am, maybe both.
>
> Good news: I believe this was a "hardware" error. VirtualBox (in SATA
> mode along with a virtual CD-ROM) and XenServer 6.0/6.2 appear to make a
> maximum of 3 virtual hard disks visible to the FreeBSD bootloader. This
> is easier to tell when booting from the CD, since you can see it
> enumerate them, but if you are booting from disks, it may not get that
> far. Interestingly, when you tell VirtualBox to use SCSI disks, you max
> out at 4 bootable disks instead of 3. Installation then works on 4 disks
> but not 5 (understandably). Thus the symptoms are consistent, and it is
> not a fault of the installer/installation. I've heard of similar issues
> on real hardware, but since this is a new install, nothing should be lost.
>
> Thanks for making the improvements and bug fixes!
>

Thank you very much for testing! And I'm very happy it was a limitation
of VirtualBox. Imho, the new module is doing a great job in making it
easier to test more combinations and learn these limitations, sharing
with others along the way.

> The below issue stands, but I'd say is not urgent for 10.0.
>

Yeah, I will have to do some testing to see the best way to deal with
that (I agree ideally no export/re-import would be best -- will have to
investigate).
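One direction for that investigation (purely a command sketch, untested against bsdinstall; the pool name, device names, and the availability of the read-only "guid" pool property are all assumed from this thread's log) would be to capture the new pool's numeric GUID before exporting, then re-import by GUID so a stale pool with the same name cannot collide:

```
# Sketch only -- zroot and ada*p3.nop names assumed from this thread.
# Grab the pool's numeric GUID while it is still imported.
guid=$(zpool get guid zroot | awk '$2 == "guid" { print $3 }')
zpool export zroot
gnop destroy ada0p3.nop
gnop destroy ada1p3.nop
gnop destroy ada2p3.nop
# Importing by numeric GUID is unambiguous even if an old "zroot" lingers.
zpool import -o altroot=/mnt "$guid"
```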
-- 
Devin

>>
>> I also found another issue, not very dire: if you install to X number of
>> disks as "zpool", then reinstall on (X-1 or fewer) disks as "zpool", the
>> install fails with: "cannot import 'zroot': more than one matching pool
>> import by numeric ID instead"
>> because it sees both the old and the new zroot (makes sense, since it
>> should not be touching disks we didn't ask about):
>>
>> DEBUG: zfs_create_boot: Temporarily exporting ZFS pool(s)...
>> DEBUG: zfs_create_boot: zpool export "zroot"
>> DEBUG: zfs_create_boot: retval=0
>> DEBUG: zfs_create_boot: gnop destroy "ada0p3.nop"
>> DEBUG: zfs_create_boot: retval=0
>> DEBUG: zfs_create_boot: gnop destroy "ada1p3.nop"
>> DEBUG: zfs_create_boot: retval=0
>> DEBUG: zfs_create_boot: gnop destroy "ada2p3.nop"
>> DEBUG: zfs_create_boot: retval=0
>> DEBUG: zfs_create_boot: Re-importing ZFS pool(s)...
>> DEBUG: zfs_create_boot: zpool import -o altroot="/mnt" "zroot"
>> DEBUG: zfs_create_boot: retval=1
>> cannot import 'zroot': more than one matching pool
>> import by numeric ID instead
>> DEBUG: f_dialog_max_size: dialog --print-maxsize = [MaxSize: 25, 80]
>> DEBUG: f_getvar: var=[height] value=[6] r=0
>> DEBUG: f_getvar: var=[width] value=[54] r=0
>>
>> Full log at: http://p.bsd-unix.net/p2juq9y25
>>
>> Workaround: use a different pool name, or use a shell to manually zpool
>> labelclear the locations with the old zpool label (an advanced user
>> operation).
>>
>> Suggested solution: avoid exporting and importing the pool? I don't
>> think you need to unload gnop; ZFS should be able to find the underlying
>> partition fine on its own at the next boot, and the install would go
>> quicker without the export and import. Or were you doing it for another
>> reason, such as the cache file?
>>
>> Alternative: would it be possible to determine the numeric ID before
>> exporting, so it could then import by that ID? But that would be adding
>> complexity, as opposed to removing complexity by eliminating the
>> export/import if possible.
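For what it's worth, the numeric ID could also be scraped from the listing that `zpool import` prints when run with no pool argument, which shows a "pool:" and an "id:" line for each importable pool. A rough sketch (the listing format is assumed from zpool(8) of that era, and `pool_ids` is a made-up helper name):

```shell
# Sketch: print the numeric "id:" of every exported pool whose name
# matches $1, reading `zpool import` listing output on stdin.
# The "pool:"/"id:" line format is assumed; pool_ids is a made-up name.
pool_ids() {
    awk -v name="$1" '
        $1 == "pool:" { wanted = ($2 == name) }
        $1 == "id:" && wanted { print $2 }
    '
}

# Hypothetical usage:
#   zpool import | pool_ids zroot     # list candidate IDs
#   zpool import -o altroot=/mnt <id> # then import the right one by ID
```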